Article

Asymmetry Considerations in Constructing Control Charts: When Symmetry Is Not the Norm

Department of Industrial Engineering, Faculty of Engineering, Ruppin Academic Center, Decision-Making Research Center, Emek Hefer 4025000, Israel
Symmetry 2024, 16(7), 811; https://doi.org/10.3390/sym16070811
Submission received: 17 May 2024 / Revised: 17 June 2024 / Accepted: 20 June 2024 / Published: 28 June 2024
(This article belongs to the Section Engineering and Materials)

Abstract

Control charts (especially the X̄-chart) are proven and useful tools to keep a process aligned with its design mean. The control charts' limits are placed symmetrically around the process's mean. The assumption of symmetry is justifiable when the measurements can be treated as infinitely precise. Typically, this assumption is warranted, since the measuring resolution is significantly (by orders of magnitude) finer than the standard deviation of the controlled process. However, when the measuring device has a resolution of the same order of magnitude as the standard deviation of the controlled process, the symmetry is no longer justified. In low-resolution measurement process control, symmetry is not the norm, and both control limits should be built asymmetrically. To help remedy this issue, this article explores the asymmetrical nature of low-resolution measurement and suggests new (asymmetric) control limits based on the required false-alarm probabilities. This represents a novel approach to the problem.

1. Introduction

The concept of process control was designed to maintain the performance parameters of a given process at their pre-designed values. A typical example is a production process where a machine is calibrated to a specific mean value. To ensure the uniformity of the products, samples are periodically taken from the process output (e.g., finished or intermediate products) and measured to detect a change in the process’s mean.
The most common control tool is the X̄-chart, which provides a simple device to verify that the process has not deviated from its calibrated value. For each sample, the average is calculated, and if it exceeds certain predetermined values, the process is stopped and recalibrated. A method to set these control values (limits) was proposed by Shewhart in the 1920s; it became the norm in industry and spread across various fields. This method ensures a certain level of "false alarms" (cases of the process stopping unnecessarily), which is acceptable in most industrial settings.
Figure 1 depicts an example of such a chart. The process mean is depicted with a dotted line. The upper control limit (UCL) and the lower control limit (LCL) are in full lines. The round dots represent the samples (average values) taken from the controlled process. As can be observed, the last sample’s value crossed the UCL, and thus, caused the process to stop and be recalibrated. Shewhart’s designed limits are always spaced symmetrically around the mean, as described in Equation (1). This is a result of the basic assumption of normality.
UCL − mean = mean − LCL
The aforementioned X̄-chart has been very popular since its introduction by Shewhart [1]. The mean of the measured quality is typically the key parameter of interest [2,3]. Most studies on control charts implicitly assume that the measurement system is perfect and error-free [4,5]. This assumption enabled Shewhart to suggest control limits that are perfectly symmetrical on both sides of the process mean, thus simplifying the construction of these limits. However, this assumption is not always warranted and may lead to biases and incorrect statistical analyses [6]. The assumption of symmetry may hold when the measurement is (almost) infinitely accurate. Obviously, when continuous quantities are measured, there is always a limit to the resolution and, therefore, a built-in limit on their accuracy (e.g., a real-valued measurand can only be observed to the nearest kth decimal place [7]). However, in most controlled processes, the resolution is fine enough that the measurements can be treated as exact. When this is not the case, the difference can accumulate substantially over batches of measurements [8]. Most notably, the estimation of the mean (µ) may be significantly affected [9]. Findings have shown that in numerous industrial settings where digital instruments are utilized, data collection frequently relies on fairly basic measurement methods. This is critical because today's Industry 4.0 revolution remains highly dependent on these types of sensors [10,11,12], underscoring the urgency of addressing these errors. Additionally, as machine learning techniques become a widely used engineering tool [13], the need to collect large amounts of data from various sensors increases. These large amounts of data are widely used to streamline manufacturing and logistical processes worldwide [14,15,16], further increasing the need for such measurements.
The immense quantity of data generated by these low-cost (and typically low-resolution) sensors makes manual control of the data infeasible and necessitates the implementation of statistical tools to manage the monitored processes.
While it might seem appealing to consider limited resolution as a type of measurement error, they are fundamentally different: resolution problems are dependent on the actual values, whereas measurement errors are not [17,18].
Measuring devices, whether digital or analog, have discrete scales ( h ). For example, some digital thermometer scales measure in steps of 0.1 °C. An analog thermometer scale has a similar limitation because the results are rounded off to the nearest mark. Furthermore, measurements are often stored in a way that imposes a further resolution [19]. A handy rule of thumb was proposed by Gertsbakh [20], who suggested that when the ratio of the standard deviation ( σ ) of the measured parameter to h (denoted by δ ) is greater than 2, the implicit assumption that the measuring device is exact is justified. The higher the value of δ , the more justified this assumption.
While the symmetry assumption is typically justifiable, when the measurement is of low resolution, this assumption fails and may lead to undesired results. An example of such a case is depicted in Appendix A.
The remainder of the paper is organized as follows: Section 2 presents a literature review of the current research on this topic. Section 3 describes the notations used in this paper. Section 4 describes the basic assumptions of the problem. Section 5 provides the statistical analysis of the control measurements. Section 6 deals with the asymmetrical considerations of setting the control limits. Section 7 describes the proposed method for setting these limits. The article concludes in Section 8, which provides the conclusion and suggests further research.

2. Literature Review

To the best of our knowledge, the first study to address the effects of measurement resolution was published by Sheppard [21] in 1897. In 1957, a practical guideline was proposed to determine when it is reasonable to neglect this issue [22]. Dempster and Rubin investigated the impact on linear regression models by exploring scenarios that either ignored or acknowledged this effect and by applying Sheppard's correction and adjusting the covariance matrix [23]. Later studies [24,25] confirmed that the effects of rounding or grouping were minimal when the interval width (h) was small. A 1996 study delved into grouped measurements, discussed the challenges of expensive or impossible measurements, and proposed control chart techniques for economically grouped data [26]. These authors showed that using the midpoint of each group as the rounded value could detrimentally affect the accuracy of control charts designed for precise measurements. The following year, the impact of rounded data on R-charts was assessed in a work that found that rounding data does not negatively influence the control limit factors for these charts [27]. Concurrently, Bryce et al. evaluated the accuracy of four standard deviation estimators, namely the sample standard deviation, the mean moving range, the median moving range, and the Ciminera–Tukey measure, using computer simulations to test their effectiveness in monitoring individual observations and identifying special causes [28]. Another work examined the specific effects of rounded data on R-charts, defined the degree of precision as r = w/σ (where w is the interval width of the rounded x, and σ is the standard deviation of x), and suggested guidelines for the precision needed in data recording [29]. In 2001, McNames et al. examined the effects of mean quantization on Q control charts and explored how rounded data influence the variance, regression parameters, and distortion compared to unrounded data.
They suggested a method to assess this distortion based on the length of the rounding interval [30,31]. In 2010, Meneces et al. explored the effectiveness of exponentially weighted moving average control charts using a Monte Carlo method to assess their detection capabilities related to measurement resolution [32].
Generally, both the variance and the mean are necessary for analysis, but in certain scenarios, the mean is presumed known from the production process setting [3]. Conversely, the variance must be estimated from the data, and since a straightforward calculation disregarding rounding effects is insufficient, various methods have been proposed to address this problem. Sheppard suggested a correction approach, which later faced criticism for its inadequacy in certain cases [6,21]. Building on the work of Schader and Schmid, Gertsbakh introduced a maximum-likelihood approach to estimate both the mean and the standard deviation [20,33]. In 2012, Benson-Karhi et al. proposed using the method of moments to calculate the variance when the mean is known and demonstrated its superiority over the maximum-likelihood method [34]. Lee and Vardeman expanded on this by discussing variance estimation in normally distributed processes when neither the mean nor the variance is established. They employed a parametric likelihood-based technique, the maximum likelihood estimator, and compared it to traditional methods [35,36]. They also showed it could be applied to developing reliable confidence procedures for estimating variance components [37]. Benson-Karhi et al. further refined their technique by integrating the method of moments with calibration methods, using the MSE (mean squared error) as a criterion and simulations to create a database. They showed that their method outperformed traditional approaches, Sheppard's correction, and the Vardeman and Schader method [6,33,38].
Given that most studies have focused on estimating the mean and the variance for parameters that are assumed to be normally distributed, it makes sense to question the appropriateness of this approach. Box and Luceno defended this methodology by noting that the random error, which averages the various component errors, aligns well with the central limit theorem. This principle supports the assumption that random errors will likely follow a normal distribution rather than any other [39].
Although there are numerous methodologies for determining the mean and the variance in data subjected to rounding, there has been a noticeable lack of exploration into how rounding affects control charts [34]. Since the control charts’ first introduction, they have become a fundamental tool in statistical process control. They are heralded for their efficacy, which explains their widespread application [40,41,42]. The renowned X ¯ -chart in particular is extensively utilized for monitoring the mean [43]. This highlights a significant gap in current research: studying the impact of rounding on the reliability and performance of these essential statistical tools.
As this section clearly shows, significant work has recently been accomplished on evaluating the process parameters (mean and standard deviation) when the measurement is of low resolution. Moreover, proper control is essential and depends on correctly measuring the controlled variables. Therefore, this article attempts to "connect the dots": it uses the previous work on assessing the process parameters to develop a practical method for setting the control limits even when the measurements are of low resolution.

3. Notations

To improve the readability of the paper, the important notations are depicted in Table 1.

4. Basic Assumptions

This study is based on several underlying assumptions. For clarity, they are listed below.
(1)
Normal distribution—The value of the measurand [34,38] (denoted X) is normally distributed: X ~ N(μ, σ²). This assumption was presented by Shewhart and is common in all the following articles.
(2)
Resolution impact—The observed value (denoted Y ) is rounded off to the nearest observable value. For example, if the resolution is 0.1 °C, a value of 20.52 °C will be rounded off to 20.5 °C and a value of 20.57 °C will be rounded off to 20.6 °C. This assumption is the core of this research and was not introduced in previous research in this field.
(3)
Initial parameter measuring—The "exact" values of the mean and the variance of the measurand can be measured "accurately" (i.e., with high enough resolution). This can be achieved either by using exact measuring devices (applicable during the initial setting of the control limits, but not during the routine running of the production process) or by using the methods suggested by Gertsbakh and Benson-Karhi [20,38]. This assumption is well established in recent research.
(4)
Control limits—The X̄-charts are set to have upper and lower control limits. These limits are constructed symmetrically around the mean, as depicted in Equation (2).
UCL, LCL = μ ± kσ/√n
Thus, when either X ¯ > U C L or X ¯ < L C L , the process is temporarily halted, and measures are taken to re-center it. This assumption was presented by Shewhart and is common in all the following articles.
Typically, the value of k is set to 3. This yields a rate of false alarms ( α ) of 0.0027 [44]. Here we assumed that this is the desired rate.
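The stated false-alarm rate and the corresponding run length between false alarms follow directly from the standard normal CDF; a minimal sketch (plain Python, standard library only):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

k = 3
alpha = 2 * (1 - phi(k))   # two-sided false-alarm probability per sample
arl = 1 / alpha            # average run length between false alarms
print(f"alpha = {alpha:.4f}, ARL = {arl:.0f}")  # alpha = 0.0027, ARL = 370
```

This is the origin of the classical "one false alarm per ~370 samples" figure used throughout the rest of the paper.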

5. Measurement Distribution

This section is divided into three sub-sections. The first describes the (uncommon) case in which the process's mean is aligned with the measuring tool's resolution (hence the symmetry) and depicts the distribution of the measured values in this case. The second sub-section expands the analysis to the general case and describes the asymmetrical nature of the measured values. The last sub-section builds on the distribution developed in the second sub-section and depicts the distribution of the average value of the sample.

5.1. Symmetrical Process

The basic assumption is that the control limits are set symmetrically around the mean; i.e., the mean is aligned with the resolution. Since the value of Y can only take discrete values, its distribution function is depicted in Equation (3).
P(Y = yh) = Φ((yh + h/2 − μ)/σ) − Φ((yh − h/2 − μ)/σ)   for any integer y
For simplicity’s sake and without loss of generality, in the remainder of this article, it is assumed that σ = 1 . That is, all values are measured in units of σ ; hence Equation (4):
P(Y = yh) = Φ((y + 0.5)h − μ) − Φ((y − 0.5)h − μ)   for any integer y
where both h and μ are measured in units of σ .
Although Y can take infinitely many values, Gertsbakh [20] and Benson-Karhi et al. [34] pointed out that, practically, only a finite number are significant. They claimed that five values are enough, as expressed in Equation (5).
Y ∈ {μ − 2h, μ − h, μ, μ + h, μ + 2h}
The distribution is depicted in Figure 2, where the shaded area (which spreads between μ − 1.5h and μ − 0.5h) represents all the values that will be measured as μ − h and, therefore, represents the probability P(Y = μ − h).
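Equations (3)–(5) are easy to check numerically. The sketch below (σ = 1, as assumed above; the resolution h = 1 is an illustrative choice, not a value from the text) computes the probabilities of the five measurable values around an aligned mean and confirms that they carry almost all of the probability mass:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def rounded_pmf(h, offsets):
    """Equation (4) for the aligned case: P(Y = mu + i*h), with sigma = 1."""
    return {i: phi((i + 0.5) * h) - phi((i - 0.5) * h) for i in offsets}

p = rounded_pmf(h=1.0, offsets=range(-2, 3))  # the five values of Equation (5)
assert abs(p[1] - p[-1]) < 1e-12              # symmetric around the aligned mean
print(f"total mass of the five values: {sum(p.values()):.4f}")
```

With h = 1 (a fairly coarse resolution, δ = 1), the five values already capture roughly 99% of the mass, which is why the truncation is harmless in practice.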

5.2. Asymmetrical Process

Typically, since the scale of the measuring device is somewhat arbitrary, there is no reason why the mean should be aligned with the scale. The general case is depicted in Figure 3.
The asymmetry can be measured by η = μ − h₀; thus, η ∈ [0, 0.5h].
The values of Y are depicted in Equation (6).
Y ∈ {h₀ − 2h, h₀ − h, h₀, h₀ + h, h₀ + 2h}
where h₀ is the closest measurable value to μ and, therefore, the mode of the distribution of Y.
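The effect of a nonzero η can be seen concretely by repeating the earlier probability calculation with the mean offset from the scale. Here η = 0.2 (in units of σ, with h = 1) is an arbitrary illustrative value, not one taken from the text:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

h, eta = 1.0, 0.2   # illustrative resolution and asymmetry (mu = h0 + eta, sigma = 1)
# P(Y = h0 + i*h): probability of the bin of width h centred on h0 + i*h
p = {i: phi((i + 0.5) * h - eta) - phi((i - 0.5) * h - eta) for i in range(-3, 4)}
print(f"P(Y = h0 + h) = {p[1]:.4f}, P(Y = h0 - h) = {p[-1]:.4f}")
```

Once η > 0, P(Y = h₀ + h) exceeds P(Y = h₀ − h): the distribution of the observed values is skewed toward the side of the scale on which the true mean lies.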

5.3. Average Sample Value Distribution

Despite Gertsbakh's claim that five values are enough, in the remainder of this article a seven-value approach is used in order to handle coarser resolutions and stronger asymmetry, as depicted in Equation (7).
Y ∈ {h₀ − 3h, h₀ − 2h, h₀ − h, h₀, h₀ + h, h₀ + 2h, h₀ + 3h}
As described in Equation (2), the aim of the SPC is to monitor the average measured value (X̄). However, because X is not measured, the value of X̄ remains unknown. The only calculable average is Ȳ.
From Equation (7), assuming a sample size of n, the values that Y ¯ can take are depicted in (8).
Ȳ ∈ {h₀ − 3h, h₀ − (3 − 1/n)h, h₀ − (3 − 2/n)h, …, h₀, h₀ + h/n, …, h₀ + 3h}
Before calculating the distribution function of Y ¯ , we first need some auxiliary calculations. Based on the second assumption, we derive Equation (9):
P(Y = yh) = Φ((y + 0.5)h − μ) − Φ((y − 0.5)h − μ)
Specifically, let us define p i as depicted in Equation (10).
pᵢ = P(Y = h₀ + ih),   i ∈ {−3, −2, −1, 0, 1, 2, 3}
Following Equation (7), in a sample of n measurements, each measurement takes one of these values. For each value of Y, we define an auxiliary variable counting the number of measurements equal to that value: nᵢ denotes the number of measurements with the value h₀ + ih (for example, n₋₂ is the number of measurements equal to h₀ − 2h, n₂ is the number of measurements equal to h₀ + 2h, and n₀ is the number of measurements equal to h₀). Equation (11) emerges from these definitions.
n = n₋₃ + n₋₂ + n₋₁ + n₀ + n₁ + n₂ + n₃,   nᵢ ∈ {0, 1, …, n}
Following these equations, the probability distribution function of Y ¯ is depicted in Equation (12).
P(Ȳ = v) = P(Σᵢ i·nᵢ = n(v − h₀)/h, Σᵢ nᵢ = n), where i runs from −3 to 3
Following the definition of p i , the distribution function of Y ¯ can be calculated as in Equation (13).
P(Ȳ = v) = Σ [n!/(Πⱼ nⱼ!)] Πⱼ pⱼ^{nⱼ}, where the products run over j = −3, …, 3 and the outer sum runs over all count vectors (n₋₃, …, n₃) satisfying Σᵢ i·nᵢ = n(v − h₀)/h and Σᵢ nᵢ = n
For example, let us consider the case of measuring some content (measured in micrograms, μg). The mean is 20 μg and the standard deviation is 0.9 μg. The measuring device's resolution is in whole micrograms. In this example, the probabilities of measuring values between 17 μg and 23 μg are depicted in Figure 4 (the probability of higher or lower values is negligible). For example, the probability of a measured value of 19 μg is Φ((19.5 − 20)/0.9) − Φ((18.5 − 20)/0.9) ≈ 0.24.
Let us assume that the sample size is set to n = 3. Therefore, Ȳ can only take the following values: 17, 17⅓, 17⅔, 18, 18⅓, 18⅔, 19, 19⅓, 19⅔, 20, 20⅓, 20⅔, 21, 21⅓, 21⅔, 22, 22⅓, 22⅔, 23.
There are 84 combinations for n = 3 as depicted in Table 2.
The distribution of Ȳ can be calculated from Table 2, as depicted in Figure 5.
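This construction can be reproduced by brute-force enumeration of Equation (13). The sketch below rebuilds the measured-value probabilities of the example and the distribution of Ȳ; the printed probability is computed, not quoted from Figure 5:

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import erf, factorial, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu, sigma, n = 20.0, 0.9, 3
values = [17, 18, 19, 20, 21, 22, 23]       # h0 = 20, offsets i = -3..3, h = 1
p = {v: phi((v + 0.5 - mu) / sigma) - phi((v - 0.5 - mu) / sigma) for v in values}

combos = list(combinations_with_replacement(values, n))
print(len(combos))                           # 84 combinations, as in Table 2

pmf = {}                                     # Equation (13): multinomial weights
for combo in combos:
    coef, prob = factorial(n), 1.0
    for v, c in Counter(combo).items():
        coef //= factorial(c)
        prob *= p[v] ** c
    avg = round(sum(combo) / n, 6)
    pmf[avg] = pmf.get(avg, 0.0) + coef * prob

print(f"P(Ybar = 20) = {pmf[20.0]:.3f}")
```

The 84 unordered combinations are exactly the count vectors of Equation (13) for n = 3 over seven values, i.e., C(9, 3) = 84.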

6. Asymmetry Considerations When Setting Control Limits

The original control limits suggested by Shewhart [40] remain commonly used today and are set to be 3 standard deviations from the mean [45,46,47,48,49]. Thus Equation (2) becomes Equation (14).
UCL, LCL = μ ± 3σ/√n
The arbitrary value of 3 means that the tolerable rate of false alarms is 0.0027, or an average run length (ARL) between false alarms of approximately 370. Obviously, because of the discrete nature of the sample, there is no possibility of reaching these exact values, but the control limits can be determined by setting the value that yields the required ARL value (or close to it). Based on the example in Figure 5, the control limits can be set as depicted in Figure 6.
In Figure 6, two sets of control limits are depicted. The first (denoted by UCL_H, LCL_H) yields ARL = 1675, and the second (UCL_L, LCL_L) yields ARL = 196. It is worth noting that if we were to ignore the resolution problem and calculate the control limits as originally proposed, the result would be identical for all practical purposes to the UCL_L, LCL_L set (21.55 and 18.44, respectively). At first glance, it may seem that the proposed process is somewhat redundant, but as we show below, this is not the case.
The original control limits were set to be symmetrical around the mean (because of the symmetry of the normal curve). This symmetry is seldom preserved, since typically there is no adaptation of the mean to the arbitrary human-set measuring scale (as depicted in Figure 3). If the process mean deviates from the measurement scale (i.e., in the example, the mean is not exactly 20 μg), the ARL changes significantly, as depicted in Figure 7.
As can be seen in Figure 7, ignoring the asymmetry when constructing the control limits leads to unwanted results. Thus, the asymmetry must be considered when setting the control limits.
This result indicates that the asymmetrical nature of the measured values yields an unwanted phenomenon: the procedure, designed to keep the measured process mean aligned with its intended value, does not provide the required results. Instead of experiencing false alarms once in about 370 samples, the control process may yield false alarms on average every ~40 samples in extreme asymmetrical cases.
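This sensitivity can be explored numerically. The sketch below keeps the classical limits of Equation (14) centered on the true mean and recomputes the false-alarm probability of Ȳ for several offsets η, reusing the running example's parameters (σ = 0.9, h = 1, n = 3). It does not reproduce the exact curve of Figure 7; the point is only that, once rounding is present, the achieved ARL drifts far from the design value of 370 and varies strongly with η:

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import erf, factorial, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

sigma, h, n, h0 = 0.9, 1.0, 3, 20.0
values = [h0 + i * h for i in range(-4, 5)]   # support wide enough for eta <= 0.5

def arl(eta):
    """Achieved ARL of the classical limits mu +/- 3*sigma/sqrt(n) when the
    true mean mu = h0 + eta is offset from the measurement scale."""
    mu = h0 + eta
    ucl, lcl = mu + 3 * sigma / sqrt(n), mu - 3 * sigma / sqrt(n)
    p = {v: phi((v + h / 2 - mu) / sigma) - phi((v - h / 2 - mu) / sigma)
         for v in values}
    beta = 0.0
    for combo in combinations_with_replacement(values, n):
        avg = sum(combo) / n
        if lcl <= avg <= ucl:
            continue                          # inside the limits: no alarm
        coef, prob = factorial(n), 1.0
        for v, c in Counter(combo).items():
            coef //= factorial(c)
            prob *= p[v] ** c
        beta += coef * prob
    return 1.0 / beta

for eta in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"eta = {eta:.1f}  ARL = {arl(eta):6.0f}")
```

With these illustrative parameters, the achieved ARL differs from 370 by a factor of two or more across the range of η, which is precisely the phenomenon the proposed method is designed to correct.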
If left unchanged, this phenomenon may cause severe economic damage. For example, imagine a production line where the process is stopped for recalibration 10 times more often than planned.
Clearly, this situation calls for a revised method for constructing control limits. The new method should account for the asymmetrical nature of the measuring process and thus provide different upper/lower control limits, which are not symmetrically spaced around the mean.

7. Suggested Method for Setting Control Limits

The proposed method is based on the realization that we need two different control-limit calculations: one for the UCL and one for the LCL. Without loss of generality, let us assume that μ > h₀, as depicted in Figure 3. The planner setting the limits needs to determine the required (or acceptable) ARL goal so that the UCL/LCL can be set accordingly. The control limits are influenced by the sample size (n), the asymmetry (η), and the σ-to-h ratio (δ).
The algorithm flowchart is depicted in Figure 8 with some elaborations following.
The algorithm depicted in Figure 8 describes the method for setting the upper control limit. The steps are further explained below (an elaboration for finding the LCL is provided in curly brackets):
(1)
The input for the algorithm is the required ARL {typically the same ARL is required for both upper and lower control limits, but this need not be the general case}
(2)
As depicted in Figure 4, the specific probability mass function for the measured value is calculated.
(3)
Depending on the sample size (n), the 6n + 1 values of the probability mass function of Ȳ are calculated.
(4)
UCL is set to the initial highest possible value {LCL is set to the initial lowest possible value, i.e., LCL = h₀ − (3 + 1/n)h}.
(5)
The value of β (the false-alarm probability) is initialized to 0 (indicating ARL → ∞).
(6)
At each iteration, the value of the UCL is reduced by one "notch", i.e., to the next possible measurable value of Ȳ {the value of the LCL is increased by one notch}, and the new β is calculated; the loop ends when the required ARL is reached (i.e., when β⁻¹ ≤ ARL).
(7)
The calculated UCL {LCL} is the required output.
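The steps above can be sketched in code. The sketch below implements the UCL search (the LCL search is the mirror image, scanning upward from the lowest value); the alarm rule at the limit (here Ȳ > UCL) and the function names are assumptions of this illustration, not prescriptions from the text:

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import erf, factorial, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ybar_pmf(mu, sigma, h, n, span=3):
    """Steps 2-3: PMF of the average of n measurements rounded to resolution h."""
    h0 = round(mu / h) * h                    # nearest measurable value to mu
    values = [round(h0 + i * h, 10) for i in range(-span, span + 1)]
    p = {v: phi((v + h / 2 - mu) / sigma) - phi((v - h / 2 - mu) / sigma)
         for v in values}
    pmf = {}
    for combo in combinations_with_replacement(values, n):
        coef, prob = factorial(n), 1.0
        for v, c in Counter(combo).items():
            coef //= factorial(c)
            prob *= p[v] ** c
        avg = round(sum(combo) / n, 10)
        pmf[avg] = pmf.get(avg, 0.0) + coef * prob
    return pmf

def ucl_candidates(pmf, arl_target):
    """Steps 4-6: start from the highest possible value of Ybar and lower the
    UCL one notch at a time (alarm rule assumed: Ybar > UCL).  Returns the last
    notch whose ARL still meets the target and the first notch that falls below."""
    beta, above, below = 0.0, None, None
    for v in sorted(pmf, reverse=True):
        current = 1.0 / beta if beta > 0 else float("inf")
        if current >= arl_target:
            above = (v, current)
        else:
            below = (v, current)
            break
        beta += pmf[v]
    return above, below

pmf = ybar_pmf(mu=10.32, sigma=0.09, h=0.1, n=5)   # Appendix B parameters
above, below = ucl_candidates(pmf, arl_target=740)
print("last notch meeting the target:", above)
print("first notch below the target :", below)
```

The two returned notches are the pair of candidate limits the planner must choose between, mirroring the two UCL/LCL options reported in Appendix B (the exact ARL figures there are not asserted here, since they depend on the alarm rule assumed at the limit).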
Appendix B provides an example of this procedure.

8. Conclusions, Limitations, and Future Research

This paper examines the basic assumptions underlying the current (classical) construction of control limits. The article describes the underlying assumptions that may be incorrect in various applications due to low-resolution measurements and demonstrates the resulting errors that may occur.
Based on these assumptions, the distribution of the measured value was developed. This distribution was shown to be skewed, thereby precluding any symmetrical solution to the control-limit-setting problem. From this distribution, the distribution of the average measured value was computed, which is also asymmetrical. The final step was to re-evaluate the control limits in a way that accounts for this phenomenon.
Although this article is inherently limited to specific cases of low-resolution measurement, it provides valuable insights into these scenarios. The primary finding of this research is that certain fundamental assumptions about the symmetry of control processes do not hold when measurements are of low resolution. This asymmetrical nature of the general case can lead to severe loss in production, due to unnecessary production interruptions. Furthermore, the large number of false alarms may lead to a loss of confidence in the process’s capabilities and quality and thus cause some wrong managerial decisions.
In the modern production environment of gathering large amounts of data from a large number of cheap sensors, it is of high importance to have a trustworthy automatic control process to verify that the controlled process is still calibrated.
By demonstrating this and presenting a relatively simple algorithm, the paper contributes to the field of control with low-resolution measurements, where the naïve application of control charts may lead to erroneous decisions.
While this study primarily focuses on the specific methodology of X̄-charts, there is an acknowledged need to rectify other types of control charts, such as those dealing with variance (like the R-chart) or the more robust charts required for six-sigma applications [50]. Furthermore, recognizing the asymmetry caused by measurement may offer insights into the analysis of big data. Finally, this entire method is built on a somewhat contentious premise: the assumption of normality. This assumption may not hold true in many scenarios, and the nonlinear transformations used to achieve normality could also be flawed [51].

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. An Example of an Asymmetric Low-Resolution Process Control

The following case demonstrates the problem that may arise from using symmetrical control limits.
High-quality vegetable seeds are typically expensive; therefore, customers are assured of receiving seeds that match the specified variety, germination duration, and germination percentage, which generally ranges from 97% to 100%. To achieve such high germination rates, the seeds undergo a ‘priming’ process designed to enhance these percentages. The specific details of this priming process are proprietary to the seed companies and beyond the scope of this article. Following priming, the seeds are subjected to various tests to ensure that the process has not significantly reduced their shelf life. Some tests are straightforward, such as germination in a controlled environment for several days until sprouting occurs, while others involve more advanced measurements.
One advanced test involves measuring the phosphor (P) content in the seeds. The process should yield a mean of 20.3 μg, with a standard deviation of 1 μg. The purpose of the X̄-chart is to ensure the phosphor content remains stable post-priming. For this test, four seeds are sampled periodically. While there are highly accurate, albeit expensive, devices available for measuring phosphor content, the commercial device used for this purpose has a resolution of 1 μg.
Using Shewhart’s symmetrical limits, the UCL and LCL should be as depicted in Equation (A1).
UCL, LCL = 20.3 ± 3·(1/√4) = 21.8, 18.8
However, these limits may cause an undesirable alarm rate. The distribution of the measured value (assuming the process mean remains 20.3) is depicted in Figure A1 (the thin vertical line denotes the process’s mean).
Figure A1. Result distribution.
As can be seen from this figure, the distribution is not symmetrical. Therefore, the probability of an upper false alarm (i.e., the sample’s average values exceed the UCL) is about a third of the probability of crossing the LCL (~0.009 and 0.027, respectively). This simple example demonstrates the asymmetrical nature of the test and the need for some re-design of the current tools in use.
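The two crossing probabilities can be checked by enumerating all samples of four rounded readings. This is a sketch: the alarm rule at the limits (taken here as Ȳ > UCL or Ȳ < LCL) and the 16–24 μg support are illustrative assumptions, and the printed values are computed rather than quoted from the appendix:

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import erf, factorial, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu, sigma, n = 20.3, 1.0, 4
ucl, lcl = mu + 3 * sigma / sqrt(n), mu - 3 * sigma / sqrt(n)   # 21.8, 18.8
values = range(16, 25)                 # whole-microgram readings near the mean
p = {v: phi((v + 0.5 - mu) / sigma) - phi((v - 0.5 - mu) / sigma) for v in values}

upper = lower = 0.0
for combo in combinations_with_replacement(values, n):
    avg = sum(combo) / n
    if lcl <= avg <= ucl:
        continue                       # inside the limits: no alarm
    coef, prob = factorial(n), 1.0
    for v, c in Counter(combo).items():
        coef //= factorial(c)
        prob *= p[v] ** c
    if avg > ucl:
        upper += coef * prob
    else:
        lower += coef * prob

print(f"P(cross UCL) = {upper:.4f}, P(cross LCL) = {lower:.4f}")
```

Under these assumptions, the lower-limit crossing probability comes out roughly three times the upper one, matching the asymmetry described above.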

Appendix B. Calculating the Control Limits

Let us assume a process that needs to measure the phosphorous content (micrograms) in seeds. The mean is μ = 10.32 , the standard deviation is σ = 0.09 , and the resolution is h = 0.1 . The sample size is 5.
In this example, the asymmetry is η = 0.02 .
The probability mass function for the specific measured value is depicted in Figure A2.
Figure A2. Measured value (y) distribution.
The cumulative distribution function of Y ¯ is calculated and depicted in Figure A3.
Figure A3. Ȳ distribution.
The procedure starts from Figure A3. Let us assume that the requirements are similar to the ones set by Shewhart; that is, ARL_U = ARL_L ≥ 740.
Applying the procedure to the UCL yields two possible values: either UCL = 10.5 with ARL_U = 546 or UCL = 10.48 with ARL_U = 126. The planner thus needs to decide which of these values is preferable. Similarly, there are two possible values for the LCL: either LCL = 10.24 with ARL_L = 483 or LCL = 10.26 with ARL_L = 118.

Figure 1. X ¯ -chart example.
Figure 2. Measurement distribution (symmetrical process).
Figure 3. Measurement distribution (asymmetrical process).
Figure 4. Probability mass function.
Figure 5. Distribution.
Figure 6. Control limits.
Figure 7. Average run length till false alarm.
Figure 8. Algorithm for setting UCL.
Table 1. Notations.

Notation — Explanation

Process parameters
x — Real value of the measured controlled parameter. This value is unknown; it cannot be measured directly and is approximated by the measured value y.
X̄ — Real average value of the obtained sample. This value is unknown and is estimated by Ȳ.
μ — Mean value of the controlled process.
σ — Standard deviation of the controlled process real value. It is assumed that the process values are normally distributed.

Measurement parameters
h — Resolution of the measuring device. For example, a thermometer that can only measure in units of 0.01 °C.
y — Measured value of the controlled process.
Ȳ — Measured average value of the controlled process.
η — Asymmetry measure: the minimal distance between the process's mean and the closest measurable value (η ≥ 0).
δ — Ratio between the standard deviation of the controlled process and the measuring resolution (δ = σ/h).
ARL — Average run length: the average number of samples until a deviation of the process from the designed mean is discovered.
n — Sample size from the controlled process.

Decision variables
LCL, UCL — Lower and upper control limits of the controlled process.

General notation
Φ(.) — Normal cumulative distribution function.
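As an illustration of the measurement parameters defined above, the following sketch computes η and δ for a process measured on a grid of multiples of h. The numeric values of μ, σ, and h are made-up assumptions for the demonstration, not values from the paper:

```python
# Illustrative computation of the asymmetry measure eta and the
# resolution ratio delta (mu, sigma, h are assumed example values).
mu, sigma, h = 10.37, 0.02, 0.05  # process mean, std dev, device resolution

r = mu % h            # offset of mu above the nearest measurable value below it
eta = min(r, h - r)   # distance from mu to the *closest* measurable value
delta = sigma / h     # resolution ratio; small delta means coarse measurement

print(round(eta, 4), round(delta, 2))  # prints: 0.02 0.4
```

Here δ < 1, i.e., the device resolution is of the same order as the process standard deviation, which is exactly the regime in which the paper argues the symmetric-limit assumption breaks down.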
Table 2. Combinations.

Number of observations at each measured value          Ȳ         Probability
 17   18   19   20   21   22   23
  0    0    0    0    0    0    3                      23        1.94 × 10⁻⁸
  0    0    0    0    0    1    2                      22 2/3    9.75 × 10⁻⁷
  …
  3    0    0    0    0    0    0                      17        1.94 × 10⁻⁸
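The enumeration behind Table 2 can be sketched by iterating over all ordered samples of size n drawn from the discrete measurable values and accumulating the probability of each sample mean Ȳ. The per-value probabilities below are placeholders (assumed for the demo), not the ones underlying the table:

```python
from itertools import product
from collections import defaultdict
from math import prod

def ybar_distribution(values, probs, n):
    """Exact distribution of the sample mean over all ordered samples of size n."""
    dist = defaultdict(float)
    for sample in product(values, repeat=n):
        dist[sum(sample) / n] += prod(probs[v] for v in sample)
    return dict(dist)

# Placeholder (assumed) in-control probabilities for each measurable value:
values = [17, 18, 19, 20, 21, 22, 23]
probs = {17: 0.01, 18: 0.09, 19: 0.24, 20: 0.32, 21: 0.24, 22: 0.09, 23: 0.01}
dist = ybar_distribution(values, probs, n=3)

# The extreme sample (23, 23, 23) occurs with probability 0.01**3:
print(round(dist[23.0], 9))  # prints: 1e-06
```

Because the measurable values form a finite grid, Ȳ takes only finitely many values (here 19 distinct means), which is why the resulting control limits must be set on a discrete, generally asymmetric support.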
Etgar, R. Asymmetry Considerations in Constructing Control Charts: When Symmetry Is Not the Norm. Symmetry 2024, 16, 811. https://doi.org/10.3390/sym16070811