1. Introduction
In both direct and indirect measurements, the calculated value of the measured quantity is always encumbered with a certain error. This error is typically defined as the difference between the actual value of the measured quantity and the ideal value of this quantity (obtained when the measurement remains unaffected by any phenomenon, commonly referred to as an error source) [1]:

$\Delta = x - x_{id}$ (1)

where $\Delta$ is the error signal, $x$ is the actual (error-affected) value, and $x_{id}$ is the ideal value of the measured quantity.
Through repeated measurements, a set of instances of the discussed error signal $\Delta$ is acquired. Parameters such as the expected value, variance, standard deviation, and the shape of the probability density function of the signal's distribution describe the signal's characteristics. It is worth noting that when multiple measurements are conducted for various values of the measured quantity, the set of error signal instances encompasses both type A and type B components, as detailed in the guide [1].
In measurement practice, besides stating the obtained value of the measured quantity, it is essential to specify a quantity that describes the precision with which the value was determined. The two measures most commonly used for this purpose are the standard uncertainty and the expanded uncertainty [1]. Under the classical definition of the error signal in Equation (1), assuming a symmetric distribution of instances and a zero expected value of the signal, the standard uncertainty equals the signal's standard deviation. In this scenario, the expanded uncertainty can be calculated using the equation:

$P(|\Delta| \le U) = p$ (2)

where $|\Delta|$ represents the absolute value of any instance of the error signal $\Delta$, $U$ signifies the expanded uncertainty value, and $p$ represents the specified confidence level [1], where $0 < p < 1$. The concept behind the expanded uncertainty measure is that a fraction $p$ of all signal instances have an absolute value equal to or lower than the expanded uncertainty value.
across all signal instances. This metric can also be established by correlating its value with the standard uncertainty value:
where
represents the coverage factor derived from the distribution shape of the analyzed error signal implementations and the chosen confidence level [
1]. The characteristics of specific distribution types and the connection between the expansion coefficient values and the provided confidence level are extensively discussed in the study by Horálek [
2].
The determination of the expanded uncertainty value defined in Equation (2) can also be achieved experimentally. To do so, successive instances of the signal defined in Equation (1) are acquired via measurement or simulation experiments. Given a discrete set comprising $N$ instances of the studied error signal, and taking $n(x)$ as the function that specifies the count of error signal instances with a value of $x$, where the signal's mean value is zero, Equation (2) is expressed as:

$\frac{1}{N}\sum_{|x| \le U} n(x) = p$ (4)

for cases where the probability density function of the analyzed signal is positive and symmetric with respect to the ordinate axis. A graphical representation of Equations (2) and (4) is shown in Figure 1.
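As a brief illustration of this empirical procedure, the following sketch may be helpful. It is hypothetical code, not part of the original study: for a zero-mean, symmetric error signal, the expanded uncertainty at confidence level $p$ defined by Equation (4) is simply the $p$-quantile of the absolute instance values.

```python
import numpy as np

# Hedged sketch of the empirical determination in Equation (4):
# for a zero-mean, symmetric error signal, find U such that a
# fraction p of all instances satisfies |x| <= U, i.e. the
# p-quantile of the absolute instance values.
def expanded_uncertainty(samples, p=0.95):
    samples = np.asarray(samples, dtype=float)
    return float(np.quantile(np.abs(samples), p))

rng = np.random.default_rng(0)
delta = rng.normal(0.0, 1.0, 100_000)  # simulated error signal instances
U = expanded_uncertainty(delta, p=0.95)
# For a standard normal signal, U should come out close to 1.96
```

With on the order of 100,000 instances, as used throughout the study, such a quantile estimate is stable to within roughly one percent.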
Considering the information above, it is evident that the expanded uncertainty effectively quantifies the accuracy of determining the values of measured quantities. Unlike the standard uncertainty, it enables assessing the likelihood of obtaining an error signal value equal to or less than the assumed expanded uncertainty value, with the flexibility to freely set the confidence level $p$ within the range $0 < p < 1$. Nevertheless, the challenge with this measure lies in the intricate calculations involved in determining its final value, especially when multiple error signals perturb the output quantity.
If the uncertainty budget encompasses $N$ error signals $\Delta_1$, $\Delta_2$, …, $\Delta_N$, with variances $u_1^2$, $u_2^2$, …, $u_N^2$, respectively, the standard uncertainty of the resultant signal $\Delta$ is defined by the equation cited in [1]:

$u = \sqrt{\sum_{i=1}^{N} u_i^2 + 2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} r_{ij} u_i u_j}$ (5)

where the symbol $r_{ij}$ denotes the subsequent Pearson correlation coefficients, determined on the basis of the equation:

$r_{ij} = \frac{\operatorname{cov}(\Delta_i, \Delta_j)}{u_i u_j}$ (6)

wherein the indicated signals may represent sources of errors related to both type A and type B uncertainties.
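The combination in Equation (5) can be written compactly as a quadratic form over the correlation matrix, as the following sketch shows (hypothetical code; the function name and argument layout are the author's own, not taken from the cited guide):

```python
import numpy as np

# Sketch of Equation (5): combined standard uncertainty of N error
# signals with standard uncertainties u_i and Pearson correlation
# coefficients r_ij (with r[i, i] = 1). For uncorrelated signals
# this reduces to the root-sum-of-squares.
def combined_standard_uncertainty(u, r):
    u = np.asarray(u, dtype=float)
    r = np.asarray(r, dtype=float)
    return float(np.sqrt(u @ r @ u))

u = [0.3, 0.4]
u_uncorr = combined_standard_uncertainty(u, np.eye(2))      # sqrt(0.09 + 0.16) = 0.5
u_corr = combined_standard_uncertainty(u, np.ones((2, 2)))  # 0.3 + 0.4 = 0.7 (full correlation)
```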
When dealing with the expanded uncertainty measure, Equation (5) cannot be used. The literature offers various methods for determining the resultant expanded uncertainty value, such as the propagation of distributions method [3,4], the expanded uncertainty combination rule method [5,6], and the fuzzy sets-based method [7]. However, these approaches often rely on analytical techniques necessitating intricate calculations involving convolution operations. In practical measurements, the Monte Carlo method is frequently employed [8,9], or calculations are simplified by assuming compliance with the central limit theorem conditions, as suggested in the guide [1]. Nonetheless, each of these methods has specific limitations, as elaborated upon later in the text.
Considering the existing knowledge and the challenges identified, the objective is to propose a method for determining the expanded uncertainty value with reduced computational complexity (in comparison to the Monte Carlo method) while maintaining an acceptable level of accuracy. The Monte Carlo method, renowned for its precision, serves as the benchmark in this study; values obtained through the proposed method are expected to deviate from the reference values by no more than ±5%. Additionally, the research presents results obtained through the classical approach [1], which assumes adherence to the central limit theorem conditions. Owing to the complexity of the other aforementioned methods, their results are not included in the analysis.
The article is structured into six sections. Section 1 serves as an introduction, outlining the work's motivation, current knowledge, and key assumptions. Section 2 elaborates on the fundamental assumptions of the proposed method in a general context. Section 3 details the process of determining coherence coefficient values for both uncorrelated and correlated error signals. In Section 4, a simulation experiment is conducted to validate the method's efficacy, presenting the obtained results. Section 5 showcases practical examples applying the discussed method, including one that addresses an existing measurement setup detailed in earlier work [10]. Section 6 encapsulates the key findings, highlighting the method's most important advantages and disadvantages.
A crucial component of the project is the literature review, essential for consolidating information related to the analysis method under discussion. However, the primary focus of the study lies in previously unexplored areas, encompassing experimental descriptions and method modifications aimed at fulfilling the aforementioned assumptions.
2. Reductive Interval Arithmetic Method
The reductive interval arithmetic method is rooted in interval arithmetic, studied by Moore, R. E. [11] since 1958 and, concurrently, by Warmus, M. [12] since 1956. It was Moore who first demonstrated the practical application of interval arithmetic. Interval arithmetic facilitates the examination of interval interactions, enabling the expanded uncertainty measure to be treated as an interval; consequently, it can be leveraged to compute the cumulative sum of successive expanded uncertainties.
Adhering to the classical definition of expanded uncertainty, as outlined in Equations (2) and (4), this measure can be represented as a zero-centered interval with a radius equal to its value. Notably, for expanded uncertainty measures, the summation of two such intervals does not align with the sum derived from classical interval arithmetic, typically yielding a smaller real value. Jakubiec, J. therefore proposed a modified interval arithmetic approach, termed "reductive interval arithmetic", denoting its capability to narrow down the resultant interval [13,14,15,16].
Considering an error signal $\Delta$ comprising $N$ partial error signals with expanded uncertainties $U_1$, $U_2$, …, $U_N$, respectively, each determined at the same confidence level $p$, the calculation of the expanded uncertainty using the reductive interval arithmetic method is expressed by the equation cited in [13]:

$U = \sqrt{\sum_{i=1}^{N} U_i^2 + 2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} k_{ij} U_i U_j}$ (7)

where the symbol $k_{ij}$ signifies the coherence coefficient value for the $i$-th and $j$-th error signal pair. These coefficients reflect the correlation between the analyzed error signals and the dependence of their distribution shapes on the distribution shape of the resultant error signal instances. Establishing these coefficient values is the pivotal task in applying the method under discussion, detailed further in subsequent sections of the study.
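Under the assumption that Equation (7) combines the partial expanded uncertainties as a quadratic form in the coherence coefficients (consistent with the two-signal case analyzed in Section 3), the combination rule can be sketched in a few lines. This is hypothetical code; the coherence matrix $k$ is supplied by the user, symmetric, with ones on the diagonal.

```python
import numpy as np

# Sketch of the reductive combination: partial expanded uncertainties
# U_i combined through coherence coefficients k_ij as a quadratic form.
def reductive_expanded_uncertainty(U, k):
    U = np.asarray(U, dtype=float)
    k = np.asarray(k, dtype=float)  # symmetric, k[i, i] = 1
    return float(np.sqrt(U @ k @ U))

# k12 = 1 reproduces the classical interval sum: 1 + 1 = 2
U_full = reductive_expanded_uncertainty([1.0, 1.0], [[1.0, 1.0], [1.0, 1.0]])
# k12 = 0 gives the root-sum-of-squares sqrt(2), i.e. a reduced interval
U_zero = reductive_expanded_uncertainty([1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]])
```

The second case illustrates the "reduction" relative to classical interval arithmetic: for near-normal summed signals the coherence coefficients approach zero, shrinking the resultant interval.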
The discussed method was proposed and used by its author to analyze the uncertainty budgets of measurement chains processing series of measurement data. A feature of such measurement chains is that they process multiple successive instances of error signals originating from the same sources, which therefore have identical parameters and properties. Unfortunately, because subsequent research focused on other areas, no further works describing or developing the method were published. Additionally, very few works on reductive interval arithmetic are available in the English-language literature; one application can be found in [17], but most works devoted to the method were published in Polish. Owing to the unique features of the method, the authors of this work decided to continue the research conducted by Jakubiec, J., additionally modifying the original application algorithm and thus expanding the method's area of application.
3. Estimation of Coherence Coefficient Values
Analyzing the case of only two error signals $\Delta_1$ and $\Delta_2$ with expanded uncertainties $U_1$ and $U_2$, Equation (7) takes the form:

$U^2 = U_1^2 + U_2^2 + 2 k_{12} U_1 U_2$ (8)

and for a known resultant value of the expanded uncertainty $U$, resulting from the combination of the uncertainties $U_1$ and $U_2$, the value of the coherence coefficient $k_{12}$ can therefore be determined according to the following equation:

$k_{12} = \frac{U^2 - U_1^2 - U_2^2}{2 U_1 U_2}$ (9)
Determining the value of the coefficient $k_{12}$ in the analyzed scenario involves conducting an experiment using the Monte Carlo method. This entails iteratively summing the instance values of the studied error signals, $\Delta = \Delta_1 + \Delta_2$, and subsequently calculating the resulting expanded uncertainty $U$ following Equation (4) [1,9]. Alternatively, analytical methods requiring convolution operations can be employed to determine the coherence coefficient value, as outlined in [13]. Notably, the coherence coefficient value obtained in this way is valid only for a signal pair with parameters identical to those of the $\Delta_1$ and $\Delta_2$ signals, rendering this approach impractical in real-world applications.
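The Monte Carlo procedure just described can be sketched as follows. This is hypothetical code, not the authors' implementation; it assumes the two-signal relation $U^2 = U_1^2 + U_2^2 + 2 k_{12} U_1 U_2$ stated above. For two independent normal signals the summed signal is also normal, so the coherence coefficient should come out near zero.

```python
import numpy as np

def expanded_uncertainty(samples, p):
    # Empirical expanded uncertainty: p-quantile of the absolute instances
    return float(np.quantile(np.abs(samples), p))

def coherence_coefficient(d1, d2, p=0.95):
    # Solve the two-signal combination for k12 using the empirically
    # determined uncertainties of the pair and of their sum
    U1 = expanded_uncertainty(d1, p)
    U2 = expanded_uncertainty(d2, p)
    U = expanded_uncertainty(d1 + d2, p)
    return (U**2 - U1**2 - U2**2) / (2.0 * U1 * U2)

rng = np.random.default_rng(1)
d1 = rng.normal(0.0, 1.0, 100_000)
d2 = rng.normal(0.0, 1.0, 100_000)
k12 = coherence_coefficient(d1, d2)  # expected to be near 0 for this pair
```

As the text stresses, the value obtained is valid only for this particular pair of signal parameters, which is what motivates the estimation approach developed below.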
The essence of employing the reductive interval arithmetic method is to offer a way to ascertain the coherence coefficient values for any number and parameters of error signals without repetitive Monte Carlo simulations or analytical derivations. The approach involves estimating these coefficients appropriately, a process elucidated in subsequent sections of the study. Given that practical measurement scenarios often lack precise instance values and probability density function insights for the identified error signals, this paper assumes these parameters are unknown.
3.1. Case of Uncorrelated Error Signals
For uncorrelated error signals, the methodology presented in [18] offers a viable solution. This approach involves initially determining the shape factor value for a signal pair $\Delta_1$ and $\Delta_2$ with specified parameters; the factor is subsequently adjusted based on the actual parameters of the studied signals. This technique found practical application in [19], with the correction initially proposed in [18]. That correction primarily accounted for the assumptions of the central limit theorem in computing the resultant expanded uncertainty value. Further refinements, including adjustments addressing discrepancies between the expanded uncertainty values of the examined error signals, are elaborated upon in detail below.
Assuming two uncorrelated error signals $\Delta_1$ and $\Delta_2$, whose expanded uncertainty values are $U_1$ and $U_2$, the value of the shape factor can be determined for the discussed pair of signals in the form:

where the expanded uncertainty values for both signals need to be determined at the same confidence level $p$. It is worth mentioning that with a sufficiently large dataset (approximately 100,000 instance values) for the signals in question, the factor's value can be estimated using the Monte Carlo method. In cases where equal expanded uncertainty values are assumed ($U_1 = U_2$), the factor's value is influenced solely by the probability density function shapes of the signals under analysis. Alternatively, the shape factor values can be determined assuming identical standard uncertainties ($u_1 = u_2$). Under this assumption, Equation (10) is expressed as:

However, the values obtained in this way may differ from those obtained under the assumption described for Equation (10). Shape factor values should be determined for the pairs of all unique error signal distribution shapes occurring in the analyzed measurement chain.
Table 1 presents the acquired shape factor values for typical probability density distribution functions, considering the specified confidence level and the method employed for determining these values. Examining the data in the table, it becomes evident that the shape factor values increase with the confidence level. Additionally, values calculated using Equation (10) are generally higher than those derived from Equation (11). The impact of the method used to determine shape factor values on the efficacy of the proposed approach for determining the resultant expanded uncertainty is explored in subsequent sections of the study.
Analyzing Equations (9)–(11), it can be observed that the shape factors determined through the chosen method coincide with the coherence coefficients for signals whose parameters are identical to those used in the shape factor calculations. Any alteration in the signal parameters or their number leads to different coherence coefficient values. Hence, a methodology needs to be proposed for estimating coherence coefficients based on the shape factor values and the current signal parameters.
The initial aspect to address pertains to the central limit theorem: as the number of summed independent random variables increases, the probability density distribution of their sum tends towards a normal distribution [1]. Referencing the outcomes in Table 1, it is apparent that the closer the distributions of a pair of summed signals are to the normal distribution, the closer the shape factor value is to zero. Consequently, as the number of summed signals rises, the subsequent coherence coefficients used in Equation (7) tend towards zero. In [18], it was proposed to correct the shape factors according to the equation:

Application of the described correction allows for obtaining results that are overestimated by approximately 5% relative to those obtained using the Monte Carlo method (only when assuming equal expanded uncertainty values for the analyzed error signals).
The second significant phenomenon concerns the impact of differences between the expanded uncertainty values of the analyzed signal pair, for which the coherence coefficient is defined in Equation (9). To explore this effect, an experiment was conducted using the Monte Carlo method, determining the coherence coefficient for signal pairs with distinct expanded uncertainty values $U_1$ and $U_2$ in accordance with Equation (9), with the ratio of the expanded uncertainty values ranging from 1 to 10. The experimental findings for specific confidence levels are depicted in Figure 2.
Generally, the absolute value of the coherence coefficient diminishes as the disparity between the expanded uncertainty values of the signal pair increases. This relationship is non-linear, however: in some instances, for lower expanded uncertainty ratios, the coherence coefficient values first increase before declining at larger disparities. As the results in Figure 2 show, when the summed signals have distributions close to normal, the dependence of the coherence coefficient on the ratio of the associated expanded uncertainty values becomes monotonic and approaches a linear function. The dependence becomes strongly non-linear and non-monotonic for wide distributions, particularly the u-shaped distribution [2]. These circumstances mean that the exact value of the correction accounting for the observed phenomena should be determined separately for each pair of signals with given distributions; nevertheless, a universal approach is proposed in the paper and discussed later in this section.
Taking the presented conclusions into account, a universal correction is proposed to address the phenomenon in question:

Nevertheless, it is essential to note that this correction may not consistently capture the dynamics observed in Figure 2.
Lastly, in scenarios involving uncorrelated error signals, the coherence coefficient value can be determined based on the adjusted shape factor value:

whereas in the original method [18], these values were determined according to the equation:
3.2. Case of Correlated Error Signals
Dealing with correlated error signals presents a significantly more intricate scenario. In such instances, the coherence coefficient value results from both the convolution of the distribution shapes and the signals' correlation coefficient. The existing literature [13,15] covers analytical techniques for determining coherence coefficient values in correlated scenarios. Nonetheless, these methods pertain to specific signal parameters, necessitating a strategy for estimating the coefficients akin to the approach taken for uncorrelated signals.
However, before delving into estimation methods, it is imperative to establish how a pair of correlated error signals is defined. In experimental settings, a correlated signal pair $\Delta_1$ and $\Delta_2$ is characterized by:

where the auxiliary signal is a signal with a given distribution, uncorrelated with the signal $\Delta_1$, and $r$ is the Pearson correlation coefficient of the signals $\Delta_1$ and $\Delta_2$, determined according to Equation (6). It is crucial to acknowledge that the definition outlined in Equation (16) may not precisely mirror the relationships within the analyzed measurement setup. In such instances, the definition should be adjusted to align with the specific case under investigation, and the dependencies discussed in this subsection should then be determined following the established methodology.
For a signal pair governed by the relationship in Equation (16), the coherence coefficient value, contingent on the expanded uncertainty values and the correlation coefficient, can be computed using Equation (9). Figure 3 illustrates the dependence of the coherence coefficient values for a signal pair with defined probability density function shapes, assuming equal standard uncertainties of the signals. Figure 4 shows the dependence of the coherence coefficient values on varying expanded uncertainty values and correlation coefficients for signal pairs sharing a similar distribution type. In this analysis, the distribution of $\Delta_1$ corresponds to the first type, while the distribution of the auxiliary signal, as previously outlined in Equation (16), corresponds to the second type.
Scrutinizing the outcomes of the conducted experiments, it is evident that the coherence coefficient value for a correlated signal pair depends on both the standard uncertainty ratio of the signals and their Pearson correlation coefficient. Establishing a relationship akin to that described in Equation (14), allowing a reasonably accurate estimation of the coherence coefficient from the shape factors determined with Equation (11), proves exceedingly challenging in this scenario. The discussed relationships are explored solely for signal pairs; the complexity escalates with an increasing number of error signals.
For the results presented in Figure 3, conclusions similar to those discussed earlier for Figure 2 can be drawn. Regardless of the adopted confidence level, the value of the coherence coefficient grows with the value of the Pearson correlation coefficient of the analyzed signals. The main reason for the non-linearity of the discussed characteristics is that, in the absence of correlation (for $r = 0$), the coherence coefficient is non-zero for signal pairs whose resultant distribution of instance values deviates from the shape of the normal distribution. As depicted in Figure 3, the relationships become notably more non-linear at higher confidence levels. Additionally, the data in Figure 4 show that the absolute value of the coherence coefficient decreases as the ratio of the expanded uncertainty values of the analyzed pair of signals increases; the same property was demonstrated by the experiments whose results are presented in Figure 2.
In conclusion, to implement the discussed method effectively, the initial step is to estimate the resultant expanded uncertainty for all sets of correlated error signals. Various approaches can serve this purpose: utilizing the method outlined in this study to estimate coherence coefficients based on the insights from Figure 3 and Figure 4, determining the coefficient values analytically per [15], or following the method elucidated in the guide [1]. After determining the parameters of the resultant error signal for each correlated signal group, Equation (7) can be applied with coherence coefficients estimated according to Equation (14). Notably, for completely correlated signals, the coherence coefficient equals 1 for full positive and −1 for full negative correlation, which is a crucial feature of the method.
4. Verification of the Proposed Analysis Method
To assess the efficacy of the proposed method, a series of simulation experiments was conducted. These aimed to elucidate the conditions warranting the method's application and to ascertain the typical discrepancies between the results derived from the proposed method and those derived from the Monte Carlo method. Typical discrepancies refer to the range encompassing 95% of the error values in estimating the resultant expanded uncertainty, calculated using the relationship:

$\delta U = \frac{U_{est} - U_{MC}}{U_{MC}} \cdot 100\%$ (17)

where $U_{est}$ denotes the estimated expanded uncertainty value following Equation (7) or Equation (3), while $U_{MC}$ represents the expanded uncertainty value determined based on Equation (4) for the resultant error signal:

$\Delta = \sum_{i=1}^{N} \Delta_i$ (18)

The parameters and number of signals varied depending on the experiment variant. For the resultant error signal $\Delta$, 100,000 successive instance values were computed. Given the intricate nature of correlated signals, all experiments in this section assumed no correlation among the analyzed error signals.
A single iteration of the experiment therefore consisted of:
Generating 100,000 successive instance values for $N$ error signals $\Delta_1$, $\Delta_2$, …, $\Delta_N$ with selected parameters;
Determining the instance values of the resultant error signal $\Delta$, according to Equation (18);
Determining the reference expanded uncertainty value $U_{MC}$ for the signal $\Delta$, according to Equation (4), for the confidence level $p$;
Determining the value of the expanded uncertainty for the signal $\Delta$, according to Equation (7), using the shape factor values previously determined according to Equation (10) and coherence coefficients estimated according to Equation (14), for the confidence level $p$ (method (a));
Determining the value of the expanded uncertainty for the signal $\Delta$, according to Equation (7), using the shape factor values previously determined according to Equation (11) and coherence coefficients estimated according to Equation (14), for the confidence level $p$ (method (b));
Determining the value of the expanded uncertainty for the signal $\Delta$, according to Equation (3), assuming that the distribution of the instances of this signal is close to the normal distribution [1] (method (c));
Determining the values of the relative expanded uncertainty estimation error $\delta U$ for methods (a), (b), and (c), according to Equation (17).
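For uncorrelated signals, one such iteration can be sketched as follows. This is hypothetical code: since the exact forms of Equations (10) and (14) are not reproduced here, the sketch compares only the Monte Carlo reference of Equation (4) with the central-limit-theorem estimate of method (c), using the coverage factor $k = 1.96$ for $p = 0.95$.

```python
import numpy as np

rng = np.random.default_rng(2)
p, N, M = 0.95, 5, 100_000

# N uncorrelated error signals; all uniform here for simplicity
signals = [rng.uniform(-1.0, 1.0, M) for _ in range(N)]
resultant = np.sum(signals, axis=0)            # resultant error signal (Equation (18))
U_ref = np.quantile(np.abs(resultant), p)      # reference value per Equation (4)

u = np.sqrt(sum(np.var(s) for s in signals))   # combined standard uncertainty
U_clt = 1.96 * u                               # method (c): Equation (3), normal assumption

delta_U = 100.0 * (U_clt - U_ref) / U_ref      # relative error per Equation (17), in percent
# With five summed uniform signals, the sum is already close to normal,
# so the relative estimation error stays within a few percent
```

With fewer summed signals or a dominant wide (e.g. u-shaped) component, the central limit theorem conditions fail and this error grows, which is exactly the regime the proposed method targets.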
For different variants of the experiment:
A selected confidence level $p$ was used, constant for each iteration of the experiment;
For a single iteration, the number of error signals $N$ was drawn from a specified interval, with each possible value equally probable;
For each error signal, the expanded uncertainty value was drawn from a specified interval, with each possible value associated with this signal equally probable;
The shape of the distribution of the signal instance values was each time drawn from a pool of distributions: normal, uniform, triangular, and u-shaped (sine function distribution), with each type of distribution equally likely to be drawn;
These parameters were constant for all iterations of the implemented variant of the experiment.
Upon deriving the relative error values $\delta U$ for estimating the expanded uncertainty in each experiment variant, the average, standard deviation, and expanded uncertainty values were computed for a confidence level of 95%. Table 2 showcases specific results from the experiment variants, while Figure 5 displays histograms depicting the distribution of the obtained $\delta U$ values.
Analyzing the presented results, it can be noticed that using shape factors determined on the basis of Equation (11) causes the obtained uncertainty values to be smaller than those determined according to Equation (10). In selected cases this allows more accurate results to be obtained; however, assessing the entire set of experimental results leads to the conclusion that it is more beneficial to use the factors determined in accordance with Equation (10). In measurement practice, it is a much better approach to slightly overestimate the uncertainty value than to underestimate it. Consequently, the subsequent results concern only the case of using Equation (10) and are compared to those obtained by the Monte Carlo method and those obtained assuming fulfillment of the central limit theorem conditions. Figure 6 shows selected experimental results for the indicated calculation methods.
Analyzing the results presented in Figure 6, it can be noticed that the shapes of the histograms obtained for methods (a) and (b) are almost identical. A significant difference is that a higher expected value is observed for method (a) than for method (b), meaning that the estimated values of the resultant expanded uncertainty will be higher for method (a). For method (c), a much larger spread of the instance values of the relative estimation error of the resultant expanded uncertainty is observed than for methods (a) and (b). This property is particularly visible when the analyzed uncertainty budget consists of a small number of error sources with no dominant source, i.e., when the conditions of the central limit theorem have not been met. For all analyzed methods, the obtained distributions of the relative estimation error of the resultant expanded uncertainty are asymmetric, which results directly from the features of the distributions of the component error signals, whose properties were discussed earlier in the analysis of Figure 2, Figure 3 and Figure 4.
6. Conclusions
The experiments conducted demonstrate that the proposed method for determining the resultant expanded uncertainty yields results nearly identical to those obtained through the Monte Carlo method. The method is applicable for the specified confidence level and exhibits minimal sensitivity to its conditions of use, such as the number of error signals or the range of their expanded uncertainty values. In most scenarios, the results obtained using the method prove more accurate than those derived from the approach outlined in [1]. Notably, the method boasts a straightforward application algorithm and low computational complexity. However, when an error in estimating the resultant expanded uncertainty exceeding approximately ±15% is acceptable, the method delineated in the guide [1] should be preferred.
A notable drawback of the method lies in the prerequisite of initially determining the shape coefficient values, necessitating knowledge of all probability density function shapes for the analyzed error signals. Despite this, the process is not intricate and allows for analytical, simulation-based, and measurement-based approaches. The determined coefficients can be stored in a microcontroller’s memory for subsequent estimation of coherence coefficient values, considering the current parameters of the analyzed error signals. Challenges arise when dealing with correlated error signals, where it is suggested to analyze correlated signal groups separately to determine the resultant expanded uncertainty value for subsequent consideration in the analysis.
The key achievements of the authors of this work include:
In conclusion, the employed analysis method is justified when precise determination of the resultant expanded uncertainty is crucial, irrespective of the number of error signals or the chosen confidence level. Its seamless integration within the measurement system’s software and minimal computational demands make it a practical choice. Notably, alterations in signal parameters entail no additional calculations, underscoring a key advantage of this method.
Future research could involve refining the method for estimating shape factors with greater precision than outlined in this study. Additionally, the analysis of correlated error signals, particularly when the signal count exceeds two, presents a significant area for investigation. One potential approach could involve a strategy akin to that proposed in Equation (14), where a function determining the coherence coefficient value is derived for each pair of distributions individually; another is exploring the feasibility of employing artificial neural networks for coherence coefficient estimation.
To facilitate understanding of the method's functionality, an illustrative application example has been developed for the GNU Octave program (the authors used GNU Octave version 9.2.0, Copyright (C) 1993–2024 The Octave Project Developers) [20]. This example enables users to replicate the results demonstrated in the study and gain familiarity with the method's operational algorithm. The example encompasses shape factor values for two selected confidence levels. Users can access the example on GitHub [21].