Article

Three-Stage Estimation of the Mean and Variance of the Normal Distribution with Application to an Inverse Coefficient of Variation with Computer Simulation

Ali Yousef 1,* and Hosny Hamdy 2
1 Department of Mathematics, Kuwait College of Science and Technology, Kuwait City 27235, Kuwait
2 Faculty of Management Sciences, October University for Modern Sciences and Arts, 6th October City 12566, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 831; https://doi.org/10.3390/math7090831
Submission received: 4 August 2019 / Revised: 3 September 2019 / Accepted: 5 September 2019 / Published: 8 September 2019
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)

Abstract

This paper considers two main problems sequentially. First, we estimate both the mean and the variance of the normal distribution under a unified one-decision framework using Hall's three-stage procedure: we treat a minimum risk point estimation problem for the variance under a squared-error loss function with linear sampling cost, and we construct a confidence interval for the mean with a preassigned width and coverage probability. Second, as an application, we develop Fortran codes that tackle both the point estimation and the confidence interval problems for the inverse coefficient of variation using Monte Carlo simulation. The simulation results show negative regret in the estimation of the inverse coefficient of variation, which indicates that the three-stage procedure provides better estimation than the optimal.
Mathematics Subject Classification:
62L10; 62L12; 62L15

1. Introduction

Let $\{X_i,\ i \ge 1\}$ be a sequence of independent and identically distributed (IID) random variables from a normal distribution with mean $\mu$ and variance $\sigma^2 \in \mathbb{R}^{+}$, where both $\mu$ and $\sigma^2$ are unknown. Assume further that a random sample of size $n\,(\ge 2)$ from the normal distribution becomes available; then we propose to estimate $\mu$ and $\sigma^2$ by the corresponding sample measures $\bar{X}_n$ and $S_n^2$, respectively. Over the last decades, it has been common practice to treat each problem separately, with one decision framework for each inference problem of the mean or the variance.
The objective of this paper is to combine the inference for both problems under one decision framework in order to make maximal use of the available sample information. Given predefined $\alpha$, $0 < \alpha < 1$, and $d\,(> 0)$, where $(1-\alpha)$ is the confidence coefficient and $2d$ is the fixed width of the interval, we want to construct a fixed-width $(= 2d)$ confidence interval for the mean $\mu$ whose confidence coefficient is at least the nominal value $100(1-\alpha)\%$, while at the same time using the same data to estimate the population variance $\sigma^2$ under a squared-error loss function with linear sampling cost. Hence, we combine both optimal sample sizes in one decision rule to propose the three-stage sampling decision framework.
Therefore, the optimal sample size required to construct a fixed-width confidence interval for $\mu$ whose coverage probability is at least the nominal value $100(1-\alpha)\%$ must satisfy
\[ n^*_{\mathrm{conf}} = (a/d)^{2}\,\sigma^{2}, \tag{1} \]
where $a$ is the upper $\alpha/2$ critical point of the standard normal distribution $N(0,1)$. For more details about Equation (1), see Mukhopadhyay and de Silva ([1]; chapter 6, p. 97).
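For instance, with $\alpha = 0.05$ (so $a \approx 1.96$), $\sigma = 5$, and $d = 1$, Equation (1) gives $n^*_{\mathrm{conf}} = (1.96)^{2}(25) \approx 96$, one of the design points used later in the simulations of Section 5.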

1.1. Minimum Risk Estimation

In the literature on sequential point estimation, one may consider several types of loss functions, such as the squared-error loss function, the absolute-error loss function, the linex loss function, and others. The squared-error loss function is the most commonly used because of its simplicity in mathematical computations; see, for example, DeGroot [2]. Therefore, we write the loss incurred in estimating $\sigma^2$ by the corresponding sample measure $S_n^2$ as
\[ L_n(A) = A\big(S_n^{2} - \sigma^{2}\big)^{2} + cn, \tag{2} \]
where $A > 0$ is a known constant and $c$ is the known cost per unit sample observation. We elaborate on the determination of $A$ below. Now, the risk corresponding to Equation (2) is
\[ R_n(A) = 2A(n-1)^{-1}\sigma^{4} + cn \approx 2A\sigma^{4}/n + cn. \tag{3} \]
Thus, the value of $n$ that minimizes the risk in Equation (3), obtained by setting $dR_n(A)/dn = -2A\sigma^{4}/n^{2} + c = 0$, is
\[ n \approx n^*_{\mathrm{point}} = \sqrt{2A/c}\;\sigma^{2}, \tag{4} \]
and the associated minimum risk is
\[ R_{n^*}(A) = 2c\,n^*. \tag{5} \]
The value of $n^*$ in Equation (4) is called the optimal sample size required to generate a point estimate for $\sigma^2$ under Equation (2), while Equation (5) is the minimum risk obtained if $\sigma^2$ were known.

1.2. A Unified One Decision Framework

If we want to combine the confidence interval estimation and the point estimation in one decision framework, we must take the constant $A = (1/2)(a^{4}/d^{4})c$ so that both tasks are performed by one decision rule. Careful investigation of the constant $A = (n^*/2\sigma^{4})(c\,n^*)$ provides the statistical interpretation: $c\,n^*$ is the cost of optimal sampling, while $n^*/2\sigma^{4}$ represents the optimal information, that is, the amount of information required to explore a unit of variance in order to achieve minimum risk. Thus $A$ is the cost of perfect information, contrary to what has been said in the literature, namely that it is the cost of estimation.
Therefore, we proceed to use the following optimal sample size to perform the required inference:
\[ n \approx n^* = (a^{2}/d^{2})\,\sigma^{2} = \xi\,\sigma^{2}, \qquad \xi = a^{2}/d^{2}. \tag{6} \]
Since $\sigma^2$ in Equation (6) is unknown, no fixed-sample-size procedure can estimate the mean $\mu$ independently of $\sigma^2$; see Dantzig [3]. Therefore, we resort to a triple sampling sequential procedure to achieve the goals stated above. Henceforth, we continue to use the asymptotic sample size defined in Equation (6) to propose the following triple sampling procedure, which estimates the unknown population mean $\mu$ and the unknown population variance $\sigma^2$ via estimation of $n^*$.
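To make the magnitudes concrete, the short sketch below (ours, not part of the paper's Fortran code) computes $\xi$, $n^*$, and the cost of perfect information $A$; the values of $\alpha$, $d$, $\sigma$, and $c$ are arbitrary example choices.

```python
# A minimal sketch (ours, not part of the paper's Fortran code) of the unified
# design constants; alpha = 0.05, d = 1, sigma = 5, c = 1 are example choices.
from statistics import NormalDist

alpha, d, sigma, c = 0.05, 1.0, 5.0, 1.0
a = NormalDist().inv_cdf(1 - alpha / 2)   # upper alpha/2 critical point, a ~ 1.96
xi = (a / d) ** 2                          # xi = a^2 / d^2, Equation (6)
n_star = xi * sigma ** 2                   # unified optimal sample size, n* ~ 96
A = 0.5 * (a / d) ** 4 * c                 # cost of perfect information (Section 1.2)

# The point-estimation optimum sqrt(2A/c) * sigma^2 of Equation (4) coincides with n*.
assert abs((2 * A / c) ** 0.5 * sigma ** 2 - n_star) < 1e-9
print(round(n_star, 2), round(A, 2))
```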

2. Three-Stage Estimation of the Mean and Variance

In his seminal work, Hall [4] introduced the idea of sampling in three stages to tackle several problems in sequential estimation. He combined the asymptotic characteristics of the one-by-one purely sequential sampling procedures of Anscombe [5], Robbins [6], and Chow and Robbins [7] with the operational savings made possible by the group sampling of Stein [8] and Cox [9].
From 1965 until the early 1980s, research in sequential estimation was mainly devoted to two types of sequential sampling procedures: the two-stage procedure, which provides operational savings, and the one-by-one purely sequential procedure, which achieves asymptotic efficiency. The objective was to use these methods under non-normal distributions. For brevity, see Mukhopadhyay [10], Mukhopadhyay and Hilton [11], Mukhopadhyay and Darmanto [12], Mukhopadhyay and Hamdy [13], Ghosh and Mukhopadhyay [14], Mukhopadhyay and Ekwo [15], Sinha and Mukhopadhyay [16], Zacks [17], and Khan [18]. For a complete list of references, see Ghosh, Mukhopadhyay, and Sen [19].
In the early 1980s, Hall [4,20] considered the normal distribution with an unknown finite mean and an unknown finite variance. His objective was to construct a confidence interval for the mean with a pre-assigned fixed-width and coverage probability. We will describe Hall’s three-stage procedure in Section 2.1.
Since the publication of Hall's paper, research in multistage sampling has extended Hall's results in several directions. Some authors have utilized the triple sampling technique to generate inference for other distributions; others have sought to improve the quality of inference, for example by protecting the inference against type II error probability, studying the operating characteristic curve, and/or discussing the sensitivity of triple sampling when the underlying distribution departs from normality. For more details see Mukhopadhyay [21,22,23], Mukhopadhyay et al. [24], Mukhopadhyay and Mauromoustakos [25], Hamdy and Pallotta [26], Hamdy et al. [27], Hamdy [28], Hamdy et al. [29], Lohr [30], Mukhopadhyay and Padmanabhan [31], Takada [32], Hamdy et al. [33], Hamdy [34], Al-Mahmeed and Hamdy [35], AlMahmeed et al. [36], Costanza et al. [37], Yousef et al. [38], Yousef [39], Hamdy et al. [40], and Yousef [41]. Liu [42] used Hall's results to tackle hypothesis-testing problems for the mean of the normal distribution, while Son et al. [43] used the three-stage procedure to tackle the problem of testing hypotheses concerning shifts in the normal population mean with controlled type II error probability.

2.1. Three-Stage Sampling Procedure

As the name suggests, inference in triple sampling is performed in three consecutive stages: the pilot phase, the main study phase, and the fine-tuning phase.
The Pilot Phase: In the pilot phase, we take a random sample of size $m\,(\ge 2)$ from the population, say $(X_1, \ldots, X_m)$, to initiate the sample measures $\bar{X}_m$ for the population mean $\mu$ and $S_m$ for the population standard deviation $\sigma$, where $\bar{X}_m = m^{-1}\sum_{i=1}^{m} X_i$ and $S_m^{2} = (m-1)^{-1}\sum_{i=1}^{m}\big(X_i - \bar{X}_m\big)^{2}$.
The Main Study Phase: In the main study phase, we estimate only a portion $\gamma \in (0,1)$ of $n^*$ to avoid possible oversampling; in the literature, $\gamma$ is known as the design factor. Let $[x]$ denote the largest integer $\le x$ and let $\xi$ be as defined before. We have
\[ N_1 = \max\big\{m,\ [\gamma\,\xi\,S_m^{2}] + 1\big\}. \tag{7} \]
If $m \ge N_1$, we stop sampling at this stage; otherwise, we continue by sampling an additional $N_1 - m$ observations, say $X_{m+1}, X_{m+2}, \ldots, X_{N_1}$, and we update the sample measures to $\bar{X}_{N_1}$ and $S_{N_1}$ for the population's unknown parameters $\mu$ and $\sigma$, respectively. Hence, we proceed to define the fine-tuning phase.
The Fine-Tuning Phase: In the fine-tuning phase, the decision to stop or to continue sampling is taken according to the following stopping rule:
\[ N = \max\big\{N_1,\ [\xi\,S_{N_1}^{2}] + 1\big\}. \tag{8} \]
If $N_1 \ge N$, sampling is terminated at this stage; otherwise we continue by sampling an additional $N - N_1$ observations, say $X_{N_1+1}, X_{N_1+2}, \ldots, X_N$. We then augment the previously collected $N_1$ observations with the new $N - N_1$ observations and update the sample estimates to $\bar{X}_N$ and $S_N$ for the unknown parameters $\mu$ and $\sigma$. Upon termination of the sampling process, we propose to estimate the unknown population mean $\mu$ by the corresponding triple sampling confidence interval $I_N = (\bar{X}_N - d,\ \bar{X}_N + d)$ and the unknown population variance $\sigma^2$ by the corresponding triple sampling point estimate $S_N^2$; a small simulation sketch of the procedure is given below.
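The following minimal Python sketch (ours; the paper's implementation is in Fortran) mimics the three phases using the stopping rules in Equations (7) and (8). The function name `three_stage` and all parameter values are our illustrative choices.

```python
# A minimal sketch (ours, not the authors' Fortran code) of the three-stage
# procedure of Section 2.1; mu, sigma, d, alpha, m, gamma are example choices.
import math
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(1)

def three_stage(mu, sigma, d=1.0, alpha=0.05, m=10, gamma=0.5):
    a = NormalDist().inv_cdf(1 - alpha / 2)       # upper alpha/2 critical point
    xi = (a / d) ** 2                             # Equation (6)
    x = list(rng.normal(mu, sigma, m))            # pilot phase: m observations
    # Main study phase, Equation (7): N1 = max{m, [gamma*xi*S_m^2] + 1}.
    n1 = max(m, math.floor(gamma * xi * np.var(x, ddof=1)) + 1)
    x += list(rng.normal(mu, sigma, n1 - m))      # N1 - m further observations, if any
    # Fine-tuning phase, Equation (8): N = max{N1, [xi*S_N1^2] + 1}.
    n = max(n1, math.floor(xi * np.var(x, ddof=1)) + 1)
    x += list(rng.normal(mu, sigma, n - n1))      # N - N1 further observations, if any
    x = np.asarray(x)
    return n, x.mean(), x.var(ddof=1)             # N, X-bar_N, S_N^2

N, xbar, s2 = three_stage(mu=10.0, sigma=5.0)
print(N, (xbar - 1.0, xbar + 1.0), s2)            # I_N = (X-bar_N - d, X-bar_N + d)
```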
The asymptotic results in this paper are developed under Assumption (A), set forward by Hall [20] to develop a theory for the triple sampling procedure. That is,
Assumption (A): Let $\xi\,(> 0)$ be such that $\xi(m) \to \infty$ as $m \to \infty$, $\limsup\,(m/\xi(m)) < \gamma\sigma^{2}$, and $\xi(m) = O(m^{k})$ for some $k > 1$.
Preliminaries: Recall the sample variance $S_n^{2} = (n-1)^{-1}\sum_{i=1}^{n}\big(X_i - \bar{X}_n\big)^{2}$ for all $n \ge 2$, and consider Helmert's transformation of the original normal random variables $X_1, \ldots, X_n$, which allows us to write $S_n^2$ as an average of IID random variables for all $n \ge 2$. Let $Z_i = (X_i - \mu)/\sigma$ for $i = 1, 2, \ldots, n$, and write $W_i = \big(i(i+1)\big)^{-1/2}\big\{\sum_{j=1}^{i} Z_j - i Z_{i+1}\big\}$ for $i = 1, 2, \ldots, n-1$ and $W_n = n^{-1/2}\sum_{j=1}^{n} Z_j$. The $W_i$'s are IID $N(0,1)$ for $i = 1, 2, \ldots, n$. Let $V_i = \sigma^{2} W_i^{2}$; then the random variables $V_1, V_2, \ldots, V_n$ are IID, each distributed as $\sigma^{2}\chi^{2}(1)$, which means that $\sum_{j=2}^{n} V_j \sim \sigma^{2}\chi^{2}(n-1)$. From Lemma 2 of Robbins [6], it follows that $S_n^2$ and $\bar{V}_n = (n-1)^{-1}\sum_{i=2}^{n} V_i$ are identically distributed for all $n \ge 2$; that is, $S_n^{2} \overset{D}{=} \bar{V}_n$ for all $n \ge 2$.
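As a side check, the distributional identity $S_n^{2} \overset{D}{=} \bar{V}_n$ is easy to verify numerically; the sketch below (ours) compares the first two moments of the two representations by simulation, with all settings chosen arbitrarily.

```python
# A quick numerical check (ours) of the identity S_n^2 =_D V-bar_n: the sample
# variance of n normal draws is compared in distribution with
# sigma^2 * chi^2(n-1)/(n-1) through the first two moments.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 10.0, 5.0, 15, 200_000

s2 = rng.normal(mu, sigma, (reps, n)).var(axis=1, ddof=1)  # replicates of S_n^2
vbar = sigma ** 2 * rng.chisquare(n - 1, reps) / (n - 1)   # replicates of V-bar_n

print(s2.mean(), vbar.mean())  # both close to sigma^2 = 25
print(s2.var(), vbar.var())    # both close to 2*sigma^4/(n-1), about 89.3
```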
We continue to use the representation $\bar{V}_n$ in place of $S_n^2$ for all $n \ge 2$ to develop the asymptotic theory for both the main study phase and the fine-tuning phase.

2.2. The Asymptotic Characteristics of the Main Study Phase

Under Assumption (A), we have
As $\xi \to \infty$, $P\big(N = [\xi S_{N_1}^{2}] + 1\big) = P\big(N = [\xi\bar{V}_{N_1}] + 1\big) \to 1$ almost surely, and as $m \to \infty$, $N_1/\gamma n^* \to 1$ in probability and, likewise, $N/n^* \to 1$ in probability. Moreover, from the Anscombe [5] central limit theorem, as $\xi \to \infty$, $\sqrt{N_1}\big(\bar{X}_{N_1} - \mu\big) \to N(0, \sigma^{2})$ and $\sqrt{N_1}\big(S_{N_1}^{2} - \sigma^{2}\big) \to N(0, 2\sigma^{4})$ in distribution.
From Theorem 1 of Yousef et al. [38], as $\xi \to \infty$ we have
\[ \text{(i)}\ E(\bar{X}_{N_1}) = \mu + o(\xi^{-1}); \quad \text{(ii)}\ E(\bar{X}_{N_1}^{2}) = \mu^{2} + \sigma^{2}(\gamma n^*)^{-1} + o(\xi^{-2}); \quad \text{(iii)}\ Var(\bar{X}_{N_1}) = \sigma^{2}(\gamma n^*)^{-1} + o(\xi^{-2}). \]
Theorem 1.
Under Assumption (A) and using Equation (7), we can show that for any real $k$, as $\xi \to \infty$,
\[ E\big(S_{N_1}^{2k}\big) = \sigma^{2k} + k(k-3)\,\sigma^{2k}(\gamma n^*)^{-1} + o(\xi^{-1}). \tag{9} \]
Proof. 
Since $S_{N_1}^2$ and $\bar{V}_{N_1}$ are identically distributed, we write
\[ E\big(S_{N_1}^{2k}\big) = E\big(\bar{V}_{N_1}^{\,k}\big) = E\Big(\frac{1}{N_1-1}\sum_{j=1}^{N_1-1} V_j\Big)^{k}. \]
Conditioning on the $\sigma$-field generated by $V_i\ (i = 1, 2, \ldots, m-1)$, we have
\[ E\big(\bar{V}_{N_1}^{\,k}\big) = E\bigg((N_1-1)^{-k}\,E\Big\{\Big(\sum_{j=1}^{m-1} V_j + \sum_{j=m}^{N_1-1} V_j\Big)^{k}\,\Big|\,V_1, V_2, \ldots, V_{m-1}\Big\}\bigg).\]
Using the binomial expansion, it follows that
\[ E\big(\bar{V}_{N_1}^{\,k}\big) = E\bigg((N_1-1)^{-k}\sum_{j=0}^{\infty}\lambda(k,j)\Big(\sum_{i=1}^{m-1} V_i\Big)^{k-j}E\Big\{\Big(\sum_{i=m}^{N_1-1} V_i\Big)^{j}\,\Big|\,V_1, V_2, \ldots, V_{m-1}\Big\}\bigg), \]
where $\lambda(k,j) = \prod_{t=1}^{j}(k-t+1)\big/ j!$ for $j = 1, 2, \ldots$ and $\lambda(k,0) = 1$. Conditional on the $\sigma$-field generated by $V_i\ (i = 1, 2, \ldots, m-1)$, we have $\big(\sum_{i=m}^{N_1-1} V_i/\sigma^{2}\,\big|\,V_1, \ldots, V_{m-1}\big) \sim \chi^{2}(N_1-m)$, so that $E\big\{\big(\sum_{i=m}^{N_1-1} V_i/\sigma^{2}\big)^{j}\,\big|\,V_1, \ldots, V_{m-1}\big\} = 2^{j}\,\Gamma\big(j+(N_1-m)/2\big)\big/\Gamma\big((N_1-m)/2\big)$, where $\Gamma(x) = \int_{0}^{\infty} t^{x-1}e^{-t}\,dt$, and hence $E\big\{\big(\sum_{i=m}^{N_1-1} V_i\big)^{j}\,\big|\,V_1, \ldots, V_{m-1}\big\} = (N_1-m)^{j}\sigma^{2j}\big(1+O(N_1^{-1})\big)$.
After further simplifications similar to those given in Hamdy [28], we get
\[ E\big(S_{N_1}^{2k}\big) = \sigma^{2k}\,E\Big(1 + \frac{1}{N_1-1}\sum_{i=1}^{m-1} Y_i\Big)^{k} + o(\xi^{-1}), \]
where the $Y_i = (V_i - \sigma^{2})/\sigma^{2}$ are IID with $E(Y_i) = 0$ and $Var(Y_i) = 2$.
Applying the first three terms of the infinite binomial series and taking the expectation, we get
\[ E\big(S_{N_1}^{2k}\big) = \sigma^{2k} + \sigma^{2k} k\,E\Big(\frac{1}{N_1-1}\sum_{i=1}^{m-1} Y_i\Big) + \frac{1}{2}\,\sigma^{2k} k(k-1)\,E\Big(\frac{1}{N_1-1}\sum_{i=1}^{m-1} Y_i\Big)^{2} + E\big(R(Y)\big) = \sigma^{2k} + I + II + E\big(R(Y)\big). \tag{10} \]
Here $E(R(Y)) \le M\,E\big(\frac{1}{N_1-1}\sum_{i=1}^{m-1} Y_i\big)^{3}$, where $M$ is a generic constant. Since $m-1 \le N_1-1$, we have
\[ E\big(R(Y)\big) \le M\,E\Big(\frac{1}{m-1}\sum_{i=1}^{m-1} Y_i\Big)^{3} = M\,E\big(\bar{V}_m\sigma^{-2} - 1\big)^{3} = O(m^{-2}) = o(\xi^{-1}). \]
Consider $I = \sigma^{2k} k\,E\big(\frac{1}{N_1-1}\sum_{i=1}^{m-1} Y_i\big)$, expand $(N_1-1)^{-1}$ around $\gamma n^*$, and then take the expectation:
\[ E\Big\{(N_1-1)^{-1}\sum_{i=1}^{m-1} Y_i\Big\} = 2(\gamma n^*)^{-1}E\Big(\sum_{i=1}^{m-1} Y_i\Big) - (\gamma n^*)^{-2}E\Big(N_1\sum_{i=1}^{m-1} Y_i\Big) + \frac{1}{2}\,E\Big(\rho^{-3}\sum_{i=1}^{m-1} Y_i\,(N_1-\gamma n^*)^{2}\Big), \]
where $\rho$ is a random variable between $N_1$ and $\gamma n^*$. It is not hard to show that $E\big\{\sum_{i=1}^{m-1} Y_i\,(N_1-\gamma n^*)^{2}\rho^{-3}\big\} = o(\xi^{-1})$; we omit the proof for brevity.
Substituting $Y_i = (V_i-\sigma^{2})/\sigma^{2}$, we have
\[ I = 2\sigma^{2k} k(\gamma n^*)^{-1}(m-1)\,E\big(\bar{V}_m\sigma^{-2} - 1\big) - \sigma^{2k} k\,\sigma^{-2}(\gamma n^*)^{-2}(m-1)\gamma\xi\,\big(E(\bar{V}_m^{2}) - \sigma^{2} E(\bar{V}_m)\big) + o(\xi^{-1}). \]
It follows that
\[ I = -2\,\sigma^{2k} k\,(\gamma n^*)^{-1} + o(\xi^{-1}). \tag{11} \]
Likewise, we recall the second term and expand $(N_1-1)^{-2}$ around $\gamma n^*$ to get
\[ II = \frac{1}{2}\,\sigma^{2k} k(k-1)\,E\Big(\frac{1}{N_1-1}\sum_{i=1}^{m-1} Y_i\Big)^{2} = k(k-1)\,\sigma^{2k}(\gamma n^*)^{-1} + o(\xi^{-1}). \tag{12} \]
Substituting Equations (11) and (12) into Equation (10), we get the result. The proof is complete. □
As particular cases of Theorem 1, for $k = 1/2,\ 1,\ 2$, and $3$, we have as $\xi \to \infty$
\[ \text{(i)}\ E(S_{N_1}) = \sigma - \tfrac{5}{4}\,\sigma(\gamma n^*)^{-1} + o(\xi^{-1}); \quad \text{(ii)}\ E(S_{N_1}^{2}) = \sigma^{2} - 2\sigma^{2}(\gamma n^*)^{-1} + o(\xi^{-1}); \quad \text{(iii)}\ E(S_{N_1}^{4}) = \sigma^{4} - 2\sigma^{4}(\gamma n^*)^{-1} + o(\xi^{-1}); \quad \text{(iv)}\ E(S_{N_1}^{6}) = \sigma^{6} + o(\xi^{-1}), \tag{13} \]
while from Equation (13), parts (ii) and (iii), we obtain
\[ Var\big(S_{N_1}^{2}\big) = 2\sigma^{4}(\gamma n^*)^{-1} + o(\xi^{-1}). \tag{14} \]
The following Theorem 2 gives the second-order asymptotic expansion of the moments of a real-valued, continuously differentiable function of $S_{N_1}^2$.
Theorem 2.
Let Assumption (A) hold, and let $g$ be a positive real-valued function, three times continuously differentiable in a neighborhood of $\sigma^2$, such that $\sup_{n>m}|g'''(n)| = O\big(|g'''(n^*)|\big)$. Then
\[ E\big\{g(S_{N_1}^{2})\big\} = g(\sigma^{2}) - 2\sigma^{2}(\gamma n^*)^{-1}\big\{g'(\sigma^{2}) - \tfrac{1}{2}\sigma^{2}g''(\sigma^{2})\big\} + o(\xi^{-1}). \]
Proof. 
A Taylor expansion of $g(S_{N_1}^2)$ around $\sigma^2$ gives
\[ g(S_{N_1}^{2}) = g(\sigma^{2}) + (S_{N_1}^{2}-\sigma^{2})\,g'(\sigma^{2}) + \tfrac{1}{2}(S_{N_1}^{2}-\sigma^{2})^{2}g''(\sigma^{2}) + \tfrac{1}{6}(S_{N_1}^{2}-\sigma^{2})^{3}g'''(\eta), \]
where $\eta$ is a random variable between $S_{N_1}^2$ and $\sigma^2$. Taking expectations throughout, we have
\[ E\big\{g(S_{N_1}^{2})\big\} = g(\sigma^{2}) + E(S_{N_1}^{2}-\sigma^{2})\,g'(\sigma^{2}) + \tfrac{1}{2}E(S_{N_1}^{2}-\sigma^{2})^{2}g''(\sigma^{2}) + \tfrac{1}{6}E\big((S_{N_1}^{2}-\sigma^{2})^{3}g'''(\eta)\big). \]
From Equation (13), part (ii), and Equation (14), we have
\[ E\big\{g(S_{N_1}^{2})\big\} = g(\sigma^{2}) - 2\sigma^{2}(\gamma n^*)^{-1}\big\{g'(\sigma^{2}) - (\sigma^{2}/2)\,g''(\sigma^{2})\big\} + \tfrac{1}{6}E\big((S_{N_1}^{2}-\sigma^{2})^{3}g'''(\eta)\big). \]
However, $\tfrac{1}{6}\,E\big((S_{N_1}^{2}-\sigma^{2})^{3}g'''(\eta)\big) \le \tfrac{1}{6}\,E\big|S_{N_1}^{2}-\sigma^{2}\big|^{3}\,\sup_{n>m}|g'''(n)| = o(\xi^{-1})$ by Equation (13), part (iv), and the assumption that $g'''(\cdot)$ is bounded. The proof is complete. □
Corollary 1.
Let Assumption (A) hold, and let $g$ be a positive real-valued function, three times continuously differentiable in a neighborhood of $\sigma$, such that $\sup_{n>m}|g'''(n)| = O\big(|g'''(n^*)|\big)$. Then
\[ E\big\{g(S_{N_1})\big\} = g(\sigma) + \sigma(4\gamma n^*)^{-1}\big\{\sigma g''(\sigma) - 5g'(\sigma)\big\} + o(\xi^{-1}). \]
Proof. 
First, a Taylor series expansion of $g(\cdot)$ around $\sigma$ gives
\[ g(S_{N_1}) = g(\sigma) + g'(\sigma)(S_{N_1}-\sigma) + \tfrac{1}{2}g''(\sigma)(S_{N_1}-\sigma)^{2} + \tfrac{1}{6}g'''(\eta)(S_{N_1}-\sigma)^{3}. \]
Taking expectations throughout, we have
\[ E\big\{g(S_{N_1})\big\} = g(\sigma) + g'(\sigma)\,E(S_{N_1}-\sigma) + \tfrac{1}{2}g''(\sigma)\,E(S_{N_1}-\sigma)^{2} + \tfrac{1}{6}E\big(g'''(\eta)(S_{N_1}-\sigma)^{3}\big), \]
and the result follows by using Equation (13), parts (i), (ii), and (iii), and the fact that $g'''(\cdot)$ is bounded. The proof is complete. □
As special cases of Corollary 1, take $g(t) = t^{-1}$ and $g(t) = t^{-2}$; we obtain
\[ \text{(i)}\ E(S_{N_1}^{-1}) = \sigma^{-1} + 7\sigma^{-1}(4\gamma n^*)^{-1} + o(\xi^{-1}); \quad \text{(ii)}\ E(S_{N_1}^{-2}) = \sigma^{-2} + 4\sigma^{-2}(\gamma n^*)^{-1} + o(\xi^{-1}); \quad \text{(iii)}\ Var(S_{N_1}^{-1}) = \sigma^{-2}(2\gamma n^*)^{-1} + o(\xi^{-1}). \]
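Expansions of this type for negative powers of the sample standard deviation are precisely what one needs when working with the inverse coefficient of variation estimator $\hat{\theta} = \bar{X} S^{-1}$ considered in Section 5.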
This completes our first assertion regarding the asymptotic characteristics of the main study phase. In the following section, we derive the asymptotic characteristics of the final random sample size.

2.3. The Asymptotic Characteristics of the Fine-Tuning Phase

The asymptotic characteristics of the random variable $N$ are given in the following theorem.
Theorem 3.
Let Assumption (A) hold and use Equation (8). Let $h$ be a positive real-valued function, three times continuously differentiable in a neighborhood of $n^*$, such that $\sup_{n>m}|h'''(n)| = O\big(|h'''(n^*)|\big)$. Then, as $\xi \to \infty$,
\[ E\{h(N)\} = h(n^*) + \big(\tfrac{1}{2} - 2\gamma^{-1}\big)h'(n^*) + \gamma^{-1}n^*\,h''(n^*) + O\big(\xi^{2}h'''(\xi)\big). \]
Proof. 
We write $N = [\xi S_{N_1}^{2}] + 1$, except possibly on a set $\phi = (N_1 < m) \cup (\xi\bar{V}_{N_1} < \gamma\xi\bar{V}_m + 1)$ of measure zero. Therefore, for real $r$, we have
\[ E(N^{r}) = E\big([\xi S_{N_1}^{2}] + 1\big)^{r} + \int_{\phi} N^{r}\,dP = E\big(\xi S_{N_1}^{2} + \beta_{N_1}\big)^{r} + o(\xi^{r-1}), \]
provided that the $r$th moment exists, where $\beta_{N_1} = 1 - \big\{\xi S_{N_1}^{2} - [\xi S_{N_1}^{2}]\big\}$ and $[x]$ is as defined before. From Hall [4], as $\xi \to \infty$, $\beta_{N_1}$ is asymptotically uniformly distributed on $(0,1)$.
Now, for $r = 1$, we have
\[ E(N) = E\big(\xi S_{N_1}^{2}\big) + E(\beta_{N_1}) + o(1) = \xi\big(\sigma^{2} - 2\sigma^{2}(\gamma n^*)^{-1} + o(\xi^{-1})\big) + \tfrac{1}{2} + o(1) = \xi\sigma^{2} - 2\gamma^{-1} + \tfrac{1}{2} + o(1) = n^* - 2\gamma^{-1} + \tfrac{1}{2} + o(1). \tag{16} \]
Likewise, for $r = 2$, we have
\[ E(N - n^*)^{2} = 2\gamma^{-1}n^* + o(\xi), \tag{17} \]
and for $r = 3$,
\[ E\big|N - n^*\big|^{3} = O(\xi^{2}). \tag{18} \]
We now turn to the proof of Theorem 3. First, expanding $h(N)$ in a Taylor series, we write
\[ E\{h(N)\} = h(n^*) + E(N - n^*)\,h'(n^*) + \tfrac{1}{2}E(N - n^*)^{2}h''(n^*) + \tfrac{1}{6}E\big((N - n^*)^{3}h'''(\nu)\big), \]
where $\nu$ is a random variable between $N$ and $n^*$. Using Equations (16)–(18), we have
\[ E\{h(N)\} = h(n^*) + \big(\tfrac{1}{2} - 2\gamma^{-1}\big)h'(n^*) + \gamma^{-1}n^*\,h''(n^*) + \tfrac{1}{6}E\big((N - n^*)^{3}h'''(\nu)\big). \]
However, $\tfrac{1}{6}\big|E\big((N - n^*)^{3}h'''(\nu)\big)\big| \le \tfrac{1}{6}E\big|N - n^*\big|^{3}\sup_{n>m}|h'''(n)| = O\big(\xi^{2}h'''(\xi)\big)$ by Equation (18), since $h'''(\cdot)$ is bounded in the above sense. The proof is complete. □
Let $N$ be as defined in Equation (8) and suppose Assumption (A) holds. For the asymptotic characteristics of the fine-tuning phase, as $\xi \to \infty$ we have (see Yousef et al. [38])
\[ E(\bar{X}_N) = \mu + o(\xi^{-1}), \qquad E(\bar{X}_N^{2}) = \mu^{2} + \sigma^{2}/n^* + o(\xi^{-1}), \qquad Var(\bar{X}_N) = \sigma^{2}/n^* + o(\xi^{-2}). \]
Theorem 4.
Under Assumption (A) and using Equation (8), we can show that for any real $k$, as $\xi \to \infty$,
\[ E\big(S_N^{2k}\big) = \sigma^{2k} + \sigma^{2k}\,n^{*-1}k(\gamma k - \gamma - 2) + o(\xi^{-1}). \tag{19} \]
Proof. 
Write $E(S_N^{2k}) = E(\bar{V}_N^{\,k})$. Conditioning on the $\sigma$-field generated by $V_i\ (i = 1, 2, \ldots, N_1-1)$, we have
\[ E\big(\bar{V}_N^{\,k}\big) = E\bigg((N-1)^{-k}\,E\Big\{\Big(\sum_{i=1}^{N_1-1} V_i + \sum_{i=N_1}^{N-1} V_i\Big)^{k}\,\Big|\,V_1, \ldots, V_{N_1-1}\Big\}\bigg). \]
Writing the binomial expression as an infinite series, we get
\[ E\big(\bar{V}_N^{\,k}\big) = E\bigg((N-1)^{-k}\sum_{j=0}^{\infty}\lambda(k,j)\Big(\sum_{i=1}^{N_1-1} V_i\Big)^{k-j}E\Big\{\Big(\sum_{i=N_1}^{N-1} V_i\Big)^{j}\,\Big|\,V_1, \ldots, V_{N_1-1}\Big\}\bigg), \]
where $\lambda(k,j) = \prod_{t=1}^{j}(k-t+1)\big/ j!$ for $j = 1, 2, \ldots$ and $\lambda(k,0) = 1$.
Conditional on the $\sigma$-field generated by $V_i\ (i = 1, 2, \ldots, N_1-1)$, the random sum $\sum_{i=N_1}^{N-1} V_i$ follows a $\sigma^{2}\chi^{2}$ distribution with $N - N_1$ degrees of freedom. Therefore,
\[ E\Big\{\Big(\sum_{i=N_1}^{N-1} V_i\Big)^{j}\,\Big|\,V_1, \ldots, V_{N_1-1}\Big\} = \sigma^{2j}(N - N_1)^{j}\big(1 + O(N^{-1})\big). \]
Consequently, this yields
\[ E\big(S_N^{2k}\big) = \sigma^{2k}\,E\Big(1 + \frac{1}{N-1}\sum_{i=1}^{N_1-1} Y_i\Big)^{k} + o(\xi^{-1}). \]
Consider the first three terms in the expansion together with the remainder term $R(\xi)$:
\[ E\big(S_N^{2k}\big) = \sigma^{2k} + \sigma^{2k}k\,E\Big(\frac{1}{N-1}\sum_{i=1}^{N_1-1} Y_i\Big) + \frac{1}{2}\,\sigma^{2k}k(k-1)\,E\Big(\frac{1}{N-1}\sum_{i=1}^{N_1-1} Y_i\Big)^{2} + E\big(R(\xi)\big), \tag{20} \]
where $E(R(\xi)) = o(\xi^{-1})$. To evaluate the second term $\sigma^{2k}k\,E\big(\frac{1}{N-1}\sum_{i=1}^{N_1-1} Y_i\big)$, first expand $(N-1)^{-1}$ around $n^*$ in a Taylor series, $(N-1)^{-1} = n^{*-1} - (N-n^*)\,n^{*-2} + \tfrac{1}{2}(N-n^*)^{2}\nu^{-3}$, where $\nu$ is a random variable lying between $N$ and $n^*$. Furthermore,
\[ (N-1)^{-1} = n^{*-1} - \xi\big(\bar{V}_{N_1} - \sigma^{2}\big)n^{*-2} + \tfrac{1}{2}\,\xi^{2}\big(\bar{V}_{N_1} - \sigma^{2}\big)^{2}\nu^{-3} = n^{*-1} - \Big(\frac{1}{N_1-1}\sum_{i=1}^{N_1-1} Y_i\Big)n^{*-1} + \frac{1}{2}\Big(\frac{1}{N_1-1}\sum_{i=1}^{N_1-1} Y_i\Big)^{2} n^{*2}\,\nu^{-3}, \]
where we have used the fact that $N \approx \xi\bar{V}_{N_1}$. Thus,
\[ \sigma^{2k}k\,E\Big(\frac{1}{N-1}\sum_{i=1}^{N_1-1} Y_i\Big) = n^{*-1}\sigma^{2k}k\,E\Big(\sum_{i=1}^{N_1-1} Y_i\Big) - n^{*-1}\sigma^{2k}k\,E\Big\{(N_1-1)^{-1}\Big(\sum_{i=1}^{N_1-1} Y_i\Big)^{2}\Big\} + \frac{1}{2}\,n^{*2}\sigma^{2k}k\,E\Big\{\nu^{-3}(N_1-1)^{-2}\Big(\sum_{i=1}^{N_1-1} Y_i\Big)^{3}\Big\}. \tag{21} \]
The first term in Equation (21), $n^{*-1}\sigma^{2k}k\,E\big(\sum_{i=1}^{N_1-1} Y_i\big)$, equals zero by Wald's first equation [44].
For the second term in Equation (21), $n^{*-1}\sigma^{2k}k\,E\big\{(N_1-1)^{-1}\big(\sum_{i=1}^{N_1-1} Y_i\big)^{2}\big\}$, conditioning on the $\sigma$-field generated by $V_i\ (i = 1, 2, \ldots, m-1)$ we have
\[ n^{*-1}\sigma^{2k}k\,E\Big\{(N_1-1)^{-1}\Big(\sum_{i=1}^{N_1-1} Y_i\Big)^{2}\Big\} = \sigma^{2k}k\,n^{*-1}\,E\Big\{(N_1-1)^{-1}E\Big(\Big(\sum_{i=1}^{m-1} Y_i + \sum_{i=m}^{N_1-1} Y_i\Big)^{2}\,\Big|\,V_1, V_2, \ldots, V_{m-1}\Big)\Big\}. \]
Expanding the binomial term, taking the expectation conditional on $V_i\ (i = 1, 2, \ldots, m-1)$, and then expanding $(N_1-1)^{-1}$ in a Taylor series, we obtain
\[ n^{*-1}\sigma^{2k}k\,E\Big\{(N_1-1)^{-1}\Big(\sum_{i=1}^{N_1-1} Y_i\Big)^{2}\Big\} = 2k\,\sigma^{2k}n^{*-1} + o(\xi^{-1}). \]
Now recall the third term, $\frac{1}{2}\sigma^{2k}k(k-1)\,E\big(\frac{1}{N-1}\sum_{i=1}^{N_1-1} Y_i\big)^{2}$, in Equation (20); expanding $(N-1)^{-2}$ in a Taylor series around $n^*$ and applying Wald's second equation [44], we get
\[ \frac{1}{2}\,\sigma^{2k}k(k-1)\,E\Big(\frac{1}{N-1}\sum_{i=1}^{N_1-1} Y_i\Big)^{2} = \gamma k(k-1)\,\sigma^{2k}n^{*-1} + o(\xi^{-1}). \]
Finally, recall the remainder term in Equation (21) and consider the following two cases.
Case 1: if $\nu \ge n^*$, then $\nu^{-3} \le n^{*-3}$ and
\[ \frac{1}{2}\,n^{*2}\sigma^{2k}k\,E\Big\{\nu^{-3}(N_1-1)^{-2}\Big(\sum_{i=1}^{N_1-1} Y_i\Big)^{3}\Big\} \le \frac{1}{2}\,n^{*-1}\sigma^{2k}k\,m^{-2}\,M\,E(N_1-1) = o(\xi^{-1}) \]
as $m \to \infty$, since $m \le N_1$ and Assumption (A) holds.
Case 2: if $\nu \le n^*$, then $\nu \ge N \ge N_1 \ge m$, so that $\nu^{-1} \le m^{-1}$, and it follows that
\[ \frac{1}{2}\,n^{*2}\sigma^{2k}k\,E\Big\{\nu^{-3}(N_1-1)^{-2}\Big(\sum_{i=1}^{N_1-1} Y_i\Big)^{3}\Big\} \le \frac{1}{2}\,n^{*2}\sigma^{2k}k\,m^{-5}\,M\,E(N_1-1) = o(\xi^{-1}) \]
as $m \to \infty$, by Assumption (A), where $M$ is again a generic constant. Therefore,
\[ E\big(S_N^{2k}\big) = \sigma^{2k} - 2k\,\sigma^{2k}n^{*-1} + \gamma k(k-1)\,\sigma^{2k}n^{*-1} + o(\xi^{-1}). \]
The proof is complete. □
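In particular, taking $k = 1$ and $k = 2$ in Theorem 4 gives $E(S_N^{2}) = \sigma^{2} - 2\sigma^{2}n^{*-1} + o(\xi^{-1})$ and $E(S_N^{4}) = \sigma^{4} + 2(\gamma - 2)\sigma^{4}n^{*-1} + o(\xi^{-1})$; these are the two moments used in the risk computation of Section 4.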

3. Three-Stage Coverage Probability of the Mean

The coverage probability of the interval $I_N = (\bar{X}_N - d,\ \bar{X}_N + d)$ is
\[ P(\mu \in I_N) = P\big(|\bar{X}_N - \mu| \le d\big) = \sum_{n=m}^{\infty} P\big(|\bar{X}_N - \mu| \le d,\ N = n\big) = \sum_{n=m}^{\infty} P\big(|\bar{X}_N - \mu| \le d\,\big|\,N = n\big)\,P(N = n). \]
Since $N$ is a function of $S_{N_1}^2$ only, and since $\bar{X}_n$ and $S_{N_1}^2$ are independent for all $n = m, m+1, m+2, \ldots$ under the normal distribution, $\bar{X}_N$ and the events $\{N = n\}$, $n = m, m+1, m+2, \ldots$, are independent. It follows that
\[ P(\mu \in I_N) = \sum_{n=m}^{\infty} P\big(|\bar{X}_n - \mu| \le d\big)\,P(N = n) = E_N\big(2\Phi\big(d\sqrt{N}/\sigma\big) - 1\big), \]
where $\Phi(u) = (2\pi)^{-1/2}\int_{-\infty}^{u} e^{-t^{2}/2}\,dt$. Recalling Theorem 3 with $h(n) = 2\Phi(d\sqrt{n}/\sigma) - 1$, it follows that
\[ E_N\big(2\Phi(d\sqrt{N}/\sigma) - 1\big) = \big(2\Phi(a) - 1\big) + \big(\tfrac{1}{2} - 2\gamma^{-1}\big)h'(n^*) + \gamma^{-1}n^*\,h''(n^*) + O\big(\xi^{2}h'''(\xi)\big) = (1-\alpha) + \frac{a}{n^*}\,\Phi'(a)\big(\tfrac{1}{2} - 2\gamma^{-1}\big) - \frac{a}{2n^*}\,\Phi'(a)\big(1 + a^{2}\big)\gamma^{-1} + O(d^{2}) = (1-\alpha) - \frac{a}{2\gamma n^*}\,\Phi'(a)\big(5 - \gamma + a^{2}\big) + o(d^{2}) \]
as $\xi \to \infty$. The quantity $(2\gamma)^{-1}(5 - \gamma + a^{2})$ is known as the cost of ignorance, that is, the cost of not knowing $\sigma^{2}$ (see Simons [45] for details).
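As a rough numerical check (ours, not from the paper): for $\gamma = 0.5$ and $\alpha = 0.05$ we have $a \approx 1.96$ and $\Phi'(a) \approx 0.0584$, so $5 - \gamma + a^{2} \approx 8.34$, and at $n^* = 96$ the correction term is roughly $(1.96)(0.0584)(8.34)/96 \approx 0.01$. The predicted coverage of about $0.94$ agrees reasonably well with the simulated value $0.933$ in Table 1.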

4. The Asymptotic Regret Incurred in Estimating σ 2

Theorem 5.
The risk associated with Equation (2), as $m \to \infty$, is given by
\[ R_N(A) = c\,n^*(1+\gamma) + c(\gamma - 4)(2\gamma)^{-1} + o(1). \]
Moreover, the asymptotic regret is
\[ \omega(d) = c\,n^*(\gamma - 1) + c(\gamma - 4)(2\gamma)^{-1} + o(1). \]
Proof. 
Recall the squared-error loss function given in Equation (2) and take expectations throughout:
\[ R_N(A) = E\big(L_N(A)\big) = \tfrac{1}{2}\big(c\,n^{*2}/\sigma^{4}\big)\,E\big(S_N^{2} - \sigma^{2}\big)^{2} + c\,E(N). \]
By using Equation (16) and Theorem 4 with $k = 1$ and $k = 2$, we have
\[ R_N(A) = c\,n^*(1+\gamma) + c(\gamma - 4)(2\gamma)^{-1} + o(1), \]
while the asymptotic regret of the triple sampling point estimation of $\sigma^2$ under Equation (2) is
\[ \omega(d) = E\big(L_N(A)\big) - E\big(L_{n^*}(A)\big) = c\,n^*(1+\gamma) + c(\gamma - 4)(2\gamma)^{-1} - 2c\,n^* + o(1) = c\,n^*(\gamma - 1) + c(\gamma - 4)(2\gamma)^{-1} + o(1). \]
The proof is complete. □
Clearly, for zero cost we obtain zero regret, while for a nonzero cost we obtain negative regret for all $0 < \gamma < 1$, since $c\,n^*(\gamma - 1) < 0$. This means that the triple sampling procedure provides better estimates than the optimal (see Martinsek [46]).

5. Simulation Results

Since the foregoing results are asymptotic, it is worth examining the performance of the estimates for moderate sample sizes. Microsoft Developer Studio was used to run FORTRAN codes implementing Equations (7) and (8). A series of 50,000 replications was generated from a normal distribution with different values of $\mu$ and $\sigma^2$. The optimal sample sizes were chosen to represent small, medium, and large sample sizes: $n^* = 24$, 43, 61, 76, 96, 125, 171, 246, and 500, with $\gamma = 0.5$ as recommended by Hall [4,20]. For brevity, we report the case $m = 10$.

5.1. The Mean and the Variance of the Normal Distribution

We estimate the optimal final sample size and its standard error, the mean and its standard error, the coverage probability of the mean, the variance and its standard error, and the asymptotic regret of using the sample variance instead of the population variance. For constructing a fixed-width confidence interval for the mean, we take $\alpha = 0.05$ ($a = 1.96$). In each table, we report $\bar{N}$ as an estimate of $n^*$, $S(\bar{N})$ as the standard error of $\bar{N}$, and $\hat{\mu}$ as an estimate of $\mu$ with standard error $S(\hat{\mu})$. The estimated coverage probability is $1-\hat{\alpha}$, while the estimated asymptotic regret is $\hat{\omega}$.
The simulation proceeds as follows. Fix $\gamma$, $\alpha$, and $n^*$ as in Equation (6).
First: For the $i$th sample generated from the normal distribution, take a pilot sample of size $m$, that is, $(X_{1,i}, X_{2,i}, \ldots, X_{m,i})$.
Second: Compute the sample mean $\bar{X}_i$ and the sample variance $S_i^2$.
Third: Apply Equations (7) and (8) to determine the stopping sample size for this iteration, whether sampling stops at the first or the second stage; call it $N_i^*$.
Before the fourth step, recall that the inverse coefficient of variation is the ratio of the population mean to the population standard deviation, $\theta = \mu/\sigma$, $\theta \in \mathbb{R}$ (it has no singularity point over the entire real line). Given a random sample of size $n\,(\ge 2)$ from the normal distribution, we propose to estimate $\theta$ by $\hat{\theta}_n = \bar{X}_n S_n^{-1}$. It is a dimensionless quantity, which makes comparisons across several populations with different units of measurement meaningful. In practical applications, the inverse coefficient of variation equals the signal-to-noise ratio, which measures how much a signal has been corrupted by noise (see McGibney and Smith [47]).
Fourth: Record the resulting sample size, sample mean, sample standard deviation, and estimated inverse coefficient of variation $(N_i^*, \bar{X}_i^*, S_i^*, \hat{\theta}_i)$ for $i = 1, 2, \ldots, k$, where $k = 50{,}000$.
Hence, for each experimental combination, we have four vectors of size k as follows:
\[ \big(N_1^*, N_2^*, \ldots, N_k^*\big), \quad \big(\bar{X}_1^*, \bar{X}_2^*, \ldots, \bar{X}_k^*\big), \quad \big(S_1^{2*}, S_2^{2*}, \ldots, S_k^{2*}\big), \quad \big(\hat{\theta}_1^*, \hat{\theta}_2^*, \ldots, \hat{\theta}_k^*\big). \]
Let $\bar{N} = k^{-1}\sum_{i=1}^{k} N_i^*$, $\hat{\mu} = \bar{\bar{X}} = k^{-1}\sum_{i=1}^{k}\bar{X}_i^*$, $\hat{\sigma}^2 = \bar{S}^2 = k^{-1}\sum_{i=1}^{k} S_i^{2*}$, and $\bar{\hat{\theta}} = k^{-1}\sum_{i=1}^{k}\hat{\theta}_i^*$, where $\bar{N}$, $\bar{\bar{X}}$, $\bar{S}^2$, and $\bar{\hat{\theta}}$ are, respectively, the estimated mean sample size, the estimated mean of the population mean, the estimated mean of the sample variance, and the estimated mean of the inverse coefficient of variation across replicates. The standard errors are, respectively,
\[ S(\bar{N}) = \Big\{\frac{1}{k(k-1)}\sum_{i=1}^{k}\big(N_i^* - \bar{N}\big)^{2}\Big\}^{1/2}, \quad S(\hat{\mu}) = \Big\{\frac{1}{k(k-1)}\sum_{i=1}^{k}\big(\bar{X}_i^* - \bar{\bar{X}}\big)^{2}\Big\}^{1/2}, \quad S(\bar{S}^2) = \Big\{\frac{1}{k(k-1)}\sum_{i=1}^{k}\big(S_i^{2*} - \bar{S}^2\big)^{2}\Big\}^{1/2}, \quad S(\bar{\hat{\theta}}) = \Big\{\frac{1}{k(k-1)}\sum_{i=1}^{k}\big(\hat{\theta}_i^* - \bar{\hat{\theta}}\big)^{2}\Big\}^{1/2}. \]
Fifth: Compute the simulated regret $\hat{\omega}(A) = A\,k^{-1}\sum_{i=1}^{k}\big(S_i^{2*} - \sigma^{2}\big)^{2} + c\bar{N} - R_{n^*}$, with $A = (a/d)^{4}c$.
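The following condensed sketch (ours, not the authors' Fortran code) mirrors steps First through Fifth, reusing the hypothetical `three_stage` function from the sketch in Section 2.1; it assumes $c = 1$ and backs out $d$ from a target $n^*$.

```python
# A condensed sketch (ours) of steps First-Fifth; it reuses three_stage from
# the sketch in Section 2.1, and all numerical choices are example values.
import numpy as np
from statistics import NormalDist

def simulate(mu, sigma, n_star, k=50_000, m=10, gamma=0.5, alpha=0.05, c=1.0):
    a = NormalDist().inv_cdf(1 - alpha / 2)
    d = a * sigma / np.sqrt(n_star)          # so that n* = (a/d)^2 * sigma^2
    A = (a / d) ** 4 * c                     # constant used in the simulated regret
    N = np.empty(k); xbar = np.empty(k); s2 = np.empty(k)
    for i in range(k):                       # steps First-Fourth
        N[i], xbar[i], s2[i] = three_stage(mu, sigma, d=d, m=m, gamma=gamma)
    theta_hat = xbar / np.sqrt(s2)           # inverse coefficient of variation
    coverage = np.mean(np.abs(xbar - mu) <= d)
    # Step Fifth: simulated regret, with R_{n*} = 2 c n* from Equation (5).
    regret = A * np.mean((s2 - sigma ** 2) ** 2) + c * N.mean() - 2 * c * n_star
    return N.mean(), xbar.mean(), s2.mean(), theta_hat.mean(), coverage, regret

print(simulate(mu=10.0, sigma=5.0, n_star=96.0))
```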
Table 1 and Table 2 below show the performance of the estimates under m = 10 and γ = 0.5 .
Regarding the final random sample size $N$: as $n^*$ increases, $\bar{N}$ stays below $n^*$ (early stopping), its standard error increases, and $\bar{N}/n^* \to 1$. Also, as $n^*$ increases, $\hat{\mu} \to \mu$ and $\hat{\sigma} \to \sigma$ with decreasing standard errors. Regarding the coverage probability, the three-stage procedure under the rules in Equations (7) and (8) yields coverage probabilities that are always below the desired nominal value, attaining it only asymptotically. Regarding the estimated asymptotic regret $\hat{\omega}$, we obtain negative regret, which agrees with the result of Theorem 5.

5.2. The Inverse Coefficient of Variation

As an application, we use the three-stage estimation of both the mean and the variance to estimate the inverse coefficient of variation $\theta$, its standard error $S(\hat{\theta})$, the coverage probability for $\theta$, and the asymptotic regret. To estimate $\theta$, we perform the previous steps; the simulated regret under a squared-error loss function with linear sampling cost is
\[ \hat{w}(A) = A\,k^{-1}\sum_{i=1}^{k}\big(\hat{\theta}_i - \bar{\hat{\theta}}\big)^{2} + c\bar{N} - R_{n^*}. \]
Table 3 below shows the performance of the procedure for estimating $\theta$. As $n^*$ increases, $\hat{\theta}/\theta \to 1$ with decreasing standard errors. Regarding the coverage probability of $\theta$, we notice that $P\big(|\hat{\theta}_N - \theta| \le d\big) \ge 0.95$ for all $\theta$, which means that the procedure attains exact consistency. Regarding the asymptotic regret, as $n^*$ increases the regret decreases, with negative values throughout; that is, the three-stage procedure does better than the optimal.

6. Conclusions

We used a three-stage procedure to tackle the point estimation problem for the variance while estimating the mean by a confidence interval with preassigned width and coverage probability. We used one unified stopping rule for this estimation and applied the results to develop both point and interval estimation for the inverse coefficient of variation. Monte Carlo simulations were performed to investigate the performance of all estimators. We conclude that estimating the inverse coefficient of variation through the mean and variance yields good results, with negative regret. For an application in engineering reliability, see Ghosh, Mukhopadhyay, and Sen ([19]; chapter 1, p. 11); for applications to real-world problems, see Mukhopadhyay, Datta, and Chattopadhyay [48].

Author Contributions

Conceptualization, A.Y.; methodology, A.Y. and H.H.; software, A.Y.; validation, A.Y. and H.H.; formal analysis, A.Y.; investigation, A.Y. and H.H.; resources, A.Y.; data curation, A.Y. and H.H.; writing—original draft preparation, A.Y.; writing—review and editing, A.Y.; visualization, A.Y.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mukhopadhyay, N.; de Silva, B. Sequential Methods and Their Applications; CRC: New York, NY, USA, 2009. [Google Scholar]
  2. DeGroot, M.H. Optimal Statistical Decisions; McGraw-Hill: New York, NY, USA, 1970. [Google Scholar]
  3. Dantzig, G.B. On the non-existence of tests of “Student’s” hypothesis having power functions independent of σ. Ann. Math. Stat. 1940, 11, 186–192. [Google Scholar] [CrossRef]
  4. Hall, P. Asymptotic theory of triple sampling for sequential estimation of a mean. Ann. Stat. 1981, 9, 1229–1238. [Google Scholar] [CrossRef]
  5. Anscombe, F.J. Sequential estimation. J. Roy. Stat. Soc. 1953, 15, 1–21. [Google Scholar] [CrossRef]
  6. Robbins, H. Sequential Estimation of the Mean of a Normal Population. Probability and Statistics (Harald Cramer Volume); Almquist and Wiksell: Uppsala, Sweden, 1959; pp. 235–245. [Google Scholar]
  7. Chow, Y.S.; Robbins, H. On the asymptotic theory of fixed width sequential confidence intervals for the mean. Ann. Math. Stat. 1965, 36, 1203–1212. [Google Scholar] [CrossRef]
  8. Stein, C. A two-sample test for a linear hypothesis whose power is independent of the variance. Ann. Math. Stat. 1945, 16, 243–258. [Google Scholar] [CrossRef]
  9. Cox, D.R. Estimation by double sampling. Biometrika 1952, 39, 217–227. [Google Scholar] [CrossRef]
  10. Mukhopadhyay, N. Sequential estimation of a location parameter in exponential distributions. Calcutta Stat. Assoc. Bull. 1974, 23, 85–95. [Google Scholar] [CrossRef]
  11. Mukhopadhyay, N.; Hilton, G.F. Two-stage and sequential procedures for estimating the location parameter of a negative exponential distribution. S. Afr. Stat. J. 1986, 20, 117–136. [Google Scholar]
  12. Mukhopadhyay, N.; Darmanto, S. Sequential estimation of the difference of means of two negative exponential populations. Seq. Anal. 1988, 7, 165–190. [Google Scholar] [CrossRef]
  13. Mukhopadhyay, N.; Hamdy, H.I. On estimating the difference of location parameters of two negative exponential distributions. Can. J. Stat. 1984, 12, 67–76. [Google Scholar] [CrossRef]
  14. Ghosh, M.; Mukhopadhyay, N. Sequential point estimation of the parameter of a rectangular distribution. Calcutta Stat. Assoc. Bull. 1975, 24, 117–122. [Google Scholar] [CrossRef]
  15. Mukhopadhyay, N.; Ekwo, M.E. Sequential estimation problems for the scale parameter of a Pareto distribution. Scand. Actuar. J. 1987, 83–103. [Google Scholar] [CrossRef]
  16. Sinha, B.K.; Mukhopadhyay, N. Sequential estimation of a bivariate normal mean vector. Sankhya Ser. B 1976, 38, 219–230. [Google Scholar]
  17. Zacks, S. Sequential estimation of the mean of a lognormal distribution having a prescribed proportional closeness. Ann. Math. Stat. 1966, 37, 1688–1696. [Google Scholar] [CrossRef]
  18. Khan, R.A. Sequential estimation of the mean vector of a multivariate normal distribution. Indian Stat. Inst. 1968, 30, 331–334. [Google Scholar]
  19. Ghosh, M.; Mukhopadhyay, N.; Sen, P.K. Sequential Estimation; Wiley: New York, NY, USA, 1997. [Google Scholar]
  20. Hall, P. Sequential estimation saving sampling operations. J. Roy. Stat. Soc. B 1983, 45, 1229–1238. [Google Scholar] [CrossRef]
  21. Mukhopadhyay, N. A note on three-stage and sequential point estimation procedures for a normal mean. Seq. Anal. 1985, 4, 311–319. [Google Scholar] [CrossRef]
  22. Mukhopadhyay, N. Sequential estimation problems for negative exponential populations. Commun. Stat. Theory Methods A 1988, 17, 2471–2506. [Google Scholar] [CrossRef]
  23. Mukhopadhyay, N. Some properties of a three-stage procedure with applications in sequential analysis. Indian J. Stat Ser. A 1990, 52, 218–231. [Google Scholar]
  24. Mukhopadhyay, N.; Hamdy, H.I.; Al Mahmeed, M.; Costanza, M.C. Three-stage point estimation procedures for a normal mean. Seq. Anal. 1987, 6, 21–36. [Google Scholar] [CrossRef]
  25. Mukhopadhyay, N.; Mauromoustakos, A. Three-stage estimation procedures of the negative exponential distribution. Metrika 1987, 34, 83–93. [Google Scholar] [CrossRef]
  26. Hamdy, H.I.; Pallotta, W.J. Triple sampling procedure for estimating the scale parameter of Pareto distribution. Commun. Stat. Theory Methods 1987, 16, 2155–2164. [Google Scholar] [CrossRef]
  27. Hamdy, H.I.; Mukhopadhyay, N.; Costanza, M.C.; Son, M.S. Triple stage point estimation for the exponential location parameter. Ann. Inst. Stat. Math. 1988, 40, 785–797. [Google Scholar] [CrossRef]
  28. Hamdy, H.I. Remarks on the asymptotic theory of triple stage estimation of the normal mean. Scand. J. Stat. 1988, 15, 303–310. [Google Scholar]
  29. Hamdy, H.I.; AlMahmeed, M.; Nigm, A.; Son, M.S. Three-stage estimation for the exponential location parameters. Metron 1989, 47, 279–294. [Google Scholar]
  30. Lohr, S.L. Accurate multivariate estimation using triple sampling. Ann. Stat. 1990, 18, 1615–1633. [Google Scholar] [CrossRef]
  31. Mukhopadhyay, N.; Padmanabhan, A.R. A note on three-stage confidence intervals for the difference of locations: The exponential case. Metrika 1993, 40, 121–128. [Google Scholar] [CrossRef]
  32. Takada, Y. Three-stage estimation procedure of the multivariate normal mean. Indian J. Stat. Ser. B 1993, 55, 124–129. [Google Scholar]
  33. Hamdy, H.I.; Costanza, M.C.; Ashikaga, T. On the Behrens-Fisher problem: An integrated triple sampling approach. 1995; in press. [Google Scholar]
  34. Hamdy, H.I. Performance of fixed width confidence intervals under Type II errors: The exponential case. South. African Stat. J. 1997, 31, 259–269. [Google Scholar]
  35. AlMahmeed, M.; Hamdy, H.I. Sequential estimation of linear models in three stages. Metrika 1990, 37, 19–36. [Google Scholar] [CrossRef]
  36. AlMahmeed, M.; AlHessainan, A.; Son, M.S.; Hamdy, H.I. Three-stage estimation for the mean of a one-parameter exponential family. Korean Commun. Stat. 1998, 5, 539–557. [Google Scholar]
  37. Costanza, M.C.; Hamdy, H.I.; Haugh, L.D.; Son, M.S. Type II error performance of triple sampling fixed precision confidence intervals for the normal mean. Metron 1995, 53, 69–82. [Google Scholar]
  38. Yousef, A.; Kimber, A.; Hamdy, H.I. Sensitivity of Normal-Based Triple Sampling Sequential Point Estimation to the Normality Assumption. J. Stat. Plan. Inference 2013, 143, 1606–1618. [Google Scholar] [CrossRef]
  39. Yousef, A. Construction a Three-Stage Asymptotic Coverage Probability for the Mean Using Edgeworth Second-Order Approximation. Selected Papers on the International Conference on Mathematical Sciences and Statistics 2013; Springer: Singapore, 2014; pp. 53–67. [Google Scholar]
  40. Hamdy, H.I.; Son, S.M.; Yousef, S.A. Sensitivity Analysis of Multi-Stage Sampling to Departure of an underlying Distribution from Normality with Computer Simulations. J. Seq. Anal. 2015, 34, 532–558. [Google Scholar] [CrossRef]
  41. Yousef, A. A Note on a Three-Stage Sequential Confidence Interval for the Mean When the Underlying Distribution Departs away from Normality. Int. J. Appl. Math. Stat. 2018, 57, 57–69. [Google Scholar]
  42. Liu, W. A k-stage sequential sampling procedure for estimation of a normal mean. J. Stat. Plan. Inference 1995, 65, 109–127. [Google Scholar] [CrossRef]
  43. Son, M.S.; Haugh, L.D.; Hamdy, H.I.; Costanza, M.C. Controlling type II error while constructing triple sampling fixed precision confidence intervals for the normal mean. Ann. Inst. Stat. Math. 1997, 49, 681–692. [Google Scholar] [CrossRef]
  44. Wald, A. Sequential Analysis; Wiley: New York, NY, USA, 1947. [Google Scholar]
  45. Simons, G. On the cost of not knowing the variance when making a fixed-width confidence interval for the mean. Ann. Math. Stat. 1968, 39, 1946–1952. [Google Scholar] [CrossRef]
  46. Martinsek, A.T. Negative regret, optimal stopping, and the elimination of outliers. J. Amer. Stat. Assoc. 1988, 83, 160–163. [Google Scholar] [CrossRef]
  47. McGibney, G.; Smith, M.R. An unbiased signal to noise ratio measure for magnetic resonance images. Med. Phys. 1993, 20, 1077–1079. [Google Scholar] [CrossRef]
  48. Mukhopadhyay, N.; Datta, S.; Chattopadhyay, S. Applied Sequential Methodologies: Real-World Examples with Data Analysis; CRC Press: Boca Raton, FL, USA, 2004. [Google Scholar] [CrossRef]
Table 1. Three-stage estimation of the mean and variance of the normal distribution under a unified stopping rule with $m = 10$, $\gamma = 0.5$, $\mu = 10$, $\sigma = 5$, $\alpha = 0.05$.

n*   | N̄     | S(N̄)  | μ̂      | S(μ̂)  | 1−α̂  | σ̂     | S(σ̂)  | ω̂
24   | 20.4  | 0.045 | 9.989  | 0.006 | 0.892 | 4.660 | 0.035 | −27.58
43   | 38.4  | 0.068 | 9.997  | 0.004 | 0.905 | 4.767 | 0.031 | −47.55
61   | 56.2  | 0.084 | 10.002 | 0.003 | 0.918 | 4.848 | 0.026 | −65.80
76   | 71.4  | 0.095 | 9.997  | 0.003 | 0.925 | 4.892 | 0.022 | −80.64
96   | 91.5  | 0.107 | 10.000 | 0.002 | 0.933 | 4.925 | 0.019 | −100.50
125  | 120.4 | 0.121 | 10.000 | 0.002 | 0.936 | 4.943 | 0.016 | −129.56
171  | 166.8 | 0.142 | 9.999  | 0.002 | 0.940 | 4.964 | 0.013 | −175.21
246  | 242.6 | 0.170 | 10.000 | 0.001 | 0.945 | 4.978 | 0.010 | −249.37
500  | 498.0 | 0.245 | 10.001 | 0.001 | 0.947 | 4.989 | 0.007 | −501.99
Table 2. Three-stage estimation of the mean and variance of the normal distribution under a unified stopping rule with $m = 10$, $\gamma = 0.5$, $\mu = 5$, $\sigma = 10$, $\alpha = 0.05$ (the variance estimate is reported on the $\sigma^2$ scale).

n*   | N̄     | S(N̄)  | μ̂     | S(μ̂)  | 1−α̂  | σ̂²     | S(σ̂²) | ω̂
24   | 20.4  | 0.044 | 5.002 | 0.011 | 0.890 | 86.798 | 0.141 | −27.65
43   | 38.5  | 0.068 | 5.007 | 0.008 | 0.905 | 90.952 | 0.125 | −47.52
61   | 56.2  | 0.084 | 5.011 | 0.007 | 0.916 | 93.866 | 0.104 | −65.84
76   | 71.1  | 0.095 | 5.006 | 0.006 | 0.924 | 95.392 | 0.090 | −80.86
96   | 91.3  | 0.107 | 5.007 | 0.005 | 0.931 | 96.975 | 0.077 | −100.68
125  | 120.4 | 0.121 | 5.002 | 0.004 | 0.935 | 97.756 | 0.064 | −129.57
171  | 167.1 | 0.141 | 4.998 | 0.004 | 0.941 | 98.603 | 0.052 | −174.91
246  | 242.4 | 0.169 | 4.999 | 0.003 | 0.946 | 99.088 | 0.042 | −249.62
500  | 497.7 | 0.249 | 4.997 | 0.002 | 0.947 | 99.597 | 0.029 | −502.29
Table 3. Three-stage estimation of the inverse coefficient of variation under a unified stopping rule.

Panel A: $\mu = 10$, $\sigma = 5$, $\theta = 2$
n*   | θ̂     | S(θ̂)  | ŵ       | 1−α̂
24   | 2.280 | 0.003 | −27.47  | 0.979
43   | 2.213 | 0.003 | −47.49  | 0.956
61   | 2.143 | 0.002 | −65.76  | 0.967
76   | 2.098 | 0.002 | −80.61  | 0.981
96   | 2.067 | 0.001 | −100.47 | 0.990
125  | 2.047 | 0.001 | −129.54 | 0.995
171  | 2.027 | 0.001 | −175.20 | 0.998
246  | 2.016 | 0.001 | −249.36 | 0.999
500  | 2.008 | 0.000 | −501.99 | 1.000

Panel B: $\mu = 5$, $\sigma = 10$, $\theta = 0.5$
n*   | θ̂     | S(θ̂)  | ŵ       | 1−α̂
24   | 0.571 | 0.002 | −27.64  | 1.000
43   | 0.554 | 0.001 | −47.51  | 1.000
61   | 0.537 | 0.001 | −65.84  | 1.000
76   | 0.526 | 0.001 | −80.86  | 1.000
96   | 0.518 | 0.001 | −100.68 | 1.000
125  | 0.512 | 0.001 | −129.57 | 0.999
171  | 0.506 | 0.000 | −174.91 | 1.000
246  | 0.504 | 0.000 | −249.62 | 1.000
500  | 0.501 | 0.000 | −502.29 | 1.000
