Article

An Exact and Near-Exact Distribution Approach to the Behrens–Fisher Problem

Serim Hong, Carlos A. Coelho and Junyong Park

1 College of Liberal Studies, Seoul National University, Seoul 08826, Korea
2 NOVA Math (CMA-FCT/UNL) and Mathematics Department, NOVA School of Science and Technology, NOVA University of Lisbon (FCT/UNL), 2829-516 Caparica, Portugal
3 Department of Statistics, Seoul National University, Seoul 08826, Korea
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2953; https://doi.org/10.3390/math10162953
Submission received: 4 July 2022 / Revised: 10 August 2022 / Accepted: 13 August 2022 / Published: 16 August 2022
(This article belongs to the Special Issue Mathematical and Computational Statistics and Their Applications)

Abstract

The Behrens–Fisher problem arises when testing the equality of the means of two normal distributions without assuming that the two variances are equal. This paper presents approaches based on the exact and near-exact distributions of the test statistic of the Behrens–Fisher problem, depending on the combination of even or odd sample sizes. We present the exact distribution when both sample sizes are odd and the near-exact distribution when one or both sample sizes are even. The near-exact distributions are based on a finite mixture of generalized integer gamma (GIG) distributions, used as an approximation to the exact distribution, which consists of an infinite series. The proposed tests, based on the exact and the near-exact distributions, are compared with Welch’s t-test through Monte Carlo simulations, in particular for small and unbalanced sample sizes. The results show that the proposed approaches are competitive solutions to the Behrens–Fisher problem, exhibiting accurate sizes and better powers than Welch’s approach in those cases. The numerical studies also show that Welch’s t-test tends to be somewhat more conservative than the tests based on the exact or near-exact distributions, in particular when sample sizes are small and unbalanced, which are precisely the situations in which the exact or near-exact distributions attain higher powers than Welch’s t-test.

1. Introduction

Let us suppose we have two independent random samples $X_{1i}$, $i = 1, \ldots, n_1$, and $X_{2i}$, $i = 1, \ldots, n_2$, drawn from two normal distributions, $N(\mu_1, \sigma_1^2)$ and $N(\mu_2, \sigma_2^2)$, respectively. The Behrens–Fisher problem occurs when testing the equality of the two means $\mu_1$ and $\mu_2$ based on random samples like these, without the assumption that the two variances, $\sigma_1^2$ and $\sigma_2^2$, are equal. Linnik [1] showed that a uniformly most powerful test does not exist in this case, and the Behrens–Fisher problem remains one of the unsolved problems of statistics. Many different approaches have been tried to solve this problem. Among them are the fiducial approach proposed by Fisher [2,3], which somehow opened the way to the Bayesian approach proposed by Jeffreys [4,5], based on setting independent and locally uniform prior distributions for $\mu_1$, $\mu_2$, $\log\sigma_1$, $\log\sigma_2$. The frequentist approach proposed by Welch [6,7] uses Student’s $t$ distribution with $\left(\frac{S_1^2}{n_1}+\frac{S_2^2}{n_2}\right)^{2}\Big/\left(\frac{S_1^4}{(n_1-1)n_1^2}+\frac{S_2^4}{(n_2-1)n_2^2}\right)$ degrees of freedom as the approximate distribution of the Behrens–Fisher statistic.
In this paper, we obtain the exact and near-exact distributions, both the probability density function and the cumulative distribution function, of the Behrens–Fisher statistic
$$T^* = \frac{\bar X_1 - \bar X_2}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}}, \qquad (1)$$
under $H_0: \mu_1 = \mu_2$, where $\bar X_j = \frac{1}{n_j}\sum_{i=1}^{n_j} X_{ji}$ and $S_j^2 = \frac{1}{n_j-1}\sum_{i=1}^{n_j}\left(X_{ji}-\bar X_j\right)^2$ for $j = 1, 2$, in the form of mixtures of Student’s $t$ distributions multiplied by constants. In particular, when both sample sizes are odd, the exact distribution is derived in a finite closed form, without any unsolved integrals or infinite sums, by using the GIG (generalized integer gamma) distribution in [8], which is the distribution of the sum of independent gamma variables with integer shape parameters and unequal rate parameters. For the other cases, that is, when both sample sizes are even or one of them is even and the other one is odd, the near-exact distribution is obtained by approximating the exact distribution with a finite mixture of GIG distributions, yielding a more manageable cumulative distribution function. These exact and near-exact distributions include $\sigma_1^2$ and $\sigma_2^2$ as unknown parameters, which have to be estimated from the observed samples, the p-values then being obtained from the exact or near-exact distributions with the estimated parameters. The results are compared with Welch’s t-test, one of the most widely used solutions to the problem, through Monte Carlo simulations for relatively small sample sizes. We will see that the tests based on the exact or near-exact distribution show some advantage, being able to attain higher power than Welch’s t-test, especially when the sample sizes are small and unbalanced and the variances are also unbalanced.
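As an illustration only (not part of the paper’s derivations), the statistic in (1) and Welch’s benchmark test can be computed in R as follows; the function name and the right-tailed p-value convention (matching the alternative used later in Section 5) are our own choices.

behrens_fisher_stat <- function(x1, x2) {
  n1 <- length(x1); n2 <- length(x2)
  v1 <- var(x1) / n1; v2 <- var(x2) / n2
  tstar <- (mean(x1) - mean(x2)) / sqrt(v1 + v2)           # the statistic T* of Equation (1)
  nu <- (v1 + v2)^2 / (v1^2 / (n1 - 1) + v2^2 / (n2 - 1))  # Welch-Satterthwaite degrees of freedom
  list(tstar = tstar, welch_df = nu,
       welch_p = 1 - pt(tstar, df = nu))                   # right-tailed p-value for H1: mu1 > mu2
}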
This paper is organized as follows: Section 2 presents the exact distribution when both sample sizes are odd; Section 3 provides the exact and near-exact distributions when both sample sizes are even; and Section 4 presents the near-exact distribution of the test statistic when one sample size is even and the other is odd. Numerical studies are provided in Section 5 to compare the exact and near-exact distribution approaches with Welch’s t-test, and concluding remarks are presented in Section 6.

2. The Exact Distribution of the Behrens–Fisher Statistic for Odd-Numbered Sample Sizes

In this section, we present the exact distribution of the Behrens–Fisher statistic in (1) when both $n_1$ and $n_2$ are odd. Since $\frac{S_j^2}{n_j} = \frac{\sigma_j^2}{n_j(n_j-1)}\,\frac{(n_j-1)S_j^2}{\sigma_j^2}$, where $\frac{(n_j-1)S_j^2}{\sigma_j^2} \sim \chi^2_{n_j-1}$ ($j = 1, 2$), $W = \frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}$ is the sum of two independent gamma variables, which follow the distributions $\Gamma\!\left(\frac{n_1-1}{2}, \frac{n_1(n_1-1)}{2\sigma_1^2}\right)$ and $\Gamma\!\left(\frac{n_2-1}{2}, \frac{n_2(n_2-1)}{2\sigma_2^2}\right)$, where $\Gamma(r, \lambda)$ denotes a gamma distribution with shape parameter $r$ and rate parameter $\lambda$.
Now, we can divide the problem into two cases: one is the case $\frac{n_1(n_1-1)}{2\sigma_1^2} \neq \frac{n_2(n_2-1)}{2\sigma_2^2}$, and the other is the case $\frac{n_1(n_1-1)}{2\sigma_1^2} = \frac{n_2(n_2-1)}{2\sigma_2^2}$. When $\frac{n_1(n_1-1)}{2\sigma_1^2} \neq \frac{n_2(n_2-1)}{2\sigma_2^2}$, $W$ follows a GIG distribution of depth 2 with shape parameters $r_j = \frac{n_j-1}{2}$ ($j = 1, 2$) and rate parameters $\lambda_j = \frac{n_j(n_j-1)}{2\sigma_j^2}$ ($j = 1, 2$), the latter being different. Notice that $\frac{n_1-1}{2}$ and $\frac{n_2-1}{2}$ are both integers because the sample sizes are odd. The probability density function of this distribution is
$$f_W(w) = \left(\prod_{j=1}^{2}\lambda_j^{r_j}\right)\sum_{j=1}^{2}\sum_{k=1}^{r_j} c_{j,k}\, w^{k-1} e^{-\lambda_j w}\, I(w > 0),$$
where the $c_{j,k}$ are given by (11)–(13) in [8]. In this case,
$$c_{i,r_i} = \frac{1}{(r_i-1)!}\prod_{j=1,\, j\neq i}^{2}\left(\lambda_j-\lambda_i\right)^{-r_j}$$
and
$$c_{i,\,r_i-k} = \frac{1}{k}\sum_{j=1}^{k}\frac{(r_i-k+j-1)!}{(r_i-k-1)!}\left[\sum_{s=1,\, s\neq i}^{2} r_s\left(\lambda_i-\lambda_s\right)^{-j}\right] c_{i,\,r_i-k+j},$$
for $k = 1, \ldots, r_i-1$ and $i = 1, 2$.
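A minimal R sketch of this recursion (function name ours; whatever values or estimates of $\sigma_1^2$, $\sigma_2^2$ are available are simply plugged in), which also returns the integer-gamma weights $p_{j,k}$ used in the mixture representation given next:

gig2_weights <- function(n1, n2, sigma1sq, sigma2sq) {
  r   <- c((n1 - 1) / 2, (n2 - 1) / 2)                                      # integer shape parameters
  lam <- c(n1 * (n1 - 1) / (2 * sigma1sq), n2 * (n2 - 1) / (2 * sigma2sq))  # rate parameters
  K   <- prod(lam^r)
  cjk <- lapply(1:2, function(j) numeric(r[j]))
  for (j in 1:2) {
    s <- 3 - j                                                # index of the other component
    cjk[[j]][r[j]] <- (lam[s] - lam[j])^(-r[s]) / factorial(r[j] - 1)
    if (r[j] > 1) for (k in 1:(r[j] - 1)) {
      acc <- 0
      for (i in 1:k)
        acc <- acc + factorial(r[j] - k + i - 1) / factorial(r[j] - k - 1) *
               r[s] * (lam[j] - lam[s])^(-i) * cjk[[j]][r[j] - k + i]
      cjk[[j]][r[j] - k] <- acc / k
    }
  }
  # weights of the Gamma(k, lam[j]) components: p_{j,k} = K * c_{j,k} * Gamma(k) / lam[j]^k
  p <- lapply(1:2, function(j) K * cjk[[j]] * gamma(1:r[j]) / lam[j]^(1:r[j]))
  list(r = r, lam = lam, p = p)
}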
As a matter of fact, this probability density function can be rewritten as
$$f_W(w) = \sum_{j=1}^{2}\sum_{k=1}^{r_j}\underbrace{\left(\prod_{j'=1}^{2}\lambda_{j'}^{r_{j'}}\right) c_{j,k}\,\frac{\Gamma(k)}{\lambda_j^{k}}}_{p_{j,k}}\;\underbrace{\frac{\lambda_j^{k}}{\Gamma(k)}\, w^{k-1} e^{-\lambda_j w}}_{\text{p.d.f. of }\Gamma(k,\lambda_j)}\; I(w > 0),$$
which is a finite mixture of integer gamma distributions, as in [9]. Now that we have obtained the exact distribution of $W$, we can easily get the joint distribution of $W$ and $Y = \bar X_1 - \bar X_2$ under $H_0: \mu_1 = \mu_2$. Under $H_0$, $Y$ follows a normal distribution $N\!\left(0, \frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}\right)$, and so, given the independence of $W$ and $Y$, the joint probability density function of these two random variables is
$$f_{W,Y}(w, y) = \sum_{j=1}^{2}\sum_{k=1}^{r_j} p_{j,k}\,\frac{\lambda_j^{k}}{\Gamma(k)}\, w^{k-1} e^{-\lambda_j w}\,\frac{1}{\sigma}\,\phi\!\left(\frac{y}{\sigma}\right) \qquad \left(w > 0,\; y \in (-\infty,\infty)\right),$$
where $\sigma = \sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}}$ and $\phi$ denotes the probability density function of the standard normal distribution.
Since we want to derive the distribution of $T^* = Y/\sqrt{W}$, we need to go further and obtain the joint distribution of $T^*$ and $V = \sqrt{W}$ from $f_{W,Y}(w, y)$ by a simple change of variables. From this process, we obtain
$$f_{T^*,V}(t, v) = \sum_{j=1}^{2}\sum_{k=1}^{r_j} p_{j,k}\,\frac{2\lambda_j^{k}}{\Gamma(k)}\, v^{2k} e^{-\lambda_j v^2}\,\frac{1}{\sigma}\,\phi\!\left(\frac{tv}{\sigma}\right),$$
for $v > 0$ and $t \in (-\infty,\infty)$, and
$$f_{T^*}(t) = \sum_{j=1}^{2}\sum_{k=1}^{r_j} p_{j,k}\int_{0}^{\infty}\frac{2\lambda_j^{k}}{\Gamma(k)}\, v^{2k} e^{-\lambda_j v^2}\,\frac{1}{\sigma}\,\phi\!\left(\frac{tv}{\sigma}\right) dv,$$
where
$$\int_{0}^{\infty}\frac{2\lambda_j^{k}}{\Gamma(k)}\, v^{2k} e^{-\lambda_j v^2}\,\frac{1}{\sigma}\,\phi\!\left(\frac{tv}{\sigma}\right) dv$$
yields the probability density function of a $\sigma\sqrt{\lambda_j/k}\; T_{2k}$ random variable, with $T_{2k}$ denoting a random variable with a Student’s $t$ distribution with $2k$ degrees of freedom.
Hence, $f_{T^*}(t)$ is a mixture of probability density functions of $\sigma\sqrt{\lambda_j/k}\; T_{2k}$ random variables ($k = 1, \ldots, r_j$; $j = 1, 2$), with weights $p_{j,k}$. Given that the probability density function of a Student’s $t$ variable with $n$ degrees of freedom is
$$f_{T_n}(t) = \frac{1}{B\!\left(\frac{n}{2},\frac{1}{2}\right)\sqrt{n}}\left(1+\frac{t^2}{n}\right)^{-(n+1)/2}$$
for $t \in (-\infty,\infty)$, the probability density function of $T^*$, under $H_0$, can be rewritten as
$$f_{T^*}(t) = \sum_{j=1}^{2}\sum_{k=1}^{r_j} p_{j,k}\, f_{\sigma\sqrt{\lambda_j/k}\,T_{2k}}(t) = \left(\prod_{j=1}^{2}\lambda_j^{r_j}\right)\frac{1}{\sigma\sqrt{2\pi}}\sum_{j=1}^{2}\sum_{k=1}^{r_j} c_{j,k}\,\Gamma\!\left(k+\tfrac{1}{2}\right)\left(\frac{t^2}{2\sigma^2}+\lambda_j\right)^{-\left(k+\frac{1}{2}\right)}$$
for $t \in (-\infty,\infty)$.
Then, the cumulative distribution function of $T^*$ is also a mixture of cumulative distribution functions of $\sigma\sqrt{\lambda_j/k}\; T_{2k}$ ($k = 1, \ldots, r_j$; $j = 1, 2$) with weights $p_{j,k}$. For a Student’s $t$ distribution with an even number $2k$ of degrees of freedom, the cumulative distribution function is given by
$$F_{T_{2k}}(t) = \frac{1}{2} + \frac{t}{B\!\left(k,\frac{1}{2}\right)\sqrt{2k}}\;{}_2F_1\!\left(k+\tfrac{1}{2},\tfrac{1}{2};\tfrac{3}{2};-\tfrac{t^2}{2k}\right) = \frac{1}{2} + \frac{t}{B\!\left(k,\frac{1}{2}\right)\sqrt{2k}}\left(1+\frac{t^2}{2k}\right)^{\frac{1}{2}-k}{}_2F_1\!\left(1-k,1;\tfrac{3}{2};-\tfrac{t^2}{2k}\right) = \frac{1}{2} + \Gamma\!\left(k+\tfrac{1}{2}\right)\frac{t}{2\sqrt{2k}}\left(1+\frac{t^2}{2k}\right)^{\frac{1}{2}-k}\sum_{i=0}^{k-1}\frac{\left(\frac{t^2}{2k}\right)^{i}}{\Gamma\!\left(\frac{3}{2}+i\right)\Gamma(k-i)}$$
for $t \in (-\infty,\infty)$, which is obtained by applying 15.3.3 and 15.4.1 from [10], where ${}_2F_1$ denotes the Gaussian hypergeometric function.
Thus, the cumulative distribution function of $T^*$, under $H_0$, can be expressed as
$$F_{T^*}(t) = \sum_{j=1}^{2}\sum_{k=1}^{r_j} p_{j,k}\, F_{T_{2k}}\!\left(\frac{t}{\sigma\sqrt{\lambda_j/k}}\right) = \frac{1}{2} + \left(\prod_{j=1}^{2}\lambda_j^{r_j}\right)\frac{t}{2\sqrt{2}\,\sigma}\sum_{j=1}^{2}\sum_{k=1}^{r_j}\frac{c_{j,k}\,\Gamma(k)\,\Gamma\!\left(k+\frac{1}{2}\right)}{\lambda_j}\left(\frac{t^2}{2\sigma^2}+\lambda_j\right)^{\frac{1}{2}-k}\sum_{i=0}^{k-1}\frac{\left(\frac{t^2}{2\sigma^2\lambda_j}\right)^{i}}{\Gamma\!\left(\frac{3}{2}+i\right)\Gamma(k-i)} \qquad (4)$$
for $t \in (-\infty,\infty)$.
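As a sketch only (function name ours), the mixture form in (4) can be evaluated directly with scaled Student’s $t$ CDFs, using the weights from gig2_weights() above and the sample variances in place of $\sigma_1^2$, $\sigma_2^2$:

pT_exact_odd <- function(t, n1, n2, s1sq, s2sq) {
  g   <- gig2_weights(n1, n2, s1sq, s2sq)
  sig <- sqrt(s1sq / n1 + s2sq / n2)            # sigma with the variances replaced by sample variances
  val <- 0
  for (j in 1:2) for (k in 1:g$r[j]) {
    scale_jk <- sig * sqrt(g$lam[j] / k)        # this component of T* is scale_jk * T_{2k}
    val <- val + g$p[[j]][k] * pt(t / scale_jk, df = 2 * k)
  }
  val
}
# Right-tailed p-value for H1: mu1 > mu2, as in Section 5:
# p_value <- 1 - pT_exact_odd(t_obs, n1, n2, var(x1), var(x2))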
The distribution of $T^*$ is much simpler when $\frac{n_1(n_1-1)}{2\sigma_1^2} = \frac{n_2(n_2-1)}{2\sigma_2^2}$. Since the rate parameters of the two independent gamma variables $\frac{S_1^2}{n_1}$ and $\frac{S_2^2}{n_2}$ are, in this case, equal, $W$ simply follows the gamma distribution $\Gamma\!\left(\frac{n_1+n_2-2}{2},\lambda\right)$ with $\lambda = \frac{n_1(n_1-1)}{2\sigma_1^2} = \frac{n_2(n_2-1)}{2\sigma_2^2}$. Therefore, in this case, $T^* \sim T_{n_1+n_2-2}$. This means that the probability density function and the cumulative distribution function of $T^*$ are those of a Student’s $t$ distribution with $n_1+n_2-2$ degrees of freedom. The probability density function of this distribution is
$$f_{T^*}(t) = \frac{1}{B\!\left(\frac{n_1+n_2-2}{2},\frac{1}{2}\right)\sqrt{n_1+n_2-2}}\left(1+\frac{t^2}{n_1+n_2-2}\right)^{-(n_1+n_2-1)/2}$$
for $t \in (-\infty,\infty)$.
Additionally, as $n_1+n_2-2$ is an even number when both sample sizes are odd, the cumulative distribution function can be written as
$$F_{T^*}(t) = \frac{1}{2} + \Gamma\!\left(\frac{n_1+n_2-1}{2}\right)\frac{t}{2\sqrt{n_1+n_2-2}}\left(1+\frac{t^2}{n_1+n_2-2}\right)^{\frac{3-n_1-n_2}{2}}\sum_{i=0}^{\frac{n_1+n_2-4}{2}}\frac{\left(\frac{t^2}{n_1+n_2-2}\right)^{i}}{\Gamma\!\left(\frac{3}{2}+i\right)\Gamma\!\left(\frac{n_1+n_2-2}{2}-i\right)}$$
for $t \in (-\infty,\infty)$.

3. The Exact and Near-Exact Distribution of the Behrens–Fisher Statistic for Even-Numbered Sample Sizes

In this section, we present the exact distribution of $T^*$ when both sample sizes are even. The exact distribution consists of an infinite series, but we also provide a near-exact distribution, based on a finite mixture of GIG distributions, which yields an approximation to the exact distribution.

3.1. The Exact Distribution

Unlike the case in which both sample sizes are odd, $W = \frac{S_1^2}{n_1}+\frac{S_2^2}{n_2}$ does not follow a GIG distribution when both sample sizes are even. This is so because the shape parameters of $\Gamma\!\left(\frac{n_1-1}{2},\frac{n_1(n_1-1)}{2\sigma_1^2}\right)$ and $\Gamma\!\left(\frac{n_2-1}{2},\frac{n_2(n_2-1)}{2\sigma_2^2}\right)$ are not integers when the sample sizes are even. However, we can use the Kummer confluent hypergeometric function to obtain the exact distribution of $W$ for this case.
Given the integral representation of the Kummer confluent hypergeometric function, the probability density function of $W$, which is the sum of two independent gamma variables with shape parameters
$$r_j = \frac{n_j-1}{2}, \quad j = 1, 2, \qquad (2)$$
and rate parameters
$$\lambda_j = \frac{n_j(n_j-1)}{2\sigma_j^2}, \quad j = 1, 2, \qquad (3)$$
can be expressed as
$$f_W(w) = \frac{\lambda_1^{r_1}}{\Gamma(r_1)}\frac{\lambda_2^{r_2}}{\Gamma(r_2)}\, e^{-\lambda_2 w}\int_{0}^{w} e^{-(\lambda_1-\lambda_2)s}\, s^{r_1-1}(w-s)^{r_2-1}\, ds = \frac{\lambda_1^{r_1}\lambda_2^{r_2}}{\Gamma(r_1+r_2)}\, e^{-\lambda_2 w}\, w^{r_1+r_2-1}\, {}_1F_1\!\left(r_1,\, r_1+r_2,\, (\lambda_2-\lambda_1)w\right)$$
for $w > 0$.
Since
$${}_1F_1\!\left(r_1,\, r_1+r_2,\, (\lambda_2-\lambda_1)w\right) = \sum_{i=0}^{\infty}\frac{\Gamma(r_1+i)}{\Gamma(r_1)}\,\frac{\Gamma(r_1+r_2)}{\Gamma(r_1+r_2+i)}\,\frac{(\lambda_2-\lambda_1)^{i}}{i!}\, w^{i},$$
the probability density function of W can be further written as
$$f_W(w) = \sum_{i=0}^{\infty}\frac{\Gamma(r_1+i)}{\Gamma(r_1)}\,\frac{\lambda_1^{r_1}\lambda_2^{r_2}}{\Gamma(r_1+r_2+i)}\,\frac{(\lambda_2-\lambda_1)^{i}}{i!}\, w^{r_1+r_2+i-1} e^{-\lambda_2 w} = \sum_{i=0}^{\infty}\underbrace{\frac{\Gamma(r_1+i)}{\Gamma(r_1)\, i!}\left(\frac{\lambda_1}{\lambda_2}\right)^{r_1}\left(1-\frac{\lambda_1}{\lambda_2}\right)^{i}}_{p_i}\;\underbrace{\frac{\lambda_2^{\,r_1+r_2+i}}{\Gamma(r_1+r_2+i)}\, w^{r_1+r_2+i-1} e^{-\lambda_2 w}}_{\text{p.d.f. of }\Gamma(r_1+r_2+i,\,\lambda_2)},$$
for $w > 0$, which is the probability density function of an infinite mixture of gamma distributions with weights $p_i$ ($i = 0, 1, \ldots$). Now, using an approach similar to the one used in Section 2 for odd sample sizes, we can obtain the exact probability density function of $T^*$ under $H_0$ in the form of an infinite mixture of probability density functions of $\sigma\sqrt{\lambda_2/(r_1+r_2+i)}\; T_{2(r_1+r_2+i)}$ distributions, with weights $p_i$, for $i = 0, 1, \ldots$. The probability density function of $T^*$ may then be stated as
$$f_{T^*}(t) = \sum_{i=0}^{\infty} p_i\, f_{\sigma\sqrt{\lambda_2/(r_1+r_2+i)}\,T_{2(r_1+r_2+i)}}(t) = \sum_{i=0}^{\infty} p_i\,\frac{1}{B\!\left(r_1+r_2+i,\frac{1}{2}\right)\sqrt{2\sigma^2\lambda_2}}\left(1+\frac{t^2}{2\sigma^2\lambda_2}\right)^{-\left(r_1+r_2+i+\frac{1}{2}\right)}$$
for $t \in (-\infty,\infty)$.
Regarding the exact cumulative distribution function of $T^*$ under $H_0$, this is also an infinite mixture of cumulative distribution functions of $\sigma\sqrt{\lambda_2/(r_1+r_2+i)}\; T_{2(r_1+r_2+i)}$ distributions, with weights $p_i$, for $i = 0, 1, \ldots$. It can be written as
$$F_{T^*}(t) = \sum_{i=0}^{\infty} p_i\, F_{T_{2(r_1+r_2+i)}}\!\left(\frac{t}{\sigma\sqrt{\lambda_2/(r_1+r_2+i)}}\right) = \frac{1}{2} + \sum_{i=0}^{\infty} p_i\,\Gamma\!\left(r_1+r_2+i+\tfrac{1}{2}\right)\frac{t}{2\sqrt{2\sigma^2\lambda_2}}\left(1+\frac{t^2}{2\sigma^2\lambda_2}\right)^{\frac{1}{2}-r_i}\sum_{k=0}^{r_i-1}\frac{\left(\frac{t^2}{2\sigma^2\lambda_2}\right)^{k}}{\Gamma\!\left(\frac{3}{2}+k\right)\Gamma(r_1+r_2+i-k)}$$
for $t \in (-\infty,\infty)$, where $r_i = r_1+r_2+i$ and $p_i = \frac{\Gamma(r_1+i)}{\Gamma(r_1)\, i!}\left(\frac{\lambda_1}{\lambda_2}\right)^{r_1}\left(1-\frac{\lambda_1}{\lambda_2}\right)^{i}$, by applying the expression for $F_{T_{2k}}(t)$ obtained in Section 2.
As the exact cumulative distribution function of $T^*$ is expressed as an infinite sum when $\lambda_1 \neq \lambda_2$, it is not a very manageable cumulative distribution function. In order to obtain numerical values of this cumulative distribution function in a reasonable amount of time, the summation in $i$ has to be truncated at some finite upper bound. However, the number of terms required to obtain a small enough truncation error is often very large. Hence, a near-exact distribution of $T^*$ with a manageable cumulative distribution function needs to be obtained for the case $\lambda_1 \neq \lambda_2$, so that quantiles and p-values can be computed in a faster and more practical way.
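For illustration only (our own truncation, not prescribed by the paper), the series above can be evaluated by cutting it off after a finite number of terms; with the samples labelled so that $\lambda_1 < \lambda_2$, the weights $p_i$ are nonnegative and sum to one, but many terms may be needed when $\lambda_1/\lambda_2$ is far from one.

pT_exact_even_trunc <- function(t, n1, n2, s1sq, s2sq, n_terms = 5000) {
  r   <- c((n1 - 1) / 2, (n2 - 1) / 2)
  lam <- c(n1 * (n1 - 1) / (2 * s1sq), n2 * (n2 - 1) / (2 * s2sq))
  sig <- sqrt(s1sq / n1 + s2sq / n2)
  i   <- 0:(n_terms - 1)
  p_i <- exp(lgamma(r[1] + i) - lgamma(r[1]) - lfactorial(i)) *
         (lam[1] / lam[2])^r[1] * (1 - lam[1] / lam[2])^i       # mixture weights p_i
  a_i <- r[1] + r[2] + i                                        # gamma shape of the i-th component
  sum(p_i * pt(t / (sig * sqrt(lam[2] / a_i)), df = 2 * a_i))   # mixture of scaled Student's t CDFs
}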

3.2. Near-Exact Distribution

In order to obtain a near-exact distribution for $T^*$, based on a finite mixture, we will first obtain a near-exact distribution for $W$ and then derive the distribution of $T^*$ from that distribution of $W$.
Let $r_j^* = \frac{n_j-2}{2}$ ($j = 1, 2$), and let $\lambda_1 \neq \lambda_2$. Then, the exact characteristic function of $W$ is given by
$$\Phi_W(t) = \lambda_1^{r_1}\left(\lambda_1-\mathrm{i}t\right)^{-r_1}\lambda_2^{r_2}\left(\lambda_2-\mathrm{i}t\right)^{-r_2} = \lambda_1^{r_1^*}\left(\lambda_1-\mathrm{i}t\right)^{-r_1^*}\lambda_2^{r_2^*}\left(\lambda_2-\mathrm{i}t\right)^{-r_2^*}\,\lambda_1^{0.5}\left(\lambda_1-\mathrm{i}t\right)^{-0.5}\lambda_2^{0.5}\left(\lambda_2-\mathrm{i}t\right)^{-0.5},$$
where $\mathrm{i}$ is the imaginary unit, with $\mathrm{i}^2 = -1$. Since $\lambda_1^{0.5}(\lambda_1-\mathrm{i}t)^{-0.5}\lambda_2^{0.5}(\lambda_2-\mathrm{i}t)^{-0.5}$ is the characteristic function of the sum of two independent gamma variables, $\Gamma(0.5,\lambda_1)$ and $\Gamma(0.5,\lambda_2)$, we can make use of the probability density function of this sum, expressed as in Section 3.1 as an infinite mixture of gamma probability density functions, to write
$$\lambda_1^{0.5}\left(\lambda_1-\mathrm{i}t\right)^{-0.5}\lambda_2^{0.5}\left(\lambda_2-\mathrm{i}t\right)^{-0.5} = \sum_{j=0}^{\infty} p_j\,\lambda_2^{\,j+1}\left(\lambda_2-\mathrm{i}t\right)^{-(j+1)},$$
where $p_j = \frac{\Gamma(0.5+j)}{\Gamma(0.5)\, j!}\left(\frac{\lambda_1}{\lambda_2}\right)^{0.5}\left(1-\frac{\lambda_1}{\lambda_2}\right)^{j}$. This leads to
$$\Phi_W(t) = \lambda_1^{r_1^*}\left(\lambda_1-\mathrm{i}t\right)^{-r_1^*}\lambda_2^{r_2^*}\left(\lambda_2-\mathrm{i}t\right)^{-r_2^*}\sum_{j=0}^{\infty} p_j\,\lambda_2^{\,j+1}\left(\lambda_2-\mathrm{i}t\right)^{-(j+1)}.$$
Now, for a given integer $m^*$, we propose to approximate $\Phi_W(t)$ by
$$\Phi_W^*(t) = \lambda_1^{r_1^*}\left(\lambda_1-\mathrm{i}t\right)^{-r_1^*}\lambda_2^{r_2^*}\left(\lambda_2-\mathrm{i}t\right)^{-r_2^*}\sum_{j=0}^{m^*}\pi_j\,\lambda_2^{\,j+1}\left(\lambda_2-\mathrm{i}t\right)^{-(j+1)} = \sum_{j=0}^{m^*}\pi_j\,\lambda_1^{r_1^*}\left(\lambda_1-\mathrm{i}t\right)^{-r_1^*}\lambda_2^{\,r_2^*+j+1}\left(\lambda_2-\mathrm{i}t\right)^{-(r_2^*+j+1)},$$
where the $\pi_j$ ($j = 0, \ldots, m^*-1$) are determined in such a way that
$$\left.\frac{\partial^h}{\partial t^h}\Phi_W(t)\right|_{t=0} = \left.\frac{\partial^h}{\partial t^h}\Phi_W^*(t)\right|_{t=0}\quad\text{for } h = 1, \ldots, m^*,$$
with $\pi_{m^*} = 1 - \sum_{i=0}^{m^*-1}\pi_i$. This approximate characteristic function $\Phi_W^*(t)$ is, in fact, the characteristic function of a finite mixture (with weights $\pi_j$) of $m^*+1$ GIG distributions of depth 2, with integer shape parameters $r_1^*$ and $r_2^*+j+1$ ($j = 0, \ldots, m^*$) and rate parameters $\lambda_1$ and $\lambda_2$, the first $m^*$ moments of which are the same as those of $W$.
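A minimal sketch of one way to solve for these weights (helper names ours), matching the first $m^*$ moments of $W$ and enforcing that the weights sum to one, which gives a small linear system:

gamma_mom <- function(h, r, lam) exp(lgamma(r + h) - lgamma(r)) / lam^h   # E[G^h] for G ~ Gamma(r, lam)
sum_mom <- function(h, ra, la, rb, lb)                                    # E[(Ga + Gb)^h], Ga, Gb independent
  sum(sapply(0:h, function(m) choose(h, m) * gamma_mom(m, ra, la) * gamma_mom(h - m, rb, lb)))

near_exact_pi <- function(n1, n2, lam1, lam2, m_star = 4) {
  r1  <- (n1 - 1) / 2; r2  <- (n2 - 1) / 2       # exact (half-integer) shape parameters
  r1s <- (n1 - 2) / 2; r2s <- (n2 - 2) / 2       # integer shape parameters r1*, r2*
  A <- matrix(1, m_star + 1, m_star + 1)         # last row: the weights must sum to 1
  b <- c(numeric(m_star), 1)
  for (h in 1:m_star) {
    b[h] <- sum_mom(h, r1, lam1, r2, lam2)                     # h-th moment of W
    for (j in 0:m_star)
      A[h, j + 1] <- sum_mom(h, r1s, lam1, r2s + j + 1, lam2)  # h-th moment of the j-th GIG component
  }
  solve(A, b)                                    # pi_0, ..., pi_{m*}
}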
Hence, a finite mixture of the probability density functions of the $m^*+1$ GIG distributions, with weights $\pi_j$ for $j = 0, \ldots, m^*$, will then be a near-exact probability density function of $W$. As the GIG distribution itself is a finite mixture of gamma distributions, as shown in Section 2, this probability density function can be written as
$$f_W^*(w) = \sum_{i=0}^{m^*}\pi_i\sum_{j=1}^{2}\sum_{k=1}^{r_{ji}^{**}}\left(\prod_{j'=1}^{2}\lambda_{j'}^{\,r_{j'i}^{**}}\right) c_{jk,i}^{*}\, w^{k-1} e^{-\lambda_j w} = \sum_{i=0}^{m^*}\pi_i\sum_{j=1}^{2}\sum_{k=1}^{r_{ji}^{**}}\underbrace{\left(\prod_{j'=1}^{2}\lambda_{j'}^{\,r_{j'i}^{**}}\right) c_{jk,i}^{*}\,\frac{\Gamma(k)}{\lambda_j^{k}}}_{p_{jk,i}^{*}}\;\underbrace{\frac{\lambda_j^{k}}{\Gamma(k)}\, w^{k-1} e^{-\lambda_j w}}_{\text{p.d.f. of }\Gamma(k,\lambda_j)}$$
for $w > 0$, where
$$r_{ji}^{**} = \begin{cases} r_1^* & j = 1\\ r_2^* + i + 1 & j = 2\end{cases}$$
for $i = 0, \ldots, m^*$, and the $c_{jk,i}^{*}$ are defined in the same way as the $c_{j,k}$ in Section 2, with $r_{ji}^{**}$ used in place of $r_j$.
From this near-exact probability density function of $W$, using the same reasoning as in Section 2 and Section 3.1, we can obtain a near-exact probability density function of $T^*$ in the form of a finite mixture of probability density functions of $\sigma\sqrt{\lambda_j/k}\; T_{2k}$ distributions with weights $\pi_i\, p_{jk,i}^{*}$. Thus, the near-exact probability density function, under $H_0$, is given by
$$f_{T^*}^{*}(t) = \sum_{i=0}^{m^*}\sum_{j=1}^{2}\sum_{k=1}^{r_{ji}^{**}}\pi_i\, p_{jk,i}^{*}\, f_{\sigma\sqrt{\lambda_j/k}\,T_{2k}}(t) = \sum_{i=0}^{m^*}\sum_{j=1}^{2}\sum_{k=1}^{r_{ji}^{**}}\pi_i\left(\prod_{j'=1}^{2}\lambda_{j'}^{\,r_{j'i}^{**}}\right) c_{jk,i}^{*}\,\frac{\Gamma(k+0.5)}{\sqrt{2\pi\sigma^2}}\left(\lambda_j+\frac{t^2}{2\sigma^2}\right)^{-(k+0.5)}$$
for $t \in (-\infty,\infty)$. Naturally, the corresponding near-exact cumulative distribution function of $T^*$ is a finite mixture of cumulative distribution functions of $\sigma\sqrt{\lambda_j/k}\; T_{2k}$ with weights $\pi_i\, p_{jk,i}^{*}$, which can be written as
$$F_{T^*}^{*}(t) = \frac{1}{2} + \sum_{i=0}^{m^*}\sum_{j=1}^{2}\sum_{k=1}^{r_{ji}^{**}}\left\{\pi_i\left(\prod_{j'=1}^{2}\lambda_{j'}^{\,r_{j'i}^{**}}\right) c_{jk,i}^{*}\,\Gamma(k)\,\Gamma(k+0.5)\,\frac{t}{2\sqrt{2\sigma^2}\,\lambda_j}\left(\lambda_j+\frac{t^2}{2\sigma^2}\right)^{0.5-k}\times\sum_{s=0}^{k-1}\frac{\left(\frac{t^2}{2\sigma^2\lambda_j}\right)^{s}}{\Gamma\!\left(\frac{3}{2}+s\right)\Gamma(k-s)}\right\} \qquad (5)$$
for $t \in (-\infty,\infty)$, by applying the expression for $F_{T_{2k}}(t)$ obtained in Section 2.

4. One of the Sample Sizes Is Even and the Other Is Odd

The exact distribution of the Behrens–Fisher statistic $T^*$ for this case is given by the same expressions derived in Section 3.1. Just as in the case of two even sample sizes, the exact cumulative distribution function is not very manageable when $\lambda_1 \neq \lambda_2$ in this case either. Hence, there is a need to obtain a near-exact distribution of $T^*$ with a manageable cumulative distribution function, for faster and more practical computation of quantiles and p-values.
As we did in Section 3.2, we will first obtain a near-exact distribution for $W$, and then we will derive the near-exact distribution of $T^*$ from that distribution of $W$. Let, without any loss of generality, $n_1$ be even and $n_2$ be odd. Additionally, let $r_1^* = \frac{n_1-2}{2} = r_1 - \frac{1}{2}$, and let $\lambda_1 \neq \lambda_2$. Then, for $r_j$ and $\lambda_j$ as defined in (2) and (3), the exact characteristic function of $W$ can be written as
$$\Phi_W(t) = \lambda_1^{r_1}\left(\lambda_1-\mathrm{i}t\right)^{-r_1}\lambda_2^{r_2}\left(\lambda_2-\mathrm{i}t\right)^{-r_2} = \underbrace{\lambda_1^{r_1^*}\left(\lambda_1-\mathrm{i}t\right)^{-r_1^*}\lambda_2^{r_2}\left(\lambda_2-\mathrm{i}t\right)^{-r_2}}_{\Phi_{W_1}(t)}\;\underbrace{\lambda_1^{\frac{1}{2}}\left(\lambda_1-\mathrm{i}t\right)^{-\frac{1}{2}}}_{\Phi_{W_2}(t)},$$
where $\Phi_{W_1}(t)$ is the characteristic function of a GIG distribution of depth 2 with integer shape parameters $r_1^*$ and $r_2$ and rate parameters $\lambda_1$ and $\lambda_2$, while $\Phi_{W_2}(t)$ is the characteristic function of a $\Gamma(0.5,\lambda_1)$ distribution. Now, for a given $m^* \in \mathbb{N}$, we propose to approximate $\Phi_{W_2}(t)$ by
$$\Phi_{W_2}^{*}(t) = \sum_{j=0}^{m^*}\pi_j\,(2\lambda_1)^{\,j+1}\left(2\lambda_1-\mathrm{i}t\right)^{-(j+1)},$$
where the $\pi_j$ ($j = 0, \ldots, m^*-1$) are determined in such a way that
$$\left.\frac{\partial^h}{\partial t^h}\Phi_{W_2}(t)\right|_{t=0} = \left.\frac{\partial^h}{\partial t^h}\Phi_{W_2}^{*}(t)\right|_{t=0}\quad\text{for } h = 1, \ldots, m^*,$$
with $\pi_{m^*} = 1 - \sum_{i=0}^{m^*-1}\pi_i$.
The characteristic function $\Phi_{W_2}^{*}(t)$ is the characteristic function of a finite mixture of $m^*+1$ distributions, namely the $\Gamma(j+1, 2\lambda_1)$ distributions ($j = 0, \ldots, m^*$). The first $m^*$ moments of this mixture are the same as those of $\Gamma(0.5,\lambda_1)$.
We should note that the weights $\pi_j$ do not depend on $\lambda_1$. The $h$-th non-central moment of $\Gamma(0.5,\lambda_1)$ and that of the mixture of $\Gamma(j+1, 2\lambda_1)$ distributions are, respectively, given by
$$\frac{\Gamma\!\left(h+\frac{1}{2}\right)}{\Gamma\!\left(\frac{1}{2}\right)\lambda_1^{h}} \qquad\text{and}\qquad \sum_{j=0}^{m^*}\pi_j\,\frac{\Gamma(h+j+1)}{\Gamma(j+1)\,(2\lambda_1)^{h}},$$
which means that the $\pi_j$ are determined in such a way that
$$\frac{\Gamma\!\left(h+\frac{1}{2}\right)}{\Gamma\!\left(\frac{1}{2}\right)} = \sum_{j=0}^{m^*}\pi_j\,\frac{\Gamma(h+j+1)}{\Gamma(j+1)}\,2^{-h},$$
for $h = 1, \ldots, m^*$, which indeed does not depend on $\lambda_1$.
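A small sketch (helper name ours) of this $\lambda$-free system, again solved as a linear system together with the constraint that the weights sum to one:

pi_even_odd <- function(m_star = 4) {
  A <- matrix(1, m_star + 1, m_star + 1)          # last row enforces sum(pi) = 1
  b <- c(numeric(m_star), 1)
  for (h in 1:m_star) {
    b[h] <- gamma(h + 0.5) / gamma(0.5)
    for (j in 0:m_star)
      A[h, j + 1] <- exp(lgamma(h + j + 1) - lgamma(j + 1)) / 2^h
  }
  solve(A, b)   # pi_0, ..., pi_{m*}; these depend only on m*, not on lambda_1
}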
By using $\Phi_{W_2}^{*}(t)$ instead of $\Phi_{W_2}(t)$, we can then approximate $\Phi_W(t)$ by
$$\Phi_W^{*}(t) = \lambda_1^{r_1^*}\left(\lambda_1-\mathrm{i}t\right)^{-r_1^*}\lambda_2^{r_2}\left(\lambda_2-\mathrm{i}t\right)^{-r_2}\sum_{j=0}^{m^*}\pi_j\,(2\lambda_1)^{\,j+1}\left(2\lambda_1-\mathrm{i}t\right)^{-(j+1)} = \sum_{j=0}^{m^*}\pi_j\,\lambda_1^{r_1^*}\left(\lambda_1-\mathrm{i}t\right)^{-r_1^*}\lambda_2^{r_2}\left(\lambda_2-\mathrm{i}t\right)^{-r_2}(2\lambda_1)^{\,j+1}\left(2\lambda_1-\mathrm{i}t\right)^{-(j+1)},$$
which is the characteristic function of a finite mixture of $m^*+1$ GIG distributions of depth 3, with shape parameters $r_1^*$, $r_2$, $j+1$ ($j = 0, \ldots, m^*$) and rate parameters $\lambda_1$, $\lambda_2$, $2\lambda_1$.
Thus, a finite mixture of $m^*+1$ probability density functions of GIG distributions of depth 3, with weights $\pi_j$, can be used as a near-exact probability density function of $W$. As the GIG distribution itself is a finite mixture of gamma distributions, this near-exact probability density function of $W$ is written as
$$f_W^{*}(w) = \sum_{i=0}^{m^*}\pi_i\sum_{j=1}^{3}\sum_{k=1}^{r_{ji}^{*}}\left(\prod_{j'=1}^{3}(\lambda_{j'}^{*})^{r_{j'i}^{*}}\right) c_{jk,i}^{*}\, w^{k-1} e^{-\lambda_j^{*} w} = \sum_{i=0}^{m^*}\pi_i\sum_{j=1}^{3}\sum_{k=1}^{r_{ji}^{*}}\underbrace{\left(\prod_{j'=1}^{3}(\lambda_{j'}^{*})^{r_{j'i}^{*}}\right) c_{jk,i}^{*}\,\frac{\Gamma(k)}{(\lambda_j^{*})^{k}}}_{p_{jk,i}^{*}}\;\underbrace{\frac{(\lambda_j^{*})^{k}}{\Gamma(k)}\, w^{k-1} e^{-\lambda_j^{*} w}}_{\text{p.d.f. of }\Gamma(k,\lambda_j^{*})} \qquad (w > 0),$$
where
$$r_{ji}^{*} = \begin{cases} r_1^* & j = 1\\ r_2 & j = 2\\ i+1 & j = 3\end{cases}\quad (i = 0, \ldots, m^*), \qquad \lambda_j^{*} = \begin{cases}\lambda_1 & j = 1\\ \lambda_2 & j = 2\\ 2\lambda_1 & j = 3\end{cases}$$
and the $c_{jk,i}^{*}$ are given by (11)–(13) in [8], using $r_{ji}^{*}$ and $\lambda_j^{*}$, that is,
$$c_{j,\,r_{ji}^{*},\,i}^{*} = \frac{1}{(r_{ji}^{*}-1)!}\prod_{s=1,\, s\neq j}^{3}\left(\lambda_s^{*}-\lambda_j^{*}\right)^{-r_{si}^{*}}, \qquad c_{j,\,r_{ji}^{*}-k,\,i}^{*} = \frac{1}{k}\sum_{n=1}^{k}\frac{(r_{ji}^{*}-k+n-1)!}{(r_{ji}^{*}-k-1)!}\left[\sum_{s=1,\, s\neq j}^{3} r_{si}^{*}\left(\lambda_j^{*}-\lambda_s^{*}\right)^{-n}\right] c_{j,\,r_{ji}^{*}-k+n,\,i}^{*},$$
for $k = 1, \ldots, r_{ji}^{*}-1$, $j = 1, 2, 3$ and $i = 0, \ldots, m^*$. From this near-exact probability density function of $W$, using the same reasoning as before, we can once again obtain a near-exact probability density function of $T^*$ in the form of a finite mixture of probability density functions of $\sigma\sqrt{\lambda_j^{*}/k}\; T_{2k}$ distributions, with weights $\pi_i\, p_{jk,i}^{*}$. The near-exact probability density function of $T^*$, under $H_0$, is thus given by
$$f_{T^*}^{*}(t) = \sum_{i=0}^{m^*}\sum_{j=1}^{3}\sum_{k=1}^{r_{ji}^{*}}\pi_i\, p_{jk,i}^{*}\, f_{\sigma\sqrt{\lambda_j^{*}/k}\,T_{2k}}(t) = \sum_{i=0}^{m^*}\sum_{j=1}^{3}\sum_{k=1}^{r_{ji}^{*}}\pi_i\left(\prod_{j'=1}^{3}(\lambda_{j'}^{*})^{r_{j'i}^{*}}\right) c_{jk,i}^{*}\,\frac{\Gamma\!\left(k+\frac{1}{2}\right)}{\sqrt{2\pi\sigma^2}}\left(\lambda_j^{*}+\frac{t^2}{2\sigma^2}\right)^{-\left(k+\frac{1}{2}\right)}$$
for $t \in (-\infty,\infty)$.
Hence, the corresponding near-exact cumulative distribution function of $T^*$, under $H_0$, is also a finite mixture of cumulative distribution functions of $\sigma\sqrt{\lambda_j^{*}/k}\; T_{2k}$, with weights $\pi_i\, p_{jk,i}^{*}$, which can be written as
$$F_{T^*}^{*}(t) = \frac{1}{2} + \sum_{i=0}^{m^*}\sum_{j=1}^{3}\sum_{k=1}^{r_{ji}^{*}}\left\{\pi_i\left(\prod_{j'=1}^{3}(\lambda_{j'}^{*})^{r_{j'i}^{*}}\right) c_{jk,i}^{*}\,\Gamma(k)\,\Gamma(k+0.5)\,\frac{t}{2\sqrt{2\sigma^2}\,\lambda_j^{*}}\left(\lambda_j^{*}+\frac{t^2}{2\sigma^2}\right)^{0.5-k}\times\sum_{s=0}^{k-1}\frac{\left(\frac{t^2}{2\sigma^2\lambda_j^{*}}\right)^{s}}{\Gamma\!\left(\frac{3}{2}+s\right)\Gamma(k-s)}\right\} \qquad (6)$$
for $t \in (-\infty,\infty)$, by applying the expression for $F_{T_{2k}}(t)$ obtained in Section 2.

5. Comparison of the Exact or Near-Exact Distribution and Welch’s t Test

When it is plausible to assume that $\lambda_1 \neq \lambda_2$, that is, that $\frac{n_1(n_1-1)}{2\sigma_1^2} \neq \frac{n_2(n_2-1)}{2\sigma_2^2}$, we can make use of the exact and near-exact distributions of $T^*$ to solve the Behrens–Fisher problem. The exact cumulative distribution function of $T^*$ is used to compute p-values when both sample sizes are odd, while the near-exact cumulative distribution functions obtained in Section 3 and Section 4 are used to compute p-values when both sample sizes are even or when one of the sample sizes is even and the other is odd. Because the cumulative distribution functions of $T^*$ include the unknown parameters $\sigma_1^2$ and $\sigma_2^2$, these are estimated by the sample variances $S_i^2$ ($i = 1, 2$).
We compare the exact or near-exact distributions and Welch’s t-test through their actual sizes and powers for testing $H_0: \mu_1 = \mu_2$ versus $H_1: \mu_1 > \mu_2$. For $T^* = t$, since the alternative calls for a right-tailed test, the corresponding p-value is computed from $1 - F_{T^*}(t)$ in (4) or $1 - F_{T^*}^{*}(t)$ in (5) or (6), depending on the parity of the sample sizes, according to the derivations in the previous sections, and with $\sigma$ estimated by $\hat\sigma = \sqrt{S_1^2/n_1 + S_2^2/n_2}$. We used the type I error rates $\alpha \in \{0.1, 0.05, 0.01\}$ and conducted Monte Carlo experiments under a range of different sets of parameters.
Simulations were conducted for variance and sample size pairs corresponding to
$$\theta = \frac{\sigma_1^2/n_1}{\sigma_1^2/n_1 + \sigma_2^2/n_2} \in \{0.1, 0.3, 0.5\}.$$
For $\mu_1 \geq \mu_2$, we covered the cases where $\mu_1 - \mu_2$ satisfies
$$\delta = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}} \in \{0, 1, 2\},$$
with $\delta = 0$ corresponding to the null hypothesis $H_0: \mu_1 = \mu_2$.
For each combination of parameters, the number of replications was 50,000, and in each subsection we provide two scenarios: one in which the sample sizes are balanced and another in which they are unbalanced. For each generated sample, we computed $t_i$ for $i = 1, \ldots, 50{,}000$ and then obtained the empirical type I error rate (under $\delta = 0$) and the empirical power (under $\delta > 0$) from
$$\frac{1}{50{,}000}\sum_{i=1}^{50{,}000} I(p_i \leq \alpha),$$
where $I(\cdot)$ is the indicator function and the p-value $p_i = 1 - F_{T^*}(t_i)$ is computed using $F_{T^*}(\cdot)$ in (4) when both sample sizes are odd, or $p_i = 1 - F_{T^*}^{*}(t_i)$ using $F_{T^*}^{*}(\cdot)$ in (5) or (6) when at least one of the sample sizes is even.
All computations were done with the software R, version 4.1.0.
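A compact sketch of one simulation cell under these settings (the helper p_value_proposed is hypothetical and stands for whichever of (4), (5) or (6) applies; t.test() with alternative = "greater" gives Welch’s right-tailed p-value):

simulate_cell <- function(n1, n2, sd1, sd2, delta, alpha = 0.05, B = 50000) {
  mu1 <- delta * sqrt(sd1^2 / n1 + sd2^2 / n2)     # mu2 = 0, so delta is as defined above
  rejection_rate <- function(pfun)
    mean(replicate(B, {
      x1 <- rnorm(n1, mu1, sd1); x2 <- rnorm(n2, 0, sd2)
      pfun(x1, x2) <= alpha
    }))
  c(proposed = rejection_rate(p_value_proposed),   # hypothetical: 1 - F_{T*}(t) or 1 - F*_{T*}(t)
    welch    = rejection_rate(function(x1, x2)
                 t.test(x1, x2, alternative = "greater")$p.value))
}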

5.1. Odd n1 and n2

In Table 1 and Table 2, we present the power values for the exact distribution and Welch’s test, represented, respectively, by E and W. We considered for θ and δ the sets of values indicated above.
We may see that, when the sample sizes are unbalanced, as in Table 2, the exact distribution gives larger values of power than Welch’s t-test, namely for larger values of $\delta$ and smaller values of $\alpha$, while the two tend to give similar results when the two sample sizes are equal, still with the exact distribution giving larger power values when the variances are unbalanced. For instance, for the case of $(n_1, n_2) = (15, 3)$, $(\sigma_1^2, \sigma_2^2) = (15, 27)$ and $\delta = 2$, the exact distribution shows a gain of over 30% in power in relation to Welch’s t-test for $\alpha = 0.01$.

5.2. Even n1 and n2

Table 3 and Table 4 provide numerical results for type I error rates and powers for Welch’s t-test and for the near-exact distributions, which use $m^* = 4$, for the cases where both sample sizes are even. In these tables, the near-exact distributions and Welch’s test are denoted by NE and W, respectively.
Similar to the case of odd sample sizes, we see that the differences in power between the near-exact distribution and Welch’s t-test are quite small when the sample sizes are equal, as shown in Table 3, although with a tendency for the near-exact distributions to exhibit larger powers when the variances are unbalanced, while in the unbalanced case $(n_1, n_2) = (12, 4)$ shown in Table 4 the power displayed by the near-exact distributions is considerably larger, particularly if the variances are also unbalanced. For instance, for the case $(\sigma_1^2, \sigma_2^2) = (12, 36)$ and $\delta = 2$, the near-exact distribution shows a gain of over 20% in power for $\alpha = 0.01$.

5.3. n1 Is Even and n2 Is Odd

Table 5 displays the case of similar sample sizes, and it shows that, once again, the near-exact distribution and Welch’s test are fairly similar to each other in terms of type I error rates and power values. On the other hand, Table 6 presents the results for the unbalanced sample size case, and it shows that both the near-exact distribution and Welch’s test control the type I error rates well, but that the near-exact distribution can attain larger powers than Welch’s test. In particular, when $(\sigma_1^2, \sigma_2^2) = (12, 27)$ and $\delta = 2$, the near-exact distribution shows a gain of almost 30% in power in relation to Welch’s t-test for $\alpha = 0.01$. In these tables, the near-exact distributions and Welch’s test are denoted by NE and W, respectively.

5.4. Brief Study of Power Evolution for Increasing Sample Sizes

With the aim of showing the evolution of the power values for increasing sample sizes, power curves for increasing sample sizes are plotted in Figure 1, Figure 2 and Figure 3, for given $\sigma_1^2$, $\sigma_2^2$, $\mu_1$ and $\mu_2$. As expected, the power curves increase with the sample sizes. Figure 1, Figure 2 and Figure 3 represent, respectively, the cases $(n_1, n_2) =$ (odd, odd), (even, even) and (even, odd). It is also clear from these figures that, also as expected, although the use of the exact or near-exact distributions leads to an increase in the power values, the power values from Welch’s test approach those obtained with the exact or near-exact distributions as the sample sizes increase. In Figure 1, Figure 2 and Figure 3, we use $(\mu_1, \mu_2) = (40, 0)$ and present the values of $(\sigma_1^2, \sigma_2^2)$ and $(n_1, n_2)$ in each figure, together with the corresponding values of $\delta$.

6. Conclusions

Over the years since the Behrens–Fisher problem was first introduced, many different solutions have been proposed for it. In this paper, we propose another approach to the Behrens–Fisher problem, based on the exact distribution and on near-exact distributions of its statistic, which are built on GIG (generalized integer gamma) distributions. Overall, the differences between the sizes of the tests based on the near-exact distributions and of Welch’s t-test are negligible, while the use of the exact or near-exact distributions provides powers that are larger than those provided by Welch’s t-test, mainly in the cases where the sample sizes and/or the variances are unbalanced, and namely when smaller sample sizes are associated with larger variances. The results thus show that, mainly for the cases of unbalanced sample sizes and/or unbalanced variances, the use of Welch’s t-test leads to some loss in power when compared with the use of the exact or near-exact distributions developed here, thus advising towards the use of the latter.
The computation of the exact or near-exact distributions poses no problems, even for large sample sizes, with computation times remaining in the hundredths of a second for sample sizes in the order of a few hundred.
In order to decide which distribution to use, the user may want to test the hypothesis $\lambda_1 = \lambda_2$. We may note that, given the definition of $\lambda_1$ and $\lambda_2$ in (3), testing that hypothesis is indeed equivalent to testing the hypothesis
$$\frac{\sigma_2}{\sigma_1}\sqrt{\frac{n_1(n_1-1)}{n_2(n_2-1)}} = 1,$$
which may be tested in much the same way as a test of equality of two variances based on two independent samples.
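As a sketch of one possible way to carry out such a test (this specific F-test formulation is our own and is not prescribed by the paper):

test_equal_rates <- function(x1, x2) {
  n1 <- length(x1); n2 <- length(x2)
  ratio0 <- n1 * (n1 - 1) / (n2 * (n2 - 1))        # value of sigma1^2 / sigma2^2 under lambda1 = lambda2
  f <- (var(x1) / var(x2)) / ratio0                # ~ F(n1 - 1, n2 - 1) under that hypothesis
  p <- 2 * min(pf(f, n1 - 1, n2 - 1), pf(f, n1 - 1, n2 - 1, lower.tail = FALSE))
  list(statistic = f, p.value = p)                 # two-sided p-value
}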

Author Contributions

Conceptualization, S.H., C.A.C. and J.P.; methodology, S.H., C.A.C. and J.P.; software, S.H.; validation, S.H., C.A.C. and J.P.; formal analysis, S.H.; investigation, C.A.C. and J.P.; writing—original draft preparation, S.H.; writing—review and editing, C.A.C. and J.P.; funding acquisition, C.A.C. and J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1A01100526) and also funded by Portuguese national funds through FCT—Fundação para a Ciência e a Tecnologia, I.P., under the scope of the projects UIDB/00297/2020 and UIDP/00297/2020 (Center for Mathematics and Applications-NOVAMath).

Conflicts of Interest

The authors declare no conflict of interest. The funding institutions had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Linnik, J.V. Statistical Problems with Nuisance Parameters (Scripta Technica, Trans.); (Original Work Published 1966); American Mathematical Society: Providence, RI, USA, 1968.
  2. Fisher, R.A. The fiducial argument in statistical inference. Ann. Eugen. 1935, 6, 391–398.
  3. Fisher, R.A. The comparison of samples with possibly unequal variances. Ann. Eugen. 1939, 9, 174–180.
  4. Jeffreys, H. Note on the Behrens-Fisher formula. Ann. Eugen. 1940, 10, 48–51.
  5. Jeffreys, H. Theory of Probability, 3rd ed.; Oxford University Press: London, UK, 1961.
  6. Welch, B.L. The significance of the difference between two means when the population variances are unequal. Biometrika 1938, 29, 350–362.
  7. Welch, B.L. The generalization of ‘Student’s’ problem when several different population variances are involved. Biometrika 1947, 34, 28–35.
  8. Coelho, C.A. The Generalized Integer Gamma distribution—A basis for distributions in Multivariate Statistics. J. Multivar. Anal. 1998, 64, 86–102.
  9. Coelho, C.A. The wrapped Gamma distribution and wrapped sums and linear combinations of independent Gamma and Laplace distributions. J. Stat. Theory Pract. 2007, 1, 1–29.
  10. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions, 9th ed.; Dover: New York, NY, USA, 1974.
Figure 1. The left panel is based on $\sigma_1^2 = 5$, $\sigma_2^2 = 45$, $(n_1, n_2) = (5+2k, 5+2k)$ for $k = 0, 1, \ldots, 12$, and $\delta_k = (\mu_1-\mu_2)/\sqrt{\sigma_1^2/n_1+\sigma_2^2/n_2} = 40/\sqrt{\frac{5}{5+2k}+\frac{45}{5+2k}}$. The right panel is based on $\sigma_1^2 = 15$, $\sigma_2^2 = 27$, $(n_1, n_2) = (15+2k, 3+2k)$ for $k = 0, 1, \ldots, 9$, and $\delta_k = 40/\sqrt{\frac{15}{15+2k}+\frac{27}{3+2k}}$. Red and blue lines represent the power curves from the exact distribution and Welch’s test, respectively.
Figure 2. The left panel is based on $\sigma_1^2 = 8$, $\sigma_2^2 = 72$, $(n_1, n_2) = (8+2k, 8+2k)$ for $k = 0, 1, \ldots, 17$, and $\delta_k = 40/\sqrt{\frac{8}{8+2k}+\frac{72}{8+2k}}$. The right panel is based on $\sigma_1^2 = 12$, $\sigma_2^2 = 36$, $(n_1, n_2) = (12+2k, 4+2k)$ for $k = 0, 1, \ldots, 11$, and $\delta_k = 40/\sqrt{\frac{12}{12+2k}+\frac{36}{4+2k}}$. Red and blue lines represent the power curves from the near-exact distribution and Welch’s test, respectively.
Figure 3. The left panel is based on $\sigma_1^2 = 8$, $\sigma_2^2 = 63$, $(n_1, n_2) = (8+2k, 7+2k)$ for $k = 0, 1, \ldots, 14$, and $\delta_k = 40/\sqrt{\frac{8}{8+2k}+\frac{63}{7+2k}}$. The right panel is based on $\sigma_1^2 = 12$, $\sigma_2^2 = 27$, $(n_1, n_2) = (12+2k, 3+2k)$ for $k = 0, 1, \ldots, 9$, and $\delta_k = 40/\sqrt{\frac{12}{12+2k}+\frac{27}{3+2k}}$. Red and blue lines represent the power curves from the near-exact distribution and Welch’s test, respectively.
Table 1. Power values for (n1, n2) = (5, 5).

θ (σ1², σ2²)     α      δ = 0 (E / W)      δ = 1 (E / W)      δ = 2 (E / W)
0.1 (5, 45)      0.1    0.1009 / 0.1000    0.3578 / 0.3558    0.7059 / 0.7036
                 0.05   0.0518 / 0.0506    0.2234 / 0.2199    0.5368 / 0.5313
                 0.01   0.0123 / 0.0114    0.0731 / 0.0676    0.2291 / 0.2139
0.3 (15, 35)     0.1    0.0971 / 0.0965    0.3664 / 0.3648    0.7205 / 0.7187
                 0.05   0.0480 / 0.0473    0.2271 / 0.2241    0.5528 / 0.5482
                 0.01   0.0100 / 0.0092    0.0636 / 0.0601    0.2376 / 0.2263
0.5 (5, 5)       0.1    0.0958 / 0.0953    0.3628 / 0.3614    0.7221 / 0.7205
                 0.05   0.0462 / 0.0456    0.2243 / 0.2218    0.5582 / 0.5545
                 0.01   0.0094 / 0.0005    0.0641 / 0.0606    0.2375 / 0.2288
Table 2. Power values for (n1, n2) = (15, 3).

θ (σ1², σ2²)     α      δ = 0 (E / W)      δ = 1 (E / W)      δ = 2 (E / W)
0.1 (15, 27)     0.1    0.1100 / 0.1048    0.3542 / 0.3402    0.6614 / 0.6441
                 0.05   0.0660 / 0.0595    0.2304 / 0.2090    0.4893 / 0.4522
                 0.01   0.0273 / 0.0053    0.1076 / 0.0843    0.2609 / 0.1987
0.3 (45, 21)     0.1    0.1052 / 0.1013    0.3636 / 0.3538    0.7033 / 0.6900
                 0.05   0.0593 / 0.0557    0.2434 / 0.2287    0.5414 / 0.5133
                 0.01   0.0180 / 0.0161    0.0990 / 0.0872    0.2940 / 0.2533
0.5 (15, 3)      0.1    0.1001 / 0.0976    0.3743 / 0.3679    0.7263 / 0.7185
                 0.05   0.0532 / 0.0507    0.2390 / 0.2301    0.5657 / 0.5488
                 0.01   0.0133 / 0.0119    0.0866 / 0.0788    0.2949 / 0.2687
Table 3. Power values for (n1, n2) = (8, 8).

θ (σ1², σ2²)     α      δ = 0 (NE / W)     δ = 1 (NE / W)     δ = 2 (NE / W)
0.1 (8, 72)      0.1    0.1002 / 0.0998    0.3731 / 0.3722    0.7319 / 0.7311
                 0.05   0.0525 / 0.0519    0.2384 / 0.2371    0.5775 / 0.5750
                 0.01   0.0111 / 0.0106    0.0755 / 0.0726    0.2747 / 0.2672
0.3 (24, 56)     0.1    0.1001 / 0.0992    0.3746 / 0.3741    0.7392 / 0.7395
                 0.05   0.0497 / 0.0488    0.2373 / 0.2360    0.5927 / 0.5922
                 0.01   0.0105 / 0.0094    0.0760 / 0.0744    0.2929 / 0.2879
0.5 (8, 8)       0.1    0.0981 / 0.0968    0.3775 / 0.3772    0.7370 / 0.7385
                 0.05   0.0511 / 0.0495    0.2403 / 0.2393    0.5942 / 0.5947
                 0.01   0.0103 / 0.0087    0.0757 / 0.0739    0.2911 / 0.2890
Table 4. Power values for (n1, n2) = (12, 4).

θ (σ1², σ2²)     α      δ = 0 (NE / W)     δ = 1 (NE / W)     δ = 2 (NE / W)
0.1 (12, 36)     0.1    0.1061 / 0.1020    0.3554 / 0.3458    0.6925 / 0.6823
                 0.05   0.0578 / 0.0539    0.2293 / 0.2129    0.5233 / 0.4974
                 0.01   0.0175 / 0.0148    0.0894 / 0.0725    0.2568 / 0.2082
0.3 (36, 28)     0.1    0.1031 / 0.1009    0.3666 / 0.3612    0.7186 / 0.7120
                 0.05   0.0545 / 0.0523    0.2361 / 0.2266    0.5657 / 0.5492
                 0.01   0.0131 / 0.0117    0.0864 / 0.0769    0.2781 / 0.2498
0.5 (12, 4)      0.1    0.0991 / 0.0976    0.3730 / 0.3701    0.7332 / 0.7308
                 0.05   0.0511 / 0.0492    0.2388 / 0.2340    0.5802 / 0.5718
                 0.01   0.0117 / 0.0103    0.0785 / 0.0732    0.2841 / 0.2683
Table 5. Power values for (n1, n2) = (8, 7).

θ (σ1², σ2²)     α      δ = 0 (NE / W)     δ = 1 (NE / W)     δ = 2 (NE / W)
0.1 (8, 63)      0.1    0.1015 / 0.1012    0.3684 / 0.3674    0.7279 / 0.7269
                 0.05   0.0504 / 0.0497    0.2377 / 0.2354    0.5669 / 0.5638
                 0.01   0.0117 / 0.0111    0.0744 / 0.0712    0.2642 / 0.2549
0.3 (24, 49)     0.1    0.0980 / 0.0971    0.3736 / 0.3730    0.7366 / 0.7367
                 0.05   0.0496 / 0.0486    0.2354 / 0.2341    0.5874 / 0.5857
                 0.01   0.0115 / 0.0104    0.0747 / 0.0722    0.2876 / 0.2820
0.5 (8, 7)       0.1    0.1015 / 0.0995    0.3711 / 0.3706    0.7389 / 0.7405
                 0.05   0.0494 / 0.0475    0.2410 / 0.2393    0.5897 / 0.5896
                 0.01   0.0116 / 0.0089    0.0737 / 0.0701    0.2844 / 0.2806
Table 6. Power values for (n1, n2) = (12, 3).

θ (σ1², σ2²)     α      δ = 0 (NE / W)     δ = 1 (NE / W)     δ = 2 (NE / W)
0.1 (12, 27)     0.1    0.1099 / 0.1045    0.3495 / 0.3355    0.6590 / 0.6418
                 0.05   0.0632 / 0.0574    0.2311 / 0.2099    0.4807 / 0.4455
                 0.01   0.0254 / 0.0206    0.1066 / 0.0849    0.2586 / 0.1994
0.3 (36, 21)     0.1    0.1074 / 0.1035    0.3632 / 0.3523    0.6966 / 0.6817
                 0.05   0.0582 / 0.0546    0.2417 / 0.2261    0.5417 / 0.5120
                 0.01   0.0157 / 0.0137    0.0929 / 0.0808    0.2825 / 0.2428
0.5 (12, 3)      0.1    0.1006 / 0.0986    0.3671 / 0.3611    0.7185 / 0.7102
                 0.05   0.0503 / 0.0486    0.2351 / 0.2263    0.5664 / 0.5493
                 0.01   0.0115 / 0.01058   0.0817 / 0.0755    0.2826 / 0.2581

