Article

Calibration-Based Mean Estimators under Stratified Median Ranked Set Sampling

1 Department of Mathematics and Statistics, International Islamic University, Islamabad 44000, Pakistan
2 Department of Mathematics and Statistics, PMAS-Arid Agriculture University, Rawalpindi 44000, Pakistan
3 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Department of Mathematics, College of Science, King Khalid University, Abha 62223, Saudi Arabia
5 Department of Statistics, Shaheed Benazir Bhutto Women University, Peshawar 25120, Pakistan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1825; https://doi.org/10.3390/math11081825
Submission received: 2 March 2023 / Revised: 26 March 2023 / Accepted: 10 April 2023 / Published: 12 April 2023
(This article belongs to the Special Issue Statistics for Stochastic Processes)

Abstract: Using auxiliary information, the calibration approach modifies the original design weights to enhance mean estimates. This paper first proposes two families of estimators adapted from those presented by recent researchers and then introduces a new family of calibration estimators, with a set of calibration constraints, under stratified median ranked set sampling (MRSS). The results are also extended to the case of two-stage stratified MRSS. To the best of our knowledge, calibration-based mean estimators under stratified MRSS are presented here for the first time, so the performance of the adapted and proposed estimators is evaluated through a simulation study with real and artificial datasets. For the real-world application, we use information on the body mass index (BMI) of 800 people in Turkey in 2014 as the study variable and age as an auxiliary variable.

1. Introduction

In many real-life studies, particularly in ecological and environmental research, the variable of interest, say Y, may not be easily observable; the measurements may be costly, tedious, intrusive or even destructive for the subjects being measured. Despite these difficulties in data collection, ranking the sampled units may be relatively straightforward at little or no extra cost. Consider the following example: Calliphoridae flies detect and colonize a food source, such as a decaying corpse, within minutes of death as a natural means of survival. Forensic entomologists therefore frequently use Calliphoridae fly larvae to estimate a cadaver's time since death in post-mortem investigations. As soon as the larvae reach their largest size, they cease eating. Because the anterior intestine remains empty during the course of their subsequent development, forensic entomologists can accurately determine the post-mortem interval by observing how full the larvae's intestines are. However, it is challenging to determine changes in the intestinal contents of maggots using radiographic techniques [1].
Meanwhile, since the larvae appear to lengthen in a continuous manner during their growth, it is relatively easy to measure and rank their length. As another example, in a health-related study, suppose that the interest is in estimating the mean cholesterol level of a population. Instead of performing an invasive blood test on all subjects in the sample, subjects can be ranked with respect to their weights, even just visually, and the blood sample can be taken from only a small number of subjects.
For such circumstances, as described in these examples, ranked set sampling (RSS) is a method for handling data collection and measurement. McIntyre [3] originally proposed RSS in 1952 in order to estimate mean pasture yields. Takahasi and Wakimoto [2] later developed the RSS theory under the assumption of perfect ranking. RSS is carried out as follows: a simple random sample of size $\vartheta$ is drawn from the population, each unit is ranked according to a subjective criterion, the smallest unit in the sample is measured, and the remaining units are discarded. A second simple random sample of size $\vartheta$ is then drawn from the population and ranked by the same criterion; the second smallest unit is measured, and the remaining units are discarded. This process is repeated until the $\vartheta$th ranked unit has been measured. The ordered observations $Y_{i[1]}, Y_{i[2]}, \ldots, Y_{i[\vartheta]}$, where $i=1,\ldots,m$ denotes the cycle number, form a cycle. A total sample size of $m\vartheta$ is produced once the cycle is repeated $m$ times.
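As a computational illustration of the RSS scheme just described, the following Python sketch draws one RSS cycle of set size $\vartheta$ from a hypothetical population; the population, the seed and the ranking function `rank_key` are assumptions of this sketch and stand in for the (possibly judgment-based) ranking criterion.

```python
import numpy as np

def rss_cycle(population, set_size, rng, rank_key=None):
    """Draw one RSS cycle: set_size sets of set_size units each;
    from the r-th ranked set, only the r-th ranked unit is measured."""
    rank_key = rank_key or (lambda v: v)           # default: rank on the values themselves
    measured = []
    for r in range(set_size):                      # r-th set of the cycle
        srs = rng.choice(population, size=set_size, replace=False)
        ranked = sorted(srs, key=rank_key)         # ranking step (judgment or auxiliary variable)
        measured.append(ranked[r])                 # keep only the r-th order statistic
    return np.array(measured)

rng = np.random.default_rng(1)
pop = rng.normal(4, 1, size=1000)                  # hypothetical finite population
sample = np.concatenate([rss_cycle(pop, 5, rng) for _ in range(3)])  # m = 3 cycles, 3 * 5 units
print(sample.mean())
```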
Since its inception, RSS has attracted a great deal of attention from scholars, and it continues to be a very active research area. Beyond its initial horticultural-based origins in the foundational work by McIntyre [3], it has now begun to find its way into commercial applications. For more details regarding RSS, intrigued readers may refer to Chen et al. [4], Samawi and Muttlak [5], Bouza [6], Jeelani and Bouza [7], Eftekharian and Razmkhah [8] and Koyuncu [9]. In order to estimate the population mean, Muttlak [10] suggested median ranked set sampling (MRSS) and demonstrated that it produces estimates that are more accurate than RSS. MRSS can be thought of as a modified form of RSS, where the median of each sample in a cycle is measured rather than the $k$th $(k=1,2,\ldots,\vartheta)$ smallest unit in each ranked sample.
The most popular estimator of the population mean in sampling theory is the classic ratio estimator, used when there is a high positive correlation between the study variable (Y) and the auxiliary variable (X) [11]. Al-Omari [12] considered the MRSS scheme when proposing new ratio-type estimators based on the first and third quartiles of the auxiliary variable. The original MRSS structure proposed by Al-Omari [12] requires $\vartheta$ independent samples, each of $\vartheta$ bivariate units, from a finite population. The variable of interest Y is ranked by individual judgment, such as a visual examination, or by means of an accompanying variable associated with Y. Al-Omari [12] considered ranking on the auxiliary variable X as follows: when $\vartheta$ is odd, the $\left(\frac{\vartheta+1}{2}\right)$th smallest X and the corresponding Y are chosen from each sample in a cycle. When $\vartheta$ is even, the $\left(\frac{\vartheta}{2}\right)$th smallest X and the associated Y are chosen from the first $\frac{\vartheta}{2}$ sets, and the $\left(\frac{\vartheta+2}{2}\right)$th smallest X and associated Y are chosen from the remaining $\frac{\vartheta}{2}$ sets. For more information, see Al-Omari [12]. The cycle can be repeated $m \geq 1$ times to obtain a total sample size of $m\vartheta$. Later, Koyuncu [13] expanded on Al-Omari's [12] concept and introduced regression-, exponential- and difference-type estimators. However, all of this work concerns traditional ratio- and regression-type mean estimation under MRSS. Taking motivation from these studies, in this paper we develop calibration-type mean estimators under stratified MRSS.
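To make the odd/even selection rule concrete, here is a small Python sketch of one MRSS cycle in the spirit of Al-Omari [12]: units are ranked on X and only the median-position (X, Y) pair is retained from each set. The toy population, seed and coefficients are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def mrss_cycle(x_pop, y_pop, set_size, rng):
    """One MRSS cycle: set_size sets of set_size bivariate units; rank each set on X
    and keep the median-position (X, Y) pair according to the odd/even rule."""
    pairs = []
    for s in range(set_size):
        idx = rng.choice(len(x_pop), size=set_size, replace=False)
        order = idx[np.argsort(x_pop[idx])]          # ranking on the auxiliary variable X
        if set_size % 2 == 1:                        # odd: (set_size+1)/2-th unit from every set
            pick = order[(set_size + 1) // 2 - 1]
        else:                                        # even: set_size/2-th from the first half of sets,
            half = set_size // 2                     #       (set_size+2)/2-th from the second half
            pick = order[half - 1] if s < half else order[half]
        pairs.append((x_pop[pick], y_pop[pick]))
    return np.array(pairs)                           # shape (set_size, 2): columns (X, Y)

rng = np.random.default_rng(2)
x = rng.normal(2, 1, 1000); y = 0.9 * x + rng.normal(0, 0.5, 1000)   # toy correlated population
print(mrss_cycle(x, y, 4, rng))                      # even set size: ranks 2, 2, 3, 3 are kept
```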
The remainder of this article is structured as follows: In Section 2, we describe the calibration technique and present adapted estimators under stratified MRSS. In Section 3, we propose a new family of estimators with a set of calibration constraints. Section 4 is dedicated to the two-stage stratified MRSS scheme. In Section 5, we conduct a thorough simulation analysis comparing the effectiveness of the proposed estimators with the adapted ones. Finally, in Section 6, we offer concluding remarks.

2. Adapted Estimators under Stratified MRSS Design

The effectiveness of the mean estimator from a finite population can be increased at various stages when auxiliary information is supplied. There are many instances in everyday life where the research variable Y and the auxiliary variable X have a linear relationship. Think about your height and weight, as taller people tend to weigh more; think about your GPA and SAT scores, as students with higher GPAs typically perform better on the SAT test; think about the relationship between depression and suicide: severe depression increases the chance of suicide compared to those who do not have depression [14]; take body mass index (BMI) and total cholesterol as an example. It has been demonstrated that these two variables have a direct and positive association [15].
A basic method of adjusting the initial weights with the goal of minimizing a specified distance measure while taking into account auxiliary data is known as calibration estimation. By creating new calibration weights in stratified sampling, researchers have attempted to boost estimates of the population parameter in the literature. A distance metric and a set of calibration constraints are the two fundamental building blocks in the creation of new calibration weights.
The development of calibration estimation in survey sampling dates back to Deville and Särndal [16]. In the presence of auxiliary information, they constructed the calibration constraints. They claim that when the sample sum of the weighted auxiliary variable equals the known population total of that auxiliary variable, the calibrated weights may provide accurate estimates. Because there is a significant correlation between the study variable and the auxiliary variables, weights that are effective for the auxiliary variable should also be effective for the study variable. Numerous authors have investigated calibration estimation under various calibration constraints in survey sampling in the wake of Deville and Särndal [16]. The first extended calibration method for a stratified sampling design was introduced by Singh, Horn, and Yu [17]. Koyuncu and Kadilar [13] provided corrected expressions of the Tracy et al. [18] calibrated weights and also introduced new, improved calibration weights. Furthermore, Sinha et al. [19] and Garg and Pachori [20] extended this work to the two-stage stratified sampling scheme. Taking motivation from these important studies, we adapt the Sinha et al. [19] and Garg and Pachori [20] estimators under MRSS.

2.1. Sinha et al. (2017) Estimator [19]

In a stratified sampling design, a random sample of size $n_\delta$ is drawn without replacement from a population of size $N_\delta$ in stratum $\delta$ $(\delta=1,2,\ldots,\gamma)$. Let $(X_{i(1)},Y_{i[1]}),(X_{i(2)},Y_{i[2]}),\ldots,(X_{i(n_\delta)},Y_{i[n_\delta]})$ be the order statistics of $X_{i1},X_{i2},\ldots,X_{in_\delta}$ and the judgment order of $Y_{i1},Y_{i2},\ldots,Y_{in_\delta}$ in the $\delta$th stratum, for $i=1,2,\ldots,n_\delta$. Here, $(\,)$ and $[\,]$ indicate that the ranking of X is perfect and the ranking of Y has errors. For odd and even sample sizes, the units measured using MRSS are denoted by M(O) and M(E), respectively.
To convey the spirit of the sample selection, we provide small examples for the even and odd sample size cases below:
  • In the case of an even sample size in the $\delta$th stratum, the $\left(\frac{n_\delta}{2}\right)$th smallest X and the associated Y are chosen from the first $\frac{n_\delta}{2}$ sets, and the $\left(\frac{n_\delta+2}{2}\right)$th smallest X and associated Y are chosen from the remaining $\frac{n_\delta}{2}$ sets. A small MRSS example for an even sample size is given in Table 1 for $i=1,2,\ldots,4$. Clearly, for $n_\delta=4$, $X_{(n_\delta/2)}=X_{(2)}$ with the associated Y is selected from the first and second sets, i.e., $(X_{1(2)},Y_{1[2]})$ and $(X_{2(2)},Y_{2[2]})$. Furthermore, $X_{((n_\delta+2)/2)}=X_{(3)}$ with the associated Y is selected from the remaining two sets, i.e., $(X_{3(3)},Y_{3[3]})$ and $(X_{4(3)},Y_{4[3]})$.
  • In the case of an odd sample size in the $\delta$th stratum, the $\left(\frac{n_\delta+1}{2}\right)$th smallest X and the associated Y are chosen from each set. A small MRSS example for an odd sample size is given in Table 2 for $i=1,2,3$. Clearly, for $n_\delta=3$, $X_{((n_\delta+1)/2)}=X_{(2)}$ with the associated Y is selected from each set, i.e., $(X_{1(2)},Y_{1[2]})$, $(X_{2(2)},Y_{2[2]})$ and $(X_{3(2)},Y_{3[2]})$.
For an odd sample size, let $\big(X_{1(\frac{n_\delta+1}{2})},Y_{1[\frac{n_\delta+1}{2}]}\big),\big(X_{2(\frac{n_\delta+1}{2})},Y_{2[\frac{n_\delta+1}{2}]}\big),\ldots,\big(X_{n_\delta(\frac{n_\delta+1}{2})},Y_{n_\delta[\frac{n_\delta+1}{2}]}\big)$ denote the units observed by M(O) in the $\delta$th stratum. Let $\bar{x}_{st(M(O))}=\sum_{\delta=1}^{\gamma}W_\delta\,\bar{x}_{\delta(M(O))}$ and $\bar{y}_{st(M(O))}=\sum_{\delta=1}^{\gamma}W_\delta\,\bar{y}_{\delta(M(O))}$ be the overall sample means over the strata for X and Y, respectively, where $\bar{y}_{\delta(M(O))}=\frac{1}{n_\delta}\sum_{i=1}^{n_\delta}Y_{\delta i[\frac{n_\delta+1}{2}]}$ and $\bar{x}_{\delta(M(O))}=\frac{1}{n_\delta}\sum_{i=1}^{n_\delta}X_{\delta i(\frac{n_\delta+1}{2})}$ are the sample means in the $\delta$th stratum. In addition, $Var\big(\bar{x}_{st(M(O))}\big)=\sum_{\delta=1}^{\gamma}\frac{W_\delta^{2}}{2n_\delta}\,\sigma^{2}_{x(\frac{n_\delta+1}{2})}$ and $Var\big(\bar{y}_{st(M(O))}\big)=\sum_{\delta=1}^{\gamma}\frac{W_\delta^{2}}{2n_\delta}\,\sigma^{2}_{y[\frac{n_\delta+1}{2}]}$, where $\sigma^{2}_{x(\frac{n_\delta+1}{2})}=\frac{1}{n_\delta^{2}}\sum_{\delta=1}^{\gamma}Var\big(X_{\delta i(\frac{n_\delta+1}{2})}\big)$ and $\sigma^{2}_{y[\frac{n_\delta+1}{2}]}=\frac{1}{n_\delta^{2}}\sum_{\delta=1}^{\gamma}Var\big(Y_{\delta i[\frac{n_\delta+1}{2}]}\big)$. Note that $Var\big(\bar{y}_{st(M(O))}\big)$ and $Var\big(\bar{x}_{st(M(O))}\big)$ are the overall sample variances over the strata for Y and X, respectively. The notations $Y_{\delta i[\frac{n_\delta+1}{2}]}$ and $X_{\delta i(\frac{n_\delta+1}{2})}$ represent the selected MRSS sample values of the study and auxiliary variables for an odd sample size.
For an even sample size, let $\big(X_{1(\frac{n_\delta}{2})},Y_{1[\frac{n_\delta}{2}]}\big),\big(X_{2(\frac{n_\delta}{2})},Y_{2[\frac{n_\delta}{2}]}\big),\ldots,\big(X_{\frac{n_\delta}{2}(\frac{n_\delta}{2})},Y_{\frac{n_\delta}{2}[\frac{n_\delta}{2}]}\big),\big(X_{\frac{n_\delta+2}{2}(\frac{n_\delta+2}{2})},Y_{\frac{n_\delta+2}{2}[\frac{n_\delta+2}{2}]}\big),\ldots,\big(X_{n_\delta(\frac{n_\delta+2}{2})},Y_{n_\delta[\frac{n_\delta+2}{2}]}\big)$ denote the units observed by M(E) in the $\delta$th stratum. Let $\bar{x}_{st(M(E))}=\sum_{\delta=1}^{\gamma}W_\delta\,\bar{x}_{\delta(M(E))}$ and $\bar{y}_{st(M(E))}=\sum_{\delta=1}^{\gamma}W_\delta\,\bar{y}_{\delta(M(E))}$ be the overall sample means over the strata for X and Y, respectively. Here, $\bar{x}_{\delta(M(E))}=\frac{1}{n_\delta}\Big(\sum_{i=1}^{n_\delta/2}X_{\delta i(\frac{n_\delta}{2})}+\sum_{i=\frac{n_\delta+2}{2}}^{n_\delta}X_{\delta i(\frac{n_\delta+2}{2})}\Big)$ and $\bar{y}_{\delta(M(E))}=\frac{1}{n_\delta}\Big(\sum_{i=1}^{n_\delta/2}Y_{\delta i[\frac{n_\delta}{2}]}+\sum_{i=\frac{n_\delta+2}{2}}^{n_\delta}Y_{\delta i[\frac{n_\delta+2}{2}]}\Big)$ are the sample means in the $\delta$th stratum. In addition, $Var\big(\bar{x}_{st(M(E))}\big)=\sum_{\delta=1}^{\gamma}\frac{W_\delta^{2}}{2n_\delta}\Big(\sigma^{2}_{x(\frac{n_\delta}{2})}+\sigma^{2}_{x(\frac{n_\delta+2}{2})}\Big)$ and $Var\big(\bar{y}_{st(M(E))}\big)=\sum_{\delta=1}^{\gamma}\frac{W_\delta^{2}}{2n_\delta}\Big(\sigma^{2}_{y[\frac{n_\delta}{2}]}+\sigma^{2}_{y[\frac{n_\delta+2}{2}]}\Big)$, where $\sigma^{2}_{x(\frac{n_\delta}{2})}=\frac{1}{n_\delta^{2}}\sum_{\delta=1}^{\gamma}Var\big(X_{\delta i(\frac{n_\delta}{2})}\big)$, $\sigma^{2}_{x(\frac{n_\delta+2}{2})}=\frac{1}{n_\delta^{2}}\sum_{\delta=1}^{\gamma}Var\big(X_{\delta i(\frac{n_\delta+2}{2})}\big)$, $\sigma^{2}_{y[\frac{n_\delta}{2}]}=\frac{1}{n_\delta^{2}}\sum_{\delta=1}^{\gamma}Var\big(Y_{\delta i[\frac{n_\delta}{2}]}\big)$ and $\sigma^{2}_{y[\frac{n_\delta+2}{2}]}=\frac{1}{n_\delta^{2}}\sum_{\delta=1}^{\gamma}Var\big(Y_{\delta i[\frac{n_\delta+2}{2}]}\big)$. Note that $Var\big(\bar{y}_{st(M(E))}\big)$ and $Var\big(\bar{x}_{st(M(E))}\big)$ are the overall sample variances over the strata for Y and X, respectively. The notations $Y_{\delta i[\frac{n_\delta}{2}]}$, $Y_{\delta i[\frac{n_\delta+2}{2}]}$, $X_{\delta i(\frac{n_\delta}{2})}$ and $X_{\delta i(\frac{n_\delta+2}{2})}$ represent the selected MRSS sample values of the study and auxiliary variables for an even sample size. For more details about the MRSS notation, interested readers may refer to Koyuncu [13].
Let $j=(E,O)$ indicate whether the sample size is even or odd. We adapt Sinha et al.'s [19] calibration estimator under the MRSS design as
$$\bar{y}_{SM}=\sum_{\delta}\Phi_{\delta}\,\bar{y}_{\delta(M(j))},\qquad(1)$$
where, here and throughout the derivations, $\sum_{\delta}$ stands for $\sum_{\delta=1}^{\gamma}$, subject to the constraints
$$\sum_{\delta}\Phi_{\delta}=\sum_{\delta}W_{\delta},\qquad(2)$$
$$\sum_{\delta}\Phi_{\delta}\,\bar{x}_{\delta(M(j))}=\sum_{\delta}W_{\delta}\,\bar{X}_{\delta(M)},\qquad(3)$$
where $\bar{X}_{\delta(M)}$ is the population mean of the auxiliary variable in the $\delta$th stratum. Defining $\lambda_{1(M(j))}$ and $\lambda_{2(M(j))}$ as Lagrange multipliers, the Lagrange function is
$$\Delta_{(M(j))}=\sum_{\delta}\frac{(\Phi_{\delta}-W_{\delta})^{2}}{Q_{\delta}W_{\delta}}-2\lambda_{1(M(j))}\Big(\sum_{\delta}\Phi_{\delta}-\sum_{\delta}W_{\delta}\Big)-2\lambda_{2(M(j))}\Big(\sum_{\delta}\Phi_{\delta}\,\bar{x}_{\delta(M(j))}-\sum_{\delta}W_{\delta}\,\bar{X}_{\delta(M)}\Big).\qquad(4)$$
Differentiating $\Delta_{(M(j))}$ with respect to the calibration weights and equating to zero gives the optimum value of $\Phi_{\delta}$,
$$\Phi_{\delta}=W_{\delta}+Q_{\delta}W_{\delta}\big(\lambda_{2(M(j))}\bar{x}_{\delta(M(j))}+\lambda_{1(M(j))}\big).\qquad(5)$$
Putting (5) in (2) and (3), we obtain
$$\lambda_{1(M(j))}=-\frac{\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}},\qquad(6)$$
$$\lambda_{2(M(j))}=\frac{\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}.\qquad(7)$$
Substituting these multipliers in (5), we obtain the calibration weights
$$\Phi_{\delta}=W_{\delta}+Q_{\delta}W_{\delta}\left(\frac{\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}\right)\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big).\qquad(8)$$
Finally, putting $\Phi_{\delta}$ in $\bar{y}_{SM}$, we obtain the calibrated mean estimator of the study variable
$$\bar{y}_{SM}=\sum_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}+\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\bar{x}_{\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big).\qquad(9)$$
This estimator can be rewritten as
$$\bar{y}_{SM}=\bar{y}_{st(M(j))}+\hat{b}_{(j)}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big),\qquad(10)$$
where
$$\hat{b}_{(j)}=\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\bar{x}_{\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}.\qquad(11)$$
Written separately for odd and even sample sizes,
$$\bar{y}_{SM}=\begin{cases}\bar{y}_{st(M(O))}+\hat{b}_{(O)}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(O))}\big)&\text{when } n \text{ is odd},\\[4pt]\bar{y}_{st(M(E))}+\hat{b}_{(E)}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(E))}\big)&\text{when } n \text{ is even}.\end{cases}\qquad(12)$$
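For readers who prefer a computational view, the following Python sketch evaluates the adapted estimator (12) from per-stratum MRSS summaries. The input arrays are hypothetical values, not data from the paper, and $Q_\delta=1$ is used, which corresponds to $\bar{y}_{SM}^{I}$ in Table 3.

```python
import numpy as np

def calib_mean_sinha(W, ybar, xbar, Xbar, Q=None):
    """Adapted Sinha-et-al.-type calibrated mean under stratified MRSS:
    ybar_SM = sum(W*ybar) + b_hat * sum(W*(Xbar - xbar)), with b_hat as in (11)."""
    Q = np.ones_like(W) if Q is None else Q
    qw = Q * W
    num = qw.sum() * (qw * ybar * xbar).sum() - (qw * xbar).sum() * (qw * ybar).sum()
    den = qw.sum() * (qw * xbar**2).sum() - (qw * xbar).sum() ** 2
    b_hat = num / den
    return (W * ybar).sum() + b_hat * (W * (Xbar - xbar)).sum()

# hypothetical per-stratum summaries (gamma = 4 strata)
W    = np.array([0.25, 0.25, 0.25, 0.25])   # stratum weights
ybar = np.array([4.1, 3.9, 4.2, 4.0])       # MRSS sample means of Y
xbar = np.array([2.1, 1.9, 2.2, 2.0])       # MRSS sample means of X
Xbar = np.array([2.0, 2.0, 2.0, 2.0])       # known population means of X
print(calib_mean_sinha(W, ybar, xbar, Xbar))
```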

2.2. Garg and Pachori (2019) Estimator [20]

Let $j=(E,O)$ again indicate whether the sample size is even or odd; we adapt Garg and Pachori's [20] calibration estimator under the MRSS design as
$$\bar{y}_{GM}=\sum_{\delta}\Phi_{\delta}\,\bar{y}_{\delta(M(j))},\qquad(13)$$
subject to the constraints
$$\sum_{\delta}\Phi_{\delta}=\sum_{\delta}W_{\delta},\qquad(14)$$
$$\sum_{\delta}\Phi_{\delta}\,\hat{C}_{x\delta(M(j))}=\sum_{\delta}W_{\delta}\,C_{x\delta(M)},\qquad(15)$$
where $\hat{C}_{x\delta(M(j))}$ and $C_{x\delta(M)}$ denote the sample and population coefficient of variation (CV) of the auxiliary variable in the $\delta$th stratum. Defining $\lambda_{1(M(j))}$ and $\lambda_{2(M(j))}$ as Lagrange multipliers, the Lagrange function is
$$\Delta_{(M(j))}=\sum_{\delta}\frac{(\Phi_{\delta}-W_{\delta})^{2}}{Q_{\delta}W_{\delta}}-2\lambda_{1(M(j))}\Big(\sum_{\delta}\Phi_{\delta}-\sum_{\delta}W_{\delta}\Big)-2\lambda_{2(M(j))}\Big(\sum_{\delta}\Phi_{\delta}\,\hat{C}_{x\delta(M(j))}-\sum_{\delta}W_{\delta}\,C_{x\delta(M)}\Big).\qquad(16)$$
Differentiating $\Delta_{(M(j))}$ with respect to the calibration weights and equating to zero gives the optimum value of $\Phi_{\delta}$,
$$\Phi_{\delta}=W_{\delta}+Q_{\delta}W_{\delta}\big(\lambda_{2(M(j))}\hat{C}_{x\delta(M(j))}+\lambda_{1(M(j))}\big).\qquad(17)$$
Putting (17) in (14) and (15), we obtain
$$\lambda_{1(M(j))}=-\frac{\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}},\qquad(18)$$
$$\lambda_{2(M(j))}=\frac{\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}.\qquad(19)$$
Substituting these multipliers in (17), we obtain the calibration weights
$$\Phi_{\delta}=W_{\delta}+Q_{\delta}W_{\delta}\left(\frac{\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}\right)\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big).\qquad(20)$$
Putting $\Phi_{\delta}$ in $\bar{y}_{GM}$, we obtain the calibrated mean estimator of the study variable
$$\bar{y}_{GM}=\sum_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}+\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big).\qquad(21)$$
This estimator can be rewritten as
$$\bar{y}_{GM}=\bar{y}_{st(M(j))}+\hat{b}_{(j)}\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big),\qquad(22)$$
where
$$\hat{b}_{(j)}=\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}.\qquad(23)$$
Written separately for odd and even sample sizes,
$$\bar{y}_{GM}=\begin{cases}\bar{y}_{st(M(O))}+\hat{b}_{(O)}\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(O))}\big)&\text{when } n \text{ is odd},\\[4pt]\bar{y}_{st(M(E))}+\hat{b}_{(E)}\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(E))}\big)&\text{when } n \text{ is even}.\end{cases}\qquad(24)$$
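The CV-based adapted estimator (21)-(24) has exactly the same algebraic shape as the mean-based one, with $\bar{x}_{\delta(M(j))}$ replaced by the sample CV. A minimal self-contained Python sketch with hypothetical per-stratum CV summaries and $Q_\delta=1$ (i.e., $\bar{y}_{GM}^{I}$) follows.

```python
import numpy as np

# hypothetical per-stratum summaries; calibration is on the CV of X (see (22)-(24))
W      = np.array([0.25, 0.25, 0.25, 0.25])
ybar   = np.array([4.1, 3.9, 4.2, 4.0])
Cx_hat = np.array([0.48, 0.55, 0.50, 0.52])   # sample CVs of X under MRSS
Cx     = np.array([0.50, 0.50, 0.50, 0.50])   # known population CVs of X
qw  = W                                        # Q_delta = 1
num = qw.sum() * (qw * ybar * Cx_hat).sum() - (qw * Cx_hat).sum() * (qw * ybar).sum()
den = qw.sum() * (qw * Cx_hat**2).sum() - (qw * Cx_hat).sum() ** 2
ybar_GM = (W * ybar).sum() + (num / den) * (W * (Cx - Cx_hat)).sum()
print(ybar_GM)
```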

3. Proposed Family of Estimators in MRSS

Taking motivation from Sinha et al. [19] and Garg and Pachori [20], we propose the following estimator under stratified MRSS:
$$\bar{y}_{PM}=\sum_{\delta}\Phi_{\delta}\,\bar{y}_{\delta(M(j))},\qquad(25)$$
subject to the constraints
$$\sum_{\delta}\Phi_{\delta}\,\bar{x}_{\delta(M(j))}=\sum_{\delta}W_{\delta}\,\bar{X}_{\delta(M)},\qquad(26)$$
$$\sum_{\delta}\Phi_{\delta}\,\hat{C}_{x\delta(M(j))}=\sum_{\delta}W_{\delta}\,C_{x\delta(M)},\qquad(27)$$
$$\sum_{\delta}\Phi_{\delta}=\sum_{\delta}W_{\delta}.\qquad(28)$$
Defining $\lambda_{1(M(j))}$, $\lambda_{2(M(j))}$ and $\lambda_{3(M(j))}$ as Lagrange multipliers, the Lagrange function is
$$\Delta_{(M(j))}=\sum_{\delta}\frac{(\Phi_{\delta}-W_{\delta})^{2}}{Q_{\delta}W_{\delta}}-2\lambda_{1(M(j))}\Big(\sum_{\delta}\Phi_{\delta}\,\bar{x}_{\delta(M(j))}-\sum_{\delta}W_{\delta}\,\bar{X}_{\delta(M)}\Big)-2\lambda_{2(M(j))}\Big(\sum_{\delta}\Phi_{\delta}\,\hat{C}_{x\delta(M(j))}-\sum_{\delta}W_{\delta}\,C_{x\delta(M)}\Big)-2\lambda_{3(M(j))}\Big(\sum_{\delta}\Phi_{\delta}-\sum_{\delta}W_{\delta}\Big).\qquad(29)$$
Differentiating $\Delta_{(M(j))}$ with respect to $\Phi_{\delta}$ and equating to zero, we obtain the calibration weight
$$\Phi_{\delta}=W_{\delta}+Q_{\delta}W_{\delta}\big(\lambda_{1(M(j))}\bar{x}_{\delta(M(j))}+\lambda_{2(M(j))}\hat{C}_{x\delta(M(j))}+\lambda_{3(M(j))}\big).\qquad(30)$$
Substituting (30) in (26), (27) and (28) yields a system of three equations, which can be written in matrix form as
$$G_{(3\times3)}\,\lambda_{(3\times1)}=F_{(3\times1)},\qquad(31)$$
where
$$\lambda_{(3\times1)}=\begin{pmatrix}\lambda_{1(M(j))}\\ \lambda_{2(M(j))}\\ \lambda_{3(M(j))}\end{pmatrix},\qquad F_{(3\times1)}=\begin{pmatrix}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\\ \sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\\ 0\end{pmatrix},$$
$$G_{(3\times3)}=\begin{pmatrix}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}&\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}&\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\\ \sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{x}_{\delta(M(j))}&\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}&\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\\ \sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}&\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}&\sum_{\delta}Q_{\delta}W_{\delta}\end{pmatrix}.$$
Solving this system for the multipliers, we obtain
$$\lambda_{1(M(j))}=\frac{D_{1(M(j))}}{H},\qquad\lambda_{2(M(j))}=\frac{D_{2(M(j))}}{H},\qquad\lambda_{3(M(j))}=\frac{D_{3(M(j))}}{H},$$
where
$$D_{1(M(j))}=\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}^{2}_{x\delta(M(j))}-\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\Big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\Big)^{2}+\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}-\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{x}_{\delta(M(j))},$$
$$D_{2(M(j))}=\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}^{2}_{\delta(M(j))}-\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\Big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\Big)^{2}-\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}+\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))},$$
$$D_{3(M(j))}=\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{x}_{\delta(M(j))}-\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}^{2}_{x\delta(M(j))}+\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}-\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}^{2}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))},$$
$$H=\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}^{2}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}^{2}_{\delta(M(j))}-\Big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\Big)^{2}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}^{2}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\Big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}\Big)^{2}-\Big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\Big)^{2}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}^{2}_{\delta(M(j))}+2\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}.$$
Substituting these values in (30) and (25), we obtain the calibrated estimator of the study variable
$$\bar{y}_{PM}=\bar{y}_{st(M(j))}+\lambda_{1(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\bar{y}_{\delta(M(j))}+\lambda_{2(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{y}_{\delta(M(j))}+\lambda_{3(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}$$
$$=\sum_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}+\hat{b}_{1}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(j))}\big)+\hat{b}_{2}\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(j))}\big),$$
where
$$\hat{b}_{1(j)}=\frac{D_{4(M(j))}}{H},\qquad\hat{b}_{2(j)}=\frac{D_{5(M(j))}}{H},$$
with
$$D_{4(M(j))}=\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}^{2}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\bar{y}_{\delta(M(j))}\Big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\Big)^{2}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}+\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}+\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{x}_{\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}^{2}_{x\delta(M(j))},$$
$$D_{5(M(j))}=\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{x}_{\delta(M(j))}+\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}^{2}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}+\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{y}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}^{2}_{\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{y}_{\delta(M(j))}\Big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\Big)^{2}.$$
Written separately for odd and even sample sizes,
$$\bar{y}_{PM}=\begin{cases}\bar{y}_{st(M(O))}+\hat{b}_{1(O)}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(O))}\big)+\hat{b}_{2(O)}\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(O))}\big)&\text{when } n \text{ is odd},\\[4pt]\bar{y}_{st(M(E))}+\hat{b}_{1(E)}\sum_{\delta}W_{\delta}\big(\bar{X}_{\delta(M)}-\bar{x}_{\delta(M(E))}\big)+\hat{b}_{2(E)}\sum_{\delta}W_{\delta}\big(C_{x\delta(M)}-\hat{C}_{x\delta(M(E))}\big)&\text{when } n \text{ is even}.\end{cases}$$
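A compact way to evaluate the proposed estimator numerically is to build the matrix system (31) and solve it directly rather than expanding the determinants by hand. The following Python sketch does this for hypothetical per-stratum summaries with $Q_\delta=1$; the input values are illustrative assumptions.

```python
import numpy as np

def proposed_calib_mean(W, ybar, xbar, Cx_hat, Xbar, Cx, Q=None):
    """Proposed estimator: solve G lambda = F (see (31)) and plug the multipliers
    into the calibrated weights Phi = W + Q*W*(l1*xbar + l2*Cx_hat + l3), eq. (30)."""
    Q = np.ones_like(W) if Q is None else Q
    qw = Q * W
    G = np.array([
        [(qw * xbar**2).sum(),       (qw * xbar * Cx_hat).sum(), (qw * xbar).sum()],
        [(qw * Cx_hat * xbar).sum(), (qw * Cx_hat**2).sum(),     (qw * Cx_hat).sum()],
        [(qw * xbar).sum(),          (qw * Cx_hat).sum(),         qw.sum()],
    ])
    F = np.array([(W * (Xbar - xbar)).sum(), (W * (Cx - Cx_hat)).sum(), 0.0])
    l1, l2, l3 = np.linalg.solve(G, F)
    Phi = W + qw * (l1 * xbar + l2 * Cx_hat + l3)   # calibrated weights
    return (Phi * ybar).sum()

W      = np.array([0.25, 0.25, 0.25, 0.25])        # hypothetical summaries, gamma = 4 strata
ybar   = np.array([4.1, 3.9, 4.2, 4.0])
xbar   = np.array([2.1, 1.9, 2.2, 2.0]);     Xbar = np.full(4, 2.0)
Cx_hat = np.array([0.48, 0.55, 0.50, 0.52]); Cx   = np.full(4, 0.50)
print(proposed_calib_mean(W, ybar, xbar, Cx_hat, Xbar, Cx))
```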

4. Two-Stage Stratified MRSS

In real life, a double (two-phase) sampling approach can be used to estimate the population mean when the population mean of the auxiliary variable is unknown. This section assumes that the mean of the auxiliary variable is not available. Accordingly, following the two-stage MRSS approach described in Al-Omari [12] and Koyuncu [13], simple random sampling is used at the first stage and median ranked set sampling at the second stage. Keep in mind that the first-phase sample size is $n_a=n^2$ and the second-phase sample size is $n$. Let $\bar{x}_{a\delta(M)}$ and $\hat{C}_{xa\delta(M)}$ be the first-phase sample mean and CV of the auxiliary variable, while $\bar{x}_{\delta(M(j))}$, $\bar{y}_{\delta(M(j))}$ and $\hat{C}_{x\delta(M(j))}$ are the second-phase sample characteristics of the auxiliary and study variables.

4.1. Adapted Estimators in Two-Stage Stratified MRSS

Sinha et al.'s [19] estimator under two-stage stratified MRSS is as follows:
$$\bar{y}_{SaM}=\sum_{\delta}\Phi_{a\delta}\,\bar{y}_{\delta(M(j))},\qquad(35)$$
subject to the constraints
$$\sum_{\delta}\Phi_{a\delta}=\sum_{\delta}W_{\delta},\qquad(36)$$
$$\sum_{\delta}\Phi_{a\delta}\,\bar{x}_{\delta(M(j))}=\sum_{\delta}W_{\delta}\,\bar{x}_{a\delta(M)}.\qquad(37)$$
Defining $\lambda_{1(M(j))}$ and $\lambda_{2(M(j))}$ as Lagrange multipliers, the Lagrange function is
$$\Delta_{(M(j))}=\sum_{\delta}\frac{(\Phi_{a\delta}-W_{\delta})^{2}}{Q_{\delta}W_{\delta}}-2\lambda_{1(M(j))}\Big(\sum_{\delta}\Phi_{a\delta}-\sum_{\delta}W_{\delta}\Big)-2\lambda_{2(M(j))}\Big(\sum_{\delta}\Phi_{a\delta}\,\bar{x}_{\delta(M(j))}-\sum_{\delta}W_{\delta}\,\bar{x}_{a\delta(M)}\Big).\qquad(38)$$
Differentiating $\Delta_{(M(j))}$ with respect to the calibration weights, we obtain
$$\Phi_{a\delta}=W_{\delta}+Q_{\delta}W_{\delta}\big(\lambda_{2(M(j))}\bar{x}_{\delta(M(j))}+\lambda_{1(M(j))}\big).\qquad(39)$$
Putting (39) in (36) and (37), we obtain
$$\lambda_{1(M(j))}=-\frac{\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}},\qquad(40)$$
$$\lambda_{2(M(j))}=\frac{\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}.\qquad(41)$$
Substituting these multipliers in (39), we obtain the optimum calibration weight
$$\Phi_{a\delta}=W_{\delta}+Q_{\delta}W_{\delta}\left(\frac{\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}\right)\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(j))}\big).\qquad(42)$$
Putting $\Phi_{a\delta}$ in $\bar{y}_{SaM}$ gives
$$\bar{y}_{SaM}=\sum_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}+\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\bar{x}_{\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(j))}\big).\qquad(43)$$
This estimator can be rewritten as
$$\bar{y}_{SaM}=\bar{y}_{st(M(j))}+\hat{b}_{(j)}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(j))}\big),\qquad(44)$$
where
$$\hat{b}_{(j)}=\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\bar{x}_{\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\big)^{2}}.\qquad(45)$$
Written separately for odd and even sample sizes,
$$\bar{y}_{SaM}=\begin{cases}\bar{y}_{st(M(O))}+\hat{b}_{(O)}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(O))}\big)&\text{when } n \text{ is odd},\\[4pt]\bar{y}_{st(M(E))}+\hat{b}_{(E)}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(E))}\big)&\text{when } n \text{ is even}.\end{cases}\qquad(46)$$
Garg and Pachori's [20] estimator under two-stage MRSS is given below:
$$\bar{y}_{GaM}=\sum_{\delta}\Phi_{a\delta}\,\bar{y}_{\delta(M(j))},\qquad(47)$$
subject to the constraints
$$\sum_{\delta}\Phi_{a\delta}=\sum_{\delta}W_{\delta},\qquad(48)$$
$$\sum_{\delta}\Phi_{a\delta}\,\hat{C}_{x\delta(M(j))}=\sum_{\delta}W_{\delta}\,\hat{C}_{xa\delta(M)}.\qquad(49)$$
Defining $\lambda_{1(M(j))}$ and $\lambda_{2(M(j))}$ as Lagrange multipliers, the Lagrange function is
$$\Delta_{(M(j))}=\sum_{\delta}\frac{(\Phi_{a\delta}-W_{\delta})^{2}}{Q_{\delta}W_{\delta}}-2\lambda_{1(M(j))}\Big(\sum_{\delta}\Phi_{a\delta}-\sum_{\delta}W_{\delta}\Big)-2\lambda_{2(M(j))}\Big(\sum_{\delta}\Phi_{a\delta}\,\hat{C}_{x\delta(M(j))}-\sum_{\delta}W_{\delta}\,\hat{C}_{xa\delta(M)}\Big).\qquad(50)$$
Differentiating $\Delta_{(M(j))}$ with respect to the calibration weights, we obtain
$$\Phi_{a\delta}=W_{\delta}+Q_{\delta}W_{\delta}\big(\lambda_{2(M(j))}\hat{C}_{x\delta(M(j))}+\lambda_{1(M(j))}\big).\qquad(51)$$
Putting (51) in (48) and (49), we obtain
$$\lambda_{1(M(j))}=-\frac{\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}},\qquad(52)$$
$$\lambda_{2(M(j))}=\frac{\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\sum_{\delta}Q_{\delta}W_{\delta}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}.\qquad(53)$$
Substituting these multipliers in (51), we obtain the calibration weights
$$\Phi_{a\delta}=W_{\delta}+Q_{\delta}W_{\delta}\left(\frac{\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}\right)\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(j))}\big).\qquad(54)$$
Putting $\Phi_{a\delta}$ in $\bar{y}_{GaM}$ gives
$$\bar{y}_{GaM}=\sum_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}+\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(j))}\big).\qquad(55)$$
This estimator can be rewritten as
$$\bar{y}_{GaM}=\bar{y}_{st(M(j))}+\hat{b}_{(j)}\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(j))}\big),\qquad(56)$$
where
$$\hat{b}_{(j)}=\frac{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}\hat{C}_{x\delta(M(j))}-\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}}{\sum_{\delta}Q_{\delta}W_{\delta}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}^{2}-\big(\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\big)^{2}}.\qquad(57)$$
Written separately for odd and even sample sizes,
$$\bar{y}_{GaM}=\begin{cases}\bar{y}_{st(M(O))}+\hat{b}_{(O)}\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(O))}\big)&\text{when } n \text{ is odd},\\[4pt]\bar{y}_{st(M(E))}+\hat{b}_{(E)}\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(E))}\big)&\text{when } n \text{ is even}.\end{cases}\qquad(58)$$

4.2. Proposed Family of Estimators in Two-Stage Stratified MRSS

The proposed estimator under two-stage stratified MRSS is given below:
$$\bar{y}_{PaM}=\sum_{\delta}\Phi_{a\delta}\,\bar{y}_{\delta(M(j))},\qquad(59)$$
subject to the constraints
$$\sum_{\delta}\Phi_{a\delta}\,\bar{x}_{\delta(M(j))}=\sum_{\delta}W_{\delta}\,\bar{x}_{a\delta(M)},\qquad(60)$$
$$\sum_{\delta}\Phi_{a\delta}\,\hat{C}_{x\delta(M(j))}=\sum_{\delta}W_{\delta}\,\hat{C}_{xa\delta(M)},\qquad(61)$$
$$\sum_{\delta}\Phi_{a\delta}=\sum_{\delta}W_{\delta}.\qquad(62)$$
Defining $\lambda_{1(M(j))}$, $\lambda_{2(M(j))}$ and $\lambda_{3(M(j))}$ as Lagrange multipliers, the Lagrange function is
$$\Delta_{(M(j))}=\sum_{\delta}\frac{(\Phi_{a\delta}-W_{\delta})^{2}}{Q_{\delta}W_{\delta}}-2\lambda_{1(M(j))}\Big(\sum_{\delta}\Phi_{a\delta}\,\bar{x}_{\delta(M(j))}-\sum_{\delta}W_{\delta}\,\bar{x}_{a\delta(M)}\Big)-2\lambda_{2(M(j))}\Big(\sum_{\delta}\Phi_{a\delta}\,\hat{C}_{x\delta(M(j))}-\sum_{\delta}W_{\delta}\,\hat{C}_{xa\delta(M)}\Big)-2\lambda_{3(M(j))}\Big(\sum_{\delta}\Phi_{a\delta}-\sum_{\delta}W_{\delta}\Big).\qquad(63)$$
Differentiating $\Delta_{(M(j))}$ with respect to the calibration weights, we obtain
$$\Phi_{a\delta}=W_{\delta}+Q_{\delta}W_{\delta}\big(\lambda_{1(M(j))}\bar{x}_{\delta(M(j))}+\lambda_{2(M(j))}\hat{C}_{x\delta(M(j))}+\lambda_{3(M(j))}\big).\qquad(64)$$
Substituting (64) in (60), (61) and (62) yields a system of three equations, which can be written in matrix form as
$$G_{(3\times3)}\,\lambda_{(3\times1)}=F_{(3\times1)},\qquad(65)$$
where $\lambda_{(3\times1)}$ and $G_{(3\times3)}$ are exactly as defined in Section 3 (they depend only on second-phase quantities) and
$$F_{(3\times1)}=\begin{pmatrix}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(j))}\big)\\ \sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(j))}\big)\\ 0\end{pmatrix}.$$
By substituting the values of $G_{(3\times3)}$, $\lambda_{(3\times1)}$ and $F_{(3\times1)}$ in (65) and then solving the system for the multipliers, we obtain
$$\lambda_{1(M(j))}=\frac{D_{1(M(j))}}{H},\qquad\lambda_{2(M(j))}=\frac{D_{2(M(j))}}{H},\qquad\lambda_{3(M(j))}=\frac{D_{3(M(j))}}{H},$$
where $H$ is exactly as in Section 3, and $D_{1(M(j))}$, $D_{2(M(j))}$ and $D_{3(M(j))}$ take the same form as in Section 3 with the population quantities $\bar{X}_{\delta(M)}$ and $C_{x\delta(M)}$ replaced throughout by their first-phase counterparts $\bar{x}_{a\delta(M)}$ and $\hat{C}_{xa\delta(M)}$. Substituting these values in (64) and (59), we have
$$\bar{y}_{PaM}=\bar{y}_{st(M(j))}+\lambda_{1(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{x}_{\delta(M(j))}\bar{y}_{\delta(M(j))}+\lambda_{2(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\hat{C}_{x\delta(M(j))}\bar{y}_{\delta(M(j))}+\lambda_{3(M(j))}\sum_{\delta}Q_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}$$
$$=\sum_{\delta}W_{\delta}\,\bar{y}_{\delta(M(j))}+\hat{b}_{1}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(j))}\big)+\hat{b}_{2}\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(j))}\big),$$
where
$$\hat{b}_{1(j)}=\frac{D_{4(M(j))}}{H},\qquad\hat{b}_{2(j)}=\frac{D_{5(M(j))}}{H},$$
and $D_{4(M(j))}$ and $D_{5(M(j))}$ are as defined in Section 3 (they involve only second-phase quantities). Written separately for odd and even sample sizes,
$$\bar{y}_{PaM}=\begin{cases}\bar{y}_{st(M(O))}+\hat{b}_{1(O)}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(O))}\big)+\hat{b}_{2(O)}\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(O))}\big)&\text{when } n \text{ is odd},\\[4pt]\bar{y}_{st(M(E))}+\hat{b}_{1(E)}\sum_{\delta}W_{\delta}\big(\bar{x}_{a\delta(M)}-\bar{x}_{\delta(M(E))}\big)+\hat{b}_{2(E)}\sum_{\delta}W_{\delta}\big(\hat{C}_{xa\delta(M)}-\hat{C}_{x\delta(M(E))}\big)&\text{when } n \text{ is even}.\end{cases}$$
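The two-stage version differs from Section 3 only in that the first-phase SRS supplies $\bar{x}_{a\delta(M)}$ and $\hat{C}_{xa\delta(M)}$ in place of the unknown population values. A minimal self-contained Python sketch follows; the first-phase samples are simulated, the second-phase summaries are stand-ins rather than genuine MRSS computations, and all numeric choices are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, n = 4, 5                                        # 4 strata; second phase n, first phase n*n
W = np.full(gamma, 0.25)

xbar_a = np.empty(gamma); Cxa_hat = np.empty(gamma)    # first-phase mean and CV of X per stratum
xbar = np.empty(gamma); Cx_hat = np.empty(gamma); ybar = np.empty(gamma)
for d in range(gamma):
    x1 = rng.normal(2, 1, size=n * n)                  # hypothetical first-phase SRS of X
    xbar_a[d], Cxa_hat[d] = x1.mean(), x1.std(ddof=1) / x1.mean()
    x2 = rng.choice(x1, size=n, replace=False)         # stand-in for the second-phase MRSS units
    xbar[d], Cx_hat[d] = x2.mean(), x2.std(ddof=1) / x2.mean()
    ybar[d] = 0.9 * xbar[d] + 2.2                      # illustrative Y summary

qw = W                                                 # Q_delta = 1, i.e. ybar_PaM^I
G = np.array([[(qw * xbar**2).sum(), (qw * xbar * Cx_hat).sum(), (qw * xbar).sum()],
              [(qw * Cx_hat * xbar).sum(), (qw * Cx_hat**2).sum(), (qw * Cx_hat).sum()],
              [(qw * xbar).sum(), (qw * Cx_hat).sum(), qw.sum()]])
F = np.array([(W * (xbar_a - xbar)).sum(), (W * (Cxa_hat - Cx_hat)).sum(), 0.0])
lam = np.linalg.solve(G, F)
Phi = W + qw * (lam[0] * xbar + lam[1] * Cx_hat + lam[2])   # calibrated weights, eq. (64)
print((Phi * ybar).sum())                                   # value of ybar_PaM
```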
Note that all the family members of the adapted and proposed classes of estimators, based on different choices of $Q_\delta$, are provided in Table 3; $Q_\delta$ is a suitably chosen weight that determines the particular member of each class.

5. Simulation Study

5.1. Simulation Design

The simulation experiments considered in this section are designed to provide insight into the efficiency of the proposed estimators $\bar{y}_{PM}^{I}$, $\bar{y}_{PM}^{II}$, $\bar{y}_{PM}^{III}$, $\bar{y}_{PaM}^{I}$, $\bar{y}_{PaM}^{II}$ and $\bar{y}_{PaM}^{III}$ compared with the adapted estimators $\bar{y}_{SM}^{I}$, $\bar{y}_{SM}^{II}$, $\bar{y}_{SM}^{III}$, $\bar{y}_{SaM}^{I}$, $\bar{y}_{SaM}^{II}$, $\bar{y}_{SaM}^{III}$, $\bar{y}_{GM}^{I}$, $\bar{y}_{GM}^{II}$, $\bar{y}_{GM}^{III}$, $\bar{y}_{GaM}^{I}$, $\bar{y}_{GaM}^{II}$ and $\bar{y}_{GaM}^{III}$. All samples were generated from a finite stratified population of size $\Omega=1000$ in each stratum, using four distinct (with respect to the variance-covariance matrix) bivariate Gaussian distributions, one per stratum, with $(\mu_x=2,\ \mu_y=4)$ and the variance-covariance matrices given, respectively, by the list below (a sketch of the population generation follows the list):
  • Stratum 1: $\Sigma=\begin{pmatrix}1&0.90\\0.90&1\end{pmatrix}$
  • Stratum 2: $\Sigma=\begin{pmatrix}1&0.76\\0.76&1\end{pmatrix}$
  • Stratum 3: $\Sigma=\begin{pmatrix}1&0.55\\0.55&1\end{pmatrix}$
  • Stratum 4: $\Sigma=\begin{pmatrix}1&0.30\\0.30&1\end{pmatrix}$
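Under the stated design, the stratified population can be generated as follows. This is a minimal sketch; the seed and the dictionary layout are our own choices and not part of the original study.

```python
import numpy as np

rng = np.random.default_rng(2023)
rhos = [0.90, 0.76, 0.55, 0.30]                    # stratum correlations from the design above
mu = [2.0, 4.0]                                    # (mu_x, mu_y)
Omega = 1000                                       # units per stratum

strata = []
for rho in rhos:
    cov = [[1.0, rho], [rho, 1.0]]                 # variance-covariance matrix of the stratum
    xy = rng.multivariate_normal(mu, cov, size=Omega)
    strata.append({"x": xy[:, 0], "y": xy[:, 1]})

print([np.corrcoef(s["x"], s["y"])[0, 1].round(2) for s in strata])   # roughly the target rhos
```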
Taking motivation from Koyuncu [13], we select samples from the above-mentioned stratified population. While they used stratified SRS, we adapt their framework to the stratified MRSS design. For a fair comparison between the adapted and proposed estimators, we consider both even and odd sample sizes under MRSS. For readability, the considered sample sizes are listed in Table 4, where $A_1,\ldots,A_4$ and $B_1,\ldots,B_4$ denote the overall strata sample sizes selected at the first and second stages, respectively.
Table 4. Details of the different sample sizes used in the simulation study.
MRSS:
$A_1$: $(n_1,n_2,n_3,n_4)=(3,5,5,3)$, results in Table 5
$A_2$: $(n_1,n_2,n_3,n_4)=(4,6,6,4)$, results in Table 6
$A_3$: $(n_1,n_2,n_3,n_4)=(5,7,7,5)$, results in Table 7
$A_4$: $(n_1,n_2,n_3,n_4)=(6,8,8,6)$, results in Table 8
Two-stage MRSS:
$B_1$: $(n_{a1},n_{a2},n_{a3},n_{a4})=(9,25,25,9)$, $(n_1,n_2,n_3,n_4)=(3,5,5,3)$, results in Table 5
$B_2$: $(n_{a1},n_{a2},n_{a3},n_{a4})=(16,36,36,16)$, $(n_1,n_2,n_3,n_4)=(4,6,6,4)$, results in Table 6
$B_3$: $(n_{a1},n_{a2},n_{a3},n_{a4})=(25,49,49,25)$, $(n_1,n_2,n_3,n_4)=(5,7,7,5)$, results in Table 7
$B_4$: $(n_{a1},n_{a2},n_{a3},n_{a4})=(36,64,64,36)$, $(n_1,n_2,n_3,n_4)=(6,8,8,6)$, results in Table 8
Table 5. PRE values for $(A_1, B_1)$.
MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{III}$):
$\bar{y}_{SM}^{I}$: 133.0992, 132.9015, 133.0761
$\bar{y}_{SM}^{II}$: 132.8973, 132.7000, 132.8742
$\bar{y}_{SM}^{III}$: 133.1524, 132.9547, 133.1293
$\bar{y}_{GM}^{I}$: 536.1575, 535.3614, 536.0645
$\bar{y}_{GM}^{II}$: 530.6326, 529.8446, 530.5405
$\bar{y}_{GM}^{III}$: 536.6391, 535.8423, 536.5460
Two-stage MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PaM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{III}$):
$\bar{y}_{SaM}^{I}$: 118.0992, 119.9015, 116.5761
$\bar{y}_{SaM}^{II}$: 127.8973, 126.7000, 126.3742
$\bar{y}_{SaM}^{III}$: 129.1524, 127.9547, 129.6293
$\bar{y}_{GaM}^{I}$: 534.1575, 532.3614, 532.7645
$\bar{y}_{GaM}^{II}$: 522.6326, 522.8446, 520.3405
$\bar{y}_{GaM}^{III}$: 525.6391, 525.8423, 522.1460
Table 6. PRE values for $(A_2, B_2)$.
MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{III}$):
$\bar{y}_{SM}^{I}$: 1017.100, 1017.528, 1017.366
$\bar{y}_{SM}^{II}$: 1018.984, 1019.413, 1019.251
$\bar{y}_{SM}^{III}$: 1016.884, 1017.312, 1017.149
$\bar{y}_{GM}^{I}$: 1560.270, 1553.899, 1563.418
$\bar{y}_{GM}^{II}$: 1647.891, 1650.885, 1648.266
$\bar{y}_{GM}^{III}$: 1561.818, 1564.453, 1563.970
Two-stage MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PaM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{III}$):
$\bar{y}_{SaM}^{I}$: 992.0999, 994.5282, 1009.366
$\bar{y}_{SaM}^{II}$: 1003.9842, 1003.4134, 1009.751
$\bar{y}_{SaM}^{III}$: 1002.8836, 1002.3119, 1010.649
$\bar{y}_{GaM}^{I}$: 1557.2701, 1562.8991, 1561.118
$\bar{y}_{GaM}^{II}$: 1645.8907, 1649.8852, 1648.766
$\bar{y}_{GaM}^{III}$: 1559.8180, 1567.4527, 1562.370
Table 7. PRE values for $(A_3, B_3)$.
MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{III}$):
$\bar{y}_{SM}^{I}$: 122.9272, 124.3531, 122.8358
$\bar{y}_{SM}^{II}$: 126.9125, 128.3847, 126.8182
$\bar{y}_{SM}^{III}$: 123.1588, 124.5874, 123.0673
$\bar{y}_{GM}^{I}$: 1627.4355, 1704.3119, 1622.5118
$\bar{y}_{GM}^{II}$: 1720.3714, 1798.3259, 1715.3786
$\bar{y}_{GM}^{III}$: 1658.3218, 1735.5565, 1653.3751
Two-stage MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PaM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{III}$):
$\bar{y}_{SaM}^{I}$: 101.92718, 101.3531, 114.8358
$\bar{y}_{SaM}^{II}$: 111.91254, 112.3847, 117.3182
$\bar{y}_{SaM}^{III}$: 109.15884, 109.5874, 116.5673
$\bar{y}_{GaM}^{I}$: 1615.43551, 1691.3119, 1617.2118
$\bar{y}_{GaM}^{II}$: 1702.37144, 1791.3259, 1705.8786
$\bar{y}_{GaM}^{III}$: 1637.32178, 1715.5565, 1637.7751
Table 8. PRE values for $(A_4, B_4)$.
MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{III}$):
$\bar{y}_{SM}^{I}$: 782.1886, 800.9280, 772.5701
$\bar{y}_{SM}^{II}$: 788.7738, 807.6710, 779.0744
$\bar{y}_{SM}^{III}$: 781.7996, 800.5298, 772.1859
$\bar{y}_{GM}^{I}$: 1485.6027, 1521.5577, 1467.9012
$\bar{y}_{GM}^{II}$: 1484.5264, 1520.1681, 1467.9857
$\bar{y}_{GM}^{III}$: 1492.3204, 1528.9696, 1474.7493
Two-stage MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PaM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{III}$):
$\bar{y}_{SaM}^{I}$: 757.1886, 777.9280, 764.5701
$\bar{y}_{SaM}^{II}$: 773.7738, 791.6710, 769.5744
$\bar{y}_{SaM}^{III}$: 767.7996, 785.5298, 765.6859
$\bar{y}_{GaM}^{I}$: 1484.6027, 1520.5577, 1466.6012
$\bar{y}_{GaM}^{II}$: 1482.5264, 1519.1681, 1465.4857
$\bar{y}_{GaM}^{III}$: 1490.3204, 1526.9696, 1472.1493
For single-stage stratified MRSS, $K_1=7000$ samples of sizes $n=A_1, A_2, A_3, A_4$ were selected independently from the population under the stratified MRSS design, and for the $k$th sample the estimates $(\hat{\phi}_{(k1)},\hat{\phi}_{(k2)})$ of $\mu_y$ were calculated, where
$$\hat{\phi}_{(k1)}\in\big\{\bar{y}_{SM}^{I},\bar{y}_{SM}^{II},\bar{y}_{SM}^{III},\bar{y}_{GM}^{I},\bar{y}_{GM}^{II},\bar{y}_{GM}^{III}\big\},\qquad \hat{\phi}_{(k2)}\in\big\{\bar{y}_{PM}^{I},\bar{y}_{PM}^{II},\bar{y}_{PM}^{III}\big\}.$$
Al-Omari [12] and Koyuncu [13] considered a double MRSS design. We adapt their strategy for double MRSS in the $\delta$th stratum: $K_1=7000$ samples of first-phase sizes $n_{a\delta}=n_\delta\times n_\delta$ (with the settings $A_1,\ldots,A_4$ and $B_1,\ldots,B_4$ of Table 4) were selected independently by SRS at the first stage, and then stratified MRSS samples of sizes $n_\delta=B_1,B_2,B_3,B_4$ were selected from the $n_\delta\times n_\delta$ first-phase units at the second stage. For the $k$th sample, the estimates $(\hat{\phi}_{(k1)},\hat{\phi}_{(k2)})$ of $\mu_y$ were calculated, where
$$\hat{\phi}_{(k1)}\in\big\{\bar{y}_{SaM}^{I},\bar{y}_{SaM}^{II},\bar{y}_{SaM}^{III},\bar{y}_{GaM}^{I},\bar{y}_{GaM}^{II},\bar{y}_{GaM}^{III}\big\},\qquad \hat{\phi}_{(k2)}\in\big\{\bar{y}_{PaM}^{I},\bar{y}_{PaM}^{II},\bar{y}_{PaM}^{III}\big\}.$$
The bias and MSE were calculated from
$$\mathrm{Bias}\big(\hat{\phi}_{(k1)}\big)=\frac{1}{K_1}\sum_{k_1=1}^{K_1}\big(\hat{\phi}_{(k1)}-\mu_y\big),\qquad \mathrm{MSE}\big(\hat{\phi}_{(k1)}\big)=\frac{1}{K_1}\sum_{k_1=1}^{K_1}\big(\hat{\phi}_{(k1)}-\mu_y\big)^{2},$$
$$\mathrm{Bias}\big(\hat{\phi}_{(k2)}\big)=\frac{1}{K_1}\sum_{k_1=1}^{K_1}\big(\hat{\phi}_{(k2)}-\mu_y\big),\qquad \mathrm{MSE}\big(\hat{\phi}_{(k2)}\big)=\frac{1}{K_1}\sum_{k_1=1}^{K_1}\big(\hat{\phi}_{(k2)}-\mu_y\big)^{2}.$$
The calculated bias values are presented in Table 9. After calculating the MSE values separately, the efficiency of the estimators was compared using the percent relative efficiency (PRE)
$$\mathrm{PRE}\big(\hat{\phi}_{(k1)},\hat{\phi}_{(k2)}\big)=\frac{\mathrm{MSE}\big(\hat{\phi}_{(k1)}\big)}{\mathrm{MSE}\big(\hat{\phi}_{(k2)}\big)}\times100.$$
We provide the PRE results in Tables 5-8.
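The Monte Carlo bookkeeping above reduces to a few lines of Python once the estimators are available as functions. In the sketch below, `draw_mrss_sample` and the entries of `estimators` are placeholders assumed by this sketch, not functions from the paper.

```python
import numpy as np

def monte_carlo(draw_mrss_sample, estimators, mu_y, K1=7000, seed=0):
    """Return bias and MSE for each estimator over K1 repeated stratified MRSS samples.
    draw_mrss_sample(rng) and the estimator functions are user-supplied placeholders."""
    rng = np.random.default_rng(seed)
    est = {name: np.empty(K1) for name in estimators}
    for k in range(K1):
        sample = draw_mrss_sample(rng)
        for name, fn in estimators.items():
            est[name][k] = fn(sample)
    bias = {name: (vals - mu_y).mean() for name, vals in est.items()}
    mse = {name: ((vals - mu_y) ** 2).mean() for name, vals in est.items()}
    return bias, mse

def pre(mse_adapted, mse_proposed):
    """Percent relative efficiency of a proposed estimator against an adapted one."""
    return 100.0 * mse_adapted / mse_proposed
```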

5.2. Real Life Application

We also assessed the properties of the proposed estimators using a real-life example. We use data on body mass index (BMI) as the study variable and weight as the auxiliary variable for 800 people in Turkey in 2014. The open-access dataset belongs to a health survey prepared by the Turkish Statistical Institute (TSI) that examines the determinants ("factors which may affect obesity") of health-related behaviors in Turkey for 800 people. All of the dataset information is available in Cetin and Koyuncu [21]. The data consist of $N=800$ observations with $\rho_{xy}=0.86$, $\mu_y=23.77$, $\mu_x=67.55$, $C_x=0.20$ and $C_y=0.17$. We stratified the dataset by gender into two strata. Some major characteristics of the strata are as follows:
  • Stratum I: $N_{h1}=477$, $\rho_{yxh1}=0.90$, $\mu_{yh1}=22.36$, $\mu_{xh1}=59.99$, $C_{xh1}=0.17$, $C_{yh1}=0.17$.
  • Stratum II: $N_{h2}=323$, $\rho_{yxh2}=0.80$, $\mu_{yh2}=25.85$, $\mu_{xh2}=78.72$, $C_{xh2}=0.04$, $C_{yh2}=0.13$.
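If the TSI dataset is available as a flat file, stratum summaries of this kind can be reproduced along the following lines. The file name and the column labels (`bmi`, `weight`, `gender`) are hypothetical and must be adapted to the actual data described in Cetin and Koyuncu [21].

```python
import numpy as np
import pandas as pd

# hypothetical file and column names; adapt to the actual TSI dataset of [21]
df = pd.read_csv("tsi_health_survey_2014.csv")
for label, g in df.groupby("gender"):
    y, x = g["bmi"].to_numpy(), g["weight"].to_numpy()
    print(label,
          "N =", len(g),
          "rho =", np.corrcoef(x, y)[0, 1].round(2),
          "mu_y =", y.mean().round(2), "mu_x =", x.mean().round(2),
          "C_x =", (x.std(ddof=1) / x.mean()).round(2),
          "C_y =", (y.std(ddof=1) / y.mean()).round(2))
```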
The calculated bias values are presented in Table 10.
The numerical comparisons based on PRE for the BMI data are provided in Table 11 and Table 12.
We draw the following points from the numerical investigation:
  • Table 9 and Table 10 show the bias results for the proposed and existing estimators based on the simulation and the real-life data. It is worth mentioning that the proposed estimators have less bias than the existing ones. Furthermore, in the simulation results of Table 9, the bias decreases as the sample size increases.
  • Clearly, PRE > 100 in every case, which means that all the proposed estimators perform better than the adapted estimators. Although this conclusion is based on our simulation and real-life study, we expect similar behavior under other settings as well.

6. Conclusions

MRSS is a well-known sampling technique. In this paper, we adapt the Sinha et al. [19] and Garg and Pachori [20] estimators to the stratified MRSS design. Additionally, new calibration estimators that use the mean and the coefficient of variation of an auxiliary variable as calibration constraints are proposed to estimate the population mean under stratified MRSS and stratified two-stage MRSS. The numerical results show that the proposed estimators are more efficient than the adapted ones. The proposed work is supported by a simulation study and a real-data application. We hope to extend the present work in light of Koyuncu [13].

Author Contributions

Methodology, U.S., I.A., F.A., I.M.A. and S.I.; Software, U.S. and I.M.A.; Writing—original draft, U.S., I.A., F.A., I.M.A. and S.I.; Writing— review & editing, U.S., I.A., F.A., I.M.A. and S.I.; Visualization, U.S.; Supervision, I.A.; Project administration, F.A.; Funding acquisition, F.A. and I.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was funded by (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; (2) The Deanship of Scientific Research at King Khalid University through the Research Groups Program under grant number R.G.P. 1/177/44.

Data Availability Statement

All the dataset information is already available in Cetin and Koyuncu [21].

Acknowledgments

The authors thank and extend their appreciation to the funders of this work: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; (2) The Deanship of Scientific Research at King Khalid University through the Research Groups Program under grant number R.G.P. 1/177/44.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sharma, R.; Garg, R.K.; Gaur, J.R. Various methods for the estimation of the post mortem interval from Calliphoridae: A review. Egypt. J. Forensic Sci. 2015, 5, 1–12.
  2. Takahasi, K.; Wakimoto, K. On unbiased estimates of the population mean based on the sample stratified by means of ordering. Ann. Inst. Stat. Math. 1968, 20, 1–31.
  3. McIntyre, G.A. A method for unbiased selective sampling, using ranked sets. Aust. J. Agric. Res. 1952, 3, 385–390.
  4. Chen, Z.; Bai, Z.; Sinha, B. Ranked Set Sampling: Theory and Applications; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2003; Volume 176.
  5. Samawi, H.M.; Muttlak, H.A. Estimation of ratio using rank set sampling. Biom. J. 1996, 38, 753–764.
  6. Bouza, C.N. Ranked set sampling for the product estimator. Investig. Oper. 2013, 29, 201–206.
  7. Jeelani, M.I.; Bouza, C.N. New ratio method of estimation under ranked set sampling. Investig. Oper. 2015, 36, 151–155.
  8. Eftekharian, A.; Razmkhah, M. On estimating the distribution function and odds using ranked set sampling. Stat. Probab. Lett. 2017, 122, 1–10.
  9. Koyuncu, N. Calibration estimator of population mean under stratified ranked set sampling design. Commun. Stat. Theory Methods 2018, 47, 5845–5853.
  10. Muttlak, H.A. Median ranked set sampling. J. Appl. Stat. Sci. 1997, 6, 245–255.
  11. Oral, E.; Oral, E. A robust alternative to the ratio estimator under non-normality. Stat. Probab. Lett. 2011, 81, 930–936.
  12. Al-Omari, A.I. Ratio estimation of the population mean using auxiliary information in simple random sampling and median ranked set sampling. Stat. Probab. Lett. 2012, 82, 1883–1890.
  13. Koyuncu, N. New difference-cum-ratio and exponential type estimators in median ranked set sampling. Hacet. J. Math. Stat. 2016, 45, 207–225.
  14. Johnson, D.; Dupuis, G.; Piche, J.; Clayborne, Z.; Colman, I. Adult mental health outcomes of adolescent depression: A systematic review. Depress. Anxiety 2018, 35, 700–716.
  15. Schroder, H.; Marrugat, J.; Elosua, R.; Covas, M.I.; REGICOR Investigators. Relationship between body mass index, serum cholesterol, leisure-time physical activity, and diet in a Mediterranean Southern-Europe population. Br. J. Nutr. 2003, 90, 431–440.
  16. Deville, J.C.; Särndal, C.E. Calibration estimators in survey sampling. J. Am. Stat. Assoc. 1992, 87, 376–382.
  17. Singh, S.; Horn, S.; Yu, F. Estimation variance of general regression estimator: Higher level calibration approach. Surv. Methodol. 1998, 48, 41–50.
  18. Tracy, D.S.; Singh, S.; Arnab, R. Note on calibration in stratified and double sampling. Surv. Methodol. 2003, 29, 99–104.
  19. Sinha, N.; Sisodia, B.V.S.; Singh, S.; Singh, S.K. Calibration approach estimation of the mean in stratified sampling and stratified double sampling. Commun. Stat. Theory Methods 2017, 46, 4932–4942.
  20. Garg, N.; Pachori, M. Use of coefficient of variation in calibration estimation of population mean in stratified sampling. Commun. Stat. Theory Methods 2019, 49, 5842–5852.
  21. Cetin, A.E.; Koyuncu, N. Estimation of population mean under different stratified ranked set sampling designs with simulation study application to BMI data. Commun. Fac. Sci. Univ. Ank. Ser. Math. Stat. 2020, 69, 560–575.
Table 1. MRSS for an even sample size, i.e., $n_\delta=4$ (the pairs measured under MRSS are marked with *).
$(X_{1(1)},Y_{1[1]})$  $(X_{1(2)},Y_{1[2]})^{*}$  $(X_{1(3)},Y_{1[3]})$  $(X_{1(4)},Y_{1[4]})$
$(X_{2(1)},Y_{2[1]})$  $(X_{2(2)},Y_{2[2]})^{*}$  $(X_{2(3)},Y_{2[3]})$  $(X_{2(4)},Y_{2[4]})$
$(X_{3(1)},Y_{3[1]})$  $(X_{3(2)},Y_{3[2]})$  $(X_{3(3)},Y_{3[3]})^{*}$  $(X_{3(4)},Y_{3[4]})$
$(X_{4(1)},Y_{4[1]})$  $(X_{4(2)},Y_{4[2]})$  $(X_{4(3)},Y_{4[3]})^{*}$  $(X_{4(4)},Y_{4[4]})$
Table 2. MRSS for an odd sample size, i.e., $n_\delta=3$ (the pairs measured under MRSS are marked with *).
$(X_{1(1)},Y_{1[1]})$  $(X_{1(2)},Y_{1[2]})^{*}$  $(X_{1(3)},Y_{1[3]})$
$(X_{2(1)},Y_{2[1]})$  $(X_{2(2)},Y_{2[2]})^{*}$  $(X_{2(3)},Y_{2[3]})$
$(X_{3(1)},Y_{3[1]})$  $(X_{3(2)},Y_{3[2]})^{*}$  $(X_{3(3)},Y_{3[3]})$
Table 3. Family members of all classes (MRSS estimator, choice of $Q_\delta$, two-stage MRSS estimator).
$\bar{y}_{SM}^{I}$, $Q_\delta=1$, $\bar{y}_{SaM}^{I}$
$\bar{y}_{SM}^{II}$, $Q_\delta=1/\hat{C}_{x\delta(M(j))}$, $\bar{y}_{SaM}^{II}$
$\bar{y}_{SM}^{III}$, $Q_\delta=1/\bar{x}_{\delta(M(j))}$, $\bar{y}_{SaM}^{III}$
$\bar{y}_{GM}^{I}$, $Q_\delta=1$, $\bar{y}_{GaM}^{I}$
$\bar{y}_{GM}^{II}$, $Q_\delta=1/\hat{C}_{x\delta(M(j))}$, $\bar{y}_{GaM}^{II}$
$\bar{y}_{GM}^{III}$, $Q_\delta=1/\bar{x}_{\delta(M(j))}$, $\bar{y}_{GaM}^{III}$
$\bar{y}_{PM}^{I}$, $Q_\delta=1$, $\bar{y}_{PaM}^{I}$
$\bar{y}_{PM}^{II}$, $Q_\delta=1/\hat{C}_{x\delta(M(j))}$, $\bar{y}_{PaM}^{II}$
$\bar{y}_{PM}^{III}$, $Q_\delta=1/\bar{x}_{\delta(M(j))}$, $\bar{y}_{PaM}^{III}$
Table 9. Bias values of estimators for the simulation study (columns: $(A_1,B_1)$, $(A_2,B_2)$, $(A_3,B_3)$, $(A_4,B_4)$).
MRSS:
$\bar{y}_{SM}^{I}$: 0.9587, 0.8480, 0.8210, 0.7302
$\bar{y}_{SM}^{II}$: 0.6095, 0.4988, 0.4718, 0.3810
$\bar{y}_{SM}^{III}$: 0.7104, 0.5997, 0.5727, 0.4819
$\bar{y}_{GM}^{I}$: 0.7864, 0.6757, 0.6487, 0.5579
$\bar{y}_{GM}^{II}$: 0.6381, 0.5274, 0.5004, 0.4096
$\bar{y}_{GM}^{III}$: 0.5753, 0.4646, 0.4376, 0.3468
$\bar{y}_{PM}^{I}$: 0.5550, 0.4443, 0.4173, 0.3265
$\bar{y}_{PM}^{II}$: 0.4471, 0.3364, 0.3094, 0.2186
$\bar{y}_{PM}^{III}$: 0.4261, 0.3154, 0.2884, 0.1976
Two-stage MRSS:
$\bar{y}_{SaM}^{I}$: 0.6656, 0.5549, 0.5279, 0.4371
$\bar{y}_{SaM}^{II}$: 0.5114, 0.4007, 0.3737, 0.2829
$\bar{y}_{SaM}^{III}$: 0.5163, 0.4056, 0.3786, 0.2878
$\bar{y}_{GaM}^{I}$: 0.6020, 0.4913, 0.4643, 0.3735
$\bar{y}_{GaM}^{II}$: 0.4816, 0.3709, 0.3439, 0.2531
$\bar{y}_{GaM}^{III}$: 0.4398, 0.3291, 0.3021, 0.2113
$\bar{y}_{PaM}^{I}$: 0.2537, 0.1430, 0.1160, 0.0252
$\bar{y}_{PaM}^{II}$: 0.2739, 0.1632, 0.1362, 0.0454
$\bar{y}_{PaM}^{III}$: 0.2593, 0.1486, 0.1216, 0.0308
Table 10. Bias values of estimators for the real-life data.
MRSS: $\bar{y}_{SM}^{I}$: 2.9587; $\bar{y}_{SM}^{II}$: 2.7404; $\bar{y}_{SM}^{III}$: 2.6413; $\bar{y}_{GM}^{I}$: 2.6293; $\bar{y}_{GM}^{II}$: 2.4296; $\bar{y}_{GM}^{III}$: 2.3194; $\bar{y}_{PM}^{I}$: 2.0246; $\bar{y}_{PM}^{II}$: 2.0024; $\bar{y}_{PM}^{III}$: 2.1110
Two-stage MRSS: $\bar{y}_{SaM}^{I}$: 2.8480; $\bar{y}_{SaM}^{II}$: 2.6297; $\bar{y}_{SaM}^{III}$: 2.5306; $\bar{y}_{GaM}^{I}$: 2.5186; $\bar{y}_{GaM}^{II}$: 2.3189; $\bar{y}_{GaM}^{III}$: 2.2087; $\bar{y}_{PaM}^{I}$: 1.9139; $\bar{y}_{PaM}^{II}$: 1.8917; $\bar{y}_{PaM}^{III}$: 2.0003
Table 11. PRE values for the BMI data for an odd sample size (MRSS with $(n_1,n_2)=(3,5)$; two-stage MRSS with $(n_{a1},n_{a2})=(9,25)$ and $(n_1,n_2)=(3,5)$).
MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{III}$):
$\bar{y}_{SM}^{I}$: 569.1733, 618.0211, 579.6620
$\bar{y}_{SM}^{II}$: 548.0034, 598.8768, 567.9848
$\bar{y}_{SM}^{III}$: 590.3432, 637.1654, 591.3392
$\bar{y}_{GM}^{I}$: 593.0307, 642.8811, 603.7382
$\bar{y}_{GM}^{II}$: 571.8608, 623.7368, 592.0610
$\bar{y}_{GM}^{III}$: 614.2006, 662.0254, 615.4154
Two-stage MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PaM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{III}$):
$\bar{y}_{SaM}^{I}$: 549.1333, 593.1211, 531.5231
$\bar{y}_{SaM}^{II}$: 530.8009, 571.5667, 521.7440
$\bar{y}_{SaM}^{III}$: 567.4657, 614.6755, 541.3022
$\bar{y}_{GaM}^{I}$: 572.5669, 617.4752, 554.5779
$\bar{y}_{GaM}^{II}$: 554.2345, 595.9208, 544.7988
$\bar{y}_{GaM}^{III}$: 590.8993, 639.0296, 564.3570
Table 12. PRE values for the BMI data for an even sample size (MRSS with $(n_1,n_2)=(4,6)$; two-stage MRSS with $(n_{a1},n_{a2})=(16,36)$ and $(n_1,n_2)=(4,6)$).
MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PM}^{III}$):
$\bar{y}_{SM}^{I}$: 877.1798, 931.0274, 881.5154
$\bar{y}_{SM}^{II}$: 856.0099, 911.8831, 869.8382
$\bar{y}_{SM}^{III}$: 898.3497, 950.1717, 893.1926
$\bar{y}_{GM}^{I}$: 906.7970, 961.5401, 911.2057
$\bar{y}_{GM}^{II}$: 885.6271, 942.3958, 899.5285
$\bar{y}_{GM}^{III}$: 927.9669, 980.6844, 922.8829
Two-stage MRSS; each row gives PRE($\hat{\phi}$, $\bar{y}_{PaM}^{I}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{II}$), PRE($\hat{\phi}$, $\bar{y}_{PaM}^{III}$):
$\bar{y}_{SaM}^{I}$: 851.1444, 896.1555, 866.5199
$\bar{y}_{SaM}^{II}$: 832.8120, 874.6011, 856.7408
$\bar{y}_{SaM}^{III}$: 869.4768, 917.7099, 876.2990
$\bar{y}_{GaM}^{I}$: 880.3188, 926.0914, 895.9566
$\bar{y}_{GaM}^{II}$: 861.9864, 904.5370, 886.1775
$\bar{y}_{GaM}^{III}$: 898.6512, 947.6458, 905.7357