1. Introduction
Consider the following general nonparametric regression model:
$$Y = m(T) + \sigma(T)\varepsilon, \qquad (1)$$
where $Y$ is the response variable, $T$ is a scalar covariate independent of the random error $\varepsilon$, $m(\cdot)$ is an unknown smooth function, and $\sigma(\cdot)$ is a non-negative function representing the standard deviation of the model error. As is well known, many estimation methods, including the kernel regression method, the spline smoothing method, the orthogonal series approximation method and the local polynomial method, have been investigated for estimating the nonparametric regression function $m(\cdot)$; see, among others, [1,2,3,4,5]. However, in many situations we need to estimate $m'(\cdot)$, that is, the derivative of $m(\cdot)$. Therefore, in this article, we intend to derive efficient estimators of $m'(\cdot)$.
Since the seminal work of Koenker and Bassett [
6], quantile regression has been studied extensively in the literature and applied in econometrics, biomedicine and other fields; we refer to Koenker [7] for a comprehensive treatment. Building on quantile regression, Zou and Yuan [8] proposed composite quantile regression (abbreviated as CQR) for linear models. CQR assumes that the regression coefficients are the same across different quantile levels and combines the strength of multiple quantile regression models. The advantage of the CQR method is that it can significantly improve the relative efficiency of the resulting estimators, and it is particularly useful in regression models with non-normal error distributions. Kai et al. [
9] introduced the local CQR method for the general nonparametric regression model. As a kind of nonlinear smoother, the local CQR method does not require a finite error variance and hence can work well even when the error distribution has infinite variance. Meanwhile, the local CQR method can significantly improve the estimation efficiency of local linear least squares in some cases. Kai et al. [10] applied the local CQR method to a semiparametric varying-coefficient partially linear model, and the results showed that the local CQR method outperformed both least squares and single quantile regression. Jiang et al. [11] applied the two-step CQR method to the single-index model and established its efficiency. Ning and Tang [12] considered estimation and testing issues for the CQR method with missing covariates. Recently, many researchers have applied the CQR method to various other models under different data settings; see, among others, [
13,
14,
15,
16,
17,
18,
19].
To make full use of quantile information, one may combine information across quantiles either through the criterion function of the estimation procedure or by combining estimators obtained at different quantiles. In this article, we argue that, although a composite quantile regression estimator based on an aggregated criterion function may outperform the ordinary least squares estimator for some non-normal distributions, simple averaging usually does not make full use of all the quantile information. The information at different quantiles is correlated, and improperly using multiple quantiles may even reduce efficiency. Roughly speaking, simple averaging delivers good estimators when the error distribution is close to normal; in fact, for nonparametric regression, a simple averaging-based composite quantile regression estimator is asymptotically equivalent to the local least squares estimator. However, the main purpose of combining quantile information is to improve efficiency when the error distribution is not normal and the ordinary least squares method does not work well. It is therefore important to combine quantile information appropriately to achieve efficiency. In this paper, we mainly study the optimal combination of quantile regression information for estimating the derivative of a nonparametric function. As stated above, we propose and develop two ways of combining quantile information. One is the weighted local CQR estimator based on weighted quantile loss functions, and the other is the weighted quantile average estimator based on a weighted average of quantile regression estimators at single quantiles. Our proposed estimators inherit many of the advantages discussed above. Both the theoretical results and the simulation studies illustrate that the weighted local CQR estimator and the weighted quantile average estimator work better than the common local linear least squares estimator for all the symmetric errors considered except the normal error, and that the weighted quantile average estimator performs better than the weighted composite quantile regression estimator in most situations.
The rest of the paper is organized as follows. The weighted local CQR estimator and the weighted quantile average estimator are proposed in
Section 2, where the main theoretical results, including asymptotic normality and the optimal weights, are also presented. In Section 3, the asymptotic relative efficiencies of the weighted local CQR estimator and the weighted quantile average estimator are compared. The feasibility of the proposed methods is verified by simulation studies in Section 4. The technical proofs of the theoretical results are presented in Section 5, and conclusions are drawn in Section 6.
2. Methodology
Firstly, we give some conditions and notation required in our subsequent discussions. Let $f(\cdot)$ and $F(\cdot)$ be the density function and the cumulative distribution function of the error, respectively. Denote by $f_T(\cdot)$ the marginal density function of the covariate $T$. Let $\mathcal{T}$ be the $\sigma$-field generated by $\{T_1, \ldots, T_n\}$. Choose the kernel $K(\cdot)$ to be a symmetric density function.
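For the asymptotic analysis it is convenient to use the standard kernel-moment notation from the local polynomial literature; the following is a plausible form of the kernel quantities involved, with $\mu_j$ and $\nu_j$ as our own labels rather than the paper's original symbols:
$$\mu_j = \int u^{j} K(u)\,du, \qquad \nu_j = \int u^{j} K^{2}(u)\,du, \qquad j = 0, 1, 2, \ldots$$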
The following Conditions (C1)–(C4) are needed for Theorems 1–3:
- (C1) $m(\cdot)$ has a continuous third-order derivative in a neighborhood of $t_0$;
- (C2) $f_T(\cdot)$, the marginal density function of $T$, is differentiable and positive in a neighborhood of $t_0$;
- (C3) The conditional variance $\sigma^2(\cdot)$ is continuous in a neighborhood of $t_0$;
- (C4) $f(\cdot)$, the density function of the error $\varepsilon$, is positive on its support.
2.1. Weighted Composite Quantile Regression Estimation
Let $c_{\tau_k}$ be the $\tau_k$ quantile of $\varepsilon$ and $Q_{\tau_k}(t)$ be the conditional $\tau_k$th quantile of $Y$ given $T = t$. Then, the nonparametric regression model (1) has the following quantile regression representation:
$$Q_{\tau_k}(t) = m(t) + \sigma(t)\, c_{\tau_k}, \qquad k = 1, \ldots, q. \qquad (2)$$
Suppose that $\{(T_i, Y_i),\ i = 1, \ldots, n\}$ are independent and identically distributed random samples from model (2). We consider estimating $m'(\cdot)$ at an interior point $t_0$ over the quantile positions $\tau_1, \ldots, \tau_q$ jointly, based on the following weighted local quadratic CQR loss function:
where $\rho_{\tau_k}(u) = u\{\tau_k - I(u < 0)\}$, $k = 1, \ldots, q$, are the quantile loss functions at the $q$ quantile positions $\tau_1, \ldots, \tau_q$, respectively, and $\omega_1, \ldots, \omega_q$ with $\omega_k \geq 0$ are weights. Hereinafter, we use $\omega = (\omega_1, \ldots, \omega_q)^{\top}$ to denote the weight vector in different scenarios whenever no confusion arises. Then, the weighted local quadratic composite quantile regression (WCQR) estimator of $m'(t_0)$ is given by the derivative component of the minimizer of this loss.
In the subsequent Theorem 1, we present the asymptotic bias, variance and asymptotic normality of the WCQR estimator, and the proofs can be found in Section 5.
Theorem 1. Suppose that $t_0$ is an interior point of the support of $f_T$. Under the regularity Conditions (C1)–(C4), if $h \to 0$ and $nh^3 \to \infty$, then the asymptotic conditional bias and variance of the WCQR estimator are given, respectively, by:
where
Furthermore, we have the following asymptotic normality result:
where $\overset{d}{\longrightarrow}$ stands for convergence in distribution.
Remark 1. If we use equal weights over all quantiles, then the asymptotic variance of the unweighted local quadratic CQR estimator is given by the corresponding expression, where:
However, the importance of the information at different quantiles often differs, and the information at different quantiles is correlated, depending on the error distribution. Thus, it is essential to combine the information across quantiles optimally.
From Theorem 1, the asymptotic variance of the WCQR estimator depends on the weight vector $\omega$ only through the factor appearing in (7); thus, a natural way to select the optimal weight vector is to minimize this factor. It follows that the optimal weight vector is the solution of the following optimization problem: minimize the asymptotic variance factor in (7) with respect to $\omega$, subject to $\omega_k \geq 0$ for $k = 1, \ldots, q$ and $\sum_{k=1}^{q} \omega_k = 1$.
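To make the estimation procedure concrete, the following is a minimal numerical sketch of the WCQR estimator, assuming the standard local quadratic CQR parametrization of Kai et al. [9] (per-quantile intercepts together with a shared slope and curvature), a Gaussian kernel, and the check loss $\rho_\tau(u) = u\{\tau - I(u<0)\}$; the function names and the use of a generic optimizer are ours, not the paper's.

```python
# A minimal sketch of the weighted local quadratic CQR estimator of m'(t0).
# Assumptions (ours, following the local CQR construction of Kai et al. [9]):
# per-quantile intercepts a_1,...,a_q, a shared slope b ~ m'(t0) and curvature
# c ~ m''(t0)/2, Gaussian kernel weights, and the check loss
# rho_tau(u) = u * (tau - I(u < 0)).  Function names are illustrative.
import numpy as np
from scipy.optimize import minimize


def check_loss(u, tau):
    """Quantile (check) loss rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))


def wcqr_derivative(T, Y, t0, h, taus, omega):
    """Weighted local quadratic CQR estimate of m'(t0).

    taus  : quantile positions tau_1, ..., tau_q
    omega : non-negative weights, assumed to sum to one
    """
    q = len(taus)
    k_w = np.exp(-0.5 * ((T - t0) / h) ** 2)   # Gaussian kernel weights
    d = T - t0

    def objective(theta):
        a, b, c = theta[:q], theta[q], theta[q + 1]
        total = 0.0
        for k, (tau, w) in enumerate(zip(taus, omega)):
            resid = Y - a[k] - b * d - c * d ** 2
            total += w * np.sum(check_loss(resid, tau) * k_w)
        return total

    theta0 = np.concatenate([np.full(q, np.median(Y)), [0.0, 0.0]])
    fit = minimize(objective, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
    return fit.x[q]   # the shared slope component estimates m'(t0)
```

With equal weights $\omega_k = 1/q$ this reduces to the unweighted local CQR estimator, while the optimal non-negative weights can be obtained by numerically minimizing the asymptotic variance factor of Theorem 1 subject to the constraints above.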
2.2. Weighted Quantile Average Estimation
As described in Section 2.1, the WCQR estimator combines the information at different quantiles by means of the criterion function. In the following, we consider an alternative approach which combines information based on the estimators at different quantiles. As stated in Section 1, we focus on the estimation of $m'(t_0)$. For a fixed quantile position $\tau_k$, consider the local quadratic nonparametric quantile regression:
The solution for the slope component in the above optimization, denoted by $\hat{b}_{\tau_k}(t_0)$, provides an estimator of the derivative of the conditional quantile function $Q_{\tau_k}(\cdot)$ at $t_0$. In the following, we construct the weighted quantile average estimator (WQAE) of $m'(t_0)$ based on the weighted average $\sum_{k=1}^{q} \omega_k\, \hat{b}_{\tau_k}(t_0)$,
where the weight vector $\omega = (\omega_1, \ldots, \omega_q)^{\top}$ satisfies the following conditions:
The introduction of such weight constraints can eliminate the bias term caused by an asymmetric random error. Meanwhile, a weight vector $\omega$ satisfying conditions (11) and (12) guarantees the consistency and asymptotic unbiasedness of the WQAE, which can also be seen from the proof of its asymptotic properties in Section 5. Among the weight vectors satisfying conditions (11) and (12), we can select the optimal one by optimization; the details are discussed in the subsequent section.
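The following is a small sketch of the WQAE construction, assuming (as our reading of conditions (11) and (12) suggests) that each single-quantile estimate is the slope of a local quadratic quantile fit and that the weights satisfy linear constraints such as $\sum_k \omega_k = 1$; names and the optimizer choice are illustrative.

```python
# A minimal sketch of the weighted quantile average estimator (WQAE) of m'(t0).
# Assumption (ours): each single-quantile estimate is the slope of a local
# quadratic quantile fit, and the WQAE is the weighted average of the fitted
# slopes with weights satisfying the linear constraints discussed in the text
# (e.g. sum(omega) = 1).  Names are illustrative.
import numpy as np
from scipy.optimize import minimize


def local_quadratic_qr_slope(T, Y, t0, h, tau):
    """Slope of a local quadratic quantile regression at t0 (estimates Q_tau'(t0))."""
    k_w = np.exp(-0.5 * ((T - t0) / h) ** 2)   # Gaussian kernel weights
    d = T - t0

    def objective(theta):
        a, b, c = theta
        resid = Y - a - b * d - c * d ** 2
        return np.sum((resid * (tau - (resid < 0))) * k_w)   # weighted check loss

    fit = minimize(objective, x0=np.array([np.quantile(Y, tau), 0.0, 0.0]),
                   method="Nelder-Mead")
    return fit.x[1]


def wqae_derivative(T, Y, t0, h, taus, omega):
    """Weighted average of the single-quantile slope estimates."""
    slopes = np.array([local_quadratic_qr_slope(T, Y, t0, h, tau) for tau in taus])
    return np.dot(omega, slopes)
```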
In the subsequent Theorem 2, we present the asymptotic bias, variance and asymptotic normality of the WQAE, and the proofs can be found in
Section 5.
Theorem 2. Suppose that $t_0$ is an interior point of the support of $f_T$, and that the weight vector $\omega$ satisfies conditions (11) and (12). Under the regularity Conditions (C1)–(C4), if $h \to 0$ and $nh^3 \to \infty$, then the asymptotic conditional bias and variance of the weighted quantile average estimator are given, respectively, by:
where $H$ is the $q \times q$ matrix whose $(k, k')$ element is given by:
Furthermore, conditioning on $\mathcal{T}$, we have the following asymptotic normal distribution:
Remark 2. If we simply use equal weights in (14), then the resulting unweighted quantile average estimator of $m'(t_0)$ has the asymptotic normality in Theorem 2 with $\omega^{\top} H \omega$ replaced by:
Remark 3. From Theorem 2, the covariance matrix of the WQAE depends on the weight vector $\omega$ through $\omega^{\top} H \omega$; thus, a natural way to select the optimal weight vector is to minimize $\omega^{\top} H \omega$ in (14). The following theorem gives the optimal weight vector and the optimal weighted quantile average estimator of $m'(t_0)$.
Theorem 3. Suppose that the conditions of Theorem 2 hold. Then the optimal weight vector minimizing $\omega^{\top} H \omega$ is:
where $c$ is a $q$-dimensional column vector whose $k$-th element is $c_{\tau_k}$, and $\mathbf{1}$ is a $q$-dimensional column vector with all elements equal to 1. Furthermore, the corresponding conditional variance of the optimal weighted quantile average estimator of $m'(t_0)$ is given by:
where:
Comparing the weighted local quadratic CQR estimator, the weighted quantile average estimator and the local quadratic least squares estimator of $m'(t_0)$, we see that they have the same leading bias term, whereas their asymptotic variances are different.
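Theorem 3 minimizes a quadratic form in $\omega$ under linear constraints. As a hedged computational sketch (not the paper's exact display), assume the constraints can be written as $A\omega = b$ with the rows of $A$ being the all-ones vector and the vector $c$, and $b = (1, 0)^{\top}$; the closed form $\omega^{*} = H^{-1}A^{\top}(A H^{-1} A^{\top})^{-1} b$ is then the generic Lagrange-multiplier solution:

```python
# A sketch of the optimal weight computation of Theorem 3, assuming the problem
# is: minimize omega' H omega subject to A omega = b, where the rows of A are
# the all-ones vector and the vector c of error quantiles, and b = (1, 0)'.
# The closed form below is the generic solution of an equality-constrained
# quadratic program, not the paper's exact display.
import numpy as np


def optimal_weights(H, c):
    """omega* = H^{-1} A' (A H^{-1} A')^{-1} b for A = [1'; c'], b = (1, 0)'."""
    q = H.shape[0]
    A = np.vstack([np.ones(q), c])          # constraint matrix (2 x q)
    b = np.array([1.0, 0.0])                # sum(omega) = 1, omega'c = 0
    Hinv_At = np.linalg.solve(H, A.T)       # H^{-1} A'
    omega = Hinv_At @ np.linalg.solve(A @ Hinv_At, b)
    var_factor = float(omega @ H @ omega)   # minimized quadratic form
    return omega, var_factor
```

Note that, in contrast to the WCQR weights, the entries of the resulting $\omega$ may be negative, which is consistent with the discussion in Section 3.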
3. Comparison of Asymptotic Efficiency
The WQAE differs from the WCQR estimator in several aspects. While the WCQR estimator is based on the aggregation of several quantile loss functions, the WQAE is based on a weighted average of separate estimators from different quantiles. As a result, computing the WQAE only involves $q$ separate low-dimensional minimization problems, whereas the WCQR requires solving a single larger minimization problem. In addition, to ensure a proper loss function, the weights in the WCQR are restricted to be non-negative; by contrast, the weights in the WQAE can be negative. Obviously, it is computationally appealing to impose fewer constraints on the weights.
From Theorem 1, the mean squared error (MSE) of the WCQR estimator is given by:
Thus, the optimal variable bandwidth minimizing the MSE is:
From Theorem 2, the MSE of the WQAE is given by:
Thus, the optimal variable bandwidth minimizing the MSE is:
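Although the exact constants are those given in Theorems 1 and 2, the structure of the bandwidth choice can be illustrated generically: for local quadratic estimation of a first derivative, the bias is of order $h^{2}$ and the variance of order $(nh^{3})^{-1}$, so with generic constants $C_1$ and $C_2$ standing in for the theorem-specific quantities:
$$\mathrm{MSE}(h) = C_1 h^{4} + \frac{C_2}{n h^{3}}, \qquad 4C_1 h^{3} - \frac{3C_2}{n h^{4}} = 0 \;\Longrightarrow\; h_{\mathrm{opt}} = \Big(\frac{3C_2}{4C_1\, n}\Big)^{1/7} \propto n^{-1/7}.$$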
4. Simulation Studies
In this section, we implement simulation studies to compare the finite-sample performance of our WQAE with that of the WCQR and the local linear least squares estimator. In all examples, the kernel function is chosen to be the Gaussian kernel, and the number of replications is set to 400. Similar to Kai et al. [
9], we use the short-cut strategy to select the bandwidth in our simulation studies.
An important quantity to evaluate the performance of different estimators is the average squared error (ASE), which can be represented by:
$$\mathrm{ASE}(\hat{g}) = \frac{1}{n_{\mathrm{grid}}} \sum_{k=1}^{n_{\mathrm{grid}}} \{\hat{g}(u_k) - g(u_k)\}^{2},$$
with $g$ being $m'$ in the simulation, where $\{u_k,\ k = 1, \ldots, n_{\mathrm{grid}}\}$ are the grid points at which the estimator $\hat{g}$ of $g$ is evaluated; the grid points are set to be evenly distributed over the interval on which $m'$ is estimated. Furthermore, we can also compare two estimators $\hat{g}_1$ and $\hat{g}_2$ of $g$ via the ratio of the average squared errors (RASE), defined by:
$$\mathrm{RASE}(\hat{g}_1, \hat{g}_2) = \frac{\mathrm{ASE}(\hat{g}_1)}{\mathrm{ASE}(\hat{g}_2)}.$$
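As a small illustration of how these summaries are computed in the simulations, the helpers below assume (our reading, consistent with RASE values above 1 favoring an estimator over the benchmark) that each estimator's RASE is the ASE of the local linear least squares fit divided by the ASE of that estimator; the function names are ours.

```python
# ASE/RASE helpers for the simulation summaries.  Assumption (ours): RASE is
# computed relative to the local linear least squares benchmark, so that
# RASE > 1 means the estimator outperforms the benchmark, matching the way
# Tables 1 and 3 are discussed.
import numpy as np


def ase(g_hat, g_true, grid):
    """Average squared error of g_hat over the grid points u_1, ..., u_ngrid."""
    return np.mean((g_hat(grid) - g_true(grid)) ** 2)


def rase(g_hat, g_benchmark, g_true, grid):
    """Ratio of average squared errors: ASE(benchmark) / ASE(estimator)."""
    return ase(g_benchmark, g_true, grid) / ase(g_hat, g_true, grid)
```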
In the simulations below, we compare the finite-sample behavior of these three estimators under different settings, including symmetric and asymmetric errors and homoscedastic and heteroscedastic models.
4.1. Homoscedastic Model
We consider the following homoscedastic model:
where the covariate
T follows
, and the nonparametric regression function is
. In our simulation, we generate 400 random samples, each covering
observations. Our aim is to estimate the derivative of
over
. It is noted that the Cauchy distribution is the truncated Cauchy distribution on
. In the next two examples, we investigate the finite-sample behavior of the WCQR, the WQAE and the LSE (the local linear least squares estimator) in terms of the RASE and the ASE under some symmetric and asymmetric error distributions.
Example 1. In this example, we consider Model 1 under different symmetric error distributions. The means and standard deviations of the RASEs over 400 simulations are reported in Table 1. The following can be seen from Table 1: - (1)
Under the standard normal error, the LSE is the best among the three estimators, and the WQAE performs nearly as well as the WCQR.
- (2)
Under the non-normal symmetric errors, the WCQR is consistently superior to the WQAE and the LSE, with the LSE consistently performing the worst; accordingly, the corresponding RASEs are significantly larger than 1.
Example 2. In this example, we consider Model 1 under different asymmetric error distributions. The means and standard deviations of the ASEs over 400 simulations are presented in Table 2, where the centralized versions of the corresponding distributions are used. The following can be seen from Table 2: - (1)
The WCQR, whose bias does not vanish under asymmetric errors, breaks down; hence, the WCQR is the worst in the case of asymmetric errors.
- (2)
For all the asymmetric errors considered, WQAE performs consistently better than local linear least squares.
4.2. Heteroscedastic Model
In this subsection, we mainly consider the following heteroscedastic model:
where the covariate
T follows
,
, and the nonparametric regression function is
. Our aim is to estimate the derivative of
on
In our simulation, we generate 400 random samples, each having
observations. As in the homoscedastic model, we evaluate the behavior of the three estimators under symmetric and asymmetric errors in the following Examples 3 and 4.
Example 3. In this example, we consider Model 2 under different symmetric error distributions. The means and standard deviations of the RASEs over 400 simulations are summarized in Table 3. We have the following findings: - (1)
Under the standard normal error, the LSE outperforms both the WQAE and the WCQR; even for normal errors, however, the RASEs are only slightly smaller than 1.
- (2)
For all the symmetric errors considered except the normal error, the WQAE performs significantly better than the LSE, with RASEs clearly greater than 1. Furthermore, the WQAE performs better than the WCQR in most cases.
Example 4. In this example, we consider Model 2 under different asymmetric error distributions. The means and standard deviations of the ASEs over 400 simulations are summarized in Table 4. Similar to Example 2, the simulation results show the following: - (1)
The WCQR breaks down because of its non-vanishing bias in the case of asymmetric errors and therefore performs the worst among the three estimators.
- (2)
For all the asymmetric errors considered, WQAE performs consistently better than local linear least squares.
5. Proofs
Before embarking on the proofs of Theorems 1–3, we first state and prove Lemma 1. Suppose that the kernel function $K$ has finite support. The following notation is needed to present our theoretical results:
Set
and
Write
,
. Define
, and let
, with
Furthermore, let be a diagonal matrix with diagonal elements ; let be a matrix with element and ; and let be a diagonal matrix with diagonal elements and . In addition, let be a diagonal matrix with diagonal elements , let be a matrix with element and , and let be a diagonal matrix with diagonal elements . Similarly, let be a matrix with element , , let be a matrix with element , and , and let be a matrix with element for
Partition
into four submatrices as follows:
where
stands for the top left-hand
submatrix, and
stands for the bottom right-hand element.
Lemma 1. Denote as the minimizer of the weighted local quadratic CQR loss. Under regularity Conditions (C1)–(C4), we have:
Proof of Lemma 1. The proof is similar to that of Theorem 5 of Kai et al. [
9]. We divide the whole procedure into three steps:
- Step 1
Minimizing the weighted local quadratic CQR loss is equivalent to minimizing
, defined as:
with respect to
, where
For further details, see the proof of Lemma 2 in Kai et al. [
9].
- Step 2
Under regularity conditions (C1)–(C4), we have:
For further details, see the proof of Lemma 2 of Kai et al. [
9].
- Step 3
It is easy to obtain:
where
stands for convergence in probability. Thus we have:
Together with the results of Step 2, we have:
Note that the convex function
converges in probability to the convex function
. Then, it can be deduced from the convexity lemma of Pollard [
20] that the quadratic approximation to
holds uniformly for
in any compact set, which leads to:
Finally, similar to the procedures of Theorem 5 in Kai et al. [
9], we have:
This completes the proof.
□
Proof of Remark 1. If we use equal weights
over all quantiles, then we have:
Therefore, the asymptotic variance of the unweighted local quadratic CQR estimator can be given by . □
Proof of Remark 2. If we use equal weights
over all quantiles, then we have:
Then, the resulting unweighted quantile average estimator of
has the asymptotic normality in Theorem 2 with
replaced by:
□
Proof of Theorem 1. We apply Lemma 1 to obtain the asymptotic normality of
. It is easy to obtain:
and
Since
,
Note that
Thus, we have:
where
and
denote the
dimensional column vectors with all entries equal to 0 and 1, respectively. Denote
. By Lemma 1, we can obtain:
Furthermore, the conditional variance of
is:
which completes the proof. □
Proof of Theorem 2. Using the condition
, we obtain:
By using the following fact:
we obtain:
Using arguments similar to those in Kai et al. [
9], we have:
Denote
, where
. Note that
and
if
. It is easy to obtain
, where
It is easy to obtain that
. Therefore:
Combining this with the result (
13), we have:
which completes the proof. □
Proof of Theorem 3. The optimal
can be obtained by solving the following optimization problem:
where
satisfies
and
. With the help of the Lagrange multiplier method, we can obtain the optimal weight vector in closed form, which leads to the corresponding minimum conditional variance of
. □
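The Lagrange-multiplier step in the proof of Theorem 3 can be spelled out generically for minimizing $\omega^{\top}H\omega$ subject to linear constraints $A\omega = b$, with $A$ collecting the constraint vectors; the notation here is ours rather than the paper's:
$$\mathcal{L}(\omega, \lambda) = \omega^{\top} H \omega - \lambda^{\top}(A\omega - b), \qquad 2H\omega - A^{\top}\lambda = 0 \;\Longrightarrow\; \omega = \tfrac{1}{2} H^{-1} A^{\top} \lambda,$$
$$A\omega = b \;\Longrightarrow\; \lambda = 2\,(A H^{-1} A^{\top})^{-1} b, \qquad \omega_{\mathrm{opt}} = H^{-1} A^{\top} (A H^{-1} A^{\top})^{-1} b, \qquad \omega_{\mathrm{opt}}^{\top} H\, \omega_{\mathrm{opt}} = b^{\top} (A H^{-1} A^{\top})^{-1} b.$$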
6. Conclusions
In this article, we mainly investigated efficient estimation of the derivative of the nonparametric function in the nonparametric regression model (1). We developed two ways of combining quantile regression information to derive estimators of the derivative. One is the weighted composite quantile regression estimator based on weighted quantile loss functions, and the other is the weighted quantile average estimator based on a weighted average of quantile regression estimators at single quantiles. Furthermore, by minimizing the asymptotic variance, the optimal weight vector is computed, and consequently, the optimal estimator can be obtained. Moreover, we conducted simulation studies to compare the performance of our proposed estimators with that of the local linear least squares estimator under different symmetric and asymmetric error distributions. The simulation studies illustrate that both estimators work better than the local linear least squares estimator for all the symmetric errors considered except the normal error, and that the weighted quantile average estimator performs better than the weighted composite quantile regression estimator in most situations.
Author Contributions
Conceptualization, methodology and formal analysis, X.Z. and X.G.; validation and data analysis, X.Y.; visualization and supervision, Y.Z.; investigation and resources, Y.S.; writing—original draft preparation, X.Z. and X.Y.; writing—review and editing, X.G.; project administration, X.Z., X.G. and Y.Z. All authors have read and agreed to the published version of the manuscript.
Funding
Zhou’s work was supported by the Ministry of Education Humanities and Social Sciences Research Youth Foundation (Grant No. 19YJC910011), the Natural Science Foundation of Shandong Province (Grant No. ZR2020MA021) and the Project of Shandong Province Higher Educational Science and Technology Program (Grant No. J18KB099). Yin’s research is supported by the Research and Development Project of Dezhou City in China. Shen’s research is supported by the Natural Science Foundation of Shandong Province (Grant No. ZR2018LA003) and the Open Research Fund Program of Data Recovery Key Laboratory of Sichuan Province (Grant No. DRN19020).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Some data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare no conflict of interest. The funders X.Z., X.Y. and Y.S. had roles in the design of the study, in the analyses and interpretation of data, in the writing of the manuscript, and in the decision to publish the results.
References
- Dou, X.; Shirahata, S. Comparisons of B-spline procedures with kernel procedures in estimating regression functions and their derivatives. J. Jpn. Soc. Comput. Stat. 2009, 22, 57–77.
- Ruppert, D. Nonparametric Regression and Spline Smoothing. J. Am. Stat. Assoc. 2001, 96, 1522–1523.
- Fan, J.; Gasser, T.; Gijbels, I.; Brockmann, M.; Engel, J. Local Polynomial Regression: Optimal Kernels and Asymptotic Minimax Efficiency. Ann. Inst. Stat. Math. 1997, 49, 79–99.
- Zhang, X.; King, M.L.; Shang, H.L. Bayesian Bandwidth Selection for a Nonparametric Regression Model with Mixed Types of Regressors. Econometrics 2016, 4, 24.
- Souza-Rodrigues, E.A. Nonparametric Regression with Common Shocks. Econometrics 2016, 4, 36.
- Koenker, R.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50.
- Koenker, R. Quantile Regression; Cambridge University Press: Cambridge, UK, 2005.
- Zou, H.; Yuan, M. Composite quantile regression and the oracle model selection theory. Ann. Stat. 2008, 36, 1108–1126.
- Kai, B.; Li, R.; Zou, H. Local CQR smoothing: An efficient and safe alternative to local polynomial regression. J. R. Stat. Soc. Ser. B 2010, 72, 49–69.
- Kai, B.; Li, R.; Zou, H. New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. Ann. Stat. 2011, 39, 305–332.
- Jiang, R.; Zhou, Z.; Qian, W. Single index composite quantile regression. J. Korean Stat. Soc. 2011, 3, 323–332.
- Ning, Z.; Tang, L. Estimation and test procedures for composite quantile regression with covariates missing at random. Stat. Probab. Lett. 2014, 95, 15–25.
- Jiang, R. Composite quantile regression for linear errors-in-variables models. Hacet. J. Math. Stat. 2015, 44, 707–713.
- Jiang, R.; Qian, W.; Zhou, Z. Single-index composite quantile regression with heteroscedasticity and general error distributions. Stat. Pap. 2016, 57, 185–203.
- Zhang, R.; Lv, Y.; Zhao, W.; Liu, J. Composite quantile regression and variable selection in single-index coefficient model. J. Stat. Plan. Inference 2016, 176, 1–21.
- Zhao, W.; Lian, H.; Song, X. Composite quantile regression for correlated data. Comput. Stat. Data Anal. 2016, 109, 15–33.
- Luo, S.; Zhang, C.; Wang, M. Composite Quantile Regression for Varying Coefficient Models with Response Data Missing at Random. Symmetry 2019, 11, 1065.
- Sun, J.; Gai, Y.; Lin, L. Weighted local linear composite quantile estimation for the case of general error distributions. J. Stat. Plan. Inference 2013, 143, 1049–1063.
- Zhao, Z.; Xiao, Z. Efficient Regressions via Optimally Combining Quantile Information. Econom. Theory 2014, 30, 1272–1314.
- Pollard, D. Asymptotics for least absolute deviation regression estimators. Econom. Theory 1991, 7, 186–199.
Table 1. The means and standard deviations for RASEs in Example 1.
| | |
---|
| | | | | |
---|
| 0.9823 | 0.9745 | 0.9379 | 0.9217 | 0.9391 | 0.9264 |
(0.0784) | (0.1028) | (0.1235) | (0.1332) | (0.1213) | (0.1229) |
| 1.0696 | 1.1052 | 1.1143 | 1.3076 | 1.2919 | 1.2298 |
(0.1765) | (0.2354) | (0.2689) | (0.3757) | (0.3723) | (0.3326) |
| 1.0235 | 1.0536 | 1.1137 | 1.6441 | 1.5678 | 1.4705 |
(0.1329) | (0.2092) | (0.2901) | (0.8123) | (0.8002) | (0.6790) |
| 1.2243 | 1.4221 | 1.4765 | 1.1268 | 1.4373 | 1.6057 |
(0.2029) | (0.3048) | (0.3832) | (0.2035) | (0.3414) | (0.4565) |
| 1.3066 | 1.5016 | 1.5831 | 2.2993 | 2.2939 | 2.1652 |
(0.3960) | (0.5601) | (0.6746) | (0.9127) | (0.9850) | (0.9245) |
Table 2. The means and standard deviations for ASEs in Example 2.
Distribution of | | | |
---|
| | | | | |
---|
| 0.0297 | 0.0302 | 0.0339 | 0.1302 | 0.0883 | 0.0565 | 0.0426 |
(0.0185) | (0.0180) | (0.0220) | (0.0275) | (0.0227) | (0.0181) | (0.0264) |
| 0.0223 | 0.0226 | 0.0289 | 0.0871 | 0.0614 | 0.0422 | 0.0377 |
(0.0121) | (0.0132) | (0.0333) | (0.0200) | (0.0268) | (0.0299) | (0.0719) |
| 0.0492 | 0.0480 | 0.0483 | 0.1191 | 0.0888 | 0.0665 | 0.0515 |
(0.0243) | (0.0243) | (0.0245) | (0.0348) | (0.0286) | (0.0248) | (0.0231) |
| 0.0528 | 0.0525 | 0.0543 | 0.0960 | 0.0786 | 0.0647 | 0.0570 |
(0.0257) | (0.0251) | (0.0252) | (0.0343) | (0.0303) | (0.0269) | (0.0267) |
| 0.0194 | 0.0205 | 0.0213 | 0.0450 | 0.0366 | 0.0300 | 0.0223 |
(0.0172) | (0.0209) | (0.0231) | (0.0153) | (0.0146) | (0.0155) | (0.0192) |
Table 3. The means and standard deviations for RASEs in Example 3.
| | |
---|
| | | | | |
---|
| 0.9841 | 0.9749 | 0.9532 | 0.9328 | 0.9476 | 0.9357 |
(0.0518) | (0.0642) | (0.0767) | (0.1211) | (0.1115) | (0.1109) |
| 0.9963 | 1.0152 | 1.0130 | 1.0658 | 1.0636 | 1.0417 |
| 1.0368 | 1.0657 | 1.0688 | 1.3067 | 1.2584 | 1.1891 |
(0.1886) | (0.2442) | (0.2638) | (0.7175) | (0.5569) | (0.4523) |
| 1.0859 | 1.0830 | 0.9387 | 1.1836 | 1.2602 | 1.1277 |
(0.1335) | (0.2008) | (0.2842) | (0.2804) | (0.3641) | (0.4413) |
| 1.1273 | 1.1784 | 1.1683 | 1.5532 | 1.5329 | 1.4596 |
(0.2488) | (0.3176) | (0.3371) | (0.4678) | (0.4672) | (0.4234) |
Table 4. The means and standard deviations for ASEs in Example 4.
| | | |
---|
| | | | | |
---|
| 0.0249 | 0.0287 | 0.0348 | 0.0711 | 0.0644 | 0.0553 | 0.0752 |
(0.0383) | (0.0514) | (0.0562) | (0.0779) | (0.1101) | (0.1245) | (0.2549) |
| 0.0312 | 0.0323 | 0.0451 | 0.0831 | 0.0678 | 0.0556 | 0.0724 |
(0.1021) | (0.0902) | (0.1179) | (0.0824) | (0.0723) | (0.0638) | (0.1357) |
| 0.0369 | 0.0365 | 0.0376 | 0.0783 | 0.0602 | 0.0476 | 0.0434 |
(0.0198) | (0.0183) | (0.0192) | (0.0190) | (0.0169) | (0.0158) | (0.0221) |
| 0.0321 | 0.0302 | 0.0329 | 0.0567 | 0.0465 | 0.0392 | 0.0387 |
(0.0169) | (0.0173) | (0.0171) | (0.0194) | (0.0177) | (0.0166) | (0.0214) |
| 0.0348 | 0.0375 | 0.0414 | 0.1061 | 0.0833 | 0.0657 | 0.0569 |
(0.0219) | (0.0316) | (0.0455) | (0.0277) | (0.0274) | (0.0301) | (0.0526) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).