Article

Sparse Estimation Strategies in Linear Mixed Effect Models for High-Dimensional Data Application

Eugene A. Opoku, Syed Ejaz Ahmed and Farouk S. Nathoo
1 Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8P 5C2, Canada
2 Department of Mathematics and Statistics, Brock University, St. Catharines, ON L2S 3A1, Canada
* Author to whom correspondence should be addressed.
Entropy 2021, 23(10), 1348; https://doi.org/10.3390/e23101348
Submission received: 9 September 2021 / Revised: 8 October 2021 / Accepted: 12 October 2021 / Published: 15 October 2021

Abstract

In a host of business applications and in biomedical and epidemiological studies, multicollinearity among predictor variables is a frequent issue in longitudinal data analysis with linear mixed models (LMMs). We consider an efficient estimation strategy for high-dimensional data applications, where the dimension of the parameter vector exceeds the number of observations. In this paper, we are interested in estimating the fixed effects parameters of the LMM when it is assumed that some prior information is available in the form of linear restrictions on the parameters. We propose pretest and shrinkage estimation strategies using the ridge full model as the base estimator. We establish the asymptotic distributional bias and risks of the suggested estimators and investigate their relative performance with respect to the ridge full model estimator. Furthermore, we compare the numerical performance of the LASSO-type estimators with the pretest and shrinkage ridge estimators. The methodology is investigated using simulation studies and then demonstrated on an application exploring how effective brain connectivity in the default mode network (DMN) may be related to genetics within the context of Alzheimer’s disease.

1. Introduction

In many fields such as bio-informatics, physical biology, and epidemiology, the response of interest is represented by repeated measures of some variables of interest that are collected over a specified time period for different independent subjects or individuals. These types of data are commonly encountered in medical research where the responses are subject to various time-dependent and time-constant effects such as pre- and post-treatment types, gender effect, and baseline measures, among others. A widely-used statistical tool in the analysis and modeling of longitudinal and repeated measures data is the linear mixed effects model (LMM) [1,2]. This model provides an effective and flexible way to describe the means and the covariance structures of a response variable after accounting for within subject correlation.
The rapid growth in the size and scope of longitudinal data has created a need for innovative statistical strategies in longitudinal data analysis. Classical methods are based on the assumption that the number of predictors is less than the number of observations. However, there is an increasing demand for efficient prediction strategies for the analysis of high-dimensional data, where the number of observed data elements (the sample size) is smaller than the number of predictors in a linear model context. Existing techniques that deal with high-dimensional data mostly rely on various penalized estimators. Due to the trade-off between model complexity and model prediction, the statistical inference of model selection becomes an extremely important and challenging problem in high-dimensional data analysis.
Over the years, many penalized regularization approaches have been developed to do variable selection and estimation simultaneously. Among them, the least absolute shrinkage and selection operator (LASSO) is commonly used [3]. It is a useful estimation technique in part due to its convexity and computational efficiency. The LASSO approach is based on an ℓ1 penalty for regularization of regression parameters. Ref. [4] provides a comprehensive summary of the consistency properties of the LASSO approach. Related penalized likelihood methods have been extensively studied in the literature, see for example [5,6,7,8,9,10]. The penalized likelihood methods have a close connection to Bayesian procedures. Thus, the LASSO estimate corresponds to a Bayes method that puts a Laplacian (double-exponential) prior on the regression coefficients [11,12].
In this paper, our interest lies in estimating the fixed effect parameters of the LMM using a ridge estimation technique when it is assumed that some prior information is available in the form of potential linear restrictions on the parameters. One possible source of prior information is using a Bayesian approach. An alternative source of prior information may be obtained from previous studies or expert knowledge that search for or assume sparsity patterns.
We consider the problem of fixed effect parameter estimation for LMMs when there exist many predictors relative to the sample size. These predictors may be classified into two groups: sparse and non-sparse. Thus, there are two choices to be considered: a full model with all predictors, and a sub-model that contains only non-sparse predictors. When the sub-model based on available subspace information is true (i.e., the assumed restriction holds), it then provides more efficient statistical inferences than those based on a full model. In contrast, if the sub-model is not true, the estimates could become biased and inefficient. The consequences of incorporating subspace information therefore depend on the quality or reliability of the information being incorporated into the estimation procedure. One way to deal with uncertain subspace information is to use a pretest estimation strategy. The validity of the information is tested before incorporation into a final estimator. Another approach is shrinkage estimation, which shrinks the full model estimator to the sub-model estimator by utilizing subspace information. Besides these estimation strategies, there is a growing literature on simultaneous model selection and estimation. These approaches are known as penalty strategies. By shrinking some regression coefficients toward zero, the penalty methods simultaneously select a sub-model and estimate its regression parameters. Several authors have investigated the pretest, shrinkage, and penalty estimation strategies in partial linear model, Poisson regression model, and Weibull censored regression model [13,14,15].
To formulate the problem, we suppose that the vector of fixed effects parameters β in the LMM can be partitioned into two sub-vectors β = (β_1, β_2), where β_1 is the coefficient vector of the non-sparse predictors and β_2 is the coefficient vector of the sparse predictors. Our interest lies in the estimation of β_1 when β_2 is close to zero. To deal with this problem in the context of low-dimensional data, ref. [16] proposes an improved estimation strategy using sub-model selection and post-estimation for the LMM. Within this framework, linear shrinkage and shrinkage pretest estimation strategies are developed, which combine the full model and sub-model estimators in an effective way as a trade-off between bias and variance. Ref. [17] extends this study by using a likelihood ratio test to develop James–Stein shrinkage and pretest estimation methods based on the LMM for longitudinal data. In addition, the non-penalty estimators are compared with several penalty estimators (LASSO, adaptive LASSO and Elastic Net) for best performance.
In most real data situations, there is also the problem of multicollinearity among predictor variables for high-dimensional data. Various biased estimation techniques, such as shrinkage estimation, partial least squares estimation [18] and Liu estimators [19], have been implemented to deal with this problem, but the most widely used technique is ridge estimation [20]. The ridge estimator overcomes the weakness of the least squares estimator with a smaller mean squared error. To combat multicollinearity, ref. [21] proposes pretest and Stein-type ridge regression estimators for linear and partially linear models. Furthermore, ref. [22] develops shrinkage estimation based on Liu regression to overcome multicollinearity in linear models.
Our primary focus is on the estimation and prediction problem for linear mixed effect models when there are many potential predictors that have a weak or no influence on the response of interest. The proposed method simultaneously controls overfitting by combining generalized least squares estimation with a ridge-type roughness penalty. We propose pretest and shrinkage estimation strategies using the ridge estimation technique as a base estimator and numerically compare their performance with the LASSO and adaptive LASSO estimators. Our proposed estimation strategy is applied to both high-dimensional and low-dimensional data.
The rest of this article is organized as follows. In Section 2, we present the linear mixed effect model and the proposed estimation techniques. We introduce the full model and sub-model estimators based on ridge estimation. Thereafter, we construct the pretest and shrinkage ridge estimators. Section 3 provides the asymptotic bias and risk of these estimators. A Monte Carlo simulation is used to evaluate the performance of the estimators, including a comparison with the LASSO-type estimators, and the results are reported in Section 4. Section 5 presents a demonstration of the proposed methodology on high-dimensional resting-state effective brain connectivity and genetic data. We also illustrate the proposed estimation methods in an application to the low-dimensional Amsterdam growth and health study data. Section 6 presents a discussion with recommendations.

2. Model and Estimation Strategies

In this section, we present the linear mixed effect model and the proposed estimation strategies.

2.1. Linear Mixed Model

Suppose that we have a sample of n independent subjects. For the ith subject, we collect the response y_ij at the jth measurement occasion, where i = 1, …, n; j = 1, …, n_i; and N = Σ_{i=1}^{n} n_i is the total number of observations. Let Y_i = (y_{i1}, …, y_{in_i})^T denote the n_i × 1 vector of responses from the ith subject. Let X_i = (x_{i1}, …, x_{in_i})^T and Z_i = (z_{i1}, …, z_{in_i})^T be the n_i × p and n_i × q known fixed-effects and random-effects design matrices for the ith subject, of full rank p and q, respectively. The linear mixed effects model [1] for the vector of repeated responses Y_i on the ith subject is assumed to have the form
Y_i = X_i β + Z_i a_i + ϵ_i,    (1)
where β = (β_1, …, β_p)^T is the p × 1 vector of unknown fixed-effects parameters (regression coefficients), a_i is the q × 1 vector of unobservable random effects for the ith subject, assumed to follow a multivariate normal distribution with zero mean and unknown q × q covariance matrix G, and ϵ_i is the n_i × 1 vector of error terms, assumed to be normally distributed with zero mean and covariance matrix σ² I_{n_i}. Further, the ϵ_i are assumed to be independent of the random effects a_i.
The marginal distribution of Y_i is normal with mean X_i β and covariance matrix Cov(Y_i) = Z_i G Z_i^T + σ² I_{n_i}. By stacking the vectors, the mixed model can be expressed as Y = Xβ + Za + ϵ. From Equation (1), the model distribution follows Y ~ N_N(Xβ, V), where E(Y) = Xβ and V is the block-diagonal covariance matrix whose ith block is V_i = Z_i G Z_i^T + σ² I_{n_i}.

2.2. Ridge Full Model and Sub-Model Estimator

The generalized least squares (GLS) estimator is defined as β̂_GLS = (X^T V^{-1} X)^{-1} X^T V^{-1} Y. The ridge full model estimator is obtained by adding a penalty to the GLS criterion, β̂ = argmin_β {(Y − Xβ)^T V^{-1} (Y − Xβ) + k β^T β}, which yields β̂_Ridge = (X^T V^{-1} X + k I)^{-1} X^T V^{-1} Y, where k ∈ [0, ∞) is the tuning parameter. If k = 0, β̂_Ridge is the GLS estimator, and β̂_Ridge shrinks toward 0 as k becomes sufficiently large. We select the value of k using cross-validation.
We let X = (X_1, X_2), where X_1 is the sub-matrix containing the p_1 non-sparse predictors and X_2 is the sub-matrix containing the p_2 sparse predictors. Accordingly, β = (β_1, β_2), where β_1 and β_2 have dimensions p_1 and p_2, respectively, with p_1 + p_2 = p and p_i ≥ 0 for i = 1, 2.
A sub-model is defined as Y = Xβ + Za + ϵ subject to β^T β ≤ ϕ and β_2 = 0, which corresponds to Y = X_1 β_1 + Za + ϵ subject to β_1^T β_1 ≤ ϕ. The sub-model estimator β̂_1^RSM of β_1 has the form β̂_1^RSM = (X_1^T V^{-1} X_1 + k I)^{-1} X_1^T V^{-1} Y. We denote by β̂_1^RFM the full model ridge estimator of β_1, given by β̂_1^RFM = (X_1^T V^{-1/2} M_{X_2} V^{-1/2} X_1 + k I)^{-1} X_1^T V^{-1/2} M_{X_2} V^{-1/2} Y, where M_{X_2} = I − P = I − V^{-1/2} X_2 (X_2^T V^{-1} X_2)^{-1} X_2^T V^{-1/2}.
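The generalized-ridge algebra above is straightforward to compute once V is treated as known (or plugged in from estimated variance components). A minimal numerical sketch, assuming Y, X and V are available as NumPy arrays and the tuning parameter k has already been chosen by cross-validation (all names here are illustrative):

```python
import numpy as np

def ridge_gls(X, Y, V, k):
    """Generalized-ridge estimator (X^T V^{-1} X + k I)^{-1} X^T V^{-1} Y."""
    Vinv_X = np.linalg.solve(V, X)                 # V^{-1} X without forming V^{-1}
    Vinv_Y = np.linalg.solve(V, Y)                 # V^{-1} Y
    A = X.T @ Vinv_X + k * np.eye(X.shape[1])      # X^T V^{-1} X + k I
    return np.linalg.solve(A, X.T @ Vinv_Y)

# Full model uses all p predictors; the sub-model drops the p2 columns assumed
# sparse (beta_2 = 0), i.e. fits only the first p1 columns of X:
# beta_rfm = ridge_gls(X, Y, V, k)
# beta_rsm = ridge_gls(X[:, :p1], Y, V, k)
```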

2.3. Pretest Ridge Estimation Strategy

Generally, the sub-model estimator will be more efficient than the full model estimator if the information embodied in the imposed linear restrictions is valid, that is, if β_2 is close to zero. However, if the information is not valid, the sub-model estimator is likely to be more biased and may have a higher risk than the full model estimator. There is, therefore, some doubt as to whether or not to impose the restrictions on the model parameters. It is in response to this uncertainty that a statistical test may be used to determine the validity of the proposed restrictions. Accordingly, the procedure to follow in practice is to pretest the validity of the restrictions: if the outcome of the pretest suggests that they are correct, then the model parameters are estimated incorporating the restrictions; if the pretest rejects the restrictions, then the parameters are estimated from the sample information alone. This motivates the consideration of the pretest estimation strategy for the LMM.
The pretest estimator is a combination of the full model estimator β̂_1^RFM and the sub-model estimator β̂_1^RSM through an indicator function I(L_n ≤ d_{n,α}), where L_n is an appropriate test statistic for testing H_0: β_2 = 0 versus H_A: β_2 ≠ 0. Moreover, d_{n,α} is an α-level critical value based on the distribution of L_n under H_0. We define the test statistic based on the log-likelihood ratio as L_n = 2[ℓ(β̂^RFM; Y) − ℓ(β̂^RSM; Y)], where ℓ(·; Y) denotes the log-likelihood.
Under H_0, the test statistic L_n asymptotically follows a chi-square distribution with p_2 degrees of freedom. The pretest ridge estimator β̂_1^RPT of β_1 is then defined by
β̂_1^RPT = β̂_1^RFM − (β̂_1^RFM − β̂_1^RSM) I(L_n ≤ d_{n,α}),    p_2 ≥ 1.
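Computationally, the pretest rule only needs the two fitted coefficient vectors, the likelihood-ratio statistic L_n, and the chi-square critical value. A hedged sketch, assuming loglik_full and loglik_sub are the maximized log-likelihoods returned by whatever routine fits the full and sub-models:

```python
import numpy as np
from scipy.stats import chi2

def pretest_ridge(beta_rfm_1, beta_rsm_1, loglik_full, loglik_sub, p2, alpha=0.05):
    """Pretest ridge estimator: keep the sub-model unless H0: beta_2 = 0 is rejected."""
    Ln = 2.0 * (loglik_full - loglik_sub)        # likelihood-ratio statistic L_n
    d_alpha = chi2.ppf(1.0 - alpha, df=p2)       # alpha-level chi-square critical value
    if Ln <= d_alpha:                            # restriction not rejected
        return np.asarray(beta_rsm_1)
    return np.asarray(beta_rfm_1)
```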

2.4. Shrinkage Ridge Estimation Strategy

The pretest estimator is a discontinuous function of the sub-model estimator β̂_1^RSM and the full model estimator β̂_1^RFM, since it depends on the hard threshold d_{n,α} = χ²_{p_2,α}. We address this limitation by defining the shrinkage ridge estimator based on soft thresholding. The shrinkage ridge estimator (RSE) of β_1, denoted β̂_1^RSE, is defined as
β̂_1^RSE = β̂_1^RSM + (β̂_1^RFM − β̂_1^RSM)(1 − (p_2 − 2) L_n^{-1}),    p_2 ≥ 3.
Here, β̂_1^RSE is a linear combination of the full model estimate β̂_1^RFM and the sub-model estimate β̂_1^RSM. If L_n ≤ (p_2 − 2), a relatively large weight is placed on β̂_1^RSM; otherwise, more weight is placed on β̂_1^RFM. A drawback of β̂_1^RSE is that it is not a convex combination of β̂_1^RFM and β̂_1^RSM. This can cause over-shrinkage, which gives the estimator the opposite sign of β̂_1^RFM; this happens when (p_2 − 2) L_n^{-1} is larger than one. To counter this, we use the positive-part shrinkage ridge estimator (RPS) defined as
β̂_1^RPS = β̂_1^RSM + (β̂_1^RFM − β̂_1^RSM)(1 − (p_2 − 2) L_n^{-1})^+,    p_2 ≥ 3,
where (1 − (p_2 − 2) L_n^{-1})^+ = max(0, 1 − (p_2 − 2) L_n^{-1}). The RPS estimator controls the possible over-shrinkage of the RSE estimator.
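Both shrinkage estimators are simple functions of the same ingredients as the pretest estimator. A small sketch under the same assumptions as the pretest code above:

```python
import numpy as np

def shrinkage_ridge(beta_rfm_1, beta_rsm_1, Ln, p2, positive_part=True):
    """Stein-type shrinkage ridge estimator (RSE) and its positive-part version (RPS)."""
    shrink = 1.0 - (p2 - 2) / Ln                 # shrinkage factor 1 - (p2 - 2)/L_n
    if positive_part:
        shrink = max(0.0, shrink)                # truncate at zero to avoid over-shrinkage
    return np.asarray(beta_rsm_1) + shrink * (np.asarray(beta_rfm_1) - np.asarray(beta_rsm_1))
```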

3. Asymptotic Results

In this section, we derive the asymptotic distributional bias and risk of the estimators considered in Section 2. We examine the properties of the estimators as n increases and as β_2 approaches the null vector under the sequence of local alternatives defined as
K_n: β_2 = β_2(n) = κ / √n,
where κ = (κ_1, κ_2, …, κ_{p_2})^T ∈ R^{p_2} is a fixed vector. The vector κ / √n measures how far the local alternatives K_n deviate from the subspace information β_2 = 0. In order to evaluate the performance of the estimators, we define the asymptotic distributional bias (ADB) of an estimator β̂_1^* as
ADB(β̂_1^*) = lim_{n→∞} E[√n (β̂_1^* − β_1)].
In order to compute the risk functions, we first compute the asymptotic covariance matrices of the estimators. The asymptotic covariance of an estimator β̂_1^* is expressed as
Cov(β̂_1^*) = lim_{n→∞} E[n (β̂_1^* − β_1)(β̂_1^* − β_1)^T].
Following the asymptotic covariance matrix, we define the asymptotic risk of an estimator β̂_1^* as R(β̂_1^*) = tr(Q Cov(β̂_1^*)), where Q is a positive definite weight matrix of dimension p_1 × p_1. We set Q = I in this study.
Assumption 1.
We impose the following two regularity conditions to establish the asymptotic properties of the estimators.
1. (1/n) max_{1≤i≤N} x_i^T (X^T V^{-1} X)^{-1} x_i → 0 as n → ∞, where x_i^T is the ith row of X.
2. B_n = n^{-1} (X^T V^{-1} X) → B as n → ∞, for some finite matrix B = (B_{11}, B_{12}; B_{21}, B_{22}).
Theorem 1.
For k < ∞, if k/√n → λ_0 ≥ 0 and B is non-singular, the asymptotic distribution of the full model ridge estimator β̂_n^RFM is
√n (β̂_n^RFM − β) →_D N(−λ_0 B^{-1} β, B^{-1}),
where →_D denotes convergence in distribution.
Proof. 
See Theorem 2 in [23]. □
Proposition 1.
Assuming Assumption 1 and Theorem 1 hold, under the local alternatives K_n we have
(φ_1, φ_3) →_D N( (μ_{11.2}, δ), ( B_{11.2}^{-1}  Φ ; Φ  Φ ) ),
(φ_3, φ_2) →_D N( (δ, γ), ( Φ  0 ; 0  B_{11}^{-1} ) ),
where φ_1 = √n(β̂_1^RFM − β_1), φ_2 = √n(β̂_1^RSM − β_1), φ_3 = √n(β̂_1^RFM − β̂_1^RSM), γ = μ_{11.2} + δ, δ = B_{11}^{-1} B_{12} κ, Φ = B_{11}^{-1} B_{12} B_{22.1}^{-1} B_{21} B_{11}^{-1}, B_{22.1} = B_{22} − B_{21} B_{11}^{-1} B_{12}, μ = −λ_0 B^{-1} β = (μ_1, μ_2), and μ_{11.2} = μ_1 − B_{12} B_{22}^{-1}((β_2 − κ) − μ_2).
Proof. 
See Appendix A. □
Theorem 2.
Under the conditions of Theorem 1 and the local alternatives K_n, the ADBs of the proposed estimators are
ADB(β̂_1^RFM) = μ_{11.2},
ADB(β̂_1^RSM) = μ_{11.2} + δ = γ,
ADB(β̂_1^RPT) = μ_{11.2} + δ H_{p_2+2}(χ²_{p_2,α}; Δ),
ADB(β̂_1^RSE) = μ_{11.2} + (p_2 − 2) δ E[χ^{-2}_{p_2+2}(Δ)],
ADB(β̂_1^RPS) = μ_{11.2} + δ H_{p_2+2}(p_2 − 2; Δ) + (p_2 − 2) δ E[χ^{-2}_{p_2+2}(Δ) I(χ²_{p_2+2}(Δ) > p_2 − 2)],
where Δ = κ^T B_{22.1}^{-1} κ, B_{22.1} = B_{22} − B_{21} B_{11}^{-1} B_{12}, H_v(x; Δ) is the cumulative distribution function of the non-central chi-squared distribution with v degrees of freedom and non-centrality parameter Δ, and E[χ_v^{-2j}(Δ)] is the expectation of the inverse 2j-th power of a non-central χ² random variable with v degrees of freedom and non-centrality parameter Δ,
E[χ_v^{-2j}(Δ)] = ∫_0^∞ x^{-2j} dH_v(x; Δ).
Proof. 
See Appendix B.1. □
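The non-central chi-square distribution function H_v(·; Δ) and the inverse moments E[χ_v^{-2j}(Δ)] that appear in Theorem 2 (and in the risk expressions below) can be evaluated numerically. A minimal sketch using SciPy, where the function name and example values are illustrative:

```python
import numpy as np
from scipy.stats import ncx2
from scipy.integrate import quad

def inv_ncx2_moment(j, v, delta):
    """E[ chi_v^{-2j}(Delta) ] = int_0^inf x^{-2j} dH_v(x; Delta)."""
    integrand = lambda x: x ** (-2 * j) * ncx2.pdf(x, df=v, nc=delta)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

# Example (assumed values): E[chi_{p2+2}^{-2}(Delta)] with p2 = 10 and Delta = 1
# inv_ncx2_moment(j=1, v=12, delta=1.0)
# H_v(x; Delta) itself is available as ncx2.cdf(x, df=v, nc=delta).
```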
Since the ADBs of the estimators are in non-scalar (vector) form, we define the asymptotic quadratic distributional bias (AQDB) of β̂_1^* as
AQDB(β̂_1^*) = [ADB(β̂_1^*)]^T B_{11.2} [ADB(β̂_1^*)],
where B_{11.2} = B_{11} − B_{12} B_{22}^{-1} B_{21}.
Corollary 1.
Suppose Theorem 2 holds. Then, under { K n } , the AQDBs of the estimators are
AQDB(β̂_1^RFM) = μ_{11.2}^T B_{11.2} μ_{11.2},
AQDB(β̂_1^RSM) = γ^T B_{11.2} γ,
AQDB(β̂_1^RPT) = μ_{11.2}^T B_{11.2} μ_{11.2} + μ_{11.2}^T B_{11.2} δ H_{p_2+2}(χ²_{p_2,α}; Δ) + δ^T B_{11.2} μ_{11.2} H_{p_2+2}(χ²_{p_2,α}; Δ) + δ^T B_{11.2} δ H²_{p_2+2}(χ²_{p_2,α}; Δ),
AQDB(β̂_1^RSE) = μ_{11.2}^T B_{11.2} μ_{11.2} + (p_2 − 2) μ_{11.2}^T B_{11.2} δ E[χ^{-2}_{p_2+2}(Δ)] + (p_2 − 2) δ^T B_{11.2} μ_{11.2} E[χ^{-2}_{p_2+2}(Δ)] + (p_2 − 2)² δ^T B_{11.2} δ (E[χ^{-2}_{p_2+2}(Δ)])²,
AQDB(β̂_1^RPS) = μ_{11.2}^T B_{11.2} μ_{11.2} + (δ^T B_{11.2} μ_{11.2} + μ_{11.2}^T B_{11.2} δ)[H_{p_2+2}(p_2 − 2; Δ) + (p_2 − 2) E(χ^{-2}_{p_2+2}(Δ) I(χ²_{p_2+2}(Δ) > p_2 − 2))] + δ^T B_{11.2} δ [H_{p_2+2}(p_2 − 2; Δ) + (p_2 − 2) E(χ^{-2}_{p_2+2}(Δ) I(χ²_{p_2+2}(Δ) > p_2 − 2))]².
When B_{11.2} = 0, the AQDBs of all the estimators are equivalent, and the estimators are therefore asymptotically unbiased. If we assume that B_{11.2} ≠ 0, the results for the bias of the estimators can be summarized as follows:
  • The AQDB of β̂_1^RSM is an unbounded function of Δ, since γ^T B_{11.2} γ grows without bound as Δ increases.
  • The AQDB of β̂_1^RPT starts from μ_{11.2}^T B_{11.2} μ_{11.2} at Δ = 0; as Δ increases, it increases to a maximum and then decreases to zero.
  • The characteristics of β̂_1^RSE and β̂_1^RPS are similar to those of β̂_1^RPT. The AQDBs of β̂_1^RSE and β̂_1^RPS similarly start from μ_{11.2}^T B_{11.2} μ_{11.2} at Δ = 0, increase to a point, and then decrease towards zero, since E[χ^{-2}_{p_2+2}(Δ)] is a non-increasing function of Δ.
Theorem 3.
Suppose Theorem 1 holds and under the local alternatives K n , the covariance matrices of the estimators are
C o v ( β ^ 1 RFM ) = B 11.2 1 + μ 11.2 μ 11.2 T , C o v ( β ^ 1 RSM ) = B 11 1 + γ γ T , C o v ( β ^ 1 RPT ) = B 11.2 1 + μ 11.2 μ 11.2 T + 2 μ 11.2 T δ H p 2 + 2 ( χ p 2 2 ; Δ ) Φ H p 2 + 2 ( χ p 2 2 ; Δ ) + δ δ T 2 H p 2 + 2 ( χ p 2 2 ; Δ ) H p 2 + 4 ( χ p 2 2 ; Δ ) , C o v ( β ^ 1 RSE ) = B 11.2 1 + μ 11.2 μ 11.2 T + 2 ( p 2 2 ) μ 11.2 T δ E χ p 2 + 2 2 ( Δ ) ( p 2 2 ) Φ 2 E χ p 2 + 2 2 ( Δ ) ( p 2 2 ) E χ p 2 + 2 4 ( Δ ) + ( p 2 2 ) δ δ T 2 E χ p 2 + 4 2 ( Δ ) + 2 E ( χ p 2 + 2 2 ( Δ ) ) + ( p 2 2 ) E χ p 2 + 4 4 ( Δ ) , C o v ( β ^ 1 RPS ) = Cov ( β ^ 1 RSE ) + 2 δ μ 11.2 T E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 2 Φ E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 2 δ δ T E { 1 ( p 2 2 ) χ p 2 + 4 2 ( Δ ) } I ( χ p 2 + 4 2 ( Δ ) p 2 2 ) + 2 δ δ T E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 ( p 2 2 ) 2 Φ E χ p 2 + 2 4 ( Δ ) I χ p 2 + 2 , α 2 ( Δ ) p 2 2 ( p 2 2 ) 2 δ δ T E χ p 2 + 2 , α 4 ( Δ ) I χ p 2 + 2 , α 2 ( Δ ) p 2 2 + Φ H p 2 + 2 p 2 2 ; Δ + δ δ T H p 2 + 4 p 2 2 ; Δ .
Proof. 
See Appendix B.2. □
Corollary 2.
Under the local alternatives ( K n ) and from Theorem 3, the risk of the estimators are obtained as
R β ^ 1 RFM = t r Q B 11.2 1 + μ 11.2 T Q μ 11.2 , R [ β ^ 1 RSM ] = t r Q B 11 1 + γ T Q γ , R β ^ 1 RPT = t r Q B 11.2 1 + μ 11.2 T Q μ 11.2 + 2 μ 11.2 T Q δ H p 2 + 2 χ p 2 2 ; Δ t r Q Φ H p 2 + 2 χ p 2 2 ; Δ + δ Q δ T 2 H p 2 + 2 χ p 2 2 ; Δ H p 2 + 4 χ p 2 2 ; Δ , R β ^ 1 RSE = t r Q B 11.2 1 + μ 11.2 T Q μ 11.2 + 2 ( p 2 2 ) μ 11.2 T Q δ E χ p 2 + 2 2 ( Δ ) ( p 2 2 ) tr ( Q Φ ) E χ p 2 + 2 2 ( Δ ) ( p 2 2 ) E χ p 2 + 2 4 ( Δ ) + ( p 2 2 ) δ T Q δ 2 E χ p 2 + 2 2 ( Δ ) 2 E χ p 2 + 4 2 ( Δ ) ( p 2 2 ) E χ p 2 + 4 4 ( Δ ) , R β ^ 1 RPS = R β ^ 1 RSE + 2 δ Q μ 11.2 T E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 2 t r ( Q Φ ) E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 2 δ T Q δ E { 1 ( p 2 2 ) χ p 2 + 4 2 ( Δ ) } I ( χ p 2 + 4 2 ( Δ ) p 2 2 ) + 2 δ T Q δ E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 ( p 2 2 ) 2 t r ( Q Φ ) E χ p 2 + 2 4 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 ( p 2 2 ) 2 δ T Q δ E χ p 2 + 2 4 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 + t r ( Q Φ ) H p 2 + 2 p 2 2 ; Δ + δ T Q δ H p 2 + 4 p 2 2 ; Δ .
From Theorem 2, when B_{12} = 0, the risks of the estimators β̂_1^RSM, β̂_1^RPT, β̂_1^RSE, and β̂_1^RPS reduce to the common value tr(Q B_{11.2}^{-1}) + μ_{11.2}^T Q μ_{11.2}, which is the risk of β̂_1^RFM. If B_{12} ≠ 0, the results can be summarized as follows:
  • The risk of β̂_1^RFM remains constant, while the risk of β̂_1^RSM is an unbounded function of Δ, since Δ ∈ [0, ∞).
  • The risk of β̂_1^RPT increases as Δ moves away from zero, achieves its maximum, and then decreases towards the risk of the full model estimator.
  • The risk of β̂_1^RFM is smaller than the risk of β̂_1^RPT only for small values of Δ in a neighborhood of zero; over the rest of the parameter space, β̂_1^RPT outperforms β̂_1^RFM, that is, R(β̂_1^RFM) > R(β̂_1^RPT).
  • Comparing the risks of β̂_1^RSE and β̂_1^RFM, the estimator β̂_1^RSE outperforms β̂_1^RFM; that is, R(β̂_1^RSE) ≤ R(β̂_1^RFM) for all Δ ≥ 0.

4. Simulation Studies

In this section, we conduct a simulation study to assess the performance of the suggested estimators in finite samples. The criterion for comparing the performance of the estimators in our study is the mean squared error. We simulate the response from the following LMM:
Y_i = X_i β + Z_i a_i + ϵ_i,    (3)
where ϵ_i ~ N(0, σ² I_{n_i}) with σ² = 1. We generate the random effects a_i from a multivariate normal distribution with zero mean and covariance matrix G = 0.5 I_{2×2}, where I_{2×2} is the 2 × 2 identity matrix. The rows of the design matrix X_i = (x_{i1}, …, x_{in_i})^T are generated from a multivariate normal distribution with zero mean vector and covariance matrix Σ_x. Furthermore, we assume that the off-diagonal elements of Σ_x are all equal to ρ, the coefficient of correlation between any two predictors, with ρ = 0.3, 0.7, 0.9. The ratio of the largest eigenvalue to the smallest eigenvalue of the matrix X^T V^{-1} X is calculated as a condition number index (CNI) [24], which assesses the existence of multicollinearity in the design matrix. If the CNI is larger than 30, then the model has significant multicollinearity. Our simulations are based on the linear mixed effects model in Equation (3) with n = 60 and 100 subjects.
We consider a situation in which the model is assumed to be sparse. In this study, our interest lies in testing the hypothesis H_0: β_2 = 0, and our goal is to estimate the fixed effects coefficient vector β_1. We partition the fixed effects coefficients as β = (β_1, β_2) = (β_1, 0_{p_2}). The coefficients β_1 and β_2 are p_1- and p_2-dimensional vectors, respectively, with p = p_1 + p_2.
In order to investigate the behavior of the estimators, we define Δ* = ||β − β_0||, where β_0 = (β_1^T, 0_{p_2})^T and ||·|| is the Euclidean norm. We consider Δ* values between 0 and 4. If Δ* = 0, then β = (1, 1, 1, 1, 0, 0, …, 0_{p_2})^T is used to generate the response under the null hypothesis. On the other hand, when Δ* > 0, say Δ* = 4, we use β = (1, 1, 1, 1, 4, 0, 0, …, 0_{p_2−1})^T to generate the response under the local alternative hypothesis. In our simulation study, we consider the numbers of fixed effect (predictor) variables (p_1, p_2) ∈ {(5, 40), (5, 500), (5, 1000)}. Each configuration is replicated 5000 times to obtain stable results, and the MSEs of the suggested estimators are computed with α = 0.05.
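One replicate of this design can be generated as sketched below. This is a hedged illustration: the within-subject sample size n_i, the random-effects design Z_i, and the random seed are not fully specified above, so the values used here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2021)  # assumed seed, for reproducibility of the sketch

def simulate_lmm(n=60, ni=4, p1=5, p2=40, rho=0.3, delta_star=0.0, sigma2=1.0):
    """One Monte Carlo replicate of the LMM in Equation (3) (illustrative settings)."""
    p = p1 + p2
    beta = np.concatenate([np.ones(p1), np.zeros(p2)])
    if delta_star > 0:                       # local alternative: perturb one sparse coefficient
        beta[p1] = delta_star
    Sigma_x = np.full((p, p), rho)           # equicorrelated predictors
    np.fill_diagonal(Sigma_x, 1.0)
    G = 0.5 * np.eye(2)                      # random-effect covariance G = 0.5 I
    Y, X, Z, grp = [], [], [], []
    for i in range(n):
        Xi = rng.multivariate_normal(np.zeros(p), Sigma_x, size=ni)
        Zi = np.column_stack([np.ones(ni), np.arange(ni)])   # assumed random intercept + slope
        ai = rng.multivariate_normal(np.zeros(2), G)
        eps = rng.normal(0.0, np.sqrt(sigma2), size=ni)
        Y.append(Xi @ beta + Zi @ ai + eps)
        X.append(Xi); Z.append(Zi); grp.append(np.full(ni, i))
    return np.concatenate(Y), np.vstack(X), np.vstack(Z), np.concatenate(grp)
```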
Based on the simulated data, we calculate the mean squared error (MSE) of each estimator as MSE(β̂_1^*) = (1/5000) Σ_{j=1}^{5000} (β̂_1^{*(j)} − β_1)^T (β̂_1^{*(j)} − β_1), where β̂_1^{*(j)} denotes any one of β̂_1^RSM, β̂_1^RPT, β̂_1^RSE and β̂_1^RPS in the jth replication. We use the relative mean squared efficiency (RMSE), the ratio of MSEs, for risk comparison. The RMSE of an estimator β̂_1^* with respect to the baseline full model ridge estimator β̂_1^RFM is defined as RMSE(β̂_1^RFM : β̂_1^*) = MSE(β̂_1^RFM) / MSE(β̂_1^*), where β̂_1^* is one of the suggested estimators; an RMSE larger than one indicates that β̂_1^* is more efficient than β̂_1^RFM.
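For completeness, the RMSE criterion can be computed as below; a small sketch whose inputs are arrays of replicate-wise estimates of β_1 (names are illustrative):

```python
import numpy as np

def rmse(beta_hats_full, beta_hats_alt, beta_true):
    """Relative MSE of a competing estimator versus the full model ridge estimator.

    beta_hats_full, beta_hats_alt: arrays of shape (n_rep, p1) of estimates of beta_1.
    Values above 1 favour the competing estimator."""
    mse_full = np.mean(np.sum((beta_hats_full - beta_true) ** 2, axis=1))
    mse_alt = np.mean(np.sum((beta_hats_alt - beta_true) ** 2, axis=1))
    return mse_full / mse_alt
```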

4.1. Simulation Results

In this subsection, we present the results from our simulation study. The results for n = 60 and 100 with p_1 = 5 and different values of the correlation coefficient ρ are reported in Table 1 and Table 2. Furthermore, we plot the RMSEs against Δ* in Figure 1 and Figure 2. The findings can be summarized as follows:
  • When Δ* = 0, the sub-model estimator RSM outperforms all other estimators. As Δ* moves away from zero, the RMSE of the sub-model estimator decreases and goes to zero.
  • The pretest ridge estimator RPT outperforms the shrinkage ridge and positive-part Stein ridge estimators in the case Δ* = 0. However, for a large number of sparse predictors p_2, while keeping p_1 and n fixed, RPT is less efficient than RPS and RSE. When Δ* is larger than zero, the RMSE of RPT decreases and remains below one for intermediate values of Δ*; thereafter, the RMSE of RPT increases and approaches one for larger values of Δ*.
  • RPS performs better than RSE over the entire parameter space induced by Δ*, as presented in Table 1 and Table 2. Similarly, both shrinkage estimators RPS and RSE outperform the full model ridge estimator irrespective of whether the selected sub-model is correct. This is consistent with the asymptotic theory presented in Section 3.
  • Since Δ* measures the degree of deviation from the sparsity restriction β_2 = 0 on the parameter space, it is clear that one cannot go wrong with the use of the shrinkage estimators even if the selected sub-model is wrongly specified. As evident from Table 1 and Table 2 and Figure 1 and Figure 2, if the selected sub-model is correct, that is, Δ* = 0, then the shrinkage estimators are relatively efficient compared with the ridge full model estimator. On the other hand, if the sub-model is misspecified, the gain slowly diminishes. However, in terms of risk, the shrinkage estimators are at least as good as the full model ridge estimator. Therefore, the use of shrinkage estimators makes sense in applications where a sub-model cannot be correctly specified.
  • The RMSEs of the ridge-type estimators are increasing functions of the amount of multicollinearity. This indicates that the ridge-type estimators perform better than the classical estimator in the presence of multicollinearity among the predictor variables.

4.2. Comparison with LASSO-Type Estimators

We compare our listed estimators with the LASSO and adaptive LASSO estimators. A 10-fold cross-validation is used to select the optimal value of the penalty parameter that minimizes the mean squared error for the LASSO-type estimators; a minimal sketch of this cross-validation step is given after the list below. The results for ρ = 0.3, 0.7, 0.9, n = 60, 100, p_1 = 10 and p_2 = 50, 500, 1000, 2000 are presented in Table 3. We observe the following from Table 3.
  • The performance of the sub-model estimator is the best among all estimators.
  • The pretest ridge estimator performs better than the other estimators. However, for larger values of sparse predictors p 2 the shrinkage estimators outperform the pretest estimator.
  • The performance of the LASSO and aLASSO estimators is comparable when ρ is small. The pretest and shrinkage estimators remain stable for a given value of ρ.
  • For a large number of sparse predictors p_2, the shrinkage and pretest ridge estimators outperform the LASSO-type estimators. This indicates the superiority of the shrinkage estimators over the LASSO-type estimators. Therefore, shrinkage estimators are preferable when there is multicollinearity among the predictor variables.
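A rough sketch of the cross-validated LASSO comparator mentioned above: a generic LASSO fit on the V-whitened fixed-effects design with the penalty chosen by 10-fold cross-validation in scikit-learn. This is an illustration under stated assumptions, not necessarily the exact routine used in our computations.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_fixed_effects(X, Y, V):
    """LASSO for the fixed effects after whitening by the (assumed known) covariance V."""
    L = np.linalg.cholesky(V)            # V = L L^T
    Xw = np.linalg.solve(L, X)           # L^{-1} X, so the working errors are ~ i.i.d.
    Yw = np.linalg.solve(L, Y)
    fit = LassoCV(cv=10).fit(Xw, Yw)     # penalty chosen by 10-fold cross-validation
    return fit.coef_, fit.alpha_
```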

5. Real Data Application

We consider two real data analyses using Amsterdam Growth and Health Data and a genetic and brain network connectivity edge weight data to illustrate the performance of the proposed estimators.

5.1. Amsterdam Growth and Health Data (AGHD)

The AGHD data are obtained from the Amsterdam Growth and Health Study [25]. The goal of this study is to investigate the relationship between lifestyle and health from adolescence into young adulthood. The response variable Y is the total serum cholesterol measured over six time points. There are five covariates: X_1 is the baseline fitness level measured as the maximum oxygen uptake on a treadmill, X_2 is the amount of body fat estimated by the sum of the thicknesses of four skinfolds, X_3 is a smoking indicator (0 = no, 1 = yes), X_4 is gender (1 = female, 2 = male), and X_5 is the time of measurement; subject-specific random effects are also included.
A total of 147 subjects participated in the study, and all variables were measured at n_i = 6 time occasions. In order to apply the proposed methods, we first apply an AIC-based variable selection procedure to select the sub-model. For the AGHD data, we fit a linear mixed model with all five covariates for both the fixed and subject-specific random effects, using a two-stage selection procedure to choose both the random and fixed effects. The analysis found X_2 and X_5 to be significant covariates for the prediction of the response variable serum cholesterol; the other variables are excluded since they are not significant. Based on this information, the sub-model contains X_2 and X_5, and the full model includes all the covariates. We construct the shrinkage estimators from the full model and the sub-model. In terms of the null hypothesis, the restriction can be written as (β_1, β_3, β_4) = (0, 0, 0), with p = 5, p_1 = 2 and p_2 = 3.
To evaluate the performance of the estimators, we obtain the mean squared prediction error (MSPE) using bootstrap samples. We draw 1000 bootstrap samples of the 147 subjects from the data matrix {(Y_ij, X_ij), i = 1, 2, …, 147; j = 1, 2, …, 6}. We then calculate the relative prediction error (RPE) of β̂_1^* with respect to β̂_1^RFM, the full model estimator. The RPE is defined as
RPE(β̂_1^RFM : β̂_1^*) = MSPE(β̂_1^*) / MSPE(β̂_1^RFM) = [(Y − X_1 β̂_1^*)^T (Y − X_1 β̂_1^*)] / [(Y − X_1 β̂_1^RFM)^T (Y − X_1 β̂_1^RFM)],
where β̂_1^* is one of the listed estimators. If RPE < 1, then β̂_1^* outperforms β̂_1^RFM.
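A sketch of the bootstrap RPE computation follows. Whole subjects are resampled with replacement; fit_estimator and fit_full are hypothetical user-supplied functions returning an estimate of β_1 from a bootstrap sample.

```python
import numpy as np

rng = np.random.default_rng(2021)  # assumed seed for the sketch

def bootstrap_rpe(Y, X1, subjects, fit_estimator, fit_full, B=1000):
    """Average relative prediction error over B bootstrap resamples of whole subjects."""
    ids = np.unique(subjects)
    rpes = []
    for _ in range(B):
        boot = rng.choice(ids, size=ids.size, replace=True)
        rows = np.concatenate([np.where(subjects == i)[0] for i in boot])
        Yb, Xb = Y[rows], X1[rows]
        b_star, b_full = fit_estimator(Xb, Yb), fit_full(Xb, Yb)
        num = np.sum((Yb - Xb @ b_star) ** 2)      # MSPE of the candidate estimator
        den = np.sum((Yb - Xb @ b_full) ** 2)      # MSPE of the full model ridge fit
        rpes.append(num / den)
    return np.mean(rpes)
```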
Table 4 reports the estimates, standard error of the non-sparse predictors and RPEs of the estimators with respect to the full model. As expected, the sub-model ridge estimator β ^ 1 RSM has the minimum RPE because it is computed when the sub-model is correct, that is, Δ * = 0 . It is evident by the RPE values in Table 4 that the shrinkage estimators are superior to the LASSO-type estimators. Furthermore, the positive shrinkage is more efficient than the shrinkage ridge estimator.

5.2. Resting-State Effective Brain Connectivity and Genetic Data

These data comprise a longitudinal resting-state functional magnetic resonance imaging (rs-fMRI) effective brain connectivity network and genetic study [26] obtained from a sample of 111 subjects with a total of 319 rs-fMRI scans from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The 111 subjects comprise 36 cognitively normal (CN), 63 mild cognitive impairment (MCI) and 12 Alzheimer’s Disease (AD) subjects. The response is a network connection between regions of interest estimated from an rs-fMRI scan within the Default Mode Network (DMN), and we observe a longitudinal sequence of such connections for each subject, with the number of repeated measurements varying across subjects. The DMN consists of a set of brain regions that tend to be active in the resting state, when a subject is mind wandering with no intended task. For this data analysis, we consider the network edge weight from the left intraparietal cortex to the posterior cingulate cortex (LIPC → PCC) as our response. The genetic data are single nucleotide polymorphisms (SNPs) from non-sex chromosomes, i.e., chromosome 1 to chromosome 22. SNPs with a minor allele frequency less than 5% are removed, as are SNPs with a Hardy–Weinberg equilibrium p-value lower than 10^{-6} or a missing rate greater than 5%. After preprocessing, we are left with 1,220,955 SNPs and the longitudinal rs-fMRI effective connectivity network for the 111 subjects with rs-fMRI data. The response is the network edge weight; the SNPs enter as fixed effects, and subject-specific random effects are included.
In order to apply the proposed methods, we use a genome-wide association study (GWAS) to screen the genetic data down to 100 SNPs. We implement a second screening by applying multinomial logistic regression to identify a smaller subset of the 100 SNPs that are potentially associated with disease status (CN/MCI/AD). This yields a subset of the top 10 SNPs, which are treated as the most important predictors; the remaining 90 SNPs are regarded as sparse. We now have two models: the full model with all 100 SNPs and the sub-model with the 10 selected SNPs. Finally, we construct the pretest and shrinkage estimators from the full model and the sub-model.
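The second screening stage can be sketched as follows. The ranking criterion shown here, in-sample fit of one-SNP-at-a-time multinomial logistic regressions, is an assumption made for illustration; the screening in our analysis may use a different association measure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def screen_snps(snps, diagnosis, keep=10):
    """Rank GWAS-selected SNPs by association with diagnosis (CN/MCI/AD) and keep the top few."""
    scores = []
    for j in range(snps.shape[1]):
        xj = snps[:, [j]]
        # One-SNP multinomial logistic fit (three diagnosis classes); crude in-sample score.
        fit = LogisticRegression(max_iter=1000).fit(xj, diagnosis)
        scores.append(fit.score(xj, diagnosis))
    order = np.argsort(scores)[::-1]
    return order[:keep]
```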
We draw 1000 bootstrap samples with replacement from the corresponding data matrix {(Y_ij, X_ij), i = 1, …, 111; j = 1, …, n_i}. We report the RPEs of the estimators with respect to the full model ridge estimator, based on the bootstrap simulation, in Table 5. We observe that the sub-model, pretest, shrinkage and positive-part shrinkage ridge estimators all outperform the full model estimator. Clearly, the sub-model ridge estimator has the smallest RPE, since it is computed when the candidate sub-model is correct, i.e., Δ = 0. Both shrinkage ridge estimators outperform the pretest ridge estimator; in particular, the positive-part shrinkage estimator performs better than the shrinkage estimator. The performance of both the shrinkage and pretest ridge estimators is better than that of the LASSO-type estimators. Thus, the data analysis is in line with our simulation and theoretical findings.

6. Conclusions

In this paper, we present efficient estimation strategies for the linear mixed effect model when there exists multicollinearity among predictor variables for high-dimensional data application. We considered the estimation of fixed effects parameters in the linear mixed model when some of the predictors may have a very weak influence on the response of interest. We introduced pretest and shrinkage estimation in our model using the ridge estimation as the reference estimator. In addition, we established the asymptotic properties of the pretest and shrinkage ridge estimators. Our theoretical findings demonstrate that the shrinkage ridge estimators outperform the full model ridge estimator and perform relatively better than the sub-model estimator in a wide range of the parameter space.
Additionally, a Monte Carlo simulation was conducted to investigate and assess the finite sample behavior of the proposed estimators when the model is sparse (the restrictions on the parameters hold). As expected, the sub-model ridge estimator outshines all other estimators when the restrictions hold. However, when this assumption is violated, the shrinkage and pretest ridge estimators outperform the sub-model estimator. Furthermore, when the number of sparse predictors is extremely large relative to the sample size, the shrinkage estimators outperform the pretest ridge estimator. These numerical results are consistent with our asymptotic results. We also assessed the relative performance of the LASSO-type estimators against our ridge-type estimators. We observe that the performance of the pretest and shrinkage ridge estimators is superior to that of the LASSO-type estimators when predictors are highly correlated. In our real data applications, the shrinkage ridge estimators are superior, with the smallest relative prediction errors compared to the LASSO-type estimators.
In summary, the results of the data analyses strongly confirm the findings of the simulation study and suggest the use of the shrinkage ridge estimation strategy when no prior information about the parameter subspace is available. The results of our simulation study and real data application are consistent with available results in [27,28,29].
In our future work, we will focus on other penalty estimators, such as the Elastic Net, the minimax concave penalty (MCP), and the smoothly clipped absolute deviation (SCAD) method, as estimation strategies in the LMM for high-dimensional data. These estimators will be assessed and compared with the proposed ridge-type estimators. Another interesting extension will be integrating two sub-models by incorporating ridge-type estimation strategies in the linear mixed effect model. The goal is to improve the estimation accuracy of the non-sparse set of fixed effects parameters by combining an over-fitted model estimator with an under-fitted one [27,29]. This approach will include combining two sub-models produced by two different variable selection techniques for the LMM [28].

Author Contributions

Conceptualization, E.A.O. and S.E.A.; methodology, E.A.O. and F.S.N.; formal analysis, E.A.O.; writing—original draft preparation, E.A.O.; writing—review and editing, E.A.O., S.E.A. and F.S.N.; supervision, F.S.N. and S.E.A.; funding acquisition, F.S.N. and S.E.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Sciences and Engineering Research Council of Canada (NSERC).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here https://pubmed.ncbi.nlm.nih.gov/22434862/ (accessed on 20 April 2021).

Acknowledgments

Research is supported by the Visual and Automated Disease Analytics (VADA) graduate training program.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Proposition 1. 
To establish the asymptotic relationship between the sub-model and full model estimators of β_1, we use the working response Ŷ = Y − X_2 β̂_2^RFM, for which
β̂_1^RFM = argmin_{β_1} {(Ŷ − X_1 β_1)^T V^{-1} (Ŷ − X_1 β_1) + λ ||β_1||²}
= (X_1^T V^{-1} X_1 + λ I_{p_1})^{-1} X_1^T V^{-1} Ŷ
= (X_1^T V^{-1} X_1 + λ I_{p_1})^{-1} X_1^T V^{-1} Y − (X_1^T V^{-1} X_1 + λ I_{p_1})^{-1} X_1^T V^{-1} X_2 β̂_2^RFM
= β̂_1^RSM − (X_1^T V^{-1} X_1 + λ I_{p_1})^{-1} X_1^T V^{-1} X_2 β̂_2^RFM
= β̂_1^RSM − B_{11}^{-1} B_{12} β̂_2^RFM.
From Theorem 1, we partition √n(β̂^RFM − β) as √n(β̂^RFM − β) = (√n(β̂_1^RFM − β_1), √n(β̂_2^RFM − β_2)). We obtain √n(β̂_1^RFM − β_1) →_D N_{p_1}(μ_{11.2}, B_{11.2}^{-1}), where B_{11.2} = B_{11} − B_{12} B_{22}^{-1} B_{21}. We have shown that β̂_1^RSM = β̂_1^RFM + B_{11}^{-1} B_{12} β̂_2^RFM. Using this expression and under the local alternatives {K_n}, we obtain the following expressions:
φ_2 = √n(β̂_1^RSM − β_1) = √n(β̂_1^RFM + B_{11}^{-1} B_{12} β̂_2^RFM − β_1) = φ_1 + B_{11}^{-1} B_{12} √n β̂_2^RFM,
φ_3 = √n(β̂_1^RFM − β̂_1^RSM) = √n(β̂_1^RFM − β_1) − √n(β̂_1^RSM − β_1) = φ_1 − φ_2.
Since φ 2 and φ 3 are linear functions of φ 1 , as n , they are also asymptotically normally distributed. Their mean vectors and covariance matrices are as follows:
E ( φ 1 ) = E n β ^ 1 RFM β 1 = μ 11.2 E ( φ 2 ) = E φ 1 + B 11 1 B 12 n β ^ 2 RFM = E ( φ 1 ) + B 11 1 B 12 n E ( β ^ 2 RFM ) = μ 11.2 + B 11 1 B 12 κ = ( μ 11.2 δ ) = γ E ( φ 3 ) = E ( φ 1 φ 2 ) = μ 11.2 ( ( μ 11.2 δ ) ) = δ V a r ( φ 1 ) = B 22.1 1 V a r ( φ 2 ) = V a r φ 1 + B 11 1 B 12 n β ^ 2 RFM = V a r ( φ 1 ) + B 11 1 B 12 B 22.1 1 B 21 B 11 1 + 2 C o v n ( β ^ 1 RFM β 1 ) , n ( β ^ 2 RFM β 2 ) ( B 11 1 B 12 ) T = B 22.1 1 B 11 1 B 12 B 22.1 1 B 21 B 11 1 = B 11 1 V a r ( φ 3 ) = V a r n β ^ 1 RFM β ^ 1 RSM = V a r n β ^ 1 RFM β ^ 1 RFM B 11 1 B 12 β ^ 2 RFM = B 11 1 B 12 V a r n β ^ 2 RFM ( B 11 1 B 12 ) T = B 11 1 B 12 B 22.1 1 B 21 B 11 1 = Φ C o v ( φ 1 , φ 3 ) = C o v n β ^ 1 RFM β 1 , n β ^ 1 RFM β ^ 1 RSM = V a r n β ^ 1 RFM β 1 C o v n β ^ 1 RFM β 1 , n β ^ 1 RSM β 1 = V a r ( φ 1 ) C o v n β ^ 1 RFM β 1 , n β ^ 1 RFM β 1 + n B 11 1 B 12 β ^ 2 RFM = B 11 1 B 12 B 22.1 1 B 21 B 11 1 = Φ
C o v ( φ 2 , φ 3 ) = C o v n β ^ 1 RSM β 1 , n β ^ 1 RFM β ^ 1 RSM = C o v n β ^ 1 RSM β 1 , n β ^ 1 RFM β 1 V a r n β ^ 1 RSM β 1 = B 11.2 1 B 11 1 B 12 B 22.1 1 B 21 B 11 1 B 11 1 = B 11.2 1 B 11.2 1 B 11 1 B 11 1 = 0
Therefore, the asymptotic distributions of the vectors φ_2 and φ_3 are obtained as
φ_2 = √n(β̂_1^RSM − β_1) →_D N_{p_1}(γ, B_{11}^{-1}),
φ_3 = √n(β̂_1^RFM − β̂_1^RSM) →_D N_{p_1}(δ, Φ).
  □

Appendix B

We next introduce a lemma given in [30] to aid with the proofs of the bias and covariance of the estimators.
Lemma A1.
Let V = (V_1, V_2, …, V_p)^T be a p-dimensional normal vector distributed as N_p(μ_v, Σ_p); then, for a measurable function Ψ, we have
E[V Ψ(V^T V)] = μ_v E[Ψ(χ²_{p+2}(Δ))],
E[V V^T Ψ(V^T V)] = Σ_p E[Ψ(χ²_{p+2}(Δ))] + μ_v μ_v^T E[Ψ(χ²_{p+4}(Δ))],
where χ k 2 ( Δ ) is a non-central chi-square distribution with k degrees of freedom and non-centrality parameter Δ.

Appendix B.1

Proof of Theorem 2. 
ADB(β̂_1^RFM) = E[lim_{n→∞} √n(β̂_1^RFM − β_1)] = μ_{11.2}.
ADB(β̂_1^RSM) = E[lim_{n→∞} √n(β̂_1^RSM − β_1)]
= E[lim_{n→∞} √n(β̂_1^RFM + B_{11}^{-1} B_{12} β̂_2^RFM − β_1)]
= E[lim_{n→∞} √n(β̂_1^RFM − β_1)] + E[lim_{n→∞} √n B_{11}^{-1} B_{12} β̂_2^RFM]
= μ_{11.2} + B_{11}^{-1} B_{12} κ = μ_{11.2} + δ = γ.
Using Lemma 1,
ADB ( β ^ 1 RPT ) = E lim n n ( β ^ 1 RPT β 1 ) = E lim n n ( β ^ 1 RFM ( β ^ 1 RFM β ^ 1 RSM ) I ( L n d n , α ) β 1 ) = E lim n n ( β ^ 1 RFM β 1 ) E lim n n ( β ^ 1 RFM β ^ 1 RSM ) I ( L n d n , α ) = μ 11.2 E lim n n ( β ^ 1 RFM β ^ 1 RSM ) I ( L n d n , α ) ) = μ 11.2 δ H p 2 + 2 ( χ p 2 2 ; Δ ) .
ADB ( β ^ 1 RSE ) = E lim n n ( β ^ 1 RSE β 1 ) = E lim n n ( β ^ 1 RFM ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) L n 1 β 1 ) = E lim n n ( β ^ 1 RFM β 1 ) E lim n n ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) L n 1 = μ 11.2 E lim n n ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) L n 1 = μ 11.2 ( p 2 2 ) δ E ( χ p 2 + 2 2 ( Δ ) ) .
ADB ( β ^ 1 RPS ) = E lim n n ( β ^ 1 RPS β 1 ) = E lim n n ( β ^ 1 RSM + ( β ^ 1 RFM β ^ 1 RSM ) ( 1 ( p 2 2 ) L n 1 ) I ( L n > p 2 2 ) β 1 ) = E { n [ β ^ 1 RSM + ( β ^ 1 RFM β ^ 1 RSM ) ( 1 I ( L n p 2 2 ) ) ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) L n 1 I ( L n > p 2 2 ) β 1 ] } = E lim n n ( β ^ 1 RFM β 1 ) E lim n n ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) I ( L n p 2 2 ) E { lim n n ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) L n 1 I ( L n > p 2 2 ) = μ 11.2 δ H p 2 + 2 ( χ p 2 2 2 ; Δ ) } ( p 2 2 ) δ E χ p 2 + 2 2 ( Δ ) I ( χ p 2 + 2 2 > p 2 2 ) .
    □

Appendix B.2

In order to compute the risk functions, we first compute the asymptotic covariance of the estimators. The asymptotic covariance of an estimator β ^ 1 * is expressed as
Cov(β̂_1^*) = lim_{n→∞} E[n (β̂_1^* − β_1)(β̂_1^* − β_1)^T].
Proof of Theorem 3. 
We first start by computing the asymptotic covariance of the estimator β ^ 1 RFM as:
Cov ( β ^ 1 RFM ) = E { lim n n ( β ^ 1 RFM β 1 ) n ( β ^ 1 RFM β 1 ) T } = E ( φ 1 φ 1 T ) = Cov ( φ 1 φ 1 T ) + E ( φ 1 ) E ( φ 1 T ) = B 11.2 1 + μ 11.2 μ 11.2 T .
Furthermore, similarly, the asymptotic covariance of the estimator β ^ 1 RSM is obtained as:
Cov ( β ^ 1 RSM ) = E { lim n n ( β ^ 1 RSM β 1 ) n ( β ^ 1 RSM β 1 ) T } = E ( φ 2 φ 2 T ) = C o v ( φ 2 φ 2 T ) + E ( φ 2 ) E ( φ 2 T ) = B 11 1 + γ γ T .
The asymptotic covariance of the estimator β ^ 1 RPT is obtained as:
Cov ( β ^ 1 RPT ) = E { lim n n ( β ^ 1 RPT β 1 ) n ( β ^ 1 RPT β 1 ) T } = E { lim n n [ β ^ 1 RFM β 1 ) ( β ^ 1 RFM β ^ 1 RSM ) I ( L n d n , α ) β ^ 1 RFM β 1 ) ( β ^ 1 RFM β ^ 1 RSM ) I ( L n d n , α ) T = E [ φ 1 φ 3 I ( L n d n , α ) ] [ φ 1 φ 3 I ( L n d n , α ) ] T = E φ 1 φ 1 T 2 φ 3 φ 1 T I ( L n d n , α ) + φ 3 φ 3 T I ( L n d n , α )
Thus, we need to find E φ 1 φ 1 T , E φ 3 φ 1 T I ( L n d n , α ) and E φ 3 φ 3 T I ( L n d n , α ) , The first term is E φ 1 φ 1 T = B 11.2 1 + μ 11.2 μ 11.2 T . Using Lemma 1, the third term is computed as:
E φ 3 φ 3 T I ( L n d n , α ) = Φ H p 2 + 2 ( χ p 2 2 ; Δ ) + δ δ T H p 2 + 4 ( χ p 2 2 ; Δ ) .
The second term E φ 3 φ 1 T I ( L n d n , α ) can be computed from normal theory as
E φ 3 φ 1 T I ( L n d n , α ) = E E φ 3 φ 1 T I ( L n d n , α ) | φ 3 = E φ 3 E φ 1 T I ( L n d n , α ) | φ 3 = E φ 3 [ μ 11.2 + ( φ 3 δ ) ] T I ( L n d n , α ) = E φ 3 μ 11.2 I ( L n d n , α ) + E φ 3 ( φ 3 δ ) T I ( L n d n , α ) = μ 11.2 T E { φ 3 I ( L n d n , α ) } + E { φ 3 φ 3 T I ( L n d n , α ) } E φ 3 δ T I ( L n d n , α ) = μ 11.2 T δ H p 2 + 2 ( χ p 2 2 ; Δ ) + { C o v ( φ 3 φ 3 T ) H p 2 + 2 ( χ p 2 2 ; Δ ) + E ( φ 3 ) E ( φ 3 T ) H p 2 + 4 ( χ p 2 2 ; Δ ) δ δ T H p 2 + 2 ( χ p 2 2 ; Δ ) } = μ 11.2 T δ H p 2 + 2 ( χ p 2 2 ; Δ ) + Φ H p 2 + 2 ( χ p 2 2 ; Δ ) + δ δ T H p 2 + 4 ( χ p 2 2 ; Δ ) δ δ T H p 2 + 2 ( χ p 2 2 ; Δ )
Putting all the terms together and simplifying, we obtain
Cov ( β ^ 1 RPT ) = μ 11.2 μ 11.2 T + 2 μ 11.2 T δ H p 2 + 2 ( χ p 2 2 ; Δ ) + B 11.2 1 Φ H p 2 + 2 ( χ p 2 2 ; Δ ) δ δ T H p 2 + 4 ( χ p 2 2 ; Δ ) + 2 δ δ T H p 2 + 2 ( χ p 2 2 ; Δ ) = B 11.2 1 + μ 11.2 μ 11.2 T + 2 μ 11.2 T δ H p 2 + 2 ( χ p 2 2 ; Δ ) Φ H p 2 + 2 ( χ p 2 2 ; Δ ) + δ δ T 2 H p 2 + 2 ( χ p 2 2 ; Δ ) H p 2 + 4 ( χ p 2 2 ; Δ ) .
The asymptotic covariance of the estimator β ^ 1 RSE can be obtained as
Cov ( β ^ 1 RSE ) = E { lim n n ( β ^ 1 RSE β 1 ) n ( β ^ 1 RSE β 1 ) T } = E { lim n n [ β ^ 1 RFM β 1 ) ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) L n 1 β ^ 1 RFM β 1 ) ( β ^ 1 RFM β ^ 1 RSM ) ( p 2 2 ) L n 1 T = E [ φ 1 φ 3 ( p 2 2 ) L n 1 ] [ φ 1 φ 3 ( p 2 2 ) L n 1 ] T = E φ 1 φ 1 T 2 ( p 2 2 ) φ 3 φ 1 T L n 1 + ( p 2 2 ) 2 φ 3 φ 3 T L n 2
We need to compute E φ 3 φ 3 T L n 2 and E φ 3 φ 1 T L n 1 . By using Lemma 1, the first term is obtained as follows:
E φ 3 φ 3 T L n 2 = Φ E χ p 2 + 2 4 ( Δ ) + δ δ T E χ p 2 + 4 4 ( Δ ) .
The second term is computed from normal theory
E φ 3 φ 1 T L n 1 = E E φ 3 φ 1 T L n 1 | φ 3 = E φ 3 E φ 1 T L n 1 | φ 3 = E φ 3 [ μ 11.2 + ( φ 3 δ ) ] T L n 1 = E φ 3 μ 11.2 L n 1 + E φ 3 ( φ 3 δ ) T L n 1 = μ 11.2 T E { φ 3 L n 1 } + E { φ 3 φ 3 T L n 1 } E φ 3 δ T L n 1
From above, we can find E φ 3 δ T L n 1 = δ δ T E χ p 2 + 2 2 ( Δ ) and E φ 3 L n 1 = δ E χ p 2 + 2 2 ( Δ ) . Putting these terms together and simplifying, we obtain
Cov ( β ^ 1 RSE ) = B 11.2 1 + μ 11.2 μ 11.2 T + 2 ( p 2 2 ) μ 11.2 T δ E χ p 2 + 2 2 ( Δ ) ( p 2 2 ) Φ 2 E χ p 2 + 2 2 ( Δ ) ( p 2 2 ) E χ p 2 + 2 4 ( Δ ) + ( p 2 2 ) δ δ T 2 E χ p 2 + 4 2 ( Δ ) + 2 E ( χ p 2 + 2 2 ( Δ ) ) + ( p 2 2 ) E χ p 2 + 4 4 ( Δ ) .
Since β ^ 1 RPS = β ^ 1 RSE ( β ^ 1 RFM β ^ 1 RSM ) 1 ( p 2 2 ) L n 1 I ( L n p 2 2 ) .
We derive the covariance of the estimator β ^ 1 RPS as follows.
Cov ( β ^ 1 RPS ) = E lim n n ( β ^ 1 RPS β 1 ) n ( β ^ 1 RPS β 1 ) T = E { lim n n ( β ^ 1 RSE β 1 ) n ( β ^ 1 RFM β ^ 1 RSM ) 1 ( p 2 2 ) L n 1 I ( L n p 2 2 ) × n ( β ^ 1 RSE β 1 ) n ( β ^ 1 RFM β ^ 1 RSM ) 1 ( p 2 2 ) L n 1 I ( L n p 2 2 ) T } = E { lim n n ( β ^ 1 RSE β 1 ) n ( β ^ 1 RSE β 1 ) T 2 φ 3 n ( β ^ 1 RSE β 1 ) T 1 ( p 2 2 ) L n 1 I ( L n p 2 2 ) + φ 3 φ 3 T 1 ( p 2 2 ) L n 1 2 I ( L n p 2 2 ) } = Cov ( β ^ 1 RSE ) 2 E lim n φ 3 n ( β ^ 1 RSE β 1 ) T 1 ( p 2 2 ) L n 1 2 I ( L n p 2 2 ) + E lim n φ 3 φ 3 T 1 ( p 2 2 ) L n 1 2 I ( L n p 2 2 ) = Cov ( β ^ 1 RSE ) 2 E lim n φ 3 φ 1 T 1 ( p 2 2 ) L n 1 I ( L n p 2 2 ) + 2 E lim n φ 3 φ 3 T ( p 2 2 ) L n 1 1 ( p 2 2 ) L n 1 I ( L n p 2 2 ) + E lim n φ 3 φ 3 T 1 ( p 2 2 ) L n 1 2 I ( L n p 2 2 ) = Cov ( β ^ 1 RSE ) 2 E lim n φ 3 φ 1 T 1 ( p 2 2 ) L n 1 I ( L n p 2 2 ) E lim n φ 3 φ 3 T ( p 2 2 ) 2 L n 2 I ( L n p 2 2 ) + E lim n φ 3 φ 3 T I ( L n p 2 2 )
We first compute the last term in the equation above E φ 3 φ 3 T I ( L n p 2 2 ) as E φ 3 φ 3 T I ( L n p 2 2 ) = Φ H p 2 + 2 ( p 2 2 ; Δ ) + δ δ T H p 2 + 4 ( p 2 2 ; Δ ) . Using Lemma 1 and from the normal theory, we find,
E φ 3 φ 1 T { 1 ( p 2 2 ) L n 1 } I ( L n p 2 2 ) = E E φ 3 φ 1 T { 1 ( p 2 2 ) L n 1 } I ( L n p 2 2 ) | φ 3 = E φ 3 E φ 1 T { 1 ( p 2 2 ) L n 1 } I ( L n p 2 2 ) | φ 3 = E φ 3 [ μ 11.2 + ( φ 3 δ ) ] T { 1 ( p 2 2 ) L n 1 } I ( L n p 2 2 ) = μ 11.2 E φ 3 1 ( p 2 2 ) L n 1 I L n p 2 2 + E φ 3 φ 3 T 1 ( p 2 2 ) L n 1 I L n p 2 2 E φ 3 δ T 1 ( p 2 2 ) L n 1 I L n p 2 2 = δ μ 11.2 T E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 + Φ E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 + δ δ T E 1 ( p 2 2 ) χ p 2 + 4 2 ( Δ ) I χ p 2 + 4 2 ( Δ ) p 2 2 δ δ T E 1 ( p 2 2 ) χ p 2 + 4 2 ( Δ ) I χ p 2 + 4 2 ( Δ ) p 2 2 .
E φ 3 φ 3 T ( p 2 2 ) 2 L n 2 I ( L n p 2 2 ) = ( p 2 2 ) 2 Φ E χ p 2 + 2 4 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 + ( p 2 2 ) 2 δ δ T E χ p 2 + 2 4 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2
Putting all the terms together, we obtain
Cov ( β ^ 1 RPS ) = Cov ( β ^ 1 RSE ) + 2 δ μ 11.2 T E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 2 Φ E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 2 δ δ T E { 1 ( p 2 2 ) χ p 2 + 4 2 ( Δ ) } I ( χ p 2 + 4 2 ( Δ ) p 2 2 ) + 2 δ δ T E 1 ( p 2 2 ) χ p 2 + 2 2 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 ( p 2 2 ) 2 Φ E χ p 2 + 2 4 ( Δ ) I χ p 2 + 2 , α 2 ( Δ ) p 2 2 ( p 2 2 ) 2 δ δ T E χ p 2 + 2 4 ( Δ ) I χ p 2 + 2 2 ( Δ ) p 2 2 + Φ H p 2 + 2 p 2 2 ; Δ + δ δ T H p 2 + 4 p 2 2 ; Δ .
 □

References

  1. Laird, N.M.; Ware, J.H. Random-effects models for longitudinal data. Biometrics 1982, 38, 963–974. [Google Scholar] [CrossRef]
  2. Longford, N. Regression analysis of multilevel data with measurement error. Br. J. Math. Stat. Psychol. 1993, 46, 301–311. [Google Scholar] [CrossRef]
  3. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288. [Google Scholar] [CrossRef]
  4. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429. [Google Scholar] [CrossRef] [Green Version]
  5. Tran, M.N. The loss rank criterion for variable selection in linear regression analysis. Scand. J. Stat. 2011, 38, 466–479. [Google Scholar] [CrossRef] [Green Version]
  6. Huang, J.; Ma, S.; Zhang, C.H. Adaptive Lasso for sparse high-dimensional regression models. Stat. Sin. 2008, 18, 1603–1618. [Google Scholar]
  7. Kim, Y.; Choi, H.; Oh, H.S. Smoothly clipped absolute deviation on high dimensions. J. Am. Stat. Assoc. 2008, 103, 1665–1673. [Google Scholar] [CrossRef]
  8. Wang, H.; Leng, C. Unified LASSO estimation by least squares approximation. J. Am. Stat. Assoc. 2007, 102, 1039–1048. [Google Scholar] [CrossRef]
  9. Yuan, M.; Lin, Y. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B (Stat. Methodol. 2006, 68, 49–67. [Google Scholar] [CrossRef]
  10. Leng, C.; Lin, Y.; Wahba, G. A note on the lasso and related procedures in model selection. Stat. Sin. 2006, 16, 1273–1284. [Google Scholar]
  11. Park, T.; Casella, G. The bayesian lasso. J. Am. Stat. Assoc. 2008, 103, 681–686. [Google Scholar] [CrossRef]
  12. Greenlaw, K.; Szefer, E.; Graham, J.; Lesperance, M.; Nathoo, F.S.; Initiative, A.D.N. A Bayesian group sparse multi-task regression model for imaging genetics. Bioinformatics 2017, 33, 2513–2522. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Ahmed, S.E.; Nicol, C.J. An application of shrinkage estimation to the nonlinear regression model. Comput. Stat. Data Anal. 2012, 56, 3309–3321. [Google Scholar] [CrossRef]
  14. Ahmed, S.E.; Raheem, S.E. Shrinkage and absolute penalty estimation in linear regression models. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 541–553. [Google Scholar] [CrossRef]
  15. Lisawadi, S.; Kashif Ali Shah, M.; Ejaz Ahmed, S. Model selection and post estimation based on a pretest for logistic regression models. J. Stat. Comput. Simul. 2016, 86, 3495–3511. [Google Scholar] [CrossRef]
  16. Ahmed, S.E.; Opoku, E.A. Submodel selection and post-estimation of the linear mixed models. In Proceedings of the Tenth International Conference on Management Science and Engineering Management; Springer: Berlin/Heidelberg, Germany, 2017; pp. 633–646. [Google Scholar]
  17. Raheem, S.E.; Ahmed, S.E.; Doksum, K.A. Absolute penalty and shrinkage estimation in partially linear models. Comput. Stat. Data Anal. 2012, 56, 874–891. [Google Scholar] [CrossRef]
  18. Geladi, P.; Kowalski, B.R. Partial least-squares regression: A tutorial. Anal. Chim. Acta 1986, 185, 1–17. [Google Scholar] [CrossRef]
  19. Liu, K. Using Liu-type estimator to combat collinearity. Commun. Stat.-Theory Methods 2003, 32, 1009–1020. [Google Scholar] [CrossRef]
  20. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  21. Yüzbaşı, B.; Ejaz Ahmed, S. Shrinkage and penalized estimation in semi-parametric models with multicollinear data. J. Stat. Comput. Simul. 2016, 86, 3543–3561. [Google Scholar] [CrossRef]
  22. Yüzbası, B.; Ahmed, S.E.; Güngör, M. Improved penalty strategies in linear regression models. REVSTAT J. 2017, 15, 251–276. [Google Scholar]
  23. Knight, K.; Fu, W. Asymptotics for lasso-type estimators. Ann. Stat. 2000, 28, 1356–1378. [Google Scholar]
  24. Belsley, D.A. Conditioning Diagnostics: Collinearity and Weak Data in Regression; Number 519.536 B452; Wiley: Hoboken, NJ, USA, 1991. [Google Scholar]
  25. Twisk, J.; Kemper, H.; Mellenbergh, G. Longitudinal development of lipoprotein levels in males and females aged 12–28 years: The Amsterdam Growth and Health Study. Int. J. Epidemiol. 1995, 24, 69–77. [Google Scholar] [CrossRef] [PubMed]
  26. Nie, Y.; Opoku, E.; Yasmin, L.; Song, Y.; Wang, J.; Wu, S.; Scarapicchia, V.; Gawryluk, J.; Wang, L.; Cao, J.; et al. Spectral dynamic causal modelling of resting-state fMRI: An exploratory study relating effective brain connectivity in the default mode network to genetics. Stat. Appl. Genet. Mol. Biol. 2020, 19. [Google Scholar] [CrossRef]
  27. Ahmed, S.E.; Kim, H.; Yıldırım, G.; Yüzbaşı, B. High-Dimensional Regression Under Correlated Design: An Extensive Simulation Study. In International Workshop on Matrices and Statistics; Springer: Berlin/Heidelberg, Germany, 2016; pp. 145–175. [Google Scholar]
  28. Ejaz Ahmed, S.; Yüzbaşı, B. Big data analytics: Integrating penalty strategies. Int. J. Manag. Sci. Eng. Manag. 2016, 11, 105–115. [Google Scholar] [CrossRef]
  29. Ahmed, S.E.; Yüzbaşı, B. High dimensional data analysis: Integrating submodels. In Big and Complex Data Analysis; Springer: Berlin/Heidelberg, Germany, 2017; pp. 285–304. [Google Scholar]
  30. Judge, G.G.; Bock, M.E. The Statistical Implication of Pre-Test and Steinrule Estimators in Econometrics; Elsevier: Amsterdam, The Netherlands, 1978. [Google Scholar]
Figure 1. RMSE of estimators as a function of the non-centrality parameter Δ when n = 60 and p_1 = 5.
Figure 2. RMSE of estimators as a function of the non-centrality parameter Δ when n = 100 and p_1 = 5.
Table 1. RMSEs of RSM, RPT, RSE, and RPS estimators with respect to β̂^RFM when Δ* ≥ 0 for p_1 = 5 and n = 60.

ρ      p_2    Δ*   CNI     RSM     RPT     RSE     RPS
0.3    40     0    361     2.61    2.07    1.94    1.96
              1            1.05    1.07    1.20    1.25
              2            0.25    0.95    1.04    1.05
              3            0.12    0.98    0.99    1.00
              4            0.08    1.00    1.00    1.00
       500    0    613     4.48    3.29    3.48    1.96
              1            1.26    1.12    1.26    1.29
              2            0.41    0.97    1.08    1.09
              3            0.18    0.99    1.00    1.00
              4            0.13    1.00    1.00    1.00
       1000   0    693     5.36    4.53    4.67    4.71
              1            1.53    1.21    1.35    1.39
              2            0.49    1.01    1.13    1.14
              3            0.28    0.99    0.99    0.99
              4            0.10    1.00    1.00    1.00
0.7    40     0    1352    3.18    2.33    2.17    2.18
              1            1.04    1.11    1.20    1.23
              2            0.42    1.03    1.04    1.04
              3            0.23    0.98    0.99    1.00
              4            0.14    1.00    1.00    1.00
       500    0    1789    4.48    2.76    2.94    3.02
              1            1.08    1.43    1.52    1.53
              2            0.67    1.03    1.07    1.06
              3            0.35    0.98    1.00    1.00
              4            0.19    1.00    1.00    1.00
       1000   0    2134    6.82    5.24    5.30    3.02
              1            1.16    1.32    1.42    1.53
              2            0.75    1.10    1.15    1.16
              3            0.39    0.99    1.00    1.00
              4            0.11    1.00    1.00    1.00
Table 2. RMSEs of RSM, RPT, RSE, and RPS estimators with respect to β̂^RFM when Δ* ≥ 0 for p_1 = 5 and n = 100.

ρ      p_2    Δ*   CNI     RSM     RPT     RSE     RPS
0.3    40     0    150     2.38    2.09    1.88    1.90
              1            0.89    1.01    1.05    1.08
              2            0.21    0.94    1.01    1.02
              3            0.06    0.94    0.99    1.00
              4            0.02    1.00    1.00    1.00
       500    0    340     4.15    2.65    2.99    3.17
              1            0.87    1.08    1.18    1.21
              2            0.14    0.96    1.03    1.05
              3            0.06    0.99    0.99    1.00
              4            0.03    1.00    1.00    1.00
       1000   0    536     4.30    2.75    3.02    3.08
              1            0.96    1.09    1.13    1.15
              2            0.21    0.80    1.03    1.03
              3            0.09    1.00    1.00    1.00
              4            0.04    1.00    1.00    1.00
0.7    40     0    997     3.27    2.15    2.09    2.11
              1            0.85    1.02    1.09    1.10
              2            0.21    0.98    1.02    1.02
              3            0.06    0.99    0.99    0.99
              4            0.01    1.00    1.00    1.00
       500    0    1589    4.13    2.22    2.35    2.39
              1            1.04    1.19    1.21    1.20
              2            0.30    0.97    1.05    1.05
              3            0.14    1.00    1.00    1.00
              4            0.08    1.00    1.00    1.00
       1000   0    1751    5.17    3.71    4.03    4.09
              1            1.01    1.15    1.24    1.25
              2            0.39    1.04    1.07    1.06
              3            0.16    0.99    1.00    1.00
              4            0.11    1.00    1.00    1.00
Table 3. RMSEs of estimators with respect to β̂^RFM when Δ* = 0 for p_1 = 10.

n      ρ      p_2     CNI        RSM     RPT     RSE     RPS     LASSO   aLASSO
60     0.3    50      35.64      3.31    2.25    1.82    1.95    1.23    1.28
              500     452.76     4.13    3.71    2.61    3.01    1.47    1.52
              1000    1265.34    5.02    4.28    4.61    4.78    1.96    2.15
              2000    4567.56    7.13    5.10    6.18    6.39    2.70    3.06
       0.7    50      61.34      3.52    3.05    2.51    2.55    1.14    1.21
              500     743.17     4.49    3.65    3.41    3.50    1.36    1.58
              1000    2350.89    5.84    4.11    4.32    4.61    1.68    1.95
              2000    6908.39    8.10    5.31    6.24    6.29    1.84    2.02
       0.9    50      120.21     4.21    3.61    3.34    3.35    1.10    1.05
              500     950.98     4.82    3.38    3.72    3.73    1.21    1.16
              1000    5892.51    6.35    4.10    5.01    5.13    1.42    1.31
              2000    8352.73    8.51    4.63    5.24    5.38    1.61    1.35
100    0.3    50      31.21      2.91    2.54    2.12    2.23    1.32    1.36
              500     356.64     3.75    3.31    2.84    2.92    1.54    1.61
              1000    975.32     4.25    2.53    3.42    3.61    1.92    2.06
              2000    2764.84    5.61    4.25    4.91    5.08    2.31    2.46
       0.7    50      52.79      3.18    2.61    2.30    2.37    1.28    1.53
              500     578.43     4.28    3.05    3.52    3.59    1.46    2.07
              1000    1281.66    5.10    3.26    3.78    3.82    1.84    2.52
              2000    3498.30    6.12    3.01    4.26    4.33    2.27    2.41
       0.9    50      79.41      4.11    3.41    3.21    3.28    1.28    1.21
              500     681.43     4.35    3.55    3.41    3.50    1.43    1.51
              1000    1470.32    5.82    3.18    4.01    4.14    1.72    1.79
              2000    4105.90    7.04    4.57    5.22    5.32    1.87    1.96
Table 4. Estimates, standard errors for the active predictors, and RPEs of the estimators with respect to the full model estimator for the Amsterdam Growth and Health Study data.

                    RFM     RSM     RPT     RSE     RPS     LASSO   aLASSO
Estimate (β_2)      0.381   0.395   0.392   0.389   0.390   0.624   0.611
Standard error      0.104   0.102   0.100   0.009   0.008   0.081   0.079
Estimate (β_5)      0.137   0.125   0.131   0.130   0.133   0.101   0.105
Standard error      0.012   0.010   0.009   0.011   0.010   0.013   0.012
RPE                 1.000   0.723   0.841   0.838   0.831   0.986   0.973
Table 5. RPEs of the estimators for the resting-state effective brain connectivity and genetic data.

        RFM     RSM     RPT     RSE     RPS     LASSO   aLASSO
RPE     1.000   0.802   0.947   0.932   0.928   1.051   1.190
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
