Review

Overview of High-Dimensional Measurement Error Regression Models

1 School of Statistics, Beijing Normal University, Beijing 100875, China
2 School of Statistics and Data Science, Nanjing Audit University, Nanjing 211815, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3202; https://doi.org/10.3390/math11143202
Submission received: 14 June 2023 / Revised: 10 July 2023 / Accepted: 19 July 2023 / Published: 21 July 2023
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 2nd Edition)

Abstract

High-dimensional measurement error data are becoming more prevalent across various fields. Research on measurement error regression models has gained momentum because ignoring measurement errors risks drawing inaccurate conclusions. When the dimension p is larger than the sample size n, it is challenging to develop statistical inference methods for high-dimensional measurement error regression models due to bias, nonconvexity of the objective function, high computational cost and many other difficulties. Over the past few years, several works have overcome these difficulties and proposed novel statistical inference methods. This paper reviews the current developments in estimation, hypothesis testing and variable screening methods for high-dimensional measurement error regression models, presents the theoretical results of these methods, and points out some directions worth exploring in future research.

1. Introduction

Measurement error data inevitably exist in applications, causing significant concern in various fields including biology, medicine, epidemiology, economics, finance and remote sensing. So far, there has been a wealth of research achievements on classical low-dimensional measurement error regression models under various assumptions. Numerous studies focus on parameter estimation for low-dimensional measurement error regression models, with the primary techniques listed below: (1) Corrected regression estimation methods [1]; (2) Simulation–Extrapolation (SIMEX) estimation methods [2,3]; (3) Deconvolution methods [4]; (4) Corrected empirical likelihood methods [5,6]. For more detailed discussions on other estimation and hypothesis testing methods for classical low-dimensional measurement error models, please refer to the literature [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29], as well as the monographs [30,31,32,33,34,35].
As one of the most popular research fields in statistics, high-dimensional regression has been widely used in various fields including genetics, economics, medical imaging, meteorology and sensor networks. Over the past two decades, various high-dimensional regression methods have been proposed, such as Lasso [36], smoothly clipped absolute deviation (SCAD) [37], Elastic Net [38], Adaptive Lasso [39], Dantzig Selector [40], smooth integration of counting and absolute deviation (SICA) [41], and minimax concave penalty (MCP) [42], among many others. These methods have been widely applied to estimate regression coefficients while also achieving the goal of variable selection by adding penalties to objective functions; please refer to the literature reviews [43,44,45,46] as well as the monographs [47,48,49].
For the variable screening methods of ultrahigh-dimensional regression models where the dimension p and sample size n satisfy $\log p = O(n^{\kappa})$ for some $\kappa > 0$, Fan and Lv [50] proposed the sure independence screening (SIS) method, which is a pioneering method in this field. For the estimation and variable selection of ultrahigh-dimensional regression models, it is suggested to apply the SIS method for variable screening first. Then, based on the variables screened in the first step, we can utilize regularization methods with penalties to estimate the regression coefficients and identify the significant variables simultaneously. Due to the operability and effectiveness of the SIS method in applications, numerous works have extended the method; see [51,52,53,54,55,56,57,58,59,60].
However, most of the aforementioned theories and applications for high-dimensional regression models focused on clean data. In the era of big data, researchers frequently collect high-dimensional data with measurement errors. Typical instances include gene expression data [61] and sensor network data [62]. The imprecise measurements are the result of poorly managed and defective data collection processes as well as the imprecise measuring instruments. It is well known that ignoring the influence of measurement errors will result in biased estimators and erroneous conclusions. Therefore, developing statistical inference methods for high-dimensional measurement error regression models has drawn a lot of interest.
Based on the types of measurement errors, research on high-dimensional measurement error regression models can be divided into the following three categories: covariates containing measurement errors; response variables containing measurement errors; both covariates and response variables containing measurement errors. In this paper, we mainly focus on the category where covariates contain measurement errors. When the dimension p is larger than the sample size n, parameter estimation can be challenging due to the nonconvexity of the penalized objective function caused by the bias correction. This further makes it impossible to obtain the optimal solution of the optimization problem directly. We use the following linear regression model to illustrate this issue:
$$ y = X\beta + \varepsilon, \qquad (1) $$
where $y = (y_1, \ldots, y_n)^T \in \mathbb{R}^n$ is the $n \times 1$ response vector, $X = (X_1, \ldots, X_n)^T \in \mathbb{R}^{n \times p}$ is the $n \times p$ fixed design matrix with $X_i = (x_{i1}, \ldots, x_{ip})^T$, $\beta = (\beta_1, \ldots, \beta_p)^T \in \mathbb{R}^p$ is the sparse regression coefficient vector with only $s$ nonzero components, and the model error vector $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)^T \in \mathbb{R}^n$ is assumed to be independent of $X$. In order to obtain a sparse estimator of the true regression coefficient vector $\beta_0 = (\beta_{01}, \ldots, \beta_{0p})^T \in \mathbb{R}^p$, we can minimize the following penalized least-squares objective function
$$ \frac{1}{2n}\|y - X\beta\|_2^2 + \|p_\lambda(\beta)\|_1, \qquad (2) $$
which is equivalent to minimizing
$$ \frac{1}{2}\beta^T\Sigma\beta - \rho^T\beta + \|p_\lambda(\beta)\|_1, \qquad (3) $$
where $\Sigma = n^{-1}X^TX$, $\rho = n^{-1}X^Ty$, and $p_\lambda(\cdot)$ is a penalty function with regularization parameter $\lambda \ge 0$. If the covariate matrix $X$ can be precisely measured, the penalized objective functions (2) and (3) are convex. Thus, we can obtain a sparse estimator of $\beta_0$ by minimizing (2) or (3).
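To see why minimizing (2) and (3) gives the same estimator, expand the squared loss in (2); the last term does not involve $\beta$ and can be dropped:
$$ \frac{1}{2n}\|y - X\beta\|_2^2 = \frac{1}{2}\beta^T\!\left(n^{-1}X^TX\right)\!\beta - \left(n^{-1}X^Ty\right)^T\!\beta + \frac{1}{2n}\|y\|_2^2 = \frac{1}{2}\beta^T\Sigma\beta - \rho^T\beta + \frac{1}{2n}\|y\|_2^2. $$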
However, it is common that the covariate matrix $X$ cannot be accurately observed in practice. Let $W = (W_1, \ldots, W_n)^T = (w_{ij})_{n \times p}$ be the observed covariate matrix with additive measurement errors satisfying $W = X + U$, where $U = (U_1, \ldots, U_n)^T$ is the matrix of measurement errors, $U_i = (u_{i1}, \ldots, u_{ip})^T$ follows a sub-Gaussian distribution with mean zero and covariance matrix $\Sigma_u$, and it is assumed to be independent of $(X, y)$. To reduce the influence of measurement errors, Loh and Wainwright [63] proposed replacing $\Sigma$ and $\rho$ in the penalized objective function (3) by their consistent estimators $\hat{\Sigma} = n^{-1}W^TW - \Sigma_u$ and $\tilde{\rho} = n^{-1}W^Ty$, respectively. Then, we can obtain the sparse estimator of $\beta_0$ by minimizing the following penalized objective function
$$ \frac{1}{2}\beta^T\hat{\Sigma}\beta - \tilde{\rho}^T\beta + \|p_\lambda(\beta)\|_1. \qquad (4) $$
Note that when the dimension p is fixed or smaller than the sample size n, $\hat{\Sigma}$ is guaranteed to be positive definite or positive semi-definite. This further ensures that the penalized objective function (4) remains convex. Thus, the global optimal solution for $\beta$ can be obtained by minimizing the penalized objective function (4).
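The bias correction and the resulting loss of positive semi-definiteness when $p > n$ can be illustrated with a minimal numpy sketch; the dimensions, sparsity level and the choice $\Sigma_u = 0.2I_p$ below are illustrative assumptions, not values taken from the papers under review.

import numpy as np

rng = np.random.default_rng(0)
n, p, s = 50, 100, 5                      # p > n: high-dimensional regime (illustrative)
Sigma_u = 0.2 * np.eye(p)                 # known measurement error covariance (assumed)

X = rng.standard_normal((n, p))           # true (unobserved) design matrix
beta0 = np.zeros(p); beta0[:s] = 1.0      # sparse true coefficients
y = X @ beta0 + 0.5 * rng.standard_normal(n)
W = X + rng.multivariate_normal(np.zeros(p), Sigma_u, size=n)  # observed surrogate

# Bias-corrected surrogates used in the objective (4)
Sigma_hat = W.T @ W / n - Sigma_u
rho_tilde = W.T @ y / n

# With p > n, the corrected matrix is typically indefinite,
# so the quadratic loss in (4) is unbounded from below.
print("smallest eigenvalue of Sigma_hat:", np.linalg.eigvalsh(Sigma_hat).min())

The smallest eigenvalue printed above is typically negative once $p > n$, which is exactly the source of problem (i) discussed next.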
However, for high-dimensional or ultrahigh-dimensional regression models, i.e., $p > n$ or $p \gg n$, there are two key problems: (i) the penalized objective function (4) is no longer convex and is unbounded from below because the corrected estimator $\hat{\Sigma}$ of $\Sigma$ is no longer positive semi-definite. This makes it impossible to obtain an estimator of $\beta_0$ by minimizing the penalized objective function (4). (ii) In order to construct an objective function similar to that of the standard Lasso and solve the corresponding optimization problem with the R packages "glmnet" or "lars", it is necessary to decompose $\hat{\Sigma}$ by the Cholesky decomposition and obtain substitutes for the response vector and the covariate matrix. However, this process results in error accumulation and makes it challenging to guarantee valid theoretical results; please see the detailed discussions in [64,65].
For problem (i), Loh and Wainwright [63] converted the unconstrained optimization problem into a constrained one by adding restrictions on $\beta$. They suggested applying the projected gradient descent algorithm to solve the restricted optimization problem and obtain the global optimal solution, which serves as an estimator of the true regression coefficient vector $\beta_0$. Nevertheless, the penalized objective function of this optimization problem is still nonconvex. To address this issue, Datta and Zou [64] suggested substituting $\hat{\Sigma}$ by its positive semi-definite projection matrix $\tilde{\Sigma}$, and they proposed the convex conditioned Lasso (CoCoLasso). Furthermore, Zheng et al. [65] introduced a balanced estimation that prevents overfitting while maintaining estimation accuracy by combining the $l_1$ and concave penalties. Zhang et al. [66] further proposed an estimation method based on $L_0$ regularization. Tao et al. [67] constructed a modified least-squares loss function using a positive semi-definite projection matrix for the estimated covariance matrix and proposed the calibrated zero-norm regularized least squares (CaZnRLS) estimator of the regression coefficients. Rosenbaum and Tsybakov [68,69] proposed the matrix uncertainty (MU) selector and its improved version, the compensated MU selector, for high-dimensional linear models with additive measurement errors in covariates. Sørensen et al. [70] extended the MU selector to generalized linear models and developed the generalized matrix uncertainty (GMU) selector. Sørensen et al. [71] presented theoretical results for the relevant variable selection methods. Based on the MU selector, Belloni et al. [72] introduced an estimator that can achieve the minimax efficiency bound. They proved that the corresponding optimization problem can be converted into a second-order cone programming problem, which can be solved in polynomial time. Romeo and Thoresen [73] evaluated the performance of the MU selector in [68], the nonconvex Lasso in [63], and the CoCoLasso in [64] using simulation studies. Brown et al. [74] proposed a path-following iterative algorithm called Measurement Error Boosting (MEBoost), which is a computationally efficient method for variable selection in high-dimensional measurement error regression models. Nghiem and Potgieter [75] introduced a new estimation method called simulation–selection–extrapolation (SIMSELEX), which uses the Lasso in the simulation step and the group Lasso in the selection step. Li and Wu [76] established minimax convergence rates for the estimation of the regression coefficients in a more general setting. Bai et al. [77] proposed a variable selection method for ultrahigh-dimensional linear quantile regression models with measurement errors. Jiang and Ma [78] drew on the idea of the nonconvex Lasso in [63] and proposed an estimator of the regression coefficients for high-dimensional Poisson models with measurement errors. Byrd and McGee [79] developed an iterative estimation method for high-dimensional generalized linear models with additive measurement errors based on the imputation-regularized optimization (IRO) algorithm in [80]. However, the error accumulation issue mentioned in problem (ii) has not been addressed in the literature.
The aforementioned works place more emphasis on estimation and variable selection than on hypothesis testing. For high-dimensional regression models with clean data, research on hypothesis testing problems has made significant progress under various settings in [81,82,83,84,85,86,87,88]. For high-dimensional measurement error models, hypothesis testing methods are equally crucial. However, the bias and instability caused by measurement errors make hypothesis testing extremely difficult. Recently, some progress has been achieved in statistical inference methods. Based on a multiplier bootstrap, Belloni [89] constructed simultaneous confidence intervals for the target parameters in high-dimensional linear measurement error models. Focusing on the case where a fixed number of covariates contain measurement errors, Li et al. [90] proposed a corrected decorrelated score test for the parameters corresponding to the error-prone covariates and constructed asymptotic confidence intervals for them. Huang et al. [91] proposed a new variable selection method based on the debiased CoCoLasso and proved that it can achieve false discovery rate (FDR) control. Jiang et al. [92] developed Wald and score tests for high-dimensional Poisson measurement error models.
Compared to the above estimation and hypothesis testing methods, there are relatively few screening techniques for ultrahigh-dimensional measurement error models. Nghiem et al. [93] introduced two screening methods named corrected penalized marginal screening (PMSc) and corrected sure independence screening (SISc) for ultrahigh-dimensional linear measurement error models.
This paper gives an overview of the estimation and hypothesis testing methods for high-dimensional measurement error regression models as well as the variable screening methods for ultrahigh-dimensional measurement error models. The rest of this paper is organized as follows. In Section 2, we review some estimation methods for linear models. We survey the estimation methods for generalized linear models in Section 3. Section 4 presents the recent advances in hypothesis testing methods for high-dimensional measurement error models. Section 5 introduces the variable screening techniques for ultrahigh-dimensional linear measurement error models. We conclude the paper with some discussions in Section 6.
Notation 1. 
Let $\mathcal{S}^p$ be the set of all $p \times p$ real symmetric matrices and $\mathcal{S}_+^p$ be the subset of $\mathcal{S}^p$ containing all positive semi-definite matrices in $\mathcal{S}^p$. We use $|\mathcal{A}|$ to denote the cardinality of a set $\mathcal{A}$. Let $S = \{j : \beta_{0j} \neq 0,\ j = 1, \ldots, p\}$ be the index set of the nonzero parameters. For a vector $a = (a_1, \ldots, a_m)^T \in \mathbb{R}^m$, let $\|a\|_q = (\sum_{\ell=1}^m |a_\ell|^q)^{1/q}$, $1 \le q < \infty$, denote its $l_q$ norm, and write $\|a\|_\infty = \max_{1 \le \ell \le m}|a_\ell|$. Denote by $a_{\mathcal{A}} \in \mathbb{R}^{|\mathcal{A}|}$ the subvector of $a$ with index set $\mathcal{A} \subseteq \{1, \ldots, m\}$. Denote by $e$ the vector of all ones. For a matrix $B = (b_{ij})$, let $\|B\|_1 = \max_j\sum_i|b_{ij}|$, $\|B\|_{\max} = \max_{i,j}|b_{ij}|$ and $\|B\|_\infty = \max_i\sum_j|b_{ij}|$. For constants $a$ and $b$, define $a \vee b = \max\{a, b\}$. We use $c$ and $C$ to denote positive constants that may vary throughout the paper. Finally, let $\stackrel{d}{\longrightarrow}$ denote convergence in distribution.

2. Estimation Methods for Linear Models

This section mainly focuses on the linear model (1) with high-dimensional settings where the dimension p is larger than the sample size n. When the data can be observed precisely, we can estimate the true regression coefficient vector β 0 by minimizing the penalized objective function (2) or (3). However, we frequently come across cases where the measured covariates contain measurement errors. There are various types of measurement error data, and we primarily focus on the two categories below.
(1) Covariates with additive errors. The observed error-prone covariate W i = X i + U i , where the measurement error U i is independent of X i and independently generated from a distribution with mean zero and known covariance matrix Σ u .
(2) Covariates with multiplicative errors. The observed error-prone covariates W i = X i M i , where ⊙ denotes the Hadamard product, and the measurement error M i is independent of X i and follows from a distribution with mean μ M and known covariance matrix Σ M .
Our main goal is to obtain a sparse estimator $\hat{\beta}$ of the true regression coefficient vector $\beta_0$ in the presence of measurement errors. As introduced in Section 1, the penalized objective function becomes nonconvex and unbounded from below after correcting the bias caused by measurement errors, which prevents us from solving the optimization problem directly. Several works have focused on this issue and proposed estimation methods, which we review below.

2.1. Nonconvex Lasso

In order to resolve the issue of the objective function being unbounded from below and unsolvable in the presence of measurement errors, Loh and Wainwright [63] added restrictions to regression coefficients β and adopted an l 1 penalty. Then, the estimator of β 0 can be obtained by the following l 1 -constrained quadratic program
$$ \hat{\beta}_{\mathrm{NCL}} \in \arg\min_{\|\beta\|_1 \le c_0\sqrt{s}}\left\{ \frac{1}{2}\beta^T\hat{\Sigma}\beta - \tilde{\rho}^T\beta + \lambda\|\beta\|_1 \right\} =: \arg\min_{\|\beta\|_1 \le c_0\sqrt{s}}\left\{ \mathcal{L}(\beta) + \lambda\|\beta\|_1 \right\}, \qquad (5) $$
where $c_0 > 0$ is a constant, $s = |S|$ denotes the number of nonzero components of $\beta_0$, $\mathcal{L}(\beta) = 2^{-1}\beta^T\hat{\Sigma}\beta - \tilde{\rho}^T\beta$ is the loss function, and $\hat{\Sigma}$ and $\tilde{\rho}$ are the consistent estimators of the covariance matrix $\Sigma$ of $X_i$ and the marginal correlation coefficient vector $\rho$ of $(X_i, y_i)$; they may differ for various kinds of measurement error data. Under the additive error setting,
$$ \hat{\Sigma}_{\mathrm{add}} = n^{-1}W^TW - \Sigma_u, \qquad \tilde{\rho}_{\mathrm{add}} = n^{-1}W^Ty. \qquad (6) $$
Under the multiplicative error setting,
$$ \hat{\Sigma}_{\mathrm{mul}} = n^{-1}W^TW \oslash (\Sigma_M + \mu_M\mu_M^T), \qquad \tilde{\rho}_{\mathrm{mul}} = n^{-1}W^Ty \oslash \mu_M, \qquad (7) $$
where $\oslash$ denotes the elementwise division operator; throughout the sequel, let $\hat{\Sigma} = \hat{\Sigma}_{\mathrm{add}}$ or $\hat{\Sigma}_{\mathrm{mul}}$ and $\tilde{\rho} = \tilde{\rho}_{\mathrm{add}}$ or $\tilde{\rho}_{\mathrm{mul}}$ according to the error type. The reason for using "$\in$" rather than "$=$" in (5) is that several local minima might exist in the objective function. Note that this method still relies on a nonconvex objective function to obtain the estimator of $\beta_0$; thus, we refer to it as "nonconvex Lasso". It can be implemented with the R package "hdme" [94] at https://cran.r-project.org/web/packages/hdme/vignettes/hdme.html (accessed on 13 June 2023).
The nonconvexity of the penalized objective function makes it challenging to obtain the global minimum of the optimization problem (5). To solve the optimization problem (5), Loh and Wainwright [63] applied the projected gradient descent algorithm and demonstrated that even if the penalized objective function is nonconvex, the solution produced by this algorithm can reach the global minimum with high probability. The algorithm finds the global minimum in an iterative way as follows. At the ( k + 1 ) th iteration,
$$ \beta_{\mathrm{NCL}}^{(k+1)} = \arg\min_{\|\beta\|_1 \le c_0\sqrt{s}}\left\{ \mathcal{L}(\beta_{\mathrm{NCL}}^{(k)}) + \nabla\mathcal{L}(\beta_{\mathrm{NCL}}^{(k)})^T(\beta - \beta_{\mathrm{NCL}}^{(k)}) + \frac{\eta}{2}\|\beta - \beta_{\mathrm{NCL}}^{(k)}\|_2^2 + \lambda\|\beta\|_1 \right\}, \qquad (8) $$
where $\nabla\mathcal{L}(\beta) = \hat{\Sigma}\beta - \tilde{\rho}$ is the gradient of the loss function $\mathcal{L}(\beta)$, and $\eta > 0$ denotes the step-size parameter. For details of this algorithm, please see [63,95,96,97]. Loh and Wainwright [63] proved that the solution obtained by iteration (8) is quite close to the global minimum in both the $l_1$-norm and the $l_2$-norm under some conditions. Specifically, for all $k \ge 0$,
$$ \|\beta_{\mathrm{NCL}}^{(k)} - \hat{\beta}_{\mathrm{NCL}}\|_2^2 \le \gamma^k\|\beta_{\mathrm{NCL}}^{(0)} - \hat{\beta}_{\mathrm{NCL}}\|_2^2 + C_1\frac{\log p}{n}\|\hat{\beta}_{\mathrm{NCL}} - \beta_0\|_1^2 + C_2\|\hat{\beta}_{\mathrm{NCL}} - \beta_0\|_2^2, $$
$$ \|\beta_{\mathrm{NCL}}^{(k)} - \hat{\beta}_{\mathrm{NCL}}\|_1 \le 2\sqrt{s}\,\|\beta_{\mathrm{NCL}}^{(k)} - \hat{\beta}_{\mathrm{NCL}}\|_2 + 2\sqrt{s}\,\|\hat{\beta}_{\mathrm{NCL}} - \beta_0\|_2 + 2\|\hat{\beta}_{\mathrm{NCL}} - \beta_0\|_1, $$
where $C_1$ and $C_2$ are positive constants, and $\gamma \in (0, 1)$ is a contraction coefficient independent of $(n, p, k)$. For the estimator $\hat{\beta}_{\mathrm{NCL}}$ of the true regression coefficient vector $\beta_0$, Loh and Wainwright [63] showed that for any $c_0 \ge \|\beta_0\|_2$ and $\lambda = O(\sqrt{\log p/n})$, the $l_q$-estimation error of $\hat{\beta}_{\mathrm{NCL}}$ satisfies the bounds
$$ \|\hat{\beta}_{\mathrm{NCL}} - \beta_0\|_q = O\!\left(s^{1/q}\sqrt{\frac{\log p}{n}}\right), \qquad q = 1, 2. $$
When $q = 1$, the $l_1$-estimation error attains the convergence rate $s\sqrt{\log p/n}$; when $q = 2$, the $l_2$-estimation error attains the convergence rate $\sqrt{s\log p/n}$. However, Loh and Wainwright [63] did not establish the variable selection consistency or an oracle inequality for the prediction error of the nonconvex Lasso estimator.
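As an illustration of the projected gradient idea, the following numpy sketch takes a gradient step on $\mathcal{L}(\beta)$, applies soft-thresholding for the $l_1$ penalty, and then projects onto the $l_1$ ball of radius $c_0\sqrt{s}$. Splitting the update into these three steps and the fixed step-size rule are simplifying assumptions; this is a sketch of the composite update (8), not the exact algorithm of [63].

import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball of the given radius (Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def nonconvex_lasso_pgd(Sigma_hat, rho_tilde, lam, radius, eta=None, n_iter=500):
    """Composite/projected gradient sketch for the nonconvex Lasso objective
    0.5 * beta' Sigma_hat beta - rho_tilde' beta + lam * ||beta||_1,
    restricted to the l1 ball {||beta||_1 <= radius}."""
    p = len(rho_tilde)
    if eta is None:
        eta = np.abs(np.linalg.eigvalsh(Sigma_hat)).max() + 1.0   # step-size parameter (heuristic)
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = Sigma_hat @ beta - rho_tilde                       # gradient of the quadratic loss
        z = beta - grad / eta                                     # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / eta, 0.0)   # soft-thresholding (l1 penalty)
        beta = project_l1_ball(z, radius)                         # enforce ||beta||_1 <= c0 * sqrt(s)
    return beta

Continuing the earlier simulation, the call nonconvex_lasso_pgd(Sigma_hat, rho_tilde, lam=0.1, radius=np.linalg.norm(beta0) * np.sqrt(s)) returns a corrected sparse estimate of the coefficients.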

2.2. Convex Conditioned Lasso

The nonconvex Lasso [63] overcomes the unsolvability caused by the nonconvex objective function in the presence of measurement errors. However, the method has some drawbacks. First, the nonconvex Lasso solves the problem by adding constraints on $\beta$, but the penalized objective function remains nonconvex. It is well recognized that convexity of the penalized objective function is extremely useful for both theoretical analysis and computation. Second, two important unknown parameters, $c_0$ and $s$, are included in the optimization problem (5). These two parameters have a direct impact on the estimation results, but their magnitudes are unknown in applications. Third, Loh and Wainwright [63] did not establish variable selection results for the nonconvex Lasso estimator. To remedy these issues, Datta and Zou [64] proposed the convex conditioned Lasso (CoCoLasso) based on a convex objective function, which possesses the computational and theoretical advantages brought by convexity.
In order to construct the convex objective function, Datta and Zou [64] introduced a nearest positive semi-definite matrix projection operator for the square matrix, which is defined as
$$ (A)_+ = \arg\min_{A_1 \succeq 0}\|A - A_1\|_{\max}, \qquad (9) $$
where A is a square matrix. Let Σ ˜ = ( Σ ^ ) + , and the alternating direction method of multipliers (ADMM) algorithm [98] can be utilized to derive Σ ˜ from Σ ^ . Based on Σ ˜ , the following convex objective function can be constructed, and it yields the CoCoLasso estimator
$$ \hat{\beta}_{\mathrm{coco}} = \arg\min_{\beta}\left\{ \frac{1}{2}\beta^T\tilde{\Sigma}\beta - \tilde{\rho}^T\beta + \lambda\|\beta\|_1 \right\}. \qquad (10) $$
When the covariates contain additive measurement errors,
$$ \tilde{\Sigma}_{\mathrm{add}} = (\hat{\Sigma}_{\mathrm{add}})_+, \qquad \tilde{\rho}_{\mathrm{add}} = n^{-1}W^Ty, \qquad \hat{\Sigma}_{\mathrm{add}} = n^{-1}W^TW - \Sigma_u. \qquad (11) $$
When the covariates contain multiplicative measurement errors,
$$ \tilde{\Sigma}_{\mathrm{mul}} = (\hat{\Sigma}_{\mathrm{mul}})_+, \qquad \tilde{\rho}_{\mathrm{mul}} = n^{-1}W^Ty \oslash \mu_M, \qquad \hat{\Sigma}_{\mathrm{mul}} = n^{-1}W^TW \oslash (\Sigma_M + \mu_M\mu_M^T). \qquad (12) $$
Note that $\tilde{\Sigma}$ not only contributes to the construction of the convex objective function but also possesses the same level of estimation accuracy as $\hat{\Sigma}$ in [63]. This is guaranteed by the following inequality
$$ \|\tilde{\Sigma} - \Sigma\|_{\max} \le \|\tilde{\Sigma} - \hat{\Sigma}\|_{\max} + \|\hat{\Sigma} - \Sigma\|_{\max} \le 2\|\hat{\Sigma} - \Sigma\|_{\max}. $$
Since Σ ˜ is semi-positive definite, we can perform Cholesky decomposition on Σ ˜ . Then, the Cholesky factor of Σ ˜ can be used to simplify computations by rewriting (10) as
$$ \hat{\beta}_{\mathrm{coco}} = \arg\min_{\beta}\left\{ \frac{1}{2n}\|\tilde{y} - \tilde{W}\beta\|_2^2 + \lambda\|\beta\|_1 \right\}, \qquad (13) $$
where $\tilde{W}$ denotes the Cholesky factor of $\tilde{\Sigma}$ satisfying $n^{-1}\tilde{W}^T\tilde{W} = \tilde{\Sigma}$, and $\tilde{y}$ is the vector satisfying $n^{-1}\tilde{W}^T\tilde{y} = \tilde{\rho}$. The penalized objective function in (13) is similar to that of the standard Lasso. Thus, we can utilize the coordinate descent algorithm to obtain the CoCoLasso estimator; please see the details in [64,99,100]. Theoretically, Datta and Zou [64] established the $l_q$-estimation ($q = 1, 2$) and prediction error bounds of the CoCoLasso estimator. Suppose that
$$ \psi = \min_{\delta \neq 0,\ \|\delta_{S^c}\|_1 \le 3\|\delta_S\|_1}\frac{\delta^T\Sigma\delta}{\|\delta\|_2^2} > 0. $$
For $s\zeta\sqrt{\log p/n} < \lambda \le \min\{\epsilon_0, 12\epsilon_0\|\beta_{0S}\|\}$, where $\zeta = \max\{\sigma_\varepsilon^4, \sigma_U^4, 1\}$, $\epsilon_0 = \sigma_U^2$, and $\sigma_\varepsilon^2$ and $\sigma_U^2$ are the sub-Gaussian parameters of the model error and the measurement error, respectively, the CoCoLasso estimator $\hat{\beta}_{\mathrm{coco}}$ satisfies, with probability at least $1 - C\exp(-c\log p)$,
$$ \|\hat{\beta}_{\mathrm{coco}} - \beta_0\|_q = O\!\left(\frac{\lambda s^{1/q}}{\psi}\right), \qquad q = 1, 2, \qquad (14) $$
$$ n^{-1/2}\|X(\hat{\beta}_{\mathrm{coco}} - \beta_0)\|_2 = O\!\left(\lambda\sqrt{\frac{s}{\psi}}\right). \qquad (15) $$
Formulas (14) and (15) give the oracle inequalities for the $l_q$-estimation error with $q = 1, 2$ and the prediction error. Furthermore, Datta and Zou [64] established the sign consistency of the CoCoLasso estimator under an additional irrepresentable condition and a minimum signal strength condition. Meanwhile, no variable selection result was provided for the nonconvex Lasso estimator $\hat{\beta}_{\mathrm{NCL}}$ in [63]. Thus, the CoCoLasso estimation method not only enjoys the computational convenience of convexity but also possesses excellent theoretical properties. However, when the dimension of the covariates $p$ is large, the computation of $\tilde{\Sigma}$ is expensive. To improve the computational efficiency, Escribe et al. [101] applied a two-step block coordinate descent algorithm and proposed the block coordinate descent convex conditioned Lasso (BDCoCoLasso), which is designed for the case in which the covariate matrix is only partially corrupted. CoCoLasso and BDCoCoLasso are now available in the R package "BDcocolasso" at https://github.com/celiaescribe/BDcocolasso (accessed on 13 June 2023).
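A schematic numpy/scikit-learn sketch of the CoCoLasso pipeline (10)–(13) is given below. For simplicity it uses a Frobenius-norm eigenvalue-clipping projection in place of the elementwise max-norm ADMM projection of [64,98], and a standard Lasso solver in place of the tailored coordinate descent; both substitutions are illustrative simplifications, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Lasso

def psd_projection(Sigma_hat, eps=1e-8):
    """Positive semi-definite surrogate via eigenvalue clipping (a Frobenius-norm
    projection); CoCoLasso itself projects in the elementwise max norm via ADMM."""
    vals, vecs = np.linalg.eigh(Sigma_hat)
    return (vecs * np.maximum(vals, eps)) @ vecs.T

def cocolasso_sketch(Sigma_hat, rho_tilde, n, lam):
    """Sketch of the standard-Lasso reformulation (13) of CoCoLasso."""
    Sigma_tilde = psd_projection(Sigma_hat)
    vals, vecs = np.linalg.eigh(Sigma_tilde)
    # W_tilde satisfies n^{-1} W_tilde' W_tilde = Sigma_tilde (symmetric square-root factor)
    W_tilde = np.sqrt(n) * (vecs * np.sqrt(np.maximum(vals, 0.0))) @ vecs.T
    # y_tilde solves n^{-1} W_tilde' y_tilde = rho_tilde
    y_tilde = n * np.linalg.pinv(W_tilde.T) @ rho_tilde
    # sklearn's Lasso uses a 1/(2 * n_samples) factor, so rescale lam accordingly
    fit = Lasso(alpha=lam * n / W_tilde.shape[0], fit_intercept=False, max_iter=10000)
    fit.fit(W_tilde, y_tilde)
    return fit.coef_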

2.3. Balanced Estimation

CoCoLasso is effective in the parameter estimation of high-dimensional measurement error models, but it suffers from overfitting. To overcome this drawback, Zheng et al. [65] replaced the Lasso penalty in CoCoLasso with the combined l 1 and concave penalty and developed the balanced estimator, which can be obtained by
$$ \hat{\beta}_{\mathrm{bal}} = \arg\min_{\beta}\left\{ \frac{1}{2}\beta^T\tilde{\Sigma}\beta - \tilde{\rho}^T\beta + \lambda_0\|\beta\|_1 + \|p_\lambda(\beta)\|_1 \right\}, \qquad (16) $$
where $\lambda_0 = c_1\sqrt{\log p/n}$ is the regularization parameter for the $l_1$ penalty with $c_1$ a positive constant, $p_\lambda(\beta) = [p_\lambda(|\beta_1|), \ldots, p_\lambda(|\beta_p|)]^T$, and $p_\lambda(u)$, $u \in [0, +\infty)$, is a concave penalty function with tuning parameter $\lambda \ge 0$. The definitions of $\tilde{\Sigma}$ and $\tilde{\rho}$ are the same as those in (11) and (12) for the two kinds of measurement error data. In contrast to the CoCoLasso estimator, the balanced estimator strikes a balance between prediction and variable selection, and the improved variable selection in turn promotes the estimation and prediction accuracy of the balanced estimator. The simulation studies in [65] demonstrate the estimation and prediction accuracy as well as the better variable selection performance of the balanced estimator. As for the asymptotic properties of $\hat{\beta}_{\mathrm{bal}}$, Zheng et al. [65] established the oracle inequalities for the $l_q$-estimation and prediction error,
$$ \|\hat{\beta}_{\mathrm{bal}} - \beta_0\|_q = O_p\!\left(\frac{\lambda_0 s^{1/q}}{\phi^2}\right), \qquad q = 1, 2, \qquad (17) $$
$$ n^{-1/2}\|X(\hat{\beta}_{\mathrm{bal}} - \beta_0)\|_2 = O_p\!\left(\frac{\lambda_0\sqrt{s}}{\phi}\right), \qquad (18) $$
where
$$ \phi = \min_{\delta \neq 0,\ \|\delta_{S^c}\|_1 \le 7\|\delta_S\|_1}\frac{n^{-1/2}\|X\delta\|_2}{\|\delta_S\|_2 \vee \|\delta_{S^c}^*\|_2} > 0, $$
and $\delta_{S^c}^* \in \mathbb{R}^s$ contains the $s$ largest absolute values of $\delta_{S^c}$. It can be seen from (17) and (18) that the bounds on the $l_q$-estimation ($q = 1, 2$) and prediction errors are free of the regularization parameter $\lambda$ of the concave penalty. Also, an upper bound on the number of falsely discovered signs is provided in [65]. Denote $\mathrm{FS}(\hat{\beta}) = |\{1 \le j \le p : \mathrm{sgn}(\hat{\beta}_j) \neq \mathrm{sgn}(\beta_{0,j})\}|$; then
$$ \mathrm{FS}(\hat{\beta}) = O_p\!\left(\frac{\lambda_0^2 s}{\lambda^2\phi^4}\right). \qquad (19) $$
From (19), we can see that if $\min_{j \in S}|\beta_{0j}| \gg \sqrt{s\log p/n}$ such that $\lambda^2 \gg \lambda_0^2 s$, the balanced estimator can achieve sign consistency, which is stronger than variable selection consistency. Compared with the balanced estimator, the CoCoLasso estimator requires an additional irrepresentable condition to achieve this property.

2.4. Calibrated Zero-Norm Regularized Least Square Estimation

The nearest positive semi-definite matrix projection operator defined in [64] solves the problem that the penalized objective function is nonconvex in high-dimensional measurement error models. However, with the constraint of the positive semi-definite matrix, the computation cost of Σ ˜ is high. Tao et al. [67] demonstrated that as the dimension p increases, the time required to calculate Σ ˜ using the ADMM algorithm will increase significantly. Thus, Tao et al. [67] suggested replacing Σ ˜ with an approximation of Σ ^ that is easy to obtain but less precise. To achieve this purpose, consider the eigendecomposition of Σ ^ as follows
$$ \hat{\Sigma} = V\,\mathrm{diag}(\theta_1, \ldots, \theta_p)\,V^T, $$
where $\mathrm{diag}(\theta_1, \ldots, \theta_p)$ is a diagonal matrix containing the eigenvalues of $\hat{\Sigma}$ with $\theta_1 \ge \theta_2 \ge \cdots \ge \theta_p$, and $V \in \mathbb{R}^{p \times p}$ is an orthonormal matrix consisting of the corresponding eigenvectors. Then, Tao et al. [67] substituted the Frobenius norm for the elementwise maximum norm in (9) and obtained a positive definite approximation of $\hat{\Sigma}$ as follows
$$ \tilde{\Sigma}_F = \arg\min_{W \succeq \xi I}\|\hat{\Sigma} - W\|_F \quad \text{for some } \xi > 0. \qquad (20) $$
Note that the optimal solution of (20) is the same as that of the problem
$$ \min_{W \succeq \xi I}\|\hat{\Sigma} - W\|_F^2. \qquad (21) $$
Thus, we have
$$ \tilde{\Sigma}_F = \xi I + \Pi_{\mathcal{S}_+^p}(\hat{\Sigma} - \xi I) = V\,\mathrm{diag}\!\left[\max(\theta_1, \xi), \ldots, \max(\theta_p, \xi)\right]V^T, \qquad (22) $$
where $\Pi_{\mathcal{S}_+^p}(\cdot)$ denotes the projection of a matrix onto $\mathcal{S}_+^p$. Similar to $\tilde{\Sigma}$, we have $\tilde{\Sigma}_F = n^{-1}\tilde{W}_F^T\tilde{W}_F$, where $n^{-1/2}\tilde{W}_F$ is the Cholesky factor of $\tilde{\Sigma}_F$. Let $\tilde{y}_F$ be the vector satisfying $n^{-1}\tilde{W}_F^T\tilde{y}_F = \tilde{\rho}$. By some simple calculation, we can obtain that
$$ \tilde{W}_F = \sqrt{n}\,V\,\mathrm{diag}\!\left[\sqrt{\max(\theta_1, \xi)}, \ldots, \sqrt{\max(\theta_p, \xi)}\right]V^T, \qquad \tilde{y}_F = \sqrt{n}\,V\,\mathrm{diag}\!\left[\frac{1}{\sqrt{\max(\theta_1, \xi)}}, \ldots, \frac{1}{\sqrt{\max(\theta_p, \xi)}}\right]V^T\tilde{\rho}. \qquad (23) $$
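The construction in (22) and (23) amounts to one eigendecomposition followed by clipping the eigenvalues at $\xi$; a short numpy sketch follows, where the default value of $\xi$ is an illustrative choice.

import numpy as np

def caznrls_surrogates(Sigma_hat, rho_tilde, n, xi=1e-3):
    """Frobenius-norm positive definite approximation (22) and the induced
    pseudo-design and pseudo-response (23) used by CaZnRLS; xi > 0 is a
    user-chosen lower bound on the eigenvalues (illustrative default)."""
    theta, V = np.linalg.eigh(Sigma_hat)          # eigendecomposition of Sigma_hat
    clipped = np.maximum(theta, xi)               # max(theta_j, xi)
    Sigma_F = (V * clipped) @ V.T                 # = V diag(max(theta, xi)) V'
    W_F = np.sqrt(n) * (V * np.sqrt(clipped)) @ V.T
    y_F = np.sqrt(n) * (V / np.sqrt(clipped)) @ V.T @ rho_tilde
    return Sigma_F, W_F, y_F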
Based on Equation (22), Σ ˜ F can be obtained easily. This implies that computing Σ ˜ F requires substantially less time than computing Σ ˜ . However, the approximation accuracy of Σ ˜ F to Σ ^ is not as good as that of Σ ˜ because minimizing the Frobenius norm may yield larger components compared with the elementwise maximum norm. To obtain an excellent estimator of β 0 , it is reasonable to find a more effective regression method to replace Lasso. Tao et al. [67] considered the zero norm penalty and defined the following calibrated zero-norm regularized least squares (CaZnRLS) estimator
$$ \hat{\beta}_{\mathrm{zn}} \in \arg\min_{\beta \in \mathbb{R}^p}\left\{ \frac{1}{2n\lambda}\|\tilde{W}_F\beta - \tilde{y}_F\|_2^2 + \|\beta\|_0 \right\}. \qquad (24) $$
However, it is difficult to solve (24) directly. Thus, to give an equivalent form for (24) that can be solved, Tao et al. [67] defined
$$ \phi(u) := \frac{a-1}{a+1}u^2 + \frac{2}{a+1}u \quad (a > 1), \qquad u \in \mathbb{R}. $$
It is easy to verify that for any β R p ,
$$ \|\beta\|_0 = \min_{w \in \mathbb{R}^p}\left\{ \sum_{i=1}^p\phi(w_i) : (e - w)^T|\beta| = 0,\ 0 \le w \le e \right\}, \qquad (25) $$
where $|\beta| = (|\beta_1|, \ldots, |\beta_p|)^T$. Formula (25) implies that the optimization problem (24) can be rewritten as the following mathematical program with equilibrium constraints (MPEC)
$$ \min_{\beta, w \in \mathbb{R}^p}\left\{ \frac{1}{2n\lambda}\|\tilde{W}_F\beta - \tilde{y}_F\|_2^2 + \sum_{i=1}^p\phi(w_i) : (e - w)^T|\beta| = 0,\ 0 \le w \le e \right\}. \qquad (26) $$
Note that if the optimal solution of optimization problem (24) is β ^ * , then the corresponding optimal solution of optimization problem (26) is ( β ^ * , sign ( | β ^ * | ) ) .
However, the constraint $(e - w)^T|\beta| = 0$ in (26) introduces nonconvexity, which is the source of the difficulty in obtaining the estimator $\hat{\beta}_{\mathrm{zn}}$. Accordingly, Tao et al. [67] considered the following penalized version of the optimization problem (26)
$$ \min_{\beta, w \in \mathbb{R}^p}\left\{ \frac{1}{2n\lambda}\|\tilde{W}_F\beta - \tilde{y}_F\|_2^2 + \sum_{i=1}^p\phi(w_i) + \rho(e - w)^T|\beta| \ :\ 0 \le w \le e \right\}, \qquad (27) $$
where $\rho > 0$ is the penalty parameter. Tao et al. [67] proved that the global optimal solution of the optimization problem (27) with $\rho \ge \bar{\rho} := (4aL_f)[(a+1)\lambda]^{-1}$ is the same as that of the optimization problem (26), where $L_f$ is the Lipschitz constant of the function $f(\beta) := (2n)^{-1}\|\tilde{W}_F\beta - \tilde{y}_F\|_2^2$ on the ball $\{\beta \in \mathbb{R}^p : \|\beta\|_2 \le R\}$, and $R$ is a constant. Thus, $\hat{\beta}_{\mathrm{zn}}$ can be obtained by solving the following optimization problem with $\rho \ge \bar{\rho}$
$$ \hat{\beta}_{\mathrm{zn}} \in \arg\min_{\beta \in \mathbb{R}^p,\ w \in [0, e]}\left\{ \frac{1}{2n}\|\tilde{W}_F\beta - \tilde{y}_F\|_2^2 + \sum_{i=1}^p\left[\lambda\phi(w_i) + \rho(1 - w_i)|\beta_i|\right] \right\}. \qquad (28) $$
Tao et al. [67] recommended using the multi-stage convex relaxation approach (GEP–MSCRA) to obtain β ^ zn . This approach solves (28) in an iterative way with the main steps summarized as follows.
Step 1. Initialize the algorithm with $w^{(0)} \in [0, 2^{-1}e]$, $\rho^{(0)} = 1$, $\lambda > 0$, and $k = 1$.
Step 2. Solve the following optimization problem and obtain $\hat{\beta}_{\mathrm{zn}}^{(k)}$:
$$ \hat{\beta}_{\mathrm{zn}}^{(k)} = \arg\min_{\beta \in \mathbb{R}^p}\left\{ \frac{1}{2n}\|\tilde{W}_F\beta - \tilde{y}_F\|_2^2 + \lambda\sum_{i=1}^p\left(1 - w_i^{(k-1)}\right)|\beta_i| \right\}. $$
Step 3. If $k = 1$, choose an appropriate $\rho^{(1)} > \rho^{(0)}$ using the information from $\hat{\beta}_{\mathrm{zn}}^{(1)}$; if $1 < k \le 3$, choose $\rho^{(k)}$ satisfying $\rho^{(k)} > \rho^{(k-1)}$; if $k > 3$, let $\rho^{(k)} = \rho^{(k-1)}$.
Step 4. Obtain $w_i^{(k)}$ ($i = 1, \ldots, p$) through the following optimization problem:
$$ w_i^{(k)} = \arg\min_{0 \le w_i \le 1}\left\{ \phi(w_i) - \rho^{(k)}w_i|\hat{\beta}_{\mathrm{zn},i}^{(k)}| \right\}. $$
Step 5. Let $k \leftarrow k + 1$ and repeat Steps 2–4 until the stopping conditions are satisfied.
Note that the initial w ( 0 ) in Step 1 is an arbitrary vector from the interval [ 0 , 2 1 e ] rather than the feasible set [ 0 , e ] in (28). The reason is to obtain a better initial estimator β ^ zn ( 1 ) . In addition, w i ( k ) in Step 4 has the following closed form based on the convexity of ϕ
$$ w_i^{(k)} = \min\!\left\{1,\ \max\!\left[\frac{(a+1)\rho^{(k)}|\hat{\beta}_{\mathrm{zn},i}^{(k)}| - 2}{2(a-1)},\ 0\right]\right\}, \qquad i = 1, \ldots, p. $$
Consequently, the primary calculation in each iteration is to solve a weighted l 1 -norm regularized least square problem. Under some regularity conditions, β ^ zn ( k ) satisfies
$$ \|\hat{\beta}_{\mathrm{zn}}^{(k)} - \beta_0\|_2 = O_p(\lambda\sqrt{s}), \qquad k \in \mathbb{N}_+. \qquad (29) $$
It can be seen from (29) that the $l_2$-estimation error bound of the CaZnRLS estimator is of the same order as those of the nonconvex Lasso and CoCoLasso estimators. Tao et al. [67] further showed that the error bound of $\hat{\beta}_{\mathrm{zn}}^{(k+1)}$ is better than that of $\hat{\beta}_{\mathrm{zn}}^{(k)}$ for all $k \in \mathbb{N}_+$. Furthermore, Tao et al. [67] demonstrated that GEP-MSCRA produces a $\hat{\beta}_{\mathrm{zn}}^{(k)}$ such that $\mathrm{supp}(\hat{\beta}_{\mathrm{zn}}^{(k)}) = \mathrm{supp}(\beta_0)$ within a finite number of iterations, provided the smallest nonzero entry of $\beta_0$ is not too small.

2.5. Linear and Conic Programming Estimation

In addition to the approaches mentioned above, another class of methods is based on the idea of the Dantzig selector to acquire an estimator of true regression coefficients β 0 . Rosenbaum and Tsybakov [68] proposed the following matrix uncertainty (MU) selector
$$ \hat{\beta}_{\mathrm{MU}} = \arg\min_{\beta}\left\{ \|\beta\|_1 : \|n^{-1}W^T(y - W\beta)\|_\infty \le \delta\|\beta\|_1 + \lambda \right\}, \qquad (30) $$
where $\delta \ge 0$ and $\lambda \ge 0$ are tuning parameters depending on the level of the measurement error $U$ and the model error $\varepsilon$, respectively. The MU selector is available in the R package "hdme" [94].
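One simple way to compute a solution of (30) is to freeze the $\|\beta\|_1$ term in the constraint at the previous iterate, so that each step becomes a Dantzig-selector-type linear program; the scipy sketch below uses this fixed-point scheme, which is an illustrative device and not necessarily the algorithm used in [68] or implemented in "hdme".

import numpy as np
from scipy.optimize import linprog

def mu_selector(W, y, delta, lam, n_iter=20):
    """Illustrative fixed-point scheme for the MU selector (30): the ||beta||_1 term
    in the constraint is held fixed at the previous iterate, so each step is a
    Dantzig-selector-type linear program in (beta_plus, beta_minus)."""
    n, p = W.shape
    G = W.T @ W / n
    r = W.T @ y / n
    c = np.ones(2 * p)                       # objective: ||beta||_1 = 1'(beta_plus + beta_minus)
    A_ub = np.vstack([np.hstack([G, -G]),    #  (G beta)_j - r_j <= bound
                      np.hstack([-G, G])])   # -(G beta)_j + r_j <= bound
    beta = np.zeros(p)
    for _ in range(n_iter):
        bound = delta * np.abs(beta).sum() + lam
        b_ub = np.concatenate([bound + r, bound - r])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p), method="highs")
        beta = res.x[:p] - res.x[p:]
    return beta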
However, $n^{-1}W^TW$ appears in (30) rather than $n^{-1}X^TX$ due to the unobservability of $X$. Obviously, the matrix $n^{-1}W^TW$ is biased because of the measurement errors. To address this issue, Rosenbaum and Tsybakov [69] proposed an improved version of the MU selector called the compensated MU selector. It is applicable to the case where the entries of the measurement error $U_i$ are independent such that $\sigma_{U,j}^2 = n^{-1}\sum_{i=1}^n E(U_{ij}^2)$ is finite for $j = 1, \ldots, p$. The compensated MU selector is defined as
$$ \hat{\beta}_{\mathrm{CMU}} = \arg\min_{\beta}\left\{ \|\beta\|_1 : \|n^{-1}W^T(y - W\beta) + \hat{D}\beta\|_\infty \le \delta\|\beta\|_1 + \lambda \right\}, \qquad (31) $$
where $\hat{D}$ is a diagonal matrix with diagonal entries $\hat{\sigma}_{U,j}^2$, $j = 1, \ldots, p$, and the constants $\delta$ and $\lambda$ are the same as those in (30). Rosenbaum and Tsybakov [69] showed that the $l_q$-estimation error of the estimator $\hat{\beta}_{\mathrm{CMU}}$ satisfies
$$ \|\hat{\beta}_{\mathrm{CMU}} - \beta_0\|_q = O_p\!\left(s^{1/q}(\|\beta_0\|_1 + 1)\sqrt{\frac{\log p}{n}}\right), \qquad 1 \le q \le \infty. $$
The MU selector and the compensated MU selector provide two alternative estimation methods for high-dimensional measurement error models, but one issue remains: the optimization problem in (31) may be nonconvex, and Rosenbaum and Tsybakov [69] did not offer a suitable algorithm for the general case. To remedy this issue, Belloni et al. [72] proposed the conic-programming-based estimator $\hat{\beta}_{\mathrm{cp}}$. Consider the following optimization problem
$$ \min_{\beta, t}\ \|\beta\|_1 + \kappa t, \quad \text{s.t.}\ \ \|n^{-1}W^T(y - W\beta) + \hat{D}\beta\|_\infty \le \delta t + \lambda,\ \ \|\beta\|_2 \le t,\ \ t \in \mathbb{R}_+, \qquad (32) $$
where $\kappa$, $\delta$ and $\lambda$ are positive tuning parameters. Suppose that the solution of (32) is $(\hat{\beta}_{\mathrm{cp}}, \hat{t})$; then $\hat{\beta}_{\mathrm{cp}}$ is defined as the conic-programming-based estimator of the true regression coefficient vector $\beta_0$. The optimization problem (32) is a second-order cone program, so it can be solved efficiently in polynomial time. To analyze the asymptotic properties of $\hat{\beta}_{\mathrm{cp}}$, assume that $\kappa \in [2^{-1}, 2]$, $\delta = O(\sqrt{\log p/n})$, and $\lambda = O(\sqrt{\log p/n})$. Then, Belloni et al. [72] showed that the $l_q$-estimation ($1 \le q \le \infty$) and prediction errors of $\hat{\beta}_{\mathrm{cp}}$ satisfy
$$ \|\hat{\beta}_{\mathrm{cp}} - \beta_0\|_q = O_p\!\left(s^{1/q}(\|\beta_0\|_2 + 1)\sqrt{\frac{\log p}{n}}\right), \qquad 1 \le q \le \infty, \qquad (33) $$
$$ n^{-1/2}\|X(\hat{\beta}_{\mathrm{cp}} - \beta_0)\|_2 = O_p\!\left(s^{1/2}(\|\beta_0\|_2 + 1)\sqrt{\frac{\log p}{n}}\right). \qquad (34) $$
In contrast to the nonconvex Lasso in [63], the conic-programming-based estimator $\hat{\beta}_{\mathrm{cp}}$ can achieve the convergence rates in (33) and (34) without any knowledge of the parameters $\|\beta_0\|_1$, $\|\beta_0\|_2$ or $s$. Compared with the compensated MU selector in [69], the conic-programming-based estimator $\hat{\beta}_{\mathrm{cp}}$ can be computed in the general case without the computational difficulty of nonconvexity.

3. Estimation Methods for Generalized Linear Models

The above methods are mainly for linear models. This section introduces the estimation methods for high-dimensional generalized linear models with measurement errors.

3.1. Estimation Method for Poisson Models

Count data are commonly encountered in various fields including finance, economics and the social sciences. Poisson regression models are a popular choice for analyzing count data in practice. Jiang and Ma [78] studied high-dimensional Poisson regression models with additive measurement errors and proposed a novel optimization algorithm to obtain an estimator of the true regression coefficient vector $\beta_0$. Suppose that $Y_i$ is the response variable following a Poisson distribution with $E(Y_i | X_i) = \exp(X_i^T\beta)$, where $X_i \in \mathbb{R}^p$ is an unobservable covariate vector. Its error-prone surrogate is $W_i = X_i + U_i$, and the measurement error $U_i$ follows a sub-Gaussian distribution with known covariance matrix $\Sigma_u$. It is easy to verify that
$$ E\!\left\{ Y_iW_i^T\beta - \exp\!\left(\beta^TW_i - \beta^T\Sigma_u\beta/2\right) \,\middle|\, X_i, Y_i \right\} = Y_iX_i^T\beta - \exp(\beta^TX_i). \qquad (35) $$
From (35), Jiang and Ma [78] imposed a restriction on $\beta$ similar to that in [63] and estimated $\beta$ by solving the following optimization problem
$$ \hat{\beta}_p = \arg\min_{\|\beta\|_1 \le c_p\sqrt{s},\ \|\beta\|_2 \le c_p}\left\{ \mathcal{L}(\beta) + \lambda\|\beta\|_1 \right\}, \qquad (36) $$
where
$$ \mathcal{L}(\beta) = -\frac{1}{n}\sum_{i=1}^n\left\{ Y_iW_i^T\beta - \exp\!\left(\beta^TW_i - \beta^T\Sigma_u\beta/2\right) \right\}. \qquad (37) $$
The estimator β ^ p can be obtained by the composite gradient descent algorithm. Specifically, at the ( k + 1 ) th iteration, first solve the following optimization problem without any restrictions on β
$$ \tilde{\beta}_p^{(k+1)} = \arg\min_{\beta}\left\{ \left[\partial\mathcal{L}(\beta_p^{(k)})/\partial\beta\right]^T\!(\beta - \beta_p^{(k)}) + \frac{\eta}{2}\|\beta - \beta_p^{(k)}\|_2^2 + \lambda\|\beta\|_1 \right\}, $$
where $\eta > 0$ is a step-size parameter. Next, apply the projection method in [95] to project $\tilde{\beta}_p^{(k+1)}$ onto the $l_1$ ball with radius $c_p\sqrt{s}$ and obtain $\breve{\beta}_p^{(k+1)}$. If $\|\breve{\beta}_p^{(k+1)}\|_2 > c_p$, let $\hat{\beta}_p^{(k+1)} = \breve{\beta}_p^{(k+1)}c_p/\|\breve{\beta}_p^{(k+1)}\|_2$; otherwise, let $\hat{\beta}_p^{(k+1)} = \breve{\beta}_p^{(k+1)}$. Repeat the above steps until the stopping condition is satisfied. Jiang and Ma [78] proved the convergence of this algorithm. Under some regularity conditions, they further showed that the global minimum $\hat{\beta}_p$ of (36) satisfies
$$ \|\hat{\beta}_p - \beta_0\|_q = O(s^{1/q}\lambda). $$
A usual requirement in Poisson models is that $\lambda \ge 2\|\partial\mathcal{L}(\beta_0)/\partial\beta\|_\infty$, and this gradient term is of larger order than the $\sqrt{\log p/n}$ rate available for linear models. Thus, the convergence rate of $\hat{\beta}_p$ is slower than those of the nonconvex Lasso, CoCoLasso and balanced estimators for linear models.
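A numpy sketch of the corrected loss (37), its gradient, and the composite gradient iteration with the two projections described above is given below; the step size and iteration count are illustrative choices, and the stand-alone $l_1$-ball projection repeats the helper from the Section 2.1 sketch.

import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto the l1 ball of the given radius (Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def corrected_poisson_grad(beta, W, Y, Sigma_u):
    """Gradient of the corrected Poisson loss (37):
    L(beta) = -(1/n) sum_i [ Y_i W_i' beta - exp(beta' W_i - beta' Sigma_u beta / 2) ]."""
    n = W.shape[0]
    mu = np.exp(W @ beta - 0.5 * beta @ Sigma_u @ beta)
    return -(W.T @ Y - W.T @ mu + (Sigma_u @ beta) * mu.sum()) / n

def poisson_composite_gd(W, Y, Sigma_u, lam, c_p, s, eta=10.0, n_iter=300):
    """Composite gradient descent sketch for (36): gradient step, soft-thresholding,
    projection onto the l1 ball of radius c_p*sqrt(s), then rescaling into the
    l2 ball of radius c_p."""
    beta = np.zeros(W.shape[1])
    for _ in range(n_iter):
        z = beta - corrected_poisson_grad(beta, W, Y, Sigma_u) / eta
        z = np.sign(z) * np.maximum(np.abs(z) - lam / eta, 0.0)   # l1 proximal step
        z = project_l1_ball(z, c_p * np.sqrt(s))                  # ||beta||_1 constraint
        if np.linalg.norm(z) > c_p:                               # ||beta||_2 constraint
            z = z * c_p / np.linalg.norm(z)
        beta = z
    return beta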

3.2. Generalized Matrix Uncertainty Selector

The method in [78] is only designed for high-dimensional Poisson models with measurement errors. To develop a method that is applicable to generalized linear models, Sørensen et al. [70] drew on the idea of the MU selector and proposed the generalized matrix uncertainty (GMU) selector for high-dimensional generalized linear models with additive measurement errors.
Consider a generalized linear model with response variable Y distributed according to
$$ f_Y(y; \theta, \phi) = \exp\!\left\{ \frac{y\theta - b(\theta)}{a(\phi)} + c(y, \phi) \right\}, $$
where $\theta = X^T\beta_0$ and $X \in \mathbb{R}^p$ is the covariate vector. The expected response is given by the mean function $\mu(\theta) = b'(\theta)$, and the Taylor expansion of the mean function $\mu(X_i^T\beta_0)$ at the point $W_i^T\beta_0$ is
$$ \mu(X_i^T\beta_0) = \sum_{\ell=0}^{\infty}\frac{\mu^{(\ell)}(W_i^T\beta_0)}{\ell!}\left(-U_i^T\beta_0\right)^{\ell}, \qquad (39) $$
where $\mu^{(\ell)}(\cdot)$ is the $\ell$th derivative of the function $\mu(\cdot)$. With the Taylor expansion (39) of the mean function, the generalized matrix uncertainty selector can be defined as
$$ \hat{\beta}_{\mathrm{GMU}}^L = \arg\min_{\beta}\left\{ \|\beta\|_1 : \beta \in \Theta_L \right\}, \qquad \Theta_L = \left\{ \beta \in \mathbb{R}^p : \max_{1 \le j \le p}\left|\frac{1}{n}\sum_{i=1}^n w_{ij}\left[Y_i - \mu(W_i^T\beta)\right]\right| \le \lambda + \sum_{\ell=1}^{L}\frac{\delta^{\ell}}{\ell!\sqrt{n}}\|\beta\|_1^{\ell}\,\|\mu^{(\ell)}(W\beta)\|_2 \right\}, \qquad (40) $$
where $\delta$ is a positive parameter satisfying $\|U\|_\infty \le \delta$, and
$$ \mu^{(\ell)}(W\beta) = \left[\mu^{(\ell)}(W_1^T\beta), \ldots, \mu^{(\ell)}(W_n^T\beta)\right]^T. $$
In practice, Sørensen et al. [70] recommended using L = 1 for computational convenience and demonstrated that the first-order approximation produces satisfactory results.
To solve the optimization problem (40) and obtain the estimator $\hat{\beta}_{\mathrm{GMU}}^L$, we can utilize an iterative reweighting algorithm. The main iteration step of the algorithm is as follows
$$ \hat{\beta}_{\mathrm{GMU}}^{(k+1)} = \arg\min_{\beta}\left\{ \|\beta\|_1 : \left\|\frac{1}{n}\tilde{W}_g^{(k)T}\!\left(\tilde{z}^{(k)} - \tilde{W}_g^{(k)}\beta\right)\right\|_\infty \le \lambda + \sum_{\ell=1}^{L}\frac{\delta^{\ell}}{\ell!\sqrt{n}}\|\beta\|_1^{\ell}\,\|V^{(\ell,k)}\|_2 \right\}, \qquad (41) $$
where $\tilde{W}_g \in \mathbb{R}^{n \times p}$ is the matrix of weighted error-prone surrogates of the covariates with elements $\tilde{w}_{g,ij}^{(k)} = w_{ij}\sqrt{V_i^{(1,k)}}$, $\tilde{z}^{(k)} \in \mathbb{R}^n$ is a vector with elements $\tilde{z}_i^{(k)} = z_i^{(k)}\sqrt{V_i^{(1,k)}}$,
$$ z_i^{(k)} = W_i^T\hat{\beta}_{\mathrm{GMU}}^{(k)} + \left[Y_i - \mu\!\left(W_i^T\hat{\beta}_{\mathrm{GMU}}^{(k)}\right)\right]\left[\mu'\!\left(W_i^T\hat{\beta}_{\mathrm{GMU}}^{(k)}\right)\right]^{-1}, \qquad i = 1, \ldots, n, $$
and
$$ V^{(\ell,k)} = \left[\mu^{(\ell)}\!\left(W_1^T\hat{\beta}_{\mathrm{GMU}}^{(k)}\right), \ldots, \mu^{(\ell)}\!\left(W_n^T\hat{\beta}_{\mathrm{GMU}}^{(k)}\right)\right]^T = \left[V_1^{(\ell,k)}, \ldots, V_n^{(\ell,k)}\right]^T, \qquad \ell = 1, \ldots, L, $$
is the weight vector in the Taylor expansion with $L$ terms. When $L = 1$, it is easy to verify that (41) is a linear program. For more details about the algorithm, please see [70,102]. The GMU selector can be implemented with the R package "hdme" [94]. However, Sørensen et al. [70] did not establish any asymptotic properties of the GMU selector.

4. Hypothesis Testing Methods

The aforementioned works on high-dimensional measurement error models mainly investigate estimation problems and numerical algorithms of optimization problems as well as the theoretical properties of estimators. Recently, some works have studied the hypothesis testing problems for high-dimensional measurement error regression models, which will be introduced in this section.

4.1. Corrected Decorrelated Score Test

The above methods are proposed under the setting that all covariates are corrupted. In practice, it is common that not all covariates are measured with errors. Thus, Li et al. [90] investigated high-dimensional measurement error models where a fixed number of covariates contain measurement errors and proposed statistical inference methods for the regression coefficients corresponding to these covariates.
Consider the following high-dimensional linear model with one of the covariates containing additive errors
$$ y_i = \beta_0X_i + \gamma_0^TZ_i + \varepsilon_i, \qquad W_i = X_i + U_i, \qquad i = 1, \ldots, n, $$
where $X_i \in \mathbb{R}$ is an unobservable covariate, $W_i$ is its error-prone surrogate, and $Z_i \in \mathbb{R}^{p-1}$ is a precisely observed covariate vector. The measurement error $U_i$ follows a sub-Gaussian distribution with mean zero and variance $\sigma_U^2$, and $U_i$ is independent of $(X_i, Z_i, \varepsilon_i)$. Denote $y = (y_1, \ldots, y_n)^T$, $X = (X_1, \ldots, X_n)^T$, $W = (W_1, \ldots, W_n)^T$ and $Z = (Z_1, \ldots, Z_n)^T$. This subsection aims to test the hypothesis
$$ H_0: \beta_0 = \beta^* \quad \text{versus} \quad H_1: \beta_0 \neq \beta^* \quad (\beta^* \in \mathbb{R}), $$
and construct a confidence interval for β 0 under high-dimensional settings.
Since we are only concerned with inference on the parameter $\beta$, the parameter $\gamma$ is regarded as a nuisance parameter. Following the idea in [85], Li et al. [90] defined the corrected score function as
$$ S^{\theta}(\theta) = \hat{\Sigma}\theta - \hat{\rho} = \frac{1}{n}\sum_{i=1}^n S_i^{\theta}(\theta) = \begin{pmatrix} S^{\beta}(\beta, \gamma) \\ S^{\gamma}(\beta, \gamma) \end{pmatrix} = \begin{pmatrix} \hat{\Sigma}_{11}\beta + \hat{\Sigma}_{12}\gamma - \hat{\rho}_1 \\ \hat{\Sigma}_{21}\beta + \hat{\Sigma}_{22}\gamma - \hat{\rho}_2 \end{pmatrix}, $$
where θ = ( β , γ T ) T ,
$$ \hat{\Sigma} = \begin{pmatrix} \hat{\Sigma}_{11} & \hat{\Sigma}_{12} \\ \hat{\Sigma}_{21} & \hat{\Sigma}_{22} \end{pmatrix} = \begin{pmatrix} W^TW/n - \sigma_U^2 & W^TZ/n \\ Z^TW/n & Z^TZ/n \end{pmatrix} \quad \text{and} \quad \hat{\rho} = \begin{pmatrix} \hat{\rho}_1 \\ \hat{\rho}_2 \end{pmatrix} = \begin{pmatrix} W^Ty/n \\ Z^Ty/n \end{pmatrix} $$
are consistent estimators of Σ = ( X , Z ) T ( X , Z ) / n and ρ = ( X , Z ) T y / n , respectively. The corrected score covariance matrix is defined as
$$ I(\theta) = E\!\left[S_i^{\theta}(\theta)S_i^{\theta}(\theta)^T\right] = \begin{pmatrix} I_{\beta\beta} & I_{\beta\gamma} \\ I_{\gamma\beta} & I_{\gamma\gamma} \end{pmatrix}. $$
To conduct statistical inference on the target parameter β , it is crucial to eliminate the influence of nuisance parameter γ . Thus, Li et al. [90] developed the corrected decorrelated score function for the target parameter β as
$$ S(\beta, \gamma) = S^{\beta}(\beta, \gamma) - \omega^TS^{\gamma}(\beta, \gamma), $$
where $\omega^T = I_{\beta\gamma}I_{\gamma\gamma}^{-1} = E(X_iZ_i^T)\left[E(Z_iZ_i^T)\right]^{-1}$. It is easy to verify that $E[S(\beta_0, \gamma_0)S^{\gamma}(\beta_0, \gamma_0)] = 0$, which indicates that $S(\beta, \gamma)$ and the nuisance score function $S^{\gamma}(\beta, \gamma)$ are uncorrelated. Obviously, we can obtain that $\mathrm{Var}[S(\beta, \gamma)] = I_{\beta\beta} - I_{\beta\gamma}I_{\gamma\gamma}^{-1}I_{\gamma\beta} =: \sigma_{\beta\gamma}^2$. Then, Li et al. [90] constructed the test statistic and the confidence interval for $\beta_0$ based on the estimated decorrelated score function. The statistical inference procedure is summarized as follows.
Step 1. Apply the CoCoLasso estimation method in [64] to calculate initial estimator θ ˜ = ( β ˜ , γ ˜ T ) T , and utilize the following Dantzig-type estimator to estimate ω
$$ \hat{\omega} = \arg\min_{\omega}\|\omega\|_1, \quad \text{s.t.}\ \ \|\hat{\Sigma}_{12} - \omega^T\hat{\Sigma}_{22}\|_\infty \le \lambda, $$
where $\lambda = O(\sqrt{\log p/n})$.
Step 2. Estimate the decorrelated score function by
$$ \hat{S}(\beta, \tilde{\gamma}) = S^{\beta}(\beta, \tilde{\gamma}) - \hat{\omega}^TS^{\gamma}(\beta, \tilde{\gamma}), $$
and calculate the test statistic $\hat{T} = \sqrt{n}\,\hat{S}(\beta^*, \tilde{\gamma})\,(\hat{\sigma}_{\beta\gamma,H_0}^2)^{-1/2}$, where
$$ \hat{\sigma}_{\beta\gamma,H_0}^2 = \left(\hat{I}_{\beta\beta} - \hat{\omega}^T\hat{I}_{\gamma\beta}\right)\Big|_{\beta=\beta^*} = (\hat{\sigma}_{\varepsilon,H_0}^2 + \beta^{*2}\sigma_U^2)(1 - \hat{\omega}^T\hat{\Sigma}_{21}) + \beta^{*2}E(U_i^4) + \hat{\sigma}_{\varepsilon,H_0}^2\sigma_U^2 - \beta^{*2}\sigma_U^4. $$
Step 3. Estimate β as
$$ \hat{\beta} = \tilde{\beta} - \frac{\hat{S}(\tilde{\theta})}{\hat{\Sigma}_{11} - \hat{\omega}^T\hat{\Sigma}_{21}}, $$
and construct the $(1-\alpha)100\%$ confidence interval for $\beta_0$ as
$$ \left[\hat{\beta} - u_{1-\alpha/2}\sqrt{\hat{\sigma}_{\beta}^2/n},\ \ \hat{\beta} + u_{1-\alpha/2}\sqrt{\hat{\sigma}_{\beta}^2/n}\right], $$
where $u_{1-\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution,
$$ \hat{\sigma}_{\beta}^2 = (1 - \hat{\omega}^T\hat{\Sigma}_{21})^{-2}\left[(\hat{\sigma}_{\varepsilon}^2 + \hat{\beta}^2\sigma_U^2)(1 - \hat{\omega}^T\hat{\Sigma}_{21}) + \hat{\beta}^2E(U_i^4) + \hat{\sigma}_{\varepsilon}^2\sigma_U^2 - \hat{\beta}^2\sigma_U^4\right] $$
is the estimator of the asymptotic variance $\sigma_{\beta}^2$ of $\hat{\beta}$, and $\hat{\sigma}_{\varepsilon}^2 = n^{-1}\sum_{i=1}^n(y_i - \hat{\beta}W_i - \tilde{\gamma}^TZ_i)^2 - \hat{\beta}^2\sigma_U^2$ is the estimator of the variance $\sigma_{\varepsilon}^2$ of $\varepsilon_i$.
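The test statistic of Step 2 and the plug-in variance under $H_0$ can be assembled in a few lines once initial estimators are available; in the numpy sketch below, beta_tilde, gamma_tilde and omega_hat are assumed to be supplied by the user (e.g., by CoCoLasso and the Dantzig-type fit of Step 1), and $E(U_i^4)$ is passed in as a known quantity. The variance formula mirrors the reconstruction above and should be read as a sketch.

import numpy as np
from scipy import stats

def corrected_score_test(W, Z, y, sigma_U2, EU4, beta_star, beta_tilde, gamma_tilde, omega_hat):
    """Corrected decorrelated score test sketch for H0: beta_0 = beta_star.
    W is the (n,) error-prone covariate, Z the (n, p-1) clean covariates."""
    n = len(y)
    Sigma11 = W @ W / n - sigma_U2
    Sigma12 = W @ Z / n
    Sigma21 = Z.T @ W / n
    rho1 = W @ y / n
    rho2 = Z.T @ y / n
    # corrected scores evaluated at (beta_star, gamma_tilde)
    S_beta = Sigma11 * beta_star + Sigma12 @ gamma_tilde - rho1
    S_gamma = Sigma21 * beta_star + (Z.T @ Z / n) @ gamma_tilde - rho2
    S_hat = S_beta - omega_hat @ S_gamma
    # plug-in variance under H0 (mirrors the formula for sigma^2_{beta gamma, H0})
    resid = y - beta_star * W - Z @ gamma_tilde
    sigma_eps2 = np.mean(resid ** 2) - beta_star ** 2 * sigma_U2
    a = 1.0 - omega_hat @ Sigma21
    sigma2_H0 = (sigma_eps2 + beta_star ** 2 * sigma_U2) * a \
                + beta_star ** 2 * EU4 + sigma_eps2 * sigma_U2 - beta_star ** 2 * sigma_U2 ** 2
    T_hat = np.sqrt(n) * S_hat / np.sqrt(sigma2_H0)
    p_value = 2 * (1 - stats.norm.cdf(abs(T_hat)))
    return T_hat, p_value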
Note that the methods used to estimate $\theta$ and $\omega$ in Step 1 can vary, as long as the corresponding estimators are consistent; please see more discussion in [90]. Li et al. [90] showed that under some regularity conditions,
$$ \sqrt{n}\,\hat{S}(\beta^*, \tilde{\gamma})\,(\hat{\sigma}_{\beta\gamma,H_0}^2)^{-1/2} \stackrel{d}{\longrightarrow} N(0, 1) \quad \text{as } n \to \infty. $$
Furthermore, the asymptotic normality of the test statistic $\hat{T}$ under local alternatives was also established in [90] without any additional conditions. Li et al. [90] also constructed the asymptotic confidence interval for the target parameter $\beta$ in Step 3 based on the asymptotic normality of $\hat{\beta}$, which is given as follows
$$ \sqrt{n}(\hat{\beta} - \beta_0) = -\left[E\!\left(\frac{\partial S(\beta, \gamma_0)}{\partial\beta}\Big|_{\beta=\beta_0}\right)\right]^{-1}\sqrt{n}\,S(\beta_0, \gamma_0) + o_P(1) \stackrel{d}{\longrightarrow} N(0, \sigma_{\beta}^2) \quad \text{as } n \to \infty, $$
where $\sigma_{\beta}^2 = \left[E(X_i^2) - \omega^TE(X_iZ_i)\right]^{-2}\sigma_{\beta\gamma,0}^2$, and
$$ \sigma_{\beta\gamma,0}^2 = (\sigma_{\varepsilon}^2 + \beta_0^2\sigma_U^2)\left[1 - \omega^TE(X_iZ_i)\right] + \beta_0^2E(U_i^4) + \sigma_{\varepsilon}^2\sigma_U^2 - \beta_0^2\sigma_U^4. $$

4.2. Wald and Score Tests for Poisson Models

In addition to linear models, researchers have made some progress on hypothesis-testing problems for Poisson models. Jiang et al. [92] studied hypothesis-testing problems for high-dimensional Poisson measurement error models, and they proposed Wald and score tests for the linear function of regression coefficients.
Consider the following hypothesis test
$$ H_0: C\beta_{0\mathcal{M}} = b \quad \text{versus} \quad H_1: C\beta_{0\mathcal{M}} = b + h_n \ \text{ for some } h_n \in \mathbb{R}^r, $$
where $C \in \mathbb{R}^{r \times m}$ is a matrix with $r \le m$, and $\beta_{0\mathcal{M}} \in \mathbb{R}^{m \times 1}$ is a subvector of the true regression coefficient vector $\beta_0 = (\beta_{01}, \ldots, \beta_{0p})^T$ formed by $\beta_{0j}$ ($j \in \mathcal{M}$). To construct a valid test statistic, Jiang et al. [92] drew on the idea of the estimation method in [78] and suggested estimating the regression coefficients under the null hypothesis by
$$ \hat{\beta}_{pn} = \arg\min_{\|\beta\|_1 \le R_1,\ \|\beta\|_2 \le R_2}\left\{ \mathcal{L}(\beta) + \|p_\lambda(\beta_{\mathcal{M}^c})\|_1 \right\}, \quad \text{s.t. } C\beta_{\mathcal{M}} = b, \qquad (42) $$
where p λ ( · ) is a penalty function, and L ( β ) is defined in (37). Similarly, the following estimator of β 0 can be considered without assuming the null hypothesis
$$ \hat{\beta}_{pw} = \arg\min_{\|\beta\|_1 \le R_1,\ \|\beta\|_2 \le R_2}\left\{ \mathcal{L}(\beta) + \|p_\lambda(\beta_{\mathcal{M}^c})\|_1 \right\}. \qquad (43) $$
The estimators $\hat{\beta}_{pn}$ and $\hat{\beta}_{pw}$ can be obtained by the ADMM algorithm; for more details, please see [92]. Note that the optimization problems (42) and (43) differ from the method in (36) in that no penalties are imposed on the components of the target parameter $\beta_{\mathcal{M}}$, so that they are not forced toward zero. Then, based on the above estimators of $\beta_0$, Jiang et al. [92] proposed the following score statistic and Wald statistic to test whether $C\beta_{0\mathcal{M}} = b$ or not
$$ T_S = n\left[\frac{\partial\mathcal{L}(\hat{\beta})}{\partial\beta_{\mathcal{M}\cup S}}\right]^T A^T\,\Psi^{-1}(\hat{\Sigma}_r, \hat{Q}, \hat{\beta})\,A\,\frac{\partial\mathcal{L}(\hat{\beta})}{\partial\beta_{\mathcal{M}\cup S}}, \qquad T_W = n\,(C\hat{\beta}_{pw,\mathcal{M}} - b)^T\,\Psi(\hat{\Sigma}_r, \hat{Q}, \hat{\beta}_{pw})^{-1}(C\hat{\beta}_{pw,\mathcal{M}} - b), $$
where $A = C\left[I_{m\times m}, 0_{m\times k}\right]\hat{Q}_{\mathcal{M}\cup S, \mathcal{M}\cup S}^{-1}(\hat{\beta})$,
$$ \Psi(\Sigma, Q, \beta) \equiv C\left[I_{m\times m}, 0_{m\times k}\right]Q_{\mathcal{M}\cup S, \mathcal{M}\cup S}^{-1}(\beta)\,\Sigma_{\mathcal{M}\cup S, \mathcal{M}\cup S}(\beta)\,Q_{\mathcal{M}\cup S, \mathcal{M}\cup S}^{-1}(\beta)\left[I_{m\times m}, 0_{m\times k}\right]^TC^T, $$
$\hat{\Sigma}_r(\beta)$ and $\hat{Q}(\beta)$ are estimators of $\Sigma_r(\beta)$ and $Q(\beta) = E\!\left[\exp(\beta^TX)XX^T\right]$, respectively, and
$$ \Sigma_r(\beta) = E\!\left\{\left[Y_iW_i - \exp(\beta^TW_i - \beta^T\Sigma_u\beta/2)(W_i - \Sigma_u\beta)\right]^{\otimes 2}\right\}, \quad \text{where } v^{\otimes 2} = vv^T, $$
is the covariance of the residuals.
Jiang et al. [92] established the consistency of $\hat{\beta}_{pn}$ and $\hat{\beta}_{pw}$ with $\lambda$ larger than $O(\{\log p/n\}^{1/4})$, $m = o(\{\log p/n\}^{-1/2})$ and $s = o(\{\log p/n\}^{-1/2})$. Furthermore, the asymptotic distributions of the two test statistics were established; specifically, as $n \to \infty$, we have
$$ T_S \stackrel{d}{\longrightarrow} \chi^2\!\left(r,\ n\,h_n^T\Psi^{-1}(\Sigma, Q, \beta_t)h_n\right), \qquad T_W \stackrel{d}{\longrightarrow} \chi^2\!\left(r,\ n\,h_n^T\Psi^{-1}(\Sigma, Q, \beta_t)h_n\right). $$
Thus, for the score test with nominal significance level $\alpha > 0$, we reject the null hypothesis if $T_S > \chi^2_{1-\alpha}(r)$; for the Wald test, we reject the null hypothesis if $T_W > \chi^2_{1-\alpha}(r)$, where $\chi^2_{1-\alpha}(r)$ is the $(1-\alpha)$ quantile of the chi-square distribution $\chi^2(r)$.
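The decision rule only requires the chi-square quantile; a tiny scipy illustration of the shared rejection rule:

from scipy.stats import chi2

def reject_null(stat, r, alpha=0.05):
    """Rejection rule shared by the score and Wald tests: compare the statistic
    with the (1 - alpha) quantile of the chi-square distribution with r degrees
    of freedom."""
    critical_value = chi2.ppf(1.0 - alpha, df=r)
    return stat > critical_value, critical_value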

5. Screening Methods

As data dimensions become higher and higher, we often encounter ultrahigh-dimensional data. For ultrahigh-dimensional models, we frequently reduce the dimension using variable screening techniques and then apply other estimation or hypothesis-testing methods. The variable screening technique SIS [50], designed for ultrahigh-dimensional clean data, has achieved great success and has been extended to various settings. SIS screens the variables according to the magnitudes of their marginal correlations with the response variable. Nghiem et al. [93] drew inspiration from the ideas of SIS in [50] and marginal bridge estimation in [103], and they proposed the corrected sure independence screening (SISc) method and the corrected penalized marginal screening (PMSc) method. Consider the following optimization problem
$$ \tilde{\beta}_{\mathrm{sc}} = \arg\min_{\beta}\mathcal{L}(\beta) = \arg\min_{\beta}\sum_{j=1}^p L_j(\beta_j) = \arg\min_{\beta}\sum_{j=1}^p\left\{\frac{1}{n}\sum_{i=1}^n\left(y_i - w_{ij}\beta_j\right)^2 - \sigma_{u,j}^2\beta_j^2 + p_\lambda(|\beta_j|)\right\}, \qquad (44) $$
where $p_\lambda(\cdot)$ is a penalty function; the bridge penalty is adopted in [93]. Based on (44), Nghiem et al. [93] proposed the PMSc and SISc methods. The PMSc method takes the selected submodel to be
$$ \hat{S}_{\mathrm{PMSc}} = \left\{ j : \tilde{\beta}_{\mathrm{sc},j} \neq 0 \right\}. $$
Under some regularity conditions, Nghiem et al. [93] showed that $P(S \subseteq \hat{S}_{\mathrm{PMSc}}) \to 1$. Furthermore, when $\lambda = 0$, we can obtain that
$$ \tilde{\beta}_{\mathrm{sc},j} = \frac{\sum_{i=1}^n w_{ij}y_i}{\sum_{i=1}^n w_{ij}^2 - n\sigma_{u,j}^2}, \qquad j = 1, \ldots, p, $$
which measures the marginal association between the $j$th variable and the response variable. SISc selects variables according to the magnitude of $|\tilde{\beta}_{\mathrm{sc},j}|$. The corresponding selected set is
$$ \hat{S}_{\mathrm{SISc}} = \left\{ 1 \le j \le p : |\tilde{\beta}_{\mathrm{sc},j}| \text{ is among the } d \text{ largest of all} \right\}. $$
Nghiem et al. [93] proved that $P(S \subseteq \hat{S}_{\mathrm{SISc}}) = 1 - O\{p\exp(-Cn)\}$ for some constant $C > 0$ under some regularity conditions.
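Because the $\lambda = 0$ marginal coefficients have the closed form above, SISc reduces to a few vectorized operations; in the numpy sketch below, sigma_u_diag holds the known $\sigma_{u,j}^2$ values and d is the user-chosen submodel size.

import numpy as np

def sisc_screen(W, y, sigma_u_diag, d):
    """Corrected sure independence screening (SISc) sketch: compute the corrected
    marginal coefficients (the lambda = 0 formula below eq. (44)) and keep the d
    covariates with the largest absolute values."""
    n = W.shape[0]
    numer = W.T @ y                                   # sum_i w_ij * y_i for each j
    denom = (W ** 2).sum(axis=0) - n * np.asarray(sigma_u_diag)
    beta_sc = numer / denom
    keep = np.argsort(-np.abs(beta_sc))[:d]           # indices of the d largest |beta_sc,j|
    return np.sort(keep), beta_sc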

6. Conclusions

With the advent of the big data era, high-dimensional measurement error data have proliferated in various fields. Over the past few years, many statistical inference methods for high-dimensional measurement error regression models have been developed to overcome the difficulties in scientific research and provide effective approaches for tackling problems in applications. This paper reviews the research advances in estimation and hypothesis testing methods for high-dimensional measurement error models as well as variable screening methods for ultrahigh-dimensional measurement error models. The aforementioned estimation methods can be classified into the following three categories: (i) methods based on a nonconvex objective function with restrictions on the regression coefficients, such as the nonconvex Lasso and the estimation method for Poisson models in [78]; (ii) methods with a convex objective function including CoCoLasso, the balanced estimation method and the CaZnRLS estimation method; (iii) methods that draw on the idea of a Dantzig selector, such as the MU selector, compensated MU selector, GMU selector, and conic-programming-based estimation method. Many methods are now available in R packages “hdme” and “BDcocolasso”. Thus, we can apply these methods to analyze high-dimensional measurement error data. For the use of estimation methods, it is recommended to use CoCoLasso and balanced estimation methods due to their operability. If a higher computational efficiency is required, the CaZnRLS estimation method can be considered. If covariates are only partially corrupted by measurement errors, it is better to apply BDCoCoLasso.
Due to the prevalence of high-dimensional measurement error data in daily life and the growing demand for the statistical inference methods of measurement error regression models in applications, the related research is still one of the crucial aspects in statistical research. At present, the statistical inference methods and the theoretical system of high-dimensional measurement error models are far from complete. To the best of our knowledge, the study of high-dimensional measurement error regression models is currently limited to linear models and generalized linear models. However, it is common that covariates and response variables show a complicated relationship rather than a simple linear relationship in practice. Therefore, in order to meet the urgent needs of applications, it is necessary to develop more general statistical inference methods for high-dimensional nonlinear measurement error models. Further research in this area includes the following aspects.
  • Existing estimation methods for high-dimensional measurement error regression models are mainly for linear or generalized linear models. Therefore, it is urgent to develop estimation methods for nonlinear models with high-dimensional measurement error data such as nonparametric and semiparametric models.
  • Existing works mainly focus on independent and identically distributed data. It is worthwhile to extend the estimation and hypothesis-testing methods to measurement error models with complex data such as panel data and functional data.
  • In most studies of high-dimensional measurement error models, it is assumed that the covariance structure of the measurement errors is specific or the covariance matrix of measurement errors is known. Thus, it is a challenging problem to develop estimation and hypothesis-testing methods in the case that the covariance matrix of measurement errors is completely unknown.

Author Contributions

Conceptualization, G.L.; methodology, J.L.; validation, L.Y.; formal analysis, G.L.; investigation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, G.L. and L.Y.; supervision, G.L.; project administration, G.L.; funding acquisition, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (grant numbers: 12271046, 11971001, 12131006 and 12001277).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely thank the editor, the associate editor, and two reviewers for their constructive comments that have led to a substantial improvement of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SIMEX  Simulation–extrapolation
SCAD  Smoothly clipped absolute deviation
SICA  Smooth integration of counting and absolute deviation
MCP  Minimax concave penalty
SIS  Sure independence screening
CoCoLasso  Convex conditioned Lasso
CaZnRLS  Calibrated zero-norm regularized least squares
MU  Matrix uncertainty
MEBoost  Measurement error boosting
SIMSELEX  Simulation–selection–extrapolation
IRO  Imputation-regularized optimization
FDR  False discovery rate
PMSc  Corrected penalized marginal screening
SISc  Corrected sure independence screening
ADMM  Alternating direction method of multipliers
BDCoCoLasso  Block coordinate descent convex conditioned Lasso
MPEC  Mathematical program with equilibrium constraints
GEP–MSCRA  Multi-stage convex relaxation approach
GMU  Generalized matrix uncertainty

References

  1. Liang, H.; Härdle, W.; Carroll, R.J. Estimation in a semiparametric partially linear errors-in-variables model. Ann. Stat. 1999, 27, 1519–1535.
  2. Cook, J.; Stefanski, L.A. Simulation-extrapolation estimation in parametric measurement error models. J. Am. Stat. Assoc. 1994, 89, 1314–1328.
  3. Carroll, R.J.; Lombard, F.; Kuchenhoff, H.; Stefanski, L.A. Asymptotics for the SIMEX estimator in structural measurement error models. J. Am. Stat. Assoc. 1996, 91, 242–250.
  4. Fan, J.Q.; Truong, Y.K. Nonparametric regression with errors in variables. Ann. Stat. 1993, 21, 1900–1925.
  5. Cui, H.J.; Chen, S.X. Empirical likelihood confidence region for parameter in the errors-in-variables models. J. Multivar. Anal. 2003, 84, 101–115.
  6. Cui, H.J.; Kong, E.F. Empirical likelihood confidence region for parameters in semi-linear errors-in-variables models. Scand. J. Stat. 2006, 33, 153–168.
  7. Cheng, C.L.; Tsai, J.R.; Schneeweiss, H. Polynomial regression with heteroscedastic measurement errors in both axes: Estimation and hypothesis testing. Stat. Methods Med. Res. 2019, 28, 2681–2696.
  8. He, X.M.; Liang, H. Quantile regression estimates for a class of linear and partially linear errors-in-variables models. Stat. Sin. 2000, 10, 129–140.
  9. Carroll, R.J.; Delaigle, A.; Hall, P. Nonparametric prediction in measurement error models. J. Am. Stat. Assoc. 2009, 104, 993–1003.
  10. Jeon, J.M.; Park, B.U.; Keilegom, I.V. Nonparametric regression on Lie groups with measurement errors. Ann. Stat. 2022, 50, 2973–3008.
  11. Chen, L.P.; Yi, G.Y. Model selection and model averaging for analysis of truncated and censored data with measurement error. Electron. J. Stat. 2020, 14, 4054–4109.
  12. Shi, P.X.; Zhou, Y.C.; Zhang, A.R. High-dimensional log-error-in-variable regression with applications to microbial compositional data analysis. Biometrika 2022, 109, 405–420.
  13. Li, B.; Yin, X.R. On surrogate dimension reduction for measurement error regression: An invariance law. Ann. Stat. 2007, 35, 2143–2172.
  14. Staudenmayer, J.; Buonaccorsi, J.P. Measurement error in linear autoregressive models. J. Am. Stat. Assoc. 2005, 100, 841–852.
  15. Wei, Y.; Carroll, R.J. Quantile regression with measurement error. J. Am. Stat. Assoc. 2009, 104, 1129–1143.
  16. Liang, H.; Li, R.Z. Variable selection for partially linear models with measurement errors. J. Am. Stat. Assoc. 2009, 104, 234–248.
  17. Hall, P.; Ma, Y.Y. Testing the suitability of polynomial models in errors-in-variables problems. Ann. Stat. 2007, 35, 2620–2638.
  18. Hall, P.; Ma, Y.Y. Semiparametric estimators of functional measurement error models with unknown error. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2007, 69, 429–446.
  19. Ma, Y.Y.; Carroll, R.J. Locally efficient estimators for semiparametric models with measurement error. J. Am. Stat. Assoc. 2006, 101, 1465–1474.
  20. Ma, Y.Y.; Li, R.Z. Variable selection in measurement error models. Bernoulli 2010, 16, 274–300.
  21. Ma, Y.Y.; Hart, J.D.; Janicki, R.; Carroll, R.J. Local and omnibus goodness-of-fit tests in classical measurement error models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2011, 73, 81–98.
  22. Wang, L.Q. Estimation of nonlinear models with Berkson measurement errors. Ann. Stat. 2004, 32, 2559–2579.
  23. Nghiem, L.H.; Byrd, M.C.; Potgieter, C.J. Estimation in linear errors-in-variables models with unknown error distribution. Biometrika 2020, 107, 841–856.
  24. Pan, W.Q.; Zeng, D.L.; Lin, X.H. Estimation in semiparametric transition measurement error models for longitudinal data. Biometrics 2009, 65, 728–736.
  25. Zhang, J.; Zhou, Y. Calibration procedures for linear regression models with multiplicative distortion measurement errors. Braz. J. Probab. Stat. 2020, 34, 519–536.
  26. Zhang, J. Estimation and variable selection for partial linear single-index distortion measurement errors models. Stat. Pap. 2021, 62, 887–913.
  27. Wang, L.Q.; Hsiao, C. Method of moments estimation and identifiability of semiparametric nonlinear errors-in-variables models. J. Econom. 2011, 165, 30–44.
  28. Schennach, S.M.; Hu, Y.Y. Nonparametric identification and semiparametric estimation of classical measurement error models without side information. J. Am. Stat. Assoc. 2013, 108, 177–186.
  29. Zhang, X.Y.; Ma, Y.Y.; Carroll, R.J. MALMEM: Model averaging in linear measurement error models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2019, 81, 763–779.
  30. Carroll, R.J.; Ruppert, D.; Stefanski, L.A.; Crainiceanu, C.M. Measurement Error in Nonlinear Models, 2nd ed.; Chapman and Hall: New York, NY, USA, 2006.
  31. Cheng, C.L.; Van Ness, J.W. Statistical Regression With Measurement Error; Oxford University Press: New York, NY, USA, 1999.
  32. Fuller, W.A. Measurement Error Models; John Wiley & Sons: New York, NY, USA, 1987.
  33. Li, G.R.; Zhang, J.; Feng, S.Y. Modern Measurement Error Models; Science Press: Beijing, China, 2016.
  34. Yi, G.Y. Statistical Analysis with Measurement Error or Misclassification; Springer: New York, NY, USA, 2017.
  35. Yi, G.Y.; Delaigle, A.; Gustafson, P. Handbook of Measurement Error Models; Chapman and Hall: New York, NY, USA, 2021.
  36. Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 1996, 58, 267–288.
  37. Fan, J.Q.; Li, R.Z. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
  38. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2005, 67, 301–320.
  39. Zou, H. The adaptive Lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
  40. Candès, E.J.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351.
  41. Lv, J.C.; Fan, Y.Y. A unified approach to model selection and sparse recovery using regularized least squares. Ann. Stat. 2009, 37, 3498–3528.
  42. Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942.
  43. Fan, J.Q.; Lv, J.C. A selective overview of variable selection in high dimensional feature space. Stat. Sin. 2010, 20, 101–148.
  44. Wu, Y.N.; Wang, L. A survey of tuning parameter selection for high-dimensional regression. Annu. Rev. Stat. Its Appl. 2020, 7, 209–226.
  45. Yang, E.; Lozano, A.C.; Ravikumar, P. Elementary estimators for high-dimensional linear regression. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21 June 2014.
  46. Kuchibhotla, A.K.; Kolassa, J.E.; Kuffner, T.A. Post-selection inference. Annu. Rev. Stat. Its Appl. 2022, 9, 505–527.
  47. Bühlmann, P.; van de Geer, S. Statistics for High-Dimensional Data: Methods, Theory and Applications; Springer: Heidelberg, Germany, 2011.
  48. Hastie, T.; Tibshirani, R.; Wainwright, M. Statistical Learning with Sparsity: The Lasso and Generalizations; Taylor & Francis Group, CRC: Boca Raton, FL, USA, 2015.
  49. Fan, J.Q.; Li, R.Z.; Zhang, C.-H.; Zou, H. Statistical Foundations of Data Science; Chapman and Hall: Boca Raton, FL, USA, 2020.
  50. Fan, J.Q.; Lv, J.C. Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2008, 70, 849–911.
  51. Barut, E.; Fan, J.Q.; Verhasselt, A. Conditional sure independence screening. J. Am. Stat. Assoc. 2016, 111, 1266–1277.
  52. Fan, J.Q.; Song, R. Sure independence screening in generalized linear models with NP-dimensionality. Ann. Stat. 2010, 38, 3567–3604.
  53. Fan, J.Q.; Feng, Y.; Song, R. Nonparametric independence screening in sparse ultrahigh-dimensional additive models. J. Am. Stat. Assoc. 2011, 106, 544–557.
  54. Li, G.R.; Peng, H.; Zhang, J.; Zhu, L.X. Robust rank correlation based screening. Ann. Stat. 2012, 40, 1846–1877.
  55. Ma, S.J.; Li, R.Z.; Tsai, C.L. Variable screening via quantile partial correlation. J. Am. Stat. Assoc. 2017, 112, 650–663.
  56. Pan, W.L.; Wang, X.Q.; Xiao, W.N.; Zhu, H.T. A generic sure independence screening procedure. J. Am. Stat. Assoc. 2019, 114, 928–937.
  57. Tong, Z.X.; Cai, Z.R.; Yang, S.S.; Li, R.Z. Model-free conditional feature screening with FDR control. J. Am. Stat. Assoc. 2022, in press.
  58. Wen, C.H.; Pan, W.L.; Huang, M.; Wang, X.Q. Sure independence screening adjusted for confounding covariates with ultrahigh dimensional data. Stat. Sin. 2018, 28, 293–317.
  59. Wang, L.M.; Li, X.X.; Wang, X.Q.; Lai, P. Unified mean-variance feature screening for ultrahigh-dimensional regression. Comput. Stat. 2022, 37, 1887–1918.
  60. Zhao, S.F.; Fu, G.F. Distribution-free and model-free multivariate feature screening via multivariate rank distance correlation. J. Multivar. Anal. 2022, 192, 105081.
  61. Purdom, E.; Holmes, S.P. Error distribution for gene expression data. Stat. Appl. Genet. Mol. Biol. 2005, 4, 16.
  62. Slijepcevic, S.; Megerian, S.; Potkonjak, M. Location errors in wireless embedded sensor networks: Sources, models, and effects on applications. Mob. Comput. Commun. Rev. 2002, 6, 67–78.
  63. Loh, P.-L.; Wainwright, M.J. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. Ann. Stat. 2012, 40, 1637–1664.
  64. Datta, A.; Zou, H. CoCoLasso for high-dimensional error-in-variables regression. Ann. Stat. 2017, 45, 2400–2426.
  65. Zheng, Z.M.; Li, Y.; Yu, C.X.; Li, G.R. Balanced estimation for high-dimensional measurement error models. Comput. Stat. Data Anal. 2018, 126, 78–91.
  66. Zhang, J.; Li, Y.; Zhao, N.; Zheng, Z.M. L0 regularization for high-dimensional regression with corrupted data. Commun. Stat. Theory Methods 2022, in press.
  67. Tao, T.; Pan, S.H.; Bi, S.J. Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression. Stat. Sin. 2018, 31, 909–933.
  68. Rosenbaum, M.; Tsybakov, A. Sparse recovery under matrix uncertainty. Ann. Stat. 2010, 38, 2620–2651.
  69. Rosenbaum, M.; Tsybakov, A. Improved matrix uncertainty selector. Probab. Stat. Back-High-Dimens. Model. Processes 2013, 9, 276–290.
  70. Sørensen, Ø.; Hellton, K.H.; Frigessi, A.; Thoresen, M. Covariate selection in high-dimensional generalized linear models with measurement error. J. Comput. Graph. Stat. 2018, 27, 739–749.
  71. Sørensen, Ø.; Frigessi, A.; Thoresen, M. Measurement error in Lasso: Impact and likelihood bias correction. Stat. Sin. 2015, 25, 809–829.
  72. Belloni, A.; Rosenbaum, M.; Tsybakov, A.B. Linear and conic programming estimators in high dimensional errors-in-variables models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2017, 79, 939–956.
  73. Romeo, G.; Thoresen, M. Model selection in high-dimensional noisy data: A simulation study. J. Stat. Comput. Simul. 2019, 89, 2031–2050.
  74. Brown, B.; Weaver, T.; Wolfson, J. MEBoost: Variable selection in the presence of measurement error. Stat. Med. 2019, 38, 2705–2718.
  75. Nghiem, L.H.; Potgieter, C.J. Simulation-selection-extrapolation: Estimation in high-dimensional errors-in-variables models. Biometrics 2019, 75, 1133–1144.
  76. Li, X.; Wu, D.Y. Minimax rates of lp-losses for high-dimensional linear errors-in-variables models over lq-balls. Entropy 2021, 23, 722.
  77. Bai, Y.X.; Tian, M.Z.; Tang, M.-L.; Lee, W.-Y. Variable selection for ultra-high dimensional quantile regression with missing data and measurement error. Stat. Methods Med. Res. 2021, 30, 129–150.
  78. Jiang, F.; Ma, Y.Y. Poisson regression with error corrupted high dimensional features. Stat. Sin. 2022, 32, 2023–2046.
  79. Byrd, M.; McGee, M. A simple correction procedure for high-dimensional generalized linear models with measurement error. arXiv 2019, arXiv:1912.11740.
  80. Liang, F.M.; Jia, B.C.; Xue, J.N.; Li, Q.Z.; Luo, Y. An imputation–regularized optimization algorithm for high dimensional missing data problems and beyond. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2018, 80, 899–926.
  81. van de Geer, S.; Bühlmann, P.; Ritov, Y.; Dezeure, R. On asymptotically optimal confidence regions and tests for high-dimensional models. Ann. Stat. 2014, 42, 1166–1202.
  82. Zhang, C.-H.; Zhang, S.S. Confidence intervals for low dimensional parameters in high dimensional linear models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2014, 76, 217–242.
  83. Ma, S.J.; Carroll, R.J.; Liang, H.; Xu, S.Z. Estimation and inference in generalized additive coefficient models for nonlinear interactions with high-dimensional covariates. Ann. Stat. 2015, 43, 2102–2131.
  84. Dezeure, R.; Bühlmann, P.; Meier, L.; Meinshausen, N. High-dimensional inference: Confidence intervals, p-values and R-software hdi. Stat. Sci. 2015, 30, 533–558.
  85. Ning, Y.; Liu, H. A general theory of hypothesis tests and confidence regions for sparse high dimensional models. Ann. Stat. 2017, 45, 158–195.
  86. Zhang, X.Y.; Cheng, G. Simultaneous inference for high-dimensional linear models. J. Am. Stat. Assoc. 2017, 112, 757–768.
  87. Vandekar, S.N.; Reiss, P.T.; Shinohara, R.T. Interpretable high-dimensional inference via score projection with an application in neuroimaging. J. Am. Stat. Assoc. 2019, 114, 820–830.
  88. Ghosh, S.; Tan, Z.Q. Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data. Bernoulli 2022, 28, 1675–1703.
  89. Belloni, A.; Chernozhukov, V.; Kaul, A. Confidence bands for coefficients in high dimensional linear models with error-in-variables. arXiv 2017, arXiv:1703.00469.
  90. Li, M.Y.; Li, R.Z.; Ma, Y.Y. Inference in high dimensional linear measurement error models. J. Multivar. Anal. 2021, 184, 104759.
  91. Huang, X.D.; Bao, N.N.; Xu, K.; Wang, G.P. Variable selection in high-dimensional error-in-variables models via controlling the false discovery proportion. Commun. Math. Stat. 2022, 10, 123–151.
  92. Jiang, F.; Zhou, Y.Q.; Liu, J.X.; Ma, Y.Y. On high dimensional Poisson models with measurement error: Hypothesis testing for nonlinear nonconvex optimization. Ann. Stat. 2023, 51, 233–259.
  93. Nghiem, L.H.; Hui, F.K.C.; Müller, S.; Welsh, A.H. Screening methods for linear errors-in-variables models in high dimensions. Biometrics 2023, 79, 926–939.
  94. Sørensen, Ø. hdme: High-dimensional regression with measurement error. J. Open Source Softw. 2019, 4, 1404.
  95. Duchi, J.; Shalev-Shwartz, S.; Singer, Y.; Chandra, T. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 5–9 July 2008.
  96. Agarwal, A.; Negahban, S.; Wainwright, M.J. Fast global convergence of gradient methods for high-dimensional statistical recovery. Ann. Stat. 2012, 40, 2452–2482.
  97. Chen, Y.D.; Caramanis, C. Noisy and missing data regression: Distribution-oblivious support recovery. J. Mach. Learn. Res. 2013, 28, 383–391.
  98. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  99. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
  100. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 2010, 33, 1–22.
  101. Escribe, C.; Lu, T.Y.; Keller-Baruch, J.; Forgetta, V.; Xiao, B.W.; Richards, J.B.; Bhatnagar, S.; Oualkacha, K.; Greenwood, C.M.T. Block coordinate descent algorithm improves variable selection and estimation in error-in-variables regression. Genet. Epidemiol. 2021, 45, 874–890.
  102. James, G.M.; Radchenko, P. A generalized Dantzig selector with shrinkage tuning. Biometrika 2009, 96, 323–337.
  103. Huang, J.; Horowitz, J.L.; Ma, S.G. Asymptotic properties of bridge estimators in sparse high-dimensional regression models. Ann. Stat. 2008, 36, 587–613.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
