Article

Estimation of Large-Dimensional Covariance Matrices via Second-Order Stein-Type Regularization

Bin Zhang, Hengzhen Huang and Jianbin Chen
1 College of Mathematics and Statistics, Guangxi Normal University, Guilin 541004, China
2 School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(1), 53; https://doi.org/10.3390/e25010053
Submission received: 11 November 2022 / Revised: 20 December 2022 / Accepted: 23 December 2022 / Published: 27 December 2022
(This article belongs to the Special Issue Statistical Methods for Modeling High-Dimensional and Complex Data)

Abstract

This paper tackles the problem of estimating the covariance matrix in large-dimension and small-sample-size scenarios. Inspired by the well-known linear shrinkage estimation, we propose a novel second-order Stein-type regularization strategy to generate well-conditioned covariance matrix estimators. We model the second-order Stein-type regularization as a quadratic polynomial concerning the sample covariance matrix and a given target matrix, representing the prior information of the actual covariance structure. To obtain available covariance matrix estimators, we choose the spherical and diagonal target matrices and develop unbiased estimates of the theoretical mean squared errors, which measure the distances between the actual covariance matrix and its estimators. We formulate the second-order Stein-type regularization as a convex optimization problem, resulting in the optimal second-order Stein-type estimators. Numerical simulations reveal that the proposed estimators can significantly lower the Frobenius losses compared with the existing Stein-type estimators. Moreover, a real data analysis in portfolio selection verifies the performance of the proposed estimators.

1. Introduction

As a fundamental problem in modern multivariate statistics and various practical applications, estimating the covariance matrix of a large-dimensional random vector has attracted significant interest in the last two decades [1,2]. The traditional sample covariance matrix (SCM) becomes unstable and ill-conditioned when the dimension increases proportionally with the sample size. Algorithms that still employ the SCM as the covariance matrix estimator therefore suffer performance degradation or even failure [3,4]. Although it remains unbiased, the SCM is no longer a satisfactory estimator of the actual covariance matrix [5,6]. Therefore, it is of great interest to develop well-conditioned covariance matrix estimators in large-dimensional scenarios [7,8,9].
A common way to obtain well-conditioned estimators is to improve the SCM [10]. In the Stein-type regularization (linear shrinkage estimation), a target matrix is preset by rectifying the SCM according to prior information on the covariance structure. For example, the spherical target is a scalar matrix whose coefficient is the average of the diagonal elements of the SCM [11]. The diagonal target is a diagonal matrix which retains the diagonal elements of the SCM [12,13]. Moreover, the Toeplitz-structured target is formed by averaging each diagonal of the SCM [14]. To some extent, the target matrices are themselves covariance matrix estimators: despite being biased, they usually enjoy low variances. The Stein-type estimator combines the SCM and the target matrix to balance bias and variance. For the spherical target matrix, this method generates a well-conditioned estimator by retaining the sample eigenvectors and shrinking the sample eigenvalues toward their grand mean. Moreover, the Stein-type regularization can be interpreted as a weighted average of the SCM and the target matrix. Many Stein-type estimators have been developed for various target matrices. It is also worth mentioning that the optimal Stein-type estimator can be expressed in closed form and significantly outperforms the traditional SCM under appropriate criteria [15,16,17].
In addition, nonlinear shrinkage estimation has been proposed based on random matrix theory [18]. By taking the spectral decomposition of the SCM, this method retains the sample eigenvectors and estimates the actual eigenvalues through a nonlinear transformation of the sample eigenvalues. The nonlinear shrinkage estimator is then obtained by assembling the estimated eigenvalues and the sample eigenvectors. In the mean squared error (MSE) sense, the resulting nonlinear shrinkage estimator enjoys a significant advantage over the SCM, and it also outperforms the Stein-type estimator for the spherical target. However, both the sample eigenvalues and the sample eigenvectors have serious deficiencies in the high-dimensional case [19]. Hence, the existing nonlinear shrinkage strategy, which modifies the sample eigenvalues while retaining the sample eigenvectors, has limited room for improving the SCM. Moreover, the method can hardly exploit the prior structural information employed in the Stein-type regularization. Developing a new nonlinear shrinkage technique is therefore essential for generating better-performing covariance matrix estimators.
This paper combines the SCM and the target matrix via a nonlinear shrinkage strategy to obtain well-conditioned estimators of a large-dimensional covariance matrix. The main contributions are the following:
  • The second-order Stein-type estimator is modeled as a quadratic polynomial concerning the SCM and an almost surely (a.s.) positive definite target matrix. For the spherical and diagonal target matrices, the MSEs between the second-order Stein-type estimator and the actual covariance matrix are unbiasedly estimated under Gaussian distribution.
  • We formulate the second-order Stein-type estimators for the two target matrices as convex quadratic programming problems. Then, the optimal second-order Stein-type estimators are immediately obtained.
  • Some numerical simulations and application examples are provided for comparing the proposed second-order Stein-type estimators with the existing linear and nonlinear shrinkage estimators.
The outline of this paper is as follows. Section 2 proposes the second-order Stein-type estimator based on the Stein-type regularization. In Section 3, the spherical and diagonal matrices are employed as the target matrices; we obtain unbiased estimates of the MSEs between the second-order Stein-type estimators and the actual covariance matrix, and the optimal second-order Stein-type estimators are obtained by solving the corresponding optimization problems. Section 4 provides numerical simulations and two examples to examine the performance of the proposed estimators in large-dimensional scenarios. Section 5 concludes the paper.

2. Notation, Motivation, and Formulation

The symbol R^p denotes the set of all p-dimensional real column vectors, R^{m \times n} denotes the set of all m × n real matrices, and S^p denotes the set of all p × p real symmetric matrices. The bold symbol \mathbf{E} denotes the square matrix with all entries equal to 1 and appropriate dimensions. The symbol I_p denotes the p × p identity matrix. For a matrix A, we denote by A^T, tr(A), and ‖A‖ its transpose, trace, and Frobenius norm, respectively. For two matrices A and B, A ∘ B denotes their Hadamard (element-wise) product.
Assume that x_1, x_2, …, x_n ∈ R^p is an independent and identically distributed (i.i.d.) sample drawn from a certain distribution with mean 0 and covariance matrix Σ. The SCM S is defined by
S = (s_{ij})_{p \times p} = \frac{1}{n} \sum_{m=1}^{n} x_m x_m^T. \qquad (1)
As is widely known, the SCM is ill-conditioned in large-dimensional scenarios and is even singular when p > n. The Stein-type regularization can produce a well-conditioned covariance matrix estimator based on the SCM [20,21,22,23].
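As a quick numerical illustration (a minimal sketch with simulated zero-mean data and illustrative variable names), the SCM in (1) can be formed directly from the data matrix, and its rank is at most n, so it is singular whenever p > n:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50                      # small sample size, large dimension
X = rng.standard_normal((n, p))    # rows are the observations x_1, ..., x_n (mean zero)

# Sample covariance matrix S = (1/n) * sum_m x_m x_m^T, cf. Equation (1)
S = X.T @ X / n

print(S.shape)                     # (p, p)
print(np.linalg.matrix_rank(S))    # at most n < p, so S is singular here
```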
For an a.s. positive definite target matrix T representing the prior information on the covariance structure, the Stein-type estimator combines the SCM and the target matrix through the linear function
f(S, T) = (1 - w) S + w T. \qquad (2)
Through an equivalent transformation, the expression given by (2) can be rewritten as
f(S, T) = S + w (T - S), \qquad (3)
where T - S is a regularization term and w is a tuning parameter. Moreover, the tuning parameter w lies in (0, 1] so as to keep the Stein-type estimator a.s. positive definite even when n < p. An interesting fact is that the matrix (T - S)^2 is still symmetric and positive definite, which motivates us to further consider a quadratic function of the SCM and the target matrix.
For an a.s. positive definite target matrix T, we model the second-order Stein-type estimator of the covariance matrix Σ as
\hat{\Sigma} = S + w_1 (T - S) + w_2 (T - S)^2, \qquad (4)
where w_1 and w_2 are the tuning parameters. It is easy to see that \hat{\Sigma} ∈ S^p. In the same manner, we further assume w_1 ∈ (0, 1] and w_2 ≥ 0 to keep the covariance estimator given by (4) a.s. positive definite. We note that the constraint on the tuning parameters is an easy-to-implement condition but is not necessary. One can also consider alternative assumptions, such as a condition number constraint, to obtain positive definite estimators of the large-dimensional covariance matrix [24].
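A minimal sketch of (4): given the SCM S, an a.s. positive definite target T, and tuning parameters w_1 ∈ (0, 1], w_2 ≥ 0, the estimator is assembled by a single matrix polynomial. The target and parameter values below are placeholders; the optimal parameters are derived in Section 3.

```python
import numpy as np

def second_order_stein(S, T, w1, w2):
    """Second-order Stein-type estimator, Equation (4): S + w1*(T-S) + w2*(T-S)^2."""
    D = T - S
    return S + w1 * D + w2 * (D @ D)

# toy example with a spherical target and placeholder tuning parameters
rng = np.random.default_rng(1)
n, p = 20, 50
X = rng.standard_normal((n, p))
S = X.T @ X / n
T = (np.trace(S) / p) * np.eye(p)                   # spherical target
Sigma_hat = second_order_stein(S, T, w1=0.5, w2=0.1)
print(np.all(np.linalg.eigvalsh(Sigma_hat) > 0))    # positive definite, hence well conditioned
```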
Next, we choose the optimal tuning parameters in (4). In the Stein-type regularization, the MSE between the actual covariance matrix and its estimator is the most commonly used loss function. It includes unknown scalars concerning the expectation operator and the actual covariance matrix. One practical way is to make the MSE available by estimating the unknown scalars and obtaining the optimal shrinkage intensity by minimizing the available MSE [15]. Therefore, we still follow the above steps to find optimal tuning parameters in the second-order Stein-type estimator. To be specific, the loss function of the second-order Stein-type estimator Σ ^ is defined by
M_T(\mathbf{w} \mid \Sigma) = E\big[\,\|\hat{\Sigma} - \Sigma\|^2\,\big], \qquad (5)
where \mathbf{w} = (w_1, w_2)^T. Substituting the expression of \hat{\Sigma} in (4) into (5), we can obtain
M_T(\mathbf{w} \mid \Sigma) = \frac{1}{2} \mathbf{w}^T H(T) \mathbf{w} - 2 \mathbf{w}^T b(T) + c, \qquad (6)
where
H(T) = 2 \begin{pmatrix} E[\mathrm{tr}(S - T)^2] & -E[\mathrm{tr}(S - T)^3] \\ -E[\mathrm{tr}(S - T)^3] & E[\mathrm{tr}(S - T)^4] \end{pmatrix}, \qquad (7)
b(T) = \begin{pmatrix} E[\mathrm{tr}((S - \Sigma)(S - T))] \\ -E[\mathrm{tr}((S - \Sigma)(S - T)^2)] \end{pmatrix}, \qquad (8)
c = E[\mathrm{tr}(S - \Sigma)^2]. \qquad (9)
Therefore, the second-order Stein-type regularization can be modeled as the following optimization problem:
\min_{\mathbf{w}} \ \frac{1}{2} \mathbf{w}^T H(T) \mathbf{w} - 2 \mathbf{w}^T b(T) + c \quad \mathrm{s.t.} \quad 0 < e_1 \mathbf{w} \le 1, \quad e_2 \mathbf{w} \ge 0, \qquad (10)
where e_1 = (1, 0) and e_2 = (0, 1). It is easy to see that the loss function in problem (10) is a quadratic polynomial in \mathbf{w}, with Hessian matrix H(T). By the Cauchy–Schwarz inequality, we have
\big( E[\mathrm{tr}(S - T)^3] \big)^2 \le E[\mathrm{tr}(S - T)^2] \, E[\mathrm{tr}(S - T)^4]. \qquad (11)
Therefore, the Hessian matrix H(T) is positive definite, and the optimization problem (10) is a convex quadratic program. However, it cannot be solved directly because the quantities H(T), b(T), and c in the objective function are unknown. When the underlying distribution is given and the target matrix is prespecified, we can estimate these unknown quantities; the optimization problem (10) then becomes available through a plug-in strategy and can be solved efficiently.
Remark 1.
The unknown quantity c does not affect the choice of optimal tuning parameter w in the optimization problem (10). Moreover, it is the theoretical MSE between the actual covariance matrix and the classic SCM and plays an important role in evaluating the performance of improved covariance estimators based on the SCM.

3. Optimal Second-Order Stein-Type Estimators

In this section, as the target matrix is specified, we estimate the corresponding unknown quantities under Gaussian distribution, then establish the available version of the optimization problem to obtain the optimal second-order Stein-type estimator.

3.1. Target Matrices

As mentioned before, the target matrix represents the prior information on the actual covariance structure. In the Stein-type regularization, the commonly used target matrices include the spherical target, the diagonal target, the Toeplitz-structured target, and the tapered SCM. Among these, the spherical target and the diagonal target are a.s. positive definite, whereas the Toeplitz-structured target and the tapered SCM are not necessarily so. Therefore, in the second-order Stein-type regularization, we employ the spherical target and the diagonal target, given by
T_1 = \frac{\mathrm{tr}(S)}{p} I_p, \qquad T_2 = \mathrm{diag}(s_{11}, \ldots, s_{pp}). \qquad (12)
The diagonal target T 2 is also denoted as D S because it consists of the diagonal elements of the SCM.
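A short sketch of the two targets in (12), assuming S is already available as a NumPy array (the helper names are illustrative):

```python
import numpy as np

def spherical_target(S):
    """T1 = (tr(S)/p) * I_p: equal variances, zero correlations."""
    p = S.shape[0]
    return (np.trace(S) / p) * np.eye(p)

def diagonal_target(S):
    """T2 = D_S = diag(s_11, ..., s_pp): keeps sample variances, drops covariances."""
    return np.diag(np.diag(S))
```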

3.2. Available Loss Functions

For the target matrices T 1 and T 2 , we unbiasedly estimate the loss functions given by (6) through plugging in the estimates of unknown quantities H ( T i ) , b ( T i ) , i = 1 , 2 and c under Gaussian distribution.
First of all, by directly removing the expectation operator, the Hessian matrices H ( T i ) can be estimated by
\hat{H}(T_i) = 2 \begin{pmatrix} \mathrm{tr}(S - T_i)^2 & -\mathrm{tr}(S - T_i)^3 \\ -\mathrm{tr}(S - T_i)^3 & \mathrm{tr}(S - T_i)^4 \end{pmatrix}. \qquad (13)
Furthermore, H ^ ( T i ) , i = 1 , 2 are, respectively, unbiased estimates of H ( T i ) , i = 1 , 2 .
Next, for i = 1 , 2 , we decompose the unknown vectors b ( T i ) into two terms,
b(T_i) = \begin{pmatrix} E[\mathrm{tr}((S - \Sigma)(S - T_i))] \\ -E[\mathrm{tr}((S - \Sigma)(S - T_i)^2)] \end{pmatrix} = \begin{pmatrix} E[\mathrm{tr}(S(S - T_i))] \\ -E[\mathrm{tr}(S(S - T_i)^2)] \end{pmatrix} + \begin{pmatrix} -E[\mathrm{tr}(\Sigma(S - T_i))] \\ E[\mathrm{tr}(\Sigma(S - T_i)^2)] \end{pmatrix} \qquad (14)
\triangleq u(T_i) + v(T_i). \qquad (15)
Similar to the Hessian matrices H ( T 1 ) and H ( T 2 ) , the first term u ( T i ) can be unbiasedly estimated by
\hat{u}(T_i) = \begin{pmatrix} \mathrm{tr}(S(S - T_i)) \\ -\mathrm{tr}(S(S - T_i)^2) \end{pmatrix}. \qquad (16)
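The plug-in quantities in (13) and (16) are simple trace functionals of S and T_i; the sketch below follows the sign conventions of those displays and uses illustrative function names.

```python
import numpy as np

def H_hat(S, T):
    """Unbiased estimate of the Hessian H(T), cf. Equation (13)."""
    D = S - T
    D2 = D @ D
    t2 = np.trace(D2)            # tr((S-T)^2)
    t3 = np.trace(D2 @ D)        # tr((S-T)^3)
    t4 = np.trace(D2 @ D2)       # tr((S-T)^4)
    return 2.0 * np.array([[t2, -t3],
                           [-t3, t4]])

def u_hat(S, T):
    """First term u(T) of b(T), estimated by removing the expectation, cf. Equation (16)."""
    D = S - T
    return np.array([np.trace(S @ D), -np.trace(S @ (D @ D))])
```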
Therefore, we only need to estimate the second term v(T_i). It is challenging to estimate v(T_i) unbiasedly because it involves both the expectation operator and the actual covariance matrix Σ. We need the following moment properties of the Wishart distribution [25].
Lemma 1.
Let A and B be arbitrary nonrandom symmetric matrices and let S be the sample covariance matrix given by (1). Then the following equalities hold under Gaussian distribution:
E[S A S] = \frac{n+1}{n} \Sigma A \Sigma + \frac{1}{n} \mathrm{tr}(\Sigma A) \Sigma, \qquad (17)
E[\mathrm{tr}(A S) S] = \mathrm{tr}(A \Sigma) \Sigma + \frac{2}{n} \Sigma A \Sigma, \qquad (18)
E[\mathrm{tr}(A S) \mathrm{tr}(B S)] = \mathrm{tr}(A \Sigma) \mathrm{tr}(B \Sigma) + \frac{2}{n} \mathrm{tr}(A \Sigma B \Sigma). \qquad (19)
By Lemma 1, letting A = B = I p , we can obtain
E[\mathrm{tr}(S^2)] = \frac{n+1}{n} \mathrm{tr}(\Sigma^2) + \frac{1}{n} \mathrm{tr}^2(\Sigma), \qquad E[\mathrm{tr}^2(S)] = \mathrm{tr}^2(\Sigma) + \frac{2}{n} \mathrm{tr}(\Sigma^2). \qquad (20)
Moreover, letting A = Σ and B = I_p, we have
E[\mathrm{tr}(S^2 \Sigma)] = \frac{n+1}{n} \mathrm{tr}(\Sigma^3) + \frac{1}{n} \mathrm{tr}(\Sigma) \mathrm{tr}(\Sigma^2), \qquad (21)
E[\mathrm{tr}(S \Sigma) \mathrm{tr}(S)] = \mathrm{tr}(\Sigma) \mathrm{tr}(\Sigma^2) + \frac{2}{n} \mathrm{tr}(\Sigma^3). \qquad (22)
Lemma 1 is very helpful to compute the second term v ( T i ) in (14). For the spherical target matrix T 1 , we have
E[\mathrm{tr}(\Sigma(S - T_1))] = \mathrm{tr}(\Sigma^2) - \frac{1}{p} \mathrm{tr}^2(\Sigma), \qquad (23)
and
E[\mathrm{tr}(\Sigma(S - T_1)^2)] = E[\mathrm{tr}(S^2 \Sigma)] - \frac{2}{p} E[\mathrm{tr}(S) \mathrm{tr}(S \Sigma)] + \frac{1}{p^2} E[\mathrm{tr}^2(S)] \mathrm{tr}(\Sigma) = \frac{np + p - 4}{np} \mathrm{tr}(\Sigma^3) + \frac{p^2 - 2np + 2n}{np^2} \mathrm{tr}(\Sigma) \mathrm{tr}(\Sigma^2) + \frac{1}{p^2} \mathrm{tr}^3(\Sigma). \qquad (24)
For the diagonal target matrix T 2 , we have
E[\mathrm{tr}(\Sigma(S - T_2))] = \mathrm{tr}(\Sigma^2) - \mathrm{tr}(D_\Sigma^2), \qquad (25)
where D_\Sigma = \mathrm{diag}(\sigma_{11}, \ldots, \sigma_{pp}). Moreover, we can obtain
E[\mathrm{tr}(\Sigma(S - T_2)^2)] = E[\mathrm{tr}(S^2 \Sigma)] - 2 E[\mathrm{tr}(S T_2 \Sigma)] + E[\mathrm{tr}(T_2^2 \Sigma)] = \frac{n+1}{n} \mathrm{tr}(\Sigma^3) + \frac{1}{n} \mathrm{tr}(\Sigma) \mathrm{tr}(\Sigma^2) - \frac{2n+4}{n} \mathrm{tr}(D_\Sigma \Sigma^2) + \frac{n+2}{n} \mathrm{tr}(D_\Sigma^3). \qquad (26)
Denote
a_1 = \mathrm{tr}(\Sigma^2), \quad a_2 = \mathrm{tr}^2(\Sigma), \quad a_3 = \mathrm{tr}(D_\Sigma^2), \quad b_1 = \mathrm{tr}(\Sigma^3), \quad b_2 = \mathrm{tr}^3(\Sigma), \quad b_3 = \mathrm{tr}(\Sigma) \mathrm{tr}(\Sigma^2), \quad c_1 = \mathrm{tr}(D_\Sigma^3), \quad c_2 = \mathrm{tr}(D_\Sigma \Sigma^2); \qquad (27)
then the vectors v ( T i ) , i = 1 , 2 can be rewritten as
v(T_1) = \begin{pmatrix} \frac{1}{p} a_2 - a_1 \\ \frac{np+p-4}{np} b_1 + \frac{1}{p^2} b_2 + \frac{p^2-2np+2n}{np^2} b_3 \end{pmatrix}, \qquad (28)
v(T_2) = \begin{pmatrix} a_3 - a_1 \\ \frac{n+1}{n} b_1 + \frac{1}{n} b_3 + \frac{n+2}{n} c_1 - \frac{2n+4}{n} c_2 \end{pmatrix}. \qquad (29)
It is worth noting that each element in v ( T i ) , i = 1 , 2 is a linear combination of the quantities in (27). Therefore, we only need to find out the estimates of the quantities in (27).
Firstly, the unbiased estimates of the quantities a i , i = 1 , 2 , 3 were proposed in [12,26,27],
\alpha_1 = \tau_a \big[ n \, \mathrm{tr}(S^2) - \mathrm{tr}^2(S) \big], \qquad \alpha_2 = \tau_a \big[ (n+1) \, \mathrm{tr}^2(S) - 2 \, \mathrm{tr}(S^2) \big], \qquad (30)
\alpha_3 = \tau_a (n-1) \, \mathrm{tr}(D_S^2), \qquad (31)
where \tau_a = \frac{n}{(n-1)(n+2)}.
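A sketch of the unbiased estimates in (30) and (31), written directly from the displayed formulas (function and variable names are illustrative):

```python
import numpy as np

def alpha_estimates(S, n):
    """Unbiased estimates of tr(Sigma^2), tr^2(Sigma), tr(D_Sigma^2), cf. (30)-(31)."""
    tau_a = n / ((n - 1.0) * (n + 2.0))
    trS, trS2 = np.trace(S), np.trace(S @ S)
    trDS2 = np.sum(np.diag(S) ** 2)                       # tr(D_S^2)
    alpha1 = tau_a * (n * trS2 - trS ** 2)
    alpha2 = tau_a * ((n + 1.0) * trS ** 2 - 2.0 * trS2)
    alpha3 = tau_a * (n - 1.0) * trDS2
    return alpha1, alpha2, alpha3
```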
Secondly, denote the matrix W as
W = \begin{pmatrix} n^2 & 2 & -3n \\ 16 & n^2 + 3n - 2 & -6(n+2) \\ -4n & -(n+2) & n^2 + 2n + 4 \end{pmatrix}. \qquad (32)
Then, the unbiased estimates of b i , i = 1 , 2 , 3 can be obtained by the following theorem.
Theorem 1.
Under Gaussian distribution, the following equation holds when n ≥ 3:
E \begin{pmatrix} \mathrm{tr}(S^3) \\ \mathrm{tr}^3(S) \\ \mathrm{tr}(S) \, \mathrm{tr}(S^2) \end{pmatrix} = (\tau_b W)^{-1} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}, \qquad (33)
where \tau_b = \frac{n^2}{(n-1)(n-2)(n+2)(n+4)}.
Proof. 
The actual covariance matrix Σ has the spectral decomposition Σ = Γ^T Λ Γ, where Λ = \mathrm{diag}(\lambda_1, \ldots, \lambda_p) is the diagonal matrix of eigenvalues and Γ is the corresponding orthogonal matrix. Define F = \Sigma^{1/2}; then F is a symmetric matrix with F^2 = Σ. For m = 1, …, n, denote z_m = F^{-1} x_m; then z_1, …, z_n is an i.i.d. sample with z_m ∼ N(0, I_p). Let X = (x_1, …, x_n) and Z = (z_1, …, z_n); then X = F Z. Since the SCM is S = \frac{1}{n} \sum_{m=1}^{n} x_m x_m^T, we have
n S = X X^T = F Z Z^T F. \qquad (34)
Moreover, we can obtain
\mathrm{tr}(n S) = \mathrm{tr}(F Z Z^T F) = \mathrm{tr}(Z^T \Sigma Z) = \mathrm{tr}(Z^T \Gamma^T \Lambda \Gamma Z). \qquad (35)
Define the matrix Q^T = Z^T \Gamma^T = (q_1, \ldots, q_p) and denote v_{ii} = q_i^T q_i and v_{ij} = q_i^T q_j = q_j^T q_i for i, j ∈ {1, …, p}; then the above equation can be rewritten as
\mathrm{tr}(n S) = \mathrm{tr}(Q^T \Lambda Q) = \sum_{i=1}^{p} \lambda_i v_{ii}. \qquad (36)
In the same manner, we can obtain
\mathrm{tr}(n S)^2 = \sum_{i,j=1}^{p} \lambda_i \lambda_j v_{ij}^2, \qquad \mathrm{tr}^2(n S) = \sum_{i,j=1}^{p} \lambda_i \lambda_j v_{ii} v_{jj}, \qquad (37)
and
\mathrm{tr}(n S)^3 = \sum_{i,j,k=1}^{p} \lambda_i \lambda_j \lambda_k v_{ij} v_{ik} v_{jk}, \qquad (38)
\mathrm{tr}^3(n S) = \sum_{i,j,k=1}^{p} \lambda_i \lambda_j \lambda_k v_{ii} v_{jj} v_{kk}, \qquad (39)
\mathrm{tr}(n S) \, \mathrm{tr}(n S)^2 = \sum_{i,j,k=1}^{p} \lambda_i \lambda_j \lambda_k v_{ij}^2 v_{kk}. \qquad (40)
By the moment properties of the random variables v_{ii} and v_{ij} in [26,28], we have
E[v_{ii}] = n, \quad E[v_{ii}^2] = n(n+2), \quad E[v_{ij}^2] = n, \quad E[v_{ii} v_{ij}^2] = n(n+2), \quad E[v_{ij} v_{ik} v_{jk}] = n, \quad E[v_{ii}^3] = n(n+2)(n+4), \qquad (41)
where i, j, k are arbitrary mutually unequal numbers in {1, …, p}. Next, we compute the mathematical expectations of Equations (38)–(40) based on the moment properties in (41). Denote \mu_1 = \sum_{i=1}^{p} \lambda_i^3, \mu_2 = \sum_{i \ne j}^{p} \lambda_i^2 \lambda_j, and \mu_3 = \sum_{i \ne j \ne k}^{p} \lambda_i \lambda_j \lambda_k, where i ≠ j ≠ k means that i, j, k are mutually unequal. Then we have
E[\mathrm{tr}(n S)^3] = n(n+2)(n+4) \mu_1 + 3n(n+2) \mu_2 + n \mu_3, \qquad (42)
E[\mathrm{tr}^3(n S)] = n(n+2)(n+4) \mu_1 + 3n^2(n+2) \mu_2 + n^3 \mu_3, \qquad (43)
E[\mathrm{tr}(n S) \, \mathrm{tr}(n S)^2] = n(n+2)(n+4) \mu_1 + n(n+2)^2 \mu_2 + n^2 \mu_3. \qquad (44)
Denote a matrix D as
D = \begin{pmatrix} n(n+2)(n+4) & 3n(n+2) & n \\ n(n+2)(n+4) & 3n^2(n+2) & n^3 \\ n(n+2)(n+4) & n(n+2)^2 & n^2 \end{pmatrix}; \qquad (45)
then the above Equations (42)–(44) can be rewritten in the form of a matrix equation:
E \begin{pmatrix} \mathrm{tr}(n S)^3 \\ \mathrm{tr}^3(n S) \\ \mathrm{tr}(n S) \, \mathrm{tr}(n S)^2 \end{pmatrix} = D \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix}. \qquad (46)
Furthermore, the unknown quantities b_i, i = 1, 2, 3 can be decomposed as
b_1 = \mu_1, \qquad b_2 = \mu_1 + 3\mu_2 + \mu_3, \qquad b_3 = \mu_1 + \mu_2. \qquad (47)
Therefore, we have
\begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & 1 \\ 1 & 1 & 0 \end{pmatrix}^{-1} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}. \qquad (48)
By Equations (46) and (48), we have
E \begin{pmatrix} \mathrm{tr}(n S)^3 \\ \mathrm{tr}^3(n S) \\ \mathrm{tr}(n S) \, \mathrm{tr}(n S)^2 \end{pmatrix} = D \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & 1 \\ 1 & 1 & 0 \end{pmatrix}^{-1} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}. \qquad (49)
For n ≥ 3, the following equality holds:
D \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & 1 \\ 1 & 1 & 0 \end{pmatrix}^{-1} = n^3 (\tau_b W)^{-1}. \qquad (50)
Therefore, we can obtain
E \begin{pmatrix} \mathrm{tr}(n S)^3 \\ \mathrm{tr}^3(n S) \\ \mathrm{tr}(n S) \, \mathrm{tr}(n S)^2 \end{pmatrix} = n^3 (\tau_b W)^{-1} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}. \qquad (51)
Since \mathrm{tr}(n S)^3 = n^3 \mathrm{tr}(S^3), \mathrm{tr}^3(n S) = n^3 \mathrm{tr}^3(S), and \mathrm{tr}(n S) \, \mathrm{tr}(n S)^2 = n^3 \mathrm{tr}(S) \mathrm{tr}(S^2), dividing both sides of (51) by n^3 yields (33).
This completes the proof. □
Then, the unknown scalars b i , i = 1 , 2 , 3 can be unbiasedly estimated by
\begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix} = \tau_b W \begin{pmatrix} \mathrm{tr}(S^3) \\ \mathrm{tr}^3(S) \\ \mathrm{tr}(S) \, \mathrm{tr}(S^2) \end{pmatrix}. \qquad (52)
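Following Theorem 1 and (52), the unbiased estimates of b_1, b_2, b_3 are a fixed linear combination of tr(S^3), tr^3(S), and tr(S)tr(S^2); the sketch below simply encodes τ_b and W (names are illustrative):

```python
import numpy as np

def beta_estimates(S, n):
    """Unbiased estimates of tr(Sigma^3), tr^3(Sigma), tr(Sigma)tr(Sigma^2), cf. (52)."""
    tau_b = n ** 2 / ((n - 1.0) * (n - 2.0) * (n + 2.0) * (n + 4.0))
    W = np.array([[n ** 2,   2.0,                -3.0 * n],
                  [16.0,     n ** 2 + 3 * n - 2, -6.0 * (n + 2)],
                  [-4.0 * n, -(n + 2.0),          n ** 2 + 2 * n + 4]])
    S2 = S @ S
    t = np.array([np.trace(S2 @ S), np.trace(S) ** 3, np.trace(S) * np.trace(S2)])
    return tau_b * (W @ t)          # (beta1, beta2, beta3)
```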
Remark 2.
In a large sample scenario, the unknown scalars b i , i = 1 , 2 , 3 can be consistently estimated by tr ( S 3 ) , tr 3 ( S ) and tr ( S ) tr ( S 2 ) [29]. Theorem 1 shows that these estimates are biased. Moreover, the biases become non-ignorable in high-dimensional situations [30]. Furthermore, by Theorem 1, the biases can be eliminated by the linear combinations of tr ( S 3 ) , tr 3 ( S ) , and tr ( S ) tr ( S 2 ) .
Thirdly, denote the matrices G, R, and K as follows:
G = (g_{ij}) \ \text{with} \ g_{ij} = \| x_i \circ x_j \|^2, \qquad (53)
R = (r_{ij}) \ \text{with} \ r_{ij} = \| x_i \circ x_i \circ x_j \|^2, \qquad (54)
K = (k_{ij}) \ \text{with} \ k_{ij} = (x_i \circ x_i)^T (x_i \circ x_j), \qquad (55)
where x_i ∈ R^n denotes the vector of the n observations of the i-th variable, for i = 1, …, p. Then the following theorem holds.
Theorem 2.
Under Gaussian distribution, the following equations hold when n ≥ 3:
E\big[ n^3 \mathrm{tr}(D_S^3) - 3n \, \mathrm{tr}(G D_S) + 2 \, \mathrm{tr}(R) \big] = \tau_c^{-1} c_1, \qquad (56)
E\big[ n^3 \mathrm{tr}(D_S S^2) - 2n \, \mathrm{tr}(K S) - n \, \mathrm{tr}(G D_S \mathbf{E}) + 2 \, \mathrm{tr}(R \mathbf{E}) \big] = \tau_c^{-1} c_2, \qquad (57)
where \tau_c = \frac{1}{n(n-1)(n-2)} and \mathbf{E} is the p × p matrix of ones.
Proof. 
Let F = \Sigma^{1/2} = (f_{ij}); F is a symmetric matrix with F^2 = \Sigma = (\sigma_{ij}). Therefore, for arbitrary i, j ∈ {1, …, p}, the equalities \sigma_{ij} = \sum_{k=1}^{p} f_{ik} f_{jk} and \sigma_{ii} = \sum_{k=1}^{p} f_{ik}^2 hold. For m = 1, …, n, denote x_m = (x_{m1}, x_{m2}, \ldots, x_{mp})^T and z_m = F^{-1} x_m; then z_1, …, z_n is an i.i.d. sample with z_m ∼ N(0, I_p). Writing z_m = (z_{m1}, z_{m2}, \ldots, z_{mp})^T, the variables z_{mk}, m = 1, …, n, k = 1, …, p, are mutually independent standard Gaussian random variables. For arbitrary m ∈ {1, …, n} and i, j ∈ {1, …, p}, we have x_{mi} = \sum_{k=1}^{p} f_{ik} z_{mk} and x_{mj} = \sum_{k=1}^{p} f_{jk} z_{mk}. Denoting the SCM by S = (s_{ij}), the entry s_{ij} can be decomposed as follows:
s_{ij} = \frac{1}{n} \sum_{m=1}^{n} x_{mi} x_{mj} = \frac{1}{n} \sum_{m=1}^{n} \sum_{k_1, k_2 = 1}^{p} f_{i k_1} f_{j k_2} z_{m k_1} z_{m k_2}. \qquad (58)
Then, for arbitrary m ∈ {1, …, n} and i, j ∈ {1, …, p}, we have
E[x_{mi} x_{mj}] = \sum_{k_1, k_2 = 1}^{p} f_{i k_1} f_{j k_2} E[z_{m k_1} z_{m k_2}] = \sum_{k=1}^{p} f_{ik} f_{jk} = \sigma_{ij}. \qquad (59)
In particular, when i = j, we have E[x_{mi}^2] = \sigma_{ii}. Then, we can obtain
E\big[ n^3 \mathrm{tr}(D_S^3) - 3n \, \mathrm{tr}(G D_S) + 2 \, \mathrm{tr}(R) \big] = \sum_{m_1 \ne m_2 \ne m_3}^{n} \sum_{i=1}^{p} E[x_{m_1 i}^2 x_{m_2 i}^2 x_{m_3 i}^2] = \sum_{m_1 \ne m_2 \ne m_3}^{n} \sum_{i=1}^{p} E[x_{m_1 i}^2] E[x_{m_2 i}^2] E[x_{m_3 i}^2] = \sum_{m_1 \ne m_2 \ne m_3}^{n} \sum_{i=1}^{p} \sigma_{ii}^3 = n(n-1)(n-2) \sum_{i=1}^{p} \sigma_{ii}^3 = \tau_c^{-1} \mathrm{tr}(D_\Sigma^3). \qquad (60)
Furthermore, we have
E\big[ n^3 \mathrm{tr}(D_S S^2) - 2n \, \mathrm{tr}(K S) - n \, \mathrm{tr}(G D_S \mathbf{E}) + 2 \, \mathrm{tr}(R \mathbf{E}) \big] = \sum_{m_1 \ne m_2 \ne m_3}^{n} \sum_{i,j=1}^{p} E[x_{m_1 i}^2 x_{m_2 i} x_{m_2 j} x_{m_3 i} x_{m_3 j}] = \sum_{m_1 \ne m_2 \ne m_3}^{n} \sum_{i,j=1}^{p} E[x_{m_1 i}^2] E[x_{m_2 i} x_{m_2 j}] E[x_{m_3 i} x_{m_3 j}] = \sum_{m_1 \ne m_2 \ne m_3}^{n} \sum_{i,j=1}^{p} \sigma_{ii} \sigma_{ij}^2 = n(n-1)(n-2) \sum_{i,j=1}^{p} \sigma_{ii} \sigma_{ij}^2 = \tau_c^{-1} \mathrm{tr}(D_\Sigma \Sigma^2). \qquad (61)
Then, the unknown scalars c 1 and c 2 can be unbiasedly estimated by
\gamma_1 = \tau_c \big[ n^3 \mathrm{tr}(D_S^3) - 3n \, \mathrm{tr}(G D_S) + 2 \, \mathrm{tr}(R) \big], \qquad (62)
\gamma_2 = \tau_c \big[ n^3 \mathrm{tr}(D_S S^2) - 2n \, \mathrm{tr}(K S) - n \, \mathrm{tr}(G D_S \mathbf{E}) + 2 \, \mathrm{tr}(R \mathbf{E}) \big]. \qquad (63)
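A sketch of (62) and (63), assuming the data are stored as an n × p matrix X whose i-th column holds the n observations of the i-th variable; the Hadamard products in (53)–(55) then become elementwise powers of X, and the function name is illustrative.

```python
import numpy as np

def gamma_estimates(X):
    """Unbiased estimates of c1 = tr(D_Sigma^3) and c2 = tr(D_Sigma Sigma^2), cf. (62)-(63)."""
    n, p = X.shape
    S = X.T @ X / n
    dS = np.diag(S)
    X2, X3, X4 = X ** 2, X ** 3, X ** 4
    G = X2.T @ X2                 # g_ij = sum_m x_mi^2 x_mj^2
    R = X4.T @ X2                 # r_ij = sum_m x_mi^4 x_mj^2
    K = X3.T @ X                  # k_ij = sum_m x_mi^3 x_mj
    tau_c = 1.0 / (n * (n - 1.0) * (n - 2.0))
    gamma1 = tau_c * (n ** 3 * np.sum(dS ** 3)                 # n^3 tr(D_S^3)
                      - 3.0 * n * np.sum(np.diag(G) * dS)      # 3n tr(G D_S)
                      + 2.0 * np.trace(R))                     # 2 tr(R)
    gamma2 = tau_c * (n ** 3 * np.sum(dS * np.diag(S @ S))     # n^3 tr(D_S S^2)
                      - 2.0 * n * np.sum(K * S)                # 2n tr(K S), S symmetric
                      - n * np.sum(G.sum(axis=0) * dS)         # n tr(G D_S E)
                      + 2.0 * R.sum())                         # 2 tr(R E)
    return gamma1, gamma2
```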
By plugging the estimates of quantities in (27) into (28) and (29), the unbiased estimates of v ( T 1 ) and v ( T 2 ) are given by
\hat{v}(T_1) = \begin{pmatrix} \frac{1}{p} \alpha_2 - \alpha_1 \\ \frac{np+p-4}{np} \beta_1 + \frac{1}{p^2} \beta_2 + \frac{p^2-2np+2n}{np^2} \beta_3 \end{pmatrix}, \qquad (64)
\hat{v}(T_2) = \begin{pmatrix} \alpha_3 - \alpha_1 \\ \frac{n+1}{n} \beta_1 + \frac{1}{n} \beta_3 + \frac{n+2}{n} \gamma_1 - \frac{2n+4}{n} \gamma_2 \end{pmatrix}. \qquad (65)
By the Equations (16), (64), and (65), the unbiased estimates of the vectors b ( T i ) , i = 1 , 2 are given by
\hat{b}(T_i) = \hat{u}(T_i) + \hat{v}(T_i), \quad i = 1, 2. \qquad (66)
In addition, the constant c in (10) can be further calculated under Gaussian distribution, which is
c = \frac{1}{n} a_1 + \frac{1}{n} a_2. \qquad (67)
Therefore, we can obtain that the unbiased estimate of c is given by
\hat{c} = \frac{1}{n} \alpha_1 + \frac{1}{n} \alpha_2 = \tau_a \Big[ \frac{n-2}{n} \mathrm{tr}(S^2) + \mathrm{tr}^2(S) \Big]. \qquad (68)
To sum up, by Equations (13), (66) and (68), we can obtain that the unbiased estimates of the loss functions M T i ( w | Σ ) are given by
\hat{M}_{T_i}(\mathbf{w}) = \frac{1}{2} \mathbf{w}^T \hat{H}(T_i) \mathbf{w} - 2 \mathbf{w}^T \hat{b}(T_i) + \hat{c}. \qquad (69)

3.3. Optimal Second-Order Stein-Type Estimators

For the target matrices T i , i = 1 , 2 , through replacing the objective function in (10) with its unbiased estimate given by (69), we further formulate the second-order Stein-type estimators as the following optimization problems:
\min_{\mathbf{w}} \ \frac{1}{2} \mathbf{w}^T \hat{H}(T_i) \mathbf{w} - 2 \mathbf{w}^T \hat{b}(T_i) + \hat{c} \quad \mathrm{s.t.} \quad 0 < e_1 \mathbf{w} \le 1, \quad e_2 \mathbf{w} \ge 0. \qquad (70)
For i = 1, 2, the Hessian matrix of the objective function in (70) is \hat{H}(T_i). By the inequality
\big[ \mathrm{tr}(S - T_i)^3 \big]^2 \le \mathrm{tr}(S - T_i)^2 \, \mathrm{tr}(S - T_i)^4, \qquad (71)
we can obtain that H ^ ( T i ) is positive definite. Therefore, the optimization problem (70) is a convex quadratic program. Furthermore, we can obtain the globally optimal solution by an efficient algorithm.
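Since (70) is a two-dimensional convex quadratic program with simple bounds, any off-the-shelf solver can be used; the sketch below uses SciPy's bound-constrained L-BFGS-B as one possible choice (the open constraint w_1 > 0 is relaxed to w_1 ≥ 0 for simplicity, and the constant \hat{c} is dropped since it does not affect the minimizer). Names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(H_hat, b_hat):
    """Minimize (1/2) w'H w - 2 w'b over 0 <= w1 <= 1, w2 >= 0, cf. (70)."""
    def objective(w):
        return 0.5 * w @ H_hat @ w - 2.0 * w @ b_hat
    def gradient(w):
        return H_hat @ w - 2.0 * b_hat          # H_hat is symmetric
    res = minimize(objective, x0=np.array([0.5, 0.0]), jac=gradient,
                   bounds=[(0.0, 1.0), (0.0, None)], method="L-BFGS-B")
    return res.x                                 # (w1, w2)
```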
For the target matrices T_i, i = 1, 2, denoting the corresponding optimal tuning parameters by \mathbf{w}^i = (w_1^i, w_2^i)^T, the optimal second-order Stein-type estimators can be expressed as
\hat{\Sigma}_i = S + w_1^i (T_i - S) + w_2^i (T_i - S)^2. \qquad (72)
Remark 3.
The proposed second-order Stein-type estimators are both well conditioned. Moreover, by taking the spectral decomposition, the SCM can be expressed as
S = U^T \Delta U, \qquad (73)
where \Delta = \mathrm{diag}(\delta_1, \ldots, \delta_p) is the diagonal matrix of eigenvalues and U is the corresponding orthogonal matrix. Then, the second-order Stein-type estimator \hat{\Sigma}_1 has the following spectral decomposition:
\hat{\Sigma}_1 = U^T \tilde{\Delta} U, \qquad (74)
where \tilde{\Delta} = \mathrm{diag}(\tilde{\delta}_1, \ldots, \tilde{\delta}_p) with
\tilde{\delta}_i = \delta_i + w_1^1 (\theta - \delta_i) + w_2^1 (\theta - \delta_i)^2, \qquad (75)
and \theta = \mathrm{tr}(S)/p is the mean of the sample eigenvalues. Therefore, the proposed estimator \hat{\Sigma}_1 shrinks the sample eigenvalues by a nonlinear transformation while retaining the sample eigenvectors.
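The spectral view in Remark 3 can be checked numerically: building \hat{\Sigma}_1 directly from (72) and by transforming the sample eigenvalues as in (75) gives the same matrix. A minimal sketch with arbitrary (non-optimal) parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 60
X = rng.standard_normal((n, p))
S = X.T @ X / n
theta = np.trace(S) / p
T1 = theta * np.eye(p)
w1, w2 = 0.4, 0.05                       # illustrative values, not the optimal ones

# direct construction, Equation (72)
direct = S + w1 * (T1 - S) + w2 * (T1 - S) @ (T1 - S)

# eigenvalue transformation, Equation (75): same eigenvectors, shrunk eigenvalues
delta, V = np.linalg.eigh(S)             # S = V diag(delta) V^T (V columns are eigenvectors)
delta_tilde = delta + w1 * (theta - delta) + w2 * (theta - delta) ** 2
spectral = (V * delta_tilde) @ V.T

print(np.allclose(direct, spectral))     # True
```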

4. Numerical Simulations and Real Data Analysis

This section presents numerical simulations and two application examples to examine the performance of the proposed second-order Stein-type estimators. The proposed covariance matrix estimators for the target matrices T_1 and T_2 are denoted as QS-T1 and QS-T2, respectively. The competing estimators include the Stein-type estimator LS-T1 for T_1 in [11], the Stein-type estimator LS-T2 for T_2 in [12], and the nonlinear shrinkage estimator NS developed in [18].

4.1. MSE Performance

We assume that the actual distribution is N(0, Σ), where the covariance matrix is generated by one of the following models:
(1) Model 1: \Sigma = (\sigma_{ij})_{p \times p} with \sigma_{ii} = 1 and \sigma_{ij} = 0.1^{|i-j|} for i ≠ j;
(2) Model 2: \Sigma = (\sigma_{ij})_{p \times p} with \sigma_{ii} = \max\{50 - i, 0\} + 2 and \sigma_{ij} = 0.6 for i ≠ j.
Under Model 1, the diagonal elements are equal to 1, and the off-diagonal elements are tiny. Therefore, the covariance matrix is close to a spherical matrix. Under Model 2, the diagonal elements are dispersive, and the off-diagonal elements correspond to weak correlations. Therefore, the covariance matrix is close to a diagonal matrix. We carry out random sampling in each Monte Carlo run and compute the Frobenius loss of each covariance matrix estimator. The MSE performance of each covariance matrix estimator is evaluated by averaging the Frobenius losses of 5 × 10 3 runs.
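For concreteness, a sketch of how the two covariance models and one Gaussian Monte Carlo sample can be generated (the loop over the 5 × 10^3 runs and the competing estimators are omitted; names are illustrative):

```python
import numpy as np

def model1_cov(p):
    """Model 1: sigma_ii = 1, sigma_ij = 0.1^{|i-j|}; close to a spherical matrix."""
    idx = np.arange(p)
    return 0.1 ** np.abs(idx[:, None] - idx[None, :])

def model2_cov(p):
    """Model 2: sigma_ii = max(50 - i, 0) + 2, sigma_ij = 0.6; close to a diagonal matrix."""
    Sigma = np.full((p, p), 0.6)
    np.fill_diagonal(Sigma, np.maximum(50 - np.arange(1, p + 1), 0) + 2)
    return Sigma

def frobenius_loss(Sigma_hat, Sigma):
    """Squared Frobenius distance between an estimator and the true covariance matrix."""
    return np.sum((Sigma_hat - Sigma) ** 2)

rng = np.random.default_rng(3)
p, n = 180, 30
Sigma = model1_cov(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # one Monte Carlo sample
S = X.T @ X / n
print(np.log(frobenius_loss(S, Sigma)))                   # logarithmic loss of the SCM
```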
Figure 1 and Figure 2 report the logarithmic Frobenius loss of each estimator in large-dimensional scenarios where the dimension is 180 and the sample size varies from 10 to 100. Under Model 1, the Stein-type estimator LS-T1 and the second-order Stein-type estimator QS-T1 outperform the nonlinear shrinkage estimator NS and the shrinkage estimators LS-T2 and QS-T2, which employ the diagonal target matrix. Furthermore, the proposed second-order Stein-type estimator QS-T1 shows a significant advantage over the Stein-type estimator LS-T1, especially when the sample size is tiny. Similarly, under Model 2, the Stein-type estimator LS-T2 and the second-order Stein-type estimator QS-T2 perform better than the other three estimators. Moreover, the proposed estimator QS-T2 outperforms the corresponding Stein-type estimator LS-T2. Therefore, when the correct target matrix is employed, the second-order Stein-type estimators enjoy lower Frobenius losses than the linear and nonlinear shrinkage estimators in large-dimensional scenarios.
Figure 3 and Figure 4 report the logarithmic Frobenius loss of each estimator versus the dimension, with the sample size fixed at 30. The logarithmic Frobenius losses grow as the dimension increases. In Figure 3, the actual covariance matrix is close to spherical, and the proposed second-order Stein-type estimator QS-T1 significantly outperforms the other estimators. In Figure 4, the actual covariance matrix is close to diagonal, and the proposed second-order Stein-type estimator QS-T2 enjoys a lower Frobenius loss once the dimension exceeds 120.
The proposed second-order Stein-type estimators thus have a significant MSE advantage over the linear and nonlinear shrinkage estimators, especially when the dimension is large relative to the sample size.

4.2. Portfolio Selection

In finance, assets with higher expected returns generally involve higher risks. Therefore, investors must constantly balance the expected return and the risk tolerance. The portfolio strategy is a popular way to reduce risk and enhance return. Therefore, portfolio selection plays a vital role in asset investment.
In 1952, Markowitz introduced the famous mean-variance optimization to determine the optimal portfolio weights [31]. Let m and Σ be the expectation and covariance matrix of the daily returns. For a portfolio weight vector k, the variance of the portfolio is defined as \sigma^2 = k^T \Sigma k in the Markowitz framework. When short selling is forbidden, the Markowitz portfolio optimization is formulated as the following mean-variance problem:
\min_{k} \ k^T \Sigma k \quad \mathrm{s.t.} \quad k^T m = r, \quad k^T \mathbf{1} = 1, \quad k \ge 0, \qquad (76)
where r is a given expected return. By replacing m and Σ with their estimates \hat{m} and \hat{\Sigma}, respectively, the optimal weight k_r can be obtained with an efficient quadratic programming algorithm. It is obvious that the Markowitz optimization only depends on estimates of the first and second moments of the daily returns. The sample mean and the SCM in the classic portfolio perform well in measuring portfolio risk; however, the SCM becomes unstable when the number of stocks is large, which can result in significant losses [32]. Therefore, a well-performing covariance matrix estimator is important in modern portfolio selection [2,33].
In practice, we consider a portfolio consisting of p = 95 highly capitalized stocks from the New York stock exchange with ticker symbols AA, ABT, AIG, AIR, ALL, AMD, AP, APA, AXP, BA, BAC, BAX, BEN, BK, BMY, C, CAT, CCL, CHK, CL, COP, CPE, CVS, CVX, D, DB, DD, DE, DNR, DVN, EAT, EME, EMR, EXC, FCX, FDX, FNMA, GD, GE, GILD, GLW, HAL, HD, HIG, HON, HPQ, IBM, INTC, ITW, JNJ, JPM, KMB, KO, L, LLY, LMT, LOW, M, MCD, MDT, MMM, MO, MRK, MRO, MS, NBR, NC, NE, NEE, NL, NNN, ODC, OXM, OXY, PCG, PEP, PFE, PG, RIG, SLB, SO, T, TGT, TRV, TXN, UNH, UNP, USB, VLO, VZ, WFC, WMT, WWW, X, XOM. The dataset X contains n = 536 daily close prices from 11 November 2016 to 31 December 2018 collected via Yahoo! Finance at https://au.finance.yahoo.com/lookup?s=DATA, accessed on 10 November 2022. For each stock i, the daily close price is preprocessed as the daily return by
\tilde{X}(i, j) = \frac{X(i, j+1)}{X(i, j)} - 1, \quad j = 1, \ldots, n-1. \qquad (77)
The covariance matrix estimators LS-T1, LS-T2, NS, QS-T1, and QS-T2 are generated from the daily returns \tilde{X}. For an expected return r, the realized risk is defined as \sigma_r = k_r^T \hat{\Sigma} k_r. Next, we employ the realized risk, a key index for evaluating a portfolio, to verify the performance of the covariance matrix estimators.
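A sketch of the evaluation pipeline under the stated setup: prices are assumed to arrive as a p × n array with one row per stock, returns follow (77), the mean-variance problem (76) is solved here with SciPy's SLSQP as one possible quadratic programming routine, and the realized risk is computed from a plug-in covariance estimate. All function and variable names are illustrative, and the expected return r must be attainable for the problem to be feasible.

```python
import numpy as np
from scipy.optimize import minimize

def daily_returns(prices):
    """Equation (77): simple returns from a p x n matrix of daily close prices."""
    return prices[:, 1:] / prices[:, :-1] - 1.0

def markowitz_weights(m_hat, Sigma_hat, r):
    """Minimize k' Sigma k  s.t.  k'm = r, k'1 = 1, k >= 0, cf. (76)."""
    p = len(m_hat)
    cons = [{"type": "eq", "fun": lambda k: k @ m_hat - r},
            {"type": "eq", "fun": lambda k: k.sum() - 1.0}]
    res = minimize(lambda k: k @ Sigma_hat @ k, x0=np.full(p, 1.0 / p),
                   bounds=[(0.0, None)] * p, constraints=cons, method="SLSQP")
    return res.x

def realized_risk(k_r, Sigma_hat):
    """Realized risk of the portfolio weight k_r under the plug-in covariance estimate."""
    return k_r @ Sigma_hat @ k_r
```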
Figure 5 and Figure 6 plot the realized risks of the three kinds of shrinkage estimators for different investment horizons. For a short-term investment of 44 trading days (2 months), the proposed QS-T2 has the lowest realized risk when the expected return is less than 0.5% and the highest realized risk when the expected return exceeds 0.6%; the proposed QS-T1 and the Stein-type estimator LS-T1 have the lowest realized risks when the expected return exceeds 0.5%. For a long-term investment of 280 trading days (13 months), the relative behavior of the five estimators is similar to that in the short-term case.
Figure 7 and Figure 8 plot the realized risks of shrinkage estimators for different expected return levels. The proposed estimator QS-T2 enjoys the lowest realized risk for a low expected return. The nonlinear shrinkage estimator NS has the highest realized risk. However, the proposed estimator QS-T2 performs worst when the expected return becomes high. The proposed estimator QS-T1, together with NS and LS-T1, performs best in this scenario.
The proposed second-order Stein-type estimator QS-T2 enjoys good portfolio selection for short-term investment and prudent return cases. The proposed second-order Stein-type estimator QS-T1 is recommended in long-term investment and high-return scenarios.

4.3. Discriminant Analysis

We further examine the performance of the second-order regularized estimators in small-sample-size situations. The Parkinson's data are collected from the website https://archive-beta.ics.uci.edu/, accessed on 10 November 2022. p = 160 biomedical voice attributes are measured from n_1 patients and n_2 healthy individuals. Let \hat{\Sigma} be the pooled estimator based on a certain estimation strategy. We use the quadratic discriminant rule M = (x_i - \bar{x})^T \hat{\Sigma}^{-1} (x_i - \bar{x}) to make a diagnosis for each x_i, where M denotes the Mahalanobis distance between the individual x_i and a sample center \bar{x}. The individual x_i is classified as a Parkinson's patient if x_i is closer to the sample center of the patients in the sense of Mahalanobis distance.
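A sketch of the discriminant rule under the stated setup: \hat{\Sigma} is a pooled covariance estimate (any of the estimators above could be plugged in), and an individual is assigned to the class whose sample center is closer in Mahalanobis distance. Names are illustrative.

```python
import numpy as np

def classify(x, center_patient, center_healthy, Sigma_hat):
    """Assign x to the class whose sample center is closer in Mahalanobis distance."""
    P = np.linalg.inv(Sigma_hat)                 # precision matrix of the pooled estimator
    d_pat = (x - center_patient) @ P @ (x - center_patient)
    d_hea = (x - center_healthy) @ P @ (x - center_healthy)
    return "patient" if d_pat < d_hea else "healthy"
```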
Table 1 reports the classification accuracy rates of the different estimators. The Stein-type estimators LS-T1, LS-T2, QS-T1, and QS-T2 perform better than NS, and their accuracy rates become larger as the sample size increases. Moreover, the proposed second-order Stein-type estimator QS-T2 enjoys the largest accuracy rate when n_2 ≤ 40, and the Stein-type estimator LS-T2 has the best performance when n_2 ≥ 45. Therefore, the proposed second-order Stein-type estimation performs better than the classic Stein-type estimation in this application.

5. Conclusions and Discussion

This paper investigated the problem of estimating a large-dimensional covariance matrix. Motivated by Stein's strategy, we developed a novel strategy named second-order Stein-type estimation, in which the estimator takes the form of a quadratic polynomial in the SCM and the target matrix and is kept positive definite. Firstly, we specified the spherical and diagonal targets in the second-order Stein-type regularization, and the mean squared errors were obtained for the two targets. Secondly, we unbiasedly estimated the two mean squared errors under the Gaussian distribution. Thirdly, the optimal parameters were obtained by solving the convex quadratic programs, yielding the optimal second-order Stein-type estimators for the two target matrices. Finally, we verified the performance of the proposed estimators in numerical simulations and real data applications.
It is worth mentioning that the second-order Stein-type estimators were proposed under Gaussian distribution. In practical applications, the data may often deviate from the Gaussian distribution. Therefore, the problem of investigating the second-order Stein-type regularization under non-Gaussian distributions remains open and important.

Author Contributions

Conceptualization, B.Z.; methodology, B.Z.; software, B.Z.; validation, B.Z.; formal analysis, B.Z.; investigation, H.H.; resources B.Z.; data curation, B.Z.; writing—original draft preparation, B.Z.; writing—review and editing, H.H. and J.C.; visualization, B.Z.; supervision, B.Z.; project administration, B.Z. and H.H.; funding acquisition, B.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Guangxi Science and Technology Planning Project (2022AC21276), Science and Technology Project of Guangxi Guike (AD21220114), National Natural Science Foundation of China (Grant Nos. 11861071 and 12261011), and Beijing Institute of Technology Research Fund Program for Young Scholars.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data included in this study are available upon request by contact with the first author.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. Fan, J.; Liao, Y.; Mincheva, M. Large covariance estimation by thresholding principal orthogonal complements. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2013, 75, 603–680.
  2. Bodnar, T.; Dmytriv, S.; Okhrin, Y.; Parolya, N.; Schmid, W. Statistical inference for the expected utility portfolio in high dimensions. IEEE Trans. Signal Process. 2021, 69.
  3. Vershynin, R. How close is the sample covariance matrix to the actual covariance matrix? J. Theor. Probab. 2012, 25, 655–686.
  4. Cai, T.T.; Han, X.; Pan, G. Limiting laws for divergent spiked eigenvalues and largest nonspiked eigenvalue of sample covariance matrices. Ann. Stat. 2020, 48, 1255–1280.
  5. Wu, W.; Pourahmadi, M. Banding sample autocovariance matrices of stationary processes. Stat. Sin. 2009, 19, 1755–1768.
  6. Fan, J.; Liu, H.; Wang, W. Large covariance estimation through elliptical factor models. Ann. Stat. 2018, 46, 1383–1414.
  7. Cai, T.T.; Yuan, M. Adaptive covariance matrix estimation through block thresholding. Ann. Stat. 2012, 40, 2014–2042.
  8. Cao, Y.; Lin, W.; Li, H. Large covariance estimation for compositional data via composition-adjusted thresholding. J. Am. Stat. Assoc. 2019, 114, 759–772.
  9. Bodnar, O.; Bodnar, T.; Parolya, N. Recent advances in shrinkage-based high-dimensional inference. J. Multivar. Anal. 2021, 188, 104826.
  10. Raninen, E.; Ollila, E. Coupled regularized sample covariance matrix estimator for multiple classes. IEEE Trans. Signal Process. 2021, 69, 5681–5692.
  11. Chen, Y.; Wiesel, A.; Eldar, Y.C.; Hero, A.O. Shrinkage algorithms for MMSE covariance estimation. IEEE Trans. Signal Process. 2010, 58, 5016–5029.
  12. Fisher, T.J.; Sun, X. Improved Stein-type shrinkage estimators for the high-dimensional multivariate normal covariance matrix. Comput. Stat. Data Anal. 2011, 55, 1909–1918.
  13. Hannart, A.; Naveau, P. Estimating high dimensional covariance matrices: A new look at the Gaussian conjugate framework. J. Multivar. Anal. 2014, 131, 149–162.
  14. Liu, Y.; Sun, X.; Zhao, S. A covariance matrix shrinkage method with Toeplitz rectified target for DOA estimation under the uniform linear array. Int. J. Electron. Commun. (AEÜ) 2017, 81, 50–55.
  15. Lancewicki, T.; Aladjem, M. Multi-target shrinkage estimation for covariance matrices. IEEE Trans. Signal Process. 2014, 62, 6380–6390.
  16. Tong, J.; Hu, R.; Xi, J.; Xiao, Z.; Guo, Q.; Yu, Y. Linear shrinkage estimation of covariance matrices using low-complexity cross-validation. Signal Process. 2018, 148, 223–233.
  17. Yuasa, R.; Kubokawa, T. Ridge-type linear shrinkage estimation of the mean matrix of a high-dimensional normal distribution. J. Multivar. Anal. 2020, 178, 104608.
  18. Ledoit, O.; Wolf, M. Analytical nonlinear shrinkage of large-dimensional covariance matrices. Ann. Stat. 2020, 48, 3043–3065.
  19. Mestre, X. On the asymptotic behavior of the sample estimates of eigenvalues and eigenvectors of covariance matrices. IEEE Trans. Signal Process. 2008, 56, 5353–5368.
  20. Ledoit, O.; Wolf, M. A well-conditioned estimator for large-dimensional covariance matrices. J. Multivar. Anal. 2004, 88, 365–411.
  21. Ikeda, Y.; Kubokawa, T.; Srivastava, M.S. Comparison of linear shrinkage estimators of a large covariance matrix in normal and non-normal distributions. Comput. Stat. Data Anal. 2016, 95, 95–108.
  22. Cabana, E.; Lillo, R.E.; Laniado, H. Multivariate outlier detection based on a robust Mahalanobis distance with shrinkage estimators. Stat. Pap. 2021, 62, 1583–1609.
  23. Ledoit, O.; Wolf, M. Shrinkage estimation of large covariance matrices: Keep it simple, statistician? J. Multivar. Anal. 2021, 186, 104796.
  24. Tanaka, M.; Nakata, K. Positive definite matrix approximation with condition number constraint. Optim. Lett. 2014, 8, 939–947.
  25. Gupta, A.K.; Nagar, D.K. Matrix Variate Distributions; Chapman & Hall/CRC: Boca Raton, FL, USA, 2000.
  26. Srivastava, M.S. Some tests concerning the covariance matrix in high dimensional data. J. Jpn. Stat. Soc. 2005, 35, 251–272.
  27. Li, J.; Zhou, J.; Zhang, B. Estimation of large covariance matrices by shrinking to structured target in normal and non-normal distributions. IEEE Access 2018, 6, 2158–2169.
  28. Fisher, T.J.; Sun, X.; Gallagher, C.M. A new test for sphericity of the covariance matrix for high dimensional data. J. Multivar. Anal. 2010, 101, 2554–2570.
  29. Lehmann, E.L. Elements of Large-Sample Theory; Springer: New York, NY, USA, 1999.
  30. Bai, Z.; Silverstein, J.W. Spectral Analysis of Large Dimensional Random Matrices; Springer: New York, NY, USA, 2010.
  31. Markowitz, H. Portfolio selection. J. Financ. 1952, 7, 77–91.
  32. Pantaleo, E.; Tumminello, M.; Lillo, F.; Mantegna, R. When do improved covariance matrix estimators enhance portfolio optimization? An empirical comparative study of nine estimators. Quant. Financ. 2011, 11, 1067–1080.
  33. Joo, Y.C.; Park, S.Y. Optimal portfolio selection using a simple double-shrinkage selection rule. Financ. Res. Lett. 2021, 43, 102019.
Figure 1. The logarithmic Frobenius losses of shrinkage estimators under Model 1 with p = 180.
Figure 2. The logarithmic Frobenius losses of shrinkage estimators under Model 2 with p = 180.
Figure 3. The logarithmic Frobenius losses of shrinkage estimators under Model 1 with n = 30.
Figure 4. The logarithmic Frobenius losses of shrinkage estimators under Model 2 with n = 30.
Figure 5. Realized risk of each covariance matrix estimator when the investment horizon contains 44 trading days.
Figure 6. Realized risk of each covariance matrix estimator when the investment horizon contains 280 trading days.
Figure 7. Realized risk of each covariance matrix estimator when the expected return is 0.2%.
Figure 8. Realized risk of each covariance matrix estimator when the expected return is 0.8%.
Table 1. The classification accuracy rate of Parkinson’s data.
n_2     15      20      25      30      35      40      45      50
LS-T1   0.5165  0.5530  0.5646  0.5826  0.5870  0.5956  0.5981  0.6054
LS-T2   0.6262  0.6459  0.6607  0.6714  0.6796  0.6892  0.6918  0.6971
NS      0.4931  0.4842  0.4701  0.4688  0.4544  0.4458  0.4285  0.4304
QS-T1   0.5124  0.5465  0.5600  0.5761  0.5833  0.5938  0.5953  0.6025
QS-T2   0.6336  0.6539  0.6678  0.6747  0.6840  0.6902  0.6916  0.6957
