Article

Sparse Index Tracking Portfolio with Sector Neutrality

School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2645; https://doi.org/10.3390/math10152645
Submission received: 4 July 2022 / Revised: 22 July 2022 / Accepted: 25 July 2022 / Published: 28 July 2022

Abstract

As a popular passive investment strategy, sparse index tracking has advantages over full index replication because of higher liquidity and lower transaction costs. Sparsity and nonnegativity constraints are usually imposed when constructing portfolios for sparse index tracking. However, none of the existing studies considered the sector risk exposure of such portfolios: prices of stocks in one sector may fall at the same time due to sudden policy changes or unexpected events that affect the whole sector. Sector neutrality is therefore critical when building a sparse index tracking portfolio, if full replication is not used. Statistically, sparse index tracking is a constrained variable selection problem; however, the constrained variable selection procedure using the Lasso fails to produce a sparse portfolio under sector neutrality constraints. In this paper, we propose a high-dimensional constrained variable selection method using the truncated $L_1$ penalty (TLP) for building index tracking portfolios under sparsity, sector neutrality and nonnegativity constraints. Selection consistency is established for the proposed method, and the asymptotic distribution is obtained for the sparse portfolio weight estimator. We also develop an efficient iterative algorithm for the weight estimation. We examine the performance of the proposed methodology through simulations and an application to the CSI 300 index of China. The results demonstrate the validity and advantages of our methodology.

1. Introduction

Despite their various types and goals, portfolio management strategies can be classified into two main groups according to style: active and passive strategies. The former attempt to beat the market by exploiting market inefficiencies, whereas the latter prefer to follow the market. However, the majority of actively managed funds do not outperform the market in the long run [1]. By contrast, passive funds provide market-level returns without taking active risk, and they have become more popular in recent years. According to Morningstar's latest fund flows report, U.S. equity index fund assets have surpassed the assets of their actively managed counterparts for the first time. This trend is also growing in other parts of the world.
The most commonly used passive investing strategy is index tracking, which aims to mimic the performance of a specified basket of underlying assets. Without exposure to active risk, an index tracking fund needs to follow the target index as closely as possible. The performance of an index tracker is measured by the tracking error [2], defined as the divergence between the index tracker and the target index. The tracking error gives investors a sense of how tightly the index tracker follows the index: a smaller tracking error means less exposure to active risk.
A straightforward way to construct an index tracking portfolio is full replication, which buys appropriate quantities of all assets composing the index. Theoretically, the performance of a full replication portfolio should match that of the target index perfectly. However, full replication portfolios often incur higher transaction and administration costs than portfolios of relatively fewer stocks. An alternative to full replication is sparse index tracking, which replicates the performance of an index by holding a representative sample of the securities in the index [3,4,5]. The challenge of building a sparse index tracking portfolio lies in the trade-off between transaction costs and tracking efficiency: the portfolio manager attempts to hold as few securities as possible to reduce transaction costs and avoid potentially illiquid stocks, while at the same time curbing the tracking error to maintain tracking efficiency. Sparse index portfolios are often constructed via portfolio optimization. Such an algorithm-driven portfolio aims to minimize the tracking error in both training and testing samples. Specifically, index tracking optimization needs to incorporate certain constraints, such as no short selling and balance among industrial sectors. The latter constraint aims at taming the sector risk of sparse portfolios, namely the risk that a set of factors particular to one sector drags down the sector's overall performance. Upon the occurrence of events affecting an entire sector, such as sudden policy changes or technological innovation, the stock prices of many companies in that sector may undergo drastic changes simultaneously. An unbalanced allocation of assets among different sectors can thus cause a portfolio to fail at index tracking. In Table 1, we present several periods when sector risk appeared for the China CSI 300 Index.
The table shows the performance of the index and of the sector gainers and losers during the selected periods. The large divergence between the index and the gainers or losers indicates that inappropriate exposure to sector risk can inflate the tracking error and undermine portfolio performance. To manage sector risk in index tracking, it is necessary to build a sector-neutral portfolio, in which the total weight of the stocks within each sector remains the same as in the index. In particular, a sector-neutral strategy neither overweights nor underweights any given sector relative to its weight in the index, so that the performance of the portfolio remains stable and is not affected by style switches in the market. A sector-neutral portfolio can therefore closely track the performance of a benchmark index, and it is essentially a passive investment strategy.
To overcome the drawbacks of full replication and accommodate the sector risk of traditional sparse portfolios, we propose a novel method to construct sparse index tracking portfolios, assuming that short positions are forbidden and no cash is held in the constructed portfolios. We formulate the sparse index tracking problem with sector neutrality as the minimization of the tracking error under constraints of nonnegativity, sparsity and sector neutrality.

2. Literature Review

There is a vast amount of research on quantitative methods for index tracking. Ref. [6] studied the traditional Markowitz asset allocation problem and proposed an algorithm for designing a sparse index fund. This algorithm yields a locally optimal index portfolio with a fixed number of securities. Ref. [7] investigated the relation between error measures in statistical tracking and asset allocation restrictions expressed as admissible weight ranges. It addressed the relationship between tracking errors caused by active portfolio management and given tactical asset allocation ranges. Ref. [8] imposed the no-short-sale constraint on the Markowitz mean-variance optimization and gave an insightful explanation and demonstration of why the constraints help. Ref. [9] considered maximizing the index fund’s tracking accuracy by rebalancing the composition of the index fund’s tracking portfolio in response to new market information and cash deposits and withdrawals from investors. Ref. [10] investigated the inclusion of portfolio liquidity constraints for the construction of index tracking portfolios and proposed two liquidity modelling approaches for index tracking.
Many statistical regularization methods have recently been applied to the index tracking problem. In general, a sparse index tracking portfolio can be constructed via variable selection, which removes non-informative features and yields sparse models, especially for high-dimensional data, and consequently facilitates inference, interpretability and prediction. Ref. [11] formulated sparse index tracking as an optimization problem that minimizes the tracking error subject to the number of selected assets being at most a preset threshold. Ref. [12] formulated index tracking as a regression problem whose objective was to minimize the tracking error, added an $L_0$ penalty on the weights corresponding to the amount invested in each stock, and solved the optimization problem with stochastic neural networks. Ref. [13] investigated the applications of sparse auto-encoder deep-learning architectures with $L_1$ regularization in selecting representative stocks from the index constituents. Ref. [14] analyzed the effect of constraints on covariance regularization for index tracking, and developed an $L_1$- and $L_2$-norm constrained minimum-variance portfolio. Ref. [15] reformulated the classical Markowitz mean-variance framework as a constrained least-squares regression problem and added a penalty function to construct a sparse index tracking portfolio. They emphasized that adding an $L_1$ penalty on the weights to the objective function may bring several advantages, including promoting sparsity and stabilizing the out-of-sample performance. Ref. [16] offered deep mathematical insights into utility approximations with the gross-exposure constraint and gave a theoretical justification for the empirical results of [8]. These approaches solve index tracking problems by constraining portfolio norms, for example, the $L_1$ norm as implied in the Lasso and the $L_2$ norm as imposed in ridge regression.
In addition, many other variable selection methods introduced in various fields can be applied to index tracking problems as well. The truncated $L_1$ penalty (TLP) [17] has advantages over the Lasso [18], which gives biased parameter estimates and possibly inconsistent variable selection. Intuitively, unlike the Lasso, which penalizes all variables, TLP does not penalize variables of large values, and thus enables us to incorporate more complicated constraints into the optimization, as in Section 3. We asserted in Section 1 that the novelty of our method lies in the construction of a sparse index tracking portfolio that controls exposure to sector risk. Sector neutrality can be achieved by constrained variable selection procedures. The constrained Lasso [19] introduced additional equality and inequality constraints into the traditional Lasso method for asset allocation, generalizing the work of [16]. However, with the constrained Lasso, the nonnegativity and sector neutrality constraints nullify the Lasso penalty and cause the method to fail to deliver sparsity; more details are discussed in Section 3. Ref. [20] followed this line of work and added a ridge term to the objective function of the constrained Lasso so that it can work in high-dimensional cases where sparsity is necessary.
Computationally, solving a constrained variable selection problem is not an easy task, especially when complex penalty functions and constraints are used. The alternating direction method of multipliers (ADMM) [21] is an efficient and scalable algorithm for convex optimization. Ref. [22] exploited the structure of the distributed optimization framework and illustrated it via applications such as $L_1$ mean and variance filtering. The idea of the ADMM algorithm is to break the original optimization into iterations of easier problems. However, optimization with the TLP term is not a convex problem. To solve this nonconvex problem, ref. [17] presented an efficient algorithm based on a difference of convex (DC) decomposition [23] and a coordinate descent (CD) algorithm [24,25]. The DC method decomposes the nonconvex constraint into a difference of two convex functions to produce a sequence of approximating convex constraints; thus, the original nonconvex optimization is broken down into a series of convex optimization problems. To minimize a multivariate function, the CD algorithm iteratively minimizes the objective function with respect to each coordinate direction until convergence, which is efficient for large-scale convex optimization problems.
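The DC idea can be made concrete with the TLP penalty [17]: $\min(|z|/\tau, 1) = |z|/\tau - \max(|z|/\tau - 1, 0)$, i.e., a convex function minus another convex function. A quick numerical check of this identity (a Python sketch with an illustrative value of $\tau$):

```python
import numpy as np

tau = 0.01  # illustrative tuning parameter

def tlp(z):
    """Truncated L1 penalty min(|z|/tau, 1)."""
    return np.minimum(np.abs(z) / tau, 1.0)

def dc_form(z):
    """DC decomposition: convex |z|/tau minus convex max(|z|/tau - 1, 0)."""
    return np.abs(z) / tau - np.maximum(np.abs(z) / tau - 1.0, 0.0)

z = np.linspace(-0.05, 0.05, 101)
print(np.max(np.abs(tlp(z) - dc_form(z))))  # ~0 up to floating point
```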
In this paper, we propose a novel method to solve the sparse index tracking problem with sector neutrality. An error bound for variable selection is obtained for the method, and then variable selection consistency and asymptotic distribution are established for effective inference. An efficient minimization algorithm is developed by combining the ADMM algorithm, DC decomposition and CD algorithm. The new procedure is tested via numerical simulations under different data generation settings. An application is given to index tracking in the Chinese stock market. Both the numerical experiments and application confirm the good performance of our method in general.
This paper is organized as follows. Section 3 formulates the sparse index tracking with sector neutrality as an optimization problem under constraints. Section 4 discusses the theory for high-dimensional constrained variable selection. Section 5 describes the algorithm for the optimization problem. Section 6 presents the results of the simulated experiments. Section 7 shows the application of the proposed method to index tracking portfolio construction. Summary and discussion are given in Section 8. Technical proofs and some tables are relegated to the Appendix A.

3. Methodology

From a statistical point of view, index tracking is a linear regression problem:
$$Y = X\beta + \epsilon, \qquad (1)$$
with response $Y \in \mathbb{R}^{n \times 1}$ and covariates $X \in \mathbb{R}^{n \times p}$, where $n$ is the sample size and $p$ is the dimension of the covariates. $\beta \in \mathbb{R}^{p}$ is the coefficient vector of the covariates $X$, and $\epsilon$ is the error term; we assume $\epsilon \sim N(0, \sigma^2 I)$, where $I$ is the identity matrix. In index tracking, $Y$ and $X$ represent the returns of an index and its constituents, respectively, and $\beta$ is the weight vector of the index's constituents.
Suppose that there are $q$ sectors. We rewrite the covariates as $X = (X_1, \ldots, X_q)$, where $X_i = (X_{i,1}, \ldots, X_{i,g_i})$ contains the covariates in the $i$-th sector and $g_i$ is the number of stocks in the $i$-th sector, $i \in \{1, \ldots, q\}$. Accordingly, we write $\beta = (\beta_1, \ldots, \beta_q)$ with $\beta_i = (\beta_{i,1}, \ldots, \beta_{i,g_i})$.
For sparse index tracking, three categories of constraints are under consideration:
(1) the sparsity constraint: $\sum_{i=1}^{q} \sum_{j=1}^{g_i} J(\beta_{i,j}) \le K$;
(2) the sector neutrality constraint: $\sum_{j=1}^{g_i} \beta_{i,j} = \omega_i$, for $i \in \{1, \ldots, q\}$;
(3) the nonnegativity constraint: $\beta_{i,j} \ge 0$, for $i \in \{1, \ldots, q\}$, $j \in \{1, \ldots, g_i\}$.
Here the $\omega_i$ are constants satisfying $\sum_{i=1}^{q} \omega_i = 1$. Each $\omega_i$ is the sum of the original constituent weights in the $i$-th sector. These original non-sparse weights are pre-specified and hence known in advance from the definition of the index.
The penalty function $J(\cdot)$ needs to be chosen carefully. The Lasso penalty is widely used in variable selection problems because of its ease of computation. Nevertheless, it turns out that the Lasso penalty fails to yield a sparse portfolio under the sector neutrality and nonnegativity constraints for index tracking, as explained below. Under the nonnegativity constraint, the Lasso penalty $\sum_{i=1}^{q} \sum_{j=1}^{g_i} |\beta_{i,j}|$ becomes $\sum_{i=1}^{q} \sum_{j=1}^{g_i} \beta_{i,j}$, while the sector neutrality constraint makes this sum a constant: $\sum_{i=1}^{q} \sum_{j=1}^{g_i} \beta_{i,j} = \sum_{i=1}^{q} \omega_i = 1$. Therefore, the Lasso penalty becomes invalid and imposes no penalty on the portfolio size.
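This nullification can be checked numerically. The Python sketch below (with hypothetical sector totals `omega` and sector sizes) draws random portfolios satisfying the nonnegativity and sector neutrality constraints and verifies that their $L_1$ norm is identically 1, so the penalty cannot distinguish sparse portfolios from dense ones:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = np.array([0.3, 0.5, 0.2])   # hypothetical per-sector totals, summing to 1
sizes = [4, 6, 5]                   # hypothetical numbers of stocks per sector (g_i)

def random_feasible_beta():
    """Draw nonnegative weights that satisfy the sector neutrality constraint."""
    parts = []
    for w, g in zip(omega, sizes):
        u = rng.random(g)
        parts.append(w * u / u.sum())   # nonnegative, sums to omega_i
    return np.concatenate(parts)

# The L1 penalty is identical for every feasible portfolio, sparse or not.
l1_norms = [np.abs(random_feasible_beta()).sum() for _ in range(100)]
print(max(l1_norms) - min(l1_norms))   # ~0: the penalty cannot discriminate
```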
To solve this problem, we take $J(\cdot)$ to be the truncated $L_1$ penalty (TLP) function [17], which achieves sparse portfolio selection under the sector neutrality and nonnegativity constraints. TLP can be thought of as a truncated version of the $L_1$ penalty approximating the $L_0$ function. As a piecewise linear function, TLP permits efficient computation and adaptive variable selection. TLP is defined as
$$J(|z|) = \min\left(\frac{|z|}{\tau}, 1\right),$$
where $\tau > 0$ is a tuning parameter controlling the degree of approximation. With the TLP function, the estimate of the coefficients $\beta$ is obtained by the minimization
$$\min_{\beta} \; \mathrm{TE}(\beta) + \lambda \sum_{i=1}^{q} \sum_{j=1}^{g_i} J(|\beta_{i,j}|), \qquad (2)$$
where $\mathrm{TE}(\beta)$ is the tracking error. This is the dual problem corresponding to the constrained primal problem
$$\min_{\beta} \; \mathrm{TE}(\beta), \quad \text{subject to} \quad \sum_{i=1}^{q} \sum_{j=1}^{g_i} J(|\beta_{i,j}|) \le K.$$
The dual problem is not equivalent to the constrained primal problem in general. However, with the TLP as the penalty function, the two optimization problems are equivalent [17]. In the remainder of the paper, we will consider only the unconstrained dual problem for its computational advantages. Minimization of (2) reduces to a general weighted Lasso problem, which can be solved by many efficient algorithms [18,26,27].
The tracking error $\mathrm{TE}(\beta)$ can assume many forms [28]; for example, the empirical tracking error (ETE) is defined as
$$\mathrm{TE}(\beta) = \frac{1}{n} \|Y - X\beta\|_2^2. \qquad (3)$$
With the TLP and the empirical tracking error, we can rewrite (2) as
$$\min_{\beta} \; f(\beta) = \frac{1}{n} \|Y - X\beta\|_2^2 + \lambda \sum_{i=1}^{q} \sum_{j=1}^{g_i} \min\left\{\frac{|\beta_{i,j}|}{\tau}, 1\right\}, \qquad (4)$$
$$\text{subject to:} \quad \sum_{j=1}^{g_i} \beta_{i,j} = \omega_i, \; i \in \{1, \ldots, q\}; \qquad \beta_{i,j} \ge 0, \; i \in \{1, \ldots, q\}, \; j \in \{1, \ldots, g_i\}.$$
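For concreteness, the TLP penalty and the unconstrained part of the objective in (4) can be sketched in Python as follows; handling of the sector neutrality and nonnegativity constraints is deferred to the algorithm in Section 5:

```python
import numpy as np

def tlp(z, tau):
    """Truncated L1 penalty J(|z|) = min(|z|/tau, 1), applied elementwise."""
    return np.minimum(np.abs(z) / tau, 1.0)

def objective(beta, X, Y, lam, tau):
    """Empirical tracking error plus the TLP penalty, as in (4).
    The constraints are enforced separately (e.g., by the ADMM projection)."""
    n = len(Y)
    te = np.sum((Y - X @ beta) ** 2) / n
    return te + lam * tlp(beta, tau).sum()
```

For example, with zero weights the objective reduces to the mean squared index return plus no penalty, since $J(0) = 0$.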

4. Theory

Let us first introduce some notation. The true value of $\beta$ is denoted by $\beta^*$. Without loss of generality, we assume that the last component $\beta^*_{i,g_i}$ of $\beta^*_i$ is nonzero for $i \in \{1, \ldots, q\}$. By virtue of the sector neutrality constraint, we can solve for the last component $\beta_{i,g_i}$ in each sector:
$$\beta_{i,g_i} = \omega_i - \sum_{j=1}^{g_i - 1} \beta_{i,j}, \quad i \in \{1, \ldots, q\}. \qquad (5)$$
We write
$$\beta_q = (\beta_{1,1}, \ldots, \beta_{1,g_1-1}, \ldots, \beta_{q,1}, \ldots, \beta_{q,g_q-1}),$$
$$X_q = (X_{1,1}, \ldots, X_{1,g_1-1}, \ldots, X_{q,1}, \ldots, X_{q,g_q-1}),$$
$$X_G = (X_{1,1} - X_{1,g_1}, \ldots, X_{1,g_1-1} - X_{1,g_1}, \ldots, X_{q,1} - X_{q,g_q}, \ldots, X_{q,g_q-1} - X_{q,g_q}).$$
For any subscript set $A \subseteq \{(i,j) : i \in \{1, \ldots, q\}, j \in \{1, \ldots, g_i - 1\}\}$, we use $X_A$ to denote the matrix consisting of the columns of $X_G$ with subscripts in $A$; likewise, $\beta_A$ comprises all the elements of $\beta_q$ with subscripts in $A$.
Let
$$A^* = \{(i,j) : \beta^*_{i,j} > 0, \; i \in \{1, \ldots, q\}, \; j \in \{1, \ldots, g_i - 1\}\}$$
be the set of subscripts of all nonzero true coefficients in $\beta^*_q$. We denote the cardinality of $A^*$ by $p_0 = |A^*|$, and the minimal value of the nonzero true coefficients by $\gamma_{\min} = \min\{\beta^*_{i,j} : \beta^*_{i,j} > 0, \; i \in \{1, \ldots, q\}, \; j \in \{1, \ldots, g_i\}\}$.
By (5), the regression Equation (1) can be rewritten as
$$Y - \sum_{i=1}^{q} \omega_i X_{i,g_i} = X_q \beta^*_q - \sum_{i=1}^{q} X_{i,g_i} \sum_{j=1}^{g_i - 1} \beta^*_{i,j} + \epsilon = X_G \beta^*_q + \epsilon = X_{A^*} \beta^*_{A^*} + \epsilon. \qquad (6)$$
Define
$$\hat{\beta}^{\,ols}_{A^*} = (X_{A^*}^\top X_{A^*})^{-1} X_{A^*}^\top \Big( Y - \sum_{i=1}^{q} \omega_i X_{i,g_i} \Big). \qquad (7)$$
We expand $\hat{\beta}^{\,ols}_{A^*}$ to the oracle estimator $\hat{\beta}^{\,ols}$ by setting $\hat{\beta}^{\,ols}_{i,j} = 0$ if $\beta^*_{i,j} = 0$, and obtaining $\hat{\beta}^{\,ols}_{i,g_i}$ from Equation (5).
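Given the true support, the oracle estimator above is simply an ordinary least-squares fit of the adjusted response on the differenced design. A minimal Python sketch, where `X_diff_support` and `y_adj` are hypothetical inputs holding the columns $X_{i,j} - X_{i,g_i}$ for $(i,j) \in A^*$ and the adjusted response $Y - \sum_i \omega_i X_{i,g_i}$:

```python
import numpy as np

def oracle_ols(X_diff_support, y_adj):
    """OLS coefficients on the differenced design restricted to the true support.
    X_diff_support: matrix of columns X_{i,j} - X_{i,g_i} for (i,j) in A*;
    y_adj: adjusted response Y - sum_i omega_i * X_{i,g_i}."""
    G = X_diff_support.T @ X_diff_support        # Gram matrix X_{A*}' X_{A*}
    return np.linalg.solve(G, X_diff_support.T @ y_adj)
```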
Let $\hat{\beta}^{\,tlp}$ be the global minimizer of (4), and let
$$\hat{A} = \{(i,j) : \hat{\beta}^{\,tlp}_{i,j} > 0, \; i \in \{1, \ldots, q\}, \; j \in \{1, \ldots, g_i - 1\}\}$$
be the estimated set of nonzero coefficients.
We now give two key assumptions used to establish the theoretical results on the error bound, the asymptotic distribution of the coefficient estimator, and model selection consistency.
Assumption 1.
For some constants $d_0 > 0$ and $\alpha > 1$, $C_{\min} \ge d_0 \sigma^2 \frac{\log p}{n}$, where
$$C_{\min} = \inf_{\beta_A : \, |A| \le \alpha |A^*|, \, A \ne A^*} \; \frac{\|X_{A^*} \beta^*_{A^*} - X_A \beta_A\|_2^2}{\max(|A^* \setminus A|, 1)}.$$
Assumption 2.
For some constants $d_1 > 0$ and $d_2 > 0$,
$$d_1 \le \eta_{\min}\Big(\frac{1}{n} X^\top X\Big) \le \eta_{\max}\Big(\frac{1}{n} X^\top X\Big) \le d_2,$$
where $\eta_{\min}(\cdot)$ and $\eta_{\max}(\cdot)$ denote the minimum and maximum eigenvalues of a matrix, respectively.
Theorem 1.
Assume that Assumption 1 holds and $0 < \tau \le \lambda (n+1) \eta_{\max}\big(\frac{1}{n} X^\top X\big)$. Then $\Pr[\hat{\beta}^{\,tlp} \ne \hat{\beta}^{\,ols}]$ is upper bounded by
$$\begin{aligned}
&\min\Bigg\{ \frac{2 p_0 n^{1/2} \tau}{\sigma \sqrt{\pi}} \, \eta_{\min}^{1/2}\Big(\tfrac{1}{n} X_{A^*}^\top X_{A^*}\Big) \exp\Big(-\frac{n (\gamma_{\min} - \tau)^2}{2 \sigma^2} \, \eta_{\min}\Big(\tfrac{1}{n} X_{A^*}^\top X_{A^*}\Big)\Big), \;
p_0 \, \Phi\Big(-\frac{n^{1/2} (\gamma_{\min} - \tau)}{\sigma} \, \eta_{\min}^{1/2}\Big(\tfrac{1}{n} X_{A^*}^\top X_{A^*}\Big)\Big) \Bigg\} \\
&\quad + 4 \exp\Big(-\frac{n C_{\min}}{20 \sigma^2} + (\alpha + 1) \log(p+1) - \frac{n \lambda}{4 \sigma^2}\Big)
+ 4 \exp\Big(-\frac{(\alpha - 1) n \lambda}{6 \alpha \sigma^2} \Big(1 + \frac{1}{\alpha}\Big) \Big(\log(p+1) - \frac{5}{3}\Big)\Big) \\
&\quad + p_0 \, \Phi\Big(-\frac{n^{1/2} (\gamma_{\min} - \tau)}{\sigma} \, \eta_{\min}^{1/2}\Big(\tfrac{1}{n} X_{A^*}^\top X_{A^*}\Big)\Big)
+ q \, \Phi\Big(-\frac{n^{1/2} (\gamma_{\min} - \tau)}{p_0^{1/2} \sigma} \, \eta_{\min}^{1/2}\Big(\tfrac{1}{n} X_{A^*}^\top X_{A^*}\Big)\Big),
\end{aligned}$$
where $\Phi(\cdot)$ is the distribution function of $N(0,1)$.
Theorem 2.
Assume there are constants $a_\gamma$, $a_C$, $a_\lambda$, $a_{p_0}$ and $a_{lp}$ satisfying
$$a_{p_0} \ge 0, \quad a_{lp} \ge 0, \quad a_{p_0} < a_\gamma, \quad a_\lambda < a_\gamma, \quad a_{lp} - 1 < a_\lambda, \quad a_\lambda < a_C,$$
and $\gamma_{\min} \asymp n^{\frac{a_\gamma - 1}{2}}$, $C_{\min} \asymp n^{-a_C}$, $\lambda \asymp n^{-a_\lambda}$, $p_0 \asymp n^{a_{p_0}}$, $\log(p) \asymp n^{a_{lp}}$, where for two sequences $\{h_n\}$ and $\{g_n\}$ we write $h_n \asymp g_n$ if both $h_n / g_n$ and $g_n / h_n$ are bounded. Also assume that the conditions of Theorem 1 and Assumption 2 hold. Then
(A) 
Selection consistency:
$$\Pr[\hat{A} \ne A^*] \le \Pr[\hat{\beta}^{\,tlp} \ne \hat{\beta}^{\,ols}] \to 0, \quad \text{as } n, p \to \infty.$$
(B) 
Asymptotic distribution: Let $F_n(t)$ denote the distribution function of
$$\sqrt{n} \, \Big(\frac{X_{A^*}^\top X_{A^*}}{n}\Big)^{\frac{1}{2}} \big(\hat{\beta}^{\,tlp}_{A^*} - \beta^*_{A^*}\big),$$
and $\Phi_{p_0}(t)$ the distribution function of the $p_0$-dimensional standard multivariate normal distribution. We have
$$\lim_{n \to \infty} \sup_{t \in \mathbb{R}^{p_0}} \big| F_n(t) - \Phi_{p_0}(t) \big| = 0.$$
Remark 1.
Theorems 1 and 2 stabilize the index tracking problem: they justify imposing nonnegativity and sector neutrality when constructing sparse index tracking portfolios in high-dimensional cases, and they allow practical and empirical work with only a moderate amount of training data. The proofs of Theorems 1 and 2 are given in Appendix A.1.

5. Computation

The minimization of (4) with TLP can be treated as a sequence of weighted Lasso problems [17] and can be solved iteratively. However, the sector neutrality constraint and the nonnegativity constraint in (4) are not easy to handle directly. Fortunately, the alternating direction method of multipliers (ADMM) algorithm [21,22] can be applied to solve the constrained minimization problem (4).
We now put our optimization problem in the ADMM framework. First, using the Lagrange multiplier method, we transform the original optimization into the minimization of the objective function
$$f(\beta) = \frac{1}{n} \|Y - X\beta\|_2^2 + \lambda \sum_{i=1}^{q} \sum_{j=1}^{g_i} \min\left\{\frac{|\beta_{i,j}|}{\tau}, 1\right\} \qquad (8)$$
over the new parameter space $\mathcal{C}$:
$$\mathcal{C} = \Big\{\beta : \beta_{i,j} \ge 0, \; \sum_{j=1}^{g_i} \beta_{i,j} = \omega_i, \; i \in \{1, \ldots, q\}, \; j \in \{1, \ldots, g_i\}\Big\}.$$
It is straightforward to verify that the new parameter space $\mathcal{C}$ is convex and that the objective function $f$ in (8) is a convex loss function plus a piecewise linear regularization function. Using the results in [21,29], our optimization problem is equivalent to
$$\min_{\beta} \; f(\beta) + I_{\mathcal{C}}(\alpha) \quad \text{subject to:} \quad \beta_{i,j} = \alpha_{i,j}, \; i \in \{1, \ldots, q\}, \; j \in \{1, \ldots, g_i\}, \qquad (9)$$
where $I_{\mathcal{C}}(\alpha)$ is the indicator function of the space $\mathcal{C}$ (i.e., $I_{\mathcal{C}}(\alpha) = 0$ for $\alpha \in \mathcal{C}$, and $I_{\mathcal{C}}(\alpha) = \infty$ for $\alpha \notin \mathcal{C}$). This is a standard starting form of the ADMM algorithm. Before carrying out the three steps of the ADMM algorithm, we write the augmented Lagrangian for (9):
$$h(\beta, \alpha, \mu) = \frac{1}{n} \|Y - X\beta\|_2^2 + \lambda \sum_{i=1}^{q} \sum_{j=1}^{g_i} \min\left\{\frac{|\beta_{i,j}|}{\tau}, 1\right\} + \frac{\rho}{2} \|\beta - \alpha + \mu\|_2^2, \qquad (10)$$
where $\mu$ is the scaled dual variable associated with the constraint $\beta = \alpha$, and $\rho > 0$ is a penalty parameter.
In each iteration of ADMM, we perform alternating minimization of the augmented Lagrangian over $\beta$ and $\alpha$. At the $k$-th iteration we update the variables $\beta$, $\alpha$ and $\mu$ by the following steps:
$$\beta^{k+1} = \arg\min_{\beta} \; \frac{1}{n} \|Y - X\beta\|_2^2 + \lambda \sum_{i=1}^{q} \sum_{j=1}^{g_i} \min\left\{\frac{|\beta_{i,j}|}{\tau}, 1\right\} + \frac{\rho}{2} \|\beta - \alpha^k + \mu^k\|_2^2, \qquad (11)$$
$$\alpha^{k+1} = \Pi_{\mathcal{C}}(\beta^{k+1} + \mu^k), \qquad (12)$$
$$\mu^{k+1} = \mu^k + (\beta^{k+1} - \alpha^{k+1}), \qquad (13)$$
where $\Pi_{\mathcal{C}}$ denotes the Euclidean projection onto the space $\mathcal{C}$. In the first step of ADMM, we fix $\alpha$ and $\mu$ and update the value of $\beta$ by minimizing the augmented Lagrangian; in the second step, we fix $\beta$ and $\mu$ and update the value of $\alpha$ by projection onto the space $\mathcal{C}$; and finally, we fix $\alpha$ and $\beta$ and update the dual variable $\mu$. Algorithm 1 summarizes the framework of our algorithm, and the details are given in Appendix A.2.
Algorithm 1 ADMM algorithm for the minimization of (10).
  • (Initialization) Let β 0 , α 0 , μ 0 R p be the initial parameter vectors;
  • (Iteration) At each iteration k, update $\beta$ using Algorithm A1 (see Appendix A.2), $\alpha$ by the projection (12), and $\mu$ by the direct calculation (13);
  • (Termination) Stop when β , α , μ converge.
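The projection in (12) decomposes sector by sector: within sector $i$ it is the Euclidean projection onto the simplex scaled to total $\omega_i$, for which a standard sort-based routine exists. A minimal Python sketch (the sector totals and sizes below are illustrative, not from the paper):

```python
import numpy as np

def project_simplex(v, s=1.0):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = s} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - s)[0][-1]   # largest index kept positive
    theta = (css[rho] - s) / (rho + 1.0)         # common shift
    return np.maximum(v - theta, 0.0)

def project_C(beta, omega, sizes):
    """Per-sector projection onto scaled simplices, i.e., the projection in (12)."""
    out, start = [], 0
    for w, g in zip(omega, sizes):
        out.append(project_simplex(beta[start:start + g], s=w))
        start += g
    return np.concatenate(out)
```

For example, `project_C(np.array([0.5, 0.2, -0.1, 0.4]), [0.6, 0.4], [2, 2])` returns a nonnegative vector whose two sector blocks sum to 0.6 and 0.4, respectively.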

6. Simulation Results

In this section, we show the performance of our estimation procedure via simulations and discuss the selection of appropriate tuning parameters λ , τ and ρ in the augmented Lagrangian (10).
The detailed information about the simulation study, including the model and its variations, the methods to conduct the experiments and the choice of tuning parameters are given in Appendix A.3.
The performance of the various methods is evaluated in terms of both variable selection performance and estimation accuracy. For variable selection, we use four criteria: the mean true positive (TP), the mean false positive (FP), the positive selection rate (PSR) and the negative selection rate (NSR) [30]. The true positive count is defined as $\#TP = \sum_{i=1}^{q} \sum_{j=1}^{g_i} I(\beta^*_{i,j} \ne 0, \hat{\beta}_{i,j} \ne 0)$, which counts the variables with true nonzero coefficients that are also estimated as nonzero. The false positive count is defined as $\#FP = \sum_{i=1}^{q} \sum_{j=1}^{g_i} I(\beta^*_{i,j} = 0, \hat{\beta}_{i,j} \ne 0)$, which counts the variables with true zero coefficients that are estimated as nonzero. PSR is the ratio of TP to the total number of true nonzero coefficients; similarly, NSR is the ratio of FP to the total number of true zero coefficients. Regarding estimation accuracy, we adopt the mean and the standard deviation of the model errors (ME) as criteria, where the model error is defined as $\mathrm{ME}(\hat{\beta}) = (\hat{\beta} - \beta^*)^\top E(X X^\top) (\hat{\beta} - \beta^*)$, and the expectation is taken only with respect to a new observation $(X, y)$. The running time of each method is evaluated on a machine with an Intel Core i5 CPU (2.4 GHz) and 8 GB RAM. All methods were implemented in R.
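As an illustration, the four selection criteria can be computed as follows (a Python sketch; the `tol` threshold, which decides when an estimated coefficient counts as zero, is our own illustrative addition):

```python
import numpy as np

def selection_metrics(beta_true, beta_hat, tol=0.0):
    """TP, FP, PSR and NSR as defined above; estimates with absolute value
    at or below `tol` are treated as zero."""
    true_nz = beta_true != 0
    est_nz = np.abs(beta_hat) > tol
    tp = int(np.sum(true_nz & est_nz))      # true nonzeros estimated nonzero
    fp = int(np.sum(~true_nz & est_nz))     # true zeros estimated nonzero
    psr = tp / max(int(true_nz.sum()), 1)
    nsr = fp / max(int((~true_nz).sum()), 1)
    return tp, fp, psr, nsr
```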

6.1. Case 1

In this case ($n > p$), five methods are compared. The first one is our method, named the constrained TLP method (CTLP). The second method uses the Lasso penalty for the sparsity constraint instead of TLP, and we name it the constrained $L_1$ method (C$L_1$). This method fails to give a sparse portfolio because of the neutrality constraint, as pointed out in Section 3; we present the results of C$L_1$ here for confirmation. The third method uses TLP for the sparsity constraint but ignores sector neutrality, and we refer to it as TLP. The fourth and fifth methods are index tracking procedures of the following form,
$$\min_{\beta} \; \mathrm{TE}(\beta) + \lambda \|\beta\|_0 \quad \text{subject to} \quad \mathbf{1}^\top \beta = 1 \; \text{and} \; \beta \ge 0,$$
with $\mathrm{TE}(\beta)$ being the empirical tracking error (ETE) or the Huber empirical tracking error (HETE); the two variants are denoted ETE and HETE, respectively. In ETE, $\mathrm{TE}(\beta)$ is defined in (3), while in HETE, $\mathrm{TE}(\beta) = \frac{1}{n} \mathbf{1}^\top \phi(y - X\beta)$, where $\phi(x) = (\phi(x_1), \ldots, \phi(x_n))^\top$ and
$$\phi(x) = \begin{cases} x^2 & \text{if } |x| \le M, \\ M(2|x| - M) & \text{if } |x| > M, \end{cases}$$
with $M$ being the Huber parameter. Note that the ETE and HETE methods ignore the sector neutrality constraints. These two methods can be carried out using the R package sparseIndexTracking [31].
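For reference, a minimal Python sketch of the two tracking error variants follows (an illustration of the formulas above, not the sparseIndexTracking implementation):

```python
import numpy as np

def huber_phi(x, M):
    """Huber function phi: quadratic for |x| <= M, linear beyond."""
    a = np.abs(x)
    return np.where(a <= M, x ** 2, M * (2.0 * a - M))

def ete(beta, X, y):
    """Empirical tracking error (3): mean squared tracking residual."""
    return np.mean((y - X @ beta) ** 2)

def hete(beta, X, y, M=1.0):
    """Huber empirical tracking error: (1/n) * sum(phi(y - X beta))."""
    return huber_phi(y - X @ beta, M).mean()
```

Relative to ETE, the Huber variant downweights large residuals, making the fit less sensitive to extreme daily returns.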
We first investigate the case when the covariates $X$ have an independent structure. Table 2 shows the simulation results for both the $\sigma = 1$ and $\sigma = 3$ settings. For the $\sigma = 1$ setting, CTLP and C$L_1$ have smaller model errors than ETE and HETE, and the model errors of ETE and HETE are much smaller than that of TLP. In terms of variable selection ability, all methods correctly select the nonzero variables. CTLP, ETE and HETE perform similarly, and better than C$L_1$ in that C$L_1$ tends to select too many trivial variables and thus fails to produce sparse portfolios, which confirms our earlier assertion. On the other hand, TLP makes the fewest mistakes in selecting both trivial and nontrivial variables. When the variance of the error terms becomes larger, the simulated data are noisier. For the $\sigma = 3$ setting, CTLP and C$L_1$ still have the smallest model errors, while TLP performs quite poorly in terms of model errors. Every method selects almost all nontrivial variables except TLP, which selects slightly fewer nontrivial and trivial variables.
Now we investigate the simulation settings where the covariates $X$ have AR(1) correlation, without and with group structures. Similar conclusions can be reached from Table 3 and Table 4. In the large signal-to-noise ratio setting ($\sigma = 1$), the model errors of CTLP and C$L_1$ are similar and smaller than those of their competitors. Moreover, CTLP, ETE and HETE maintain a good balance between the selection of trivial and nontrivial variables, whereas C$L_1$ selects too many trivial variables. TLP gives good variable selection results but poor estimation accuracy. In the small signal-to-noise ratio setting ($\sigma = 3$), the model errors of all five methods become larger than in the large signal-to-noise ratio setting. However, the relative performances of the five methods are similar in terms of both model errors and variable selection ability.
From the last columns of Table 2, Table 3 and Table 4, we see that CTLP takes more time to train than the other methods because of the additional loops required to satisfy the sector neutrality constraint. In contrast, TLP and ETE consume the least time among all methods.

6.2. Case 2

When $p > n$, variable selection becomes a high-dimensional problem and necessitates sparsity. Theoretically, the C$L_1$ method with the Lasso penalty fails to yield sparse models because of the sector neutrality and nonnegativity constraints. However, we can solve the convex optimization problem using a common convex optimizer such as CVXR [32] to obtain an approximate solution, denoted C$L_1^*$. Table 5, Table 6 and Table 7 present the results for this case.
The results are similar to those of the low-dimensional case, but all methods have larger model errors and select fewer nontrivial variables. In the large signal-to-noise ratio setting ($\sigma = 1$), CTLP has smaller model errors than the other methods. CTLP, ETE and HETE have similar variable selection ability. TLP has larger model errors and tends to select fewer nontrivial variables. The average model error of C$L_1^*$ lies between that of CTLP and the other methods. It selects about 20% of the trivial variables, as reported in the tables, if we treat numbers less than $10^{-4}$ as 0; as the numerical precision increases, more trivial variables are selected. No estimated coefficients are exactly 0 at moderate precision, which confirms that C$L_1^*$ is only an approximate solution and that the $L_1$ term does not work. In the small signal-to-noise ratio setting ($\sigma = 3$), there is no significant difference between CTLP and ETE in terms of model errors, but CTLP selects fewer trivial variables. As for the other two methods, HETE gives less accurate estimation, and TLP performs less well in terms of both variable selection and estimation. Although C$L_1^*$ yields the smallest model errors, it cannot provide an exact solution with the sparsity that we need. All five methods run faster than in the $p < n$ case due to the smaller sample size. CTLP is still the slowest, while ETE is the fastest.
In conclusion, although CTLP takes more time to run, it performs similarly or better than the competitors in terms of both model errors and variable selection ability.

7. Real Data Results

We now apply the proposed methodology to sparse index tracking for the CSI 300 Index. In this application, we use the daily return series of the CSI 300 Index and of all stocks in this index from 2014 to 2018. The CSI 300 is a capitalization-weighted stock market index designed to replicate the performance of the top 300 stocks traded on the Shanghai and Shenzhen stock exchanges. The index is compiled by the China Securities Index company and is considered a blue-chip index for the Mainland China stock exchanges. The return series, the methodology of the CSI 300 Index, the names and weights of the index constituents, and the corresponding sectors are available from China Securities Index (http://www.csindex.com.cn/) (accessed on 31 September 2018).
According to the guidelines for industry classification in the Global Industry Classification Standard (GICS) and the weights of the constituents making up the CSI 300 Index, the stocks can be divided into 11 major sectors: Materials, Communication Services, Real Estate, Industrials, Utilities, Financials, Consumer Discretionary, Energy, Consumer Staples, Information Technology and Health Care. The constituents of the CSI 300 index, as well as their weights, are reviewed every six months; as a consequence, the stock numbers and the sector weights change every six months. Due to this dynamic property, we must update our model periodically. On the first day after each constituent adjustment takes effect, we train and tune the model with the daily return series of the index and its renewed constituents over the preceding half year. The total weights within sectors are calculated from the renewed weights as well. The model is then tested with the daily return series up to the next adjustment day. These steps are in accordance with the practical procedure.
We treat the estimated coefficients as the weights of stocks in building an index-tracking portfolio. The daily tracking error, or prediction error, is adopted to evaluate model performance. The daily tracking error measures the deviation of the portfolio daily return from the index daily return, defined as $\text{error} = \hat{r}_t - r_t$, where $r_t$ is the daily return of the index and $\hat{r}_t$ is the daily return of the constructed portfolio. We compare our method CTLP with TLP, ETE and HETE, as described in Section 6.1, in terms of tracking errors. Since the training sample size is smaller than the number of index constituents, there is no exact solution without a sparsity penalty, only an approximate solution from a common convex optimizer; we therefore report the $CL_1^*$ solution as in Section 6.2. For an additional comparison within the Lasso family, we report the performance of the standard sparse group lasso [33], denoted SGL. To give a more intuitive view of the advantages of the proposed method, we also present the performance of portfolios created by traditional methods: the equal-weighted portfolio (EW) and the inverse-volatility-weighted portfolio (IVW). Both portfolios consist of all 300 stocks. The weights in the EW portfolio are equal, while the weights in the IVW portfolio are inversely proportional to the stocks' historical volatilities.
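For concreteness, the tracking-error definition and the two benchmark weighting schemes can be sketched in a few lines; this is an illustrative sketch, and the function names are ours rather than the paper's:

```python
import numpy as np

def tracking_errors(portfolio_returns, index_returns):
    """Daily tracking errors: portfolio daily return minus index daily return."""
    return np.asarray(portfolio_returns) - np.asarray(index_returns)

def ew_weights(n_stocks):
    """Equal-weighted (EW) portfolio: 1/N in every stock."""
    return np.full(n_stocks, 1.0 / n_stocks)

def ivw_weights(return_history):
    """Inverse-volatility (IVW) weights from a (T, N) matrix of daily returns:
    weights proportional to 1 / historical volatility, normalized to sum to 1."""
    vol = np.std(return_history, axis=0, ddof=1)
    inv = 1.0 / vol
    return inv / inv.sum()
```

Both weight vectors sum to one by construction, so the resulting portfolios are fully invested but, unlike the sparse methods, hold all 300 constituents.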
The CSI 300 index was adjusted eight times from December 2014 to November 2018. Each time the constituents of the index were adjusted, we updated the model and tested it with the return series for the next six months. Table 8 presents the mean and standard deviation of the daily tracking errors in each test set, together with the number of stocks in each portfolio. We highlight the method with the smallest mean daily tracking error in each period. In a majority of periods, CTLP has the smallest mean tracking error among its competitors; even in the periods when it is not the best, such as 2017H1 and 2017H2, it remains very close to the best. In terms of the standard deviation of tracking errors, CTLP is slightly larger than ETE and HETE, but significantly smaller than TLP. However, TLP and HETE tend to yield sparser portfolios than CTLP and ETE. As to tracking errors, $CL_1^*$ performs reasonably well, but it fails to deliver a sparse portfolio in several periods, which defeats the motivation of this research. The standard sparse group lasso (SGL) may produce negative coefficients while satisfying the sector-neutrality constraints; although its solutions are sparse enough, its tracking errors are the largest. The traditional EW and IVW portfolios are not sparse and their performance is unstable: in some periods they track the index tightly, while in others they produce large tracking errors. We also present the average runtimes in Table 9, which shows that CTLP and HETE run more slowly than the other methods. Even though CTLP requires a longer computation time due to the additional sector-neutrality constraints, this is worthwhile given its competitive performance and the fact that the computation is not a frequent task.
As an illustration, we display the analysis results for the two most recent periods, the first and second halves of 2018, in Figure 1 and Figure 2 and Table 10 and Table 11. Figure 1 and Figure 2 draw the cumulative profit and loss of CTLP, TLP, ETE, HETE and the index. The closer a cumulative profit-and-loss line is to the red line representing the index, the better the method. It is clear that CTLP replicates the index best among the four methods. Table 10 and Table 11 present the number of stocks and the total weights of the different sectors. Compared with the index, the portfolios built by all five methods are sparse. However, the portfolio given by TLP consists of so few stocks that it cannot track the index well; in the TLP portfolio, some sectors even have zero weight. CTLP and $CL_1^*$ selected portfolios with sector weights strictly equal to those of the index because of the sector neutrality constraint, whereas ETE and HETE produced sparse portfolios with sector weights only roughly equal to those of the index.
In summary, the proposed CTLP method demonstrates its advantages over the competing methods in sparse index tracking. The sector neutrality constraint of CTLP guarantees that the resulting sparse portfolio has the same sector risk exposure as the index. Most of the time, the CTLP method also gives smaller tracking errors. In addition, the nonnegativity and sparsity of the CTLP portfolios are often desired properties in practical applications.

8. Summary

Motivated by the sparse index tracking problem, we propose a new method for sparse variable selection under constraints. Our methodology extends traditional variable selection with added constraints, which represent either the lower-dimensional structure of the data or special characteristics of practical applications.
In the sparse index tracking problem, sparsity, sector neutrality, and nonnegativity constraints are necessary to build an efficient, sector-risk-neutral portfolio with lower transaction costs that tracks the performance of the index. We established selection consistency and the asymptotic distribution for constrained high-dimensional variable selection using our method. We also developed an efficient algorithm for estimating the stock weights of the sparse portfolio. Both the simulations and the real application confirmed the validity and advantages of the new methodology.
In portfolio management applications, additional constraints may be incorporated into index tracking, for example, low volatility or size neutrality constraints. We leave these problems for future investigations.

Author Contributions

Conceptualization, Y.C.; Data curation, Y.C.; Investigation, Y.C.; Methodology, S.C.; Software, S.C.; Supervision, X.L.; Validation, S.C.; Writing—original draft, Y.C.; Writing—review & editing, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

Liu’s research is partially sponsored by the Shanghai Pujiang Program (21PJC056) and Shanghai Institute of International Finance and Economics.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: http://www.csindex.com.cn/.

Acknowledgments

The authors would like to thank the editor and anonymous referees for their helpful comments and suggestions. Our appreciation goes to Mathematics for consideration of publishing our work.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Appendix A.1. Proof of Theorems

Proof of Theorem 1.
For $(i,j) \in A^*$, by the definition of $\hat{\beta}^{ols}_{A^*}$, the variance of $\hat{\beta}^{ols}_{i,j}$ is a diagonal entry of the matrix $\sigma^2 (X_{A^*}^{T} X_{A^*})^{-1}$. Therefore,
$$\Pr\big[\hat{\beta}^{ols}_{i,j} \le \tau\big] = \Pr\big[n^{1/2}(\hat{\beta}^{ols}_{i,j}-\beta^*_{i,j}) \le -n^{1/2}(\beta^*_{i,j}-\tau)\big] \le \Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{\sigma}\right).$$
The variance of $\hat{\beta}^{ols}_{i,g_i}$ is $\sigma^2 v_i^{T}(X_{A^*}^{T}X_{A^*})^{-1}v_i$, where $v_i$ is the coefficient vector of the linear combination of the $\beta^*_{i,j}$ on the right-hand side of Equation (5), using only the nonzero entries of $\beta^*$. Note that $v_i$ is a $p_0$-dimensional vector consisting of 0's and $-1$'s only, so the variance of $\hat{\beta}^{ols}_{i,g_i}$ is bounded above by $p_0\,\sigma^2\,\eta_{\min}^{-1}\big(X_{A^*}^{T}X_{A^*}\big)$. Thus,
$$\Pr\big[\hat{\beta}^{ols}_{i,g_i} \le \tau\big] = \Pr\big[n^{1/2}(\hat{\beta}^{ols}_{i,g_i}-\beta^*_{i,g_i}) \le -n^{1/2}(\beta^*_{i,g_i}-\tau)\big] \le \Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{p_0^{1/2}\,\sigma}\right).$$
It follows that
$$\Pr\big[\hat{\beta}^{ols}_{i,j} \le \tau \text{ for some } (i,j) \text{ with } \beta^*_{i,j}>0\big] \le p_0\,\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{\sigma}\right) + q\,\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{p_0^{1/2}\,\sigma}\right).$$
Using Theorem 5 of [34] for the global minimizer of (4) without the constraints in (4), we obtain an upper bound for $\Pr\big[\hat{\beta}^{ols} \ne \hat{\beta}^{tlp}\big]$:
$$\min\left\{\frac{2 p_0 n^{1/2}\tau\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{\sigma\sqrt{2\pi}}\exp\left(-\frac{n(\gamma_{\min}-\tau)^2\,\eta_{\min}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{2\sigma^2}\right),\; p_0\,\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{\sigma}\right)\right\}$$
$$+\,4\exp\left(-\Big(\frac{nC_{\min}}{20\sigma^2}-(\alpha+1)\log(p+1)-\frac{n\lambda}{4\sigma^2}\Big)\right)+4\exp\left(-\Big(\frac{(\alpha-1)n\lambda}{6\alpha\sigma^2}-\big(1+\tfrac{1}{\alpha}\big)\big(\log(p+1)+\tfrac{5}{3}\big)\Big)\right)$$
$$+\,p_0\,\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{\sigma}\right)+q\,\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{p_0^{1/2}\,\sigma}\right),$$
which completes the proof.    □
Proof of Theorem 2.
We first show that $\Pr\big[\hat{\beta}^{ols} \ne \hat{\beta}^{tlp}\big] \to 0$ as $n \to \infty$. From Mill's ratio [35] it follows that
$$\Phi(-x) \le \frac{1}{\sqrt{2\pi}}\,\frac{1}{x}\,e^{-x^2/2}, \qquad x > 0.$$
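As a quick sanity check, this Gaussian tail bound is easy to verify numerically; the sketch below (helper names are ours) computes $\Phi(-x)$ via the complementary error function and checks the inequality at a few points:

```python
import math

def std_normal_cdf_neg(x):
    """Phi(-x) for x > 0, computed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_bound(x):
    """Mill's-ratio upper bound (1/sqrt(2*pi)) * (1/x) * exp(-x^2/2)."""
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

# The bound holds for every positive x, and tightens as x grows.
for x in (0.5, 1.0, 2.0, 5.0):
    assert std_normal_cdf_neg(x) <= mills_bound(x)
```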
Hence,
$$\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{\sigma}\right) \le \frac{1}{\sqrt{2\pi}}\,\frac{\sigma}{n^{1/2} d_1^{1/2}(\gamma_{\min}-\tau)}\exp\left(-\frac{n d_1(\gamma_{\min}-\tau)^2}{2\sigma^2}\right),$$
and
$$\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{p_0^{1/2}\sigma}\right) \le \frac{1}{\sqrt{2\pi}}\,\frac{p_0^{1/2}\sigma}{n^{1/2} d_1^{1/2}(\gamma_{\min}-\tau)}\exp\left(-\frac{n d_1(\gamma_{\min}-\tau)^2}{2 p_0\sigma^2}\right).$$
Since $\gamma_{\min}$, $\lambda$ and $p_0$ grow with $n$ at the rates assumed in Theorem 2, with $a_\gamma > a_\lambda$ and $a_\gamma > a_{p_0} \ge 0$, it follows that
$$\max\left\{\frac{2 p_0 n^{1/2} d_2^{1/2}\tau}{\sigma\sqrt{2\pi}},\; \frac{p_0^{3/2}\sigma}{\sqrt{2\pi}\, n^{1/2} d_1^{1/2}(\gamma_{\min}-\tau)}\right\}\exp\left(-\frac{n d_1(\gamma_{\min}-\tau)^2}{2\sigma^2}\right) \to 0.$$
Similarly,
$$q\,\Phi\left(-\frac{n^{1/2}(\gamma_{\min}-\tau)\,\eta_{\min}^{1/2}\!\big(\tfrac{1}{n}X_{A^*}^{T}X_{A^*}\big)}{p_0^{1/2}\sigma}\right) \le \frac{p_0}{\sqrt{2\pi}}\,\frac{p_0^{1/2}\sigma}{n^{1/2} d_1^{1/2}(\gamma_{\min}-\tau)}\exp\left(-\frac{n d_1(\gamma_{\min}-\tau)^2}{2 p_0\sigma^2}\right) \to 0.$$
Because $a_{lp} - 1 < a_\lambda < a_C$ and $a_{lp} \ge 0$, it follows that
$$\frac{nC_{\min}}{20\sigma^2}-(\alpha+1)\log(p+1)-\frac{n\lambda}{4\sigma^2} \to \infty,$$
and
$$\frac{(\alpha-1)n\lambda}{6\alpha\sigma^2}-\Big(1+\frac{1}{\alpha}\Big)\Big(\log(p+1)+\frac{5}{3}\Big) \to \infty.$$
Combining the limits (A1)–(A4), we obtain the selection consistency from Theorem 1.
   Asymptotic properties: from Equation (6) we know that $\hat{\beta}^{ols}_{A^*}$ has a multivariate normal distribution with mean $\beta^*_{A^*}$ and covariance matrix $\frac{\sigma^2}{n}\big(\frac{X_{A^*}^{T}X_{A^*}}{n}\big)^{-1}$. By the selection consistency established above, we obtain the asymptotic distribution (7).   □

Appendix A.2. The Details of Algorithm

The objective function in the first minimization step (11) is a quadratic function plus a TLP penalty. This is a non-convex minimization problem that can be solved by the difference-of-convex (DC) method [17,23]. The DC method decomposes a non-convex function into a difference of convex functions so that the algorithms and properties of convex optimization can be applied. By DC decomposition and some calculation, the minimization subproblem (11) is equivalent to minimizing (A5) with respect to $\beta$,
$$h^{(m)}(\beta) = L(\beta) + \frac{\lambda}{\tau}\sum_{i=1}^{q}\sum_{j=1}^{g_i}|\beta_{i,j}|\, I\big(|\hat{\beta}^{(m-1)}_{i,j}| \le \tau\big). \tag{A5}$$
Let us begin with the DC decomposition of the objective function $h(\beta) = h_1(\beta) - h_2(\beta)$, where
$$h_1(\beta) = L(\beta) + \lambda\sum_{i=1}^{q}\sum_{j=1}^{g_i} J_1(|\beta_{i,j}|), \qquad h_2(\beta) = \lambda\sum_{i=1}^{q}\sum_{j=1}^{g_i} J_2(|\beta_{i,j}|),$$
$$L(\beta) = n^{-1}\|Y - X\beta\|_2^2 + \frac{\rho}{2}\|\beta - \alpha^k + \mu^k\|_2^2,$$
$$J_1(|\beta_{i,j}|) = \frac{|\beta_{i,j}|}{\tau}, \qquad J_2(|\beta_{i,j}|) = \max\Big\{\frac{|\beta_{i,j}|}{\tau} - 1,\; 0\Big\}.$$
It is clear that $L(\beta)$ is a convex function of $\beta$. Using the DC decomposition, a sequence of convex approximations $h^{(m)}(\beta)$ of $h(\beta)$ is constructed. Let $\partial h_2$ denote a subgradient of $h_2$. At the $m$-th iteration, replacing $h_2(\beta)$ by its affine minorization at $\hat{\beta}^{(m-1)}$, which yields a majorization of $h$, we obtain
$$h^{(m)}(\beta) = h_1(\beta) - \Big(h_2\big(\hat{\beta}^{(m-1)}\big) + \big(|\beta| - |\hat{\beta}^{(m-1)}|\big)^{T}\,\partial h_2\big(|\hat{\beta}^{(m-1)}|\big)\Big),$$
where $|\beta|$ denotes the vector obtained by replacing each component of $\beta$ with its absolute value. After ignoring the terms that are independent of $\beta$, $h^{(m)}(\beta)$ becomes
$$h^{(m)}(\beta) = L(\beta) + \frac{\lambda}{\tau}\sum_{i=1}^{q}\sum_{j=1}^{g_i}|\beta_{i,j}|\, I\big(|\hat{\beta}^{(m-1)}_{i,j}| \le \tau\big). \tag{A6}$$
Minimizing (A6) gives the updated value $\hat{\beta}^{(m)}$; the procedure is repeated until convergence.
For the minimization of (A6), define the adaptive weights $\lambda_{i,j} = \frac{\lambda}{\tau}\, I\big(|\hat{\beta}^{(m-1)}_{i,j}| \le \tau\big)$; then (A6) can be rewritten as
$$h^{(m)}(\beta) = L(\beta) + \sum_{i=1}^{q}\sum_{j=1}^{g_i} \lambda_{i,j}\,|\beta_{i,j}|. \tag{A7}$$
Note that the second term in (A7) is a separable function of $\beta$. This property enables us to use the coordinate descent algorithm [24,25] for the minimization. Similar to the results for the regular Lasso [36], the updating formula for $\beta$ is
$$\beta_{i,j} \leftarrow \Big(\frac{2}{n} X_{i,j}^{T} X_{i,j} + \rho\Big)^{-1} S\big(Z_{i,j}, \lambda_{i,j}\big), \tag{A8}$$
where $Z_{i,j} = \frac{2}{n} X_{i,j}^{T} r_{i,j} + \rho(\alpha_{i,j} - \mu_{i,j})$, $r_{i,j} = Y - X_{-(i,j)}\beta_{-(i,j)}$, $X_{i,j}$ is the $j$-th column of $X_i$, $X_{-(i,j)}$ is the matrix $X$ with the $j$-th column of $X_i$ deleted, and $\beta_{-(i,j)}$ is defined similarly. $S(z,\lambda) = \mathrm{sign}(z)\,(|z| - \lambda)_{+}$ is the soft-thresholding operator. The updating formula (A8) is cycled through all variables in turn; repeated iteration of (A8) until convergence gives the estimate $\hat{\beta}^{(m)}$.
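As a concrete sketch of this inner loop, the soft-thresholding operator and one full cycle of the coordinate update (A8) might be implemented as follows; the ADMM variables $\alpha$ and $\mu$ are held fixed during the pass, and the function names are illustrative rather than the paper's:

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def coordinate_pass(Y, X, beta, alpha, mu, lam_weights, rho):
    """One full cycle of updates (A8) over all coordinates of beta.
    lam_weights[j] plays the role of lambda_{i,j} = (lambda/tau) * I(|beta_prev| <= tau)."""
    beta = np.array(beta, dtype=float)
    n = len(Y)
    for j in range(X.shape[1]):
        # Partial residual with coordinate j removed.
        r_j = Y - X @ beta + X[:, j] * beta[j]
        z_j = (2.0 / n) * X[:, j] @ r_j + rho * (alpha[j] - mu[j])
        denom = (2.0 / n) * X[:, j] @ X[:, j] + rho
        beta[j] = soft_threshold(z_j, lam_weights[j]) / denom
    return beta
```

With an orthogonal design and zero penalties the pass reduces to ordinary least squares, which makes the update easy to spot-check.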
Finally, with the inner iteration (A8) and outer iteration Algorithm A1, we solve the minimization subproblem (11).
We summarize the DC algorithm for minimization (11) below.
Algorithm A1 DC algorithm for the minimization of (11).
  • (Initialization) Use $\beta^{k}$ from the $k$-th ADMM iteration as the initial estimate $\hat{\beta}^{(0)}$.
  • (Iteration) At the $m$-th DC iteration, compute $\hat{\beta}^{(m)}$ by minimizing (A5).
  • (Termination) Stop when $|h(\hat{\beta}^{(m-1)}) - h(\hat{\beta}^{(m)})| < \epsilon$ and no component of $\hat{\beta}^{(m)}$ is at $\pm\tau$.
Next, we note that computing the projection (12) directly is not easy. Let $C_1 = \{\beta : \sum_{j=1}^{g_i}\beta_{i,j} = \omega_i,\ i = 1,\dots,q\}$. We divide the projection into two easier sequential projections: the first from $\mathbb{R}^p$ onto the space $C_1$, and the second from $C_1$ onto $C$. Theorem A1 below guarantees the equivalence between the direct projection onto $C$ and the two sequential projections.
Theorem A1.
In Euclidean space, the projection onto the space $C$, $\beta \mapsto \Pi_{C}\beta = \beta^{2}$, is equivalent to the sequential projections onto the space $C_1$ and then the space $C$:
$$\beta \xrightarrow{\ \Pi_{C_1}\ } \beta^{1} \xrightarrow{\ \Pi_{C}\ } \beta^{2}. \tag{A9}$$
There exists a closed-form solution for $\beta^{1} = \Pi_{C_1}\beta$. Let $m_i$ be the number of elements in the $i$-th group $g_i$. We have
$$\beta^{1}_{i,j} = \beta_{i,j} + \frac{\omega_i - \sum_{l=1}^{g_i}\beta_{i,l}}{m_i}, \qquad i \in \{1,\dots,q\},\ j \in \{1,\dots,g_i\}. \tag{A10}$$
The computation of the second projection is given in the proof of Theorem A1. The composition of these two sequential projections updates the value of α . The third step (13) is a direct calculation.
Proof of Theorem A1.
Without loss of generality, we assume that all variables belong to the same group; the proof is similar when the number of groups is more than 1. The parameter space $C_1$ becomes $C_1 = \{\beta : \sum_j \beta_j = 1\}$.
Denote $\beta^{1} = \Pi_{C_1}\beta$ and $\beta^{2} = \Pi_{C}\beta^{1}$, where $\beta^{1} = (\beta^{1}_{1},\dots,\beta^{1}_{p})$ and $\beta^{2} = (\beta^{2}_{1},\dots,\beta^{2}_{p})$. The closed-form formula for $\Pi_{C_1}$ is given by (A10).
We now calculate the projection $\Pi_{C}$. Assume that $\beta^{1}$ has $u$ positive elements, $v$ zero elements, and $p-u-v$ negative elements:
$$\beta^{1}_{j} > 0,\ 1 \le j \le u; \qquad \beta^{1}_{j} = 0,\ u+1 \le j \le u+v; \qquad \beta^{1}_{j} < 0,\ u+v+1 \le j \le p. \tag{A11}$$
Let $c_u = \big(1 - \sum_{j=1}^{u}\beta^{1}_{j}\big)/u$. The projection $\beta^{2} = \Pi_{C}\beta^{1}$ is given by
$$\beta^{2}_{j} = \begin{cases}\beta^{1}_{j} + c_u, & 1 \le j \le u,\\ 0, & u+1 \le j \le p.\end{cases} \tag{A12}$$
It is possible that some of the $\beta^{2}_{j}$, $1 \le j \le u$, are negative. If this is the case, we repeat the procedure (A11) and (A12) until there are no negative elements in $\beta^{2}$. This completes the proof. □
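The two-stage projection above translates directly into code. The following sketch handles the single-group case of the proof (weights summing to $\omega$, nonnegative), alternating the closed-form hyperplane projection (A10) with the clip-and-redistribute step (A11)–(A12); the function name is ours:

```python
import numpy as np

def project_simplex_sum(beta, omega=1.0):
    """Project beta onto {b : sum(b) = omega, b >= 0} by first applying the
    hyperplane projection (A10) and then repeating the clip-and-redistribute
    step (A11)-(A12) until no entry is negative."""
    b = np.asarray(beta, dtype=float)
    # Step 1: projection onto the hyperplane sum(b) = omega, Eq. (A10).
    b = b + (omega - b.sum()) / b.size
    # Step 2: zero out non-positive entries and redistribute the deficit
    # among the positive entries; repeat if any entry is pushed below zero.
    while True:
        positive = b > 0
        b[~positive] = 0.0
        deficit = omega - b[positive].sum()
        b[positive] += deficit / positive.sum()
        if (b >= 0.0).all():
            return b
```

Each pass preserves the sum constraint, and the set of positive entries shrinks strictly whenever the loop repeats, so the procedure terminates.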

Appendix A.3. The Details of Simulation

In the simulation study, we consider the model y = X β + σ ϵ . The covariates X are generated from a multivariate normal distribution with all marginal distributions being standard normal distribution N ( 0 , 1 ) . The correlation matrix of X is denoted by R .
We assume three settings for the correlation matrix $R$:
(1)
Independence structure: $R$ is the identity matrix;
(2)
AR(1) correlation without group structure: $R = (r_{j,l})_{j,l=1}^{p}$, where $r_{j,l} = 0.5^{|j-l|}$ for $j,l = 1,\dots,p$;
(3)
AR(1) correlation with group structure: $R = (r_{j,l})_{j,l=1}^{p}$, where $r_{j,l} = 0.5^{|j-l|}$ if $j$ and $l$ belong to the same group $g_i$, and $r_{j,l} = 0$ if $j$ and $l$ belong to different groups.
As to the dimension of the covariates $X$, both cases $n < p$ and $n > p$ are considered, with $p = 100$. The parameter vector $\beta$ is divided into 5 groups of 20 elements each. The numbers of nonzero elements in the groups are 3, 3, 1, 2, and 5, respectively. The true value of $\beta$ is
$$\beta_0 = \big(0.12, 0.04, 0.05, \underbrace{0,\dots,0}_{17}, 0.09, 0.02, 0.03, \underbrace{0,\dots,0}_{17}, 0.17, \underbrace{0,\dots,0}_{19}, 0.2, 0.12, \underbrace{0,\dots,0}_{18}, 0.02, 0.04, 0.03, 0.02, 0.05, \underbrace{0,\dots,0}_{15}\big).$$
The sums of the coefficients within the groups are $\gamma_1 = 0.21$, $\gamma_2 = 0.14$, $\gamma_3 = 0.17$, $\gamma_4 = 0.32$, and $\gamma_5 = 0.16$, and the numbers of zero elements in the groups are 17, 17, 19, 18, and 15, respectively. The sample size $n$ is set to 2000 for the low-dimensional ($n > p$) case and 50 for the high-dimensional ($n < p$) case. The error term $\epsilon$ follows the normal distribution $N(0, 0.05^2)$, and the scaling parameter $\sigma$ takes two values: $\sigma = 1$ and $\sigma = 3$.
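Under the stated design, the third correlation setting and the simulated data could be generated as follows (an illustrative sketch; the seed and helper names are our own):

```python
import numpy as np

def block_ar1_corr(group_sizes, rho=0.5):
    """Block-diagonal correlation matrix: AR(1) with parameter rho inside each
    group, zero correlation across groups (simulation setting (3))."""
    p = sum(group_sizes)
    R = np.zeros((p, p))
    start = 0
    for g in group_sizes:
        idx = np.arange(start, start + g)
        R[np.ix_(idx, idx)] = rho ** np.abs(idx[:, None] - idx[None, :])
        start += g
    return R

rng = np.random.default_rng(0)
group_sizes = [20] * 5                       # p = 100, five groups of 20
R = block_ar1_corr(group_sizes)
n, sigma = 50, 1.0                           # high-dimensional case (n < p)
X = rng.multivariate_normal(np.zeros(100), R, size=n)
beta = np.zeros(100)                         # true beta_0 from the text
beta[[0, 1, 2]] = [0.12, 0.04, 0.05]
beta[[20, 21, 22]] = [0.09, 0.02, 0.03]
beta[40] = 0.17
beta[[60, 61]] = [0.20, 0.12]
beta[[80, 81, 82, 83, 84]] = [0.02, 0.04, 0.03, 0.02, 0.05]
y = X @ beta + sigma * rng.normal(0.0, 0.05, size=n)
```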
In each experiment, we randomly divide the dataset into training, tuning, and testing sets containing 60%, 20%, and 20% of the original sample, respectively. We repeat the experiment 100 times and report the means of ME, TP, FP, PSR and NSR and the standard error of ME for each simulation setting.
As to tuning parameter selection, we use five-fold cross-validation. The optimal values of the tuning parameters $\lambda$ and $\tau$ are found by grid search [17]. Finding an optimal value for $\rho$ is not straightforward [22]; however, [21] provides many heuristics that work well in practice. We find that a good choice of $\rho$ is around 1.
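The grid search over $(\lambda, \tau)$ can be sketched generically on a held-out tuning set; here `fit` is a placeholder for any estimator (for example, the proposed CTLP solver) that returns a coefficient vector, and the function names are our own:

```python
import itertools
import numpy as np

def grid_search(fit, X_train, y_train, X_tune, y_tune, lambdas, taus):
    """Pick the (lambda, tau) pair minimizing the tuning-set prediction error.
    `fit(X, y, lam, tau)` is any estimator callable returning coefficients."""
    best, best_err = None, np.inf
    for lam, tau in itertools.product(lambdas, taus):
        beta = fit(X_train, y_train, lam, tau)
        err = np.mean((y_tune - X_tune @ beta) ** 2)
        if err < best_err:
            best, best_err = (lam, tau), err
    return best, best_err
```

The same loop extends to five-fold cross-validation by averaging the fold-wise errors instead of using a single tuning set.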

References

  1. Barber, B.M.; Odean, T. Trading is hazardous to your wealth: The common stock investment performance of individual investors. J. Financ. 2000, 55, 773–806.
  2. Jansen, R.; Van Dijk, R. Optimal benchmark tracking with small portfolios. J. Portf. Manag. 2002, 28, 33–39.
  3. Ben-David, I.; Franzoni, F.; Moussawi, R. Exchange-traded funds. Annu. Rev. Financ. Econ. 2017, 9, 169–189.
  4. Fuller, S.L. The evolution of actively managed exchange-traded funds. Rev. Secur. Commod. Regul. 2008, 41, 89–96.
  5. Santos, A.A. Beating the market with small portfolios: Evidence from Brazil. EconomiA 2015, 16, 22–31.
  6. Tabata, Y.; Takeda, E. Bicriteria Optimization Problem of Designing an Index Fund. J. Oper. Res. Soc. 1995, 46, 1023–1032.
  7. Ammann, M.; Zimmermann, H. Tracking Error and Tactical Asset Allocation. Financ. Anal. J. 2001, 57, 32–43.
  8. Jagannathan, R.; Ma, T. Risk reduction in large portfolios: Why imposing the wrong constraints helps. J. Financ. 2003, 58, 1651–1684.
  9. Strub, O.; Baumann, P. Optimal construction and rebalancing of index-tracking portfolios. Eur. J. Oper. Res. 2018, 264, 370–387.
  10. Vieira, E.B.F.; Filomena, T.P.; Sant'anna, L.R.; Lejeune, M.A. Liquidity-constrained index tracking optimization models. Ann. Oper. Res. 2021, 1–46.
  11. Li, X.P.; Shi, Z.L.; Leung, C.S.; So, H.C. Sparse Index Tracking with K-Sparsity or ε-Deviation Constraint via ℓ0-Norm Minimization. IEEE Trans. Neural Netw. Learn. Syst. 2022.
  12. Zheng, Y.; Chen, B.; Hospedales, T.M.; Yang, Y. Index tracking with cardinality constraints: A stochastic neural networks approach. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1242–1249.
  13. Zhang, C.; Liang, S.; Lyu, F.; Fang, L. Stock-index tracking optimization using auto-encoders. Front. Phys. 2020, 8, 388.
  14. DeMiguel, V.; Garlappi, L.; Nogales, F.; Uppal, R. A Generalized Approach to Portfolio Optimization: Improving Performance by Constraining Portfolio Norms. Manag. Sci. 2009, 55, 798–812.
  15. Brodie, J.; Daubechies, I.; De Mol, C.; Giannone, D.; Loris, I. Sparse and stable Markowitz portfolios. Proc. Natl. Acad. Sci. USA 2009, 106, 12267–12272.
  16. Fan, J.; Zhang, J.; Yu, K. Vast portfolio selection with gross-exposure constraints. J. Am. Stat. Assoc. 2012, 107, 592–606.
  17. Shen, X.; Pan, W.; Zhu, Y. Likelihood-based selection and sharp parameter estimation. J. Am. Stat. Assoc. 2012, 107, 223–232.
  18. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
  19. James, G.M.; Paulson, C.; Rusmevichientong, P. Penalized and Constrained Optimization: An Application to High-Dimensional Website Advertising. J. Am. Stat. Assoc. 2019, 115, 1–31.
  20. Gaines, B.R.; Kim, J.; Zhou, H. Algorithms for Fitting the Constrained Lasso. J. Comput. Graph. Stat. 2018, 27, 861–871.
  21. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  22. Wahlberg, B.; Boyd, S.; Annergren, M.; Wang, Y. An ADMM algorithm for a class of total variation regularized estimation problems. IFAC Proc. Vol. 2012, 45, 83–88.
  23. Tao, P.D.; An, L.T.H. Convex analysis approach to DC programming: Theory, algorithms and applications. Acta Math. Vietnam. 1997, 22, 289–355.
  24. Wu, T.T.; Lange, K. Coordinate descent algorithms for lasso penalized regression. Ann. Appl. Stat. 2008, 2, 224–244.
  25. Wright, S.J. Coordinate descent algorithms. Math. Program. 2015, 151, 3–34.
  26. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
  27. Osborne, M.R.; Presnell, B.; Turlach, B.A. On the lasso and its dual. J. Comput. Graph. Stat. 2000, 9, 319–337.
  28. Benidis, K.; Feng, Y.; Palomar, D.P. Sparse Portfolios for High-Dimensional Financial Index Tracking. IEEE Trans. Signal Process. 2018, 66, 155.
  29. Wang, Y.; Yin, W.; Zeng, J. Global convergence of ADMM in nonconvex nonsmooth optimization. J. Sci. Comput. 2015, 78, 29–63.
  30. Chen, Z.; Chen, J. Tournament screening cum EBIC for feature selection with high-dimensional feature spaces. Sci. China Ser. A Math. 2009, 52, 1327–1341.
  31. Benidis, K.; Palomar, D.P. sparseIndexTracking: Design of Portfolio of Stocks to Track an Index; R package version 0.1.0. Available online: https://CRAN.R-project.org/package=sparseIndexTracking (accessed on 1 September 2019).
  32. Fu, A.; Narasimhan, B.; Boyd, S. CVXR: An R Package for Disciplined Convex Optimization. J. Stat. Softw. 2020, 94, 1–34.
  33. Simon, N.; Friedman, J.; Hastie, T.; Tibshirani, R. A sparse-group lasso. J. Comput. Graph. Stat. 2013, 22, 231–245.
  34. Shen, X.; Pan, W.; Zhu, Y.; Zhou, H. On constrained and regularized high-dimensional regression. Ann. Inst. Stat. Math. 2013, 65, 807–832.
  35. Gasull, A.; Utzet, F. Approximating Mills ratio. J. Math. Anal. Appl. 2014, 420, 1832–1853.
  36. Friedman, J.; Hastie, T.; Höfling, H.; Tibshirani, R. Pathwise coordinate optimization. Ann. Appl. Stat. 2007, 1, 302–332.
Figure 1. The cumulative profit and loss of portfolios built by different methods and the index in 2018H1.
Figure 2. The cumulative profit and loss of portfolios built by different methods and the index in 2018H2.
Table 1. The periods with sector risk for CSI300 index from 2014 to 2018.
| Period | CSI300 | Gainer | Loser |
|---|---|---|---|
| October 2018–November 2018 | −4.16% | Financials (2.50%) | Consumer Staple (−13.18%) |
| February 2018–March 2018 | −5.23% | Information (6.44%) | Financials (−9.17%) |
| June 2017–August 2017 | 9.75% | Consumer Staple (17.51%) | Utilities (−3.77%) |
| October 2015–December 2015 | 8.19% | Information (26.40%) | Energy (3.18%) |
| November 2012–January 2013 | 17.01% | Financials (30.92%) | Consumer Staple (−5.94%) |
Table 2. The comparison of five methods in the case of n > p , independent correlation matrix.
| | Method | ME | SE | TP | FP | PSR | NSR | Runtime (s) |
|---|---|---|---|---|---|---|---|---|
| σ = 1 | CTLP | 0.0742 | 0.0025 | 14 | 18.85 | 100.00% | 21.92% | 2.78 |
| | $CL_1$ | 0.0741 | 0.0025 | 14 | 52.36 | 100.00% | 60.88% | 1.10 |
| | TLP | 0.1650 | 0.0042 | 14 | 6.77 | 100.00% | 7.87% | 0.12 |
| | ETE | 0.0858 | 0.0026 | 14 | 18.76 | 100.00% | 21.81% | 0.02 |
| | HETE | 0.0924 | 0.0026 | 14 | 18.8 | 100.00% | 21.86% | 1.45 |
| σ = 3 | CTLP | 0.6415 | 0.0202 | 13.98 | 15.64 | 99.86% | 18.19% | 1.66 |
| | $CL_1$ | 0.6712 | 0.0198 | 13.98 | 31.55 | 99.86% | 36.69% | 1.00 |
| | TLP | 1.2989 | 0.0338 | 13.55 | 7.52 | 96.79% | 8.74% | 0.11 |
| | ETE | 0.7673 | 0.0215 | 13.95 | 18.74 | 99.64% | 21.79% | 0.02 |
| | HETE | 0.9818 | 0.0278 | 13.9 | 19.33 | 99.29% | 22.48% | 1.47 |
Table 3. The comparison of five methods in the case of n > p , A R ( 1 ) correlation matrix without group structure.
| | Method | ME | SE | TP | FP | PSR | NSR | Runtime (s) |
|---|---|---|---|---|---|---|---|---|
| σ = 1 | CTLP | 0.0697 | 0.0024 | 14 | 16.65 | 100.00% | 19.36% | 2.81 |
| | $CL_1$ | 0.0696 | 0.0024 | 14 | 51.04 | 100.00% | 59.35% | 1.10 |
| | TLP | 0.1382 | 0.0043 | 14 | 6.74 | 100.00% | 7.84% | 0.14 |
| | ETE | 0.0796 | 0.0025 | 14 | 16.22 | 100.00% | 18.86% | 0.02 |
| | HETE | 0.0884 | 0.0026 | 14 | 16.21 | 100.00% | 18.85% | 1.49 |
| σ = 3 | CTLP | 0.5956 | 0.0171 | 13.98 | 12.92 | 99.86% | 15.02% | 1.95 |
| | $CL_1$ | 0.6161 | 0.0179 | 13.97 | 29.21 | 99.79% | 33.97% | 1.12 |
| | TLP | 1.0480 | 0.0268 | 13.81 | 7.26 | 98.64% | 8.44% | 0.13 |
| | ETE | 0.7136 | 0.0196 | 13.95 | 16.2 | 99.64% | 18.84% | 0.02 |
| | HETE | 0.9427 | 0.0285 | 13.91 | 17.73 | 99.36% | 20.62% | 1.43 |
Table 4. The comparison of five methods in the case of n > p , A R ( 1 ) correlation matrix with group structure.
| | Method | ME | SE | TP | FP | PSR | NSR | Runtime (s) |
|---|---|---|---|---|---|---|---|---|
| σ = 1 | CTLP | 0.0651 | 0.0020 | 14 | 15.26 | 100.00% | 17.74% | 2.75 |
| | $CL_1$ | 0.0651 | 0.0020 | 14 | 50.32 | 100.00% | 58.51% | 1.04 |
| | TLP | 0.1337 | 0.0039 | 14 | 6.28 | 100.00% | 7.30% | 0.14 |
| | ETE | 0.0765 | 0.0022 | 14 | 15.32 | 100.00% | 17.81% | 0.02 |
| | HETE | 0.0841 | 0.0025 | 14 | 15.4 | 100.00% | 17.91% | 1.42 |
| σ = 3 | CTLP | 0.5740 | 0.0171 | 13.99 | 13.36 | 99.93% | 15.53% | 1.96 |
| | $CL_1$ | 0.5836 | 0.0185 | 13.99 | 44.79 | 99.93% | 52.08% | 1.20 |
| | TLP | 0.9061 | 0.0276 | 13.89 | 10.65 | 99.21% | 12.38% | 0.13 |
| | ETE | 0.6742 | 0.0203 | 13.96 | 17.07 | 99.71% | 19.85% | 0.02 |
| | HETE | 0.9259 | 0.0253 | 13.9 | 17.05 | 99.29% | 19.83% | 1.49 |
Table 5. The comparison of four methods in the case of n < p , independent correlation matrix.
| | Method | ME | SE | TP | FP | PSR | NSR | Runtime (s) |
|---|---|---|---|---|---|---|---|---|
| σ = 1 | CTLP | 0.1352 | 0.0114 | 13.61 | 18.27 | 97.21% | 21.24% | 2.35 |
| | $CL_1^*$ | 0.1758 | 0.0101 | 13.65 | 19.10 | 97.50% | 22.21% | 0.64 |
| | TLP | 0.3433 | 0.0299 | 11.09 | 12.77 | 79.21% | 14.85% | 0.10 |
| | ETE | 0.1724 | 0.0134 | 13.34 | 18.39 | 95.29% | 21.38% | 0.07 |
| | HETE | 0.1772 | 0.0134 | 13.32 | 18.2 | 95.14% | 21.16% | 0.33 |
| σ = 3 | CTLP | 0.9915 | 0.0441 | 10.53 | 12.08 | 75.21% | 14.05% | 2.73 |
| | $CL_1^*$ | 0.9599 | 0.0437 | 11.31 | 17.34 | 80.78% | 20.16% | 0.63 |
| | TLP | 1.7058 | 0.0816 | 6.08 | 4.25 | 43.43% | 4.94% | 0.04 |
| | ETE | 1.0885 | 0.0432 | 10.01 | 16.54 | 71.50% | 19.23% | 0.03 |
| | HETE | 1.2494 | 0.0507 | 9.8 | 15.73 | 70.00% | 18.29% | 0.40 |
Table 6. The comparison of four methods in the case of n < p , A R ( 1 ) correlation matrix without group structure.
| | Method | ME | SE | TP | FP | PSR | NSR | Runtime (s) |
|---|---|---|---|---|---|---|---|---|
| σ = 1 | CTLP | 0.1268 | 0.0088 | 13.69 | 14.68 | 97.79% | 17.07% | 2.17 |
| | $CL_1^*$ | 0.1328 | 0.0062 | 13.79 | 16.86 | 98.50% | 19.60% | 0.65 |
| | TLP | 0.3060 | 0.0261 | 11.87 | 11.02 | 84.79% | 12.81% | 0.10 |
| | ETE | 0.1643 | 0.0111 | 13.36 | 15.02 | 95.43% | 17.47% | 0.06 |
| | HETE | 0.1628 | 0.0110 | 13.33 | 14.71 | 95.21% | 17.10% | 0.31 |
| σ = 3 | CTLP | 1.0728 | 0.0772 | 10.79 | 11.67 | 77.07% | 13.57% | 2.67 |
| | $CL_1^*$ | 0.8185 | 0.0304 | 11.24 | 15.54 | 80.29% | 18.07% | 0.64 |
| | TLP | 1.8885 | 0.0644 | 7.03 | 4.53 | 50.21% | 5.27% | 0.05 |
| | ETE | 1.0795 | 0.0439 | 10.86 | 15.4 | 77.57% | 17.91% | 0.03 |
| | HETE | 1.2447 | 0.0507 | 10.39 | 15.27 | 74.21% | 17.76% | 0.37 |
Table 7. The comparison of four methods in the case of n < p , A R ( 1 ) correlation matrix with group structure.
| | Method | ME | SE | TP | FP | PSR | NSR | Runtime (s) |
|---|---|---|---|---|---|---|---|---|
| σ = 1 | CTLP | 0.1189 | 0.0067 | 13.71 | 16.23 | 97.93% | 18.87% | 2.03 |
| | $CL_1^*$ | 0.1328 | 0.0082 | 13.66 | 16.66 | 97.57% | 19.37% | 0.63 |
| | TLP | 0.2726 | 0.0229 | 12.94 | 15.04 | 92.43% | 17.49% | 0.10 |
| | ETE | 0.1600 | 0.0100 | 13.62 | 16.03 | 97.29% | 18.64% | 0.05 |
| | HETE | 0.1625 | 0.0104 | 13.57 | 16 | 96.93% | 18.60% | 0.32 |
| σ = 3 | CTLP | 1.0246 | 0.0526 | 10.7 | 11.09 | 76.43% | 12.90% | 2.24 |
| | $CL_1^*$ | 0.8340 | 0.0472 | 11.36 | 15.41 | 81.14% | 17.92% | 0.62 |
| | TLP | 1.9927 | 0.0993 | 6.72 | 4.12 | 48.00% | 4.79% | 0.05 |
| | ETE | 1.0223 | 0.0412 | 10.88 | 14.78 | 77.71% | 17.19% | 0.03 |
| | HETE | 1.0964 | 0.0404 | 10.53 | 13.94 | 75.21% | 16.21% | 0.39 |
Table 8. The performance of the selected portfolios by different methods in eight periods. Mean, Std. Dev and Num stand for the mean, the standard deviation of the tracking error and the number of stocks in the portfolio. In the period columns, for example, 2018H1 stands for the first half of 2018.
| Period | Method | Mean | Std. Dev | Num | Period | Method | Mean | Std. Dev | Num |
|---|---|---|---|---|---|---|---|---|---|
| 2018H2 | CTLP | **0.018%** | 0.0016 | 124 | 2016H2 | CTLP | **0.015%** | 0.0013 | 107 |
| | $CL_1^*$ | 0.028% | 0.0016 | 297 | | $CL_1^*$ | 0.027% | 0.0007 | 300 |
| | SGL | 0.123% | 0.0126 | 57 | | SGL | −0.073% | 0.0056 | 65 |
| | TLP | 0.081% | 0.0046 | 42 | | TLP | 0.020% | 0.0020 | 52 |
| | ETE | 0.031% | 0.0013 | 188 | | ETE | 0.023% | 0.0009 | 219 |
| | HETE | 0.026% | 0.0013 | 98 | | HETE | 0.016% | 0.0011 | 128 |
| | EW | 0.091% | 0.0064 | 300 | | EW | −0.016% | 0.0022 | 300 |
| | IVW | 0.087% | 0.0066 | 300 | | IVW | 0.055% | 0.0060 | 300 |
| 2018H1 | CTLP | **0.008%** | 0.0031 | 116 | 2016H1 | CTLP | **0.002%** | 0.0022 | 75 |
| | $CL_1^*$ | 0.014% | 0.0017 | 126 | | $CL_1^*$ | 0.025% | 0.0021 | 120 |
| | SGL | 0.043% | 0.0093 | 57 | | SGL | 0.062% | 0.0109 | 90 |
| | TLP | 0.024% | 0.0051 | 34 | | TLP | 0.031% | 0.0033 | 49 |
| | ETE | 0.022% | 0.0013 | 129 | | ETE | 0.031% | 0.0018 | 140 |
| | HETE | 0.024% | 0.0013 | 79 | | HETE | 0.033% | 0.0017 | 121 |
| | EW | 0.018% | 0.0086 | 300 | | EW | 0.052% | 0.0040 | 300 |
| | IVW | 0.027% | 0.0079 | 300 | | IVW | 0.052% | 0.0035 | 300 |
| 2017H2 | CTLP | −0.008% | 0.0019 | 123 | 2015H2 | CTLP | **0.055%** | 0.0034 | 117 |
| | $CL_1^*$ | **−0.005%** | 0.0015 | 296 | | $CL_1^*$ | 0.064% | 0.0021 | 130 |
| | SGL | −0.095% | 0.0060 | 35 | | SGL | 0.195% | 0.0208 | 84 |
| | TLP | −0.040% | 0.0035 | 25 | | TLP | 0.135% | 0.0087 | 51 |
| | ETE | −0.012% | 0.0013 | 220 | | ETE | 0.064% | 0.0017 | 179 |
| | HETE | −0.017% | 0.0016 | 84 | | HETE | 0.062% | 0.0019 | 121 |
| | EW | −0.015% | 0.0048 | 300 | | EW | 0.076% | 0.0049 | 300 |
| | IVW | 0.020% | 0.0040 | 300 | | IVW | −0.935% | 0.0479 | 300 |
| 2017H1 | CTLP | −0.014% | 0.0012 | 129 | 2015H1 | CTLP | 0.060% | 0.0032 | 120 |
| | $CL_1^*$ | −0.016% | 0.0009 | 299 | | $CL_1^*$ | −0.011% | 0.0020 | 147 |
| | SGL | 0.050% | 0.0058 | 11 | | SGL | −0.358% | 0.0153 | 58 |
| | TLP | −0.024% | 0.0032 | 33 | | TLP | −0.166% | 0.0075 | 22 |
| | ETE | **−0.010%** | 0.0009 | 177 | | ETE | **0.004%** | 0.0018 | 166 |
| | HETE | −0.011% | 0.0012 | 83 | | HETE | −0.009% | 0.0024 | 90 |
| | EW | 0.017% | 0.0023 | 300 | | EW | 0.009% | 0.0041 | 300 |
| | IVW | 0.015% | 0.0017 | 300 | | IVW | 0.045% | 0.0176 | 300 |
Bold means the method has the smallest mean daily tracking error in this period.
Table 9. The average runtime of different methods.
| Method | CTLP | CL1* | SGL | TLP | ETE | HETE |
|---|---|---|---|---|---|---|
| Runtime (s) | 47.60 | 0.75 | 0.14 | 18.48 | 1.67 | 46.78 |
Table 10. The number of stocks and the sum of weights within each sector for the portfolios selected by different methods and for the index in 2018H1.
For each sector, the first six columns give the Number of Stocks and the last six columns the Sum of Weights, in the index and in the portfolio selected by each method.

| Sector | Index | CTLP | CL1* | TLP | ETE | HETE | Index | CTLP | CL1* | TLP | ETE | HETE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Materials | 25 | 11 | 14 | 3 | 12 | 7 | 0.062 | 0.062 | 0.062 | 0.004 | 0.068 | 0.077 |
| Communications | 2 | 1 | 2 | 0 | 1 | 1 | 0.008 | 0.008 | 0.008 | 0.000 | 0.004 | 0.005 |
| Real Estate | 20 | 6 | 8 | 2 | 6 | 5 | 0.057 | 0.057 | 0.057 | 0.074 | 0.049 | 0.056 |
| Industrials | 60 | 14 | 22 | 5 | 22 | 10 | 0.137 | 0.137 | 0.137 | 0.117 | 0.133 | 0.131 |
| Utilities | 8 | 3 | 2 | 1 | 4 | 3 | 0.025 | 0.025 | 0.025 | 0.006 | 0.041 | 0.047 |
| Financials | 58 | 22 | 24 | 10 | 27 | 17 | 0.353 | 0.353 | 0.353 | 0.340 | 0.323 | 0.323 |
| Consumer dis. | 42 | 12 | 17 | 4 | 19 | 12 | 0.114 | 0.114 | 0.114 | 0.103 | 0.127 | 0.139 |
| Energy | 11 | 3 | 5 | 0 | 5 | 3 | 0.023 | 0.023 | 0.023 | 0.000 | 0.026 | 0.008 |
| Consumer sta. | 14 | 5 | 6 | 1 | 7 | 5 | 0.075 | 0.075 | 0.075 | 0.070 | 0.085 | 0.078 |
| Information | 42 | 16 | 18 | 5 | 17 | 8 | 0.100 | 0.100 | 0.100 | 0.088 | 0.065 | 0.050 |
| Health Care | 18 | 8 | 8 | 3 | 9 | 8 | 0.047 | 0.047 | 0.047 | 0.198 | 0.079 | 0.086 |
Table 11. The number of stocks and the sum of weights within each sector for the portfolios selected by different methods and for the index in 2018H2.
For each sector, the first six columns give the Number of Stocks and the last six columns the Sum of Weights, in the index and in the portfolio selected by each method.

| Sector | Index | CTLP | CL1* | TLP | ETE | HETE | Index | CTLP | CL1* | TLP | ETE | HETE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Materials | 36 | 9 | 35 | 5 | 23 | 9 | 0.067 | 0.067 | 0.067 | 0.094 | 0.066 | 0.063 |
| Communications | 2 | 1 | 2 | 0 | 1 | 0 | 0.006 | 0.006 | 0.006 | 0.000 | 0.005 | 0.000 |
| Real Estate | 17 | 5 | 17 | 1 | 12 | 7 | 0.050 | 0.050 | 0.050 | 0.005 | 0.058 | 0.065 |
| Industrials | 57 | 14 | 57 | 10 | 25 | 12 | 0.122 | 0.122 | 0.122 | 0.132 | 0.118 | 0.128 |
| Utilities | 10 | 4 | 10 | 0 | 7 | 3 | 0.027 | 0.027 | 0.027 | 0.184 | 0.034 | 0.038 |
| Financials | 59 | 24 | 59 | 11 | 41 | 25 | 0.368 | 0.368 | 0.368 | 0.000 | 0.334 | 0.330 |
| Consumer dis. | 37 | 14 | 36 | 5 | 25 | 13 | 0.107 | 0.107 | 0.107 | 0.439 | 0.120 | 0.113 |
| Energy | 13 | 2 | 13 | 1 | 7 | 1 | 0.026 | 0.026 | 0.026 | 0.035 | 0.024 | 0.018 |
| Consumer sta. | 13 | 8 | 13 | 4 | 10 | 5 | 0.082 | 0.082 | 0.082 | 0.045 | 0.079 | 0.066 |
| Information | 34 | 10 | 33 | 4 | 23 | 14 | 0.080 | 0.080 | 0.080 | 0.046 | 0.098 | 0.108 |
| Health Care | 22 | 10 | 22 | 1 | 14 | 9 | 0.063 | 0.063 | 0.063 | 0.015 | 0.063 | 0.071 |
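Tables 10 and 11 show that the CTLP and CL1* portfolios reproduce the index's sector weight sums exactly, which is precisely the sector-neutrality constraint, while TLP, ETE and HETE drift away from them. The check itself can be sketched in a few lines; the stock names and weights below are hypothetical and not taken from the paper.

```python
# Sector-neutrality check: within every sector, the weights of the
# selected stocks must sum to the index's weight for that sector,
# even though far fewer stocks are held than in the index.
index_sector_weight = {"Financials": 0.368, "Materials": 0.067}

# Hypothetical sparse portfolio: stock -> (sector, weight).
portfolio = {
    "BankA": ("Financials", 0.200),
    "BankB": ("Financials", 0.168),
    "MinerA": ("Materials", 0.067),
}

# Aggregate the stock-level weights to sector level.
sector_sum = {}
for _, (sector, w) in portfolio.items():
    sector_sum[sector] = sector_sum.get(sector, 0.0) + w

# Neutral if every sector sum matches the index's sector weight.
neutral = all(abs(sector_sum[s] - w) < 1e-12
              for s, w in index_sector_weight.items())
print(neutral)  # True
```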
Che, Y.; Chen, S.; Liu, X. Sparse Index Tracking Portfolio with Sector Neutrality. Mathematics 2022, 10, 2645. https://doi.org/10.3390/math10152645
