1. Introduction
Traditionally in finance, risk is described by the variance in asset returns. In this article, we extend the consideration of risk to include "interconnectedness" risk, based on minimum spanning tree topology. The goal of this article is to show that by taking this broader risk consideration into account, we can improve equal risk contribution (ERC) portfolio performance. We use the constituents of the S&P100 index to illustrate the improvement in the portfolio's risk-adjusted return.
Modern portfolio theory (MPT) measures the risk of an asset by its volatility/variance (
Markowitz 1952). Higher volatility assets carry higher risk. Alongside the introduction of MPT,
Markowitz (
1952) also introduced the mean variance portfolio (MVO). MVO portfolios rely on predicted asset returns and a variance–covariance (VCV) matrix to create a portfolio that minimizes risk while aiming for a specific expected return.
Generally, the expected returns and VCV matrix are estimated based on historical returns. As Chopra and Ziemba showed (
Chopra and Ziemba 1993), the risk and return estimations can carry large errors, making the MVO portfolio unstable. Since the bulk of the estimation error lies in the return estimation, one way to improve the portfolio is by only using the VCV matrix to optimize asset allocations (
Chopra and Ziemba 1993). Such strategies are called risk-based strategies. The simplest example is the minimum variance (MV) portfolio (the risk-only alteration of the MVO portfolio). While MV portfolios are more stable than MVO portfolios, they lack diversification, because the optimization model concentrates funds in the assets with the lowest volatility. In contrast, the ERC portfolio allocates the weights so each asset contributes equal risk to the total portfolio risk (
Maillard et al. 2010). This means each asset will have a non-zero weight, thus creating a diversified portfolio from the pool of assets. The ERC portfolio is often compared with the equally weighted (EW) portfolio (
Maillard et al. 2010).
In comparison to MPT, market graphs are a fairly recent way to analyze the market. The introduction of market graphs (networks) by
Mantegna (
1999) allowed for another way to understand asset relationships. Market graphs are a hierarchical organization of the assets in consideration (
Mantegna 1999). Analysis of these networks can provide information about asset relationships and how risk is transferred between assets (
Konstantinov et al. 2020). Market graphs are typically developed based on correlations between assets.
In each network, the assets (typically stocks) are the nodes, and edges connecting the nodes are dictated by the correlations between the assets (
Mantegna 1999). The graph can be filtered into a minimum spanning tree (MST), which retains only the most important relationships in the full network (
Mantegna 1999;
Tumminello et al. 2005). Within the study of networks, centrality is a key network property that helps us understand how important a node is within a network (
Rodrigues 2019). Centrality can be quantified in various ways, of which five common methods are: betweenness centrality, closeness centrality, eigenvector centrality, eccentricity, and degree centrality (
Pozzi et al. 2013). Recently, there has been more research on the use of centrality and MSTs to improve portfolio performance. Such articles on centrality-based portfolios show that portfolio performance can be improved by favoring the peripheral assets in a graph (
Peralta and Zareei 2016;
Pozzi et al. 2013).
Pozzi et al. (
2013) explain that peripheral asset portfolios perform better since central assets are more likely to carry more risk than peripheral assets, as they are subject to more sudden changes. Additionally, as shown by
Baitinger and Papenbrock (
2017), the risk associated with a stock’s importance in the network (“interconnectedness” risk) is not related to the risk captured by the VCV matrix.
To date, network-based risk using MSTs has been used to improve MVO and MV portfolio construction. Most of this literature does not consider other important portfolio models such as those based on ERC. The contribution of this paper is to demonstrate that by considering both the “interconnectedness” risk and VCV risk concepts, we can improve on the ERC portfolio risk-return performance. We do this by simply augmenting the VCV matrix with the MST topology information.
The rest of the paper is organized as follows.
Section 2 reviews the current literature in the area of network-based portfolios.
Section 3 presents the methodology of centrality-based portfolio construction.
Section 4 outlines the experimental framework and
Section 5 presents the computational results for centrality-based ERC portfolios. Finally, we provide the conclusions of this work in
Section 6.
2. Literature Review
Using networks to build a portfolio is not a new concept. Networks have previously been used to develop portfolios via the use of clustering. As carried out in several studies (
Tola et al. 2008), (
Puerto et al. 2020) and (
Boginski et al. 2014), clustering can be used to select assets for a portfolio to improve its performance. However, recently there has been more interest in the concept of using the asset relationships understood from networks to improve the portfolio weights and attain better portfolio performance (better risk-adjusted return).
Pozzi et al. (
2013) studied portfolios consisting of only central assets of a network and compared them to portfolios only consisting of peripheral assets. The results showed that peripheral asset portfolios performed better than central asset portfolios.
Li et al. (
2019) also presented similar results using the centrality heuristic introduced by
Pozzi et al. (
2013) in combination with a market mode to filter noise and improve basic MVO and EW portfolio performance. The idea of asset returns being negatively linked to asset centrality was also confirmed by
Peralta and Zareei (
2016). Based on the findings of several articles (e.g., (
Pozzi et al. 2013), (
Onnela et al. 2003), and (
Peralta and Zareei 2016)), there have been many articles showcasing the various ways MSTs and centrality can be used to improve traditional portfolio methods.
Baitinger and Papenbrock (
2017) calculated the optimal asset allocations by using the five centrality measures discussed by
Pozzi et al. (
2013). Using the base MVO model, the authors minimized (peripheral assets weighted more) or maximized (central assets weighted more) each centrality measure instead of the more traditional risk measure, portfolio variance. The results highlighted that centrality-based (centrality-minimizing) portfolios can outperform traditional portfolio methods.
Baitinger and Papenbrock (
2017) also found that the “interconnectedness risk” (risk based on network relationships) was not related to the risk captured by the VCV matrix. Therefore, combining the two can help improve traditional portfolio methods.
Table 1 provides a summary of some of the current relevant literature on the use of centrality measures to improve traditional portfolio methods. Many of the articles in
Table 1 focus on the improvement of MVO or MV portfolios. The key distinguishing factor between each of the articles is how they have implemented the centrality information (typically based on MST) into the traditional optimization methods. Some articles use constraints (
Výrost et al. 2019), while others use a multi-objective approach (
Giudici et al. 2022). Some authors, such as
Peralta and Zareei (
2016), implemented the centrality within the closed-form solution (which allows short-selling). Instead of only using centrality for the purposes of asset allocation, several authors ((
Cho and Song 2023), (
Zhao et al. 2018), and (
Peralta and Zareei 2016)) use centrality measures to select the assets for a portfolio. Centrality is not the only way to measure a node’s importance in a network.
Clemente et al. (
2021) use the local clustering coefficient (similar to measuring local density in a network (
Hansen et al. 2011)) to improve the MV portfolio. To calculate the local clustering coefficient, the authors use threshold filtering, which discards the edges with weight less than a given threshold. The authors implement the local clustering coefficient by using a modified matrix instead of the VCV matrix when calculating the optimal weights. The modified matrix uses the local clustering coefficient to represent the asset relationships, instead of the covariance, and scales the coefficients using the variances of the assets. Naïve equal risk portfolios are also commonly combined with network centrality. The key difference is naïve ERC portfolios do not take into account the asset covariance, while traditional ERC portfolios do. One such article is that by
Kaya (
2015), which uses eccentricity to scale the weights of the naïve equal risk portfolio.
Konstantinov et al. (
2020) demonstrate how eigenvector centrality, based on factor-asset networks, can be used to outperform the MV, naïve ERC and EW portfolios.
Lopez de Prado (
2016) presented a novel technique called hierarchical risk parity (HRP). HRP is an ERC-based strategy using network information to improve portfolio performance. However, the HRP portfolio method uses the hierarchical nature of a network via hierarchical clustering and not by directly studying network measures like centrality. Other articles, such as
Ricca and Scozzari (
2024), use networks to develop mixed-integer formulations for portfolio optimization, which include the network topology information in the objective function (multi-objective problem) or as constraints.
In contrast, we find little literature concerning the improvement of more complex portfolios like ERC with centrality and MST.
Clemente et al. (
2022) is one of the few articles in which the authors have extended the use of the local clustering coefficient to an ERC portfolio. In a later study (
Clemente et al. 2022), the authors extend upon their earlier research (
Clemente et al. 2021) by expanding the use of modified matrix (using the clustering coefficient) to maximum diversification and ERC portfolios. In their later paper (
Clemente et al. 2022), the authors also noted that including the use of shrinkage can improve the performance of both portfolios.
Other measures of risk such as VaR and CVaR have also been used to develop portfolios. Some recent literature on alternate risk based portfolios includes the paper by
Li et al. (
2022), which presents a bi-objective model using variance, VaR and entropy and the paper by
James et al. (
2023), in which the authors implemented semi-metrics in a Sharpe ratio maximization portfolio. However, certain conditions need to be met for such risk measures to be applicable to an ERC portfolio (since Euler’s theorem is used to decompose the risk), as mentioned by
Mausser and Romanko (
2018). While implementing risk measures such as CVaR in the ERC portfolio is possible, as Mausser and Romanko show, other risk measures may not satisfy the properties required for Euler's theorem. As a result, not all risk measures can be used to develop an ERC portfolio. In this article, we focus on variance and centrality as measures of asset risk.
Our contribution is the extension of articles such as those by
Baitinger and Papenbrock (
2017) and
Výrost et al. (
2019) by using centrality measures (we focus on betweenness centrality) based on MSTs to improve the equal risk contribution (ERC) portfolio. Following
Clemente et al. (
2021), we implement the centrality by scaling the VCV matrix based on each node's centrality, instead of a multi-objective (
Giudici et al. 2022) or constraint approach (
Výrost et al. 2019). This provides a simpler method to include the centrality in the optimization model. Our goal is to show the effect of using “interconnectedness” risk in the ERC portfolio formulation and how the performance compares to the base ERC model. The next section provides an overview of the network and portfolio methods employed in this article.
3. Methodology
In this section, we present our method for implementing centrality into the ERC portfolio model. We first explain how the MST is developed. Based on the MST, the centrality is calculated using a modified heuristic presented in the literature. Following
Clemente et al. (
2021), we implement the centrality within a modified VCV matrix. As shown in the previous literature, peripheral asset portfolios perform better than central asset portfolios. Therefore, in our modified VCV matrix, our goal is to favor the peripheral assets. In our implementation, we scale the VCV matrix so the peripheral assets (central assets) co-variances and variances become smaller (larger). This allows the ERC portfolio model to favor the peripheral assets as they have lower co-variances, and thus assign them a higher weight. Finally, we implement this modified matrix into a convex formulation of the ERC portfolio model. Following the steps outlined by
Mantegna (
1999), we develop the correlation matrix using log returns for the look-back period. Next, we transform the correlations into distances using the equation
$$d_{ij} = \sqrt{2\,(1 - \rho_{ij})}, \qquad (1)$$
where $\rho_{ij}$ is the correlation between the log returns of asset $i$ and asset $j$. This creates a weighted network, as the edges contain information about the strength of the connection between the nodes they connect.
We follow the procedure of
Baitinger and Papenbrock (
2017) and
Pozzi et al. (
2013) by using an MST to calculate the centrality measures. A minimum spanning tree (MST) is a filtered graph that only preserves its strongest connections. There are many different ways to develop an MST. In this case, we use Kruskal's algorithm (
Kruskal 1956), which adds edges to an empty graph, starting with the shortest edge (strongest correlation). Edges are only added if they do not create a cycle, so no loops are formed. As stated by
Baitinger and Papenbrock (
2017) (citing
Lopez de Prado (
2016)), the MST reduces the estimation error when used for forming portfolios as it only focuses on the strongest relationships (shortest distances) (
Mantegna 1999). An unweighted MST can be developed from this weighted MST by assigning each edge a value of 1 if it exists and 0 otherwise.
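To make the construction concrete, the following sketch builds the correlation-based distance graph of Equation (1) and filters it to weighted and unweighted MSTs with Kruskal's algorithm via NetworkX. It assumes daily log returns are held in a pandas DataFrame; the function name and data layout are illustrative.

```python
import numpy as np
import pandas as pd
import networkx as nx

def build_mst(log_returns: pd.DataFrame):
    """Build weighted and unweighted MSTs from a DataFrame of daily log returns."""
    corr = log_returns.corr()                      # Pearson correlation matrix
    dist = np.sqrt(2.0 * (1.0 - corr))             # Equation (1): correlation -> distance

    # Fully connected weighted graph with distances as edge weights.
    graph = nx.Graph()
    tickers = corr.columns
    for i, a in enumerate(tickers):
        for b in tickers[i + 1:]:
            graph.add_edge(a, b, weight=dist.loc[a, b])

    # Kruskal's algorithm keeps the shortest edges (strongest correlations).
    mst_w = nx.minimum_spanning_tree(graph, algorithm="kruskal")

    # Unweighted version: same topology, edges treated as present (1) or absent (0).
    mst_u = nx.Graph()
    mst_u.add_nodes_from(mst_w.nodes())
    mst_u.add_edges_from(mst_w.edges())
    return mst_w, mst_u
```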
We use betweenness centrality in this article. Betweenness centrality is the fraction of all shortest paths on which a node lies (
Freeman 1977). This essentially measures the nodes’ influence on the flow of information in the network (
Rodrigues 2019). The centrality measure is implemented through the Python package, Networkx (
Hagberg et al. 2008).
We then transform the raw centrality measure following a modified version of the heuristic introduced by
Pozzi et al. (
2013). By using the equation
$$\hat{C}_i = \frac{R_i^{u} + R_i^{w}}{2N}, \qquad (2)$$
we can transform the centrality measure for asset $i$ so that $0 < \hat{C}_i \le 1$. In Equation (2), $N$ is the number of assets, $u$ denotes the unweighted MST, and $w$ denotes the weighted MST. Following Pozzi et al. (2013), $R_i$ is the tied ranking of the raw centrality values for asset $i$ in the corresponding MST. Tied ranking refers to assigning tied values the average rank of said values. The transformation in Equation (2) allows us to use the information in both the weighted and unweighted graphs. The fact that $\hat{C}_i > 0$ is important since this means that when the centralities are applied as multipliers, they do not erase any asset relationships in the VCV matrix. Here, a lower $\hat{C}_i$ indicates a more peripheral asset, while a larger $\hat{C}_i$ indicates a more central asset, according to the betweenness centrality measure.
We can use these transformed values as multipliers for the VCV matrix as explained in the following section.
Pozzi et al. (
2013) provide code and a more detailed explanation of the calculation of the hybrid metric on which our formulation is based. The key difference in the way we calculate the metric is that the ranking is performed so that the most peripheral assets are given a lower ranking (smaller value) and the more central assets a higher ranking (larger value).
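A minimal sketch of this transformation is given below. It uses NetworkX for the raw betweenness centralities and SciPy's rankdata for tied ranking, and assumes the normalization of Equation (2) (average of the two tied ranks divided by N); the helper name and the exact scaling constant are illustrative assumptions.

```python
import numpy as np
import networkx as nx
from scipy.stats import rankdata

def transformed_betweenness(mst_w: nx.Graph, mst_u: nx.Graph) -> dict:
    """Tied-rank transformation of betweenness centrality (sketch of Equation (2))."""
    nodes = list(mst_w.nodes())
    n = len(nodes)

    # Raw betweenness on the weighted MST (shortest paths by distance) and the unweighted MST.
    bc_w = nx.betweenness_centrality(mst_w, weight="weight")
    bc_u = nx.betweenness_centrality(mst_u)

    # Tied ranking: equal raw values share the average of their ranks.
    rank_w = rankdata([bc_w[v] for v in nodes])
    rank_u = rankdata([bc_u[v] for v in nodes])

    # Peripheral nodes receive small but strictly positive multipliers,
    # central nodes receive multipliers close to 1.
    c_hat = (rank_w + rank_u) / (2.0 * n)
    return dict(zip(nodes, c_hat))
```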
In this study, we focus on long-only optimization and leave the option of shorting as future work. As discussed in
Bai et al. (
2016), the marginal risk contribution of asset $i$ is calculated as
$$\frac{\partial \sigma(w)}{\partial w_i} = \frac{(\Sigma w)_i}{\sqrt{w^{\top}\Sigma w}},$$
where $w$ is the vector of weights, $\Sigma$ is the VCV matrix, and $(\Sigma w)_i$ is the $i$-th component of the $\Sigma w$ vector. The total risk in the equal risk contribution (ERC) portfolio is then defined as $\sigma(w) = \sqrt{w^{\top}\Sigma w} = \sum_{i=1}^{N} w_i\,\partial\sigma(w)/\partial w_i$, for $N$ assets (Maillard et al. 2010). The goal of the equal risk contribution portfolio is to make the risk contributions of each asset equal, $w_i\,\partial\sigma(w)/\partial w_i = w_j\,\partial\sigma(w)/\partial w_j$ for all $i,j$. As defined in
Maillard et al. (
2010), the ERC portfolio optimization problem is as follows:
$$\min_{w}\;\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(w_i(\Sigma w)_i - w_j(\Sigma w)_j\bigr)^{2} \quad \text{s.t.} \quad \sum_{i=1}^{N} w_i = 1,\; w_i \ge 0,\; i = 1,\dots,N.$$
To work around the non-linear and non-convex nature of the objective function, we use the second-order cone programming model by
Mausser and Romanko (
2014).
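For illustration, the sketch below computes long-only ERC weights by directly minimizing the Maillard et al. (2010) least-squares objective with a general-purpose solver, rather than the second-order cone reformulation of Mausser and Romanko (2014) used in our experiments; it is a simplified stand-in, not the exact model we solve.

```python
import numpy as np
from scipy.optimize import minimize

def erc_weights(cov: np.ndarray) -> np.ndarray:
    """Long-only ERC weights via the Maillard et al. (2010) least-squares objective."""
    n = cov.shape[0]

    def objective(w: np.ndarray) -> float:
        # Risk contributions are proportional to w_i * (Sigma w)_i (the common sqrt factor cancels).
        rc = w * (cov @ w)
        # Sum of squared pairwise differences of risk contributions.
        return np.sum((rc[:, None] - rc[None, :]) ** 2)

    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]  # fully invested
    bounds = [(0.0, 1.0)] * n                                         # long-only
    w0 = np.full(n, 1.0 / n)                                          # start from equal weights
    res = minimize(objective, w0, method="SLSQP", bounds=bounds,
                   constraints=constraints, options={"maxiter": 1000})
    return res.x
```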
Our method for introducing the network topology information to the portfolio model has been influenced by
Clemente et al. (
2021). We do note a few key differences in our approach in comparison to
Clemente et al. (
2021). Firstly, we use a simpler filtering approach in comparison to the multiple threshold filtering approach used by
Clemente et al. (
2021). MST filtering provides a more concise understanding of the asset relationships in comparison to threshold filtering and is commonly used to calculate centrality, as seen in the studies by
Baitinger and Papenbrock (
2017) and
Peralta and Zareei (
2016). Secondly, the centrality we use, betweenness centrality, uses the whole network to understand the relative importance of a node. This is because betweenness centrality of a node is based on the fraction of all shortest paths it is on (
Freeman 1977;
Rodrigues 2019). In comparison, the local clustering coefficient, as the name suggests, is local and only considers the node and its neighbors when calculating the node's importance. Finally, our implementation of the centrality preserves the asset co-variances and is more intuitive from a traditional portfolio optimization perspective. In comparison,
Clemente et al. (
2021) use the combined clustering coefficients to represent the asset co-movements/relationships.
Our key contribution lies in combining the network-based risk (centrality of an asset) with the variance (traditional portfolio risk), with a specific application to the ERC portfolio. When we combine the centrality and VCV matrix, we do so with the goal of the ERC portfolio model favoring the peripheral assets over the central assets. This is in line with previous literature such as
Pozzi et al. (
2013) and
Peralta and Zareei (
2016), who show that asset returns are negatively linked to asset centrality. In the ERC portfolio, equal risk is implemented by assigning assets with larger risk (smaller risk) a smaller weight (larger weight). Therefore, by using the centrality to scale down the peripheral asset risk, we are essentially assigning the peripheral stocks a larger weight.
We define our combination matrix $Q$ by the following equation,
$$Q = D\,\Sigma\,D,$$
where $\Sigma$ is the VCV matrix and $D$ is a diagonal matrix whose diagonal entries $D_{ii} = \hat{C}_i$ are the transformed betweenness centrality measures calculated in Equation (2) for asset $i$. $Q$ replaces the $\Sigma$ matrix in the ERC objective function.
By combining the centrality and the VCV matrix in this way, we are scaling the variances and co-variances by the centrality of the nodes. Nodes with less centrality (smaller $\hat{C}_i$) will make the variances and co-variances smaller (shrinking the “risk”), allowing the ERC model to favor these stocks by allocating them higher weights. If a stock is central, its higher centrality metric will enlarge its risk, and the ERC model will lower its weight to balance out this increased risk.
For the optimization problem to remain convex in nature, the matrix Q must be at least positive semi-definite.
Proof of PSD Q Matrix. Following
Horn and Johnson (
2012),
$x^{\top} A x \ge 0$ for all $x$ by definition if $A$ is a positive semi-definite matrix. If $y = Dx$, then $y^{\top} A y \ge 0$ as well. That means $x^{\top} D\Sigma D\, x = (Dx)^{\top}\Sigma\,(Dx) \ge 0$, so $Q = D\Sigma D$ is positive semi-definite as well, with $D$ being a Hermitian (real, diagonal) matrix. □
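In code, the construction of Q and a numerical check of positive semi-definiteness can be sketched as follows; the function name is illustrative, and c_hat denotes the vector of transformed centralities from Equation (2).

```python
import numpy as np

def centrality_scaled_cov(cov: np.ndarray, c_hat: np.ndarray) -> np.ndarray:
    """Combination matrix Q = D * Sigma * D, with D = diag(transformed centralities)."""
    d = np.diag(c_hat)
    q = d @ cov @ d           # scales sigma_ij by c_hat_i * c_hat_j

    # Sanity check: Q should remain positive semi-definite (up to numerical tolerance).
    min_eig = np.linalg.eigvalsh(q).min()
    assert min_eig > -1e-10, f"Q is not PSD (min eigenvalue {min_eig:.2e})"
    return q
```

The resulting Q is then passed to the ERC optimization in place of the sample VCV matrix.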
Shrinkage is an estimation technique used to reduce the estimation error in the VCV matrix (
Ledoit and Wolf 2004b). As previously mentioned, the sample covariance matrix estimation carries estimation errors since we calculate it from the historical data (
Chopra and Ziemba 1993). This is especially true when the number of observations is less than the number of stocks considered (
Ledoit and Wolf 2004a,
2004b). We employ the shrinkage method suggested by
Ledoit and Wolf (
2004b), which involves calculating the convex combination of the sample VCV matrix and an estimator (the shrinkage target). We use the Python package scikit-learn (
Pedregosa et al. 2011) to perform the VCV shrinkage.
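As a brief illustration, the Ledoit–Wolf estimate can be obtained from scikit-learn as follows; the wrapper function is illustrative.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def shrunk_cov(log_returns: np.ndarray) -> np.ndarray:
    """Ledoit-Wolf shrinkage estimate of the VCV matrix from a (T x N) return array."""
    lw = LedoitWolf().fit(log_returns)   # convex combination of sample VCV and a structured target
    return lw.covariance_
```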
4. Computational Experiments
In this section, we test the methodology presented in the previous section using historical data. We provide details about our data set and present the methodology for data cleaning. We then present how the periods are calculated using the rolling window method. Finally, we demonstrate the step-by-step process for each period, to show how the optimal weights are calculated and applied to the out-of-sample period. To measure the out-of-sample performance, we use standard performance measures, which provide a common way to compare portfolios.
4.1. Experimental Design
The data are retrieved from Yahoo Finance and consist of daily adjusted closing prices for the constituents of the S&P100 index from December 1998 to December 2022. We use December 2003 as the beginning of the back-testing period.
Any assets with missing data are removed to avoid any alterations to the return time series with data imputation. The remaining 75 assets are used to develop the portfolio. The log returns $r_i$ of the data at time $i$ are calculated according to the equation $r_i = \ln(P_i / P_{i-1})$, where $P_i$ is the price of the asset at time $i$. The resulting daily log returns are then used in the calculation of the variance–covariance matrix (VCV) and the correlation matrix.
The risk-free rate is approximated by the 10-year US T-Bill rate. The risk-free rate data are retrieved from the Federal Reserve Bank of St. Louis (FRED) (
Board of Governors of the Federal Reserve System (US) 2023) on a daily basis for the same time period as the stocks. The rates are converted to a daily rate by dividing by a factor of 252. Any days with missing risk-free rates are replaced with the previous day’s rate. The portfolio back-testing process is done on a moving window basis. The length of the total window is
H. The first
T days are the look-back period. The remaining
t days are the out-of-sample period.
Using the log returns from the look-back period, we estimate the sample VCV matrix $\Sigma$. By optimizing the portfolio based on the sample VCV matrix, we obtain the optimal weights for that period. We calculate the daily portfolio simple (discrete) returns in the out-of-sample period using the optimal weights.
We then roll the window forward by t days, removing the first t days in the current window, and adding on t days to the end of the window.
This is repeated for the entirety of the back-testing period. We can vary the look-back period length T and the out-of-sample period length t. For every rolling window instance, the portfolio optimization process is as follows (a condensed sketch of this procedure follows the list):
Isolate the training period log returns (defined by look-back period);
Calculate the correlations from the training period log returns;
Convert the correlations to distances as shown in Equation (
1) and develop the graph based on the correlations from the previous step;
Filter the graph to a MST (both weighted and unweighted);
Calculate the centrality metric in Equation (
2);
Use the training period log returns to calculate the sample VCV matrix ($\Sigma$);
Calculate $Q = D\Sigma D$;
Calculate the optimal weights using the ERC portfolio optimization model;
Calculate the portfolio out-of-sample returns using $r_p = w^{\top} r$, where $r$ is the vector of out-of-sample simple (discrete) returns for the individual assets on a daily basis.
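The following sketch ties these steps together. It assumes the helper functions sketched earlier (build_mst, transformed_betweenness, centrality_scaled_cov, erc_weights), a DataFrame of daily prices, and roughly a 3.5-year (≈882 trading days) look-back with a 6-month (≈126-day) holding period; all names and period lengths are illustrative.

```python
import numpy as np
import pandas as pd

def backtest(prices: pd.DataFrame, lookback_days: int = 882, holding_days: int = 126) -> pd.Series:
    """Rolling-window back-test of the centrality ERC portfolio (illustrative sketch)."""
    # Assumes build_mst, transformed_betweenness, centrality_scaled_cov, erc_weights
    # are defined as sketched above.
    log_ret = np.log(prices / prices.shift(1)).dropna()      # daily log returns
    simple_ret = prices.pct_change().dropna()                 # daily simple returns
    out_of_sample = []

    start = 0
    while start + lookback_days + holding_days <= len(log_ret):
        train = log_ret.iloc[start:start + lookback_days]
        test = simple_ret.iloc[start + lookback_days:start + lookback_days + holding_days]

        mst_w, mst_u = build_mst(train)                       # steps 1-4: distances, graph, MSTs
        c_hat = transformed_betweenness(mst_w, mst_u)         # step 5: Equation (2)
        cov = train.cov().values                              # step 6: sample VCV matrix
        q = centrality_scaled_cov(cov, np.array([c_hat[t] for t in train.columns]))  # step 7
        w = erc_weights(q)                                    # step 8: optimal weights

        out_of_sample.append(test @ w)                        # step 9: out-of-sample returns
        start += holding_days                                 # roll the window forward
    return pd.concat(out_of_sample)
```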
4.2. Performance Measures
Performance measures allow us to better understand the portfolio out-of-sample performance and compare the performance of the three portfolios in question: ERC, centrality ERC, and EW portfolios. This section presents an overview of what the selected performance measures are and how they will be computed. The performance measures are calculated using simple (discrete) returns,
$R_i = \frac{P_i - P_{i-1}}{P_{i-1}}$, where $P_i$ is the price of the asset at time $i$. The first performance measure is the annualized return, which is calculated as the geometric mean using the equation
$$R_{\text{ann}} = \left(\prod_{i=1}^{W}\bigl(1 + R_{p,i}\bigr)\right)^{252/W} - 1. \qquad (13)$$
In Equation (13), $R_{p,i}$ is the portfolio return for the $i$-th day in the back-testing period and $W$ is the total number of days in the back-testing period. For calculating the annualized risk $\sigma_{\text{ann}}$, we use the equation
$$\sigma_{\text{ann}} = \sqrt{252}\;\sigma_{\text{daily}},$$
where $\sigma_{\text{daily}}$ is the standard deviation of the daily portfolio returns.
The Sharpe Ratio is calculated based on the formula by
Sharpe (
1966),
$$\mathrm{SR} = \frac{\mathbb{E}[R_p - R_f]}{\sigma_{R_p - R_f}},$$
where $R_p$ is the vector of portfolio simple (discrete) returns, $R_f$ is the vector of risk-free rates, and $\sigma_{R_p - R_f}$ is the volatility of the risk-adjusted (excess) return of the portfolio. Finally, Jensen's alpha denotes how much the portfolio “beats the market” by comparing the actual returns to those predicted by the Capital Asset Pricing Model (CAPM) (
Jensen 1968). The expected return for the market (we use the S&P500 index to represent the market), and portfolio return are all computed using the geometric return formula shown in Equation (
13). The $\beta$ is a linear regression coefficient and is calculated as
$$\beta = \frac{\operatorname{Cov}(R_p, R_m)}{\operatorname{Var}(R_m)},$$
where $R_m$ is the vector of market simple (discrete) returns and $R_p$ is the vector of portfolio simple (discrete) returns. Putting the expected returns and $\beta$ together, the formula for Jensen's alpha ($\alpha$) is
$$\alpha = R_p^{\text{ann}} - \bigl(R_f + \beta\,(R_m^{\text{ann}} - R_f)\bigr),$$
where $R_p^{\text{ann}}$ and $R_m^{\text{ann}}$ are the annualized (geometric) portfolio and market returns and $R_f$ is the risk-free rate.
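For illustration, these measures can be computed from aligned daily series of portfolio, market, and risk-free returns as in the sketch below; the 252-day annualization conventions and the annualized Sharpe ratio are assumptions of this sketch and may differ slightly from the exact conventions used in the tables.

```python
import numpy as np
import pandas as pd

def performance_summary(port_ret: pd.Series, mkt_ret: pd.Series, rf_daily: pd.Series) -> dict:
    """Annualized return/risk, Sharpe ratio, beta and Jensen's alpha from daily simple returns."""
    W = len(port_ret)
    ann_return = (1.0 + port_ret).prod() ** (252.0 / W) - 1.0           # geometric annualization
    ann_risk = port_ret.std() * np.sqrt(252.0)

    excess = port_ret - rf_daily
    sharpe = excess.mean() / excess.std() * np.sqrt(252.0)              # annualized Sharpe ratio

    beta = port_ret.cov(mkt_ret) / mkt_ret.var()                        # CAPM regression slope
    mkt_ann = (1.0 + mkt_ret).prod() ** (252.0 / W) - 1.0
    rf_ann = (1.0 + rf_daily).prod() ** (252.0 / W) - 1.0
    alpha = ann_return - (rf_ann + beta * (mkt_ann - rf_ann))           # Jensen's alpha

    return {"ann_return": ann_return, "ann_risk": ann_risk,
            "sharpe": sharpe, "beta": beta, "alpha": alpha}
```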
4.3. Downside Risk Measures
To compare the risk of each portfolio, we use downside risk measures to understand the potential losses in a bear market. Two of the most commonly used downside risk measures are value at risk (VaR) and conditional value at risk (CVaR). Value at risk indicates the loss for a given probability. The VaR is calculated as
$$\mathrm{VaR}_q(X) = -\inf\{x \in \mathbb{R} : F_X(x) > q\},$$
where $X$ is the random variable of simple (discrete) returns with probability distribution $F_X$ (Artzner et al. 1999). We use a probability of 5% ($q = 0.05$). CVaR is the expected value of the simple (discrete) returns that are at or below the $q$ quantile. The measure is calculated using the following equation,
$$\mathrm{CVaR}_q(X) = -\,\mathbb{E}\bigl[X \mid X \le F_X^{-1}(q)\bigr].$$
The maximum draw-down (MDD) is a downside risk measure which quantifies the largest loss (from peak to trough) before another peak is attained. For a given period, the MDD is calculated using
$$\mathrm{MDD} = \frac{P_{\text{trough}} - P_{\text{peak}}}{P_{\text{peak}}},$$
where $P_{\text{peak}}$ is the peak portfolio value and $P_{\text{trough}}$ is the trough portfolio value.
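A sketch of the historical (empirical) estimates of these downside risk measures is shown below; the sign conventions (losses reported as positive VaR/CVaR, MDD as a negative fraction) are assumptions of the sketch.

```python
import numpy as np
import pandas as pd

def downside_risk(port_ret: pd.Series, q: float = 0.05) -> dict:
    """Historical VaR, CVaR (at level q) and maximum draw-down from daily simple returns."""
    threshold = np.quantile(port_ret, q)
    var_q = -threshold                                       # loss exceeded with probability q
    cvar_q = -port_ret[port_ret <= threshold].mean()         # mean loss in the worst q tail

    wealth = (1.0 + port_ret).cumprod()                      # portfolio value path (initial value 1)
    running_peak = wealth.cummax()
    mdd = ((wealth - running_peak) / running_peak).min()     # most negative peak-to-trough move
    return {"VaR": var_q, "CVaR": cvar_q, "MDD": mdd}
```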
5. Results
This section presents the results of the experiment as previously outlined. We discuss the results for the overall back-testing period, as well as specific instances of a bear market.
Figure 1 shows the results for the out-of-sample period of December 2003 to December 2022. We use an initial portfolio value of
$100. We used a look-back period of three and a half years and an out-of-sample period of six months following the framework of
Gambeta and Kwon (
2020).
Table 2 shows the comparison of the key metrics for the different portfolio methods. In terms of overall performance, the benchmark ERC portfolio and the centrality portfolios are not considerably different due to the conservative nature of the risk parity portfolio. The main improvement is the 3.5% overall increase in the returns with the centrality ERC portfolio. These findings are in line with the conclusions of
Pozzi et al. (
2013), since the centrality ERC portfolio favors peripheral assets. Mainly due to the increase in returns, the centrality ERC portfolio also has a better Sharpe ratio in comparison to both the ERC and EW portfolios.
Figure 1 shows further evidence of this, as the centrality ERC portfolio attains a larger portfolio value than the ERC portfolio over the back-testing period.
From
Table 2, we can observe that the centrality ERC portfolio shows only slightly lower volatility over the back-testing period (a 0.45% decrease relative to the ERC portfolio). Therefore, the ERC and centrality ERC portfolios are almost equivalent in terms of risk. This is partially contradicted by the bear-market sub-period analysis with downside risk measures.
Table 3 shows all the metrics calculated for the 2008–2009 recession and 2020 pandemic periods. The dates isolating the 2008–2009 and 2020 periods are from
Gambeta and Kwon (
2020) and
Cho and Song (
2023). The downside risk measures during high-stress periods allow us to better compare the portfolio risks. We can see that in these high-stress situations, the centrality ERC portfolio is on par with the benchmark during the 2008 crisis but is riskier during the 2020 pandemic. Although the difference is not more than 1–3% in these scenarios, we need to keep in mind that these are daily return worst-case scenarios, so on a yearly scale, the difference will be larger. During the 2020 period, we can observe a larger draw-down and annualized return. The most interesting result is the small difference in the CVaR of the EW and centrality ERC portfolios, as the EW portfolio is known for its higher risk level. Overall, these downside risk measures show that in bear market situations, the centrality ERC portfolio can perform worse than the ERC portfolio. These findings are in line with the risk–return relationship introduced in MPT, since improving the ERC portfolio returns has also increased the risk.
We note that the turnover rate for the centrality-based portfolio is higher than the turnover rate for the ERC portfolio. Since the base ERC portfolio has such a low turnover rate, it is difficult to improve upon or stay at the same rate while improving the overall performance. Instead, we compare it to the minimum variance portfolio for the same period, which has an average turnover rate (per re-balancing period) of 36.93%. Using this turnover rate as a benchmark, the centrality ERC portfolio has a reasonable turnover rate in comparison.
The $\alpha$ values measure the performance of the portfolio in comparison to the market (S&P500 index); specifically, whether the portfolio is able to beat the return predicted by CAPM. The goal is for the centrality ERC portfolio $\alpha$ to be greater than zero and larger than that of the ERC portfolio. From Table 2, we can see this is true, with the centrality ERC portfolio $\alpha$ being 7.77% larger than the ERC portfolio $\alpha$.
To test the effect of favoring peripheral assets, we compare the portfolio to an ERC portfolio weighting the central assets more highly. In Figure 2, we can see that the central-assets ERC portfolio not only performs worse than the peripheral-assets-based portfolio but also does worse than the ERC portfolio. This analysis provides further evidence of the effects of favoring central versus peripheral assets in an MST.
Figure 3,
Figure 4 and
Figure 5 show the MSTs for the S&P 100 assets. In each MST shown, the size of the node is proportional to the weight allocated to that asset. In the case of the ERC portfolio, which does not use any network information, we see the asset weights more evenly distributed, with some preference to the peripheral assets. In contrast, as seen in
Figure 4, the centrality ERC portfolio has a clear preference for the peripheral assets (an effect of using the
Q matrix). We can see that in both cases, there is good diversification in comparison to the weights from an MV portfolio shown in
Figure 5, in which only a handful of assets are favored and the other assets are given little to no weight.
When combining the ERC portfolio with the centrality metrics, the risk contributions will differ according to the variance matrix used to measure risk. Risk contributions refer to the amount of risk an asset contributes to the total portfolio risk. Therefore, the summation of the individual asset risk contributions is the total portfolio risk. In the centrality ERC portfolio model, we use the
Q matrix to measure risk, leading to an equal risk contribution from each asset according to the
Q matrix. However, when considering the base
$\Sigma$ matrix (original VCV matrix), as shown in
Figure 6, we see that the risk contributions are not equal. The conclusion from this comparison is that without the use of the centrality scaling, the ERC portfolio would not have deemed this solution optimal considering the variations between asset risk contributions.
Table 4 shows the performance of the centrality-based ERC and ERC portfolios with different look-back (denoted in years) and out-of-sample (denoted in months) period lengths. Even by varying the look-back and out-of-sample period lengths, we observe that the centrality ERC portfolio is able to meet or exceed the benchmark ERC portfolio performance. In all cases, the Sharpe ratio of the EW portfolio (0.57) is lower than those of both the ERC and the centrality ERC portfolios, due to its larger volatility. Generally, we see that the centrality ERC portfolio performs better with more data (longer look-back periods) and shorter holding periods.
In this study, the effect of shrinkage is negligible. We see this in both
Table 4 and
Table 5; the differences in the Sharpe ratios are minimal. From the downside risk measures, we can observe that the centrality ERC portfolio performs slightly worse with shrinkage. Overall, the negligible change in performance due to shrinkage is not surprising, since when the ratio of assets to observations is increasingly small, the matrix estimated by shrinkage will be very close to the VCV matrix estimated from historical data (
Ledoit and Wolf 2004a). Since our portfolio only consists of 75 assets, this is easily achieved with two years of daily data.
6. Discussion
In this article, we use betweenness centrality, based on the MST of the market graph, to improve the performance of the ERC portfolio. We add to the current literature (e.g., (
Giudici et al. 2022), (
Výrost et al. 2019), and (
Baitinger and Papenbrock 2017)), which explores a similar concept but predominantly with the MVO or MV portfolios. The ERC portfolio is an important portfolio that has not been explored as commonly as the MVO or MV portfolios in relation to network-based risk. It is more difficult to extend because the risk measure used must satisfy certain properties so that Euler's theorem can be used for risk decomposition. We expand on this literature by improving the risk-adjusted performance of the ERC portfolio with a modified VCV matrix based on betweenness centrality. This is in line with the conclusion that including the "interconnectedness" risk can help improve portfolio performance (
Baitinger and Papenbrock 2017).
The results show that the network-based portfolio improves the returns with similar risk. By observing many different performance measures, such as $\alpha$, annualized returns, and the Sharpe ratio, we can conclude that there is a clear improvement in performance. However, considering the performance in the bear markets and the larger turnover rate, there are improvements that can be made to this portfolio model.
Future research can compare the centrality ERC portfolio to other network-based portfolio methods such as those of
Lopez de Prado (
2016) and
Clemente et al. (
2022). The centrality ERC portfolio could also be applied with other network topology measures like the local clustering coefficient.
In addition, we have only tested this portfolio on a limited stock data set, and so, expanding upon this set to include a wider variety of assets like bonds and commodities would provide a better idea of the performance of the centrality-based ERC portfolio. A natural extension of this article involves comparing the performance of this Pearson-correlation-based model with various combinations of methods used for constructing and filtering a network (such as the Planar Maximally Filtered Graph (
Tumminello et al. 2005)).