1. Introduction
As a research field, portfolio selection studies how to allocate investors' capital among several different assets in financial markets while balancing return and risk. Since the pioneering work of Markowitz [
1] in single-period investment problems, the mean–variance portfolio selection problem has attracted much attention and has become a research hotspot. By introducing different risk measures, a large variety of portfolio selection models have been presented, such as the mean–variance–skewness model [
2], the mean-conditional value at risk (CVaR) model [
3], the mean-value at risk (VaR) model [
4], the mean-semi-variance model [
5] and the minimax risk model [
6]. In addition, entropy can also be used as a risk measure because it does not depend on symmetric membership functions and can be calculated from non-metric data. Philippatos and Wilson [
7] first replaced variance with entropy as a risk measure. Later, Rödder et al. [
8] provided a new and efficient method for determining the portfolio weights on the basis of a rule inference mechanism with both maximum entropy and minimum relative entropy. Nawrocki and Harding [
9] provided two alternative weighted computations of entropy to measure portfolio risk. Usta and Kantar [
10] presented a multi-objective model founded on mean, variance, skewness and entropy to adequately diversify the portfolio. Yu et al. [
11] discussed the performance of the models with diverse entropy measures by comparing the mean–variance efficiency, portfolio values, and diversity.
Traditionally, researchers dealt with the uncertainty of portfolio selection problems by applying probability theory. For example, Beraldi et al. [
12] proposed a mean-CVaR model considering a complex transaction cost structure, and designed a specialized Branch and Bound method to solve the proposed model. Huang [
13] built a new type of model based on a risk curve. However, many non-probabilistic factors, such as economic, political, and social circumstances, exist in real capital markets and affect investment decisions. With the introduction of fuzzy set theory [
14], an increasing number of scholars began to investigate the portfolio selection problems in the fuzzy environment. Assuming that the returns are fuzzy, there exist numerous papers employing possibility theory for fuzzy portfolio selections; see, for example, Vercher et al. [
15], Chen [
16], Jana et al. [
17], Chen and Tsaur [
18], Liu and Zhang [
19], and Chen and Xu [
20]. Although possibility theory is widely used, it has limitations. For instance, it is not self-dual. To overcome this drawback, Liu [
21] proposed credibility theory. Under the framework of the credibility theory, Gupta et al. [
22] presented a multi-objective expected value model using risk, liquidity, short-term return, and long-term return. Gupta et al. [
23] proposed a multi-criteria credibilistic portfolio rebalancing model considering portfolio risk as a risk curve. Liu et al. [
24] built a class of credibilistic mean-CVaR portfolio optimization models. Huang [
25] provided two credibility-based portfolio selection models according to two types of chance criteria. Li et al. [
26] discussed a maximum likelihood estimation and a minimum entropy estimation for expected value and variance of normal fuzzy numbers in fuzzy portfolio selection. Jalota et al. [
27] modeled return, illiquidity, and risk of different kinds of assets by using L-R fuzzy numbers in a credibilistic framework. Deng et al. [
28] built a mean-entropy model in the framework of credibility theory. Xu et al. [
29] proposed a credibilistic semi-variance project portfolio model with skewness risk constraints.
In reality, in addition to portfolio risk, investors frequently face background risks such as losses of human capital, pensions, unexpected health-related costs, labor income, and real estate investments. Therefore, an increasing number of scholars have studied portfolio selection problems with background risk. Alghalith [
30] introduced a dynamic investment model to illustrate the impact of background risk and found a negative correlation between the background risk and portfolio risk. Huang and Wang [
31] analyzed the characteristics of the portfolio with background risk under a mean–variance framework. Jiang et al. [
32] discussed the influence of background risk in the framework of the mean–variance model. Baptista [
33] proposed a mean–variance model considering background risk and analyzed the circumstances under which investors can optimally delegate the management of their wealth to portfolio managers. Baptista [
34] introduced mental accounts as well as background risk into portfolio selection and derived the efficient portfolio frontier. In contrast to the above studies, few researchers have considered background risk in fuzzy portfolio selection problems; to the best of our knowledge, the only exceptions are the following two studies. Xu et al. [
35] provided a fuzzy portfolio selection model taking the vagueness of the investors’ performances and background risk into account. Li et al. [
36] gave a possibility-based portfolio selection model considering background risk.
All of the aforementioned literature is set in a single-period framework. However, investment is a long-term process, and investors need to redistribute their funds over time. Numerous scholars have therefore extended portfolio selection from the single-period to the multi-period setting. Some representative works on multi-period portfolio selection include Chen et al. [
37], Zhang et al. [
38], Liagkouras and Metaxiotis [
39], Li et al. [
40], and Zhang et al. [
41]. In addition, several researchers have investigated multi-period portfolio selection problems based on credibility theory. For example, Mehlawat [
42] developed credibility-based multi-objective models taking multi-choice aspiration levels into consideration for multi-period portfolio optimization problems. Mohebbi and Najafi [
43] presented a multi-period mean-VaR model by combining the credibility theory with a scenario tree. Liu et al. [
44] designed a credibilistic multi-period mean-LAD-entropy model considering bankruptcy control and bound constraints. Zhang and Liu [
45] gave a credibility-based model with a bankruptcy risk control constraint for solving multi-period portfolio selection problems. Guo et al. [
46] formulated a multi-period credibilistic mean–variance model with the terminal return constraint and V-shaped transaction cost.
In recent years, swarm intelligence-based optimization techniques have attracted increased attention. A literature review reveals the effectiveness of swarm intelligence algorithms in solving complex optimization problems, such as the salp swarm algorithm (SSA) [
47], the artificial bee colony algorithm (ABC) [
48], the firefly algorithm (FA) [
49], and the particle swarm optimization (PSO) [
50]. The dragonfly algorithm (DA) is a fairly novel swarm intelligence optimization technique proposed by Mirjalili [
51] and is based on the static and dynamic swarming behaviors of dragonflies in nature. Compared with the non-dominated sorting genetic algorithm II (NSGA II) and PSO, DA has advantages in dealing with optimization problems and has been applied in many fields. In the same work, Mirjalili [
51] proposed a multi-objective dragonfly algorithm (MODA) and applied it to submarine propeller optimization problems. Amroune et al. [
52] used a hybrid dragonfly optimization algorithm and support vector regression to solve a power system voltage stability assessment problem. Suresh and Sreejith [
53] used the dragonfly algorithm to solve static economic dispatch with solar energy. Mafarja et al. [
54] presented a variety of S-shaped and V-shaped transfer functions to balance the exploration and exploitation in the binary dragonfly algorithm. Khadanga et al. [
55] proposed a hybrid dragonfly and pattern search algorithm approach and used it in tilt integral derivative controller design. Ghanem and Jantan [
56] combined ABC and DA to train a multi-layer perceptron. Sree and Murugan [
57] developed a memory-based hybrid dragonfly algorithm incorporating concepts from PSO for solving three engineering design problems.
Although numerous studies have been performed on multi-period fuzzy portfolio selection, few have considered background risk under the framework of credibility theory. Moreover, to date, the application of DA to portfolio selection problems is relatively rare. The purpose of this paper is to investigate the multi-period portfolio selection problem with background risk in the framework of credibility theory. The main contributions of this paper are as follows: (1) We formulate a credibility-based mean-semi-entropy multi-period portfolio model considering background risk and several constraints, namely cardinality, liquidity, and buy-in thresholds; (2) We develop a new meta-heuristic approach combining the strengths of DA and NSGA II, in which parameter optimization, constraint handling, and external archive approaches are proposed to improve the ability to find accurate approximations of Pareto-optimal solutions with high diversity and coverage; (3) We run several experiments based on ZDT benchmark functions and a real-world empirical application to verify the effectiveness of the proposed methods.
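To give a flavor of contribution (2), the following is a minimal, illustrative Python sketch of the general idea of coupling a dragonfly-style position update with NSGA II-style non-dominated sorting for survivor selection. It is not the HDA-GA developed in Section 4: the simplified update keeps only inertia, food attraction, and enemy distraction, the parameter-optimization, constraint-handling, and external-archive mechanisms of the actual algorithm are omitted, as is any crowding-based selection, and all names and default values (hybrid_da_nsga2, pop_size, iters, and so on) are ours for illustration only.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated_rank(F):
    """Index of the non-dominated front each solution belongs to (0 = best front)."""
    remaining = set(range(len(F)))
    rank = np.zeros(len(F), dtype=int)
    front = 0
    while remaining:
        current = [i for i in remaining
                   if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        for i in current:
            rank[i] = front
        remaining -= set(current)
        front += 1
    return rank

def hybrid_da_nsga2(objectives, dim, pop_size=50, iters=100, lb=0.0, ub=1.0, seed=0):
    """Illustrative hybrid: dragonfly-style moves, NSGA II-style survivor selection."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop_size, dim))   # dragonfly positions
    dX = np.zeros_like(X)                      # step (velocity) vectors
    for t in range(iters):
        F = np.array([objectives(x) for x in X])
        rank = nondominated_rank(F)
        food = X[rng.choice(np.flatnonzero(rank == 0))]            # attractor from the best front
        enemy = X[rng.choice(np.flatnonzero(rank == rank.max()))]  # repeller from the worst front
        w = 0.9 - 0.5 * t / iters                                  # linearly decreasing inertia
        dX = (w * dX
              + rng.random((pop_size, 1)) * (food - X)             # attraction towards "food"
              - rng.random((pop_size, 1)) * (enemy - X))           # distraction away from "enemy"
        X_new = np.clip(X + dX, lb, ub)
        # environmental selection on parents + offspring by front rank (crowding omitted)
        XU, dXU = np.vstack([X, X_new]), np.vstack([dX, dX])
        FU = np.array([objectives(x) for x in XU])
        keep = np.argsort(nondominated_rank(FU), kind="stable")[:pop_size]
        X, dX = XU[keep], dXU[keep]
    F = np.array([objectives(x) for x in X])
    best = nondominated_rank(F) == 0
    return X[best], F[best]
```

As a toy usage example, hybrid_da_nsga2(lambda x: np.array([x[0], (1.0 - x[0]) ** 2]), dim=5) returns a non-dominated approximation of a simple bi-objective trade-off; the constraint-handling and external-archive components described in Section 4 would be layered on top of such a loop.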
The rest of this paper is organized as follows:
Section 2 describes the preliminaries. In
Section 3, we build a multi-period credibility-based mean-semi-entropy model considering background risk.
Section 4 discusses the solution method and proposes a hybrid algorithm. In
Section 5, numerical experiments are examined to verify the validity of the proposed model and the hybrid algorithm. In
Section 6, we present our conclusions.
5. Numerical Experiments
To verify the usefulness of the proposed methods, we present numerical examples based on the data introduced by Mehlawat [42]. The fuzzy return rates of the 10 risky assets in each period are presented in
Table 1, and
Table 2 shows the fuzzy turnover rates of these 10 risky assets. The background asset returns are given by experts’ estimations.
In this empirical study, we hypothetically set the initial wealth as , the lower and upper bounds are set as and , respectively, the unit transaction cost is , and the desired number of risky assets chosen for the portfolio during the investment period is . In addition, we assume that and . The fuzzy variable is the return rate on a background asset, the return rate on risk-free assets is , and the accepted minimum expected liquidities during each investment interval are designed as , , and .
5.1. Parameter Settings
Six algorithms, HDA-GA, NSGA II [
63], the multi-objective dragonfly algorithm (MODA) [
51], the multi-objective particle swarm algorithm (MOPSO) [
50], the multi-objective salp swarm algorithm (MOSSA) [
47], and the multi-objective artificial bee algorithm (MOABC) [
48], are compared in these experiments. The parameters of each algorithm are set as follows:
HDA-GA: , , the probability of individual mutation , the crossover distribution exponent , and the mutation distribution exponent .
The parameters in NSGA II and MODA are equal to those in HDA-GA.
MOPSO: The personal learning coefficient is 1, the global learning coefficient is 2, and the initial inertia weight w is 0.5.
MOSSA: The initial range r is 0.2, and the initial max velocity is 0.04.
MOABC: The is 200, and the is 50.
In addition, each algorithm is run independently 30 times, and the average results are reported.
5.2. Performance Measure Metrics
Five performance metrics, GD, Spacing, Diversity, CM and MPFE, are selected to compare the performances of the algorithms.
Generation Distance (GD): This convergence metric is employed to compute the distance between the approximated Pareto frontier and the true Pareto frontier. It is calculated as [63]
$$\mathrm{GD} = \frac{1}{N}\sqrt{\sum_{i=1}^{N} d_i^{2}},$$
where $N$ is the number of obtained solutions and $d_i$ is the minimum Euclidean distance between the $i$-th obtained solution and the true Pareto frontier. A smaller value of GD means that the obtained Pareto frontier is closer to the true Pareto frontier.
Spacing: This diversity metric is applied to measure the spread of the obtained solutions. It is evaluated as [64]
$$\mathrm{Spacing} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\bar{d} - d_i\right)^{2}},$$
where $d_i$ is the minimum distance between the $i$-th solution and its adjacent solutions, $i = 1, 2, \ldots, N$, and $\bar{d}$ is the average of the $d_i$. A smaller value of Spacing indicates that the obtained solutions have a better distribution.
Diversity: This diversity metric measures the spread and distribution of the obtained solutions. It is given as [63]
$$\mathrm{Diversity} = \frac{d_f + d_l + \sum_{i=1}^{N-1}\left|d_i - \bar{d}\right|}{d_f + d_l + (N-1)\bar{d}},$$
where $d_f$ and $d_l$ are the distances between the boundary solutions of the obtained set and the extreme solutions of the true Pareto frontier, $d_i$ is the distance between consecutive obtained solutions, and $\bar{d}$ is the average of these distances. A smaller value of Diversity means a better distribution and spread of the obtained solutions.
Convergence Metric (CM): This convergence metric measures the extent of convergence to the true Pareto frontier. It is computed as [63]
$$\mathrm{CM} = \frac{1}{N}\sum_{i=1}^{N} d_i,$$
where $d_i$ is the Euclidean distance between the $i$-th solution obtained with the algorithm and the nearest solution on the true Pareto frontier. The smaller the value of this metric is, the better the convergence toward the true Pareto frontier.
Maximum Pareto front error (MPFE): This convergence-diversity metric is employed to measure the quality of the obtained solutions in terms of diversity and convergence on a single scale. It is expressed as [64]
$$\mathrm{MPFE} = \max_{i}\ \min_{1 \le j \le P}\sqrt{\sum_{k=1}^{Q}\left(f_k^{i} - F_k^{j}\right)^{2}},$$
where the maximum is taken over the obtained solutions, $Q$ is the number of objective functions, $P$ is the number of solutions on the true Pareto frontier, $f_k^{i}$ is the $k$-th objective value of the $i$-th obtained solution, and $F_k^{j}$ is the $k$-th objective value of the $j$-th true Pareto solution. MPFE thus records the maximum of the minimum distances between each solution obtained with the algorithm and the corresponding nearest solution on the Pareto frontier. The convergence and the diversity of the algorithm improve with smaller values of this metric.
5.3. Experimental Results Based on the Zdt Functions
In this section, we select four ZDT functions as benchmarks and compare the algorithms on these functions to verify the validity of the proposed HDA-GA. The details of the four ZDT functions are given in
Appendix A.
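For convenience, ZDT1 is recalled below in its standard form (n decision variables in [0, 1], two objectives to be minimized); its true Pareto front is f2 = 1 - sqrt(f1) for 0 <= f1 <= 1, attained at g = 1. The sketch is provided for orientation only; the exact settings used in the experiments follow Appendix A.

```python
import numpy as np

def zdt1(x):
    """Standard ZDT1 benchmark: minimize (f1, f2) over x in [0, 1]^n."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)   # g(x) = 1 on the true Pareto front
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])
```

ZDT2, ZDT3, and ZDT6 are obtained by changing the shape of the second objective (and, for ZDT6, the first objective and g as well), which yields non-convex, disconnected, and non-uniformly distributed fronts, respectively.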
Table 3 and Table 4 show the best, mean, and standard deviation values of the five performance metrics. The bold fonts indicate better results. It can easily be observed that the proposed HDA-GA is superior to the other five algorithms in terms of the five performance metrics.
ZDT1 is a relatively easier problem than the other three ZDT problems. From
Table 3, MOPSO and MOSSA have better
than HDA-GA. However, HDA-GA has the smallest
of the five metrics among the six algorithms, which means that HDA-GA converges to the Pareto frontier with the best distribution, spread, and diversity.
Five disjoint curves make up the Pareto front of ZDT3. With respect to GD and Diversity, although MOPSO can obtain the , HDA-GA performs better between and . In addition, HDA-GA has the smallest and of the two metrics CM and MPFE. Moreover, HDA-GA achieves a better Spacing than the other algorithms, which means that the solutions produced by HDA-GA have a better distribution.
ZDT6 is another problem on which many multi-objective optimization algorithms struggle to obtain a set of solutions with good convergence and diversity. From
Table 4, for GD and CM, although MOPSO has smaller
, HDA-GA performs better in
and
than the others. For the diversity metrics Spacing and Diversity, the solutions produced by HDA-GA spread out better over the Pareto frontier with a better distribution. The results for MPFE demonstrate the superior convergence and diversity ability of HDA-GA.
For ZDT2, although MOPSO and MOSSA perform more stably than HDA-GA with respect to GD and Spacing, HDA-GA has the smallest , and of Diversity, CM and MPFE, which indicates that HDA-GA finds a better distribution and spread with a smaller convergence metric than others.
Based on the above discussion, HDA-GA achieves superior convergence and diversity with a better distribution and spread, which indicates that HDA-GA outperforms the other algorithms in most of the performance metrics.
5.4. Experimental Results Based on the Proposed Model
This section presents three cases with different cardinality constraints. For the proposed model, the minimum, maximum, mean, standard deviation, and range of the results found by the six algorithms are reported in Table 5. The bold fonts indicate better results. Comparing the six algorithms, HDA-GA achieves the smallest mean value in all cases. In addition, the index comparisons show that HDA-GA acquires a set of non-dominated solutions with a better distribution and that it can search the space reliably and extensively. Although MODA is more stable than HDA-GA on one of the indices, it is more prone to falling into local optima. These results indicate that HDA-GA performs better than the other algorithms.
Moreover, for a fair comparison of the performances among the algorithms, GD, Spacing, Diversity, CM, and MPFE are employed as the performance measurement metrics.
Table 6 presents the results in terms of the five metrics above. For GD and CM, the index values indicate that the results obtained by the proposed HDA-GA are closer to the Pareto front than those of the other algorithms in all three cases. Meanwhile, for Spacing and Diversity, HDA-GA performs better than the other algorithms, which means that it achieves a better spread and distribution. Moreover, for MPFE, HDA-GA shows superior convergence and diversity.
Figure 1,
Figure 2 and
Figure 3 display the Pareto front and the efficient frontiers obtained by the six algorithms in the three cases above. It can be seen that the proposed HDA-GA obtains a set of non-dominated solutions that closely approach the Pareto front, with accurate convergence, preferable coverage, and better diversity.
5.5. Experimental Results with and without Background Risk
We present four cases to analyze the impact of background risk in the proposed model. Case 1: without a background risk asset (BR); Case 2: with a background asset whose fuzzy return is ; Case 3: with a background asset whose fuzzy return is ; Case 4: with a background asset whose fuzzy return is . The experimental results indicate that background risk has a significant impact on portfolio selection.
From
Table 7, it can be observed that the cases considering background risk have higher returns and risks than the case without background risk. Ignoring background risk thus leads to an underestimation of risk and a reduction of return in actual investment.
In addition,
Figure 4 shows the Pareto frontiers of the above four cases. The shapes of the Pareto frontiers are approximately the same, and the Pareto frontier shifts to the right when background risk is considered. It can be observed that there is a positive correlation between the background asset return and the portfolio return. For the same level of risk, a portfolio with background risk obtains a higher return than one without background risk, which indicates that considering background risk avoids a reduction of return and the neglect of potential income in actual investment. Moreover, the risk of the background asset is positively correlated with the portfolio risk. For the same level of return, a portfolio with background risk is riskier than one without background risk. Considering background risk can therefore prevent investors from underestimating the investment risk and ignoring the potential risk.
6. Conclusions
In the real world, investors usually need to adjust their portfolio strategies from time to time. In this paper, we proposed a mean-semi-entropy model based on credibility theory, taking buy-in thresholds, cardinality, liquidity, and transaction costs into account. In particular, background risk is also considered in the proposed model. To solve the proposed multi-objective model, a hybrid algorithm, HDA-GA, combining the advantages of the dragonfly algorithm (DA) and the non-dominated sorting genetic algorithm II (NSGA II), was developed. Finally, we conducted a series of experiments to demonstrate the effectiveness of the proposed model and the hybrid algorithm. The numerical results showed that (1) the proposed HDA-GA is superior to the other five algorithms, namely NSGA II, MODA, MOPSO, MOSSA, and MOABC, with accurate convergence, preferable coverage, and better diversity; (2) the mean-semi-entropy model leads to more diversified investments; and (3) considering background risk prevents investors from underestimating risk in actual investment.
Future research directions include but are not limited to the following: (1) considering a more general transaction cost structure as in Beraldi et al. [
12]; (2) extending the proposed model by adding other constraints of real markets such as minimum transaction lots, skewness, and class constraints; and (3) applying other metaheuristic algorithms such as the estimation of distribution algorithm (EDA), the krill herd (KH) algorithm, and bacterial foraging optimization (BFO) for solving the proposed model.