1. Introduction
This paper applies various optimization techniques to financial models, in particular the Heston model, in the context of market liquidity modeling, and asks whether they deliver superior forecasting quality compared with a liquidity estimation approach based on a standard Brownian motion. The first novel idea of this work is the application of various optimization techniques to the Heston model in order to analyze its forecasting quality, in contrast to the standard Brownian motion, in the context of market liquidity measurement. The second novel contribution of this article to the existing literature is the application of inverse transform sampling to the volume-generating bid and ask volume process of an order book. To estimate liquidity, a simulation of the bid and ask volumes is necessary. For this purpose, two methods are applied: the compound Poisson process and inverse transform sampling. To compare their performance, a chi-square test is performed.
Financial models are characterized by a set of parameters that describe price movements of financial assets. In order to price and hedge financial products, stochastic models are most often used in practice and applied for general risk estimation frameworks.
Heston (1993) developed a model for describing the movement of a stock price based on the standard Brownian motion approach but added an important component: stochastic volatility. Since a constant volatility is assumed in the classic framework, it may not be appropriate to use such models to forecast future price developments. Heston’s stochastic volatility model seems to be a better choice when modeling price fluctuations, even though it makes assumptions that might not hold in reality, such as normally distributed returns over short time frames.
The application of stochastic volatility models is very broad. This paper focuses especially on the application of the Heston model to market liquidity simulation.
Market liquidity describes the ability of a financial market to absorb additional trades between market participants without affecting the price. When the price impact is strong, we speak of the illiquidity of a market. In order to simulate market liquidity, a specific parameter set is necessary to model trade capacity. This includes quoted bid and ask prices, quoted bid and ask volumes, and traded prices and traded volumes.
In order to simulate the Heston model as accurately as possible, frequent updates to the parameters used for estimation are necessary.
The Heston model has been used in different areas of finance. These include the pricing of derivatives and structured products, hedging, performance, and risk estimation. Many articles address the application of the Heston model. Most works cover computational implementation, such as
Ingber (1996) or Kirkpatrick et al. (1983). Others give summaries or overviews of the theoretical optimization techniques, such as Mikhailov and Ulrich (2003) or Gatheral (2004). However, no existing research has compared various optimization techniques applied to market liquidity measurement in the context of the Heston model.
A rather new stream of research tries to capture the dynamics of the Bitcoin market by calibrating various models to the existing options market, as discussed by
Madan et al. (2019). While their approach considers different financial models, the present article focuses on the comparison of various optimization techniques and on the broader underlying estimation of market liquidity. Both techniques could also be applied to the calibration of financial models to Bitcoin option prices.
Mrazek and Pospísil (2017) calibrate the Heston stochastic volatility model to real market data using several optimization techniques. They use known schemes, such as the log-Euler, Milstein, QE, Exact, and IJK schemes, and a new method combining the Exact approach and the Milstein (E+M) scheme. They find the most precise to be the QE scheme of Andersen (2008). However, for estimating market liquidity, the methods proposed in this article are shown to be more effective.
Relying on market option data, fair values are calculated with the Heston model. Exact fair prices can only be achieved by making use of algorithmic optimization techniques. This optimization leads to a set of parameters that are then used to simulate the bid and ask prices of the Euro Stoxx 50 Future (FESX).
Traded volume is an important factor for determining market liquidity. Since most liquidity is generated at market and not via the limit order book, estimating at-market volume is very difficult. The problem when simulating at market traded volume is that point-wise simulation leads to a non-homogeneous movement that does not correspond to reality.
The estimated traded volume serves as a good estimator of future expected market liquidity. Market liquidity is highly linked to market risk, since illiquidity leads to high risk. Therefore, it is natural to link risk classification to the value of future expected market liquidity.
The scope of this research is the computational implementation of a valid market liquidity estimation. This research aims to show that various simulation techniques exist to estimate future liquidity. It is likely that the simulated traded volumes will not converge to the realized traded volumes because the simulated quotes are compound Poisson-distributed random variables. Therefore, an approximation algorithm might be necessary to generate values that converge in accordance with the law of large numbers.
The basis for this article is the working paper by Unger and Hughston (2010), which examines the liquidity-generating pricing process. It incorporates two standard Brownian motions that represent the price processes of the buyer and the seller. The intersection of the two Brownian motions serves as the indicator for the corresponding volume-generating process and, therefore, as the indicator variable for the points in time when liquidity is being generated. A simulated trade occurs by taking the minimum of the two quoted sizes. This means that if a buyer wants to buy, for example, 50 units at a certain price, and a seller agrees on that price but only wants to trade 10 units, then a liquidity of 10 units is generated. Since the quoted volume is assumed to be simulated by a compound Poisson process, the law of large numbers yields exponentially distributed random numbers.
The usual way to describe the price dynamics of an asset price process is to assume conditions, such as stochastic independence of increments, finite variation, and driving factors. Such factors may encompass a drift term, stochastic volatility, speed of mean reversion, or correlation terms. In order to generate a price process that is close to reality, it is necessary to update the randomized price movement with the implied volatilities and the subsequent traded prices of the corresponding derivatives. This means that without having a method to reduce the error between the estimated value and the realized value, it is impossible to stick to the real price development.
The problem with stochastic models is that not every model admits a closed-form solution. In such cases, numerical solutions are needed. These numerical computations must be calibrated to current market data. Since regular calibration techniques require substantial computational resources, the focus of this work lies in the application of robust calibration.
One important calibration parameter that is widely used for pricing financial products is implied volatility. The Heston model assumes stochastic volatility, which seems to reflect reality better than the constant volatility used in the classic Black–Scholes framework. The standard approach is least-squares calibration. The shortcoming of this approach is its sensitivity to the choice of the initial point: the point of convergence depends on the point of departure.
On the basis of the obtained parameters, a simulation of expected traded volume is performed. The parameters needed for estimation of the volatility are $(\kappa, \theta, \sigma, \rho)$, where $\kappa$ is the mean reversion rate, $\theta$ denotes the long-run variance, $\sigma$ is the volatility of variance, and $\rho$ is the correlation. The error resulting from the least-squares calibration is subject to optimization procedures.
To reduce this error, several optimization methods exist, such as genetic algorithms (GA) (Poklewski-Koziell 2012), the particle swarm optimization technique (PSO) (Ruiz et al. 2015), the Levenberg–Marquardt method (LM) (Lu et al. 2007), and the Nelder–Mead Simplex method (SM) (Spall 2001). Not many papers have dealt with these kinds of optimization techniques in terms of financial market data calibration. Their application to liquidity estimation constitutes the novelty of this research.
A potential extension of this article could be the consideration of a regime-switching Markov model, similar to the work of Mehrdoust et al. (2023), who calculate American put option prices using Levenberg–Marquardt optimization by calibrating a regime-switching double Heston model. For future research, their approach could be applied to all of the optimization techniques presented in this article.
Poklewski-Koziell (2012) uses genetic algorithms to calibrate the Heston model to synthetically generated data. This approach is used in this article in order to evaluate its performance against the other optimization techniques. Following the approach of Gilli and Schumann (2010), the option bid and ask prices and volumes are used in this framework to calibrate option prices by applying the particle swarm optimization technique to the Heston model. According to Beltrami (2009–2010), the Levenberg–Marquardt optimization locates the minimum of a function that is expressed as the sum of the squares of non-linear functions. In our context, it is applied to optimize the relevant Heston parameters so that the sum of the squared deviations from the realized price values becomes minimal.
All these optimization techniques are performed in order to minimize the estimation error between the Heston option price and the realized option price, under the condition of minimizing the sum of the squared percentage errors between model- and market-implied volatilities (Bauer 2012; Yang et al. 2010).
The structure of the paper is as follows. In Section 2, the liquidity model used to generate the bid and ask price and volume processes is described. In Section 3, the model for calibrating the Heston model is presented, and the different optimization techniques and their validation are discussed. Section 4 presents the data and results. Finally, the performance of the calibrated Heston model and the standard Geometric Brownian motion is compared in Section 5. Section 6 concludes the paper.
2. Model Description
The starting point of our research is the liquidity intersection model by Unger and Hughston (2011). It defines a liquidity-generating process based on four key parameters that prevail in the market: bid price, ask price, bid volume, and ask volume. The mechanics are as follows: the bid and ask prices are simulated independently by Geometric Brownian motions, as are the corresponding bid and ask volume processes, which are simulated by compound Poisson processes. Every time the bid price reaches or exceeds the level of the ask price, a stopping time $\tau$ is defined. This stopping time refers to the time at which liquidity is generated by the minimum of the prevailing quoted bid and ask volumes. By repetition and averaging of the procedure, the simulated traded volume converges to one single value, depending on the bid/ask spread. This single value is the estimated liquidity for the simulation period of the corresponding asset.
The described simulation is conducted under the assumption of normally distributed random variables and by estimation of $\mu$ and $\sigma$. These are the input parameters for the Geometric Brownian motions, which serve as the driving random price processes of the bid and ask prices. The procedure applies the Black–Scholes framework and all of its properties. The point of interest is where we assume a different underlying price process for the liquidity-generating process. For this purpose, we apply the Heston model and show how the liquidity value behaves when the Heston model is calibrated with four different optimization methods. These are the previously mentioned genetic algorithms (GA), particle swarm optimization technique (PSO), Levenberg–Marquardt method (LM), and Simplex method (SM).
For the estimation of market liquidity, we assume two stochastic processes for the development of the bid (i) and ask (j) prices, two compound stochastic processes for simulation of the bid (i) and ask (j) volumes, and an algorithm for the calculation of the arithmetic mean of the traded volumes over a certain period of time. The arithmetic mean is calculated by dividing the generated traded volume during a certain time interval by the total number of time steps.
2.1. Simulation of Bid (i) and Ask (j) Prices
For the simulation of the prices, we propose a multivariate Geometric Brownian motion and a Heston model and compare the estimated results.
The multivariate Geometric Brownian motions for simulation of the price process of the buyer $S_t^{(i)}$ and the price process of the seller $S_t^{(j)}$ with $\mu$ (price drift) and $\sigma$ (price volatility) can be written as follows:
$$dS_t^{(i)} = \mu_i S_t^{(i)}\,dt + \sigma_i S_t^{(i)}\,dW_t^{(i)}, \qquad dS_t^{(j)} = \mu_j S_t^{(j)}\,dt + \sigma_j S_t^{(j)}\,dW_t^{(j)},$$
where the correlation between the Wiener bid and ask processes $W_t^{(i)}$ and $W_t^{(j)}$ is denoted by $\rho$.
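As a hedged illustration of this setup, the two correlated price paths can be simulated with an exact log-Euler step; the drift, volatility, correlation, and starting quotes below are illustrative values, not calibrated ones:

```python
import math, random

def simulate_gbm_pair(s0_bid, s0_ask, mu, sigma, rho, n_steps, dt, seed=None):
    """Simulate correlated bid/ask price paths under Geometric Brownian motion.

    The two Wiener increments are correlated with coefficient rho via a
    Cholesky split. All parameter values are illustrative, not calibrated.
    """
    rng = random.Random(seed)
    bid, ask = [s0_bid], [s0_ask]
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        # exact step: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
        bid.append(bid[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                      + sigma * math.sqrt(dt) * z1))
        ask.append(ask[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                      + sigma * math.sqrt(dt) * z2))
    return bid, ask
```

The exact exponential step avoids the discretization bias of a plain Euler scheme for GBM.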
The Heston model assumes a stochastic volatility development of the bid (i) and ask (j) prices with parameters $\mu$ (price drift), $v_t$ (price variance), $\kappa$ (rate of mean reversion), $\theta$ (long-run variance), $\sigma$ (volatility of variance), and $W_t$ (standard Brownian motions). Thus, for the bid dynamics, we use:
$$dS_t^{(i)} = \mu_i S_t^{(i)}\,dt + \sqrt{v_t^{(i)}}\,S_t^{(i)}\,dW_t^{S,(i)}, \qquad dv_t^{(i)} = \kappa_i\big(\theta_i - v_t^{(i)}\big)\,dt + \sigma_i\sqrt{v_t^{(i)}}\,dW_t^{v,(i)},$$
and for the ask price dynamics:
$$dS_t^{(j)} = \mu_j S_t^{(j)}\,dt + \sqrt{v_t^{(j)}}\,S_t^{(j)}\,dW_t^{S,(j)}, \qquad dv_t^{(j)} = \kappa_j\big(\theta_j - v_t^{(j)}\big)\,dt + \sigma_j\sqrt{v_t^{(j)}}\,dW_t^{v,(j)},$$
where $W_t^{S}$ and $W_t^{v}$ are correlated by $\rho$ due to the leverage effect between the asset price and instantaneous volatility.
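A minimal sketch of how one such Heston price path can be discretized is given below, using a plain Euler scheme with full truncation; the paper's simulations may use a different discretization (e.g. the QE scheme), and all input values are illustrative:

```python
import math, random

def simulate_heston_path(s0, v0, mu, kappa, theta, sigma_v, rho, n_steps, dt, seed=None):
    """One Heston price path via an Euler scheme with full truncation.

    kappa: rate of mean reversion, theta: long-run variance,
    sigma_v: volatility of variance, rho: price/variance correlation.
    Illustrative sketch only; production code would rather use the QE scheme.
    """
    rng = random.Random(seed)
    s, v = s0, v0
    path = [s0]
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        v_pos = max(v, 0.0)  # full truncation: keep variance usable if v dips below zero
        s *= math.exp((mu - 0.5 * v_pos) * dt + math.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + sigma_v * math.sqrt(v_pos * dt) * z2
        path.append(s)
    return path
```

Running it once per side (bid and ask) with correlated driving noise yields the two quote paths used by the liquidity model.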
2.2. Simulation of Bid (i) and Ask (j) Volumes
In parallel to the bid and ask price processes, two corresponding bid and ask volume processes are running. For their simulation, we propose two methods:
2.2.1. Compound Poisson Process
The dynamics of the volume processes $V_t^{(i)}$ and $V_t^{(j)}$ are assumed to follow a compound Poisson process with a jump rate $\lambda$. Thus, for the processes of the bid and ask volumes, we set the following probability functions:
$$P\big(V_t^{(i)} = n\big) = \frac{(\lambda t)^n}{n!}\,e^{-\lambda t}, \qquad P\big(V_t^{(j)} = n\big) = \frac{(\lambda t)^n}{n!}\,e^{-\lambda t},$$
where $V_t^{(i)}$ and $V_t^{(j)}$ denote the submitted volume quote $n$ that a buyer or a seller wants to buy or sell, respectively, at a time $t$.
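The quote arrivals described above can be sketched as a compound Poisson simulation; the exponential jump-size distribution for the quoted volumes is an illustrative assumption, not the paper's fitted distribution:

```python
import random

def simulate_compound_poisson_volume(lam, mean_size, horizon, seed=None):
    """Quoted-volume arrivals as a compound Poisson process.

    Inter-arrival times are exponential with rate lam; each arrival carries
    an integer volume drawn (illustratively) from an exponential distribution
    with the given mean. Returns (time, volume) events up to the horizon.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam)  # exponential waiting time between quotes
        if t > horizon:
            break
        size = max(1, round(rng.expovariate(1.0 / mean_size)))
        events.append((t, size))
    return events
```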
2.2.2. Inverse Transform Sampling
The dynamics of the volume processes $V_t^{(i)}$ and $V_t^{(j)}$ are assumed to be simulated with inverse transform sampling as follows:
Let $p$ be the probability mass function (PMF) of the discrete random variables $V^{(i)}$ (bid volume) and $V^{(j)}$ (ask volume), calculated from historical data with quoted bid volumes $v_1^{(i)}, \dots, v_N^{(i)}$ and quoted ask volumes $v_1^{(j)}, \dots, v_N^{(j)}$. Then, the quoted bid (i) and ask (j) volumes at a time $t$ are given, respectively, by:
$$V_t^{(i)} = F_i^{-1}(U), \qquad V_t^{(j)} = F_j^{-1}(U),$$
with $F$ denoting the cumulative distribution function (CDF) of the historical quoted bid (i) and quoted ask (j) volumes, calculated from the PMF $p$, and $U$ denoting a uniform random variable between 0 and 1.
Since $P\big(F^{-1}(U) \le v\big) = P\big(U \le F(v)\big) = F(v)$ for every volume level $v$, the sampled volumes reproduce the historical distribution exactly, which shows that inverse transform sampling can be a good estimator for the development of the bid (i) and ask (j) volume processes:
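A minimal sketch of this empirical inverse-transform sampler, assuming the historical quoted volumes are available as a plain list:

```python
import bisect, random

def empirical_sampler(historical_volumes, seed=None):
    """Build an inverse-transform sampler from historical quoted volumes.

    The empirical PMF/CDF is computed from the data; a uniform U in (0,1)
    is then mapped through the inverse CDF. Illustrative sketch only.
    """
    rng = random.Random(seed)
    values = sorted(set(historical_volumes))
    n = len(historical_volumes)
    cdf, acc = [], 0.0
    for v in values:
        acc += historical_volumes.count(v) / n
        cdf.append(acc)

    def sample():
        u = rng.random()
        # first index whose cumulative probability reaches u (guard for float rounding)
        return values[min(bisect.bisect_left(cdf, u), len(values) - 1)]

    return sample
```

By construction, every draw is one of the historically observed volume levels, with the historical frequencies.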
2.3. Liquidity Estimation
The resulting compound volume process $V_t = \min\big(V_t^{(i)}, V_t^{(j)}\big)$ at a time $t$ characterizes the volume-generating process induced by trading. The result is a non-homogeneous compound process. The min condition ensures that the minimum amount of two matched quantities is listed as a transaction, or traded volume, responsible for the market impact and its costs. By matching the bid and ask prices and taking the average of each possible generated volume, we obtain the average traded volume (liquidity) over a certain time period $n$:
$$\bar{V}_n = \frac{1}{n} \sum_{k=1}^{n} \min\big(V_{\tau_k}^{(i)}, V_{\tau_k}^{(j)}\big).$$
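The complete liquidity estimation loop can be sketched as follows; the price dynamics, volume distribution, and spread reset after a trade are deliberate simplifications for illustration, not the paper's calibrated setup:

```python
import math, random

def estimate_liquidity(n_steps, seed=None):
    """Sketch of the liquidity estimator: whenever the simulated bid price
    reaches or exceeds the ask price (a stopping time), the traded volume is
    the minimum of the two quoted volumes; liquidity is the average volume
    generated over the time grid. All numerical values are illustrative."""
    rng = random.Random(seed)
    bid, ask = 99.9, 100.1
    sigma, dt = 0.2, 1.0 / 250
    total_volume = 0
    for _ in range(n_steps):
        bid *= math.exp(-0.5 * sigma**2 * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
        ask *= math.exp(-0.5 * sigma**2 * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
        if bid >= ask:  # quotes cross: a trade is generated
            bid_vol = max(1, round(rng.expovariate(1 / 50)))
            ask_vol = max(1, round(rng.expovariate(1 / 50)))
            total_volume += min(bid_vol, ask_vol)
            bid = ask * 0.998  # simplification: re-open the spread after the trade
    return total_volume / n_steps

avg_liq = estimate_liquidity(10_000, seed=11)
```

Averaging over many steps (or repeated runs) makes the estimator converge to a single liquidity value, as described above.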
3. Optimization
To simulate the bid and ask prices, we proposed a Geometric Brownian motion and the Heston model.
Since the GBM yields lognormally distributed prices with parameters $\mu$ and $\sigma$, we can use the average mean value as the estimator for $\mu$ and the standard deviation as the estimator for $\sigma$, both calculated from the historical data.
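A small sketch of this moment-based GBM estimation from a historical price series follows; the annualization convention via the time step dt is an assumption:

```python
import math

def estimate_gbm_params(prices, dt):
    """Estimate GBM drift mu and volatility sigma from a price series via the
    sample mean and standard deviation of the log returns (sketch only;
    the annualization depends on the chosen time step dt)."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / (n - 1)
    sigma = math.sqrt(var / dt)
    mu = mean / dt + 0.5 * sigma**2  # add back the Ito correction to get the GBM drift
    return mu, sigma
```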
The Heston model is based on the assumption that the volatility of the underlying asset is stochastic and includes more parameters requiring estimation. To conduct an estimation, we use the theoretical plain vanilla bid/ask option price. We proceed by developing an analytic expression for the Fourier transform of the option price and then re-obtain the price by Fourier inversion.
For optimizing the order submission flow on the ask volume as well as on the bid volume, we perform a chi-square optimization. Our null hypothesis states that the real arrival times of the conducted trades follow a compound Poisson distribution or the distribution obtained by inverse transform sampling.
3.1. Optimization of Heston Parameters
In order to validate the effectiveness of the Heston model optimization, we estimate the Heston parameters by using four different techniques: 1. genetic algorithms, 2. particle swarm optimization, 3. Levenberg–Marquardt, and 4. Nelder–Mead Simplex.
We perform these optimization techniques in order to minimize the root mean square error between the estimated plain vanilla Heston option price and the realized option price of option $i$ on the estimation day:
$$\mathrm{RMSE}(\Theta) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\Big(C^{H}\big(\Theta; S, K_i, r, T_i\big) - C_i\Big)^2},$$
where $\Theta$ is the set of Heston parameters to be estimated, $N$ is the number of options on the estimation day, $C^{H}$ is the Heston call function denoting the dollar-adjusted plain vanilla call option price, $r$ is the interest rate, $T_i$ the maturity of option $i$, $S$ the closing price of the underlying asset, and $K_i$ the strike of option $i$.
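The calibration objective can be sketched as follows; heston_call_price is a placeholder name standing for any pricing routine (e.g. a Fourier-inversion pricer), and the dictionary layout of the option records is an illustrative choice:

```python
import math

def heston_rmse(params, options, heston_call_price):
    """Root mean square error between model and market option prices.

    options: list of dicts with spot S, strike K, rate r, maturity T and the
    observed market price. heston_call_price is assumed to be a pricing
    function supplied by the caller; both names are placeholders."""
    sq_sum = 0.0
    for opt in options:
        model = heston_call_price(params, opt["S"], opt["K"], opt["r"], opt["T"])
        sq_sum += (model - opt["market_price"]) ** 2
    return math.sqrt(sq_sum / len(options))
```

Each optimizer below then searches the parameter space for the set that minimizes this function.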
In order to calculate the plain vanilla option prices, the Fast Fourier Transform (FFT) of Carr and Madan (1999), for example, can be applied. Assuming no dividends and a constant interest rate $r$, the initial option value $C_0$ is
$$C_0 = S_0 \Pi_1 - K e^{-rT} \Pi_2,$$
with risk-neutral probability that the option matures in-the-money:
$$\Pi_2 = \frac{1}{2} + \frac{1}{\pi} \int_0^\infty \mathrm{Re}\left[\frac{e^{-iu \ln K}\,\varphi_T(u)}{iu}\right] du,$$
and the delta of the option (Carr and Madan 1999):
$$\Pi_1 = \frac{1}{2} + \frac{1}{\pi} \int_0^\infty \mathrm{Re}\left[\frac{e^{-iu \ln K}\,\varphi_T(u - i)}{iu\,\varphi_T(-i)}\right] du,$$
where $\varphi_T(u) = \mathbb{E}\big[e^{iu \ln S_T}\big]$ denotes the characteristic function of $\ln S_T$, the risk-neutral log-price of the underlying (Putschögl 2010).
Since the integrand is singular at $u = 0$, the FFT cannot be used to evaluate the integral directly. However, to make use of the speed advantage of the FFT, we use the relation between the initial call option value $C_T(k)$, written as a function of the log strike $k = \ln K$, and the risk-neutral density $q_T(s)$ of the log price:
$$C_T(k) = \int_k^\infty e^{-rT}\big(e^s - e^k\big)\, q_T(s)\, ds.$$
$C_T(k)$ tends to $S_0$ as $k$ tends to $-\infty$, which indicates that the call price function is not square-integrable. However, the FFT can be applied when using the modified call price $c_T(k)$ defined by
$$c_T(k) = e^{\alpha k} C_T(k), \qquad \alpha > 0.$$
With the inverse transform, we can obtain $C_T(k)$ by
$$C_T(k) = \frac{e^{-\alpha k}}{\pi} \int_0^\infty \mathrm{Re}\big[e^{-iuk}\, \psi_T(u)\big]\, du,$$
where
$$\psi_T(u) = \frac{e^{-rT}\, \varphi_T\big(u - (\alpha + 1)i\big)}{\alpha^2 + \alpha - u^2 + i(2\alpha + 1)u}.$$
For out-of-the-money options, the call price can be obtained from the Fourier transform $\zeta_T(u)$ of the option's time value $z_T(k)$; by inversion of this transform, we obtain the corresponding price (Carr and Madan 1999).
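To make the damped-transform inversion concrete, the following sketch prices a call by the integral above, using the Black–Scholes characteristic function as a simple stand-in for the Heston one (same role, shorter closed form) and a plain midpoint quadrature instead of the FFT; the closed-form Black–Scholes price serves as a cross-check:

```python
import cmath, math

def bs_log_charfn(u, s0, r, sigma, t):
    """Characteristic function of ln S_T under Black-Scholes dynamics,
    used here as a stand-in for the Heston characteristic function."""
    m = math.log(s0) + (r - 0.5 * sigma**2) * t
    return cmath.exp(1j * u * m - 0.5 * sigma**2 * u * u * t)

def carr_madan_call(s0, k_strike, r, sigma, t, alpha=1.5, u_max=200.0, n=4000):
    """Damped-call Fourier inversion of Carr and Madan (1999), evaluated with
    a midpoint quadrature instead of the FFT for readability."""
    k = math.log(k_strike)
    du = u_max / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        phi = bs_log_charfn(u - (alpha + 1) * 1j, s0, r, sigma, t)
        psi = cmath.exp(-r * t) * phi / (alpha**2 + alpha - u**2 + 1j * (2 * alpha + 1) * u)
        total += (cmath.exp(-1j * u * k) * psi).real * du
    return math.exp(-alpha * k) / math.pi * total

def bs_call(s0, k_strike, r, sigma, t):
    """Black-Scholes closed form, used to cross-check the inversion."""
    d1 = (math.log(s0 / k_strike) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    ncdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * ncdf(d1) - k_strike * math.exp(-r * t) * ncdf(d2)
```

Swapping bs_log_charfn for the Heston characteristic function (and keeping everything else) turns this into a Heston pricer; an FFT over a log-strike grid recovers the speed advantage discussed above.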
Since we are now able to generate option prices based on stochastic volatility, we can simulate the bid and ask prices of the stock using the Heston model. The calibration of the Heston parameters follows the approach of the multi-asset Heston model. This allows us to simulate the development of bid and ask prices while taking their high correlation into account (Lichters and Markus 2016).
Since the quoted bid and ask prices in the order book do not always correspond to realized traded prices (being too high or too low), a selection of the optimal options for the calibration of the Heston model is necessary. For this reason, the options were screened by maturity and moneyness level, and the optimal number of options in the sample as well as the historic sampling period were determined.
In order to solve the optimization problem, four different techniques are applied and compared by validating their accuracies within the Heston model to the realized historical prices of the options.
3.1.1. Heston Model Calibration with Genetic Algorithms
Genetic algorithms (GAs) are rooted in the principles of natural selection and evolution, favoring the selection of stronger individuals within a population over weaker ones. This concept is harnessed by GAs to optimize the relevant parameters of the Heston model. The optimization process revolves around evaluating how effectively individual parameters within the parameter space contribute to minimizing the objective function. Each individual parameter is assigned a fitness value based on its ability to minimize the difference with the objective function. Parameters that align well with the objective function, thus leading to better fitness scores, are granted the privilege of reproducing. This reproduction process forms the basis for creating subsequent generations within the population. As generations progress, the genetic algorithm iteratively refines the parameter values, allowing for the emergence of increasingly effective solutions in the pursuit of optimizing the Heston model.
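A minimal, generic GA of this kind might look as follows; the operators (tournament selection, blend crossover, Gaussian mutation) and the rates are illustrative choices, not the paper's configuration:

```python
import random

def genetic_minimize(objective, bounds, pop_size=40, generations=80, seed=None):
    """Minimal real-coded genetic algorithm with tournament selection, blend
    crossover, Gaussian mutation, and elitism. Sketch of the GA idea only;
    objective and parameter bounds are supplied by the caller."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        new_pop = [best[:]]  # elitism: carry the fittest individual over
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=objective)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=objective)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # blend crossover
            for i, (lo, hi) in enumerate(bounds):  # Gaussian mutation, clipped
                if rng.random() < 0.2:
                    child[i] += rng.gauss(0, 0.1 * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=objective)
    return best
```

For the calibration, objective would be the RMSE function and bounds the admissible Heston parameter ranges.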
3.1.2. Heston Model Calibration with Particle Swarm Optimization
Particle swarm optimization (PSO) is a computational algorithm for finding minima in multi-dimensional spaces. Unlike genetic algorithms (GAs), in which individuals evolve through selection, crossover, and mutation, PSO employs a population of particles that continuously move within the solution space, adjusting their positions based on their experiences and those of their neighbors (Ruiz et al. 2015). This swarm intelligence approach makes PSO particularly effective in identifying the correct minimum, even in the presence of multiple solutions, making it valuable for global optimization tasks due to its computational efficiency.
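A minimal PSO sketch with common textbook coefficient defaults (not the tuned values used for the calibration) illustrates the personal-best/global-best mechanics:

```python
import random

def pso_minimize(objective, bounds, n_particles=30, iters=100, seed=None):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best and is pulled toward the swarm's global best. Coefficients are
    common defaults; sketch of the PSO idea only."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:  # update personal and possibly global best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```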
3.1.3. Heston Model Calibration with Levenberg–Marquardt
The Levenberg–Marquardt method is the industry standard for optimizing multivariate nonlinear systems represented as least squares problems. LM is an iterative method for finding the minimum of a function expressed as the sum of the squares of nonlinear functions (Beltrami 2009–2010). An advantage of the Levenberg–Marquardt method is its fast convergence compared with the other optimization methods. As shown in this article, this property also holds for the estimation of the Heston parameters.
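The LM iteration can be sketched on a toy two-parameter least squares fit; the forward-difference Jacobian, the damping update rule, and the step sizes below are illustrative, not the implementation used in the paper:

```python
import math

def levenberg_marquardt(residual_fn, p0, iters=60):
    """Minimal Levenberg-Marquardt for a two-parameter least squares fit.

    The Jacobian is approximated by forward differences and the damped
    normal equations (J^T J + lam*I) delta = -J^T r are solved in closed
    form for the 2x2 case. Illustrative sketch only."""
    p = list(p0)
    lam = 1e-3
    sq = lambda r: sum(x * x for x in r)
    r = residual_fn(p)
    for _ in range(iters):
        J = []
        for j in range(2):  # forward-difference Jacobian column for parameter j
            q = p[:]
            h = 1e-6 * max(abs(p[j]), 1.0)
            q[j] += h
            J.append([(a - b) / h for a, b in zip(residual_fn(q), r)])
        m = len(r)
        a01 = sum(J[0][k] * J[1][k] for k in range(m))
        a11 = sum(J[0][k] ** 2 for k in range(m)) + lam
        a22 = sum(J[1][k] ** 2 for k in range(m)) + lam
        g = [sum(J[i][k] * r[k] for k in range(m)) for i in range(2)]
        det = a11 * a22 - a01 * a01
        delta = [(-g[0] * a22 + g[1] * a01) / det,
                 (-g[1] * a11 + g[0] * a01) / det]
        trial = [p[0] + delta[0], p[1] + delta[1]]
        r_trial = residual_fn(trial)
        if sq(r_trial) < sq(r):  # accept the step and relax the damping
            p, r, lam = trial, r_trial, lam / 3
        else:                    # reject: damp harder, toward gradient descent
            lam *= 3
    return p
```

In the calibration setting, the residual vector would collect the per-option deviations between Heston and market prices, with one Jacobian column per Heston parameter.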
3.1.4. Heston Model Calibration with Nelder–Mead Simplex
The Nelder–Mead simplex method is a nonlinear optimization approach that generates new points in or near a geometric object (the simplex) at each iteration. It uses a reflection step to move from high-energy regions to low-energy regions. Calibration of the Heston model is complete when the simplex becomes small enough (Kienitz and Wetterau 2012).
3.1.5. Heston Model Calibration with Levenberg–Marquardt Combined with Genetic Algorithms
Due to the large amount of data, high frequency of the simulations, and complexity of the closed Heston formula for calculating option prices, the computer run times for optimizing Heston parameters are very high. For this reason, we have chosen weak termination conditions (maximum function evaluation, maximum iterations, termination tolerance on the function value). The calibration results are very satisfactory for all optimization methods.
Nevertheless, we also tested a combination of Levenberg–Marquardt and a genetic algorithm by using the optimal parameters of LM as starting parameters for the GA. This allows us to reduce the optimization error (see results below).
3.2. Chi-Square Optimization Test of Bid and Ask Volume
For the simulation of the order submission flow, we test for two different arrival time distributions for both the ask and bid sides: compound Poisson-distributed and inverse-transformed arrival times. Our null hypothesis states that these simulated arrival times correspond to the real order submission arrival times:
$H_0$: The bid/ask volume generated by the compound Poisson process or inverse transform has a statistically significant association with the cumulative distribution function of the real observed bid/ask volume data.
$H_1$: The bid/ask volume generated by the compound Poisson process or inverse transform is distinct from the cumulative distribution function of the real observed bid/ask volume data.
In order to test this assumption, we conduct a chi-square test for different time intervals and take the arrival times that best match reality.
The compound Poisson process is a distribution that consists of two parameters, $\lambda$ and $n$, which measure the arrival time and volume, respectively. To estimate $\lambda$, we take a sample of real order submissions, whose mean arrival rate serves as the arrival intensity for our simulation:
$$\hat{\lambda} = \frac{1}{T} \sum_{t=1}^{T} N_t,$$
where $N_t$ is the number of order submissions per time unit. $\hat{\lambda}$ serves as a good estimator under the maximum likelihood method. To estimate $n$, we take historical data and calculate the mean value of the bid/ask volumes for a specific time period.
For the inverse-transformed distribution, we take a random sample from the past based on the PMF.
In order to perform a chi-square test for the simulated ask and bid order submissions, we classify the simulated volumes into bins for both methods, on the ask volume side as well as on the bid volume side, and conduct the test separately for the ask-volume and the bid-volume sides.
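The test statistic itself is the standard Pearson form; the binning choices and degrees of freedom depend on the data and are not reproduced here:

```python
def chi_square_statistic(observed, expected):
    """Pearson chi-square statistic comparing observed bin counts (real
    order-flow volumes) against expected counts (simulated volumes).

    The statistic is compared with the critical value for the relevant
    degrees of freedom, e.g. 11.07 at the 5% level for df = 5; the null
    hypothesis is rejected when the statistic exceeds that value."""
    assert len(observed) == len(expected)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```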
4. Validation of Parameter Optimization
This section provides the results of the parameter estimation using FESX50 Future option data, as well as the statistical test results from the bid/ask volume chi-square optimization. For the FESX50 parameter optimization, we compare the root mean square errors of the four different optimization techniques, i.e., genetic algorithms (GA), particle swarm optimization (PSO), Levenberg–Marquardt (LM), and Nelder–Mead Simplex (NM). The parameters of interest are $\kappa$, $\theta$, $\sigma$, $\rho$, and $v_0$, which are needed to determine the price of an option priced with the Heston model.
For the bid/ask volume chi-square optimization, we provide in-sample as well as out-of-sample test results for the compound Poisson process and the inverse transformation.
In order to determine the optimal model choice for the calibration of the Heston model, we only take the bid prices, for computational reasons, since they exhibit a correlation of more than 92% with the ask prices. This high correlation has also been shown by Lichters and Trahe in Lichters and Markus (2016). It indicates that the model choice for the ask prices would be the same as for the bid prices. Nevertheless, it is important to stress that, for the simulation of the stochastic processes, both bid and ask data are used.
4.1. Validation of Heston Optimization
The time frame used for testing is 27 April 2015–30 December 2015; the source for the option data shown in Table 1 and Figure 1 is Interactive Brokers, and the source for the Euribor interest rate data shown in Table 2 is Bloomberg. We use 1301 call options with different maturities and different strikes. On average, between 70 and 80 options were traded daily.
For the estimation of the parameters, we need to use different maturities. For the calculation, we use linear interpolation:
$$r(t, T) = r_1 + (r_2 - r_1)\,\frac{T - T_1}{T_2 - T_1},$$
where $r(t, T)$ is the interest rate that needs to be determined at a time $t$ with the maturity $T$, and $r_1$, $r_2$ are the given short rates at a time $t_1$, or $t_2$, with the maturity $T_1$, or $T_2$, respectively. By interpolation, we obtain the following sequence of Euribor interest rates (Figure 2):
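The interpolation step can be coded directly; the function name and signature are illustrative:

```python
def interp_rate(T, T1, r1, T2, r2):
    """Linear interpolation of a short rate between two quoted maturities
    T1 <= T <= T2, as used to build the Euribor rate sequence."""
    return r1 + (r2 - r1) * (T - T1) / (T2 - T1)
```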
For the estimation of the Heston parameters, we use the same start values, the same lower and upper bounds, and the same termination conditions for all four optimization methods (Table 3 and Table 4). These are the optimal start parameters calculated from the historical bid/ask call data.
The given input optimization parameters are tested based on all traded bid and ask options on 20 August 2015. The out-of-sample test uses the options traded on 21 August 2015. The calculated optimal Heston parameters are shown in Table 5.
Figure 3 and Figure 4 show the results of calibrating the parameters of the Heston model both in-sample and out-of-sample.
Figure 3. In-sample market vs. estimated traded call options at 20 August 2015.
Figure 4. Out-of-sample market vs. estimated traded call options at 21 August 2015.
Table 6 and Table 7 show that blocks of short maturities lead to smaller RMSEs in in-sample as well as out-of-sample estimations. In particular, the 0–3 months maturity block outperforms longer-dated maturities. For the estimation procedure, we can see that LM + GA generates the smallest RMSE in this short maturity block, in-sample as well as out-of-sample. In the intermediate (3–6 months) maturities, LM + GA outperforms the other optimization methods in-sample but not out-of-sample. To summarize, we can see a strong dominance of the LM + GA method compared with the other optimization methods, in-sample as well as out-of-sample.
Figure 5 shows the option prices for the in-sample estimated parameters (27 April 2015–30 December 2015) for a long-maturity option with maturity 16 December 2016, Strike = 3800:
Figure 6 shows the option prices for the out-of-sample parameter estimation (28 April 2015–30 December 2015) for options with maturity 16 December 2016, Strike = 3800 (parameters are estimated using data 24 h prior):
Figure 7 shows the option prices of the in-sample (27 April 2015–30 December 2015) estimated parameters for a short-maturity option with maturity 15 January 2016, Strike = 3500:
Figure 8 shows the option prices of the out-of-sample parameter estimation (28 April 2015–30 December 2015) for an option with maturity 15 January 2016, Strike = 3500 (parameters are estimated using data 24 h prior):
In-sample RMSEs are smallest using the LM + GA method for at-the-money as well as out-of-the-money options. Using the in-the-money options, we can see that the NMSim method produces the smallest RMSE in-sample (Table 8). We obtain similar results for the out-of-sample test (Table 9), which indicates that LM + GA works best when using at-the-money as well as out-of-the-money options for parameter estimation, whereas NMSim generates the smallest RMSE using in-the-money options for parameter estimation. Since LM + GA provides the strongest test results, we apply this combination for the calibration of the Heston parameters.
Moreover, we determine the optimal time frame for historic parameter estimation. From all of these combinations, we then rank the options according to their calibration results. Our tests show that the optimal Heston parameters can be calculated using data from the best 23 options of the last five days, selected by maturity and moneyness.
Next, the corresponding graphs using the LM method are presented.
Figure 9, Figure 10 and Figure 11 show the at-the-money, out-of-the-money, and in-the-money call option prices for the in-sample calibration. Figure 12, Figure 13 and Figure 14 show the at-the-money, out-of-the-money, and in-the-money call option prices for the out-of-sample calibration.
The general test results show the smallest RMSE for the LM method, which indicates that LM is superior to the other optimization methods. Not only does LM generate the smallest RMSE, but it is also the fastest optimization method in terms of computational time (Table 10).
Even though LM yields the best calibration results among all optimization methods, we can achieve an improvement in the test results via a combination of the LM method and genetic algorithms.
4.2. Validation of Chi-Square Optimization
For validation of the chi-square test results, we take the historic FESX50 tick data and accumulate all tick changes, regardless of price or volume changes or new order submission, at each second. We take the tick data from 26 April 2015 to 30 December 2015 and test in-sample as well as out-of-sample. The in-sample test comprises 1–50 days, whereas the out-of-sample test takes these 1–50 in-sample days as an estimation set and tests 1 day out-of-sample.
Table 11 shows that
for the compound Poisson distribution can be rejected for all tested in-sample time lengths. The real observed order submission arrival times cannot be described in-sample by a Poisson distribution.
Table 12 shows that the null hypothesis for the compound Poisson distribution can be rejected for all tested out-of-sample time lengths. The real observed order submission arrival times cannot be described out-of-sample by a Poisson distribution.
Table 13 shows that the null hypothesis for an inverse transformation can be accepted for all tested in-sample time lengths, meaning that the real observed order submission arrival times can be described in-sample by an inverse transformation.
Table 14 shows that the null hypothesis for an inverse transformation can be accepted for some tested out-of-sample time lengths, meaning that the real observed order submission arrival times can partly be described out-of-sample by an inverse transformation.
To summarize, the best estimation results are achieved by choosing an inverse transformation with a two-day time window.
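Empirical inverse transform sampling, as applied here to the bid and ask volumes, can be sketched as: build the empirical CDF of the historical volumes, draw uniform variates, and map them through the empirical quantile function. The data below are synthetic stand-ins for the FESX50 order book volumes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic historical per-second bid volumes (stand-in for FESX50 data).
hist_volumes = rng.integers(0, 50, size=2000)

# Empirical CDF over the observed volume levels.
values, freq = np.unique(hist_volumes, return_counts=True)
cdf = np.cumsum(freq) / freq.sum()

def sample_volumes(n):
    # Inverse transform: U ~ Uniform(0,1) mapped through the empirical
    # quantile function via binary search on the CDF.
    u = rng.random(n)
    return values[np.searchsorted(cdf, u)]

simulated = sample_volumes(10_000)
```

Because the simulated volumes are drawn from the empirical distribution itself, the method reproduces features such as the zero-inflation of the per-second counts, which the compound Poisson process fails to capture.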
The graphical representation of the compound Poisson PMFs shows that the historical bid and ask order submission volumes are not compound Poisson-distributed, whereas the inverse transformation yields very good chi-square test results.
As we can see in
Figure 15 and
Figure 16, the histograms reveal that there are many seconds during which no changes or new submissions occur in bid or ask orders. Therefore, our test sample contains many zeros. In order to avoid distortions, we also test the compound Poisson process without these zero values.
In contrast,
Figure 17 and
Figure 18 show that inverse transform sampling is a better method for simulating bid and ask volumes.
6. Conclusions
This work compares the estimation power of the Heston model and the geometric Brownian motion (GBM) model, as well as of inverse transform sampling and a compound Poisson process. Their application to market liquidity estimation is a novel approach.
For the simulation of bid and ask prices, the Heston and GBM models are applied. To calibrate the Heston parameters, several optimization methods are used, and their estimation power is compared in-sample as well as out-of-sample using historic data of Euro Stoxx 50 Future options. The estimation procedures for the Heston parameters are genetic algorithms (GA), particle swarm optimization (PSO), the Levenberg–Marquardt method (LM), and the Nelder–Mead simplex method (SM). The LM method shows superior estimation power compared to the other optimization techniques. Further, across different option maturities and moneyness levels, shorter-dated maturities yield higher estimation power. The LM method works best for out-of-the-money and at-the-money options, whereas the SM method works best for in-the-money options. Generally, the LM method not only generates the smallest RMSE but also performs best in computational time.
For the simulation of the bid and ask volumes, two types of processes are applied: a compound Poisson process and inverse transform sampling. For a two-day sampling window, the chi-square test indicates the superiority of inverse transform sampling over the compound Poisson process, since the historical bid and ask order submission volumes are not compound Poisson-distributed. With inverse transform sampling, the forecasting error is significantly lower than with volumes simulated by the compound Poisson process. This indicates that inverse transform sampling is the best choice for the simulation of bid and ask volumes.
For liquidity estimation, this means that by simulating bid and ask prices with the calibrated Heston model, market liquidity can be estimated up to 29.14% better than with the GBM model. The results are robust in-sample as well as out-of-sample. The only drawback of the Heston model is the high computation time required for calibrating the parameters and simulating the prices.