1. Introduction
Unlike the oil markets, the gas markets have witnessed regional divergence at several levels. Moreover, the degree of competitiveness varies among the different gas markets.
Following extensive infrastructure development and regulatory changes, the North American market developed transparent and competitive gas pricing hubs. Additional gas hubs emerged afterward in Europe, providing physical and virtual locations for trading gas. The abundance of gas and the presence of competition between stakeholders at different levels of the value chain led to an increase in trade in both the spot and futures markets. However, the price of gas does not reflect market fundamentals and forces in all markets.
The role of the regulators is to promote competitive conduct, domestic gas production, third-party access, and price and trade reporting, and to ensure the presence of futures trading. Once the liberalization measures are implemented and enforced, the status of the gas hub is confirmed as liquid and stable, which results in prices that are indicative of market fundamentals.
In this study, we focus on the North American and European markets, specifically the United Kingdom (UK). The choice of these markets is explained by the fact that both attempted to liberalize their gas markets and underwent intense regulatory and policy changes over the past years [
1].
Wholesale buyers used to follow long-term contracts indexed to the price of oil derivatives in both of the aforementioned markets [
2]. Also, the gas industry was mostly dominated by state-owned monopolies.
However, the Federal Energy Regulatory Commission (FERC) encouraged the establishment of gas markets driven by free competition in the United States [
3]. As a result, the Henry Hub (HH), known as the most successful hub, was created [
4]. The success of the HH is marked by a large, liquid portfolio of spot and futures contracts, along with hub-indexed prices which serve as a reference for the value of the gas commodity all over North America.
Consequently, the UK and the European Union (EU) started their reforms. The Office of Gas and Electricity Markets (OFGEM) took the lead and started the process of market liberalization in the 1990s. The reforms led to the establishment of the National Balancing Point (NBP), which serves as a virtual trading point for gas in the UK. Currently, the NBP is considered to be the most developed hub in Europe and is the longest-standing European gas pricing point [
5,
6]. It is worth mentioning that the UK gas market and the European gas market are used interchangeably in the remainder of this study.
In 2016, U.S. natural gas consumption was roughly 750 billion cubic meters (BCM) [
7]. The majority of the demand was satisfied through indigenous production, and the remainder was imported from Canada via pipelines. Additional marginal volumes were imported from Mexico (also via pipelines) and from around the world as liquefied natural gas (LNG) (see
Figure 1).
The UK natural gas consumption in 2016 was estimated at around 73 BCM [
8], of which 42 BCM were imported while the remaining volumes were produced locally (see
Figure 2).
To reflect the gradual advances in supply-side competition, the functioning of a wholesale gas market should be measured quantitatively. This has attracted attention from the professional and scientific communities, and in-depth analyses of gas markets have been conducted and published [
2,
9,
10,
11,
12,
13]. All these studies confirm that parameters such as the number of market participants, monthly day-ahead trades, and the churn ratio give an indication of the state of the market. The churn ratio, calculated as the ratio of traded gas volumes to the total gas demanded, is an indicative measure of the liquidity of a gas hub and of market maturity. Additionally, it measures the confidence of traders and consumers in the market.
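Written out, the churn ratio defined above is simply

\[ \text{Churn ratio} \;=\; \frac{\text{total traded gas volume}}{\text{total gas demand}}, \]

so that, for example, a hub where twenty times the physically consumed volume changes hands has a churn ratio of 20, comfortably above the threshold of 15 referred to below (the figure of 20 is purely illustrative).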
The numbers shown in
Table 1, reveal a high churn ratio for both markets (above 15), which indicates that both hubs are liquid and that the gas prices registered there reflect market conditions [
11]. Therefore, clearing prices are accepted as a reference and indicator, which contain reliable information for all stakeholders involved in the gas value chain (traders, customers, regulators, etc.).
Analysis of recent trends in the European and North American gas markets shows that the prices of gas are fundamentally market-driven. However, rules and policies set by gas regulators are necessary to guarantee that the market keeps operating efficiently [
16].
Information theory, introduced by [
17], is a probabilistic framework that helps to quantify the information generated by a random variable in an uncertain context. Information theory can explain observations without relying on statistical assumptions regarding either the distribution of the random variables or the random noise [
18]. Additional parametric assumptions, such as estimates of demand and cost functions, can also be avoided using information theory. The mathematical tool that will be used in this work to measure the amount of information contained in the wholesale gas clearing prices is the statistical entropy.
Another tool to assess the information in a decision-making problem is the Blackwell approach [
19]. However, this approach has several complexities that prevent a simple application of Blackwell’s principle [
20,
21], especially at the level of cost and return function assumptions. Hence, to overcome these difficulties, the entropy principle will be applied instead.
The first objective of this paper is to study whether the wholesale gas prices of two of the most liberalized gas markets carry valuable information that can serve as indicators for the relevant gas regulators. The value of these indicators will be quantified by using several econometric methods and mathematical theories. This analysis will guide and assist the decision-making process of regulators regarding the need for an intervention to stabilize the gas markets and improve the functioning of their internal markets. The second objective is to quantify and measure the accuracy and efficiency of the hidden information structure generated by these indicators.
All methods applied in this research are proven mathematical theories that have been used in previous studies [
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34]. All four theories (information, records, entropy, and game theory) have applications in mathematics, statistics, and econometrics. However, the novelty of our method lies in adopting a two-step approach that has not been applied before in the literature. This approach is useful to assess the performance of a gas market in terms of the information generated by several indicators combined in one structure, called the information structure. The indicators give an idea of the level of competition, the level of price volatility, and price stability on one hand, and of the level of the information structure, which measures the performance of the market, on the other. Among all gas stakeholders, this information is most important for gas regulators, who can be more confident in, and better trust, the price indicators if the information generated by the market is powerful and efficient.
The level of competition changes from one market to another and, if measured correctly, defines the concentration of competing firms in the market. A limited number of firms implies a highly concentrated market in which firms, depending on their strategies, can dictate prices (i.e., act as price-setters). Moreover, the fewer the firms, the easier it is to engage in abusive conduct and act collusively. Such firms adjust their strategies in line with an agreed-upon understanding with the competing firms, at the expense of the welfare of gas consumers and possibly of smaller firms. A typical example of such a market and behavior is the presence of cartels in commodity markets.
The level of volatility indicates how fast gas prices change in the short term. The higher the volatility, the harder it is to predict the future behavior of price changes, making the market more uncertain.
On the other hand, price stability hints at the behavior of gas prices in the medium and long term. Commodity prices tend to experience abrupt and rapid shocks: sudden increases or decreases caused by sudden changes in supply and demand characteristics. The longer it takes for a commodity price to witness a shock, the more stable the market is.
The performance of the market is the measure of the power and efficiency of the information contained in the gas prices and the indicators. The more efficient the information, the more reliable and reflective the prices are in such a market.
In the first step of the approach, the authors identified three different indicators: market concentration, price stability, and price uncertainty. The authors then applied three appropriate mathematical and statistical theories to extract, from the gas price time series, the metric values that are most suitable to measure the relevant indicators. The formulation of each model is explained and justified in the next section.
In the second step, the three indicators are combined to create an information structure that will help the authors evaluate the performance of the gas market in question. These are assessed against the actions/states that could be executed by the regulator of such markets. Two actions are identified: to intervene or not to intervene. Intervention in the market is conducted by taking legal actions, such as issuing new directives to ensure a stable supply and demand equilibrium and making sure that there is no abusive conduct by gas suppliers. Furthermore, the approach deals with a case where the information is neither completely absent nor perfectly known, which has rarely been dealt with in the literature.
2. Methods/Data and Models Formulation
To avoid price abuse and manipulation, a gas regulator is expected to regulate firms’ behavior by ensuring that customer welfare is maximized while maintaining the attractiveness and profitability for the producers and traders.
As stated in the introduction, price dynamics of a commodity in a liberalized market are indicative of the market structure. They contain consistent information that should, if adequately analyzed, help the regulators in assessing the performance of the market, namely the wholesale gas market in this case.
The authors have identified three main metrics that can signal information in the hidden structure of the price values of both hubs. These metrics are based on econometric and mathematical methods, and are used to inform the regulator in each market about the following:
Indicator 1: Level of competition;
Indicator 2: Market stability;
Indicator 3: Volatility and uncertainty of prices.
The first indicator studies the degree of concentration in the two different gas markets by using game theory, specifically the non-parametric Nash-Cournot equilibrium test. In other words, if the test shows that traders are participating in the market by trying to maximize their profit as “the only pure” strategy, then the market is considered efficient and the likelihood of anti-competitive behavior is negligible.
The second indicator employs the records theory, which relies on the analysis of the peak observations reached in a certain period. This indicator measures the degree of market stability, by calculating the probability of witnessing future peak prices. Therefore, the measure of probability is a measure of market stability and predictability. If the results point toward a tendency to score high probabilities of extreme gas prices, then the market can be characterized as unstable.
The third and final indicator studies the price predictability of both markets, by the use of Shannon entropy and the measure of volatility. This is done by analyzing the variation of prices and returns and assessing the degree of uncertainty and volatility which are present in gas prices. Simply, the higher the uncertainty in prices, the higher the volatility.
These indicators combined will inform the regulator about the functioning of the market. If the market shows signs of concentration, the likelihood of extreme prices, volatility, and uncertainty, then the regulator should intervene and use its policy enforcement power.
Since our indicators are based on econometric theory and models, it is important to assess the performance of such models. Therefore, a quantitative analysis that relies on information theory is used to compare the power and efficiency of the information generated by all indicators in the two selected markets. The market with the highest information power will give additional credibility to the indicators, so that the gas price time series speaks for itself. Regulators in such a market have higher confidence and can trust the indicators, which will guide their decision of whether or not to intervene in the market.
The following part of
Section 2 will list and define the three indicator metrics and will explain the econometric and mathematical methods used in this manuscript.
Section 2.5 will outline the measure of the power of information. Results will be presented in
Section 3 with an analysis of their significance and impact, with an overview of how they can be used by gas regulators in their assessment of wholesale gas markets. The final section will conclude the study and emphasize the importance of the dynamics of gas prices and the power of the information they provide.
2.1. Data
A description of the data used in the study is presented in
Table 2. The variable set consists of monthly wholesale gas prices registered between October 2009 and June 2018.
Figure 3 illustrates, in a time series plot, the evolution of natural gas prices for the two markets. There is a clear divergence that began in 2009 and continues to date. This is mainly due to two factors that took place in the United States. The first is the abundance and oversupply of new unconventional shale gas production in the local market. The second is that U.S. natural gas contracts in that period started to be decoupled from crude oil prices.
The line plots of the two markets presented in
Figure 3 show that there is no clear indication of a linear relationship between both variables.
2.2. Indicator 1—Level of Competition
The level of competition and market concentration method involves the classical Nash-Cournot equilibrium test. A Cournot equilibrium is reached when each firm maximizes its profit by choosing its output while taking the other firms’ output as given. One important feature of a Cournot model is that firms are not allowed to cooperate. Therefore, as long as the players are playing this strategy, the companies are abiding by pure, market-driven, profit-oriented strategies, trying to maximize their “utility function,” and have no agreed-upon behavior (i.e., collusion). A market where producers follow this behavior is considered more liberalized.
The aim is to test whether the behavior of the gas producers in the respective markets follows a Cournot model. If a set of gas producers are not following the assumptions of a Cournot game, the test will identify them.
The optimal quantities of the suppliers in a given market are obtained by numerically solving the following set of equations:
\[ \max_{q_i^t \,\ge\, 0} \; \pi_i^t = P\!\left(Q^t\right) q_i^t - C_i\!\left(q_i^t\right), \qquad i = 1, \ldots, N. \tag{1} \]
At each observation $t$, supplier $i$ chooses a quantity $q_i^t \in \mathbb{R}_+$ (where $\mathbb{R}_+$ represents the set of non-negative real numbers) to maximize its profit given the output $q_{-i}^t$ of the competing suppliers; at its optimal choice, $Q^t = \sum_{i} q_i^t$ is the total quantity supplied to the market, and $P(Q^t)$ is the inverse demand function, from which the gas price is deduced. The latter function depends on the total quantity of gas supplied to the market. Finally, production and transmission costs are represented by the cost function $C_i(q_i^t)$. The demand function is normally represented by a decreasing straight line and is estimated using regression methods. However, this methodology has its limitations due to endogeneity.
A non-parametric method, with no assumptions on cost and demand functions, has been developed by [
24,
25,
26,
27]. Instead of relying on an incomplete set of data related to demand and cost, the non-parametric method avoids such constraints altogether. Nonetheless, many authors have contributed to the literature by exploiting the parametric approach, solving the equilibrium using sensitivity analyses on cost and demand assumptions [
22,
23,
35].
The marginal cost of supplier $i$ at time $t$ will be denoted by $MC_i^t$. The first-order condition of the optimization problem defined in Equation (1) is given by:
\[ MC_i^t = P^t + q_i^t\, \Delta^t, \]
where $\Delta^t = P'\!\left(Q^t\right) \le 0$ is the slope of the inverse demand function and $q_i^t$ is the solution of (1). Now, we consider the observations given by $\left\{ \left( P^t, q_1^t, \ldots, q_N^t \right) \right\}_{t=1,\ldots,T}$, where $P^t$ is the observed market price and $q_i^t$ the observed supply of supplier $i$, and say that the data set respects a Cournot equilibrium if the following two conditions are verified:
1. The matrix of data $\left\{ \left( P^t, q_1^t, \ldots, q_N^t \right) \right\}_{t}$ must satisfy the following:
\[ \left( MC_i^t - MC_j^t \right)\left( q_i^t - q_j^t \right) \le 0 \quad \text{for all } i, j \in \{1, \ldots, N\} \text{ and all } t, \]
where $N$ = number of suppliers in the market.
Condition 1 compares the values of marginal costs for both firms. This means that the firm with the higher marginal cost will produce a quantity that is lower than that of the firm with the lower marginal cost.
2. Optimal solutions $q_i^t$ must verify the following:
\[ \left( MC_i^t - MC_i^s \right)\left( q_i^t - q_i^s \right) \ge 0 \quad \text{for all } i \in \{1, \ldots, N\} \text{ and all } t, s. \]
Condition 2 allows us to compare the costs of each firm at different times. For instance, if firm $i$ increases the produced quantity from $q_i^t$ to $q_i^s$, the marginal cost at time $t$ must be lower than the marginal cost at time $s$. The same analysis for firm $j$ leads to an arrangement of the marginal costs of each firm, at each time, in increasing order.
To address the problem of detecting a Cournot equilibrium, a numerical algorithm was developed, the result of which indicates whether or not the data being tested respect the Cournot equilibrium.
The algorithm starts with the assumption that the initial marginal cost of firm $i$ to produce quantity $q_i^t$ is equal to the price, and tests for conditions 1 and 2. This is typical of a fully competitive market, where the price of any commodity (i.e., gas in our case) should be as close as possible to the delivery cost. The procedure is repeated over several iterations, changing the marginal cost each time, until conditions 1 and 2 are fully met. If the algorithm does not converge, then the set of observations does not respect a Cournot equilibrium. The ratio of the number of observations that respect the Cournot equilibrium (more specifically, conditions 1 and 2 in our algorithm) to the total number of observations is then calculated; it is called the Cournot acceptance rate and is expressed as a percentage (%).
The Cournot theorem states that, for any market producing and selling a certain commodity, the price converges to its production cost as the number of market participants tends to infinity.
This means that the more companies participate in the market, the less able each of them is to affect the market price. A company becomes a price-taker and must accept the equilibrium price as is. It is therefore logical to start from the highest possible marginal cost, which is equal to the gas price at that specific time, and then to reduce the cost values until the whole array of candidate marginal costs has been tested. This is in line with our initial assumption.
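To see why (a standard textbook argument rather than a result from this paper), the first-order condition of Equation (1) can be rearranged into a markup expression:

\[ P\!\left(Q^t\right) - MC_i^t = -\,q_i^t\, P'\!\left(Q^t\right), \]

so that, in a symmetric market where each firm supplies roughly $Q^t / N$, the right-hand side shrinks as $N$ grows and the price converges to the marginal cost.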
For additional information about the algorithm, the readers are invited to check the four simple steps found in
Appendix A.
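For illustration only, the sketch below shows how such an acceptance-rate computation could be organized in Python. It is not the authors’ Appendix A implementation: the pairwise form of conditions 1 and 2 and the simple proportional lowering of the candidate marginal costs are assumptions made for this example.

```python
import numpy as np

def condition_1(mc_t, q_t):
    # Within one observation: a supplier with a higher candidate marginal cost
    # must not supply more gas than a supplier with a lower one.
    n = len(q_t)
    return all((mc_t[i] - mc_t[j]) * (q_t[i] - q_t[j]) <= 0
               for i in range(n) for j in range(n))

def condition_2(mc, q):
    # For each supplier: candidate marginal costs and produced quantities must be
    # ordered consistently across observations (costs increase with output).
    T, n = q.shape
    return all((mc[t, i] - mc[s, i]) * (q[t, i] - q[s, i]) >= 0
               for i in range(n) for t in range(T) for s in range(T))

def cournot_acceptance_rate(prices, supplies, n_iter=200, step=0.99):
    """Share (%) of observations for which some candidate marginal-cost vector,
    lowered iteratively from the observed price, satisfies conditions 1 and 2."""
    prices = np.asarray(prices, dtype=float)
    supplies = np.asarray(supplies, dtype=float)
    T, n = supplies.shape
    mc = np.tile(prices.reshape(-1, 1), (1, n))   # start: marginal cost equals the price
    accepted = np.zeros(T, dtype=bool)
    for _ in range(n_iter):
        ok2 = condition_2(mc, supplies)
        for t in range(T):
            if not accepted[t] and ok2 and condition_1(mc[t], supplies[t]):
                accepted[t] = True
        mc *= step                                # lower the candidate costs and retry
    return 100.0 * accepted.mean()
```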
In simple words, the algorithm can have the following outcomes:
Companies are competing based on a Cournot model, trying to maximize their profit by acting strategically. In this case, Cournot acceptance is high.
Companies are cooperating and not acting by the rules of Nash Cournot equilibrium. In this case, Cournot acceptance is low.
Other strategies and objectives can be adopted by competing firms; however, the Cournot acceptance rate is only used to determine whether the companies are respecting conditions 1 and 2, which are linked to the Nash-Cournot model.
2.3. Indicator 2—Market Stability
Records theory studies observations that are concentrated in the tail of a given distribution [
30] and will be used in this context to test the stability of two different gas markets.
Previous modelers of the record theory have obtained results in the case of independent and identically distributed (
i.i.d) underlying observations, called the classical case (see [
29]). In our application, we consider the absolute value of the difference between two consecutive gas prices as the underlying observations. The most popular record model beyond the i.i.d.
case was introduced by Yang and developed by Nevzorov [
28,
29,
31], and it is currently called the Yang-model. In the latter model, the observations are considered to be independent but not identically distributed.
Considering a time series $X_1, X_2, \ldots, X_n$, where $n$ denotes the present time, the observation $X_n$ is said to be an upper record if and only if $X_n > \max\left( X_1, \ldots, X_{n-1} \right)$, and the record indicators are a sequence of random variables $\left\{ \delta_n \right\}$ defined by:
\[ \delta_n = \begin{cases} 1 & \text{if } X_n \text{ is a record}, \\ 0 & \text{otherwise}. \end{cases} \]
The total number of records in the considered time series is given by:
\[ N_n = \sum_{k=1}^{n} \delta_k . \]
Additionally, the probability that the observation $X_n$ represents a record in the i.i.d.
case is called the record rate and is given by:
\[ p_n = \Pr\left( \delta_n = 1 \right) = \frac{1}{n} . \]
This has been justified by [
36], and it can be deduced that $p_n \rightarrow 0$ as $n \rightarrow \infty$; this means that the chance of witnessing a record in the long term is minimal, and records become rare.
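As a brief illustration of these quantities, a minimal Python sketch is given below; the price series is purely hypothetical, and the underlying observations are the absolute consecutive price differences, as stated above.

```python
import numpy as np

def record_indicators(x):
    # delta_n = 1 when x_n strictly exceeds every previous observation (upper record);
    # by convention the first observation is counted as a record.
    x = np.asarray(x, dtype=float)
    delta = np.ones(len(x), dtype=int)
    delta[1:] = x[1:] > np.maximum.accumulate(x)[:-1]
    return delta

prices = np.array([5.1, 5.3, 4.9, 6.0, 6.2, 5.8])  # hypothetical monthly gas prices
obs = np.abs(np.diff(prices))                       # underlying observations |P_t - P_{t-1}|

delta = record_indicators(obs)
N_n = delta.sum()                                   # total number of records
classical_rate = 1.0 / len(obs)                     # record rate 1/n in the i.i.d. case
```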
It is only reasonable to go beyond the classical case and test whether the Yang model fits our data set. However, before doing so, one should test whether the data come from a sequence of i.i.d. random variables. The test is based on a statistic derived from the record counts, which was shown by [
29] to have asymptotically normal behavior.
Moving to the Yang model, the time between two consecutive records converges asymptotically to a geometric distribution, and the record rate verifies the following equation:
\[ p_n = \Pr\left( \delta_n = 1 \right) = \frac{\gamma^{\,n-1}\left( \gamma - 1 \right)}{\gamma^{\,n} - 1}, \]
where $\gamma > 1$ is a parameter that needs to be estimated.
Unlike the classical case, the Yang record rate converges to a constant value in the long term, given by:
\[ \lim_{n \rightarrow \infty} p_n = 1 - \frac{1}{\gamma} . \]
This means that records are always expected in the long term, and not only observed among the first observations as in the classical case.
2.4. Indicator 3—Volatility and Uncertainty of Prices
The third quantitative method used in this study is Shannon’s probabilistic entropy, applied in a time series analysis to test the predictability power hidden in the underlying probability distribution of the considered series. A time series with high predictability power is considered to have a high level of stability with an anticipated pattern.
The classical definition of entropy is as follows: for a given source of information represented by a finite discrete random variable $X$ with $n$ possible outcomes $x_1, \ldots, x_n$, each possible outcome $x_i$ having a probability $p_i$ of appearing, the Shannon entropy $H(X)$ of the random variable $X$ is defined by:
\[ H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i . \]
In general, a logarithm of base 2 ($\log_2$) is used because the entropy is generally expressed in bits [
18]. Several researchers have previously attempted to predict the entropy of the commodity markets (oil, more specifically Brent and West Texas Intermediate, WTI, and other commodities) and tried to measure the information from statistical observations [
32,
33,
34]. Brent and WTI are two different crude oil grades (qualities) and are known to be the most important oil pricing benchmarks around the globe. As previously explained, the gas markets in question have been liberalized, and the influence of oil prices on gas prices is shrinking; gas prices are becoming increasingly driven by gas-to-gas competition. To the knowledge of the authors, no previous researchers have worked on estimating the entropy of the gas markets.
To compute the Shannon entropy of a time series, which is a continuous random variable, a particular discretization method is introduced:
First, the returns of the prices are computed; this is a requirement for normalizing the data set:
\[ r_t = \frac{P_t - P_{t-1}}{P_{t-1}}, \]
where $P_t$ and $P_{t-1}$ are the prices at times $t$ and $t-1$, respectively.
It is trivial that the series of observations of returns $\left\{ r_t \right\}$ has an underlying continuous distribution. Therefore, the second step is to introduce the discrete random variable $Y$ defined by:
\[ Y_t = \begin{cases} 1 & \text{if } r_t > 0, \\ 0 & \text{otherwise}. \end{cases} \]
The random variable $Y$ has a binary output: unity when the returns are positive, which means that the prices are increasing, and zero when the prices tend to diminish.
Based on the observed values of $Y_t$, and denoting the total number of observations by $m$, the corresponding probability distribution is computed:
\[ \hat{p}_1 = \frac{\#\left\{ t : Y_t = 1 \right\}}{m} \]
and,
\[ \hat{p}_0 = 1 - \hat{p}_1 . \]
Hence, the entropy related to the random variable $Y$ is given by:
\[ H(Y) = -\,\hat{p}_1 \log_2 \hat{p}_1 - \hat{p}_0 \log_2 \hat{p}_0 . \]
To compute the underlying entropy of each gas market with the above method, we rely on daily values instead of monthly values. The price variable is divided into one-year windows of 252 observations each. The passage from one window to the next is done by removing the first observation and adding the next one from the remaining data (a rolling window). By applying this procedure, we obtain a series of entropies that could be summarized by the mean, as a representative of the series of entropy observations.
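A minimal Python sketch of this rolling-window computation is given below; it assumes simple returns, as above, and a user-supplied daily price series, with illustrative variable names.

```python
import numpy as np

def binary_shannon_entropy(returns):
    # Entropy (in bits) of the indicator Y_t = 1 when the return is positive.
    y = (np.asarray(returns) > 0).astype(float)
    p1 = y.mean()
    return float(sum(-p * np.log2(p) for p in (p1, 1.0 - p1) if p > 0))

def rolling_entropies(daily_prices, window=252):
    # One-year (252-observation) windows, shifted by one observation at a time.
    prices = np.asarray(daily_prices, dtype=float)
    returns = np.diff(prices) / prices[:-1]          # simple returns (an assumption here)
    return np.array([binary_shannon_entropy(returns[s:s + window])
                     for s in range(len(returns) - window + 1)])

# entropies = rolling_entropies(daily_prices)        # daily_prices: the hub's daily price series
# median_entropy = np.median(entropies)              # robust representative value per market
```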
However, a major disadvantage of the mean is that it is sensitive to heavy-tailed distributions, which can be caused by the presence of outliers and extreme observations. Besides, the mean may be misleading in the case of a highly skewed distribution of the considered data. To overcome these weaknesses, a second statistical parameter will be adopted: the median value. Moreover, it has been shown that the median is useful when comparing sets of data. Once the median entropy of each market is computed, the values of the two markets are compared. In addition, a non-parametric statistical test is conducted to check the significance of the difference between both markets [
37]. The Kruskal (Kruskal-Wallis) test is used to compare two independent samples and checks whether or not the observations originate from two different distributions.
Finally, the volatility of each market is computed and serves to validate the results on price unpredictability. It is the degree of variation in the price series of each market and is measured by the classical standard deviation of the returns.
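In practice, the comparison test and the volatility can be obtained with standard library calls; the sketch below uses placeholder series in place of the actual rolling entropies and returns.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
entropies_us = rng.uniform(0.95, 1.00, size=500)     # placeholder rolling-entropy series
entropies_eu = rng.uniform(0.97, 1.00, size=500)     # placeholder rolling-entropy series

stat, p_value = kruskal(entropies_us, entropies_eu)  # p < 0.05 => medians differ significantly

returns = rng.normal(0.0, 0.02, size=2000)           # placeholder return series
volatility = np.std(returns, ddof=1)                 # classical standard deviation of the returns
```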
Note that one can find in the literature a version of entropy adapted to the continuous random variable case, called differential entropy [
38], given by:
\[ h(X) = -\int f(x) \log_2 f(x)\, dx, \]
where $f$ is the probability density function of the underlying distribution of the continuous random variable $X$. However, this method has many flaws:
The density function is in general unknown. This is a weakness because users must make assumptions about the distribution type. Users can instead estimate the density function empirically with numerical methods; however, this raises several challenges concerning estimation errors.
The properties and interpretation of the entropy of discrete random variables are not conserved when passing to the continuous case. In other words, the differential entropy does not share all properties of the discrete entropy.
2.5. Information Theory
After defining the indicators, which can be extracted from the gas prices of each market, the regulator has to make important decisions. If the indicators point to market concentration and price instability, then certain measures should be taken to bring stability back to the gas prices. Therefore, the two states defined in this study are either for the regulator to take action or to keep business as usual (BAU), denoted by $s_1$ and $s_2$, respectively. The indicators previously defined are denoted by $y_1$, $y_2$, and $y_3$, respectively. Also, we denote by $p\left( s_j \mid y_i \right)$ the conditional probability that the market is in state $s_j$ after receiving the indicator $y_i$. The probabilities $p\left( s_j \mid y_i \right)$ are categorized into three classes: low, medium, and high, with respective probabilities 0.1, 0.5, and 0.9. The information structure is illustrated in
Table 3.
The methods used in this paper reduce the subjectivity of the probability distribution. The indicators are built on econometric models founded on economic parameters of the relevant gas markets and on an analysis of their gas price data, from which the information is extracted.
The probability of being in a certain state (either $s_1$ or $s_2$) after receiving an indicator (either $y_1$, $y_2$, or $y_3$) can take a value of 0.1, 0.5, or 0.9, which constitutes the possible events on the probability set. The sum of the probabilities of being in $s_1$ or $s_2$ after receiving the same indicator $y_i$, however, must equal 1, i.e., $p\left( s_1 \mid y_i \right) + p\left( s_2 \mid y_i \right) = 1$. This is natural, as only two states are considered in this study.
Note that the first step is to compute the entropy, called the “a priori” entropy, based on the distribution of the states $s_1$ and $s_2$ before the reception of any additional information. Then, we start with:
\[ H(S) = -\sum_{j=1}^{2} p\left( s_j \right) \log_2 p\left( s_j \right), \]
where $p\left( s_j \right)$ is the probability of being in state $s_j$ before receiving additional information, called the “a priori probability”. As $p\left( s_j \right)$ is defined based on no previous information, it is reasonable to consider the distribution with the highest level of uncertainty. In other words, the regulator has no information that can lead him to take an action, and either of the two states is equally likely. This is a uniform distribution with the following “a priori probabilities”: $p\left( s_1 \right) = p\left( s_2 \right) = 0.5$.
Now, after receiving a specific indicator of information $y_i$, the conditional entropy of the random variable $S$ relative to the indicator $y_i$ is defined as:
\[ H\left( S \mid y_i \right) = -\sum_{j=1}^{2} p\left( s_j \mid y_i \right) \log_2 p\left( s_j \mid y_i \right) . \]
Hence, to assess the power of information generated by the whole information structure (composed of $y_1$, $y_2$, and $y_3$), the “posterior entropy” is defined and compared with the “a priori entropy”:
\[ H\left( S \mid Y \right) = \sum_{i=1}^{3} w_i\, H\left( S \mid y_i \right), \]
where $w_i$ is the weight given to each indicator. This weight is distributed equally over the three indicators ($w_i = 1/3$ each), as they are equally important and each contributes to the understanding of the gas market in a different way.
Finally, the amount of reduced uncertainty, due to the additional received information, is measured by a quantity called the mutual information:
\[ I\left( S; Y \right) = H(S) - H\left( S \mid Y \right) . \]
Thus, to compare the power of information generated by two information structures related to two different gas markets, one should consider the one with the highest $I\left( S; Y \right)$. This is equivalent to saying that the structure of information that removes the most uncertainty about the random variable $S$ will be the most efficient and powerful.
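Putting the elements of this section together, the following short Python sketch reproduces the computation chain; the conditional probabilities are hypothetical placeholders, not the values reported later in Table 9.

```python
import numpy as np

def entropy_bits(probabilities):
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# p(s1 | y_i) for the three indicators; p(s2 | y_i) = 1 - p(s1 | y_i).
# Each value is drawn from the classes {0.1, 0.5, 0.9}; the assignment is illustrative only.
p_s1_given_y = [0.9, 0.5, 0.1]
weights = [1/3, 1/3, 1/3]                               # equal weight for each indicator

h_prior = entropy_bits([0.5, 0.5])                      # "a priori" entropy of the uniform case = 1 bit
h_cond = [entropy_bits([p, 1 - p]) for p in p_s1_given_y]
h_posterior = sum(w * h for w, h in zip(weights, h_cond))
mutual_information = h_prior - h_posterior              # I(S; Y): reduction of uncertainty
```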
3. Results and Discussions
3.1. Results of the Non-Parametric Cournot Test
Considering
Table 1, it is evident that both gas markets are competitive. Nonetheless, this is particularly pronounced in the case of the HH.
Table 1 draws attention to two main figures: the first is the large difference between the volumes of gas traded on the futures market and the volumes traded physically, which indicates the extensive participation of traders and financial players in the virtual market. The second is the high churn ratios, which indicate high liquidity and a healthy trading platform, an attractive characteristic for all stakeholders. Unlike that of the U.S. gas market, the Herfindahl index for the European gas market is relatively high [
11]. The low U.S. index is a sign of healthy competition and simply means that, out of the many gas suppliers in the U.S. market, none has market power on its own. However, this violates one of the main assumptions of a Cournot competition model, where firms have market power and each firm’s output decision affects the gas price. In a nutshell, there is no risk of market manipulation in such a market; the market concentration is minimal and close to zero. All U.S. gas suppliers should be price-takers in this case, and the Cournot acceptance rate is no longer valid.
As explained in Section 2, the data used for the Cournot test consist of the gas prices and the gas supplies to the relevant market. Gas supplies are shown in
Figure 1 and
Figure 2. The suppliers are represented by their countries of origin. The results would be more indicative if the data on gas supplies were composed of the volumes of the suppliers themselves (shippers, traders, and companies) rather than the country (market) where the gas was purchased. The authors acknowledge the need for supplier-level data and the need to perform the Cournot test on the American market; however, with no publicly available information on the supply market shares of companies, this is not possible. Therefore, we encourage the publishing agencies to list such data on their websites (or provide it upon request). The FERC Form 552 provides a database of trading activity and lists the data related to the companies (Top 20) with the largest total transaction volumes from year to year. The list found in [
15] is incomplete and contains yearly data only. Thus, additional data related to suppliers’ portfolios are needed to obtain valid test results. The suppliers in North-Western Europe are oligopolistic [
39]; therefore, using these data with the algorithm leads to conclusive and significant results.
The non-parametric test for the European market gives a Cournot acceptance rate of 51%. The results can be analyzed as follows: the behavior of the large gas suppliers in the European market can be explained by a Cournot model, where suppliers try to maximize their payoffs by competing over quantities. However, the other half of the acceptance rate means that there are companies that are not behaving as such. This could imply that some of the suppliers follow other strategies, such as collusion, or strategies that are not “pure” profit maximization.
An example of possible collusive behavior was witnessed in the oil markets under the Organization of the Petroleum Exporting Countries (OPEC) back in the 1970s. These countries used to control a major share of the world’s oil supplies, and together they formed a cartel that cooperated to increase prices and limit external competition.
Other examples that can be used to illustrate possible reasons why these suppliers are not seeking a “profit only” strategy under the Nash-Cournot umbrella are listed below:
Authors such as [
40], suggest that Gazprom, a major gas supplier, maximizes its “utility function” not only by limiting itself to a single profit-focused strategy, but also by contemplating other strategies such as seeking to eliminate competition, even if this initially leads to some losses in profits.
Other authors, such as [
41], enumerate other reasons that prevent some of the European gas suppliers from exerting their oligopolistic power; these are old legacy contracts that are still in effect and, perhaps, new regulations. In short, the gas pricing mechanisms in old legacy contracts are mainly indexed to oil prices, and this type of contract does not offer the flexibility needed by gas suppliers. These are among the many possible reasons why the Cournot acceptance rate is not higher in Europe.
The first indicator is thus informative, and the analysis of the NBP wholesale gas prices is indicative for the regulator in this market.
3.2. Results of the Records Theory
The second indicator is assessed using records theory. To determine whether the data belong to an i.i.d. sequence of random variables, the goodness-of-fit test described in Section 2.3 is used.
The results shown in
Table 4 indicate that the European market rejects the null hypothesis. The test results were computed at a significance level of 5%. Accordingly, and from an empirical perspective, the gas prices recorded in this market are characterized by price variations and sudden price shifts.
Based on the analysis of
Table 5, the result is not surprising, as it confirms that the European market has a high number of records relative to the small number of observations. This indicates that the European gas price records are not grouped in one section of the time series but are instead more spread out, while the U.S. market is rather more stable, with price shifts rarely observed along the time series.
Looking further, in an attempt to measure the probability of witnessing a record in each of the gas markets, the Yang model is used for the European market and the classical model for the U.S. market. The probabilities were computed for each market, and
Table 6 shows the result of the probability that matches the date of June 2018.
The probability of witnessing a new record is higher in the European market. The results from the above analysis can be summarized as follows:
The record rate in the European model converges to a constant value in the medium and long term (as explained in
Section 2). This shows that the market could prove to be unstable over time.
In contrast, the record rate and the time index in the U.S. model tend to have a negative correlation. This means that the probability of a record diminishes over time, i.e., as $n$ increases, the probability of a record diminishes. This shows that the market is rather more stable in the medium and long term.
3.3. Results of the Shannon Entropy
By applying the procedure described in
Section 2.4 dealing with Shannon entropy, the representative median entropy of each considered market in addition to the
p-value of the Kruskal non-parametric test are calculated. The results of the entropy approach are presented in
Table 7.
If a random variable $X$ follows a discrete uniform distribution with $n$ possible outcomes, the corresponding entropy is $\log_2 n$ [
42]. Hence, in our context, the values of the entropies are both close to the uniform case with $n = 2$, whose entropy is equal to unity, and this means that both markets are far from being predictable.
As the values of the median entropies of the two considered markets are close to each other, it is essential to test whether the two median entropies are issued from two different distributions. If that is the case, then the difference between the two medians is significant. The non-parametric Kruskal test is applied to verify this hypothesis.
Based on
Table 7, the
p-value of the Kruskal test is close to zero, and therefore less than 5%. Accordingly, the difference between the markets in terms of entropy is significant, i.e., the market with the higher median value, the European market in our case, has an entropy significantly higher than that of the U.S. market.
Also, the volatility in the U.S. market is very low (0.7), whereas it is significantly high in the European market (2.5).
Thus, for indicator number three, the U.S. market is significantly more predictable and has lower uncertainty than the European market.
3.4. Synthesis of Indicator Results
In an attempt to better illustrate the results of the three mathematical models used in the previous sections to measure the market indicators,
Table 8 summarizes the results and lists the main findings for each market.
3.5. Results of the Information Theory
Concentrated markets raise regulatory and antitrust concerns, as concentration is a clear sign of market power in the hands of suppliers. Appropriate actions need to be taken by the regulator to make sure that no collusion, cooperation between companies, or any other kind of strategic behavior that does not end up in favor of consumer welfare is permitted.
The regulator in such a case should ensure that under no circumstances do the companies communicate and reach an agreed-upon understanding to raise prices and profit margins at the expense of consumer welfare. The barriers to entry for new companies should also be considered and reduced by regulators in such markets, to increase competition and diversify supplies. These are some examples of actions that the regulator can impose on the suppliers.
Markets that witness price volatility and uncertainty in the medium term, as well as price instability in the long term, also raise concerns for regulators. In such a case, supply and demand fundamentals are the key to determining the movement of gas prices [
43]. A slowdown in global demand is a key downside risk for suppliers, as they will eventually earn less when selling their gas. On the other hand, a sudden slowdown in supply is a key downside risk for another player in the gas value chain, the consumer, who will have to pay more to purchase the commodity.
In both cases, regulators should anticipate such outcomes by acting in favor of a continuous supply and demand equilibrium, by trying to diversify supply (indigenous production, imports, and storage), while also ensuring that consumers have the appropriate infrastructure and financial means to buy the commodity. However, in a market characterized by gas prices that are predictable in the medium term and stable over the long term, there is no need for further action by the regulator.
Moving forward, we start by assigning the relevant conditional probabilities, which indicate to the regulator the state of nature of the gas market. As previously mentioned, the probabilities are categorized into three classes: low, medium, and high. It is also important to remember that the sum of $p\left( s_1 \mid y_1 \right)$ and $p\left( s_2 \mid y_1 \right)$ is equal to 1, as we only have two possible states; the same applies to indicators 2 and 3.
The results listed in
Table 9, give a clear indication that the market in the United States is functioning smoothly and that the regulator does not need to add other measures. In other words, the BAU case is favored.
This is not the case, however, for the European market, where the regulator is more inclined to intervene. The UK regulator, OFGEM, has to intervene and investigate the reasons behind the observed instability and the signs of non-competitive behavior, where some firms are not focused solely on profit maximization.
To compute the global power of information generated by the considered information structure, we start by assessing the level of uncertainty associated with each received indicator by computing its conditional entropy $H\left( S \mid y_i \right)$ for each indicator.
The “posterior entropy,” previously defined as $H\left( S \mid Y \right)$, is then computed and compared with the “a priori entropy,” which is defined in
Section 2.5 as the entropy of a uniform distribution (the one with the highest level of uncertainty) and is equal to 1.
Table 10 and
Table 11 illustrate the results of the entropies, conditional on the relevant indicators, which are then used to compute the aggregated outcome of these indicators for each market.
The difference between the “a priori entropy” and the “posterior entropy,” i.e., the mutual information defined above, helps assess the level and amount of information gained by analyzing the gas price data in each market. In other words, the indicator analyses measured by the various econometric methods used in this study constitute additional information that the regulators can use to assess the status of the market. The larger this additional information, the more confident the regulator is about the power of the information generated.
The amount of reduced uncertainty, due to the additional information received from the indicators, is estimated at 0.38 for the U.S. market and 0.18 for the European market, which means that the level of uncertainty has been reduced by 38% in the U.S. market and by 18% in the European market. The value of the information contained in both markets, although asymmetric, is significant and powerful, and can serve as a reliable and efficient source of information.
4. Conclusions
Overall, this work presents four econometric and mathematical methods that are used collectively to estimate the level of information contained in gas prices in two separate wholesale gas markets, i.e., the European and the U.S. gas markets. The theories employed are Cournot theory, records theory, Shannon entropy, and information theory.
By analyzing the efficiency of the gas market and assessing the need for additional measures and intervention, the work of gas regulators with regard to market oversight is likely to be improved. The value of the information is based on three market indicators: the possibility of non-competitive behavior by gas firms, market stability, and uncertainty in prices.
Our findings suggest that the U.S. gas market is stable. The information value contained in the wholesale gas prices gives a clear indication that there is no need for additional market oversight. However, this is not the case in the UK (the most developed European gas market), where the results show signs of market instability and non-competitive behavior. In other words, some firms are not focused solely on profit maximization; therefore, the wholesale prices are not solely the product of the classical law of supply and demand.
Interestingly, the value of the additional information brought about by the indicator analysis and contained in both markets has contributed to reducing uncertainty. This makes the information carried in the gas prices of both markets, although asymmetric, powerful and efficient. The regulators in both markets can, therefore, act accordingly by using the two-step approach to assess the level of competition, price stability, and price predictability.
The originality of the two-step approach applied in this work can be summarized as follows: it is the first time that several multidisciplinary econometric methods have been combined to create a probabilistic structure assessing the underlying information of a gas market. Furthermore, the approach deals with a case where the information is neither completely absent nor perfectly known, which has rarely been dealt with in the literature.
It is worth mentioning that the authors have chosen three market indicators and four different econometric methods in this study. It is believed that additional mathematical/statistical analyses can be applied to this topic. For further research, one can work on estimating the entropy (the third indicator) using another discretization procedure; this is a growing research track and requires a large number of observations. Besides, one can also work on creating estimators for the underlying probability distribution of each indicator. A starting point is to apply goodness-of-fit techniques or a more empirical approach such as a bootstrap process.