
Non-Parametric Regression and Riesz Estimators

by Christos Kountzakis * and Vasileia Tsachouridou-Papadatou
Department of Statistics and Actuarial-Financial Mathematics, University of the Aegean, Karlovassi, 83200 Samos, Greece
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(4), 375; https://doi.org/10.3390/axioms12040375
Submission received: 7 March 2023 / Revised: 3 April 2023 / Accepted: 7 April 2023 / Published: 14 April 2023
(This article belongs to the Special Issue Mathematical and Computational Finance Analysis)

Abstract

:
In this paper, we consider a non-parametric regression model relying on Riesz estimators. This linear regression model is similar to the usual linear regression model, since they both rely on projection operators. We indicate that Riesz estimator regression relies on the positive basis elements of the finite-dimensional sub-lattice generated by the rows of some design matrix. A strong motivation for using the Riesz estimator model is that the data of explanatory variables may come from categorical variables. Calculations related to Riesz estimator regression are very easy, since they arise from measurability in finite-dimensional probability spaces. Moreover, we show that the fitted model of Riesz estimators is an ordinary least squares model. Any vector of some Euclidean space is supposed to be a random variable under the objective probability values used in expected utility theory and its applications. Finally, the reader may notice that the goodness-of-fit measures are similar to those defined for the usual linear regression. Since the model is non-parametric, it may accommodate samples of variables relevant to finance and actuarial science.

1. Motivation

The background of the paper [1] about Riesz estimators is [2]. In the present paper, we determine the projection operator that is necessary to fit the Riesz estimator regression. This operator relies on a partition of a sample’s values as they appear in the design matrix, and it implies the goodness-of-fit measures that are necessary for any linear regression model. The operator is compatible with the relationship between sub-lattices and positive projections. The formulation relies on the positive basis of the sub-lattice generated by a design matrix X, while in the usual method of linear regression, the fitting relies on the covariance matrix Σ = XᵀX. The calculation of the parameters is carried out almost directly, through observing a partition of the set of sample observations, namely the partition defined by the subsets of observations taking different positive values. The estimators are actually the ordinary least squares estimators; this corollary was not determined previously. This linear regression model may include data coming from both categorical and non-categorical variables, where the support of the latter may be some interval of real numbers. Another important contribution is that we define goodness-of-fit measures for the proposed Riesz regression model; these measures rely on the dimension of the sub-lattice generated by the columns of the design matrix. We emphasize that the fitting of Riesz estimator regression may include data from categorical variables only, or data from both categorical variables and variables whose support is some interval of real numbers. Options as factors in so-called beta pricing are related to the regression model being proposed, since option payoffs lie in the sub-lattice generated by the span of the variables included in the Riesz estimation model. Factor pricing, as presented in Chapter 20.7 of [3], is the same as in the regression model on p. 435 of [1]. The positive parts of random variables are related to portfolio insurance, and this is a further motivation for the use of such a regression model. Riesz estimator regression is not tied to specific probability distributions; hence, it may be used for data analysis in both actuarial and financial problems, where there is no specific way of fitting a linear regression model, as we mentioned above. The specification of ‘factors’ that are not correlated with each other for a large number of samples implies an appropriate use of ‘options’, which may be considered payoffs of some stop-loss reinsurance contract. The reader may refer to [4] for a rigorous introduction to reinsurance. The projection operator Π defined in this paper implies the fitting of the Riesz estimator linear regression; the idea of its use arose from the linearity of the regression model appearing on pp. 439–440 of [1]. A question that arises naturally is the following: since the affine Riesz estimators lie in the generated sub-lattice, what is a unified way to specify this sub-lattice? The answer is: by using the positive basis of this sub-lattice. The error term geometry is also specified by the complementary projection operator I − Π. Both of the operators Π and I − Π imply the goodness-of-fit measures of the Riesz estimator regression, which have not been determined previously. Numerical examples are included in the present paper for a better understanding of the fitting procedure. The objective probability values may be estimated by the usual χ² test; for this reason, some previously obtained samples may be used. This fashion of testing is a ‘Bayesian’ one, since it relies on the values arising from previous samples. The objective probability approach to uncertainty in economics was introduced by [5], and it was developed mainly in [6,7].

2. Finite-Dimensional Sub-Lattices

We consider the vector space ℝ^m, where m is the number of the observed realizations for the m states of the world. We refer to any s ∈ {1, …, m} as a ‘state’, since we assume that the set {1, …, m} is the space Ω in terms of probability theory; Ω is actually the set of all elementary possible events. ℝ^m is ordered by the point-wise ordering: as is well known, x ≥ y if and only if x(s) ≥ y(s) for any s = 1, 2, …, m. A positive vector of ℝ^m is any x ∈ ℝ^m such that x(s) ≥ 0 for any s = 1, 2, …, m. The set of the positive vectors of ℝ^m is the positive cone of ℝ^m, denoted by ℝ^m_+. If x ∈ ℝ^m_+, this is denoted by x ≥ 0, where 0 is the zero vector of ℝ^m. We also consider L to be some subspace of ℝ^m. If, for x, y ∈ L, x ≥ y if and only if x(s) ≥ y(s) for any s = 1, 2, …, m, then L is called an ordered subspace of ℝ^m, and L_+ = L ∩ ℝ^m_+ denotes the positive cone of L. A basis {b_1, b_2, …, b_r} of L is called a positive basis of L if L_+ = {x ∈ L | x = Σ_{i=1}^r λ_i b_i, λ_i ≥ 0 for any i = 1, 2, …, r}, where λ_i ∈ ℝ, i = 1, 2, …, r. Additionally, b_i ∈ ℝ^m and b_i ≥ 0 for any i = 1, 2, …, r. L does not always have a positive basis. L is a sub-lattice of ℝ^m if, for any x, y ∈ L, x ∨ y ∈ L and x ∧ y ∈ L. Any sub-lattice of ℝ^m has a positive basis. The support of a positive vector x ∈ L_+ is the set supp(x) = {s = 1, 2, …, m | x(s) > 0}.
We recall from [8] the following:
  • An ordered subspace Z of ℝ^m that is a sub-lattice of ℝ^m has a positive basis such that supp(b_i) ∩ supp(b_j) = ∅ for any i ≠ j.
  • If Z is a sub-lattice of ℝ^m whose positive basis is {b_1, b_2, …, b_μ}, then for any x ∈ Z_+ = Z ∩ ℝ^m_+, we have x = Σ_{i=1}^μ λ_i b_i, with λ_i ≥ 0, λ_i ∈ ℝ.
  • If Z is a sub-lattice of ℝ^m whose positive basis is {b_1, b_2, …, b_μ}, then for any s ∈ supp(b_i) and x = Σ_{i=1}^μ t_i b_i,
    t_i = x(s)/b_i(s).
  • If Z is a sub-lattice of ℝ^m whose positive basis is {b_1, b_2, …, b_μ}, and x = Σ_{i=1}^μ t_i b_i, y = Σ_{i=1}^μ q_i b_i, then
    x ∨ y = Σ_{i=1}^μ (t_i ∨ q_i) b_i, x ∧ y = Σ_{i=1}^μ (t_i ∧ q_i) b_i.
  • If Z is a sub-lattice of ℝ^m and the vector 1 = (1, 1, …, 1) ∈ Z, then a positive basis exists whose supports form a partition of the unit; namely, the vectors b_i, i = 1, …, μ, satisfy ∪_{i=1}^μ supp(b_i) = {1, 2, …, m}.
  • If Z is a sub-lattice of ℝ^m whose positive basis is {b_1, b_2, …, b_μ}, then for each i = 1, 2, …, μ, the vector b_i has minimal support in Z; namely, a positive vector x ∈ Z, x ≠ 0, such that supp(x) is a proper subset of supp(b_i) does not exist.
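The coordinate and lattice properties above can be illustrated with a short Python sketch. The basis below is a hypothetical positive basis of a sub-lattice of ℝ^6 with pairwise disjoint supports, not taken from the paper; the sketch recovers the coordinates t_i = x(s)/b_i(s) and checks that the lattice operations act coordinate-wise on the basis.

```python
import numpy as np

# Hypothetical positive basis of a sub-lattice Z of R^6 with pairwise
# disjoint supports, as the bullet points above require.
b = np.array([
    [1.0, 2.0, 0.0, 0.0, 0.0, 0.0],   # supp(b1) = {1, 2}
    [0.0, 0.0, 3.0, 0.0, 0.0, 0.0],   # supp(b2) = {3}
    [0.0, 0.0, 0.0, 1.0, 1.0, 2.0],   # supp(b3) = {4, 5, 6}
])

def coordinates(x, basis):
    """Recover t_i in x = sum_i t_i b_i via t_i = x(s) / b_i(s),
    for any state s in supp(b_i)."""
    t = np.zeros(len(basis))
    for i, bi in enumerate(basis):
        s = np.flatnonzero(bi)[0]      # pick any state in supp(b_i)
        t[i] = x[s] / bi[s]
    return t

x = 2.0 * b[0] + 1.0 * b[1] + 3.0 * b[2]   # elements of Z
y = 1.0 * b[0] + 4.0 * b[1] + 0.5 * b[2]
tx, ty = coordinates(x, b), coordinates(y, b)

# Lattice operations inside Z act coordinate-wise on the basis ...
sup = (np.maximum(tx, ty)[:, None] * b).sum(axis=0)   # x v y
inf = (np.minimum(tx, ty)[:, None] * b).sum(axis=0)   # x ^ y
# ... and, because the supports are disjoint, they agree with the
# point-wise sup and inf of R^6.
assert np.allclose(sup, np.maximum(x, y))
assert np.allclose(inf, np.minimum(x, y))
```

The agreement with the point-wise operations holds precisely because the supports are disjoint, which is the content of the first bullet point.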
Suppose now that z_1, z_2, …, z_r are fixed, linearly independent, positive vectors of ℝ^m and that Z is the subspace of ℝ^m generated by the vectors {z_1, z_2, …, z_r}. Since the variables of the design matrix X may take either positive or negative values and ℝ^m is a vector lattice itself under the usual point-wise partial ordering, for any column x_i, i = 1, 2, …, n of the design matrix X, we write x_i = x_i^+ − x_i^−, where x_i^+, x_i^− denote the positive and the negative part of the vector x_i, i = 1, 2, …, n. r denotes the number of a maximal set of linearly independent vectors of {x_1^+, x_1^−, …, x_n^+, x_n^−}. The determination of a positive basis of the sub-lattice W of ℝ^m generated by the set {z_1, z_2, …, z_r} is specified through the method initially proved in Th. 3.7 of [8]. The reader may find the statement of the above Theorem in [9] for Ω = {1, …, m}. The paper [9] is devoted to the equilibrium in incomplete markets, including European (vanilla) options arising from a given incomplete market. A relationship between the content of this paper and [9] relies on what we call beta pricing; the study of this connection is a possible extension of the present paper.
If the positive basis of the sub-lattice Z generated by z_i, i = 1, 2, …, r, is such that 1 ∈ Z, the elements of the partition of the unit {σ_1, σ_2, …, σ_μ} and the supports of the vectors b_i, i = 1, 2, …, μ, coincide. Namely, for the same finite set of states {1, 2, …, m}, σ_i ∩ σ_j = ∅ if i ≠ j and i, j ∈ {1, 2, …, μ}; hence, σ_i = supp(b_i) for any i = 1, 2, …, μ.
Below, we show that the partition of the set of states {1, …, m} is not related to the assumption that 1 ∈ Z, but is implied by the measurability of random variables in finite probability spaces. The “true” state of the world appears from some σ with s ∈ σ. Events and states do not coincide in economic modeling: any σ is an element of some partition of {1, …, m}, and such partitions constitute the information obtained from the design matrix X, or what are called observations in terms of statistics. The vector of probabilities provides an explanation of causality. What is mentioned above about states, events, and objective probability arises from seminal works such as [5].

3. Fitting Riesz Linear Regression

We pose some assumptions, which are useful for the results of the present paper:
  • A vector p = ( p ( 1 ) , , p ( m ) ) of objective probabilities concerning the states of the world exists. For any state among s = 1 , , m , we suppose that p ( s ) > 0 holds.
  • Two time periods are considered, denoted by 0 and 1, respectively.
  • Any vector x = ( x ( 1 ) , … , x ( m ) ) is a random variable, since the probability p ( s ) denotes the probability for the state s to occur at the 1-time period.
  • The probability of the 0-time period is equal to 1 = s = 1 m p ( s ) since the uncertainty refers to the 1-time period.
  • Under the existence of the vector p, the standard inner product of two random variables d = ( d ( 1 ) , … , d ( m ) ) and c = ( c ( 1 ) , … , c ( m ) ) is modified. Any vector a ∈ ℝ^m represents the possible results of any action at the 1-time period. The usual inner product is replaced by the p-inner product, which is defined as follows:
    ⟨d, c⟩_p = Σ_{s=1}^m d(s) · c(s) p(s).
    The p-inner product denotes the correlation between c and d under the probability vector p.
  • Under the existence of the vector p, the function
    ‖x‖_p = ( Σ_{s=1}^m x²(s) p(s) )^{1/2},
    is a norm of the random variable x arising from the p-inner product.
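A minimal Python sketch of the p-inner product and the p-norm defined in the bullets above; the probability vector p and the vectors d, c are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative probability vector on m = 4 states: p(s) > 0, sums to 1.
p = np.array([0.1, 0.2, 0.3, 0.4])

def p_inner(d, c, p):
    """<d, c>_p = sum_s d(s) c(s) p(s)."""
    return float(np.sum(d * c * p))

def p_norm(x, p):
    """||x||_p = sqrt(<x, x>_p), the norm induced by the p-inner product."""
    return float(np.sqrt(p_inner(x, x, p)))

d = np.array([1.0, 2.0, 3.0, 4.0])
c = np.array([4.0, 3.0, 2.0, 1.0])
print(p_inner(d, c, p))   # 0.4 + 1.2 + 1.8 + 1.6 = 5
print(p_norm(d, p))       # sqrt(0.1 + 0.8 + 2.7 + 6.4) = sqrt(10)
```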
Let us suppose that the size of a sample is equal to m.
Definition 1.
As mentioned above, a finite-dimensional probability space  ( Ω , F , P ) consists of a set of states of the world { 1 , 2 , , m } , a partition F of Ω, and a vector of objective probabilities P = p = ( p ( 1 ) , , p ( m ) ) for the set Ω, where p ( s ) > 0 for any s = 1 , 2 , , m .
Remark 1.
Since Ω is a finite set, and a probability space is defined through some σ-algebra consisting of subsets of Ω, there exists a partition of disjoint sets σ_i, i = 1, …, μ, that generates it. Hence, the definition of such a probability space relies on the partition F = { σ_1, …, σ_μ }. Any σ_i, i = 1, …, μ, is called an event.
The definitions and propositions appearing below, in a clearer form, are obtained from Chapter 5 of [10].
Proposition 1.
A random variable x : Ω → ℝ is F-measurable if and only if it is of the form x = Σ_{i=1}^μ x_i I_{σ_i}.
Proof. 
Since x has to be F-measurable, x^{−1}((e, f)) ∈ F for any (e, f), where (e, f) denotes an open interval of real numbers with f > e. Hence, x(s) = x_i for any s ∈ σ_i, i = 1, …, μ. □
Definition 2.
Suppose that x R m . The  Conditional expectation  of x, with respect to an event σ i F , is equal to the real number
E(x | σ_i) = Σ_{s∈σ_i} x(s) p(s) / p(σ_i),
where p(σ_i) = Σ_{s∈σ_i} p(s).
Definition 3.
The random variable
E(x | F) = Σ_{i=1}^μ E(x | σ_i) I_{σ_i}
is called the  Conditional expectation of x with respect to F . I σ i denotes the characteristic function of σ i , i = 1 , 2 , , μ .
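Definitions 2 and 3 can be sketched in Python as follows; the partition F, the probability vector, and the data are illustrative (0-based state indices), not taken from the examples below.

```python
import numpy as np

def cond_exp(x, p, partition):
    """E(x|F) = sum_i E(x|sigma_i) I_{sigma_i}, where
    E(x|sigma_i) = sum_{s in sigma_i} x(s) p(s) / p(sigma_i)."""
    out = np.empty_like(x, dtype=float)
    for sigma in partition:
        idx = np.array(sigma)
        # E(x|sigma_i), assigned on every state of sigma_i at once
        out[idx] = np.dot(x[idx], p[idx]) / p[idx].sum()
    return out

p = np.array([0.25, 0.25, 0.25, 0.25])
F = [[0, 1], [2, 3]]                      # an illustrative partition
x = np.array([1.0, 3.0, 2.0, 6.0])
print(cond_exp(x, p, F))                  # [2. 2. 4. 4.]
```

Note that the resulting vector is constant on each event σ_i, i.e., it is F-measurable in the sense of Proposition 1.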
The following Proposition is important, since it implies that Riesz estimators are ordinary linear regression estimators in the framework assumed in the entirety of this paper about the objective probability vector p.
Proposition 2.
Among the F-measurable random variables y, the squared error ‖x − y‖_p² is minimized by y = E(x | F), and the minimizer is unique.
Proof. 
Since y is F-measurable, its form is y = Σ_{i=1}^μ y_i I_{σ_i}, while E(x | F) = Σ_{i=1}^μ E(x | σ_i) I_{σ_i}. Since x − E(x | F) is p-orthogonal to every F-measurable random variable, we obtain
‖x − y‖_p² = Σ_{i=1}^μ ( y_i − E(x | σ_i) )² p(σ_i) + ‖x − E(x | F)‖_p².
The second term does not depend on y; hence, ‖x − y‖_p² is minimized exactly when y_i = E(x | σ_i), and from elementary calculus this value is the unique one minimizing ( y_i − E(x | σ_i) )² p(σ_i) for any i = 1, …, μ. □
Hence, we obtain the following.
Proposition 3.
E(x | F) is an element of the sub-lattice generated by { z_1, z_2, …, z_r }.
Proof. 
This is obtained directly from the fact that supp(b_i) = σ_i, i = 1, 2, …, μ. □
Definition 4.
The  Riesz estimator  of x : { 1, 2, …, m } → ℝ with respect to F is any vector of the form E(x | F), where F is the partition of { 1, 2, …, m } composed of the supports of b_i, i = 1, 2, …, μ.
Definition 5.
A projection operator Π with respect to the sub-lattice S of ℝ^m is called a  strictly positive projection  if Π(x) ∈ S_+ = ℝ^m_+ ∩ S for any x ∈ ℝ^m_+, while Π(x) = 0 ⇔ x = 0.
Theorem 1.
E(x | F) is a strictly positive projection Π on ℝ^m_+ with respect to the sub-lattice W, as defined above.
Proof. 
For any x ∈ ℝ^m_+, Π(x) ∈ W_+ = W ∩ ℝ^m_+; hence, Π is a positive projection. If Π(x) = 0 for some x ∈ ℝ^m_+, then Π(x) = Σ_{i=1}^μ λ_i b_i = 0, and since Π(x) ∈ W_+, this implies λ_i = E(x | σ_i) = 0 for any i = 1, 2, …, μ; because p(s) > 0 for every s and x ≥ 0, this gives x = 0. □
According to what has been previously mentioned, for any random variable x R m , we obtain the decomposition
x = Π(x) + (I − Π)(x),
where Π ( x ) = E ( x | F ) with respect to the sub-lattice of F -measurable random variables. We may thus give the following definition.
Definition 6.
The  Error term  of x ∈ ℝ^m with respect to Π is (I − Π)(x), where I is the identity operator on ℝ^m.
Definition 7.
The  Riesz estimator regression  of x : { 1, 2, …, m } → ℝ with respect to F is the following decomposition of x: x = E(x | F) + ( x − E(x | F) ).
Definition 8.
The  Goodness-of-fit for the Riesz estimator regression measure  of x with respect to ( Ω , F , P ) is the following number:
R² = ‖x − E(x | F)‖_p².
Definition 9.
The  Adjusted goodness-of-fit for Riesz estimator regression  of x with respect to ( Ω , F , P ) is the following number:
R²_adj = ( 1 / (m + 1 − μ) ) ‖x − E(x | F)‖_p².
Remark 2.
These two goodness-of-fit criteria denote the distance between the random variable x and its Riesz estimator with respect to ( Ω , F , P ) . We notice that in the case where μ = m , the values for both of the above goodness-of-fit measures are equal. If the value of R a d j 2 is less than 1, then it looks similar to the R 2 of the usual Regression. A question for further study is whether R a d j 2 is less than 1 if μ < m .
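Both measures of Definitions 8 and 9 can be computed together, as in the following Python sketch; the partition, probability vector, and data are illustrative, not taken from the paper.

```python
import numpy as np

def fit_measures(x, p, partition):
    """Return (R^2, R^2_adj): the squared p-distance between x and its
    Riesz estimator E(x|F), and the same quantity divided by m + 1 - mu."""
    m, mu = len(x), len(partition)
    pi_x = np.empty_like(x, dtype=float)       # Pi(x) = E(x|F)
    for sigma in partition:
        idx = np.array(sigma)
        pi_x[idx] = np.dot(x[idx], p[idx]) / p[idx].sum()
    r2 = float(np.sum((x - pi_x) ** 2 * p))    # ||x - E(x|F)||_p^2
    r2_adj = r2 / (m + 1 - mu)                 # adjusted measure
    return r2, r2_adj

p = np.array([0.25, 0.25, 0.25, 0.25])
F = [[0, 1], [2, 3]]                           # illustrative partition
x = np.array([1.0, 3.0, 2.0, 6.0])
r2, r2_adj = fit_measures(x, p, F)
print(r2, r2_adj)                              # 2.5 and 2.5 / (4 + 1 - 2)
```

When μ = m, the partition is into singletons, the residual vanishes, and both measures equal zero, in line with the remark above.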

4. Numerical Examples

We give the following numerical examples in order to understand the above model better.
Example 1.
Let us suppose that
X =
  1 3 1 1
  1 4 3 1
  1 1 1 1
  1 2 2 1
  1 2 2 1
  1 2 2 1
  1 2 2 1
  1 2 2 4 ,
is the design matrix. We notice that m = 8, and X is composed of positive vectors of ℝ^8. These vectors are linearly independent, and n = r = 4. The values of the β function are the following:
β(1) = (1/6, 3/6, 1/6, 1/6) = P_1,
β(2) = (1/9, 4/9, 3/9, 1/9) = P_2,
β(3) = (1/4, 1/4, 1/4, 1/4) = P_3,
β(4) = β(5) = β(6) = β(7) = (1/6, 2/6, 2/6, 1/6) = P_4,
β(8) = (1/9, 2/9, 2/9, 4/9) = P_5.
Hence, the dimension of the sub-lattice W of ℝ^8 generated by the columns of the matrix X is μ = 5. The vector z_5 is obtained in the following way, since the first four values of β are linearly independent:
z 5 = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 9 ) T .
We determine the values of the γ function of the following vectors:
z 1 = ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) T ,
z 2 = ( 3 , 4 , 1 , 2 , 2 , 2 , 2 , 2 ) T ,
z 3 = ( 1 , 3 , 1 , 2 , 2 , 2 , 2 , 2 ) T ,
z 4 = ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 4 ) T ,
z 5 = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 9 ) T .
The values of γ are the following:
γ(1) = (1/6, 3/6, 1/6, 1/6, 0)ᵀ = P_1,
γ(2) = (1/9, 4/9, 3/9, 1/9, 0)ᵀ = P_2,
γ(3) = (1/4, 1/4, 1/4, 1/4, 0)ᵀ = P_3,
γ(4) = γ(5) = γ(6) = γ(7) = (1/6, 2/6, 2/6, 1/6, 0)ᵀ = P_4,
γ(8) = (1/18, 2/18, 2/18, 4/18, 1/2)ᵀ = P_5.
Following the calculations implied by the previous Section, we obtain that the positive basis of W, which is the sub-lattice generated by z 1 , z 2 , z 3 , z 4 , z 5 , is composed of
b 1 = ( 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) , b 2 = ( 0 , 1 , 0 , 0 , 0 , 0 , 0 , 0 ) , b 3 = ( 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0 ) ,
b 4 = ( 0 , 0 , 0 , 1 , 1 , 1 , 1 , 0 ) ,
and
b 5 = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 1 ) .
The supports of the positive basis vectors are the following:
s u p p ( b 1 ) = { 1 } , s u p p ( b 2 ) = { 2 } , s u p p ( b 3 ) = { 3 } ,
s u p p ( b 4 ) = { 4 , 5 , 6 , 7 } , s u p p ( b 5 ) = { 8 } .
We notice that s u p p ( b i ) = σ i , i = 1 , , 5 . We suppose that the objective probabilities’ vector is
p = ( 1 8 , 1 8 , 1 16 , 1 16 , 1 8 , 1 8 , 1 8 , 1 4 ) .
If x = (1, 5, 3, 4, 3, 4, 5, 5)ᵀ, then we obtain that E(x | F) = Π(x) = Σ_{i=1}^5 E(x | σ_i) I_{σ_i}. Moreover, we obtain that
E(x | σ_1) = 1, E(x | σ_2) = 5, E(x | σ_3) = 3,
E(x | σ_4) = 28/7 = 4, E(x | σ_5) = 5.
Finally, the goodness-of-fit for the Riesz estimator regression measure of x is ‖x − E(x | F)‖_p² = 1/4, and the adjusted goodness-of-fit for the Riesz estimator regression measure of x is ( 1 / (8 + 1 − 5) ) ‖x − E(x | F)‖_p² = 1/16. We note that the adjusted measure is the more robust goodness-of-fit measure in this case, since its value is less than 1.
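The calculations of Example 1 can be checked numerically with the following Python sketch, using the supports supp(b_1), …, supp(b_5) as the partition (0-based state indices) together with the stated p and x.

```python
import numpy as np

# Data of Example 1: objective probabilities, the partition formed by
# the supports of the positive basis, and the random variable x.
p = np.array([1/8, 1/8, 1/16, 1/16, 1/8, 1/8, 1/8, 1/4])
F = [[0], [1], [2], [3, 4, 5, 6], [7]]     # supp(b_1), ..., supp(b_5)
x = np.array([1.0, 5.0, 3.0, 4.0, 3.0, 4.0, 5.0, 5.0])

pi_x = np.empty_like(x)                    # E(x|F) = Pi(x)
for sigma in F:
    idx = np.array(sigma)
    pi_x[idx] = np.dot(x[idx], p[idx]) / p[idx].sum()

print(pi_x)                                # E(x|sigma_4) = 4 on states 4..7
r2 = float(np.sum((x - pi_x) ** 2 * p))    # ||x - E(x|F)||_p^2
print(r2, r2 / (8 + 1 - 5))                # 0.25 and 0.0625
```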
Example 2.
Let us suppose that
X =
  -2  5  1  1
   1 -1 -3  1
  -1 -1  1  1
   2 -2  2  2
   2  2 -2  1
   2 -2  2 -1
   1  2 -4  1
   1  2  2 -4 ,
is the design matrix. We notice that m = 8, and X is not composed of positive vectors. The set of the positive and the negative parts of its columns in this case is composed of the following vectors of ℝ^8:
{ ( 0 , 1 , 0 , 2 , 2 , 2 , 1 , 1 ) , ( 2 , 0 , 1 , 0 , 0 , 0 , 0 , 0 ) ,
( 5 , 0 , 0 , 0 , 2 , 0 , 2 , 2 ) , ( 0 , 1 , 1 , 2 , 0 , 2 , 0 , 0 ) ,
( 1 , 0 , 1 , 2 , 0 , 2 , 0 , 2 ) , ( 0 , 3 , 0 , 0 , 2 , 0 , 4 , 0 ) ,
( 1 , 1 , 1 , 2 , 1 , 0 , 1 , 0 ) , ( 0 , 0 , 0 , 0 , 0 , 1 , 0 , 4 ) } .
We notice that these vectors are linearly independent; hence, r = 8 . This implies that the supports of the positive basis vectors are the singletons
{ 1 } , , { 8 }
since the above partition of { 1 , , 8 } implies the measurability of the above vectors as random variables with respect to the objective probability vector p. p is the same as the one mentioned in the previous example:
p = ( 1 8 , 1 8 , 1 16 , 1 16 , 1 8 , 1 8 , 1 8 , 1 4 ) .
If x = (1, 5, 3, 4, 3, 4, 5, 5)ᵀ, then the random variable E(x | F) = Π(x) = x, and the error-term random variable is equal to (I − Π)(x) = 0. Hence, the goodness-of-fit for the Riesz estimator regression measure and the adjusted goodness-of-fit for the Riesz estimator regression measure of x take the same value, namely zero, since the random variable x belongs to the sub-lattice generated by the eight vectors denoted above; in other words, the states are totally separated by F.
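The claim that r = 8 in Example 2 can be verified numerically: the following Python sketch stacks the eight positive and negative parts listed above and computes their rank.

```python
import numpy as np

# The eight positive and negative parts of the columns of X in Example 2.
# Rank 8 means they are linearly independent, so the generated sub-lattice
# is all of R^8, the partition is into singletons, and E(x|F) = x.
parts = np.array([
    [0, 1, 0, 2, 2, 2, 1, 1], [2, 0, 1, 0, 0, 0, 0, 0],
    [5, 0, 0, 0, 2, 0, 2, 2], [0, 1, 1, 2, 0, 2, 0, 0],
    [1, 0, 1, 2, 0, 2, 0, 2], [0, 3, 0, 0, 2, 0, 4, 0],
    [1, 1, 1, 2, 1, 0, 1, 0], [0, 0, 0, 0, 0, 1, 0, 4],
], dtype=float)

print(np.linalg.matrix_rank(parts))   # 8
```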

5. Conclusions

In this paper, we consider a non-parametric regression model that relies on Riesz estimators. This linear regression model is similar to the usual linear regression model, since they both rely on projection operators. The associated projection operator is compatible with the properties of any sub-lattice, as implied in [2]. Since samples are actually projections of some infinite-dimensional vector lattice into some Euclidean space, Riesz estimator regression implies that the structure of positive projections is the main point for fitting such a model as an alternative linear regression model.

Author Contributions

Both authors contributed equally to the whole of the paper’s content. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data are associated with the content of the paper.

Acknowledgments

The authors acknowledge the Department of Statistics and Actuarial-Financial Mathematics of the Aegean University for its support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aliprantis, C.D.; Harris, D.; Tourky, R. Riesz Estimators. J. Econom. 2007, 136, 431–456. [Google Scholar] [CrossRef]
  2. Abramovich, Y.A.; Aliprantis, C.D.; Polyrakis, I.A. Lattice-Subspaces and positive projections. Proc. R. Irish Acad. 1994, 94A, 237–253. [Google Scholar]
  3. Le Roy, S.F.; Werner, J. Principles of Financial Economics; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar]
  4. Deelstra, G.; Plantin, G. Risk Theory and Reinsurance; EAA Series; Springer: London, UK, 2014. [Google Scholar]
  5. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1947. [Google Scholar]
  6. Gilboa, I.; Schmeidler, D. Maxmin expected utility with non-unique prior. J. Math. Econom. 1989, 18, 141–153. [Google Scholar] [CrossRef]
  7. Harsanyi, J.C. Games with incomplete information played by “Bayesian” players: Part I. The Basic Model. Manag. Sci. 1967, 14, 159–182. [Google Scholar] [CrossRef]
  8. Polyrakis, I.A. Minimal Lattice-Subspaces. Trans. Am. Math. Soc. 1999, 351, 4183–4203. [Google Scholar] [CrossRef]
  9. Kountzakis, C. Equilibrium in options’ incomplete markets. Int. J. Financ. Mark. Deriv. 2020, 7, 414–423. [Google Scholar] [CrossRef]
  10. Magill, M.; Quinzii, M. Theory of Incomplete Markets, 1st ed.; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]