1. Introduction
When solving practical problems in areas such as big data, artificial intelligence, engineering computation, and signal processing, computers inevitably produce errors arising from hardware limitations, algorithms, and methods of computation. Different definitions of cost and of algorithmic error lead to different computational models. In the classical setting, one considers the maximum error over all possible cases; this measures only the "worst element" of the given class of functions, and the optimal approximation error obtained in this setting need not reflect how well most elements are approximated. In the probabilistic setting, the error is taken in the worst case after removing subsets of measure at most δ. This reflects the best approximation of the "concentrated", or "most", elements of the given function class and, for a given measure μ, describes the distribution of the elements that attain a prescribed error order, thereby depicting the intrinsic structural characteristics of the function class more accurately. In the average setting, the error is obtained by integrating over the function class with respect to a probability measure μ, which reflects the average level of approximation of the function class for the given measure. Since every function in the class is weighted, this setting captures the essence of the function approximation problem and helps one understand its essential characteristics more deeply.
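Schematically, if e(x) denotes the approximation error of an individual element x of a class W under a fixed approximation method, and μ is a probability measure on W, the three settings measure the error as follows (a sketch in generic notation, not the paper's own symbols):
\[
  e^{\mathrm{wor}}(W)=\sup_{x\in W} e(x),\qquad
  e^{\mathrm{prob}}_{\delta}(W,\mu)=\inf_{\mu(G)\le\delta}\ \sup_{x\in W\setminus G} e(x),\qquad
  e^{\mathrm{avg}}_{p}(W,\mu)=\Bigl(\int_{W} e(x)^{p}\,d\mu(x)\Bigr)^{1/p}.
\]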
In recent years, width theory has attracted more and more attention. In the books [1,2], one can find detailed information about the usual widths, for example, the Kolmogorov, linear, and Gel'fand widths. Micchelli and Traub [3] expounded the relationship between computational complexity and width theory, which provided a solid theoretical foundation for problems in numerical analysis and algorithm complexity. V. E. Maiorov [4,5] defined the probabilistic (N,δ)-width and, using the discretization method, obtained sharp-order estimates of the probabilistic Kolmogorov and linear (N,δ)-widths of finite-dimensional spaces equipped with the Gaussian measure. Wasilkowski and Maiorov [6] studied the approximation characteristics of function classes equipped with r-fold Wiener measures in the average and probabilistic settings. Fang and Ye [7,8] studied the probabilistic linear width and the p-average linear width of the Sobolev space equipped with a Gaussian measure, and obtained the associated asymptotics in the corresponding norm. Shao et al. [9] gave the concept of the probabilistic Gel'fand width and estimated its sharp order for the univariate Sobolev space equipped with a Gaussian measure. Xu et al. [10] obtained the sharp order of the probabilistic and p-average linear widths of the one-dimensional Sobolev space equipped with a Gaussian measure. With the progress of science and technology, univariate results no longer meet practical needs, so scholars have turned to the multivariate situation. Romanyuk [11,12] obtained sharp-order estimates of the best approximations of classes of periodic functions of several variables by trigonometric polynomials with harmonics in step hyperbolic crosses. Chen and Fang [13,14] studied the multivariate Sobolev space with mixed derivative equipped with the Gaussian measure and obtained the associated asymptotics of the probabilistic and p-average Kolmogorov widths in the corresponding space; after that, they estimated the sharp order of the probabilistic linear (N,δ)-widths. Stepanets [15] found the sharp order of the Kolmogorov widths of classes of generalized integrals of functions from the corresponding unit balls. Dai and Wang [16] obtained the sharp order of the probabilistic and p-average linear widths of diagonal matrices. Liu et al. [17,18] calculated the exact order of the probabilistic and p-average Gel'fand widths of the multivariate Sobolev space equipped with the Gaussian measure.
First, let us review some definitions. Let X be a normed linear space, and consider any N-dimensional subspace of X. Given a non-null subset W of X and an element x of W, we measure the distance of x to the subspace and the deviation of the subset W from the subspace; the deviation measures the extent to which the "worst" element of W can be approximated from the N-dimensional subspace.
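In standard notation, writing F_N for an N-dimensional subspace of X (a generic sketch; the paper's own symbols may differ), these two quantities are
\[
  e(x,F_N)=\inf_{y\in F_N}\|x-y\|_X ,\qquad
  E(W,F_N)=\sup_{x\in W}\ \inf_{y\in F_N}\|x-y\|_X .
\]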
Definition 1 ([1]). The N-widths, in the sense of Kolmogorov and linear widths, of W in X are given by the corresponding infima, where, in the Kolmogorov case, the infimum runs over all possible N-dimensional linear subspaces of X and, in the linear case, over all continuous linear operators of rank not more than N from X to itself.
Definition 2 ([1]). Let there be given N continuous linearly independent functionals on X; the subspace of X on which all of them vanish is said to be of co-dimension N. The N-width, in the sense of Gel'fand, of W in X is then defined accordingly, where the infimum runs over all linear subspaces of X of co-dimension N.
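For reference, these classical widths are usually written as follows (a sketch in the standard notation d_N, λ_N, d^N; the paper's own symbols may differ):
\[
  d_N(W,X)=\inf_{F_N}\ \sup_{x\in W}\ \inf_{y\in F_N}\|x-y\|_X ,\qquad
  \lambda_N(W,X)=\inf_{T_N}\ \sup_{x\in W}\ \|x-T_N x\|_X ,\qquad
  d^N(W,X)=\inf_{L^N}\ \sup_{x\in W\cap L^N}\|x\|_X ,
\]
where F_N ranges over all N-dimensional linear subspaces of X, T_N over all continuous linear operators from X to itself of rank at most N, and L^N over all linear subspaces of X of co-dimension N.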
Definition 3 ([4,5,6]). Let W be a non-null subset of the space X, let the Borel field on the subset W be given, and let μ be a probability measure defined on it; that is, μ is a σ-additive non-negative function on the Borel field with total mass one. For any δ ∈ (0,1), the corresponding probabilistic (N,δ)-widths, in the sense of Kolmogorov and linear widths, of W in X with the measure μ are obtained by minimizing the corresponding N-widths of the remaining set over all possible Borel subsets G with μ(G) ≤ δ. The p-average N-widths, in the sense of Kolmogorov and linear widths, of W in X are defined by averaging the individual approximation errors over W in the L_p(μ) sense (see the sketch below).
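In the notation of the preceding sketch, the probabilistic and p-average versions take the following standard form (cf. refs. [4,5,6]; again a sketch, and the paper's own symbols may differ):
\[
  d_{N,\delta}(W,\mu,X)=\inf_{\mu(G)\le\delta} d_N(W\setminus G,\,X),\qquad
  \lambda_{N,\delta}(W,\mu,X)=\inf_{\mu(G)\le\delta}\lambda_N(W\setminus G,\,X),
\]
\[
  d_N^{(a)}(W,\mu,X)_p=\inf_{F_N}\Bigl(\int_{W}\inf_{y\in F_N}\|x-y\|_X^{\,p}\,d\mu(x)\Bigr)^{1/p},\qquad
  \lambda_N^{(a)}(W,\mu,X)_p=\inf_{T_N}\Bigl(\int_{W}\|x-T_N x\|_X^{\,p}\,d\mu(x)\Bigr)^{1/p},
\]
with the infima over G taken over Borel subsets of W.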
Now, we introduce the concept of the probabilistic Gel'fand (N,δ)-width; let us first recall the following concept. Let H be a Hilbert space equipped with a Gaussian measure μ. Let F be a closed subspace of H and let its orthogonal complement be taken. Any x ∈ H then decomposes uniquely as the sum of an element y of F and an element of the orthogonal complement, where y is called the projection of x onto F. For any Borel subset of F, its measure is defined as the μ-measure of its preimage under the projection operator onto F; this yields a probability measure on F.
Definition 4 ([9]). Let X be a normed linear space equipped with the norm ‖·‖_X, let H be a Hilbert space that can be continuously embedded into X, and let μ be the probability measure on the space H. Then, the corresponding probabilistic (N,δ)-width, in the sense of Gel'fand, of W in X with the measure μ is defined by taking the infimum over all linear subspaces of X with co-dimension not more than N and over all possible Borel subsets G with μ(G) ≤ δ, where the admissible subsets G are additionally required to satisfy condition (2), a restriction imposed for every closed subspace F of H.
Remark 1. From ref. [9], we know that condition (2) ensures that there are enough elements in the set under consideration.
Definition 5 ([18]). Let X, W, the Borel field, and μ be consistent with Definition 3. Then, the p-average N-width, in the sense of Gel'fand, of W in X is given by the corresponding average, where the infimum runs over all linear subspaces of X with co-dimension not more than N.
Next, we introduce some related symbols.
Let c, c_1, c_2, … denote positive constants that depend only on the relevant parameters and on d, and let a(y) and b(y) be two arbitrary positive functions defined on a set D. If there is a constant c_1 such that a(y) ≤ c_1 b(y) for all y ∈ D (respectively, a constant c_2 such that a(y) ≥ c_2 b(y) for all y ∈ D), we indicate this with the corresponding one-sided weak-asymptotic symbol. If both one-sided relations hold, that is, c_1 b(y) ≤ a(y) ≤ c_2 b(y) for all y ∈ D, we say that a(y) and b(y) are weakly equivalent.
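A common symbolic form of this weak-asymptotic notation (a sketch; the paper may use ≪ and ≫ instead of ≲ and ≳):
\[
  a(y)\lesssim b(y)\iff \exists\,c_1>0:\ a(y)\le c_1 b(y)\ \ \forall y\in D,\qquad
  a(y)\gtrsim b(y)\iff \exists\,c_2>0:\ a(y)\ge c_2 b(y)\ \ \forall y\in D,
\]
\[
  a(y)\asymp b(y)\iff a(y)\lesssim b(y)\ \text{and}\ a(y)\gtrsim b(y).
\]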
2. Main Results
In this article, we focus on the univariate Sobolev space and on the multivariate Sobolev space with mixed derivative. Now, we give the relevant definitions.
Let L_q, 1 ≤ q ≤ ∞, denote the usual Lebesgue space of q-th power integrable functions defined on the underlying torus, equipped with the standard norm. We also use the standard multi-index notation, which reduces to the scalar notation when d = 1.
Let the Hilbert space under consideration consist of all square-integrable functions that are 2π-periodic in each variable, expanded in the usual Fourier series and equipped with the usual inner product. For an admissible order r, we define the r-th-order derivative of x in the sense of Weyl through the Fourier coefficients of x; see the sketch below.
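For the univariate case, the standard formulas read as follows (a sketch with generic notation for the Fourier coefficients; the paper's own symbols may differ):
\[
  x(t)\sim\sum_{k\in\mathbb{Z}}\hat{x}(k)\,e^{ikt},\qquad
  \hat{x}(k)=\frac{1}{2\pi}\int_{0}^{2\pi}x(t)\,e^{-ikt}\,dt,\qquad
  \langle x,y\rangle=\frac{1}{2\pi}\int_{0}^{2\pi}x(t)\,\overline{y(t)}\,dt,
\]
\[
  x^{(r)}(t)=\sum_{k\neq 0}(ik)^{r}\,\hat{x}(k)\,e^{ikt},\qquad
  (ik)^{r}=|k|^{r}\,e^{\frac{i\pi r}{2}\,\mathrm{sgn}\,k}.
\]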
We define the univariate Sobolev space in terms of the Fourier series of x. This space is a Hilbert space, whose inner product and norm are expressed through the Fourier coefficients. When the smoothness r is large enough, this normed linear space can be continuously embedded into the relevant target space.
We equip the univariate Sobolev space with a Gaussian measure μ. The mean of the Gaussian measure μ is 0, and its correlation operator has the prescribed eigenfunctions and eigenvalues. For any finite system of orthogonal functions in the space, consider the cylindrical subsets generated by the corresponding coordinates; for any Borel subset of the coordinate space, the Gaussian measure μ of the cylindrical subset G is defined through the associated finite-dimensional Gaussian distribution.
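In comparable treatments (see, e.g., refs. [7,13]), the Gaussian measure of such a cylindrical set is written in the following form (a sketch; here φ_1, …, φ_n denote the chosen orthogonal eigenfunctions and σ_k² the corresponding eigenvalues of the correlation operator, and the paper's own normalization may differ):
\[
  \mu\bigl\{x:\ (\langle x,\varphi_1\rangle,\dots,\langle x,\varphi_n\rangle)\in G'\bigr\}
  =\prod_{k=1}^{n}\bigl(2\pi\sigma_k^{2}\bigr)^{-1/2}
   \int_{G'}\exp\Bigl(-\sum_{k=1}^{n}\frac{u_k^{2}}{2\sigma_k^{2}}\Bigr)\,du ,
\]
for every Borel set \(G'\subset\mathbb{R}^{n}\).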
We define the multivariate Sobolev space with mixed derivative analogously, through a condition on the Fourier coefficients corresponding to the mixed Weyl derivative. This space is again a Hilbert space, whose inner product and norm are expressed through the Fourier coefficients. When the smoothness r is large enough, this normed linear space can be continuously embedded into the relevant target space.
Analogously, we equip the multivariate Sobolev space with a Gaussian measure μ. The mean of the Gaussian measure μ is 0, and the correlation operator has the prescribed eigenfunctions and eigenvalues. For any finite system of orthogonal functions in the space and any Borel subset of the coordinate space, the Gaussian measure μ of the corresponding cylindrical subset G is again given by the associated finite-dimensional Gaussian distribution, as above.
More related information about Gaussian measures can be found in the works of Kuo [19] and of Talagrand and Ledoux [20].
Liu et al. [17,18] studied the Gel'fand widths, in the probabilistic and average settings, of the multivariate Sobolev space equipped with a Gaussian measure; they proved the following two theorems.
Theorem 1 ([17]). Let the parameters satisfy the conditions stated in ref. [17]. Then, the Gel'fand (N,δ)-width, in the probabilistic setting, of the multivariate Sobolev space equipped with the Gaussian measure μ satisfies the corresponding sharp asymptotics.
Theorem 2 ([18]). Let the parameters satisfy the conditions stated in ref. [18]. Then, the Gel'fand N-width, in the average setting, of the multivariate Sobolev space equipped with the Gaussian measure μ satisfies the corresponding sharp asymptotics.
For the normed linear space appearing in these results, the norm is defined in the standard way.
The following introduces the space used as the target in the present paper. For the relevant range of q, the space is defined through the Fourier series of x, with the norm given by the corresponding expression in the Fourier coefficients. According to Parseval's identity, this norm coincides with the L_2-norm when q = 2, and comparable one-sided relations hold on either side of q = 2. Thus, according to Hölder's inequality, under the stated restriction on the parameters the corresponding continuous embedding holds.
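Given the description above (a norm defined by the Fourier coefficients which, by Parseval's identity, coincides with the L_2-norm at q = 2), a space of this kind is the S_q-type space in the sense of Stepanets, whose norm reads (a sketch, under the assumption that this is the intended space):
\[
  \|x\|_{S_q}=\Bigl(\sum_{k}|\hat{x}(k)|^{q}\Bigr)^{1/q},\qquad 1\le q<\infty,
  \qquad\text{so that}\qquad \|x\|_{S_2}=\|x\|_{L_2}.
\]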
Shao [21] studied the Gel'fand (N,δ)-width, in the probabilistic setting, of the univariate Sobolev space equipped with a Gaussian measure in this norm. He proved the following:
Theorem 3 ([21]). Let the parameters satisfy the conditions stated in ref. [21]. Then, the corresponding sharp asymptotic order holds.
Based on previous research results, this paper mainly determines the asymptotic order of the p-average Gel'fand N-width of the univariate Sobolev space, and of the Gel'fand widths, in the probabilistic and average settings, of the multivariate Sobolev space equipped with the Gaussian measure, in this norm. Our main results are as follows.
Theorem 4. Let the parameters satisfy the corresponding conditions. Then, the p-average Gel'fand N-width of the univariate Sobolev space equipped with the Gaussian measure in this norm satisfies the stated asymptotic order.
Theorem 5. Let the parameters satisfy the corresponding conditions. Then, the probabilistic Gel'fand (N,δ)-width of the multivariate Sobolev space equipped with the Gaussian measure in this norm satisfies the stated asymptotics.
Remark 2. When the dimension d = 1, the results of Theorem 5 agree with those of Theorem 3.
Theorem 6. Let the parameters satisfy the corresponding conditions. Then, the p-average Gel'fand N-width of the multivariate Sobolev space equipped with the Gaussian measure in this norm satisfies the stated asymptotic order.
3. Discretization
This paper mainly uses discretization to prove Theorem 5. The core of this method is to reduce the widths of the function class to the widths of finite-dimensional spaces, and then to apply the known order of the probabilistic Gel'fand width in the finite-dimensional setting. First, we review the definition of the probabilistic Gel'fand width in the finite-dimensional space.
Let m be a positive integer, and consider the normed linear space of vectors in ℝ^m with the corresponding norm. We equip the finite-dimensional space ℝ^m with the standard Gaussian measure, defined for any Borel subset G in ℝ^m by the usual Gaussian integral; see the sketch below.
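In generic notation, writing γ_m for the standard Gaussian measure on ℝ^m and ℓ_q^m for ℝ^m with the q-norm (a sketch; the paper's own symbols may differ):
\[
  \|x\|_{\ell_q^m}=\Bigl(\sum_{k=1}^{m}|x_k|^{q}\Bigr)^{1/q}\ (1\le q<\infty),\qquad
  \|x\|_{\ell_\infty^m}=\max_{1\le k\le m}|x_k|,\qquad
  \gamma_m(G)=(2\pi)^{-m/2}\int_{G}\exp\Bigl(-\tfrac12\sum_{k=1}^{m}u_k^{2}\Bigr)\,du .
\]
As a quick numerical illustration of the standard Gaussian measure (a hypothetical sketch; the values of m, a, and the sample size are arbitrary choices, not taken from the paper), the measure of an axis-aligned cube factorizes into one-dimensional normal probabilities:
```python
import math
import numpy as np

# Monte Carlo sanity check of the standard Gaussian measure gamma_m on R^m:
# for the cube G = [-a, a]^m, its measure equals (Phi(a) - Phi(-a))^m.
rng = np.random.default_rng(0)
m, a, n = 5, 1.0, 200_000

samples = rng.standard_normal((n, m))                     # n draws from gamma_m
empirical = np.mean(np.all(np.abs(samples) <= a, axis=1)) # fraction landing in the cube
exact = math.erf(a / math.sqrt(2.0)) ** m                 # Phi(a) - Phi(-a) = erf(a/sqrt(2))

print(f"Monte Carlo estimate: {empirical:.4f}")
print(f"Product formula:      {exact:.4f}")
```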
Let N be a non-negative integer. Then, for any δ ∈ (0,1), the Gel'fand (N,δ)-width, in the probabilistic setting, of the finite-dimensional space equipped with the standard Gaussian measure in the corresponding norm is defined analogously to Definition 4; here the infimum runs over all linear subspaces of ℝ^m with co-dimension not more than N, and over all possible Borel subsets with standard Gaussian measure not greater than δ.
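Schematically, combining the verbal description above with the pattern of Definition 4 (a sketch in generic notation; the paper's precise formulation may differ):
\[
  d^{N,\delta}\bigl(\mathbb{R}^m,\gamma_m,\ell_q^m\bigr)
  =\inf_{\gamma_m(G)\le\delta}\ \inf_{L^N}\ \sup_{x\in L^N\setminus G}\ \|x\|_{\ell_q^m},
\]
where \(L^N\) runs over all linear subspaces of \(\mathbb{R}^m\) of co-dimension at most N.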
Tan et al. [9] obtained the Gel'fand (N,δ)-width of the finite-dimensional space in the probabilistic setting; the result is as follows.
Lemma 1 ([9]). Let the parameters satisfy the corresponding conditions; then, for any admissible δ, the asymptotic order stated in ref. [9] holds.
In order to establish the discretization theorem, the following two special Borel sets in the finite-dimensional spaces are of great importance.
Lemma 2 ([5]). For any admissible parameters and a suitable positive constant, the stated measure estimate holds.
Lemma 3 ([4]). For any admissible parameters and a positive constant that depends only on the parameter q, the stated measure estimate holds.
The following lemmas are crucial to the proof of Theorem 5.
Lemma 4 ([9]). Let X be a normed linear space, let H be a Hilbert space that can be embedded continuously into X, and let μ be the probability measure on H. Then, the relation between the corresponding widths stated in ref. [9] holds.
Lemma 5 ([22]). Let the parameters satisfy the corresponding conditions. Then, the probabilistic linear width of the multivariate Sobolev space equipped with the Gaussian measure in the target space satisfies the stated asymptotics.
From Lemmas 4 and 5, the lower bound of Theorem 5 follows easily. Thus, we only need to use the discretization method to establish the upper bound of Theorem 5. Next, to facilitate the computation, it is necessary to introduce some notation and to split the Fourier series of a function into a sum of dyadic blocks.
For any multi-index of non-negative integers, we define the corresponding dyadic set of frequencies, and we let the associated "block" of the Fourier series of x be the sum of the terms of the series whose frequencies lie in that set. We also collect the blocks according to the size of the multi-index, and we write |·| for the cardinality of a set; see the sketch below.
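In the hyperbolic-cross notation that is standard in this literature (a sketch; the symbols ρ(s) and δ_s are generic choices and may differ from the paper's own):
\[
  \rho(s)=\bigl\{k\in\mathbb{Z}^{d}:\ \lfloor 2^{\,s_j-1}\rfloor\le |k_j|<2^{\,s_j},\ j=1,\dots,d\bigr\},\qquad
  \delta_s(x,t)=\sum_{k\in\rho(s)}\hat{x}(k)\,e^{i(k,t)},
\]
for \(s=(s_1,\dots,s_d)\) with non-negative integer coordinates, so that
\(|\rho(s)|\asymp 2^{\|s\|_1}\), where \(\|s\|_1=s_1+\cdots+s_d\) and \(|\cdot|\) denotes cardinality.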
Lemma 6 ([23]). Under the corresponding conditions on the parameters, the mapping defined there is an isomorphism between the space of trigonometric polynomials with the given spectrum and the finite-dimensional space.
Lemma 7 ([22]). Under the corresponding conditions, the stated estimates hold; when the parameters coincide, this is consistent with the above results, and the accompanying estimate also holds.
According to Lemma 7, we can obtain the following estimate.
Now, for any block, we consider a mapping from the corresponding space of trigonometric polynomials to the finite-dimensional space. It follows from Lemma 6 and Equation (18) that this mapping is a linear isomorphism, and from Equation (9) we obtain the associated norm relation. With a suitable normalization, we then arrive at the estimate needed below. Based on the above analysis, the discretization theorem for the upper bound of Theorem 5 can be obtained as follows.
Theorem 7. Let the function class and the parameters satisfy the corresponding conditions, and assume that there exist sequences of numbers satisfying the conditions stated above. Then, the discretization inequality for the upper bound holds.
Proof. By Lemma 1, there exists a constant such that the corresponding finite-dimensional estimate holds. Define the auxiliary sets with the same constants as in Lemmas 2 and 3. From ref. [9], we have the required bound; therefore, for any admissible element, we obtain the corresponding estimate. From Lemmas 2 and 3, we can bound the measure of the exceptional set. Obviously, for any linear subspace F of the space under consideration, the corresponding relation holds.
Let subspaces of the finite-dimensional spaces with co-dimension at most the prescribed numbers be chosen. Then, from Equation (20), there exists a constant such that the corresponding estimate holds. Consider the associated subset of the Sobolev space. From the definitions of the Gaussian measure and of the standard Gaussian measure in the finite-dimensional space, and by virtue of Equations (19) and (21), we obtain the required measure estimate.
According to the assumptions of the theorem, we can combine the chosen subspaces through a direct sum; then, for any linear subspace F of the space under consideration, the corresponding relation holds. The resulting subspace has co-dimension at most the required bound. Considering the associated subspace of the Sobolev space, we obtain a decomposition into a direct sum with a suitable subspace. According to the definition of the Gel'fand (N,δ)-width in the probabilistic setting, we can obtain the desired upper estimate. □
4. Proof of Theorem 4
In this part, we estimate the Gel'fand N-width, in the average setting, of the univariate Sobolev space. The argument mainly uses the results on the probabilistic Gel'fand (N,δ)-width, together with methods of real analysis, to estimate the p-average Gel'fand N-width. It is known that the Fourier series of any element of the relevant subspace converges absolutely in the appropriate sense.
Proof of Theorem 4. We calculate the asymptotic order in two cases. Let a linear subspace of the target space with co-dimension at most N be fixed. Consider a sequence of sets satisfying the measure condition for every k and the corresponding width condition; the construction is sketched below.
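The standard way to pass from probabilistic estimates to average ones, which the argument below follows in outline, is a layer-cake-type decomposition (a sketch with a hypothetical nested sequence of sets G_k of measure at most 2^{-k}; the paper's actual choice of sets may differ):
\[
  \int_{W} e(x)^{p}\,d\mu(x)
  =\sum_{k\ge 0}\int_{G_k\setminus G_{k+1}} e(x)^{p}\,d\mu(x)
  \le\sum_{k\ge 0}\mu(G_k)\Bigl(\sup_{x\in W\setminus G_{k+1}} e(x)\Bigr)^{p},
  \qquad G_0=W ,
\]
where e(x) denotes the error of the fixed approximation scheme at x. Choosing each G_{k+1} so that the supremum is comparable to the probabilistic width at level 2^{-(k+1)} and using μ(G_k) ≤ 2^{-k} yields the upper estimate, while the lower estimate follows from a Chebyshev-type argument.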
- (1) For the first range of the parameters, according to the result of Theorem 3, we have the corresponding probabilistic estimate (22). According to the definition of the Gel'fand N-width in the average setting and estimate (22), the upper bound follows. Now, we prove the lower bound of Theorem 4. From Theorem 3, fix a suitable constant and consider the corresponding set of elements with large error. Then its measure must exceed the relevant threshold; otherwise, according to the definition of the Gel'fand (N,δ)-width in the probabilistic setting, we would obtain an estimate that contradicts inequality (23). Consequently, the required measure bound holds, and the lower estimate follows.
- (2) For the second range of the parameters, according to the result of Theorem 3, we have the corresponding probabilistic estimate (24). According to the definition of the Gel'fand N-width in the average setting and estimate (24), the upper bound follows. Now, we prove the lower bound of Theorem 4. From Theorem 3, fix a suitable constant and consider the corresponding set. The assumption to the contrary contradicts inequality (25); consequently, the required measure bound holds, and the lower estimate follows.
Combining the proofs and results of (1) and (2), we obtain the asymptotics of Theorem 4. Theorem 4 is proved. □
5. Proof of Theorem 5
Regarding the proof of the upper bound of Theorem 5, we need to recall two important lemmas.
Lemma 8. Let N be a non-negative integer, and let the quantities be defined as in Formula (15). Define the corresponding numbers, where ⌊a⌋ denotes the largest integer not greater than a, and β is a constant that also satisfies the stated condition.
Lemma 9. For the relevant range of the parameters, suppose that there exists u satisfying the stated condition, and define the corresponding sequence.
According to Lemmas 8 and 9, we can obtain the required bound. Then, the sequences of numbers so defined satisfy the conditions of Theorem 7.
Proof of Theorem 5. From Lemmas 8 and 9, we first record a simple fact needed below.
- (1) For the first range of the parameters, according to Theorem 7 and Lemma 1, we obtain estimate (26). Next, we estimate the three terms of (26). First, we estimate the first term: split the index set into two parts, one consisting of all k that satisfy one condition and the other of all k that satisfy the complementary condition. Estimating the two partial sums separately and combining Equations (28) and (29), we obtain (30). Next, we estimate the second term; from the corresponding condition and using the method of calculating the first term, we obtain (31). Finally, we estimate the third term; from the corresponding condition and, similarly, using the method of calculating the first term, we obtain (32). Substituting (30)–(32) into (26), we can obtain the upper estimate in this case.
- (2) For the second range of the parameters, according to Theorem 7 and Lemma 1, we obtain estimate (34). Next, we estimate the three terms of (34), respectively. In accordance with part (1), we can obtain the estimate (35) for the first term. Next, we estimate the second term; from the corresponding condition, we obtain (36). Finally, we estimate the third term; from the corresponding condition, we obtain (37). Substituting (35)–(37) into (34), we can obtain the upper estimate in this case.
The computation of the upper estimate is complete. Combining this with Lemmas 4 and 5, Theorem 5 is proved. □
6. Proof of Theorem 6
In this part, we mainly focus on proving Theorem 6. As before, it is known that the Fourier series of any element of the relevant subspace converges absolutely in the appropriate sense.
Proof of Theorem 6. We calculate the asymptotic order in two cases. Let a linear subspace of the target space with co-dimension not more than N be fixed. As in the proof of Theorem 4, consider a sequence of sets satisfying the measure condition for every k and the corresponding width condition.
- (1) For the first range of the parameters, according to the result of Theorem 5, we have the corresponding probabilistic estimate. According to the definition of the Gel'fand N-width in the average setting and estimate (39), the upper bound follows. Now, we prove the lower bound of Theorem 6. From Theorem 5, fix a suitable constant and consider the corresponding set. Then its measure must exceed the relevant threshold; otherwise, according to the definition of the Gel'fand (N,δ)-width in the probabilistic setting, we would obtain an estimate that contradicts inequality (40). Consequently, the required measure bound holds, and the lower estimate follows.
- (2) For the second range of the parameters, according to the result of Theorem 5, we have the corresponding probabilistic estimate. According to the definition of the Gel'fand N-width in the average setting and estimate (41), the upper bound follows. Now, we prove the lower bound of Theorem 6. From Theorem 5, fix a suitable constant and consider the corresponding set. The assumption to the contrary contradicts inequality (42); consequently, the required measure bound holds, and the lower estimate follows.
Combining the proofs and results of (1) and (2), we obtain the asymptotics of Theorem 6. Theorem 6 is proved. □
7. Summary
In this article, we mainly studied the Gel'fand widths, in the probabilistic and average settings, of Sobolev spaces equipped with Gaussian measures in the corresponding norm, using discretization methods. As we know, with the development of data science, mathematical theory plays an increasingly important role in big data research. The complexity of algorithms, the selection of optimal algorithms, and the convergence speed of approximation algorithms have always been important research directions in big data science, and width theory provides an effective framework for addressing such problems.
However, this study has some limitations; for example, we did not discuss certain remaining cases of the parameters, which are also worth investigating later. In addition, expanding the scope of the function classes considered and studying the characteristics of nonlinear function approximation in different frameworks are directions for our future work.