2. Symmetrization Related to the q-Deformed and λ-Parametrized Hyperbolic Tangent Function
In this section, all of the initial background information comes from Chapter 18 of [1].
We use the function given in (1), show that it is a sigmoid function, and present several of its properties related to approximation by neural network operators.
So, let us consider the activation function
Therefore, is strictly increasing.
Next, we obtain:
So, in the case of , we determine that is strictly concave up, with
And in the case of , we determine that is strictly concave down.
Clearly, is a shifted sigmoid function with , and (a semi-odd function).
Based on , , we consider the function ; . Notice that , so the x-axis is a horizontal asymptote. Thus, we obtain a deformed symmetry.
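As a purely numerical illustration of this deformed symmetry, the following Python sketch can be used; it assumes the concrete forms g_{q,λ}(x) = (e^{λx} − q e^{−λx})/(e^{λx} + q e^{−λx}) and M_{q,λ}(x) = (1/4)(g_{q,λ}(x + 1) − g_{q,λ}(x − 1)), with q, λ > 0, stated here as assumptions about the definitions above (they are the forms typically used for this construction).

```python
import numpy as np

# Assumed forms (a sketch only): q-deformed, lambda-parametrized hyperbolic tangent
# and the kernel built from its shifted copies.
def g(x, q, lam):
    return (np.exp(lam * x) - q * np.exp(-lam * x)) / (np.exp(lam * x) + q * np.exp(-lam * x))

def M(x, q, lam):
    # Positive and bell-shaped, since g is strictly increasing.
    return 0.25 * (g(x + 1.0, q, lam) - g(x - 1.0, q, lam))

q, lam = 1.5, 1.0
x = np.linspace(-8.0, 8.0, 2001)

# The x-axis is a horizontal asymptote: M decays to 0 in both directions.
print(M(np.array([-8.0, 8.0]), q, lam))                    # both values near 0

# Deformed symmetry, checked numerically: M_{q,lam}(-x) = M_{1/q,lam}(x).
print(np.max(np.abs(M(-x, q, lam) - M(x, 1.0 / q, lam))))  # ~ machine precision
```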
Let ; then, and (based on being strictly concave up for ), that is, . Hence is strictly increasing over
Now, let ; then, , and , that is,
Therefore is strictly decreasing over
Let us next consider
We determine that
Clearly, according to (13), we determine that
for
More precisely, is concave down over , and strictly concave down over
Consequently, has a bell-type shape over
Of course, it holds that
At , we have
That is, is the only critical number of over . Hence, at achieves its global maximum, which is
Conclusion: The maximum value of is
We mention the following:
Theorem 1 ([1], Ch. 18, p. 458). We determine that
Also, the following holds:
Theorem 2 ([1], Ch. 18, p. 459). It holds that
So that is a density function on
Similarly, we determine that
so that is a density function.
Furthermore, we observe the important symmetry
Furthermore, is a new density function over , i.e.,
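To see why such a symmetrized combination is again a density, here is a minimal sketch; it assumes that the important symmetry above has the form M_{q,λ}(−x) = M_{1/q,λ}(x) and that the symmetrized kernel is the average W(x) := (M_{q,λ}(x) + M_{1/q,λ}(x))/2 (both specific forms are assumptions; the argument only uses this symmetry and the normalization above).

```latex
% A minimal sketch, assuming W(x) := \tfrac{1}{2}\big(M_{q,\lambda}(x) + M_{1/q,\lambda}(x)\big).
% Evenness follows from the deformed symmetry M_{q,\lambda}(-x) = M_{1/q,\lambda}(x):
\[
W(-x) = \frac{M_{q,\lambda}(-x) + M_{1/q,\lambda}(-x)}{2}
      = \frac{M_{1/q,\lambda}(x) + M_{q,\lambda}(x)}{2} = W(x).
\]
% W is again a density, because each summand integrates to one:
\[
\int_{\mathbb{R}} W(x)\,dx
  = \frac{1}{2}\left(\int_{\mathbb{R}} M_{q,\lambda}(x)\,dx + \int_{\mathbb{R}} M_{1/q,\lambda}(x)\,dx\right)
  = \frac{1}{2}(1 + 1) = 1.
\]
```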
An essential property follows:
Theorem 3 ([2]). Let , . Then,
We need the following:
Proposition 1 ([2]). It holds for () that
We mention the following:
Definition 1. The modulus of continuity here is defined by
where is bounded and continuous, denoted by , . Similarly, is defined for (uniformly continuous functions). We determine that , iff as . Denote ,
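For intuition only, the first modulus of continuity can be approximated numerically; the sketch below uses the common definition ω₁(f, δ) = sup_{|x−y|≤δ} |f(x) − f(y)| restricted to a finite interval and grid (the interval, grid, and test function are illustrative assumptions, not part of the definition above).

```python
import numpy as np

def modulus_of_continuity(f, a, b, delta, n=1001):
    """Brute-force approximation of w1(f, delta) = sup_{|x-y| <= delta} |f(x) - f(y)|
    over a uniform grid on [a, b] (illustration only)."""
    x = np.linspace(a, b, n)
    fx = f(x)
    close = np.abs(x[:, None] - x[None, :]) <= delta        # pairs with |x - y| <= delta
    return np.abs(fx[:, None] - fx[None, :])[close].max()   # largest |f(x) - f(y)| among them

# Example: f(x) = sqrt(|x|) is uniformly continuous, so w1(f, delta) -> 0 as delta -> 0.
for d in (1.0, 0.1, 0.01):
    print(d, modulus_of_continuity(lambda t: np.sqrt(np.abs(t)), -1.0, 1.0, d))
```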
4. Main Results
We present the following approximation results:
Theorem 6. Let , , , . Then,
and
where is the supremum norm. So, for , we determine that , pointwise and uniformly.
Proof. That is,
□
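For a rough numerical feel of the convergence stated in Theorem 6, one may experiment with a generic univariate convolution operator of the form A_n(f)(x) = ∫_ℝ f(t) n W(n(x − t)) dt, where W is an even density concentrating its mass as n grows; this form, the kernel below, and all parameters are assumptions for illustration only and are not the multivariate operators of Section 3.

```python
import numpy as np

def g(x, q, lam):
    return (np.exp(lam * x) - q * np.exp(-lam * x)) / (np.exp(lam * x) + q * np.exp(-lam * x))

def W(x, q=1.5, lam=1.0):
    # Assumed symmetrized kernel: average of the two deformed bell shapes.
    M = lambda t, qq: 0.25 * (g(t + 1.0, qq, lam) - g(t - 1.0, qq, lam))
    return 0.5 * (M(x, q) + M(x, 1.0 / q))

def A_n(f, x, n, half_width=12.0, m=4001):
    # Generic convolution operator A_n(f)(x) = int f(t) * n * W(n (x - t)) dt,
    # truncated to |x - t| <= half_width / n and evaluated by a Riemann sum.
    t = np.linspace(x - half_width / n, x + half_width / n, m)
    dt = t[1] - t[0]
    return np.sum(f(t) * n * W(n * (x - t))) * dt

f = np.sin                                   # a smooth, bounded test function
for n in (1, 4, 16, 64):
    print(n, abs(A_n(f, 0.7, n) - f(0.7)))   # the error shrinks as n grows
```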
We continue with the following.
Theorem 7. All is as in Theorem 6. Then,
and
For we determine that , pointwise and uniformly.
Thus, the following are true:
Theorem 8. All is as in Theorem 6. Then,
and
For we determine that , pointwise and uniformly.
Next, we describe the global smoothness preservation property of our activated multivariate operators.
Theorem 9. Here, . Then,
If , then .
Proof. Let , ; then, we obtain (59). □
Remark 2. Let f be the projection function onto the coordinate, call it , , where
Then, it holds that
Hence,
proving that
for any
So, (59) is an attained sharp inequality. Furthermore,
and
Thus, is well defined.
Theorem 10. Let . Then,
If , then . Inequality (67) is an attained sharp inequality according to ,
Proof. According to (52), one can write
and
where
Thus,
proving inequality (67).
We do have
Thus, is well defined. □
Theorem 11. Let . Then,
If , then . Inequality (73) is an attained sharp inequality according to ,
Proof. Let
; then,
and
That is, (73) is true, and it is an attained sharp inequality according to ,
Indeed, we determine that
(as in (72))
Thus, is well defined. □
We establish the following:
Remark 3. Let be fixed. Assume that , . Here, denotes a partial derivative of f, , , , and , where
We also write , and we say it is of the order l.
We assume that any partial for all ,
Through the repeated application of Theorem 5, we obtain
Similarly, we determine that
and
for all ,
So, all of our results in this work can be written in the simultaneous approximation context (see Theorems 15–17).
We establish the following:
Remark 4. Activated Iterative Multivariate Convolution
We determine that
∀ , where
Let , as , and
Furthermore, it holds that
according to the dominated convergence theorem, because we determine that
and is integrable over , ∀. Hence, .
Furthermore, it holds that
i.e.,
So, is a bounded positive linear operator.
Clearly, it holds that
And for , we obtain
so the contraction property is valid and is a bounded linear operator.
Remark 5. Let . We observe that
Now, let , and as above. Consequently, it holds, as in [1], Chapter 2, that
Next, we have
Remark 6. Let , as , and
as
This is true according to the bounded convergence theorem, and we determine that
and
where is a cube. Thus,
Therefore, it holds that
and we obtain
as ∀, with being integrable over .
Therefore, according to the dominated convergence theorem,
Hence, is bounded and continuous in .
The iterated facts hold for as in the case, all the same! See (83)–(85) and all of Remark 5.
Remark 7. Next, we observe the following: Let , and Let , as . Then,
as
The latter is obtained according to the dominated convergence theorem:
and
and
as ∀. Furthermore, it holds that
in which the last function is integrable over . Hence, is bounded and continuous in .
The iterated facts hold for in the same manner as with ! See (83)–(85) and all of Remark 5. See the related Theorems 18 and 19, which we describe later.
Next, we greatly improve the speed of convergence of our activated multivariate operators by using the differentiation of functions.
Notation 1. Let , . Here, denotes a partial derivative of f, , , and , where . We also write , and we say it is of the order l.
We denote
also written as
where is the supremum norm.
Theorem 12. Let , ; , , with for all , , and , where
Then,
- (i)
- (ii)
Assume that for all , . We have
with a high speed of
- (iii)
- (iv)
We determine that , as , pointwise and uniformly.
Proof. Consider , ; . Then,
for all
We have the multivariate Taylor’s formula
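One standard form of it, sketched here for convenience with g_z(t) := f(x + t(z − x)), t ∈ [0, 1], is as follows (the exact statement and notation of the original argument may differ):

```latex
% A sketch of one standard multivariate Taylor formula with integral remainder,
% for f \in C^{N} and g_z(t) := f(x + t(z - x)), t \in [0, 1]:
\[
f(z) = \sum_{j=0}^{N} \frac{g_z^{(j)}(0)}{j!}
     + \frac{1}{(N-1)!} \int_0^1 (1 - \theta)^{N-1}
       \left( g_z^{(N)}(\theta) - g_z^{(N)}(0) \right) d\theta ,
\]
% where the derivatives of g_z expand into the partial derivatives of f:
\[
g_z^{(j)}(t) = \left[ \left( \sum_{i=1}^{d} (z_i - x_i)\, \frac{\partial}{\partial x_i} \right)^{\! j} f \right]\!(x + t\,(z - x)).
\]
```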
Notice that . Also, for we have
Furthermore we obtain
So,
Thus, we determine, for , that
where
Conclusion: When , we prove that
According to (108), we determine that
We prove, in general, that
Next, we estimate
so that the following holds:
According to (107), we can write
The theorem is proven. □
We continue with the activated multivariate Kantorovich operators under differentiation.
Theorem 13. Let , ; , , with ,
Then,
- (i)
- (ii)
Assume that for all , ; we have
with a high speed of
- (iii)
- (iv)
We determine that , as , pointwise and uniformly.
Proof. Here we consider ,
Conclusion: When , we prove that
According to (128), we determine that
We prove, in general, that
Next, we see that
So, when , we obtain
and, in general, it holds that
Furthermore, it holds that
Let
Here, is as in (45), and is as in (46).
We do have, under ,
Furthermore, we determine that
or, better still,
Consequently, we derive that
We derive that
The theorem is proven. □
We continue with the activated multivariate quadrature operators under differentiation.
Theorem 14. Let , ; , , with
Then,
- (i)
- (ii)
Assume that for all , ; we have
with a high speed of
- (iii)
- (iv)
We determine that , as , pointwise and uniformly.
Proof. Conclusion: When , we prove that
According to (154), we determine that
We establish, in general, that
So, when , we obtain
and it holds, in general, that
Furthermore, it holds that
Let
Here, is as in (45), and is as in (46).
We derive, under ,
Furthermore, we determine that
As in the proof of Theorem 13, we obtain
Consequently, we derive that
At the end, we estimate
(as in the proof of Theorem 13)
The theorem is proven. □
Next comes the simultaneous multivariate activated approximation.
Theorem 15. Let be fixed, with , . We assume that for , . Here, , , .
Then,
Proof. This follows from Theorems 6–8 and Remark 3. □
Next comes simultaneous global smoothness preservation.
Theorem 16. Let be fixed, with , . We assume that for , .
If , then , and
Proof. This follows from Theorems 9–11 and Remark 3. □
Under simultaneous activated multivariate extended differentiation, we derive the following result.
Theorem 17. Let ; , . Let , denote a partial derivative of f, , , , and , where . We assume any for all , .
We further assume that , , with , . Then,
Proof. This follows from Theorems 12–14 and Remark 3. □
In the final part of this work, we present our results related to activated iterative approximation. This is a continuation of Remarks 4–6.
Theorem 18. Let , , . Then,
So, the speed of convergence of , , to the unit operator I is not worse than the speed of convergence of , , to I.
Proof. This follows from Theorems 6–8 and (87). □
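The heart of such comparisons is a standard telescoping estimate; here is a minimal sketch, writing T for any of the operators involved and assuming only linearity and the sup-norm contraction property from Remark 4:

```latex
% A minimal sketch, assuming T is linear with \|T h\|_\infty \le \|h\|_\infty (Remark 4).
% Telescoping the iterates gives
\[
T^{k} f - f = \sum_{i=0}^{k-1} T^{i}\big(Tf - f\big),
\]
% and, since each power T^{i} is again a sup-norm contraction,
\[
\big\| T^{k} f - f \big\|_\infty
   \le \sum_{i=0}^{k-1} \big\| T^{i}(Tf - f) \big\|_\infty
   \le k\, \big\| Tf - f \big\|_\infty .
\]
```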
We continue with the following:
Theorem 19. Let , with , ; . Then,
Clearly, the speed of convergence of the above activated multiply iterated operator to the unit operator is not worse than the speed of the operators , , and to the unit, respectively.
Proof. This follows from Theorems 6–8 and (89). □
We finish our work with multivariate simultaneous iterations.
Remark 8. Let be fixed. Assume that , with , with , ; . Then, according to (87), we obtain
According to (78) and inductively, we obtain
Similarly, we derive that
and
Now, let . Then, based on (89), we find that
Similarly, we determine that
and
All of the above inequalities (189)–(195) show that our implied multivariate iterative simultaneous approximations do not have a speed worse than our basic simultaneous approximations using activated convolution operators.
Conclusion: Here, we presented a new idea: passing from the main tools of neural networks, the activation functions, to multivariate convolution integral approximation. This represents a rare case of applying applied mathematics to theoretical mathematics.