Article

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

Boris Hanin
Department of Mathematics, Texas A&M, College Station, TX 77843, USA
Mathematics 2019, 7(10), 992; https://doi.org/10.3390/math7100992
Submission received: 29 September 2019 / Revised: 15 October 2019 / Accepted: 16 October 2019 / Published: 18 October 2019
(This article belongs to the Special Issue Computational Mathematics, Algorithms, and Data Processing)

Abstract
This article concerns the expressive power of depth in neural nets with ReLU activations and a bounded width. We are particularly interested in the following questions: What is the minimal width $w_{\min}(d)$ so that ReLU nets of width $w_{\min}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to represent convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3$.

1. Introduction

Over the past several years, neural nets, particularly deep nets, have become the state of the art in a remarkable number of machine learning problems, from mastering Go to image recognition/segmentation and machine translation (see the review article [1] for more background). Despite all their practical successes, a robust theory of why they work so well is in its infancy. Much of the work to date has focused on the problem of explaining and quantifying the expressivity (the ability to approximate a rich class of functions) of deep neural nets [2,3,4,5,6,7,8,9,10,11]. Expressivity can be seen as an effect of both depth and width. It has been known since at least the work of Cybenko [12] and Hornik-Stinchcombe-White [13] that if no constraint is placed on the width of a hidden layer, then a single hidden layer is enough to approximate essentially any function. The purpose of this article, in contrast, is to investigate the “effect of depth without the aid of width.” More precisely, for each $d \geq 1$, we would like to estimate:
$$ w_{\min}(d) := \min\left\{ w \in \mathbb{N} \;\middle|\; \text{ReLU nets of width } w \text{ can approximate any positive continuous function on } [0,1]^d \text{ arbitrarily well} \right\}. $$
Here, $\mathbb{N} = \{0, 1, 2, \ldots\}$ are the natural numbers and ReLU is the so-called “rectified linear unit,” $\mathrm{ReLU}(t) = \max\{0, t\}$, which is the most popular non-linearity used in practice (see (4) for the exact definition). In Theorem 1, we prove that $w_{\min}(d) \leq d+2$. This raises two questions:
Q1. 
Is the estimate in the previous line sharp?
Q2. 
How efficiently can ReLU nets of a given width $w \geq w_{\min}(d)$ approximate a given continuous function of $d$ variables?
A priori, it is not clear how to estimate $w_{\min}(d)$ or even whether it is finite. One of the contributions of this article is to provide reasonable bounds on $w_{\min}(d)$ (see Theorem 1). Moreover, we also provide quantitative estimates on the corresponding rate of approximation. On the subject of Q1, we will prove in forthcoming work with M. Sellke [14] that in fact, $w_{\min}(d) = d+1$. When $d = 1$, the lower bound is simple to check, and the upper bound follows, for example, from Theorem 3.1 in [5]. The main results in this article, however, concern Q1 and Q2 for convex functions. For instance, we prove in Theorem 1 that:
$$ w^{\mathrm{conv}}_{\min}(d) \leq d+1, $$
where:
$$ w^{\mathrm{conv}}_{\min}(d) := \min\left\{ w \in \mathbb{N} \;\middle|\; \text{ReLU nets of width } w \text{ can approximate any positive convex function on } [0,1]^d \text{ arbitrarily well} \right\}. $$
This illustrates a central point of the present paper: the convexity of the ReLU activation makes ReLU nets well-adapted to representing convex functions on $[0,1]^d$.
Theorem 1 also addresses Q2 by providing quantitative estimates on the depth of a ReLU net with width $d+1$ that approximates a given convex function. We provide similar depth estimates for arbitrary continuous functions on $[0,1]^d$, but this time for nets of width $d+3$. Several of our depth estimates are based on the work of Balázs-György-Szepesvári [15] on max-affine estimators in convex regression.
In order to prove Theorem 1, we must understand which functions can be exactly computed by a ReLU net. Such functions are always piecewise affine, and we prove in Theorem 2 the converse: every piecewise affine function on $[0,1]^d$ can be exactly represented by a ReLU net with hidden layer width at most $d+3$. Moreover, we prove that the depth of the network that computes such a function is bounded by the number of affine pieces it contains. This extends the results of Arora-Basu-Mianjy-Mukherjee (e.g., Theorem 2.1 and Corollary 2.2 in [2]).
Convex functions again play a special role. We show that every convex function on $[0,1]^d$ that is piecewise affine with $N$ pieces can be represented exactly by a ReLU net with width $d+1$ and depth $N$.

2. Statement of Results

To state our results precisely, we set notation and recall several definitions. For $d \geq 1$ and a continuous function $f : [0,1]^d \to \mathbb{R}$, write:
$$ \|f\|_{C^0} := \sup_{x \in [0,1]^d} |f(x)|. $$
Further, denote by:
$$ \omega_f(\varepsilon) := \sup\left\{ |f(x) - f(y)| \;:\; \|x - y\| \leq \varepsilon \right\} $$
the modulus of continuity of $f$, whose value at $\varepsilon$ is the largest amount by which $f$ can change when its argument moves by at most $\varepsilon$. Note that, since $f$ is continuous on the compact set $[0,1]^d$, we have $\omega_f(\varepsilon) \to 0$ as $\varepsilon \to 0$. Next, given $d_{\mathrm{in}}, d_{\mathrm{out}}$, and $w \geq 1$, we define a feed-forward neural net with ReLU activations, input dimension $d_{\mathrm{in}}$, hidden layer width $w$, depth $n$, and output dimension $d_{\mathrm{out}}$ to be any member of the finite-dimensional family of functions:
$$ \mathrm{ReLU} \circ A_n \circ \cdots \circ \mathrm{ReLU} \circ A_1 \tag{4} $$
that map $\mathbb{R}^{d_{\mathrm{in}}}$ to $\mathbb{R}^{d_{\mathrm{out}}}_{+} = \{ x = (x_1, \ldots, x_{d_{\mathrm{out}}}) \in \mathbb{R}^{d_{\mathrm{out}}} \mid x_i \geq 0 \}$. In (4),
$$ A_j : \mathbb{R}^w \to \mathbb{R}^w, \quad j = 2, \ldots, n-1, \qquad A_1 : \mathbb{R}^{d_{\mathrm{in}}} \to \mathbb{R}^w, \qquad A_n : \mathbb{R}^w \to \mathbb{R}^{d_{\mathrm{out}}} $$
are affine transformations, and for every $m \geq 1$:
$$ \mathrm{ReLU}(x_1, \ldots, x_m) = \big( \max\{0, x_1\}, \ldots, \max\{0, x_m\} \big). $$
We often denote such a net by $\mathcal{N}$ and write:
$$ f_{\mathcal{N}}(x) := \mathrm{ReLU} \circ A_n \circ \cdots \circ \mathrm{ReLU} \circ A_1(x) $$
for the function it computes. Our first result contrasts both the width and depth required to approximate continuous, convex, and smooth functions by ReLU nets.
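For concreteness, the following minimal NumPy sketch (our illustration, not part of the paper; the helper name `relu_net` is ours) evaluates a net of the form (4), i.e., an alternating composition of affine maps and coordinatewise ReLUs:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_net(x, layers):
    """Evaluate ReLU ∘ A_n ∘ ... ∘ ReLU ∘ A_1 at x, where each affine map
    A_j is given as a pair (W_j, b_j) with A_j(z) = W_j z + b_j."""
    z = np.asarray(x, dtype=float)
    for W, b in layers:
        z = relu(W @ z + b)
    return z

# A toy net with input dimension 2, hidden layer width 3, depth 3, output dimension 1.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((3, 2)), rng.standard_normal(3)),  # A_1 : R^2 -> R^3
    (rng.standard_normal((3, 3)), rng.standard_normal(3)),  # A_2 : R^3 -> R^3
    (rng.standard_normal((1, 3)), rng.standard_normal(1)),  # A_3 : R^3 -> R^1
]
print(relu_net([0.2, 0.7], layers))  # a nonnegative scalar output, as in (4)
```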
Theorem 1.
Let $d \geq 1$ and $f : [0,1]^d \to \mathbb{R}_{+}$ be a positive function with $\|f\|_{C^0} = 1$. We have the following three cases:
1. (f is continuous)
There exists a sequence of feed-forward neural nets $\mathcal{N}_k$ with ReLU activations, input dimension $d$, hidden layer width $d+2$, and output dimension $1$, such that:
$$ \lim_{k \to \infty} \|f - f_{\mathcal{N}_k}\|_{C^0} = 0. \tag{5} $$
In particular, $w_{\min}(d) \leq d+2$. Moreover, write $\omega_f$ for the modulus of continuity of $f$, and fix $\varepsilon > 0$. There exists a feed-forward neural net $\mathcal{N}_\varepsilon$ with ReLU activations, input dimension $d$, hidden layer width $d+3$, output dimension $1$, and:
$$ \mathrm{depth}(\mathcal{N}_\varepsilon) = \frac{2 \cdot d!}{\omega_f(\varepsilon)^d} \tag{6} $$
such that:
$$ \|f - f_{\mathcal{N}_\varepsilon}\|_{C^0} \leq \varepsilon. \tag{7} $$
2. (f is convex)
There exists a sequence of feed-forward neural nets $\mathcal{N}_k$ with ReLU activations, input dimension $d$, hidden layer width $d+1$, and output dimension $1$, such that:
$$ \lim_{k \to \infty} \|f - f_{\mathcal{N}_k}\|_{C^0} = 0. \tag{8} $$
Hence, $w^{\mathrm{conv}}_{\min}(d) \leq d+1$. Further, there exists $C > 0$ such that if $f$ is both convex and Lipschitz with Lipschitz constant $L$, then the nets $\mathcal{N}_k$ in (8) can be taken to satisfy:
$$ \mathrm{depth}(\mathcal{N}_k) = k + 1, \qquad \|f - f_{\mathcal{N}_k}\|_{C^0} \leq C\, L\, d^{3/2}\, k^{-2/d}. \tag{9} $$
3. (f is smooth)
There exists a constant $K$ depending only on $d$ and a constant $C$ depending only on the maximum of the first $K$ derivatives of $f$ such that for every $k \geq 3$, the width $d+2$ nets $\mathcal{N}_k$ in (5) can be chosen so that:
$$ \mathrm{depth}(\mathcal{N}_k) = k, \qquad \|f - f_{\mathcal{N}_k}\|_{C^0} \leq C\,(k-2)^{-1/d}. \tag{10} $$
The main novelty of Theorem 1 is the width estimate $w^{\mathrm{conv}}_{\min}(d) \leq d+1$ and the quantitative depth estimates (9) for convex functions, as well as the analogous estimates (6) and (7) for continuous functions. Let us briefly explain the origin of the other estimates. The relation (5) and the corresponding estimate $w_{\min}(d) \leq d+2$ are a combination of the well-known fact that ReLU nets with one hidden layer can approximate any continuous function and a simple procedure by which a ReLU net with input dimension $d$ and a single hidden layer of width $n$ can be replaced by another ReLU net that computes the same function but has depth $n+2$ and width $d+2$. For these width $d+2$ nets, we are unaware of how to obtain quantitative estimates on the depth required to approximate a fixed continuous function to a given precision. At the expense of increasing the width of our ReLU nets from $d+2$ to $d+3$, however, we furnish the estimates (6) and (7). On the other hand, using Theorem 3.1 in [5], when $f$ is sufficiently smooth, we obtain the depth estimates (10) for width $d+2$ ReLU nets. Indeed, since we are working on the compact set $[0,1]^d$, the smoothness classes $W_{w,q,\gamma}$ from [5] reduce to classes of functions that have sufficiently many bounded derivatives.
Our next result concerns the exact representation of piecewise affine functions by ReLU nets. Instead of measuring the complexity of such a function by its Lipschitz constant or modulus of continuity, we measure it by the minimal number of affine pieces needed to define the function.
Theorem 2.
Let $d \geq 1$ and $f : [0,1]^d \to \mathbb{R}_{+}$ be the function computed by some ReLU net with input dimension $d$, output dimension $1$, and arbitrary width. There exist affine functions $g_\alpha, h_\beta : [0,1]^d \to \mathbb{R}$ such that $f$ can be written as the difference of positive convex functions:
$$ f = g - h, \qquad g := \max_{1 \leq \alpha \leq N} g_\alpha, \qquad h := \max_{1 \leq \beta \leq M} h_\beta. \tag{11} $$
Moreover, there exists a feed-forward neural net $\mathcal{N}$ with ReLU activations, input dimension $d$, hidden layer width $d+3$, output dimension $1$, and:
$$ \mathrm{depth}(\mathcal{N}) = 2(M+N) $$
that computes $f$ exactly. Finally, if $f$ is convex (and hence $h$ vanishes), then the width of $\mathcal{N}$ can be taken to be $d+1$, and the depth can be taken to be $N$.
The fact that the function computed by a ReLU net can be written as in (11) follows from Theorem 2.1 in [2]. The novelty in Theorem 2 is therefore the uniform width estimate $d+3$ in the representation of any function computed by a ReLU net and the $d+1$ width estimate for convex functions. Theorem 2 will be used in the proof of Theorem 1.

3. Relation to Previous Work

This article is related to several strands of prior work:
  • Theorems 1 and 2 are “deep and narrow” analogs of the well-known “shallow and wide” universal approximation results (e.g., Cybenko [12] and Hornik-Stinchcombe-White [13]) for feed-forward neural nets. Those articles show that essentially any scalar function $f : [0,1]^d \to \mathbb{R}$ on the d-dimensional unit cube can be arbitrarily well approximated by a feed-forward neural net with a single hidden layer of arbitrary width. Such results hold for a wide class of nonlinear activations, but are not particularly illuminating from the point of view of understanding the expressive advantages of depth in neural nets.
  • The results in this article complement the work of Liao-Mhaskar-Poggio [3] and Mhaskar-Poggio [5], who considered the advantages of depth for representing certain hierarchical or compositional functions by neural nets with both ReLU and non-ReLU activations. Their results (e.g., Theorem 1 in [3] and Theorem 3.1 in [5]) give bounds on the width required for approximation by both shallow nets and certain deep hierarchical nets.
  • Theorems 1 and 2 are also quantitative analogs of Corollary 2.2 and Theorem 2.4 in the work of Arora-Basu-Mianjy-Mukherjee [2]. Their results give bounds on the depth of a ReLU net needed to compute exactly a piecewise linear function of $d$ variables. However, except when $d = 1$, they do not obtain an estimate on the number of neurons in such a network and hence cannot bound the width of the hidden layers.
  • Our results are related to Theorems II.1 and II.4 of Rolnick-Tegmark [16], which are themselves extensions of Lin-Rolnick-Tegmark [4]. Their results give lower bounds on the total size (number of neurons) of a neural net (with non-ReLU activations) that approximates sparse multivariable polynomials. Their bounds do not imply a control on the width of such networks that depends only on the number of variables, however.
  • This work was inspired in part by questions raised in the work of Telgarsky [8,9,10]. In particular, in Theorems 1.1 and 1.2 of [8], Telgarsky constructed interesting examples of sawtooth functions that can be computed efficiently by deep, width-2 ReLU nets but cannot be well approximated by shallower networks with a similar number of parameters.
  • Theorems 1 and 2 are quantitative statements about the expressive power of depth without the aid of width. This topic, usually without considering bounds on the width, has been taken up by many authors. We refer the reader to [6,7] for several interesting quantitative measures of the complexity of functions computed by deep neural nets.
  • Finally, we refer the reader to the interesting work of Yarotsky [11], which provides bounds on the total number of parameters in a ReLU net needed to approximate a given class of functions (mainly balls in various Sobolev spaces).

4. Proof of Theorem 2

Proof of Theorem 2.
We first treat the case:
$$ f = \sup_{1 \leq \alpha \leq N} g_\alpha, \qquad g_\alpha : [0,1]^d \to \mathbb{R} \ \text{affine}, $$
when $f$ is convex. We seek to show that $f$ can be exactly represented by a ReLU net with input dimension $d$, hidden layer width $d+1$, and depth $N$. Our proof relies on the following observation.
Lemma 1.
Fix $d \geq 1$, let $T : \mathbb{R}^d_{+} \to \mathbb{R}$ be an arbitrary function, and let $L : \mathbb{R}^d \to \mathbb{R}$ be affine. Define an invertible affine transformation $A : \mathbb{R}^{d+1} \to \mathbb{R}^{d+1}$ by:
$$ A(x, y) = \big(x,\; L(x) + y\big). $$
Then, the image of the graph of T under:
$$ A \circ \mathrm{ReLU} \circ A^{-1} $$
is the graph of $x \mapsto \max\{T(x), L(x)\}$, viewed as a function on $\mathbb{R}^d_{+}$.
Proof. 
We have $A^{-1}(x, y) = \big(x,\; y - L(x)\big)$. Hence, for each $x \in \mathbb{R}^d_{+}$, we have:
$$ A \circ \mathrm{ReLU} \circ A^{-1}\big(x, T(x)\big) = \Big(x,\; \big(T(x) - L(x)\big)\,\mathbf{1}_{\{T(x) - L(x) > 0\}} + L(x)\Big) = \big(x,\; \max\{T(x), L(x)\}\big). $$
 □
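The identity in Lemma 1 is easy to verify numerically. The short sketch below (ours; the names `T`, `L`, `A`, and `A_inv` are just illustrative choices) pushes points of the graph of an arbitrary $T$ through $A \circ \mathrm{ReLU} \circ A^{-1}$ and checks that the second coordinate becomes $\max\{T(x), L(x)\}$:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

d = 2
rng = np.random.default_rng(1)
w, b = rng.standard_normal(d), rng.standard_normal()   # L(x) = <w, x> + b (affine)
L = lambda x: w @ x + b
T = lambda x: np.sin(x).sum()                          # an arbitrary function T

A     = lambda x, y: (x, L(x) + y)                     # A(x, y)      = (x, L(x) + y)
A_inv = lambda x, y: (x, y - L(x))                     # A^{-1}(x, y) = (x, y - L(x))

for _ in range(1000):
    x = rng.random(d)                                  # x in R^d_+ (here, in [0, 1)^d)
    u, v = A_inv(x, T(x))                              # start from the graph point (x, T(x))
    u, v = relu(u), relu(v)                            # coordinatewise ReLU
    u, v = A(u, v)
    assert np.allclose(v, max(T(x), L(x)))             # graph of max{T, L}, as claimed
```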
We now construct a neural net that computes $f$. We note that the construction is potentially applicable to the study of avoiding sets (see the work of Shang [17]). Define invertible affine functions $A_\alpha : \mathbb{R}^{d+1} \to \mathbb{R}^{d+1}$ by:
$$ A_\alpha(x, x_{d+1}) := \big(x,\; g_\alpha(x) + x_{d+1}\big), \qquad x = (x_1, \ldots, x_d), $$
and set:
$$ H_\alpha := A_\alpha \circ \mathrm{ReLU} \circ A_\alpha^{-1}. $$
Further, define:
$$ H_{\mathrm{out}} := \mathrm{ReLU} \circ \langle e_{d+1}, \cdot \rangle, $$
where $e_{d+1}$ is the $(d+1)$th standard basis vector, so that $\langle e_{d+1}, \cdot \rangle$ is the linear map from $\mathbb{R}^{d+1}$ to $\mathbb{R}$ that maps $(x_1, \ldots, x_{d+1})$ to $x_{d+1}$. Finally, set:
$$ H_{\mathrm{in}} := \mathrm{ReLU} \circ (\mathrm{id}, 0), $$
where $(\mathrm{id}, 0)(x) = (x, 0)$ maps $[0,1]^d$ to the graph of the zero function. Note that the ReLU in this initial layer acts linearly (in fact, as the identity), since its inputs are nonnegative. With this notation, repeatedly using Lemma 1, we find that:
$$ H_{\mathrm{out}} \circ H_N \circ \cdots \circ H_1 \circ H_{\mathrm{in}} $$
therefore has input dimension $d$, hidden layer width $d+1$, and depth $N$, and computes $f$ exactly.
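The composition above is straightforward to simulate. The sketch below (our illustration; the array names `G`, `c` and the helper `H` are ours) carries a $(d+1)$-dimensional state through $H_{\mathrm{in}}, H_1, \ldots, H_N, H_{\mathrm{out}}$ and checks that the output is $\max\{0, g_1(x), \ldots, g_N(x)\}$, which equals $f = \max_\alpha g_\alpha$ whenever $f$ is positive, as assumed in Theorem 2:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

d, N = 2, 5
rng = np.random.default_rng(2)
G, c = rng.standard_normal((N, d)), rng.standard_normal(N)   # g_alpha(x) = <G[alpha], x> + c[alpha]
g = lambda alpha, x: G[alpha] @ x + c[alpha]

def H(alpha, state):
    """H_alpha = A_alpha ∘ ReLU ∘ A_alpha^{-1}, acting on (x, x_{d+1}) in R^{d+1}."""
    x, y = state[:d], state[d]
    y = y - g(alpha, x)                   # A_alpha^{-1}
    x, y = relu(x), relu(y)               # ReLU layer of width d + 1
    return np.append(x, g(alpha, x) + y)  # A_alpha

for _ in range(1000):
    x = rng.random(d)                                     # x in [0, 1]^d
    z = np.append(x, 0.0)                                 # H_in: x -> (x, 0)
    for alpha in range(N):                                # H_1, ..., H_N: depth N, width d + 1
        z = H(alpha, z)
    out = relu(z[d])                                      # H_out: ReLU of the last coordinate
    assert np.allclose(out, max(0.0, max(g(a, x) for a in range(N))))
```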
Next, consider the general case when f is given by:
$$ f = g - h, \qquad g = \sup_{1 \leq \alpha \leq N} g_\alpha, \qquad h = \sup_{1 \leq \beta \leq M} h_\beta, $$
as in (11). For this situation, we use a different way of computing the maximum using ReLU nets.
Lemma 2.
There exists a ReLU net $\mathcal{M}$ with input dimension $2$, hidden layer width $2$, output dimension $1$, and depth $2$ such that:
$$ f_{\mathcal{M}}(x, y) = \max\{x, y\}, \qquad x \in \mathbb{R},\ y \in \mathbb{R}_{+}. $$
Proof. 
Set $A_1(x, y) := (x - y,\; y)$, $A_2(z, w) := z + w$, and define:
$$ \mathcal{M} = \mathrm{ReLU} \circ A_2 \circ \mathrm{ReLU} \circ A_1. $$
We have, for each $y \geq 0$ and $x \in \mathbb{R}$:
$$ f_{\mathcal{M}}(x, y) = \mathrm{ReLU}\big( (x - y)\,\mathbf{1}_{\{x - y > 0\}} + y \big) = \max\{x, y\}, $$
as desired.  □
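In code, the net of Lemma 2 is just two ReLU layers of width two (our sketch; the function name `M_net` is ours):

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

def M_net(x, y):
    """Width-2, depth-2 ReLU net from Lemma 2; computes max{x, y} when y >= 0."""
    z, w = relu(x - y), relu(y)   # ReLU ∘ A_1 with A_1(x, y) = (x - y, y)
    return relu(z + w)            # ReLU ∘ A_2 with A_2(z, w) = z + w

rng = np.random.default_rng(3)
for _ in range(1000):
    x, y = rng.standard_normal(), abs(rng.standard_normal())
    assert np.isclose(M_net(x, y), max(x, y))
```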
We now describe how to construct a ReLU net $\mathcal{N}$ with input dimension $d$, hidden layer width $d+3$, output dimension $1$, and depth $2(M+N)$ that exactly computes $f$. We use width $d$ to copy the input $x$, width $2$ to compute successive maxima of the positive affine functions $g_\alpha, h_\beta$ using the net $\mathcal{M}$ from Lemma 2 above, and width $1$ as memory in which we store $g = \sup_\alpha g_\alpha$ while computing $h = \sup_\beta h_\beta$. The final layer computes the difference $f = g - h$.  □

5. Proof of Theorem 1

Proof of Theorem 1.
We begin by showing (8) and (9). Suppose $f : [0,1]^d \to \mathbb{R}_{+}$ is convex, and fix $\varepsilon > 0$. A simple discretization argument shows that there exists a piecewise affine convex function $g : [0,1]^d \to \mathbb{R}_{+}$ such that $\|f - g\|_{C^0} \leq \varepsilon$. By Theorem 2, $g$ can be exactly represented by a ReLU net with hidden layer width $d+1$. This proves (8). In the case that $f$ is Lipschitz, we use the following proposition, a special case of Lemma 4.1 in [15].
Proposition 1.
Suppose $f : [0,1]^d \to \mathbb{R}$ is convex and Lipschitz with Lipschitz constant $L$. Then, for every $k \geq 1$, there exist $k$ affine maps $A_j : [0,1]^d \to \mathbb{R}$ such that:
$$ \Big\| f - \sup_{1 \leq j \leq k} A_j \Big\|_{C^0} \leq 72\, L\, d^{3/2}\, k^{-2/d}. $$
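Proposition 1 is an existence statement; the near-optimal construction appears in [15]. As a simple, non-optimized illustration of max-affine approximation (our sketch, not the estimator of [15]), one can take the maximum of tangent planes of a smooth convex function at grid points; for $f(x) = \|x\|^2$ on $[0,1]^2$ this already gives a small uniform error:

```python
import numpy as np
from itertools import product

# Max-affine (tangent-plane) lower approximation of a convex function,
# illustrating the kind of approximation guaranteed by Proposition 1.
d, m = 2, 4                                         # k = m^d = 16 affine pieces
f      = lambda x: np.sum(x * x, axis=-1)           # f(x) = |x|^2, convex and Lipschitz on [0,1]^2
grad_f = lambda x: 2.0 * x

anchors = np.array(list(product(np.linspace(0.0, 1.0, m), repeat=d)))   # grid of anchor points
slopes  = grad_f(anchors)                                               # shape (k, d)
offsets = f(anchors) - np.sum(slopes * anchors, axis=-1)                # A_j(x) = <slope_j, x> + offset_j

max_affine = lambda x: np.max(x @ slopes.T + offsets, axis=-1)          # sup_j A_j(x)

test = np.random.default_rng(4).random((10_000, d))
print(np.max(np.abs(f(test) - max_affine(test))))   # small uniform error on [0,1]^2
```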
Combining this result with Theorem 2 proves (9). We turn to checking (5) and (10). We need the following observation, which seems to be well known but does not appear to be written down in the literature.
Lemma 3.
Let $\mathcal{N}$ be a ReLU net with input dimension $d$, a single hidden layer of width $n$, and output dimension $1$. There exists another ReLU net $\widetilde{\mathcal{N}}$ that computes the same function as $\mathcal{N}$ but has input dimension $d$ and $n+2$ hidden layers of width $d+2$.
Proof. 
Denote by $\{A_j\}_{j=1}^n$ the affine functions computed by each neuron in the hidden layer of $\mathcal{N}$, so that:
$$ f_{\mathcal{N}}(x) = \mathrm{ReLU}\Big( b + \sum_{j=1}^n c_j\, \mathrm{ReLU}\big(A_j(x)\big) \Big). $$
Let T > 0 be sufficiently large so that:
$$ T + \sum_{j=1}^k c_j\, \mathrm{ReLU}\big(A_j(x)\big) > 0, \qquad 1 \leq k \leq n, \quad x \in [0,1]^d. $$
The affine transformations $\widetilde{A}_j$ computed by the $j$th hidden layer of $\widetilde{\mathcal{N}}$ are then:
$$ \widetilde{A}_1(x) := \big(x,\, A_1(x),\, T\big) \qquad \text{and} \qquad \widetilde{A}_{n+2}(x, y, z) := z - T + b, \qquad x \in \mathbb{R}^d,\ y, z \in \mathbb{R}, $$
and:
$$ \widetilde{A}_j(x, y, z) = \big(x,\, A_j(x),\, z + c_{j-1}\, y\big), \qquad j = 2, \ldots, n+1. $$
We are essentially using width $d$ to copy in the input variable, width $1$ to compute each $A_j$, and width $1$ to store the output.  □
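The bookkeeping in Lemma 3 can be checked directly. The sketch below (ours; variable names are illustrative) builds a random single-hidden-layer net and the corresponding width-$(d+2)$, depth-$(n+2)$ net, and verifies that they compute the same function on $[0,1]^d$:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

d, n = 2, 7
rng = np.random.default_rng(5)
W, s = rng.standard_normal((n, d)), rng.standard_normal(n)   # A_j(x) = <W[j-1], x> + s[j-1]
c, b = rng.standard_normal(n), rng.standard_normal()

# Original net: one hidden layer of width n.
shallow = lambda x: relu(b + c @ relu(W @ x + s))

# T large enough that T + sum_{j <= k} c_j ReLU(A_j(x)) > 0 for all k and all x in [0,1]^d.
T = 1.0 + np.sum(np.abs(c)) * (np.sum(np.abs(W), axis=1) + np.abs(s)).max()

def deep_narrow(x):
    """Width-(d+2), depth-(n+2) net of Lemma 3 computing the same function as `shallow`."""
    xc, y, z = relu(x), relu(W[0] @ x + s[0]), relu(T)        # layer 1: (x, A_1(x), T)
    for j in range(2, n + 2):                                 # layers 2, ..., n+1
        new_y = W[j - 1] @ xc + s[j - 1] if j <= n else 0.0   # A_j(x); unused in the last step
        xc, y, z = relu(xc), relu(new_y), relu(z + c[j - 2] * y)
    return relu(z - T + b)                                    # layer n+2: z - T + b

for _ in range(1000):
    x = rng.random(d)
    assert np.isclose(shallow(x), deep_narrow(x))
```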
Recall that positive continuous functions can be arbitrarily well approximated by smooth functions and hence by ReLU nets with a single hidden layer (see, e.g., Theorem 3.1 in [5]). The relation (5) therefore follows from Lemma 3. Similarly, by Theorem 3.1 in [5], if $f$ is smooth, then there exist $K = K(d) > 0$ and a constant $C_f$ depending only on the maximum value of the first $K$ derivatives of $f$ such that:
$$ \inf_{\mathcal{N}} \|f - f_{\mathcal{N}}\|_{C^0} \leq C_f\, n^{-1/d}, $$
where the infimum is over ReLU nets $\mathcal{N}$ with a single hidden layer of width $n$. Combining this with Lemma 3 proves (10).
It remains to prove (6) and (7). To do this, fix a positive continuous function $f : [0,1]^d \to \mathbb{R}_{+}$ with modulus of continuity $\omega_f$. Recall that the volume of the unit $d$-simplex is $1/d!$, and fix $\varepsilon > 0$. Consider the partition:
$$ [0,1]^d = \bigcup_{j=1}^{d!/\omega_f(\varepsilon)^d} P_j $$
of $[0,1]^d$ into $d!/\omega_f(\varepsilon)^d$ copies of $\omega_f(\varepsilon)$ times the standard $d$-simplex. Here, each $P_j$ denotes a single scaled copy of the unit simplex. To create this partition, we first subdivide $[0,1]^d$ into at most $\omega_f(\varepsilon)^{-d}$ cubes of side length at most $\omega_f(\varepsilon)$. Then, we subdivide each such smaller cube into $d!$ copies of the standard simplex (which has volume $1/d!$), rescaled to have side length $\omega_f(\varepsilon)$. Define $f_\varepsilon$ to be the piecewise linear approximation to $f$ obtained by setting $f_\varepsilon$ equal to $f$ on the vertices of the $P_j$'s and taking $f_\varepsilon$ to be affine on their interiors. Since the diameter of each $P_j$ is $\omega_f(\varepsilon)$, we have:
$$ \|f - f_\varepsilon\|_{C^0} \leq \varepsilon. $$
Next, since f ε is a piecewise affine function, by Theorem 2.1 in [2] (see Theorem 2), we may write:
$$ f_\varepsilon = g_\varepsilon - h_\varepsilon, $$
where $g_\varepsilon, h_\varepsilon$ are convex, positive, and piecewise affine. Applying Theorem 2 completes the proof of (6) and (7).  □

6. Conclusions

We considered in this article the expressive power of ReLU networks with bounded hidden layer widths. In particular, we showed that ReLU networks of width $d+3$ and arbitrary depth are capable of arbitrarily good approximation of any scalar continuous function of $d$ variables. We showed further that this bound can be reduced to $d+1$ in the case of convex functions, and we gave quantitative rates of approximation in all cases. Our results show that deep ReLU networks, even at moderate width, are universal function approximators. Our work leaves open the question of whether such function representations can be learned by (stochastic) gradient descent from a random initialization. We will take up this topic in future work.

Funding

This research was funded by NSF Grants DMS-1855684 and CCF-1934904.

Acknowledgments

It is a pleasure to thank Elchanan Mossel and Leonid Hanin for many helpful discussions. This paper originated while I attended EM’s class on deep learning [18]. In particular, I would like to thank him for suggesting proving quantitative bounds in Theorem 2 and for suggesting that a lower bound can be obtained by taking piece-wise linear functions with many different directions. He also pointed out that the width estimates for continuous functions in Theorem 1 were sub-optimal in a previous draft. I would also like to thank Leonid Hanin for detailed comments on several previous drafts and for useful references to the results in approximation theory. I am also grateful to Brandon Rule and Matus Telgarsky for comments on an earlier version of this article. I am also grateful to BR for the original suggestion to investigate the expressivity of neural nets of width two. I also would like to thank Max Kleiman-Weiner for useful comments and discussion. Finally, I thank Zhou Lu for pointing out a serious error in what used to be Theorem 3 in a previous version of this article. I have removed that result.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  2. Arora, R.; Basu, A.; Mianjy, P.; Mukherjee, A. Understanding deep neural networks with rectified linear units. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
  3. Liao, Q.; Mhaskar, H.; Poggio, T. Learning functions: When is deep better than shallow. arXiv 2016, arXiv:1603.00988v4.
  4. Lin, H.; Rolnick, D.; Tegmark, M. Why does deep and cheap learning work so well? arXiv 2016, arXiv:1608.08225v3.
  5. Mhaskar, H.; Poggio, T. Deep vs. shallow networks: An approximation theory perspective. Anal. Appl. 2016, 14, 829–848.
  6. Poole, B.; Lahiri, S.; Raghu, M.; Sohl-Dickstein, J.; Ganguli, S. Exponential expressivity in deep neural networks through transient chaos. Adv. Neural Inf. Process. Syst. 2016, 29, 3360–3368.
  7. Raghu, M.; Poole, B.; Kleinberg, J.; Ganguli, S.; Sohl-Dickstein, J. On the expressive power of deep neural networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 2847–2854.
  8. Telgarsky, M. Representation benefits of deep feedforward networks. arXiv 2015, arXiv:1509.08101.
  9. Telgarsky, M. Benefits of depth in neural nets. In Proceedings of the 29th Conference on Learning Theory, New York, NY, USA, June 2016; JMLR Workshop and Conference Proceedings, Volume 49, pp. 1–23.
  10. Telgarsky, M. Neural networks and rational functions. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 3387–3393.
  11. Yarotsky, D. Error bounds for approximations with deep ReLU networks. Neural Netw. 2017, 94, 103–114.
  12. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314.
  13. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
  14. Hanin, B.; Sellke, M. Approximating continuous functions by ReLU nets of minimal width. arXiv 2017, arXiv:1710.11278.
  15. Balázs, G.; György, A.; Szepesvári, C. Near-optimal max-affine estimators for convex regression. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, San Diego, CA, USA, 9–12 May 2015; Volume 38, pp. 56–64.
  16. Rolnick, D.; Tegmark, M. The power of deeper networks for expressing natural functions. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
  17. Shang, Y. A combinatorial necessary and sufficient condition for cluster consensus. Neurocomputing 2016, 216, 611–616.
  18. Mossel, E. Mathematical Aspects of Deep Learning. Available online: http://elmos.scripts.mit.edu/mathofdeeplearning/mathematical-aspects-of-deep-learning-intro/ (accessed on 10 September 2019).
