Article

Smooth Function Approximation by Deep Neural Networks with General Activation Functions

Department of Statistics, Seoul National University, Seoul 08826, Korea
* Author to whom correspondence should be addressed.
Entropy 2019, 21(7), 627; https://doi.org/10.3390/e21070627
Submission received: 4 June 2019 / Revised: 21 June 2019 / Accepted: 25 June 2019 / Published: 26 June 2019
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)

Abstract

There has been growing interest in the expressivity of deep neural networks. However, most of the existing work on this topic focuses only on specific activation functions such as the ReLU or the sigmoid. In this paper, we investigate the approximation ability of deep neural networks with a broad class of activation functions that includes most of the frequently used ones. We derive the depth, width and sparsity of a deep neural network required to approximate any Hölder smooth function up to a given approximation error for this large class of activation functions. Based on this approximation error analysis, we derive the minimax optimality of deep neural network estimators with general activation functions in both regression and classification problems.

1. Introduction

Neural networks are learning machines motivated by the architecture of the human brain. They are composed of multiple hidden layers, and each hidden layer has multiple hidden nodes, each consisting of an affine map of the outputs from the previous layer followed by a nonlinear map called an activation function. Deep neural networks have led to tremendous success in various pattern recognition and machine learning tasks such as object recognition, image segmentation and machine translation. For an overview of the empirical success of deep neural networks, we refer to the review paper [1] and the recent book [2].
Inspired by this success, many researchers have tried to provide theoretical support for deep neural networks. Much of the work to date has focused on the expressivity of deep neural networks, i.e., their ability to approximate a rich class of functions efficiently. The well-known classical result on this topic is the universal approximation theorem, which states that every continuous function can be approximated arbitrarily well by a neural network [3,4,5,6,7]. However, these results do not specify the numbers of layers and nodes a neural network requires to achieve a given approximation accuracy.
Recently, several results on how the numbers of layers and nodes of a deep neural network affect its expressivity have been reported. They provide upper bounds on the numbers of layers and nodes required for neural networks to uniformly approximate all functions of interest. Examples of such function classes include the space of rational functions of polynomials [8], the Hölder space [9,10,11,12], Besov and mixed Besov spaces [13] and even a class of discontinuous functions [14,15].
The nonlinear activation function is the central ingredient that distinguishes neural networks from linear models: a neural network reduces to a linear function if the linear activation function is used. Therefore, the choice of activation function substantially influences the performance and computational efficiency of a network. Numerous activation functions have been suggested to improve neural network learning [16,17,18,19,20,21]. We refer to the papers [21,22] for an overview of this topic.
There are also many recent theoretical studies of the approximation ability of deep neural networks. However, most of them focus on a specific activation function such as the ReLU [9,10,13,14,15], or on small classes of activation functions such as sigmoidal functions with additional monotonicity, continuity and/or boundedness conditions [23,24,25,26,27] and m-admissible functions, which are sufficiently smooth and bounded [11]. For the definitions of sigmoidal and m-admissible functions, see [24] and [11], respectively. Thus, a unified theoretical framework is still lacking.
In this paper, we investigate the approximation ability of deep neural networks with a quite general class of activation functions. We derive the numbers of layers and nodes of a deep neural network required to approximate any Hölder smooth function up to a given approximation error for this large class of activation functions. Our specified class of activation functions and the corresponding approximation ability of deep neural networks include most of the previous results [9,10,11,23] as special cases.
Our general results on the approximation ability of deep neural networks enable us to study statistical properties of deep neural network estimators. Schmidt-Hieber [10] and Kim et al. [28] proved the minimax optimality of a deep neural network estimator with the ReLU activation function in regression and classification problems, respectively. In this paper, we derive similar results for general activation functions.
This paper is structured as follows. In Section 2, we introduce some notions about deep neural networks. In Section 3, we introduce two large classes of activation functions. In Section 4, we present our main result on the approximation ability of deep neural networks with the general activation functions considered in Section 3. In Section 5, we apply the result of Section 4 to the supervised learning problems of regression and classification. Conclusions are given in Section 6. The proofs of all results are given in the Appendix.

Notation

We denote by $\mathbb{1}(\cdot)$ the indicator function. Let $\mathbb{R}$ be the set of real numbers and $\mathbb{N}$ be the set of natural numbers. For a real-valued vector $x \equiv (x_1, \dots, x_d)$, we let $|x|_0 := \sum_{j=1}^{d} \mathbb{1}(x_j \neq 0)$, $|x|_p := \big(\sum_{j=1}^{d} |x_j|^p\big)^{1/p}$ for $p \in [1, \infty)$ and $|x|_\infty := \max_{1 \le j \le d} |x_j|$. For simplicity, we let $|x| := |x|_1$. For a real-valued function $f: \mathbb{R} \to \mathbb{R}$, we let $f'(a)$, $f''(a)$ and $f'''(a)$ denote the first, second and third order derivatives of $f$ at $a$, respectively. We let $f'(a+) := \lim_{\epsilon \downarrow 0} (f(a+\epsilon) - f(a))/\epsilon$ and $f'(a-) := \lim_{\epsilon \downarrow 0} (f(a-\epsilon) - f(a))/(-\epsilon)$. For $x \in \mathbb{R}$, we write $(x)_+ := \max\{x, 0\}$.

2. Deep Neural Networks

In this section we provide a mathematical representation of deep neural networks. A neural network with $L \in \mathbb{N}$ hidden layers, $n_l \in \mathbb{N}$ nodes at the $l$-th hidden layer for $l = 1, \dots, L$, input dimension $n_0$, output dimension $n_{L+1}$ and nonlinear activation function $\sigma: \mathbb{R} \to \mathbb{R}$ is expressed as
$$N_\sigma(x \mid \theta) := A_{L+1} \circ \sigma_L \circ A_L \circ \cdots \circ \sigma_1 \circ A_1(x),$$
where $A_l: \mathbb{R}^{n_{l-1}} \to \mathbb{R}^{n_l}$ is an affine linear map defined by $A_l(x) = W_l x + b_l$ for a given $n_l \times n_{l-1}$-dimensional weight matrix $W_l$ and an $n_l$-dimensional bias vector $b_l$, and $\sigma_l: \mathbb{R}^{n_l} \to \mathbb{R}^{n_l}$ is the element-wise activation map defined by $\sigma_l(z) := (\sigma(z_1), \dots, \sigma(z_{n_l}))^\top$. Here, $\theta$ denotes the collection of all weight matrices and bias vectors, $\theta := ((W_1, b_1), (W_2, b_2), \dots, (W_{L+1}, b_{L+1}))$, which we call the parameter of the neural network, or simply, a network parameter.
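To make the notation concrete, here is a minimal sketch (ours, not from the paper) of the map $N_\sigma(\cdot \mid \theta)$ in Python, with $\theta$ stored as a list of (weight, bias) pairs and $\sigma$ passed in as a function:

```python
import numpy as np

def network(x, theta, sigma):
    """Evaluate N_sigma(x | theta) for theta = [(W_1, b_1), ..., (W_{L+1}, b_{L+1})]."""
    h = np.asarray(x, dtype=float)
    for W, b in theta[:-1]:      # hidden layers: affine map followed by element-wise sigma
        h = sigma(W @ h + b)
    W, b = theta[-1]             # output layer: affine map only
    return W @ h + b

# Example: input dimension d = 2, one hidden layer with 3 nodes, scalar output.
rng = np.random.default_rng(0)
theta = [(rng.normal(size=(3, 2)), rng.normal(size=3)),
         (rng.normal(size=(1, 3)), rng.normal(size=1))]
relu = lambda z: np.maximum(z, 0.0)
print(network([0.2, 0.7], theta, relu))
```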
We introduce some notation related to the network parameter. For a network parameter $\theta$, we write $L(\theta)$ for the number of hidden layers of the corresponding neural network, and write $n_{\max}(\theta)$ for the maximum number of hidden nodes over the hidden layers. Following a standard convention, we say that $L(\theta)$ is the depth of the deep neural network and $n_{\max}(\theta)$ is its width. We let $|\theta|_0$ be the number of nonzero elements of $\theta$, i.e.,
$$|\theta|_0 := \sum_{l=1}^{L+1} \left( |\mathrm{vec}(W_l)|_0 + |b_l|_0 \right),$$
where $\mathrm{vec}(W_l)$ transforms the matrix $W_l$ into the corresponding vector by concatenating its column vectors. We call $|\theta|_0$ the sparsity of the deep neural network. Let $|\theta|_\infty$ be the largest absolute value of the elements of $\theta$, i.e.,
$$|\theta|_\infty := \max\left\{ \max_{1 \le l \le L+1} |\mathrm{vec}(W_l)|_\infty,\ \max_{1 \le l \le L+1} |b_l|_\infty \right\}.$$
We call $|\theta|_\infty$ the magnitude of the deep neural network. We let $\mathrm{in}(\theta)$ and $\mathrm{out}(\theta)$ be the input and output dimensions of the deep neural network, respectively. We denote by $\Theta_{d,o}(L, N)$ the set of network parameters with depth at most $L$, width at most $N$, input dimension $d$ and output dimension $o$, that is,
$$\Theta_{d,o}(L, N) := \left\{ \theta : L(\theta) \le L,\ n_{\max}(\theta) \le N,\ \mathrm{in}(\theta) = d,\ \mathrm{out}(\theta) = o \right\}.$$
We further define a subset of $\Theta_{d,o}(L, N)$ with restrictions on sparsity and magnitude as
$$\Theta_{d,o}(L, N, S, B) := \left\{ \theta \in \Theta_{d,o}(L, N) : |\theta|_0 \le S,\ |\theta|_\infty \le B \right\}.$$
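Continuing the sketch above (again an illustration of ours), the quantities $L(\theta)$, $n_{\max}(\theta)$, $|\theta|_0$ and $|\theta|_\infty$ can be read off directly from a stored parameter:

```python
import numpy as np

def depth(theta):
    return len(theta) - 1                            # L(theta): number of hidden layers

def width(theta):
    return max(W.shape[0] for W, _ in theta[:-1])    # n_max(theta): widest hidden layer

def sparsity(theta):
    # |theta|_0: total number of nonzero entries in all weights and biases
    return sum(np.count_nonzero(W) + np.count_nonzero(b) for W, b in theta)

def magnitude(theta):
    # |theta|_infty: largest absolute value over all weights and biases
    return max(max(np.abs(W).max(), np.abs(b).max()) for W, b in theta)
```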

3. Classes of Activation Functions

In this section, we consider two classes of activation functions. These two classes include most of commonly used activation functions. Definitions and examples of each class of activation functions are provided in the consecutive two subsections.

3.1. Piecewise Linear Activation Functions

We first consider piecewise linear activation functions.
Definition 1.
A function $\sigma: \mathbb{R} \to \mathbb{R}$ is continuous piecewise linear if it is continuous and there exist a finite number of break points $a_1 < a_2 < \cdots < a_K \in \mathbb{R}$ with $K \in \mathbb{N}$ such that $\sigma'(a_k-) \neq \sigma'(a_k+)$ for every $k = 1, \dots, K$ and $\sigma(x)$ is linear on $(-\infty, a_1], [a_1, a_2], \dots, [a_{K-1}, a_K], [a_K, \infty)$.
Throughout this paper, we write "piecewise linear" instead of "continuous piecewise linear" for notational simplicity unless there is confusion. Representative examples of piecewise linear activation functions are as follows:
  • ReLU: $\sigma(x) = \max\{x, 0\}$.
  • Leaky ReLU: $\sigma(x) = \max\{x, ax\}$ for $a \in (0, 1)$.
The ReLU activation function is the most popular choice in practical applications due to its better gradient propagation and efficient computation [22]. For this reason, most of the recent results on function approximation by deep neural networks are based on the ReLU activation function [9,10,13,14,15]. In Section 4, as Yarotsky [9] did, we extend these results to any continuous piecewise linear activation function by showing that the ReLU activation function can be exactly represented by a linear combination of piecewise linear activation functions; a formal proof of this claim is presented in Appendix A.1.
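As a concrete illustration of this claim (our own sketch, not the construction used in Appendix A.1), the ReLU can be written exactly as a linear combination of two leaky ReLU evaluations via $\rho(x) = \{\sigma(x) + a\,\sigma(-x)\}/(1 - a^2)$:

```python
import numpy as np

a = 0.1                                   # leaky ReLU slope, any value in (0, 1)
lrelu = lambda x: np.maximum(x, a * x)    # piecewise linear with a single break point at 0
relu = lambda x: np.maximum(x, 0.0)

x = np.linspace(-5, 5, 1001)
reconstructed = (lrelu(x) + a * lrelu(-x)) / (1 - a ** 2)
print(np.max(np.abs(reconstructed - relu(x))))   # 0.0 up to floating-point rounding
```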

3.2. Locally Quadratic Activation Functions

One of the basic building blocks in approximation by deep neural networks is the square function, which should be approximated precisely. Piecewise linear activation functions have zero curvature (i.e., constant first-order derivative) inside each interval divided by its break points, which makes it relatively difficult to approximate the square function efficiently. But if there is an interval on which the activation function has nonzero curvature, the square function can be approximated more efficiently, which is a main motivation of considering a new class of activation functions that we call locally quadratic.
Definition 2.
A function $\sigma: \mathbb{R} \to \mathbb{R}$ is locally quadratic if there exists an open interval $(a, b) \subset \mathbb{R}$ on which $\sigma$ is three times continuously differentiable with bounded derivatives and there exists $t \in (a, b)$ such that $\sigma'(t) \neq 0$ and $\sigma''(t) \neq 0$.
We now give examples of locally quadratic activation functions. First of all, any nonlinear smooth activation function with a nonzero second derivative is locally quadratic. Examples are:
  • Sigmoid: $\sigma(x) = \dfrac{1}{1 + e^{-x}}$.
  • Tangent hyperbolic: $\sigma(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$.
  • Inverse square root unit (ISRU) [18]: $\sigma(x) = \dfrac{x}{\sqrt{1 + a x^2}}$ for $a > 0$.
  • Soft clipping [19]: $\sigma(x) = \dfrac{1}{a} \log\left( \dfrac{1 + e^{ax}}{1 + e^{a(x-1)}} \right)$ for $a > 0$.
  • SoftPlus [22]: $\sigma(x) = \log(1 + e^{x})$.
  • Swish [21]: $\sigma(x) = \dfrac{x}{1 + e^{-x}}$.
In addition, any piecewise smooth activation function having a nonzero second derivative on at least one piece is also locally quadratic. Examples are:
  • Rectified power unit (RePU) [12]: $\sigma(x) = \max\{x^k, 0\}$ for $k \in \mathbb{N} \setminus \{1\}$.
  • Exponential linear unit (ELU) [17]: $\sigma(x) = a(e^{x} - 1)\mathbb{1}(x \le 0) + x\,\mathbb{1}(x > 0)$ for $a > 0$.
  • Inverse square root linear unit (ISRLU) [18]: $\sigma(x) = \dfrac{x}{\sqrt{1 + a x^2}}\mathbb{1}(x \le 0) + x\,\mathbb{1}(x > 0)$ for $a > 0$.
  • Softsign [16]: $\sigma(x) = \dfrac{x}{1 + |x|}$.
  • Square nonlinearity [20]:
    $\sigma(x) = \mathbb{1}(x > 2) + \left(x - \dfrac{x^2}{4}\right)\mathbb{1}(0 \le x \le 2) + \left(x + \dfrac{x^2}{4}\right)\mathbb{1}(-2 \le x < 0) - \mathbb{1}(x < -2)$.
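As an illustration of Definition 2 (a sketch of ours, not part of the paper), one can check numerically via central finite differences that such functions have nonzero first and second derivatives at a suitable point $t$:

```python
import numpy as np

def d1(f, t, h=1e-4):
    return (f(t + h) - f(t - h)) / (2 * h)            # first-derivative estimate

def d2(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2  # second-derivative estimate

sigmoid  = lambda x: 1.0 / (1.0 + np.exp(-x))
softplus = lambda x: np.log1p(np.exp(x))
elu      = lambda x, a=1.0: np.where(x > 0, x, a * (np.exp(x) - 1.0))

for name, f, t in [("sigmoid", sigmoid, 1.0), ("SoftPlus", softplus, 0.0), ("ELU", elu, -1.0)]:
    print(name, d1(f, t), d2(f, t))   # both values are bounded away from zero
```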

4. Approximation of Smooth Functions by Deep Neural Networks

In this section we introduce the function class we consider and show the approximation ability of deep neural networks with an activation function considered in Section 3.

4.1. Hölder Smooth Functions

We recall the definition of Hölder smooth functions. For a $d$-dimensional multi-index $m \equiv (m_1, \dots, m_d) \in \mathbb{N}_0^d$, where $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$, we let $x^{m} := x_1^{m_1} \cdots x_d^{m_d}$ for $x \in \mathbb{R}^d$. For a function $f: \mathcal{X} \to \mathbb{R}$, where $\mathcal{X}$ denotes the domain of the function, we let $\|f\|_\infty := \sup_{x \in \mathcal{X}} |f(x)|$. We use the notation
$$\partial^{m} f := \frac{\partial^{|m|} f}{\partial x^{m}} = \frac{\partial^{|m|} f}{\partial x_1^{m_1} \cdots \partial x_d^{m_d}},$$
for $m \in \mathbb{N}_0^d$ to denote a derivative of $f$ of order $m$. We denote by $C^{m}(\mathcal{X})$ the space of $m$ times differentiable functions on $\mathcal{X}$ whose partial derivatives $\partial^{m'} f$ with $|m'| \le m$ are continuous. We define the Hölder coefficient of order $s \in (0, 1]$ as
$$[f]_s := \sup_{x_1, x_2 \in \mathcal{X},\, x_1 \neq x_2} \frac{|f(x_1) - f(x_2)|}{|x_1 - x_2|^s}.$$
For a positive real value $\alpha$, the Hölder space of order $\alpha$ is defined as
$$\mathcal{H}^{\alpha}(\mathcal{X}) := \left\{ f \in C^{\lfloor \alpha \rfloor}(\mathcal{X}) : \|f\|_{\mathcal{H}^{\alpha}(\mathcal{X})} < \infty \right\},$$
where $\|f\|_{\mathcal{H}^{\alpha}(\mathcal{X})}$ denotes the Hölder norm defined by
$$\|f\|_{\mathcal{H}^{\alpha}(\mathcal{X})} := \sum_{m \in \mathbb{N}_0^d : |m| \le \lfloor \alpha \rfloor} \|\partial^{m} f\|_\infty + \sum_{m \in \mathbb{N}_0^d : |m| = \lfloor \alpha \rfloor} [\partial^{m} f]_{\alpha - \lfloor \alpha \rfloor}.$$
We denote by $\mathcal{H}^{\alpha, R}(\mathcal{X})$ the closed ball in the Hölder space of radius $R$ with respect to the Hölder norm, i.e.,
$$\mathcal{H}^{\alpha, R}(\mathcal{X}) := \left\{ f \in \mathcal{H}^{\alpha}(\mathcal{X}) : \|f\|_{\mathcal{H}^{\alpha}(\mathcal{X})} \le R \right\}.$$

4.2. Approximation of Hölder Smooth Functions

We present our main theorem in this section.
Theorem 1.
Let $d \in \mathbb{N}$, $\alpha > 0$ and $R > 0$. Let the activation function $\sigma$ be either continuous piecewise linear or locally quadratic. Let $f \in \mathcal{H}^{\alpha, R}([0,1]^d)$. Then there exist positive constants $L_0$, $N_0$, $S_0$ and $B_0$ depending only on $d$, $\alpha$, $R$ and $\sigma$ such that, for any $\epsilon > 0$, there is a network parameter
$$\theta_\epsilon \in \Theta_{d,1}\left( L_0 \log(1/\epsilon),\ N_0 \epsilon^{-d/\alpha},\ S_0 \epsilon^{-d/\alpha} \log(1/\epsilon),\ B_0 \epsilon^{-4(d/\alpha + 1)} \right)$$
satisfying
$$\sup_{x \in [0,1]^d} \left| f(x) - N_\sigma(x \mid \theta_\epsilon) \right| \le \epsilon.$$
The result of Theorem 1 is equivalent to the results on approximation by ReLU neural networks [9,10] in the sense that the upper bounds on the depth, width and sparsity are of the same orders as those for the ReLU, namely, depth $= O(\log(1/\epsilon))$, width $= O(\epsilon^{-d/\alpha})$ and sparsity $= O(\epsilon^{-d/\alpha} \log(1/\epsilon))$. We remark that each upper bound matches the corresponding lower bound established by [9] up to a logarithmic factor.
For piecewise linear activation functions, Yarotsky [9] derived results similar to ours. For locally quadratic activation functions, only special classes of activation functions were considered in previous work. Li et al. [12] considered the RePU activation function, and Bauer and Kohler [11] considered sufficiently smooth and bounded activation functions, which include the sigmoid, tangent hyperbolic, ISRU and soft clipping activation functions. The SoftPlus, Swish, ELU, ISRLU, softsign and square nonlinearity activation functions, however, are covered only by our results.
Even though the orders of the depth, width and sparsity are the same for both piecewise linear and locally quadratic activation functions, the ways a smooth function is approximated by these two classes of activation functions are quite different. To describe this point, we outline the proof. We first consider equally spaced grid points with spacing $1/M$ inside the $d$-dimensional unit hypercube $[0,1]^d$. Let $G_{d,M}$ be the set of such grid points, namely,
$$G_{d,M} := \left\{ \frac{1}{M}(m_1, \dots, m_d) : m_j \in \{0, 1, \dots, M\},\ j = 1, \dots, d \right\}.$$
For a given Hölder smooth function $f$ of order $\alpha$, we first find a "local" function for each grid point that approximates the target function near that grid point but vanishes away from it. To be more specific, we construct local functions $g_{z,M}$, $z \in G_{d,M}$, which satisfy
$$\sup_{x \in [0,1]^d} \left| f(x) - \sum_{z \in G_{d,M}} g_{z,M}(x) \right| \le C |G_{d,M}|^{-\alpha/d},$$
for some universal constant $C > 0$. The inequality (4) implies that the more grid points we use, the more accurate the approximation becomes. Moreover, the quality of the approximation improves when the target function is smoother (i.e., larger $\alpha$) and lower dimensional (i.e., smaller $d$). In fact, $g_{z,M}(x)$ is given by the product of the Taylor polynomial $P_{z,M}(x) := \sum_{m \in \mathbb{N}_0^d : |m| \le \lfloor\alpha\rfloor} \partial^{m} f(z) \frac{(x - z)^{m}}{m!}$ at $z$ and the local basis function $\phi_{z,M}(x) := \prod_{j=1}^{d} (1/M - |x_j - z_j|)_+$, where $m! := \prod_{j=1}^{d} m_j!$. By simple algebra, we have
$$P_M(x) := \sum_{z \in G_{d,M}} g_{z,M}(x) := \sum_{z \in G_{d,M}} P_{z,M}(x)\, \phi_{z,M}(x) = \sum_{z \in G_{d,M}} \sum_{m : |m| \le \lfloor\alpha\rfloor} \beta_{z,m}\, x^{m}\, \phi_{z,M}(x),$$
where $\beta_{z,m} := \sum_{\tilde{m} : \tilde{m} \ge m,\, |\tilde{m}| \le \lfloor\alpha\rfloor} \partial^{\tilde{m}} f(z) \frac{(-z)^{\tilde{m} - m}}{m!\,(\tilde{m} - m)!}$.
The second stage is to approximate each monomial x m and each local basis function ϕ z , M ( x ) by deep neural networks. Each monomial can be approximated more efficiently by a deep neural network with a locally quadratic activation function than a piecewise linear activation function since each monomial has nonzero curvature. On the other hand, the local basis function can be approximated more efficiently by a deep neural network with a piecewise linear activation than a locally quadratic activation function since the local basis function is piecewise linear itself. That is, there is a trade-off in using either a piecewise linear or a locally quadratic activation function.
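The construction of $P_M$ can be visualized with a small one-dimensional sketch (ours, not code from the paper): for $d = 1$ and first-order Taylor polynomials, and with the tent functions normalized to height one so that they form a partition of unity, the weighted Taylor polynomials already reproduce a smooth target accurately, with error shrinking as $M$ grows.

```python
import numpy as np

M = 10
grid = np.linspace(0.0, 1.0, M + 1)        # G_{1,M}

f, df = np.sin, np.cos                     # smooth target on [0, 1] and its derivative

def tent(x, z):
    # local basis function around z, normalized to height 1 (the tents sum to 1 on [0, 1])
    return np.maximum(1.0 - M * np.abs(x - z), 0.0)

def P_M(x):
    # sum over grid points of (first-order Taylor polynomial at z) * (tent at z)
    return sum((f(z) + df(z) * (x - z)) * tent(x, z) for z in grid)

x = np.linspace(0.0, 1.0, 2001)
print(np.max(np.abs(P_M(x) - f(x))))       # decreases roughly like M^(-2) here
```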
We close this section by comparing our result with the approximation error analysis of [11]. Bauer and Kohler [11] study approximation of Hölder smooth functions of order $\alpha$ by a two-layer neural network with $m$-admissible activation functions with $m \ge \alpha$, where a function $\sigma$ is called $m$-admissible if (1) $\sigma$ is at least $m+1$ times continuously differentiable with bounded derivatives; (2) a point $t \in \mathbb{R}$ exists where all derivatives up to order $m$ of $\sigma$ are different from zero; and (3) $|\sigma(x) - 1| \le 1/x$ for $x > 0$ and $|\sigma(x)| \le 1/|x|$ for $x < 0$. Our notion of locally quadratic activation functions is a generalized version of $m$-admissibility.
In the proof of [11], the condition $m \ge \alpha$ is necessary because they approximate any monomial $x^{m}$ with $|m| \le \alpha$ by a two-layer neural network, which is impossible when $m < \alpha$. We drop the condition $m \ge \alpha$ by showing that any monomial $x^{m}$ with $|m| \le \alpha$ can be approximated by a deep neural network with a finite number of layers, which depends on $\alpha$.

5. Application to Statistical Learning Theory

In this section, we apply our results on the approximation error of neural networks to the supervised learning problems of regression and classification. Let $\mathcal{X}$ be the input space and $\mathcal{Y}$ the output space. Let $\mathcal{F}$ be a given class of measurable functions from $\mathcal{X}$ to $\mathcal{Y}$. Let $P_0$ be the true but unknown data-generating distribution on $\mathcal{X} \times \mathcal{Y}$. The aim of supervised learning is to find a predictive function that minimizes the population risk $\mathcal{R}(f) := \mathbb{E}_{(X, Y) \sim P_0}\, \ell(Y, f(X))$ with respect to a given loss function $\ell$. Since $P_0$ is unknown, we cannot directly minimize the population risk, and thus any estimator $\hat{f}$ inevitably has an excess risk, defined as $\mathcal{R}(\hat{f}) - \inf_{f \in \mathcal{F}} \mathcal{R}(f)$. For a given sample of size $n$, let $\mathcal{F}_n$ be a given subset of $\mathcal{F}$ called a sieve and let $(x_1, y_1), \dots, (x_n, y_n)$ be the observed (training) data of input-output pairs, assumed to be independent realizations of $(X, Y)$ following $P_0$. Let $\hat{f}_n$ be an estimated function in $\mathcal{F}_n$ based on the training data $(x_1, y_1), \dots, (x_n, y_n)$. The excess risk of $\hat{f}_n$ decomposes into approximation and estimation errors as
$$\mathcal{R}(\hat{f}_n) - \inf_{f \in \mathcal{F}} \mathcal{R}(f) = \underbrace{\mathcal{R}(\hat{f}_n) - \inf_{f \in \mathcal{F}_n} \mathcal{R}(f)}_{\text{Estimation error}} + \underbrace{\inf_{f \in \mathcal{F}_n} \mathcal{R}(f) - \inf_{f \in \mathcal{F}} \mathcal{R}(f)}_{\text{Approximation error}}.$$
There is a trade-off between the approximation and estimation errors. If the function class $\mathcal{F}_n$ is sufficiently large to approximate the optimal estimator $f^* := \operatorname{argmin}_{f \in \mathcal{F}} \mathcal{R}(f)$ well, then the estimation error becomes large due to high variance. In contrast, a small $\mathcal{F}_n$ leads to a low estimation error but suffers from a large approximation error.
One of the advantages of deep neural networks is that we can construct a sieve which has good approximation ability as well as low complexity. Schmidt-Hieber [10] and Kim et al. [28] proved that a neural network estimator can achieve the optimal balance between the approximation and estimation errors to obtain the minimax optimal convergence rates in regression and classification problems, respectively. But they only considered the ReLU activation function. Based on the results of Theorem 1, we can easily extend their results to general activation functions.
The key fact used to derive the minimax optimal convergence rate is that the complexity of a class of functions generated by a deep neural network is not affected much by the choice of activation function, provided that the activation function is Lipschitz continuous. A function $\sigma: \mathbb{R} \to \mathbb{R}$ is Lipschitz continuous if there is a constant $C_\sigma > 0$ such that
$$|\sigma(x_1) - \sigma(x_2)| \le C_\sigma |x_1 - x_2|,$$
for any $x_1, x_2 \in \mathbb{R}$. Here, $C_\sigma$ is called the Lipschitz constant. We use the covering number with respect to the $L_\infty$ norm $\|\cdot\|_\infty$ as a measure of the complexity of a function class. We recall the definition of the covering number. Let $\mathcal{F}$ be a given class of real-valued functions defined on $\mathcal{X}$ and let $\delta > 0$. A collection $\{f_j \in \mathcal{F} : j = 1, \dots, J\}$ is called a $\delta$-covering set of $\mathcal{F}$ with respect to the $L_\infty$ norm if for every $f \in \mathcal{F}$ there exists an $f_j$ in the collection such that $\|f - f_j\|_\infty \le \delta$. The cardinality of the minimal $\delta$-covering set is called the $\delta$-covering number of $\mathcal{F}$ with respect to the $L_\infty$ norm and is denoted by $\mathcal{N}(\delta, \mathcal{F}, \|\cdot\|_\infty)$. That is,
$$\mathcal{N}(\delta, \mathcal{F}, \|\cdot\|_\infty) := \inf\left\{ J \in \mathbb{N} : \exists f_1, \dots, f_J \text{ such that } \mathcal{F} \subset \bigcup_{j=1}^{J} B_\infty(f_j, \delta) \right\},$$
where $B_\infty(f_j, \delta) := \{ f \in \mathcal{F} : \|f - f_j\|_\infty \le \delta \}$. The following proposition provides a bound on the covering number of a class of functions generated by neural networks.
Proposition 1.
Assume that the activation function $\sigma$ is Lipschitz continuous with Lipschitz constant $C_\sigma$. Consider the class of functions generated by deep neural networks
$$\mathcal{F}_{d,1}(L, N, S, B) := \left\{ N_\sigma(\cdot \mid \theta) : \theta \in \Theta_{d,1}(L, N, S, B) \right\}.$$
Then, for any $\delta > 0$,
$$\log \mathcal{N}\left( \delta, \mathcal{F}_{d,1}(L, N, S, B), \|\cdot\|_\infty \right) \le 2 L (S+1) \log\left( \delta^{-1} C_\sigma L (N+1)(B \vee 1) \right),$$
where $B \vee 1 := \max\{B, 1\}$.
The result in Proposition 1 is very similar to existing results in the literature, e.g., Theorem 14.5 of [29], Lemma 5 of [10] and Lemma 3 of [13]. We employ techniques similar to those used in [10,13,29] to obtain the version presented here. We give the proof of this proposition in Appendix B.
All of the activation functions considered in Section 3 except the RePU satisfy the Lipschitz condition (6), and hence Proposition 1 applies to them. An interesting implication of Proposition 1 is that the complexity of the function class generated by deep neural networks is not affected by the choice of activation function. Hence, the remaining step in deriving the convergence rate of a neural network estimator is to show that the approximation accuracy achieved with these various activation functions is the same as that of the ReLU neural network, which is what Theorem 1 provides.
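To get a feel for how the entropy bound of Proposition 1 scales, here is a small numeric illustration (ours); the values of $C_\sigma$, $L$, $N$, $S$ and $B$ below are arbitrary choices, not quantities prescribed by the paper.

```python
import numpy as np

def log_covering_bound(delta, L, N, S, B, C_sigma=1.0):
    # Right-hand side of Proposition 1: 2 L (S + 1) log(delta^{-1} C_sigma L (N + 1) max(B, 1))
    return 2 * L * (S + 1) * np.log(C_sigma * L * (N + 1) * max(B, 1.0) / delta)

# The bound grows linearly in depth L and sparsity S but only
# logarithmically in the width N, the magnitude B and 1/delta.
for S in [10**3, 10**4, 10**5]:
    print(S, log_covering_bound(delta=1e-3, L=10, N=500, S=S, B=10.0))
```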

5.1. Application to Regression

First we consider the regression problem. For simplicity, we let $\mathcal{X} = [0,1]^d$. Suppose that the data-generating model is $Y \mid X = x \sim \mathrm{N}(f_0(x), 1)$ for some $f_0: [0,1]^d \to \mathbb{R}$. The performance of an estimator is measured by the $L_2$ risk $\mathcal{R}_{2,f_0}(f)$, which is defined by
$$\mathcal{R}_{2,f_0}(f) := \mathbb{E}_{f_0, P_x}\left( Y - f(X) \right)^2 := \mathbb{E}_{Y \mid X \sim \mathrm{N}(f_0(X), 1),\ X \sim P_x}\left( Y - f(X) \right)^2,$$
where $P_x$ is the marginal distribution of $X$. The following theorem shows that the optimal convergence rate is attained by the deep neural network estimator of the regression function $f_0$ for a general activation function.
Theorem 2.
Suppose that the activation function $\sigma$ is either piecewise linear or locally quadratic satisfying the Lipschitz condition (6). Then there are universal positive constants $L_0$, $N_0$, $S_0$ and $B_0$ such that the deep neural network estimator obtained by
$$\hat{f}_n \in \operatorname*{argmin}_{f \in \mathcal{F}_{\sigma,n}} \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2,$$
with
$$\mathcal{F}_{\sigma,n} := \left\{ N_\sigma(\cdot \mid \theta) : \|N_\sigma(\cdot \mid \theta)\|_\infty \le 2R,\ \theta \in \Theta_{d,1}\left( L_0 \log n,\ N_0 n^{\frac{d}{2\alpha+d}},\ S_0 n^{\frac{d}{2\alpha+d}} \log n,\ B_0 n^{\kappa} \right) \right\}$$
for some $\kappa > 0$, satisfies
$$\sup_{f_0 \in \mathcal{H}^{\alpha,R}([0,1]^d)} \mathbb{E}\left[ \mathcal{R}_{2,f_0}(\hat{f}_n) - \inf_{f \in \mathcal{F}} \mathcal{R}_{2,f_0}(f) \right] \le C n^{-\frac{2\alpha}{2\alpha+d}} \log^3 n,$$
for some universal constant $C > 0$, where the expectation is taken over the training data.

5.2. Application to Binary Classification

The aim of binary classification is to find a classifier that predicts the label $y \in \{-1, 1\}$ for any input $x \in [0,1]^d$. A usual assumption on the data-generating process is that $Y \mid X = x \sim 2\,\mathrm{Bern}(\eta(x)) - 1$ for some $\eta: [0,1]^d \to [0,1]$, where $\mathrm{Bern}(p)$ denotes the Bernoulli distribution with parameter $p$. Note that $\eta(x)$ is the conditional probability $P_0(Y = 1 \mid X = x)$. A common approach is, instead of finding a classifier directly, to construct a real-valued function $f$, a so-called classification function, and to predict the label $y$ based on the sign of $f(x)$. The performance of a classification function is measured by the misclassification error $\mathcal{R}_{01,\eta}(f)$, which is defined by
$$\mathcal{R}_{01,\eta}(f) := \mathbb{E}_{\eta, P_x}\, \mathbb{1}\left( Y f(X) < 0 \right) := \mathbb{E}_{Y \mid X \sim 2\mathrm{Bern}(\eta(X)) - 1,\ X \sim P_x}\, \mathbb{1}\left( Y f(X) < 0 \right).$$
It is well known that the convergence rate of the excess risk for classification is faster than that for regression when the conditional probability function $\eta(x)$ satisfies the following condition: there is a constant $q \in [0, \infty]$ such that for any sufficiently small $u > 0$, we have
$$P_x\left( |\eta(X) - 1/2| < u \right) \le u^{q}.$$
This condition is called the Tsybakov noise condition and $q$ is called the noise exponent [30,31]. When $q$ is larger, the classification task is easier, since the probability of generating vague samples (those with $\eta(x)$ close to $1/2$) becomes smaller. The following theorem shows that the optimal convergence rate can be obtained by the deep neural network estimator with an activation function considered in Section 3. As is done by [28], we consider the hinge loss $\ell_{\mathrm{hinge}}(z) := \max\{1 - z, 0\}$.
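For reference, a one-line sketch (ours) of the hinge loss and the misclassification error of the induced classifier $\mathrm{sign}(f)$:

```python
import numpy as np

hinge = lambda z: np.maximum(1.0 - z, 0.0)   # hinge loss of the margin z = y * f(x)

def misclassification(y, fx):
    # empirical 0-1 risk of the classifier sign(f): fraction of margins y * f(x) < 0
    return np.mean(y * fx < 0)

y, fx = np.array([1, -1, 1, -1]), np.array([0.8, -0.3, -0.2, 0.4])
print(hinge(y * fx), misclassification(y, fx))
```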
Theorem 3.
Assume the Tsybakov noise condition (8) with noise exponent $q \in [0, \infty]$. Suppose that the activation function $\sigma$, which is either piecewise linear or locally quadratic satisfying the Lipschitz condition (6), is used for all hidden layers except the last one, and that the ReLU activation function is used for the last hidden layer. Then there are universal positive constants $L_0$, $N_0$, $S_0$ and $B_0$ such that the deep neural network estimator obtained by
$$\hat{f}_n \in \operatorname*{argmin}_{f \in \mathcal{F}_{\sigma,n}} \sum_{i=1}^{n} \ell_{\mathrm{hinge}}\left( y_i f(x_i) \right),$$
with
$$\mathcal{F}_{\sigma,n} := \left\{ N_\sigma(\cdot \mid \theta) : \|N_\sigma(\cdot \mid \theta)\|_\infty \le 1,\ \theta \in \Theta_{d,1}\left( L_0 \log n,\ N_0 n^{\nu} \log^{-3\nu} n,\ S_0 n^{\nu} \log^{-3\nu + 1} n,\ B_0 n^{\kappa} \right) \right\},$$
for $\nu := \frac{d}{\alpha(q+2)+d}$ and some $\kappa > 0$, satisfies
$$\sup_{\eta \in \mathcal{H}^{\alpha,R}([0,1]^d)} \mathbb{E}\left[ \mathcal{R}_{01,\eta}(\hat{f}_n) - \inf_{f \in \mathcal{F}} \mathcal{R}_{01,\eta}(f) \right] \le C \left( \frac{\log^3 n}{n} \right)^{\frac{\alpha(q+1)}{\alpha(q+2)+d}},$$
for some universal constant $C > 0$, where the expectation is taken over the training data.
Note that the Bayes classifier $f^* := \operatorname{argmin}_{f \in \mathcal{F}} \mathcal{R}_{01,\eta}(f)$ is given by
$$f^*(x) = 2\,\mathbb{1}\left( 2\eta(x) - 1 \ge 0 \right) - 1,$$
which is an indicator-type function. Since a neural network with the ReLU activation function can approximate indicator functions well [14,15,28], we use the ReLU activation function in the last layer in order to approximate the Bayes classifier more precisely and thus achieve the optimal convergence rate.

6. Conclusions

In this study, we established upper bounds on the depth, width and sparsity of deep neural networks required to approximate any Hölder smooth function for two general classes of activation functions. These classes include most of the popularly used activation functions. The derived upper bounds on the depth, width and sparsity are optimal in the sense that they match the lower bounds up to logarithmic factors. We used this generalization of the approximation error analysis to extend the statistical optimality of deep neural network estimators in regression and classification problems to activation functions other than the ReLU.
Our construction of approximating neural networks reveals that piecewise linear activation functions are more efficient at approximating the local basis functions, while locally quadratic activation functions are more efficient at approximating polynomials. Hence, if an activation function has both a piecewise linear region and a locally quadratic region, we could expect an even better approximation result. We leave the development of such activation functions as future work.

Author Contributions

Conceptualization, Y.K.; methodology, I.O. and Y.K.; investigation, I.O.; writing—original draft preparation, I.O.; writing—review and editing, Y.K.; funding acquisition, Y.K.

Funding

This work was supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1601-02.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

Appendix A.1. Proof of Theorem 1 for Piecewise Linear Activation Functions

The main idea of the proof is that any deep neural network with the ReLU activation function can be exactly reconstructed by a neural network with a piecewise linear activation function. This is shown in the next lemma, which is a slight modification of Proposition 1 (b) of [9].
Lemma A1.
Let σ be any continuous piecewise linear activation function, and let ρ be the ReLU activation function. Let $\theta \in \Theta_{d,1}(L, N, S, B)$. Then there exists $\theta^* \in \Theta_{d,1}(L, 2N, 4S + 2LN + 1, C_1 B)$ such that
$$\sup_{x \in [0,1]^d} \left| N_\sigma(x \mid \theta^*) - N_\rho(x \mid \theta) \right| = 0,$$
where $C_1 > 0$ is a constant depending on the activation function σ.
Proof. 
Let $a$ be any break point of σ. Note that $\sigma'(a-) \neq \sigma'(a+)$. Let $r_0$ be the distance between $a$ and the closest other break point, so that σ is linear on $[a - r_0, a]$ and $[a, a + r_0]$. Then for any $r > 0$, the ReLU activation function $\rho(x) := (x)_+$ is expressed as
$$\rho(x) = \frac{\sigma\!\left(a + \frac{r_0}{2r} x\right) - \sigma\!\left(a - \frac{r_0}{2} + \frac{r_0}{2r} x\right) - \sigma(a) + \sigma\!\left(a - \frac{r_0}{2}\right)}{\left(\sigma'(a+) - \sigma'(a-)\right)\frac{r_0}{2r}} =: u_1\, \sigma\!\left(a + \frac{r_0}{2r} x\right) + u_2\, \sigma\!\left(a - \frac{r_0}{2} + \frac{r_0}{2r} x\right) + v$$
for any $x \in [-r, r]$, where we define $u_1 := 1/\big( (\sigma'(a+) - \sigma'(a-)) \frac{r_0}{2r} \big)$, $u_2 := -1/\big( (\sigma'(a+) - \sigma'(a-)) \frac{r_0}{2r} \big)$ and $v := \big( \sigma(a - r_0/2) - \sigma(a) \big) / \big( (\sigma'(a+) - \sigma'(a-)) \frac{r_0}{2r} \big)$.
Let $\theta \equiv ((W_1, b_1), \dots, (W_{L+1}, b_{L+1})) \in \Theta_{d,1}(L, N, S, B)$ be given. Since both the input $x \in [0,1]^d$ and the network parameter θ are bounded, we can take $r$ sufficiently large so that Equation (A1) holds at every hidden node of the network θ. We consider the deep neural network $\theta^* \equiv ((W_1^*, b_1^*), \dots, (W_{L+1}^*, b_{L+1}^*)) \in \Theta_{d,1}(L, 2N)$, where we set
$$W_l^* := \frac{r_0}{2r} \begin{pmatrix} u_1 W_l & u_2 W_l \\ u_1 W_l & u_2 W_l \end{pmatrix} \in \mathbb{R}^{2n_l \times 2n_{l-1}}, \qquad b_l^* := \begin{pmatrix} a \mathbf{1}_{n_l} + \frac{r_0}{2r}\left( v W_l \mathbf{1}_{n_{l-1}} + b_l \right) \\ \left( a - \frac{r_0}{2} \right) \mathbf{1}_{n_l} + \frac{r_0}{2r}\left( v W_l \mathbf{1}_{n_{l-1}} + b_l \right) \end{pmatrix} \in \mathbb{R}^{2 n_l},$$
for $l = 1, \dots, L$, and
$$W_{L+1}^* := \begin{pmatrix} u_1 W_{L+1} & u_2 W_{L+1} \end{pmatrix}, \qquad b_{L+1}^* := v W_{L+1} \mathbf{1}_{n_L} + b_{L+1}.$$
Here, $\mathbf{1}_n$ denotes the $n$-dimensional vector of ones. Then by Equation (A1) and some algebra, we have that $N_\sigma(x \mid \theta^*) = N_\rho(x \mid \theta)$ for any $x \in [0,1]^d$. For the sparsity of $\theta^*$, we note that
$$|\mathrm{vec}(W_l^*)|_0 + |b_l^*|_0 \le 4\,|\mathrm{vec}(W_l)|_0 + 2 n_l,$$
which implies that $|\theta^*|_0 \le 4|\theta|_0 + 2 L(\theta) n_{\max}(\theta) + 1$. □
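The representation (A1) is easy to check numerically. The sketch below (ours) uses the "hard tanh" $\sigma(x) = \max\{-1, \min\{x, 1\}\}$, a continuous piecewise linear function with break points at $\pm 1$, and verifies that $u_1\sigma(a + \tfrac{r_0}{2r}x) + u_2\sigma(a - \tfrac{r_0}{2} + \tfrac{r_0}{2r}x) + v$ coincides with the ReLU on $[-r, r]$:

```python
import numpy as np

sigma = lambda x: np.clip(x, -1.0, 1.0)   # hard tanh: piecewise linear, break points at -1 and 1
a, r0, r = 1.0, 2.0, 5.0                  # break point a = 1; the other break point is at distance r0 = 2
slope_left, slope_right = 1.0, 0.0        # one-sided derivatives sigma'(a-) and sigma'(a+)

scale = (slope_right - slope_left) * r0 / (2.0 * r)
u1, u2 = 1.0 / scale, -1.0 / scale
v = (sigma(a - r0 / 2.0) - sigma(a)) / scale

x = np.linspace(-r, r, 2001)
recon = u1 * sigma(a + r0 / (2 * r) * x) + u2 * sigma(a - r0 / 2 + r0 / (2 * r) * x) + v
print(np.max(np.abs(recon - np.maximum(x, 0.0))))   # 0.0 up to rounding
```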
Thanks to Lemma A1, to prove Theorem 1 for piecewise linear activation functions, it suffices to show the approximation ability of the ReLU networks, which is already done by [10] as in the next lemma.
Lemma A2
(Theorem 5 of [10]). Let ρ be the ReLU activation function. For any $f \in \mathcal{H}^{\alpha,R}([0,1]^d)$ and any integers $m \ge 1$ and $M \ge \max\left\{ (\alpha+1)^d, (R+1) e^d \right\}$, there exists a network parameter $\theta \in \Theta_{d,1}(L, N, S, 1)$ such that
$$\sup_{x \in [0,1]^d} \left| N_\rho(x \mid \theta) - f(x) \right| \le (2R+1)\left( 1 + d^2 + \alpha^2 \right) 6^d M 2^{-m} + R\, 3^{\alpha} M^{-\alpha/d},$$
where $L = 8 + (m+5)\left( 1 + \lceil \log_2(d \vee \alpha) \rceil \right)$, $N = 6(d + \lceil \alpha \rceil) M$, and $S = 141 (d + \alpha + 1)^{3+d} M (m + 6)$.
Theorem 1 for piecewise linear activation functions is a direct consequence of Lemmas A1 and A2, which is summarized as follows.
Proof of Theorem 1 for piecewise linear activation functions.
Let ρ be the ReLU activation function. By letting $M = \left\lceil 3^d (2R)^{d/\alpha} \epsilon^{-d/\alpha} \right\rceil$ and $m = \left\lceil \log_2\left( 2(2R+1)(1 + d^2 + \alpha^2)\, 18^d (2R)^{d/\alpha} \epsilon^{-d/\alpha - 1} \right) \right\rceil$, Lemma A2 implies that there exists a network parameter θ such that $\sup_{x \in [0,1]^d} |N_\rho(x \mid \theta) - f(x)| \le \epsilon$ with $L(\theta) \le L_0 \log(1/\epsilon)$, $n_{\max}(\theta) \le N_0 \epsilon^{-d/\alpha}$ and $|\theta|_0 \le S_0 \epsilon^{-d/\alpha} \log(1/\epsilon)$ for some positive constants $L_0$, $N_0$ and $S_0$ depending only on α, d and R. Hence, by Lemma A1, there is a network parameter $\theta^*$ producing the same output as the ReLU neural network $N_\rho(\cdot \mid \theta)$ with $L(\theta^*) = L(\theta)$, $n_{\max}(\theta^*) = 2 n_{\max}(\theta)$, $|\theta^*|_0 \le 4|\theta|_0 + 2 L(\theta) n_{\max}(\theta) + 1 \le S_0' \epsilon^{-d/\alpha} \log(1/\epsilon)$ and $|\theta^*|_\infty \le B_0 |\theta|_\infty$ for some $S_0' > 0$ depending only on α, d, R and σ, and some $B_0 > 0$ depending only on σ, which completes the proof. □

Appendix A.2. Proof of Theorem 1 for Locally Quadratic Activation Functions

Lemma A3.
Assume that the activation function σ is locally quadratic. There is a constant $K_0$ depending only on the activation function such that for any $K > K_0$ the following results hold.
(a) 
There is a network parameter $\theta_2 \in \Theta_{1,1}(1, 3)$ with $|\theta_2|_\infty \le K^2$ such that
$$\sup_{x \in [-1,1]} \left| N_\sigma(x \mid \theta_2) - x^2 \right| \le \frac{C_1}{K},$$
where $C_1 > 0$ is a constant depending only on σ.
(b) 
Let $A > 0$. There is a network parameter $\theta_{\times,A} \in \Theta_{2,1}(1, 9)$ with $|\theta_{\times,A}|_\infty \le \max\{K^2, 2A^2\}$ such that
$$\sup_{x \in [-A,A]^2} \left| N_\sigma(x \mid \theta_{\times,A}) - x_1 x_2 \right| \le \frac{6 A^2 C_1}{K}.$$
(c) 
Let α be a positive integer. For any multi-index $m \in \mathbb{N}_0^d$ with $|m| \le \alpha$, there is a network parameter $\theta_{m} \in \Theta_{d,1}(\lceil \log_2 \alpha \rceil, 9\alpha)$ with $|\theta_{m}|_\infty \le \max\{K^2, C_2\}$ such that
$$\sup_{x \in [0,1]^d} \left| N_\sigma(x \mid \theta_{m}) - x^{m} \right| \le \frac{C_3}{K},$$
for some positive constants $C_2$ and $C_3$ depending only on σ and α.
(d) 
There is a network parameter $\theta_{1/2} \in \Theta_{1,1}(\lceil \log K \rceil, 15)$ with $|\theta_{1/2}|_\infty \le \max\{K^2, C_4\}$ such that
$$\sup_{x \in [0,2]} \left| N_\sigma(x \mid \theta_{1/2}) - \sqrt{x} \right| \le C_5 \frac{\log K}{K}$$
for some positive constants $C_4$ and $C_5$ depending only on σ.
(e) 
There is a network parameter $\theta_{\mathrm{abs}} \in \Theta_{1,1}(\lceil \log K \rceil, 15)$ with $|\theta_{\mathrm{abs}}|_\infty \le \max\{K^2, C_6\}$ such that
$$\sup_{x \in [-1,1]} \left| N_\sigma(x \mid \theta_{\mathrm{abs}}) - |x| \right| \le \frac{C_7}{\sqrt{K}},$$
for some positive constants $C_6$ and $C_7$ depending only on σ.
Proof. 
Recall that there is an interval $(a, b)$ on which σ is three times continuously differentiable with bounded derivatives and there is $t \in (a, b)$ such that $\sigma'(t) \neq 0$ and $\sigma''(t) \neq 0$.
Proof of (a). Take K large enough so that $2/K < \min\{|t - b|, |t - a|\}$. Consider the neural network
$$N_\sigma(x \mid \theta_2) := \sum_{k=0}^{2} (-1)^k \frac{K^2}{\sigma''(t)} \binom{2}{k} \sigma\!\left( \frac{k x}{K} + t \right).$$
Since σ is three times continuously differentiable on $(a, b)$ and $kx/K + t \in (a, b)$ for $x \in [-1, 1]$ and $k = 0, 1, 2$, it can be expanded in a Taylor series with Lagrange remainder around $t$ to obtain
$$N_\sigma(x \mid \theta_2) = \frac{K^2}{\sigma''(t)} \sum_{k=0}^{2} (-1)^k \binom{2}{k} \left[ \sigma(t) + \sigma'(t)\frac{kx}{K} + \frac{\sigma''(t)}{2}\frac{(kx)^2}{K^2} + \frac{\sigma'''(\xi_k)}{6}\frac{(kx)^3}{K^3} \right] = \frac{K^2}{\sigma''(t)} \left[ \sigma''(t)\frac{x^2}{K^2} + \sum_{k=1}^{2} (-1)^k \binom{2}{k} \frac{\sigma'''(\xi_k)}{6}\frac{(kx)^3}{K^3} \right] = x^2 + \frac{x^3}{6 K \sigma''(t)} \sum_{k=1}^{2} (-1)^k k^3 \binom{2}{k} \sigma'''(\xi_k),$$
where $\xi_k \in [t - k|x|/K,\ t + k|x|/K] \subset (a, b)$. Since the third-order derivative is bounded on $(a, b)$, we get the desired assertion after rescaling $K^2/\sigma''(t)$ to $K^2$.
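The second-difference construction in the proof of (a) is easy to check numerically. Below is a sketch of ours using the sigmoid activation and $t = 1$ (where $\sigma''(1) \neq 0$); the error decays roughly like $1/K$:

```python
import numpy as np
from math import comb

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
t = 1.0
s2 = sigmoid(t) * (1 - sigmoid(t)) * (1 - 2 * sigmoid(t))   # sigma''(t), nonzero for t != 0

def square_net(x, K):
    # K^2 / sigma''(t) * [sigma(t) - 2 sigma(x/K + t) + sigma(2x/K + t)]  ~  x^2
    return (K ** 2 / s2) * sum((-1) ** k * comb(2, k) * sigmoid(k * x / K + t) for k in range(3))

x = np.linspace(-1, 1, 1001)
for K in [10, 100, 1000]:
    print(K, np.max(np.abs(square_net(x, K) - x ** 2)))     # roughly proportional to 1/K
```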
Proof of (b). The proof follows straightforwardly from the polarization-type identity
$$x_1 x_2 = 2A^2 \left[ \left( \frac{x_1 + x_2}{2A} \right)^2 - \left( \frac{x_1}{2A} \right)^2 - \left( \frac{x_2}{2A} \right)^2 \right].$$
We construct the network as
$$N_\sigma(x \mid \theta_{\times,A}) := 2A^2 \left[ N_\sigma\!\left( \frac{x_1 + x_2}{2A} \,\Big|\, \theta_2 \right) - N_\sigma\!\left( \frac{x_1}{2A} \,\Big|\, \theta_2 \right) - N_\sigma\!\left( \frac{x_2}{2A} \,\Big|\, \theta_2 \right) \right],$$
where $\theta_2$ is defined in (A3). Since $(x_1 + x_2)/2A,\ x_1/2A,\ x_2/2A \in [-1, 1]$ for $x \in [-A, A]^2$, the triangle inequality implies that $|N_\sigma(x \mid \theta_{\times,A}) - x_1 x_2| \le 6 A^2 C_1 / K$.
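Combining the squaring sketch above with the polarization identity gives an approximate multiplier, mirroring part (b) (again our own illustration):

```python
import numpy as np
from math import comb

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
t = 1.0
s2 = sigmoid(t) * (1 - sigmoid(t)) * (1 - 2 * sigmoid(t))    # sigma''(t)

def square_net(x, K):
    # approximate squaring network from part (a)
    return (K ** 2 / s2) * sum((-1) ** k * comb(2, k) * sigmoid(k * x / K + t) for k in range(3))

def mult_net(x1, x2, A, K):
    # polarization identity: x1 x2 = 2 A^2 [((x1+x2)/2A)^2 - (x1/2A)^2 - (x2/2A)^2]
    sq = lambda u: square_net(u, K)
    return 2 * A ** 2 * (sq((x1 + x2) / (2 * A)) - sq(x1 / (2 * A)) - sq(x2 / (2 * A)))

print(mult_net(1.3, -0.7, A=2.0, K=1000.0), 1.3 * -0.7)      # agree up to an O(A^2 / K) error
```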
Proof of (c). Let $q := \lceil \log_2 \alpha \rceil$. We construct $\theta_{m}$ as follows. Fix $x \equiv (x_1, \dots, x_d) \in [0,1]^d$. We first consider the affine map that transforms $(x_1, \dots, x_d)$ into $z \in [0,1]^{2^q}$ given by
$$z := (\underbrace{x_1, \dots, x_1}_{m_1 \text{ times}}, \underbrace{x_2, \dots, x_2}_{m_2 \text{ times}}, \dots, \underbrace{x_d, \dots, x_d}_{m_d \text{ times}}, \underbrace{1, \dots, 1}_{2^q - |m| \text{ times}}).$$
The first hidden layer of $\theta_{m}$ pairs neighboring entries of $z$ and applies the network $\theta_{\times,A_1}$ defined in (b) with $A_1 = 1$ to each pair. That is, the first hidden layer of $\theta_{m}$ produces
$$\left( g_{1,j} := N_\sigma\left( (z_{2j-1}, z_{2j}) \mid \theta_{\times,1} \right) : j = 1, \dots, 2^{q-1} \right).$$
Note that $\sup_{1 \le j \le 2^{q-1}} |g_{1,j} - z_{2j-1} z_{2j}| \le 6C_1/K$ and $\sup_{1 \le j \le 2^{q-1}} |g_{1,j}| \le 6C_1/K + 1$, where $6C_1/K + 1$ can be bounded by some constant $A_2 > 1$ depending only on $C_1$ and $K_0$. The second hidden layer of $\theta_{m}$ then pairs neighboring entries of $(g_{1,j} : j = 1, \dots, 2^{q-1})$ and applies $\theta_{\times,A_2}$ to each pair to obtain
$$\left( g_{2,j} := N_\sigma\left( (g_{1,2j-1}, g_{1,2j}) \mid \theta_{\times,A_2} \right) : j = 1, \dots, 2^{q-2} \right).$$
Note that $\sup_{1 \le j \le 2^{q-2}} |g_{2,j} - g_{1,2j-1} g_{1,2j}| \le 6C_1 A_2^2/K$ and $\sup_{1 \le j \le 2^{q-2}} |g_{2,j}| \le 6C_1 A_2^2/K + 1 \le A_3$ for some $A_3 > 1$ depending only on $C_1$ and $K_0$. We repeat this procedure to produce $(g_{k,j} : j = 1, \dots, 2^{q-k})$ for $k = 3, \dots, q$ with
$$\sup_{1 \le j \le 2^{q-k}} \left| g_{k,j} - g_{k-1,2j-1}\, g_{k-1,2j} \right| \le \frac{6 C_1 A_k^2}{K}, \qquad \sup_{1 \le j \le 2^{q-k}} |g_{k,j}| \le A_{k+1},$$
for some $A_{k+1} > 1$, and we set $N_\sigma(x \mid \theta_{m})$ equal to $g_{q,1}$.
By applying the triangle inequality repeatedly, we have
$$\left| g_{q,1} - x^{m} \right| \le \left| g_{q,1} - g_{q-1,1}\, g_{q-1,2} \right| + \left| g_{q-1,1} - \prod_{j=1}^{2^{q-1}} z_j \right| \left| g_{q-1,2} \right| + \left| g_{q-1,2} - \prod_{j=2^{q-1}+1}^{2^{q}} z_j \right| \prod_{j=1}^{2^{q-1}} z_j \le \frac{6 C_1 A_q^2}{K} + A_q \left( \left| g_{q-1,1} - \prod_{j=1}^{2^{q-1}} z_j \right| + \left| g_{q-1,2} - \prod_{j=2^{q-1}+1}^{2^{q}} z_j \right| \right) \le \cdots \le \sum_{k=0}^{q-1} A_{q-k}^2 \prod_{h=q-k+1}^{q} (A_h + 1)\, \frac{6 C_1}{K} \le C_1' \frac{1}{K},$$
for some $C_1' > 0$ depending only on $C_1$, $K_0$ and $q$. Since $x$ was arbitrary, the proof is done.
Proof of (d). By (b), it is easy to verify that there is a network $\theta_1 \in \Theta_{1,1}(1, 6)$ with $|\theta_1|_\infty \le \max\{K^2, 2\}$ such that $|N_\sigma(x \mid \theta_1) - x| \le C_1'/K$ for any $x \in [-1, 1]$ and some constant $C_1' > 0$. The Taylor series of $\sqrt{x}$ with Lagrange remainder around 1 is given by
$$\sqrt{x} = \sum_{k=0}^{J} \frac{d^k \sqrt{x}}{d x^k}\bigg|_{x=1} \frac{(x-1)^k}{k!} + \frac{1}{(J+1)!} \frac{d^{J+1} \sqrt{x}}{d x^{J+1}}\bigg|_{x=\xi} (x-1)^{J+1},$$
where $\xi \in [0, 2]$, and thus
$$\sup_{x \in [0,2]} \left| \sqrt{x} - \sum_{k=0}^{J} \frac{d^k \sqrt{x}}{d x^k}\bigg|_{x=1} \frac{(x-1)^k}{k!} \right| \le C_1'' \left( \frac{e}{J+1} \right)^{J+1}$$
for some $C_1'' > 0$, where the last inequality holds because $n! \ge (n/e)^n e$.
Now, we construct a neural network $\theta_{p,J}$ that approximates this Taylor polynomial as follows. The first hidden layer computes $\left( N_\sigma(x - 1 \mid \theta_2)/2,\ N_\sigma(x - 1 \mid \theta_1) \right)$ from the input $x$. Then
$$\left| \left( N_\sigma(x - 1 \mid \theta_2)/2,\ N_\sigma(x - 1 \mid \theta_1) \right) - \left( (x-1)^2/2,\ x-1 \right) \right|_\infty \le C_2' \frac{1}{K},$$
for any $x \in [0, 2]$ and some constant $C_2' > 0$. The next hidden layer computes $\left( N_\sigma\left( (u, v) \mid \theta_{\times, 1 + C_2'/K} \right)/3,\ N_\sigma(u + v \mid \theta_1) \right)$ from the input $(u, v)$ produced by the first hidden layer. Using the triangle inequality, we have that the second hidden layer approximates the vector $\left( (x-1)^3/3!,\ (x-1)^2/2 + (x-1) \right)$ within error $2 C_3'/K$ for some $C_3' > 0$. Repeating this procedure, we construct the network $\theta_{p,J} \in \Theta_{1,1}(J, 15)$, which approximates the Taylor polynomial within error $C_4' J/K$ for some $C_4' > 0$. Taking $J = \lceil \log K \rceil$, we observe that $(e/(J+1))^{J+1} \le (e/\log K)^{\log K + 1} \le e K/(\log K)^{\log K} \lesssim 1/K$ for all sufficiently large K, which implies the desired result.
Proof of (e). Let $\zeta \in (0, 1)$. Since for any $x \in \mathbb{R}$,
$$\sqrt{x^2 + \zeta^2} - |x| = \frac{\zeta^2}{\sqrt{x^2 + \zeta^2} + |x|} \le \frac{\zeta^2}{\zeta} = \zeta,$$
the function $\sqrt{x^2 + \zeta^2}$ approximates the absolute value function $|x|$ within error ζ. For $\theta_2$ in (a) and $\theta_{1/2}$ in (d), we have that
$$\left| N_\sigma\!\left( N_\sigma(x \mid \theta_2) + \zeta^2 \mid \theta_{1/2} \right) - |x| \right| \le \left| N_\sigma\!\left( N_\sigma(x \mid \theta_2) + \zeta^2 \mid \theta_{1/2} \right) - \sqrt{x^2 + \zeta^2} \right| + \zeta \le \left| N_\sigma\!\left( N_\sigma(x \mid \theta_2) + \zeta^2 \mid \theta_{1/2} \right) - \sqrt{N_\sigma(x \mid \theta_2) + \zeta^2} \right| + \left| \sqrt{N_\sigma(x \mid \theta_2) + \zeta^2} - \sqrt{x^2 + \zeta^2} \right| + \zeta \le C_1' \frac{\log K}{K} + \frac{1}{K \zeta} + \zeta$$
for some constant $C_1' > 0$. We now set $\zeta = 1/\sqrt{K}$ and $N_\sigma(x \mid \theta_{\mathrm{abs}}) := N_\sigma\!\left( N_\sigma(x \mid \theta_2) + K^{-1} \mid \theta_{1/2} \right)$. Since $(\log K)/K = o(1/\sqrt{K})$, the proof is done. □
Proof of Theorem 1 for locally quadratic activation functions.
Recall that
$$P_M(x) = \sum_{z \in G_{d,M}} \sum_{m \in \mathbb{N}_0^d : |m| \le \lfloor\alpha\rfloor} \beta_{z,m}\, x^{m}\, \phi_{z,M}(x).$$
Then by Lemma B.1 of [10],
$$\sup_{x \in [0,1]^d} \left| P_M(x) - f(x) \right| \le R M^{-\alpha}.$$
From the equivalent representation of the ReLU function $(x)_+ = (x + |x|)/2$, we can easily check that the neural network $N_\sigma(x \mid \theta_{\mathrm{relu}}) := \left( N_\sigma(x \mid \theta_{\mathrm{abs}}) + N_\sigma(x \mid \theta_1) \right)/2$ with $\theta_{\mathrm{relu}} \in \Theta_{1,1}(\lceil \log K \rceil, 21)$ approximates the ReLU function within error $C_1/\sqrt{K}$ for some $C_1 > 0$, where $\theta_1 \in \Theta_{1,1}(1, 6)$ is defined in the proof of (d) of Lemma A3 and $\theta_{\mathrm{abs}} \in \Theta_{1,1}(\lceil \log K \rceil, 15)$ is defined in (e) of Lemma A3. For $z \in (0, 1)$ and $M \in \mathbb{N}$, we define
$$N_\sigma(x \mid \theta_{\phi,z,M}) := N_\sigma\!\left( 1/M - N_\sigma(x - z \mid \theta_{\mathrm{abs}}) \mid \theta_{\mathrm{relu}} \right).$$
Then it approximates the function $(1/M - |x - z|)_+$ within error $C_2/\sqrt{K}$ for some $C_2 > 0$. In turn, for $z \in G_{d,M}$, by invoking the construction used in (c) of Lemma A3 to approximate the product of the d components, we can construct a network $\theta_{\phi,z,M} \in \Theta_{d,1}(\lceil \log K \rceil + \lceil \log_2 d \rceil, 21 d)$ with $|\theta_{\phi,z,M}|_\infty \le C_3 K^2$ for some $C_3 > 0$ such that
$$\sup_{x \in [0,1]^d} \left| N_\sigma(x \mid \theta_{\phi,z,M}) - \prod_{j=1}^{d} \left( \frac{1}{M} - |x_j - z_j| \right)_+ \right| \le C_4 \frac{1}{\sqrt{K}},$$
for some $C_4 > 0$. For each $m \in \mathbb{N}_0^d$ with $|m| \le \lfloor\alpha\rfloor$, we have the neural network $\theta_{m}$ in (c) of Lemma A3 that approximates $x^{m}$. The number of these networks is $\binom{d + \lfloor\alpha\rfloor}{\lfloor\alpha\rfloor}$, which we denote by $A_\alpha$. Also, there are $|G_{d,M}| = (M+1)^d$ networks $\theta_{\phi,z,M}$ for $z \in G_{d,M}$. We need an approximation of each product $x^{m} \phi_{z,M}(x)$, which requires an additional $A_\alpha (M+1)^d$ many networks $\theta_{\times,A} \in \Theta_{2,1}(1, 9)$, where $\theta_{\times,A}$ is defined as in (A4) for some $A > 1$ not depending on M and K. Finally, we construct the output layer, which computes the weighted sum of $\left( N_\sigma\!\left( (N_\sigma(x \mid \theta_{m}), N_\sigma(x \mid \theta_{\phi,z,M})) \mid \theta_{\times,A} \right) : m \in \mathbb{N}_0^d,\ |m| \le \lfloor\alpha\rfloor,\ z \in G_{d,M} \right)$. Letting $\theta_{f,K,M}$ denote the network constructed above, we can check that
$$\sup_{x \in [0,1]^d} \left| N_\sigma(x \mid \theta_{f,K,M}) - P_M(x) \right| \le C_5 A_\alpha (M+1)^d \left( \frac{1}{\sqrt{K}} + \frac{1}{K} \right) \le C_6 \frac{(M+1)^d}{\sqrt{K}},$$
for some positive constants $C_5$ and $C_6$. In addition, we have $L(\theta_{f,K,M}) \le 1 + \left( \lceil \log K \rceil + \lceil \log_2(\alpha \vee d) \rceil \right) \le C_7 \log K$ and $n_{\max}(\theta_{f,K,M}) \le C_8 A_\alpha (M+1)^d$ for some positive constants $C_7$ and $C_8$. For the sparsity of the network, we have
$$|\theta_{f,K,M}|_0 \le A_\alpha (M+1)^d |\theta_{\times,A}|_0 + (M+1)^d |\theta_{\phi,z,M}|_0 + A_\alpha |\theta_{m}|_0 \le C_9 \log K\, (M+1)^d,$$
for some $C_9 > 0$. Taking $M + 1 = \epsilon^{-1/\alpha}$ and $K = \epsilon^{-2d/\alpha - 2}$, we have
$$\theta_{f,K,M} \in \Theta_{d,1}\left( L_0 \log(1/\epsilon),\ N_0 \epsilon^{-d/\alpha},\ S_0 \epsilon^{-d/\alpha} \log(1/\epsilon),\ B_0 \epsilon^{-4(d/\alpha + 1)} \right),$$
so that $\|P_M - N_\sigma(\cdot \mid \theta_{f,K,M})\|_\infty \le C_{10}\, \epsilon$ for some $C_{10} > 0$. Since $\|f - P_M\|_\infty \le R M^{-\alpha} \le C_{11}\, \epsilon$ for some $C_{11} > 0$, the proof is done. □

Appendix B. Proofs of Proposition 1

Proof. 
Given a deep neural network $\theta = ((W_1, b_1), \dots, (W_{L+1}, b_{L+1})) \in \Theta_{d,1}(L, N, S, B)$, we define $\check{N}_{l,\sigma,\theta}: \mathbb{R}^d \to \mathbb{R}^{n_{l-1}}$ and $\hat{N}_{l,\sigma,\theta}: \mathbb{R}^{n_{l-1}} \to \mathbb{R}$ as
$$\check{N}_{l,\sigma,\theta}(x) := \sigma_{l-1} \circ A_{l-1} \circ \cdots \circ \sigma_1 \circ A_1(x), \qquad \hat{N}_{l,\sigma,\theta}(x) := A_{L+1} \circ \sigma_L \circ A_L \circ \cdots \circ \sigma_l \circ A_l \circ \sigma_{l-1}(x),$$
for $l \in \{2, \dots, L+1\}$, where $A_l(x) = W_l x + b_l$. Corresponding to the first layer, we define $\check{N}_{1,\sigma,\theta}(x) = x$. Note that $N_\sigma(x \mid \theta) = \hat{N}_{l+1,\sigma,\theta} \circ A_l \circ \check{N}_{l,\sigma,\theta}(x)$ for $l = 1, \dots, L$. For a given $\delta > 0$, let $\theta = ((W_1, b_1), \dots, (W_{L+1}, b_{L+1}))$ and $\theta^* = ((W_1^*, b_1^*), \dots, (W_{L+1}^*, b_{L+1}^*))$ in $\Theta_{d,1}(L, N, S, B)$ be two network parameters such that $|\mathrm{vec}(W_l - W_l^*)|_\infty \le \delta$ and $|b_l - b_l^*|_\infty \le \delta$ for $l = 1, \dots, L+1$. Let $C_\sigma$ be the Lipschitz constant of σ. We observe that
$$\|\check{N}_{l,\sigma,\theta}\|_\infty \le C_\sigma\left( N B \|\check{N}_{l-1,\sigma,\theta}\|_\infty + B \right) \le C_\sigma (B \vee 1)(N+1) \|\check{N}_{l-1,\sigma,\theta}\|_\infty \le \cdots \le \left\{ C_\sigma (B \vee 1)(N+1) \right\}^{l-1},$$
and similarly, $\hat{N}_{l,\sigma,\theta}$ is Lipschitz continuous with Lipschitz constant at most $(C_\sigma B N)^{L-l+1}$. Letting $A_l^* x = W_l^* x + b_l^*$, we have
$$\left\| N_\sigma(\cdot \mid \theta) - N_\sigma(\cdot \mid \theta^*) \right\|_\infty \le \sum_{l=1}^{L} \left\| \hat{N}_{l+1,\sigma,\theta^*} \circ A_l \circ \check{N}_{l,\sigma,\theta}(\cdot) - \hat{N}_{l+1,\sigma,\theta^*} \circ A_l^* \circ \check{N}_{l,\sigma,\theta}(\cdot) \right\|_\infty \le \sum_{l=1}^{L} (C_\sigma B N)^{L-l} \left\| (A_l - A_l^*) \circ \check{N}_{l,\sigma,\theta}(\cdot) \right\|_\infty \le \sum_{l=1}^{L} (C_\sigma B N)^{L-l}\, \delta \left\{ C_\sigma (B \vee 1)(N+1) \right\}^{l-1} \le \delta L \left\{ C_\sigma (B \vee 1)(N+1) \right\}^{L}.$$
Thus, for a fixed sparsity pattern (i.e., for fixed locations of the nonzero elements of θ), the $\delta$-covering number is bounded by $\left[ \delta^{-1} L \left\{ C_\sigma (B \vee 1)(N+1) \right\}^{L} \right]^{S}$. Since the number of sparsity patterns is bounded by $(N+1)^{L S}$, the log covering number is bounded above by
$$\log\left[ (N+1)^{L S} \left( \frac{L \left\{ C_\sigma (B \vee 1)(N+1) \right\}^{L}}{\delta} \right)^{S} \right] \le 2 L (S+1) \log\left( \frac{C_\sigma L (B \vee 1)(N+1)}{\delta} \right),$$
which completes the proof. □
which completes the proof. □

Appendix C. Proof of Theorem 2

The proof of Theorem 2 is based on the following oracle inequality.
Lemma A4
(Lemma 4 of [10]). Assume that $Y \mid X = x \sim \mathrm{N}(f_0(x), 1)$ for some $f_0$ with $\|f_0\|_\infty \le R$. Let $\mathcal{F}$ be a given class of functions from $[0,1]^d$ to $[-2R, 2R]$, and let $\hat{f}$ be any estimator in $\mathcal{F}$. Then for any $\delta \in (0, 1]$, we have
$$\mathbb{E}\left[ \mathbb{E}_{X \sim P_x}\left( \hat{f}(X) - f_0(X) \right)^2 \right] \le 4\left[ \inf_{f \in \mathcal{F}} \mathbb{E}_{X \sim P_x}\left( f(X) - f_0(X) \right)^2 + (4R)^2\, \frac{18 \log \mathcal{N}(\delta, \mathcal{F}, \|\cdot\|_\infty) + 72}{n} + 32\, \delta\, (4R) + \Delta_n \right],$$
with
$$\Delta_n := \mathbb{E}\left[ \frac{1}{n}\sum_{i=1}^{n} \left( Y_i - \hat{f}(X_i) \right)^2 - \inf_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \left( Y_i - f(X_i) \right)^2 \right],$$
where the expectations are taken over the training data.
Proof of Theorem 2.
We apply Lemma A4 to $\mathcal{F} = \mathcal{F}_{\sigma,n}$ and $\hat{f} = \hat{f}_n \in \operatorname{argmin}_{f \in \mathcal{F}_{\sigma,n}} \sum_{i=1}^{n} (y_i - f(x_i))^2$. By the definition of $\hat{f}_n$, we have $\Delta_n = 0$. Also, it can easily be verified that $f_0 = \operatorname{argmin}_{f \in \mathcal{F}} \mathcal{R}_{2,f_0}(f)$ and $\mathbb{E}_{f_0, P_x}(\hat{f}_n(X) - f_0(X))^2 = \mathcal{R}_{2,f_0}(\hat{f}_n) - \mathcal{R}_{2,f_0}(f_0)$. Set $\delta = 1/n$. By Proposition 1,
$$\log \mathcal{N}\left( \frac{1}{n}, \mathcal{F}_{\sigma,n}, \|\cdot\|_\infty \right) \le C_1 n^{\frac{d}{2\alpha+d}} \log^3 n,$$
for some $C_1 > 0$. If a function $f_n$ approximates $f_0$ within a sufficiently small error $\epsilon$, then $\|f_n\|_\infty \le 2R$ since $\|f_0\|_\infty \le R$. Now, Theorem 1 implies that there is $f_n \in \mathcal{F}_{\sigma,n}$ such that
$$\mathbb{E}_{f_0, P_x}\left( f_n(X) - f_0(X) \right)^2 \le C_2 \sup_{x \in [0,1]^d} \left| f_n(x) - f_0(x) \right|^2 \le C_3 \left( \left( n^{-\frac{d}{2\alpha+d}} \right)^{\alpha/d} \right)^2 = C_3 n^{-\frac{2\alpha}{2\alpha+d}},$$
which completes the proof. □

Appendix D. Proof of Theorem 3

For a given real-valued function $f$, let $\mathcal{R}_{\mathrm{hinge},\eta}(f) := \mathbb{E}_{Y \mid X \sim 2\mathrm{Bern}(\eta(X)) - 1,\ X \sim P_x}\, \ell_{\mathrm{hinge}}(Y f(X))$, which we call the hinge risk. The proof of Theorem 3 is based on the following lemma, which is given in [28].
Lemma A5
(Theorem 6 of [28]). Assume that $\eta(x)$ satisfies the Tsybakov noise condition (8) with noise exponent $q \in [0, \infty]$. Assume that there exists a sequence $(\delta_n)_{n \in \mathbb{N}}$ such that
  • there exists a sequence of classes of functions $\{\mathcal{F}_n\}_{n \in \mathbb{N}}$ with $\sup_{n \in \mathbb{N}} \sup_{f \in \mathcal{F}_n} \|f\|_\infty \le F$ for some $F > 0$ such that there is $f_n \in \mathcal{F}_n$ with $\mathcal{R}_{\mathrm{hinge},\eta}(f_n) - \min_{f \in \mathcal{F}} \mathcal{R}_{\mathrm{hinge},\eta}(f) \le C_1 \delta_n$ for some universal constant $C_1 > 0$;
  • $\log \mathcal{N}(\delta_n, \mathcal{F}_n, \|\cdot\|_\infty) \le C_2\, n\, \delta_n^{(q+2)/(q+1)}$ for some universal constant $C_2 > 0$.
Then the estimator $\hat{f}_n$ obtained by
$$\hat{f}_n \in \operatorname*{argmin}_{f \in \mathcal{F}_n} \sum_{i=1}^{n} \ell_{\mathrm{hinge}}\left( y_i f(x_i) \right)$$
satisfies
$$\mathbb{E}\left[ \mathcal{R}_{01,\eta}(\hat{f}_n) - \min_{f \in \mathcal{F}} \mathcal{R}_{01,\eta}(f) \right] \le C_3 \delta_n,$$
for some universal constant $C_3 > 0$, where the expectation is taken over the training data.
Proof of Theorem 3.
It is well known that $f^* = 2\,\mathbb{1}(\eta(\cdot) \ge 1/2) - 1 = \operatorname{argmin}_{f \in \mathcal{F}} \mathcal{R}_{\mathrm{hinge},\eta}(f)$, i.e., the hinge risk minimizer is equal to the Bayes classifier [32]. The first step is to find a function $f_n \in \mathcal{F}_{\sigma,n}$ which approximates the Bayes classifier $f^*$ well. Let $(\xi_n)_{n \in \mathbb{N}}$ be a given sequence of positive real numbers. Since $\eta \in \mathcal{H}^{\alpha,R}([0,1]^d)$, by Theorem 1, for each $\xi_n$ there exists $\theta_n$ such that $\|N_\sigma(\cdot \mid \theta_n) - \eta(\cdot)\|_\infty \le \xi_n$ with at most $O(\log(1/\xi_n))$ layers, $O(\xi_n^{-d/\alpha})$ nodes at each layer and $O(\xi_n^{-d/\alpha} \log(1/\xi_n))$ nonzero parameters. We construct the neural network $f_n$ by adding one ReLU layer to $N_\sigma(\cdot \mid \theta_n)$ as
$$f_n(x) = 2\left[ \rho\!\left( \frac{1}{\xi_n}\left( N_\sigma(x \mid \theta_n) - \frac{1}{2} \right) \right) - \rho\!\left( \frac{1}{\xi_n}\left( N_\sigma(x \mid \theta_n) - \frac{1}{2} \right) - 1 \right) \right] - 1,$$
where ρ is the ReLU activation function. Note that $f_n(x)$ is equal to 1 if $N_\sigma(x \mid \theta_n) \ge 1/2 + \xi_n$, equal to $2(N_\sigma(x \mid \theta_n) - 1/2)/\xi_n - 1$ if $1/2 \le N_\sigma(x \mid \theta_n) < 1/2 + \xi_n$, and equal to $-1$ otherwise. Let
$$B(4\xi_n) := \left\{ x : |2\eta(x) - 1| > 4\xi_n \right\}.$$
Then on $B(4\xi_n)$ we have $|f_n(x) - f^*(x)| = 0$, since $N_\sigma(x \mid \theta_n) - 1/2 = (\eta(x) - 1/2) + (N_\sigma(x \mid \theta_n) - \eta(x)) \ge \xi_n$ when $2\eta(x) - 1 > 4\xi_n$. Similarly, we can show that $N_\sigma(x \mid \theta_n) - 1/2 < -\xi_n$ when $2\eta(x) - 1 < -4\xi_n$. Therefore, the Tsybakov noise condition (8) implies
$$\mathcal{R}_{\mathrm{hinge},\eta}(f_n) - \mathcal{R}_{\mathrm{hinge},\eta}(f^*) = \int |f_n(x) - f^*(x)|\, |2\eta(x) - 1|\, \mathrm{d}P_x(x) = \int_{B(4\xi_n)^c} |f_n(x) - f^*(x)|\, |2\eta(x) - 1|\, \mathrm{d}P_x(x) \le 8\xi_n \Pr\left( |2\eta(X) - 1| \le 4\xi_n \right) \le C_1 \xi_n^{q+1},$$
for some $C_1 > 0$, where the first equality follows from Theorem 2.31 of [33].
We take $\delta_n = C_1 \xi_n^{q+1}$. Then there are positive constants $L_0$, $N_0$, $S_0$ and $B_0$ such that $f_n \in \mathcal{F}_{\sigma,n}$, where
$$\mathcal{F}_{\sigma,n} := \left\{ N_\sigma(\cdot \mid \theta) : \|N_\sigma(\cdot \mid \theta)\|_\infty \le 1,\ \theta \in \Theta_{d,1}\left( L_0 \log(\delta_n^{-1}),\ N_0 \delta_n^{-\frac{d}{\alpha(q+1)}},\ S_0 \delta_n^{-\frac{d}{\alpha(q+1)}} \log(\delta_n^{-1}),\ B_0 \delta_n^{-\kappa} \right) \right\},$$
for some $\kappa > 0$. Proposition 1 implies that the log covering number of $\mathcal{F}_{\sigma,n}$ is bounded above by
$$\log \mathcal{N}\left( \delta_n, \mathcal{F}_{\sigma,n}, \|\cdot\|_\infty \right) \lesssim \delta_n^{-\frac{d}{\alpha(q+1)}} \log^3\left( \delta_n^{-1} \right).$$
Note that to satisfy the entropy condition of Lemma A5, $\delta_n$ should satisfy
$$\delta_n^{\frac{d}{\alpha(q+1)} + \frac{q+2}{q+1}} \ge C_2\, n^{-1} \log^3\left( \delta_n^{-1} \right)$$
for some $C_2 > 0$. If we let $\delta_n = \left( \log^3 n / n \right)^{\alpha(q+1)/(\alpha(q+2)+d)}$, condition (A5) holds and the proof is done. □

References

  1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436.
  2. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
  3. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314.
  4. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
  5. Funahashi, K.I. On the approximate realization of continuous mappings by neural networks. Neural Netw. 1989, 2, 183–192.
  6. Chui, C.K.; Li, X. Approximation by ridge functions and neural networks with one hidden layer. J. Approx. Theory 1992, 70, 131–141.
  7. Leshno, M.; Lin, V.Y.; Pinkus, A.; Schocken, S. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 1993, 6, 861–867.
  8. Telgarsky, M. Neural networks and rational functions. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 3387–3393.
  9. Yarotsky, D. Error bounds for approximations with deep ReLU networks. Neural Netw. 2017, 94, 103–114.
  10. Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. arXiv 2017, arXiv:1708.06633.
  11. Bauer, B.; Kohler, M. On deep learning as a remedy for the curse of dimensionality in nonparametric regression. Ann. Stat. 2019. accepted.
  12. Li, B.; Tang, S.; Yu, H. Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units. arXiv 2019, arXiv:1903.05858.
  13. Suzuki, T. Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: Optimal rate and curse of dimensionality. arXiv 2018, arXiv:1810.08033.
  14. Petersen, P.; Voigtlaender, F. Optimal approximation of piecewise smooth functions using deep ReLU neural networks. Neural Netw. 2018, 108, 296–330.
  15. Imaizumi, M.; Fukumizu, K. Deep Neural Networks Learn Non-Smooth Functions Effectively. arXiv 2018, arXiv:1802.04474.
  16. Bergstra, J.; Desjardins, G.; Lamblin, P.; Bengio, Y. Quadratic Polynomials Learn Better Image Features; Technical Report 1337; Département d’Informatique et de Recherche Operationnelle, Université de Montréal: Montréal, QC, Canada, 2009.
  17. Clevert, D.A.; Unterthiner, T.; Hochreiter, S. Fast and accurate deep network learning by exponential linear units (elus). arXiv 2015, arXiv:1511.07289.
  18. Carlile, B.; Delamarter, G.; Kinney, P.; Marti, A.; Whitney, B. Improving deep learning by inverse square root linear units (ISRLUs). arXiv 2017, arXiv:1710.09967.
  19. Klimek, M.D.; Perelstein, M. Neural Network-Based Approach to Phase Space Integration. arXiv 2018, arXiv:1810.11509.
  20. Wuraola, A.; Patel, N. SQNL: A New Computationally Efficient Activation Function. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–7.
  21. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for activation functions. arXiv 2017, arXiv:1710.05941.
  22. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323.
  23. Mhaskar, H.N. Approximation properties of a multilayered feedforward artificial neural network. Adv. Comput. Math. 1993, 1, 61–80.
  24. Costarelli, D.; Vinti, G. Saturation classes for max-product neural network operators activated by sigmoidal functions. Results Math. 2017, 72, 1555–1569.
  25. Costarelli, D.; Spigler, R. Solving numerically nonlinear systems of balance laws by multivariate sigmoidal functions approximation. Comput. Appl. Math. 2018, 37, 99–133.
  26. Costarelli, D.; Vinti, G. Estimates for the neural network operators of the max-product type with continuous and p-integrable functions. Results Math. 2018, 73, 12.
  27. Costarelli, D.; Sambucini, A.R. Approximation results in Orlicz spaces for sequences of Kantorovich max-product neural network operators. Results Math. 2018, 73, 15.
  28. Kim, Y.; Ohn, I.; Kim, D. Fast convergence rates of deep neural networks for classification. arXiv 2018, arXiv:1812.03599.
  29. Anthony, M.; Bartlett, P.L. Neural Network Learning: Theoretical Foundations; Cambridge University Press: Cambridge, UK, 2001.
  30. Mammen, E.; Tsybakov, A.B. Smooth discrimination analysis. Ann. Stat. 1999, 27, 1808–1829.
  31. Tsybakov, A.B. Optimal aggregation of classifiers in statistical learning. Ann. Stat. 2004, 32, 135–166.
  32. Lin, Y. A note on margin-based loss functions in classification. Stat. Probab. Lett. 2004, 68, 73–82.
  33. Steinwart, I.; Christmann, A. Support Vector Machines; Springer Science & Business Media: New York, NY, USA, 2008.
