Article

Robust Estimation and Tests for Parameters of Some Nonlinear Regression Models

1 School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou 221116, China
2 Research Institute of Mathematical Sciences, Jiangsu Normal University, Xuzhou 221116, China
3 Department of Public Administration and Policy, College of Public Affairs, National Taipei University, Taipei 237, Taiwan
4 Jiangsu Provincial Key Laboratory of Educational Big Data Science and Engineering, Jiangsu Normal University, Xuzhou 221116, China
* Authors to whom correspondence should be addressed.
The author order is alphabetical. These authors contributed equally to this work. The submitting author is Ru Zhang.
Mathematics 2021, 9(6), 599; https://doi.org/10.3390/math9060599
Submission received: 15 February 2021 / Revised: 3 March 2021 / Accepted: 5 March 2021 / Published: 11 March 2021
(This article belongs to the Special Issue Probability, Statistics and Their Applications 2021)

Abstract

This paper uses the median-of-means (MOM) method to estimate the parameters of nonlinear regression models and proves the consistency and asymptotic normality of the MOM estimator. In particular, when there are outliers, the MOM estimator is more robust than the nonlinear least squares (NLS) estimator and the empirical likelihood (EL) estimator. On this basis, we propose hypothesis-testing statistics for the parameters of nonlinear regression models using the empirical likelihood method, and the simulation performance shows the superiority of the MOM estimator. We apply the MOM method to analyze the 2019 GDP data of the top 50 cities in China. The results show that the MOM method is more feasible than the NLS and EL estimators.

1. Introduction

A nonlinear regression model is a regression model in which the relationship between variables is not linear. Nonlinear regression models have been widely used in various disciplines. For instance, Hong [1] applied a nonlinear regression model to economic system prediction; Wang et al. [2] studied the application of nonlinear regression models to the detection of protein layer thickness; Chen et al. [3] utilized a nonlinear regression model for the price estimation of surface-to-air missiles; and Archontoulis and Miguez [4] used nonlinear regression models in agricultural research.
The principle of median-of-means (MOM) was first introduced by Alon, Matias, and Szegedy [5] in order to approximate the frequency moments with limited space complexity. Lecué and Lerasle [6] proposed new estimators for robust machine learning based on MOM estimators of the mean of real-valued random variables; these estimators achieve optimal rates of convergence under minimal assumptions on the dataset. Lecué et al. [7] proposed the MOM minimizers estimator based on the MOM method, which remains effective when the dataset may have been corrupted by outliers. Zhang and Liu [8] applied the MOM method to estimate the parameters in multiple linear regression models and AR error models for repeated measurement data.
For the unknown parameters of a nonlinear regression model, Radchenko [9] studied the nonlinear least squares estimator, and Ding et al. [10] introduced the empirical likelihood (EL) estimator of the parameters based on the empirical likelihood method. However, when there are outliers, these general methods are sensitive and easily affected by the outliers, as shown by Gao and Li [11]. Building on the study of Zhang and Liu [8], this paper applies the MOM method to estimate the parameters of nonlinear regression models and obtains more robust results.
The paper is organized as follows. In Section 2, we review the definition of the nonlinear regression model, introduce the MOM method in detail, and prove the consistency and asymptotic properties of the MOM estimator. In Section 3, we introduce a new test method based on the empirical likelihood method for the median. Section 4 illustrates the superiority of the MOM method with simulation studies. A real application to GDP data is given in Section 5, and the conclusion is discussed in the last section.

2. Median-of-Means Method Applied to the Nonlinear Regression Model

We consider the following nonlinear regression model introduced by Wu [12]:
$$y_i = g(\theta, x_i) + \varepsilon_i, \qquad i = 1, \ldots, T, \tag{1}$$
where θ = (θ_1, …, θ_k)^T is a fixed k × 1 unknown parameter vector, x_i is the i-th "fixed" input vector with observation y_i, g(θ, x_i) is a known (usually nonlinear) function, and the ε_i are i.i.d. errors with mean 0 and unknown variance σ².
According to Zhang and Liu [8], the MOM estimator of θ is produced by the following steps (an illustrative R sketch is given after Step III):
Step I: We separate (y_i, x_i), i = 1, …, T into g groups; the number of observations in each group is n = T/g (usually, for convenience of calculation, we assume that T is divisible by g). For the choice of the grouping number g, Emilien et al. [13] suggest g = ⌈8 log(1/ζ)⌉ for any ζ ∈ (0, 1), where ⌈·⌉ is the ceiling function. In fact, the structure of the observations is always unknown and the diagnosis of outliers is complicated, so we usually set ζ = C/T for some constant C regardless of outliers.
Step II: We estimate the parameter θ in each group j, 1 ≤ j ≤ g, by the nonlinear least squares estimator θ̂^(j) = (θ̂_1^(j), …, θ̂_k^(j))^T.
Step III: The MOM estimator θ̂^MOM = (θ̂_1^MOM, …, θ̂_k^MOM)^T is defined by θ̂_q^MOM = median(θ̂_q^(1), …, θ̂_q^(g)), q = 1, …, k.
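To make Steps I–III concrete, here is a minimal R sketch (R is the software used for the simulations in Section 4). The function name mom_nls, its arguments, and the choice g = 10 in the usage line are our own illustrative assumptions, not code from the paper.

```r
# A minimal MOM estimator for a nonlinear regression model (Steps I-III).
# Assumes T = nrow(data) is divisible by g and that nls() converges in each
# group; 'formula' is an nls() formula whose parameters are named in 'start'.
mom_nls <- function(formula, data, start, g) {
  T <- nrow(data)
  n <- T / g                                   # observations per group (Step I)
  groups <- split(data, rep(1:g, each = n))    # g disjoint groups
  # Step II: nonlinear least squares within each group
  fits <- lapply(groups, function(d) coef(nls(formula, data = d, start = start)))
  est <- do.call(rbind, fits)                  # g x k matrix of group estimates
  apply(est, 2, median)                        # Step III: component-wise median
}

# Usage with model 2 of Section 4, y = x^theta + error, true theta = 0.6:
set.seed(1)
x <- runif(500, 2, 3)
y <- x^0.6 + rnorm(500)
mom_nls(y ~ x^theta, data.frame(x = x, y = y), start = list(theta = 0.5), g = 10)
```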
The asymptotic properties of θ̂^MOM are summarized in the following theorems; their proofs are postponed to Appendix A.
Theorem 1.
For some constant C and any positive integer g, we suppose the following:
(I) Θ is an open interval (finite or infinite) of the real axis E¹, and for certain 0 < a < b < ∞ and any θ_1, θ_2 ∈ Θ,
$$a(\theta_1-\theta_2)^2 \le \varphi_n(\theta_1,\theta_2) = \frac{1}{n}\sum_{i=1}^{n}\big[g(\theta_1,x_i)-g(\theta_2,x_i)\big]^2 \le b(\theta_1-\theta_2)^2, \tag{2}$$
with φ_n(θ_1, θ_2) = ∞ for θ_1 ≠ θ_2 if at least one of the points θ_1, θ_2 is −∞ or ∞. Suppose also that E|ε_1|^s < ∞ for some s ≥ 2.
(II) The derivatives g′(θ_0, x_i) and g″(θ_0, x_i) exist for all θ_0 near θ_q, q = 1, …, k, the true value θ_q is in the interior of Θ, and
$$\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\big[g'(\theta_q,x_i)\big]^2 = S_0 < S < \infty.$$
(III) There exists θ_ν ∈ Θ such that, as n → ∞ and |θ_ν − θ_q| → 0,
$$\lim_{n\to\infty}\frac{\sum_{i=1}^{n}\big[g'(\theta_\nu,x_i)\big]^2}{\sum_{i=1}^{n}\big[g'(\theta_q,x_i)\big]^2} = 1.$$
(IV) There exists a δ > 0 such that
$$\varlimsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sup_{\theta_0\in s}\Big[\frac{\partial^2 g(\theta_0,x_i)}{\partial\theta_0^2}\Big]^2 < \infty,$$
where s = {θ_0 ∈ Θ : |θ_0 − θ_q| ≤ δ}.
Then, under conditions (I)–(IV), for any fixed x > 0 we have
$$P\big(\big|\hat\theta_q^{MOM}-\theta_q\big| \ge x\big) \le C\,(T/g)^{-g/5}.$$
Theorem 2.
(1) Suppose g is fixed and σ > 0, and let Θ_1, Θ_2, …, Θ_g be i.i.d. standard normal random variables. When T → ∞,
$$\sqrt{n}\,\hat\sigma_n^{-1}S^{1/2}\big(\hat\theta_q^{MOM}-\theta_q\big) \xrightarrow{d} \mathrm{median}\{\Theta_1,\ldots,\Theta_g\}.$$
(2) Suppose T/g² → ∞ as g → ∞ and σ > 0. Then the following asymptotic normality holds:
$$\sqrt{T}\,\hat\sigma_n^{-1}S^{1/2}\big(\hat\theta_q^{MOM}-\theta_q\big) \xrightarrow{d} \sqrt{\pi/2}\;N(0,1).$$
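The √(π/2) factor in Theorem 2(2) is the usual efficiency cost of a median relative to a mean, and it can be checked numerically. The following R sketch is ours, not the paper's; it uses the simplest setting of g group means of standard normal data, so that the group estimates are exactly normal and only a large g matters.

```r
# Numerical check of the sqrt(pi/2) factor in Theorem 2(2): the median of g
# group means, scaled by sqrt(T) with T = g * n, has variance approx. pi/2.
set.seed(2)
g <- 101; n <- 100
z <- replicate(5000, {
  grp_means <- rowMeans(matrix(rnorm(g * n), g, n))   # g means of n obs each
  sqrt(g * n) * median(grp_means)                     # sqrt(T) * MOM estimate
})
var(z)   # should be close to pi / 2 = 1.5708
```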

3. Empirical Likelihood Test Based on MOM Method

In Section 2, we used the MOM method to estimate the parameters of the nonlinear regression model. In this section, we consider testing whether θ equals a given value, based on the empirical likelihood method.
Because different groups are disjoint, the group estimates θ̂_q^(1), θ̂_q^(2), …, θ̂_q^(g), q = 1, …, k, are i.i.d. We treat them as a sample and apply the empirical likelihood method. For each j, set T_{n,j} = I(θ̂_q^(j) ≤ θ_q). Obviously, E T_{n,j} → 0.5; in fact, E T_{n,j} − 0.5 = O(n^{−1/2}) (by the proof of Theorem 1 in Appendix A). Given the restrictive conditions, the empirical likelihood ratio of θ is
$$R(\theta) = \max\Big\{\prod_{j=1}^{g} g\,\omega_j \;\Big|\; \sum_{j=1}^{g}\omega_j T_{n,j} = 0.5,\ \omega_j \ge 0,\ \sum_{j=1}^{g}\omega_j = 1\Big\}.$$
Using the Lagrange multiplier method to find the maximum point, we obtain
$$\omega_j = \frac{1}{g}\cdot\frac{1}{1+\lambda(T_{n,j}-0.5)},$$
where λ = λ(θ) satisfies the equation
$$0 = \frac{1}{g}\sum_{j=1}^{g}\frac{T_{n,j}-0.5}{1+\lambda(T_{n,j}-0.5)}. \tag{6}$$
Theorem 3.
According to Theorem 2 and Owen [14], as g → ∞ and n → ∞, we have
$$-2\log R(\theta) \xrightarrow{d} \chi_1^2.$$
Using Theorem 3, the rejection region for the hypothesis test
$$H_0: \theta = \theta_0 \quad \text{vs.} \quad H_1: \theta \ne \theta_0$$
with significance level α (0 < α < 1) can be constructed as
$$\mathcal{R} := \big\{-2\log R(\theta_0) > \chi_1^2(\alpha)\big\},$$
where χ²_1(α) is the upper α-quantile of χ²_1.
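The following R sketch, with our own helper name momel_test, implements this test for one parameter component; it takes the g per-group NLS estimates from Step II of Section 2 as input. Because T_{n,j} is binary, the Lagrange equation has a closed-form solution, which the code exploits.

```r
# Empirical likelihood test based on the MOM group estimates (a sketch).
# theta_hat: the g per-group NLS estimates of one component theta_q;
# theta0: hypothesized value; alpha: significance level.
momel_test <- function(theta_hat, theta0, alpha = 0.05) {
  g <- length(theta_hat)
  Tnj <- as.numeric(theta_hat <= theta0)   # T_{n,j} = I(theta_hat^(j) <= theta0)
  p <- mean(Tnj)
  if (p == 0 || p == 1)                    # constraint set is empty: reject H0
    return(list(statistic = Inf, reject = TRUE))
  # With binary T_{n,j}, the Lagrange equation gives lambda = 2(2p - 1), so
  # 1 + lambda(T_{n,j} - 0.5) equals 2p or 2(1 - p), and -2 log R simplifies to:
  stat <- 2 * sum(Tnj * log(2 * p) + (1 - Tnj) * log(2 * (1 - p)))
  list(statistic = stat, reject = stat > qchisq(1 - alpha, df = 1))
}
```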

4. Simulation Study

In this section, we use the R software for simulations. Simulation experiments are carried out to compare the performance of the MOM estimator with the nonlinear least squares (NLS) estimator and the EL estimator under the "no outliers" and "with outliers" cases in Examples 1–3. The Mean Square Error (MSE) of θ̂_EL, θ̂_MOM and θ̂_NLS is defined as follows.
$$MSE = \frac{1}{D-1}\sum_{i=1}^{D}\big(\hat\theta_{q,i} - \theta_q\big)^2, \qquad q = 1, \ldots, k. \tag{8}$$
In formula (8), θ̂_{q,i} and θ_q denote the estimated value of the parameter in the i-th simulation and its true value, respectively, and D is the total number of simulations; in this article, D = 1000. The MSE results in Table 1, Table 2 and Table 3 are all multiplied by 100 and reported to three decimal places. In Examples 4–6, we compare our proposed method with the empirical likelihood inference proposed by Jiang [15].
We report the empirical sizes and powers of the two methods. Size is the probability of rejecting the null hypothesis when it is true; a value close to the nominal significance level, which we set to 0.05 throughout, is good. Power is the probability of rejecting the null hypothesis when it is false; a value close to 1 is good. The empirical size or power is n_1/D, where n_1 is the number of times the null hypothesis is rejected in D simulations; the empirical size and power are thus the estimated values of the size and power. In Table 4, Table 5 and Table 6 of this article, "size" refers to the empirical size and "power" to the empirical power. We consider the following three forms of nonlinear regression models, which were also considered in [16]:
$$\text{model 1:}\quad y_i = 0.8^{x_i} + \varepsilon_i, \qquad i = 1, \ldots, T;$$
$$\text{model 2:}\quad y_i = x_i^{0.6} + \varepsilon_i, \qquad i = 1, \ldots, T;$$
$$\text{model 3:}\quad y_i = e^{0.5 x_i} + \varepsilon_i, \qquad i = 1, \ldots, T.$$
In this paper, for convenience, we fix the number of groups in the simulations; the result is consistent with the number computed from the formula g = ⌈8 log(1/ζ)⌉ suggested by Emilien et al. [13]. Throughout the paper, the abbreviations B, U, N and P represent the binomial, uniform, normal and Poisson distributions, respectively, and N(0,1) denotes the standard normal distribution. We set the number of observations T to 100, 200, …, 1000.
Example 1.
We consider model y_i = 0.8^{x_i} + ε_i. The observations are grouped according to the grouping principle, taking into account the dispersion of the data set (the accuracy of the estimator may be affected by the dispersion in the data set). x_i are generated from P(0.7) and ε_i from N(0, 1). The output variable y_i has outliers; there are three cases, in which we choose 1%T outliers from B(20, 1/2), 2%T outliers from U(7, 8) and 2%T outliers from N(6, 2), respectively. The results are shown in Table 1.
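For concreteness, the sketch below generates contaminated samples in the spirit of Example 1 and compares the NLS and MOM estimates, reusing the hypothetical mom_nls helper sketched in Section 2. The paper does not fully specify how outliers are injected, so replacing 1%T randomly chosen responses is our assumption, as are D = 100 replications and g = 10.

```r
# A small MSE comparison in the spirit of Example 1 (model y = 0.8^x + error).
set.seed(3)
D <- 100; T <- 500; theta_true <- 0.8
se_nls <- se_mom <- numeric(D)
for (d in 1:D) {
  x <- rpois(T, 0.7)
  y <- theta_true^x + rnorm(T)
  out <- sample(T, 0.01 * T)                # 1% of the responses become outliers
  y[out] <- rbinom(length(out), 20, 0.5)    # outliers drawn from B(20, 1/2)
  dat <- data.frame(x = x, y = y)
  se_nls[d] <- (coef(nls(y ~ theta^x, dat, start = list(theta = 0.7))) - theta_true)^2
  se_mom[d] <- (mom_nls(y ~ theta^x, dat, list(theta = 0.7), g = 10) - theta_true)^2
}
100 * c(NLS = sum(se_nls), MOM = sum(se_mom)) / (D - 1)   # MSE x 100, as in Table 1
```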
Example 2.
We consider model y_i = x_i^{0.6} + ε_i, where x_i are generated from U(2, 3) and ε_i from N(0, 1). The output variable y_i has outliers; there are three cases, in which we choose 1%T outliers from B(22, 1/2), 2%T outliers from U(7, 8) and 2%T outliers from N(7, 3), respectively. The results are shown in Table 2.
Example 3.
We consider model y_i = e^{0.5 x_i} + ε_i, where x_i are generated from U(−1, 0) and ε_i from N(0, 1). The output variable y_i has outliers; there are three cases, in which we choose 1%T outliers from B(20, 1/2), 2%T outliers from N(6, 2) and 2%T outliers from U(6, 7), respectively. The results are shown in Table 3.
From Table 1, Table 2 and Table 3, we have the following comments.
(1)
The MSE decreases for all estimators as T becomes large, whether or not there are outliers.
(2)
When there are no outliers, the MSEs of θ̂_MOM, θ̂_NLS and θ̂_EL are basically the same.
(3)
When there are outliers, the MSE of the θ̂_MOM estimator is smaller than the MSEs of the θ̂_NLS and θ̂_EL estimators. From Table 1 and Table 3, there are no significant differences between the MSEs of the θ̂_NLS and θ̂_EL estimators when T is large.
Example 4.
We consider model y_i = 0.8^{x_i} + ε_i, where x_i are generated from P(0.7) and ε_i from N(0, 1). For the power, we use θ + θ_0 with θ_0 ∈ {0.1, 0.2} as the alternative hypothesis. The results are shown in Table 4. MOMEL represents the empirical likelihood test based on the MOM method, and EL represents the hypothesis test based on the EL estimator.
Example 5.
We consider model y_i = x_i^{0.6} + ε_i, where x_i are generated from U(2, 3) and ε_i from N(0, 1). For the power, we use θ + θ_0 with θ_0 ∈ {0.1, 0.15} as the alternative hypothesis. The results are shown in Table 5.
Example 6.
We consider model y_i = e^{0.5 x_i} + ε_i, where x_i are generated from U(−1, 0) and ε_i from N(0, 1). For the power, we use θ + θ_0 with θ_0 ∈ {0.2, 0.3} as the alternative hypothesis. The results are shown in Table 6.
From the simulation results displayed in Table 4, Table 5 and Table 6, we can see that the size of the proposed test is close to 0.05 and its power approaches 1 as T increases. Especially when T is small, the results of MOMEL are significantly better than those of EL. As T increases, MOMEL still performs better in terms of size and power, although the power of both methods tends to one. In summary, our method is better.

5. The Real Data Analysis

In this section, we apply the MOM method to analyze the 2019 GDP data of the top 50 cities in China. Based on the presentation of Zhu et al. [17], there are many methods to test whether there are outliers in the data, such as the 4d test, the 3σ principle, the Chauvenet method, the t-test and the Grubbs test; Sun et al. [18] also introduced the box plot method. Different test methods yield different outliers, so, following the suggestion of Sun et al. [18], we use the box plot shown in Figure 1 to confirm the existence of outliers in the actual data. The outliers are 381.55, 353.71, 269.27, and 236.28 (unit: ten billion RMB).
We also use the 3σ principle to test for outliers; the result shows that the outliers are 381.55 and 353.71. From these two tests, we can judge that there are outliers in this real data set.
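In R, assuming gdp holds the 50 GDP values (unit: ten billion RMB) downloaded from www.askci.com, the two screening rules used above are one-liners:

```r
# Outlier screening for the top-50 GDP data (gdp: numeric vector of length 50).
box_out   <- boxplot.stats(gdp)$out                   # box plot rule (Figure 1)
sigma_out <- gdp[abs(gdp - mean(gdp)) > 3 * sd(gdp)]  # 3-sigma principle
```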
Yin and Du [19] introduced the power-law distribution. To accurately predict the GDP development trend of major cities in China, we fit the power-law curve with the EL method, the MOM method and the NLS method, respectively, where x_i represents the rank of the GDP of the 50 cities in descending order. The dataset is from www.askci.com (accessed on 15 February 2021).
The EL gives the nonlinear regression equation
$$\widehat{GDP}_{EL} = 444.0250 \times x_i^{-0.5176290}.$$
The MOM gives the nonlinear regression equation
$$\widehat{GDP}_{MOM} = 594.1439 \times x_i^{-0.6111023}.$$
The NLS gives the nonlinear regression equation
$$\widehat{GDP}_{NLS} = 443.0247 \times x_i^{-0.5167945}.$$
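The NLS version of this power-law fit can be sketched in R as follows; the starting values are our own guesses, and the EL and MOM fits use the same model, the latter via the grouped procedure of Section 2.

```r
# Power-law fit GDP = a * x^b by nonlinear least squares; rank is 1, ..., 50.
dat <- data.frame(rank = 1:50, gdp = gdp)
fit_nls <- nls(gdp ~ a * rank^b, data = dat, start = list(a = 400, b = -0.5))
coef(fit_nls)   # should be close to a = 443.0247, b = -0.5167945 reported above
```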
In Figure 2, the red line represents the fitting result of the NLS method, the blue line that of the MOM method, the black line that of the EL method, and the yellow points the true values of GDP.
In the actual data, the true values of the parameters are unknown, so we cannot calculate the MSE of the parameters. Instead, we use the Mean Absolute Error (MAE), the average of the absolute errors, defined as
$$MAE = \frac{1}{G}\sum_{i=1}^{G}\big|y_i - \hat{y}_i\big|.$$
Here y_i refers to the true value of GDP and ŷ_i to the value obtained from the fitted nonlinear regression model. The MAE of the EL method is 11.984, the MAE of the NLS method is 12.024, and the MAE of the MOM method is 11.982. Cross-validation is used to examine the forecasting accuracy: we randomly take 40 observations as experimental data and the other 10 as forecasting data, with 1000 independent replications. The MAEs of EL, NLS and MOM are 14.206, 14.271 and 12.242, respectively. These results suggest that MOM is more plausible than NLS and EL.
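A sketch of the MAE and the random cross-validation described above, shown for the NLS fit (the 40/10 split and the 1000 replications follow the text; everything else is our assumption):

```r
# MAE and random cross-validation for the power-law fit (NLS version).
mae <- function(y, yhat) mean(abs(y - yhat))
set.seed(4)
err <- replicate(1000, {
  test <- sample(50, 10)                    # 10 cities held out for forecasting
  fit <- nls(gdp ~ a * rank^b, data = dat[-test, ],
             start = list(a = 400, b = -0.5))
  mae(dat$gdp[test], predict(fit, newdata = dat[test, ]))
})
mean(err)   # compare with the reported NLS cross-validation MAE of 14.271
```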

6. Conclusions

It has been shown that the NLS method is not robust to outliers (Gao and Li [11]). In this paper, we first apply the MOM method to the nonlinear regression model and introduce its theory, giving the theoretical results of consistency and asymptotic normality of the MOM estimator. Second, we propose a new test method based on the empirical likelihood method. Third, we use the MOM method to estimate the parameters of three forms of nonlinear regression models and compare the MSEs of θ̂_NLS, θ̂_MOM and θ̂_EL: Table 1, Table 2 and Table 3 show that the MSE of θ̂_MOM is the smallest, and the sizes and powers in Table 4, Table 5 and Table 6 demonstrate the superiority of the MOM method. Finally, the MOM method is applied to predict the GDP development of Chinese cities; the MAE values show that the prediction of the MOM method is better than that of the NLS method. All in all, the MOM method does not require eliminating outliers: regardless of whether there are outliers in the data, we can use the MOM method to obtain a robust estimate.

Author Contributions

Conceptualization, P.L.; methodology, P.L.; M.Z. and Q.Z.; software, R.Z.; writing—original draft, R.Z.; writing—review and editing, M.Z., R.Z. and Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Pengfei Liu’s research is supported by the National Natural Science Foundation of China (NSFC11501261, NSFC52034007) and the State Scholarship funded by China Scholarship Council (CSC201808320107). Ru Zhang’s research is supported by the Project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions. Qin Zhou’s research is supported by the National Natural Science Foundation of China (NSFC11671178).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

www.askci.com (accessed on 15 February 2021).

Acknowledgments

We thank Shaochen Wang and Wang Zhou for their help, and the reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix, we give the technical proofs of Theorems 1–3.
Lemma A1
(Chernoff's inequality, cf. Vershynin [20], Theorem 2.3.1). Let X_i (i = 1, …, n) be independent Bernoulli random variables with parameters p_i. Consider the sum M_n = Σ_{i=1}^n X_i and its mean μ = E(M_n). For any t > μ, we have
$$P(M_n \ge t) \le e^{-\mu}\Big(\frac{e\mu}{t}\Big)^{t}.$$
Proof of Theorem 1. 
In accordance with condition (I) of Theorem 1 and Lemma 1 of Ivanov [21], for n ≥ N_0 and sufficiently large positive ρ, where the constant c does not depend on n and ρ, we have
$$P\big(\sqrt{n}\,\big|\hat\theta_k^{(j)} - \theta_k\big| > \rho\big) \le c\,\rho^{-s}, \qquad j = 1, \ldots, g.$$
According to Wu [22], the least squares estimate of σ² in group j (j = 1, …, g) is
$$\hat\sigma_n^2 = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - g\big(\hat\theta^{(j)}, x_i\big)\big)^2.$$
According to formula (2) and conditions (II)–(IV), from Theorem 5 of Wu [12] we know that
$$\sqrt{n}\,\big(\hat\theta_q^{(j)} - \theta_q\big) \xrightarrow{d} N\big(0,\ \hat\sigma_n^2 S^{-1}\big), \qquad q = 1, \ldots, k.$$
According to Pinelis [23], there is a constant C_1 such that
$$\sup_{x\in\mathbb{R}}\Big|P\big(\sqrt{n}\,\hat\sigma_n^{-1}S^{1/2}\big(\hat\theta_q^{(j)} - \theta_q\big) \le x\big) - \Phi(x)\Big| \le \frac{C_1}{\sqrt{n}}, \qquad q = 1, \ldots, k,$$
where Φ denotes the cumulative distribution function of the standard normal distribution.
Define the random variables
$$\alpha_{n,j} = \sqrt{n}\,\hat\sigma_n^{-1}S^{1/2}\big(\hat\theta_1^{(j)} - \theta_1\big), \qquad j = 1, \ldots, g.$$
According to formula (A4), we then have
$$\sup_{x\in\mathbb{R}}\big|P(\alpha_{n,j} \le x) - \Phi(x)\big| \le \frac{C_1}{\sqrt{n}}.$$
For each j = 1, …, g, taking x = √n H σ̂_n^{−1} S^{1/2}, we have
$$P\big(\hat\theta_1^{(j)} - \theta_1 \ge H\big) \le \frac{C_1}{\sqrt{n}} + 1 - \Phi\big(\sqrt{n}\,H\,\hat\sigma_n^{-1}S^{1/2}\big)$$
for all H > 0. According to the elementary inequality
$$1 - \Phi\big(\sqrt{n}\,H\,\hat\sigma_n^{-1}S^{1/2}\big) \le e^{-nH^2/(2\hat\sigma_n^2 S^{-1})},$$
whose right-hand side is o(n^{−1/2}) for large n and fixed H > 0, we hence get
$$P\big(\hat\theta_1^{(j)} - \theta_1 \ge H\big) \le \frac{C_2}{2\sqrt{n}}.$$
Similarly, we can get
$$P\big(\hat\theta_1^{(j)} - \theta_1 \le -H\big) \le \frac{C_2}{2\sqrt{n}},$$
where C_2 is a constant that depends on H but not on n, so we have
$$P\big(\big|\hat\theta_1^{(j)} - \theta_1\big| \ge H\big) \le \frac{C_2}{\sqrt{n}}.$$
It is easy to verify that
$$\big|\hat\theta_1^{MOM} - \theta_1\big| \le \mathrm{Median}\big\{\big|\hat\theta_1^{(j)} - \theta_1\big|,\ j = 1,\ldots,g\big\},$$
so we have the conclusion
$$P\big(\big|\hat\theta_1^{MOM} - \theta_1\big| \ge H\big) \le P\big(\mathrm{Median}\big\{\big|\hat\theta_1^{(j)} - \theta_1\big|,\ j=1,\ldots,g\big\} \ge H\big) := P(E).$$
Define the Bernoulli random variables
$$\eta_j = I\big(\big|\hat\theta_1^{(j)} - \theta_1\big| \ge H\big), \qquad j = 1,\ldots,g;$$
then E η_j ≤ C_2 n^{−1/2} by formula (A9). The event E occurs if and only if Σ_{j=1}^g η_j ≥ g/2; hence, applying Lemma A1 with μ = g E η_1 and t = g/2 in the last step,
$$P(E) = P\Big(\sum_{j=1}^{g}\eta_j \ge \frac{g}{2}\Big) \le e^{-gE\eta_1}\big(2e\,E\eta_1\big)^{g/2} \le \big(2e\,C_2\,n^{-1/2}\big)^{g/2} \le C\,n^{-g/5}.$$
This ends the proof of Theorem 1.
For any fixed x, we define the i.i.d. random variables
$$\pi_{n,j}(x) = I(\alpha_{n,j} \le x), \qquad j = 1,\ldots,g,$$
and set
$$p_n(x) = P(\alpha_{n,j} \le x);$$
according to formula (A4),
$$\big|p_n(x) - \Phi(x)\big| = O\big(n^{-1/2}\big)$$
for all real x. The following lemma gives the central limit theorem for the partial sums of π_{n,j}(x). □
Lemma A2.
Suppose n/g → ∞ as g → ∞. Then
$$\sqrt{g}\Big(\frac{1}{g}\sum_{j=1}^{g}\pi_{n,j}(x) - \Phi(x)\Big) \xrightarrow{d} N\big(0,\ \Phi(x)[1-\Phi(x)]\big),$$
and, for fixed x, as g → ∞,
$$\sqrt{g}\Big(\frac{1}{g}\sum_{j=1}^{g}\pi_{n,j}\big(xg^{-1/2}\big) - \frac{1}{2} - \frac{x}{\sqrt{2\pi g}}\Big) \xrightarrow{d} N\Big(0,\ \frac{1}{4}\Big).$$
Proof of Lemma A2. 
For convenience, we write π_{n,j}(x) as π_{n,j}. By independence, for any real t and i = √−1, we have
$$E\exp\Big\{it\sqrt{g}\Big(\frac{1}{g}\sum_{j=1}^{g}\pi_{n,j} - \Phi(x)\Big)\Big\} = \Big(E\,e^{itg^{-1/2}[\pi_{n,1}-\Phi(x)]}\Big)^{g}.$$
Through the Taylor expansion, we have
$$\begin{aligned}
E\,e^{itg^{-1/2}[\pi_{n,1}-\Phi(x)]}
&= p_n e^{itg^{-1/2}(1-\Phi(x))} + (1-p_n)e^{-itg^{-1/2}\Phi(x)}\\
&= 1 + itg^{-1/2}\big[p_n(1-\Phi(x)) - (1-p_n)\Phi(x)\big] - \frac{p_n}{2g}\big[t(1-\Phi(x))\big]^2 - \frac{1-p_n}{2g}\big[t\Phi(x)\big]^2 + o\big(g^{-1}\big)\\
&= 1 - \frac{t^2}{2g}\Phi(x)\big[1-\Phi(x)\big] + o\big(g^{-1}\big),
\end{aligned}$$
where we used the formula |p_n − Φ(x)| = O(n^{−1/2}): when n/g → ∞ and g → ∞,
$$g^{-1/2}\big|p_n(1-\Phi(x)) - (1-p_n)\Phi(x)\big| = g^{-1/2}\big|p_n - \Phi(x)\big| = o\big(g^{-1}\big).$$
So the first conclusion of Lemma A2 follows from formula (A13).
For the second conclusion, the above calculations still hold if we replace x with xg^{−1/2}, noting the fact that
$$\Phi\big(xg^{-1/2}\big) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\int_{0}^{xg^{-1/2}} e^{-u^2/2}\,du = \frac{1}{2} + \frac{x}{\sqrt{2\pi g}} + o\big(g^{-1/2}\big).$$
We can then prove formula (A15) by virtue of Slutsky's theorem. □
Proof of Theorem 2. 
(1) This follows immediately from formula (A4) and the continuous mapping theorem, since the median function is continuous.
(2) We can observe that
$$\sqrt{T}\,\hat\sigma_n^{-1}S^{1/2}\big(\hat\theta_1^{MOM}-\theta_1\big) = \sqrt{g}\,\sqrt{n}\,\hat\sigma_n^{-1}S^{1/2}\big(\hat\theta_1^{MOM}-\theta_1\big) = \sqrt{g}\;\mathrm{Median}\{\alpha_{n,j},\ j=1,\ldots,g\}.$$
We first assume g is odd. For any real x, we have
$$P\big(\sqrt{g}\,\mathrm{Median}\{\alpha_{n,j}\} \le x\big) = P\Big(\sum_{j=1}^{g} I\big(\alpha_{n,j} \le xg^{-1/2}\big) \ge \frac{g+1}{2}\Big) = P\Big(\sqrt{g}\Big\{\frac{1}{g}\sum_{j=1}^{g}\pi_{n,j}\big(xg^{-1/2}\big) - \frac{1}{2} - \frac{x}{\sqrt{2\pi g}}\Big\} \ge -\frac{x}{\sqrt{2\pi}} + O\big(g^{-1/2}\big)\Big),$$
which, by the above lemma, tends to $\Phi\big(\sqrt{2/\pi}\,x\big)$.
If g is even, we can see that
$$P\big(\sqrt{g}\,\mathrm{Median}\{\alpha_{n,j}\} \le x\big) \ge P\Big(\sum_{j=1}^{g} I\big(\alpha_{n,j} \le xg^{-1/2}\big) \ge \frac{g}{2} + 1\Big)$$
and
$$P\big(\sqrt{g}\,\mathrm{Median}\{\alpha_{n,j}\} \le x\big) \le P\Big(\sum_{j=1}^{g} I\big(\alpha_{n,j} \le xg^{-1/2}\big) \ge \frac{g}{2}\Big).$$
The right-hand sides of the above two inequalities both tend to $\Phi\big(\sqrt{2/\pi}\,x\big)$ as g → ∞. □
Proof of Theorem 3. 
Recall that
$$T_{n,j} = I\big(\hat\theta_q^{(j)} \le \theta_q\big), \qquad q = 1,\ldots,k,\ j = 1,\ldots,g,$$
so formula (6) is
$$f(\lambda) = \frac{1}{g}\sum_{j=1}^{g}\frac{T_{n,j}-0.5}{1+\lambda(T_{n,j}-0.5)} = 0.$$
Set L_{n,j} = λ(T_{n,j} − 0.5). Then we have
$$\lambda\tilde{R} = \frac{1}{g}\sum_{j=1}^{g}\frac{\lambda(T_{n,j}-0.5)^2}{1+L_{n,j}} = \frac{1}{g}\sum_{j=1}^{g}\frac{L_{n,j}(T_{n,j}-0.5)}{1+L_{n,j}}$$
and, using f(λ) = 0,
$$\bar{T}_n - 0.5 = \frac{1}{g}\sum_{j=1}^{g}(T_{n,j}-0.5) = \frac{1}{g}\sum_{j=1}^{g}\frac{T_{n,j}-0.5}{1+L_{n,j}} + \frac{1}{g}\sum_{j=1}^{g}\frac{L_{n,j}(T_{n,j}-0.5)}{1+L_{n,j}} = \frac{1}{g}\sum_{j=1}^{g}\frac{L_{n,j}(T_{n,j}-0.5)}{1+L_{n,j}}.$$
So
$$\bar{T}_n - 0.5 = \lambda\tilde{R},$$
where
$$\tilde{R} = \frac{1}{g}\sum_{j=1}^{g}\frac{(T_{n,j}-0.5)^2}{1+L_{n,j}}, \qquad \bar{T}_n = \frac{1}{g}\sum_{j=1}^{g}T_{n,j},$$
$$R = \frac{1}{g}\sum_{j=1}^{g}(T_{n,j}-0.5)^2 = 0.25, \qquad T_g = \max_{1\le j\le g}\big|T_{n,j}-0.5\big| = 0.5.$$
Combining the constraint condition ω_j > 0, we get 1 + L_{n,j} > 0, and
$$\lambda R \le \lambda\tilde{R}\big(1 + \max_{1\le j\le g}L_{n,j}\big) \le \lambda\tilde{R}(1 + \lambda T_g) = \big(\bar{T}_n - 0.5\big)(1 + \lambda T_g),$$
where the last equality follows from formula (A20). So
$$\lambda\big[R - \big(\bar{T}_n - 0.5\big)T_g\big] \le \bar{T}_n - 0.5,$$
and, according to Lemma A2, T̄_n − 0.5 = O_p(g^{−1/2}), so we can get
$$\lambda\big[0.25 - O_p\big(g^{-1/2}\big)\big] = O_p\big(g^{-1/2}\big)$$
and hence
$$\lambda = O_p\big(g^{-1/2}\big).$$
In addition, we know
$$\max_{1\le j\le g}\big|L_{n,j}\big| = O_p\big(g^{-1/2}\big).$$
Expanding formula (6),
$$0 = \frac{1}{g}\sum_{j=1}^{g}\frac{T_{n,j}-0.5}{1+L_{n,j}} = \big(\bar{T}_n-0.5\big) - \lambda R + \frac{1}{g}\sum_{j=1}^{g}\frac{(T_{n,j}-0.5)L_{n,j}^2}{1+L_{n,j}} = \big(\bar{T}_n-0.5\big) - 0.25\lambda + \frac{1}{g}\sum_{j=1}^{g}\frac{(T_{n,j}-0.5)L_{n,j}^2}{1+L_{n,j}}.$$
The final term in formula (A26) above has a norm bounded by
$$\frac{1}{g}\sum_{j=1}^{g}\big|T_{n,j}-0.5\big|^3\,\lambda^2\,\big|1+L_{n,j}\big|^{-1} = O(1)\,\big(O_p\big(g^{-1/2}\big)\big)^2\,O_p(1) = o_p\big(g^{-1/2}\big).$$
Therefore
$$\lambda = R^{-1}\big(\bar{T}_n - 0.5\big) + \beta = 4\big(\bar{T}_n - 0.5\big) + \beta,$$
where β = o_p(g^{−1/2}). Using the Taylor expansion, we can write
$$\log(1+L_{n,j}) = L_{n,j} - \frac{1}{2}L_{n,j}^2 + \eta_j,$$
where, for some finite B > 0 and all 1 ≤ j ≤ g,
$$P\big(|\eta_j| \le B\,|L_{n,j}|^3\big) \to 1$$
as g → ∞ and n → ∞.
Now, we calculate that
$$-2\log R(\theta) = 2\sum_{j=1}^{g}\log(1+L_{n,j}) = 2\sum_{j=1}^{g}\Big(L_{n,j} - \frac{1}{2}L_{n,j}^2 + \eta_j\Big) = 2g\lambda\big(\bar{T}_n-0.5\big) - \frac{1}{4}g\lambda^2 + 2\sum_{j=1}^{g}\eta_j = 4g\big(\bar{T}_n-0.5\big)^2 - \frac{1}{4}g\beta^2 + 2\sum_{j=1}^{g}\eta_j,$$
where the last step substitutes λ = 4(T̄_n − 0.5) + β.
By Lemma A2, we have
$$4g\big(\bar{T}_n - 0.5\big)^2 \xrightarrow{d} \chi_1^2.$$
Notice that
$$\frac{1}{4}g\beta^2 = \frac{1}{4}g\,o_p\big(g^{-1}\big) = o_p(1)$$
and
$$\Big|\sum_{j=1}^{g}\eta_j\Big| \le B\,|\lambda|^3\sum_{j=1}^{g}\big|T_{n,j}-0.5\big|^3 = O_p\big(g^{-3/2}\big)\,O(g) = o_p(1).$$
This completes the proof. □

References

1. Hong, Z. The application of nonlinear regression model to the economic system prediction. J. Jimei Inst. Navig. 1996, 4, 48–52.
2. Wang, D.; Jiang, D.; Cheng, S. Application of nonlinear regression model to detect the thickness of protein layer. J. Biophys. 2000, 16, 33–74.
3. Chen, H.; Wang, J.; Zhang, H. Application of nonlinear regression analysis in establishing price model of ground-to-air missile. J. Abbr. 2005, 4, 77–79.
4. Archontoulis, S.V.; Miguez, F.E. Nonlinear regression models and applications in agricultural research. Agron. J. 2015, 105, 1–13.
5. Alon, N.; Matias, Y.; Szegedy, M. The space complexity of approximating the frequency moments. J. Comput. Syst. Sci. 1999, 58, 137–147.
6. Lecué, G.; Lerasle, M. Robust machine learning by median-of-means: Theory and practice. Ann. Stat. 2017, 32, 4711–4759.
7. Lecué, G.; Lerasle, M.; Mathieu, T. Robust classification via MOM minimization. Mach. Learn. 2018, 32, 1808–1837.
8. Zhang, Y.; Liu, P. Median-of-means approach for repeated measures data. Commun. Stat. Theory Methods 2020, 1–10.
9. Radchenko, P.P. Nonlinear least-squares estimation. J. Multivar. Anal. 2006, 97, 548–562.
10. Ding, X.; Xu, L.; Lin, J. Empirical likelihood diagnosis of nonlinear regression model. Chin. J. Appl. Math. 2012, 4, 693–702.
11. Gao, S.; Li, X. Analysis on the robustness of least squares method. Stat. Decis. 2006, 15, 125–126.
12. Wu, C.-F. Asymptotic theory of nonlinear least squares estimation. Ann. Stat. 1981, 9, 501–513.
13. Emilien, J.; Gábor, L.; Roberto, I.O. Sub-Gaussian estimators of the mean of a random vector. Ann. Stat. 2017, 47, 440–451.
14. Owen, A.B. Empirical likelihood ratio confidence intervals for a single functional. Biometrika 1988, 75, 237–249.
15. Jiang, Y. Empirical Likelihood Inference of Nonlinear Regression Model Parameters. Master's Thesis, Beijing University of Technology, Beijing, China, 2005.
16. Ratkowski, H.Z. Nonlinear Regression Model: A Unified Practical Method; Nanjing University Press: Nanjing, China, 1986; pp. 12–25.
17. Zhu, J.; Bao, Y.; Li, C. Discussion on data outlier test and processing method. Univ. Chem. 2018, 33, 58–65.
18. Sun, X.; Liu, Y.; Chen, W.; Jia, Z.; Huang, B. The application of box and plot method in the outlier inspection of animal health data. China Anim. Quar. 2010, 27, 66–68.
19. Yin, C.; Du, J. The collision theory reaction rate coefficient for power-law distributions. Phys. A Stat. Mech. Its Appl. 2014, 407, 119–127.
20. Vershynin, R. High-Dimensional Probability (An Introduction with Applications in Data Science); Cambridge University Press: Cambridge, UK, 2018; pp. 70–97.
21. Ivanov, A.V. An asymptotic expansion for the distribution of the least squares estimator of the nonlinear regression parameter. Theory Probab. Appl. 1977, 21, 557–570.
22. Wu, Q. Asymptotic normality of least squares estimation in nonlinear models. J. Guilin Inst. Technol. 1998, 18, 394–400.
23. Pinelis, I.; Molzon, R. Optimal-order bounds on the rate of convergence to normality in the multivariate delta method. Electron. J. Stat. 2016, 10, 1001–1063.
Figure 1. Box plot of the top 50 data of GDP of China in 2019.
Figure 2. Fitting result of the top 50 data of GDP of China in 2019.
Table 1. Mean Square Error (MSE) for θ̂_NLS, θ̂_MOM and θ̂_EL in Example 1 (values ×100).

        No outliers           1% from B(20,1/2)     2% from U(7,8)        2% from N(6,2)
T       EL     MOM    NLS     EL     MOM    NLS     EL     MOM    NLS     EL     MOM    NLS
100     1.617  1.375  1.329   2.208  2.020  2.081   2.321  1.969  2.023   2.065  1.790  1.792
200     0.650  0.669  0.649   1.230  1.071  1.229   2.000  1.727  1.753   1.610  1.486  1.611
300     0.415  0.421  0.414   1.011  0.867  1.012   1.798  1.714  1.799   1.157  1.058  1.158
400     0.322  0.328  0.321   0.830  0.717  0.831   1.351  1.178  1.351   1.134  1.035  1.135
500     0.268  0.274  0.267   0.779  0.628  0.780   1.256  1.104  1.256   1.129  1.002  1.128
600     0.212  0.214  0.212   0.729  0.578  0.729   1.145  0.989  1.146   1.036  0.874  1.036
700     0.173  0.174  0.173   0.697  0.554  0.698   1.195  0.979  1.196   1.015  0.848  1.014
800     0.162  0.163  0.161   0.667  0.486  0.668   1.119  0.976  1.120   1.016  0.828  1.017
900     0.141  0.142  0.141   0.637  0.448  0.638   1.113  0.891  1.113   1.007  0.832  1.008
1000    0.128  0.129  0.127   0.622  0.441  0.623   1.083  0.865  1.083   0.996  0.809  0.997
Table 2. MSE for θ̂_NLS, θ̂_MOM and θ̂_EL in Example 2 (values ×100).

        No outliers           1% from B(22,1/2)     2% from N(7,3)        2% from U(7,8)
T       EL     MOM    NLS     EL     MOM    NLS     EL     MOM    NLS     EL     MOM    NLS
100     0.380  0.384  0.381   0.626  0.598  0.627   0.822  0.809  0.823   0.606  0.587  0.607
200     0.189  0.193  0.190   0.481  0.455  0.480   0.569  0.554  0.570   0.596  0.590  0.597
300     0.126  0.127  0.126   0.414  0.399  0.415   0.525  0.522  0.526   0.578  0.563  0.577
400     0.100  0.100  0.099   0.379  0.344  0.380   0.472  0.465  0.472   0.547  0.544  0.546
500     0.078  0.079  0.077   0.370  0.328  0.369   0.447  0.428  0.447   0.533  0.515  0.533
600     0.061  0.065  0.063   0.346  0.301  0.347   0.458  0.415  0.459   0.501  0.494  0.501
700     0.055  0.056  0.055   0.337  0.296  0.336   0.424  0.401  0.425   0.498  0.492  0.498
800     0.049  0.049  0.048   0.336  0.278  0.336   0.436  0.403  0.437   0.496  0.472  0.495
900     0.042  0.044  0.042   0.342  0.276  0.341   0.410  0.369  0.410   0.490  0.464  0.489
1000    0.038  0.042  0.038   0.332  0.278  0.333   0.420  0.387  0.420   0.487  0.455  0.487
Table 3. MSE for θ̂_NLS, θ̂_MOM and θ̂_EL in Example 3 (values ×100).

        No outliers           1% from B(20,1/2)     2% from N(6,2)          2% from U(6,7)
T       EL     MOM    NLS     EL     MOM    NLS     EL      MOM    NLS      EL     MOM    NLS
100     6.700  6.890  6.701   8.559  8.197  8.666   10.502  9.496  10.546   8.876  8.687  8.928
200     3.341  3.351  3.342   5.970  5.446  5.971   8.096   7.911  8.178    7.500  7.438  7.501
300     2.200  2.239  2.201   5.482  5.067  5.483   7.055   6.742  7.088    6.824  6.652  6.825
400     1.565  1.624  1.566   5.102  4.555  5.103   6.673   6.209  6.785    6.528  6.305  6.529
500     1.263  1.295  1.264   4.645  4.188  4.646   6.574   6.045  6.575    6.244  6.111  6.245
600     1.158  1.175  1.159   4.497  3.675  4.498   6.399   5.909  6.400    6.219  6.077  6.220
700     0.831  0.844  0.831   4.248  3.492  4.249   6.290   5.817  6.292    6.101  5.970  6.102
800     0.802  0.815  0.803   4.336  3.486  4.337   6.356   5.664  6.357    6.067  5.691  6.068
900     0.731  0.736  0.732   4.138  3.162  4.139   6.206   5.411  6.207    5.858  5.486  5.859
1000    0.663  0.669  0.664   3.628  2.764  3.629   6.336   5.312  6.336    5.385  4.998  5.386
Table 4. Size and power in Example 4.

        Size            Power (θ_0 = 0.1)    Power (θ_0 = 0.2)
T       MOMEL   EL      MOMEL   EL           MOMEL   EL
100     0.054   0.060   0.629   0.161        0.852   0.584
200     0.051   0.045   0.713   0.237        0.946   0.847
300     0.059   0.062   0.764   0.368        0.987   0.966
400     0.047   0.060   0.828   0.498        0.996   0.987
500     0.050   0.040   0.869   0.578        1.000   0.996
600     0.055   0.047   0.900   0.670        1.000   0.999
700     0.052   0.048   0.920   0.728        1.000   0.999
800     0.050   0.045   0.936   0.779        1.000   1.000
900     0.050   0.046   0.953   0.833        1.000   1.000
1000    0.046   0.040   0.961   0.882        1.000   1.000
Table 5. Size and power in Example 5.

        Size            Power (θ_0 = 0.1)    Power (θ_0 = 0.15)
T       MOMEL   EL      MOMEL   EL           MOMEL   EL
100     0.057   0.065   0.762   0.322        0.896   0.524
200     0.050   0.046   0.888   0.556        0.974   0.827
300     0.054   0.043   0.939   0.729        0.995   0.970
400     0.051   0.044   0.972   0.835        0.997   0.991
500     0.055   0.047   0.988   0.919        0.999   0.998
600     0.056   0.047   0.995   0.949        1.000   0.999
700     0.047   0.044   0.998   0.966        1.000   1.000
800     0.056   0.043   1.000   0.985        1.000   1.000
900     0.050   0.048   1.000   0.991        1.000   1.000
1000    0.051   0.046   1.000   0.996        1.000   1.000
Table 6. Size and power in Example 6.

        Size            Power (θ_0 = 0.2)    Power (θ_0 = 0.3)
T       MOMEL   EL      MOMEL   EL           MOMEL   EL
100     0.063   0.072   0.577   0.144        0.663   0.190
200     0.058   0.045   0.640   0.183        0.754   0.290
300     0.047   0.040   0.698   0.215        0.834   0.398
400     0.051   0.049   0.768   0.291        0.885   0.501
500     0.055   0.047   0.799   0.335        0.892   0.577
600     0.057   0.042   0.824   0.373        0.946   0.667
700     0.056   0.043   0.851   0.470        0.957   0.725
800     0.048   0.049   0.859   0.472        0.969   0.801
900     0.055   0.042   0.893   0.540        0.983   0.852
1000    0.051   0.057   0.920   0.607        1.000   0.880