Article

Optimal Estimation of Large Functional and Longitudinal Data by Using Functional Linear Mixed Model

1
Graduate School of Engineering Science, Osaka University, Osaka 560-0043, Japan
2
Department of Population and Quantitative Health Science, Case Western Reserve University, Cleveland, OH 44106, USA
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4322; https://doi.org/10.3390/math10224322
Submission received: 16 October 2022 / Revised: 7 November 2022 / Accepted: 12 November 2022 / Published: 17 November 2022
(This article belongs to the Special Issue Advances of Functional and High-Dimensional Data Analysis)

Abstract
The estimation of large functional and longitudinal data, which refers to the estimation of mean function, estimation of covariance function, and prediction of individual trajectory, is one of the most challenging problems in the field of high-dimensional statistics. Functional Principal Components Analysis (FPCA) and Functional Linear Mixed Model (FLMM) are two major statistical tools used to address the estimation of large functional and longitudinal data; however, the former suffers from a dramatically increasing computational burden while the latter does not have clear asymptotic properties. In this paper, we propose a computationally effective estimator of large functional and longitudinal data within the framework of FLMM, in which all the parameters can be automatically estimated. Under certain regularity assumptions, we prove that the mean function estimation and individual trajectory prediction reach the minimax lower bounds of all nonparametric estimations. Through numerous simulations and real data analysis, we show that our new estimator outperforms the traditional FPCA in terms of mean function estimation, individual trajectory prediction, variance estimation, covariance function estimation, and computational effectiveness.

1. Introduction

Functional Data Analysis (FDA) is a field of statistics that examines data to provide information about curves, surfaces, or anything else that varies along a continuum [1]. The concrete continuum of these functions can be time, spatial location, wavelength, and probability, with applications to biomechanics, biomedicine, ecology, epidemiology, and neurology, among many others [2,3]. Functional data can be classified into common design data and independent design data, where the sampling locations of all individuals are the same under the former setting, while the sampling locations are individual-wise under the latter setting [4]. Longitudinal data can be regarded as a specific type of functional data where the underlying continuum is just time [5]. In the field of longitudinal data analysis, the counterparts of common design and independent design are called balanced longitudinal data and unbalanced longitudinal data, respectively.
The coronavirus disease 2019 (COVID-19) data are a typical example of balanced longitudinal data: they record the daily infections (log-transformed) of the COVID-19 pandemic in 51 states of the United States (US) from 16 March 2020 to 14 August 2020 (Johns Hopkins COVID-19 Case Tracker: https://www.kaggle.com/datasets/thecansin/johns-hopkins-covid19-case-tracker?resource=download). Figure 1 shows the profiles of daily new infections of six US states and the corresponding trajectory predictions obtained with the R package mgcv [6]. From this figure, it is easy to see some basic features of large longitudinal data: (1) functionality, the daily infections of each state are likely smooth curves of time; (2) heterogeneity, different individuals usually have different trends over time; and (3) high-dimensionality, the number of observations ( T = 152 ) is much larger than the number of individuals ( N = 51 ). How to deal with this kind of large functional and longitudinal data is one of the most challenging problems in the field of high-dimensional statistics.
Functional Principal Components Analysis (FPCA) and Functional Linear Mixed Model (FLMM) are two common approaches that estimate the dominant modes of variation of a sample of random trajectories around an overall mean function. FPCA and FLMM share the same principle to estimate the mean function, but apply totally different strategies to estimate the covariance function and predict the individual trajectory. Specifically, FPCA first estimates the two-dimensional covariance function, then yields the eigenfunctions through PCA, and finally predicts the individual trajectory with the estimated eigenfunctions. In contrast, FLMM parameterizes the individual trajectories through basis functions; in particular, it specifies a Gaussian prior distribution on the coefficients of basis functions, then estimates the covariance matrix of these random coefficients using REstricted Maximum Likelihood (REML) [7], and eventually predicts the individual trajectories through the Best Linear Unbiased Prediction (BLUP) [8] and yields the covariance function of random curves using a weighted sum of inner products of basis functions. Commonly-employed nonparametric techniques for FPCA include the kernel method and local polynomial modeling ([9,10,11]) and smoothing splines ([12,13,14,15,16]), while frequently-used basis expansion functions include B-splines ([17,18,19,20]), wavelets ([21,22,23]), a combination of linear mixed-effects modeling and local polynomial smoothing ([24,25]), etc. See more aspects of functional data in the following monographs ([1,26]) and review papers ([3,27,28]).
One of the greatest advantages of FPCA may be its clear asymptotic properties. Under mild regularity conditions, Yao et al. [11] showed the convergence rates of the estimates of the mean function, covariance function, variance, eigenfunctions, and eigenvalues. Li and Hsing [10] further derived the strong uniform convergence rates of these estimates. Cai and Yuan [4] investigated the minimax risks of the mean function and covariance function estimates, and Cai and Yuan [29] showed the minimax risk of the mean function estimate in functional linear regression. However, because FPCA needs to estimate a two-dimensional covariance surface, its computational burden dramatically increases as the number of observations T rises. In addition, since most FPCA methods employ local polynomial modeling [30] as the estimator, it is very difficult in practice to select the optimal bandwidths of the two-dimensional kernel function. On the other hand, FLMM suffers from more problems compared with FPCA, preventing its practical application in the field of FDA. First, FLMM requires estimating the covariance matrix of the basis expansion coefficients (which are treated as normal variables in LMM), but the estimation of this covariance matrix is usually unreliable if the number of employed bases is large. James et al. [17] provided a dimension reduction to estimate this covariance matrix, which resulted in a more complex estimator. Besides, the asymptotic properties of FLMM are difficult to study. Proper regularity conditions and proof techniques are lacking for FLMM.
In this paper, we propose a computationally effective estimator of large functional and longitudinal data by using FLMM, in which all the parameters can be automatically estimated. We also investigate the large sample property of the proposed automatic and flexible estimation of large functional data. Under mild regularity conditions, we prove that the estimation error of the mean curve estimator is bounded by $O(N^{-1} + T^{-2s})$ and the prediction error of the individual trajectory estimator is bounded by $O(T^{-2s/(2s+1)})$, where $s > 1$ is a constant governing the smoothness of the mean function and individual trajectory. In particular, both of these convergence rates reach the minimax lower bounds of the mean function estimation and individual trajectory prediction derived by Cai and Yuan [4], meaning that our estimation enjoys minimax efficiency. To the best of our knowledge, our work is the first to investigate the large sample property of FLMM estimation when both T and N are diverging.
The rest of the paper is organized as follows. In Section 2, we present the primal representation of FLMM to analyze the functional data. In Section 3, the new estimator of large balanced longitudinal data and its large sample property are illustrated. In Section 4, we exhibit the simulation studies. In Section 5, we apply our method to study the COVID-19 data. The concluding remarks are summarized in Section 6 and all the technical proofs of the theorems are relegated to the Appendix.

2. Preliminary

In this section, we briefly introduce functional data and the two major statistical methods used to analyze them, namely FPCA and FLMM.

2.1. Settings

For a vector $a = (a_j)_{p\times 1}$, $\|a\|_q = (\sum_{j=1}^{p}|a_j|^q)^{1/q}$ with $q \in [0, \infty]$. For a symmetric matrix $A = (A_{ij})_{p\times p}$, $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$, and $\|A\|_q = \max\{\|Aa\|_q : \|a\|_q = 1\}$. Besides, $a_n \asymp b_n$ if there are positive constants $c$ and $C$ such that $c \le a_n/b_n \le C$. In addition, $O(\cdot)$ and $o(\cdot)$ denote the usual deterministic orders of magnitude, while $O_P(\cdot)$ and $o_P(\cdot)$ mean that the corresponding relationships hold with probability tending to 1.

2.2. Functional Data Model

Let X ( t ) be a second-order stochastic process defined in the compact interval T , which is usually set as T = [ 0 , 1 ] for convenience. The covariance function of X ( t ) is given by
$$C(s,t) = E\left[\{X(s) - E(X(s))\}\{X(t) - E(X(t))\}\right], \quad (s,t) \in \mathcal{T}\times\mathcal{T}.$$
Besides, let X 1 ( t ) , , X N ( t ) be N independent realizations of X ( t ) , { t 1 , , t T } be T sampling locations of the realizations, and the observations of X i ( t j ) be
Y i j = X i ( t j ) + ε i j , ( i , j ) { 1 , , N } × { 1 , , T } ,
where $\varepsilon_{ij}$ are independent and identically distributed (IID) random errors with mean 0 and variance $\sigma^2$. In addition, the sampling locations are considered as fixed numbers in $\mathcal{T}$, and $\{X_i(t_j)\}$ and $\{\varepsilon_{ij}\}$ are two mutually independent groups. In a more general case, the sampling locations can vary from individual to individual, i.e.,
Y i j = X i ( t i j ) + ε i j , ( i , j ) { 1 , , N } × { 1 , , T i } ,
where $t_{ij}$ is a random sampling location and $T_i$ is the individual-specific number of sampling locations. In the literature, the setting of functional data subject to (2) is called common design, while the setting subject to (3) is termed independent design. In longitudinal data analysis, where t is just time, the data subject to (2) are called balanced longitudinal data or panel data, while the data subject to (3) are called unbalanced longitudinal data.
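To make the common-design setting concrete, the following minimal sketch (not from the paper; the trajectory form and all numeric values are illustrative assumptions) generates balanced observations of the form (2) on a shared grid.

```python
import numpy as np

# A minimal sketch of generating balanced longitudinal data Y_ij = X_i(t_j) + eps_ij
# on a common grid t_1, ..., t_T in [0, 1]; the trajectory form is illustrative only.
rng = np.random.default_rng(0)
N, T, sigma = 51, 152, 1.0                 # sizes chosen to mirror the COVID-19 example
t = np.linspace(0, 1, T)                   # common sampling locations (balanced design)

def X(t, a, b):
    """One smooth individual trajectory (hypothetical functional form)."""
    return a * np.sin(2 * np.pi * t) + b * t

A = rng.normal(1.0, 0.3, size=N)           # individual-specific random effects
B = rng.normal(2.0, 0.5, size=N)
Y = np.array([X(t, A[i], B[i]) for i in range(N)]) + rng.normal(0, sigma, size=(N, T))
print(Y.shape)                              # (N, T): every subject shares the same grid
```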

2.3. Methodologies of FPCA and FLMM

FPCA is perhaps the most frequently used technique for dealing with functional data, in which the Karhunen–Loève representation of the stochastic process plays a central role. Specifically, let the covariance function $C(s,t)$ satisfy the definition of a Mercer kernel, i.e.,
$$\sum_{i=1}^{n}\sum_{j=1}^{n} C(s_i, t_j)\,c_i c_j \ge 0$$
for every finite sequence of points $(s_i, t_j) \in \mathcal{T}\times\mathcal{T}$ and all choices of real numbers $c_1, \ldots, c_n$. Then, according to Mercer's theorem, there exists a series of eigenfunctions $\{\phi_k(t)\}$ and a series of non-increasing eigenvalues $\{\lambda_k\}$ satisfying $\sum_{k=1}^{\infty}\lambda_k < \infty$ such that
$$C(s,t) = \sum_{k=1}^{\infty}\lambda_k\,\phi_k(t)\phi_k(s), \quad (s,t)\in\mathcal{T}\times\mathcal{T}.$$
In particular, under the condition of Mercer kernel, the Karhunen–Loève representation theory is able to guarantee that the ith realization of X ( t ) can be expressed as
$$X_i(t) = \mu(t) + \sum_{k=1}^{\infty}\xi_{ik}\,\phi_k(t), \quad t\in\mathcal{T},$$
where $\mu(t)$ is termed the mean function and $\xi_{i1}, \xi_{i2}, \ldots$ are uncorrelated random coordinates with mean 0 and variance $E(\xi_{ik}^2) = \lambda_k$.
FLMM deals with the functional data in a more direct way than FPCA. Specifically, FLMM expresses each random function by some known basis functions:
$$X_i(t) = \sum_{k=1}^{p}\Psi_k(t)\,b_{ik} + \delta_i(t),$$
where { Ψ k ( t ) } are a set of basis functions, { b i k } are the corresponding random coordinates, and δ i ( t ) is the approximation bias. In practice, the basis functions can be the Tikhonov basis (as defined in T = [ 0 , 1 ] ):
$$\Psi_k(t) = \sqrt{2}\,\cos((k-1)\pi t).$$
Alternatively, B-splines [31] are common bases that are generated in an iterative procedure. If δ i ( t ) is negligible, the uncertainty of random function X i ( t ) is almost described by the random coordinates { b i k } . In the literature, it is routine to assume
$$\begin{pmatrix} b_{i1} \\ b_{i2} \\ \vdots \\ b_{ip} \end{pmatrix} \sim N\left( \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{pmatrix},\; \begin{pmatrix} \Gamma_{11} & \Gamma_{12} & \cdots & \Gamma_{1p} \\ \Gamma_{21} & \Gamma_{22} & \cdots & \Gamma_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ \Gamma_{p1} & \Gamma_{p2} & \cdots & \Gamma_{pp} \end{pmatrix} \right),$$
where β 1 , , β p are the mean parameters and Γ = ( Γ i j ) p × p is a covariance matrix. As a result,
$$\mu(t) \approx \sum_{k=1}^{p}\Psi_k(t)\beta_k, \qquad C(s,t) \approx \sum_{k=1}^{p}\sum_{h=1}^{p}\Gamma_{kh}\,\Psi_k(s)\Psi_h(t).$$
In addition, if $\{\Psi_k(t)\}$ are chosen as the eigenfunctions of the covariance function $C(s,t)$, the covariance matrix $\Gamma$ reduces to a diagonal matrix with the eigenvalues $\{\lambda_k\}$ on its diagonal. On the other hand, FPCA can be turned into an FLMM if the eigenfunctions are rotated for better practical interpretation, following an idea similar to factor analysis [32]. Generally, since $C(s,t)$ is never known in advance, FLMM employs certain commonly-used basis functions and estimates $\Gamma$ empirically.
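As a small illustration of this representation (our sketch, not the authors' code; the coefficients beta and Gamma below are hypothetical, and the constant-plus-cosine normalization follows the simulation settings in Section 4.1), one can evaluate the basis matrix on a grid and form the implied mean function and covariance surface:

```python
import numpy as np

# Evaluate a Tikhonov-type (constant + cosine) basis on a grid and form the implied
# FLMM mean function and covariance surface; beta and Gamma are hypothetical values.
t = np.linspace(0, 1, 100)
K = 4
Psi = np.column_stack([np.ones_like(t)] +
                      [np.sqrt(2) * np.cos(k * np.pi * t) for k in range(1, K)])

beta = np.array([2.0, 1.0, 0.5, 0.2])     # illustrative mean coefficients beta_k
Gamma = np.diag([1.0, 0.5, 0.25, 0.1])    # illustrative covariance of the random coordinates

mu = Psi @ beta                           # mu(t) ~ sum_k Psi_k(t) beta_k
C = Psi @ Gamma @ Psi.T                   # C(s,t) ~ sum_{k,h} Gamma_kh Psi_k(s) Psi_h(t)
print(mu.shape, C.shape)                  # (100,), (100, 100)
```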

2.4. Comparison between FPCA and FLMM

FLMM is more efficient than FPCA in terms of both estimation and prediction. Due to space limitations, we directly give the conclusions here and present the detailed estimation procedures of FPCA and FLMM in Appendix B.
We compare the two methods from two aspects: estimation and prediction. For estimation, FLMM does not require estimating a two-dimensional covariance surface $C(s,t)$, which is the most time-consuming component of FPCA. Besides, all terms in FLMM can be estimated automatically using mature and effective algorithms, such as the Newton–Raphson algorithm. In contrast, the main approach for FPCA to estimate the mean $\mu(t)$ and covariance $C(s,t)$ is local linear modeling [10], which is based on kernel functions and requires determining three different bandwidths $h_u$, $h_C$, $h_v$ (see details in Appendix B), a task that is very complex in practice. For prediction, FPCA uses all the observations, in addition to the local linear coefficients $\hat{a}_0, \ldots, \hat{c}_1$ generated in the estimation process, to predict the values of $\mu(t)$, $X_{i^*}(t)$, and $C(s,t)$ at any new locations $t_{i^*j}$ (and $s_{i^*j}$), which is extremely costly in implementation. In contrast, predicting these functions is trivial for FLMM: it only needs to generate a new $\Psi_{i^*} = (\Psi(t_{i^*1}), \ldots, \Psi(t_{i^*T_{i^*}}))$ and then predict the values by using $\hat{\beta}$, $\hat{b}_i$, $\hat{\Gamma}$, and $\hat{\sigma}^2$.
However, the most serious problem of FLMM is the lack of statistical theory, in contrast with the numerous theoretical investigations of FPCA. One theoretical burden of FLMM is inherited from the LMM and the generalized linear mixed model (GLMM, [33]): the statistical properties of LMM and GLMM are difficult to analyze. To the best of our knowledge, the first tool to investigate the asymptotic properties of GLMM was given by Vonesh et al. [34], who pointed out that the convergence rate of the mean parameter is of $O(T_{\min}^{-1/2})$, where $T_{\min}$ is the minimum number of observations among all individuals. However, this convergence rate is not ideal because $T_{\min}$ can be less than 10 in many longitudinal datasets. Many works aim to improve the GLMM estimation by removing the high-order bias of the Laplace approximation, such as [35,36,37]; however, these bias corrections make the asymptotic properties even more difficult to analyze.
Another theoretical burden of FLMM is inherited from the basis expansion method, approximating a smooth function through a series of basis functions. As Ruppert et al. [38] once commented: “The literature on inference in smoothing is large and varied. Much of it is in the local polynomial or kernel smoothing context, where theoretical properties are more tractable. Inference for spline-based smoothing is less studied.” Indeed, it is unclear under what conditions the FLMM using the basis expansion method is consistent. In the next section, we improve the traditional FLMM in terms of these two burdens.

3. Optimal Estimation for Large Balanced Longitudinal Data

In this paper, we propose a novel FLMM estimation for large balanced longitudinal data, which is effective in terms of computation and optimal in terms of statistical theory. Our new FLMM estimation for large balanced longitudinal data is motivated by the COVID-19 data, in which the number of sampling locations (i.e., the dates from 16 March 2020 to 14 August 2020) is much larger than the number of individuals (i.e., the 50 states plus the District of Columbia). The traditional estimation of FLMM may encounter numerical instability and become extremely time-consuming because the complexity of the REML estimator dramatically increases with the dimension of the covariance matrix $\Sigma_y$. In contrast, our new estimator for the variance components remains efficient and stable even when the dimension of $\Sigma_y$ is very high. Note that under the setting of balanced longitudinal data, the sampling locations are the same for all individuals, i.e., $t = (t_1, \ldots, t_T)$, so the covariance matrix $\Sigma_{y_i}$ becomes a universal one, $\Sigma_y$, and the basis function matrix $\Psi_i$ reduces to a universal matrix $\Psi$ for all individuals.

3.1. Novel Estimating Procedure

The biggest difference between our new estimation and the traditional one is that we will use different numbers of basis functions to estimate the mean function and the individual trajectory. In order to guarantee both the mean function estimation and individual trajectory prediction meet the optimal convergence rates, we approximate the mean function and individual trajectory as follows. Specifically:
$$\mu(t) \approx \sum_{k=1}^{q}\Psi_k(t)\beta_k, \qquad X_i(t) \approx \mu(t) + \sum_{k=1}^{p}\Psi_k(t)c_{ik},$$
where $1 \le p \le q \le T$. In matrix form, we have
$$\mu \approx \Psi_q\beta, \qquad X_i \approx \mu + \Psi_p c_i,$$
where $\Psi_q = (\Psi_k(t_j))_{T\times q}$, $\Psi_p = (\Psi_k(t_j))_{T\times p}$, $\beta = (\beta_1, \ldots, \beta_q)^{\top}$, and $c_i = (c_{i1}, \ldots, c_{ip})^{\top}$. Here, the vector of random coordinates $c_i$ is considered to follow a normal distribution
$$c_i \sim N(0, \Gamma),$$
where Γ = ( Γ j k ) is a ( p × p ) positive definite symmetric matrix. It is used to describe the local variability of individual trajectory X i ( t ) beyond the mean function μ ( t ) .
The first step of our estimator is estimating the mean coefficient β :
$$\hat{\beta} = \arg\min_{\beta}\left\{ \sum_{i=1}^{N}(Y_i - \Psi_q\beta)^{\top}\hat{\Sigma}_y^{-1}(Y_i - \Psi_q\beta) + \gamma\,\beta^{\top}Q\beta \right\} = \left(N\Psi_q^{\top}\hat{\Sigma}_y^{-1}\Psi_q + \gamma Q\right)^{-1}\Psi_q^{\top}\hat{\Sigma}_y^{-1}\sum_{i=1}^{N}Y_i,$$
where $\hat{\Sigma}_y = \hat{\sigma}^2 I_T + \Psi_p\hat{\Gamma}\Psi_p^{\top}$, $\gamma$ is a smoothing parameter, and $Q$ is a known positive semi-definite symmetric matrix ensuring
$$\int_{t\in\mathcal{T}}[\mu''(t)]^2\,dt \approx \beta^{\top}Q\beta.$$
Compared with the traditional FLMM, which estimates $\mu(t)$ by (A51), we add a ridge penalty $\beta^{\top}Q\beta$ to control the smoothness of $\mu(t)$. If $\Psi(t)$ is chosen as the Tikhonov bases, then $Q$ is a diagonal matrix with $j$th diagonal entry $Q_{jj} = 2((j-1)\pi)^4$. If $\Psi(t)$ is chosen as the B-splines, then $Q = D_2^{\top}D_2$, where the expression of $D_2$ can be found in Eilers and Marx [31]. In addition, the covariance matrix of $\hat{\beta}$ is given by
$$\mathrm{cov}(\hat{\beta}) = \left(N\Psi_q^{\top}\hat{\Sigma}_y^{-1}\Psi_q + \gamma Q\right)^{-1}.$$
When T is large, we recommend the Sherman–Morrison–Woodbury formula to yield Σ ^ y 1 :
$$\hat{\Sigma}_y^{-1} = \frac{1}{\hat{\sigma}^2}\left\{ I_T - \Psi_p\left(\Psi_p^{\top}\Psi_p + \hat{\sigma}^2\hat{\Gamma}^{-1}\right)^{-1}\Psi_p^{\top} \right\}.$$
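The following short numerical check (ours, with arbitrary sizes and randomly generated inputs) verifies this Woodbury-type identity and shows why it is cheap: only a $(p\times p)$ linear system needs to be solved.

```python
import numpy as np

# Quick numerical check of the Sherman-Morrison-Woodbury form of Sigma_y^{-1};
# sizes are arbitrary and chosen so that p << T.
rng = np.random.default_rng(0)
T, p, sigma2 = 200, 5, 1.5
Psi_p = rng.normal(size=(T, p))
L = rng.normal(size=(p, p))
Gamma = L @ L.T + np.eye(p)                          # a positive definite covariance
Sigma_y = sigma2 * np.eye(T) + Psi_p @ Gamma @ Psi_p.T
smw = (np.eye(T) - Psi_p @ np.linalg.solve(Psi_p.T @ Psi_p + sigma2 * np.linalg.inv(Gamma),
                                           Psi_p.T)) / sigma2
print(np.allclose(smw, np.linalg.inv(Sigma_y)))      # True: only a (p x p) system is solved
```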
The mean function estimate is then given by $\hat{\mu} = \Psi_q\hat{\beta}$ and its asymptotic covariance matrix is
$$\mathrm{cov}(\hat{\mu}) = \Psi_q\left(N\Psi_q^{\top}\hat{\Sigma}_y^{-1}\Psi_q + \gamma Q\right)^{-1}\Psi_q^{\top}.$$
The confidence band of μ ^ is constructed by using the diagonal elements of cov ( μ ^ ) .
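A minimal sketch (ours; all argument names are our own) of this first step assembles the penalized GLS estimate of $\beta$ in (14), its covariance, and a pointwise 95% band for the mean curve; the inverse of $\hat{\Sigma}_y$ can be supplied by the Woodbury computation sketched above.

```python
import numpy as np

def estimate_mean_function(Y, Psi_q, Sy_inv, Q, gamma):
    """Penalized GLS step for the mean coefficients (cf. Eq. (14)).

    Y      : (N, T) data matrix on a common grid
    Psi_q  : (T, q) basis matrix for the mean function
    Sy_inv : (T, T) inverse of the working covariance Sigma_y (e.g. via Woodbury)
    Q, gamma : roughness penalty matrix and smoothing parameter
    """
    N = Y.shape[0]
    A = N * Psi_q.T @ Sy_inv @ Psi_q + gamma * Q
    beta_hat = np.linalg.solve(A, Psi_q.T @ Sy_inv @ Y.sum(axis=0))
    cov_beta = np.linalg.inv(A)                                  # cov(beta_hat)
    mu_hat = Psi_q @ beta_hat                                    # fitted mean curve on the grid
    band = 1.96 * np.sqrt(np.diag(Psi_q @ cov_beta @ Psi_q.T))   # pointwise 95% half-width
    return beta_hat, mu_hat, band
```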
The second step is to predict c i by using BLUP:
$$\hat{c}_i = \arg\min_{c_i}\left\{ \frac{1}{\hat{\sigma}^2}\|Y_i - \hat{\mu} - \Psi_p c_i\|_2^2 + c_i^{\top}\hat{\Gamma}^{-1}c_i \right\} = \left(\Psi_p^{\top}\Psi_p + \hat{\sigma}^2\hat{\Gamma}^{-1}\right)^{-1}\Psi_p^{\top}(Y_i - \hat{\mu}).$$
This estimator is called the BLUP because $\hat{c}_i$ can be regarded as the mean of the posterior distribution of $c_i$. The prediction of $X_i$ is $\hat{X}_i = \hat{\mu} + \Psi_p\hat{c}_i$. In addition, the asymptotic covariance matrix of $\hat{c}_i$ is
$$\mathrm{cov}(\hat{c}_i) = \hat{\sigma}^2\left(\Psi_p^{\top}\Psi_p + \hat{\sigma}^2\hat{\Gamma}^{-1}\right)^{-1},$$
and the asymptotic covariance matrix of X ^ i is given by
$$\mathrm{cov}(\hat{X}_i) = \Psi_q\left(N\Psi_q^{\top}\hat{\Sigma}_y^{-1}\Psi_q + \gamma Q\right)^{-1}\Psi_q^{\top} + \hat{\sigma}^2\,\Psi_p\left(\Psi_p^{\top}\Psi_p + \hat{\sigma}^2\hat{\Gamma}^{-1}\right)^{-1}\Psi_p^{\top},$$
which is used to construct the confidence band of $\hat{X}_i$.
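A corresponding sketch (ours; names are illustrative) of this second step returns, for one subject, the BLUP of the random coefficients in (19), the fitted trajectory, and the asymptotic covariance of the coefficient predictor.

```python
import numpy as np

def predict_trajectory(Y_i, mu_hat, Psi_p, Gamma_hat, sigma2_hat):
    """BLUP of c_i (cf. Eq. (19)) and the predicted individual trajectory X_i."""
    A = Psi_p.T @ Psi_p + sigma2_hat * np.linalg.inv(Gamma_hat)
    c_hat = np.linalg.solve(A, Psi_p.T @ (Y_i - mu_hat))   # posterior mean of c_i
    X_hat = mu_hat + Psi_p @ c_hat                         # fitted trajectory on the grid
    cov_c = sigma2_hat * np.linalg.inv(A)                  # asymptotic covariance of c_hat
    return c_hat, X_hat, cov_c
```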
The novelty of our new estimating procedure is the estimator of Γ and σ 2 . Specifically, our method employs the Laplace Approximation Marginal Likelihood (LAML) [39] to estimate Γ and σ 2 , which has a much lower computational cost than REML. The objective function of Γ and σ 2 based on LAML is given by
$$\mathcal{L}(\sigma^2,\Gamma) = \sum_{i=1}^{N}\left\{ \frac{1}{\sigma^2}\|Y_i - \hat{\mu} - \Psi_p\hat{c}_i\|_2^2 + \hat{c}_i^{\top}\Gamma^{-1}\hat{c}_i + \log\det\Gamma + \log\det(\sigma^2 I_T) \right\} + \log\det\!\left(\frac{\Psi_p^{\top}\Psi_p}{\sigma^2} + \Gamma^{-1}\right).$$
In this objective function, the first four components come from the joint log-likelihood function of $\{Y_i\}$ and $\{c_i\}$, while the fifth term results from the Laplace approximation of the integral $\int f(Y_i\,|\,c_i)f(c_i)\,dc_i$, where $f(Y_i\,|\,c_i)$ is the density function of $Y_i$ conditioned on $c_i$ and $f(c_i)$ is the density function of $c_i$. Setting the derivative of $\mathcal{L}(\sigma^2,\Gamma)$ with respect to $\sigma^2$ to zero gives
$$\frac{\partial\mathcal{L}(\sigma^2,\Gamma)}{\partial\sigma^2} = 0 \;\Longrightarrow\; 0 = -NT\sigma^2 + \sum_{i=1}^{N}\|Y_i - \hat{\mu} - \Psi_p\hat{c}_i\|_2^2 + \mathrm{trace}\!\left(\Delta(\sigma^2,\Gamma)\,\Psi_p^{\top}\Psi_p\right),$$
where $\Delta(\sigma^2,\Gamma) = \left(\Psi_p^{\top}\Psi_p/\sigma^2 + \Gamma^{-1}\right)^{-1}$. As a result, the fixed-point iteration for $\sigma^2$ is
$$\sigma^{2[t+1]} = \frac{1}{NT}\left\{ \sum_{i=1}^{N}\|Y_i - \hat{\mu} - \Psi_p\hat{c}_i\|_2^2 + \mathrm{trace}\!\left(\Delta(\sigma^{2[t]},\Gamma^{[t]})\,\Psi_p^{\top}\Psi_p\right) \right\},$$
where $\sigma^{2[t]}$ and $\Gamma^{[t]}$ are the current estimates. On the other hand, the derivative of $\mathcal{L}(\sigma^2,\Gamma)$ with respect to $\Gamma^{-1}$ is
$$\frac{\partial\mathcal{L}(\sigma^2,\Gamma)}{\partial\Gamma^{-1}} = -N\Gamma + \sum_{i=1}^{N}\hat{c}_i\hat{c}_i^{\top} + \Delta(\sigma^2,\Gamma).$$
Hence, the fixed-point iteration for Γ is
$$\Gamma^{[t+1]} = \frac{1}{N}\left\{ \sum_{i=1}^{N}\hat{c}_i\hat{c}_i^{\top} + \Delta(\sigma^{2[t]},\Gamma^{[t]}) \right\}.$$
We run these two fixed-point iterations until convergence and regard the stable solutions as $\hat{\sigma}^2$ and $\hat{\Gamma}$. Likewise, the estimator of $\beta$, the predictors of $\{c_i\}$, and the estimators of $\Gamma$ and $\sigma^2$ are updated iteratively, and the stable solutions are taken as the final estimates of $\beta$, $\{c_i\}$, $\Gamma$, and $\sigma^2$.
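The whole procedure can be summarized in a compact sketch (ours, not the authors' implementation; the initial values, convergence rule, and function name are our own choices). It alternates the penalized GLS step for $\beta$ in (14), the BLUP step for the $c_i$ in (19), and the LAML fixed-point updates (24) and (26) for the variance components.

```python
import numpy as np

def fit_flmm_laml(Y, Psi_q, Psi_p, Q, gamma=1.0, n_iter=50, tol=1e-6):
    """Minimal sketch of the estimation loop in Section 3.1 (heuristic starting values)."""
    N, T = Y.shape
    q, p = Psi_q.shape[1], Psi_p.shape[1]
    sigma2, Gamma = np.var(Y), np.eye(p) / T           # heuristic initial values
    beta = np.zeros(q)
    PtP = Psi_p.T @ Psi_p
    for _ in range(n_iter):
        # (i) mean coefficients via penalized GLS, with the Woodbury form of Sigma_y^{-1}
        Sy_inv = (np.eye(T) - Psi_p @ np.linalg.solve(PtP + sigma2 * np.linalg.inv(Gamma),
                                                      Psi_p.T)) / sigma2
        A = N * Psi_q.T @ Sy_inv @ Psi_q + gamma * Q
        beta = np.linalg.solve(A, Psi_q.T @ Sy_inv @ Y.sum(axis=0))
        mu = Psi_q @ beta
        # (ii) BLUP of the random coefficients, one column per subject
        B = PtP + sigma2 * np.linalg.inv(Gamma)
        C_hat = np.linalg.solve(B, Psi_p.T @ (Y - mu).T)          # (p x N)
        resid = (Y - mu).T - Psi_p @ C_hat                        # (T x N)
        # (iii) LAML fixed-point updates (24) and (26) for the variance components
        Delta = np.linalg.inv(PtP / sigma2 + np.linalg.inv(Gamma))
        sigma2_new = (np.sum(resid**2) + np.trace(Delta @ PtP)) / (N * T)
        Gamma_new = (C_hat @ C_hat.T + Delta) / N
        converged = abs(sigma2_new - sigma2) < tol
        sigma2, Gamma = sigma2_new, Gamma_new
        if converged:
            break
    return beta, C_hat, sigma2, Gamma
```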
The reason why LAML is more efficient than REML is illustrated as follows. The objective function of REML with respect to balanced longitudinal data is
$$\mathcal{R}(\vartheta) = \sum_{i=1}^{N}(Y_i - \Psi_q\hat{\beta})^{\top}\Sigma_y^{-1}(Y_i - \Psi_q\hat{\beta}) + \log\det(\Sigma_y) + \log\det\!\left(\Psi_q^{\top}\Sigma_y^{-1}\Psi_q\right),$$
where $\vartheta = (\sigma^2, \mathrm{vec}(\Gamma)^{\top})^{\top}$ and $\Sigma_y = \sigma^2 I_T + \Psi_p\Gamma\Psi_p^{\top}$. Because $\sigma^2$ and $\Gamma$ are both involved in $\Sigma_y$, it is impossible to obtain separate fixed-point iterations for $\sigma^2$ and $\Gamma$ as in LAML. In other words, it seems that the only way to jointly estimate $\sigma^2$ and $\mathrm{vec}(\Gamma)$ is the Fisher-scoring algorithm. The first-order derivative of $\mathcal{R}(\vartheta)$ with respect to $\vartheta_k$ is
$$\frac{\partial\mathcal{R}(\vartheta)}{\partial\vartheta_k} = -\sum_{i=1}^{N}(Y_i - \Psi_q\hat{\beta})^{\top}\Sigma_y^{-1}\frac{\partial\Sigma_y}{\partial\vartheta_k}\Sigma_y^{-1}(Y_i - \Psi_q\hat{\beta}) + \mathrm{trace}\!\left(P\,\frac{\partial\Sigma_y}{\partial\vartheta_k}\right),$$
while the expectation of second-order derivative of R ( ϑ ) with respect to ϑ k and ϑ h , i.e., the ( k , h )th element of the Hessian matrix of R ( ϑ ) , is given by
$$E\!\left[\frac{\partial^2\mathcal{R}(\vartheta)}{\partial\vartheta_k\,\partial\vartheta_h}\right] = \mathrm{trace}\!\left(P\,\frac{\partial\Sigma_y}{\partial\vartheta_k}\,P\,\frac{\partial\Sigma_y}{\partial\vartheta_h}\right),$$
where
$$P = \Sigma_y^{-1} - \Sigma_y^{-1}\Psi_q\left(\Psi_q^{\top}\Sigma_y^{-1}\Psi_q\right)^{-1}\Psi_q^{\top}\Sigma_y^{-1}.$$
Thus, ϑ is updated by
$$\vartheta^{[t+1]} = \vartheta^{[t]} - H\!\left(\vartheta^{[t]}\right)^{-1} g\!\left(\vartheta^{[t]}\right),$$
where the kth element of g ( ϑ [ t ] ) is calculated according to (28) and the ( k , h )th element of H ( ϑ [ t ] ) is yielded by (29).
It is worth pointing out that there is no matrix formula to directly yield $g(\vartheta^{[t]})$ and $H(\vartheta^{[t]})$; their elements have to be calculated one by one according to (28) and (29). This drawback makes the minimization of REML extremely costly because the computational complexities of the matrix inverse $\Sigma_y^{-1}$ and the matrix product $P\,\partial\Sigma_y/\partial\vartheta_k$ are of $O(T^3)$. In particular, as the number of employed basis functions $p$ increases, the number of elements in $H(\vartheta^{[t]})$ is $(p^2+1)^2$, which already exceeds 10,000 when $p$ is just 10. As a result, the total computational complexity of the Hessian matrix calculation is $O(T^3 p^4)$, which is unacceptable for large balanced longitudinal data. In contrast, our separate fixed-point iterations (24) and (26) have only $O(Tp^2)$ computational complexity and are able to guarantee that $\hat{\sigma}^2$ is positive and $\hat{\Gamma}$ is positive definite as long as the initial estimate $\sigma^{2[0]}$ is positive and $\Gamma^{[0]}$ is positive definite. Therefore, our novel estimator is more attractive than the REML-based one for FLMM in high dimensions. Note that our contribution is the fixed-point iterations based on LAML: in the original LAML paper [39], the Newton–Raphson algorithm was employed to minimize the corresponding LAML objective function, which still faces the same numerical problems as REML when T or p is large.

3.2. Regularity Assumptions

Now we turn to investigate the large sample properties of FLMM using our novel estimator. We first give three essential concepts that play central roles in our asymptotic investigation.
Definition 1 
(Sub-Gaussianity). A random error $\varepsilon_i$ with mean $E(\varepsilon_i) = 0$ and variance $\mathrm{var}(\varepsilon_i) = \sigma^2$ is termed a sub-Gaussian variable if, for all $r\in\mathbb{R}$,
$$E\left(\exp(r\varepsilon_i)\right) \le \exp\!\left(\frac{1}{2}\tau^2 r^2\right),$$
where τ > 0 is called the sub-Gaussian parameter.
Definition 2 
(Well-conditioned covariance matrix). A $(p\times p)$ symmetric matrix $\Sigma$ is termed a well-conditioned covariance matrix if there is a constant $c_0$, independent of $p$, such that
$$0 < c_0^{-1} \le \lambda_{\min}(\Sigma) \le \lambda_{\max}(\Sigma) \le c_0 < \infty.$$
Definition 3 
(Sobolev–Hilbert space). A functional space $\mathcal{F}_s$ is called an $s$-order Sobolev–Hilbert space if there is a constant $c_0$ such that, for all $f\in\mathcal{F}_s$,
$$E\int_{t\in\mathcal{T}}\left[f^{(s)}(t)\right]^2 dt \le c_0 < \infty.$$
Sub-Gaussianity is an important concept in high-dimensional statistical analysis, which generalizes the Gaussian distribution to include common continuous variables and all bounded discrete variables [40]. Besides, the concept of well-conditioned covariance is proposed by Bickel and Levina [41], which guarantees that all of its eigenvalues are bounded away from zero and infinity whatever its dimension is. In addition, Sobolev–Hilbert space is the most commonly-used functional space in nonparametric statistical analysis [42]. The minimax lower bound of function estimate in this space is
$$\inf_{\hat{f}}\sup_{f\in\mathcal{F}_s} E\left(\|\hat{f} - f\|_2^2\right) \ge d_0\, N^{-\frac{2s}{2s+1}},$$
where d 0 > 0 is a certain constant and N describes the sample size.
Now we give the following assumptions that facilitate the proofs of theorems.
Assumption 1 
(Sub-Gaussian random error). For all $(i,j)\in\{1,\ldots,N\}\times\{1,\ldots,T\}$, the random errors $\varepsilon_{ij}$ are independently and identically generated from a sub-Gaussian distribution with mean 0, variance $\sigma^2$, and sub-Gaussian parameter $\tau_\epsilon$.
Assumption 2 
(Quasi uniform locations). The sampling locations t 1 , , t T form a quasi uniform sequence defined in T .
Assumption 3 
(Basis functions). For a series of basis functions { Ψ k ( t ) } ,
$$\int_{t\in\mathcal{T}}\Psi_k(t)\,dt = 0, \qquad \int_{t\in\mathcal{T}}[\Psi_k(t)]^2\,dt = 1, \qquad \int_{t\in\mathcal{T}}[\Psi_k(t)]^4\,dt \le c_0 < \infty,$$
for all $k\in\{1,\ldots,p\}$, where $c_0 > 0$ is a constant. Besides, let $\Psi_p(t) = (\Psi_1(t),\ldots,\Psi_p(t))^{\top}$ and
$$\Sigma_{\Psi_p} = \int_{t\in\mathcal{T}}\Psi_p(t)\Psi_p(t)^{\top}dt.$$
Then Σ Ψ p is a well-conditioned covariance matrix.
Assumption 4 
(Approximation error of mean function). For a mean function $\mu(t)\in\mathcal{F}_s$ and a series of basis functions $\{\Psi_k(t)\}$ satisfying Assumption 3, there exists a unique series of coefficients $\{\beta_k\}$ satisfying $\lim_{q\to\infty}\sum_{k=1}^{q}\beta_k^2 \le c_0 < \infty$ such that
$$\int_{t\in\mathcal{T}}\left\{\mu(t) - \sum_{k=1}^{q}\Psi_k(t)\beta_k\right\}^2 dt \le c_0\, q^{-2s},$$
where $c_0 > 0$ is a certain constant.
Assumption 5 
(Approximation error of individual function). For any individual trajectory $X_i(t)\in\mathcal{F}_s$ and a series of basis functions $\{\Psi_k(t)\}$ satisfying Assumption 3, there exists a unique series of random coefficients $\{c_{ik}\}$ satisfying $\lim_{p\to\infty}\sum_{k=1}^{p}E(c_{ik}^2) \le c_0 < \infty$ such that
$$E\int_{t\in\mathcal{T}}\left\{X_i(t) - \mu(t) - \sum_{k=1}^{p}\Psi_k(t)c_{ik}\right\}^2 dt \le c_0\, p^{-2s}, \qquad i\in\{1,\ldots,N\},$$
where $c_0 > 0$ is a certain constant.
Assumption 6 
(Distribution of random coefficients). The random coefficient $c_{ik}$ is sub-Gaussian with $E(c_{ik}) = 0$, $\mathrm{var}(c_{ik}) = \Gamma_{kk}$, and sub-Gaussian parameter $\tau_k$, and $c_{ik}$ is independent of $c_{jk}$ for all $i \ne j$. Furthermore, $T\Gamma = (T\Gamma_{jk})_{p\times p}$ is a well-conditioned covariance matrix.
Assumption 1 considers the random errors to be independent and identically distributed (IID) sub-Gaussian variables for all i and j. Assumption 2 requires that the sampling locations $t_1, \ldots, t_T$ can be regarded as a quasi uniform sequence; as a result, the integral of a given function of t can be approximated by averaging the values of this function at these discrete locations. Assumption 3 summarizes some basic properties of the basis functions $\{\Psi_k(t)\}$. Assumptions 4 and 5 describe how accurately a function can be approximated by using p bases. Tsybakov [43] showed that, if the bases are the Tikhonov bases, Assumptions 4 and 5 hold as long as $\mu(t)$ and $X_i(t)$ belong to the Sobolev–Hilbert space $\mathcal{F}_s$. However, for a series of general basis functions, it remains unclear whether Assumptions 4 and 5 still hold. In other words, we assume that the chosen basis functions have properties similar to the Tikhonov bases such that these two assumptions hold. Assumption 6 lists some conditions on the random coefficients. In particular, since
$$\lambda_{\max}(\Psi\Gamma\Psi^{\top}) = O_P(T)\times\lambda_{\max}(\Sigma_{\Psi})\times\lambda_{\max}(\Gamma),$$
$\lambda_{\max}(\Gamma)$ must be $O(T^{-1})$; otherwise the covariance matrix of $Y_i$ diverges and violates some basic assumptions of FDA; see, e.g., assumption (A4) in Yao et al. [11]. Hence, we assume that the covariance matrix of $\sqrt{T}c_i$, i.e., $T\Gamma$, is well-conditioned.

3.3. Large Sample Property

We provide four theorems describing the asymptotic properties of estimators obtained by our new FLMM.
Theorem 1. 
Suppose Assumptions 1–6 hold. If $\hat{\beta}$ is yielded by (14), $\lambda_{\max}(\gamma Q) = O(1)$, and $\hat{\Sigma}_y$ is chosen such that $\lambda_{\max}(\hat{\Sigma}_y^{-1}\Sigma_y) \le c_0 < \infty$ for a certain constant $c_0 > 0$, then
$$\|\hat{\beta} - \beta\|_2^2 = O_P\!\left(\frac{q}{NT} + \frac{1}{q^{2s}}\right).$$
If q = O ( T ) , then the optimal convergence rate of β ^ is
$$\|\hat{\beta} - \beta\|_2^2 = O_P\!\left(\frac{1}{N} + \frac{1}{T^{2s}}\right).$$
Theorem 1 indicates the convergence rate of the mean parameter estimate $\hat{\beta}$, which is a trade-off between the variance term $O_P(q/(NT))$ and the bias term $O(q^{-2s})$. As a corollary,
$$\int_{t\in\mathcal{T}}(\mu(t) - \hat{\mu}(t))^2\,dt \lesssim \int_{t\in\mathcal{T}}\left(\mu(t) - \Psi_q(t)^{\top}\beta\right)^2 dt + \frac{1}{T}\sum_{j=1}^{T}\left(\Psi_q(t_j)^{\top}\{\beta - \hat{\beta}\}\right)^2 = O_P\!\left(\frac{q}{NT} + \frac{1}{q^{2s}}\right),$$
which means that the optimal convergence rate of $\hat{\mu}(t)$ is $O_P(N^{-1} + T^{-2s})$, attained by letting $q = O(T)$. In particular, this convergence rate reaches the minimax lower bound of all nonparametric estimates of $\mu(t)$ [29,43], indicating that our estimate is optimal in terms of minimax efficiency. Besides, our theory does not require $\gamma$ to vanish asymptotically. Indeed, any $\gamma$ such that $\lambda_{\max}(\gamma Q) = O(1)$ is able to guarantee that $\hat{\beta}$ reaches the minimax lower bound.
Theorem 2. 
Suppose Assumptions 1–6 hold. If c ^ i is yielded by (19), then for i = 1 , , N ,
$$\|\hat{c}_i - c_i\|_2^2 = O_P\!\left(\frac{p}{T} + \frac{1}{p^{2s}}\right).$$
If $p = O(T^{\frac{1}{2s+1}})$, then the optimal convergence rate of $\hat{c}_i$ is
$$\|\hat{c}_i - c_i\|_2^2 = O_P\!\left(T^{-\frac{2s}{2s+1}}\right).$$
Theorem 2 shows the convergence rate of the parameter estimate $\hat{c}_i$ in the individual trajectory, which is a trade-off between the variance term $O_P(p/T)$ and the bias term $O(p^{-2s})$. Similarly, the convergence rate of $\hat{X}_i(t)$ satisfies
$$\int_{t\in\mathcal{T}}(X_i(t) - \hat{X}_i(t))^2\,dt \lesssim \int_{t\in\mathcal{T}}\left(X_i(t) - \mu(t) - \Psi_p(t)^{\top}c_i\right)^2 dt + \int_{t\in\mathcal{T}}(\mu(t) - \hat{\mu}(t))^2\,dt + \frac{1}{T}\sum_{j=1}^{T}\left(\Psi_p(t_j)^{\top}\{c_i - \hat{c}_i\}\right)^2 = O_P\!\left(\frac{p}{T} + \frac{1}{p^{2s}}\right),$$
which reaches the minimax lower bound of all nonparametric estimates of $X_i(t)$ [29,43] by letting $p = O(T^{1/(2s+1)})$. In addition, it is easy to see that the optimal $q$ and the optimal $p$ have obviously different orders of magnitude: the former should diverge at the same rate as $T$, while the latter grows much more slowly than $T$. Hence, to reach the optimal convergence rates, we should use different numbers of bases when estimating $\mu(t)$ and when predicting $X_i(t)$. To the best of our knowledge, we are the first to point out this principle. Traditional FLMM methods, such as James et al. [17] and Shi et al. [20], did not use different numbers of bases when estimating the mean function and the individual functions, which may result in less efficient estimates of both in practice.
Theorem 3. 
Suppose Assumptions 1–6 hold. If σ ^ 2 is yielded by (24),
$$(\hat{\sigma}^2 - \sigma^2)^2 = O_P\!\left(\frac{p}{T} + \frac{1}{p^{2s}}\right).$$
If $p = O(T^{\frac{1}{2s+1}})$, the optimal convergence rate is
$$(\hat{\sigma}^2 - \sigma^2)^2 = O_P\!\left(T^{-\frac{2s}{2s+1}}\right).$$
Theorem 4. 
Suppose Assumptions 1–6 hold. If Γ ^ is yielded by (26),
$$\left\|\Gamma^{-\frac{1}{2}}\hat{\Gamma}\Gamma^{-\frac{1}{2}} - I_p\right\|_2^2 = O_P\!\left(\frac{p}{N} + \frac{p}{T} + \frac{1}{p^{2s}}\right).$$
If $p = O(T^{\frac{1}{2s+1}})$, the optimal convergence rate is
$$\left\|\Gamma^{-\frac{1}{2}}\hat{\Gamma}\Gamma^{-\frac{1}{2}} - I_p\right\|_2^2 = O_P\!\left(\max\left\{N^{-1}T^{\frac{1}{2s+1}},\; T^{-\frac{2s}{2s+1}}\right\}\right).$$
Theorems 3 and 4 guarantee that the variance components σ 2 and Γ can be estimated consistently by using the LAML criterion. The convergence rates of σ 2 and Γ will be influenced by the estimation errors of β ^ and c ^ i . As a corollary,
$$\|\Sigma_y^* - \hat{\Sigma}_y\|_2^2 = O_P\!\left(N^{-1}T^{\frac{1}{2s+1}} + T^{-\frac{2s}{2s+1}}\right),$$
where $\Sigma_y^* = \sigma^2 I_T + \Psi_p\Gamma\Psi_p^{\top}$ and $\hat{\Sigma}_y = \hat{\sigma}^2 I_T + \Psi_p\hat{\Gamma}\Psi_p^{\top}$. However, the true covariance matrix is $\Sigma_y = \sigma^2 I_T + \mathrm{cov}(X_i)$. The difference between $\Sigma_y$ and $\Sigma_y^*$ is unknown without additional assumptions. Hence, in Theorem 1, we impose the condition $\lambda_{\max}(\hat{\Sigma}_y^{-1}\Sigma_y) < \infty$ to ensure the correctness of the corresponding proof.

3.4. Tuning Parameter Selection

In the new FLMM estimator, three tuning parameters, namely p, q, and γ, need to be specified. Here, we emphasize that our FLMM estimator is insensitive to the choices of p and q as long as they are moderately large (e.g., p = 10 and q = 20, or any other reasonable numbers), even though the theoretical investigation requires them to be of orders $O(T^{1/(2s+1)})$ and $O(T)$, respectively. In terms of regulating the wiggliness of the mean function estimate and the individual trajectory predictions, the smoothing parameter γ and the covariance matrix Γ are more essential than the numbers of basis functions p and q. A large number of empirical analyses have confirmed this principle. For example, Wood [44] showed that a typical smooth function can be fitted accurately with p = 10 and set this number as the default in the R package mgcv. We follow Wood's practice and recommend choosing p and q as any reasonable numbers in this paper.
As for γ , we propose to automatically estimate the optimal one by using the LAML:
$$\hat{\gamma} = \arg\min_{\gamma}\; \gamma\,\hat{\beta}^{\top}Q\hat{\beta} - \log\det\!\left(\gamma I_{\mathrm{rank}(Q)}\right) + \log\det\!\left(n\Psi_q^{\top}\Psi_q + \gamma Q\right).$$
Similar to the estimation of Γ and σ 2 , the score function of γ is
$$0 = \hat{\beta}^{\top}Q\hat{\beta} - \mathrm{rank}(Q)\,\gamma^{-1} + \mathrm{trace}\!\left(\left(n\Psi_q^{\top}\Psi_q + \gamma Q\right)^{-1}Q\right).$$
As a result, the fixed-point iteration of γ is
$$\gamma^{[t+1]} = \frac{\mathrm{rank}(Q)}{\hat{\beta}^{\top}Q\hat{\beta} + \mathrm{trace}\!\left(\left(n\Psi_q^{\top}\Psi_q + \gamma^{[t]}Q\right)^{-1}Q\right)}.$$
We update γ [ t ] along with the estimation of β . The stable estimate is regarded as γ ^ .
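A one-line version of this update (ours; the argument names are illustrative) reads as follows; in practice it is interleaved with the $\beta$ step of Section 3.1 and iterated until the value stabilizes.

```python
import numpy as np

def update_gamma(gamma, beta_hat, Psi_q, Q, n):
    """One LAML fixed-point update of the smoothing parameter (Section 3.4)."""
    denom = beta_hat @ Q @ beta_hat + np.trace(
        np.linalg.solve(n * Psi_q.T @ Psi_q + gamma * Q, Q))
    return np.linalg.matrix_rank(Q) / denom

# Repeat update_gamma alongside the beta update until the value stabilizes;
# the stable value is taken as gamma_hat.
```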
Since $\hat{\gamma}$ can be estimated automatically, there is effectively no tuning parameter left to be determined. In contrast, the estimating procedure of FPCA needs to select three bandwidths and the number of employed PCs, which can be very time-consuming. In particular, FPCA is sensitive to the number of employed PCs, while FLMM is insensitive to the numbers of employed bases p and q. As a result, our new FLMM estimator clearly outperforms FPCA in terms of tuning-parameter robustness.

4. Simulation Study

In this section, we conduct a simulation study to assess the performance of our new FLMM estimator in comparison with FPCA. Here, we choose the well-known Principal Analysis by Conditional Expectation (PACE) estimator (MATLAB code for PACE: https://www.stat.ucdavis.edu/PACE/) to implement FPCA.

4.1. Simulation Settings

The settings of the simulations are as follows. Throughout the study, we set $t = (1/T, 2/T, \ldots, 1)$ so that each $t_j \in (0, 1]$. The mean function is
μ ( t ) = t 11 ( 10 ( 1 t ) ) 6 + 10 ( 10 t ) 3 ( 1 t 11 ) 1.396 ,
which is constructed by Wood [44]. Next, we generate the covariance function C ( s , t ) by
C ( s , t ) = k = 1 5 λ k ϕ k ( s ) ϕ k ( t ) .
Here, the eigenvalues ( λ 1 , λ 2 , λ 3 , λ 4 , λ 5 ) = ( 5 , 2.5 , 30 , 0.5 , 0.25 ) / 4 , and the eigenfunction vector ϕ ( t ) = ( ϕ 1 ( t ) , , ϕ 5 ( t ) ) is constructed by rotating the Tikhonov bases through a (5 × 5 ) matrix G :
ϕ ( t ) = T ( t ) G ,
where $T(t) = (1, \sqrt{2}\cos(\pi t), \ldots, \sqrt{2}\cos(4\pi t))^{\top}$, $A_5(0.5)$ is a $(5\times 5)$ first-order autoregressive (AR(1)) correlation matrix with correlation coefficient $\rho = 0.5$, and $G$ is the Cholesky factor of $A_5(0.5)^{-1}$, i.e., $GG^{\top} = A_5(0.5)^{-1}$. Furthermore, the individual trajectory is generated by
X i ( t j ) = μ ( t j ) + k = 1 5 ϕ k ( t j ) ξ i k , ( i , j ) { 1 , , N } × { 1 , , T } ,
where $\xi_{ik} \sim N(0, \lambda_k)$. The observations are
y i ( t j ) = X i ( t j ) + ε i j ,
where $\varepsilon_{ij} \sim N(0, 2)$. Figure 2 visualizes the aforementioned settings. Specifically, the top-left panel shows the first three Tikhonov bases and the bottom-left panel demonstrates the first three eigenfunctions obtained by rotating the Tikhonov bases. The top-middle and bottom-middle panels show the first and second individual trajectories and the related observations with random errors. It is observed that each individual trajectory is similar to the mean function but has its own trend. The top-right and bottom-right panels visualize the covariance surfaces.
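For readers who wish to reproduce a data set of this kind, the sketch below (ours) follows the generating mechanism above, except that the mean function is replaced by a simple stand-in smooth curve (the paper uses Wood's test function); the eigenvalues, the AR(1) rotation, and the error variance follow the values stated in this subsection.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 150, 150
t = np.arange(1, T + 1) / T                                  # grid t = (1/T, ..., 1)

mu = 3 * np.sin(2 * np.pi * t) + 5 * t                       # stand-in smooth mean function
lam = np.array([5, 2.5, 30, 0.5, 0.25]) / 4                  # eigenvalues as listed in Section 4.1
Tik = np.column_stack([np.ones_like(t)] +
                      [np.sqrt(2) * np.cos(k * np.pi * t) for k in range(1, 5)])
rho = 0.5
A5 = rho ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))  # AR(1) correlation matrix
G = np.linalg.cholesky(np.linalg.inv(A5))                     # Cholesky factor of A5^{-1}
phi = Tik @ G                                                 # rotated eigenfunctions
xi = rng.normal(size=(N, 5)) * np.sqrt(lam)                   # scores xi_ik ~ N(0, lambda_k)
X = mu + xi @ phi.T                                           # individual trajectories
Y = X + rng.normal(0, np.sqrt(2), size=(N, T))                # observations with N(0, 2) errors
```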

4.2. Simulation Results

For a clearer comparison, we denote FLMM with B-splines and with Tikhonov bases as FLMM-Bsplines and FLMM-Tikhonov, respectively. For both basis types, the numbers of employed bases are 30 and 15 when estimating the mean function and the individual trajectories, respectively. Meanwhile, the FPCA implemented by PACE is denoted as FPCA-PACE, where all settings follow the default ones in the package. We compare the proposed FLMM with FPCA and generate the data using the above settings. In order to get a complete evaluation of the three approaches, we consider two cases in the simulations: (1) fix N = 150 and set T = 50, 100, 150, 200, 250, 300; (2) fix T = 150 and set N = 50, 100, 150, 200, 250, 300. On the other hand, the criteria used to evaluate the results are as follows. For mean function estimation, we adopt the root mean squared error (RMSE) as the criterion:
$$\sqrt{\frac{1}{T}\sum_{j=1}^{T}\left(\hat{\mu}(t_j) - \mu(t_j)\right)^2};$$
for individual trajectory prediction, we consider the maximum RMSE as the criterion:
$$\max_{i\in\{1,\ldots,N\}} \sqrt{\frac{1}{T}\sum_{j=1}^{T}\left(\hat{X}_i(t_j) - X_i(t_j)\right)^2};$$
the absolute error $|\hat{\sigma}^2 - \sigma^2|$ is used to evaluate the variance estimation; a scaled Frobenius norm is adopted to compare the covariance surface estimation:
$$\frac{1}{T}\sqrt{\sum_{i=1}^{T}\sum_{j=1}^{T}\left(\hat{C}(t_i, t_j) - C(t_i, t_j)\right)^2};$$
and the computing time in seconds is recorded to compare the computational efficiency. The system and software used to implement the simulations are Linux 4.18 (a standard High Performance Computing (HPC) machine, icosa192gb feature nodes, 50 GB memory and 12 CPU cores) and MATLAB (R2021). The number of replications is 500.
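A compact sketch (ours; all names are illustrative) of the four accuracy criteria, assuming the truth and the estimates are available on the common grid, is given below.

```python
import numpy as np

def evaluation_criteria(mu_hat, mu, X_hat, X, sigma2_hat, sigma2, C_hat, C):
    """Accuracy criteria of Section 4.1: mean RMSE, worst-subject RMSE,
    absolute variance error, and scaled Frobenius error of the covariance surface."""
    T = mu.size
    rmse_mean = np.sqrt(np.mean((mu_hat - mu) ** 2))
    rmse_traj = np.max(np.sqrt(np.mean((X_hat - X) ** 2, axis=1)))
    err_var = abs(sigma2_hat - sigma2)
    err_cov = np.sqrt(np.sum((C_hat - C) ** 2)) / T
    return rmse_mean, rmse_traj, err_var, err_cov
```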
Figure 3 shows the results of the simulation. Specifically, the five subplots on the left side of Figure 3 show the results of FLMM-Bsplines, FLMM-Tikhonov, and FPCA-PACE based on mean function estimation, individual estimation prediction, variance estimation, covariance surface estimation, and computation time as the sample size N increases. Likewise, the five subplots on the right side of Figure 3 show the counterparts of the left subplots as the number of observations T increases. Based on these results, we make the following comments.
Mean function estimation. When the sample size N is very small, FLMM-Bsplines is worse than FLMM-Tikhonov and FPCA-PACE. However, as N increases, the performances of FLMM-Bsplines and FLMM-Tikhonov improve significantly faster than that of FPCA-PACE. On the other hand, when N is fixed at 150, FPCA-PACE is extremely unreliable when T is very small and is always less accurate than the two FLMM estimators no matter whether T is large or small. These results show that our novel FLMM estimator is better than FPCA in terms of mean function estimation. In addition, no matter whether N or T rises, the estimation error of the mean function decreases, which is consistent with Theorem 1.
Individual trajectory prediction. In terms of individual trajectory prediction, FPCA-PACE is much worse than FLMM-Bsplines and FLMM-Tikhonov. We checked the code of the PACE package and found that it estimates the eigenvalues and eigenfunctions by applying PCA to a (51 × 51) covariance surface predictor, which may result in less accurate eigenvalue and eigenfunction estimates. (Nevertheless, we did not change the default settings of the PACE package in the simulations.) On the other hand, we find that our estimates become more accurate as T increases, while slowly becoming worse when T is fixed and N rises (the maximum RMSE (46) is affected by N). This is consistent with Theorem 2: the prediction of the individual trajectory is only related to the number of observations T.
Variance estimation. Regarding variance estimation, FLMM-Bsplines and FLMM-Tikhonov enjoy the same accuracy in all cases. Since the variance estimate yielded by FLMM becomes more accurate no matter whether T or N increases, the result is consistent with Theorem 3. However, it seems that FPCA-PACE benefits from neither the increase of T nor the rise of N.
Covariance surface estimation. Regarding covariance surface estimation, all three estimators have the same degree of accuracy. In particular, when N is fixed and T increases, all the covariance surface estimators become worse in terms of the Frobenius norm. This phenomenon indicates that the scaled approximation error $\|\mathrm{cov}(X_i) - \Psi_p\Gamma\Psi_p^{\top}\|_F/T$ should increase as T increases.
Covariance surface approximation. To verify this hypothesis, we conduct an additional simulation. Specifically, because $c_i$ is unknown in the simulation (we generate the random coordinates $\xi_i$ of the eigenfunctions directly, and there is only an implicit correspondence between $c_i$ and $\xi_i$), we approximate $c_i$ by
$$\tilde{c}_i = \left(\Psi_p^{\top}\Psi_p\right)^{-1}\Psi_p^{\top}(X_i - \mu),$$
and the covariance matrix $\Gamma$ is approximated by
$$\tilde{\Gamma} = \frac{1}{N-1}\sum_{i=1}^{N}(\tilde{c}_i - \bar{c})(\tilde{c}_i - \bar{c})^{\top},$$
where $\bar{c}$ is the sample mean. We implement the above simulation 300 times and record the empirical covariance matrix $\tilde{\Gamma}$ and the approximation error $\|\mathrm{cov}(X_i) - \Psi_p\tilde{\Gamma}\Psi_p^{\top}\|_F/T$. The left and middle panels in Figure 4 show the simulated covariance matrices of $c_i$ with respect to the B-spline and Tikhonov bases. Because the B-spline basis functions have no natural order while the Tikhonov basis functions do, the covariance matrix with respect to the Tikhonov bases has diagonal elements that decrease sharply as the basis order increases. The approximation error does increase with T, and so our hypothesis is confirmed. In addition, there is no difference between the B-spline and Tikhonov bases in approximating the covariance surface, although the latter have a natural order. In summary, the decreasing performance of the covariance surface estimation is due to the approximation error.
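The sketch below (ours) mirrors this check: it projects the centered true trajectories onto the basis by least squares, forms the empirical covariance of the projected coefficients, and reports the scaled approximation error; the empirical covariance of the generated trajectories serves as a stand-in for $\mathrm{cov}(X_i)$.

```python
import numpy as np

def covariance_approximation_error(X, mu, Psi_p):
    """Least-squares coefficients c_tilde, their empirical covariance Gamma_tilde,
    and the scaled error ||cov(X_i) - Psi_p Gamma_tilde Psi_p^T||_F / T."""
    T = X.shape[1]
    C_tilde = np.linalg.lstsq(Psi_p, (X - mu).T, rcond=None)[0].T   # (N, p) coefficients
    Gamma_tilde = np.cov(C_tilde, rowvar=False)                     # (p, p), divides by N - 1
    cov_X = np.cov(X, rowvar=False)                                 # empirical stand-in for cov(X_i)
    err = np.linalg.norm(cov_X - Psi_p @ Gamma_tilde @ Psi_p.T, 'fro') / T
    return Gamma_tilde, err
```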
Computational efficiency. In terms of computational efficiency, the computing time of our two methods is almost independent of T and N and is much less than that of FPCA-PACE. In particular, the computing time of FPCA-PACE increases sharply as T increases, confirming our comments on the drawbacks of FPCA given in Section 2. Thus, for the estimation of large balanced longitudinal data, our proposed method has a very clear advantage.

5. Real Data Analysis

We demonstrate the effectiveness of the proposed method via the analysis of the COVID-19 data in this section. The data record the daily numbers of COVID-19 infectious cases and deaths in the 50 states and DC in the US, compiled by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University. They contain N = 51 subjects and T = 152 observations. Each realization is the logarithm of the daily number of COVID-19 cases, recorded every day from 16 March 2020 to 14 August 2020. A few observations are negative for unknown reasons; we replace each such observation with the average of the three observations around it. We employ 15 and 10 B-spline bases to approximate the mean function and the individual trajectories, respectively. It should be pointed out that we did not change the default settings of the PACE package, and in our FLMM method the number of employed bases p is the only tuning parameter to select. Therefore, the comparison between FPCA and FLMM is fair.
The results of this data analysis are shown in Figure 5, Figure 6 and Figure 7. Figure 5 displays the two estimated mean functions obtained by FPCA and our novel FLMM, respectively. The inset panel draws attention to the stability of the two mean curves. The results further reveal the reason why FPCA performs worse than FLMM in terms of mean function estimation: it may overfit the mean function. Besides, because the sample size is N = 51, we chose the six states with the largest populations to represent the individual trajectories, and the results are shown in Figure 6. Likewise, FPCA tends to overfit the individual trajectories, while FLMM with our new estimator fits them accurately. In addition, Figure 7 demonstrates the estimated covariance surfaces yielded by the two approaches. Generally, the two covariance surfaces are similar, but the one generated by our FLMM is smoother than that of FPCA. In terms of computing time, our FLMM estimator takes only 0.088 seconds, while PACE takes 44.08 seconds. Such a substantial gap confirms that our FLMM estimator is much more efficient than FPCA in terms of computation. In conclusion, FPCA implemented by PACE is very likely to overfit the mean function estimation, individual trajectory prediction, and covariance function estimation.

6. Discussion

In this paper, we propose a novel estimator of large functional data using the FLMM technique. In comparison with FPCA, this novel estimator is much more efficient because all parameters can be automatically estimated. In comparison with the traditional estimator of FLMM, i.e., the REML criterion, our novel estimator adopts the LAML criterion, which enjoys a significantly lower computational complexity when the number of observations T or the number of employed bases p is large. In the simulations, our novel estimator of FLMM outperforms or matches FPCA on all five criteria. In the real data analysis, it provides more reliable estimates than FPCA in terms of avoiding overfitting. Note that we only compare the novel estimator with the FPCA implemented by the PACE package, because the computing time of the traditional FLMM estimator with REML is extremely long and it very easily encounters numerical problems such as a degenerate Hessian matrix.
Another contribution of this paper is the asymptotic theory of FLMM. To the best of our knowledge, our work is the first to establish the convergence rates of the mean function estimate, individual trajectory prediction, variance estimate, and covariance surface estimate. Theorems 1 and 2 show that the mean function estimate and individual trajectory prediction can reach the minimax lower bounds if the numbers of employed bases are chosen optimally. In particular, we point out that the numbers of basis functions should be chosen adaptively when estimating the mean function and the individual trajectories, providing a novel guide on how to apply FLMM in practice. However, Theorems 3 and 4 illustrate that the convergence rates of the variance estimate and covariance surface estimate cannot reach the minimax lower bounds because of the estimation errors of the mean function estimate and individual trajectory prediction. These two findings indicate that the variance components cannot be precisely estimated if the mean function estimate and individual trajectory prediction are not consistent.
It should be pointed out that the proposed estimator can be readily extended to analyze unbalanced longitudinal data or so-called sparse functional data [11]. Indeed, the FLMM estimator for unbalanced longitudinal data resembles a combination of penalized quasi-likelihood (PQL) and REML/LAML; a similar estimating procedure can be found in Breslow and Clayton [33]. Since in unbalanced longitudinal data the number of observations for each individual is usually small, the FLMM estimator will not suffer from the theoretical and computational difficulties caused by the "curse of dimensionality". As mentioned before, Vonesh et al. [34] pointed out that the convergence rate of the mean parameter in GLMM is $O(T_{\min}^{-1/2})$. However, to the best of our knowledge, the convergence rates of the covariance matrix estimate and the individual prediction are still unknown. As for FLMM, since the approximation bias of the basis expansion is further involved, it is even unclear whether the four estimates are consistent. Hence, it is worth studying in the future under what conditions the FLMM estimation is consistent for unbalanced longitudinal data or sparse functional data.
Discrete functional data analysis is an area that has received much less attention than continuous functional data analysis. However, discrete data are more common in longitudinal data analysis than continuous data, and hence the technical tools for analyzing discrete functional data are more urgently needed. Specifically, the model of discrete functional data is
$$E\left(Y_i(t_j)\right) = g^{-1}\!\left(X_i(t_j) + W_{ij}^{\top}\gamma\right),$$
where $Y_i(t_j)$ is a random variable following an exponential family distribution [45], $g(\cdot)$ is the so-called link function, and $W_{ij}$ is a certain vector of covariates. We conjecture that the convergence rates of the mean function estimate, individual trajectory prediction, variance estimate, and covariance surface estimate in this new model are the same as those indicated by Theorems 1–4 if the sampling locations $\{t_j\}$ are balanced for all individuals. On the other hand, such a problem has been studied by Hall et al. [46] using the FPCA technique. It is also worth comparing the performances of FLMM and FPCA when dealing with such discrete functional data.

Author Contributions

Conceptualization, M.R.; methodology, Y.Y.; writing—original draft preparation, M.R.; writing—review and editing, Y.Y.; resources, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data can be found here: https://www.kaggle.com/datasets/thecansin/johns-hopkins-covid19-case-tracker?resource=download (covering 16 March 2020 to 14 August 2020). Additionally, the MATLAB code of PACE is available at https://www.stat.ucdavis.edu/PACE/.

Acknowledgments

The authors are appreciative of the numerous valuable comments from the editor, the associate editor, and referees during the preparation of the article. The author Ran is sincerely grateful to Kano Yutaka and Morikawa Kosuke of the graduate school of engineering science at Osaka University for their generous personalities and countless times of assistance to Ran in challenging times.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of Theorems

Appendix A.1. Lemmas

We present some basic lemmas that facilitate the proofs of Theorems 1–4.
Lemma A1. 
Suppose $X_1, \ldots, X_n$ are $n$ independent sub-Gaussian variables with mean zero and variances $\sigma_1^2, \ldots, \sigma_n^2$. Then
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_i \xrightarrow{\;D\;} N(0, \sigma_x^2) \quad \text{as } n\to\infty,$$
where
$$\sigma_x^2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sigma_i^2.$$
Proof of Lemma A1. 
By using the second item of Proposition 2.5.2 of Vershynin [40],
[ E ( | X i | p ) ] 1 p = O ( p )
for all p 1 . Hence, for all fixed δ > 0 ,
lim n 1 n 1 + δ i = 1 n E ( | X i | 2 + 2 δ ) ( K 0 2 + 2 δ ) 2 + 2 δ n δ 0 ,
where K 0 > 0 is a fixed number, which verifies Lyapunov’s condition. □
Lemma A2. 
Let $X_1, \ldots, X_n$ be $n$ independent and identically distributed $(p\times 1)$ random vectors whose entries $x_{i1}, \ldots, x_{ip}$ are sub-Gaussian with zero mean. Besides, define the covariance matrix of $X_i$ as
Σ = E ( X i X i )
and the related sample covariance matrix
Σ ^ = 1 n i = 1 n X i X i .
Then for every positive integer n,
$$E\left(\|\hat{\Sigma} - \Sigma\|_2\right) \le C\left(\sqrt{\frac{p}{n}} + \frac{p}{n}\right)\|\Sigma\|_2,$$
where C is a certain positive constant.
This lemma is provided by Vershynin [40] (Theorem 4.7.1).
Lemma A3. 
  • Weyl's lemma: Let $S$, $E$ be two Hermitian $(p\times p)$ matrices. Then, for each $1 \le i \le p$,
    $$\lambda_i(S) + \lambda_{\max}(E) \ge \lambda_i(S + E) \ge \lambda_i(S) + \lambda_{\min}(E);$$
  • Ostrowski's lemma: Let $A$ be an $(n\times p)$ matrix and $S$ a $(p\times p)$ Hermitian matrix. Then, for each $1 \le i \le p$, there exists a nonnegative real number $\theta_i$ with $\lambda_{\min}(A^{\top}A) \le \theta_i \le \lambda_{\max}(A^{\top}A)$ such that
    $$\lambda_i(A^{\top}SA) = \theta_i\,\lambda_i(S),$$
    where $\lambda_i(S)$ denotes the $i$th eigenvalue of $S$.
This lemma can be found in Horn and Johnson [47] (Theorem 4.3.1 and Theorem 4.5.9).
Lemma A4. 
Let $\Gamma$ be a $(p\times p)$ symmetric matrix such that $p\Gamma$ is a well-conditioned covariance matrix. Then
T C 0 1 λ min ( Ψ q V 1 Ψ q ) λ max ( Ψ q V 1 Ψ q ) C 0 T ,
where V = σ 2 I T + Ψ p Γ Ψ p , C 0 > 0 is a constant, σ 2 > 0 is an alternative constant, Ψ q is an ( T × q ) matrix satisfying Ψ q Ψ q / T Σ Ψ q , Ψ p is an ( T × p ) matrix satisfying Ψ p Ψ p / T Σ Ψ p , and Σ Ψ q , Σ Ψ p are two well-conditioned covariance matrices.
Proof of Lemma A4. 
By the Ostrowski’s lemma,
0 λ min ( Ψ p Γ Ψ p ) , λ max ( Ψ p Γ Ψ p ) λ max ( T Σ Ψ p ) λ max ( Γ ) c 0 T T = c 0 ,
for certain constant c 0 , because the minimum eigenvalue of Ψ p Ψ p is 0. By using Weyl’s lemma,
σ 2 λ min ( V ) λ min ( V ) σ 2 + c 0 .
Hence,
T σ 2 λ max ( Σ Ψ q ) σ 2 + c 0 λ min ( T Σ Ψ q ) λ min ( V 1 ) λ min ( Ψ q V 1 Ψ q ) λ max ( Ψ q V 1 Ψ q ) λ max ( T Σ Ψ q ) λ max ( V 1 ) = T σ 2 λ max ( Σ Ψ q ) .
That is,
T C 0 1 λ min ( Ψ q V 1 Ψ q ) λ max ( Ψ q V 1 Ψ q ) C 0 T ,
for certain positive constant C 0 . □

Appendix A.2. Proofs

Proof of Theorem 1. 
The score function of β ^ is
0 = 1 N i = 1 N ( Y i Ψ q β ) Σ ^ y 1 ( Y i Ψ q β ) + γ β Q β β | β = β ^ = Ψ q Σ ^ y 1 1 N i = 1 N Y i Ψ q β ^ = Ψ q Σ ^ y 1 1 N i = 1 N Y i μ Ψ q Σ ^ y 1 ( μ Ψ q β ) Ψ q Σ ^ y 1 Ψ q ( β β ^ ) + γ N Q β + γ N Q ( β ^ β )
= I 1 + I 2 + I 3 + I 4 + I 5 .
As for I 1 ,
I 1 = Ψ q Σ ^ y 1 1 N i = 1 N Y i μ = Ψ q Σ ^ y 1 Σ y 1 2 Σ y 1 2 1 N i = 1 N ( X i + ε i ) = Ψ q Σ ^ y 1 Σ y 1 2 1 N i = 1 N e i ,
where e = ( e i 1 , , e i T ) with E ( e i j ) = 0 , var( e i j )=1, and E ( e i j e i k ) = 0 for j k . Here,
E ( I 1 ) = 0 , var ( I 1 ) = 1 N × trace ( Ψ q Σ ^ y 1 Σ y 1 2 I T Σ y 1 2 Σ ^ y 1 Ψ q )
which guarantees
E ( | | I 1 | | 2 2 ) 1 N × trace ( Ψ q Σ ^ y 1 Ψ q ) × λ max ( Σ ^ y 1 Σ y ) = O T q N .
As for I 2 , by Assumption 4,
| | I 2 | | 2 2 λ max ( Ψ q Σ ^ y 1 Ψ q ) × λ min 1 ( Σ ^ y ) × | | μ Ψ q β | | 2 2 = O ( T ) × O ( 1 ) × O ( T q 2 s ) = O ( T 2 q 2 s ) .
As for I 3 + I 5 ,
I 3 + I 5 λ min Ψ q Σ ^ y Ψ q + γ N Q × | | β ^ β | | 2 O ( T ) × | | β ^ β | | 2 .
As for I 4 ,
| | I 4 | | 2 2 1 N 2 × λ max 2 ( γ Q ) × | | β | | 2 2 = O 1 N 2
As q T when estimating the mean function,
| | β ^ β | | 2 2 O P 1 T 2 ( | | I 1 | | 2 2 + | | I 3 | | 2 2 ) = O P q N T + 1 q 2 s .
Hence, by letting q = O ( T ) ,
| | β ^ β | | 2 2 = O P 1 N + T 2 s ,
which is consistent with the minimax lower bounder provided by Cai and Yuan [29]. Therefore, Theorem 1 is proved. □
Proof of Theorem 2. 
The score function of $\hat c_i$ satisfies
$$0 = \frac{\partial}{\partial c_i}\left\{\|Y_i - \hat\mu - \Psi_p c_i\|_2^2 + c_i^\top\hat\Lambda c_i\right\}\Bigg|_{c_i = \hat c_i}, \qquad \hat\Lambda = \hat\sigma^2\hat\Gamma^{-1}.$$
Dividing by $2$ and decomposing the resulting expression,
$$
\begin{aligned}
0 &= -\Psi_p^\top\left(Y_i - \hat\mu - \Psi_p\hat c_i\right) + \hat\Lambda\hat c_i\\
&= -\Psi_p^\top(Y_i - X_i) - \Psi_p^\top(X_i - \Psi_p b_i) + \Psi_q^\top\Psi_q(\hat\beta - \beta) + \Psi_p^\top\Psi_p(\hat c_i - c_i) + \hat\Lambda c_i + \hat\Lambda(\hat c_i - c_i)\\
&= J_1 + J_2 + J_3 + J_4 + J_5 + J_6.
\end{aligned}
$$
As for $J_1$, it is easy to see
$$\mathrm{E}(J_1) = 0, \qquad \mathrm{var}(J_1) = \sigma^2\,\mathrm{trace}\!\left(\Psi_p^\top I_T\Psi_p\right) = O(pT),$$
which guarantees
$$\mathrm{E}\!\left(\|J_1\|_2^2\right) = O(pT).$$
As for $J_2$, by Assumption 4,
$$\mathrm{E}\!\left(\|J_2\|_2^2\right) \le \lambda_{\max}\!\left(\Psi_p^\top\Psi_p\right)\mathrm{E}\!\left(\|X_i - \Psi_p b_i\|_2^2\right) = O_P\!\left(T^2 p^{-2s}\right).$$
As for $J_3$,
$$\|J_3\|_2^2 \le \lambda_{\max}\!\left(\Psi_q^\top\Psi_q\right)\|\hat\beta - \beta\|_2^2 = O_P\!\left(\frac{T}{N} + T^{1-2s}\right).$$
As for $J_5$,
$$\|J_5\|_2^2 \le \lambda_{\max}(\hat\Lambda)\,c_i^\top\hat\Lambda c_i.$$
Using the fact that $\hat\Lambda \to \sigma^2\Gamma^{-1}$, $c_i^\top\hat\Lambda c_i$ behaves asymptotically like a $\sigma^2\chi_p^2$ random variable. Hence,
$$\|J_5\|_2^2 = O_P(T)\times O_P(p) = O_P(Tp).$$
As for $J_4 + J_6$,
$$\|J_4 + J_6\|_2^2 \ge \left(\lambda_{\min}\!\left(\Psi_p^\top\Psi_p\right) + \lambda_{\max}^{-1}(\hat\Lambda)\right)^2\|\hat c_i - c_i\|_2^2 \gtrsim T^2\,\|\hat c_i - c_i\|_2^2.$$
As a result,
$$\|\hat c_i - c_i\|_2^2 \le O_P\!\left(\frac{p}{T} + \frac{1}{p^{2s}} + \frac{1}{NT} + \frac{1}{T^{2s+1}} + \frac{p}{T}\right).$$
By letting $p = O\!\left(T^{1/(2s+1)}\right)$,
$$\|\hat c_i - c_i\|_2^2 \le O_P\!\left(T^{-\frac{2s}{2s+1}}\right).$$
Thus, Theorem 2 is proved. □
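As a side calculation that is not spelled out in the original argument, the choice $p \asymp T^{1/(2s+1)}$ is exactly the one that balances the two leading terms in the bound above:
$$\frac{p}{T} = \frac{1}{p^{2s}} \;\Longleftrightarrow\; p^{2s+1} = T \;\Longleftrightarrow\; p = T^{\frac{1}{2s+1}}, \qquad \text{so that}\qquad \frac{p}{T} = \frac{1}{p^{2s}} = T^{-\frac{2s}{2s+1}}.$$
The same choice of $p$ is reused in the proofs of Theorems 3 and 4 below.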
Proof of Theorem 3. 
The score function of $\hat\sigma^2$ satisfies
$$
\begin{aligned}
0 &= \frac{1}{NT}\frac{\partial L(\sigma^2,\Gamma)}{\partial\sigma^2}\Bigg|_{\sigma^2 = \hat\sigma^2}\\
&= -(\hat\sigma^2 - \sigma^2) + \left\{\frac{1}{NT}\sum_{i=1}^{N}\|Y_i - \mu - \Psi_p c_i\|_2^2 - \sigma^2\right\}
+ \frac{1}{NT}\sum_{i=1}^{N}\left\{\|Y_i - \hat\mu - \Psi_p c_i\|_2^2 - \|Y_i - \mu - \Psi_p c_i\|_2^2\right\}\\
&\quad + \frac{1}{NT}\sum_{i=1}^{N}\left\{\|Y_i - \hat\mu - \Psi_p\hat c_i\|_2^2 - \|Y_i - \hat\mu - \Psi_p c_i\|_2^2\right\}
+ \frac{\mathrm{trace}\!\left(\Delta(\sigma^2,\Gamma)\,\Psi_p^\top\Psi_p\right)}{NT}\\
&= K_1 + K_2 + K_3 + K_4 + K_5.
\end{aligned}
$$
As for $K_2$, it is easy to see
$$K_2 = \frac{1}{NT}\sum_{i=1}^{N}\|Y_i - \mu - \Psi_p c_i\|_2^2 - \sigma^2 = \hat\sigma_{\mathrm{oracle}}^2 - \sigma^2,$$
where
$$\hat\sigma_{\mathrm{oracle}}^2 = \frac{1}{NT}\sum_{i=1}^{N}\|Y_i - \mu - \Psi_p c_i\|_2^2.$$
It is well known that $(\hat\sigma_{\mathrm{oracle}}^2 - \sigma^2)^2 = O_P\!\left((NT)^{-1}\right)$; hence $K_2^2 = O_P\!\left((NT)^{-1}\right)$. As for $K_3$,
$$K_3 = \frac{1}{NT}\sum_{i=1}^{N}(\mu - \hat\mu)^\top\left(2Y_i - \hat\mu - \mu - 2\Psi_p c_i\right).$$
Therefore, by the Cauchy–Schwarz inequality,
$$K_3^2 \le \left\{\frac{1}{NT}\sum_{i=1}^{N}\|\mu - \hat\mu\|_2^2\right\}\left\{\frac{1}{NT}\sum_{i=1}^{N}\|2Y_i - \hat\mu - \mu - 2\Psi_p c_i\|_2^2\right\} = O_P\!\left(\frac{1}{N} + \frac{1}{T^{2s}}\right).$$
As for $K_4$,
$$K_4 = \frac{1}{NT}\sum_{i=1}^{N}(c_i - \hat c_i)^\top\Psi_p^\top\left(2Y_i - 2\hat\mu - \Psi_p c_i - \Psi_p\hat c_i\right).$$
Therefore,
$$K_4^2 \le \frac{1}{N^2T^2}\left\{\sum_{i=1}^{N}\|2Y_i - 2\hat\mu - \Psi_p c_i - \Psi_p\hat c_i\|_2^2\right\}\left\{\sum_{i=1}^{N}\lambda_{\max}\!\left(T\Sigma_{\Psi_p}\right)\|c_i - \hat c_i\|_2^2\right\} = O_P\!\left(\frac{p}{T} + \frac{1}{p^{2s}}\right).$$
As for $K_5$,
$$K_5^2 \le \frac{1}{N^2T^2}\,\lambda_{\max}\!\left(\Psi_p^\top\Psi_p\right)\lambda_{\min}^{-1}\!\left(\Psi_p^\top\Psi_p\right) = O_P\!\left(\frac{1}{N^2T^2}\right).$$
As a result, since $K_4$ dominates $K_2$, $K_3$, and $K_5$,
$$(\hat\sigma^2 - \sigma^2)^2 = O_P\!\left(K_4^2\right) = O_P\!\left(\frac{p}{T} + \frac{1}{p^{2s}}\right).$$
Hence, letting $p = O\!\left(T^{1/(2s+1)}\right)$,
$$(\hat\sigma^2 - \sigma^2)^2 = O_P\!\left(T^{-\frac{2s}{2s+1}}\right).$$
Thus, Theorem 3 is proved. □
Proof of Theorem 4. 
The score function of $\hat\Gamma$ satisfies
$$
\begin{aligned}
0 &= \frac{1}{N}\frac{\partial L(\sigma^2,\Gamma)}{\partial\Gamma^{-1}}\Bigg|_{\Gamma^{-1} = \hat\Gamma^{-1}}
= -\hat\Gamma + \frac{1}{N}\sum_{i=1}^{N}\hat c_i\hat c_i^\top + \frac{1}{N}\Delta(\sigma^2,\Gamma)\\
&= -(\hat\Gamma - \Gamma) + \left\{\frac{1}{N}\sum_{i=1}^{N}c_i c_i^\top - \Gamma\right\} + \left\{\frac{1}{N}\sum_{i=1}^{N}\hat c_i\hat c_i^\top - \frac{1}{N}\sum_{i=1}^{N}c_i c_i^\top\right\} + \frac{1}{N}\Delta(\sigma^2,\Gamma)\\
&= L_1 + L_2 + L_3 + L_4.
\end{aligned}
$$
As for $L_2$, by using Vershynin [40] (Theorem 4.7.1),
$$\|L_2\|_2^2 = \left\|\frac{1}{N}\sum_{i=1}^{N}c_i c_i^\top - \Gamma\right\|_2^2 = O_P\!\left(\frac{p}{N}\right)\|\Gamma\|_2^2 = O_P\!\left(\frac{p}{NT}\right).$$
As for $L_3$,
$$L_3 = \frac{1}{N}\sum_{i=1}^{N}\hat c_i(\hat c_i - c_i)^\top + \frac{1}{N}\sum_{i=1}^{N}(\hat c_i - c_i)c_i^\top = L_{31} + L_{32}.$$
For $L_{31}$,
$$\|L_{31}\|_2^2 \le \lambda_{\max}\!\left(\frac{1}{N}\sum_{i=1}^{N}\hat c_i\hat c_i^\top\right)\frac{1}{N}\sum_{i=1}^{N}\|\hat c_i - c_i\|_2^2 \le O_P\!\left(\frac{p}{T^2} + \frac{1}{p^{2s}T}\right).$$
As for $L_{32}$, similarly,
$$\|L_{32}\|_2^2 \le \lambda_{\max}\!\left(\frac{1}{N}\sum_{i=1}^{N}c_i c_i^\top\right)\frac{1}{N}\sum_{i=1}^{N}\|\hat c_i - c_i\|_2^2 \le O_P\!\left(\frac{p}{T^2} + \frac{1}{p^{2s}T}\right).$$
As for $L_4$,
$$\|L_4\|_2^2 \le N^{-2}\,\lambda_{\min}^{-1}\!\left(\Psi_p^\top\Psi_p\right) = O_P\!\left(T^{-1}N^{-2}\right).$$
Note that $L_2$ and $L_3$ dominate $L_4$. Therefore,
$$\|\hat\Gamma - \Gamma\|_2^2 = O_P\!\left(\frac{p}{NT} + \frac{p}{T^2} + \frac{1}{p^{2s}T}\right).$$
Hence, letting $p = O\!\left(T^{1/(2s+1)}\right)$,
$$\left\|\Gamma^{-\frac12}\hat\Gamma\Gamma^{-\frac12} - I_p\right\|_2^2 = O_P\!\left(N^{-1}T^{\frac{1}{2s+1}} + T^{-\frac{2s}{2s+1}}\right).$$
Thus, Theorem 4 is proved. □

Appendix B. Estimators of FPCA and FLMM

Appendix B.1. Estimator of FPCA

In implementation, FPCA will (1) estimate the mean function $\mu(t)$; (2) estimate the two-dimensional covariance function $C(s,t)$; (3) yield the first $K$ eigenfunctions $\{\phi_k(t)\}$ and eigenvalues $\{\lambda_k\}$ through PCA; and (4) predict the corresponding coordinates $\{\xi_{ik}\}$ by using the Gaussian conditional expectation. Without loss of generality, we illustrate these four steps with the procedure employed by Li and Hsing [10]. Specifically, Li and Hsing employed local linear modelling [30] to estimate $\mu(t)$:
$$(\hat a_0, \hat a_1) = \arg\min_{a_0, a_1}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{T_i}\sum_{j=1}^{T_i}\left\{Y_{ij} - a_0 - a_1(t_{ij} - t)\right\}^2 K_{h_u}(t_{ij} - t),$$
where $K_{h_u}(t)$ is a kernel function with bandwidth $h_u$. The estimate $\hat\mu(t)$ is just $\hat a_0$. Subsequently, they estimate $C(s,t)$ by
$$(\hat b_0, \hat b_1, \hat b_2) = \arg\min_{b_0, b_1, b_2}\left\{\frac{1}{N}\sum_{i=1}^{N}\frac{1}{T_i}\sum_{j=1}^{T_i}\sum_{k=1}^{T_i}\left\{Y_{ij}Y_{ik} - b_0 - b_1(t_{ij} - s) - b_2(t_{ik} - t)\right\}^2 K_{h_C}(t_{ij} - s)\,K_{h_C}(t_{ik} - t)\right\},$$
where $K_{h_C}(t)$ is a kernel function with bandwidth $h_C$. The estimate $\hat C(s,t)$ is given by $\hat b_0 - \hat\mu(s)\hat\mu(t)$. Next, the variance of the random error is estimated with a two-step procedure. In the first step, Li and Hsing estimate the variance function
$$(\hat c_0, \hat c_1) = \arg\min_{c_0, c_1}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{T_i}\sum_{j=1}^{T_i}\left\{Y_{ij}^2 - c_0 - c_1(t_{ij} - t)\right\}^2 K_{h_v}(t_{ij} - t),$$
where $K_{h_v}(t)$ is a kernel function with bandwidth $h_v$. The estimate $\hat V(t)$ is given by $\hat c_0$. In the second step, $\hat\sigma^2$ is obtained by
$$\hat\sigma^2 = \int_0^1\left\{\hat V(t) - \hat C(t,t) - \hat\mu(t)^2\right\}\mathrm{d}t.$$
Furthermore, $\{\hat\phi_k(t)\}$ and $\{\hat\lambda_k\}$ are obtained from the eigen-decomposition
$$\hat C(s,t) = \sum_{k=1}^{K}\hat\lambda_k\hat\phi_k(s)\hat\phi_k(t),$$
where $K$ is a pre-given cutoff that can be selected by an information criterion such as the Bayesian information criterion (BIC) [48]. Finally, the random coordinate $\xi_{ik}$ is predicted by
$$\hat\xi_{ik} = \hat{\mathrm{E}}(\xi_{ik}\mid Y_i) = \hat\lambda_k\hat\phi_{ik}^\top\hat\Sigma_{y_i}^{-1}(Y_i - \hat\mu_i),$$
where $Y_i = (y_{i1},\ldots,y_{iT_i})^\top$, $\hat\phi_{ik} = (\hat\phi_k(t_{i1}),\ldots,\hat\phi_k(t_{iT_i}))^\top$, $\hat\mu_i = (\hat\mu(t_{i1}),\ldots,\hat\mu(t_{iT_i}))^\top$, $\hat\Sigma_{y_i} = \hat C_i + \hat\sigma^2 I_{T_i}$, and the $(k,h)$th element of $\hat C_i$ is $\hat C(t_{ik}, t_{ih})$.
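To make the four steps concrete, the following is a minimal R sketch for balanced (common-design) data. It replaces the local linear smoothers by raw cross-sectional averages and uses a crude first-difference noise estimate, so it only illustrates the workflow rather than the exact estimator of Li and Hsing [10]; all object names (fpca_balanced, Y, K, and so on) are ours.

```r
## Minimal FPCA sketch for balanced data: N curves observed on a common grid of
## T points in [0, 1]. Smoothing steps are replaced by raw averages.
fpca_balanced <- function(Y, K = 3) {
  N  <- nrow(Y); Tn <- ncol(Y)
  mu <- colMeans(Y)                              # step (1): mean function on the grid
  R  <- sweep(Y, 2, mu)                          # centred curves
  S  <- crossprod(R) / N                         # raw covariance of Y (noise on the diagonal)
  sigma2 <- max(mean(diff(t(R))^2) / 2, 1e-8)    # crude noise variance from first differences
  C  <- S - diag(sigma2, Tn)                     # step (2): covariance of the signal
  eg <- eigen(C, symmetric = TRUE)
  lambda <- pmax(eg$values[1:K], 0) / Tn         # step (3): eigenvalues on the integral scale
  phi    <- eg$vectors[, 1:K, drop = FALSE] * sqrt(Tn)  # eigenfunctions with unit L2 norm
  CK <- phi %*% diag(lambda, K) %*% t(phi)       # truncated covariance surface on the grid
  Sy <- CK + diag(sigma2, Tn)                    # Sigma_y = C_K + sigma^2 I_T
  # step (4): conditional-expectation scores, xi_ik = lambda_k phi_k' Sy^{-1} (Y_i - mu)
  xi <- R %*% solve(Sy, phi) %*% diag(lambda, K)
  list(mu = mu, lambda = lambda, phi = phi, sigma2 = sigma2, scores = xi,
       fitted = matrix(mu, N, Tn, byrow = TRUE) + xi %*% t(phi))
}
```

For common-design data stored as an N-by-T matrix, fpca_balanced(Y, K = 3)$fitted returns the predicted individual trajectories on the observation grid.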

Appendix B.2. Estimator of FLMM

In the implementation, FLMM will (1) estimate the mean function $\mu(t)$; (2) estimate the covariance matrix $\Gamma$ by REML; and (3) obtain the covariance function $C(s,t)$ and predict the random coordinates $\{b_{ik}\}$ by using the estimate of $\Gamma$. Without loss of generality, we illustrate these three steps with the procedure employed by Shi et al. [20]. Specifically, Shi et al. first estimated $\mu(t)$ by
$$\hat\beta = \arg\min_{\beta}\frac{1}{N}\sum_{i=1}^{N}(Y_i - \Psi_i\beta)^\top\Sigma_{y_i}^{-1}(Y_i - \Psi_i\beta),$$
where $\beta = (\beta_1,\ldots,\beta_p)^\top$, $\Psi_i = (\Psi_1(t_i),\ldots,\Psi_p(t_i))$ is a $(T_i\times p)$ matrix with $\Psi_k(t_i) = (\Psi_k(t_{i1}),\ldots,\Psi_k(t_{iT_i}))^\top$, and $\mathrm{cov}(Y_i) = \Sigma_{y_i}$ is approximated by
$$\Sigma_{y_i} \approx \sigma^2 I_{T_i} + \Psi_i\Gamma\Psi_i^\top,$$
with $\hat\Sigma_{y_i}$ denoting an estimate of $\Sigma_{y_i}$. Then $\hat\mu(t) = \sum_{k=1}^{p}\Psi_k(t)\hat\beta_k$. Next, Shi et al. employed the techniques of LMM [8], such as REML [7], to estimate $\sigma^2$ and $\Gamma$:
$$(\hat\sigma^2, \hat\Gamma) = \arg\min_{\sigma^2,\Gamma}\left\{\sum_{i=1}^{N}\left[(Y_i - \Psi_i\hat\beta)^\top\Sigma_{y_i}^{-1}(Y_i - \Psi_i\hat\beta) + \log\det(\Sigma_{y_i}) + \log\det\!\left(\Psi_i^\top\Sigma_{y_i}^{-1}\Psi_i\right)\right]\right\}.$$
FLMM iteratively implements the above two steps until stable estimates of $\hat\beta_1,\ldots,\hat\beta_p$, $\hat\sigma^2$, and $\hat\Gamma$ are obtained. The estimate of the covariance function $C(s,t)$ is then
$$\hat C(s,t) = \sum_{k=1}^{p}\sum_{h=1}^{p}\hat\Gamma_{kh}\Psi_k(s)\Psi_h(t).$$
As for the random coordinate vector $b_i = (b_{i1},\ldots,b_{ip})^\top$, its prediction is given by
$$\hat b_i = \hat\beta + \hat\Gamma\Psi_i^\top\hat\Sigma_{y_i}^{-1}(Y_i - \hat\mu_i).$$
The individual trajectory is predicted by $\hat X_i(t) = \sum_{k=1}^{p}\Psi_k(t)\hat b_{ik}$.
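As with the FPCA sketch above, the following is a minimal R illustration of the FLMM loop for balanced data with a common basis matrix. It uses a cubic B-spline basis from the splines package and an EM-type maximum-likelihood update for $(\sigma^2,\Gamma)$ as a simple stand-in for the REML criterion displayed above; all object names are ours.

```r
## Minimal FLMM sketch for balanced data with a common B-spline basis. The
## (sigma^2, Gamma) step is an EM-type maximum-likelihood update, not REML.
library(splines)

flmm_balanced <- function(Y, p = 10, n_iter = 50) {
  N  <- nrow(Y); Tn <- ncol(Y)
  tt <- seq(0, 1, length.out = Tn)
  Psi <- bs(tt, df = p, intercept = TRUE)                 # Tn x p basis matrix
  sigma2 <- var(as.vector(Y)) / 2                         # initial values
  Gamma  <- diag(p) / p
  for (iter in seq_len(n_iter)) {
    Sy  <- sigma2 * diag(Tn) + Psi %*% Gamma %*% t(Psi)   # Sigma_y = sigma^2 I + Psi Gamma Psi'
    SyI <- solve(Sy)
    # (1) generalised least squares for the mean coefficients beta
    beta <- solve(t(Psi) %*% SyI %*% Psi, t(Psi) %*% SyI %*% colMeans(Y))
    R    <- sweep(Y, 2, as.vector(Psi %*% beta))          # residual curves, N x Tn
    # (2) BLUPs of the random coefficients c_i (rows of Chat)
    Chat <- R %*% SyI %*% Psi %*% Gamma
    Cond <- Gamma - Gamma %*% t(Psi) %*% SyI %*% Psi %*% Gamma  # conditional covariance of c_i
    # (3) EM-type updates of Gamma and sigma^2
    Gamma  <- crossprod(Chat) / N + Cond
    E      <- R - Chat %*% t(Psi)                         # estimated noise curves
    sigma2 <- (sum(E^2) / N + sigma2 * (Tn - sigma2 * sum(diag(SyI)))) / Tn
  }
  list(beta = as.vector(beta), sigma2 = sigma2, Gamma = Gamma, scores = Chat,
       cov_surface = Psi %*% Gamma %*% t(Psi),            # estimated C(s, t) on the grid
       fitted = matrix(as.vector(Psi %*% beta), N, Tn, byrow = TRUE) + Chat %*% t(Psi))
}
```

In this balanced setting a single $\Sigma_y$ is shared by all subjects, which is what makes each iteration cheap: one $T\times T$ inverse per pass rather than one per individual.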

References

1. Ramsay, J.; Silverman, B. Functional Data Analysis; Springer: Berlin/Heidelberg, Germany, 2005.
2. Králík, M.; Klíma, O.; Čuta, M.; Malina, R.M.; Kozieł, S.; Polcerová, L.; Škultétyová, A.; Španěl, M.; Kukla, L.; Zemčík, P. Estimating Growth in Height from Limited Longitudinal Growth Data Using Full-Curves Training Dataset: A Comparison of Two Procedures of Curve Optimization—Functional Principal Component Analysis and SITAR. Children 2021, 8, 934.
3. Ullah, S.; Finch, C. Applications of functional data analysis: A systematic review. BMC Med. Res. Methodol. 2013, 13, 43.
4. Cai, T.; Yuan, M. Nonparametric Covariance Function Estimation for Functional and Longitudinal Data; University of Pennsylvania and Georgia Institute of Technology: Philadelphia, PA, USA, 2010.
5. Diggle, P.; Heagerty, P.; Liang, K.; Zeger, S. Analysis of Longitudinal Data; Oxford University Press: Oxford, UK, 2002.
6. Wood, S.; Wood, M. Package ‘mgcv’. R Package Version 2015, 1, 729.
7. Patterson, H.; Thompson, R. Recovery of inter-block information when block sizes are unequal. Biometrika 1971, 58, 545–554.
8. Laird, N.; Ware, J. Random-effects models for longitudinal data. Biometrics 1982, 38, 963–974.
9. Hall, P.; Müller, H.; Wang, J. Properties of principal component methods for functional and longitudinal data analysis. Ann. Stat. 2006, 34, 1493–1517.
10. Li, Y.; Hsing, T. Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data. Ann. Stat. 2010, 38, 3321–3351.
11. Yao, F.; Müller, H.; Wang, J. Functional data analysis for sparse longitudinal data. J. Am. Stat. Assoc. 2005, 100, 577–590.
12. Paul, D.; Peng, J. Consistency of restricted maximum likelihood estimators of principal components. Ann. Stat. 2009, 37, 1229–1271.
13. Peng, J.; Paul, D. A geometric approach to maximum likelihood estimation of the functional principal components from sparse longitudinal data. J. Comput. Graph. Stat. 2009, 18, 995–1015.
14. Bunea, F.; Ivanescu, A.; Wegkamp, M. Adaptive inference for the mean of a Gaussian process in functional data. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2011, 73, 531–558.
15. Rice, J.; Silverman, B. Estimating the mean and covariance structure nonparametrically when the data are curves. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 1991, 53, 233–243.
16. Yu, F.; Liu, L.; Yu, N.; Ji, L.; Qiu, D. A method of L1-norm principal component analysis for functional data. Symmetry 2020, 12, 182.
17. James, G.; Hastie, T.; Sugar, C. Principal component models for sparse functional data. Biometrika 2000, 87, 587–602.
18. James, G.; Sugar, C. Clustering for sparsely sampled functional data. J. Am. Stat. Assoc. 2003, 98, 397–408.
19. Rice, J.; Wu, C. Nonparametric mixed effects models for unequally sampled noisy curves. Biometrics 2001, 57, 253–259.
20. Shi, M.; Weiss, R.; Taylor, J. An analysis of paediatric CD4 counts for acquired immune deficiency syndrome using flexible random curves. J. R. Stat. Soc. Ser. C Appl. Stat. 1996, 45, 151–163.
21. Antoniadis, A.; Sapatinas, T. Estimation and inference in functional mixed-effects models. Comput. Stat. Data Anal. 2007, 51, 4793–4813.
22. Morris, J.; Carroll, R. Wavelet-based functional mixed models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2006, 68, 179–199.
23. Zhu, H.; Brown, P.; Morris, J. Robust, adaptive functional regression in functional mixed model framework. J. Am. Stat. Assoc. 2011, 106, 1167–1179.
24. Qiu, P.; Zou, C.; Wang, Z. Nonparametric profile monitoring by mixed effects modeling. Technometrics 2010, 52, 265–277.
25. Wu, H.; Zhang, J. Local polynomial mixed-effects models for longitudinal data. J. Am. Stat. Assoc. 2002, 97, 883–897.
26. Wu, H.; Zhang, J. Nonparametric Regression Methods for Longitudinal Data Analysis: Mixed-Effects Modeling Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
27. Rice, J. Functional and longitudinal data analysis: Perspectives on smoothing. Stat. Sinica 2004, 14, 631–647.
28. Wang, J.; Chiou, J.; Müller, H. Functional data analysis. Annu. Rev. Stat. Appl. 2016, 3, 257–295.
29. Cai, T.; Yuan, M. Optimal estimation of the mean function based on discretely sampled functional data: Phase transition. Ann. Stat. 2011, 39, 2330–2355.
30. Fan, J.; Gijbels, I. Local Polynomial Modelling and Its Applications; Routledge: London, UK, 2018.
31. Eilers, P.; Marx, B. Flexible smoothing with B-splines and penalties. Stat. Sci. 1996, 11, 89–102.
32. Acal, C.; Aguilera, A.; Escabias, M. New modeling approaches based on varimax rotation of functional principal components. Mathematics 2020, 8, 2085.
33. Breslow, N.; Clayton, D. Approximate inference in generalized linear mixed models. J. Am. Stat. Assoc. 1993, 88, 9–25.
34. Vonesh, E.; Wang, H.; Nie, L.; Majumdar, D. Conditional second-order generalized estimating equations for generalized linear and nonlinear mixed-effects models. J. Am. Stat. Assoc. 2002, 97, 271–283.
35. Breslow, N.; Lin, X. Bias correction in generalised linear mixed models with a single component of dispersion. Biometrika 1995, 82, 81–91.
36. Lee, Y.; Nelder, J. Hierarchical generalized linear models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 1996, 58, 619–656.
37. Lin, X.; Breslow, N. Bias correction in generalized linear mixed models with multiple components of dispersion. J. Am. Stat. Assoc. 1996, 91, 1007–1016.
38. Ruppert, D.; Wand, M.; Carroll, R. Semiparametric Regression; Cambridge University Press: Cambridge, UK, 2003.
39. Wood, S.; Pya, N.; Säfken, B. Smoothing parameter and model selection for general smooth models. J. Am. Stat. Assoc. 2016, 111, 1548–1563.
40. Vershynin, R. High-Dimensional Probability: An Introduction with Applications in Data Science; Cambridge University Press: Cambridge, UK, 2018.
41. Bickel, P.; Levina, E. Regularized estimation of large covariance matrices. Ann. Stat. 2008, 36, 199–227.
42. Wainwright, M. High-Dimensional Statistics: A Non-asymptotic Viewpoint; Cambridge University Press: Cambridge, UK, 2019.
43. Tsybakov, A.B. Introduction to Nonparametric Estimation; Springer: Berlin/Heidelberg, Germany, 2009.
44. Wood, S. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2011, 73, 3–36.
45. Nelder, J.; Wedderburn, R. Generalized linear models. J. R. Stat. Soc. Ser. A (Gen.) 1972, 135, 370–384.
46. Hall, P.; Müller, H.; Yao, F. Modelling sparse generalized longitudinal observations with latent Gaussian processes. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2008, 70, 703–723.
47. Horn, R.; Johnson, C. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012.
48. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464.
Figure 1. Illustration of the COVID-19 data. The X-axis records the dates from 16 March 2020 to 14 August 2020, and the Y-axis shows the logarithm of the daily infections.
Figure 2. The eigenfunctions, mean function, individual trajectory, and covariance function used in the simulation.
Figure 3. Estimation errors of mean function, individual trajectory, variance, covariance surface, and running time in the three methods.
Figure 4. Investigation of the covariance surface approximation.
Figure 5. Fitted mean functions of daily infection via FPCA and proposed FLMM.
Figure 6. Predictions of the trajectories of six states via FPCA and proposed FLMM.
Figure 7. Fitted covariance surfaces via FPCA and proposed FLMM.