Article

Pooling Bio-Specimens in the Presence of Measurement Error and Non-Linearity in Dose-Response: Simulation Study in the Context of a Birth Cohort Investigating Risk Factors for Autism Spectrum Disorders

1 Department of Environmental and Occupational Health, Dornsife School of Public Health, Drexel University, Philadelphia, PA 19104, USA
2 A.J. Drexel Autism Institute, Dornsife School of Public Health, Drexel University, Philadelphia, PA 19104, USA
3 Department of Public Health Sciences, University of California at Davis, Davis, CA 95616, USA
4 Department of Epidemiology and Biostatistics, Dornsife School of Public Health, Drexel University, Philadelphia, PA 19104, USA
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2015, 12(11), 14780-14799; https://doi.org/10.3390/ijerph121114780
Submission received: 12 October 2015 / Revised: 4 November 2015 / Accepted: 6 November 2015 / Published: 19 November 2015
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)

Abstract

We sought to determine the potential effects of pooling on power, false positive rate (FPR), and bias of the estimated associations between hypothetical environmental exposures and dichotomous autism spectrum disorders (ASD) status. We created simulated birth cohorts in which ASD outcome was assumed to have been ascertained with uncertainty. We investigated the impact on the power of the analysis (using logistic regression) to detect true associations with exposure (X1) and on the FPR for a non-causal correlate of exposure (X2, r = 0.7) for a dichotomized ASD measure when the pool size, sample size, degree of measurement error variance in exposure, strength of the true association, and shape of the exposure-response curve varied. We found minimal change (bias) in the measures of association for the main effect (X1). Pooled models showed some loss of power but a lower chance of detecting a false positive result than individual-level models. The number of pools had more effect on the power and FPR than the overall sample size. This study supports the use of pooling to reduce laboratory costs while maintaining statistical efficiency in scenarios similar to the simulated prospective risk-enriched ASD cohort.

1. Introduction

In autism etiologic research, as in other areas of perinatal research and epidemiologic investigation, there is growing interest in analytic methods that maximize accuracy while minimizing costs. One method under consideration for conserving scarce resources is pooling of bio-samples prior to laboratory analysis, which can preserve biological specimens, enable the study of more biomarkers (exposures), minimize problems related to the limits of detection, reduce the impact of measurement error in exposure variables, and allow inclusion of participants who contributed only a small quantity of bio-samples [1,2,3,4,5,6]. Pooling may also be used to screen batches of samples for an infectious agent or biomarker, after which all specimens in a positive pool are tested individually [7]. Pooling (combining bio-samples from multiple individuals and analyzing them as a single sample) was proposed for public health research and surveillance in 1943 [8] and has been receiving increased attention in the epidemiology literature in the context of non-communicable diseases since 2012 [3,9,10,11,12,13,14,15,16,17,18]. The use of pooling has been investigated in other areas of study, such as perinatal epidemiology [3,19], infectious disease epidemiology [8,20,21,22,23,24], environmental epidemiology [3,10,25], cancer epidemiology [12], and genetics [10,12], but has only recently been studied in the context of research on autism spectrum disorders (ASD) [17,18]. An example of a current research area that may greatly benefit from pooling is the potential role of polychlorinated biphenyls (PCBs) in the etiology of ASD, because the levels of PCBs are low, laboratory analyses are very costly, and the blood volumes needed to achieve acceptable analytical sensitivity are large [26,27].
In applications of the pooling approach, earlier work suggests that a smaller pool size (roughly 5 participants per pool) is ideal, as it yields the most accurate study results, but scarce resources may necessitate larger pools (e.g., with 10 or more individuals per pool), which have received less attention in the literature. Furthermore, a wide range of pooling strategies in the context of epidemiological analyses was presented by Saha-Chaudhuri et al., highlighting that optimal allocation of the number of pools and number of subjects per pool should be informed by the specific confounders and effect modifiers under consideration [3,28]. Little is known about the impact of pooling in the presence of misspecification of the disease risk function (e.g., when investigators assume that there is a monotonic gradient in log(risk) when in fact there is non-linearity), mis-measured covariates, and outcome misclassification. Here we present results of a simulation analysis designed to shed light on the effects of pooling and measurement error that is informed by interim data from the Early Autism Risk Longitudinal Investigation (EARLI). EARLI is an enriched-risk pregnancy cohort (mothers of a child affected by an ASD enrolled at the start of a subsequent pregnancy) intended to generate evidence on environmental exposures important in the etiology of ASD (http://www.earlistudy.org/) [29]. Our previous work, informed by EARLI, elucidated the potential impact of measurement errors in exposures and outcomes under different scenarios of study size, precision of exposure measurement, and magnitude of the true association [30]. We also evaluated the influence of categorization of a mismeasured exposure in this context, with particular attention to non-linearity of the true association [31]. Importantly, EARLI investigators plan to conduct analyses focused on continuous phenotypic measures as well as dichotomous outcomes. Quantitative phenotypes are being increasingly considered in autism etiologic research (such as in references [32,33,34]) and may prove to be important in the study of environmental risk factors. Here we consider the same general scenarios as in our previous paper, involving dichotomous outcomes based on categorization of a quantitative measure, together with pools of different sizes, and aim to determine the potential effects of pooling on power, false positives, and bias of the estimated associations between hypothetical environmental exposures and dichotomous ASD status. We also extend our previous work by considering the utility of pooling when the true shape of the exposure-response association is unknown and categorization of exposures is deemed undesirable.

2. Material and Methods

2.1. Simulated Population

The population and sample parameters are listed in Table 1 and are based on the synthetic population used in our previous work [30,31]. Briefly, we generated a cohort of 1,000,000 children with observed sex (Z) and mismeasured gestational age (Wga) distributions similar to those of the general population in the US, accounting for shortened gestation, on average, among boys [35]. We assumed that the cohort was also exposed to two agents: exposure (1), represented by X1, and exposure (2), represented by X2, which both follow standard normal distributions and are correlated (Pearson ρ = 0.7). We posited that only exposure (1) (with values of X1) exerts causal influence on the ASD-related phenotype. We simulated true gestational age Xga (in weeks) in the cohort to follow a (43 − χ²(3)) distribution, after which 5% of boys were assigned a gestational age one week shorter. The simulated population was created and all analyses were conducted using SAS version 9.2 (SAS Institute, Cary, NC, USA).
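The original cohort was simulated in SAS 9.2; the sketch below illustrates the same data-generating step in Python/NumPy under the assumptions stated above (all variable names, e.g., x1, ga_true, are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_pop = 1_000_000

# Two standard-normal exposures correlated at Pearson rho = 0.7;
# only exposure 1 (x1) is causal for the ASD-related phenotype.
cov = [[1.0, 0.7],
       [0.7, 1.0]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=n_pop).T

# Sex: 1 = male, 0 = female, with equal probability.
sex = rng.binomial(1, 0.5, size=n_pop)

# True gestational age (weeks): 43 minus a chi-square(3) draw,
# with 5% of boys shifted one week earlier.
ga_true = 43.0 - rng.chisquare(3, size=n_pop)
shorter = (sex == 1) & (rng.random(n_pop) < 0.05)
ga_true[shorter] -= 1.0
```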

2.2. Covariate Measurement Error

We assumed that the ith environmental exposure (here: i = 1 or 2) is observed with classical measurement error: Wi = Xi + εi, where εi ~ N(0, σi²). Error was added to the individual level values, as opposed to after pool allocation, implying that errors arise not from the pooling procedure in the laboratory or from laboratory testing errors but from factors such as variation in the etiologically relevant time window when exposure was ascertained and random day-to-day variability in dose. The extent of measurement error for exposure 1 (ME1) was selected to span the plausible range: from good precision of environmental measurements, with error variance equal to about 6% of the true exposure variance, to poor precision, with error variance equal to the true exposure variance. Gestational age was subject to two sources of error: (a) classical error in the observed continuous measure of the length of gestation, Wga = Xga + εga, where εga ~ N(0, σga²), and (b) round-off error due to reporting gestational age in completed weeks between 23 and 43 weeks. All errors were conditionally independent of each other. Specific values used in the simulation studies are given in Table 1.
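As a sketch of the error model just described (continuing the hypothetical arrays from the previous block; the σ² values are the scenario settings from Table 1), classical error is added at the individual level and gestational age is additionally rounded to completed weeks, with out-of-range subjects excluded from the viable population:

```python
import numpy as np  # rng, x1, x2, ga_true, n_pop as in the previous sketch

sigma2_1 = 0.25                                            # ME1 scenario: 0.0625, 0.25 or 1
w1 = x1 + rng.normal(0.0, np.sqrt(sigma2_1), size=n_pop)   # W1 = X1 + eps1
w2 = x2 + rng.normal(0.0, np.sqrt(0.25), size=n_pop)       # W2 = X2 + eps2

# Gestational age: classical error (variance 1/72), then round-off to
# completed weeks; only 23-43 weeks are retained ("viable" population).
ga_noisy = ga_true + rng.normal(0.0, np.sqrt(1.0 / 72.0), size=n_pop)
ga_obs = np.rint(ga_noisy)
viable = (ga_obs >= 23) & (ga_obs <= 43)
```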
Table 1. True and observed values in the simulated population.

Environmental exposure 1 (X1): true values X1 ~ N(0,1), correlated with X2 (Pearson ρ = 0.7); observed values W1 = X1 + ε1; measurement error ε1 ~ N(0, σ²), where σ² ∈ {0.0625, 0.25, 1}; postulated true association with the latent measure of outcome (a): {0.15, 0.25, 0.5}.

Environmental exposure 2 (X2): true values X2 ~ N(0,1), correlated with X1 (Pearson ρ = 0.7); observed values W2 = X2 + ε2; measurement error ε2 ~ N(0, 0.25); postulated true association with the latent measure of outcome (a): 0.

Sex (Z): true values Z ~ Binomial(1, 0.5); observed values Z; no measurement error; postulated true association with the latent measure of outcome (a): 1.

Gestational age (Xga): true values Xga ~ (43 − χ²(3)), with 1 week subtracted from this gestational age for 5% of males; observed values Wga = R(Xga + εga; 23, 43), where R(.) rounds to integer weeks and retains only values within 23 to 43 weeks; measurement error εga ~ N(0, 1/72); postulated true association with the latent measure of outcome (a): 0.1.

Autism endophenotype (latent, Y), with εy ~ N(0,1):
Linear model: YL = β1X1 + β3Z + β4Xga + εy
Semi-linear model 1 (threshold model): if x1 < −1 then YT = β3Z + β4Xga + εy; if x1 ≥ −1 then YT = 1.5 × β1X1 + β3Z + β4Xga + εy
Semi-linear model 2 (saturation model): if x1 < −1 then YS = 1.5 × β1X1 + β3Z + β4Xga + εy; if x1 ≥ −1 then YS = 0.5 × β1X1 + β3Z + β4Xga + εy
Observed values: Y* = R(T(Y); 0, 18), where T(.) transforms Y to a log-normal distribution that matches the observed AOSI in EARLI; measurement error due to rounding by R(.) (b); postulated true association: not applicable; cutoff used for dichotomization: 0–6 versus 7–18.

Notes: AOSI—Autism Observation Scale for Infants; EARLI—Early Autism Risk Longitudinal Investigation; (a) coefficients (β's) of the linear regression, see text and the model definitions above for details; (b) R(f(.); min, max) is the function that rounds values of f(.) to integers and retains only values that fall within the interval [min, max].

2.3. Linear and Semi-Linear Risk Models

We assumed that in addition to X1, sex (sex ratio of 4 males: 1 female [36]) and gestational age [24,37,38] exert causal influence on dichotomous ASD status and continuous ASD-related phenotype (Y). We verified that these yielded parameter estimates within the expected range. As the true shape of the exposure-response curve is unknown, the simulated disease was modeled using the following three forms:
Linear Model:
YL = β1X1 + β2X2 + β3Z + β4Xga + εy
“Threshold” (Semi-Linear) Model:
If x1 < mean(x1) − SD(x1), then YT = 0 × β1X1 + β2X2 + β3Z + β4Xga + εy
If x1 ≥ mean(x1) − SD(x1), then YT = 1.5 × β1X1 + β2X2 + β3Z + β4Xga + εy
“Saturation” (Semi-Linear) Model:
If x1 < mean(x1) − SD(x1), then YS = 1.5 × β1X1 + β2X2 + β3Z + β4Xga + εy
If x1 ≥ mean(x1) − SD(x1), then YS = 0.5 × β1X1 + β2X2 + β3Z + β4Xga + εy
The deviation from linearity is created here by altering the slope of the causal association with X1 above and below the inflection point defined by mean(x1) − SD(x1), using constant multipliers (0 and 1.5 in the threshold model; 1.5 and 0.5 in the saturation model). Thus, in the saturation model, the effect of X1 is 3 times greater below than above the inflection point, because 1.5 × β1X1 / (0.5 × β1X1) = 3. Likewise, in the threshold model, the effect of X1 above the inflection point is 1.5 times that in the linear model, because 1.5 × β1X1 / (β1X1) = 1.5. These multipliers were chosen arbitrarily and yet have numerical values that yield plausible “average” associations if the linear model were fitted to the data. For all models εy ~ N(0,1), and β2 was 0 since X2 had no true effect on Y. The latent values of the ASD-related phenotype (YL, YT, YS) were computed for each of three β1 ∈ {0.15, 0.25, 0.5}, corresponding to weak, moderate and strong associations as judged by the expected odds ratios (eOR1) that result if X1 and YL are dichotomized at one standard deviation above their respective means, i.e., eOR1 = 1.5, 2, and 4.
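A sketch of the three latent-outcome models above (Python used for illustration; β3 = 1 and β4 = 0.1 are read from Table 1, β2 = 0, and the arrays continue from the earlier sketches):

```python
import numpy as np  # rng, x1, sex, ga_true, n_pop as in the earlier sketches

beta1, beta3, beta4 = 0.25, 1.0, 0.1          # beta1 in {0.15, 0.25, 0.5}; beta2 = 0
eps_y = rng.normal(0.0, 1.0, size=n_pop)
knot = x1.mean() - x1.std()                   # inflection point: mean(x1) - SD(x1), ~ -1

baseline = beta3 * sex + beta4 * ga_true + eps_y
y_linear = beta1 * x1 + baseline

slope_threshold = np.where(x1 < knot, 0.0, 1.5)   # "threshold": no X1 effect below the knot
y_threshold = slope_threshold * beta1 * x1 + baseline

slope_saturation = np.where(x1 < knot, 1.5, 0.5)  # "saturation": weaker X1 effect above the knot
y_saturation = slope_saturation * beta1 * x1 + baseline
```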

2.4. Case Definition and Outcome Misclassification

The outcome measure simulated for this particular work was the Autism Observation Scale for Infants (AOSI) score, which takes on values of 0–18 [39]. The latent continuous ASD-related endophenotype was subjected to rounding error and categorized using a cutoff suggested by prior work (Zwaigenbaum L., personal communication). For YL, YT and YS, Y*C = 0 if Y* is 0–6 and Y*C = 1 if Y* is 7–18 (Table 1). For all exposure-disease models, subjects with a high AOSI score (at least 7) were considered to be cases. We do not equate a high AOSI score with a clinical diagnosis of ASD; rather, we treat ASD as a collection of traits that naturally occur on a continuous scale and are segregated into binary disease and healthy groups based on some criterion. This is consistent with the current conceptualization of ASD as a “spectrum” of phenotypes, not a definitive state common to all cases, as would be true of another condition such as death from a cardiac event or acquisition of an infection. However, diagnostic thresholds are considered essential in clinical practice as well as in epidemiology, and the chosen cutoff of 7 has been suggested to have clinical and/or etiologic significance. One can argue that some AOSI sub-scales may be more related to environmental exposures than others, but this is not our focus here; our argument applies to any measure of a continuous trait with AOSI-like properties.
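The exact transform T(.) that maps the latent phenotype onto the observed AOSI distribution in EARLI is not reproduced here; the sketch below uses a rank-based mapping onto a log-normal distribution with purely illustrative parameters (mu_log, sigma_log), followed by rounding to 0–18 and dichotomization at 7, to show the shape of this step only.

```python
import numpy as np
from scipy import stats  # y_linear, n_pop as in the earlier sketches

mu_log, sigma_log = 1.0, 0.6                       # hypothetical log-normal parameters
ranks = stats.rankdata(y_linear) / (n_pop + 1)     # map latent Y to (0, 1) by rank
aosi = np.rint(stats.lognorm.ppf(ranks, s=sigma_log, scale=np.exp(mu_log)))
aosi = np.clip(aosi, 0, 18)                        # simplification of the R(.; 0, 18) step
case = (aosi >= 7).astype(int)                     # high AOSI (>= 7) treated as a case
```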

2.5. Pool Construction and Composition and Simulated Cohorts

Strata (based on sex and case status) and pool allocation are illustrated in Figure 1. For each model, strength of the true association of X1, variance of ME1, and shape of the exposure-disease relationship, members of the population were divided into 4 strata based on the value of their dichotomous AOSI score (high versus low) and sex, similar to the pooling strategy suggested by Weinberg and Umbach [6]. Each stratum was divided into pools. One thousand cohorts (i.e., study samples) each of size (n) 225, 450 and 675 were randomly selected with 5, 10 and 15 individuals per pool (g will be used to refer to the pool size), respectively. The pools were allocated as follows: 10 pools for male controls, 18 pools for female controls, 12 pools for male cases and five pools for female cases. Many pool allocation schemes are possible but this scheme reflects a plausible situation that would arise when stratifying on an established strong risk factor (e.g., sex) and maintaining pools of uniform size within a study. In this scheme, the overall cost of laboratory analyses, which is determined by the number of pools, is kept constant across simulations, i.e., reflecting the reality of having to conduct a study on a fixed budget.
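A sketch of this allocation scheme (hypothetical pandas column names; pooled biomarker values are taken as the within-pool mean, and the 10/18/12/5 allocation above is encoded in pools_per_stratum):

```python
import numpy as np
import pandas as pd

# (case, sex) -> number of pools: male controls, female controls, male cases, female cases
pools_per_stratum = {(0, 1): 10, (0, 0): 18, (1, 1): 12, (1, 0): 5}

def draw_pooled_sample(cohort, g, pools_per_stratum, seed=0):
    """cohort: one simulated study sample; g: pool size; returns one row per pool."""
    rows = []
    for (case, sex), n_pools in pools_per_stratum.items():
        stratum = cohort[(cohort["case"] == case) & (cohort["sex"] == sex)]
        members = stratum.sample(n=n_pools * g, random_state=seed)
        members = members.assign(pool=np.repeat(np.arange(n_pools), g))
        pooled = members.groupby("pool").agg(
            w1=("w1", "mean"), w2=("w2", "mean"), ga=("ga_obs", "mean"))
        rows.append(pooled.assign(case=case, sex=sex, pool_size=g))
    return pd.concat(rows, ignore_index=True)
```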
Figure 1. Strata and pool allocation. ME1—measurement error variance for the causal exposure; sample size 225—5 individuals per pool; sample size 450—10 individuals per pool; sample size 675—15 individuals per pool.

2.6. Assessing the Effect of Pooling

The pooled models were in the form of:
logit(Pr(Y*C,pool = 1)) = c0 + c1W1 + c2W2 + c3Z + c4Wga,
with offset = ln(rgz), where rgz = (# case pools of size g and sex z) / (# control pools of size g and sex z).
Thus, for logistic models, exp(c1) is the odds ratio for the estimated effect of X1 (OR1) on the odds of AOSI ≥ 7 and exp(c2) is the odds ratio for the observed effect of X2 (OR2) on the odds of AOSI ≥ 7. All independent variables except for Z were treated as continuous predictors of the dichotomized AOSI score. We also fitted analogous logistic regression models with values of exposure that were not contaminated by measurement error, e.g., with X1 instead of W1, etc. These models were repeated for YT and YS. The pooled models were compared to the individual level population and replicate models of the form:
logit(Pr(Y*C = 1)) = c0 + c1X1 + c2X2 + c3Z + c4Xga (logistic)
Regression models that did not converge or with an OR1 or OR2 less than or equal to 0.1 or at least 10 were considered unstable and not included in bias calculations.
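A minimal sketch of the pooled logistic model with the ln(rgz) offset, using statsmodels (column names follow the hypothetical pooled data frame from the pooling sketch above; pool size is constant within a study, so the offset varies only by sex):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_pooled_logit(pooled):
    """pooled: one row per pool, with columns w1, w2, ga, sex, case."""
    # r_gz = (# case pools) / (# control pools) within each sex stratum
    counts = pooled.groupby(["sex", "case"]).size().unstack("case")
    r_gz = counts[1] / counts[0]
    offset = np.log(pooled["sex"].map(r_gz))

    X = sm.add_constant(pooled[["w1", "w2", "sex", "ga"]])
    fit = sm.GLM(pooled["case"], X, family=sm.families.Binomial(), offset=offset).fit()
    return np.exp(fit.params)      # odds ratios; exp(c1) is OR1, exp(c2) is OR2
```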

2.7. Comparison of the Replicates to the Population

We compared the replicate results to the population results to get an idea of how close pooled estimates are to the true parameter values. For each combination of model, strength of the true association, variance of ME1, and sample (or pool) size, the mean OR, and the power of the analysis and bias in the OR for exposure 1 were calculated for the models without measurement error. These quantities as well as the false positive rate (FPR) and bias in the OR for exposure 2 were calculated for the models with measurement error.
The numerator for the power and FPR calculation was the number of models for which OR1 and OR2, respectively, were 0.1 or less, at least 10, or statistically significant (α=0.05). The bias was calculated as:
bias(ORW) = [Σ over replicates k of (ORWk / ORX,population)] / (# stable replicates)
for replicate ORs between 0.1 and 10. The percent mean difference in OR between the pooled and individual level replicate analyses was calculated for X1 and X2.
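A sketch of these replicate summaries (hypothetical NumPy arrays of replicate ORs and p-values; power and FPR are counted over all replicates, while bias and % mean change use only stable replicates with 0.1 < OR < 10):

```python
import numpy as np

def power_or_fpr(ors, pvals, alpha=0.05):
    """Share of replicates with OR <= 0.1, OR >= 10, or p < alpha (as a percent)."""
    hits = (ors <= 0.1) | (ors >= 10) | (pvals < alpha)
    return 100.0 * hits.mean()

def bias_ratio(ors, or_population):
    """Mean ratio of replicate ORs to the population OR, among stable replicates."""
    stable = (ors > 0.1) & (ors < 10)
    return np.mean(ors[stable] / or_population)

def pct_mean_change(or_pooled, or_individual):
    """Mean percent difference between pooled and individual-level ORs (stable pooled ORs only)."""
    stable = (or_pooled > 0.1) & (or_pooled < 10)
    return 100.0 * np.mean((or_pooled[stable] - or_individual[stable]) / or_pooled[stable])
```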

2.8. Comparison of the Individual Level Replicate Analysis and Pools of Different Sizes

A comparison of the pooled results to the individual level replicate results gives us an idea of what information is lost due to pooling. For each replicate of sample size 675 with a linear exposure-disease relationship, individual level (g = 1) and pooled analyses were compared. Pools of size 5 and 15 were analyzed for all replicates. This represents scenarios where the cost of analysis increases for a fixed cohort size as the pools become smaller. The power, FPR, and bias in OR1 and OR2 were compared.

3. Results

The characteristics of the viable population are similar to what was expected based on the simulation parameters (data not shown). The logit plots of the linear, threshold and saturation models appear to be very similar despite the fact that they have potentially very different biological implications (Figure 2). Unless otherwise stated, the description of the results pertains to the linear model.
Figure 2. Distribution of high AOSI scores in the linear, threshold and saturation models. AOSI—Autism Observation Scale for Infants.
The proportion of replicates with a stable logistic regression model (one that converged within 50 iterations and yielded OR1 and OR2 between 0.1 and 10, not inclusive) dropped sharply from the moderate effect (eOR = 2), for which nearly all replicates produced a stable regression model, to the strong effect (eOR = 4), and decreased with increasing sample size (Table S1). There was a greater number of stable regression models for the semi-linear models than for the linear models when the eOR was 4. Overall, the models with greater measurement error had a higher proportion of stable regression models. The characteristics of the unstable models for pool size 15, eOR = 4 and the smallest measurement error variance are in Table S2. (This scenario was chosen as it had the greatest number of unstable replicates.) Nearly all of these unstable replicates had OR1 ≥ 10 and half of these also had OR2 ≥ 10.
We next compared results of the pooled replicate analysis with the individual level population analysis and the individual level replicate analysis. There was a loss of power for many of the pooled models (e.g., the power was 25%–50% for the cohort of size 450 and eOR of 2) but little change in OR1 (all bias terms between 0.8 and 1.2 except in cases of a strong true association or large variance in measurement error) (Table 2, Figure 3). For all combinations of pool size and strength of the true association, except eOR = 4, there was an increase in power as the pool size (and sample size) increased. The FPR was relatively low for the pooled analysis: less than 15% unless there was a strong association (eOR = 4), or a moderate association (eOR = 2) and large variance in measurement error. Likewise, the mean OR2 (i.e., bias in the estimate for the non-causal exposure) was low (1.0–1.1 in most of the scenarios) except in the models with large measurement error and a strong true association (eOR = 4), when mean estimates of OR2 were 1.4–1.6 (data not shown). In general, the power for detecting associations of X1 and W1 was slightly lower for the semi-linear models than for the linear models, but the bias in the estimate of the overall “average effect” was similar (Figure 3). However, for the strong true association, large measurement error variance and, especially, larger pool sizes, the semi-linear models had higher power than the linear models.
Table 2. Performance of pooled logistic regression analysis: averages of 1000 simulation realizations for each scenario with a consistent number of pools (45). Power, bias and FPR compare the pooled analysis to the population analysis (exposure 1: power, bias; exposure 2: FPR); the % mean changes in OR compare the pooled analysis to the individual level replicate analysis.

Cohort (Pool) Size | eOR1 (eβ1) | ME X1 (σ²) | Power (%) (a) | Bias (b) | FPR (%) (c) | % Mean Change in OR, Exposure 1 (d) | % Mean Change in OR, Exposure 2 (d)

Linear model
225 (5) | 1.5 (0.15) | Truth (e) | 26.3–27.5 | 1.1 | 3.1–3.6 | – | –
225 (5) | 1.5 (0.15) | 0.0625 | 15.7 | 1.1 | 4.4 | 3.2 | 0.0
225 (5) | 1.5 (0.15) | 0.25 | 13.1 | 1.0 | 5.4 | 2.7 | 0.2
225 (5) | 1.5 (0.15) | 1 | 9.3 | 0.9 | 6.8 | 1.2 | 1.1
225 (5) | 2.0 (0.25) | Truth (e) | 60.8–63.3 | 1.1 | 5.7–6.5 | – | –
225 (5) | 2.0 (0.25) | 0.0625 | 36.0 | 1.1 | 4.8 | 6.1 | −0.8
225 (5) | 2.0 (0.25) | 0.25 | 29.4 | 1.0 | 4.2 | 4.3 | 0.5
225 (5) | 2.0 (0.25) | 1 | 16.6 | 0.8 | 11.4 | 2.4 | 2.4
225 (5) | 4.0 (0.5) | Truth (e) | 98.7–98.9 | 1.3 | 10.6–12.3 | – | –
225 (5) | 4.0 (0.5) | 0.0625 | 82.1 | 1.3 | 3.9 | 13.6 | 0.2
225 (5) | 4.0 (0.5) | 0.25 | 69.3 | 1.0 | 7.6 | 10.8 | 1.9
225 (5) | 4.0 (0.5) | 1 | 41.5 | 0.7 | 23.6 | 6.0 | 4.0
450 (10) | 1.5 (0.15) | Truth (e) | 40.3–41.6 | 1.1 | 3.7–4.1 | – | –
450 (10) | 1.5 (0.15) | 0.0625 | 23.3 | 1.1 | 4.2 | 3.9 | −0.7
450 (10) | 1.5 (0.15) | 0.25 | 16.8 | 1.0 | 4.7 | 2.9 | 0.5
450 (10) | 1.5 (0.15) | 1 | 12.5 | 0.9 | 8.7 | 1.9 | 1.5
450 (10) | 2.0 (0.25) | Truth (e) | 82.0–83.4 | 1.1–1.2 | 7.8–8.6 | – | –
450 (10) | 2.0 (0.25) | 0.0625 | 52.1 | 1.1 | 5.5 | 8.3 | −1.0
450 (10) | 2.0 (0.25) | 0.25 | 41.4 | 1.0 | 5.3 | 7.2 | 0.3
450 (10) | 2.0 (0.25) | 1 | 25.0 | 0.8 | 15.8 | 3.6 | 3.9
450 (10) | 4.0 (0.5) | Truth (e) | 92.9–93.6 | 1.4 | 16.6–18.3 | – | –
450 (10) | 4.0 (0.5) | 0.0625 | 80.2 | 1.3 | 18.4 | 16.2 | −0.7
450 (10) | 4.0 (0.5) | 0.25 | 74.2 | 1.1 | 14.2 | 16.6 | 2.0
450 (10) | 4.0 (0.5) | 1 | 51.8 | 0.7 | 25.7 | 10.1 | 6.6
675 (15) | 1.5 (0.15) | Truth (e) | 48.3–50.5 | 1.1 | 4.9–5.0 | – | –
675 (15) | 1.5 (0.15) | 0.0625 | 28.8 | 1.1 | 3.9 | 5.4 | −0.8
675 (15) | 1.5 (0.15) | 0.25 | 24.0 | 1.0 | 4.8 | 5.1 | −1.1
675 (15) | 1.5 (0.15) | 1 | 15.3 | 0.9 | 9.7 | 2.8 | 1.3
675 (15) | 2.0 (0.25) | Truth (e) | 87.8–89.4 | 1.2 | 9.4–11.3 | – | –
675 (15) | 2.0 (0.25) | 0.0625 | 57.9 | 1.2 | 6.7 | 11.2 | −0.2
675 (15) | 2.0 (0.25) | 0.25 | 43.5 | 1.1 | 6.1 | 9.0 | 0.7
675 (15) | 2.0 (0.25) | 1 | 26.9 | 0.8 | 16.5 | 4.5 | 3.7
675 (15) | 4.0 (0.5) | Truth (e) | 82.2–84.9 | 1.3–1.4 | 15.3–16.8 | – | –
675 (15) | 4.0 (0.5) | 0.0625 | 75.2 | 1.4 | 42.8 | 18.3 | −2.9
675 (15) | 4.0 (0.5) | 0.25 | 65.1 | 1.1 | 33.8 | 17.0 | 2.3
675 (15) | 4.0 (0.5) | 1 | 28.8 | 0.7 | 12.2 | 0.1 | 0.1
Threshold model
225 (5) | 1.5 (0.15) | Truth (e) | 22.6–24.4 | 1.1 | 2.9–3.3 | – | –
225 (5) | 1.5 (0.15) | 0.0625 | 13.4 | 1.1 | 3.6 | 2.9 | −0.1
225 (5) | 1.5 (0.15) | 0.25 | 13.9 | 1.0 | 5.6 | 2.8 | −0.1
225 (5) | 1.5 (0.15) | 1 | 9.1 | 0.9 | 6.3 | 1.2 | 0.4
225 (5) | 2.0 (0.25) | Truth (e) | 50.8–54.4 | 1.1 | 4.3–4.8 | – | –
225 (5) | 2.0 (0.25) | 0.0625 | 30.1 | 1.1 | 5.0 | 5.0 | −0.9
225 (5) | 2.0 (0.25) | 0.25 | 25.7 | 1.0 | 5.3 | 4.2 | 0.1
225 (5) | 2.0 (0.25) | 1 | 16.5 | 0.9 | 8.9 | 2.4 | 1.8
225 (5) | 4.0 (0.5) | Truth (e) | 96.4–96.9 | 1.2–1.3 | 7.8–8.8 | – | –
225 (5) | 4.0 (0.5) | 0.0625 | 73.2 | 1.2 | 3.8 | 9.4 | −0.9
225 (5) | 4.0 (0.5) | 0.25 | 63.8 | 1.0 | 6.8 | 7.9 | 0.8
225 (5) | 4.0 (0.5) | 1 | 38.1 | 0.7 | 20.1 | 4.4 | 3.3
450 (10) | 1.5 (0.15) | Truth (e) | 36.5–36.7 | 1.1 | 3.4–3.6 | – | –
450 (10) | 1.5 (0.15) | 0.0625 | 20.6 | 1.1 | 4.8 | 3.1 | 0.0
450 (10) | 1.5 (0.15) | 0.25 | 17.0 | 1.0 | 4.9 | 2.9 | 0.4
450 (10) | 1.5 (0.15) | 1 | 10.3 | 0.9 | 9.0 | 1.7 | 1.2
450 (10) | 2.0 (0.25) | Truth (e) | 73.5–76.2 | 1.1 | 6.1–6.9 | – | –
450 (10) | 2.0 (0.25) | 0.0625 | 44.0 | 1.1 | 4.3 | 6.6 | 0.3
450 (10) | 2.0 (0.25) | 0.25 | 35.9 | 1.0 | 5.3 | 6.0 | 0.9
450 (10) | 2.0 (0.25) | 1 | 21.5 | 0.8 | 14.1 | 3.2 | 2.2
450 (10) | 4.0 (0.5) | Truth (e) | 95.0–97.1 | 1.2–1.3 | 10.8–11.5 | – | –
450 (10) | 4.0 (0.5) | 0.0625 | 79.8 | 1.2 | 6.7 | 12.7 | −0.8
450 (10) | 4.0 (0.5) | 0.25 | 71.6 | 1.0 | 8.4 | 11.2 | 1.6
450 (10) | 4.0 (0.5) | 1 | 47.5 | 0.7 | 25.1 | 6.8 | 5.2
675 (15) | 1.5 (0.15) | Truth (e) | 42.9–43.9 | 1.1 | 4.2–4.6 | – | –
675 (15) | 1.5 (0.15) | 0.0625 | 22.3 | 1.1 | 3.7 | 4.9 | −1.0
675 (15) | 1.5 (0.15) | 0.25 | 20.2 | 1.0 | 4.3 | 3.9 | −0.5
675 (15) | 1.5 (0.15) | 1 | 13.2 | 0.9 | 8.0 | 2.0 | 1.2
675 (15) | 2.0 (0.25) | Truth (e) | 80.8–83.1 | 1.2 | 7.5–8.5 | – | –
675 (15) | 2.0 (0.25) | 0.0625 | 50.3 | 1.1 | 4.6 | 8.4 | −0.6
675 (15) | 2.0 (0.25) | 0.25 | 39.5 | 1.0 | 5.8 | 6.6 | 1.4
675 (15) | 2.0 (0.25) | 1 | 25.3 | 0.9 | 14.5 | 3.8 | 3.5
675 (15) | 4.0 (0.5) | Truth (e) | 85.0–86.9 | 1.3 | 12.2–13.3 | – | –
675 (15) | 4.0 (0.5) | 0.0625 | 71.2 | 1.3 | 22.8 | 15.8 | −3.5
675 (15) | 4.0 (0.5) | 0.25 | 63.4 | 1.0 | 19.0 | 11.8 | 3.8
675 (15) | 4.0 (0.5) | 1 | 48.7 | 0.8 | 27.3 | 8.3 | 5.9
Saturation model
225 (5) | 1.5 (0.15) | Truth (e) | 19.4–22.5 | 1.0–1.1 | 2.3–3.5 | – | –
225 (5) | 1.5 (0.15) | 0.0625 | 14.6 | 1.1 | 5.2 | 3.4 | 0.1
225 (5) | 1.5 (0.15) | 0.25 | 12.9 | 1.0 | 5.1 | 2.8 | −0.8
225 (5) | 1.5 (0.15) | 1 | 9.0 | 0.9 | 6.3 | 0.8 | −0.1
225 (5) | 2.0 (0.25) | Truth (e) | 49.7–52.7 | 1.1 | 4.7–5.4 | – | –
225 (5) | 2.0 (0.25) | 0.0625 | 29.4 | 1.1 | 4.3 | 4.7 | −0.2
225 (5) | 2.0 (0.25) | 0.25 | 22.6 | 1.0 | 4.6 | 3.8 | 0.1
225 (5) | 2.0 (0.25) | 1 | 14.9 | 0.9 | 10.1 | 2.5 | 1.9
225 (5) | 4.0 (0.5) | Truth (e) | 95.5–96.1 | 1.2 | 10.1–10.6 | – | –
225 (5) | 4.0 (0.5) | 0.0625 | 70.7 | 1.2 | 3.6 | 10.6 | −0.1
225 (5) | 4.0 (0.5) | 0.25 | 60.4 | 1.0 | 6.2 | 8.8 | 0.5
225 (5) | 4.0 (0.5) | 1 | 37.3 | 0.7 | 19.5 | 4.6 | 3.4
450 (10) | 1.5 (0.15) | Truth (e) | 32.9–33.3 | 1.1 | 3.4–4.1 | – | –
450 (10) | 1.5 (0.15) | 0.0625 | 19.1 | 1.1 | 5.2 | 3.6 | −0.6
450 (10) | 1.5 (0.15) | 0.25 | 16.3 | 1.0 | 5.1 | 3.4 | −0.1
450 (10) | 1.5 (0.15) | 1 | 10.0 | 0.9 | 7.3 | 1.7 | 1.4
450 (10) | 2.0 (0.25) | Truth (e) | 72.2–73.8 | 1.1 | 6.7–7.0 | – | –
450 (10) | 2.0 (0.25) | 0.0625 | 42.8 | 1.1 | 4.3 | 6.5 | 0.7
450 (10) | 2.0 (0.25) | 0.25 | 37.4 | 1.0 | 5.4 | 5.4 | 1.2
450 (10) | 2.0 (0.25) | 1 | 19.2 | 0.9 | 11.3 | 3.1 | 2.9
450 (10) | 4.0 (0.5) | Truth (e) | 95.3–96.6 | 1.4 | 15.9–17.7 | – | –
450 (10) | 4.0 (0.5) | 0.0625 | 78.1 | 1.3 | 7.6 | 16.8 | −0.4
450 (10) | 4.0 (0.5) | 0.25 | 72.2 | 1.1 | 9.8 | 14.6 | 2.7
450 (10) | 4.0 (0.5) | 1 | 45.4 | 0.8 | 25.8 | 7.6 | 5.8
675 (15) | 1.5 (0.15) | Truth (e) | 41.3–41.8 | 1.1 | 4.6–5.3 | – | –
675 (15) | 1.5 (0.15) | 0.0625 | 23.7 | 1.1 | 3.8 | 5.6 | −1.1
675 (15) | 1.5 (0.15) | 0.25 | 18.6 | 1.0 | 5.7 | 4.1 | 0.3
675 (15) | 1.5 (0.15) | 1 | 11.9 | 0.9 | 6.2 | 2.4 | 1.3
675 (15) | 2.0 (0.25) | Truth (e) | 78.7–82.6 | 1.1–1.2 | 8.9–9.3 | – | –
675 (15) | 2.0 (0.25) | 0.0625 | 48.4 | 1.2 | 5.4 | 10.3 | −1.7
675 (15) | 2.0 (0.25) | 0.25 | 38.0 | 1.0 | 6.4 | 7.1 | 1.1
675 (15) | 2.0 (0.25) | 1 | 23.6 | 0.9 | 12.8 | 4.2 | 3.4
675 (15) | 4.0 (0.5) | Truth (e) | 85.4–87.3 | 1.4 | 15.5–18.2 | – | –
675 (15) | 4.0 (0.5) | 0.0625 | 70.2 | 1.4 | 23.6 | 17.5 | −1.7
675 (15) | 4.0 (0.5) | 0.25 | 62.6 | 1.1 | 19.2 | 14.3 | 4.4
675 (15) | 4.0 (0.5) | 1 | 43.7 | 0.8 | 26.5 | 10.5 | 7.9

Notes: eOR—expected odds ratio; FPR—false positive rate; ME—measurement error; OR—odds ratio; (a) Power = (# models with ORx1 ≤ 0.1, ORx1 ≥ 10, or statistically significant ORx1) / (total # of replicates); (b) Bias = Σk=1..1000 (ORx1k pooled / ORx1 population) / (# of stable replicates), where 0.1 < ORx1k < 10; (c) FPR = (# models with ORx2 ≤ 0.1, ORx2 ≥ 10, or statistically significant ORx2) / (total # of replicates); (d) % mean change in OR = 100 × Σk=1..1000 [(ORxk pooled − ORxk individual) / ORxk pooled] / (# of replicates with 0.1 < ORxk pooled < 10); (e) Models without measurement error. Values vary as a different viable population was selected for each measurement error scenario.
Figure 3. Mean differences in power and FPR due to pooling in 1000 replicates for models with measurement error. eOR—expected odds ratio; FPR—false positive rate; OR—odds ratio. (a) Power = (# models with ORx1 ≤ 0.1, ORx1 ≥ 10, or statistically significant ORx1) / (total # of replicates); (b) FPR = (# models with ORx2 ≤ 0.1, ORx2 ≥ 10, or statistically significant ORx2) / (total # of replicates).

Comparison of Pool (g) and Sample Sizes (n)

When the pool and cohort size varied, a number of trends emerged (Table 2). Larger cohort and pool sizes (n ≥ 450 with g ≥ 10) had more power to detect a relationship than smaller pool sizes for smaller true effects with the linear model (e.g., for moderate ME1 and eOR = 2, power = 41%–44% for n = 450 or 675 and 29% for n = 225).
Table 3. Comparison of linear model, individual level and pool sizes of 5 and 15 for the same replicates (linear model only, cohort size = 675). For each analysis (individual level analysis, i.e., pool size 1; pool size 5; pool size 15) the columns are: stable replicates (a) | Power (%) (b) for exposure 1 | Bias (c) for exposure 1 | FPR (%) (d) for exposure 2.

eOR1 (eβ1) | ME1 (σ²) | Pool size 1: stable | Power | Bias | FPR | Pool size 5: stable | Power | Bias | FPR | Pool size 15: stable | Power | Bias | FPR
1.5 (0.15) | Truth (e) | 1000 | 97.5–98.4 | 1.0 | – | 997–999 | 92.0–93.8 | 1.1 | – | 997–999 | 48.3–50.5 | 1.1 | –
1.5 (0.15) | 0.0625 | 1000 | 90.5 | 1.0 | 52.8 | 992 | 84.1 | 1.1 | 54.8 | 992 | 28.8 | 1.1 | 3.9
1.5 (0.15) | 0.25 | 1000 | 86.3 | 0.9 | 53.8 | 994 | 79.0 | 1.0 | 57.1 | 994 | 24.0 | 1.0 | 4.8
1.5 (0.15) | 1 | 1000 | 75.6 | 0.9 | 66.6 | 994 | 70.5 | 0.9 | 61.7 | 994 | 15.3 | 0.9 | 9.7
2.0 (0.25) | Truth (e) | 1000 | 100.0 | 1.0 | – | 967–977 | 99.7–99.8 | 1.2 | – | 967–977 | 87.8–89.4 | 1.2 | –
2.0 (0.25) | 0.0625 | 1000 | 99.2 | 1.0 | 55.4 | 947 | 96.5 | 1.2 | 57.5 | 947 | 57.9 | 1.2 | 6.7
2.0 (0.25) | 0.25 | 1000 | 99.0 | 0.9 | 62.4 | 970 | 94.0 | 1.1 | 63.1 | 970 | 43.5 | 1.1 | 6.1
2.0 (0.25) | 1 | 1000 | 93.2 | 0.8 | 82.0 | 978 | 84.9 | 0.8 | 75.1 | 978 | 26.9 | 0.8 | 16.5
4.0 (0.5) | Truth (e) | 1000 | 100.0 | 1.0 | – | 522–537 | 100.0 | 1.3–1.4 | – | 522–537 | 82.2–84.9 | 1.3–1.4 | –
4.0 (0.5) | 0.0625 | 1000 | 100.0 | 0.9 | 53.1 | 458 | 99.7 | 1.4 | 68.9 | 458 | 75.2 | 1.4 | 42.8
4.0 (0.5) | 0.25 | 1000 | 100.0 | 0.8 | 75.4 | 603 | 98.2 | 1.1 | 71.6 | 603 | 65.1 | 1.1 | 33.8
4.0 (0.5) | 1 | 1000 | 100.0 | 0.6 | 99.5 | 769 | 96.0 | 0.7 | 88.9 | 769 | 28.8 | 0.7 | 12.2

Notes: eOR—expected odds ratio; FPR—false positive rate; ME1—measurement error variance for exposure 1; OR—odds ratio; (a) Number of replicates for which ORx1/ORw1 and ORw2 were between 0.1 and 10; (b) Power = (# models with ORx1 ≤ 0.1, ORx1 ≥ 10, or statistically significant ORx1) / (total # of replicates); (c) Bias = Σk=1..1000 (ORxk / ORx population) / (# of stable replicates), where 0.1 < ORx1k < 10; (d) FPR = (# models with ORx2 ≤ 0.1, ORx2 ≥ 10, or statistically significant ORx2) / (total # of replicates); (e) Models without measurement error. Values vary as a different viable population was selected for each measurement error scenario.
This was also true for the threshold and saturation models. However, for the stronger true effect (eOR = 4) and moderate-to-high measurement error variance, the power was greatest for pool size 10 (n = 450) for the linear and semi-linear models. When there was no measurement error, power increased with increasing pool and sample size for small and moderate true associations and decreased with increasing sample size when the true effect of X1 was large. In general, larger sample size resulted in a slight increase in FPR for a moderate true effect (FPR = 4%, 5% and 6% for eOR = 2 and moderate ME1). For small and large eOR the relationship between FPR and pool and sample size was more complex.
A comparison of the linear model with a cohort size of 675 analyzed at the individual level and using pools of size 5 and 15 is shown in Table 3. For more modest eORs (1.5 or 2), the results of the individual-level logistic regression and the analysis of pools of size 5 were very similar. There was a dramatic drop in power and FPR between pool sizes of 5 and 15. The bias was identical for the two pool sizes. However, for a stronger true association (eOR = 4) and smaller measurement error variance, the bias in OR1 was in the opposite direction compared to the individual level models. The bias in the association of the mis-measured non-causal exposure (mean OR2) increased slightly when the pool size changed from 1 (individual level analysis) to 5, but there was no difference between pool sizes of 5 and 15 (data not shown). The number of replicates for which a logistic model was stable was the same for pool sizes 5 and 15.

4. Discussion

This study supports the use of pooling as an efficient means to reduce laboratory costs in scenarios similar to this specific prospective risk-enriched ASD cohort that aims to elucidate environmental associations (EARLI). The results of this series of simulations indicate that there is some loss of power for pooled compared to individual level models. We also quantified the resulting reduction in the false positive rate between pool sizes of 5 and 15 when the cohort size was held constant. In addition, there was minimal change in the measure of association for the main effect (X1). This was true for the linear as well as the semi-linear models. Consistent with previous work [30], the models with measurement error had lower power than models without measurement error.
We looked at larger pool sizes than many of the previous articles about pooling methods and simulations (e.g., [3,12,28]). Saha-Chaudhuri et al. [28] and Saha-Chaudhuri and Weinberg [3] found that pooling had little effect on power with large sample sizes (but did not consider pool sizes larger than 6) and recommended smaller pool sizes. Small pool sizes are indeed ideal in many circumstances, but as Dorfman observed in 1943, financial constraints may have an effect on the feasible pool sizes and number of pools [8]. Pool size had little effect on bias but greatly affected the power and FPR in our simulations. In addition, we found that pool size 5 is best if it is suspected that the true association is weak, and pool size 10 may be better when a stronger true effect is hypothesized. A wide range of putative strengths of exposure-autism associations was included here to model the possibility of finding either a weak risk factor or a “smoking gun” (e.g., as exists for the four-fold increased risk of ASD for males).
We demonstrated that even pool sizes as large as 15 lead to fairly accurate results, albeit with considerable loss of power, even when measurement error is taken into consideration, in scenarios similar to those of EARLI. Furthermore, the larger pool size resulted in a much lower FPR, which may be desirable in circumstances where multiple exposures will be examined, such as research into the etiology of ASD where well-founded a priori hypotheses are lacking. The number of pools has more of an effect on power than the overall number of subjects. Although the loss in power for pool size of 15 makes it hard to justify looking at larger pool sizes for the sample sizes and number of pools under consideration here, investigators with considerably larger samples should consider running this set of simulations with a larger pool size. Fortunately, the measures of association are not very biased for large pool sizes, so if small pool sizes are cost prohibitive, researchers should focus more on the magnitude of these associations than on significance testing.
It should also be noted that for these numbers of pools (and sample sizes) we were unable to stratify by more than two dichotomous variables (case status and sex). Knowledge of the determinants of exposure may help construct more efficient pooling schemes, as is done in group-based exposure assessment in occupational epidemiology [40]. However, it is not currently clear how pooling and exposure modeling or grouping can be used together.
There was a large number of replicates for which the logistic regression models did not yield a stable measure of association (OR between 0.1 and 10) when the pool size and the true effect of the causal exposure were large. If a researcher has a sample such as this, it should be obvious, based on the bivariate and stratified analyses, that a regression model is not appropriate. In fact, our inspection of the individual simulations revealed that in such cases an investigator would have no doubt, based on descriptive analyses, that there is evidence to support the existence of an association (this presumed certainty of an association is the reason for their inclusion in the numerator of the power and FPR calculations). Pooling performs well even with a high degree of measurement error as long as the pool size is small and the strength of the true association is moderate. Measurement error was added to the individual values for exposure (X1), not to the analytical values that result after pooling of specimens. This was done based on the assumption that error occurs at the time the exposure is sampled and measured and that assay-based error is typically minuscule compared to the day-to-day variance in exposure. Weinberg and Umbach modeled both types of error and found that “if the measurement error is individual based, the dependence on g disappears and pooling has no effect on the bias in the β estimation” [6]. We found that, except in cases of a strong true association, the change in bias resulting from increasing variance in measurement error was only slightly affected by pooling (Table 3).
Pooling may offer several advantages in scenarios similar to those in EARLI. In addition to the financial savings from analyzing fewer samples for an individual exposure, a limited quantity of bio-samples can be analyzed for more environmental exposures than would be possible in an individual-level analysis. As pooling resulted in fewer spurious results (i.e., the FPR was lower), researchers may have more confidence in variables identified through pooled analysis. Power is important, but false positives do nothing to help create useful interventions. When selecting a pooling strategy, researchers will need to balance this lower FPR against the relatively low power in some of the scenarios. Although a multitude of scenarios were studied here, we will provide our syntax upon request so that researchers who have data with other exposure distributions (including potentially skewed exposure data, such as is likely with PCBs), exposure-response relationships and error distributions can investigate the ideal pool size for their study.

5. Conclusions

In conclusion, our current work supports the use of pooling as a means to reduce laboratory costs while maintaining the statistical efficiency of studies that are similar to the simulated prospective cohort. Given its commendable track record under the realistic constraints of observational studies, pooling should be considered as an analytical strategy in any epidemiological study that faces constraints on the cost of analyses and availability of biological samples.

Supplementary Files

Supplementary File 1

Acknowledgements

This study was funded by a postdoctoral grant (grant#12-1002) from the Autism Science Foundation.

Author Contributions

Karyn Heavner and Igor Burstyn conceived and designed the study with input from Craig Newschaffer, Irva Hertz-Picciotto and Deborah Bennett. Karyn Heavner conducted the data analysis and drafted the manuscript. All authors reviewed and contributed to the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

List of Abbreviations

ASD
autism spectrum disorders
PCB
polychlorinated biphenyls
AOSI
Autism Observation Scale for Infants
EARLI
Early Autism Risk Longitudinal Investigation
OR
odds ratio
FPR
false positive rate

References

  1. Caudill, S.P. Characterizing populations of individuals using pooled samples. J. Expo. Sci. Environ. Epidemiol. 2010, 20, 29–37. [Google Scholar] [CrossRef] [PubMed]
  2. Mumford, S.L.; Schisterman, E.F.; Vexler, A.; Liu, A. Pooling biospecimens and limits of detection: Effects on roc curve analysis. Biostatistics 2006, 7, 585–598. [Google Scholar] [CrossRef] [PubMed]
  3. Saha-Chaudhuri, P.; Weinberg, C.R. Specimen pooling for efficient use of biospecimens in studies of time to a common event. Am. J. Epidemiol. 2013, 178, 126–135. [Google Scholar] [CrossRef] [PubMed]
  4. Schisterman, E.F.; Vexler, A. To pool or not to pool, from whether to when: Applications of pooling to biospecimens subject to a limit of detection. Paediatr. Perinat. Epidemiol. 2008, 22, 486–496. [Google Scholar] [CrossRef] [PubMed]
  5. Vexler, A.; Liu, A.; Schisterman, E.F. Efficient design and analysis of biospecimens with measurements subject to detection limit. Biom. J. 2006, 48, 780–791. [Google Scholar] [CrossRef] [PubMed]
  6. Weinberg, C.R.; Umbach, D.M. Using pooled exposure assessment to improve efficiency in case-control studies. Biometrics 1999, 55, 718–726. [Google Scholar] [CrossRef] [PubMed]
  7. Kim, H.Y.; Hudgens, M.G.; Dreyfuss, J.M.; Westreich, D.J.; Pilcher, C.D. Comparison of group testing algorithms for case identification in the presence of test error. Biometrics 2007, 63, 1152–1163. [Google Scholar] [CrossRef] [PubMed]
  8. Dorfman, R. The detection of defective members of large populations. Ann. Math. Stat. 1943, 14, 436–440. [Google Scholar] [CrossRef]
  9. Albert, P.S.; Schisterman, E.F. Novel statistical methodology for analyzing longitudinal biomarker data. Stat. Med. 2012, 31, 2457–2460. [Google Scholar] [CrossRef] [PubMed]
  10. Danaher, M.R.; Schisterman, E.F.; Roy, A.; Albert, P.S. Estimation of gene-environment interaction by pooling biospecimens. Stat. Med. 2012, 31, 3241–3252. [Google Scholar] [CrossRef] [PubMed]
  11. Erickson, H.S. Measuring molecular biomarkers in epidemiologic studies: Laboratory techniques and biospecimen considerations. Stat. Med. 2012, 31, 2400–2413. [Google Scholar] [CrossRef] [PubMed]
  12. Lyles, R.H.; Tang, L.; Lin, J.; Zhang, Z.; Mukherjee, B. Likelihood-based methods for regression analysis with binary exposure status assessed by pooling. Stat. Med. 2012, 31, 2485–2497. [Google Scholar] [CrossRef] [PubMed]
  13. McMahan, C.S.; Tebbs, J.M.; Bilder, C.R. Regression models for group testing data with pool dilution effects. Biostatistics 2013, 14, 284–298. [Google Scholar] [CrossRef] [PubMed]
  14. Mitchell, E.M.; Lyles, R.H.; Manatunga, A.K.; Danaher, M.; Perkins, N.J.; Schisterman, E.F. Regression for skewed biomarker outcomes subject to pooling. Biometrics 2014, 70, 202–211. [Google Scholar] [CrossRef] [PubMed]
  15. Shen, H.; Xu, W.; Peng, S.; Scherb, H.; She, J.; Voigt, K.; Alamdar, A.; Schramm, K.W. Pooling samples for “top-down” molecular exposomics research: The methodology. Environ. Health 2014, 13. [Google Scholar] [CrossRef] [PubMed]
  16. Whitcomb, B.W.; Perkins, N.J.; Zhang, Z.; Ye, A.; Lyles, R.H. Assessment of skewed exposure in case-control studies with pooling. Stat. Med. 2012, 31, 2461–2472. [Google Scholar] [CrossRef] [PubMed]
  17. Ngounou Wetie, A.G.; Wormwood, K.L.; Charette, L.; Ryan, J.P.; Woods, A.G.; Darie, C.C. Comparative two-dimensional polyacrylamide gel electrophoresis of the salivary proteome of children with autism spectrum disorder. J. Cell. Mol. Med. 2015, 19, 2664–2678. [Google Scholar] [CrossRef] [PubMed]
  18. Ngounou Wetie, A.G.; Wormwood, K.L.; Russell, S.; Ryan, J.P.; Darie, C.C.; Woods, A.G. A pilot proteomic analysis of salivary biomarkers in autism spectrum disorder. Autism Res. 2015, 8, 338–350. [Google Scholar] [CrossRef] [PubMed]
  19. Burstyn, I.; Martin, J.W.; Beesoon, S.; Bamforth, F.; Li, Q.; Yasui, Y.; Cherry, N.M. Maternal exposure to bisphenol-A and fetal growth restriction: A case-referent study. Int. J. Environ. Res. Public Health 2013, 10, 7001–7014. [Google Scholar] [CrossRef] [PubMed]
  20. Chohan, B.H.; Tapia, K.; Merkel, M.; Kariuki, A.C.; Khasimwa, B.; Olago, A.; Gichohi, R.; Obimbo, E.M.; Wamalwa, D.C. Pooled HIV-1 RNA viral load testing for detection of antiretroviral treatment failure in Kenyan children. J. Acquir. Immune. Defic. Syndr. 2013, 63, e87–e93. [Google Scholar] [CrossRef] [PubMed]
  21. May, S.; Gamst, A.; Haubrich, R.; Benson, C.; Smith, D.M. Pooled nucleic acid testing to identify antiretroviral treatment failure during HIV infection. J. Acquir. Immune. Defic. Syndr. 2010, 53, 194–201. [Google Scholar] [CrossRef] [PubMed]
  22. Pannus, P.; Fajardo, E.; Metcalf, C.; Coulborn, R.M.; Duran, L.T.; Bygrave, H.; Ellman, T.; Garone, D.; Murowa, M.; Mwenda, R.; et al. Pooled HIV-1 viral load testing using dried blood spots to reduce the cost of monitoring antiretroviral treatment in a resource-limited setting. J. Acquir. Immune. Defic. Syndr. 2013, 64, 134–137. [Google Scholar] [CrossRef] [PubMed]
  23. Kim, S.B.; Kim, H.W.; Kim, H.S.; Ann, H.W.; Kim, J.K.; Choi, H.; Kim, M.H.; Song, J.E.; Ahn, J.Y.; Ku, N.S.; et al. Pooled nucleic acid testing to identify antiretroviral treatment failure during HIV infection in Seoul, South Korea. Scand. J. Infect. Dis. 2014, 46, 136–140. [Google Scholar] [CrossRef] [PubMed]
  24. Tilghman, M.W.; Guerena, D.D.; Licea, A.; Perez-Santiago, J.; Richman, D.D.; May, S.; Smith, D.M. Pooled nucleic acid testing to detect antiretroviral treatment failure in Mexico. J. Acquir. Immune. Defic. Syndr. 2011, 56. [Google Scholar] [CrossRef] [PubMed]
  25. Bates, M.N.; Buckland, S.J.; Garrett, N.; Caudill, S.P.; Ellis, H. Methodological aspects of a national population-based study of persistent organochlorine compounds in serum. Chemosphere 2005, 58, 943–951. [Google Scholar] [CrossRef] [PubMed]
  26. Cheslack-Postava, K.; Rantakokko, P.V.; Hinkka-Yli-Salomaki, S.; Surcel, H.M.; McKeague, I.W.; Kiviranta, H.A.; Sourander, A.; Brown, A.S. Maternal serum persistent organic pollutants in the finnish prenatal study of autism: A pilot study. Neurotoxicol. Teratol. 2013, 38, 1–5. [Google Scholar] [CrossRef] [PubMed]
  27. Lesiak, A.; Zhu, M.; Chen, H.; Appleyard, S.M.; Impey, S.; Lein, P.J.; Wayman, G.A. The environmental neurotoxicant PCB 95 promotes synaptogenesis via ryanodine receptor-dependent mir132 upregulation. J. Neurosci. 2014, 34, 717–725. [Google Scholar] [CrossRef] [PubMed]
  28. Saha-Chaudhuri, P.; Umbach, D.M.; Weinberg, C.R. Pooled exposure assessment for matched case-control studies. Epidemiology 2011, 22, 704–712. [Google Scholar] [CrossRef] [PubMed]
  29. Newschaffer, C.J.; Croen, L.A.; Fallin, M.D.; Hertz-Picciotto, I.; Nguyen, D.V.; Lee, N.L.; Berry, C.A.; Farzadegan, H.; Hess, H.N.; Landa, R.J.; et al. Infant siblings and the investigation of autism risk factors. J. Neurodev. Disord. 2012, 4. [Google Scholar] [CrossRef] [PubMed]
  30. Heavner, K.; Newschaffer, C.; Hertz-Picciotto, I.; Bennett, D.; Burstyn, I. Quantifying the potential impact of measurement error in an investigation of autism spectrum disorder (ASD). J. Epidemiol. Community Health 2014, 68, 438–445. [Google Scholar] [CrossRef] [PubMed]
  31. Heavner, K.; Burstyn, I. A simulation study of categorizing continuous exposure variables measured with error in autism research: Small changes with large effects. Int. J. Environ. Res. Public Health 2015, 12, 10198–10234. [Google Scholar] [CrossRef] [PubMed]
  32. Martin, L.A.; Horriat, N.L. The effects of birth order and birth interval on the phenotypic expression of autism spectrum disorder. PLoS One 2012, 7. [Google Scholar] [CrossRef] [PubMed]
  33. Warrier, V.; Baron-Cohen, S.; Chakrabarti, B. Genetic variation in gabrb3 is associated with asperger syndrome and multiple endophenotypes relevant to autism. Mol. Autism 2013, 4. [Google Scholar] [CrossRef] [PubMed]
  34. Moreno-De-Luca, A.; Myers, S.M.; Challman, T.D.; Moreno-De-Luca, D.; Evans, D.W.; Ledbetter, D.H. Developmental brain dysfunction: Revival and expansion of old concepts based on new genetic evidence. Lancet Neurol. 2013, 12, 406–414. [Google Scholar] [CrossRef]
  35. Aibar, L.; Puertas, A.; Valverde, M.; Carrillo, M.P.; Montoya, F. Fetal sex and perinatal outcomes. J. Perinat. Med. 2012, 40, 271–276. [Google Scholar] [CrossRef] [PubMed]
  36. Werling, D.M.; Geschwind, D.H. Sex differences in autism spectrum disorders. Curr. Opin. Neurol. 2013, 26, 146–153. [Google Scholar] [CrossRef] [PubMed]
  37. Gardener, H.; Spiegelman, D.; Buka, S.L. Perinatal and neonatal risk factors for autism: A comprehensive meta-analysis. Pediatrics 2011, 128, 344–355. [Google Scholar] [CrossRef] [PubMed]
  38. Kuzniewicz, M.W.; Wi, S.; Qian, Y.; Walsh, E.M.; Armstrong, M.A.; Croen, L.A. Prevalence and neonatal factors associated with autism spectrum disorders in preterm infants. J. Pediatr. 2014, 164, 20–25. [Google Scholar] [CrossRef] [PubMed]
  39. Bryson, S.E.; Zwaigenbaum, L.; McDermott, C.; Rombough, V.; Brian, J. The autism observation scale for infants: Scale development and reliability data. J. Autism Dev. Disord. 2008, 38, 731–738. [Google Scholar] [CrossRef] [PubMed]
  40. Kim, H.M.; Richardson, D.; Loomis, D.; Van, T.M.; Burstyn, I. Bias in the estimation of exposure effects with individual- or group-based exposure assessment. J. Expo. Sci. Environ. Epidemiol. 2011, 21, 212–221. [Google Scholar] [CrossRef] [PubMed]
