Article

Evaluation of Fine Particulate Matter (PM2.5) Concentrations Measured by Collocated Federal Reference Method and Federal Equivalent Method Monitors in the U.S.

National Council for Air and Stream Improvement (NCASI), Newberry, FL 32669, USA
* Author to whom correspondence should be addressed.
Atmosphere 2024, 15(8), 978; https://doi.org/10.3390/atmos15080978
Submission received: 11 July 2024 / Revised: 2 August 2024 / Accepted: 8 August 2024 / Published: 15 August 2024
(This article belongs to the Special Issue Atmospheric Pollutants: Monitoring and Observation)

Abstract

The comparison between Federal Equivalent Method (FEM) and Federal Reference Method (FRM) monitors in measuring fine particulate matter (PM2.5) concentrations frequently raises concerns about the accuracy and reliability of data. The comparability, or lack thereof, of data between FRM and FEM monitors may have significant implications for maintaining compliance with the National Ambient Air Quality Standards (NAAQSs). This study investigates the performance of continuous FEM monitors collocated with FRM monitors across 10 EPA regions in the U.S., focusing on PM2.5 measurements collected from 276 monitoring stations. Through an analysis of annually averaged paired concentration data, the study examines concentration ratios (FEM/FRM) and associated biases (in %, defined as [(FEM/FRM)−1] × 100) in FEM monitors across different manufacturers, measurement methods, EPA regions, and sampling location types. The study findings reveal a varied distribution of FEM/FRM ratios, with more than 50% of the FEM monitors having FEM/FRM > 1.1 and approximately 30% having FEM/FRM > 1.2. Substantial variations in estimated biases are identified among monitor types, measurement methods, EPA regions, and sampling site locations. Light scatter-based FEM monitors, notably Teledyne models 640 and 640x, dominate all locations (urban, suburban, and rural), with rural areas exhibiting higher mean bias values for both light scatter and beta attenuation FEM monitors (41% and 23%, respectively). On average, light scatter-based FEM monitors demonstrate higher biases compared to beta attenuation monitors across all EPA regions (28% vs. 12%). Irrespective of the measurement method employed, FEM monitors demonstrate a significant positive bias (mean bias 22%) relative to FRM monitors, which could result in an overestimation of PM2.5 design values (DVs) by 13–21% at monitoring sites designating FEMs as primary monitors for NAAQSs compliance designations. These findings emphasize the critical need to address method comparability issues, especially considering the recent tightening of NAAQSs for PM2.5 (annual) from 12 µg/m3 to 9 µg/m3 in the U.S.
Keywords:
PM2.5; NAAQS; permitting; FRM; FEM

1. Introduction

To assess compliance with the U.S. National Ambient Air Quality Standards (NAAQSs) outlined in Title 40 of the Code of Federal Regulations (CFR), Part 50, air quality monitoring is typically conducted using either a Federal Reference Method (FRM) or a Federal Equivalent Method (FEM). Title 40, Part 50, of the CFR specifies the FRMs and FEMs that should be used for monitoring criteria air pollutants to determine compliance with the established NAAQSs. This regulation plays a crucial role in ensuring that air quality is monitored consistently and accurately across different regions in the U.S. [1]. In the U.S., the monitoring of fine particulate matter (PM2.5) has traditionally depended on networks owned and operated by state, local, and tribal (SLT) agencies, utilizing regulatory-grade samplers and monitors [2]. PM2.5 is considered, and typically regulated as, a “method-defined pollutant”, in that specific methods and protocols are established and stipulated for its measurement and assessment by regulatory agencies like the U.S. Environmental Protection Agency (EPA). These standardized procedures ensure consistency and accuracy in monitoring PM2.5 concentrations, utilizing specialized equipment and protocols to sample, analyze, and report PM2.5 levels in the atmosphere. For PM2.5, 40 CFR Appendix L to Part 50 provides a Reference Method for the measurement of PM2.5 mass concentration in ambient air, while the Equivalent Methods are designated in accordance with 40 CFR Part 53 [3].
FRMs are designed to offer the most robust and scientifically defensible concentration measurements [4]. The FRM data are not the “true” or “actual” PM2.5 but they are the “reference” values used to determine compliance with the NAAQSs [5]. FRMs play a pivotal role as the standard for comparison, serving as the benchmark against which other measurement methods are evaluated. In contrast, FEMs are performance-based methods and designed to deliver a level of compliance decision-making quality comparable to that provided by FRMs. They may incorporate newer and innovative technologies, offering benefits such as reduced overall operating costs and the ability to fulfill multiple monitoring objectives. Both FRMs and FEMs serve as standard methods for PM2.5 measurements, and FEM monitors can be deployed either concurrently with, or as substitutes for, FRM monitors [5]. The FRM samplers employ gravimetric analysis to determine the 24 h average PM2.5 concentrations. On the other hand, FEM monitors calculate 1 h average PM2.5 concentrations using different principles or operational conditions [6].
Both the Reference and Equivalent methods adhere to stringent measurement performance criteria to ensure the accuracy and effectiveness of air quality management decisions. The EPA’s Office of Research and Development (ORD) in Research Triangle Park, North Carolina, is mandated by Congress to review new instrument designs and formally designate approved monitors as either FRMs or FEMs [4]. Title 40, Part 50, of the CFR contains the EPA regulations concerning primary and secondary NAAQSs [7]. Part 50 specifically outlines the methods and procedures for measuring air quality for the six criteria air pollutants, including particulate matter. Reference monitors set the standard for manual PM2.5 measurements, following criteria in Appendix L of Part 50 and Subparts A and E of Part 53. Equivalent Class I Monitors, also manual, meet these criteria and comply with Subparts A, C, and E of Part 53. Equivalent Class II Monitors, like the others, share these criteria and adhere to Subparts A, C, E, and F of Part 53, expanding their applicability. Equivalent Class III Monitors are automated and meet the standards of Appendix L of Part 50 and Subparts A, C, E, and F of Part 53, offering automated operation with enhanced efficiency and data collection capabilities [7].
The utilization of FEM monitors, which began in 2008, has become increasingly widespread across the U.S. air monitoring networks [8], with nearly 50% of the PM2.5 NAAQS network using a continuous FEM monitor as the ‘primary’ sampler [9]. However, FEMs are not immune to challenges, and one significant concern is the potential for data bias. It has been reported that FEMs tend to overpredict collocated FRMs by 5–15% [9]. Monitoring agencies face ongoing challenges in achieving satisfactory performance levels for their continuous FEM PM2.5 monitors, hindering their ability to demonstrate compliance with the PM2.5 NAAQS. Among the roughly 900 operational FEM monitors, data from 40% cannot be regarded as official FEM measurements due to performance issues [10].
On 6 January 2023, the EPA proposed a revision to the primary annual PM2.5 standard, aiming to lower it from 12 µg/m3 to a range of 9 to 10 µg/m3 [11]. Alongside the proposal, the EPA solicited comments on the issue of comparability between FRM and FEM data. Numerous comments were received, revealing biases in FEM monitors in various regions. For example, FEM biases were observed in Tennessee’s monitoring networks, with certain FEM monitors reporting an average of 2.4 μg/m3 higher than collocated FRM monitors on an annual basis, and up to 7.6 μg/m3 higher on a daily basis [12]. FEM monitors (particularly Teledyne T640/T640x) in Georgia exhibit an inherent bias in PM2.5 concentrations when compared to the FRM samplers [13]. In certain monitoring sites in EPA Region 5, FEM monitors exhibited substantial biases, approaching approximately 20% annually when compared to FRM data for the year 2021 [14]. On 7 February 2024, the EPA strengthened PM NAAQSs, setting the primary (health-based) annual PM2.5 standard at 9.0 µg/m3 [15].
Previous studies investigating FEM/FRM bias predominantly focused on specific regions or a limited number of monitoring sites [5,16,17], or on chamber studies [8]. Moreover, many of these evaluations were conducted outside the U.S. [18], leaving the assessment of FEM monitors within the U.S. air monitoring network relatively unexplored. Our work represents a significant advancement in this important area by expanding upon previous studies and conducting a nationwide evaluation of collocated FEM monitors. This study encompasses a diverse array of equipment types and monitoring techniques operating in various settings (e.g., urban, suburban, and rural) across multiple EPA regions. The study aims to assess the accuracy and reliability of PM2.5 concentration measurements from collocated FRM and FEM monitors in the U.S. by evaluating these measurements across different manufacturers, measurement methods, and geographical locations. It investigates potential biases in FEM monitors compared to FRM monitors, providing insights into their performance and comparability. Ultimately, this study contributes to enhancing air quality monitoring practices and informing regulatory decision-making.

2. Methods

2.1. Source of FEM and FRM Data

We relied on the EPA’s PM2.5 Continuous Monitor Comparability Assessments tool (https://www.epa.gov/outdoor-air-quality-data/pm25-continuous-monitor-comparability-assessments) (accessed on 1 August 2023) as our primary data source for obtaining PM2.5 concentration measurements from collocated FRM and FEM monitors. This tool offers a detailed site-specific technical report evaluating the comparability of PM2.5 continuous monitors when paired with an FRM sampler using various metrics. These reports aid monitoring agencies in determining if PM2.5 continuous monitors in their network align with their monitoring objectives, such as NAAQS comparison or Air Quality Index reporting. Leveraging this tool, we accessed site-specific data across the U.S. for the most recent three years (2020–2022 or 2019–2021). Since FRM measurements provide a 24 h integrated average, the assessment tool evaluates FEM monitors for the corresponding 24 consecutive hours when both FEM and FRM were operational (i.e., from midnight-to-midnight local standard time on days when the FRM is active). For this study, we used only annually averaged PM2.5 concentrations. We selected monitoring sites to ensure a representative sample across diverse geographic and environmental conditions. The primary criteria included geographic distribution (urban, suburban, and rural) and data availability, requiring a minimum of three years of continuous data from both FEM and FRM monitors.
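As a rough illustration of the selection step described above (this is not the authors' processing code), the following sketch filters a small, made-up table of annual means so that only sites with at least three years of paired FEM and FRM data are retained; the column names are hypothetical placeholders for fields available in the EPA tool's site-specific reports.

```python
# Minimal sketch of the site-selection criterion: keep sites with >= 3 years
# of paired FEM/FRM annual means. All values and column names are illustrative.
import pandas as pd

annual = pd.DataFrame({
    "site_id":         ["A", "A", "A", "B", "B"],
    "year":            [2020, 2021, 2022, 2021, 2022],
    "fem_annual_mean": [9.8, 10.4, 9.1, 7.2, 7.9],   # ug/m3
    "frm_annual_mean": [8.9, 9.6, 8.5, 7.0, None],   # ug/m3 (one FRM year missing)
})

# A year counts as "paired" only if both annual means are present.
paired = annual.dropna(subset=["fem_annual_mean", "frm_annual_mean"])

# Require at least three years of paired data per site (site "A" qualifies).
years_per_site = paired.groupby("site_id")["year"].nunique()
eligible = years_per_site[years_per_site >= 3].index
selected = paired[paired["site_id"].isin(eligible)]
print(selected)
```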

2.2. Data Selection, Preparation, and Analysis

In this study, we employed two key statistical metrics, namely the mean and bias, to thoroughly assess the compatibility between collocated FEM and FRM monitors. The means were utilized to establish a straightforward ratio of concentrations recorded by FEM monitors to those by FRM monitors. This ratio was computed by dividing the average of all data collected by FEM monitors by the average of all data collected by FRM monitors. A ratio of FEM/FRM = 1 denotes perfect agreement between FEM and FRM monitors. Conversely, FEM/FRM < 1 signifies that the FEM monitor underestimates FRM, while FEM/FRM > 1 indicates that the FEM monitor overestimates the FRM monitor. Additionally, we employed paired biases expressed as a percentage difference between FEM and FRM measurements. This formula, represented as % difference = [(FEM/FRM) − 1] × 100, facilitated the quantification of the deviation between FEM and FRM readings. A positive bias suggests that the FEM monitor overestimates FRM data, whereas a negative bias indicates the opposite.
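The two metrics defined above translate directly into a few lines of code. The sketch below is not the authors' script; it simply restates the ratio and percent-bias definitions using illustrative NumPy arrays of paired annual means.

```python
# Minimal sketch of the FEM/FRM ratio and paired percent bias (values illustrative).
import numpy as np

fem = np.array([9.8, 11.2, 8.4])   # annual-mean PM2.5 from FEM monitors (ug/m3)
frm = np.array([8.9, 10.1, 8.6])   # annual-mean PM2.5 from collocated FRMs (ug/m3)

ratio = fem.mean() / frm.mean()         # average of FEM data / average of FRM data
bias_pct = (fem / frm - 1.0) * 100.0    # % difference = [(FEM/FRM) - 1] x 100

print(f"FEM/FRM ratio = {ratio:.2f}")
print(f"paired biases (%) = {np.round(bias_pct, 1)}")
```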
While the tool offered the option to consider only measurements above a certain concentration threshold (≥3 μg/m3) for comparability assessment, our evaluation encompassed all measurements by default unless specified otherwise. This inclusive approach ensured a comprehensive examination of the performance of FEM monitors across various concentration levels. Furthermore, our assessment operated under the assumption that the FRM deployed at each site represented a true value when compared to PM2.5 continuous monitoring, despite acknowledging the inherent uncertainty associated with FRM measurements (e.g., evaporation loss due to the volatile properties of semivolatile species in PM2.5 collected on the after-filter) [19,20].
We augmented our dataset by collecting essential information, such as equipment manufacturer and model specifications for both FRM and FEM monitors, in addition to site identification, county, state, and EPA region data for each monitoring station. This comprehensive dataset served as a foundation for conducting a detailed analysis of the factors influencing measurement compatibility, enabling us to draw robust conclusions from our study.

3. Results and Discussion

We identified a total of 276 monitoring sites with collocated FRM and FEM monitors, from which we acquired compatibility assessment data, spanning 50 states and 10 EPA regions. In the ensuing subsections, we present a detailed analysis that encompasses three primary aspects: (i) an examination of the distribution of FEM monitors categorized by manufacturers, methods of measurement, and location types, (ii) an exploration of FEM/FRM concentration ratios, and (iii) an in-depth investigation into the biases observed between FEM and FRM measurements. This comprehensive approach aims to provide a nuanced understanding of the monitoring landscape, shedding light on the diversity in monitor distribution, concentration relationships, and potential biases inherent in the data.

3.1. FEM Monitors by Manufacturer, Method of Measurement, and Location

There are two prominent manufacturers of PM2.5 continuous FEM monitors: Met One Instruments Inc. (Grants Pass, OR, USA) and Teledyne API (San Diego, CA, USA). Within the Met One FEM monitors, notable models include the BAM 1020 and BAM 1022, while Teledyne contributes models 640 and 640x to the market. The distribution of FEM monitors among the manufacturers is represented in Figure 1a. Notably, among the 276 FEM monitors analyzed, Teledyne captures the majority, with 51% (140 units), while Met One closely follows with 36% (100 units). Additionally, 11% (31 units) of the monitors are attributed to Thermo Scientific (Waltham, MA, USA).
Categorizing the FEM monitors based on particulate measurement methods reveals two distinct groups: light scattering and beta attenuation (Figure 1b). T640 and 640x, manufactured by Teledyne, stand out as real-time, continuous PM mass monitors utilizing scattered light spectrometry for particulate concentration measurements. In contrast, Met One’s BAM (Models 1020 and 1022), along with Thermo Scientific (Models 5014i, 5030 SHARP), employ beta attenuation for concentration measurement. An overarching observation is that light scatter monitors dominate the landscape, constituting 52%, while beta attenuation monitors closely follow at 45% among the collocated FEM monitors.
The Teledyne monitors (T640 and T640x) utilize light scattering to quantify the concentration of fine particulate matter in ambient air [21]. Briefly, the monitor emits a beam of light, typically using a laser diode or an LED, into the air sample containing suspended particulate matter. As the emitted light interacts with airborne particles, scattering occurs, leading to the dispersion of light in various directions. A dedicated detector within the monitor captures a portion of this scattered light, enabling the measurement of its intensity. The amount of light scattered is proportional to the concentration of PM2.5 particles in the air sample. In contrast, the Met One BAM monitors (Models 1020, 1022) and Thermo Scientific Model 5014i employ a sampling inlet to collect fine particulate on a filter tape, utilizing beta ray transmission to quantify the concentration of particulate matter amassed within a designated sampling period [22,23]. The Thermo Scientific Model 5030 SHARP combines light scattering photometry and beta attenuation for continuous PM measurements [24]. The Thermo Scientific Model 1405 employs a unique approach by capturing particulate matter on a filter connected to an oscillating glass rod. The concentration of particulate matter is determined in proportion to the change in the oscillating frequency [25].
Figure 2 illustrates the distribution of FEM monitors across various locations, categorized by measurement methods. In urban areas, 65 monitors employ light scatter technology, while 53 monitors utilize beta attenuation, with an additional 3 monitors employing alternative measurement methods. Likewise, in suburban areas, 57 monitors utilize light scatter technology, 54 employ beta attenuation, and 3 utilize other measurement techniques. In rural locations, the distribution includes 21 light scatter monitors, 18 beta attenuation monitors, and 2 monitors employing other methods. The data suggest variations in the choice of measurement methods across different environments, with light scatter being the predominant method in all three settings. However, there exists a notable discrepancy in the distribution of FEM monitors between urban (and suburban) and rural areas. In rural regions, monitoring sites are often spaced farther apart compared to urban areas, and certain rural areas may lack any monitoring infrastructure altogether. EPA data reveal that as of 2019, two-thirds (2120 out of 3142) of the counties in the U.S. lacked ambient air quality monitors [26], underscoring the limited monitoring coverage in rural areas. The Clean Air Act regulations prioritize population density, high-emitting point sources such as power plants, and traffic when designing criteria pollutant monitoring networks [27,28,29]. Consequently, more monitoring sites are typically situated in urban areas than in rural ones. The sparse monitoring coverage in rural areas poses challenges in fully understanding air quality issues, such as wildfires, that might be prevalent in some of these regions. This deficiency is particularly concerning given that many industrial facilities, such as forest product mills, operate in rural areas and often face air permitting challenges due to the lack of nearby ambient regulatory monitors for compliance demonstrations.

3.2. FEM/FRM Concentration Ratios

3.2.1. Distribution of Ratios

The frequency distribution of FEM/FRM concentration ratios provides valuable insights into the agreement or discrepancy between measurements obtained from FEM and FRM monitors. Figure 3 illustrates the distribution of monitors (on the Y-axis) categorized into eight bins based on FEM/FRM concentration ratios, ranging from <0.9 to >1.2, in 0.05 increments. The categories “0.95–1.00” and “1.00–1.05” represent FEM/FRM ratios around 1.0, indicating a high level of agreement between FEM and FRM monitors. With 48 of the 276 monitors falling into these categories, representing about 17.4% of the total, a notable proportion of monitoring locations exhibit close agreement between FEM and FRM measurements. The highest frequency is observed in the category “greater than 1.20”, with 78 data points, indicating a substantial number of cases where FEM measurements exceed FRM measurements. The dataset reveals a varied distribution of FEM/FRM ratios, emphasizing that perfect agreement (ratio = 1) is not the most common scenario. Deviations from the ideal agreement (i.e., FEM/FRM = 1) are observed in both directions, with more than 50% of the FEM monitors having FEM/FRM > 1.1 and nearly 30% having FEM/FRM > 1.2. Ratios consistently greater than 1.0 may suggest a systematic bias in FEM measurements.
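For readers who wish to reproduce this kind of binning, the short sketch below (with made-up ratio values) tallies site-level FEM/FRM ratios into the eight categories described above.

```python
# Minimal sketch of the eight-bin frequency distribution of FEM/FRM ratios.
# The ratio values are illustrative only.
import numpy as np

ratios = np.array([0.88, 0.97, 1.02, 1.08, 1.13, 1.19, 1.26, 1.31])

# Bins: below 0.90, then 0.05-wide bins up to 1.20, then above 1.20.
edges = [0.0, 0.90, 0.95, 1.00, 1.05, 1.10, 1.15, 1.20, 10.0]
labels = ["<0.90", "0.90-0.95", "0.95-1.00", "1.00-1.05",
          "1.05-1.10", "1.10-1.15", "1.15-1.20", ">1.20"]

counts, _ = np.histogram(ratios, bins=edges)
for label, count in zip(labels, counts):
    print(f"{label}: {count}")
```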
We explored FEM/FRM concentration ratios for FEM monitors across various states (Figure 4). In each state, the ratio represents the mean value for all collocated monitors assessed. For example, at a monitoring site in Georgia, where both types of monitors are placed together, the FEM monitor reports PM2.5 concentrations at least 1.2 times those measured by the FRM monitor. It is essential to understand that this map offers a broad perspective on FEM monitor performance when collocated with FRM monitors. At monitoring sites where FEM monitors like the Teledyne T640/640x or Met One BAM 1020/1022 operate without a collocated FRM, their performance may differ from the assessments conducted in this study.
Several states exhibit FEM/FRM ratios greater than 1, indicating a tendency for FEM monitors to overestimate PM2.5 concentrations compared to FRM monitors. Notable examples include Illinois (1.44), Oklahoma (1.35), North Dakota (1.55), and Pennsylvania (1.21). States with FEM/FRM ratios less than 1 suggest a trend of FEM monitors underestimating PM2.5 concentrations compared to FRM monitors. Maine (0.83), Rhode Island (0.93), and South Dakota (0.81) are among the states where underestimation is observed. The range of ratios across states indicates significant variability in the agreement between FEM and FRM measurements. Some states, such as New Hampshire (1.13), Texas (1.00), and Washington (1.01), demonstrate ratios close to 1, suggesting relatively better agreement between the two methods. The observed variations in ratios could be influenced by factors such as calibration methods, monitor types, and regional differences in air quality and meteorological characteristics. For instance, states with high pollutant concentrations may exhibit different ratio patterns than those with lower concentrations.

3.2.2. Ratios by FEM Equipment Manufacturer

The distribution of FEM/FRM concentration ratios, categorized by equipment manufacturer, is depicted in Figure 5. The horizontal dotted line at 1.0 signifies perfect agreement between FEM and FRM data. Within the figure, each box showcases a solid line indicating the median, a plus symbol representing the mean, and whiskers depicting the spread of the data. Table 1 contains statistical measures for FEM monitors both by manufacturer type and working principle. The statistics include minimum, maximum, median, and mean values for each variable, as well as percentiles (25th, 50th, 75th, and 90th) to provide insights into the distribution of data. The minimum and maximum values indicate the range of FEM/FRM concentration ratios observed for each instrument. The interquartile range (IQR), defined as the difference between the 75th and 25th percentiles, provides information about the spread of the middle 50% of the data.
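Statistics of this kind can be generated with a grouped summary; the sketch below uses a toy DataFrame with assumed column names ("manufacturer", "ratio") purely to illustrate how the Table 1 quantities (percentiles and IQR) are derived.

```python
# Minimal sketch of per-manufacturer summary statistics for FEM/FRM ratios.
# The DataFrame contents and column names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "manufacturer": ["Teledyne", "Teledyne", "Met One", "Met One", "Thermo"],
    "ratio":        [1.22, 1.18, 1.04, 0.97, 1.08],
})

summary = df.groupby("manufacturer")["ratio"].describe(
    percentiles=[0.25, 0.50, 0.75, 0.90]
)
summary["IQR"] = summary["75%"] - summary["25%"]  # spread of the middle 50%
print(summary[["min", "max", "50%", "mean", "25%", "75%", "90%", "IQR"]])
```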
The Teledyne T640 instrument demonstrates moderate variability, with the median and mean values closely aligned, indicating a symmetric distribution. The percentiles reveal a relatively narrow range of bias values. Conversely, the Teledyne T640x exhibits a higher maximum value, suggesting potential outliers. Similar to the T640, its median and mean are closely matched, suggesting symmetry, but its 90th percentile is higher, pointing to possible higher bias values at the upper end. When combined, data from both Teledyne models show similar patterns, albeit with a slightly wider IQR and a slightly higher 90th percentile, indicating the potential for higher bias values. On the other hand, the Met One BAM monitors (Models 1020 and 1022) display a wide range of variability, particularly evident from the high maximum value. The median is lower, and the mean is slightly higher, suggesting potential skewness in the distribution. The 25th and 50th percentiles indicate a narrower range of bias values. Thermo Scientific monitors exhibit moderate variability with a narrower range compared to Met One BAM. The median and mean are closely aligned, indicating a symmetric distribution, and the percentiles suggest a consistent spread of bias values. Among these monitors, Teledyne has the highest mean value (FEM/FRM = 1.21), suggesting a higher average bias compared to the other monitors. Therefore, based on the mean values, the Teledyne monitors might be considered the least accurate. In summary, this analysis reveals distinct variability in the performance of different air quality monitoring instruments and underscores the importance of considering instrument characteristics and performance when interpreting air quality data.

3.2.3. Ratios by Method of Measurement

We evaluated the performance of FEM monitors based on their operational principles, categorizing them into two main categories: (i) monitors utilizing light scatter technology, such as the Teledyne T640, T640x, and GRIMM EDM 180, and (ii) monitors employing beta attenuation, including the BAM 1020, 1022, Thermo Scientific 5014i, and 5030 SHARP. As shown in Figure 6, beta attenuation monitors exhibited superior performance compared to light scatter-based monitors, with mean FEM/FRM ratios of 1.03 and 1.20, respectively. Table 1 provides detailed statistical insights, comparing the two measurement methods and offering an examination of the distribution and central tendencies of the data. Light scatter FEM monitors demonstrate a FEM/FRM range from 0.75 to 1.7, with both median and mean values at 1.2. The notable range of values (0.75 to 1.7) underscores considerable variability in the measurements. However, the close correspondence between the median and mean (both 1.2) suggests a symmetric distribution, indicating minimal skewness. In contrast, beta attenuation monitors exhibit a broader range from 0.72 to 2.44, with both median and mean values at 1.0. The wider range of beta attenuation values (0.72 to 2.44) indicates potential outliers or extreme measurements. Notably, the high maximum value (2.44) may point to specific environmental conditions or measurement anomalies influencing the data.
The percentiles offer a detailed understanding of the data distribution. For instance, the 25th percentile for light scatter FEMs is 1.15, indicating that 25% of observations are below this value. Similarly, the 90th percentile for beta attenuation FEMs is 1.17, suggesting that 90% of observations are below this threshold. Considering both the mean ratios and the range of ratios, beta attenuation appears to be the superior measurement method in this context.

3.2.4. Ratios by EPA Region

We assessed the FEM/FRM ratio across 10 EPA regions (Figure 7). The summarized outcomes of this assessment are illustrated in Figure 8, with regions designated as R1, R2, R3, and so forth along the X-axis. A statistical summary of the data analysis is provided in Table 2.
The FEM/FRM ratios exhibit substantial regional variability, ranging from 0.72 to 2.44. While median ratios are relatively consistent across regions (1.05 to 1.18), mean ratios vary, indicating potential skewness or outliers. For example, R7 shows a higher median (1.18) than the mean (1.12), suggesting a potential influence of extreme values. Region R6 displays a higher mean (1.11) compared to the median (1.06), indicating potential challenges in performance or the presence of outliers. Region R5 stands out with the widest range of ratios (0.87 to 2.44), implying unknown factors impacting FEM/FRM measurements. R7 exhibits a higher median (1.18) than nearby regions, indicating potential outliers and a small sample size (n = 10) influencing the central tendency. Regions R8 and R10 demonstrate relatively consistent median and mean ratios, suggesting stability in monitoring performance. Region R3 exhibits a wide range (0.94 to 1.70), suggesting potential challenges in instrument performance or measurement conditions. Regions with narrower IQRs, such as R8 and R10, suggest more consistent measurements, while wider IQRs, as observed in R5 and R7, indicate greater variability. The 90th percentile provides insight into potential outliers. Regions with higher values at the 90th percentile, such as R7 and R9, may be influenced by extreme FEM/FRM ratios.
The sample size (n), representing the number of monitoring stations in each EPA region, is a key factor in interpreting the results and drawing conclusions from the data. Regions with larger n are likely to provide more representative insights into FEM monitor performance, as the data cover a broader geographic area or mix of pollution sources. In contrast, conclusions drawn from regions with smaller n may be more specific to local conditions. Outliers or extreme values in smaller n can have a more significant impact on overall results. Robustness to outliers is higher in regions with larger n. Overall, the results presented here (Figure 8; Table 2) highlight the diverse dynamics influencing FEM/FRM ratios across EPA regions, providing a foundation for targeted interventions, quality control measures, and improved understanding of regional air quality conditions. Further investigation into the specific factors influencing each region will contribute to more accurate and region-specific monitor performance assessments.

3.3. FEM/FRM Bias

Using the reported bias from paired PM2.5 concentrations at collocated sites, we assessed the performance of FEM monitors. This evaluation considered various factors, including equipment manufacturer (model), measurement method, EPA region, and location type. Furthermore, we examined potential bias within the 2022 DVs for PM2.5. The following sections outline the results obtained from these assessments.

3.3.1. Bias by Equipment Manufacturer

Figure 9 displays bias results categorized by equipment manufacturer and model type, comparing biases against the ±10% range, a data quality criterion recommended by the EPA for comparing PM2.5 concentrations from collocated FEM and FRM monitors (Technical Note—PM2.5 Continuous Monitor Comparability Assessment, available at https://www.epa.gov/outdoor-air-quality-data/assessment-pm25-fems-compared-collocated-frms) (accessed on 1 August 2023).
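Screening site-level biases against this ±10% criterion amounts to a simple absolute-value check, as in the illustrative sketch below (the bias values are made up).

```python
# Minimal sketch of flagging monitors against the +/-10% comparability criterion.
import numpy as np

bias_pct = np.array([29.4, 11.5, -6.2, 8.3, 14.6])   # illustrative site-level biases (%)

within_criterion = np.abs(bias_pct) <= 10.0
print(f"{within_criterion.sum()} of {bias_pct.size} monitors fall within +/-10%")
```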
The data in Table 3 represent the bias percentages between FEM and FRM monitors across different types of FEM monitors. Notably, Teledyne T640 and T640x monitors exhibit similar bias patterns, with mean biases of 29.4% and 28.6%, respectively. The combined use of T640 and T640x shows a mean bias of 29.0%. In contrast, Met One BAM 1020 and 1022 monitors display a lower mean bias of 11.5%. Thermo Scientific monitors fall in between, with a mean bias of 14.6%. The quartile values provide additional insights: Teledyne monitors have higher 3rd and 1st quartile values compared to Met One BAM and Thermo Scientific, indicating that their bias distribution is shifted toward higher values. The number of stations (i.e., n) for each monitor type also plays a crucial role in understanding the robustness of the observed biases. Teledyne T640 and T640x, with 140 stations (combined), exhibit a higher presence in the dataset compared to Met One BAM and Thermo Scientific monitors, which have 100 and 25 stations, respectively.
Teledyne monitors consistently display higher mean biases compared to Met One and Thermo Scientific monitors. This finding indicates a potential pattern where Teledyne monitors overestimate concentrations relative to FRM monitors. Met One and Thermo Scientific monitors demonstrate lower mean biases, indicating a relatively closer agreement with FRM monitors. This suggests that these monitor types may provide more accurate measurements of PM2.5 concentrations.

3.3.2. Bias by Method of Measurement

This analysis provides insights into the bias characteristics of both light scatter and beta attenuation monitors. The average bias for light scatter monitors is relatively high at 28.3%, indicating a systematic deviation from the reference measurements (Figure 10; Table 3). The median bias of 25.7% suggests that the central tendency is influenced by the lower half of the data, indicating a skewed distribution with a tail towards higher biases. The upper whisker extends up to 44.3%, indicating the maximum observed bias within 1.5 times the interquartile range above the third quartile. The third quartile (75th percentile) is 31.15%, indicating that 75% of the data falls below this threshold. The first quartile (25th percentile) is 19.7%, suggesting that 25% of the data falls below this threshold. The lower whisker extends down to 6.8%, indicating the minimum observed bias within 1.5 times the IQR below the first quartile.
The mean bias for beta attenuation monitors is 12.1%, which is lower than that of light scatter monitors, suggesting a comparatively smaller systematic deviation. The median bias of 5.0% is relatively lower than the mean, indicating a potential skewness towards lower biases. The upper whisker extends up to 39.2%, indicating the maximum observed bias within 1.5 times the IQR above the third quartile. The third quartile (75th percentile) is 14.9%, suggesting that 75% of the data falls below this threshold. The first quartile (25th percentile) is −6.2%, suggesting that 25% of the data falls below this threshold. The lower whisker extends down to −34.0%, indicating the minimum observed bias within 1.5 times the interquartile range below the first quartile.
Together, these findings suggest that light scatter monitors exhibit a higher mean and median bias compared to beta attenuation monitors. Light scatter monitors have a narrower IQR compared to beta attenuation monitors, indicating less spread in the central 50% of the data. Beta attenuation monitors show a wider range in both upper and lower whiskers, indicating higher variability and potential outliers. The negative lower whisker for beta attenuation monitors suggests instances of underestimation by FEM monitors.
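The quartile and whisker values quoted above follow the usual box-plot convention, with whiskers at the most extreme observations within 1.5 × IQR of the quartiles. The short sketch below shows how such values can be computed from an illustrative set of bias percentages.

```python
# Minimal sketch of box-plot statistics (quartiles, IQR, whiskers at 1.5 x IQR).
# The bias values are illustrative.
import numpy as np

bias_pct = np.array([-6.2, 5.0, 12.1, 14.9, 25.7, 31.2, 39.2])

q1, q3 = np.percentile(bias_pct, [25, 75])
iqr = q3 - q1
lower_whisker = bias_pct[bias_pct >= q1 - 1.5 * iqr].min()
upper_whisker = bias_pct[bias_pct <= q3 + 1.5 * iqr].max()

print(f"Q1 = {q1:.1f}%, Q3 = {q3:.1f}%, IQR = {iqr:.1f}%")
print(f"whiskers: [{lower_whisker:.1f}%, {upper_whisker:.1f}%]")
```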
The disparity in performance between beta attenuation and light scatter-based monitors can be predominantly attributed to differences in detection methods, operating procedures, aerosol composition, and environmental factors. Beta attenuation monitors are primarily influenced by the mass of particulate matter, regardless of density, chemical composition, or optical properties [17,30]. However, factors such as evaporation loss and the impact of aerosol water content due to humidity control systems can affect beta attenuation readings [5]. Previous studies found that the readings of beta attenuation monitors are significantly influenced by ambient relative humidity (RH) levels [16,31] because of water absorption by inorganic aerosols [31,32,33]. The operation protocol typically involves heating the inlet line to reduce RH to below 35%. However, in conditions of very high ambient RH (>60%), the heating may not sufficiently evaporate water in aerosols, leading to an overestimation of PM concentration [16,34,35].
BAM monitors utilize a pre-separator as a size-selective inlet, such as a cyclone, impactor, or both. Two major types of PM2.5 separators approved by the USEPA have been identified: the Well Impactor Ninety-Six (WINS) and the Very Sharp Cut Cyclone (VSCC) [36]. Both the Sharp Cut Cyclone (SCC) and VSCC are known to exhibit relatively smaller bias compared to the WINS [36,37]. In our dataset, Met One BAMs (Models 1020 and 1022) and Thermo Scientific (Models 5014i and 5030) were paired with either VSCC or SCC.
In contrast, light scatter-based instruments operate under the assumption that all particulates share identical optical properties [38,39,40]. However, this assumption poses challenges when dealing with particulate matter that exhibits diverse optical properties, thereby impacting the monitors’ performance. Aerosol water content, for instance, can significantly alter the size and distribution of particles in the air [41]. The absorption of water or hygroscopic growth can cause particles to swell or aggregate, consequently changing their optical properties [42]. Since light scatter-based monitors rely on the interaction of light with aerosol particles to measure concentrations, any variations in particle size or distribution due to water content can result in inaccuracies in the measurements. Thus, despite the assumption of constant optical properties for particulate matter in ambient air, variability in results can arise, deviating from true values.

3.3.3. Bias by EPA Region

The FEM/FRM bias across all EPA regions (R1-R10) is shown in Figure 11, revealing mean bias values ranging from 8.4% (R10) to 52.0% (R7) and median bias values varying from 1.5% (R10) to 25.4% (R7). The upper whisker values indicate the maximum bias observed in each region, ranging from 33.5% (R10) to 64.4% (R8). The lower whisker values show the minimum bias observed, with ranges from −34.0% (R9) to −3.2% (R4). There is substantial variability in bias percentages across different EPA regions (Table 4). Regions with consistently high or low bias values may suggest systematic differences between FEM and FRM monitors in those areas.
The sample size (n, total number of monitoring sites) varies across the EPA regions, ranging from 10 (R7) to 51 (R5). Regions with fewer data points (e.g., R7, R10) may have more uncertain estimates of bias, and the observed patterns could be influenced by the limited sample size. Regions with a higher number of data points (e.g., R5, R4) are likely to have more reliable and stable estimates of bias. Larger sample sizes generally provide more robust statistical measures, reducing the impact of individual data points on the overall estimate. Regions with smaller sample sizes may experience greater variability in statistical measures such as mean, median, and quartiles. Extreme values in regions with a small sample size can have a larger impact on the calculated statistics. When interpreting the results for regions with a small number of data points, it is essential to recognize the potential for higher variability and uncertainty in the estimates.
Further investigation of the data reveals the potential cause for the observed variability in bias across EPA regions. Figure 12 illustrates the overall method dominance for each EPA region. Note that individual states within each region may exhibit different method dominance patterns, as the regional summary encompasses several states. The results indicate notable variability in median bias percentages across regions, ranging from 5% to 25%. Regions dominated by light scatter monitors exhibited higher biases (R1, R3, R4, R5, and R7), while those dominated by beta attenuation monitors showed lower biases (R8, R9, and R10). Regions R2 and R6 (median biases of 21% and 5%, respectively) are not dominated by either monitor type. Regions with higher median bias percentages may indicate potential challenges or differences in measurement methods. It could be related to the characteristics of the monitoring sites, calibration methods, or other factors specific to light scatter monitors.

3.3.4. Bias by Location Type

Figure 13 illustrates the bias across different location types and measurement methods. The mean bias for light scatter FEM monitors is highest in rural locations (41.3%), followed by suburban (26.8%) and urban (25.5%) locations. The median bias is highest in rural locations (30.7%), followed by urban (25%) and suburban (24.1%) locations. The IQRs for light scatter monitors across locations are nearly identical (Table 5). Collectively, these findings suggest that there are notable differences in light scatter bias across urban, suburban, and rural settings. It is worth mentioning that urban and suburban areas yield a higher number of data points (65 and 57, respectively) compared to rural locations (21), enhancing the robustness of observed biases in these settings.
The mean bias for beta attenuation in urban areas is 5.6%, with a median bias of 3.4%. The IQR is from −6.3% to 11.9%. In suburban areas, the mean bias for beta attenuation is higher at 14.7%, with a median bias of 8.9%. The IQR is from −1% to 18.4%. Rural areas exhibit the highest mean bias for beta attenuation at 22.9%, but the median bias is negative at −3.6%. The IQR is from −12% to 13.2%. Although biases for beta attenuation monitors exhibit variability across location types, we note the discrepancy in sample sizes collected from each location. For instance, the smaller sample size in rural areas (n = 18) might impact the precision of the bias estimates or the ability to detect significant differences in bias compared to urban and suburban locations.
Table 5 provides a statistical summary of the distribution, central tendency, and variability of the data for each location and variable. These comparisons highlight the differences in performance between light scatter and beta attenuation measurements across different location types. For both light scatter and beta attenuation monitors, the bias is highest for monitors located in rural areas and the lowest in urban areas. However, the estimated bias for all locations (except for urban areas with beta attenuation monitors) is outside the ±10% range, as shown in Figure 13.

3.3.5. Potential Bias in PM2.5 Annual Design Values (DVs)

Design values are essential in designating and categorizing nonattainment areas, serving as a crucial metric for assessing progress in meeting the NAAQSs. Specifically, the PM2.5 DVs serve as a critical indicator to assess air quality over a defined annual period and are typically calculated as a three-year average concentration. We assessed potential bias in PM2.5 DVs specifically at monitoring sites designating FEMs as the primary monitors for compliance with NAAQSs. Primary monitors play a critical role in NAAQS determination, particularly for pollutants where DVs are computed at the site level rather than at the individual monitor level. Our investigation identified 68 counties utilizing collocated FEM monitors as primary monitors, leading us to infer that the data from these monitors have been integral in the calculation of DVs for the years 2020–2022. The results of our evaluation indicate a potential mean (positive) bias in the calculated DVs within this subset of counties, with the range estimated to be between 13 and 21%, as illustrated in Figure 14. This finding aligns with observations reported by the Association of Air Pollution Control Agencies (AAPCA) and others on this issue. According to AAPCA, there are instances where FEM monitors demonstrate discrepancies leading to DVs notably higher than those derived from collocated FRM monitors. In certain instances, the utilization of an FEM as the primary monitor can overestimate annual DVs by 2.3 to 2.9 µg/m3 [43], which could potentially impact the current or future attainment status of an area.
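The arithmetic behind this estimate is straightforward: when a design value is formed from annual means reported by a positively biased FEM, the bias carries through to the DV. The simplified sketch below uses illustrative numbers and an assumed 17% bias; the regulatory DV calculation (40 CFR Part 50, Appendix N) additionally involves quarterly averaging and data-completeness requirements, which are omitted here.

```python
# Minimal sketch of how a positive FEM bias propagates into an annual design value
# computed as a three-year average of annual means. Values are illustrative.
annual_means_fem = [9.6, 10.2, 9.9]   # FEM-based annual means (ug/m3)
assumed_bias = 0.17                   # assumed 17% positive bias relative to FRM

dv_fem = sum(annual_means_fem) / 3
dv_frm_equivalent = dv_fem / (1 + assumed_bias)

print(f"FEM-based DV:                 {dv_fem:.1f} ug/m3")
print(f"FRM-equivalent (debiased) DV: {dv_frm_equivalent:.1f} ug/m3")
```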

3.3.6. Implications of FRM/FEM Bias and Future Directions

The potential implications of data bias between FEM and FRM monitors are substantial and require careful consideration. A systematic understanding of these implications is vital for informed decision-making and determination of area designations. The bias between FEM and FRM monitors may impact the accuracy of data used to calculate annual DVs and assess compliance with NAAQSs. The recent tightening of the annual NAAQS for PM2.5 from 12 µg/m3 to 9 µg/m3 [11] is likely to effectively reduce the compliance threshold, particularly in scenarios where air quality dispersion modeling is utilized to validate adherence to the NAAQSs. According to current practice, permit applicants must demonstrate, through a comprehensive air quality modeling analysis, that the PM2.5 concentrations simulated by their operations align with the NAAQSs when added to the background concentration. The background concentration accounts for all sources not explicitly simulated in a regulatory dispersion model such as AERMOD and is typically quantified as the DV from a representative (usually nearest) FRM or FEM ambient monitor. The difference between the annual standard level and the background concentration is denoted as the “headroom”. To illustrate, considering the revised PM2.5 NAAQS established at 9.0 µg/m3 and a county’s DV set at 7.5 µg/m3, the county would have a headroom of 1.5 µg/m3 (calculated as 9.0 − 7.5 µg/m3). Now, if the county’s DV is based on measurements conducted by a positively biased FEM monitor, the current DV (7.5 µg/m3) is likely to be an overestimate. Therefore, if the bias in FEM monitors is not addressed, less headroom will be available within which new projects can be permitted.
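The headroom example above can be restated numerically. The sketch below uses the same illustrative values and an assumed 15% positive FEM bias to show how an uncorrected, positively biased background DV shrinks the apparent headroom available for permitting.

```python
# Minimal sketch of the headroom calculation with and without an assumed bias
# correction of the background design value. All values are illustrative.
naaqs_annual = 9.0    # revised annual PM2.5 NAAQS (ug/m3)
background_dv = 7.5   # county DV from the representative monitor (ug/m3)
fem_bias = 0.15       # assumed 15% positive FEM bias

headroom_reported = naaqs_annual - background_dv                  # 1.5 ug/m3
headroom_debiased = naaqs_annual - background_dv / (1 + fem_bias)

print(f"Headroom with reported DV:       {headroom_reported:.2f} ug/m3")
print(f"Headroom with bias-corrected DV: {headroom_debiased:.2f} ug/m3")
```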
In April 2023, the EPA approved a modification for the Teledyne Model T640/T640x PM mass monitors, allowing them to operate with or without Network Data Alignment using updated firmware from Teledyne (see supplemental information in [44]). This adjustment addressed concerns about bias in the T640 and T640x monitors compared to reference methods. The EPA retroactively applied the Network Data Alignment equations to all hourly unaligned T640 and T640x PM2.5 concentrations in the EPA’s Air Quality System (AQS), starting from 2017 when these monitors were first deployed across the U.S. [44,45]. The effectiveness of the firmware update in addressing the bias in the T640/T640x monitors still needs to be fully evaluated.
Data incomparability or bias between FEM and FRM monitors has implications beyond regulatory compliance. Understanding and quantifying these biases can inform air quality model performance assessments by providing insights into the accuracy and reliability of model predictions. Additionally, in health research, acknowledging and accounting for these biases enables epidemiological studies to accurately assess the relationship between particulate pollution exposure and health outcomes. Addressing bias issues in public communications about air quality, using tools such as the Air Quality Index (AQI), fosters transparency by providing the public with accurate and reliable information about air quality.

4. Study Boundaries

The evaluation of collocated FEM monitor performance in this study relied on yearly averaged concentrations, given that the primary focus of the study was to investigate and quantify FEM/FRM ratios and any systematic, long-term bias. We acknowledge that annual averages may obscure seasonal variations, such as those driven by ambient temperature and relative humidity, as well as short-term fluctuations, such as those on a 24 h basis. Analyzing monitor performance over diverse time scales and seasons could provide additional insights into FEM monitor performance; however, such analyses were outside the scope of this study.
We also considered the implications of including all concentration data, including low-concentration ranges (e.g., <3 µg/m3), in calculating FEM/FRM bias. However, since most paired observations (~86%) in our evaluation dataset exceeded this threshold (i.e., ≥3 µg/m3), the impact of low-concentration data on the overall bias is expected to be minimal. Similarly, very high concentrations associated with exceptional events, such as wildfire smoke or prescribed fires, may cause increased positive bias in FEM monitors compared to FRM monitors over shorter time scales [46]. However, the impact of these short-term variations on the annual average is also expected to be minimal.
As discussed above, this study predominantly focuses on reporting FEM/FRM ratios and biases as quantitative differences between measurements without explicitly considering other factors like instrument precision or sensitivity to aerosol type. FEM and FRM monitors typically undergo rigorous calibration and maintenance procedures to ensure accuracy and regulatory compliance [47]. These procedures generally involve multi-point calibration using NIST-traceable standards and routine performance checks, as outlined by the EPA (see Appendix D of [47]). For precise information on the calibration and maintenance practices of the specific equipment used in this study, such details can be obtained from the relevant monitoring agency.

5. Conclusions

This study offers insights into the performance of regulatory continuous PM2.5 monitors across diverse regions in the U.S. By analyzing paired concentration data from 276 monitoring stations, we assessed the performance of FEM monitors in comparison to collocated FRM monitors. The biases observed in FEM monitors, as explored in this study, vary depending on the specific FEM manufacturer, measurement method, and sampling site location. Notably, light scatter-based FEM monitors, primarily Teledyne 640/640x models, emerge as the dominant measurement method across all locations, including urban, suburban, and rural settings. Overall, FEM monitors tend to exhibit high biases (mean bias of 22%) compared to FRM monitors. The potential bias in PM2.5 DVs could range from 13 to 21% at monitoring sites where FEMs are designated as the primary monitors. These findings collectively underscore the importance of addressing method comparability issues to ensure accurate and reliable air quality assessments as the EPA promulgates a revised PM2.5 NAAQS and develops implementation strategies.
Although meteorological factors like ambient temperature and relative humidity could potentially impact instrument performance, we did not disaggregate our data by season. Our goal was not to correlate PM2.5 measurements with their contributing variables but rather to understand overall bias and potentially provide a framework for comparing measurements. We acknowledge that disaggregating data by season could be a valuable topic for future research, offering further insights into the influence of seasonal variations on instrument accuracy and bias. By focusing on regional and locational differences, we have laid the foundation for more nuanced studies that can explore the intricate dynamics between environmental factors and monitor performance.

Author Contributions

Conceptualization, T.R.K. and Z.I.E.; methodology, T.R.K.; software, T.R.K.; validation, T.R.K.; formal analysis, T.R.K.; investigation, T.R.K. and Z.I.E.; resources, Z.I.E.; data curation, T.R.K. and K.H.M.; writing—original draft preparation, T.R.K.; writing—review and editing, T.R.K. and Z.I.E.; visualization, T.R.K.; supervision, Z.I.E.; project administration, T.R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The PM2.5 data used for comparison between reference and equivalent methods (i.e., FRM and FEM) were provided by the United States Environmental Protection Agency (USEPA). These data were obtained from USEPA’s PM2.5 Continuous Monitor Comparability Assessments Tool, accessible at: https://www.epa.gov/outdoor-air-quality-data/pm25-continuous-monitor-comparability-assessments (accessed on 1 August 2023).

Acknowledgments

We thank Vipin Varma for his helpful comments and technical discussions. We also thank three anonymous reviewers for their constructive comments on improving the manuscript.

Conflicts of Interest

The authors confirm that there are no relevant financial or non-financial competing interests to report with this study.

References

  1. USEPA. Ambient Air Monitoring Reference and Equivalent Methods; 40 CFR, Part 50. Federal Code of Regulations; US Government Printing Office: Washington, DC, USA, 2024.
  2. Kelp, M.M.; Fargiano, T.; Lin, S.; Liu, T.; Turner, J.; Kutz, N.; Mickley, L. Data-driven placement of PM2.5 air quality sensors in the United States: An approach to target urban environmental injustice. Geohealth 2023, 7, e2023GH000834. [Google Scholar] [CrossRef] [PubMed]
  3. USEPA. Ambient Air Monitoring Reference and Equivalent Methods; 40 CFR, Part 53, Federal Code of Regulations; US Government Printing Office: Washington, DC, USA, 2024.
  4. Clements, A.; Vanderpool, R. EPA Tools and Resources Webinar FRMs/FEMs and Sensors: Complementary Approaches for Determining Ambient Air Quality. 2019. Available online: https://www.epa.gov/sites/default/files/2019-12/documents/frm-fem_and_air_sensors_dec_2019_webinar_slides_508_compliant.pdf (accessed on 25 May 2024).
  5. Le, T.C.; Shukla, K.K.; Chen, Y.T.; Chang, S.C.; Lin, T.Y.; Li, Z.; Pui, D.Y.; Tsai, C.J. On the concentration differences between PM2.5 FEM monitors and FRM samplers. Atmos. Environ. 2020, 222, 117138. [Google Scholar] [CrossRef]
  6. Noble, C.A.; Vanderpool, R.W.; Peters, T.M.; McElroy, F.F.; Gemmill, D.B.; Wiener, R.W. Federal reference and equivalent methods for measuring fine particulate matter. Aerosol Sci. Technol. 2001, 34, 457–464. [Google Scholar] [CrossRef]
  7. USEPA. Part 50—National Primary and Secondary Ambient Air Quality Standards. e-CFR: Title 40. Protection of Environment. Chapter I. ENVIRONMENTAL PROTECTION AGENCY. Subchapter C. AIR PROGRAMS. Available online: https://www.ecfr.gov/current/title-40/chapter-I/subchapter-C/part-50 (accessed on 1 July 2024).
  8. Long, R.W.; Urbanski, S.P.; Lincoln, E.; Colón, M.; Kaushik, S.; Krug, J.D.; Vanderpool, R.W.; Landis, M.S. Summary of PM2.5 measurement artifacts associated with the Teledyne T640 PM Mass Monitor under controlled chamber experimental conditions using polydisperse ammonium sulfate aerosols and biomass smoke. J. Air Waste Manag. Assoc. 2023, 73, 295–312. [Google Scholar] [CrossRef]
  9. Hanley, T. Continuous FEM Network Status and Method Update. In Proceedings of the National Association of Clear Air Agencies Monitoring Steering Committee Meeting, Boston, MA, USA, 13–14 May 2019. [Google Scholar]
  10. Frey, H.C.; Adams, P.; Adgate, J.L.; Allen, G.; Balmes, J.; Boyle, K.; Chow, J.C.; Dockery, D.W.; Felton, H.; Gordon, T.; et al. Advice from the Independent Particulate Matter Review Panel (formerly EPA CASAC Particulate Matter Review Panel) on EPA’s Policy Assessment for the Review of the National Ambient Air Quality Standards for Particulate Matter (External Review Draft–September 2019). Docket ID No. EPA–HQ–OAR–2015–0072 (Comment ID: EPA-HQ-OAR-2015-0072-0037), and Clean Air Scientific Advisory Committee, US Environmental Protection Agency, Washington, DC. 2019. Available online: https://www.regulations.gov/comment/EPA-HQ-OAR-2015-0072-0037 (accessed on 10 July 2023).
  11. USEPA. Reconsideration of the National Ambient Air Quality Standards for Particulate Matter (EPA-HQ-OAR-2015-0072; FRL-8635-02-OAR). 2023. Available online: https://www.regulations.gov/document/EPA-HQ-OAR-2015-0072-1543 (accessed on 27 January 2023).
  12. Owenby, M.W. Tennessee Comments on Proposed Reconsideration of the National Ambient Air Quality Standards for Particulate Matter (Docket No. EPA-HQ-OAR-2015-0072). 2023. Available online: https://www.regulations.gov/comment/EPA-HQ-OAR-2015-0072-1980 (accessed on 29 March 2023).
  13. Dunn, R.E. Georgia Environmental Protection Division comments on EPA’s Reconsideration of the National Ambient Air Quality Standards for Particulate Matter (Docket ID No. EPA-HQ-OAR-2015-0072). 2023. Available online: https://www.regulations.gov/comment/EPA-HQ-OAR-2015-0072-1972 (accessed on 29 March 2023).
  14. Sloan, J. Association of Air Pollution Control Agencies, to P. Tsirigotis, U.S. EPA, (AAPCA Letter). 2022. Available online: https://cleanairact.org/wp-content/uploads/2022/11/AAPCA-Letter-Particulate-Matter-Monitoring-FINAL-11-23-2022.pdf (accessed on 25 May 2023).
  15. USEPA. Final Rule: Reconsideration of the National Ambient Air Quality Standards for Particulate Matter (EPA–HQ–OAR–2015–0072; FRL–8635–02–OAR). 2024. Available online: https://www.federalregister.gov/documents/2024/03/06/2024-02637/reconsideration-of-the-national-ambient-air-quality-standards-for-particulate-matter (accessed on 6 March 2024).
  16. Takahashi, K.; Minoura, H.; Sakamoto, K. Examination of discrepancies between beta-attenuation and gravimetric methods for the monitoring of particulate matter. Atmos. Environ. 2008, 42, 5232–5240. [Google Scholar] [CrossRef]
  17. Shukla, K.; Aggarwal, S.G. A technical overview on beta-attenuation method for the monitoring of particulate matter in ambient air. Aerosol Air Qual. Res. 2022, 22, 220195. [Google Scholar] [CrossRef]
  18. Hagler, G.; Hanley, T.; Hassett-Sipple, B.; Vanderpool, R.; Smith, M.; Wilbur, J.; Wilbur, T.; Oliver, T.; Shand, D.; Vidacek, V.; et al. Evaluation of two collocated federal equivalent method PM2.5 instruments over a wide range of concentrations in Sarajevo, Bosnia and Herzegovina. Atmos. Pollut. Res. 2022, 13, 101374. [Google Scholar] [CrossRef]
  19. Barhate, P.G.; Le, T.C.; Shukla, K.K.; Lin, Z.Y.; Hsieh, T.H.; Li, Z.; Pui, D.Y.; Tsai, C.J. Effect of aerosol sampling conditions on PM2.5 sampling accuracy. J. Aerosol Sci. 2022, 162, 105968. [Google Scholar] [CrossRef]
  20. Le, T.C.; Barhate, P.G.; Zhen, K.J.; Mishra, M.; Pui, D.Y.; Tsai, C.J. Optimization of sampling conditions to minimize sampling errors of both PM2.5 mass and its semi-volatile inorganic ion concentrations. Aerosol Sci. Technol. 2023, 57, 1264–1279. [Google Scholar] [CrossRef]
  21. Teledyne API. User Manual: Model T640 PM Mass Monitor. 2021. Available online: https://www.teledyne-api.com/prod/downloads/08354c%20t640%20user%20manual.pdf (accessed on 1 July 2024).
  22. Met One Instrument, Inc. BAM 1020 Particulate Monitor Operation Manual. 2016. Available online: https://metone.com/wp-content/uploads/2024/02/BAM-1020-9805-Manual-Rev-G-Reduced.pdf (accessed on 1 July 2024).
  23. Thermo Fisher Scientific. Model 5014i/Beta Instruction Manual. 2014. Available online: https://assets.thermofisher.com/TFS-Assets%2FCAD%2Fmanuals%2Fepm-model-5014i-beta-manual-en.pdf (accessed on 1 July 2024).
  24. Thermo Fisher Scientific. Model 5030 Instruction Manual. 2013. Available online: https://tools.thermofisher.com/content/sfs/manuals/EPM-manual-Model%205030%20SHARP.pdf (accessed on 1 July 2024).
  25. Thermo Fisher Scientific. TEOM® 1405 Ambient Particulate Matter Monitor Instruction Manual. 2008. Available online: https://assets.thermofisher.com/TFS-Assets%2FLSG%2Fmanuals%2FEPM-TEOM1405-Manual.pdf (accessed on 1 July 2024).
  26. United States Government Accountability Office. Air Pollution: Opportunities to Better Sustain and Modernize the National Air Quality Monitoring System. 2020. Available online: https://www.gao.gov/products/gao-21-38 (accessed on 29 July 2023).
  27. Di, Q.; Amini, H.; Shi, L.; Kloog, I.; Silvern, R.; Kelly, J.; Sabath, M.B.; Choirat, C.; Koutrakis, P.; Lyapustin, A.; et al. An ensemble-based model of PM2.5 concentration across the contiguous United States with high spatiotemporal resolution. Environ. Int. 2019, 130, 104909. [Google Scholar] [CrossRef]
  28. Kelp, M.M.; Lin, S.; Kutz, J.N.; Mickley, L.J. A new approach for determining optimal placement of PM2.5 air quality sensors: Case study for the contiguous United States. Environ. Res. Lett. 2022, 17, 034034. [Google Scholar] [CrossRef]
  29. Marlier, M.E.; Brenner, K.I.; Liu, J.C.; Mickley, L.J.; Raby, S.; James, E.; Ahmadov, R.; Riden, H. Exposure of agricultural workers in California to wildfire smoke under past and future climate conditions. Environ. Res. Lett. 2022, 17, 094045. [Google Scholar] [CrossRef]
  30. Magi, B.I.; Cupini, C.; Francis, J.; Green, M.; Hauser, C. Evaluation of PM2.5 measured in an urban setting using a low-cost optical particle counter and a Federal Equivalent Method Beta Attenuation Monitor. Aerosol Sci. Technol. 2020, 54, 147–159. [Google Scholar] [CrossRef]
  31. Chang, C.T.; Tsai, C.J.; Lee, C.T.; Chang, S.Y.; Cheng, M.T.; Chein, H.M. Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler. Atmos. Environ. 2001, 35, 5741–5748. [Google Scholar] [CrossRef]
  32. Pilinis, C.; Seinfeld, J.H.; Grosjean, D. Water content of atmospheric aerosols. Atmos. Environ. 1989, 23, 1601–1606. [Google Scholar] [CrossRef]
  33. Khlystov, A.; Stanier, C.O.; Takahama, S.; Pandis, S.N. Water content of ambient aerosol during the Pittsburgh Air Quality Study. J. Geophys. Res. Atmos. 2005, 110. [Google Scholar] [CrossRef]
  34. Shin, S.E.; Jung, C.H.; Kim, Y.P. Estimation of the optimal heated inlet air temperature for the beta-ray absorption method: Analysis of the PM10 concentration difference by different methods in coastal areas. Adv. Environ. Res. 2012, 1, 69–82. [Google Scholar] [CrossRef]
  35. Kiss, G.; Imre, K.; Molnár, Á.; Gelencsér, A. Bias caused by water adsorption in hourly PM measurements. Atmos. Meas. Tech. 2017, 10, 2477–2484. [Google Scholar] [CrossRef]
  36. Kenny, L.C.; Merrifield, T.; Mark, D.; Gussman, R.; Thorpe, A. The development and designation testing of a new USEPA-approved fine particle inlet: A study of the USEPA designation process. Aerosol Sci. Technol. 2004, 38, 15–22. [Google Scholar] [CrossRef]
  37. Kenny, L.C.; Gussman, R.A. A Direct Approach to the Design of Cyclones for Aerosol Monitoring Applications. J. Aerosol Sci. 2000, 31, 1407–1420. [Google Scholar] [CrossRef]
  38. Chow, J.C.; Watson, J.G.; Lowenthal, D.H.; Richards, L.W. Comparability between PM2.5 and particle light scattering measurements. Environ. Monit. Assess. 2002, 79, 29–45. [Google Scholar] [CrossRef] [PubMed]
  39. Donateo, A.; Contini, D.; Belosi, F. Real-time measurements of PM2.5 concentrations and vertical turbulent fluxes using an optical detector. Atmos. Environ. 2006, 40, 1346–1360. [Google Scholar] [CrossRef]
  40. Grimm, H.; Eatough, D.J. Aerosol measurement: The use of optical light scattering for the determination of particulate size distribution, and particulate mass, including the semi-volatile fraction. J. Air Waste Manag. Assoc. 2009, 59, 101–107. [Google Scholar] [CrossRef] [PubMed]
  41. McMurry, P.H. A review of atmospheric aerosol measurements. Atmos. Environ. 2000, 34, 1959–1999. [Google Scholar] [CrossRef]
  42. Flores, J.M.; Bar-Or, R.Z.; Bluvshtein, N.; Abo-Riziq, A.; Kostinski, A.; Borrmann, S.; Koren, I.; Rudich, Y. Absorbing aerosols at high relative humidity: Linking hygroscopic growth to optical properties. Atmos. Chem. Phys. 2012, 12, 5511–5521. [Google Scholar] [CrossRef]
  43. Bermudez, R. Continuous PM2.5–Road to Transition. In Proceedings of the National Ambient Monitoring Conference, Pittsburgh, PA, USA, 22–25 August 2022. [Google Scholar]
  44. USEPA. Notice of Opportunity to Comment on Proposed Update of PM2.5 Data from T640/T640X PM Mass Monitors. Federal Register (EPA–HQ–OAR–2023–0642; FRL–11720–01–OAR). 2024. Available online: https://www.federalregister.gov/documents/2024/02/15/2024-02935/notice-of-opportunity-to-comment-on-proposed-update-of-pm25-data-from-t640t640x-pm-mass-monitors (accessed on 15 February 2024).
  45. USEPA. Development of an FRM alignment factor for the Teledyne API (TAPI) Model T640/x Instruments. EPA-HQ-OAR-2023-0642-0029. 2024. Available online: https://downloads.regulations.gov/EPA-HQ-OAR-2023-0642-0029/content.pdf (accessed on 11 April 2024).
  46. Iowa Department of Natural Resources (DNR). Iowa Wildfire Smoke Episode on 8/1/21: Performance of the T640 and other FEM/FRM Methods. NACAA Monitoring Steering Committee Virtual Meeting with EPA, 29–30 November 2021. Available online: https://www.4cleanair.org/msc_virtually_meeting_with_epa_11_2021/ (accessed on 10 May 2024).
  47. USEPA. Quality Assurance Handbook for Air Pollution Measurement Systems, Volume II: Ambient Air Quality Monitoring Program. 2017. Available online: https://www.epa.gov/sites/default/files/2020-10/documents/final_handbook_document_1_17.pdf (accessed on 1 July 2024).
Figure 1. Distribution of FEM monitors by (a) equipment manufacturer and (b) method of measurement.
Figure 2. Distribution of FEM monitors by location type (n = 276).
Figure 3. Frequency distribution of FEM/FRM concentration ratios.
Figure 4. Spatial distribution of FEM/FRM concentration ratios (n = 276).
Figure 5. FEM/FRM concentration ratios by the equipment manufacturer (n = 265; three GRIMM EDM 180 and eight TEOM monitors were excluded).
Figure 6. FEM/FRM concentration ratios by method of measurement.
Figure 7. US EPA regional map (available at https://www.environmentalprotectionnetwork.org/ta-epa-regions-map/, accessed on 17 September 2023).
Figure 8. FEM/FRM concentration ratios by EPA region (n = 276).
Figure 9. FEM/FRM bias by the equipment manufacturer.
Figure 10. FEM/FRM bias by the method of measurement.
Figure 11. FEM/FRM bias by EPA region.
Figure 12. Method dominance at various EPA regions (n = 276).
Figure 13. Bias by location type and measurement method (#LS = the number of light-scatter monitors; #BA = the number of beta attenuation monitors; n = 268; eight sites with TEOM monitors were excluded).
Figure 14. Potential bias in PM2.5 design values.
Table 1. Statistical summary of FEM/FRM ratios by equipment manufacturer and measurement method. The first five data columns summarize ratios by manufacturer (model); the last two summarize ratios by measurement method.

| Statistic | Teledyne T640 | Teledyne T640x | Teledyne T640, T640x | Met One BAM 1020, 1022 | Thermo Scientific | Light Scatter | Beta Attenuation |
|---|---|---|---|---|---|---|---|
| Min | 0.91 | 0.95 | 0.91 | 0.72 | 0.84 | 0.75 | 0.72 |
| Max | 1.70 | 1.65 | 1.70 | 2.44 | 1.35 | 1.70 | 2.44 |
| Median | 1.21 | 1.19 | 1.20 | 0.98 | 1.08 | 1.20 | 1.00 |
| Mean | 1.20 | 1.21 | 1.21 | 1.01 | 1.08 | 1.20 | 1.03 |
| 25th percentile | 1.16 | 1.15 | 1.15 | 0.91 | 1.05 | 1.15 | 0.92 |
| 50th percentile | 1.21 | 1.19 | 1.20 | 0.98 | 1.08 | 1.20 | 1.00 |
| 75th percentile | 1.25 | 1.24 | 1.25 | 1.06 | 1.12 | 1.25 | 1.08 |
| 90th percentile | 1.29 | 1.35 | 1.32 | 1.19 | 1.17 | 1.32 | 1.17 |
| Sample size (n) | 78 | 62 | 140 | 100 | 25 | 143 | 125 |
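Ratio statistics of the kind reported in Tables 1 and 2 can be reproduced from paired site-level annual mean concentrations with standard data-analysis tooling. The sketch below is illustrative only; it is not the study's processing code, and the column names site_id, group, fem_annual, and frm_annual are hypothetical placeholders.

```python
# Illustrative sketch (not the study's processing code): summarizing FEM/FRM
# concentration ratios by group from paired site-level annual mean PM2.5 data.
import pandas as pd

def q25(s): return s.quantile(0.25)
def q75(s): return s.quantile(0.75)
def q90(s): return s.quantile(0.90)

# Hypothetical input: one row per collocated monitoring site.
df = pd.DataFrame({
    "site_id":    ["S1", "S2", "S3", "S4"],
    "group":      ["Teledyne T640", "Teledyne T640", "Met One BAM", "Met One BAM"],
    "fem_annual": [10.8, 11.5, 9.1, 9.8],  # FEM annual mean PM2.5, ug/m3
    "frm_annual": [9.0, 9.6, 9.3, 9.7],    # collocated FRM annual mean PM2.5, ug/m3
})

# Site-level FEM/FRM concentration ratio.
df["ratio"] = df["fem_annual"] / df["frm_annual"]

# Min, max, median, mean, selected percentiles, and sample size per group.
summary = df.groupby("group")["ratio"].agg(
    ["min", "max", "median", "mean", q25, q75, q90, "count"]
)
print(summary.round(2))
```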
Table 2. Statistical summary of FEM/FRM ratios by EPA region.

| Statistic | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Sample size (n) | 26 | 16 | 31 | 44 | 51 | 17 | 10 | 27 | 40 | 14 |
| Min | 0.77 | 0.90 | 0.94 | 0.89 | 0.87 | 0.91 | 0.81 | 0.75 | 0.72 | 0.89 |
| Max | 1.39 | 1.31 | 1.70 | 1.27 | 2.44 | 1.43 | 1.51 | 1.55 | 1.65 | 1.55 |
| Median | 1.05 | 1.10 | 1.16 | 1.16 | 1.19 | 1.06 | 1.18 | 1.05 | 1.06 | 0.99 |
| Mean | 1.06 | 1.11 | 1.17 | 1.15 | 1.17 | 1.11 | 1.12 | 1.05 | 1.09 | 1.03 |
| 25th percentile | 0.91 | 1.05 | 1.09 | 1.10 | 1.08 | 0.99 | 0.91 | 0.93 | 0.95 | 0.93 |
| 50th percentile | 1.05 | 1.10 | 1.16 | 1.16 | 1.19 | 1.06 | 1.18 | 1.05 | 1.06 | 0.99 |
| 75th percentile | 1.19 | 1.20 | 1.23 | 1.22 | 1.25 | 1.24 | 1.24 | 1.14 | 1.20 | 1.06 |
| 90th percentile | 1.26 | 1.30 | 1.33 | 1.25 | 1.31 | 1.34 | 1.49 | 1.25 | 1.32 | 1.35 |
Table 3. Statistical summary of FEM/FRM bias (%) by equipment manufacturer and measurement method *. The first five data columns summarize bias by manufacturer (model); the last two summarize bias by measurement method.

| Statistic | Teledyne T640 | Teledyne T640x | Teledyne T640, T640x | Met One BAM 1020, 1022 | Thermo Scientific | Light Scattering | Beta Attenuation |
|---|---|---|---|---|---|---|---|
| Mean | 29.4 | 28.6 | 29.0 | 11.5 | 14.6 | 28.3 | 12.1 |
| Median | 27.2 | 24.0 | 25.8 | 0.8 | 12.9 | 25.7 | 5.0 |
| Upper whisker | 42 | 44.3 | 44.3 | 33.5 | 27.4 | 44.3 | 39.2 |
| 3rd quartile | 31.6 | 30.3 | 31.6 | 13.6 | 18.4 | 31.15 | 14.9 |
| 1st quartile | 21.5 | 18.6 | 19.95 | −8.15 | 8.9 | 19.7 | −6.2 |
| Lower whisker | 6.8 | 9.8 | 6.8 | −34 | −4 | 6.8 | −34 |
| Sample size (n) | 78 | 62 | 140 | 100 | 25 | 143 | 125 |

* Together, the FEM monitors employing the light-scattering and beta attenuation principles constitute a sample size of n = 268. The FEMs at the remaining 8 sites, such as tapered element oscillating microbalance (TEOM) monitors, fit neither category and were therefore excluded from this table.
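The quartile and whisker rows in Tables 3–5 correspond to box-and-whisker statistics of the per-site bias values. Assuming the whiskers follow the conventional Tukey rule (extending to the most extreme observation within 1.5 × IQR of the nearer quartile, the default in common plotting libraries), these quantities could be computed as in the following sketch; the function name and the sample values are hypothetical.

```python
# Illustrative sketch (assumes Tukey-style whiskers at 1.5 * IQR; not the study's code).
import numpy as np

def box_stats(bias_values):
    """Mean, median, quartiles, and whiskers for one group of per-site bias values (%)."""
    b = np.asarray(bias_values, dtype=float)
    q1, med, q3 = np.percentile(b, [25, 50, 75])
    iqr = q3 - q1
    lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {
        "mean": b.mean(),
        "median": med,
        "3rd quartile": q3,
        "1st quartile": q1,
        # Whiskers end at the most extreme observations that fall inside the fences.
        "upper whisker": b[b <= upper_fence].max(),
        "lower whisker": b[b >= lower_fence].min(),
        "n": b.size,
    }

# Hypothetical per-site bias values (%) for a single monitor group:
print(box_stats([-12.0, -3.5, 4.1, 8.9, 15.2, 22.7, 31.0, 55.0]))
```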
Table 4. Statistical summary of FEM/FRM bias (%) by EPA region.

| Statistic | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Mean | 17.8 | 17.1 | 25.2 | 17.7 | 26.6 | 24.2 | 52.0 | 18.0 | 12.6 | 8.4 |
| Median | 19.4 | 20.7 | 23.8 | 19.7 | 25.0 | 5.0 | 25.4 | 13.2 | 12.5 | 1.5 |
| Upper whisker | 53.0 | 36.7 | 53.4 | 31.6 | 54.1 | 56.0 | 43.3 | 64.4 | 59.8 | 33.5 |
| 3rd quartile | 26.7 | 25.9 | 31.6 | 25.2 | 31.6 | 29.6 | 29.5 | 30.2 | 22.9 | 11.7 |
| 1st quartile | 0.7 | 9.5 | 14.8 | 12.3 | 14.7 | −0.2 | −10.0 | −0.6 | −3.7 | −10.0 |
| Lower whisker | −24.0 | −10.0 | −3.6 | −3.2 | −9.8 | −10.0 | −23.0 | −30.0 | −34.0 | −20.0 |
| Sample size (n) | 26 | 16 | 31 | 44 | 51 | 17 | 10 | 27 | 40 | 14 |
Table 5. Statistical summary of FEM/FRM bias (%) by location and equipment type.

| Statistic | Urban, Light Scatter | Urban, Beta Attenuation | Suburban, Light Scatter | Suburban, Beta Attenuation | Rural, Light Scatter | Rural, Beta Attenuation |
|---|---|---|---|---|---|---|
| Mean | 25.5 | 5.6 | 26.8 | 14.7 | 41.3 | 22.9 |
| Median | 25 | 3.4 | 24.1 | 8.9 | 30.7 | −3.6 |
| Upper whisker | 44.3 | 28.9 | 38.9 | 33.5 | 36.7 | 39.2 |
| 3rd quartile | 30.5 | 11.9 | 29.5 | 18.4 | 36.7 | 13.2 |
| 1st quartile | 18.9 | −6.3 | 19.5 | −1 | 26.5 | −12 |
| Lower whisker | 6.8 | −30 | 9.2 | −20 | 18.2 | −20 |
| Sample size (n) | 65 | 53 | 57 | 54 | 21 | 18 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
