Article

Assessing the Impact of Surface and Upper-Air Observations on the Forecast Skill of the ACCESS Numerical Weather Prediction Model over Australia

Australian Bureau of Meteorology, 700 Collins Str., Melbourne, VIC 3008, Australia
*
Author to whom correspondence should be addressed.
Atmosphere 2018, 9(1), 23; https://doi.org/10.3390/atmos9010023
Submission received: 15 December 2017 / Revised: 11 January 2018 / Accepted: 12 January 2018 / Published: 16 January 2018
(This article belongs to the Special Issue Efficient Formulation and Implementation of Data Assimilation Methods)

Abstract

The impact of the Australian Bureau of Meteorology’s in situ observations (land and sea surface observations, upper air observations by radiosondes, pilot balloons, wind profilers, and aircraft observations) on the short-term forecast skill provided by the ACCESS (Australian Community Climate and Earth-System Simulator) global numerical weather prediction (NWP) system is evaluated using an adjoint-based method. This technique makes use of the adjoint perturbation forecast model utilized within the 4D-Var assimilation system, and is able to calculate the individual impact of each assimilated observation in a cycling NWP system. The results obtained show that synoptic observations account for about 60% of the 24-h forecast error reduction, with the remainder accounted for by aircraft (12.8%), radiosondes (10.5%), wind profilers (3.9%), pilot balloons (2.8%), buoys (1.7%) and ships (1.2%). In contrast, the largest impact per observation is from buoys and aircraft. Overall, all observation types have a positive impact on the 24-h forecast skill. Such results help to support the decision-making process regarding the evolution of the observing network, particularly at the national level. Consequently, this 4D-Var-based approach has great potential as a tool to assist the design and running of an efficient and effective observing network.

1. Introduction

Numerical weather prediction (NWP) models are critically reliant on a large number of appropriate quality in situ and remotely sensed observations. These observations provide the data that are needed to accurately define the forecast initial conditions, via the process of data assimilation. Hence, the forecast skill of NWP is substantially influenced by the observations.
To both improve the analysis and increase the forecast skill of NWP systems such as ACCESS [1], the national observing network needs to be regularly evaluated in terms of its design and observational types. The significant investment by national meteorological services like the Bureau of Meteorology (hereafter, the Bureau) in national observing networks, and the constant evolution of observational technologies, require an ongoing assessment of the value of the network components. NWP is one of the major mechanisms for converting observed data into information and services, so an objective measure of the impact of each observation on the quality of short-term forecasts can potentially guide decisions related to network efficiency and effectiveness.
Traditionally, the impact of an observation system, type, or individual instrument on the forecast skill of an NWP system that assimilates it is assessed by performing an Observing System Experiment (OSE), often called a Data Denial Experiment (DDE). In an OSE, the forecast skill of two separate runs is compared: one with all observations assimilated and the other with a given observation type (or individual instrument) withheld or added (e.g., [2,3,4,5,6,7]). Any change in forecast skill is attributed to the observations that were withheld. The technique can also be used to assess the impact of newly available observations. OSEs can be informative but come with disadvantages: they are computationally expensive and time-consuming to carry out, and their results are not especially "fine-grained". They are not suited to evaluating the impact of, say, a single station in an observing network, nor of a few extra channels from a multi-channel satellite instrument such as a modern hyper-spectral infra-red sounder, because of the very long experiment duration required to produce statistically significant results. OSEs also only provide information on the dataset that was withheld, and no direct information on the value of other subsets of observations.
A relatively new technique (e.g., [8]), which makes use of the adjoint models utilized within variational assimilation systems, is able to calculate the individual impact that each assimilated observation has in a cycling NWP system, where the impact is typically measured by the reduction in the 24-h forecast error (typically expressed as a total dry or moist energy norm). This technique, which uses the same code as four-dimensional variational data assimilation (4D-Var) systems [9,10], was pioneered by Baker and Daley [11] and Langland and Baker [8], then further explored (e.g., [12,13,14,15,16,17,18]) and subsequently implemented in a number of NWP centres (e.g., [5,19,20]). This 4D-Var approach has the advantage of being able to continually generate and aggregate forecast impacts for all observations, and allows much more fine-grained impact statistics to be generated than is feasible with OSEs. At the UK Met Office, Lorenc and Marriott [20] developed a method of calculating Forecast Sensitivity to Observations (FSO) by means of the technology already used in the Met Office 4D-Var system [21]: specifically, the adjoint perturbation forecast (PF) model, which is used in the minimization of the 4D-Var cost function. The development of this capability at the UK Met Office gave rise to the opportunity to implement it in ACCESS, which is built from the Met Office Unified Model [22] and associated 4D-Var software.
The key feature of the adjoint-based FSO is that the diagnostic information is generated for all assimilated observations simultaneously; that is, the model does not need to be re-run to examine the impact of each observation type. However, this approach has limitations caused by the linearity of the adjoint model, which restricts the lead time of the forecasts that can be assessed. In addition, the impact of an observation is assessed in the presence of all other observations: this is not the same as measuring the change in forecast skill caused by adding or withholding the observation from the assimilation. Furthermore, non-assimilated observations that nevertheless contribute to the skill of an NWP forecast (e.g., the sea surface temperatures which provide some of the boundary conditions) are not included. Impact results are also strongly dependent on the choice of the error norm and the spatial domain over which it is calculated.
In this paper, we use FSO capabilities to quantitatively analyze the impact of the Bureau’s in situ, non-satellite observations (synoptic, ship weather, radiosonde, wind profiler, moored and drifting buoy and aircraft data) on the short-term 24 h numerical weather forecasts of the operational global NWP system (ACCESS-G) and its 4D-Var incremental data assimilation system. The impact of the satellite data will be dealt with in a subsequent paper.
The results provide objective guidance to support decisions related to the ongoing management and planning of the Bureau’s observing network.

2. Experiments

2.1. Essentials of the Forecast Sensitivity to Observation Method

The adjoint-based forecast sensitivity method involves the comparison of a 24 h and a 30 h forecast valid at the same point in time. The improved skill of the 24 h forecast relative to the 30 h forecast is attributed to the observations that were assimilated to produce the 24 h forecast. Let us denote the 24 h forecasts from an analysis and from the background used in that analysis by $x_t^{fa}$ and $x_t^{fb}$, respectively, where $x_t^{fb}$ is identical to a 30 h forecast from the previous analysis cycle. Let $x_t^a$ be the verifying analysis at time $t$. In the incremental 4D-Var system used in ACCESS, the adjoint PF model, which has a coarser grid and simpler moist physics than the full-field forecast model, is applied; thus, the forecast errors are calculated with these simplifications. Let $S$ be the operator that reduces the set of cloud moisture variables and interpolates model variables onto the lower resolution grid [20]:

$$\delta w_t^{fa} = S x_t^{fa} - S x_t^a \tag{1}$$

$$\delta w_t^{fb} = S x_t^{fb} - S x_t^a \tag{2}$$

$$\delta w_t = S x_t^{fa} - S x_t^{fb} = \delta w_t^{fa} - \delta w_t^{fb} \tag{3}$$

where $\delta w_t^{fa}$ is the error of the forecast started from the analysis, $\delta w_t^{fb}$ is the error of the forecast started from the background, and $\delta w_t$ is the forecast difference.
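As a concrete illustration, the three forecast-error definitions above can be sketched in a few lines of Python. The function name and the representation of the simplification operator $S$ as an arbitrary callable are our own choices for illustration, not part of the operational ACCESS code.

```python
import numpy as np

def forecast_differences(x_fa, x_fb, x_a, S):
    """Forecast errors and their difference, following the three
    definitions above. S is the simplification operator: here, any
    callable mapping a full-resolution state vector to the reduced
    PF-model space."""
    dw_fa = S(x_fa) - S(x_a)   # error of the forecast from the analysis
    dw_fb = S(x_fb) - S(x_a)   # error of the forecast from the background
    dw = dw_fa - dw_fb         # forecast difference
    return dw_fa, dw_fb, dw
```

Note that the verifying analysis $x_t^a$ cancels in the difference, so $\delta w_t$ depends only on the two forecasts.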
This technique requires the choice of a suitable metric to measure the change in forecast error. As is standard in the UK Met Office FSO suite (and most others), this metric is the total energy norm, calculated as an inner product on the latitude ($\phi$)-longitude ($\lambda$) spherical grid [20]:

$$e = \delta w^{T} C \, \delta w = \frac{1}{M_D} \int_D E \, r^2 \cos\phi \, d\lambda \, d\phi \, dr \tag{4}$$

$$E = \frac{1}{2}\left( \rho u'^2 + \rho v'^2 + \frac{\rho g^2}{N^2 \theta^2}\, \theta'^2 + \frac{1}{\rho c^2}\, p'^2 + \varepsilon \frac{\rho L^2}{c_p}\, q'^2 \right) \tag{5}$$

where $M_D$ is the mass of the atmosphere in the integration domain $D$; $r$ is the radial distance; $u$, $v$ are the zonal and meridional wind components, respectively; $\theta$ is the potential temperature; $p$ is pressure; $q$ is the specific humidity; $c_p$ is the heat capacity at constant pressure; $L$ is the latent heat of water vapor condensation; $g$ is the acceleration due to gravity; $\rho$ is the air density; $c$ is the speed of sound; and $N^2$ is the square of the Brunt-Väisälä frequency. The primed variables denote PF model state vector components and the unprimed variables denote linearization state vector elements. Assuming $\varepsilon = 0$, we obtain the dry total energy norm; in the PF model $\varepsilon = 1$, meaning that water vapor can condense, releasing its latent heat.
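A minimal numerical sketch of the energy norm defined above is given below for a single model level on a regular latitude-longitude grid, with cosine-of-latitude area weighting standing in for the full spherical mass-weighted integral. The function name, argument layout, and the reduction of the 3D integral to a single-level weighted mean are our simplifying assumptions; the operational suite integrates over the full model volume and normalizes by the atmospheric mass $M_D$.

```python
import numpy as np

def total_energy_norm(du, dv, dtheta, dp, dq, rho, theta, N2, c, lat,
                      cp=1004.0, L=2.5e6, g=9.81, eps=1.0):
    """Single-level moist total-energy norm on a (nlat, nlon) grid.

    Each perturbation field (du, dv, dtheta, dp, dq) is weighted as in
    the energy-density expression above; the result is an area-weighted
    mean using cos(latitude) weights. eps=0 gives the dry norm.
    """
    E = 0.5 * (rho * du**2 + rho * dv**2
               + rho * g**2 / (N2 * theta**2) * dtheta**2
               + dp**2 / (rho * c**2)
               + eps * rho * L**2 / cp * dq**2)
    # cos(latitude) area weights, broadcast across longitudes
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(E)
    return float((E * w).sum() / w.sum())
```

With zero perturbations the norm is zero, and every term is non-negative, so the norm is a true measure of perturbation magnitude.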
The model provides an option to limit the integration domain in (4), which allows the energy norm to be calculated for the Australian region as well as the global domain. Formally, this option is expressed in the matrix equations by a projection P that is equal to 1 in the domain of interest and zero outside of this domain. The expression for the change in energy norm in the limited domain can be written as follows [20]:
$$\delta e \approx \left( M_{0t} K \delta y \right)^{T} \left( \frac{\partial e}{\partial w_t} \right) = \delta y^{T} K^{T} M_{0t}^{T} P^{T} C P \left( \delta w_t^{fa} + \delta w_t^{fb} \right) \equiv \delta y^{T} \left( \frac{\partial e}{\partial y} \right) \tag{6}$$

Here $M_{0t}$ is the PF model used to calculate the perturbation forecast from the initial time 0 to time $t$; $K$ is the gain matrix calculated by the incremental 4D-Var scheme; and $\delta y = y^o - H(x^b)$ are the observation innovations, where $y^o$ is the observation vector, $H$ is the observation operator, and $x^b$ is the background model state. A superscript $T$ denotes the transpose operation; therefore, $K^{T}$ and $M_{0t}^{T}$ represent the adjoint data assimilation scheme and the adjoint PF model, respectively.
Note that $(\partial e / \partial y)$ is the vector of finite observation sensitivities, the components of which correspond to the forecast sensitivity with respect to each observation in $\delta y$. The influence of the $k$th observation type on the forecast is given by $\delta e_k \approx \delta y_k^{T} \, (\partial e / \partial y)_k$ in units of J kg$^{-1}$. The calculation of FSO is represented schematically in Figure 1.
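The per-observation impact is simply the elementwise product of the innovation vector with the sensitivity vector, which the following sketch makes explicit (function name and example values are ours, for illustration only):

```python
import numpy as np

def observation_impacts(innovations, sensitivities):
    """Per-observation contribution to the forecast-error change:
    the elementwise product of the innovations delta_y with the
    sensitivities (de/dy). Negative values indicate observations
    that reduced the 24-h forecast error."""
    return (np.asarray(innovations, dtype=float)
            * np.asarray(sensitivities, dtype=float))

# Illustrative values for three observations (J kg^-1):
impacts = observation_impacts([0.5, -1.2, 0.8], [-0.02, 0.01, 0.03])
total_delta_e = float(impacts.sum())  # total change in the error norm
```

Summing the per-observation impacts over any subset (an observation type, a station, a region) yields that subset's contribution to $\delta e$, which is what makes the diagnostic so fine-grained.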

2.2. Experimental Design

The Bureau's operational global NWP system, within the overall APS-2 (Australian Parallel Suite version 2) NWP suite, is used for these calculations [24]. The horizontal resolution of the nonlinear forecast model is N512 (1024 × 769 grid points along longitude and latitude, respectively, with an average distance between grid points of about 25 km), with 70 vertical levels up to ~80 km altitude. The linear PF model includes simplified moist physics and has the same vertical resolution as the nonlinear model, with a horizontal resolution of N216 (about 60 km). The analyses (initial conditions for the NWP model) are generated by means of the 4D-Var system with a 6-h assimilation window. Observation impacts represent an estimate of the change in the 24-h forecast error as a consequence of the assimilation of observations. Forecast error is measured in terms of a moist energy norm calculated from the surface to the 150 hPa level over the Australian region (see Figure 2). The adjoint-based observation impacts are calculated from 00Z 1 September 2015 to 00Z 31 December 2016 at 6-h intervals. The experiment details are summarized in Table 1.
For each analysis time, the adjoint FSO system produces an ASCII output file that contains the information on all the observations including satellite and in situ, non-satellite observation types (see Table 2). The following information is contained in the FSO output file:
- Sequential number of the observation;
- The observation value;
- The value of the innovation (the difference between the observation and background values);
- Sensitivity of the forecast to the observation;
- Latitude and longitude of the observation;
- Pressure level of the observation (in hPa);
- Indicator of the instrument type (radiosonde, surface station, wind profiler);
- Indicator of the observation variable type (temperature, moisture, pressure, horizontal wind components);
- The time offset of the observation from the analysis time;
- Observation error variance;
- WMO (World Meteorological Organization) station identification number;
- Satellite identifier, satellite instrument and channel number.
Calculation of the observation impact is a multi-step process. At each analysis time, the FSO output file is processed into a set of JSON (JavaScript Object Notation) files, from which a set of Python-based tools aggregate the individual forecast sensitivities on the basis of observation type and/or station and statistically analyze and visualize the results.
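The aggregation step can be sketched as follows. The column order assumed for the ASCII records (sequence, value, innovation, sensitivity, latitude, longitude, type) is hypothetical; the real file contains the fields listed above, but its exact layout is not reproduced in the paper, and these function names are ours.

```python
from collections import defaultdict

def parse_fso_records(lines):
    """Parse whitespace-delimited FSO records into dicts.
    Column order (seq, value, innovation, sensitivity, lat, lon, type)
    is an illustrative assumption, not the operational format."""
    records = []
    for line in lines:
        _seq, _value, innov, sens, lat, lon, obs_type = line.split()[:7]
        records.append({"type": obs_type,
                        "lat": float(lat), "lon": float(lon),
                        "impact": float(innov) * float(sens)})
    return records

def aggregate_by_type(records):
    """Accumulate total impact and observation count per observation
    type, then derive the impact per observation."""
    totals, counts = defaultdict(float), defaultdict(int)
    for rec in records:
        totals[rec["type"]] += rec["impact"]
        counts[rec["type"]] += 1
    return {t: {"total_impact": totals[t],
                "impact_per_obs": totals[t] / counts[t],
                "n_obs": counts[t]}
            for t in totals}
```

The same accumulation keyed on WMO station identifier, rather than observation type, yields the per-station statistics discussed in Section 3.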

3. Results

3.1. Observation Impacts of the Australian Sonde Network

Of particular interest to this study is the relative value of the radiosonde stations located at Macquarie Island, Cocos Island and the three Antarctic stations (Mawson, Casey and Davis), since the operation of these stations requires considerable resources.
There are 34 upper-air observing stations in the Australian network (Figure 3). The contribution of each of these radiosonde stations to the reduction of the forecast error is shown in Table 3. The most significant contributions to forecast quality are provided by the more remote stations Casey, Davis and Mawson in the Antarctic, and Macquarie Island. These stations are shown in red in Figure 3. The upper-air stations with the least impact on forecast skill are Moree, Townsville, Cobar, Williamtown, Mount Gambier, Rockhampton, Ceduna, Wagga Wagga and Sydney; these are displayed in blue in Figure 3. The analysis of the spatial distribution of impacts on the mainland shows that more remote stations have a greater influence on forecast skill than others. It is also worth noting the high relative impact of remote stations that sample the Southern Ocean (e.g., the Antarctic stations) in comparison to the remote Indian Ocean station on the Cocos Islands. We interpret this as indicating that assimilated information from upstream stations is more likely to propagate into the verification region and beneficially impact the forecast; this is consistent with the spatial distribution of impacts from drifting buoys reported in Section 3.4.

3.2. Observation Impacts of Australian Wind Profilers

Wind profilers are radar weather observing instruments that continually measure wind speed and direction in the upper air at different levels above the ground using radio waves.
During the study period, the Australian wind profiler network included 11 operational profilers (Figure 4) supplied by ATRAD Pty. Ltd. (Adelaide, Australia) [25]. Eight profilers are boundary-layer measurement devices and three measure wind in the troposphere and lower stratosphere. The new profilers at Longreach and Mackay are also tropospheric-stratospheric profilers; however, they were not yet operational during the study period. The profilers at Launceston and Carnarvon were not assimilated during the bulk of the study period, due to quality control settings supplied by the Met Office, and are thus not included here. All Australian operational profilers operate at a frequency of 55 MHz, providing winds from 500 m to 20 km (stratospheric-tropospheric profilers) and from 300 m to 7 km (boundary-layer profilers). Operational profilers produce a wind estimate every 2–6 min, depending on the mode of operation; these data are then quality controlled and averaged to produce wind estimates every 30 min. The wind profile data are then sent to the Bureau in BUFR (Binary Universal Form for the Representation of meteorological data) format [26].
Figure 5 shows the total impacts and the impact per observation calculated for the Australian summer (December 2015–February 2016). The results calculated for the Australian winter (June–August 2016) are shown in Figure 6. In these figures stations are ranked in descending order of their impact on the 24 h forecast skill. Observations from wind profilers located at Halls Creek, Tennant Creek, Ceduna and Cairns yield the largest contributions to the forecast error reduction. Contributions of these wind profilers differ only slightly. In contrast, observations from wind profilers located at Canberra, Sydney and East Sale contribute 3–4 times less to the forecast error reduction.
In general, profilers located in northern Australia have a greater impact (in terms of forecast error reduction) than profilers situated in the southern part of the Australian continent. This north-south difference may arise from the fact that the number of observations from the northern profilers is considerably larger than from the southern profilers (Figure 7): the southern (older) stations report observations less frequently than the northern stations. This may also explain why the observation data in the layer between the 850 and 500 hPa pressure levels contribute the most to the reduction of the forecast error (Figure 5a and Figure 6a). In contrast, the impact per observation is more uniform across pressure levels (Figure 5b and Figure 6b); consequently, data from all levels have a similar per-observation impact on forecast quality.

3.3. Observation Impacts of Australian Synoptic Observations

Surface weather data are obtained from different types of observing stations around Australia, on offshore islands, and in the Antarctic. They include Bureau-staffed observer stations, Bureau-owned automatic weather stations, and cooperative observer stations at which manual readings are made. Surface weather observations used in the ACCESS data assimilation system are taken at 756 manned and automated weather stations. However, only 70 percent of them (525 stations) are standard automatic weather stations (AWS) managed by the Bureau. The remaining 231 stations are cooperative stations that are funded by the Bureau but operated by contractors; all but a few of these stations provide only manual readings. Station locations are chosen based on a range of different requirements and constraints, including:
(a) The need for observations of particular types;
(b) How well a location represents the surroundings;
(c) Availability of land, observers (if required) and infrastructure;
(d) The presence of other stations in the area.
The spatial distribution of synoptic stations is not even across the country. The observing network is densest in the most highly populated areas (Sydney, Melbourne, Canberra, Adelaide, Perth, Brisbane, Hobart and Darwin); generally, the southeastern quadrant of Australia has the densest synoptic observing network. At the majority of locations, AWS send synoptic data to the Bureau on an hourly basis (although one-minute data are transmitted every minute for core Bureau products and services). However, synoptic observation reports prepared by cooperative stations are typically sent to the Bureau once or twice per day.
The total impacts of synoptic observations from the AWS and cooperative stations for the periods December 2015–February 2016 and June–August 2016, calculated for four NWP analysis times, are illustrated in Figure 8. This figure shows that the impact of observations from AWS exceeds the impact of observations from cooperative stations by at least an order of magnitude, because the number of observations differs considerably between AWS and cooperative stations.
In the Australian summer (December–February) the total impacts for all four analysis times are less than those for the Australian winter (June–August). This difference is most likely due to the fact that moisture contributes about 33 percent of the total energy [20]. Moisture content in the atmosphere fluctuates from season to season; typically, an Australian summer season is drier than a winter season, so the moisture contribution to the total energy norm is larger in winter than in summer. The difference between summer and winter impacts may also be a consequence of the adjoint model's properties and limitations (e.g., [27]). Adjoint models are linear and, therefore, their results are most applicable when the tangent linear approximation is valid. Due to the effects of baroclinic instability, the weather fluctuates more strongly in winter than in summer; winter trajectories generated by the (nonlinear) NWP model are therefore more oscillatory than summer trajectories and, consequently, more sensitive to the initial conditions. In addition, error growth in the adjoint model may be dominated by highly nonlinear moist physics. As a result, observation sensitivities calculated by the adjoint model for a winter season may be slightly larger than for a summer season.
Figure 9 shows the impact of each AWS and cooperative station on the reduction of the forecast error calculated for the Southern Hemisphere winter season (June–August 2016); the winter season is taken as an example because the effect is the same in summer. In Figure 9 stations are divided into three categories: stations that have on average a positive effect on forecast quality (δe < 0), stations with on average a negative effect (δe > 0), and "neutral" stations (δe = 0). Weather observations obtained from about 78 percent of both AWS and cooperative stations are beneficial. Observations taken at about 7 percent of AWS and 17 percent of cooperative stations are neutral, while observations obtained from about 15 percent of AWS and 5 percent of cooperative stations are not beneficial, at least within the framework of the adjoint-based method discussed in Section 2. It appears that the denser the network, the lower the impact of individual stations, and vice versa. This is consistent with the upper-air results.
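The three-way classification of stations described above reduces to a test on the sign of each station's mean impact, as in this short sketch (the function name and example data are ours; the paper classifies directly on the sign of δe):

```python
def classify_stations(mean_impacts):
    """Group stations by the sign of their average impact delta_e:
    beneficial (delta_e < 0), detrimental (delta_e > 0),
    and neutral (delta_e == 0)."""
    groups = {"beneficial": [], "neutral": [], "detrimental": []}
    for station, de in mean_impacts.items():
        if de < 0:
            groups["beneficial"].append(station)
        elif de > 0:
            groups["detrimental"].append(station)
        else:
            groups["neutral"].append(station)
    return groups
```

In practice a small tolerance around zero could be used instead of an exact equality, since averaged impacts are floating-point sums.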
The reason that some stations have, on average, detrimental impact on forecast skill will be a topic of further investigation. For now, we note that most such stations are only weakly detrimental and that those that are more detrimental tend to be inland and remote. Given that these results are only for two seasons (summer and winter), this suggests that exploring the seasonal variation of surface station impacts, and the possible influence of particular synoptic regimes on observational representativeness errors, may be a useful line of enquiry. Stations that are persistently detrimental can also be investigated for quality control issues.

3.4. Observation Impacts from Buoys

A significant source of surface observations in ACCESS is moored and drifting ocean buoys, along with ships that collect weather data; these data come from both Bureau-funded and internationally funded programs. Such marine surface observations are especially important in the Southern Hemisphere, which is geographically dominated by ocean.
Figure 10 shows the average impact per observation for each of the buoys assimilated during the period September to December 2015, for the Australian region error norm; impacts are plotted at the average position of the buoy during the period. Beneficial impact is dominated by drifting buoys in the southern Indian Ocean (upstream of the Australian region in the prevailing westerlies), the Southern Ocean, and the moored buoys in the western equatorial Pacific Ocean. The substantial impact of buoys far from Australia should be interpreted with caution: only the average position is shown, and some such buoys will have drifted considerably during the period. In contrast, buoy impacts derived from the global error norm (not shown) are much more evenly distributed across the globe.

3.5. Comparison of Observation Impacts of Each In Situ Observation Type

The total average observation impact per day in the ACCESS system, aggregated by observation type, and the impact per observation for each in situ observation type (Table 2) are shown in Figure 11. Observations of all types reduce the forecast error on average during both the winter and summer periods. Synoptic observations from AWS stations yield the largest contribution to the forecast error reduction (Figure 11a). Aircraft and radiosondes rank second and third, respectively, followed by wind profilers and pilot balloons. Recall that these results are obtained for the Australian moist energy norm; for the moist energy norm calculated over the globe, synoptic observations are usually not the major in situ contributors to the forecast error reduction, and typically radiosondes and aircraft demonstrate the largest contributions (e.g., [19,20]). In contrast, the impact per observation is largest for buoys (Figure 11b), which is consistent with other studies (e.g., [19]); this is because buoys are the main source of information in the marine boundary layer. Aircraft and synoptic observations rank second and third, followed by ships, radiosondes, pilot balloons and wind profilers.

4. Discussion and Conclusions

In this paper, we evaluated the impacts of in situ observations (land and ocean-based synoptic and ship observations, wind profiler data, radiosonde and pilot balloon upper-air observations, and aircraft data) on 24-h weather forecast error reduction using the adjoint-based FSO approach developed by the Met Office, run in conjunction with the operational ACCESS global NWP system. The impact is measured by the reduction in the 24-h forecast error expressed as a moist energy norm calculated over the Australian domain. Results show that synoptic observations account for about 60% of the 24-h forecast error reduction, with the remainder accounted for by aircraft (12.8%), radiosondes (10.5%), wind profilers (3.9%), pilot balloons (2.8%), buoys (1.7%) and ships (1.2%). In contrast, the largest impact per observation is from buoys and aircraft. The large impact per observation from buoys concurs with other published studies and results from their provision of data from the otherwise poorly sampled surface ocean. Unsurprisingly, the drifting buoys closer to Australia have the larger impact on forecast skill, with buoys in the eastern Indian Ocean and off the southern Australian coast yielding the highest impacts; Indian Ocean buoy observations are thus especially valuable for NWP forecast skill over the Australian continent. In terms of total impact per day, data from surface automatic weather stations have the largest impact on forecast quality over the Australian region, a result that contrasts with global error norm calculations, which show that radiosondes are the most impactful in situ observations. This contrast reflects the much shorter forecast error length scales at the surface compared to the mid-troposphere in typical data assimilation systems [28].
Previous work on quantifying the impact of Australian-region rawinsonde observations in the then Bureau global NWP system [29,30] was reported by Seaman [31], using an analysis impact method described earlier [32]. In that work, stations were ranked according to how often they appeared in the top ten most influential Australian stations (in the sense of reducing forecast error) over the period from 1994 to 2007. As might be expected, remote stations were found to have more impact than those in the denser parts of the Australian network; in particular, stations located in the tropical and sub-tropical north-west of the continent were found to have the most impact. The results obtained in this paper show that there does not seem to be a significant dependence of impact on latitude, in contrast with the earlier study by Seaman [32]. This contrast is most likely due to the significant improvements in NWP data assimilation methods in the last two decades: a similar effect has been noted in most data assimilation systems, where short-range forecast error variances now have only a weak latitudinal dependence, a much different situation from 10 or 20 years ago. The results obtained allow us to rank the Australian upper-air stations with respect to their impacts on forecast skill and to identify the most influential observations. Upper-air radiosonde stations in more remote locations (e.g., Macquarie Island, Antarctic sites) have a greater impact on forecast skill than stations on the mainland.
Similarly, the more densely distributed synoptic sites have less impact on forecast skill than the more remote locations. Remote stations also tend to be upwind of substantial parts of the Australian mainland, which may enhance their impact. In general, wind profilers located in northern Australia have a greater impact (in terms of forecast error reduction) than profilers situated in the southern part of the continent, a difference attributable both to the larger number of observations from the northern stations and to their remoteness.
Overall, all observation types have a positive impact on the 24-h forecast skill. Such results help to support the decision-making process regarding the evolution of the observing network, particularly at the national level; consequently, this 4D-Var-based approach can serve as a tool for guiding the design and running of an efficient and effective observing network. The results indicate great potential for FSO impact data to guide and inform the planning, assessment and development of the Bureau's observing networks. Future studies will further explore the seasonal and inter-annual variability of these impacts and also address the impact of satellite observations assimilated by the ACCESS global NWP system.

Acknowledgments

The atmospheric model (UM) and data assimilation (4D-Var) used in the ACCESS NWP suite were developed at the Met Office, as was the FSO software used in this work. The continuing and timely support and advice by Met Office staff is gratefully acknowledged.

Author Contributions

All authors contributed equally in the preparation of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Puri, K.; Dietachmayer, G.; Steinle, P.; Dix, M.; Rikus, L.; Logan, L.; Naughton, M.; Tingwell, C.; Xiao, Y.; Barras, V.; et al. Implementation of the initial ACCESS numerical weather prediction system. Aust. Meteorol. Oceanogr. J. 2013, 63, 265–284. [Google Scholar] [CrossRef]
  2. Kelly, G.; Thépaut, J.-N. Evaluation of the impact of the space component of the Global Observing System through Observing System Experiments. ECMWF Newsl. 2007, 113, 16–28. [Google Scholar]
  3. Kelly, G.; Bauer, P.; Geer, A.J.; Lopez, P.; Thépaut, J.-N. Impact of SSM/I observations related to moisture, clouds and precipitation on global NWP forecast skill. Mon. Weather Rev. 2007, 136, 2713–2726. [Google Scholar] [CrossRef]
  4. Marseille, G.J.; Stoffelen, A.; Barkmeijer, J. Sensitivity Observing System Experiment (SOSE)—A new effective NWP-based tool in designing the global observing system. Tellus A 2008, 60, 216–233. [Google Scholar] [CrossRef]
  5. Gelaro, R.; Zhu, Y. Examination of observation impacts derived from observing system experiments (OSEs) and adjoint models. Tellus A 2009, 61, 179–193. [Google Scholar] [CrossRef]
  6. Bauer, P.; Radnóti, G. Study on Observing System Experiments (OSEs) for the Evaluation of Degraded EPS/Post-EPS Instrument Scenarios; Report Available from ECMWF; ECMWF: Reading, UK, 2009; p. 99. [Google Scholar]
  7. Masutani, M.; Schlatter, T.W.; Errico, R.M.; Stoffelen, A.; Andersson, E.; Lahoz, W.; Woollen, J.S.; Emmitt, G.D.; Riishøjgaard, L.-P.; Lord, S.J. Observing System Simulation Experiments. In Data Assimilation; Lahoz, W., Khattatov, B., Menard, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 647–679. [Google Scholar]
  8. Langland, R.H.; Baker, N.L. Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus A 2004, 56, 189–201. [Google Scholar] [CrossRef]
  9. Le Dimet, F.-X.; Talagrand, O. Variational algorithms for analysis and assimilation of meteorological observations: Theoretical aspects. Tellus 1986, 38A, 97–110. [Google Scholar] [CrossRef]
  10. Courtier, P.; Thépaut, J.-N.; Hollingsworth, A. A strategy for operational implementation of 4D-Var, using an incremental approach. Q. J. R. Meteorol. Soc. 1994, 120, 1367–1387. [Google Scholar] [CrossRef]
  11. Baker, N.; Daley, R. Observation and background adjoint sensitivity in the adaptive observation targeting problem. Q. J. R. Meteorol. Soc. 2000, 126, 1434–1454. [Google Scholar] [CrossRef]
  12. Errico, R.M. Interpretations of an adjoint-derived observational impact measure. Tellus A 2007, 59, 273–276. [Google Scholar] [CrossRef]
  13. Daescu, D.N. On the deterministic observation impact guidance: A geometrical perspective. Mon. Weather Rev. 2009, 137, 3567–3574. [Google Scholar] [CrossRef]
  14. Daescu, D.N. On the sensitivity equations of four-dimensional variational (4D-Var) data assimilation. Mon. Weather Rev. 2008, 136, 3050–3065. [Google Scholar] [CrossRef]
  15. Daescu, D.N.; Todling, R. Adjoint estimation of the variation in a model functional output due to assimilation of data. Mon. Weather Rev. 2009, 137, 1705–1716. [Google Scholar] [CrossRef]
  16. Sandu, A.; Daescu, D.N.; Carmichael, G.R.; Chai, T. Adjoint sensitivity analysis of regional air quality models. J. Comput. Phys. 2005, 204, 222–252. [Google Scholar] [CrossRef]
  17. Trémolet, Y. Computation of observation sensitivity and observation impact in incremental variational data assimilation. Tellus A 2008, 60, 964–978. [Google Scholar] [CrossRef]
  18. Cioaca, A.; Sandu, A.; de Sturler, E. Efficient methods for computing observation impact in 4D-Var data assimilation. Comput. Geosci. 2013, 17, 975–990. [Google Scholar] [CrossRef]
  19. Cardinali, C. Monitoring the observation impact on the short-range forecast. Q. J. R. Meteorol. Soc. 2009, 135, 239–250. [Google Scholar] [CrossRef]
  20. Lorenc, A.C.; Marriott, R.T. Forecast sensitivity to observations in the Met Office Global numerical weather prediction system. Q. J. R. Meteorol. Soc. 2014, 140, 209–224. [Google Scholar] [CrossRef]
  21. Rawlins, F.; Ballard, S.P.; Bovis, K.J.; Clayton, A.M.; Li, D.; Inverarity, G.W.; Lorenc, A.C.; Payne, T.J. The Met Office global 4-dimensional data assimilation system. Q. J. R. Meteorol. Soc. 2007, 133, 347–362. [Google Scholar] [CrossRef]
  22. Walters, D.; Boutle, I.; Brooks, M.; Melvin, T.; Stratton, R.; Vosper, S.; Wells, H.; Williams, K.; Wood, N.; Allen, T.; et al. The Met Office Unified Model Global Atmosphere 6.0/6.1 and JULES Global Land 6.0/6.1 configurations. Geosci. Model Dev. 2017, 10, 1487–1520. [Google Scholar] [CrossRef]
  23. Auligné, T.; Xiao, Q. Adjoint Sensitivity and Observation Impact. UCAR 2009. Available online: http://www2.mmm.ucar.edu/wrf/users/workshops/WS2009/presentations/2A-05.pdf (accessed on 12 April 2017).
  24. BNOC Operational Bulletin No. 105, 16 March 2016: “APS2 Upgrade to the ACCESS-G Numerical Weather Prediction System”. Available online: http://www.bom.gov.au/australia/charts/bulletins/APOB105.pdf (accessed on 1 June 2017).
  25. Dolman, B.; Reid, I.; Kane, T. The Australian Wind Profiler Network. In Proceedings of the WMO Technical Conference on Meteorological and Environmental Instruments and Methods of Observation (CIMO TECO 2016), Madrid, Spain, 27–30 September 2016. [Google Scholar]
  26. World Meteorological Organisation (WMO). Manual on Codes International Codes Volume I.2 Annex II to the WMO Technical Regulations Part B—Binary Codes. 2016. Available online: https://library.wmo.int/opac/doc_num.php?explnum_id=3421 (accessed on 7 December 2017).
  27. Errico, R.M. What is an adjoint model? Bull. Am. Meteorol. Soc. 1997, 78, 2577–2591. [Google Scholar] [CrossRef]
  28. Ingleby, N.B. The statistical structure of forecast errors and its representation in the Met Office global three-dimensional variational data assimilation system. Q. J. R. Meteorol. Soc. 2001, 127, 209–231. [Google Scholar] [CrossRef]
  29. Seaman, R.; Bourke, W.; Steinle, P.; Hart, T.; Embery, G.; Naughton, M.; Rikus, L. Evolution of the Bureau of Meteorology’s Global Assimilation and Prediction System. Part 1: Analysis and initialisation. Aust. Meteor. Mag. 1995, 44, 1–18. [Google Scholar]
  30. Bourke, W.; Hart, T.; Steinle, P.; Seaman, R.; Embery, G.; Naughton, M.; Rikus, L. Evolution of the Bureau of Meteorology’s Global Assimilation and Prediction System. Part 2: Resolution enhancements and case studies. Aust. Meteor. Mag. 1995, 44, 19–40. [Google Scholar]
  31. Seaman, R.S. Which Australian rawinsonde stations most influence wind analysis? Aust. Meteorol. Mag. 2007, 56, 285–289. [Google Scholar]
  32. Seaman, R.S. Monitoring a data assimilation system for the impact of observations. Aust. Meteorol. Mag. 1994, 43, 41–48. [Google Scholar]
Figure 1. Schematic representation of the adjoint calculation (adapted from [23]).
Figure 2. Horizontal domain of the Australian region error norm (red lines).
Figure 3. The Australian upper-air observing network (the most influential stations are shown in red, stations with the least impact in blue, and the remaining stations in green). (Mawson station is outside the domain shown.)
Figure 4. The location of operational Australian wind profilers during this study.
Figure 5. Total impact (J kg−1) (a) and impact per observation (10−5 J kg−1) (b) for the period of December 2015–February 2016.
Figure 6. Total impact (J kg−1) (a) and impact per observation (10−5 J kg−1) (b) for the period of June–August 2016.
Figure 7. Total number of observations (103) for the period of December 2015–February 2016 (a) and June–August 2016 (b).
Figure 8. Impact of synoptic observations from AWS (automatic weather station) and cooperative stations for the period of December 2015–February 2016 (a) and June–August 2016 (b).
Figure 9. Impact (J kg−1) of AWS (a) and cooperative (b) synoptic observations for winter 2016.
Figure 10. Average impact (10−5 J kg−1) for the Australian region error norm from individual buoys over the period September to December 2015: the impact is plotted at the average position of the buoy during this period.
Figure 11. Average daily impact (a) and impact per observation (b) for each non-satellite observation type for the periods of December 2015–February 2016 and June–August 2016 (SYNOP R—AWS synoptic stations, SYNOP C—cooperative observer stations).
Table 1. Summary of experiment.
Data Period: From 00Z 1 September 2015 to 00Z 31 December 2016.
Impact Measure: 24-h forecast error reduction in the moist energy norm, calculated from the surface to the 150 hPa level over the Australian region.
NWP System: Operational version of the Bureau of Meteorology NWP system (APS-2), with a horizontal resolution of N512 for the forecast model and N216 for the inner loop of 4D-Var, and 70 levels in the vertical. The adjoint of the perturbation forecast (PF) model includes moist physics.
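The exact definition of the moist energy norm is not reproduced in this excerpt; a commonly used form in adjoint sensitivity studies, shown here as an assumption about its general shape rather than the precise ACCESS formulation, is

```latex
e = \frac{1}{2}\int_{A}\int_{0}^{1}
      \left[\, u'^{2} + v'^{2}
            + \frac{c_{p}}{T_{r}}\,T'^{2}
            + \frac{L^{2}}{c_{p}T_{r}}\,q'^{2} \right]
      \mathrm{d}\sigma\,\mathrm{d}A
  \;+\; \frac{1}{2}\int_{A} R_{d}\,T_{r}
        \left(\frac{p_{s}'}{p_{r}}\right)^{2}\mathrm{d}A ,
```

where primes denote perturbations, \(T_{r}\) and \(p_{r}\) are reference temperature and pressure, \(c_{p}\) is the specific heat at constant pressure, \(L\) the latent heat of condensation and \(R_{d}\) the dry-air gas constant; in this study the vertical integral is restricted to the layer between the surface and 150 hPa.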
Table 2. Summary of the observation types and the parameters used in this study. The meanings of the symbols in the third column are as follows: T: temperature; u: zonal wind; v: meridional wind; ps: surface pressure; Rh: relative humidity.
Data ID | Data Type | Information
SYNOP | Surface observations from land-based weather stations | T, u, v, Rh, ps
SHIP | Surface observations from ships and oil rigs | T, u, v, Rh, ps
TEMP | Upper air observations by radiosondes | T, u, v, Rh, ps
PILOT | Upper air observations by radiosondes or pilot balloons released from land stations | u, v
WINPRO | Upper air wind profile observations | u, v
Aircraft | Aircraft observations | T, u, v
BUOY | Sea surface observations from drifting and moored buoys | T, ps
Table 3. Australian upper air sonde stations in average rank order with respect to total impact.
Rank | Station | Latitude (S) | Longitude (E) | Relative Impact
1 | 89611 Casey (Antarctic) | 66°17′ | 110°31′ | 1.00
2 | 89571 Davis (Antarctic) | 68°34′ | 77°58′ | 0.95
3 | 89564 Mawson (Antarctic) | 67°36′ | 62°52′ | 0.94
4 | 94998 Macquarie Island | 54°30′ | 158°57′ | 0.83
5 | 94120 Darwin | 12°25′ | 130°53′ | 0.69
6 | 94461 Giles Met Office | 25°02′ | 128°17′ | 0.69
7 | 94975 Hobart | 42°50′ | 147°30′ | 0.52
8 | 94299 Willis Island | 16°18′ | 149°59′ | 0.49
9 | 94203 Broome Airport | 17°57′ | 122°14′ | 0.45
10 | 96996 Cocos Island | 12°11′ | 96°50′ | 0.45
11 | 94659 Woomera Aero | 31°09′ | 136°49′ | 0.41
12 | 94326 Alice Springs | 23°48′ | 133°53′ | 0.39
13 | 94995 Lord Howe Island | 31°32′ | 159°04′ | 0.38
14 | 94150 Gove Aero | 12°17′ | 136°49′ | 0.37
15 | 94302 Learmonth Aero | 22°14′ | 114°05′ | 0.32
16 | 94170 Weipa Aero | 12°41′ | 141°55′ | 0.28
17 | 94610 Perth Aero | 31°56′ | 115°58′ | 0.25
18 | 94510 Charleville Aero | 26°25′ | 146°16′ | 0.25
19 | 94672 Adelaide | 34°57′ | 138°32′ | 0.25
20 | 94637 Kalgoorlie-Boulder | 30°47′ | 121°27′ | 0.21
21 | 94638 Esperance | 33°50′ | 121°53′ | 0.19
22 | 94802 Albany Aero | 34°56′ | 117°48′ | 0.18
23 | 94312 Port Hedland Aero | 20°22′ | 118°38′ | 0.17
24 | 94430 Meekatharra Aero | 26°37′ | 118°33′ | 0.15
25 | 94866 Melbourne | 37°40′ | 144°51′ | 0.13
26 | 94527 Moree Aero | 29°29′ | 149°50′ | 0.10
27 | 94294 Townsville | 19°15′ | 146°46′ | 0.10
28 | 94711 Cobar Mo | 31°29′ | 145°50′ | 0.10
29 | 94776 Williamtown | 32°48′ | 151°50′ | 0.10
30 | 94821 Mount Gambier Aero | 37°44′ | 140°47′ | 0.10
31 | 94374 Rockhampton Aero | 23°23′ | 150°29′ | 0.07
32 | 94653 Ceduna | 32°08′ | 133°42′ | 0.07
33 | 94910 Wagga Wagga | 35°10′ | 147°27′ | 0.04
34 | 94767 Sydney | 33°56′ | 151°10′ | 0.01
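The "Relative Impact" column in Table 3 is each station's total impact normalised by the largest value in the network (Casey). A minimal sketch of this normalisation and ranking, using hypothetical absolute totals for three stations (Table 3 reports only the ratios, so the magnitudes below are illustrative):

```python
# Hypothetical total impacts (J kg-1); only the ratios mirror Table 3.
totals = {
    "Casey (Antarctic)": 4.0e-2,
    "Davis (Antarctic)": 3.8e-2,
    "Sydney": 4.0e-4,
}

# Normalise by the largest total so the top station scores 1.00.
max_total = max(totals.values())
relative = {name: round(t / max_total, 2) for name, t in totals.items()}

# Rank stations from most to least influential.
ranked = sorted(relative.items(), key=lambda kv: kv[1], reverse=True)
```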

Share and Cite

MDPI and ACS Style

Soldatenko, S.; Tingwell, C.; Steinle, P.; Kelly-Gerreyn, B.A. Assessing the Impact of Surface and Upper-Air Observations on the Forecast Skill of the ACCESS Numerical Weather Prediction Model over Australia. Atmosphere 2018, 9, 23. https://doi.org/10.3390/atmos9010023

