Article

Evaluating the Effect of Physics Schemes in WRF Simulations of Summer Rainfall in North West Iran

1 Atmospheric Science & Meteorological Research Center (ASMRC), Tehran P.O. Box 14977-16385, Iran
2 Water Engineering Department, University of Guilan, Rasht P.O. Box 41889-58643, Iran
* Author to whom correspondence should be addressed.
Climate 2017, 5(3), 48; https://doi.org/10.3390/cli5030048
Submission received: 3 May 2017 / Revised: 27 June 2017 / Accepted: 2 July 2017 / Published: 6 July 2017

Abstract

The Weather Research and Forecasting (WRF) numerical weather prediction model supports a wide range of applications because it offers multiple physics options, enabling users to optimize WRF for specific scales, geographical locations and applications. Summer rainfall is not predicted well in the North West of Iran (NWI). Most summer rainfall there is convective, and it is sometimes heavy enough to cause flash floods. In this research, several configurations of WRF were tested on four summer rainfall events in NWI to find the best configuration. Five cumulus schemes, four planetary boundary layer (PBL) schemes and two microphysics schemes were combined into twenty-six different configurations (models), which were run at resolutions of 5 and 15 km for a duration of 48 h. The four events, each with a convective daily rainfall total of over 20 mm, were selected in NWI during the summer seasons between 2010 and 2015. The results were verified using several methods, with the aim of finding the best results during the first 24 h. Although no single configuration can provide reliable and accurate forecasts for all times, thresholds and atmospheric systems, the best configuration of WRF can be identified. The Kain-Fritsch (new Eta), Betts-Miller-Janjic, Modified Kain-Fritsch, Multi-scale Kain-Fritsch and newer Tiedtke cumulus schemes; the Mellor-Yamada-Janjic, Shin-Hong ‘scale-aware’, Medium Range Forecast (MRF) and Yonsei University (YSU) planetary boundary layer schemes; and the Kessler and WRF Single-Moment 3-class simple ice (WSM3) microphysics schemes were selected. The results show that the cumulus schemes are the most sensitive and the microphysics schemes the least sensitive. Comparison of the 15 km and 5 km simulations does not show obvious advantages in downscaling the results. The configuration with the newer Tiedtke cumulus scheme, the Mellor-Yamada-Janjic PBL scheme, and the WSM3 or Kessler microphysics scheme gives the best results at both the 5 and 15 km resolutions. Nevertheless, the output images of the models and the statistical verification indexes show that WRF could not accurately simulate convective rainfall in the NWI in summer.

1. Introduction

In the northwestern area of Iran, about half of the annual precipitation occurs between March and May. Less than 4% of the annual precipitation occurs in summer [1], typically as heavy convective rainfall. Topography and other factors provide favorable conditions for the occurrence of floods in this area in summer. There is a rising cost and social impact associated with extreme weather, not to mention the loss of human life [2]. These events have major effects on society, the economy and the environment [3], and have a direct impact on people, countries and vulnerable regions [4]. Records show that these events are increasing [5]. There are several methods to predict rainfall-runoff, and rainfall-runoff modeling has been improved in many studies [6,7,8,9,10].
Heavy rainfall forecasting is essential for predicting flash floods. One study sought a relatively optimal data-driven model for rainfall forecasting from three aspects: model inputs, modeling methods and data-preprocessing techniques. The proposed model, a modular artificial neural network (MANN), was compared with three benchmark models, viz. an artificial neural network (ANN), K-nearest-neighbors (K-NN) and linear regression (LR); the optimal rainfall forecasting model was derived from MANN coupled with singular spectrum analysis (SSA) [11]. In the present study, numerical weather prediction (NWP) is used to improve rainfall simulation.
According to statistical data from 87 meteorological stations, 132 cases of daily rainfall totals over 20 mm have been recorded in this region during summer from 1954 to 2015, 41 of them in the last five years. The small number of earlier events reflects the sparse station network: few stations existed before 2000, and most weather stations were established after that year. All 87 stations have reliable data for 2010 to 2015 [12], during which the 41 events were recorded, so the historical records of 2010 to 2015 were used.
Most of these events were not predicted, so rainfall simulation needs to be improved. The WRF system is a mesoscale numerical weather prediction system used for operational forecasting, which allows users to make adjustments for a specific scale and geographical location [13]. Four different WRF microphysics schemes (Thompson, Lin, WRF Single-Moment 6-class (WSM6) and Morrison) have been tested over southeast India [14]. While the Thompson scheme simulated a surface rainfall distribution closest to the observations, the other three schemes overestimated the observed rainfall. Furthermore, austral summer rainfall over the period 1991/1992 to 2010/2011 was dynamically downscaled by the WRF model at 9 km resolution for South Africa [15], using three different convective parameterization schemes, namely the (1) Kain-Fritsch (KF), (2) Betts-Miller-Janjic (BMJ) and (3) Grell-Devenyi ensemble (GDE) schemes. All three schemes generated positive rainfall biases over South Africa, with the KF scheme producing the largest biases and mean absolute errors. Only the BMJ scheme could reproduce the intensity of rainfall anomalies, and it also exhibited the highest correlation with the observed internal summer rainfall variability. In another study [16], simulations at 15 km grid resolution were compared using five different cumulus parameterization schemes for three flooding events in Alberta, Canada. The Kain-Fritsch and explicit cumulus parameterization schemes were found to be the most accurate when simulating precipitation across the three summer events.
The accuracy of southeastern United States (SE US) summer rainfall simulations at 15 km resolution was evaluated [17] using the WRF model and compared with simulations at 3 km resolution. The results indicated that the simulations at 3 km resolution do not show significant advantages over those at 15 km resolution when using the Zhang-McFarlane cumulus scheme. That study suggested that, to obtain an acceptable simulation, selecting a cumulus scheme that realistically represents the convective rainfall triggering mechanism is more important than simply increasing the model resolution. Another study [18] evaluated the WRF model for regional climate applications over Thailand, focusing on simulated precipitation using various convective parameterization schemes available in WRF. Experiments were carried out for the year 2005 using four cumulus parameterization schemes, namely Betts-Miller-Janjic (BMJ), Grell-Devenyi (GD), improved Grell-Devenyi (G3D) and Kain-Fritsch (KF), with and without nudging applied to the outermost nest. The results showed that the experiments with nudging generally performed better than the un-nudged experiments and that the BMJ cumulus scheme with nudging provided the smallest bias relative to the observations. Another study [19] examined the sensitivity of WRF performance to three different PBL schemes, Mellor-Yamada-Janjic (MYJ), Yonsei University (YSU) and the asymmetric convective model, version 2 (ACM2); simulations over Texas in July–September 2005 showed that the YSU and ACM2 schemes gave much less bias than the MYJ scheme. A further examination combined different physical schemes to simulate a series of rainfall events near the southeast coast of Australia known as East Coast Lows. That study [13] used a thirty-six member multi-physics ensemble in which each member had a unique set of physics parametrisations. The results suggested that the Mellor-Yamada-Janjic planetary boundary layer scheme and the Betts-Miller-Janjic cumulus scheme can be used with some level of robustness, and that the Yonsei University planetary boundary layer scheme, Kain-Fritsch cumulus scheme and Rapid Radiative Transfer Model for General circulation models (RRTMG) radiation scheme should not be used in combination for that region. In other research [20], a matrix of 18 WRF model configurations was created using different physical scheme combinations and run with 12 km grid spacing for eight International H2O Project (IHOP) mesoscale convective system (MCS) cases. For each case, three different treatments of convection, three different microphysics schemes and two different planetary boundary layer schemes were used. The greatest variability in the forecasts came from changes in the choice of convective scheme, while notable impacts also resulted from changes in the microphysics and PBL schemes. In a related study [21], the WRF model was run at 3 km resolution with four different microphysics schemes and two different PBL schemes. The results showed that the simulated rain volume was particularly affected by changes in the microphysics schemes for both initializations, and that the change in PBL scheme and the corresponding synergistic terms (the interactions between different microphysics and PBL schemes) had a statistically significant impact on rain volume.
A simulation of West Africa [22] used combinations of three convective parameterization schemes (CPSs) and two planetary boundary layer schemes (PBLSs). The different parameterizations tested showed that the PBLSs have the largest effect on temperature, humidity, vertical distribution and rainfall amount, whereas the dynamics and precipitation variability are strongly influenced by the CPSs. In particular, the Mellor-Yamada-Janjic PBLS provided more realistic values of humidity and temperature; combined with the Kain-Fritsch CPS, the West African monsoon (WAM) onset was well represented.
More recently [23], several aspects of the WRF modelling system, including two land surface models and two cumulus schemes, were tested for 4 months at 30 km resolution over the USA. The two cumulus schemes performed similarly in terms of mean precipitation everywhere except over Florida, where the Kain-Fritsch scheme performed better than the Betts-Miller-Janjic scheme and hence was chosen for future studies.
The results of the research discussed above were used to select the convective and planetary boundary layer schemes for this study. Some new schemes introduced in WRF 3.8 were also considered (see the WRF Users Page, 2016 [27]). Although no unique combination of schemes can give accurate forecasts for all atmospheric conditions, the best combinations among the many choices available in WRF can be investigated; the final choice will certainly depend on the geographical area of study and the season.
WRF was used to simulate four summer rainfall events in NWI. For each event the simulations were repeated with 26 different model physics configurations, combining 5 cumulus convection schemes, 4 planetary boundary layer schemes and 2 microphysics schemes. Each simulation ran for 48 h and was repeated in two domains: a larger one with 15 km grid spacing and a smaller one with 5 km grid spacing. The simulation results were compared using a range of indices based on contingency tables, as well as comparisons of standard deviations, root mean square errors and correlation coefficients. The rainfall patterns from the simulations were also compared qualitatively with the observed rainfall patterns. The overall result of the study is that, in general, the simulations give a poor representation of the convective rainfall events, but specific combinations of the physics schemes perform slightly less poorly.

2. Data and Model Configuration

This study used rainfall data from the NWI. This area covers four provinces and a total area of 127,394 square kilometers, located between latitudes 35°32′54″ and 39°46′36″ North and longitudes 44°2′5″ and 49°26′27″ East. The highest elevation is over 4500 m above sea level. Heavy convective rainfall cannot be predicted well in this area, which is why it was selected for this study.
In the selected events, the synoptic systems are generally associated with shallow, weak troughs at the 500 mb level, which cause cold air advection from higher latitudes into this area. The Black Sea is a source of moisture for these synoptic systems. Positive vorticity increases the instability ahead of the 500 mb troughs in the region. Because of the shortage of humidity and the weakness of the troughs, these synoptic systems can only increase the cloud amount and wind speed; however, the mountain effect intensifies the instability, so heavy precipitation occurs in some parts of NWI. The altitude of 4.32% of NWI is between 1600 and 2000 m [24]. In this research, different models were tested to simulate precipitation by changing the convective parameterizations; however, the WRF model cannot fully capture the effects of the mountains.
The simulations were generated using the WRF system with the Advanced Research WRF (ARW) core, version 3.8, hosted at the National Center for Atmospheric Research (NCAR) [25]. WRF was updated to version 3.8 on 8 April 2016; it was the latest version of WRF when this study was carried out.
To run WRF it is necessary to choose among many parametrizations for each physics option. This wide range of options for the physics and dynamics of WRF enables the user to optimize WRF for specific scales, geographical locations and applications, but determining the optimal combination of physics parameterizations becomes an increasingly difficult task as the number of parameterizations increases [13]. A range of physics combinations was used to simulate rainfall events with the aim of optimizing WRF for dynamical downscaling in this region. There are 15 cumulus scheme, 14 PBL scheme and 23 microphysics scheme options in WRF 3.8. According to most studies, cumulus schemes are very important for summer simulations; therefore, five different cumulus schemes were used. Kain-Fritsch (new Eta) (KF) (cu = 1) and Betts-Miller-Janjic (BMJ) (cu = 2) were selected based on the results of the recent research discussed in the introduction. Modified Kain-Fritsch (MKF) (cu = 10), which is new in WRF 3.8, was also selected as a candidate; this scheme replaces the ad-hoc Kain-Fritsch trigger function with one linked to boundary layer turbulence via a probability density function, using the cumulus potential scheme of Berg and Stull (2005) [26]. Multi-scale Kain-Fritsch (MsKF) (cu = 11) and newer Tiedtke (NT) (cu = 16) were selected as well [27]. Three planetary boundary layer (PBL) schemes were selected based on the results of the previous research mentioned in the introduction: Mellor-Yamada-Janjic (MYJ) (pbl = 2), Shin-Hong ‘scale-aware’ (SHsa) (pbl = 11) and Medium Range Forecast (MRF) (pbl = 99). The MsKF cumulus scheme must be combined with the YSU (pbl = 1) planetary boundary layer scheme, so this PBL scheme was also used. Based on the results of previous research, microphysics schemes are less sensitive; therefore, Kessler (mc = 1) and WSM 3-class simple ice (WSM3) (mc = 3) were used. In total, five cumulus, four PBL and two microphysics schemes were selected, as shown in Table 1. Combinations of these schemes lead to the 26 different configurations enumerated in the sketch below.
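As a minimal illustration of how these 26 configurations arise (a sketch assuming the scheme values listed in Table 1; the mapping to cu_physics, bl_pbl_physics and mp_physics is standard WRF namelist usage, but the script itself is not from the paper):

```python
from itertools import product

# Physics options by WRF 3.8 namelist value (Table 1); these correspond to
# the cu_physics, bl_pbl_physics and mp_physics entries of namelist.input.
cumulus = [1, 2, 10, 16]  # KF (new Eta), BMJ, Modified KF, newer Tiedtke
pbl = [2, 11, 99]         # MYJ, Shin-Hong scale-aware, MRF
micro = [1, 3]            # Kessler, WSM 3-class simple ice

# 4 x 3 x 2 = 24 combinations, plus Multi-scale KF (cu = 11), which must be
# run with the YSU PBL (pbl = 1), adding 2 more: 26 configurations in total.
configs = list(product(cumulus, pbl, micro)) + [(11, 1, m) for m in micro]

for cu, bl, mp in configs:
    print(f"c{cu}-p{bl}-m{mp}")  # model labels as used in Tables 5-8
print(len(configs))              # -> 26
```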
Four events were selected randomly from the forty-one events with more than 20 mm of daily rainfall total in summer during the last five years (2010–2015); see Table 2.

3. Domains

Output from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model at 0.25-degree resolution was used as input for the WRF runs; GFS 0.25 is the highest resolution available for global model output.
The twenty-six WRF configurations were run for 48 h for each of the four events in two domains, at 5 km and 15 km resolution. Simulating convective rainfall calls for the highest possible resolution, yet increases in model resolution may not lead to improved NWP forecasts [28]. Consequently, resolution, hardware resources and model accuracy should be balanced against each other, and a 5 km resolution was chosen for these reasons. The 15 km resolution is an intermediate step to reach 5 km; it also allows the accuracy of the higher and lower resolutions to be compared.
These domains are shown in Figure 1 and Table 3. Domain 1 has 91 grid points in the west-east direction and 64 in the north-south direction at 15 × 15 km resolution; domain 2 has 130 grid points in both horizontal directions at 5 × 5 km resolution. The 5 km grid domain is nested within the 15 km grid domain in the same run, and thirty vertical levels were used in both domains. The highest (in altitude) input pressure level was at 50 hPa (5000 Pa). The default static datasets were used: terrain elevation is interpolated from the 30-arc-second U.S. Geological Survey (USGS) Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010), while land use and the fraction of photosynthetically active radiation (FPAR, 400–700 nm) absorbed by green vegetation are interpolated from the 21-class moderate-resolution imaging spectroradiometer (MODIS) data. These settings are summarized in the sketch below.
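For reference, a sketch of the domain settings implied by this section (values taken from the text; the keys are standard WRF namelist.input variables in the &domains section, and parent_grid_ratio = 3 is inferred from the 15 km to 5 km step):

```python
# Domain settings from Section 3 as key-value pairs (d01, d02).
domains = {
    "max_dom": 2,                 # two nested domains in the same run
    "e_we": [91, 130],            # west-east grid points
    "e_sn": [64, 130],            # south-north grid points
    "dx": [15000.0, 5000.0],      # grid spacing in metres
    "dy": [15000.0, 5000.0],
    "e_vert": [30, 30],           # vertical levels in both domains
    "p_top_requested": 5000,      # model top in Pa (50 hPa)
    "parent_grid_ratio": [1, 3],  # inferred: 15 km / 5 km = 3
}
```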

4. Verification

There are many methods for forecast verification, each with its own characteristics, strengths and weaknesses. Although the correct term here is simulation, the words “forecast” and “simulation” are used interchangeably for convenience. The methods range from simple traditional statistics and scores to more detailed diagnostic and scientific verification. The forecast is compared, or verified, against a corresponding observation of what actually occurred, or some good estimate of the true outcome [29].
In this work, several methods were used to verify the models, covering the standard verification methods [30]: methods for dichotomous (yes/no) forecasts, multi-category forecasts, forecasts of continuous variables and probabilistic forecasts, as well as the Taylor diagram. Observation data from 87 stations were gridded by kriging [31] to produce 5 km and 15 km grids matching the WRF output.
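As a sketch of this gridding step (the paper specifies only kriging [31]; the pykrige library and the synthetic station arrays below are assumptions for illustration):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # assumed tool; the paper only states kriging [31]

# Hypothetical coordinates and 24 h rainfall totals for the 87 stations.
rng = np.random.default_rng(1)
lon = rng.uniform(44.0, 49.4, 87)
lat = rng.uniform(35.5, 39.8, 87)
rain = rng.gamma(2.0, 5.0, 87)

# Target grid roughly matching the 5 km WRF output (about 0.045 degrees).
grid_lon = np.arange(44.0, 49.45, 0.045)
grid_lat = np.arange(35.5, 39.85, 0.045)

ok = OrdinaryKriging(lon, lat, rain, variogram_model="spherical")
rain_grid, kriging_variance = ok.execute("grid", grid_lon, grid_lat)
print(rain_grid.shape)  # (n_lat, n_lon) analysis grid to compare with WRF output
```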
Several statistical tests have been developed to detect the significance of a contingency table. The chi-square contingency test [32] is used for goodness-of-fit when there is one nominal variable with two or more values [33]; it was therefore run for the models, and the results were significant. Categorical statistics were then computed from the yes/no contingency table defined in Table 4. The resulting seven scalar quantities, five skill scores and the root mean square error (RMSE) are shown in Table 5 and Table 6.
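For illustration, the chi-square test on one of these contingency tables might look as follows (a sketch using SciPy; the counts are the c16-p2-m1 row of Table 5):

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table [[a, b], [c, d]] for model c16-p2-m1
# at 15 km resolution (Table 5).
table = [[973, 631],
         [298, 707]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}, dof = {dof}")  # small p -> significant
```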
The scalar quantity indexes are the Proportion Correct (PC) [34], Bias score (B), Threat Score (TS), also called the Critical Success Index [35], odds ratio (θ) [36], False Alarm Ratio (FAR), Hit Rate (H) and False Alarm Rate (F).
The skill score indexes are the Heidke skill score (HSS) [37,38], Peirce skill score (PSS) [39,40], Clayton skill score (CSS), Gilbert skill score (GSS) [41] and Q skill score (Q) [42,43].
These indexes were obtained from the following formulas:
$$PC = \frac{a+d}{n}$$
$$B = \frac{a+b}{a+c}$$
$$TS = \frac{a}{a+b+c}$$
$$\theta = \frac{ad}{bc}$$
$$FAR = \frac{b}{a+b}$$
$$H = \frac{a}{a+c}$$
$$F = \frac{b}{b+d}$$
$$HSS = \frac{2(ad-bc)}{(a+c)(c+d)+(a+b)(b+d)}$$
$$CSS = \frac{a}{a+b} - \frac{c}{c+d}$$
$$Q = \frac{ad-bc}{ad+bc}$$
$$GSS = \frac{a-a_{ref}}{a-a_{ref}+b+c}, \qquad a_{ref} = \frac{(a+b)(a+c)}{n}$$
$$PSS = \frac{ad-bc}{(a+c)(b+d)} = H - F$$
The root mean square error is defined as $RMSE = \left[ \frac{1}{N} \sum_{i=1}^{N} (F_i - O_i)^2 \right]^{1/2}$ (Hyndman and Koehler, 2006), where $F_i$ is the forecast and $O_i$ the observation. PC indicates the proportion of correct forecasts.
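These definitions translate directly into code. A compact sketch in plain Python (not the authors' software; the sample counts are row 1 of Table 5, so the printed values can be checked against the table):

```python
def categorical_scores(a, b, c, d):
    """Scalar quantities and skill scores from a 2x2 contingency table (Table 4)."""
    n = a + b + c + d
    a_ref = (a + b) * (a + c) / n  # hits expected from a random forecast
    return {
        "PC": (a + d) / n,
        "B": (a + b) / (a + c),
        "TS": a / (a + b + c),
        "odds_ratio": (a * d) / (b * c),
        "FAR": b / (a + b),
        "H": a / (a + c),
        "F": b / (b + d),
        "HSS": 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d)),
        "CSS": a / (a + b) - c / (c + d),
        "Q": (a * d - b * c) / (a * d + b * c),
        "GSS": (a - a_ref) / (a - a_ref + b + c),
        "PSS": a / (a + c) - b / (b + d),  # equals H - F
    }

# Model c1-p2-m1 at 15 km (row 1 of Table 5): a=920, b=621, c=351, d=717
for name, value in categorical_scores(920, 621, 351, 717).items():
    print(f"{name}: {value:.3f}")  # PC 0.627, TS 0.486, odds ratio 3.026, ...
```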
Models and observations were compared spatially using daily rainfall totals. The numbers of comparisons entering the a, b, c and d counts in Table 5 and Table 6 differ because the 5 km grid contains more points than the 15 km grid.
Green and red cells in the tables mark the best-verified models: for indexes where the smallest value is best, the best model is shown by a green cell; for indexes where the highest value is best, it is shown by a red cell.
Verification by Taylor diagram [44] is illustrated in Figure 2 and Figure 3. Taylor diagrams provide a way of graphically summarizing how closely a pattern matches observations. The similarity between two patterns is quantified in terms of their correlation, their centered root-mean-square (RMS) difference and the amplitude of their variations (represented by their standard deviations). These diagrams are especially useful for evaluating multiple aspects of complex models or for gauging the relative skill of many different models [45]. The position of each point on the plot quantifies how closely that model’s simulated precipitation pattern matches the observations. Each point in the two-dimensional space of the Taylor diagram can represent three statistics simultaneously (the centered RMS difference, the correlation and the standard deviation) because these statistics are related by the following formula:
$$E'^2 = \sigma_f^2 + \sigma_o^2 - 2\sigma_f\sigma_o R$$
where $R$ is the correlation coefficient between the forecast and observation fields, $E'$ is the centered RMS difference between the fields, and $\sigma_f^2$ and $\sigma_o^2$ are the variances of the forecast and observation fields, respectively. Given a forecast field $f$ and an observation field $o$, the formulas for calculating the correlation coefficient $R$, the centered RMS difference $E'$, and the standard deviations $\sigma_f$ and $\sigma_o$ are given below:
$$R = \frac{\frac{1}{N}\sum_{n=1}^{N}(f_n-\bar{f})(o_n-\bar{o})}{\sigma_f\,\sigma_o}$$
$$E'^2 = \frac{1}{N}\sum_{n=1}^{N}\left[(f_n-\bar{f})-(o_n-\bar{o})\right]^2$$
$$\sigma_f^2 = \frac{1}{N}\sum_{n=1}^{N}(f_n-\bar{f})^2$$
$$\sigma_o^2 = \frac{1}{N}\sum_{n=1}^{N}(o_n-\bar{o})^2$$
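A short numerical sketch of these statistics (synthetic data, not the study's fields), which also checks the identity $E'^2 = \sigma_f^2 + \sigma_o^2 - 2\sigma_f\sigma_o R$:

```python
import numpy as np

def taylor_stats(f, o):
    """Correlation R, centred RMS difference E', and standard deviations
    of forecast f and observation o, as plotted on a Taylor diagram."""
    fa, oa = f - f.mean(), o - o.mean()
    sigma_f, sigma_o = f.std(), o.std()  # population (1/N) standard deviations
    r = (fa * oa).mean() / (sigma_f * sigma_o)
    e_prime = np.sqrt(((fa - oa) ** 2).mean())
    return r, e_prime, sigma_f, sigma_o

rng = np.random.default_rng(0)
o = rng.gamma(2.0, 5.0, 1000)             # synthetic "observed" rainfall
f = 0.7 * o + rng.normal(0.0, 3.0, 1000)  # synthetic "forecast"
r, e, sf, so = taylor_stats(f, o)
assert np.isclose(e**2, sf**2 + so**2 - 2 * sf * so * r)
```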

5. Results

Based on the a, b, c and d counts shown in Table 5 and Table 6, the configurations with the NT cumulus, MYJ PBL and WSM3 or Kessler microphysics schemes have the highest frequency of correct forecasts. This result is confirmed by the PC, TS, odds ratio, H, HSS, PSS, CSS, GSS and Q indexes. These models are denoted by the symbols c16-p2-m1 and c16-p2-m3. The combination of the BMJ cumulus (cu = 2), MRF PBL and WSM3 or Kessler microphysics schemes, denoted c2-p99-m1 and c2-p99-m3 (cu = 2, pbl = 99, mc = 1 and mc = 3), also shows good results; the RMSE values of c2-p99-m1 and c2-p99-m3 appear better than those of the other models.
Looking at Table 6, it is apparent that at 5 km resolution the combination of the KF cumulus (cu = 1), MYJ (pbl = 2) planetary boundary layer and Kessler microphysics schemes (model c1-p2-m1) gives good results in the odds ratio, FAR, HSS, PSS, CSS, GSS and Q. However, the best results are again achieved by models c16-p2-m1 and c16-p2-m3 (cu = 16, pbl = 2, mc = 1 and mc = 3), while models c2-p11-m1 and c2-p99-m1 show the lowest RMSE.
Model c2-p99-m1 has the best RMSE: 1.96 at 15 km resolution and 1.75 at 5 km resolution. The RMSE indicates the absolute fit of the model to the data, that is, how close the observed data points are to the model’s predicted values. This model also has the lowest standard deviation, as shown in the Taylor diagrams in Figure 2 and Figure 3; 81.86% of its simulated values are within ±1 mm of the values interpolated from the observation data.
The PC values indicate that more than 62% of all forecasts were correct for models c16-p2-m1 and c16-p2-m3. The B index shows that these models slightly overforecast rain frequency at 15 km resolution and slightly underforecast it at 5 km; the models closest to unbiased according to this index are c2-p99-m1 and c2-p99-m3 (B ≈ 1.02) at 15 km and c16-p2-m1 and c16-p2-m3 (B ≈ 0.99) at 5 km. The H index shows that roughly three quarters of the observed rain events were correctly predicted by the c16-p2-m1 and c16-p2-m3 models at both resolutions. The FAR index indicates that in roughly one third of the rain events forecast by c2-p99-m1 and c2-p99-m3, rain was not observed. The F index shows that the forecasts were incorrect for 36% of the observed “no rain” events with the c2-p99-m1 and c2-p99-m3 models at 15 km resolution, and for 22% with the c10-p99-m1 model at 5 km resolution. The TS values of the c16-p2-m1 and c16-p2-m3 models mean that approximately half of the “rain” events (observed and/or predicted) were correctly forecast. The GSS index is often used in the verification of rainfall in NWP models because its “equitability” allows scores to be compared more fairly across different regimes, although it does not distinguish the source of forecast error; it gives a lower score than the TS. The GSS is best for models c16-p2-m1, c16-p2-m3 and c1-p2-m1. The PSS may be more useful for more frequent events and can be expressed in a form similar to the GSS [46]; its values range between −1 and 1, where zero indicates no skill and one is a perfect score. The best model according to this index is c16-p2-m1.
According to Figure 2, at 15 km resolution the models numbered 25 and 26 (c11-p1-m1 and c11-p1-m3) show better correlation than the other models. However, models 11 and 12 (c2-p99-m1 and c2-p99-m3) show correlations similar to those of models 25 and 26, with better RMSE and standard deviations.
The Taylor diagram in Figure 3 shows that models 11 and 12 (c2-p99-m1 and c2-p99-m3) are more suitable than the other models at 5 km resolution.
To summarize the verification results, the scalar quantities, the skill scores from the 2 × 2 contingency table, the RMSE and the Taylor diagrams are combined in Table 7 and Table 8. For each verification method the two best models were identified (four of the best models for the Taylor diagram). The best models score 1 for each verification method and the others score 0; the points are then summed into a total score.
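This scoring amounts to awarding indicator points per metric and summing them, as in the minimal sketch below (hypothetical values; for the bias score B, “best” means closest to 1, so a transformed metric such as $-|B-1|$ would be ranked):

```python
import numpy as np

def award_points(metric_values, higher_is_better=True, top_k=2):
    """Give 1 point to the top_k models for one verification method,
    0 to the rest, as in Tables 7 and 8."""
    vals = np.asarray(metric_values, dtype=float)
    order = np.argsort(-vals if higher_is_better else vals)
    points = np.zeros(len(vals), dtype=int)
    points[order[:top_k]] = 1
    return points

# Hypothetical PSS and RMSE values for three models; the total score is the
# sum of the points over all verification methods.
total = award_points([0.29, 0.26, 0.28]) + award_points([5.2, 2.0, 4.4], higher_is_better=False)
print(total)  # -> [1 1 2]
```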
For the 15 km resolution, the c16-p2-m1 and c16-p2-m3 models have the highest total scores (11 points); the c2-p99-m1 and c2-p99-m3 models are the runners-up (7 points).
For the 5 km resolution, c16-p2-m1 (9 points), c1-p2-m1 (7 points) and c16-p2-m3 (6 points) are the optimum models, while model c2-p99-m1 has the best rating from the RMSE and the Taylor diagram.
Finally, the results were checked by eyeball verification, one of the oldest verification methods, to identify the best model among those selected by the statistical methods. The output of these models was compared with observations obtained by kriging interpolation at the two resolutions (5 km and 15 km); Figure 4 and Figure 5 display the model results and observations for the four events. The comparison shows that the models reproduce heavy rain over large areas, except for the event of 11 July 2010. For example, models c16-p2-m3 and c16-p2-m1 simulated heavy rainfall in the upper right of the area for the event of 22 September 2013 (Figure 4). There is a clear difference in rainfall amount in the north central region between the c16 models and the c2 models at 15 km resolution, with the c16 models closer to the observations. Figure 4 also shows, especially for the event of 26 August 2015, that precipitation in the south central region is simulated better by the c16 models. Comparing the model output images with the observation maps leads to a final decision on the best model: according to Figure 4 and Figure 5, models c16-p2-m1 and c16-p2-m3 show a very close similarity to the observations at both 5 and 15 km resolution. These models use the newer Tiedtke cumulus, Mellor-Yamada-Janjic PBL, and WSM3 or Kessler microphysics schemes.
Since there are only 87 actual data points, the interpolated grid points are not all statistically independent of each other, as there must be a spatial correlation between the interpolated points related to the average spacing of the original stations. For further studies it is suggested that radar data or more events be used to check the models more rigorously; the comparison would also be improved if the simulated data were interpolated to the station positions.

6. Conclusions

Most summer rainfall in the Northwest of Iran is convective, with heavy rainfall occurring in some smaller areas. The results of verification by all of the methods clearly show that NWP models cannot accurately predict this type of precipitation.
On the other hand, the results in Table 5 and Table 6 indicate that the cumulus schemes are the most sensitive and the microphysics schemes the least sensitive. This is also illustrated in the Taylor diagrams in Figure 2 and Figure 3, where models with the same cumulus scheme but different PBL and microphysics schemes show very similar outcomes. Comparison of the 15 km and 5 km simulations shows no obvious advantage according to the verification scores.
The configurations with the newer Tiedtke cumulus, Mellor-Yamada-Janjic PBL, and WSM3 or Kessler microphysics schemes demonstrate the best results at both resolutions (5 and 15 km). There is little difference between the results of the WSM3 and Kessler microphysics schemes, although the WSM3 scheme performs slightly better than Kessler.
No single configuration of the multi-physics ensemble performed best for all cases. Based on the output images of the models and on the statistical methods applied to the first 24 h forecasts of these 26 configurations, convective rainfall in the NWI in summer could not be simulated with high accuracy using the WRF model. This conclusion rests on both the statistical indexes and the eyeball verification. Therefore, the accuracy of the first 24 h forecast needs to be improved with other methods, such as extrapolation of observation data using satellite and radar data.

Acknowledgments

This work was supported by the Iran Atmospheric Science and Meteorological Research Center. Most of the data used in this research were collected from I.R. of Iran Meteorological Organization (IRIMO) and National Drought Warning and Monitoring Center (NDWMC) in Iran. We would like to thank all of those at these organizations, especially Director of NDWMC Shahrokh Fateh and President of IRIMO Davood Parhizkar.

Author Contributions

This paper is the result of research conducted by Sadegh Zeyaeyan as part of his PhD studies at the Atmospheric Science & Meteorological Research Center. Ebrahim Fattahi and Abbas Ranjbar jointly supervised the project, and Majid Azadi and Majid Vazifedoust jointly advised it. The model runs were conducted by Sadegh Zeyaeyan with guidance from Majid Azadi; Ebrahim Fattahi, Abbas Ranjbar and Majid Vazifedoust provided guidance on the analysis of the results. The manuscript was prepared by Sadegh Zeyaeyan with suggestions and corrections from Majid Azadi, Ebrahim Fattahi and Abbas Ranjbar. All authors have seen and approved the final article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tohidi, A.; Ranjbar, A.; Meshqati, A.H. Investigation about Heavy Rains in West Azarbaijan Province in Spring and Determine the Atmospheric Circulation Pattern on That Rainfall; Islamic Azad University, Science and Research: Tehran, Iran, 2011. [Google Scholar]
  2. Easterling, D.R.; Meehl, G.A.; Parmesan, C.; Changnon, S.A.; Karl, T.R.; Mearns, L.O. Climate Extremes: Observations, Modeling, and Impacts. Science 2000, 289, 2068–2074. [Google Scholar]
  3. Manton, M.J.; Della-Marta, P.M.; Haylock, M.R.; Hennessy, K.J.; Nicholls, N.; Chambers, L.E.; Collins, D.A.; Daw, G.; Finet, A.; Gunawan, D.; et al. Trends in Extreme Daily Rainfall and Temperature in Southeast Asia and the South Pacific. Int. J. Clim. 2001, 21, 269–284. [Google Scholar] [CrossRef]
  4. Sura, P. Stochastic Models of Climate Extremes: Theory and Observations; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  5. Sheikh, M.M.; Ahmed, A.U.; Revadekar, J.V.; Shrestha, M.L.; Premlal, K.H.M.S. Development and Application of Climate Extreme Indices and Indicators for Monitoring Trends in Climate Extremes and Their Socio-Economic Impacts in South Asian Countries; Final Report for APN Project: ARCP2008-10CMY-Sheikh; APN: Kobe, Japan, 2009. [Google Scholar]
  6. Gholami, V.; Chau, K.W.; Fadaee, F.; Torkaman, J.; Ghaffari, A. Modeling of groundwater level fluctuations using dendrochronology in alluvial aquifers. J. Hydrol. 2015, 529, 1060–1069. [Google Scholar] [CrossRef]
  7. Taormina, R.; Chau, K.W. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines. J. Hydrol. 2015, 529, 1617–1632. [Google Scholar] [CrossRef]
  8. Wang, W.C.; Chau, K.; Xu, D.M.; Chen, X.Y. Improving forecasting accuracy of annual runoff time series using ARIMA based on EEMD decomposition. Water Resour. Manag. 2015, 29, 2655–2675. [Google Scholar] [CrossRef]
  9. Chen, X.Y.; Chau, K.W.; Busari, A.O. A comparative study of population-based optimization algorithms for downstream river flow forecasting by a hybrid neural network model. Eng. Appl. Artif. Intell. 2015, 46, 258–268. [Google Scholar] [CrossRef]
  10. Chau, K.W.; Wu, C.L. A Hybrid Model Coupled with Singular Spectrum Analysis for Daily Rainfall Prediction. J. Hydroinform. 2010, 12, 458–473. [Google Scholar] [CrossRef]
  11. Wu, C.L.; Chau, K.W.; Fan, C. Prediction of rainfall time series using modular artificial neural networks coupled with data-preprocessing techniques. J. Hydrol. 2010, 389, 146–167. [Google Scholar] [CrossRef]
  12. Zeyaeyan, S.; Fattahi, E.; Ranjbar, A.; Vazifedoust, M. Classification of Rainfall Warning Based on the TOPSIS method. Climate 2017. [Google Scholar] [CrossRef]
  13. Evans, J.P.; Ekstrom, M.; Ji, F. Evaluating the performance of a WRF physics ensemble over South-East Australia. Clim. Dyn. 2011, 39, 1241–1258. [Google Scholar] [CrossRef]
  14. Rajeevan, M.; Kesarkar, A.; Thampi, S.B.; Rao, T.N.; Radhakrishna, B.; Rajasekhar, M. Sensitivity of WRF cloud microphysics to simulations of a severe thunderstorm event over Southeast India. Ann. Geophys. 2010, 28, 603–619. [Google Scholar] [CrossRef]
  15. Satyaban, B.; Ratnam, J.V.; Behera, S.K.; Rautenbach, C.J.D.; Ndarana, T.; Takahashi, K.; Yamagata, T. Performance assessment of three convective parameterization schemes in WRF for downscaling summer rainfall over South Africa. Clim. Dyn. 2013, 42, 2931–2953. [Google Scholar]
  16. Pennelly, C.; Reuter, G.; Flesch, T. Verification of the WRF model for simulating heavy precipitation in Alberta. Atmos. Res. 2014, 135, 172–192. [Google Scholar] [CrossRef]
  17. Li, L.F.; Li, W.H.; Jin, J.M. Improvements in WRF simulation skills of southeastern United States summer rainfall: physical parameterization and horizontal resolution. Clim. Dyn. 2014, 43, 7–8. [Google Scholar] [CrossRef]
  18. Chakrit, C.; Jiemjai, K.; Somporn, C. Evaluation of Precipitation Simulations over Thailand using a WRF Regional Climate Model. Chiang Mai J. Sci. Contrib. Pap. 2012, 39, 623–638. [Google Scholar]
  19. Hu, X.-M.; Nielsen-Gammon, J.W.; Zhang, F. Evaluation of Three Planetary Boundary Layer Schemes in the WRF Model. J. Appl. Meteorol. Climatol. 2010, 49, 1831–1844. [Google Scholar] [CrossRef]
  20. Jankov, I.; Gallus, W. J.; Segal, M.; Shaw, B.; Koch, S. The impact of different WRF model physical parameterizations and their interactions on warm season MCS rainfall. Weather Forecast. 2005, 20, 1048–1060. [Google Scholar] [CrossRef]
  21. Jankov, I.; Schultz, P.; Anderson, C.; Koch, S. The impact of different physical parameterizations and their interactions on cold season QPF in the American River basin. J. Hydrometeorol. 2007, 8, 1141–1151. [Google Scholar] [CrossRef]
  22. Flaounas, E.; Bastin, S.; Janicot, S. Regional climate modelling of the 2006 West African monsoon sensitivity to convection and planetary boundary layer parameterisation using WRF. Clim. Dyn. 2011, 36, 1083–1105. [Google Scholar] [CrossRef]
  23. Bukovsky, M.S.; Karoly, D.J. Precipitation simulations using WRF as a nested regional climate model. J. Appl. Meteorol. Climatol. 2009, 48, 2152–2159. [Google Scholar] [CrossRef]
  24. Asakereh, H.; Ramzi, R. Climatology of Precipitation in North West of Iran. Geogr. Dev. 2012, 9, 137–158. [Google Scholar]
  25. Skamarock, W.C. A Description of the Advanced Research WRF Version 3; National Center for Atmospheric Research: Boulder, CO, USA, 2008. [Google Scholar]
  26. Berg, L.K.; Gustafson, W.I., Jr.; Kassianov, E.I.; Deng, L. Evaluation of a Modified Scheme for Shallow Convection: Implementation of CuP and Case Studies. Mon. Weather Rev. 2013, 141, 134–147. [Google Scholar] [CrossRef]
  27. WRF Users Page. 2016. Available online: http://www2.mmm.ucar.edu/wrf/users/wrfv3.8/updates-3.8.html (accessed on 8 November 2016).
  28. Wedi, N.P. Increasing horizontal resolution in numerical weather prediction and climate simulations: Illusion or panacea? Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2014. [Google Scholar] [CrossRef] [PubMed]
  29. Murphy, A.H.; Winkler, R.L. A General Framework for Forecast Verification. Mon. Weather Rev. 1987, 115, 1330–1338. [Google Scholar] [CrossRef]
  30. WWRP WCRP JWGFVR. World Weather Research Programme. Available online: http://www.cawcr.gov.au/projects/verification/#Introductio (accessed on 8 November 2016).
  31. Cressie, N.; Johannesson, G. Fixed rank kriging for very large spatial data sets. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2008, 70, 209–226. [Google Scholar] [CrossRef]
  32. Sokal, R.R.; Rohlf, F.J. Biometry: The Principles and Practice of Statistics in Biological Research, 3rd ed.; W.H. Freeman and Company: New York, NY, USA, 1995. [Google Scholar]
  33. McDonald, J.H. Handbook of Biological Statistics; Sparky House Publishing: Baltimore, MD, USA, 2014; p. 3. [Google Scholar]
  34. Finley, J.P. Tornado predictions. Am. Meteorol. J. 1884, 1, 85–88. [Google Scholar]
  35. Gilbert, G.K. Finley’s tornado predictions. Am. Meteorol. J. 1884, 4, 166–172. [Google Scholar]
  36. Stephenson, D.B. Use of the “Odds Ratio” for Diagnosing Forecast Skill. Weather Forecast. 1999, 15, 221–232. [Google Scholar] [CrossRef]
  37. Doolittle, M.H. The verification of predictions. Am. Meteorol. 1885, 7, 327–329. [Google Scholar]
  38. Heidke, P. Berechnung des Erfolges und der Gute der Windstarkevorhersagen im Sturmwarnungdienst (Calculation of the success and goodness of strong wind forecasts in the storm warning service). Geogr. Ann. Stockh. 1926, 8, 301–349. [Google Scholar]
  39. Peirce, C.S. The numerical measure of the success of predictions. Science 1884, 4, 453–454. [Google Scholar] [CrossRef] [PubMed]
  40. Hanssen, A.W.; Kuipers, W.J.A. On the relationship between the frequency of rain and various meteorological parameters. Meded. Verh. 1965, 47, 2–15. [Google Scholar]
  41. Schaefer, J.T. The critical success index as an indicator of warning skill. Weather Forecast. 1990, 5, 570–575. [Google Scholar] [CrossRef]
  42. Agresti, A. An Introduction to Categorical Data Analysis; John Wiley and Sons: New York, NY, USA, 1996. [Google Scholar]
  43. Yule, G.U. On the association of attributes in statistics. Philos. Trans. R. Soc. Lond. Ser. A 1900, 194, 257–319. [Google Scholar] [CrossRef]
  44. Taylor, K.E. Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res. 2001, 106, 7183–7192. [Google Scholar] [CrossRef]
  45. Houghton, J.T.; Ding, Y.; Griggs, D.J.; Noguer, M.; van der Linden, P.J.; Dai, X.; Maskell, K.; Johnson, C.A. Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2001. [Google Scholar]
  46. Woodcock, F. The evaluation of yes/no forecasts for scientific and administrative purposes. Mon. Weather Rev. 1976, 104, 1209–1214. [Google Scholar] [CrossRef]
Figure 1. The two domains of the study, at 15 km and 5 km resolution.
Figure 2. Taylor diagram for verification of the models at 15 km resolution.
Figure 3. Taylor diagram for verification of the models at 5 km resolution.
Figure 4. Comparison between models and observations (eyeball verification), 15 km resolution.
Figure 5. Comparison between models and observations (eyeball verification), 5 km resolution.
Table 1. Schemes and different configurations.

| No | Cumulus Scheme | PBL Scheme | Microphysics Scheme | Number of Configurations |
|----|----------------|------------|---------------------|--------------------------|
| 1 | = 1, Kain-Fritsch (new Eta) | = 2, Mellor-Yamada-Janjic TKE | = 1, Kessler | 4 × 3 × 2 = 24 |
| 2 | = 2, Betts-Miller-Janjic | = 11, Shin-Hong ‘scale-aware’ | = 3, WSM 3-class simple ice | |
| 3 | = 10, Modified Kain-Fritsch | = 99, MRF | | |
| 4 | = 16, A newer Tiedtke | | | |
| 5 | = 11, Multi-scale Kain-Fritsch | = 1, YSU | = 1, Kessler; = 3, WSM 3-class simple ice | 1 × 1 × 2 = 2 |
| | | | Total number of configurations | 26 |
Table 2. Dates of events.

| No | Year | Month | Day |
|----|------|-------|-----|
| 1 | 2010 | 6 | 24 |
| 2 | 2010 | 7 | 11 |
| 3 | 2013 | 9 | 22 |
| 4 | 2015 | 8 | 26 |
Table 3. Latitudes and longitudes used in the domains.

| Point | Domain 1 Latitude | Domain 1 Longitude | Domain 2 Latitude | Domain 2 Longitude |
|-------|-------------------|--------------------|-------------------|--------------------|
| Lower left | 32°47′36.74″ N | 38°27′38.95″ E | 34°45′26.24″ N | 42°10′28.63″ E |
| Upper left | 41°18′55.26″ N | 38°27′38.95″ E | 40°36′31.93″ N | 42°10′28.63″ E |
| Lower right | 32°47′36.74″ N | 53°57′42.55″ E | 34°45′26.24″ N | 49°42′41.58″ E |
| Upper right | 41°18′55.26″ N | 53°57′42.55″ E | 40°36′31.93″ N | 49°42′41.58″ E |
Table 4. Contingency table.

| Forecast | Observed yes | Observed no | Total |
|----------|--------------|-------------|-------|
| Yes | a | b | forecast yes |
| No | c | d | forecast no |
| Total | observed yes | observed no | n = a + b + c + d |

a = event forecast to occur, and did occur; b = event forecast to occur, but did not occur; c = event forecast not to occur, but did occur; d = event forecast not to occur, and did not occur.
Table 5. Verification results for a resolution of 15 km.

| No | Models | a | b | c | d | PC | TS | Odds Ratio | B | FAR | H | F | HSS | PSS | CSS | GSS | Q | RMSE |
|----|--------|---|---|---|---|----|----|------------|---|-----|---|---|-----|-----|-----|-----|---|------|
| 1 | c1-p2-m1 | 920 | 621 | 351 | 717 | 0.63 | 0.49 | 3.026 | 1.21 | 0.4 | 0.72 | 0.46 | 0.26 | 0.26 | 0.27 | 0.15 | 0.5 | 8.07 |
| 2 | c1-p2-m3 | 937 | 636 | 334 | 702 | 0.63 | 0.49 | 3.097 | 1.24 | 0.4 | 0.74 | 0.48 | 0.26 | 0.26 | 0.27 | 0.15 | 0.51 | 7.11 |
| 3 | c1-p11-m1 | 882 | 588 | 389 | 750 | 0.63 | 0.47 | 2.892 | 1.16 | 0.4 | 0.69 | 0.44 | 0.25 | 0.25 | 0.26 | 0.15 | 0.49 | 7.56 |
| 4 | c1-p11-m3 | 899 | 602 | 372 | 736 | 0.63 | 0.48 | 2.955 | 1.18 | 0.4 | 0.71 | 0.45 | 0.26 | 0.26 | 0.26 | 0.15 | 0.49 | 6.76 |
| 5 | c1-p99-m1 | 815 | 512 | 456 | 826 | 0.63 | 0.46 | 2.883 | 1.04 | 0.39 | 0.64 | 0.38 | 0.26 | 0.26 | 0.26 | 0.15 | 0.49 | 5.95 |
| 6 | c1-p99-m3 | 833 | 520 | 438 | 818 | 0.63 | 0.47 | 2.992 | 1.07 | 0.38 | 0.66 | 0.39 | 0.27 | 0.27 | 0.27 | 0.15 | 0.5 | 5.61 |
| 7 | c2-p2-m1 | 892 | 584 | 379 | 754 | 0.63 | 0.48 | 3.039 | 1.16 | 0.4 | 0.7 | 0.44 | 0.26 | 0.27 | 0.27 | 0.15 | 0.51 | 3.23 |
| 8 | c2-p2-m3 | 894 | 589 | 377 | 749 | 0.63 | 0.48 | 3.016 | 1.17 | 0.4 | 0.7 | 0.44 | 0.26 | 0.26 | 0.27 | 0.15 | 0.5 | 3.73 |
| 9 | c2-p11-m1 | 875 | 561 | 396 | 777 | 0.63 | 0.48 | 3.06 | 1.13 | 0.39 | 0.69 | 0.42 | 0.27 | 0.27 | 0.27 | 0.16 | 0.51 | 2.88 |
| 10 | c2-p11-m3 | 881 | 562 | 390 | 776 | 0.64 | 0.48 | 3.119 | 1.14 | 0.39 | 0.69 | 0.42 | 0.27 | 0.27 | 0.28 | 0.16 | 0.51 | 3.66 |
| 11 | c2-p99-m1 | 815 | 485 | 456 | 853 | 0.64 | 0.46 | 3.143 | 1.02 | 0.37 | 0.64 | 0.36 | 0.28 | 0.28 | 0.28 | 0.16 | 0.52 | 1.96 |
| 12 | c2-p99-m3 | 798 | 487 | 473 | 851 | 0.63 | 0.45 | 2.948 | 1.01 | 0.38 | 0.63 | 0.36 | 0.26 | 0.26 | 0.26 | 0.15 | 0.49 | 2.5 |
| 13 | c10-p2-m1 | 925 | 604 | 346 | 734 | 0.64 | 0.49 | 3.249 | 1.2 | 0.4 | 0.73 | 0.45 | 0.28 | 0.28 | 0.29 | 0.16 | 0.53 | 7.11 |
| 14 | c10-p2-m3 | 931 | 625 | 340 | 713 | 0.63 | 0.49 | 3.124 | 1.22 | 0.4 | 0.73 | 0.47 | 0.26 | 0.27 | 0.28 | 0.15 | 0.52 | 6.75 |
| 15 | c10-p11-m1 | 890 | 583 | 381 | 755 | 0.63 | 0.48 | 3.025 | 1.16 | 0.4 | 0.7 | 0.44 | 0.26 | 0.27 | 0.27 | 0.15 | 0.5 | 6.76 |
| 16 | c10-p11-m3 | 898 | 604 | 373 | 734 | 0.63 | 0.48 | 2.926 | 1.18 | 0.4 | 0.71 | 0.45 | 0.25 | 0.26 | 0.26 | 0.15 | 0.49 | 6.41 |
| 17 | c10-p99-m1 | 804 | 513 | 467 | 825 | 0.62 | 0.45 | 2.769 | 1.04 | 0.39 | 0.63 | 0.38 | 0.25 | 0.25 | 0.25 | 0.14 | 0.47 | 5.16 |
| 18 | c10-p99-m3 | 818 | 505 | 453 | 833 | 0.63 | 0.46 | 2.979 | 1.04 | 0.38 | 0.64 | 0.38 | 0.27 | 0.27 | 0.27 | 0.15 | 0.5 | 5.15 |
| 19 | c16-p2-m1 | 973 | 631 | 298 | 707 | 0.64 | 0.51 | 3.658 | 1.26 | 0.39 | 0.77 | 0.47 | 0.29 | 0.29 | 0.31 | 0.17 | 0.57 | 5.19 |
| 20 | c16-p2-m3 | 980 | 641 | 291 | 697 | 0.64 | 0.51 | 3.662 | 1.28 | 0.4 | 0.77 | 0.48 | 0.29 | 0.29 | 0.31 | 0.17 | 0.57 | 5.09 |
| 21 | c16-p11-m1 | 951 | 627 | 320 | 711 | 0.64 | 0.5 | 3.37 | 1.24 | 0.4 | 0.75 | 0.47 | 0.28 | 0.28 | 0.29 | 0.16 | 0.54 | 5.02 |
| 22 | c16-p11-m3 | 951 | 632 | 320 | 706 | 0.64 | 0.5 | 3.32 | 1.25 | 0.4 | 0.75 | 0.47 | 0.27 | 0.28 | 0.29 | 0.16 | 0.54 | 4.89 |
| 23 | c16-p99-m1 | 931 | 610 | 340 | 728 | 0.64 | 0.5 | 3.268 | 1.21 | 0.4 | 0.73 | 0.46 | 0.28 | 0.28 | 0.29 | 0.16 | 0.53 | 4.39 |
| 24 | c16-p99-m3 | 933 | 609 | 338 | 729 | 0.64 | 0.5 | 3.304 | 1.21 | 0.4 | 0.73 | 0.46 | 0.28 | 0.28 | 0.29 | 0.16 | 0.54 | 4.25 |
| 25 | c11-p1-m1 | 887 | 576 | 384 | 762 | 0.63 | 0.48 | 3.056 | 1.15 | 0.39 | 0.7 | 0.43 | 0.27 | 0.27 | 0.27 | 0.15 | 0.51 | 6.47 |
| 26 | c11-p1-m3 | 898 | 592 | 373 | 746 | 0.63 | 0.48 | 3.034 | 1.17 | 0.4 | 0.71 | 0.44 | 0.26 | 0.26 | 0.27 | 0.15 | 0.5 | 5.17 |
Table 6. Verification results for a resolution of 5 km.

| No | Models | a | b | c | d | PC | TS | Odds Ratio | B | FAR | H | F | HSS | PSS | CSS | GSS | Q | RMSE |
|----|--------|---|---|---|---|----|----|------------|---|-----|---|---|-----|-----|-----|-----|---|------|
| 1 | c1-p2-m1 | 5945 | 2779 | 4708 | 6100 | 0.617 | 0.443 | 2.772 | 0.819 | 0.319 | 0.558 | 0.313 | 0.241 | 0.245 | 0.246 | 0.137 | 0.470 | 5.87 |
| 2 | c1-p2-m3 | 6099 | 3053 | 4554 | 5826 | 0.611 | 0.445 | 2.556 | 0.859 | 0.334 | 0.573 | 0.344 | 0.225 | 0.229 | 0.228 | 0.127 | 0.438 | 5.99 |
| 3 | c1-p11-m1 | 5162 | 2513 | 5491 | 6366 | 0.590 | 0.392 | 2.381 | 0.720 | 0.327 | 0.485 | 0.283 | 0.196 | 0.202 | 0.209 | 0.109 | 0.409 | 5.34 |
| 4 | c1-p11-m3 | 5519 | 2720 | 5134 | 6159 | 0.598 | 0.413 | 2.434 | 0.773 | 0.330 | 0.518 | 0.306 | 0.207 | 0.212 | 0.215 | 0.115 | 0.418 | 5.42 |
| 5 | c1-p99-m1 | 3967 | 2029 | 6686 | 6850 | 0.554 | 0.313 | 2.003 | 0.563 | 0.338 | 0.372 | 0.229 | 0.138 | 0.144 | 0.168 | 0.074 | 0.334 | 4.02 |
| 6 | c1-p99-m3 | 4312 | 2214 | 6341 | 6665 | 0.562 | 0.335 | 2.047 | 0.613 | 0.339 | 0.405 | 0.249 | 0.150 | 0.155 | 0.173 | 0.081 | 0.344 | 4.41 |
| 7 | c2-p2-m1 | 5928 | 3005 | 4725 | 5874 | 0.604 | 0.434 | 2.452 | 0.839 | 0.336 | 0.556 | 0.338 | 0.215 | 0.218 | 0.218 | 0.120 | 0.421 | 2.34 |
| 8 | c2-p2-m3 | 6102 | 3111 | 4551 | 5768 | 0.608 | 0.443 | 2.486 | 0.865 | 0.338 | 0.573 | 0.350 | 0.219 | 0.222 | 0.221 | 0.123 | 0.426 | 3 |
| 9 | c2-p11-m1 | 5488 | 2731 | 5165 | 6148 | 0.596 | 0.410 | 2.392 | 0.772 | 0.332 | 0.515 | 0.308 | 0.203 | 0.208 | 0.211 | 0.113 | 0.410 | 2.21 |
| 10 | c2-p11-m3 | 5578 | 2785 | 5075 | 6094 | 0.598 | 0.415 | 2.405 | 0.785 | 0.333 | 0.524 | 0.314 | 0.206 | 0.210 | 0.213 | 0.115 | 0.413 | 2.92 |
| 11 | c2-p99-m1 | 4261 | 2079 | 6392 | 6800 | 0.566 | 0.335 | 2.180 | 0.595 | 0.328 | 0.400 | 0.234 | 0.159 | 0.166 | 0.188 | 0.087 | 0.371 | 1.75 |
| 12 | c2-p99-m3 | 4540 | 2247 | 6113 | 6632 | 0.572 | 0.352 | 2.192 | 0.637 | 0.331 | 0.426 | 0.253 | 0.167 | 0.173 | 0.189 | 0.091 | 0.373 | 2.35 |
| 13 | c10-p2-m1 | 5572 | 2636 | 5081 | 6243 | 0.605 | 0.419 | 2.597 | 0.770 | 0.321 | 0.523 | 0.297 | 0.221 | 0.226 | 0.230 | 0.124 | 0.444 | 5.88 |
| 14 | c10-p2-m3 | 5977 | 2995 | 4676 | 5884 | 0.607 | 0.438 | 2.511 | 0.842 | 0.334 | 0.561 | 0.337 | 0.220 | 0.224 | 0.223 | 0.124 | 0.430 | 5.89 |
| 15 | c10-p11-m1 | 5027 | 2422 | 5626 | 6457 | 0.588 | 0.384 | 2.382 | 0.699 | 0.325 | 0.472 | 0.273 | 0.193 | 0.199 | 0.209 | 0.107 | 0.409 | 5.06 |
| 16 | c10-p11-m3 | 5415 | 2733 | 5238 | 6146 | 0.592 | 0.405 | 2.325 | 0.765 | 0.335 | 0.508 | 0.308 | 0.196 | 0.201 | 0.204 | 0.109 | 0.398 | 5.41 |
| 17 | c10-p99-m1 | 3727 | 1957 | 6926 | 6922 | 0.545 | 0.296 | 1.903 | 0.534 | 0.344 | 0.350 | 0.220 | 0.124 | 0.129 | 0.156 | 0.066 | 0.311 | 3.63 |
| 18 | c10-p99-m3 | 4151 | 2188 | 6502 | 6691 | 0.555 | 0.323 | 1.952 | 0.595 | 0.345 | 0.390 | 0.246 | 0.138 | 0.143 | 0.162 | 0.074 | 0.323 | 4.31 |
| 19 | c16-p2-m1 | 6869 | 3664 | 3784 | 5215 | 0.619 | 0.480 | 2.584 | 0.989 | 0.348 | 0.645 | 0.413 | 0.232 | 0.232 | 0.232 | 0.131 | 0.442 | 4.4 |
| 20 | c16-p2-m3 | 6852 | 3656 | 3801 | 5223 | 0.618 | 0.479 | 2.575 | 0.986 | 0.348 | 0.643 | 0.412 | 0.231 | 0.231 | 0.231 | 0.131 | 0.441 | 4.34 |
| 21 | c16-p11-m1 | 6505 | 3413 | 4148 | 5466 | 0.613 | 0.462 | 2.512 | 0.931 | 0.344 | 0.611 | 0.384 | 0.225 | 0.226 | 0.224 | 0.127 | 0.430 | 4.33 |
| 22 | c16-p11-m3 | 6485 | 3359 | 4168 | 5520 | 0.615 | 0.463 | 2.557 | 0.924 | 0.341 | 0.609 | 0.378 | 0.229 | 0.230 | 0.229 | 0.129 | 0.438 | 4.27 |
| 23 | c16-p99-m1 | 5922 | 3257 | 4731 | 5622 | 0.591 | 0.426 | 2.161 | 0.862 | 0.355 | 0.556 | 0.367 | 0.187 | 0.189 | 0.188 | 0.103 | 0.367 | 3.91 |
| 24 | c16-p99-m3 | 5918 | 3242 | 4735 | 5637 | 0.592 | 0.426 | 2.173 | 0.860 | 0.354 | 0.556 | 0.365 | 0.188 | 0.190 | 0.190 | 0.104 | 0.370 | 3.8 |
| 25 | c11-p1-m1 | 5292 | 2658 | 5361 | 6221 | 0.589 | 0.398 | 2.310 | 0.746 | 0.334 | 0.497 | 0.299 | 0.193 | 0.197 | 0.203 | 0.107 | 0.396 | 4.95 |
| 26 | c11-p1-m3 | 5683 | 2929 | 4970 | 5950 | 0.596 | 0.418 | 2.323 | 0.808 | 0.340 | 0.533 | 0.330 | 0.200 | 0.204 | 0.205 | 0.111 | 0.398 | 4.2 |
Table 7. Scoring of the best results for the verification methods, 15 km resolution.

| No | Models | a | b | c | d | PC | TS | Odds Ratio | B | FAR | H | F | HSS | PSS | CSS | GSS | Q | RMSE | Taylor Diagram | Total Score |
|----|--------|---|---|---|---|----|----|------------|---|-----|---|---|-----|-----|-----|-----|---|------|----------------|-------------|
| 1 | c1-p2-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | c1-p2-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | c1-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | c1-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | c1-p99-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | c1-p99-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 7 | c2-p2-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | c2-p2-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 9 | c2-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 10 | c2-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 11 | c2-p99-m1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 7 |
| 12 | c2-p99-m3 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 7 |
| 13 | c10-p2-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 14 | c10-p2-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 15 | c10-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 16 | c10-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 17 | c10-p99-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 18 | c10-p99-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 19 | c16-p2-m1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 11 |
| 20 | c16-p2-m3 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 11 |
| 21 | c16-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 22 | c16-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 23 | c16-p99-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 24 | c16-p99-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 25 | c11-p1-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 26 | c11-p1-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
Table 8. Scoring of the best results for the verification methods, 5 km resolution.

| No | Models | a | b | c | d | PC | TS | Odds Ratio | B | FAR | H | F | HSS | PSS | CSS | GSS | Q | RMSE | Taylor Diagram | Total Score |
|----|--------|---|---|---|---|----|----|------------|---|-----|---|---|-----|-----|-----|-----|---|------|----------------|-------------|
| 1 | c1-p2-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 7 |
| 2 | c1-p2-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | c1-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | c1-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | c1-p99-m1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| 6 | c1-p99-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 7 | c2-p2-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | c2-p2-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 9 | c2-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| 10 | c2-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 11 | c2-p99-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 2 |
| 12 | c2-p99-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 13 | c10-p2-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
| 14 | c10-p2-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 15 | c10-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 16 | c10-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 17 | c10-p99-m1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| 18 | c10-p99-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 19 | c16-p2-m1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 9 |
| 20 | c16-p2-m3 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
| 21 | c16-p11-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 22 | c16-p11-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 23 | c16-p99-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 24 | c16-p99-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 25 | c11-p1-m1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 26 | c11-p1-m3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
