Review

Review: Sources of Hydrological Model Uncertainties and Advances in Their Analysis

1 Department of Geography, University of California Berkeley, Berkeley, CA 94709, USA
2 Department of Civil and Environmental Engineering, Washington State University, Richland, WA 99354, USA
3 Global Institute for Water Security, University of Saskatchewan, Saskatoon, SK S7N 5A2, Canada
* Author to whom correspondence should be addressed.
Water 2021, 13(1), 28; https://doi.org/10.3390/w13010028
Submission received: 27 November 2020 / Revised: 18 December 2020 / Accepted: 22 December 2020 / Published: 25 December 2020
(This article belongs to the Special Issue Hydrological Modeling in Water Cycle Processes)

Abstract

Despite progress in representing different processes, hydrological models remain uncertain. Their uncertainty stems from input and calibration data, model structure, and parameters. In characterizing these sources, their causes and interactions, as well as different uncertainty analysis (UA) methods, are reviewed. The commonly used UA methods are categorized into six broad classes: (i) Monte Carlo analysis, (ii) Bayesian statistics, (iii) multi-objective analysis, (iv) least-squares-based inverse modeling, (v) response-surface-based techniques, and (vi) multi-modeling analysis. For each source of uncertainty, the status quo and applications of these methods are critiqued in gauged catchments, where UA is common, and in ungauged catchments, where both UA and its review are lacking. Compared to parameter uncertainty, UA applications to structural uncertainty are limited, while input and calibration data uncertainties remain largely unaccounted for. Further research is needed to improve the computational efficiency of UA, disentangle and propagate the different sources of uncertainty, improve UA applications to environmental changes and coupled human–natural-hydrologic systems, and ease UA's application for practitioners.

1. Introduction

Hydrological models are developed to understand processes, test hypotheses, and support decision-making. These models solve empirical and governing equations at different levels of complexity. Their complexity varies depending on how the governing equations are solved over different spatial configurations, such as lumped [1], semi-distributed [2], or fully distributed [3] areas, as well as the extent to which variables and processes are coupled [4,5]. Although model development has progressed in accounting for different processes and complexities, models remain simplifications of the actual hydrological processes.
Process simplification is a consequence of limited knowledge and data, imprecise measurements, and involvement of multiple scales and interactive processes [6,7,8,9]. These simplifications introduce uncertainty and make it an intrinsic property of any model [10,11,12]. Analyzing this uncertainty requires separate methodological treatments beyond model development. However, uncertainty analysis (UA) benefits hydrological modeling in (i) identifying model limitations and improvement strategies [13,14], (ii) guiding further data collection [15,16], and (iii) quantifying the uncertainty associated with model predictions.
Various UA techniques have been developed. A few of these methods include generalized likelihood uncertainty estimation (GLUE) [17], differential evolution adaptive metropolis (DREAM) [18], parameter estimation code (PEST) [19], Bayesian total error analysis (BATEA) [20], and multi-objective analysis (Borg) [21]. These techniques have been applied to different water resources decisions, such as water supply design [22], flood mapping [23,24], and hydropower plant evaluation [25]. However, a systematic review and categorization of these techniques is lacking, leaving modelers without a much-needed guideline for selecting a suitable UA method.
This review intends to summarize, categorize, and critique UA methods for each source of model uncertainty. The conceptual basis of the methods, their applications, advantages, and challenges are critiqued along with an outlook on future research. This work builds upon and goes beyond other UA reviews in the literature. Among these reviews, Walker et al. [26] focused on the conceptual basis of model uncertainty, while Refsgaard et al. [27] focused on the role, terminology, and assessment of uncertainty. Pathak et al. [28] emphasized the application of selected UA methods for practicing engineers. Meanwhile, Gupta et al. [6] and Mishra [29] focused on reviewing specific UA methods, calibration techniques, and sensitivity analysis, and Warmink et al. [30] focused on the identification and classification of uncertainty sources. None of these reviews adopts a comprehensive categorization and critique of the UA methods for each uncertainty source. Here, we fill that gap by organizing and critiquing the UA methods for each source of uncertainty. Further, we newly review response surface methods and uncertainty source interactions, and provide a much-needed review of UA applications for hydrologic prediction in ungauged basins. These additions make our work a novel contribution that critiques UA in both gauged and ungauged catchments.

2. Sources of Hydrological Model Uncertainties

Hydrological model uncertainties stem from parameters, model structure, calibration (observation) data, and input data (Figure 1). In addition to these sources, uncertainties can stem from model initial and boundary conditions; however, those sources are not considered in this review. Hydrological models often contain parameterizations that result from conceptual simplifications, known as effective parameters [31]. Parameter uncertainty can arise from the inability to estimate or measure these effective parameters that integrate and conceptualize processes [32]. In addition, parameter uncertainty can result from natural process variability and observation errors. Although some model parameters, such as hydraulic conductivity, are measurable at the point scale, their values at the catchment scale vary significantly. Hence, the practical difficulty in measuring this variability can also lead to parameter uncertainty. On the other hand, due to errors in the calibration data, parameter uncertainty can be encountered even if a model is an exact representation of the hydrologic system. Therefore, the inability to accurately estimate effective parameters, the challenge of measuring natural variability, and the existence of observation errors lead to parameter uncertainty. This uncertainty often manifests itself in model calibration through the lack of a single optimal set of parameters [33].
An exact representation of a hydrological system is challenging due to the absence of a unifying theory, limited knowledge and numerical and process simplifications. Such limitations constitute model structural uncertainty. Model structural uncertainty can also refer to alternative conceptualizations, such as the hydro-stratigraphy of the subsurface or the discretization of surface and process features [34,35,36]. Model performance is strongly dependent on model structures [37]. As a result, structural uncertainty is critical as it can render the model and quantification of other uncertainties useless [37,38,39]. Comparing structural and parameter uncertainty, Højberg et al. [40] showed that structural uncertainty is dominant, particularly when the model is used beyond its calibration sphere. Moreover, Rojas et al. [41] concluded that structural uncertainty may contribute up to 30% of the predictive uncertainty. Using the variance decomposition of streamflow estimates, Troin et al. [42] showed that model structure is the highest predictive uncertainty contributor.
Input forcing to hydrological models includes different hydro-meteorological, catchment, and subsurface data. Although there have been improvements in data acquisition and processing, the data used for model forcing are sparse and subject to gaps, imprecision, and uncertainty [43]. In most cases, input data involve interpolations, scaling, and derivation from other measurements that result in an uncertainty range of 10–40% [43]. These inaccuracies in input data constitute input uncertainty. Failure to account for input uncertainty can lead to biased parameter estimation [20,44] and mislead water balance calculations [45]. Consequently, input uncertainty can interfere with the quantification of predictive uncertainty. Comparing input and model parameter uncertainty in a data-sparse region, Bárdossy et al. [46] showed that input uncertainty can dominate parameter uncertainty and suggested analyzing both simultaneously to obtain meaningful results.
Hydrological modeling often involves calibrating the model parameters and evaluating model predictions using observed state and/or flux variables, such as streamflow and groundwater levels. These observations are subject to measurement uncertainties similar to those of input data. Streamflow observations, for instance, are derived from rating curves that translate river stage measurements into discharge estimates. This translation not only propagates the random and systematic stage measurement errors but also passes on the structural (rating curve equation) and parameter uncertainties involved in calibrating the rating curve. In addition, discharge estimates suffer from interpolation and extrapolation errors of the rating curve, hysteresis, changes in site conditions (e.g., bed movement), and seasonal variations in measurement and flow conditions [47,48,49,50]. Extrapolation contributes the highest uncertainty [51]. Overall, discharge observation uncertainty ranges from 5% [52,53] to 25% when extrapolation is involved [54].
Predictive uncertainty is the result of the above uncertainties as exhibited in the model output. It can be bias- or variance-dominated depending on the level of model complexity and the information content of the data: bias dominates in simple models, while variance dominates in complex models. Predictive uncertainty is usually heteroscedastic, with its magnitude varying with the magnitude of the model output (non-Gaussian residuals). In streamflow modeling, this is partly due to the limited number of high-flow observations (low-frequency events) available to constrain model calibration. Different data transformation schemes, including the Box–Cox transformation and a simple log transformation, are used to address heteroscedasticity [55,56].
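As an illustration of such a transformation, the following minimal sketch (assuming synthetic observed and simulated flows and an arbitrary transformation parameter lambda = 0.3, all of which are hypothetical) applies a Box–Cox transform before computing residuals, so that the flow-dependent residual variance is largely stabilized.

```python
import numpy as np

def boxcox(q, lam=0.3, eps=1e-6):
    """Box-Cox transform; lam = 0 reduces to a log transform."""
    q = np.asarray(q, dtype=float) + eps          # small offset guards against zero flows
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

rng = np.random.default_rng(42)
q_obs = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # hypothetical observed flows
q_sim = q_obs * rng.normal(1.0, 0.15, size=365)        # multiplicative (heteroscedastic) error

raw_resid = q_sim - q_obs                              # variance grows with flow magnitude
bc_resid = boxcox(q_sim) - boxcox(q_obs)               # variance is far more uniform

low = q_obs < np.median(q_obs)
print("raw residual std (low vs high flows):    ",
      raw_resid[low].std().round(3), raw_resid[~low].std().round(3))
print("Box-Cox residual std (low vs high flows):",
      bc_resid[low].std().round(3), bc_resid[~low].std().round(3))
```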

3. Hydrological Model Uncertainty Analysis

We have found six broad classes of UA methods: (i) Monte Carlo sampling, (ii) response-surface-based schemes including polynomial chaos expansion and machine learning, (iii) multi-modeling approaches, (iv) Bayesian statistics, (v) multi-objective analysis, and (vi) least-squares-based inverse modeling (Figure 2). Furthermore, approximation and analytical solutions based on Taylor series are also used in UA. However, since such approaches are not widely applied, they are not included in this review.
The number of papers reviewed in this section (Section 3) across the different sources and methods is shown in Figure 2. These numbers represent pioneering articles, articles that discuss limitations and advantages of the methods, and articles that introduce method improvements; purely application-oriented articles are not counted. Because rainfall multipliers are embedded as part of the model parameters, they are not counted as a separate method. In general, although the sample is limited, the numbers approximate how different methods are being developed for each source and how these methods are being critiqued and advanced as the focus of different studies.
The six broad classes below summarize the different UA methods (Table 1). The common thread across these methods is that they involve many executions of the actual or surrogate model. This can translate into high computational cost depending on (1) the model's execution time, (2) whether surrogate training requires multiple runs, and (3) whether the model can be executed in parallel.

3.1. Parameter Uncertainty

Compared to the other sources of uncertainty, parameter uncertainty is addressed by a wider range of techniques (Figure 2). One of the widely used parameter UA methods is generalized likelihood uncertainty estimation (GLUE) [17], which accounts for the equifinality hypothesis [33]. The equifinality hypothesis highlights the existence of multiple parameter sets that describe hydrological processes indistinguishably well, producing the same final result. The approach augments Monte Carlo simulation with a behavioral threshold measure that distinguishes hydrologically tenable and untenable parameters (and structures). Although GLUE's straightforward conceptualization and implementation have allowed it to achieve widespread use, it has been criticized for its subjectivity in choosing a behavioral threshold and its lack of a formal statistical foundation [75,76,77]. Extending GLUE, Beven [33] suggested the limits-of-acceptability approach, where pre-defined uncertainties that objectively reflect both input and output observation uncertainties are used as the measure of acceptability rather than subjective behavioral thresholds. Although its use is not as widespread as GLUE, this approach has been applied, e.g., [78,79]. Its limited application is partly due to the lack of measurements that reflect the level of input and output uncertainties and their interactions [80,81,82].
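A minimal sketch of the GLUE workflow follows, assuming a hypothetical single-parameter linear-reservoir model, synthetic "observations", and a subjective Nash–Sutcliffe efficiency (NSE) behavioral threshold of 0.6; it illustrates the mechanics only and is not the implementation of [17].

```python
import numpy as np

def linear_reservoir(precip, k, s0=0.0):
    """Toy single-store model: storage gains rainfall and releases Q = k*S each step."""
    s, q = s0, np.empty_like(precip)
    for t, p in enumerate(precip):
        s = s + p - k * s
        q[t] = k * s
    return q

def nse(q_sim, q_obs):
    return 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def weighted_quantile(values, q, w):
    order = np.argsort(values)
    cdf = np.cumsum(w[order]) / np.sum(w)
    return np.interp(q, cdf, values[order])

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 3.0, size=200)
q_obs = linear_reservoir(precip, k=0.35) * rng.normal(1.0, 0.1, 200)  # synthetic "observations"

# 1) Uniform Monte Carlo sampling of the parameter prior
k_samples = rng.uniform(0.05, 0.95, size=5000)
scores = np.array([nse(linear_reservoir(precip, k), q_obs) for k in k_samples])

# 2) Behavioral threshold (subjective) and likelihood weights
behavioral = scores > 0.6
weights = scores[behavioral] / scores[behavioral].sum()

# 3) Likelihood-weighted 90% prediction bounds from the behavioral ensemble
ens = np.array([linear_reservoir(precip, k) for k in k_samples[behavioral]])
lower = np.array([weighted_quantile(ens[:, t], 0.05, weights) for t in range(ens.shape[1])])
upper = np.array([weighted_quantile(ens[:, t], 0.95, weights) for t in range(ens.shape[1])])
print(f"{behavioral.sum()} behavioral sets; mean 90% bound width {np.mean(upper - lower):.2f}")
```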
Following GLUE, numerous contributions have been made to formally quantify parameter uncertainty. These approaches primarily rely on Bayesian statistics [83,84,85,86]. The main advantage of formal Bayesian statistics is that parameter uncertainty is not only quantified but can also be reduced through the inclusion of prior knowledge. Although GLUE's behavioral thresholds can achieve this goal, their formulation is subjective. The latest addition to the formal approach is DREAM [18], which has been applied widely. DREAM's widespread application stems from its unique capability of merging the differential evolution algorithm [87] with the adaptive Markov chain Monte Carlo (MCMC) approach [88,89]. The differential evolution allows DREAM to efficiently explore non-linear and discontinuous parameter spaces, while the adaptive MCMC component keeps it within the parameter posterior space. Contrary to other MCMC schemes [90], DREAM uses multiple chains to exchange information about the sampling space rather than merely to confirm attainment of the stationary state. The primary challenge of DREAM and other Bayesian methods is identifying a likelihood function that yields homoscedastic residuals and thereby justifies the strong assumptions of a formal likelihood function. This challenge is particularly acute for streamflow records dominated by low flows coupled with only a few high flows [33,55,91,92,93].
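The following sketch illustrates the ingredients of the formal Bayesian approach, a bounded uniform prior, a Gaussian homoscedastic likelihood (the strong assumption noted above), and posterior sampling, using a plain random-walk Metropolis sampler rather than DREAM itself; the linear "model" and all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta, x, y):
    """Gaussian likelihood with homoscedastic error sigma; uniform prior bounds act as the log-prior."""
    k, sigma = theta
    if not (0.0 < k < 1.0 and sigma > 0.0):
        return -np.inf
    y_sim = k * x                          # placeholder model; a real case would call the hydrologic model
    return -0.5 * np.sum(((y - y_sim) / sigma) ** 2) - y.size * np.log(sigma)

# Synthetic calibration data
x = rng.gamma(2.0, 3.0, 300)
y = 0.4 * x + rng.normal(0.0, 0.5, 300)

theta = np.array([0.5, 1.0])               # initial state
chain, logp = [], log_posterior(theta, x, y)
for _ in range(20000):
    proposal = theta + rng.normal(0.0, [0.02, 0.05])   # random-walk proposal
    logp_new = log_posterior(proposal, x, y)
    if np.log(rng.uniform()) < logp_new - logp:         # Metropolis acceptance rule
        theta, logp = proposal, logp_new
    chain.append(theta.copy())

posterior = np.array(chain)[5000:]          # discard burn-in
print("posterior mean (k, sigma):", posterior.mean(axis=0).round(3))
print("95% credible interval for k:", np.quantile(posterior[:, 0], [0.025, 0.975]).round(3))
```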
Least-squares-based parameter inversions incorporated in PEST [19] and the US Geological Survey computer program UCODE [66] are also used to quantify parameter uncertainty. These approaches rely on linear approximation of models and Gaussian residuals. Besides estimating parameter uncertainty, these methods are also useful in determining the number of model parameters that can be estimated using the available data [94,95]. This is valuable in highly parameterized models to prioritize data collection for model improvement. Furthermore, PEST expedites Monte Carlo based UA through its null-space Monte Carlo analysis and the use of singular value decomposition (SVD). This further allows PEST to be used in its SVD-assist mode with Tikhonov regularization. The SVD-assist approach coupled with parallelized Jacobian matrix calculation keeps parameter estimates close to observations (because of Tikhonov regularization) and speeds up calibration (due to parallelization plus SVD reducing the parameter space) for highly parameterized and ill-posed problems. The main limitations of PEST and other least-squares-based UA are their assumptions of model linearity and Gaussian residuals. These assumptions have limited their use in surface water models riddled with threshold-based processes, discontinuities, and integer variables.
Multi-objective optimization (MOO) is another UA approach for parameter uncertainty. MOO-based UA has received substantial interest due to the absence of a single global optimal solution arising from equifinality, and because of the need to better constrain flux and storage outputs in hydrological models [60,96,97]. MOO allows retrieving information from the increasingly available data against which model predictions can be compared [61,98]. It handles parameter uncertainty by conditioning model calibration on multiple complementary and competing objective functions. The functions can be defined using (1) multiple responses, such as metrics that measure the matching of different segments of a hydrograph, (2) the same output variable at multiple sites, and (3) multi-variable outputs such as streamflow, soil moisture, and evapotranspiration [61,99,100]. The result of MOO is a set of trade-off solutions, referred to as non-dominated (Pareto) solutions, in which each member solution matches some aspect of the objectives better than every other member.
Parameter uncertainty is considered well identified when the optimized parameter uncertainty has decreased substantially from the prior uncertainty, leading to a reduced model prediction uncertainty. In MOO, the uncertainty range can be narrowed further by rejecting solutions that fail the model validation test. MOO also provides unique parameter uncertainty information through the shape of the Pareto front. For instance, an extended Pareto front along an objective indicates high uncertainty in reproducing the parameterization of the corresponding processes [61,100,101]. Furthermore, MOO provides functionality beyond parameter UA, as the shape of the tradeoff allows us to understand model structural limitations [102]. However, this advantage might not be straightforward because meaningful multi-objective tradeoffs in hydrological modeling are less frequent [102].
One of the challenges in MOO is the lack of consistent criteria for choosing the number and type of objective functions and the inability to formally emphasize one criterion over another. Like GLUE, MOO is also statistically informal, as it does not use Bayesian probability theory. Theoretically, the Pareto solutions and GLUE's behavioral solutions may overlap; however, they are not necessarily equivalent. The difference between the parameter uncertainty sets defined by the Pareto front and the GLUE approach is that GLUE's equifinality set consists of both dominated and non-dominated parameter sets, whereas the Pareto set contains only the non-dominated ones [103]. The difference between Bayesian UA and multi-objective analysis is that Bayesian schemes usually use a single objective (likelihood function) or an aggregated multi-objective. In cases where a multi-objective configuration is employed, the Bayesian scheme results in a compromise solution [104], while MOO results in a full Pareto solution [103]. A detailed review of MOO can be found in [100].
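To make the distinction between dominated and non-dominated sets concrete, the sketch below filters Monte Carlo parameter samples to a Pareto set using two synthetic, competing objectives (both minimized); the objective surfaces are invented purely for illustration.

```python
import numpy as np

def non_dominated(obj):
    """Boolean mask of Pareto-optimal rows, assuming all objectives are minimized."""
    obj = np.asarray(obj)
    mask = np.ones(len(obj), dtype=bool)
    for i in range(len(obj)):
        for j in range(len(obj)):
            if i != j and np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i]):
                mask[i] = False          # sample i is dominated by sample j
                break
    return mask

rng = np.random.default_rng(3)
params = rng.uniform(0.05, 0.95, size=(500, 2))        # hypothetical 2-parameter samples

# Two competing objectives, e.g., high-flow and low-flow errors (synthetic surfaces here)
f_high = (params[:, 0] - 0.3) ** 2 + 0.1 * params[:, 1]
f_low = (params[:, 1] - 0.7) ** 2 + 0.1 * params[:, 0]

pareto = non_dominated(np.column_stack([f_high, f_low]))
print(f"{pareto.sum()} non-dominated (Pareto) parameter sets out of {len(params)}")
```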
Response surface methods are used to approximate predictive uncertainty that is primarily caused by parameter uncertainty. They employ a proxy function emulating the original model's parameter-output relationships, making the method blind to the model's internal workings and to any outputs that are not emulated. The common proxy functions used to approximate models are truncated polynomial chaos expansion (PCE) [2,105,106] and machine learning schemes [107,108,109,110]. Since these methodologies are computationally inexpensive, the approximation is suited for UA in complex models that demand long run times or for real-time forecasts. Response surface methods train the approximating function on a subset of the original model's output-parameter relationships. Consequently, one of the major challenges of these schemes is ensuring the accuracy of the approximating function. If the subset is not representative, the function can lead to biased approximations. Identifying the right size and a representative subset is an area of ongoing research. Furthermore, the accuracy of response surface methods depends on the smoothness of the original model's parameter-output relationship. Thus, the approximation is better for problems that are not highly non-linear and for response surfaces that are not riddled with thresholds and multiple local optima. Commonly, a validation test on a split sample is used to confirm the approximating function's accuracy [2,110].
The typical approach to uncertainty quantification using response surfaces is to conduct Monte Carlo simulations with the approximating function. The resulting Monte Carlo distributions represent the predictive uncertainty. Compared to the machine learning alternative, PCE offers an analytical approximation of the uncertainty through an estimate of the variance without the need for simulations [111]. Further reviews of PCE can be found in [112,113], while reviews of machine learning approaches are provided in [114].
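A minimal sketch of this workflow follows: a low-order polynomial fit stands in for a truncated PCE or machine-learning emulator of a hypothetical "expensive" model, is checked on a split sample, and is then used for Monte Carlo propagation of an assumed uniform parameter distribution; all functions and numbers are illustrative.

```python
import numpy as np

def expensive_model(k):
    """Stand-in for a long-running hydrologic model: returns peak flow for parameter k."""
    return 50.0 * k / (0.2 + k) + 5.0 * np.sin(6.0 * k)

rng = np.random.default_rng(7)

# 1) Run the original model at a small design of parameter samples
k_train = rng.uniform(0.05, 0.95, 40)
y_train = expensive_model(k_train)

# 2) Fit a cheap polynomial surrogate (a crude stand-in for a truncated PCE)
surrogate = np.poly1d(np.polyfit(k_train, y_train, deg=3))

# 3) Validate on a split sample before trusting the emulator
k_test = rng.uniform(0.05, 0.95, 20)
rmse = np.sqrt(np.mean((surrogate(k_test) - expensive_model(k_test)) ** 2))
print(f"surrogate validation RMSE: {rmse:.2f}")

# 4) Monte Carlo on the surrogate approximates the parameter-induced predictive uncertainty
k_mc = rng.uniform(0.05, 0.95, 100000)      # assumed parameter distribution
y_mc = surrogate(k_mc)
print("approx. 90% predictive interval:", np.quantile(y_mc, [0.05, 0.95]).round(2))
```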
Dotto et al. [115] compared Monte Carlo (GLUE), multi-objective (A Multi-Algorithm, Genetically Adaptive Multiobjective method—AMALGAM [62]), and Bayesian (Model-Independent Markov Chain Monte Carlo Analysis—MICA [116]) techniques for estimating parameter uncertainty. They found that all the techniques performed similarly; GLUE was slow but the easiest to implement. Although the Bayesian technique was less suitable because it required homoscedastic residuals, it was the only one avoiding subjectivity. Finally, they indicated that method choice depends on the modeler's experience, priorities, and problem complexity. Further, Keating et al. [117] found equivalent performance between the Bayesian DREAM and the least-squares null-space Monte Carlo. Comparing GLUE and formal Bayesian techniques, Thi et al. [118] indicated that the formal Bayesian method was more efficient in producing better-identified parameters. However, Jin et al. [119] indicated that a stricter behavioral threshold choice can achieve similar identifiability at increased computational cost [120]. A detailed comparison of formal Bayesian techniques against GLUE and its behavioral threshold choices can be found in [120].

3.2. Input Uncertainty

Hydrological models require different hydroclimatic input data. For this review, we focus on precipitation and its uncertainty. Traditionally, input uncertainty is addressed using a factor that multiplies the input precipitation to correct measurement error. This approach is relatively fast and easy, as the multiplication factor is based on expert judgement or estimated along with the model parameters. However, the method is ad hoc and lacks formal procedures for determining the multiplying factor. Beyond this approach, precipitation uncertainty can be studied using Monte Carlo (GLUE) and Bayesian statistics. The Monte Carlo approach estimates the rainfall multiplier along with the model parameters. The Bayesian techniques can be classified into two classes, model dependent and model independent, depending on whether the input uncertainty is estimated dependently or independently of the model parameters. Balin et al. [121] demonstrated how input uncertainty can be estimated as a standalone quantity, separate from the model parameters. They decoupled input uncertainty from the model by assuming a distribution that represents the "true" precipitation. This approach is advantageous because it decouples input uncertainty from the remaining uncertainty sources. However, it is limited by the difficulty of determining the "true" input distribution.
On the other hand, Bayesian total error analysis (BATEA [74]) and the integrated Bayesian uncertainty estimator (IBUNE [73]) estimate input uncertainty along with the model parameters. Hence, these approaches recover input uncertainty jointly with parameter and structural uncertainties. Thereby, the estimated input uncertainty is tied to the specific model structure and can vary with changes in model structure. Consequently, the estimated input uncertainty might reflect not only input measurement errors but also structural and parameter uncertainties. These approaches can also be computationally expensive in distributed models, which require spatially and temporally varying inputs.
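Returning to the traditional multiplier idea introduced above, the sketch below illustrates its mechanics under strong simplifying assumptions: a single precipitation multiplier is sampled jointly with a toy model parameter, and both are constrained by the fit to synthetic observations generated with deliberately under-caught rainfall; it shows the joint-estimation and compensation behavior rather than any specific published scheme.

```python
import numpy as np

def linear_reservoir(precip, k, s0=0.0):
    """Toy single-store model used only for illustration."""
    s, q = s0, np.empty_like(precip)
    for t, p in enumerate(precip):
        s = s + p - k * s
        q[t] = k * s
    return q

rng = np.random.default_rng(11)
precip_obs = rng.gamma(2.0, 3.0, 200)                  # gauge record with unknown catch deficit
q_obs = linear_reservoir(1.1 * precip_obs, k=0.35)     # "truth" forced by 10% more rainfall

# Jointly sample a precipitation multiplier m and the model parameter k
m = rng.uniform(0.7, 1.5, 5000)
k = rng.uniform(0.05, 0.95, 5000)
rmse = np.array([np.sqrt(np.mean((linear_reservoir(mi * precip_obs, ki) - q_obs) ** 2))
                 for mi, ki in zip(m, k)])

best = rmse.argsort()[:100]                            # retain the best 2% as "behavioral"
print("retained multiplier range:", m[best].min().round(2), "-", m[best].max().round(2))
print("retained k range:        ", k[best].min().round(2), "-", k[best].max().round(2))
```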

3.3. Structural Uncertainty

Multi-model averaging and ensemble techniques, which pool together alternative model structures, have been applied to quantify and reduce structural uncertainty. Hagedorn et al. [122] noted that the success of multi-model averaging is a result of error compensation, consistency, and reliability. Consistency refers to the lack of a single model that performs best under all circumstances, and reliability refers to averages performing better than the worst component of an ensemble. Moreover, Moges et al. [123] showed why a multi-model average performs better than a single model using statistical proof and empirical large-sample data.
Predictions from ensembles are combined using either equal or performance-based weights. Velázquez et al. [70] presented a study that benefits from equal weighting, while performance-based weighting of multi-model predictions is discussed in [37,71,124]. A comparison of alternative model averaging techniques was presented in [125]. Equal weighting is advantageous in the absence of model weighting/discrimination criteria and when the component models have similar performance. However, when the models' performances differ significantly and each model is specialized in simulating certain components of the hydrological processes, variable weights provide better predictions.
One of the most widely used weighted multi-model averaging techniques is Bayesian model averaging (BMA) [126,127,128]. BMA and similar Bayesian-based averaging techniques are advantageous because their weights are interpretable and derived from posterior model performance, which combines the model's ability to fit observations with experts' prior knowledge. Despite its widespread application, BMA has limitations in addressing structural uncertainty. For instance, even though model performance varies across hydrograph segments [129], runoff seasons [14], or catchment circumstances [72], BMA assigns constant weights to the component models. Furthermore, improvement in BMA's performance was noted when component models were weighted differently for different quantiles of a hydrograph [129]. In contrast, the hierarchical mixture of experts (HME) [72], an extension of BMA, has the capacity to assign different weights to different models depending on the dominant processes governing different segments of a hydrograph. This flexibility makes HME useful for dealing with structural uncertainty when model performances vary.
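The sketch below shows the mechanics of weighted multi-model averaging using crude likelihood-based weights on synthetic ensemble predictions; operational BMA estimates weights and predictive variances with an expectation-maximization algorithm [126,127,128], which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
q_obs = rng.gamma(2.0, 5.0, 300)                       # hypothetical observed flows

# Predictions from three competing model structures (synthetic here)
preds = np.vstack([q_obs * rng.normal(1.00, 0.10, 300),
                   q_obs * rng.normal(0.95, 0.20, 300),
                   q_obs * rng.normal(1.10, 0.15, 300)])

# Crude posterior-style weights from a Gaussian likelihood; for long records these
# weights can collapse almost entirely onto the best-fitting structure
sigma = preds.std(axis=1, keepdims=True)
loglik = -0.5 * np.sum(((q_obs - preds) / sigma) ** 2, axis=1) - q_obs.size * np.log(sigma.ravel())
w = np.exp(loglik - loglik.max())
w /= w.sum()

q_avg = w @ preds                                      # weighted-average prediction
print("model weights:", w.round(3))
print("best single-model RMSE:", np.sqrt(np.mean((preds - q_obs) ** 2, axis=1)).min().round(3))
print("weighted-average RMSE: ", np.sqrt(np.mean((q_avg - q_obs) ** 2)).round(3))
```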
Hydrology involves governing physical principles and established process signatures that need to be respected. As a result, any statistically based multi-model averaging technique needs to be critically streamlined so that it respects hydrological principles. In this regard, Gupta et al. [130] stressed the application of diagnostic modeling. Similarly, Sivapalan et al. [131] argued for the importance of parsimony and the incorporation of dominant processes in model structural development. To this end, Moges et al. [132] streamlined HME by coupling it with hydrological signatures that diagnose model inadequacy. Such approaches can be explored to characterize catchment hydrology while avoiding disparities between statistical success and process description.
Structural uncertainty has also been addressed in the framework for understanding structural errors (FUSE [59]), which explores structural uncertainty through shuffling and recombination of parts of alternative hydrological models. The parts constitute alternative formulations of processes. FUSE is similar to Monte Carlo (or bootstrap) methods; however, the samples in FUSE are components of a model representing the different alternative process conceptualizations. Further, given the ease of parallelization, FUSE is a practical way to address structural uncertainty.
The chief criticism of all multi-modeling UA is the limitation of identifying and incorporating the entire span of feasible model structures. Although the above methods can handle any number of structures, the practical inability to identify and explore the entire structural space constrains the scope of the investigation. As a result, full examination of structural uncertainty is limited.

3.4. Calibration Data Uncertainty

Although different calibration data are used to constrain hydrological models, streamflow observation uncertainty is the most common in the literature. Hence, this subsection focuses on streamflow observation uncertainty. The methodologies used to address observation uncertainty include Monte Carlo simulation of the rating curve uncertainty, use of the upper and lower bounds of the rating curve estimates, and Bayesian techniques. The first method propagates observation uncertainty to parameter and/or predictive uncertainty estimates through repeated calibration of the hydrologic model [45], making it computationally intensive. In the second approach, only the upper and lower rating curve uncertainty bounds are considered to derive the corresponding upper and lower bounds of predictive uncertainty [133]. This approach is computationally efficient, but it does not fully exploit the information contained in the rating curve uncertainty. To better utilize the rating curve uncertainty, McMillan et al. [134] suggested deriving an informal likelihood function that accounts for the entire rating curve uncertainty. This technique allowed them to apply Bayesian statistics and use the rating curve information to analyze observation uncertainty. In contrast, Sikorska and Renard [135] used a formal Bayesian technique in which the rating curve parameters and the hydrological model parameters are inferred simultaneously by directly using the stage measurements instead of the streamflow estimates.
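A minimal sketch of the first (Monte Carlo) approach follows, assuming a power-law rating curve Q = a(h - h0)^b with an assumed spread on its fitted parameters; in a full analysis, each generated discharge realization would drive a separate calibration of the hydrologic model.

```python
import numpy as np

rng = np.random.default_rng(9)

# Stage record (m) at a hypothetical gauge
stage = rng.uniform(0.3, 2.5, 365)

# Rating curve Q = a * (h - h0)^b, with an assumed spread on the fitted parameters a and b
a_samples = rng.normal(12.0, 1.5, 200)
b_samples = rng.normal(1.8, 0.1, 200)
h0 = 0.1

# Each realization is one plausible "observed" discharge series
q_realizations = np.array([a * (stage - h0) ** b for a, b in zip(a_samples, b_samples)])

# In the Monte Carlo approach, every realization would be used to recalibrate the model,
# propagating rating-curve (observation) uncertainty into parameter/predictive uncertainty.
q_median = np.median(q_realizations, axis=0)
rel_width = (np.quantile(q_realizations, 0.95, axis=0)
             - np.quantile(q_realizations, 0.05, axis=0)) / q_median
print(f"median relative 90% width of 'observed' discharge: {np.median(rel_width):.1%}")
```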
Observation uncertainty influences predictive uncertainty. McMillan et al. [134] showed improvement in model predictive uncertainty by considering discharge observation uncertainty compared to a model that does not, while Beven and Smith [82] demonstrated the disinformation emanating from observation uncertainties. Aronica et al. [133] showed how different rating curve realizations lead to different predictive uncertainties, whereas Engeland et al. [45] concluded that considering or ignoring observation uncertainty leads to different parameter uncertainties and different interactions with structural, parameter, and predictive uncertainties. These studies indicate the significance of observation uncertainty and the need to account for it.

3.5. Predictive Uncertainty

Predictive uncertainty is the resultant uncertainty in the model output. It can be quantified by propagating parameter, structural, and input uncertainties to the model output, mostly through Monte Carlo simulation of these sources. Predictive uncertainty can be reduced using different machine learning schemes [63,136]. Although their physical process realism needs validation, these schemes reduce predictive uncertainty because they can learn and identify patterns in the model residuals. In addition, reducing predictive uncertainty requires disentangling and understanding the propagation of the different sources of uncertainty.

4. Salient Features and Additional Perspectives in Hydrological Model Uncertainty

4.1. Uncertainty Source Interaction and Reduction

Understanding the causes and interactions of the uncertainty sources is critical for disentangling and reducing predictive uncertainty. The different causes and interactions of the four sources are summarized in Figure 3. Some causes are common across the different sources (Figure 3). Structural uncertainty is at the core and interacts with all other sources of uncertainty. The interactions between input and structural uncertainty are not direct; they interact through the model parameters. On the other hand, although input and calibration data uncertainties are independent, they both interact with parameter uncertainty. The interaction between input and parameter uncertainties may lead to biased parameter estimates [74], which can dominate the overall model uncertainty [137,138]. Similarly, input errors can be compensated by parameters [139,140,141]. However, this compensation can be misleading because the causes of input uncertainty are independent of any model and its parameters. This is particularly a concern when the model is used under conditions different from those used for model development. Thus, it is better to address the causes of input errors independently rather than relying on compensation.
Although observation and model structural uncertainty do not necessarily interact directly, Montanari and Baldassarre [142] showed how different model structures can limit the propagation of observation uncertainty. For a given model structure, observations are used for (1) model performance evaluation, (2) deriving hydrological signatures, and (3) deriving likelihood measures in model parameter estimation. Through these uses, observation uncertainty interacts with parameter uncertainty when parameters are estimated. Consequently, failure to account for observation uncertainty can lead to biased parameter and predictive uncertainty [135,143,144]. Due to the interaction between parameter and observation uncertainty, acquiring more data can reduce parameter/predictive uncertainties. However, if the information content of the acquired data cannot support the parameterization of the model, these uncertainties may not be reduced by simply acquiring more data. Different techniques are employed to better extract the information content of calibration data, including (i) extracting different hydrological signatures from streamflow [145,146,147], (ii) increasing the spatial coverage of observations beyond the catchment outlet [148,149,150], (iii) using different variables beyond streamflow [151,152], (iv) incorporating soft data from expert-based diagnostics [153,154], and (v) including remote sensing data [155,156,157]. These examples indicate that not only data acquisition, but also the diversity and information content of the data, are essential.
The bias–variance trade-off curve describes the interaction between parameter and structural uncertainty [158] (Figure 4). This theoretical curve shows that the optimal model lies between a biased, assumption-ridden simple model and a highly variable complex model; that is, model biases lead to inaccurate process representations, while building a complex model without the detailed supporting data results in highly variable outputs and ambiguities. Thus, model building involves trade-offs between the two sources. Consequently, Clark et al. [159] state that the boundary between structure and parameter is not always clearly defined. As different parameter values can translate into structural uncertainty, Clark et al. [159] indicated that it is not straightforward to distinguish parameter from structural uncertainty. Structural uncertainty can also manifest itself in parameters residing at the edge of the permissible parameter range [160]. Indeed, calibrated parameter values lying outside their physical ranges serve as an indicator of structural uncertainty [161]. Similarly, different studies have shown the tradeoff between these two sources [162,163].
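The trade-off can be made explicit with the standard bias–variance decomposition of the expected squared prediction error (a general statistical identity, written here for a model prediction of a noisy target; not a hydrology-specific derivation):

```latex
\mathbb{E}\!\left[(y - \hat{y})^{2}\right]
  = \underbrace{\bigl(\mathbb{E}[\hat{y}] - f\bigr)^{2}}_{\text{bias}^{2}}
  + \underbrace{\mathbb{E}\!\left[\bigl(\hat{y} - \mathbb{E}[\hat{y}]\bigr)^{2}\right]}_{\text{variance}}
  + \underbrace{\sigma_{\varepsilon}^{2}}_{\text{irreducible noise}},
  \qquad y = f + \varepsilon,\ \varepsilon \sim (0, \sigma_{\varepsilon}^{2}).
```

Simple, assumption-ridden models inflate the bias term, while overly complex models estimated from limited data inflate the variance term; the optimal structure and parameterization sit between the two.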
On the bias–variance trade-off curve, there are two potential ways of reducing uncertainty. From the right, parameter uncertainty can be lessened by reducing model complexity through multi-objective and multi-variable analysis, dimensional reduction of complex and coupled models, or approximation of the complex model with surrogate proxies. Via any of these approaches, reduction of parameter uncertainty demands computational efficiency, different flux and storage data, and effective UA methods (Section 3 and the literature therein). From the left, structural uncertainty can be lessened by removing or relaxing simplifying assumptions through theory development, coupling of sub-models, and improved process understanding [14,164,165]. However, due to the uniqueness of place and process, reduction of structural uncertainty can be watershed dependent [166]. Consequently, the use of hydrological signatures can be critical in identifying and diagnosing the dominant processes and incorporating the proper level of model complexity.

4.2. Model Inadequacy and Structural Uncertainty

Model adequacy is interlinked with structural uncertainty. Model inadequacy results from missing one or more processes, inadequate numerical schemes and discretization, adopting a deterministic approach for a stochastic phenomenon, and misrepresenting the process/physical phenomenon [167]. Conversely, structural uncertainty results from the inability to discriminate between alternative models due to limitations in process understanding (Section 2). As such, ideally, the ensemble techniques in Section 3.3 should contain models that are difficult to discriminate, thereby linking model inadequacy evaluation and structural UA. However, in most ensemble-based structural UA, it is not common to address structural uncertainty following model inadequacy evaluations.
Both hydrological and statistical approaches are used to evaluate model inadequacy. Using a statistical technique, Pande [168] proved that the crossing of any two quantiles of a model structure indicates the structure's inadequacy. Pande [169] applied this quantile-based approach to rank several structures. This ranking can be used to identify alternative models for structural UA. Although other statistical model ranking methods, such as Bayesian model selection criteria, exist, Pande's [168] approach is preferable as it is proven to diagnose structural inadequacy. However, as the approach involves model optimization for different quantiles, it is computationally demanding. Further, as this approach is statistical, it cannot always guarantee hydrological adequacy. Alternatively, in the hydrological approach, process-based signatures are employed to evaluate model structural adequacy. For instance, the use of recession curves may highlight deficiencies in subsurface flow characterization, while the behavior of flow duration curves can diagnose process nonlinearities [56,170,171,172]. Further, McMillan [173] provided a review of hydrological signatures and their corresponding processes. These links can be used as hydrological instruments to identify inadequacy and enhance structural UA.

4.3. UA in Ungauged Basins

Most watersheds across different parts of the world are ungauged. Although such watersheds are common, model development and UA in them are limited because both demand hydrological observations [174,175]. In these watersheds, there are two approaches to UA: model dependent and model independent (Figure 5). In both approaches, we have identified five distinct stages that may incur uncertainty. The model-dependent approach [176,177] involves (i) choosing a model, (ii) estimating and quantifying parameter uncertainty for several gauged watersheds, which can be achieved using the Bayesian, Monte Carlo, or multi-objective techniques discussed in Section 3, (iii) determining how many parameters (the full parameter distribution or a subset) to transfer to the ungauged catchment, (iv) developing a regional relationship between the model parameters and watershed attributes, and (v) estimating the parameter distribution of the ungauged catchment using the regional relationship (Figure 5a). At its core, this approach directly transfers model parameters from gauged to ungauged watersheds. Hence, it can propagate both structural and parameter uncertainties from gauged to ungauged catchments. Additionally, UA becomes challenging when the model parameters and regional relationships at stage four are poorly identified. Further, at stage three, determining the transferable parameters not only lacks a guideline but also incurs a high computational cost when the full parameter distribution is transferred. As an alternative to selecting a parameter subset, Hundecha and Bárdossy [178] directly estimated the regression coefficients instead of the model parameters. Their approach requires predefining a linear regression between a set of sensitive model parameters and catchment characteristics. Although this alleviates the subset problem, it assumes a linear relationship and increases the number of parameters (regression coefficients) to be estimated.
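As an illustration of stages (iv) and (v) of the model-dependent approach, the sketch below regresses calibrated parameter values from hypothetical gauged catchments against two catchment attributes and uses the regression residual spread as a first-order estimate of the transferred parameter's uncertainty; all catchment data are synthetic, and the uncertainty of the regression coefficients themselves is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical calibrated recession parameter k from 30 gauged catchments,
# with catchment attributes: drainage area (km^2) and mean slope (-)
area = rng.uniform(50, 2000, 30)
slope = rng.uniform(0.01, 0.30, 30)
k_calibrated = 0.1 + 0.5 * slope - 1e-5 * area + rng.normal(0, 0.03, 30)

# Stage (iv): fit a regional (linear) relationship k = c0 + c1*slope + c2*area
X = np.column_stack([np.ones_like(area), slope, area])
coeffs, residuals, *_ = np.linalg.lstsq(X, k_calibrated, rcond=None)
sigma_res = np.sqrt(residuals[0] / (len(area) - X.shape[1]))   # residual std of the regression

# Stage (v): estimate the parameter distribution at an ungauged catchment from its attributes
x_ungauged = np.array([1.0, 0.12, 600.0])                      # [1, slope, area] of the target basin
k_mean = x_ungauged @ coeffs
k_samples = rng.normal(k_mean, sigma_res, 5000)                # regression residual uncertainty only
print(f"ungauged-catchment k: mean {k_mean:.3f}, 90% interval "
      f"{np.quantile(k_samples, [0.05, 0.95]).round(3)}")
```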
The model independent approach involves the following stages (Figure 5b): (i) select watershed attributes and hydrological signatures, (ii) regionalize the attributes and hydrological signatures (e.g., baseflow index, curve number, mean flows), (iii) estimate the value of the hydrological signatures using the ungauged watershed attributes, (iv) choose a model, and (v) apply the model and quantify parameter uncertainty by targeting the estimated signatures at the ungauged watershed. Stage five may employ Monte Carlo, Bayesian and multi-objective methods. As any model uses the information at stage four, this approach allows a direct quantification of structural uncertainty and is less prone to propagating structural and parameter uncertainty from gauged to ungauged watersheds. However, the approach is limited when the relationship between hydrological signatures and catchment attributes is poor. Additionally, since most of the hydrological signatures employed at stage four are streamflow oriented, the approach might be poor in predicting other flux or storage variables.
At stage five, the Monte Carlo method discriminates the behavioral and non-behavioral parameters by capitalizing on the confidence interval determined at stage three [179]. The multi-objective approach uses the estimated signatures (stage three) as its objectives [180]. The Bayesian approach employs any distribution (e.g., based on expert knowledge) as its parameter prior, while it relates the model simulated and estimated hydrological signatures for its likelihood [175,181,182]. Moreover, the fundamental pros and cons of the Monte Carlo, Bayesian and multi-objective techniques in Section 3 apply here as well.
UA in both approaches benefits from having (i) many gauged basins in the regionalization, (ii) a discernible regional relationship, and (iii) non-redundant, complementary hydrological signatures. Similarly, both approaches implicitly assume that the dominant process in the ungauged catchment is comparable to, or contained within, the set of dominant processes in the gauged catchments and is a function of watershed attributes. Hence, model choice is critical. Comparing a physically based with a conceptual model, the physically based model is theoretically advantageous as it relates most of its parameters to catchment attributes. However, since its parameters are often poorly identified, using such models for regionalization may lead to high uncertainty. In contrast, although direct physical relationships between conceptual model parameters and catchment attributes are lacking, conceptual models are better suited for ungauged-catchment UA because of their parsimony. Parsimony often leads to better-identified parameters, which are relatively easy to regionalize and transfer to ungauged basins. Better identification may also reduce the challenge of subsetting parameters for regionalization.
The studies reviewed in this subsection did not account for input and observation uncertainties. However, we provide a perspective on addressing them in ungauged basins. A prior accounting of observation uncertainty in gauged catchments may reduce the propagation of observational uncertainty from gauged to ungauged watersheds; this applies only to the model-dependent ungauged UA. Since there are no observations in ungauged basins, this uncertainty is not a concern on its own. Similarly, accounting for input uncertainty in the gauged watersheds before regionalization can reduce the propagation of input uncertainty to the ungauged watershed. However, it is important for future studies to assess and disentangle all sources of uncertainty in ungauged basins.

4.4. Other Approaches in UA

This review has so far focused on the widely used probabilistic and optimization-based UA (Section 3). Theoretically, accurate estimates through these methods require large amounts of data. Additionally, these methods do not readily accommodate ambiguous information (e.g., natural language and qualitative information). These qualities may limit their success in data-scarce conditions and regions. Considering these limitations, alternative philosophies are in use. Although these alternatives are not the focus of this review, we briefly highlight fuzzy and information-theoretic approaches. Fuzzy approaches rely on subjectively defined membership functions instead of formal probabilities [183]. Hence, this approach is well suited to incorporating ambiguous natural-language, qualitative, and other additional information into UA. Liu et al. [184] showcased the application of the method in addressing parameter uncertainty. The challenges of this approach include subjectivity in defining the membership function and difficulty in accounting for probabilistic quantities. A general review of fuzzy theory applied to hydrological problems can be found in [185].
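As a small illustration of the fuzzy idea, the sketch below defines a triangular membership function expressing hypothetical expert belief about a recession parameter; degrees of membership, rather than formal probabilities, grade the acceptability of parameter values.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Degree of membership for a triangular fuzzy set with support [a, c] and peak at b."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0)    # rising limb
    right = np.clip((c - x) / (c - b), 0.0, 1.0)   # falling limb
    return np.minimum(left, right)

# Hypothetical expert-defined fuzzy acceptability of a recession parameter k (peak belief at 0.3)
k = np.linspace(0.0, 1.0, 11)
print(dict(zip(k.round(1), triangular_membership(k, 0.1, 0.3, 0.6).round(2))))
```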
On the other hand, information-theoretic philosophies question the fundamental notion of quantifying uncertainty. Their central argument rests on the application and limitation of probabilistic methods in the absence of a "true" model and with limited data. That is, the lack of a "true" model and limited data may prevent making the true probabilistic inference, hence uncertainty is inherently unknown [186]. Instead of quantifying the unknown uncertainty, this approach asks a different question: to what extent are the available observed data useful for model improvement [187]? The application of this approach has been demonstrated in addressing both structural and parameter uncertainties [188]. Although Nearing and Gupta [188] argued that information theory is independent of probability theory, its technical application involves probabilistic terms [187].

5. Summary and Outlook

Across the four sources of hydrological model uncertainty, six broad classes of UA methods are identified. Each method has its own strengths and limitations, and none is universally superior. In terms of coverage, parameter uncertainty has benefited from a variety of techniques, while input uncertainty has recently gained attention. In contrast, although a few works deal with structural uncertainty, it is not addressed as extensively as warranted, given that it is at the core of the other sources [41]. Meanwhile, calibration data uncertainty analysis beyond streamflow observations is limited.
UA studies that treat all sources of uncertainty in an integrated manner or disentangle their contributions are sparse. More work in this area will help source attribution and interpretation of predictive uncertainty. In line with source attribution, it is also important to employ uncertainty propagation in integrated hydrological models. By coupling different complementary sub-models, these models have demonstrated their value in reducing assumptions and capturing feedback processes [189,190]. However, such studies have not yet widely utilized UA to attribute their predictive uncertainty and illuminate which sub-model needs improvement or what additional data are required. In this regard, Moges et al. [191] showed how a framework based on the winding stairs algorithm and a concurrent computational framework can be utilized.
Uncertainty characterizations are limited in their ability to provide reliable information for applications involving environmental changes and coupled human–natural-hydrologic systems. This is because these systems involve many intricate and highly uncertain factors, such as future forcing input variables and decision-making under changing environments. Hence, to treat the propagation of uncertainty due to the feedback between climate models and human water management, an advanced UA approach involving an integrated transdisciplinary framework is needed. This challenge is recognized within the modeling community [192,193].
Model development, particularly in transboundary rivers, involves not only the above four sources of uncertainty but also geopolitical uncertainty related to water resources governance [194]. This uncertainty is characterized by data secrecy, ambiguous treaties, and a lack of mutual trust and cooperation, resulting in difficulties with data sharing and accessibility. Consequently, geopolitical uncertainty may confound the above uncertainty sources, make hydrological models less reliable, and limit their reproducibility and usefulness in decision-making [194,195]. As this source of uncertainty is poorly addressed in the existing hydrological literature, it needs more studies involving social scientists and different stakeholders to promote transparency, trust building, and multidisciplinary cooperation, in line with hydro-social thinking [196].
Another research frontier in UA is its application in large data sets [197,198]. Such analysis can be useful to generalize, regionalize, and identify patterns of model uncertainty in catchments of different climate, topography, and nature. Hence, it is important to apply different techniques to diverse datasets beyond single-catchment focused studies. Such explorations can reveal organizing principles and patterns of model uncertainty [199].
UA applications are still limited in practical engineering and management operations. One approach to bridge this limitation is to better communicate UA and link it with other concerns such as economics. In this regard, McMillan [25] demonstrated how UA reduces cost in the hydropower industry. Similarly, investigating the implications of UA on the flood insurance sector can be a path for coupling research and practice [200]. Future investigations in these areas can help UA gain a widespread foothold for the practicing community.
Furthermore, although UA is increasingly being included in studies of ungauged watersheds, it remains at an early stage [174]. Given that most watersheds across the world are ungauged, future research in this avenue is also critical. To this end, this review provides the first critique of the steps, the alternatives, and the uncertainty sources in ungauged watersheds.

Author Contributions

Conceptualization, E.M., Y.D., L.L. and F.Y.; writing—original draft preparation, E.M.; writing—review and editing, E.M., Y.D., L.L. and F.Y.; visualization, E.M. and F.Y.; supervision, Y.D. and L.L.; funding acquisition, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was conducted as a part of the “Improved hydrologic forecasting through synthesis of critical storage components and timescales across watersheds worldwide” Working Group supported by the John Wesley Powell Center for Analysis and Synthesis, funded by the USGS.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bergström, S. Development and Application of a Conceptual Runoff Model for Scandinavian Catchments; SMHI Norrköping: Norrköping, Sweden, 1976. [Google Scholar]
  2. Hossain, F.; Anagnostou, E.N. Assessment of a stochastic interpolation based parameter sampling scheme for efficient uncertainty analyses of hydrologic models. Comput. Geosci. 2005, 31, 497–512. [Google Scholar] [CrossRef]
  3. Abbott, M.B.; Bathurst, J.C.; Cunge, J.A.; O’Connell, P.E.; Rasmussen, J. An introduction to the European Hydrological System—Systeme Hydrologique Europeen, “SHE”, 1: History and philosophy of a physically-based, distributed modelling system. J. Hydrol. 1986, 87, 45–59. [Google Scholar] [CrossRef]
  4. Markstrom, S.L.; Niswonger, R.G.; Regan, R.S.; Prudic, D.E.; Barlow, P.M. GSFLOW—Coupled Ground-Water and Surface-Water Flow Model Based on the Integration of the Precipitation-Runoff Modeling System (PRMS) and the Modular Ground-Water Flow Model (MODFLOW-2005); U.S. Department of the Interior: Washington, DC, USA; U.S. Geological Survey: Washington, DC, USA, 2008. [Google Scholar]
  5. Kollet, S.; Maxwell, R.M. Integrated surface–groundwater flow modeling: A free-surface overland flow boundary condition in a parallel groundwater flow model. Adv. Water Resour. 2006, 29, 945–958. [Google Scholar] [CrossRef] [Green Version]
  6. Gupta, H.V.; Beven, K.J.; Wagener, T. Model Calibration and Uncertainty Estimation. In Encyclopedia of Hydrological Sciences; John Wiley & Sons, Ltd.: Chichester, UK, 2005. [Google Scholar]
  7. Kirchner, J.W. Getting the right answers for the right reasons: Linking measurements, analyses, and models to advance the science of hydrology. Water Resour. Res. 2006, 42. [Google Scholar] [CrossRef]
  8. Kirchner, J.W. Catchments as simple dynamical systems: Catchment characterization, rainfall-runoff modeling, and doing hydrology backward. Water Resour. Res. 2009, 45. [Google Scholar] [CrossRef] [Green Version]
  9. Blöschl, G.; Sivapalan, M. Scale issues in hydrological modelling: A review. Hydrol. Process. 1995, 9, 251–290. [Google Scholar] [CrossRef]
  10. Gan, T.Y.; Biftu, G.F. Effects of model complexity and structure, parameter interactions and data on watershed modeling. Water Sci. Appl. 2003, 6, 317–329. [Google Scholar] [CrossRef]
  11. Orth, R.; Staudinger, M.; Seneviratne, S.I.; Seibert, J.; Zappa, M. Does model performance improve with complexity? A case study with three hydrological models. J. Hydrol. 2015, 523, 147–159. [Google Scholar] [CrossRef] [Green Version]
  12. Li, H.; Xu, C.-Y.; Beldring, S. How much can we gain with increasing model complexity with the same model concepts? J. Hydrol. 2015, 527, 858–871. [Google Scholar] [CrossRef] [Green Version]
  13. Bulygina, N.; Gupta, H.V. Estimating the uncertain mathematical structure of a water balance model via Bayesian data assimilation. Water Resour. Res. 2009, 45. [Google Scholar] [CrossRef]
  14. Son, K.; Sivapalan, M. Improving model structure and reducing parameter uncertainty in conceptual water balance models through the use of auxiliary data. Water Resour. Res. 2007, 43. [Google Scholar] [CrossRef]
  15. Lu, D.; Ricciuto, D.; Evans, K. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method. Adv. Water Resour. 2018, 113, 223–235. [Google Scholar] [CrossRef]
  16. Neuman, S.P.; Xue, L.; Ye, M.; Lu, D. Bayesian analysis of data-worth considering model and parameter uncertainties. Adv. Water Resour. 2012, 36, 75–85. [Google Scholar] [CrossRef]
  17. Beven, K.J.; Binley, A.M. The future of distributed models: Model calibration and uncertainty prediction. Hydrol. Process. 1992, 6, 279–298. [Google Scholar] [CrossRef]
  18. Vrugt, J.A. Markov chain Monte Carlo simulation using the DREAM software package: Theory, concepts, and MATLAB implementation. Environ. Model. Softw. 2016, 75, 273–316. [Google Scholar] [CrossRef] [Green Version]
  19. Doherty, J. PEST, Model-Independent Parameter Estimation—User Manual, 5th ed.; with slight additions; Watermark Numerical Computing: Brisbane, Australia, 2010. [Google Scholar]
  20. Kavetski, D.; Franks, S.W.; Kuczera, G. Confronting input uncertainty in environmental modelling. Water Science and Application 2003, 6, 49–68. [Google Scholar] [CrossRef]
  21. Hadka, D.; Reed, P. Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework. Evol. Comput. 2013, 21, 231–259. [Google Scholar] [CrossRef]
  22. Muleta, M.K.; McMillan, J.; Amenu, G.G.; Burian, S. Bayesian Approach for Uncertainty Analysis of an Urban Storm Water Model and Its Application to a Heavily Urbanized Watershed. J. Hydrol. Eng. 2013, 18, 1360–1371. [Google Scholar] [CrossRef]
  23. Jung, Y.; Merwade, V. Uncertainty Quantification in Flood Inundation Mapping Using Generalized Likelihood Uncertainty Estimate and Sensitivity Analysis. J. Hydrol. Eng. 2012, 17, 507–520. [Google Scholar] [CrossRef]
  24. Rampinelli, C.G.; Knack, I.M.; Smith, T. Flood Mapping Uncertainty from a Restoration Perspective: A Practical Case Study. Water 2020, 12, 1948. [Google Scholar] [CrossRef]
  25. McMillan, H.K.; Seibert, J.; Petersen-Overleir, A.; Lang, M.; White, P.; Snelder, T.; Rutherford, K.; Krueger, T.; Mason, R.; Kiang, J. How uncertainty analysis of streamflow data can reduce costs and promote robust decisions in water management applications. Water Resour. Res. 2017, 53, 5220–5228. [Google Scholar] [CrossRef] [Green Version]
  26. Walker, W.E.; Harremoës, P.; Rotmans, J.; Van Der Sluijs, J.P.; Van Asselt, M.B.A.; Janssen, P.; Von Krauss, M.P.K. Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support. Integr. Assess. 2003, 4, 5–17. [Google Scholar] [CrossRef] [Green Version]
  27. Refsgaard, J.C.; Van Der Sluijs, J.P.; Højberg, A.L.; Vanrolleghem, P.A. Uncertainty in the environmental modelling process—A framework and guidance. Environ. Model. Softw. 2007, 22, 1543–1556. [Google Scholar] [CrossRef] [Green Version]
  28. Pathak, C.S.; Teegavarapu, R.S.V.; Olson, C.; Singh, A.; Lal, A.M.W.; Polatel, C.; Zahraeifard, V.; Senarath, S.U.S. Uncertainty Analyses in Hydrologic/Hydraulic Modeling: Challenges and Proposed Resolutions. J. Hydrol. Eng. 2015, 20, 02515003. [Google Scholar] [CrossRef]
  29. Mishra, S. Uncertainty and sensitivity analysis techniques for hydrologic modeling. J. Hydroinform. 2009, 11, 282–296. [Google Scholar] [CrossRef] [Green Version]
  30. Warmink, J.J.; Janssen, J.A.E.B.; Booij, M.J.; Krol, M.S. Identification and classification of uncertainties in the application of environmental models. Environ. Model. Softw. 2010, 25, 1518–1527. [Google Scholar] [CrossRef]
  31. Beven, K. Changing ideas in hydrology—The case of physically-based models. J. Hydrol. 1989, 105, 157–172. [Google Scholar] [CrossRef]
  32. Wagener, T.; Gupta, H.V. Model identification for hydrological forecasting under uncertainty. Stoch. Environ. Res. Risk Assess. 2005, 19, 378–387. [Google Scholar] [CrossRef]
  33. Beven, K. A manifesto for the equifinality thesis. J. Hydrol. 2006, 320, 18–36. [Google Scholar] [CrossRef] [Green Version]
  34. Refsgaard, J.C.; Christensen, S.; Sonnenborg, T.O.; Seifert, D.; Højberg, A.L.; Troldborg, L. Review of strategies for handling geological uncertainty in groundwater flow and transport modeling. Adv. Water Resour. 2012, 36, 36–50. [Google Scholar] [CrossRef]
  35. Rojas, R.; Kahunde, S.; Peeters, L.; Batelaan, O.; Feyen, L.; Dassargues, A. Application of a multimodel approach to account for conceptual model and scenario uncertainties in groundwater modelling. J. Hydrol. 2010, 394, 416–435. [Google Scholar] [CrossRef] [Green Version]
  36. Zeng, X.; Wang, D.; Wu, J.; Zhu, X.; Wang, L.; Zou, X. Uncertainty Evaluation of a Groundwater Conceptual Model by Using a Multimodel Averaging Method. Hum. Ecol. Risk Assess. Int. J. 2015, 21, 1246–1258. [Google Scholar] [CrossRef]
  37. Butts, M.B.; Payne, J.T.; Kristensen, M.; Madsen, H. An evaluation of the impact of model structure on hydrological modelling uncertainty for streamflow simulation. J. Hydrol. 2004, 298, 242–266. [Google Scholar] [CrossRef]
  38. Doherty, J.; Welter, D. A short exploration of structural noise. Water Resour. Res. 2010, 46, 46. [Google Scholar] [CrossRef]
  39. Refsgaard, J.C.; Van Der Sluijs, J.P.; Brown, J.; Van Der Keur, P. A framework for dealing with uncertainty due to model structure error. Adv. Water Resour. 2006, 29, 1586–1597. [Google Scholar] [CrossRef] [Green Version]
  40. Højberg, A.L.; Refsgaard, J.C. Model uncertainty-parameter uncertainty versus conceptual models. Water Sci. Technol. 2005, 52, 177–186. [Google Scholar] [CrossRef] [PubMed]
  41. Rojas, R.; Feyen, L.; Dassargues, A. Conceptual model uncertainty in groundwater modeling: Combining generalized likelihood uncertainty estimation and Bayesian model averaging. Water Resour. Res. 2008, 44, 44. [Google Scholar] [CrossRef] [Green Version]
  42. Troin, M.; Arsenault, R.; Martel, J.-L.; Brissette, F. Uncertainty of Hydrological Model Components in Climate Change Studies over Two Nordic Quebec Catchments. J. Hydrometeorol. 2018, 19, 27–46. [Google Scholar] [CrossRef]
  43. McMillan, H.; Westerberg, I.K.; Krueger, T. Hydrological data uncertainty and its implications. Wiley Interdiscip. Rev. Water 2018, 5, 1319. [Google Scholar] [CrossRef] [Green Version]
  44. Renard, B.; Kavetski, D.; Kuczera, G.; Thyer, M.; Franks, S.W. Understanding predictive uncertainty in hydrologic modeling: The challenge of identifying input and structural errors. Water Resour. Res. 2010, 46. [Google Scholar] [CrossRef]
  45. Engeland, K.; Steinsland, I.; Johansen, S.S.; Petersen-Øverleir, A.; Kolberg, S. Effects of uncertainties in hydrological modelling. A case study of a mountainous catchment in Southern Norway. J. Hydrol. 2016, 536, 147–160. [Google Scholar] [CrossRef]
  46. Bárdossy, A.; Anwar, F.; Seidel, J. Hydrological Modelling in Data Sparse Environment: Inverse Modelling of a Historical Flood Event. Water 2020, 12, 3242. [Google Scholar] [CrossRef]
  47. Kiang, J.E.; Cohn, T.A.; Mason, R.R., Jr. Quantifying Uncertainty in Discharge Measurements: A New Approach. In Proceedings of the World Environmental and Water Resources Congress 2009, Kansas City, MO, USA, 17–21 May 2009; American Society of Civil Engineers: Reston, VA, USA, 2009; pp. 1–8. [Google Scholar]
  48. McMillan, H.; Krueger, T.; Freer, J. Benchmarking observational uncertainties for hydrology: Rainfall, river discharge and water quality. Hydrol. Process. 2012, 26, 4078–4111. [Google Scholar] [CrossRef]
  49. Sevrez, D.; Renard, B.; Reitan, T.; Mason, R.; Sikorska, A.E.; McMillan, H.; Le Coz, J.; Belleville, A.; Coxon, G.; Freer, J.; et al. A Comparison of Methods for Streamflow Uncertainty Estimation. Water Resour. Res. 2018, 54, 7149–7176. [Google Scholar] [CrossRef] [Green Version]
  50. Sikorska, A.E.; Scheidegger, A.; Banasik, K.; Rieckermann, J. Considering rating curve uncertainty in water level predictions. Hydrol. Earth Syst. Sci. 2013, 17, 4415–4427. [Google Scholar] [CrossRef] [Green Version]
  51. Domeneghetti, A.; Castellarin, A.; Brath, A. Assessing rating-curve uncertainty and its effects on hydraulic model calibration. Hydrol. Earth Syst. Sci. 2012, 16, 1191–1202. [Google Scholar] [CrossRef] [Green Version]
  52. Cong, S.; Xu, Y. The effect of discharge measurement error in flood frequency analysis. J. Hydrol. 1987, 96, 237–254. [Google Scholar] [CrossRef]
  53. WMO. Guide to Hydrological Practices, Volume I: Hydrology – From Measurement to Hydrological Information; WMO: Geneva, Switzerland, 2008. [Google Scholar]
  54. Di Baldassarre, G.; Montanari, A. Uncertainty in river discharge observations: A quantitative analysis. Hydrol. Earth Syst. Sci. 2009, 13, 913–921. [Google Scholar] [CrossRef] [Green Version]
  55. Evin, G.; Kavetski, D.; Thyer, M.; Kuczera, G. Pitfalls and improvements in the joint inference of heteroscedasticity and autocorrelation in hydrological model calibration. Water Resour. Res. 2013, 49, 4518–4524. [Google Scholar] [CrossRef] [Green Version]
  56. Martinez, G.F.; Gupta, H.V. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States. Water Resour. Res. 2011, 47. [Google Scholar] [CrossRef]
  57. Breuer, L.; Huisman, J.A.; Frede, H.-G. Monte Carlo assessment of uncertainty in the simulated hydrological response to land use change. Environ. Model. Assess. 2006, 11, 209–218. [Google Scholar] [CrossRef]
  58. Beven, K.J. Environmental Modelling: An Uncertain Future: An Introduction to Techniques for Uncertainty Estimation in Environmental Prediction; Routledge: Abingdon, UK, 2009; ISBN 9780415457590. [Google Scholar]
  59. Clark, M.P.; Slater, A.G.; Rupp, D.E.; Woods, R.A.; Vrugt, J.A.; Gupta, H.V.; Wagener, T.; Hay, L.E. Framework for Understanding Structural Errors (FUSE): A modular framework to diagnose differences between hydrological models. Water Resour. Res. 2008, 44. [Google Scholar] [CrossRef]
  60. Yapo, P.O.; Gupta, H.V.; Sorooshian, S. Multi-objective global optimization for hydrologic models. J. Hydrol. 1998, 204, 83–97. [Google Scholar] [CrossRef] [Green Version]
  61. Gupta, H.V.; Sorooshian, S.; Yapo, P.O. Toward improved calibration of hydrologic models: Multiple and noncommensurable measures of information. Water Resour. Res. 1998, 34, 751–763. [Google Scholar] [CrossRef]
  62. Vrugt, J.A.; Robinson, B.A. Improved evolutionary optimization from genetically adaptive multimethod search. Proc. Natl. Acad. Sci. USA 2007, 104, 708–711. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Demissie, Y.; Valocchi, A.; Minsker, B.S.; Bailey, B.A. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions. J. Hydrol. 2009, 364, 257–271. [Google Scholar] [CrossRef]
  64. Doherty, J. Ground Water Model Calibration Using Pilot Points and Regularization. Ground Water 2003, 41, 170–177. [Google Scholar] [CrossRef]
  65. Doherty, J.; Johnston, J.M. Methodologies for calibration and predictive analysis of a watershed model. JAWRA J. Am. Water Resour. Assoc. 2003, 39, 251–265. [Google Scholar] [CrossRef]
  66. Poeter, E.P.; Hill, M.C. UCODE, a computer code for universal inverse modeling. Comput. Geosci. 1999, 25, 457–462. [Google Scholar] [CrossRef]
  67. Sochala, P.; Le Maître, O. Polynomial Chaos expansion for subsurface flows with uncertain soil parameters. Adv. Water Resour. 2013, 62, 139–154. [Google Scholar] [CrossRef] [Green Version]
  68. Wu, B.; Zheng, Y.; Tian, Y.; Wu, X.; Yao, Y.; Han, F.; Liu, J.; Zheng, C. Systematic assessment of the uncertainty in integrated surface water-groundwater modeling based on the probabilistic collocation method. Water Resour. Res. 2014, 50, 5848–5865. [Google Scholar] [CrossRef]
  69. Feinberg, J.; Langtangen, H.P. Chaospy: An open source tool for designing methods of uncertainty quantification. J. Comput. Sci. 2015, 11, 46–57. [Google Scholar] [CrossRef] [Green Version]
  70. Velázquez, J.A.; Anctil, F.; Perrin, C. Performance and reliability of multimodel hydrological ensemble simulations based on seventeen lumped models and a thousand catchments. Hydrol. Earth Syst. Sci. 2010, 14, 2303–2317. [Google Scholar] [CrossRef] [Green Version]
  71. Vrugt, J.A.; Diks, C.G.H.; Clark, M.P. Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling. Environ. Fluid Mech. 2008, 8, 579–595. [Google Scholar] [CrossRef]
  72. Marshall, L.; Nott, D.; Sharma, A. Towards dynamic catchment modelling: A Bayesian hierarchical mixtures of experts framework. Hydrol. Process. 2007, 21, 847–861. [Google Scholar] [CrossRef]
  73. Ajami, N.K.; Duan, Q.; Sorooshian, S. An integrated hydrologic Bayesian multimodel combination framework: Confronting input, parameter, and model structural uncertainty in hydrologic prediction. Water Resour. Res. 2007, 43. [Google Scholar] [CrossRef]
  74. Kavetski, D.; Kuczera, G.; Franks, S.W. Bayesian analysis of input uncertainty in hydrological modeling: 2. Application. Water Resour. Res. 2006, 42. [Google Scholar] [CrossRef]
  75. Beven, K.; Smith, P.; Freer, J. Comment on “Hydrological forecasting uncertainty assessment: Incoherence of the GLUE methodology” by Pietro Mantovan and Ezio Todini. J. Hydrol. 2007, 338, 315–318. [Google Scholar] [CrossRef]
  76. Beven, K.; Smith, P.J.; Freer, J. So just why would a modeller choose to be incoherent? J. Hydrol. 2008, 354, 15–32. [Google Scholar] [CrossRef]
  77. Mantovan, P.; Todini, E. Hydrological forecasting uncertainty assessment: Incoherence of the GLUE methodology. J. Hydrol. 2006, 330, 368–381. [Google Scholar] [CrossRef]
  78. Liu, Y.; Freer, J.; Beven, K.J.; Matgen, P. Towards a limits of acceptability approach to the calibration of hydrological models: Extending observation error. J. Hydrol. 2009, 367, 93–103. [Google Scholar] [CrossRef]
  79. Blazkova, S.; Beven, K. A limits of acceptability approach to model evaluation and uncertainty estimation in flood frequency estimation by continuous simulation: Skalka catchment, Czech Republic. Water Resour. Res. 2009, 45, 45. [Google Scholar] [CrossRef] [Green Version]
  80. Teweldebrhan, A.T.; Burkhart, J.F.; Schuler, T.V. Parameter uncertainty analysis for an operational hydrological model using residual-based and limits of acceptability approaches. Hydrol. Earth Syst. Sci. 2018, 22, 5021–5039. [Google Scholar] [CrossRef] [Green Version]
  81. Vrugt, J.A.; Beven, K.J. Embracing equifinality with efficiency: Limits of Acceptability sampling using the DREAM(LOA) algorithm. J. Hydrol. 2018, 559, 954–971. [Google Scholar] [CrossRef]
  82. Beven, K.J.; Smith, P.J. Concepts of Information Content and Likelihood in Parameter Calibration for Hydrological Simulation Models. J. Hydrol. Eng. 2015, 20, A4014010. [Google Scholar] [CrossRef]
  83. Bates, B.C.; Campbell, E.P. A Markov Chain Monte Carlo Scheme for parameter estimation and inference in conceptual rainfall-runoff modeling. Water Resour. Res. 2001, 37, 937–947. [Google Scholar] [CrossRef]
  84. Kuczera, G.; Parent, E. Monte Carlo assessment of parameter uncertainty in conceptual catchment models: The Metropolis algorithm. J. Hydrol. 1998, 211, 69–85. [Google Scholar] [CrossRef]
  85. Thiemann, M.; Trosset, M.; Gupta, H.; Sorooshian, S. Bayesian recursive parameter estimation for hydrologic models. Water Resour. Res. 2001, 37, 2521–2535. [Google Scholar] [CrossRef]
  86. Vrugt, J.A.; Ter Braak, C.J.F.; Gupta, H.V.; Robinson, B.A. Equifinality of formal (DREAM) and informal (GLUE) Bayesian approaches in hydrologic modeling? Stoch. Environ. Res. Risk Assess. 2008, 23, 1011–1026. [Google Scholar] [CrossRef] [Green Version]
  87. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  88. Haario, H.; Saksman, E.; Tamminen, J. Adaptive proposal distribution for random walk Metropolis algorithm. Comput. Stat. 1999, 14, 375–395. [Google Scholar] [CrossRef]
  89. Hastings, W.K. Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  90. Viglione, A.; Hosking, J.R.M.; Francesco, L.; Alan, M.; Eric, G.; Olivier, P.; Jose, S.; Chi, N.; Karine, H. Package ‘nsRFA’. Available online: https://cran.r-project.org/web/packages/nsRFA/nsRFA.pdf (accessed on 13 July 2017).
  91. Evin, G.; Thyer, M.; Kavetski, D.; McInerney, D.; Kuczera, G. Comparison of joint versus postprocessor approaches for hydrological uncertainty estimation accounting for error autocorrelation and heteroscedasticity. Water Resour. Res. 2014, 50, 2350–2375. [Google Scholar] [CrossRef]
  92. Smith, T.; Sharma, A.; Marshall, L.; Mehrotra, R.; Sisson, S. Development of a formal likelihood function for improved Bayesian inference of ephemeral catchments. Water Resour. Res. 2010, 46, 46. [Google Scholar] [CrossRef] [Green Version]
  93. Smith, T.; Marshall, L.; Sharma, A. Modeling residual hydrologic errors with Bayesian inference. J. Hydrol. 2015, 528, 29–37. [Google Scholar] [CrossRef]
  94. Doherty, J.; Hunt, R.J. Two statistics for evaluating parameter identifiability and error reduction. J. Hydrol. 2009, 366, 119–127. [Google Scholar] [CrossRef]
  95. Stigter, J.D.; Beck, M.; Molenaar, J. Assessing local structural identifiability for environmental models. Environ. Model. Softw. 2017, 93, 398–408. [Google Scholar] [CrossRef]
  96. Reed, P.; Minsker, B.S.; Goldberg, D.E. Simplifying multiobjective optimization: An automated design methodology for the nondominated sorted genetic algorithm-II. Water Resour. Res. 2003, 39. [Google Scholar] [CrossRef]
  97. Vrugt, J.A.; Gupta, H.V.; Bastidas, L.A.; Bouten, W.; Sorooshian, S. Effective and efficient algorithm for multiobjective optimization of hydrologic models. Water Resour. Res. 2003, 39. [Google Scholar] [CrossRef] [Green Version]
  98. Yassin, F.; Razavi, S.; Wheater, H.; Sapriza-Azuri, G.; Davison, B.; Pietroniro, A. Enhanced identification of a hydrologic model using streamflow and satellite water storage data: A multicriteria sensitivity analysis and optimization approach. Hydrol. Process. 2017, 31, 3320–3333. [Google Scholar] [CrossRef]
  99. Madsen, H. Parameter estimation in distributed hydrological catchment modelling using automatic calibration with multiple objectives. Adv. Water Resour. 2003, 26, 205–216. [Google Scholar] [CrossRef]
  100. Efstratiadis, A.; Koutsoyiannis, D. One decade of multi-objective calibration approaches in hydrological modelling: A review. Hydrol. Sci. J. 2010, 55, 58–78. [Google Scholar] [CrossRef] [Green Version]
  101. Yassin, F.; Razavi, S.; ElShamy, M.; Davison, B.; Sapriza-Azuri, G.; Wheater, H. Representation and improved parameterization of reservoir operation in hydrological and land-surface models. Hydrol. Earth Syst. Sci. 2019, 23, 3735–3764. [Google Scholar] [CrossRef] [Green Version]
  102. Kollat, J.B.; Reed, P.M.; Wagener, T. When are multiobjective calibration trade-offs in hydrologic models meaningful? Water Resour. Res. 2012, 48, 48. [Google Scholar] [CrossRef]
  103. Engeland, K.; Braud, I.; Gottschalk, L.; Leblois, E. Multi-objective regional modelling. J. Hydrol. 2006, 327, 339–351. [Google Scholar] [CrossRef]
  104. Tang, Y.; Marshall, L.; Sharma, A.; Ajami, H. A Bayesian alternative for multi-objective ecohydrological model specification. J. Hydrol. 2018, 556, 25–38. [Google Scholar] [CrossRef]
  105. Isukapalli, S.S.; Roy, A.; Georgopoulos, P.G. Stochastic Response Surface Methods (SRSMs) for Uncertainty Propagation: Application to Environmental and Biological Systems. Risk Anal. 1998, 18, 351–363. [Google Scholar] [CrossRef]
  106. Tran, V.N.; Kim, J. Quantification of predictive uncertainty with a metamodel: Toward more efficient hydrologic simulations. Stoch. Environ. Res. Risk Assess. 2019, 33, 1453–1476. [Google Scholar] [CrossRef]
  107. Shrestha, D.L.; Kayastha, N.; Solomatine, D.P.; Price, R. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method. J. Hydroinform. 2014, 16, 95–113. [Google Scholar] [CrossRef]
  108. Solomatine, D.P.; Shrestha, D.L. A novel method to estimate model uncertainty using machine learning techniques. Water Resour. Res. 2009, 45. [Google Scholar] [CrossRef]
  109. Khu, S.-T.; Werner, M.G.F. Reduction of Monte-Carlo simulation runs for uncertainty estimation in hydrological modelling. Hydrol. Earth Syst. Sci. 2003, 7, 680–692. [Google Scholar] [CrossRef] [Green Version]
  110. Zhang, X.; Srinivasan, R.; Van Liew, M. Approximating SWAT Model Using Artificial Neural Network and Support Vector Machine. JAWRA J. Am. Water Resour. Assoc. 2009, 45, 460–474. [Google Scholar] [CrossRef]
  111. Shi, L.; Yang, J.; Zhang, D.; Li, H. Probabilistic collocation method for unconfined flow in heterogeneous media. J. Hydrol. 2009, 365, 4–10. [Google Scholar] [CrossRef]
  112. Kaintura, A.; Dhaene, T.; Spina, D. Review of Polynomial Chaos-Based Methods for Uncertainty Quantification in Modern Integrated Circuits. Electronics 2018, 7, 30. [Google Scholar] [CrossRef] [Green Version]
  113. Smith, R.C. Uncertainty Quantification: Theory, Implementation, and Applications; SIAM: Philadelphia, PA, USA, 2013; ISBN 9781611973211. [Google Scholar]
  114. Razavi, S.; Tolson, B.A.; Burn, D.H. Review of surrogate modeling in water resources. Water Resour. Res. 2012, 48, 48. [Google Scholar] [CrossRef]
  115. Dotto, C.B.S.; Mannina, G.; Kleidorfer, M.; Vezzaro, L.; Henrichs, M.; McCarthy, D.T.; Freni, G.; Rauch, W.; Deletic, A. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling. Water Res. 2012, 46, 2545–2558. [Google Scholar] [CrossRef]
  116. Doherty, J. MICA: Model Independent Markov Chain Monte Carlo Analysis; Watermark Numerical Computing: Brisbane, Australia, 2003. [Google Scholar]
  117. Keating, E.H.; Doherty, J.; Vrugt, J.A.; Kang, Q. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality. Water Resour. Res. 2010, 46, 46. [Google Scholar] [CrossRef] [Green Version]
  118. Thi, P.C.; Ball, J.; Dao, N.H. Uncertainty Estimation Using the Glue and Bayesian Approaches in Flood Estimation: A case Study—Ba River, Vietnam. Water 2018, 10, 1641. [Google Scholar] [CrossRef] [Green Version]
  119. Jin, X.; Xu, C.-Y.; Zhang, Q.; Singh, V.P. Parameter and modeling uncertainty simulated by GLUE and a formal Bayesian method for a conceptual hydrological model. J. Hydrol. 2010, 383, 147–155. [Google Scholar] [CrossRef]
  120. Li, L.; Xia, J.; Xu, C.-Y.; Singh, V.P. Evaluation of the subjective factors of the GLUE method and comparison with the formal Bayesian method in uncertainty assessment of hydrological models. J. Hydrol. 2010, 390, 210–221. [Google Scholar] [CrossRef]
  121. Balin, D.; Lee, H.; Rode, M. Is point uncertain rainfall likely to have a great impact on distributed complex hydrological modeling? Water Resour. Res. 2010, 46. [Google Scholar] [CrossRef]
  122. Hagedorn, R.; Doblas-Reyes, F.J.; Palmer, T.N. The rationale behind the success of multi-model ensembles in seasonal forecasting—I. Basic concept. Tellus A Dyn. Meteorol. Oceanogr. 2005, 57, 219–233. [Google Scholar] [CrossRef]
  123. Moges, E.; Jared, A.; Demissie, Y.; Yan, E.; Mortuza, R.; Mahat, V. Bayesian Augmented L-Moment Approach for Regional Frequency Analysis. In Proceedings of the World Environmental and Water Resources Congress 2018, Minneapolis, MN, USA, 3–7 June 2018; pp. 165–180. [Google Scholar]
  124. Georgakakos, K.; Seo, D.-J.; Gupta, H.; Schaake, J.; Butts, M.B. Towards the characterization of streamflow simulation uncertainty through multimodel ensembles. J. Hydrol. 2004, 298, 222–241. [Google Scholar] [CrossRef]
  125. Arsenault, R.; Gatien, P.; Renaud, B.; Brissette, F.; Martel, J.-L. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation. J. Hydrol. 2015, 529, 754–767. [Google Scholar] [CrossRef]
  126. Raftery, A.E.; Madigan, D.; Hoeting, J.A. Bayesian Model Averaging for Linear Regression Models. J. Am. Stat. Assoc. 1997, 92, 179–191. [Google Scholar] [CrossRef]
  127. Hoeting, J.A.; Madigan, D.; Raftery, A.E.; Volinsky, C.T. Bayesian model averaging: A tutorial (with comments by M. Clyde, David Draper and E. I. George, and a rejoinder by the authors). Stat. Sci. 1999, 14, 382–417. [Google Scholar]
  128. Raftery, A.E.; Gneiting, T.; Balabdaoui, F.; Polakowski, M. Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Mon. Weather. Rev. 2005, 133, 1155–1174. [Google Scholar] [CrossRef] [Green Version]
  129. Duan, Q.; Ajami, N.K.; Gao, X.; Sorooshian, S. Multi-model ensemble hydrologic prediction using Bayesian model averaging. Adv. Water Resour. 2007, 30, 1371–1386. [Google Scholar] [CrossRef] [Green Version]
  130. Gupta, H.V.; Wagener, T.; Liu, Y. Reconciling theory with observations: Elements of a diagnostic approach to model evaluation. Hydrol. Process. 2008, 22, 3802–3813. [Google Scholar] [CrossRef]
  131. Sivapalan, M.; Blöschl, G.; Zhang, L.; Vertessy, R. Downward approach to hydrological prediction. Hydrol. Process. 2003, 17, 2101–2111. [Google Scholar] [CrossRef]
  132. Moges, E.; Demissie, Y.; Li, H. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty. Water Resour. Res. 2016, 52, 2551–2570. [Google Scholar] [CrossRef] [Green Version]
  133. Aronica, G.T.; Candela, A.; Viola, F.; Cannarozzo, M. Influence of Rating Curve Uncertainty on Daily Rainfall-Runoff Model Predictions; IAHS Publ.: Wallingford, UK, 2005. [Google Scholar]
  134. McMillan, H.; Freer, J.; Pappenberger, F.; Krueger, T.; Clark, M. Impacts of uncertain river flow data on rainfall-runoff model calibration and discharge predictions. Hydrol. Process. 2010, 24, 1270–1284. [Google Scholar] [CrossRef]
  135. Sikorska, A.E.; Renard, B. Calibrating a hydrological model in stage space to account for rating curve uncertainties: General framework and key challenges. Adv. Water Resour. 2017, 105, 51–66. [Google Scholar] [CrossRef] [Green Version]
  136. Xu, T.; Valocchi, A.J. A Bayesian approach to improved calibration and prediction of groundwater models with structural error. Water Resour. Res. 2015, 51, 9290–9311. [Google Scholar] [CrossRef]
  137. Yatheendradas, S.; Wagener, T.; Gupta, H.V.; Unkrich, C.; Goodrich, D.E.; Schaffner, M.; Stewart, A. Understanding uncertainty in distributed flash flood forecasting for semiarid regions. Water Resour. Res. 2008, 44, 44. [Google Scholar] [CrossRef]
  138. Reichert, P.; Mieleitner, J. Analyzing input and structural uncertainty of nonlinear dynamic models with stochastic, time-dependent parameters. Water Resour. Res. 2009, 45, 45. [Google Scholar] [CrossRef] [Green Version]
  139. Dotto, C.B.S.; Kleidorfer, M.; Deletic, A.; Rauch, W.; McCarthy, D. Impacts of measured data uncertainty on urban stormwater models. J. Hydrol. 2014, 508, 28–42. [Google Scholar] [CrossRef]
  140. Kleidorfer, M.; Deletic, A.; Fletcher, T.D.; Rauch, W. Impact of input data uncertainties on urban stormwater model parameters. Water Sci. Technol. 2009, 60, 1545–1554. [Google Scholar] [CrossRef]
  141. Andréassian, V.; Perrin, C.; Michel, C.; Usart-Sanchez, I.; Lavabre, J. Impact of imperfect rainfall knowledge on the efficiency and the parameters of watershed models. J. Hydrol. 2001, 250, 206–223. [Google Scholar] [CrossRef]
  142. Montanari, A.; Di Baldassarre, G. Data errors and hydrological modelling: The role of model structure to propagate observation uncertainty. Adv. Water Resour. 2013, 51, 498–504. [Google Scholar] [CrossRef]
  143. Peña-Arancibia, J.L.; Zhang, Y.; Pagendam, D.E.; Viney, N.R.; Lerat, J.; Van Dijk, A.I.J.M.; Vaze, J.; Frost, A.J. Streamflow rating uncertainty: Characterisation and impacts on model calibration and performance. Environ. Model. Softw. 2015, 63, 32–44. [Google Scholar] [CrossRef]
  144. Di Baldassarre, G.; Laio, F.; Montanari, A. Effect of observation errors on the uncertainty of design floods. Phys. Chem. Earth Parts A/B/C 2012, 42–44, 85–90. [Google Scholar] [CrossRef]
  145. Euser, T.; Winsemius, H.C.; Hrachowitz, M.; Fenicia, F.; Uhlenbrook, S.; Savenije, H.H.G. A framework to assess the realism of model structures using hydrological signatures. Hydrol. Earth Syst. Sci. 2013, 17, 1893–1912. [Google Scholar] [CrossRef] [Green Version]
  146. Hingray, B.; Schaefli, B.; Mezghani, A.; Hamdi, Y. Signature-based model calibration for hydrological prediction in mesoscale Alpine catchments. Hydrol. Sci. J. 2010, 55, 1002–1016. [Google Scholar] [CrossRef] [Green Version]
  147. Shafii, M.; Tolson, B.A. Optimizing hydrological consistency by incorporating hydrological signatures into model calibration objectives. Water Resour. Res. 2015, 51, 3796–3814. [Google Scholar] [CrossRef] [Green Version]
  148. Ajami, N.K.; Gupta, H.; Wagener, T.; Sorooshian, S. Calibration of a semi-distributed hydrologic model for streamflow estimation along a river system. J. Hydrol. 2004, 298, 112–135. [Google Scholar] [CrossRef] [Green Version]
  149. Wi, S.; Yang, Y.E.; Steinschneider, S.; Khalil, A.; Brown, C.M. Calibration approaches for distributed hydrologic models in poorly gaged basins: Implication for streamflow projections under climate change. Hydrol. Earth Syst. Sci. 2015, 19, 857–876. [Google Scholar] [CrossRef] [Green Version]
  150. Yang, Y.; Pan, M.; Beck, H.E.; Fisher, C.K.; Beighley, R.E.; Kao, S.-C.; Hong, Y.; Wood, E. In Quest of Calibration Density and Consistency in Hydrologic Modeling: Distributed Parameter Calibration against Streamflow Characteristics. Water Resour. Res. 2019, 55, 7784–7803. [Google Scholar] [CrossRef] [Green Version]
  151. Rientjes, T.H.M.; Muthuwatta, L.P.; Bos, M.G.; Booij, M.J.; Bhatti, H. Multi-variable calibration of a semi-distributed hydrological model using streamflow data and satellite-based evapotranspiration. J. Hydrol. 2013, 505, 276–290. [Google Scholar] [CrossRef]
  152. Stisen, S.; Koch, J.; Sonnenborg, T.O.; Refsgaard, J.C.; Bircher, S.; Ringgaard, R.; Jensen, K.H. Moving beyond run-off calibration-Multivariable optimization of a surface-subsurface-atmosphere model. Hydrol. Process. 2018, 32, 2654–2668. [Google Scholar] [CrossRef]
  153. Seibert, J.; McDonnell, J.J. On the dialog between experimentalist and modeler in catchment hydrology: Use of soft data for multicriteria model calibration. Water Resour. Res. 2002, 38, 23-1–23-14. [Google Scholar] [CrossRef]
  154. Arnold, J.G.; Youssef, M.A.; Yen, H.; White, M.J.; Sheshukov, A.Y.; Sadeghi, A.M.; Moriasi, D.N.; Steiner, J.L.; Amatya, D.M.; Skaggs, R.W.; et al. Hydrological Processes and Model Representation: Impact of Soft Data on Calibration. Trans. ASABE 2015, 58, 1637–1660. [Google Scholar] [CrossRef]
  155. Koster, R.D.; Liu, Q.; Mahanama, S.P.P.; Reichle, R.H. Improved Hydrological Simulation Using SMAP Data: Relative Impacts of Model Calibration and Data Assimilation. J. Hydrometeorol. 2018, 19, 727–741. [Google Scholar] [CrossRef] [PubMed]
  156. Revilla-Romero, B.; Beck, H.E.; Burek, P.; Salamon, P.; De Roo, A.; Thielen, J. Filling the gaps: Calibrating a rainfall-runoff model using satellite-derived surface water extent. Remote. Sens. Environ. 2015, 171, 118–131. [Google Scholar] [CrossRef]
  157. Silvestro, F.; Gabellani, S.; Rudari, R.; Delogu, F.; Laiolo, P.; Boni, G. Uncertainty reduction and parameter estimation of a distributed hydrological model with ground and remote-sensing data. Hydrol. Earth Syst. Sci. 2015, 19, 1727–1751. [Google Scholar] [CrossRef] [Green Version]
  158. Yeh, W.W.-G.; Yoon, Y.S. Aquifer parameter identification with optimum dimension in parameterization. Water Resour. Res. 1981, 17, 664–672. [Google Scholar] [CrossRef]
  159. Clark, M.; Kavetski, D.; Fenicia, F. Pursuing the method of multiple working hypotheses for hydrological modeling. Water Resour. Res. 2011, 47. [Google Scholar] [CrossRef] [Green Version]
  160. Schoups, G.; Vrugt, J.A. A formal likelihood function for parameter and predictive inference of hydrologic models with correlated, heteroscedastic, and non-Gaussian errors. Water Resour. Res. 2010, 46. [Google Scholar] [CrossRef] [Green Version]
  161. Jehn, F.U.; Chamorro, A.; Houska, T.; Breuer, L. Trade-offs between parameter constraints and model realism: A case study. Sci. Rep. 2019, 9, 10729. [Google Scholar] [CrossRef] [Green Version]
  162. Lindenschmidt, K.-E. The effect of complexity on parameter sensitivity and model uncertainty in river water quality modelling. Ecol. Model. 2006, 190, 72–86. [Google Scholar] [CrossRef]
  163. Snowling, S.; Kramer, J.R. Evaluating modelling uncertainty for model selection. Ecol. Model. 2001, 138, 17–30. [Google Scholar] [CrossRef]
  164. Fenicia, F.; Savenije, H.H.G.; Matgen, P.; Pfister, L. Understanding catchment behavior through stepwise model concept improvement. Water Resour. Res. 2008, 44, 44. [Google Scholar] [CrossRef] [Green Version]
  165. Willems, P. Parsimonious rainfall–runoff model construction supported by time series processing and validation of hydrological extremes—Part 1: Step-wise model-structure identification and calibration approach. J. Hydrol. 2014, 510, 578–590. [Google Scholar] [CrossRef]
  166. Beven, K.J. Uniqueness of place and process representations in hydrological modelling. Hydrol. Earth Syst. Sci. 2000, 4, 203–213. [Google Scholar] [CrossRef] [Green Version]
  167. Gupta, H.V.; Clark, M.P.; Vrugt, J.A.; Abramowitz, G.; Ye, M. Towards a comprehensive assessment of model structural adequacy. Water Resour. Res. 2012, 48. [Google Scholar] [CrossRef]
  168. Pande, S. Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory. Water Resour. Res. 2013, 49, 5631–5657. [Google Scholar] [CrossRef] [Green Version]
  169. Pande, S. Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications. Water Resour. Res. 2013, 49, 5658–5673. [Google Scholar] [CrossRef]
  170. Martinez, G.F.; Gupta, H.V. Toward improved identification of hydrological models: A diagnostic evaluation of the “abcd” monthly water balance model for the conterminous United States. Water Resour. Res. 2010, 46, 46. [Google Scholar] [CrossRef]
  171. Yilmaz, K.K.; Gupta, H.V.; Wagener, T. A process-based diagnostic approach to model evaluation: Application to the NWS distributed hydrologic model. Water Resour. Res. 2008, 44, 44. [Google Scholar] [CrossRef] [Green Version]
  172. Clark, M.P.; Rupp, D.E.; Woods, R.; Van Meerveld, I.; Peters, N.E.; Freer, J. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale. Hydrol. Process. 2009, 23, 311–319. [Google Scholar] [CrossRef]
  173. McMillan, H.K. Linking hydrologic signatures to hydrologic processes: A review. Hydrol. Process. 2020, 34, 1393–1409. [Google Scholar] [CrossRef]
  174. Razavi, T.; Coulibaly, P. Streamflow Prediction in Ungauged Basins: Review of Regionalization Methods. J. Hydrol. Eng. 2013, 18, 958–975. [Google Scholar] [CrossRef]
  175. Wagener, T.; Montanari, A. Convergence of approaches toward reducing uncertainty in predictions in ungauged basins. Water Resour. Res. 2011, 47. [Google Scholar] [CrossRef] [Green Version]
  176. Wagener, T.; Wheater, H.S. Parameter estimation and regionalization for continuous rainfall-runoff models including uncertainty. J. Hydrol. 2006, 320, 132–154. [Google Scholar] [CrossRef]
  177. Arsenault, R.; Brissette, F.P. Continuous streamflow prediction in ungauged basins: The effects of equifinality and parameter set selection on uncertainty in regionalization approaches. Water Resour. Res. 2014, 50, 6135–6153. [Google Scholar] [CrossRef]
  178. Hundecha, Y.; Bárdossy, A. Modeling of the effect of land use changes on the runoff generation of a river basin through parameter regionalization of a watershed model. J. Hydrol. 2004, 292, 281–295. [Google Scholar] [CrossRef]
  179. Yadav, M.; Wagener, T.; Gupta, H. Regionalization of constraints on expected watershed response behavior for improved predictions in ungauged basins. Adv. Water Resour. 2007, 30, 1756–1774. [Google Scholar] [CrossRef]
  180. Zhang, Z.; Wagener, T.; Reed, P.; Bhushan, R. Reducing uncertainty in predictions in ungauged basins by combining hydrologic indices regionalization and multiobjective optimization. Water Resour. Res. 2008, 44, 44. [Google Scholar] [CrossRef]
  181. Bulygina, N.; McIntyre, N.; Wheater, H.S. Bayesian conditioning of a rainfall-runoff model for predicting flows in ungauged catchments and under land use changes. Water Resour. Res. 2011, 47. [Google Scholar] [CrossRef] [Green Version]
  182. Bulygina, N.; McIntyre, N.; Wheater, H. Conditioning rainfall-runoff model parameters for ungauged catchments and land management impacts analysis. Hydrol. Earth Syst. Sci. 2009, 13, 893–904. [Google Scholar] [CrossRef] [Green Version]
  183. Zadeh, L.A. Generalized theory of uncertainty (GTU)—principal concepts and ideas. Comput. Stat. Data Anal. 2006, 51, 15–46. [Google Scholar] [CrossRef]
  184. Liu, F.F.; Li, Y.P.; Huang, G.; Cui, L.; Liu, J. Assessing Uncertainty in Hydrological Processes Using a Fuzzy Vertex Simulation Method. J. Hydrol. Eng. 2016, 21, 05016002. [Google Scholar] [CrossRef]
  185. Kambalimath, S.; Deka, P.C. A basic review of fuzzy logic applications in hydrology and water resources. Appl. Water Sci. 2020, 10, 1–14. [Google Scholar] [CrossRef]
  186. Nearing, G.S.; Tian, Y.; Gupta, H.V.; Clark, M.P.; Harrison, K.W.; Weijs, S.V. A philosophical basis for hydrological uncertainty. Hydrol. Sci. J. 2016, 61, 1666–1678. [Google Scholar] [CrossRef] [Green Version]
  187. Nearing, G.S.; Ruddell, B.L.; Bennett, A.R.; Prieto, C.; Gupta, H.V. Does Information Theory Provide a New Paradigm for Earth Science? Hypothesis Testing. Water Resour. Res. 2020, 56, 56. [Google Scholar] [CrossRef]
  188. Nearing, G.S.; Gupta, H.V. The quantity and quality of information in hydrologic models. Water Resour. Res. 2015, 51, 524–538. [Google Scholar] [CrossRef]
  189. Larsen, M.; Christensen, J.H.; Drews, M.; Butts, M.; Refsgaard, J.C. Local control on precipitation in a fully coupled climate-hydrology model. Sci. Rep. 2016, 6, 22927. [Google Scholar] [CrossRef] [Green Version]
  190. Wagner, S.; Fersch, B.; Yuan, F.; Yu, Z.; Kunstmann, H. Fully coupled atmospheric-hydrological modeling at regional and long-term scales: Development, application, and analysis of WRF-HMS. Water Resour. Res. 2016, 52, 3187–3211. [Google Scholar] [CrossRef] [Green Version]
  191. Moges, E.; Demissie, Y.; Li, H. Uncertainty propagation in coupled hydrological models using winding stairs and null-space Monte Carlo methods. J. Hydrol. 2020, 589, 125341. [Google Scholar] [CrossRef]
  192. Ye, M. Uncertainty—AGU Hydrology Section Newsletter 2018. Available online: https://connect.agu.org/hydrology/newsletter/ (accessed on 26 November 2020).
  193. Razavi, S. Uncertainty—AGU Hydrology Section Newsletter 2019. Available online: https://connect.agu.org/hydrology/newsletter/ (accessed on 26 November 2020).
  194. Wine, M.L. Under non-stationarity securitization contributes to uncertainty and Tragedy of the Commons. J. Hydrol. 2019, 568, 716–721. [Google Scholar] [CrossRef]
  195. Wine, M.L. Comment on Ben Yona et al. (2020): Intra-annual dynamics—always fascinating, occasionally essential. J. Hydrol. 2020, 588, 125058. [Google Scholar] [CrossRef]
  196. Zeitoun, M.; Mirumachi, N.; Warner, J.F.; Kirkegaard, M.; Cascão, A. Analysis for water conflict transformation. Water Int. 2019, 45, 365–384. [Google Scholar] [CrossRef]
  197. Addor, N.; Newman, A.; Mizukami, N.; Clark, M.P. The CAMELS data set: Catchment attributes and meteorology for large-sample studies. Hydrol. Earth Syst. Sci. 2017, 21, 5293–5313. [Google Scholar] [CrossRef] [Green Version]
  198. Duan, Q.; Schaake, J.; Andréassian, V.; Franks, S.; Goteti, G.; Gupta, H.; Gusev, Y.; Habets, F.; Hall, A.; Hay, L.; et al. Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops. J. Hydrol. 2006, 320, 3–17. [Google Scholar] [CrossRef] [Green Version]
  199. Bourgin, F.; Andréassian, V.; Perrin, C.; Oudin, L. Transferring global uncertainty estimates from gauged to ungauged catchments. Hydrol. Earth Syst. Sci. 2015, 19, 2535–2546. [Google Scholar] [CrossRef]
  200. Sampson, C.C.; Fewtrell, T.J.; Oloughlin, F.; Pappenberger, F.; Bates, P.B.; Freer, J.E.; Cloke, H.L. The impact of uncertain precipitation data on insurance loss estimates using a flood catastrophe model. Hydrol. Earth Syst. Sci. 2014, 18, 2305–2324. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Different sources of hydrological model uncertainty. The arrows indicate the direction of interaction of the uncertainty sources. The center rectangle demonstrates different structures, hypotheses, and equations as examples of structural uncertainty. The top and left rectangles indicate input data distributions as sources of input forcing and calibration data uncertainties. The bottom rectangle shows parameter uncertainty using parameter distributions. The right-side sketch shows the resultant predictive uncertainty in a hydrograph as upper and lower predictive bands.
Figure 2. Summary of sources of hydrological model uncertainty and broad techniques used to deal with them. The numbers next to the methods indicate the papers reviewed in Section 3.
Figure 3. The different sources of hydrological model uncertainty and examples of their causes.
Figure 4. (a) The span of model complexity with respect to spatial resolution and process complexity; (b) the bias–variance tradeoff curve relating model complexity and uncertainty. The green curve indicates the decrease in bias (model inaccuracy) as model complexity increases, while the purple curve indicates the increase in variance (parameter uncertainty) as model complexity increases. The red curve indicates the net effect of bias and variance.
Figure 5. Stages of UA in ungauged basins: (a) model-dependent and (b) model-independent approaches.
Table 1. Summary of uncertainty analysis methods, sources of uncertainty, and color-coded broader methodological categories.
| No. | UA Methods Reviewed | Applicable Uncertainty Source | Method Classes | Web Page to Download the Software Tool | Reference Application |
|-----|---------------------|-------------------------------|----------------|----------------------------------------|-----------------------|
| 1 | Monte Carlo | Parameter/input/structural uncertainty | Monte Carlo | http://www.uncertain-future.org.uk/?page_id=131 | [57] |
| 2 | GLUE | Parameter/structural uncertainty | Monte Carlo and behavioral threshold | http://www.uncertain-future.org.uk/?page_id=131 | [58] |
| 3 | FUSE | Structural uncertainty | Monte Carlo | https://github.com/cvitolo/fuse | [59] |
| 4 | Multi-objective | Parameter uncertainty | Multi-objective optimization | http://borgmoea.org/; https://faculty.sites.uci.edu/jasper/software/; http://moeaframework.org/index.html | [60,61,62] |
| 5 | Machine learning | Predictive uncertainty | Machine learning | https://scikit-learn.org/stable/ | [63] |
| 6 | PEST/UCODE | Parameter/predictive uncertainty | Least-squares analysis | http://www.pesthomepage.org/; https://igwmc.mines.edu/ucode-2/ | [64,65,66] |
| 7 | Polynomial chaos expansion | Predictive/parameter uncertainty | Polynomial chaos expansion | https://www.uqlab.com/features; http://muq.mit.edu/; https://pypi.org/project/UQToolbox/; https://github.com/jonathf/chaospy | [67,68,69] |
| 8 | Ensemble averaging | Structural uncertainty | Multi-models | | [70] |
| 9 | BMA | Structural uncertainty | Multi-models plus Bayesian statistics | https://faculty.sites.uci.edu/jasper/software | [71] |
| 10 | HME | Structural uncertainty | Multi-models plus Bayesian statistics | https://faculty.sites.uci.edu/jasper/software | [72] |
| 11 | DREAM | Parameter/input uncertainty | Bayesian statistics | https://faculty.sites.uci.edu/jasper/software/ | [18] |
| 12 | BATEA and IBUNE | Input/structure/parameter uncertainty | Bayesian statistics | https://faculty.sites.uci.edu/jasper/software/ | [73,74] |