Similar documents
20 similar documents retrieved (search time: 718 ms)
1.
Although uncertainty about structures of environmental models (conceptual uncertainty) is often acknowledged to be the main source of uncertainty in model predictions, it is rarely considered in environmental modelling. Rather, formal uncertainty analyses have traditionally focused on model parameters and input data as the principal source of uncertainty in model predictions. The traditional approach to model uncertainty analysis, which considers only a single conceptual model, may fail to adequately sample the relevant space of plausible conceptual models. As such, it is prone to modelling bias and underestimation of predictive uncertainty.

2.
How much data is needed for calibration of a hydrological catchment model? In this paper we address this question by evaluating the information contained in different subsets of discharge and groundwater time series for multi-objective calibration of a conceptual hydrological model within the framework of an uncertainty analysis. The study site was a 5.6-km² catchment within the Forsmark research site in central Sweden along the Baltic coast. Daily time series data were available for discharge and several groundwater wells within the catchment for a continuous 1065-day period. The hydrological model was a site-specific modification of the conceptual HBV model. The uncertainty analyses were based on a selective Monte Carlo procedure. Thirteen subsets of the complete time series data were investigated with the idea that these represent realistic intermittent sampling strategies. Data subsets included split-samples and various combinations of weekly, monthly, and quarterly fixed-interval subsets, as well as a 53-day 'informed observer' subset that utilized once-per-month samples except during March and April (the months containing large and often dominant snowmelt events), when sampling was once per week. Several of these subsets, including that of the informed observer, provided very similar constraints on model calibration and parameter identification as the full data record, in terms of credibility bands on simulated time series, posterior parameter distributions, and performance indices calculated against the full dataset. This result suggests that hydrological sampling designs can, at least in some cases, be optimized. Copyright © 2009 John Wiley & Sons, Ltd.
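The general idea of comparing how different observation subsets constrain a model can be sketched as follows: sample parameter sets by Monte Carlo, score each against the full record and against a sparser subset, and compare the retained "behavioural" parameter ranges. The one-parameter recession model, acceptance threshold and subset below are hypothetical stand-ins, not the HBV setup or Forsmark data of the study.

```python
# Hedged sketch (not the authors' HBV setup): compare how well a sparse
# observation subset constrains a parameter relative to the full record,
# using a toy one-parameter recession model and an ad hoc behavioural threshold.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(365)
true_k = 0.03
q_obs = 10.0 * np.exp(-true_k * t) + rng.normal(0, 0.1, t.size)  # synthetic "observations"

def simulate(k):
    return 10.0 * np.exp(-k * t)

def rmse(sim, obs, idx):
    return np.sqrt(np.mean((sim[idx] - obs[idx]) ** 2))

k_samples = rng.uniform(0.001, 0.1, 5000)        # Monte Carlo parameter sample
full_idx = np.arange(t.size)                     # full daily record
weekly_idx = np.arange(0, t.size, 7)             # hypothetical weekly subset

for label, idx in [("full record", full_idx), ("weekly subset", weekly_idx)]:
    scores = np.array([rmse(simulate(k), q_obs, idx) for k in k_samples])
    behavioural = k_samples[scores < 0.15]       # ad hoc acceptance threshold
    print(f"{label}: {behavioural.size} behavioural sets, "
          f"k in [{behavioural.min():.4f}, {behavioural.max():.4f}]")
```

If the weekly subset yields nearly the same behavioural range as the full record, the sparser sampling carries essentially the same calibration information, which is the kind of comparison the abstract describes.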

3.
The uncertainties associated with atmosphere-ocean General Circulation Models (GCMs) and hydrologic models are assessed by means of multi-modelling, using the statistically downscaled outputs from eight GCM simulations and two emission scenarios. The statistically downscaled atmospheric forcing is used to drive four hydrologic models, three lumped and one distributed, of differing complexity: the Sacramento Soil Moisture Accounting (SAC-SMA) model, the Conceptual HYdrologic MODel (HYMOD), the Thornthwaite-Mather model (TM) and the Precipitation Runoff Modelling System (PRMS). The models are calibrated with three objective functions to create more plausible models for the study. The hydrologic model simulations are then combined using the Bayesian Model Averaging (BMA) method according to the performance of each model in the observed period and the total variance of the models. The study is conducted over the rainfall-dominated Tualatin River Basin (TRB) in Oregon, USA. This study shows that the hydrologic model uncertainty is considerably smaller than GCM uncertainty, except during the dry season, suggesting that hydrologic model selection and combination are critical when assessing the hydrologic impact of climate change. The implementation of BMA in analysing the ensemble results is found to be useful for integrating the projected runoff estimates from the different models, while enabling assessment of model structural uncertainty. Copyright © 2011 John Wiley & Sons, Ltd.
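The weighting step of a BMA-style combination can be sketched as below. This is a minimal illustration with made-up runoff values and a simple Gaussian error likelihood, not the exact weighting scheme or data of the study; the predictive variance is split into a between-model and a within-model part.

```python
# Hedged sketch of a BMA-style combination: weight ensemble members by a
# Gaussian likelihood of their errors over an observation period, then form
# the weighted mean and the predictive variance (between- plus within-model).
import numpy as np

obs = np.array([3.1, 2.8, 4.0, 5.2, 4.4])            # hypothetical observed runoff
sims = np.array([[3.0, 2.9, 3.8, 5.0, 4.6],          # model A
                 [2.5, 2.2, 3.1, 4.1, 3.6],          # model B
                 [3.4, 3.0, 4.3, 5.6, 4.9]])         # model C

sigma2 = np.array([np.var(obs - s) for s in sims])    # per-model error variance
loglik = np.array([-0.5 * np.sum((obs - s) ** 2 / v + np.log(2 * np.pi * v))
                   for s, v in zip(sims, sigma2)])
w = np.exp(loglik - loglik.max())
w /= w.sum()                                          # BMA weights sum to 1

bma_mean = w @ sims
between = w @ (sims - bma_mean) ** 2                  # spread among model means
within = w @ np.outer(sigma2, np.ones(obs.size))      # weighted within-model variance
bma_var = between + within
print("weights:", np.round(w, 3))
print("BMA mean:", np.round(bma_mean, 2), " BMA std:", np.round(np.sqrt(bma_var), 2))
```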

4.
Different sets of parameters and conceptualizations of a basin can give equally good results in terms of predefined objective functions. Therefore, a need exists to tackle equifinality and quantify the uncertainty bands of a model. In this paper we use the concepts of equifinality, identifiability and uncertainty to propose a simple method aimed at constraining the equifinal parameters, reducing the uncertainty bands of model outputs, and obtaining physically possible and reasonable models. Additionally, the uncertainty of equifinal solutions is quantified to estimate how much output uncertainty can be reduced when most of the equifinal solutions of a model can be discarded. As a case study, a conceptual model of the Chillán basin in Chile is developed. From the study it is concluded that identifiability analysis makes it possible to constrain equifinal solutions, reduce uncertainty and obtain realistic models, resulting in a framework that can be recommended to practitioners, especially because of the simplicity of the method.

5.
Distributed hydrological models can make predictions with much finer spatial resolution than the supporting field data. They will, however, usually not have a predictive capability at model grid scale because of limitations in data availability and uncertainty in model conceptualizations. In previous publications, we have introduced the Representative Elementary Scale (RES) concept as the theoretical minimum scale at which a model with a given conceptualization has the potential to achieve a given acceptable predictive accuracy. The RES concept has similarities to the 25-year-old Representative Elementary Area (REA) concept, but it differs in the sense that while the REA addresses similarity between subcatchments by sampling within the catchment, RES focuses on the effects of data or conceptualization uncertainty through Monte Carlo simulations followed by a scale analysis. In the present paper, we extend and generalize the RES concept to a framework for assessing the minimum scale of potential predictability of a distributed model, applicable also to analyses of different model structures and data availabilities. We present three examples of RES analyses and discuss our findings in relation to Beven's alternative blueprint and environmental modelling philosophy from 2002. While Beven addresses model structural and parameter uncertainties, he does not provide a thorough methodology for assessing the extent to which model predictions for unmeasured variables can attain meaningful predictive accuracy, or whether this is impossible because of limitations in data and models. This shortcoming is addressed by the RES framework through its analysis of the relationship between the aggregation scale of model results and prediction uncertainties, and through its consideration of how alternative model structures and data availabilities affect the results. We suggest that RES analysis should be applied in all modelling studies that aim to use simulation results at spatial scales smaller than the support scale of the calibration data.
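A minimal sketch of the scale analysis behind such a framework, under the assumption that it amounts to aggregating a Monte Carlo ensemble of gridded predictions over successively larger blocks and tracking how the ensemble spread declines with aggregation scale; the grid, ensemble size and noise model below are synthetic placeholders, not one of the paper's three examples.

```python
# Hedged sketch of an RES-style scale analysis: generate a Monte Carlo ensemble
# of gridded predictions, aggregate over blocks of increasing size, and track
# how the ensemble spread falls with aggregation scale.
import numpy as np

rng = np.random.default_rng(1)
n_real, n = 100, 64                                    # ensemble size, grid side length
truth = 5.0 + np.sin(np.linspace(0, 3 * np.pi, n))[:, None] * np.ones((n, n))
ensemble = truth + rng.normal(0, 1.0, (n_real, n, n))  # hypothetical prediction ensemble

def block_mean(field, b):
    """Average an (n, n) field over non-overlapping b x b blocks."""
    m = field.shape[0] // b
    return field[:m * b, :m * b].reshape(m, b, m, b).mean(axis=(1, 3))

for b in [1, 2, 4, 8, 16, 32]:
    agg = np.stack([block_mean(f, b) for f in ensemble])   # (n_real, m, m)
    spread = agg.std(axis=0).mean()                         # mean ensemble std per block
    print(f"aggregation scale {b:2d} cells: mean ensemble std = {spread:.3f}")
```

With independent cell-scale errors the spread falls roughly with the square root of the number of aggregated cells; the minimum scale of potential predictability would then be read off as the smallest aggregation scale at which the spread drops below the acceptable accuracy.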

6.
The response of a landslide near Barcelonnette (southeast France) to climatic factors was simulated with three slope stability models: a fully empirical gross-precipitation threshold, a semi-empirical threshold model for net precipitation, and a fully conceptual slope stability model. The three models performed similarly in reproducing the present-day temporal pattern of landslide reactivation, using dendrogeomorphological information as test data. The semi-empirical and conceptual models were found to be overparameterized, because more than one parameter setting matching the test data was identified. In the case of the conceptual model, this resulted in strongly divergent scenarios of future landslide activity when downscaled climate scenarios were used as inputs to the model. The uncertainty of the landslide scenarios obtained with the semi-empirical model was much lower. In addition, the simulation of strongly different scenarios by the fully empirical threshold was attributed to its incomplete representation of the site-specific landslide reactivation mechanism. It is concluded that the semi-empirical model constitutes the best compromise between conceptual representation and model robustness. Copyright © 2000 John Wiley & Sons, Ltd.

7.
Previous work has shown that streamflow response during baseflow conditions is a function of storage, but also that this functional relationship varies among seasons and catchments. Traditionally, hydrological models incorporate conceptual groundwater models consisting of linear or non-linear storage–outflow functions. Identifying the right model structure and model parameterization, however, is challenging. The aim of this paper is to systematically test different model structures in a set of catchments where different aquifer types govern baseflow generation processes. Nine different two-parameter conceptual groundwater models are applied with multi-objective calibration to transform two different groundwater recharge series, derived from a soil–atmosphere–vegetation transfer model, into baseflow separated from streamflow data. The relative performance differences of the model structures make it possible to systematically improve the understanding of baseflow generation processes and to identify the most appropriate model structures for different aquifer types. We found both versatile and aquifer-specific optimal model structures and elucidate the role of interflow, flow paths, recharge regimes and partially contributing storages. Aquifer-specific recommendations for storage models were found for fractured and karstic aquifers, whereas large storage capacities blur the identification of superior model structures for complex and porous aquifers. A model performance matrix is presented, which highlights the joint effects of different recharge inputs, calibration criteria, model structures and aquifer types. The matrix provides guidance for improving groundwater model structures towards a representation of the dominant baseflow generation processes of specific aquifer types. Copyright © 2014 John Wiley & Sons, Ltd.
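As a minimal illustration of the kind of two-parameter storage–outflow structure being compared, the sketch below runs a linear reservoir (Q = kS) and a power-law reservoir (Q = kS^b) over a hypothetical recharge series; it is not one of the nine calibrated structures from the study, and the recharge, parameter values and initial storage are invented.

```python
# Hedged sketch of two simple conceptual storage-outflow structures
# (linear reservoir Q = k*S and power-law reservoir Q = k*S**b) driven by
# a synthetic recharge series.
import numpy as np

def simulate_baseflow(recharge, k, b=1.0, s0=50.0, dt=1.0):
    """Explicit daily water balance dS/dt = R - k*S**b; returns the outflow series."""
    s, q = s0, []
    for r in recharge:
        out = k * s ** b
        s = max(s + (r - out) * dt, 0.0)
        q.append(out)
    return np.array(q)

rng = np.random.default_rng(0)
recharge = np.maximum(rng.normal(1.0, 2.0, 365), 0.0)    # hypothetical daily recharge

q_linear = simulate_baseflow(recharge, k=0.02, b=1.0)     # linear storage
q_power = simulate_baseflow(recharge, k=0.002, b=1.5)     # non-linear storage
print("linear reservoir:    mean baseflow %.2f" % q_linear.mean())
print("power-law reservoir: mean baseflow %.2f" % q_power.mean())
```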

8.
In this study, uncertainty in model input data (precipitation) and parameters is propagated through a physically based, spatially distributed hydrological model based on the MIKE SHE code. Precipitation uncertainty is accounted for using an ensemble of daily rainfall fields that incorporate four different sources of uncertainty, whereas parameter uncertainty is considered using Latin hypercube sampling. Model predictive uncertainty is assessed for multiple simulated hydrological variables (discharge, groundwater head, evapotranspiration, and soil moisture). Utilizing an extensive set of observational data, effective observational uncertainties for each hydrological variable are assessed. Considering not only model predictive uncertainty but also effective observational uncertainty leads to a notable increase in the number of instances for which model simulations and observations are in good agreement (e.g., 47% vs. 91% for discharge and 0% vs. 98% for soil moisture). Effective observational uncertainty is in several cases larger than model predictive uncertainty. We conclude that the use of precipitation uncertainty with a realistic spatio-temporal correlation structure, analyses of multiple variables with different spatial support, and the consideration of observational uncertainty are crucial for adequately evaluating the performance of physically based, spatially distributed hydrological models.
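The parameter-sampling step can be sketched as below: a basic Latin hypercube sampler that draws one value from each equal-probability stratum of every parameter and pairs the strata randomly across parameters. The parameter names and ranges are hypothetical, not those of the MIKE SHE setup.

```python
# Hedged sketch of Latin hypercube sampling: one stratified draw per
# equal-probability bin for each parameter, with bins randomly paired
# across parameters.
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """bounds: dict of name -> (low, high); returns dict of name -> samples."""
    samples = {}
    for name, (lo, hi) in bounds.items():
        edges = np.linspace(0.0, 1.0, n_samples + 1)
        u = rng.uniform(edges[:-1], edges[1:])    # one draw per stratum
        rng.shuffle(u)                            # decorrelate across parameters
        samples[name] = lo + u * (hi - lo)
    return samples

rng = np.random.default_rng(7)
bounds = {"hydraulic_conductivity": (1e-6, 1e-4),   # m/s, hypothetical range
          "root_depth": (0.2, 1.5),                 # m
          "drainage_constant": (1e-8, 1e-6)}        # 1/s
params = latin_hypercube(100, bounds, rng)
print({k: (round(v.min(), 8), round(v.max(), 8)) for k, v in params.items()})
```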

9.
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose, falling into two broad categories: Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992), and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques mentioned above (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
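The core arithmetic of criterion-based averaging can be sketched as follows: information-criterion values for the alternative conceptual models are converted into weights via w_i = exp(-ΔIC_i/2) / Σ_j exp(-ΔIC_j/2), and predictions are then averaged with those weights. The criterion values and per-model predictions below are hypothetical, and the same recipe applies whether the criterion is AIC, BIC or KIC.

```python
# Hedged sketch: convert information-criterion values for alternative conceptual
# models into model-averaging weights and an averaged prediction.
import numpy as np

ic = {"model_A": 212.4, "model_B": 215.1, "model_C": 230.8}   # hypothetical IC values
names = list(ic)
vals = np.array([ic[n] for n in names])
delta = vals - vals.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                                   # weights sum to 1

for n, wi in zip(names, w):
    print(f"{n}: weight = {wi:.3f}")

# Averaged prediction and between-model variance from hypothetical per-model heads
pred = np.array([101.2, 100.7, 99.5])          # predicted head (m) from each model
avg = w @ pred
print("model-averaged prediction: %.2f m, between-model std: %.2f m"
      % (avg, np.sqrt(w @ (pred - avg) ** 2)))
```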

10.
Robert L. Wilby, Hydrological Processes, 2005, 19(16): 3201-3219
Despite their acknowledged limitations, lumped conceptual models continue to be used widely for climate-change impact assessments. Therefore, it is important to understand the relative magnitude of uncertainties in water resource projections arising from the choice of model calibration period, model structure, and non-uniqueness of model parameter sets. In addition, external sources of uncertainty linked to choice of emission scenario, climate model ensemble member, downscaling technique(s), and so on, should be acknowledged. To this end, the CATCHMOD conceptual water balance model was used to project changes in daily flows for the River Thames at Kingston using parameter sets derived from different subsets of training data, including the full record. Monte Carlo sampling was also used to explore parameter stability and identifiability in the context of historic climate variability. Parameters reflecting rainfall acceptance at the soil surface in simpler model structures were found to be highly sensitive to the training period, implying that climatic variability does lead to variability in the hydrologic behaviour of the Thames basin. Non-uniqueness of parameters for more complex model structures results in relatively small variations in projected annual mean flow quantiles for different training periods compared with the choice of emission scenario. However, this was not the case for subannual flow statistics, where uncertainty in flow changes due to equifinality was higher in winter than summer, and comparable in magnitude to the uncertainty of the emission scenario. Therefore, it is recommended that climate-change impact assessments using conceptual water balance models should routinely undertake sensitivity analyses to quantify uncertainties due to parameter instability, identifiability and non-uniqueness. Copyright © 2005 John Wiley & Sons, Ltd.

11.
Alternative water supply, storage, and treatment (AWSST) systems, which utilize aquifers to supply, store, and naturally treat water, are increasingly being implemented globally to address water scarcity and safety. The failure of some AWSST systems to meet water quality expectations was caused by conceptual model error, in which local hydrogeological conditions were less favorable than recognized or considered during project feasibility assessments, economic analyses, and design. More successful implementation of AWSST projects requires that conceptual model error be explicitly and rigorously addressed. Recommended approaches to addressing conceptual model uncertainty include more detailed aquifer characterization, recognition of a wide suite of possible alternative conceptual models, and then screening the models as to whether or not they are plausible and relevant in terms of materially impacting predictive results. Subjective professional judgement remains the basis for assigning probabilities to relevant conceptual models (contingencies).

12.
Hypothesis testing about catchment functioning with conceptual hydrological models is affected by uncertainties in the model representation of reality as well as in the observed data used to drive and evaluate the model. We formulated a learning framework to investigate the role of observational uncertainties in hypothesis testing using conceptual models and applied it to the relatively data-scarce tropical Sarapiqui catchment in Costa Rica. Observational uncertainties were accounted for throughout the framework, which incorporated different choices of model structures to test process hypotheses, analyses of parametric uncertainties and effects of likelihood choice, a posterior performance analysis and (iteratively) the formulation of new hypotheses. Estimated uncertainties in precipitation and discharge were linked to likely non-linear near-surface runoff generation and the potentially important role of soils in mediating the hydrological response. Some model-structural inadequacies could be identified in the posterior analyses (supporting the need for an explicit soil-moisture routine to match streamflow dynamics), but the available information about the observational uncertainties prevented conclusions about other process representations. The importance of epistemic data errors, the difficulty in quantifying them and their effect on model simulations were illustrated by an inconsistent event with long-term effects. Finally, we discuss the need for new data and new process hypotheses related to deep groundwater losses, and conclude that observational uncertainties need to be accounted for in hypothesis testing to reduce the risk of drawing incorrect conclusions. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Sasmita Sahoo, Hydrological Processes, 2015, 29(5): 671-691
Groundwater modelling has emerged as a powerful tool for developing sustainable management plans for efficient groundwater utilization and protection of this vital resource. This study deals with the development of five hybrid artificial neural network (ANN) models and their critical assessment for simulating spatio-temporal fluctuations of groundwater in an alluvial aquifer system. Unlike past studies, all the relevant input variables having a significant influence on groundwater were considered, and a hybrid ANN technique [ANN-cum-Genetic Algorithm (GA)] was used to simulate groundwater levels at 17 sites over the study area. The parameters of the ANN models were optimized using a GA optimization technique. The predictive ability of the five hybrid ANN models developed for each of the 17 sites was evaluated using six goodness-of-fit criteria and graphical indicators, together with adequate uncertainty analyses. The analysis of the results revealed that the multilayer perceptron Levenberg–Marquardt model is the most efficient in predicting monthly groundwater levels at almost all of the 17 sites, while the radial basis function model is the least efficient. The GA technique was found to be superior to the commonly used trial-and-error method for determining the optimal ANN architecture and internal parameters. Of the goodness-of-fit statistics used in this study, only root-mean-squared error, r² and Nash–Sutcliffe efficiency were found to be powerful and useful in assessing the performance of the ANN models. It can be concluded that the hybrid ANN modelling approach can be effectively used for predicting spatio-temporal fluctuations of groundwater at basin or subbasin scales. Copyright © 2014 John Wiley & Sons, Ltd.
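A compact sketch of the ANN-cum-GA idea is given below: a small genetic algorithm searches over an MLP's hidden-layer size and learning rate, using validation RMSE as the fitness. The data are synthetic, the search ranges and GA settings are hypothetical, and scikit-learn's MLPRegressor stands in for the authors' network implementations.

```python
# Hedged sketch of a GA tuning an ANN: truncation selection plus Gaussian mutation
# over (hidden units, log10 learning rate), scored by validation RMSE.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))                     # hypothetical predictors (rain, ET, ...)
y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + 0.1 * rng.normal(size=300)  # synthetic "levels"
Xtr, Xval, ytr, yval = X[:200], X[200:], y[:200], y[200:]

def fitness(genome):
    hidden, lr = int(round(float(genome[0]))), 10.0 ** genome[1]
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         max_iter=500, random_state=0)
    model.fit(Xtr, ytr)
    return np.sqrt(np.mean((model.predict(Xval) - yval) ** 2))   # validation RMSE

def evolve(pop_size=8, generations=5):
    # genome = [hidden units in 2..20, log10(learning rate) in -4..-1]
    pop = np.column_stack([rng.uniform(2, 20, pop_size), rng.uniform(-4, -1, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]            # truncation selection
        children = parents + rng.normal(0, [1.0, 0.2], parents.shape) # Gaussian mutation
        children[:, 0] = np.clip(children[:, 0], 2, 20)
        children[:, 1] = np.clip(children[:, 1], -4, -1)
        pop = np.vstack([parents, children])
    best = pop[np.argmin([fitness(g) for g in pop])]
    return int(round(float(best[0]))), 10.0 ** best[1]

print("selected (hidden units, learning rate):", evolve())
```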

14.
How can spatially explicit nonlinear regression modelling be used to obtain nonpoint source loading estimates in watersheds with limited information? What is the value of additional monitoring, and where should future data-collection efforts focus? In this study, we address these two frequently asked questions in watershed modelling by implementing Bayesian inference techniques to parameterize SPAtially Referenced Regressions On Watershed attributes (SPARROW), a model that empirically estimates the relation between in-stream measurements of nutrient fluxes and the sources/sinks of nutrients within the watershed. Our case study is the Hamilton Harbour watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. The proposed Bayesian approach explicitly accounts for the uncertainty associated with the existing knowledge of the system and the different types of spatial correlation typically underlying the parameter estimation of watershed models. Informative prior parameter distributions were formulated to overcome the problem of inadequate data quantity and quality, whereas the potential bias introduced by the pertinent assumptions was subsequently examined by quantifying the relative change of the posterior parameter patterns. Our modelling exercise offers the first estimates of export coefficients and delivery rates from the different subcatchments and thus generates testable hypotheses regarding the nutrient export 'hot spots' in the studied watershed. Despite substantial uncertainties characterizing our calibration dataset, ranging from 17% to nearly 400%, we arrived at an uncertainty level for the whole-basin nutrient export estimates of only 36%. Finally, we conduct modelling experiments that evaluate the potential improvement of the model parameter estimates and the decrease of the predictive uncertainty if the uncertainty associated with the current nutrient loading estimates is reduced. Copyright © 2012 John Wiley & Sons, Ltd.

15.
With the rapid growth of the nanotechnology industry, nanomaterials are gradually being released into subsurface environments as an emerging pollutant and have become a growing concern. Simulating the transport of nanomaterials in groundwater is an important approach for investigating and predicting the impact of nanomaterials on subsurface environments. Currently, a number of transport models are used to simulate this process, and the outputs of these models can be inconsistent with each other because of conceptual model uncertainty. However, the performance of different models in simulating nanoparticle transport in groundwater has rarely been assessed in a Bayesian framework, and this is the primary objective of this study. A porous-media column experiment was conducted to observe the transport of titanium dioxide nanoparticles (nano-TiO2). Ten typical transport models that consider different chemical reaction processes were used to simulate the transport of nano-TiO2, and the observed nano-TiO2 breakthrough-curve data were used to calibrate these models. For each transport model, the parameter uncertainty was evaluated using Markov chain Monte Carlo, with the DREAM(ZS) algorithm used to sample the parameter probability space. Moreover, the Bayesian model averaging (BMA) method was used to incorporate the conceptual model uncertainty arising from the different chemical-reaction-based transport models. The results indicate that both the two-site and nonequilibrium sorption models reproduce the retention and transport of nano-TiO2 in porous media well. The linear equilibrium sorption isotherm, first-order degradation, and mobile-immobile models fail to describe nano-TiO2 retention and transport. The BMA method provides more reliable estimates of predictive uncertainty than any single model.
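A minimal sketch of the parameter-uncertainty step: a plain random-walk Metropolis sampler (standing in for DREAM(ZS)) draws the posterior of a single attachment-rate parameter of a toy breakthrough model against synthetic data. The model form, parameter bounds and noise level are illustrative, not those of the column experiment.

```python
# Hedged sketch: Metropolis sampling of the posterior of one attachment-rate
# parameter of a toy breakthrough model under a uniform prior and Gaussian errors.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.1, 10.0, 50)                    # pore volumes

def breakthrough(k_att):
    # toy model: sigmoidal arrival rising to a plateau C/C0 = exp(-k_att * 5.0)
    return np.exp(-k_att * 5.0) / (1.0 + np.exp(-(t - 1.0) / 0.2))

obs = breakthrough(0.15) + rng.normal(0, 0.02, t.size)   # synthetic "observations"
sigma = 0.02

def log_post(k_att):
    if not 0.0 < k_att < 2.0:                     # uniform prior bounds
        return -np.inf
    resid = obs - breakthrough(k_att)
    return -0.5 * np.sum((resid / sigma) ** 2)

chain, k = [], 0.5
lp = log_post(k)
for _ in range(20000):
    k_new = k + rng.normal(0, 0.005)              # random-walk proposal
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:       # Metropolis acceptance rule
        k, lp = k_new, lp_new
    chain.append(k)

burn = np.array(chain[5000:])                     # discard burn-in
print("posterior k_att: mean %.3f, 95%% CI [%.3f, %.3f]"
      % (burn.mean(), *np.quantile(burn, [0.025, 0.975])))
```

Per-model posterior samples obtained this way can then be combined across the alternative transport models with BMA weights, as in the weighting sketch given under item 3.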

16.
Hydrological scientists develop perceptual models of the catchments they study, using field measurements and observations to build an understanding of the dominant processes controlling the hydrological response. However, conceptual and numerical models used to simulate catchment behaviour often fail to take advantage of this knowledge. It is common instead to use a pre-defined model structure which can only be fitted to the catchment via parameter calibration. In this article, we suggest an alternative approach where different sources of field data are used to build a synthesis of dominant hydrological processes and hence provide recommendations for representing those processes in a time-stepping simulation model. Using analysis of precipitation, flow and soil moisture data, recommendations are made for a comprehensive set of modelling decisions, including evapotranspiration (ET) parameterization, vertical drainage threshold and behaviour, depth and water-holding capacity of the active soil zone, unsaturated and saturated zone model architecture, and deep groundwater flow behaviour. The second article in this two-part series implements those recommendations and tests the capability of different model sub-components to represent the observed hydrological processes. Copyright © 2010 John Wiley & Sons, Ltd.

17.
This paper explores the predicted hydrologic responses associated with the compounded error of cascading global circulation model (GCM) uncertainty through hydrologic model uncertainty under climate change. A coupled groundwater and surface-water flow model (GSFLOW) was used within the differential evolution adaptive metropolis (DREAM) uncertainty approach and combined with eight GCMs to investigate uncertainties in hydrologic predictions for three subbasins of varying hydrogeology within the Santiam River basin in Oregon, USA. Predictions of future hydrology in the Santiam River include increases in runoff in the fall and winter months and decreases in runoff in the spring and summer months. One-year peak flows were predicted to increase, whereas 100-year peak flows were predicted to decrease slightly. The predicted 10-year 7-day low flow decreased in two subbasins with little groundwater influence but increased in another subbasin with substantial groundwater influence. Uncertainty in GCMs represented the majority of uncertainty in the analysis, accounting for an average deviation from the median of 66%. The uncertainty associated with the use of GSFLOW produced only an 8% increase in the overall uncertainty of predicted responses compared with GCM uncertainty. This analysis demonstrates the value and limitations of cascading uncertainty from GCM use through uncertainty in the hydrologic model, offers insight into the interpretation and use of uncertainty estimates in water resources analysis, and illustrates the need for a fully nonstationary approach with respect to calibrating hydrologic models and transferring parameters across basins and time for climate change analyses. Copyright © 2012 John Wiley & Sons, Ltd.
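The kind of bookkeeping behind such percentages can be sketched as follows: given a two-way ensemble (GCM by hydrologic parameter set), the spread attributable to each factor is estimated by averaging over the other factor. The runoff numbers below are synthetic stand-ins, not GSFLOW or DREAM output.

```python
# Hedged sketch: partition the spread of a two-way ensemble (GCM x parameter set)
# into a GCM component and a parameter component by averaging over the other factor.
import numpy as np

rng = np.random.default_rng(11)
n_gcm, n_par = 8, 50
gcm_effect = rng.normal(0, 15.0, n_gcm)                 # strong GCM-to-GCM differences
par_effect = rng.normal(0, 4.0, n_par)                  # weaker parameter differences
runoff = (100.0 + gcm_effect[:, None] + par_effect[None, :]
          + rng.normal(0, 1.0, (n_gcm, n_par)))         # synthetic runoff metric

gcm_means = runoff.mean(axis=1)                         # average out parameter uncertainty
par_means = runoff.mean(axis=0)                         # average out GCM uncertainty
median = np.median(runoff)

gcm_dev = np.mean(np.abs(gcm_means - median)) / median * 100
par_dev = np.mean(np.abs(par_means - median)) / median * 100
print(f"mean deviation from median due to GCM choice:       {gcm_dev:.1f}%")
print(f"mean deviation from median due to parameter choice:  {par_dev:.1f}%")
```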

18.
This work examines future flood risk within the context of integrated climate and hydrologic modelling uncertainty. The research questions investigated are (1) whether hydrologic uncertainties are a significant source of uncertainty relative to other sources such as climate variability and change, and (2) whether a statistical characterization of uncertainty from a lumped, conceptual hydrologic model is sufficient to account for hydrologic uncertainties in the modelling process. To investigate these questions, an ensemble of climate simulations is propagated through hydrologic models and then through a reservoir simulation model to delimit the range of flood protection under a wide array of climate conditions. Uncertainty in mean climate changes and internal climate variability are framed using a risk-based methodology and are explored using a stochastic weather generator. To account for hydrologic uncertainty, two hydrologic models are considered: a conceptual, lumped-parameter model and a distributed, physically based model. In the conceptual model, parameter and residual error uncertainties are quantified and propagated through the analysis using a Bayesian modelling framework. The approach is demonstrated in a case study for the Coralville Dam on the Iowa River, where recent, intense flooding has raised questions about potential impacts of climate change on the adequacy of flood protection. Results indicate that the uncertainty surrounding future flood risk from hydrologic modelling and internal climate variability can be of the same order of magnitude as that from climate change. Furthermore, statistical uncertainty in the conceptual hydrological model can capture the primary structural differences that emerge in flood damage estimates between the two hydrologic models. Copyright © 2014 John Wiley & Sons, Ltd.

19.
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (one based on information criteria and one based on GLUE) are discussed. This study shows that the contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has a more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.

20.
Knowledge about saturation and pressure distributions in a reservoir can help in determining an optimal drainage pattern and in deciding on optimal well designs to reduce the risks of blow-outs and damage to production equipment. By analyzing time-lapse PP AVO or time-lapse multicomponent seismic data, it is possible to separate the effects of production-related saturation and pressure changes on seismic data. To be able to utilize information about saturation and pressure distributions in reservoir model building and simulation, information about the uncertainty in the estimates is useful. In this paper we present a method to estimate changes in saturation and pressure from time-lapse multicomponent seismic data using a Bayesian estimation technique. The results of the estimation are probability density functions (pdfs), giving immediate information about both parameter values and uncertainties. Linearized rock-physics models are linked to the changes in saturation and pressure in the prior probability distribution. The relationship between the elastic parameters and the measured seismic data is described in the likelihood model. By assuming Gaussian-distributed prior uncertainties, the posterior distribution of the saturation and pressure changes can be calculated analytically. Results from tests on synthetic seismic data show that this method produces more precise estimates of changes in effective pressure than a similar methodology based only on PP AVO time-lapse seismic data. This indicates that the additional information about S-waves obtained from converted-wave seismic data is useful for obtaining reliable information about the pressure-change distribution.
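The analytic Gaussian update at the heart of such a scheme can be sketched as below: for a linearized forward model d = G m + e with a Gaussian prior on m and Gaussian noise, the posterior mean and covariance follow in closed form. The sensitivity matrix G, the prior and the "data" are placeholder numbers, not a calibrated rock-physics linearization.

```python
# Hedged sketch of the analytic linear-Gaussian update: d = G m + e with
# m ~ N(m0, Cm) and e ~ N(0, Cd) gives a Gaussian posterior with
# C_post = (G^T Cd^-1 G + Cm^-1)^-1 and m_post = C_post (G^T Cd^-1 d + Cm^-1 m0).
import numpy as np

# m = [delta_saturation, delta_pressure]; d = two time-lapse attribute changes
G = np.array([[0.8, 0.3],
              [0.2, 0.9]])                     # hypothetical linearized sensitivities
m0 = np.zeros(2)                               # prior mean: no change
Cm = np.diag([0.05, 4.0])                      # prior variances (saturation^2, MPa^2)
Cd = np.diag([0.01, 0.01])                     # data-error variances
d = np.array([0.12, -0.35])                    # observed attribute changes

Cm_inv, Cd_inv = np.linalg.inv(Cm), np.linalg.inv(Cd)
C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
m_post = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ m0)

print("posterior mean [d_sat, d_p]:", np.round(m_post, 3))
print("posterior std  [d_sat, d_p]:", np.round(np.sqrt(np.diag(C_post)), 3))
```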

