Similar Articles
20 similar articles found (search time: 31 ms)
1.
2.
The estimation of missing rainfall data is an important problem for data analysis and modelling studies in hydrology. This paper develops a Bayesian method to address missing rainfall estimation from runoff measurements based on a pre-calibrated conceptual rainfall–runoff model. The Bayesian method assigns posterior probability of rainfall estimates proportional to the likelihood function of measured runoff flows and prior rainfall information, which is represented by uniform distributions in the absence of rainfall data. The likelihood function of measured runoff can be determined by testing different residual error models in the calibration phase. The application of this method to a French urban catchment indicates that the proposed Bayesian method is able to assess missing rainfall and its uncertainty based only on runoff measurements, which provides an alternative to the reverse model for missing rainfall estimates.
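The core idea of the abstract above — a posterior over a missing rainfall depth proportional to a runoff likelihood times a flat prior — can be sketched with a random-walk Metropolis sampler. The linear-reservoir model, the noise level, and all numbers below are illustrative assumptions, not the paper's pre-calibrated conceptual model:

```python
import math
import random

random.seed(42)

def runoff(rain, k=0.6):
    """Toy linear-reservoir rainfall-runoff model (a stand-in for the
    paper's far more detailed conceptual model)."""
    q, out = 0.0, []
    for r in rain:
        q = k * q + (1 - k) * r
        out.append(q)
    return out

# Synthetic experiment: true rainfall with one "missing" value at t = 2.
true_rain = [5.0, 0.0, 8.0, 2.0, 0.0]
obs_q = [q + random.gauss(0, 0.1) for q in runoff(true_rain)]

def log_post(x):
    """Log posterior of the missing value: Gaussian runoff likelihood
    plus a flat prior on [0, 50] mm (no rainfall data available)."""
    if not 0.0 <= x <= 50.0:
        return -math.inf
    rain = true_rain[:2] + [x] + true_rain[3:]
    sim = runoff(rain)
    return -sum((o - s) ** 2 for o, s in zip(obs_q, sim)) / (2 * 0.1 ** 2)

# Random-walk Metropolis sampling of the missing rainfall depth.
x, samples = 10.0, []
for _ in range(5000):
    prop = x + random.gauss(0, 1.0)
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(x))):
        x = prop
    samples.append(x)

post = samples[1000:]                     # discard burn-in
est = sum(post) / len(post)
print(round(est, 2))                      # posterior mean near the true 8.0 mm
```

The posterior mean recovers the missing depth, and the spread of `post` quantifies its uncertainty from runoff alone.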

3.
Problem complexity for watershed model calibration is heavily dependent on the number of parameters that can be identified during model calibration. This study investigates the use of global sensitivity analysis as a screening tool to reduce the parametric dimensionality of multi-objective hydrological model calibration problems while maximizing the information extracted from hydrological response data. This study shows that by expanding calibration problem formulations beyond traditional, statistical error metrics to also include metrics that capture indices or signatures of hydrological function, it is possible to reduce the complexity of calibration while maintaining high quality model predictions. The sensitivity-guided calibration is demonstrated using the Sacramento Soil Moisture Accounting (SAC-SMA) conceptual rainfall–runoff model of moderate complexity (i.e., up to 14 freely varying parameters). Using both statistical and hydrological metrics, optimization results demonstrate that parameters controlling at least 20% of the model output variance (through individual effects and interactions) should be included in the calibration process. This threshold generally yields 30–40% reductions in the number of SAC-SMA parameters requiring calibration – setting the others to a priori values – while maintaining high quality predictions. Two parameters are recommended to be calibrated in all cases (percent impervious area and lower zone tension water storage), three parameters are needed in drier watersheds (additional impervious area, riparian zone vegetation, and percent of percolation going to tension storage), and the lower zone parameters are crucial unless the watershed is very dry. Overall, this study demonstrates that a coupled, multi-objective sensitivity and calibration analysis better captures differences between watersheds during model calibration and serves to maximize the value of available watershed response time series. 
These contributions are particularly important given the ongoing development of more complex integrated models, which will require new tools to address the growing discrepancy between the information content of hydrological data and the number of model parameters that have to be estimated.
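The screening rule described above (calibrate only parameters controlling at least 20% of output variance) can be illustrated with a first-order Sobol sensitivity estimate. The three-parameter toy model stands in for SAC-SMA; the Saltelli-style estimator and all numbers are assumptions for illustration:

```python
import random

random.seed(0)

def model(x):
    # Toy stand-in for SAC-SMA: two influential parameters, one negligible.
    return x[0] + x[1] + 0.05 * x[2]

N, D = 20000, 3
A = [[random.random() for _ in range(D)] for _ in range(N)]
B = [[random.random() for _ in range(D)] for _ in range(N)]
fA = [model(a) for a in A]
fB = [model(b) for b in B]
mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

# First-order Sobol indices via a Saltelli-type estimator:
# S_i = mean(f(B) * (f(A with column i from B) - f(A))) / Var(f)
S = []
for i in range(D):
    ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
    fABi = [model(x) for x in ABi]
    Si = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / N / var
    S.append(Si)

# Screening rule from the study: calibrate parameters controlling
# at least 20% of output variance; fix the rest to a priori values.
calibrate = [i for i, s in enumerate(S) if s >= 0.20]
print([round(s, 2) for s in S], calibrate)
```

The two influential parameters pass the 20% threshold; the third is fixed a priori, reducing the calibration dimension by a third, mirroring the 30–40% reductions reported above.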

4.
Abstract

Recent work pertaining to estimating error and accuracies in geomagnetic field modeling is reviewed from a unified viewpoint and illustrated with examples. The formulation of a finite-dimensional approximation to the underlying infinite-dimensional problem is developed. Central to the formulation is an inner product and norm in the solution space through which a priori information can be brought to bear on the problem. Such information is crucial to estimation of the effects of higher degree fields at the Core–Mantle boundary (CMB) because the behavior of higher degree fields is masked in our measurements by the presence of the field from the Earth's crust. Contributions to the errors in predicting geophysical quantities based on the approximate model are separated into three categories: (1) the usual error from the measurement noise; (2) the error from unmodeled fields, i.e. from sources in the crust, ionosphere, etc.; and (3) the error from truncating to a finite-dimensional solution and prediction space. The combination of the first two is termed low degree error while the third is referred to as truncation error.

The error analysis problem consists of “characterizing” the difference δz = z − ẑ, where z is some quantity depending on the magnetic field and ẑ is the estimate of z resulting from our model. Two approaches are discussed. The method of Confidence Set Inference (CSI) seeks to find an upper bound for |z − ẑ|. Statistical methods, i.e. Bayesian or stochastic estimation, seek to estimate E(δz²), where E is the expectation value. Estimation of both the truncation error and the low degree error is discussed for both approaches. Expressions are found for an upper bound for |δz| and for E(δz²). Of particular interest is the computation of the radial field, Br, at the CMB, for which error estimates are made as examples of the methods. Estimated accuracies of the Gauss coefficients are given for the various methods. In general, the lowest error estimates result when the greatest amount of a priori information is available and, indeed, the estimates for truncation error are completely dependent upon the nature of the a priori information assumed. For the most conservative approach, the error in computing point values of Br at the CMB is unbounded and one must be content with, e.g., averages over some large area. The various assumptions about a priori information are reviewed. Work is needed to extend and develop this information. In particular, information regarding the truncated fields is needed to determine if the pessimistic bounds presently available are realistic or if there is a real physical basis for lower error estimates. Characterization of crustal fields for degree greater than 50 is needed, as is more rigorous characterization of the external fields.

5.
Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box–Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)–(iv) mentioned above.
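A minimal sketch of features (ii) and (iii) above — a Box–Cox transformation of flows combined with an autoregressive error model — assuming a discrete AR(1) process, constant time step, and a single season-independent variance (the paper's continuous-time, seasonally varying formulation is richer):

```python
import math

def boxcox(q, lam=0.3):
    # Box-Cox transform to stabilize heteroscedastic residual variance.
    return (q ** lam - 1) / lam if lam else math.log(q)

def ar1_loglik(obs, sim, lam=0.3, phi=0.7, sigma=0.2):
    """Log-likelihood of observed flows under an AR(1) error model on
    Box-Cox-transformed flows. Simplified sketch: phi, sigma and lam
    would normally be inferred jointly with the model parameters."""
    e = [boxcox(o, lam) - boxcox(s, lam) for o, s in zip(obs, sim)]
    # Stationary marginal for the first residual:
    ll = -0.5 * math.log(2 * math.pi * sigma ** 2) - e[0] ** 2 / (2 * sigma ** 2)
    s2 = sigma ** 2 * (1 - phi ** 2)          # innovation variance
    for t in range(1, len(e)):
        r = e[t] - phi * e[t - 1]             # AR(1) innovation
        ll += -0.5 * math.log(2 * math.pi * s2) - r ** 2 / (2 * s2)
    return ll

obs = [1.2, 1.9, 3.1, 2.4, 1.5]   # illustrative flows
sim = [1.0, 2.0, 3.0, 2.5, 1.4]
print(round(ar1_loglik(obs, sim), 2))
```

In a Bayesian calibration this log-likelihood would be combined with parameter priors and sampled, e.g. by MCMC.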

6.
The prediction error of a relatively simple soil acidification model (SMART2) was assessed before and after calibration, focussing on the Al and NO3 concentrations on a block scale. Although SMART2 is especially developed for application on a national to European scale, it still runs at a point support. A 5 × 5 km² grid was used for application on the European scale. Block characteristic values were obtained simply by taking the median value of the point support values within the corresponding grid cell. In order to increase confidence in model predictions on large spatial scales, the model was calibrated and validated for the Netherlands, using a resolution that is feasible for Europe as a whole. Because observations are available only at the point support, it was necessary to transfer them to the block support of the model results. For this purpose, about 250 point observations of soil solution concentrations in forest soils were upscaled to a 5 × 5 km² grid map, using multiple linear regression analysis combined with block kriging. The resulting map with upscaled observations was used for both validation and calibration. A comparison of the map with model predictions using nominal parameter values and the map with the upscaled observations showed that the model overestimated the predicted Al and NO3 concentrations. The nominal model results were still in the 95% confidence interval of the upscaled observations, but calibration improved the model predictions and strongly reduced the model error. However, the model error after calibration remains rather large.

7.
A groundwater model characterized by a lack of field data about hydraulic model parameters and boundary conditions, combined with many observation data sets for calibration purposes, was investigated with respect to model uncertainty. Seven different conceptual models with a stepwise increase from 0 to 30 adjustable parameters were calibrated using PEST. Residuals, sensitivities, the Akaike information criterion (AIC and AICc), Bayesian information criterion (BIC), and Kashyap's information criterion (KIC) were calculated for the set of seven inversely calibrated models of increasing complexity. Finally, the likelihood of each model was computed. Comparing only the residuals of the different conceptual models leads to overparameterization and a loss of certainty in the conceptual model approach. The model employing only uncalibrated hydraulic parameters, estimated from sedimentological information, obtained the worst AIC, BIC, and KIC values. Using only sedimentological data to derive hydraulic parameters introduces a systematic error into the simulation results and cannot be recommended for generating a valuable model. For numerical investigations with large numbers of calibration data, the BIC and KIC select a simpler model as optimal than the AIC does. The model with 15 adjusted parameters was evaluated by the AIC as the best option and obtained a likelihood of 98%. The AIC disregards the potential model structure error, and the selection of the KIC is, therefore, more appropriate. Sensitivities to piezometric heads were highest for the model with only five adjustable parameters, and sensitivity coefficients were directly influenced by the changes in extracted groundwater volumes.
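The information criteria compared above can be computed directly from a least-squares calibration under a Gaussian error assumption; the observation count, parameter counts and residual sums of squares below are made-up numbers for illustration:

```python
import math

def info_criteria(rss, n, k):
    """AIC, AICc and BIC for a least-squares calibration with n
    observations and k adjustable parameters (Gaussian errors)."""
    # Maximized log-likelihood with sigma^2 profiled out as rss/n:
    ll = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    aic = 2 * k - 2 * ll
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = k * math.log(n) - 2 * ll               # heavier penalty for large n
    return aic, aicc, bic

# A better fit (lower RSS) bought with more parameters: the penalty
# terms decide whether the extra complexity is worth it.
n = 200
for k, rss in [(5, 40.0), (15, 30.0), (30, 28.0)]:
    aic, aicc, bic = info_criteria(rss, n, k)
    print(k, round(aic, 1), round(aicc, 1), round(bic, 1))
```

With these illustrative numbers the 15-parameter model wins under both AIC and BIC, while the 30-parameter model is rejected despite its slightly lower RSS.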

8.
Input uncertainty is as significant as model error: it affects parameter estimation and yields biased, misleading results. This study performed a comprehensive comparison and evaluation of uncertainty estimates under precipitation errors by the GLUE and Bayesian (Metropolis–Hastings) methods in a validated conceptual hydrological model (WASMOD). It aims to explain the sensitivity of, and differences between, the GLUE and Bayesian methods when applied to a hydrological model under precipitation errors with a constant multiplier parameter and a random multiplier parameter. The 95% confidence intervals of monthly discharge in low flow, medium flow and high flow were selected for comparison. Four indices, i.e. the average relative interval length, the percentage of observations bracketed by the confidence interval, the percentage of observations bracketed by the unit confidence interval and the continuous rank probability score (CRPS), were used for sensitivity analysis under model input error via the GLUE and Bayesian methods. It was found that (1) the posterior distributions derived by the Bayesian method are narrower and sharper than those obtained by GLUE under precipitation errors, but the differences are quite small; (2) the Bayesian method is more sensitive than GLUE in uncertainty estimates of discharge under precipitation errors; (3) both the GLUE and Bayesian methods are more sensitive in the uncertainty estimate of high flow than of the other flows; and (4) under the impact of precipitation errors, the CRPS results for low and medium flows are quite stable for both GLUE and the Bayesian method, while for high flow the CRPS is sensitive under the Bayesian method.
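One of the four indices above, the CRPS, has a simple empirical form for an ensemble of discharge values, the energy form CRPS = E|X − y| − ½E|X − X′| (lower is better); the two toy ensembles below are illustrative:

```python
def crps(ensemble, obs):
    """Empirical Continuous Ranked Probability Score for an ensemble
    forecast, via the energy form CRPS = E|X-y| - 0.5*E|X-X'|."""
    m = len(ensemble)
    t1 = sum(abs(x - obs) for x in ensemble) / m
    t2 = sum(abs(x - y) for x in ensemble for y in ensemble) / (2 * m * m)
    return t1 - t2

sharp = [9.8, 10.0, 10.2]   # narrow ensemble centred on the observation
wide = [6.0, 10.0, 14.0]    # diffuse ensemble, same centre
print(round(crps(sharp, 10.0), 3), round(crps(wide, 10.0), 3))
```

The sharp, well-centred ensemble scores lower (better); CRPS thus rewards both accuracy and sharpness, which is why it discriminates between the GLUE and Bayesian intervals for high flows.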

9.
Calibration is typically used for improving the predictability of mechanistic simulation models by adjusting a set of model parameters and fitting model predictions to observations. Calibration does not, however, account for or correct potential misspecifications in the model structure, limiting the accuracy of modeled predictions. This paper presents a new approach that addresses both parameter error and model structural error to improve the predictive capabilities of a model. The new approach simultaneously conducts a numeric search for model parameter estimation and a symbolic (regression) search to determine a function to correct misspecifications in model equations. It is based on an evolutionary computation approach that integrates genetic algorithm and genetic programming operators. While this new approach is designed generically and can be applied to a broad array of mechanistic models, it is demonstrated for an illustrative case study involving water quality modeling and prediction. Results, based on extensive testing and evaluation, show that the new procedure performs consistently well in fitting a set of training data as well as predicting a set of validation data, and outperforms a calibration procedure and an empirical model fitting procedure.

10.
This work examines future flood risk within the context of integrated climate and hydrologic modelling uncertainty. The research questions investigated are (1) whether hydrologic uncertainties are a significant source of uncertainty relative to other sources such as climate variability and change and (2) whether a statistical characterization of uncertainty from a lumped, conceptual hydrologic model is sufficient to account for hydrologic uncertainties in the modelling process. To investigate these questions, an ensemble of climate simulations is propagated through hydrologic models and then through a reservoir simulation model to delimit the range of flood protection under a wide array of climate conditions. Uncertainty in mean climate changes and internal climate variability are framed using a risk-based methodology and are explored using a stochastic weather generator. To account for hydrologic uncertainty, two hydrologic models are considered, a conceptual, lumped parameter model and a distributed, physically based model. In the conceptual model, parameter and residual error uncertainties are quantified and propagated through the analysis using a Bayesian modelling framework. The approach is demonstrated in a case study for the Coralville Dam on the Iowa River, where recent, intense flooding has raised questions about potential impacts of climate change on flood protection adequacy. Results indicate that the uncertainty surrounding future flood risk from hydrologic modelling and internal climate variability can be of the same order of magnitude as climate change. Furthermore, statistical uncertainty in the conceptual hydrological model can capture the primary structural differences that emerge in flood damage estimates between the two hydrologic models. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Abstract

The uncertainties arising from the problem of identifying a representative model structure and model parameters in a conceptual rainfall-runoff model were investigated. A conceptual model, the HBV model, was applied to the mountainous Brugga basin (39.9 km²) in the Black Forest, southwestern Germany. In a first step, a Monte Carlo procedure with randomly generated parameter sets was used for calibration. For a ten-year calibration period, different parameter sets resulted in an equally good correspondence between observed and simulated runoff. A few parameters were well defined (i.e. best parameter values were within small ranges), but for most parameters good simulations were found with values varying over wide ranges. In a second step, model variants with different numbers of elevation and landuse zones and various runoff generation conceptualizations were tested. In some cases, representation of more spatial variability gave better simulations in terms of discharge. However, good results could be obtained with different and even unrealistic concepts. The computation of design floods and low flow predictions illustrated that the parameter uncertainty and the uncertainty of identifying a unique best model variant have implications for model predictions. The flow predictions varied considerably. The peak discharge of a flood with a probability of 0.01 year⁻¹, for instance, varied from 40 to almost 60 mm day⁻¹. It was concluded that model predictions, particularly in applied studies, should be given as ranges rather than as single values.

12.
Comparison of hydrological models and ensemble forecasting in semi-humid watersheds
HUO Wenbo, LI Zhijia, LI Qiaoling. Journal of Lake Sciences (湖泊科学), 2017, 29(6): 1491-1501
Seven hydrological models were applied to three semi-humid watersheds in northern China to compare their simulation performance and analyse the applicability of each model in each watershed. Bayesian model averaging (BMA) was then used to combine the models, the merits of different ensemble schemes were compared, and the practical value of the BMA approach was examined. The results show that models based primarily on the saturation-excess runoff mechanism perform well in semi-humid watersheds, and that adapting traditional models to the characteristics of each watershed can improve simulation accuracy. BMA provides good deterministic and probabilistic forecasts. Combining only the few best-performing models does not effectively improve forecast accuracy; appropriately increasing the number of models in the ensemble allows BMA to better integrate the strengths of the individual models and improve the accuracy of the forecasts.
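The Bayesian model averaging step described above is commonly fitted with an EM algorithm (in the style of Raftery et al.); the sketch below assumes Gaussian conditional densities with a single shared variance, and uses three synthetic "models" in place of the study's seven:

```python
import math
import random

random.seed(1)

# Synthetic "observations" and three competing model simulations.
obs = [random.gauss(10, 2) for _ in range(300)]
sims = [
    [o + random.gauss(0.2, 1.0) for o in obs],   # good model, small bias
    [o + random.gauss(0.5, 1.5) for o in obs],   # mediocre model
    [o + random.gauss(3.0, 3.0) for o in obs],   # poor model
]

def em_bma(obs, sims, iters=200):
    """EM estimation of BMA weights and a common predictive variance,
    assuming Gaussian conditional pdfs (a simplified sketch)."""
    K, n = len(sims), len(obs)
    w = [1.0 / K] * K
    s2 = 1.0
    for _ in range(iters):
        # E-step: responsibility of model k for observation t.
        z = []
        for t in range(n):
            d = [w[k] * math.exp(-(obs[t] - sims[k][t]) ** 2 / (2 * s2))
                 for k in range(K)]
            tot = sum(d) or 1e-300
            z.append([dk / tot for dk in d])
        # M-step: update weights and shared variance.
        w = [sum(z[t][k] for t in range(n)) / n for k in range(K)]
        s2 = sum(z[t][k] * (obs[t] - sims[k][t]) ** 2
                 for t in range(n) for k in range(K)) / n
    return w, s2

w, s2 = em_bma(obs, sims)
print([round(x, 2) for x in w])
```

The BMA predictive mean is then the weight-averaged simulation; note the poor model still receives a small, nonzero weight rather than being discarded outright.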

13.
The inversion of induced-polarization parameters is important in the characterization of the frequency electrical response of porous rocks. A Bayesian approach is developed to invert these parameters assuming the electrical response is described by a Cole–Cole model in the time or frequency domain. We show that the Bayesian approach provides a better analysis of the uncertainty associated with the parameters of the Cole–Cole model compared with more conventional methods based on the minimization of a cost function using the least-squares criterion. This is due to the strong non-linearity of the inverse problem and non-uniqueness of the solution in the time domain. The Bayesian approach consists of propagating the information provided by the measurements through the model and combining this information with a priori knowledge of the data. Our analysis demonstrates that the uncertainty in estimating the Cole–Cole model parameters from induced-polarization data is much higher for measurements performed in the time domain than in the frequency domain. Our conclusion is that it is very difficult, if not impossible, to retrieve the correct value of the Cole–Cole parameters from time-domain induced-polarization data using standard least-squares methods. In contrast, the Cole–Cole parameters can be more correctly inverted in the frequency domain. These results are also valid for other models describing the induced-polarization spectral response, such as the Cole–Davidson or power law models.
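The forward model being inverted above can be written down directly. A sketch of the frequency-domain Cole–Cole complex resistivity spectrum, with illustrative parameter values for the DC resistivity rho0, chargeability m, relaxation time tau and exponent c:

```python
import cmath
import math

def cole_cole(freqs, rho0=100.0, m=0.1, tau=0.01, c=0.5):
    """Cole-Cole complex resistivity spectrum:
    rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))."""
    out = []
    for f in freqs:
        w = 2 * math.pi * f
        out.append(rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau) ** c))))
    return out

# Amplitude and phase (mrad) across seven decades of frequency.
freqs = [10.0 ** k for k in range(-2, 5)]
for f, z in zip(freqs, cole_cole(freqs)):
    print(f, round(abs(z), 2), round(1000 * cmath.phase(z), 2))
```

The spectrum relaxes from rho0 at low frequency to rho0·(1 − m) at high frequency, with a phase minimum near 1/(2π·tau); a Bayesian inversion would place priors on (rho0, m, tau, c) and sample their posterior given measured spectra.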

14.
Streamflow forecasting methods are moving towards probabilistic approaches that quantify the uncertainty associated with the various sources of error in the forecasting process. Multi-model averaging methods which try to address modeling deficiencies by considering multiple models are gaining much popularity. We have applied the Bayesian Model Averaging method to an ensemble of twelve snow models that vary in their heat and melt algorithms, parameterization, and/or albedo estimation method. Three of the models use the temperature-based heat and melt routines of the SNOW17 snow accumulation and ablation model. Nine models use heat and melt routines that are based on a simplified energy balance approach, and are varied by using three different albedo estimation schemes. Finally, different parameter sets were identified through automatic calibration with three objective functions. All models use the snow accumulation, liquid water transport, and ground surface heat exchange processes of the SNOW17. The resulting twelve snow models were combined using Bayesian Model Averaging (BMA). The individual models, BMA predictive mean, and BMA predictive variance were evaluated for six SNOTEL sites in the western U.S. The models performed best and the BMA variance was lowest at the colder sites with high winter precipitation and little mid-winter melting. An individual snow model would often outperform the BMA predictive mean. However, observed snow water equivalent (SWE) was captured within the 95% confidence intervals of the BMA variance on average 80% of the time at all sites. Results are promising that consideration of multiple snow structures would provide useful uncertainty information for probabilistic hydrologic prediction.
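The coverage statistic reported above (observed SWE inside the nominal 95% interval about 80% of the time) is a symptom of underdispersion. A sketch with synthetic data, where predictive intervals narrower than the true forecast error spread cover well below the nominal 95% (all numbers are illustrative):

```python
import random

random.seed(7)

def coverage(obs, lower, upper):
    """Fraction of observations falling inside the predictive interval."""
    hits = sum(l <= o <= u for o, l, u in zip(obs, lower, upper))
    return hits / len(obs)

# Hypothetical predictive mean/std from a combined snow-model forecast.
truth = [random.gauss(100, 20) for _ in range(1000)]   # "observed" SWE, mm
pred_mean = [t + random.gauss(0, 8) for t in truth]    # true error sd = 8 mm
pred_std = 5.0                                         # understated predictive sd
lower = [m - 1.96 * pred_std for m in pred_mean]
upper = [m + 1.96 * pred_std for m in pred_mean]
cov = coverage(truth, lower, upper)
print(round(cov, 2))                                   # well below 0.95
```

Because the stated predictive standard deviation (5 mm) understates the actual error spread (8 mm), the "95%" interval covers only around 78% of observations, close to the 80% figure in the abstract.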

15.
For the southern branch of the Rhine–Meuse estuary, The Netherlands, a two-dimensional horizontal suspended sediment transport model was constructed in order to evaluate the complicated water quality management of the area. The data needed to calibrate the model were collected during a special field survey at high river runoff utilizing a number of techniques: (1) turbidity probes were used to obtain suspended sediment concentration profiles; (2) air-borne remote sensing video recordings were applied in order to obtain information concerning the spatial distribution of the suspended sediment concentration; (3) an acoustic probe (ISAC) was used to measure cohesive bed density profiles and (4) an in situ underwater video camera (VIS) was deployed to collect video recordings of the suspended sediment. These VIS data were finally processed to fall velocity and diameter distributions and were mainly used to improve insight into the relevant transport processes, indicating significant erosion of sand from the upstream Rhine branch. For quantitative calibration of the model, the data from the turbidity profiles were used. Sedimentation and erosion were modelled according to Krone and Partheniades. The model results showed a good overall fit to the measurements, with a mean absolute error of 18 per cent (standard error = 1 per cent), corresponding to concentrations of about 0.020 (upstream) to 0.005 kg m−3 (downstream). The overall correlation between observed and simulated suspended sediment concentrations was 0.85. The remote sensing video recordings were used for a qualitative calibration of the model. The distribution pattern of the suspended sediment on these photos was reproduced quite well by the model. However, a more accurate calibration technique is needed to enable the use of aerial remote sensing as a quantitative calibration method. Copyright © 1999 John Wiley & Sons, Ltd.

16.
The error in physically-based rainfall-runoff modelling is broken into components, and these components are assigned to three groups: (1) model structure error, associated with the model’s equations; (2) parameter error, associated with the parameter values used in the equations; and (3) run time error, associated with rainfall and other forcing data. The error components all contribute to “integrated” errors, such as the difference between simulated and observed runoff, but their individual contributions cannot usually be isolated because the modelling process is complex and there is a lack of knowledge about the catchment and its hydrological responses. A simple model of the Slapton Wood Catchment is developed within a theoretical framework in which the catchment and its responses are assumed to be known perfectly. This makes it possible to analyse the contributions of the error components when predicting the effects of a physical change in the catchment. The standard approach to predicting change effects involves: (1) running “unchanged” simulations using current parameter sets; (2) making adjustments to the sets to allow for physical change; and (3) running “changed” simulations. Calibration or uncertainty-handling methods such as GLUE are used to obtain the current sets based on forcing and runoff data for a calibration period, by minimising or creating statistical bounds for the “integrated” errors in simulations of runoff. It is shown that current parameter sets derived in this fashion are unreliable for predicting change effects, because of model structure error and its interaction with parameter error, so caution is needed if the standard approach is to be used when making management decisions about change in catchments.

17.
How much data is needed for calibration of a hydrological catchment model? In this paper we address this question by evaluating the information contained in different subsets of discharge and groundwater time series for multi-objective calibration of a conceptual hydrological model within the framework of an uncertainty analysis. The study site was a 5.6 km² catchment within the Forsmark research site in central Sweden along the Baltic coast. Daily time series data were available for discharge and several groundwater wells within the catchment for a continuous 1065-day period. The hydrological model was a site-specific modification of the conceptual HBV model. The uncertainty analyses were based on a selective Monte Carlo procedure. Thirteen subsets of the complete time series data were investigated with the idea that these represent realistic intermittent sampling strategies. Data subsets included split-samples and various combinations of weekly, monthly, and quarterly fixed interval subsets, as well as a 53-day ‘informed observer’ subset that utilized once per month samples except during March and April (the months containing large and often dominant snow melt events), when sampling was once per week. Several of these subsets, including that of the informed observer, provided very similar constraints on model calibration and parameter identification as the full data record, in terms of credibility bands on simulated time series, posterior parameter distributions, and performance indices calculated to the full dataset. This result suggests that hydrological sampling designs can, at least in some cases, be optimized. Copyright © 2009 John Wiley & Sons, Ltd.

18.
Probabilistic-fuzzy health risk modeling
Health risk analysis of multi-pathway exposure to contaminated water involves the use of mechanistic models that include many uncertain and highly variable parameters. Currently, the uncertainties in these models are treated using statistical approaches. However, not all uncertainties in data or model parameters are due to randomness. Other sources of imprecision that may lead to uncertainty include scarce or incomplete data, measurement error, data obtained from expert judgment, or subjective interpretation of available information. These kinds of uncertainties and also the non-random uncertainty cannot be treated solely by statistical methods. In this paper we propose the use of fuzzy set theory together with probability theory to incorporate uncertainties into the health risk analysis. We identify this approach as probabilistic-fuzzy risk assessment (PFRA). Based on the form of available information, fuzzy set theory, probability theory, or a combination of both can be used to incorporate parameter uncertainty and variability into mechanistic risk assessment models. In this study, tap water concentration is used as the source of contamination in the human exposure model. Ingestion, inhalation and dermal contact are considered as multiple exposure pathways. The tap water concentration of the contaminant and cancer potency factors for ingestion, inhalation and dermal contact are treated as fuzzy variables while the remaining model parameters are treated using probability density functions. Combined utilization of fuzzy and random variables produces membership functions of risk to individuals at different fractiles of risk as well as probability distributions of risk for various alpha-cut levels of the membership function. The proposed method provides a robust approach in evaluating human health risk to exposure when there is both uncertainty and variability in model parameters. 
PFRA allows utilization of certain types of information which have not been used directly in existing risk assessment methods.
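The alpha-cut machinery behind PFRA can be sketched for triangular fuzzy numbers combined with Monte Carlo sampling of the random parameters. The sketch considers the ingestion pathway only; every concentration, potency and intake value below is hypothetical:

```python
import random

random.seed(3)

def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (low, mode, high)."""
    lo, m, hi = tri
    return (lo + alpha * (m - lo), hi - alpha * (hi - m))

# Hypothetical fuzzy inputs: tap-water concentration and ingestion
# cancer potency factor (all numbers illustrative).
conc_fz = (0.5, 1.0, 2.0)     # mg/L, triangular fuzzy
cpf_fz = (0.01, 0.03, 0.05)   # (mg/kg/day)^-1, triangular fuzzy

def risk_interval(alpha, n=5000):
    """95th-percentile risk interval at a given alpha-cut level:
    fuzzy bounds propagated through a Monte Carlo over the random
    parameters (intake rate and body weight)."""
    c_lo, c_hi = alpha_cut(conc_fz, alpha)
    p_lo, p_hi = alpha_cut(cpf_fz, alpha)
    lo_s, hi_s = [], []
    for _ in range(n):
        intake = random.gauss(2.0, 0.3) / random.gauss(70.0, 10.0)  # L/day per kg
        lo_s.append(c_lo * p_lo * intake)
        hi_s.append(c_hi * p_hi * intake)
    lo_s.sort(); hi_s.sort()
    return lo_s[int(0.95 * n)], hi_s[int(0.95 * n)]

for a in (0.0, 0.5, 1.0):
    lo, hi = risk_interval(a)
    print(a, round(lo, 5), round(hi, 5))
```

Sweeping alpha from 0 to 1 traces out the membership function of the 95th-percentile risk: the interval is widest at alpha = 0 (full fuzzy support) and collapses toward the modal value at alpha = 1.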

19.
Stochastic delineation of capture zones: classical versus Bayesian approach
A Bayesian approach to characterize the predictive uncertainty in the delineation of time-related well capture zones in heterogeneous formations is presented and compared with the classical or non-Bayesian approach. The transmissivity field is modelled as a random space function and conditioned on distributed measurements of the transmissivity. In conventional geostatistical methods the mean value of the log transmissivity and the functional form of the covariance and its parameters are estimated from the available measurements, and then entered into the prediction equations as if they are the true values. However, this classical approach accounts only for the uncertainty that stems from the lack of ability to exactly predict the transmissivity at unmeasured locations. In reality, the number of measurements used to infer the statistical properties of the transmissivity field is often limited, which introduces error in the estimation of the structural parameters. The method presented accounts for the uncertainty that originates from the imperfect knowledge of the parameters by treating them as random variables. In particular, we use Bayesian methods of inference so as to make proper allowance for the uncertainty associated with estimating the unknown values of the parameters. The classical and Bayesian approaches to stochastic capture zone delineation are detailed and applied to a hypothetical flow field. Two different sampling densities on a regular grid are considered to evaluate the effect of data density in both methods. Results indicate that the predictions of the Bayesian approach are more conservative.

20.
The groundwater inverse problem of estimating heterogeneous groundwater model parameters (hydraulic conductivity in this case) given measurements of aquifer response (such as hydraulic heads) is known to be an ill-posed problem, with multiple parameter values giving similar fits to the aquifer response measurements. This problem is further exacerbated due to the lack of extensive data, typical of most real-world problems. In such cases, it is desirable to incorporate expert knowledge in the estimation process to generate more reasonable estimates. This work presents a novel interactive framework, called the ‘Interactive Multi-Objective Genetic Algorithm’ (IMOGA), to solve the groundwater inverse problem considering different sources of quantitative data as well as qualitative expert knowledge about the site. The IMOGA is unique in that it looks at groundwater model calibration as a multi-objective problem consisting of quantitative objectives – calibration error and regularization – and a ‘qualitative’ objective based on the preference of the geological expert for different spatial characteristics of the conductivity field. All these objectives are then included within a multi-objective genetic algorithm to find multiple solutions that represent the best combination of all quantitative and qualitative objectives. A hypothetical aquifer case study (based on the test case presented by Freyberg [Freyberg DL. An exercise in ground-water model calibration and prediction. Ground Water 1988;26(3)]), for which the ‘true’ parameter values are known, is used as a test case to demonstrate the applicability of this method. It is shown that using automated calibration techniques without expert interaction leads to parameter values that are not consistent with site knowledge. Adding expert interaction is shown to not only improve the plausibility of the estimated conductivity fields but also the predictive accuracy of the calibrated model.

