Similar Literature
20 similar documents found.
1.
The errors-in-variables (EIV) model is a nonlinear model whose parameters can be solved by the singular value decomposition (SVD) method or by a general iterative algorithm. The existing formulae for the covariance matrix of total least squares (TLS) parameter estimates do not fully consider the randomness of the quantities in the iterative algorithm or the biases of the parameter estimates and residuals. To provide more realistic precision information for TLS adjustment, the derivative-free unscented transformation with a scaled symmetric sampling strategy, i.e. the scaled unscented transformation (SUT), is introduced and implemented. In this contribution, we first discuss the various existing solutions of TLS adjustment and covariance matrices of TLS parameter estimates and derive general first-order approximate cofactor matrices of the random quantities in TLS adjustment. Second, based on the combination of the TLS iterative algorithm and the SUT calculation process, we design two SUT algorithms to calculate the biases and the second-order approximate covariance matrices. Finally, a straight-line fitting model and a plane coordinate transformation model are used to demonstrate that applying SUT for precision estimation of TLS adjustment is feasible and effective.
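For readers unfamiliar with the unscented transformation, the sketch below shows the core of the scaled symmetric sampling step: sigma points, weights, and second-order propagation of a mean and covariance through a nonlinear map. The map `f`, the dimensions and the tuning constants are illustrative assumptions, not the TLS adjustment of the paper.

```python
# Sketch: scaled unscented transformation (SUT) for propagating a mean and
# covariance through a nonlinear function. f, alpha, beta, kappa are assumptions.
import numpy as np

def sut_propagate(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 symmetric sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([f(x) for x in sigma])              # propagate each point
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d                  # second-order covariance
    return y_mean, y_cov

# Example: two fitted parameters pushed through a toy nonlinear map
m = np.array([1.0, 0.5])
P = np.diag([0.01, 0.04])
print(sut_propagate(lambda x: np.array([x[0] * x[1], x[0] ** 2]), m, P))
```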

2.
Coupling basin- and site-scale inverse models of the Española aquifer
Large-scale models are frequently used to estimate fluxes to small-scale models. The uncertainty associated with these flux estimates, however, is rarely addressed. We present a case study from the Española Basin, northern New Mexico, where we use a basin-scale model coupled with a high-resolution, nested site-scale model. Both models are three-dimensional and are analyzed by codes FEHM and PEST. Using constrained nonlinear optimization, we examine the effect of parameter uncertainty in the basin-scale model on the nonlinear confidence limits of predicted fluxes to the site-scale model. We find that some of the fluxes are very well constrained, while for others there is fairly large uncertainty. Site-scale transport simulation results, however, are relatively insensitive to the estimated uncertainty in the fluxes. We also compare parameter estimates obtained by the basin- and site-scale inverse models. Differences in the model grid resolution (scale of parameter estimation) result in differing delineation of hydrostratigraphic units, so the two models produce different estimates for some units. The effect is similar to the observed scale effect in medium properties owing to differences in tested volume. More important, estimation uncertainty of model parameters is quite different at the two scales. Overall, the basin inverse model resulted in significantly lower estimates of uncertainty, because of the larger calibration dataset available. This suggests that the basin-scale model contributes not only important boundary condition information but also improved parameter identification for some units. Our results demonstrate that caution is warranted when applying parameter estimates inferred from a large-scale model to small-scale simulations, and vice versa.

3.
Considering complexity in groundwater modeling can aid in selecting an optimal model and can help avoid overparameterization, model uncertainty, and misleading conclusions. This study was designed to determine the uncertainty arising from model complexity, and to identify how complexity affects model uncertainty. The Ajabshir aquifer, located in East Azerbaijan, Iran, was used for comprehensive hydrogeological studies and modeling. Six unique conceptual models with four different degrees of complexity, measured by the number of calibrated model parameters (6, 10, 10, 13, 13 and 15 parameters), were compared and characterized with alternative geological interpretations, recharge estimates and boundary conditions. The models were developed with ModelMuse and calibrated using UCODE against the same set of observed hydraulic-head data. Different methods were used to calculate model probability and model weight to explore model complexity, including Bayesian model averaging, model selection criteria, and multicriteria decision-making (MCDM). With the model selection criteria of AIC, AICc and BIC, the simplest model received the highest model probability. The model selection criterion KIC and the MCDM method, in addition to considering the quality of model fit between observed and simulated data and the number of calibrated parameters, also consider uncertainty in parameter estimates through a Fisher information matrix. KIC and MCDM selected a model with moderate complexity (10 parameters) and the best parameter estimation (model 3) as the best model, over another model with the same degree of complexity (model 2). The results of these comparisons show that in choosing between models, priority should be given to the quality of the data and parameter estimation rather than to degree of complexity.
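As a reminder of how such criterion-based model probabilities are formed, here is a minimal sketch that turns sums of squared residuals and parameter counts into AIC/AICc/BIC values and normalized model weights. The residual values and parameter counts are invented, and KIC (which also needs the Fisher information matrix) is not included.

```python
# Sketch: information criteria and model weights from calibration residuals.
# The sum-of-squared-errors values and parameter counts are illustrative only.
import numpy as np

def criteria(sse, n_obs, k):
    """AIC/AICc/BIC for a Gaussian error model with k calibrated parameters."""
    ll = -0.5 * n_obs * (np.log(2 * np.pi * sse / n_obs) + 1)   # max log-likelihood
    aic = -2 * ll + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n_obs - k - 1)
    bic = -2 * ll + k * np.log(n_obs)
    return aic, aicc, bic

def model_weights(scores):
    """Convert criterion values into normalized model probabilities."""
    d = np.asarray(scores) - np.min(scores)
    w = np.exp(-0.5 * d)
    return w / w.sum()

n_obs = 120
models = {"6 params": (85.0, 6), "10 params": (70.0, 10), "15 params": (62.0, 15)}
bics = [criteria(sse, n_obs, k)[2] for sse, k in models.values()]
print(dict(zip(models, np.round(model_weights(bics), 3))))
```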

4.
Knowledge about saturation and pressure distributions in a reservoir can help in determining an optimal drainage pattern, and in deciding on optimal well designs to reduce risks of blow-outs and damage to production equipment. By analyzing time-lapse PP AVO or time-lapse multicomponent seismic data, it is possible to separate the effects of production related saturation and pressure changes on seismic data. To be able to utilize information about saturation and pressure distributions in reservoir model building and simulation, information about uncertainty in the estimates is useful. In this paper we present a method to estimate changes in saturation and pressure from time-lapse multicomponent seismic data using a Bayesian estimation technique. Results of the estimations will be probability density functions (pdfs), giving immediate information about both parameter values and uncertainties. Linearized rock physical models are linked to the changes in saturation and pressure in the prior probability distribution. The relationship between the elastic parameters and the measured seismic data is described in the likelihood model. By assuming Gaussian distributed prior uncertainties the posterior distribution of the saturation and pressure changes can be calculated analytically. Results from tests on synthetic seismic data show that this method produces more precise estimates of changes in effective pressure than a similar methodology based on only PP AVO time-lapse seismic data. This indicates that additional information about S-waves obtained from converted-wave seismic data is useful for obtaining reliable information about the pressure change distribution.
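The analytical posterior referred to above is the standard linear-Gaussian update. A minimal sketch follows, with an invented linearized operator G, prior (m0, Cm) and noise covariance Cd rather than the paper's rock-physics model.

```python
# Sketch: closed-form Gaussian posterior for a linearized inverse problem
#   d = G m + e,  m ~ N(m0, Cm),  e ~ N(0, Cd)
# G, the prior and the noise level are illustrative, not the paper's model.
import numpy as np

def gaussian_posterior(G, d, m0, Cm, Cd):
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)   # gain matrix
    m_post = m0 + K @ (d - G @ m0)
    C_post = Cm - K @ G @ Cm
    return m_post, C_post

rng = np.random.default_rng(0)
m_true = np.array([0.15, -2.0])                        # e.g. d(Sw), d(pressure, MPa)
G = np.array([[0.8, 0.05], [0.3, 0.20], [0.1, 0.30]])  # linearized sensitivities
Cd = 0.01 * np.eye(3)
d = G @ m_true + rng.multivariate_normal(np.zeros(3), Cd)
m_post, C_post = gaussian_posterior(G, d, np.zeros(2), np.diag([0.25, 4.0]), Cd)
print(m_post, np.sqrt(np.diag(C_post)))                # estimates with 1-sigma spread
```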

5.
Under the long-wavelength assumption, a set of parallel vertical fractures embedded in an isotropic background formation is equivalent to a transversely isotropic medium with a horizontal symmetry axis (HTI). Because the seismic response of fractured rock varies with observation azimuth, wide-azimuth seismic data not only allow prediction of the elastic and anisotropy parameters of the fractured rock but also carry rich information on reservoir properties such as porosity. Combining these ideas with field seismic data, this paper proposes a rock-physics-driven probabilistic joint seismic inversion of fracture and reservoir-property parameters in a Bayesian framework. First, the elastic and anisotropy parameters of the fractured rock are inverted from AVAZ data. On this basis, a statistical rock-physics model characterizes the relationship between anisotropic reservoir parameters such as porosity and fracture density and the fractured-rock parameters; Markov chain Monte Carlo (MCMC) sampling is used to generate a large number of random samples, the expectation-maximization (EM) algorithm estimates the posterior conditional probability distribution, and the porosity, fracture density and other HTI fractured-medium reservoir parameters corresponding to the maximum posterior conditional probability are taken as the inversion result. Well-log and field seismic data processing shows that the method can stably and reasonably obtain the elastic and anisotropy parameters of fractured rock from azimuthal seismic data, and it provides a relatively reliable probabilistic seismic inversion approach for fractured-medium reservoir parameters such as porosity and fracture density.
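A minimal sketch of the MCMC ingredient described above: a Metropolis sampler for (porosity, fracture density) given a placeholder linear rock-physics relation. The forward model, priors and noise level are assumptions, and the EM density-estimation step of the paper is omitted.

```python
# Sketch: Metropolis sampling of (porosity, fracture density) given observed
# fractured-rock parameters. The linear "rock-physics" forward model, noise
# level and priors are placeholders, not the statistical model of the paper.
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):                       # placeholder rock-physics relation
    por, crack = theta
    return np.array([3.5 - 4.0 * por - 1.0 * crack,   # pseudo Vp (km/s)
                     2.0 - 2.0 * por - 1.5 * crack])  # pseudo Vs (km/s)

def log_post(theta, obs, sigma=0.05):
    por, crack = theta
    if not (0.0 < por < 0.4 and 0.0 < crack < 0.2):   # uniform prior bounds
        return -np.inf
    r = obs - forward(theta)
    return -0.5 * np.sum((r / sigma) ** 2)

obs = forward(np.array([0.18, 0.06])) + rng.normal(0, 0.05, 2)
theta = np.array([0.2, 0.05])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.01])
    if np.log(rng.uniform()) < log_post(prop, obs) - log_post(theta, obs):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])                    # discard burn-in
print(samples.mean(axis=0), samples.std(axis=0))      # posterior summary
```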

6.
Time-lapse seismic data is useful for identifying fluid movement and pressure and saturation changes in a petroleum reservoir and for monitoring of CO2 injection. The focus of this paper is estimation of time-lapse changes with uncertainty quantification using full-waveform inversion. The purpose of also estimating the uncertainty in the inverted parameters is to be able to use the inverted seismic data quantitatively for updating reservoir models with ensemble-based methods. We perform Bayesian inversion of seismic waveform data in the frequency domain by combining an iterated extended Kalman filter with an explicit representation of the sensitivity matrix in terms of Green functions (acoustic approximation). Using this method, we test different strategies for inversion of the time-lapse seismic data with uncertainty. We compare the results from a sequential strategy (making a prior from the monitor survey using the inverted baseline survey) with a double difference strategy (inverting the difference between the monitor and baseline data). We apply the methods to a subset of the Marmousi2 P-velocity model. Both strategies performed well and relatively good estimates of the monitor velocities and the time-lapse differences were obtained. For the estimated time-lapse differences, the double difference strategy gave the lowest errors.
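A small sketch of the iterated extended Kalman filter measurement update (essentially a Gauss-Newton step with a prior) on a toy nonlinear forward model. The model, Jacobian and noise statistics are invented, not the frequency-domain waveform operator built from Green functions.

```python
# Sketch: iterated extended Kalman filter measurement update on a toy model.
# The forward model g, its Jacobian and all numbers are illustrative.
import numpy as np

def iekf_update(m0, C0, d, g, jac, Cd, n_iter=5):
    m = m0.copy()
    for _ in range(n_iter):
        G = jac(m)
        K = C0 @ G.T @ np.linalg.inv(G @ C0 @ G.T + Cd)
        # relinearize about the current iterate (iterated EKF step)
        m = m0 + K @ (d - g(m) - G @ (m0 - m))
    C = (np.eye(len(m0)) - K @ G) @ C0
    return m, C

g = lambda m: np.array([m[0] * m[1], m[0] ** 2 + m[1]])        # toy forward model
jac = lambda m: np.array([[m[1], m[0]], [2 * m[0], 1.0]])      # its Jacobian
m_true = np.array([1.8, 2.5])
Cd = 0.01 * np.eye(2)
d = g(m_true) + np.random.default_rng(2).multivariate_normal(np.zeros(2), Cd)
m_post, C_post = iekf_update(np.array([1.5, 2.0]), np.eye(2), d, g, jac, Cd)
print(m_post, np.sqrt(np.diag(C_post)))
```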

7.
The complexity of distributed hydrological models has led to improvements in calibration methodologies in recent years. There are various manual, automatic and hybrid methods of calibration. Most use a single objective function to calculate estimation errors. The use of multi-objective calibration improves results, since different aspects of the hydrograph may be considered simultaneously. However, the uncertainty of estimates from a hydrological model can only be taken into account by using a probabilistic approach. This paper presents a calibration method of probabilistic nature, based on the determination of probability functions that best characterize different parameters of the model. The method was applied to the Real-time Interactive Basin Simulator (RIBS) distributed hydrological model using the Manzanares River basin in Spain as a case study. The proposed method allows us to consider the uncertainty in the model estimates by obtaining the probability distributions of flows in the flood hydrograph.

Citation Mediero, L., Garrote, L. & Martín-Carrasco, F. J. (2011) Probabilistic calibration of a distributed hydrological model for flood forecasting. Hydrol. Sci. J. 56(7), 1129–1149.

8.
Based on rock physics and seismic inversion theory, a method for the simultaneous inversion of reservoir porosity and water saturation is proposed. Starting from rock physics, a quantitative relationship between the petrophysical and elastic parameters of sand-shale reservoirs, the Simon model, is established. Using Bayesian theory and accounting for different types of sand-shale reservoirs, an objective function for petrophysical parameter inversion jointly constrained by multiple types of information is constructed and solved by combining Monte Carlo sampling with a genetic algorithm, finally yielding simultaneous inversion results for porosity and water saturation. The method was applied to two different sand-shale reservoirs, channel sands and glutenites; the joint use of porosity and water-saturation data further reduces the non-uniqueness of reservoir prediction and provides richer and more accurate basic data for integrated petroleum geology research.
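As an illustration of the Monte Carlo plus genetic-algorithm idea, the sketch below runs a simple stochastic selection-and-mutation search for (porosity, water saturation) against a placeholder linear relation; it is not the Simon model or the paper's multi-information objective function.

```python
# Sketch: a small genetic-algorithm-style stochastic search for (porosity, Sw)
# minimizing misfit to "observed" elastic properties. The linear placeholder
# relation stands in for a calibrated rock-physics model.
import numpy as np

rng = np.random.default_rng(3)

def elastic(por, sw):                       # placeholder rock-physics forward model
    ip = 9.0 - 8.0 * por + 1.0 * sw         # pseudo P-impedance
    vpvs = 1.6 + 0.5 * por + 0.3 * sw       # pseudo Vp/Vs
    return np.array([ip, vpvs])

obs = elastic(0.22, 0.45)
lo, hi = np.array([0.0, 0.0]), np.array([0.4, 1.0])

pop = rng.uniform(lo, hi, size=(60, 2))
for gen in range(80):
    misfit = np.array([np.sum((elastic(*ind) - obs) ** 2) for ind in pop])
    parents = pop[np.argsort(misfit)[:20]]              # selection (elitism)
    children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 0.01, (60, 2))
    pop = np.clip(children, lo, hi)                     # mutation + bounds
best = pop[np.argmin([np.sum((elastic(*ind) - obs) ** 2) for ind in pop])]
print("porosity, Sw estimate:", np.round(best, 3))
```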

9.
Uncertainty analysis of the factors influencing the AVO characteristics of sandstone reservoirs
Conventional seismic AVO forward modeling studies mostly adopt rock-physics models with fixed parameters, whereas the actual formation property parameters are uncertain across the exploration area. Based on laboratory measurements of core samples from the target formation, this study simplifies the rock-physics model by exploiting the highly linear relationship between sample porosity and the dry-rock P- and S-wave impedances, and establishes probability density functions of the main model parameters by combining experimental measurements with well-log interpretation. Monte-Carlo stochastic forward modeling and G...

10.
Approximate copula-based estimation and prediction of discrete spatial data
The present paper reports on the use of copula functions to describe the distribution of discrete spatial data, e.g. count data from environmental mapping or areal data analysis. In particular, we consider approaches to parameter point estimation and propose a fast method to perform approximate spatial prediction in copula-based spatial models with discrete marginal distributions. We assess the goodness of the resulting parameter estimates and predictors under different spatial settings and guide the analyst on which approach to apply for the data at hand. Finally, we illustrate the methodology by analyzing the well-known Lansing Woods data set. Software that implements the methods proposed in this paper is freely available as Matlab code on the author’s website.
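A small sketch of the copula construction for discrete spatial data: a latent Gaussian field with an exponential correlogram is transformed into Poisson counts. The correlation model and marginal parameters are assumptions, and the paper's estimation and prediction steps are not reproduced (the authors' code is Matlab; Python is used here only for illustration).

```python
# Sketch: simulating spatially correlated count data with a Gaussian copula
# and Poisson marginals. The exponential correlation model and its parameters
# are assumptions.
import numpy as np
from scipy.stats import norm, poisson
from scipy.spatial.distance import cdist

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(200, 2))               # 200 spatial locations
corr = np.exp(-cdist(coords, coords) / 3.0)              # exponential correlogram
z = rng.multivariate_normal(np.zeros(len(coords)), corr) # latent Gaussian field
u = norm.cdf(z)                                          # copula uniforms
counts = poisson.ppf(u, mu=5.0).astype(int)              # discrete marginals
print(counts[:10], counts.mean(), counts.var())
```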

11.
In climate models, the land–atmosphere interactions are described numerically by land surface parameterization (LSP) schemes. The continuing improvement in realism in these schemes comes at the expense of the need to specify a large number of parameters that are either directly measured or estimated. Also, an emerging problem is whether the relationships used in LSPs are universal and globally applicable. One plausible approach to evaluate this is to first minimize uncertainty in model parameters by calibration. In this paper, we conduct a comprehensive analysis of some model diagnostics using a slightly modified version of the Simple Biosphere 3 model for a variety of biomes located mainly in the Amazon. First, the degree of influence of each individual parameter in simulating surface fluxes is identified. Next, we estimate parameters using a multi-operator genetic algorithm applied in a multi-objective context and evaluate simulations of energy and carbon fluxes against observations. Compared with the default parameter sets, these parameter estimates improve the partitioning of energy fluxes in forest and cropland sites and provide better simulations of daytime increases in assimilation of net carbon during the dry season at forest sites. Finally, a detailed assessment of the parameter estimation problem was performed by accounting for the decomposition of the mean squared error to the total model uncertainty. Analysis of the total prediction uncertainty reveals that the parameter adjustments significantly improve reproduction of the mean and variability of the flux time series at all sites and generally remove seasonality of the errors but do not improve dynamical properties. Our results demonstrate that error decomposition provides a meaningful and intuitive way to understand differences in model performance. To make further advancements in the knowledge of these models, we encourage the LSP community to adopt similar approaches in the future. Copyright © 2012 John Wiley & Sons, Ltd.
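The error decomposition referred to above is commonly written as MSE = (mean_sim - mean_obs)^2 + (sd_sim - sd_obs)^2 + 2*sd_sim*sd_obs*(1 - r), separating bias, variability and timing/correlation errors. A sketch with synthetic flux series (not SiB3 output):

```python
# Sketch: decomposing mean squared error into bias, variance and correlation
# terms. The "observed" and "simulated" flux series are synthetic.
import numpy as np

def mse_decomposition(sim, obs):
    bias2 = (sim.mean() - obs.mean()) ** 2
    var_term = (sim.std() - obs.std()) ** 2
    r = np.corrcoef(sim, obs)[0, 1]
    phase_term = 2 * sim.std() * obs.std() * (1 - r)      # timing/correlation error
    return bias2, var_term, phase_term                    # sums to the MSE

rng = np.random.default_rng(5)
t = np.arange(365)
obs = 100 + 60 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 10, t.size)
sim = 110 + 45 * np.sin(2 * np.pi * (t - 15) / 365) + rng.normal(0, 10, t.size)
parts = mse_decomposition(sim, obs)
print(np.round(parts, 1), round(np.mean((sim - obs) ** 2), 1))  # parts vs total
```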

12.
A general method for estimating ground-water solute mass transfer rate parameters from field test data is presented. The method entails matching solute concentration and hydraulic head data collected during the recovery phase of a pumping test through application of a simulation-regression technique. Estimation of hydraulic conductivity and mass transfer rate parameter values is performed by fitting model simulations to the data. Parameter estimates are utilized to assess cleanup times for pump-and-treat aquifer remediation scenarios. Uncertainty in the cleanup time estimate is evaluated using statistical information obtained with the parameter estimation technique. Application of the method is demonstrated using a hypothetical ground-water flow and solute transport system. Simulations of field testing, parameter estimation, and remedial time frames are performed to evaluate the usefulness of the method. Sets of random noise that signify potential field and laboratory measurement errors are combined with the hypothetical data to provide rigorous testing of the method. Field tests are simulated using ranges of values for data noise, the mass transfer rate parameters, the test pumping rates, and the duration of recovery monitoring to evaluate their respective influence on parameter and cleanup time estimates. The demonstration indicates the method is capable of yielding accurate estimates of the solute mass transfer rate parameters. When the parameter values for the hypothetical system are well estimated, cleanup time predictions are shown to be more accurate than when calculated using the local equilibrium assumption.
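A minimal sketch of the simulation-regression idea: fit a rate parameter to synthetic recovery data, then propagate its standard error to a cleanup-time estimate with the delta method. The exponential decay model and the cleanup criterion are placeholders for the paper's flow and transport simulator.

```python
# Sketch: a simulation-regression fit of a first-order mass-transfer rate
# from synthetic recovery-phase concentrations, with the parameter covariance
# propagated to a cleanup-time estimate. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def conc(t, c0, k):                       # simple first-order decay model
    return c0 * np.exp(-k * t)

rng = np.random.default_rng(6)
t = np.linspace(0, 200, 40)               # days of recovery monitoring
obs = conc(t, 12.0, 0.02) * (1 + rng.normal(0, 0.05, t.size))   # noisy data

popt, pcov = curve_fit(conc, t, obs, p0=[10.0, 0.01])
c0, k = popt
k_sd = np.sqrt(pcov[1, 1])

target = 0.5                              # cleanup criterion (mg/L)
t_clean = np.log(c0 / target) / k         # time to reach the target
t_sd = t_clean * k_sd / k                 # delta method, rate uncertainty only
print(f"k = {k:.4f} +/- {k_sd:.4f} 1/d, cleanup ~ {t_clean:.0f} +/- {t_sd:.0f} d")
```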

13.
任梦依, 刘哲. 地震学报 (Acta Seismologica Sinica), 2022, 44(6): 1035-1048
Seismicity models built on the generalized Pareto distribution (GPD) have input parameters whose values are inevitably uncertain, so the seismic hazard estimates derived from such models are also uncertain. Taking the northeastern margin of the Tibetan Plateau as the study area, this paper proposes a workflow and method for analyzing the uncertainty of seismic hazard estimates based on global sensitivity analysis. First, seismic hazard in the study area is estimated with a generalized Pareto seismicity model. Then, the starting time of the earthquake record and the magnitude threshold are chosen as input parameters of the seismicity model, and the E-FAST method, which provides global sensitivity analysis, is used to quantify how the uncertainty of these two parameters, and their interaction, affects the uncertainty of the hazard estimates. The results show that the hazard estimates (magnitude return levels for different return periods, the upper-bound magnitude, and the corresponding confidence intervals) are more sensitive to the magnitude threshold than to the starting time; the sensitivity to the magnitude threshold differs across return periods; and, for different return periods, nonlinear interaction effects between the two input parameters influence the uncertainty of the hazard estimates to differing degrees. The proposed workflow and method can be extended to uncertainty analysis of seismic hazard estimates based on other types of seismicity models.
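A small sketch of how return levels and the implied upper-bound magnitude follow from a generalized Pareto exceedance model; the threshold, GPD parameters and exceedance rate are invented, and the E-FAST sensitivity analysis itself is not shown.

```python
# Sketch: magnitude return levels from a generalized Pareto model of
# exceedances over a magnitude threshold. The catalogue numbers, threshold
# and GPD parameters are invented for illustration.
import numpy as np
from scipy.stats import genpareto

u = 4.0                        # magnitude threshold
xi, sigma = -0.15, 0.55        # assumed GPD shape and scale
rate = 8.0                     # mean number of exceedances of u per year

def return_level(T_years):
    """Magnitude exceeded on average once every T_years."""
    p = 1.0 / (rate * T_years)            # exceedance probability per event
    return u + genpareto.ppf(1 - p, c=xi, scale=sigma)

for T in (10, 50, 100, 475):
    print(f"{T:4d}-yr return level: M {return_level(T):.2f}")
# With xi < 0 the GPD implies a finite upper-bound magnitude u - sigma/xi:
print("implied upper bound: M", round(u - sigma / xi, 2))
```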

14.
Hydrological models are useful tools for better understanding the hydrological processes and performing hydrological prediction. However, the reliability of the prediction depends largely on its uncertainty range. This study mainly focuses on estimating model parameter uncertainty and quantifying the simulation uncertainties caused by model parameters alone and by the co-effects of model parameters and model structure in a lumped conceptual water balance model called WASMOD (Water And Snow balance MODeling system). The validity of the statistical hypotheses on residuals made in the model formulation is tested as well, as it is the basis of parameter estimation and simulation uncertainty evaluation. The bootstrap method is employed to examine the parameter uncertainty in the selected model. The Yingluoxia watershed at the upper reaches of the Heihe River basin in north-west China is selected as the study area. Results show that all parameters in the model can be regarded as normally distributed based on their marginal distributions and the Kolmogorov–Smirnov test, although two of them appear slightly skewed. Their uncertainty ranges are different from each other. The model residuals are tested to be independent, homoscedastic and normally distributed. Based on such valid hypotheses of model residuals, simulation uncertainties caused by co-effects of model parameters and model structure can be evaluated effectively. It is found that the 95% and 99% confidence intervals (CIs) of simulated discharge cover 42.7% and 52.4% of the observations when only parameter uncertainty is considered, indicating that parameter uncertainty has a great effect on simulation uncertainty but still cannot explain all the simulation uncertainty in this study. The 95% and 99% CIs become wider, and the percentages of observations falling inside such CIs become larger, when co-effects of parameters and model structure are considered, indicating that simultaneous consideration of both parameter and model structure uncertainties accounts for a sufficient share of the model simulation uncertainty. Copyright © 2012 John Wiley & Sons, Ltd.
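A minimal sketch of the residual-bootstrap loop used for parameter uncertainty, applied to a toy linear stand-in rather than WASMOD:

```python
# Sketch: residual bootstrap for parameter uncertainty on a toy two-parameter
# model. The linear model is a placeholder; the point is the bootstrap loop.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 120)                    # forcing (e.g. scaled precipitation)
X = np.column_stack([np.ones_like(x), x])
y = X @ np.array([0.3, 1.7]) + rng.normal(0, 0.1, x.size)   # synthetic "discharge"

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

boot = []
for _ in range(2000):
    y_star = X @ beta_hat + rng.choice(resid, size=resid.size, replace=True)
    b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    boot.append(b)
boot = np.array(boot)
print("bootstrap means:", boot.mean(axis=0))
print("95% intervals:", np.percentile(boot, [2.5, 97.5], axis=0).T)
```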

15.
Parameter uncertainty in hydrologic modeling is crucial to flood simulation and forecasting. The Bayesian approach allows one to estimate parameters according to prior expert knowledge as well as observational data about model parameter values. This study assesses the performance of two popular uncertainty analysis (UA) techniques, i.e., generalized likelihood uncertainty estimation (GLUE) and the Bayesian method implemented with the Markov chain Monte Carlo sampling algorithm, in evaluating model parameter uncertainty in flood simulations. These two methods were applied to the semi-distributed Topographic hydrologic model (TOPMODEL), which includes five parameters. A case study was carried out for a small humid catchment in southeastern China. The performance assessment of the GLUE and Bayesian methods was conducted with advanced tools suited for probabilistic simulations of continuous variables such as streamflow. Graphical tools and scalar metrics were used to test several attributes of the simulation quality of selected flood events: deterministic accuracy and the accuracy of the 95% prediction probability uncertainty band (95PPU). Sensitivity analysis was conducted to identify sensitive parameters that largely affect the model output results. Subsequently, the GLUE and Bayesian methods were used to analyze the uncertainty of sensitive parameters and further to produce their posterior distributions. Based on these posterior parameter samples, TOPMODEL simulations and the corresponding UA results were produced. Results show that the form of exponential decline in conductivity and the overland flow routing velocity were sensitive parameters in TOPMODEL in our case. Small changes in these two parameters would lead to large differences in flood simulation results. Results also suggest that, for both UA techniques, most of the streamflow observations were bracketed by the 95PPU, with a containing ratio larger than 80%. In comparison, GLUE gave narrower prediction uncertainty bands than the Bayesian method. It was found that the mode estimates of the parameter posterior distributions yield better deterministic performance than the 50% percentiles for both the GLUE and Bayesian analyses. In addition, the simulation results calibrated with the Rosenbrock optimization algorithm show better agreement with the observations than the UA 50% percentiles but slightly worse than the hydrographs from the mode estimates. The results clearly emphasize the importance of using model uncertainty diagnostic approaches in flood simulations.
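The GLUE recipe summarized above reduces to three steps: sample parameters, keep "behavioural" runs above an informal likelihood threshold, and read prediction bands off the behavioural simulations. A toy sketch follows (a one-parameter placeholder model, not TOPMODEL).

```python
# Sketch: the core of a GLUE analysis - Monte Carlo sampling of parameters,
# a likelihood-based behavioural cut-off, and a 95% prediction band (95PPU).
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(100)

def model(k):                                   # placeholder recession model
    return 50.0 * np.exp(-k * t)

obs = model(0.05) + rng.normal(0, 1.5, t.size)

k_samples = rng.uniform(0.01, 0.15, 5000)       # uniform prior sampling
sims = np.array([model(k) for k in k_samples])
nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

behavioural = sims[nse > 0.7]                   # informal likelihood threshold
lower, upper = np.percentile(behavioural, [2.5, 97.5], axis=0)
inside = np.mean((obs >= lower) & (obs <= upper))
print(f"{len(behavioural)} behavioural runs, 95PPU containing ratio = {inside:.2f}")
```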

16.
This paper presents a new explicit method for the estimation of layered vertical transverse isotropy (VTI) anisotropy parameters from walkaway VSP data. The method is based on Dix-type normal moveout (NMO) inversion. To estimate interval anisotropy parameters above a receiver array, the method uses time arrivals of surface-related double-reflected downgoing waves. A three-term NMO approximation function is used to estimate the NMO velocity and a non-hyperbolic parameter. Assuming the vertical velocity is known from zero-offset VSP data, Dix-type inversion is applied to estimate the layered Thomsen anisotropy parameters ε and δ above the receiver array. Model results show reasonable accuracy for estimates through Dix-type inversion. Results also show that in many cases we can neglect the influence of the velocity gradient on anisotropy estimates. First breaks are used to estimate anisotropy parameters within the walkaway receiver interval. Analytical uncertainty analysis is performed on the NMO parameter estimates. Its conclusions are confirmed by modelling.
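A sketch of the Dix-type step: effective NMO velocities are differentiated to interval values, and Thomsen delta follows from the known vertical velocity via V_nmo = V_P0*sqrt(1 + 2*delta). The layer values are invented, and the estimation of epsilon from the non-hyperbolic term is omitted.

```python
# Sketch: Dix-type conversion of effective NMO velocities to interval NMO
# velocities, plus a VTI delta estimate from known vertical velocities.
# The layer numbers are invented.
import numpy as np

t0 = np.array([0.40, 0.75, 1.10])            # zero-offset two-way times (s)
v_nmo_eff = np.array([2000., 2180., 2320.])  # effective NMO velocities (m/s)
v_p0 = np.array([2000., 2350., 2500.])       # vertical velocities from zero-offset VSP

# Dix differentiation: interval Vnmo^2 = d(t * Veff^2) / dt
num = np.diff(np.concatenate([[0.0], t0 * v_nmo_eff ** 2]))
den = np.diff(np.concatenate([[0.0], t0]))
v_nmo_int = np.sqrt(num / den)

delta = 0.5 * ((v_nmo_int / v_p0) ** 2 - 1.0)   # Thomsen delta per layer
for i, (v, d) in enumerate(zip(v_nmo_int, delta), 1):
    print(f"layer {i}: Vnmo_int = {v:7.1f} m/s, delta = {d:+.3f}")
```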

17.
18.
We present results from the resolution and sensitivity analysis of 1D DC resistivity and IP sounding data using a non-linear inversion. The inversion scheme uses a theoretically correct Metropolis–Gibbs' sampling technique and an approximate method using numerous models sampled by a global optimization algorithm called very fast simulated annealing (VFSA). VFSA has recently been found to be computationally efficient in several geophysical parameter estimation problems. Unlike conventional simulated annealing (SA), in VFSA the perturbations are generated from the model parameters according to a Cauchy-like distribution whose shape changes with each iteration. This results in an algorithm that converges much faster than a standard SA. In the course of finding the optimal solution, VFSA samples several models from the search space. All these models can be used to obtain estimates of uncertainty in the derived solution. This method makes no assumptions about the shape of an a posteriori probability density function in the model space. Here, we carry out a VFSA-based sensitivity analysis with several synthetic and field sounding data sets for resistivity and IP. The resolution capability of the VFSA algorithm as seen from the sensitivity analysis is satisfactory. The interpretation of VES and IP sounding data by VFSA, incorporating resolution, sensitivity and uncertainty of layer parameters, would generally be more useful than the conventional best-fit techniques.
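A minimal sketch of what distinguishes VFSA: the temperature-dependent Cauchy-like perturbation and the exp(-c*k^(1/N)) cooling schedule, applied here to a toy two-parameter misfit rather than a resistivity/IP forward model; the bounds, constants and objective are assumptions.

```python
# Sketch: very fast simulated annealing (VFSA) on a toy two-parameter misfit.
# The misfit function, bounds and schedule constants are illustrative.
import numpy as np

rng = np.random.default_rng(9)
lo, hi = np.array([1.0, 5.0]), np.array([100.0, 50.0])   # parameter bounds

def misfit(m):                              # toy objective (true model = [40, 12])
    return np.sum((m - np.array([40.0, 12.0])) ** 2)

def vfsa_step(m, T):
    """Cauchy-like move: y in (-1, 1), heavier tails at higher temperature."""
    u = rng.uniform(size=m.size)
    y = np.sign(u - 0.5) * T * ((1 + 1.0 / T) ** np.abs(2 * u - 1) - 1)
    return np.clip(m + y * (hi - lo), lo, hi)

m, T0 = rng.uniform(lo, hi), 1.0
accepted = [m]
for k in range(1, 3001):
    T = T0 * np.exp(-1.0 * np.sqrt(k))      # fast cooling: exp(-c * k^(1/N)), N = 2
    trial = vfsa_step(m, T)
    if np.log(rng.uniform()) < (misfit(m) - misfit(trial)) / max(T, 1e-12):
        m = trial
        accepted.append(m)
models = np.array(accepted)                 # models sampled along the way
print("best:", m, " spread of accepted models:", models.std(axis=0))
```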

19.
The level of model complexity that can be effectively supported by available information has long been a subject of many studies in hydrologic modelling. In particular, distributed parameter models tend to be regarded as overparameterized because of the numerous parameters used to describe spatially heterogeneous hydrologic processes. However, it is not clear how parameters and observations influence the degree of overparameterization, equifinality of parameter values, and uncertainty. This study investigated the impact of the numbers of observations and parameters on calibration quality, including equifinality among calibrated parameter values, model performance, and output/parameter uncertainty, using the Soil and Water Assessment Tool model. In the experiments, the number of observations was increased by expanding the calibration period or by including measurements made at inner points of a watershed. Similarly, additional calibration parameters were included in the order of their sensitivity. Then, unique sets of parameters were calibrated with the same objective function, optimization algorithm, and stopping criteria but different numbers of observations. The calibration quality was quantified with statistics calculated from the 'behavioural' parameter sets, identified using 1% and 5% cut-off thresholds in a generalized likelihood uncertainty estimation framework. The study demonstrated that equifinality, model performance, and output/parameter uncertainty were responsive to the numbers of observations and calibration parameters; however, the relationship between the numbers, equifinality, and uncertainty was not always conclusive. Model performance improved with increased numbers of calibration parameters and observations, and substantial equifinality did not necessarily mean poor model performance or large uncertainty in the model outputs and parameters. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Soil heterogeneity and data sparsity combine to render estimates of infiltration rates uncertain. We develop reduced complexity models for the probabilistic forecasting of infiltration rates in heterogeneous soils during surface runoff and/or flooding events. These models yield closed-form semi-analytical expressions for the single- and multi-point infiltration-rate PDFs (probability density functions), which quantify predictive uncertainty stemming from uncertainty in soil properties. These solutions enable us to investigate the relative importance of uncertainty in various hydraulic parameters and the effects of their cross-correlation. At early times, the infiltration-rate PDFs computed with the reduced complexity models are in close agreement with their counterparts obtained from a full infiltration model based on the Richards equation. At all times, the reduced complexity models provide conservative estimates of predictive uncertainty.
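For comparison with such closed-form PDFs, a brute-force Monte Carlo reference is easy to set up: sample an uncertain saturated conductivity and push it through a Green-Ampt-type rate. The lognormal statistics and soil constants below are assumptions, and this is not the paper's semi-analytical solution.

```python
# Sketch: Monte Carlo reference for an infiltration-rate PDF under uncertain
# saturated conductivity, using a Green-Ampt-type rate f = Ks*(1 + psi*dtheta/F).
# The lognormal Ks statistics and the other soil values are assumptions.
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
Ks = rng.lognormal(mean=np.log(1.0), sigma=0.6, size=n)    # cm/h, uncertain
psi, dtheta = 11.0, 0.3                                    # suction (cm), moisture deficit
F = 2.0                                                    # cumulative infiltration (cm)

f = Ks * (1.0 + psi * dtheta / F)                          # infiltration capacity

q05, q50, q95 = np.percentile(f, [5, 50, 95])
print(f"median {q50:.2f} cm/h, 90% interval [{q05:.2f}, {q95:.2f}] cm/h")
hist, edges = np.histogram(f, bins=50, density=True)       # empirical PDF
mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print("modal rate ~", round(mode, 2), "cm/h")
```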
