Similar Documents
20 similar documents found (search time: 106 ms)
1.
Truncated pluri-Gaussian simulation (TPGS) is suitable for the simulation of categorical variables that show natural ordering, as the TPGS technique can consider transition probabilities. TPGS assumes that categorical variables are the result of the truncation of underlying latent variables. In practice, only the categorical variables are observed. This translates the practical application of TPGS into a missing data problem in which all latent variables are missing. Latent variables are required at data locations in order to condition categorical realizations to observed categorical data. The imputation of missing latent variables at data locations is often achieved by either assigning constant values or spatially simulating latent variables subject to categorical observations. Realizations of latent variables can be used to condition all model realizations. Using a single realization or a constant value to condition all realizations is the same as assuming that latent variables are known at the data locations, and this assumption affects uncertainty near data locations. This article investigates techniques for the imputation of latent variables in the TPGS framework and explores their impact on the uncertainty of simulated categorical models, as well as possible effects on factors affecting decision making. It is shown that the use of a single realization of latent variables leads to underestimation of uncertainty and overestimation of measured resources, while the use of constant values for latent variables may lead to considerable over- or underestimation of measured resources. The results highlight the importance of multiple data imputation in the context of TPGS.
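As a minimal illustration of the truncation rule and of why multiple imputation matters, the hypothetical Python sketch below (not the authors' code; the thresholds and the latent distribution are assumed) draws several latent values consistent with a single observed category instead of fixing one constant value.

```python
import numpy as np

rng = np.random.default_rng(0)
thresholds = [-0.5, 0.8]      # assumed truncation thresholds for three ordered categories

def truncate(latent):
    """Map a latent Gaussian value to an ordered category (0, 1 or 2)."""
    return int(np.searchsorted(thresholds, latent))

observed_category = 1         # category observed at a data location

# Multiple imputation: draw latent values consistent with the observed category
# (simple rejection sampling from the standard normal latent distribution).
imputed = []
while len(imputed) < 100:
    z = rng.standard_normal()
    if truncate(z) == observed_category:
        imputed.append(z)

# A single constant value (e.g. the interval midpoint) removes this spread entirely.
constant = 0.5 * (thresholds[0] + thresholds[1])
print(f"spread of imputed latent values: {np.std(imputed):.2f} vs. constant: {constant:.2f}")
```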

2.
Estimating and mapping spatial uncertainty of environmental variables is crucial for environmental evaluation and decision making. For a continuous spatial variable, estimation of spatial uncertainty may be conducted in the form of estimating the probability of (not) exceeding a threshold value. In this paper, we introduce a Markov chain geostatistical approach for estimating threshold-exceeding probabilities. The differences between this approach and the conventional indicator approach lie in its nonlinear estimators (Markov chain random field models) and its incorporation of interclass dependencies through transiograms. We estimated threshold-exceeding probability maps of clay layer thickness through simulation (i.e., using a number of realizations simulated by Markov chain sequential simulation) and interpolation (i.e., direct conditional probability estimation using only the indicator values of sample data), respectively. To evaluate the approach, we also estimated those probability maps using sequential indicator simulation and indicator kriging interpolation. Our results show that (i) the Markov chain approach provides an effective alternative for spatial uncertainty assessment of environmental spatial variables, and the probability maps from this approach are more reasonable than those from conventional indicator geostatistics, and (ii) the probability maps estimated through sequential simulation are more realistic than those obtained through interpolation, because the latter display some uneven transitions caused by spatial structures of the sample data.
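The simulation-based route to a threshold-exceeding probability map amounts to counting, cell by cell, how many realizations exceed the threshold. A minimal sketch using synthetic stand-ins (the realization stack and the threshold are assumed, not the paper's clay-thickness data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stack of simulated clay-layer thickness realizations (n_realizations, ny, nx);
# synthetic stand-ins for fields produced by Markov chain sequential simulation.
realizations = rng.lognormal(mean=1.0, sigma=0.4, size=(200, 50, 50))

threshold = 3.0   # assumed threshold (e.g. metres)

# Simulation-based estimate: per-cell fraction of realizations exceeding the threshold.
p_exceed = (realizations > threshold).mean(axis=0)
print(p_exceed.shape, float(p_exceed.min()), float(p_exceed.max()))
```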

3.
With rapid advances of geospatial technologies, the amount of spatial data has been increasing exponentially over the past few decades. Usually collected by diverse source providers, the available spatial data tend to be fragmented by a large variety of data heterogeneities, which highlights the need for sound methods capable of efficiently fusing the diverse and incompatible spatial information. Within the context of spatial prediction of categorical variables, this paper describes a statistical framework for integrating and drawing inferences from a collection of spatially correlated variables while accounting for data heterogeneities and complex spatial dependencies. In this framework, we discuss the spatial prediction of categorical variables in the paradigm of latent random fields, and represent each spatial variable via spatial covariance functions, which define two-point similarities or dependencies of spatially correlated variables. The representation of spatial covariance functions derived from different spatial variables is independent of heterogeneous characteristics and can be combined in a straightforward fashion. Therefore, it provides a unified and flexible representation of heterogeneous spatial variables in spatial analysis while accounting for complex spatial dependencies. We show that in the spatial prediction of categorical variables, the sought-after class occurrence probability at a target location can be formulated as a multinomial logistic function of the spatial covariances between the target and sampled locations. The group least absolute shrinkage and selection operator (LASSO) is adopted for parameter estimation, which prevents the model from over-fitting and simultaneously selects an optimal subset of important information (variables). Synthetic and real case studies are provided to illustrate the introduced concepts and showcase the advantages of the proposed statistical framework.
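The central prediction step, a multinomial logistic (softmax) function of covariance-based predictors, can be sketched as follows; the feature values and weights are random placeholders rather than estimates from a group-LASSO fit.

```python
import numpy as np

def class_probabilities(cov_features, weights):
    """
    Multinomial logistic (softmax) class-occurrence probabilities built from
    spatial covariances between the target location and sampled locations.

    cov_features : (n_features,) covariance-based predictors at the target location
    weights      : (n_classes, n_features) coefficients, e.g. from a group-LASSO fit
    """
    scores = weights @ cov_features
    scores -= scores.max()              # numerical stability
    exps = np.exp(scores)
    return exps / exps.sum()

rng = np.random.default_rng(2)
probs = class_probabilities(rng.random(4), rng.normal(size=(3, 4)))   # 3 classes, 4 features
print(probs, probs.sum())
```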

4.
Multi-site simulation of hydrological data is required for drought risk assessment of large multi-reservoir water supply systems. In this paper, a general Bayesian framework is presented for the calibration and evaluation of multi-site hydrological data at annual timescales. Models included within this framework are the hidden Markov model (HMM) and the widely used lag-1 autoregressive (AR(1)) model. These models are extended by the inclusion of a Box–Cox transformation and a spatial correlation function in a multi-site setting. Parameter uncertainty is evaluated using Markov chain Monte Carlo techniques. Models are evaluated by their ability to reproduce a range of important extreme statistics and compared using Bayesian model selection techniques which evaluate model probabilities. The case study, using multi-site annual rainfall data from catchments which contribute to Sydney’s main water supply, provided the following results. Firstly, in terms of model probabilities and diagnostics, the inclusion of the Box–Cox transformation was preferred. Secondly, the AR(1) and HMM performed similarly, while some other proposed AR(1)/HMM models with regionally pooled parameters had greater posterior probability than these two models. The practical significance of parameter and model uncertainty was illustrated using a case study involving drought security analysis for urban water supply. It was shown that ignoring parameter uncertainty resulted in a significant overestimate of reservoir yield and an underestimation of system vulnerability to severe drought.
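To make the model structure concrete, here is a minimal sketch of a lag-1 autoregressive process simulated in Box–Cox-transformed space and back-transformed to the rainfall scale; all parameter values are assumptions for illustration, not the calibrated posteriors.

```python
import numpy as np

rng = np.random.default_rng(3)

def boxcox(y, lam):
    return (y**lam - 1.0) / lam if lam != 0 else np.log(y)

def inv_boxcox(z, lam):
    return (lam * z + 1.0) ** (1.0 / lam) if lam != 0 else np.exp(z)

# All values below are assumptions for illustration, not calibrated posteriors.
lam, phi, mu, sigma, n_years = 0.3, 0.5, 5.0, 1.0, 100

# Lag-1 autoregressive process in Box-Cox-transformed space.
z = np.empty(n_years)
z[0] = mu
for t in range(1, n_years):
    z[t] = mu + phi * (z[t - 1] - mu) + sigma * rng.standard_normal()

annual_rainfall = inv_boxcox(z, lam)   # back-transform to the rainfall scale
print(annual_rainfall[:5].round(1))
```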

5.
Markov chain models are attracting increasing attention in stochastic reservoir modeling, but they are mostly used to simulate categorical attributes (lithofacies, sedimentary facies, sedimentary subfacies, etc.); simulating continuous attributes (porosity, permeability, oil/gas saturation, etc.) remains difficult. This paper proposes a facies-controlled modeling approach that uses a Markov chain model to simulate continuous attributes: a Markov chain model first simulates the categorical attribute, and the continuous attribute is then simulated under the constraint of that categorical attribute, which resolves the problem that continuous-attribute simulation cannot reproduce abrupt boundaries. Finally, simulation experiments were conducted with this method; the results show that porosity differs considerably between lithofacies but varies little within the same lithofacies, demonstrating the reliability and applicability of the method.
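A minimal sketch of the facies-controlled idea, assuming a two-facies transition matrix and facies-specific porosity distributions (all values hypothetical): the Markov chain first generates the categorical attribute, and the continuous attribute is then drawn under the facies constraint, which reproduces abrupt porosity changes at facies boundaries.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed two-facies (0: sand, 1: shale) vertical transition matrix and
# facies-specific porosity distributions; all values are hypothetical.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
porosity_params = {0: (0.25, 0.03), 1: (0.08, 0.02)}   # (mean, standard deviation)

n_cells = 200
facies = np.empty(n_cells, dtype=int)
facies[0] = 0
for i in range(1, n_cells):
    facies[i] = rng.choice(2, p=P[facies[i - 1]])       # step 1: categorical attribute

# Step 2: continuous attribute drawn under the facies constraint, so porosity
# changes abruptly at facies boundaries but varies little within a facies.
porosity = np.array([rng.normal(*porosity_params[int(f)]) for f in facies])
print(porosity[:10].round(3))
```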

6.
Electrical resistivity tomography is a non-linear and ill-posed geophysical inverse problem that is usually solved through gradient-descent methods. This strategy is computationally fast and easy to implement but impedes accurate uncertainty appraisals. We present a probabilistic approach to two-dimensional electrical resistivity tomography in which a Markov chain Monte Carlo algorithm is used to numerically evaluate the posterior probability density function that fully quantifies the uncertainty affecting the recovered solution. The main drawback of Markov chain Monte Carlo approaches is the considerable number of sampled models needed to achieve accurate posterior assessments in high-dimensional parameter spaces. Therefore, to reduce the computational burden of the inversion process, we employ the differential evolution Markov chain, a hybrid method between non-linear optimization and Markov chain Monte Carlo sampling, which exploits multiple interactive chains to speed up the probabilistic sampling. Moreover, a discrete cosine transform reparameterization is employed to reduce the dimensionality of the parameter space by removing the high-frequency components of the resistivity model to which the data are insensitive. In this framework, the unknown parameters become the series of coefficients associated with the retained discrete cosine transform basis functions. First, synthetic data inversions are used to validate the proposed method and to demonstrate the benefits provided by the discrete cosine transform compression. To this end, we compare the outcomes of the implemented approach with those provided by a differential evolution Markov chain algorithm running in the full, un-reduced model space. Then, we apply the method to invert field data acquired along a river embankment. The results yielded by the implemented approach are also benchmarked against a standard local inversion algorithm. The proposed Bayesian inversion provides posterior mean models in agreement with the predictions achieved by the gradient-based inversion, but it also provides model uncertainties, which can be used for penetration depth and resolution limit identification.
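The discrete cosine transform compression can be illustrated in a few lines: a 2-D model is transformed, only an assumed low-frequency block of coefficients is retained as the new unknowns, and a model is reconstructed by the inverse transform (the array size and the retained block are placeholders, not the paper's settings).

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(5)
model = rng.normal(size=(40, 60))           # stand-in for a 2-D log-resistivity model

coeffs = dctn(model, norm="ortho")

# Retain only an assumed low-frequency block of DCT coefficients: these become
# the unknowns of the reduced inverse problem.
kept = np.zeros_like(coeffs)
kept[:8, :12] = coeffs[:8, :12]

reduced_model = idctn(kept, norm="ortho")   # model implied by the retained coefficients
print("parameters before/after reduction:", model.size, 8 * 12)
```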

7.
In this paper, we investigate the information content in “nanosensors” with limited functionality that might be injected into a reservoir or an aquifer to provide information on the spatial distribution of properties. The two types of sensors that we consider are sensors that can potentially measure pressure at various times during transport, and sensors that can be located in space by perturbations in electrical, magnetic, or acoustic properties. The intent of the study is to determine the resolution of estimates of properties that can be obtained from various combinations of sensors, various frequencies of observations, and various specifications on sensor precision. Our goal is to investigate the resolution of model estimates for various types of measurements. For this, we compute linearized estimates of the sensitivity of the observations to the porosity and permeability, assuming Gaussian errors in the pressure and location observations. Because the flow is one-dimensional and incompressible, observations of location are sensitive to the porosity between the injection location and the sensor location, while the location of particles is sensitive to the effective permeability over the entire interval from injector to producer. When only the pressure is measured but the location of the sensor is unknown, as might be the situation for a threshold sensor, the pressure is sensitive to both permeability and porosity only in the region between the injector and the sensor. In addition to the linearized sensitivity and resolution analyses, Markov chain Monte Carlo sampling is used to estimate the posterior pdf of model variables for realistic (non-Gaussian) likelihood models. For a Markov chain of one million samples, approximately 200–500 independent samples are generated for uncertainty and resolution assessment. Results from the MCMC analysis are not in conflict with the linearized analysis.
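The gap between one million MCMC samples and a few hundred effectively independent ones comes from chain autocorrelation. A rough, generic sketch of an autocorrelation-based effective sample size estimate is shown below; it uses a common truncation heuristic and is not the authors' diagnostic.

```python
import numpy as np

def effective_sample_size(chain):
    """Rough ESS estimate for a 1-D chain from its positive-lag autocorrelations."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    tau = 1.0
    for r in acf[1:]:
        if r < 0:                      # truncate at the first negative autocorrelation
            break
        tau += 2.0 * r
    return n / tau

# Strongly autocorrelated toy chain: the ESS is far smaller than the chain length.
rng = np.random.default_rng(6)
chain = np.empty(10_000)
chain[0] = 0.0
for t in range(1, len(chain)):
    chain[t] = 0.99 * chain[t - 1] + rng.standard_normal()
print(f"chain length: {len(chain)}, approximate ESS: {effective_sample_size(chain):.0f}")
```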

8.
The spatial distribution of residual light non-aqueous phase liquid (LNAPL) is an important factor in reactive solute transport modeling studies. There is great uncertainty associated with both the areal limits of LNAPL source zones and the smaller scale variability within those limits. A statistical approach is proposed to construct a probabilistic model for the spatial distribution of residual NAPL, and it is applied to a site characterized by cone penetration testing with ultraviolet-induced fluorescence (CPT–UVIF). The uncertainty in the areal limits is explicitly addressed by a novel distance function (DF) approach. In modeling the small-scale variability within the areal limits, the CPT–UVIF data are used as the primary source of information, while soil texture and distance to the water table are treated as secondary data. Two widely used geostatistical techniques are applied for the data integration, namely sequential indicator simulation with locally varying means (SIS–LVM) and Bayesian updating (BU). A close match between the calibrated uncertainty band (UB) and the target probabilities shows the performance of the proposed DF technique in characterizing uncertainty in the areal limits. A cross-validation study also shows that the integration of the secondary data sources substantially improves the prediction of contaminated and uncontaminated locations, and that the SIS–LVM algorithm gives a more accurate prediction of residual NAPL contamination. The proposed DF approach is useful in modeling the areal limits of non-stationary continuous or categorical random variables, and in providing a prior probability map of source zone sizes for use in Monte Carlo simulations of contaminant transport or Monte Carlo type inverse modeling studies.

9.
Inverse modeling is widely used to assist with forecasting problems in the subsurface. However, full inverse modeling can be time-consuming, requiring iteration over a high-dimensional parameter space with computationally expensive forward models and complex spatial priors. In this paper, we investigate a prediction-focused approach (PFA) that aims at building a statistical relationship between data variables and forecast variables, avoiding the inversion of model parameters altogether. The statistical relationship is built by first applying the forward model related to the data variables and the forward model related to the prediction variables on a limited set of spatial prior model realizations, typically generated through geostatistical methods. The relationship observed between data and prediction is highly non-linear for many forecasting problems in the subsurface. In this paper we propose a Canonical Functional Component Analysis (CFCA) to map the data and forecast variables into a low-dimensional space where, if successful, the relationship is linear. CFCA consists of (1) functional principal component analysis (FPCA) for dimension reduction of time-series data and (2) canonical correlation analysis (CCA), the latter aiming to establish a linear relationship between data and forecast components. If such a mapping is successful, then we illustrate with several cases that (1) simple regression techniques within a multi-Gaussian framework can be used to directly quantify uncertainty on the forecast without any model inversion, and that (2) such uncertainty is a good approximation of the uncertainty obtained from full posterior sampling with rejection sampling.
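The two-stage mapping, dimension reduction followed by canonical correlation, can be sketched with standard tools; the sketch below substitutes ordinary PCA for FPCA and uses a purely synthetic ensemble, so it illustrates the workflow rather than the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(7)

# Synthetic stand-ins: 300 prior realizations whose simulated data series (d) and
# forecast series (h) are both driven by a few shared latent controls.
n_real = 300
latent = rng.normal(size=(n_real, 3))
d = latent @ rng.normal(size=(3, 500)) + 0.2 * rng.normal(size=(n_real, 500))
h = latent @ rng.normal(size=(3, 200)) + 0.2 * rng.normal(size=(n_real, 200))

# Step 1: dimension reduction of both time series (ordinary PCA standing in for FPCA).
d_scores = PCA(n_components=5).fit_transform(d)
h_scores = PCA(n_components=5).fit_transform(h)

# Step 2: CCA to linearize the relationship between data and forecast components.
d_c, h_c = CCA(n_components=3).fit_transform(d_scores, h_scores)
corrs = [round(float(np.corrcoef(d_c[:, i], h_c[:, i])[0, 1]), 2) for i in range(3)]
print("canonical correlations:", corrs)
```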

10.
We invert prestack seismic amplitude data to find rock properties of a vertical profile of the earth. In particular, we focus on lithology, porosity and fluid. Our model includes vertical dependencies of the rock properties. This allows us to compute quantities valid for the full profile, such as the probability that the vertical profile contains hydrocarbons and volume distributions of hydrocarbons. In a standard pointwise approach, these quantities cannot be assessed. We formulate the problem in a Bayesian framework and model the vertical dependency using spatial statistics. The relation between rock properties and elastic parameters is established through a stochastic rock model, and a convolutional model links the reflectivity to the seismic. A Markov chain Monte Carlo (MCMC) algorithm is used to generate multiple realizations that honour both the seismic data and the prior beliefs and respect the additional constraints imposed by the vertical dependencies. Convergence plots are used to provide a quality check of the algorithm and to compare it with a similar method. The implementation has been tested on three different data sets offshore Norway, one of which has well control. For all test cases the MCMC algorithm provides reliable estimates with uncertainty quantification within three hours. The inversion result is consistent with the observed well data. In the case example we show that the seismic amplitudes have a significant impact on the inversion result even if the data have a moderate well tie, and that this is due to the vertical dependency imposed on the lithology-fluid classes in our model. The vertical correlation in elastic parameters mainly influences the upside potential of the volume distribution. The approach is best suited to evaluating a few selected vertical profiles, since the MCMC algorithm is computationally demanding.

11.
Parameter uncertainty in hydrologic modeling is crucial to flood simulation and forecasting. The Bayesian approach allows one to estimate parameters according to prior expert knowledge as well as observational data about model parameter values. This study assesses the performance of two popular uncertainty analysis (UA) techniques, i.e., generalized likelihood uncertainty estimation (GLUE) and a Bayesian method implemented with a Markov chain Monte Carlo sampling algorithm, in evaluating model parameter uncertainty in flood simulations. These two methods were applied to the semi-distributed topographic hydrologic model (TOPMODEL), which includes five parameters. A case study was carried out for a small humid catchment in southeastern China. The performance assessment of the GLUE and Bayesian methods was conducted with advanced tools suited for probabilistic simulations of continuous variables such as streamflow. Graphical tools and scalar metrics were used to test several attributes of the simulation quality of selected flood events: deterministic accuracy and the accuracy of the 95 % prediction probability uncertainty band (95PPU). Sensitivity analysis was conducted to identify sensitive parameters that largely affect the model output results. Subsequently, the GLUE and Bayesian methods were used to analyze the uncertainty of the sensitive parameters and to produce their posterior distributions. Based on these posterior parameter samples, TOPMODEL simulations and the corresponding UA were conducted. Results show that the parameter controlling the exponential decline in conductivity and the overland flow routing velocity were the sensitive parameters of TOPMODEL in our case. Small changes in these two parameters would lead to large differences in flood simulation results. Results also suggest that, for both UA techniques, most streamflow observations were bracketed by the 95PPU, with a containing ratio larger than 80 %. In comparison, GLUE gave narrower prediction uncertainty bands than the Bayesian method. It was found that the mode estimates of the parameter posterior distributions produce better deterministic performance than the 50th percentiles for both the GLUE and Bayesian analyses. In addition, the simulation results calibrated with the Rosenbrock optimization algorithm show better agreement with the observations than the 50th percentiles from the UA, but slightly worse than the hydrographs from the mode estimates. The results clearly emphasize the importance of using model uncertainty diagnostic approaches in flood simulations.
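The two headline band diagnostics, the containing ratio of the 95PPU and its average width, follow directly from an ensemble of simulations; the sketch below uses synthetic stand-in streamflow rather than the catchment data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Ensemble of simulated streamflow (n_parameter_samples, n_timesteps) and observations;
# purely synthetic stand-ins for the posterior simulations and the gauged record.
sims = rng.lognormal(mean=1.0, sigma=0.3, size=(500, 365))
obs = np.exp(1.0 + 0.3 * rng.standard_normal(365))

lower = np.percentile(sims, 2.5, axis=0)
upper = np.percentile(sims, 97.5, axis=0)    # 95% prediction probability uncertainty band

containing_ratio = np.mean((obs >= lower) & (obs <= upper))   # fraction of obs inside 95PPU
mean_band_width = np.mean(upper - lower)
print(f"containing ratio: {containing_ratio:.2f}, mean band width: {mean_band_width:.2f}")
```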

12.
Although uncertainty about structures of environmental models (conceptual uncertainty) is often acknowledged to be the main source of uncertainty in model predictions, it is rarely considered in environmental modelling. Rather, formal uncertainty analyses have traditionally focused on model parameters and input data as the principal source of uncertainty in model predictions. The traditional approach to model uncertainty analysis, which considers only a single conceptual model, may fail to adequately sample the relevant space of plausible conceptual models. As such, it is prone to modelling bias and underestimation of predictive uncertainty.

13.
We consider a Bayesian model for inversion of observed amplitude variation with offset data into lithology/fluid classes, and study in particular how the choice of prior distribution for the lithology/fluid classes influences the inversion results. Two distinct prior distributions are considered, a simple manually specified Markov random field prior with a first-order neighbourhood and a Markov mesh model with a much larger neighbourhood estimated from a training image. They are chosen to model both horizontal connectivity and vertical thickness distribution of the lithology/fluid classes, and are compared on an offshore clastic oil reservoir in the North Sea. We combine both priors with the same linearized Gaussian likelihood function based on a convolved linearized Zoeppritz relation and estimate properties of the resulting two posterior distributions by simulating from these distributions with the Metropolis–Hastings algorithm. The influence of the prior on the marginal posterior probabilities for the lithology/fluid classes is clearly observable, but modest. The importance of the prior on the connectivity properties in the posterior realizations, however, is much stronger. The larger neighbourhood of the Markov mesh prior enables it to identify and model connectivity and curvature much better than can be done with the first-order neighbourhood Markov random field prior. As a result, we conclude that the posterior realizations based on the Markov mesh prior appear with much higher lateral connectivity, which is geologically plausible.
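Both posteriors are explored with the Metropolis–Hastings algorithm; the generic random-walk sketch below (toy Gaussian target, assumed proposal scale) shows the accept/reject step that drives such samplers and is not the paper's reservoir implementation.

```python
import numpy as np

rng = np.random.default_rng(9)

def metropolis_hastings(log_post, x0, proposal_std, n_iter=5000):
    """Generic random-walk Metropolis-Hastings sampler (symmetric proposal)."""
    x = np.asarray(x0, dtype=float)
    log_p = log_post(x)
    samples = []
    for _ in range(n_iter):
        x_new = x + proposal_std * rng.standard_normal(x.shape)
        log_p_new = log_post(x_new)
        if np.log(rng.random()) < log_p_new - log_p:   # accept/reject step
            x, log_p = x_new, log_p_new
        samples.append(x.copy())
    return np.array(samples)

# Toy posterior: standard bivariate Gaussian.
chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), np.zeros(2), 0.5)
print(chain.mean(axis=0).round(2), chain.std(axis=0).round(2))
```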

14.
Seismic rock physics acts as a bridge between rock moduli and the physical properties of hydrocarbon reservoirs. Prestack seismic inversion is an important method for the quantitative characterization of the elasticity, physical properties, lithology and fluid properties of subsurface reservoirs. In this paper, a high-order approximation of a rock physics model for clastic rocks is established, and a seismic AVO reflection equation characterized by the high-order approximation (Jacobian and Hessian matrices) of the rock moduli is derived. In addition, the contributions of porosity, shale content and fluid saturation to AVO reflectivity are analyzed, and the feasibility of the proposed AVO equation for the direct estimation of rock physical properties is discussed. On this basis, a probabilistic AVO inversion based on a differential evolution Markov chain Monte Carlo stochastic model is proposed, under the premise that the model parameters obey a Gaussian mixture prior. The stochastic model combines the global optimization characteristics of the differential evolution algorithm with the uncertainty analysis capability of the Markov chain Monte Carlo method. Through the parallel interaction of multiple Markov chains, multiple stochastic solutions of the model parameters can be obtained simultaneously, and the posterior probability density distribution of the model parameters can be simulated effectively. The posterior mean is treated as the optimal solution of the inverted model, while the variance and confidence intervals are used to evaluate the uncertainty of the estimated results, so as to realize the simultaneous estimation of reservoir elasticity, physical properties, discrete lithofacies and the dry rock skeleton. The validity of the proposed approach is verified by theoretical tests and a real application case in eastern China.
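The differential evolution Markov chain idea, in which each chain proposes a jump along the difference between two other randomly chosen chains, can be sketched generically as follows (toy Gaussian target and standard tuning constants; not the paper's AVO inversion code).

```python
import numpy as np

rng = np.random.default_rng(10)

def de_mc_step(chains, log_post, gamma=None, eps=1e-4):
    """
    One differential evolution Markov chain update: each chain proposes a jump
    along the difference of two other randomly chosen chains, plus a small jitter.
    """
    n_chains, n_dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * n_dim)       # commonly used default scaling
    new = chains.copy()
    for i in range(n_chains):
        j, k = rng.choice([c for c in range(n_chains) if c != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[j] - chains[k]) + eps * rng.standard_normal(n_dim)
        if np.log(rng.random()) < log_post(proposal) - log_post(chains[i]):
            new[i] = proposal
    return new

# Toy run on a standard Gaussian posterior with eight interacting chains.
chains = rng.normal(size=(8, 3))
for _ in range(1000):
    chains = de_mc_step(chains, lambda x: -0.5 * np.sum(x**2))
print(chains.mean(axis=0).round(2))
```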

15.
Typhoons are among the most destructive natural disasters in Taiwan; they frequently cause floods and mudslides and disrupt electricity and water supplies. Accurately forecasting a typhoon's path and rainfall before its arrival is therefore an important issue. In the past, regression-based models were the statistical methods most often applied to such problems; however, they generally ignore the spatial dependence in the data, which results in less accurate estimation and prediction and can obscure the importance of particular explanatory variables. Therefore, in this paper we assess spatial variations in the risk of typhoon-accumulated rainfall at Taipei with respect to typhoon location, using a spatial hierarchical Bayesian model combined with a spatial conditional autoregressive (CAR) model, where the model parameters are estimated with a family of stochastic algorithms based on a Markov chain Monte Carlo technique. The proposed method is applied to a real Taiwanese data set for illustration, and some important explanatory variables for typhoon-accumulated rainfall at Taipei are identified.
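For readers unfamiliar with the conditional autoregressive component, a minimal sketch of a proper CAR prior precision matrix built from a spatial adjacency structure is given below; the adjacency matrix and parameter values are hypothetical, not the Taipei setup.

```python
import numpy as np

# Proper CAR prior for spatial effects: precision Q = tau * (D - rho * W),
# where W is the adjacency matrix and D the diagonal matrix of neighbour counts.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # four regions on a line, hypothetical
D = np.diag(W.sum(axis=1))
tau, rho = 1.5, 0.9                           # assumed precision and spatial-dependence values

Q = tau * (D - rho * W)
cov = np.linalg.inv(Q)                        # implied prior covariance of the spatial effects
print(np.round(cov, 3))
```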

16.
Land-use changes are generally recognized as multi-scale complex systems, with processes and driving factors operating at different scales. Traditional linear approaches cannot adequately capture the nonlinear features of complex land-use changes. A multi-state artificial neural network based cellular automata (MANNCA) model and a multi-state autologistic regression based cellular automata (MALRCA) model were developed to simulate complex land-use changes in the Yellow River Delta during the period 1992–2005. Relatively good conformity between simulated and actual land-use patterns indicated that the two models were able to simulate land-use dynamics effectively and generate realistic land-use patterns. The MANNCA model obtained higher fuzzy kappa values than the MALRCA model for all three simulation periods, which indicates that artificial neural networks can more effectively capture the complex relationships between land-use changes and a large set of spatial variables. Although the MALRCA model does have some advantages, the proposed MANNCA model represents a more effective approach to simulating the complex and nonlinear land-use evolutionary process.

17.
In this paper we combine a multiscale data integration technique introduced in [Lee SH, Malallah A, Datta-Gupta A, Higdon D. Multiscale data integration using Markov Random Fields. SPE Reservoir Evaluat Eng 2002;5(1):68–78] with upscaling techniques for spatial modeling of permeability. The main goal of this paper is to find fine-scale permeability fields based on coarse-scale permeability measurements. The approach introduced in the paper is hierarchical, and the conditional information from different length scales is incorporated into the posterior distribution using a Bayesian framework. Because of the complicated structure of the posterior distribution, Markov chain Monte Carlo (MCMC) based approaches are used to draw samples of the fine-scale permeability field.

18.
In geophysical inverse problems, the posterior model can be analytically assessed only in the case of linear forward operators, Gaussian, Gaussian mixture, or generalized Gaussian prior models, continuous model properties, and Gaussian-distributed noise contaminating the observed data. For this reason, one of the major challenges of seismic inversion is to derive reliable uncertainty appraisals in cases of complex prior models, non-linear forward operators and mixed discrete-continuous model parameters. We present two amplitude versus angle inversion strategies for the joint estimation of elastic properties and litho-fluid facies from pre-stack seismic data in the case of non-parametric mixture prior distributions and non-linear forward modellings. The first strategy is a two-dimensional target-oriented inversion that inverts the amplitude versus angle responses of the target reflections by adopting the single-interface full Zoeppritz equations. The second is an interval-oriented approach that inverts the pre-stack seismic responses along a given time interval using a one-dimensional convolutional forward modelling still based on the Zoeppritz equations. In both approaches, the model vector includes the facies sequence and the elastic properties of P-wave velocity, S-wave velocity and density. The distribution of the elastic properties at each common-mid-point location (for the target-oriented approach) or at each time-sample position (for the interval-oriented approach) is assumed to be multimodal, with as many modes as the number of litho-fluid facies considered. In this context, an analytical expression of the posterior model is no longer available. For this reason, we adopt a Markov chain Monte Carlo algorithm to numerically evaluate the posterior uncertainties. With the aim of speeding up the convergence of the probabilistic sampling, we adopt a specific recipe that includes multiple chains, a parallel tempering strategy and a delayed rejection updating scheme, and hybridizes the standard Metropolis–Hastings algorithm with the more advanced differential evolution Markov chain method. Owing to the lack of available field seismic data, we validate the two implemented algorithms by inverting synthetic seismic data derived from realistic subsurface models and actual well log data. The two approaches are also benchmarked against two analytical inversion approaches that assume Gaussian-mixture-distributed elastic parameters. The final predictions and the convergence analysis of the two implemented methods prove that our approaches retrieve reliable estimations and accurate uncertainty quantifications with a reasonable computational effort.
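Of the acceleration ingredients listed, the parallel tempering swap move is easy to isolate; the generic sketch below shows the exchange acceptance rule between adjacent temperatures in a toy setting, not the seismic implementation.

```python
import numpy as np

rng = np.random.default_rng(11)

def try_swap(states, log_posts, temps):
    """
    Attempt one parallel-tempering exchange between a randomly chosen pair of
    adjacent chains; temps is the temperature ladder (temps[0] = 1 targets the posterior).
    """
    i = rng.integers(len(temps) - 1)
    j = i + 1
    # Exchange acceptance: min(1, exp[(1/T_i - 1/T_j) * (log pi(x_j) - log pi(x_i))]).
    log_alpha = (1.0 / temps[i] - 1.0 / temps[j]) * (log_posts[j] - log_posts[i])
    if np.log(rng.random()) < log_alpha:
        states[i], states[j] = states[j], states[i]
        log_posts[i], log_posts[j] = log_posts[j], log_posts[i]
    return states, log_posts

# Toy usage: four chains at increasing temperatures exploring a 1-D Gaussian posterior.
temps = [1.0, 2.0, 4.0, 8.0]
states = [rng.normal(scale=t) for t in temps]
log_posts = [-0.5 * s**2 for s in states]
states, log_posts = try_swap(states, log_posts, temps)
print([round(s, 2) for s in states])
```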

19.
This study develops a novel approach for modelling and examining the impacts of time–space land-use changes on hydrological components. The approach uses an empirical land-use change allocation model (CLUE-s) and a distributed hydrological model (DHSVM) to examine various land-use change scenarios in the Wu-Tu watershed in northern Taiwan. The study also uses a generalized likelihood uncertainty estimation approach to quantify the parameter uncertainty of the distributed hydrological model. The results indicate that various land-use policies (no change, dynamic change and simultaneous change) have different levels of impact on simulating the spatial distributions of hydrological components in the watershed study. Peak flow rates under simultaneous and dynamic land-use changes are 5.71% and 2.77%, respectively, greater than the rate under the no land-use change scenario. Using dynamic land-use changes to assess the effect of land-use changes on hydrological components is more practical and feasible than using simultaneous land-use change and no land-use change scenarios. Furthermore, land-use change is a spatial dynamic process that can lead to significant changes in the distributions of ground water and soil moisture. The spatial distributions of land-use changes influence hydrological processes, such as the ground water level of whole areas, particularly in the downstream watershed.

20.
Hydrological modelling depends highly on the accuracy and uncertainty of model input parameters such as soil properties. Since most of these data are field surveyed, geostatistical techniques such as kriging, classification and regression trees, or more sophisticated soil-landscape models need to be applied to interpolate point information to the area. Most existing interpolation techniques require a random or regular distribution of points within the study area but are not adequate to satisfactorily interpolate soil catena or transect data. The soil-landscape model presented in this study predicts soil information from transect or catena point data using a statistical mean (arithmetic, geometric or harmonic) calculated from class means of merged spatial explanatory variables. A data set of 226 soil depth measurements covering a range of 0–6.5 m was used to test the model. The point data were sampled along four transects in the Stubbetorp catchment, SE Sweden. We overlaid a geomorphology map (8 classes) with digital elevation model-derived topographic index maps (2–9 classes) to estimate the range of error the model produces with changing sample size and input maps. The accuracy of the soil depth predictions was estimated with the root mean square error (RMSE) based on testing and training data sets. RMSE generally ranged between 0.73 and 0.83 m ± 0.013 m, depending on the number of classes in the merged layers, but was smallest for a map combination with a low number of classes predicted with the harmonic mean (RMSE = 0.46 m). The results show that the prediction accuracy of this method depends on the number of point values in the sample, the value range of the measured attribute and the initial correlations between point values and explanatory variables, but suggest that the model approach is in general scale invariant.
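The class-mean prediction and its RMSE evaluation can be sketched in a few lines; the class labels and depth values below are random stand-ins (not the Stubbetorp data), the harmonic mean is used as one of the three candidate statistics, and the error is computed in-sample for brevity.

```python
import numpy as np

rng = np.random.default_rng(12)

# Random stand-ins: a merged explanatory class (e.g. geomorphology x topographic index)
# and the measured soil depth (m) at each of 226 transect points.
classes = rng.integers(0, 8, size=226)
depth = rng.gamma(shape=2.0, scale=0.8, size=226) + 0.1   # keep depths strictly positive

def harmonic_class_means(values, labels):
    """Harmonic mean of the target variable within each explanatory class."""
    means = {}
    for c in np.unique(labels):
        v = values[labels == c]
        means[c] = len(v) / np.sum(1.0 / v)
    return means

means = harmonic_class_means(depth, classes)

# Predict each point from its class mean and evaluate with the RMSE.
predicted = np.array([means[c] for c in classes])
rmse = np.sqrt(np.mean((predicted - depth) ** 2))
print(f"RMSE = {rmse:.2f} m")
```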
