Similar Documents
1.
In earth and environmental sciences applications, uncertainty analysis regarding the outputs of models whose parameters are spatially varying (or spatially distributed) is often performed in a Monte Carlo framework. In this context, alternative realizations of the spatial distribution of model inputs, typically conditioned to reproduce attribute values at locations where measurements are obtained, are generated via geostatistical simulation using simple random (SR) sampling. The environmental model under consideration is then evaluated using each of these realizations as a plausible input, in order to construct a distribution of plausible model outputs for uncertainty analysis purposes. In hydrogeological investigations, for example, conditional simulations of saturated hydraulic conductivity are used as input to physically-based simulators of flow and transport to evaluate the associated uncertainty in the spatial distribution of solute concentration. Realistic uncertainty analysis via SR sampling, however, requires a large number of simulated attribute realizations for the model inputs in order to yield a representative distribution of model outputs; this often hinders the application of uncertainty analysis due to the computational expense of evaluating complex environmental models. Stratified sampling methods, including variants of Latin hypercube sampling, constitute more efficient sampling alternatives, often resulting in a more representative distribution of model outputs (e.g., solute concentration) with fewer model input realizations (e.g., hydraulic conductivity), thus reducing the computational cost of uncertainty analysis. The application of stratified and Latin hypercube sampling in a geostatistical simulation context, however, is not widespread and, apart from a few exceptions, has been limited to the unconditional simulation case. This paper proposes methodological modifications for adapting existing stratified sampling methods (including Latin hypercube sampling), employed to date in an unconditional geostatistical simulation context, to the efficient conditional simulation of Gaussian random fields. The proposed conditional simulation methods are compared to traditional geostatistical simulation, based on SR sampling, in the context of a hydrogeological flow and transport model via a synthetic case study. The results indicate that stratified sampling methods (including Latin hypercube sampling) are more efficient than SR sampling, reproducing statistics of the conductivity (and subsequently concentration) fields to a similar extent overall, yet with smaller sampling variability. These findings suggest that the proposed efficient conditional sampling methods could contribute to the wider application of uncertainty analysis in spatially distributed environmental models using geostatistical simulation.
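The efficiency gain of stratified sampling is easiest to see in one dimension. Below is a minimal numpy/scipy sketch contrasting simple random sampling with Latin hypercube sampling of a single standard-normal deviate (the kind of deviate that drives a Gaussian-field simulator); it follows the textbook LHS recipe, not the paper's specific conditional-simulation variant.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 100  # number of realizations

# Simple random (SR) sampling: n independent standard-normal deviates
sr = rng.standard_normal(n)

# Latin hypercube sampling (LHS): exactly one uniform draw per probability
# stratum [i/n, (i+1)/n), mapped through the normal quantile function
u = (np.arange(n) + rng.uniform(size=n)) / n
lhs = norm.ppf(rng.permutation(u))  # shuffle so strata are visited in random order

# Both samples have mean ~0 and variance ~1, but the LHS sample covers the
# quantiles of the distribution evenly, reducing sampling variability
```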

2.
Geostatistical simulations of lithotypes or facies are commonly used to create a geological model and to describe the heterogeneities of petroleum reservoirs. However, it is difficult to handle such models in a multiple-realization framework to assess the uncertainty of hydrocarbon in place. Indeed, the hydrocarbon in place is correlated with the facies proportions, which are themselves uncertain. The uncertainty model of facies proportions is not easy to describe because of closure relationships. A previous attempt used a nonparametric approach based on the resampling technique; it was successful in a stationary case but is difficult to extend to nonstationary cases. In this paper, we apply the vectorial beta parametric model, or Dirichlet model, which provides much more realistic uncertainties on volumetrics in very different geological and geostatistical contexts.
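As a rough illustration of the vectorial beta (Dirichlet) idea, the sketch below draws uncertain facies proportions that automatically honor the closure constraint (proportions sum to one). The mean proportions, concentration parameter, and volumetric numbers are illustrative stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_props = np.array([0.5, 0.3, 0.2])  # illustrative mean facies proportions
concentration = 50.0                    # larger -> tighter uncertainty around the mean

# Each Dirichlet draw is a valid proportion vector: nonnegative and summing to 1
samples = rng.dirichlet(concentration * mean_props, size=1000)
assert np.allclose(samples.sum(axis=1), 1.0)

# Propagate to a volumetric quantity, e.g. net sand volume per realization
gross_rock_volume = 1.0e9               # m^3, illustrative
net_sand_volume = gross_rock_volume * samples[:, 0]
```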

3.
Comparing Training-Image Based Algorithms Using an Analysis of Distance
As additional multiple-point statistical (MPS) algorithms are developed, there is a growing need for objective ways to compare them beyond the usual visual comparison or simple metrics, such as connectivity measures. We start from the general observation that any (not just MPS) geostatistical simulation algorithm represents two types of variability: (1) the within-realization variability, namely, that realizations reproduce a spatial continuity model (variogram, Boolean, or training-image based), and (2) the between-realization variability, representing a model of spatial uncertainty. We argue that any comparison of algorithms needs, at a minimum, to be based on these two types of variability. In fact, for certain MPS algorithms, different examples illustrate that there is often a trade-off: increased pattern reproduction entails reduced spatial uncertainty. We make the subjective choice that the best algorithm is the one that maximizes pattern reproduction while at the same time maximizing spatial uncertainty. The discussion is limited to fairly standard multiple-point algorithms; our method does not necessarily apply to more recent or possibly future developments. To render these fundamental principles quantitative, this paper relies on a distance-based measure for both within-realization variability (pattern reproduction) and between-realization variability (spatial uncertainty). The method is shown to be efficient and effective for two-dimensional, three-dimensional, continuous, and discrete training images.
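A minimal sketch of the two variabilities, assuming each realization (and the training image) has already been summarized by a feature vector such as a multiple-point histogram; computing those summaries is outside the scope of the sketch, and the distance here is plain Euclidean rather than the paper's tailored measure.

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist

def variability_scores(features, ti_feature):
    """features: (n_real, d) pattern summaries of the realizations;
    ti_feature: (d,) summary of the training image.
    Returns (within, between): mean realization-to-training-image distance
    (pattern reproduction error) and mean realization-to-realization
    distance (spatial uncertainty)."""
    within = cdist(features, ti_feature[None, :]).mean()
    between = pdist(features).mean()
    return within, between
```

Comparing algorithms then amounts to plotting each one's (within, between) pair: the trade-off in the text appears as low `within` paired with low `between`.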

4.
In the oil industry and subsurface hydrology, geostatistical models are often used to represent the porosity or permeability field. In history matching of a geostatistical reservoir model, we attempt to find multiple realizations that are conditional to dynamic data and representative of the model uncertainty space. A relevant way to simulate the conditioned realizations is by Markov chain Monte Carlo (MCMC) sampling. The huge dimension (number of parameters) of the model and the computational cost of each iteration are two important obstacles to the use of MCMC. In practice, we have to stop the chain long before it has explored the whole support of the posterior probability density function. Furthermore, as the relationship between the production data and the random field is highly nonlinear, the posterior can be strongly multimodal and the chain may get stuck in one of the modes. In this work, we propose a methodology to enhance the sampling properties of a classical single-chain MCMC in history matching. We first show how to reduce the dimension of the problem by using a truncated Karhunen–Loève expansion of the random field of interest and how to assess the number of components to be kept. Then, we show how to improve the mixing properties of MCMC, without increasing the global computational cost, by using parallel interacting Markov chains. Finally, we show the encouraging results obtained when applying the method to a synthetic history matching case.
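A compact sketch of the dimension-reduction step, assuming the field's covariance matrix is available on the simulation grid; the variance-energy criterion used here to pick the number of retained components is one common choice, not necessarily the paper's.

```python
import numpy as np

def truncated_kl_sample(cov, energy=0.95, rng=np.random.default_rng()):
    """Draw one zero-mean Gaussian field via a truncated Karhunen-Loeve
    expansion, keeping the leading eigenpairs that capture `energy` of the
    total variance. Returns the field and its low-dimensional coefficients,
    which become the parameters the MCMC chain walks on."""
    eigval, eigvec = np.linalg.eigh(cov)            # ascending eigenvalues
    eigval = np.clip(eigval[::-1], 0.0, None)       # descending, clipped >= 0
    eigvec = eigvec[:, ::-1]
    k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), energy)) + 1
    xi = rng.standard_normal(k)                     # reduced parameter vector
    field = eigvec[:, :k] @ (np.sqrt(eigval[:k]) * xi)
    return field, xi
```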

5.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations cannot generally match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied in an attempt to minimize an objective function, which quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, only a few parameterization methods are available to update geostatistical models in a way consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. The technique allows us to locally change model realizations through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.

6.
In the present paper, a new geostatistical parameterization technique is introduced for solving inverse problems, either in groundwater hydrology or petroleum engineering. The purpose is to characterize permeability at the field scale from the available dynamic data, that is, data depending on fluid displacements. Thus, a permeability model is built that yields numerical flow responses similar to the data collected. The problem is usually cast as the minimization of an objective function. We focus especially on the ability to locally change the permeability model so as to further reduce the objective function, a concern of particular interest when dealing with 4D seismic data. The calibration phase consists of selecting sub-domains, or pilot blocks, and varying their log-permeability averages. The permeability model is then constrained to these fictitious block data through simple cokriging. In addition, we estimate the prior probability density function of the pilot block values and incorporate this prior information into the objective function. Variations in block values are therefore governed by the optimizer while accounting for nearby point and block data. Pilot-block-based optimization provides permeability models that respect point data at their locations, the spatial variability model inferred from point data, and the dynamic data in a least-squares sense. A synthetic example demonstrates the applicability of the proposed matching methodology.
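The calibration loop can be summarized by an objective of the least-squares-plus-prior form. The sketch below is a generic rendition under a Gaussian prior; `flow_simulator`, the forward model that cokriges the pilot-block values into a permeability field and runs the flow simulation, is a hypothetical placeholder.

```python
import numpy as np

def objective(block_values, d_obs, flow_simulator, prior_mean, prior_cov_inv, sigma2):
    """Mismatch between observed and simulated dynamic data, plus a prior
    penalty on the pilot-block log-permeability averages.
    flow_simulator: hypothetical forward model, block values -> simulated data."""
    residual = d_obs - flow_simulator(block_values)
    dm = block_values - prior_mean
    return 0.5 * residual @ residual / sigma2 + 0.5 * dm @ prior_cov_inv @ dm
```

An off-the-shelf optimizer (e.g., scipy.optimize.minimize) can then vary `block_values` while the cokriging step keeps the surrounding field consistent with the point data.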

7.
To explore the uncertainty that hydrogeological structure introduces into numerical groundwater flow modeling, stochastic simulation can be used to build a predictive model of groundwater levels. Transition-probability geostatistics is used to simulate the lithology distribution of the porous media, and a nonlinear programming approach computes the relationship between lithology and hydrogeological parameters, yielding relatively accurate stochastic hydrogeological parameter fields. Feeding the different parameter fields into MODFLOW produces different stochastic simulation results. Comparison of the final flow fields and the water-level time series of the stochastic and deterministic models shows that both follow similar trends and fit the measured flow field well, but the stochastic model better reflects the true hydrogeological characteristics. An uncertainty analysis of the stochastic model's predicted groundwater levels 10 years ahead gives a mean water-level change between -5 and 5 m, with the mean upper bound of the 95%-confidence water-level change at about 0.146 m. The results provide a scientific basis for decision-makers.
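The closing uncertainty summary (mean change plus a 95%-confidence upper bound) is a straightforward ensemble statistic over the stochastic runs. A sketch with stand-in numbers rather than the study's MODFLOW output; the two-sided 97.5th-percentile convention is my assumption, as the study's exact definition is not given.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the ensemble of predicted 10-year water-level changes:
# (n_realizations, n_grid_cells)
head_change = rng.normal(0.0, 2.0, size=(200, 500))

mean_change = head_change.mean(axis=0)              # per-cell mean change
upper95 = np.percentile(head_change, 97.5, axis=0)  # 95%-confidence upper bound
mean_upper_bound = upper95.mean()                   # single summary value
```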

8.
Representing Spatial Uncertainty Using Distances and Kernels
Assessing uncertainty of a spatial phenomenon requires the analysis of a large number of parameters which must be processed by a transfer function. To capture the possibly wide range of uncertainty in the transfer function response, a large set of geostatistical model realizations needs to be processed. Stochastic spatial simulation can rapidly provide multiple, equally probable realizations. However, since the transfer function is often computationally demanding, only a small number of models can be evaluated in practice, and these are usually selected through a ranking procedure. Traditional ranking techniques for selecting probabilistic ranges of response (P10, P50, and P90) are highly dependent on the static property used. In this paper, we propose to parameterize the spatial uncertainty represented by a large set of geostatistical realizations through a distance function measuring "dissimilarity" between any two geostatistical realizations. The distance function allows a mapping of the space of uncertainty and can be tailored to the particular problem. The multi-dimensional space of uncertainty can be modeled using kernel techniques, such as kernel principal component analysis (KPCA) or kernel clustering. These tools allow for the selection of a subset of representative realizations containing properties similar to those of the larger set. Without losing accuracy, decisions and strategies can then be made by applying the transfer function to the subset, without the need to exhaustively evaluate each realization. This method is applied to a synthetic oil reservoir, where the spatial uncertainty of channel facies is modeled through multiple realizations generated using a multi-point geostatistical algorithm and several training images.
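A sketch of the selection workflow using scikit-learn's kernel PCA and k-means. The distance here is plain Euclidean on flattened realizations with an RBF kernel and illustrative parameters, whereas the paper stresses that the distance should be tailored to the transfer function.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
realizations = rng.normal(size=(500, 2500))  # stand-in: 500 flattened models

# Map the space of uncertainty to a few kernel principal components
coords = KernelPCA(n_components=10, kernel="rbf", gamma=1e-4).fit_transform(realizations)

# Cluster, then keep the realization nearest each cluster centre
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(coords)
reps = [int(np.argmin(((coords - c) ** 2).sum(axis=1))) for c in km.cluster_centers_]

# The expensive transfer function is then run on the 20 representatives only
```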

9.
This paper is devoted to a geostatistical attempt at modeling migration errors when locating a reflector in the ground. Starting from a probabilistic velocity model and adopting a simple geometrical-optics description of wave propagation in such media, we derive an expression for the errors. These can be quantified provided the covariance of the velocity field is known: the variance of arrival times at constant offset is related to the covariance of the velocity field at hand. A practical application is also given. We then outline a typical scheme for migration and uncertainty modeling: starting with seismic data, we perform a weak seismic inversion to obtain the covariance of the velocity field, which we then use to simulate migration errors. The main issues of this methodology are discussed in closing.
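Under geometrical optics the arrival time is a line integral of slowness along the ray, so its variance follows from the slowness covariance by a double integral. The discretized sketch below is my rendition of this standard result, not the paper's exact formulation.

```python
import numpy as np

def traveltime_variance(cov_slowness, dl):
    """t = sum_i s_i * dl  =>  Var(t) = dl^2 * sum_{i,j} Cov(s_i, s_j),
    where cov_slowness is the slowness covariance matrix sampled at the
    ray's discretization points and dl is the step length along the ray."""
    return dl ** 2 * np.sum(cov_slowness)
```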

10.
Teacher's Aide: Variogram Interpretation and Modeling
The variogram is a critical input to geostatistical studies: (1) it is a tool to investigate and quantify the spatial variability of the phenomenon under study, and (2) most geostatistical estimation or simulation algorithms require an analytical variogram model, which they will reproduce with statistical fluctuations. In the construction of numerical models, the variogram reflects some of our understanding of the geometry and continuity of the variable, and can have a very important impact on predictions from such numerical models. The principles of variogram modeling are developed and illustrated with a number of practical examples. A three-dimensional interpretation of the variogram is necessary to fully describe geologic continuity. Directional continuity must be described simultaneously to be consistent with principles of geological deposition and for a legitimate measure of spatial variability for geostatistical modeling algorithms. Interpretation principles are discussed in detail. Variograms are modeled with particular functions for reasons of mathematical consistency. Used correctly, such variogram models account for the experimental data, geological interpretation, and analogue information. The steps in this essential data integration exercise are described in detail through the introduction of a rigorous methodology.
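For reference, the experimental semivariogram that such models are fitted to is half the average squared difference between data pairs falling in each lag bin. A minimal omnidirectional sketch follows; directional variograms additionally filter pairs by azimuth, and bins with no pairs yield NaN here.

```python
import numpy as np

def experimental_variogram(coords, values, lags, tol):
    """Omnidirectional experimental semivariogram.
    coords: (n, dim) sample locations; values: (n,) data values;
    lags: lag-bin centres; tol: half-width of each lag bin."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqdiff = 0.5 * (values[:, None] - values[None, :]) ** 2
    i, j = np.triu_indices(len(values), k=1)      # count each pair once
    d, g = dists[i, j], sqdiff[i, j]
    return np.array([g[np.abs(d - h) <= tol].mean() for h in lags])
```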

11.
Direct Pattern-Based Simulation of Non-stationary Geostatistical Models
Non-stationary models often capture the spatial variation of real-world phenomena better than stationary ones. However, the construction of such models can be tedious, as it requires modeling both the statistical trend and the stationary stochastic component. Non-stationarity is also an important issue in the recent development of multiple-point geostatistical models. This new modeling paradigm, with its reliance on the training image as the source of spatial statistics or patterns, has had considerable practical appeal. However, the role and construction of the training image in the non-stationary case remain problematic from both a modeling and a practical point of view. In this paper, we provide an easy-to-use, computationally efficient methodology for creating non-stationary multiple-point geostatistical models, for both discrete and continuous variables, based on distance-based modeling and simulation of patterns. In that regard, the paper builds on pattern-based modeling previously published by the authors, whereby a geostatistical realization is created by laying down patterns as puzzle pieces on the simulation grid, such that the simulated patterns are consistent (in terms of a similarity definition) with any previously simulated ones. Here we add the spatial coordinate to the pattern similarity calculation, thereby borrowing patterns only locally from the training image instead of globally; the latter would entail a stationarity assumption. Two ways of adding the geographical coordinate are presented: (1) based on a functional that decreases gradually away from the location where the pattern is simulated, and (2) based on an automatic segmentation of the training image into stationary regions. Using ample two-dimensional and three-dimensional case studies, we study the behavior of the generated realizations in terms of spatial and ensemble uncertainty.
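The first way of adding the coordinate can be sketched as a similarity penalty that grows smoothly with the geographic separation between the simulated location and the candidate training-image pattern. The weight function, `alpha`, and `length_scale` below are illustrative choices, not the authors' functional.

```python
import numpy as np

def local_pattern_distance(pat_sim, loc_sim, pat_ti, loc_ti,
                           alpha=1.0, length_scale=20.0):
    """Pattern mismatch plus a penalty that increases with the distance
    between the simulation location and the training-image pattern location,
    so patterns are borrowed locally rather than globally."""
    d_pattern = np.linalg.norm(pat_sim - pat_ti)
    d_space = np.linalg.norm(np.asarray(loc_sim) - np.asarray(loc_ti))
    penalty = alpha * (1.0 - np.exp(-((d_space / length_scale) ** 2)))
    return d_pattern + penalty
```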

12.
王淑娟 (Wang Shujuan), 《地下水》 (Groundwater), 2009, 31(4): 106-108
In the decision-making process for selecting among water-saving irrigation investment alternatives, the available information is only "partially complete," or information-poor, which is precisely the "grey" character of multi-alternative selection decisions. Selecting an investment alternative is a complex, multi-factor task with many objectives to consider, and both the evaluation factors and the selection conclusions are "uncertain," so this is in effect a multi-objective decision problem. To address the selection among multiple investment alternatives, the grey relational degree and the principal component projection method are introduced, and a decision model is built to rank the alternatives, improving the accuracy and objectivity of the selection. A worked example shows that the method is feasible for multi-objective decision problems.
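A minimal numpy sketch of the grey relational degree computation at the core of such a model, using the standard coefficient with resolution ratio rho = 0.5 and equal criterion weights; criteria are assumed already normalized so that larger is better.

```python
import numpy as np

def grey_relational_grade(alternatives, reference, rho=0.5):
    """alternatives: (m, k) normalized criterion values for m investment
    schemes; reference: (k,) ideal sequence. Returns one grade per scheme;
    rank schemes by descending grade."""
    delta = np.abs(alternatives - reference)
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return xi.mean(axis=1)                              # equal criterion weights
```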

13.

Conditioning complex subsurface flow models on nonlinear data is complicated by the need to preserve the expected geological connectivity patterns to maintain solution plausibility. Generative adversarial networks (GANs) have recently been proposed as a promising approach for low-dimensional representation of complex high-dimensional images, and have also been adopted for low-rank parameterization of complex geologic models to facilitate uncertainty quantification workflows. A difficulty in adopting these methods for subsurface flow modeling is the complexity associated with conditioning on nonlinear flow data. While a conditional GAN (CGAN) can condition simulated images on labels, application to subsurface problems requires efficient conditioning workflows for nonlinear data, which is far more complex. We present two approaches for generating flow-conditioned models with complex spatial patterns using GANs. The first is through a conditional GAN, whereby a production-response label is used as an auxiliary input during the training stage; the label is derived by clustering the flow responses of the prior model realizations (i.e., the training data). The underlying assumption of this approach is that the GAN can learn the association between spatial features and the production responses within each cluster. An alternative method is to select the subset of training realizations whose flow responses lie within a certain distance of the observed responses and use them as training data for the GAN to generate new model realizations. In this case, the GAN is not required to learn the nonlinear relation between production responses and spatial patterns; instead, it is tasked with learning the patterns in the selected realizations that provide a close match to the observed data. The conditional low-dimensional parameterization that GANs perform for complex geologic models with diverse spatial features (i.e., when multiple geologic scenarios are plausible) allows the spatial variability in the conditional realizations to be explored, which can be critical for decision-making. We present and discuss the important properties of GANs for data conditioning using several examples of increasing complexity.
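The second approach reduces to a simple filter over the prior ensemble before GAN training. A sketch, with the Euclidean distance measure and tolerance as illustrative placeholders:

```python
import numpy as np

def select_conditioning_subset(prior_models, prior_responses, d_obs, tol):
    """Keep prior realizations whose simulated flow responses lie within
    `tol` of the observed data d_obs; the GAN is then retrained on this
    subset to generate new, approximately conditioned models.
    prior_models: (n, ...) array; prior_responses: (n, n_data) array."""
    dist = np.linalg.norm(prior_responses - d_obs, axis=1)
    keep = dist <= tol
    return prior_models[keep], dist[keep]
```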


14.
We have known for a long time that the material properties of the subsurface are highly variable in space. We have learned that this variability is due to the extreme complexity and variation with time of processes responsible for the formation of the earth's crust, from plate tectonics to erosion, sediment transport, and deposition, as well as to mechanical, climatic, and diagenetic effects. As geologists, we learned how to "read" this complex history in the rocks and how to try to extrapolate in space what we have understood. As physicists, we then learned that to study flow processes in such media we must apply the laws of continuum mechanics. As mathematicians using analytical methods, we learned that we must simplify by dividing this complex continuum into a small number of units, such as aquifers and aquitards, and describe their properties by (constant) equivalent values. In recent years, as numerical modelers, we learned that we now have the freedom to "discretize" this complex reality and describe it as an ensemble of small homogeneous boxes of continuous media, each of which can have different properties. How do we use this freedom? Is there a need for it? If the answer is "yes," how can we assign different rock-property values to thousands or even millions of such little boxes in our models, to best represent reality, and include confidence levels for each selected rock property? As a tribute to Professor Eugene S. Simpson, with whom the first author of this paper often discussed these questions, we present an overview of three techniques that focus on one property, the rock permeability. We explain the motivation for describing spatial variability and illustrate how to do so by the geostatistical method, the Boolean method, and the genetic method. We discuss their advantages and disadvantages and indicate their present state of development. This is an active field of research and space is limited, so the review is certain to be incomplete, but we hope that it will encourage the development of new ideas and approaches.

15.
Regulatory geologists are concerned with predicting the performance of sites proposed for waste disposal or for remediation of existing pollution problems. Geologic modeling of these sites requires large-scale expansion of knowledge obtained from very limited sampling. This expansion induces considerable uncertainty into the geologic models of rock properties that are required for modeling the predicted performance of the site. One method for assessing this uncertainty is through nonparametric geostatistical simulation. Simulation can produce a series of equiprobable models of a rock property of interest. Each model honors measured values at sampled locations, and each can be constructed to emulate both the univariate histogram and the spatial covariance structure of the measured data. Computing a performance model for a number of geologic simulations allows evaluation of the effects of geologic uncertainty. A site may be judged acceptable if the number of failures to meet a particular performance criterion produced by these computations is sufficiently low. A site that produces too many failures may be either unacceptable or simply inadequately described. The simulation approach to addressing geologic uncertainty is being applied to the potential high-level nuclear waste repository site at Yucca Mountain, Nevada, U.S.A. Preliminary geologic models of unsaturated permeability have been created that reproduce observed statistical properties reasonably well. A spread of unsaturated groundwater travel times has been computed that reflects the variability of those geologic models. Regions within the simulated models exhibiting the greatest variability among multiple runs are candidates for obtaining the greatest reduction in uncertainty through additional site characterization.
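The final point, using between-realization variability to prioritize further site characterization, amounts to a per-cell variance map over the simulated ensemble. A sketch with stand-in values:

```python
import numpy as np

rng = np.random.default_rng(5)
perm_sims = rng.normal(size=(100, 1000))  # stand-in: 100 simulated property fields

cell_var = perm_sims.var(axis=0)          # between-realization variance per cell
targets = np.argsort(cell_var)[-10:]      # cells promising the largest
                                          # uncertainty reduction if sampled
```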

16.
Conditioning Surface-Based Geological Models to Well and Thickness Data
Geostatistical simulation methods aim to represent spatial uncertainty through realizations that reflect a certain geological concept by means of a spatial continuity model. The most common spatial continuity models are variogram, training-image, or Boolean based. In this paper, a more recent spatial model of geological continuity is developed, termed the event-based or surface-based model, which is specifically applicable to cases with complex stratigraphy, such as sedimentary systems. These methods rely on a rule-based stacking of events, which are mathematically represented by two-dimensional thickness variations over the domain, where positive thickness is associated with deposition and negative thickness with erosion. Although surface-based models have been shown to accurately represent the geological variation present in complex layered systems, they are more difficult to constrain to hard and soft data, as is typically required of practical geostatistical techniques. In this paper, we develop a practical methodology for constraining such models to hard data from wells and to thickness data interpreted from geophysics, such as seismic data. Our iterative methodology relies on decomposing the parameter optimization problem into smaller, manageable problems that are solved sequentially. We demonstrate the method on a real case study of a turbidite sedimentary basin.
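The stacking rule itself is simple to sketch: each event adds a two-dimensional thickness map to the running top surface, with negative thickness eroding previously deposited material. The thickness fields below are crude random stand-ins for a real rule-based event model.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, n_events = 100, 100, 25
base = np.zeros((nx, ny))
top = base.copy()                            # current depositional surface

for _ in range(n_events):
    # Stand-in event: spatially constant mean plus noise; negative = erosion
    thickness = rng.normal(0.4, 0.8) + rng.normal(0.0, 0.2, size=(nx, ny))
    top = np.maximum(top + thickness, base)  # erosion cannot cut below the base
```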

17.
The assessment of the risks associated with contamination by elevated levels of pollutants is a major issue in most parts of the world. The risk arises from the presence of a pollutant and from the uncertainty associated with estimating its concentration, extent, and trajectory. The uncertainty in the assessment comes from the difficulty of measuring pollutant concentrations accurately at any given location and the impossibility of measuring them at all locations within a study zone. Estimates tend to give smoothed versions of reality, with the smoothing effect being inversely proportional to the amount of data. If risk is a measure of the probability of pollutant concentrations exceeding specified thresholds, then variability is the key feature in risk assessment and risk analysis. For this reason, geostatistical simulations provide an appropriate way of quantifying risk by simulating possible "realities," determining how many of these realities exceed the contamination thresholds, and, finally, providing a means of visualizing risk and its geological causes. This study concerns multivariate simulations of organic and inorganic pollutants measured in terrain samples to assess the uncertainty for the risk analysis of a contaminated site, an industrial site in northern Italy that has to be remediated. The main geostatistical tools, in particular stochastic simulation, are used to model the local uncertainty about the pollutant concentrations prevailing at any unsampled location. These models of uncertainty have been used in the decision-making process to identify the areas targeted for remediation.
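In practice the risk quantification reduces to counting, per location, how many simulated "realities" exceed the threshold. A sketch with stand-in simulations and an illustrative threshold and targeting rule:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for the simulated concentrations: (n_realizations, n_locations)
sims = rng.lognormal(mean=1.0, sigma=0.8, size=(1000, 400))

threshold = 10.0                            # regulatory limit, illustrative
p_exceed = (sims > threshold).mean(axis=0)  # local probability of exceedance
to_remediate = p_exceed > 0.05              # risk-based targeting rule, illustrative
```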

18.
Kriging-based geostatistical models require a semivariogram model. Next to the initial decision of stationarity, the choice of an appropriate semivariogram model is the most important decision in a geostatistical study. Common practice consists of fitting experimental semivariograms with a nested combination of proven models such as the spherical, exponential, and Gaussian models. These models work well in most cases; however, some shapes found in practice are difficult to fit. We introduce a family of semivariogram models that are based on geometric shapes, analogous to the spherical semivariogram, that are known to be conditionally negative definite and that provide additional flexibility for fitting semivariograms encountered in practice. A methodology to calculate the associated geometric shapes to match semivariograms defined in any number of directions is presented. Greater flexibility is available through the application of these geometric semivariogram models.
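For context, the spherical model mentioned above is the prototype of such geometry-based, conditionally negative definite models. A standard implementation:

```python
import numpy as np

def spherical_variogram(h, sill=1.0, a=50.0, nugget=0.0):
    """Spherical semivariogram with range a: rises as 1.5(h/a) - 0.5(h/a)^3
    and plateaus at nugget + sill for h >= a."""
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, inside, nugget + sill)
```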

19.
Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review, we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a "theory of everything" for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy.
