Similar Documents
1.
This paper describes a new method for gradually deforming realizations of Gaussian-related stochastic models while preserving their spatial variability. This method consists of building a stochastic process whose state space is the ensemble of the realizations of a spatial stochastic model. In particular, a stochastic process, built by combining independent Gaussian random functions, is proposed to perform the gradual deformation of realizations. The gradual deformation algorithm is then coupled with an optimization algorithm to calibrate realizations of stochastic models to nonlinear data. The method is applied to calibrate a continuous and a discrete synthetic permeability field to well-test pressure data. The examples illustrate the efficiency of the proposed method. Furthermore, we present some extensions of this method (multidimensional gradual deformation, gradual deformation with respect to structural parameters, and local gradual deformation) that are useful in practice. Although the method described in this paper is operational only in the Gaussian framework (e.g., lognormal model, truncated Gaussian model, etc.), the idea of gradually deforming realizations through a stochastic process remains general and therefore promising even for calibrating non-Gaussian models.
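As a rough numerical illustration of the combination idea, not the authors' implementation, the sketch below combines two independent standard Gaussian realizations with cosine/sine weights; because the squared weights sum to one, every value of the deformation parameter t yields a realization with the same mean and covariance, so sweeping t deforms the field continuously while preserving its spatial variability (Python with numpy assumed; the fields are plain white noise for simplicity).

import numpy as np

rng = np.random.default_rng(0)

# Two independent realizations of the same zero-mean, unit-variance Gaussian
# random function (plain white noise on a grid, for illustration only).
y1 = rng.standard_normal((64, 64))
y2 = rng.standard_normal((64, 64))

def gradual_deformation(y1, y2, t):
    # cos^2 + sin^2 = 1, so the combination keeps the mean and covariance of
    # y1 and y2 for every t; t = 0 returns y1, and the chain is periodic in t.
    return y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)

# Sweeping t produces a continuous chain of realizations with unchanged statistics.
for t in np.linspace(0.0, 1.0, 5):
    y_t = gradual_deformation(y1, y2, t)
    print(f"t = {t:.2f}  mean = {y_t.mean():+.3f}  variance = {y_t.var():.3f}")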

2.
Constraining stochastic models of reservoir properties such as porosity and permeability can be formulated as an optimization problem. While an optimization based on random search methods preserves the spatial variability of the stochastic model, it is prohibitively computer intensive. In contrast, gradient search methods may be very efficient, but they do not preserve the spatial variability of the stochastic model. The gradual deformation method allows a reservoir model (i.e., a realization of the stochastic model) to be modified using a small number of parameters while preserving its spatial variability. It can be considered a first step towards merging random and gradient search methods. The gradual deformation method yields chains of reservoir models that can be investigated successively to identify an optimal reservoir model. The investigation of each chain is based on gradient computations, but the building of chains of reservoir models is random. In this paper, we propose an algorithm that further improves the efficiency of the gradual deformation method. Unlike the previous gradual deformation method, we also use gradient information to build chains of reservoir models. The idea is to combine the initial reservoir model, or the previously optimized reservoir model, with a compound reservoir model. This compound model is a linear combination of a set of independent reservoir models. The combination coefficients are calculated so that the search direction from the initial model is as close as possible to the gradient search direction. This new gradual deformation scheme allows us to reduce the number of optimization parameters while selecting an optimal search direction. The numerical example compares the performance of the new gradual deformation scheme with that of the traditional one.
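One plausible reading of the coefficient computation, sketched below under that assumption rather than as the authors' exact scheme, is a least-squares projection: the coefficients of the compound model are chosen so that the linear combination of independent realizations points as closely as possible along the negative gradient of the objective, and are then rescaled to the unit sphere so that the compound model keeps unit variance (numpy assumed; the gradient is a placeholder vector).

import numpy as np

rng = np.random.default_rng(1)
n_cells, n_models = 500, 10

U = rng.standard_normal((n_models, n_cells))  # independent Gaussian realizations
g = rng.standard_normal(n_cells)              # placeholder gradient of the objective w.r.t. cell values

# Least-squares coefficients so that the compound model a @ U is as close as
# possible to the descent direction -g.
a, *_ = np.linalg.lstsq(U.T, -g, rcond=None)

# Rescaling to the unit sphere keeps the compound model standard Gaussian, so
# it can be combined with the current model by gradual deformation.
a /= np.linalg.norm(a)
compound = a @ U
cosine = -(compound @ g) / (np.linalg.norm(compound) * np.linalg.norm(g))
print("alignment of the compound model with the descent direction:", round(float(cosine), 3))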

3.
Optimization with the Gradual Deformation Method
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of the spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimization may, step by step, remove the structure-preservation capability of the gradual deformation method. This bias is negligible when deformation is restricted to a few realization chains, but grows as the number of chains tends to infinity. Since, in practice, the optimization of reservoir models is limited to a number of iterations that is small with respect to the number of gridblocks, the spatial variability is preserved. Lastly, the optimization processes are implemented on the basis of the Levenberg–Marquardt method. Although the objective functions, written in terms of Gaussian white noises, are reduced to the data mismatch term, the conditional realization space can be properly sampled.
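The sketch below mimics the optimization loop on a toy linear least-squares problem: at each iteration a fresh independent realization opens a new chain, and a one-dimensional bounded search over the deformation parameter stands in for the Levenberg–Marquardt machinery used in the paper (numpy and scipy assumed; the forward operator and data are synthetic).

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n_cells, n_data = 200, 20

A = rng.standard_normal((n_data, n_cells))      # toy linear forward operator
d = A @ rng.standard_normal(n_cells)            # synthetic data from a reference realization

def mismatch(y):
    r = A @ y - d
    return 0.5 * float(r @ r)                   # objective reduced to the data mismatch term

y = rng.standard_normal(n_cells)                # starting realization (Gaussian white noise)
for k in range(30):                             # one realization chain per iteration
    u = rng.standard_normal(n_cells)            # fresh independent realization opening the chain
    combine = lambda t: y * np.cos(np.pi * t) + u * np.sin(np.pi * t)
    best = minimize_scalar(lambda t: mismatch(combine(t)), bounds=(-1.0, 1.0), method="bounded")
    y = combine(best.x)                         # the optimum of this chain seeds the next one
    if k % 10 == 0:
        print(f"iteration {k:2d}  mismatch = {mismatch(y):.3f}")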

4.
Based on the algorithm for the gradual deformation of Gaussian stochastic models, we propose in this paper an extension of this method to the gradual deformation of realizations generated by sequential, not necessarily Gaussian, simulation. As in the Gaussian case, gradual deformation of a sequential simulation preserves the spatial variability of the stochastic model and, in general, yields a regular objective function that can be minimized by an efficient optimization algorithm (e.g., a gradient-based algorithm). Furthermore, we discuss local gradual deformation and gradual deformation with respect to the structural parameters (mean, variance, variogram range, etc.) of realizations generated by sequential simulation. Local gradual deformation may significantly improve calibration speed in the case where observations are scattered in different zones of a field. Gradual deformation with respect to structural parameters is necessary when these parameters cannot be inferred a priori and need to be determined using an inverse procedure. A synthetic example inspired by a real oil field is presented to illustrate different aspects of this approach. Results from this case study demonstrate the efficiency of the gradual deformation approach for constraining facies models generated by sequential indicator simulation. They also show the potential applicability of the proposed approach to complex real cases.
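A common way to carry the Gaussian deformation scheme over to sequential simulation is to deform the uniform numbers that drive the simulation through their Gaussian scores; the sketch below shows only that deformation step, under the assumption that the deformed uniforms would then be fed to the sequential (e.g., indicator) simulation in place of the originals (numpy and scipy assumed).

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_nodes = 1000                         # nodes visited by the sequential simulation

# Two independent sets of uniform driving numbers, one value per visited node.
u1, u2 = rng.random(n_nodes), rng.random(n_nodes)

def deform_uniforms(u1, u2, t):
    # Combine the Gaussian scores of the uniforms; the result is standard
    # normal for every t, so transforming back yields valid uniform numbers.
    z = norm.ppf(u1) * np.cos(np.pi * t) + norm.ppf(u2) * np.sin(np.pi * t)
    return norm.cdf(z)

u_t = deform_uniforms(u1, u2, 0.3)     # small t gives a small, structure-preserving change
print(u_t.min(), u_t.max())            # stays within (0, 1)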

5.
This paper presents a new method of constructing random functions whose realizations can be evaluated efficiently. The basic idea is to blend, both stochastically and linearly, a limited set of independent initial realizations previously generated by any chosen simulation method. The stochastic blending coefficients are determined in such a way that the new random function so generated has the same mean and covariance functions as the random function used for generating the initial realizations.
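A minimal sketch of the blending idea, under the simplifying assumption that the initial realizations are white noise with a known mean: any set of blending coefficients lying on the unit sphere reproduces the mean and covariance of the initial random function, and drawing those coefficients at random makes the blending itself stochastic (numpy assumed).

import numpy as np

rng = np.random.default_rng(4)
n_real, n_cells = 8, 2000
mean = 5.0

# Independent initial realizations generated by any chosen simulation method
# (white noise with mean 5 and unit variance, for illustration).
Y = mean + rng.standard_normal((n_real, n_cells))

# Random blending coefficients constrained to the unit sphere: the blended
# realization then has the same mean and covariance as the initial ones.
a = rng.standard_normal(n_real)
a /= np.linalg.norm(a)

Y_new = mean + a @ (Y - mean)
print(f"mean = {Y_new.mean():.3f}  variance = {Y_new.var():.3f}")   # close to 5 and 1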

6.
A fast Fourier transform (FFT) moving average (FFT-MA) method for generating Gaussian stochastic processes is derived. Using discrete Fourier transforms makes the calculations easy and fast, so that large random fields can be produced. In addition, the basic moving average framework allows us to uncouple the random numbers from the structural parameters (mean, variance, correlation length, etc.) and to draw the randomness components in the spatial domain. Such features impart great flexibility to the FFT-MA generator. For instance, changing only the random numbers gives distinct realizations all having the same covariance function. Similarly, several realizations can be built from the same random number set but from different structural parameters. Integrating the FFT-MA generator into an optimization procedure provides a tool theoretically capable of determining, from dynamic data, the random numbers identifying the Gaussian field as well as the structural parameters. Moreover, all or only some of the random numbers can be perturbed, so that realizations produced using the FFT-MA generator can be locally updated through an optimization process.
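The sketch below illustrates the FFT-MA principle on a one-dimensional periodic grid: the field is white noise convolved with the "square root" of the covariance, the convolution being carried out as a product in the spectral domain, so the random numbers (the noise) and the structural parameters (the covariance model) stay uncoupled. It is a simplified stand-in, assuming numpy and ignoring the embedding and multidimensional details handled by the actual generator; the covariance model and its parameters are made up.

import numpy as np

rng = np.random.default_rng(5)
n, corr_len, sill = 4096, 10.0, 1.0

# Covariance model evaluated on the periodic lag of the grid (Gaussian model).
lag = np.minimum(np.arange(n), n - np.arange(n))
cov = sill * np.exp(-((lag / corr_len) ** 2))

# Structural part: spectral "square root" of the covariance.
spectrum = np.fft.fft(cov).real
spectrum[spectrum < 0.0] = 0.0                       # guard against round-off
g = np.sqrt(spectrum)

# Random part: white noise drawn in the spatial domain.
noise = rng.standard_normal(n)
field = np.fft.ifft(g * np.fft.fft(noise)).real      # FFT-MA realization

# Changing only the noise gives another realization with the same covariance.
field2 = np.fft.ifft(g * np.fft.fft(rng.standard_normal(n))).real
print(field.var(), field2.var())                     # both fluctuate around the sill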

7.
The likelihood of Gaussian realizations, as generated by the Cholesky simulation method, is analyzed in terms of Mahalanobis distances and fluctuations in the variogram reproduction. For random sampling, the probability of observing a Gaussian realization vector can be expressed as a function of its Mahalanobis distance, and the maximum likelihood depends only on the vector size. The Mahalanobis distances themselves follow a chi-square distribution and can be used to describe the likelihood of Gaussian realizations. Their expected value and variance are determined only by the size of the vector of independent random normal scores used to generate the realizations. When the vector size is small, the distribution of Mahalanobis distances is highly skewed and most realizations are close to the vector mean, in agreement with the multi-Gaussian density model. As the vector size increases, the realizations sample a region increasingly far out on the tail of the multi-Gaussian distribution, because the large increase in the size of the uncertainty space largely compensates for the low probability density. For a large vector size, realizations close to the vector mean are no longer observed. Instead, Gaussian vectors with a Mahalanobis distance in the neighborhood of the expected Mahalanobis distance are the most likely to be observed. The distribution of Mahalanobis distances becomes Gaussian shaped and the bulk of realizations appear more equiprobable. However, the ratio of their probabilities indicates that they still remain far from equiprobable. On the other hand, it is observed that equiprobable realizations still display substantial fluctuations in their variogram reproduction. The variance level expected in the variogram reproduction, as well as the variance of the variogram fluctuations, depends on the Mahalanobis distance. Realizations with smaller Mahalanobis distances are, on average, smoother than realizations with larger Mahalanobis distances. Poor ergodic conditions tend to generate higher proportions of flatter variograms relative to the variogram model. Only equiprobable realizations with a Mahalanobis distance equal to the expected Mahalanobis distance have an expected variogram matching the variogram model. For large vector sizes, Cholesky-simulated Gaussian vectors cannot be used to explore uncertainty in the neighborhood of the vector mean. Instead, uncertainty is explored around the n-dimensional elliptical envelope corresponding to the expected Mahalanobis distance.
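A quick numerical check of the distributional statements above, assuming numpy and a toy exponential covariance: for Cholesky-simulated vectors x = Lz, the squared Mahalanobis distance reduces to z'z and follows a chi-square distribution with n degrees of freedom, so its mean is n, its variance 2n, and for large n no realization falls near the vector mean.

import numpy as np

rng = np.random.default_rng(6)
n = 1000                                              # size of the Gaussian vector

# Toy exponential covariance and its Cholesky factor, as in Cholesky (LU) simulation.
idx = np.arange(n)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 50.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

n_real = 2000
Z = rng.standard_normal((n_real, n))                  # independent normal scores
X = Z @ L.T                                           # Cholesky-simulated realizations

# For x = L z, the squared Mahalanobis distance x' C^{-1} x equals z'z, which
# is chi-square distributed with n degrees of freedom.
d2 = np.sum(Z ** 2, axis=1)
print(d2.mean(), d2.var())                            # close to n and 2n
print("smallest distance observed:", round(float(d2.min()), 1))   # far from 0 for large n
print("average realization variance:", round(float(X.var()), 2))  # close to the unit sill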

8.
Conditional simulation of intrinsic random functions of order k is a stochastic method that generates realizations which mimic the spatial fluctuation of nonstationary phenomena, reproduce their generalized covariance, and honor the available data at sampled locations. The technique proposed here requires the following steps: (i) on-line simulation of Wiener-Levy processes and of their integrations; (ii) use of the turning-bands method to generate realizations in R^n; (iii) conditioning to available data; and (iv) verification of the reproduced generalized covariance using generalized variograms. The practical aspects of the technique are demonstrated in two and three dimensions. Examples include the conditional simulation of geological variates of the Crystal Viking petroleum reservoir, Alberta, Canada.

10.
11.
This paper aims to provide a stochastic response surface method (SRSM) that can consider non-Gaussian dependent random variables under incomplete probability information. The Rosenblatt transformation is adopted to map the random variables from the original space into the mutually independent standard normal space for the development of the stochastic surrogate model. The multivariate joint distribution is reconstructed by the pair-copula decomposition approach, in which the pair-copula parameters are retrieved from the incomplete probability information. The proposed method is illustrated with a tunnel excavation example. Three different dependence structures, characterized by normal copulas, Frank copulas, and hybrid copulas, are investigated to demonstrate the effect of the dependence structure on the reliability results. The results show that the widely used Nataf transformation is actually a special case of the proposed method if all pair-copulas are normal copulas. The effect of the conditioning order is also examined. This study provides new insight into SRSM-based reliability analysis from the copula viewpoint and extends the application of SRSM under incomplete probability information.
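As an illustration of the Rosenblatt step for the special case of a normal pair-copula (the case in which the method reduces to the Nataf transformation), the sketch below maps two dependent uniforms to independent standard normal variables; the copula parameter rho is a made-up placeholder for a value retrieved from the incomplete probability information (numpy and scipy assumed).

import numpy as np
from scipy.stats import norm

rho = 0.7    # normal pair-copula parameter (placeholder value)

def rosenblatt_normal_copula(u1, u2, rho):
    # Rosenblatt transform for a bivariate normal copula: u1 is kept, u2 is
    # replaced by the conditional copula C(u2 | u1); mapping both through the
    # inverse normal CDF gives independent standard normal variables.
    z1 = norm.ppf(u1)
    v2 = norm.cdf((norm.ppf(u2) - rho * z1) / np.sqrt(1.0 - rho ** 2))
    return z1, norm.ppf(v2)

# Dependent uniforms built from correlated normals, for checking.
rng = np.random.default_rng(7)
zz = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u1, u2 = norm.cdf(zz[:, 0]), norm.cdf(zz[:, 1])

x1, x2 = rosenblatt_normal_copula(u1, u2, rho)
print("correlation after the transform:", round(float(np.corrcoef(x1, x2)[0, 1]), 3))   # ~0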

12.
This article deals with the effect of grain crushing on shear localization in granular materials during plane strain monotonic compression tests under constant lateral pressure. The grain diameter and the initial void ratio were stochastically distributed using a spatial correlation. To describe the mechanical behavior of cohesionless granular materials during a monotonic deformation path in plane strain compression, we used a micropolar hypoplastic constitutive model that is able to describe the salient properties of granular bodies, including shear localization. The model was extended by introducing changes of the grain diameter with varying pressure, using formulae from breakage mechanics proposed for crushable granulates. The initial void ratios and grain diameters took the form of correlated random spatial fields described by both symmetric and nonsymmetric random distributions using a homogeneous correlation function. The field realizations were generated with the help of an original conditional rejection method. A few representative samples of the random fields selected from the generated set were taken into account in the numerical calculations.

13.
A new and simple method, the bi-Gaussian approach, is proposed to obtain estimates of recovery functions. Existing methods estimate recovery functions with conditional distributions where the conditioning set is all the available data. Here, instead, it is proposed to use the simple kriging estimate of the Gaussian transform. Results in the point-recovery case are identical to those of the multi-Gaussian approach of Verly (1983, 1984), whereas in the non-point-support situation an approximation is derived which saves computer time compared with employing the strict multi-Gaussian hypothesis. Two examples compare favorably with the well-established disjunctive kriging method (discrete Gaussian model).

14.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with a large number of parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations that exploit the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to compute all the derivatives simultaneously with only a few simulations. This new technique makes history matching using the local gradual deformation method with large numbers of parameters tractable.
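The sketch below is a toy illustration of the derivative trick, not the paper's perturbation design: the objective is a sum of local components with disjoint (made-up) sets of influential parameters, so one perturbed evaluation per group yields finite-difference derivatives for one parameter of every component at once, here two runs instead of six one-at-a-time perturbations (numpy assumed).

import numpy as np

# Toy objective split into local components; component k depends only on the
# parameters listed in support[k] (its influential parameters).
support = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]

def component(k, p):
    return float(np.sum((p[support[k]] - k) ** 2))

def objective(p):
    return sum(component(k, p) for k in range(len(support)))

p0 = np.zeros(6)
eps = 1e-6
base = [component(k, p0) for k in range(len(support))]

# Each pass perturbs one influential parameter per component simultaneously,
# so all six derivatives are recovered from two perturbed evaluations.
grad = np.zeros_like(p0)
for j in range(2):
    p_pert = p0.copy()
    perturbed = [s[j] for s in support]
    p_pert[perturbed] += eps
    for k, i in enumerate(perturbed):
        grad[i] = (component(k, p_pert) - base[k]) / eps

print("objective:", objective(p0))
print("approximate gradient:", np.round(grad, 3))   # analytic value: [0, 0, -2, -2, -4, -4]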

15.
Conditioning realizations of stationary Gaussian random fields to a set of data is traditionally based on simple kriging. In practice, this approach may be demanding as it does not account for the uncertainty in the spatial average of the random field. In this paper, an alternative model is presented, in which the Gaussian field is decomposed into a random mean, constant over space but variable over the realizations, and an independent residual. It is shown that, when the prior variance of the random mean is infinitely large (reflecting prior ignorance on the actual spatial average), the realizations of the Gaussian random field are made conditional by substituting ordinary kriging for simple kriging. The proposed approach can be extended to models with random drifts that are polynomials in the spatial coordinates, by using universal or intrinsic kriging for conditioning the realizations, and also to multivariate situations by using cokriging instead of kriging.
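A minimal one-dimensional sketch of conditioning by kriging, with ordinary kriging in the role that simple kriging traditionally plays, in line with the abstract: the unconditional realization is corrected by the ordinary-kriging interpolation of its mismatch at the data locations, so no value of the spatial average has to be assumed in the conditioning step. The grid, covariance model, and data values are made up (numpy assumed).

import numpy as np

rng = np.random.default_rng(8)

def cov(h, a=10.0):
    return np.exp(-h / a)                     # exponential covariance, unit sill

# Unconditional realization on a 1-D grid (Cholesky simulation), with an
# arbitrary spatial average that the conditioning step never needs to know.
x = np.arange(100.0)
C = cov(np.abs(x[:, None] - x[None, :]))
L = np.linalg.cholesky(C + 1e-8 * np.eye(x.size))
y_sim = 3.0 + L @ rng.standard_normal(x.size)

data_idx = np.array([10, 45, 80])             # conditioning data locations
data_val = np.array([2.2, 3.9, 2.8])          # conditioning data values

def ordinary_kriging(values):
    # Ordinary kriging of `values` (known at data_idx) at every grid node.
    n = data_idx.size
    A = np.ones((n + 1, n + 1)); A[-1, -1] = 0.0
    A[:n, :n] = cov(np.abs(x[data_idx, None] - x[None, data_idx]))
    b = np.ones((n + 1, x.size))
    b[:n, :] = cov(np.abs(x[data_idx, None] - x[None, :]))
    w = np.linalg.solve(A, b)[:n, :]          # kriging weights (Lagrange row dropped)
    return values @ w

# Conditioning by kriging: add the kriged mismatch between the data and the
# realization's own values at the data locations.
y_cond = y_sim + ordinary_kriging(data_val - y_sim[data_idx])
print(np.round(y_cond[data_idx], 3))          # the data are honored exactly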

16.
Assessment of uncertainty in the performance of fluvial reservoirs often requires the ability to generate realizations of channel sands that are conditional to well observations. For channels with low sinuosity this problem has been effectively solved. When the sinuosity is large, however, the standard stochastic models for fluvial reservoirs are not valid, because the deviation of the channel from a principal direction line is multivalued. In this paper, I show how the method of randomized maximum likelihood can be used to generate conditional realizations of channels with large sinuosity. In one example, a Gaussian random field model is used to generate an unconditional realization of a channel with large sinuosity, and this realization is then conditioned to well observations. Channels generated in the second approach are less realistic, but may be sufficient for modeling reservoir connectivity in a realistic way. In the second example, an unconditional realization of a channel is generated by a complex geologic model with random forcing. It is then adjusted in a meaningful way to honor well observations. The key feature in the solution is the use of channel direction instead of channel deviation as the characteristic random function describing the geometry of the channel.

17.
Seepage and Instability Characteristics of Slopes Based on the Spatial Variability Structure of the Saturated Hydraulic Conductivity
Previous studies generally simulate the spatial variability of geotechnical parameters with the single-random-variable (SRV) approach or with spatially variable random fields generated from horizontal or vertical scales of fluctuation; spatially variable random fields with inclined orientation have not been addressed. Within the theoretical framework of conditional simulation and the non-intrusive stochastic finite element method, a procedure is proposed in which sequential Gaussian simulation is used to generate conditional random fields of slope parameters, and the finite element method is then used to analyze slope seepage and stability. For an idealized slope, 200 conditional random field simulations were carried out for each of seven spatial variability structures (isotropic and geometrically anisotropic) of the saturated hydraulic conductivity (Ks); finite element seepage and stability calculations were performed on the simulated fields, and the repeated results for each spatial variability structure were analyzed statistically. The results show that the proposed method not only reproduces the second-order spatial statistics of the parameters in the study area and allows random fields with different types, degrees, and orientations of spatial variability to be simulated by setting the variogram parameters, but also conditions the simulated random fields on field observations, thereby improving the accuracy of the assigned values. The spatial variability structure of Ks affects, to varying degrees, the distribution of pore water pressure, the range of the groundwater table, the factor of safety, and the location of the critical slip surface. This study provides methodological support for the stability assessment of reservoir bank slopes.

18.
Spatial inverse problems in the Earth Sciences are often ill-posed, requiring the specification of a prior model to constrain the nature of the inverse solutions. Otherwise, inverted model realizations lack geological realism. In spatial modeling, such a prior model determines the spatial variability of the inverse solution, for example as constrained by a variogram, a Boolean model, or a training image-based model. In many cases, particularly in subsurface modeling, one lacks the amount of data needed to fully determine the nature of the spatial variability. For example, many different training images could be proposed for a given study area. Such alternative training images or scenarios relate to the different possible geological concepts, each exhibiting a distinctive geological architecture. Many inverse methods rely on priors that represent a single subjectively chosen geological concept (a single variogram within a multi-Gaussian model or a single training image). This paper proposes a novel and practical parameterization of the prior model allowing several discrete choices of geological architectures within the prior. This method does not attempt to parameterize the possibly complex architectures by a set of model parameters. Instead, a large set of prior model realizations is provided in advance, by means of Monte Carlo simulation, in which the training image is randomized. The parameterization is achieved by defining a metric space which accommodates this large set of model realizations. This metric space is equipped with a “similarity distance”, a distance function that measures the similarity of geometry between any two model realizations relevant to the problem at hand. Through examples, it is shown that inverse solutions can be efficiently found in this metric space using a simple stochastic search method.

19.

A new low-dimensional parameterization based on principal component analysis (PCA) and convolutional neural networks (CNN) is developed to represent complex geological models. The CNN–PCA method is inspired by recent developments in computer vision using deep learning. CNN–PCA can be viewed as a generalization of an existing optimization-based PCA (O-PCA) method. Both CNN–PCA and O-PCA entail post-processing a PCA model to better honor complex geological features. In CNN–PCA, rather than use a histogram-based regularization as in O-PCA, a new regularization involving a set of metrics for multipoint statistics is introduced. The metrics are based on summary statistics of the nonlinear filter responses of geological models to a pre-trained deep CNN. In addition, in the CNN–PCA formulation presented here, a convolutional neural network is trained as an explicit transform function that can post-process PCA models quickly. CNN–PCA is shown to provide both unconditional and conditional realizations that honor the geological features present in reference SGeMS geostatistical realizations for a binary channelized system. Flow statistics obtained through simulation of random CNN–PCA models closely match results for random SGeMS models for a demanding case in which O-PCA models lead to significant discrepancies. Results for history matching are also presented. In this assessment CNN–PCA is applied with derivative-free optimization, and a subspace randomized maximum likelihood method is used to provide multiple posterior models. Data assimilation and significant uncertainty reduction are achieved for existing wells, and physically reasonable predictions are also obtained for new wells. Finally, the CNN–PCA method is extended to a more complex nonstationary bimodal deltaic fan system, and is shown to provide high-quality realizations for this challenging example.
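The underlying PCA parameterization, before any CNN post-processing, can be sketched in a few lines: an ensemble of prior realizations is reduced by a truncated SVD, and any low-dimensional standard normal vector xi maps to a new model. The ensemble below is correlated noise standing in for SGeMS realizations, and the CNN transform that restores channelized features is deliberately not shown (numpy assumed).

import numpy as np

rng = np.random.default_rng(9)

# Ensemble of prior realizations (correlated noise as a stand-in for SGeMS
# geostatistical models), one flattened model per row.
n_real, n_cells = 200, 400
ensemble = np.cumsum(rng.standard_normal((n_real, n_cells)), axis=1) / np.sqrt(np.arange(1, n_cells + 1))

m_mean = ensemble.mean(axis=0)
Xc = (ensemble - m_mean).T / np.sqrt(n_real - 1)   # centered data matrix, one model per column
U, s, _ = np.linalg.svd(Xc, full_matrices=False)

n_comp = 20                                        # retained principal components

def pca_model(xi):
    # New model from a low-dimensional vector xi ~ N(0, I); CNN-PCA would
    # additionally post-process this output with a trained network.
    return m_mean + U[:, :n_comp] @ (s[:n_comp] * xi)

m_new = pca_model(rng.standard_normal(n_comp))
print(m_new.shape, round(float(m_new.std()), 3))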


20.
We present a two-step stochastic inversion approach for monitoring the distribution of CO2 injected into deep saline aquifers for the typical scenario of a single injection well and a database comprising a common suite of well logs as well as time-lapse vertical seismic profiling (VSP) data. In the first step, we compute several sets of stochastic models of the elastic properties using conventional sequential Gaussian co-simulation (SGCS), representing the considered reservoir before CO2 injection. All realizations within a set of models are then iteratively combined using a modified gradual deformation algorithm aimed at reducing the mismatch between the observed and simulated VSP data. In the second step, these optimal static models serve as input for a history matching approach using the same modified gradual deformation algorithm to minimize the mismatch between the observed and simulated VSP data following the injection of CO2. At each gradual deformation step, the injection and migration of CO2 is simulated and the corresponding seismic traces are computed and compared with the observed ones. The proposed stochastic inversion approach has been tested on a realistic, and arguably particularly challenging, synthetic case study mimicking the geological environment of a potential CO2 injection site in the Cambrian-Ordovician sedimentary sequence of the St. Lawrence platform in southern Québec. The results demonstrate that the proposed two-step reservoir characterization approach is capable of adequately resolving and monitoring the distribution of the injected CO2. This finds its expression in optimized models of P- and S-wave velocities, density, and porosity, which, compared with conventional stochastic reservoir models, exhibit significantly improved structural similarity with regard to the corresponding reference models. The proposed approach is therefore expected to allow for an optimal injection forecast through the quantitative assimilation of all data available from the appraisal stage of a CO2 injection site.
