Similar Articles
20 similar articles found.
1.
Based on the algorithm for gradual deformation of Gaussian stochastic models, we propose, in this paper, an extension of this method to gradually deforming realizations generated by sequential, not necessarily Gaussian, simulation. As in the Gaussian case, gradual deformation of a sequential simulation preserves the spatial variability of the stochastic model and yields, in general, a regular objective function that can be minimized by an efficient optimization algorithm (e.g., a gradient-based algorithm). Furthermore, we discuss local gradual deformation and gradual deformation with respect to the structural parameters (mean, variance, variogram range, etc.) of realizations generated by sequential simulation. Local gradual deformation may significantly improve calibration speed in the case where observations are scattered in different zones of a field. Gradual deformation with respect to structural parameters is necessary when these parameters cannot be inferred a priori and need to be determined using an inverse procedure. A synthetic example inspired by a real oil field is presented to illustrate different aspects of this approach. Results from this case study demonstrate the efficiency of the gradual deformation approach for constraining facies models generated by sequential indicator simulation. They also show the potential applicability of the proposed approach to complex real cases.
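In the Gaussian case, the deformation rule combines two independent standard Gaussian vectors as y(t) = y1·cos(πt) + y2·sin(πt), which remains standard Gaussian for every t; for a sequential simulation, the same combination is applied to the Gaussian scores of the uniform numbers driving the simulation. A minimal Python sketch of this driver (function name and interface are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def deform_sequential_driver(y1, y2, t):
    """Gradual deformation of the numbers driving a sequential simulation:
    combine two independent N(0,1) vectors, then map back to uniforms
    through the normal CDF. Illustrative sketch only."""
    # cos^2 + sin^2 = 1, so y_t stays standard Gaussian for every t,
    # which is what preserves the spatial variability of the model
    y_t = y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)
    return norm.cdf(y_t)  # uniforms that feed the sequential simulation

# Example: a smooth chain of simulation drivers indexed by t
rng = np.random.default_rng(0)
y1, y2 = rng.standard_normal(1000), rng.standard_normal(1000)
u_half = deform_sequential_driver(y1, y2, 0.5)
```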

2.
Gradual deformation is a parameterization method that considerably reduces the unknown parameter space of stochastic models. This method can be used in an iterative optimization procedure for constraining stochastic simulations to data that are complex, nonanalytical functions of the simulated variables. It is based on the fact that linear combinations of multi-Gaussian random functions remain multi-Gaussian random functions. During the past few years, we developed the gradual deformation method by combining independent realizations. This paper investigates another alternative: the combination of dependent realizations. One of our motivations for combining dependent realizations was to improve the numerical stability of the gradual deformation method. Because of limitations both in the size of simulation grids and in the precision of simulation algorithms, numerical realizations of a stochastic model are never perfectly independent. It was shown that the accumulation of very small dependences between realizations might result in a significant structural drift from the initial stochastic model. From the combination of random functions whose covariance and cross-covariance are proportional to each other, we derived a new formulation of the gradual deformation method that explicitly takes into account the numerical dependence between realizations. This new formulation allows us to reduce the structural deterioration during the iterative optimization. The problem of combining dependent realizations also arises when deforming conditional realizations of a stochastic model. As opposed to the combination of independent realizations, combining conditional realizations avoids the additional conditioning step during the optimization process. However, this procedure is limited to global deformations with fixed structural parameters.

3.
4.
Constraining stochastic models of reservoir properties such as porosity and permeability can be formulated as an optimization problem. While an optimization based on random search methods preserves the spatial variability of the stochastic model, it is prohibitively computer intensive. In contrast, gradient search methods may be very efficient, but they do not preserve the spatial variability of the stochastic model. The gradual deformation method allows for modifying a reservoir model (i.e., a realization of the stochastic model) using a small number of parameters while preserving its spatial variability. It can be considered a first step towards the merger of random and gradient search methods. The gradual deformation method yields chains of reservoir models that can be investigated successively to identify an optimal reservoir model. The investigation of each chain is based on gradient computations, but the building of chains of reservoir models is random. In this paper, we propose an algorithm that further improves the efficiency of the gradual deformation method. Contrary to the previous gradual deformation method, we also use gradient information to build chains of reservoir models. The idea is to combine the initial reservoir model or the previously optimized reservoir model with a compound reservoir model. This compound model is a linear combination of a set of independent reservoir models. The combination coefficients are calculated so that the search direction from the initial model is as close as possible to the gradient search direction, as sketched below. This new gradual deformation scheme allows us to reduce the number of optimization parameters while selecting an optimal search direction. The numerical example compares the performance of the new gradual deformation scheme with that of the traditional one.
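One way to read the coefficient computation is as a least-squares projection of the gradient direction onto the span of the candidate deformation directions. The sketch below is an assumed formulation for illustration only; the paper's exact scheme, including the normalization that preserves spatial variability, may differ:

```python
import numpy as np

def compound_direction_coefficients(y0, candidates, grad):
    """Toy version of the gradient-guided combination (assumed formulation,
    not the paper's exact scheme): choose coefficients a so the compound
    deformation direction sum_i a_i (y_i - y0) is as close as possible, in
    the least-squares sense, to the negative objective-function gradient."""
    D = np.stack([y - y0 for y in candidates], axis=1)  # candidate directions
    a, *_ = np.linalg.lstsq(D, -grad, rcond=None)
    return a / np.linalg.norm(a)  # simple normalization; the paper imposes its own constraint
```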

5.
Sequential Gaussian simulation (SGSIM) is a stochastic method developed to avoid the smoothing effect of deterministic methods by generating multiple stochastic realizations. One of the main issues of this technique, however, is the intensive computation related to the inverse operation in solving the kriging system, which significantly limits its application when many realizations must be produced for uncertainty quantification. In this paper, a physics-informed machine learning (PIML) model is proposed to improve the computational efficiency of SGSIM. To this end, only a small amount of data produced by SGSIM is used as the training dataset, from which the model can discover the spatial correlations between available data and unsampled points. To achieve this, the governing equations of the SGSIM algorithm are incorporated into the proposed network. The quality of realizations produced by the PIML model is compared for both 2D and 3D cases, visually and quantitatively. Furthermore, computational performance is evaluated on different grid sizes. The results demonstrate that the proposed PIML model can reduce the computational time of SGSIM by several orders of magnitude, producing similar results in a matter of seconds.
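For reference, the algorithm being accelerated can be stated compactly. A didactic 1-D SGSIM sketch with simple kriging and an exponential covariance, with no neighborhood search and no hard-data conditioning, so it is practical only for small grids:

```python
import numpy as np

def sgsim_1d(n, sill=1.0, range_=10.0, seed=0):
    """Minimal 1-D sequential Gaussian simulation: visit the grid in random
    order and draw each node from the simple-kriging conditional
    distribution given all previously simulated nodes."""
    rng = np.random.default_rng(seed)
    cov = lambda h: sill * np.exp(-3.0 * np.abs(h) / range_)
    path = rng.permutation(n)                      # random visiting order
    z = np.full(n, np.nan)
    visited = []
    for idx in path:
        if visited:
            pts = np.array(visited, dtype=float)
            C = cov(pts[:, None] - pts[None, :])   # data-to-data covariance
            c = cov(pts - idx)                     # data-to-target covariance
            w = np.linalg.solve(C, c)              # simple kriging weights
            mean, var = w @ z[visited], max(sill - w @ c, 1e-12)
        else:
            mean, var = 0.0, sill
        z[idx] = mean + np.sqrt(var) * rng.standard_normal()  # draw from ccdf
        visited.append(idx)
    return z
```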

6.
We present a two-step stochastic inversion approach for monitoring the distribution of CO2 injected into deep saline aquifers for the typical scenario of a single injection well and a database comprising a common suite of well logs as well as time-lapse vertical seismic profiling (VSP) data. In the first step, we compute several sets of stochastic models of the elastic properties using conventional sequential Gaussian co-simulations (SGCS) representing the considered reservoir before CO2 injection. All realizations within a set of models are then iteratively combined using a modified gradual deformation algorithm aiming at reducing the mismatch between the observed and simulated VSP data. In the second step, these optimal static models serve as input for a history matching approach using the same modified gradual deformation algorithm for minimizing the mismatch between the observed and simulated VSP data following the injection of CO2. At each gradual deformation step, the injection and migration of CO2 are simulated and the corresponding seismic traces are computed and compared with the observed ones. The proposed stochastic inversion approach has been tested on a realistic, and arguably particularly challenging, synthetic case study mimicking the geological environment of a potential CO2 injection site in the Cambrian-Ordovician sedimentary sequence of the St. Lawrence platform in southern Québec. The results demonstrate that the proposed two-step reservoir characterization approach is capable of adequately resolving and monitoring the distribution of the injected CO2. This finds its expression in optimized models of P- and S-wave velocities, density, and porosity, which, compared to conventional stochastic reservoir models, exhibit a significantly improved structural similarity with regard to the corresponding reference models. The proposed approach is therefore expected to allow for an optimal injection forecast through quantitative assimilation of all available data from the appraisal stage of a CO2 injection site.

7.
Optimization with the Gradual Deformation Method
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimizations may remove, step by step, the structure preservation capability of the gradual deformation method. This bias is negligible when deformation is restricted to a few realization chains, but grows as the number of chains tends to infinity. Since, in practice, the optimization of reservoir models is limited to a number of iterations that is small with respect to the number of gridblocks, the spatial variability is preserved. Last, the optimization processes are implemented on the basis of the Levenberg–Marquardt method. Although the objective functions, written in terms of Gaussian white noises, are reduced to the data mismatch term, the conditional realization space can be properly sampled.
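A bare-bones version of the chained optimization reads as follows. The paper implements the per-chain optimization with Levenberg–Marquardt; this sketch substitutes a simple bounded 1-D line search over the deformation parameter, and the objective is an assumed black box mapping a white-noise vector to a data mismatch:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gradual_deformation_optimize(objective, n_cells, n_chains=10, seed=0):
    """Chained gradual-deformation optimization (sketch): each chain combines
    the current Gaussian white noise with a fresh independent one and a
    1-D search over the deformation parameter t is performed."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n_cells)
    for _ in range(n_chains):
        y_new = rng.standard_normal(n_cells)
        combine = lambda t: y * np.cos(np.pi * t) + y_new * np.sin(np.pi * t)
        res = minimize_scalar(lambda t: objective(combine(t)),
                              bounds=(-1.0, 1.0), method="bounded")
        if res.fun < objective(y):   # keep the chain's best model
            y = combine(res.x)
    return y
```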

8.
Conditioning realizations of stationary Gaussian random fields to a set of data is traditionally based on simple kriging. In practice, this approach may be demanding as it does not account for the uncertainty in the spatial average of the random field. In this paper, an alternative model is presented, in which the Gaussian field is decomposed into a random mean, constant over space but variable over the realizations, and an independent residual. It is shown that, when the prior variance of the random mean is infinitely large (reflecting prior ignorance on the actual spatial average), the realizations of the Gaussian random field are made conditional by substituting ordinary kriging for simple kriging. The proposed approach can be extended to models with random drifts that are polynomials in the spatial coordinates, by using universal or intrinsic kriging for conditioning the realizations, and also to multivariate situations by using cokriging instead of kriging.
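The construction referred to here is the classical conditioning-by-kriging of an unconditional realization; the paper's result is that replacing simple kriging weights by ordinary kriging weights accounts for an unknown spatial mean with infinite prior variance. A 1-D sketch (interfaces are illustrative):

```python
import numpy as np

def condition_by_kriging(y_uncond, x_grid, x_data, z_data, cov, ordinary=True):
    """Condition an unconditional realization by kriging the residuals between
    the data and the realization at the data sites. ordinary=True swaps in
    ordinary kriging, i.e. the infinite-prior-variance mean of the paper;
    ordinary=False is classical simple kriging."""
    C = cov(x_data[:, None] - x_data[None, :])       # data-to-data covariance
    c = cov(x_data[:, None] - x_grid[None, :])       # data-to-grid covariance
    if ordinary:
        n = len(x_data)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n], A[n, :n], A[:n, n] = C, 1.0, 1.0  # unbiasedness constraint
        b = np.vstack([c, np.ones(len(x_grid))])
        w = np.linalg.solve(A, b)[:n]
    else:
        w = np.linalg.solve(C, c)                    # simple kriging weights
    y_at_data = np.interp(x_data, x_grid, y_uncond)  # assumes sorted x_grid
    return y_uncond + w.T @ (z_data - y_at_data)     # exact at the data sites
```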

9.
This paper presents random field models with Gaussian or gamma univariate distributions and isofactorial bivariate distributions, constructed by composing two independent random fields: a directing function with stationary Gaussian increments and a stationary coding process with bivariate Gaussian or gamma distributions. Two variations are proposed, by considering a multivariate directing function and a coding process with a separable covariance, or by including drift components in the directing function. Iterative algorithms based on the Gibbs sampler allow one to condition the realizations of the substitution random fields to a set of data, while the inference of the model parameters relies on simple tools such as indicator variograms and variograms of different orders. A case study in polluted soil management is presented, for which a gamma model is used to quantify the risk that pollutant concentrations over remediation units exceed a given toxicity level. Unlike the multivariate Gaussian model, the proposed gamma model accounts for an asymmetry in the spatial correlation of the indicator functions around the median and for a spatial clustering of high pollutant concentrations.
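A toy 1-D version of the substitution construction, composing a random-walk directing function with a stationary Gaussian coding process simulated by Cholesky factorization (illustrative only; the paper's model also covers gamma coding processes and Gibbs-sampler conditioning):

```python
import numpy as np

def substitution_field_1d(n, cov, seed=0):
    """Toy 1-D substitution random field Z(x) = Y(T(x)): T is a directing
    function with stationary Gaussian increments (a Gaussian random walk) and
    Y a stationary Gaussian coding process, simulated here at the points T(x)
    by Cholesky factorization of its covariance."""
    rng = np.random.default_rng(seed)
    T = np.cumsum(rng.standard_normal(n))              # directing function
    C = cov(np.abs(T[:, None] - T[None, :]))           # coding covariance at T
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))      # jitter for stability
    return L @ rng.standard_normal(n)                  # Z = Y(T(x))

# e.g. an exponential coding covariance
z = substitution_field_1d(200, lambda h: np.exp(-h))
```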

10.
A Bayesian linear inversion methodology based on Gaussian mixture models and its application to geophysical inverse problems are presented in this paper. The proposed inverse method is based on a Bayesian approach under the assumptions of a Gaussian mixture random field for the prior model and a Gaussian linear likelihood function. The model for the latent discrete variable is defined to be a stationary first-order Markov chain. In this approach, a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed. A Markov chain Monte Carlo algorithm can be used to efficiently simulate realizations from the correct posterior model. Two inversion studies based on real well log data are presented, and the main results are the posterior distributions of the reservoir properties of interest, the corresponding predictions and prediction intervals, and a set of conditional realizations. The first application is a seismic inversion study for the prediction of lithological facies and P- and S-impedance, where a 30% improvement in the root-mean-square error of the predictions is obtained compared to traditional Gaussian inversion. The second application is a rock physics inversion study for the prediction of lithological facies, porosity, and clay volume, where predictions improve slightly compared to the Gaussian inversion approach.
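The Gaussian building block underlying this recursion is the exact posterior of a Gaussian prior under a Gaussian linear likelihood, applied per mixture component. A minimal sketch of that block (the mixture and Markov-chain machinery of the paper are omitted):

```python
import numpy as np

def gaussian_linear_posterior(mu_m, Sigma_m, G, Sigma_e, d):
    """Exact posterior of a Gaussian prior m ~ N(mu_m, Sigma_m) under the
    Gaussian linear likelihood d = G m + e, e ~ N(0, Sigma_e); the mixture
    inversion applies this conditioning to each mixture component."""
    S = G @ Sigma_m @ G.T + Sigma_e              # marginal data covariance
    K = Sigma_m @ G.T @ np.linalg.inv(S)         # gain matrix
    mu_post = mu_m + K @ (d - G @ mu_m)
    Sigma_post = Sigma_m - K @ G @ Sigma_m
    return mu_post, Sigma_post
```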

11.
The prediction of fluid flows within hydrocarbon reservoirs requires the characterization of petrophysical properties. Such characterization is performed on the basis of geostatistics and history matching; in short, a reservoir model is first randomly drawn and then sequentially adjusted until it reproduces the available dynamic data. Two main concerns typical of the problem under consideration are the heterogeneity of rocks occurring at all scales and the use of data of distinct resolution levels. Therefore, referring to sequential Gaussian simulation, this paper proposes a new stochastic simulation method able to handle several scales for both continuous and discrete random fields. This method adds flexibility to history matching as it boils down to a multiscale parameterization of reservoir models. In other words, reservoir models can be updated at either coarse or fine scales, or both. Parameterization adapts to the available data: the coarser the scale targeted, the smaller the number of unknown parameters, and the more efficient the history-matching process. This paper focuses on the use of variational optimization techniques driven by the gradual deformation method to vary reservoir models. Other data assimilation methods and perturbation processes could have been envisioned as well. Last, a numerical application case is presented in order to highlight the advantages of the proposed method for conditioning permeability models to dynamic data. For simplicity, we focus on two-scale processes. The coarse scale describes the variations in the trend while the fine scale characterizes local variations around the trend. The relationships between data resolution and parameterization are investigated.
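A toy illustration of the two-scale composition described above, where the coarse trend and the fine residual can be updated independently during history matching (illustrative, not the paper's simulation method):

```python
import numpy as np

def two_scale_model(coarse, fine, factor):
    """Toy two-scale composition: a coarse trend (one value per coarse cell)
    is expanded to the fine grid and added to a fine-scale residual, so
    history matching can update either scale independently."""
    return np.kron(coarse, np.ones(factor)) + fine

# e.g. a 10-cell trend refined by a factor of 5 onto a 50-cell grid
rng = np.random.default_rng(0)
field = two_scale_model(rng.standard_normal(10), 0.3 * rng.standard_normal(50), 5)
```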

12.
We present a methodology that allows conditioning the spatial distribution of geological and petrophysical properties of reservoir model realizations on available production data. The approach is fully consistent with modern concepts depicting natural reservoirs as composite media, where the distribution of both lithological units (or facies) and associated attributes are modeled as stochastic processes of space. We represent the uncertain spatial distribution of the facies through a Markov mesh (MM) model, which allows describing complex and detailed facies geometries in a rigorous Bayesian framework. The latter is then embedded within a history matching workflow based on an iterative form of the ensemble Kalman filter (EnKF). We test the proposed methodology by way of a synthetic study characterized by the presence of two distinct facies. We analyze the accuracy and computational efficiency of our algorithm and its ability, relative to the standard EnKF, to properly estimate model parameters and assess future reservoir production. We show the feasibility of integrating MM in a data assimilation scheme. Our methodology yields a set of updated model realizations characterized by a realistic spatial distribution of facies and their log permeabilities. Model realizations updated through our proposed algorithm correctly capture the production dynamics.
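For orientation, the analysis step of a standard stochastic EnKF, which the paper iterates and couples with the Markov mesh facies model, can be sketched as follows (a generic sketch, not the paper's full workflow):

```python
import numpy as np

def enkf_analysis(X, Y, d, R, rng):
    """Standard stochastic-EnKF analysis step: X holds the ensemble of
    parameter vectors (one column per member), Y the corresponding simulated
    data. Each member is shifted toward its own perturbed copy of the
    observations d with error covariance R."""
    n = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    Cxy = Xa @ Ya.T / (n - 1)                        # cross-covariance
    Cyy = Ya @ Ya.T / (n - 1)                        # simulated-data covariance
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n).T
    K = Cxy @ np.linalg.inv(Cyy + R)                 # Kalman gain
    return X + K @ (D - Y)                           # updated ensemble
```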

13.
Stochastic simulation is increasingly used to map the spatial variability in the grades of elements of interest and to assess the uncertainty in mineral resources and ore reserves. The practical implementation requires specifying a stochastic model, which describes the spatial distribution of the grades, and an algorithm to construct realizations of these grades, viewed as different possible outcomes or scenarios. In the case of the Gaussian random field model, a variety of algorithms have been proposed in the past decades, but their ability to reproduce the model statistics is often unequal. In this paper, we compare two such algorithms, namely the turning bands and the sequential algorithms. The comparison is carried out through a synthetic case study and a real case study in a porphyry copper deposit located in southeastern Iran, in which it is of interest to jointly simulate the copper, molybdenum, silver, lead, and zinc grades. Statistical tests and graphical validations are performed to check whether or not the realizations reproduce the features of the true grades, in particular their direct and cross variograms. Sequential simulation based on collocated cokriging turns out to poorly reproduce the cross variograms, while turning bands proves to be accurate in all the analyzed cases.
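The basic validation statistic used in such comparisons is the experimental variogram of each realization, checked against the model. A minimal 1-D version:

```python
import numpy as np

def experimental_variogram(z, max_lag):
    """Experimental (semi)variogram of a regularly sampled 1-D realization:
    gamma(h) = 0.5 * mean((z(x+h) - z(x))^2), the statistic compared to the
    model variogram when validating simulated realizations."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])
```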

14.
A fast Fourier transform (FFT) moving average (FFT-MA) method for generating Gaussian stochastic processes is derived. Using discrete Fourier transforms makes the calculations easy and fast, so that large random fields can be produced. On the other hand, the basic moving average framework allows us to uncouple the random numbers from the structural parameters (mean, variance, correlation length, etc.), but also to draw the randomness components in the spatial domain. Such features impart great flexibility to the FFT-MA generator. For instance, changing only the random numbers gives distinct realizations all having the same covariance function. Similarly, several realizations can be built from the same random number set but from different structural parameters. Integrating the FFT-MA generator into an optimization procedure provides a tool theoretically capable of determining the random numbers identifying the Gaussian field as well as the structural parameters from dynamic data. Moreover, all or only some of the random numbers can be perturbed, so that realizations produced using the FFT-MA generator can be locally updated through an optimization process.
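On a periodic grid, FFT-MA reduces to filtering white noise with spectral amplitudes equal to the square root of the covariance spectrum. A minimal 1-D sketch, unpadded for brevity (real implementations embed the covariance to suppress wrap-around artifacts):

```python
import numpy as np

def fftma_1d(n, cov_func, seed=0):
    """Minimal 1-D FFT-MA: filter white noise z with spectral amplitudes
    sqrt(FFT(C)), where C is the (circulant) covariance row, so that the
    output y has covariance C while z stays uncoupled from the structure."""
    rng = np.random.default_rng(seed)
    lags = np.minimum(np.arange(n), n - np.arange(n))   # circulant lag table
    C = cov_func(lags)                                  # covariance, first row
    S = np.sqrt(np.maximum(np.fft.fft(C).real, 0.0))    # spectral amplitudes
    z = rng.standard_normal(n)                          # the "random numbers"
    y = np.fft.ifft(S * np.fft.fft(z)).real             # correlated field
    return y, z  # keeping z separate is what allows local perturbation later

# changing cov_func while reusing z changes the structure, not the randomness
y, z = fftma_1d(512, lambda h: np.exp(-h / 10.0))
```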

15.
Compensating for estimation smoothing in kriging
Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator—compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft² slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well—better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging.

16.
Spatially distributed and varying natural phenomena encountered in geoscience and engineering problem solving are typically incompatible with Gaussian models, exhibiting nonlinear spatial patterns and complex, multiple-point connectivity of extreme values. Stochastic simulation of such phenomena is historically founded on second-order spatial statistical approaches, which are limited in their capacity to model complex spatial uncertainty. The newer multiple-point (MP) simulation framework addresses past limits by establishing the concept of a training image, and, arguably, has its own drawbacks. An alternative to current MP approaches is founded upon new high-order measures of spatial complexity, termed “high-order spatial cumulants.” These are combinations of moments of statistical parameters that characterize non-Gaussian random fields and can describe complex spatial information. Stochastic simulation of complex spatial processes is developed based on high-order spatial cumulants in the high-dimensional space of Legendre polynomials. Starting with discrete Legendre polynomials, a set of discrete orthogonal cumulants is introduced as a tool to characterize spatial shapes. Weighted orthonormal Legendre polynomials define the so-called Legendre cumulants that are high-order conditional spatial cumulants inferred from training images and are combined with available sparse data sets. Advantages of the high-order sequential simulation approach developed herein include the absence of any distribution-related assumptions and pre- or post-processing steps. The method is shown to generate realizations of complex spatial patterns, reproduce bimodal data distributions, data variograms, and high-order spatial cumulants of the data. In addition, it is shown that the available hard data dominate the simulation process and have a definitive effect on the simulated realizations, whereas the training images are only used to fill in high-order relations that cannot be inferred from data. Compared to the MP framework, the proposed approach is data-driven and consistently reconstructs the lower-order spatial complexity in the data used, in addition to high order.
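As a concrete example of the statistics involved, the experimental third-order spatial cumulant of a zero-mean series is a triple product moment over a lag pair. A minimal 1-D estimator (illustrative; the paper works with Legendre-cumulant expansions in higher dimensions):

```python
import numpy as np

def third_order_cumulant(z, h1, h2):
    """Experimental third-order spatial cumulant of a 1-D series for the lag
    pair (h1, h2): E[Z(x) Z(x+h1) Z(x+h2)] after centering. For zero-mean
    fields the third cumulant equals this third moment."""
    z = z - z.mean()
    n = len(z) - max(h1, h2)
    return np.mean(z[:n] * z[h1:h1 + n] * z[h2:h2 + n])
```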

17.
Spatial inverse problems in the Earth Sciences are often ill-posed, requiring the specification of a prior model to constrain the nature of the inverse solutions; otherwise, inverted model realizations lack geological realism. In spatial modeling, such a prior model determines the spatial variability of the inverse solution, for example as constrained by a variogram, a Boolean model, or a training image-based model. In many cases, particularly in subsurface modeling, one lacks the amount of data needed to fully determine the nature of the spatial variability. For example, many different training images could be proposed for a given study area. Such alternative training images or scenarios relate to the different possible geological concepts, each exhibiting a distinctive geological architecture. Many inverse methods rely on priors that represent a single, subjectively chosen geological concept (a single variogram within a multi-Gaussian model or a single training image). This paper proposes a novel and practical parameterization of the prior model allowing several discrete choices of geological architectures within the prior. This method does not attempt to parameterize the possibly complex architectures by a set of model parameters. Instead, a large set of prior model realizations is provided in advance, by means of Monte Carlo simulation, where the training image is randomized. The parameterization is achieved by defining a metric space which accommodates this large set of model realizations. This metric space is equipped with a “similarity distance”, a distance function that measures the similarity of geometry between any two model realizations relevant to the problem at hand. Through examples, inverse solutions can be efficiently found in this metric space using a simple stochastic search method.
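A toy rendering of the search strategy: precompute pairwise similarity distances over the pre-generated realizations, then walk among nearest neighbors, accepting moves that decrease the data mismatch. The interfaces and the neighborhood size are assumptions for illustration:

```python
import numpy as np

def metric_space_search(realizations, mismatch, distance, n_iter=100, k=5, seed=0):
    """Toy stochastic search over a fixed set of prior realizations: hop
    between nearest neighbours under the similarity distance, accepting a
    move whenever the data mismatch decreases."""
    rng = np.random.default_rng(seed)
    n = len(realizations)
    D = np.array([[distance(a, b) for b in realizations] for a in realizations])
    cur = int(rng.integers(n))
    for _ in range(n_iter):
        cand = int(rng.choice(np.argsort(D[cur])[1:k + 1]))  # skip self at index 0
        if mismatch(realizations[cand]) < mismatch(realizations[cur]):
            cur = cand
    return realizations[cur]
```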

18.
Most approaches in statistical spatial prediction assume that the spatial data are realizations of a Gaussian random field. However, this assumption is hard to justify for most applications. When the distribution of the data is skewed but otherwise has similar properties to the normal distribution, a closed skew normal distribution can be used to model the skewness. The closed skew normal distribution is an extension of the multivariate skew normal distribution and has the advantage of being closed under marginalization and conditioning. In this paper, we generalize Bayesian prediction methods using closed skew normal distributions. A simulation study is performed to check the validity of the model and the performance of the Bayesian spatial predictor. Finally, our prediction method is applied to Bayesian spatial prediction of the strain data near Semnan, Iran. On the strain data, the closed skew normal model improves the mean-square error of cross-validation.
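The univariate skew normal, the simplest member of the family generalized here, admits a classical stochastic representation that makes sampling straightforward. A sketch (the paper's closed skew normal machinery is multivariate and more general):

```python
import numpy as np

def skew_normal_samples(n, alpha, seed=0):
    """Draw from the univariate skew normal SN(alpha) via the stochastic
    representation Z = delta*|U0| + sqrt(1 - delta^2)*U1 with independent
    standard normals U0, U1 and delta = alpha / sqrt(1 + alpha^2)."""
    rng = np.random.default_rng(seed)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0, u1 = np.abs(rng.standard_normal(n)), rng.standard_normal(n)
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1
```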

19.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with a large number of parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimizations. The new approach is based on approximate derivative calculations using the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to simultaneously compute all the derivatives with only a few simulations. This new technique makes history matching with the local gradual deformation method tractable for large numbers of parameters.
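The perturbation design can be illustrated on a toy problem where the parameter subsets of the local components are disjoint, so that one simulation perturbing one parameter per component yields one derivative per component. A sketch under that simplifying assumption (the paper handles the general, approximately separable case):

```python
import numpy as np

def separable_gradient(components, comp_params, x, eps=1e-4):
    """Finite-difference gradient exploiting partial separability: the
    objective is sum_i components[i](x), and comp_params[i] lists the
    (disjoint) parameter indices influencing component i. Each run perturbs
    one parameter per component, so all derivatives cost
    max_i len(comp_params[i]) runs; in a reservoir setting one flow
    simulation per run would return all components at once."""
    base = [f(x) for f in components]
    grad = np.zeros_like(x, dtype=float)
    n_runs = max(len(p) for p in comp_params)
    for r in range(n_runs):
        x_pert = x.copy()
        hit = []                                  # (component, parameter) pairs
        for i, params in enumerate(comp_params):
            if r < len(params):
                x_pert[params[r]] += eps
                hit.append((i, params[r]))
        for i, j in hit:                          # each component isolates its parameter
            grad[j] = (components[i](x_pert) - base[i]) / eps
    return grad
```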

20.
Seismic inverse modeling, which transforms appropriately processed geophysical data into the physical properties of the Earth, is an essential process for reservoir characterization. This paper proposes a workflow based on a Markov chain Monte Carlo method consistent with geology, well logs, seismic data, and rock-physics information. It uses direct sampling, a multiple-point geostatistical method, for generating realizations from the prior distribution, and Metropolis sampling with adaptive spatial resampling to perform an approximate sampling from the posterior distribution, conditioned to the geophysical data. Because it can assess important uncertainties, sampling is a more general approach than just finding the most likely model. However, since rejection sampling requires a large number of evaluations to generate the posterior distribution, it is inefficient and not suitable for reservoir modeling. Metropolis sampling is able to perform an equivalent sampling by forming a Markov chain. The iterative spatial resampling algorithm perturbs realizations of a spatially dependent variable while preserving its spatial structure, by conditioning to a subset of points. However, in most practical applications, when the subset conditioning points are selected at random, the chain can get stuck for a very long time in a non-optimal local minimum. In this paper it is demonstrated that adaptive subset sampling improves the efficiency of iterative spatial resampling. Depending on the acceptance/rejection criteria, it is possible to obtain a chain of geostatistical realizations aimed at characterizing the posterior distribution with Metropolis sampling. The validity and applicability of the proposed method are illustrated by results for seismic lithofacies inversion on the Stanford VI synthetic test set.
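A skeletal version of the sampler: a Metropolis chain whose proposal is an iterative spatial resampling step, here an assumed black-box perturb function that re-simulates the field conditioned to a retained subset of points:

```python
import numpy as np

def metropolis_resampling(y0, log_post, perturb, n_iter=1000, seed=0):
    """Skeleton Metropolis sampler with a resampling proposal: 'perturb' is
    assumed to re-simulate the field while conditioning to a retained subset
    of points (kept symmetric here so the plain Metropolis acceptance ratio
    applies)."""
    rng = np.random.default_rng(seed)
    y, lp = y0, log_post(y0)
    chain = [y0]
    for _ in range(n_iter):
        y_prop = perturb(y, rng)                  # resample around a point subset
        lp_prop = log_post(y_prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            y, lp = y_prop, lp_prop
        chain.append(y)
    return chain
```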
