Similar Articles
20 similar articles found (search time: 31 ms)
1.
We present a methodology based on the ensemble Kalman filter (EnKF) and the level set method for continuous updating of geological facies models with respect to production data. Geological facies are modeled using an implicit surface representation and conditioned to production data with the EnKF. The facies boundaries are deformed by Gaussian random fields, which serve as the model parameter vector updated sequentially within the EnKF whenever new measurements become available. We show the successful application of the methodology to two synthetic reservoir models.
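As a rough illustration of the assimilation step this abstract relies on, the following is a minimal numpy sketch (not the authors' code) of a single EnKF analysis in which the state vector holds the Gaussian-random-field parameters that deform the facies boundaries; the observation operator `obs_op`, the error covariance `R`, and all array shapes are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(X, d_obs, obs_op, R, rng):
    """One EnKF analysis step.  X is the (n_param, n_ens) ensemble of Gaussian
    field parameters, d_obs the observed production data, obs_op maps one state
    vector to simulated data, and R is the observation-error covariance."""
    n_ens = X.shape[1]
    # Simulated data for every ensemble member (forward run + observation operator)
    D = np.column_stack([obs_op(X[:, j]) for j in range(n_ens)])
    # Ensemble anomalies (deviations from the ensemble mean)
    A = X - X.mean(axis=1, keepdims=True)
    S = D - D.mean(axis=1, keepdims=True)
    # Kalman gain built from the ensemble (cross-)covariances
    C_md = A @ S.T / (n_ens - 1)
    C_dd = S @ S.T / (n_ens - 1)
    K = C_md @ np.linalg.inv(C_dd + R)
    # Perturbed observations keep the updated ensemble from collapsing
    d_pert = d_obs[:, None] + rng.multivariate_normal(np.zeros(len(d_obs)), R, n_ens).T
    return X + K @ (d_pert - D)
```

In the workflow described above, the updated Gaussian fields would then be passed through the implicit-surface (level set) representation to produce new facies boundaries before the next assimilation time.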

2.
3.
The performance of the ensemble Kalman filter (EnKF) for continuous updating of facies location and boundaries in a reservoir model, based on production and facies data, is presented for a 3D synthetic problem. The occurrence of the different facies types is treated as a random process and the initial distribution was obtained by truncating a bi-Gaussian random field. Because facies data are highly non-Gaussian, re-parameterization was necessary in order to use the EnKF algorithm for data assimilation; two Gaussian random fields are updated in lieu of the static facies parameters. The problem of history matching applied to facies is difficult because (1) constraints on facies observations at wells are occasionally violated when production data are assimilated; (2) excessive reduction of variance seems to be a bigger problem with facies than with Gaussian random permeability and porosity fields; and (3) the relationship between facies variables and data is so highly non-linear that the final facies field does not always honor early production data well. Consequently, three issues are investigated in this work. Is it possible to iteratively enforce facies constraints when updates due to production data have caused them to be violated? Can localization of adjustments be used for facies to prevent collapse of the variance during the data-assimilation period? Is a forecast from the final state better than a forecast from time zero using the final parameter fields? To investigate these issues, a 3D reservoir simulation model is coupled with the EnKF technique for data assimilation. One approach to enforcing the facies constraint is continuous iteration on all available data, which may lead to inconsistent model states, incorrect weighting of the production data, and incorrect adjustment of the state vector. A sequential EnKF in which the dynamic and static data are assimilated sequentially is presented, and this approach appears to resolve the problems highlighted above. When the ensemble size is small compared to the number of independent data, localized adjustment of the state vector is a very important technique that may be used to mitigate loss of rank in the ensemble. Implementing a distance-based localization of the facies adjustment appears to mitigate the problem of variance deficiency in the ensembles by ensuring that sufficient variability in the ensemble is maintained throughout the data assimilation period. Finally, when data are assimilated without localization, the prediction results appear to be independent of the starting point. When localization is applied, it is better to predict from the start using the final parameter field rather than to continue from the final state.
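The re-parameterization mentioned above can be pictured with a small sketch: the EnKF updates two continuous Gaussian fields, and the discrete facies field is recovered by truncation. The rule and thresholds below are assumptions for illustration, not the paper's actual truncation scheme.

```python
import numpy as np

def facies_from_gaussians(y1, y2, t1=0.0, t2=0.0):
    """Derive a facies index at every cell from two Gaussian fields y1 and y2,
    which are the continuous variables the EnKF actually updates.  The
    thresholds t1 and t2 are illustrative placeholders."""
    facies = np.zeros(y1.shape, dtype=int)   # background facies 0
    facies[y1 > t1] = 1                      # facies 1 where y1 exceeds its threshold
    facies[(y1 <= t1) & (y2 > t2)] = 2       # facies 2 carved out of the background
    return facies

# Spatially uncorrelated stand-ins; real fields would come from a geostatistical simulator.
rng = np.random.default_rng(1)
y1, y2 = rng.standard_normal((2, 50, 50))
print(np.bincount(facies_from_gaussians(y1, y2).ravel()))
```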

4.
5.
Over the last few years, the ensemble Kalman filter (EnKF) has become a very popular tool for history matching petroleum reservoirs. EnKF is an alternative to more traditional history matching techniques, as it is computationally fast and easy to implement. Instead of seeking one best model estimate, EnKF is a Monte Carlo method that represents the solution with an ensemble of state vectors. Lately, several ensemble-based methods have been proposed to improve upon the solution produced by EnKF. In this paper, we compare EnKF with one of the most recently proposed methods, the adaptive Gaussian mixture filter (AGM), on a 2D synthetic reservoir and the Punq-S3 test case. AGM was introduced to relax the requirement of a Gaussian prior distribution that is implicit in EnKF. By combining ideas from particle filters with EnKF, AGM extends the low-rank kernel particle Kalman filter. The simulation study shows that while both methods match the historical data well, AGM is better at preserving the geostatistics of the prior distribution. Furthermore, AGM produces estimated fields that have a higher empirical correlation with the reference field than the corresponding fields obtained with EnKF.

6.
Application of the Karhunen-Loeve expansion to the simulation of anisotropic random fields of soil properties
史良胜, 杨金忠, 陈伏龙, 周发超. 《岩土力学》(Rock and Soil Mechanics), 2007, 28(11): 2303-2308
This paper studies the application of the Karhunen-Loeve (KL) expansion to the simulation of random fields of soil properties and analyzes the characteristics of the expansion. A Galerkin numerical solution of the KL integral equation is proposed for irregular domains and arbitrary covariance functions, and an anisotropic random field of soil hydraulic conductivity is simulated. The results show that a relatively low-order Karhunen-Loeve expansion describes the spatial structure of the random field well; compared with the turning bands method, the KL expansion is better suited to reproducing the anisotropy of the random field; and compared with the spectral expansion method, it has better convergence.
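A minimal sketch of a truncated KL simulation of a one-dimensional soil-property random field, assuming an exponential covariance and a uniform grid; the discrete eigendecomposition below is a simple stand-in for the Galerkin solution of the KL integral equation discussed in the paper, and all numerical values are illustrative.

```python
import numpy as np

n, L, corr_len, sigma = 200, 10.0, 2.0, 0.5
x = np.linspace(0.0, L, n)
# Exponential covariance of log-conductivity (illustrative choice)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]          # largest eigenvalues first
eigval, eigvec = eigval[order], eigvec[:, order]

m = 20                                    # low-order truncation
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)               # independent standard normal coefficients
# Truncated KL expansion: mean + sum_i sqrt(lambda_i) * phi_i * xi_i
log_K = -5.0 + eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)
print("variance fraction captured by", m, "modes:", eigval[:m].sum() / eigval.sum())
```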

7.
The number of degrees of freedom (DOF) in standard ensemble-based data assimilation is limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF is sufficiently large. Too small a number of DOF with respect to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: to increase the DOF or to decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching are an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, such that a moderately large ensemble size is standard. Realizing the potential of seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also because of the imbalance between a large IC and a too small number of DOF. Distance-based localization is often applied to increase the DOF, but it is case specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the influence of the data space: subspace pseudo inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows the DOF to be increased by increasing the ensemble size without increasing the total computational cost. We also consider a combination of decreasing the IC and increasing the DOF by proposing a novel method that combines data coarsening and coarse-scale simulation. The methods were compared on one small and one moderately large example, with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows for calculation of a reference solution obtained with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick against which the quality of the other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulation gave the best results. With very restricted computational resources available, this was the only method that gave satisfactory results.
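Of the three ways of decreasing the information content listed above, data coarsening is the simplest to illustrate. The sketch below (not from the paper) block-averages a synthetic inverted bulk-velocity map; the grid size and block size are arbitrary assumptions.

```python
import numpy as np

def coarsen(data, block):
    """Block-average a 2-D data grid (e.g., an inverted bulk-velocity map) to
    reduce the number of data points, and hence the information content, that
    the ensemble has to absorb."""
    ny, nx = data.shape
    ny_c, nx_c = ny // block, nx // block
    trimmed = data[:ny_c * block, :nx_c * block]
    return trimmed.reshape(ny_c, block, nx_c, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)
seismic = rng.normal(2500.0, 100.0, size=(120, 120))   # synthetic velocities in m/s
coarse = coarsen(seismic, block=4)
print(seismic.size, "data points reduced to", coarse.size)
```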

8.
In past years, many applications of history-matching methods in general, and of the ensemble Kalman filter in particular, have been proposed, especially for estimating fields that contribute uncertainty to the stochastic process defined by the dynamical system of hydrocarbon recovery. Such fields can be permeability or porosity fields, but they can also be fields defined by rock type (facies fields). In several papers, the boundaries of geologic facies have been estimated with the ensemble Kalman filter (EnKF) with the aid of Gaussian random fields, which were truncated using various schemes and introduced into a history-matching process. In this paper, we estimate, within the EnKF framework, the locations of three facies types occurring in a reservoir domain, with the property that any two of them may be in contact. The geological simulation model is a form of the general truncated plurigaussian method. The difference from other approaches lies in how the truncation scheme is introduced and in the observation operator for the facies types at the well locations. The projection from the continuous space of the Gaussian fields into the discrete space of the facies fields is realized through an intermediary space (a space of probabilities). This space connects the observation operator for the facies types at the well locations with the geological simulation model. We test the model on a 2D reservoir coupled with the EnKF method as the data assimilation technique. We use different geostatistical properties for the Gaussian fields and different levels of uncertainty in the model parameters and in the construction of the Gaussian fields.
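The "intermediary space of probabilities" could look something like the following sketch: instead of a hard truncation, the two Gaussian values at a cell are mapped to soft probabilities for the three facies, which is what lets a facies observation operator at a well act on the Gaussian fields. The rule, thresholds, and smoothing width are assumptions for illustration, not the paper's scheme.

```python
import numpy as np
from scipy.special import erf

def facies_probabilities(y1, y2, t1=0.0, t2=0.0, scale=0.3):
    """Map two Gaussian values to probabilities of three facies types; the
    smooth transition (width `scale`) replaces a hard truncation so that the
    facies observation can be expressed as an operator on the Gaussian fields."""
    phi = lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0)))   # standard normal CDF
    p1 = phi((y1 - t1) / scale)                   # facies 1: y1 well above its threshold
    p2 = (1.0 - p1) * phi((y2 - t2) / scale)      # facies 2: otherwise, y2 above its threshold
    p3 = 1.0 - p1 - p2                            # facies 3: the remainder
    return np.stack([p1, p2, p3])

probs = facies_probabilities(np.array([0.4]), np.array([-0.2]))
print(probs.ravel(), probs.sum())                 # the three probabilities sum to 1
```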

9.
This paper describes a novel approach for creating an efficient, general, and differentiable parameterization of large-scale non-Gaussian, non-stationary random fields (represented by multipoint geostatistics) that is capable of reproducing complex geological structures such as channels. Such parameterizations are appropriate for use with gradient-based algorithms applied to, for example, history-matching or uncertainty propagation. It is known that the standard Karhunen–Loeve (K–L) expansion, also called linear principal component analysis or PCA, can be used as a differentiable parameterization of input random fields defining the geological model. The standard K–L model is, however, limited in two respects. It requires an eigen-decomposition of the covariance matrix of the random field, which is prohibitively expensive for large models. In addition, it preserves only the two-point statistics of a random field, which is insufficient for reproducing complex structures. In this work, kernel PCA is applied to address the limitations associated with the standard K–L expansion. Although widely used in machine learning applications, it does not appear to have found any application for geological model parameterization. With kernel PCA, an eigen-decomposition of a small matrix called the kernel matrix is performed instead of the full covariance matrix. The method is much more efficient than the standard K–L procedure. Through use of higher order polynomial kernels, which implicitly define a high-dimensionality feature space, kernel PCA further enables the preservation of high-order statistics of the random field, instead of just two-point statistics as in the K–L method. The kernel PCA eigen-decomposition proceeds using a set of realizations created by geostatistical simulation (honoring two-point or multipoint statistics) rather than the analytical covariance function. We demonstrate that kernel PCA is capable of generating differentiable parameterizations that reproduce the essential features of complex geological structures represented by multipoint geostatistics. The kernel PCA representation is then applied to history match a water flooding problem. This example demonstrates that kernel PCA can be used with gradient-based history matching to provide models that match production history while maintaining multipoint geostatistics consistent with the underlying training image.
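A minimal sketch of the kernel PCA construction described above, assuming a polynomial kernel and random stand-ins for the geostatistical realizations; note that the eigendecomposition acts on the small kernel matrix rather than on the full covariance matrix. The pre-image step needed to map new feature-space points back to model realizations is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_cells = 100, 2500                 # 100 realizations of a 50 x 50 model
Y = rng.standard_normal((n_real, n_cells))  # rows = flattened realizations (stand-ins)

degree = 3                                  # higher-order polynomial kernel
K = (Y @ Y.T / n_cells + 1.0) ** degree     # n_real x n_real kernel matrix
# Center the kernel matrix in feature space
ones = np.full((n_real, n_real), 1.0 / n_real)
Kc = K - ones @ K - K @ ones + ones @ K @ ones
eigval, eigvec = np.linalg.eigh(Kc)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]      # descending order

m = 30                                      # retained components
alphas = eigvec[:, :m] / np.sqrt(np.maximum(eigval[:m], 1e-12))
coords = Kc @ alphas                        # feature-space coordinates of each realization
print(coords.shape)                         # (100, 30): the low-dimensional parameterization
```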

10.
Ensemble-based methods are becoming popular assisted history matching techniques, with a growing number of field applications. These methods use an ensemble of model realizations, typically constructed by means of geostatistics, to represent the prior uncertainty. The performance of the history matching is very dependent on the quality of the initial ensemble. However, there is a significant level of uncertainty in the parameters used to define the geostatistical model. From a Bayesian viewpoint, the uncertainty in the geostatistical modeling can be represented by a hyper-prior in a hierarchical formulation. This paper presents the first steps towards a general parametrization to address the problem of uncertainty in the prior modeling. The proposed parametrization is inspired by Gaussian mixtures, where the uncertainty in the prior mean and prior covariance is accounted for by defining weights for combining multiple Gaussian ensembles, which are estimated during the data assimilation. The parametrization was successfully tested in a simple reservoir problem where the orientation of the major anisotropic direction of the permeability field was unknown.
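One way to picture the hierarchical prior is the sketch below (an illustrative assumption, not the paper's parametrization): the initial ensemble is drawn from several candidate Gaussian priors with mixture weights, and the data assimilation would subsequently re-estimate those weights.

```python
import numpy as np

def sample_mixture_ensemble(priors, weights, n_ens, rng):
    """Draw each ensemble member from one of several candidate geostatistical
    priors (e.g., permeability fields with different anisotropy orientations),
    chosen with the given mixture weights.  Re-estimation of the weights during
    assimilation is not shown."""
    labels = rng.choice(len(priors), size=n_ens, p=weights)
    return np.column_stack([priors[k](rng) for k in labels]), labels

# Two hypothetical priors (white-noise stand-ins for fields with different anisotropy)
prior_a = lambda rng: rng.standard_normal(2500)
prior_b = lambda rng: rng.standard_normal(2500)
rng = np.random.default_rng(0)
ensemble, labels = sample_mixture_ensemble([prior_a, prior_b], [0.5, 0.5], 100, rng)
print(ensemble.shape, np.bincount(labels))
```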

11.
Assimilation of production data into reservoir models for which the distribution of porosity and permeability is largely controlled by facies has become increasingly common. When the locations of the facies bodies must be conditioned to observations, the truncated plurigaussian model has often been shown to be a useful modeling method, as it allows Gaussian variables to be updated instead of facies types. Previous experience has also shown that ensemble Kalman filter-like methods are particularly effective for assimilation of data into truncated plurigaussian models. In this paper, we show some limitations of ensemble-based and gradient-based methods when applied to truncated plurigaussian models of a type that is likely to occur when modeling channel facies. We also show that it is possible to improve the data match and increase the ensemble spread by modifying the updating step using an approximate derivative of the truncation map.

12.
13.
In a previous paper, we developed a theoretical basis for parameterization of reservoir model parameters based on truncated singular value decomposition (SVD) of the dimensionless sensitivity matrix. Two gradient-based algorithms based on truncated SVD were developed for history matching. In general, the best of these “SVD” algorithms requires on the order of half the number of equivalent reservoir simulation runs required by the limited memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) algorithm. In this work, we show that when combining SVD parameterization with the randomized maximum likelihood method, we can achieve significant additional computational savings by history matching all models simultaneously using an SVD parameterization based on a particular sensitivity matrix at each iteration. We present two new algorithms based on this idea: one that relies only on updating the SVD parameterization at each iteration, and one that adds an inner iteration based on an adjoint gradient, during which the truncated SVD parameterization is held fixed. Results generated with our algorithms are compared with results obtained from the ensemble Kalman filter (EnKF). Finally, we show that by combining EnKF with the SVD algorithm, we can improve the reliability of EnKF estimates.
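A minimal sketch of the truncated-SVD parameterization idea, assuming a random stand-in for the dimensionless sensitivity matrix and an energy-based truncation criterion; the history-matching iterations themselves are not shown.

```python
import numpy as np

def tsvd_parameterization(G_d, energy=0.99):
    """Retain the right singular vectors of the dimensionless sensitivity
    matrix G_d that carry most of the data sensitivity; the model update is
    then sought in this low-dimensional subspace."""
    U, s, Vt = np.linalg.svd(G_d, full_matrices=False)
    keep = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return Vt[:keep].T, keep                       # basis of shape (n_param, keep)

rng = np.random.default_rng(0)
G_d = rng.standard_normal((40, 500))               # 40 data, 500 model parameters (stand-in)
V_p, p = tsvd_parameterization(G_d)
alpha = rng.standard_normal(p)                     # low-dimensional search variable
m_update = V_p @ alpha                             # update expressed in the full parameter space
print(p, m_update.shape)
```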

14.
Reservoir characterization requires the integration of various types of data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations cannot generally match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied in an attempt to minimize an objective function that quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, there are only a few parameterization methods available for updating geostatistical models in a way that is consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. This technique allows us to change model realizations locally through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.

15.
In history matching of a lithofacies reservoir model, we attempt to find multiple realizations of the lithofacies configuration that are conditional to dynamic data and representative of the model uncertainty space. This problem can be formalized in the Bayesian framework. Given a truncated Gaussian model as a prior and the dynamic data with their associated measurement error, we want to sample from the conditional distribution of the facies given the data. A relevant way to generate conditioned realizations is to use Markov chain Monte Carlo (MCMC). However, the dimension of the model and the computational cost of each iteration are two important pitfalls for the use of MCMC. Furthermore, classical MCMC algorithms mix slowly, that is, they will not explore the whole support of the posterior within the time of the simulation. In this paper, we extend the methodology already described in a previous work to the problem of history matching of a Gaussian-related lithofacies reservoir model. We first show how to drastically reduce the dimension of the problem by using a truncated Karhunen-Loève expansion of the Gaussian random field underlying the lithofacies model. Moreover, we propose an innovative criterion for choosing the number of components, based on the connexity function. Then, we show how we improve the mixing properties of classical single-chain MCMC, without increasing the global computational cost, by using parallel interacting Markov chains. Applying the dimension reduction and this innovative sampling method drastically lowers the number of iterations needed to sample efficiently from the posterior. We show the encouraging results obtained when applying the methodology to a synthetic history-matching case.

16.
17.
Model calibration and history matching are important techniques for adapting simulation tools to real-world systems. When prediction uncertainty needs to be quantified, one has to use the respective statistical counterparts, e.g., Bayesian updating of model parameters and data assimilation. For complex and large-scale systems, however, even single forward deterministic simulations may require parallel high-performance computing. This often makes accurate brute-force and nonlinear statistical approaches infeasible. We propose an advanced framework for parameter inference or history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. Our framework consists of two main steps. In step 1, the original model is projected onto a mathematically optimal response surface via the aPC technique. The resulting response surface can be viewed as a reduced (surrogate) model. It captures the model's dependence on all parameters relevant for history matching with high-order accuracy. Step 2 consists of matching the reduced model from step 1 to observation data via bootstrap filtering. Bootstrap filtering is a fully nonlinear and Bayesian statistical approach to the inverse problem in history matching. It allows quantification of post-calibration parameter and prediction uncertainty and is more accurate than ensemble Kalman filtering or linearized methods. Through this combination, we obtain a statistical method for history matching that is accurate, yet fast enough computationally to be developed towards real-time application. We motivate and demonstrate our method on the problem of CO2 storage in geological formations, using a low-parametric homogeneous 3D benchmark problem. In a synthetic case study, we update the parameters of a CO2/brine multiphase model on monitored pressure data during CO2 injection.
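A minimal sketch of the bootstrap-filtering step (step 2 above), assuming a toy placeholder in place of the aPC response surface and independent Gaussian measurement errors; nothing here reproduces the paper's CO2 benchmark.

```python
import numpy as np

def bootstrap_filter_step(params, surrogate, d_obs, sigma_obs, rng):
    """Weight each parameter sample by the likelihood of the observed data,
    evaluated on a cheap surrogate model, then resample (one bootstrap-filter
    step)."""
    sim = np.array([surrogate(p) for p in params])
    misfit = np.sum((sim - d_obs) ** 2, axis=1) / (2.0 * sigma_obs ** 2)
    w = np.exp(-(misfit - misfit.min()))           # stabilized importance weights
    w /= w.sum()
    idx = rng.choice(len(params), size=len(params), p=w)
    return params[idx]

rng = np.random.default_rng(0)
surrogate = lambda p: np.array([p[0] + p[1] ** 2, p[0] * p[1]])  # placeholder response surface
params = rng.uniform(0.0, 2.0, size=(500, 2))                    # prior parameter samples
posterior = bootstrap_filter_step(params, surrogate, np.array([1.5, 0.6]), 0.1, rng)
print(posterior.mean(axis=0))                                    # posterior mean estimate
```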

18.
The use of the ensemble smoother (ES) instead of the ensemble Kalman filter increases the nonlinearity of the update step during data assimilation and the need for iterative assimilation methods. A previous version of the iterative ensemble smoother, based on a Gauss–Newton formulation, was able to match data relatively well, but only after a large number of iterations. A multiple data assimilation method (MDA) was generally more efficient for large problems but lacked the ability to continue “iterating” if the data mismatch was too large. In this paper, we develop an efficient, iterative ensemble smoother algorithm based on the Levenberg–Marquardt (LM) method of regularizing the update direction and choosing the step length. The incorporation of the LM damping parameter reduces the tendency to add model roughness at early iterations, when the update step is highly nonlinear, as it often is when all data are assimilated simultaneously. In addition, the ensemble approximation of the Hessian is modified in a way that simplifies computation and increases stability. We also report on a simplified algorithm in which the model mismatch term in the updating equation is neglected. We thoroughly evaluated the new algorithm based on the modified LM method, LM-ensemble randomized maximum likelihood (LM-EnRML), and the simplified version of the algorithm, LM-EnRML (approx), on three test cases. The first is a highly nonlinear single-variable problem for which results can be compared against the true conditional pdf. The second test case is a one-dimensional two-phase flow problem in which the permeability of 31 grid cells is uncertain. In this case, Markov chain Monte Carlo results are available for comparison with ensemble-based results. The third test case is the Brugge benchmark case with both 10 and 20 years of history. The efficiency and quality of results of the new algorithms were compared with the standard ES (without iteration), the ensemble-based Gauss–Newton formulation, the standard ensemble-based LM formulation, and the MDA. Because of the high level of nonlinearity, the standard ES performed poorly on all test cases. The MDA often performed well, especially at early iterations, where the reduction in data mismatch was quite rapid. The best results, however, were always achieved with the new iterative ensemble smoother algorithms, LM-EnRML and LM-EnRML (approx).
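A minimal sketch of a Levenberg–Marquardt-damped ensemble update in the spirit of LM-EnRML (approx), assuming generic numpy arrays; the paper's modified Hessian approximation and its handling of the model-mismatch term are not reproduced.

```python
import numpy as np

def lm_ens_update(M, D, d_pert, C_D, lam):
    """One damped iterative-ensemble-smoother update.  M is the (n_param, n_ens)
    parameter ensemble, D the (n_data, n_ens) simulated data, d_pert the
    perturbed observations, and C_D the data-error covariance.  The LM parameter
    `lam` inflates the data-space covariance, shortening the step and suppressing
    added model roughness at early, strongly nonlinear iterations."""
    n_ens = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)
    S = D - D.mean(axis=1, keepdims=True)
    C_md = A @ S.T / (n_ens - 1)
    C_dd = S @ S.T / (n_ens - 1)
    step = C_md @ np.linalg.solve((1.0 + lam) * C_dd + C_D, d_pert - D)
    return M + step

# Typical control flow: accept the step and decrease lam if the data mismatch drops,
# otherwise reject it and increase lam (classic Levenberg–Marquardt behaviour).
```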

19.
The application of the ensemble Kalman filter (EnKF) for history matching petroleum reservoir models has been the subject of intense investigation during the past 10 years. Unfortunately, EnKF often fails to provide reasonable data matches for highly nonlinear problems. This fact motivated the development of several iterative ensemble-based methods in the last few years. However, no study in the literature compares the performance of these methods, especially in terms of their ability to quantify uncertainty correctly. In this paper, we compare the performance of nine ensemble-based methods in terms of the quality of the data matches, quantification of uncertainty, and computational cost. For this purpose, we use a small but highly nonlinear reservoir model, so that we can generate the reference posterior distribution of reservoir properties using a very long chain generated by a Markov chain Monte Carlo sampling algorithm. We also consider one adjoint-based implementation of the randomized maximum likelihood method in the comparisons.

20.
Ensemble-based data assimilation methods have recently become popular for solving reservoir history matching problems, but because of the practical limitation on ensemble size, localization is necessary to reduce the effect of sampling error and to increase the degrees of freedom for incorporating large amounts of data. Local analysis in the ensemble Kalman filter has been used extensively for very large models in numerical weather prediction. It scales well with the model size and the number of data and is easily parallelized. In the petroleum literature, however, iterative ensemble smoothers with localization of the Kalman gain matrix have become the state-of-the-art approach for ensemble-based history matching. By forming the Kalman gain matrix row by row, the analysis step can also be parallelized. Localization regularizes updates to model parameters and state variables using information on the distance between these variables and the observations. The truncation of small singular values in truncated singular value decomposition (TSVD) at the analysis step provides another type of regularization by projecting updates onto dominant directions spanned by the simulated data ensemble. Typically, the combined use of localization and TSVD is necessary for problems with large amounts of data. In this paper, we compare the performance of Kalman gain localization to two forms of local analysis for parameter estimation problems with nonlocal data. The effect of TSVD with different localization methods and with the use of iteration is also analyzed. With several examples, we show that good results can be obtained for all localization methods if the localization range is chosen appropriately, but the optimal localization range differs for the various methods. In general, for local analysis with an observation taper, the optimal range is somewhat shorter than the optimal range for the other localization methods. Although all methods gave equivalent results when used in an iterative ensemble smoother, the local analysis methods generally converged more quickly than Kalman gain localization when the amount of data was large compared to the ensemble size.
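A minimal sketch of Kalman-gain (Schur-product) localization with a Gaspari–Cohn taper, assuming a one-dimensional grid and an arbitrary localization half-width; local analysis and TSVD regularization are not shown.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari–Cohn fifth-order compact correlation function; r is distance divided
    by the localization half-width, and the taper is exactly zero for r >= 2."""
    r = np.abs(np.asarray(r, dtype=float))
    taper = np.zeros_like(r)
    a = r <= 1.0
    b = (r > 1.0) & (r < 2.0)
    taper[a] = (((-0.25 * r[a] + 0.5) * r[a] + 0.625) * r[a] - 5.0 / 3.0) * r[a] ** 2 + 1.0
    taper[b] = ((((r[b] / 12.0 - 0.5) * r[b] + 0.625) * r[b] + 5.0 / 3.0) * r[b]
                - 5.0) * r[b] + 4.0 - 2.0 / (3.0 * r[b])
    return taper

cells = np.arange(100)                       # 1-D string of grid cells
obs_loc = np.array([20, 70])                 # two observation locations
half_width = 15.0
dist = np.abs(cells[:, None] - obs_loc[None, :])
rho = gaspari_cohn(dist / half_width)        # (n_cells, n_obs) localization coefficients
K = np.ones((100, 2))                        # stand-in Kalman gain
K_loc = rho * K                              # Schur (element-wise) product damps distant updates
print(K_loc[[20, 35, 60], :])
```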
