Found 20 similar documents (search time: 31 ms)
1.
2.
The ensemble Kalman filter (EnKF) has become a popular method for history matching production and seismic data in petroleum
reservoir models. However, it is known that EnKF may fail to give acceptable data matches, especially for highly nonlinear
problems. In this paper, we introduce a procedure to improve EnKF data matches based on assimilating the same data multiple
times with the covariance matrix of the measurement errors multiplied by the number of data assimilations. We prove the equivalence
between single and multiple data assimilations for the linear-Gaussian case and present computational evidence that multiple
data assimilations can improve EnKF estimates for the nonlinear case. The proposed procedure was tested by assimilating time-lapse
seismic data in two synthetic reservoir problems, and the results show significant improvements compared to the standard EnKF.
In addition, we review the inversion schemes used in the EnKF analysis and present a rescaling procedure to avoid loss of
information during the truncation of small singular values.
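The single/multiple-assimilation equivalence can be checked numerically. Below is a minimal sketch (not the authors' code; the scalar linear-Gaussian setup and all names are illustrative), assuming a perturbed-observation EnKF update and the linear forward model d = m:

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, Na = 200, 4                        # ensemble size, number of assimilations
R = 0.5                                # measurement-error variance
d_obs = 1.0                            # observed datum
m_prior = rng.normal(0.0, 1.0, Ne)     # prior ensemble of a scalar parameter

def enkf_update(m, R_eff):
    d = m.copy()                       # linear forward model: d = m
    d_pert = d_obs + rng.normal(0.0, np.sqrt(R_eff), m.size)  # perturbed obs
    K = np.cov(m, d)[0, 1] / (np.var(d, ddof=1) + R_eff)      # Kalman gain
    return m + K * (d_pert - d)

m_single = enkf_update(m_prior.copy(), R)     # assimilate once with R
m_multi = m_prior.copy()
for _ in range(Na):                           # assimilate Na times with Na * R
    m_multi = enkf_update(m_multi, Na * R)

# both posterior means approach the analytic value d_obs / (1 + R) = 2/3
print(m_single.mean(), m_multi.mean())
```

In this linear-Gaussian toy both schemes target the same posterior; the paper's point is that the multiple-assimilation version behaves better when the forward model is nonlinear.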
3.
Large-scale history matching with quadratic interpolation models
4.
Andreas S. Stordal, Randi Valestrand, Hans Arnfinn Karlsen, Geir Nævdal, Hans Julius Skaug. Computational Geosciences, 2012, 16(2): 467-482
In recent years, the ensemble Kalman filter (EnKF) has become a very popular tool for history matching petroleum reservoirs.
EnKF is an alternative to more traditional history matching techniques as it is computationally fast and easy to implement.
Instead of seeking one best model estimate, EnKF is a Monte Carlo method that represents the solution with an ensemble of
state vectors. Lately, several ensemble-based methods have been proposed to improve upon the solution produced by EnKF. In
this paper, we compare EnKF with one of the most recently proposed methods, the adaptive Gaussian mixture filter (AGM), on
a 2D synthetic reservoir and the PUNQ-S3 test case. AGM was introduced to relax the requirement of a Gaussian prior distribution
as implicitly formulated in EnKF. By combining ideas from particle filters with EnKF, AGM extends the low-rank kernel particle
Kalman filter. The simulation study shows that while both methods match the historical data well, AGM is better at preserving
the geostatistics of the prior distribution. Further, AGM also produces estimated fields that have a higher empirical correlation
with the reference field than the corresponding fields obtained with EnKF.
5.
Wiktoria Lawniczak, Remus Hanea, Arnold Heemink, Dennis McLaughlin. Computational Geosciences, 2009, 13(2): 245-254
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally,
validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as
well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter
(EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size must be kept small to limit the computational cost. Consequently, the error space is not well covered (poor cross-correlation matrix approximations), and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameter field. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications
and results of the ensemble multiscale filter (EnMSF) for automatic history matching. EnMSF replaces, at each update time,
the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the
tree (nodes at the adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF
are presented here with a 2D, two-phase (oil and water) small twin experiment, and the results are compared to the EnKF. The
advantages of using EnMSF are localization in space and scale, adaptability to prior information, and efficiency in case many
measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.
6.
This study examines the applicability of ensemble Kalman filtering techniques to history matching procedures. The ensemble Kalman filter (EnKF) is often applied to this problem. However, the traditional EnKF assumes normality of the distributions and relies on a linear update in the analysis equations. These assumptions may cause problems when the filter is used in reservoir applications and can result in sampling error. The situation becomes more problematic when the a priori information on the reservoir structure is poor and the initial guess about, for example, the permeability field is far from the actual one. This motivates further study of a specific modification of the EnKF approach, namely the iterative EnKF (IEnKF) scheme, which restarts the procedure with a new initial guess that is closer to the actual solution and hence requires less correction by the algorithm while providing better parameter estimates. The paper presents some examples for which the IEnKF algorithm works better than the traditional EnKF. The algorithms are compared while estimating the permeability field for a two-phase, two-dimensional fluid flow model.
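As an illustration only (a toy sketch under assumed settings, not the authors' algorithm), the restart idea can be mimicked for a scalar parameter with the strongly nonlinear forward model g(m) = m³, re-centring the initial ensemble on the previous estimate at each outer iteration:

```python
import numpy as np

rng = np.random.default_rng(7)
d_obs = 2.0**3                         # datum generated by the true value m = 2
obs_var = 0.01

def enkf_step(m_ens):
    d = m_ens**3                       # strongly nonlinear forward model
    dm, dd = m_ens - m_ens.mean(), d - d.mean()
    K = (dm @ dd) / (dd @ dd + (m_ens.size - 1) * obs_var)   # scalar gain
    d_pert = d_obs + rng.normal(0.0, np.sqrt(obs_var), m_ens.size)
    return m_ens + K * (d_pert - d)

guess = 0.5                            # poor initial guess, far from 2.0
for _ in range(3):                     # restart with a re-centred ensemble
    ens = enkf_step(rng.normal(guess, 0.5, 100))
    guess = ens.mean()
print(guess)                           # successive restarts move toward 2.0
```

A single linear EnKF update from the poor starting point over- or under-shoots badly here; the restarts progressively reduce the correction the filter must make.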
7.
In this work, we present an efficient matrix-free ensemble Kalman filter (EnKF) algorithm for the assimilation of large data
sets. The EnKF has increasingly become an essential tool for data assimilation of numerical models. It is an attractive assimilation
method because it can evolve the model covariance matrix for a non-linear model, through the use of an ensemble of model states,
and it is easy to implement for any numerical model. Nevertheless, the computational cost of the EnKF can increase significantly
for cases involving the assimilation of large data sets. As more data become available for assimilation, a potential bottleneck
in most EnKF algorithms involves the operation of the Kalman gain matrix. To reduce the complexity and cost of assimilating
large data sets, a matrix-free EnKF algorithm is proposed. The algorithm uses an efficient matrix-free linear solver, based
on the Sherman–Morrison formulas, to solve the implicit linear system within the Kalman gain matrix and compute the analysis.
Numerical experiments with a two-dimensional shallow water model on the sphere are presented, where results show the matrix-free
implementation outperforming a singular value decomposition-based implementation in computational time.
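A minimal sketch of the idea (assumed shapes and names, diagonal R, and data anomalies S pre-scaled so the inner system is (R + S Sᵀ) x = v): the Sherman–Morrison–Woodbury identity reduces the solve to an Ne × Ne system, so the large data-space matrix is never formed:

```python
import numpy as np

rng = np.random.default_rng(1)
m_obs, Ne = 500, 20                    # many observations, small ensemble
S = rng.normal(size=(m_obs, Ne))       # predicted-data anomaly matrix
r = np.full(m_obs, 0.3)                # diagonal measurement-error covariance
v = rng.normal(size=m_obs)             # innovation vector

# matrix-free solve of (R + S S^T) x = v via Sherman-Morrison-Woodbury:
# (R + S S^T)^-1 = R^-1 - R^-1 S (I + S^T R^-1 S)^-1 S^T R^-1
Rinv_v = v / r
Rinv_S = S / r[:, None]
small = np.eye(Ne) + S.T @ Rinv_S      # only an Ne x Ne system is assembled
x_fast = Rinv_v - Rinv_S @ np.linalg.solve(small, S.T @ Rinv_v)

x_direct = np.linalg.solve(np.diag(r) + S @ S.T, v)   # reference dense solve
print(np.allclose(x_fast, x_direct))
```

The dense reference solve is included only to verify the identity; in a genuinely matrix-free implementation it would never be formed.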
8.
Kernel Principal Component Analysis for Efficient, Differentiable Parameterization of Multipoint Geostatistics
This paper describes a novel approach for creating an efficient, general, and differentiable parameterization of large-scale
non-Gaussian, non-stationary random fields (represented by multipoint geostatistics) that is capable of reproducing complex
geological structures such as channels. Such parameterizations are appropriate for use with gradient-based algorithms applied
to, for example, history-matching or uncertainty propagation. It is known that the standard Karhunen–Loève (K–L) expansion,
also called linear principal component analysis or PCA, can be used as a differentiable parameterization of input random fields
defining the geological model. The standard K–L model is, however, limited in two respects. It requires an eigen-decomposition
of the covariance matrix of the random field, which is prohibitively expensive for large models. In addition, it preserves
only the two-point statistics of a random field, which is insufficient for reproducing complex structures.
In this work, kernel PCA is applied to address the limitations associated with the standard K–L expansion. Although kernel PCA is widely used in machine learning applications, it does not appear to have been applied previously to geological model parameterization.
With kernel PCA, an eigen-decomposition of a small matrix called the kernel matrix is performed instead of the full covariance
matrix. The method is much more efficient than the standard K–L procedure. Through use of higher order polynomial kernels,
which implicitly define a high-dimensional feature space, kernel PCA further enables the preservation of high-order statistics
of the random field, instead of just two-point statistics as in the K–L method. The kernel PCA eigen-decomposition proceeds
using a set of realizations created by geostatistical simulation (honoring two-point or multipoint statistics) rather than
the analytical covariance function. We demonstrate that kernel PCA is capable of generating differentiable parameterizations
that reproduce the essential features of complex geological structures represented by multipoint geostatistics. The kernel
PCA representation is then applied to history match a water flooding problem. This example demonstrates that kernel PCA can
be used with gradient-based history matching to provide models that match production history while maintaining multipoint
geostatistics consistent with the underlying training image.
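A minimal sketch of the eigen-decomposition step (assumed data and names; the geostatistical simulation of realizations and the pre-image reconstruction are omitted), using a third-order polynomial kernel:

```python
import numpy as np

rng = np.random.default_rng(2)
n_real, n_cells = 100, 400                 # realizations x grid cells
X = rng.normal(size=(n_real, n_cells))     # stand-in for geostatistical realizations

K = (X @ X.T / n_cells + 1.0) ** 3         # polynomial kernel matrix, n_real x n_real

# centre the kernel matrix in feature space
J = np.ones((n_real, n_real)) / n_real
Kc = K - J @ K - K @ J + J @ K @ J

w, V = np.linalg.eigh(Kc)                  # decompose the small kernel matrix,
w, V = w[::-1], V[:, ::-1]                 # not an n_cells x n_cells covariance
n_keep = 20
alphas = V[:, :n_keep] / np.sqrt(np.abs(w[:n_keep]))   # normalised coefficients
# a new model is parameterized by n_keep feature-space coordinates
print(alphas.shape)
```

Note that the decomposed matrix has the size of the number of realizations, not the number of grid cells, which is the source of the efficiency gain described above.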
9.
Dmitry Eydinov, Sigurd Ivar Aanonsen, Jarle Haukås, Ivar Aavatsmark. Computational Geosciences, 2008, 12(2): 209-225
A method for history matching of an in-house petroleum reservoir compositional simulator with multipoint flux approximation
is presented. This method is used for the estimation of unknown reservoir parameters, such as permeability and porosity, based
on production data and inverted seismic data. The limited-memory Broyden–Fletcher–Goldfarb–Shanno method is employed for minimization
of the objective function, which represents the difference between simulated and observed data. In this work, we present the
key features of the algorithm for calculations of the gradients of the objective function based on adjoint variables. The
test example shows that the method is applicable to cases with anisotropic permeability fields, multipoint flux approximation,
and arbitrary fluid compositions.
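The minimization loop can be sketched as follows (a toy linear "simulator" G and hypothetical names stand in for the compositional simulator; the gradient returned alongside the objective plays the role of the adjoint-computed gradient, and SciPy's limited-memory BFGS stands in for the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

d_obs = np.array([1.0, 2.0, 1.5])                    # observed data
G = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])   # toy linear "simulator"
Cd_inv = np.eye(3) / 0.01                            # inverse data covariance

def objective(m):
    r = G @ m - d_obs                                # simulated minus observed
    J = 0.5 * r @ Cd_inv @ r                         # data-mismatch objective
    g = G.T @ Cd_inv @ r                             # what an adjoint code returns
    return J, g

res = minimize(objective, x0=np.zeros(2), jac=True, method="L-BFGS-B")
print(res.x)                                         # estimated parameters
```

The key property exploited by the adjoint approach is that one extra (adjoint) solve yields the full gradient, regardless of the number of parameters.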
10.
Reza Tavakoli, Gergina Pencheva, Mary F. Wheeler, Benjamin Ganis. Computational Geosciences, 2013, 17(1): 83-97
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation for the application of reservoir simulation. The main advantage of ensemble-based data assimilation methods is that they can handle large-scale numerical models with a high degree of nonlinearity and a large amount of data, making them perfectly suited for coupling with a reservoir simulator. However, the sequential implementation is computationally expensive, as the methods require a relatively large number of reservoir simulation runs. Therefore, the main focus of this work is to develop a parallel data assimilation framework with minimal changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although this parallel framework is general and can be used with different ensemble techniques, we discuss the methodology and compare results of two algorithms: the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced using a parallel implementation versus a serial one. In particular, a parallel efficiency of about 35 % is obtained for the EnKF, and an efficiency of more than 50 % is obtained for the ES.
11.
Thomas Romary. Computational Geosciences, 2010, 14(2): 343-355
In history matching of a lithofacies reservoir model, we attempt to find multiple realizations of the lithofacies configuration
that are conditional to dynamic data and representative of the model uncertainty space. This problem can be formalized in
the Bayesian framework. Given a truncated Gaussian model as a prior and the dynamic data with its associated measurement error,
we want to sample from the conditional distribution of the facies given the data. A relevant way to generate conditioned realizations
is to use Markov chain Monte Carlo (MCMC). However, the dimension of the model and the computational cost of each iteration are two important obstacles to the use of MCMC. Furthermore, classical MCMC algorithms mix slowly, that is, they will not explore the whole support of the posterior within the available simulation time. In this paper, we extend the methodology already
described in a previous work to the problem of history matching of a Gaussian-related lithofacies reservoir model. We first
show how to drastically reduce the dimension of the problem by using a truncated Karhunen–Loève expansion of the Gaussian random field underlying the lithofacies model. Moreover, we propose an innovative criterion, based on the connexity function, for choosing the number of components to retain. Then, we show how we improve the mixing properties of a classical single MCMC chain, without
increasing the global computational cost, by the use of parallel interacting Markov chains. Applying the dimension reduction
and this innovative sampling method drastically lowers the number of iterations needed to sample efficiently from the posterior.
We show the encouraging results obtained when applying the methodology to a synthetic history-matching case.
12.
Integrating production data under uncertainty by parallel interacting Markov chains on a reduced dimensional space
Thomas Romary. Computational Geosciences, 2009, 13(1): 103-122
In the oil industry and in subsurface hydrology, geostatistical models are often used to represent the porosity or the permeability
field. In history matching of a geostatistical reservoir model, we attempt to find multiple realizations that are conditional
to dynamic data and representative of the model uncertainty space. A relevant way to simulate the conditioned realizations
is Markov chain Monte Carlo (MCMC). The huge dimension (number of parameters) of the model and the computational cost of each iteration are two important obstacles to the use of MCMC. In practice, we have to stop the chain long before it has explored the whole support of the posterior probability density function. Furthermore, as the relationship between the
production data and the random field is highly nonlinear, the posterior can be strongly multimodal and the chain may stay
stuck in one of the modes. In this work, we propose a methodology to enhance the sampling properties of classical single MCMC
in history matching. We first show how to reduce the dimension of the problem by using a truncated Karhunen–Loève expansion
of the random field of interest and assess the number of components to be kept. Then, we show how we can improve the mixing
properties of MCMC, without increasing the global computational cost, by using parallel interacting Markov Chains. Finally,
we show the encouraging results obtained when applying the method to a synthetic history matching case.
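The dimension-reduction step can be sketched as follows (a generic 1-D example with an assumed exponential covariance; a simple explained-variance threshold stands in for the criterion used in the paper):

```python
import numpy as np

n = 200                                        # grid cells along a 1-D profile
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance

w, Phi = np.linalg.eigh(C)
w, Phi = w[::-1], Phi[:, ::-1]                 # eigenvalues in descending order

k = np.searchsorted(np.cumsum(w) / w.sum(), 0.95) + 1   # keep 95% of variance
rng = np.random.default_rng(3)
xi = rng.normal(size=k)                        # the k parameters the MCMC explores
field = Phi[:, :k] @ (np.sqrt(w[:k]) * xi)     # truncated K-L realization
print(k, "components retained out of", n)
```

The chains then operate on the k Karhunen–Loève coefficients rather than on the full grid-sized field, which is what makes the parallel interacting chains affordable.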
13.
Using a small ensemble size in the ensemble Kalman filter methodology is efficient for updating numerical reservoir models
but can result in poor updates owing to spurious correlations between observations and model variables. The most common approach
for reducing the effect of spurious correlations on model updates is multiplication of the estimated covariance by a tapering
function that eliminates all correlations beyond a prespecified distance. Distance-dependent tapering is not always appropriate,
however. In this paper, we describe efficient methods for discriminating between the real and the spurious correlations in
the Kalman gain matrix by using the bootstrap method to assess the confidence level of each element from the Kalman gain matrix.
The new method is tested on a small linear problem, and on a water flooding reservoir history matching problem. For the water
flooding example, a small ensemble size of 30 was used to compute the Kalman gain in both the screened EnKF and standard EnKF
methods. The new method resulted in significantly smaller root mean squared errors of the estimated model parameters and greater
variability in the final updated ensemble.
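The bootstrap screening idea can be sketched as follows (assumed names and a simple damping function; the paper's confidence-based screening differs in detail): resample ensemble members, recompute the Kalman gain, and damp elements whose bootstrap spread is large relative to their magnitude:

```python
import numpy as np

rng = np.random.default_rng(4)
Ne, Nm, Nb = 30, 50, 200                 # ensemble, model vars, bootstrap samples
m = rng.normal(size=(Nm, Ne))            # model-variable ensemble
d = 0.8 * m[0] + 0.2 * rng.normal(size=Ne)   # one datum, truly tied to variable 0

def kalman_gain(m, d, r=0.1):
    dm = m - m.mean(axis=1, keepdims=True)
    dd = d - d.mean()
    return (dm @ dd) / (dd @ dd + (m.shape[1] - 1) * r)

K0 = kalman_gain(m, d)
boot = np.stack([kalman_gain(m[:, i], d[i])      # gains on resampled members
                 for i in rng.integers(0, Ne, size=(Nb, Ne))])
cv2 = boot.var(axis=0) / (K0**2 + 1e-12)         # relative bootstrap spread
screen = 1.0 / (1.0 + cv2)                       # damping factor in (0, 1]
K_screened = screen * K0
print(screen[0], np.median(screen[1:]))
```

The genuinely correlated variable keeps most of its gain, while the spurious gains, whose bootstrap estimates fluctuate as much as their magnitude, are strongly damped without any distance assumption.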
14.
Ensemble Kalman filtering with shrinkage regression techniques
The classical ensemble Kalman filter (EnKF) is known to underestimate the prediction uncertainty. This can potentially lead
to low forecast precision and an ensemble collapsing into a single realisation. In this paper, we present alternative EnKF
updating schemes based on shrinkage methods known from multivariate linear regression. These methods reduce the effects caused
by collinear ensemble members and have the same computational properties as the fastest EnKF algorithms previously suggested.
In addition, the importance of model selection and validation for prediction purposes is investigated, and a model selection
scheme based on cross-validation is introduced. The classical EnKF scheme is compared with the suggested procedures on two toy examples and one synthetic reservoir case study. Significant improvements are seen, both in terms of forecast precision and prediction uncertainty estimates.
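To make the connection to regression concrete, here is a sketch (assumed notation; a simple ridge-type blend toward a scaled identity stands in for the shrinkage estimators studied in the paper): the EnKF update is a multivariate regression of model anomalies on predicted-data anomalies, and shrinking the data covariance tames collinear ensemble members:

```python
import numpy as np

rng = np.random.default_rng(5)
Ne, n_m, n_d = 40, 100, 60
M = rng.normal(size=(n_m, Ne))            # model-state anomalies
D = rng.normal(size=(n_d, Ne))            # predicted-data anomalies
innov = rng.normal(size=(n_d, Ne))        # perturbed-observation innovations

def enkf_update(M, D, innov, r=0.25, shrink=0.0):
    # regression of model anomalies on data anomalies; `shrink` blends the
    # data covariance with a scaled identity (a simple shrinkage target)
    C_dd = D @ D.T / (Ne - 1) + r * np.eye(n_d)
    C_dd = (1 - shrink) * C_dd + shrink * (np.trace(C_dd) / n_d) * np.eye(n_d)
    K = (M @ D.T / (Ne - 1)) @ np.linalg.inv(C_dd)
    return M + K @ innov

upd_classic = enkf_update(M, D, innov, shrink=0.0)
upd_shrunk = enkf_update(M, D, innov, shrink=0.3)
```

Because the shrinkage only modifies the small data-data covariance before inversion, the cost is the same as for the fastest standard EnKF analysis schemes.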
15.
Investigation of the sampling performance of ensemble-based methods with a simple reservoir model
The application of the ensemble Kalman filter (EnKF) for history matching petroleum reservoir models has been the subject of intense investigation during the past 10 years. Unfortunately, EnKF often fails to provide reasonable data matches for highly nonlinear problems. This fact motivated the development of several iterative ensemble-based methods in the last few years. However, there exists no study comparing the performance of these methods in the literature, especially in terms of their ability to quantify uncertainty correctly. In this paper, we compare the performance of nine ensemble-based methods in terms of the quality of the data matches, quantification of uncertainty, and computational cost. For this purpose, we use a small but highly nonlinear reservoir model so that we can generate the reference posterior distribution of reservoir properties using a very long chain generated by a Markov chain Monte Carlo sampling algorithm. We also consider one adjoint-based implementation of the randomized maximum likelihood method in the comparisons.
16.
17.
In this paper, we describe a method of history matching in which changes to the reservoir model are constructed from a limited set of basis vectors. The purpose of this reparameterization is to reduce the cost of a Newton iteration, without altering the final estimate of model parameters and without substantially slowing the rate of convergence. The utility of a subspace method depends on several factors, including the choice and number of the subspace vectors to be used. Computational gains in efficiency result partly from a reduction in the size of the matrix system that must be solved in a Newton iteration. More important contributions, however, result from a reduction in the number of sensitivity coefficients that must be computed, reduction in the dimensions of the matrices that must be multiplied, and elimination of matrix products involving the inverse of the prior model covariance matrix. These factors affect the efficiency of each Newton iteration. Although computation of the optimal set of subspace vectors may be expensive, we show that the rate of convergence and the final results are somewhat insensitive to the choice of subspace vectors. We also show that it is desirable to start with a small number of subspace vectors and gradually increase the number at each Newton iteration until an acceptable level of data mismatch is obtained.
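The reparameterization can be sketched on a toy linear problem (all names and sizes are assumed; in the paper the sensitivities come from the reservoir simulator and the subspace vectors are chosen more carefully): model changes are restricted to the span of a few basis vectors, so each Newton step solves only a small system and needs only a few sensitivity vectors:

```python
import numpy as np

rng = np.random.default_rng(6)
n_m, n_sub, n_d = 500, 8, 40                  # model size, subspace size, data
m_prior = np.zeros(n_m)
B = np.linalg.qr(rng.normal(size=(n_m, n_sub)))[0]   # orthonormal subspace vectors
G = rng.normal(size=(n_d, n_m)) / np.sqrt(n_m)       # toy linear simulator
d_obs = G @ rng.normal(size=n_m)                     # synthetic observations

# Gauss-Newton step in the subspace coordinates a (m = m_prior + B a):
# only n_sub sensitivity vectors G @ B and an n_sub x n_sub system are needed
S = G @ B
a = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_sub), S.T @ (d_obs - G @ m_prior))
m_est = m_prior + B @ a
print(np.linalg.norm(d_obs - G @ m_est) < np.linalg.norm(d_obs - G @ m_prior))
```

Each iteration touches an 8 × 8 system instead of a 500 × 500 one, which mirrors the cost reductions the abstract enumerates.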
18.
Geert K. Brouwer, Peter A. Fokker, Frank Wilschut, Wouter Zijl. Mathematical Geosciences, 2008, 40(8): 907-920
The determination of the permeability field from pressure and flow rate measurements in wells is a key problem in reservoir
engineering. This paper presents a Double Constraint method for inverse modeling that is an example of direct inverse modeling.
The method is used with a standard block-centered finite difference method. With an a priori grid block permeability field
as input, two forward runs are made: the first is constrained with the measured pressures; the second is constrained with
the measured flow rates. We calculate the pressures in the grid block centers from the first run, while from the second run
we calculate the fluxes through the faces between the grid blocks. Substitution of these pressures and fluxes into Darcy’s
law then yields the transmissibilities at the faces and hence the permeabilities in the grid blocks. In this way the “hard”
data (measured pressures and flow rates) are always honored while the “soft”, geological data can be incorporated at the discretion
of the geologist. Using a synthetic example, we demonstrate the method and compare the results with another method: Ensemble
Kalman Filtering. The two methods agree within the scope of their applicability. The Double Constraint method focuses initially
on determining spatial distributions of the permeability field for single-phase, steady-state flow. For history matching, an extension to non-steady-state, two-phase flow conditions is required, which is already possible with EnKF. We are currently
investigating the possibility of combining the two methods, whereby the strengths of the two methods could be fully exploited.
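The substitution step at the heart of the method can be illustrated on a 1-D, single-phase, steady-state toy (the numbers are invented for illustration): block pressures come from the pressure-constrained run, face fluxes from the rate-constrained run, and Darcy's law yields the transmissibilities directly:

```python
import numpy as np

# block-centre pressures from the pressure-constrained forward run
p = np.array([10.0, 9.2, 8.1, 6.5])
# fluxes through the faces between blocks from the rate-constrained run;
# steady-state, single-phase flow, so the flux is uniform along the row
q = np.array([0.4, 0.4, 0.4])

# Darcy's law per face: q = T * (p_i - p_{i+1})  =>  T = q / (p_i - p_{i+1})
T_face = q / (p[:-1] - p[1:])
print(T_face)
# with the grid geometry (face area, block spacing) these transmissibilities
# convert to grid-block permeabilities
```

No iteration is involved: the permeability field follows directly from the two constrained runs, which is what "direct inverse modeling" refers to above.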
19.
20.
Combining sensitivities and prior information for covariance localization in the ensemble Kalman filter for petroleum reservoir applications
Sampling errors can severely degrade the reliability of estimates of conditional means and uncertainty quantification obtained
by the application of the ensemble Kalman filter (EnKF) for data assimilation. A standard recommendation for reducing the
spurious correlations and loss of variance due to sampling errors is to use covariance localization. In distance-based localization,
the prior (forecast) covariance matrix at each data assimilation step is replaced with the Schur product of a correlation
matrix with compact support and the forecast covariance matrix. The most important decision to be made in this localization
procedure is the choice of the critical length(s) used to generate this correlation matrix. Here, we give a simple argument
that the appropriate choice of critical length(s) should be based both on the underlying principal correlation length(s) of
the geological model and the range of the sensitivity matrices. Based on this result, we implement a procedure for covariance
localization and demonstrate with a set of distinctive reservoir history-matching examples that this procedure yields improved
results over the standard EnKF implementation and over covariance localization with other choices of critical length.
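Distance-based localization as described here can be sketched as follows (a generic example; the Gaspari–Cohn fifth-order function is one standard compactly supported correlation, and L_crit plays the role of the critical length whose choice the paper addresses):

```python
import numpy as np

def gaspari_cohn(d, L):
    # fifth-order compactly supported correlation (Gaspari & Cohn, 1999);
    # d: distances, L: critical length -- the taper vanishes beyond 2 * L
    r = np.abs(d) / L
    rho = np.zeros_like(r)
    m1, m2 = r <= 1.0, (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    rho[m1] = -0.25 * r1**5 + 0.5 * r1**4 + 0.625 * r1**3 - 5 / 3 * r1**2 + 1.0
    rho[m2] = (r2**5 / 12 - 0.5 * r2**4 + 0.625 * r2**3 + 5 / 3 * r2**2
               - 5.0 * r2 + 4.0 - 2.0 / (3.0 * r2))
    return rho

x = np.linspace(0.0, 10.0, 50)
dist = np.abs(x[:, None] - x[None, :])         # pairwise grid distances
C_f = np.exp(-dist / 3.0)                      # toy forecast covariance
L_crit = 2.5                                   # the critical length to be chosen
C_loc = gaspari_cohn(dist, L_crit) * C_f       # Schur (elementwise) product
print(C_loc[0, -1])                            # entries beyond 2 * L_crit vanish
```

The paper's contribution concerns how L_crit should be set, combining the principal correlation lengths of the geological prior with the range of the sensitivity matrices, rather than the taper function itself.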