Similar documents (20 results)
1.
The ensemble Kalman filter (EnKF) has been shown repeatedly to be an effective method for data assimilation in large-scale problems, including those in petroleum engineering. Data assimilation for multiphase flow in porous media is particularly difficult, however, because the relationships between model variables (e.g., permeability and porosity) and observations (e.g., water cut and gas–oil ratio) are highly nonlinear. Because of the linear approximation in the update step and the use of a limited number of realizations in an ensemble, the EnKF has a tendency to systematically underestimate the variance of the model variables. Various approaches have been suggested to reduce the magnitude of this problem, including the application of ensemble filter methods that do not require perturbations to the observed data. On the other hand, iterative least-squares data assimilation methods with perturbations of the observations have been shown to be fairly robust to nonlinearity in the data relationship. In this paper, we present EnKF with perturbed observations as a square root filter in an enlarged state space. By imposing second-order-exact sampling of the observation errors and independence constraints to eliminate the cross-covariance with predicted observation perturbations, we show that it is possible in linear problems to obtain results from EnKF with observation perturbations that are equivalent to ensemble square-root filter results. Results from a standard EnKF, EnKF with second-order-exact sampling of measurement errors that satisfy independence constraints (EnKF (SIC)), and an ensemble square-root filter (ETKF) are compared on various test problems with varying degrees of nonlinearity and dimensions. The first test problem is a simple one-variable quadratic model in which the nonlinearity of the observation operator is varied over a wide range by adjusting the magnitude of the coefficient of the quadratic term. 
The second problem has increased observation and model dimensions to test the EnKF (SIC) algorithm. The third test problem is a two-dimensional, two-phase reservoir flow problem in which permeability and porosity of every grid cell (5,000 model parameters) are unknown. The EnKF (SIC) and the mean-preserving ETKF (SRF) give similar results when applied to linear problems, and both are better than the standard EnKF. Although the ensemble methods are expected to handle the forecast step well in nonlinear problems, the estimates of the mean and the variance from the analysis step for all variants of ensemble filters are also surprisingly good, with little difference between ensemble methods when applied to nonlinear problems.
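The perturbed-observation analysis step discussed above can be sketched as follows. This is a minimal illustration of the standard EnKF update, not the paper's EnKF (SIC) or square-root variants; all names (`enkf_update`, `m_ens`, `h`) are hypothetical.

```python
import numpy as np

def enkf_update(m_ens, d_obs, h, R, rng):
    """One EnKF analysis step with perturbed observations.

    m_ens : (Nm, Ne) ensemble of model states
    d_obs : (Nd,) observed data
    h     : callable mapping one state vector to predicted data
    R     : (Nd, Nd) measurement-error covariance
    """
    Nm, Ne = m_ens.shape
    d_pred = np.column_stack([h(m_ens[:, j]) for j in range(Ne)])  # (Nd, Ne)
    # Perturb the observed data (the SIC variant would additionally impose
    # second-order-exact sampling and independence constraints here).
    d_pert = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=Ne).T
    dm = m_ens - m_ens.mean(axis=1, keepdims=True)
    dd = d_pred - d_pred.mean(axis=1, keepdims=True)
    C_md = dm @ dd.T / (Ne - 1)           # state/data cross-covariance
    C_dd = dd @ dd.T / (Ne - 1)           # predicted-data covariance
    K = C_md @ np.linalg.inv(C_dd + R)    # Kalman gain
    return m_ens + K @ (d_pert - d_pred)
```

For a linear observation operator this update shifts the ensemble mean toward the data and contracts the spread, which is the behavior the comparison above exploits.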

2.
Ensemble methods present a practical framework for parameter estimation, performance prediction, and uncertainty quantification in subsurface flow and transport modeling. In particular, the ensemble Kalman filter (EnKF) has received significant attention for its promising performance in calibrating heterogeneous subsurface flow models. Since an ensemble of model realizations is used to compute the statistical moments needed to perform the EnKF updates, large ensemble sizes are needed to provide accurate updates and uncertainty assessment. However, for realistic problems that involve large-scale models with computationally demanding flow simulation runs, the EnKF implementation is limited to small-sized ensembles. As a result, spurious numerical correlations can develop and lead to inaccurate EnKF updates, which tend to underestimate or even eliminate the ensemble spread. Ad hoc practical remedies, such as localization, local analysis, and covariance inflation schemes, have been developed and applied to reduce the effect of sampling errors due to small ensemble sizes. In this paper, a fast linear approximate forecast method is proposed as an alternative approach to enable the use of large ensemble sizes in operational settings and thereby obtain improved sample statistics and EnKF updates. The proposed method first clusters a large number of initial geologic model realizations into a small number of groups. A representative member from each group is used to run a full forward flow simulation. The flow predictions for the remaining realizations in each group are approximated by a linearization around the full simulation results of the representative model (centroid) of the respective cluster. The linearization can be performed using either adjoint-based or ensemble-based gradients.
Results from several numerical experiments with two-phase and three-phase flow systems in this paper suggest that the proposed method can be applied to improve the EnKF performance in large-scale problems where the number of full simulations is constrained.

3.
In this paper, a stochastic collocation-based Kalman filter (SCKF) is developed to estimate the hydraulic conductivity from direct and indirect measurements. It combines the advantages of the ensemble Kalman filter (EnKF) for dynamic data assimilation and the polynomial chaos expansion (PCE) for efficient uncertainty quantification. In this approach, the random log hydraulic conductivity field is first parameterized by the Karhunen–Loeve (KL) expansion and the hydraulic pressure is expressed by the PCE. The coefficients of the PCE are solved with a collocation technique. Realizations are constructed by choosing collocation point sets in the random space. The stochastic collocation method is non-intrusive in that such realizations are solved forward in time via an existing deterministic solver independently, as in the Monte Carlo method. The needed entries of the state covariance matrix are approximated with the coefficients of the PCE, which can be recovered from the collocation results. The system states are updated by updating the PCE coefficients. A 2D heterogeneous flow example is used to demonstrate the applicability of the SCKF with respect to different factors, such as the initial guess, variance, correlation length, and the number of observations. The results are compared with those from the EnKF method. It is shown that the SCKF is computationally more efficient than the EnKF under certain conditions. Each approach has its own advantages and limitations. The performance of the SCKF decreases with larger variance, smaller correlation length, and fewer observations. Hence, the choice between the two methods is problem dependent. As a non-intrusive method, the SCKF can be easily extended to multiphase flow problems.
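The KL parameterization mentioned above can be sketched for a 1-D grid. This is an illustrative stand-in, not the paper's implementation: the exponential covariance, grid, and function names are all assumptions.

```python
import numpy as np

def kl_modes(x, sigma2, L, n_terms):
    """Leading KL modes of an exponential covariance C(r) = sigma2*exp(-|r|/L),
    obtained from the eigendecomposition of the covariance matrix on grid x."""
    r = np.abs(x[:, None] - x[None, :])
    C = sigma2 * np.exp(-r / L)
    w, v = np.linalg.eigh(C)
    idx = np.argsort(w)[::-1][:n_terms]   # keep the largest eigenvalues
    return w[idx], v[:, idx]

def kl_realization(w, v, xi):
    """Log-conductivity realization for i.i.d. standard-normal coefficients xi."""
    return v @ (np.sqrt(w) * xi)
```

Truncating after a few terms reduces the random field to a short vector `xi`, which is what makes updating the expansion coefficients (rather than the full field) attractive.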

4.
In recent years, data assimilation techniques have been applied to an increasingly wider spectrum of problems. Monte Carlo variants of the Kalman filter, in particular the ensemble Kalman filter (EnKF), have gained significant popularity. EnKF is used for a wide variety of applications, among them for updating reservoir simulation models. EnKF is a Monte Carlo method, and its reliability depends on the actual size of the sample. In applications, a moderately sized sample (40–100 members) is used for computational convenience. Problems due to the resulting Monte Carlo effects require a more thorough analysis of the EnKF. Earlier, we presented a method for the assessment of the error emerging at the EnKF update step (Kovalenko et al., SIAM J Matrix Anal Appl, in press). A particular energy norm of the EnKF error after a single update step was studied. The energy norm used to assess the error is, however, hard to interpret. In this paper, we derive the distribution of the Euclidean norm of the sampling error under the same assumptions as before, namely normality of the forecast distribution and negligibility of the observation error. The distribution depends on the ensemble size, the number and spatial arrangement of the observations, and the prior covariance. The distribution is used to study the error propagation in a single update step on several synthetic examples. The examples illustrate the changes in reliability of the EnKF when the parameters governing the error distribution vary.

5.
The application of the ensemble Kalman filter (EnKF) for history matching petroleum reservoir models has been the subject of intense investigation during the past 10 years. Unfortunately, EnKF often fails to provide reasonable data matches for highly nonlinear problems. This fact motivated the development of several iterative ensemble-based methods in the last few years. However, there exists no study comparing the performance of these methods in the literature, especially in terms of their ability to quantify uncertainty correctly. In this paper, we compare the performance of nine ensemble-based methods in terms of the quality of the data matches, quantification of uncertainty, and computational cost. For this purpose, we use a small but highly nonlinear reservoir model so that we can generate the reference posterior distribution of reservoir properties using a very long chain generated by a Markov chain Monte Carlo sampling algorithm. We also consider one adjoint-based implementation of the randomized maximum likelihood method in the comparisons.

6.
7.
As an effective data assimilation method, the ensemble Kalman filter (EnKF) has demonstrated its advantages in numerous numerical experiments, but it has also exposed a weakness: low accuracy when the covariance is estimated from a small ensemble. To reduce the impact of sampling noise on the covariance estimate and improve filter accuracy, a localization function is applied to correct the covariance estimated from the small ensemble; that is, spatial-distance weights are introduced into the covariance matrix in the form of a Schur product to suppress spurious long-range correlations. Results from a two-dimensional idealized confined porous aquifer model show that localization is very effective in correcting EnKF estimates of groundwater parameters: it filters out the noise in small-ensemble estimates well, saves computational cost, and prevents filter divergence. Hydrogeological parameters with shorter correlation lengths (such as log-permeability) are more susceptible to noise and are in greater need of localization.
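The Schur-product localization described in this abstract can be sketched as follows. The linear taper and grid are illustrative assumptions (a Gaspari–Cohn function would be a more common choice in practice), and all names are hypothetical.

```python
import numpy as np

def taper_matrix(coords, cutoff):
    """Compactly supported distance weights: linear decay to zero at `cutoff`."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.clip(1.0 - d / cutoff, 0.0, None)

def localized_covariance(ensemble, coords, cutoff):
    """Schur (element-wise) product of the taper with the sample covariance.

    ensemble : (N, Ne) array, rows are state variables, columns are members
    coords   : (N, d) spatial coordinates of the state variables
    """
    C = np.cov(ensemble)                  # noisy small-ensemble estimate
    return taper_matrix(coords, cutoff) * C
```

The product leaves variances on the diagonal untouched while forcing covariances between distant variables to zero, which is exactly how spurious long-range correlations are suppressed.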

8.
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO) and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319–2339, 2008), while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region, where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in the LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed from different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles, Oper. Res. 56(3):607–617, 2008).
In multilevel Monte Carlo methods, more accurate (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. By selecting the number of expensive and inexpensive simulations based on the number of coarse degrees of freedom, one can show that MLMC methods can provide better accuracy at the same cost as Monte Carlo (MC) methods. The main objective of the paper is twofold: first, to compare the NLSO and LSO mixed MsFEMs; second, to use both approaches in the context of MLMC to speed up MC calculations.
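The few-expensive/many-cheap idea above can be sketched with a toy telescoping estimator. The model function `f` and the per-level sample counts are made-up stand-ins, not the paper's multiscale solvers.

```python
import numpy as np

def mlmc_estimate(f, n_samples, rng):
    """Estimate E[f(x, finest level)] by a telescoping sum over levels.

    f         : f(x, level) -> float, accuracy increasing with level
    n_samples : samples per level, typically decreasing toward finer levels
    """
    total = 0.0
    for level, n in enumerate(n_samples):
        x = rng.normal(size=n)
        if level == 0:
            # many cheap coarse-level samples
            total += np.mean([f(xi, 0) for xi in x])
        else:
            # few expensive samples of the level-to-level correction
            total += np.mean([f(xi, level) - f(xi, level - 1) for xi in x])
    return total
```

Because the corrections f(x, l) - f(x, l-1) have small variance, they need far fewer samples than the coarse level, which is where the cost saving over plain MC comes from.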

9.
To more correctly estimate the error covariance of an evolved state of a nonlinear dynamical system, the second and higher-order moments of the prior error need to be known. Retrospective optimal interpolation (ROI) may require relatively less information on the higher-order moments of the prior errors than an ensemble Kalman filter (EnKF) because it uses the initial conditions as the background states instead of forecasts. Analogous to the extension of a Kalman filter into an EnKF, an ensemble retrospective optimal interpolation (EnROI) technique was derived from ROI using the Monte Carlo method. In contrast to the deterministic version of ROI, the background error covariance is represented by a background ensemble in EnROI. By sequentially applying EnROI to a moving limited analysis window and exploiting the forecast from the average of the background ensemble of EnROI as a guess field, the computational costs for EnROI can be reduced. In a numerical experiment using the Lorenz-96 model and Model-III of Lorenz with a perfect-model assumption, the cost-effectiveness of the suboptimal version of EnROI is demonstrated to be superior to that of EnKF using perturbed observations.

10.
The ensemble Kalman filter (EnKF) has become a popular method for history matching production and seismic data in petroleum reservoir models. However, it is known that EnKF may fail to give acceptable data matches especially for highly nonlinear problems. In this paper, we introduce a procedure to improve EnKF data matches based on assimilating the same data multiple times with the covariance matrix of the measurement errors multiplied by the number of data assimilations. We prove the equivalence between single and multiple data assimilations for the linear-Gaussian case and present computational evidence that multiple data assimilations can improve EnKF estimates for the nonlinear case. The proposed procedure was tested by assimilating time-lapse seismic data in two synthetic reservoir problems, and the results show significant improvements compared to the standard EnKF. In addition, we review the inversion schemes used in the EnKF analysis and present a rescaling procedure to avoid loss of information during the truncation of small singular values.
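The multiple-data-assimilation idea above can be sketched for a toy linear-Gaussian problem: the same data are assimilated `n_assim` times with the measurement-error covariance inflated by `n_assim`. This is an illustrative sketch, not the paper's reservoir implementation, and all names are hypothetical.

```python
import numpy as np

def mda_update(ens, d_obs, H, R, n_assim, rng):
    """Assimilate d_obs n_assim times with inflated error covariance n_assim*R.

    ens : (Nm, Ne) ensemble; H : (Nd, Nm) linear observation operator.
    """
    Ne = ens.shape[1]
    Ri = n_assim * R                      # inflated measurement-error covariance
    for _ in range(n_assim):
        d_pred = H @ ens
        pert = d_obs[:, None] + rng.multivariate_normal(
            np.zeros(len(d_obs)), Ri, size=Ne).T
        dm = ens - ens.mean(axis=1, keepdims=True)
        dd = d_pred - d_pred.mean(axis=1, keepdims=True)
        K = (dm @ dd.T) @ np.linalg.inv(dd @ dd.T + (Ne - 1) * Ri)
        ens = ens + K @ (pert - d_pred)
    return ens
```

For the linear-Gaussian case the composition of the inflated updates matches a single assimilation with the original R, which is the equivalence the paper proves; the benefit appears in nonlinear problems, where the smaller steps relinearize more gently.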

11.
In recent years, the ensemble Kalman filter (EnKF) has become a very popular tool for history matching petroleum reservoirs. EnKF is an alternative to more traditional history matching techniques as it is computationally fast and easy to implement. Instead of seeking one best model estimate, EnKF is a Monte Carlo method that represents the solution with an ensemble of state vectors. Lately, several ensemble-based methods have been proposed to improve upon the solution produced by EnKF. In this paper, we compare EnKF with one of the most recently proposed methods, the adaptive Gaussian mixture filter (AGM), on a 2D synthetic reservoir and the Punq-S3 test case. AGM was introduced to loosen the requirement of a Gaussian prior distribution as implicitly formulated in EnKF. By combining ideas from particle filters with EnKF, AGM extends the low-rank kernel particle Kalman filter. The simulation study shows that while both methods match the historical data well, AGM is better at preserving the geostatistics of the prior distribution. Further, AGM also produces estimated fields that have a higher empirical correlation with the reference field than the corresponding fields obtained with EnKF.

12.
Sampling errors can severely degrade the reliability of estimates of conditional means and uncertainty quantification obtained by the application of the ensemble Kalman filter (EnKF) for data assimilation. A standard recommendation for reducing the spurious correlations and loss of variance due to sampling errors is to use covariance localization. In distance-based localization, the prior (forecast) covariance matrix at each data assimilation step is replaced with the Schur product of a correlation matrix with compact support and the forecast covariance matrix. The most important decision to be made in this localization procedure is the choice of the critical length(s) used to generate this correlation matrix. Here, we give a simple argument that the appropriate choice of critical length(s) should be based both on the underlying principal correlation length(s) of the geological model and the range of the sensitivity matrices. Based on this result, we implement a procedure for covariance localization and demonstrate with a set of distinctive reservoir history-matching examples that this procedure yields improved results over the standard EnKF implementation and over covariance localization with other choices of critical length.
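The compactly supported correlation matrix in this abstract is commonly built from the Gaspari–Cohn fifth-order taper, sketched below; the critical length `c` is precisely the tuning choice the paper argues should reflect both the geological correlation lengths and the sensitivity range. The function itself is standard, but its use here is an illustration, not the paper's code.

```python
import numpy as np

def gaspari_cohn(d, c):
    """Gaspari-Cohn taper for distances d >= 0 and critical length c.

    The taper equals 1 at d = 0 and vanishes for d >= 2c (compact support).
    """
    z = np.asarray(d, dtype=float) / c
    out = np.zeros_like(z)
    near = z <= 1.0
    far = (z > 1.0) & (z <= 2.0)
    zn = z[near]
    out[near] = (-0.25 * zn**5 + 0.5 * zn**4 + 0.625 * zn**3
                 - (5.0 / 3.0) * zn**2 + 1.0)
    zf = z[far]
    out[far] = ((1.0 / 12.0) * zf**5 - 0.5 * zf**4 + 0.625 * zf**3
                + (5.0 / 3.0) * zf**2 - 5.0 * zf + 4.0 - 2.0 / (3.0 * zf))
    return out
```

Evaluating this taper on all pairwise grid distances yields the correlation matrix whose Schur product with the forecast covariance performs the localization.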

13.
After dense non-aqueous phase liquids (DNAPLs) leak into the subsurface, their migration and distribution are strongly influenced by the heterogeneity of permeability. To characterize the architecture of a DNAPL source zone, parameter estimation is required to describe the heterogeneity of the hydrogeological parameters. In this study, an assimilation scheme combining the ensemble Kalman filter (EnKF) with a multiphase flow and transport model is constructed to estimate the spatial distribution of permeability in heterogeneous media by assimilating DNAPL saturation observations. The performance of the scheme is verified with real and idealized two-dimensional sandbox cases, and the influence of different factors on the assimilation is examined. The results show that assimilating saturation observations with the EnKF effectively estimates the heterogeneous permeability field; that estimation accuracy improves as the spatial and temporal density of observations increases; and that the placement of observation points affects the assimilation, with observations located in regions of concentrated contamination having higher data value for parameter estimation.

14.
Uncertainty in future reservoir performance is usually evaluated from the simulated performance of a small number of reservoir realizations. Unfortunately, most of the practical methods for generating realizations conditional to production data are only approximately correct. It is not known whether or not the recently developed method of Gradual Deformation is an approximate method or if it actually generates realizations that are distributed correctly. In this paper, we evaluate the ability of the Gradual Deformation method to correctly assess the uncertainty in reservoir predictions by comparing the distribution of conditional realizations for a small test problem with the standard distribution from a Markov chain Monte Carlo (MCMC) method, which is known to be correct, and with distributions from several approximate methods. Although the Gradual Deformation algorithm samples inefficiently for this test problem and is clearly not an exact method, it gives similar uncertainty estimates to those obtained by the MCMC method based on a relatively small number of realizations.
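The MCMC reference distribution invoked above is typically generated by a sampler of the following kind. This random-walk Metropolis sketch is generic, with a toy scalar target; it is not the paper's conditioning chain.

```python
import numpy as np

def metropolis(logpost, x0, n_steps, step, rng):
    """Random-walk Metropolis sampler for a scalar log-posterior."""
    x = float(x0)
    chain = np.empty(n_steps)
    lp = logpost(x)
    for i in range(n_steps):
        xp = x + step * rng.normal()      # symmetric proposal
        lpp = logpost(xp)
        if np.log(rng.random()) < lpp - lp:   # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain[i] = x
    return chain
```

Run long enough, such a chain samples the posterior exactly (up to Monte Carlo error), which is why it serves as the "known to be correct" benchmark against which approximate conditioning methods are judged.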

15.
The nonlinear filtering problem occurs in many scientific areas. Sequential Monte Carlo solutions with the correct asymptotic behavior such as particle filters exist, but they are computationally too expensive when working with high-dimensional systems. The ensemble Kalman filter (EnKF) is a more robust method that has shown promising results with a small sample size, but the samples are not guaranteed to come from the true posterior distribution. By approximating the model error with a Gaussian distribution, one may represent the posterior distribution as a sum of Gaussian kernels. The resulting Gaussian mixture filter has the advantage of both a local Kalman type correction and the weighting/resampling step of a particle filter. The Gaussian mixture approximation relies on a bandwidth parameter which often has to be kept quite large in order to avoid a weight collapse in high dimensions. As a result, the Kalman correction is too large to capture highly non-Gaussian posterior distributions. In this paper, we have extended the Gaussian mixture filter (Hoteit et al., Mon Weather Rev 136:317–334, 2008) and also made the connection to particle filters more transparent. In particular, we introduce a tuning parameter for the importance weights. In the last part of the paper, we have performed a simulation experiment with the Lorenz40 model where our method has been compared to the EnKF and a full implementation of a particle filter. The results clearly indicate that the new method has advantages compared to the standard EnKF.
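The weighting/resampling step that the Gaussian mixture filter borrows from particle filters can be sketched as below. The Gaussian likelihood and the tempering exponent `alpha` (a crude stand-in for the paper's importance-weight tuning parameter) are illustrative assumptions.

```python
import numpy as np

def weight_and_resample(particles, d_obs, h, r_var, alpha, rng):
    """Weight particles by a tempered Gaussian likelihood, then resample.

    alpha in (0, 1] flattens the weights; alpha = 1 is the plain particle filter.
    """
    d_pred = np.array([h(p) for p in particles])
    logw = -0.5 * alpha * (d_pred - d_obs) ** 2 / r_var  # tempered log-likelihood
    w = np.exp(logw - logw.max())                        # stabilized weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], w
```

Smaller `alpha` trades statistical exactness for resistance to weight collapse, which mirrors the bandwidth/weight trade-off discussed in the abstract.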

16.
Two methods for generating representative realizations from Gaussian and lognormal random field models are studied in this paper, with the term representative implying realizations efficiently spanning the range of possible attribute values corresponding to the multivariate (log)normal probability distribution. The first method, already established in the geostatistical literature, is multivariate Latin hypercube sampling, a form of stratified random sampling aiming at marginal stratification of simulated values for each variable involved under the constraint of reproducing a known covariance matrix. The second method, scarcely known in the geostatistical literature, is stratified likelihood sampling, in which representative realizations are generated by exploring in a systematic way the structure of the multivariate distribution function itself. The two sampling methods are employed for generating unconditional realizations of saturated hydraulic conductivity in a hydrogeological context via a synthetic case study involving physically based simulation of flow and transport in a heterogeneous porous medium; their performance is evaluated for different sample sizes (number of realizations) in terms of the reproduction of ensemble statistics of hydraulic conductivity and solute concentration computed from a very large ensemble set generated via simple random sampling. The results show that both Latin hypercube and stratified likelihood sampling are more efficient than simple random sampling, in that overall they can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than simple random sampling.
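The marginal stratification behind Latin hypercube sampling can be sketched on the unit hypercube. This is a minimal sketch of plain LHS; the covariance-reproduction constraint mentioned in the abstract would require an additional reordering step (e.g. rank-correlation matching) not shown here.

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """n samples in [0,1]^dim with exactly one sample per marginal stratum.

    Each column is divided into n equal strata; one point is jittered inside
    each stratum, and the strata are permuted independently per dimension.
    """
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n  # jitter in strata
    for j in range(dim):
        u[:, j] = u[rng.permutation(n), j]                  # decouple dimensions
    return u
```

Mapping each column through the inverse CDF of the desired marginal then yields stratified (log)normal samples.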

17.
The ensemble Kalman filter is widely used in land data assimilation because it is easy to use, but it rests on the assumptions of a linear model and normally distributed errors, whereas the actual soil-moisture equation is highly nonlinear and the sample distribution becomes skewed when the soil is very dry or very wet. To comprehensively evaluate its performance in retrieving the soil-moisture profile by assimilating surface soil-moisture observations, a sampling importance resampling particle filter, which requires neither of these assumptions, is introduced to compare the effects of nonlinearity and skewness on the assimilation algorithms. The results show that the ensemble Kalman filter approaches the sample mean quickly and accurately with both small and large ensembles, whereas the particle filter converges only slowly and only with large ensembles. Moreover, the marginal probability densities of the particles, together with their skewness and kurtosis, differ completely between the two filters: the ensemble Kalman filter's particles, though not strictly Gaussian, remain unimodal throughout, whereas the particle filter's particles evolve from unimodal to bimodal and back to unimodal as the assimilation proceeds.

18.
Uncertainty in surfactant–polymer flooding is an important challenge to the wide-scale implementation of this process. Any successful design of this enhanced oil recovery process will necessitate a good understanding of uncertainty. Thus, it is essential to have the ability to quantify this uncertainty in an efficient manner. Monte Carlo simulation is the traditional uncertainty quantification approach that is used for quantifying parametric uncertainty. However, the convergence of Monte Carlo simulation is relatively slow, requiring a large number of realizations to converge. This study proposes the use of the probabilistic collocation method in parametric uncertainty quantification for surfactant–polymer flooding using four synthetic reservoir models. Four sources of uncertainty were considered: the chemical flood residual oil saturation, surfactant and polymer adsorption, and the polymer viscosity multiplier. The output parameter approximated is the recovery factor. The output metrics were the input–output model response relationship, the probability density function, and the first two moments. These were compared with the results obtained from Monte Carlo simulation over a large number of realizations. Two methods for solving for the coefficients of the output parameter polynomial chaos expansion are compared: Gaussian quadrature and linear regression. The linear regression approach used two types of sampling: full-tensor product nodes and Chebyshev-derived nodes. In general, the probabilistic collocation method was applied successfully to quantify the uncertainty in the recovery factor. Applying the method using the Gaussian quadrature produced more accurate results compared with using the linear regression with full-tensor product nodes. Applying the method using the linear regression with Chebyshev-derived sampling also performed relatively well. Possible enhancements to improve the performance of the probabilistic collocation method were discussed.
These enhancements include improved sparse sampling, approximation order-independent sampling, and using arbitrary random input distributions that could be more representative of reality.
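The Gaussian-quadrature route to the polynomial chaos coefficients can be sketched for one standard-normal input. The model `g` below is a made-up stand-in for the recovery-factor response, not one of the paper's reservoir models.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(g, order, n_quad):
    """Coefficients c_k of g(X) = sum_k c_k He_k(X), X ~ N(0,1).

    Uses probabilists' Hermite polynomials He_k, which are orthogonal under
    the N(0,1) density with E[He_k^2] = k!.
    """
    x, w = hermegauss(n_quad)            # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # normalize to the N(0,1) density
    gx = g(x)
    coeffs = []
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0
        He_k = hermeval(x, ek)           # evaluate He_k at the nodes
        coeffs.append(np.sum(w * gx * He_k) / math.factorial(k))
    return np.array(coeffs)
```

From the coefficients, the first two output moments follow directly: the mean is c_0 and the variance is the sum of c_k^2 * k! over k >= 1, which is how the moments in the abstract are recovered without further sampling.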

19.
This work presents the application of a Monte Carlo simulation method to perform a statistical analysis of transient variably saturated flow in a hypothetical random porous medium. For each realization of the stochastic soil parameters entering as coefficients in Richards' flow equation, the pressure head and the flow field are computed using a mixed finite element procedure for the spatial discretization combined with a backward Euler and a modified Picard iteration in time. The hybridization of the mixed method provides a novel way of evaluating hydraulic conductivity on interelement boundaries. The proposed methodology can handle both large variability and fractal structure in the hydraulic parameters. The saturated conductivity K_s and the shape parameter α_vg in the van Genuchten model are treated as stochastic fractal functions known as fractional Brownian motion (fBm) or fractional Gaussian noise (fGn). The statistical moments of the pressure head, water content, and flow components are obtained by averaging realizations of the fractal parameters in Monte Carlo fashion. A numerical example showing the application of the proposed methodology to characterize groundwater flow in highly heterogeneous soils is presented.

20.
Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations where the lower boundary represents the effect of an instant drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (C_c), void ratio (e), and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, the mean, and the variance for each parameter were obtained from a literature review of field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux, and time to reach steady-state conditions.
