Similar Documents
1.
In recent years, the ensemble Kalman filter (EnKF) has become a very popular tool for history matching petroleum reservoirs. EnKF is an alternative to more traditional history matching techniques as it is computationally fast and easy to implement. Instead of seeking one best model estimate, EnKF is a Monte Carlo method that represents the solution with an ensemble of state vectors. Lately, several ensemble-based methods have been proposed to improve upon the solution produced by EnKF. In this paper, we compare EnKF with one of the most recently proposed methods, the adaptive Gaussian mixture filter (AGM), on a 2D synthetic reservoir and the Punq-S3 test case. AGM was introduced to relax the requirement of a Gaussian prior distribution implicit in EnKF. By combining ideas from particle filters with EnKF, AGM extends the low-rank kernel particle Kalman filter. The simulation study shows that while both methods match the historical data well, AGM is better at preserving the geostatistics of the prior distribution. Further, AGM also produces estimated fields that have a higher empirical correlation with the reference field than the corresponding fields obtained with EnKF.
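The EnKF analysis step that both this abstract and several below build on can be sketched in a few lines. This is a minimal stochastic EnKF update with perturbed observations, under the assumption of a linear observation operator; all variable names are illustrative, not taken from the paper:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step (perturbed observations).

    X : (n, N) ensemble of state vectors (one column per member)
    y : (m,)   observed data
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
    HA = H @ A
    # Kalman gain built from the ensemble sample covariances
    C = HA @ HA.T / (N - 1) + R
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(C)
    # perturb the observations independently for each member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))           # 50-member ensemble, 3 state variables
H = np.array([[1.0, 0.0, 0.0]])        # observe the first variable only
Xa = enkf_update(X, np.array([0.5]), H, np.array([[0.1]]), rng)
```

The analysis ensemble `Xa` has reduced spread in the observed variable, which is the behavior the ensemble-based comparisons in these papers start from.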

2.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small to limit computational cost. Consequently, the error space is not well covered (poor cross-correlation matrix approximations) and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer recognizable in the final updated parameter field. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. EnMSF replaces, at each update time, the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the tree (nodes at the adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a 2D, two-phase (oil and water) small twin experiment, and the results are compared to the EnKF. The advantages of using EnMSF are localization in space and scale, adaptability to prior information, and efficiency in case many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.

3.
Satellite data assimilation can provide accurate initial fields for Numerical Weather Prediction (NWP) models. Variational data assimilation to date rests on the assumption that errors obey a Gaussian distribution, so that the least-squares method applies. In classical variational assimilation, if the data contain outliers, the optimal parameter estimates are meaningless; quality control of Atmospheric Infrared Sounder (AIRS) data before assimilation is therefore essential. This paper reviews advances in quality control of AIRS data, analyzing and discussing the state of research from five aspects: channel selection, outlier elimination, bias correction, cloud detection, and data thinning. Three channel-selection methods are summarized: stepwise iteration based on information entropy, the cumulative effect coefficient of principal components, and principal components with stepwise regression. The entropy-based stepwise iterative method is the most widely used, but the selected channels are only weakly correlated; principal components with stepwise regression can yield channel combinations carrying a large amount of information, but the algorithm makes the implementation time-consuming. For outlier elimination, both the 3σ (Pauta) criterion and the double-weight (biweight) method have been used, and results show that the latter is better. Two kinds of bias correction, off-line and on-line, are introduced, covering static, adaptive, regression, variational, radiative-transfer-model-based, Kalman-filter, and dynamically updated bias-correction techniques. The static method has the best timeliness, while the variational method can handle problems such as data drift.
Bias correction based on the radiative transfer model and on Kalman methods gives better results but is more time-consuming and unsuitable for operational use; overall, the dynamically updated scheme offers the best balance of effect and timeliness. Four cloud-detection methods are discussed: the clear-sky field of view, clear-sky channels, cloud radiation correction, and matching of cloud products from different instruments. The first two are more feasible from the standpoint of timeliness for numerical prediction, but the clear-sky field-of-view method retains less data than the clear-sky channel method, discarding many channel data in climate-sensitive regions such as the upper channels and thereby degrading the quality of the analysis field. Further, the skip-point, box, and principal-component methods for AIRS data thinning are analyzed. In terms of assimilation timeliness and operability, the box method is feasible; principal component analysis is algorithmically complex but has application prospects. After this review of quality control, further research directions in each of these areas are suggested.
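The double-weight (biweight) outlier elimination that the review favors can be sketched with Tukey's biweight: samples far from a robust center get zero weight and are flagged as outliers. This is a generic sketch of the technique, not the review's exact implementation; the tuning constant `c` is an assumed typical value:

```python
import numpy as np

def biweight_mean(x, c=6.0, tol=1e-6, max_iter=50):
    """Robust location estimate with Tukey's biweight; samples that
    receive zero weight are flagged as outliers."""
    m = np.median(x)
    for _ in range(max_iter):
        mad = np.median(np.abs(x - m)) or 1e-12   # robust scale
        u = (x - m) / (c * mad)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
        m_new = np.sum(w * x) / np.sum(w)
        if abs(m_new - m) < tol:
            m = m_new
            break
        m = m_new
    return m, w == 0.0

data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 8.0])  # 8.0 is a gross outlier
center, is_outlier = biweight_mean(data)
```

Unlike a fixed 3σ cut, the weights decay smoothly, which is one reason the double-weight method tends to perform better on radiance data with heavy-tailed errors.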

4.
The use of the ensemble smoother (ES) instead of the ensemble Kalman filter increases the nonlinearity of the update step during data assimilation and the need for iterative assimilation methods. A previous version of the iterative ensemble smoother based on Gauss–Newton formulation was able to match data relatively well but only after a large number of iterations. A multiple data assimilation method (MDA) was generally more efficient for large problems but lacked the ability to continue “iterating” if the data mismatch was too large. In this paper, we develop an efficient, iterative ensemble smoother algorithm based on the Levenberg–Marquardt (LM) method of regularizing the update direction and choosing the step length. The incorporation of the LM damping parameter reduces the tendency to add model roughness at early iterations when the update step is highly nonlinear, as it often is when all data are assimilated simultaneously. In addition, the ensemble approximation of the Hessian is modified in a way that simplifies computation and increases stability. We also report on a simplified algorithm in which the model mismatch term in the updating equation is neglected. We thoroughly evaluated the new algorithm based on the modified LM method, LM-ensemble randomized maximum likelihood (LM-EnRML), and the simplified version of the algorithm, LM-EnRML (approx), on three test cases. The first is a highly nonlinear single-variable problem for which results can be compared against the true conditional pdf. The second test case is a one-dimensional two-phase flow problem in which the permeability of 31 grid cells is uncertain. In this case, Markov chain Monte Carlo results are available for comparison with ensemble-based results. The third test case is the Brugge benchmark case with both 10 and 20 years of history.
The efficiency and quality of results of the new algorithms were compared with the standard ES (without iteration), the ensemble-based Gauss–Newton formulation, the standard ensemble-based LM formulation, and the MDA. Because of the high level of nonlinearity, the standard ES performed poorly on all test cases. The MDA often performed well, especially at early iterations where the reduction in data mismatch was quite rapid. The best results, however, were always achieved with the new iterative ensemble smoother algorithms, LM-EnRML and LM-EnRML (approx).
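The core idea of the damped iteration can be illustrated with a toy version of the simplified (approx) update: an ensemble-smoother step in which the data-error covariance is inflated by (1 + λ), which shortens the step when λ is large. This is a sketch under that assumption, not the paper's full LM-EnRML algorithm (the model-mismatch term and Hessian modifications are omitted):

```python
import numpy as np

def lm_enrml_step(M, D, d_obs, Cd, lam, rng):
    """One simplified LM-damped ensemble update.

    M : (n, N) parameter ensemble;  D : (m, N) predicted data g(M)
    """
    n, N = M.shape
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (N - 1)                 # parameter-data covariance
    Cdd = dD @ dD.T / (N - 1)                 # data-data covariance
    S = np.linalg.inv((1.0 + lam) * Cd + Cdd) # (1+lam) damps the step
    Duc = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), Cd, size=N).T   # perturbed observations
    return M + Cmd @ S @ (Duc - D)

rng = np.random.default_rng(1)
M = rng.normal(size=(1, 200))                 # prior ensemble, one parameter
d_obs, Cd, lam = np.array([1.0]), np.array([[0.01]]), 1.0
for _ in range(4):                            # relax damping each iteration
    D = 2.0 * M                               # toy forward model g(m) = 2 m
    M = lm_enrml_step(M, D, d_obs, Cd, lam, rng)
    lam /= 2.0
```

With the toy linear forward model, the iterated ensemble mean moves to the data-fitting value m ≈ 0.5, while large λ at early iterations keeps the first steps short, mirroring the roughness control described above.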

5.
In recent years, data assimilation techniques have been applied to an increasingly wide spectrum of problems. Monte Carlo variants of the Kalman filter, in particular the ensemble Kalman filter (EnKF), have gained significant popularity. EnKF is used for a wide variety of applications, among them for updating reservoir simulation models. EnKF is a Monte Carlo method, and its reliability depends on the actual size of the sample. In applications, a moderately sized sample (40–100 members) is used for computational convenience. Problems due to the resulting Monte Carlo effects require a more thorough analysis of the EnKF. Earlier we presented a method for the assessment of the error emerging at the EnKF update step (Kovalenko et al., SIAM J Matrix Anal Appl, in press). A particular energy norm of the EnKF error after a single update step was studied. The energy norm used to assess the error is hard to interpret. In this paper, we derive the distribution of the Euclidean norm of the sampling error under the same assumptions as before, namely normality of the forecast distribution and negligibility of the observation error. The distribution depends on the ensemble size, the number and spatial arrangement of the observations, and the prior covariance. The distribution is used to study the error propagation in a single update step on several synthetic examples. The examples illustrate the changes in reliability of the EnKF when the parameters governing the error distribution vary.

6.
A new method for three-dimensional resistivity imaging and its application
The three-dimensional resistivity imaging method comprises an approximate 3D inversion operator and a generalized subspace interpreter. The 3D inversion operator performs an approximate linear interpretation of resistivity in the forward sense, while the generalized subspace method, a variant of the conjugate gradient method, is an effective solver for large-scale optimization problems. Three-dimensional imaging is a linear inverse problem and requires neither forward modeling nor sensitivity updates. In nonlinear 3D inversion, the 3D resistivity imaging technique can either recover the 3D resistivity distribution directly or supply model updates at intermediate steps. Validation against field data shows that the 3D imaging technique provides reliable information on the spatial distribution of resistivity.

7.
The traditional damped least-squares method suits only simple models with few parameters; when the medium contains many layers, the inversion suffers from non-uniqueness, sometimes even fails to converge, and is very time-consuming. To address this, model constraints are introduced into the inversion through regularization, with the regularization factor obtained adaptively at each iteration from the data objective function and the model objective function, so that the inversion proceeds stably. A quasi-Newton method is introduced to update the Jacobian matrix, greatly reducing the time the inversion requires. Trial inversions of typical three-layer and multilayer theoretical models show that the quasi-Newton adaptive regularized inversion algorithm places few demands on the initial model, fits the data well, converges quickly, and is broadly applicable, demonstrating good stability and reliability.
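The adaptive regularization idea above can be sketched on a linear toy problem: at each iteration the regularization factor is set from the ratio of the current data misfit to its initial value, so the constraint relaxes as the data are fit. This is an illustrative sketch only; the paper's quasi-Newton Jacobian update is replaced here by the exact (constant) Jacobian of a linear forward model:

```python
import numpy as np

def adaptive_reg_inversion(G, d, m0, lam0=1.0, n_iter=20):
    """Gauss-Newton inversion with an adaptive regularization factor:
    lam shrinks in proportion to the remaining data misfit, so early
    iterations are strongly constrained toward m0 and late ones are not."""
    m = m0.copy()
    r0 = G @ m0 - d
    phi0 = float(r0 @ r0)                     # initial data misfit
    for _ in range(n_iter):
        r = G @ m - d
        lam = lam0 * float(r @ r) / phi0      # adaptive regularization factor
        dm = np.linalg.solve(G.T @ G + lam * np.eye(len(m)),
                             -(G.T @ r) - lam * (m - m0))
        m = m + dm
    return m

G = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
m_true = np.array([1.0, 2.0])
m_est = adaptive_reg_inversion(G, G @ m_true, np.zeros(2))
```

On noise-free data the factor decays rapidly and the iterate converges to the least-squares solution, illustrating the "stable yet unbiased in the limit" behavior the abstract claims.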

8.
Seismic data are inherently nonstationary, and filling gaps in complex nonstationary seismic wavefields is a key step in seismic exploration data processing. Prediction filters play an important role in seismic data processing and analysis, and the technique can effectively handle missing seismic data, but conventional stationary prediction filters cannot adapt well to the nonstationary character of seismic data; developing an efficient adaptive prediction-interpolation method for complex wavefields therefore has considerable industrial value. This paper introduces the concept of streaming into the prediction filter: the filter coefficients are updated as the seismic data change, and the computation requires only vector dot products, which raises computational efficiency and reduces memory use. On this basis, a seismic data interpolation method based on streaming prediction filters is developed. Using the dynamic information carried by multiples, virtual primaries are constructed through cross-correlation, which resolves the inaccurate estimation of filter coefficients at missing-data locations, provides a more reasonable filter estimate for the interpolation, and better reconstructs nonstationary seismic data. Tests on the Sigsbee 2B model and field data show that the method can reasonably reconstruct missing data in complex seismic settings.
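The streaming idea, coefficients updated sample by sample using only dot products, with the current prediction inserted wherever a sample is missing, can be sketched in 1D with a normalized-LMS update. This is a generic sketch of a streaming prediction filter, not the paper's 2D implementation or its virtual-primary construction:

```python
import numpy as np

def streaming_pef_interp(x, known, na=4, mu=0.5, eps=1e-8):
    """Streaming prediction-filter interpolation: an na-tap filter is
    updated on the fly (normalized LMS, vector dot products only);
    where a sample is missing, the current prediction fills the gap."""
    x = x.astype(float).copy()
    a = np.zeros(na)                           # filter coefficients
    for t in range(na, len(x)):
        past = x[t - na:t][::-1]
        pred = a @ past
        if known[t]:
            e = x[t] - pred
            a += mu * e * past / (past @ past + eps)  # streaming update
        else:
            x[t] = pred                        # insert the prediction
    return x

n = np.arange(300)
true = np.sin(0.3 * n)                         # nonstationary data would also adapt
known = np.ones(300, dtype=bool)
known[250:255] = False                         # a 5-sample gap
x = np.where(known, true, 0.0)
filled = streaming_pef_interp(x, known)
```

Because the coefficients track the data as they stream past, the filter at the gap reflects the local waveform rather than a global stationary estimate.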

9.
The nonlinear filtering problem occurs in many scientific areas. Sequential Monte Carlo solutions with the correct asymptotic behavior such as particle filters exist, but they are computationally too expensive when working with high-dimensional systems. The ensemble Kalman filter (EnKF) is a more robust method that has shown promising results with a small sample size, but the samples are not guaranteed to come from the true posterior distribution. By approximating the model error with a Gaussian distribution, one may represent the posterior distribution as a sum of Gaussian kernels. The resulting Gaussian mixture filter has the advantage of both a local Kalman type correction and the weighting/resampling step of a particle filter. The Gaussian mixture approximation relies on a bandwidth parameter which often has to be kept quite large in order to avoid a weight collapse in high dimensions. As a result, the Kalman correction is too large to capture highly non-Gaussian posterior distributions. In this paper, we have extended the Gaussian mixture filter (Hoteit et al., Mon Weather Rev 136:317–334, 2008) and also made the connection to particle filters more transparent. In particular, we introduce a tuning parameter for the importance weights. In the last part of the paper, we have performed a simulation experiment with the Lorenz40 model where our method has been compared to the EnKF and a full implementation of a particle filter. The results clearly indicate that the new method has advantages compared to the standard EnKF.
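One simple way to realize a tuning parameter for the importance weights, sketched here under the assumption that the weights are interpolated toward uniform, is to blend the likelihood weights with 1/N. This is an illustrative form, not necessarily the exact parameterization of the paper:

```python
import numpy as np

def tempered_weights(loglik, alpha):
    """Dampen importance weights: interpolate between likelihood-based
    weights (alpha = 1) and uniform weights (alpha = 0)."""
    w = np.exp(loglik - loglik.max())      # stable normalization
    w /= w.sum()
    return alpha * w + (1.0 - alpha) / len(w)

def effective_sample_size(w):
    """Standard ESS diagnostic; near 1 signals weight collapse."""
    return 1.0 / np.sum(w**2)

loglik = np.array([0.0, -10.0, -10.0, -10.0])  # one member dominates
w_full = tempered_weights(loglik, 1.0)
w_damp = tempered_weights(loglik, 0.3)
```

With α < 1 the effective sample size rises, which is exactly the trade-off the abstract describes: less aggressive particle weighting in exchange for avoiding collapse in high dimensions.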

10.
In this paper we present an extension of the ensemble Kalman filter (EnKF) specifically designed for multimodal systems. The EnKF data assimilation scheme is less accurate when it is used to approximate systems with multimodal distributions such as reservoir facies models. The algorithm is based on the assumption that both the prior and the posterior distribution can be approximated by Gaussian mixtures, and it is validated by the introduction of the concept of finite ensemble representation. The effectiveness of the approach is shown with two applications. The first example is based on the Lorenz model. In the second example, the proposed methodology combined with a localization technique is used to update a 2D reservoir facies model. Both applications give evidence of an improved performance of the proposed method with respect to the EnKF.

11.
Adaptively weighted improved-window median filtering
Because the window function and window size of the conventional multistage two-dimensional median filter affect noise suppression in seismic data, an adaptively weighted, improved-window multistage 2D median filter is proposed. The window function of the conventional multistage 2D median filter is modified so that linear features and details are preserved. On the basis of the improved window function, an adaptive weighting function is introduced; its adaptivity underpins both noise attenuation and fidelity of the valid signal, and the resulting adaptively weighted improved-window median filter removes noise markedly. Comparisons on theoretical models and field data show that the method outperforms the conventional 2D multistage median filter in both noise removal and preservation of the valid signal.
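The multistage structure that gives this family of filters its edge-preserving behavior can be sketched as follows: medians over four directional sub-windows are combined with the center sample, so a linear event aligned with any one direction survives. This is the classical multistage median filter, shown here without the paper's adaptive weighting:

```python
import numpy as np

def multistage_median(img, half=1):
    """Multistage 2D median filter: directional medians (horizontal,
    vertical, both diagonals) combined with the center sample, which
    preserves linear features better than one square-window median."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    k = np.arange(-half, half + 1)
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            z1 = np.median(img[i, j + k])        # horizontal
            z2 = np.median(img[i + k, j])        # vertical
            z3 = np.median(img[i + k, j + k])    # main diagonal
            z4 = np.median(img[i + k, j - k])    # anti-diagonal
            zs = [z1, z2, z3, z4]
            out[i, j] = np.median([max(zs), min(zs), img[i, j]])
    return out

spike = np.zeros((7, 7)); spike[3, 3] = 10.0     # isolated impulse noise
line = np.zeros((7, 7)); line[3, :] = 5.0        # a linear event
```

An isolated spike is removed (every directional median is 0), while a one-sample-wide line is kept, which a plain 3x3 median of all nine samples would destroy.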

12.
An iterative method of adaptive pattern recognition is used to allocate unclassified individuals to an a priori classification. The model is similar in form to a linear discriminant function, but the coefficient vector is determined by iteration. The method can be used with binary data, and with variables whose statistical distributions are not normal; it is therefore a useful technique for geologists.
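A linear discriminant whose coefficient vector is found by iterative correction, rather than from normal-theory statistics, can be sketched as a perceptron-style update; this works directly on binary data, which is the property the abstract highlights. The specific update rule below is an assumption for illustration, not necessarily the paper's exact scheme:

```python
import numpy as np

def train_discriminant(X, y, n_epochs=100, lr=1.0):
    """Iteratively determined linear discriminant: the coefficient
    vector is corrected whenever an individual is misallocated.
    y holds class labels +1 / -1."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(n_epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misallocated individual
                w += lr * yi * xi               # corrective update
                errors += 1
        if errors == 0:                         # all individuals allocated
            break
    return w

def allocate(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xb @ w >= 0, 1, -1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # binary data
y = np.array([-1, -1, -1, 1])
w = train_discriminant(X, y)
```

No distributional assumption enters: convergence needs only linear separability of the two groups, which is why the method suits non-normal geological variables.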

13.
付晓东  盛谦  张勇慧  冷先伦 《岩土力学》2016,37(4):1171-1178
Numerical simulation of large-scale engineering problems with the discontinuous deformation analysis (DDA) method takes too long, and solving the linear system can account for more than 70% of the total computation time, so efficient linear solvers are an important research topic. First, a block-based compressed-row storage scheme suited to the DDA method and a nonzero-position record based on a trial-and-error iteration scheme are described. Then, exploiting DDA's submatrix structure, the block Jacobi iteration (BJ) and preconditioned conjugate gradient methods (PCG, including Jacobi-PCG and SSOR-PCG) are introduced into DDA, with emphasis on the key operations in solving the linear system. Finally, two cavern-excavation examples are used to analyze the computational efficiency of the linear solvers within DDA. The study shows that, compared with the iterative methods, direct solvers cannot meet the needs of large-scale engineering computation; the BJ iteration differs little in efficiency from the block successive over-relaxation iteration (BSOR) but is clearly inferior to the PCG iterations. PCG is therefore recommended for solving DDA linear systems, and SSOR-PCG in particular is worth promoting; if parallel computing is pursued, Jacobi-PCG is a good choice, and when the inertia terms give the stiffness matrix a strongly dominant diagonal, the BJ iteration is also effective.
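The Jacobi-PCG variant recommended above can be sketched for a dense SPD system; the preconditioner solve is simply a division by diag(A), which is what makes it attractive for parallelization (in DDA the analogue would use the diagonal blocks rather than scalars):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for a symmetric
    positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                    # initial residual
    d = np.diag(A)
    z = r / d                        # preconditioner solve: M = diag(A)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / d
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x_sol = jacobi_pcg(A, b)
```

An SSOR-PCG version would replace the `r / d` line with forward and backward triangular sweeps; the outer CG loop is unchanged.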

14.
15.
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation for the application of reservoir simulation. The main advantages of ensemble-based data assimilation methods are that they can handle large-scale numerical models with a high degree of nonlinearity and large amounts of data, making them perfectly suited for coupling with a reservoir simulator. However, the sequential implementation is computationally expensive, as the methods require a relatively large number of reservoir simulation runs. Therefore, the main focus of this work is to develop a parallel data assimilation framework with minimum changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although this parallel framework is general and can be used for different ensemble techniques, we discuss the methodology and compare results of two algorithms, the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced using a parallel implementation versus a serial one. In particular, a parallel efficiency of about 35 % is obtained for the EnKF, and an efficiency of more than 50 % is obtained for the ES.

16.
The ensemble Kalman filter (EnKF) has been shown repeatedly to be an effective method for data assimilation in large-scale problems, including those in petroleum engineering. Data assimilation for multiphase flow in porous media is particularly difficult, however, because the relationships between model variables (e.g., permeability and porosity) and observations (e.g., water cut and gas–oil ratio) are highly nonlinear. Because of the linear approximation in the update step and the use of a limited number of realizations in an ensemble, the EnKF has a tendency to systematically underestimate the variance of the model variables. Various approaches have been suggested to reduce the magnitude of this problem, including the application of ensemble filter methods that do not require perturbations to the observed data. On the other hand, iterative least-squares data assimilation methods with perturbations of the observations have been shown to be fairly robust to nonlinearity in the data relationship. In this paper, we present EnKF with perturbed observations as a square root filter in an enlarged state space. By imposing second-order-exact sampling of the observation errors and independence constraints to eliminate the cross-covariance with predicted observation perturbations, we show that it is possible in linear problems to obtain results from EnKF with observation perturbations that are equivalent to ensemble square-root filter results. Results from a standard EnKF, EnKF with second-order-exact sampling of measurement errors that satisfy independence constraints (EnKF (SIC)), and an ensemble square-root filter (ETKF) are compared on various test problems with varying degrees of nonlinearity and dimensions. The first test problem is a simple one-variable quadratic model in which the nonlinearity of the observation operator is varied over a wide range by adjusting the magnitude of the coefficient of the quadratic term. 
The second problem has increased observation and model dimensions to test the EnKF (SIC) algorithm. The third test problem is a two-dimensional, two-phase reservoir flow problem in which permeability and porosity of every grid cell (5,000 model parameters) are unknown. The EnKF (SIC) and the mean-preserving ETKF (SRF) give similar results when applied to linear problems, and both are better than the standard EnKF. Although the ensemble methods are expected to handle the forecast step well in nonlinear problems, the estimates of the mean and the variance from the analysis step for all variants of ensemble filters are also surprisingly good, with little difference between ensemble methods when applied to nonlinear problems.

17.
18.
This study demonstrates that adaptive filters can be used successfully to remove noise from duplicate paleoceanographic time-series. Conventional methods for noise canceling such as fixed filters cannot be applied to paleoceanographic time-series if optimal filtering is to be achieved, because the signal-to-noise ratio is unknown and varies with time. In contrast, an adaptive filter automatically extracts information without any prior initialization of the filter parameters. Two basic adaptive filtering methods, the gradient-based stochastic least-mean-squares (LMS) algorithm and the recursive least-squares (RLS) algorithm, have been modified for paleoceanographic applications. The RLS algorithm can be used for noise removal from duplicate records corrupted by stationary noise, for example, carbonate measurements, species counts, or density data. The RLS filter performance is characterized by high accuracy and a fast rate of convergence. The modified LMS algorithm outperforms the RLS procedure in a nonstationary environment (e.g., stable isotope records) but at the price of a slower rate of convergence and a reduced accuracy in the final estimate. The application of both algorithms is demonstrated by means of carbonate and stable isotope data.
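The duplicate-record setting can be sketched with a textbook RLS filter: one record is the input, the other the desired signal, and the filter output recovers the part coherent between the two, i.e., the common paleoceanographic signal, since the noise in independent duplicates is uncorrelated. This is a generic RLS sketch, not the paper's modified algorithm:

```python
import numpy as np

def rls_filter(x, d, n_taps=4, lam=0.99, delta=0.01):
    """Recursive least-squares adaptive filter: predicts record d from
    the recent past of record x; y is the coherent (signal) part."""
    w = np.zeros(n_taps)
    P = np.eye(n_taps) / delta            # inverse-correlation estimate
    y = np.zeros(len(d))
    for t in range(n_taps, len(d)):
        u = x[t - n_taps:t][::-1]         # regressor of past input samples
        y[t] = w @ u
        e = d[t] - y[t]                   # a priori error
        k = P @ u / (lam + u @ P @ u)     # gain vector
        w += k * e
        P = (P - np.outer(k, u @ P)) / lam
    return y, w

rng = np.random.default_rng(2)
n = np.arange(500)
s = np.sin(0.2 * n)                       # common signal in both records
x = s + 0.3 * rng.normal(size=500)        # duplicate record 1 (input)
d = s + 0.3 * rng.normal(size=500)        # duplicate record 2 (desired)
y, w = rls_filter(x, d)
```

Because the noises of the two duplicates are independent, only the shared signal is predictable, so the converged output tracks it closely despite the unknown, data-driven signal-to-noise ratio.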

19.
Logistic regression is a widely used statistical method to relate a binary response variable to a set of explanatory variables and maximum likelihood is the most commonly used method for parameter estimation. A maximum-likelihood logistic regression (MLLR) model predicts the probability of the event from binary data defining the event. Currently, MLLR models are used in a myriad of fields including geosciences, natural hazard evaluation, medical diagnosis, homeland security, finance, and many others. In such applications, the empirical sample data often exhibit class imbalance, where one class is represented by a large number of events while the other is represented by only a few. In addition, the data also exhibit sampling bias, which occurs when there is a difference between the class distribution in the sample compared to the actual class distribution in the population. Previous studies have evaluated how class imbalance and sampling bias affect the predictive capability of asymptotic classification algorithms such as MLLR, yet no definitive conclusions have been reached.
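The MLLR estimation step itself is standard and can be sketched with Newton-Raphson (iteratively reweighted least squares); the class-imbalance and sampling-bias questions the abstract raises concern the data fed to this fit, not the fit itself. A small ridge term is an assumption added for numerical safety:

```python
import numpy as np

def mllr_fit(X, y, n_iter=25, ridge=1e-8):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS).
    y is the binary event indicator (0/1)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))    # event probabilities
        W = p * (1.0 - p)                       # IRLS weights
        H = Xb.T @ (W[:, None] * Xb) + ridge * np.eye(Xb.shape[1])
        beta += np.linalg.solve(H, Xb.T @ (y - p))  # Newton step
    return beta

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 2))
beta_true = np.array([-0.5, 1.0, -2.0])
p_true = 1.0 / (1.0 + np.exp(-(beta_true[0] + X @ beta_true[1:])))
y = (rng.random(2000) < p_true).astype(float)
beta_hat = mllr_fit(X, y)
```

With a balanced, unbiased sample the estimates recover the generating coefficients; under class imbalance or sample-selection bias, the intercept in particular becomes distorted, which is the effect the studies cited above investigate.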

20.
Multiple-point statistics (MPS) provides a flexible grid-based approach for simulating complex geologic patterns that contain high-order statistical information represented by a conceptual prior geologic model known as a training image (TI). While MPS is quite powerful for describing complex geologic facies connectivity, conditioning the simulation results on flow measurements that have a nonlinear and complex relation with the facies distribution is quite challenging. Here, an adaptive flow-conditioning method is proposed that uses a flow-data feedback mechanism to simulate facies models from a prior TI. The adaptive conditioning is implemented as a stochastic optimization algorithm that involves an initial exploration stage to find the promising regions of the search space, followed by a more focused search of the identified regions in the second stage. To guide the search strategy, a facies probability map that summarizes the common features of the accepted models in previous iterations is constructed to provide conditioning information about facies occurrence in each grid block. The constructed facies probability map is then incorporated as soft data into the single normal equation simulation (snesim) algorithm to generate a new candidate solution for the next iteration. As the optimization iterations progress, the initial facies probability map is gradually updated using the most recently accepted iterate. This conditioning process can be interpreted as a stochastic optimization algorithm with memory where the new models are proposed based on the history of the successful past iterations. The application of this adaptive conditioning approach is extended to the case where multiple training images are proposed as alternative geologic scenarios. 
The advantages and limitations of the proposed adaptive conditioning scheme are discussed and numerical experiments from fluvial channel formations are used to compare its performance with non-adaptive conditioning techniques.
