Similar Literature
20 similar documents found
1.
A fast Fourier transform (FFT) moving average (FFT-MA) method for generating Gaussian stochastic processes is derived. Using discrete Fourier transforms makes the calculations easy and fast, so that large random fields can be produced. On the other hand, the basic moving average framework allows us to uncouple the random numbers from the structural parameters (mean, variance, correlation length, ...), and also to draw the randomness components in the spatial domain. Such features impart great flexibility to the FFT-MA generator. For instance, changing only the random numbers gives distinct realizations all having the same covariance function. Similarly, several realizations can be built from the same random number set, but with different structural parameters. Integrating the FFT-MA generator into an optimization procedure provides a tool theoretically capable of determining, from dynamic data, both the random numbers identifying the Gaussian field and the structural parameters. Moreover, all or only some of the random numbers can be perturbed, so that realizations produced using the FFT-MA generator can be locally updated through an optimization process.
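The convolution idea at the heart of FFT-MA can be illustrated in a few lines of NumPy. The sketch below is a minimal, assumed 1D implementation (the Gaussian covariance model, padding factor, and parameter names are illustrative choices, not the authors' code): white noise is drawn in the spatial domain and convolved, via the FFT, with a kernel whose spectrum is the square root of the covariance spectrum.

```python
import numpy as np

def fft_ma_1d(n, dx, corr_len, sigma2, rng, pad=4):
    """Sketch of a 1D FFT-MA generator: convolve spatial white noise with a kernel
    whose self-convolution reproduces the target covariance (illustrative only)."""
    m = pad * n                                        # padding limits periodic wrap-around
    lag = dx * np.minimum(np.arange(m), m - np.arange(m))
    cov = sigma2 * np.exp(-(lag / corr_len) ** 2)      # assumed Gaussian covariance model
    spectrum = np.fft.fft(cov).real                    # covariance spectrum
    kernel_spec = np.sqrt(np.maximum(spectrum, 0.0))   # spectral square root = MA kernel
    z = rng.standard_normal(m)                         # random numbers, drawn in the spatial domain
    field = np.fft.ifft(kernel_spec * np.fft.fft(z)).real
    return field[:n]

rng = np.random.default_rng(0)
real_a = fft_ma_1d(512, 1.0, 20.0, 1.0, rng)   # new random numbers -> new realization
real_b = fft_ma_1d(512, 1.0, 40.0, 1.0, rng)   # new correlation length -> new structure, same frame
```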

2.
The generation over two-dimensional grids of normally distributed random fields conditioned on available data is often required in reservoir modeling and mining investigations. Such fields can be obtained from application of turning bands or spectral methods. However, both methods have limitations. First, they are only asymptotically exact, in that the ensemble of realizations has the required correlation structure only if enough harmonics are used in the spectral method, or enough lines are generated in the turning bands approach. Moreover, the spectral method requires fine tuning of process parameters. As for the turning bands method, it is essentially restricted to processes with stationary and radially symmetric correlation functions. Another approach, which has the advantage of being general and exact, is to use a Cholesky factorization of the covariance matrix representing the correlation between grid points. For fields of large size, however, the Cholesky factorization can be computationally prohibitive. In this paper, we show that if the data are stationary and generated over a grid with a regular mesh, the structure of the data covariance matrix can be exploited to significantly reduce the overall computational burden of conditional simulations based on matrix factorization techniques. A feature of this approach is its computational simplicity and suitability for parallel implementation.
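For a small regular grid, the matrix-factorization route reduces to a dense Cholesky factor of the grid covariance. The sketch below shows only that basic, unoptimized step (the exponential covariance model and grid size are assumptions), not the paper's exploitation of the covariance structure or the conditioning step.

```python
import numpy as np

# Dense Cholesky simulation of a stationary Gaussian field on a small regular grid.
nx = ny = 30
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
pts = np.column_stack([xs.ravel(), ys.ravel()])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)   # pairwise distances
corr_len, sigma2 = 8.0, 1.0
C = sigma2 * np.exp(-d / corr_len)                               # assumed exponential covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(C)))               # small jitter for numerical stability
rng = np.random.default_rng(1)
field = (L @ rng.standard_normal(len(C))).reshape(nx, ny)        # one unconditional realization
```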

3.
4.
Uncertainty quantification is typically accomplished by simulating multiple geological realizations, which can be very expensive computationally if the flow process is complicated and the models are highly resolved. Upscaling procedures can be applied to reduce computational demands, though it is essential that the resulting coarse-model predictions correspond to reference fine-scale solutions. In this work, we develop an ensemble level upscaling (EnLU) procedure for compositional systems, which enables the efficient generation of multiple coarse models for use in uncertainty quantification. We apply a newly developed global compositional upscaling method to provide coarse-scale parameters and functions for selected realizations. This global upscaling entails transmissibility and relative permeability upscaling, along with the computation of a-factors to capture component fluxes. Additional features include near-well upscaling for all coarse parameters and functions, and iteration on the a-factors, which is shown to improve accuracy. In the EnLU framework, this global upscaling is applied for only a few selected realizations. For 90% or more of the realizations, upscaled functions are assigned statistically based on quickly computed flow and permeability attributes. A sequential Gaussian co-simulation procedure is incorporated to provide coarse models that honor the spatial correlation structure of the upscaled properties. The resulting EnLU procedure is applied for multiple realizations of two-dimensional models, for both Gaussian and channelized permeability fields. Results demonstrate that EnLU provides P10, P50, and P90 results for phase and component production rates that are in close agreement with reference fine-scale results. Less accuracy is observed in realization-by-realization comparisons, though the models are still much more accurate than those generated using standard coarsening procedures.

5.
In this paper, a stochastic collocation-based Kalman filter (SCKF) is developed to estimate the hydraulic conductivity from direct and indirect measurements. It combines the advantages of the ensemble Kalman filter (EnKF) for dynamic data assimilation and the polynomial chaos expansion (PCE) for efficient uncertainty quantification. In this approach, the random log hydraulic conductivity field is first parameterized by the Karhunen–Loève (KL) expansion and the hydraulic pressure is expressed by the PCE. The coefficients of the PCE are solved with a collocation technique. Realizations are constructed by choosing collocation point sets in the random space. The stochastic collocation method is non-intrusive in that these realizations are solved forward in time independently via an existing deterministic solver, as in the Monte Carlo method. The needed entries of the state covariance matrix are approximated with the coefficients of the PCE, which can be recovered from the collocation results. The system states are updated by updating the PCE coefficients. A 2D heterogeneous flow example is used to demonstrate the applicability of the SCKF with respect to different factors, such as initial guess, variance, correlation length, and the number of observations. The results are compared with those from the EnKF method. It is shown that the SCKF is computationally more efficient than the EnKF under certain conditions. Each approach has its own advantages and limitations. The performance of the SCKF decreases with larger variance, smaller correlation ratio, and fewer observations. Hence, the choice between the two methods is problem dependent. As a non-intrusive method, the SCKF can be easily extended to multiphase flow problems.
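The Karhunen–Loève parameterization step can be sketched in a few lines. The 1D grid, exponential covariance, and truncation level below are assumptions for illustration only; the PCE and collocation machinery of the SCKF is not shown.

```python
import numpy as np

# Truncated KL expansion of a log-conductivity field on a 1D grid (illustrative).
n = 200
x = np.linspace(0.0, 1.0, n)
corr_len, sigma2, mean_logk = 0.2, 1.0, 0.0
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # assumed covariance model
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
k_trunc = 20                                                        # retained KL terms (assumption)
lam, phi = eigval[order][:k_trunc], eigvec[:, order][:, :k_trunc]

def logk_realization(xi):
    """log K(x) = mean + sum_i sqrt(lambda_i) * phi_i(x) * xi_i, xi_i ~ N(0, 1)."""
    return mean_logk + phi @ (np.sqrt(lam) * xi)

rng = np.random.default_rng(2)
sample = logk_realization(rng.standard_normal(k_trunc))
```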

6.
To estimate the error covariance of an evolved state of a nonlinear dynamical system more accurately, the second- and higher-order moments of the prior error need to be known. Retrospective optimal interpolation (ROI) may require relatively less information on the higher-order moments of the prior errors than an ensemble Kalman filter (EnKF), because it uses the initial conditions as the background states instead of forecasts. Analogous to the extension of the Kalman filter into the EnKF, an ensemble retrospective optimal interpolation (EnROI) technique was derived from ROI using the Monte Carlo method. In contrast to the deterministic version of ROI, the background error covariance is represented by a background ensemble in EnROI. By sequentially applying EnROI to a moving limited analysis window and exploiting the forecast from the average of the background ensemble of EnROI as a guess field, the computational cost of EnROI can be reduced. In numerical experiments using the Lorenz-96 model and Lorenz's Model III under a perfect-model assumption, the cost-effectiveness of the suboptimal version of EnROI is demonstrated to be superior to that of the EnKF using perturbed observations.
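For reference, the "EnKF using perturbed observations" that EnROI is compared against performs an analysis step of roughly the following form. This generic sketch (linear observation operator H, explicit matrix inversion) is an assumption for illustration, not the EnROI algorithm itself.

```python
import numpy as np

def enkf_update(E, H, y, R, rng):
    """Stochastic EnKF analysis with perturbed observations.
    E: state ensemble (n_state, n_ens); H: obs operator; y: obs vector; R: obs covariance."""
    n = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)              # state anomalies
    HE = H @ E
    HA = HE - HE.mean(axis=1, keepdims=True)           # predicted-observation anomalies
    Pxy = A @ HA.T / (n - 1)
    Pyy = HA @ HA.T / (n - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                       # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T  # perturbed obs
    return E + K @ (Y - HE)
```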

7.
In this paper, we discuss several possible approaches to improving the performance of the ensemble Kalman filter (EnKF) through improved sampling of the initial ensemble. Each of the approaches addresses a different limitation of the standard method. All methods, however, attempt to make the results from a small ensemble as reliable as possible. The validity and usefulness of each method for creating the initial ensemble are judged on three criteria: (1) does the sampling result in unbiased Monte Carlo estimates for nonlinear flow problems, (2) does the sampling reduce the variability of estimates compared to ensembles of realizations from the prior, and (3) does the sampling improve the performance of the EnKF? In general, we conclude that the use of dominant eigenvectors ensures the orthogonality of the generated realizations, but results in biased forecasts of the fractional flow of water. We show that the addition of high frequencies from the remaining eigenvectors can be used to remove the bias without affecting the orthogonality of the realizations, but the method did not perform significantly better than standard Monte Carlo sampling. It was possible to identify an appropriate importance weighting to reduce the variance in estimates of the fractional flow of water, but it does not appear to be possible to use the importance-weighted realizations in the standard EnKF when the data relationship is nonlinear. The biggest improvement came from the use of pseudo-data with corrections to the variance of the actual observations.

8.
Assessment of uncertainty in the performance of fluvial reservoirs often requires the ability to generate realizations of channel sands that are conditional to well observations. For channels with low sinuosity this problem has been effectively solved. When the sinuosity is large, however, the standard stochastic models for fluvial reservoirs are not valid, because the deviation of the channel from a principal direction line is multivalued. In this paper, I show how the method of randomized maximum likelihood can be used to generate conditional realizations of channels with large sinuosity. In one example, a Gaussian random field model is used to generate an unconditional realization of a channel with large sinuosity, and this realization is then conditioned to well observations. Channels generated in the second approach are less realistic, but may be sufficient for modeling reservoir connectivity in a realistic way. In the second example, an unconditional realization of a channel is generated by a complex geologic model with random forcing. It is then adjusted in a meaningful way to honor well observations. The key feature in the solution is the use of channel direction instead of channel deviation as the characteristic random function describing the geometry of the channel.

9.
The importance of time-series analysis in cyclic stratigraphy is evaluated by comparing three different methods (adaptive multiple-taper spectral analysis, auto-/cross-correlation analysis, and cova functions) applied to stratigraphic time series from the Early Cretaceous Cismon section in northern Italy. Carbonate content and magnetic susceptibility vary in a quasi-cyclic fashion in this pelagic limestone section and are almost perfectly negatively correlated. The spectral technique requires a high degree of preprocessing of the original data (interpolation and resampling at a regular interval, filtering, inversion), which introduces smoothing and rounding errors. The statistical correlation analysis also requires evenly and (for cross-correlation) correspondingly spaced series. The geostatistical cova functions (a generalization of the cross-variogram) prove to be the most versatile and robust of the methods compared. Cova functions can be calculated from unevenly and noncorrespondingly spaced time series without any preprocessing. This method also retains relatively more of the signal when noise and extreme outliers obscure the picture. The periodicities detected in the Cismon time series fall in the range of Milankovitch cycles. Cycle periods of 45 cm and 80 cm likely correspond to the dominant precession and obliquity cycles. Due to the inaccuracy of the Cretaceous time scale, periods cannot yet be matched exactly, but cycle ratios are close to the expected ratios, so there is great potential for future cyclostratigraphic work to contribute to a substantial improvement of the geologic time scale.
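The appeal of estimators that work directly on unevenly and noncorrespondingly spaced series can be illustrated with a crude lag-binned cross-covariance. This is a generic stand-in sketch, not the authors' cova-function definition.

```python
import numpy as np

def binned_cross_cov(t1, v1, t2, v2, lag_edges):
    """Lag-binned cross-covariance between two series sampled at irregular,
    non-corresponding positions t1 and t2 (no interpolation or resampling needed)."""
    v1c, v2c = v1 - v1.mean(), v2 - v2.mean()
    dt = np.abs(t1[:, None] - t2[None, :])          # all pairwise lags
    prod = v1c[:, None] * v2c[None, :]              # all pairwise products
    out = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (dt >= lo) & (dt < hi)
        out.append(prod[mask].mean() if mask.any() else np.nan)
    return np.array(out)
```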

10.
Ensemble methods present a practical framework for parameter estimation, performance prediction, and uncertainty quantification in subsurface flow and transport modeling. In particular, the ensemble Kalman filter (EnKF) has received significant attention for its promising performance in calibrating heterogeneous subsurface flow models. Since an ensemble of model realizations is used to compute the statistical moments needed to perform the EnKF updates, large ensemble sizes are needed to provide accurate updates and uncertainty assessment. However, for realistic problems that involve large-scale models with computationally demanding flow simulation runs, the EnKF implementation is limited to small-sized ensembles. As a result, spurious numerical correlations can develop and lead to inaccurate EnKF updates, which tend to underestimate or even eliminate the ensemble spread. Ad hoc practical remedies, such as localization, local analysis, and covariance inflation schemes, have been developed and applied to reduce the effect of sampling errors due to small ensemble sizes. In this paper, a fast linear approximate forecast method is proposed as an alternative approach to enable the use of large ensemble sizes in operational settings, thereby providing improved sample statistics and EnKF updates. The proposed method first clusters a large number of initial geologic model realizations into a small number of groups. A representative member from each group is used to run a full forward flow simulation. The flow predictions for the remaining realizations in each group are approximated by a linearization around the full simulation results of the representative model (centroid) of the respective cluster. The linearization can be performed using either adjoint-based or ensemble-based gradients. Results from several numerical experiments with two-phase and three-phase flow systems in this paper suggest that the proposed method can be applied to improve the EnKF performance in large-scale problems where the number of full simulations is constrained.
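A bare-bones version of the cluster-then-linearize forecast idea might look as follows. Here `full_simulation` and `forecast_gradient` are hypothetical placeholders for the flow simulator and for an adjoint- or ensemble-based gradient, and the clustering choice (k-means on the model parameter vectors) is an assumption, not the paper's exact workflow.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def approximate_forecasts(models, k, full_simulation, forecast_gradient):
    """models: (n_models, n_params) array. Returns one forecast vector per model,
    using one expensive simulation per cluster plus a first-order expansion."""
    centroids, labels = kmeans2(models, k, minit="points")
    forecasts = [None] * len(models)
    for c in range(k):
        d_c = full_simulation(centroids[c])        # one full flow simulation per cluster
        G = forecast_gradient(centroids[c])        # (n_forecast, n_params) sensitivity matrix
        for i in np.where(labels == c)[0]:
            forecasts[i] = d_c + G @ (models[i] - centroids[c])   # linearized forecast
    return np.array(forecasts)
```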

11.
12.
We present a method of using classical wavelet-based multiresolution analysis to separate scales in model and observations during data assimilation with the ensemble Kalman filter. In many applications, the underlying physics of a phenomenon involves the interaction of features at multiple scales. Blending of observational and model error across scales can result in large forecast inaccuracies, since large errors at one scale are interpreted as inexact data at all scales due to the misrepresentation of observational error. Our method uses a partitioning of the range of the observation operator into separate observation scales. This naturally induces a transformation of the observation covariance, and we put forward several algorithms to efficiently compute the transformed covariance. Another advantage of our multiresolution ensemble Kalman filter is that scales can be weighted independently to adjust each scale's effect on the forecast. To demonstrate feasibility, we present applications to a one-dimensional Kuramoto–Sivashinsky (K–S) model with scale-dependent observation noise and an application involving the forecasting of solar photospheric flux. The solar flux application uses the Air Force Data Assimilative Photospheric Transport (ADAPT) model, which has model and observation error exhibiting strong scale dependence. Results using our multiresolution ensemble Kalman filter show significant improvement in solar forecast error compared to traditional ensemble Kalman filtering.
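The scale-separation idea can be illustrated with the simplest possible split, a one-level Haar transform of the observation vector, after which each scale can carry its own error variance and weight. This toy sketch is not the ADAPT or K–S implementation described in the paper.

```python
import numpy as np

def haar_split(obs):
    """One-level Haar decomposition: coarse = pairwise means, detail = pairwise half-differences.
    Assumes an even-length observation vector."""
    coarse = 0.5 * (obs[0::2] + obs[1::2])
    detail = 0.5 * (obs[0::2] - obs[1::2])
    return coarse, detail

rng = np.random.default_rng(4)
y = np.sin(np.linspace(0.0, 4.0 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
y_coarse, y_detail = haar_split(y)
sigma_coarse, sigma_detail = 0.05, 0.2      # assumed scale-dependent observation error levels
```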

13.
Multi-scale issues in eco-geographical modelling
Based on an analysis of what eco-geographical modelling entails, this paper discusses the problems of scale transformation, cross-scale interactions, the coupling of spatial and temporal scales, and multi-scale data processing in eco-geographical modelling. Because of the nonlinearity of eco-geographical problems, the heterogeneity of ecological environments, and random events, simple linear scale-transformation methods fall far short of the requirements of eco-geographical modelling. To fundamentally resolve the spatio-temporal scale problems in eco-geographical modelling, classical approaches such as differential geometry and hierarchy theory must be complemented by modern theories and techniques such as lattice-generation methods and grid computing.

14.
The transfer function of time-dependent models is classically inferred by ordinary least squares (OLS) techniques. The OLS technique assumes that the residuals are independent in time. In practical cases, however, this hypothesis is often not justified, producing inefficient estimates of the transfer function. When the residuals constitute an autoregressive process, we propose to apply the Box-Jenkins method to model the residuals, and to modify the primary convolution equation in a simple manner. A multivariate regression technique is then used to infer the transfer function of the new equation, producing time-independent residuals. This three-step autoregressive deconvolution technique is particularly efficient for time-series analysis; the reconstitution and forecasting of real data are efficiently improved. Theoretically, the proposed method can be extended to convolution equations whose residuals follow a moving average or an autoregressive–moving average process, but the mathematical formulation is no longer direct and explicit. For this general case, we propose to approximate the moving average or autoregressive–moving average process by an autoregressive process of sufficient order, and then to infer the transfer function as before. Two case studies in hydrogeology are used to illustrate the procedure.
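A compact sketch of the three-step idea, under simplifying assumptions (a finite impulse-response transfer function of length p and an AR(1) residual model; real cases may need a higher autoregressive order): fit by OLS, model the residual autocorrelation, quasi-difference the convolution equation, and refit.

```python
import numpy as np

def ols_transfer(x, y, p):
    """OLS fit of y_t ≈ sum_k h_k x_{t-k} for k = 0..p-1; returns coefficients and residuals."""
    X = np.column_stack([x[p - k - 1: len(x) - k] for k in range(p)])   # lagged-input design matrix
    Y = y[p - 1:]
    h, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return h, Y - X @ h, X, Y

def ar1_deconvolution(x, y, p):
    """Three-step autoregressive deconvolution with an AR(1) residual model (sketch)."""
    h0, resid, X, Y = ols_transfer(x, y, p)
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # AR(1) coefficient of the residuals
    Xs = X[1:] - rho * X[:-1]                           # quasi-differenced (pre-whitened) equation
    Ys = Y[1:] - rho * Y[:-1]
    h1, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)
    return h1, rho
```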

15.
16.
Geologic uncertainties and limited well data often render recovery forecasting a difficult undertaking in typical appraisal and early development settings. Recent advances in geologic modeling algorithms permit automation of the model generation process via macros and geostatistical tools. This allows rapid construction of multiple alternative geologic realizations. Despite the advances in geologic modeling, computation of the reservoir dynamic response via full-physics reservoir simulation remains a computationally expensive task. Therefore, only a few of the many probable realizations are simulated in practice. Experimental design techniques typically focus on a few discrete geologic realizations, as they are inherently more suitable for continuous engineering parameters and can only crudely approximate the impact of geology. A flow-based pattern recognition algorithm (FPRA) has been developed for quantifying the forecast uncertainty as an alternative. The proposed algorithm relies on the rapid characterization of the geologic uncertainty space represented by an ensemble of sufficiently diverse static model realizations. FPRA characterizes the geologic uncertainty space by calculating connectivity distances, which quantify how different each individual realization is from all others in terms of recovery response. Fast streamline simulations are employed in evaluating these distances. By applying pattern recognition techniques to the connectivity distances, a few representative realizations are identified within the model ensemble for full-physics simulation. In turn, the recovery factor probability distribution is derived from these intelligently selected simulation runs. Here, FPRA is tested on an example case where the objective is to accurately compute the recovery factor statistics as a function of geologic uncertainty in a channelized turbidite reservoir. Recovery factor cumulative distribution functions computed by FPRA compare well with those computed via exhaustive full-physics simulations.
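The selection step can be sketched as below, assuming the pairwise connectivity-distance matrix D has already been computed from fast streamline runs; the greedy farthest-point rule here is a generic stand-in for the paper's pattern-recognition step.

```python
import numpy as np

def select_representatives(D, k):
    """Pick k representative realizations from a pairwise distance matrix D (n x n):
    start from the most central model, then repeatedly add the model farthest from
    the already-selected set."""
    chosen = [int(np.argmin(D.sum(axis=1)))]          # most central realization
    while len(chosen) < k:
        dist_to_set = D[:, chosen].min(axis=1)        # distance of each model to the selected set
        chosen.append(int(np.argmax(dist_to_set)))    # farthest-point refinement
    return chosen
```

The selected indices are then the only realizations passed to full-physics simulation, and the recovery-factor distribution is assembled from those runs.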

17.
Distance-based stochastic techniques have recently emerged in the context of ensemble modeling, in particular for history matching, model selection and uncertainty quantification. Starting with an initial ensemble of realizations, a distance between any two models is defined. This distance is defined such that the objective of the study is incorporated into the geological modeling process, thereby potentially enhancing the efficacy of the overall workflow. If the intent is to create new models that are constrained to dynamic data (history matching), the calculation of the distance requires flow simulation for each model in the initial ensemble. This can be very time consuming, especially for high-resolution models. In this paper, we present a multi-resolution framework for ensemble modeling. A distance-based procedure is employed, with emphasis on the rapid construction of multiple models that have improved dynamic data conditioning. Our intent is to construct new high-resolution models constrained to dynamic data, while performing most of the flow simulations only on upscaled models. An error modeling procedure is introduced into the distance calculations to account for potential errors in the upscaling. Based on a few fine-scale flow simulations, the upscaling error is estimated for each model using a clustering technique. We demonstrate the efficiency of the method on two examples, one where the upscaling error is small, and another where the upscaling error is significant. Results show that the error modeling procedure can accurately capture the error in upscaling, and can thus reproduce the fine-scale flow behavior from coarse-scale simulations with sufficient accuracy (in terms of uncertainty predictions). As a consequence, an ensemble of high-resolution models, which are constrained to dynamic data, can be obtained, but with a minimum of flow simulations at the fine scale.

18.
Two methods for generating representative realizations from Gaussian and lognormal random field models are studied in this paper, with the term representative implying realizations that efficiently span the range of possible attribute values corresponding to the multivariate (log)normal probability distribution. The first method, already established in the geostatistical literature, is multivariate Latin hypercube sampling, a form of stratified random sampling aiming at marginal stratification of simulated values for each variable involved under the constraint of reproducing a known covariance matrix. The second method, scarcely known in the geostatistical literature, is stratified likelihood sampling, in which representative realizations are generated by exploring in a systematic way the structure of the multivariate distribution function itself. The two sampling methods are employed for generating unconditional realizations of saturated hydraulic conductivity in a hydrogeological context via a synthetic case study involving physically based simulation of flow and transport in a heterogeneous porous medium; their performance is evaluated for different sample sizes (numbers of realizations) in terms of the reproduction of ensemble statistics of hydraulic conductivity and solute concentration computed from a very large ensemble set generated via simple random sampling. The results show that both Latin hypercube and stratified likelihood sampling are more efficient than simple random sampling, in that overall they can reproduce statistics of the conductivity and concentration fields to a similar extent, yet with smaller sampling variability than simple random sampling.
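A sketch of multivariate Latin hypercube sampling for a Gaussian vector: stratify each marginal into n equiprobable slices, then impose the rank order of a correlated Gaussian sample. The Iman-Conover-style restructuring below is an assumed implementation, not necessarily the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import norm

def lhs_gaussian(C, n, rng):
    """n Latin-hypercube samples of a zero-mean Gaussian vector with covariance C:
    marginally stratified values, reordered to follow the ranks of a correlated draw."""
    d = C.shape[0]
    L = np.linalg.cholesky(C)
    base = (L @ rng.standard_normal((d, n))).T                 # (n, d) correlated reference sample
    out = np.empty_like(base)
    for j in range(d):
        strata = norm.ppf((np.arange(n) + rng.uniform(size=n)) / n)   # one value per probability slice
        ranks = base[:, j].argsort().argsort()                  # rank of each reference draw
        out[:, j] = np.sort(strata)[ranks]                      # stratified values in the same rank order
    return out

rng = np.random.default_rng(5)
C = np.array([[1.0, 0.7], [0.7, 1.0]])
samples = lhs_gaussian(C, 50, rng)
```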

19.
A hierarchical scale-up framework is formulated to study the scaling characteristics of reservoir attributes and input dispersivities at the transport modeling scale, where the heterogeneity distribution exhibits both non-stationarity (trend) and sub-scale variability. The proposed method is flexible enough to handle heterogeneities occurring at multiple scales, without any explicit assumption regarding the multivariate distribution of the heterogeneity. This paper extends our previous work by incorporating the effects of non-stationarity into the modeling workflow. The rock property at a given location is modeled as a random variable, which is decomposed into the sum of a trend (available at the same resolution as the transport modeling scale) and a residual component (defined at a much smaller scale). First, to scale up the residual component to the transport modeling scale, the corresponding volume variance is computed; by sampling numerous sets of "conditioning data" via bootstrapping and constructing multiple realizations of the residual components at the transport modeling scale, the uncertainty due to this scale-up process is captured. Next, to compute the input dispersivity at the transport modeling scale, a flow-based technique is adopted: multiple geostatistical realizations of the same physical size as the transport modeling scale are generated to describe the spatial heterogeneity below the modeling scale. Each realization is subjected to particle-tracking simulation. Effective longitudinal and transverse dispersivities are estimated by minimizing the difference between the effluent history of each realization and that of an equivalent average medium. Probability distributions of the effective dispersivities are established by aggregating results from all realizations. The results demonstrate that large-scale non-stationarity and sub-scale variability both contribute to anomalous non-Fickian behavior. In comparison with our previous work, which ignored large-scale non-stationarity, the non-Fickian characteristics observed in this study are dramatically more pronounced.
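The "volume variance" used to scale up the residual component can be illustrated with the standard dispersion-variance identity: the variance of a block average equals the mean of the point covariance over all point pairs inside the block. The 1D block discretization and exponential covariance below are assumptions for illustration.

```python
import numpy as np

def block_average_variance(block_len, n_sub, corr_len, sigma2):
    """Variance of the block-averaged residual = mean of the point covariance over
    all pairs of points inside the block (1D illustration)."""
    x = np.linspace(0.0, block_len, n_sub)
    C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # assumed point covariance
    return C.mean()

print(block_average_variance(block_len=10.0, n_sub=50, corr_len=3.0, sigma2=1.0))
```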

20.
This paper describes a new method for gradually deforming realizations of Gaussian-related stochastic models while preserving their spatial variability. The method consists in building a stochastic process whose state space is the ensemble of realizations of a spatial stochastic model. In particular, a stochastic process built by combining independent Gaussian random functions is proposed to perform the gradual deformation of realizations. The gradual deformation algorithm is then coupled with an optimization algorithm to calibrate realizations of stochastic models to nonlinear data. The method is applied to calibrate a continuous and a discrete synthetic permeability field to well-test pressure data. The examples illustrate the efficiency of the proposed method. Furthermore, we present some extensions of this method (multidimensional gradual deformation, gradual deformation with respect to structural parameters, and local gradual deformation) that are useful in practice. Although the method described in this paper is operational only in the Gaussian framework (e.g., the lognormal model, the truncated Gaussian model, etc.), the idea of gradually deforming realizations through a stochastic process remains general and therefore promising even for calibrating non-Gaussian models.
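The basic one-parameter combination rule behind such a deformation process can be sketched in a few lines: mixing two independent Gaussian realizations with weights cos(πt) and sin(πt) yields, for every t, a realization with the same mean and covariance, so t becomes a single continuous parameter handed to the calibration optimizer. This sketch uses white noise for brevity; in practice y1 and y2 would be realizations carrying the target covariance model, and the paper's multidimensional and local variants are not shown.

```python
import numpy as np

def gradual_deformation(y1, y2, t):
    """Combine two independent Gaussian realizations; cos^2 + sin^2 = 1 preserves
    the covariance for any deformation parameter t."""
    return np.cos(np.pi * t) * y1 + np.sin(np.pi * t) * y2

rng = np.random.default_rng(3)
y1, y2 = rng.standard_normal(1000), rng.standard_normal(1000)
# e.g. scan t in [-1, 1] and keep the value minimizing a data-mismatch objective
chain = [gradual_deformation(y1, y2, t) for t in np.linspace(-1.0, 1.0, 21)]
```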
