Similar Documents
20 similar documents found.
1.
2.
Performing a line search in the direction given by the simplex gradient is a well-known approach in the mathematical optimization community. For reservoir engineering optimization problems, both a modification of simultaneous perturbation stochastic approximation (SPSA) and ensemble-based optimization (EnOpt) have recently been applied to estimate optimal well controls in the production optimization step of closed-loop reservoir management. The modified SPSA algorithm has also been applied to assisted history-matching problems. A recent comparison of EnOpt and an SPSA-type algorithm (G-SPSA) on a set of production optimization test problems showed that the two algorithms produced similar estimates of the optimal net present value and required roughly the same computational time to do so. Here, we show that, theoretically, this result is not surprising. In fact, we show that the simplex, preconditioned simplex, and EnOpt algorithms can all be derived directly from a modified SPSA-type algorithm, where the preconditioned simplex algorithm is presented for the first time in this paper. We also show that the expectation of each of these preconditioned stochastic gradients is a first-order approximation of either the preconditioning covariance matrix times the true gradient or the covariance matrix squared times the true gradient.
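To make the link between EnOpt and the preconditioned gradient concrete, the sketch below estimates the cross-covariance between perturbed controls and objective values, whose expectation is, to first order, the preconditioning covariance matrix times the true gradient. It is a minimal illustration rather than the paper's derivation; the objective `J` stands in for a hypothetical NPV evaluation from a reservoir simulator.

```python
import numpy as np

def enopt_gradient(J, u, C, n_ens=50, rng=None):
    """Ensemble (EnOpt-style) estimate of the preconditioned gradient C @ dJ/du.

    J : objective function, e.g. NPV returned by a reservoir simulator (assumed)
    u : current control vector
    C : covariance used both to draw perturbations and as the preconditioner
    """
    rng = np.random.default_rng() if rng is None else rng
    U = rng.multivariate_normal(u, C, size=n_ens)      # perturbed control vectors
    j = np.array([J(ui) for ui in U])                   # corresponding objective values
    dU = U - U.mean(axis=0)
    dj = j - j.mean()
    # sample cross-covariance between controls and objective values;
    # to first order its expectation equals C @ grad J(u)
    return dU.T @ dj / (n_ens - 1)

# a steepest-ascent control update would then be u_new = u + step * g_hat
```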

3.
In a previous paper, we developed a theoretical basis for parameterization of reservoir model parameters based on truncated singular value decomposition (SVD) of the dimensionless sensitivity matrix. Two gradient-based algorithms based on truncated SVD were developed for history matching. In general, the best of these "SVD" algorithms requires roughly half the number of equivalent reservoir simulation runs required by the limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) algorithm. In this work, we show that when combining SVD parameterization with the randomized maximum likelihood method, we can achieve significant additional computational savings by history matching all models simultaneously using an SVD parameterization based on a particular sensitivity matrix at each iteration. We present two new algorithms based on this idea: one that relies only on updating the SVD parameterization at each iteration, and one that adds an inner iteration based on an adjoint gradient, during which the truncated SVD parameterization is held fixed. Results generated with our algorithms are compared with results obtained from the ensemble Kalman filter (EnKF). Finally, we show that by combining EnKF with the SVD algorithm, we can improve the reliability of EnKF estimates.
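As a rough illustration of the parameterization idea (not the paper's specific update formulas), the sketch below keeps only the leading right singular vectors of a dimensionless sensitivity matrix and adjusts a reduced coefficient vector instead of every grid-block value; the prior square root `C_M_sqrt` and the energy threshold are assumptions of this sketch.

```python
import numpy as np

def truncated_svd_basis(S_d, energy=0.99):
    """Leading right singular vectors of the dimensionless sensitivity matrix S_d,
    retained until `energy` of the cumulative squared singular values is captured."""
    U, s, Vt = np.linalg.svd(S_d, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return Vt[:k].T, s[:k]           # (n_params, k) basis and its singular values

# the model is then history matched in the reduced space
#   m = m_prior + C_M_sqrt @ (V_k @ alpha),
# so only the k coefficients in alpha are adjusted at each iteration
```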

4.
The amount of hydrocarbon recovered can be considerably increased by finding optimal placements of non-conventional wells. For that purpose, optimization algorithms in which the objective function is evaluated using a reservoir simulator are needed. Furthermore, for complex reservoir geologies with strong heterogeneities, the optimization problem requires algorithms able to cope with the non-regularity of the objective function. In this paper, we propose an optimization methodology for determining optimal well locations and trajectories based on the covariance matrix adaptation evolution strategy (CMA-ES), which is recognized as one of the most powerful derivative-free optimizers for continuous optimization. In addition, to improve the optimization procedure, two new techniques are proposed: (a) adaptive penalization with rejection in order to handle well placement constraints and (b) incorporation of a meta-model, based on locally weighted regression, into CMA-ES, using an approximate stochastic ranking procedure, in order to reduce the number of reservoir simulations required to evaluate the objective function. The approach is applied to the PUNQ-S3 case and compared with a genetic algorithm (GA) incorporating the Genocop III technique for handling constraints. To allow a fair comparison, both algorithms are used without parameter tuning on the problem: standard settings are used for the GA and default settings for CMA-ES. It is shown that our new approach outperforms the genetic algorithm: it generally leads to both a higher net present value and a significant reduction in the number of reservoir simulations needed to reach a good well configuration. Moreover, coupling CMA-ES with a meta-model leads to further improvement, which was around 20% for the synthetic case in this study.
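A minimal sketch of the outer CMA-ES loop with a penalized objective is given below, assuming the open-source `cma` (pycma) package; the NPV function is a toy analytic stand-in rather than a reservoir simulator, and the box constraint and fixed penalty are placeholders, not the paper's adaptive penalization-with-rejection scheme or its meta-model.

```python
import numpy as np
import cma   # third-party pycma package

def npv_proxy(x):
    """Toy stand-in for the simulated net present value of a well configuration."""
    x = np.asarray(x)
    return float(np.sum(np.sin(3 * np.pi * x) ** 2) - np.sum((x - 0.5) ** 2))

def penalized_loss(x, penalty=1e6):
    x = np.asarray(x)
    if np.any(x < 0.0) or np.any(x > 1.0):   # assumed box constraints on well parameters
        return penalty                        # infeasible candidates are heavily penalized
    return -npv_proxy(x)                      # CMA-ES minimizes, so negate the NPV

es = cma.CMAEvolutionStrategy([0.5] * 6, 0.2)   # 6 well parameters, initial step size 0.2
while not es.stop():
    candidates = es.ask()                       # sample a new population
    es.tell(candidates, [penalized_loss(c) for c in candidates])
print(es.result.xbest)                          # best well configuration found
```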

5.
Constraining stochastic models of reservoir properties such as porosity and permeability can be formulated as an optimization problem. While an optimization based on random search methods preserves the spatial variability of the stochastic model, it is prohibitively computer-intensive. In contrast, gradient search methods can be very efficient but do not preserve the spatial variability of the stochastic model. The gradual deformation method allows a reservoir model (i.e., a realization of the stochastic model) to be modified using a small number of parameters while preserving its spatial variability. It can be considered a first step towards the merger of random and gradient search methods. The gradual deformation method yields chains of reservoir models that can be investigated successively to identify an optimal reservoir model. The investigation of each chain is based on gradient computations, but the building of chains of reservoir models is random. In this paper, we propose an algorithm that further improves the efficiency of the gradual deformation method. Contrary to the previous gradual deformation method, we also use gradient information to build chains of reservoir models. The idea is to combine the initial reservoir model, or the previously optimized reservoir model, with a compound reservoir model. This compound model is a linear combination of a set of independent reservoir models. The combination coefficients are calculated so that the search direction from the initial model is as close as possible to the gradient search direction. This new gradual deformation scheme allows us to reduce the number of optimization parameters while selecting an optimal search direction. The numerical example compares the performance of the new gradual deformation scheme with that of the traditional one.
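For readers unfamiliar with gradual deformation, the basic combination rule is sketched below; it is a minimal illustration of the classical scheme, not the gradient-guided chain construction proposed in the paper.

```python
import numpy as np

def gradual_deformation(m1, m2, t):
    """Combine two independent Gaussian realizations into a new realization.

    Because cos^2(pi*t) + sin^2(pi*t) = 1, the combination keeps the mean and
    covariance of the prior model for every t, so spatial variability is
    preserved while only the single scalar t is optimized against the data.
    """
    return np.cos(np.pi * t) * m1 + np.sin(np.pi * t) * m2

# a chain is explored by optimizing t; the optimum then replaces m1 and a fresh
# independent realization m2 starts the next chain
```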

6.
The decrease of density contrast with depth in sedimentary strata is approximated by a quadratic function. The anomaly equation of a trapezoidal model with the quadratic density function is derived. A nonlinear optimization technique using the Marquardt algorithm has been developed and used to interpret a synthetic anomaly profile of the trapezoidal model. The exact values of the coefficients of the quadratic density function are assumed to be known. The convergence of the method is shown by plotting the values of the objective function λ and of the various parameters against the iteration number. The angle θ and the half-width of the trapezoidal model are found to be correlated. The method is also applied to interpret the gravity anomalies over the San Jacinto graben, California. Finally, the use of modelling with a quadratic density function is discussed.
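For illustration, a compact Marquardt (damped Gauss-Newton) loop with a numerical Jacobian is sketched below; the trapezoid gravity forward model itself is not reproduced, so `residual` is a hypothetical stand-in that should return observed-minus-computed anomalies for a given parameter vector (e.g., θ and half-width).

```python
import numpy as np

def marquardt(residual, p0, lam=1e-2, n_iter=50, tol=1e-8):
    """Minimal Levenberg-Marquardt iteration with a forward-difference Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (residual(p + dp) - r) / dp[j]
        # damped normal equations: (J'J + lam*I) step = -J'r
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam / 10.0    # accept the step, relax the damping
        else:
            lam *= 10.0                      # reject the step, increase the damping
        if np.linalg.norm(step) < tol:
            break
    return p
```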

7.
The prediction of fluid flows within hydrocarbon reservoirs requires the characterization of petrophysical properties. Such characterization is performed on the basis of geostatistics and history matching; in short, a reservoir model is first randomly drawn and then sequentially adjusted until it reproduces the available dynamic data. Two main concerns typical of the problem under consideration are the heterogeneity of rocks occurring at all scales and the use of data of distinct resolution levels. Therefore, referring to sequential Gaussian simulation, this paper proposes a new stochastic simulation method able to handle several scales for both continuous and discrete random fields. This method adds flexibility to history matching as it boils down to a multiscale parameterization of reservoir models. In other words, reservoir models can be updated at coarse scales, fine scales, or both. The parameterization adapts to the available data: the coarser the targeted scale, the smaller the number of unknown parameters, and the more efficient the history-matching process. This paper focuses on the use of variational optimization techniques driven by the gradual deformation method to vary reservoir models. Other data assimilation methods and perturbation processes could have been envisioned as well. Finally, a numerical application case is presented in order to highlight the advantages of the proposed method for conditioning permeability models to dynamic data. For simplicity, we focus on two-scale processes. The coarse scale describes the variations in the trend, while the fine scale characterizes local variations around the trend. The relationships between data resolution and parameterization are investigated.
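A minimal two-scale parameterization sketch is given below; it only illustrates the coarse-trend-plus-fine-residual decomposition with a piecewise-constant upscaling operator, not the paper's multiscale sequential Gaussian simulation.

```python
import numpy as np

def two_scale_field(theta_coarse, m_fine, block=(4, 4)):
    """Compose a property field from coarse-scale trend parameters and
    fine-scale fluctuations around that trend.

    theta_coarse : (nc_x, nc_y) coarse parameters (few unknowns, cheap to match)
    m_fine       : (nc_x*block[0], nc_y*block[1]) local fluctuations
    """
    trend = np.kron(theta_coarse, np.ones(block))   # piecewise-constant downscaling
    return trend + m_fine

# history matching can then update theta_coarse only, m_fine only, or both,
# depending on the resolution of the available dynamic data
```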

8.
Randomized maximum likelihood is known in the petroleum reservoir community as a Bayesian history-matching technique that works by minimizing a stochastic quadratic objective function. The algorithm is well established and has shown promising results in several applications. For linear models with a linear observation operator, the algorithm samples the posterior density accurately. To improve the sampling for nonlinear models, we introduce a generalized version, in its simplest form, obtained by re-weighting the prior. The weight term is motivated by a sufficiency condition on the expected gradient of the objective function. Recently, an ensemble version of the algorithm was proposed that can be implemented with any simulator. Unfortunately, that method has some practical implementation issues due to the computation of low-rank pseudo-inverse matrices, and in practice only the data-mismatch part of the objective function is maintained. Here, we take advantage of the fact that the measurement space is often much smaller than the parameter space and project the prior uncertainty from the parameter space to the measurement space to avoid overfitting the data. The proposed algorithms show good performance on synthetic test cases, including a 2D reservoir model.
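To fix ideas, the sketch below draws one randomized-maximum-likelihood sample for the linear-Gaussian case, where minimizing the stochastic quadratic objective has a closed-form solution and the gain is computed in the (smaller) measurement space; this is a textbook illustration of the baseline method, not the generalized re-weighted version proposed in the paper.

```python
import numpy as np

def rml_sample_linear(G, m_pr, C_M, d_obs, C_D, rng):
    """One RML sample for a linear forward model d = G @ m.

    Minimizing
        O(m) = 0.5*(m - m*)' inv(C_M) (m - m*) + 0.5*(G m - d*)' inv(C_D) (G m - d*)
    with m* ~ N(m_pr, C_M) and d* ~ N(d_obs, C_D) gives the closed-form
    minimizer below, which is an exact posterior sample in the linear case.
    """
    m_star = rng.multivariate_normal(m_pr, C_M)
    d_star = rng.multivariate_normal(d_obs, C_D)
    K = C_M @ G.T @ np.linalg.inv(G @ C_M @ G.T + C_D)   # solved in measurement space
    return m_star + K @ (d_star - G @ m_star)
```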

9.
10.
Geostatistically based history-matching methods make it possible to devise history-matching strategies that will honor geologic knowledge about the reservoir. However, the performance of these methods is known to be impeded by slow convergence rates resulting from the stochastic nature of the algorithm. It is the purpose of this paper to introduce a method that integrates qualitative gradient information into the probability perturbation method to improve convergence. The potential of the proposed method is demonstrated on a synthetic history-matching example. The results indicate that inclusion of qualitative gradient information improves the performance of the probability perturbation method.

11.
We introduce a novel, time-dependent inversion scheme for resolving the temporal reservoir pressure drop from surface subsidence observations (from leveling or GPS data, InSAR, or tiltmeter monitoring) in a single procedure. The theory is able to accommodate both the absence of surface subsidence estimates at sites at one or more epochs and the introduction of new sites at any arbitrary epoch. Thus, all observation sites with measurements from at least two epochs are utilized. The method uses both the prior model covariance matrix and the data covariance matrix, which incorporate the spatial and temporal correlations of the model parameters and the data, respectively. The incorporation of the model covariance implicitly guarantees smoothness of the model estimate while maintaining specific geological features such as sharp boundaries. Taking these relations into account through the model covariance matrix enhances the influence of the data on the inverted model estimate. This leads to a better defined and more interpretable model estimate. The time-dependent aspect of the method yields a better constrained model estimate and makes it possible to identify non-linear acceleration or delay in reservoir compaction. The method is validated by a synthetic case study based on an existing gas reservoir with a highly variable transmissibility at the free water level. The prior model covariance matrix is based on a Monte Carlo simulation of the geological uncertainty in the transmissibility.
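The sketch below shows the generic linear-Gaussian estimate such a scheme rests on, with a spatio-temporal prior covariance assembled here (as an assumption of this sketch) as a Kronecker product of a temporal and a spatial covariance, so that all epochs are inverted in a single procedure.

```python
import numpy as np

def invert_pressure_drop(G, d, C_d, C_space, C_time, m_prior):
    """Posterior-mean estimate of reservoir pressure drop at all epochs.

    G       : forward operator mapping pressure drop to subsidence at all
              available site/epoch combinations
    C_d     : data covariance (spatial and temporal correlation of the noise)
    C_space, C_time : prior covariances; their Kronecker product couples the
              epochs, so every observation constrains every epoch's estimate
    """
    C_m = np.kron(C_time, C_space)
    K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
    return m_prior + K @ (d - G @ m_prior)
```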

12.
Rate of Convergence of the Gibbs Sampler in the Gaussian Case
We show that the Gibbs sampler in the Gaussian case is closely linked to linear fixed-point iterations. In fact, stochastic linear iterations converge toward a stationary distribution under the same conditions as the classical linear fixed-point ones. Furthermore, the covariance matrices are shown to satisfy a related fixed-point iteration; consequently, the Gibbs sampler in the Gaussian case corresponds to the classical Gauss-Seidel iterations on the inverse of the covariance matrix, and the stochastic over-relaxed Gauss-Seidel has the same limiting distribution as the Gibbs sampler. An efficient method to simulate a Gaussian vector is then proposed. Finally, numerical investigations are performed to understand the effect of different strategies such as the initial ordering, the blocking, and the updating order for the iterations. The results show that in a geostatistical context the rate of convergence can be improved significantly compared to the standard case.
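The correspondence can be seen in a minimal component-wise Gibbs sampler written in terms of the precision matrix (the inverse of the covariance matrix): each sweep is a Gauss-Seidel sweep plus noise. This is a generic textbook sketch, not the blocked or over-relaxed variants studied in the paper.

```python
import numpy as np

def gibbs_gaussian(Q, n_sweeps, rng, x0=None):
    """Component-wise Gibbs sampler targeting N(0, inv(Q)), with Q the precision matrix.

    The full conditional of component i is
        x_i | x_{-i} ~ N( -(1/Q_ii) * sum_{j != i} Q_ij x_j , 1/Q_ii ),
    so one sweep is exactly a Gauss-Seidel sweep on Q with added noise.
    """
    n = Q.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(n_sweeps):
        for i in range(n):
            cond_mean = -(Q[i] @ x - Q[i, i] * x[i]) / Q[i, i]
            x[i] = cond_mean + rng.standard_normal() / np.sqrt(Q[i, i])
    return x
```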

13.
In this paper we present an extension of the ensemble Kalman filter (EnKF) specifically designed for multimodal systems. The EnKF data assimilation scheme is less accurate when used to approximate systems with multimodal distributions, such as reservoir facies models. The algorithm is based on the assumption that both the prior and the posterior distributions can be approximated by Gaussian mixtures, and it is validated by introducing the concept of a finite ensemble representation. The effectiveness of the approach is shown with two applications. The first example is based on the Lorenz model. In the second example, the proposed methodology, combined with a localization technique, is used to update a 2D reservoir facies model. Both applications give evidence of an improved performance of the proposed method with respect to the EnKF.
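For reference, the single-Gaussian analysis step that the Gaussian-mixture extension generalizes is sketched below; this is the standard stochastic EnKF update, not the mixture algorithm of the paper.

```python
import numpy as np

def enkf_update(X, Y, d_obs, R, rng):
    """Stochastic EnKF analysis step.

    X     : (n_param, n_ens) forecast ensemble (e.g. facies indicators or properties)
    Y     : (n_obs, n_ens) predicted observations for each ensemble member
    d_obs : observed data; R is the observation-error covariance
    """
    n_ens = X.shape[1]
    D = d_obs[:, None] + rng.multivariate_normal(np.zeros(d_obs.size), R, n_ens).T
    Xp = X - X.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    C_xy = Xp @ Yp.T / (n_ens - 1)                 # cross-covariance state/obs
    C_yy = Yp @ Yp.T / (n_ens - 1)                 # predicted-obs covariance
    return X + C_xy @ np.linalg.solve(C_yy + R, D - Y)
```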

14.
In this paper, we describe a method of history matching in which changes to the reservoir model are constructed from a limited set of basis vectors. The purpose of this reparameterization is to reduce the cost of a Newton iteration without altering the final estimate of model parameters and without substantially slowing the rate of convergence. The utility of a subspace method depends on several factors, including the choice and number of the subspace vectors to be used. Computational gains in efficiency result partly from a reduction in the size of the matrix system that must be solved in a Newton iteration. More important contributions, however, result from a reduction in the number of sensitivity coefficients that must be computed, a reduction in the dimensions of the matrices that must be multiplied, and the elimination of matrix products involving the inverse of the prior model covariance matrix. These factors affect the efficiency of each Newton iteration. Although computation of the optimal set of subspace vectors may be expensive, we show that the rate of convergence and the final results are relatively insensitive to the choice of subspace vectors. We also show that it is desirable to start with a small number of subspace vectors and gradually increase the number at each Newton iteration until an acceptable level of data mismatch is obtained.
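The sketch below shows one common way such a subspace step can be written for a linearized problem, restricting the model change to delta_m = C_M @ B @ a for a step taken from the prior model; the prior term then involves B' C_M B rather than the inverse of C_M, and the linear system solved has only as many unknowns as there are subspace vectors. The specific basis choice and formulas of the paper may differ.

```python
import numpy as np

def subspace_gauss_newton_step(B, G, C_M, C_D_inv, d_resid):
    """Gauss-Newton model update restricted to the subspace delta_m = C_M @ B @ a.

    B        : (n_param, k) subspace vectors
    G        : (n_obs, n_param) sensitivity matrix at the prior model
    d_resid  : observed-minus-predicted data at the prior model
    """
    H = G @ C_M @ B                               # reduced sensitivities (n_obs x k)
    A = B.T @ C_M @ B + H.T @ C_D_inv @ H         # small (k x k) system, no inv(C_M)
    a = np.linalg.solve(A, H.T @ C_D_inv @ d_resid)
    return C_M @ B @ a                            # change applied to the model
```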

15.
The majority of geostatistical estimation and simulation algorithms rely on a covariance model as the sole characteristic of the spatial distribution of the attribute under study. The limitation to a single covariance implicitly calls for a multivariate Gaussian model for either the attribute itself or its normal-scores transform. The Gaussian model could be justified on the basis that it is both analytically simple and a maximum entropy model, i.e., a model that minimizes unwarranted structural properties. As a consequence, the Gaussian model also maximizes spatial disorder (beyond the imposed covariance), which can cause flow simulation results performed on multiple stochastic images to be very similar; thus, the space of response uncertainty could be too narrow, entailing a misleading sense of safety. The ability of the sole covariance to adequately describe spatial distributions for flow studies, and the assumption that maximum spatial disorder amounts to either no additional information or a safe prior hypothesis, are questioned. This paper attempts to clarify the link between entropy and spatial disorder and to provide, through a detailed case study, an appreciation for the impact of the entropy of prior random function models on the resulting response distributions.

16.

17.
徐冲  刘保国  刘开云  郭佳奇 《岩土力学》2011,32(6):1669-1675
Gaussian process regression (GPR) learning machines have the advantages of easy implementation, adaptive acquisition of hyperparameters, and prediction outputs with a probabilistic interpretation. The conjugate gradient method is commonly used to obtain the GPR hyperparameters, but its optimization result depends strongly on the initial values, the number of iterations is difficult to determine, and it easily becomes trapped in local optima. Here, the particle swarm optimization (PSO) algorithm is used instead to search for the optimal hyperparameters, yielding a coupled particle swarm-Gaussian process regression algorithm (PSO-GPR). The algorithm is applied to deformation time-series analysis of three typical landslides: the high slope of the Three Gorges permanent ship lock, the new Wolongsi landslide, and the Lianziya landslide. For each landslide, extrapolation forecasts are tested with three single stationary kernel functions: a novel neural network (NN) kernel, the squared exponential (SE) kernel, and the rational quadratic (RQ) kernel. The engineering applications show that the PSO-GPR algorithm based on each of the three single kernel functions adapts well to the time-series analysis of the different landslides, with the NN kernel giving the best extrapolation performance and average relative errors of 6.37%, 7.62%, and 1.07%, respectively. The approach thus mitigates the large differences in extrapolation capability that kernel machines with a single kernel function exhibit across different landslide deformation time series and improves the compatibility of a single kernel function with different data types.
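A self-contained sketch of the PSO-over-GPR-hyperparameters idea is given below, using a plain NumPy implementation of the GPR log marginal likelihood with a squared-exponential kernel (the NN and RQ kernels of the paper are not reproduced); the synthetic displacement series and all PSO settings are placeholders.

```python
import numpy as np

def gpr_neg_log_marginal_likelihood(theta, t, y):
    """Negative log marginal likelihood of a GPR model with an SE kernel.
    theta = log([signal_variance, length_scale, noise_variance])."""
    sf2, ell, sn2 = np.exp(theta)
    d2 = (t[:, None] - t[None, :]) ** 2
    K = sf2 * np.exp(-0.5 * d2 / ell ** 2) + sn2 * np.eye(t.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * t.size * np.log(2 * np.pi)

def pso_minimize(f, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm search over a box given by `bounds` = [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f_vals = np.array([f(p) for p in pos])
        better = f_vals < pbest_f
        pbest[better], pbest_f[better] = pos[better], f_vals[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# synthetic monthly displacement series standing in for real monitoring data
t = np.linspace(0.0, 1.0, 30)
y = 50.0 * t ** 2 + np.random.default_rng(1).normal(0.0, 0.5, t.size)
best_theta = pso_minimize(lambda th: gpr_neg_log_marginal_likelihood(th, t, y),
                          bounds=[(-2, 6), (-4, 2), (-6, 2)])
```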

18.
Application of EM algorithms for seismic facies classification
Identification of the geological facies and their distribution from seismic and other available geological information is important during the early stage of reservoir development (e.g., deciding on initial well locations). Traditionally, this is done by manually inspecting the signatures of the seismic attribute maps, which is very time-consuming. This paper proposes an application of the Expectation-Maximization (EM) algorithm to automatically identify geological facies from seismic data. While the properties within a given geological facies are relatively homogeneous, the properties between geological facies can be rather different. Assuming that the noisy seismic data of a geological facies, which reflect rock properties, can be approximated with a Gaussian distribution, the seismic data of a reservoir composed of several geological facies are samples from a Gaussian mixture model. The mean of each Gaussian component represents the average value of the seismic data within a facies, while the variance gives the variation of the seismic data within that facies. The proportions in the Gaussian mixture model represent the relative volumes of the different facies in the reservoir. In this setting, the facies classification problem becomes one of estimating the parameters defining the Gaussian mixture model. The EM algorithm has long been used to estimate Gaussian mixture model parameters. As the standard EM algorithm does not consider spatial relationships among the data, it can generate spatially scattered seismic facies, which is physically unrealistic. We improve the standard EM algorithm by adding a spatial constraint to enhance the spatial continuity of the estimated geological facies. By applying the EM algorithms to acoustic impedance and Poisson's ratio data for two synthetic examples, we are able to identify the facies distribution.
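The baseline step the paper builds on is standard EM for a Gaussian mixture; a minimal one-attribute version is sketched below (the spatial-continuity constraint described in the abstract is deliberately omitted), with the attribute values, facies count, and initialization chosen arbitrarily for illustration.

```python
import numpy as np

def em_gaussian_mixture_1d(x, n_facies, n_iter=100, seed=0):
    """EM estimation of a 1-D Gaussian mixture (e.g. acoustic impedance samples).

    Returns the mixture weights, means, variances and the responsibilities,
    which give a soft facies classification for every sample.
    """
    rng = np.random.default_rng(seed)
    w = np.full(n_facies, 1.0 / n_facies)          # facies proportions
    mu = rng.choice(x, n_facies, replace=False)    # initial facies means
    var = np.full(n_facies, np.var(x))             # initial facies variances
    for _ in range(n_iter):
        # E-step: responsibility of each facies for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update proportions, means and variances
        Nk = resp.sum(axis=0)
        w = Nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return w, mu, var, resp
```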

19.
Development of subsurface energy and environmental resources can be improved by tuning important decision variables, such as well locations and operating rates, to optimize a desired performance metric. Optimal well locations in a discretized reservoir model are typically identified by solving an integer programming problem, while identification of optimal well settings (controls) is formulated as a continuous optimization problem. In general, however, the decision variables in field development optimization can include many design parameters, such as the number, type, location, short-term and long-term operational settings (controls), and drilling schedule of the wells. In addition to the large number of decision variables, field optimization problems are further complicated by existing technical and physical constraints as well as by the uncertainty in describing the heterogeneous properties of geologic formations. In this paper, we consider simultaneous optimization of well locations and dynamic rate allocations under geologic uncertainty using a variant of simultaneous perturbation stochastic approximation (SPSA). In addition, by taking advantage of the robustness of SPSA against errors in calculating the cost function, we develop an efficient field development optimization under geologic uncertainty, in which an ensemble of models is used to describe important flow and transport reservoir properties (e.g., permeability and porosity). We use several numerical experiments, including a channel layer of the SPE10 model and the three-dimensional PUNQ-S3 reservoir, to illustrate the performance improvement that can be achieved by solving a combined well placement and control optimization using the SPSA algorithm under known and uncertain reservoir model assumptions.
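The core SPSA step is simple enough to sketch: a two-sided gradient estimate from a single random perturbation, so each iteration costs two objective evaluations regardless of how many well coordinates and rates are being optimized. The objective `J` is assumed to wrap the reservoir simulator (and, for robust optimization, an average over the ensemble of geologic realizations).

```python
import numpy as np

def spsa_step(J, theta, a_k, c_k, rng):
    """One SPSA ascent step for maximizing J (e.g. expected NPV).

    theta : current decision variables (well locations and rate controls)
    a_k   : step size for iteration k, c_k : perturbation size for iteration k
    """
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
    # elementwise division by delta equals multiplication, since delta_i = +/-1
    g_hat = (J(theta + c_k * delta) - J(theta - c_k * delta)) / (2.0 * c_k) * delta
    return theta + a_k * g_hat
```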

20.
    
An algorithm for producing a nonconditional simulation by multiplying the square root of the covariance matrix by a random vector is described. First, the square root of a matrix (or a function of a matrix in general) is defined. The square root of the matrix can be approximated by a minimax matrix polynomial. The block Toeplitz structure of the covariance matrix is used to minimize storage. Finally, multiplication of the block Toeplitz matrix by the random vector can be evaluated as a convolution using the fast Fourier transform. This results in an algorithm which is not only efficient in terms of storage and computation but also easy to implement.
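One building block of that algorithm, multiplying a (block) Toeplitz covariance matrix by a vector as an FFT-based convolution, can be sketched in one dimension as below; the minimax polynomial approximation of the matrix square root is not reproduced and would simply call this product repeatedly.

```python
import numpy as np

def toeplitz_matvec_fft(c, x):
    """Multiply the symmetric Toeplitz matrix with first column c by x in O(n log n).

    The Toeplitz matrix is embedded in a circulant matrix of size 2n, whose
    action is a circular convolution and is therefore evaluated with the FFT.
    """
    n = x.size
    col = np.concatenate([c, [0.0], c[:0:-1]])        # first column of the circulant
    lam = np.fft.fft(col)                             # circulant eigenvalues
    y = np.fft.ifft(lam * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y.real[:n]
```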
