Similar Documents
 20 similar documents found (search time: 62 ms)
1.
Optimization with the Gradual Deformation Method (Total citations: 1; self-citations: 0; citations by others: 1)
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of the spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimization may remove, step by step, the structure-preservation capability of the gradual deformation method. This bias is negligible when deformation is restricted to a few realization chains, but it grows as the number of chains tends to infinity. Since, in practice, the optimization of reservoir models is limited to a small number of iterations relative to the number of gridblocks, the spatial variability is preserved. Finally, the optimization processes are implemented on the basis of the Levenberg–Marquardt method. Although the objective functions, written in terms of Gaussian white noises, are reduced to the data-mismatch term, the conditional realization space can be properly sampled.
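A minimal sketch of the core gradual deformation idea described above: two independent Gaussian white-noise realizations are combined as u(t) = z1·cos(πt) + z2·sin(πt), which stays standard normal for every t, and the mismatch is minimized over the single parameter t, chain after chain. The names `toy_simulator` and `observed` are illustrative stand-ins, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 500
observed = rng.normal(size=50)          # hypothetical observed data

def toy_simulator(z):
    # Stand-in for a flow simulator: any smooth map from noise to data.
    return np.tanh(z[:50]) + 0.1 * z[50:100]

def mismatch(z):
    r = toy_simulator(z) - observed
    return 0.5 * float(r @ r)

z1 = rng.normal(size=n)                 # current realization (white noise)
for chain in range(5):
    z2 = rng.normal(size=n)             # fresh independent realization
    # u(t) stays N(0, I) for every t because cos^2 + sin^2 = 1,
    # which is what preserves the spatial structure.
    obj = lambda t: mismatch(np.cos(np.pi * t) * z1 + np.sin(np.pi * t) * z2)
    t_opt = minimize_scalar(obj, bounds=(-1, 1), method="bounded").x
    z1 = np.cos(np.pi * t_opt) * z1 + np.sin(np.pi * t_opt) * z2
    print(f"chain {chain}: mismatch = {mismatch(z1):.4f}")
```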

2.
Constraining stochastic models of reservoir properties such as porosity and permeability can be formulated as an optimization problem. While an optimization based on random search methods preserves the spatial variability of the stochastic model, it is prohibitively computer intensive. In contrast, gradient search methods may be very efficient, but they do not preserve the spatial variability of the stochastic model. The gradual deformation method allows a reservoir model (i.e., a realization of the stochastic model) to be modified through a small number of parameters while preserving its spatial variability. It can be considered a first step towards merging random and gradient search methods. The gradual deformation method yields chains of reservoir models that can be investigated successively to identify an optimal reservoir model. The investigation of each chain is based on gradient computations, but the building of chains of reservoir models is random. In this paper, we propose an algorithm that further improves the efficiency of the gradual deformation method. Contrary to the previous gradual deformation method, we also use gradient information to build chains of reservoir models. The idea is to combine the initial reservoir model, or the previously optimized reservoir model, with a compound reservoir model. This compound model is a linear combination of a set of independent reservoir models. The combination coefficients are calculated so that the search direction from the initial model is as close as possible to the gradient search direction. This new gradual deformation scheme reduces the number of optimization parameters while selecting an optimal search direction. The numerical example compares the performance of the new gradual deformation scheme with that of the traditional one.
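A hedged sketch of the coefficient computation described above: given a set of independent realizations (columns of `Z`) and a gradient `g` of the mismatch (here a random placeholder; in the paper it would come from gradient computations), the combination coefficients that bring the compound direction closest to the descent direction solve a least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 8                       # model size, number of realizations
Z = rng.normal(size=(n, m))         # independent white-noise realizations
g = rng.normal(size=n)              # placeholder gradient of the objective

# Coefficients making Z @ a as close as possible to -g (descent direction).
a, *_ = np.linalg.lstsq(Z, -g, rcond=None)
compound = Z @ a
compound /= np.linalg.norm(compound) / np.sqrt(n)   # keep unit-variance scale

cos = float(compound @ (-g) / (np.linalg.norm(compound) * np.linalg.norm(g)))
print("cosine similarity with the descent direction -g:", cos)
```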

3.
Calculating derivatives for automatic history matching (Total citations: 1; self-citations: 0; citations by others: 1)
Automatic history matching is based on minimizing an objective function that quantifies the mismatch between observed and simulated data. When using gradient-based methods to solve this optimization problem, a key point for the overall procedure is how the simulator delivers the necessary derivative information. In this paper, forward and adjoint methods for derivative calculation are discussed. Procedures for building the sensitivity matrix and for computing sensitivity-matrix and transpose-sensitivity-matrix vector products are fully described. To show the usefulness of the derivative calculation algorithms, a new variant of gradzone analysis, which addresses the problem of selecting the most relevant parameters for history matching, is proposed using the singular value decomposition of the sensitivity matrix. Application to a simple synthetic case shows that this procedure can reveal important information about the nature of the history-matching problem.
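An illustrative sketch, not the paper's implementation, of using the SVD of a sensitivity matrix S (data × parameters) to rank parameter directions in the spirit of the gradzone-style analysis mentioned above. The matrix `S` below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
d, p = 40, 100
S = rng.normal(size=(d, p)) * np.geomspace(1.0, 1e-3, p)  # decaying columns

U, s, Vt = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99) + 1)   # directions holding 99% energy
print(f"{k} singular directions out of {min(d, p)} carry 99% of sensitivity")

# Rows of Vt[:k] indicate which parameter combinations the data resolve.
important = np.argsort(-np.abs(Vt[:k]).sum(axis=0))[:10]
print("ten most influential parameters:", important)
```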

4.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations cannot generally match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied in an attempt to minimize an objective function that quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, only a few parameterization methods are available to update geostatistical models in a way consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. This technique allows model realizations to be changed locally through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.

5.
Defining representative reservoir models usually calls for a huge number of fluid flow simulations, which may be very time-consuming. Meta-models are built to mitigate this issue: they approximate a scalar function from the values simulated for a set of uncertain parameters. For time-dependent outputs, a reduced-basis approach can be considered. If the resulting meta-models are accurate, they can be called instead of the flow simulator. We propose here to investigate a specific approach, named multi-fidelity meta-modeling, to reduce the simulation time further. We assume that the outputs of interest are known at various levels of resolution: a fine reference level, and coarser levels for which computations are faster but less accurate. Multi-fidelity meta-models rely on co-kriging to approximate the outputs at the fine level using the values simulated at all levels. Such an approach can save simulation time by limiting the number of fine-level simulations. The objective of this paper is to investigate the potential of multi-fidelity meta-modeling for reservoir engineering. The reduced-basis approach for time-dependent outputs is extended to the multi-fidelity context. Then, comparisons with the more usual kriging approach are proposed on a synthetic case, both in terms of computation time and predictivity. Meta-models are computed to evaluate the production responses at wells and the mismatch between the data and the simulated responses (history-matching error), considering two levels of resolution. The results show that the multi-fidelity approach can outperform kriging if the target simulation time is small. Finally, its potential is demonstrated when used for history matching.
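A minimal two-level multi-fidelity sketch in the autoregressive (Kennedy–O'Hagan) spirit: fine ≈ ρ·coarse + GP correction. This is an assumption-laden stand-in for the co-kriging the paper uses; the 1-D functions `coarse` and `fine` below are illustrative only, with many cheap coarse runs and few expensive fine runs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def coarse(x):  return np.sin(8 * x)                 # cheap, biased level
def fine(x):    return np.sin(8 * x) + 0.3 * x**2    # expensive reference

Xc = np.linspace(0, 1, 25)[:, None]                  # many coarse runs
Xf = np.linspace(0, 1, 5)[:, None]                   # few fine runs

# Level 1: GP on the coarse outputs.
gp_c = GaussianProcessRegressor(RBF(0.1)).fit(Xc, coarse(Xc).ravel())

# Level 2: scale factor rho plus a GP on the fine-level residual.
yc_at_f = gp_c.predict(Xf)
rho = float(np.polyfit(yc_at_f, fine(Xf).ravel(), 1)[0])
gp_d = GaussianProcessRegressor(RBF(0.1)).fit(
    Xf, fine(Xf).ravel() - rho * yc_at_f)

Xt = np.linspace(0, 1, 200)[:, None]
pred = rho * gp_c.predict(Xt) + gp_d.predict(Xt)
print("max abs error vs fine level:",
      float(np.max(np.abs(pred - fine(Xt).ravel()))))
```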

6.
Gradient-based history matching algorithms can be used to adapt the uncertain parameters in a reservoir model using production data. They require, however, the implementation of an adjoint model to compute the gradients, which is usually an enormous programming effort. We propose a new approach to gradient-based history matching based on model reduction, where the original (nonlinear and high-order) forward model is replaced by a linear reduced-order forward model and, consequently, the adjoint of the tangent linear approximation of the original forward model is replaced by the adjoint of the linear reduced-order forward model. The reduced-order model is constructed with the aid of the proper orthogonal decomposition (POD) method. Due to the linear character of the reduced model, the corresponding adjoint model is easily obtained. The gradient of the objective function is approximated, and the minimization problem is solved in the reduced space; the procedure is iterated with the updated estimate of the parameters if necessary. The proposed approach is adjoint-free and can be used with any reservoir simulator. The method was evaluated for a waterflooded reservoir with a channelized permeability field. A comparison with an adjoint-based history matching procedure shows that the model-reduced approach gives a comparable quality of history matches and predictions. The computational efficiency of the model-reduced approach is lower than that of an adjoint-based approach, but higher than that of an approach where the gradients are obtained with simple finite differences.
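A sketch of the core POD step above: build a reduced basis from state snapshots via the SVD of the snapshot matrix. The snapshot matrix here is synthetic; in the paper's setting the columns would be saved states from forward runs of the full reservoir model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_snap = 2000, 60
# Synthetic snapshots with low-rank structure plus small noise.
snapshots = rng.normal(size=(n_state, 5)) @ rng.normal(size=(5, n_snap))
snapshots += 0.01 * rng.normal(size=(n_state, n_snap))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)   # modes for 99.9% energy
Phi = U[:, :r]                                # reduced basis, n_state x r
print(f"reduced order r = {r}")
# A full state x is approximated by mean + Phi @ a, with a = Phi.T @ (x - mean);
# the reduced dynamics (and hence the adjoint) live in this r-dimensional space.
```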

7.
8.
Based on the algorithm for gradual deformation of Gaussian stochastic models, we propose in this paper an extension of this method to gradually deform realizations generated by sequential, not necessarily Gaussian, simulation. As in the Gaussian case, gradual deformation of a sequential simulation preserves the spatial variability of the stochastic model and, in general, yields a regular objective function that can be minimized by an efficient optimization algorithm (e.g., a gradient-based algorithm). Furthermore, we discuss local gradual deformation and gradual deformation with respect to the structural parameters (mean, variance, variogram range, etc.) of realizations generated by sequential simulation. Local gradual deformation may significantly improve calibration speed when observations are scattered in different zones of a field. Gradual deformation with respect to structural parameters is necessary when these parameters cannot be inferred a priori and need to be determined using an inverse procedure. A synthetic example inspired by a real oil field is presented to illustrate different aspects of this approach. Results from this case study demonstrate the efficiency of the gradual deformation approach for constraining facies models generated by sequential indicator simulation. They also show the potential applicability of the proposed approach to complex real cases.

9.
A method for history matching of an in-house petroleum reservoir compositional simulator with multipoint flux approximation is presented. This method is used for the estimation of unknown reservoir parameters, such as permeability and porosity, based on production data and inverted seismic data. The limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method is employed for minimization of the objective function, which represents the difference between simulated and observed data. In this work, we present the key features of the algorithm for calculating the gradients of the objective function based on adjoint variables. The test example shows that the method is applicable to cases with anisotropic permeability fields, multipoint flux approximation, and arbitrary fluid compositions.
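A hedged sketch of the outer loop only: limited-memory BFGS driving a mismatch objective whose gradient would, in the paper's setting, come from adjoint variables. Here a linear toy forward operator `G` stands in for the compositional simulator and its adjoint.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
m_true = rng.normal(size=30)                 # "true" permeability/porosity
G = rng.normal(size=(50, 30))                # toy linear forward operator
d_obs = G @ m_true + 0.01 * rng.normal(size=50)

def obj_and_grad(m):
    # Objective and its gradient; G.T @ r plays the role of the
    # adjoint-based gradient computation.
    r = G @ m - d_obs
    return 0.5 * float(r @ r), G.T @ r

res = minimize(obj_and_grad, np.zeros(30), jac=True, method="L-BFGS-B")
print("final mismatch:", res.fun)
```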

10.
The amount of hydrocarbon recovered can be considerably increased by finding optimal placements for non-conventional wells. For that purpose, optimization algorithms in which the objective function is evaluated using a reservoir simulator are needed. Furthermore, for complex reservoir geologies with strong heterogeneities, the optimization problem requires algorithms able to cope with the non-regularity of the objective function. In this paper, we propose an optimization methodology for determining optimal well locations and trajectories based on the covariance matrix adaptation evolution strategy (CMA-ES), which is recognized as one of the most powerful derivative-free optimizers for continuous optimization. In addition, to improve the optimization procedure, two new techniques are proposed: (a) adaptive penalization with rejection, in order to handle well placement constraints, and (b) incorporation of a meta-model, based on locally weighted regression, into CMA-ES, using an approximate stochastic ranking procedure, in order to reduce the number of reservoir simulations required to evaluate the objective function. The approach is applied to the PUNQ-S3 case and compared with a genetic algorithm (GA) incorporating the Genocop III technique for handling constraints. To allow a fair comparison, both algorithms are used without parameter tuning on the problem: standard settings are used for the GA and default settings for CMA-ES. It is shown that our new approach outperforms the genetic algorithm: it generally leads both to a higher net present value and to a significant reduction in the number of reservoir simulations needed to reach a good well configuration. Moreover, coupling CMA-ES with a meta-model leads to further improvement, which was around 20% for the synthetic case in this study.
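An illustrative well-placement-style loop with CMA-ES, assuming the open-source `cma` package (`pip install cma`); the 2-D "NPV" surrogate and the simple bound handling below are toy stand-ins for the reservoir simulator and the paper's adaptive penalization with rejection.

```python
import numpy as np
import cma

def neg_npv(xy):
    # Hypothetical smooth proxy for net present value of a well at (x, y);
    # CMA-ES minimizes, so we return the negative NPV.
    x, y = xy
    return -(np.exp(-((x - 3)**2 + (y - 7)**2) / 8) +
             0.5 * np.exp(-((x - 8)**2 + (y - 2)**2) / 4))

es = cma.CMAEvolutionStrategy([5.0, 5.0], 2.0,
                              inopts={"bounds": [0, 10], "seed": 5})
while not es.stop():
    xs = es.ask()                          # sample candidate well locations
    es.tell(xs, [neg_npv(x) for x in xs])  # rank them by objective value
es.result_pretty()
```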

11.
Exact Solution and Type Curves for Bottomhole Pressure in an Infinite Three-Layer Crossflow Reservoir (Total citations: 2; self-citations: 0; citations by others: 2)
Accounting for the effects of skin and wellbore storage, and using the concept of maximum effective wellbore radius, a dynamic model is established for the bottomhole pressure of an infinite three-layer crossflow reservoir. Via the Laplace transform, exact solutions for the bottomhole pressure and the layered flow rates, expressed in terms of Bessel functions, are obtained in Laplace space. The real-space solution is obtained using the Crump numerical inversion method, and the pressure transient behavior is analyzed. The model applies not only to positive skin factors but also to negative ones. Matching with type curves plotted from the new model yields more accurate results.
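A sketch of the workflow's final step: solve in Laplace space, then invert numerically. The paper uses the Crump method; for brevity this sketch substitutes the simpler Gaver–Stehfest scheme, applied to a known transform pair, F(s) = 1/(s + 1) with inverse e^(-t), purely to illustrate the inversion step.

```python
import numpy as np
from math import factorial

def stehfest(F, t, N=12):
    # Gaver-Stehfest inversion with N (even) terms:
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t).
    total = 0.0
    for k in range(1, N + 1):
        V = ((-1) ** (k + N // 2) *
             sum(j ** (N // 2) * factorial(2 * j) /
                 (factorial(N // 2 - j) * factorial(j) * factorial(j - 1) *
                  factorial(k - j) * factorial(2 * j - k))
                 for j in range((k + 1) // 2, min(k, N // 2) + 1)))
        total += V * F(k * np.log(2) / t)
    return total * np.log(2) / t

for t in (0.5, 1.0, 2.0):
    approx = stehfest(lambda s: 1.0 / (s + 1.0), t)
    print(f"t={t}: inverted {approx:.6f}, exact {np.exp(-t):.6f}")
```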

12.
Traditional ensemble-based history matching methods, such as the ensemble Kalman filter and iterative ensemble filters, usually update reservoir parameter fields using a numerical grid-based parameterization. Although the objective function from which these methods are derived contains a parameter constraint term, it is difficult to preserve the geological continuity of the parameter field during the updating process; this is especially the case when estimating statistically anisotropic fields (such as a statistically anisotropic Gaussian field, or a facies field with elongated facies) with uncertainty about the anisotropy direction. In this work, we propose a Karhunen-Loeve expansion-based global parameterization technique that is combined with the ensemble-based history matching method for inverse modeling of statistically anisotropic fields. Using the Karhunen-Loeve expansion, a Gaussian random field can be parameterized by a group of independent Gaussian random variables. For a facies field, we combine the Karhunen-Loeve expansion with the level set technique to perform the parameterization; that is, each facies is parameterized by a Gaussian random field and a level set algorithm, and the Gaussian random field is in turn parameterized by the Karhunen-Loeve expansion. We treat the independent Gaussian random variables in the Karhunen-Loeve expansion as the model parameters. When the anisotropy direction of the statistically anisotropic field is uncertain, we also treat it as a model parameter to be updated. After model parameterization, we use the ensemble randomized maximum likelihood filter to perform history matching. Because of the nature of the Karhunen-Loeve expansion, the geostatistical characteristics of the parameter field are preserved during the updating process. Synthetic cases are set up to test the performance of the proposed method. Numerical results show that the proposed method is suitable for estimating statistically anisotropic fields.
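A minimal sketch of the Karhunen-Loeve parameterization of a 1-D Gaussian field: eigendecompose the covariance matrix, keep the leading modes, and treat the independent standard-normal coefficients as the model parameters. The exponential covariance and correlation length below are illustrative choices, not the paper's setup.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
corr_len = 0.2
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # exponential covariance

vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1]                 # sort eigenvalues descending
vals, vecs = vals[idx], vecs[:, idx]
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95) + 1)
print(f"{k} KL modes capture 95% of the variance")

rng = np.random.default_rng(6)
xi = rng.normal(size=k)                      # the parameters to be updated
field = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)
# Updating xi (e.g., by an ensemble filter) always yields a field with the
# prescribed covariance, which is how geostatistical character is preserved.
```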

13.
Over the past few years, more and more systems and control concepts have been applied in reservoir engineering, such as optimal control, Kalman filtering, and model reduction. The success of these applications is determined by the controllability, observability, and identifiability properties of the reservoir at hand. The first contribution of this paper is to analyze and interpret the controllability and observability of single-phase flow reservoir models and to investigate how these are affected by well locations, heterogeneity, and fluid properties. The second contribution of this paper is to show how to compute an upper bound on the number of identifiable parameters when history matching production data and to present a new method to regularize the history matching problem using a reservoir’s controllability and observability properties.
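A sketch of the standard controllability/observability computation for a linear(ized) model dx/dt = Ax + Bu, y = Cx: solve the continuous Lyapunov equations for the two Gramians and inspect their effective ranks. The matrices below are random toys standing in for a discretized single-phase flow model with a few wells.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(7)
n, n_wells = 50, 3
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # stable toy system matrix
B = rng.normal(size=(n, n_wells))                # injection wells (inputs)
C = rng.normal(size=(n_wells, n))                # observation wells (outputs)

Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
for name, W in (("controllable", Wc), ("observable", Wo)):
    s = np.linalg.svd(W, compute_uv=False)
    # Count directions with non-negligible singular values.
    print(f"effective {name} subspace dimension:",
          int(np.sum(s > 1e-8 * s[0])))
```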

14.
15.
The process of reservoir history matching is a costly task. Many available history-matching algorithms either fail to perform the task or require a large number of simulation runs. To overcome these difficulties, we apply the Gaussian process (GP) modeling technique to approximate the costly objective functions and to expedite finding the global optima. A GP model is a proxy, employed to model the input-output relationships by assuming a multi-Gaussian distribution on the output values. An infill criterion is used in conjunction with the GP model to sequentially add samples with potentially lower outputs. The IC fault model is used to compare the efficiency of the GP-based optimization method with other typical optimization methods for minimizing the objective function. In this paper, we present the applicability of the GP modeling approach to reservoir history-matching problems, exemplified by a numerical analysis of production data from a horizontal multi-stage fractured tight gas condensate well. The results for the case studied here show quick convergence to the lowest objective values in fewer than 100 simulations for this 20-dimensional problem. This amounts to almost 10 times faster performance than the differential evolution (DE) algorithm, which is also known to be a powerful optimization technique. Sensitivity studies are conducted to explain the performance of the GP-based optimization technique with various correlation functions.
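A minimal GP-based optimization sketch with an expected-improvement (EI) infill criterion, mirroring the workflow described above on a cheap 1-D objective; the paper does not specify its infill criterion, so EI is an assumption here, and the real use case would call the reservoir simulator instead of `objective`.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                     # stand-in for the history-match error
    return np.sin(3 * x) + 0.5 * (x - 0.6)**2

rng = np.random.default_rng(8)
X = rng.uniform(0, 2, size=(4, 1))    # small initial design
y = objective(X).ravel()
grid = np.linspace(0, 2, 400)[:, None]

for it in range(15):
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    sd = np.maximum(sd, 1e-12)        # avoid division by zero at sampled points
    imp = y.min() - mu
    ei = imp * norm.cdf(imp / sd) + sd * norm.pdf(imp / sd)
    x_new = grid[[int(np.argmax(ei))]]   # infill point with best EI
    X = np.vstack([X, x_new])
    y = np.append(y, objective(x_new).ravel())
print("best found:", X[np.argmin(y)].item(), y.min())
```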

16.
Matching seismic data in assisted history matching processes can be a challenging task. One main idea is to bring flexibility to the choice of the parameters to be perturbed, focusing on the information provided by the seismic data. Local parameterization techniques such as pilot-point or gradual deformation methods can be introduced, given their high adaptability. However, the choice of the spatial supports associated with the perturbed parameters is crucial to successfully reduce the seismic mismatch. Information related to the seismic data is sometimes used to initialize such local methods, and recent attempts to define the regions adaptively have been proposed, focusing on the mismatch between simulated and reference seismic data. However, in these attempts the regions are defined manually for each optimization process. We therefore propose to drive the definition of the parameter support through an automatic definition of the regions to be perturbed from the residual maps related to the 3D seismic data. Two methods are developed in this paper. The first consists of clustering the residual map with classification algorithms. The second drives the generation of pilot-point locations in an adaptive way: residual maps, after proper normalization, are treated as probability density functions for the pilot-point locations. Both procedures lead to a completely adaptive and highly flexible perturbation technique for 3D seismic matching. A synthetic study based on the PUNQ test case is introduced to illustrate the potential of these adaptive strategies.
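A sketch of the second strategy above: normalize a seismic residual map to a probability density and draw pilot-point locations from it, so that perturbations concentrate where the mismatch is largest. The residual map below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
nx, ny = 40, 30
gx, gy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
# Toy residual map: mismatch concentrated around two spots.
resid = (np.exp(-((gx - 10)**2 + (gy - 8)**2) / 20) +
         np.exp(-((gx - 30)**2 + (gy - 22)**2) / 40))

pdf = resid / resid.sum()                       # normalize to a density
flat_idx = rng.choice(nx * ny, size=10, replace=False, p=pdf.ravel())
pilots = np.column_stack(np.unravel_index(flat_idx, (nx, ny)))
print("pilot-point cells (row, col):\n", pilots)  # perturb parameters here
```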

17.
The degrees of freedom (DOF) in standard ensemble-based data assimilation are limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF be sufficiently large. Too few DOF relative to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: increase the DOF or decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching are an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, so a moderately large ensemble size is standard. Realizing the potential of seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also because of the imbalance between a large IC and too few DOF. Distance-based localization is often applied to increase the DOF, but it is case specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the influence of the data space: subspace pseudo inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows the DOF to be increased by enlarging the ensemble size without increasing the total computational cost. We also consider a combination of decreasing the IC and increasing the DOF through a novel method that combines data coarsening and coarse-scale simulation. The methods were compared on one small and one moderately large example, with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows a reference solution to be calculated with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick against which the quality of the other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulation gave the best results. With very restricted computational resources available, this was the only method that gave satisfactory results.
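A hedged sketch of one ingredient discussed above: an ensemble-smoother-style update in which the data-space inverse is replaced by a truncated-SVD (subspace) pseudo inverse, limiting the influence of a dense data set. All ensembles below are random toys; the specific update form is a common textbook variant, not necessarily the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(10)
n_m, n_d, n_e = 200, 1000, 40          # parameters, data, ensemble size
M = rng.normal(size=(n_m, n_e))        # prior parameter ensemble
D = rng.normal(size=(n_d, n_e))        # predicted-data ensemble (toy)
d_obs = rng.normal(size=n_d)
sigma = 0.5                            # data-error standard deviation

dM = M - M.mean(axis=1, keepdims=True)
dD = D - D.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(dD, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1)
Uk, sk = U[:, :k], s[:k]               # retained data subspace

# Pseudo inverse of (dD dD^T + (n_e-1) sigma^2 I) restricted to the subspace.
inv = Uk @ np.diag(1.0 / (sk**2 + (n_e - 1) * sigma**2)) @ Uk.T
E = d_obs[:, None] + sigma * rng.normal(size=(n_d, n_e))   # perturbed obs
M_post = M + dM @ dD.T @ inv @ (E - D)
print("posterior ensemble spread:", float(M_post.std()))
```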

18.
19.
We present a method to determine lower and upper bounds on the predicted production, or any other economic objective, from history-matched reservoir models. The method consists of two steps: (1) performing a traditional computer-assisted history match of a reservoir model, with the objective of minimizing the mismatch between predicted and observed production data by adjusting the grid block permeability values of the model; (2) performing two optimization exercises to minimize and maximize an economic objective over the remaining field life, for a fixed production strategy, by manipulating the same grid block permeabilities, but without significantly changing the mismatch obtained under step 1. This is accomplished through a hierarchical optimization procedure that limits the solution space of a secondary optimization problem to the (approximate) null space of the primary optimization problem. We applied this procedure to two different reservoir models. We performed a history match based on synthetic data, starting from a uniform prior and using a gradient-based minimization procedure. After history matching, minimization and maximization of the net present value (NPV), using a fixed control strategy, were executed as secondary optimization problems by changing the model parameters while staying close to the null space of the primary optimization problem. In other words, we optimized the secondary objective functions while requiring that optimality of the primary objective (a good history match) be preserved. This method therefore provides a way to quantify the economic consequences of the well-known problem that history matching is a strongly ill-posed problem. We also investigated how this method can be used to assess the cost-effectiveness of acquiring different data types to reduce the uncertainty in the expected NPV.
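A minimal sketch of the hierarchical idea: after the primary history match, move the parameters only within the null space of the (here, linearized) primary objective, so the data mismatch is untouched while a secondary objective changes. A random linear toy problem stands in for the simulator.

```python
import numpy as np

rng = np.random.default_rng(11)
n_m, n_d = 60, 20
G = rng.normal(size=(n_d, n_m))            # toy linearized forward operator
m_hat = rng.normal(size=n_m)               # history-matched parameters
c = rng.normal(size=n_m)                   # gradient of secondary objective (NPV)

# Null space of G: parameter directions that leave the data mismatch unchanged.
_, _, Vt = np.linalg.svd(G)
N = Vt[n_d:].T                             # n_m x (n_m - n_d) null-space basis

step = N @ (N.T @ c)                       # secondary gradient projected on null space
m_new = m_hat + 0.5 * step                 # improves NPV proxy, preserves match
print("change in data mismatch:",
      float(np.linalg.norm(G @ (m_new - m_hat))))   # ~0 up to round-off
```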

20.
Randomized maximum likelihood (RML) is known in the petroleum reservoir community as a Bayesian history matching technique that works by minimizing a stochastic quadratic objective function. The algorithm is well established and has shown promising results in several applications. For linear models with a linear observation operator, the algorithm samples the posterior density accurately. To improve the sampling for nonlinear models, we introduce a generalized version in its simplest form by re-weighting the prior. The weight term is motivated by a sufficiency condition on the expected gradient of the objective function. Recently, an ensemble version of the algorithm was proposed that can be implemented with any simulator. Unfortunately, that method has some practical implementation issues due to the computation of low-rank pseudo-inverse matrices, and in practice only the data-mismatch part of the objective function is maintained. Here, we take advantage of the fact that the measurement space is often much smaller than the parameter space, and we project the prior uncertainty from the parameter space to the measurement space to avoid overfitting the data. The proposed algorithms show good performance on synthetic test cases, including a 2D reservoir model.
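A sketch of plain RML in the linear-Gaussian case, where each posterior sample is the minimizer of a stochastic objective built from perturbed observations and a fresh prior draw; in this setting the minimizer has a closed form, illustrating the "samples the posterior accurately" statement above. This is standard RML, not the paper's generalized or projected variants.

```python
import numpy as np

rng = np.random.default_rng(12)
n_m, n_d = 50, 20
G = rng.normal(size=(n_d, n_m))                   # linear observation operator
Cm = np.eye(n_m)                                  # prior covariance
sd = 0.3                                          # observation-error std
m_true = rng.normal(size=n_m)
d_obs = G @ m_true + sd * rng.normal(size=n_d)

samples = []
for _ in range(100):
    m_pr = rng.normal(size=n_m)                   # fresh prior draw
    d_pert = d_obs + sd * rng.normal(size=n_d)    # perturbed observations
    # Minimizer of ||G m - d_pert||^2 / sd^2 + ||m - m_pr||_{Cm^-1}^2,
    # available in closed form for this linear-Gaussian toy.
    H = G.T @ G / sd**2 + np.linalg.inv(Cm)
    m = np.linalg.solve(H, G.T @ d_pert / sd**2 + np.linalg.solve(Cm, m_pr))
    samples.append(m)
samples = np.array(samples)
print("posterior-mean error:",
      float(np.linalg.norm(samples.mean(axis=0) - m_true)))
```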

