Similar Documents (20 results)
1.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. The local gradual deformation method can be used to update geostatistical realizations. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required by the optimization procedure, increases with the number of matching parameters. History matching large fields with many parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The approach rests on approximate derivative calculations that exploit the partial separability of the objective function: the objective function is first split into local components, and only the most influential parameters of each component are used in the derivative computation. A perturbation design is then proposed to compute all the derivatives simultaneously with only a few simulations. This new technique makes history matching with the local gradual deformation method tractable for large numbers of parameters.
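As a minimal illustration of the underlying gradual deformation principle (the textbook global form, not the paper's local, multi-parameter variant; all names below are ours):

```python
import numpy as np

def gradual_deformation(y1, y2, t):
    """Combine two independent N(0,1) realizations into a new realization
    that remains N(0,1) for any deformation parameter t, because
    cos^2 + sin^2 = 1 preserves the variance."""
    return y1 * np.cos(np.pi * t / 2.0) + y2 * np.sin(np.pi * t / 2.0)

rng = np.random.default_rng(0)
y1, y2 = rng.standard_normal(10000), rng.standard_normal(10000)
# Sweeping t in [0, 1] deforms y1 continuously into y2; the optimizer tunes
# t (one parameter per local region in the local variant) to improve the match.
for t in (0.0, 0.5, 1.0):
    y = gradual_deformation(y1, y2, t)
    print(t, round(y.mean(), 3), round(y.std(), 3))
```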

2.
In geosciences, the complex forward problems met in geophysics, petroleum system analysis, and reservoir engineering often have to be replaced by proxies, which are then used in optimization problems. For instance, history matching of observed field data requires such a large number of reservoir simulation runs (especially when geostatistical geological models are used) that it is often impossible to use the full reservoir simulator. Several techniques have therefore been proposed to mimic reservoir simulations with proxies. Because of the experimental-design approach commonly taken, most authors propose second-order polynomials. In this paper, we demonstrate that (1) neural networks can reproduce second-order polynomials, so a neural network proxy is much more flexible and adaptable to the nonlinearity of the problem to be solved; and (2) first- and second-order derivatives of the neural network can be obtained, providing gradients and Hessians for optimizers. For inverse problems met in seismic inversion, well-by-well production data, optimal well locations, source rock generation, etc., gradient methods are most often used to find an optimal solution. The paper describes how to calculate these gradients from a neural network built as a proxy; when needed, the Hessian can also be obtained from the neural network. On a real case study, the ability of neural networks to reproduce complex phenomena (water cuts, production rates, etc.) is shown. Comparisons with second-order polynomials (and kriging methods) demonstrate the superiority of the neural network approach as soon as nonlinear behavior is present in the responses of the simulator. The gradients and the Hessian of the neural network are compared to those of the real response function.
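As an illustration of point (2), the gradient and Hessian of a small proxy network follow in closed form from the chain rule. The sketch below is our own minimal one-hidden-layer tanh network, not the authors' code; it checks the analytic gradient against finite differences:

```python
import numpy as np

class TanhProxy:
    """y(x) = w2 . tanh(W1 x + b1) + b2 -- a one-hidden-layer proxy."""
    def __init__(self, W1, b1, w2, b2):
        self.W1, self.b1, self.w2, self.b2 = W1, b1, w2, b2

    def value_grad_hess(self, x):
        z = self.W1 @ x + self.b1
        h = np.tanh(z)
        y = self.w2 @ h + self.b2
        d1 = self.w2 * (1.0 - h**2)                # dy/dz_j
        grad = self.W1.T @ d1                      # chain rule through W1
        d2 = self.w2 * (-2.0 * h) * (1.0 - h**2)   # d2y/dz_j^2
        hess = self.W1.T @ (d2[:, None] * self.W1)
        return y, grad, hess

rng = np.random.default_rng(1)
proxy = TanhProxy(rng.normal(size=(8, 3)), rng.normal(size=8),
                  rng.normal(size=8), 0.0)
x = np.array([0.3, -0.1, 0.7])
y, g, H = proxy.value_grad_hess(x)
# Verify the analytic gradient against forward finite differences.
eps = 1e-6
fd = [(proxy.value_grad_hess(x + eps * np.eye(3)[i])[0] - y) / eps
      for i in range(3)]
print(np.allclose(g, fd, atol=1e-4))
```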

3.
张文  王泽文  乐励华 《岩土力学》2010,31(2):553-558
This paper studies a class of mathematical models for nuclide migration in a dual porous/single-fracture medium and the associated inverse problem. The migration model is a coupled system of parabolic equations. Given the nuclide concentration history at the discharge point, an analytical solution of the forward migration problem is obtained via the Laplace transform and its inverse; conversely, from measured nuclide concentrations at a point downstream in the fracture, the inverse problem, i.e., recovering the nuclide state at the discharge point, is solved using the superposition principle for partial differential equations and the quasi-solution method for inverse problems. Numerical simulations of both the forward and inverse problems are presented. The results show that the analytical solution of the forward problem captures the migration behavior of the nuclide, and that the proposed inversion approach effectively identifies the contamination source.
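Since the forward solution is built from the Laplace transform and its inverse, a numerical inversion routine is a natural companion when the inverse transform has no closed form. A minimal Gaver–Stehfest sketch (a standard algorithm, not taken from the paper), verified on F(s) = 1/(s+1):

```python
import numpy as np
from math import factorial

def stehfest_coefficients(N=12):
    """Gaver-Stehfest weights V_i for even N."""
    M = N // 2
    V = np.zeros(N)
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, M) + 1):
            s += (k**M * factorial(2 * k) /
                  (factorial(M - k) * factorial(k) * factorial(k - 1) *
                   factorial(i - k) * factorial(2 * k - i)))
        V[i - 1] = (-1)**(M + i) * s
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_coefficients(N)
    ln2_t = np.log(2.0) / t
    return ln2_t * sum(V[k] * F((k + 1) * ln2_t) for k in range(N))

# Sanity check: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, invert_laplace(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))
```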

4.
张亚芳  刘洁 《岩土力学》1991,12(3):24-34
Coefficient inversion of the wave equation is an important method for identifying the physical and mechanical parameters of subsurface media. Previous studies in this area have generally been based on elastic models; this paper proposes a viscoelastic inversion model, which more realistically reflects how waves actually propagate underground. On the basis of this viscoelastic model, a complete and effective numerical inversion procedure is developed, with the inversion carried out in the frequency domain. The final numerical results are satisfactory, confirming the soundness of the proposed model and method.

5.
Thermal recovery can entail considerably higher costs than conventional oil recovery, so computational optimization techniques may be beneficial in designing and operating these processes. Optimization, however, requires many simulations, which results in substantial computational cost. Here, we implement a model-order reduction technique that aims at large reductions in computational requirements. The technique considered, trajectory piecewise linearization (TPWL), represents new solutions in terms of linearizations around previously simulated (and saved) training solutions. The linearized representation is projected into a low-dimensional space, with the projection matrix constructed through proper orthogonal decomposition of solution "snapshots" generated in the training step. Two idealized problems are considered: primary production of oil driven by downhole heaters, and a simplified model for steam-assisted gravity drainage in which water and steam are treated as a single "effective" phase. The strong temperature dependence of oil viscosity is included in both cases. TPWL results for these systems demonstrate that the method can provide accurate predictions relative to full-order reference solutions. Observed runtime speedups are very substantial, exceeding two orders of magnitude for the cases considered. The overhead associated with TPWL model construction is equivalent to the computation time of several full-order simulations (the precise overhead depends on the number of training runs), so the method is only applicable if many simulations are to be performed.
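A minimal sketch of the POD step that builds the low-dimensional projection from training snapshots (the TPWL bookkeeping of saved states and Jacobians is omitted; names and sizes are ours):

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Build a POD projection matrix Phi from solution snapshots.

    snapshots : (n_cells, n_snapshots) array of saved training states.
    Returns (Phi, x_mean) with r << n_cells columns capturing `energy`
    of the snapshot variance; reduced states are z = Phi.T @ (x - x_mean).
    """
    x_mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - x_mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], x_mean

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 40))   # stand-in snapshot matrix (5000 cells, 40 saves)
Phi, xm = pod_basis(X)
print(Phi.shape)                  # (5000, r) with small r
```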

6.
Waterflooding is a common secondary oil recovery process. The performance of waterfloods in mature fields with a significant number of wells can be improved with minimal infrastructure investment by optimizing the injection/production rates of individual wells. A major bottleneck in the optimization framework, however, is the large number of reservoir flow simulations often required. In this work, we propose a new method based on streamline-derived information that significantly reduces these computational costs, in addition to exploiting the computational efficiency of streamline simulation itself. We seek to maximize the long-term net present value of a waterflood by determining optimal individual well rates, given an expected albeit uncertain oil price and a total fluid injection volume. We approach the optimization problem by decomposing it into two stages that can be implemented in a computationally efficient manner. We show that the two-stage streamline-based approach can be an effective technique for reservoirs with a large number of wells in need of an efficient waterflooding strategy over a 5- to 15-year period.
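The objective being maximized is a discounted net present value; a minimal sketch of such an NPV function (prices, costs, and names are placeholders, not the paper's):

```python
import numpy as np

def waterflood_npv(q_oil, q_winj, q_wprod, dt_years,
                   oil_price=70.0, inj_cost=5.0, prod_cost=8.0, rate=0.10):
    """Discounted NPV of a waterflood given per-step field volumes.

    q_oil, q_winj, q_wprod : oil produced, water injected, water produced
    per timestep (bbl); dt_years : timestep lengths in years.
    """
    t = np.cumsum(dt_years)
    discount = (1.0 + rate) ** (-t)
    cash = oil_price * q_oil - inj_cost * q_winj - prod_cost * q_wprod
    return float(np.sum(discount * cash))

# The optimizer would perturb per-well rate allocations (stage 1: streamline-
# based reallocation; stage 2: refinement) and rank candidates by this NPV.
print(waterflood_npv(np.array([1e5, 8e4]), np.array([1.2e5, 1.2e5]),
                     np.array([2e4, 4e4]), np.array([1.0, 1.0])))
```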

7.
The least squares Monte Carlo method is a decision evaluation method that can capture the effect of uncertainty and the value of flexibility of a process. The method is a stochastic, approximate dynamic programming approach to decision making, based on a forward simulation coupled with a recursive algorithm that produces a near-optimal policy. It relies on Monte Carlo simulation to produce convergent results, which incurs a significant computational requirement for reservoir engineering problems because many reservoir simulations must be run. The objective of this study was to enhance the performance of the least squares Monte Carlo method by improving the sampling of the technical uncertainties used in obtaining the production profiles. The probabilistic collocation method has been proven to be a robust and efficient uncertainty quantification method. By using its sampling strategy to approximate the sampling of the technical uncertainties, the computational requirement of the decision evaluation can be reduced significantly. We thus introduce the least squares probabilistic collocation method. The decision evaluation considered a number of technical and economic uncertainties, using three reservoir case studies: a simple homogeneous model, the PUNQ-S3 model, and a modified portion of the SPE10 model. The results show that the sampling techniques of the probabilistic collocation method produce relatively accurate responses compared with the original method. Possible enhancements are discussed for practically adapting the least squares probabilistic collocation method to more realistic and complex reservoir models, and future work will apply the method to high-dimensional decision scenarios for chemical enhanced oil recovery processes using real reservoir data.
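The core of least squares Monte Carlo is a backward recursion in which the continuation value is regressed on the simulated state. The textbook Longstaff–Schwarz-style sketch below prices an American put on GBM paths as a stand-in; in the reservoir setting, simulated production profiles would replace the paths:

```python
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=3):
    """Least squares Monte Carlo (Longstaff-Schwartz) for an American put."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Forward simulation of GBM paths (the 'production profiles' here).
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)              # exercise value at maturity
    for k in range(n_steps - 2, -1, -1):              # backward recursion
        cash *= np.exp(-r * dt)                       # discount one step
        itm = K - S[:, k] > 0.0                       # in-the-money paths only
        if itm.sum() > 3:
            coef = np.polyfit(S[itm, k], cash[itm], 2)    # continuation regression
            cont = np.polyval(coef, S[itm, k])
            exercise = K - S[itm, k]
            cash[itm] = np.where(exercise > cont, exercise, cash[itm])
    return float(np.exp(-r * dt) * cash.mean())       # discount to time zero

print(lsm_american_put())   # close to the classic reference value of about 4.47
```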

8.
Inverse modeling involves repeated evaluations of forward models, which can be computationally prohibitive for large numerical models. To reduce the overall computational burden of these simulations, we study the use of reduced order models (ROMs) as numerical surrogates. Such ROMs typically use solutions of the high-fidelity model at selected sample points within the parameter space to construct an approximate solution at any other point in that space. This paper examines an input–output relational approach based on Gaussian process regression (GPR). We show that these ROMs are more accurate than linear lookup tables built from the same number of high-fidelity simulations. We describe an adaptive sampling procedure that automatically selects optimal sample points, and demonstrate the use of GPR on both a smooth response surface and a response surface with abrupt changes. We also describe how GPR can be used to construct ROMs for models with heterogeneous material properties. Finally, we demonstrate how a GPR-based ROM significantly reduces the total computational effort in two many-query applications: uncertainty quantification and global sensitivity analysis.
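A minimal sketch of a GPR surrogate with adaptive sampling at the point of largest predictive uncertainty, using scikit-learn (the toy forward model and the max-variance criterion are our illustrative assumptions, not necessarily the paper's exact selection rule):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_forward_model(x):
    """Stand-in for a high-fidelity simulation (one input parameter)."""
    return np.sin(3.0 * x) + 0.5 * x

# Start from a few high-fidelity runs, then adaptively add the point
# where the GPR surrogate is least certain (largest predictive std).
X = np.array([[0.0], [1.0], [2.0]])
y = expensive_forward_model(X).ravel()
grid = np.linspace(0.0, 2.0, 201).reshape(-1, 1)

for _ in range(5):
    gpr = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gpr.fit(X, y)
    mean, std = gpr.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]                    # adaptive sample selection
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_forward_model(x_new[0]))

print(len(X), "high-fidelity runs; max predictive std:", std.max())
```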

9.
李兰 《水科学进展》1999,10(1):7-13
Based on inverse-boundary, inverse-dynamic control theory, the dynamic control of river water pollution is formulated as a mixed inverse-boundary/inverse-dynamic control problem. For computing the discharge concentrations and total discharge loads of single or multiple pollution sources, an exact inverse-control algorithm for the one-dimensional advection–diffusion equation is proposed. Compared with existing optimal control methods, the approach fully accounts for the dilution and mixing capacity along the river as well as for dynamic water-quality standards and socio-economic changes, and it yields an approximation to the exact solution of the dynamic control problem.
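The forward operator in this control problem is the one-dimensional advection–diffusion equation; a minimal explicit upwind/central finite-difference solve is sketched below (illustrative only; it does not reproduce the paper's exact inverse-control algorithm):

```python
import numpy as np

def advect_diffuse(c0, u, D, dx, dt, n_steps, c_in):
    """Explicit scheme for c_t + u c_x = D c_xx with u > 0.

    c0 : initial concentration profile; c_in(t) : upstream boundary value
    (the discharge concentration that the inverse problem would recover).
    """
    assert u * dt / dx <= 1.0 and 2.0 * D * dt / dx**2 <= 1.0   # stability
    c = c0.copy()
    for n in range(n_steps):
        c_new = c.copy()
        c_new[1:-1] = (c[1:-1]
                       - u * dt / dx * (c[1:-1] - c[:-2])               # upwind advection
                       + D * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2])) # diffusion
        c_new[0] = c_in(n * dt)       # upstream (source) boundary
        c_new[-1] = c_new[-2]         # free outflow
        c = c_new
    return c

c = advect_diffuse(np.zeros(101), u=0.5, D=0.01, dx=0.1, dt=0.1,
                   n_steps=200, c_in=lambda t: 1.0 if t < 5.0 else 0.0)
print(c.max())
```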

10.
11.
王婷  郑建国  邵生俊 《岩土力学》2009,30(Z2):494-498
Based on an analysis of measured subgrade settlement data from the test section of the Zhengzhou–Xi'an passenger dedicated line, it is shown that conventional settlement-prediction methods are strongly affected by the observation period and by individual observations, which makes them unsuitable for predicting final settlement. Analysis of the measured settlement-versus-time curves indicates that subgrade settlement is governed mainly by consolidation and develops from increasing settlement toward gradual stabilization; it can be clearly divided into consolidation deformation and rheological (creep) deformation, which is explained using the equivalent consolidation theory for unsaturated soils. A method is proposed for predicting the final settlement from the characteristics of the settlement–time (s–t) curve; comparisons show that its predictions are much less sensitive to the observation period and to individual observations than those of other conventional methods.
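For context, one of the conventional curve-fitting baselines such comparisons use is the hyperbolic method, where s(t) = t/(a + bt) and the predicted final settlement is 1/b. A sketch with synthetic numbers (not the paper's data or its s–t-feature method):

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, a, b):
    """Conventional hyperbolic settlement model s(t) = t / (a + b t);
    the predicted final settlement is s_inf = 1 / b."""
    return t / (a + b * t)

t_obs = np.array([30., 60., 90., 120., 180., 240., 300.])    # days
s_obs = np.array([12., 21., 27., 31., 37., 40., 42.])        # mm (synthetic)
(a, b), _ = curve_fit(hyperbolic, t_obs, s_obs, p0=(1.0, 0.02))
print("predicted final settlement:", 1.0 / b, "mm")
```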

12.
The first order reliability method (FORM) is efficient but of limited accuracy; the second order reliability method (SORM) provides greater accuracy at additional computational effort. In this study, a new method that integrates two quasi-Newton approximation algorithms is proposed to efficiently estimate the second order reliability of geotechnical problems with reasonable accuracy. In particular, the Hasofer–Lind–Rackwitz–Fiessler–Broyden–Fletcher–Goldfarb–Shanno (HLRF–BFGS) algorithm is applied to identify the design point on the limit state function (LSF) and thereby compute the first order reliability index, while the Symmetric Rank-one (SR1) algorithm is nested within the HLRF–BFGS algorithm to compute, at reduced computational effort, good approximations of the Hessian matrix required for second order reliabilities. Three typical geotechnical problems are employed to demonstrate the ability of the suggested procedure, and the advantages of the proposed approach with respect to conventional alternatives are discussed. Results show that the proposed method achieves the accuracy of conventional SORM at a computational cost equal to that of HLRF–BFGS-based FORM.
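For reference, the plain HLRF iteration underlying these variants (without the BFGS/SR1 quasi-Newton enhancements the paper adds) can be sketched as follows; the finite-difference gradient is our simplification:

```python
import numpy as np
from scipy.stats import norm

def hlrf_form(g, u0, tol=1e-8, max_iter=100):
    """Basic HLRF iteration for FORM in standard normal space.

    g : limit state function of u (failure when g < 0).
    Returns the design point u*, reliability index beta = ||u*||,
    and the first order failure probability Phi(-beta).
    """
    def grad(u, eps=1e-6):                     # forward-difference gradient
        g0 = g(u)
        return np.array([(g(u + eps * e) - g0) / eps for e in np.eye(len(u))])

    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu, dg = g(u), grad(u)
        u_new = (dg @ u - gu) / (dg @ dg) * dg   # HLRF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return u, beta, norm.cdf(-beta)

# Linear check: g(u) = 3 - u1 - u2 has beta = 3 / sqrt(2).
u_star, beta, pf = hlrf_form(lambda u: 3.0 - u[0] - u[1], [0.0, 0.0])
print(beta, 3.0 / np.sqrt(2.0), pf)
```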

13.
We explore and develop a Proper Orthogonal Decomposition (POD)-based deflation method for the solution of ill-conditioned linear systems appearing in simulations of two-phase flow through highly heterogeneous porous media. We accelerate the convergence of a Preconditioned Conjugate Gradient (PCG) method, achieving speed-ups of up to a factor of five. The up-front extra computational cost of the proposed method depends on the number of deflation vectors. The POD-based deflation method is tested for a particular problem and linear solver; nevertheless, it can be applied to various transient problems and combined with multiple solvers, e.g., Krylov subspace and multigrid methods.
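A minimal sketch of conjugate gradients with deflation (generic subdomain-constant deflation vectors stand in for POD modes of solution snapshots; the matrix and sizes are illustrative):

```python
import numpy as np

def deflated_cg(A, b, Z, tol=1e-10, max_iter=500):
    """CG with deflation vectors Z (columns), following the standard
    construction Q = Z E^-1 Z^T, E = Z^T A Z, P = I - A Q:
    solve P A x_hat = P b, then x = Q b + P^T x_hat.
    In POD-based deflation, Z would hold POD modes of solution snapshots."""
    E = Z.T @ A @ Z
    Qb = Z @ np.linalg.solve(E, Z.T @ b)
    P = lambda v: v - A @ (Z @ np.linalg.solve(E, Z.T @ v))

    x = np.zeros_like(b)
    r = P(b - A @ x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = P(A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    # P^T x = x - Q A x for symmetric A and Q.
    return Qb + x - Z @ np.linalg.solve(E, Z.T @ (A @ x))

# SPD test problem: 1-D Poisson matrix, deflation on two subdomain constants.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
Z = np.zeros((n, 2)); Z[:n//2, 0] = 1.0; Z[n//2:, 1] = 1.0
x = deflated_cg(A, b, Z)
print(np.linalg.norm(A @ x - b))
```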

14.
Large deformation soil behavior underpins the operation and performance of a wide range of key geotechnical structures and needs to be properly considered in their modeling, analysis, and design. The material point method (MPM) has recently gained popularity over conventional numerical methods such as the finite element method (FEM) in tackling large deformation problems. In this study, we present a novel hierarchical coupling scheme that integrates MPM with the discrete element method (DEM) for multiscale modeling of large deformation in geomechanics. The MPM is employed to treat a typical boundary value problem that may experience large deformation, and the DEM is used to derive, for each MPM material point, the nonlinear material response from small strain to finite strain. The proposed coupling framework not only inherits the advantages of MPM over FEM in tackling large deformation engineering problems (e.g., no need for remeshing to avoid mesh distortion), but also avoids the need for complicated, phenomenological assumptions on constitutive models for soil exhibiting high nonlinearity at finite strain. The framework makes it convenient to relate rich grain-scale information and key micromechanical mechanisms to macroscopic observations of granular soils over all deformation levels, from the initial small-strain stage through the large-deformation regime before failure. Several classic geomechanics examples, including a biaxial compression test, a rigid footing, soil-pipe interaction, and soil column collapse, are used to demonstrate the key features the new MPM/DEM framework offers for large deformation simulations.
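Schematically, the hierarchical coupling has MPM drive the boundary value problem while each material point queries a DEM assembly (an RVE) for its stress update. In the sketch below the DEM call is replaced by a linear hypoelastic placeholder so the example stays self-contained; everything here is our schematic, not the authors' implementation:

```python
import numpy as np

def rve_response(strain_increment, state):
    """Stand-in for a DEM assembly (RVE) returning the stress increment;
    a real implementation would deform a particle packing and homogenize.
    Here: linear hypoelastic placeholder (plane-strain-like, 2x2 tensors)."""
    E, nu = 50e6, 0.3
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    de = strain_increment
    return lam * np.trace(de) * np.eye(2) + 2 * mu * de, state

# Hierarchical MPM/DEM loop (schematic): MPM advances the grid solution;
# each material point asks its own RVE for the stress given a strain increment.
material_points = [{"stress": np.zeros((2, 2)), "state": None} for _ in range(4)]
for step in range(10):
    for mp in material_points:
        de = 1e-5 * np.array([[1.0, 0.2], [0.2, -0.3]])   # strain inc. from MPM grid
        dsig, mp["state"] = rve_response(de, mp["state"])
        mp["stress"] += dsig
print(material_points[0]["stress"])
```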

15.
《地学前缘(英文版)》2020,11(5):1859-1873
Calculations of risk from natural disasters may require ensembles of hundreds of thousands of simulations to accurately quantify the complex relationships between the outcome of a disaster and its contributing factors. Such large ensembles cannot typically be run on a single computer due to its limited computational resources. Cloud Computing offers an attractive alternative, with almost unlimited capacity for computation, storage, and network bandwidth. However, there are no clear mechanisms defining how to implement these complex natural disaster ensembles on the Cloud with minimal time and resources. This paper therefore proposes a system framework with two phases of cost optimization to run the ensembles as a service over the Cloud. Cost is minimized through efficient distribution of the simulations among cost-efficient instances and intelligent choice of instances based on pricing models. We validate the proposed framework in a real Cloud environment with real wildfire ensemble scenarios under different user requirements. The experimental results give the proposed system an edge over bag-of-tasks-style execution on the Cloud, with lower cost and better flexibility.
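A greedy sketch of the instance-selection idea (instance names, core counts, and prices are placeholders, and the paper's two-phase optimization is far more elaborate than this single rule):

```python
import math

def cheapest_assignment(n_sims, sim_hours, instances):
    """Pick the instance type with the lowest price per core, then spin up
    enough of them to run the whole ensemble in one parallel wave.

    instances : list of (name, cores, price_per_hour); each core is assumed
    to run one simulation at a time.
    """
    name, cores, price = min(instances, key=lambda i: i[2] / i[1])
    n_instances = math.ceil(n_sims / cores)
    cost = n_instances * price * sim_hours   # every instance billed sim_hours
    return name, n_instances, cost

instances = [("small", 4, 0.20), ("medium", 16, 0.70), ("large", 64, 3.10)]
print(cheapest_assignment(n_sims=100_000, sim_hours=0.5, instances=instances))
```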

16.
Deterministic mathematical modeling of complex geologic transport processes may require the use of odd boundary shapes, time dependency, and two or three dimensions. Under these circumstances the governing transport equations must be solved by numerical methods. For a number of transport phenomena a general form of the convective-dispersion equation can be employed. For complicated problems, this equation can readily be solved by the finite-element method. Using quadrilateral isoparametric or triangular elements and a computational algorithm based on Galerkin's procedure, solutions to unsteady heat flux from a dike and to seawater intrusion in an aquifer have been obtained. These examples illustrate that the finite-element numerical procedure is well suited to boundary-value problems resulting from the modeling of complex physical phenomena.
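A minimal Galerkin finite-element sketch for the steady one-dimensional convective-dispersion equation with linear elements (our simplification of the 2-D isoparametric setting described above):

```python
import numpy as np

def fem_advection_dispersion(n_el, L, v, D, c_left, c_right):
    """Galerkin FEM with linear elements for steady v c' = D c''.

    Assembles element dispersion and convection matrices, applies
    Dirichlet boundary values, and returns the nodal concentrations.
    """
    h = L / n_el
    n = n_el + 1
    K = np.zeros((n, n))
    k_diff = D / h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # dispersion
    k_adv = v / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]])  # convection
    for e in range(n_el):
        K[e:e+2, e:e+2] += k_diff + k_adv
    F = np.zeros(n)
    # Dirichlet conditions at both ends.
    K[0, :] = 0.0;  K[0, 0] = 1.0;   F[0] = c_left
    K[-1, :] = 0.0; K[-1, -1] = 1.0; F[-1] = c_right
    return np.linalg.solve(K, F)

c = fem_advection_dispersion(n_el=50, L=1.0, v=1.0, D=0.1,
                             c_left=1.0, c_right=0.0)
print(c[:5])
```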

17.
Coarse-scale data assimilation (DA) with a large ensemble size is proposed as a robust alternative to standard DA with localization for reservoir history matching problems. With coarse-scale DA, the unknown property function associated with each ensemble member is upscaled to a grid significantly coarser than the original reservoir simulation grid. The grid coarsening is automatic, ensemble-specific, and non-uniform. Regions where the grid can be coarsened without introducing too large modelling errors are selected using a second-generation wavelet transform, allowing seamless handling of non-dyadic grids and inactive grid cells. An inexpensive local-local upscaling is performed on each ensemble member, and a DA algorithm that restarts from the initial time is utilized, which avoids the need for downscaling. Since the DA computational cost roughly equals the number of ensemble members times the cost of a single forward simulation, coarse-scale DA allows for a significant increase in the number of ensemble members at the same computational cost as standard DA with localization. With the computational cost fixed for both approaches, the quality of coarse-scale DA is compared to that of standard DA with localization (using state-of-the-art localization techniques) on examples spanning a large degree of variability. Coarse-scale DA is found to be more robust with respect to variation in example type than each of the localization techniques considered with standard DA. Although the paper is concerned with two spatial dimensions, coarse-scale DA extends easily to three spatial dimensions, where its advantage over standard DA with localization is expected to increase.
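A Haar-like block-averaging sketch of the local upscaling idea (the paper's second-generation wavelet transform additionally handles non-dyadic grids, inactive cells, and the automatic, non-uniform region selection, all omitted here):

```python
import numpy as np

def block_upscale(prop, factor):
    """Upscale a 2-D property field by averaging factor x factor blocks,
    a Haar-like local-local upscaling."""
    ny, nx = prop.shape
    assert ny % factor == 0 and nx % factor == 0
    return prop.reshape(ny // factor, factor,
                        nx // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(4)
fine = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64))   # permeability-like field
coarse = block_upscale(fine, 4)
print(fine.shape, "->", coarse.shape)
```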

18.
Most structures are subjected to more cyclic loads during their lifetime than static loads. These cyclic actions may result from natural or man-made activities and can lead to soil failure. To understand the response of a foundation and its interaction with such complex cyclic loadings, researchers have over the years developed various constitutive models. Although much research is being carried out on these relatively new models, few details exist in the literature on the model-based identification of the cyclic constitutive parameters, which to a large extent govern the quality of the model output. This can be attributed to the difficulty and complexity of inverse modeling of such phenomena. A variety of optimization strategies are available for the solution of least-squares problems, as is usually done in model calibration. For the back analysis (calibration) of the soil response to oscillatory load functions, however, this article gives insight into the model calibration challenges and puts forward a method for the inverse modeling of cyclically loaded foundation response such that high-quality solutions are obtained with minimum computational effort.
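A minimal back-analysis sketch in the least-squares spirit described above, calibrating a hypothetical cyclic response model to synthetic observations (the model form and all parameters are ours, not the article's):

```python
import numpy as np
from scipy.optimize import least_squares

def cyclic_response(theta, t):
    """Hypothetical cyclic settlement model: an accumulating trend plus a
    decaying oscillation; theta = (a, b, A, lam, omega)."""
    a, b, A, lam, omega = theta
    return a * (1.0 - np.exp(-b * t)) + A * np.exp(-lam * t) * np.sin(omega * t)

rng = np.random.default_rng(5)
t = np.linspace(0.0, 20.0, 200)
theta_true = (5.0, 0.3, 1.0, 0.1, 2.0)
obs = cyclic_response(theta_true, t) + 0.05 * rng.standard_normal(t.size)

# Back analysis: minimize the residual sum of squares over the parameters.
fit = least_squares(lambda th: cyclic_response(th, t) - obs,
                    x0=(4.0, 0.2, 0.8, 0.08, 1.9))
print(np.round(fit.x, 3))
```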

19.
An iterative inverse method, the sequential self-calibration method, is developed for mapping the spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer and serves as the forward modeling step. In this study, the hydraulic conductivity is treated as a deterministic or random variable. Within the framework of the streamline-based simulator, an efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to hydraulic conductivity variations; the calculated sensitivities account for spatial correlations between the solute concentration and the parameters. The performance of the inverse method is assessed with two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The results indicate that, given appropriate observation wells, the iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer in these synthetic cases.
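A generic sketch of one self-calibration iteration: perturb master-point values, build a sensitivity matrix of the simulated data, and apply a damped Gauss-Newton update. The paper's semi-analytical streamline sensitivities and geostatistical propagation are replaced here by finite differences and a toy linear forward map:

```python
import numpy as np

def self_calibration_step(k, forward, d_obs, master_idx, eps=1e-4, reg=1e-2):
    """One sequential self-calibration iteration (generic sketch).

    k : current log-conductivity field (1-D array of cell values);
    forward(k) : simulated breakthrough data; master_idx : master points.
    """
    d0 = forward(k)
    r = d_obs - d0                                   # data residual
    S = np.zeros((d0.size, len(master_idx)))         # sensitivity matrix
    for j, idx in enumerate(master_idx):
        kp = k.copy(); kp[idx] += eps
        S[:, j] = (forward(kp) - d0) / eps
    dm = np.linalg.solve(S.T @ S + reg * np.eye(len(master_idx)), S.T @ r)
    k_new = k.copy()
    k_new[master_idx] += dm                          # update at master points
    return k_new

rng = np.random.default_rng(7)
M = rng.normal(size=(30, 50))                        # toy linear 'transport' map
k_true = rng.normal(size=50)
d_obs = M @ k_true
k = np.zeros(50)
for _ in range(20):
    k = self_calibration_step(k, lambda x: M @ x, d_obs,
                              master_idx=range(0, 50, 5))
print(np.linalg.norm(M @ k - d_obs))
```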

20.
Spatial inverse problems in the Earth Sciences are often ill-posed, requiring the specification of a prior model to constrain the nature of the inverse solutions; otherwise, inverted model realizations lack geological realism. In spatial modeling, such a prior model determines the spatial variability of the inverse solution, for example as constrained by a variogram, a Boolean model, or a training image-based model. In many cases, particularly in subsurface modeling, one lacks the data to fully determine the nature of the spatial variability. For example, many different training images could be proposed for a given study area. Such alternative training images or scenarios correspond to different possible geological concepts, each exhibiting a distinctive geological architecture. Many inverse methods rely on priors that represent a single, subjectively chosen geological concept (a single variogram within a multi-Gaussian model, or a single training image). This paper proposes a novel and practical parameterization of the prior model that allows several discrete choices of geological architecture within the prior. The method does not attempt to parameterize the possibly complex architectures by a set of model parameters. Instead, a large set of prior model realizations is provided in advance by means of Monte Carlo simulation in which the training image is randomized. The parameterization is achieved by defining a metric space that accommodates this large set of model realizations, equipped with a "similarity distance" function that measures the similarity of geometry between any two model realizations relevant to the problem at hand. Through examples, it is shown that inverse solutions can be found efficiently in this metric space using a simple stochastic search method.
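A minimal sketch of the metric-space idea: compute pairwise distances between prior realizations and embed them in low dimension for search and visualization (Euclidean distance and the scenario construction are placeholders for a problem-relevant similarity measure):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
# 60 prior model realizations (e.g., facies grids flattened to vectors),
# drawn here from three hypothetical training-image scenarios.
realizations = np.vstack([rng.normal(loc=m, size=(20, 400))
                          for m in (-1.0, 0.0, 1.0)])

# "Similarity distance" between any two realizations; Euclidean is a
# stand-in for a problem-relevant measure (e.g., flow-response based).
diff = realizations[:, None, :] - realizations[None, :, :]
D = np.sqrt((diff**2).sum(axis=2))

# Embed the metric space in 2-D; a stochastic search can then operate here.
coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)
print(coords.shape)   # (60, 2); the scenarios appear as clusters of points
```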
