Similar Documents
20 similar documents found (search time: 796 ms)
1.
Constraint preconditioners have proved very efficient for the solution of ill-conditioned finite element (FE) coupled consolidation problems in a sequential computing environment. Their implementation on parallel computers, however, is not straightforward because of their inherent sequentiality. The present paper describes a novel parallel inexact constraint preconditioner (ParICP) for the efficient solution of linear algebraic systems arising from the FE discretization of the coupled poro-elasticity equations. The ParICP implementation is based on the block factorized sparse approximate inverse incomplete Cholesky preconditioner, a recent and effective development for the parallel preconditioning of symmetric positive definite matrices. The ParICP performance is evaluated on real 3D coupled consolidation problems, demonstrating a scalable and efficient implementation of constraint preconditioning for high-performance computing. ParICP appears to be a very robust algorithm for solving ill-conditioned, large-size coupled models in a parallel computing environment.

2.
At various stages of petroleum reservoir development, we encounter a large degree of geological uncertainty under which a rational decision has to be made. In order to identify which parameter or group of parameters significantly affects the output of a decision model, we investigate decision-theoretic sensitivity analysis and its computational issues in this paper. In particular, we employ the so-called expected value of partial perfect information (EVPPI) as a sensitivity index and apply multilevel Monte Carlo (MLMC) methods to the efficient estimation of EVPPI. In a recent paper by Giles and Goda, an antithetic MLMC estimator for EVPPI is proposed and its variance analysis is conducted under some assumptions on a decision model. In this paper, to improve the performance of the MLMC estimator, we incorporate randomized quasi-Monte Carlo methods within the inner sampling, which results in a multilevel quasi-Monte Carlo (MLQMC) estimator. We apply both the antithetic MLMC and MLQMC estimators to a simple waterflooding decision problem under uncertainty on absolute permeability and relative permeability curves. Through numerical experiments, we compare the performances of the MLMC and MLQMC estimators and confirm a significant advantage of the MLQMC estimator.
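The telescoping-sum idea behind MLMC estimators like the one above can be illustrated with a minimal sketch. This is a toy under stated assumptions, not the paper's EVPPI estimator: we estimate E[f(X)] for f(x) = x² with X ~ U(0,1), where the level-l "approximation" quantizes x to 2^l bins, and each level difference is sampled with the same draw of x (the coupling that makes the differences small).

```python
import random

def P(x, level):
    # Level-l approximation: quantize x to 2**level bins before squaring.
    q = int(x * 2 ** level) / 2 ** level
    return q * q

def mlmc_estimate(max_level, samples_per_level, rng):
    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with each difference sampled using the SAME draw of x (coupling).
    total = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(samples_per_level[level]):
            x = rng.random()
            if level == 0:
                acc += P(x, 0)
            else:
                acc += P(x, level) - P(x, level - 1)
        total += acc / samples_per_level[level]
    return total

rng = random.Random(42)
# More samples on coarse levels, fewer on fine ones (the differences shrink).
est = mlmc_estimate(6, [8000, 4000, 2000, 1000, 500, 250, 125], rng)
print(est)  # close to E[X^2] = 1/3, up to a small quantization bias
```

The coarse levels carry most of the variance and get most of the samples; the fine levels correct the bias cheaply, which is the whole point of the multilevel construction.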

3.
To improve the efficiency of massive spatial data management and parallel processing in grid environments, this paper combines distributed parallel processing in grids with spatial indexing and proposes a spatial index framework, the grid slot and hash R-tree (GSHR-Tree). The index structure is based on a hash table and dynamic spatial slots, combining the range-query strength of the R-tree with the efficient single-key lookup of the hash table; the organization and storage of the index structure are analyzed and improved. An index structure suited to parallel spatial computation over large-scale spatial data in a grid is constructed. Following a spatial data partitioning strategy, the index algorithm dynamically splits spatial slots and maps them onto multiple node machines. Each node then organizes the spatial objects in its slots into an R-tree, so that the index data are distributed across the nodes as large-node R-trees. Using the system response time of parallel spatial range queries as the performance metric, simulation experiments show that the GSHR-Tree index meets the needs of spatial indexing in current grid environments and is both soundly designed and efficient.

4.
Well placement and control optimization in oil field development are commonly performed in a sequential manner. In this work, we propose a joint approach that embeds well control optimization within the search for optimum well placement configurations. We solve for well placement using derivative-free methods based on pattern search. Control optimization is solved by sequential quadratic programming using gradients efficiently computed through adjoints. Joint optimization yields a significant increase, of up to 20% in net present value, when compared to reasonable sequential approaches. The joint approach does, however, require about an order of magnitude increase in the number of objective function evaluations compared to sequential procedures. This increase is somewhat mitigated by the parallel implementation of some of the pattern-search algorithms used in this work. Two pattern-search algorithms using eight and 20 computing cores yield speedup factors of 4.1 and 6.4, respectively. A third pattern-search procedure based on a serial evaluation of the objective function is less efficient in terms of clock time, but the optimized cost function value obtained with this scheme is marginally better.
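The derivative-free pattern search used for the placement step above can be sketched with a minimal compass-search variant. The objective and step parameters here are illustrative stand-ins (a toy quadratic, not a reservoir NPV model), and this is only one of several pattern-search flavors:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    # Poll the 2*d axis directions; move to the first improving point,
    # otherwise halve the step size (a minimal pattern-search variant).
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step /= 2.0
    return x, fx

# Toy objective standing in for (negative) NPV of a placement decision.
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
xopt, fopt = compass_search(obj, [8.0, 8.0])
print(xopt, fopt)  # near [1.0, -2.0], objective near 0
```

The polls within one iteration are independent function evaluations, which is what makes the parallel implementation mentioned in the abstract natural.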

5.
To collect and organize borehole data and geological records for sandstone-type uranium deposits, build a database of uranium geological boreholes and exploration information, manage borehole records in a unified way, and improve the efficiency of integrated use of borehole geological data, an integrated uranium-mine management information platform was designed and implemented to meet practical application needs. A four-layer architecture for the uranium big-data platform is proposed, covering infrastructure, information resources, application services, and user interaction. Cloud virtualization, distributed storage, and parallel computing technologies are used to build the big-data environment, strengthening the unified storage management and computing capacity for large uranium borehole datasets. The platform achieves intelligent extraction, efficient transformation, and fast loading of multi-source heterogeneous borehole data, improving the efficiency of data management and integrated application. Based on parallel computing, it provides fast 3D visualization of borehole data and fast multi-condition queries, supplying a data foundation and information-technology support for uranium exploration and the integration of exploration results.

6.
Prestack depth migration and nonzero-offset acoustic wave-equation forward modeling contain a large amount of parallelizable computation. This paper proposes a network-parallel algorithm for prestack forward modeling and migration and, based on the TCP/IP protocol, implements it as a network-parallel program, greatly improving computational efficiency. Production runs show that the proposed parallel algorithm and technical approach are practical and feasible.

7.
We present a high-order method for miscible displacement simulation in porous media. The method is based on discontinuous Galerkin discretization with a weighted average stabilization technique and flux reconstruction post-processing. The mathematical model is decoupled and solved sequentially. We apply a domain decomposition and algebraic multigrid preconditioner for the linear system resulting from the high-order discretization. The accuracy and robustness of the method are demonstrated in convergence studies with analytical solutions and heterogeneous porous media, respectively. We also investigate the effect of grid orientation and anisotropic permeability using the high-order discontinuous Galerkin method in contrast with a cell-centered finite volume method. The study of the parallel implementation shows the scalability and efficiency of the method on parallel architectures. We also verify the simulation results on the highly heterogeneous permeability field from the SPE10 model.

8.
Large-scale engineering computing using the discontinuous deformation analysis (DDA) method is time-consuming, which hinders the application of the DDA method. The simulation result of a typical numerical example indicates that the linear equation solver is a key factor that affects the efficiency of the DDA method. In this paper, highly efficient algorithms for solving linear equations are investigated, and two modifications of the DDA programme are presented. The first modification is a linear equation solver with high efficiency. The block Jacobi (BJ) iterative method and the block conjugate gradient with Jacobi preconditioning (Jacobi-PCG) iterative method are introduced, and the key operations are detailed, including the matrix-vector product and the diagonal matrix inversion. The second modification is a parallel linear equation solver, constructed separately on multi-thread and CPU-GPU heterogeneous platforms with OpenMP and CUDA, respectively. The simulation results from several numerical examples using the modified DDA programme demonstrate that Jacobi-PCG is the better iterative method for large-scale engineering computing and that the adopted parallel strategies can greatly enhance computational efficiency.
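The core of the Jacobi-PCG scheme described above can be sketched compactly. This is a generic dense-matrix illustration (pure Python lists for self-containment, not the DDA programme's blocked implementation): the preconditioner M = diag(A) is applied by element-wise division, which is exactly the "diagonal matrix inversion" the abstract highlights as cheap and parallel-friendly.

```python
def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    # Preconditioned conjugate gradients with M = diag(A): applying M^-1
    # is an element-wise division, which is why Jacobi parallelizes well.
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                   # residual b - A*0
    z = [r[i] / A[i][i] for i in range(n)]     # z = M^-1 r
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Symmetric positive definite test system: diagonally dominant tridiagonal.
n = 5
A = [[4.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = jacobi_pcg(A, b)
print(x)
```

The only global operations are the dot products; the matrix-vector product and the preconditioner application decompose row-by-row, which is what the block-parallel OpenMP/CUDA variants exploit.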

9.
The simulation of non-point source pollution in agricultural basins is a computationally demanding process due to the large number of individual sources and potential pollution receptors (e.g., drinking water wells). In this study, we present an efficient computational framework for parallel simulation of diffuse pollution in such groundwater basins. To derive a highly detailed velocity field, we employed algebraic multigrid (AMG) preconditioners to solve the groundwater flow equation. We compare two variants of AMG implementations, the multilevel preconditioning provided by Trilinos and the BoomerAMG provided by HYPRE. We also perform a sensitivity analysis on the configuration of AMG methods to evaluate the application of these libraries to groundwater flow problems. For the transport simulation of diffuse contamination, we use the streamline approach, which decomposes the 3D transport problem into a large number of 1D problems that can be executed in parallel. The proposed framework is applied to a 2,600-km2 groundwater basin in California discretized into a grid with over 11 million degrees of freedom. Using a Monte Carlo approach with 200 nitrate loading realizations at the aquifer surface, we perform a stochastic analysis to quantify nitrate breakthrough prediction uncertainty at over 1,500 wells due to random, temporally distributed nitrate loading. The results show that there is a significant time lag between loading and aquifer response at production wells. Generally, typical production wells respond after 5–50 years depending on well depth and screen length, while the prediction uncertainty for nitrate in individual wells is very large—approximately twice the drinking water limit for nitrate.

10.
Research on a natural disaster monitoring and prediction grid and key technologies for its implementation
Natural disaster monitoring and prediction is a comprehensive science; the problems it involves, such as disaster data acquisition and the distributed, heterogeneous nature of information resources, constrain industry applications and the development of informatization. The grid, as the infrastructure of a new generation of high-performance computing environments and information services, can achieve dynamic, cross-regional resource sharing and integration. The Natural Disaster Prediction Grid (NDPG) is based on grid technology and serves disaster monitoring and prediction work. This paper gives a preliminary discussion of the concept, research content, and framework of the NDPG, and discusses several key technical problems in its implementation.

11.
Because of the occurrence of ill-conditioning and outliers, the use of direct least-squares fitting is in decline, while robust M-estimators are attracting attention. We present new algorithms based on the Spingarn partial inverse proximal decomposition method for L1 and Huber-M estimation that take into account both the primal and dual aspects of the underlying optimization problem. The result is a family of highly parallel algorithms. Globally convergent, they are attractive for large-scale problems as encountered in geodesy, especially in the field of Earth orientation data analysis. The method is extended to handle box-constrained problems. Remedies are introduced to maintain efficiency for models with less than full rank. Numerical results are discussed. Robust data pre-conditioning is shown to induce faster algorithm convergence. Practical implementation aspects are presented, with application to series describing the Earth's rotation.
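The robustness of Huber-M estimation referred to above can be seen in a minimal sketch. Note the caveat: the paper's algorithm is the Spingarn partial-inverse decomposition; the sketch below instead uses iteratively reweighted least squares (IRLS), a simpler and more common sequential scheme for the same Huber-M line-fit problem, with entirely hypothetical data.

```python
def huber_irls_line(xs, ys, delta=1.0, iters=50):
    # IRLS for a Huber-M straight-line fit y = a*x + b.
    # Weight w = 1 inside |r| <= delta, delta/|r| outside (downweights outliers).
    a, b = 0.0, 0.0
    for _ in range(iters):
        w = []
        for x, y in zip(xs, ys):
            r = y - (a * x + b)
            w.append(1.0 if abs(r) <= delta else delta / abs(r))
        # Weighted normal equations for the 2-parameter model.
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = swxx * sw - swx * swx
        a = (swxy * sw - swx * swy) / det
        b = (swxx * swy - swx * swxy) / det
    return a, b

# Hypothetical data: line y = 2x + 1 with one gross outlier at the end.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0, 60.0]   # last point is the outlier
a, b = huber_irls_line(xs, ys)
print(a, b)  # slope stays close to 2; ordinary least squares gives slope 9 here
```

The outlier's weight collapses to roughly delta over its residual, so it barely influences the fit; a plain least-squares fit of the same data is pulled to a slope of 9.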

12.
The increasing use of unstructured grids for reservoir modeling motivates the development of geostatistical techniques to populate them with properties such as facies proportions, porosity and permeability. Unstructured grids are often populated by upscaling high-resolution regular grid models, but the size of the regular grid becomes unreasonably large to ensure that there is sufficient resolution for small unstructured grid elements. The properties could be modeled directly on the unstructured grid, which leads to an irregular configuration of points in the three-dimensional reservoir volume. Current implementations of Gaussian simulation for geostatistics are for regular grids. This paper addresses important implementation details involved in adapting sequential Gaussian simulation to populate irregular point configurations including general storage and computation issues, generating random paths for improved long range variogram reproduction, and search strategies including the superblock search and the k-dimensional tree. An efficient algorithm for computing the variogram of very large irregular point sets is developed for model checking.
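The experimental variogram mentioned for model checking can be sketched in its simplest brute-force form for scattered 2D points (the paper's contribution is doing this efficiently for very large point sets; the synthetic field below is purely illustrative):

```python
import math
import random

def experimental_variogram(points, values, lag, n_bins):
    # gamma(h) = average of 0.5*(z_i - z_j)^2 over pairs whose separation
    # falls in each distance bin of width `lag` (brute-force over all pairs).
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            k = int(d / lag)
            if k < n_bins:
                sums[k] += 0.5 * (values[i] - values[j]) ** 2
                counts[k] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Irregular points sampling a smooth synthetic field: nearby points have
# similar values, so gamma should increase with separation distance.
rng = random.Random(0)
pts = [(rng.random() * 10, rng.random() * 10) for _ in range(200)]
vals = [math.sin(x / 3.0) + math.cos(y / 3.0) for x, y in pts]
gamma = experimental_variogram(pts, vals, lag=1.0, n_bins=5)
print(gamma)  # increases with lag distance for this smooth field
```

The all-pairs loop is O(n²), which is precisely why an efficient algorithm is needed once the irregular point set reaches the sizes discussed in the abstract.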

13.
The current availability of thousands of processors at many high performance computing centers has made it feasible to carry out, in near real time, interactive visualization of 3D mantle convection temperature fields, using grid configurations having 10–100 million unknowns. We describe the technical details involved in carrying out this endeavor, using the facilities available at the Laboratory of Computational Science and Engineering (LCSE) at the University of Minnesota. These technical details involve the modification of a parallel mantle convection program, ACuTEMan; the use of client–server socket-based programs to transfer upwards of a terabyte of time-series scientific model data over a local network; a rendering system containing multiple nodes; a high-resolution PowerWall display; and the interactive visualization software, DSCVR. We have found that working in an interactive visualization mode allows for fast and efficient analysis of mantle convection results. Electronic supplementary material: the online version of this article (doi:) contains supplementary material, which is available to authorized users.

14.
We propose a new algorithm for the problem of approximate nearest neighbors (ANN) search in a regularly spaced low-dimensional grid for interpolation applications. It associates every sampled point with its nearest interpolation location, and then expands its influence to neighboring locations in the grid, until the desired number of sampled points is reached at every grid location. Our approach exploits knowledge of the regular grid spacing to avoid measuring the distance between sampled points and grid locations. We compared our approach with four different state-of-the-art ANN algorithms in a large set of computational experiments. In general, our approach requires low computational effort, especially for cases with a high density of sampled points, while the observed error is not significantly different. Finally, a case study is presented in which ionosphere dynamics are predicted daily using samples from a mathematical model running in parallel at 56 different longitude coordinates, providing unevenly distributed sampled points that follow Earth's magnetic field lines. Our approach outperforms the competing algorithms when the ratio between the number of sampled points and grid locations exceeds 2849:1.

15.
Parallel computers are potentially very attractive for the implementation of large-size geomechanical models. One of the main difficulties of parallelization, however, lies in the efficient solution of the frequently ill-conditioned algebraic system arising from the linearization of the discretized equilibrium equations. While very efficient preconditioners have been developed for sequential computers, not much work has been devoted to parallel solution algorithms in geomechanics. The present study investigates the state-of-the-art performance of the factorized sparse approximate inverse (FSAI) as a preconditioner for the iterative solution of ill-conditioned geomechanical problems. Pre- and post-filtration strategies are tested to increase the FSAI efficiency. Numerical results show that FSAI exhibits promising potential for parallel geomechanical models, mainly because of its almost ideal scalability. With the present formulation, however, at least 4 or 8 processors are required in the selected test cases to outperform one of the most efficient sequential algorithms available for FE geomechanics, i.e. the multilevel incomplete factorization (MIF). Further research is needed to improve the FSAI efficiency with a more effective selection of the preconditioner non-zero pattern.

16.
This paper presents a finite-volume method for hexahedral multiblock grids to calculate multiphase flow in geologically complex reservoirs. Accommodating complex geologic and geometric features in a reservoir model (e.g., faults) entails non-orthogonal and/or unstructured grids in place of conventional (globally structured) Cartesian grids. To obtain flexibility in gridding as well as efficient flow computation, we use hexahedral multiblock grids. These grids are locally structured, but globally unstructured. One major advantage of these grids over fully unstructured tetrahedral grids is that most numerical methods developed for structured grids can be directly used for dealing with the local problems. We present several challenging examples, generated via a commercially available tool, that demonstrate the capabilities of hexahedral multiblock gridding. Grid quality is discussed in terms of uniformity and orthogonality. The presence of non-orthogonal grid and full permeability tensors requires the use of multi-point discretization methods. A flux-continuous finite-difference (FCFD) scheme, previously developed for stratigraphic hexahedral grid with full-tensor permeability, is employed for numerical flow computation. We extend the FCFD scheme to handle exceptional configurations (i.e. three- or five-cell connections as opposed to the regular four), which result from employing multiblock gridding of certain complex objects. In order to perform flow simulation efficiently, we employ a two-level preconditioner for solving the linear equations that results from the wide stencil of the FCFD scheme. The individual block, composed of cells that form a structured grid, serves as the local level; the higher level operates on the global block configuration (i.e. unstructured component). The implementation uses an efficient data structure where each block is wrapped with a layer of neighboring cells. 
We also examine splitting techniques [14] for the linear systems associated with the wide stencils of our FCFD operator. We present three numerical examples that demonstrate the method: (1) a pinchout, (2) a faulted reservoir model with internal surfaces and (3) a real reservoir model with multiple faults and internal surfaces.

17.
Acoustic imaging and sensor modeling are processes that require repeated solution of the acoustic wave equation. Solution of the wave equation can be computationally expensive and memory intensive for large simulation domains. One scheme for speeding up solution of the wave equation is the operator-based upscaling method. The algorithm proceeds in two steps. First, the wave equation is solved for fine grid unknowns internal to coarse blocks assuming the coarse blocks do not need to communicate with neighboring blocks in parallel. Second, these fine grid solutions are used to form a new problem which is solved on the coarse grid. Accurate and efficient wave propagation schemes also must avoid artificial reflections off of the computational domain edges. One popular method for preventing artificial reflections is the nearly perfectly matched layer (NPML) method. In this paper, we discuss applying NPML to operator upscaling for the wave equation. We show that although we only apply NPML to the first step of this two step algorithm (directly affecting the fine grid unknowns only), we still see a significant reduction of reflections back into the domain. We describe three numerical experiments (one homogeneous medium experiment and two heterogeneous media examples) in which we validate that the solution of the wave equation exponentially decays in the NPML regions. Numerical experiments of acoustic wave propagation in two dimensions with a reasonable absorbing layer thickness resulted in a maximum pressure reflection of 3–8%. While the coarse grid acceleration is not explicitly damped in our algorithm, the tight coupling between the two steps of the algorithm results in only 0.1–1% of acceleration reflecting back into the computational domain.

18.
In this work, we present an efficient matrix-free ensemble Kalman filter (EnKF) algorithm for the assimilation of large data sets. The EnKF has increasingly become an essential tool for data assimilation of numerical models. It is an attractive assimilation method because it can evolve the model covariance matrix for a non-linear model through the use of an ensemble of model states, and it is easy to implement for any numerical model. Nevertheless, the computational cost of the EnKF can increase significantly for cases involving the assimilation of large data sets. As more data become available for assimilation, a potential bottleneck in most EnKF algorithms involves the operation of the Kalman gain matrix. To reduce the complexity and cost of assimilating large data sets, a matrix-free EnKF algorithm is proposed. The algorithm uses an efficient matrix-free linear solver, based on the Sherman–Morrison formulas, to solve the implicit linear system within the Kalman gain matrix and compute the analysis. Numerical experiments with a two-dimensional shallow water model on the sphere are presented, where results show the matrix-free implementation outperforming a singular value decomposition (SVD)-based implementation in computational time.
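The Sherman–Morrison identity at the core of such matrix-free solvers can be sketched for a single rank-one update (an illustrative toy, not the paper's EnKF code): (A + uvᵀ)⁻¹b = A⁻¹b − [(vᵀA⁻¹b)/(1 + vᵀA⁻¹u)] A⁻¹u, so solving the updated system needs only solves with A itself.

```python
def solve_rank1_update(solve_A, u, v, b):
    # Sherman-Morrison: solve (A + u v^T) x = b using only solves with A,
    # never forming the updated matrix explicitly.
    n = len(b)
    Ainv_b = solve_A(b)
    Ainv_u = solve_A(u)
    factor = sum(v[i] * Ainv_b[i] for i in range(n)) / \
             (1.0 + sum(v[i] * Ainv_u[i] for i in range(n)))
    return [Ainv_b[i] - factor * Ainv_u[i] for i in range(n)]

# Here A is diagonal, so solve_A is an element-wise division (matrix-free).
d = [2.0, 3.0, 4.0]
solve_A = lambda rhs: [rhs[i] / d[i] for i in range(len(rhs))]
u = [1.0, 0.0, 1.0]
v = [0.0, 1.0, 1.0]
b = [1.0, 2.0, 3.0]
x = solve_rank1_update(solve_A, u, v, b)
print(x)
```

Applied recursively over the ensemble members, this is how the paper's solver avoids ever assembling or decomposing the full Kalman-gain system.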

19.
The use of the ensemble smoother (ES) instead of the ensemble Kalman filter increases the nonlinearity of the update step during data assimilation and the need for iterative assimilation methods. A previous version of the iterative ensemble smoother based on Gauss–Newton formulation was able to match data relatively well but only after a large number of iterations. A multiple data assimilation method (MDA) was generally more efficient for large problems but lacked ability to continue “iterating” if the data mismatch was too large. In this paper, we develop an efficient, iterative ensemble smoother algorithm based on the Levenberg–Marquardt (LM) method of regularizing the update direction and choosing the step length. The incorporation of the LM damping parameter reduces the tendency to add model roughness at early iterations when the update step is highly nonlinear, as it often is when all data are assimilated simultaneously. In addition, the ensemble approximation of the Hessian is modified in a way that simplifies computation and increases stability. We also report on a simplified algorithm in which the model mismatch term in the updating equation is neglected. We thoroughly evaluated the new algorithm based on the modified LM method, LM-ensemble randomized maximum likelihood (LM-EnRML), and the simplified version of the algorithm, LM-EnRML (approx), on three test cases. The first is a highly nonlinear single-variable problem for which results can be compared against the true conditional pdf. The second test case is a one-dimensional two-phase flow problem in which the permeability of 31 grid cells is uncertain. In this case, Markov chain Monte Carlo results are available for comparison with ensemble-based results. The third test case is the Brugge benchmark case with both 10 and 20 years of history. 
The efficiency and quality of results of the new algorithms were compared with the standard ES (without iteration), the ensemble-based Gauss–Newton formulation, the standard ensemble-based LM formulation, and the MDA. Because of the high level of nonlinearity, the standard ES performed poorly on all test cases. The MDA often performed well, especially at early iterations where the reduction in data mismatch was quite rapid. The best results, however, were always achieved with the new iterative ensemble smoother algorithms, LM-EnRML and LM-EnRML (approx).
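The damping-parameter mechanics described above can be sketched with a minimal scalar Levenberg–Marquardt iteration (a generic LM toy, not LM-EnRML itself): the damping λ inflates the Gauss–Newton Hessian, shrinking and regularizing the step; λ is relaxed after a successful step and increased after a failed one.

```python
def levenberg_marquardt(residual, jac, x0, lam=1.0, iters=100):
    # Scalar LM: step = -J*r / (J*J + lam); lam adapts to step quality.
    x = x0
    cost = residual(x) ** 2
    for _ in range(iters):
        r, J = residual(x), jac(x)
        step = -J * r / (J * J + lam)
        x_try = x + step
        cost_try = residual(x_try) ** 2
        if cost_try < cost:            # accept: relax damping (more Gauss-Newton-like)
            x, cost, lam = x_try, cost_try, lam / 3.0
        else:                          # reject: increase damping (shorter, safer step)
            lam *= 3.0
    return x

# Toy problem: drive residual(x) = x^3 - 8 to zero (root at x = 2).
root = levenberg_marquardt(lambda x: x ** 3 - 8.0, lambda x: 3.0 * x ** 2, x0=10.0)
print(root)  # near 2.0
```

Large λ early on suppresses the overshoot of raw Gauss–Newton steps on a nonlinear residual, which mirrors the abstract's point about reduced model roughness at early iterations.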

20.
This paper presents the integration of desktop grid infrastructure with GIS technologies, by proposing a parallel resolution method in a generic distributed environment. A case study focused on a discrete facility location problem in the biomass area exemplifies the high amount of computing resources (CPU, memory, HDD) required to solve the spatial problem. A comprehensive analysis is undertaken of the behaviour of the grid-enabled GIS system. This analysis, consisting of a set of experiments on the case study, concludes that the desktop grid infrastructure is able to use a commercial GIS system to solve the spatial problem, achieving high speedup and computational resource utilization. In particular, the experiments showed a speedup of fourteen using sixteen computers and a computational efficiency greater than 87% compared with the sequential procedure.
