Similar Literature
 A total of 20 similar documents were retrieved.
1.
A probabilistic model of groundwater contaminant transport is presented. The model is based on coupling first- and second-order reliability methods (FORM and SORM) with a two-dimensional finite element solution of the groundwater transport equations. Uncertainty in the aquifer medium is considered by modeling hydraulic conductivity as a spatial random field with a prescribed correlation structure. FORM and SORM provide the probability that a contaminant exceeds a target level at a well, termed the probability of failure. The sensitivity of the probability of failure to the basic variabilities in grid-block conductivity is also obtained at no additional computational effort. The effect of the choice of the predetermined target level at the observation well on the failure probability and on the associated sensitivity information is examined. Considerable saving in computational time was achieved by superimposing a coarse random-variable mesh on a finer numerical mesh. The influence of regions of lower conductivity on the probabilistic outcome is analyzed, and the regions in which conductivity most affects the results are identified.
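For readers unfamiliar with FORM, the sketch below applies the Hasofer-Lind-Rackwitz-Fiessler iteration to a hypothetical two-variable limit state in standard normal space; it is not the authors' coupled finite-element implementation, and the linearized well-concentration response and target level are assumptions for illustration only.

    import numpy as np
    from scipy.stats import norm

    def hlrf_form(g, grad_g, u0, tol=1e-6, max_iter=100):
        """Hasofer-Lind-Rackwitz-Fiessler iteration in standard normal space.

        Returns the reliability index beta and the FORM estimate of the
        probability of failure P[g(U) <= 0].
        """
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            g_val = g(u)
            grad = grad_g(u)
            norm_grad = np.linalg.norm(grad)
            # HL-RF update: project onto the linearized limit-state surface
            u_new = (np.dot(grad, u) - g_val) / norm_grad**2 * grad
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        beta = np.linalg.norm(u)                  # reliability index
        return beta, norm.cdf(-beta)              # FORM failure probability

    # Hypothetical limit state: concentration at the well exceeds a target level.
    # c(u) stands in for the finite-element transport response surface (assumed linear here).
    target = 1.0
    c = lambda u: 0.4 * u[0] + 0.3 * u[1] + 0.5
    g = lambda u: target - c(u)                   # failure when g <= 0
    grad_g = lambda u: np.array([-0.4, -0.3])

    beta, pf = hlrf_form(g, grad_g, u0=np.zeros(2))
    print(f"beta = {beta:.3f}, probability of failure = {pf:.3e}")

With the assumed linear response the iteration converges in two steps; a nonlinear response simply requires more iterations of the same update.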

2.
Optimal design of artificial open channels is essential for the planning and management of irrigation projects. In this paper a modified formulation is presented for the comprehensive design of open channels that considers seepage loss, evaporation loss and land acquisition cost along with lining and excavation costs. The resulting formulation is solved using a recent meta-heuristic optimization technique, probabilistic global search Lausanne (PGSL). Uncertainty in the channel design parameters may lead to the failure of canals (channels); this parametric uncertainty is modeled using the first-order reliability method (FORM). A bi-objective optimization model is presented that minimizes cost and minimizes the probability of overtopping, using a probabilistic cost function as the objective function, and a new approach is proposed to solve the model in a meta-heuristic environment with PGSL as the solution method. A chance-constrained optimization model that simultaneously considers an overtopping probability constraint and a channel capacity constraint, with the objective of cost minimization, is also proposed and solved using PGSL. The solutions obtained with the coupled FORM-PGSL approach are encouraging, and the method can be used for the optimal and reliable design of artificial open channels.

3.
A probabilistic approach is used to simulate particle tracking for two types of porous medium. The first is sand grains with a single intergranular porosity; particle tracking is carried out by advection and dispersion. The second is chalk granulates with intergranular and matrix porosities; sorption can occur together with advection and dispersion during particle tracking. In the sand medium, particle tracking is modelled as the sum of elementary steps with independent random variables. An exponential distribution is obtained for each elementary step, which shows that the whole process is Markovian, and a Gamma probability density function is then deduced. The relationships between dispersivity and the elementary step are given using the central limit theorem. Particle tracking in the chalky medium is a non-Markovian process, and its probability density function depends on a power of the distance. Experimental simulations by dye tracer tests on a column have been performed for different distances and discharges, and the probabilistic computations are in good agreement with the experimental data. The probabilistic approach appears to be an interesting and complementary way to simulate transfer phenomena in porous media compared with traditional numerical methods. Copyright © 2006 John Wiley & Sons, Ltd.
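The Markovian argument above (travel time as a sum of independent exponential elementary steps, hence Gamma-distributed) can be checked numerically with the short sketch below; the number of steps and the mean step time are arbitrary assumed values.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed parameters for illustration: each elementary step has an exponentially
    # distributed duration, and a particle needs n_steps to cross the column.
    n_particles = 100_000
    n_steps = 25              # elementary steps over the travel distance
    mean_step_time = 0.4      # mean duration of one step (arbitrary units)

    # Travel time = sum of independent exponential steps -> Gamma(n_steps, mean_step_time)
    step_times = rng.exponential(mean_step_time, size=(n_particles, n_steps))
    travel_times = step_times.sum(axis=1)

    # Compare empirical moments with the Gamma(k=n_steps, theta=mean_step_time) moments.
    print("empirical mean/var :", travel_times.mean(), travel_times.var())
    print("Gamma mean/var     :", n_steps * mean_step_time, n_steps * mean_step_time**2)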

4.
One of the more advanced approaches for simulating groundwater flow in fractured porous media is the discrete-fracture approach, which is limited by the large computational overhead associated with traditional modeling methods. In this work, we apply the Lanczos reduction method to the modeling of groundwater flow in fractured porous media using the discrete-fracture approach. The Lanczos reduction method reduces a finite element equation system to a much smaller tridiagonal system of first-order differential equations, which can be solved by a standard tridiagonal algorithm with little computational effort. Because solving the reduced system is more efficient than solving the original system, the simulation of groundwater flow in discretely fractured media using the reduction method is very efficient, and the proposed method is especially suitable for large-scale, long-term simulation. In this paper, we develop an iterative version of the Lanczos algorithm in which a preconditioned conjugate gradient solver based on ORTHOMIN acceleration is employed within the Lanczos reduction process. Additional efficiency is achieved by applying an eigenvalue shift technique: the "shift" improves the convergence of the Lanczos system by requiring fewer modes to achieve the same level of accuracy as the unshifted case. The developed model is verified by comparison with a dual-porosity approach. The efficiency and accuracy of the method are demonstrated on a field-scale problem and compared with the performance of a classic time-marching method using an iterative solver on the original system. In spite of these advances, more theoretical work is needed to determine the optimal value of the shift before computations are actually carried out.
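As an illustration of the reduction idea (not of the authors' ORTHOMIN-preconditioned, shifted implementation), the sketch below performs plain symmetric Lanczos tridiagonalization of a small stand-in matrix; the matrix size and the number of retained modes are assumptions.

    import numpy as np

    def lanczos(A, v0, m):
        """Plain symmetric Lanczos: reduce A to an m-by-m tridiagonal matrix T.

        Returns the orthonormal basis V (n x m) and the diagonals (alpha, beta) of T,
        so that V.T @ A @ V is approximately T. No re-orthogonalization is done here.
        """
        n = A.shape[0]
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        v = v0 / np.linalg.norm(v0)
        V[:, 0] = v
        w = A @ v
        alpha[0] = v @ w
        w = w - alpha[0] * v
        for j in range(1, m):
            beta[j - 1] = np.linalg.norm(w)
            v_new = w / beta[j - 1]
            V[:, j] = v_new
            w = A @ v_new
            alpha[j] = v_new @ w
            w = w - alpha[j] * v_new - beta[j - 1] * V[:, j - 1]
        return V, alpha, beta

    # Illustrative symmetric positive definite system standing in for a (much larger) FE matrix.
    rng = np.random.default_rng(1)
    n, m = 200, 20
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)
    V, alpha, beta = lanczos(A, rng.standard_normal(n), m)

    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    print("|| V^T A V - T || =", np.linalg.norm(V.T @ A @ V - T))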

5.
3D seismic acquisition geometry design based on prestack imaging
The main goal of conventional 3D acquisition geometry design is to obtain a regularly sampled stacked data volume that can be imaged with post-stack migration. Prestack migration imaging places higher demands on the acquisition geometry, so designing the geometry according to the requirements of prestack imaging is important for fully exploiting the advantages of prestack migration and improving seismic imaging accuracy. This paper proposes an acquisition geometry design method based on prestack imaging: the basic spatial sampling of the geometry is first designed from the spatial sampling criteria of prestack migration; the layout is then designed according to the principles of uniform sampling and consistent bin attributes; and the geometry is progressively optimized using focal-beam analysis, the prestack migration response of scattering points, forward modeling, and wavefield illumination, yielding a geometry that satisfies the requirements of prestack migration imaging and can resolve the geological problems at hand. The method has been applied in recent high-precision seismic exploration in the Zhongyuan Oilfield and has achieved good results.

6.
A genetic-algorithm-based method for structural reliability analysis
This paper first summarizes five weaknesses of the first-order second-moment (FOSM) method in structural reliability analysis. Addressing its need for derivatives of the limit state function when searching for the design (checking) point, and its failure to converge, or convergence to a local design point, when the limit state function is multimodal, the paper explores the applicability of genetic algorithms to structural reliability analysis and, after analyzing the specific problems that arise in such applications, proposes a genetic algorithm based on intelligent organisms. The analyses and computations show that genetic algorithms are suitable for structural reliability analysis and can overcome several of the weaknesses of the FOSM method in locating the design point.

7.
A case study is presented for the application of statistical and geostatistical methods to the problem of estimating groundwater quality variables. This methodology has been applied to the investigation of the detrital aquifer of the Bajo Andarax (Almería, Spain). The use of principal components analysis is proposed, as a first step, for identifying relevant types of groundwater and the processes that bring about a change in their quality. As a result of this application, three factors were obtained, which were used as three new variables (V1: sulphate influence; V2: thermal influence; and V3: marine influence). Analysis of their spatial distribution was performed through the calculation of experimental and theoretical variograms, which served as input for geostatistical modelling using ordinary block kriging. This analysis has allowed a probabilistic representation of the data to be obtained by mapping the three variables throughout the aquifer for each sampling point. In this way, one can evaluate the spatial and temporal variation of the principal physico-chemical processes associated with the three variables V1, V2 and V3 implicated in the groundwater quality of the detrital aquifer.

8.
An efficient numerical solution for the two-dimensional groundwater flow problem using artificial neural networks (ANNs) is presented. Under stationary velocity conditions with unidirectional mean flow, the conductivity realizations and the head gradients, obtained by a traditional finite difference solution to the flow equation, are given as input-output pairs to train a neural network. The ANN is trained successfully and a certain level of recognition of the relationship between input conductivity patterns and output head gradients is achieved. The trained network produces velocity realizations that are physically plausible without solving the flow equation for each conductivity realization, and does so in a small fraction of the time necessary for solving the flow equations. The prediction accuracy of the ANN reaches 97.5% for the longitudinal head gradient and 94.7% for the transverse gradient. Head-gradient and velocity statistics in terms of the first two moments are obtained with very high accuracy. The cross covariances between head gradients and the fluctuating log-conductivity (log-K), and between velocity and log-K, obtained with the ANN approach match very closely those obtained by a traditional numerical solution; the same is true for the velocity component auto-covariances. The results also extend to transport simulations with very good accuracy: spatial moments (up to the fourth) of mean-concentration plumes obtained using ANNs are in very good agreement with traditional Monte Carlo simulations, and the concentration second moment (concentration variance) is very close between the two approaches. Considering that higher moments of concentration require more computational effort in numerical simulations, the advantage of the presented approach in saving long computational times is evident. Another advantage of the ANN approach is the ability to generalize a trained network to conductivity distributions different from those used in training. However, the accuracy of the approach for higher conductivity variances is still being investigated.
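A minimal sketch of the training setup described above, assuming a scikit-learn multilayer perceptron and a synthetic stand-in data set; in the study itself the target head gradients come from finite-difference solutions of the flow equation, whereas here they are generated from an arbitrary linear map purely for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)

    # Stand-in data set: each sample is a flattened log-conductivity field (10 x 10 here),
    # and the target is the two mean head-gradient components.
    n_samples, nx, ny = 2000, 10, 10
    logK = rng.standard_normal((n_samples, nx * ny))
    true_map = rng.standard_normal((nx * ny, 2)) / (nx * ny)   # assumed surrogate for the flow solver
    head_grad = logK @ true_map + 0.01 * rng.standard_normal((n_samples, 2))

    X_train, X_test, y_train, y_test = train_test_split(
        logK, head_grad, test_size=0.25, random_state=0)

    ann = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
    ann.fit(X_train, y_train)

    print("R^2 on held-out conductivity realizations:", ann.score(X_test, y_test))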

9.
Despite the availability of numerical models, interest in analytical solutions of multidimensional advection-dispersion systems remains high. Such models are commonly used for performing Tier I risk analysis and are embedded in many regulatory frameworks dealing with groundwater contamination. In this work, we develop a closed-form solution of the three-dimensional advection-dispersion equation with exponential source decay, first-order reaction, and retardation, and present an approach based on easy-to-use diagrams to compare it with the integral open-form solution and with earlier versions of the closed-form solution. The comparison focuses on the relative differences associated with source decay and the effect of simulation time. The analysis of concentration contours, longitudinal sections, and transverse sections confirms that the closed-form solutions studied can be used with acceptable approximation in the central area of a plume bounded transversely within the source width, both behind and beyond the advective front and for concentration values up to two orders of magnitude below the initial source concentration. As the proposed closed-form model can be evaluated without nested numerical computations and with simple mathematical functions, it can be very useful in risk assessment procedures.
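For reference, one common form of the governing problem addressed by such closed-form solutions is the three-dimensional advection-dispersion equation with linear retardation, first-order decay (here acting on both dissolved and sorbed phases), and an exponentially decaying source boundary; the notation below is generic and not copied from the paper.

    R \frac{\partial C}{\partial t}
      = D_x \frac{\partial^2 C}{\partial x^2}
      + D_y \frac{\partial^2 C}{\partial y^2}
      + D_z \frac{\partial^2 C}{\partial z^2}
      - v \frac{\partial C}{\partial x}
      - \lambda R C,
    \qquad
    C(0, y, z, t) = C_0 \, e^{-\gamma t} \quad \text{over the source area}

Here R is the retardation factor, v the seepage velocity along x, D_x, D_y, D_z the dispersion coefficients, λ the first-order reaction rate, and γ the source decay constant.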

10.
Anyone working on inverse problems is aware of their ill-posed character. In the case of discrete inverse problems, this concept, proposed by J. Hadamard in 1902, deserves revision, since it is closely related to their ill-conditioning and to the use of local optimization methods to find their solution. A more general and interesting approach, relevant to risk analysis and epistemological decision making, is to analyze the existence of families of equivalent model parameters that are compatible with the prior information and predict the observed data within the same error bounds. In other words, the ill-posed character (ill-conditioning) of discrete inverse problems means that their solution is uncertain. Traditionally, nonlinear inverse problems in discrete form have been solved via local optimization methods with regularization, but linear analysis techniques fail to account for the uncertainty in the solution that is adopted. As a result, uncertainty analysis in nonlinear inverse problems has been approached in a probabilistic (Bayesian) framework, but these methods are hindered by the curse of dimensionality and by the high computational cost of solving the corresponding forward problems. Global optimization techniques are very attractive, but most of the time they are heuristic and have the same limitations as Monte Carlo methods. New research is needed to provide uncertainty estimates, especially for high-dimensional nonlinear inverse problems with very costly forward problems. After the discredit of deterministic methods and some initial years of Bayesian fever, the pendulum now seems to be swinging back, because practitioners are aware that uncertainty analysis in high-dimensional nonlinear inverse problems cannot (and should not) be solved via random sampling methodologies alone. The main reason is that the uncertainty "space" of nonlinear inverse problems has a mathematical structure that is embedded in the forward physics and in the observed data; problems with structure should be approached via linear algebra and optimization techniques. This paper provides new insights for understanding uncertainty from a deterministic point of view, which is a necessary step toward designing more efficient methods to sample the uncertainty region(s) of equivalent solutions.

11.
This study develops a lattice Boltzmann method (LBM) with a two-relaxation-time collision operator (LTRT) to solve saltwater intrusion problems. A directional-speed-of-sound (DSS) technique is introduced to take into account hydraulic conductivity heterogeneity and discontinuity, as well as the velocity-dependent dispersion coefficient. The forcing terms in the LTRT model are customized in order to recover the density-dependent groundwater flow and mass transport equations. Using the LTRT with the squared DSS achieves at least second-order accuracy. The LTRT results are verified against Henry's analytical solution and compared with several numerical examples and modified Henry problems that consider heterogeneous hydraulic conductivity and velocity-dependent dispersion. The numerical results show good agreement with the Henry analytical solution and with the solutions obtained by other numerical methods.

12.
A generalized, efficient, and practical approach based on the travel‐time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel‐time distribution from the injection point to the observation point. For advection‐dominant reactive transport with well‐mixed reactive species and a constant travel‐time distribution, the reactive BTC is obtained by integrating the solutions to advective‐reactive transport over the entire travel‐time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero‐, first‐, nth‐order, and Michaelis‐Menten reactions. The proposed approach is validated by a reactive transport case in a two‐dimensional synthetic heterogeneous aquifer and a field‐scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)‐bioremediation is better approximated by zero‐order reaction kinetics than first‐order reaction kinetics.
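A sketch of the core idea for the first-order case: each particle arriving with travel time t has decayed by exp(-kt), so the reactive BTC is the conservative travel-time density weighted by that factor, and k can be fitted to the measured reactive BTC. The Gamma travel-time density, the noise level, and the rate value below are assumed for illustration.

    import numpy as np
    from scipy.stats import gamma
    from scipy.optimize import minimize_scalar

    t = np.linspace(0.01, 50.0, 500)

    # Assumed travel-time density inferred from the conservative tracer BTC.
    travel_time_pdf = gamma(a=5.0, scale=2.0).pdf

    k_true = 0.12                                  # "unknown" first-order rate to recover
    conservative_btc = travel_time_pdf(t)
    reactive_btc = conservative_btc * np.exp(-k_true * t)     # decay along each travel time
    reactive_btc_obs = reactive_btc * (1 + 0.02 * np.random.default_rng(3).standard_normal(t.size))

    def misfit(k):
        """Sum of squared differences between modeled and observed reactive BTCs."""
        model = conservative_btc * np.exp(-k * t)
        return np.sum((model - reactive_btc_obs) ** 2)

    res = minimize_scalar(misfit, bounds=(0.0, 1.0), method="bounded")
    print("estimated first-order rate coefficient:", res.x)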

13.
It is shown, both analytically and computationally, that the geomagnetic potential, as well as the magnetic induction and its gradient, can be represented with arbitrary accuracy within the framework of multipole models. The resulting expressions are compact and can be used in analytical and computer-based studies employing computer algebra and numerical computation. Previously unknown analytical expressions for the components of the first seven multipole tensors, expressed in terms of the Gaussian coefficients, are obtained for applied problems of space dynamics. An algorithm is developed that makes it possible to construct analytical expressions for the components of the arbitrary-order multipole tensor and of the magnetic induction vector in any finite approximation.

14.
Probabilistic seismic capacity analysis of overall structures based on an improved point estimate method
Determining the median capacity and the capacity dispersion are two key problems in the probabilistic seismic capacity analysis of overall structures, and this paper analyzes the shortcomings of existing methods. Building on the Zhao-Ono point estimate method and introducing the Nataf transformation, which uses the marginal probability distribution information of the random vector, an improved point estimate method is proposed. Combining the improved point estimate method with pushover analysis, a stochastic pushover analysis method is proposed for evaluating the statistical moments of the overall probabilistic seismic capacity of a structure. Taking a five-story, three-bay reinforced concrete frame structure as an example, the method is applied to the probabilistic seismic capacity analysis of the overall structure, and fragility curves of the overall seismic capacity are obtained. The analysis shows that the proposed method is an efficient and reasonably accurate approach to the probabilistic seismic capacity analysis of overall structures.

15.
Multiphase dynamic data integration into high resolution subsurface models is an integral aspect of reservoir and groundwater management strategies and uncertainty assessment. Over the past two decades, advances in computing and the development and implementation of robust algorithms for automatic history matching have considerably reduced the time and effort associated with subsurface characterization and reduced the subjectivity associated with manual model calibration. However, reliable and accurate subsurface characterization continues to be challenging due to the large number of model unknowns to be estimated using a relatively smaller set of measurements. For ensemble-based methods in particular, the difficulties are compounded by the need for a large number of model replicates to estimate sample-based statistical measures, specifically the covariances and cross-covariances that directly impact the spread of information from the measurement locations to the model parameters. Statistical noise resulting from modest ensemble sizes can overwhelm and degrade the model updates, leading to geologically inconsistent subsurface models. In this work we propose to address the difficulties in the implementation of the ensemble Kalman filter (EnKF) for operational data integration problems. The methods described here use streamline-derived information to identify regions within the reservoir that will have a maximum impact on the dynamic response. This is achieved through spatial localization of the sample-based cross-covariance estimates between the measurements and the model unknowns using streamline trajectories. We illustrate the approach with a synthetic example and a large field study that demonstrate the difficulties with the traditional EnKF implementation. In both numerical experiments, it is shown that these challenges are addressed using flow-relevant conditioning of the cross-covariance matrix. By mitigating sampling error in the cross-covariance estimates, the proposed approach provides significant computational savings through the use of modest ensemble sizes, and consequently offers the opportunity for use with large field-scale groundwater and reservoir characterization studies.
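A minimal sketch of an ensemble Kalman filter update in which the parameter-observation cross-covariance is localized by element-wise multiplication with a mask; in the paper the mask is derived from streamline trajectories, whereas here it is an arbitrary random mask, and all dimensions and the stand-in forward operator are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    n_param, n_obs, n_ens = 500, 10, 50

    # Ensemble of model parameters (e.g., log-conductivity on a grid) and the
    # corresponding simulated observations (e.g., heads or water cut at wells).
    params = rng.standard_normal((n_param, n_ens))
    H = rng.standard_normal((n_obs, n_param)) / n_param      # stand-in forward operator
    sim_obs = H @ params
    obs = rng.standard_normal(n_obs)                          # measured data
    obs_err_var = 0.1

    # Sample (cross-)covariances from the ensemble.
    dP = params - params.mean(axis=1, keepdims=True)
    dY = sim_obs - sim_obs.mean(axis=1, keepdims=True)
    C_py = dP @ dY.T / (n_ens - 1)                            # parameter-observation cross-covariance
    C_yy = dY @ dY.T / (n_ens - 1)

    # Localization: keep only parameter/observation pairs flagged as flow-relevant.
    # Here the mask is random; a streamline-based mask would flag cells traversed
    # by streamlines reaching each measurement location.
    mask = (rng.random((n_param, n_obs)) < 0.2).astype(float)
    C_py_loc = C_py * mask

    K = C_py_loc @ np.linalg.inv(C_yy + obs_err_var * np.eye(n_obs))   # Kalman gain

    # Perturbed-observation EnKF update of every ensemble member.
    obs_pert = obs[:, None] + np.sqrt(obs_err_var) * rng.standard_normal((n_obs, n_ens))
    params_updated = params + K @ (obs_pert - sim_obs)
    print("updated ensemble shape:", params_updated.shape)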

16.
A multi-objective particle swarm optimization (MOPSO) approach is presented for generating Pareto-optimal solutions for reservoir operation problems. The method is developed by integrating Pareto dominance principles into the particle swarm optimization (PSO) algorithm; in addition, a variable-size external repository and an efficient elitist-mutation (EM) operator are introduced. The proposed EM-MOPSO approach is first tested on a few test problems taken from the literature and evaluated with standard performance measures. It is found that EM-MOPSO yields efficient solutions, giving a wide spread of solutions with good convergence to the true Pareto-optimal front. Having achieved good results for the test cases, the approach was applied to a case study of a multi-objective reservoir operation problem, the Bhadra reservoir system in India. The EM-MOPSO solutions yield a trade-off curve/surface identifying a set of alternatives that define optimal solutions to the problem. Finally, to facilitate easy implementation for the reservoir operator, a simple but effective decision-making approach is presented. The results show that the proposed approach is a viable alternative for solving multi-objective water resources and hydrology problems. Copyright © 2007 John Wiley & Sons, Ltd.
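As a minimal illustration of the Pareto-dominance bookkeeping that drives the external repository in a MOPSO (a generic sketch, not the EM-MOPSO implementation; both objectives are assumed to be minimized):

    import numpy as np

    def dominates(f_a, f_b):
        """True if objective vector f_a Pareto-dominates f_b (minimization)."""
        f_a, f_b = np.asarray(f_a), np.asarray(f_b)
        return np.all(f_a <= f_b) and np.any(f_a < f_b)

    def update_repository(repository, candidate):
        """Keep only non-dominated solutions after inserting a candidate."""
        if any(dominates(r, candidate) for r in repository):
            return repository                                 # candidate is dominated, discard it
        repository = [r for r in repository if not dominates(candidate, r)]
        repository.append(candidate)
        return repository

    # Example with two conflicting reservoir-operation objectives (both minimized).
    repo = []
    for f in [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (2.5, 2.5), (0.5, 4.0)]:
        repo = update_repository(repo, f)
    print("non-dominated front:", repo)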

17.
This paper presents the development of a probabilistic multi‐model ensemble of statistically downscaled future projections of precipitation of a watershed in New Zealand. Climate change research based on the point estimates of a single model is considered less reliable for decision making, and multiple realizations of a single model or outputs from multiple models are often preferred for such purposes. Similarly, a probabilistic approach is preferable over deterministic point estimates. In the area of statistical downscaling, no single technique is considered a universal solution. This is due to the fact that each of these techniques has some weaknesses, owing to its basic working principles. Moreover, watershed scale precipitation downscaling is quite challenging and is more prone to uncertainty issues than downscaling of other climatological variables. So, multi‐model statistical downscaling studies based on a probabilistic approach are required. In the current paper, results from the three well‐reputed statistical downscaling methods are used to develop a Bayesian weighted multi‐model ensemble. The three members of the downscaling ensemble of this study belong to the following three broad categories of statistical downscaling methods: (1) multiple linear regression, (2) multiple non‐linear regression, and (3) stochastic weather generator. The results obtained in this study show that the new strategy adopted here is promising because of many advantages it offers, e.g. it combines the outputs of multiple statistical downscaling methods, provides probabilistic downscaled climate change projections and enables the quantification of uncertainty in these projections. This will encourage any future attempts for combining the results of multiple statistical downscaling methods. Copyright © 2011 John Wiley & Sons, Ltd.

18.
Global optimization methods such as simulated annealing, genetic algorithms and tabu search are being increasingly used to solve groundwater remediation design and parameter identification problems. While these methods enjoy some unique advantages over traditional gradient-based methods, they typically require thousands to tens of thousands of forward simulation runs before reaching optimal or near-optimal solutions; one severe limitation associated with these global optimization methods is therefore very long computation time. To mitigate this limitation, this paper presents a new approach for obtaining, repeatedly and efficiently, the solutions of a linear forward simulation model subject to successive perturbations. The proposed approach takes advantage of the fact that successive forward simulation runs, as required by a global optimization procedure, usually involve only slight changes in the coefficient matrices of the resultant linear equations. As a result, the new solution to a system of linear equations perturbed by changes in aquifer properties and/or sinks/sources can be obtained as the sum of a non-perturbed base solution and the solution to the perturbed portion of the linear equations. The computational efficiency of the proposed approach arises from the fact that the perturbed solution can be derived directly without solving the linear equations again. A two-dimensional test problem with 20 by 30 nodes demonstrates that the proposed approach is much more efficient than repeatedly running the simulation model, by more than a factor of 15 after a fixed number of model evaluations. The speedup increases with the number of model evaluations and with the size of the simulation model. The main limitation of the proposed approach is the large amount of computer memory required to store the inverse matrix; effective ways of limiting the storage requirement are briefly discussed.
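One standard way to realize the idea of reusing a stored inverse when only a few coefficients change is the Sherman-Morrison-Woodbury identity; the sketch below illustrates that approach under assumed matrix sizes and perturbation structure, and is not necessarily the exact formulation used in the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    n, k = 300, 4                        # k coefficients perturbed per optimization trial

    A = rng.standard_normal((n, n)) + n * np.eye(n)   # base system matrix (assumed well conditioned)
    b = rng.standard_normal(n)

    A_inv = np.linalg.inv(A)             # computed and stored once
    x_base = A_inv @ b                   # base solution, reused for every trial

    # Perturbation of a few rows (e.g., conductivities changed by the optimizer),
    # written as a low-rank update A_new = A + U @ V.T.
    rows = rng.choice(n, size=k, replace=False)
    U = np.zeros((n, k))
    U[rows, np.arange(k)] = 1.0
    V = rng.standard_normal((n, k)) * 0.1

    # Sherman-Morrison-Woodbury: solve the perturbed system using only A_inv,
    # a k-by-k solve, and matrix-vector products -- no new n-by-n factorization.
    AinvU = A_inv @ U
    small = np.eye(k) + V.T @ AinvU      # k x k capacitance matrix
    x_pert = x_base - AinvU @ np.linalg.solve(small, V.T @ x_base)

    x_direct = np.linalg.solve(A + U @ V.T, b)
    print("max difference vs. direct solve:", np.abs(x_pert - x_direct).max())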

19.
Earthquake loss estimation studies require predictions to be made of the proportion of a building class falling within discrete damage bands from a specified earthquake demand. These predictions should be made using methods that incorporate both computational efficiency and accuracy such that studies on regional or national levels can be effectively carried out, even when the triggering of multiple earthquake scenarios, as opposed to the use of probabilistic hazard maps and uniform hazard spectra, is employed to realistically assess seismic demand and its consequences on the built environment. Earthquake actions should be represented by a parameter that shows good correlation to damage and that accounts for the relationship between the frequency content of the ground motion and the fundamental period of the building; hence recent proposals to use displacement response spectra. A rational method is proposed herein that defines the capacity of a building class by relating its deformation potential to its fundamental period of vibration at different limit states and comparing this with a displacement response spectrum. The uncertainty in the geometrical, material and limit state properties of a building class is considered and the first-order reliability method, FORM, is used to produce an approximate joint probability density function (JPDF) of displacement capacity and period. The JPDF of capacity may be used in conjunction with the lognormal cumulative distribution function of demand in the classical reliability formula to calculate the probability of failing a given limit state. Vulnerability curves may be produced which, although not directly used in the methodology, serve to illustrate the conceptual soundness of the method and make comparisons with other methods.

20.
Zhang J, Randall G, Wei X. Ground Water, 2012, 50(3): 464-471
In solving groundwater transport problems with numerical models, the computation time (CPU processing time) of transport simulation is approximately inversely proportional to the transport time-step size. Therefore, large time-step sizes are favorable for achieving short computation time. However, transport time-step size must be sufficiently small to avoid numerical instability if an explicit scheme is used (and to guarantee enough model accuracy if an implicit scheme is used). For a transport model involving groundwater pumping, a small transport time-step size is often required due to the high groundwater velocities near the pumping well. Small grid spacing often specified near the pumping well also limits the time-step size. This paper presents a method to increase transport time-step size in a transport model when groundwater pumping is simulated. The key to this approach is to numerically decrease the groundwater seepage velocities in grid cells near the pumping well by increasing the effective porosity so that the transport time-step size can be increased without violating stability constraints. Numerical tests reveal that by using the proposed method, the computation time of transport simulation can be reduced significantly, while the transport simulation results change very little.
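A back-of-the-envelope sketch of the stability argument: for an explicit scheme, the Courant-type limit on the transport time step in a near-well cell scales linearly with the effective porosity assigned to that cell, so artificially increasing the porosity there relaxes the allowable step. The flux, face area, and grid spacing below are assumed values.

    # Courant-type limit for an explicit transport scheme in a cell next to the well:
    #   v = q / (n_e * A)  and  dt_max = Cr_max * dx / v = Cr_max * n_e * A * dx / q,
    # so dt_max grows linearly with the effective porosity n_e assigned to that cell.

    def max_time_step(q, area, dx, n_e, courant_max=1.0):
        """Maximum stable transport time step for a single cell (explicit scheme)."""
        velocity = q / (n_e * area)          # seepage velocity through the cell
        return courant_max * dx / velocity

    q = 50.0          # volumetric flux through the near-well cell face (m^3/d), assumed
    area = 2.0        # face area (m^2), assumed
    dx = 1.0          # grid spacing near the well (m), assumed

    for n_e in (0.25, 0.5, 1.0):
        print(f"effective porosity {n_e:4.2f} -> dt_max = {max_time_step(q, area, dx, n_e):.4f} d")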
