Similar Literature
1.
2.
Electrical resistivity tomography is a non-linear and ill-posed geophysical inverse problem that is usually solved through gradient-descent methods. This strategy is computationally fast and easy to implement but impedes accurate uncertainty appraisals. We present a probabilistic approach to two-dimensional electrical resistivity tomography in which a Markov chain Monte Carlo algorithm is used to numerically evaluate the posterior probability density function that fully quantifies the uncertainty affecting the recovered solution. The main drawback of Markov chain Monte Carlo approaches is related to the considerable number of sampled models needed to achieve accurate posterior assessments in high-dimensional parameter spaces. Therefore, to reduce the computational burden of the inversion process, we employ the differential evolution Markov chain, a hybrid method between non-linear optimization and Markov chain Monte Carlo sampling, which exploits multiple and interactive chains to speed up the probabilistic sampling. Moreover, the discrete cosine transform reparameterization is employed to reduce the dimensionality of the parameter space, removing the high-frequency components of the resistivity model which are not sensitive to data. In this framework, the unknown parameters become the series of coefficients associated with the retained discrete cosine transform basis functions. First, synthetic data inversions are used to validate the proposed method and to demonstrate the benefits provided by the discrete cosine transform compression. To this end, we compare the outcomes of the implemented approach with those provided by a differential evolution Markov chain algorithm running in the full, un-reduced model space. Then, we apply the method to invert field data acquired along a river embankment. The results yielded by the implemented approach are also benchmarked against a standard local inversion algorithm. The proposed Bayesian inversion provides posterior mean models in agreement with the predictions achieved by the gradient-based inversion, but it also provides model uncertainties, which can be used for penetration depth and resolution limit identification.
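A minimal sketch of the discrete cosine transform compression described above, assuming a 2D gridded (log-)resistivity model; the grid size and the number of retained coefficients are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_model(model, n_keep):
    """Keep only the n_keep x n_keep lowest-order DCT coefficients of a
    2D model; the discarded high-frequency components are set to zero."""
    coeffs = dctn(model, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:n_keep, :n_keep] = 1.0
    return coeffs * mask

def reconstruct_model(coeffs):
    """Map the retained DCT coefficients back to the model space."""
    return idctn(coeffs, norm="ortho")

# Example: a 64 x 64 log-resistivity field reduced to 10 x 10 = 100 unknowns
rng = np.random.default_rng(0)
model = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), axis=0), axis=1)  # smooth field
approx = reconstruct_model(compress_model(model, n_keep=10))
print("relative error:", np.linalg.norm(approx - model) / np.linalg.norm(model))
```

In the sampler, each chain would then perturb only the 100 retained coefficients rather than the 64 x 64 cell values.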

3.
Recent advances in commodity high-performance computing technology have dramatically reduced the computational cost for solving the seismic wave equation in complex earth structure models. As a consequence, wave-equation-based seismic tomography techniques are being actively developed and gradually adopted in routine subsurface seismic imaging practices. Wave-equation travel-time tomography is a seismic tomography technique that inverts cross-correlation travel-time misfits using full-wave Fréchet kernels computed by solving the wave equation. This technique can be implemented very efficiently using the adjoint method, in which the misfits are back-propagated from the receivers (i.e., seismometers) to produce the adjoint wave-field and the interaction between the adjoint wave-field and the forward wave-field from the seismic source gives the gradient of the objective function. Once the gradient is available, a gradient-based optimization algorithm can then be adopted to produce an optimal earth structure model that minimizes the objective function. This methodology is conceptually straightforward, but its implementation in practical situations is highly complex, error-prone and computationally demanding. In this study, we demonstrate the feasibility of automating wave-equation travel-time tomography based on the adjoint method using Kepler, an open-source software package for designing, managing and executing scientific workflows. The workflow technology allows us to abstract away much of the complexity involved in the implementation in a manner that is both robust and scalable. Our automated adjoint wave-equation travel-time tomography package has been successfully applied on a real active-source seismic dataset.
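A minimal sketch of the cross-correlation travel-time misfit measured at a receiver, assuming uniformly sampled observed and synthetic traces; the wavelet and the delay are illustrative:

```python
import numpy as np

def cc_traveltime_shift(obs, syn, dt):
    """Travel-time misfit as the lag that maximizes the cross-correlation
    of observed and synthetic traces (positive = observed arrives later)."""
    cc = np.correlate(obs, syn, mode="full")
    lag = np.argmax(cc) - (len(syn) - 1)
    return lag * dt

# Example: a Gaussian wavelet observed 0.12 s later than in the synthetics
dt = 0.01
t = np.arange(0.0, 4.0, dt)
wavelet = lambda t0: np.exp(-(((t - t0) / 0.1) ** 2))
print(cc_traveltime_shift(wavelet(1.62), wavelet(1.50), dt))  # ~0.12 s
```

In the adjoint method, these shifts are what get back-propagated from the receivers to form the adjoint wave-field.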

4.
Borehole radar velocity inversion using cokriging and cosimulation
A new radar velocity tomography method is presented, based on slowness covariance modeling and cokriging of the slowness field using only measured travel-time data. The proposed approach is compared to the classical LSQR algorithm using various synthetic models and a real data set. In each case, the proposed method provides results comparable to or better than those of LSQR. One advantage of this approach is that it is self-regularized and requires less a priori information. The covariance model also allows stochastic imaging of slowness fields by geostatistical simulations. Stable characteristics and uncertain features of the inverted models can then be easily identified.
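The cokriging in the paper works directly on travel-time (ray-integral) data; the sketch below shows only the underlying kriging step on point slowness values, with an assumed exponential covariance model and illustrative numbers:

```python
import numpy as np

def exp_cov(xa, xb, sill=1e-4, corr_len=5.0):
    """Exponential covariance between slowness values at two sets of points."""
    d = np.linalg.norm(xa[:, None, :] - xb[None, :, :], axis=-1)
    return sill * np.exp(-d / corr_len)

def simple_krige(x_data, s_data, x_grid, s_mean):
    """Simple-kriging estimate of the slowness field on x_grid, given a known
    mean and the covariance model above. The covariance acts as the
    regularizer, so no explicit damping or smoothing term is needed."""
    weights = np.linalg.solve(exp_cov(x_data, x_data), exp_cov(x_grid, x_data).T)
    return s_mean + weights.T @ (s_data - s_mean)

# Example: three slowness observations, estimates at two grid nodes
x_data = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
s_data = np.array([0.10, 0.12, 0.09])
x_grid = np.array([[5.0, 5.0], [2.0, 1.0]])
print(simple_krige(x_data, s_data, x_grid, s_mean=0.10))
```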

5.
This study compares formal Bayesian inference to the informal generalized likelihood uncertainty estimation (GLUE) approach for uncertainty-based calibration of rainfall-runoff models in a multi-criteria context. Bayesian inference is accomplished through Markov Chain Monte Carlo (MCMC) sampling based on an auto-regressive multi-criteria likelihood formulation. Non-converged MCMC sampling is also considered as an alternative method. These methods are compared along multiple comparative measures calculated over the calibration and validation periods of two case studies. Results demonstrate that there can be considerable differences in the hydrograph prediction intervals generated by formal and informal strategies for uncertainty-based multi-criteria calibration. Also, the formal approach generates clearly preferable validation-period results compared to GLUE (i.e., tighter prediction intervals that show higher reliability) for identical computational budgets. Moreover, the performance of non-converged MCMC (based on the standard Gelman–Rubin metric) is reasonably consistent with that of a formal, fully-converged Bayesian approach, even though full convergence requires a significantly larger number of samples (model evaluations) for the two case studies. Therefore, research to define alternative and more practical convergence criteria for MCMC applications to computationally intensive hydrologic models may be warranted.
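A minimal sketch of the standard Gelman–Rubin convergence diagnostic referenced above, for a single parameter traced by several parallel chains:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for one parameter.
    chains: array of shape (m_chains, n_samples)."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)               # ~1 at convergence

# Example: independent chains sampling the same target give R-hat near 1
rng = np.random.default_rng(1)
print(gelman_rubin(rng.normal(size=(4, 2000))))
```

A run would be declared non-converged while R-hat stays above a chosen threshold (commonly around 1.1).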

6.
The specific objective of this paper is to propose a new flood frequency analysis method that considers both the uncertainty of probability distribution selection (model uncertainty) and the uncertainty of parameter estimation (parameter uncertainty). Based on Bayesian theory, the sampling distribution of quantiles (design floods) coupling these two kinds of uncertainty is derived, so that not only point estimates but also confidence intervals of the quantiles can be provided. Markov Chain Monte Carlo is adopted to overcome the difficulty of computing the integrals involved in estimating the sampling distribution. As an example, the proposed method is applied to flood frequency analysis at a gauge on the Huai River, China. It is shown that an approach considering only model uncertainty or only parameter uncertainty cannot fully account for the uncertainty in quantile estimates; instead, a method coupling the two should be employed. Furthermore, the proposed Bayesian-based method provides not only various quantile estimators but also a quantitative assessment of the uncertainties of flood frequency analysis.
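A minimal sketch of the parameter-uncertainty half of this idea, assuming a Gumbel distribution with flat priors (the paper additionally couples distribution-selection uncertainty): a random-walk Metropolis sampler yields the sampling distribution of a quantile such as the 100-year design flood.

```python
import numpy as np

def log_post(theta, x):
    """Log-posterior of Gumbel(mu, beta) with flat priors (beta > 0)."""
    mu, beta = theta
    if beta <= 0:
        return -np.inf
    z = (x - mu) / beta
    return np.sum(-np.log(beta) - z - np.exp(-z))

def metropolis(x, n=20000, step=(30.0, 20.0), seed=2):
    rng = np.random.default_rng(seed)
    theta = np.array([x.mean(), x.std()])
    lp, out = log_post(theta, x), []
    for _ in range(n):
        prop = theta + rng.normal(size=2) * step
        lp_prop = log_post(prop, x)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        out.append(theta.copy())
    return np.array(out)

rng = np.random.default_rng(3)
floods = rng.gumbel(loc=1000.0, scale=300.0, size=50)     # synthetic annual maxima
post = metropolis(floods)[5000:]                          # discard burn-in
q100 = post[:, 0] - post[:, 1] * np.log(-np.log(0.99))    # 100-year quantile
print(q100.mean(), np.percentile(q100, [2.5, 97.5]))      # point estimate and 95% CI
```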

7.
Anyone working on inverse problems is aware of their ill-posed character. In the case of inverse problems, this concept (ill-posedness), proposed by J. Hadamard in 1902, admits revision, since it is somehow related to their ill-conditioning and to the use of local optimization methods to find their solution. A more general and interesting approach regarding risk analysis and epistemological decision making would consist in analyzing the existence of families of equivalent model parameters that are compatible with the prior information and predict the observed data within the same error bounds. Put differently, the ill-posed character of discrete inverse problems (ill-conditioning) means that their solution is uncertain. Traditionally, nonlinear inverse problems in discrete form have been solved via local optimization methods with regularization, but linear analysis techniques fail to account for the uncertainty in the solution that is adopted. As a result, uncertainty analysis in nonlinear inverse problems has been approached in a probabilistic framework (the Bayesian approach), but these methods are hindered by the curse of dimensionality and by the high computational cost needed to solve the corresponding forward problems. Global optimization techniques are very attractive, but most of the time they are heuristic and share the limitations of Monte Carlo methods. New research is needed to provide uncertainty estimates, especially in the case of high-dimensional nonlinear inverse problems with very costly forward problems. After the discredit of deterministic methods and some initial years of Bayesian fever, the pendulum now seems to be swinging back, because practitioners are aware that the uncertainty analysis of high-dimensional nonlinear inverse problems cannot (and should not) be addressed via random sampling methodologies alone. The main reason is that the uncertainty "space" of nonlinear inverse problems has a mathematical structure that is embedded in the forward physics and in the observed data. Thus, problems with structure should be approached via linear algebra and optimization techniques. This paper provides new insights for understanding uncertainty from a deterministic point of view, which is a necessary step in designing more efficient methods to sample the uncertainty region(s) of equivalent solutions.

8.
A hybrid algorithm, combining Monte-Carlo optimization with simultaneous iterative reconstruction technique (SIRT) tomography, is used to invert first-arrival traveltimes from seismic data for building a velocity model. Stochastic algorithms may localize a point around the global minimum of the misfit function but are not suitable for identifying the precise solution. On the other hand, a tomographic model reconstruction based on a local linearization will only be successful if an initial model already close to the best solution is available. To overcome these problems, in the method proposed here, a first model obtained using a classical Monte Carlo-based optimization is used as a good initial guess for starting the local search with the SIRT tomographic reconstruction. In the forward problem, the first-break times are calculated by solving the eikonal equation through a velocity model with a fast finite-difference method instead of the traditional slow ray-tracing technique. In addition, for the SIRT tomography the seismic energy from sources to receivers is propagated by applying a fast Fresnel-volume approach which, when combined with turning rays, can handle models with both positive and negative velocity gradients. The performance of this two-step optimization scheme has been tested on synthetic and field data for building a geologically plausible velocity model. This is an efficient and fast search mechanism that permits the insertion of geophysical, geological and geodynamic a priori constraints into the grid model, while ray tracing is completely avoided. Extension of the technique to 3D data and to the solution of 'static correction' problems is easily feasible.
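A minimal sketch of the SIRT stage for a linearized travel-time system t = A s (A holds ray-segment lengths per cell, s is slowness); the toy matrix and relaxation factor are illustrative:

```python
import numpy as np

def sirt(A, t, s0, n_iter=200, relax=0.5):
    """SIRT update: distribute each ray's travel-time residual over the
    cells it crosses, normalized by ray lengths and cell hit sums."""
    s = s0.astype(float).copy()
    row = A.sum(axis=1); row[row == 0] = 1.0   # total length of each ray
    col = A.sum(axis=0); col[col == 0] = 1.0   # summed length through each cell
    for _ in range(n_iter):
        resid = (t - A @ s) / row
        s += relax * (A.T @ resid) / col
    return s

# Tiny two-cell example: two rays with known path lengths per cell
A = np.array([[1.0, 1.0],
              [2.0, 0.5]])
s_true = np.array([0.40, 0.25])
print(sirt(A, A @ s_true, s0=np.full(2, 0.30)))  # approaches s_true
```

In the hybrid scheme described above, s0 would come from the Monte Carlo stage rather than a constant guess.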

9.
10.
The Vrancea seismic zone of Romania is one of the continental regions where intermediate-depth seismicity is associated with plate collision and subduction. This paper presents results of applying seismic tomography to study the deep velocity structure of the region. Arrival-time data from 433 shallow and intermediate-depth earthquakes recorded by local and regional seismic networks were inverted for the three-dimensional velocity structure down to a depth of 200 km. An efficient three-dimensional ray-tracing technique was used for the travel-time and ray-path calculations, and the LSQR algorithm was adopted in the inversion. The high-resolution tomographic images reveal widespread heterogeneity in the velocity structure. The results show that the seismic tomographic …

11.
Two key issues distinguish probabilistic seismic risk analysis of a lifeline or portfolio of structures from that of a single structure. Regional analysis must consider the correlation among lifeline components or structures in the portfolio, and the larger scope makes it much more computationally demanding. In this paper, we systematically identify and compare alternative methods for regional hazard analysis that can be used as the first part of a computationally efficient regional probabilistic seismic risk analysis that properly considers spatial correlation. Specifically, each method results in a set of probabilistic ground motion maps with associated hazard-consistent annual occurrence probabilities that together represent the regional hazard. The methods are compared according to how replicable and computationally tractable they are and the extent to which the resulting maps are physically realistic, consistent with the regional hazard and regional spatial correlation, and few in number. On the basis of a conceptual comparison and an empirical comparison for Los Angeles, we recommend a combination of simulation and optimization approaches: (i) Monte Carlo simulation with importance sampling of the earthquake magnitudes to generate a set of probabilistic earthquake scenarios (defined by source and magnitude); (ii) the optimization-based probabilistic scenario method, a mixed-integer linear program, to reduce the size of that set; (iii) Monte Carlo simulation to generate a set of probabilistic ground motion maps, varying the number of maps sampled from each earthquake scenario so as to minimize the sampling variance; and (iv) the optimization-based probabilistic scenario again to reduce the set of probabilistic ground motion maps. Copyright © 2012 John Wiley & Sons, Ltd.
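A minimal sketch of step (i), importance sampling of earthquake magnitudes, assuming a truncated Gutenberg–Richter (exponential) magnitude distribution and a uniform proposal that over-samples rare large events; the b-value and magnitude range are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
b, m_min, m_max = 1.0, 5.0, 8.0
beta = b * np.log(10.0)
n = 10_000

m = rng.uniform(m_min, m_max, size=n)   # proposal: uniform in magnitude
target = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
w = target * (m_max - m_min)            # importance weights = target / proposal

# Weighted estimate of P(M > 7), dominated by now well-sampled large events
print("IS estimate:", np.mean(w * (m > 7.0)))
print("exact      :", (np.exp(-2 * beta) - np.exp(-3 * beta)) / (1 - np.exp(-3 * beta)))
```

Each sampled scenario keeps its weight, so downstream estimates remain hazard-consistent.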

12.
Highly detailed physically based groundwater models are often applied to make predictions of system states under unknown forcing. The required analysis of uncertainty is often unfeasible due to the high computational demand. We combine two possible solution strategies: (1) the use of faster surrogate models; and (2) a robust data worth analysis combining quick first-order second-moment uncertainty quantification with null-space Monte Carlo techniques to account for parametric uncertainty. A structurally and parametrically simplified model and a proper orthogonal decomposition (POD) surrogate are investigated. Data worth estimations by both surrogates are compared against estimates by a complex MODFLOW benchmark model of an aquifer in New Zealand. Data worth is defined as the change in post-calibration predictive uncertainty of groundwater head, river-groundwater exchange flux, and drain flux data, compared to the calibrated model. It incorporates existing observations, potential new measurements of system states ("additional" data) as well as knowledge of model parameters ("parametric" data). The data worth analysis is extended to account for non-uniqueness of model parameters by null-space Monte Carlo sampling. Data worth estimates of the surrogates and the benchmark suggest good agreement for both surrogates in estimating the worth of existing data. The structural simplification surrogate only partially reproduces the worth of "additional" data and is unable to estimate "parametric" data, while the POD model is in agreement with the complex benchmark for both "additional" and "parametric" data. The variance of the POD data worth estimates suggests the need to account for parameter non-uniqueness, as presented here, for robust results.
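A minimal sketch of constructing a proper orthogonal decomposition basis from a snapshot matrix via the singular value decomposition; the synthetic snapshots and the 99% energy criterion are illustrative stand-ins for actual groundwater-model runs:

```python
import numpy as np

rng = np.random.default_rng(5)
# Snapshot matrix: each column is one model-state vector (e.g. heads at all
# nodes) from a prior model run; here 2000 nodes x 40 runs of synthetic data.
snapshots = rng.normal(size=(2000, 40)) @ rng.normal(size=(40, 40))
mean = snapshots.mean(axis=1, keepdims=True)

U, sv, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of variance
basis = U[:, :r]                             # POD basis

# The surrogate then works with r coefficients instead of 2000 node values
state = snapshots[:, [0]]
coeffs = basis.T @ (state - mean)
print(r, float(np.linalg.norm(basis @ coeffs + mean - state)))
```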

13.
Almost all earth-science inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model; hence it is prone to solution entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned approximation through model representation and manipulation, and has attracted the attention of the earth-sciences community during the last decade, with several applications already presented for various geophysical problems. In this paper, we examine the efficiency of combining the typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that share the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used to test the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
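A toy sketch of the global (GOM) component as a bare-bones real-coded genetic algorithm with tournament selection, arithmetic crossover and Gaussian mutation; the operators, rates and quadratic misfit are illustrative, not the paper's implementation:

```python
import numpy as np

def ga_minimize(misfit, lo, hi, pop_size=60, n_gen=100, p_mut=0.1, seed=6):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        fit = np.apply_along_axis(misfit, 1, pop)
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] < fit[j], i, j)]      # tournament selection
        a = rng.uniform(size=(pop_size, 1))
        pop = a * parents + (1 - a) * parents[rng.permutation(pop_size)]  # crossover
        mut = rng.uniform(size=pop.shape) < p_mut            # Gaussian mutation
        pop = np.clip(pop + mut * rng.normal(scale=0.05 * (hi - lo), size=pop.shape),
                      lo, hi)
    return pop[np.argmin(np.apply_along_axis(misfit, 1, pop))]

# Toy problem: recover two layer velocities by minimizing a quadratic misfit
target = np.array([1500.0, 3200.0])
lo, hi = np.array([500.0, 500.0]), np.array([5000.0, 5000.0])
print(ga_minimize(lambda v: np.sum((v - target) ** 2), lo, hi))
```

In the combined LOM+GOM scheme, the GA result would then seed the regularized least-squares refinement.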

14.
This paper investigates the effects of uncertainty in rock-physics models on reservoir parameter estimation using seismic amplitude variation with angle and controlled-source electromagnetics data. The reservoir parameters are related to electrical resistivity by the Poupon model and to elastic moduli and density by the Xu-White model. To handle uncertainty in the rock-physics models, we consider their outputs to be random functions with modes or means given by the predictions of those rock-physics models and we consider the parameters of the rock-physics models to be random variables defined by specified probability distributions. Using a Bayesian framework and Markov Chain Monte Carlo sampling methods, we are able to obtain estimates of reservoir parameters and information on the uncertainty in the estimation. The developed method is applied to a synthetic case study based on a layered reservoir model and the results show that uncertainty in both rock-physics models and in their parameters may have significant effects on reservoir parameter estimation. When the biases in rock-physics models and in their associated parameters are unknown, conventional joint inversion approaches, which consider rock-physics models as deterministic functions and the model parameters as fixed values, may produce misleading results. The developed stochastic method in this study provides an integrated approach for quantifying how uncertainty and biases in rock-physics models and in their associated parameters affect the estimates of reservoir parameters and therefore is a more robust method for reservoir parameter estimation.

15.
Seismic safety of high concrete dams
Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between data-fitting capability and sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
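A sketch of the Boore–Joyner–Fumal (1993)-style functional form discussed above; the coefficient values below are illustrative placeholders, not fitted values from either the original paper or this study:

```python
import numpy as np

def bjf_style_pga(M, d_jb, h, b, G_B=0, G_C=0):
    """log10(PGA) = b1 + b2 (M - 6) + b3 (M - 6)^2 + b4 r + b5 log10(r)
                    + b6 G_B + b7 G_C,   with   r = sqrt(d_jb^2 + h^2).
    M: moment magnitude, d_jb: distance (km), G_B/G_C: site-class flags.
    The coefficients b and fictitious depth h must come from regression."""
    r = np.sqrt(d_jb**2 + h**2)
    log_pga = (b[0] + b[1] * (M - 6) + b[2] * (M - 6) ** 2
               + b[3] * r + b[4] * np.log10(r) + b[5] * G_B + b[6] * G_C)
    return 10.0 ** log_pga

# Illustrative coefficients only (not a fitted model)
print(bjf_style_pga(M=6.5, d_jb=20.0, h=5.6,
                    b=[-0.04, 0.22, 0.0, 0.0, -0.78, 0.16, 0.25]))
```

In the Bayesian model class selection, each candidate pairs such a formula with a prediction-error model, e.g. a homogeneous versus a heterogeneous error variance.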

16.
Parameter estimation in nonlinear environmental problems
Popular parameter estimation methods, including least squares, maximum likelihood, and maximum a posteriori (MAP), solve an optimization problem to obtain a central value (or best estimate) followed by an approximate evaluation of the spread (or covariance matrix). A different approach is the Monte Carlo (MC) method, and particularly Markov chain Monte Carlo (MCMC) methods, which allow sampling from the posterior distribution of the parameters. Though available for years, MC methods have only recently drawn wide attention as practical ways of solving challenging high-dimensional parameter estimation problems. They have a broader scope of application than conventional methods and can be used to derive the full posterior pdf, but can be computationally very intensive. This paper compares a number of different methods and presents improvements, using as a case study a nonlinear DNAPL source dissolution and solute transport model. This depth-integrated semi-analytical model approximates dissolution from the DNAPL source zone using nonlinear empirical equations with partially known parameters. It then calculates the DNAPL plume concentration in the aquifer by solving the advection-dispersion equation with a flux boundary. The comparison is among the classical MAP method and several versions of computer-intensive Monte Carlo methods, including the Metropolis–Hastings (MH) method and the adaptive direction sampling (ADS) method.
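A minimal sketch of the conventional workflow contrasted above, i.e. optimization for a central value followed by a curvature-based (Laplace) estimate of the spread; the exponential-decay model and synthetic data are illustrative stand-ins, not the DNAPL model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 40)
theta_true = np.array([2.0, 0.3])
y = theta_true[0] * np.exp(-theta_true[1] * t) + rng.normal(0.0, 0.05, t.size)

def neg_log_post(theta, sigma=0.05):
    """Negative log-posterior with Gaussian noise and a flat prior."""
    pred = theta[0] * np.exp(-theta[1] * t)
    return 0.5 * np.sum((y - pred) ** 2) / sigma**2

res = minimize(neg_log_post, x0=[1.0, 0.1], method="BFGS")
cov = res.hess_inv   # inverse-Hessian approximation of the posterior covariance
print("MAP estimate:", res.x)
print("approx. std :", np.sqrt(np.diag(cov)))
```

MCMC methods such as MH or ADS instead sample the posterior directly, capturing non-Gaussian shape that this quadratic approximation misses.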

17.
Incremental dynamic analysis (IDA) is presented as a powerful tool to evaluate the variability in the seismic demand and capacity of non-deterministic structural models, building upon existing methodologies of Monte Carlo simulation and approximate moment estimation. A nine-story steel moment-resisting frame is used as a testbed, employing parameterized moment-rotation relationships with non-deterministic quadrilinear backbones for the beam plastic hinges. The uncertain properties of the backbones include the yield moment, the post-yield hardening ratio, the end-of-hardening rotation, the slope of the descending branch, the residual moment capacity and the ultimate rotation reached. IDA is employed to accurately assess the seismic performance of the model for any combination of the parameters by performing multiple nonlinear time-history analyses for a suite of ground motion records. Sensitivity analyses on both the IDA and the static pushover level reveal the yield moment and the two rotational-ductility parameters to be the most influential for the frame behavior. To propagate the parametric uncertainty to the actual seismic performance we employ (a) Monte Carlo simulation with Latin hypercube sampling, (b) point-estimate and (c) first-order second-moment techniques, thus offering competing methods that represent different compromises between speed and accuracy. The final results provide firm ground for challenging current assumptions in seismic guidelines on using a median-parameter model to estimate the median seismic performance and employing the well-known square-root-sum-of-squares rule to combine aleatory randomness and epistemic uncertainty. Copyright © 2009 John Wiley & Sons, Ltd.
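A minimal sketch of drawing the uncertain backbone parameters by Latin hypercube sampling; the six ranges are illustrative, not the paper's values:

```python
import numpy as np
from scipy.stats import qmc

# One column per backbone property: yield-moment ratio, hardening ratio,
# end-of-hardening rotation, descending-branch slope, residual-moment ratio,
# ultimate rotation (all ranges made up for illustration).
lo = np.array([0.8, 0.00, 0.02, -1.0, 0.2, 0.06])
hi = np.array([1.2, 0.10, 0.06, -0.2, 0.6, 0.12])

sampler = qmc.LatinHypercube(d=6, seed=8)
params = qmc.scale(sampler.random(n=100), lo, hi)   # 100 frame realisations
print(params.shape, params.mean(axis=0))
```

Each row then defines one frame model, to be run through IDA over the record suite.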

18.
Whether a correct depth-domain 3D velocity model can be built is the key to the success or failure of 3D prestack depth migration. Based on the Deregowski loop, and exploiting the sensitivity of prestack depth-domain seismic imaging to changes in the velocity model, iterative migration is used to successively approximate the optimal imaging velocity, and a fast and effective depth-domain velocity model building technique for 3D prestack depth migration has been developed. Borrowing the strategy of conventional stacking velocity analysis on time-domain CDP (common depth point) gathers, the concept of a residual slowness-squared spectrum on depth-domain CRP (common reflection point) gathers is proposed and the corresponding implementation technique is established. The relation between the RMS velocity and the interval velocity in the depth domain is derived; following the principle of cascaded migration, the relations among the initial velocity, the residual velocity and the updated velocity within the migration loop are determined; and a Monte Carlo nonlinear optimization algorithm is used to pick interval velocities automatically from the residual slowness-squared spectrum. The geological velocity constraints and the convergence criterion of the Monte Carlo nonlinear optimization are discussed, so that the picked interval-velocity model is geologically reasonable and yields the best migrated image. Numerical tests on the SEG/EAGE model verify the effectiveness of the method; in the Huoduomoer survey area of the Hailar Basin, the velocity model was built for 58 km² of 3D data and a satisfactory 3D prestack depth migration image was obtained.
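A sketch of the familiar time-domain Dix conversion from RMS to interval velocity, as a reference point for the depth-domain relation derived in the paper (which is not reproduced here):

```python
import numpy as np

def dix_interval_velocity(v_rms, t):
    """Classical Dix formula: interval velocity between successive horizons
    from RMS velocities v_rms and two-way times t."""
    num = v_rms[1:] ** 2 * t[1:] - v_rms[:-1] ** 2 * t[:-1]
    return np.sqrt(num / (t[1:] - t[:-1]))

t = np.array([0.5, 1.0, 1.6])                 # two-way time, s
v_rms = np.array([1800.0, 2100.0, 2400.0])    # m/s
print(dix_interval_velocity(v_rms, t))
```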

19.
Parameter uncertainty in hydrologic modeling is crucial to flood simulation and forecasting. The Bayesian approach allows one to estimate parameters according to prior expert knowledge as well as observational data about model parameter values. This study assesses the performance of two popular uncertainty analysis (UA) techniques, i.e., generalized likelihood uncertainty estimation (GLUE) and a Bayesian method implemented with the Markov chain Monte Carlo sampling algorithm, in evaluating model parameter uncertainty in flood simulations. These two methods were applied to the semi-distributed Topographic hydrologic model (TOPMODEL), which includes five parameters. A case study was carried out for a small humid catchment in southeastern China. The performance assessment of the GLUE and Bayesian methods was conducted with advanced tools suited for probabilistic simulations of continuous variables such as streamflow. Graphical tools and scalar metrics were used to test several attributes of the simulation quality of selected flood events: deterministic accuracy and the accuracy of the 95% prediction probability uncertainty band (95PPU). Sensitivity analysis was conducted to identify sensitive parameters that largely affect the model output results. Subsequently, the GLUE and Bayesian methods were used to analyze the uncertainty of the sensitive parameters and to produce their posterior distributions. Based on these posterior parameter samples, TOPMODEL simulations and the corresponding UA were conducted. Results show that the form of exponential decline in conductivity and the overland flow routing velocity were the sensitive parameters of TOPMODEL in our case; small changes in these two parameters lead to large differences in the flood simulation results. Results also suggest that, for both UA techniques, most streamflow observations were bracketed by the 95PPU, with a containing ratio larger than 80%. In comparison, GLUE gave narrower prediction uncertainty bands than the Bayesian method. It was found that the mode estimates of the parameter posterior distributions yield better deterministic performance than the 50% percentiles for both the GLUE and Bayesian analyses. In addition, the simulation results calibrated with the Rosenbrock optimization algorithm show better agreement with the observations than the UA 50% percentiles, but slightly worse than the hydrographs from the mode estimates. These results clearly emphasize the importance of using model uncertainty diagnostic approaches in flood simulations.
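A minimal sketch of the 95PPU containing-ratio metric used above, computed from an ensemble of simulated hydrographs; the synthetic ensemble is illustrative:

```python
import numpy as np

def containing_ratio(obs, ensemble, lo_pct=2.5, hi_pct=97.5):
    """Fraction of observations bracketed by the 95% prediction
    probability uncertainty band (95PPU) of the simulation ensemble.
    ensemble: (n_samples, n_times); obs: (n_times,)."""
    lo = np.percentile(ensemble, lo_pct, axis=0)
    hi = np.percentile(ensemble, hi_pct, axis=0)
    return float(np.mean((obs >= lo) & (obs <= hi)))

rng = np.random.default_rng(9)
sims = rng.normal(10.0, 2.0, size=(500, 200))   # posterior streamflow samples
obs = rng.normal(10.0, 2.0, size=200)           # "observed" series
print(containing_ratio(obs, sims))              # ~0.95 if well calibrated
```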

20.
Geophysical inversion methods based on quantum Monte Carlo
This paper introduces quantum Monte Carlo global optimization methods into geophysical inverse problems, thereby developing a new class of nonlinear geophysical inversion methods. Quantum Monte Carlo methods are stochastic methods based on quantum mechanics and include variational Monte Carlo, Green's function Monte Carlo, diffusion Monte Carlo and path-integral Monte Carlo. The paper briefly reviews the development of quantum Monte Carlo methods and presents their theory. Subsequent numerical experiments show that quantum Monte Carlo methods can be applied successfully to geophysical inverse problems: they are well suited to nonlinear, multi-extremum inversion problems and have certain advantages in convergence speed and in avoiding entrapment in local minima. The method is also applicable to nonlinear optimization problems in other fields, and the algorithm is quite general. Finally, the application prospects of quantum Monte Carlo methods in geophysical inverse problems, as well as remaining open problems, are briefly summarized.

