Similar literature
20 similar documents found (search time: 31 ms)
1.
Robert L. Wilby, Hydrological Processes, 2005, 19(16): 3201-3219
Despite their acknowledged limitations, lumped conceptual models continue to be used widely for climate‐change impact assessments. Therefore, it is important to understand the relative magnitude of uncertainties in water resource projections arising from the choice of model calibration period, model structure, and non‐uniqueness of model parameter sets. In addition, external sources of uncertainty linked to choice of emission scenario, climate model ensemble member, downscaling technique(s), and so on, should be acknowledged. To this end, the CATCHMOD conceptual water balance model was used to project changes in daily flows for the River Thames at Kingston using parameter sets derived from different subsets of training data, including the full record. Monte Carlo sampling was also used to explore parameter stability and identifiability in the context of historic climate variability. Parameters reflecting rainfall acceptance at the soil surface in simpler model structures were found to be highly sensitive to the training period, implying that climatic variability does lead to variability in the hydrologic behaviour of the Thames basin. Non‐uniqueness of parameters for more complex model structures results in relatively small variations in projected annual mean flow quantiles for different training periods compared with the choice of emission scenario. However, this was not the case for subannual flow statistics, where uncertainty in flow changes due to equifinality was higher in winter than summer, and comparable in magnitude to the uncertainty of the emission scenario. Therefore, it is recommended that climate‐change impact assessments using conceptual water balance models should routinely undertake sensitivity analyses to quantify uncertainties due to parameter instability, identifiability and non‐uniqueness. Copyright © 2005 John Wiley & Sons, Ltd.  相似文献   
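A minimal sketch of the kind of Monte Carlo identifiability test described above, using a toy one-bucket water-balance model rather than CATCHMOD; the parameter ranges, behavioural cutoff and record lengths are illustrative assumptions, not values from the paper:

```python
# Toy Monte Carlo check of parameter stability across training periods.
import numpy as np

rng = np.random.default_rng(0)

def toy_runoff(rain, pet, cap, frac):
    """Soil store of capacity `cap`; a fraction `frac` of the excess becomes flow."""
    store, flow = 0.0, []
    for r, e in zip(rain, pet):
        store = max(store + r - e, 0.0)
        excess = max(store - cap, 0.0)
        store -= excess
        flow.append(frac * excess)
    return np.array(flow)

days = 1460
rain = rng.gamma(0.7, 6.0, days)              # synthetic daily rainfall (mm)
pet = np.full(days, 2.0)                      # constant potential evaporation (mm)
q_obs = toy_runoff(rain, pet, 120.0, 0.7) + rng.normal(0, 0.2, days)

def behavioural(t0, t1, n=500, nse_min=0.6):
    """Random parameter sets whose Nash-Sutcliffe efficiency exceeds the cutoff."""
    keep = []
    for _ in range(n):
        cap, frac = rng.uniform(50, 300), rng.uniform(0.2, 1.0)
        q = toy_runoff(rain[t0:t1], pet[t0:t1], cap, frac)
        nse = 1 - np.sum((q - q_obs[t0:t1])**2) / np.sum((q_obs[t0:t1] - q_obs[t0:t1].mean())**2)
        if nse > nse_min:
            keep.append((cap, frac))
    return np.array(keep)

for t0, t1 in [(0, 730), (730, 1460), (0, 1460)]:   # two sub-periods and the full record
    k = behavioural(t0, t1)
    if len(k):
        print(f"period {t0}-{t1}: {len(k)} behavioural sets, "
              f"cap range {k[:, 0].min():.0f}-{k[:, 0].max():.0f}")
```

Comparing the spread of accepted parameter sets between training periods mimics the paper's point that some parameters are far less stable than others under climatic variability.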

2.
Electrical resistivity tomography is a non-linear and ill-posed geophysical inverse problem that is usually solved through gradient-descent methods. This strategy is computationally fast and easy to implement but impedes accurate uncertainty appraisals. We present a probabilistic approach to two-dimensional electrical resistivity tomography in which a Markov chain Monte Carlo algorithm is used to numerically evaluate the posterior probability density function that fully quantifies the uncertainty affecting the recovered solution. The main drawback of Markov chain Monte Carlo approaches is related to the considerable number of sampled models needed to achieve accurate posterior assessments in high-dimensional parameter spaces. Therefore, to reduce the computational burden of the inversion process, we employ the differential evolution Markov chain, a hybrid method between non-linear optimization and Markov chain Monte Carlo sampling, which exploits multiple and interactive chains to speed up the probabilistic sampling. Moreover, the discrete cosine transform reparameterization is employed to reduce the dimensionality of the parameter space removing the high-frequency components of the resistivity model which are not sensitive to data. In this framework, the unknown parameters become the series of coefficients associated with the retained discrete cosine transform basis functions. First, synthetic data inversions are used to validate the proposed method and to demonstrate the benefits provided by the discrete cosine transform compression. To this end, we compare the outcomes of the implemented approach with those provided by a differential evolution Markov chain algorithm running in the full, un-reduced model space. Then, we apply the method to invert field data acquired along a river embankment. The results yielded by the implemented approach are also benchmarked against a standard local inversion algorithm. The proposed Bayesian inversion provides posterior mean models in agreement with the predictions achieved by the gradient-based inversion, but it also provides model uncertainties, which can be used for penetration depth and resolution limit identification.  相似文献   
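The discrete cosine transform compression step can be illustrated in a few lines; the grid size and the number of retained coefficients below are illustrative assumptions, not values from the paper:

```python
# Sketch of the DCT reparameterization: keep only low-order DCT coefficients of a
# 2D resistivity model so the MCMC sampler works in a much smaller parameter space.
import numpy as np
from scipy.fft import dctn, idctn

nz, nx = 30, 100                       # model grid (cells)
rho = np.full((nz, nx), 100.0)         # homogeneous 100 ohm.m background
rho[10:18, 40:60] = 20.0               # conductive anomaly

coeff = dctn(np.log10(rho), norm='ortho')
kz, kx = 8, 20                         # retained low-frequency orders
reduced = coeff[:kz, :kx]              # these coefficients become the MCMC unknowns

# Back-projection used inside each forward/likelihood evaluation:
full = np.zeros((nz, nx))
full[:kz, :kx] = reduced
rho_rec = 10 ** idctn(full, norm='ortho')

print("unknowns:", reduced.size, "instead of", rho.size)
print("rms reconstruction error (log10 resistivity):",
      np.sqrt(np.mean((np.log10(rho_rec) - np.log10(rho))**2)))
```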

3.
Accurate determination of the seismic velocity of the crust is important for understanding regional tectonics and the crustal evolution of the Earth. We propose a stepwise joint linearized inversion method using surface wave dispersion, Rayleigh wave ZH ratio (i.e., ellipticity), and receiver function data to better resolve 1D crustal shear wave velocity (vS) structure. Surface wave dispersion and Rayleigh wave ZH ratio data are more sensitive to absolute variations of shear wave speed at depth, but their sensitivity kernels to shear wave speed are different and complementary. Receiver function data, in contrast, are more sensitive to sharp velocity contrasts (e.g., due to crustal interfaces) and to vP/vS ratios. The stepwise inversion method takes advantage of the complementary sensitivities of each dataset to better constrain the vS model in the crust. We first invert surface wave dispersion and ZH ratio data to obtain a smooth 1D absolute vS model, and then incorporate receiver function data in the joint inversion to obtain a finer vS model with better constraints on interface structure. Through synthetic tests, Monte Carlo error analyses, and application to real data, we demonstrate that the proposed joint inversion method resolves robust crustal vS structures with little dependence on the initial model.
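A toy illustration of the stepwise idea, with linear stand-in kernels in place of the real surface-wave, ZH-ratio and receiver-function forward solvers; all matrices, noise levels and the "Moho" step are invented for the sketch:

```python
# Step 1 fits the smooth data only; step 2 restarts from that model and adds the
# contrast-sensitive data to sharpen the interface.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n = 20                                       # number of layers
vs_true = 3.0 + 0.5 * (np.arange(n) > 10)    # step in vS at an interface

K_smooth = np.exp(-np.abs(np.subtract.outer(np.linspace(0, n, 15), np.arange(n))) / 5.0)
K_sharp = np.diff(np.eye(n), axis=0)         # sensitive to velocity contrasts only

d_smooth = K_smooth @ vs_true + rng.normal(0, 0.01, 15)
d_sharp = K_sharp @ vs_true + rng.normal(0, 0.01, n - 1)

def resid(vs, w_sharp):
    r1 = K_smooth @ vs - d_smooth
    r2 = w_sharp * (K_sharp @ vs - d_sharp)
    return np.concatenate([r1, r2])

vs0 = np.full(n, 3.2)
step1 = least_squares(resid, vs0, args=(0.0,)).x     # smooth data only
step2 = least_squares(resid, step1, args=(1.0,)).x   # add contrast-sensitive data
print("rms error after step 1:", np.sqrt(np.mean((step1 - vs_true)**2)))
print("rms error after step 2:", np.sqrt(np.mean((step2 - vs_true)**2)))
```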

4.
Non‐uniqueness occurs with the 1D parametrization of refraction traveltime graphs in the vertical dimension and with the 2D lateral resolution of individual layers in the horizontal dimension. The most common source of non‐uniqueness is the inversion algorithm used to generate the starting model. This study applies 1D, 1.5D and 2D inversion algorithms to traveltime data for a syncline (2D) model, in order to generate starting models for wave path eikonal traveltime tomography. The 1D tau‐p algorithm produced a tomogram with an anticline rather than a syncline and an artefact with a high seismic velocity. The 2D generalized reciprocal method generated tomograms that accurately reproduced the syncline, together with narrow regions at the thalweg with seismic velocities that are less than and greater than the true seismic velocities as well as the true values. It is concluded that 2D inversion algorithms, which explicitly identify forward and reverse traveltime data, are required to generate useful starting models in the near‐surface where irregular refractors are common. The most likely tomogram can be selected as either the simplest model or with a priori information, such as head wave amplitudes. The determination of vertical velocity functions within individual layers is also subject to non‐uniqueness. Depths computed with vertical velocity gradients, which are the default with many tomography programs, are generally 50% greater than those computed with constant velocities for the same traveltime data. The average vertical velocity provides a more accurate measure of depth estimates, where it can be derived. Non‐uniqueness is a fundamental reality with the inversion of all near‐surface seismic refraction data. Unless specific measures are taken to explicitly address non‐uniqueness, then the production of a single refraction tomogram, which fits the traveltime data to sufficient accuracy, does not necessarily demonstrate that the result is either ‘correct’ or the most probable.  相似文献   

5.
Prestack joint multi-wave inversion for elastic moduli based on Bayesian theory
AVO inversion can provide information on formation lithology and fluids, but prestack inverse problems are high-dimensional and ill-posed, so obtaining reliable and stable solutions is crucial. This paper presents a Bayesian method for the joint inversion of P-wave and converted-wave data for density and modulus ratios. Since the shear modulus ratio and bulk modulus ratio are better hydrocarbon indicators, the rock-physics relation between velocity ratios and modulus ratios is substituted into the Aki-Richards approximation of the Zoeppritz equations to obtain an approximate reflection coefficient formula expressed in terms of modulus ratios. Combining P-wave and converted-wave data, an objective function is constructed under a least-squares criterion, and three parameters (the density ratio, shear modulus ratio and bulk modulus ratio) are inverted. Bayesian theory is introduced into the inversion by assuming that the prior information follows a Gaussian distribution and that the unknown parameters follow a modified Cauchy distribution, while the correlations between the unknown parameters are removed. The method is tested on both synthetic and field data and compared with the conventional inversion that uses P-wave data alone. The results show that the joint inversion is more stable, more accurate and more robust to noise, verifying the feasibility and effectiveness of the proposed method.
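A hedged sketch of the kind of objective function implied here: a linearized forward operator, a Gaussian data-misfit term and a Cauchy-type prior that promotes sparse reflectivity. The operator G and all numbers are stand-ins, not the authors' formulation:

```python
# Gaussian misfit + Cauchy prior, minimized to obtain a maximum a posteriori model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_data, n_model = 60, 30
G = rng.normal(size=(n_data, n_model))         # stand-in for the PP + PS AVO kernels
m_true = np.zeros(n_model)
m_true[[5, 14, 22]] = [0.3, -0.2, 0.25]        # sparse "reflectivity" spikes
d = G @ m_true + rng.normal(0, 0.05, n_data)

sigma_d, sigma_m = 0.05, 0.1

def objective(m):
    misfit = np.sum((G @ m - d) ** 2) / (2 * sigma_d ** 2)     # Gaussian likelihood
    cauchy = np.sum(np.log(1.0 + m ** 2 / sigma_m ** 2))       # Cauchy sparsity prior
    return misfit + cauchy

m_map = minimize(objective, np.zeros(n_model), method='L-BFGS-B').x
print("recovered spikes at indices:", np.where(np.abs(m_map) > 0.1)[0])
```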

6.
The paper discusses the performance and robustness of the Bayesian (probabilistic) approach to seismic tomography enhanced by the numerical Monte Carlo sampling technique. The approach is compared with two other popular techniques, namely the damped least-squares (LSQR) method and the general optimization approach. The theoretical considerations are illustrated by an analysis of seismic data from the Rudna (Poland) copper mine. Contrary to the LSQR and optimization techniques the Bayesian approach allows for construction of not only the “best-fitting” model of the sought velocity distribution but also other estimators, for example the average model which is often expected to be a more robust estimator than the maximum likelihood solution. We demonstrate that using the Markov Chain Monte Carlo sampling technique within the Bayesian approach opens up the possibility of analyzing tomography imaging uncertainties with minimal additional computational effort compared to the robust optimization approach. On the basis of the considered example it is concluded that the Monte Carlo based Bayesian approach offers new possibilities of robust and reliable tomography imaging.  相似文献   
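A minimal Metropolis sketch of the point being made, namely that Markov chain Monte Carlo sampling yields the posterior mean and spread in addition to the best-fitting model; the one-parameter toy posterior below is purely illustrative:

```python
# Random-walk Metropolis sampling of a toy velocity posterior.
import numpy as np

rng = np.random.default_rng(3)
v_obs, sigma = 5.2, 0.3                        # toy data: one picked velocity (km/s)

def log_post(v):
    prior = 0.0 if 3.0 < v < 8.0 else -np.inf  # uniform prior on velocity
    return prior - 0.5 * ((v - v_obs) / sigma) ** 2

samples, v = [], 4.0
lp = log_post(v)
for _ in range(20000):
    v_new = v + rng.normal(0, 0.2)             # random-walk proposal
    lp_new = log_post(v_new)
    if np.log(rng.random()) < lp_new - lp:     # Metropolis acceptance rule
        v, lp = v_new, lp_new
    samples.append(v)

post = np.array(samples[2000:])                # discard burn-in
print("posterior mean:", post.mean(), "+/-", post.std())
```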

7.
Soil shear wave velocity has been recognized as a governing parameter in the assessment of the seismic response of slopes. The spatial variability of soil shear wave velocity can influence the seismic response of the sliding mass and the resulting seismic displacements. However, most analyses of sliding mass response have been carried out with deterministic models. This paper stochastically investigates the effect of random heterogeneity of soil shear wave velocity on the dynamic response of the sliding mass using the correlation matrix decomposition method and Monte Carlo simulation (MCS). The software FLAC 7.0, along with a Matlab code, was used for this purpose. The influence of the statistical parameters on the seismic response of the sliding mass and on seismic displacements in earth slopes of different inclinations and stiffnesses, subjected to various earthquake motions, was investigated. The results indicated that, in general, the random heterogeneity of the soil shear modulus can have a notable impact on the sliding mass response and that neglecting it could lead to underestimation of sliding deformations.
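The correlation-matrix-decomposition step can be sketched as follows: an exponential depth correlation matrix is Cholesky-factored and used to draw correlated shear-wave-velocity realizations for the Monte Carlo runs. The depths, mean Vs, coefficient of variation and correlation length below are illustrative assumptions:

```python
# Correlated random Vs profiles via Cholesky decomposition of a correlation matrix.
import numpy as np

rng = np.random.default_rng(4)
z = np.arange(0.0, 30.0, 1.0)                   # depth of soil cells (m)
mean_vs, cov = 250.0, 0.2                       # mean Vs (m/s) and coefficient of variation
corr_len = 5.0                                  # vertical correlation length (m)

C = np.exp(-np.abs(np.subtract.outer(z, z)) / corr_len)   # exponential correlation matrix
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))        # matrix decomposition

realizations = []
for _ in range(1000):                           # Monte Carlo simulations
    g = L @ rng.standard_normal(len(z))         # correlated standard normal field
    vs = mean_vs * np.exp(cov * g - 0.5 * cov**2)   # lognormal Vs profile
    realizations.append(vs)

vs_all = np.array(realizations)
print("cell-wise Vs mean (top 3 cells):", vs_all.mean(axis=0)[:3])
print("cell-wise Vs std  (top 3 cells):", vs_all.std(axis=0)[:3])
```

Each realization would then be passed to the dynamic slope model (FLAC in the paper) and the resulting displacements collected across the Monte Carlo ensemble.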

8.
Practical applications of surface wave inversion demand reliable inverted shear‐wave profiles and a rigorous assessment of the uncertainty associated to the inverted parameters. As a matter of fact, the surface wave inverse problem is severely affected by solution non‐uniqueness: the degree of non‐uniqueness is closely related to the complexity of the observed dispersion pattern and to the experimental inaccuracies in dispersion measurements. Moreover, inversion pitfalls may be connected to specific problems such as inadequate model parametrization and incorrect identification of the surface wave modes. Consequently, it is essential to tune the inversion problem to the specific dataset under examination to avoid unnecessary computations and possible misinterpretations. In the heuristic inversion algorithm presented in this paper, different types of model constraints can be easily introduced to bias constructively the solution towards realistic estimates of the 1D shear‐wave profile. This approach merges the advantages of global inversion, like the extended exploration of the parameter space and a theoretically rigorous assessment of the uncertainties on the inverted parameters, with the practical approach of Lagrange multipliers, which is often used in deterministic inversion, which helps inversion to converge towards models with desired properties (e.g., ‘smooth’ or ‘minimum norm' models). In addition, two different forward kernels can be alternatively selected for direct‐problem computations: either the conventional modal inversion or, instead, the direct minimization of the secular function, which allows the interpreter to avoid mode identification. A rigorous uncertainty assessment of the model parameters is performed by posterior covariance analysis on the accepted solutions and the modal superposition associated to the inverted models is investigated by full‐waveform modelling. This way, the interpreter has several tools to address the more probable sources of inversion pitfalls within the framework of a rigorous and well‐tested global inversion algorithm. The effectiveness and the versatility of this approach, as well as the impact of the interpreter's choices on the final solution and on its posterior uncertainty, are illustrated using both synthetic and real data. In the latter case, the inverted shear velocity profiles are blind compared with borehole data.  相似文献   

9.
A sensitivity analysis of the surface and catchment characteristics in the European soil erosion model (EUROSEM) was carried out with special emphasis on rills and rock fragment cover. The analysis focused on the use of Monte Carlo simulation but was supplemented by a simple sensitivity analysis where input variables were increased and decreased by 10%. The study showed that rock fragments have a significant effect upon the static output parameters of total runoff, peak flow rate, total soil loss and peak sediment discharge, but with a high coefficient of variation. The same applied to the average hydrographs and sedigraphs although the peak of the graphs was associated with a low coefficient of variation. On average, however, the model was able to simulate the effect of rock fragment cover quite well. The sensitivity analysis through the Monte Carlo simulation showed that the model is particularly sensitive to changes in parameters describing rills and the length of the plane when no rock fragments are simulated but that the model also is sensitive to changes in the fraction of non‐erodible material and interrill slope when rock fragments were embedded in the topsoil. For rock fragments resting on the surface, changes in parameter values did not affect model output significantly. The simple sensitivity analysis supported the findings from the Monte Carlo simulation and illustrates the importance when choosing input parameters to describe both rills and rock fragment cover when modelling with EUROSEM. Copyright © 2000 John Wiley & Sons, Ltd.  相似文献   
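A sketch of the simple plus/minus 10% check that supplemented the Monte Carlo analysis; `run_model` is a placeholder for a EUROSEM run, not the model itself, and the input values are invented:

```python
# One-at-a-time +/-10 % sensitivity check around a base parameter set.
import numpy as np

def run_model(params):
    """Toy stand-in returning 'total soil loss' from a few inputs."""
    return params['rain'] ** 1.6 * params['slope'] * (1.0 - params['rock_cover'])

base = {'rain': 40.0, 'slope': 0.12, 'rock_cover': 0.3}
y0 = run_model(base)

for name in base:
    for factor in (0.9, 1.1):                  # decrease / increase by 10 %
        p = dict(base, **{name: base[name] * factor})
        change = 100.0 * (run_model(p) - y0) / y0
        print(f"{name} x{factor:.1f}: soil loss changes by {change:+.1f} %")
```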

10.
11.
A new type of seismic imaging, based on Feynman path integrals for waveform modelling, is capable of producing accurate subsurface images without any need for a reference velocity model. Instead of the usual optimization for traveltime curves with maximal signal semblance, a weighted summation over all representative curves avoids the need for velocity analysis, with its common difficulties of subjective and time‐consuming manual picking. The summation over all curves includes the stationary one that plays a preferential role in classical imaging schemes, but also multiple stationary curves when they exist. Moreover, the weighted summation over all curves also accounts for non‐uniqueness and uncertainty in the stacking/migration velocities. The path‐integral imaging can be applied to stacking to zero‐offset and to time and depth migration. In all these cases, a properly defined weighting function plays a vital role: to emphasize contributions from traveltime curves close to the optimal one and to suppress contributions from unrealistic curves. The path‐integral method is an authentic macromodel‐independent technique in the sense that there is strictly no parameter optimization or estimation involved. Development is still in its initial stage, and several conceptual and implementation issues are yet to be solved. However, application to synthetic and real data examples shows that it has the potential for becoming a fully automatic imaging technique.  相似文献   

12.
Whether a correct depth-domain 3D velocity model can be built is the key to the success of 3D prestack depth migration. Based on the Deregowski loop, and exploiting the sensitivity of prestack depth-domain imaging to changes in the velocity model, this paper uses iterative migration to successively approach the optimal imaging velocity and develops a fast and effective technique for building depth-domain velocity models for 3D prestack depth migration. Borrowing the strategy of conventional stacking velocity analysis on time-domain CDP (common depth point) gathers, the concept of a residual slowness-squared spectrum is proposed for depth-domain CRP (common reflection point) gathers, together with the corresponding implementation. The relation between root-mean-square velocity and interval velocity in the depth domain is derived; following the principle of cascaded migration, the relations between the initial, residual and updated velocities within the migration loop are determined; and a Monte Carlo nonlinear optimization algorithm is used to automatically pick interval velocities from the residual slowness-squared spectrum. The geological velocity constraints and the convergence criterion of the Monte Carlo optimization are discussed, so that the picked interval velocity model is geologically reasonable and yields the best migrated image. Numerical tests on the SEG/EAGE model verify the effectiveness of the method; in the Huodomoer area of the Hailar Basin, a velocity model was built for 58 km² of 3D data and a satisfactory 3D prestack depth migration image was obtained.

13.
In glacial studies, properties such as glacier thickness and the basement permeability and porosity are key to understand the hydrological and mechanical behaviour of the system. The seismoelectric method could potentially be used to determine key properties of glacial environments. Here we analytically model the generation of seismic and seismoelectric signals by means of a shear horizontal seismic wave source on top of a glacier overlying a porous basement. Considering a one-dimensional setting, we compute the seismic waves and the electrokinetically induced electric field. We then analyse the sensitivity of the seismic and electromagnetic data to relevant model parameters, namely depth of the glacier bottom, porosity, permeability, shear modulus and saturating water salinity of the glacier basement. Moreover, we study the possibility of inferring these key parameters from a set of very low noise synthetic data, adopting a Bayesian framework to pay particular attention to the uncertainty of the model parameters mentioned above. We tackle the resolution of the probabilistic inverse problem with two strategies: (1) we compute the marginal posterior distributions of each model parameter solving multidimensional integrals numerically and (2) we use a Markov chain Monte Carlo algorithm to retrieve a collection of model parameters that follows the posterior probability density function of the model parameters, given the synthetic data set. Both methodologies are able to obtain the marginal distributions of the parameters and estimate their mean and standard deviation. The Markov chain Monte Carlo algorithm performs better in terms of numerical stability and number of iterations needed to characterize the distributions. The inversion of seismic data alone is not able to constrain the values of porosity and permeability further than the prior distribution. In turn, the inversion of the electric data alone, and the joint inversion of seismic and electric data are useful to constrain these parameters as well as other glacial system properties. Furthermore, the joint inversion reduces the uncertainty of the model parameters estimates and provides more accurate results.  相似文献   

14.
Surface-wave tomography is an important and widely used method for imaging the crust and upper mantle velocity structure of the Earth. In this study, we proposed a deep learning (DL) method based on convolutional neural network (CNN), named SfNet, to derive the vS model from the Rayleigh wave phase and group velocity dispersion curves. Training a network model usually requires large amount of training datasets, which is labor-intensive and expensive to acquire. Here we relied on synthetics generated automatically from various spline-based vS models instead of directly using the existing vS models of an area to build the training dataset, which enhances the generalization of the DL method. In addition, we used a random sampling strategy of the dispersion periods in the training dataset, which alleviates the problem that the real data used must be sampled strictly according to the periods of training dataset. Tests using synthetic data demonstrate that the proposed method is much faster, and the results for the vS model are more accurate and robust than those of conventional methods. We applied our method to a dataset for the Chinese mainland and obtained a new reference velocity model of the Chinese continent (ChinaVs-DL1.0), which has smaller dispersion misfits than those from the traditional method. The high accuracy and efficiency of our DL approach makes it an important method for vS model inversions from large amounts of surface-wave dispersion data.  相似文献   
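A minimal sketch, not SfNet itself, of a 1D convolutional network that maps two-channel dispersion curves (phase and group velocity) to a discretized vS profile; the layer sizes, optimizer and random stand-in data are assumptions made only for illustration:

```python
# Tiny 1D CNN: dispersion curves in, vS profile out, with one training step.
import torch
import torch.nn as nn

n_periods, n_depths = 60, 100

model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=5, padding=2),   # 2 channels: phase + group velocity
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * n_periods, n_depths),          # output: vS at n_depths nodes
)

# Real training would use synthetics generated from spline-based vS models with
# randomly sampled dispersion periods, as described above; here random tensors
# simply stand in for one batch.
x = torch.randn(8, 2, n_periods)                  # batch of dispersion curves
y = torch.randn(8, n_depths)                      # corresponding vS profiles
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
print("training loss after one step:", float(loss))
```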

15.
In this paper we propose a method for the characterization of naturally fractured reservoirs by quantitative integration of seismic and production data. The method is based on a consistent theoretical framework to model both effective hydraulic and elastic properties of fractured porous media and a (non‐linear) Bayesian method of inversion that provides information about uncertainties as well as mean (or maximum likelihood) values. We model a fractured reservoir as a porous medium containing a single set of vertical fractures characterized by an unknown fracture density, azimuthal orientation and aperture. We then treat fracture parameter estimation as a non‐linear inverse problem and estimate the unknown fracture parameters by joint inversion of seismic amplitude versus angle and azimuth data and dynamic production data. Once the fracture parameters have been estimated, the corresponding effective stiffness and permeability tensors can be computed using consistent models. A synthetic example is provided to explain and test the workflow. It shows that seismic and production data complement each other, in the sense that the seismic data resolve a non‐uniqueness in the fracture orientation and the production data help to recover the true fracture aperture and permeability, because production data are more sensitive to the fracture aperture than the seismic data.

16.
Interactive residual migration velocity analysis of synthesized-source records based on controlled illumination
A new migration velocity analysis method is proposed: interactive residual migration velocity analysis of synthesized-source records based on controlled illumination. It differs from other, similar migration velocity analysis methods in that: (1) the prestack depth migration uses a fast wave-equation-based algorithm for synthesizing source records; (2) the migration uses plane-wave sources, consistent with the velocity analysis; (3) controlled illumination is applied to avoid distortion of the plane-wave source wavefield during propagation caused by lateral velocity variations, thereby reducing the error of the velocity analysis; and (4) a practical velocity-spectrum design makes interactive migration velocity analysis feasible and easy to operate. Tests on a synthetic model and on field data from Xinjiang show that the method is an effective and practical approach to migration velocity analysis.

17.
In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty within the context of Monte Carlo (MC) analysis coupled with Bayesian estimation and propagation of uncertainty. Because of its flexibility, ease of implementation and its suitability for parallel implementation on distributed computer systems, the GLUE method has been used in a wide variety of applications. However, the MC based sampling strategy of the prior parameter space typically utilized in GLUE is not particularly efficient in finding behavioral simulations. This becomes especially problematic for high-dimensional parameter estimation problems, and in the case of complex simulation models that require significant computational time to run and produce the desired output. In this paper we improve the computational efficiency of GLUE by sampling the prior parameter space using an adaptive Markov Chain Monte Carlo scheme (the Shuffled Complex Evolution Metropolis (SCEM-UA) algorithm). Moreover, we propose an alternative strategy to determine the value of the cutoff threshold based on the appropriate coverage of the resulting uncertainty bounds. We demonstrate the superiority of this revised GLUE method with three different conceptual watershed models of increasing complexity, using both synthetic and real-world streamflow data from two catchments with different hydrologic regimes.  相似文献   
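The GLUE steps described above can be sketched with a toy surrogate model; the paper's refinement is to replace the plain Monte Carlo sampler below with the adaptive SCEM-UA sampler and to choose the cutoff from the coverage of the resulting bounds. Everything in the sketch (model, thresholds, sample sizes) is illustrative:

```python
# GLUE sketch: Monte Carlo sampling of the prior, behavioural selection by a
# likelihood cutoff, and prediction bounds from the behavioural ensemble.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(100)
q_obs = 5.0 + 3.0 * np.sin(t / 8.0) + rng.normal(0, 0.3, t.size)   # "observed" flow

def simulate(a, b):                       # toy rainfall-runoff surrogate
    return a + b * np.sin(t / 8.0)

sims, liks = [], []
for _ in range(5000):                     # plain Monte Carlo sampling of the prior
    a, b = rng.uniform(0, 10), rng.uniform(0, 6)
    q = simulate(a, b)
    nse = 1 - np.sum((q - q_obs)**2) / np.sum((q_obs - q_obs.mean())**2)
    if nse > 0.7:                         # behavioural (cutoff) threshold
        sims.append(q)
        liks.append(nse)

sims = np.array(sims)
lower = np.percentile(sims, 5, axis=0)    # simple 5-95 % bounds; full GLUE weights
upper = np.percentile(sims, 95, axis=0)   # each behavioural run by its likelihood
print("behavioural runs:", len(liks), "bounds at t=0:", lower[0], upper[0])
```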

18.
To analyse and invert refraction seismic travel time data, different approaches and techniques have been proposed. One common approach is to invert first‐break travel times employing local optimization approaches. However, these approaches result in a single velocity model, and it is difficult to assess the quality and to quantify uncertainties and non‐uniqueness of the found solution. To address these problems, we propose an inversion strategy relying on a global optimization approach known as particle swarm optimization. With this approach we generate an ensemble of acceptable velocity models, i.e., models explaining our data equally well. We test and evaluate our approach using synthetic seismic travel times and field data collected across a creeping hillslope in the Austrian Alps. Our synthetic study mimics a layered near‐surface environment, including a sharp velocity increase with depth and complex refractor topography. Analysing the generated ensemble of acceptable solutions using different statistical measures demonstrates that our inversion strategy is able to reconstruct the input velocity model, including reasonable, quantitative estimates of uncertainty. Our field data set is inverted, employing the same strategy, and we further compare our results with the velocity model obtained by a standard local optimization approach and the information from a nearby borehole. This comparison shows that both inversion strategies result in geologically reasonable models (in agreement with the borehole information). However, analysing the model variability of the ensemble generated using our global approach indicates that the result of the local optimization approach is part of this model ensemble. Our results show the benefit of employing a global inversion strategy to generate near‐surface velocity models from refraction seismic data sets, especially in cases where no detailed a priori information regarding subsurface structures and velocity variations is available.  相似文献   
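A bare-bones particle swarm optimization sketch showing how an ensemble of acceptable models can be collected during the global search; the two-parameter misfit function and all PSO settings are illustrative assumptions:

```python
# Particle swarm optimization of a toy two-layer velocity model, keeping every
# model that fits the data (here, the misfit) to within a chosen tolerance.
import numpy as np

rng = np.random.default_rng(6)
target = np.array([800.0, 2500.0])               # "true" layer velocities (m/s)

def misfit(m):                                   # stand-in for a traveltime misfit
    return np.sum(((m - target) / target) ** 2)

bounds = np.array([[300.0, 1500.0], [1500.0, 4000.0]])   # v1, v2 search ranges
n_part, n_iter = 30, 200
x = rng.uniform(bounds[:, 0], bounds[:, 1], (n_part, 2)) # particle positions
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

accepted = []                                    # ensemble of acceptable models
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, bounds[:, 0], bounds[:, 1])
    f = np.array([misfit(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
    accepted.extend(x[f < 1e-3])                 # keep models that fit "equally well"

print("best model:", gbest, "ensemble size:", len(accepted))
```

Statistics of the `accepted` ensemble (means, ranges, percentiles) then provide the quantitative uncertainty estimates discussed in the abstract.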

19.
Full-waveform inversion suffers from cycle-skipping when the starting background model differs significantly from the true model and low-frequency data are unavailable. To mitigate this problem, reflection waveform inversion is applied to provide a background velocity model for full-waveform inversion. This technique attempts to extract background velocity updates along the reflection wavepath by matching the reflection waveforms. However, two issues arise during the implementation of reflection waveform inversion: amplitude and efficiency. The reflection amplitude is always underestimated because of unmodelled subsurface factors (e.g. the source signature, density and attenuation), which makes it unreasonable to match the reflection amplitudes in the waveforms, especially for field data. In addition, generating the background velocity gradient requires simulation of the reflection wavefield, which is time-consuming. To address the former issue we introduce a locally normalized objective function, and for the latter we use an efficient strategy that avoids explicit generation of the reflection wavefield. Results show that applying the proposed method to both synthetic and field data provides a good background velocity model for full-waveform inversion with high efficiency.
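A sketch of what a locally normalized objective can look like: each trace (or window) is normalized before comparison so that absolute reflection amplitudes drop out of the misfit. The trace-by-trace normalization below is a simplification of the windowed ("local") version implied by the abstract:

```python
# Normalized misfit: amplitude scaling between observed and synthetic traces is
# removed before the residual is formed.
import numpy as np

def normalized_misfit(obs, syn, eps=1e-12):
    """Sum over traces of || obs/||obs|| - syn/||syn|| ||^2."""
    total = 0.0
    for o, s in zip(obs, syn):
        o_n = o / (np.linalg.norm(o) + eps)
        s_n = s / (np.linalg.norm(s) + eps)
        total += np.sum((o_n - s_n) ** 2)
    return total

rng = np.random.default_rng(7)
obs = rng.normal(size=(10, 200))               # 10 observed reflection traces
syn_scaled = 0.3 * obs                         # same waveform, wrong amplitude
print("plain L2 misfit:", np.sum((obs - syn_scaled) ** 2))
print("normalized misfit:", normalized_misfit(obs, syn_scaled))  # ~0: amplitude ignored
```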

20.
We describe a method to invert a walkaway vertical seismic profile (VSP) and predict elastic properties (P‐wave velocity, S‐wave velocity and density) in a layered model looking ahead of the deepest receiver. Starting from Bayes's rule, we define a posterior distribution of layered models that combines prior information (on the overall variability of and correlations among the elastic properties observed in well logs) with information provided by the VSP data. This posterior distribution of layered models is sampled by a Monte‐Carlo method. The sampled layered models agree with prior information and fit the VSP data, and their overall variability defines the uncertainty in the predicted elastic properties. We apply this technique first to a zero‐offset VSP data set, and show that uncertainty in the long‐wavelength P‐wave velocity structure results in a sizable uncertainty in the predicted elastic properties. We then use walkaway VSP data, which contain information on the long‐wavelength P‐wave velocity (in the reflection moveout) and on S‐wave velocity and density contrasts (in the change of reflectivity with offset). The uncertainty of the look‐ahead prediction is considerably decreased compared with the zero‐offset VSP, and the predicted elastic properties are in good agreement with well‐log measurements.  相似文献   
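A sketch of the prior-sampling step in this kind of workflow: layer-wise elastic properties drawn from a multivariate normal whose means, spreads and correlations would come from well logs; the numbers below are invented for illustration:

```python
# Correlated prior on (Vp, Vs, rho) per layer, sampled for Monte Carlo scoring
# against the walkaway-VSP data.
import numpy as np

rng = np.random.default_rng(8)
mean = np.array([3000.0, 1600.0, 2350.0])       # Vp (m/s), Vs (m/s), rho (kg/m3)
std = np.array([300.0, 200.0, 100.0])
corr = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.5],
                 [0.6, 0.5, 1.0]])               # Vp-Vs-rho correlations from logs
cov = np.outer(std, std) * corr

n_layers, n_models = 12, 5000
prior_models = rng.multivariate_normal(mean, cov, size=(n_models, n_layers))

# Each sampled layered model would then be scored against the VSP data and kept
# or weighted accordingly; here we just verify the prior statistics.
vp, vs = prior_models[..., 0], prior_models[..., 1]
print("prior Vp/Vs mean:", (vp / vs).mean())
print("sampled Vp-Vs correlation:", np.corrcoef(vp.ravel(), vs.ravel())[0, 1])
```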
