Similar Documents
 20 similar documents found (search time: 656 ms)
1.
In recent years, owing to the rapid development of computational hardware and software, time-domain full-wave inversion, which makes use of all the information in the seismograms without appealing to linearization, has become a plausible candidate for the retrieval of the physical parameters of the earth's substratum. Retrieving a large number of parameters (the usual case in a layered substratum comprising various materials, some of which are porous) at one time is a formidable task, so full-wave inversion often seeks to retrieve only a subset of these unknowns, with the remaining parameters, the priors, considered to be known and constant, or sequentially updated, during the inversion. A known prior means that its value has been obtained by other means (e.g., in situ or laboratory measurement) or simply guessed (hopefully, with a reasonable degree of confidence). The uncertainty of the values of the priors, like data noise and the inadequacy of the theoretical/numerical model employed to mimic the seismic data during the inversion, is a source of retrieval error. We show, for a homogeneous, isotropic, anelastic half-plane substratum configuration characterized by five parameters (density, P and S wavespeeds, and P and S quality factors), that when a perfectly adequate theoretical/numerical model is employed during the inversion and the data are free of noise, the retrieval error for a given parameter can be very large even when the prior uncertainty of another single parameter is very small. Furthermore, the employment of other load and response polarization data and/or multi-offset data, as well as other choices of the to-be-retrieved parameters, is shown, on specific examples, not to systematically improve (and may even reduce) the accuracy of the retrievals when the prior uncertainty is relatively large. These findings, relative to the recovery, via an exact retrieval model processing noiseless data obtained in one of the simplest geophysical configurations, of a single parameter at a time with a single uncertain prior, raise the question of the confidence that can be placed in geophysical parameter retrievals: 1) when more than one parameter is retrieved at a time, and/or 2) when more than one prior is affected by uncertainties during a given inversion, and/or 3) when the model employed to mimic the data during the inversion is inadequate, 4) when the data are affected by noise or measurement errors, and 5) when the parameter retrieval is carried out in more realistic configurations.
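A minimal sketch of the effect described above, on a toy linear forward model rather than the paper's anelastic half-plane configuration: one parameter is retrieved by least squares while a second is frozen at a slightly wrong prior value. The kernels, parameter values, and noise-free data below are all hypothetical.

```python
# Toy illustration (not the paper's model): when the sensitivity kernels of two
# parameters are nearly collinear, a small prior error in the frozen parameter
# produces a much larger error in the retrieved one.
import numpy as np

t = np.linspace(0.0, 1.0, 200)
g1 = np.sin(2 * np.pi * 3 * t)           # sensitivity kernel of parameter 1
g2 = np.sin(2 * np.pi * 3 * t + 0.2)     # nearly collinear kernel of parameter 2

m1_true, m2_true = 1.0, 2.0
d_obs = m1_true * g1 + m2_true * g2      # noiseless data, exact forward model

for prior_error in [0.0, 0.01, 0.05]:    # relative uncertainty of the prior m2
    m2_prior = m2_true * (1.0 + prior_error)
    # Least-squares retrieval of m1 alone, with m2 frozen at its prior value
    m1_hat = np.dot(g1, d_obs - m2_prior * g2) / np.dot(g1, g1)
    print(f"prior error {prior_error:4.0%} -> m1 retrieval error "
          f"{abs(m1_hat - m1_true) / m1_true:6.1%}")
```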

2.
A probabilistic description of the inversion of noisy data   Cited: 5 (self-citations: 4, by others: 1)
Based on Bayesian theory, we present a concrete workflow and method for processing noisy geophysical data, comprising mainly likelihood-function estimation and posterior-probability computation. We extend the concept of a data vector to a set of data vectors and, by introducing a confidence measure in data space, transfer the data noise onto the probability density function in model space, thereby obtaining a likelihood function that reflects the uncertainty of the data themselves. Because this approach avoids manual intervention in data space during processing, it guarantees that the probability density in model space purely reflects the data noise, offering high information fidelity and preservation of feasible solutions. To obtain a posterior distribution that incorporates prior information, we propose a probabilistic analysis method using a weighting matrix; this method introduces geological information directly in model space and strongly constrains the non-uniqueness of inversion caused by data noise. The entire workflow is demonstrated on a magnetotelluric inversion example.
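A minimal sketch of the Bayesian workflow sketched above, on a 1-D model grid with a trivial forward model (d = m): a noise-derived likelihood is combined with a model-space weighting that stands in for the geological prior. The noise level, prior centre, and observations are placeholders, not the paper's magnetotelluric example.

```python
# Hedged sketch: posterior = likelihood (from data noise) x model-space weighting.
import numpy as np

m_grid = np.linspace(0.0, 3.0, 301)      # candidate values of one model parameter
dm = m_grid[1] - m_grid[0]
d_obs = np.array([2.1, 1.9, 2.05])       # noisy repeated observations (forward model d = m)
sigma = 0.2                              # assumed data noise standard deviation

# Likelihood: transfer the data noise onto a PDF in model space (Gaussian noise)
misfit = ((d_obs[None, :] - m_grid[:, None]) ** 2).sum(axis=1)
likelihood = np.exp(-0.5 * misfit / sigma ** 2)

# Prior via a weighting function in model space (hypothetical geological information)
prior = np.exp(-0.5 * ((m_grid - 1.8) / 0.3) ** 2)

posterior = likelihood * prior
posterior /= posterior.sum() * dm        # normalise to a proper PDF
print("posterior mean:", (m_grid * posterior).sum() * dm)
```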

3.
The error in physically-based rainfall-runoff modelling is broken into components, and these components are assigned to three groups: (1) model structure error, associated with the model’s equations; (2) parameter error, associated with the parameter values used in the equations; and (3) run time error, associated with rainfall and other forcing data. The error components all contribute to “integrated” errors, such as the difference between simulated and observed runoff, but their individual contributions cannot usually be isolated because the modelling process is complex and there is a lack of knowledge about the catchment and its hydrological responses. A simple model of the Slapton Wood Catchment is developed within a theoretical framework in which the catchment and its responses are assumed to be known perfectly. This makes it possible to analyse the contributions of the error components when predicting the effects of a physical change in the catchment. The standard approach to predicting change effects involves: (1) running “unchanged” simulations using current parameter sets; (2) making adjustments to the sets to allow for physical change; and (3) running “changed” simulations. Calibration or uncertainty-handling methods such as GLUE are used to obtain the current sets based on forcing and runoff data for a calibration period, by minimising or creating statistical bounds for the “integrated” errors in simulations of runoff. It is shown that current parameter sets derived in this fashion are unreliable for predicting change effects, because of model structure error and its interaction with parameter error, so caution is needed if the standard approach is to be used when making management decisions about change in catchments.

4.
Source-wavelet-independent time-domain waveform inversion of crosshole radar data   Cited: 1 (self-citations: 0, by others: 1)
Liu Sixin, Meng Xu, Fu Lei. 《地球物理学报》 (Chinese Journal of Geophysics), 2016, 59(12): 4473-4482
Waveform inversion has become a popular inversion method in recent years, with a resolution that can reach the sub-wavelength scale. In practical applications of waveform inversion, estimation of the source wavelet is crucial. The conventional approach estimates the source wavelet by deconvolution and updates it as the inversion proceeds; this works well in synthetic-data waveform inversion but runs into a series of problems with field data. Because field data have a low signal-to-noise ratio, source-wavelet estimation requires substantial manual intervention, and the result is not necessarily reliable. This paper adopts a new objective function based on convolved wavefields, which makes the inversion independent of the source wavelet. We derive in detail the gradient and step-length formulas for crosshole radar waveform inversion and implement simultaneous inversion of permittivity and conductivity. Simultaneous inversion of permittivity and conductivity for a synthetic model shows that the method can recover the shape and location of sub-wavelength anomalous bodies. We then apply the method to two field datasets and compare the results with those of time-domain waveform inversion based on an estimated source wavelet. The results show that source-independent time-domain waveform inversion yields higher resolution and greater accuracy.
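A sketch of a convolution-based, source-independent misfit of the kind described above. The array shapes, reference-trace choice, and function name are assumptions, not the authors' code; the point is that the unknown source wavelet appears as a common convolutional factor on both sides and therefore cancels from the residual.

```python
# Hedged sketch of a source-independent waveform misfit via convolved wavefields.
import numpy as np

def source_independent_misfit(d_obs, d_syn, ref=0):
    """d_obs, d_syn: (n_receivers, n_samples) gathers; ref: reference trace index.

    If d_obs = G_obs * w and d_syn = G_syn * w' (Green's functions convolved with
    unknown wavelets), then d_syn[i] * d_obs[ref] and d_obs[i] * d_syn[ref] both
    contain the factor w * w', so their difference depends only on the medium.
    """
    n_rec, _ = d_obs.shape
    misfit = 0.0
    for i in range(n_rec):
        syn_conv = np.convolve(d_syn[i], d_obs[ref])
        obs_conv = np.convolve(d_obs[i], d_syn[ref])
        misfit += 0.5 * np.sum((syn_conv - obs_conv) ** 2)
    return misfit
```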

5.
Estimation of stationary random-medium parameters from 2-D post-stack seismic data   Cited: 1 (self-citations: 1, by others: 0)
Estimating random-medium parameters is the key to applying random-medium theory to seismic exploration. This paper proposes a method for estimating stationary random-medium parameters from 2-D post-stack seismic data. We describe the relationship between 2-D post-stack seismic data and the acoustic-impedance model of a random medium, together with the principle and method for estimating the parameters of the medium's autocorrelation function, and we illustrate with an example the concrete steps of parameter estimation using the power-spectrum method. Estimation tests on several 2-D synthetic models verify the feasibility and correctness of the method. Estimation tests on field seismic data show that the random-medium parameters can provide a reference for further subdivision of delta sedimentary facies, indicating good application prospects. Compared with previous work, the estimation method proposed here is a truly 2-D algorithm; in particular, it can estimate the autocorrelation angle θ. This power-spectrum-based estimation is intuitive and efficient, but it still suffers from relatively large errors and needs further improvement.
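A minimal numpy sketch of the power-spectrum route to the medium autocorrelation (via the Wiener-Khinchin theorem), with correlation lengths read off at the 1/e level. The synthetic input, grid spacings, and the 1/e read-off are illustrative assumptions; the paper's full procedure, including the estimation of the angle θ, is not reproduced here.

```python
# Hedged sketch: 2-D autocorrelation from the power spectrum of a section.
import numpy as np

def autocorrelation_2d(section):
    """section: 2-D array (nz, nx). Returns the normalised 2-D autocorrelation."""
    spec = np.fft.fft2(section - section.mean())
    acf = np.fft.ifft2(np.abs(spec) ** 2).real   # Wiener-Khinchin theorem
    acf = np.fft.fftshift(acf)                   # zero lag moved to the centre
    return acf / acf.max()

def corr_length(acf_1d, d):
    """First lag where a centred 1-D ACF slice falls below 1/e, in model units."""
    half = acf_1d[acf_1d.size // 2:]
    below = np.nonzero(half < 1.0 / np.e)[0]
    return below[0] * d if below.size else np.nan

rng = np.random.default_rng(1)
section = rng.standard_normal((128, 256))        # stand-in for a seismic section
acf = autocorrelation_2d(section)
nz, nx = acf.shape
print("a (x):", corr_length(acf[nz // 2, :], d=12.5), "m")
print("b (z):", corr_length(acf[:, nx // 2], d=4.0), "m")
```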

6.
Estimation of confidence limits and intervals for the two- and three-parameter Weibull distributions is presented based on the method of moments (MOM), probability-weighted moments (PWM), and maximum likelihood (ML). The asymptotic variances of the MOM, PWM, and ML quantile estimators are derived as functions of the sample size, return period, and parameters. Such variances can be used for estimating the confidence limits and confidence intervals of the population quantiles. Except for the two-parameter Weibull model, the formulas obtained do not have simple forms but can be evaluated numerically. Simulation experiments were performed to verify the applicability of the derived confidence intervals of quantiles. The results show that, overall, the ML method for estimating the confidence limits performs better than the other two methods in terms of bias and mean square error. This is especially so for γ≥0.5, even for small sample sizes (e.g. N=10). However, the drawback of the ML method for determining the confidence limits is that it requires the shape parameter to be larger than 2. The Weibull model based on the MOM, ML, and PWM estimation methods was applied to fit the distribution of annual 7-day low flows and 6-h annual maximum rainfall data. The results showed that the differences in the estimated quantiles based on the three methods are not large, generally less than 10%. However, the differences between the confidence limits and confidence intervals obtained by the three estimation methods may be more significant. For instance, for the 7-day low flows the ratio of the estimated confidence interval to the estimated quantile based on ML is about 17% for T≥2, while it is about 30% for estimation based on the MOM and PWM methods. In addition, the analysis of the rainfall data using the three-parameter Weibull showed that while the ML parameters could be estimated, the corresponding confidence limits and intervals could not be found because the shape parameter was smaller than 2.
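A sketch of the ML part of the workflow above with scipy. The paper derives asymptotic variance formulas for the quantile estimators; here a parametric bootstrap deliberately stands in for those formulas, and the synthetic data, return period, and sample size are assumptions.

```python
# Hedged sketch: ML fit of a two-parameter Weibull and a bootstrap CI for a
# T-year quantile (bootstrap substituted for the paper's asymptotic variances).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = stats.weibull_min.rvs(c=1.5, scale=100.0, size=40, random_state=rng)

T = 50                                   # return period, years
p = 1.0 - 1.0 / T                        # non-exceedance probability

c_hat, loc_hat, s_hat = stats.weibull_min.fit(data, floc=0)   # 2-parameter fit
q_hat = stats.weibull_min.ppf(p, c_hat, loc_hat, s_hat)

boot = []
for _ in range(1000):                    # parametric bootstrap of the quantile
    resample = stats.weibull_min.rvs(c_hat, loc_hat, s_hat,
                                     size=data.size, random_state=rng)
    c_b, l_b, s_b = stats.weibull_min.fit(resample, floc=0)
    boot.append(stats.weibull_min.ppf(p, c_b, l_b, s_b))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Q_{T} = {q_hat:.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```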

7.
The complementary advantages of GPS and seismic measurements are well recognized in seismotectonic monitoring studies. Therefore, integrated processing of the two data streams has recently been proposed in an attempt to obtain accurate and reliable information on surface displacements associated with earthquakes. A hitherto critical issue in the integrated processing is real-time detection and precise estimation of the transient baseline error in the seismic records. Here, we report on a new approach that introduces the seismic acceleration, corrected for baseline errors, into the state equation system. The correction is performed and regularly updated in short epochs (with increments that may be as short as seconds), so that station position, velocity, and acceleration can be constrained very tightly and the baseline error can be estimated as a random-walk process. With the adapted state equation system, our study highlights a new approach for integrated processing of GPS and seismic data by means of sequential least-squares adjustment. The efficiency of our approach is demonstrated and validated using simulated, experimental, and real datasets. The latter were collected at collocated GPS and seismic stations around the 4 April 2010 El Mayor-Cucapah earthquake (Mw 7.2). The results show that the baseline errors of the strong-motion sensors are corrected precisely and that high-precision seismic displacements are obtained in real time by the new approach.
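A toy Kalman-filter analogue of the sequential estimation scheme described above: the state carries position, velocity, and the accelerometer baseline error modelled as a random walk; the raw strong-motion sample drives the prediction and GPS displacements provide the update. All noise levels are hypothetical, and the paper's full state vector (which also carries acceleration) is reduced here for brevity.

```python
# Hedged sketch: baseline error as a random-walk state in a GPS/seismic filter.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt, 0.0],            # position <- velocity
              [0.0, 1.0, -dt],           # velocity <- (accel reading - baseline)
              [0.0, 0.0, 1.0]])          # baseline error: random walk
B = np.array([[0.0], [dt], [0.0]])       # raw accelerometer sample enters velocity
H = np.array([[1.0, 0.0, 0.0]])          # GPS observes position only
Q = np.diag([0.0, 1e-6, 1e-4])           # process noise; baseline random-walk strength
R = np.array([[1e-4]])                   # GPS position variance (m^2)

def kf_step(x, P, accel_reading, gps_pos):
    # Predict using the (baseline-contaminated) strong-motion sample
    x = F @ x + B * accel_reading
    P = F @ P @ F.T + Q
    # Update with the GPS displacement
    y = np.array([[gps_pos]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros((3, 1)), np.eye(3)
x, P = kf_step(x, P, accel_reading=0.3, gps_pos=0.001)
print("position, velocity, baseline:", x.ravel())
```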

8.
For about three decades, helicopter-borne electromagnetic (HEM) measurements have been used to reveal the resistivity distribution of the upper one hundred metres of the earth's subsurface. HEM systems record secondary fields, which are 3–6 orders of magnitude smaller than the transmitted primary fields. As both the primary and the secondary fields are present at the receivers, well-designed bucking coils are often used to reduce the primary fields at the receivers to a minimum. The remaining parts of the primary fields, the zero levels, are generally corrected by subtracting field values recorded at high altitude (standard zero levelling) or estimated from resistivities of neighbouring lines or from resistivity maps (advanced zero levelling). These zero-levelling techniques enable the correction of long-term, quasi-linear instrumental drift. Short-term variations caused by temperature changes due to altitude variations, however, cannot be completely corrected by this procedure, resulting in stripe patterns on thematic maps. Statistical methods and/or 2-D filter techniques, called statistical levelling (tie-line levelling) and empirical levelling (microlevelling), respectively, which are used to correct stripe patterns in airborne geophysical data sets, are in general not directly applicable to HEM data, because HEM data levelling faces the problem that the parameter affected by zero-level errors, the secondary field, differs from the parameter generally levelled, the apparent resistivity. Furthermore, the dependency of the secondary field on both the resistivity of the subsurface and the sensor altitude is strongly nonlinear. A reasonable compromise is to microlevel both half-space parameters, apparent resistivity and apparent depth, followed by a recalculation of the secondary field components based on the levelled half-space parameters. Advantages and disadvantages of the various levelling techniques are discussed using an HEM data set obtained in a hilly region along the Saale River between the cities of Saalfeld and Jena in central Germany. A comparison of apparent resistivity and apparent depth maps derived from levelled HEM data shows that manual advanced zero levelling of the major level errors followed by automatic microlevelling of the remaining minor level errors yields the best results.

9.
10.
The statistical concept of the power spectrum has proven to be of great value in the analysis of time series and linear systems for which the inputs and outputs are functions of time. This paper shows how the concept can be extended to two-dimensional spatial power spectra and illustrates, by example, how it can be applied to the determination of optimal data processing methods for satellite-derived magnetic anomaly data and to the planning of missions to obtain such data. The analysis techniques indicated are applied to a data set and data processing procedure described by Mayhew et al. (1980). These authors describe magnetic anomaly data for Australia and the surrounding ocean obtained by the polar-orbit POGO series satellites. This paper shows that the data processing method used by these authors is approximately equivalent to an invariant two-dimensional linear filter and that it is reasonably close to optimal with respect to accuracy, though some possible improvements are suggested. Nevertheless, as is usual when filtering data, some real “signal” is unavoidably removed along with the “noise”, resulting in errors that can be quite large. A method for reducing these errors by using additional data from a medium-inclination orbit satellite (for example, 60° inclination) is suggested.

11.
Receiver Functions from Autoregressive Deconvolution   Cited: 4 (self-citations: 0, by others: 4)
Receiver functions can be estimated by minimizing the squared errors of a Wiener filter in the time domain or by spectral division in the frequency domain. To avoid the direct calculation of auto-correlation and cross-correlation coefficients in the Toeplitz equation, or of the auto-spectrum and cross-spectrum in the spectral-division equation, as well as the empirical choice of a damping parameter, autoregressive deconvolution is presented to isolate the receiver function from three-component teleseismic P waveforms. The vertical component of the teleseismic P waveform is modelled by an autoregressive model, which can be predicted forward and backward, respectively. The optimum length of the autoregressive model is determined by the Akaike criterion. By minimizing the squared errors of the forward and backward prediction filters, the autoregressive filter coefficients can be solved recursively, and the receiver function is estimated by a similar procedure. Both synthetic and real data tests show that autoregressive deconvolution is an effective method for isolating receiver functions from teleseismic P waveforms in the time domain.
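An illustrative numpy-only sketch of one ingredient above: fitting an AR model to a vertical-component trace and choosing its order with the Akaike criterion. Plain least squares is used here in place of the forward/backward (Burg-type) recursion of the paper, and the synthetic trace is an assumption.

```python
# Hedged sketch: AR model fitting and AIC order selection for a seismic trace.
import numpy as np

def ar_fit(z, order):
    """Least-squares AR fit: z[n] ~ sum_k a[k] * z[n-k-1]. Returns (coeffs, sigma2)."""
    X = np.column_stack([z[order - k - 1: len(z) - k - 1] for k in range(order)])
    y = z[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    return a, np.mean(resid ** 2)

def best_order(z, max_order=30):
    """Akaike information criterion over candidate AR orders."""
    aic = []
    for p in range(1, max_order + 1):
        _, s2 = ar_fit(z, p)
        aic.append(len(z) * np.log(s2) + 2 * p)
    return int(np.argmin(aic)) + 1

rng = np.random.default_rng(6)
z = np.convolve(rng.standard_normal(2048), np.ones(5) / 5, mode="same")
print("AIC-selected AR order:", best_order(z))
```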

12.
This study evaluated the attributes and uncertainty of non-point source pollution data derived from synoptic surveys in a catchment affected by inactive metal mines, in order to help identify and select appropriate methods for data analysis/reporting and information use. Dissolved zinc data from the Upper Animas River Basin, Colorado, USA, were the focus of the study. Zinc was evaluated because its concentrations were highest relative to national water quality criteria for brown trout, and zinc had the greatest frequency of criteria exceedances compared with other metals. The data attributes evaluated included measurement and model error, sample size, non-normality, seasonality and uncertainty. The average measurement errors for discharges, concentrations and loadings were 0.15, 0.1 and 0.18, respectively. The 90% and 95% confidence-interval coefficients for mean concentrations based on a sample size of four were 0.48 and 0.65, respectively, and ranged between 0.15 and 0.23 for sample sizes greater than 40. Aggregation of data from multiple stations decreased the confidence intervals significantly, but further aggregation of all data increased them as a result of increasing spatial variability. Unit-area loading data were approximately log-normal. Concentration data were right-skewed but not log-normal. Differences in median concentrations were appreciable between snowmelt and both storm flow and baseflow, but not between storm flow and baseflow. Differences in unit-area loadings between all flow events were large. It was determined that the average concentration and unit-area loading values should be estimated for each flow event because of significant seasonality. Time-weighted values generally should be computed if annual information is required. The confidence in average concentrations and unit-area loadings depends on the computation method used. Both concentrations and loadings can be significantly underestimated on an annual basis when using data from synoptic surveys if the first flush of contaminants during the initial snowmelt runoff period is not sampled. The ambient standard for dissolved zinc for all events was estimated as 1600 μg l⁻¹ using the 85th percentile of observed concentration data, with a 90% confidence interval width of 200 μg l⁻¹. Copyright © 1999 John Wiley & Sons, Ltd.

13.
《水文科学杂志》 (Hydrological Sciences Journal), 2013, 58(5): 852-871
To reflect the uncertainties of a hydrological model in simulating and forecasting observed discharges according to rainfall inputs, the estimated result for each time step should not be just a point estimate (a single numerical value), but should be expressed as a prediction interval, i.e. a band defined by the prediction bounds of a particular confidence level α. How best to assess the quality of the prediction bounds thus becomes very important for understanding the modelling uncertainty in a comprehensive and objective way. This paper focuses on seven indices for characterizing the prediction bounds from different perspectives. For the three case-study catchments presented, these indices are calculated for the prediction bounds generated by the generalized likelihood uncertainty estimation (GLUE) method for various threshold values. In addition, the relationships among these indices are investigated, particularly that of the containing ratio (CR) to the other indices. In this context, three main findings are obtained for the prediction bounds estimated by GLUE. Firstly, both the average band-width and the average relative band-width are seen to have very strong linear correlations with the CR index. Secondly, a high CR value, a narrow band-width, and a high degree of symmetry with respect to the observed hydrograph, all of which are clearly desirable properties of the prediction bounds estimated by the uncertainty assessment methods, cannot all be achieved simultaneously. Thirdly, for the prediction bounds considered, the higher CR values and the higher degrees of symmetry with respect to the observed hydrograph are found to be associated with both the larger band-widths and the larger deviation amplitudes. It is recommended that a set of different indices, such as those considered in this study, be employed for assessing and comparing the prediction bounds in a more comprehensive and objective way.
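The two simplest of the indices discussed above, the containing ratio (CR) and the average band-width, as a minimal sketch; array conventions and the usage comment are assumptions.

```python
# Hedged sketch: two indices for assessing prediction bounds.
import numpy as np

def containing_ratio(obs, lower, upper):
    """Fraction of observations falling inside the prediction bounds."""
    return np.mean((obs >= lower) & (obs <= upper))

def average_band_width(lower, upper):
    """Mean vertical distance between the upper and lower prediction bounds."""
    return np.mean(upper - lower)

# e.g. with GLUE-derived bounds for a hydrograph (hypothetical arrays):
# cr = containing_ratio(q_obs, q_lo, q_hi)
# bw = average_band_width(q_lo, q_hi)
```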

14.
An inverse method is developed to simultaneously estimate multiple hydraulic conductivities, source/sink strengths, and boundary conditions, for two-dimensional confined and unconfined aquifers under non-pumping or pumping conditions. The method incorporates noisy observed data (hydraulic heads, groundwater fluxes, or well rates) at measurement locations. With a set of hybrid formulations, given sufficient measurement data, the method yields well-posed systems of equations that can be solved efficiently via nonlinear optimization. The solution is stable when measurement errors are increased. The method is successfully tested on problems with regular and irregular geometries, different heterogeneity patterns and variances (maximum Kmax/Kmin tested is 10,000), and error magnitudes. Under non-pumping conditions, when error-free observed data are used, the estimated conductivities and recharge rates are accurate within 8% of the true values. When data contain increasing errors, the estimated parameters become less accurate, as expected. For problems where the underlying parameter variation is unknown, equivalent conductivities and average recharge rates can be estimated. Under pumping (and/or injection) conditions, a hybrid formulation is developed to address these local source/sink effects, while different types of boundary conditions can also exert significant influences on drawdowns. Local grid refinement near wells is not needed to obtain accurate results, thus inversion is successful with coarse inverse grids, leading to high computation efficiency. Furthermore, flux measurements are not needed for the inversion to succeed; data requirement of the method is thus not much different from that of interpreting classic well tests. Finally, inversion accuracy is not sensitive to the degree of nonlinearity of the flow equations. Performance of the inverse method for confined and unconfined aquifer problems is similar in terms of the accuracy of the estimated parameters, the recovered head fields, and the solver speed.

15.
Reflection traveltime tomography is a relatively accurate method for velocity estimation and ultimately reduces to solving a system of linear equations. The system has very large dimensions, and conventional solvers require large amounts of storage and computation. Noting that, when the projection function is taken as the traveltime residual and the image function as the slowness residual, the elements of the sensitivity matrix have the special physical meaning of the lengths of the ray segments crossing the grid cells, this paper adopts row-indexed compressed storage: the sensitivity matrix is stored in compressed form directly during ray-tracing forward modeling, and the compressed matrix is used for the solution during tomographic inversion, greatly reducing both storage and computation.
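A sketch of the storage scheme described above using scipy's CSR (compressed sparse row) format, filled one ray at a time as the forward modeling proceeds. The toy rays, cell indices, segment lengths, and damping value are stand-ins for a real ray tracer's output.

```python
# Hedged sketch: build the sensitivity matrix in CSR form during ray tracing,
# then solve the tomographic system with damped LSQR on the compressed matrix.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

data, indices, indptr = [], [], [0]      # the three CSR arrays

def add_ray(cells, lengths):
    """One ray = one matrix row; nonzeros are segment lengths in crossed cells."""
    indices.extend(cells)
    data.extend(lengths)
    indptr.append(len(indices))

# Toy rays through a 100-cell slowness grid (hypothetical ray-tracer output)
add_ray(cells=[3, 4, 14, 24], lengths=[10.0, 7.1, 12.2, 5.0])
add_ray(cells=[3, 13, 23, 24], lengths=[6.4, 11.8, 9.9, 8.3])

L = csr_matrix((data, indices, indptr), shape=(len(indptr) - 1, 100))
dt = np.array([0.012, 0.015])            # traveltime residuals (s)
ds = lsqr(L, dt, damp=1.0)[0]            # damped least squares: slowness update
print("nonzeros stored:", L.nnz, "of", L.shape[0] * L.shape[1])
```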

16.
This research incorporates the generalized likelihood uncertainty estimation (GLUE) methodology in a high-resolution Environmental Protection Agency Storm Water Management Model (SWMM), which we developed for a highly urbanized sewershed in Syracuse, NY, to assess SWMM modelling uncertainties and estimate parameters. We addressed two issues that have long been suggested to have a great impact on GLUE uncertainty estimation: the observations used to construct the likelihood measure, and the sampling approach used to obtain the posterior samples of the input parameters and the prediction bounds of the model output. First, on the basis of Bayes' theorem, we compared the prediction bounds generated from the same Gaussian-distribution likelihood measure conditioned on flow observations of varying magnitude. Second, we employed two sampling techniques, sampling importance resampling (SIR) and threshold sampling, to generate posterior parameter distributions and prediction bounds, on the basis of which the sampling efficiency was compared. In addition, for a better understanding of the hydrological responses of different pervious land covers in urban areas, we developed new parameter sets in SWMM representing the hydrological properties of trees and lawns, which were estimated through the GLUE procedure. The results showed that SIR was a more effective alternative to the conventional threshold sampling method. The combined total-flow and peak-flow data were an efficient alternative to the intensive 5-min flow data for reducing SWMM parameter and output uncertainties. Several runoff control parameters were found to have a great effect on peak flows, including the newly introduced parameters for trees. Copyright © 2013 John Wiley & Sons, Ltd.
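A minimal sampling importance resampling (SIR) step of the kind compared above. The likelihood measure, parameter ranges, and "true" parameter values are placeholders; in the study the weights would come from SWMM simulations scored against flow observations.

```python
# Hedged sketch: SIR resampling of behavioural parameter sets for GLUE.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
params = rng.uniform([0.1, 1.0], [0.9, 20.0], size=(n, 2))   # prior samples

def likelihood(theta):
    """Gaussian likelihood measure of simulated vs. observed flow (placeholder)."""
    sim_error = (theta[:, 0] - 0.4) ** 2 + ((theta[:, 1] - 8.0) / 10.0) ** 2
    return np.exp(-0.5 * sim_error / 0.1 ** 2)

w = likelihood(params)
w /= w.sum()                                                  # importance weights
idx = rng.choice(n, size=n, replace=True, p=w)                # resampling step
posterior_params = params[idx]                                # approx. posterior
print("posterior means:", posterior_params.mean(axis=0))
```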

17.
The problem of identification of the modal parameters of a structural model using measured ambient response time histories is addressed. A Bayesian spectral density approach (BSDA) for modal updating is presented which uses the statistical properties of a spectral density estimator to obtain not only the optimal values of the updated modal parameters but also their associated uncertainties by calculating the posterior joint probability distribution of these parameters. Calculation of the uncertainties of the identified modal parameters is very important if one plans to proceed with the updating of a theoretical finite element model based on modal estimates. It is found that the updated PDF of the modal parameters can be well approximated by a Gaussian distribution centred at the optimal parameters at which the posterior PDF is maximized. Examples using simulated data are presented to illustrate the proposed method. Copyright © 2001 John Wiley & Sons, Ltd.

18.
A comparative analysis of a variety of relationships for prediction of basin lag is performed by applying them to 23 basins located in the same geographical area and characterized by a rather similar vegetative cover. The results of computations indicate that a lag–area relationship with two constant parameters is the best predictor for most basins; under different vegetative covers in the same basin only one parameter should be variable. For a few other basins characterized by an anomalous drainage channel network of low density, such a relationship can lead to unacceptable errors. Thus, there is a need for an additional relationship to overcome this difficulty, but a larger number of anomalous basins would be required for its determination. An alternative procedure, based on the use of the non‐linear kinematic wave, which at least allows singling out the cases where a specific lag–area relationship is not reliable, is proposed. This procedure, therefore, represents a partial but very useful solution to avoid considerable errors in hydrological practice. Copyright © 2002 John Wiley & Sons, Ltd.
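A sketch of fitting a two-parameter lag–area relationship of the common power-law form t_L = a·A^b by log-log regression. The power-law form and the basin numbers below are assumptions for illustration, not the paper's 23 basins.

```python
# Hedged sketch: two-parameter lag-area relationship fitted in log-log space.
import numpy as np

area = np.array([2.1, 5.3, 12.0, 33.5, 80.2])    # basin area, km^2 (made up)
lag = np.array([0.9, 1.6, 2.4, 4.1, 6.8])        # basin lag, h (made up)

b, log_a = np.polyfit(np.log(area), np.log(lag), deg=1)
a = np.exp(log_a)
print(f"t_L = {a:.2f} * A^{b:.2f}")
```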

19.
A robust metric of data misfit, such as the ℓ1-norm, is required for geophysical parameter estimation when the data are contaminated by erratic noise. Recently, the iteratively reweighted and refined least-squares (IRRLS) algorithm was introduced for the efficient solution of geophysical inverse problems in the presence of additive Gaussian noise in the data. We extend the algorithm in two practically important directions, to make it applicable to data with non-Gaussian noise and to make its regularisation-parameter tuning more efficient and automatic. The regularisation parameter in the IRRLS algorithm varies with iteration, allowing the efficient solution of constrained problems. A technique is proposed, based on the secant method for root finding, to concentrate on finding a solution that satisfies the constraint, either fitting a target misfit (if a bound on the noise is available) or having a target size (if a bound on the solution is available). This technique leads to an automatic update of the regularisation parameter at each and every iteration. We further propose a simple and efficient scheme that tunes the regularisation parameter without requiring target bounds. This is of great importance for field-data inversion, where there is no information about the size of the noise or the solution. Numerical examples from non-stationary seismic deconvolution and velocity-stack inversion show that the proposed algorithm is efficient, stable, and robust, and that it outperforms conventional and state-of-the-art methods.
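A generic IRLS iteration for an ℓ1 data misfit, the core idea of the algorithm family above, as a minimal sketch: the paper's per-iteration regularisation-parameter updates are omitted, and a fixed ridge term stands in for them.

```python
# Hedged sketch: iteratively reweighted least squares for an l1 misfit,
# robust to erratic (spiky) noise in the data.
import numpy as np

def irls_l1(G, d, n_iter=20, eps=1e-6, mu=1e-3):
    """Approximately minimise ||G m - d||_1 + mu ||m||_2^2 by reweighted LS."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]       # plain l2 starting model
    for _ in range(n_iter):
        r = G @ m - d
        W = 1.0 / np.maximum(np.abs(r), eps)       # l1 reweighting of residuals
        # Weighted normal equations: (G^T W G + mu I) m = G^T W d
        A = G.T @ (W[:, None] * G) + mu * np.eye(G.shape[1])
        m = np.linalg.solve(A, G.T @ (W * d))
    return m

rng = np.random.default_rng(4)
G = rng.standard_normal((200, 10))
m_true = rng.standard_normal(10)
d = G @ m_true
d[::25] += 50.0                                    # erratic, non-Gaussian spikes
print("l1 model error:", np.linalg.norm(irls_l1(G, d) - m_true))
```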

20.
We propose a two-dimensional, non-linear method for the inversion of reflected/converted traveltimes and waveform semblance, designed to obtain the location and morphology of seismic reflectors in a laterally heterogeneous medium and for any source-to-receiver acquisition layout. The method uses a non-linear optimization scheme for the determination of the interface parameters, in which the traveltimes are calculated with a finite-difference solver of the eikonal equation, assuming an a priori known background velocity model. For the search for the optimal interface model, we use a multiscale approach and the genetic-algorithm global optimization technique. During the initial stages of the inversion, we use the arrival times of the reflection phase to retrieve an interface model defined by a small number of parameters. In subsequent steps, the inversion is based on the optimization of the semblance value determined along the calculated traveltime curves. Errors in the final model parameters, and the criteria for the choice of the best-fit model, are also estimated from the shape of the semblance function in the model-parameter space. The method is tested and validated on a synthetic dataset that simulates the acquisition of reflection data in a complex volcanic structure. This study shows that the proposed inversion approach is a valid tool for geophysical investigations in complex geological environments, yielding the morphology and positions of embedded discontinuities.
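A global-search toy version of the inversion above: recovering the depth of a flat reflector and the background velocity from reflection traveltimes. A straight-ray analytic forward model and scipy's differential evolution stand in for the paper's eikonal solver and genetic algorithm; all numbers are assumptions.

```python
# Hedged sketch: global optimization of interface parameters against traveltimes.
import numpy as np
from scipy.optimize import differential_evolution

offsets = np.linspace(0.0, 1000.0, 21)            # source-receiver offsets (m)

def traveltimes(depth, velocity):
    """Two-way reflection time for a flat reflector in a homogeneous half-space."""
    return np.sqrt(offsets ** 2 + 4.0 * depth ** 2) / velocity

t_obs = traveltimes(350.0, 2000.0)                # synthetic "observed" picks

def misfit(p):
    return float(np.sum((traveltimes(*p) - t_obs) ** 2))

result = differential_evolution(misfit,
                                bounds=[(100.0, 800.0), (1500.0, 3500.0)],
                                seed=5, tol=1e-12)
print("recovered depth (m), velocity (m/s):", result.x)
```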
