Similar Documents
20 similar documents found (search time: 31 ms)
1.
In statistical space-time modeling, non-separable covariance functions are often more realistic than separable models. Various tests for separability in the literature can justify this choice; however, when the separability hypothesis is rejected, none of these tests identifies the type of non-separability of the space-time covariance function, which is an important further step in choosing a class of models. In this paper a method for testing positive and negative non-separability is given; moreover, an approach for testing some well-known classes of space-time covariance function models is proposed. The performance of the tests is shown using real and simulated data.

2.
Geophysical linear inversion based on a hybrid differential evolution algorithm
After linearization, geophysical inverse problems reduce to solving ill-conditioned systems of linear equations. To compute geophysical parameters quickly and accurately, this paper proposes a new hybrid differential evolution algorithm (HDE) based on the LSQR algorithm. The method uses LSQR to generate the initial population of the DE algorithm, improving the speed and stability of DE. Inversion results of four regularization methods (Tikhonov, TSVD, LSQR, and HDE) are compared in detail at different noise levels. Inversions of both theoretical models and real data show that the improved HDE algorithm is successful for geophysical inverse problems: the inversion results correlate well with the original model, the method is more stable and accurate than conventional inversion algorithms, and it requires no regularization parameter, which makes it more practical.
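The LSQR-seeded differential evolution idea can be sketched in a few lines. This is a minimal illustration, not the paper's HDE implementation: the forward operator (an ill-conditioned Vandermonde matrix), the noise level, and the DE settings (population size, scale factor F, crossover rate CR) are all invented for the example.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Ill-conditioned linear forward problem G m = d, a stand-in for a
# linearized geophysical inverse problem.
n = 8
G = np.vander(np.linspace(0, 1, n), n)       # notoriously ill-conditioned
m_true = rng.normal(size=n)
d = G @ m_true + 1e-6 * rng.normal(size=n)   # small data noise

def misfit(m):
    r = G @ m - d
    return float(r @ r)

# Step 1: LSQR gives a cheap, slightly damped starting model.
m_lsqr = lsqr(G, d, damp=1e-4)[0]

# Step 2: seed the DE population around the LSQR solution.
npop, F, CR = 20, 0.7, 0.9
pop = m_lsqr + 0.1 * rng.normal(size=(npop, n))
pop[0] = m_lsqr                               # keep the seed itself
cost = np.array([misfit(p) for p in pop])

for _ in range(200):                          # simplified DE/rand/1/bin loop
    for i in range(npop):
        a, b, c = pop[rng.choice(npop, 3, replace=False)]
        trial = np.where(rng.random(n) < CR, a + F * (b - c), pop[i])
        ctrial = misfit(trial)
        if ctrial < cost[i]:                  # greedy selection
            pop[i], cost[i] = trial, ctrial

m_best = pop[np.argmin(cost)]
```

Because the LSQR solution is kept in the initial population and selection is greedy, the hybrid can never end up worse than the LSQR seed in data misfit.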

3.
Joint inversion of dual-TW NMR logging data and fluid identification
To meet the needs of analyzing dual wait-time (TW) NMR logging data and identifying fluids, a joint inversion algorithm combining a globally searching genetic algorithm with a locally searching least-squares method is developed to process dual-TW observations. First, the logging response mechanism of the dual-TW observation mode is studied for a rock-physics model saturated with oil, gas, and water. Then, a genetic algorithm with good global search performance is used to invert the echo-difference data and compute the NMR relaxation properties and volumes of the fluids. Finally, with the genetic-algorithm results as initial values, a damped least-squares method inverts the dual-TW echo trains more finely, yielding the T2 distributions, porosity, and fluid saturations. Applications to synthetic data from an idealized model and to field logging data show that combining the genetic algorithm with the least-squares method is an effective joint inversion approach for processing dual-TW NMR logging data and evaluating fluids.
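The refinement stage, fitting a T2 distribution to an echo train by damped least squares, can be illustrated on synthetic data. This is a sketch under invented parameters (echo spacing, T2 grid, damping weight), not the paper's algorithm; the genetic-algorithm stage is replaced here by a simple non-negative Tikhonov-damped solve.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Synthetic CPMG echo train: E(t) = sum_j f_j * exp(-t / T2_j)
t = np.arange(1, 301) * 0.002                 # echo times (s), TE = 2 ms
T2_grid = np.logspace(-3, 1, 40)              # relaxation-time bins (s)
f_true = np.exp(-0.5 * ((np.log10(T2_grid) + 1) / 0.2) ** 2)  # peak near 0.1 s
K = np.exp(-t[:, None] / T2_grid[None, :])    # multi-exponential kernel
echoes = K @ f_true + 0.01 * rng.normal(size=t.size)

# Damped (Tikhonov-regularized) non-negative inversion:
# minimize ||K f - echoes||^2 + alpha^2 ||f||^2  subject to f >= 0,
# implemented by stacking alpha * I under the kernel matrix.
alpha = 0.1
K_aug = np.vstack([K, alpha * np.eye(T2_grid.size)])
d_aug = np.concatenate([echoes, np.zeros(T2_grid.size)])
f_est, _ = nnls(K_aug, d_aug)

T2_peak = T2_grid[np.argmax(f_est)]           # should fall near the true 0.1 s
```

The damping keeps the notoriously ill-conditioned multi-exponential fit stable; in the paper's workflow the genetic-algorithm result would supply the starting point and constraints for this step.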

4.
Acoustic full waveforms recorded in wells are the simplest way to obtain in situ velocities of P, S, and Stoneley waves. In hard formations, processing and interpretation of acoustic full waveforms pose no problems in identifying the wave packets, computing their slownesses and arrivals, and determining the elastic parameters of the rocks. In shallow well intervals in soft formations, however, difficulties arise in properly evaluating the S-wave velocity, because no refracted S wave exists when its velocity is lower than the velocity of the mud. A dynamic approach to selecting a proper semblance value for determining the correct slowness and arrival is presented. The correlation between the results of the proposed approach and theoretical modeling is a measure of the correctness of the method.

5.
Reiter, E.C., Toksoz, M.N. and Purdy, G.M. 1992. A semblance-guided median filter. Geophysical Prospecting 41, 15–41. A slowness selective median filter based on information from a local set of traces is described and implemented. The filter is constructed in two steps, the first being an estimation of a preferred slowness and the second, the selection of a median or trimmed mean value to replace the original data point. A symmetric window of traces defining the filter aperture is selected about each trace to be filtered and the filter applied repeatedly to each time point. The preferred slowness is determined by scanning a range of linear moveouts within the user-specified slowness passband. Semblance is computed for each trial slowness and the preferred slowness selected from the peak semblance value. Data points collected along this preferred slowness are then sorted from lowest to highest and in the case of a pure median filter, the middle point(s) selected to replace the original data point. The output of the filter is therefore quite insensitive to large amplitude noise bursts, retaining the well-known beneficial properties of a traditional 1D median filter. Energy which is either incoherent over the filter aperture or lies outside the slowness passband, may be additionally suppressed by weighting the filter output by the measured peak semblance. This approach may be used as a velocity filter to estimate coherent signal within a specified slowness passband and reject coherent energy outside this range. For applications of this type, other velocity estimators may be used in place of our semblance measure to provide improved velocity estimation and better filter performance. The filter aperture may also be extended to provide increased velocity estimation, but will result in additional lateral smearing of signal. We show that, in addition to a velocity filter, our approach may be used to improve signal-to-noise ratios in noisy data.
The median filter tends to suppress the amplitude of random background noise and semblance weighting may be used to reduce the amplitude of background noise further while enhancing coherent signal. We apply our method to vertical seismic profile data to separate upgoing and downgoing wavefields, and also to large-offset ocean bottom hydrophone data to enhance weak refracted and post-critically reflected energy.
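The two-step scheme described above can be condensed into a short numpy sketch: scan trial slownesses for the peak semblance over a local trace window, then replace each sample by the semblance-weighted median along the preferred moveout. Window size, slowness passband, and the test data are invented for the illustration, and nearest-sample moveout is used in place of proper interpolation.

```python
import numpy as np

def semblance_median_filter(data, dt, dx, p_scan, half_aperture=2):
    """Slowness-selective median filter (sketch):
    1) at each (trace, time) scan linear moveouts p and pick the slowness
       with peak semblance over a local window of traces;
    2) output the median of the samples gathered along that moveout,
       weighted by the peak semblance."""
    ntr, nt = data.shape
    out = np.zeros_like(data)
    for ix in range(ntr):
        lo, hi = max(0, ix - half_aperture), min(ntr, ix + half_aperture + 1)
        offsets = (np.arange(lo, hi) - ix) * dx
        for it in range(nt):
            best_s, best_vals = 0.0, data[ix, it:it + 1]
            for p in p_scan:
                # nearest-sample indices along the trial moveout
                idx = np.round((it * dt + p * offsets) / dt).astype(int)
                ok = (idx >= 0) & (idx < nt)
                if ok.sum() < 2:
                    continue
                vals = data[np.arange(lo, hi)[ok], idx[ok]]
                s = vals.sum() ** 2 / (ok.sum() * (vals ** 2).sum() + 1e-12)
                if s > best_s:
                    best_s, best_vals = s, vals
            out[ix, it] = best_s * np.median(best_vals)
    return out

# A linear event (one sample of moveout per trace) plus a noise spike:
dt, dx = 0.004, 1.0
ntr, nt = 11, 64
data = np.zeros((ntr, nt))
for ix in range(ntr):
    data[ix, 20 + ix] = 1.0            # coherent event, slowness dt/dx
data[5, 40] = 10.0                     # large-amplitude noise burst
filt = semblance_median_filter(data, dt, dx,
                               p_scan=[0.0, dt / dx, 2 * dt / dx])
```

The coherent event passes almost unchanged (its peak semblance is near 1), while the isolated burst is removed by the median and further attenuated by the low semblance weight.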

6.
The paper discusses the performance and robustness of the Bayesian (probabilistic) approach to seismic tomography enhanced by the numerical Monte Carlo sampling technique. The approach is compared with two other popular techniques, namely the damped least-squares (LSQR) method and the general optimization approach. The theoretical considerations are illustrated by an analysis of seismic data from the Rudna (Poland) copper mine. In contrast to the LSQR and optimization techniques, the Bayesian approach allows construction not only of the “best-fitting” model of the sought velocity distribution but also of other estimators, for example the average model, which is often expected to be more robust than the maximum likelihood solution. We demonstrate that using the Markov Chain Monte Carlo sampling technique within the Bayesian approach opens up the possibility of analyzing tomography imaging uncertainties with minimal additional computational effort compared to the robust optimization approach. On the basis of the considered example it is concluded that the Monte Carlo based Bayesian approach offers new possibilities of robust and reliable tomography imaging.
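The point that MCMC sampling yields the average model and its uncertainty almost for free can be shown on a toy linear tomography problem. This is a generic Metropolis sketch with an invented three-cell model and ray geometry, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear tomography: travel times d = G m + noise, m = slowness in 3 cells.
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
m_true = np.array([0.5, 0.8, 0.6])
sigma = 0.02
d = G @ m_true + sigma * rng.normal(size=4)

def log_post(m):                        # Gaussian likelihood, flat prior
    r = G @ m - d
    return -0.5 * (r @ r) / sigma ** 2

# Metropolis random walk
m = np.array([0.7, 0.7, 0.7])
lp = log_post(m)
samples = []
for _ in range(20000):
    prop = m + 0.01 * rng.normal(size=3)
    lpp = log_post(prop)
    if np.log(rng.random()) < lpp - lp:
        m, lp = prop, lpp
    samples.append(m.copy())
samples = np.array(samples[5000:])      # discard burn-in

m_mean = samples.mean(axis=0)           # the "average model" estimator
m_std = samples.std(axis=0)             # per-cell uncertainty, nearly free
```

Once the chain exists, any estimator (mean, median, credible intervals) is a cheap summary of the same samples, which is exactly the advantage over a single best-fitting solution.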

7.
Although modeling cross-covariances by fitting the linear model of coregionalization (LMC) is considered a cumbersome task, cross-covariances are the key to integrating data for multiple attributes in environmental hydrology and in aquifer and reservoir characterization using multivariate geostatistics. This paper proposes a novel method of modeling cross-covariances in the LMC. The classic minimum/maximum autocorrelation factors (MAF) method is analyzed and found to be a good tool to discriminate the elementary nested structures of directional sample covariance matrices. Thus, separate modeling of the scalar sample covariance for each MAF factor makes it possible to obtain the complete LMC model for the original attributes after a back rotation of the diagonal model covariance matrix of directional factors. However, such a back rotation is not computable following the classic MAF formulation. This paper introduces an ambi-rotational minimum/maximum autocorrelation factors (AMAF) method that allows a back and forth double rotation of the directional diagonal model covariance matrix for factors. This approach provides a device for modeling the full matrix of directional covariances and cross-covariances for the original attributes in the LMC without resorting to iterations. In this way, multivariate geostatistics can be used for data integration while avoiding collocated approaches or the rotation and modeling of data factor scores. The method is illustrated with an example of covariances for three attributes.
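The classic MAF construction the paper builds on is a two-step rotation that can be sketched directly; the data, lag choice, and mixing weights below are invented, and the AMAF back-and-forth double rotation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def maf(Z, lag=1):
    """Classic two-step min/max autocorrelation factors for data Z (n x p)
    on a regular transect: whiten with the zero-lag covariance, then rotate
    with the covariance of lag-h increments."""
    Zc = Z - Z.mean(axis=0)
    B = np.cov(Zc, rowvar=False)
    lam, U = np.linalg.eigh(B)
    A = U / np.sqrt(lam)             # whitening transform, cov(Zc @ A) = I
    Y = Zc @ A
    D = Y[:-lag] - Y[lag:]           # lag-h increments
    Cd = np.cov(D, rowvar=False)
    _, V = np.linalg.eigh(Cd)        # ascending increment variance
    return Y @ V, A @ V              # factor scores, combined transform

# Two latent signals with very different spatial continuity, mixed linearly:
n = 2000
smooth = np.convolve(rng.normal(size=n + 50), np.ones(25) / 25, "valid")[:n]
Z = np.column_stack([smooth + 0.3 * rng.normal(size=n),
                     0.5 * smooth - rng.normal(size=n)])
F, W = maf(Z)

# The first factor is the most spatially continuous one.
r0 = np.corrcoef(F[:-1, 0], F[1:, 0])[0, 1]
r1 = np.corrcoef(F[:-1, 1], F[1:, 1])[0, 1]
```

By construction the factors are uncorrelated at lag 0 and ordered by spatial continuity, which is what lets each factor's covariance be modeled separately, as the abstract describes.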

8.
Gravity data inversion can provide valuable information on the structure of the underlying distribution of mass. The solution of the inversion of gravity data is an ill-posed problem, and many methods have been proposed for solving it using various systematic techniques. The method proposed here is a new approach based on the collocation principle, derived from the Wiener filtering and prediction theory. The natural multiplicity of the solution of the inverse gravimetric problem can be overcome only by assuming a substantially simplified model, in this case a two-layer model, i.e. with one separation surface and one density contrast only. The presence of gravity disturbance and/or outliers in the upper layer is also taken into account. The basic idea of the method is to propagate the covariance structure of the depth function of the separation surface to the covariance structure of the gravity field measured on a reference plane. This can be done since the gravity field produced by the layers is a functional (linearized) of the depth. Furthermore, in this approach, it is possible to obtain the variance of the estimation error which indicates the precision of the computed solution. The method has proved to be effective on simulated data, fulfilling the a priori hypotheses. In real cases which display the required statistical homogeneity, good preliminary solutions, useful for a further quantitative interpretation, have also been derived. A case study is discussed.

9.
Upper-mantle velocity structure beneath North China
Zhao Zhu. Acta Geophysica Sinica (Chinese Journal of Geophysics), 1983, 26(4): 341-354
Using 140 earthquakes from different azimuths recorded by the Beijing seismic network whose rays traverse North China, P-wave slowness curves were obtained with a smoothing method and the dT/dΔ method, and travel-time curves were derived by combining travel times with slowness. Starting from the Gerver-Markushevich formula, a simple, easily computed form was derived and used to invert for an initial velocity model; theoretical travel-time curves were then computed to select the model that best fits the observations. In the average model, the low-velocity layer appears at a relatively shallow depth of about 60 km; two gradient zones of sharply increasing velocity exist at depths of 360-420 km and 660-680 km, and a smaller gradient zone may exist near 500 km. The depth of the 660 km gradient zone differs noticeably between the southeast and southwest directions of North China.

10.
The three-dimensional resistivity inversion problem is studied in depth, and a practical algorithm for 3D inversion from surface observations is presented. The method uses finite differences for the forward solution and modifies the elements of the roughness matrix appropriately so that the roughness matrix can be constructed in all situations; the inversion equations are then set up under the condition of minimum total model roughness. The equations are solved iteratively with the fast, stable least-squares orthogonal decomposition (LSQR) method. During the iterations only the products of the partial-derivative (Jacobian) matrix and its transpose with a vector are needed, which avoids the cumbersome direct handling of the Jacobian, saves memory, and speeds up the inversion. Various computed examples show that the method is effective for solving large-scale 3D resistivity inversion problems.
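The implementation point in this abstract, that LSQR only ever needs Jacobian-vector and transposed products, can be shown with a `LinearOperator`. This is a generic sketch with an invented random sparse Jacobian and a 1-D roughness operator, not the paper's finite-difference system.

```python
import numpy as np
from scipy.sparse import diags, random as sprandom
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(4)

nd, nm = 120, 80                   # data points, model cells (toy sizes)
J = sprandom(nd, nm, density=0.1, random_state=4, format="csr")  # sparse Jacobian
R = diags([-1.0, 1.0], [0, 1], shape=(nm - 1, nm))  # roughness (difference) operator
lam = 0.5                          # roughness weight
r = rng.normal(size=nd)            # data residual to fit

# LSQR solves min ||A x - b|| for the stacked system A = [J; lam R],
# b = [r; 0], but A is never formed: only the products A @ x and
# A.T @ y are supplied, which keeps memory low.
def matvec(x):
    return np.concatenate([J @ x, lam * (R @ x)])

def rmatvec(y):
    return J.T @ y[:nd] + lam * (R.T @ y[nd:])

A = LinearOperator((nd + nm - 1, nm), matvec=matvec, rmatvec=rmatvec)
b = np.concatenate([r, np.zeros(nm - 1)])
dm = lsqr(A, b, atol=1e-12, btol=1e-12, iter_lim=2000)[0]

# Sanity check against the explicitly stacked dense least-squares solution.
A_dense = np.vstack([J.toarray(), lam * R.toarray()])
dm_ref = np.linalg.lstsq(A_dense, b, rcond=None)[0]
```

In a real 3D resistivity code the two callbacks would apply the forward and adjoint sensitivity computations directly, so the Jacobian never has to be stored densely.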

11.
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data, without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation.
Ben Ingram
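The sparse-matrix benefit of a space-limited covariance can be demonstrated with the spherical model, which is exactly zero beyond its range. Everything below (the spherical choice, range, nugget, data in the unit square) is an invented illustration, and the paper's projection onto a data subset is not shown.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(5)

def spherical_cov(h, sill=1.0, a=0.3):
    """Spherical covariance: exactly zero beyond the range a, so covariance
    matrices built from it are sparse for scattered data."""
    h = np.asarray(h, dtype=float)
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

# Scattered observations in the unit square.
n = 400
X = rng.random((n, 2))
z = np.sin(4.0 * X[:, 0]) + 0.1 * rng.normal(size=n)

H = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
C = spherical_cov(H) + 1e-3 * np.eye(n)    # small nugget on the diagonal
C_sparse = csr_matrix(C)                   # most entries are exact zeros
sparsity = C_sparse.nnz / n ** 2           # fraction of nonzeros, well below 1

# Simple-kriging weights for one target point via a sparse solve.
x0 = np.array([0.5, 0.5])
c0 = spherical_cov(np.linalg.norm(X - x0, axis=1))
w = spsolve(C_sparse.tocsc(), c0)
z0 = float(w @ (z - z.mean()) + z.mean())  # predict relative to the global mean
```

With a compact support the kriging system can go through optimized sparse factorizations, which is what makes the approach scale in automatic-mapping settings.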

12.
Slant stacking transforms seismic data, recorded as a function of source-receiver offset and traveltime, into the domain of intercept time τ and ray parameter p. The shape of the τ-p-curves thus obtained is closely related to the slowness surfaces of the layers. A layer-stripping operation in the τ-p-domain removes all effects of the layers above the target layer. The resulting curve is equal to the slowness surface of the layer except for a scaling factor containing the thickness and dip of the layer. The slowness surface is a characteristic surface for anisotropic media. This makes the τ-p-domain very suitable for detecting and describing anisotropic layers. The relationship between the shape of τ-p-curves, the slowness surfaces, and the geometry of the layers is derived. Synthetic τ-p-curves calculated with the reflectivity method show some difficulties that can arise in determining the shape of the curves and in applying the stripping operation. It is shown that the effects of vertical inhomogeneity on the interpretation of τ-p-curves in terms of anisotropy are small.
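The discrete slant stack itself is a few lines of numpy. This sketch uses nearest-sample interpolation and an invented single linear event; a production version would interpolate and taper.

```python
import numpy as np

def slant_stack(data, x, dt, p_values):
    """Discrete slant stack: u(p, tau) = sum over offsets x of d(x, tau + p x),
    with nearest-sample lookup (linear interpolation would be better)."""
    ntr, nt = data.shape
    out = np.zeros((len(p_values), nt))
    t = np.arange(nt) * dt
    for ip, p in enumerate(p_values):
        for itr in range(ntr):
            idx = np.round((t + p * x[itr]) / dt).astype(int)
            ok = (idx >= 0) & (idx < nt)
            out[ip, ok] += data[itr, idx[ok]]
    return out

# One linear event t = tau0 + p0 * x.
dt = 0.004
x = np.arange(24) * 40.0               # offsets in metres
p0, tau0 = 2.0e-4, 0.2                 # slowness (s/m) and intercept (s)
nt = 200
data = np.zeros((24, nt))
for itr in range(24):
    data[itr, int(round((tau0 + p0 * x[itr]) / dt))] = 1.0

p_scan = np.linspace(0.0, 4.0e-4, 41)
tp = slant_stack(data, x, dt, p_scan)
ip, itau = np.unravel_index(np.argmax(tp), tp.shape)   # recovers (p0, tau0)
```

The energy of the event collapses to a single (τ, p) point at its intercept and slowness, which is exactly why the τ-p-domain exposes the slowness surface of a layer.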

13.
Conventional cokriging inversion of gravity or gravity-gradient data is robust to noise and readily incorporates prior information; the inverted subsurface density distribution can locate the center of an anomalous body and recover its basic shape, but the inverted image is smooth and of low resolution because the density covariance matrix estimated by the conventional method is globally divergent and stationary. To obtain a focused density distribution with cokriging, the properties of the density covariance matrix must be improved. This paper first derives a theoretical density covariance formula whose properties show that when the theoretical model is focused, its density covariance matrix is non-stationary and focused. To break the globally stationary, divergent character of the conventional covariance matrix, a density threshold is applied to the covariance matrix, and the cokriging inversion is iterated by continually updating the covariance matrix, finally yielding a relatively focused inversion result. Applying the method to gravity and gravity-gradient data for two density models yields inversion results that match the forward models; applied to measured gravity and gravity-gradient data from the Vinton salt dome, the inversion results agree well with the known geology.

14.
A method is presented to estimate the elastic parameters and thickness of media that are locally laterally homogeneous using P-wave and vertically polarized shear-wave (SV-wave) data. This method is a ‘layer-stripping’ technique, and it uses many aspects of common focal point (CFP) technology. For each layer, a focusing operator is computed using a model of the elastic parameters with which a CFP gather can be constructed using the seismic data. Assuming local homogeneity, the resulting differential time shifts (DTSs) represent error in the model due to anisotropy and error in thickness. In the (τ-p) domain, DTSs are traveltimes Δτ that connect error in layer thickness z, vertical slowness q, and ray parameter p. Series expansion is used to linearize Δτ with respect to error in the elastic parameters and thickness, and least-squares inversion is used to update the model. For stability, joint inversion of P and SV data is employed and, as pure SV data are relatively rare, the use of mode-converted (PSV) data to represent SV in the joint inversion is proposed. Analytic and synthetic examples are used to demonstrate the utility and practicality of this inversion.

15.
Kinematical characteristics of reflected waves in anisotropic elastic media play an important role in the seismic imaging workflow. Considering compressional and converted waves, we derive new, azimuthally dependent, slowness-domain approximations for the kinematical characteristics of reflected waves (radial and transverse offsets, intercept time and traveltime) for layered orthorhombic media with varying azimuth of the vertical symmetry planes. The proposed method can be considered an extension of the well-known ‘generalized moveout approximation’ in the slowness domain, from azimuthally isotropic to azimuthally anisotropic models. For each slowness azimuth, the approximations hold for a wide angle range, combining power series coefficients in the vicinity of both the normal-incidence ray and an additional wide-angle ray. We consider two cases for the wide-angle ray: a ‘critical slowness match’ and a ‘pre-critical slowness match’ studied in Parts I and II of this work, respectively. For the critical slowness match, the approximations are valid within the entire slowness range, up to the critical slowness. For the ‘pre-critical slowness match’, the approximations are valid only within the bounded slowness range; however, the accuracy within the defined range is higher. The critical slowness match is particularly effective when the subsurface model includes a dominant high-velocity layer where, for nearly critical slowness values, the propagation in this layer is almost horizontal. Comparing the approximated kinematical characteristics with those computed by numerical ray tracing, we demonstrate high accuracy.

16.
Parallelization of the LSQR algorithm for seismic tomography
The LSQR (least-squares QR decomposition) algorithm for seismic tomography is discussed. When the system of partial-derivative (Jacobian) equations is built, hypocenter terms are retained in the equations for events inside the study region and an orthogonal projection operator is introduced to separate the parameters, while events outside the region are handled with conventional smoothing; the combined system is then solved with LSQR. Because of the orthogonal decomposition for local events and the smoothing for external teleseismic events, the number of nonzero elements in the Jacobian multiplies; for large inversion problems these nonzeros often amount to tens or hundreds of gigabytes, and the enormous memory requirement becomes the bottleneck of the LSQR algorithm. To address this, the distribution of nonzero elements in the Jacobian is analyzed, a suitable storage structure is designed, the matrix computation is carried out with distributed storage, and a parallelization scheme for LSQR is proposed and implemented on the Lenovo DeepComp 6800 supercomputer. A formula for estimating the parallel efficiency of the algorithm is derived, and efficiency tests are performed on real seismic tomography data from two regions.
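The distributed-storage idea rests on one observation: LSQR touches the Jacobian only through the products A @ x and A.T @ y, so the matrix can be split into row blocks held on separate nodes. Below is a serial stand-in for that scheme (Python lists in place of MPI ranks; block sizes, density, and damping invented), checked against the undistributed solve.

```python
import numpy as np
from scipy.sparse import random as sprandom, vstack
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(6)

# A large tomography Jacobian stored as row blocks, one per compute node;
# here a plain list stands in for distributed memory.
nblocks, rows_per_block, nm = 4, 50, 60
blocks = [sprandom(rows_per_block, nm, density=0.05,
                   random_state=10 + k, format="csr")
          for k in range(nblocks)]
b = rng.normal(size=nblocks * rows_per_block)

def matvec(x):
    # "broadcast x, gather local row-block products"
    return np.concatenate([Ak @ x for Ak in blocks])

def rmatvec(y):
    # "sum-reduce the partial A_k.T @ y_k contributions over nodes"
    parts = np.split(y, nblocks)
    return sum(Ak.T @ yk for Ak, yk in zip(blocks, parts))

A = LinearOperator((nblocks * rows_per_block, nm),
                   matvec=matvec, rmatvec=rmatvec)
x_par = lsqr(A, b, damp=0.1, atol=1e-10, btol=1e-10)[0]

# Reference: the same damped problem solved with the assembled matrix.
x_ref = lsqr(vstack(blocks), b, damp=0.1, atol=1e-10, btol=1e-10)[0]
```

Only two vector exchanges per iteration cross node boundaries, which is why this layout maps naturally onto a cluster once the row blocks fit in local memory.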

17.
In this paper, we developed a specialized method to locate small aftershocks using a small-aperture temporary seismic array. The array location technique uses the first P arrival times to determine the horizontal slowness vector of the incoming P wave, then combines it with S-P times to determine the event location. In order to reduce the influence of lateral velocity variation on the location determinations, we generated slowness corrections using events well-located by the permanent broadband network as calibration events, then we applied the corrections to the estimated slownesses. Applications of slowness corrections significantly improved event locations. This method can be a useful tool to locate events recorded by temporary fault-zone arrays in the near field but unlocated by the regional permanent seismic network. As a test, we first applied this method to 64 well-located aftershocks of the 1992 Landers, California, earthquake, recorded by both the Caltech/USGS Southern California Seismic Network and a small-aperture, temporary seismic array. The average horizontal and vertical separations between our locations and the well-determined catalogue locations are 1.35 and 1.75 km, respectively. We then applied this method to 132 unlocated aftershocks recorded only by the temporary seismic array. The locations show a clear tendency to follow the surface traces of the mainshock rupture.
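The core of the array technique, a plane-wave fit of P arrival times for the horizontal slowness vector plus an S-P distance, can be sketched directly. Array geometry, noise level, velocities, and the S-P pick are all invented for the example; the paper's slowness calibration step is not included.

```python
import numpy as np

rng = np.random.default_rng(7)

# Small-aperture array geometry: east/north offsets from a reference (km).
r = np.array([[0.0, 0.0], [1.2, 0.3], [0.4, 1.1],
              [-0.8, 0.9], [-1.0, -0.5], [0.6, -1.0]])

# Synthetic plane wave: t_i = t0 + s . r_i  (plus picking noise).
s_true = np.array([-0.08, 0.10])     # horizontal slowness (s/km)
t0_true = 5.0
t = t0_true + r @ s_true + 0.005 * rng.normal(size=len(r))

# Least-squares plane-wave fit for (t0, sx, sy).
G = np.column_stack([np.ones(len(r)), r])
t0_est, sx, sy = np.linalg.lstsq(G, t, rcond=None)[0]

s_mag = np.hypot(sx, sy)                         # apparent slowness (s/km)
baz = np.degrees(np.arctan2(-sx, -sy)) % 360.0   # back-azimuth toward source

# An S-P time then gives an epicentral distance estimate.
vp, vs = 6.0, 3.46        # assumed crustal velocities (km/s)
sp_time = 4.2             # assumed picked S minus P time (s)
dist = sp_time * vp * vs / (vp - vs)             # epicentral distance (km)
epicenter = dist * np.array([np.sin(np.radians(baz)),
                             np.cos(np.radians(baz))])
```

With these assumed velocities each second of S-P time corresponds to roughly 8.2 km, and the back-azimuth plus distance fixes the epicenter relative to the array.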

18.
The efficiency of a sequential data assimilation scheme relies on the ability to describe the error covariance. This aspect is all the more relevant if one needs accurate statistics on the estimation error. Frequently an ad hoc function depending on a few parameters is proposed, and these parameters are tuned, estimated or updated. This usually requires that the covariance is second-order stationary (i.e. depends only on the distance between two points). In this paper, we discuss this feature and show that even in simple applications (such as one-dimensional hydrodynamics), this assumption does not hold and may lead to poorly described estimation errors. We propose a method relying on the analysis of the error term and the use of the hydrodynamical model to generate one part of the covariance function, the other part being modeled using a second-order stationary approach. This method is discussed using a twin experiment in the case where a physical parameter is erroneous, and it significantly improves the results: the model bias is strongly reduced and the estimation error is well described. Moreover, it enables a better adaptation of the Kalman gain to the actual estimation error.
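The claim that a model-propagated covariance does not stay second-order stationary can be reproduced with a single linear Kalman forecast step. The advection operator, length scale, and model-error covariance below are invented stand-ins for a 1-D hydrodynamic model.

```python
import numpy as np

n = 60
x = np.arange(n, dtype=float)

# Initial error covariance: second-order stationary exponential model.
L = 5.0
P = np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Linear stand-in for the hydrodynamic model: upwind advection whose
# speed changes mid-domain (1 cell/step upstream, 3 cells/step downstream).
M = np.zeros((n, n))
for i in range(n):
    shift = 1 if i < n // 2 else 3
    M[i, max(i - shift, 0)] = 1.0

Q = 0.01 * np.eye(n)           # model-error covariance
P_f = M @ P @ M.T + Q          # one forecast step: P_f = M P M^T + Q

# After propagation, the covariance at the same spatial lag differs across
# the domain: compare lag-4 pairs inside each region with one straddling
# the speed change.
same_lag_upstream = P_f[10, 14]
same_lag_downstream = P_f[40, 44]
straddling = P_f[28, 32]
```

Inside either region the lag-4 covariance is unchanged, but for the pair straddling the speed change it is larger, so no single distance-only function can describe P_f: exactly the non-stationarity the paper exploits by letting the model generate that part of the covariance.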

19.
Surface-wave tomography is an important and widely used method for imaging the crust and upper mantle velocity structure of the Earth. In this study, we proposed a deep learning (DL) method based on convolutional neural network (CNN), named SfNet, to derive the vS model from the Rayleigh wave phase and group velocity dispersion curves. Training a network model usually requires large amount of training datasets, which is labor-intensive and expensive to acquire. Here we relied on synthetics generated automatically from various spline-based vS models instead of directly using the existing vS models of an area to build the training dataset, which enhances the generalization of the DL method. In addition, we used a random sampling strategy of the dispersion periods in the training dataset, which alleviates the problem that the real data used must be sampled strictly according to the periods of training dataset. Tests using synthetic data demonstrate that the proposed method is much faster, and the results for the vS model are more accurate and robust than those of conventional methods. We applied our method to a dataset for the Chinese mainland and obtained a new reference velocity model of the Chinese continent (ChinaVs-DL1.0), which has smaller dispersion misfits than those from the traditional method. The high accuracy and efficiency of our DL approach makes it an important method for vS model inversions from large amounts of surface-wave dispersion data.
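The two data-generation ideas in this abstract, spline-based random vS models and random period sampling, can be sketched as follows. Knot counts, depth and velocity ranges, and period bounds are invented; the forward dispersion solver needed to turn each profile into a training dispersion curve is deliberately omitted.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(8)

def random_vs_model(n_knots=6, z_max=100.0):
    """Smooth, non-decreasing vS(z) profile drawn from a monotone spline
    through random control points: the style of synthetic model used for
    training instead of existing models of one region."""
    z = np.sort(rng.uniform(0.0, z_max, n_knots))
    z[0], z[-1] = 0.0, z_max
    v = np.sort(rng.uniform(2.5, 4.8, n_knots))   # km/s, increasing with depth
    return PchipInterpolator(z, v)

def random_periods(n_periods=20, t_min=5.0, t_max=100.0):
    """Randomly sampled dispersion periods, so the trained network does
    not demand real data on one fixed period grid."""
    return np.sort(rng.uniform(t_min, t_max, n_periods))

vs = random_vs_model()
depths = np.linspace(0.0, 100.0, 201)
profile = vs(depths)          # one random training model
periods = random_periods()    # periods at which its dispersion curve
                              # would be computed by a forward solver
```

Drawing many such (profile, periods) pairs and computing synthetic dispersion curves for them yields a training set that is not tied to any one region's existing models, which is the generalization argument made above.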

20.
Jin Ping, Pan Changzhou. Acta Seismologica Sinica, 2002, 24(6): 617-626
A new method suitable for data processing by local telemetered seismic networks is introduced for estimating the back-azimuth and slowness of teleseismic signals arriving at the stations. The method is based on the correlation between the arrival times of the signal at the stations and the projections of the station position vectors onto the signal propagation direction. Analysis of real data shows that, applied to local-network records, the method computes signal back-azimuths and slownesses accurately and allows teleseismic signals recorded by a local network to be interpreted quickly and accurately.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号