Similar Literature (20 records found)
1.
Analysis of repeating earthquakes using recordings of the Liaoning regional seismic network (cited 10 times: 3 self-citations, 7 by others)
The 23 pairs of "repeating earthquakes" in the Liaoning region that Schaff and Richards identified through teleseismic waveform cross-correlation were re-examined by correlating waveforms recorded by the Liaoning regional seismic network, yielding the set of "repeating earthquakes" defined by regional-network waveform correlation, which was then compared with Schaff and Richards' result. The regional-network "repeating earthquakes" were used to evaluate the phase picks and event locations of the Liaoning regional network. The results show that "repeating earthquakes" in the teleseismic waveform-correlation sense differ somewhat from those in the regional-network sense, the two sets only partially overlapping. Nevertheless, for the overall assessment of the network's location capability and phase-picking errors, the two sets give essentially the same result, although the regional-network set appears to perform slightly better. Under the working assumption that "repeating earthquakes" should in theory be separated by less than 1 km, the location errors in the Liaoning regional network catalogue are within 5 km, averaging 2 km; 94% of Pg picking errors and 88% of Sg picking errors lie within ±1 s.
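The selection criterion rests on the peak normalized cross-correlation between candidate event pairs at a common station. A minimal sketch of that computation (not the authors' code; the window length, lag search range, and 0.8 threshold are illustrative assumptions):

```python
import numpy as np

def max_norm_xcorr(w1, w2, max_lag=50):
    """Peak normalized cross-correlation between two waveform windows,
    searched over lags of +/- max_lag samples."""
    w1 = (w1 - w1.mean()) / (w1.std() * len(w1))
    w2 = (w2 - w2.mean()) / w2.std()
    cc = np.correlate(w1, w2, mode="full")            # correlation at all lags
    mid = len(cc) // 2                                # index of zero lag
    return cc[mid - max_lag: mid + max_lag + 1].max()

# Two events are flagged as a "repeating" pair at one station when the peak
# correlation exceeds a chosen threshold (0.8 here is illustrative).
rng = np.random.default_rng(0)
ev1 = rng.standard_normal(500)
ev2 = np.roll(ev1, 3) + 0.1 * rng.standard_normal(500)  # near-duplicate event
print(max_norm_xcorr(ev1, ev2) > 0.8)
```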

2.
Surface fitting and data compression for scattered data based on B-spline functions (cited 4 times: 0 self-citations, 4 by others)
For large-scale scattered data, this paper proposes a surface-fitting and data-compression method based on B-spline functions. A minimal rectangle in the plane is constructed to cover all scattered data points; using the interpolation formula of two-dimensional B-spline functions, the values at regular grid control points are fitted by least squares. Only the values at this finite set of regular grid control points need to be stored, and the value at any point can then be recovered by interpolation, achieving both surface fitting and data compression. Numerical examples verify the feasibility of the method and the advantage of surface fitting with cubic B-splines. Because the method has small local error and high computational efficiency, it is suitable for surface fitting and data compression of two-dimensional data with small gradients, such as terrain elevation data and ray-tracing traveltime tables.
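The workflow described — least-squares fitting of spline coefficients on a regular knot grid, then evaluation anywhere by interpolation — is close to what SciPy's least-squares bivariate spline provides. A sketch under that assumption (knot spacing and data are illustrative, not the paper's):

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 2000), rng.uniform(0, 10, 2000)  # scattered points
z = np.sin(x) * np.cos(y) + 0.05 * rng.standard_normal(x.size)

# Interior knots of a coarse regular grid: only the fitted coefficients need
# to be stored, which is where the data compression comes from.
knots = np.linspace(1, 9, 8)
spl = LSQBivariateSpline(x, y, z, knots, knots, kx=3, ky=3)  # cubic B-splines

print(spl(5.0, 5.0)[0, 0], np.sin(5.0) * np.cos(5.0))  # fitted vs true value
```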

3.
Monitoring capability of the Gansu seismic network and completeness analysis of the earthquake catalogue (cited 12 times: 1 self-citation, 11 by others)
A scientific assessment of the monitoring capability of a regional seismic network is an important basis for analyses of regional seismicity and seismic hazard, and the minimum magnitude of completeness Mc is the key measure of that capability. Based on observation reports of the Gansu seismic network and the regional earthquake catalogue, this paper analyses the spatial and temporal distribution of earthquake monitoring capability in Gansu and neighbouring regions, and studies the spatio-temporal distribution of the minimum magnitude of completeness Mc of the regional catalogue using the magnitude-rank method, the maximum-curvature method (MAXC), the goodness-of-fit test (GFT) at 90% and 95% fit levels, and the entire-magnitude-range (EMR) method. The results show that the monitoring capability of the Gansu network has improved steadily since 1980. During the analogue-recording period and the "Ninth Five-Year Plan" period, monitoring capability in southeastern Gansu was clearly higher than in the central and western Qilianshan seismic belt; after the "Tenth Five-Year Plan" network came into operation, the spatial variation of monitoring capability in Gansu and adjacent areas narrowed markedly. The spatio-temporal patterns of Mc and of monitoring capability agree well. As the network was upgraded, Mc decreased step by step; since the "Tenth Five-Year Plan" network began operating, the catalogue has been essentially complete for earthquakes of ML1.8 and above in Gansu and adjacent regions. The influence of the relevant technical codes on regional network catalogues is also discussed, and scientific and effective ways to remove this influence are proposed. The results provide a reference for seismicity analysis and seismic hazard assessment in Gansu and neighbouring regions.
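Of the listed Mc estimators, maximum curvature (MAXC) is the simplest: take the magnitude bin with the highest event count in the non-cumulative frequency-magnitude distribution, usually adding a small positive correction. A sketch, with the +0.2 correction as a commonly used but adjustable assumption:

```python
import numpy as np

def mc_maxc(mags, bin_width=0.1, correction=0.2):
    """Minimum magnitude of completeness via the maximum-curvature method."""
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    peak = edges[np.argmax(counts)]          # magnitude bin with most events
    return peak + correction                 # empirical upward correction

# Synthetic Gutenberg-Richter catalogue (b = 1) complete above M 1.8
rng = np.random.default_rng(2)
mags = rng.exponential(1 / np.log(10), 20000) + 1.8
print(mc_maxc(mags))                         # ~2.0 with the +0.2 correction
```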

4.
As a medium- to long-term earthquake forecasting method, the Pattern Informatics (PI) algorithm, based on the statistical physics of complex systems, has attracted wide attention in recent years. For the Sichuan-Yunnan region, where M≥7 strong earthquakes alternate between clustered and isolated occurrence, the Andaman-Sumatra region — closely related to it in tectonics and seismicity, and itself frequently struck by great earthquakes — is merged with it into a single study region for strong-earthquake forecasting, and the PI algorithm is applied to seismic hazard analysis with a forecast "target magnitude" of MW7.0 and above. The calculations use the NEIC catalogue from 1973 onward, with a 10-year "anomaly learning" period and a 3-year "forecast window", and forecast performance is assessed with ROC tests. Retrospective analysis shows good PI performance, indicating that treating the Sichuan-Yunnan and Andaman-Sumatra regions as a single PI study region for M≥7 earthquakes is statistically reasonable. From the statistical-physics viewpoint, ergodicity curves computed before and after merging show that the merged region, although no better suited to PI than the Sichuan-Yunnan region alone, is better suited than the Andaman-Sumatra region alone. The PI maps suggest that medium- to long-term "precursory" seismicity anomalies may have existed before the 2008 earthquake.
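The ROC test used here sweeps the hotspot threshold and compares hit rate against false-alarm rate over spatial cells. A minimal sketch for such an alarm-based binary forecast (scores and outcomes are synthetic placeholders, not PI output):

```python
import numpy as np

def roc_curve(scores, occurred):
    """Hit rate vs false-alarm rate as the alarm threshold is lowered."""
    order = np.argsort(scores)[::-1]                  # most anomalous cells first
    hits = np.cumsum(occurred[order]) / occurred.sum()
    falses = np.cumsum(~occurred[order]) / (~occurred).sum()
    return np.r_[0.0, falses], np.r_[0.0, hits]

rng = np.random.default_rng(3)
scores = rng.random(1000)                             # forecast score per cell
occurred = rng.random(1000) < 0.1 * (1 + scores)      # quakes favour high scores
f, h = roc_curve(scores, occurred)
auc = np.sum(np.diff(f) * (h[1:] + h[:-1]) / 2)       # trapezoidal area
print("area under ROC:", auc)                         # > 0.5 indicates skill
```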

5.
Generation of blocky three-dimensional velocity models and computation of synthetic seismograms (cited 2 times: 0 self-citations, 2 by others)
This paper presents a computer method for building blocky three-dimensional crustal models and for generating gently varying, smooth three-dimensional velocity functions by weighted least-squares fitting; a new algorithm for Cauchy ray tracing in blocky 3-D models with continuously varying velocity within each block; and a brief description of the program package RSSGTD for synthesizing 3-D theoretical seismograms, developed from the above methods and basic ray theory. Two basin-like model examples show that the model-building method can simulate complex crustal structures; compared with 3-D spline methods, the least-squares fitting yields velocity functions better suited to ray-method synthetic seismogram computation, with smaller memory requirements and faster computation; and the proposed Cauchy ray-tracing algorithm can trace any body-wave ray in blocky models.
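The core numerical step — a weighted least-squares fit of a smooth, low-order velocity function to control values inside a block — can be sketched as follows; the polynomial basis and weights are illustrative assumptions, not the RSSGTD implementation:

```python
import numpy as np

# Velocity samples v at depths z inside one block, with confidence weights w.
z = np.linspace(0.0, 10.0, 25)
v = 3.5 + 0.08 * z + 0.02 * np.sin(z)      # "observed" velocities (km/s)
w = np.ones_like(z); w[::5] = 3.0          # trust some samples more

# Weighted least squares for a smooth quadratic v(z) = c0 + c1 z + c2 z^2:
A = np.vander(z, 3, increasing=True)       # design matrix [1, z, z^2]
sw = np.sqrt(w)
coeff, *_ = np.linalg.lstsq(sw[:, None] * A, sw * v, rcond=None)
print("fitted coefficients:", coeff)       # smooth v(z) suited to ray tracing
```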

6.
Green-spline interpolation, a recently developed data-interpolation method based on Green's functions of the biharmonic operator, is a global interpolation method in which the interpolated surface (or curve) is computed analytically as a weighted superposition of Green's functions centred at the observation points. This paper introduces the basic principle of the method, the interpolation procedure, the development of practical programs, and problems that may arise in use. Green-spline interpolation and two other common interpolation methods were applied to measured Bouguer gravity anomaly data from a study area; comparison of the results shows that the Green-spline method has certain advantages in suppressing spurious anomalies, displaying local anomalies stably, and eliminating singular edge effects.
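In 2-D the biharmonic Green's function is g(r) = r²(ln r − 1), and the interpolant is a weighted sum of such functions centred on the data points, with weights obtained from one linear solve (this is the scheme behind, e.g., GMT's greenspline). A minimal sketch, with synthetic anomaly data:

```python
import numpy as np

def greens(r):
    """2-D biharmonic Green's function g(r) = r^2 (ln r - 1), with g(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
    return out

rng = np.random.default_rng(4)
pts = rng.uniform(0, 1, (30, 2))                     # observation points
val = np.sin(3 * pts[:, 0]) + pts[:, 1] ** 2         # observed anomaly values

d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
wgt = np.linalg.solve(greens(d), val)                # one weight per data point

q = np.array([[0.5, 0.5]])                           # evaluation point
dq = np.linalg.norm(q[:, None] - pts[None, :], axis=2)
print(greens(dq) @ wgt)                              # interpolated value at q
```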

7.
丁海平  张铭哲 《地震学报》2022,44(3):501-511
Using the horizontal-component acceleration records of events 5 and 45 of the SMART-1 array, the spatial coherency functions of ground motion are first computed for station pairs with different separations. The influence of station separation on the fitted coherency function is then discussed: the coherency function fitted at a single specific separation differs markedly from that fitted over all separations together. To reduce this difference, a method is proposed that performs a second regression on the parameters fitted at each separation distance; it is verified with the Loh coherency-function model, and corrected fitting parameters based on the Loh model are given. The results show that the proposed correction method greatly improves the fitting accuracy of the parameters in the coherency-function model.
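The first step — estimating the coherency of a station pair from smoothed cross-spectra — can be sketched with SciPy (the records here are synthetic, and the paper's smoothing windows are not reproduced):

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                                       # sampling rate (Hz)
rng = np.random.default_rng(5)
common = rng.standard_normal(8192)               # shared wavefield component
a1 = common + 0.3 * rng.standard_normal(8192)    # station A acceleration
a2 = np.roll(common, 4) + 0.3 * rng.standard_normal(8192)  # station B, delayed

f, coh = coherence(a1, a2, fs=fs, nperseg=512)   # magnitude-squared coherence
print(f[:5], np.sqrt(coh[:5]))                   # |coherency| at low frequencies
```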

8.
Inversion of near-surface absorption parameters is the key to, and basis of, absorption-attenuation compensation. First, different uphole (micro-logging) survey geometries are compared, and single-well uphole data suitable for near-surface quality-factor estimation are selected for frequency-dependence analysis. On this basis, a new near-surface Q inversion method — a two-parameter fitting algorithm — is proposed: it introduces an empirical function relating Q to frequency obtained from rock-physics analysis and fits Q directly with a least-squares genetic algorithm, in order to verify how Q varies with frequency. Then, in computing the near-surface attenuation curves, multi-trace stacking is used to obtain the attenuation curve of each layer for layer-by-layer Q inversion. Fitting results on real data show that the near-surface quality factor Q is distinctly frequency dependent, decreasing as frequency increases, which offers a new approach to high-precision near-surface Q inversion.
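The frequency dependence being verified is commonly parameterized as a power law Q(f) = Q0·f^(−α), whose two parameters are fitted to per-frequency attenuation estimates. The paper uses a least-squares genetic algorithm; SciPy's curve_fit is substituted here as a sketch with synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def q_model(f, q0, alpha):
    """Empirical frequency-dependent quality factor Q(f) = Q0 * f**(-alpha)."""
    return q0 * f ** (-alpha)

# Per-frequency Q estimates derived from a layer attenuation curve (synthetic)
f = np.linspace(10, 120, 23)
rng = np.random.default_rng(6)
q_obs = 60.0 * f ** (-0.35) * (1 + 0.05 * rng.standard_normal(f.size))

(q0, alpha), _ = curve_fit(q_model, f, q_obs, p0=(50.0, 0.3))
print(f"Q0 = {q0:.1f}, alpha = {alpha:.2f}")   # Q falls as frequency rises
```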

9.
To address the problem that poor model-trace quality in structurally complex areas degrades reflection residual static corrections, this paper proposes a 2-D first-arrival residual static correction method based on spline-curve fitting. Compared with straight-line fitting, it uses more of the first-break information and yields more accurate short-wavelength statics; compared with piecewise-linear fitting, it does not require picking refraction break points and is therefore simpler and more practical to operate. The method fits the first breaks, after long-wavelength statics have been applied, to a smooth curve with cubic splines, and decomposes the delay times into shot and receiver short-wavelength statics under the surface-consistency principle. Application to real data shows that the first breaks of individual shots become smooth and the continuity of events on stacked sections is enhanced, providing better-quality model traces for reflection residual statics; the method is of practical value for residual statics of low signal-to-noise data.
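The key step — fitting a smooth cubic-spline trend through the long-wavelength-corrected first breaks and reading short-wavelength statics off the residuals — can be sketched with a smoothing spline; the surface-consistent decomposition into shot/receiver terms is omitted, and the data are synthetic:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
offset = np.linspace(100, 3000, 60)                  # receiver offsets (m)
t_fb = offset / 1800.0 + 0.004 * rng.standard_normal(60)  # picked first breaks (s)

# Cubic smoothing spline; s is tuned to the expected pick noise level.
trend = UnivariateSpline(offset, t_fb, k=3, s=60 * 0.004 ** 2)
residual_statics = t_fb - trend(offset)              # short-wavelength delay times
print(residual_statics[:5])   # then split surface-consistently per shot/receiver
```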

10.
蒋长胜  吴忠良 《地震学报》2005,27(3):269-275
Using Harvard CMT data, the long-term seismicity before the MW9.0 earthquake of 26 December 2004 off the west coast of northern Sumatra, Indonesia, is studied. Before this earthquake, accelerating moment release (AMR) was present on a quarter-century time scale and a 1500 km spatial scale. Within this spatial range, the MW9.0 event still falls on the segmented power-law distribution. From the viewpoint of the critical-point-like model of earthquakes, therefore, the fact that neither the occurrence nor the size of this great earthquake was predicted or forecast was not caused by physical "unpredictability".
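AMR is conventionally quantified by fitting cumulative Benioff strain to the power law Σ(t) = A − B(t_f − t)^m, with m < 1 indicating acceleration. A sketch of that fit on a synthetic strain series, with the failure time t_f fixed at the mainshock date for simplicity:

```python
import numpy as np
from scipy.optimize import curve_fit

tf = 2004.98                                    # mainshock time, held fixed here

def amr(t, A, B, m):
    """Cumulative Benioff strain A - B*(tf - t)**m; accelerating if m < 1."""
    return A - B * (tf - t) ** m

t = np.linspace(1977.0, 2004.9, 150)
rng = np.random.default_rng(8)
strain = amr(t, 10.0, 2.0, 0.4) + 0.05 * rng.standard_normal(t.size)

p, _ = curve_fit(amr, t, strain, p0=(9.0, 1.0, 0.5))
print(f"m = {p[2]:.2f}  (< 1 indicates accelerating moment release)")
```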

11.
Introduction Hilbert-Huang transform (HHT) is a major breakthrough in processing nonlinear and non-stationary data and has been used successfully in many scientific domains. The method has two main parts. The first is to decompose the original data into several intrinsic mode functions (IMFs) with the empirical mode decomposition (EMD). IMF components are derived directly from the original data according to its local characteristics under certain rules, so that the IMFs are poste…
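A minimal demonstration of the EMD step, assuming the third-party PyEMD package (installable as EMD-signal; this is not part of the paper):

```python
import numpy as np
from PyEMD import EMD   # third-party package: pip install EMD-signal

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

imfs = EMD().emd(signal, t)   # sift the signal into intrinsic mode functions
print(imfs.shape)             # each row is one IMF, fast oscillations first
```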

12.
The estimation of velocity and depth is an important stage in seismic data processing and interpretation. We present a method for velocity-depth model estimation from unstacked data. The method is formulated as an iterative algorithm producing a model which maximizes some measure of coherency computed along traveltimes generated by tracing rays through the model. In the model the interfaces are represented as cubic splines and the velocity in each layer is assumed constant. The inversion includes the determination of the velocities in all the layers and the locations of the spline knots. The process input consists of unstacked seismic data and an initial velocity-depth model, often based on nearby well information and an interpretation of the stacked section. Inversion is performed iteratively layer after layer; during each iteration synthetic traveltime curves are calculated for the interface under consideration. A functional characterizing the main correlation properties of the wavefield is then formed along the synthetic arrival times. It is assumed that the functional reaches a maximum value when the synthetic arrival-time curves match the arrival times of the events on the field gathers. The maximum of the functional is found by an effective non-linear programming algorithm. The present inversion algorithm has the advantages that event picking on the unstacked data is not required and that it is not based on curve fitting of hyperbolic approximations of the arrival times. The method has been successfully applied to both synthetic and field data.
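A coherency functional evaluated along predicted traveltimes is typically a semblance measure. A sketch of semblance computed along one candidate traveltime curve across a shot gather (synthetic spike data; the non-linear maximization over model parameters is not shown):

```python
import numpy as np

def semblance(gather, times, dt, win=5):
    """Semblance of samples extracted along a traveltime curve.
    gather: (ntrace, nsamp); times: predicted arrival time (s) per trace."""
    idx = np.round(times / dt).astype(int)
    rows = idx[:, None] + np.arange(-win, win + 1)    # small window per trace
    vals = np.take_along_axis(gather, rows, axis=1)   # (ntrace, 2*win+1)
    num = (vals.sum(axis=0) ** 2).sum()               # energy of the stack
    den = (vals ** 2).sum() * vals.shape[0]           # total energy
    return num / den                                  # 1 = perfect coherence

dt, ntr = 0.004, 24
t0 = 0.8 + 0.01 * np.arange(ntr)                      # "true" moveout curve
gather = np.zeros((ntr, 500))
gather[np.arange(ntr), np.round(t0 / dt).astype(int)] = 1.0
print(semblance(gather, t0, dt))                      # high when times match
```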

13.
It was found in Part I of this paper that approximating the sharp cut-off frequency characteristic best in a mean-square sense by an impulse response of finite length M produced a characteristic whose slope on a linear frequency scale was proportional to the length of the impulse response, but whose maximum overshoot of ±9% was independent of this length (Gibbs' phenomenon). Weighting functions, based on frequency tapering or arbitrarily chosen, were used in Part II to modify the truncated impulse response of the sharp cut-off frequency characteristic, and thereby obtain a trade-off between the value of maximum overshoot and the sharpness of the resulting characteristic. These weighting functions, known as apodising functions, depended on the time-bandwidth product formed with the tapering range of frequencies ξ. Part III now deals with digital filters where the number 2N−1 of coefficients is directly related to the finite length M of the continuous impulse response. The values of the filter coefficients are taken from the continuous impulse response at the sampling instants, and the resulting characteristic is approximately the same as that derived in Part II for the continuous finite-length impulse response. Corresponding to known types of frequency tapering, we now specify a filter characteristic which is undefined in the tapering range, and determine the filter coefficients according to a mean-square criterion over the rest of the frequency spectrum. The resulting characteristic depends on the time-bandwidth product (N−1/2)ξ up to a maximum value of 2, beyond which undesirable effects occur. This optimum partially specified characteristic is an improvement on the previous digital filters in terms of the trade-off ratio for values of maximum overshoot less than 1%. Similar to the previous optimum characteristic is the optimum partially specified weighted digital filter, where greater emphasis is placed on reducing the value of maximum overshoot than of maximum undershoot. Such characteristics are capable of providing better trade-off ratios than the other filters for maximum overshoots greater than 1/2%. However, these filters have a critical maximum number 2N_C−1 of coefficients, beyond which the resulting characteristics have unsuitable shapes. This type of characteristic differs from the others in not being a biased odd function about its cut-off frequency.
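The trade-off described — tapering the truncated impulse response to exchange cut-off sharpness for a smaller Gibbs overshoot — is what windowed FIR design does. A sketch comparing a rectangular (plainly truncated) and a Hamming-tapered low-pass of the same length (parameters illustrative, not the paper's filters):

```python
import numpy as np
from scipy.signal import firwin, freqz

numtaps, cutoff = 101, 0.25   # 2N-1 coefficients; cutoff in Nyquist units

rect = firwin(numtaps, cutoff, window="boxcar")   # plain truncation -> Gibbs
hamm = firwin(numtaps, cutoff, window="hamming")  # tapered -> tiny overshoot

for name, taps in (("rectangular", rect), ("hamming", hamm)):
    w, h = freqz(taps, worN=4096)
    print(name, "max overshoot:", abs(h).max() - 1.0)  # ~0.09 vs ~0.002
```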

14.
It has always been a difficult problem to extract the horizontal and vertical displacement components from the InSAR LOS (line-of-sight) displacement since the advent of monitoring ground-surface deformation with the InSAR technique. After first fitting the field investigation data with a least-squares model to obtain a preliminary result, this paper presents, based on the field data and the InSAR data, a linear cubic interpolation model that fits the features of the earthquake fracture zone well. This model inherits the precision of the investigation data while exploiting advantages of the InSAR technique such as quasi-real-time observation, continuous recording, and all-weather measurement. Accordingly, by means of this model, the paper presents a method to decompose the InSAR slant-range co-seismic displacement (i.e., the LOS change) into horizontal and vertical displacement components. Approaching the real motion step by step, a set of curves representing the co-seismic horizontal and vertical displacement components along the main earthquake fracture zone is finally obtained.
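With the radar geometry known, the LOS displacement is a single linear combination of the east, north, and up components; the extra constraints needed to invert it come, in the paper, from the field data and the interpolation model. A sketch of the geometric projection alone, using one common right-looking SAR sign convention and illustrative angles:

```python
import numpy as np

inc = np.radians(23.0)        # radar incidence angle (assumed value)
heading = np.radians(-13.0)   # satellite heading from north (assumed value)

# LOS unit vector (east, north, up) under one common right-looking convention
los = np.array([-np.sin(inc) * np.cos(heading),
                 np.sin(inc) * np.sin(heading),
                 np.cos(inc)])

d_enu = np.array([0.30, 0.05, -0.12])   # ground displacement (m), e.g. modelled
d_los = los @ d_enu                     # what InSAR actually measures
print(f"LOS change: {d_los:.3f} m")
# Recovering d_enu from d_los alone is underdetermined from a single track,
# which is why the paper closes the system with field data and a fitted model.
```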

15.
Based on existing continuous borehole strain observations, the multiquadric function fitting method was used to process the time-series data. The impact of different kernel-function parameters was discussed in order to obtain a valuable fitting result, from which the physical meaning of the original data and its possible applications were analyzed. A brief comparison was also made between the results of multiquadric function fitting and polynomial fitting.
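Multiquadric fitting uses basis functions of the form φ(r) = √(r² + c²); the shape parameter c is the kernel-function parameter whose influence the paper examines. A sketch with SciPy's RBF interpolator on a synthetic strain series:

```python
import numpy as np
from scipy.interpolate import Rbf

t = np.linspace(0, 10, 40)                      # observation times (days)
rng = np.random.default_rng(9)
strain = np.sin(t) + 0.1 * rng.standard_normal(t.size)  # strain time series

# Multiquadric kernel sqrt((r/eps)^2 + 1); eps controls smoothness, and its
# choice is exactly the parameter sensitivity the paper investigates.
for eps in (0.5, 2.0):
    fit = Rbf(t, strain, function="multiquadric", epsilon=eps, smooth=0.1)
    print(eps, fit(5.05))                       # fitted value between samples
```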

16.
Automatic extraction of the reservoir oil-water contact from MDT test data (cited 2 times: 1 self-citation, 1 by others)
Based on analysis of MDT formation-pressure data and the shape of the formation-pressure-gradient curve, cubic-spline interpolation is used to resample the pressure-gradient curve, defined by a finite number of data points, at equal depth intervals, giving a continuously distributed pressure-gradient curve. On a parabola-shaped pressure-gradient curve, the oil-water contact corresponds to the vertex of the parabola, so the depth of the reservoir oil-water contact can be extracted from the curve by the maximum principle. A computer program written in Fortran implements the automatic extraction. The program reduces the extra errors introduced during field processing, saves the operator's processing time, and cuts the workload. Practical application shows that the method is reliable: the results fully meet the error standards for oil-field reserve estimation, and the extracted values are realistic and valid.
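A sketch of the two numerical steps described — cubic-spline resampling of the pressure-gradient curve at a uniform depth increment, then locating the contact at the curve's extremum (Python substituted for the paper's Fortran; the data are synthetic):

```python
import numpy as np
from scipy.interpolate import CubicSpline

depth = np.array([2500., 2510., 2525., 2540., 2560., 2580.])  # depth (m)
grad = np.array([0.28, 0.31, 0.345, 0.36, 0.35, 0.33])        # pressure gradient

cs = CubicSpline(depth, grad)
z = np.arange(depth[0], depth[-1], 0.1)   # equal-interval depth resampling
owc = z[np.argmax(cs(z))]                 # vertex of the parabola-like curve
print(f"oil-water contact at {owc:.1f} m")
```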

17.
Aeromagnetic data collected in areas with severe diurnal magnetic variations (auroral zones) are difficult to level. This paper describes levelling of an aeromagnetic survey where such conditions prevail, and where sophisticated levelling techniques are needed. Corrections based on piecewise low-order polynomial functions are often used to minimize mis-ties in aeromagnetic data. We review this technique and describe similar mis-tie fitting methods based on low-pass filter levelling, tensioned B-spline levelling and median levelling. It is demonstrated that polynomial levelling, low-pass filter levelling and tensioned B-spline levelling depend on the careful editing of outlying mis-ties to avoid the introduction of false anomalies. These three techniques are equally efficient at removing level errors. Median levelling also removes level errors efficiently, but it is more robust in the sense that mis-tie editing is not required. This is due to the inherent noise-removal capabilities of the median filter. After mis-tie editing, the total field anomalies of the other three techniques closely resemble the unedited median-levelled total field anomaly.
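Median levelling's robustness comes from smoothing the mis-tie sequence with a running median, which does not smear gross outliers into neighbouring corrections. A sketch of that core step (line geometry and mis-ties are synthetic):

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(10)
misties = 5.0 * np.sin(np.linspace(0, 3, 80)) + rng.standard_normal(80)  # nT
misties[37] = 120.0                            # one gross outlier, left unedited

level_corr = medfilt(misties, kernel_size=9)   # running median along the line
print(level_corr[35:40])  # outlier does not leak into adjacent corrections
```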

18.
A method to calculate the resistivity transform of Schlumberger VES curves has been developed. It consists in approximating the field apparent resistivity data by a linear combination of simple functions, which must satisfy the following requirements: (i) they must be suitable for fitting the resistivity data; (ii) once the fitting function has been obtained, they must allow the kernel to be determined analytically. The fitting is carried out by the least-mean-squares method, which also accomplishes a useful smoothing of the field curve (and therefore a partial noise filtering), and which makes it possible to assign different weights to the apparent resistivity values according to their reliability. Several examples (theoretical resistivity curves to estimate the precision of the method, and field data to verify its practicality) yield good results with short execution times, independent of the shape of the apparent resistivity curve.
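The weighted least-squares fitting step can be sketched as follows; a polynomial basis in log-spacing stands in for the paper's "simple functions" (which are chosen specifically so the kernel follows analytically), and the data and weights are illustrative:

```python
import numpy as np

ab2 = np.logspace(0, 3, 20)            # AB/2 electrode spacings (m)
rng = np.random.default_rng(11)
rho_a = 50 + 30 * np.tanh(np.log10(ab2) - 1.5) + rng.standard_normal(20)

x = np.log10(ab2)
w = np.ones_like(x); w[-3:] = 0.3      # down-weight less reliable readings
A = np.vander(x, 4)                    # cubic polynomial basis in log-spacing
sw = np.sqrt(w)
c, *_ = np.linalg.lstsq(sw[:, None] * A, sw * rho_a, rcond=None)
print(A @ c - rho_a)                   # smoothed residuals: noise filtered out
```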

19.
This paper describes statistical procedures for developing earthquake damage fragility functions. Although fragility curves abound in earthquake engineering and risk assessment literature, the focus has generally been on the methods for obtaining the damage data (i.e., the analysis of structures), and little emphasis is placed on the process for fitting fragility curves to this data. This paper provides a synthesis of the most commonly used methods for fitting fragility curves and highlights some of their significant limitations. More novel methods are described for parametric fragility curve development (generalized linear models and cumulative link models) and non-parametric curves (generalized additive model and Gaussian kernel smoothing). An extensive discussion of the advantages and disadvantages of each method is provided, as well as examples using both empirical and analytical data. The paper further proposes methods for treating the uncertainty in intensity measure, an issue common with empirical data. Finally, the paper describes approaches for choosing among various fragility models, based on an evaluation of prediction error for a user-defined loss function.
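Of the parametric approaches listed, the generalized linear model is the most compact: damage exceedance is regressed on log intensity with a probit link, which yields a lognormal fragility curve. A sketch with statsmodels on synthetic damage observations (not the paper's data):

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(12)
im = rng.lognormal(-1.0, 0.6, 400)              # intensity measure (g)
p = norm.cdf((np.log(im) - np.log(0.4)) / 0.5)  # "true" fragility, median 0.4 g
damaged = (rng.random(400) < p).astype(float)   # observed binary outcomes

X = sm.add_constant(np.log(im))                 # probit(p) = b0 + b1 ln(IM)
glm = sm.GLM(damaged, X,
             family=sm.families.Binomial(link=sm.families.links.Probit()))
b0, b1 = glm.fit().params
print("median capacity:", np.exp(-b0 / b1), "  beta:", 1 / b1)
```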

20.
In air-gun source surveys, various interfering factors cause parts of the useful signal to be missing or severely contaminated by random noise. To reconstruct continuous, complete data, a missing-signal reconstruction method based on compressive sensing (CS) is constructed, exploiting the sparsity of air-gun source signals in the Fourier domain. Numerical simulations are first carried out, the method is compared with traditional interpolation, and the reconstructions are evaluated by root-mean-square error and signal-to-noise ratio (SNR). The results show that with the compressive-sensing method the reconstructed waveforms agree closely with the originals, amplitude consistency is stronger, events are clear and equally continuous, and noise is well suppressed; the reconstruction therefore outperforms traditional cubic-spline interpolation. Application to real data shows that contaminated useful signals can be recovered and reconstructed well.
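A standard way to exploit Fourier-domain sparsity for gap filling is iterative soft thresholding with the sampling mask enforcing data consistency (a POCS/IST-style scheme; the paper's exact solver and parameters are not specified here). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(13)
n = 512
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.03 * t) + 0.5 * np.sin(2 * np.pi * 0.11 * t)

mask = rng.random(n) > 0.4            # ~60% of samples survive the gaps
observed = signal * mask

# Iterative soft thresholding in the Fourier domain
x = observed.copy()
for k in range(200):
    X = np.fft.fft(x)
    thr = np.abs(X).max() * 0.1 * (1 - k / 200)   # slowly decreasing threshold
    X *= np.maximum(1 - thr / np.maximum(np.abs(X), 1e-12), 0)  # soft shrink
    x = np.real(np.fft.ifft(X))
    x[mask] = observed[mask]          # enforce consistency at known samples

print("RMS error:", np.sqrt(np.mean((x - signal) ** 2)))
```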
