Similar Literature
20 similar articles found.
1.
Spectral decomposition is a widely used technique in the analysis and interpretation of seismic data. According to the uncertainty principle, there exists a lower bound for the joint time–frequency resolution of seismic signals. The highest temporal resolution is achieved by a matching pursuit approach which uses waveforms from a dictionary of functions (atoms). This method, in its pure mathematical form, can result in atoms whose shape and phase have no relation to the seismic trace. The high-definition frequency decomposition algorithm presented in this paper interleaves iterations of atom matching and optimization. It divides the seismic trace into independent sections delineated by envelope troughs, and simultaneously matches atoms to all peaks. Co-optimization of overlapping atoms ensures that the effects of interference between them are minimized. Finally, a second atom matching and optimization phase is performed in order to minimize the difference between the original and the reconstructed trace. The fully reconstructed traces can be used as inputs for a frequency-based reconstruction and red–green–blue colour blending. Comparison with the results of the original matching pursuit frequency decomposition illustrates that high-definition frequency decomposition based colour blends provide a very high temporal resolution, even in the low-energy parts of the seismic data, enabling a precise analysis of geometrical variations of geological features.
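A minimal sketch of the plain matching-pursuit decomposition that high-definition frequency decomposition builds on, using Ricker atoms on a coarse illustrative parameter grid; the atom shape, grids and function names here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def ricker(t, f, t0):
    """Ricker atom with peak frequency f (Hz) centred at t0 (s)."""
    a = (np.pi * f * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def matching_pursuit(trace, t, freqs, n_atoms=20):
    """Greedy MP: at each step pick the atom most correlated with the residual."""
    residual = trace.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        best = None
        for f in freqs:                       # coarse grid over peak frequency
            for t0 in t:                      # and over atom centre time
                g = ricker(t, f, t0)
                g /= np.linalg.norm(g) + 1e-12
                c = np.dot(residual, g)       # correlation with the residual
                if best is None or abs(c) > abs(best[0]):
                    best = (c, f, t0, g)
        c, f, t0, g = best
        residual -= c * g                     # subtract the matched atom
        atoms.append((c, f, t0))
    return atoms, residual

# toy usage: decompose a synthetic trace made of two Ricker arrivals
t = np.arange(0, 1.0, 0.002)
trace = ricker(t, 30, 0.3) + 0.6 * ricker(t, 45, 0.62)
atoms, res = matching_pursuit(trace, t, freqs=np.arange(10, 60, 5), n_atoms=4)
```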

2.
The wavelets widely used in the context of matching pursuit are mostly focused on the time–frequency attributes of seismic traces. We propose a new type of wavelet basis based on the classic Ricker wavelet, in which the quality factor Q is introduced. We develop a new scheme for seismic trace decomposition by applying multi-channel orthogonal matching pursuit with the proposed wavelet basis. Compared with decomposition using classic Ricker wavelets, the proposed method can represent the seismic signal with fewer wavelets and fewer iterations. In addition, the quality factor of the subsurface media can be extracted from the decomposition results, and the seismic attenuation can be compensated conveniently. We test the proposed method on both synthetic seismic records and field post-stack data.
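As an illustration of the idea of a Q-dependent wavelet basis, here is a hedged sketch that damps a Ricker wavelet's spectrum with a constant-Q factor exp(-pi*f*t/Q); dispersion is ignored and the function names and parameters are assumptions, not the authors' exact atom construction:

```python
import numpy as np

def ricker(t, f0):
    """Zero-phase Ricker wavelet with peak frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def q_ricker(t, f0, q, t_travel):
    """Ricker wavelet after t_travel seconds in a constant-Q medium:
    the amplitude spectrum is damped by exp(-pi*f*t_travel/Q).
    Velocity dispersion (phase) is omitted in this simplified sketch."""
    dt = t[1] - t[0]
    w = ricker(t - t.mean(), f0)
    f = np.fft.rfftfreq(len(t), dt)
    spec = np.fft.rfft(w) * np.exp(-np.pi * f * t_travel / q)
    return np.fft.irfft(spec, n=len(t))

# example atom: a 30 Hz Ricker after 0.5 s of propagation in a Q = 80 medium
t = np.arange(0, 0.512, 0.002)
atom = q_ricker(t, 30.0, 80.0, 0.5)
```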

3.
Magnetotelluric noise suppression based on matching pursuit and a genetic algorithm
To address the heavy computational cost of matching pursuit and the resulting low efficiency of magnetotelluric (MT) data processing, a magnetotelluric noise-suppression method based on matching pursuit and a genetic algorithm is proposed. First, an overcomplete atom dictionary is built from Gabor atoms and partitioned into subsets. Then, exploiting the adaptivity of the genetic algorithm, the optimal matching atom and its location are found quickly. Finally, the signal to be processed is sparsely decomposed with the optimal matching atoms and the useful signal is reconstructed. Analysis of computer-simulated typical strong interference and of measured MT data from an ore-concentration area shows that, compared with matching pursuit and orthogonal matching pursuit, the proposed method quickly and adaptively selects from the overcomplete dictionary the optimal atoms that accurately match different types of noise, greatly improving computational efficiency; large-scale strong interference in the MT time series is effectively removed, the apparent resistivity curves become smoother and more continuous, and data quality in the low-frequency band is clearly improved.
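A minimal sketch of the core idea, letting a genetic algorithm search Gabor-atom parameters (centre, frequency, scale and phase) for the atom best correlated with the signal, under illustrative GA settings; the dictionary partitioning and the exact operators used in the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def gabor(t, u, f, s, phi):
    """Real Gabor atom: Gaussian envelope (centre u, width s) times a cosine."""
    g = np.exp(-np.pi * ((t - u) / s) ** 2) * np.cos(2 * np.pi * f * (t - u) + phi)
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

def fitness(p, t, x):
    return abs(np.dot(x, gabor(t, *p)))       # |correlation| with the signal/residual

def ga_best_atom(t, x, bounds, pop=40, gens=60, pm=0.2):
    """Evolve atom parameters (u, f, s, phi) within `bounds` to maximise fitness."""
    lo, hi = np.array(bounds, float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.array([fitness(p, t, x) for p in P])
        elite = P[np.argsort(fit)[::-1][: pop // 2]]      # selection
        a = elite[rng.integers(len(elite), size=pop)]
        b = elite[rng.integers(len(elite), size=pop)]
        w = rng.uniform(size=(pop, len(lo)))
        P = w * a + (1 - w) * b                           # arithmetic crossover
        mut = rng.uniform(size=P.shape) < pm              # uniform mutation
        P[mut] = rng.uniform(np.broadcast_to(lo, P.shape),
                             np.broadcast_to(hi, P.shape))[mut]
    fit = np.array([fitness(p, t, x) for p in P])
    return P[np.argmax(fit)]
```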

4.
Under the assumption of a non-causal, mixed-phase seismic wavelet, this paper parameterizes the wavelet with an autoregressive moving-average (ARMA) model and proposes a parameter-estimation scheme that combines a linear method (the matrix-equation method) with a nonlinear method (ARMA fitting). The matrix-equation method is used to determine the range of the model parameters, and the cumulant-fitting method is then used to estimate the parameters accurately. Theoretical analysis and simulation results show that the scheme adapts well: on the one hand, it improves the accuracy of wavelet estimation and avoids the estimation errors that the matrix-equation method alone may introduce for short seismic records; on the other hand, it improves the efficiency of wavelet extraction, reduces the complexity of determining the parameter range in ARMA model fitting, and avoids the excessive computational scale caused by estimating too many parameters when a pure moving-average (MA) model fit is used. Preliminary applications show that the method is effective and feasible.
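For illustration only, a sketch of the ARMA view of the wavelet: fit an ARMA(p, q) model to a trace segment (here with statsmodels' maximum-likelihood fit, not the paper's matrix-equation plus cumulant-fitting scheme) and read the wavelet off as the impulse response of the fitted filter; function names and model orders are assumptions:

```python
import numpy as np
from scipy.signal import lfilter
from statsmodels.tsa.arima.model import ARIMA

def arma_wavelet(trace, p=2, q=2, length=64):
    """Fit ARMA(p, q) to a trace segment and return the impulse response of the
    fitted filter as a mixed-phase wavelet proxy (illustrative substitute for
    the matrix-equation / cumulant-fitting estimation used in the paper)."""
    res = ARIMA(np.asarray(trace, float), order=(p, 0, q), trend="n").fit()
    b = np.r_[1.0, res.maparams]          # MA polynomial coefficients
    a = np.r_[1.0, -res.arparams]         # AR polynomial coefficients
    impulse = np.zeros(length)
    impulse[0] = 1.0
    return lfilter(b, a, impulse)         # wavelet estimate of `length` samples
```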

5.
Fast matching pursuit of seismic signals based on orthogonal time-frequency atoms
Matching pursuit decomposes a seismic signal adaptively according to its own characteristics, but its enormous computational cost has prevented wide application. This paper proposes a fast matching-pursuit method based on orthogonal time-frequency atoms. During the iterative decomposition, local features of the seismic signal are used as prior information points, a dynamic search strategy is applied near these points to find the best-matching time-frequency atom, and the atoms are orthogonalized to remove redundant components from the atom dictionary, so that the signal is finally decomposed into a linear superposition of orthogonal time-frequency atoms. Tests show that the method not only preserves the decomposition accuracy of matching pursuit but also improves computational efficiency dramatically.
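The orthogonalization step can be sketched as a Gram-Schmidt update of each newly matched atom against the atoms already selected, which removes the redundant components mentioned above; the dynamic local-search strategy around prior information points is not reproduced, and the helper names are illustrative:

```python
import numpy as np

def orthogonalize(atom, selected):
    """Gram-Schmidt: remove from `atom` its projections onto the already
    selected (orthonormal) atoms, then renormalise."""
    for q in selected:
        atom = atom - np.dot(atom, q) * q
    n = np.linalg.norm(atom)
    return atom / n if n > 0 else atom

def orthogonal_mp_step(residual, candidates, selected):
    """One iteration: pick the candidate best correlated with the residual,
    orthogonalise it against the current set, and update the residual."""
    scores = [abs(np.dot(residual, g)) for g in candidates]
    g = orthogonalize(candidates[int(np.argmax(scores))], selected)
    coeff = np.dot(residual, g)
    return residual - coeff * g, selected + [g], coeff
```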

6.
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions has been selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient computed with an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built with the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem.
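A simplified linear analogue of the procedure, greedy support selection followed by a Tikhonov-regularized solve on the selected columns, can be sketched as below; the nonlinear forward model, the ISEM gradient approximation and the K-SVD dictionary construction are not reproduced, and the parameter names are assumptions:

```python
import numpy as np

def omp_tikhonov(D, y, n_nonzero=5, alpha=1e-3):
    """Greedy support selection (OMP-style) over dictionary D, followed by a
    Tikhonov-regularised least-squares solve on the selected columns."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf               # do not reselect columns
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]
        # Tikhonov-regularised solve on the current support
        x_s = np.linalg.solve(Ds.T @ Ds + alpha * np.eye(len(support)), Ds.T @ y)
        residual = y - Ds @ x_s
    x = np.zeros(D.shape[1])
    x[support] = x_s
    return x
```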

7.
Strata with large impedance contrasts appear on seismic sections as strong-amplitude events that mask the useful information of nearby reservoirs, so targeted processing is needed to remove this strong shielding. To address the poor matching accuracy and spatial continuity of conventional matching pursuit in structurally complex areas, this paper proposes a multi-channel matching-pursuit method with adaptive weights for removing strong shielding. The strong reflectors are first locally flattened using horizon (structural) information, which weakens the influence of the spatial structure of the strata on the extraction of the strong reflectors; the correlation coefficient between each neighbouring trace and the central trace is then introduced as the weight in multi-trace averaging, improving the stability and lateral continuity of the matching results. The interpreted horizon times are also used as the initial wavelet times, which effectively improves computational efficiency. Model tests and applications to field seismic data show that the improved method effectively strips the strong reflections and highlights the effective reservoir information; the well-to-seismic tie is clearly improved after stripping, and compared with the conventional matching-pursuit algorithm the method achieves higher matching accuracy, better spatial continuity and better removal of strong reflections.
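The correlation-coefficient weighting of neighbouring traces can be sketched as below (a hedged illustration; the local flattening with horizon information and the matching-pursuit stripping itself are not shown):

```python
import numpy as np

def correlation_weighted_average(gather, center):
    """Weighted average of the traces in `gather` (time x traces), with each
    trace weighted by its correlation coefficient with the centre trace."""
    c = gather[:, center]
    weights = np.array([np.corrcoef(gather[:, j], c)[0, 1]
                        for j in range(gather.shape[1])])
    weights = np.clip(weights, 0.0, None)     # ignore anti-correlated traces
    weights /= weights.sum() + 1e-12
    return gather @ weights                   # stabilised target trace for matching
```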

8.
As the observational environment of hydrocarbon exploration becomes increasingly complex, acquired seismic data are often contaminated by various types of noise, which buries the weak effective signals produced by exploration targets and severely degrades high-precision interpretation of seismic data; effective suppression of noise in seismic data has therefore become increasingly important. This paper adopts a dictionary-learning strategy: the complex seismic data are divided into blocks, dictionary atoms are learned from the blocked data to build a high-precision sparse representation of the seismic data, and the atoms are updated over two rounds of iteration to denoise the data. The dictionary-learning algorithm is applied to synthetic data with random noise and to field seismic data to verify its feasibility and effectiveness. The results show that the algorithm effectively removes random noise, preserves the events of the effective signal and improves the signal-to-noise ratio, providing a new technical means for denoising complex noisy seismic data.
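A hedged sketch of patch-based dictionary-learning denoising, using scikit-learn's MiniBatchDictionaryLearning as a generic stand-in for the paper's two-pass atom-update scheme; the patch size, atom count and sparsity level are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def dictionary_denoise(section, patch=(8, 8), n_atoms=64, n_nonzero=3):
    """Denoise a 2-D seismic section by sparse coding of overlapping patches
    over a dictionary learned from the (noisy) data itself."""
    patches = extract_patches_2d(section, patch)
    X = patches.reshape(len(patches), -1)
    mean = X.mean(axis=1, keepdims=True)
    X = X - mean                                    # remove patch means
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                     transform_algorithm="omp",
                                     transform_n_nonzero_coefs=n_nonzero)
    code = dl.fit(X).transform(X)                   # learn atoms, sparse-code patches
    denoised = (code @ dl.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(denoised, section.shape)
```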

9.
The matching-pursuit algorithm is a high-precision, nonlinear time-frequency analysis method that has been widely applied in geophysics since it was proposed. In previous applications, the algorithm was suitably simplified on the basis of real-valued signals. This paper describes the principle and implementation of matching pursuit on the complex seismic trace, using complex Morlet wavelets, and gives a detailed algorithm flow chart. Comparison with traditional time-frequency analysis verifies that the algorithm achieves high time-frequency resolution. Application to field data shows that the algorithm clearly displays the locations and boundaries of reservoirs.
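A minimal sketch of matching a complex Morlet atom against the analytic (complex) trace obtained with the Hilbert transform; the parameter grids and helper names are illustrative, and the full iterative decomposition and flow chart of the paper are not reproduced:

```python
import numpy as np
from scipy.signal import hilbert

def complex_morlet(t, u, f, s):
    """Complex Morlet atom: Gaussian envelope (centre u, scale s) times exp(i*2*pi*f*(t-u))."""
    g = np.exp(-0.5 * ((t - u) / s) ** 2) * np.exp(2j * np.pi * f * (t - u))
    return g / (np.linalg.norm(g) + 1e-12)

def best_complex_atom(trace, t, centres, freqs, scales):
    """Correlate the analytic trace with complex Morlet atoms and return the
    parameters (and complex coefficient) with the largest |inner product|."""
    z = hilbert(trace)                        # analytic (complex) seismic trace
    best, params = 0.0, None
    for u in centres:
        for f in freqs:
            for s in scales:
                g = complex_morlet(t, u, f, s)
                c = np.vdot(g, z)             # complex correlation
                if abs(c) > best:
                    best, params = abs(c), (u, f, s, c)
    return params
```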

10.
The estimation of the Q factor of rocks by seismic surveys is a powerful tool for reservoir characterization, as it helps detect possible fractures and saturating fluids. Seismic tomography allows building 3D macro-models of the Q factor, using methods such as the spectral ratio and the frequency shift. Both these algorithms require windowing the seismic signal accurately in the time domain; however, this process can hardly follow the continuous variations of the wavelet length as a function of offset and propagation effects, and it is biased by the interpreter's choices. In this paper, we highlight some drawbacks of signal windowing in the frequency-shift method, and introduce a tomographic approach to estimate the Q factor using the complex attributes of the seismic trace. We show that such an approach is particularly needed when dispersion is broadening the waveforms of signals with a long wave-path. Our method still requires an interpretative event picking, but no other parameters such as the time window length and its possible smoothing options. We validate the new method with synthetic and real data examples, involving the joint tomographic inversion of direct and reflected signals. We show that a calibration of the frequency-shift method is needed to improve the estimation of the absolute Q factor, otherwise only relative contrasts are obtained.
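For context, a sketch of the classical centroid frequency-shift estimate (under a Gaussian-spectrum assumption) whose windowing drawbacks motivate this paper; the complex-attribute tomography itself is not reproduced and the function names are illustrative:

```python
import numpy as np

def centroid(freq, amp):
    return np.sum(freq * amp) / np.sum(amp)

def q_frequency_shift(src, rec, dt, traveltime):
    """Centroid frequency-shift Q estimate (Gaussian-spectrum assumption):
    Q = pi * t * var_S / (fc_S - fc_R), where fc_S and fc_R are the centroid
    frequencies of the source and receiver amplitude spectra and var_S is the
    variance of the source spectrum."""
    n = max(len(src), len(rec))
    f = np.fft.rfftfreq(n, dt)
    A_s = np.abs(np.fft.rfft(src, n))
    A_r = np.abs(np.fft.rfft(rec, n))
    fc_s, fc_r = centroid(f, A_s), centroid(f, A_r)
    var_s = np.sum((f - fc_s) ** 2 * A_s) / np.sum(A_s)
    return np.pi * traveltime * var_s / (fc_s - fc_r)
```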

11.
To apply wavelet denoising to microseismic monitoring signals of large-scale rock-mass structures, simulations were first carried out in the MATLAB environment to verify the feasibility of wavelet denoising with the Symlet6 wavelet. Four adaptive threshold rules were then compared for denoising noisy signals; the results show that with all four thresholds the denoised signals greatly improve the signal-to-noise ratio while keeping the mean-square error small, effectively removing the noise, and that for different noisy signals the unbiased-likelihood-principle threshold ...
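A minimal PyWavelets sketch of Symlet6 denoising with one adaptive rule (a universal soft threshold, with the noise level estimated from the finest detail coefficients); the other threshold rules compared in the paper are not reproduced, and the parameter choices are assumptions:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="sym6", level=4):
    """Soft-threshold wavelet denoising with a universal threshold; the noise
    level is estimated from the finest-scale detail coefficients (MAD)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```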

12.
A crucial step in the use of synthetic seismograms is the estimation of the filtering needed to convert the synthetic reflection spike sequence into a clearly recognizable approximation of a given seismic trace. In the past, the filtering has been effected by a single wavelet, usually found by trial and error, and evaluated by eye. Matching can be made more precise than this by using spectral estimation procedures to determine the contribution of primaries and other reflection components to the seismic trace. The wavelet or wavelets that give the least squares best fit to the trace can be found, the errors of fit estimated, and statistics developed for testing whether a valid match can be made. If the composition of the seismogram is assumed to be known (e.g. that it consists solely of primaries and internal multiples) the frequency response of the best fit wavelet is simply the ratio of the cross spectrum between the synthetic spike sequence and the seismic trace to the power spectrum of the synthetic spike sequence, and the statistics of the match are related to the ordinary coherence function. Usually the composition cannot be assumed to be known (e.g. multiples of unknown relative amplitude may be present), and the synthetic sequence has to be split into components that contribute in different ways to the seismic trace. The matching problem is then to determine what filters should be applied to these components, regarded as inputs to a multichannel filter, in order to best fit the seismic trace, regarded as a noisy output. Partial coherence analysis is intended for just this problem. It provides fundamental statistics for the match, and it cannot be properly applied without interpreting these statistics. A useful and concise statistic is the ratio of the power in the total filtered synthetic trace to the power in the errors of fit. This measures the overall goodness-of-fit of the least squares match. It corresponds to a coherent (signal) to incoherent (noise) power ratio. Two limits can be set on it: an upper one equal to the signal-to-noise ratio estimated from the seismic data themselves, and a lower one defined from the distribution of the goodness-of-fit ratios yielded by matching with random noise of the same bandwidth and duration as the seismic trace segment. A match can be considered completely successful if its goodness-of-fit reaches the upper limit; it is rejected if the goodness-of-fit falls below the lower one.
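The single-input case described above can be sketched directly: the best-fit wavelet's frequency response is the cross-spectrum between the spike sequence and the trace divided by the spike power spectrum, with the ordinary coherence as the match statistic. The sketch below uses Welch-type spectral estimates; the segment lengths and names are illustrative assumptions:

```python
import numpy as np
from scipy.signal import csd, welch, coherence

def best_fit_wavelet(spikes, trace, dt, nperseg=256, nw=128):
    """Least-squares wavelet estimate for the single-input case:
    H(f) = cross-spectrum(spikes, trace) / power-spectrum(spikes),
    with the ordinary coherence returned as the goodness-of-fit statistic."""
    fs = 1.0 / dt
    f, Pxy = csd(spikes, trace, fs=fs, nperseg=nperseg)
    _, Pxx = welch(spikes, fs=fs, nperseg=nperseg)
    _, coh = coherence(spikes, trace, fs=fs, nperseg=nperseg)
    H = Pxy / (Pxx + 1e-12)
    w = np.fft.irfft(H)                        # time-domain wavelet estimate
    return f, np.roll(w, nw // 2)[:nw], coh    # wavelet centred for display
```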

13.
Estimating the signal-to-noise ratio (SNR) of seismic data plays an important role in seismic data processing and interpretation. Previous SNR-estimation methods all require separating the effective signal and the noise before estimating the corresponding SNR; their accuracy depends heavily on the effectiveness of the signal-estimation or noise-suppression method, and they are often biased. This paper proposes a deep convolutional neural network model for estimating the local SNR of seismic data: the parameters are optimized through iterative training to build a feature mapping from noisy seismic data to its SNR, and the trained network then performs SNR inference without separating the effective signal and the noise. Results on both synthetic and field data show that the method estimates the local SNR of seismic data accurately and efficiently, providing a basis for quantitative evaluation of seismic data quality.
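A minimal PyTorch sketch of the kind of convolutional regression network such a mapping could use; the architecture, patch size and training details here are assumptions for illustration, not the authors' model:

```python
import torch
import torch.nn as nn

class LocalSNRNet(nn.Module):
    """Minimal CNN mapping a patch of noisy seismic data to a scalar local SNR."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)           # regress the local SNR (e.g. in dB)

    def forward(self, x):                      # x: (batch, 1, nt, nx) patches
        return self.head(self.features(x).flatten(1))

# training would minimise an L2 loss between predicted and reference SNR values
model = LocalSNRNet()
pred = model(torch.randn(8, 1, 64, 64))        # toy forward pass
```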

14.
Time-frequency analysis and attribute extraction of seismic signals using the Hilbert-Huang transform
Seismic signals are nonlinear and non-stationary; traditional analysis methods mainly include the short-time Fourier transform, the wavelet transform and Cohen-class time-frequency distributions. The Hilbert-Huang transform is a new method for analysing non-stationary signals whose key step is empirical mode decomposition (EMD): through EMD, a complex signal can be decomposed into a small, finite number of intrinsic mode functions, from which the Hilbert time-frequency spectrum of the signal is obtained. Applied to a single seismic trace, the method yields the EMD of the trace and its Hilbert spectrum; applied to a seismic section, it yields seismic attributes such as instantaneous frequency and instantaneous amplitude with clearer physical meaning. Model tests and practical applications demonstrate the effectiveness of the method.
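A minimal Hilbert-Huang sketch, assuming the PyEMD (EMD-signal) package is available for the empirical mode decomposition and using the Hilbert transform for per-IMF instantaneous amplitude and frequency; the package choice and names are assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD          # EMD-signal package (assumed available)

def hilbert_huang(signal, dt):
    """EMD into intrinsic mode functions, then per-IMF instantaneous amplitude
    and frequency via the Hilbert transform (a minimal Hilbert-Huang sketch)."""
    imfs = EMD().emd(signal)
    attrs = []
    for imf in imfs:
        z = hilbert(imf)
        amp = np.abs(z)                                  # instantaneous amplitude
        phase = np.unwrap(np.angle(z))
        freq = np.gradient(phase, dt) / (2.0 * np.pi)    # instantaneous frequency
        attrs.append((amp, freq))
    return imfs, attrs
```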

15.
Spectral decomposition based on spectrogram reassignment and its application in reservoir detection
Spectral decomposition, which decomposes a single seismic trace into a continuous time-frequency spectral plane, is one of the important techniques in seismic data processing and interpretation. Because spectral decomposition is non-unique, the same seismic trace yields different time-frequency spectra depending on the decomposition method. The short-time Fourier transform, the wavelet transform, the S transform and the matching-pursuit algorithm all analyse the signal through windows and are therefore limited by the uncertainty principle. The Wigner-Ville transform avoids the limitation of the uncertainty principle, but the presence of cross-terms restricts its use. This paper applies the reassignment-based time-frequency method (RSPWV) to synthetic single-trace records and to field seismic data. Comparison with the short-time Fourier transform and the matching-pursuit algorithm shows that this method has higher time-frequency resolution and identifies gas-bearing layers well.

16.
In previous publications, we presented a waveform-inversion algorithm for attenuation analysis in heterogeneous anisotropic media. However, waveform inversion requires an accurate estimate of the source wavelet, which is often difficult to obtain from field data. To address this problem, here we adopt a source-independent waveform-inversion algorithm that obviates the need for joint estimation of the source signal and attenuation coefficients. The key operations in that algorithm are the convolutions (1) of the observed wavefield with a reference trace from the modelled data and (2) of the modelled wavefield with a reference trace from the observed data. The influence of the source signature on attenuation estimation is mitigated by defining the objective function as the ℓ2-norm of the difference between the two convolved data sets. The inversion gradients for the medium parameters are similar to those for conventional waveform-inversion techniques, with the exception of the adjoint sources computed by convolution and cross-correlation operations. To make the source-independent inversion methodology more stable in the presence of velocity errors, we combine it with the local-similarity technique. The proposed algorithm is validated using transmission tests for a homogeneous transversely isotropic model with a vertical symmetry axis that contains a Gaussian anomaly in the shear-wave vertical attenuation coefficient. Then the method is applied to the inversion of reflection data for a modified transversely isotropic model from Hess. It should be noted that due to the increased nonlinearity of the inverse problem, the source-independent algorithm requires a more accurate initial model to obtain inversion results comparable to those produced by conventional waveform inversion with the actual wavelet.
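The key convolution-based objective can be sketched as follows, assuming 2-D arrays of traces (time x receiver) and a single reference trace per data set; this is an illustrative misfit only, not the full gradient computation or the local-similarity extension:

```python
import numpy as np

def source_independent_misfit(d_obs, d_mod, ref_idx=0):
    """L2 misfit between (observed data convolved with a modelled reference
    trace) and (modelled data convolved with an observed reference trace).
    The unknown source signature cancels in this convolved objective."""
    r_mod = d_mod[:, ref_idx]
    r_obs = d_obs[:, ref_idx]
    misfit = 0.0
    for j in range(d_obs.shape[1]):
        a = np.convolve(d_obs[:, j], r_mod)
        b = np.convolve(d_mod[:, j], r_obs)
        misfit += np.sum((a - b) ** 2)
    return 0.5 * misfit
```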

17.
A review of the Matching Pursuits method
The Matching Pursuits (MP) algorithm is the computational process of adaptively selecting functions from a fixed function set to represent a signal; each function in the set is called an atom. The diversity of signal features determines which types of atoms can accurately characterize them, while the greedy nature of the repeated iterative approximation makes computational efficiency the core issue in the existence and development of the MP algorithm. This paper reviews how the MP algorithm has evolved in terms of the generation of atom dictionaries, the search and indexing schemes for atom parameters, and fast algorithms for the iterative approximation process.

18.
The common-reflection surface is a method for describing the shape of seismic events, typically the slopes (dips) and curvatures of the traveltime. The most systematic approach to estimating the common-reflection surface traveltime attributes is to employ a sequence of single-variable search procedures, inheriting the advantage of a low computational cost, but also the disadvantage of poor estimation quality. A search strategy in which the common-reflection surface attributes are globally estimated in a single stage may yield more accurate estimates. In this paper, we propose to use the bio-inspired global optimization algorithm differential evolution to estimate all the two-dimensional common-offset common-reflection surface attributes simultaneously. The differential evolution algorithm can provide accurate estimates of the common-reflection surface traveltime attributes, with the benefit of having a small set of input parameters to be configured. We apply the differential evolution algorithm to estimate the two-dimensional common-reflection surface attributes in the synthetic Marmousi data set, contaminated by noise, and in land field data with a small fold. By analysing the stacked and coherence sections, we show that the differential-evolution-based common-offset common-reflection surface approach yields a significant signal-to-noise ratio enhancement.
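A hedged sketch of the idea: estimating traveltime attributes simultaneously by maximizing semblance with scipy's differential_evolution, here with a simplified two-attribute hyperbolic moveout as a stand-in for the full common-offset common-reflection surface expression; the names and the moveout form are assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def semblance(gather, t_axis, times, halfwin=3):
    """Coherence of gather samples picked along the predicted traveltimes."""
    dt = t_axis[1] - t_axis[0]
    idx = np.clip(np.round((times - t_axis[0]) / dt).astype(int),
                  halfwin, len(t_axis) - halfwin - 1)
    win = np.stack([gather[idx + k, np.arange(gather.shape[1])]
                    for k in range(-halfwin, halfwin + 1)])
    num = np.sum(np.sum(win, axis=1) ** 2)
    den = win.shape[1] * np.sum(win ** 2) + 1e-12
    return num / den

def fit_attributes(gather, t_axis, dx, t0, bounds):
    """Estimate two traveltime attributes (A, B) of a hyperbolic moveout
    t(x) = sqrt((t0 + A*x)**2 + B*x**2) by maximising semblance with
    differential evolution (all attributes searched simultaneously)."""
    x = (np.arange(gather.shape[1]) - gather.shape[1] // 2) * dx

    def cost(p):
        A, B = p
        times = np.sqrt(np.maximum((t0 + A * x) ** 2 + B * x ** 2, 0.0))
        return -semblance(gather, t_axis, times)

    return differential_evolution(cost, bounds).x
```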

19.
A method for estimating the quality factor of media from zero-offset VSP data
The peak-frequency-shift method is used to estimate the quality factor Q from zero-offset VSP data. The method approximates zero-phase and mixed-phase source wavelets with the Ricker wavelet and a matching seismic wavelet, respectively, yielding formulas for Q estimation by the peak-frequency-shift method. To address the limited accuracy of conventional estimates of the wavelet peak frequency, an eigenstructure method for estimating the peak frequency of the seismic wavelet is proposed. Simulation experiments on synthetic zero-offset VSP data verify the correctness of Q estimation by the peak-frequency-shift method. The simulations show that, compared with the fast Fourier transform and the Burg maximum-entropy method, the eigenstructure method gives somewhat more accurate peak frequencies and Q values. They also show that appropriate wavelet parameters must be chosen when estimating Q with the peak-frequency-shift method; otherwise the accuracy of the Q estimate is degraded.
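For the zero-phase (Ricker) case, the peak-frequency-shift relation can be written down directly: with a Ricker source of peak frequency f_m and constant-Q damping exp(-pi*f*t/Q), the received peak frequency f_p gives Q = pi*t*f_p*f_m^2 / (2*(f_m^2 - f_p^2)) under these assumptions. The sketch below uses a simple FFT peak pick rather than the eigenstructure estimator proposed in the paper, and the mixed-phase variant is not reproduced:

```python
import numpy as np

def peak_frequency(signal, dt):
    """Peak (dominant) frequency of the amplitude spectrum; a simple FFT-based
    estimate rather than the eigenstructure method advocated in the paper."""
    f = np.fft.rfftfreq(len(signal), dt)
    A = np.abs(np.fft.rfft(signal))
    return f[np.argmax(A)]

def q_peak_frequency_shift(fm, fp, traveltime):
    """Q from the downshift of the Ricker peak frequency: with a Ricker source
    (peak fm) and exp(-pi*f*t/Q) attenuation, the received peak fp gives
    Q = pi * t * fp * fm**2 / (2 * (fm**2 - fp**2))."""
    return np.pi * traveltime * fp * fm ** 2 / (2.0 * (fm ** 2 - fp ** 2))
```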

20.
The coupled flow-mass transport inverse problem is formulated using the maximum likelihood estimation concept. An evolutionary computational algorithm, the genetic algorithm, is applied to search for a global or near-global solution. The resulting inverse model allows for flow and transport parameter estimation, based on inversion of spatial and temporal distributions of head and concentration measurements. Numerical experiments using a subset of the three-dimensional tracer tests conducted at the Columbus, Mississippi site are presented to test the model's ability to identify a wide range of parameters and parametrization schemes. The results indicate that the model can be applied to identify zoned parameters of hydraulic conductivity, geostatistical parameters of the hydraulic conductivity field, angle of hydraulic conductivity anisotropy, solute hydrodynamic dispersivity, and sorption parameters. The identification criterion, or objective function residual, is shown to decrease significantly as the complexity of the hydraulic conductivity parametrization is increased. Predictive modeling using the estimated parameters indicated that the geostatistical hydraulic conductivity distribution scheme produced good agreement between simulated and observed heads and concentrations. The genetic algorithm, while providing apparently robust solutions, is found to be considerably less efficient computationally than a quasi-Newton algorithm.

