Similar Documents
16 similar documents found (search time: 156 ms)
1.
Seismic blind deconvolution via ICA with a banded mixing matrix
Recognizing that seismic deconvolution is in essence a blind process, this paper introduces a higher-order-statistics blind source separation technique, independent component analysis (ICA), to achieve seismic blind deconvolution. Under a noise-free assumption, the time-delay matrix of the seismic record and the banded convolution matrix of the seismic wavelet are used to recast the seismic convolutional model as a general linear-mixing ICA model. Using the FastICA algorithm with the banded structure as prior information, a so-called banded ICA (B-ICA) algorithm is implemented, which yields as many estimated reflectivity series and wavelet series as the length of the wavelet operator; the additional information supplied by the convolutional model is then used to select the best reflectivity series and its corresponding wavelet. Numerical examples on synthetic data and real 2-D seismic traces show that, for statistical deconvolution under so-called "fully blind" conditions (no Gaussian white-noise assumption on the reflectivity, no minimum-phase assumption on the wavelet), the ICA-based approach (non-Gaussian reflectivity, non-minimum-phase wavelet) solves the seismic blind deconvolution problem well; it is an advance over statistical deconvolution methods based on second-order statistics and is feasible and promising for application.
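The abstract does not spell out the B-ICA algorithm itself, but its core building block is the FastICA fixed-point iteration. Below is a minimal, generic one-unit FastICA sketch in numpy; the whitening step, tanh nonlinearity, and two-channel mixing demo are standard illustrations, not the paper's banded variant.

```python
import numpy as np

def whiten(X):
    """Centre and whiten data X of shape (n_channels, n_samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    return (E @ np.diag(d ** -0.5) @ E.T) @ Xc

def fastica_one_unit(Z, n_iter=200, tol=1e-10, seed=0):
    """One-unit FastICA fixed-point iteration (tanh nonlinearity) on whitened Z."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wz = w @ Z
        g, gp = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
        w_new = (Z * g).mean(axis=1) - gp.mean() * w   # fixed-point update
        w_new /= np.linalg.norm(w_new)
        if 1.0 - abs(w_new @ w) < tol:                 # converged (up to sign)
            return w_new
        w = w_new
    return w
```

Applied to linear mixtures of super-Gaussian (spiky) sources, the extracted component correlates strongly with one of the sources; B-ICA additionally exploits the banded structure of the wavelet convolution matrix as a prior, which this sketch omits.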

2.
The Bussgang algorithm was originally proposed for convolutive blind source separation; this paper applies it to seismic blind deconvolution. Because the generalized Gaussian probability density function can approximate arbitrary probability densities, a generalized Gaussian distribution is introduced to capture the super-Gaussian character of the reflectivity series. Based on the statistics of the reflectivity and the principle of the Bussgang algorithm, an objective function is built with the Kullback-Leibler distance as the non-Gaussianity measure, the memoryless nonlinear function involved in the algorithm is derived, and seismic blind deconvolution is realized. Synthetic tests and real-data processing show that the method adapts well to non-minimum-phase systems, estimates the seismic wavelet and reflectivity simultaneously, and effectively improves the resolution of seismic data.

3.
Exploiting the non-Gaussian statistics of subsurface reflectivity, this paper expresses the negentropy of the deconvolution output, under a unit-variance constraint, as a nonpolynomial function and takes it as the objective function for blind deconvolution; a particle swarm optimization (PSO) algorithm then searches for the optimal deconvolution operator to achieve blind deconvolution of the seismic signal. Numerical simulations and real-data results show that, compared with conventional deconvolution, the method handles both minimum-phase and mixed-phase wavelets, estimates reflectivity from seismic data more accurately, effectively broadens the spectrum, and yields higher-resolution seismic data.
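As a sketch of the objective only (the PSO search itself is omitted), the nonpolynomial negentropy approximation under the unit-variance constraint can be written with G(u) = log cosh(u), one common nonpolynomial choice; the abstract does not name the specific function, so this G is an assumption.

```python
import numpy as np

# E[G(nu)] for a standard Gaussian nu, computed by quadrature; G(u) = log cosh(u)
_u = np.linspace(-8.0, 8.0, 20001)
_phi = np.exp(-0.5 * _u ** 2) / np.sqrt(2.0 * np.pi)
E_G_GAUSS = np.trapz(np.log(np.cosh(_u)) * _phi, _u)

def negentropy(y):
    """Nonpolynomial negentropy approximation J(y) = (E[G(y)] - E[G(nu)])^2
    after normalising y to zero mean and unit variance."""
    y = (y - y.mean()) / y.std()
    return (np.mean(np.log(np.cosh(y))) - E_G_GAUSS) ** 2

def deconv_objective(f, trace):
    """Blind-deconvolution objective: negated negentropy of the filtered trace.
    A PSO (or any global optimiser) would minimise this over the operator f."""
    return -negentropy(np.convolve(trace, f, mode="valid"))
```

A spiky (super-Gaussian) output has higher negentropy than a Gaussian one, which is why maximising it drives the output toward a sparse reflectivity estimate.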

4.
Conventional Wiener deconvolution must assume that the source wavelet is stationary, i.e. unchanging, as it travels through the subsurface, an assumption far removed from real field acquisition; deconvolution based on the Gabor transform instead accounts for nonstationary effects such as energy attenuation and wavelet deformation. In the Gabor domain a seismic trace factorizes into three terms: the source wavelet, an attenuation function, and the reflectivity. The technique designs a partition-of-unity (POU) window function and uses it to decompose the seismic signal locally in time and frequency. Gabor-domain deconvolution estimates the reflectivity by dividing, in the Gabor domain, by the product of the attenuation function and the source wavelet; an inverse Gabor transform then returns the time-domain reflectivity. Tests on synthetic models and applications to real seismic data show that, compared with Wiener deconvolution, Gabor-based deconvolution compensates the energy attenuation of middle and deep layers and thereby broadens the effective band and improves temporal resolution.
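The Gabor-domain division described above can be sketched with an STFT standing in for the Gabor transform with a POU window. The exponential-Q attenuation model suggested in the docstring, the window length, and the stabilisation constant `eps` are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.signal import stft, istft

def gabor_deconvolve(trace, fs, wavelet_spec, atten, nperseg=64, eps=1e-3):
    """Stabilised division by (wavelet x attenuation) in the time-frequency domain.
    wavelet_spec: wavelet amplitude spectrum at the STFT frequencies
                  (length nperseg // 2 + 1).
    atten(f, t): attenuation factor, e.g. np.exp(-np.pi * f * t / Q)."""
    f, t, S = stft(trace, fs=fs, nperseg=nperseg)
    D = wavelet_spec[:, None] * atten(f[:, None], t[None, :])
    denom = np.abs(D) ** 2 + eps * np.max(np.abs(D)) ** 2   # stabilised divisor
    _, r = istft(S * np.conj(D) / denom, fs=fs, nperseg=nperseg)
    return r[: len(trace)]
```

With a flat wavelet spectrum and no attenuation the operation reduces to an STFT round trip, which is a convenient sanity check before supplying a real wavelet estimate and Q model.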

5.
A fractal spike deconvolution method
The spike deconvolution method for the seismic inverse problem was proposed under the assumptions of white reflectivity and a minimum-phase wavelet. Research in recent years has shown that reflectivity is not always white but is instead a kind of fractal noise; applying a class of fractal deconvolution methods turns the seismic inverse problem into a system of nonlinear equations that is hard to solve. Using the fractal properties of the reflectivity, this paper derives a simpler, easily solved system of linear equations, termed fractal spike deconvolution. Numerical experiments show that the method is effective.

6.
Conventional deconvolution improves the resolution of seismic records by compressing the wavelet through linear convolution, an ability limited by the effective signal band. Stochastic sparse-spike nonlinear deconvolution shifts the focus from wavelet compression to detecting the positions and amplitudes of the reflection coefficients: it obtains them directly from the seismic record by nonlinear inversion, breaking through the limit of the effective band and raising the resolution of seismic records substantially. At the same time, effective constraints on the statistics of the reflectivity reduce the non-uniqueness of the deconvolution result. Model experiments show that stochastic sparse-spike deconvolution is relatively insensitive to noise and to the wavelet and preserves weak reflections well. Building on the model experiments, the method was applied experimentally to real seismic data and effectively improved their resolution.

7.
Conventional predictive deconvolution assumes that the reflectivity is an uncorrelated white-noise sequence and solves the Wiener-Hopf (WH) equations for a filter that removes the correlated components of the seismic record, thereby attenuating multiples and improving resolution. In practice, the primary reflectivity series contains some correlated components and does not satisfy the white-noise assumption; those components are also removed during processing and the effective signal is distorted. To address this problem, an improved method is proposed. First, the wavelet autocorrelation is estimated directly from the seismic record by spectral modelling; second, the estimated wavelet autocorrelation is used to construct an autocorrelation function that contains the correlation of the multiples while excluding that of the primary reflectivity; finally, the constructed autocorrelation is substituted into the WH equations to compute the prediction filter for predictive deconvolution. Model tests and real-data processing, compared against conventional predictive deconvolution, show that the method accommodates non-white reflectivity and leaves its statistics unchanged after processing; without weakening multiple attenuation or the gain in resolution, it greatly reduces processing noise and improves fidelity.
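The conventional step this abstract modifies can be sketched as follows: design a gap (prediction) filter from an autocorrelation via the Wiener-Hopf normal equations and subtract the predictable part. The improved method would simply pass in the constructed autocorrelation instead of the raw trace autocorrelation; the prewhitening level and filter length here are illustrative.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_deconv(trace, autocorr, gap, nfilt, prewhite=1e-4):
    """Gap (predictive) deconvolution. The prediction filter a solves the
    Wiener-Hopf normal equations R a = g with R[i, j] = r[|i - j|] and
    g[i] = r[gap + i]; the predictable part is then subtracted from the trace."""
    col = autocorr[:nfilt].astype(float).copy()
    col[0] *= 1.0 + prewhite                    # prewhitening stabilises the solve
    g = autocorr[gap: gap + nfilt].astype(float)
    a = solve_toeplitz((col, col), g)           # symmetric Toeplitz system
    pred = np.zeros(len(trace))
    for i, ai in enumerate(a):                  # pred[t] = sum_i a[i] * x[t - gap - i]
        pred[gap + i:] += ai * trace[: len(trace) - gap - i]
    return trace - pred
```

A perfectly predictable signal (e.g. a sinusoid, which obeys a two-term recursion) is almost entirely removed, which is the mechanism by which periodic multiples are attenuated.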

8.
Improving the resolution of seismic records based on a varying-wavelet model
This paper gives an approximate mathematical expression for a varying-wavelet model of seismic records. Based on this expression, the composition and structure of the trace spectrum are studied for two cases, reflectivity that violates the white-noise assumption and a wavelet that changes during subsurface propagation, and the reasons why spectral whitening and deconvolution perform poorly in these cases are discussed. A new resolution-enhancement method based on the varying-wavelet model is then proposed. First, Gabor molecular windows adapted to the seismic record divide it into segments within which the signal is approximately stationary, and the record is transformed to the time-frequency domain. Second, the amplitude spectrum of the signal in each molecular window is processed in the transform domain to broaden the band. Finally, the processed time-frequency function is transformed back to the time domain to obtain the higher-resolution result. The proposed method applies well when the reflectivity does not satisfy the white-noise assumption, and the enhanced record preserves the relative energy relations of the original. Synthetic and real-data examples show that it clearly outperforms spectral whitening both in broadening the band of seismic data and in preserving the local relative energy relations of the record.

9.
The preconditioned conjugate-gradient deconvolution method combines a sparse-deconvolution formulation with an optimized preconditioned conjugate-gradient solver to invert for the reflectivity. Applied to seismic data, it raises the frequency content and widens the effective band. Because seismic signals are time-varying, this paper replaces the wavelet in the deconvolution with a multiscale time-varying wavelet. Numerical examples show that the method achieves good practical results.

10.
A study of nonstationary sparsity-constrained seismic deconvolution (in English)
The traditional Robinson convolutional model is constrained by three unreasonable assumptions: white reflectivity, a minimum-phase wavelet, and stationarity. Modern reflectivity inversion methods (e.g. sparsity-constrained deconvolution) relax the first two while overlooking an important fact: real seismic signals are markedly nonstationary, which directly undermines the foundational assumption that the wavelet does not change with time. This paper first confirms, through tests on real reflectivity, that nonstationary effects prevent important information from being recovered, with the deeper section affected most severely. To correct for nonstationarity, the paper starts from the nonstationary convolutional model, which describes nonstationarity in a general way, and uses log-domain attenuation curves to detect the nonstationary effects and then balance and correct them. Unlike the conventional approach, log-domain Gabor deconvolution is used only to remove nonstationarity, while the task of separating the source wavelet from the reflectivity is left to sparsity-constrained deconvolution, whose assumptions better match reality; combining the two deconvolution techniques thus removes nonstationary effects while avoiding the adverse impact of idealized assumptions about the reflectivity and wavelet. Application to marine seismic data shows that correcting nonstationarity helps recover richer reflectivity information, so that details related to geological deposition and structure appear more clearly.

11.
Multiresolution deconvolution of seismic signals
A new deconvolution method, multiresolution seismic deconvolution, is proposed based on the dyadic wavelet transform. Deconvolution is performed separately at each scale of the dyadic wavelet domain, with resolution varying by scale; the correlation between deconvolution results at different resolutions, together with the decay of measurement noise across scales, is used to approach the high-resolution result from the low-resolution ones. Theoretical analysis and experiments show that the method has high accuracy and performs well even at relatively low signal-to-noise ratios.

12.
Wavelet estimation and well-tie procedures are important tasks in seismic processing and interpretation. Statistical deconvolution methods for estimating the wavelet are generally based on the assumptions of the classical convolutional model, which imply a random reflectivity and a minimum-phase wavelet. Homomorphic deconvolution, however, does not rely on these premises. In this work, we propose an approach to estimate the seismic wavelet that combines the advantages of homomorphic deconvolution and deterministic wavelet estimation, using both seismic and well-log data. The feasibility of this approach is verified on a well-to-seismic tie for a real data set from the Viking Graben field, North Sea, Norway. The results show that the wavelet estimated by this methodology produced a higher-quality well tie than wavelet-estimation methods that rely on the classical assumptions of the convolutional model.

13.
Statistical deconvolution, as it is usually applied on a routine basis, designs an operator from the trace autocorrelation to compress the wavelet which is convolved with the reflectivity sequence. Under the assumption of a white reflectivity sequence (and a minimum-delay wavelet) this simple approach is valid. However, if the reflectivity is distinctly non-white, then the deconvolution will confuse the contributions to the trace spectral shape of the wavelet and reflectivity. Given logs from a nearby well, a simple two-parameter model may be used to describe the power spectral shape of the reflection coefficients derived from the broadband synthetic. This modelling is attractive in that structure in the smoothed spectrum which is consistent with random effects is not built into the model. The two parameters are used to compute simple inverse- and forward-correcting filters, which can be applied before and after the design and implementation of the standard predictive deconvolution operators. For whitening deconvolution, application of the inverse filter prior to deconvolution is unnecessary, provided the minimum-delay version of the forward filter is used. Application of the technique to seismic data shows the correction procedure to be fast and cheap and case histories display subtle, but important, differences between the conventionally deconvolved sections and those produced by incorporating the correction procedure into the processing sequence. It is concluded that, even with a moderate amount of non-whiteness, the corrected section can show appreciably better resolution than the conventionally processed section.

14.
Deconvolution is an essential step for high-resolution imaging in seismic data processing. The frequency and phase of the seismic wavelet change through time during wave propagation as a consequence of seismic absorption, so wavelet estimation is the most vital step of deconvolution and plays the main role in seismic processing and inversion. Gabor deconvolution is an effective method for removing attenuation effects, but since the Gabor transform provides no phase information, a minimum-phase assumption is usually invoked to estimate the phase of the wavelet. This approach does not give an optimal result when the source wavelet is dominantly mixed-phase. We used a kurtosis-maximization algorithm to estimate the phase of the wavelet. First, we removed the attenuation effect in the Gabor domain and computed the amplitude spectrum of the source wavelet; then we rotated the seismic trace by a constant phase so as to maximize its kurtosis. This procedure was repeated in moving windows to obtain the time-varying phase changes. The propagating wavelet was then generated to solve the inverse problem of the convolutional model. We show that the minimum-phase assumption does not give a suitable result in the case of mixed-phase wavelets. Application of this algorithm to synthetic and real data shows that subtle reflectivity information can be recovered and vertical seismic resolution is significantly improved.
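The constant-phase rotation step can be sketched via the analytic signal: rotate the trace by each candidate phase and keep the angle that maximises kurtosis. The moving-window loop and the preceding Gabor attenuation removal are omitted, and the scan granularity is an arbitrary choice.

```python
import numpy as np
from scipy.signal import hilbert

def kurtosis(x):
    """Normalised fourth moment of a zero-mean signal."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def best_constant_phase(trace, n_angles=180):
    """Return the constant phase rotation (in [0, pi)) that maximises the
    kurtosis of the rotated trace, together with that kurtosis value."""
    z = hilbert(trace)                                  # analytic signal
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    kurts = [kurtosis(np.real(z * np.exp(1j * th))) for th in angles]
    i = int(np.argmax(kurts))
    return angles[i], kurts[i]
```

Rotating a spiky (high-kurtosis) reflectivity by a known constant phase and then scanning recovers the inverse rotation, since kurtosis peaks when the spikes are restored; rotations only need to be scanned over half a cycle because a rotation by pi merely flips polarity.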

15.
Wiener deconvolution is generally used to improve the resolution of seismic sections, although it rests on several important assumptions. I propose a new method, named Gold deconvolution, to obtain the Earth's sparse-spike reflectivity series. The method uses a recursive approach and requires the source waveform to be known, in which case it is termed Deterministic Gold deconvolution. When the wavelet is unknown, it is estimated from the seismic data and the process is termed Statistical Gold deconvolution. Besides minimum-phase wavelets, Gold deconvolution also works for zero- and mixed-phase wavelets, even on noisy seismic data. The proposed method makes no assumption about the phase of the input wavelet; however, it needs the following assumptions to produce satisfactory results: (1) the source waveform is known or, if not, is estimated from the seismic data; (2) the source wavelet is stationary at least within a specified time gate; (3) the input seismic data are zero-offset and contain no multiples; and (4) the Earth consists of a sparse-spike reflectivity series. When applied in small time and space windows, the Gold deconvolution algorithm overcomes nonstationarity of the input wavelet. The algorithm uses several thousand iterations, and generally more iterations produce better results. Since the wavelet is extracted from the seismogram itself in the Statistical Gold deconvolution case, the algorithm should be applied in constant-length windows in both time and space to overcome wavelet nonstationarity in the input seismograms. The method can be extended to two dimensions to obtain time- and space-dependent reflectivity, although I use one-dimensional Gold deconvolution on a trace-by-trace basis. The method is effective in areas where small-scale bright spots exist, and it can also be used to locate thin reservoirs.
Since the method produces better results in the Deterministic Gold deconvolution case, it is well suited to deterministic deconvolution of data sets with known source waveforms, such as land Vibroseis records and marine CHIRP systems.

16.
The desired result of an optimal seismic data processing sequence is a broadband zero-phase section, i.e. a bandpassed version of the actual reflectivity function. However, many so-called zero-phase sections still carry a significant phase error, which is due to unrealistic assumptions built into the design of standard processes such as deconvolution. The two major issues are the colour of the reflectivity series and the misuse of prewhitening. If not properly handled, they bias the phase and amplitude spectra of the final section and prevent it from being zero-phase. Whereas the reflectivity bias leads to a phase error of 50 to 90 degrees, the prewhitening bias results in a phase error that is directly proportional to the logarithm of the actual prewhitening factor. Consequently, if the spike deconvolution process is applied in a time-variant manner, a time-variant and usually frequency-dependent phase error is introduced. In this article we have made an effort to include sufficient detail to facilitate a clear understanding of the problems involved. The standard processing flow should comprise a minimum-delay transform and spike deconvolution prestack, followed by a zero-phase transform poststack, where the residual wavelet is assumed to be minimum phase.
