Similar articles
20 similar articles found.
1.
In this case study we consider the seismic processing of a challenging land data set from the Arabian Peninsula. It suffers from rough top-surface topography, a strongly varying weathering layer, and complex near-surface geology. We aim at establishing a new seismic imaging workflow, well-suited to these specific problems of land data processing. This workflow is based on the common-reflection-surface stack for topography, a generalized high-density velocity analysis and stacking process. It is applied in a non-interactive manner and provides an entire set of physically interpretable stacking parameters that include and complement the conventional stacking velocity. The implementation introduced combines two different approaches to topography handling to minimize the computational effort: after initial values of the stacking parameters are determined for a smoothly curved floating datum using conventional elevation statics, the final stack and also the related residual static correction are applied to the original prestack data, considering the true source and receiver elevations without the assumption of nearly vertical rays. Finally, we extrapolate all results to a chosen planar reference level using the stacking parameters. This redatuming procedure removes the influence of the rough measurement surface and provides standardized input for interpretation, tomographic velocity model determination, and post-stack depth migration. The methodology of the residual static correction employed and the details of its application to this data example are discussed in a separate paper in this issue. In view of the complex near-surface conditions, the imaging workflow that is conducted, i.e. stack – residual static correction – redatuming – tomographic inversion – prestack and post-stack depth migration, leads to a significant improvement in resolution, signal-to-noise ratio and reflector continuity.
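The "conventional elevation statics" used for the floating datum reduce, under the near-vertical-ray assumption, to a simple per-source and per-receiver time shift. A minimal sketch (not the authors' implementation; the function name, sign convention, and values are assumptions for illustration):

```python
def elevation_static(src_elev, rec_elev, datum_elev, v_repl):
    """Vertical-ray elevation static: time shift (seconds) that moves a source
    and a receiver from their true elevations to a flat datum, assuming
    near-vertical raypaths through a replacement velocity v_repl (m/s)."""
    t_src = (src_elev - datum_elev) / v_repl   # one-way delay removed at the source
    t_rec = (rec_elev - datum_elev) / v_repl   # one-way delay removed at the receiver
    return -(t_src + t_rec)                    # shift applied to the trace

# example: source at 250 m, receiver at 230 m, datum at 200 m, 2000 m/s replacement velocity
shift = elevation_static(250.0, 230.0, 200.0, 2000.0)
```

The abstract's point is precisely that this vertical-ray shortcut is only used for the initial parameter search, while the final stack honours the true elevations.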

2.
A residual static correction method based on migrated image gathers
To address the static correction problem in land seismic data processing, a residual static correction method based on migrated image gathers is proposed. Unlike the conventional approach of picking residual moveout from NMO-corrected CMP gathers, this method estimates the residual time shifts from migrated image gathers, avoiding the picking errors caused by inaccurate event positioning in complex settings. By generating migration gathers that vary with source and receiver position, surface-consistent residual statics for sources and receivers are picked directly from the migrated gathers; these source- and receiver-indexed migration gathers are generated only within a specified local time window and therefore add little computational cost. Tests on 2D and 3D field data demonstrate the effectiveness and practicality of the method.

3.
Local seismic event slopes contain subsurface velocity information and can be used to estimate seismic stacking velocity. In this paper, we propose a novel approach to estimating the stacking velocity automatically from seismic reflection data using similarity-weighted k-means clustering, in which the weights are the local similarity between each trace in a common-midpoint gather and a reference trace. Local similarity reflects the local signal-to-noise ratio in the common-midpoint gather. By thresholding, we select the data points with a high signal-to-noise ratio to be used in the velocity estimation with large weights in the mapped traveltime and velocity domain. Weighted k-means clustering moves the centroids closer to the data points with large weights, which are more reliable and have a higher signal-to-noise ratio. Interpolation is then used to obtain the whole velocity volume from the velocity points calculated by weighted k-means clustering. The proposed method yields a more accurate estimate of the stacking velocity because the similarity-based weighting in the clustering accounts for the signal-to-noise ratio and reliability of the different data points in the mapped traveltime and velocity domain. To demonstrate this, we apply the method to synthetic and field data examples; the resulting images are of higher quality than those obtained using existing methods.
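The weighting idea at the heart of this approach can be sketched in a few lines: centroids are weighted means, so high-similarity (high signal-to-noise) points pull them harder. A toy 1-D version, with all names and values invented for illustration (the actual method clusters points in the mapped traveltime-velocity domain):

```python
def weighted_kmeans(points, weights, k, iters=50):
    """1-D weighted k-means: each centroid is the weighted mean of its cluster,
    so points with large weights (high local similarity) attract centroids more."""
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        # assign every point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p, w in zip(points, weights):
            j = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[j].append((p, w))
        # recompute each centroid as the weighted mean of its cluster
        for j, cl in enumerate(clusters):
            total = sum(w for _, w in cl)
            if total > 0:
                centroids[j] = sum(p * w for p, w in cl) / total
    return sorted(centroids)

# toy stacking-velocity picks (m/s) with similarity weights:
# the low-weight outliers barely move the centroids
velocities = [1990.0, 2000.0, 2010.0, 2480.0, 2500.0, 2520.0]
similarity = [1.0, 1.0, 1.0, 0.2, 1.0, 0.2]
centroids = weighted_kmeans(velocities, similarity, k=2)
```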

4.
The method of common reflection surface (CRS) extends conventional stacking of seismic traces over offset to multidimensional stacking over offset-midpoint surfaces. We propose a new form of the stacking surface, derived from the analytical solution for reflection traveltime from a hyperbolic reflector. Both analytical comparisons and numerical tests show that the new approximation can be significantly more accurate than the conventional CRS approximation at large offsets or at large midpoint separations while using essentially the same parameters.

5.
The static correction problem is a key issue in seismic exploration and directly affects its precision and accuracy. In field acquisition, when different geophones are deployed at the same receiver location at different times, neither datum and residual static corrections based on the surface-consistency assumption nor non-surface-consistent residual static corrections are applicable. To solve this problem, this paper proposes a static correction method based on common-attitude gathers: the seismic data recorded by geophones deployed at the same receiver location at different times are sorted into separate common-attitude gathers, and surface-consistent static corrections are applied within each gather. When several common-attitude gathers exist at one receiver location, that location may carry multiple receiver statics; source statics are still solved with the surface-consistent method. The method resolves the non-surface-consistent statics between different common-attitude gathers at the same receiver location while also solving the surface-consistent receiver and source statics over the whole survey, with clear benefits on field data.

6.
Static shifts from near-surface inhomogeneities very often represent the key problem in the processing of seismic data from arid regions. In this case study, the deep bottom fill of a wadi strongly degrades the image quality of a 2D seismic data set. The resulting static and dynamic problems are solved by both conventional and common-reflection-surface (CRS) processing. A straightforward approach derives conventional refraction statics from picked first breaks and then goes through several iterations of manual velocity picking and residual statics calculation. The surface-induced static and dynamic inhomogeneities, however, are not completely solved by these conventional methods. In CRS processing, the local adaptation of the CRS stacking parameters results in very detailed dynamic corrections. They resolve the local inhomogeneities that were not detected by manual picking of stacking velocities and largely compensate for the surface-induced deterioration in the stack. The subsequent CRS residual statics calculations benefit greatly from the large CRS stacking fold which increases the numbers of estimates for single static shifts. This improves the surface-consistent averaging of static shifts and the convergence of the static solution which removes the remaining static shifts in the 2D seismic data. The large CRS stacking fold also increases the signal-to-noise ratio in the final CRS stack.

7.
周衍, 饶莹 (Zhou Yan and Rao Ying), 《地球物理学报》 (Chinese Journal of Geophysics), 2019, 62(11): 4393-4400
Static correction in the loess-plateau-covered areas of northern China is one of the most difficult problems in seismic data processing. The loess plateau is covered by very thick loess with large elevation relief, so the static problem is severe; moreover, the water table in these areas is generally deep, so conventional refraction static methods cannot achieve satisfactory results. Addressing this difficulty, this paper studies the applicability and reliability of tomographic-inversion statics in loess-plateau areas. Tomographic statics build a velocity model by iterative inversion of first-arrival traveltimes and then correct the seismic data according to the resulting near-surface velocity model. The iterative inversion here uses the simultaneous iterative reconstruction technique (SIRT), with a modification that stabilizes the iteration. However, because the surface elevation varies sharply across the loess plateau, the elevation difference and static correction between adjacent receivers can differ greatly, so while tomographic statics supply the long-wavelength components, first-break residual statics are also needed to obtain the short-wavelength components. Field examples prove that jointly applying first-break-traveltime tomographic statics and residual statics, computing the long- and short-wavelength statics together, effectively solves the static correction problem for real seismic data in loess-plateau areas.
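A bare-bones version of the SIRT update used in such tomographic inversions: traveltime residuals are back-projected along the rays simultaneously for all cells, normalized by ray-length sums. This sketch omits the paper's stabilizing modification, and the toy ray matrix is invented for illustration:

```python
def sirt(A, b, n_iter=500, relax=0.5):
    """Simultaneous iterative reconstruction technique: every slowness cell is
    updated at once from all ray residuals, normalised by row/column length sums."""
    rows, cols = len(A), len(A[0])
    row_sum = [sum(abs(v) for v in r) or 1.0 for r in A]
    col_sum = [sum(abs(A[i][j]) for i in range(rows)) or 1.0 for j in range(cols)]
    x = [0.0] * cols
    for _ in range(n_iter):
        # residual traveltimes for the current slowness model
        resid = [b[i] - sum(A[i][j] * x[j] for j in range(cols)) for i in range(rows)]
        # simultaneous back-projection onto all cells
        for j in range(cols):
            x[j] += relax * sum(A[i][j] * resid[i] / row_sum[i]
                                for i in range(rows)) / col_sum[j]
    return x

# toy problem: 3 rays crossing 2 cells, traveltimes generated
# from true slownesses [0.5, 0.25] s/km
A = [[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]]   # ray length in each cell (km)
b = [0.75, 1.0, 1.25]                      # observed traveltimes (s)
slowness = sirt(A, b)
```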

8.
Higher-order correlation stacking of seismic records in the wavelet domain
Horizontal (CMP) stacking effectively suppresses random noise and multiples, but for coherent interference arising from correlated, consistent noise on adjacent traces, conventional stacking tends to produce false events and struggles to achieve a high signal-to-noise ratio. Based on wavelet theory and higher-order statistics, this paper proposes a higher-order correlation stacking (HOCS) technique in the wavelet domain. Its basic principle is to transform the CMP gather, after dynamic and static corrections, into the wavelet domain, compute higher-order correlation coefficients there, and then perform a weighted stack. Numerical experiments and field-data results show that the method improves the signal-to-noise ratio more effectively than conventional stacking and removes certain correlated noise that conventional weighted stacking cannot suppress.

9.
In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to the fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique that is based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm can also identify whether an event is common to all stations (a regional event) or only to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
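The conventional running-window STA/LTA energy ratio that the authors extend can be sketched as follows (window lengths and the synthetic trace are invented for illustration; the paper's contribution, cross-correlating these ratios across traces, is not reproduced here):

```python
def sta_lta(trace, ns, nl):
    """Running short-term / long-term average energy ratio along one trace.
    ns and nl are the short and long window lengths in samples (ns < nl)."""
    ratio = [0.0] * len(trace)
    for i in range(nl, len(trace)):
        sta = sum(v * v for v in trace[i - ns:i]) / ns   # short-term energy
        lta = sum(v * v for v in trace[i - nl:i]) / nl   # long-term energy
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# synthetic trace: background level 0.1 with a 10-sample event
# of amplitude 1.0 starting at sample 100
trace = [0.1] * 100 + [1.0] * 10 + [0.1] * 50
ratio = sta_lta(trace, ns=5, nl=50)
```

The ratio peaks once the short window sits fully inside the event, which is what a threshold-based trigger would pick.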

10.
A joint correction method for 3D seismic and surface microseismic data
Because surface microseismic monitoring stations are deployed at the surface, they are affected by topographic relief and by thickness and velocity variations of the low-velocity weathered layer, which reduces the accuracy of microseismic event identification and location and limits the application of surface microseismic monitoring in areas with complex surface conditions. We therefore introduce ideas from 3D seismic exploration into surface microseismic monitoring and propose a joint 3D-seismic and surface-microseismic correction method, coupling hydrocarbon exploration and development technologies more closely. First, from 3D seismic data and low-velocity-layer survey data, an accurate near-surface velocity model is built by constrained tomographic inversion, and the surface microseismic stations are corrected from the rugged surface to a smooth datum inside the high-velocity layer, effectively removing the influence of the complex near surface. Second, from perforation-shot data and sonic-log velocities, an optimal velocity model is built by non-linear inversion; since the near-surface influence has already been removed, it need not be considered during the model optimization, so the resulting model is more accurate. Finally, on the basis of this accurate velocity model, residual statics are obtained by cross-correlation, further removing the effects of the complex near surface and of velocity-model approximation errors. This step-by-step joint correction effectively removes complex near-surface effects, improves the quality of the microseismic data and the accuracy of the velocity model, guarantees the event-location accuracy, and shows good application potential.

11.
Shallow seismic reflection is a widely used exploration method. In shallow seismic data processing, the accuracy of static corrections directly affects the velocity inversion results and the quality of stacked sections. On gentle topography, fixed-datum statics meet the required exploration accuracy, but under complex topography they leave large errors; even with a floating datum, residual statics remain because of the surface-consistency assumption, and the influence of topographic relief is not removed. To improve the accuracy of shallow reflection statics, a residual static correction must therefore follow the conventional statics. This paper derives, for rugged topography, the residual static for a sliding datum (the horizontal plane through each common midpoint serves as that midpoint's sliding datum); the correction depends on offset, reflector depth, layer velocity, and source and receiver elevations, and applies to both single-medium and layered-medium cases. Velocity-spectrum computation and analysis for three horizontally layered theoretical models with typical topographic relief show that, under complex topography, the proposed residual static correction removes the influence of the relief and improves the static-correction accuracy; applying NMO on this basis yields high-quality stacked sections.

12.
Static corrections for coal-field seismic data over rugged topography
Static problems caused by surface relief and near-surface structural variations seriously degrade the imaging quality of coal-field seismic data. We therefore first apply refraction statics based on generalized linear inversion with piecewise fitting of the low-velocity layer, solving the long-wavelength statics and part of the short-wavelength statics; then maximum-stack-power statics to further solve the residual statics; and finally non-surface-consistent residual-moveout corrections to remove the non-surface-consistent moveout caused by velocity and ray-path errors. Experiments show that after cascading the three corrections, the linear shape of the refraction events is restored on common-shot gathers, the hyperbolic coal-seam reflections are flattened on NMO-corrected common-midpoint gathers, and the signal-to-noise ratio of the coal-seam reflections on the stacked section is improved.

13.
14.
In the case of onshore data sets, the acquired reflection events can be strongly impaired due to rough top-surface topography and inhomogeneities in the uppermost low-velocity layer, the so-called weathering layer. Without accounting for these influences, the poor data quality will make data processing very difficult. Usually, the correction for the top-surface topography is not perfect. The residuals from this correction and the influence of the weathering layers lead to small distortions along the reflection events. We integrated a residual static correction method into our data-driven common-reflection-surface-stack-based imaging workflow to further eliminate such distortions. The moveout-corrected traces and the stacked pilot trace are cross-correlated to determine a final estimate of the surface-consistent residual statics in an iterative manner. As the handling of top-surface topography within the common-reflection-surface stack is discussed in a separate paper in this special issue, the corresponding residual static correction will be explained in more detail. For this purpose, the results obtained with a data set from the Arabian Peninsula will be presented.
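The core of such a residual-statics estimate, picking the cross-correlation lag between a moveout-corrected trace and the stacked pilot trace, might look like this toy sketch (names and the synthetic pulse are invented; the surface-consistent decomposition and the iteration over traces are omitted):

```python
def best_lag(trace, pilot, max_lag):
    """Lag (in samples) at which the trace best matches the pilot trace;
    the residual static to apply is the negative of this lag."""
    def xcorr(lag):
        lo, hi = max(0, -lag), min(len(pilot), len(trace) - lag)
        return sum(trace[i + lag] * pilot[i] for i in range(lo, hi))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# pilot pulse at sample 20; the trace carries the same pulse delayed by 3 samples
pilot = [0.0] * 20 + [1.0, 2.0, 1.0] + [0.0] * 20
trace = [0.0] * 23 + [1.0, 2.0, 1.0] + [0.0] * 17
lag = best_lag(trace, pilot, max_lag=5)
```

Shifting the trace by `-lag` samples would align it with the pilot before restacking.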

15.
Seismic field data are often irregularly or coarsely sampled in space due to acquisition limits. However, complete and regular data are required by most conventional seismic processing and imaging algorithms. We have developed a fast joint curvelet-domain seismic data reconstruction method by sparsity-promoting inversion based on compressive sensing. We seek a sparse representation of incomplete seismic data by curvelet coefficients and solve the sparsity-promoting problem through an iterative thresholding process to reconstruct the missing data. In conventional iterative thresholding algorithms, the updated reconstruction result of each iteration is obtained by adding the gradient to the previous result and thresholding it. The algorithm is stable and accurate but always requires many iterations. The linearised Bregman method can accelerate the convergence by replacing the previous result with that before thresholding, thus promoting the effective coefficients added to the result. This method is faster than the conventional one, but it can cause artefacts near the missing traces while reconstructing small-amplitude coefficients, because some coefficients in the unthresholded results wrongly represent the residual of the data. The key step in the joint curvelet-domain reconstruction method is that we use the previous results of both the conventional method and the linearised Bregman method, to stabilise the reconstruction quality while accelerating the recovery. The acceleration rate is controlled through weighting, which adjusts the contribution of the acceleration term and the stable term. A fierce acceleration can be used to recover comparatively small gaps, whereas a mild acceleration is more appropriate when the incomplete data have a large gap in high-amplitude events. Finally, we carry out a fast and stable recovery using this trade-off algorithm. Synthetic and field data tests verify that the joint curvelet-domain reconstruction method can effectively and quickly reconstruct seismic data with missing traces.
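The conventional iterative-thresholding update described above ("adding the gradient to the previous result and thresholding it") can be sketched on a toy sparse problem; the linearised Bregman variant differs only in accumulating the unthresholded iterate. The matrix, step size, and threshold below are invented for illustration, and a plain matrix stands in for the curvelet transform:

```python
def soft(x, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def ista(A, b, lam, step, n_iter=500):
    """Conventional iterative thresholding: take a gradient step on
    0.5*||Ax - b||^2, then soft-threshold. (Linearised Bregman would instead
    keep accumulating the unthresholded iterate, accelerating convergence.)"""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]   # residual
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient A^T r
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# toy sparse recovery: data generated by the sparse model x_true = [1, 0]
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.0, 1.0]
x = ista(A, b, lam=0.05, step=1.0 / 3.0)
```

The l1 penalty shrinks the active coefficient slightly below 1 and keeps the inactive one exactly at zero.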

16.
Data interpolation is an important step for seismic data analysis because many processing tasks, such as multiple attenuation and migration, are based on regularly sampled seismic data. Failed interpolations may introduce artifacts and eventually lead to inaccurate final processing results. In this paper, we generalised seismic data interpolation as a basis pursuit problem and proposed an iteration framework for recovering missing data. The method is based on non-linear iteration and sparse transform. A modified Bregman iteration is used for solving the constrained minimisation problem based on compressed sensing. The new iterative strategy guarantees fast convergence by using a fixed threshold value. We also propose a generalised velocity-dependent formulation of the seislet transform as an effective sparse transform, in which the non-hyperbolic normal moveout equation serves as a bridge between local slope patterns and moveout parameters in the common-midpoint domain. It can also be reduced to the traditional velocity-dependent seislet if a special heterogeneity parameter is selected. The generalised velocity-dependent seislet transform predicts prestack reflection data in offset coordinates, which provides a high compression of reflection events. The method was applied to synthetic and field data examples, and the results show that the generalised velocity-dependent seislet transform can reconstruct missing data with the help of the modified Bregman iteration even for non-hyperbolic reflections under complex conditions, such as vertical transverse isotropic (VTI) media or aliasing.

17.
In this paper, we discuss high-resolution coherence functions for the estimation of the stacking parameters in seismic signal processing. We focus on the Multiple Signal Classification which uses the eigendecomposition of the seismic data to measure the coherence along stacking curves. This algorithm can outperform the traditional semblance in cases of close or interfering reflections, generating a sharper velocity spectrum. Our main contribution is to propose complexity-reducing strategies for its implementation to make it a feasible alternative to semblance. First, we show how to compute the multiple signal classification spectrum based on the eigendecomposition of the temporal correlation matrix of the seismic data. This matrix has a lower order than the spatial correlation matrix used by other methods, so computing its eigendecomposition is simpler. Then we show how to compute its coherence measure in terms of the signal subspace of the seismic data. This further reduces the computational cost as we now have to compute fewer eigenvectors than those required by the noise subspace currently used in the literature. Furthermore, we show how these eigenvectors can be computed with the low-complexity power method. As a result of these simplifications, we show that the complexity of computing the multiple signal classification velocity spectrum is only about three times greater than that of semblance. Also, we propose a new normalization function to deal with the high dynamic range of the velocity spectrum. Numerical examples with synthetic and real seismic data indicate that the proposed approach provides stacking parameters with better resolution than conventional semblance, at an affordable computational cost.
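For contrast with MUSIC, the traditional semblance the paper benchmarks against can be written compactly: it is the ratio of stacked energy to total energy of the samples extracted along a trial moveout curve. This single-sample sketch (real implementations average over a time window; the synthetic gather and scan range are invented) scans velocities for one zero-offset time:

```python
import math

def semblance(gather, offsets, dt, t0, v):
    """Semblance along the hyperbola t(x) = sqrt(t0^2 + (x/v)^2):
    stacked energy divided by total energy of the extracted samples."""
    amps = []
    for trace, x in zip(gather, offsets):
        i = int(round(math.sqrt(t0 ** 2 + (x / v) ** 2) / dt))
        amps.append(trace[i] if i < len(trace) else 0.0)
    den = len(amps) * sum(a * a for a in amps)
    return (sum(amps) ** 2) / den if den > 0 else 0.0

# synthetic CMP gather: unit spikes on a hyperbola with t0 = 0.4 s, v = 2000 m/s
dt, t0, v_true, nt = 0.004, 0.4, 2000.0, 300
offsets = [0.0, 200.0, 400.0, 600.0, 800.0, 1000.0]
gather = []
for x in offsets:
    tr = [0.0] * nt
    tr[int(round(math.sqrt(t0 ** 2 + (x / v_true) ** 2) / dt))] = 1.0
    gather.append(tr)
best_v = max(range(1500, 2600, 100),
             key=lambda v: semblance(gather, offsets, dt, t0, float(v)))
```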

18.
Statics are an effective approach to correct for complex velocity variations in the near surface, but so far, to a large extent, a general and robust automatic static correction method is still lacking. In this paper, we propose a novel two-phase automatic static correction method, which is capable of handling both primary wave statics (PP statics) and converted-wave statics (S-wave statics). Our method is purely data driven, and it aims at maximizing stacking power in the target zone of the stack image. Low-frequency components of the data are analysed first using an advanced genetic algorithm to estimate seed statics and the time structure for an event of interest, and then the original full-band data are further aligned via the back-and-forth coordinate descent method using the seed statics as initial values and the time structure for event alignment guidance. We apply our new method to two field datasets, i.e., one for 2D PP static correction and the other for 3D S-wave static correction.
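The stack-power objective and the back-and-forth coordinate descent mentioned above can be illustrated with a toy alignment problem (spike traces, search ranges, and names are invented; the genetic-algorithm seeding and time-structure guidance are omitted):

```python
def stack_power(traces, statics):
    """Energy of the stacked trace after shifting each trace by its static (samples)."""
    n = len(traces[0])
    stack = [0.0] * n
    for tr, s in zip(traces, statics):
        for i in range(n):
            if 0 <= i - s < n:          # shifted trace value at sample i
                stack[i] += tr[i - s]
    return sum(v * v for v in stack)

def align_by_stack_power(traces, max_shift=5, sweeps=3):
    """Back-and-forth coordinate descent: optimise one trace's static at a time
    while holding the others fixed, sweeping over all traces repeatedly."""
    statics = [0] * len(traces)
    for _ in range(sweeps):
        for k in range(len(traces)):
            statics[k] = max(
                range(-max_shift, max_shift + 1),
                key=lambda s: stack_power(traces, statics[:k] + [s] + statics[k + 1:]))
    return statics

# three spike traces whose events sit at samples 50, 53 and 48 before correction
def spike(idx, n=100):
    tr = [0.0] * n
    tr[idx] = 1.0
    return tr

traces = [spike(50), spike(53), spike(48)]
statics = align_by_stack_power(traces)
aligned = {50 + statics[0], 53 + statics[1], 48 + statics[2]}
```

After the sweeps, all three events land on the same sample, which is exactly the configuration that maximizes the stack power.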

19.
Dip-moveout (DMO) correction is often applied to common-offset sections of seismic data using a homogeneous isotropic medium assumption, which results in a fast execution. Velocity-residual DMO is developed to correct for the medium-treatment limitation of the fast DMO. For reasonable-sized velocity perturbations, the residual DMO operator is small and can therefore be applied efficiently with a conventional Kirchhoff approach. However, the shape of the residual DMO operator is complicated and may form caustics. We use the Fourier domain for the operator development part of the residual DMO, while performing the convolution with common-offset data in the space-time domain. Since the application is based on an integral (Kirchhoff) method, this residual DMO preserves all the flexibility features of an integral DMO. An application to synthetic and real data demonstrates the effectiveness of the velocity-residual DMO in data processing and velocity analysis.

20.
In many land seismic situations, the complex seismic wave propagation effects in the near-surface area, due to its unconsolidated character, deteriorate the image quality. Although several methods have been proposed to address this problem, the negative impact of 3D complex near-surface structures is still unsolved to a large extent. This paper presents a complete 3D data-driven solution for the near-surface problem based on 3D one-way traveltime operators, which extends our previous attempts that were limited to a 2D situation. Our solution is composed of four steps: 1) seismic wave propagation from the surface to a suitable datum reflector is described by parametrized one-way propagation operators, with all the parameters estimated by a new genetic algorithm, the self-adjustable input genetic algorithm, in an automatic and purely data-driven way; 2) surface-consistent residual static corrections are estimated to accommodate the fast variations in the near-surface area; 3) a replacement velocity model based on the traveltime operators in the good data area (without the near-surface problem) is estimated; 4) data interpolation and surface layer replacement based on the estimated traveltime operators and the replacement velocity model are carried out in an interleaved manner in order to both remove the near-surface imprints in the original data and keep the valuable geological information above the datum. Our method is demonstrated on a subset of a 3D field data set from the Middle East yielding encouraging results.
