Similar Articles
20 similar articles found.
1.
The signal-to-noise (S/N) ratio of seismic reflection data can be significantly enhanced by stacking. However, stacking using the arithmetic mean (straight stacking) does not maximize the S/N ratio of the stack if there are trace-to-trace variations in the S/N ratio. In this case, the S/N ratio of the stack is maximized by weighting each trace by its signal amplitude divided by its noise power, provided the noise is stationary. We estimate these optimum weights using two criteria: the amplitude-decay rate and the measured noise amplitude for each trace. The amplitude-decay rates are measured relative to the median amplitude-decay rate as a function of midpoint and offset. The noise amplitudes are measured using the data before the first seismic arrivals or at late record times. The optimum stacking weights are estimated from these two quantities using an empirical equation. Tests with synthetic data show that, even after noisy-trace editing, the S/N ratio of the weighted stack can be more than 10 dB greater than the S/N ratio of the straight stack, but only a few decibels more than the S/N ratio of the trace-equalized stack. When the S/N ratio is close to 0 dB, a difference of 4 dB is clearly visible to the eye, but a difference of 1 dB or less is not visible. In many cases the S/N ratio of the trace-equalized stack is only a few decibels less than that of the optimum stack, so there is little to be gained from weighted stacking. However, when noisy-trace editing is omitted, the S/N ratio of the weighted stack can be more than 10 dB greater than that of the trace-equalized stack. Tests using field data show that the results from straight stacking, trace-equalized stacking, and weighted stacking are often indistinguishable, but weighted stacking can yield slight improvements on isolated portions of the data.
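As an illustration of the weighting principle above, the following is a minimal numpy sketch that weights each NMO-corrected trace by its estimated signal amplitude divided by its noise power. The paper's empirical weighting equation is not given in the abstract, so the signal and noise estimates below (median absolute amplitude; pre-first-break noise windows) are simplified assumptions:

```python
import numpy as np

def weighted_stack(traces, noise_windows):
    """Stack traces weighted by estimated signal amplitude / noise power.

    traces        : (ntrace, nsamp) NMO-corrected traces
    noise_windows : (ntrace, nwin) samples taken before the first
                    arrivals, used to estimate the noise power per trace
    """
    noise_power = np.mean(noise_windows ** 2, axis=1)     # sigma_i^2
    signal_amp = np.median(np.abs(traces), axis=1)        # crude proxy for a_i
    w = signal_amp / np.maximum(noise_power, 1e-12)       # w_i = a_i / sigma_i^2
    return (w[:, None] * traces).sum(axis=0) / w.sum()
```

Setting all weights equal reproduces the straight stack; weighting each trace by the reciprocal of its own amplitude gives the trace-equalized stack that the abstract uses for comparison.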

2.
Time reversal mirrors can be used to backpropagate and refocus incident wavefields to their actual source location, with the attendant benefits of high-resolution imaging and super-stacking. These benefits of time reversal mirrors have been previously verified with computer simulations and laboratory experiments but not with exploration-scale seismic data. We now demonstrate the high-resolution and the super-stacking properties in locating seismic sources with field seismic data that include multiple scattering. Tests on both synthetic data and field data show that a time reversal mirror has the potential to exceed the Rayleigh resolution limit by factors of 4 or more. Results also show that a time reversal mirror has a significant resilience to strong Gaussian noise and that accurate imaging of source locations from passive seismic data can be accomplished with traces having signal-to-noise ratios as low as 0.001. Synthetic tests also demonstrate that time reversal mirrors can sometimes enhance the signal by a factor proportional to the square root of the product of the number of traces N and the number of events M in the traces, i.e., √(NM). This enhancement property is denoted as super-stacking and greatly exceeds the classical signal-to-noise enhancement factor of √N. High-resolution and super-stacking are properties also enjoyed by seismic interferometry and reverse-time migration with the exact velocity model.
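The classical baseline that super-stacking is claimed to exceed, an S/N gain of √N from stacking N traces with independent noise, is easy to verify numerically; the wavelet, noise level and window positions below are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
nsamp, N = 2048, 64
t = np.arange(nsamp)
signal = np.exp(-0.5 * ((t - 1024) / 8.0) ** 2)      # unit-amplitude wavelet
traces = signal + rng.normal(scale=1.0, size=(N, nsamp))

stack = traces.mean(axis=0)
# The signal survives the mean stack unchanged, while the noise standard
# deviation drops by ~sqrt(N); the S/N ratio therefore grows by ~sqrt(N).
noise_single = traces[0, :900].std()                  # noise-only window
noise_stack = stack[:900].std()
print(noise_single / noise_stack, "~", np.sqrt(N))    # both close to 8
```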

3.
The method of common reflection surface (CRS) extends conventional stacking of seismic traces over offset to multidimensional stacking over offset-midpoint surfaces. We propose a new form of the stacking surface, derived from the analytical solution for reflection traveltime from a hyperbolic reflector. Both analytical comparisons and numerical tests show that the new approximation can be significantly more accurate than the conventional CRS approximation at large offsets or at large midpoint separations while using essentially the same parameters.

4.
Topography and severe variations in near-surface layers cause traveltime perturbations of the events in seismic exploration. Usually, these perturbations can be estimated and eliminated by refraction methods. The virtual refraction method is a relatively new technique for the retrieval of refraction information from seismic records contaminated by noise. Building on the virtual refraction, this paper proposes super-virtual refraction interferometry by cross-correlation, which retrieves refraction wavefields by summing the cross-correlations of raw and virtual refraction wavefields over all receivers located outside the retrieved source and receiver pair. The method progressively enhances the refraction signal as the source-receiver offset decreases. For further enhancement of the refracted waves, a hybrid scheme is applied that stacks correlation-type and convolution-type super-virtual refractions. The new method needs no information about the near-surface velocity model, overcomes the problem that virtual-refraction energy from a virtual source at the surface cannot be measured directly, and extends the acquisition aperture to its maximum extent in the raw seismic records. It also effectively reduces the influence of random noise and improves the signal-to-noise ratio of the refracted waves by a factor proportional to the square root of the number of receivers positioned at stationary-phase points, on top of the improvement in the virtual refraction's signal-to-noise ratio. Using results from synthetic and field data, we show that the new method is effective in retrieving refraction information from raw seismic records and improves the accuracy of first-arrival picks.
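A hedged sketch of the underlying virtual-refraction construction (the first, correlation-type stage; the convolution-type step and the hybrid stack are omitted): cross-correlate the records of a receiver pair and stack over sources, assuming the records are dominated by refracted arrivals:

```python
import numpy as np

def virtual_refraction(data, rec_a, rec_b):
    """Stack cross-correlations of a receiver pair over all sources.

    The correlation cancels the common path from each source to the
    refractor, leaving a virtual refraction arrival at the traveltime
    difference between the two receivers; stacking over nsrc sources
    raises its S/N ratio by ~sqrt(nsrc).

    data : (nsrc, nrec, nsamp) shot gathers dominated by refractions
    """
    nsrc = data.shape[0]
    acc = sum(np.correlate(data[s, rec_b], data[s, rec_a], mode="full")
              for s in range(nsrc))
    return acc / nsrc
```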

5.
Static correction is a common step in a seismic data processing flowchart for land data. Here we propose a new algorithm for automatic short-period static correction. The algorithm is based on the assumption that seismic events after short-period static correction should be locally plane nearly everywhere. No other assumptions are made; therefore, the proposed method does not require a preliminary velocity analysis. The algorithm consists of two main parts: evaluation of second spatial differences of trajectories and subsequent regularized integration of these differences. The proposed method proves its robustness and shows results comparable with conventional residual statics based on improving common-midpoint stacking. In contrast to conventional residual statics, the proposed algorithm can estimate short-period statics in complex cases where common-midpoint stacking fails because of non-hyperbolic events.
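The two parts of the algorithm, second spatial differences followed by regularized integration, can be sketched as a small least-squares problem. The measurement of the second differences themselves (which the method derives from the locally-plane-events assumption) is taken as given here:

```python
import numpy as np

def integrate_second_differences(d2, lam=0.1):
    """Recover per-trace statics s from their measured second spatial
    differences d2[i] ~ s[i] - 2 s[i+1] + s[i+2] by regularized least
    squares; the damping pins down the null space of the second-
    difference operator (a constant plus a linear trend).
    """
    n = len(d2) + 2
    D2 = np.zeros((len(d2), n))
    for i in range(len(d2)):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.vstack([D2, lam * np.eye(n)])       # damped normal system
    b = np.concatenate([d2, np.zeros(n)])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s
```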

6.
For converted waves, stacking requires a true common-reflection-point gather which, in this case, is also a common conversion point (CCP) gather. We consider converted waves of the PS- and SP-type in a stack of horizontal layers. The coordinates of the conversion points for waves of PS- or SP-type, respectively, in a single homogeneous layer are calculated as a function of the offset, the reflector depth and the velocity ratio vp/vs. Knowledge of the conversion points enables us to gather the seismic traces in a CCP record. Numerical tests show that the CCP coordinates in a multilayered medium can be approximated by the equations given for a single layer. In practical applications, an a priori estimate of vp/vs is required to obtain the CCP for a given reflector depth. A series expansion for the traveltime of converted waves as a function of the offset is presented. Numerical examples have been calculated for several truncations. For small offsets, a hyperbolic approximation can be used; for this, the rms velocity of converted waves is defined. A Dix-type formula, relating the product of the interval velocities of compressional and shear waves to the rms velocity of the converted waves, is presented.
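The single-layer conversion-point coordinates follow from Fermat's principle, which is equivalent to Snell's law at the reflector; a brute-force numerical sketch, with the familiar asymptotic approximation xc ≈ offset · γ/(1 + γ), γ = vp/vs, printed for comparison:

```python
import numpy as np

def ccp_offset(offset, depth, vp, vs, n=10001):
    """Conversion-point position for a PS wave in a single homogeneous
    layer, from Fermat's principle: the stationary point of the P-leg
    plus S-leg traveltime also satisfies Snell's law at the reflector.
    Returns the conversion-point distance from the source along the
    surface; a dense grid search is accurate enough for a sketch.
    """
    xc = np.linspace(0.0, offset, n)
    t = np.hypot(depth, xc) / vp + np.hypot(depth, offset - xc) / vs
    return xc[np.argmin(t)]

# For vp/vs = 2 the conversion point sits well beyond the midpoint:
print(ccp_offset(1000.0, 2000.0, vp=3000.0, vs=1500.0))  # ~676 m
# Deep-reflector (asymptotic) approximation: offset * gamma / (1 + gamma)
print(1000.0 * 2.0 / (1.0 + 2.0))                        # ~667 m
```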

7.
Coherent noise suppression using the radial trace transform
The radial trace (RT) transform maps seismic-gather amplitudes from the offset versus two-way-traveltime coordinate system to an apparent-velocity versus two-way-traveltime coordinate system. Through this change of coordinates, coherent noise is effectively separated from the signal in both apparent velocity and frequency. After reviewing the principle of the RT transform, we analyse two interpolation methods commonly used in the transform and their characteristics. Processing of synthetic seismic data shows that modelling and subtraction in the RT domain clearly outperforms band-pass filtering in the RT domain, both in suppressing coherent noise and in preserving the reflection signal. Finally, choosing reasonable RT filter parameters according to the character of the noise, we processed field seismic data and obtained good denoising results: the signal-to-noise ratio of the data was improved markedly, confirming the effectiveness of the method.
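A minimal numpy sketch of the forward RT transform using linear interpolation across offset (one plausible choice among the interpolation schemes the paper compares; the RT-domain modelling-and-subtraction workflow is not reproduced):

```python
import numpy as np

def radial_trace_transform(gather, offsets, dt, velocities):
    """Map a gather from (offset, two-way time) to (apparent velocity,
    two-way time): each radial trace samples the gather along the line
    offset = velocity * time, interpolating linearly across offset at
    each time slice.

    gather     : (noff, nsamp) amplitudes
    offsets    : (noff,) offsets of the input traces
    velocities : (nv,) apparent velocities of the output radial traces
    """
    noff, nsamp = gather.shape
    t = np.arange(nsamp) * dt
    rt = np.zeros((len(velocities), nsamp))
    for j, v in enumerate(velocities):
        x = v * t                                  # radial trajectory
        for i in range(nsamp):
            rt[j, i] = np.interp(x[i], offsets, gather[:, i],
                                 left=0.0, right=0.0)
    return rt
```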

8.
Common-midpoint data are now being collected with ever-increasing source-receiver offsets. For wide-aperture seismic data, classical methods of interpretation fail, since velocity analyses and signal-to-noise enhancement methods based on hyperbolic traveltime curves are no longer appropriate. Therefore, the goals of increased velocity resolution and signal enhancement, which motivate the increase in offset, are not achieved. Approximate methods, involving higher-order traveltime curves or extrapolations, have been developed for velocity analysis, but these are ineffective in the presence of refracted arrivals and lack a physical basis. These problems can be minimized by transforming the observational data to the domain of intercept (vertical delay) time τ and horizontal ray parameter p. In this domain, head-wave refractions are collapsed into points, and both near-vertical and wide-angle reflections can be analysed simultaneously to derive velocity-depth information, even in the presence of velocity gradients or low-velocity zones.
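The transform to the τ-p domain is a slant stack; a direct, non-optimized numpy sketch, assuming a regularly sampled gather:

```python
import numpy as np

def slant_stack(gather, offsets, dt, slownesses):
    """Plain tau-p transform: for each horizontal slowness p and
    intercept time tau, sum amplitudes along t = tau + p * x.

    gather     : (ntrace, nsamp) CMP or shot gather
    offsets    : (ntrace,) source-receiver offsets
    slownesses : (nslow,) trial horizontal slownesses p
    """
    ntrace, nsamp = gather.shape
    taus = np.arange(nsamp) * dt
    out = np.zeros((len(slownesses), nsamp))
    for ip, p in enumerate(slownesses):
        for k, x in enumerate(offsets):
            shift = taus + p * x                  # t = tau + p*x
            out[ip] += np.interp(shift, taus, gather[k],
                                 left=0.0, right=0.0)
    return out
```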

9.
Non-uniqueness occurs with the 1D parametrization of refraction traveltime graphs in the vertical dimension and with the 2D lateral resolution of individual layers in the horizontal dimension. The most common source of non-uniqueness is the inversion algorithm used to generate the starting model. This study applies 1D, 1.5D and 2D inversion algorithms to traveltime data for a syncline (2D) model, in order to generate starting models for wave-path eikonal traveltime tomography. The 1D tau-p algorithm produced a tomogram with an anticline rather than a syncline and an artefact with a high seismic velocity. The 2D generalized reciprocal method generated tomograms that accurately reproduced the syncline, together with narrow regions at the thalweg with seismic velocities that are less than and greater than the true seismic velocities, as well as the true values. It is concluded that 2D inversion algorithms, which explicitly identify forward and reverse traveltime data, are required to generate useful starting models in the near-surface, where irregular refractors are common. The most likely tomogram can be selected as either the simplest model or with a priori information, such as head-wave amplitudes. The determination of vertical velocity functions within individual layers is also subject to non-uniqueness. Depths computed with vertical velocity gradients, which are the default with many tomography programs, are generally 50% greater than those computed with constant velocities for the same traveltime data. The average vertical velocity provides a more accurate measure of depth estimates, where it can be derived. Non-uniqueness is a fundamental reality with the inversion of all near-surface seismic refraction data. Unless specific measures are taken to explicitly address non-uniqueness, the production of a single refraction tomogram that fits the traveltime data to sufficient accuracy does not necessarily demonstrate that the result is either 'correct' or the most probable.

10.
In this paper, we discuss high-resolution coherence functions for the estimation of stacking parameters in seismic signal processing. We focus on multiple signal classification (MUSIC), which uses the eigendecomposition of the seismic data to measure coherence along stacking curves. This algorithm can outperform traditional semblance in cases of close or interfering reflections, generating a sharper velocity spectrum. Our main contribution is to propose complexity-reducing strategies for its implementation, to make it a feasible alternative to semblance. First, we show how to compute the MUSIC spectrum based on the eigendecomposition of the temporal correlation matrix of the seismic data. This matrix has a lower order than the spatial correlation matrix used by other methods, so computing its eigendecomposition is simpler. Then we show how to compute its coherence measure in terms of the signal subspace of the seismic data. This further reduces the computational cost, as we now have to compute fewer eigenvectors than those required by the noise subspace currently used in the literature. Furthermore, we show how these eigenvectors can be computed with the low-complexity power method. As a result of these simplifications, we show that the complexity of computing the MUSIC velocity spectrum is only about three times greater than that of semblance. Also, we propose a new normalization function to deal with the high dynamic range of the velocity spectrum. Numerical examples with synthetic and real seismic data indicate that the proposed approach provides stacking parameters with better resolution than conventional semblance, at an affordable computational cost.
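A hedged sketch of the signal-subspace coherence idea, using a full eigendecomposition of the temporal correlation matrix in place of the paper's power method; the extraction of the flattened window along a trial stacking curve and the proposed normalization function are assumed to happen elsewhere:

```python
import numpy as np

def music_coherence(window, nsig=1):
    """MUSIC-style coherence for a flattened data window.

    window : (ntrace, nt) samples extracted along a trial stacking curve
    nsig   : assumed signal-subspace dimension

    Builds the temporal correlation matrix R = X^T X / ntrace (order nt,
    cheaper than the ntrace-by-ntrace spatial matrix when nt < ntrace),
    takes its leading eigenvectors as the signal subspace, and measures
    how much of the stacked (mean) trace lies inside that subspace.
    """
    X = window - window.mean(axis=1, keepdims=True)
    R = X.T @ X / X.shape[0]                 # temporal correlation, (nt, nt)
    vals, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    Es = vecs[:, -nsig:]                     # signal subspace
    s = X.mean(axis=0)
    s /= np.linalg.norm(s) + 1e-12
    proj = Es.T @ s
    return float(proj @ proj)                # 1 => fully coherent
```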

11.
Liu Guochang and Li Chao, Chinese Journal of Geophysics, 2020, 63(4): 1569-1584
The quality factor Q, which describes the attenuation of seismic waves, is very important for seismic data processing and reservoir characterization. In seismic exploration, Q is generally estimated from vertical seismic profile (VSP) data or from surface seismic data. Because prestack surface seismic data involve complex ray paths and are affected by noise and tuning/interference effects, accurately estimating Q from prestack data is relatively difficult. Starting from seismic ray propagation and the mapping between the local slope of events and the ray parameter, this paper brings the waveform spectra of multiple rays simultaneously into the spectral-ratio method for joint inversion of Q, and proposes a velocity-independent prestack Q estimation method based on multi-ray joint inversion. The method avoids the influence of velocity on Q estimation through the local-slope attribute: the local slope carries the velocity information of seismic wave propagation, and seismic reflections with the same local slope share the same ray parameter. The local slope of an event is an attribute of the seismic data domain, whereas velocity is a model-domain parameter; using a data-domain attribute in Q estimation allows it to be applied directly in the joint inversion of the data, without further conversion through velocity, which improves the accuracy of the Q estimate. In addition, the method uses predictive mapping to map non-zero-offset reflection information to zero offset, so that Q values corresponding to zero-offset traveltimes are obtained. Synthetic and field examples verify the effectiveness of the method.
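The spectral-ratio kernel underlying the method can be sketched as follows; the paper's actual contributions (multi-ray joint inversion driven by local slopes, and predictive mapping to zero offset) are not reproduced, and the frequency band is an arbitrary choice:

```python
import numpy as np

def spectral_ratio_q(w1, w2, dt, t1, t2, fband=(10.0, 60.0)):
    """Classical spectral-ratio Q estimate between two windowed arrivals.

    Attenuation model: ln|S2(f)/S1(f)| = const - pi * f * (t2 - t1) / Q,
    so Q follows from the slope of the log spectral ratio over a band.

    w1, w2 : wavelet windows at traveltimes t1 < t2 (same length)
    """
    n = len(w1)
    f = np.fft.rfftfreq(n, dt)
    S1 = np.abs(np.fft.rfft(w1)) + 1e-12
    S2 = np.abs(np.fft.rfft(w2)) + 1e-12
    band = (f >= fband[0]) & (f <= fband[1])
    slope, _ = np.polyfit(f[band], np.log(S2[band] / S1[band]), 1)
    return -np.pi * (t2 - t1) / slope
```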

12.
Time-lapse seismic analysis is utilized in CO2 geosequestration to verify the CO2 containment within a reservoir. A major risk associated with geosequestration is a possible leakage of CO2 from the storage formation into overlying formations. To mitigate this risk, the deployment of carbon capture and storage projects requires fast and reliable detection of relatively small volumes of CO2 outside the storage formation. To do this, it is necessary to predict typical seepage scenarios and improve subsurface seepage detection methods. In this work we present a technique for CO2 monitoring based on the detection of diffracted waves in time-lapse seismic data. In the case of CO2 seepage, the migrating plume might form small secondary accumulations that would produce diffracted, rather than reflected, waves. From time-lapse data analysis, we are able to separate the diffracted waves from the predominant reflections in order to image the small CO2 plumes. To explore possibilities to detect relatively small amounts of CO2, we performed synthetic time-lapse seismic modelling based on the Cooperative Research Centre for Greenhouse Gas Technologies (CO2CRC) Otway project data. The detection method is based on defining the CO2 location by measuring the coherency of the signal along diffraction offset-traveltime curves. The technique is applied to a time-lapse stacked section using a stacking velocity to construct offset-traveltime curves. Given the amount of noise found in the surface seismic data, the predicted minimum detectable amount of CO2 is 1000-2000 tonnes. This method was also applied to real data obtained from a time-lapse seismic physical model. The use of diffractions rather than reflections for monitoring small amounts of CO2 can enhance the capability of subsurface monitoring in CO2 geosequestration projects.
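The coherency measurement along diffraction offset-traveltime curves can be sketched as semblance along a zero-offset diffraction hyperbola; the parametrization below assumes a stacked section and a single stacking velocity v:

```python
import numpy as np

def diffraction_semblance(section, xs, dt, x0, t0, v, halfwin=2):
    """Semblance along the zero-offset diffraction curve
    t(x) = sqrt(t0^2 + 4 (x - x0)^2 / v^2) of a point scatterer at
    lateral position x0 and two-way time t0 (v: stacking velocity).

    section : (ntrace, nsamp) stacked time section
    xs      : (ntrace,) trace positions
    """
    t = np.sqrt(t0 ** 2 + 4.0 * (xs - x0) ** 2 / v ** 2)
    idx = np.round(t / dt).astype(int)
    ok = (idx >= halfwin) & (idx < section.shape[1] - halfwin)
    rows = np.where(ok)[0]
    win = np.stack([section[rows, idx[ok] + k]
                    for k in range(-halfwin, halfwin + 1)], axis=1)
    num = np.sum(win.sum(axis=0) ** 2)
    den = win.shape[0] * np.sum(win ** 2) + 1e-12
    return num / den   # close to 1 where a diffractor is present
```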

13.
The main objective of this work is to establish the applicability of shallow surface-seismic traveltime tomography in basalt-covered areas. A densely sampled, ~1300-m-long surface seismic profile, acquired as part of the SeiFaBa project in 2003 (Japsen et al. 2006) at Glyvursnes in the Faroe Islands, served as the basis to evaluate the performance of the tomographic method in basalt-covered areas. The profile is centred at a ~700-m-deep well. VP, VS and density logs, a zero-offset VSP, downhole-geophone recordings and geological mapping in the area provided good means of control. The inversion was performed with facilities of the Wide Angle Reflection/Refraction Profiling program package (Ditmar et al. 1999). We tested many inversion sequences while varying the inversion parameters. Modelled traveltimes were verified by full-waveform modelling. Typically, an inversion sequence consists of several iterations that proceed until a satisfactory solution is reached. However, in the present case, with high velocity contrasts in the subsurface, we obtained the best result with two iterations: first obtaining a smooth starting model with small traveltime residuals by inverting with a high smoothing constraint, and then inverting with the lowest possible smoothing constraint to allow the inversion to have the full benefit of the traveltime residuals. The tomogram gives usable velocity information for the near-surface geology in the area but fails to reproduce the expected velocity distribution of the layered basalt flows. Based on the analysis of the tomogram and geological mapping in the area, a model was defined that correctly models first arrivals from both the surface seismic data and the downhole-geophone data.

14.
A new adaptive multi-criteria method is proposed for accurate estimation of first breaks in three-component three-dimensional vertical seismic profiling data. Initially, we manually pick first breaks for the first gather of the three-dimensional borehole set and adjust several coefficients to approximate the first-break wave-shape parameters. We then predict the first breaks for the next source point using the previous one, assuming the same average velocity. Next, we calculate an objective function over a moving trace window and minimize it with respect to time shift and slope. This function combines four main properties that characterize first breaks on three-component borehole data: linear polarization, signal/noise ratio, similarity in wave shapes for close shots, and their stability in the time interval after the first break. We then adjust the coefficients by combining current and previous values. This approach uses adaptive parameters to follow smooth wave-shape changes. Finally, we average the first breaks after they are determined in the overlapping windows. The method utilizes all three components to calculate the objective function for the direct compressional-wave projection. The adaptive multi-criteria optimization over multiple three-component traces makes the method very robust, even for data contaminated with strong noise. An example using actual data demonstrates the stability of this method.

15.
Three-dimensional receiver ghost attenuation (deghosting) of dual-sensor towed-streamer data is straightforward, in principle. In its simplest form, it requires applying a three-dimensional frequency-wavenumber filter to the vertical component of the particle-motion data, to correct for the amplitude reduction on the vertical component of non-normal-incidence plane waves, before combining with the pressure data. More elaborate techniques apply three-dimensional filters to both components before summation, for example, for ghost-wavelet dephasing and mitigation of noise of different strengths on the individual components in optimum deghosting. The problem with all these techniques is, of course, that it is usually impossible to transform the data into the crossline-wavenumber domain because of aliasing. Hence, usually, a two-dimensional version of deghosting is applied to the data in the frequency-inline-wavenumber domain. We investigate going down the "dimensionality ladder" one more step, to a one-dimensional weighted summation of the records of the collocated sensors, to create an approximate deghosting procedure. We specifically consider amplitude-balancing weights computed via a standard automatic gain control before summation, reminiscent of a diversity stack of the dual-sensor recordings. This technique is independent of the actual streamer depth and insensitive to variations in the sea-surface reflection coefficient. The automatic gain control weights serve two purposes: (i) to approximately correct for the geometric amplitude loss of the Z data and (ii) to mitigate noise-strength variations on the two components. Here, Z denotes the vertical component of the velocity of particle motion scaled by the seismic impedance of the near-sensor water volume. The weights are time-varying and can also be made frequency-band dependent, adapting better to frequency variations of the noise. The investigated process is a very robust, almost fully hands-off, approximate three-dimensional deghosting step for dual-sensor data, requiring no spatial filtering and no explicit estimates of noise power. We argue that this technique performs well in terms of ghost attenuation (albeit not exact ghost removal) and balancing the signal-to-noise ratio in the output data. For instances where full three-dimensional receiver deghosting is the final product, the proposed technique is appropriate for efficient quality control of the data acquired and for aiding the parameterisation of the subsequent deghosting processing.
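A conceptual sketch of the AGC-weighted summation; the window length and stabilization constant are arbitrary, and Z is assumed to be already scaled by the near-sensor impedance as described above:

```python
import numpy as np

def agc_weights(trace, dt, window=0.5):
    """Time-varying amplitude-balancing weights: the reciprocal of the
    RMS amplitude in a sliding window (a standard automatic gain
    control)."""
    n = max(1, int(window / dt))
    power = np.convolve(trace ** 2, np.ones(n) / n, mode="same")
    return 1.0 / np.sqrt(power + 1e-12)

def approx_deghost(p, z, dt):
    """One-dimensional approximate receiver deghosting of dual-sensor
    data: AGC-balance the pressure record P and the scaled particle-
    velocity record Z individually, then sum. Up-going energy adds
    constructively, while the receiver ghost (opposite polarity on P
    and Z) is partly cancelled."""
    wp, wz = agc_weights(p, dt), agc_weights(z, dt)
    return 0.5 * (wp * p + wz * z)
```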

16.
Three-dimensional seismic survey design should provide an acquisition geometry that enables imaging and amplitude-versus-offset applications of target reflectors with sufficient data quality under given economical and operational constraints. However, in land or shallow-water environments, surface waves are often dominant in the seismic data. The effectiveness of surface-wave separation or attenuation significantly affects the quality of the final result. Therefore, the need for surface-wave attenuation imposes additional constraints on the acquisition geometry. Recently, we have proposed a method for surface-wave attenuation that can better deal with aliased seismic data than classic methods such as slowness/velocity-based filtering. Here, we investigate how surface-wave attenuation affects the selection of survey parameters and the resulting data quality. To quantify the latter, we introduce a measure that represents the estimated signal-to-noise ratio between the desired subsurface signal and the surface waves that are deemed to be noise. In a case study, we applied surface-wave attenuation and signal-to-noise ratio estimation to several data sets with different survey parameters. The spatial sampling intervals of the basic subset are the survey parameters that affect the performance of surface-wave attenuation methods the most. Finer spatial sampling will reduce aliasing and make surface-wave attenuation easier, resulting in better data quality until no further improvement is obtained. We observed this behaviour as a main trend that levels off at increasingly denser sampling. With our method, this trend curve lies at a considerably higher signal-to-noise ratio than with a classic filtering method. This means that we can obtain a much better data quality for given survey effort or the same data quality as with a conventional method at a lower cost.

17.
A new type of seismic imaging, based on Feynman path integrals for waveform modelling, is capable of producing accurate subsurface images without any need for a reference velocity model. Instead of the usual optimization for traveltime curves with maximal signal semblance, a weighted summation over all representative curves avoids the need for velocity analysis, with its common difficulties of subjective and time-consuming manual picking. The summation over all curves includes the stationary one that plays a preferential role in classical imaging schemes, but also multiple stationary curves when they exist. Moreover, the weighted summation over all curves also accounts for non-uniqueness and uncertainty in the stacking/migration velocities. Path-integral imaging can be applied to stacking to zero offset and to time and depth migration. In all these cases, a properly defined weighting function plays a vital role: to emphasize contributions from traveltime curves close to the optimal one and to suppress contributions from unrealistic curves. The path-integral method is an authentic macromodel-independent technique in the sense that there is strictly no parameter optimization or estimation involved. Development is still in its initial stage, and several conceptual and implementation issues are yet to be solved. However, application to synthetic and real data examples shows that it has the potential to become a fully automatic imaging technique.
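A sketch of the path-integral idea for a single zero-offset output sample: sum over all trial hyperbolic trajectories with a coherence-driven weight instead of picking one stationary trajectory. The exponential-of-semblance weight below is one plausible choice; the paper defines its own weighting function:

```python
import numpy as np

def path_integral_sample(gather, offsets, dt, t0, velocities, beta=20.0):
    """Path-integral style zero-offset output sample at time t0: sum
    NMO-corrected amplitudes over ALL trial velocities, weighted by
    exp(beta * semblance), instead of picking one best velocity.
    beta -> infinity recovers the classical best-velocity stack.
    """
    nsamp = gather.shape[1]
    taus = np.arange(nsamp) * dt
    num = den = 0.0
    for v in velocities:
        t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)   # hyperbolic trajectory
        amps = np.array([np.interp(ti, taus, tr)
                         for ti, tr in zip(t, gather)])
        s = amps.sum() ** 2 / (len(amps) * np.sum(amps ** 2) + 1e-12)
        w = np.exp(beta * s)                        # weighting function
        num += w * amps.mean()
        den += w
    return num / den
```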

18.
Interpretation techniques are presented that aim at the estimation of seismic velocities. The application of localized slant stacks, weighted by coherency, produces a decomposition of multichannel seismic data into single-trace instantaneous-slowness p(x, t) components. Colour displays support the interpretation of seismic data relevant to the near-surface velocity structure. Since p(x, t) is directly related to stacking velocities and to the depth of reflection, or bottoming, points in the subsurface, this data transformation provides a powerful tool for the inversion of reflection and refraction data.

19.
Xu Yankai, Cao Siyuan and Pan Xiao, Studia Geophysica et Geodaetica, 2019, 63(4): 554-568

Singular value decomposition (SVD) is a useful method for random-noise suppression in seismic data processing. A structure-oriented SVD (SOSVD) approach, which incorporates structure prediction into the SVD filter, is efficient in attenuating noise but distorts seismic events at faults and crossing points. A modified SOSVD approach using a weighted stack, called structure-oriented weighted SVD (SOWSVD), is proposed. In this approach, the SVD filter is used to attenuate noise on the prediction traces of a primitive trace, which are produced via plane-wave prediction. A weighting function, related to the local similarity and the distance between each prediction trace and the primitive trace, is applied when stacking the denoised prediction traces. Both synthetic and field data examples suggest that SOWSVD performs better than SOSVD in both suppressing random noise and preserving the information of discontinuities for seismic data with crossing events and faults.
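A hedged numpy sketch of the weighted stack of SVD-filtered prediction traces; the plane-wave prediction step is assumed done, and the Gaussian distance weight and correlation-based similarity below are illustrative stand-ins for the paper's weighting function:

```python
import numpy as np

def svd_rank1_filter(panel):
    """Keep only the largest singular component of a trace panel: the
    SVD filter used to suppress random noise on laterally coherent
    (flattened) events."""
    U, s, Vt = np.linalg.svd(panel, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0])

def weighted_svd_denoise(pred, primitive, sigma_d=2.0):
    """Weighted stack of SVD-filtered prediction traces.

    pred      : (npred, nsamp) traces predicted to the primitive trace's
                position (assumed given; the paper builds them with
                plane-wave prediction)
    primitive : (nsamp,) the trace being denoised
    Weights combine local similarity (correlation with the primitive
    trace) and distance from it, so conflicting dips at faults and
    crossing events are down-weighted instead of smeared.
    """
    clean = svd_rank1_filter(pred)
    npred = clean.shape[0]
    dist = np.abs(np.arange(npred) - npred // 2)
    w_dist = np.exp(-0.5 * (dist / sigma_d) ** 2)
    pnorm = primitive / (np.linalg.norm(primitive) + 1e-12)
    sim = np.array([max(0.0, (c / (np.linalg.norm(c) + 1e-12)) @ pnorm)
                    for c in clean])
    w = w_dist * sim
    return (w[:, None] * clean).sum(axis=0) / (w.sum() + 1e-12)
```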


20.
Cost reduction in seismic reconnaissance is an issue in geothermal exploration and can principally be achieved by sparse acquisition. To address the attendant decrease in signal/noise ratio, the common-reflection-surface method has been proposed. We reduced the data density of an existing 3D dataset and evaluated the results of common-reflection-surface processing using seismic attributes. The application of the common-reflection-surface method leads in all cases to an improvement of the signal/noise ratio. The most distinct improvement can be seen in the low-fold regions. The improvement depends strongly on the midpoint aperture, and there is a tradeoff between reflector continuity and horizontal resolution. If small-scale targets are to be imaged, a small aperture size is necessary, which may be far below the Fresnel zone for a specific reflector. The substantial reduction of the data density leads, in our case, to an irrecoverable loss of information.
