Similar Articles
20 similar articles found
1.
Complex near-surface conditions degrade the quality of seismic data, and their influence is conventionally removed with surface-consistent time-shift static corrections. Statics, however, are inseparable from velocity, and determining the velocity of a complex near surface is very difficult. Treating the complex near-surface problem with CFP technology avoids any direct manipulation of velocity, so that the static correction and the velocity determination become independent of each other. First, the propagation operators of the wavefield are estimated from the prestack data; the operators are then updated in the DTS panel according to the principle of equal traveltime; finally, the updated operators are used to redatum the data and to perform one-way time imaging of the near surface. Obtaining correct operator amplitudes is also key to redatuming.

2.
We describe an integrated method for solving the complex near‐surface problem in land seismic imaging. This solution is based on an imaging approach and is obtained without deriving a complex near‐surface velocity model. We start by obtaining from the data the kinematics of the one‐way focusing operators (i.e. time‐reversed Green's functions) that describe propagation between the acquisition surface and a chosen datum reflector using the common‐focus‐point technology. The conventional statics solutions obtained from prior information about the near surface are integrated in the initial estimates of the focusing operators. The focusing operators are updated iteratively until the imaging principle of equal traveltime is fulfilled for each subsurface gridpoint of the datum reflector. Therefore, the seismic data are left intact without any application of time shifts, which makes this method an uncommitted statics solution. The focusing operators can be used directly for wave‐equation redatuming to the respective reflector or for prestack imaging if determined for multiple reflecting boundaries. The underlying velocity model is determined by tomographic inversion of the focusing operators while also integrating any hard prior information (e.g. well information). This velocity model can be used to perform prestack depth imaging or to calculate the depth of the new datum level. We demonstrate this approach on 2D seismic data acquired in Saudi Arabia in an area characterized by rugged topography and complex near‐surface geology.

3.
In many land seismic situations, the complex seismic wave propagation effects in the near‐surface area, due to its unconsolidated character, deteriorate the image quality. Although several methods have been proposed to address this problem, the negative impact of 3D complex near‐surface structures is still unsolved to a large extent. This paper presents a complete 3D data‐driven solution for the near‐surface problem based on 3D one‐way traveltime operators, which extends our previous attempts that were limited to a 2D situation. Our solution is composed of four steps: 1) seismic wave propagation from the surface to a suitable datum reflector is described by parametrized one‐way propagation operators, with all the parameters estimated by a new genetic algorithm, the self‐adjustable input genetic algorithm, in an automatic and purely data‐driven way; 2) surface‐consistent residual static corrections are estimated to accommodate the fast variations in the near‐surface area; 3) a replacement velocity model based on the traveltime operators in the good data area (without the near‐surface problem) is estimated; 4) data interpolation and surface layer replacement based on the estimated traveltime operators and the replacement velocity model are carried out in an interweaved manner in order to both remove the near‐surface imprints in the original data and keep the valuable geological information above the datum. Our method is demonstrated on a subset of a 3D field data set from the Middle East yielding encouraging results.

4.
Prestack time migration combined with wavefield-based datuming   Cited by: 1 (self-citations: 1, others: 0)
Dong Chunhui, Zhang Jianfeng. Chinese Journal of Geophysics, 2010, 53(10): 2435-2441
We propose a hybrid datuming method that combines a virtual interface, the Rayleigh integral, and the phase-shift method. Combined with prestack time migration, it yields a new prestack time migration scheme and workflow for data acquired on rugged topography. The method correctly accounts for the actual propagation paths of waves in the near surface, overcoming the errors of static corrections where high-velocity layers outcrop; it also determines the virtual-layer velocity by itself, avoiding the accurate near-surface velocity model required by existing wavefield-extrapolation-based datuming methods. Synthetic data from two types of models, one with a pronounced near-surface low-velocity layer and one with an outcropping near-surface high-velocity layer, validate the effectiveness of the proposed method and workflow.
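As a minimal sketch of the phase-shift ingredient of such wavefield-based datuming (this is not the paper's implementation; the geometry, velocity, and grid sizes below are illustrative only), each (frequency, wavenumber) component of a recorded panel is extrapolated through a constant-velocity layer by multiplication with exp(-i*kz*dz), where kz = sqrt((w/c)^2 - kx^2); evanescent components are zeroed:

```python
import numpy as np

def phase_shift(panel_tx, dt, dx, c, dz):
    """Extrapolate a (time, x) panel downward through a layer of
    thickness dz and constant velocity c via the phase-shift method."""
    nt, nx = panel_tx.shape
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)      # angular frequencies
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)     # horizontal wavenumbers
    P = np.fft.fft2(panel_tx)                   # (t, x) -> (w, kx)
    kz2 = (w[:, None] / c) ** 2 - kx[None, :] ** 2
    prop = kz2 > 0                              # keep propagating region only
    kz = np.sqrt(np.where(prop, kz2, 0.0))
    shift = np.where(prop, np.exp(-1j * np.sign(w[:, None]) * kz * dz), 0.0)
    return np.real(np.fft.ifft2(P * shift))

# A vertically travelling (flat) event is simply delayed by dz/c:
nt, nx, dt, dx = 256, 64, 0.004, 10.0
panel = np.zeros((nt, nx))
panel[50, :] = 1.0                              # flat event at t = 0.2 s
out = phase_shift(panel, dt, dx, 2000.0, 400.0) # dz/c = 0.2 s = 50 samples
```

For this flat event only the kx = 0 components are non-zero, so the extrapolation reduces to a pure time delay of dz/c, which is a convenient sanity check for the sign convention of the phase shift.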

5.
In this case study we consider the seismic processing of a challenging land data set from the Arabian Peninsula. It suffers from rough top‐surface topography, a strongly varying weathering layer, and complex near‐surface geology. We aim at establishing a new seismic imaging workflow, well‐suited to these specific problems of land data processing. This workflow is based on the common‐reflection‐surface stack for topography, a generalized high‐density velocity analysis and stacking process. It is applied in a non‐interactive manner and provides an entire set of physically interpretable stacking parameters that include and complement the conventional stacking velocity. The implementation introduced combines two different approaches to topography handling to minimize the computational effort: after initial values of the stacking parameters are determined for a smoothly curved floating datum using conventional elevation statics, the final stack and also the related residual static correction are applied to the original prestack data, considering the true source and receiver elevations without the assumption of nearly vertical rays. Finally, we extrapolate all results to a chosen planar reference level using the stacking parameters. This redatuming procedure removes the influence of the rough measurement surface and provides standardized input for interpretation, tomographic velocity model determination, and post‐stack depth migration. The methodology of the residual static correction employed and the details of its application to this data example are discussed in a separate paper in this issue. In view of the complex near‐surface conditions, the imaging workflow that is conducted, i.e. stack – residual static correction – redatuming – tomographic inversion – prestack and post‐stack depth migration, leads to a significant improvement in resolution, signal‐to‐noise ratio and reflector continuity.

6.
The common focal point (CFP) method and the common reflection surface (CRS) stack method are compared. The CRS method is a fast, highly automated procedure that provides high S/N ratio simulation of zero‐offset (ZO) images by combining, per image point, the reflection energy of an arc segment that is tangential to the reflector. It uses smooth parametrized two‐way stacking operators, based on a data‐driven triplet of attributes in 2D (eight parameters in 3D). As a spin‐off, the attributes can be used for several applications, such as the determination of the geometrical spreading factor, multiple prediction, and tomographic inversion into a smooth background velocity model. The CFP method aims at decomposing two‐way seismic reflection data into two full‐aperture one‐way propagation operators. By applying an iterative updating procedure in a half‐migrated domain, it provides non‐smooth focusing operators for prestack imaging using only the energy from one focal point at the reflector. The data‐driven operators inhibit all propagation effects of the overburden. The CFP method provides several spin‐offs, amongst which is the CFP matrix related to one focal point, which displays the reflection amplitudes as measured at the surface for each source–receiver pair. The CFP matrix can be used to determine the specular reflection source–receiver pairs and the Fresnel zone at the surface for reflection in one single focal point. Other spin‐offs are the prediction of internal multiples, the determination of reflectivity effects, velocity‐independent redatuming and tomographic inversion to obtain a velocity–depth model. The CFP method is less fast and less automated than the CRS method. From a pointwise comparison of features it is concluded that one method is not a subset of the other, but that both methods can be regarded as being to some extent complementary.

7.
A redatuming operation is used to simulate the acquisition of data at new levels, avoiding distortions produced by near-surface irregularities related to either geometric or material property heterogeneities. In this work, the application of the true-amplitude Kirchhoff redatuming (TAKR) operator on homogeneous media is compared with the conventional Kirchhoff redatuming (KR) operator restricted to the zero-offset case. The TAKR and KR operators are analytically and numerically compared in order to verify their impacts on the data at a new level. Analyses of the amplitude and velocity sensitivity of the TAKR and KR were performed: one concerning the difference between the weight functions and the other related to velocity variation. The comparisons between the operators were performed using numerical examples. The feasibility of the KR and TAKR operators was demonstrated not only kinematically but also dynamically for their respective purposes: one preserves amplitude (KR), and the other corrects the amplitude (TAKR). Finally, we applied the operators to a GPR data set.

8.
Interferometric redatuming is a data‐driven method to transform seismic responses with sources at one level and receivers at a deeper level into virtual reflection data with both sources and receivers at the deeper level. Although this method has traditionally been applied by cross‐correlation, accurate redatuming through a heterogeneous overburden requires solving a multidimensional deconvolution problem. Input data can be obtained either by direct observation (for instance in a horizontal borehole), by modelling or by a novel iterative scheme that is currently being developed. The output of interferometric redatuming can be used for imaging below the redatuming level, resulting in a so‐called interferometric image. Internal multiples from above the redatuming level are eliminated during this process. In the past, we introduced point‐spread functions for interferometric redatuming by cross‐correlation. These point‐spread functions quantify distortions in the redatumed data, caused by internal multiple reflections in the overburden. In this paper, we define point‐spread functions for interferometric imaging to quantify these distortions in the image domain. These point‐spread functions are similar to conventional resolution functions for seismic migration but they contain additional information on the internal multiples in the overburden and they are partly data‐driven. We show how these point‐spread functions can be visualized to diagnose image defocusing and artefacts. Finally, we illustrate how point‐spread functions can also be defined for interferometric imaging with passive noise sources in the subsurface or with simultaneous‐source acquisition at the surface.
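The cross-correlation step that traditional interferometric redatuming builds on can be illustrated with a toy 1D example (hypothetical geometry and constant velocity, direct arrivals only; real applications stack such correlations over many sources and must handle the overburden multiples the abstract discusses):

```python
import numpy as np

# Two buried receivers record the direct wave from a surface source.
# Correlating the deeper recording with the shallower one and picking
# the lag of the peak retrieves the traveltime between the receivers,
# i.e. the shallower receiver acts as a "virtual source".
dt, nt, c = 0.004, 512, 2000.0
zA, zB = 400.0, 800.0                 # receiver depths (m), illustrative

def trace(depth):
    """Spike recording of the direct arrival from a surface source."""
    s = np.zeros(nt)
    s[int(round(depth / c / dt))] = 1.0
    return s

corr = np.correlate(trace(zB), trace(zA), mode="full")
lag = (np.argmax(corr) - (nt - 1)) * dt   # lag of the correlation peak
print(lag)                                 # (zB - zA) / c = 0.2 s
```

With only direct waves the retrieved lag equals the inter-receiver traveltime exactly; the multidimensional-deconvolution formulation advocated in the abstract is what removes the imprint of a heterogeneous overburden that simple correlation leaves behind.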

9.
Wave‐equation redatuming can be a very efficient method of overcoming the overburden imprint on the target area. Owing to the growing amount of 3D data, it is increasingly important to develop a feasible method for the redatuming of 3D prestack data. Common 3D acquisition designs produce relatively sparse data sets, which cannot be redatumed successfully by applying conventional wave‐equation redatuming. We propose a redatuming approach that can be used to perform wave‐equation redatuming of sparse 3D data. In this new approach, additional information about the medium velocity below the new datum is included, i.e. redatumed root‐mean‐square (RMS) velocities, which can be extracted from the input data set by conventional velocity analysis, are used. Inclusion of this additional information has the following implications: (i) it becomes possible to simplify the 4D redatuming integral into a 2D integral such that the number of traces needed to calculate one output time sample and the computational effort are both reduced; (ii) the information about the subsurface enables an infill of traces which are needed for the integral calculation but which are missing in the sparse input data set. Two tests applying this new approach to fully sampled 2D data show satisfactory results, implying that this method can certainly be used for the redatuming of sparse 3D data sets.

10.
In order to make 3D prestack depth migration feasible on modern computers it is necessary to use a target-oriented migration scheme. By limiting the output of the migration to a specific depth interval (target zone), the efficiency of the scheme is improved considerably. The first step in such a target-oriented approach is redatuming of the shot records at the surface to the upper boundary of the target zone. For this purpose, efficient non-recursive wavefield extrapolation operators should be generated; we propose to generate them with either a ray-tracing method or the Gaussian beam method. With both methods, operators can be efficiently generated for any irregular shooting geometry at the surface. As expected, the amplitude behaviour of the Gaussian beam method is better than that of the ray-tracing based operators. The redatuming algorithm is performed per shot record, which makes the data handling very efficient. From the shot records at the surface, ‘genuine zero-offset data’ are generated at the upper boundary of the target zone. Particularly in situations with a complicated overburden, the quality of target-oriented zero-offset data is much better than can be reached with a CMP stacking method at the surface. The target-oriented zero-offset data can be used as input to a full 3D zero-offset depth migration scheme, in order to obtain a depth section of the target zone.

11.
Recently, new on‐shore acquisition designs have been presented with multi‐component sensors deployed in the shallow sub‐surface (20 m–60 m). Virtual source redatuming has been proposed for these data to compensate for surface statics and to enhance survey repeatability. In this paper, we investigate the feasibility of replacing the correlation‐based formalism that undergirds virtual source redatuming with multi‐dimensional deconvolution, offering various advantages such as the elimination of free‐surface multiples and the potential to improve virtual source repeatability. To allow for data‐driven calibration of the sensors and to improve robustness in cases with poor sensor spacing in the shallow sub‐surface (resulting in a relatively high wavenumber content), we propose a new workflow for this configuration. We assume a dense source sampling and target signals that arrive at near‐vertical propagation angles. First, the data are preconditioned by applying synthetic‐aperture‐source filters in the common receiver domain. Virtual source redatuming is carried out for the multi‐component recordings individually, followed by an intermediate deconvolution step. After this specific pre‐processing, we show that the downgoing and upgoing constituents of the wavefields can be separated without knowledge of the medium parameters, the source wavelet, or sensor characteristics. As a final step, free‐surface multiples can be eliminated by multi‐dimensional deconvolution of the upgoing fields with the downgoing fields.

12.
We present preserved‐amplitude downward continuation migration formulas in the aperture angle domain. Our approach is based on shot‐receiver wavefield continuation. Since source and receiver points are close to the image point, a local homogeneous reference velocity can be approximated after redatuming. We analyse this approach in the framework of linearized inversion of Kirchhoff and Born approximations. From our analysis, preserved‐amplitude Kirchhoff and Born inverse formulas can be derived for the 2D case. They involve slant stacks of filtered subsurface offset domain common image gathers followed by the application of the appropriate weighting factors. For the numerical implementation of these formulas, we develop an algorithm based on the true amplitude version of the one‐way paraxial approximation. Finally, we demonstrate the relevance of our approach with a set of applications on synthetic datasets and compare our results with those obtained on the Marmousi model by multi‐arrival ray‐based preserved‐amplitude migration. While results are similar, we observe that our results are less affected by artefacts.

13.
In many cases, seismic measurements are coarsely sampled in at least one dimension. This leads to aliasing artefacts and therefore to problems in the subsequent processing steps. To avoid this, seismic data reconstruction can be applied in advance. The success and reliability of reconstruction methods are dependent on the assumptions they make on the data. In many cases, wavefields are assumed to (locally) have a linear space–time behaviour. However, field data are usually complex, with strongly curved events. Therefore, in this paper, we propose the double focal transformation as an efficient way for complex data reconstruction. Hereby, wavefield propagation is formulated as a transformation, where one‐way propagation operators are used as its basis functions. These wavefield operators can be based on a macro velocity model, which allows our method to use prior information in order to make the data decomposition more effective. The basic principle of the double focal transformation is to focus seismic energy along source and receiver coordinates simultaneously. The seismic data are represented by a number of localized events in the focal domain, whereas aliasing noise spreads out. By imposing a sparse solution in the focal domain, aliasing noise is suppressed, and data reconstruction beyond aliasing is achieved. To facilitate the process, only a few effective depth levels need to be included, preferably along the major boundaries in the data, from which the propagation operators can be calculated. Results on 2D and 3D synthetic data illustrate the method's virtues. Furthermore, seismic data reconstruction on a 2D field dataset with gaps and aliased source spacing demonstrates the strength of the double focal transformation, particularly for near‐offset reflections with strong curvature and for diffractions.
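The underlying principle, representing the data by a few localized events in a transform domain and imposing sparsity there so that the noise spread by missing traces is suppressed, can be sketched with a generic sparse-recovery example. This is emphatically not the authors' double focal transformation: the focal operators are stood in for by a random dictionary `F`, and all sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nm = 64, 128
F = rng.standard_normal((nt, nm)) / np.sqrt(nt)   # stand-in transform basis
m_true = np.zeros(nm)
m_true[[7, 40, 99]] = [1.0, -0.8, 0.5]            # few localized "focal" events
d_full = F @ m_true                               # fully sampled data
mask = rng.random(nt) < 0.6                       # ~40% of traces missing

# ISTA: soft-thresholded gradient descent on masked misfit + L1 penalty,
# i.e. minimize ||mask * (F m - d)||^2 + lam * ||m||_1
lam, step = 0.02, 0.1
m = np.zeros(nm)
for _ in range(500):
    resid = mask * (F @ m - d_full)               # misfit on observed traces only
    m = m - step * (F.T @ resid)
    m = np.sign(m) * np.maximum(np.abs(m) - step * lam, 0.0)

d_rec = F @ m                                     # reconstruction, gaps filled in
```

Because the true model is sparse in the transform domain while the sampling artefacts are not, the thresholding keeps the localized events and discards the spread-out noise, which is the same mechanism the focal-domain sparsity constraint exploits.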

14.
We apply a redatuming methodology, designed to handle rugged topography and the presence of high‐velocity layers near the acquisition surface, to a 2D land seismic data set acquired in Saudi Arabia. This methodology is based on a recently developed prestack operator, which we call the topographic datuming operator (TDO). The TDO, unlike static corrections, allows for the movement of reflections laterally with respect to their true locations, corresponding to the new datum level. Thus, it mitigates mispositioning of events and velocity bias introduced by the assumption of surface consistency and the time‐invariant time shifts brought about by static corrections. Using the shallow velocities estimated from refracted events, the TDO provides a superior continuity of reflections and better focusing than that obtained from conventional static corrections in most parts of the processed 2D line. The computational cost of applying the TDO is only slightly higher than static corrections. The marginal additional computational cost and the possibility of estimating, after TDO redatuming, stacking velocities that are not affected by a spurious positive bias, as in the case of static corrections, are further advantages of the proposed methodology. The likelihood of strong heterogeneities in the most complex part of the line limits the applicability of any approach based upon geometrical optics; however, the TDO produces results that are slightly better than those obtained from static corrections because of its ability to partially collapse diffractions generated in the near surface.

15.
The performance of refraction inversion methods that employ the principle of refraction migration, whereby traveltimes are laterally migrated by the offset distance (which is the horizontal separation between the point of refraction and the point of detection on the surface), can be adversely affected by very near‐surface inhomogeneities. Even inhomogeneities at single receivers can limit the lateral resolution of detailed seismic velocities in the refractor. The generalized reciprocal method ‘statics’ smoothing method (GRM SSM) is a smoothing rather than a deterministic method for correcting very near‐surface inhomogeneities of limited lateral extent. It is based on the observation that there are only relatively minor differences in the time‐depths to the target refractor computed for a range of XY distances, which is the separation between the reverse and forward traveltimes used to compute the time‐depth. However, any traveltime anomalies, which originate in the near‐surface, migrate laterally with increasing XY distance. Therefore, an average of the time‐depths over a range of XY values preserves the architecture of the refractor, but significantly minimizes the traveltime anomalies originating in the near‐surface. The GRM statics smoothing corrections are obtained by subtracting the average time‐depth values from those computed with a zero XY value. In turn, the corrections are subtracted from the traveltimes, and the GRM algorithms are then re‐applied to the corrected data. Although a single application is generally adequate for most sets of field data, model studies have indicated that several applications of the GRM SSM can be required with severe topographic features, such as escarpments. In addition, very near‐surface inhomogeneities produce anomalous head‐wave amplitudes. An analogous process, using geometric means, can largely correct amplitude anomalies. 
Furthermore, the coincidence of traveltime and amplitude anomalies indicates that variations in the near‐surface geology, rather than variations in the coupling of the receivers, are a more likely source of the anomalies. The application of the GRM SSM, together with the averaging of the refractor velocity analysis function over a range of XY values, significantly minimizes the generation of artefacts, and facilitates the computation of detailed seismic velocities in the refractor at each receiver. These detailed seismic velocities, together with the GRM SSM‐corrected amplitude products, can facilitate the computation of the ratio of the density in the bedrock to that in the weathered layer. The accuracy of the computed density ratio improves where lateral variations in the seismic velocities in the weathered layer are known.
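The GRM SSM arithmetic described above, averaging time-depths over a range of XY values and subtracting that average from the zero-XY time-depths, can be sketched with a toy numerical example (all numbers are hypothetical, and the lateral migration of the anomaly is simplified to a one-sided shift per XY value):

```python
import numpy as np

n_stations = 21
base = np.full(n_stations, 20.0)        # ms: flat refractor time-depths

def time_depths(anomaly_shift):
    """Time-depth profile for one XY value; a 5 ms very near-surface
    anomaly at station 10 appears laterally migrated by the shift."""
    td = base.copy()
    pos = 10 + anomaly_shift
    if 0 <= pos < n_stations:
        td[pos] += 5.0
    return td

shifts = [0, 1, 2, 3, 4]                # anomaly migration per XY value
avg_td = np.mean([time_depths(s) for s in shifts], axis=0)
zero_xy_td = time_depths(0)

# GRM SSM correction: zero-XY time-depths minus the XY-averaged ones.
# The average preserves the refractor architecture (20 ms everywhere)
# while smearing the anomaly, so the difference isolates the statics,
# which are then subtracted from the traveltimes before re-applying
# the GRM algorithms.
correction = zero_xy_td - avg_td
```

In this toy case the correction recovers most of the 5 ms anomaly at station 10 and leaves only small residuals at the stations the smeared anomaly touched, illustrating why a single application is usually adequate but severe cases may need several passes.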

16.
Elastic redatuming can be carried out before or after decomposition of the multicomponent data into independent PP, PS, SP, and SS responses. We argue that from a practical point of view, elastic redatuming is preferably applied after decomposition. We review forward and inverse extrapolation of decomposed P- and S-wavefields. We use the forward extrapolation operators to derive a model of discrete multicomponent seismic data. This forward model is fully described in terms of matrix manipulations. By applying these matrix manipulations in reverse order we arrive at an elastic processing scheme for multicomponent data in which elastic redatuming plays an essential role. Finally, we illustrate elastic redatuming with a controlled 2D example, consisting of simulated multicomponent seismic data.

17.
Surface removal and internal multiple removal are explained by recursively separating the primary and multiple responses at each depth level with the aid of wavefield prediction error filtering. This causal removal process is referred to as “data linearization.” The linearized output (primaries only) is suitable for linear migration algorithms. Next, a summary is given on the migration of full wavefields (primaries + multiples) by using the concept of secondary sources in each subsurface gridpoint. These secondary sources are two‐way and contain the gridpoint reflection and the gridpoint transmission properties. In full wavefield migration, a local inversion process replaces the traditional linear imaging conditions. Finally, Marchenko redatuming is explained by iteratively separating the full wavefield response from above a new datum and the full wavefield response from below a new datum. The redatuming output is available for linear migration (Marchenko imaging) or, even better, for full wavefield migration. Linear migration, full wavefield migration, and Marchenko imaging are compared with each other. The principal conclusion of this essay is that multiples should not be removed, but they should be utilized, yielding two major advantages: (i) illumination is enhanced, particularly in the situation of low signal‐to‐noise primaries; and (ii) both the upper side and the lower side of reflectors are imaged. It is also concluded that multiple scattering algorithms are more transparent if they are formulated in a recursive depth manner. In addition to transparency, a recursive depth algorithm has the flexibility to enrich the imaging process by inserting prior geological knowledge or by removing numerical artefacts at each depth level. Finally, it is concluded that nonlinear migration algorithms must have a closed‐loop architecture to allow successful imaging of incomplete seismic data volumes (reality of field data).

18.
Field‐survey characteristics can have an important impact on the quality of multiples predicted by surface‐related multiple elimination (SRME) algorithms. This paper examines the effects of three particular characteristics: in‐line spatial sampling, source stability, and cable feathering. Inadequate spatial sampling causes aliasing artefacts. These can be reduced by f–k filtering at the expense of limiting the bandwidth in the predicted multiples. Source‐signature variations create artefacts in predicted multiples due to spatial discontinuities. Variations from a well‐behaved airgun array produced artefacts having an rms amplitude about 26 dB below the rms amplitude of multiples predicted with no variations. Cable feathering has a large impact on the timing errors in multiples predicted by 2D SRME when it is applied in areas having cross dip. All these problems can be reduced by a combination of better survey design, use of advanced data‐acquisition technologies, and additional data‐processing steps.

19.
Fluid depletion within a compacting reservoir can lead to significant stress and strain changes and potentially severe geomechanical issues, both inside and outside the reservoir. We extend previous research of time‐lapse seismic interpretation by incorporating synthetic near‐offset and full‐offset common‐midpoint reflection data using anisotropic ray tracing to investigate uncertainties in time‐lapse seismic observations. The time‐lapse seismic simulations use dynamic elasticity models built from hydro‐geomechanical simulation output and a stress‐dependent rock physics model. The reservoir model is a conceptual two‐fault graben reservoir, where we allow the fault fluid‐flow transmissibility to vary from high to low to simulate non‐compartmentalized and compartmentalized reservoirs, respectively. The results indicate time‐lapse seismic amplitude changes and travel‐time shifts can be used to qualitatively identify reservoir compartmentalization. Due to the high repeatability and good quality of the time‐lapse synthetic dataset, the estimated travel‐time shifts and amplitude changes for near‐offset data match the true model subsurface changes with minimal errors. A 1D velocity–strain relation was used to estimate the vertical velocity change for the reservoir bottom interface by applying zero‐offset time shifts from both the near‐offset and full‐offset measurements. For near‐offset data, the estimated P‐wave velocity changes were within 10% of the true value. However, for full‐offset data, time‐lapse attributes are quantitatively reliable using standard time‐lapse seismic methods when an updated velocity model is used rather than the baseline model.

20.
Seismic wave propagation in transversely isotropic (TI) media is commonly described by a set of coupled partial differential equations, derived from the acoustic approximation. These equations produce pure P‐wave responses in elliptically anisotropic media but generate undesired shear‐wave components for more general TI anisotropy. Furthermore, these equations suffer from instabilities when the anisotropy parameter ε is less than δ. One solution to both problems is to use pure acoustic anisotropic wave equations, which can produce pure P‐waves without any shear‐wave contaminations in both elliptical and anelliptical TI media. In this paper, we propose a new pure acoustic transversely isotropic wave equation, which can be conveniently solved using the pseudospectral method. Like most other pure acoustic anisotropic wave equations, our equation involves complicated pseudo‐differential operators in space which are difficult to handle using the finite difference method. The advantage of our equation is that all of its model parameters are separable from the spatial differential and pseudo‐differential operators; therefore, the pseudospectral method can be directly applied. We use phase velocity analysis to show that our equation, expressed in a summation form, can be properly truncated to achieve the desired accuracy according to anisotropy strength. This flexibility allows us to save computational time by choosing the right number of summation terms for a given model. We use numerical examples to demonstrate that this new pure acoustic wave equation can produce highly accurate results, completely free from shear‐wave artefacts. This equation can be straightforwardly generalized to tilted TI media.
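The phase-velocity analysis mentioned above can be illustrated with the standard acoustic-approximation P-wave phase velocity for VTI media (Alkhalifah-style; this is the common textbook expression, not necessarily the exact equation proposed in the paper). Note the elliptical case ε = δ, where the square root collapses and no shear-wave artefact term remains:

```python
import numpy as np

def acoustic_vti_phase_velocity(theta, vp0, eps, delta):
    """Acoustic-approximation P-wave phase velocity in VTI media as a
    function of phase angle theta (radians from vertical), vertical
    velocity vp0, and Thomsen parameters eps and delta."""
    s2 = np.sin(theta) ** 2
    c2 = np.cos(theta) ** 2
    a = 1.0 + 2.0 * eps * s2
    v2 = 0.5 * vp0 ** 2 * (a + np.sqrt(a ** 2 - 8.0 * (eps - delta) * s2 * c2))
    return np.sqrt(v2)

theta = np.linspace(0.0, np.pi / 2, 91)
v_anell = acoustic_vti_phase_velocity(theta, 3000.0, 0.2, 0.1)  # anelliptical
v_ell = acoustic_vti_phase_velocity(theta, 3000.0, 0.1, 0.1)    # elliptical
```

Limiting cases serve as sanity checks: the velocity is vp0 at vertical incidence and vp0·sqrt(1 + 2ε) at horizontal incidence, and the ε < δ instability discussed in the abstract corresponds to the argument of the square root being able to shrink for intermediate angles.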

