Similar Articles
20 similar articles found (search time: 281 ms)
1.
Empirical mode decomposition (EMD) is a random-noise suppression method that separates wavefields based on the scale difference between effective signal and noise. However, because the wavefields of real seismic data are complex, mode mixing is severe, and denoising with EMD alone rarely achieves satisfactory results. Building on the multi-scale decomposition property of EMD and combining it with a Hausdorff-dimension constraint, this paper proposes a new method for attenuating seismic random noise. First, the seismic data are adaptively decomposed by EMD into a series of intrinsic mode functions (IMFs) of different scales exhibiting fractal self-similarity. Then, exploiting the difference in Hausdorff dimension between effective signal and random noise, the IMFs contaminated by random noise are identified and threshold filtering is applied to them, achieving an effective separation of signal and random noise. The method is tested on synthetic signals, model seismic data, and field seismic data, and compared with conventional EMD processing. The results show that the proposed method suppresses seismic random noise more effectively.
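The thresholding stage of the workflow above can be sketched compactly. This is a minimal numpy sketch under two stated assumptions: the IMFs are taken as given (the EMD sifting loop itself is omitted), and a zero-crossing-rate roughness measure stands in for the Hausdorff-dimension criterion used in the paper:

```python
import numpy as np

def roughness(x):
    """Zero-crossing rate: a crude stand-in for a fractal-dimension
    (Hausdorff) estimate -- noise-dominated components oscillate faster."""
    return np.mean(np.abs(np.diff(np.sign(x)))) / 2.0

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_imfs(imfs, rough_cut=0.25):
    """Threshold only the IMFs whose roughness exceeds the cut,
    then sum all components back into one trace."""
    out = np.zeros_like(imfs[0])
    for imf in imfs:
        if roughness(imf) > rough_cut:
            t = np.std(imf)          # simple per-IMF noise-level estimate
            out += soft_threshold(imf, t)
        else:
            out += imf
    return out

# toy example: one smooth "signal" IMF plus one noise-like IMF
rng = np.random.default_rng(0)
n = 512
sig = np.sin(2 * np.pi * np.arange(n) / 64.0)
noise = rng.normal(0.0, 0.5, n)
rec = denoise_imfs([sig, noise])
```

The roughness cut and threshold level are illustrative; in the paper they come from the Hausdorff-dimension contrast between signal and noise.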

2.
We present an approach based on local-slope estimation for the separation of scattered surface waves from reflected body waves. The direct and scattered surface waves contain a significant amount of seismic energy. They present great challenges in land seismic data acquisition and processing, particularly in arid regions with complex near-surface heterogeneities (e.g., dry river beds, wadis/large escarpments, and karst features). The near-surface scattered body-to-surface waves, which have amplitudes comparable to reflections, can mask the seismic reflections. These difficulties, added to large-amplitude direct and back-scattered surface (Rayleigh) waves, create a major reduction in signal-to-noise ratio and degrade the final sub-surface image quality. Removal of these waves using conventional filtering methods can be difficult without distorting the reflected signal. The filtering algorithm we present is based on predicting the spatially varying slope of the noise, using steerable filters, and separating the signal and noise components by applying a directional nonlinear filter oriented toward the noise direction to predict the noise and then subtract it from the data. The slope-estimation step using steerable filters is very efficient: it requires only a linear combination of a set of basis filters at fixed orientations to synthesize an image filtered at an arbitrary orientation. We apply our filtering approach to simulated data as well as to seismic data recorded in the field to separate the scattered surface waves from reflected body waves, and we demonstrate its superiority over conventional techniques in signal preservation and noise suppression.
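The core of such slope-based filtering is estimating the local dip of an event. As a hedged sketch (a least-squares plane-wave fit to finite differences stands in here for the steerable-filter scan of the paper), the slope p of an event u(t, x) = w(t − p·x) satisfies u_x + p·u_t = 0 and can be estimated directly from gradients:

```python
import numpy as np

def estimate_slope(d, dt=1.0, dx=1.0):
    """Least-squares slope estimate from finite differences:
    solves u_x + p * u_t = 0 for p over the whole window.
    (A crude stand-in for a steerable-filter orientation scan.)"""
    ut = np.gradient(d, dt, axis=0)   # time derivative
    ux = np.gradient(d, dx, axis=1)   # space derivative
    return -np.sum(ux * ut) / (np.sum(ut * ut) + 1e-12)

# synthetic dipping event u(t, x) = w(t - p*x) with true slope p = 0.4
p_true = 0.4
t = np.arange(200)[:, None]
x = np.arange(50)[None, :]
tau = t - p_true * x
data = np.exp(-0.5 * ((tau - 80.0) / 5.0) ** 2)  # Gaussian wavelet

p_hat = estimate_slope(data)
```

Once the noise slope is known, a directional prediction filter along that slope can predict the noise for subtraction, as the abstract describes.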

3.
For 3-D shallow-water seismic surveys offshore Abu Dhabi, imaging the target reflectors requires high resolution. Characterization and monitoring of hydrocarbon reservoirs by seismic amplitude-versus-offset techniques demand high pre-stack amplitude fidelity. In this region, however, it was still not clear how the survey parameters should be chosen to satisfy the required data quality. To answer this question, we applied the focal-beam method to survey evaluation and design. This subsurface- and target-oriented approach enables quantitative analysis of attributes such as the best achievable resolution and pre-stack amplitude fidelity at a fixed grid point in the subsurface for a given acquisition geometry at the surface. This method offers an efficient way to optimize the acquisition geometry for maximum resolution and minimum amplitude-versus-offset imprint. We applied it to several acquisition geometries in order to understand the effects of survey parameters such as the four spatial sampling intervals and apertures of the template geometry. The results led to a good understanding of the relationship between the survey parameters and the resulting data quality, and to identification of suitable survey parameters for reflection imaging and amplitude-versus-offset applications.

4.
Surface waves in seismic data are often dominant in a land or shallow-water environment. Separating them from primaries is of great importance, either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near-surface characterization. However, their complex properties make surface-wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three-dimensional surface-wave estimation and separation using an iterative closed-loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward-modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface-wave model is parameterized by the frequency-dependent slowness and source properties for each surface-wave mode. The optimal parameters are estimated in such a way that the residual is minimized; consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them from the seismic data. In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.
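The closed-loop idea (forward-model, subtract, minimize the residual) can be illustrated in one dimension. A minimal sketch under strong simplifying assumptions: a single mode, a known source wavelet, and one frequency-independent slowness found by grid search rather than by the full inversion of the paper:

```python
import numpy as np

def model_surface_wave(wavelet, slowness, offsets, nt):
    """Forward model: the source wavelet delayed by slowness*offset per trace."""
    t = np.arange(nt, dtype=float)
    return np.array([np.interp(t - slowness * h,
                               np.arange(len(wavelet), dtype=float),
                               wavelet, left=0.0, right=0.0)
                     for h in offsets])

def fit_slowness(data, wavelet, offsets, grid):
    """Closed loop: keep the slowness whose modelled surface wave
    leaves the smallest residual after subtraction."""
    nt = data.shape[1]
    resid = [np.sum((data - model_surface_wave(wavelet, p, offsets, nt)) ** 2)
             for p in grid]
    return grid[int(np.argmin(resid))]

wav = np.exp(-0.5 * ((np.arange(60) - 30.0) / 4.0) ** 2)  # toy wavelet
offsets = np.arange(10) * 10.0
obs = model_surface_wave(wav, 0.8, offsets, 200)          # "observed" data
p_hat = fit_slowness(obs, wav, offsets, np.linspace(0.5, 1.1, 61))
```

With the slowness recovered, the modelled surface wave is subtracted adaptively from the data, which is the separation step of the loop.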

5.
A new adaptive multi-criteria method is proposed for accurate estimation of first breaks in three-component, three-dimensional vertical seismic profiling data. Initially, we manually pick first breaks for the first gather of the three-dimensional borehole set and adjust several coefficients to approximate the first-break wave-shape parameters. We then predict the first breaks for the next source point using the previous one, assuming the same average velocity. We then calculate an objective function in a moving trace window and minimize it with respect to time shift and slope. This function combines four main properties that characterize first breaks on three-component borehole data: linear polarization, signal-to-noise ratio, similarity in wave shapes for close shots, and their stability in the time interval after the first break. We then adjust the coefficients by combining current and previous values. This approach uses adaptive parameters to follow smooth wave-shape changes. Finally, we average the first breaks after they are determined in the overlapping windows. The method utilizes three components to calculate the objective function for the direct compressional-wave projection. An adaptive multi-criteria optimization approach with multiple three-component traces makes this method very robust, even for data contaminated with high noise. An example using actual data demonstrates the stability of this method.
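The paper's objective function combines four criteria; a single one of them, the signal-to-noise contrast across a moving window, already yields a workable first-break picker. A minimal single-criterion sketch (an energy-ratio picker, not the full multi-criteria method):

```python
import numpy as np

def energy_ratio_pick(trace, w=20):
    """Pick the first break as the sample maximising the ratio of
    trailing-window to leading-window energy.  This is one simple
    criterion; the paper combines it with polarization and
    trace-similarity measures."""
    n = len(trace)
    e = trace ** 2
    ratio = np.zeros(n)
    for i in range(w, n - w):
        ratio[i] = (e[i:i + w].sum() + 1e-12) / (e[i - w:i].sum() + 1e-12)
    return int(np.argmax(ratio))

rng = np.random.default_rng(1)
n, onset = 400, 250
tr = rng.normal(0.0, 0.05, n)                              # pre-arrival noise
tr[onset:] += np.sin(2 * np.pi * np.arange(n - onset) / 25.0)  # arrival
pick = energy_ratio_pick(tr)
```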

6.
Presence of noise in the acquisition of surface nuclear magnetic resonance data is inevitable. Various types of noise, including Gaussian noise, spiky events, and harmonic noise, affect the signal quality of surface nuclear magnetic resonance measurements. In this paper, we describe an application of a two-step noise suppression approach based on a non-linear adaptive decomposition technique called complete ensemble empirical mode decomposition, in conjunction with a statistical optimization process, for enhancing the signal-to-noise ratio of the surface nuclear magnetic resonance signal. The filtering procedure starts with applying the complete ensemble empirical mode decomposition method to decompose the noisy surface nuclear magnetic resonance signal into a finite number of intrinsic mode functions. Afterwards, a threshold region based on de-trended fluctuation analysis is defined to identify the noisy intrinsic mode functions, and the noise-free intrinsic mode functions are used to recover the partially de-noised signal. In the second stage, we apply a statistical method based on the variance criterion to the signal obtained from the initial phase to mitigate the remaining noise. To demonstrate the functionality of the proposed strategy, the method was evaluated on an added-noise synthetic surface nuclear magnetic resonance signal and on field data. The results show that the proposed procedure allows us to improve the signal-to-noise ratio significantly and, consequently, extract the signal parameters (i.e., and V0) from noisy surface nuclear magnetic resonance data efficiently.
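Detrended fluctuation analysis, which the approach above uses to flag noisy intrinsic mode functions, is itself short to implement. A minimal numpy sketch (non-overlapping boxes, linear detrending; the threshold region of the paper is not reproduced, only the exponent):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: the slope of log F(s) vs log s.
    White noise gives an exponent near 0.5; persistent, signal-like
    series give larger exponents -- the basis for thresholding IMFs."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fluct = []
    for s in scales:
        nseg = len(y) // s
        f = []
        for i in range(nseg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # linear local trend
            f.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(2)
alpha_noise = dfa(rng.normal(size=4000), [16, 32, 64, 128, 256])
```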

7.
4D seismic is widely used to remotely monitor fluid movement in subsurface reservoirs. This technique is especially effective offshore, where high survey repeatability can be achieved. It comes as no surprise that the first 4D seismic survey that successfully monitored a CO2 sequestration process was recorded offshore, in the Sleipner field, North Sea. In the case of land projects, poor repeatability of the land seismic data due to low S/N ratio often obscures the time-lapse seismic signal. Hence, improving seismic repeatability is essential for a successful onshore monitoring program. Stage 2 of the CO2CRC Otway project involves injection of a small amount (around 15,000 tonnes) of a CO2/CH4 gas mixture into a saline aquifer at a depth of approximately 1.5 km. Previous studies at this site showed that seismic repeatability is relatively low due to variations in weather conditions, near-surface geology and farming activities. In order to improve time-lapse seismic monitoring capabilities, a permanent receiver array can be utilised to improve the signal-to-noise ratio and hence repeatability. A small-scale trial of such an array was conducted at the Otway site in June 2012. A set of 25 geophones was installed in 3 m deep boreholes in parallel with the same number of surface geophones. In addition, four geophones were placed into boreholes of 1–12 m depth. In order to assess the gain in signal-to-noise ratio and repeatability, both active and passive seismic surveys were carried out. The surveys were conducted in relatively poor weather conditions, with rain, strong wind and thunderstorms. With such an amplified background noise level, we found that the noise level for buried geophones is on average 20 dB lower than for the surface geophones. The levels of repeatability for borehole geophones, estimated around the direct wave, reflected wave and ground roll, are twice as high as those for the surface geophones.
Both borehole and surface geophones produce the best repeatability in the 30–90 Hz frequency range. Analysis of the influence of burial depth on S/N ratio and repeatability shows that a significant improvement in repeatability is already reached at a depth of 3 m. The level of repeatability remains relatively constant between 3 and 12 m depths.
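The 20 dB figure quoted above is a ratio of RMS noise amplitudes. As a reminder of the arithmetic, a short sketch on synthetic noise records (illustrative data, not the Otway recordings):

```python
import numpy as np

def db_gain(noise_surface, noise_buried):
    """Noise reduction in dB from the RMS levels of two recordings:
    a 10x amplitude reduction corresponds to 20 dB."""
    rms = lambda a: np.sqrt(np.mean(a ** 2))
    return 20.0 * np.log10(rms(noise_surface) / rms(noise_buried))

rng = np.random.default_rng(3)
surf = rng.normal(0.0, 1.0, 10000)     # surface-geophone noise
buried = rng.normal(0.0, 0.1, 10000)   # 10x lower amplitude at depth
gain = db_gain(surf, buried)
```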

8.
Coherent noise in land seismic data primarily consists of source-generated surface-wave modes. The component that is traditionally considered most relevant is the so-called ground roll, consisting of surface-wave modes propagating directly from sources to receivers. In many geological situations, near-surface heterogeneities and discontinuities, as well as topography irregularities, diffract the surface waves and generate secondary events, which can heavily contaminate records. The diffracted and converted surface waves are often called scattered noise and can be a severe problem, particularly in areas with shallow or outcropping hard lithological formations. Conventional noise attenuation techniques are not effective with scattering: they can usually address the tails but not the apices of the scattered events. Large source and receiver arrays can attenuate scattering, but only at the cost of a compromise to signal fidelity and resolution. We present a model-based technique for scattering attenuation, based on the estimation of surface-wave properties and on the prediction of surface waves with a complex path involving diffractions. The properties are estimated first, to produce surface-consistent volumes of the propagation properties. Then, for each gather to filter, we integrate the contributions of all possible diffractors, building a scattering model. The estimated scattered wavefield is then subtracted from the data. The method can work in different domains and copes with aliased surface waves. The benefits of the method are demonstrated with synthetic and real data.
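The kinematic core of such a scattering model is the two-leg travel time: source to diffractor, then diffractor to receiver, at the surface-wave velocity. A minimal sketch (geometry and velocity are illustrative; the paper's method also estimates surface-consistent property volumes and amplitudes):

```python
import numpy as np

def scattered_times(src, rec, diffractors, v):
    """Travel time of a surface wave scattered at each diffractor:
    source -> diffractor -> receiver, at constant velocity v."""
    src = np.asarray(src, dtype=float)
    rec = np.asarray(rec, dtype=float)
    d = np.asarray(diffractors, dtype=float)
    return (np.linalg.norm(d - src, axis=1) +
            np.linalg.norm(d - rec, axis=1)) / v

# one in-line diffractor and one off-line diffractor, v = 500 m/s
t = scattered_times([0.0, 0.0], [100.0, 0.0],
                    [[50.0, 0.0], [50.0, 40.0]], v=500.0)
```

Summing wavelets at these times over a grid of candidate diffractors builds the predicted scattered wavefield that is subtracted from the data.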

9.
Three-dimensional receiver ghost attenuation (deghosting) of dual-sensor towed-streamer data is straightforward, in principle. In its simplest form, it requires applying a three-dimensional frequency–wavenumber filter to the vertical component of the particle motion data to correct for the amplitude reduction on the vertical component of non-normal-incidence plane waves before combining with the pressure data. More elaborate techniques apply three-dimensional filters to both components before summation, for example, for ghost wavelet dephasing and mitigation of noise of different strengths on the individual components in optimum deghosting. The problem with all these techniques is, of course, that it is usually impossible to transform the data into the crossline wavenumber domain because of aliasing. Hence, usually, a two-dimensional version of deghosting is applied to the data in the frequency–inline wavenumber domain. We investigate going down the "dimensionality ladder" one more step, to a one-dimensional weighted summation of the records of the collocated sensors, to create an approximate deghosting procedure. We specifically consider amplitude-balancing weights computed via a standard automatic gain control before summation, reminiscent of a diversity stack of the dual-sensor recordings. This technique is independent of the actual streamer depth and insensitive to variations in the sea-surface reflection coefficient. The automatic gain control weights serve two purposes: (i) to approximately correct for the geometric amplitude loss of the Z data and (ii) to mitigate noise strength variations on the two components. Here, Z denotes the vertical component of the velocity of particle motion scaled by the seismic impedance of the near-sensor water volume. The weights are time-varying and can also be made frequency-band dependent, adapting better to frequency variations of the noise.
The investigated process is a very robust, almost fully hands-off, approximate three-dimensional deghosting step for dual-sensor data, requiring no spatial filtering and no explicit estimates of noise power. We argue that this technique performs well in terms of ghost attenuation (albeit not exact ghost removal) and balancing the signal-to-noise ratio in the output data. For instances where full three-dimensional receiver deghosting is the final product, the proposed technique is appropriate for efficient quality control of the acquired data and for aiding the parameterisation of the subsequent deghosting processing.
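The AGC-weighted summation described above is simple to sketch. A minimal version with a sliding-window inverse-RMS gain (window length, stabilisation constant, and the equal-split combination are illustrative choices, not the paper's exact parameterisation):

```python
import numpy as np

def agc_weights(x, w=50):
    """Inverse RMS in a sliding window: a standard AGC gain curve."""
    e = np.convolve(x ** 2, np.ones(w) / w, mode="same")
    return 1.0 / np.sqrt(e + 1e-8)

def deghost_sum(p, z, w=50):
    """1-D approximate deghosting: AGC-balance each component, then
    average -- a sketch of the diversity-stack idea for P and Z."""
    wp, wz = agc_weights(p, w), agc_weights(z, w)
    return 0.5 * (wp * p + wz * z)

n = 1000
t = np.arange(n)
p = np.sin(2 * np.pi * t / 25.0)          # pressure component
z = 0.3 * np.sin(2 * np.pi * t / 25.0)    # weaker vertical component
out = deghost_sum(p, z)
```

After weighting, both components contribute at comparable amplitude regardless of the original Z amplitude loss, which is purpose (i) of the weights in the abstract.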

10.
Reconstruction of seismic data is routinely used to improve the quality and resolution of seismic data from incomplete acquired seismic recordings. Curvelet-based Recovery by Sparsity-promoting Inversion, adapted from the recently developed theory of compressive sensing, is one such reconstruction method, especially well suited to recovering undersampled seismic data. Like traditional Fourier-based methods, it performs best when used in conjunction with randomized subsampling, which converts aliases from the usual regular periodic subsampling into easy-to-eliminate noise. By virtue of its ability to control gap size, along with the random and irregular nature of its sampling pattern, jittered (sub)sampling is one proven method that has been used successfully for the determination of geophone positions along a seismic line. In this paper, we extend jittered sampling to two-dimensional acquisition design, a more difficult problem, with both underlying Cartesian and hexagonal grids. We also study what we term separable and non-separable two-dimensional jittered samplings. We find that hexagonal jittered sampling performs better than Cartesian jittered sampling, while fully non-separable jittered sampling performs better than separable jittered sampling. Two other 2D randomized sampling methods, Poisson Disk sampling and Farthest Point sampling, both known to possess blue-noise spectra, are also shown to perform well.
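The key property of jittered sampling, randomness with a hard bound on the largest gap, is easy to see in one dimension. A minimal sketch (one sample per cell, perturbed uniformly; the jitter factor is an illustrative choice):

```python
import numpy as np

def jittered_sampling(n, spacing, jitter=0.5, rng=None):
    """1-D jittered sampling: one sample per cell of width `spacing`,
    perturbed by up to +-(jitter*spacing/2).  Random, yet the gap
    between neighbours is bounded -- unlike plain uniform sampling."""
    if rng is None:
        rng = np.random.default_rng()
    centres = (np.arange(n) + 0.5) * spacing
    return centres + rng.uniform(-0.5, 0.5, n) * jitter * spacing

pos = jittered_sampling(100, 10.0, jitter=0.5,
                        rng=np.random.default_rng(4))
gaps = np.diff(pos)
```

With jitter = 0.5 and spacing d, every gap lies in [d/2, 3d/2]; the 2D Cartesian and hexagonal schemes of the paper apply the same idea per grid cell.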

11.
Planar wave events recorded by a seismic array can be represented as lines in the Fourier domain. However, in the real world, seismic events usually have curvature or amplitude variability, which means that their Fourier transforms are no longer strictly linear but rather occupy conic regions of the Fourier domain that are narrow at low frequencies but broaden at high frequencies, where the effect of curvature becomes more pronounced. One can consider these regions as localised "signal cones". In this work, we consider a space–time variable signal cone to model the seismic data. The variability of the signal cone is obtained through scaling, slanting, and translation of the kernel for cone-limited (C-limited) functions (functions whose Fourier transform lives within a cone) or the C-Gaussian function (a multivariate function whose Fourier transform decays exponentially with respect to slowness and frequency), which constitutes our dictionary. We find a discrete number of scaling, slanting, and translation parameters from a continuum by optimally matching the data. This is a non-linear optimisation problem, which we address by a fixed-point method that utilises a variable projection method with ℓ1 constraints on the linear parameters and bound constraints on the non-linear parameters. We observe that slow decay and oscillatory behaviour of the kernel for C-limited functions constitute bottlenecks for the optimisation problem, which we partially overcome with the C-Gaussian function. We demonstrate our method through an interpolation example. We present the interpolation result using the estimated parameters obtained from the proposed method and compare it with those obtained using sparsity-promoting curvelet decomposition, matching pursuit Fourier interpolation, and sparsity-promoting plane-wave decomposition methods.

12.
Local seismic event slopes contain subsurface velocity information and can be used to estimate seismic stacking velocity. In this paper, we propose a novel approach to estimate the stacking velocity automatically from seismic reflection data using similarity-weighted k-means clustering, in which the weights are the local similarity between each trace in a common-midpoint gather and a reference trace. Local similarity reflects the local signal-to-noise ratio in the common-midpoint gather. By thresholding, we select the data points with high signal-to-noise ratio to be used in the velocity estimation with large weights in the mapped traveltime and velocity domain. Weighted k-means clustering moves the clustering centroids closer to the data points with large weights, which are more reliable and have higher signal-to-noise ratio. Interpolation is then used to obtain the full velocity volume from the velocity points calculated by weighted k-means clustering. Using the proposed method, one obtains a more accurate estimate of the stacking velocity, because the similarity-based weighting in clustering takes into account the signal-to-noise ratio and reliability of different data points in the mapped traveltime and velocity domain. We apply the proposed method to synthetic and field data examples; the resulting images are of higher quality than those obtained using existing methods.
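The weighted-centroid update that pulls centroids toward high-similarity points is a one-line change to Lloyd's algorithm. A minimal 1-D sketch (toy data and weights; the paper clusters points in the mapped traveltime–velocity domain with similarity weights):

```python
import numpy as np

def weighted_kmeans(x, w, k, iters=50, rng=None):
    """Lloyd iterations with weighted centroid updates: points with
    large weights (high local similarity) pull centroids harder."""
    if rng is None:
        rng = np.random.default_rng(0)
    c = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(k):
            m = lab == j
            if m.any():
                c[j] = np.sum(w[m] * x[m]) / np.sum(w[m])
    return np.sort(c)

# two clusters; the middle point of each carries most of the weight
x = np.array([0.0, 1.0, 2.0, 9.0, 10.0, 11.0])
w = np.array([1.0, 1.0, 5.0, 5.0, 1.0, 1.0])
c = weighted_kmeans(x, w, k=2)
```

The centroids land at the weighted means 11/7 and 66/7 rather than at the plain means 1 and 10, which is exactly the reliability bias the abstract describes.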

13.
Dictionary learning is a successful method for random seismic noise attenuation, as several studies have shown. Dictionary learning–based techniques aim to learn a set of common bases, called dictionaries, from the given noisy seismic data. Denoising is then performed by assuming a sparse representation of each small local patch of the seismic data over the learned dictionary. The local patches extracted from the seismic section are essentially two-dimensional matrices. However, for the sake of simplicity, almost all of the existing dictionary learning methods simply convert each two-dimensional patch into a one-dimensional vector. In doing this, the geometric structure information of the raw data is lost, reducing the ability to reconstruct seismic structures such as faults and dipping events. In this paper, we propose a two-dimensional dictionary learning method for the seismic denoising problem. Unlike other dictionary learning–based methods, the proposed method represents the two-dimensional patches directly to avoid the conversion process, and thus preserves the important structure information for a better reconstruction. Our method first learns a two-dimensional dictionary from the noisy seismic patches. Then, we use the two-dimensional dictionary to sparsely represent all of the noisy two-dimensional patches to obtain clean patches. Finally, the clean patches are assembled back into a denoised seismic section. The proposed method is compared with three other denoising methods: FX-decon, the curvelet method, and a one-dimensional dictionary learning method. The results demonstrate that our method has better denoising performance in terms of signal-to-noise ratio, fault preservation, and amplitude preservation.
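The patch machinery underlying all such methods, extraction of overlapping 2-D blocks and overlap-averaged reassembly, is short to write down. A minimal sketch (the dictionary learning and sparse coding stages themselves are omitted; identity "denoising" is used so reassembly can be checked exactly):

```python
import numpy as np

def extract_patches(img, p, step):
    """Slide a p x p window over the section; keep patches as 2-D
    blocks (the point of the paper: no flattening to 1-D vectors)."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p]
                     for i in range(0, H - p + 1, step)
                     for j in range(0, W - p + 1, step)])

def reassemble(patches, shape, p, step):
    """Patch back into a section, averaging overlapping contributions."""
    H, W = shape
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    k = 0
    for i in range(0, H - p + 1, step):
        for j in range(0, W - p + 1, step):
            acc[i:i + p, j:j + p] += patches[k]
            cnt[i:i + p, j:j + p] += 1
            k += 1
    return acc / np.maximum(cnt, 1)

rng = np.random.default_rng(5)
img = rng.normal(size=(32, 32))
pats = extract_patches(img, 8, 4)     # overlapping 8x8 patches
rec = reassemble(pats, img.shape, 8, 4)
```

In the full method, each patch in `pats` would be replaced by its sparse approximation over the learned 2-D dictionary before reassembly.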

14.
Conventional time-space domain and frequency-space domain prediction filtering methods assume that seismic data consist of two parts, signal and random noise: the so-called additive noise model. When estimating the random noise, however, they assume that it can be predicted from the seismic data by convolution with a prediction error filter: the source-noise model. These model inconsistencies, before and after denoising, compromise the noise-attenuation and signal-preservation performance of prediction filtering methods. This study therefore presents an inversion-based time-space domain random noise attenuation method to overcome the inconsistency. In this method, a prediction error filter (PEF) is first estimated from the seismic data; the filter characterizes the predictability of the seismic data and adaptively describes its spatial structure. The PEF is then applied as a regularization constraint in the inversion for the seismic signal from the noisy data. Unlike conventional random noise attenuation methods, the proposed method solves a seismic data inversion problem with a regularization constraint, which overcomes the model inconsistency of the prediction filtering approach. The proposed method was tested on both synthetic and real seismic data, and results from the prediction filtering method and the proposed method are compared. The tests demonstrate that the proposed method suppresses noise effectively and provides better signal preservation.
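The first step of the method, estimating a prediction error filter from the data, amounts to a least-squares fit of each sample from its predecessors. A minimal 1-D sketch (a temporal PEF on an autoregressive toy trace; the paper estimates spatial PEFs and then uses them as an inversion regularizer):

```python
import numpy as np

def estimate_pef(x, order):
    """Least-squares prediction coefficients a with
    x[t] ~ a[0]*x[t-1] + ... + a[order-1]*x[t-order];
    the prediction error filter is then [1, -a]."""
    rows = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    return a

# AR(2) toy trace: x[t] = 1.6 x[t-1] - 0.8 x[t-2] + e[t]
rng = np.random.default_rng(6)
n = 5000
x = np.zeros(n)
e = rng.normal(0.0, 1.0, n)
for t in range(2, n):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + e[t]

a = estimate_pef(x, 2)
```

The recovered coefficients match the generating recursion, so convolving the trace with [1, -a] leaves (approximately) the white innovation, which is what the regularized inversion of the paper exploits.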

15.
In tight gas sands, the signal-to-noise ratio of nuclear magnetic resonance log data is usually low, which limits the application of nuclear magnetic resonance logs in this type of reservoir. We use wavelet-domain adaptive filtering to denoise the nuclear magnetic resonance log data from tight gas sands. The principles of the maximum correlation coefficient and the minimum root-mean-square error are used to select the optimal basis function for the wavelet transform. The feasibility and effectiveness of this method are verified by analysing numerical simulation results and core experimental data. Compared with the wavelet-thresholding denoising method, this adaptive filtering method is more effective in noise filtering, improving the signal-to-noise ratio of nuclear magnetic resonance data and the inversion precision of the transverse relaxation time (T2) spectrum. The application of this method to nuclear magnetic resonance logs shows that it not only improves the accuracy of nuclear magnetic resonance porosity but also enhances the ability to recognize tight gas sands in nuclear magnetic resonance logs.

16.
Topography and severe variations of near-surface layers perturb event travel times in seismic exploration. Usually, these perturbations can be estimated and eliminated by refraction technology. The virtual refraction method is a relatively new technique for retrieval of refraction information from seismic records contaminated by noise. Building on the virtual refraction, this paper proposes super-virtual refraction interferometry by cross-correlation, which retrieves refraction wavefields by summing the cross-correlation of raw refraction wavefields and virtual refraction wavefields over all receivers located outside the retrieved source and receiver pair. This method enhances the refraction signal gradually as the source–receiver offset decreases. For further enhancement of refracted waves, a hybrid virtual refraction wavefield scheme is applied by stacking correlation-type and convolution-type super-virtual refractions. Our new method does not need any information about the near-surface velocity model, solves the problem of directly unmeasured virtual refraction energy from the virtual source at the surface, and extends the acquisition aperture to its maximum extent in raw seismic records. It can also reduce the influence of random noise in raw seismic records effectively and improve the signal-to-noise ratio of refracted waves by a factor proportional to the square root of the number of receivers positioned at stationary-phase points, building on the improvement of the virtual refraction's signal-to-noise ratio. Using results from synthetic and field data, we show that our new method is effective in retrieving refraction information from raw seismic records and improves the accuracy of first-arrival picks.
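The elementary operation behind correlation-type interferometry is that cross-correlating two receivers' recordings of the same refracted arrival leaves their differential travel time. A minimal sketch (Gaussian arrivals stand in for refraction wavelets; the summation over stationary-phase receivers is omitted):

```python
import numpy as np

def xcorr_lag(a, b):
    """Lag (in samples) at which b best matches a -- the kernel of
    correlation-type interferometry: the correlation of two traces
    retains only their differential travel time."""
    c = np.correlate(a, b, mode="full")
    return int(np.argmax(c)) - (len(b) - 1)

n = 300
near = np.exp(-0.5 * ((np.arange(n) - 60) / 4.0) ** 2)  # arrival at t = 60
far = np.roll(near, 25)                                  # same arrival at t = 85
lag = xcorr_lag(far, near)
```

Summing such correlations over many receivers (and stacking with the convolution-type counterpart) is what builds the super-virtual refraction and its sqrt(N) signal-to-noise gain.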

17.
Attenuation of random noise and enhancement of structural continuity can significantly improve the quality of seismic interpretation. We present a new technique which aims at reducing random noise while protecting structural information. The technique is based on combining structure prediction with either similarity-mean filtering or lower-upper-middle filtering. We use structure prediction to predict each seismic trace from its neighbouring traces. We apply a non-linear similarity-mean filter or a lower-upper-middle (LUM) filter to select the best samples from the different predictions. In comparison with other common filters, such as the mean or median filter, the additional parameters of the non-linear filters allow us to better control the balance between eliminating random noise and protecting structural information. Numerical tests using synthetic and field data show the effectiveness of the proposed structure-enhancing filters.
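A lower-upper-middle smoother is one of the two selection filters named above, and its behaviour, spike removal with edge preservation, is easy to demonstrate. A minimal 1-D sketch (window size and order k are the tunable parameters the abstract alludes to; the structure-prediction stage is omitted):

```python
import numpy as np

def lum_smooth(x, half=3, k=2):
    """Lower-upper-middle smoother: clamp the centre sample between
    the k-th smallest and k-th largest value in the window.  The
    order k trades spike removal against signal preservation."""
    n = len(x)
    y = x.copy()
    for i in range(half, n - half):
        w = np.sort(x[i - half:i + half + 1])
        lo, hi = w[k - 1], w[-k]
        y[i] = min(max(x[i], lo), hi)
    return y

sig = np.ones(50)
sig[25] = 10.0                                       # spike on a flat trace
step = np.concatenate([np.zeros(25), np.ones(25)])   # an "edge" to preserve
out_spike = lum_smooth(sig)
out_step = lum_smooth(step)
```

Unlike a mean filter, the LUM smoother removes the spike without smearing the step, which is the structure-protection property the paper exploits.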

18.
Attenuation in seismic wave propagation is a common cause of poor illumination of subsurface structures. Attempts to compensate for amplitude loss in seismic images by amplifying the wavefield may boost high-frequency components, such as noise, and create undesirable imaging artefacts. In this paper, rather than amplifying the wavefield directly, we develop a stable compensation operator using stable division. The operator relies on a constant-Q wave equation with decoupled fractional Laplacians and compensates for the full attenuation phenomena by performing wave extrapolation twice. This leads to two new imaging conditions to compensate for attenuation in reverse-time migration. A time-dependent imaging condition is derived by applying Q-compensation in the frequency domain, whereas a time-independent imaging condition is formed in the image space by calculating image normalisation weights. We demonstrate the feasibility and robustness of the proposed methods using three synthetic examples. We find that the proposed methods are capable of properly compensating for attenuation without amplifying high-frequency noise in the data.
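The stabilising trick mentioned above, regularised (stable) division, is worth seeing in isolation: replacing a/b with a·b/(b² + ε) bounds the output where b vanishes, instead of blowing up like a direct inverse. A minimal sketch (the exponential stands in for an attenuation factor; ε here is an illustrative fraction of the peak, not the paper's choice):

```python
import numpy as np

def stable_divide(num, den, eps=None):
    """Regularised division num/den = num*den / (den^2 + eps):
    bounded even where den -> 0, unlike direct division."""
    if eps is None:
        eps = 1e-4 * np.max(den ** 2)
    return num * den / (den ** 2 + eps)

# toy attenuated spectrum: direct compensation 1/den reaches e^6 ~ 403,
# while the stable version stays bounded by 1/(2*sqrt(eps)) = 50
f = np.linspace(0.0, 1.0, 256)
den = np.exp(-6.0 * f)          # stand-in attenuation factor
num = np.ones_like(f)
comp = stable_divide(num, den)
```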

19.
20.
In many land seismic situations, the complex seismic wave propagation effects in the near-surface area, due to its unconsolidated character, deteriorate the image quality. Although several methods have been proposed to address this problem, the negative impact of 3D complex near-surface structures remains unsolved to a large extent. This paper presents a complete 3D data-driven solution for the near-surface problem based on 3D one-way traveltime operators, extending our previous work, which was limited to the 2D case. Our solution is composed of four steps: 1) seismic wave propagation from the surface to a suitable datum reflector is described by parametrized one-way propagation operators, with all the parameters estimated by a new genetic algorithm, the self-adjustable input genetic algorithm, in an automatic and purely data-driven way; 2) surface-consistent residual static corrections are estimated to accommodate the fast variations in the near-surface area; 3) a replacement velocity model based on the traveltime operators in the good data area (without the near-surface problem) is estimated; 4) data interpolation and surface-layer replacement based on the estimated traveltime operators and the replacement velocity model are carried out in an interleaved manner, in order to both remove the near-surface imprints from the original data and keep the valuable geological information above the datum. Our method is demonstrated on a subset of a 3D field data set from the Middle East, yielding encouraging results.

