Similar Literature
Found 20 similar documents (search time: 551 ms)
1.
Seismic data processing is a challenging task, especially when dealing with vector-valued datasets. These data are characterized by correlated components, each corrupted by a different level of uncorrelated random noise. Mitigating such noise while preserving the signal of interest is a primary goal of the seismic-processing workflow. Frequency-space deconvolution is a well-known linear prediction technique commonly used for random noise suppression. This paper represents vector-field seismic data through quaternion arrays and mitigates random noise by extending frequency-space deconvolution to its hypercomplex version, quaternion frequency-space deconvolution. It also shows how a widely linear prediction model exploits the correlation between data components of improper signals. The widely linear scheme, named widely-linear quaternion frequency-space deconvolution, produces longer prediction filters with enhanced signal-preservation capabilities, as shown on synthetic and field vector-valued data examples.
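For the scalar (complex-valued, non-quaternion) case, frequency-space deconvolution reduces to a complex linear prediction across traces at each temporal frequency. The sketch below illustrates that baseline, not the paper's quaternion extension; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def fx_deconvolution(data, filt_len=5):
    """Suppress random noise by complex linear prediction in the f-x domain.

    data: 2-D array (n_time, n_traces); returns the predictable (signal)
    part of the data, same shape.
    """
    n_t, n_x = data.shape
    spec = np.fft.rfft(data, axis=0)          # one spectrum per trace
    out = np.zeros_like(spec)
    for k in range(spec.shape[0]):            # loop over frequency slices
        s = spec[k]                           # complex signal along x
        if n_x <= filt_len:
            out[k] = s
            continue
        # Autoregression: predict s[i] from the filt_len previous traces.
        A = np.array([s[i - filt_len:i] for i in range(filt_len, n_x)])
        b = s[filt_len:]
        f, *_ = np.linalg.lstsq(A, b, rcond=None)
        pred = s.copy()
        pred[filt_len:] = A @ f               # predictable (signal) part
        out[k] = pred
    return np.fft.irfft(out, n=n_t, axis=0)
```

A laterally coherent event is fully predictable across traces and passes through unchanged, while uncorrelated noise is attenuated in the prediction.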

2.
The relative source time function (RSTF) inversion uncertainty assessment was performed for two small, mining-induced seismic events (MW = 2.9 and 3.0) that occurred at the Rudna copper mine in Poland. The seismograms of the selected events were recorded by a seismic network of over 60 short-period vertical seismometers recording ground velocity, located at distances ranging from 400 m to 8 km from the hypocenters. The RSTFs were calculated for each seismic station independently, using the empirical Green's function technique. The sought RSTF was approximated pseudospectrally by a finite sum of Gaussian kernel functions, and the inverse problem was solved with the adaptive simulated annealing algorithm. Both techniques improved the stability of the deconvolution procedure and the physical correctness of the final solution in comparison with classical deconvolution methods. To estimate the inversion uncertainties, classical Markov-chain Monte-Carlo techniques were used. The uncertainty analysis allows for improved selection of a priori data for the subsequent inversion for the kinematic rupture process.
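The uncertainty appraisal relies on classical Markov-chain Monte-Carlo sampling. A minimal random-walk Metropolis sampler, the generic building block of such an analysis (a sketch, not the authors' implementation; all names are illustrative), might look like:

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis sampler over an unnormalised log-posterior."""
    x = np.array(x0, float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```

Applied to an RSTF parameterisation, `log_post` would combine the waveform misfit with any prior on the Gaussian-kernel coefficients; the spread of the samples then quantifies the inversion uncertainty.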

3.
The purpose of deconvolution is to retrieve the reflectivity from seismic data. To do this requires an estimate of the seismic wavelet, which in some techniques is estimated simultaneously with the reflectivity, and in others is assumed known. The most popular deconvolution technique is inverse filtering. It has the property that the deconvolved reflectivity is band-limited. Band-limitation implies that reflectors are not sharply resolved, which can lead to serious interpretation problems in detailed delineation. To overcome the adverse effects of band-limitation, various alternatives to inverse filtering have been proposed. One class of alternatives is Lp-norm deconvolution, L1-norm deconvolution being the best known of this class. We show that for an exact convolutional forward model and statistically independent reflectivity and additive noise, the maximum likelihood estimate of the reflectivity can be obtained by Lp-norm deconvolution for a range of multivariate probability density functions of the reflectivity and the noise. The L∞-norm corresponds to a uniform distribution, the L2-norm to a Gaussian distribution, the L1-norm to an exponential distribution, and the L0-norm to a variable that is sparsely distributed. For instance, if we assume sparse and spiky reflectivity and Gaussian noise with zero mean, the Lp-norm deconvolution problem is solved best by minimizing the L0-norm of the reflectivity and the L2-norm of the noise. However, the L0-norm is difficult to implement in an algorithm. From a practical point of view, the frequency-domain mixed-norm method that minimizes the L1-norm of the reflectivity and the L2-norm of the noise is the best alternative. Lp-norm deconvolution can be stated in both the time domain and the frequency domain. We show that the two approaches are equivalent only when the noise is minimized with the L2-norm. Finally, some Lp-norm deconvolution methods are compared on synthetic and field data.
For the practical examples, the wide range of possible Lp-norm deconvolution methods is narrowed down to three methods with p = 1 and/or 2. Given the assumptions of sparsely distributed reflectivity and Gaussian noise, we conclude that the mixed L1-norm (reflectivity), L2-norm (noise) method performs best. However, the problems inherent to single-trace deconvolution techniques, for example the generation of spurious events, remain. For practical application, a greater problem is that only the main, well-separated events are properly resolved.
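The recommended mixed-norm problem, L1-norm on the reflectivity and L2-norm on the noise, can be written as min_r 0.5·||d − Wr||² + λ·||r||₁ with W the convolution matrix of the wavelet. A simple time-domain ISTA solver for this objective (an illustrative sketch; the paper favours a frequency-domain formulation, and the solver choice is an assumption):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_l2_deconvolution(d, w, lam=0.01, n_iter=2000):
    """Minimise 0.5*||d - W r||_2^2 + lam*||r||_1 by ISTA,
    where W is the convolution matrix of the known wavelet w."""
    n_r = len(d) - len(w) + 1
    W = np.zeros((len(d), n_r))
    for j in range(n_r):
        W[j:j + len(w), j] = w          # convolution matrix, column by column
    L = np.linalg.norm(W, 2) ** 2       # Lipschitz constant of the gradient
    r = np.zeros(n_r)
    for _ in range(n_iter):
        grad = W.T @ (W @ r - d)        # gradient of the L2 misfit
        r = soft_threshold(r - grad / L, lam / L)
    return r
```

Small λ reproduces the least-squares solution; larger λ drives small reflectivity samples to zero, giving the sparse-spike behaviour described above.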

4.
We propose a three-step bandwidth-enhancing wavelet deconvolution process, combining linear inverse filtering and non-linear reflectivity construction based on a sparseness assumption. The first step is conventional Wiener deconvolution. The second step consists of further spectral whitening outside the spectral bandwidth of the residual wavelet after Wiener deconvolution, i.e., the wavelet resulting from application of the Wiener deconvolution filter to the original wavelet, which usually is not a perfect spike due to band limitations of the original wavelet. We specifically propose a zero-phase filtered sparse-spike deconvolution as the second step, to recover the reflectivity dominantly outside the bandwidth of the residual wavelet after Wiener deconvolution. The filter applied to the sparse-spike deconvolution result is proportional to the deviation of the amplitude spectrum of the residual wavelet from unity: it is of higher amplitude the closer the amplitude spectrum of the residual wavelet is to zero, and of very low amplitude the closer it is to unity. The third step consists of summation of the data from the first two steps, gradually adding the contribution of the sparse-spike deconvolution result at those frequencies at which the residual wavelet after Wiener deconvolution has small amplitudes. We propose to call this technique “sparsity-enhanced wavelet deconvolution”. We demonstrate the technique on real data with the deconvolution of the (normal-incidence) source-side sea-surface ghost of marine towed-streamer data. We also present the extension of the proposed technique to time-varying wavelet deconvolution.

5.
Wiener deconvolution is generally used to improve the resolution of seismic sections, although it relies on several important assumptions. I propose a new method, named Gold deconvolution, to obtain the Earth's sparse-spike reflectivity series. The method uses a recursive approach and requires the source waveform to be known, in which case it is termed Deterministic Gold deconvolution. If the wavelet is unknown, it is estimated from the seismic data and the process is termed Statistical Gold deconvolution. In addition to minimum-phase wavelets, the Gold deconvolution method also works for zero- and mixed-phase wavelets, even on noisy seismic data. The proposed method makes no assumption about the phase of the input wavelet; however, it needs the following assumptions to produce satisfactory results: (1) the source waveform is known or, if not, is estimated from the seismic data; (2) the source wavelet is stationary at least within a specified time gate; (3) the input seismic data are zero-offset and contain no multiples; and (4) the Earth consists of a sparse-spike reflectivity series. When applied in small time and space windows, the Gold deconvolution algorithm overcomes nonstationarity of the input wavelet. The algorithm uses several thousand iterations, and a higher number of iterations generally produces better results. Since the wavelet is extracted from the seismogram itself in the Statistical Gold deconvolution case, the algorithm should be applied in constant-length windows in both the time and space directions to overcome the nonstationarity of the wavelet in the input seismograms. The method can be extended to the two-dimensional case to obtain time- and space-dependent reflectivity, although I use one-dimensional Gold deconvolution on a trace-by-trace basis. The method is effective in areas where small-scale bright spots exist and can also be used to locate thin reservoirs.
Since the method produces better results in the Deterministic Gold deconvolution case, it can be used for deterministic deconvolution of data sets with known source waveforms, such as land Vibroseis records and marine CHIRP systems.
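The abstract does not state the update rule. The classical Gold ratio iteration from spectroscopy, x ← x · (Hᵀy)/(HᵀHx), is the usual starting point; it assumes a nonnegative solution, so the sketch below (an assumption, not the author's algorithm) would need a reformulation, e.g. splitting positive and negative parts, to handle signed reflectivity:

```python
import numpy as np

def gold_deconvolution(y, h, n_iter=2000, eps=1e-12):
    """Classical Gold ratio iteration: x <- x * (H^T y) / (H^T H x).

    Assumes y = h * x with nonnegative x; H is the convolution matrix of h.
    """
    n_x = len(y) - len(h) + 1
    H = np.zeros((len(y), n_x))
    for j in range(n_x):
        H[j:j + len(h), j] = h
    x = np.full(n_x, max(y.max(), 1.0))   # positive starting model
    Hty = H.T @ y
    for _ in range(n_iter):
        denom = H.T @ (H @ x)
        x = x * Hty / np.maximum(denom, eps)  # multiplicative update
    return x
```

The multiplicative form keeps the solution nonnegative at every iteration, which is consistent with the observation in the abstract that many iterations are needed and more iterations generally sharpen the result.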

6.
Voluminous gravity and magnetic data sets demand automatic interpretation techniques such as Naudy, Euler, and Werner deconvolution. Of these, Euler deconvolution has become a popular choice because the method assumes no particular geological model. However, the conventional approach to solving the Euler equation requires tentative values of the structural index, which prevents the method from being fully automatic, and assumes a constant background, an assumption easily violated when singular points are close to each other. We propose a possible solution to these problems by simultaneously estimating the source location, depth, and structural index while assuming a nonlinear background. The Euler equation is solved in a nonlinear fashion using an optimization technique such as conjugate gradients. The technique is applied to a published synthetic data set in which the magnetic anomalies were modeled for a complex assemblage of simple magnetic bodies. The results for closely spaced singular points are superior to those obtained by assuming a linear background. We also applied the technique to a magnetic data set collected along the western continental margin of India. The results are in agreement with the regional magnetic interpretation and the bathymetric expressions.
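The conventional linear Euler solve that the abstract criticises, with a fixed tentative structural index N and a constant background B, can be sketched as follows for a 2-D profile (a minimal illustration; names and the window-free formulation are assumptions):

```python
import numpy as np

def euler_deconvolution(x, z, T, Tx, Tz, N):
    """Conventional window Euler solve: estimate source (x0, z0) and
    constant background B for a *given* structural index N.

    Euler's equation: (x - x0)*Tx + (z - z0)*Tz = N*(B - T),
    rearranged to:    x0*Tx + z0*Tz + N*B = x*Tx + z*Tz + N*T.
    """
    A = np.column_stack([Tx, Tz, N * np.ones_like(T)])
    b = x * Tx + z * Tz + N * T
    (x0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, z0, B
```

The proposed method instead treats N and a nonlinear background as additional unknowns, which removes the need for the tentative index this sketch requires.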

7.
Enhancing the resolution and accuracy of surface ground-penetrating radar (GPR) reflection data by inverse filtering to recover a zero-phased band-limited reflectivity image requires a deconvolution technique that takes the mixed-phase character of the embedded wavelet into account. In contrast, standard stochastic deconvolution techniques assume that the wavelet is minimum phase and, hence, often meet with limited success when applied to GPR data. We present a new general-purpose blind deconvolution algorithm for mixed-phase wavelet estimation and deconvolution that (1) uses the parametrization of a mixed-phase wavelet as the convolution of the wavelet's minimum-phase equivalent with a dispersive all-pass filter, (2) includes prior information about the wavelet to be estimated in a Bayesian framework, and (3) relies on the assumption of a sparse reflectivity. Solving the normal equations using the data autocorrelation function provides an inverse filter that optimally removes the minimum-phase equivalent of the wavelet from the data, which leaves traces with a balanced amplitude spectrum but distorted phase. To compensate for the remaining phase errors, we invert in the frequency domain for an all-pass filter thereby taking advantage of the fact that the action of the all-pass filter is exclusively contained in its phase spectrum. A key element of our algorithm and a novelty in blind deconvolution is the inclusion of prior information that allows resolving ambiguities in polarity and timing that cannot be resolved using the sparseness measure alone. We employ a global inversion approach for non-linear optimization to find the all-pass filter phase values for each signal frequency. We tested the robustness and reliability of our algorithm on synthetic data with different wavelets, 1-D reflectivity models of different complexity, varying levels of added noise, and different types of prior information. 
When applied to realistic synthetic 2-D data and 2-D field data, we obtain images with increased temporal resolution compared with the results of standard processing.
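The first stage described above, solving the normal equations with the data autocorrelation to remove the minimum-phase equivalent of the wavelet, is the classical least-squares spiking filter. A sketch (assuming SciPy is available; the filter length and prewhitening values are illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon_filter(trace, filt_len=20, prewhite=0.001):
    """Least-squares spiking filter from the trace autocorrelation;
    it removes the minimum-phase equivalent of the embedded wavelet."""
    n = len(trace)
    # One-sided autocorrelation, lags 0 .. filt_len-1.
    acf = np.correlate(trace, trace, mode='full')[n - 1:n - 1 + filt_len]
    acf[0] *= 1.0 + prewhite          # stabilising prewhitening
    rhs = np.zeros(filt_len)
    rhs[0] = 1.0                      # desired output: spike at lag 0
    return solve_toeplitz(acf, rhs)   # Levinson-type Toeplitz solve
```

As the abstract notes, this leaves the trace with a balanced amplitude spectrum but a distorted phase; the all-pass phase correction is the separate, novel part of the algorithm.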

8.
In contrast to the conventional (Wiener-Levinson) deconvolution technique, spike, predictive, and gap deconvolution are realized here with an adaptively updated prediction operator. Because the prediction operator is updated from sample to sample, the procedure can be used for time-variant deconvolution. The updating formulae discussed are the adaptive updating formula and the sequential algorithm of the sequential estimation technique. The updating technique is illustrated using both synthetic and real seismic data.

9.
10.
The Bussgang algorithm was originally proposed for blind separation of convolutive sources; this paper applies it to blind seismic deconvolution. Because the generalized Gaussian probability density function can approximate an arbitrary probability density function, and starting from the statistical characteristics of the reflectivity series, a generalized Gaussian distribution is introduced to represent the super-Gaussian character of the reflectivity. Based on these statistics and the principles of the Bussgang algorithm, an objective function is established with the Kullback-Leibler distance as the measure of non-Gaussianity, the memoryless nonlinear function involved in the algorithm is derived, and blind seismic deconvolution is thereby realized. Tests on synthetic models and field data show that the method adapts well to non-minimum-phase systems, estimates the seismic wavelet and the reflectivity simultaneously, and effectively improves the resolution of seismic data.

11.
The τ-p transform is an invertible transformation of seismic shot records, expressed as a function of time and offset, into the τ (intercept time) and p (ray parameter) domain. It is derived from the solution of the wave equation for a point source in a three-dimensional, vertically non-homogeneous medium and is therefore a true-amplitude process for the assumed model. The main advantage of the transformation is that it presents a point-source shot record as a series of plane-wave experiments. The asymptotic expansion of the transformation is found to be useful in reflection seismic data processing. The τ-p and frequency-wavenumber (f-k) processes are closely related. Indeed, the τ-p process embodies the frequency-wavenumber transformation, so the technique suffers the same limitations as the f-k technique. In particular, the wavefield must be sampled with sufficient spatial density to avoid wavenumber aliasing. The computation of the transform and its inverse consists of a two-dimensional Fast Fourier Transform, followed by an interpolation, then by an inverse-time Fast Fourier Transform. The technique is extended from a vertically inhomogeneous three-dimensional medium to a vertically and laterally inhomogeneous three-dimensional medium. The τ-p transform may create artifacts (truncation and aliasing effects), which can be reduced by a finer spatial density of geophone groups, by balancing of the seismic data, and by tapering the extremities of the seismic data. The τ-p domain is used as a temporary domain in which coherent noise is effectively attacked; the technique can be viewed as ‘time-variant f-k filtering’. In addition, deconvolution and multiple suppression are addressed at least as well in the τ-p domain as in the time-offset domain.
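A discrete τ-p transform can also be computed directly as a slant stack, summing each trace along the lines t = τ + p·x; the FFT route described in the abstract is faster, but the direct sum makes the geometry explicit. A minimal sketch (names and the linear-interpolation choice are illustrative):

```python
import numpy as np

def tau_p_transform(data, dt, offsets, p_values):
    """Discrete slant stack: for each ray parameter p and intercept tau,
    sum the record along t = tau + p*x with linear time interpolation.

    data: 2-D array (n_time, n_offsets); returns (n_time, n_p)."""
    n_t, n_x = data.shape
    t = np.arange(n_t) * dt
    out = np.zeros((n_t, len(p_values)))
    for ip, p in enumerate(p_values):
        for ix, x in enumerate(offsets):
            idx = (t + p * x) / dt            # sample positions on the slope
            lo = np.floor(idx).astype(int)
            frac = idx - lo
            valid = (lo >= 0) & (lo < n_t - 1)
            trace = data[:, ix]
            out[valid, ip] += ((1 - frac[valid]) * trace[lo[valid]]
                               + frac[valid] * trace[lo[valid] + 1])
    return out
```

A linear event with slope p₀ stacks coherently into a single point at (τ, p₀), which is what makes the domain attractive for attacking coherent noise.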

12.
A fast implementation of adaptive Kalman filter deconvolution
A new adaptive Kalman filter deconvolution (AKFD) method based on the dyadic wavelet transform is proposed, together with a fast implementation that addresses its computational complexity. Dyadic-wavelet-domain AKFD abandons the stationarity assumption of conventional predictive deconvolution, overcomes the marked loss of signal-to-noise ratio that usually accompanies resolution enhancement, and has good noise resistance. AKFD performed in the wavelet domain suppresses spurious reflections and improves resolution better than time-domain AKFD, and it overcomes the time-domain drawback of boosting low-frequency components. Exploiting the local stationarity of 2-D seismic data, the fast implementation computes the adaptive prediction operator segment by segment and interpolates it by splines in both the lateral and vertical directions, reducing the cost of operator estimation. Extensive experiments show a speed-up of several hundred times while preserving the quality of the original computation.

13.
14.
Using air-gun source signals recorded by the Qilian Mountain active-source observation system, and applying filtering, stacking, deconvolution, and cross-correlation, we obtained travel-time variations of different P-wave phases at each station of the system. Stations near the source region show measurable P-wave travel-time changes before the Zhangye M5.0 earthquake of 16 September 2019, indicating a change in the stress state of the subsurface structure in the source region before the event.

15.
Euler deconvolution is the most popular technique used to interpret potential field data in terms of simple sources characterized by the value of the degree of homogeneity. A more recent technique, the continuous wavelet transform, allows the same kind of interpretation. Euler deconvolution is usually applied to data at a constant level, while the continuous wavelet transform is usually applied to points belonging to lines (ridges) connecting the m-order partial-derivative modulus maxima of the upward-continued field at different altitudes in the harmonic region. In this paper a new method is proposed that unifies the two techniques. The method consists of applying Euler's equation to the ridges, so that the equation assumes a reduced form. Along each ridge, for an isolated-source model, the ratio between the m-order partial derivative of the field and its vertical partial derivative is a straight line whose slope and intercept allow estimation of the source depth and degree of homogeneity. The method, strictly valid for a single-source model, has also been applied to the multisource case, where interference among the fields generated by the individual sources causes the path of the ratio to be no longer straight. In this case the method gives approximate solutions that are good estimates of the source depth and its degree of homogeneity only for a restricted range of altitudes, where the ratio is approximately linear and the source behaves as if it were isolated.

16.
Adaptive filters offer advantages over Wiener filters for time-varying processes. They are used for deconvolution of seismic data that exhibit non-stationary behavior, and seldom for noise reduction. Different algorithms for adaptive filtering exist. The least-mean-squares (LMS) algorithm, because of its simplicity, has been widely applied to data from fields outside geophysics. This paper studies the application of the LMS algorithm to improving the signal-to-noise ratio in deep-reflection seismic pre-stack data, using synthetic data models and field data from the DEKORP project.
Three adaptive filter techniques, the one-trace technique, the two-trace technique, and the time-slice technique, are examined closely to establish the merits and demerits of each. The one-trace technique does not improve the signal-to-noise ratio in deep-reflection seismic data, where signal and noise cover the same frequency range. With the two-trace technique, the strongest noise reduction is achieved for small noise on the data. The filter efficiency decreases rapidly with increasing noise. Furthermore, the filter performance is poor upon application to common-midpoint (CMP) gathers with no normal-moveout (NMO) corrections: application of the two-trace method to seismic traces before dynamic correction results in gaps in the signal along the reflection hyperbolas. The time-slice technique, introduced in this paper, offers the best answer. In this case, the one-trace technique is applied to the NMO-corrected gathers across all traces in each gather at each time, to separate the low-wavenumber component of the signal in the offset direction from the high-wavenumber noise component. The stacking velocities used for the dynamic correction need not be known very accurately because, in deep-reflection seismics, residual moveouts are small and have only a minor influence on the results of the adaptive time-slice technique.
Noise reduction is more significant with the time-slice technique than with the two-trace technique. The superiority of the adaptive time-slice technique is demonstrated with the DEKORP data.
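A minimal LMS update for the two-trace idea, predicting the primary trace from a neighbouring reference trace so that the predictable output estimates the coherent signal (a generic sketch, not the DEKORP processing code; filter length and step size are illustrative):

```python
import numpy as np

def lms_two_trace(primary, reference, filt_len=10, mu=0.01):
    """Two-trace LMS: adaptively predict `primary` from the neighbouring
    `reference` trace. The output is the coherent-signal estimate; the
    residual primary - output is the noise estimate."""
    n = len(primary)
    w = np.zeros(filt_len)
    signal_est = np.zeros(n)
    for i in range(filt_len, n):
        x = reference[i - filt_len:i][::-1]   # most recent sample first
        y = w @ x                             # filter output
        e = primary[i] - y                    # prediction error
        w += 2 * mu * e * x                   # LMS weight update
        signal_est[i] = y
    return signal_est
```

The coherent part of the primary trace is predictable from the reference and survives, while noise that is uncorrelated between the two traces cannot be predicted and is suppressed; this is exactly why the technique degrades when the noise level grows or the traces are not moveout-aligned.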

17.
The receiver function method was originally developed to analyse earthquake data recorded by multicomponent (3C) sensors and consists in deconvolving the horizontal component by the vertical component. The deconvolution removes travel-path effects from the source to the base of the target as well as the earthquake source signature. In addition, it provides the possibility of separating the emergent P and PS waves by adaptive subtraction between the recorded components, if plane waves of constant ray parameter are considered. The resulting receiver function signal is the local PS-wave impulse response generated at impedance contrasts below the 3C receiver. We propose to adapt this technique to the wide-angle multi-component reflection acquisition geometry. We focus on the simplest case, land reflection data. Our adapted version of the receiver function approach is a multi-step procedure that first removes the P wavefield recorded on the horizontal component and then removes the source signature. The separation step is performed in the τ-p domain, while the source designature can be achieved in either the τ-p or the t-x domain. Our technique does not require any a priori knowledge of the subsurface. The resulting receiver function is a pure PS-wave reflectivity response, which can be used for amplitude-versus-slowness or amplitude-versus-offset analysis. Stacking the receiver functions leads to a high-quality S-wave image.

18.
Vibroseis is a source commonly used for inland seismic exploration. This non-destructive source is often used in urban areas with strong environmental noise. The main goal of seismic data processing is to increase the signal-to-noise ratio, and deconvolution is a determining step. Vibroseis seismic data do not meet the basic minimum-phase assumption required for spiking and predictive deconvolution; therefore various techniques, such as phase shifting, are applied to the data so that deconvolution can be performed successfully. This work analyzes the application of deconvolution techniques before and after cross-correlation on a real data set acquired for high-resolution prospecting of deep aquifers. In particular, we compare pre-correlation spiking and predictive deconvolution with Wiener filtering, and with post-correlation time-variant spectral-whitening deconvolution. The main result is that at small offsets, post-cross-correlation spectral-whitening deconvolution and pre-correlation spiking deconvolution yield comparable results, while at large offsets the best result is obtained by applying pre-cross-correlation predictive deconvolution.
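The cross-correlation step around which these deconvolutions are organised is a plain correlation of the record with the pilot sweep, which compresses each sweep arrival into a zero-phase Klauder wavelet centred on its onset time. A sketch:

```python
import numpy as np

def vibroseis_correlate(record, sweep):
    """Correlate a vibroseis record with the pilot sweep; each sweep
    arrival collapses into a zero-phase Klauder wavelet at its onset."""
    # 'full' correlation; slice so that index 0 corresponds to zero lag.
    return np.correlate(record, sweep, mode='full')[len(sweep) - 1:]
```

Because the resulting Klauder wavelet is zero-phase rather than minimum-phase, spiking or predictive deconvolution applied after correlation needs the phase handling discussed in the abstract.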

19.
Deconvolution methods and processing workflow for air-gun source data
Air-gun sources are highly repeatable and can be used to monitor changes in the subsurface medium. However, the signals produced by an air-gun source under different working conditions differ slightly, and deconvolution can, to some extent, remove the recorded-signal variations caused by source changes. To remove the air-gun source wavelet and obtain the Green's function between the air-gun source and a station, an appropriate deconvolution method must usually be chosen for the seismic waveform data. Frequency-domain water-level deconvolution and time-domain iterative deconvolution are two methods widely used in receiver-function studies and related fields. Taking active-source data from Binchuan, Yunnan as an example, this paper compares the two methods for processing air-gun source data. The results show that the frequency-domain water-level method is more computationally efficient, whereas the time-domain iterative method yields a higher signal-to-noise ratio and clearer P-wave first arrivals. The order of operations such as deconvolution and stacking in multi-shot processing is further discussed, and a general workflow is proposed for extracting the Green's function between an air-gun source and a station from air-gun source data.
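Frequency-domain water-level deconvolution, the first of the two methods compared, divides the record spectrum by the source spectrum while clipping small source-spectrum amplitudes at a fraction of the maximum. A generic sketch (not the paper's code; the `level` value is an illustrative assumption):

```python
import numpy as np

def water_level_deconvolution(record, source, level=0.01):
    """Frequency-domain water-level deconvolution: spectral division
    stabilised by flooring the source power spectrum at `level` times
    its maximum."""
    n = len(record)
    R = np.fft.rfft(record, n)
    S = np.fft.rfft(source, n)
    power = np.abs(S) ** 2
    floor = level * power.max()               # the "water level"
    G = R * np.conj(S) / np.maximum(power, floor)
    return np.fft.irfft(G, n)
```

The water level trades resolution for stability: a higher level suppresses noise amplification at frequencies where the source has little energy, at the cost of a smoother estimated Green's function, which is consistent with the speed/quality trade-off against time-domain iterative deconvolution reported above.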

20.
The conventional nonstationary convolutional model assumes that the seismic signal is recorded at normal incidence. Raw shot gathers are far from this assumption because of the effect of offset. To address this problem, we propose a novel prestack nonstationary deconvolution approach. We introduce the radial trace (RT) transform into nonstationary deconvolution, estimate the nonstationary deconvolution factor with hyperbolic smoothing based on variable-step sampling (VSS) in the RT domain, and obtain high-resolution prestack nonstationary deconvolution data. The RT transform maps the shot record from offset-traveltime coordinates to apparent-velocity-traveltime coordinates. The ray paths of traces in the RT domain better satisfy the assumptions of the convolutional model. The proposed method combines the advantages of stationary deconvolution and inverse Q filtering, without requiring prior information about Q. Nonstationary deconvolution in the RT domain is more suitable for prestack data than deconvolution in the space-time (XT) domain because it is the generalized extension of normal incidence. Tests with synthetic and real data demonstrate that the proposed method is more effective in compensating for large-offset and deep data.
