Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
In certain areas, continuous Vibroseis profiling is not possible due to varying terrain conditions. Impulsive sources can be used to maintain continuous coverage. While this technique keeps the coverage at the desired level, it creates a processing problem: the different sources produce different source wavelets. In addition, the effect of the free surface differs for the two energy sources. The approach to these problems consists of a minimum-phase transformation of the two-sided Vibroseis data by removal of the anticipation component of the autocorrelation of the filtered sweep, and a minimum-phase transformation of the impulsive-source data by replacement of the recording filter operator with its minimum-phase counterpart. After this transformation, both datasets contain causal wavelets and a conventional deconvolution (spiking or predictive) may be used. After stacking, a zero-phase transformation can be performed, resulting in traces well suited for computing pseudo-acoustic impedance logs or for complex seismic trace analysis. The solution is also applicable to pure Vibroseis data, thereby eliminating the need for a special Vibroseis deconvolution. The processing steps described above are demonstrated on synthetic and actual data. The transformation operators used are two-sided recursive (TSR) shaping filters. After application of the above adjustment procedure, remaining signal distortions can be removed by modifying only the phase spectrum, or both the amplitude and phase spectra. It can be shown that an arbitrary distortion defined in the frequency domain, i.e., a distortion of the amplitude and phase spectra, appears in the time section as a two-sided signal.
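The minimum-phase transformation at the heart of this approach can be sketched with the real cepstrum: keep the amplitude spectrum and fold the anticausal part of the cepstrum onto the causal side. The following is a minimal illustration of that generic homomorphic construction, assuming numpy; it is not the TSR shaping-filter design used in the paper.

```python
import numpy as np

def minimum_phase_equivalent(w, nfft=1024):
    """Return a minimum-phase wavelet with (approximately) the same
    amplitude spectrum as w, via real-cepstrum folding."""
    A = np.abs(np.fft.fft(w, nfft))
    A = np.maximum(A, 1e-8 * A.max())              # regularize log of near-zeros
    ceps = np.real(np.fft.ifft(np.log(A)))         # real cepstrum (zero phase)
    folded = np.zeros(nfft)
    folded[0] = ceps[0]                            # fold the anticausal part
    folded[1:nfft // 2] = 2.0 * ceps[1:nfft // 2]  # onto the causal side
    folded[nfft // 2] = ceps[nfft // 2]
    return np.real(np.fft.ifft(np.exp(np.fft.fft(folded))))
```

The output is causal with its energy concentrated at the front, so a conventional spiking or predictive deconvolution applies afterwards.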

2.
Vibroseis is a source commonly used for inland seismic exploration. This non-destructive source is often used in urban areas with strong environmental noise. The main goal of seismic data processing is to increase the signal-to-noise ratio, and deconvolution is a determining step. Vibroseis data do not meet the basic minimum-phase assumption required by spiking and predictive deconvolution, so various techniques, such as phase shifting, are applied to the data so that deconvolution can be performed successfully. This work analyzes the application of deconvolution techniques before and after cross-correlation on a real data set acquired for high-resolution prospecting of deep aquifers. In particular, we compare pre-correlation spiking and predictive deconvolution with Wiener filtering and with post-correlation time-variant spectral-whitening deconvolution. The main result is that at small offsets, post-correlation spectral-whitening deconvolution and pre-correlation spiking deconvolution yield comparable results, while at large offsets the best result is obtained by applying a pre-correlation predictive deconvolution.
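The spiking deconvolution compared in this study can be sketched in a few lines: design a zero-lag Wiener inverse operator from the trace autocorrelation and apply it. This is a generic textbook version with assumed parameter values, not the processing flow of the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, nfilt=40, prewhiten=0.001):
    """Spiking (zero-lag Wiener) deconvolution: the inverse operator is
    designed from the trace autocorrelation and applied to the trace."""
    r = np.correlate(trace, trace, mode='full')[len(trace) - 1:][:nfilt]
    r[0] *= 1.0 + prewhiten               # white-noise stabilization
    rhs = np.zeros(nfilt)
    rhs[0] = 1.0                          # desired output: spike at lag 0
    f = solve_toeplitz(r, rhs)            # solve the normal equations
    return np.convolve(trace, f)[:len(trace)]
```

On a synthetic trace built from a minimum-phase wavelet and white reflectivity, the output correlates strongly with the reflectivity, which is the minimum-phase assumption discussed above at work.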

3.
A new method of Vibroseis deconvolution has been recently proposed by the authors. This discussion describes the effects of noise on the application of this method. The initial deconvolution step involves estimating the spectrum of the Vibroseis wavelet by homomorphic filtering. It is shown that noise causes problems with phase estimation. Hence, the Vibroseis wavelet is assumed to be zero phase. Examples demonstrate that zero phase cepstral filtering is a robust wavelet estimation approach for noisy data. The second step of the deconvolution method forms an impulse response model by a spectral extension method. Although this step can improve the resolution of seismic arrivals, it must be applied with caution in view of the deleterious effects of noise.
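Zero-phase cepstral wavelet estimation of the kind described above can be sketched by low-pass liftering the real cepstrum of the trace: the phase is assumed zero, and only the smoothed amplitude spectrum is kept. A minimal sketch assuming numpy, with an arbitrary quefrency cutoff; it stands in for, but is not, the authors' exact filtering.

```python
import numpy as np

def zero_phase_wavelet_estimate(trace, cutoff=20, nfft=512):
    """Estimate a zero-phase wavelet by keeping only the low-quefrency
    part of the real cepstrum (homomorphic smoothing of the amplitude
    spectrum; the phase is assumed to be zero, as in noisy-data use)."""
    A = np.abs(np.fft.rfft(trace, nfft))
    A = np.maximum(A, 1e-6 * A.max())
    ceps = np.fft.irfft(np.log(A), nfft)    # cepstrum of the amplitude only
    lifter = np.zeros(nfft)
    lifter[:cutoff] = 1.0                   # keep low quefrencies,
    lifter[-cutoff + 1:] = 1.0              # symmetrically
    smooth_log_a = np.fft.rfft(ceps * lifter).real
    w = np.fft.irfft(np.exp(smooth_log_a), nfft)
    return np.fft.fftshift(w)               # centre the zero-phase wavelet
```

The estimate is symmetric about its centre sample by construction, which is exactly the zero-phase property the abstract relies on.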

4.
Seismic recording systems without telemetry have often been affected by electromagnetically induced spikes or bursts, which lead to strong data distortions when combined with the correlation process of the vibroseis method. Partial or total loss of the desired seismic information is possible if no automatic spike and burst reduction is available in the field prior to vertical stacking and correlation of the field record.

With modern telemetry recording systems, the most common noise-reduction methods in vibroseis techniques (e.g., spike and burst reduction, diversity stacking) are already applied in the field to reduce noise at a very early stage. The success of these automatic correction methods depends on the fundamental principles of the recording situation, the actual characteristics of the distorting noise, and the parameter settings chosen by the operator. Since field data are usually correlated and vertically stacked in the field to minimize logistical and processing costs, no subsequent parameter corrections are possible to optimize the noise reduction after correlation and vertical stacking of a production record.

The noise-reduction method described in this paper uses the final recorded and stacked vibroseis field data, at either the correlated or the uncorrelated stage of processing. The method eliminates signal artifacts caused by spikes or bursts combined with a standard convolution process. A modified correlation operator compresses the noise artifact in time using a single-trace convolution process. After elimination of this compressed noise, re-application of the convolution process leads to a noise-corrected replacement of the input data. The efficiency of the method is shown on a synthetic data set and a real vibroseis field record. Furthermore, several thousand records from a 2-D deep seismic reflection project were corrected with good results using this method.
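For orientation, the field-style spike/burst edit that such systems automate can be sketched with a running-median detector: samples deviating from the local median by several robust standard deviations are replaced. This toy despiker (window and threshold are assumed values) illustrates the detection idea only; it is not the modified-correlation operator of the paper.

```python
import numpy as np

def despike(trace, win=11, k=5.0):
    """Running-median despiker: a sample deviating from the local median
    by more than k robust standard deviations (1.4826 * MAD) is replaced
    by that median."""
    n = len(trace)
    out = trace.copy()
    half = win // 2
    for i in range(n):
        seg = trace[max(0, i - half):min(n, i + half + 1)]
        med = np.median(seg)
        mad = np.median(np.abs(seg - med)) + 1e-12
        if abs(trace[i] - med) > k * 1.4826 * mad:
            out[i] = med                      # replace the outlier
    return out
```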

5.
The effects of source and receiver motion on seismic data are considered using extensions of the standard convolutional model. In particular, receiver motion introduces a time-variant spatial shift into data, while source motion converts the effect of the source signature from a single-channel convolution in time to a multichannel convolution in time and space. These results are consistent with classical Doppler theory and suggest that Doppler shifting can introduce distortions into seismic data even at relatively slow acquisition speeds. It is shown that, while both source and receiver motion are known to be important for marine vibroseis acquisition, receiver motion alone can produce significant artifacts in marine 3D data. Fortunately, the convolutional nature of the distortions renders them amenable to correction using simple deconvolution techniques. Specifically, the effects of receiver motion can be neutralized by applying an appropriate reverse time-variant spatial shift, while those due to source motion can be addressed by introducing time-variant spatial shifts both before and after standard, deterministic, signature deconvolution or correlation.  相似文献   
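The consistency with classical Doppler theory can be checked numerically: a receiver towed at speed v through a plane wave of frequency f0 records an apparent frequency f0(1 - v/c). The values below are assumed, illustrative marine numbers, not those of the paper.

```python
import numpy as np

# A receiver moving at speed v through a plane wave u(x, t) records an
# apparent frequency f0 * (1 - v / c), as the time-variant shift model predicts.
c, f0, v = 1500.0, 25.0, 2.5                   # wave speed, frequency, tow speed
dt = 0.001
t = np.arange(0.0, 4.0, dt)
x = v * t                                      # receiver position over time
u = np.sin(2 * np.pi * f0 * (t - x / c))       # plane wave travelling in +x
spec = np.abs(np.fft.rfft(u * np.hanning(len(u))))
f_obs = np.fft.rfftfreq(len(u), dt)[np.argmax(spec)]
```

The spectral peak sits at the Doppler-shifted frequency f0(1 - v/c) to within the FFT bin spacing, even at this slow acquisition speed.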

6.
Wiener deconvolution is generally used to improve the resolution of seismic sections, although it rests on several important assumptions. I propose a new method, named Gold deconvolution, to obtain the Earth's sparse-spike reflectivity series. The method uses a recursive approach and requires the source waveform to be known, in which case it is termed deterministic Gold deconvolution. If the wavelet is unknown, it is estimated from the seismic data and the process is termed statistical Gold deconvolution. In addition to minimum-phase wavelets, the Gold deconvolution method also works for zero- and mixed-phase wavelets, even on noisy seismic data. The proposed method makes no assumption about the phase of the input wavelet; however, it needs the following assumptions to produce satisfactory results: (1) the source waveform is known or, if not, is estimated from the seismic data; (2) the source wavelet is stationary, at least within a specified time gate; (3) the input seismic data are zero-offset and contain no multiples; and (4) the Earth consists of a sparse-spike reflectivity series. When applied in small time and space windows, the Gold deconvolution algorithm overcomes nonstationarity of the input wavelet. The algorithm uses several thousand iterations, and a higher number of iterations generally produces better results. Since the wavelet is extracted from the seismogram itself in the statistical case, the algorithm should be applied in constant-length windows in both time and space to overcome the nonstationarity of the wavelet in the input seismograms. The method can be extended to two dimensions to obtain time- and space-dependent reflectivity, although I apply one-dimensional Gold deconvolution on a trace-by-trace basis. The method is effective in areas where small-scale bright spots exist and can also be used to locate thin reservoirs.
Since the method produces better results in the deterministic case, it can be used for deterministic deconvolution of data sets with known source waveforms, such as land Vibroseis records and marine CHIRP systems.
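A recursive ratio iteration of the Gold type can be sketched as the multiplicative update x <- x * (H'y) / (H'Hx), run for thousands of iterations as the abstract describes. The classic form of this update preserves non-negativity, so the toy below assumes a non-negative spike series (the published method handles signed reflectivity); the wavelet and spike positions are invented for illustration.

```python
import numpy as np

def gold_deconvolution(y, w, n_iter=5000):
    """Iterative ratio (Gold-type) deconvolution with a known wavelet w
    (the deterministic case). Multiplicative updates keep the estimate
    non-negative, so this sketch assumes a non-negative spike series."""
    n = len(y)
    H = np.zeros((n, n))
    for i, wi in enumerate(w):                 # causal convolution matrix
        H += wi * np.eye(n, k=-i)
    Hty, HtH = H.T @ y, H.T @ H
    x = np.full(n, max(y.mean(), 1e-3))        # positive starting model
    for _ in range(n_iter):
        x = x * Hty / np.maximum(HtH @ x, 1e-12)
    return x
```

As the abstract notes for the method proper, more iterations give a better (spikier) result; the update has a fixed point where the normal equations H'Hx = H'y are satisfied.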

7.
Statistical deconvolution, as it is usually applied on a routine basis, designs an operator from the trace autocorrelation to compress the wavelet which is convolved with the reflectivity sequence. Under the assumption of a white reflectivity sequence (and a minimum-delay wavelet) this simple approach is valid. However, if the reflectivity is distinctly non-white, then the deconvolution will confuse the contributions of the wavelet and the reflectivity to the spectral shape of the trace. Given logs from a nearby well, a simple two-parameter model may be used to describe the power-spectral shape of the reflection coefficients derived from the broadband synthetic. This modelling is attractive in that structure in the smoothed spectrum that is consistent with random effects is not built into the model. The two parameters are used to compute simple inverse- and forward-correcting filters, which can be applied before and after the design and implementation of the standard predictive deconvolution operators. For whitening deconvolution, application of the inverse filter prior to deconvolution is unnecessary, provided the minimum-delay version of the forward filter is used. Application of the technique to seismic data shows the correction procedure to be fast and cheap, and case histories display subtle but important differences between the conventionally deconvolved sections and those produced by incorporating the correction procedure into the processing sequence. It is concluded that, even with a moderate amount of non-whiteness, the corrected section can show appreciably better resolution than the conventionally processed section.

8.
Klauder wavelet removal before vibroseis deconvolution
The spiking deconvolution of a field seismic trace requires that the seismic wavelet on the trace be minimum phase. On a dynamite trace, the component wavelets due to the effects of recording instruments, coupling, attenuation, ghosts, reverberations and other types of multiple reflection are minimum phase. The seismic wavelet is the convolution of the component wavelets. As a result, the seismic wavelet on a dynamite trace is minimum phase and thus can be removed by spiking deconvolution. However, on a correlated vibroseis trace, the seismic wavelet is the convolution of the zero-phase Klauder wavelet with the component minimum-phase wavelets. Thus the seismic wavelet occurring on a correlated vibroseis trace does not meet the minimum-phase requirement necessary for spiking deconvolution, and the final result of deconvolution is less than optimal. Over the years, this problem has been investigated and various methods of correction have been introduced. In essence, the existing methods of vibroseis deconvolution make use of a correction that converts (on the correlated trace) the Klauder wavelet into its minimum-phase counterpart. The seismic wavelet, which is the convolution of the minimum-phase counterpart with the component minimum-phase wavelets, is then removed by spiking deconvolution. This means that spiking deconvolution removes both the constructed minimum-phase Klauder counterpart and the component minimum-phase wavelets. Here, a new method is proposed: instead of being converted to minimum phase, the Klauder wavelet is removed directly. The spiking deconvolution can then proceed unimpeded as in the case of a dynamite record. These results also hold for gap predictive deconvolution because gap deconvolution is a special case of spiking deconvolution in which the deconvolved trace is smoothed by the front part of the minimum-phase wavelet that was removed.
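The Klauder wavelet itself is simply the autocorrelation of the sweep, and one way to sketch its direct zero-phase removal is frequency-domain division with a water level. The sweep parameters and the water-level division below are assumptions for illustration; they stand in for, and are not, the authors' removal scheme.

```python
import numpy as np

def linear_sweep(f1, f2, T, dt):
    t = np.arange(0.0, T, dt)
    return np.cos(2 * np.pi * (f1 * t + (f2 - f1) * t ** 2 / (2 * T)))

def klauder_wavelet(f1, f2, T, dt, half=200):
    """Klauder wavelet: autocorrelation of the sweep, windowed to
    +/- half samples around zero lag (zero phase by construction)."""
    s = linear_sweep(f1, f2, T, dt)
    k = np.correlate(s, s, mode='full') / len(s)
    mid = len(k) // 2
    return k[mid - half:mid + half + 1]

def remove_klauder(trace, k, eps=0.01):
    """Zero-phase removal of the centred Klauder wavelet by spectral
    division with a water level eps."""
    half = len(k) // 2
    nfft = 1 << int(np.ceil(np.log2(len(trace) + len(k))))
    kpad = np.zeros(nfft)
    kpad[:len(k)] = k
    K = np.fft.rfft(np.roll(kpad, -half))      # centre at lag 0: zero phase
    S = np.fft.rfft(trace, nfft)
    denom = np.abs(K) ** 2 + eps * np.max(np.abs(K)) ** 2
    return np.fft.irfft(S * np.conj(K) / denom, nfft)[:len(trace)]
```

After such a removal the remaining wavelet is the convolution of the component minimum-phase wavelets, so spiking deconvolution can proceed as on a dynamite record.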

9.
In contrast to the conventional (Wiener-Levinson) deconvolution technique, spiking, predictive, and gap deconvolution are realized here with an adaptive updating technique for the prediction operator. As the prediction operator is updated from sample to sample, the procedure can be used for time-variant deconvolution. The updating formulae discussed are the adaptive updating formula and the sequential algorithm of the sequential estimation technique. The updating technique is illustrated on both synthetic and real seismic data.
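A sample-by-sample operator update of this kind can be sketched with a normalized-LMS rule, one common adaptive update; the paper's sequential-estimation formulae differ in detail, and the filter length, gap, and step size below are assumed values.

```python
import numpy as np

def adaptive_predictive_decon(trace, nfilt=20, gap=1, mu=0.05):
    """Predictive deconvolution with a sample-by-sample operator update
    (normalized LMS). The output is the prediction error, i.e. the
    deconvolved trace; gap=1 gives spiking-type prediction."""
    f = np.zeros(nfilt)
    out = np.zeros(len(trace))
    for n in range(len(trace)):
        x = np.zeros(nfilt)
        if n - gap >= 0:                       # most recent past sample first
            past = trace[max(0, n - gap - nfilt + 1):n - gap + 1][::-1]
            x[:len(past)] = past
        e = trace[n] - f @ x                   # prediction error
        out[n] = e
        f += mu * e * x / (x @ x + 1e-8)       # update the prediction operator
    return out
```

Because the operator keeps adapting, the same loop tracks a slowly changing wavelet, which is the time-variant property the abstract emphasizes.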

10.
The convolution assumption between excess rainfall and runoff provides a framework in which catchment runoff can be predicted with reasonable accuracy and moderate computational cost. Associated with it, the deconvolution problem of estimating unit-hydrograph ordinates from rainfall-runoff events involves a matrix with a particularly simple structure. This matrix structure is used here as a basis on which the ill-posed nature of deconvolution is analysed. As a result, based on a simple transform of the excess rainfall data, a very simple criterion is derived to test the degree to which deconvolution may yield a unit hydrograph estimate displaying spurious oscillations of large magnitude. This has practical implications, as the solution to an ill-posed problem can be very sensitive to errors in the model and the data and may therefore need to be stabilized. Illustration of these issues is provided using published rainfall-runoff data.
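The "particularly simple structure" is a lower-triangular Toeplitz matrix of the excess-rainfall ordinates, so that runoff is a matrix-vector product. A minimal sketch, with a hypothetical rainfall event; the condition number shown is a crude ill-posedness indicator, not the sharper criterion the paper derives from a transform of the rainfall data.

```python
import numpy as np

def rainfall_matrix(p, m):
    """Lower-triangular Toeplitz matrix P of excess-rainfall ordinates p,
    so that runoff q = P @ u for a unit hydrograph u with m ordinates."""
    n = len(p) + m - 1
    P = np.zeros((n, m))
    for j in range(m):
        P[j:j + len(p), j] = p                 # shifted copy of the hyetograph
    return P

p = np.array([1.0, 2.0, 3.0, 2.0, 1.0])        # hypothetical excess rainfall
P = rainfall_matrix(p, 8)
cond = np.linalg.cond(P)                       # large => unstable deconvolution
```

With noise-free data, least squares on this matrix recovers the unit hydrograph exactly; instability only appears once model or data errors meet a large condition number.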

11.
We propose a three-step bandwidth-enhancing wavelet deconvolution process, combining linear inverse filtering and non-linear reflectivity construction based on a sparseness assumption. The first step is conventional Wiener deconvolution. The second step consists of further spectral whitening outside the spectral bandwidth of the residual wavelet after Wiener deconvolution, i.e., the wavelet resulting from application of the Wiener deconvolution filter to the original wavelet, which usually is not a perfect spike due to band limitations of the original wavelet. We specifically propose a zero-phase filtered sparse-spike deconvolution as the second step, to recover the reflectivity dominantly outside the bandwidth of the residual wavelet after Wiener deconvolution. The filter applied to the sparse-spike deconvolution result is proportional to the deviation of the amplitude spectrum of the residual wavelet from unity: it is of higher amplitude the closer the amplitude spectrum of the residual wavelet is to zero, and of very low amplitude the closer it is to unity. The third step consists of summing the data from the first two steps, gradually adding the contribution from the sparse-spike deconvolution result at those frequencies at which the residual wavelet after Wiener deconvolution has small amplitudes. We propose to call this technique "sparsity-enhanced wavelet deconvolution". We demonstrate the technique on real data with the deconvolution of the (normal-incidence) source-side sea-surface ghost of marine towed-streamer data. We also present the extension of the proposed technique to time-varying wavelet deconvolution.

12.
Seismic interferometry is the process of generating new seismic traces from the cross-correlation, convolution or deconvolution of existing traces. One of the starting assumptions for deriving the representations for seismic interferometry by cross-correlation is that there is no intrinsic loss in the medium where the recordings are performed. In practice, this condition is not always met. Here, we investigate the effect of intrinsic losses in the medium on the results retrieved from seismic interferometry by cross-correlation. First, we show results from a laboratory experiment in a homogeneous sand chamber with strong losses. Then, using numerical modelling results, we show that in the case of a lossy medium ghost reflections will appear in the cross-correlation result when internal multiple scattering occurs. We also show that if a loss compensation is applied to the traces to be correlated, these ghosts in the retrieved result can be weakened, can disappear, or can reverse their polarity. This compensation process can be used to estimate the quality factor in the medium.
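The basic cross-correlation retrieval can be illustrated in one lossless dimension: two receivers record the same transient from a distant source, and their cross-correlation peaks at the inter-receiver traveltime, as if one receiver had acted as a source. The geometry and wavelet below are assumed toy values.

```python
import numpy as np

c, dt = 2000.0, 0.001                          # velocity (m/s), sample rate (s)
xA, xB = 500.0, 800.0                          # receiver positions, source at x=0

def ricker(t, t0, f=20.0):
    a = (np.pi * f * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.arange(0.0, 2.0, dt)
uA = ricker(t, xA / c)                         # arrival at receiver A (0.25 s)
uB = ricker(t, xB / c)                         # arrival at receiver B (0.40 s)
xc = np.correlate(uB, uA, mode='full')
lag = (np.argmax(xc) - (len(t) - 1)) * dt      # retrieved traveltime A -> B
```

In a lossy medium this simple picture breaks down, which is exactly where the ghost reflections discussed in the abstract come from.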

13.
14.
The most common noise-reduction methods employed in the vibroseis technique (e.g., spike and burst reduction, vertical stacking) are applied in the field to reduce noise at a very early stage. In addition, vibrator phase-control systems prevent signal distortions produced by non-linearity of the source itself. However, the success of these automatic correction methods depends on the parameter settings chosen by the operator and the actual characteristics of the distorting noise. More specific noise-reduction methods (e.g., Combisweep (trademark of Geco-Prakla), elimination of harmonics) increase production costs or need uncorrelated data for the correction process. Because the field data are usually correlated and vertically stacked in the field to minimize logistical and processing costs, it is not possible to make subsequent parameter corrections to optimize the noise reduction after correlation and vertical stacking of a production record. The noise-reduction method described here uses the final recorded, correlated and stacked vibroseis field data. The method eliminates signal artifacts, caused for example by incorrect vibroseis source signals being used in parameter estimation, by combining a frequency-time analysis with a standard convolution process. Depending on the nature of the distortions, a synthetically generated, nearly recursive noise-separation operator compresses the noise artifact in time using a trace-by-trace filter. After elimination of this compressed noise, re-application of the separation operator leads to a noise-corrected replacement of the input data. The method is applied to a synthetic data set and to a real vibroseis field record from deep seismic sounding, with good results.

15.
Some time ago, we described and implemented two methods of seismic data compression. In the first method, a seismic trace is considered as the convolution of a distribution made up of the trace peak values with a Gaussian pseudo-pulse. The second method is performed through a truncation of the sequential (Walsh, Paley or Haar) spectrum of each trace. In this paper it is shown that neither method adversely affects quality when traces compressed in this way undergo conventional data processing, such as stacking and deconvolution.
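Truncating a sequential spectrum can be sketched with the (orthonormal) Haar transform: keep only the largest-magnitude coefficients and zero the rest. This is a generic illustration with an assumed `keep` fraction, not the authors' exact truncation rule.

```python
import numpy as np

def haar_compress(trace, keep=0.25):
    """Compress by truncating the orthonormal Haar spectrum: keep the
    largest-magnitude fraction `keep` of coefficients, zero the rest.
    Trace length must be a power of two."""
    n = len(trace)
    c = trace.astype(float).copy()
    h = n
    while h > 1:                               # forward Haar pyramid
        half = h // 2
        a = (c[:h:2] + c[1:h:2]) / np.sqrt(2)  # pairwise averages
        d = (c[:h:2] - c[1:h:2]) / np.sqrt(2)  # pairwise differences
        c[:half], c[half:h] = a, d
        h = half
    k = max(1, int(keep * n))
    c[np.argsort(np.abs(c))[:-k]] = 0.0        # discard the smallest ones
    return c

def haar_reconstruct(c):
    """Invert the Haar pyramid built by haar_compress."""
    c = c.copy()
    n = len(c)
    h = 1
    while h < n:
        a, d = c[:h].copy(), c[h:2 * h].copy()
        c[0:2 * h:2] = (a + d) / np.sqrt(2)
        c[1:2 * h:2] = (a - d) / np.sqrt(2)
        h *= 2
    return c
```

Because the transform is orthonormal, the reconstruction error is exactly the energy of the discarded coefficients, which is why mild truncation has little effect on later processing.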

16.
A new approach to deconvolution has been developed to improve the attenuation of multiple energy. This approach is unique in that it not only eliminates the usual assumptions of a minimum-phase-lag wavelet and a random distribution of impulses, but also overcomes the noise limitation of homomorphic deconvolution and its inherent instability in phase computation. We analyse the continuous alteration of the acoustic waveform during propagation through a linear system. Based on the results of this analysis, the surface-related measurements are described as a convolution of the impulse response of the system with the non-stationary forward wavelet, which includes all multiple terms generated within the system. The amplitude spectrum of the forward wavelet is recovered from the amplitude spectrum of the recorded signal, using the difference between the rate of decay of the source wavelet and the duration of the measurement. The phase spectrum of the forward wavelet is estimated using the Hilbert transform and the fact that the mixed-phase-lag wavelet can be represented as a convolution of the minimum- and maximum-phase-lag wavelets. The multiples are discriminated from primaries by comparison of the phase spectrum of the seismic signal with that of the inverse of the forward wavelet. The technique is therefore called phase inversion deconvolution (PID). This approach requires no velocity information in order to recognize and attenuate multiple energy; primary energy is therefore recovered in the near-offset region, where the velocity differential between primary and multiple energies is very small.
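The Hilbert-transform phase estimation mentioned above rests on the minimum-phase relation: the phase spectrum is minus the periodic Hilbert transform of the log-amplitude spectrum. A minimal check on a toy two-sample minimum-phase wavelet, assuming numpy/scipy; this is the underlying relation only, not the PID algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def minimum_phase_phase(A):
    """Phase spectrum of the minimum-phase wavelet whose amplitude
    spectrum (full FFT circle) is A, via the periodic Hilbert
    transform of log A."""
    logA = np.log(np.maximum(A, 1e-8 * np.max(A)))
    return -np.imag(hilbert(logA))             # theta = -H{ln |A|}

w = np.array([1.0, 0.5])                       # toy minimum-phase wavelet
W = np.fft.fft(w, 64)
phase = minimum_phase_phase(np.abs(W))
wm = np.real(np.fft.ifft(np.abs(W) * np.exp(1j * phase)))
```

Recombining the amplitude spectrum with the estimated phase reproduces the wavelet, confirming the amplitude alone determines a minimum-phase signal.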

17.
Static correction computations require knowledge of the refracted traveltimes. Zero-phase wavelet sources cannot be picked reliably when incoherent picking techniques are used. Assuming a complex convolutional model for Vibroseis, a coherent picking technique based on the matched filter is described. In order to match the filter to the first arrival wavelet an adaptive algorithm is used. This allows the filter to change both with shot and offset so that all the properties of matched filtering such as improvement of S/N and resolution can be exploited. Incoherent picking is used before coherent picking to improve the convergence of adaptive picking.
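The core of coherent picking is the matched filter: cross-correlate the trace with the expected first-arrival wavelet and pick the correlation maximum. The sketch below uses a fixed Ricker wavelet with assumed parameters; the paper adapts the filter per shot and offset.

```python
import numpy as np

def ricker(n, f):
    """Zero-phase Ricker wavelet on integer samples -n..n
    (f in cycles per sample)."""
    t = np.arange(-n, n + 1)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def matched_filter_pick(trace, wavelet, dt):
    """Coherent pick: cross-correlate the trace with the expected
    first-arrival wavelet and return the time of maximum correlation."""
    xc = np.correlate(trace, wavelet, mode='valid')
    return np.argmax(xc) * dt
```

Because the matched filter maximizes S/N for a known wavelet in white noise, the pick stays stable at noise levels where incoherent (threshold-type) picking fails.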

18.
Precise identification of the electromagnetic response of a pseudorandom coded source
Compared with conventional step-waveform excitation, encoding the transmitted source waveform with a pseudorandom m-sequence improves both the depth of investigation and the resolving power of electromagnetic exploration. However, the autocorrelation-sidelobe effect of such coded source waveforms limits how precisely the earth impulse response can be identified. To address this problem, and building on earlier correlation-identification work, this paper further accounts for the transmitter autocorrelation sidelobes: first, a mathematical method is proposed for extracting the earth impulse response with high accuracy from the transmitter-receiver cross-correlation; numerical simulations then give precisely identified earth impulse responses for an m-sequence coded source, and the choice of the relevant parameters for an exploration system transmitting m-sequences is analysed; finally, the proposed method is applied to field data, and comparison with the results of other EM methods demonstrates its reliability.
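The reason m-sequence coding works for correlation identification is that the circular autocorrelation of a +/-1 m-sequence of length N is N at zero lag and -1 elsewhere, so cross-correlating the received signal with the transmitted sequence nearly reproduces the earth impulse response. The sketch below shows that plain correlation step with an assumed 7-bit register (taps [7, 6]) and a toy impulse response; it does not implement the paper's refined sidelobe-removal method.

```python
import numpy as np

def mls(nbits, taps):
    """Maximum-length (m-) sequence mapped to +/-1, generated by a
    Fibonacci LFSR with the given feedback taps (1-based bit positions)."""
    state = np.ones(nbits, dtype=int)
    n = 2 ** nbits - 1
    seq = np.zeros(n)
    for i in range(n):
        seq[i] = 1.0 - 2.0 * state[-1]         # output bit mapped to +/-1
        fb = 0
        for tp in taps:
            fb ^= state[tp - 1]                # XOR of the tapped bits
        state = np.roll(state, 1)
        state[0] = fb
    return seq

m = mls(7, [7, 6])                             # length-127 m-sequence
h = np.array([1.0, 0.6, 0.3, 0.1])             # toy earth impulse response
H = np.fft.fft(np.pad(h, (0, len(m) - len(h))))
y = np.real(np.fft.ifft(np.fft.fft(m) * H))    # received signal (circular model)
g = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(m)))) / len(m)
```

The recovered `g` matches `h` up to a small constant offset of order 1/N: that residual is exactly the sidelobe effect the paper's refined extraction removes.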

19.
A new spectral factorization method is presented for the estimation of a causal as well as causally invertible ARMA operator from the correlation sequence of seismic traces. The method has been implemented for multichannel deconvolution of seismic traces with the aim of exploiting the trace-to-trace correlation that exists within seismograms. A layered earth model with a small reflectivity sequence has been considered, and the seismic traces have been treated as the output of a linear system driven by white-noise reflection-coefficient sequences. The method is the concatenation of three algorithms: Kung's method for state-variable (F, G, H) realization using a singular value decomposition (SVD) algorithm, Faurre's technique for computation of the strong spectral factor, and Leverrier's algorithm for ARMA representation of the spectral factor. The inverted ARMA operator is used as a recursive filter for deconvolution of seismic traces. In the example shown, two traces with a covariance sequence of 160 ms length have been considered for multichannel deconvolution of stacked seismic traces. The results, when compared with those obtained from a conventional deconvolution algorithm, show encouraging prospects.

20.
Seismic data processing mostly takes the statistics inherent in the data into account to improve data quality. For some years, however, the deterministic approach to processing has shown many advantages. This approach takes into account, for example, the source signature, with knowledge of its amplitude and phase behavior. The transformation of the signal into an optimized form is called wavelet processing. This step produces an optimal input for deconvolution, which needs a minimum-delay signal to function well. The interpreter needs a signal that gives optimum resolution, which is accomplished by a zero-phase transformation of the input signal. The combination of different sources, such as Vibroseis and dynamite, requires a phase adaptation. All these procedures can be implemented via two-sided recursive (TSR) filters. Spectral balancing can be accomplished very effectively in the time domain after a minimum-delay transform of the input signals. The DEKORP data suffer from a low signal-to-noise ratio, so special methods for the suppression of coherent noise trains were developed; this can be done by subtractive coherency filtering. Multiple seismic reflections can also be suppressed very effectively by this method. All processing procedures developed during recent years are now fully integrated in commercial software operated by the processing center in Clausthal.
