Similar Documents
20 similar documents found.
1.
Least squares Fourier reconstruction is basically a solution to a discrete linear inverse problem that attempts to recover the Fourier spectrum of the seismic wavefield from irregularly sampled data along the spatial coordinates. The estimated Fourier coefficients are then used to reconstruct the data on a regular grid via a standard inverse Fourier transform (inverse discrete Fourier transform or inverse fast Fourier transform). Unfortunately, this kind of inverse problem is usually under-determined and ill-conditioned. For this reason, least squares Fourier reconstruction with minimum norm adopts a damped least squares inversion to retrieve a unique and stable solution. In this work, we show how the damping can introduce artefacts in the reconstructed 3D data. To quantitatively describe this issue, we introduce the concept of an “extended” model resolution matrix, and we formulate the reconstruction problem as an appraisal problem. Through the simultaneous analysis of the extended model resolution matrix and of the noise term, we discuss the limits of Fourier reconstruction with minimum norm, assess the validity of the reconstructed data and identify the possible bias introduced by the inversion process. We can also guide the parameterization of the forward problem to minimize the occurrence of unwanted artefacts. A simple synthetic example and real data from a 3D marine common shot gather are used to discuss our approach and to show the results of Fourier reconstruction with minimum norm.
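To make the damping and appraisal idea concrete, below is a minimal numpy sketch of damped least-squares Fourier reconstruction on a tiny 1D example, together with the associated model resolution matrix whose departure from the identity reveals the bias introduced by the damping. The problem size, damping value and variable names are illustrative assumptions, not the authors' 3D implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregularly sampled spatial positions and a two-wavenumber test signal.
x = np.sort(rng.uniform(0.0, 1.0, 40))            # irregular spatial coordinates
k = np.arange(-16, 17)                            # wavenumbers to invert for
d = np.exp(2j * np.pi * 5 * x) + 0.5 * np.exp(-2j * np.pi * 3 * x)

# Forward operator: data = F @ m, with m the unknown Fourier coefficients.
F = np.exp(2j * np.pi * np.outer(x, k))

# Damped least-squares (minimum-norm) solution: m = (F^H F + eps I)^-1 F^H d.
eps = 1e-1 * len(x)
FhF = F.conj().T @ F
m = np.linalg.solve(FhF + eps * np.eye(len(k)), F.conj().T @ d)

# Model resolution matrix R = (F^H F + eps I)^-1 F^H F; its deviation from the
# identity quantifies the smearing and bias introduced by the damping.
R = np.linalg.solve(FhF + eps * np.eye(len(k)), FhF)
print("max |1 - diag(R)|:", np.max(np.abs(1.0 - np.diag(R).real)))

# Reconstruct on a regular grid with a standard inverse transform.
x_reg = np.linspace(0.0, 1.0, 64, endpoint=False)
d_reg = np.exp(2j * np.pi * np.outer(x_reg, k)) @ m
```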

2.
Seismic data reconstruction, as a preconditioning process, is critical to the performance of subsequent data and imaging processing tasks. Often, seismic data are sparsely and non-uniformly sampled due to limitations of economic costs and field conditions. However, most reconstruction algorithms are designed for the ideal case of uniformly sampled data. In this paper, we propose a non-equispaced fast discrete curvelet transform-based three-dimensional reconstruction method that can handle and interpolate non-uniformly sampled data effectively along two spatial coordinates. In the procedure, the three-dimensional seismic data sets are organized as a sequence of two-dimensional time slices in the source–receiver domain. By introducing the two-dimensional non-equispaced fast Fourier transform into the conventional fast discrete curvelet transform, we formulate an L1 sparsity regularized problem to invert for the uniformly sampled curvelet coefficients from the non-uniformly sampled data. To improve the efficiency of the inversion algorithm, we employ the linearized Bregman method to solve the L1-norm minimization problem. Once the uniform curvelet coefficients are obtained, uniformly sampled three-dimensional seismic data can be reconstructed via the conventional inverse curvelet transform. The reconstructed results using both synthetic and real data demonstrate that the proposed method can reconstruct not only non-uniformly sampled and aliased data with missing traces, but also a subset of data observed on a non-uniform grid onto a specified uniform grid along two spatial coordinates. The results also show that the simple linearized Bregman method is superior, in terms of reconstruction accuracy, to the more complex spectral projected gradient for L1-norm method.
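The linearized Bregman iteration mentioned above can be written in a few lines. The hedged sketch below solves a generic L1-minimization problem with a dense random matrix standing in for the combined non-equispaced FFT/curvelet operator, so the operator, step size, threshold and problem size are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def soft_threshold(v, mu):
    """Component-wise shrinkage used by the linearized Bregman iteration."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=3.0, n_iter=3000):
    """Approximately solve min ||x||_1 subject to A x = b via linearized Bregman."""
    x = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])                  # unthresholded accumulator variable
    tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size within the convergence bound
    for _ in range(n_iter):
        v += tau * (A.T @ (b - A @ x))        # gradient step on the accumulator
        x = soft_threshold(v, mu)             # shrinkage yields the sparse iterate
    return x

# Toy usage: try to recover a 3-sparse coefficient vector from 80 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
A /= np.linalg.norm(A, axis=0)                # unit-norm columns
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -1.5, 0.8]
b = A @ x_true
x_rec = linearized_bregman(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```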

3.
Non-linear inversion of magnetotelluric sounding data by quadratic function approximation
Quadratic-function-approximation non-linear optimization is applied to the magnetotelluric (MT) sounding inverse problem for the first time. The method exploits the fact that a quadratic function has a unique minimum to approximate the MT inversion model, thereby avoiding the local minima that trap conventional iterative inversion, achieving a global minimum of the objective function and better handling the non-uniqueness problem. In addition, the method requires neither the sensitivity matrix nor any particular initial model. Tests on theoretical models, comparison of the inversion result at an MT site next to a well with the well-log curves, and comparison of the inverted resistivity-depth section along an MT line with the time section of a seismic line all show that the proposed method performs well in practice.
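As a hedged, one-parameter illustration of the core idea, the sketch below fits a parabola (which always has a unique extremum) through three trial models and repeatedly jumps to its vertex; the toy objective, bounds and worst-point replacement rule are illustrative assumptions, not the paper's multi-parameter MT scheme.

```python
import numpy as np

def parabola_vertex(m, f):
    """Vertex of the quadratic through three (model, misfit) pairs."""
    c = np.polyfit(m, f, 2)                     # f(m) ~ c0*m^2 + c1*m + c2
    return -c[1] / (2.0 * c[0])

def quadratic_approx_minimize(objective, m_lo, m_hi, n_iter=8):
    m = np.array([m_lo, 0.5 * (m_lo + m_hi), m_hi], dtype=float)
    for _ in range(n_iter):
        f = np.array([objective(mi) for mi in m])
        m_new = parabola_vertex(m, f)
        # Replace the worst of the three trial models with the parabola vertex.
        m[np.argmax(f)] = np.clip(m_new, m_lo, m_hi)
    return m[np.argmin([objective(mi) for mi in m])]

# Toy usage: misfit of a single log-resistivity model parameter against an "observed" value.
m_obs = np.log10(120.0)
misfit = lambda m_: (m_ - m_obs) ** 2 + 0.1 * (m_ - m_obs) ** 4
m_best = quadratic_approx_minimize(misfit, 0.0, 4.0)
print(10 ** m_best)                             # resistivity estimate in ohm-m
```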

4.
Reflections in seismic data induce serious non-linearity in the objective function of full-waveform inversion. Thus, without a good initial velocity model that can produce reflections within a half cycle of the frequency used in the inversion, convergence to a solution becomes difficult. As a result, we tend to invert for refracted events and damp reflections in the data. Reflection-induced non-linearity stems from cycle skipping between the imprint of the true model in the observed data and that of the predicted model in the synthesized data. Inverting for the phase of the model allows us to address this problem by avoiding the source of the non-linearity, the phase-wrapping phenomenon. Most of the information related to the location (or depths) of interfaces is embedded in the phase component of a model, mainly influenced by the background model, while the velocity-contrast information (responsible for the reflection energy) is mainly embedded in the amplitude component. In combination with unwrapping the phase of the data, which mitigates the non-linearity introduced by the source function, I develop a framework to invert for the unwrapped phase of the model, represented by the instantaneous depth, using the unwrapped phase of the data. The resulting gradient function provides a mechanism to non-linearly update the velocity model by applying mainly phase shifts to the model. In using the instantaneous depth as a model parameter, we keep track of the model properties unaffected by the wrapping phenomenon.

5.
Potential field data such as geoid and gravity anomalies are globally available and offer valuable information about the Earth's lithosphere, especially in areas where seismic data coverage is sparse. For instance, non-linear inversion of Bouguer anomalies could be used to estimate crustal structures, including variations of the crustal density and of the depth of the crust–mantle boundary, that is, the Moho. However, due to the non-linearity of this inverse problem, classical inversion methods would fail whenever there is no reliable initial model. Swarm intelligence algorithms, such as particle swarm optimisation, are a promising alternative to classical inversion methods because the quality of their solutions does not depend on the initial model; they do not use the derivatives of the objective function, hence allowing the use of the L1 norm; and finally, they are global search methods, meaning that the problem may be non-convex. In this paper, quantum-behaved particle swarm optimisation, a probabilistic swarm-intelligence algorithm, is used to solve the non-linear gravity inverse problem. The method is first successfully tested on a realistic synthetic crustal model with a linear vertical density gradient and lateral density and depth variations at the base of the crust in the presence of white Gaussian noise. Then, it is applied to EIGEN-6C4, a combined global gravity model, to estimate the depth to the base of the crust and the mean density contrast between the crust and the upper-mantle lithosphere in the Eurasia–Arabia continental collision zone along a 400 km profile crossing the Zagros Mountains (Iran). The results agree well with previously published works, including both seismic and potential field studies.
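For readers unfamiliar with the optimiser, the following is a hedged, generic sketch of the standard quantum-behaved particle swarm update (mean-best position, local attractor and the logarithmic sampling rule) applied to a toy two-parameter misfit; the objective, bounds and coefficient schedule are illustrative assumptions, not the gravity inversion itself.

```python
import numpy as np

def qpso(objective, bounds, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for t in range(n_iter):
        beta = 1.0 - 0.5 * t / n_iter                  # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)                     # mean best position of the swarm
        phi = rng.uniform(size=x.shape)
        p = phi * pbest + (1.0 - phi) * gbest          # local attractor per particle
        u = rng.uniform(size=x.shape)
        sign = np.where(rng.uniform(size=x.shape) < 0.5, -1.0, 1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)                         # keep particles in the model bounds
        f = np.array([objective(p_i) for p_i in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy usage: fit Moho depth (km) and density contrast (kg/m^3) to "observed" values.
true_m = np.array([40.0, 400.0])
misfit = lambda m: np.sum(((m - true_m) / np.array([50.0, 600.0])) ** 2)
best, best_f = qpso(misfit, (np.array([20.0, 100.0]), np.array([60.0, 800.0])))
print(best, best_f)
```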

6.
Seismic field data are often irregularly or coarsely sampled in space due to acquisition limits. However, complete and regular data are required by most conventional seismic processing and imaging algorithms. We have developed a fast joint curvelet-domain seismic data reconstruction method using sparsity-promoting inversion based on compressive sensing. We seek a sparse representation of the incomplete seismic data by curvelet coefficients and solve sparsity-promoting problems through an iterative thresholding process to reconstruct the missing data. In conventional iterative thresholding algorithms, the updated reconstruction result of each iteration is obtained by adding the gradient to the previous result and thresholding it. The algorithm is stable and accurate but requires many iterations. The linearised Bregman method can accelerate the convergence by replacing the previous result with that before thresholding, thus promoting the effective coefficients added to the result. The method is faster than the conventional one, but it can cause artefacts near the missing traces while reconstructing small-amplitude coefficients, because some coefficients in the unthresholded results wrongly represent the residual of the data. The key idea of the joint curvelet-domain reconstruction method is to use the previous results of both the conventional method and the linearised Bregman method, stabilising the reconstruction quality while accelerating the recovery. The acceleration rate is controlled through weighting to adjust the contribution of the acceleration term and the stable term. A strong acceleration can be applied for the recovery of comparatively small gaps, whereas a mild acceleration is more appropriate when the incomplete data have a large gap in high-amplitude events. Finally, we carry out a fast and stable recovery using the trade-off algorithm. Synthetic and field data tests verify that the joint curvelet-domain reconstruction method can effectively and quickly reconstruct seismic data with missing traces.
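The sketch below contrasts, in generic numpy form, the conventional soft-thresholding update with the linearised Bregman update, and shows one plausible reading of the weighted joint update described above (setting the weight to 0 or 1 recovers the two established schemes). The mixing rule, the dense random operator standing in for the curvelet transform and all parameter values are illustrative assumptions and may differ from the paper's actual algorithm.

```python
import numpy as np

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def joint_reconstruction(A, b, lam=0.05, w=0.5, n_iter=300):
    """w = 0 -> conventional iterative thresholding; w = 1 -> linearised Bregman."""
    x = np.zeros(A.shape[1])                      # thresholded (stable) iterate
    v = np.zeros(A.shape[1])                      # unthresholded (acceleration) iterate
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        g = tau * (A.T @ (b - A @ x))             # gradient of the data-misfit term
        v = v + g                                 # accumulate without thresholding
        x = soft(w * v + (1.0 - w) * (x + g), lam)   # weighted mix, then threshold
    return x

# Toy usage on a random underdetermined system standing in for the curvelet operator:
# the printed residual norms simply show that all three settings produce data fits.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 150))
A /= np.linalg.norm(A, axis=0)
x0 = np.zeros(150)
x0[[7, 70]] = [1.0, -1.5]
b = A @ x0
for w in (0.0, 1.0, 0.5):
    x_rec = joint_reconstruction(A, b, w=w)
    print(w, np.linalg.norm(A @ x_rec - b))
```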

7.
Reconstruction of irregularly sampled seismic data is an important problem in seismic data analysis and processing. This paper presents a least-squares inversion method for seismic data reconstruction based on the non-uniform fast Fourier transform. A regularization term constrained by the power spectrum is introduced into the least-squares interpolation equations; by means of the non-uniform fast Fourier transform and a modified periodogram, the constraint term is updated adaptively and iteratively, so that the spectrum of the data to be interpolated approaches the true spectrum ever more closely. The system is solved iteratively with a preconditioned conjugate gradient method, which guarantees the stability of the solution and the rate of convergence. Interpolation tests on theoretical models and real seismic data show that the method removes spatial aliasing, is fast, interpolates well and is of practical value.

8.
Autoregressive modeling is used to estimate the spectrum of aliased data. A region of spectral support is determined by identifying the location of peaks in the estimated spatial spectrum of the data. This information is used to pose a Fourier reconstruction problem that inverts for a few dominant wavenumbers that are required to model the data. Synthetic and real data examples are used to illustrate the method. In particular, we show that the proposed method can accurately reconstruct aliased data and data with gaps.
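A hedged 1D sketch of this workflow is given below: a Yule-Walker autoregressive fit provides a spatial spectrum, its peaks define the spectral support, and a small least-squares Fourier problem restricted to those dominant wavenumbers reconstructs the gapped traces. The toy signal, AR order and peak-picking threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
nx = 64
x_full = np.arange(nx, dtype=float)                      # regular spatial grid (dx = 1)
d_true = np.cos(2 * np.pi * 0.11 * x_full) + 0.5 * np.cos(2 * np.pi * 0.27 * x_full)
d_noisy = d_true + 0.01 * rng.standard_normal(nx)
missing = np.arange(44, 56)                              # a gap of 12 consecutive traces
observed = np.setdiff1d(np.arange(nx), missing)

def ar_spectrum(sig, order, k_axis):
    """Yule-Walker AR fit and the corresponding power spectrum on the wavenumber axis."""
    r = np.correlate(sig, sig, "full")[len(sig) - 1:] / len(sig)     # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                           # AR coefficients
    e = np.exp(-2j * np.pi * np.outer(k_axis, np.arange(1, order + 1)))
    return 1.0 / np.abs(1.0 - e @ a) ** 2

# Estimate the spatial spectrum from a regularly sampled, complete segment of the data.
k_axis = np.linspace(0.0, 0.5, 256)
spec = ar_spectrum(d_noisy[:44], order=8, k_axis=k_axis)

# Spectral support: local maxima above a fraction of the strongest peak.
peaks = (spec > np.roll(spec, 1)) & (spec > np.roll(spec, -1)) & (spec > 0.1 * spec.max())
k_dom = k_axis[peaks]
print("dominant wavenumbers:", np.round(k_dom, 3))

# Fourier reconstruction restricted to the dominant wavenumbers (cosine/sine pairs).
def basis(x, k):
    return np.hstack([np.cos(2 * np.pi * np.outer(x, k)), np.sin(2 * np.pi * np.outer(x, k))])

coef, *_ = np.linalg.lstsq(basis(x_full[observed], k_dom), d_noisy[observed], rcond=None)
d_rec = basis(x_full, k_dom) @ coef
print("rms error in the gap:", np.sqrt(np.mean((d_rec[missing] - d_true[missing]) ** 2)))
```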

9.
Practical applications of surface wave inversion demand reliable inverted shear-wave profiles and a rigorous assessment of the uncertainty associated with the inverted parameters. As a matter of fact, the surface wave inverse problem is severely affected by solution non-uniqueness: the degree of non-uniqueness is closely related to the complexity of the observed dispersion pattern and to the experimental inaccuracies in the dispersion measurements. Moreover, inversion pitfalls may be connected to specific problems such as inadequate model parametrization and incorrect identification of the surface wave modes. Consequently, it is essential to tune the inversion problem to the specific dataset under examination to avoid unnecessary computations and possible misinterpretations. In the heuristic inversion algorithm presented in this paper, different types of model constraints can be easily introduced to constructively bias the solution towards realistic estimates of the 1D shear-wave profile. This approach merges the advantages of global inversion, such as the extended exploration of the parameter space and a theoretically rigorous assessment of the uncertainties on the inverted parameters, with the practical approach of Lagrange multipliers, often used in deterministic inversion, which helps the inversion converge towards models with desired properties (e.g., ‘smooth’ or ‘minimum-norm’ models). In addition, two different forward kernels can be alternatively selected for direct-problem computations: either the conventional modal inversion or the direct minimization of the secular function, which allows the interpreter to avoid mode identification. A rigorous uncertainty assessment of the model parameters is performed by posterior covariance analysis on the accepted solutions, and the modal superposition associated with the inverted models is investigated by full-waveform modelling. This way, the interpreter has several tools to address the more probable sources of inversion pitfalls within the framework of a rigorous and well-tested global inversion algorithm. The effectiveness and the versatility of this approach, as well as the impact of the interpreter's choices on the final solution and on its posterior uncertainty, are illustrated using both synthetic and real data. In the latter case, the inverted shear-velocity profiles are compared blind against borehole data.

10.
A two-and-a-half-dimensional model-based inversion algorithm for the reconstruction of the geometry and conductivity of unknown regions using marine controlled-source electromagnetic (CSEM) data is presented. In the model-based inversion, the inversion domain is described by the so-called regional conductivity model, and both the geometry and the material parameters associated with this model are reconstructed in the inversion process. This method has the advantage of using a priori information such as the background conductivity distribution, structural information extracted from seismic and/or gravity measurements, and/or inversion results previously derived from a pixel-based inversion method. By incorporating this a priori information, the number of unknown parameters to be retrieved is significantly reduced. The inversion method is a regularized Gauss-Newton minimization scheme. The robustness of the inversion is enhanced by adopting nonlinear constraints and applying a quadratic line search algorithm to the optimization process. We also introduce an adjoint formulation to calculate the Jacobian matrix with respect to the geometrical parameters. The model-based inversion method is validated using several numerical examples, including the inversion of the Troll field data. These results show that the model-based inversion method can quantitatively reconstruct the shapes and conductivities of reservoirs.

11.
The seismic inversion problem is a highly non-linear problem that can be reduced to the minimization of the least-squares criterion between the observed and the modelled data. It has been solved using different classical optimization strategies that require a monotone descent of the objective function. We propose solving the full-waveform inversion problem using the non-monotone spectral projected gradient method: a low-cost and low-storage optimization technique that maintains the velocity values in a feasible convex region by frequently projecting them onto this convex set. The new methodology uses the gradient direction with a particular spectral step length that allows the objective function to increase at some iterations, guarantees convergence to a stationary point starting from any initial iterate, and greatly speeds up the convergence of gradient methods. We combine the new optimization scheme, as the solver of the full-waveform inversion, with a multiscale approach and apply it to a modified version of the Marmousi data set. The results of this application show that the proposed method performs better than the classical gradient method by reducing the number of function evaluations and the residual values.
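A hedged sketch of a non-monotone spectral projected gradient iteration is given below, using a Barzilai-Borwein step length, projection onto box constraints and a non-monotone backtracking line search on a toy quadratic misfit; the safeguards, memory length and toy problem are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def spg(f, grad, project, x0, n_iter=100, memory=10, gamma=1e-4):
    x = project(x0)
    g = grad(x)
    alpha = 1.0                                    # initial spectral step
    f_hist = [f(x)]
    for _ in range(n_iter):
        d = project(x - alpha * g) - x             # projected gradient direction
        f_ref = max(f_hist[-memory:])              # non-monotone reference value
        lam, gtd = 1.0, g @ d
        while f(x + lam * d) > f_ref + gamma * lam * gtd and lam > 1e-12:
            lam *= 0.5                             # backtracking allows some increase vs f(x)
        x_new = x + lam * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # Barzilai-Borwein step
        alpha = float(np.clip(alpha, 1e-10, 1e10))
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x

# Toy usage: keep "velocities" inside a feasible box while minimising a quadratic misfit.
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 2.0, 3.0])
f = lambda m: 0.5 * m @ A @ m - b @ m
grad = lambda m: A @ m - b
project = lambda m: np.clip(m, 0.0, 2.0)           # feasible convex set (box constraints)
print(spg(f, grad, project, np.zeros(3)))
```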

12.
Planar wave events recorded by a seismic array can be represented as lines in the Fourier domain. However, in the real world, seismic events usually have curvature or amplitude variability, which means that their Fourier transforms are no longer strictly linear but rather occupy conic regions of the Fourier domain that are narrow at low frequencies but broaden at high frequencies, where the effect of curvature becomes more pronounced. One can consider these regions as localised “signal cones”. In this work, we consider a space–time variable signal cone to model the seismic data. The variability of the signal cone is obtained through scaling, slanting, and translation of the kernel for cone-limited (C-limited) functions (functions whose Fourier transform lives within a cone) or of the C-Gaussian function (a multivariate function whose Fourier transform decays exponentially with respect to slowness and frequency), which constitutes our dictionary. We find a discrete number of scaling, slanting, and translation parameters from a continuum by optimally matching the data. This is a non-linear optimisation problem, which we address by a fixed-point method that utilises a variable projection method with ℓ1 constraints on the linear parameters and bound constraints on the non-linear parameters. We observe that the slow decay and oscillatory behaviour of the kernel for C-limited functions constitute bottlenecks for the optimisation problem, which we partially overcome by using the C-Gaussian function. We demonstrate our method through an interpolation example. We present the interpolation result using the estimated parameters obtained from the proposed method and compare it with those obtained using sparsity-promoting curvelet decomposition, matching pursuit Fourier interpolation, and sparsity-promoting plane-wave decomposition methods.

13.
In seismic waveform inversion, non-linearity and non-uniqueness require appropriate strategies. We formulate four types of L2-norm misfit functionals for Laplace-Fourier domain waveform inversion: i) subtraction of the complex-valued observed data from the complex-valued predicted data (the ‘conventional phase-amplitude’ residual), ii) a ‘conventional phase-only’ residual in which amplitude variations are normalized, iii) a ‘logarithmic phase-amplitude’ residual and finally iv) a ‘logarithmic phase-only’ residual in which only the imaginary part of the logarithmic residual is used. We evaluate these misfit functionals using a wide-angle field Ocean Bottom Seismograph (OBS) data set with a maximum offset of 55 km. The conventional phase-amplitude approach is restricted in illumination and delineates only shallow velocity structures. In contrast, the other three misfit functionals retrieve detailed velocity structures with clear lithological boundaries down to the deeper part of the model. We also test the performance of additional phase-amplitude inversions starting from the logarithmic phase-only inversion result. The resulting velocity updates are prominent only in the high-wavenumber components, sharpening the lithological boundaries. We argue that the discrepancies in the behaviours of the misfit functionals are primarily caused by the sensitivity of the model gradient to strong amplitude variations in the data. As the observed data amplitudes are dominated by the near-offset traces, the conventional phase-amplitude inversion primarily updates the shallow structures. In contrast, the other three misfit functionals naturally eliminate the strong dependence on amplitude variation and enhance the depth of illumination. We further suggest that the phase-only inversions are sufficient to obtain robust and reliable velocity structures and that the amplitude information is of secondary importance in constraining subsurface velocity models.
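The four residuals can be written down directly; the short numpy sketch below evaluates them for arrays of complex-valued observed and predicted spectra, with a small epsilon guarding the division and the logarithm (the variable names and the guard are illustrative assumptions).

```python
import numpy as np

def residuals(d_pred, d_obs, eps=1e-12):
    """The four Laplace-Fourier-domain residuals for complex-valued data arrays."""
    r_conv = d_pred - d_obs                                                     # i) phase-amplitude
    r_phase = d_pred / (np.abs(d_pred) + eps) - d_obs / (np.abs(d_obs) + eps)   # ii) phase-only
    r_log = np.log((d_pred + eps) / (d_obs + eps))                              # iii) logarithmic
    r_logphase = np.imag(r_log)                                                 # iv) log phase-only
    return r_conv, r_phase, r_log, r_logphase

# Toy usage with two complex samples per functional.
d_obs = np.array([1.0 + 1.0j, 0.3 - 0.2j])
d_pred = np.array([0.8 + 1.1j, 0.4 - 0.1j])
for name, r in zip(("conventional", "phase-only", "logarithmic", "log phase-only"),
                   residuals(d_pred, d_obs)):
    print(name, np.round(r, 3))
```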

14.
The problem of conversion from time-migration velocity to an interval velocity in depth in the presence of lateral velocity variations can be reduced to solving a system of partial differential equations. In this paper, we formulate the problem as a non-linear least-squares optimization for the seismic interval velocity and seek its solution iteratively. The input for the inversion is the Dix velocity, which also serves as an initial guess. The inversion gradually updates the interval velocity in order to account for lateral velocity variations that are neglected in the Dix inversion. The algorithm has a moderate cost thanks to regularization that speeds up convergence while ensuring a smooth output. The proposed method should be numerically robust compared to the previous approaches, which amount to monotone extrapolation in depth. For a successful time-to-depth conversion, image-ray caustics should be either nonexistent or excluded from the computational domain. The resulting velocity can be used in subsequent depth-imaging model building. Both synthetic and field data examples demonstrate the applicability of the proposed approach.
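Since the Dix velocity serves as both the input and the initial guess, a minimal sketch of the classical Dix conversion from RMS (time-migration) velocities to interval velocities is shown below for the 1D, laterally invariant case; the picked times and velocities are illustrative numbers.

```python
import numpy as np

def dix_interval_velocity(t0, v_rms):
    """Interval velocities from RMS velocities via the Dix formula."""
    t0, v_rms = np.asarray(t0, float), np.asarray(v_rms, float)
    num = np.diff(t0 * v_rms ** 2)               # t_i v_i^2 - t_{i-1} v_{i-1}^2
    den = np.diff(t0)
    v_int = np.sqrt(num / den)
    return np.concatenate(([v_rms[0]], v_int))   # first layer: interval equals RMS velocity

# Toy usage: zero-offset two-way times (s) and picked RMS velocities (m/s).
t0 = np.array([0.4, 0.8, 1.4, 2.0])
v_rms = np.array([1800.0, 2000.0, 2300.0, 2500.0])
print(np.round(dix_interval_velocity(t0, v_rms)))
```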

15.
Estimating elastic parameters from prestack seismic data remains a subject of interest for the exploration and development of hydrocarbon reservoirs. In geophysical inverse problems, data and models are in general non-linearly related. Linearized inversion methods often have the disadvantage of a strong dependence on the initial model. When the initial model is far from the global minimum, the inversion iteration is likely to converge to a local minimum. This problem can be avoided by using global optimization methods. In this paper, we implemented and tested a prestack seismic inversion scheme based on a quantum-behaved particle swarm optimization (QPSO) algorithm aided by an edge-preserving smoothing (EPS) operator. We applied the algorithm to estimate elastic parameters from prestack seismic data. Its performance on both synthetic data and real seismic data indicates that QPSO optimization with the EPS operator yields an accurate solution.

16.
This paper compares three alternative algorithms for estimating a source wavelet simultaneously with an earth model in full-waveform inversion: (i) simultaneous descent, (ii) alternating descent and (iii) descent with the variable projection method. The latter is a technique for solving separable least-squares problems that is well known in the applied mathematics literature. When applied to full-waveform inversion, it involves making the source wavelet an implicit function of the earth model via a least-squares filter-estimation process. Since the source wavelet becomes purely a function of the medium parameters, it no longer needs to be treated as a separate unknown in the inversion. Essentially, the predicted data are projected onto the measured data in a least-squares sense at every function evaluation, making use of the fact that the filter-estimation problem is trivial when compared to the full-waveform inversion problem. Numerical tests on a simple 1D model indicate that the variable projection method gives the best result, producing results whose quality is very similar to that of control experiments with a known, correct wavelet.
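The filter-estimation step at the heart of the variable projection approach reduces, in the frequency domain, to a one-line least-squares division per frequency. The hedged sketch below estimates a wavelet from data predicted with a unit source; the single-gather setting, the circular convolution used to build the toy data and all names are illustrative assumptions.

```python
import numpy as np

def project_wavelet(d_obs, d_pred_impulse, eps=1e-8):
    """Least-squares wavelet (frequency domain) given data predicted with a unit source."""
    D = np.fft.rfft(d_obs, axis=-1)
    P = np.fft.rfft(d_pred_impulse, axis=-1)
    # w(f) = sum_traces conj(P) D / (sum_traces |P|^2 + eps): the separable LS solution.
    W = (np.conj(P) * D).sum(axis=0) / ((np.abs(P) ** 2).sum(axis=0) + eps)
    return np.fft.irfft(W, n=d_obs.shape[-1])

# Toy usage: "observed" data are sparse impulse responses convolved with a known wavelet.
rng = np.random.default_rng(4)
impulse = rng.standard_normal((5, 256)) * (rng.uniform(size=(5, 256)) > 0.95)
t = np.arange(256)
w_true = np.exp(-0.5 * ((t - 20) / 3.0) ** 2) * np.cos(0.8 * (t - 20))
d_obs = np.fft.irfft(np.fft.rfft(impulse) * np.fft.rfft(w_true), n=256)
w_est = project_wavelet(d_obs, impulse)
print("wavelet recovery error:", np.linalg.norm(w_est - w_true) / np.linalg.norm(w_true))
```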

17.
Simultaneous estimation of velocity gradients and anisotropic parameters from seismic reflection data is one of the main challenges in migration velocity analysis for transversely isotropic media with a vertical symmetry axis. In migration velocity analysis, we usually construct the objective function using the l2 norm along with a linear conjugate gradient scheme to solve the inversion problem. Nevertheless, for seismic data this inversion scheme is not stable and may not converge in finite time. In order to ensure the uniform convergence of the parameter inversion and improve the efficiency of migration velocity analysis, this paper develops a double parameterized regularization model and gives the corresponding algorithms. The model is based on the combination of the l2 norm and the non-smooth l1 norm. To solve such an inversion problem, a quasi-Newton method is utilized to make the iterative process stable, which ensures the positive definiteness of the Hessian matrix. Numerical simulation indicates that this method allows fast convergence to the true model and simultaneously generates inversion results with higher accuracy. Therefore, our proposed method is very promising for practical migration velocity analysis in anisotropic media.

18.
We develop a two-dimensional full waveform inversion approach for the simultaneous determination of S-wave velocity and density models from SH- and Love-wave data. We illustrate the advantages of the SH/Love full waveform inversion with a simple synthetic example and demonstrate the method's applicability to a near-surface dataset recorded in the village of Čachtice in northwestern Slovakia. The goal of the survey was to map the remains of historical building foundations in a highly heterogeneous subsurface. The seismic survey comprises two parallel SH-profiles with maximum offsets of 24 m and covers a frequency range from 5 Hz to 80 Hz with a high signal-to-noise ratio, well suited for full waveform inversion. Using the Wiechert–Herglotz method, we determined a one-dimensional gradient velocity model as a starting model for full waveform inversion. The two-dimensional waveform inversion approach uses the global correlation norm as the objective function in combination with a sequential inversion of low-pass filtered field data. This mitigates the non-linearity of the multi-parameter inverse problem. Test computations show that the influence of visco-elastic effects on the waveform inversion result is rather small. Further tests using a mono-parameter shear-modulus inversion reveal that the inversion of the density model has no significant impact on the final data fit. The final full waveform inversion S-wave velocity and density models show a prominent low-velocity weathering layer. Below this layer, the subsurface is highly heterogeneous. Minimum anomaly sizes correspond to approximately half of the dominant Love wavelength. The results demonstrate the ability of two-dimensional SH waveform inversion to image shallow small-scale soil structure. However, they do not show any evidence of foundation walls.
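The global correlation norm used as the objective function can be sketched in a few lines: each observed and synthetic trace is normalised to unit energy and the zero-lag correlation is summed over traces, which makes the misfit insensitive to trace amplitude. The toy data below are illustrative.

```python
import numpy as np

def global_correlation_misfit(d_obs, d_syn, eps=1e-12):
    """Negative summed normalised zero-lag cross-correlation over traces (to be minimised)."""
    obs = d_obs / (np.linalg.norm(d_obs, axis=-1, keepdims=True) + eps)
    syn = d_syn / (np.linalg.norm(d_syn, axis=-1, keepdims=True) + eps)
    return -np.sum(obs * syn)

# Toy usage: a scaled copy of the data yields the same (optimal) misfit as the data itself.
rng = np.random.default_rng(5)
d = rng.standard_normal((3, 100))
print(global_correlation_misfit(d, d), global_correlation_misfit(d, 2.5 * d))
```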

19.
Full-waveform inversion is re-emerging as a powerful data-fitting procedure for quantitative seismic imaging of the subsurface from wide-azimuth seismic data. This method is suitable for building high-resolution velocity models provided that the targeted area is sampled by both diving waves and reflected waves. However, the conventional formulation of full-waveform inversion prevents the reconstruction of the small-wavenumber components of the velocity model when the subsurface is sampled by reflected waves only. This typically occurs as the depth becomes significant with respect to the length of the receiver array. This study first aims to highlight the limits of the conventional form of full-waveform inversion when applied to seismic reflection data, through a simple canonical example of seismic imaging, and then to propose a new inversion workflow that overcomes these limitations. The governing idea is to decompose the subsurface model into a background part, which we seek to update, and a singular part that corresponds to some prior knowledge of the reflectivity. Forcing this scale uncoupling in the full-waveform inversion formalism brings out, in the sensitivity kernel of the full-waveform inversion, the transmitted wavepaths that connect the sources and receivers to the reflectors; this kernel is otherwise dominated by the migration impulse responses formed by the correlation of the downgoing direct wavefields coming from the shot and receiver positions. This transmission regime makes full-waveform inversion amenable to the update of the long-to-intermediate wavelengths of the background model from the wide scattering-angle information. However, we show that this prior knowledge of the reflectivity still needs to be complemented by a suitable misfit measurement based on cross-correlation, to avoid cycle-skipping issues, and by a suitable inversion domain, such as the pseudo-depth domain, which allows us to preserve the invariant property of the zero-offset time. This latter feature is useful to avoid updating the reflectivity information at each non-linear iteration of the full-waveform inversion, hence considerably reducing the computational cost of the entire workflow. Prior information on the reflectivity in the full-waveform inversion formalism, a robust misfit function that prevents cycle-skipping issues and a suitable inversion domain that preserves the seismic invariant are the three key ingredients that should ensure the well-posedness and computational efficiency of full-waveform inversion algorithms for seismic reflection data.

20.
Full-waveform inversion is an appealing technique for time-lapse imaging, especially when prior model information is included in the inversion workflow. Once the baseline reconstruction is achieved, several strategies can be used to assess the physical parameter changes, such as the parallel difference (two separate inversions of the baseline and monitor data sets), sequential difference (inversion of the monitor data set starting from the recovered baseline model) and double-difference (inversion of the difference data starting from the recovered baseline model) strategies. Using synthetic Marmousi data sets, we investigate which strategy should be adopted to obtain more robust and more accurate time-lapse velocity changes in noise-free and noisy environments. This synthetic application demonstrates that the double-difference strategy provides the most robust time-lapse result. In addition, we propose a target-oriented time-lapse imaging using regularized full-waveform inversion that includes a prior model and model weighting, if prior information exists on the location of the expected variations. This scheme applies strong prior model constraints outside of the expected areas of time-lapse changes and relatively weaker prior constraints in the time-lapse target zones. In the application of this process to the Marmousi model data set, the local resolution analysis performed with spike tests shows that the target-oriented inversion prevents the occurrence of artefacts outside the target areas, which could otherwise contaminate and compromise the reconstruction of the effective time-lapse changes, especially when using the sequential difference strategy. In a strongly noisy case, the target-oriented prior model weighting ensures the same behaviour for both the double-difference and the sequential difference strategies and leads to a more robust reconstruction of the weak time-lapse changes. The double-difference strategy can deliver more accurate time-lapse variations since it focuses on inverting the difference data. However, the double-difference strategy requires a preprocessing step on the data sets, such as time-lapse binning, to obtain similar source/receiver locations between the two surveys, whereas the sequential difference strategy is less demanding in this respect. If prior information about the area of changes is available, the target-oriented sequential difference strategy can be an alternative and can provide the same robust result as the double-difference strategy.
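To make the three strategies concrete, the sketch below builds the starting models and composite data for the parallel difference, sequential difference and double-difference schemes around a toy linear "inversion". In this noise-free linear setting all three give similar updates, so the sketch only illustrates how each strategy is assembled; the operator, model sizes and the placeholder fwi routine are illustrative assumptions, not a real waveform inversion.

```python
import numpy as np

rng = np.random.default_rng(6)
G = rng.standard_normal((80, 50))                      # toy linear "forward modelling" operator
forward = lambda m: G @ m

def fwi(d, m0, n_iter=500):
    """Toy gradient-descent 'FWI' on a linear problem; stands in for the real inversion."""
    m = m0.copy()
    step = 1.0 / np.linalg.norm(G, 2) ** 2
    for _ in range(n_iter):
        m -= step * G.T @ (forward(m) - d)
    return m

m_base_true = np.ones(50)
m_mon_true = m_base_true.copy()
m_mon_true[20:25] += 0.1                               # weak, localised time-lapse change
d_base, d_mon = forward(m_base_true), forward(m_mon_true)
m0 = np.full(50, 0.8)                                  # crude starting model
m_base = fwi(d_base, m0)                               # baseline reconstruction

dm_parallel = fwi(d_mon, m0) - fwi(d_base, m0)         # two independent inversions
dm_sequential = fwi(d_mon, m_base) - m_base            # monitor inversion starts from baseline
d_diff = forward(m_base) + (d_mon - d_base)            # composite data for double difference
dm_double = fwi(d_diff, m_base) - m_base               # inversion of the difference data
for name, dm in (("parallel", dm_parallel), ("sequential", dm_sequential), ("double-diff", dm_double)):
    print(name, np.round(dm[18:27], 3))
```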

