Similar Documents
20 similar documents found (search time: 31 ms)
1.
A robust metric of data misfit, such as the ℓ1-norm, is required for geophysical parameter estimation when the data are contaminated by erratic noise. Recently, the iteratively re-weighted and refined least-squares algorithm was introduced for the efficient solution of geophysical inverse problems in the presence of additive Gaussian noise in the data. We extend the algorithm in two practically important directions to make it applicable to data with non-Gaussian noise and to make its regularisation-parameter tuning more efficient and automatic. The regularisation parameter in the iteratively re-weighted and refined least-squares algorithm varies with iteration, allowing the efficient solution of constrained problems. A technique based on the secant method for root finding is proposed to find a solution that satisfies the constraint, either fitting a target misfit (if a bound on the noise is available) or having a target size (if a bound on the solution is available). This technique leads to an automatic update of the regularisation parameter at every iteration. We further propose a simple and efficient scheme that tunes the regularisation parameter without requiring target bounds. This is of great importance for field-data inversion, where there is no information about the size of the noise or of the solution. Numerical examples from non-stationary seismic deconvolution and velocity-stack inversion show that the proposed algorithm is efficient, stable, and robust and outperforms conventional and state-of-the-art methods.
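As an illustration of the parameter-tuning idea, the sketch below combines an IRLS approximation of the ℓ1 misfit with a secant update of the regularisation parameter toward a target misfit. The operator `A`, data `d`, starting values and tolerances are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def irls_with_secant_lambda(A, d, chi_target, lam0=1.0, lam1=10.0, n_iter=20, eps=1e-6):
    """Sketch: IRLS for an l1 data misfit, with a secant update of the
    regularisation parameter lam so that the misfit approaches chi_target."""
    n = A.shape[1]

    def solve(lam, m):
        r = A @ m - d
        w = 1.0 / np.sqrt(np.abs(r) + eps)           # IRLS weights approximating the l1 norm
        Aw, dw = A * w[:, None], d * w
        m_new = np.linalg.solve(Aw.T @ Aw + lam * np.eye(n), Aw.T @ dw)
        return m_new, np.sum(np.abs(A @ m_new - d))  # updated model and its l1 misfit

    m = np.zeros(n)
    m, chi0 = solve(lam0, m)
    m, chi1 = solve(lam1, m)
    for _ in range(n_iter):
        if chi1 == chi0:
            break
        # secant step toward the root of chi(lam) - chi_target = 0
        lam2 = max(lam1 - (chi1 - chi_target) * (lam1 - lam0) / (chi1 - chi0), 1e-8)
        m, chi2 = solve(lam2, m)
        lam0, chi0, lam1, chi1 = lam1, chi1, lam2, chi2
    return m, lam1
```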

2.
Linear prediction filters are an effective tool for reducing random noise from seismic records. Unfortunately, the ability of prediction filters to enhance seismic records deteriorates when the data are contaminated by erratic noise. Erratic noise in this article designates non-Gaussian noise that consists of large isolated events with known or unknown distribution. We propose a robust f-x projection filtering scheme for simultaneous attenuation of erratic noise and Gaussian random noise. Instead of adopting the ℓ2-norm, as commonly used in the conventional design of f-x filters, we utilize the hybrid ℓ1/ℓ2-norm to penalize the energy of the additive noise. The estimation of the prediction error filter and of the additive noise sequence is performed in an alternating fashion. First, the additive noise sequence is fixed, and the prediction error filter is estimated via the least-squares solution of a system of linear equations. Then, the prediction error filter is fixed, and the additive noise sequence is estimated by minimizing a cost function containing a hybrid ℓ1/ℓ2-norm that prevents erratic noise from influencing the final solution. In other words, we propose and design a robust M-estimate of a special autoregressive moving-average model in the f-x domain. Synthetic and field data examples are used to evaluate the performance of the proposed algorithm.
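The alternating estimation can be pictured with the toy single-frequency sketch below: a prediction-error filter is fitted by least squares to the de-noised slice, and the erratic-noise sequence is then re-estimated with IRLS-style weights derived from a hybrid ℓ1/ℓ2 penalty. The array shapes, the damping `lam` and the threshold `eps` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def robust_fx_slice(d, filt_len=5, lam=1.0, eps=1e-3, n_iter=10):
    """Toy alternating scheme for one complex frequency slice d (one sample per trace):
    (1) fit a prediction-error filter to the current signal estimate by least squares,
    (2) update the erratic-noise estimate with a hybrid-norm IRLS shrinkage."""
    nx = len(d)
    noise = np.zeros_like(d)
    for _ in range(n_iter):
        s = d - noise                                   # current signal estimate
        rows = [s[t - filt_len:t][::-1] for t in range(filt_len, nx)]
        A, y = np.array(rows), s[filt_len:]
        f, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares prediction filter
        pred = np.concatenate([s[:filt_len], A @ f])    # predicted signal
        r = d - pred                                    # residual = candidate noise
        w = 1.0 / np.sqrt(1.0 + np.abs(noise) ** 2 / eps ** 2)   # hybrid-norm IRLS weight
        noise = r / (1.0 + lam * w)   # large spikes pass into the noise estimate, small residuals are damped
    return d - noise, noise
```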

3.
Least-squares reverse time migration provides better imaging results than conventional reverse time migration by reducing migration artefacts, improving the resolution of the image, and balancing the amplitudes of the reflectors. However, it is computationally intensive. To reduce its computational cost, we propose an efficient amplitude-encoding least-squares reverse time migration scheme in the time domain. Although the encoding scheme is effective in increasing computational efficiency, it also introduces the well-known crosstalk noise into the gradient, which degrades the quality of the imaging result. We analyse the cause of the crosstalk noise using an encoding correlation matrix and then develop two numerical schemes to suppress the crosstalk noise during the inversion process. We test the proposed method with synthetic and field data. Numerical examples show that the proposed scheme provides better imaging results than reverse time migration and generates images comparable with those from common-shot least-squares reverse time migration but at a lower computational cost.
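A minimal sketch of the encoding step itself is given below: shots are blended into a supergather with random polarity codes, and the outer product of the code vector is the encoding correlation matrix whose off-diagonal terms drive the crosstalk. The ±1 polarity choice is one common option and is assumed here, not taken from the paper.

```python
import numpy as np

def encode_shots(shot_gathers, rng=None):
    """Blend shot gathers (n_shots, nt, n_receivers) into one amplitude-encoded
    supergather; crosstalk between shots i and j is controlled by codes[i]*codes[j]."""
    rng = np.random.default_rng() if rng is None else rng
    codes = rng.choice([-1.0, 1.0], size=shot_gathers.shape[0])   # amplitude-encoding vector
    supergather = np.tensordot(codes, shot_gathers, axes=1)       # weighted sum over shots
    correlation = np.outer(codes, codes)                          # encoding correlation matrix
    return supergather, codes, correlation
```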

4.
This paper proposes a non-iterative time integration (NITI) scheme for non-linear dynamic FEM analysis. The NITI scheme is constructed by combining explicit and implicit schemes, taking advantage of their merits, and enables stable computation without an iteration process for convergence even when used for non-linear dynamic problems. The formulation of the NITI scheme is presented and its stability is studied. Although the NITI scheme is not unconditionally stable when applied to non-linear problems, it is stable in most cases unless stiffness hardening occurs or the problem has a large velocity-dependent term. The NITI scheme is applied to dynamic analysis of a non-linear soil–structure system, and the results are compared with those of the central difference method (CDM). The comparison shows that the stability of the NITI scheme is superior to that of the CDM. The accuracy of the NITI scheme is verified: its results are identical to those of the CDM when the CDM time step is set to 1/10 of that used for the NITI scheme. The application of the NITI scheme to mesh-partitioned FEM is also proposed and applied to dynamic analysis of a linear soil–structure system. It yields the same results as a conventional single-domain FEM analysis using the Newmark β method, which verifies the usability of mesh-partitioned FEM analysis with the NITI scheme. Copyright © 2003 John Wiley & Sons, Ltd.

5.
Attenuation of random noise and enhancement of structural continuity can significantly improve the quality of seismic interpretation. We present a new technique, which aims at reducing random noise while protecting structural information. The technique is based on combining structure prediction with either similarity-mean filtering or lower-upper-middle (LUM) filtering. We use structure prediction to predict each seismic trace from its neighbouring traces. We then apply a non-linear similarity-mean filter or an LUM filter to select the best samples from the different predictions. In comparison with other common filters, such as the mean or median filter, the additional parameters of the non-linear filters allow us to better control the balance between eliminating random noise and protecting structural information. Numerical tests using synthetic and field data show the effectiveness of the proposed structure-enhancing filters.
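For reference, an LUM smoother applied to one output sample might look like the sketch below, where `window` holds the candidate values gathered from the structural predictions; the rank parameter `k` is the extra control mentioned above. This follows the standard LUM definition rather than the authors' specific implementation.

```python
import numpy as np

def lum_smoother(window, centre, k):
    """LUM smoother: clip the centre sample between the k-th smallest and k-th
    largest values of the window.  k = 1 leaves the sample untouched; the largest
    admissible k turns the filter into a median filter."""
    x = np.sort(np.asarray(window))
    lower, upper = x[k - 1], x[-k]                 # k-th order statistics from both ends
    return np.median([lower, centre, upper])       # equals clip(centre, lower, upper)
```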

6.
Numerical implementation of the gradient of the cost function in gradient-based full-waveform inversion (FWI) is essentially a migration operator, as used in wave-equation migration. In FWI, minimizing different data-residual norms results in different weighting strategies for the data residuals at the receiver locations prior to back-propagation into the medium. In this paper, we propose different scaling methods for the receiver wavefield and compare their performances. Using time-domain reverse-time migration (RTM), we show that, compared with conventional algorithms, this type of scaling is able to significantly suppress non-Gaussian noise, i.e., outliers. Our tests also show that scaling the receiver wavefield by its absolute norm produces better results than the other approaches.
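The sketch below illustrates generic residual scalings applied at the receivers before back-propagation; the variant names, the stabilisation `eps`, and the array layout (time samples by receivers) are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def scaled_adjoint_source(observed, predicted, mode="abs", eps=1e-12):
    """Residual scalings for back-propagation.  'l2' is the conventional
    least-squares residual; 'abs' divides each sample by its absolute value
    (an l1-style source in which outliers no longer dominate); 'trace'
    normalises each receiver trace by its energy."""
    r = predicted - observed                       # (nt, n_receivers) data residual
    if mode == "l2":
        return r
    if mode == "abs":
        return r / (np.abs(r) + eps)
    if mode == "trace":
        return r / (np.sqrt(np.sum(r ** 2, axis=0)) + eps)
    raise ValueError(f"unknown mode: {mode}")
```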

7.
The conventional spectral analysis method for interpretation of magnetic data assumes stationary spatial series and a white-noise source distribution. However, long magnetic profiles may not be stationary in nature and source distributions are not white. Long non-stationary magnetic profiles can be divided into stationary subprofiles following Wiener filter theory. A least-squares inverse method is used to calculate the scaling exponents and depth values of magnetic interfaces from the power spectrum. The applicability of this approach is demonstrated on non-stationary synthetic and field magnetic data collected along the Nagaur–Jhalawar transect, western India. The stationarity of the whole profile and the subprofiles of the synthetic and field data is tested. The variation of the mean and standard deviations of the subprofiles is significantly reduced compared with the whole profile. The depth values found from the synthetic model are in close agreement with the assumed depth values, whereas for the field data these are in close agreement with estimates from seismic, magnetotelluric and gravity data.
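A common way to carry out such a least-squares estimate is to fit the scaling (fractal) spectral model ln P(k) = c − β ln k − 2kz to the radially averaged power spectrum, as in the sketch below; the exact parameterisation used in the paper may differ.

```python
import numpy as np

def depth_and_scaling_exponent(k, power):
    """Least-squares fit of ln P(k) = c - beta*ln(k) - 2*k*z.
    k: positive radial wavenumbers (exclude k = 0); power: averaged power spectrum."""
    k = np.asarray(k, dtype=float)
    G = np.column_stack([np.ones_like(k), -np.log(k), -2.0 * k])   # columns for [c, beta, z]
    (c, beta, z), *_ = np.linalg.lstsq(G, np.log(power), rcond=None)
    return beta, z
```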

8.
This paper describes least-squares reverse-time migration. The method provides the exact adjoint operator pair for solving the linear inverse problem, thereby enhancing the convergence of gradient-based iterative linear inversion methods. In this formulation, modified source wavelets are used to correct the source signature imprint in the predicted data. Moreover, a roughness constraint is applied to stabilise the inversion and reduce high-wavenumber artefacts. It is also shown that least-squares migration implicitly applies a deconvolution imaging condition. Three numerical experiments illustrate that this method is able to produce seismic reflectivity images with higher resolution, more accurate amplitudes, and fewer artefacts than conventional reverse-time migration. The methodology is currently feasible in 2-D and can naturally be extended to 3-D when computational resources become more powerful.
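Schematically, the inversion solves min ||Lm - d||^2 + lambda*||Dm||^2, where L is linearised (Born) modelling, its adjoint is reverse-time migration, and D is a roughness (derivative) operator. The dense-matrix conjugate-gradient sketch below conveys the idea only; in practice the operators are applied matrix-free, and lambda, the iteration count and the operator definitions are assumptions.

```python
import numpy as np

def regularised_lsm(L, D, d, lam, n_iter=20):
    """Conjugate gradients on the normal equations of ||L m - d||^2 + lam*||D m||^2."""
    A = L.T @ L + lam * D.T @ D
    b = L.T @ d                        # one application of the migration operator to the data
    m = np.zeros(L.shape[1])
    r = b - A @ m
    p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        m += alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return m
```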

9.
S. Alapati & Z. J. Kabala, Hydrological Processes, 2000, 14(6): 1003–1016
A non-linear least-squares (NLS) method is used without regularization to recover the release history of a groundwater contaminant plume from its current measured spatial distribution. The flow system is assumed to be one-dimensional, with the plume originating from a known single site. The solution is found to be very sensitive to noise and to the extent to which the plume is dissipated. Although the NLS method is extremely sensitive to measurement errors for the gradual release scenario, it can resolve the release histories for catastrophic release scenarios reasonably well, even in the presence of moderate measurement errors. A number of synthetic numerical examples are analysed. We find that for catastrophic contaminant releases the NLS method may be an alternative to the Tikhonov regularization approach. Copyright © 2000 John Wiley & Sons, Ltd.

10.
In the traditional inversion of the Rayleigh dispersion curve, layer thickness, which is the second most sensitive parameter in modelling the Rayleigh dispersion curve, is usually assumed to be correct and is used as fixed a priori information. Because knowledge of the layer thickness is typically not precise, the use of such a priori information may cause traditional Rayleigh dispersion curve inversions to get trapped in local minima and to yield results that are far from the real solution. In this study, we try to avoid this issue by using a joint inversion of Rayleigh dispersion curve data with vertical electric sounding data, where we use the common layer thickness to couple the two methods. The key idea of the proposed joint inversion scheme is to combine the methods in one joint Jacobian matrix and to invert for layer S-wave velocity, resistivity, and layer thickness as an additional parameter, in contrast with a traditional Rayleigh dispersion curve inversion. The proposed joint inversion approach is tested with noise-free and Gaussian-noise data on six characteristic synthetic sub-surface models: a model with a typical dispersion; a low-velocity half-space model; models with a particularly stiff and a particularly soft layer, respectively; and models reproduced from the stiff and soft layers for different layer-resistivity distributions. In the joint inversion process, the non-linear damped least-squares method is used together with the singular value decomposition approach to find a proper damping value for each iteration. The proposed joint inversion scheme tests many damping values and chooses the one that best approximates the observed data in the current iteration. The quality of the joint inversion is checked with the relative distance measure. In addition, a sensitivity analysis is performed for the typical dispersive sub-surface model to illustrate the benefits of the proposed joint scheme. The results for the synthetic models reveal that the combination of the Rayleigh dispersion curve and vertical electric sounding methods in a joint scheme provides reliable sub-surface models even in complex and challenging situations, without using any a priori information.
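One iteration of the damping search can be sketched as below: the singular value decomposition of the joint Jacobian is reused for every trial damping, and the damping giving the smallest data misfit is kept. The callable `forward`, the residual convention and the list of trial dampings are illustrative assumptions.

```python
import numpy as np

def damped_update(J, residual, model, dampings, forward, observed):
    """Damped least-squares model update via SVD, keeping the best trial damping."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    Utr = U.T @ residual
    best = None
    for eps in dampings:
        dm = Vt.T @ ((s / (s ** 2 + eps ** 2)) * Utr)     # damped singular-value filter
        trial = model + dm
        misfit = np.linalg.norm(observed - forward(trial))
        if best is None or misfit < best[0]:
            best = (misfit, trial, eps)
    return best[1], best[2]                                # updated model, chosen damping
```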

11.
In this paper, we present a methodology to perform geophysical inversion of large-scale linear systems via a covariance-free orthogonal transformation: the discrete cosine transform. The methodology consists of compressing the matrix of the linear system as a digital image and using the properties of orthogonal transformations to define an approximation of the Moore–Penrose pseudo-inverse. The methodology is also highly scalable: the model reduction achieved by these techniques increases with the number of parameters of the linear system, owing to the high correlation these parameters need in order to accomplish very detailed forward predictions, and it allows a very fast computation of the inverse-problem solution. We show the application of this methodology to a simple synthetic two-dimensional gravimetric problem for different dimensionalities and different levels of white Gaussian noise, and to a synthetic linear system whose system matrix has been generated via geostatistical simulation to produce a random field with a given spatial correlation. The numerical results show that the discrete cosine transform pseudo-inverse outperforms the classical least-squares techniques, mainly in the presence of noise, since the solutions that are obtained are more stable and fit the observed data with the lowest root-mean-square error. In addition, we show that model reduction is a very effective form of parameter regularisation when the conditioning of the reduced discrete cosine transform matrix is taken into account. We finally show the application of the methodology to the inversion of a real gravity profile in the Atacama Desert (northern Chile), obtaining very successful results for this non-linear inverse problem. The methodology presented here has a general character and can be applied to solve any linear or non-linear inverse problem (through linearisation) arising in technology and, particularly, in geophysics, independently of the geophysical model discretisation and dimensionality. Nevertheless, the results shown in this paper are better in the case of ill-conditioned inverse problems, for which the matrix compression is more efficient. In that sense, a natural extension of this methodology would be its application to the set of normal equations.
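The flavour of a covariance-free DCT-based reduction can be conveyed with the sketch below, where the model is expanded on the first q cosine basis vectors and the reduced least-squares problem is solved directly; this is only a simplified stand-in for the paper's construction, which compresses the system matrix itself as an image.

```python
import numpy as np
from scipy.fft import idct

def dct_reduced_lsq(G, d, q):
    """Approximate pseudo-inverse solution via DCT model reduction:
    expand m = B a on the first q orthonormal cosine basis vectors B,
    solve the small problem (G B) a = d, and map back to the full model."""
    n = G.shape[1]
    B = idct(np.eye(n)[:, :q], axis=0, norm="ortho")   # smooth DCT basis vectors
    a, *_ = np.linalg.lstsq(G @ B, d, rcond=None)      # reduced least-squares problem
    return B @ a
```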

12.
Three-dimensional receiver ghost attenuation (deghosting) of dual-sensor towed-streamer data is straightforward, in principle. In its simplest form, it requires applying a three-dimensional frequency–wavenumber filter to the vertical component of the particle motion data to correct for the amplitude reduction on the vertical component of non-normal-incidence plane waves before combining with the pressure data. More elaborate techniques apply three-dimensional filters to both components before summation, for example, for ghost wavelet dephasing and mitigation of noise of different strengths on the individual components in optimum deghosting. The problem with all these techniques is, of course, that it is usually impossible to transform the data into the crossline wavenumber domain because of aliasing. Hence, usually, a two-dimensional version of deghosting is applied to the data in the frequency–inline wavenumber domain. We investigate going down the "dimensionality ladder" one more step, to a one-dimensional weighted summation of the records of the collocated sensors, to create an approximate deghosting procedure. We specifically consider amplitude-balancing weights computed via a standard automatic gain control before summation, reminiscent of a diversity stack of the dual-sensor recordings. This technique is independent of the actual streamer depth and insensitive to variations in the sea-surface reflection coefficient. The automatic gain control weights serve two purposes: (i) to approximately correct for the geometric amplitude loss of the Z data and (ii) to mitigate noise-strength variations on the two components. Here, Z denotes the vertical component of the velocity of particle motion scaled by the seismic impedance of the near-sensor water volume. The weights are time-varying and can also be made frequency-band dependent, adapting better to frequency variations of the noise. The investigated process is a very robust, almost fully hands-off, approximate three-dimensional deghosting step for dual-sensor data, requiring no spatial filtering and no explicit estimates of noise power. We argue that this technique performs well in terms of ghost attenuation (albeit not exact ghost removal) and balancing the signal-to-noise ratio in the output data. For instances where full three-dimensional receiver deghosting is the final product, the proposed technique is appropriate for efficient quality control of the acquired data and aids the parameterisation of the subsequent deghosting processing.
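A hedged sketch of the weighted one-dimensional summation is shown below: AGC weights (inverse windowed RMS) are computed for the collocated P and Z records and used in a diversity-stack-style combination. The window length, stabilisation constant and normalisation are illustrative choices, not the paper's parameterisation.

```python
import numpy as np

def agc_weights(trace, window):
    """Inverse windowed-RMS amplitude: a simple automatic-gain-control weight."""
    power = np.convolve(trace ** 2, np.ones(window) / window, mode="same")
    return 1.0 / (np.sqrt(power) + 1e-12)

def approximate_deghost(p, z, window=251):
    """Weighted sum of collocated pressure (p) and scaled vertical-velocity (z) traces."""
    wp, wz = agc_weights(p, window), agc_weights(z, window)
    return (wp * p + wz * z) / (wp + wz)
```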

13.
The seismic inversion problem is a highly non-linear problem that can be reduced to the minimization of the least-squares criterion between the observed and the modelled data. It has been solved using different classical optimization strategies that require a monotone descent of the objective function. We propose solving the full-waveform inversion problem using the non-monotone spectral projected gradient method: a low-cost and low-storage optimization technique that maintains the velocity values in a feasible convex region by frequently projecting them on this convex set. The new methodology uses the gradient direction with a particular spectral step length that allows the objective function to increase at some iterations, guarantees convergence to a stationary point starting from any initial iterate, and greatly speeds up the convergence of gradient methods. We combine the new optimization scheme as a solver of the full-waveform inversion with a multiscale approach and apply it to a modified version of the Marmousi data set. The results of this application show that the proposed method performs better than the classical gradient method by reducing the number of function evaluations and the residual values.
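The core iteration can be sketched as follows: a Barzilai–Borwein spectral step, projection onto the feasible velocity set, and an Armijo test against the maximum of the last few objective values, so the objective is allowed to rise at some iterations. The callables `f`, `grad`, `project` and the tuning constants are assumptions for illustration.

```python
import numpy as np

def spg(f, grad, project, x0, n_iter=50, memory=10, gamma=1e-4):
    """Non-monotone spectral projected gradient sketch."""
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    history = [f(x)]
    alpha = 1.0
    for _ in range(n_iter):
        d = project(x - alpha * g) - x               # projected gradient direction
        f_ref = max(history[-memory:])               # non-monotone reference value
        lam = 1.0
        while f(x + lam * d) > f_ref + gamma * lam * (g @ d) and lam > 1e-8:
            lam *= 0.5                               # backtracking line search
        x_new = x + lam * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if (s @ y) > 0 else 1.0   # spectral (BB) step length
        x, g = x_new, g_new
        history.append(f(x))
    return x
```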

14.
We present a Gaussian packet migration method based on Gabor frame decomposition and asymptotic propagation of Gaussian packets. A Gaussian packet has both Gaussian-shaped time–frequency localization and space–direction localization. Its evolution can be obtained by ray tracing and dynamic ray tracing. In this paper, we first briefly review the concept of Gaussian packets. After discussing how initial parameters affect the shape of a Gaussian packet, we then propose two Gabor-frame-based Gaussian packet decomposition methods that can sparsely and accurately represent seismic data. One method is the dreamlet–Gaussian packet method. Dreamlets are physical wavelets defined on an observation plane and can represent seismic data efficiently in the local time–frequency space–wavenumber domain. After decomposition, dreamlet coefficients can be easily converted to the corresponding Gaussian packet coefficients. The other method is the Gabor-frame Gaussian beam method. In this method, a local slant stack, which is widely used in Gaussian beam migration, is combined with the Gabor frame decomposition to obtain uniformly sampled horizontal slowness for each local frequency. Based on these decomposition methods, we derive a poststack depth migration method through the summation of the backpropagated Gaussian packets and the application of the imaging condition. To demonstrate the Gaussian packet evolution and migration/imaging in complex models, we show several numerical examples. We first use the evolution of a single Gaussian packet in media with different complexities to show the accuracy of Gaussian packet propagation. Then we test the point source responses in smoothed varying velocity models to show the accuracy of Gaussian packet summation. Finally, using poststack synthetic data sets of a four-layer model and the two-dimensional SEG/EAGE model, we demonstrate the validity and accuracy of the migration method. Compared with the more accurate but more time-consuming one-way wave-equation-based migration, such as beamlet migration, the Gaussian packet method proposed in this paper can correctly image the major structures of the complex model, especially in subsalt areas, with much higher efficiency. This shows the application potential of Gaussian packet migration in complicated areas.

15.
Seismic time-lapse surveys are susceptible to repeatability errors due to varying environmental conditions. To mitigate this problem, we propose the use of interferometric least-squares migration to estimate the migration images for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for interferometric least-squares migration, and the data are approximately redatumed to this reference reflector before imaging. This virtual redatuming mitigates the repeatability errors in the time-lapse migration image. Results with synthetic and field data show that interferometric least-squares migration can sometimes reduce or eliminate artifacts caused by non-repeatability in time-lapse surveys and provide a high-resolution estimate of the time-lapse change in the reservoir.

16.
The inversion of induced-polarization parameters is important in the characterization of the frequency electrical response of porous rocks. A Bayesian approach is developed to invert these parameters assuming the electrical response is described by a Cole–Cole model in the time or frequency domain. We show that the Bayesian approach provides a better analysis of the uncertainty associated with the parameters of the Cole–Cole model compared with more conventional methods based on the minimization of a cost function using the least-squares criterion. This is due to the strong non-linearity of the inverse problem and non-uniqueness of the solution in the time domain. The Bayesian approach consists of propagating the information provided by the measurements through the model and combining this information with a priori knowledge of the data. Our analysis demonstrates that the uncertainty in estimating the Cole–Cole model parameters from induced-polarization data is much higher for measurements performed in the time domain than in the frequency domain. Our conclusion is that it is very difficult, if not impossible, to retrieve the correct value of the Cole–Cole parameters from time-domain induced-polarization data using standard least-squares methods. In contrast, the Cole–Cole parameters can be more correctly inverted in the frequency domain. These results are also valid for other models describing the induced-polarization spectral response, such as the Cole–Davidson or power law models.
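For reference, the Cole–Cole forward model whose parameters are being inverted can be written as in the sketch below (the Pelton parameterisation in terms of chargeability m, time constant τ and exponent c); the Bayesian sampling machinery itself is not shown, and the example parameter values are purely illustrative.

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Complex Cole-Cole resistivity:
    rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (1j*omega*tau)**c)))."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

# Example: spectrum between 0.01 Hz and 1 kHz for illustrative parameter values.
omega = 2.0 * np.pi * np.logspace(-2, 3, 50)
spectrum = cole_cole(omega, rho0=100.0, m=0.2, tau=0.01, c=0.5)
```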

17.
Scattered ground roll is a type of noise observed in land seismic data that can be particularly difficult to suppress. Typically, this type of noise cannot be removed using conventional velocity-based filters. In this paper, we discuss a model-driven form of seismic interferometry that allows suppression of scattered ground-roll noise in land seismic data. The conventional cross-correlate and stack interferometry approach results in scattered noise estimates between two receiver locations (i.e. as if one of the receivers had been replaced by a source). For noise suppression, this requires that each source we wish to attenuate the noise from is co-located with a receiver. The model-driven form differs, as the use of a simple model in place of one of the inputs for interferometry allows the scattered noise estimate to be made between a source and a receiver. This allows the method to be more flexible, as co-location of sources and receivers is not required, and the method can be applied to data sets with a variety of different acquisition geometries. A simple plane-wave model is used, allowing the method to remain relatively data driven, with weighting factors for the plane waves determined using a least-squares solution. Using a number of both synthetic and real two-dimensional (2D) and three-dimensional (3D) land seismic data sets, we show that this model-driven approach provides effective results, allowing suppression of scattered ground-roll noise without having an adverse effect on the underlying signal.
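The plane-wave weighting step could be sketched as below: in the frequency domain, the recorded wavefield across offsets is modelled as a weighted sum of plane waves exp(-iωpx), and the complex weights are found by least squares for each frequency. The array layout, slowness grid and offsets are illustrative assumptions; the paper's model construction may differ in detail.

```python
import numpy as np

def plane_wave_weights(data, dt, offsets, slownesses):
    """Least-squares plane-wave weights per frequency for data of shape (nt, nx)."""
    nt = data.shape[0]
    D = np.fft.rfft(data, axis=0)                    # (nf, nx) spectra
    freqs = np.fft.rfftfreq(nt, dt)
    weights = np.zeros((len(freqs), len(slownesses)), dtype=complex)
    for i, f in enumerate(freqs):
        B = np.exp(-2j * np.pi * f * np.outer(offsets, slownesses))   # (nx, n_p) plane-wave basis
        weights[i], *_ = np.linalg.lstsq(B, D[i], rcond=None)
    return freqs, weights
```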

18.
A structure's health or level of damage can be monitored by identifying changes in structural or modal parameters. This research directly identifies changes in structural stiffness due to modelling error or damage for a post-tensioned pre-cast reinforced concrete frame building with rocking beam–column connections and added damping and stiffness (ADAS) elements. A structural health monitoring (SHM) method based on adaptive least mean squares (LMS) filtering theory is presented that identifies changes from a simple baseline model of the structure. This method is able to track changes in the stiffness matrix, identifying when the building is (1) rocking, (2) moving in a hybrid rocking–elastic regime, or (3) responding linearly. Results are compared for two different LMS-based SHM methods using an L2 error-norm metric. In addition, two baseline models of the structure, one using tangential stiffness and the second a more accurate bi-linear stiffness model, are employed. The impact of baseline model complexity is then delineated. The LMS-based methods are able to track the non-linearity of the system to within 15% using this metric, with the error due primarily to filter convergence rates as the structural response changes regimes while undergoing the El Centro ground motion record. The use of a bi-linear baseline model for the SHM problem is shown to result in error metrics that are at least 50% lower than those for the tangential baseline model. Errors of 5–15% with this L2 error norm are fairly stringent compared with the greater-than-2× changes in stiffness undergone by the structure; in practice, however, the usefulness of the results depends on the resolution required by the user. The impact of sampling rate is shown to be negligible over the range of 200–1000 Hz, as is the choice of LMS-based SHM method. The choice of baseline model and its level of knowledge about the actual structure is seen to be the dominant factor in achieving good results. The methods presented require 2.8–14.0 Mcycles of computation and could therefore easily be implemented in real time. Copyright © 2005 John Wiley & Sons, Ltd.
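A generic (normalised) LMS update of the kind underlying such methods is sketched below: at each time step a regressor built from the measured response of the baseline model and an observed error update the stiffness-change parameters. How the structural equations are mapped into the regressor `phi` and observation `y` is problem specific and is assumed here.

```python
import numpy as np

def lms_track(phi, y, mu, theta0=None):
    """Normalised LMS tracking of parameter changes.
    phi: (n_steps, n_par) regressors; y: (n_steps,) observations; mu: adaptation gain."""
    n_steps, n_par = phi.shape
    theta = np.zeros(n_par) if theta0 is None else np.asarray(theta0, dtype=float).copy()
    history = np.empty((n_steps, n_par))
    for t in range(n_steps):
        e = y[t] - phi[t] @ theta                                     # prediction error
        theta = theta + mu * e * phi[t] / (phi[t] @ phi[t] + 1e-12)   # normalised LMS update
        history[t] = theta
    return history
```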

19.
This paper discusses how to use the three-dimensional (3D) time-domain finite-element method incorporating the least-squares method to calculate the equivalent foundation mass, damping and stiffness matrices. Numerical simulations indicate that the accuracy of these equivalent matrices is acceptable when the applied harmonic force of 1+sine is used. Moreover, the accuracy of the least-squares method using the 1+sine force is not sensitive to the first time step for inclusion of data. Since the finite-element method can model problems flexibly, the equivalent mass, damping and stiffness matrices of very complicated soil profiles and foundations can be established without difficulty using this least-squares method. Copyright © 2003 John Wiley & Sons, Ltd.
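The least-squares extraction itself can be pictured as below: given time histories of interface displacement, velocity, acceleration and force from the finite-element run, the matrices M, C and K minimising ||M a(t) + C v(t) + K u(t) - f(t)|| over all sampled steps are found in one linear solve. The array layouts are assumptions for illustration.

```python
import numpy as np

def equivalent_mck(u, v, a, f):
    """Least-squares equivalent matrices from time histories, each of shape (n_dof, n_steps):
    solve M a(t) + C v(t) + K u(t) ~ f(t) for M, C, K simultaneously."""
    n_dof = u.shape[0]
    X = np.vstack([a, v, u]).T                          # (n_steps, 3*n_dof) regressors
    coef, *_ = np.linalg.lstsq(X, f.T, rcond=None)      # stacked [M^T; C^T; K^T]
    M, C, K = coef[:n_dof].T, coef[n_dof:2 * n_dof].T, coef[2 * n_dof:].T
    return M, C, K
```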

20.
The problem of conversion from time-migration velocity to an interval velocity in depth in the presence of lateral velocity variations can be reduced to solving a system of partial differential equations. In this paper, we formulate the problem as a non-linear least-squares optimization for the seismic interval velocity and seek its solution iteratively. The input for the inversion is the Dix velocity, which also serves as an initial guess. The inversion gradually updates the interval velocity in order to account for the lateral velocity variations that are neglected in the Dix inversion. The algorithm has a moderate cost thanks to regularization, which speeds up convergence while ensuring a smooth output. The proposed method should be numerically robust compared with previous approaches, which amount to monotonic extrapolation in depth. For a successful time-to-depth conversion, image-ray caustics should be either nonexistent or excluded from the computational domain. The resulting velocity can be used in subsequent depth-imaging model building. Both synthetic and field data examples demonstrate the applicability of the proposed approach.
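For completeness, the Dix conversion that supplies the initial guess can be written as in the sketch below; it ignores lateral variations, which is precisely what the iterative inversion then corrects.

```python
import numpy as np

def dix_interval_velocity(t, v_rms):
    """Classical Dix conversion: v_int^2(t) = d(t * v_rms^2)/dt."""
    t = np.asarray(t, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    v_int_sq = np.gradient(t * v_rms ** 2, t)            # derivative of t*v_rms^2 with respect to t
    return np.sqrt(np.maximum(v_int_sq, 0.0))            # clip negatives caused by noisy picks
```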
