Similar Literature
A total of 20 similar documents were found (search time: 31 ms).
1.
Recently, new on-shore acquisition designs have been presented with multi-component sensors deployed in the shallow sub-surface (20 m–60 m). Virtual source redatuming has been proposed for these data to compensate for surface statics and to enhance survey repeatability. In this paper, we investigate the feasibility of replacing the correlation-based formalism that undergirds virtual source redatuming with multi-dimensional deconvolution, offering various advantages such as the elimination of free-surface multiples and the potential to improve virtual source repeatability. To allow for data-driven calibration of the sensors and to improve robustness in cases with poor sensor spacing in the shallow sub-surface (resulting in a relatively high wavenumber content), we propose a new workflow for this configuration. We assume a dense source sampling and target signals that arrive at near-vertical propagation angles. First, the data are preconditioned by applying synthetic-aperture-source filters in the common receiver domain. Virtual source redatuming is carried out for the multi-component recordings individually, followed by an intermediate deconvolution step. After this specific pre-processing, we show that the downgoing and upgoing constituents of the wavefields can be separated without knowledge of the medium parameters, the source wavelet, or sensor characteristics. As a final step, free-surface multiples can be eliminated by multi-dimensional deconvolution of the upgoing fields with the downgoing fields.
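As a hedged illustration of the difference between the two formalisms (notation ours, not taken from the paper): the upgoing field U recorded at the sensor level is related to the downgoing field D through the reflection response R of the medium below,

    U(x_B, x_A, \omega) = \int R(x_B, x, \omega)\, D(x, x_A, \omega)\, dx.

Correlation-based virtual source redatuming approximates R by the cross-correlation \sum_{x_A} U(x_B, x_A, \omega)\, D^{*}(x, x_A, \omega), whereas multi-dimensional deconvolution inverts the integral relation for R, which is why it can remove the source wavelet, the sensor signatures and the free-surface multiples from the virtual source response.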

2.
The integrated optimization problem of structures subjected to strong earthquake and wind excitations, in which the number of actuators, the configuration of actuators and the control algorithms are optimized simultaneously, is studied. Two control algorithms, optimal control and acceleration feedback control, are used as the control algorithms. A multi-level optimization model is proposed with respect to the solution procedure of the optimization problem. The characteristics of the model are analysed, and the formulation of each suboptimization problem at each level is presented. To solve the multi-level optimization problem, a multi-level genetic algorithm (MLGA) is proposed. The proposed model and MLGA are used to solve two multi-level optimization problems in which the optimization of the number of actuators, the positions of actuators and the control algorithm are considered at different levels. In problem 1, an example structure is excited by strong wind, and in problem 2, an example structure is subjected to strong earthquake excitation. Copyright © 2001 John Wiley & Sons, Ltd.
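A toy Python illustration of the outer, placement-searching level is sketched below. It is not the paper's multi-level genetic algorithm: the encoding (one bit per candidate actuator position) and the GA operators are generic, and the surrogate fitness function is hypothetical; in the paper each candidate placement would be scored by solving the lower-level problems (control-algorithm design and structural response under wind or earthquake excitation).

    # Toy single-level GA over actuator placements; fitness is a hypothetical surrogate.
    import numpy as np

    rng = np.random.default_rng(0)
    n_floors, n_actuators, pop_size, n_gen, p_mut = 20, 4, 40, 80, 0.02

    def repair(mask):
        """Force exactly n_actuators ones in the placement vector."""
        on = np.flatnonzero(mask)
        off = np.flatnonzero(mask == 0)
        if on.size > n_actuators:
            mask[rng.choice(on, on.size - n_actuators, replace=False)] = 0
        elif on.size < n_actuators:
            mask[rng.choice(off, n_actuators - on.size, replace=False)] = 1
        return mask

    def fitness(mask):
        # Hypothetical surrogate: prefer placements spread over the building height.
        return np.flatnonzero(mask).std()

    pop = [repair(np.zeros(n_floors, dtype=int)) for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = []
        while len(new_pop) < pop_size:
            i, j = rng.choice(pop_size, 2, replace=False)   # tournament selection
            a = pop[i] if scores[i] >= scores[j] else pop[j]
            i, j = rng.choice(pop_size, 2, replace=False)
            b = pop[i] if scores[i] >= scores[j] else pop[j]
            cut = rng.integers(1, n_floors)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child = np.where(rng.random(n_floors) < p_mut, 1 - child, child)  # mutation
            new_pop.append(repair(child))
        pop = new_pop

    best = max(pop, key=fitness)
    print("best placement (floor indices):", np.flatnonzero(best))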

3.
This paper presents the theory for eliminating, from recorded multi-component source, multi-component receiver marine electromagnetic measurements, the effect of the physical source radiation pattern and the scattering response of the water layer. The multi-component sources are assumed to be orthogonally aligned above the receivers at the sea bottom. Other than the position of the sources, no source characteristics are required. The integral equation method, which for short is denoted Lorentz water-layer elimination, follows from Lorentz' reciprocity theorem. It requires information only of the electromagnetic parameters at the receiver level to decompose the electromagnetic measurements into upgoing and downgoing constituents. Lorentz water-layer elimination replaces the water layer with a homogeneous half-space with properties equal to those of the sea-bed. The source is redatumed to the receiver depth. When the subsurface is arbitrarily anisotropic but horizontally layered, the Lorentz water-layer elimination scheme greatly simplifies and can be implemented as deterministic multi-component source, multi-component receiver multidimensional deconvolution of common source gathers. The Lorentz deconvolved data can be further decomposed into scattering responses that would be recorded from idealized transverse electric and transverse magnetic mode sources and receivers. This combined electromagnetic field decomposition on the source and receiver side gives data equivalent to those from a hypothetical survey with the water layer absent, with idealized single-component transverse electric and transverse magnetic mode sources and idealized single-component transverse electric and transverse magnetic mode receivers. When the subsurface is isotropic or transversely isotropic and horizontally layered, the Lorentz deconvolution decouples into pure transverse electric and transverse magnetic mode data processing problems, where a scalar field formulation of the multidimensional Lorentz deconvolution is sufficient. In this case, single-component source data are sufficient to eliminate the water-layer effect. We demonstrate the Lorentz deconvolution by using numerically modelled data over a simple isotropic layered model illustrating controlled-source electromagnetic hydrocarbon exploration. In shallow water there is a decrease in controlled-source electromagnetic sensitivity to thin resistors at depth. The Lorentz deconvolution scheme is designed to overcome this effect by eliminating the water-layer scattering, including the field's interaction with air.
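For the horizontally layered case mentioned above, the multidimensional deconvolution reduces to a per-wavenumber division. A minimal sketch in our own notation (not the paper's):

    \hat{R}(k, \omega) = \frac{\hat{U}(k, \omega)\, \hat{D}^{*}(k, \omega)}{\hat{D}(k, \omega)\, \hat{D}^{*}(k, \omega) + \epsilon},

where \hat{U} and \hat{D} are the upgoing and downgoing constituents in the horizontal-wavenumber/frequency domain and \epsilon is a small stabilization constant added to keep the division well behaved.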

4.
This paper studies aspects that influence the de-ghosting performance of marine multi-level sources based on a modified Johnson model. The normalized squared error between the actual signature and its corresponding ghost-free signature is introduced to evaluate the multi-level source design. The results show that optimum depth combinations and volume combinations exist in the design. However, there is also some flexibility in the volume combination, which makes it possible to optimize with respect to residual bubble oscillation. By considering both operational aspects and performance, we propose that three or four levels in a multi-level source are reasonable. Compared to a horizontal source, a multi-level source can be designed to reduce the notch effect, strengthen the down-going energy and improve the energy transmission directivity. Studies of the influence of depth and firing-time deviations indicate that a multi-level source is more stable than a normal horizontal source in an operational environment.
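The evaluation measure referred to above can be written, in a hedged form of our own (the paper's exact definition may differ in normalization), as

    E = \frac{\sum_t \left[ s(t) - s_0(t) \right]^2}{\sum_t s_0(t)^2},

where s(t) is the actual (ghost-affected) far-field signature and s_0(t) its ghost-free counterpart; a smaller E indicates better de-ghosting performance of the multi-level source design.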

5.
To reduce the numerical errors arising from the improper enforcement of the artificial boundary conditions on the distant surface that encloses the underground part of the subsurface, and to significantly reduce the computation time and memory cost of 2.5D direct-current resistivity inversion, we present a finite-element–infinite-element coupled method. We first present the boundary value problem of the secondary potential. Then, a new type of infinite element is analysed and applied to replace the conventionally used mixed boundary condition on the distant boundary. In the internal domain, a standard finite-element method is used to derive the final system of linear equations. With a novel shape function for infinite elements at the subsurface boundary, the final system matrix is sparse, symmetric, and independent of the source electrodes. Through LU (lower-upper) decomposition, the multi-pole potentials can be swiftly obtained by simple back-substitutions. We embed the newly developed forward solution into the inversion procedure. To compute the sensitivity matrix, we adopt the efficient adjoint-equation approach to further reduce the computation cost. Finally, several synthetic examples are tested to show the efficiency of the inversion.
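For context, a commonly used form of the secondary-potential boundary value problem in direct-current resistivity modelling (our sketch; the paper's 2.5D wavenumber-domain formulation will differ in detail) splits the total potential into U = U_p + U_s, with the primary potential U_p known analytically for a background conductivity \sigma_0, and solves

    \nabla \cdot (\sigma \nabla U_s) = - \nabla \cdot \left[ (\sigma - \sigma_0) \nabla U_p \right],

subject to conditions on the ground surface and on the distant boundary; it is the latter condition that the infinite elements replace.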

6.
Modelling and inversion of controlled-source electromagnetic (CSEM) fields requires accurate interpolation of modelled results near strong resistivity contrasts. There, simple linear interpolation may produce large errors, whereas higher-order interpolation may lead to oscillatory behaviour in the interpolated result. We propose to use an essentially non-oscillatory (ENO), piecewise polynomial interpolation scheme designed for piecewise smooth functions that contain discontinuities in the function itself or in its first or higher derivatives. The scheme uses a non-linear adaptive algorithm to select the set of interpolation points that represents the smoothest part of the function among the sets of neighbouring points. We present numerical examples to demonstrate the usefulness of the scheme. The first example shows that the ENO scheme better captures an isolated discontinuity. In the second example, we consider the case of sampling the electric field computed by a finite-volume CSEM code at a receiver location. In this example, the ENO interpolation performs quite well. However, the overall error is dominated by the discretization error. The other examples compare sampling with essentially non-oscillatory interpolation against existing interpolation schemes. In these examples, essentially non-oscillatory interpolation provides more accurate results than standard interpolation, especially near discontinuities.
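A minimal one-dimensional Python sketch of the ENO idea is given below (our illustration; the stencil-selection and order choices of the paper's scheme may differ). Starting from the two nodes that bracket the evaluation point, the stencil is grown one node at a time towards whichever side yields the smaller Newton divided difference, i.e. the smoother side, and a standard Newton polynomial is then evaluated on the selected stencil.

    import numpy as np

    def divided_difference(xs, fs):
        """Highest-order Newton divided difference of the nodes (xs, fs)."""
        xs = np.asarray(xs, float)
        coef = np.asarray(fs, float).copy()
        for j in range(1, len(xs)):
            coef[j:] = (coef[j:] - coef[j-1:-1]) / (xs[j:] - xs[:-j])
        return coef[-1]

    def eno_interpolate(x_nodes, f_nodes, x, order=3):
        """Evaluate an ENO interpolant of the given polynomial order at x."""
        x_nodes = np.asarray(x_nodes, float)
        f_nodes = np.asarray(f_nodes, float)
        # initial stencil: the two nodes bracketing x
        i = np.clip(np.searchsorted(x_nodes, x) - 1, 0, len(x_nodes) - 2)
        left, right = i, i + 1
        for _ in range(order - 1):
            cand = []
            if left > 0:
                cand.append(("L", abs(divided_difference(
                    x_nodes[left-1:right+1], f_nodes[left-1:right+1]))))
            if right < len(x_nodes) - 1:
                cand.append(("R", abs(divided_difference(
                    x_nodes[left:right+2], f_nodes[left:right+2]))))
            if not cand:
                break
            side = min(cand, key=lambda c: c[1])[0]   # extend towards the smoother side
            if side == "L":
                left -= 1
            else:
                right += 1
        # Newton interpolation on the selected stencil
        xs, fs = x_nodes[left:right+1], f_nodes[left:right+1]
        coef = fs.copy()
        for j in range(1, len(xs)):
            coef[j:] = (coef[j:] - coef[j-1:-1]) / (xs[j:] - xs[:-j])
        val = coef[-1]
        for j in range(len(xs) - 2, -1, -1):          # Horner evaluation of the Newton form
            val = val * (x - xs[j]) + coef[j]
        return val

    # Example: a step-like function; the stencil is selected from the smoother side of the jump.
    xg = np.linspace(0.0, 1.0, 21)
    fg = np.where(xg < 0.5, 0.0, 1.0)
    print(eno_interpolate(xg, fg, 0.47), eno_interpolate(xg, fg, 0.62))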

7.
We propose a novel technique for improving a long-term multi-step-ahead streamflow forecast. A model based on wavelet decomposition and a multivariate Bayesian machine learning approach is developed for forecasting the streamflow 3, 6, 9, and 12 months ahead simultaneously. The inputs of the model utilize only the past monthly streamflow records. They are decomposed into components formulated in terms of wavelet multiresolution analysis. It is shown that the model accuracy can be increased by using the wavelet boundary rule introduced in this study. A simulation study is performed to evaluate the effects of different wavelet boundary rules using synthetic and real streamflow data from the Yellowstone River in the Uinta Basin in Utah. The model based on the combination of wavelet and Bayesian machine learning regression techniques is compared with a wavelet and artificial neural network-based model. The robustness of the models is evaluated. Copyright © 2015 John Wiley & Sons, Ltd.
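A minimal Python sketch of the general idea (not the paper's exact model): decompose the past monthly streamflow with a discrete wavelet transform, build lagged features from the resulting components, and fit one Bayesian regression per lead time. The wavelet ('db4'), boundary mode ('periodization'), lag depth and regressor (scikit-learn's BayesianRidge) are illustrative choices only; the paper's wavelet boundary rules and multivariate Bayesian model differ.

    import numpy as np
    import pywt
    from sklearn.linear_model import BayesianRidge

    def wavelet_features(q, wavelet="db4", level=3, mode="periodization", lags=12):
        """Stack lagged values of each wavelet component into a feature matrix."""
        coeffs = pywt.wavedec(q, wavelet, mode=mode, level=level)
        comps = []
        for k in range(len(coeffs)):
            keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
            comps.append(pywt.waverec(keep, wavelet, mode=mode)[: len(q)])
        comps = np.column_stack(comps)
        X = np.column_stack([np.roll(comps, lag, axis=0) for lag in range(lags)])
        return X[lags:]                      # drop rows contaminated by wrap-around

    rng = np.random.default_rng(1)
    q = np.sin(np.arange(600) * 2 * np.pi / 12) + 0.3 * rng.standard_normal(600)

    models = {}
    for lead in (3, 6, 9, 12):
        X = wavelet_features(q[:-lead])
        y = q[12 + lead:]                    # targets aligned with the lagged feature rows
        models[lead] = BayesianRidge().fit(X, y)
        print(lead, "months ahead, R^2 on training data:", models[lead].score(X, y))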

8.
Multi-decadal groundwater level records, which provide information about long-term variability and trends, are relatively rare. Whilst a number of studies have sought to reconstruct river flow records, there have been few attempts to reconstruct groundwater level time-series over a number of decades. Using long rainfall and temperature records, we developed and applied a methodology to do this using a lumped conceptual model. We applied the model to six sites in the UK, in four different aquifers: Chalk, limestone, sandstone and Greensand. Acceptable models of observed monthly groundwater levels were generated at four of the sites, with maximum Nash–Sutcliffe Efficiency scores of between 0.84 and 0.93 over the calibration and evaluation periods, respectively. These four models were then used to reconstruct the monthly groundwater level time-series over approximately 60 years back to 1910. Uncertainty in the simulated levels associated with model parameters was assessed using the Generalized Likelihood Uncertainty Estimation method. Known historical droughts and wet periods in the UK are clearly identifiable in the reconstructed levels, which were compared using the Standardized Groundwater Level Index. Such reconstructed records provide additional information with which to improve estimates of the frequency, severity and duration of groundwater level extremes and their spatial coherence, which, for example, is important for the assessment of the yield of boreholes during drought periods. Copyright © 2016 British Geological Survey. Hydrological Processes © 2016 John Wiley & Sons Ltd
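For reference, the Nash–Sutcliffe Efficiency quoted above is conventionally defined (our statement of the standard formula, not taken from the paper) as

    NSE = 1 - \frac{\sum_t \left( h_t^{\mathrm{obs}} - h_t^{\mathrm{sim}} \right)^2}{\sum_t \left( h_t^{\mathrm{obs}} - \bar{h}^{\mathrm{obs}} \right)^2},

where h_t^{obs} and h_t^{sim} are the observed and simulated groundwater levels and \bar{h}^{obs} is the observed mean; NSE = 1 indicates a perfect fit, so scores of 0.84–0.93 indicate a close match.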

9.
Surface waves are often used to estimate a near-surface shear-velocity profile. The inverse problem is solved for the locally one-dimensional problem of a set of homogeneous horizontal elastic layers. The result is a set of shear velocities, one for each layer. To obtain a P-wave velocity profile, the P-guided waves should be included in the inversion scheme. As an alternative to a multi-layered model, we consider a simple smooth acoustic constant-density velocity model, which has a negative constant vertical depth gradient of the squared P-wave slowness and is bounded by a free surface at the top and a homogeneous half-space at the bottom. The exact solution involves Airy functions and provides an analytical expression for the dispersion equation. If the ratio is sufficiently small, the dispersion curves can be picked from the seismic data and inverted for the continuous P-wave velocity profile. The potential advantages of our model are its low computational cost and the fact that the result can serve as a smooth starting model for full-waveform inversion. For the latter, a smooth initial model is often preferred over a rough one. We test the inversion approach on synthetic elastic data computed for a single-layer P-wave model and on field data, both with a small ratio. We find that a single-layer model can recover either the shallow or deeper part of the profile but not both, when compared with the result of a multi-layer inversion that we use as a reference. An extension of our analytic model to two layers above a homogeneous half-space, each with a constant vertical gradient of the squared P-wave slowness and connected in a continuous manner, improves the fit of the picked dispersion curves. The resulting profile resembles a smooth approximation of the multi-layered one but contains, of course, less detail. As it turns out, our method does not degrade as gracefully as, for instance, diving-wave tomography, and we can only hope to fit a subset of the dispersion curves. Therefore, the applicability of the method is limited to cases where the ratio is small and the profile is sufficiently simple. A further extension of the two-layer model to more layers, each with a constant depth gradient of the squared slowness, might improve the fit of the modal structure but at an increased cost.
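The acoustic model described above can be summarized, in our own notation, as a squared P-wave slowness that decreases linearly with depth z:

    \frac{1}{c^2(z)} = \frac{1}{c_0^2} - \gamma z, \qquad 0 \le z \le z_b, \qquad \gamma > 0,

bounded by the free surface at z = 0 and a homogeneous half-space below z = z_b. For such a profile the depth-separated wave equation has Airy-function solutions, which is what yields the analytical dispersion equation mentioned in the abstract.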

10.
Within a landform, the channelized water path from any point to the corresponding outlet is split into successive components within the Strahler ordering scheme. The probability density functions (pdf) of the length L of the whole channelized path and of the lengths of the components are studied as multi-level structural functions. We have considered a granitic area and studied both its main basin and the set of its 48 constituent basins. With respect to the main basin, the pdf of the component lengths exhibit a strong scaling property, except for the highest orders, due to a hierarchical constraint; hence, the pdf of the sum L has no particular shape. We have nevertheless identified an underlying structural pattern at particular infra- and supra-basin levels, where the hierarchical constraint is weaker. This identification process entails noting structurally emerging patterns based on multi-level variables and distributions, which satisfy the general self-similarity of networks. The fairly good fit of an analytical gamma law to most of these emerging patterns can prove to be a positive step towards both a general modelling approach to the geomorphometric functions and a stronger geomorphological basement of hydrological transfer functions. Copyright © 2005 John Wiley & Sons, Ltd.
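The analytical gamma law referred to above has the standard two-parameter density (our notation, not the paper's parameterization):

    f(l) = \frac{1}{\Gamma(k)\,\theta^{k}}\, l^{\,k-1} e^{-l/\theta}, \qquad l > 0,

with shape k and scale \theta; its flexibility in shape is what allows it to match the length distributions observed at the different structural levels.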

11.
We consider the calculation of the electrical field quantities, the electric potential and the vertical component of the total volume density of electric current, in a horizontally layered, piecewise homogeneous and arbitrarily anisotropic earth due to a system of direct-current point sources. By applying a Fourier transformation with respect to the horizontal space coordinates to the static field equations, the field quantities are obtained as the solutions of a system of transform-domain differential equations in the vertical (depth) coordinate. A recurrence scheme has been given to compute the transform-domain field quantities at any depth. The corresponding space-domain quantities are then obtained by inverse Fast Fourier Transformation (FFT). A complete computer program has been developed for computing the electric potentials at any depth of the layered earth, which is composed of an arbitrary number of anisotropic layers with arbitrary conductivity tensors. By considering point sources at different depths from the surface, equipotential contours on the surface of arbitrarily anisotropic layered-earth models are given.
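The transform step described above takes the form (our notation)

    \tilde{\varphi}(k_x, k_y, z) = \iint \varphi(x, y, z)\, e^{-i(k_x x + k_y y)}\, dx\, dy,

which, for an isotropic layer and away from the sources, turns the static field equation into the ordinary differential equation d^2\tilde{\varphi}/dz^2 = (k_x^2 + k_y^2)\,\tilde{\varphi} in the depth coordinate; in the anisotropic case treated in the paper, the coefficients of this equation depend on the conductivity tensor, and the recurrence scheme propagates its solutions across the layer interfaces before the inverse FFT returns the potentials to the space domain.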

12.
The scope of this study is to investigate the effect of the direction of seismic excitation on the fragility of an already constructed, 99-m-long, three-span highway overpass. First, the investigation is performed at the component level, quantifying the sensitivity of local damage modes of individual bridge components (namely, piers, bearings, abutments, and footings) to the direction of earthquake excitation. The global vulnerability at the system level is then assessed for a given angle of incidence of the earthquake ground motion to provide a single-angle, multi-damage probabilistic estimate of the overall bridge performance. A multi-angle, multi-damage vulnerability assessment methodology is then followed, assuming a uniform distribution for the angle of incidence of seismic waves with respect to the bridge axis. The above three levels of investigation highlight that the directivity of ground motion excitation may have a significant impact on the fragility of the individual bridge components, which shall not be a priori neglected. Most importantly, depending on the assumptions made for the component-to-system-level transition, this local sensitivity is often suppressed. It may therefore be necessary, depending on the ultimate purpose of the vulnerability or life-cycle analysis, to obtain comprehensive insight into the multiple damage potential of all individual structural and foundation components under multi-angle excitation, to quantify the statistical correlation among the distinct damage modes, and to identify the components that are both most critical and most sensitive to the direction of ground motion, carefully defining the limit states that control the predicted bridge fragility. Copyright © 2015 John Wiley & Sons, Ltd.
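For orientation, component and system fragilities of the kind discussed above are commonly expressed with a lognormal model (our generic sketch, not the paper's specific fitted functions):

    P(DS \ge ds \mid IM, \alpha) = \Phi\!\left( \frac{\ln IM - \ln m_{ds}(\alpha)}{\beta_{ds}(\alpha)} \right),

where IM is the ground-motion intensity measure, \alpha the angle of incidence, \Phi the standard normal cumulative distribution, and m_{ds}, \beta_{ds} the median and dispersion for damage state ds; the multi-angle assessment then averages such curves over a uniform distribution of \alpha.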

13.
Recent earthquake events evidenced that damage of structural components in a lifeline network may cause prolonged disruption of lifeline services, which eventually results in significant socio-economic losses in the affected area. Despite recent advances in network reliability analysis, the complexity of the problem and various uncertainties still make it a challenging task to evaluate the post-hazard performance and connectivity of lifeline networks efficiently and accurately. In order to overcome such challenges and take advantage of the merits of multi-scale analysis, this paper develops a multi-scale system reliability analysis method by integrating a network decomposition approach with the matrix-based system reliability (MSR) method. In addition to facilitating system reliability analysis of large-size networks, the multi-scale approach enables optimizing the level of computational effort on subsystems; identifying the relative importance of components and subsystems at multiple scales; and providing a collaborative risk management framework. The MSR method is uniformly applied for system reliability analyses at both the lower scale (for link failure) and the higher scale (for system connectivity) to obtain the probability of general system events, various conditional probabilities, component importance measures, statistical correlation between subsystem failures and parameter sensitivities. The proposed multi-scale analysis method is demonstrated by its application to a gas distribution network in Shelby County of Tennessee. A parametric study is performed to determine the number of segments during the lower-scale MSR analysis of each pipeline based on the strength of the spatial correlation of seismic intensity. It is shown that the spatial correlation should be considered at both scales for accurate reliability evaluation. The proposed multi-scale analysis approach provides an effective framework of risk assessment and decision support for lifeline networks under earthquake hazards. Copyright © 2009 John Wiley & Sons, Ltd.
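The connectivity quantity being evaluated can be illustrated with a brute-force Monte Carlo stand-in (Python sketch below). This is not the MSR method and ignores the multi-scale decomposition and the spatial correlation of seismic intensity emphasised in the paper; the toy network, node labels and failure probabilities are hypothetical.

    import numpy as np
    import networkx as nx

    edges = [("source", "A"), ("A", "B"), ("B", "sink"),
             ("source", "C"), ("C", "B"), ("C", "sink")]
    p_fail = {e: 0.15 for e in edges}          # per-pipeline failure probability

    rng = np.random.default_rng(0)
    n_sim, connected = 20000, 0
    for _ in range(n_sim):
        g = nx.Graph()
        g.add_nodes_from(["source", "sink", "A", "B", "C"])
        # keep each edge only if it survives in this sample
        g.add_edges_from(e for e in edges if rng.random() > p_fail[e])
        connected += nx.has_path(g, "source", "sink")

    print("estimated connectivity probability:", connected / n_sim)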

14.
Reverse-time migration gives high-quality, complete images by using full-wave extrapolations. It is thus not subject to important limitations of other migrations that are based on high-frequency or one-way approximations. The cross-correlation imaging condition in two-dimensional pre-stack reverse-time migration of common-source data explicitly sums the product of the (forward-propagating) source and (backward-propagating) receiver wavefields over all image times. The primary contribution at any image point travels a minimum-time path that has only one (specular) reflection, and it usually corresponds to a local maximum amplitude. All other contributions at the same image point are various types of multipaths, including prismatic multi-arrivals, free-surface and internal multiples, converted waves, and all crosstalk noise, which are imaged at later times and potentially create migration artefacts. A solution that facilitates inclusion of correctly imaged, non-primary arrivals and removal of the related artefacts is to save the depth versus incident angle slice at each image time (rather than automatically summing them). This results in a three-parameter (incident angle, depth, and image time) common-image volume that integrates, into a single unified representation, attributes that were previously computed by separate processes. The volume can be post-processed by selecting any desired combination of primary and/or multipath data before stacking over image time. Separate images (with or without artefacts) and various projections can then be produced without having to remigrate the data, providing an efficient tool for optimization of migration images. A numerical example for a simple model shows how primary and prismatic multipath contributions merge into a single incident angle versus image time trajectory. A second example, using synthetic data from the Sigsbee2 model, shows that the contributions to subsalt images of primary and multipath (in this case, turning wave) reflections are different. The primary reflections contain most of the information in regions away from the salt, but both primary and multipath data contribute in the subsalt region.
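The cross-correlation imaging condition referred to above can be written in its standard form (our notation) as

    I(\mathbf{x}) = \sum_{\text{shots}} \int S(\mathbf{x}, t)\, R(\mathbf{x}, t)\, dt,

where S is the forward-propagated source wavefield and R the backward-propagated receiver wavefield; the common-image volume described in the paper postpones the integration over image time t (and retains the incident angle), so that primary and multipath contributions can be selected before stacking.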

15.
We developed a frequency-domain acoustic-elastic coupled waveform inversion based on the Gauss-Newton conjugate gradient method. Despite the use of a high-performance computer system and a state-of-the-art parallel computation algorithm, it remained computationally prohibitive to calculate the approximate Hessian explicitly for a large-scale inverse problem. Therefore, we adopted the conjugate gradient least-squares algorithm, which is frequently used for geophysical inverse problems, to implement the Gauss-Newton method so that the approximate Hessian is calculated implicitly. Thus, there was no need to store the Hessian matrix. By simultaneously back-propagating the multiple components, consisting of the pressure and displacements, we could efficiently extract information on the subsurface structures. To verify our algorithm, we applied it to synthetic data sets generated from the Marmousi-2 model and the modified SEG/EAGE salt model. We also extended our algorithm to the ocean-bottom cable environment and verified it using ocean-bottom cable data generated from the Marmousi-2 model. With the assumption of a hard seafloor, we recovered both the P-wave velocity of complicated subsurface structures and the S-wave velocity. Although the inversion of the S-wave velocity is not feasible for the high Poisson's ratios used to simulate a soft seafloor, several strategies exist to treat this problem. Our example using multi-component data showed some promise in mitigating the soft seafloor effect. However, this issue still remains open.
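A minimal Python sketch of the matrix-free idea (ours, not the authors' code): the Gauss-Newton update solves the normal equations J^T J Δm = -J^T r using only products of the Jacobian J and its adjoint with vectors, so the approximate Hessian J^T J is never formed or stored. Here a small dense matrix stands in for the Jacobian-vector products that a wave-equation modelling code would supply.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    rng = np.random.default_rng(0)
    n_data, n_model = 200, 50
    J_dense = rng.standard_normal((n_data, n_model))   # stand-in Jacobian
    residual = rng.standard_normal(n_data)             # observed minus modelled data

    # In a waveform-inversion code these two callbacks would be implemented by
    # forward modelling (Jacobian-vector product) and adjoint back-propagation of
    # the multi-component residuals (transpose product); no matrix is stored.
    J = LinearOperator((n_data, n_model),
                       matvec=lambda v: J_dense @ v,
                       rmatvec=lambda w: J_dense.T @ w)

    # Conjugate-gradient least squares on min ||J dm + r||^2, i.e. the
    # Gauss-Newton normal equations J^T J dm = -J^T r solved implicitly.
    dm = lsqr(J, -residual, damp=1e-3, iter_lim=100)[0]
    print("update norm:", np.linalg.norm(dm))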

16.
The electromagnetic response of a horizontal electric dipole transmitter in the presence of a conductive, layered earth is important in a number of geophysical applications, ranging from controlled-source audio-frequency magnetotellurics to borehole geophysics to marine electromagnetics. The problem has been thoroughly studied for more than a century, starting from a dipole resting on the surface of a half-space and subsequently advancing all the way to a transmitter buried within a stack of anisotropic layers. The solution is still relevant today. For example, it is useful for one-dimensional modelling and interpretation, as well as to provide background fields for two- and three-dimensional modelling methods such as integral equation or primary-secondary field formulations. This tutorial borrows elements from the many texts and papers on the topic and combines them into what we believe is a helpful guide to performing layered-earth electromagnetic field calculations. It is not intended to replace any of the existing work on the subject. However, we have found that this combination of elements is particularly effective in teaching electromagnetic theory and providing a basis for algorithmic development. Readers will be able to calculate electric and magnetic fields at any point in or above the earth, produced by a transmitter at any location. As an illustrative example, we calculate the fields of a dipole buried in a multi-layered anisotropic earth to demonstrate how the theory developed in this tutorial can be implemented in practice; we then use the example to examine the diffusion of volume charge density within anisotropic media, a rarely visualised process. The algorithm is internally validated by comparing the response of many thin layers with alternating high and low conductivity values to the theoretically equivalent (yet algorithmically simpler) anisotropic solution, as well as externally validated against an independent algorithm.
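As background for the kind of expressions derived in such tutorials (our generic statement, not a formula copied from this paper), the field components of a dipole in a horizontally layered earth are typically written as Hankel-transform integrals,

    F(r, z) = \int_0^{\infty} K(\lambda, z)\, J_n(\lambda r)\, \lambda\, d\lambda,

where F is a field component at horizontal offset r and depth z, J_n is a Bessel function of order 0 or 1, and the kernel K carries the layered-earth behaviour through recursions over the layers; such integrals are usually evaluated numerically with digital filters.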

17.
A new, adaptive multi-criteria method for accurate estimation of first breaks in three-component, three-dimensional vertical seismic profiling data is proposed. Initially, we manually pick first breaks for the first gather of the three-dimensional borehole set and adjust several coefficients to approximate the first-break wave-shape parameters. We then predict the first breaks for the next source point using the previous one, assuming the same average velocity. We follow this by calculating an objective function for a moving trace window and minimizing it with respect to time shift and slope. This function combines four main properties that characterize first breaks on three-component borehole data: linear polarization, signal/noise ratio, similarity in wave shapes for close shots and their stability in the time interval after the first break. We then adjust the coefficients by combining current and previous values. This approach uses adaptive parameters to follow smooth wave-shape changes. Finally, we average the first breaks after they are determined in the overlapping windows. The method utilizes all three components to calculate the objective function for the direct compressional wave projection. An adaptive multi-criteria optimization approach using multiple three-component traces makes this method very robust, even for data contaminated with high noise. An example using actual data demonstrates the stability of this method.
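A rough Python illustration (not the authors' algorithm) of how two of the four criteria can be quantified and combined for a candidate pick: rectilinearity from the eigenvalues of the three-component covariance matrix measures linear polarization, and an energy ratio around the pick measures signal-to-noise. The weights, window length and synthetic trace below are hypothetical.

    import numpy as np

    def rectilinearity(window):
        """window: array of shape (3, n) holding the three components."""
        lam = np.linalg.eigvalsh(np.cov(window))        # ascending eigenvalues
        return 1.0 - (lam[0] + lam[1]) / (2.0 * lam[2])

    def energy_ratio(trace, pick, half=20):
        post = np.sum(trace[:, pick:pick + half] ** 2)
        pre = np.sum(trace[:, pick - half:pick] ** 2) + 1e-12
        return post / pre

    def objective(trace, pick, w1=1.0, w2=1.0, half=20):
        win = trace[:, pick:pick + half]
        # larger rectilinearity and energy ratio are better, so minimise the negative
        return -(w1 * rectilinearity(win) + w2 * np.log(energy_ratio(trace, pick, half)))

    # Synthetic 3-C trace: noise followed by a linearly polarised arrival at sample 300.
    rng = np.random.default_rng(0)
    trace = 0.1 * rng.standard_normal((3, 600))
    direction = np.array([0.2, 0.3, 0.93])[:, None]
    trace[:, 300:] += direction * np.sin(0.3 * np.arange(300))

    picks = np.arange(250, 350)
    best = picks[np.argmin([objective(trace, p) for p in picks])]
    print("best pick near sample:", best)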

18.
Finite-difference frequency-domain modelling of seismic wave propagation is attractive for its efficient solution of multisource problems, which is crucial for full-waveform inversion and seismic imaging, especially in the three-dimensional case. However, implementing the free surface in the finite-difference method is nontrivial. Based on an average medium method and the limit theorem, we present an adaptive free-surface expression to describe the behaviour of wavefields at the free surface, and no extra work for the free-surface boundary condition is needed. Essentially, the proposed free-surface expression is a modification of the density and constitutive relation at the free surface. In comparison with a direct difference approximation of the free-surface boundary condition, this adaptive free-surface expression can produce more accurate and stable results for a broad range of Poisson's ratios. In addition, the expression performs well in handling lateral variations of Poisson's ratio adaptively and without instability.
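For reference, the condition being modelled is the traction-free (free-surface) boundary condition of elastodynamics, which in two dimensions with the surface at z = 0 reads (standard form, our notation)

    \sigma_{zz}\big|_{z=0} = 0, \qquad \sigma_{xz}\big|_{z=0} = 0;

the adaptive expression in the paper enforces this implicitly by modifying the density and the constitutive relation in the cells straddling the surface, rather than through a one-sided difference approximation of these equations.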

19.
Seismic tomography is a well-established approach to invert smooth macro-velocity models from kinematic parameters, such as traveltimes and their derivatives, which can be directly estimated from data. Tomographic methods differ more with respect to data domains than in the specifications of their inverse-problem solving schemes. Typical examples are stereotomography, which is applied to prestack data, and Normal-Incidence-Point-wave tomography, which is applied to common-midpoint stacked data. One of the main challenges within the tomographic approach is the reliable estimation of the kinematic attributes from the data that are used in the inversion process. Estimations in the prestack domain (weak and noisy signals), as well as in the post-stack domain (occurrence of triplications and diffractions leading to numerous conflicting-dip situations), may lead to parameter inaccuracies that will adversely impact the resulting velocity models. To overcome the above limitations, a new tomographic procedure applied in the time-migrated domain is proposed. We call this method Image-Incidence-Point-wave tomography. The new scheme can be seen as an alternative to Normal-Incidence-Point-wave tomography. The latter method is based on traveltime attributes associated with normal rays, whereas the Image-Incidence-Point-wave technique is based on the corresponding quantities for the image rays. Compared to Normal-Incidence-Point-wave tomography, the proposed method eases the selection of the tomography attributes, which is shown by synthetic and field data examples. Moreover, the method provides a direct way to convert time-migration velocities into depth-migration velocities without the need for any Dix-style inversion.
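The Dix-style inversion that the proposed method avoids is, in its classical form (our notation), the layer-stripping conversion of root-mean-square velocities V_{rms,n} picked at zero-offset times t_n into interval velocities:

    V_{\mathrm{int},n}^2 = \frac{V_{\mathrm{rms},n}^2\, t_n - V_{\mathrm{rms},n-1}^2\, t_{n-1}}{t_n - t_{n-1}};

the tomography based on image-ray attributes instead maps time-migration velocities to depth-migration velocities directly, without this step.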

20.
A marine source generates both a direct wavefield and a ghost wavefield. This is caused by the strong surface reflectivity, resulting in a blended source array, the blending process being natural. The two unblended response wavefields correspond to the real source at the actual location below the water level and to the ghost source at the mirrored location above the water level. As a consequence, deghosting becomes deblending ('echo-deblending') and can be carried out with a deblending algorithm. In this paper we present source deghosting by an iterative deblending algorithm that properly includes the angle dependence of the ghost: it represents a closed-loop, non-causal solution. The proposed echo-deblending algorithm is also applied to the detector deghosting problem. The detector cable may be slanted, and shot records may be generated by blended source arrays, the blending being created by simultaneous sources. Similar to surface-related multiple elimination, the method is independent of the complexity of the subsurface; only what happens at and near the surface is relevant. This means that the actual sea state may cause the reflection coefficient to become frequency dependent, and the water velocity may not be constant due to temporal and lateral variations in pressure, temperature, and salinity. As a consequence, we propose that estimation of the actual ghost model should be part of the echo-deblending algorithm. This is particularly true for source deghosting, where the interaction of the source wavefield with the surface may be far from linear. The echo-deblending theory also shows how multi-level source acquisition and multi-level streamer acquisition can be numerically simulated from standard acquisition data. The simulated multi-level measurements increase the performance of the echo-deblending process. The output of the echo-deblending algorithm on the source side consists of two ghost-free records: one generated by the real source at the actual location below the water level and one generated by the ghost source at the mirrored location above the water level. If we apply our algorithm at the detector side as well, we end up with four ghost-free shot records. All these records are input to migration. Finally, we demonstrate that the proposed echo-deblending algorithm is robust in the presence of background noise.
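In the simplest flat-sea, constant-velocity description (our sketch; the paper argues that the actual ghost model should be estimated within the algorithm precisely because these assumptions break down), the source ghost acts in the frequency-wavenumber domain as

    G(\omega, k_x) = 1 + r\, e^{-2 i k_z z_s}, \qquad k_z = \sqrt{\omega^2 / c^2 - k_x^2},

where z_s is the source depth, c the water velocity, and r \approx -1 the surface reflection coefficient. The angle dependence enters through k_z, which is why the echo-deblending formulation treats deghosting as an angle-dependent deblending problem rather than as a simple scalar filter.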

