Similar documents
Found 20 similar documents (search time: 46 ms)
1.
In hydraulic fracturing treatments, locating not only hydraulic fractures but also any pre‐existing natural fractures and faults in a subsurface reservoir is very important. Hydraulic fractures can be tracked by locating microseismic events, but to identify the locations of natural fractures, an additional technique is required. In this paper, we present a method to image pre‐existing fractures and faults near a borehole with virtual reverse vertical seismic profiling data or virtual single‐well profiling data (limited to seismic reflection data) created from microseismic monitoring using seismic interferometry. The virtual source data contain reflections from natural fractures and faults, and these features can be imaged by applying migration to the virtual source data. However, the imaging zone of fractures in the proposed method is strongly dependent on the geographic extent of the microseismic events and the location and direction of the fracture. To verify our method, we produced virtual reverse vertical seismic profiling and single‐well profiling data from synthetic microseismic data and compared them with data from real sources in the same relative position as the virtual sources. The results show that the reflection travel times from the fractures in the virtual source data agree well with travel times in the real‐source data. By applying pre‐stack depth migration to the virtual source data, images of the natural fractures were obtained with accurate locations. However, the migrated section of the single‐well profiling data with both real and virtual sources contained spurious fracture images on the opposite side of the borehole. In the case of virtual single‐well profiling data, we could produce correct migration images of fractures by adopting directional redatuming for which the occurrence region of microseismic events is divided into several subdivisions, and fractures located only on the opposite side of the borehole are imaged for each subdivision.  
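The core of the virtual-source construction described above is seismic interferometry: cross-correlating the recordings of the same microseismic event at two receivers yields, up to a scale factor, a trace as if one receiver had acted as a source at the other's position. A minimal sketch of that cross-correlation step on a toy spike model (the geometry and sample numbers are illustrative, not from the paper):

```python
import numpy as np

def virtual_source_trace(rec_a, rec_b):
    """Cross-correlate two receiver recordings of the same passive
    source; the result approximates (up to a scale factor) a trace
    from a virtual source at receiver A recorded at receiver B."""
    return np.correlate(rec_b, rec_a, mode="full")

# toy example: one event arrives at sample 20 at receiver A
# and at sample 50 at receiver B
rec_a = np.zeros(100); rec_a[20] = 1.0
rec_b = np.zeros(100); rec_b[50] = 1.0

xc = virtual_source_trace(rec_a, rec_b)
# lag of the correlation peak = differential travel time A -> B
lag = int(np.argmax(xc)) - (len(rec_a) - 1)   # 30 samples
```

The peak lag is the travel-time difference between the two receivers, which is exactly the arrival time the virtual-source trace should carry.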

2.
In this paper, we present a case study on the use of the normalized source strength (NSS) for the interpretation of magnetic and gravity gradient tensor data. This application arises in the exploration of nickel, copper and platinum group element (Ni‐Cu‐PGE) deposits in the McFaulds Lake area, Northern Ontario, Canada. In this study, we have used the normalized source strength function derived from recent high‐resolution aeromagnetic and gravity gradiometry data for locating geological bodies. In our algorithm, we use maxima of the normalized source strength for estimating the horizontal location of the causative body. We then estimate the depth to the source and the structural index at that point using the ratio between the normalized source strength and its vertical derivative calculated at two levels: the measurement level and a height h above it. To discriminate more reliable solutions from spurious ones, we reject solutions with unreasonable estimated structural indices. This method uses an upward continuation filter, which reduces the effect of high‐frequency noise. In the magnetic case, the advantage is that the normalized magnetic source strength is, in general, relatively insensitive to magnetization direction, so it provides more reliable information than standard techniques when geologic bodies carry remanent magnetization. For dipping gravity sources, the calculated normalized source strength yields a reliable estimate of the source location by peaking right above the top surface. Application of the method to aeromagnetic and gravity gradient tensor data sets from the McFaulds Lake area indicates that most of the gravity and magnetic sources are located just beneath an overburden about 20 m thick on average, and that the delineated magnetic and gravity sources, which can probably be approximated by geological contacts and thin dikes, extend up to the overburden.
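The two-level depth estimate described above can be illustrated with a deliberately simplified model: if the NSS of an idealized source decays as c/depth^N, measuring it at the original level and after upward continuation by a height h determines the depth. This is a hedged sketch of the idea, not the authors' exact formula:

```python
import numpy as np

def depth_from_two_levels(s0, s1, h, n):
    """Depth to source from NSS amplitudes s0 (measurement level)
    and s1 (upward-continued a height h), assuming the simplified
    decay law s ~ c / depth**n for structural index-like exponent n."""
    r = (s0 / s1) ** (1.0 / n)       # ratio of the two source distances
    return h / (r - 1.0)

# synthetic check: point-like source (n = 3) at 150 m depth
z0, n, h, c = 150.0, 3, 40.0, 1e6
s0 = c / z0 ** n
s1 = c / (z0 + h) ** n
est = depth_from_two_levels(s0, s1, h, n)   # recovers ~150 m
```

Rejecting solutions whose recovered exponent falls outside a physically reasonable range, as the paper does for the structural index, would filter the spurious estimates this simple formula produces on noisy data.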

3.
We have developed a least-squares approach to determine simultaneously the depth to both the top and base of a buried finite vertical cylinder (vertical line element approximation) and a 2-D vertical thin sheet from moving average residual anomaly profiles obtained from gravity data using filters of successive window lengths. The method involves using a relationship between the depths to the top and base of the source and a combination of windowed observations. It is based on computing the standard deviation of the depths to the top, determined from all moving average residual anomalies, for each value of the depth to the base. The standard deviation may generally be considered a criterion for determining the correct depth to the top and base of the buried structure. When the correct depth to the base is used, the standard deviation of the depths to the top is smaller than with incorrect values of the depth to the base. This method can be applied to residuals as well as to the observed gravity data. The method is applied to synthetic examples with and without random errors and tested on two field examples from the USA and Canada.
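The selection criterion above, choosing the trial base depth whose window-by-window top-depth estimates are most self-consistent, can be sketched in a few lines; the candidate values below are hypothetical, standing in for estimates derived from moving-average residuals of successive window lengths:

```python
import numpy as np

def pick_base_depth(top_estimates_by_base):
    """Select the candidate base depth whose top-depth estimates
    (one per window length) have the smallest standard deviation."""
    return min(top_estimates_by_base,
               key=lambda zb: np.std(top_estimates_by_base[zb]))

# hypothetical top-depth estimates (km) for three trial base depths
candidates = {
    2.0: [1.4, 0.8, 1.9, 1.1],        # scattered -> wrong base depth
    3.0: [1.21, 1.19, 1.20, 1.22],    # consistent -> correct base depth
    4.0: [1.6, 0.9, 1.3, 1.8],
}
zb = pick_base_depth(candidates)       # selects 3.0
```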

4.
We present a new integrated approach to the interpretation of magnetic basement that is based on recognition of characteristic patterns in distributions and alignments of magnetic source depth solutions above and below the surface of magnetic basement. This approach integrates a quantitative analysis of depth solutions, obtained by 2D Werner deconvolution of the magnetic data, with a qualitative evaluation of the Bouguer gravity anomalies. The crystalline/metamorphic basement and sedimentary cover have different origins, tectonic histories, lithologies and magnetic properties. These differences result in different geometries of magnetic sources associated with faults, fracture zones, igneous intrusions, erosional truncations, subcrop edges and other structural discontinuities. Properly tuned, 2D Werner deconvolution is able to resolve the intra‐sedimentary and intra‐basement magnetic source geometries into distinctly different distributions and alignments of calculated depth solutions. An empirical set of criteria, basement indicators, was developed for identification and correlation of the basement surface. The ambiguity of basement correlation with limited or non‐existent well control, which is common for onshore frontier and offshore explorations, can be reduced by incorporating the Bouguer gravity data into the process of correlation.

5.
Potential field data such as geoid and gravity anomalies are globally available and offer valuable information about the Earth's lithosphere, especially in areas where seismic data coverage is sparse. For instance, non‐linear inversion of Bouguer anomalies could be used to estimate crustal structure, including variations of the crustal density and of the depth of the crust–mantle boundary, that is, the Moho. However, due to the non‐linearity of this inverse problem, classical inversion methods fail whenever there is no reliable initial model. Swarm intelligence algorithms, such as particle swarm optimisation, are a promising alternative to classical inversion methods because the quality of their solutions does not depend on the initial model; they do not use the derivatives of the objective function, hence allowing the use of the L1 norm; and finally, they are global search methods, meaning the problem can be non‐convex. In this paper, quantum‐behaved particle swarm, a probabilistic swarm intelligence‐like algorithm, is used to solve the non‐linear gravity inverse problem. The method is first successfully tested on a realistic synthetic crustal model with a linear vertical density gradient and lateral density and depth variations at the base of the crust in the presence of white Gaussian noise. Then, it is applied to EIGEN 6c4, a combined global gravity model, to estimate the depth to the base of the crust and the mean density contrast between the crust and the upper‐mantle lithosphere in the Eurasia–Arabia continental collision zone along a 400 km profile crossing the Zagros Mountains (Iran). The results agree well with previously published works, including both seismic and potential field studies.
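A minimal quantum-behaved particle swarm optimiser is sketched below on a toy objective. The update rule (sampling each particle around a local attractor with a spread set by its distance to the mean best position) is the standard QPSO recipe; the swarm size, iteration count, and contraction-expansion schedule are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def qpso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal quantum-behaved PSO (sketch). No velocities and no
    gradients: each particle is resampled around an attractor built
    from its personal best and the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pcost = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters          # contraction-expansion coeff.
        mbest = pbest.mean(axis=0)            # mean of personal bests
        for i in range(n_particles):
            phi = rng.random(dim)
            p = phi * pbest[i] + (1 - phi) * g     # local attractor
            u = rng.random(dim)
            sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)
            x[i] = p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u)
            x[i] = np.clip(x[i], lo, hi)
            c = objective(x[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, x[i].copy()
        g = pbest[np.argmin(pcost)].copy()
    return g, float(objective(g))

# toy misfit: sphere function, minimum 0 at the origin
best, cost = qpso(lambda v: float(np.sum(v ** 2)), dim=2)
```

In a gravity inversion the objective would instead be the (for example, L1) misfit between observed and modelled Bouguer anomalies over the model parameters.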

6.
The recent use of marine electromagnetic technology for exploration geophysics has primarily focused on applying the controlled source electromagnetic method for hydrocarbon mapping. However, this technology also has potential for structural mapping applications, particularly when the relative higher frequency controlled source electromagnetic data are combined with the lower frequencies of naturally occurring magnetotelluric data. This paper reports on an extensive test using data from 84 marine controlled source electromagnetic and magnetotelluric stations for imaging volcanic sections and underlying sediments on a 128‐km‐long profile. The profile extends across the trough between the Faroe and Shetland Islands in the North Sea. Here, we focus on how 2.5D inversion can best recover the volcanic and sedimentary sections. A synthetic test carried out with 3D anisotropic model responses shows that vertically transverse isotropy 2.5D inversion using controlled source electromagnetic and magnetotelluric data provides the most accurate prediction of the resistivity in both volcanic and sedimentary sections. We find the 2.5D inversion works well despite moderate 3D structure in the synthetic model. Triaxial inversion using the combination of controlled source electromagnetic and magnetotelluric data provided a constant resistivity contour that most closely matched the true base of the volcanic flows. For the field survey data, triaxial inversion of controlled source electromagnetic and magnetotelluric data provides the best overall tie to well logs with vertically transverse isotropy inversion of controlled source electromagnetic and magnetotelluric data a close second. Vertical transverse isotropy inversion of controlled source electromagnetic and magnetotelluric data provided the best interpreted base of the volcanic horizon when compared with our best seismic interpretation. 
The structural boundaries estimated by the 20‐Ω·m contour of the vertical resistivity obtained by vertical transverse isotropy inversion of controlled source electromagnetic and magnetotelluric data gives a maximum geometric location error of 11% with a mean error of 1.2% compared with the interpreted base of the volcanic horizon. Both the model study and field data interpretation indicate that marine electromagnetic technology has the potential to discriminate between low‐resistivity prospective siliciclastic sediments and higher resistivity non‐prospective volcaniclastic sediments beneath the volcanic section.

7.
Triassic outcrops in the Atlassic zone of northern Tunisia may be modelled in two ways: salt bodies piercing through Cretaceous terrains, or Triassic salt flows stratified within an Albian series. Both models find support in gravity data and remain debatable. To evaluate how the mass distribution changes with depth, the Bouguer anomaly of the El Kef‐Ouargha region was successively decomposed into regional and residual components to construct multiple pseudo‐depth slices and apparent density maps. Analyses of gravity lows clearly show a vertical continuity of less dense materials below the Triassic salt outcrops. These features can be explained by salt diapirism during the Mesozoic and Cenozoic. Further, the gravity data tend to indicate less dense materials below Aptian outcrops in Jebel Aite (Oued Bou Adila), suggesting Triassic materials at depth. In addition, dense entities were recognized under Mio‐Pliocene and Quaternary deposits, which are thought to correspond to Cretaceous paleoshoals currently collapsed by non‐outcropping faults. Our findings support a model of a diapir intruding its overburden rather than the salt glacier model, stratified in the Albian series, proposed by some authors as the genetic structural model for the Triassic material‐bearing series in northern Tunisia.

8.
A method is described to locate secondary faults, which can be difficult to identify on the Bouguer gravity map. The method is based on cross-correlation between the theoretical anomaly due to a vertical step and the second vertical derivative of the Bouguer anomaly. Faults are located from the closed maxima and minima on the cross-correlation contour map calculated for two perpendicular directions. One-dimensional model computations show that the magnitude of the extremum of the cross-correlation is related to the depth to the top of the hanging wall and the throw of the fault. Application of the method to the Bouguer gravity map of the former mouth of the Yellow River in the Shengli Oilfield area near the Bo Hai Sea shows the effectiveness of the method.
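The cross-correlation idea can be sketched in 1D. As a simplified stand-in for the paper's pairing (theoretical step anomaly against the second vertical derivative of the Bouguer field), we correlate the horizontal-gradient signature of a vertical step, a Lorentzian that peaks directly above the fault trace; all numbers are illustrative:

```python
import numpy as np

x = np.arange(-500.0, 501.0)                 # profile coordinate, 1 m step

def edge_signature(x, x0, depth):
    """Horizontal gradient of a vertical-step anomaly (shape only,
    scale constants omitted): a Lorentzian peaked above the fault."""
    return depth / (depth ** 2 + (x - x0) ** 2)

observed = edge_signature(x, 120.0, 80.0)    # fault actually at x = 120 m
template = edge_signature(x, 0.0, 80.0)      # theoretical anomaly at origin

xc = np.correlate(observed, template, mode="full")
shift = int(np.argmax(xc)) - (len(x) - 1)    # lag of the correlation peak
fault_x = shift * (x[1] - x[0])              # recovered fault position (m)
```

In 2D, as in the paper, the same correlation is computed along two perpendicular directions and faults are picked from the closed extrema of the resulting contour map.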

9.
To better understand (and correct for) the factors affecting the estimation of attenuation (Q), we simulate subsurface wave propagation with the Weyl/Sommerfeld integral. The complete spherical wavefield emanating from a P‐wave point source surrounded by a homogeneous, isotropic and attenuative medium is thus computed. In a resulting synthetic vertical seismic profile, we observe near‐field and far‐field responses and a 90° phase rotation between them. Depth dependence of the magnitude spectra in these two depth regions is distinctly different. The logarithm of the magnitude spectra shows a linear dependence on frequency in the far‐field but not in those depth regions where the near‐field becomes significant. Near‐field effects are one possible explanation for large positive and even negative Q‐factors in the shallow section that may be estimated from real vertical seismic profile data when applying the spectral ratio method. We outline a near‐field compensation technique that can reduce errors in the resultant Q estimates.
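The spectral ratio method referred to above rests on the far-field relation ln(A2(f)/A1(f)) = −πfΔt/Q + const, so Q follows from the slope of the log spectral ratio versus frequency. A sketch on noise-free synthetic spectra (the source spectrum and numbers are illustrative):

```python
import numpy as np

def q_spectral_ratio(f, amp1, amp2, dt):
    """Far-field spectral ratio method: fit the slope of
    ln(A2/A1) vs frequency; slope = -pi*dt/Q for interval
    travel time dt between the two receiver depths."""
    slope, _ = np.polyfit(f, np.log(amp2 / amp1), 1)
    return -np.pi * dt / slope

# synthetic far-field spectra for Q = 50 between two depth levels
f = np.linspace(5.0, 80.0, 60)                 # Hz
dt = 0.1                                       # s, interval travel time
Q_true = 50.0
amp1 = np.exp(-0.01 * f)                       # arbitrary source spectrum
amp2 = amp1 * np.exp(-np.pi * f * dt / Q_true)
Q_est = q_spectral_ratio(f, amp1, amp2, dt)    # recovers ~50
```

Near-field contamination breaks the linearity of ln(A2/A1) in f, which is exactly why the paper reports unstable, even negative, Q estimates in the shallow section.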

10.
We suggest a new method to determine the piecewise‐continuous vertical distribution of instantaneous velocities within sediment layers, using different order time‐domain effective velocities on their top and bottom points. We demonstrate our method using a synthetic model that consists of different compacted sediment layers characterized by monotonously increasing velocity, combined with hard rock layers, such as salt or basalt, characterized by constant fast velocities, and low velocity layers, such as gas pockets. We first show that, by using only the root‐mean‐square velocities and the corresponding vertical travel times (computed from the original instantaneous velocity in depth) as input for a Dix‐type inversion, many different vertical distributions of the instantaneous velocities can be obtained (inverted). Some geological constraints, such as limiting the values of the inverted vertical velocity gradients, should be applied in order to obtain more geologically plausible velocity profiles. In order to limit the non‐uniqueness of the inverted velocities, additional information should be added. We have derived three different inversion solutions that yield the correct instantaneous velocity, avoiding any a priori geological constraints. The additional data at the interface points contain either the average velocities (or depths) or the fourth‐order average velocities, or both. Practically, average velocities can be obtained from nearby wells, whereas the fourth‐order average velocity can be estimated from the quartic moveout term during velocity analysis. Along with the three different types of input, we consider two types of vertical velocity models within each interval: distribution with a constant velocity gradient and an exponential asymptotically bounded velocity model, which is in particular important for modelling thick layers. It has been shown that, in the case of thin intervals, both models lead to similar results. 
The method allows us to establish the instantaneous velocities at the top and bottom interfaces, where the velocity profile inside the intervals is given by either the linear or the exponential asymptotically bounded velocity models. Since the velocity parameters of each interval are independently inverted, discontinuities of the instantaneous velocity at the interfaces occur naturally. The improved accuracy of the inverted instantaneous velocities is particularly important for accurate time‐to‐depth conversion.
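The Dix-type inversion that this approach starts from can be sketched in a few lines: interval velocities from rms velocities and vertical travel times at the interfaces. The values are illustrative, and the paper's actual contribution (the additional average-velocity and quartic-moveout constraints that resolve the non-uniqueness) is not reproduced here:

```python
import numpy as np

def dix_interval_velocity(v_rms, t0):
    """Classical Dix inversion: piecewise-constant interval
    velocities from rms velocities v_rms and vertical two-way
    times t0 at the interfaces."""
    v2t = v_rms ** 2 * t0
    return np.sqrt(np.diff(v2t) / np.diff(t0))

# three interfaces: rms velocity (m/s) and vertical time (s)
v_rms = np.array([2000.0, 2200.0, 2500.0])
t0 = np.array([0.5, 1.0, 1.6])
v_int = dix_interval_velocity(v_rms, t0)   # two interval velocities
```

Many different vertical velocity distributions inside each interval reproduce these same rms velocities and times, which is the non-uniqueness the paper's extra interface data are designed to remove.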

11.
Electrical conductivity (EC) logs were obtained by both open‐borehole logging and passive multilevel sampling (MLS) in an observation borehole penetrating the Coastal Aquifer in Tel Aviv, Israel. Homogeneous vertical velocities for a 70‐m thick subaquifer were approximated from each profile using a steady‐state advection‐diffusion model. The open‐borehole log led to an overestimation of the steady‐state upward advective flux of deep brines (vertical velocity of 0.95 cm/yr as compared to 0.07 cm/yr for the MLS profile). The combination of depth‐dependent data and the suggested simple modeling approach comprises a method for assessing the vertical location of salinity sources and the nature of salt transport from them (i.e., advective vs. diffusive). However, in this case, the easily obtained open‐borehole logs should not be used for collecting depth‐dependent data.

12.
Information on the mass and the spatial location of an arbitrary source body can be obtained by performing suitable integrations of 3D gravity and magnetic data along an infinite straight line. No assumptions on the density/magnetization distribution or the shape and location of the source are required. For an oblique borehole, a relationship between the lower limit of the source mass and the distance to the body is obtained. The mass contrast and the magnetic moment of the source can also be estimated. For a vertical borehole, both gravity and vertical magnetic component anomalies have equal areas to the left and right of the depth axis. The particular case of a horizontal gallery not intersecting the body is also studied. If the source is intersected, a lower limit is estimated for the maximum thickness of the body along the gallery. Information on the vertical coordinate of the centre of mass of the source can also be obtained. Numerical tests with synthetic gravity data support the theoretical results.

13.
Non‐uniqueness occurs with the 1D parametrization of refraction traveltime graphs in the vertical dimension and with the 2D lateral resolution of individual layers in the horizontal dimension. The most common source of non‐uniqueness is the inversion algorithm used to generate the starting model. This study applies 1D, 1.5D and 2D inversion algorithms to traveltime data for a syncline (2D) model, in order to generate starting models for wave path eikonal traveltime tomography. The 1D tau‐p algorithm produced a tomogram with an anticline rather than a syncline and an artefact with a high seismic velocity. The 2D generalized reciprocal method generated tomograms that accurately reproduced the syncline, together with narrow regions at the thalweg with seismic velocities that are less than and greater than the true seismic velocities as well as the true values. It is concluded that 2D inversion algorithms, which explicitly identify forward and reverse traveltime data, are required to generate useful starting models in the near‐surface where irregular refractors are common. The most likely tomogram can be selected as either the simplest model or with a priori information, such as head wave amplitudes. The determination of vertical velocity functions within individual layers is also subject to non‐uniqueness. Depths computed with vertical velocity gradients, which are the default with many tomography programs, are generally 50% greater than those computed with constant velocities for the same traveltime data. The average vertical velocity provides a more accurate measure of depth estimates, where it can be derived. Non‐uniqueness is a fundamental reality with the inversion of all near‐surface seismic refraction data. 
Unless specific measures are taken to explicitly address non‐uniqueness, the production of a single refraction tomogram that fits the traveltime data to sufficient accuracy does not necessarily demonstrate that the result is either ‘correct’ or the most probable.

14.
Tile‐drain response to rainfall events is determined by unsaturated vertical flow to the water table, followed by horizontal saturated water movement. In this study, unsaturated vertical movement from the redistribution of water is modelled using a sharp‐front approximation, and the saturated horizontal flow is modelled by an approximate solution to the Boussinesq equation. The unsaturated flow component models the fast response that is associated with the presence of preferential flow paths. By convoluting the responses of the two components, a transfer function is developed that predicts tile‐drain response to unit amounts of infiltrated water. It is observed that the unsaturated flow component can be cast in a form that is linear in a power function of the infiltrated depth. Since the approach is process based, model parameter definitions are easily identified with soil properties at the field scale. Furthermore, it is demonstrated that the transfer function model parameters can be estimated from moment analysis. Using superposition, the transient tile‐drain response to arbitrary amounts of infiltrated water can be constructed. Comparison with data measured from the Water Quality Field Station shows that this approach provides a promising method for generating tile‐drain response to rainfall events. Copyright © 2006 John Wiley & Sons, Ltd.
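The convolution-and-superposition structure of this model can be sketched directly. Both kernels below are hypothetical exponential responses standing in for the paper's sharp-front and Boussinesq solutions; only the composition (unsaturated response convolved with saturated response, then convolved with the infiltration series) follows the text:

```python
import numpy as np

def tile_drain_response(infiltration, unsat_kernel, sat_kernel, dt):
    """Convolve infiltrated depths with the composite transfer
    function: unsaturated response convolved with saturated
    response, then applied to the whole infiltration series."""
    transfer = np.convolve(unsat_kernel, sat_kernel) * dt
    return np.convolve(infiltration, transfer)[: len(infiltration)]

dt = 1.0                                    # day
t = np.arange(0.0, 20.0, dt)
# hypothetical kernels: fast unsaturated pulse, slower saturated recession
unsat = np.exp(-t / 0.5); unsat /= unsat.sum() * dt
sat = np.exp(-t / 3.0);   sat /= sat.sum() * dt

rain = np.zeros(60)
rain[5] = 10.0                              # 10 mm infiltrated on day 5
q = tile_drain_response(rain, unsat, sat, dt)
```

Because both unit-area kernels conserve mass, the drain eventually discharges the full 10 mm, and the response is causal (zero before day 5), mirroring the superposition argument in the abstract.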

15.
The conventional spectral analysis method for interpretation of magnetic data assumes stationary spatial series and a white‐noise source distribution. However, long magnetic profiles may not be stationary in nature and source distributions are not white. Long non‐stationary magnetic profiles can be divided into stationary subprofiles following Wiener filter theory. A least‐squares inverse method is used to calculate the scaling exponents and depth values of magnetic interfaces from the power spectrum. The applicability of this approach is demonstrated on non‐stationary synthetic and field magnetic data collected along the Nagaur–Jhalawar transect, western India. The stationarity of the whole profile and the subprofiles of the synthetic and field data is tested. The variation of the mean and standard deviations of the subprofiles is significantly reduced compared with the whole profile. The depth values found from the synthetic model are in close agreement with the assumed depth values, whereas for the field data these are in close agreement with estimates from seismic, magnetotelluric and gravity data.

16.
A high‐resolution method to image the horizontal boundaries of gravity and magnetic sources is presented (the enhanced horizontal derivative (EHD) method). The EHD is formed by taking the horizontal derivative of a sum of vertical derivatives of increasing order. The location of EHD maxima is used to outline the source boundaries. While for gravity anomalies the method can be applied immediately, magnetic anomalies should be previously reduced to the pole. We found that working on reduced‐to‐the‐pole magnetic anomalies leads to better results than those obtainable by working on magnetic anomalies in dipolar form, even when the magnetization direction parameters are not well estimated. This is confirmed also for other popular methods used to estimate the horizontal location of potential fields source boundaries. The EHD method is highly flexible, and different conditions of signal‐to‐noise ratios and depths‐to‐source can be treated by an appropriate selection of the terms of the summation. A strategy to perform high‐order vertical derivatives is also suggested. This involves both frequency‐ and space‐domain transformations and gives more stable results than the usual Fourier method. The high resolution of the EHD method is demonstrated on a number of synthetic gravity and magnetic fields due to isolated as well as to interfering deep‐seated prismatic sources. The resolving power of this method was tested also by comparing the results with those obtained by another high‐resolution method based on the analytic signal. The success of the EHD method in the definition of the source boundary is due to the fact that it conveys efficiently all the different boundary information contained in any single term of the sum. Application to a magnetic data set of a volcanic area in southern Italy helped to define the probable boundaries of a calderic collapse, marked by a number of magmatic intrusions. 
Previous interpretations of gravity and magnetic fields suggested a subcircular shape for this caldera, the boundaries of which are imaged with better detail using the EHD method.

17.
Full Tensor Gravity Gradiometry (FTG) data are routinely used in exploration programmes to evaluate and explore geological complexities hosting hydrocarbon and mineral resources. FTG data are typically used to map a host structure and locate target responses of interest using a myriad of imaging techniques. Identified anomalies of interest are then examined using 2D and 3D forward and inverse modelling methods for depth estimation. However, such methods tend to be time consuming and reliant on an independent constraint for clarification. This paper presents a semi‐automatic method to interpret FTG data using an adaptive tilt angle approach. The present method uses only the three vertical tensor components of the FTG data (Tzx, Tzy and Tzz) with a scale value that is related to the nature of the source (point anomaly or linear anomaly). With this adaptation, it is possible to estimate the location and depth of simple buried gravity sources such as point masses, line masses and vertical and horizontal thin sheets, provided that these sources exist in isolation and that the FTG data have been sufficiently filtered to minimize the influence of noise. Computation times are fast, producing plausible single‐solution depth estimates that relate directly to anomalies. For thick sheets, the method can resolve the thickness of these layers assuming the depth to the top is known from drilling or other independent geophysical data. We demonstrate the practical utility of the method using examples of FTG data acquired over the Vinton Salt Dome, Louisiana, USA and basalt flows in the Faeroe‐Shetland Basin, UK. A major benefit of the method is the ability to quickly construct depth maps. Such results are used to produce best‐estimate initial depth‐to‐source maps that can act as initial models for any detailed quantitative modelling exercises using 2D/3D forward/inverse modelling techniques.
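The tilt angle that the adaptive method builds on can be computed from the three vertical tensor components alone. Below is a sketch of the standard (non-adaptive) definition, the vertical gradient normalised by the total horizontal gradient magnitude; the adaptive scale value described in the paper is not reproduced:

```python
import numpy as np

def ftg_tilt(tzx, tzy, tzz):
    """Standard tilt angle from the three vertical FTG components:
    arctan of Tzz over the total horizontal gradient magnitude.
    Returns radians in [-pi/2, pi/2]; zero crossings track source
    edges and the flanks carry depth information."""
    return np.arctan2(tzz, np.hypot(tzx, tzy))

# single-station check: Tzz equal to the horizontal magnitude
# (hypot(3, 4) = 5) gives a tilt of exactly pi/4
tilt = ftg_tilt(np.array([3.0]), np.array([4.0]), np.array([5.0]))
```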

18.
Wavefield decomposition forms an important ingredient of various geophysical methods. An example of wavefield decomposition is the decomposition into upgoing and downgoing wavefields and simultaneous decomposition into different wave/field types. The multi‐component field decomposition scheme makes use of the recordings of different field quantities (such as particle velocity and pressure). In practice, different recordings can be obscured by different sensor characteristics, requiring calibration with an unknown calibration factor. Not all field quantities required for multi‐component field decomposition might be available, or they can suffer from different noise levels. The multi‐depth‐level decomposition approach makes use of field quantities recorded at multiple depth levels, e.g., two horizontal boreholes closely separated from each other, a combination of a single receiver array combined with free‐surface boundary conditions, or acquisition geometries with a high‐density of vertical boreholes. We theoretically describe the multi‐depth‐level decomposition approach in a unified form, showing that it can be applied to different kinds of fields in dissipative, inhomogeneous, anisotropic media, e.g., acoustic, electromagnetic, elastodynamic, poroelastic, and seismoelectric fields. We express the one‐way fields at one depth level in terms of the observed fields at multiple depth levels, using extrapolation operators that are dependent on the medium parameters between the two depth levels. Lateral invariance at the depth level of decomposition allows us to carry out the multi‐depth‐level decomposition in the horizontal wavenumber–frequency domain. We illustrate the multi‐depth‐level decomposition scheme using two synthetic elastodynamic examples. The first example uses particle velocity recordings at two depth levels, whereas the second example combines recordings at one depth level with the Dirichlet free‐surface boundary condition of zero traction. 
Comparison with multi‐component decomposed fields shows a perfect match in both amplitude and phase for both cases. The multi‐depth‐level decomposition scheme is fully customizable to the desired acquisition geometry. The decomposition problem is in principle an inverse problem. Notches may occur at certain frequencies, causing the multi‐depth‐level composition matrix to become non‐invertible and requiring additional notch filters. We can add multi‐depth‐level free‐surface boundary conditions as extra equations to the multi‐component composition matrix, thereby overdetermining this inverse problem. The combined multi‐component–multi‐depth‐level decomposition on a land data set clearly shows improvements in the decomposition results, compared with the performance of the multi‐component decomposition scheme.

19.
To advance and optimize secondary and tertiary oil recovery techniques, it is essential to know the areal propagation and distribution of the injected fluids in the subsurface. We investigate the applicability of controlled‐source electromagnetic methods to monitor fluid movements in a German oilfield (Bockstedt, onshore Northwest Germany) as injected brines (highly saline formation water) have much lower electrical resistivity than the oil within the reservoir. The main focus of this study is on controlled‐source electromagnetic simulations to test the sensitivity of various source–receiver configurations. The background model for the simulations is based on two‐dimensional inversion of magnetotelluric data gathered across the oil field and calibrated with resistivity logs. Three‐dimensional modelling results suggest that controlled‐source electromagnetic methods are sensitive to resistivity changes at reservoir depths, but the effect is difficult to resolve with surface measurements only. Resolution increases significantly if sensors or transmitters can be placed in observation wells closer to the reservoir. In particular, observation of the vertical electric field component in shallow boreholes and/or use of source configurations consisting of combinations of vertical and horizontal dipoles are promising. Preliminary results from a borehole‐to‐surface controlled‐source electromagnetic field survey carried out in spring 2014 are in good agreement with the modelling studies.

20.
The Theta-Depth method for the interpretation of potential field data
The Theta map is a common method for delineating source edges from potential field (gravity and magnetic) data; it is expressed as a function of the ratio of the horizontal to the vertical variation of the anomaly. The method delineates the edges of shallow geological bodies well, but because field transformations of deep sources produce a convergence (smoothing) effect, its results for deep bodies are inaccurate. We therefore propose a Theta-Depth method that also estimates the burial depth of the source. We first present a way to estimate source depth directly from the Theta image, and then derive linear equations based on the derivatives of Theta to estimate the source position parameters automatically. The method can effectively use features of the Theta image as constraints to improve the accuracy of the inversion results. Tests on theoretical models show that the proposed Theta-Depth method can effectively recover source positions and depths. Applying the method to the interpretation of measured magnetic data from the Mandula area helped delineate the distribution of ore veins.
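The Theta map itself is simple to compute from the anomaly gradients. A sketch using the commonly published normalisation, the total horizontal derivative divided by the analytic-signal amplitude (a hedged stand-in for the exact expression used in the paper):

```python
import numpy as np

def theta_map(gx, gy, gz):
    """Theta map: horizontal gradient magnitude normalised by the
    total (analytic-signal) gradient amplitude. Values approach 1
    over source edges and fall off away from them."""
    h = np.hypot(gx, gy)
    return h / np.sqrt(h ** 2 + gz ** 2)

# single-point check with gradient components (3, 0, 4): 3/5 = 0.6
th = theta_map(np.array([3.0]), np.array([0.0]), np.array([4.0]))
```

Because this ratio is dimensionless, it equalises the response of shallow and deep sources, which is exactly the property the Theta-Depth extension exploits to pull depth information out of the image.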
