Similar Literature
 20 similar documents found (search time: 238 ms)
1.
This paper presents the application of the electrical resistivity tomography (ERT) method to the investigation of the Tertiary maar structure of Baruth (Germany) known from previous gravimetric surveys. ERT was applied to support the optimum location for a palaeoclimatological drill hole.
  Special modifications of data acquisition, signal processing and inversion are introduced to adapt the method of ERT to the special requirements for the 3-D investigation of structures with horizontal extensions of 1  km or more. More than 5000 dipole–dipole combinations were recorded at three concentric circular electrode arrangements using stand-alone transient data acquisition systems (RefTek).
  We present a fast approximate imaging technique based on the simultaneous iterative reconstruction technique (SIRT). As the complete calculation of the inverse Fréchet matrix is avoided, the algorithm is especially suitable for large data and model spaces, where complete inversion is beyond the limits of available computing hardware. The single-step method is applicable to arbitrary irregular electrode layouts. Synthetic tests show that the imaging procedure reconstructs the main features of the subsurface.
  A low-resistivity body could be interpreted as limnic sediments filling the interior of the Tertiary maar crater. Considering the horizontal resistivity gradient, estimates for the lateral and depth extents of the structure were made. An optimum position for a palaeoclimatological borehole was found, and was in good agreement with the gravimetric minimum.
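The abstract names SIRT as the basis of the imaging scheme; the paper's own single-step variant for irregular electrode layouts is not reproduced here. Below is a minimal sketch of a generic SIRT iteration for a linearized resistivity problem, assuming a precomputed sensitivity matrix `A`, data vector `d`, and hypothetical weighting and relaxation choices.

```python
import numpy as np

def sirt(A, d, n_iter=100, relax=0.5):
    """Generic SIRT iteration x_{k+1} = x_k + relax * C A^T R (d - A x_k).

    A : (n_data, n_model) sensitivity (Jacobian) matrix
    d : (n_data,) observed data (e.g. log apparent resistivities)
    R, C : inverse row- and column-sum weights, as in classical SIRT
    """
    row_sums = np.abs(A).sum(axis=1)          # weight each datum by its row sum
    col_sums = np.abs(A).sum(axis=0)          # weight each model cell by its column sum
    R = 1.0 / np.where(row_sums > 0, row_sums, 1.0)
    C = 1.0 / np.where(col_sums > 0, col_sums, 1.0)

    x = np.zeros(A.shape[1])                  # start from a homogeneous (zero-anomaly) model
    for _ in range(n_iter):
        residual = d - A @ x                  # data misfit for the current model
        x = x + relax * C * (A.T @ (R * residual))
    return x

# Toy usage with a random matrix standing in for a real sensitivity matrix:
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 60))
x_true = rng.normal(size=60)
model = sirt(A, A @ x_true)
```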

2.
The inversion of high-resolution geoid anomaly maps derived from satellite altimetry should allow one to retrieve the lithospheric elastic thickness, T_e, and crustal density, ρ_c. Indeed, the bending of a lithospheric plate under the load of a seamount depends on both parameters, and the associated geoid anomaly is correspondingly dependent on the two parameters. The difference between the observed and modelled geoid signatures is estimated by a cost function, J, of the two variables, T_e and ρ_c. We show that this cost function forms a valley structure along which many local minima appear, the global minimum of J corresponding to the true values of the lithospheric parameters. Classical gradient methods fail to find this global minimum because they converge to the first local minimum of J encountered, so that the final parameter estimate strongly depends on the starting pair of values (T_e, ρ_c). We here implement a non-linear optimization algorithm to recover these two parameters from altimetry data. We demonstrate from the inversion of synthetic data that this approach ensures robust estimates of T_e and ρ_c by activating two search phases alternately: a gradient phase to find a local minimum of J, and a tunnelling phase through high values of the cost function. The accuracy of the solution can be improved by a search in an iteratively restricted parameter subspace. Applying our non-linear inversion to the Great Meteor Seamount geoid data, we further show that the inverse problem is intrinsically ill-posed. As a consequence, minute geoid (or gravity) data errors can induce large changes in any recovery of lithospheric elastic thickness and crustal density.
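The gradient-plus-tunnelling strategy described above alternates local descent with a phase that escapes the current basin by looking for any point whose cost is at least as low as the best minimum found so far. A minimal sketch of that idea, on a hypothetical two-parameter cost function (not the paper's plate-flexure geoid misfit) and with a simplified random-probe tunnelling step:

```python
import numpy as np
from scipy.optimize import minimize

def cost(p):
    """Hypothetical two-parameter cost with many local minima (stand-in for J(T_e, rho_c))."""
    te, rho = p
    return (te - 3.0) ** 2 + (rho - 2.8) ** 2 + 0.5 * np.sin(5 * te) ** 2 + 0.5 * np.sin(5 * rho) ** 2

def gradient_plus_tunnelling(x0, n_cycles=10, seed=0):
    rng = np.random.default_rng(seed)
    best_x = minimize(cost, x0, method="BFGS").x          # gradient phase: a local minimum
    best_J = cost(best_x)
    for _ in range(n_cycles):
        # Tunnelling phase (simplified): probe random directions at growing radius
        # until a point with cost <= best_J is found, then descend again from there.
        escaped = False
        for radius in np.linspace(0.1, 3.0, 30):
            trial = best_x + radius * rng.normal(size=best_x.size)
            if cost(trial) <= best_J:
                escaped = True
                break
        if not escaped:
            break                                          # no lower basin found; stop
        x_loc = minimize(cost, trial, method="BFGS").x     # new gradient phase
        if cost(x_loc) < best_J:
            best_x, best_J = x_loc, cost(x_loc)
    return best_x, best_J

print(gradient_plus_tunnelling(np.array([0.5, 1.0])))
```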

3.
We use Monte Carlo Markov chains to solve the Bayesian MT inverse problem in layered situations. The domain under study is divided into homogeneous layers, and the model parameters are the conductivity of each layer. We use an a priori distribution of the parameters which favours smooth models. For each layer, the a priori and a posteriori distributions are digitized over a limited set of conductivity values.
  The Markov chain relies on updating the model parameters during successive scanning of the domain under study. For each step of the scanning, the conductivity is updated in one layer given the actual value of the conductivity in the other layers. Thus we designed an ergodic Markov chain, the invariant distribution of which is the a posteriori distribution of the parameters, provided the forward problem is completely solved at each step.
  We have estimated the a posteriori marginal probability distributions from the simulated successive values of the Markov chain. In addition, we give examples of complex magnetotelluric impedance inversion in tabular situations, for both synthetic models and field situations, and discuss the influence of the smoothing parameter.
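The chain described above updates one layer's conductivity at a time, conditional on the other layers, over a discretized set of admissible values — a Gibbs-type scan. A minimal sketch of such a scan, with a placeholder likelihood and smoothness prior standing in for the full MT forward problem (which the paper solves exactly at each step):

```python
import numpy as np

rng = np.random.default_rng(1)
cond_values = np.logspace(-3, 0, 25)      # discretized conductivity grid (S/m), as in the abstract
n_layers = 6
model = rng.choice(cond_values, size=n_layers)

def log_likelihood(m):
    """Placeholder data fit; a real implementation runs the 1-D MT forward problem here."""
    target = np.full(n_layers, 0.01)
    return -0.5 * np.sum((np.log10(m) - np.log10(target)) ** 2) / 0.3 ** 2

def log_prior_smooth(m):
    """Prior favouring smooth (slowly varying) log-conductivity profiles."""
    return -0.5 * np.sum(np.diff(np.log10(m)) ** 2) / 0.5 ** 2

def gibbs_scan(m):
    """One scan of the domain: update each layer conditional on all the others."""
    for k in range(n_layers):
        log_post = np.empty(cond_values.size)
        for j, v in enumerate(cond_values):
            trial = m.copy()
            trial[k] = v
            log_post[j] = log_likelihood(trial) + log_prior_smooth(trial)
        prob = np.exp(log_post - log_post.max())
        m[k] = rng.choice(cond_values, p=prob / prob.sum())
    return m

samples = []
for _ in range(500):
    model = gibbs_scan(model)
    samples.append(model.copy())
# Marginal posterior for layer 0, estimated from the chain:
print(np.histogram(np.log10([s[0] for s in samples]), bins=10)[0])
```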

4.
A data space approach to magnetotelluric (MT) inversion reduces the size of the system of equations that must be solved from M × M, as required for a model space approach, to only N × N, where M is the number of model parameters and N is the number of data. This reduction makes 3-D MT inversion on a personal computer possible for modest values of M and N. However, the need to store the N × M sensitivity matrix J remains a serious limitation. Here, we consider application of conjugate gradient (CG) methods to solve the system of data space Gauss–Newton equations. With this approach J is not explicitly formed and stored; instead the product of J with an arbitrary vector is computed by solving one forward problem. As a test of this data space conjugate gradient (DCG) algorithm, we consider the 2-D MT inverse problem. Computational efficiency is assessed and compared to the data space Occam's (DASOCC) inversion by counting the number of forward modelling calls. Experiments with synthetic data show that although DCG requires significantly less memory, it generally requires more forward problem solutions than a scheme such as DASOCC, which is based on a full computation of J.
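In a data-space Gauss–Newton scheme the system to be solved is only N × N, and the operator needs nothing more than products with J and Jᵀ, each obtainable from forward-type solves rather than an assembled matrix. A minimal matrix-free sketch using scipy's LinearOperator, with a stored matrix standing in for the forward-modelling calls and a simplified damping term (not the paper's exact data-space system):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
M, N = 500, 80                       # model and data dimensions
J_hidden = rng.normal(size=(N, M))   # stands in for the sensitivity; a real DCG code never forms it
Cd = 0.1 * np.ones(N)                # data error variances (simple damping in data space)

def J_times(v):
    """Product J v — in a real DCG code this is one forward-problem solve."""
    return J_hidden @ v

def Jt_times(w):
    """Product J^T w — obtained from an adjoint (transpose) solve in practice."""
    return J_hidden.T @ w

def data_space_matvec(b):
    """Apply the N x N data-space Gauss-Newton operator (J J^T + C_d) to a vector b."""
    return J_times(Jt_times(b)) + Cd * b

A = LinearOperator((N, N), matvec=data_space_matvec)
rhs = rng.normal(size=N)             # stands in for the weighted data residual
beta, info = cg(A, rhs)              # CG never needs J assembled, only the matvec above
model_update = Jt_times(beta)        # map the data-space solution back to model space
print(info, np.linalg.norm(data_space_matvec(beta) - rhs))
```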

5.
Anisotropy in multi-offset deep-crustal seismic experiments (Total citations: 1; self-citations: 0; by others: 1)
Modelling of deep-seismic wide-angle data commonly assumes that the Earth is heterogeneous and isotropic. It is important to know the magnitudes of errors that may be introduced by isotropic-based wide-angle models when the Earth is anisotropic. It is equally important to find ways of detecting anisotropy and determining its properties.
  This paper explores the errors introduced by interpreting anisotropic seismic data with isotropic models. Errors in P -wave reflector depths are dependent on the magnitude of the velocity anisotropy and the direction of the fast axis. The interpreted, isotropic, model velocity function is found to correspond closely to the horizontal velocity of the anisotropic medium. An additional observed parameter is the time mismatch , which we define to be the difference between the vertical two-way traveltime to a reflector and the time-converted wide-angle position of the reflector. The magnitude of the time mismatch is typically <1.0  s (when the whole crust is anisotropic) and is found to be closely related to the magnitude and sign of the anisotropic anellipticity. The relationships are extendible to more complicated models, including those with vertical velocity gradients, crustal zonation, and lower symmetry orders.
  A time mismatch may be symptomatic of the presence of anisotropy. We illustrate the observation of a time mismatch for a real multi-offset seismic data set collected north of Scotland and discuss the implications for crustal anisotropy in that region.

6.
Regularization is usually necessary in solving seismic tomographic inversion problems. In general the equation system of seismic tomography is very large, often making a suitable choice of the regularization parameter difficult. In this paper, we propose an algorithm for the practical choice of the regularization parameter in linear tomographic inversion. The algorithm is based on the types of statistical assumptions most commonly used in seismic tomography. We first transfer the system of equations into a Krylov subspace by using Lanczos bidiagonalization. In the transformed subspace, the system of equations is then changed into the form of a standard damped least squares normal equation. The solution to this normal equation can be written as an explicit function of the regularization parameter, which makes the choice of the regularization parameter computationally convenient. Two criteria for the choice of the regularization parameter are investigated with numerical simulations. If the dimensions of the transformed space are much smaller than those of the original model space, the algorithm can be very efficient computationally, which is practically useful in large seismic tomography problems.
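The computational point is that after Golub–Kahan (Lanczos) bidiagonalization the damped least-squares solution in the small Krylov subspace can be recomputed cheaply for many values of the regularization parameter. A minimal sketch, assuming a dense matrix for illustration (a real tomography code would supply only matrix–vector products), and not reproducing the paper's two selection criteria:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization of A started from b.

    Returns U (m x k+1), V (n x k) and the lower-bidiagonal B (k+1 x k)
    such that A V ~= U B and U[:, 0] = b / ||b||.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b); U[:, 0] = b / beta
    v_prev = np.zeros(n); beta_next = 0.0
    for j in range(k):
        w = A.T @ U[:, j] - beta_next * v_prev
        alpha = np.linalg.norm(w); V[:, j] = w / alpha
        w = A @ V[:, j] - alpha * U[:, j]
        beta_next = np.linalg.norm(w); U[:, j + 1] = w / beta_next
        B[j, j] = alpha; B[j + 1, j] = beta_next
        v_prev = V[:, j]
    return U, V, B, beta

rng = np.random.default_rng(3)
A = rng.normal(size=(300, 120)); x_true = rng.normal(size=120)
b = A @ x_true + 0.05 * rng.normal(size=300)

# Build the Krylov subspace once; only a small damped solve is repeated per lambda.
U, V, B, beta = golub_kahan(A, b, k=30)
e1 = np.zeros(B.shape[0]); e1[0] = beta
for lam in (0.01, 0.1, 1.0):
    # Damped least squares in the subspace: min ||B y - beta e1||^2 + lam^2 ||y||^2
    y = np.linalg.solve(B.T @ B + lam ** 2 * np.eye(B.shape[1]), B.T @ e1)
    x = V @ y
    print(lam, np.linalg.norm(A @ x - b))
```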

7.
In a spatio-temporal data set, identifying spatio-temporal clusters is difficult because of the coupling of time and space and the interference of noise. Previous methods employ either the window scanning technique or the spatio-temporal distance technique to identify spatio-temporal clusters. Although easily implemented, they suffer from the subjectivity in the choice of parameters for classification. In this article, we use the windowed kth nearest (WKN) distance (the geographic distance between an event and its kth geographically nearest neighbour among those events whose temporal distance to it is no larger than half of a specified time window width [TWW]) to differentiate clusters from noise in spatio-temporal data. The windowed nearest neighbour (WNN) method is composed of four steps. The first is to construct a sequence of TWW factors, with which the WKN distances of events can be computed at different temporal scales. Second, the appropriate values of TWW (i.e. the appropriate temporal scales, at which the number of false positives may reach the lowest value when classifying the events) are indicated by the local maximum values of densities of identified clustered events, which are calculated over varying TWW by using the expectation-maximization algorithm. Third, the thresholds of the WKN distance for classification are then derived with the determined TWW. In the fourth step, clustered events identified at the determined TWW are connected into clusters according to their density connectivity in geographic–temporal space. Results on simulated data and a seismic case study showed that the WNN method is efficient in identifying spatio-temporal clusters. The novelty of WNN is that it can not only identify spatio-temporal clusters with arbitrary shapes and different spatio-temporal densities but also significantly reduce the subjectivity in the classification process.
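A minimal sketch of the WKN distance defined above; the subsequent EM-based TWW selection and the density-connectivity linking of clustered events are not reproduced, and the synthetic events and parameter values are illustrative assumptions.

```python
import numpy as np

def wkn_distance(xy, t, k, tww):
    """Windowed kth-nearest (WKN) distance for each event.

    xy  : (n, 2) event coordinates
    t   : (n,)  event times
    k   : which geographic nearest neighbour to use
    tww : time window width; only events within tww/2 in time are candidates
    """
    n = len(t)
    d = np.full(n, np.inf)
    for i in range(n):
        in_window = np.abs(t - t[i]) <= tww / 2.0
        in_window[i] = False                              # exclude the event itself
        if in_window.sum() >= k:
            dist = np.linalg.norm(xy[in_window] - xy[i], axis=1)
            d[i] = np.sort(dist)[k - 1]                   # k-th smallest geographic distance
    return d

# Toy usage: a dense cluster embedded in spatial background noise.
rng = np.random.default_rng(4)
noise = np.column_stack([rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)])
cluster = rng.normal([50, 50], 2, size=(50, 2))
xy = np.vstack([noise, cluster])
t = rng.uniform(0, 365, len(xy))
wkn = wkn_distance(xy, t, k=4, tww=60.0)
# Clustered events tend to have small WKN distances; a threshold on wkn separates them from noise.
print(np.median(wkn[:200]), np.median(wkn[200:]))
```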

8.
A New Algorithm for Mineral Reserve Estimation Based on the Voronoi Diagram of Borehole Point Sets (Total citations: 6; self-citations: 0; by others: 6)
Building on an analysis of the properties of planar point-set Voronoi diagrams and their construction algorithms, and addressing the shortcomings of traditional mineral reserve calculation methods, this paper proposes a new approach. The method takes the Voronoi diagram of the borehole points in plan view as its basis and uses the influence-area property of Voronoi polygons to compute the ore-body volume and mineral reserves within an arbitrary region. A system interface implemented in Delphi and a worked example are presented, and the prospects of using 3-D Voronoi cells of borehole point sets for ore-grade and economic-mineability analysis are discussed.
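The core geometric step is to assign each borehole an influence area given by its Voronoi polygon and to accumulate area × thickness × density over the holes. A minimal sketch with scipy, using only the finite Voronoi cells (the paper's handling of the region boundary, its Delphi interface and the 3-D extension are not reproduced; the grid of holes and the thickness/density values are assumptions):

```python
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(pts):
    """Area of a convex cell: order vertices by angle around the centroid, then shoelace formula."""
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    x, y = pts[order, 0], pts[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def voronoi_reserve(xy, thickness, density):
    """Sum of (Voronoi cell area) x (ore thickness at hole) x (ore density at hole).

    Cells that extend to infinity (boundary holes) are skipped here; a practical
    implementation would clip them against the deposit boundary instead.
    """
    vor = Voronoi(xy)
    total = 0.0
    for hole, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue                                   # unbounded cell: needs clipping, skipped
        area = polygon_area(vor.vertices[region])
        total += area * thickness[hole] * density[hole]
    return total

# Toy usage: a 5 x 5 grid of boreholes with slightly varying ore thickness.
gx, gy = np.meshgrid(np.arange(5) * 50.0, np.arange(5) * 50.0)
xy = np.column_stack([gx.ravel(), gy.ravel()])
thickness = np.full(len(xy), 3.0) + 0.5 * np.sin(xy[:, 0] / 40.0)   # metres
density = np.full(len(xy), 2.7)                                     # tonnes per cubic metre
print(voronoi_reserve(xy, thickness, density), "tonnes (interior cells only)")
```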

9.
Local search heuristics for very large-scale vehicle routing problems (VRPs) have made remarkable advances in recent years. However, few local search heuristics have focused on the use of the spatial neighborhood in Voronoi diagrams to improve local searches. Based on the concept of a k-ring-shaped Voronoi neighbor, we propose a Voronoi spatial neighborhood-based search heuristic and algorithm to solve very large-scale VRPs. In this algorithm, building and updating local routings, and rearranging local routings with improper links, are restricted to the k-ring Voronoi neighbors of a customer. This algorithm was evaluated using four sets of benchmark tests for 200–8683 customers. Solutions were compared with specific examples in the literature, such as the one-depot VRP. This algorithm produced better solutions than some of the best-known benchmark VRP solutions and required less computational time. The algorithm outperformed previous methods used to solve very large-scale, real-world distance-constrained capacitated VRPs.
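Voronoi neighbors of a point are exactly its Delaunay neighbors, so the k-ring neighborhood can be grown ring by ring over the Delaunay graph. A minimal sketch with scipy covering only the neighborhood computation (the routing heuristic itself and the benchmark instances are not reproduced):

```python
import numpy as np
from scipy.spatial import Delaunay

def k_ring_voronoi_neighbors(points, k):
    """For each point, return its Voronoi neighbors up to the k-th ring.

    Ring 1 = direct Voronoi (Delaunay) neighbors, ring 2 = neighbors of neighbors, etc.
    """
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices       # CSR-like adjacency of the Delaunay graph
    adjacency = [set(indices[indptr[i]:indptr[i + 1]]) for i in range(len(points))]

    rings = []
    for i in range(len(points)):
        visited = {i}
        frontier = {i}
        for _ in range(k):                                # breadth-first growth, one ring at a time
            frontier = set().union(*(adjacency[j] for j in frontier)) - visited
            visited |= frontier
        rings.append(visited - {i})
    return rings

rng = np.random.default_rng(5)
customers = rng.uniform(0, 100, size=(500, 2))
neighbors = k_ring_voronoi_neighbors(customers, k=2)
print(len(neighbors[0]), "customers inside the 2-ring neighborhood of customer 0")
```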

10.
On the density distribution within the Earth (Total citations: 1; self-citations: 0; by others: 1)
The distribution of density as a function of position within the Earth is much less well constrained than the seismic velocities. The primary information comes from the mass and moment of inertia of the Earth and this information alone requires that there be a concentration of mass towards the centre of the globe. Additional information is to be found in the frequencies of the graver normal modes of the Earth which are sensitive to density through self-gravitation effects induced in deformation.
  The present generation of density models has been constructed using linearized inversion techniques from earlier models, which ultimately relate back to models developed by Bullen and based in large part on physical arguments. A number of experiments in non-linear inversion have been conducted using the PREM reference model, with fixed velocity and attenuation, but with the density model constrained to lie within fixed bounds on both density and density gradient. A set of models is constructed from a uniform probability density within the bound and slope constraints. Each of the resultant density models is tested against the mass and moment of inertia of the Earth, and for successful models a comparison is made with observed normal mode frequencies. From the misfit properties of the ensemble of models the robustness of the density profile in different portions of the Earth can be assessed, which can help with the design of parametrization for future reference models. In both the lower mantle and the outer core it would be desirable to allow a more flexible representation than the single cubic polynomial employed in PREM.
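The first screening step described above is simple: draw a density profile within bounds and keep it only if it reproduces the Earth's mass and moment of inertia. A minimal sketch for a coarse radial parametrization (the bounds, the three-block profile and the tolerances are illustrative assumptions, not the paper's PREM-based setup; normal-mode testing is not included):

```python
import numpy as np

R_EARTH = 6.371e6                              # m
M_EARTH = 5.972e24                             # kg
I_EARTH = 0.3307 * M_EARTH * R_EARTH ** 2      # moment of inertia, using factor ~0.3307

r = np.linspace(0.0, R_EARTH, 200)
dr = r[1] - r[0]

def mass_and_inertia(rho):
    """Mass and moment of inertia of a spherically symmetric density profile rho(r)."""
    mass = np.sum(4.0 * np.pi * r ** 2 * rho) * dr
    inertia = np.sum((8.0 * np.pi / 3.0) * r ** 4 * rho) * dr
    return mass, inertia

rng = np.random.default_rng(6)
lower = np.where(r < 3.48e6, 9000.0, 3300.0)   # crude bounds: denser core, lighter mantle (kg/m^3)
upper = np.where(r < 3.48e6, 13500.0, 6000.0)

accepted = []
for _ in range(20000):
    # Piecewise-constant trial profile: three radius blocks, each drawn uniformly within its bounds.
    u = np.repeat(rng.uniform(size=3), r.size // 3 + 1)[: r.size]
    rho = lower + (upper - lower) * u
    m, i = mass_and_inertia(rho)
    if abs(m - M_EARTH) / M_EARTH < 0.01 and abs(i - I_EARTH) / I_EARTH < 0.01:
        accepted.append(rho)

print(len(accepted), "profiles satisfy the mass and moment-of-inertia constraints")
```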

11.
Inference of mantle viscosity from GRACE and relative sea level data (Total citations: 12; self-citations: 0; by others: 12)
Gravity Recovery And Climate Experiment (GRACE) satellite observations of secular changes in gravity near Hudson Bay, and geological measurements of relative sea level (RSL) changes over the last 10 000 yr in the same region, are used in a Monte Carlo inversion to infer mantle viscosity structure. The GRACE secular change in gravity shows a significant positive anomaly over a broad region (>3000 km) near Hudson Bay with a maximum of ∼2.5 μGal yr⁻¹ slightly west of Hudson Bay. The pattern of this anomaly is remarkably consistent with that predicted for postglacial rebound using the ICE-5G deglaciation history, strongly suggesting a postglacial rebound origin for the gravity change. We find that the GRACE and RSL data are insensitive to mantle viscosity below 1800 km depth, a conclusion similar to that from previous studies that used only RSL data. For a mantle with homogeneous viscosity, the GRACE and RSL data require a viscosity between 1.4 × 10²¹ and 2.3 × 10²¹ Pa s. An inversion for two mantle viscosity layers separated at a depth of 670 km shows an ensemble of viscosity structures compatible with the data. While the lowest misfit occurs for upper- and lower-mantle viscosities of 5.3 × 10²⁰ and 2.3 × 10²¹ Pa s, respectively, a weaker upper mantle may be compensated by a stronger lower mantle, such that there exist other models that also provide a reasonable fit to the data. We find that the GRACE and RSL data used in this study cannot resolve more than two layers in the upper 1800 km of the mantle.

12.
The derivation of seismic reflection and transmission coefficients is generally based on the assumption that the medium parameters behave as step functions of depth, at least in a finite region around the interface. However, outliers observed in well logs generally behave quite differently from step functions. In this paper we represent an interface by a self-similar singularity, embedded between two homogeneous half-spaces, and we derive its frequency-dependent normal-incidence reflection and transmission coefficients. For ω  → 0 the expressions for the coefficients reduce to those for a discrete boundary between two homogeneous half-spaces; for ω → ∞ they become frequency-independent. These asymptotic expressions have a relatively simple form and depend on the singularity exponent α .
  The exact as well as the asymptotic expressions are used to evaluate the time-domain reflection and transmission responses of a self-similar interface. Finally, we use a numerical method to model the response of a smoothed version of a self-similar interface (note that the velocity of a smoothed singularity remains finite). It turns out that smoothing has hardly any effect on the response, provided that the smoothing does not affect the scales corresponding to the seismic frequency range.

13.
A Method for Delimiting the Spatial Influence Areas of Economic Entities: the Voronoi Diagram (Total citations: 39; self-citations: 4; by others: 35)
Delimiting the spatial influence areas of economic entities is a complex task, yet it has important theoretical and practical significance in regional and urban planning. This paper proposes using the Voronoi diagram method to delimit the spatial influence areas of economic entities, introduces the basic principles of the Voronoi diagram and several of its extensions, and describes a program written to generate Voronoi diagrams. Finally, taking cities as an example, it explores the application of Voronoi diagrams to delimiting the spatial influence areas of economic entities.

14.
The discrimination between electrolytic and electronic conductors is highly relevant to geological modelling as it allows conclusions to be drawn about the formation and mineral composition of rocks. The induced polarization (IP) method, which compares the electric current injected into the ground with the corresponding earth potential differences, can be used for this purpose.
  This paper describes a new method based on the theory that non-linear electrochemical processes on the surface of electronic conductors are responsible for non-linear IP (NLIP) phenomena. This results in multiples of the fundamental frequency being observed in the telluric voltage spectra when a monochromatic current signal is fed into the ground. The non-linearity of the current–voltage characteristic is most effectively described by a spectral method.
  A laboratory experiment was carried out, using an electrolytic trough with a small graphite cylinder serving as an electronic conductor, which clearly demonstrated the validity of the method. A field experiment was undertaken at a borehole of approximately 450  m depth, located in the transition zone of the Tepla-Barrandium and Moldanubicum in East Bavaria. A sinusoidal current was injected into the ground using a logging tool at depths varying between 150 and 450  m. The corresponding potential differences were simultaneously observed along a profile on the surface. Field and laboratory results show a striking similarity. It can be concluded that an extensive electronic conductor—probably graphite—is steeply dipping southwards, meeting the borehole at approximately 310  m depth.
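The diagnostic described above is the appearance of harmonics of the injected frequency in the measured voltage when an electronic conductor responds non-linearly. A minimal sketch of estimating the harmonic content of a recorded voltage spectrum, on a synthetic signal (the acquisition geometry, frequencies and the actual NLIP processing are assumptions, not the authors' setup):

```python
import numpy as np

fs = 1000.0                          # sampling rate (Hz)
f0 = 4.0                             # injected (fundamental) current frequency (Hz)
t = np.arange(0, 30.0, 1.0 / fs)

# Synthetic "telluric" voltage: linear response plus a small non-linear distortion and noise.
v = np.sin(2 * np.pi * f0 * t)
v += 0.03 * np.sin(2 * np.pi * 2 * f0 * t) + 0.02 * np.sin(2 * np.pi * 3 * f0 * t)
v += 0.01 * np.random.default_rng(7).normal(size=t.size)

spec = np.abs(np.fft.rfft(v)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def amplitude_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

fundamental = amplitude_at(f0)
for n in (2, 3, 4):
    # Harmonics well above the noise floor, relative to the fundamental, indicate a
    # non-linear current-voltage characteristic (the NLIP response of an electronic conductor).
    print(f"harmonic {n}f0: {amplitude_at(n * f0) / fundamental:.4f} of fundamental")
```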

15.

Our study interprets large-scale gravity data to delineate concealed banded iron formation (BIF) iron mineralization in India's Rajasthan province. The study area belongs to the Bharatpur, Dausa, and Karauli districts of Rajasthan. We acquired 1462 gravity readings to understand the rock types, depth and geometry of the different rock formations in the proposed study area. We also collected representative lithologies from more than 100 locations in the study area and calculated their density values. The measured gravity datasets are investigated via qualitative (e.g., Bouguer anomaly, first derivative and second derivative) and quantitative (radially averaged power spectrum, 3D Euler deconvolution, and 3D inversion) approaches. The qualitative methods suggest a general NE–SW orientation of the BIFs, controlled by the general trend of the study area's structural setting. The lithological contact between the Bhilwara and Vindhyan Supergroups is demarcated by a NE–SW trending steep gravity gradient zone. In this area, representative lithologies yield high densities (about 3.746 g/cm³), and the samples identified as BIF represent exploration targets for iron ore. We have also developed our own in-house 3D gravity inversion code in this study. A model space inversion algorithm is converted into a data space formulation using the identity relationship, which makes the inversion algorithm practical to run on conventional desktop computers. The outcomes from the 3D inversion suggest that the concealed iron ore thickens to the west. This interpretation also correlates well with the 3D Euler deconvolution of the gravity data.
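Of the quantitative tools listed, the radially averaged power spectrum gives a quick ensemble depth estimate: for gridded gravity data the log power decays roughly linearly with angular wavenumber, with slope ≈ −2 × (mean source depth). A minimal sketch on a synthetic grid (grid spacing, wavenumber band and the depth convention are illustrative assumptions, not values from the study):

```python
import numpy as np

def radial_power_spectrum(grid, dx, n_bins=80):
    """Radially averaged power spectrum of a 2-D grid with spacing dx."""
    ny, nx = grid.shape
    power = np.abs(np.fft.fft2(grid - grid.mean())) ** 2
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)             # angular wavenumbers (rad per length unit)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    bins = np.linspace(0, k.max(), n_bins)
    which = np.digitize(k.ravel(), bins)
    radial = np.array([power.ravel()[which == b].mean() for b in range(1, len(bins))])
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, radial

def mean_source_depth(grid, dx, kmin, kmax):
    """Fit ln(P) vs k over [kmin, kmax]; depth ~ -slope / 2 for an ensemble of sources."""
    k, p = radial_power_spectrum(grid, dx)
    sel = (k >= kmin) & (k <= kmax) & np.isfinite(p)
    slope = np.polyfit(k[sel], np.log(p[sel]), 1)[0]
    return -slope / 2.0

# Toy usage: white-noise sources upward-continued to depth z show the expected spectral decay.
rng = np.random.default_rng(8)
dx, n, z = 100.0, 256, 400.0                               # 100 m spacing, sources at ~400 m depth
white = rng.normal(size=(n, n))
kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
k = np.sqrt(kx[None, :] ** 2 + kx[:, None] ** 2)
field = np.real(np.fft.ifft2(np.fft.fft2(white) * np.exp(-k * z)))   # upward-continuation filter
print("estimated mean source depth:", mean_source_depth(field, dx, kmin=5e-4, kmax=4e-3), "m")
```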


16.
A new algorithm is presented for the integrated 2-D inversion of seismic traveltime and gravity data. The algorithm adopts the 'maximum likelihood' regularization scheme. We construct a 'probability density function' which includes three kinds of information: information derived from gravity measurements; information derived from the seismic traveltime inversion procedure applied to the model; and information on the physical correlation among the density and the velocity parameters. We assume a linear relation between density and velocity, which can be node-dependent; that is, we can choose different relationships for different parts of the velocity–density grid. In addition, our procedure allows us to consider a covariance matrix related to the error propagation in linking density to velocity. We use seismic data to estimate starting velocity values and the position of boundary nodes. Subsequently, the sequential integrated inversion (SII) optimizes the layer velocities and densities for our models. The procedure is applicable, as an additional step, to any type of seismic tomographic inversion.
We illustrate the method by comparing the velocity models recovered from a standard seismic traveltime inversion with those retrieved using our algorithm. The inversion of synthetic data calculated for a 2-D isotropic, laterally inhomogeneous model shows the stability and accuracy of this procedure, demonstrates the improvements to the recovery of true velocity anomalies, and proves that this technique can efficiently overcome some of the limitations of both gravity and seismic traveltime inversions, when they are used independently.
An interpretation of field data from the 1994 Vesuvius test experiment is also presented. At depths down to 4.5 km, the model retrieved after an SII shows a more detailed structure than the model obtained from an interpretation of seismic traveltime only, and yields additional information for a further study of the area.
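The coupling used above is a (possibly node-dependent) linear relation between density and velocity, so the gravity data constrain the same parameters as the traveltime data. A minimal sketch of one linearized joint least-squares step with a single global relation ρ = a + b·v (the sequential structure, node-dependent coefficients and the covariance of the relation are not reproduced; all matrices and numbers are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(9)
n_cells = 60
Gs = rng.normal(size=(120, n_cells))        # stand-in for traveltime sensitivities to velocity
Gg = rng.normal(size=(40, n_cells))         # stand-in for gravity sensitivities to density
a, b = 0.3, 0.8                             # assumed linear density-velocity relation: rho = a + b * v

v_true = 1.0 + 0.1 * rng.normal(size=n_cells)
t_obs = Gs @ v_true                         # synthetic traveltimes
g_obs = Gg @ (a + b * v_true)               # synthetic gravity data through the linked density

# Stack both data sets into one least-squares system for the velocity parameters,
# weighting each data set by its assumed error level, plus simple damping.
w_t, w_g, damp = 1.0, 0.5, 0.1
A = np.vstack([w_t * Gs, w_g * b * Gg, damp * np.eye(n_cells)])
rhs = np.concatenate([w_t * t_obs,
                      w_g * (g_obs - Gg @ np.full(n_cells, a)),
                      np.zeros(n_cells)])
v_est, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("rms velocity error:", np.sqrt(np.mean((v_est - v_true) ** 2)))
```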

17.
Inversion for multiple parameter classes (Total citations: 2; self-citations: 0; by others: 2)
Many geophysical data, such as the frequencies of the free oscillations of the Earth, depend on more than one type of model parameter. For inverse problems depending on multiple parameter classes, an iterative solution procedure is introduced in which each parameter class can be treated in the same way. This approach has considerable advantages where a large number of parameters are employed, but can still be useful for smaller systems.
  The iteration by parameter class commences by solving for the direct dependence on a particular parameter class, and at subsequent iterations the cross-dependences between classes are introduced. The update affects only the right-hand side of the equations, and, because the same sets of equations have to be solved at each iteration, an efficient computational implementation can be made. The largest set of equations that has to be solved at a time corresponds to the number of variables in an individual parameter class rather than the full set of parameters, which confers substantial computational benefits for very large problems.
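The scheme above solves, at every iteration, one system per parameter class whose matrix never changes; only the right-hand side is updated with the cross-dependence on the other classes. A minimal sketch for two classes, in the style of a block Gauss–Seidel iteration (an illustration of the idea, not the authors' free-oscillation application):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(10)
n1, n2 = 40, 25                                    # sizes of the two parameter classes

# Block structure of the full (n1+n2) x (n1+n2) system:
A11 = np.eye(n1) * 4 + rng.normal(scale=0.1, size=(n1, n1))
A22 = np.eye(n2) * 4 + rng.normal(scale=0.1, size=(n2, n2))
A12 = rng.normal(scale=0.2, size=(n1, n2))         # cross-dependence between classes
A21 = rng.normal(scale=0.2, size=(n2, n1))
b1, b2 = rng.normal(size=n1), rng.normal(size=n2)

# Factor each diagonal block once; the same factorizations are reused at every iteration,
# and only the right-hand sides change (the computational point made in the abstract).
f11, f22 = lu_factor(A11), lu_factor(A22)

x1, x2 = np.zeros(n1), np.zeros(n2)
for _ in range(20):
    x1 = lu_solve(f11, b1 - A12 @ x2)              # class 1: direct term plus cross term on the RHS
    x2 = lu_solve(f22, b2 - A21 @ x1)              # class 2: likewise

full = np.block([[A11, A12], [A21, A22]])
print("residual:", np.linalg.norm(full @ np.concatenate([x1, x2]) - np.concatenate([b1, b2])))
```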

18.
We investigate the use of general, non-ℓ2 measures of data misfit and model structure in the solution of the non-linear inverse problem. Of particular interest are robust measures of data misfit, and measures of model structure which enable piecewise-constant models to be constructed. General measures can be incorporated into traditional linearized, iterative solutions to the non-linear problem through the use of an iteratively reweighted least-squares (IRLS) algorithm. We show how such an algorithm can be used to solve the linear inverse problem when general measures of misfit and structure are considered. The magnetic stripe example of Parker (1994) is used as an illustration. This example also emphasizes the benefits of using a robust measure of misfit when outliers are present in the data. We then show how the IRLS algorithm can be used within a linearized, iterative solution to the non-linear problem. The relevant procedure contains two iterative loops which can be combined in a number of ways. We present two possibilities. The first involves a line search to determine the most appropriate value of the trade-off parameter and the complete solution, via the IRLS algorithm, of the linearized inverse problem for each value of the trade-off parameter. In the second approach, a schedule of prescribed values for the trade-off parameter is used and the iterations required by the IRLS algorithm are combined with those for the linearized, iterative inversion procedure. These two variations are then applied to the 1-D inversion of both synthetic and field time-domain electromagnetic data.
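The IRLS idea referenced above replaces a general measure (for example an ℓ1 misfit, robust to outliers) by a sequence of weighted ℓ2 problems whose weights are recomputed from the current residuals. A minimal sketch for a linear problem with an ℓ1 data misfit and simple ℓ2 damping (the paper's two nested loops, structure measures and trade-off parameter search are not reproduced):

```python
import numpy as np

def irls_l1(A, d, damp=0.01, n_iter=30, eps=1e-6):
    """Solve min ||A m - d||_1 + damp * ||m||_2^2 by iteratively reweighted least squares."""
    m = np.linalg.lstsq(A, d, rcond=None)[0]          # ordinary l2 solution as a starting point
    for _ in range(n_iter):
        r = A @ m - d
        w = 1.0 / np.sqrt(r ** 2 + eps ** 2)          # l1 measure -> weights ~ 1 / |residual|
        W = np.sqrt(w)[:, None]                        # apply sqrt of the weights to rows
        m = np.linalg.lstsq(
            np.vstack([W * A, np.sqrt(damp) * np.eye(A.shape[1])]),
            np.concatenate([W[:, 0] * d, np.zeros(A.shape[1])]),
            rcond=None,
        )[0]
    return m

# Toy usage: a few gross outliers barely affect the robust solution.
rng = np.random.default_rng(11)
A = rng.normal(size=(100, 10))
m_true = rng.normal(size=10)
d = A @ m_true + 0.01 * rng.normal(size=100)
d[::17] += 5.0                                        # outliers in the data
print("robust (IRLS) error:", np.linalg.norm(irls_l1(A, d) - m_true))
print("plain l2 error:     ", np.linalg.norm(np.linalg.lstsq(A, d, rcond=None)[0] - m_true))
```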

19.
Research on a Method for Generating Network Voronoi Diagrams Based on Local Clustering (Total citations: 1; self-citations: 1; by others: 0)
This paper proposes a method that combines network-constrained Voronoi diagrams with spatial clustering. A local clustering analysis is constructed to weight the network edges; depending on the nature of the actual point process, the weight can be defined as additive or multiplicative, and after standardization it is combined with the length of the road segment itself in the computation. The network Voronoi diagram is then generated on this basis, with the aim of understanding the spatial characteristics of urban streets. Taking the Jianghan District of Wuhan as an example, the algorithm is validated on urban events produced by the city grid management system. The results show that the method provides a flexible tool for partitioning service areas under network constraints; it can be used to delimit service areas influenced by network-based spatial point processes and to systematically and quantitatively characterize the dynamics of urban management.

20.
Precise time and facies correlations between drilled holes are fundamental for a better understanding of the geological evolution of sedimentary basins. A downhole magnetic measurement device called the geological high-sensitivity magnetic tool (GHMT) has been run within two wells drilled by Gaz de France in the Landes oil-field (southwest France) as part of a gas storage exploration program. The method of interpretation of downhole magnetic measurements yielded a magnetostratigraphy within each well, allowing absolute dating and time correlations between the wells.
  Magnetic susceptibility and natural gamma ray intensity are useful parameters for establishing high-resolution lithological correlations at a basin scale. We present a correlation parameter established from a simultaneous analysis of the susceptibility and the gamma ray logs within each well. The correlation parameter appears to provide a new tool for delineating lithological elements when local lithological changes are too subtle to show clear well-to-well correlations either from susceptibility logs or from gamma ray logs. This new approach is interpreted as a sensitive way to detect relative variations between the detrital and clay content of the penetrated sediment.
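The abstract does not give the exact definition of the authors' correlation parameter. As a generic illustration of analysing the two logs simultaneously, a running (windowed) correlation between susceptibility and gamma ray along depth highlights intervals where the two respond coherently; the window length and the synthetic logs below are assumptions, not the paper's construct.

```python
import numpy as np

def running_correlation(log_a, log_b, window):
    """Pearson correlation of two logs in a sliding depth window (same sampling assumed)."""
    n = len(log_a)
    half = window // 2
    out = np.full(n, np.nan)
    for i in range(half, n - half):
        a = log_a[i - half:i + half + 1]
        b = log_b[i - half:i + half + 1]
        if a.std() > 0 and b.std() > 0:
            out[i] = np.corrcoef(a, b)[0, 1]
    return out

# Toy logs: susceptibility and gamma ray respond together inside a clay-rich interval.
rng = np.random.default_rng(12)
depth = np.arange(0.0, 200.0, 0.5)
clay = np.exp(-((depth - 120.0) / 15.0) ** 2)          # hypothetical clay-content signal
susceptibility = clay + 0.2 * rng.normal(size=depth.size)
gamma_ray = 2.0 * clay + 0.2 * rng.normal(size=depth.size)
corr = running_correlation(susceptibility, gamma_ray, window=41)
print("maximum coherence near:", depth[np.nanargmax(corr)], "m depth")
```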
