Similar Documents
20 similar documents found.
1.
The use of a cross-correlation prefiltering technique to enhance the ability of Jansson's iterative deconvolution procedure to deconvolve extremely noisy chromatographic data is investigated. Test cases include peaks whose resolutions are as low as 0.35 and whose signal-to-noise ratios are as low as 1:1. Evaluation criteria include RMS error, relative peak error and peak area repeatability. For comparison purposes, relative peak area errors and peak area variances are also evaluated for noisy but well resolved peaks that have only been prefiltered with the cross-correlation filter. Jansson's method in conjunction with cross-correlation prefiltering is shown not only to resolve overlapped peaks but in some cases to improve their signal-to-noise ratios. The study also establishes some limits to the capabilities of Jansson's method with regard to adverse data conditions.
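For readers unfamiliar with Jansson's procedure, the sketch below shows the basic constrained iteration for an observed signal and a known peak-spread function. The relaxation function, bounds and parameter values are illustrative assumptions, not the settings used in the paper, and the cross-correlation prefilter discussed above would be applied to the data beforehand.

```python
import numpy as np

def jansson_deconvolve(i_obs, psf, n_iter=100, r0=0.2, o_max=1.0):
    """Minimal sketch of Jansson's iterative deconvolution.

    o_{k+1} = o_k + r(o_k) * (i_obs - psf (*) o_k), where the relaxation
    r(o) = r0 * (1 - 2*|o/o_max - 0.5|) suppresses updates near the physical
    bounds 0 and o_max (non-negativity / boundedness constraint).
    """
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                       # normalise the spread function
    o = np.asarray(i_obs, dtype=float).copy()   # start from the observed data
    for _ in range(n_iter):
        reblurred = np.convolve(o, psf, mode="same")
        relax = r0 * (1.0 - 2.0 * np.abs(o / o_max - 0.5))
        o = o + relax * (i_obs - reblurred)
        o = np.clip(o, 0.0, o_max)              # enforce the constraints explicitly
    return o
```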

2.
In this paper, we present a new approach to estimate high-resolution teleseismic receiver functions using a simultaneous iterative time-domain sparse deconvolution. This technique improves the deconvolution by using reweighting strategies based on a Cauchy criterion. The resulting sparse receiver functions enhance the primary converted phases and their multiples. To test its functionality and reliability, we applied this approach to synthetic experiments and to seismic data recorded at station ABU, in Japan. Our results show Ps conversions at approximately 4.0 s after the primary P onset, which are consistent with other seismological studies in this area. We demonstrate that the sparse deconvolution is a simple, efficient technique in computing receiver functions with significantly greater resolution than conventional approaches.
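As a rough illustration of the reweighting idea (not the authors' exact implementation), the following sketch computes a time-domain receiver function by iteratively reweighted least squares, where Cauchy-based weights penalize small coefficients and so promote a sparse spike train. The convolution-matrix construction, the damping factor `lam`, the scale `sigma`, and the assumption of equal-length vertical (Z) and radial (R) traces are ours.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def sparse_rf_cauchy(z_comp, r_comp, n_iter=10, lam=1.0, sigma=0.05):
    """Sketch: find a sparse receiver function m with Z (*) m ~= R via IRLS."""
    n = len(r_comp)
    # Convolution matrix built from the vertical component (same length as R).
    G = toeplitz(z_comp[:n], np.r_[z_comp[0], np.zeros(n - 1)])
    m = np.zeros(n)
    for _ in range(n_iter):
        # Cauchy-based weights: small coefficients are penalised more heavily,
        # so energy concentrates in a few spikes (the converted phases).
        w = 1.0 / (1.0 + (m / sigma) ** 2)
        m = solve(G.T @ G + lam * np.diag(w), G.T @ np.asarray(r_comp, dtype=float))
    return m
```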

3.
Delineation of detailed mantle structure frequently requires the separation of source signature and structural response from seismograms recorded at teleseismic distances. This deconvolution problem can be posed in a log-spectral domain where the operation of time-domain convolution is reduced to an additive form. The introduction of multiple events recorded at many stations leads to a system of consistency equations that must be honoured by both the source time functions and the impulse responses associated with propagation paths between sources and receivers. The system is inherently singular, and stabilization is accomplished through the supply of an initial estimate of the source time function. Although alternative choices exist, an effective estimate is derived from the eigenimage associated with the largest eigenvalue in a singular-value decomposition of the suite of aligned seismograms corresponding to a given event. The relation of the deconvolution scheme to simultaneous least-squares deconvolution is examined. Application of the methodology to broadband teleseismic P waveforms recorded on the Canadian National Seismograph Network demonstrates the retrieval of effective Green's functions including secondary phases associated with upper-mantle structure.
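The eigenimage-based initial source estimate mentioned above can be sketched as follows (a simplified illustration; the sign and scaling convention are our assumptions): stack the aligned seismograms for one event into a matrix and take the dominant singular component as the common source wavelet.

```python
import numpy as np

def eigenimage_source_estimate(seismograms):
    """seismograms: 2-D array, one aligned trace per row, all for the same event.
    Returns the dominant-eigenimage estimate of the common source time function."""
    X = np.asarray(seismograms, dtype=float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # The rank-1 eigenimage is s[0] * outer(U[:, 0], Vt[0]); its common temporal
    # signature (up to sign and scale) is the first right singular vector.
    return s[0] * Vt[0]
```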

4.
A simple algorithm for deconvolution and regression of shot-noise-limited data is illustrated in this paper. The algorithm is easily adapted to almost any model and converges to the global optimum. Multiple-component spectrum regression, spectrum deconvolution and smoothing examples are used to illustrate the algorithm. The algorithm and a method for determining uncertainties in the parameters based on the Fisher information matrix are given and illustrated with three examples. An experimental example of spectrograph grating order compensation of a diode array solar spectroradiometer is given to illustrate the use of this technique in environmental analysis. The major advantages of the EM algorithm are found to be its stability, simplicity, conservation of data magnitude and guaranteed convergence.
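A minimal sketch of the EM iteration for shot-noise-limited (Poisson) deconvolution is given below. This is the standard Richardson-Lucy form of the EM update for a known blurring kernel, shown for orientation rather than as the paper's exact algorithm; the kernel handling and iteration count are assumptions.

```python
import numpy as np

def em_deconvolve(data, kernel, n_iter=50):
    """EM (Richardson-Lucy) deconvolution for Poisson (shot-noise-limited) data.
    Multiplicative updates keep the estimate non-negative and approximately
    conserve the total signal magnitude when the kernel is normalised."""
    kernel = np.asarray(kernel, dtype=float)
    kernel = kernel / kernel.sum()
    data = np.asarray(data, dtype=float)
    estimate = np.full_like(data, data.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)      # avoid division by zero
        # Correlate the ratio with the kernel (convolution with reversed kernel).
        estimate = estimate * np.convolve(ratio, kernel[::-1], mode="same")
    return estimate
```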

5.
A seismogram that is several times the length of the source-receiver wavelet is windowed into two parts (these may overlap) to obtain two seismograms with approximately the same source function but different Green's functions. A similarly windowed synthetic seismogram gives two corresponding synthetic seismograms. The spectral product of the window 1 data with the window 2 synthetic is equal to the spectral product of the window 1 synthetic with the window 2 data only if the correct earth model is used to compute the synthetic. This partition principle is applied to well-log sonic waveform data from Ocean Drilling Program hole 806B, a slow formation, and used there to estimate Poisson's ratio from a single seismogram whose transmitter and receiver functions are unknown. A multichannel extension of the algorithm gives even better results. The effective borehole radius, R_b, was included in the inversion procedure because of waveform sensitivity to R_b. Inversion results for R_b agreed with the sonic caliper, but not the mechanical caliper; thus if R_b is not included in the inversion, its value should be taken from the sonic caliper.
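Written out in our own notation (an interpretive summary, assuming the two data windows share a common, unknown wavelet W and the synthetics share a wavelet W_s), the partition principle reads:

```latex
% Windowed data and synthetics (k = 1, 2), sharing unknown wavelets W, W_s:
D_k(\omega) = W(\omega)\, G_k(\omega), \qquad
S_k(\omega;\mathbf{m}) = W_s(\omega)\, G_k(\omega;\mathbf{m}).
% Partition principle: the cross spectral products agree,
D_1(\omega)\, S_2(\omega;\mathbf{m}) \;=\; S_1(\omega;\mathbf{m})\, D_2(\omega),
% only when the synthetic responses G_k(\omega;\mathbf{m}) reproduce the true
% Green's functions, i.e. when the correct earth model \mathbf{m} is used;
% a mismatch breaks the equality and can be penalized in the inversion.
```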

6.
The Qiagong iron deposit is a mine discovered only in recent years, yet geometric characteristics of the orebody such as its shape, size, burial depth, position, attitude and boundaries have not been clearly established. To address these questions, this paper applies Euler deconvolution to invert the reduced-to-the-pole magnetic anomaly data, giving orebody depths of 0-120 m; two profiles across the centre of the C-6 anomaly were selected for 2.5-D forward-fitting inversion, giving an orebody thickness of 20-30 m. The Euler deconvolution results and the 2.5-D fitting results agree with the verification provided by borehole ZK32. Finally, a geological-geophysical prospecting model for the Qiagong skarn-type iron deposit is established, offering guidance for the search for concealed skarn-type iron deposits in this area.
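For context, standard 3-D Euler deconvolution solves Euler's homogeneity equation by least squares within sliding windows of the gridded field. The sketch below is our simplification of a single-window solve; the choice of structural index and the window handling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def euler_window_solution(x, y, z, T, Tx, Ty, Tz, structural_index=3.0):
    """Least-squares Euler deconvolution for one data window.

    Solves (x - x0)*Tx + (y - y0)*Ty + (z - z0)*Tz = N*(B - T)
    for the source position (x0, y0, z0) and background field B.
    Inputs are 1-D arrays of coordinates, field values and field gradients over
    the window; N = 3 is an assumed structural index for a compact (dipole-like) body.
    """
    N = structural_index
    # Rearranged: x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, B
```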

7.
A new method for short- and long-term forecasting of mineral commodities based upon historical data is developed. The method, referred to as the latest trend tracing (LTT) model, is constructed as a weighting and adaptive approach based on a general linear model. The LTT model considers the functions of data location and statistical behavior. The newest data receive the largest weights, whereas the older data are given smaller weights. The LTT model is performed by an iterative algorithm. The data set is successively partitioned into training and testing subsets. Each LTT model is estimated and tested for each partition. The updated estimates are then synthesized to produce the final estimates based upon the data locations and estimation variances. The LTT model is demonstrated on two real case studies, one on the projection of U.S. aluminum consumption and the other on the forecasting of U.S. copper consumption.
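The core weighting idea can be illustrated with a recency-weighted linear fit. This is a simplified stand-in for the full LTT model; the exponential weight profile, the decay rate and the forecast horizon are our assumptions.

```python
import numpy as np

def recency_weighted_forecast(years, consumption, horizon=5, decay=0.9):
    """Fit a linear trend with weights that decay for older observations
    (newest data weighted most), then extrapolate `horizon` years ahead."""
    years = np.asarray(years, dtype=float)
    consumption = np.asarray(consumption, dtype=float)
    age = years.max() - years                 # 0 for the newest observation
    w = decay ** age                          # exponentially smaller weights for older data
    slope, intercept = np.polyfit(years, consumption, deg=1, w=w)
    future = np.arange(years.max() + 1, years.max() + 1 + horizon)
    return future, slope * future + intercept
```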

8.
Two approximate methods for weighted principal components analysis (WPCA) were devised and tested in numerical experiments using either empirical variances (obtained from replicated data) or assumed variances (derived from unreplicated data). In the first ('spherical') approximation each data vector was assigned a weight proportional to the geometrical mean of its variances in all dimensions. The arithmetical mean of variances was used instead in the other approximation. Both the numerical experiments with artificial data containing random errors of various kinds (constant, proportional, constant plus proportional, Poisson) and the analysis of two sets of Raman spectra clearly indicated the necessity of introducing statistical weights. The spherical approximation was found to be slightly better than the arithmetical one. The application of statistical weighting was found to improve the performance of PCA in estimation problems.

9.
We propose a two-step inversion of three-component seismograms that (1) recovers the far-field source time function at each station and (2) estimates the distribution of co-seismic slip on the fault plane for small earthquakes (magnitude 3 to 4). The empirical Green's function (EGF) method consists of finding a small earthquake located near the one we wish to study and then performing a deconvolution to remove the path, site, and instrumental effects from the main-event signal.
The deconvolution between the two earthquakes is an unstable procedure: we have therefore developed a simulated annealing technique to recover a stable and positive source time function (STF) in the time domain at each station with an estimation of uncertainties. Given a good azimuthal coverage, we can obtain information on the directivity effect as well as on the rupture process. We propose an inversion method by simulated annealing using the STF to recover the distribution of slip on the fault plane with a constant rupture-velocity model. This method permits estimation of physical quantities on the fault plane, as well as possible identification of the real fault plane.
We apply this two-step procedure for an event of magnitude 3 recorded in the Gulf of Corinth in August 1991. A nearby event of magnitude 2 provides us with empirical Green's functions for each station. We estimate an active fault area of 0.02 to 0.15 km² and deduce a stress-drop value of 1 to 30 bar and an average slip of 0.1 to 1.6 cm. The selected fault of the main event is in good agreement with the existence of a detachment surface inferred from the tectonics of this half-graben.

10.
In previous papers Jansson's method was found to be successful at deconvolving severely overlapped gas chromatographic peaks. In the most recent paper the method was evaluated with respect to quantitative accuracy, peak area and retention time repeatability. The problems associated with deconvolving noisy data and some alternatives which can improve the ability of Jansson's method to deconvolve noisy data are discussed. These alternatives include presmoothing the data with a nine-point, third-order polynomial filter and data reblurring. This paper will test these methods on peaks with various degrees of resolution and signal-to-noise ratios.
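The nine-point, third-order polynomial presmoothing mentioned above corresponds to a Savitzky-Golay filter. A minimal illustration using SciPy is shown below; the reblurring step is only schematic (a single extra convolution with the peak-spread function) and is our simplification, not the paper's exact procedure.

```python
import numpy as np
from scipy.signal import savgol_filter

def presmooth_and_reblur(chromatogram, psf):
    """Presmooth noisy data with a 9-point, 3rd-order polynomial (Savitzky-Golay)
    filter, then 'reblur' by convolving once more with the peak-spread function,
    which further suppresses noise before Jansson-type deconvolution."""
    smoothed = savgol_filter(np.asarray(chromatogram, dtype=float),
                             window_length=9, polyorder=3)
    psf = np.asarray(psf, dtype=float)
    reblurred = np.convolve(smoothed, psf / psf.sum(), mode="same")
    return reblurred
```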

11.
The opening of the Gulf of Aden and the Red Sea, and the collision of the Arabian plate with the jigsaw southern margin of the Anatolian plate have sheared the Sinai-Levant microplate off the NW part of the Arabian plate, and created the left-lateral Dead Sea (Levant) transform fault. The structural setting of the northern Levant region, particularly Lebanon and the Palmyrides, has been complicated by detachments along incompetent evaporitic horizons, roughly separating the post-Triassic succession from the underlying crustal material. The interpretation of the multiple source Werner deconvolution (MSWD) estimates of Bouguer gravity profiles, which were separately calculated for Syria and Lebanon, integrated with the available geological and geophysical results leads to the following interpretations: (1) the crust of Syria thickens southeastwards from approximately 32 km under the Al-Ghab Graben to >36 km under the Aleppo high, the Palmyride fold belt and the Rutbah high; (2) the lower-crustal (basaltic) layer thickens northwestwards from the hinterland to the Al-Ghab graben at the expense of the overlying andesitic layer; (3) the Mid-Beqa'a fault is delineated by the MSWD estimates in Lebanon and its NE extension in Syria; (4) the Phanerozoic section in the southwesternmost parts of the Palmyrides is ∼13 km thick, and the shortening there could exceed 30 km; (5) the Palmyride fold belt, and the Serghaya and Mid-Beqa'a faults could have accounted for about 70 km of the 105 km left-lateral displacement along the southern segment of the Dead Sea transform fault system, without transmission to the Syrian (northern) segment of the fault system; (6) the splitting of the Dead Sea transform fault in the Kuleh Depression into the Serghaya, Mid-Beqa'a, Yammouneh and Roum faults could be explained by the rotation of the detached post-Triassic succession over a stable deep left-lateral fracture of the Dead Sea fault in the underlying crustal material.

12.
The migration of teleseismic receiver functions yields high-resolution images of the crustal structure of western Crete. Data were collected during two field campaigns in 1996 and 1997 by networks of six and 47 short-period three-component seismic stations, respectively. A total of 1288 seismograms from 97 teleseismic events were restituted to true ground displacement within a period range from 0.5 to 7 s. The application of a noise-adaptive deconvolution filter and a new polarization analysis technique helped to overcome problems with local coda and noise conditions. The computation and migration of receiver functions result in images of local crustal structures with unprecedented spatial resolution for this region. The crust under Crete consists of a continental top layer of 15–20 km thickness above a 20–30 km thick subducted fossil accretionary wedge with a characteristic en echelon fault sequence. The downgoing oceanic Moho lies at a depth of 40–60 km and shows a topography or undulation with an amplitude of several kilometres. As a consequence of slab depth and distribution of local seismicity, the Mediterranean Ridge is interpreted as the recent accretionary wedge.

13.
This paper demonstrates the utility of the iterative proportional fitting procedure (IPF) in generating disaggregated spatial data from aggregated data and evaluates the performance of the procedure. Estimates of individual level data created by IPF using data of equal-interval categories are reliable, but the performance of the estimation can be improved by increasing sample size. The improvement usually is enough to offset the increase in error created by other factors. If the two variables defining the cross-classification have a significant interaction effect and the number of categories in each variable is larger than two, then IPF is preferred over an independent model.
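A compact version of the IPF procedure itself, independent of the spatial application studied here, alternately rescales a seed cross-classification table to match known row and column margins. The tolerance, iteration limit and example numbers below are illustrative assumptions.

```python
import numpy as np

def ipf(seed, row_margins, col_margins, tol=1e-8, max_iter=1000):
    """Iterative proportional fitting: rescale `seed` so its row and column
    sums match the target margins while preserving its interaction structure."""
    table = np.asarray(seed, dtype=float).copy()
    for _ in range(max_iter):
        table *= (row_margins / table.sum(axis=1))[:, None]   # match row totals
        table *= (col_margins / table.sum(axis=0))[None, :]   # match column totals
        if np.allclose(table.sum(axis=1), row_margins, rtol=0, atol=tol):
            break
    return table

# Example: rescale a 2x3 seed table to new margins (both sum to 120).
seed = np.array([[10.0, 20.0, 30.0],
                 [30.0, 20.0, 10.0]])
fitted = ipf(seed,
             row_margins=np.array([70.0, 50.0]),
             col_margins=np.array([45.0, 40.0, 35.0]))
```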

14.
Application of wavelet transform to the study of relative sea-level change
欧素英  陈子燊 《地理科学》2004,24(3):358-364
Based on monthly mean tide-gauge records from 14 stations along the Guangdong coast, wavelet analysis is used to decompose the series in the time-frequency domain, revealing the multi-scale structure of the monthly mean sea-level series over the past 40-odd years and allowing the periodic and trend components of relative sea-level change to be examined. The results show that the wavelet transform captures the local features of periodic variability well when the time-frequency distribution of relative sea-level periods is studied. Relative sea-level change along the Guangdong coast contains periodic components of about 0.5 year, 1 year, 2-4 years, 10-11 years and 18-20 years, and these periodic variations show clear localization in the time domain. Calculations from the observed data indicate that the periodic components strongly affect the estimated trend, so trend estimates that do not remove them are biased high. After the periodic components are effectively removed with the wavelet transform, the rates of relative sea-level change along the coasts of western Guangdong and the Zhujiang (Pearl) River estuary are obtained; in general, sea level along the Guangdong coast is rising at about 0.36-1.2 mm/a.
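A rough sketch of the kind of multi-level wavelet decomposition described above, using the PyWavelets package; the choice of mother wavelet, the decomposition depth and the trend-rate estimation step are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def decompose_sea_level(monthly_levels, wavelet="db4", level=6):
    """Decompose a monthly mean sea-level series into approximation (trend) and
    detail (periodic) components, then estimate the trend rate from the
    reconstructed approximation alone."""
    monthly_levels = np.asarray(monthly_levels, dtype=float)
    coeffs = pywt.wavedec(monthly_levels, wavelet, level=level)
    # Zero out all detail coefficients to keep only the low-frequency trend.
    trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(trend_coeffs, wavelet)[: len(monthly_levels)]
    months = np.arange(len(monthly_levels))
    rate_per_month = np.polyfit(months, trend, deg=1)[0]
    return trend, rate_per_month * 12.0    # rate per year (mm/a if input is in mm)
```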

15.
A tomographic inversion technique that inverts traveltimes to obtain a model of the subsurface in terms of velocities and interfaces is presented. It uses a combination of refraction, wide-angle reflection and normal-incidence data, it simultaneously inverts for velocities and interface depths, and it is able to quantify the errors and trade-offs in the final model. The technique uses an iterative linearized approach to the non-linear traveltime inversion problem. The subsurface is represented as a set of layers separated by interfaces, across which the velocity may be discontinuous. Within each layer the velocity varies in two dimensions and has a continuous first derivative. Rays are traced in this medium using a technique based on ray perturbation theory, and two-point ray tracing is avoided by interpolating the traveltimes to the receivers from a roughly equidistant fan of rays. The calculated traveltimes are inverted by simultaneously minimizing the misfit between the data and calculated traveltimes, and the roughness of the model. This 'smoothing regularization' stabilizes the solution of the inverse problem. In practice, the first iterations are performed with a high level of smoothing. As the inversion proceeds, the level of smoothing is gradually reduced until the traveltime residual is at the estimated level of noise in the data. At this point, a minimum-feature solution is obtained, which should contain only those features discernible over the noise.
The technique is tested on a synthetic data set, demonstrating its accuracy and stability and also illustrating the desirability of including a large number of different ray types in an inversion.
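The regularized, linearized update described above can be summarized by a small sketch in our own notation: G is the traveltime sensitivity matrix from ray tracing, and the second-difference roughness operator and the smoothing schedule are assumptions rather than details taken from the paper.

```python
import numpy as np

def regularized_update(G, residuals, n_model, smoothing):
    """One linearized inversion step: minimise |G dm - r|^2 + mu |L dm|^2,
    where L is a second-difference (roughness) operator. The smoothing weight
    mu is kept high in early iterations and gradually reduced until the
    traveltime residual reaches the estimated noise level."""
    # Second-difference roughness operator (boundary rows left approximate).
    L = (np.diag(-2.0 * np.ones(n_model))
         + np.diag(np.ones(n_model - 1), 1)
         + np.diag(np.ones(n_model - 1), -1))
    A = G.T @ G + smoothing * (L.T @ L)
    dm = np.linalg.solve(A, G.T @ residuals)
    return dm
```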

16.
We present a method for the retrieval of the phase velocities of surface-wave overtones. The 'single-station' method is successful for several Love and Rayleigh overtone branches (up to at least four) in mode-specific period ranges between 40 and 200 s. It uses mode-branch cross-correlation functions and relies on adjusting the phase and amplitude of the mode branches one at a time. A standard statistical optimization technique is used. We discuss in detail the a priori information that is added to stabilize the retrieval procedure. In addition, we present a technique to estimate the reliability of individual phase and amplitude measurements. The retrieval method and the technique to estimate reliabilities can be used together in a highly automated way, making the methods especially suited for studying the large volume of digital data now available.
We include several applications to synthetic and recorded waveforms. We will discuss in detail an experiment with 90 waveforms that have travelled along very similar paths from Vanuatu to California. For this path, we will present average overtone phase velocities and an average 1-D velocity structure.

17.
An iterative algorithm is presented to be used in the search for the shape of a 2-D local deep geoelectric inhomogeneity lying in a layered medium; an anomaly having been identified in the usual way by observing an alternating time-harmonic electromagnetic field along the surface of the Earth. The normal section parameters (conductivities and thicknesses) and the excess electrical conductivity (inside inhomogeneity) are assumed to be known. The shape of the inhomogeneity is determined by means of a misfit functional minimization technique. A gradient minimization algorithm is constructed and Tikhonov's regularization scheme is applied to achieve stability of the solution. The effectiveness of such an approach is demonstrated by model calculations and by the interpretation of the Carpathian geomagnetic anomaly. Finally, a brief discussion of the problems of the practical application of this formalized trial procedure is presented. Because of the lack of reliable estimates of the excess conductivity, it is proposed to consider a family of models selected for the set of probable values of model parameters. This family can be treated as a generalized solution of the interpretation problem.

18.
Generalized Born scattering of elastic waves in 3-D media
It is well known that when a seismic wave propagates through an elastic medium with gradients in the parameters which describe it (e.g. slowness and density), energy is scattered from the incident wave, generating low-frequency partial reflections. Many approximate solutions to the wave equation, e.g. geometrical ray theory (GRT), Maslov theory and Gaussian beams, do not model these signals. The problem of describing partial reflections in 1-D media has been extensively studied in the seismic literature and considerable progress has been made using iterative techniques based on WKBJ, Airy or Langer type ansätze. In this paper we derive a first-order scattering formalism to describe partial reflections in 3-D media. The correction term describing the scattered energy is developed as a volume integral over terms dependent upon the first spatial derivatives (gradients) of the parameters describing the medium and the solution. The relationship we derive could, in principle, be used as the basis for an iterative scheme but the computational expense, particularly for elastic media, will usually prohibit this approach. The result we obtain is closely related to the usual Born approximation, but differs in that the scattering term is not derived from a perturbation to a background model, but rather from the error in an approximate Green's function. We examine analytically the relationship between the results produced by the new formalism and the usual Born approximation for a medium which has no long-wavelength heterogeneities. We show that in such a case the two methods agree approximately, as expected, but that in media with heterogeneities of all wavelengths the new gradient scattering formalism is superior. We establish analytically the connection between the formalism developed here and the iterative approach based on the WKBJ solution which has been used previously in 1-D media. Numerical examples are shown to illustrate the points discussed.
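For orientation, the usual first-order Born approximation that the paper compares against can be written in standard form (our notation); the paper's own formalism replaces the background-model perturbation by the error of an approximate Green's function.

```latex
% Split the medium into a background (subscript 0) and a perturbation \delta L
% of the wave operator; with G_0 the background Green's function and u_0 = G_0 f,
u(\mathbf{x}) \;\approx\; u_0(\mathbf{x})
  \;+\; \int_V G_0(\mathbf{x},\mathbf{x}')\,\delta L\, u_0(\mathbf{x}')\,
        \mathrm{d}^3\mathbf{x}' ,
% i.e. the scattered field is a volume integral of single scattering from the
% model perturbation; the gradient formalism of this paper instead builds the
% volume integral from spatial gradients of the medium parameters.
```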

19.
Detailed land cover maps provide important information for research and decision-making but are often expensive to develop and can become outdated quickly. Widespread availability of aerial photography provides increased accessibility of high-resolution imagery and the potential to produce high-accuracy land cover classifications. However, these classifications often require expert knowledge and are time consuming. Our goal was to develop an efficient, accurate technique for classifying impervious surface in urbanizing Wake County, North Carolina. Using an iterative training technique, we classified 111 nonmosaicked, very-high-resolution images using the Feature Analyst software developed by Visual Learning Systems. Feature Analyst provides object extraction classifications by analyzing spatial context in relation to spectral data to classify high-resolution imagery. Our image classification results were 95 percent accurate in impervious surface extraction, with an overall total accuracy of 92 percent. Using this method, users with relatively limited geographic information system (GIS) training and modest budgets can produce highly accurate object-extracted classifications of impervious and pervious surface that are easily manipulated in a GIS.

20.
The objective of this paper is to investigate uncertainties surrounding relationships between spatial autocorrelation (SA) and the modifiable areal unit problem (MAUP) with an extensive simulation experiment. In particular, this paper explores how differently the MAUP behaves depending on the level of SA, focusing on how the initial level of SA at the finest spatial scale makes a significant difference to the MAUP effects on sample statistics such as means, variances, and Moran coefficients (MCs). The simulation experiment utilizes a random spatial aggregation (RSA) procedure and adopts Moran spatial eigenvectors to simulate different SA levels. The main findings are as follows. First, there are no substantive MAUP effects for means. However, the initial level of SA plays a role for the zoning effect, especially when extreme positive SA is present. Second, there is a clear and strong scale effect for the variances. However, the initial SA level plays a non-negligible role in how this scale effect unfolds. Third, the initial SA level plays a crucial role in the nature and extent of the MAUP effects on MCs. A regression analysis confirms that the initial SA level makes a substantial difference to the variability of the MAUP effects.
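For reference, the Moran coefficient used as the SA measure in the experiment can be computed as follows; this is the standard formulation, and the binary-contiguity weights implied by the docstring are an assumption rather than the authors' exact specification.

```python
import numpy as np

def moran_coefficient(values, W):
    """Moran's I for an attribute vector and a spatial weights matrix W
    (W[i, j] > 0 when areal units i and j are neighbours, zero diagonal)."""
    x = np.asarray(values, dtype=float)
    W = np.asarray(W, dtype=float)
    z = x - x.mean()                       # deviations from the mean
    n = len(x)
    s0 = W.sum()                           # sum of all spatial weights
    return (n / s0) * (z @ W @ z) / (z @ z)
```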
