Similar Literature
20 similar documents found (search time: 25 ms)
1.
We present an approximate method to estimate the resolution, covariance and correlation matrix for linear tomographic systems Ax = b that are too large to be solved by singular value decomposition. An explicit expression for the approximate inverse matrix A† is found using one-step backprojections based on the Penrose condition AA† ≈ I, from which we calculate the statistical properties of the solution. The computation of A† can easily be parallelized, each column being constructed independently.
The method is validated on small systems for which the exact covariance can still be computed with singular value decomposition. Though A† is not accurate enough to actually compute the solution x, the qualitative agreement obtained for resolution and covariance is sufficient for many purposes, such as a rough assessment of model precision or the reparametrization of the model by grouping correlated parameters. We present an example of the computation of the complete covariance matrix of a very large (69 043 × 9610) system with 5.9 × 10⁶ non-zero elements in A. Computation time is proportional to the number of non-zero elements in A. If the correlation matrix is computed for the purpose of reparametrization by combining highly correlated unknowns x_i, a further gain in efficiency can be obtained by neglecting the small elements in A†, but a more accurate estimation of the correlation requires a full treatment of even the smaller elements A†_ij. We finally develop a formalism to compute a damped version of A†.
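The one-step backprojection construction and the covariance estimate it enables can be sketched as follows; this is a minimal dense-matrix illustration, and the simple diagonal scaling rule and function names are ours, not the authors' exact construction:

```python
import numpy as np

def one_step_backprojection_inverse(A):
    """Crude one-step backprojection approximation A~ to the inverse of A,
    with rows scaled so that diag(A~ @ A) = 1 (a weak form of the Penrose
    condition A~ A ~= I). Illustrative only."""
    At = A.T
    scale = np.einsum('ij,ji->i', At, A)   # diag(A^T A), one entry per row of A~
    return At / scale[:, None]

def approximate_covariance(A_tilde, sigma_d=1.0):
    """Posterior model covariance C ~= A~ Cd A~^T, assuming Cd = sigma_d^2 I."""
    return sigma_d**2 * A_tilde @ A_tilde.T

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))              # small stand-in for the huge system
At = one_step_backprojection_inverse(A)
C = approximate_covariance(At)
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))   # correlation matrix
```

For the large sparse systems discussed above, `A` would be stored sparsely and each column of the approximate inverse built independently, which is what makes the computation easy to parallelize.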

2.
3.
We present an improved method for computing polarization attributes of particle motion from multicomponent seismic recordings in the time–frequency domain by using the continuous wavelet transform. This method is based on the analysis of the covariance matrix. We use an approximate analytical formula to compute the elements of the covariance matrix for a time window which is derived from an averaged instantaneous frequency of the multicomponent record. The length of the time window is automatically and adaptively set to match the dominant period of the analysing wavelet at each time–frequency point. The eigenparameters are then estimated for each time–frequency point without interpolation. With these key features, our method provides a suitable approach for polarization analysis of dispersive signals or overlapping seismic arrivals in multicomponent seismic data. For polarization analysis in the time domain, we show that the proposed method is consistent with existing polarization analysis methods. We apply the method to real data sets from exploration and earthquake seismology to illustrate some filtering applications and wave-type characterizations.
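In the time domain, the covariance-matrix eigen-analysis underlying the method reduces to the following sketch (window, signal and function names are illustrative; the paper applies this per time–frequency point of the continuous wavelet transform):

```python
import numpy as np

def polarization_attributes(window):
    """Eigen-analysis of the 3x3 covariance matrix of a three-component
    window (shape (n, 3)); returns rectilinearity and the principal
    polarization direction. A plain time-domain sketch of the
    covariance-matrix approach."""
    C = np.cov(window, rowvar=False)   # 3x3 covariance matrix
    w, v = np.linalg.eigh(C)           # eigenvalues in ascending order
    l1, l2 = w[-1], w[-2]
    rectilinearity = 1.0 - l2 / l1     # -> 1 for purely linear motion
    direction = v[:, -1]               # principal eigenvector
    return rectilinearity, direction

t = np.linspace(0, 1, 500)
s = np.sin(2 * np.pi * 5 * t)
linear = np.column_stack([s, 0.5 * s, 0.2 * s])   # rectilinear particle motion
r, d = polarization_attributes(linear)
```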

4.
A quadratic neural network (QNN) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by artificial neural networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions, with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and site attenuation effects and to obtain a compact representation of the seismic records. Second, we use the QNN system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then passed to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0–6.5) recorded by the Iranian National Seismic Network (INSN). Fully correct (100%) decisions were obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as we can combine several 3C stations.
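As a hedged illustration of autoregressive coefficients serving as compact waveform features, the sketch below estimates AR coefficients by the Yule–Walker equations in plain numpy; the paper derives ARMA coefficients with a QNN, and the data and model order here are synthetic:

```python
import numpy as np

def ar_features(x, order=4):
    """Estimate AR coefficients of a signal via the Yule-Walker equations
    (autocorrelation method) -- a compact feature vector in the spirit of
    the ARMA coefficients used for discrimination. Illustrative only."""
    x = x - x.mean()
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(1)
# synthetic AR(2) process: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
e = rng.normal(size=20000)
x = np.zeros_like(e)
for t in range(2, len(e)):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]
a = ar_features(x, order=2)   # should approach [0.75, -0.5]
```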

5.
Summary. We have analysed a thirty-six day recording of the natural electric and magnetic field variations obtained on the deep ocean floor north-east of Hawaii. The electromagnetic fields are dominated by tides which have an appreciable oceanic component, especially in the east electric and north magnetic components. The techniques of data analysis included singular value decomposition (SVD) to remove uncorrelated noise. There are three degrees of freedom in the data set for periods longer than five hours, indicating a correlation of the vertical magnetic field and the horizontal components, suggesting source-field inhomogeneity. Tensor response functions were calculated using spectral band averaging with both SVD and least-squares techniques and rotated to the principal direction. One diagonal component, determined mainly by the north electric and east magnetic fields, is not interpretable as a one-dimensional induction phenomenon. The other diagonal term of the response function indicates a rapid rise in conductivity to 0.05 mho m⁻¹ near 160 km. No decrease in conductivity below this depth is resolvable. Polarization analysis of the magnetic field indicates moving source fields with a wavelength near 5000 km. Model studies suggest that the two-dimensionality in the response function may be caused by motion in the ionospheric current system.
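The SVD step used to remove uncorrelated noise can be sketched generically: project a multichannel record onto its leading singular vectors and discard the rest (the channel geometry and noise level below are invented for illustration):

```python
import numpy as np

def svd_denoise(D, k):
    """Remove uncorrelated noise by truncating the SVD of a multichannel
    data matrix D (channels x samples) to its k largest singular values.
    A generic sketch of the SVD noise-removal step."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
source = np.sin(2 * np.pi * 0.4 * t)            # one coherent source signal
mixing = np.array([1.0, -0.6, 0.3, 0.8])        # 4 hypothetical channels
clean = np.outer(mixing, source)
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = svd_denoise(noisy, k=1)              # keep the coherent part only
```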

6.
王聪, 黄宁, 杨保. 《地理科学》 2014, 34(2): 237-241
In climate reconstruction research, the limited availability of reconstruction data strongly constrains the work. The optimal regional averaging method is an effective reconstruction approach to this problem: it computes the mean temperature of a target region from optimal weights and the limited temperature data. In its application, the optimal weights are first obtained from a weighting scheme that minimizes the mean squared error, solved with Lagrange multipliers; the regional mean temperature is then computed by combining the optimal weights with the temperature data. The current optimal regional averaging method has weaknesses when computing the mean temperature of large regions. To overcome this, so that the method can handle large regions such as the Northern Hemisphere, we improve it as follows: instead of solving for the covariance pattern by grid-cell summation, the covariance pattern is obtained with Haar wavelet functions and matrix operators; and the linear algebraic system for the optimal weights is solved by Gaussian elimination with complete pivoting. The results show that using Haar wavelets and matrix operators makes the computed covariance pattern more accurate. The data used are from the Climatic Research Unit (CRU), regarded as one of the most authoritative data sources. Taking the Northern Hemisphere mean temperature for 1961–1990 as an example, the result of the improved optimal regional averaging method correlates better with the published CRU result than that of the original method. The improved method therefore provides a more reasonable and feasible computation for the palaeoclimate-reconstruction problem of limited proxy records.
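The optimal-weighting step, minimizing the error variance subject to the weights summing to one via Lagrange multipliers, can be sketched in numpy as follows; the covariance matrix below is hypothetical, and the paper's contribution lies in how the covariance pattern itself is computed with Haar wavelets:

```python
import numpy as np

def optimal_weights(C):
    """Optimal averaging weights minimizing the error variance w^T C w
    subject to sum(w) = 1, via Lagrange multipliers: solve the bordered
    system [[C, 1], [1^T, 0]] [w; lam] = [0; 1]."""
    n = C.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = C
    M[:n, n] = 1.0
    M[n, :n] = 1.0
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0
    return np.linalg.solve(M, rhs)[:n]

# hypothetical error covariance among three temperature records
C = np.array([[1.0, 0.2, 0.1],
              [0.2, 2.0, 0.3],
              [0.1, 0.3, 1.5]])
w = optimal_weights(C)   # weights sum to 1, variance no worse than uniform
```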

7.
Summary. The Backus–Gilbert method is applied to obtain the phase velocity variations on a sphere from the measured phase velocity. Narrow peak kernels, with radii of about 2000 km, are obtained almost everywhere on the sphere. The phase velocity results are thus interpreted as an average within such regions. The most trouble comes from the antipodal peak in the resolution kernel. This is evaluated as contamination and is incorporated in the error estimation. The total error, which is the root mean square of the contamination from the antipodal peak and the statistical error estimated from the data covariance matrix, is about 1 per cent of the phase velocity in the average earth model, which is the Preliminary Reference Earth Model (PREM). However, there is about a factor of 2 variation of errors over the sphere. Maximum variations of phase velocity are about 3–4 per cent of the phase velocity in the average earth model, and thus there still remain anomalies which exceed the estimated errors. The estimated errors correspond to one standard deviation under the assumption of an uncorrelated Gaussian distribution. At high confidence levels, statistically significant anomalies are scarce for the current data set. Generally, Love-wave phase velocity maps show more resolved features than Rayleigh-wave maps, and we can see, in high-confidence maps, fast velocities in old oceans and old continents and slow velocities in tectonically active regions like the East Pacific Rise and various back-arc regions.

8.
We present a new formulation of the inverse problem of determining the temporal and spatial power moments of the seismic moment rate density distribution, in which its positivity is enforced through a set of linear conditions. To test and demonstrate the method, we apply it to artificial data for the great 1994 deep Bolivian earthquake. We use two different kinds of faulting models to generate the artificial data. One is the Haskell type of faulting model. The other consists of a collection of a few isolated points releasing moment on a fault, as was proposed in recent studies of this earthquake. The positions of 13 teleseismic stations for which P- and SH-wave data are actually available for this earthquake are used. The numerical experiments illustrate the importance of the positivity constraints, without which incorrect solutions are obtained. We also show that the Green functions associated with the problem must be approximated with a low approximation error to obtain reliable solutions. This is achieved by using a more uniform approximation than a Taylor series. We also find that it is necessary to use relatively long-period data first to obtain the low-degree (0th and 1st) moments. Using the insight obtained into the size and duration of the process from the first-degree moments, we can decrease the integration region, substitute these low-degree moments into the problem and use higher-frequency data to find the higher-power moments, so as to obtain more reliable estimates of the spatial and temporal source dimensions. At the higher frequencies, it is necessary to divide the region in which we approximate the Green functions into small pieces and approximate the Green functions separately in each piece to achieve a low approximation error. A derivation is given showing that the mixed spatio-temporal moments of second degree represent the average speeds of the centroids in the corresponding directions.

9.
Summary. The computational effectiveness of travel-time inversion methods depends on the parameterization of the 3-D velocity structure. We divide a region of interest into a few layers and represent the perturbation of wave slowness in each layer by a series of Chebyshev polynomials. A relatively complex velocity structure can then be described by a small set of parameters that can be accurately evaluated by a linearized inversion of travel-time residuals. This method has been applied to artificial and real data at small epicentral distances and in the teleseismic distance range. The corresponding matrix equations were solved using singular value decomposition. The results suggest that the method combines resolution with computational convenience.
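A minimal sketch of the two ingredients, a Chebyshev-polynomial parameterization of the slowness perturbation and a linearized inversion solved by SVD, on a synthetic 1-D example (the coefficients and noise level are invented):

```python
import numpy as np

def chebyshev_design(x, order):
    """Design matrix whose columns are Chebyshev polynomials T_0..T_order
    evaluated at normalized positions x in [-1, 1]; the slowness
    perturbation is a linear combination of these columns."""
    return np.polynomial.chebyshev.chebvander(x, order)

def svd_solve(G, d, rcond=1e-10):
    """Least-squares solution via truncated singular value decomposition,
    as used for the matrix equations in the study."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 200)                       # ray-sampling positions
true_coef = np.array([0.1, -0.05, 0.02, 0.01])    # hypothetical model
G = chebyshev_design(x, order=3)
d = G @ true_coef + 0.001 * rng.normal(size=len(x))   # noisy "residuals"
m = svd_solve(G, d)                               # recovered coefficients
```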

10.
A numerical method is presented for calculating complete theoretical seismograms, under the assumption that the earth models have velocity, density and attenuation profiles which are arbitrary piecewise continuous functions of depth only. Solutions for the stress-displacement vectors in the medium are expanded in terms of orthogonal cylindrical functions. Our method for solving the resulting two-point boundary value problems differs from that of other investigators in three ways. First, collocation is used in traditionally troublesome situations, e.g. for highly evanescent waves, at turning points, and in regions having large gradients in material properties. Second, in some situations (high frequencies and small gradients) P- and S-waves decouple and we use a different solution method for each wave type, instead of trying to force a single method to find all solutions. For example, above the P- and S-wave turning points an approximate fundamental matrix may be used for each wave type. At the P-wave turning point, the fundamental matrix may be used for the S-wave components but collocation is used for the P-wave. Between the P- and S-wave turning points collocation is used for the evanescent P-wave and the fundamental matrix is used for the S-wave. At the S-wave turning point and below, collocation is used for both. Third, the computational algorithm chooses the appropriate solution method, and the depth domain upon which it is employed, based upon a specified error tolerance and the known inaccuracies of the various approximations employed. Once solutions of the boundary value problems are obtained, a Fourier–Bessel transform is applied to get back into the space-time domain.

11.
Modelling spatio-temporal dependencies resulting from dynamic processes that evolve in both space and time is essential in many scientific fields. Spatio-temporal Kriging is one of the space–time procedures which has progressed the most over the last few years. Kriging predictions strongly depend on the covariance function associated with the stochastic process under study. Therefore, the choice of such a covariance function, which is usually based on the empirical covariance, is a core aspect of the prediction procedure. As the empirical covariance is not necessarily a permissible covariance function, it is necessary to fit a valid covariance model. Due to the complexity of these valid models in the spatio-temporal case, visualising them is of great help, at least when selecting the set of candidate models to represent the spatio-temporal dependencies suggested by the empirical covariogram. We focus on the visualisation of the most interesting stationary non-separable covariance functions and how they change as their main parameters take different values. We wrote specialised code for visualisation purposes. In order to illustrate the usefulness of visualisation when choosing an appropriate non-separable spatio-temporal covariance model, we examine an important pollution problem, namely the levels of carbon monoxide in the city of Madrid, Spain.
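As an illustration of the kind of covariance surface being visualised, the sketch below evaluates a Gneiting-type stationary non-separable covariance on a grid of space and time lags; the functional form is a simplified illustrative member of that class, and all parameter values are assumptions:

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0, alpha=1.0, beta=0.5):
    """A Gneiting-type stationary non-separable covariance (simplified,
    illustrative form): C(h,u) = sigma2/psi * exp(-c|h| / psi^(beta/2)),
    psi = a|u|^(2 alpha) + 1, where h is the spatial lag and u the
    temporal lag. In this form beta = 0 gives a separable model, so beta
    controls the strength of the space-time interaction."""
    psi = a * np.abs(u) ** (2 * alpha) + 1.0
    return sigma2 / psi * np.exp(-c * np.abs(h) / psi ** (beta / 2))

# grid of spatial and temporal lags for a covariance surface
h = np.linspace(0, 3, 60)
u = np.linspace(0, 3, 60)
H, U = np.meshgrid(h, u)
C = gneiting_cov(H, U)   # e.g. plt.contourf(H, U, C) to visualize
```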

12.
13.
Seismic traveltimes and amplitudes in reflection-seismic data show different dependences on the geometry of reflection interfaces, and on the variation of interval velocities. These dependences are revealed by eigenanalysis of the Hessian matrix, defined in terms of the Fréchet matrix and its adjoint associated with different norms chosen in the model space. The eigenvectors and eigenvalues of the Hessian clearly show that for reflection tomographic inversion, traveltime and amplitude data contain complementary information. Both for reflector-geometry and for interval-velocity variations, the traveltimes are sensitive to the model components with small wavenumbers, whereas the amplitudes are more sensitive to the components with high wavenumbers. The model resolution matrices, after the rejection of eigenvectors corresponding to small eigenvalues, give us some insight into how the addition of amplitude information could potentially contribute to the recovery of physical parameters.
In order to invert seismic traveltimes and amplitudes cooperatively and simultaneously, we propose an empirical definition of the data covariance matrix which balances the relative sensitivities of the different data types. We investigate the cooperative use of both data types for, separately, interface-geometry and 2-D interval-velocity variations. In both cases we find that cooperative inversions can provide better solutions than those using traveltimes alone. The potential benefit of including amplitude-data constraints in seismic-reflection traveltime tomography is therefore that it may better resolve the known ambiguity between reflector-depth uncertainty and interval-velocity uncertainty.
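One simple way to realise such a balancing data covariance is to whiten each data type by its own RMS scale before stacking the two systems; the sketch below is a generic illustration with random kernels, not the paper's exact weighting:

```python
import numpy as np

def joint_solve(G1, d1, G2, d2):
    """Cooperative least squares for two data types with an empirical
    diagonal data covariance: each data set is scaled by the inverse of
    its own RMS magnitude before stacking, so that neither data type
    dominates purely because of its units or amplitude."""
    w1 = 1.0 / np.sqrt(np.mean(d1 ** 2))
    w2 = 1.0 / np.sqrt(np.mean(d2 ** 2))
    G = np.vstack([w1 * G1, w2 * G2])
    d = np.concatenate([w1 * d1, w2 * d2])
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m

rng = np.random.default_rng(4)
m_true = np.array([1.0, -2.0, 0.5])
G1 = rng.normal(size=(40, 3))            # "traveltime" kernels (hypothetical)
G2 = 100.0 * rng.normal(size=(40, 3))    # "amplitude" kernels, different scale
d1 = G1 @ m_true
d2 = G2 @ m_true
m = joint_solve(G1, d1, G2, d2)
```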

14.
A new algorithm is presented for the integrated 2-D inversion of seismic traveltime and gravity data. The algorithm adopts the 'maximum likelihood' regularization scheme. We construct a 'probability density function' which includes three kinds of information: information derived from gravity measurements; information derived from the seismic traveltime inversion procedure applied to the model; and information on the physical correlation among the density and the velocity parameters. We assume a linear relation between density and velocity, which can be node-dependent; that is, we can choose different relationships for different parts of the velocity–density grid. In addition, our procedure allows us to consider a covariance matrix related to the error propagation in linking density to velocity. We use seismic data to estimate starting velocity values and the position of boundary nodes. Subsequently, the sequential integrated inversion (SII) optimizes the layer velocities and densities for our models. The procedure is applicable, as an additional step, to any type of seismic tomographic inversion.
We illustrate the method by comparing the velocity models recovered from a standard seismic traveltime inversion with those retrieved using our algorithm. The inversion of synthetic data calculated for a 2-D isotropic, laterally inhomogeneous model shows the stability and accuracy of this procedure, demonstrates the improvements to the recovery of true velocity anomalies, and proves that this technique can efficiently overcome some of the limitations of both gravity and seismic traveltime inversions, when they are used independently.
An interpretation of field data from the 1994 Vesuvius test experiment is also presented. At depths down to 4.5 km, the model retrieved after an SII shows more detailed structure than the model obtained from an interpretation of seismic traveltimes only, and yields additional information for further study of the area.

15.
A simple algorithm for the deconvolution and regression of shot-noise-limited data is illustrated in this paper. The algorithm is easily adapted to almost any model and converges to the global optimum. Multiple-component spectrum regression, spectrum deconvolution and smoothing examples are used to illustrate the algorithm. The algorithm, together with a method for determining uncertainties in the parameters based on the Fisher information matrix, is given and illustrated with three examples. An experimental example of spectrograph grating order compensation for a diode-array solar spectroradiometer illustrates the use of this technique in environmental analysis. The major advantages of the expectation-maximization (EM) algorithm are found to be its stability, simplicity, conservation of data magnitude and guaranteed convergence.
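For shot-noise-limited (Poisson) data, the EM deconvolution update takes the familiar multiplicative Richardson–Lucy form; the sketch below illustrates its positivity and magnitude-conservation properties on an invented line-spectrum example:

```python
import numpy as np

def em_deconvolve(d, K, n_iter=200):
    """EM (Richardson-Lucy) deconvolution for shot-noise (Poisson) data:
    multiplicative updates x <- x * K^T(d / Kx) / K^T 1, which preserve
    positivity and conserve the total data magnitude. Generic sketch of
    the EM deconvolution discussed above."""
    x = np.full(K.shape[1], d.sum() / K.shape[1])
    norm = K.sum(axis=0)
    for _ in range(n_iter):
        pred = K @ x
        x *= (K.T @ (d / pred)) / norm
    return x

# hypothetical blur: each column is a shifted Gaussian line profile
n = 40
i = np.arange(n)
K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
K /= K.sum(axis=0)                 # column-normalized kernel
x_true = np.zeros(n)
x_true[12] = 100.0                 # two invented spectral lines
x_true[28] = 60.0
d = K @ x_true                     # noiseless blurred "spectrum"
x = em_deconvolve(d, K, n_iter=500)
```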

16.
Regional differences in the response of Northern Hemisphere spring vegetation NDVI to temperature change
Using Pathfinder NDVI data from 1982 to 2000 and singular value decomposition (SVD) analysis, we study the spatial differences in the response of Northern Hemisphere spring NDVI to temperature change. The first seven mode pairs explain more than 91% of the total squared covariance, reflecting a very high correlation between NDVI and air temperature. The first mode pair explains 42.6% and shows that the most pronounced NDVI response centre in the Northern Hemisphere is in western Siberia, followed by the North American continent, with a centre in its middle-eastern part; the third and subsequent mode pairs reflect secondary spatial features. The analysis shows that these NDVI–temperature coupled modes are significantly influenced by large-scale atmospheric circulation systems: nine important circulation indices explain 55.6% of the NDVI variance over the whole Northern Hemisphere, with the most prominent influence over Europe, southeastern North America, northwestern North America, high-latitude Asia and East Asia. Therefore, when studying the regional characteristics of the response of vegetation ecosystems to future global change, the possible changes in these circulation systems and their effects must be taken into account.
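The SVD (maximum covariance) analysis of two coupled fields, and the squared covariance fraction explained by each mode pair, can be sketched as follows; the two synthetic fields below share one common mode, and all data are illustrative:

```python
import numpy as np

def svd_coupled_modes(X, Y):
    """Maximum covariance analysis of two anomaly fields X (time x p) and
    Y (time x q): SVD of the cross-covariance matrix; the squared singular
    values give each mode pair's squared covariance fraction (SCF)."""
    C = X.T @ Y / (X.shape[0] - 1)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    scf = s ** 2 / np.sum(s ** 2)
    return U, Vt.T, scf

rng = np.random.default_rng(5)
t = rng.normal(size=(100, 1))                          # one shared mode in time
X = t @ rng.normal(size=(1, 6)) + 0.1 * rng.normal(size=(100, 6))
Y = t @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(100, 4))
U, V, scf = svd_coupled_modes(X, Y)   # the first SCF dominates
```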

17.
PLS1 regression is generally viewed as lying between PCR and OLS regression. Proof is given that the coefficient of determination, R², for a PLS multivariate calibration model is at least as high as that for a PCR model with the same number of components. It appears that PLS can be linked to a correlation-weighted polynomial regression of a constant response on the eigenvalues of the covariance matrix of the predictor variables.
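The stated inequality R²(PLS) ≥ R²(PCR) for an equal number of components can be checked numerically; the sketch below implements PCR and a NIPALS-style PLS1 in plain numpy on invented data:

```python
import numpy as np

def pcr_r2(X, y, k):
    """Training-set R^2 of principal component regression with k components."""
    Xc, yc = X - X.mean(0), y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :k] * s[:k]                       # scores on the first k PCs
    b, *_ = np.linalg.lstsq(T, yc, rcond=None)
    resid = yc - T @ b
    return 1 - resid @ resid / (yc @ yc)

def pls1_r2(X, y, k):
    """Training-set R^2 of PLS1 with k components (NIPALS deflation)."""
    Xc, yc = X - X.mean(0), y - y.mean()
    E, f = Xc.copy(), yc.copy()
    scores = []
    for _ in range(k):
        w = E.T @ f
        w /= np.linalg.norm(w)
        t = E @ w
        p = E.T @ t / (t @ t)
        E = E - np.outer(t, p)                 # deflate X block
        f = f - (f @ t) / (t @ t) * t          # deflate y
        scores.append(t)
    T = np.column_stack(scores)
    b, *_ = np.linalg.lstsq(T, yc, rcond=None)
    resid = yc - T @ b
    return 1 - resid @ resid / (yc @ yc)

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 8))
y = X @ rng.normal(size=8) + 0.5 * rng.normal(size=50)
r2_pls = pls1_r2(X, y, k=3)
r2_pcr = pcr_r2(X, y, k=3)   # r2_pls >= r2_pcr, per the theorem above
```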

18.
We address the problem of estimating the spherical-harmonic power spectrum of a statistically isotropic scalar signal from noise-contaminated data on a region of the unit sphere. Three different methods of spectral estimation are considered: (i) the spherical analogue of the one-dimensional (1-D) periodogram, (ii) the maximum-likelihood method and (iii) a spherical analogue of the 1-D multitaper method. The periodogram exhibits strong spectral leakage, especially for small regions of area A ≪ 4π, and is generally unsuitable for spherical spectral analysis applications, just as it is in 1-D. The maximum-likelihood method is particularly useful in the case of nearly-whole-sphere coverage, A ≈ 4π, and has been widely used in cosmology to estimate the spectrum of the cosmic microwave background radiation from spacecraft observations. The spherical multitaper method affords easy control over the fundamental trade-off between spectral resolution and variance, and is easily implemented regardless of the region size, requiring neither non-linear iteration nor large-scale matrix inversion. As a result, the method is ideally suited for most applications in geophysics, geodesy or planetary science, where the objective is to obtain a spatially localized estimate of the spectrum of a signal from noisy data within a pre-selected and typically small region.
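The 1-D analogue of the multitaper trade-off is easy to sketch; below, sine tapers stand in for the prolate tapers as a simple orthogonal family (the data and taper count are illustrative):

```python
import numpy as np

def sine_multitaper_psd(x, n_tapers=5):
    """Multitaper spectral estimate with sine tapers (a simple stand-in
    for the prolate tapers): average the periodograms of K orthogonal
    tapered copies of the data, trading spectral resolution for reduced
    variance. A 1-D sketch of the trade-off discussed above."""
    N = len(x)
    n = np.arange(N)
    psd = np.zeros(N // 2 + 1)
    for k in range(1, n_tapers + 1):
        taper = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * (n + 1) / (N + 1))
        psd += np.abs(np.fft.rfft(taper * x)) ** 2
    return psd / n_tapers

rng = np.random.default_rng(7)
x = rng.normal(size=1024)                       # white noise, flat spectrum
periodogram = np.abs(np.fft.rfft(x / np.sqrt(len(x)))) ** 2
mt = sine_multitaper_psd(x, n_tapers=8)
# the multitaper estimate shows visibly lower variance across frequencies
```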

19.
Calculation of electromagnetic sensitivities in the time domain
The speed of calculating sensitivities for 3-D conductivity structures for time-domain electromagnetic methods is significantly improved by applying the reciprocity theorem directly in the time domain. The sensitivities are obtained by convolving the electric field in the subsurface due to a transmitter at the surface with the electric field impulse response due to another transmitter, which replaces the original receiver. The acceleration compared to the classical perturbation method is approximately P/R, where P is the number of model parameters and R is the number of receiver positions. If the sensitivity has to be calculated very close to the receiver, approximate sensitivities can be obtained using an integral condition. Comparisons with the classical perturbation approach show that the method gives accurate results. Examples using transmitter–receiver configurations from a long-offset transient electromagnetics survey demonstrate the usefulness of sensitivities for the evaluation of resolution properties.
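The reciprocity step, sensitivity as a time convolution of two electric fields, reduces to the following sketch with synthetic decaying transients (the field shapes, time step and scaling are assumptions, not the paper's modelled fields):

```python
import numpy as np

def sensitivity_by_reciprocity(e_fwd, e_adj, dt):
    """Sensitivity as the time convolution of the forward electric field at
    a subsurface cell (source at the transmitter) with the impulse-response
    field from a reciprocal source placed at the receiver position. A
    generic 1-D-in-time sketch of the reciprocity step."""
    return dt * np.convolve(e_fwd, e_adj)[:len(e_fwd)]

dt = 1e-4
t = np.arange(200) * dt
e_fwd = np.exp(-t / 2e-3)      # hypothetical transient field at the cell
e_adj = np.exp(-t / 1e-3)      # hypothetical reciprocal impulse response
s = sensitivity_by_reciprocity(e_fwd, e_adj, dt)
```

One forward run per receiver-side source replaces one perturbed run per model parameter, which is where the approximate P/R speed-up quoted above comes from.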

20.
Probabilistic landslide hazard assessment at the basin scale
We propose a probabilistic model to determine landslide hazard at the basin scale. The model predicts where landslides will occur, how frequently they will occur, and how large they will be. We test the model in the Staffora River basin, in the northern Apennines, Italy. For the study area, we prepare a multi-temporal inventory map through the interpretation of multiple sets of aerial photographs taken between 1955 and 1999. We partition the basin into 2243 geo-morpho-hydrological units, and obtain the probability of spatial occurrence of landslides by discriminant analysis of thematic variables, including morphology, lithology, structure and land use. For each mapping unit, we obtain the landslide recurrence by dividing the total number of landslide events inventoried in the unit by the time span of the investigated period. Assuming that landslide recurrence will remain the same in the future, and adopting a Poisson probability model, we determine the exceedance probability of having one or more landslides in each mapping unit, for different periods. We obtain the probability of landslide size by analysing the frequency–area statistics of landslides obtained from the multi-temporal inventory map. Assuming independence, we obtain a quantitative estimate of landslide hazard for each mapping unit as the joint probability of landslide size, of landslide temporal occurrence and of landslide spatial occurrence.
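The Poisson exceedance-probability step can be sketched directly (the event count and time spans below are hypothetical, not the paper's Staffora results):

```python
import numpy as np

def exceedance_probability(n_events, span_years, horizon_years):
    """Poisson model for landslide temporal occurrence: with recurrence
    rate lam = n_events / span_years, the probability of one or more
    landslides in a mapping unit over a future horizon t is
    1 - exp(-lam * t). Generic sketch of the hazard step described above."""
    lam = n_events / span_years
    return 1.0 - np.exp(-lam * np.asarray(horizon_years, dtype=float))

# hypothetical mapping unit: 3 landslides inventoried over a 44-year
# observation period, evaluated for 5-, 25- and 50-year horizons
p = exceedance_probability(3, 44.0, [5, 25, 50])
```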
