Similar documents
 10 similar documents found (search time: 218 ms)
1.
In 1963, Gandin published a monograph on "optimum interpolation for the objective analysis of meteorological fields," a method that is mathematically similar to geodetic least-squares prediction and collocation, simple kriging, and spectral interpolation. The common problem is the interpolation, extrapolation, or estimation of a continuous spatial property from finitely many observations. Gandin's method is presented in an inverse-theoretical context, with the focus on a methodological comparison with related methods. Underlying mathematical assumptions as well as geological implications are discussed. An introductory overview of inverse methods in the earth sciences is given, with emphasis on methods that include a structure-analysis step.
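Under a known (here zero) mean, Gandin-style optimum interpolation coincides with simple kriging: the estimate is a covariance-weighted linear combination of the observations. A minimal numpy sketch, assuming a hypothetical 1-D exponential covariance model (the model and data are illustrative, not from the monograph):

```python
import numpy as np

# Hypothetical exponential covariance model (sill and range are assumptions).
def cov(h, sill=1.0, rng=2.0):
    return sill * np.exp(-np.abs(h) / rng)

def optimum_interpolation(x_obs, z_obs, x0):
    C = cov(x_obs[:, None] - x_obs[None, :])   # data-data covariance matrix
    c0 = cov(x_obs - x0)                       # data-target covariance vector
    w = np.linalg.solve(C, c0)                 # minimum-variance weights
    est = w @ z_obs                            # optimal linear estimate
    var = cov(0.0) - w @ c0                    # estimation (kriging) variance
    return est, var

x = np.array([0.0, 1.0, 3.0])
z = np.array([0.5, 0.8, -0.2])
est, var = optimum_interpolation(x, z, 1.5)
```

At an observed location the method reproduces the datum exactly with zero estimation variance, which is the interpolating property shared by all the related least-squares methods the abstract lists.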

2.
Triangle-based interpolation is introduced by an outline of two classical planar interpolation methods, viz. linear triangular facets and proximal polygons. These are shown to have opposite local bias. By applying cross products of triangles to obtain local gradients, a method designated slant-top proximal polygon interpolation is introduced that is intermediate between linear facets and polygonal interpolation in its local bias. This surface is not continuous, but, by extending and weighting the gradient planes, a C1 surface can be obtained. The gradients also allow a roughness index to be calculated for each data point in the set. This index is used to control the shape of a blending function that provides a weighted combination of the gradient planes and linear interpolation. This results in a curvilinear, C1 interpolation of the data set that is bounded by the linear interpolation and the weighted gradient planes and is tangent to the slant-top interpolation at the data points. These procedures may be applied to data with two, three, or four independent variables.
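The linear-triangular-facet method that the comparison starts from can be sketched with barycentric coordinates: a point inside a triangle gets the vertex values weighted by its barycentric weights. The triangle and vertex values below are hypothetical:

```python
import numpy as np

def barycentric(p, a, b, c):
    # Solve p = a + l1*(b-a) + l2*(c-a) for the barycentric weights.
    T = np.column_stack((b - a, c - a))        # 2x2 edge matrix
    l1, l2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l1 - l2, l1, l2])   # weights sum to 1

def facet_interpolate(p, verts, vals):
    # Linear (planar-facet) interpolation over one triangle.
    return barycentric(p, *verts) @ vals

verts = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
vals = np.array([1.0, 3.0, 5.0])
z = facet_interpolate(np.array([0.25, 0.25]), verts, vals)
```

The resulting surface is continuous across facet edges but only C0; the abstract's gradient-plane weighting is what upgrades it to C1.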

3.
Components of geostatistical estimation, developed as a method for ore deposit assessment, are discussed in detail. The assumption that spatial observations can be treated as a stochastic process is judged to be an inappropriate model for natural data. Problems of semivariogram formulation are reviewed, and this method is considered inadequate for estimating the function being sought. Characteristics of bivariate interpolation are summarized, highlighting the limitations of kriging as an interpolation method; these limitations are similar to those of inverse-distance-weighted interpolation. Attention is drawn to the local bias of kriging and to misplaced claims that it is an optimal interpolation method. The so-called estimation variance, interpreted as providing confidence limits for the estimation of mining blocks, is shown to be meaningless as an index of local variation. The claim that geostatistics constitutes a new science is examined in detail. Such novelties as exist in the method are shown to transgress accepted principles of scientific inference. Stochastic modeling in general is discussed, and the purposes of the approach are emphasized. For detailed quantitative assessment it can provide, at best, only prediction qualified by hypothesis. Such an approach should play no part in ore deposit assessment, where the need is for local detailed inventories; these can only be achieved properly through local deterministic methods, where prediction is purely deductive.
EDITOR-IN-CHIEF'S NOTE: The Editorial Board has long recognized the desirability of greater open discussion and comment on timely topics in the journal. Therefore, I solicited the following contribution from Professors Philip and Watson and a response to their paper from Professor Journel. In addition, Journel sent me comments by a student, Srivastava. None of these three papers has undergone the reviewing by other workers in the field that is normally required by Mathematical Geology. We thank these authors for their papers and hope that these discussions will be beneficial to all our readers.

4.
One objective of the aerial radiometric surveys flown as part of the U.S. Department of Energy's National Uranium Resource Evaluation (NURE) program was to ascertain the spatial distribution of near-surface radioelement abundances on a regional scale. Some method for identifying groups of observations with similar γ-ray spectral signatures and radioelement concentration values was therefore required. It is shown in this paper that cluster analysis can identify such groups with or without a priori knowledge of the geology of an area. An approach that combines principal components analysis with convergent k-means cluster analysis is used to classify 6991 observations (each observation comprising three radiometric variables) from the Precambrian rocks of the Copper Mountain, Wyoming area. This method is compared with a convergent k-means analysis that utilizes available geologic knowledge. Both methods identify four clusters. Three of the clusters represent background values for the Precambrian rocks of the area, and the fourth represents outliers (anomalously high ²¹⁴Bi). A segmentation of the data corresponding to geologic reality as interpreted by other methods has been achieved by perceptive quantitative analysis of aerial radiometric data. The techniques employed are composites of classical clustering methods designed to handle the special problems presented by large data sets.
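The two-stage pipeline (principal components, then convergent k-means on the scores) can be sketched as follows, with two well-separated synthetic 3-variable groups standing in for the radiometric observations; the data, seed, and cluster count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic 3-variable "spectral signature" groups (illustrative data).
X = np.vstack([rng.normal(0.0, 0.3, (100, 3)),
               rng.normal(3.0, 0.3, (100, 3))])

# Principal components via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                    # observations in PC coordinates

def kmeans(P, k, iters=50):
    # Plain convergent k-means: assign to nearest center, update centers.
    centers = P[rng.choice(len(P), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(P[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([P[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

labels = kmeans(scores, 2)
```

Running k-means in PC coordinates rather than on raw variables is the step that lets correlated radiometric channels contribute without double counting.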

5.
This paper analyses the stability of several methods for obtaining numerical solutions of second-order ordinary differential equations. The methods are popular in structural and geotechnical engineering applications and are direct, that is, they do not require the transformation of the second-order equation into a first-order system. They include Newmark's method in both implicit and explicit forms, Wilson's θ-method, Houbolt's method, and some variants on the latter. We shall examine the stability of the methods when applied to the second-order scalar test equation x″ + ax′ + cx = 0, where a and c are real.
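The kind of stability analysis described can be illustrated on the undamped test equation x″ + cx = 0 (the a = 0 special case): a direct scheme is stable when the spectral radius of its one-step amplification matrix does not exceed 1. The central-difference scheme (the explicit Newmark form with β = 0) is used below as a representative example, not as the paper's full analysis:

```python
import numpy as np

def spectral_radius(dt, c):
    # Central difference on x'' + c x = 0:
    #   x_{n+1} = (2 - dt^2 c) x_n - x_{n-1}
    # written as [x_{n+1}, x_n] = A [x_n, x_{n-1}].
    A = np.array([[2.0 - dt**2 * c, -1.0],
                  [1.0,              0.0]])
    return max(abs(np.linalg.eigvals(A)))

c = 4.0                            # natural frequency omega = 2
rho_ok = spectral_radius(0.5, c)   # dt*omega = 1, inside the limit
rho_bad = spectral_radius(1.5, c)  # dt*omega = 3, outside the limit
```

For this scheme the well-known stability limit is dt·ω ≤ 2, and the spectral radius crosses 1 exactly there.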

6.
We evaluate the performance and statistical accuracy of the fast Fourier transform method for unconditional and conditional simulation. The method is applied under difficult but realistic circumstances: a large field (1001 by 1001 points) with abundant conditioning criteria and a band-limited, anisotropic, fractal-based statistical characterization (the von Kármán model). The simple Fourier unconditional simulation is conducted by Fourier transform of the amplitude-spectrum model, sampled on a discrete grid, multiplied by a random phase spectrum. Although computationally efficient, this method failed to adequately match the intended statistical model at small scales because of sinc-function convolution. Attempts to alleviate this problem through the covariance method (computing the amplitude spectrum by taking the square root of the discrete Fourier transform of the covariance function) created artifacts and spurious high-wavenumber content. A modified Fourier method, consisting of pre-aliasing the wavenumber spectrum, satisfactorily remedies the sinc smoothing. Conditional simulation using Fourier-based methods requires several processing stages, including a smooth interpolation of the differential between the conditioning data and an unconditional simulation. Although kriging is the ideal method for this step, it can take prohibitively long when the number of conditioning points is large. Here we develop a fast, approximate kriging methodology consisting of coarse kriging followed by faster methods of interpolation. Though less accurate than full kriging, this fast kriging does not produce visually evident artifacts or adversely affect the a posteriori statistics of the Fourier conditional simulation.
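The basic Fourier unconditional-simulation step (amplitude-spectrum model times random phase, then inverse FFT) can be sketched on a small grid. The isotropic power-law amplitude below is a placeholder rather than the paper's von Kármán model, and taking the real part is a shortcut around enforcing Hermitian symmetry explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64
kx = np.fft.fftfreq(n)
ky = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None]**2 + ky[None, :]**2)   # radial wavenumber grid
k[0, 0] = np.inf                               # zero out the DC component

amplitude = k**-1.5                            # assumed amplitude-spectrum model
phase = np.exp(2j * np.pi * rng.random((n, n)))  # uniform random phase
field = np.real(np.fft.ifft2(amplitude * phase))
```

Because the model spectrum is only sampled on the discrete grid, this naive version exhibits exactly the small-scale mismatch (sinc-function convolution) that the pre-aliasing modification in the paper is designed to remove.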

7.
Summary  A new probabilistic approach is introduced for slope stability analysis that is general in the types of variable distributions and in the correlation or dependence between variables, and flexible enough to include any adverse-impact analysis for blasting vibrations and groundwater conditions. The material strength within a slope area, given in terms of the internal friction angle (φ) and cohesion (c), is randomized in a bivariate joint probability analysis. To be a completely general engineering method, the new probabilistic approach employs a random-variable transformation technique: the Hermite model of the Gaussian transformation function, which transforms the experimental histogram of the shear-strength parameters to the standard Gaussian distribution (μ = 0, σ² = 1). Because the binormal joint probability is analysed on the true probability region projected onto the plane of the Gaussian-transformed variables, it is an exact solution of slope stability based on the available sample data. No assumption is made about the shape of the experimental histogram or about independence between the two random variables, as is done in current probability methods of slope analysis.
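For contrast with the exact bivariate solution described, the same question — the probability of failure under random, correlated c and φ — can be sketched by plain Monte Carlo for an infinite slope (this is a generic textbook setup, not the paper's Hermite-transform method, and every number below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

beta = np.radians(35.0)        # slope angle
gamma, H = 20.0, 10.0          # unit weight (kN/m^3), slip-surface depth (m)

# Jointly normal (c, phi) with a negative cross-covariance (illustrative).
mean = np.array([25.0, np.radians(30.0)])        # [cohesion (kPa), friction]
cov = np.array([[25.0, -0.05],
                [-0.05, 0.003]])
c, phi = rng.multivariate_normal(mean, cov, 100_000).T

# Infinite-slope factor of safety (dry, no pore pressure).
fs = (c + gamma * H * np.cos(beta)**2 * np.tan(phi)) / (
     gamma * H * np.sin(beta) * np.cos(beta))
p_fail = np.mean(fs < 1.0)
```

A sampling approach like this needs no distributional transformation but converges only statistically, which is precisely the gap the exact projected-region analysis in the paper closes.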

8.
An artificial neural network (ANN) toolbox is created within GIS software for spatial interpolation, which helps GIS users to train and test ANNs, perform spatial analysis, and display results in a single process. Its performance is compared with that of the open-source Fast Artificial Neural Network library and with conventional interpolation methods by creating digital elevation models (DEMs), for which nearly exact solutions exist. Simulation results show that advanced backpropagation variants such as iRprop speed up learning, although they can get stuck in a local minimum depending on the initial weight set. In addition, the division of input–output examples into training and test data affects accuracy, particularly when the distribution of the examples is skewed and peaked and the number of data is small. ANNs nevertheless perform similarly to inverse distance weighting or kriging and outperform polynomial interpolation as a global interpolation method on high-dimensional data. In addition, a neural network residual kriging (NNRK) model, which combines the ANN toolbox and kriging within the GIS software, is implemented. The NNRK outperforms conventional methods and captures both global trends and local variations well. A key outcome of this work is that the ANN toolbox created within the de facto standard GIS software is applicable to various spatial analyses, including hazard-risk assessment over a large area, in particular when there are multiple potential causes, when the relationship between risk factors and hazard events is not clear, and when the number of available data is small, given its performance for DEM generation.
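The core idea — an ANN fit to scattered samples and then queried as a spatial interpolator — can be sketched with a one-hidden-layer tanh network trained by plain full-batch backpropagation on a smooth synthetic "elevation" surface (this is a toy stand-in for the GIS toolbox and its iRprop training; all sizes and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.random((200, 2))                                # sample (x, y) locations
z = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])   # synthetic DEM values

h, lr = 32, 0.05                                        # hidden units, step size
W1 = rng.normal(0.0, 1.0, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, (h, 1)); b2 = np.zeros(1)

for _ in range(5000):
    A = np.tanh(X @ W1 + b1)                  # hidden activations
    err = (A @ W2 + b2).ravel() - z           # prediction error
    gW2 = A.T @ err[:, None] / len(X)         # gradients of 0.5 * MSE
    gb2 = err.mean(keepdims=True)
    dA = (err[:, None] @ W2.T) * (1.0 - A**2)  # backprop through tanh
    gW1 = X.T @ dA / len(X); gb1 = dA.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
mse = np.mean((pred - z)**2)
```

Different random initial weights can land such a network in different local minima, which is the sensitivity the abstract reports for the faster backpropagation variants.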

9.
The Second-Order Stationary Universal Kriging Model Revisited
Universal kriging was originally developed for problems of spatial interpolation in which a drift seemed justified to model the experimental data. But its use has been questioned because of the bias of the estimated underlying variogram (the variogram of the residuals), and universal kriging furthermore came to be considered an old-fashioned method after the theory of intrinsic random functions was developed. In this paper the model is reexamined together with methods for handling problems in the inference of its parameters. The efficiency of the inference of covariance parameters is shown in terms of the bias, variance, and mean square error of the sampling distribution obtained by Monte Carlo simulation for three different estimators (maximum likelihood, bias-corrected maximum likelihood, and restricted maximum likelihood). It is shown that unbiased estimates of the covariance parameters may be obtained, but if the number of samples is small there can be no guarantee of good estimates (estimates close to the true values) because the sampling variance usually is large. This problem is not specific to the universal kriging model but arises in any model whose parameters are inferred from experimental data. The validity of the estimates may be evaluated statistically as a risk function, as is shown in this paper.
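The sampling-variance point — an unbiased covariance-parameter estimate can still be far from the true value when n is small — can be illustrated by Monte Carlo. The sketch below simplifies drastically: the mean and correlation range are treated as known, so the maximum-likelihood estimate of the sill has the closed form zᵀR⁻¹z/n (this is an illustration of the phenomenon, not the paper's estimator comparison):

```python
import numpy as np

rng = np.random.default_rng(4)

def sampling_sd(n, trials=500, sill=2.0, corr_range=3.0):
    # Gaussian data on a 1-D grid with exponential correlation.
    x = np.linspace(0.0, 10.0, n)
    R = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_range)
    L = np.linalg.cholesky(sill * R)
    Rinv = np.linalg.inv(R)
    est = []
    for _ in range(trials):
        z = L @ rng.standard_normal(n)
        est.append(z @ Rinv @ z / n)   # ML sill estimate (known mean/range)
    return np.std(est)                 # spread of the sampling distribution

sd_small, sd_large = sampling_sd(10), sampling_sd(80)
```

The estimator is unbiased at both sample sizes, yet its standard deviation is far larger at n = 10 than at n = 80 — the "no guarantee of good estimates" effect the abstract describes.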

10.
Within a single well, a least-squares model such as Ø = a + bT_sonic may be appropriate insofar as random errors in core measurement and in sonic-tool response are concerned. However, if several wells are in the reservoir, and if structural variation exists across the field, no single least-squares solution will adequately describe all wells with the same precision. Interpolation via a cubic spline is offered as an alternative to model this type of variation. However, to control both inter- and intra-well variability, a combination of least squares and interpolation is the model of choice.
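The multi-well problem can be sketched by contrasting a single pooled fit of Ø = a + bT_sonic with a common-slope, per-well-intercept fit, the latter standing in crudely for the least-squares-plus-interpolation combination the abstract recommends (the wells, coefficients, and noise level are all synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)

b_true = -0.002
intercepts = [0.55, 0.50, 0.45]        # hypothetical per-well intercepts
T = rng.uniform(60.0, 110.0, (3, 40))  # sonic transit times, 3 wells x 40 samples
phi = np.array([a + b_true * T[i] for i, a in enumerate(intercepts)])
phi += rng.normal(0.0, 0.005, phi.shape)   # core/tool measurement noise

# Pooled fit (one a, one b) ignores the between-well structural shift ...
A = np.column_stack([np.ones(T.size), T.ravel()])
(a_pool, b_pool), *_ = np.linalg.lstsq(A, phi.ravel(), rcond=None)

# ... while per-well intercepts with a shared slope capture it.
D = np.zeros((T.size, 4))
for i in range(3):
    D[i * 40:(i + 1) * 40, i] = 1.0    # well-indicator (intercept) columns
D[:, 3] = T.ravel()                    # shared slope column
coef, *_ = np.linalg.lstsq(D, phi.ravel(), rcond=None)
```

In a real field the per-well intercepts would themselves be interpolated spatially (e.g. by a spline) rather than treated as free dummies, which is the combined model the abstract argues for.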


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号