Similar documents
20 similar documents found.
1.
Lognormal kriging was developed early in geostatistics to take account of the skewed distributions commonly seen in experimental mining data. Intuitively, taking the distribution of the data into account should lead to a better local estimate than would be obtained by ignoring it. In practice, however, the results are sometimes disappointing. This paper tries to explain why by examining the behavior of the lognormal kriging estimator. The estimator is shown to respect certain unbiasedness properties over the whole working field, using the regression curve and its confidence interval, for both simple and ordinary kriging. When examined locally, however, the estimator behaves in a way that is neither expected nor intuitive. These results lead to the question: is the theoretically correct lognormal kriging estimator suited to the practical problem of local estimation?
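For readers unfamiliar with the estimator discussed above, the following is a minimal numerical sketch (not taken from the paper) of simple lognormal kriging: krige the logs, then apply the usual back-transform exp(Y* + sigma^2/2). The covariance model, data values and treatment of the log-mean are all invented for illustration.

```python
import numpy as np

# Hypothetical 1-D example: simple kriging of log-grades, then the usual
# lognormal back-transform Z* = exp(Y* + 0.5 * sigma_SK^2).
# Covariance model and data are invented for illustration.

def exp_cov(h, sill=0.4, rang=50.0):
    """Exponential covariance of the log-grades."""
    return sill * np.exp(-np.abs(h) / rang)

x_data = np.array([0.0, 20.0, 45.0, 80.0])        # sample locations
z_data = np.array([1.2, 3.5, 0.8, 2.1])           # grades (lognormal-ish)
y_data = np.log(z_data)                           # work on the logs
m_log = y_data.mean()                             # log-mean treated as known here

x0 = 30.0                                         # target location

C = exp_cov(x_data[:, None] - x_data[None, :])    # data-to-data covariances
c0 = exp_cov(x_data - x0)                         # data-to-target covariances
w = np.linalg.solve(C, c0)                        # simple kriging weights

y_sk = m_log + w @ (y_data - m_log)               # SK estimate of the log
var_sk = exp_cov(0.0) - w @ c0                    # SK variance of the log

z_lk = np.exp(y_sk + 0.5 * var_sk)                # lognormal kriging estimate
print(f"Y*_SK = {y_sk:.3f}, sigma^2_SK = {var_sk:.3f}, Z*_LK = {z_lk:.3f}")
```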

2.
The term lognormal kriging does not correspond to a single well-defined estimator. In fact, several types of lognormal estimators for in situ reserves are available, and this may cause confusion. These estimators are based on different assumptions, that is, on different models. This paper presents a review of these estimators.

3.
A logarithmic transformation may be used to improve the efficiency of estimates of the mean when observations follow the lognormal distribution. But if this transformation is applied to observations that follow another distribution, bias may be introduced. We consider some consequences of erroneously applying lognormal estimation theory and demonstrate that biased estimates may be obtained for certain classes of distributions. Illustrations of bias obtained in gold sampling are given.
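A small simulation sketch (not from the paper) of the bias being described: the lognormal-theory estimator of the mean, exp(m + s^2/2), applied to data that are actually exponentially distributed. Sample sizes and parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 2000

def lognormal_theory_mean(x):
    """Lognormal-theory estimator of the mean: exp(mean(log x) + var(log x)/2)."""
    y = np.log(x)
    return np.exp(y.mean() + 0.5 * y.var(ddof=1))

# Case 1: the data really are lognormal -> the estimator is close to the true mean.
true_ln = np.exp(0.0 + 0.5 * 1.0 ** 2)                                   # = 1.649
est_ln = [lognormal_theory_mean(rng.lognormal(0.0, 1.0, n)) for _ in range(reps)]

# Case 2: the data are exponential -> the same estimator is noticeably biased.
true_exp = 2.0
est_exp = [lognormal_theory_mean(rng.exponential(2.0, n)) for _ in range(reps)]

print(f"lognormal data:   true mean {true_ln:.2f}, average estimate {np.mean(est_ln):.2f}")
print(f"exponential data: true mean {true_exp:.2f}, average estimate {np.mean(est_exp):.2f}")
```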

4.
Multigaussian kriging is used in geostatistical applications to assess the recoverable reserves in ore deposits, or the probability for a contaminant to exceed a critical threshold. However, in general, the estimates have to be calculated by a numerical integration (Monte Carlo approach). In this paper, we propose analytical expressions to compute the multigaussian kriging estimator and its estimation variance, thanks to polynomial expansions. Three extensions are then considered, which are essential for mining and environmental applications: accounting for an unknown and locally varying mean (local stationarity), accounting for a block-support correction, and estimating spatial averages. All these extensions can be combined; they generalize several known techniques like ordinary lognormal kriging and uniform conditioning by a Gaussian value. An application of the concepts to a porphyry copper deposit shows that the proposed “ordinary multigaussian kriging” approach leads to more realistic estimates of the recoverable reserves than the conventional methods (disjunctive and simple multigaussian krigings), in particular in the nonmineralized undersampled areas.
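As a point of reference (not the polynomial expansions proposed in the paper), the sketch below shows the basic multigaussian kriging step the abstract builds on: once the data are transformed to Gaussian values and simple kriging gives an estimate and variance at a target location, the conditional distribution is taken as Gaussian, and exceedance probabilities or recoverable quantities follow from it, here by the Monte Carlo route that the paper's analytical expressions are meant to replace. All numbers and the back-transform are invented.

```python
import numpy as np
from scipy.stats import norm

# Simple-kriging results for the Gaussian-transformed value at a target block
# (invented placeholders), and a Gaussian-transformed cutoff.
y_star, sigma_sk = 0.35, 0.8      # SK estimate and std. dev. of the Gaussian value
y_c = 1.0                         # Gaussian-transformed cutoff

# Multigaussian assumption: Y | data ~ N(y_star, sigma_sk^2).
p_exceed = 1.0 - norm.cdf((y_c - y_star) / sigma_sk)
print(f"P[Y > y_c | data] = {p_exceed:.3f}")

# Recoverable metal above cutoff by Monte Carlo integration of the conditional
# distribution (the numerical route the paper seeks to avoid); the anamorphosis
# phi(y) = exp(0.5 * y) is a purely illustrative back-transform.
rng = np.random.default_rng(1)
y_sim = rng.normal(y_star, sigma_sk, 100_000)
metal_above = np.mean(np.exp(0.5 * y_sim) * (y_sim > y_c))
print(f"E[phi(Y) 1{{Y > y_c}} | data] = {metal_above:.3f}")
```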

5.
Intuitively it is obvious that if a linear unbiased estimator is only “slightly” suboptimal, the estimate cannot differ “much” from the corresponding best linear unbiased estimate for any “reasonable” observation vector. I present a Euclidean, nonstochastic bound which quantifies this heuristic notion. I then use it to constrain the difference between two vertical motion estimators from repeated precise levelling, where a candidate estimator is both known to be suboptimal at the standard model, and thought to be more resistant than the optimal estimator against deviations from it.

6.
Extreme value analysis provides a semiparametric method for analyzing the extreme long tails of the skew distributions often encountered in mining data. Estimating important tail characteristics, such as the extreme value index, allows discrimination between competing distribution models: the index measures the thickness of such long-tailed distributions even when only a limited sample is available. This paper stresses the practical implementation of extreme value theory, which is used to discriminate a lognormal from a mixed lognormal distribution in a case study of size distributions for alluvial diamonds.
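A minimal sketch (not from the paper) of one standard tail-index estimator used in this kind of analysis, the Hill estimator, applied to simulated lognormal and Pareto samples; the choice of estimator, sample sizes and parameters are illustrative assumptions.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value (tail) index from the k largest values."""
    x = np.sort(np.asarray(x))[::-1]          # descending order
    return np.mean(np.log(x[:k]) - np.log(x[k]))

rng = np.random.default_rng(2)

# Lognormal sample: all moments exist, so the estimated index drifts toward 0.
ln_sample = rng.lognormal(0.0, 1.0, 5000)
# Pareto sample with alpha = 1.5: true extreme value index is 1/alpha ~ 0.67.
pareto_sample = rng.pareto(1.5, 5000) + 1.0

for name, sample in [("lognormal", ln_sample), ("Pareto", pareto_sample)]:
    print(name, [round(hill_estimator(sample, k), 2) for k in (50, 100, 200)])
```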

7.
This study compares kriging and maximum entropy estimators for spatial estimation and monitoring network design. For second-order stationary random fields (a subset of Gaussian fields) the estimators and their associated interpolation error variances are identical. Simple lognormal kriging differs from the lognormal maximum entropy estimator, however, in both mathematical formulation and estimation error variances. Two numerical examples are described that compare the two estimators. Simple lognormal kriging yields systematically higher estimates and smoother interpolation surfaces compared to those produced by the lognormal maximum entropy estimator. The second empirical comparison applies kriging and entropy-based models to the problem of optimizing groundwater monitoring network design, using six alternative objective functions. The maximum entropy-based sampling design approach is shown to be the more computationally efficient of the two.

8.
Turbidite bed thickness distributions are often interpreted in terms of power laws, even when there are significant departures from a single straight line on a log–log exceedence probability plot. Alternatively, these distributions have been described by a lognormal mixture model. Statistical methods used to analyse and distinguish the two models (power law and lognormal mixture) are presented here. In addition, the shortcomings of some frequently applied techniques are discussed, using a new data set from the Tarcău Sandstone of the East Carpathians, Romania, and published data from the Marnoso‐Arenacea Formation of Italy. Log–log exceedence plots and least squares fitting by themselves are inappropriate tools for the analysis of bed thickness distributions; they must be accompanied by the assessment of other types of diagrams (cumulative probability, histogram of log‐transformed values, q–q plots) and the use of a measure of goodness‐of‐fit other than R2, such as the chi‐square or the Kolmogorov–Smirnov statistics. When interpreting data that do not follow a single straight line on a log–log exceedence plot, it is important to take into account that ‘segmented’ power laws are not simple mixtures of power law populations with arbitrary parameters. Although a simple model of flow confinement does result in segmented plots at the centre of a basin, the segmented shape of the exceedence curve breaks down as the sampling location moves away from the basin centre. The lognormal mixture model is a sedimentologically intuitive alternative to the power law distribution. The expectation–maximization algorithm can be used to estimate the parameters and thus to model lognormal bed thickness mixtures. Taking into account these observations, the bed thickness data from the Tarcău Sandstone are best described by a lognormal mixture model with two components. Compared with the Marnoso‐Arenacea Formation, in which bed thicknesses of thin beds have a larger variability than thicknesses of the thicker beds, the thinner‐bedded population of the Tarcău Sandstone has a lower variability than the thicker‐bedded population. Such differences might reflect contrasting depositional settings, such as the difference between channel levées and basin plains.
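A brief sketch (not the authors' code) of the expectation–maximization fit mentioned above: since a lognormal mixture in thickness is a Gaussian mixture in log-thickness, EM can be run directly on the logs. scikit-learn's GaussianMixture is used here on simulated thicknesses, not on the Tarcău or Marnoso-Arenacea data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Simulated bed thicknesses (cm) from a two-component lognormal mixture;
# all parameters are invented.
thin = rng.lognormal(mean=1.0, sigma=0.4, size=600)    # thin-bedded population
thick = rng.lognormal(mean=3.0, sigma=0.8, size=400)   # thick-bedded population
thickness = np.concatenate([thin, thick])

# A lognormal mixture in thickness is a Gaussian mixture in log-thickness,
# so EM is applied to the log-transformed values.
log_t = np.log(thickness).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(log_t)

for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight {w:.2f}  log-mean {mu:.2f}  log-std {np.sqrt(var):.2f}")
```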

9.
Ordinary kriging is well-known to be optimal when the data have a multivariate normal distribution (and if the variogram is known), whereas lognormal kriging presupposes the multivariate lognormality of the data. But in practice, real data never entirely satisfy these assumptions. In this article, the sensitivity of these two kriging estimators to departures from these assumptions, and in particular their resistance to outliers, is considered. An outlier effect index designed to assess the effect of a single outlier on both estimators is proposed, which can be extended to other types of estimators. Although lognormal kriging is sensitive to slight variations in the sill of the variogram of the logs (i.e., their variance), it is not influenced by the estimate of the mean of the logs. This paper was presented at the MGUS 87 Conference, Redwood City, California, 14 April 1987.

10.
It is generally agreed that particle size distributions of sediments tend ideally to approximate the form of the lognormal probability law, but there is no single widely accepted explanation of how sedimentary processes generate the form of this law. Conceptually, and in its simplest form, sediment genesis involves the transformation of a parent rock mass into a particulate end product by processes that include size reduction and selection during weathering, transportation, and deposition. The many variables that operate simultaneously during this transformation can be shown to produce a distribution of particle sizes that approaches asymptotically the lognormal form when the effect of the variables is multiplicative. This was first shown by Kolmogorov (1941). Currently available models combine breakage and selection in differing degrees, but are similar in treating the processes as having multiplicative effects on particle sizes. The present paper, based on careful specification of the initial state, the nth breakage rule and the nth selection rule, leads to two stochastic models for particle breakage, and for both models the probability distributions of particle sizes are obtained. No attempt is made to apply these models to real world sedimentary processes, although this topic is touched upon in the closing remarks.
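A short simulation sketch (not from the paper) of the multiplicative mechanism referred to above: repeated independent size reductions act multiplicatively, so the logs of the final sizes approach normality and the sizes themselves approach lognormality. The reduction-factor distribution and stage count are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Kolmogorov-style multiplicative breakage: each particle's final size is the
# initial size multiplied by many independent random reduction factors in (0, 1).
n_particles, n_stages = 5000, 40
initial_size = 1000.0                                   # e.g. mm

factors = rng.uniform(0.5, 0.95, size=(n_particles, n_stages))
sizes = initial_size * factors.prod(axis=1)

# The sizes are strongly right-skewed, while their logs are nearly symmetric,
# consistent with an approximately lognormal size distribution.
print(f"skewness of sizes:     {stats.skew(sizes):.2f}")
print(f"skewness of log-sizes: {stats.skew(np.log(sizes)):.2f}")
```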

11.
A common characteristic of gold deposits is a highly skewed frequency distribution. Lognormal and three-parameter lognormal distributions have worked well for Witwatersrand-type deposits. Epithermal gold deposits show evidence of multiple pulses of mineralization, which makes fitting simple distribution models difficult. A new approach is proposed which consists of the following steps: (1) order the data in descending order; (2) compute the cumulative coefficient of variation for each datum and look for the quantile at which the cumulative C.V. suddenly accelerates (typically, this quantile lies above 0.85); (3) fit a lognormal model to the data above that quantile and establish the mean above the quantile, Z*_H, by fitting a single or double truncated lognormal model; (4) use variograms to establish the spatial continuity of the below-quantile data (Z_L) and of the indicator variable (1 if below the quantile, 0 if above); (5) estimate the grade of blocks by (1*)(Z*_L) + (1 - 1*)(Z*_H), where 1* is the kriged estimate of the indicator and Z*_L is the kriged estimate of the below-quantile portion of the distribution. The method is illustrated for caldera, Carlin-type, and hot springs-type deposits. For the latter two types, slight variants of the above steps are developed.
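The sketch below illustrates only step (5) of the procedure: blending the kriged below-quantile grade with the mean of the fitted above-quantile tail, weighted by the kriged indicator. The kriged inputs are invented placeholders, and steps (1) to (4) are not reproduced.

```python
import numpy as np

ind_kriged = np.array([0.95, 0.80, 0.60])   # 1*  : kriged P[grade below quantile]
z_low_kriged = np.array([1.1, 1.4, 1.8])    # Z*_L: kriged below-quantile grade (g/t)
z_high_mean = 12.5                          # Z*_H: mean of the fitted tail (g/t)

# Block grade estimate: (1*)(Z*_L) + (1 - 1*)(Z*_H)
block_grade = ind_kriged * z_low_kriged + (1.0 - ind_kriged) * z_high_mean
print(block_grade)    # [1.67, 3.62, 6.08] for these placeholder inputs
```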

12.
13.
Magnitude-frequency concepts in earth surface processes have found widespread application following the publication of the well-known paper by Wolman and Miller. Of particular interest in such studies is the determination of those event magnitudes which make the most important long-term contributions to the total work of a given process. However, there has been little discussion to date concerning an appropriate estimator of the quantity of interest: the long-term work achieved by events within a specified magnitude range, expressed as a proportion of the long-term work achieved by events of all magnitudes. Its estimation is straightforward for the time-independent case, where short-duration events occur randomly in time and event magnitudes are independent random variables from a common distribution. For this model the proportion exists as a true parameter, which can be estimated by the sample proportion of work contributed by events within the specified magnitude range. This estimator is biased, but it is almost median-unbiased for large samples, and an approximate expression for its variance can be obtained from standard results. A similar approach can be applied to estimating the long-term work contribution of the largest events in consecutive R-year periods. An example is presented using riverbank erosion data. Within the constraints of the time-independent model, the estimation procedure is quite general and can be applied with or without prior specification of the probability distribution of event magnitudes. In some situations, estimation can also be achieved indirectly by using a sample of the causal events which generate the individual work events. This indirect estimation is particularly simple if work magnitude is a power transformation of causal magnitude and the distribution of causal event magnitudes can be approximated by a lognormal or a Weibull distribution. The relative work achieved by events within ever-smaller magnitude ranges leads in the limit to the work intensity function, P(y). A plot of this function shows the relative importance of y-magnitude events with respect to their long-term work contributions. P(y) is estimated by first fitting a probability distribution to a sample of event magnitude data. The function P(y) is unimodal for the following probability distributions of event magnitudes: lognormal, Weibull, unimodal beta, gamma, and inverse Gaussian. A lognormal distribution of event magnitudes produces the maximum work intensity at the lognormal median. In a strict mathematical sense, the long-term work contribution of very large and very small events is insignificant. However, little can be deduced concerning the pattern of work intensity between these two extremes. In particular, there appears to be no reason to suppose that the maximum work intensity will coincide with work magnitudes classified as intermediate.
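A short simulation sketch (not from the paper) of the direct estimator described: the sample proportion of total work contributed by events in a specified magnitude range, here with simulated lognormal event magnitudes and a power-law work transformation. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated event magnitudes (e.g. flood discharges) and their geomorphic
# "work", taken here as a power transformation of magnitude.
magnitudes = rng.lognormal(mean=2.0, sigma=0.7, size=5000)
work = magnitudes ** 1.5

# Direct estimator: sample proportion of total work done by events whose
# magnitude falls in a specified range.
lo, hi = np.quantile(magnitudes, [0.90, 0.99])
in_range = (magnitudes >= lo) & (magnitudes < hi)
p_hat = work[in_range].sum() / work.sum()
print(f"events between the 90th and 99th magnitude percentiles "
      f"contribute about {p_hat:.1%} of the long-term work")
```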

14.
The lognormal distribution is widely used to represent the distribution of deposit or reserve size of oil and gas fields. The purpose of this paper is to investigate the potential usefulness of the loghyperbolic distribution as an alternative to the lognormal distribution. The hypothesis is tested using a set of data from the Denver basin. The results indicate that the loghyperbolic distribution fits the empirical data better than the lognormal distribution.

15.
The aim of this paper is to present a fast method, based on bootstrapping, for simulating recoverable reserves for input to financial Monte Carlo simulations. In mining, the three parameters defining recoverable reserves are the cutoff grade, z, the ore tonnage above cutoff, T, and the metal quantity above cutoff, Q. After introducing the concept of 3-dimensional QTz curves, the statistical technique called bootstrapping is reviewed and applied to a set of South African gold grades. Because selective mining is carried out on blocks, not points, these curves have to be calculated for blocks. The QTz curves obtained by bootstrapping are compared to those obtained by conditionally simulating the same deposit. The procedure has been extended to incorporate geologists' ideas of the likely size of the ore volume. Lastly, the recoverable reserves obtained by bootstrapping are compared with those obtained by traditional risk analysis (base case ± 10% or 20%).
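A minimal sketch (not the authors' implementation) of the bootstrapping idea: resample a set of block grades with replacement and recompute the tonnage T and metal Q above a cutoff z for each resample. The grades are simulated, the block tonnage is invented, and the change of support is ignored here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated block grades (g/t); the real procedure would use block values
# obtained after a change-of-support correction, not reproduced here.
grades = rng.lognormal(mean=0.3, sigma=0.9, size=400)
block_tonnage = 10_000.0          # tonnes per block (invented)
cutoff = 2.0                      # cutoff grade z (g/t)

def qt_above_cutoff(g, z, t_block):
    """Tonnage T and metal quantity Q above cutoff z for one set of block grades."""
    above = g[g >= z]
    T = above.size * t_block              # tonnes of ore above cutoff
    Q = above.sum() * t_block             # grams of metal (grade x tonnage summed)
    return T, Q

# Bootstrap: resample the blocks with replacement and recompute (T, Q).
boot = np.array([qt_above_cutoff(rng.choice(grades, grades.size, replace=True),
                                 cutoff, block_tonnage) for _ in range(1000)])
T_lo, T_hi = np.percentile(boot[:, 0], [5, 95])
Q_lo, Q_hi = np.percentile(boot[:, 1], [5, 95])
print(f"T(z={cutoff}): 90% interval [{T_lo:.0f}, {T_hi:.0f}] t")
print(f"Q(z={cutoff}): 90% interval [{Q_lo:.0f}, {Q_hi:.0f}] g")
```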

16.
The Kolar Gold Fields are among the best known gold deposits in India. An example of ore valuation is given using 49 ore blocks of the Oriental lode of the West Reefs, explored and developed in the Nundydroog mines. In this reef system there are large ore reserves of sulfide-bearing quartz reefs, and the gold distribution is erratic both along strike and downdip. Ore valuation at present is based on the arithmetic mean of samples taken at peripheral positions of the blocks; samples taken from internal portions of the blocks give a totally different picture of the value. To correct this discrepancy, normal regression and lognormal regression of internal block and total block values on peripheral block values have been used to evaluate the deposits. The valuation efficiency criterion used is the logarithmic variance of the distribution of ratios of unregressed and regressed block values to the corresponding arithmetic mean of internal stope values observed inside the blocks. The studies show that this logarithmic variance is smallest when the logarithmic regression is used, indicating maximum efficiency. Further, the undervaluation and overvaluation of low- and high-grade blocks is less in the logarithmic case. With the help of the logarithmic regression equation, an effective pay limit of 177.8 in.-dwt has been found for selective mining on peripheral block values, corresponding to the official pay limit of 240 in.-dwt. NGRI contribution number 71-281.
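A sketch of the logarithmic regression idea, with invented data standing in for the 49 blocks: regress log internal block values on log peripheral block values and back-transform with a lognormal bias correction. The +s^2/2 correction used here is one common choice and not necessarily the exact form used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented peripheral and internal block values (in.-dwt), loosely correlated
# on a log scale, standing in for the Oriental lode blocks.
log_periph = rng.normal(5.5, 0.6, 49)
log_internal = 0.8 * log_periph + rng.normal(1.0, 0.3, 49)
peripheral = np.exp(log_periph)
internal = np.exp(log_internal)

# Logarithmic (lognormal) regression: fit log(internal) on log(peripheral).
b, a = np.polyfit(np.log(peripheral), np.log(internal), 1)
resid = np.log(internal) - (a + b * np.log(peripheral))
s2 = resid.var(ddof=2)

def regressed_value(periph_value):
    """Regressed internal value for a block, with a standard lognormal correction."""
    return np.exp(a + b * np.log(periph_value) + 0.5 * s2)

print(f"log-regression: intercept {a:.2f}, slope {b:.2f}; "
      f"a block with 240 in.-dwt periphery -> {regressed_value(240.0):.0f} in.-dwt regressed")
```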

17.
Characteristics of the gold grade distribution of the Maquan gold deposit, Gansu
计政科, 姜启明. 《甘肃地质》 2009, (2): 30-34, 71
A study of ore-grade distribution data from the main body of the Maquan gold deposit (the No. 3 and No. 4 ore belts) and from its peripheral belts shows that the gold grade of the No. 4 belt is higher than that of the No. 3 belt, that it contains less internal waste and more readily beneficiated ore, and that its ore quality is therefore better; the grade of the peripheral belts is lower than that of the main body and consists mainly of low-grade ore. Hypothesis tests (at the >95% probability level) on the skewness and kurtosis coefficients of the grade-distribution histograms (the mathematical model) indicate that the grade-distribution probability curve of the main body essentially follows a lognormal law. A study of the arithmetic mean grade and of the coefficient of variation indicates that the deposit is of the uniform-grade type; the recommended cutoff grade is 0.5×10^-6 for oxidized ore and 1.0×10^-6 for primary ore, with a lower limit of 29.0×10^-6 for extremely high grades.
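A small sketch (not the authors' calculation) of the kind of check described: testing the skewness and kurtosis of log-transformed grades at the 95% level with scipy. The grades are simulated, not the Maquan assay data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated gold grades (x10^-6), standing in for the assay data.
grades = rng.lognormal(mean=0.2, sigma=0.8, size=300)

# If the grades are lognormal, their logs should show no significant skewness
# or excess kurtosis; test both at the 95% confidence level.
logs = np.log(grades)
_, p_skew = stats.skewtest(logs)
_, p_kurt = stats.kurtosistest(logs)

lognormal_ok = (p_skew > 0.05) and (p_kurt > 0.05)
print(f"skewness p = {p_skew:.2f}, kurtosis p = {p_kurt:.2f}, "
      f"consistent with a lognormal law: {lognormal_ok}")
```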

18.
A theoretical study of the general case of the estimation of regionalized variables with a lognormal distribution is presented. The results of this study are compared to those obtained assuming conservation of lognormality. The numerical significance of the different solutions is illustrated by several simple examples.

19.
BLU Estimators and Compositional Data
One of the principal objections to the logratio approach for the statistical analysis of compositional data has been the absence of unbiasedness and minimum variance properties for some estimators: they seem not to be BLU estimators. Using a geometric approach, we introduce the concepts of metric variance and of a compositional unbiased estimator, and we show that the closed geometric mean is a c-BLU estimator (compositional best linear unbiased estimator with respect to the geometry of the simplex) of the center of the distribution of a random composition. Thus it satisfies properties analogous to those of the arithmetic mean as a BLU estimator of the expected value in real space. The geometric approach gives real meaning to the concepts of measure of central tendency and measure of dispersion, and opens up a new way of understanding the statistical analysis of compositional data.
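A minimal sketch of the closed geometric mean referred to above: the geometric mean of each part across samples, re-closed so the result again lies on the simplex. The 3-part compositions are invented.

```python
import numpy as np

def closed_geometric_mean(comps):
    """Closed geometric mean of compositions (rows summing to 1): part-wise
    geometric means, re-closed so the result again sums to 1."""
    g = np.exp(np.log(comps).mean(axis=0))   # geometric mean of each part
    return g / g.sum()                       # closure back to the simplex

# Invented 3-part compositions (e.g. sand/silt/clay fractions).
comps = np.array([[0.60, 0.30, 0.10],
                  [0.55, 0.35, 0.10],
                  [0.70, 0.20, 0.10],
                  [0.50, 0.25, 0.25]])

center = closed_geometric_mean(comps)
print(center, center.sum())   # estimate of the center; the parts sum to 1
```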

20.
Normal and lognormal estimation
A comprehensive theoretical study of the problem of estimating regionalized variables with a normal or lognormal distribution is presented. Unbiased linear estimators are derived under the assumptions that the population mean is known and that it is unknown, and their error variances are calculated. The minimum-variance kriging estimators are studied in more detail and are compared with the conditional expectations. The emphasis is on the study of lognormally distributed variates. The derived mathematical formulas are applicable to the optimal contouring of sample values with the appropriate distribution, as well as to the optimal estimation of blocks of ore in mineral deposits.
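For reference, the commonly quoted unbiased back-transforms for simple and ordinary lognormal kriging are given below. These are standard results consistent with the setting of this paper, not formulas copied from it; the paper's own derivations and notation may differ.

```latex
% Y = \ln Z; Y^{*} is the kriging estimate of Y, \sigma^{2} the corresponding
% kriging variance, and \mu the Lagrange multiplier of the ordinary kriging
% system applied to the logs.
\[
  Z^{*}_{\mathrm{SLK}} = \exp\!\Big( Y^{*}_{\mathrm{SK}} + \tfrac{1}{2}\,\sigma^{2}_{\mathrm{SK}} \Big),
  \qquad
  Z^{*}_{\mathrm{OLK}} = \exp\!\Big( Y^{*}_{\mathrm{OK}} + \tfrac{1}{2}\,\sigma^{2}_{\mathrm{OK}} - \mu \Big).
\]
```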

