Similar documents
 20 similar documents found (search time: 625 ms)
1.
A common characteristic of gold deposits is highly skewed frequency distributions. Lognormal and three-parameter lognormal distributions have worked well for Witwatersrand-type deposits. Epithermal gold deposits show evidence of multiple pulses of mineralization, which makes fitting simple distribution models difficult. A new approach is proposed, consisting of the following steps: (1) order the data in descending order; (2) find the cumulative coefficient of variation (C.V.) for each datum, and look for the quantile at which the cumulative C.V. suddenly accelerates (typically the quantile will be above 0.85); (3) fit a lognormal model to the data above that quantile and establish the mean above the quantile, Z*_H, by fitting a single or double truncated lognormal model; (4) use variograms to establish the spatial continuity of the below-quantile data (Z_L) and of the indicator variable (1 if below the quantile, 0 if above); (5) estimate the grade of blocks by (1*)(Z*_L) + (1 − 1*)(Z*_H), where 1* is the kriged estimate of the indicator and Z*_L is the kriged estimate of the below-quantile portion of the distribution. The method is illustrated for caldera, Carlin-type, and hot-springs-type deposits. For the latter two types, slight variants of the above steps are developed.
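Steps (2) and (5) above can be sketched in a few lines of Python. The function names, the toy skewed dataset, and the example kriged values are invented for illustration; the truncated-lognormal fitting and the kriging itself are not shown.

```python
import numpy as np

def cumulative_cv(grades):
    """Step (2): cumulative coefficient of variation over the data
    sorted in descending order (step (1))."""
    z = np.sort(np.asarray(grades, dtype=float))[::-1]
    n = np.arange(1, len(z) + 1)
    mean = np.cumsum(z) / n
    var = np.cumsum(z ** 2) / n - mean ** 2   # population variance of top-n
    return z, np.sqrt(np.maximum(var, 0.0)) / mean

def combined_estimate(ind_star, zl_star, zh_star):
    """Step (5): blend the kriged below-quantile estimate Z*_L with the
    high-tail mean Z*_H using the kriged indicator 1*."""
    return ind_star * zl_star + (1.0 - ind_star) * zh_star

# toy skewed data: a lognormal body plus a small high-grade tail
rng = np.random.default_rng(0)
grades = np.concatenate([rng.lognormal(0.0, 0.5, 950),
                         rng.lognormal(2.0, 0.8, 50)])
z, cv = cumulative_cv(grades)          # look for where cv accelerates
est = combined_estimate(0.9, 1.2, 8.0)
```

In practice the threshold quantile is picked where `cv` accelerates, and `zh_star` comes from the truncated-lognormal fit above that quantile.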

2.
Spatial declustering weights
Because of autocorrelation and spatial clustering, all data within a given dataset do not have the same statistical weight for the estimation of global statistics such as the mean, variance, or quantiles of the population distribution. A measure of the redundancy (or nonredundancy) of any given regionalized random variable Z(u_α) within any given set (of size N) of random variables is proposed. It is defined as the ratio of the determinant of the N × N correlation matrix to the determinant of the (N − 1) × (N − 1) correlation matrix excluding the random variable Z(u_α). This ratio measures the increase in redundancy when adding the random variable Z(u_α) to the (N − 1) remainder. It can be used as a declustering weight for any outcome (datum) z(u_α). When the redundancy matrix is a kriging covariance matrix, the proposed ratio is the cross-validation simple kriging variance. The covariance of the uniform scores of the clustered data is proposed as a redundancy measure robust with respect to data clustering.
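The proposed determinant ratio can be sketched directly. The correlation matrix below is invented: indices 0 and 1 mimic two nearly collocated (clustered) data, index 2 an isolated datum.

```python
import numpy as np

def redundancy_ratio(corr, alpha):
    """det(N x N correlation matrix) divided by det of the
    (N-1) x (N-1) matrix with row/column `alpha` removed.  Small
    values flag a redundant (clustered) datum, so the ratio can
    serve as a declustering weight."""
    corr = np.asarray(corr, dtype=float)
    keep = [i for i in range(corr.shape[0]) if i != alpha]
    return np.linalg.det(corr) / np.linalg.det(corr[np.ix_(keep, keep)])

# two nearly collocated data (correlation 0.9) plus one isolated datum
C = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
w = [redundancy_ratio(C, a) for a in range(3)]
```

As the abstract states, when the matrix is a kriging covariance matrix the ratio equals the cross-validation simple kriging variance; the isolated datum (index 2) receives a larger weight than the two clustered ones.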

3.
One of the tasks routinely carried out by geostatisticians is the evaluation of global mining reserves corresponding to a given cutoff grade and size of selective mining units. Along with these recovery figures, the geostatistician generally provides an assessment of the global estimation variance, which represents the precision of the overall average grade estimate when no cutoff is applied. Such a global estimation variance is of limited interest for evaluating mining projects; what is required is the reliability of the estimate of recovered reserves or, in other words, the conditional estimation variance. Unfortunately, classical linear geostatistical methods fail to provide an easy way to estimate this variance. Through the use of simulated deposits (representing various types of regionalization), the present paper reviews and discusses the effects of changes in cutoff grade and selective mining unit size on the conditional estimation variance. It is shown that, when the cutoff grade is applied to a point-support (sample-size) distribution, the conditional estimation variance is readily accessible by classical formulas, once the conditional semivariogram is known. However, the evaluation of the conditional estimation variance is less straightforward in the general case, when a cutoff is applied to the average grade distribution of selective mining units. Empirical approximation formulas for the conditional estimation variance are tentatively proposed, and their performance on the simulated deposits is shown. The limitations of these approximations are discussed, and possible ways of formalizing the problem are suggested.

4.
The Nu Expression for Probabilistic Data Integration
The general problem of data integration is expressed as that of combining probability distributions conditioned to each individual datum or data event into a posterior probability for the unknown conditioned jointly to all data. Any such combination of information requires taking into account data interaction for the specific event being assessed. The nu expression provides an exact analytical representation of such a combination. This representation allows a clear and useful separation of the two components of any data integration algorithm: individual data information content and data interaction, the latter being different from data dependence. Any estimation workflow that fails to address data interaction is not only suboptimal, but may result in severe bias. The nu expression reduces the possibly very complex joint data interaction to a single multiplicative correction parameter ν0, which is difficult to evaluate but whose exact analytical expression is given; the availability of such an expression provides avenues for its determination or approximation. The case ν0 = 1 is more comprehensive than data conditional independence; it delivers a preliminary robust approximation in the presence of actual data interaction. An experiment where the exact results are known allows the results of the ν0 = 1 approximation to be checked against the traditional estimators based on an assumption of data independence.
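The ν0 = 1 case can be sketched in terms of distance ratios x = (1 − P)/P. The function name and the numbers are illustrative; the exact expression for ν0 (and how to evaluate it) is the subject of the paper.

```python
def nu_combine(prior, conditionals, nu0=1.0):
    """Combine single-datum probabilities P(A | D_i) into an estimate of
    P(A | all data) via the nu expression, written with distances
    x = (1 - P)/P:  x/x0 = nu0 * prod(x_i/x0).  nu0 = 1 is the
    robust interaction-free approximation discussed in the abstract."""
    x0 = (1.0 - prior) / prior
    x = nu0 * x0
    for p in conditionals:
        x *= ((1.0 - p) / p) / x0
    return 1.0 / (1.0 + x)

# prior 0.2; each of two data events alone updates P(A) to 0.5
p_joint = nu_combine(0.2, [0.5, 0.5])
```

With a single datum the expression collapses to that datum's conditional probability, and the result is always a valid probability in [0, 1].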


6.
Approximate local confidence intervals are constructed from uncertainty models in the form of the conditional distribution of the random variable Z given values of the variables [Zi, i = 1, ..., n]. When the support of the variable Z is any support other than that of the data, the conditional distributions require a change-of-support correction. This paper investigates the effect of change of support on the approximate local confidence intervals constructed by cumulative indicator kriging, class indicator kriging, and probability kriging under a variety of conditions. The conditions are generated by three simulated deposits with grade distributions of successively higher skewness; a point support and two different block supports are considered. The paper also compares the confidence intervals obtained from these methods using the most common measures of confidence-interval effectiveness.

7.
Correcting the Smoothing Effect of Estimators: A Spectral Postprocessor
The postprocessing algorithm introduced by Yao for imposing the spectral amplitudes of a target covariance model is shown to be efficient in correcting the smoothing effect of estimation maps, whether obtained by kriging or any other interpolation technique. As opposed to stochastic simulation, Yao's algorithm yields a unique map starting from an original, typically smooth, estimation map. Most importantly, it is shown that reproduction of a covariance/semivariogram model (global accuracy) is necessarily obtained at the cost of reduced local accuracy and increased conditional bias. When working on one location at a time, kriging remains the most accurate (in the least-squared-error sense) estimator. However, kriging estimates should only be listed, not mapped, since they do not reflect the correct (target) spatial autocorrelation. This mismatch in spatial autocorrelation can be corrected via stochastic simulation, or can be imposed a posteriori via Yao's algorithm.
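The amplitude-imposition idea can be sketched with an FFT. This is a simplified, single-pass reading of the spectral postprocessor (the published algorithm has additional details); the toy fields are invented.

```python
import numpy as np

def impose_spectrum(smooth_map, target_amplitudes):
    """Keep the Fourier phases of the (smooth) estimation map but impose
    the target spectral amplitudes, in the spirit of Yao's postprocessor.
    In real use the target amplitudes come from the covariance model."""
    F = np.fft.fft2(smooth_map)
    return np.real(np.fft.ifft2(target_amplitudes * np.exp(1j * np.angle(F))))

rng = np.random.default_rng(1)
field = rng.standard_normal((32, 32))          # reference field
target = np.abs(np.fft.fft2(field))            # its spectral amplitudes
smooth = 0.5 * field                           # an artificially damped "estimate"
restored = impose_spectrum(smooth, target)     # recovers the reference field
```

Because the damped map here shares the phases of the reference field, imposing the reference amplitudes recovers it exactly; with a kriged map the phases encode the (smooth) spatial pattern and only the amplitude spectrum is corrected.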

8.
The Kappa model of probability and higher-order rock sequences
In any depositional environment, the sequence of sediments follows specific high- and low-frequency patterns of rock occurrences or events. The occurrence of a rock at a spatial location is conditional on a prior rock event at a distant location. Subsequently, a third rock occurs between the two locations. This third event is conditional on both prior events and is driven by a third-order conditional probability P(C | A ∩ B). Such a probability has to be characterized beyond the classic conditional independence model, and this research has found that exact computation requires a third-order co-cumulant term. The co-cumulants provide the higher-order redundancy among multiple indicator variables. A Bayesian analysis has been performed with "known" numerical co-cumulants, yielding a novel model of conditional probability called the "Kappa model." This model was applied to three-point variables, and the concept has been extended to multiple events P(G | A ∩ B ∩ C ∩ D ... ∩ N), allowing the reproduction of complex transitions of rocks in sequence stratigraphy. The Kappa model and co-cumulants are illustrated with simple numerical examples for clastic rock sequences. In addition, the co-cumulant has been used to develop an extension of the variogram called the indicator cumulogram. In this way, multiple prior events are no longer ignored when evaluating the probability of a posterior event with higher-order co-cumulant considerations.
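A small numerical check of why P(C | A ∩ B) cannot in general be built from lower-order probabilities alone. The joint distribution below is invented; the gap between the exact value and the conditional-independence-style combination is the kind of discrepancy the third-order co-cumulant term accounts for.

```python
import numpy as np

# full joint distribution p[a, b, c] of three binary rock events
# (illustrative numbers only; they sum to 1)
p = np.array([[[0.10, 0.05],
               [0.05, 0.10]],
              [[0.05, 0.15],
               [0.10, 0.40]]])

exact = p[1, 1, 1] / p[1, 1, :].sum()          # exact P(C | A, B)

# a conditional-independence-style combination using only
# lower-order probabilities: P(C|A) * P(C|B) / P(C)
p_c = p[:, :, 1].sum()
p_c_given_a = p[1, :, 1].sum() / p[1, :, :].sum()
p_c_given_b = p[:, 1, 1].sum() / p[:, 1, :].sum()
approx = p_c_given_a * p_c_given_b / p_c
```

Here the exact conditional probability and the approximation disagree noticeably, illustrating that two-point information alone is insufficient.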

9.
Direct Sequential Simulation and Cosimulation
Sequential simulation of a continuous variable usually requires its transformation into a binary or a Gaussian variable, giving rise to the classical algorithms of sequential indicator simulation or sequential Gaussian simulation. Journel (1994) showed that the sequential simulation of a continuous variable, without any prior transformation, succeeds in reproducing the covariance model, provided that the simulated values are drawn from local distributions centered at the simple kriging estimates with a variance corresponding to the simple kriging estimation variance. Unfortunately, it does not reproduce the histogram of the original variable, which is one of the basic requirements of any simulation method. This has been the most serious limitation to the practical application of the direct simulation approach. In this paper, a new approach to direct sequential simulation is proposed. The idea is to use the local simple kriging (SK) estimates of the mean and variance not to define the local cdf, but to sample from the global cdf: simulated values of the original variable are drawn from intervals of the global cdf that are calculated with the local estimates of the mean and variance. One of the main advantages of the direct sequential simulation method is that it allows joint simulation of Nv variables without any transformation. A set of examples of direct simulation and cosimulation is presented.
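The core idea of drawing from the global cdf with local SK statistics can be sketched as follows. The uniform-in-cdf interval used here is a deliberate simplification of the published sampler, and all names and data are illustrative.

```python
import numpy as np

def dss_draw(global_vals, sk_mean, sk_var, rng):
    """Draw from the *global* histogram: the local SK mean locates an
    interval of the global cdf, the SK variance sets its width, and a
    value is drawn inside that interval.  A simplification of the
    published sampler, for illustration only."""
    z = np.sort(np.asarray(global_vals, dtype=float))
    p = (np.arange(1, len(z) + 1) - 0.5) / len(z)   # global cdf F(z)
    center = np.interp(sk_mean, z, p)               # F(SK mean)
    half = 0.5 * np.sqrt(sk_var) / (z.std() + 1e-12)
    u = rng.uniform(max(0.0, center - half), min(1.0, center + half))
    return np.interp(u, p, z)                       # back to data units

rng = np.random.default_rng(2)
pool = rng.lognormal(0.0, 1.0, 5000)       # skewed global histogram
draws = np.array([dss_draw(pool, 1.0, 0.05, rng) for _ in range(2000)])
# every draw comes from the global histogram, so skewness and
# positivity of the original variable survive without any transform
```

Because each value is read back through the global cdf, the simulated values honor the original histogram, which is the point of the direct approach.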

10.
A numerical algorithm for the calculation of Lyapounov coefficients (Lk) of any order is developed. The algorithm does not rely on symbolic (analytical) computation, can be implemented in ordinary programming languages, and allows the numerical value of Lk to be found for any k, making possible a complete qualitative analysis of dynamical models in the plane.

11.
Models for estimating the pressure and temperature of igneous rocks from coexisting clinopyroxene and liquid compositions are calibrated from existing data and from new data obtained from experiments performed on several mafic bulk compositions (8–30 kbar and 1100–1475 °C). The resulting geothermobarometers involve thermodynamic expressions that relate temperature and pressure to equilibrium constants. Specifically, the jadeite (Jd; NaAlSi2O6)–diopside/hedenbergite (DiHd; Ca(Mg,Fe)Si2O6) exchange equilibrium between clinopyroxene and liquid is temperature sensitive. When compositional corrections are made to the calibrated equilibrium constant, the resulting geothermometer is (i) 10^4/T = 6.73 − 0.26 ln[(Jd^px · Ca^liq · Fm^liq)/(DiHd^px · Na^liq · Al^liq)] − 0.86 ln[Mg^liq/(Mg^liq + Fe^liq)] + 0.52 ln[Ca^liq], an expression which estimates temperature to ±27 K. Compared to (i), the equilibrium constant for jadeite formation is more sensitive to pressure, resulting in a thermobarometer (ii) P = −54.3 + 299 (T/10^4) + 36.4 (T/10^4) ln[Jd^px/([Si^liq]^2 · Na^liq · Al^liq)] + 367 [Na^liq · Al^liq], which estimates pressure to ±1.4 kbar. Pressure is in kbar; T is in Kelvin. Quantities such as Na^liq represent the cation fraction of the given oxide (NaO0.5) in the liquid, and Fm = MgO + FeO. The mole fractions of the Jd and diopside + hedenbergite (DiHd) components are calculated from a normative scheme which assigns the lesser of Na or octahedral Al to form Jd; any excess Al^VI forms the calcium Tschermak's component (CaTs; CaAlAlSiO6); Ca remaining after forming CaTs and CaTiAl2O6 is taken as DiHd. Experimental data not included in the regressions were used to test models (i) and (ii). The error on predictions of T using model (i) is ±40 K; a pressure-dependent form of (i) reduces this error to ±30 K. Using model (ii) to predict pressures, the error on the mean values of 10 isobaric data sets (0–25 kbar, 118 data) is ±0.3 kbar.
Calculating thermodynamic properties from the regression coefficients in (ii) gives a V_f of jadeite of 23.4 ± 1.3 cm³/mol, close to the value anticipated from one-bar molar volume data (23.5 cm³/mol). Applied to clinopyroxene phenocrysts from Mauna Kea, Hawaii, lavas, the expressions estimate equilibration depths as great as 40 km. This result indicates that transport was sufficiently rapid that at least some phenocrysts had insufficient time to re-equilibrate at lower pressures. Received: 16 May 1994 / Accepted: 15 June 1995
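Geothermometer (i) can be implemented directly. This sketch assumes the bracketed terms are the ratios (Jd·Ca·Fm)/(DiHd·Na·Al) and Mg/(Mg + Fe), consistent with the equilibrium described in the abstract; the input composition is invented for illustration.

```python
import math

def cpx_liq_temperature(jd_px, dihd_px, ca_liq, fm_liq, na_liq, al_liq,
                        mg_liq, fe_liq):
    """Geothermometer (i): 10^4/T from the Jd-DiHd exchange equilibrium
    constant and liquid composition.  Inputs are component/cation
    fractions; the return value is T in Kelvin."""
    k = (jd_px * ca_liq * fm_liq) / (dihd_px * na_liq * al_liq)
    inv_t = (6.73
             - 0.26 * math.log(k)
             - 0.86 * math.log(mg_liq / (mg_liq + fe_liq))
             + 0.52 * math.log(ca_liq))
    return 1.0e4 / inv_t

# illustrative basalt-like numbers (not from the paper)
t = cpx_liq_temperature(jd_px=0.04, dihd_px=0.55, ca_liq=0.14, fm_liq=0.25,
                        na_liq=0.04, al_liq=0.12, mg_liq=0.15, fe_liq=0.10)
```

For a plausible basaltic composition the expression returns a magmatic temperature in the range of the calibration experiments.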

12.
Consider the assessment of any unknown event A through its conditional probability P(A | B, C) given two data events B, C of different sources. Each event could involve many locations jointly, but the two data events are assumed to be such that the probabilities P(A | B) and P(A | C) can be evaluated. The challenge is to recombine these two partially conditioned probabilities into a model for P(A | B, C) without having to assume independence of the two data events B and C. The probability P(A | B, C) is then used for estimation or simulation of the event A. In the presence of actual data dependence, the combination algorithm provided by the traditional conditional independence hypothesis is shown to be nonrobust, leading to various inconsistencies. An alternative based on a permanence of updating ratios is proposed, which guarantees all limit conditions even in the presence of complex data interdependence. The resulting recombination formula is extended to any number n of data events, and a paradigm is offered to introduce formal data interdependence.
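The permanence-of-ratios combination can be sketched as follows; the function name and the numbers are illustrative.

```python
def combine_permanence(p_a, p_ab, p_ac):
    """Permanence of updating ratios: with distances x = (1 - P)/P for
    the prior and the two single-datum posteriors, impose x/b = c/a so
    that x = b*c/a, then P(A | B, C) = 1/(1 + x).  The result is always
    in [0, 1] and honors the limit cases (either datum certain implies
    the combined probability is certain)."""
    a = (1.0 - p_a) / p_a      # prior distance
    b = (1.0 - p_ab) / p_ab    # distance after datum B
    c = (1.0 - p_ac) / p_ac    # distance after datum C
    return 1.0 / (1.0 + b * c / a)

# each datum alone updates P(A) from 0.4 to 0.7; jointly they update further
p_bc = combine_permanence(p_a=0.4, p_ab=0.7, p_ac=0.7)
```

Note that no independence of B and C is assumed; the formula only asserts that the incremental information of C is the same before and after B is accounted for.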

13.
We present an analysis of spectrophotometric observations of the latest cycle of activity of the symbiotic binary Z And from 2006 to 2010. We estimate the temperature of the hot component of Z And to be ≈150 000–170 000 K at minimum brightness, decreasing to ≈90 000 K at maximum brightness. Our estimate of the electron density in the gaseous nebula is N_e = 10^10–10^12 cm^−3 in the region of formation of the neutral helium lines and N_e = 10^6–10^7 cm^−3 in the region of formation of the [OIII] and [NeIII] nebular lines. With increasing brightness of the system, the gas density derived from the helium lines tends to increase while the gas density derived from the [OIII] and [NeIII] lines simultaneously decreases. Our estimates show that the ratios of the theoretical and observed fluxes in the [OIII] and [NeIII] lines agree best when the O/Ne ratio is similar to its value for planetary nebulae. The model spectral energy distribution showed that, in addition to a cool component and a gaseous nebula, a relatively cool pseudophotosphere (5250–11 500 K) is present in the system. The simultaneous presence of a relatively cool pseudophotosphere and high-ionization spectral lines is probably related to a disk-like structure of the pseudophotosphere. The pseudophotosphere formed very rapidly, over several weeks, during a period of increasing brightness of Z And. We infer that in 2009, as in 2006, the activity of the system was accompanied by a collimated bipolar ejection of matter (jets). In contrast to the situation in 2006, the jets were detected even before the system reached its maximum brightness. Moreover, components with velocities close to 1200 km/s disappeared at the maximum, while those with velocities close to 1800 km/s appeared.

14.
Describing how soil properties vary spatially is of particular importance in stochastic analyses of geotechnical problems, because spatial variability has a significant influence on local material and global geotechnical response. In particular, the scale of fluctuation θ is a key parameter in the correlation model used to represent the spatial variability of a site through a random field. It is, therefore, of fundamental importance to estimate θ accurately in order to best model the actual soil heterogeneity. In this paper, two methodologies are investigated to assess their abilities to estimate the vertical and horizontal scales of fluctuation of a particular site using in situ cone penetration test (CPT) data. The first method belongs to the family of more traditional approaches, which are based on best fitting a theoretical correlation model to available CPT data. The second method involves a new strategy which combines information from conditional random fields with the traditional approach. Both methods are applied to a case study involving the estimation of θ at three two-dimensional sections across a site, and the results obtained show general agreement between the two methods, suggesting a similar level of accuracy between the new and traditional approaches. However, in order to further assess the relative accuracy of the estimates provided by each method, a second numerical analysis is proposed. The results confirm the general consistency observed in the case study calculations, particularly in the vertical direction, where a large amount of data is available. Interestingly, for the horizontal direction, where data are typically scarce, some additional improvement in terms of relative error is obtained with the new approach.

15.
Conditional Simulation of Random Fields by Successive Residuals
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU decomposition and is suitable for simulations of any size. It can also facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The simulation approach is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with a LU decomposition of a matrix of updated conditional covariances of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
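For context, the baseline full-size LU (Cholesky) simulation that the paper's column partitioning avoids can be sketched as follows; the covariance model, grid, and names are illustrative.

```python
import numpy as np

def lu_simulate(coords, cov_func, rng):
    """Baseline LU simulation: build the full covariance matrix, take its
    Cholesky factor L, and return L @ w with w ~ N(0, I).  The paper's
    method avoids this full-size factorization by partitioning L into
    column groups and simulating group by group as kriged residual
    estimates plus random terms."""
    n = len(coords)
    C = np.array([[cov_func(abs(a - b)) for b in coords] for a in coords])
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for stability
    return L @ rng.standard_normal(n)

cov = lambda h: np.exp(-3.0 * h / 10.0)   # exponential model, range ~10
coords = np.arange(50, dtype=float)
rng = np.random.default_rng(4)
fields = np.stack([lu_simulate(coords, cov, rng) for _ in range(400)])
# ensemble statistics approach the model: unit variance, lag-1
# covariance exp(-0.3), about 0.74
```

The memory and factorization cost of this baseline grows with the full grid size, which is exactly the limitation the successive-residuals partitioning removes.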

16.
This paper is devoted to multi-scale modeling of the elastic–plastic deformation of a class of geomaterials with a polycrystalline microstructure. We extend and improve the simplified polycrystalline model presented in [Zeng T. et al., 2014. Mech. Mater. 69 (1):132–145]. A rigorous and fully consistent self-consistent (SC) scheme is proposed to describe the interaction among plastic mineral grains. We also discuss in detail the numerical issues related to the implementation of the proposed micromechanical model. The efficiency of the proposed model and the related numerical procedure is evaluated in several representative cases. We compare the numerical results obtained from the fully SC model and from two simplified ones; it is found that the SC model produces a softer stress–strain response than the simplified models. Comparisons between the estimated overall behavior of a granite under different loading conditions and experimental data are also conducted. Copyright © 2015 John Wiley & Sons, Ltd.

17.
The Heidelberg Basin (HDB) hosts one of the thickest Quaternary sediment successions in central Europe. To establish a reliable Middle and Upper Pleistocene chronology for a recently drilled core from the depocentre of the Heidelberg Basin, we applied multiple luminescence dating approaches, including quartz optically stimulated luminescence (OSL), two feldspar post-IR IRSL protocols using second IR stimulation temperatures of 225 °C (pIRIR225) and 290 °C (pIRIR290), and two fading correction models. Relatively high anomalous fading was observed for both the pIRIR225 and pIRIR290 signals, with mean fading rates of 2.13±0.27 and 2.08±0.49%/decade, respectively. Poor dose recovery behaviour of the pIRIR290 signal suggests that the pIRIR290 ages are not reliable. The comparison of two fading correction methods for the K-feldspar ages indicates that the correction method proposed by Kars et al. (2008) Radiation Measurements 43, 786, yields reliable ages, whereas the dose-rate correction method proposed by Lamothe et al. (2003) Radiation Measurements 37, 493, does not. A chronology for the HDB is established using the quartz ages and reliable fading corrected feldspar pIRIR225 ages. Our chronology shows that the sediments in the upper Mannheim Formation were deposited during Marine Isotope Stage (MIS) 4 (c. 70 ka), constrained by two quartz ages in the upper 20 m of the core. Four fading corrected pIRIR225 ages of c. 400 ka show that the upper Ludwigshafen Formation was deposited during MIS 12–11, correlated with the Elsterian-Holsteinian stage. Two ages of 491±76 and 487±79 ka indicate that the Middle and Upper Ludwigshafen Formation were probably deposited during the Cromerian Complex. This luminescence chronology is consistent with palynological results. It also indicates that the IR-RF ages reported earlier are probably underestimated due to anomalous fading.

18.
On statistical models for fission track counts
The statistical basis for the usual analysis of fission track counts obtained by the external detector method is discussed and illustrated with examples. A consequence is that, if any observed correlation between counts of spontaneous and induced tracks is due to heterogeneity in the density of uranium, then the model proposed by McGee and Johnson (1979) for assessing the experimental error is inappropriate, and results based on it could be misleading. The same remark applies to the method proposed by Johnson, McGee, and Naeser (1979).

19.
The high-K alkaline volcano Muriah is situated in central Java and has erupted two lava series, a younger highly potassic series (HK) and an older potassic series (K). The HK series has higher K2O contents for a given MgO content; greater silica undersaturation; and higher concentrations of LILE (Rb, Sr, Ba, and K), LREE (La and Ce), and HFSE (Nb, Zr, Ti, and P) than the K series. The HK series lavas have incompatible trace element patterns similar in many respects to ocean island basalts. The K series has slightly higher 87Sr/86Sr (0.70453–0.70498) and δ18O (+6.2 to +8.4‰) and lower 143Nd/144Nd (0.512530–0.512658) than the HK series (for which 87Sr/86Sr = 0.70426–0.70451, δ18O = +6.52 to +7.0‰, and 143Nd/144Nd = 0.512623–0.512679), and higher LILE/HFSE and LREE/HFSE ratios. Δ7/4 and Δ8/4 are high and do not show any systematic change from the K to the HK series. The proposed model for the Muriah lavas involves three source components: (1) the asthenosphere of the mantle wedge of the Sunda arc, which has Indian Ocean MORB characteristics; (2) a metasomatic layer situated at the base of the lithosphere, which has characteristics similar to enriched mantle (i.e., EMII); (3) subducted pelagic sediments from the Indian Ocean. Trace element and isotope data indicate that the characteristics of the K series are produced by mixing of two end-member magmas: an undersaturated magma derived wholly from within-plate sources and a calc-alkaline magma derived from the subduction-modified asthenospheric mantle. The calc-alkaline magma is believed to be contaminated by the arc crust before mixing. Low-pressure fractionation took place in the K series after mixing. Initial lithospheric extension in the Bawean trough (in which Muriah is located) may be responsible for decompressive melting of the metasomatic layer and thus the production of the HK series lavas. The magmas erupted from Muriah show a transition from intraplate to subduction-zone processes in their genesis.

20.
Suppose that $\overline Z (x_1), \ldots, \overline Z (x_n)$ are observations of a vector-valued random function $\overline Z (x)$. In the isotropic situation, the sample variogram $\bar \gamma ^ * (h)$ for a given lag h is $$\bar \gamma ^ * (h) = \frac{1}{{2N(h)}}\mathop \sum \limits_{s(h)} (\overline Z (x_i ) - \overline Z (x_j )) (\overline Z (x_i ) - \overline Z (x_j ))^T $$ where s(h) is the set of point pairs $(x_i, x_j)$ separated by distance h and N(h) is the number of pairs in s(h). For a selection of lags h1, h2, ..., hk such that N(hi) > 0, we obtain a k-tuple of (semi)positive definite matrices $\bar \gamma ^ * (h_{ 1} ), \ldots, \bar \gamma ^ * (h_{ k} )$. We want to determine an orthonormal matrix B which simultaneously diagonalizes the $\bar \gamma ^ * (h_{ 1} ), \ldots, \bar \gamma ^ * (h_{ k} )$, or nearly diagonalizes them in the sense that the sum of squares of the off-diagonal elements is small compared to the sum of squares of the diagonal elements. If such a B exists, we linearly transform $\overline Z (x)$ by $\overline Y (x) = B\overline Z (x)$. Then the resulting vector function $\overline Y (x)$ has less spatial correlation among its components than $\overline Z (x)$ does. The components of $\overline Y (x)$ with little contribution to the variogram structure may be dropped, and small cross-variograms fitted by straight lines. Variogram models obtained by this scheme preserve the negative definiteness property of variograms (in the matrix-valued-function sense). A simplified analysis and computation in cokriging can be carried out. The principles of this scheme are presented in this paper.
