Similar Documents
1.
Lognormal kriging was developed early in geostatistics to account for the skewed distributions often seen in experimental mining data. Intuitively, taking the distribution of the data into account should lead to a better local estimate than one obtained by ignoring it. In practice, however, the results are sometimes disappointing. This paper explains why by examining the behavior of the lognormal kriging estimator. Using the regression curve and its confidence interval, the estimator is shown to respect certain unbiasedness properties over the whole working field, for both simple and ordinary kriging. Examined locally, however, the estimator behaves in ways that are neither expected nor intuitive. These results lead to the question: is the theoretically correct lognormal kriging estimator suited to the practical problem of local estimation?
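As a minimal illustration of where this local sensitivity comes from, the sketch below back-transforms a simple-kriging estimate of Y = ln(Z) into a lognormal-kriging estimate of Z. This is a generic sketch of the standard back-transform, not the paper's own code.

```python
import numpy as np

def lognormal_sk_estimate(y_sk, sk_var):
    """Back-transform a simple-kriging estimate of Y = ln(Z).

    y_sk   : simple-kriging estimate of the log variable at the target
    sk_var : simple-kriging variance at the target

    Returns the lognormal-kriging estimate of Z. The exp(0.5 * sk_var)
    factor is the source of the strong local sensitivity discussed in
    the abstract: errors in the variance model are exponentiated.
    """
    return np.exp(y_sk + 0.5 * sk_var)

# e.g., a log-estimate of 2.0 with kriging variance 0.5
print(lognormal_sk_estimate(2.0, 0.5))  # ~9.49, versus exp(2) ~ 7.39
```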

2.
Approximate local confidence intervals are constructed from uncertainty models in the form of the conditional distribution of the random variable Z given values of the variables [Z_i, i=1,...,n]. When the support of Z differs from that of the data, the conditional distributions require a change-of-support correction. This paper investigates the effect of change of support on the approximate local confidence intervals constructed by cumulative indicator kriging, class indicator kriging, and probability kriging under a variety of conditions. The conditions are generated by three simulated deposits with grade distributions of successively higher skewness; a point support and two different block supports are considered. The paper also compares the confidence intervals obtained from these methods using the most commonly used measures of confidence-interval effectiveness.

3.
Based on the maximum entropy concept, the optimal number of class intervals (K) for a closed array of samples has been determined. From the analysis, two values of K (8 and 19) are selected as the most appropriate. For K=8, the probability of occurrence on each unequal-size interval is p_i = 0.125, whereas K=19 results in p_i = 0.0526. An objective method for determining the interval limits, modified from an earlier method by Full et al. (1984), is also included. Contribution 114, Instituto Argentino de Oceanografia.
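A minimal sketch of the equal-probability idea behind these K values: with K classes at p_i = 1/K each, the interval limits are the empirical quantiles at i/K. The published limit-determination method (modified from Full et al., 1984) differs in detail; this only illustrates the p_i = 0.125 case.

```python
import numpy as np

def equal_probability_limits(samples, k=8):
    """Class limits giving k unequal-width intervals of equal probability.

    With k = 8 each interval carries p_i = 1/8 = 0.125 of the sample,
    matching the maximum-entropy choice discussed above. The limits are
    the empirical quantiles at i/k, i = 1..k-1.
    """
    probs = np.arange(1, k) / k
    return np.quantile(samples, probs)

# Example: limits for 8 equal-probability classes of a skewed sample
rng = np.random.default_rng(0)
print(equal_probability_limits(rng.lognormal(size=500), k=8))
```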

4.
Wet-dry encounter analysis of multiple hydrological regions based on trivariate copula functions
谢华  罗强  黄介生 《水科学进展》2012,23(2):186-193
The analysis of wet-dry encounter probabilities among different hydrological regions is a multivariate probability problem: the more regions involved, the higher the dimension of the variables and the more complex the problem. To find a simple, general way of solving multivariate (n ≥ 3) hydrological probability problems, this study takes the wet-dry encounter analysis of different hydrological regions as an example, introduces a trivariate copula function to build a multivariate joint probability model, and applies it to the joint and conditional probabilities of runoff in the Yangtze, Huaihe, and Yellow River basins. The results show that when n ≥ 3 a multivariate probability model can be constructed easily with a copula function; for a given set of hydrological series, several candidate copulas are available and goodness-of-fit tests can be used to choose among them; the copula-based multivariate model can compute joint distributions under various conditions and can analyze the encounter and conditional probabilities of hydrological variables of different magnitudes; and, compared with reducing the multivariate problem to a univariate one, the trivariate Frank copula shows better goodness of fit, unbiasedness, and efficiency, and is simpler to compute.
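A hedged sketch of the exchangeable trivariate Frank copula, built from the standard Archimedean form C(u1,u2,u3) = -(1/θ) ln(1 + Π(e^(-θ·u_i) - 1)/(e^(-θ) - 1)²). The parameter value θ = 4 below is an assumption for illustration, not a value fitted to the Yangtze/Huaihe/Yellow River data.

```python
import numpy as np

def frank_copula_3d(u, theta):
    """Exchangeable trivariate Frank copula C(u1, u2, u3).

    Archimedean construction with Frank generator
    phi(t) = -ln((exp(-theta*t) - 1) / (exp(-theta) - 1));
    theta > 0 gives positive dependence.
    """
    u = np.asarray(u, dtype=float)
    num = np.expm1(-theta * u)   # exp(-theta*u_i) - 1, for each margin
    den = np.expm1(-theta)       # exp(-theta) - 1
    prod = np.prod(num) / den**2
    return -np.log1p(prod) / theta

# Joint non-exceedance probability that all three basins are below
# their median runoff (u_i = 0.5), for an assumed theta = 4
print(frank_copula_3d([0.5, 0.5, 0.5], theta=4.0))  # ~0.28 (0.125 if independent)
```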

5.
A spatial quantile regression model is proposed to estimate the quantile curve for a given probability of non-exceedance, as a function of location and covariates. Canonical vine copulas are considered to represent the spatial dependence structure. The marginal at each location is an asymmetric Laplace distribution whose parameters are functions of the covariates. The full conditional quantile distribution is given using the Joe-Clayton copula. Simulations show the flexibility of the proposed model to estimate the quantiles with special dependence structures. A case study illustrates its applicability to estimate quantiles for spatial temperature anomalies.

6.
Approximate local confidence intervals can be produced by nonlinear methods designed to estimate indicator variables. The most precise of these methods, the conditional expectation, can only be used in practice in the multi-Gaussian context. Theoretically less efficient methods have to be used in more general cases. The methods considered here are indicator kriging, probability kriging (indicator-rank co-kriging), and disjunctive kriging (indicator co-kriging). The properties of these estimators are studied in this paper in the multi-Gaussian context, for this allows a more detailed study than under more general models. Conditional distribution approximation is first studied. Exact results are given for mean squared errors and conditional bias. Then conditional quantile estimators are compared empirically. Finally, confidence intervals are compared from the points of view of bias and precision.

7.
A slope reliability analysis method for conditions of limited data is proposed based on Bootstrap sampling. Conventional slope reliability analysis is briefly reviewed. The Bootstrap method is used to simulate the statistical uncertainty in the probability distribution functions of the shear strength parameters. Taking an infinite slope as an example, the influence of uncertainty in both the distribution parameters and the distribution type of the shear strength on slope reliability is studied. The results show that the sample mean, sample standard deviation, and AIC values estimated from limited data exhibit substantial variability, which in turn produces marked statistical uncertainty in the probability distribution functions of the shear strength parameters. When this statistical uncertainty is considered, the slope reliability index should be reported as a confidence interval at a given confidence level rather than as the single fixed value of conventional reliability analysis. The width of the confidence interval of the reliability index increases with the factor of safety, and reliability indices computed considering uncertainty in both the distribution parameters and the distribution type show greater variability and wider confidence intervals. The Bootstrap method provides an effective way to simulate the statistical uncertainty of the shear strength distributions and to assess slope reliability under limited data.
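A minimal sketch of the bootstrap idea for an infinite slope, assuming a dry, cohesionless slope with FS = tan(φ)/tan(slope angle) and a first-order reliability index β = (μ_FS - 1)/σ_FS. The friction-angle data and slope angle below are hypothetical, and the paper's treatment of distribution-type uncertainty (via AIC) is not reproduced.

```python
import numpy as np

def bootstrap_beta_ci(phi_deg, slope_deg=30.0, n_boot=2000, level=0.95, seed=0):
    """Bootstrap confidence interval for the reliability index of a dry,
    cohesionless infinite slope, FS = tan(phi) / tan(slope).

    Each resample of the friction-angle data gives a new reliability
    index beta = (mu_FS - 1) / sigma_FS; the spread of beta across
    resamples reflects the statistical uncertainty from limited data.
    """
    rng = np.random.default_rng(seed)
    tan_s = np.tan(np.radians(slope_deg))
    betas = []
    for _ in range(n_boot):
        sample = rng.choice(phi_deg, size=len(phi_deg), replace=True)
        fs = np.tan(np.radians(sample)) / tan_s
        betas.append((fs.mean() - 1.0) / fs.std(ddof=1))
    lo, hi = np.quantile(betas, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

phi_data = np.array([33.0, 35.5, 31.2, 36.8, 34.1, 32.5, 37.0, 30.9])  # hypothetical
print(bootstrap_beta_ci(phi_data))
```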

8.
A common characteristic of gold deposits is highly skewed frequency distributions. Lognormal and three-parameter lognormal distributions have worked well for Witwatersrand-type deposits. Epithermal gold deposits show evidence of multiple pulses of mineralization, which make fitting simple distribution models difficult. A new approach is proposed which consists of the following steps: (1) order the data in descending order; (2) find the cumulative coefficient of variation for each datum and look for the quantile where there is a sudden acceleration of the cumulative C.V. (typically the quantile will be above 0.85); (3) fit a lognormal model to the data above that quantile and establish the mean above the quantile, Z_H^*, by fitting a single or double truncated lognormal model; (4) use variograms to establish the spatial continuity of the below-quantile data (Z_L) and of the indicator variable (1 if below the quantile, 0 if above); (5) estimate the grade of blocks by 1^* Z_L^* + (1 - 1^*) Z_H^*, where 1^* is the kriged estimate of the indicator and Z_L^* is the kriged estimate of the below-quantile portion of the distribution. The method is illustrated for caldera-, Carlin-, and hot-springs-type deposits. For the latter two types, slight variants of the above steps are developed.
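Steps (1) and (2) are easy to make concrete; the sketch below computes the cumulative coefficient of variation over descending-ordered data so the acceleration quantile can be picked out. It is a sketch of those two steps only, not of the full estimator.

```python
import numpy as np

def cumulative_cv(values):
    """Cumulative coefficient of variation over data sorted in
    descending order (steps 1-2 of the procedure above).

    cv[k] is the CV of the k+1 largest values; a sudden acceleration
    of this curve marks the quantile separating the high-grade tail
    from the rest of the distribution (typically above 0.85).
    """
    z = np.sort(np.asarray(values, dtype=float))[::-1]
    n = np.arange(1, len(z) + 1)
    mean = np.cumsum(z) / n
    var = np.maximum(np.cumsum(z**2) / n - mean**2, 0.0)
    return np.sqrt(var) / mean

rng = np.random.default_rng(0)
print(cumulative_cv(rng.lognormal(0.0, 1.5, size=1000))[-5:])
```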

9.
A temporal analysis of doline collapse on the Western Highland Rim (Tennessee) indicated one approach to estimating the probability of collapse in areas where the geologic and hydrologic criteria associated with collapse have been identified. The distribution of collapse was examined for trend, autocorrelation, and goodness of fit. The distribution of doline collapse during one 12-month period conformed to a Poisson distribution with a mean occurrence rate of λ=0.346 collapses per week and with the interoccurrence times being exponentially distributed (0.01 level). Although the proposed model is spatially and temporally restricted, it may provide an initial framework for estimating the probability of doline collapse in other karst terrains of similar geologic and hydrologic settings.
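Under this model the collapse hazard over any window follows directly from the Poisson assumption. A minimal sketch with the reported rate (the 4-week window is an arbitrary example):

```python
import math

def prob_at_least_one_collapse(lam_per_week=0.346, weeks=4):
    """P(at least one collapse in `weeks`) under a homogeneous Poisson
    model; interoccurrence times are exponential with mean 1/lam weeks."""
    return 1.0 - math.exp(-lam_per_week * weeks)

print(prob_at_least_one_collapse())  # ~0.75 for a 4-week window
```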

10.
Consideration of order relations is key to indicator kriging, indicator cokriging, and probability kriging, especially for the latter two methods, wherein the additional modeling of cross-covariance contributes to an increased chance of violating order relations. Herein, Gaussian-type curves are fit to estimates of the cumulative distribution function (cdf) at data quantiles to: (1) yield smoothed estimates of the cdf; and (2) correct for violations of order relations (i.e., situations wherein the estimate of the cdf for a larger quantile is less than that for a smaller quantile). Smoothed estimates of the cdf are sought as a means to improve the approximation to the integral equation for the expected value of the regionalized variable in probability kriging. Experiments show that this smoothing yields slightly improved estimation of the expected value (in probability kriging). Another experiment, one that uses the same variogram for all indicator functions, does not yield improved estimates. Presented at the 25th Anniversary Meeting of the IAMG, Prague, Czech Republic, October 10-15, 1993.
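A hedged sketch of the correction idea: fit a Gaussian cdf to the raw cdf estimates at the data quantiles, which is monotone by construction. The paper's "Gaussian-type curves" may be more general than the plain normal cdf used here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def smooth_cdf_fit(quantiles, cdf_estimates):
    """Fit a Gaussian curve to kriged cdf estimates at the data
    quantiles. The fitted cdf is nondecreasing by construction, so
    order-relation violations in the raw estimates are corrected."""
    popt, _ = curve_fit(lambda q, mu, sigma: norm.cdf(q, mu, sigma),
                        quantiles, cdf_estimates,
                        p0=[np.mean(quantiles), np.std(quantiles)])
    return lambda q: norm.cdf(q, *popt)

# Raw indicator-kriging estimates with one order-relation violation
q = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
f = np.array([0.08, 0.30, 0.55, 0.50, 0.93])  # 0.55 > 0.50: violation
cdf = smooth_cdf_fit(q, f)
print(cdf(q))  # smoothed, nondecreasing estimates
```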

11.
We describe and give hydrological applications of a probabilistic model, based on extreme value theory, which can be used to study the values of a hydrologic process that exceed a certain threshold level Q_B. This model is useful in estimating extreme events X_T of return period T based on N years of available hydrologic record. We also present easy-to-use tables which give confidence intervals for X_T. The hydrologic applications reported are a flood frequency analysis, a methodology for estimating flood damage, an estimation of precipitation probabilities, and a prediction of extreme tide levels.
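The same peaks-over-threshold logic can be sketched with the now-standard generalized Pareto formulation: fit the excesses over Q_B and invert at the probability implied by the return period T. This is a modern stand-in, not the paper's exact model.

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_level(excesses, events_per_year, T):
    """Return level from a peaks-over-threshold model.

    excesses        : exceedances over the threshold Q_B
    events_per_year : average number of threshold exceedances per year
    T               : return period in years

    Fits a generalized Pareto distribution to the excesses and inverts
    it; add Q_B back to the result to obtain X_T.
    """
    shape, _, scale = genpareto.fit(excesses, floc=0.0)
    p = 1.0 - 1.0 / (events_per_year * T)
    return genpareto.ppf(p, shape, loc=0.0, scale=scale)

rng = np.random.default_rng(3)
excesses = rng.exponential(scale=50.0, size=120)  # synthetic excesses
print(pot_return_level(excesses, events_per_year=4.0, T=100))
```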

12.
In this article, we model the volcanism near the proposed nuclear waste repository at Yucca Mountain, Nevada, U.S.A., by estimating the instantaneous recurrence rate using a nonhomogeneous Poisson process with Weibull intensity and by using a homogeneous Poisson process to predict future eruptions. We then quantify the probability that any single eruption is disruptive in terms of a (prior) probability distribution, since not every eruption would result in disruption of the repository. Bayesian analysis is performed to evaluate the volcanic risk. Based on the Quaternary data, a 90% confidence interval for the instantaneous recurrence rate near the Yucca Mountain site is (1.85×10⁻⁶/yr, 1.26×10⁻⁵/yr). Also, using these confidence bounds, the corresponding 90% confidence interval for the risk (probability of at least one disruptive eruption) over an isolation time of 10⁴ years is (1.0×10⁻³, 6.7×10⁻³), if it is assumed that the intensity remains constant during the projected time frame.
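The risk figures follow from the homogeneous Poisson assumption as 1 - exp(-λ·p·T). The sketch below reproduces the order of magnitude of the quoted interval; the disruption probability p used is a hypothetical value chosen for illustration, not a number from the paper.

```python
import math

def disruption_risk(rate_per_yr, p_disruptive, isolation_yr=1.0e4):
    """P(at least one disruptive eruption) under a homogeneous Poisson
    model: 1 - exp(-rate * p * T)."""
    return 1.0 - math.exp(-rate_per_yr * p_disruptive * isolation_yr)

p = 0.055  # hypothetical per-eruption disruption probability
for rate in (1.85e-6, 1.26e-5):   # the quoted 90% rate bounds
    print(disruption_risk(rate, p))  # ~1.0e-3 and ~6.9e-3
```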

13.
The research presented in this paper focuses on the application of a newly developed, physically based watershed modeling approach called the representative elementary watershed (REW) approach. The study stresses the effects of uncertainty in input parameters on the watershed responses (i.e., simulated discharges). The approach was applied to the Zwalm catchment, an agriculture-dominated watershed with a drainage area of 114 km² located in East Flanders, Belgium. Uncertainty analysis of the model parameters is limited to the saturated hydraulic conductivity because of its strong influence on the watershed's hydrologic behavior and the availability of data. The assessment of output uncertainty is performed using the Monte Carlo method. The ensemble statistical watershed responses and their uncertainties are calculated and compared with measurements. The results show that the measured discharges fall within the 95% confidence interval of the modeled discharge; this provides uncertainty bounds on the discharges that account for the uncertainty in saturated hydraulic conductivity. The methodology can be extended to other uncertain parameters, provided the probability density function of the parameter is defined.
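A minimal sketch of the Monte Carlo procedure, assuming a lognormal distribution for the saturated hydraulic conductivity; `run_model` is a hypothetical wrapper standing in for the REW model of the paper.

```python
import numpy as np

def mc_discharge_bounds(run_model, ks_mean_log, ks_sd_log, n=1000, seed=0):
    """Monte Carlo uncertainty bounds on simulated discharge driven by
    uncertainty in saturated hydraulic conductivity K_s.

    run_model : hypothetical callable returning a discharge series for
                a given K_s (stands in for the REW model)
    K_s is sampled lognormally, a common assumption for conductivity.
    """
    rng = np.random.default_rng(seed)
    ks = rng.lognormal(ks_mean_log, ks_sd_log, size=n)
    runs = np.array([run_model(k) for k in ks])      # (n, n_timesteps)
    lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)
    return lo, hi  # 95% confidence band on the discharge series
```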

14.
When computing reservoir-inflow design floods by frequency analysis, the dam-site flood series must be used to infill the corresponding inflow flood series through correlation analysis. The commonly used linear regression method assumes a linear relation between the two series and a normally distributed inflow series, which may not hold in practice. A copula function is introduced to construct the joint and conditional probability distributions of the dam-site and inflow floods, from which the conditional most-likely value and confidence interval of the inflow flood given the dam-site flood are computed, yielding a new copula-based method for infilling inflow floods. Application to the Three Gorges Reservoir shows that the inflow values obtained by linear regression are clearly too small when the dam-site flood is large, and for rare floods even fall outside the 90% confidence interval. The proposed method better reflects the intrinsic relation between dam-site and inflow floods: it provides various point estimates of the inflow flood and also quantifies the uncertainty of the estimates.
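A hedged sketch of the conditional-quantile idea. A Gaussian copula is used below purely because its conditional distribution has a simple closed form; the paper selects its copula by goodness-of-fit, and ρ = 0.95 is an assumed value, not one fitted to the Three Gorges data.

```python
import numpy as np
from scipy.stats import norm

def conditional_quantile(u_dam, alpha, rho):
    """Conditional non-exceedance quantile of the inflow flood given
    the dam-site flood, under a Gaussian copula with correlation rho.

    u_dam : non-exceedance probability of the observed dam-site flood
    alpha : conditional probability level (0.5 gives the conditional
            median; 0.05 and 0.95 bound a 90% confidence interval)
    """
    z = norm.ppf(u_dam)
    return norm.cdf(rho * z + np.sqrt(1 - rho**2) * norm.ppf(alpha))

# 90% conditional interval of the inflow probability, given a 100-year
# dam-site flood (u = 0.99) and an assumed rho = 0.95
print([conditional_quantile(0.99, a, 0.95) for a in (0.05, 0.5, 0.95)])
```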

15.
A review of bivariate hydrological frequency distribution models
谢华  黄介生 《水科学进展》2008,19(3):443-452
Frequency analysis of multiple attributes of hydrological variables, and the encounter and joint probability problems of various hydrological events, require multivariate probability distribution models. This paper summarizes the most widely used bivariate probability distribution models, analyzes the applicability and limitations of each in detail, and introduces a newer bivariate probability model, the copula function. Most existing models are built on linear correlation between variables and cannot describe nonlinear, asymmetric dependence well; most also assume identical marginal distributions for the variables or impose strict restrictions on the dependence between them, limiting their application. The bivariate probability model constructed from a copula function overcomes these shortcomings: it admits arbitrary marginal distributions and can describe nonlinear, asymmetric dependence between variables. As a tool for constructing flexible multivariate joint distributions, the copula function has broad application prospects in water science.

16.
An improved maximum-likelihood method based on a simulated annealing genetic algorithm (SAGA) is established: the negative of the log-likelihood function is taken as the objective to be minimized, the parameter ranges obtained from method-of-moments estimates serve as constraints, and SAGA is then applied to estimate the parameters. Unlike the conventional maximum-likelihood approach, the improved method optimizes the parameters with a genetic algorithm. Monte Carlo experiments verify that the improved method is accurate both in parameter estimation and in estimating design values at different frequencies, performing as well as the method based on the maximum entropy principle and better than the other methods tested. The method is not restricted by distribution type, number of parameters, or constraints; it avoids the situations in which the conventional likelihood equations have no solution; and its solution process is simple and fast, making maximum likelihood an effective method both in theory and in practice.
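A minimal sketch of the same idea with off-the-shelf tools: minimize the negative log-likelihood under moment-based bounds with a global optimizer. SciPy's dual_annealing stands in for the paper's SA-GA hybrid, and a two-parameter gamma distribution stands in for whatever line type is being fitted.

```python
import numpy as np
from scipy.optimize import dual_annealing
from scipy.stats import gamma

def ml_fit_global(data):
    """Maximum-likelihood fit by global minimization of the negative
    log-likelihood, in the spirit of the SAGA approach above.

    Bounds would normally come from method-of-moments estimates; the
    wide bounds below are placeholders.
    """
    def neg_loglik(params):
        shape, scale = params
        return -np.sum(gamma.logpdf(data, shape, scale=scale))

    bounds = [(0.1, 50.0), (1e-3, 10.0 * np.mean(data))]
    result = dual_annealing(neg_loglik, bounds, seed=0)
    return result.x  # (shape, scale)

rng = np.random.default_rng(1)
print(ml_fit_global(rng.gamma(3.0, 2.0, size=200)))  # near (3, 2)
```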

17.
《Applied Geochemistry》2005,20(10):1857-1874
The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States) and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile–quantile (Q–Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities.

Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q–Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q–Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides.

None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set, in part because of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until the sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histograms, stem-and-leaf displays, and probability plots are recommended for rough judgement of the probability distribution if needed.
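A minimal sketch of the lognormal Q–Q diagnostic described above, using a synthetic mixture in which a small high-mean subpopulation mimics an ore-grade tail; kinks appear on the plot where subpopulations meet.

```python
import numpy as np
from scipy import stats

def lognormal_qq(values):
    """Theoretical vs. ordered sample quantiles for a lognormal Q-Q plot.

    Plotting osm against osr (e.g., with matplotlib) shows kinks where
    subpopulations with different affinities meet; log-transforming
    first turns this into a normal probability plot.
    """
    (osm, osr), (slope, intercept, r) = stats.probplot(np.log(values), dist="norm")
    return osm, osr, r  # r close to 1 means nearly log-linear

rng = np.random.default_rng(2)
mix = np.concatenate([rng.lognormal(1.0, 0.4, 950),
                      rng.lognormal(3.0, 0.3, 50)])  # ore-grade-like tail
print(lognormal_qq(mix)[2])  # r noticeably below 1 for the mixture
```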

18.
The sampling distribution of the W statistic of disjunction has been estimated by Monte Carlo simulation for the case where the underlying distribution is a random rectangular (Poisson) variable that is divided into two groups at an arbitrary position. A transformation to sinh⁻¹ logₑ W gave a variable that was acceptably normal, and from this a simple approximation for the distribution is given, together with a diagram of confidence limits of W for this case.

19.
In hydrological frequency analysis, extreme-value sample series are generally small and of limited representativeness, so the estimated design values carry uncertainty. The Bootstrap method is used here to study the effect of sampling uncertainty on hydrological design values. Compared with conventional frequency analysis, the Bootstrap-based approach provides both point and interval estimates of the design value and quantifies its uncertainty. In addition, three schemes combining the Bootstrap technique with the method of moments, the weighting-function method, and the method of L-moments are set up to examine the effectiveness of the approach across parameter estimation methods. The proposed method is applied to 42 years (1970-2011) of annual rainfall data for Nantong. The results show that, in terms of the expected design value, the 90% confidence interval, and the final design value, the design results are insensitive to the choice of parameter estimation method, and the approach avoids the poor generality and significant errors of the B-value nomograph used in the design code.
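A minimal sketch of the Bootstrap scheme with method-of-moments fitting; a two-parameter gamma distribution stands in for the Pearson-III law of Chinese practice, and the weighting-function and L-moment variants are not reproduced.

```python
import numpy as np
from scipy import stats

def bootstrap_design_value(annual, T=100, n_boot=2000, seed=0):
    """Bootstrap point and 90% interval estimates of the T-year design
    value, with parameters fitted by the method of moments.

    Each resample gives new moment estimates, hence a new design
    quantile; the spread of the quantiles quantifies the sampling
    uncertainty of the design value.
    """
    rng = np.random.default_rng(seed)
    p = 1.0 - 1.0 / T
    qs = []
    for _ in range(n_boot):
        s = rng.choice(annual, size=len(annual), replace=True)
        m, v = s.mean(), s.var(ddof=1)
        shape, scale = m**2 / v, v / m          # gamma moment estimates
        qs.append(stats.gamma.ppf(p, shape, scale=scale))
    return np.mean(qs), np.quantile(qs, [0.05, 0.95])

rng = np.random.default_rng(4)
annual = rng.gamma(20.0, 55.0, size=42)  # synthetic 42-year rainfall series
print(bootstrap_design_value(annual))
```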
