Similar literature
20 similar documents found (search time: 765 ms)
1.
The estimation of parameters and their errors is considered using the observed light curve of the eclipsing binary system YZ Cas as an example. The error intervals are calculated using the differential-correction and confidence-region methods. The error intervals and the reliability of the two methods are investigated, and the reliability of limb-darkening coefficients derived from the observed light curve is analyzed. A new method for calculating parameter errors is proposed.

2.
The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among the several methods available for generating spatially correlated resamples, we selected one based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third, an empirical sample, consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others through inadequate application of the bootstrap. Also, even for the Gaussian example, the distributions of estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap.
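The LU-based resampling the abstract describes can be sketched as follows: decorrelate the sample with the Cholesky factor of an assumed covariance model, bootstrap the approximately independent residuals, and recorrelate before re-estimating the semivariogram. This minimal 1-D sketch assumes a hypothetical exponential covariance and reports plain percentile intervals; it illustrates the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- hypothetical 1-D transect with an assumed exponential covariance ---
n = 60
x = np.linspace(0.0, 10.0, n)

def cov(h, sill=1.0, corr_range=3.0):
    return sill * np.exp(-np.abs(h) / corr_range)

C = cov(x[:, None] - x[None, :])
L = np.linalg.cholesky(C)            # "LU" (here Cholesky) factor
z = L @ rng.standard_normal(n)       # one spatially correlated sample

def semivariogram(z, x, lags, tol=0.5):
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    return np.array([sq[(d > h - tol) & (d <= h + tol) & (d > 0)].mean()
                     for h in lags])

lags = np.array([1.0, 2.0, 3.0, 4.0])
g_hat = semivariogram(z, x, lags)    # empirical semivariogram

# --- LU bootstrap: decorrelate, resample residuals, recorrelate ---
w = np.linalg.solve(L, z)            # approximately iid residuals
B = 500
g_boot = np.empty((B, lags.size))
for b in range(B):
    w_star = rng.choice(w, size=n, replace=True)
    g_boot[b] = semivariogram(L @ w_star, x, lags)

# percentile 95 % confidence interval per lag (not centered on g_hat)
lo, hi = np.percentile(g_boot, [2.5, 97.5], axis=0)
```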

3.
If a particular distribution for the kriging error can be assumed, confidence intervals can be estimated and contract risk can be assessed. Contract risk is defined as the probability that a block grade will exceed some specified limit; in coal mining, this limit is set in a coal sales agreement. A key assumption necessary to implement the geostatistical model is local stationarity in the variogram. In a typical project, data limitations prevent a detailed examination of the stationarity assumption. In this paper, the distribution of kriging error and the scale of variogram stationarity are examined for a coal property in northern West Virginia.
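Under a normality assumption for the kriging error, contract risk reduces to a tail probability of the error distribution. A minimal sketch (the block estimate, kriging variance, and contract limit below are hypothetical):

```python
import math

def contract_risk(estimate, kriging_var, limit):
    """P(true block grade > limit), assuming a normally distributed
    kriging error with mean 0 and variance kriging_var."""
    sd = math.sqrt(kriging_var)
    z = (limit - estimate) / sd
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 1.0 - phi

# hypothetical coal block: ash estimate 9.0 %, kriging variance 0.49 %^2,
# contract limit 10.0 % ash
risk = contract_risk(9.0, 0.49, 10.0)
```

When the estimate sits exactly at the limit, the risk is 0.5 by symmetry, which is a quick sanity check on the implementation.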

4.
Stochastic spatial simulation allows generation of multiple realizations of spatial variables. Because of the computational time required to evaluate the transfer function, uncertainty quantification over these multiple realizations often requires selecting a small subset of realizations. However, by selecting only a few realizations, one risks biasing the P10, P50, and P90 estimates relative to the original multiple realizations. The objective of this study is to develop a methodology for quantifying confidence intervals for the estimated P10, P50, and P90 quantiles when only a few models are retained for response evaluation. We use the parametric bootstrap technique, which evaluates the variability of the statistics obtained from uncertainty quantification and constructs confidence intervals. Using this technique, we compare the confidence intervals obtained with two selection methods: the traditional ranking technique and the distance-based kernel clustering technique (DKM). The DKM has been developed recently and has been shown to be effective in quantifying uncertainty. The methodology is demonstrated with two examples. The first is a synthetic example using bi-normal variables, which serves to demonstrate the technique. The second is from an oil field in West Africa, where the uncertain variable is the cumulative oil production from 20 wells. The results show that, for the same number of transfer-function evaluations, the DKM method has an equal or smaller error and confidence interval compared with ranking.
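The parametric bootstrap step can be illustrated as follows: fit a parametric model to the few retained responses, resample repeatedly from the fitted model, and read off confidence intervals for the P10/P50/P90 estimates. A lognormal response model and the sample size are assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical responses (e.g. cumulative production) from a few
# retained realizations
response = rng.lognormal(mean=3.0, sigma=0.4, size=15)

# parametric model fitted to the small sample (lognormal assumed)
mu = np.log(response).mean()
sigma = np.log(response).std(ddof=1)

B = 2000
q_boot = np.empty((B, 3))
for b in range(B):
    resamp = rng.lognormal(mu, sigma, size=response.size)
    q_boot[b] = np.percentile(resamp, [10, 50, 90])  # P10, P50, P90

# 90 % confidence interval for each of the three quantile estimates
ci_lo, ci_hi = np.percentile(q_boot, [5, 95], axis=0)
```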

5.
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is an often-used statistical method in the climate sciences. Data properties common to climate time series, namely non-normal distributional shape, serial correlation, and small data sizes, call for advanced, robust methods to estimate accurate confidence intervals supporting the correlation point estimate. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, Math Geol 35(6):651–665, 2003), whose main aim is to obtain accurate confidence intervals for correlation coefficients between two time series by taking the serial dependence of the data-generating process into account. However, Monte Carlo experiments show that the coverage accuracy of the confidence intervals for smaller data sizes can be substantially improved. In the present paper, the existing program is adapted into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that performs a second bootstrap loop (it resamples from the bootstrap resamples). Like the non-calibrated bootstrap confidence intervals, it is robust against the data distribution. Pairwise moving block bootstrap resampling is used to preserve the serial dependence of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence interval is examined with Monte Carlo simulations and compared with that of the uncalibrated intervals. The coverage accuracy is evidently better for the calibrated confidence intervals; the coverage error is already acceptably small (within a few percentage points) for data sizes as small as 20.
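The pairwise moving block bootstrap that preserves serial dependence can be sketched as below: the same blocks are drawn from both series so their cross-dependence survives resampling. For brevity this sketch reports a plain percentile interval rather than the calibrated Student's t interval of the paper, and the two correlated AR(1) series are synthetic stand-ins for climate data.

```python
import numpy as np

rng = np.random.default_rng(2)

# two hypothetical AR(1) series (serially dependent, as in climate data)
n, a = 80, 0.5
e1, e2 = rng.standard_normal((2, n))
x, y = np.empty(n), np.empty(n)
x[0], y[0] = e1[0], e2[0]
for t in range(1, n):
    x[t] = a * x[t - 1] + e1[t]
    y[t] = a * y[t - 1] + 0.8 * e1[t] + 0.6 * e2[t]  # correlated with x

r_hat = np.corrcoef(x, y)[0, 1]

# pairwise moving block bootstrap: resample the SAME blocks of both series
block = 8
starts = np.arange(n - block + 1)
B = 1000
r_boot = np.empty(B)
for b in range(B):
    s = rng.choice(starts, size=n // block)
    idx = np.concatenate([np.arange(i, i + block) for i in s])
    r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]

# percentile 95 % interval (PearsonT3 instead calibrates a Student's t
# interval with a second bootstrap loop)
lo, hi = np.percentile(r_boot, [2.5, 97.5])
```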

6.
This paper deals with the application of universal kriging to interpolate water-table elevations from measurements at random locations. Geographic information system tools were used to generate a continuous surface of water-table elevations for the Carlsbad-area alluvial aquifer in southeastern New Mexico, USA. Water-table elevations in the 38 monitoring wells common to the 1996 and 2003 irrigation years follow a normal distribution. A generalized MATLAB® code was developed to generate omni-directional and directional semivariograms (at 22.5° intervals). Low-order polynomials were used to model the trend, as the water-table profile exhibits a southeast gradient. Different theoretical semivariogram models were tried in order to select the base semivariogram for the geostatistical interpolation. The contour maps of water-table elevations exhibit a significant decrease in the water table from 1996 to 2003. Statistical analysis of the estimated contours revealed that the decrease is between 0.6 and 4.5 m at 90% confidence. The estimation-variance contours show that the estimation error exceeds 8 m² in the western and south-western portions of the aquifer, owing to the absence of monitoring wells there.

7.
This paper extends Nair's exact table of 95% and 99% confidence intervals for the median to data sets containing up to 300 observations. It also provides an approximate relationship, especially useful when the number of observations is large, which allows confidence intervals to be calculated quickly with only occasional and tolerable error.
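The exact intervals tabulated by Nair follow from order statistics of the binomial(n, 1/2) distribution: the interval between the k-th and (n−k+1)-th order statistics covers the median with a probability computable in closed form. A short sketch that recomputes the order-statistic indices (the function name is ours):

```python
import math

def median_ci_indices(n, conf=0.95):
    """Smallest symmetric order-statistic pair (k, n-k+1), 1-based, with
    P(X_(k) <= median <= X_(n-k+1)) >= conf, from binomial(n, 1/2)."""
    def binom_cdf(k):
        return sum(math.comb(n, i) for i in range(k + 1)) / 2.0 ** n
    for k in range(n // 2, 0, -1):       # shortest interval first
        cover = 1.0 - 2.0 * binom_cdf(k - 1)
        if cover >= conf:
            return k, n - k + 1, cover
    return 1, n, 1.0 - 2.0 * binom_cdf(0)

# for n = 20 the exact 95 % interval runs between the 6th and 15th
# order statistics
k, j, cover = median_ci_indices(20, 0.95)
```

The approximate relationship for large n amounts to k ≈ n/2 − z·√n/2 with z the normal quantile, which avoids summing binomial terms.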

8.
The parameters employed in the Cooper-Jacob equation to describe drawdown are transmissivity, storativity, radial distance, time, and pumping rate. An approach is described for quantifying how error or uncertainty in any one of these parameters causes error in the estimated drawdown. The dimensionless fractional error in estimated drawdown is expressed quantitatively as a function of (1) the dimensionless fractional error of a given parameter and (2) the dimensionless argument of the well function, u. The fractional error in estimated drawdown is a linear function of the fractional error in pumping rate and, for any given value of u, a nonlinear function of the fractional error in transmissivity, storativity, radial distance, or time. The fractional error in estimated drawdown for a given fractional parameter error varies considerably between parameters: sensitivity is greatest for transmissivity and pumping rate, smaller for radial distance and time, and smaller still for storativity. The magnitude of the fractional error in drawdown may also be affected by the sign of the fractional parameter error.
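The sensitivity pattern described above can be reproduced numerically from the Cooper-Jacob approximation s = (Q/4πT)·ln(2.25Tt/r²S): perturb one parameter and compare drawdowns. The parameter values below are hypothetical SI-unit figures chosen so that u = r²S/(4Tt) is small, where the approximation is valid.

```python
import math

def cooper_jacob(Q, T, S, r, t):
    """Cooper-Jacob approximation of Theis drawdown (valid for small u)."""
    return (Q / (4.0 * math.pi * T)) * math.log(2.25 * T * t / (r * r * S))

def frac_error_drawdown(param, frac_err,
                        Q=0.01, T=1e-3, S=1e-4, r=50.0, t=86400.0):
    """Dimensionless fractional error in drawdown caused by a fractional
    error in one parameter (defaults are hypothetical SI values)."""
    base = dict(Q=Q, T=T, S=S, r=r, t=t)
    s0 = cooper_jacob(**base)
    base[param] *= 1.0 + frac_err
    return cooper_jacob(**base) / s0 - 1.0

# drawdown responds linearly (one-to-one) to pumping-rate error ...
eQ = frac_error_drawdown("Q", 0.10)
# ... but nonlinearly, and much more weakly, to storativity error
eS = frac_error_drawdown("S", 0.10)
```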

9.
The destruction of geomaterials and geomedia, like that of brittle and ductile materials in general, has been treated theoretically and experimentally using the general approach of nonlinear dynamic systems. The process of destruction in loaded solids (inelastic deformation, damage accumulation, fracture) is presented as the space-time evolution of a nonlinear dynamic system, which allows all deformation and fracture to be interpreted within a single theory. The space-time hierarchies of nonlinear systems were found to undergo collective effects and self-organization. Experimental and theoretical studies of the evolution of loaded solids revealed their universal fractality and showed brittle fracture and plastic deformation to be self-similar processes across scales, for which scaling parameters have been estimated. The evolution of inelastic strain and destruction of solids is modeled numerically in terms of hierarchic systems.

10.
In existing borehole-deformation methods of in-situ stress measurement that use contact pre-compressed gauge elements, the calculation of the reduced displacement of the borehole wall is deficient, whether the Hast calibration method or the confining-pressure calibration method is used. In 1984, Pan Lizhou first pointed out these deficiencies and gave an exact formula for calculating the reduced displacement of the borehole wall [1]. Because some parameters in that formula could not be obtained directly, however, the formula could not immediately be applied to actual measurements. In this paper, numerical calculations with the finite element method determine the parameter values in the formula of [1], from which the error introduced by the earlier methods of computing the reduced displacement is obtained. Order-of-magnitude comparisons of the numerical results also establish the relative importance of the factors considered in the formula of [1] that influence the reduced displacement.

11.
The paper describes an algorithm for estimating the hypocentral coordinates and origin time of local earthquakes when the wave-speed model employed is a layered one with dipping interfaces. A constrained least-squares error problem is solved using the penalty-function approach, in conjunction with the sequential unconstrained optimization technique of Fiacco and McCormick. Joint confidence intervals for the computed parameters are estimated using Bard's approach for nonlinear problems. The results show that when a hypocentre lies outside the array of recording stations and head waves from a dipping interface are involved, the interface's inclination must be taken into account for dip angles exceeding 5°.

12.
York's (1969) method of regression, determining the best-fit line to data with errors in both variables using a least-squares solution, has become an integral part of isotope geochemistry. Although other methods agree with York's best-fit line (e.g., maximum likelihood), there is little agreement on the standard-error estimates for slope and intercept values. The reasons for this are differing levels of approximation used to compute the standard error, doubts concerning procedures for determining a confidence interval once the standard error has been estimated, and a typographical error in the original publication. This paper examines York's method of regression and standard errors of the parameters of a best-fit line. A very accurate method for determining the standard error in slope and intercept values is introduced, which eliminates the need to multiply the standard-error estimate by the goodness-of-fit parameter known as MSWD. In addition, a derivation of a fixed-intercept method of regression is introduced, and interpretations of MSWD and use of the t-adjustment in confidence intervals are discussed. The accuracy of the standard-error computations is determined by comparing the results to slope and intercept statistics generated from several thousand Monte Carlo regressions using synthetic 40Ar/39Ar inverse isochron data.
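York's iterative solution is compact enough to sketch, here in the common special case of uncorrelated x and y errors; the standard errors use the adjusted-point expressions from York's later unified treatment (York et al., 2004). This is a generic sketch of the method, not the paper's exact algorithm.

```python
import numpy as np

def york_fit(x, y, sx, sy, tol=1e-12, itmax=100):
    """York best-fit line with errors in both coordinates (error
    correlation r_i = 0 assumed). Returns slope b, intercept a, and
    their standard errors sb, sa."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2
    b = np.polyfit(x, y, 1)[0]                 # OLS slope as starting value
    for _ in range(itmax):
        W = wx * wy / (wx + b * b * wy)        # combined point weights
        xb, yb = np.sum(W * x) / np.sum(W), np.sum(W * y) / np.sum(W)
        U, V = x - xb, y - yb
        beta = W * (U / wy + b * V / wx)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        converged = abs(b_new - b) < tol
        b = b_new
        if converged:
            break
    a = yb - b * xb
    # standard errors from the adjusted points
    xa = xb + beta
    xam = np.sum(W * xa) / np.sum(W)
    u = xa - xam
    sb = np.sqrt(1.0 / np.sum(W * u * u))
    sa = np.sqrt(1.0 / np.sum(W) + xam * xam * sb * sb)
    return b, a, sb, sa

# colinear test data y = 2x + 1, so the fit should recover the line
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0
b, a, sb, sa = york_fit(xs, ys, np.full(5, 0.1), np.full(5, 0.2))
```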

13.
Numerical solution and verification of a coupled moisture-heat transfer model for frozen soil
A numerical solution is first obtained for the authors' coupled moisture-heat transfer model for seasonally frozen soil, which is based on porous-media theory; the model equations are revised, and a method for determining their parameters is given. Taking the soil along the Changchun–Songyuan highway section as the study object, moisture migration under freezing conditions in an actual project is then predicted: the model is solved under prescribed boundary conditions, and the results are compared with field monitoring data. The temperature comparison shows that the model predicts the final state well; errors in the intermediate stages are larger, but the trends essentially agree. The comparison of moisture-migration direction and magnitude shows that the computed values are smaller than the measured ones, yet the overall trends are consistent, with the best agreement and smallest error again at the final state. Overall, the model reproduces the final values of the parameters well, though with some error.

14.
Generalized cross-validation for covariance model selection
A weighted cross-validation technique, known in the spline literature as generalized cross-validation (GCV), is proposed for covariance model selection and parameter estimation. Weights for the prediction errors are selected to give more importance to clustered points than to isolated points; clustered points are estimated better by their neighbors and are more sensitive to the model parameters. This rational weighting scheme also significantly simplifies the computation of the cross-validation mean square error of prediction. With small- to medium-size datasets, GCV is performed in a global neighborhood, and optimization of the usual isotropic models requires only a small number of matrix inversions. A small dataset and a simulation are used to compare the performance of GCV with ordinary cross-validation (OCV) and least-squares fitting (LS).
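The GCV idea, trading data fit against the trace of the influence (hat) matrix, can be illustrated on a simple ridge-regression smoother; the covariance-model setting of the paper is analogous but more involved. All data below are synthetic, and the score minimized is the standard GCV criterion n‖(I−A)y‖² / tr(I−A)².

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic smoothing problem: choose the ridge penalty by GCV
n, p = 50, 8
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

def gcv_score(lam):
    # influence (hat) matrix of the ridge smoother at penalty lam
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return n * (resid @ resid) / (n - np.trace(A)) ** 2

lams = np.logspace(-3, 3, 25)
scores = np.array([gcv_score(l) for l in lams])
lam_best = lams[scores.argmin()]
```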

15.
Properties of stratigraphic completeness are determined here from a Brownian motion model of sediment accumulation. This avoids flaws inherent in application of a discrete-time random walk to the time span, rather than thickness, of sediment layers. Both discrete and continuous models show that the concept of stratigraphic completeness is meaningful only when the time scale is specified. From the discrete model, not surprisingly, completeness improves with increasing relative frequency and average thickness of depositional increments and the error of completeness estimation should decrease for longer sections. The continuous model shows that two dimensionless products determine the probability that a given time interval will be recorded by some preserved sediment. The first is the ratio of the age of the interval to its time span; the second is the product of the square root of the time span and ratio of the mean to the standard deviation of accumulation rate. Expected completeness is the average of these probabilities for all successive intervals of the given time span. For long sections, completeness may be estimated from the second dimensionless product alone. The two dimensionless products are sufficient to predict the relationship of accumulation rate to time span, the distribution of bed thickness, and the weak association of completeness and section thickness.
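The discrete random-walk version of the model is easy to simulate: a step's sediment survives only if the surface never erodes below it afterwards, and an interval of a given time span is "recorded" if any of its steps survive. A Monte Carlo sketch with hypothetical drift and variability, showing that completeness depends on the chosen time scale:

```python
import numpy as np

rng = np.random.default_rng(4)

def completeness(n_steps, mu, sigma, spans, trials=200):
    """Monte Carlo stratigraphic completeness at several time scales
    (`spans`, in steps) for a random-walk accumulation history with
    mean increment mu and standard deviation sigma."""
    hits = {s: 0 for s in spans}
    tot = {s: 0 for s in spans}
    for _ in range(trials):
        h = np.cumsum(mu + sigma * rng.standard_normal(n_steps))
        # sediment deposited at step t survives iff the surface never
        # drops below h[t] afterwards, i.e. h[t] equals the suffix minimum
        future_min = np.minimum.accumulate(h[::-1])[::-1]
        survives = h <= future_min
        for s in spans:
            for start in range(0, n_steps - s + 1, s):
                tot[s] += 1
                if survives[start:start + s].any():
                    hits[s] += 1
    return {s: hits[s] / tot[s] for s in spans}

# completeness improves at coarser time scales
c = completeness(400, mu=1.0, sigma=3.0, spans=(1, 20))
```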

16.
Matthews, J. A. & Owen, G. 2009: Schmidt hammer exposure-age dating: developing linear age-calibration curves using Holocene bedrock surfaces from the Jotunheimen–Jostedalsbreen regions of southern Norway. Boreas, 10.1111/j.1502-3885.2009.00107.x. ISSN 0300-9483.
The approach to calibrated-age dating of rock surfaces using Schmidt hammer R-values is developed, potential errors in dating Holocene rock surfaces are estimated and limitations are assessed. Multiple sites from glacially abraded bedrock outcrops of two ages (glacier forelands deglaciated for c. 100 years and adjacent late-Preboreal terrain deglaciated for c. 9700 years) are used to analyse the variability of mean R-values and to construct linear age-calibration curves for three sub-regions in the Jotunheimen–Jostedalsbreen regions of southern Norway. Conservative potential dating errors of 246–632 years are estimated using 95% confidence intervals associated with two control points, the width of the error limits being significantly greater for the Preboreal surfaces than for the younger Little Ice Age surfaces. Substantial improvements over previous age calibrations are largely attributable to the use of multiple sites as part of a research design that has effectively controlled for geological differences between the three sub-regions. In the context of the Holocene time scale, the technique is seen as complementary to cosmogenic-nuclide dating (which currently has lower precision) and lichenometric dating (which has a lower temporal range).

17.
The aim of upscaling is to determine equivalent homogeneous parameters at a coarse-scale from a spatially oscillating fine-scale parameter distribution. To be able to use a limited number of relatively large grid-blocks in numerical oil reservoir simulators or groundwater models, upscaling of the permeability is frequently applied. The spatial fine-scale permeability distribution is generally obtained from geological and geostatistical models. After upscaling, the coarse-scale permeabilities are incorporated in the relatively large grid-blocks of the numerical model. If the porous rock may be approximated as a periodic medium, upscaling can be performed by the method of homogenization. In this paper the homogenization is performed numerically, which gives rise to an approximation error. The complementarity between two different numerical methods – the conformal-nodal finite element method and the mixed-hybrid finite element method – has been used to quantify this error. These two methods yield respectively upper and lower bounds for the eigenvalues of the coarse-scale permeability tensor. Results of 3D numerical experiments are shown, both for the far field and around wells.
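The idea of bracketing the coarse-scale permeability between upper and lower bounds has a simple classical analogue: the Wiener bounds for a layered medium (arithmetic mean of the fine-scale permeabilities for flow along the layers, harmonic mean for flow across them). The paper's conformal-nodal and mixed-hybrid finite element bounds play a similar role for general periodic media. A sketch with synthetic layer permeabilities:

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical fine-scale permeabilities of a stack of layers (lognormal)
k = rng.lognormal(mean=-2.0, sigma=1.0, size=100)

k_arith = k.mean()                  # upper bound: flow parallel to layers
k_harm = 1.0 / (1.0 / k).mean()     # lower bound: flow across the layers
k_geom = np.exp(np.log(k).mean())   # common intermediate estimate
```

Any admissible upscaled value lies between the harmonic and arithmetic means, so the gap between the two bounds measures how much the upscaling can matter.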

18.
Based on a two-layer land surface model, a rather general variational data assimilation framework for estimating model state variables is developed. The method minimizes the error of surface soil temperature predictions subject to constraints imposed by the prediction model. Retrieval experiments for soil prognostic variables are performed, and the results are verified against model-simulated data as well as real observations from the Oklahoma Atmospheric Surface-layer Instrumentation System (OASIS). The optimization scheme is robust with respect to a wide range of initial-guess errors in surface soil temperature (as large as 30 K) and deep soil moisture (anywhere between wilting point and saturation). When assimilating OASIS data, the scheme reduces the initial-guess error by more than 90%, while in Observing System Simulation Experiments (OSSEs) the initial-guess error is usually reduced by over four orders of magnitude. Using synthetic data, the robustness of the retrieval scheme as related to the information content of the data, the physical meaning of the adjoint variables, and their use in sensitivity studies are investigated. Through sensitivity analysis, it is confirmed that vegetation coverage and growth condition determine whether the optimally estimated initial soil moisture leads to an optimal estimate of the surface fluxes, which reconciles two recent studies. The real-data experiments show that daytime observations are the most effective for the retrieval, and that longer assimilation windows yield more accurate initial-condition retrievals, underlining the importance of information quantity, especially for schemes assimilating noisy observations.

19.
In Muskingum channel flood routing, the least-squares method is now widely used for parameter optimization because the traditional trial-and-error method lacks both accuracy and objectivity. When applying least squares, the choice of objective function is found to affect the accuracy of the computed discharge. This paper therefore considers two objective functions, minimum channel-storage error and minimum outflow error, derives analytical expressions for the routing parameters in the least-squares sense under each, and examines their effect on the accuracy of the computed discharge. Simulations of three flood events show that the minimum-outflow-error objective yields the higher accuracy: relative to the minimum-storage-error objective, the relative mean absolute errors are reduced by 4%, 25%, and 25%, respectively, indicating that minimum outflow error is the more effective objective function for the optimization.
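With the outflow error as the objective, the Muskingum routing coefficients C0, C1, C2 in O[t+1] = C0·I[t+1] + C1·I[t] + C2·O[t] follow from an ordinary least-squares fit of the outflow series. A sketch with a hypothetical hydrograph whose outflow is generated from known coefficients, so the recovery can be checked:

```python
import numpy as np

# hypothetical inflow hydrograph (m^3/s)
I = np.array([10., 30., 70., 110., 100., 80., 60., 45., 35., 28., 22., 18.])

# "observed" outflow generated from known Muskingum coefficients
C0, C1, C2 = 0.1, 0.4, 0.5          # true coefficients (they sum to 1)
O = np.empty_like(I)
O[0] = I[0]
for t in range(len(I) - 1):
    O[t + 1] = C0 * I[t + 1] + C1 * I[t] + C2 * O[t]

# least squares with outflow error as the objective:
# minimize sum_t (O[t+1] - c0*I[t+1] - c1*I[t] - c2*O[t])^2
A = np.column_stack([I[1:], I[:-1], O[:-1]])
coef, *_ = np.linalg.lstsq(A, O[1:], rcond=None)
```

Because the synthetic outflow satisfies the routing equation exactly, the fit recovers C0, C1, C2; with real data the residual measures how well the Muskingum model explains the event.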

20.
At present, the uncertainty of discharge measurements is determined either through error experiments or from empirical values, but these approaches suffer from a heavy workload or from underestimation of the uncertainty. To address this, the interpolation-variance estimation method, based on measured data and statistical theory, is validated under different flow-measurement conditions. Uncertainty analyses of measured data are carried out for three discharge gauging stations, Baihe, Xiangyang, and Shayang, and a Monte Carlo experiment is performed for the Baihe station to compare the uncertainties given by the interpolation-variance method with the true errors. The results show that the method reflects the influence of stage variation well: the correlation coefficient between its uncertainty estimates and the true measurement errors reaches 0.64, and the Spearman correlation with stage variation at the cross-section reaches 0.79. The uncertainty estimates are reasonable at high and medium stages but biased high at low stages.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号