Similar Articles (20 results)
1.
When estimating the mean value of a variable, or the total amount of a resource, within a specified region, it is desirable to report an estimated standard error for the resulting estimate. If the sample sites are selected according to a probability sampling design, it usually is possible to construct an appropriate design-based standard error estimate. One exception is systematic sampling, for which no such standard error estimator exists. However, a slight modification of systematic sampling, termed 2-step tessellation stratified (2TS) sampling, does permit the estimation of design-based standard errors. This paper develops a design-based standard error estimator for 2TS sampling. It is shown that the Taylor series approximation to the variance of the sample mean under 2TS sampling may be expressed in terms of either a deterministic variogram or a deterministic covariance function. Variance estimation can then be approached through the estimation of a variogram or a covariance function. The resulting standard error estimators are compared to some more traditional variance estimators through a simulation study. The simulation results show that estimators based on the new approach may perform better than traditional variance estimators.
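The variogram-based route to variance estimation mentioned above starts from an empirical variogram. The sketch below shows the classical Matheron estimator on a hypothetical 1-D transect; it is a generic illustration of the building block, not the paper's 2TS estimator itself.

```python
import math

def empirical_variogram(locs, vals, lags, tol):
    """Matheron estimator: gamma(h) = mean of 0.5*(z_i - z_j)^2
    over all pairs whose separation is within tol of lag h."""
    gammas = []
    for h in lags:
        sq = [0.5 * (vals[i] - vals[j]) ** 2
              for i in range(len(locs)) for j in range(i + 1, len(locs))
              if abs(abs(locs[i] - locs[j]) - h) <= tol]
        gammas.append(sum(sq) / len(sq) if sq else float("nan"))
    return gammas

# Hypothetical regularly spaced samples of a smooth signal
locs = [0.5 * k for k in range(20)]
vals = [math.sin(0.3 * x) for x in locs]
gam = empirical_variogram(locs, vals, lags=[0.5, 1.0, 2.0], tol=0.01)
```

For a smooth field the estimated variogram increases with lag at short distances, which is what a fitted deterministic variogram model would then capture.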

2.
Parameter identification is one of the key elements in the construction of models in geosciences. However, inherent difficulties such as the instability of ill-posed problems or the presence of multiple local optima may impede the execution of this task. Regularization methods and Bayesian formulations, such as the maximum a posteriori estimation approach, have been used to overcome those complications. Nevertheless, in some instances, a more in-depth analysis of the inverse problem is advisable before obtaining estimates of the optimal parameters. The Markov chain Monte Carlo (MCMC) methods used in Bayesian inference have been applied over the last 10 years in several fields of geosciences, such as hydrology, geophysics, and reservoir engineering. In the present paper, a compilation of basic tools for inference is given, together with a case study illustrating their practical application. First, an introduction to the Bayesian approach to the inverse problem is provided, together with the most common MCMC sampling algorithms. Second, a series of estimators for quantities of interest, such as the marginal densities or the normalization constant of the posterior distribution of the parameters, are reviewed. These estimators reduce the computational cost significantly, using only the time needed to obtain a sample of the posterior probability density function. The use of information theory principles for experimental design and for ill-posedness diagnosis is also introduced. Finally, a case study based on a highly instrumented well test found in the literature is presented. The results obtained are compared with those computed by the maximum likelihood estimation approach.
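The simplest of the MCMC samplers referenced above is random-walk Metropolis. The following minimal sketch samples the posterior of a single parameter under a hypothetical Gaussian likelihood and flat prior; the data, proposal width, and burn-in length are all assumptions for illustration.

```python
import math
import random

random.seed(0)
data = [1.2, 0.8, 1.1, 0.9, 1.0]   # hypothetical observations

def log_post(theta):
    # Flat prior + Gaussian likelihood with unit variance
    return -0.5 * sum((d - theta) ** 2 for d in data)

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.5)              # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                                   # accept
    chain.append(theta)                                # (reject keeps theta)

post_mean = sum(chain[1000:]) / len(chain[1000:])      # discard burn-in
```

For this toy posterior the chain mean should land near the sample mean of the data, 1.0; marginal densities and normalization constants would then be estimated from the same chain, as the abstract describes.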

3.
The ensemble Kalman filter (EnKF) is now widely used in diverse disciplines to estimate model parameters and update model states by integrating observed data. The EnKF is known to perform optimally only for multi-Gaussian distributed states and parameters. A new approach, the normal-score EnKF (NS-EnKF), has been recently proposed to handle complex aquifers with non-Gaussian distributed parameters. In this work, we aim at investigating the capacity of the NS-EnKF to identify patterns in the spatial distribution of the model parameters (hydraulic conductivities) by assimilating dynamic observations in the absence of direct measurements of the parameters themselves. In some situations, hydraulic conductivity measurements (hard data) may not be available, which requires the estimation of conductivities from indirect observations, such as piezometric heads. We show how the NS-EnKF is capable of retrieving the bimodal nature of a synthetic aquifer solely from piezometric head data. By comparison with a more standard implementation of the EnKF, the NS-EnKF gives better results with regard to histogram preservation, uncertainty assessment, and transport predictions.
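The core of the NS-EnKF is a normal-score transform: each value is mapped to the standard-normal quantile of its empirical rank, so that the Kalman update operates on Gaussian-distributed variables. The sketch below shows only this forward transform on hypothetical bimodal conductivities (the full filter also back-transforms after each update, which is not reproduced here; tied values are assumed absent).

```python
from statistics import NormalDist

def normal_score(values):
    """Map each value to the standard-normal quantile of its rank."""
    n = len(values)
    ranks = {v: r for r, v in enumerate(sorted(values))}
    # Plotting position (r + 0.5)/n keeps quantiles strictly inside (0, 1)
    return [NormalDist().inv_cdf((ranks[v] + 0.5) / n) for v in values]

# Strongly bimodal hypothetical "conductivities"
vals = [0.1, 0.2, 0.15, 9.0, 10.0, 9.5]
scores = normal_score(vals)
```

The transformed values are symmetric standard-normal quantiles regardless of how skewed or bimodal the original histogram is, which is what lets the Gaussian update machinery of the EnKF be applied safely.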

4.
In many circumstances involving heat and mass transfer, it is impractical to measure the input flux and the resulting state distribution in the domain. The need therefore arises for techniques that solve such problems by estimating the mass flux inversely. The adaptive state estimator (ASE) is an increasingly popular inverse estimation technique that resolves inverse problems by incorporating the semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The ASE is particularly designed for systems with independent unknowns and/or random switching of input and measurement biases. The present study describes a scheme to estimate the groundwater input contaminant flux and its transient distribution in a hypothetical two-dimensional aquifer by means of the ASE, chosen for its ability to handle process noise efficiently, keeping the relative error within 10% in two-dimensional problems. Numerical simulation results show that the proposed estimator performs well for both smoothly and abruptly varying input flux scenarios. The results also show that the ASE outperforms its competitor, the recursive least squares estimator (RLSE), owing to its larger error tolerance in regimes of greater process noise. The ASE's inherent drawback of being slower than the RLSE, a consequence of its more complex algorithm, was also observed. The chosen input scenarios were also tested for the effect of input area; both estimators improve as the input flux area increases, especially as sensors are moved closer to the assumed input location.
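The building block of the ASE's bank of adaptively weighted filters is the ordinary Kalman filter. A minimal scalar version is sketched below; the random-walk state model, noise levels, and simulated measurements are hypothetical and stand in for the groundwater states of the study.

```python
import random

def kalman_1d(zs, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_k = x_{k-1} + w (var q),
    z_k = x_k + v (var r). Returns the filtered estimates."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                  # predict: propagate error variance
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return out

random.seed(1)
truth = 5.0
zs = [truth + random.gauss(0, 0.5) for _ in range(200)]  # noisy measurements
est = kalman_1d(zs)
```

The ASE runs several such filters in parallel, each tuned to a different input/bias hypothesis, and weights them adaptively by how well each explains the innovations.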

6.
In kriging problems with huge numbers of estimation points and measurements, computational power and storage capacity often place heavy limits on the maximum manageable problem size. In the past, a family of FFT-based algorithms for matrix operations has been developed. They allow extremely fast convolution, superposition, and inversion of covariance matrices under certain conditions. If adequately used in kriging problems, these algorithms lead to drastic speedups and reductions in storage requirements without changing the kriging estimator. However, they require second-order stationary covariance functions and estimation on regular grids, and the measurements must also form a regular grid. In this study, we show how to alleviate these rather heavy and often unrealistic restrictions. Stationarity can be generalized to intrinsicity and beyond by decomposing kriging problems into the sum of a stationary problem and a formally decoupled regression task. We use universal kriging, because it covers arbitrary forms of unknown drift and all cases of generalized covariance functions. More generally still, we use an extension to uncertain rather than unknown drift coefficients. The sampling locations may now be irregular, but must form a subset of the estimation grid. Finally, we present asymptotically exact but fast approximations to the estimation variance and point out applications to conditional simulation, cokriging, and sequential kriging. The drastic gain in computational and storage efficiency is demonstrated in test cases. Especially high-resolution, data-rich fields such as rainfall interpolation from radar measurements, or seismic and other geophysical inversion, can benefit from these improvements.
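The FFT trick rests on the fact that, on a regular grid, a stationary covariance matrix is (block) Toeplitz, so multiplying it by a vector is a convolution that costs O(n log n) after circulant embedding. A 1-D sketch with an assumed exponential covariance and a small assumed grid:

```python
import numpy as np

n = 64
x = np.arange(n)
# Circulant embedding of an exponential covariance: first column of C
cov_row = np.exp(-np.minimum(x, n - x) / 10.0)
v = np.random.default_rng(0).standard_normal(n)

# Fast product C @ v via FFT (circular convolution theorem)
fast = np.real(np.fft.ifft(np.fft.fft(cov_row) * np.fft.fft(v)))

# Direct product with the explicitly built circulant matrix, for comparison
C = np.array([[cov_row[(i - j) % n] for j in range(n)] for i in range(n)])
direct = C @ v
```

The two products agree to floating-point precision, but the FFT route never forms the n-by-n matrix, which is the source of the storage savings the abstract describes.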

7.
Reliable ore reserve estimation for deposits with highly skewed grade distributions is a difficult task. Although some recent geostatistical techniques are available to handle such estimation problems, ordinary kriging and conventional interpolation methods are still widely used to estimate the ore reserves of such deposits. The estimation results can be very sensitive to the search parameters used during the interpolation of grades with these methods. This paper compares ore reserve estimates from ordinary kriging across several cases in which certain search parameters are varied. The comparisons are extended to different mineralizations to show the changing effects of these parameters.

8.
Classical mathematical statistics recommends maximum likelihood estimators of model parameters because they have minimal variance under the model. The theory of robustness showed, however, that these estimators are unstable under small deviations of the probability density. Estimator stability is necessary for applications, where reality is always more complex than any model, especially in geology, where objects are unique. Methods of the calculus of variations give a measure of estimator stability, and maximum likelihood estimators turn out to have little of it. Simultaneous maximization of efficiency and stability gives new estimators more suitable for applications. Estimator instability is especially harmful in the estimation of the multivariate normal distribution. To avoid instability, multivariate problems are reduced to sequences of bivariate problems. An example solution of a geological problem shows that the methods of classical statistics perform poorly and the reductive method performs much better.

9.
Numerical modeling of complex rock engineering problems involves various input parameters that control the usefulness of the output results. Hence, it is of utmost importance to select the right range of input physical and mechanical parameters based on laboratory or field estimation and engineering judgment. Joint normal and shear stiffnesses are two popular input parameters describing discontinuities in rock, for which no specific estimation guidelines exist in the literature. This study provides simple methods to estimate joint normal and shear stiffnesses in the laboratory using uniaxial compression and small-scale direct shear tests. Samples were prepared from rocks procured from different depths, geographical locations, and formations. The study uses a mixture of relatively smooth natural joints and saw-cut joints in the various rock samples tested. The results indicate acceptable levels of uncertainty in the calculation of the stiffness parameters and provide a database of good first estimates and empirical relations that can be used to calculate joint stiffnesses when laboratory estimation is not possible. Joint basic friction angles were also estimated as by-products of the small-scale direct shear tests.

10.
Sammen, Saad Sh., Mohamed, T. A., Ghazali, A. H., Sidek, L. M., El-Shafie, A. Natural Hazards, 2017, 87(1): 545-566

Dam-break analysis is important for predicting the peak discharge during dam failure. This is essential for assessing the economic, social, and environmental impacts downstream and for preparing the emergency response plan. Dam breach parameters such as breach width, breach height, and breach formation time are the key variables for estimating the peak discharge during a dam break. This study evaluates existing methods for estimating dam breach parameters. Since all of these methods adopt regression analysis, an uncertainty analysis becomes necessary to assess their performance. The uncertainty analysis was performed using data from more than 140 case studies of past recorded dam failures, collected from different sources in the literature. The accuracy of the existing methods was tested; the mean absolute relative error was found to range from 0.39 to 1.05 for dam breach width estimation and from 0.6 to 0.8 for dam failure time estimation. In this study, an artificial neural network (ANN) is recommended as an alternative method for estimating dam breach parameters. The ANN method is proposed because of its accurate predictions when applied to similar cases in water resources.


11.
Variograms of hydrologic characteristics are usually obtained by estimating the experimental variogram for distinct lag classes with commonly used estimators and fitting a suitable function to these estimates. However, these estimators may fail to satisfy the conditional positive-definiteness property or to yield good cross-validation statistics, two essential conditions for a valid variogram model. To satisfy these two conditions, a multi-objective bilevel programming estimator (MOBLP), based on the process of cross-validation, has been developed for better estimation of variogram parameters. The model is illustrated with rainfall data from the Luan River Basin in China. The case study demonstrates that the MOBLP is an effective way to achieve a valid variogram model.
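The conventional step this paper improves on, fitting a parametric model to the experimental variogram, can be sketched as a small least-squares search. Below, a hypothetical set of empirical variogram values is fitted by an exponential model gamma(h) = s*(1 - exp(-h/a)) over a coarse grid of sill s and range a; the paper's bilevel cross-validation criterion is not reproduced here.

```python
import math

lags = [1.0, 2.0, 4.0, 8.0]
emp  = [0.38, 0.63, 0.86, 0.97]   # hypothetical empirical variogram values

def sse(s, a):
    """Sum of squared differences between model and empirical values."""
    return sum((g - s * (1 - math.exp(-h / a))) ** 2
               for h, g in zip(lags, emp))

# Coarse grid search over sill in [0.5, 2.0] and range in [0.5, 5.0]
best = min(((s / 10, a / 10) for s in range(5, 21) for a in range(5, 51)),
           key=lambda p: sse(*p))
sill, rng = best
```

Because an exponential model is conditionally positive-definite by construction, fitting within such a family is one standard way to guarantee the validity condition the abstract highlights.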

12.
王翠  闫澍旺  张荣安 《岩土力学》2007,28(Z1):220-224
Numerical analysis of soil creep generally requires visco-elastoplastic finite element methods, and the choice of parameters is the key issue. Creep of soft clay may continue for decades or even a century, whereas laboratory tests usually last only a few hours or days, at most a few months; extrapolating long-term behaviour from short-term test data is therefore very important. A method is proposed for estimating the finite element parameters of a soft clay creep model from test data, and a visco-elastoplastic finite element program is used to compute the creep of the Tianjin Port jetty to analyse how variations in the model parameters affect the computed results. The visco-elastoplastic finite element system incorporating the proposed parameter analysis and evaluation method has been applied in several engineering computations with good results.

13.
Euclidean Distance Matrix Analysis (EDMA) of form is a coordinate-free approach to the analysis of form using landmark data. In this paper, the problem of estimating the mean form, variance-covariance matrix, and mean form difference under the Gaussian perturbation model is considered using EDMA. The suggested estimators are based on the method of moments. They are shown to be consistent; that is, as the sample size increases, these estimators approach the true parameters. They are also shown to be computationally very simple. A method to improve their efficiency is suggested. Estimation in the presence of missing data is studied. In addition, it is shown that the superimposition method of estimation leads to an incorrect mean form and variance-covariance structure.

14.
Advances in research on bulk transfer coefficients over land surfaces
The research history of bulk transfer coefficients over land surfaces, the methods for their calculation, their spatial and temporal variability, and their dependence on surface variables are reviewed and summarized. On this basis, several open scientific questions in the field are discussed: (1) the calculation and observation of surface parameters over heterogeneous surfaces, and the development of similarity theory applicable to such surfaces, remain unsolved problems; (2) research on bulk transfer coefficients in the humid regions of China needs to be strengthened; (3) a comprehensive understanding of land-atmosphere interaction over heterogeneous and complex surfaces hinges on extending surface parameters from "point" to "area", which is both the key and the difficulty. Finally, feasible methods for estimating surface characteristic parameters and energy fluxes over heterogeneous surfaces by combining satellite remote sensing with ground observations are discussed, with an outlook on future work.

15.
Recursive algorithms for estimating the states of nonlinear physical systems are presented. Orthogonality properties are rediscovered and the associated polynomials are used to linearize the state and observation models of the underlying random processes. This requires some key hypotheses regarding the structure of these processes, which may then accommodate a wide range of applications, including streamflow forecasting, flood estimation, environmental protection, earthquake engineering, and mine planning. The proposed estimation algorithm compares favorably to Taylor series-type filters, nonlinear filters that approximate the probability density by Edgeworth or Gram-Charlier series, and conventional statistical linearization-type estimators. Moreover, the method has several advantages over nonrecursive estimators such as disjunctive kriging. To link theory with practice, some numerical results for a simulated system are presented, in which responses from the proposed and extended Kalman algorithms are compared.

16.
Reconstruction of architectural structures from photographs has recently attracted intensive effort in computer vision research, achieved through the solution of nonlinear least squares (NLS) problems to obtain accurate structure and motion estimates. In photogrammetry, NLS contributes to the determination of three-dimensional (3D) terrain models from images. The traditional NLS approach to the resection-intersection problem, based on an implicit formulation, suffers on the one hand from the lack of any provision for weighting the variables involved. On the other hand, an explicit formulation expresses the objectives to be minimized in different forms, resulting in different values for the estimated parameters at non-zero residuals. These objectives may conflict in a Pareto sense: a small change in the parameters increases one objective and decreases another, as is often the case in multi-objective problems. Such is often the case with errors-in-all-variables (EIV) models, e.g., the resection-intersection problem, where such a change in the parameters can be caused by errors in both the image and the reference coordinates. This study proposes the Pareto optimal approach as a possible improvement to the solution of the resection-intersection problem; it provides simultaneous estimation of the coordinates and orientation parameters of the cameras in a two- or multi-station camera system on the basis of a properly weighted multi-objective function. This objective represents the weighted sum of squares of the direct explicit differences between the measured and computed ground coordinates as well as the image coordinates.
The effectiveness of the proposed method is demonstrated on two camera calibration problems, in which the internal and external orientation parameters are estimated on the basis of the collinearity equations, employing the data of a Manhattan-type test field as well as the data of an outdoor, real-case experiment. In addition, an architectural reconstruction of the Merton College court in Oxford (UK) via estimation of camera matrices is presented. Although these two problems are different, the first concerning the error reduction of the image and spatial coordinates and the second the precision of the space coordinates, Pareto optimality can handle both in a general and flexible way.

17.
In this paper, a stochastic collocation-based Kalman filter (SCKF) is developed to estimate the hydraulic conductivity from direct and indirect measurements. It combines the advantages of the ensemble Kalman filter (EnKF) for dynamic data assimilation and the polynomial chaos expansion (PCE) for efficient uncertainty quantification. In this approach, the random log hydraulic conductivity field is first parameterized by the Karhunen–Loeve (KL) expansion and the hydraulic pressure is expressed by the PCE. The coefficients of the PCE are solved with a collocation technique. Realizations are constructed by choosing collocation point sets in the random space. The stochastic collocation method is non-intrusive in that such realizations are solved forward in time via an existing deterministic solver independently, as in the Monte Carlo method. The needed entries of the state covariance matrix are approximated with the coefficients of the PCE, which can be recovered from the collocation results. The system states are updated by updating the PCE coefficients. A 2D heterogeneous flow example is used to demonstrate the applicability of the SCKF with respect to different factors, such as initial guess, variance, correlation length, and the number of observations. The results are compared with those from the EnKF method. It is shown that the SCKF is computationally more efficient than the EnKF under certain conditions. Each approach has its own advantages and limitations. The performance of the SCKF decreases with larger variance, smaller correlation ratio, and fewer observations. Hence, the choice between the two methods is problem dependent. As a non-intrusive method, the SCKF can be easily extended to multiphase flow problems.
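The KL parameterization used by the SCKF expands the random field on the leading eigenpairs of its covariance matrix. A discrete 1-D sketch is given below; the grid size, exponential covariance model, correlation length, and truncation order are all assumptions for illustration.

```python
import numpy as np

n, corr_len = 50, 10.0
x = np.arange(n, dtype=float)
# Exponential covariance on a regular 1-D grid (assumed model)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

w, V = np.linalg.eigh(C)          # eigh returns ascending eigenvalues
w, V = w[::-1], V[:, ::-1]        # reorder to descending

m = 10                            # truncation order (assumption)
energy = w[:m].sum() / w.sum()    # fraction of variance captured by m terms

# One realization of the field from the truncated KL expansion
gen = np.random.default_rng(0)
xi = gen.standard_normal(m)       # independent standard-normal coefficients
field = V[:, :m] @ (np.sqrt(w[:m]) * xi)
```

Because the random field is now described by the m coefficients xi instead of n grid values, the Kalman update can act on a much smaller state, which is the dimension reduction the SCKF exploits.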

18.
Ordinary kriging and non-linear geostatistical estimators are now well-accepted methods in mining grade control and mine reserve estimation. In kriging, the search volume or ‘kriging neighbourhood’ is defined by the user. The definition of the search space can have a significant impact on the outcome of the kriging estimate; in particular, a too restrictive neighbourhood can result in serious conditional bias. Kriging is commonly described as a ‘minimum variance estimator’, but this is only true when the neighbourhood is properly selected. Arbitrary decisions about the search space are highly risky. The criteria to consider when evaluating a particular kriging neighbourhood are the slope of the regression of the ‘true’ on the ‘estimated’ block grades, the number of negative kriging weights, and the kriging variance. The search radius is one of the most important parameters of the search volume, and is often determined from the range of influence of the variogram. In this paper, the above-mentioned parameters are used to determine the optimal search radius.
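The diagnostics above (negative weights, kriging variance) come from the ordinary kriging system solved for each neighbourhood. The sketch below solves that system for one small hypothetical neighbourhood and inspects the weights; the exponential covariance model and point layout are assumptions.

```python
import numpy as np

def ok_weights(sample_pts, target, sill=1.0, rng_=2.0):
    """Solve the ordinary kriging system [C 1; 1' 0][w; mu] = [c0; 1]
    and return the weights w (the Lagrange multiplier is dropped)."""
    cov = lambda d: sill * np.exp(-d / rng_)
    n = len(sample_pts)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = cov(np.linalg.norm(sample_pts[i] - sample_pts[j]))
    b = np.ones(n + 1)
    b[:n] = [cov(np.linalg.norm(p - target)) for p in sample_pts]
    return np.linalg.solve(A, b)[:n]

# Three close samples and one distant one, estimating at (0.5, 0.5)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
w = ok_weights(pts, np.array([0.5, 0.5]))
n_negative = int((w < 0).sum())
```

The unbiasedness constraint forces the weights to sum to one, and counting `n_negative` over candidate neighbourhoods is one of the evaluation criteria the abstract lists.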

19.
Evaluation and comparison of spatial interpolators II
The performance of several variations on ordinary kriging and inverse distance estimators is evaluated. Mean squared errors (MSE) were calculated for estimates made on multiple resamplings from five exhaustive databases representing two distinctly different types of estimation problem. Ordinary kriging, when performed with variograms estimated from the sample data, was more robust than inverse-distance methods to the type of estimation problem and to the choice of estimation parameters such as the number of neighbors. Notice: Although the research described in this article has been funded in part by the United States Environmental Protection Agency through Cooperative Agreement CR818526 to the Harry Reid Center for Environmental Studies, University of Nevada-Las Vegas, it has not been subjected to Agency review. Therefore it does not necessarily reflect the views of the Agency. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
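One side of the comparison above, an inverse-distance-weighted estimator scored by cross-validation MSE, can be sketched compactly. The data, power parameter, and leave-one-out scheme below are hypothetical illustrations of the evaluation set-up, not the study's databases.

```python
def idw(pts, vals, target, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights are 1/d^power
    (eps guards against division by zero at a data point)."""
    ws = [1.0 / (((px - target[0]) ** 2 + (py - target[1]) ** 2)
                 ** (power / 2) + eps)
          for px, py in pts]
    return sum(w * v for w, v in zip(ws, vals)) / sum(ws)

pts  = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
vals = [1.0, 2.0, 2.0, 3.0, 2.0]   # hypothetical samples of z = 1 + x + y

# Leave-one-out cross-validation: re-estimate each point from the others
errs = [(idw(pts[:i] + pts[i+1:], vals[:i] + vals[i+1:], pts[i])
         - vals[i]) ** 2 for i in range(len(pts))]
mse = sum(errs) / len(errs)
```

Repeating such a cross-validation MSE computation for each interpolator and each resampled data set is the comparison framework the abstract describes.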

20.
Statistical models are among the most preferred of the many landslide susceptibility assessment methods. As landslide occurrences and influencing factors vary spatially, global models such as neural networks or logistic regression (LR) ignore the spatial dependence, or autocorrelation, between observations in susceptibility assessment. However, to assess the probability of a landslide within a specified period of time and within a given area, it is important to understand the spatial correlation between landslide occurrences and influencing factors. Including these relations increases the predictive ability of the developed model. In this respect, spatial regression (SR) and geographically weighted regression (GWR) techniques, which consider spatial variability in the parameters, are proposed in this study for landslide hazard assessment to provide more realistic representations of landslide susceptibility. The proposed model was applied to a case study area in the Møre og Romsdal region of Norway. Topographic (morphometric) parameters (slope angle, slope aspect, and plan and profile curvatures), geological parameters (geological formations, tectonic uplift, and lineaments), a land cover parameter (vegetation coverage), and a triggering factor (precipitation) were considered as landslide influencing factors. These influencing factors, together with a past rock avalanche inventory of the study region, were used to obtain landslide susceptibility maps from the SR and LR models. Comparison of the susceptibility maps obtained from SR and LR shows that the SR models have higher predictive performance. In addition, the performance of the SR and LR models at the local scale was investigated by mapping the differences between the GWR and SR maps and between the GWR and LR maps. These comparison maps help in understanding how the models estimate the coefficients at the local scale, identifying the regions where the SR and LR models over- or underestimate the landslide hazard potential.
