Similar documents
20 similar documents found.
1.
A method for variance component estimation (VCE) in errors-in-variables (EIV) models is proposed, which leads to a novel rigorous total least-squares (TLS) approach. To achieve a realistic estimation of parameters, knowledge about the stochastic model, in addition to the functional model, is required. For an EIV model, the existing TLS techniques either do not consider the stochastic model at all or assume approximate models such as those with only one variance component. In contrast to such TLS techniques, the proposed method considers an unknown structure for the stochastic model in the adjustment of an EIV model. It simultaneously predicts the stochastic model and estimates the unknown parameters of the functional model. Moreover, the method shows how an EIV model can support the Gauss-Helmert model in some cases. To make the VCE theory for the EIV model more applicable, two simplified algorithms are also proposed. The proposed methods are applied to linear regression and datum transformation examples; in particular, a 3-D non-linear close-to-identical similarity transformation is performed. Two simulation studies besides an experimental example give insight into the efficiency of the algorithms.
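The simplest instance of a TLS adjustment — a straight-line fit with errors in both coordinates and a single variance component, i.e. exactly the kind of approximate stochastic model the abstract contrasts against — can be sketched via the SVD. This is an illustrative sketch, not the authors' VCE algorithm:

```python
import numpy as np

def tls_line_fit(x, y):
    """Orthogonal-distance (TLS) fit of a non-vertical line y = a*x + b,
    treating x and y as equally accurate (one variance component)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    # The right singular vector of the smallest singular value of the
    # centred data matrix is the normal vector of the best-fitting line.
    _, _, Vt = np.linalg.svd(np.column_stack([x - xm, y - ym]))
    nx, ny = Vt[-1]
    a = -nx / ny                 # slope (assumes ny != 0, non-vertical line)
    b = ym - a * xm              # intercept through the centroid
    return a, b

a, b = tls_line_fit([0, 1, 2, 3], [1, 3, 5, 7])   # noise-free: a = 2, b = 1
```

With a non-trivial stochastic model (per-element weights or variance components), this closed-form SVD solution no longer applies and iterative schemes such as those proposed in the paper are needed.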

2.
The usual least-squares adjustment within an Errors-in-Variables (EIV) model is often described as the Total Least-Squares Solution (TLSS), just as the usual least-squares adjustment within a Random Effects Model (REM) has become popular under the name of Least-Squares Collocation (without trend). In comparison to the standard Gauss-Markov Model (GMM), the EIV model is less informative, whereas the REM is more informative. It is known under which conditions exactly the GMM or the REM can be equivalently replaced by a model of condition equations or, more generally, by a Gauss-Helmert Model. Similar equivalency conditions are, however, still unknown for the EIV model once it is transformed into such a model of condition equations. In a first step, it is shown in this contribution what the respective residual vector and residual matrix look like if the TLSS is applied to condition equations with a random coefficient matrix to describe the transformation of the random error vector. The results are demonstrated using a numeric example which shows that this approach may be valuable in its own right.

3.
Total least squares (TLS) can solve the issue of parameter estimation in the errors-in-variables (EIV) model; however, the estimated parameters are affected or even severely distorted when the observation vector and coefficient matrix are contaminated by gross errors. Existing robust TLS (RTLS) methods for the EIV model are not entirely satisfactory: most studies construct the weight factor function directly from the original residuals, so robustness with respect to the structure space is not considered. In this study, a robust weighted total least squares (RWTLS) algorithm for the partial EIV model is proposed based on the Gauss-Newton method and the equivalent-weight principle of general robust estimation. The algorithm utilizes the standardized residuals to construct the weight factor function and employs the median method to obtain a robust estimator of the variance component. Therefore, the algorithm possesses good robustness in both the observation and structure spaces. To obtain standardized residuals, we use the linearly approximate cofactor propagation law to derive the expression of the cofactor matrix of the WTLS residuals. The iterative procedure and precision assessment approach for RWTLS are presented. Finally, the robustness of the RWTLS method is verified by two experiments involving line fitting and plane coordinate transformation. The results show that the RWTLS algorithm possesses better robustness than general robust estimation and the robust total least squares algorithm constructed directly with original residuals.
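The equivalent-weight principle the abstract builds on can be illustrated in a minimal form: iteratively reweighted ordinary LS with Huber weight factors computed from standardized residuals, using a median-based (MAD) scale in the spirit of the paper's median variance estimator. This is a generic sketch of the principle, not the paper's RWTLS algorithm for the partial EIV model:

```python
import numpy as np

def huber_weights(std_res, k=1.5):
    """Huber equivalent-weight factors from standardized residuals."""
    r = np.abs(np.asarray(std_res, float))
    w = np.ones_like(r)
    m = r > k
    w[m] = k / r[m]
    return w

def irls_line(x, y, iters=20, k=1.5):
    """Ordinary LS line fit made robust by iterative reweighting, with a
    median/MAD scale used to standardize the residuals."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    coef = np.zeros(2)
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
        res = y - A @ coef
        mad = np.median(np.abs(res - np.median(res)))
        scale = 1.4826 * mad if mad > 0 else 1.0   # robust sigma estimate
        w = huber_weights(res / scale, k)
    return coef

coef = irls_line([0., 1., 2., 3.], [0., 1., 2., 3.])   # clean data: y = x
```

A full RWTLS scheme would additionally reweight the coefficient-matrix corrections, which requires the residual cofactor matrix derived in the paper.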

4.
Data-snooping procedure applied to errors-in-variables models
The theory of Baarda’s data snooping — normal and F tests based on the known and unknown a posteriori variance, respectively — is applied to detect blunders in errors-in-variables (EIV) models, in which gross errors occur in the vector of observations and/or in the coefficient matrix. This work is a follow-up to an earlier work in which we presented the formulation of the weighted total least squares (WTLS) based on the standard least squares theory. This method allows one to directly apply the existing body of knowledge of least squares theory to errors-in-variables models. Among those applications, data snooping methods in an EIV model are of particular interest, and they are the subject of the present contribution. This paper generalizes Baarda’s data snooping procedure of the standard least squares theory to an EIV model. Two empirical examples, a linear regression model and a 2-D affine transformation, using simulated and real data, are presented to show the efficacy of the presented formulation. It is highlighted that the method presented is capable of detecting outlying equations (rather than outlying observations) in a straightforward manner. Further, the WTLS method can be used to handle different TLS problems; for example, the WTLS problem for the condition and mixed models, the WTLS problem subject to constraints, and variance component estimation for an EIV model can easily be established. These issues are in progress for future publications.
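For reference, the classical Baarda w-test in the standard Gauss-Markov model — the starting point that the paper generalizes to the EIV case — can be sketched as follows (a textbook sketch, not the paper's EIV formulation):

```python
import numpy as np

def baarda_w(A, y, Q, sigma0=1.0):
    """Baarda w-test statistics for the Gauss-Markov model y = A x + e,
    e ~ N(0, sigma0**2 * Q).  Under H0 each w_i ~ N(0, 1); |w_i| > 3.29
    flags equation i at the 0.1 % significance level."""
    P = np.linalg.inv(Q)                       # weight matrix
    N = A.T @ P @ A
    x_hat = np.linalg.solve(N, A.T @ P @ y)
    v = y - A @ x_hat                          # residuals
    Qv = Q - A @ np.linalg.solve(N, A.T)       # residual cofactor matrix
    denom = sigma0 * np.sqrt(np.diag(P @ Qv @ P))
    return (P @ v) / denom

# Five repeated observations of one unknown; the last one carries a blunder.
w = baarda_w(np.ones((5, 1)), np.array([0., 0., 0., 0., 4.]), np.eye(5))
```

Here only the contaminated equation exceeds the 3.29 critical value, which is the "outlying equations rather than outlying observations" behaviour the abstract highlights.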

5.
Proper incorporation of linear and quadratic constraints is critical in estimating parameters from a system of equations. These constraints may be used to avoid a trivial solution, to mitigate biases, to guarantee the stability of the estimation, to impose a certain “natural” structure on the system involved, and to incorporate prior knowledge about the system. The Total Least-Squares (TLS) approach as applied to the Errors-In-Variables (EIV) model is the proper method to treat problems where all the data are affected by random errors. A set of efficient algorithms has been developed previously to solve the TLS problem, and a few procedures have been proposed to treat TLS problems with linear constraints and TLS problems with a quadratic constraint. In this contribution, a new algorithm is presented to solve TLS problems with both linear and quadratic constraints. The new algorithm is developed using the Euler-Lagrange theorem while following an optimization process that minimizes a target function. Two numerical examples are employed to demonstrate the use of the new approach in a geodetic setting.

6.
Cartesian coordinate transformation between two erroneous coordinate systems is considered within the Errors-In-Variables (EIV) model. The adjustment of this model is usually called total least-squares (TLS) adjustment. There are many iterative algorithms given in the geodetic literature for this adjustment. They give equivalent results for the same example and for the same user-defined convergence error tolerance. However, their convergence speed and stability are affected adversely if the coefficient matrix of the normal equations in the iterative solution is ill-conditioned. The well-known numerical techniques for fixing this problem, such as regularization or shifting and scaling of the variables in the model, are not easily applied to the complicated equations of these algorithms. The EIV model for coordinate transformations can be considered as the nonlinear Gauss-Helmert (GH) model. The (weighted) standard LS adjustment of the iteratively linearized GH model yields the (weighted) total LS solution. It is uncomplicated to use the above-mentioned numerical techniques in this LS adjustment procedure. In this contribution, it is shown how properly diminished coordinate systems can be used in the iterative solution of this adjustment. Although its equations are mainly studied herein for the 3D similarity transformation with differential rotations, they can be derived for other kinds of coordinate transformations, as shown in the study. The convergence properties of the algorithms established on the basis of the LS adjustment of the GH model are studied using numerical examples. These examples show that using the diminished coordinates for both systems increases the numerical efficiency of the iterative total LS solution in geodetic datum transformation: the corresponding algorithm working with the diminished coordinates converges much faster, with an error at least 10^-5 times smaller than that of the algorithm working with the original coordinates.

7.
8.
Studia Geophysica et Geodaetica - In this contribution, an iterative algorithm for variance-covariance component estimation based on the structured errors-in-variables (EIV) model is proposed. We...

9.
In the analysis of regionalized data, irregular sampling patterns are often responsible for large deviations (fluctuations) between the theoretical and sample semi-variograms. This article proposes a new semi-variogram estimator that is unbiased irrespective of the actual multivariate distribution of the data (provided stationarity holds) and has minimal variance under a given multivariate distribution model. Such an estimator considerably reduces fluctuations in the sample semi-variogram when the data are strongly correlated and clustered in space, and proves to be robust to a misspecification of the multivariate distribution model. The traditional and proposed semi-variogram estimators are compared through an application to a pollution dataset.
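The "traditional" estimator the proposal is compared against is the classical Matheron sample semi-variogram, which can be sketched as:

```python
import numpy as np

def matheron_semivariogram(coords, values, lags, tol):
    """Classical (Matheron) estimator: gamma(h) = sum (z_i - z_j)^2 / (2 N(h))
    over the N(h) point pairs whose separation lies within tol of lag h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    # All pairwise separation distances and squared value differences.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    out = []
    for h in lags:
        pairs = np.triu(np.abs(d - h) <= tol, k=1)   # count each pair once
        out.append(0.5 * sq[pairs].mean() if pairs.any() else np.nan)
    return np.array(out)

# Three points on a line, values 0, 1, 0.
gam = matheron_semivariogram([[0.], [1.], [2.]], [0., 1., 0.], [1., 2.], 0.1)
```

Under clustered sampling, every pair enters this estimator with equal weight, which is precisely the source of the fluctuations the proposed minimum-variance estimator reduces.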

10.
A weighted least-squares (WLS) solution to a 3-D non-linear symmetrical similarity transformation within a Gauss-Helmert (GH) model and/or an errors-in-variables (EIV) model is developed, which does not require linearization. The geodetic weight matrix is the inverse of the observation dispersion matrix (second-order moment); we suppose that the dispersion matrices are non-singular. This is in contrast to the Procrustes algorithm within a Gauss-Markov (GM) model, or even its generalized algorithms within the GH and/or EIV models, which cannot accept geodetic weights. It is shown that the errors-in-variables in the source system do not affect the estimation of the rotation matrix with arbitrary rotational angles, and that the geodetic weights do not participate in the estimation of the rotation matrix. This results in a fundamental correction to the previous algorithm used for this problem, since in that algorithm the rotation matrix is calculated after multiplication by row-wise weights. An empirical example and a simulation study give insight into the efficiency of the proposed procedure.
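The unweighted rotation-only building block that such algorithms generalize — the orthogonal Procrustes (Kabsch) solution via SVD — can be sketched as follows; the abstract's claim is that, in the symmetrical case, neither the geodetic weights nor the source-system errors enter this rotation estimate:

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Rotation R minimising sum ||y_i - R x_i||^2 over proper rotations
    (orthogonal Procrustes / Kabsch, via SVD), after centring both sets."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)        # H = sum x_i y_i^T = U S V^T
    V = Vt.T
    d = np.sign(np.linalg.det(V @ U.T))        # guard against a reflection
    D = np.diag([1.0] * (X.shape[1] - 1) + [float(d)])
    return V @ D @ U.T

# Four points on the unit circle, rotated by 90 degrees.
X = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
R_true = np.array([[0., -1.], [1., 0.]])
R = procrustes_rotation(X, X @ R_true.T)
```

The weighted, linearization-free solution of the paper replaces this closed form while keeping the rotation estimate independent of the row-wise weights.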

11.
Non-stationarity in the statistical properties of the subsurface is often ignored. In a classical linear Bayesian inversion setting of seismic data, the prior distribution of physical parameters is often assumed to be stationary. Here we propose a new method of handling non-stationarity in the variance of physical parameters in seismic data. We propose to infer the model variance prior to inversion using maximum likelihood estimators in a sliding-window approach. A traditional and a localized shrinkage estimator are defined for inferring the prior model variance. The estimators are assessed in a synthetic base case with heterogeneous variance of the acoustic impedance in a zero-offset seismic cross-section. Subsequently, these data are inverted for acoustic impedance using a non-stationary model set up with the inferred variances. Results indicate that prediction as well as posterior resolution is greatly improved using the non-stationary model compared with a common prior model with stationary variance. The localized shrinkage predictor is shown to be slightly more robust than the traditional estimator in terms of amplitude differences in the variance of acoustic impedance and the size of the local neighbourhood. Finally, we apply the methodology to a real data set from the North Sea basin. Inversion results show a more realistic posterior model than a conventional approach with stationary variance.
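A minimal version of the sliding-window ML variance inference (for a zero-mean signal, without the paper's localized shrinkage refinement) might look like:

```python
import numpy as np

def sliding_ml_variance(x, win):
    """ML variance of a zero-mean Gaussian signal in a centred sliding
    window -- a minimal stand-in for local prior-variance inference."""
    x = np.asarray(x, float)
    half = win // 2
    out = np.empty_like(x)
    for i in range(len(x)):
        seg = x[max(0, i - half): i + half + 1]   # window clipped at edges
        out[i] = np.mean(seg ** 2)                # ML estimate, zero mean
    return out

# Variance jumps from 1 to 9 halfway along the trace.
v = sliding_ml_variance([1., -1., 1., -1., 3., -3., 3., -3.], 3)
```

A shrinkage variant would blend each local estimate with the global variance, trading locality against estimation noise — the trade-off the paper evaluates.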

12.
In Seo and Smith (this issue), a set of estimators was built in a Bayesian framework to estimate rainfall depth at an ungaged location using raingage measurements and radar rainfall data. The estimators are equivalent to lognormal co-kriging (simple co-kriging in the Gaussian domain) with uncertain mean and variance of gage rainfall. In this paper, the estimators are evaluated via cross-validation using hourly radar rainfall data and simulated hourly raingage data. Generation of raingage data is based on sample statistics of actual raingage measurements and radar rainfall data. The estimators are compared with lognormal co-kriging and nonparametric estimators. The Bayesian estimators are shown to provide some improvement over lognormal co-kriging under the criteria of mean error, root mean square error, and standardized mean square error. It is shown that, if the prior could be assessed more accurately, the margin of improvement in predicting estimation variance could be larger. In updating the uncertain mean and variance of gage rainfall, inclusion of radar rainfall data is seen to provide little improvement over using raingage data only.

13.
In the geostatistical analysis of regionalized data, the practitioner may not be interested in mapping the unsampled values of the variable that has been monitored, but in assessing the risk that these values exceed or fall short of a regulatory threshold. This kind of concern is part of the more general problem of estimating a transfer function of the variable under study. In this paper, we focus on the multigaussian model, for which the regionalized variable can be represented (up to a nonlinear transformation) by a Gaussian random field. Two cases are analyzed, depending on whether the mean of this Gaussian field is considered known or not, which lead to the simple and ordinary multigaussian kriging estimators respectively. Although both of these estimators are theoretically unbiased, the latter may be preferred to the former for practical applications since it is robust to a misspecification of the mean value over the domain of interest and also to local fluctuations around this mean value. An advantage of multigaussian kriging over other nonlinear geostatistical methods such as indicator and disjunctive kriging is that it makes use of the multivariate distribution of the available data and does not produce order relation violations. The use of expansions into Hermite polynomials provides three additional results: first, an expression of the multigaussian kriging estimators in terms of series that can be calculated without numerical integration; second, an expression of the associated estimation variances; third, the derivation of a disjunctive-type estimator that minimizes the variance of the error when the mean is unknown.

14.
Ad hoc techniques for estimating the quantiles of the Generalized Pareto (GP) and the Generalized Extreme Value (GEV) distributions are introduced. The estimators proposed are based on new estimators of the position and scale parameters recently introduced in the literature. They provide valuable estimates of the quantiles of interest both when the shape parameter is known and when it is unknown (the latter case being of great relevance in practical applications). In addition, weakly consistent estimators are introduced, whose calculation does not require knowledge of any parameter. The procedures are tested on simulated data, and comparisons with other techniques are shown. The research was partially supported by Contract n. ENV4-CT97-0529 within the project “FRAMEWORK” of the European Community – D.G. XII. Grants by “Progetto Giovani Ricercatori” are also acknowledged.
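With the position and scale parameters known, the GP and GEV quantile functions that such estimators target are closed-form. A plain sketch under the standard parameterizations (not the paper's estimators):

```python
import numpy as np

def gpd_quantile(p, xi, mu=0.0, sigma=1.0):
    """Quantile of the Generalized Pareto distribution
    F(x) = 1 - (1 + xi*(x - mu)/sigma)**(-1/xi);  xi = 0 is the exponential case."""
    if xi == 0.0:
        return mu - sigma * np.log1p(-p)
    return mu + sigma * ((1.0 - p) ** (-xi) - 1.0) / xi

def gev_quantile(p, xi, mu=0.0, sigma=1.0):
    """Quantile of the Generalized Extreme Value distribution
    F(x) = exp(-(1 + xi*(x - mu)/sigma)**(-1/xi));  xi = 0 is the Gumbel case."""
    if xi == 0.0:
        return mu - sigma * np.log(-np.log(p))
    return mu + sigma * ((-np.log(p)) ** (-xi) - 1.0) / xi
```

The statistical difficulty addressed in the paper lies in estimating mu, sigma and especially the shape xi from data before these formulas can be applied.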

15.
1 Introduction The process of remotely sensed data acquisition is affected by factors such as the rotation of the earth, the finite scan rate of some sensors, the curvature of the earth, non-ideal sensors, variation in platform altitude, attitude, velocity, etc. [1]. One important procedure which should be done prior to analyzing remotely sensed data is geometric correction (image to map) or registration (image to image) of remotely sensed data. The purpose of geometric correction or registration is to e…

16.
Moving window kriging with geographically weighted variograms
This study adds to our ability to predict the unknown by empirically assessing the performance of a novel geostatistical-nonparametric hybrid technique to provide accurate predictions of the value of an attribute together with locally-relevant measures of prediction confidence, at point locations for a single realisation spatial process. The nonstationary variogram technique employed generalises a moving window kriging (MWK) model where classic variogram (CV) estimators are replaced with information-rich, geographically weighted variogram (GWV) estimators. The GWVs are constructed using kernel smoothing. The resultant and novel MWK–GWV model is compared with a standard MWK model (MWK–CV), a standard nonlinear model (Box–Cox kriging, BCK) and a standard linear model (simple kriging, SK), using four example datasets. Exploratory local analyses suggest that each dataset may benefit from a MWK application. This expectation was broadly confirmed once the models were applied. Model performance results indicate much promise in the MWK–GWV model. Situations where a MWK model is preferred to a BCK model and where a MWK–GWV model is preferred to a MWK–CV model are discussed with respect to model performance, parameterisation and complexity; and with respect to sample scale, information and heterogeneity.

17.
To address the ill-posedness in the joint inversion of vertical and horizontal displacements for the Mogi model, the virtual-observation method for total least-squares (TLS) joint inversion of volcanic deformation is improved, and variance component estimation (VCE) is used to determine the regularization parameter of the ill-posed problem. The parameters carrying prior information are treated as an additional observation equation and solved jointly with the observation equations of the vertical and horizontal displacements. The solution formulas for the joint inversion of the three types of observation equations, and the expression for determining the regularization parameter based on TLS variance component estimation, are derived, and the iterative procedure of the algorithm is given. Numerical experiments investigate the application of the virtual-observation TLS joint-inversion method to deformation inversion with the volcanic Mogi model. The results show that the joint adjustment of the three data types, together with the variance component estimation method, can determine the weight-ratio factors and yield corrected pressure-source parameters, which is of practical reference value.
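The virtual-observation device — stacking prior information on the parameters as extra observation equations, with the regularization parameter acting as their weight — can be sketched for the ordinary least-squares case (the paper develops the TLS version with VCE-selected weights):

```python
import numpy as np

def ls_with_virtual_obs(A, y, x0, alpha):
    """Regularized LS via virtual observations: append the prior x ~ x0,
    weighted by alpha, to the measurement equations A x ~ y.  Equivalent
    to Tikhonov regularization with parameter alpha."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])
    y_aug = np.concatenate([y, np.sqrt(alpha) * np.asarray(x0, float)])
    x, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return x

A = np.array([[1.0], [1.0]])
y = np.array([1.0, 3.0])
x_free = ls_with_virtual_obs(A, y, [0.0], 0.0)    # plain LS: x = 2
x_reg = ls_with_virtual_obs(A, y, [0.0], 1e9)     # prior dominates: x -> 0
```

In the paper, the relative weight between the displacement equations and the virtual (prior) equations is not fixed by hand as above but estimated by variance component estimation.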

18.
A maximum-likelihood (ML) estimator of the correlation dimension d2 of fractal sets of points not affected by the left-hand truncation of their inter-distances is defined. Such truncation might produce significant biases in the ML estimates of d2 when the observed scale range of the phenomenon is very narrow, as often occurs in seismological studies. A second, very simple algorithm based on the determination of the first two moments of the inter-distance distribution (SOM) is also proposed, itself not biased by the left-hand truncation effect. The asymptotic variance of the ML estimates is given. Statistical tests carried out on data samples of different sizes, extracted from populations of inter-distances following a power law, suggested that the sample variances of the estimates obtained by the proposed methods are not significantly different, and are well estimated by the asymptotic variance even for samples containing a few hundred inter-distances. To examine the effects of different sources of systematic errors, the two estimators were also applied to sets of inter-distances between points belonging to statistical fractal distributions, baker's maps and experimental distributions of earthquake epicentres. For a full evaluation of the results achieved by the methods proposed here, these were compared with those obtained by the ML estimator for untruncated samples or by the least-squares algorithm.
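The untruncated ML estimator of the correlation dimension (Takens' estimator, the baseline the truncation-corrected estimators are compared against) is essentially one line:

```python
import numpy as np

def takens_d2(dists, r_max):
    """Takens' ML estimator of the correlation dimension: for inter-point
    distances r_i <= r_max with P(r < R) proportional to R**d,
    d_hat = N / sum(log(r_max / r_i)).  No left-truncation correction."""
    r = np.asarray(dists, float)
    r = r[(r > 0) & (r <= r_max)]
    return r.size / np.sum(np.log(r_max / r))

# Distances engineered so each log term equals 1/2, giving d_hat = 2 exactly.
d_hat = takens_d2(np.full(100, np.exp(-0.5)), 1.0)
```

When the small inter-distances below some r_min are censored (the left-hand truncation of the abstract), this estimator becomes biased, which motivates the corrected likelihood developed in the paper.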

19.
The multi-Gaussian model is used in geostatistical applications to predict functions of a regionalized variable and to assess uncertainty by determining local (conditional to neighboring data) distributions. The model relies on the assumption that the regionalized variable can be represented by a transform of a Gaussian random field with a known mean value, which is often a strong requirement. This article presents two variations of the model to account for an uncertain mean value. In the first one, the mean of the Gaussian random field is regarded as an unknown non-random parameter. In the second model, the mean of the Gaussian field is regarded as a random variable with a very large prior variance. The properties of the proposed models are compared in the context of non-linear spatial prediction and uncertainty assessment problems. Algorithms for the conditional simulation of Gaussian random fields with an uncertain mean are also examined, and problems associated with the selection of data in a moving neighborhood are discussed.
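The practical counterpart of treating the mean as unknown is ordinary (rather than simple) kriging: a unit-sum constraint on the weights removes the dependence on a known mean. A minimal sketch of the ordinary-kriging system:

```python
import numpy as np

def ordinary_kriging_weights(C, c0):
    """Solve the ordinary kriging system [[C, 1], [1^T, 0]] [w; mu] = [c0; 1].
    C: covariances among the data; c0: covariances data-to-target.
    The constraint sum(w) = 1 makes the predictor mean-invariant."""
    n = len(c0)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = C
    K[n, n] = 0.0
    rhs = np.append(np.asarray(c0, float), 1.0)
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n]            # weights, Lagrange multiplier

# Two uncorrelated data points, equally covariant with the target.
w, mu = ordinary_kriging_weights(np.eye(2), [0.5, 0.5])
```

The paper's second variant (random mean with very large prior variance) yields predictors that coincide with this constrained form in the limit.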

20.
A previously published mixing length (ML) model for evaluating the Darcy–Weisbach friction factor under large-scale roughness conditions (depth-to-sediment-height ratio ranging from 1 to 4) is briefly reviewed and modified (MML). Then the MML model and a modified drag (MD) model are experimentally tested using laboratory measurements carried out for gravel-bed channels under large-scale roughness conditions. This analysis showed that the MML gives accurate estimates of the Darcy–Weisbach coefficient, and for Froude number values greater than 0.5 the MML model coincides with the ML one. Testing of the MD model shows limited accuracy in estimating flow resistance. Finally, the MML and MD models are compared with the performance of a quasi-theoretical (QT) model deduced by applying the Π-theorem of dimensional analysis and the incomplete self-similarity condition for the depth/sediment ratio and the Froude number. Using the experimental gravel-bed data to calibrate the QT model, a constant value of the exponent of the Froude number is determined, while two relationships are proposed for estimating the scale factor and the exponent of the depth/sediment ratio. This indirect estimation procedure for the coefficients (b0, b1 and b2) of the QT model can produce a negligible overestimation or underestimation of the friction factor. Copyright © 2003 John Wiley & Sons, Ltd.
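Back-calculating the Darcy–Weisbach friction factor from uniform-flow measurements — the quantity all the tested models aim to predict — is straightforward; the values below are illustrative, not data from the study:

```python
def darcy_weisbach_f(V, R, S, g=9.81):
    """Darcy-Weisbach friction factor from uniform-flow measurements:
    f = 8 g R S / V**2, with mean velocity V (m/s), hydraulic radius R (m)
    and energy slope S (dimensionless)."""
    return 8.0 * g * R * S / V**2

# Example: V = 2 m/s, R = 0.5 m, S = 0.01.
f = darcy_weisbach_f(V=2.0, R=0.5, S=0.01)
```

The ML/MML and QT models of the paper predict f (or (8/f)^(1/2)) from the depth/sediment ratio and the Froude number instead of measuring it directly.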
