Similar Documents
20 similar documents found (search time: 15 ms)
1.
Precise estimates of the covariance parameters are essential in least-squares collocation (LSC) when accuracy requirements are increased. This paper implements the restricted maximum likelihood (REML) method for the estimation of three covariance parameters in LSC with the Gauss-Markov second-order (GM2) function, which is often used in the interpolation of gravity anomalies. The estimates are then validated with an independent technique, a step often omitted in previous works, which are confined to covariance parameter errors based on the information matrix. Cross-validation of the REML estimates with the hold-out (HO) method helps in understanding REML estimation errors. We analyzed in detail the global minimum of the negative log-likelihood function (NLLF) in the estimation of covariance parameters, as well as the accuracy of the estimates. We found that the correlation between covariance parameters may contribute critically to their estimation errors. It was also found that knowing some intrinsic properties of the covariance function may help in the scoring process.
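The NLLF scoring described above can be sketched as follows. This is a minimal maximum-likelihood variant (the REML projection of the trend is omitted), assuming a GM2 covariance with variance C0, correlation length a, and a nugget term; function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def gm2_covariance(dists, c0, a):
    """Gauss-Markov second-order (GM2) covariance: C(r) = C0 (1 + r/a) exp(-r/a)."""
    return c0 * (1.0 + dists / a) * np.exp(-dists / a)

def nllf(params, coords, z):
    """Negative log-likelihood for (C0, a, nugget) given residuals z at coords.
    A plain ML sketch; REML would first project out the trend."""
    c0, a, nugget = params
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = gm2_covariance(d, c0, a) + nugget * np.eye(len(z))
    sign, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, z)
    return 0.5 * (logdet + z @ alpha + len(z) * np.log(2 * np.pi))
```

Minimizing `nllf` over the three parameters (e.g. with a gradient-free optimizer) locates the global NLLF minimum discussed in the abstract.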

2.
Multidimensional scaling (MDS) has played an important role in non-stationary spatial covariance structure estimation and in analyzing the spatiotemporal processes underlying environmental studies. A combined cluster-MDS model, including geographical spatial constraints, was previously proposed by the authors to address the estimation problem in oversampled domains in a least squares framework. In this paper, a general latent class model with spatial constraints is formulated that, in a maximum likelihood framework, allows the sample stations to be partitioned into classes and the cluster centers to be represented simultaneously in a low-dimensional space, while the stations and clusters retain their spatial relationships. A model selection strategy is proposed to determine the number of latent classes and the dimensionality of the problem. Real and artificial data sets are analyzed to test the performance of the model.

3.
The properties of linear spatial interpolators of single realizations and trend components of regionalized variables are examined in this work. In the case of the single-realization estimator, explicit and exact expressions for the weighting vector and the variances of the estimator and estimation error were obtained from a closed-form expression for the inverse of the Lagrangian matrix. The properties of the trend estimator followed directly from the Gauss-Markoff theorem. It was shown that the single-realization estimator can be decomposed into two mutually orthogonal random functions of the data, one of which is the trend estimator. The implementation of linear spatial estimation was illustrated with three different methods, i.e., full information maximum likelihood (FIML), restricted maximum likelihood (RML), and Rao's minimum norm invariant quadratic unbiased estimation (MINQUE) for the single-realization case, and via generalized least squares (GLS) for the trend. The case study involved a large correlation length-scale in the covariance of specific yield, producing a nested covariance structure that was nearly positive semidefinite. The sensitivity of model parameters, i.e., drift and variance components (local and structured), to the correlation length-scale, choice of covariance model (i.e., exponential and spherical), and estimation method was examined. The same type of sensitivity analysis was conducted for the spatial interpolators. It is interesting that for this case study, characterized by a large correlation length-scale of about 50 mi (80 km), both parameter estimates and linear spatial interpolators were rather insensitive to the choice of covariance model and estimation method within the range of credible values obtained for the correlation length-scale, i.e., 40–60 mi (64–96 km), with alternative estimates falling within ±5% of each other.
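The GLS trend estimation mentioned above has a standard closed form, beta = (X' C^-1 X)^-1 X' C^-1 z, where X is the drift design matrix and C the spatial covariance. A minimal sketch (names are ours, not the paper's):

```python
import numpy as np

def gls_trend(X, cov, z):
    """Generalized least squares estimate of the drift coefficients:
    beta = (X' C^-1 X)^-1 X' C^-1 z, solved without explicit inversion."""
    ci_x = np.linalg.solve(cov, X)   # C^-1 X
    ci_z = np.linalg.solve(cov, z)   # C^-1 z
    return np.linalg.solve(X.T @ ci_x, X.T @ ci_z)
```

With cov equal to the identity this reduces to ordinary least squares, which is a convenient sanity check.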

4.
In the analysis of the unsaturated zone, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the porous media. Adaptive multi-scale parameterization consists in solving the problem through successive approximations, refining the parameter at the next finer scale over the whole domain and stopping the process when the refinement no longer induces a significant decrease of the objective function. In this context, the refinement indicators algorithm provides an adaptive parameterization technique that opens the degrees of freedom in an iterative way, driven at first order by the model, to locate the discontinuities of the sought parameters. We present a refinement indicators algorithm for adaptive multi-scale parameterization that is applicable to the estimation of multi-dimensional hydraulic parameters in unsaturated soil water flow. Numerical examples are presented which show the efficiency of the algorithm in the case of noisy and missing data.

5.
The errors-in-variables (EIV) model is a nonlinear model whose parameters can be solved for by the singular value decomposition (SVD) method or by a general iterative algorithm. The existing formulae for the covariance matrix of total least squares (TLS) parameter estimates do not fully consider the randomness of quantities in the iterative algorithm or the biases of parameter estimates and residuals. In order to provide more reasonable precision information for TLS adjustment, the derivative-free unscented transformation with a scaled symmetric sampling strategy, i.e. the scaled unscented transformation (SUT), is introduced and implemented. In this contribution, we first discuss the existing solutions of TLS adjustment and the covariance matrices of TLS parameter estimates, and derive the general first-order approximate cofactor matrices of random quantities in TLS adjustment. Second, based on combining the TLS iterative algorithm with the calculation process of the SUT, we design two SUT algorithms to calculate the biases and the second-order approximate covariance matrices. Finally, the straight-line fitting model and the plane coordinate transformation model are used to demonstrate that applying the SUT for precision estimation of TLS adjustment is feasible and effective.
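The scaled symmetric sampling behind the SUT can be sketched as the standard 2n+1 sigma-point construction; the wrapping into the TLS iteration is the paper's contribution and is not reproduced here. Parameter defaults (alpha, beta, kappa) follow common unscented-transform conventions and are assumptions.

```python
import numpy as np

def sut(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented transformation: propagate (mean, cov) through a
    nonlinear map f using 2n+1 scaled symmetric sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])      # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))       # mean weights
    wc = wm.copy()                                         # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    ys = np.array([f(x) for x in sigma])
    y_mean = wm @ ys
    dy = ys - y_mean
    y_cov = (wc[:, None] * dy).T @ dy
    return y_mean, y_cov
```

For a linear map the transformation is exact, which gives a simple way to verify an implementation.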

6.
Scale recursive estimation (SRE) is adopted for short-term quantitative precipitation forecasting (QPF). The precipitation field is modelled using a lognormal random cascade, well suited to represent the scaling properties of rainfall fields. To estimate the random cascade parameters, scale recursive maximum likelihood estimation (MLE) is carried out by the iterative expectation-maximization (EM) algorithm. To illustrate the potential of the SRE, a forecast of a synthetically generated rainfall time series is shown. Adaptive estimation of the process parameters is carried out and precipitation forecasts are issued. The forecasts from the SRE are compared with those from standard ARMA models, showing good performance. The SRE is then adopted to forecast an observed half-hourly precipitation series for a two-day storm event in northern Italy. The SRE provides good performance and can therefore be adopted as a tool for short-term QPF.

7.
Linear continuous-time stochastic Nash cascade conceptual models for runoff are developed. The runoff is modeled as a simple system of linear stochastic differential equations driven by white Gaussian and marked point process noises. In the case of d reservoirs, the outputs of these reservoirs form a d-dimensional vector Markov process, of which only the dth coordinate process is observed, usually at a discrete sample of time points. The dth coordinate process is not Markovian. Thus runoff is a partially observed Markov process if it is modeled using the stochastic Nash cascade model. We consider how to estimate the parameters in such models. In principle, maximum likelihood estimation for the complete process parameters can be carried out directly or through some form of the EM (expectation-maximization) algorithm, or a variation thereof, applied to the observed process data. In this research we consider a direct approximate likelihood approach and a filtering approach to an algorithm of EM type, as developed in Thompson and Kaseke (1994). These two methods are applied to real-life runoff data from a catchment in Wales. We also consider a special case of the martingale estimating function approach on the runoff model in the presence of rainfall. Finally, some simulations of the runoff process are given based on the estimated parameters.
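The deterministic skeleton of the Nash cascade above, with only the last reservoir's outflow observed, can be sketched with a simple Euler discretization. This is a sketch of the forward model only, without the stochastic driving noises or the estimation step; names and the time-stepping scheme are our assumptions.

```python
import numpy as np

def nash_cascade(rainfall, k, n_res, dt=1.0):
    """Route a rainfall input through a cascade of n_res linear reservoirs,
    each with storage coefficient k, via Euler steps of dS_i/dt = q_{i-1} - S_i/k.
    Returns the outflow of the last reservoir (the observed coordinate)."""
    S = np.zeros(n_res)
    out = []
    for r in rainfall:
        inflow = r
        for i in range(n_res):
            q = S[i] / k                 # outflow of reservoir i
            S[i] += dt * (inflow - q)    # update storage
            inflow = q                   # feed the next reservoir
        out.append(inflow)
    return np.array(out)
```

A pulse of rain produces the familiar delayed, smoothed hydrograph at the cascade outlet, and total outflow never exceeds total input.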

8.
We present a nonlinear stochastic inverse algorithm that allows conditioning estimates of transient hydraulic heads, fluxes and their associated uncertainty on information about hydraulic conductivity (K) and hydraulic head (h) data collected in a randomly heterogeneous confined aquifer. Our algorithm is based on Laplace-transformed recursive finite-element approximations of exact nonlocal first and second conditional stochastic moment equations of transient flow. It makes it possible to estimate jointly spatial variations in natural log-conductivity (Y = ln K), the parameters of its underlying variogram, and the variance-covariance of these estimates. Log-conductivity is parameterized geostatistically based on measured values at discrete locations and unknown values at discrete “pilot points”. Whereas prior values of Y at pilot points are obtained by generalized kriging, posterior estimates at pilot points are obtained through a maximum likelihood fit of computed and measured transient heads. These posterior estimates are then projected onto the computational grid by kriging. Optionally, the maximum likelihood function may include a regularization term reflecting prior information about Y. The relative weight assigned to this term is evaluated separately from other model parameters to avoid bias and instability. We illustrate and explore our algorithm by means of a synthetic example involving a pumping well. We find that whereas Y and h can be reproduced quite well with parameters estimated on the basis of zero-order mean flow equations, all model quality criteria identify the second-order results as being superior to zero-order results. Identifying the weight of the regularization term and the variogram parameters can be done with much less ambiguity based on second-order than on zero-order results. A second-order model is required to compute predictive error variances of hydraulic head (and flux) a posteriori. Conditioning the inversion jointly on conductivity and hydraulic head data results in lower predictive uncertainty than conditioning on conductivity or head data alone.

9.
The coupled flow-mass transport inverse problem is formulated using the maximum likelihood estimation concept. An evolutionary computational algorithm, the genetic algorithm, is applied to search for a global or near-global solution. The resulting inverse model allows for flow and transport parameter estimation, based on inversion of spatial and temporal distributions of head and concentration measurements. Numerical experiments using a subset of the three-dimensional tracer tests conducted at the Columbus, Mississippi site are presented to test the model's ability to identify a wide range of parameters and parametrization schemes. The results indicate that the model can be applied to identify zoned parameters of hydraulic conductivity, geostatistical parameters of the hydraulic conductivity field, angle of hydraulic conductivity anisotropy, solute hydrodynamic dispersivity, and sorption parameters. The identification criterion, or objective function residual, is shown to decrease significantly as the complexity of the hydraulic conductivity parametrization is increased. Predictive modeling using the estimated parameters indicated that the geostatistical hydraulic conductivity distribution scheme produced good agreement between simulated and observed heads and concentrations. The genetic algorithm, while providing apparently robust solutions, is found to be considerably less efficient computationally than a quasi-Newton algorithm.
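The evolutionary search described above can be sketched as a minimal real-coded genetic algorithm; this is a generic illustration of the technique (tournament selection, blend crossover, Gaussian mutation, elitism), not the authors' specific GA or objective function.

```python
import numpy as np

def genetic_minimize(obj, bounds, pop_size=40, n_gen=60, seed=0):
    """Minimal real-coded genetic algorithm minimizing obj over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        fit = np.array([obj(p) for p in pop])
        new = [pop[fit.argmin()]]                       # elitism: keep the best
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)       # tournament of two
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            w = rng.uniform(size=len(lo))               # blend crossover
            child = w * a + (1.0 - w) * b
            child += rng.normal(0.0, 0.02 * (hi - lo))  # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([obj(p) for p in pop])
    return pop[fit.argmin()], fit.min()
```

In the paper's setting, `obj` would be the maximum likelihood identification criterion over head and concentration residuals; here any black-box function works.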

10.
11.
In the mathematical theory of seismic signal detection and parameter estimation presented here, the seismic measurements are assumed to consist of a sum of signals corrupted by additive Gaussian white noise uncorrelated with the signals. Each signal is assumed to consist of a signal pulse multiplied by a space-dependent amplitude function and with a space-dependent arrival time. The signal pulse, amplitude, and arrival time are estimated by the method of maximum likelihood. For this signal-and-noise model, the maximum likelihood method is equivalent to the method of least squares, which will be shown to correspond to using the signal energy as a coherency measure. The semblance coefficient is equal to the signal energy divided by the measurement energy. For this signal model we get a more general form of the semblance coefficient which reduces to the usual expression in the case of a constant signal amplitude function. The signal pulse, amplitude, and arrival time can be estimated by a simple iterative algorithm. The effectiveness of the algorithm on seismic field data is demonstrated.
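The classical semblance coefficient referenced above (the constant-amplitude special case) has a one-line form: the energy of the stacked trace divided by the number of traces times the total energy. A minimal sketch:

```python
import numpy as np

def semblance(traces):
    """Semblance coefficient for time-aligned traces (n_traces x n_samples):
    sum_t (sum_i a_it)^2 / (M * sum_t sum_i a_it^2). Equals 1 for identical traces."""
    m = traces.shape[0]
    stack = traces.sum(axis=0)
    return (stack ** 2).sum() / (m * (traces ** 2).sum())
```

Identical traces give a semblance of exactly 1, while incoherent noise gives a value near 1/M, which makes the coefficient a convenient coherency measure along candidate arrival-time curves.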

12.
Many stochastic process models for environmental data sets assume a process of relatively simple structure which is in some sense partially observed. That is, there is an underlying process (Xn, n ≥ 0) or (Xt, t ≥ 0) for which the parameters are of interest and physically meaningful, and an observable process (Yn, n ≥ 0) or (Yt, t ≥ 0) which depends on the X process but not otherwise on those parameters. Examples are wide ranging: the Y process may be the X process with missing observations; the Y process may be the X process observed with a noise component; the X process might constitute a random environment for the Y process, as with hidden Markov models; the Y process might be a lower-dimensional function or reduction of the X process. In principle, maximum likelihood estimation for the X process parameters can be carried out by some form of the EM algorithm applied to the Y process data. In the paper we review some current methods for exact and approximate maximum likelihood estimation. We illustrate some of the issues by considering how to estimate the parameters of a stochastic Nash cascade model for runoff. In the case of k reservoirs, the outputs of these reservoirs form a k-dimensional vector Markov process, of which only the kth coordinate process is observed, usually at a discrete sample of time points.

13.
A regression model is used to study spatiotemporal distributions of solute content ion concentration data (calcium, chloride and nitrate), which provide important water contamination indicators. The model consists of three random components and one deterministic component. The random space/time component is assumed to be homogeneous/stationary and to have a separable covariance. The purely spatial and the purely temporal random components are assumed to have homogeneous and stationary increments, respectively. The deterministic component represents the space/time mean function. Inference for the random components involves maximum likelihood and semi-parametric methods under some restrictions on the data configuration. Computational advantages and modelling limitations of the assumptions underlying the regression model are discussed. The regression model leads to simplifications in the space/time kriging and cokriging systems used to obtain space/time estimates at unobservable locations/instants. The application of the regression model to the study of the solute content ions was done at a global scale that covers the entire region of interest. The variability analysis focuses on the calculation of the spatial direct and cross-variograms and the evaluation of correlations between the three solute content ions. The space/time kriging system is developed in terms of the spatial direct and cross-variograms, and allows the separate estimation of the regression model components. Maps of these components are then obtained for each of the three ions. Using the estimates of the purely spatial component, spatial dependencies between the ions are studied. Physical causes and consequences of the space/time variability are discussed, and comparisons are made with previous analyses of the solute content dataset.

14.
Hydrological model parameter estimation is an important aspect of hydrologic modelling. Usually, parameters are estimated through an objective function minimization, quantifying the mismatch between the model results and the observations. The objective function choice has a large impact on the sensitivity analysis and calibration outcomes. In this study, it is assessed whether spectral objective functions can compete with an objective function in the time domain for optimization of the Soil and Water Assessment Tool (SWAT). Three empirical spectral objective functions were applied, based on matching (i) Fourier amplitude spectra, (ii) periodograms and (iii) Fourier series of simulated and observed discharge time series. It is shown that the most sensitive parameters and their optimal values are distinct for different objective functions. The best results were found through calibration with an objective function based on the square difference between the simulated and observed discharge Fourier series coefficients. The potential strengths and weaknesses of using a spectral objective function as compared to utilising a time domain objective function are discussed. Copyright © 2010 John Wiley & Sons, Ltd.
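Variant (i) above, matching Fourier amplitude spectra, can be sketched in a few lines with a real FFT. This is a generic illustration, not the authors' exact SWAT objective; note that an amplitude-spectrum objective is blind to phase, so a circularly shifted hydrograph scores the same as the original.

```python
import numpy as np

def spectral_objective(q_sim, q_obs):
    """Squared difference between the Fourier amplitude spectra of
    simulated and observed discharge series of equal length."""
    a_sim = np.abs(np.fft.rfft(q_sim))
    a_obs = np.abs(np.fft.rfft(q_obs))
    return ((a_sim - a_obs) ** 2).sum()
```

The phase-blindness is exactly why the study also considers periodograms and full Fourier series coefficients, which do retain timing information.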

15.
To describe the uncertainty of strong-earthquake prediction, the profile likelihood estimation method is introduced into the parameter estimation of an extreme value analysis model for earthquake forecasting. The principle and numerical algorithm of profile likelihood estimation for the shape parameter of the generalized extreme value (GEV) distribution and for earthquake return levels are described in detail, and the constructed GEV model is applied to a seismic hazard analysis of the East Kunlun seismic belt. For point estimates of the shape parameter and return levels, and for confidence intervals of return levels within 10 years, profile likelihood estimation performs essentially the same as maximum likelihood estimation. For medium- and long-term return-level confidence intervals, however, profile likelihood estimation yields asymmetric confidence intervals that, at strong-earthquake levels, express the uncertainty of the predicted magnitude more accurately, making the predictions more effective.
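The return level that the profile likelihood is computed for has a standard closed form given fitted GEV parameters. A minimal sketch of the point estimate only (the profile likelihood interval itself requires re-maximizing the likelihood at fixed return level, which is omitted here; parameter names are conventional, not from the paper):

```python
import numpy as np

def gev_return_level(mu, sigma, xi, T):
    """T-period return level of a GEV(mu, sigma, xi) distribution:
    z_T = mu + (sigma/xi) * ((-log(1 - 1/T))**(-xi) - 1) for xi != 0,
    with the Gumbel limit z_T = mu - sigma * log(-log(1 - 1/T)) at xi = 0."""
    y = -np.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:
        return mu - sigma * np.log(y)
    return mu + sigma / xi * (y ** (-xi) - 1.0)
```

The asymmetry of the profile likelihood interval around this point estimate grows with T, which is the behaviour the abstract highlights for medium- and long-term forecasts.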


17.
A new parameter estimation algorithm based on the ensemble Kalman filter (EnKF) is developed. Combined with the proposed problem parameterization, it offers an efficient parameter estimation method that converges using very small ensembles. The inverse problem is formulated as a sequential data integration problem. Gaussian process regression is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen–Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative regularized EnKF algorithm. The filter is converted to an optimization algorithm by using a pseudo time-stepping technique such that the model output matches the time-dependent data. The EnKF Kalman gain matrix is regularized using truncated SVD to filter out noisy correlations. Numerical results show that the proposed algorithm is a promising approach for parameter estimation of subsurface flow models.
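A single EnKF parameter-update step with a truncated-SVD-regularized gain can be sketched as below. This is a generic sketch of the scheme, not the paper's algorithm: the energy-based truncation rule, perturbed-observation form, and all names are our assumptions.

```python
import numpy as np

def enkf_update(ens, obs, obs_op, obs_err_std, svd_keep=0.99, rng=None):
    """One EnKF analysis step on a parameter ensemble (n_par x n_ens).
    The gain uses a truncated-SVD pseudo-inverse of the innovation covariance
    to filter out noisy correlations from the small ensemble."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_ens = ens.shape[1]
    pred = obs_op(ens)                                   # predicted data, (n_obs, n_ens)
    A = ens - ens.mean(axis=1, keepdims=True)            # parameter anomalies
    Y = pred - pred.mean(axis=1, keepdims=True)          # prediction anomalies
    C_yy = Y @ Y.T / (n_ens - 1) + obs_err_std**2 * np.eye(len(obs))
    C_xy = A @ Y.T / (n_ens - 1)
    U, s, Vt = np.linalg.svd(C_yy)
    k = np.searchsorted(np.cumsum(s) / s.sum(), svd_keep) + 1
    C_inv = (U[:, :k] / s[:k]) @ Vt[:k]                  # truncated pseudo-inverse
    K = C_xy @ C_inv                                     # regularized Kalman gain
    perturbed = obs[:, None] + obs_err_std * rng.normal(size=(len(obs), n_ens))
    return ens + K @ (perturbed - pred)
```

Iterating this update with pseudo time-stepping, as the abstract describes, turns the filter into an optimizer over the Karhunen–Loève weights.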

18.
It is often convenient to use synthetically generated random fields to study the hydrologic effects of spatial heterogeneity. Although there are many ways to produce such fields, spectral techniques are particularly attractive because they are fast and conceptually straightforward. This paper describes a spectral algorithm for generating sets of random fields which are correlated with one another. The algorithm is based on a discrete version of the Fourier-Stieltjes representation for multidimensional random fields. The Fourier increment used in this representation depends on a random phase angle process and a complex-valued spectral factor matrix which can be readily derived from a specified set of cross-spectral densities (or cross-covariances). The inverse Fourier transform of the Fourier increment is a complex random field with real and imaginary parts which each have the desired covariance structure. Our complex-valued spectral formulation provides an especially convenient way to generate a set of random fields which all depend on a single underlying (independent) field, provided that the fields in question can be related by space-invariant linear transformations. We illustrate this by generating multi-dimensional mass-conservative groundwater velocity fields which can be used to simulate solute transport through heterogeneous anisotropic porous media.
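The single-field core of the spectral method can be sketched in one dimension: shape complex white noise by the square root of a target power spectrum and inverse-FFT. This is a simplified sketch (one field, Gaussian-shaped spectrum, unit-variance normalization), not the paper's multi-field cross-spectral factor-matrix algorithm; all names are ours.

```python
import numpy as np

def spectral_field(n, corr_len, dx=1.0, seed=0):
    """Generate a 1-D stationary Gaussian random field via the spectral (FFT)
    method: complex white noise times the square root of a power spectrum,
    inverse-transformed and normalized to unit variance."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi            # angular wavenumbers
    spec = np.exp(-(k * corr_len) ** 2 / 4.0)            # ~ spectrum of a Gaussian covariance
    noise = rng.normal(size=n) + 1j * rng.normal(size=n) # random phase process
    field = np.fft.ifft(noise * np.sqrt(spec))
    return field.real / field.real.std()
```

As the abstract notes, the real and imaginary parts of the inverse transform each carry the target covariance, so one draw actually yields two independent fields.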

19.
The estimation of velocity and depth is an important stage in seismic data processing and interpretation. We present a method for velocity-depth model estimation from unstacked data. This method is formulated as an iterative algorithm producing a model which maximizes some measure of coherency computed along traveltimes generated by tracing rays through the model. In the model the interfaces are represented as cubic splines and it is assumed that the velocity in each layer is constant. The inversion includes the determination of the velocities in all the layers and the location of the spline knots. The process input consists of unstacked seismic data and an initial velocity-depth model. This model is often based on nearby well information and an interpretation of the stacked section. Inversion is performed iteratively layer after layer; during each iteration synthetic traveltime curves are calculated for the interface under consideration. A functional characterizing the main correlation properties of the wavefield is then formed along the synthetic arrival times. It is assumed that the functional reaches a maximum value when the synthetic arrival-time curves match the arrival times of the events on the field gathers. The maximum value of the functional is obtained by an effective algorithm of non-linear programming. The present inversion algorithm has the advantages that event picking on the unstacked data is not required and that it does not rely on curve fitting of hyperbolic approximations of the arrival times. The method has been successfully applied to both synthetic and field data.

20.
On the geostatistical approach to the inverse problem
The geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis. Although the geostatistical approach is occasionally misconstrued as mere cokriging, in fact it consists of two steps: estimation of statistical parameters (“structural analysis”) followed by estimation of the distributed parameter conditional on the observations (“cokriging” or “weighted least squares”). It is argued that in inverse problems, which are algebraically undetermined, the challenge is not so much to reproduce the data as to select an algorithm with the prospect of giving good estimates where there are no observations. The essence of the geostatistical approach is that instead of adjusting a grid-dependent and potentially large number of block conductivities (or other distributed parameters), a small number of structural parameters are fitted to the data. Once this fitting is accomplished, the estimation of block conductivities ensues in a predetermined fashion without fitting of additional parameters. The methodology is also compared with a straightforward maximum a posteriori probability estimation method. It is shown that the fundamental differences between the two approaches are: (a) they use different principles to separate the estimation of covariance parameters from the estimation of the spatial variable; and (b) the method for covariance parameter estimation in the geostatistical approach produces statistically unbiased estimates of the parameters that are not strongly dependent on the discretization, while the other method is biased and its bias worsens as the discretization is refined into zones with different conductivity.
