Similar Articles
20 similar articles found.
1.
2.
In this paper, the maximum likelihood method for inferring the parameters of spatial covariances is examined. The advantages of maximum likelihood estimation are discussed and it is shown that this method, derived assuming a multivariate Gaussian distribution for the data, gives a sound criterion for fitting covariance models irrespective of the multivariate distribution of the data. However, this distribution is impossible to verify in practice when only one realization of the random function is available. In that case, the maximum entropy method is the only sound criterion for assigning probabilities in the absence of information. Because the multivariate Gaussian distribution has the maximum entropy property for a fixed vector of means and covariance matrix, the multinormal distribution is the most logical choice as a default distribution for the experimental data. Nevertheless, it should be clear that the assumption of a multivariate Gaussian distribution is maintained only for the inference of spatial covariance parameters and not necessarily for other operations such as spatial interpolation, simulation, or estimation of spatial distributions. Various results from simulations are presented to support the claim that the simultaneous use of the maximum likelihood method and the classical nonparametric method of moments can considerably improve results in the estimation of geostatistical parameters.
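The Gaussian likelihood criterion this abstract defends can be sketched in a few lines. The exponential covariance model, the nugget value, and all names below are our own illustrative choices, not the paper's:

```python
import numpy as np

def exp_cov(h, sill, range_):
    # assumed exponential covariance model: C(h) = sill * exp(-h / range)
    return sill * np.exp(-h / range_)

def neg_log_likelihood(params, coords, z):
    # Gaussian negative log-likelihood (constant term dropped) for a
    # zero-mean field; a tiny nugget keeps the Cholesky factor stable.
    sill, range_ = params
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = exp_cov(h, sill, range_) + 1e-8 * np.eye(len(z))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)
    return float(np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha)
```

Minimizing this over (sill, range), e.g. with `scipy.optimize.minimize`, yields maximum likelihood covariance parameters whether or not the data are truly multinormal, which is the fitting criterion the abstract argues for.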

3.
A Bayesian linear inversion methodology based on Gaussian mixture models and its application to geophysical inverse problems are presented in this paper. The proposed inverse method is based on a Bayesian approach under the assumptions of a Gaussian mixture random field for the prior model and a Gaussian linear likelihood function. The model for the latent discrete variable is defined to be a stationary first-order Markov chain. In this approach, a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed. A Markov chain Monte Carlo algorithm can be used to efficiently simulate realizations from the correct posterior model. Two inversion studies based on real well log data are presented, and the main results are the posterior distributions of the reservoir properties of interest, the corresponding predictions and prediction intervals, and a set of conditional realizations. The first application is a seismic inversion study for the prediction of lithological facies, P- and S-impedance, where an improvement of 30% in the root-mean-square error of the predictions compared to the traditional Gaussian inversion is obtained. The second application is a rock physics inversion study for the prediction of lithological facies, porosity, and clay volume, where predictions slightly improve compared to the Gaussian inversion approach.

4.
A novel RANSAC robust estimation technique is presented as an efficient method for solving the seven-parameter datum transformation problem in the presence of outliers. The RANSAC method, which is frequently employed in geodesy, has two sensitive features: (i) the user must adjust some parameters of the algorithm, making it a subjective and rather difficult procedure, and (ii) in its shell, a nonlinear system of equations must be solved repeatedly. In this contribution, we suggest an automatic adjustment strategy for the most important parameter, the ‘threshold value’, based on the ‘early stopping’ principle of machine learning. Instead of using iterative numerical methods, we propose the use of an algebraic polynomial system developed via a dual-quaternion technique and solved by a non-iterative homotopy method, thereby reducing the computation time considerably. The novelty of the proposed approach lies in three major contributions: (i) the provision for automatically finding the proper error limit parameter for the RANSAC method, which has until now been a trial-and-error exercise; (ii) employing the algebraic polynomial form of the dual-quaternion solution in the RANSAC shell, thereby accelerating the repeatedly requested solution process; and (iii) avoiding iterations via a heuristic approach to the scaling parameter. To illustrate the proposed method, the transformation parameters from the Western Australian Geodetic Datum (AGD 84) to the Geocentric Datum of Australia (GDA 94) are computed.
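The RANSAC consensus loop itself is standard; the following is a generic line-fitting sketch of our own (not the paper's dual-quaternion variant) that shows the role of the threshold parameter the paper tunes automatically:

```python
import numpy as np

def ransac_line(x, y, threshold, n_iter=200, seed=0):
    # Minimal RANSAC: repeatedly fit a line to a random 2-point sample
    # and keep the model with the largest inlier consensus set.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        inliers = np.abs(y - (slope * x + intercept)) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set with ordinary least squares
    A = np.vstack([x[best_inliers], np.ones(best_inliers.sum())]).T
    slope, intercept = np.linalg.lstsq(A, y[best_inliers], rcond=None)[0]
    return slope, intercept, best_inliers
```

A threshold that is too tight rejects good data; one that is too loose admits outliers, which is why an automatic choice of this error limit matters.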

5.
Application of EM algorithms for seismic facies classification
Identification of the geological facies and their distribution from seismic and other available geological information is important during the early stage of reservoir development (e.g. deciding on initial well locations). Traditionally, this is done by manually inspecting the signatures of the seismic attribute maps, which is very time-consuming. This paper proposes an application of the Expectation-Maximization (EM) algorithm to automatically identify geological facies from seismic data. While the properties within a certain geological facies are relatively homogeneous, the properties between geological facies can be rather different. Assuming that noisy seismic data of a geological facies, which reflect rock properties, can be approximated with a Gaussian distribution, the seismic data of a reservoir composed of several geological facies are samples from a Gaussian mixture model. The mean of each Gaussian model represents the average value of the seismic data within each facies, while the variance gives the variation of the seismic data within a facies. The proportions in the Gaussian mixture model represent the relative volumes of different facies in the reservoir. In this setting, the facies classification problem becomes a problem of estimating the parameters defining the Gaussian mixture model. The EM algorithm has long been used to estimate Gaussian mixture model parameters. As the standard EM algorithm does not consider the spatial relationship among data, it can generate spatially scattered seismic facies, which is physically unrealistic. We improve the standard EM algorithm by adding a spatial constraint to enhance the spatial continuity of the estimated geological facies. By applying the EM algorithms to acoustic impedance and Poisson’s ratio data for two synthetic examples, we are able to identify the facies distribution.
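As a baseline sketch of the non-spatial EM update that the paper extends with a spatial constraint, here is a 1-D Gaussian mixture fit; the quantile-based initialization is our own assumption:

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    # Standard EM for a 1-D Gaussian mixture: the E-step computes facies
    # responsibilities, the M-step updates proportions, means, variances.
    mu = np.quantile(x, np.linspace(0.0, 1.0, k))  # assumed initialization
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted samples
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

The paper's improvement adds a spatial term to the E-step so that neighbouring samples favour the same facies; the sketch above is the unconstrained starting point.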

6.
At present, research on model structure uncertainty relies mainly on Bayesian model averaging, whose application is limited by difficulties such as computing the model weights. Data-driven statistical learning of model structural error has recently attracted attention. This study uses Gaussian process regression to statistically model the structural error of groundwater models, and combines the DREAMzs algorithm with Gaussian process regression to identify the parameters of the groundwater model and the statistical model simultaneously. Based on this method, uncertainty analyses of groundwater numerical simulations and predictions are carried out for two cases: an idealized seawater intrusion process in karst fissures and a solute transport column experiment. Compared with uncertainty analysis that ignores model structural error, the results show that accounting for structural error markedly reduces parameter compensation effects during parameter identification and significantly improves the predictive performance of the model. Model structure uncertainty analysis based on Gaussian process regression can therefore control, to some extent, the uncertainty of groundwater numerical simulation and improve the reliability of model predictions.

7.
Empirical Maximum Likelihood Kriging: The General Case
Although linear kriging is a distribution-free spatial interpolator, its efficiency is maximal only when the experimental data follow a Gaussian distribution. Transformation of the data to normality has thus always been appealing. The idea is to transform the experimental data to normal scores, krige values in the “Gaussian domain” and then back-transform the estimates and uncertainty measures to the “original domain.” An additional advantage of the Gaussian transform is that spatial variability is easier to model from the normal scores because the transformation reduces effects of extreme values. There are, however, difficulties with this methodology, particularly, choosing the transformation to be used and back-transforming the estimates in such a way as to ensure that the estimation is conditionally unbiased. The problem has been solved for cases in which the experimental data follow some particular type of distribution. In general, however, it is not possible to verify distributional assumptions on the basis of experimental histograms calculated from relatively few data and where the uncertainty is such that several distributional models could fit equally well. For the general case, we propose an empirical maximum likelihood method in which transformation to normality is via the empirical probability distribution function. Although the Gaussian domain simple kriging estimate is identical to the maximum likelihood estimate, we propose use of the latter, in the form of a likelihood profile, to solve the problem of conditional unbiasedness in the back-transformed estimates. Conditional unbiasedness is achieved by adopting a Bayesian procedure in which the likelihood profile is the posterior distribution of the unknown value to be estimated and the mean of the posterior distribution is the conditionally unbiased estimate. The likelihood profile also provides several ways of assessing the uncertainty of the estimation. 
Point estimates, interval estimates, and uncertainty measures can be calculated from the posterior distribution.
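The transformation to normal scores via the empirical probability distribution function can be sketched as follows; the mid-rank plotting positions and linear interpolation on the back-transform are our simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import norm

def normal_score_transform(z):
    # Empirical Gaussian anamorphosis: map data to standard normal
    # scores through the empirical CDF (mid-rank plotting positions).
    n = len(z)
    order = np.argsort(z)
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    p = (ranks - 0.5) / n
    return norm.ppf(p)

def back_transform(y, z):
    # Map Gaussian-domain values back through the empirical quantiles.
    n = len(z)
    zs = np.sort(z)
    p = np.clip(norm.cdf(y), 0.5 / n, 1 - 0.5 / n)
    return np.interp(p, (np.arange(1, n + 1) - 0.5) / n, zs)
```

Kriging is then performed on the normal scores and the estimates are back-transformed; the paper's contribution is doing this back-transformation in a conditionally unbiased way via the likelihood profile.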

8.
Building of models in the Earth Sciences often requires the solution of an inverse problem: some unknown model parameters need to be calibrated with actual measurements. In most cases, the set of measurements cannot completely and uniquely determine the model parameters; hence multiple models can describe the same data set. Bayesian inverse theory provides a framework for solving this problem. Bayesian methods rely on the fact that the conditional probability of the model parameters given the data (the posterior) is proportional to the likelihood of observing the data times a prior belief expressed as a prior distribution of the model parameters. In case the prior distribution is not Gaussian and the relation between data and parameters (the forward model) is strongly non-linear, one has to resort to iterative samplers, often Markov chain Monte Carlo methods, to generate samples that fit the data likelihood and reflect the prior model statistics. While theoretically sound, such methods can be slow to converge and are often impractical when the forward model is CPU demanding. In this paper, we propose a new sampling method that allows sampling from a variety of priors and conditioning model parameters to a variety of data types. The method does not rely on the traditional Bayesian decomposition of the posterior into likelihood and prior; instead it uses so-called pre-posterior distributions, i.e. the probability of the model parameters given some subset of the data. The use of pre-posteriors allows the data to be decomposed into so-called “easy data” (or linear data) and “difficult data” (or nonlinear data). The method relies on fast non-iterative sequential simulation to generate model realizations.
The difficult data are matched by perturbing an initial realization using a perturbation mechanism termed “probability perturbation.” The probability perturbation method moves the initial guess closer to matching the difficult data, while maintaining the prior model statistics and the conditioning to the linear data. Several examples are used to illustrate the properties of this method.

9.
Stochastic simulation is an important part of geostatistics. Among ore grade estimation methods, kriging, as an unbiased estimator, is commonly used to estimate ore grades, but kriging estimates suffer from a smoothing effect. Building on the basic principles of sequential Gaussian simulation and ordinary kriging, the authors apply both methods to estimate the total iron (TFe) grade within an iron ore body and present the resulting grade models. Variograms were computed along the exploration-line direction, perpendicular to the exploration lines, and in the vertical direction, and the fits of spherical, exponential, and Gaussian variogram models were compared; the spherical model fitted best. Comparison of the sequential simulation and kriging grade estimates shows that the sequential Gaussian simulation results reproduce the shape of the sample grade distribution more closely and exhibit less smoothing, whereas the kriging estimates are closer to the sample grades only in terms of the mean grade. Sequential Gaussian simulation is therefore considered better able to characterize the grade distribution within the ore body.

10.
The nonlinear filtering problem occurs in many scientific areas. Sequential Monte Carlo solutions with the correct asymptotic behavior such as particle filters exist, but they are computationally too expensive when working with high-dimensional systems. The ensemble Kalman filter (EnKF) is a more robust method that has shown promising results with a small sample size, but the samples are not guaranteed to come from the true posterior distribution. By approximating the model error with a Gaussian distribution, one may represent the posterior distribution as a sum of Gaussian kernels. The resulting Gaussian mixture filter has the advantage of both a local Kalman type correction and the weighting/resampling step of a particle filter. The Gaussian mixture approximation relies on a bandwidth parameter which often has to be kept quite large in order to avoid a weight collapse in high dimensions. As a result, the Kalman correction is too large to capture highly non-Gaussian posterior distributions. In this paper, we have extended the Gaussian mixture filter (Hoteit et al., Mon Weather Rev 136:317–334, 2008) and also made the connection to particle filters more transparent. In particular, we introduce a tuning parameter for the importance weights. In the last part of the paper, we have performed a simulation experiment with the Lorenz40 model where our method has been compared to the EnKF and a full implementation of a particle filter. The results clearly indicate that the new method has advantages compared to the standard EnKF.
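The importance-weight tuning parameter mentioned above can be sketched as simple likelihood tempering; this is a generic stand-in under our own assumptions, not the authors' exact bridging scheme:

```python
import numpy as np

def tempered_weights(loglik, alpha):
    # Tempered importance weights: w_i proportional to exp(alpha * loglik_i).
    # alpha = 1 recovers plain particle-filter weights; alpha < 1 flattens
    # the weights, trading posterior accuracy for robustness to collapse.
    w = np.exp(alpha * (loglik - loglik.max()))  # max-shift for stability
    return w / w.sum()

def effective_sample_size(w):
    # Standard ESS diagnostic: small values signal weight degeneracy.
    return 1.0 / np.sum(w ** 2)
```

Flattening the weights raises the effective sample size, which is the failure mode (weight collapse in high dimensions) the tuning parameter is designed to control.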

11.
This paper describes the development and application of new mathematical models for estimation of well productivity during drainage of methane gob gas associated with coal extraction. It is established that the relationship between methane emission from surface gob gas wells and the duration of well production can be described by a Gaussian (normal) distribution. Mathematical models based on the Gaussian error distribution function and the Gaussian density function were proposed to describe the correlation between parameters of methane emission from gob gas wells, duration of well production, and the time coordinate of maximum gas emission. These models allow prediction of the total volume of gas which can be extracted over the entire period of well production, the maximum volumetric flow rate of gas emission, and the time coordinate of maximum gas emission using at least three measurements of gas volumetric rate (or gas volume) from a gas well at any time during the well production period.
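Because the logarithm of a Gaussian rate curve is quadratic in time, three flow-rate measurements determine the curve exactly, which illustrates why the abstract's three-measurement claim works. The parameterization below is our own sketch, not the paper's model:

```python
import numpy as np

def gaussian_rate_params(t, q):
    # Assumed model: q(t) = Q_tot * exp(-(t - t_max)^2 / (2 sigma^2)) / (sigma sqrt(2 pi)),
    # so ln q(t) is a parabola in t; fit it through the (t, q) measurements
    # and read the Gaussian parameters off the quadratic coefficients.
    a, b, c = np.polyfit(t, np.log(q), 2)     # ln q = a t^2 + b t + c
    sigma = np.sqrt(-1.0 / (2.0 * a))          # requires a < 0 (a peaked curve)
    t_max = -b / (2.0 * a)                     # time of maximum emission
    q_max = np.exp(c - b ** 2 / (4.0 * a))     # peak volumetric rate
    Q_tot = q_max * sigma * np.sqrt(2.0 * np.pi)  # total extractable volume
    return Q_tot, t_max, q_max, sigma
```

With more than three measurements the same quadratic fit becomes a least-squares estimate instead of an exact solve.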

12.
Statistical modelling of thermal annealing of fission tracks in apatite
We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider “fanning Arrhenius” models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen.

This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix.

We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.


13.
Object models are widely used to model the distribution of facies in a reservoir. Several computer programs exist for modelling fluvial channels or more general facies objects. This paper focuses on a marked point model with objects that are able to orient locally according to a vector field. In this way, objects with locally varying curvature are created. With objects of this kind it is possible to model complex depositional basins that are not easily modelled with conventional methods. The new object type is called Backbone objects. The objects have a piecewise linear centerline and are able to follow the direction of a three-dimensional vector field locally in the lateral and vertical directions. How well the objects follow the vector field is determined by three parameters. Use of different coordinate systems and mapping between the systems makes it possible to generate Gaussian random fields that follow the shape and direction of the objects. The Gaussian fields can be used to model petrophysical variables, which is important for fluid flow modelling.

14.
Classical 3D/4D variational fusion is based on the assumption that errors follow a Gaussian distribution. Its minimization iterations involve the gradient of the objective function, whose evaluation requires continuity of the data. This paper extends the classical 3D/4D variational fusion method by explicitly adding prior knowledge, in the form of an L1-norm regularization constraint, to the classical variational formulation. In the implementation, the original data are first projected into the wavelet domain, the new fusion model is applied in wavelet space, and the inverse wavelet transform projects the result back to the observation space. An idealized experiment was carried out using a linear advection-diffusion equation as the four-dimensional prediction model, under the hypothesis that the background and observation data are discontinuous, i.e., the left and right derivatives are unequal at some points. The experiment shows that the method adopted here is practicable. Further research addressed multi-source precipitation fusion: CMORPH retrieved precipitation data were first corrected through probability density function (PDF) matching based on a fitted gamma function, and the corrected data were then fused with the observations. Comparison with the reference field shows that this method better preserves certain outliers, which may represent real weather phenomena. The L1-norm regularized variational fusion presented in this paper provides a possible way to handle discontinuous data, especially at jump points.
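A generic stand-in for the L1-regularized variational step is the iterative soft-thresholding algorithm (ISTA) on wavelet coefficients; this is our own sketch of the technique family, not the paper's fusion scheme:

```python
import numpy as np

def soft_threshold(x, lam):
    # proximal operator of the L1 norm: shrink coefficients toward zero
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, b, lam, n_iter=500):
    # ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1, a standard solver
    # for L1-regularized least squares; A would map wavelet coefficients
    # to observations in a fusion setting.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), lam * step)
    return x
```

The L1 penalty keeps large coefficients (edges, jump points, outliers) while suppressing small noisy ones, which matches the abstract's observation that the method preserves physically meaningful outliers.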

15.
A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. 
In the second method, absolute older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples.
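The asymmetric triangular specification (younger bound, most probable age, older bound) has a closed-form inverse CDF, which makes the Monte Carlo draws described above cheap. The helper below is our own sketch, not the paper's code:

```python
import numpy as np

def triangular_ppf(u, lo, mode, hi):
    # Inverse CDF of an asymmetric triangular distribution with support
    # [lo, hi] and mode at `mode`; feeding in uniform u gives age samples.
    u = np.asarray(u, dtype=float)
    fc = (mode - lo) / (hi - lo)                       # CDF value at the mode
    left = lo + np.sqrt(u * (hi - lo) * (mode - lo))   # rising branch
    right = hi - np.sqrt((1 - u) * (hi - lo) * (hi - mode))  # falling branch
    return np.where(u < fc, left, right)
```

Passing `u = rng.uniform(size=n)` then yields n simulated ages for the calibration resampling loop.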

16.
This paper introduces geostatistical approaches (i.e., kriging estimation and simulation) for a group of non-Gaussian random fields that are power algebraic transformations of Gaussian and lognormal random fields. These are power random fields (PRFs) that allow the construction of stochastic polynomial series. They were derived from the exponential random field, which is expressed as a Taylor series expansion with PRF terms. The equations developed from computation of moments for conditional random variables allow the correction of Gaussian kriging estimates for the non-Gaussian space. The introduced PRF geostatistics provides tools for integrating data that require simple algebraic transformations, such as the regression polynomials commonly encountered in practical estimation applications. The approach also allows for simulations drawn from skewed distributions.

17.
边少锋 (Bian Shaofeng), J. Menz. 《地球科学》 (Earth Science), 2000, 25(2): 195-200
The concept of function approximation using surfaces of revolution as basis functions is first introduced. On this basis, through an involved matrix derivation, it is proved that universal kriging can be expressed as conventional weighted least-squares polynomial fitting combined with function approximation using surfaces of revolution as basis functions. Under certain conditions (a highly continuous random field with no nugget effect), it is further shown that the parameters of the covariance (i.e., of the surface of revolution) can be determined analytically, and two criteria for determining the covariance function are given, using the Gaussian function as an example.

18.
Analysis of bimodal orientation data
Statistical models underlying the analysis of orientation data commonly assume a unimodal symmetric population, such as the circular normal distribution. If the sample distribution is skewed or bimodal, standard procedures usually produce misleading results. Where such situations occur, a mixture of two or more circular normal distributions may be used as the population model. The parameters describing each mode and the mixing proportion may be estimated by the method of maximum likelihood using numerical techniques. This approach is applied to a distinctly bimodal set of cross-bedding data from the Mississippian Salem Limestone of central Indiana.
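Maximum likelihood for a two-component circular normal (von Mises) mixture is commonly implemented with EM; the sketch below is our own generic version (user-supplied initial mean directions and the Best-Fisher approximation for the concentration update are our assumptions, not the paper's procedure):

```python
import numpy as np
from scipy.special import i0

def _kappa_from_rbar(r):
    # Best & Fisher approximation to the inverse of A(k) = I1(k)/I0(k)
    if r < 0.53:
        return 2 * r + r ** 3 + 5 * r ** 5 / 6
    if r < 0.85:
        return -0.4 + 1.39 * r + 0.43 / (1 - r)
    return 1 / (r ** 3 - 4 * r ** 2 + 3 * r)

def em_vonmises_mixture(theta, mu_init, n_iter=100):
    # EM for a two-component von Mises mixture on angles theta (radians);
    # mu_init gives starting mean directions for the two modes.
    mu = np.asarray(mu_init, dtype=float)
    kappa = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under the current mixture
        dens = pi * np.exp(kappa * np.cos(theta[:, None] - mu)) / (2 * np.pi * i0(kappa))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted mean directions and resultant lengths
        nk = resp.sum(axis=0)
        pi = nk / len(theta)
        C = (resp * np.cos(theta[:, None])).sum(axis=0)
        S = (resp * np.sin(theta[:, None])).sum(axis=0)
        mu = np.arctan2(S, C)
        rbar = np.sqrt(C ** 2 + S ** 2) / nk
        kappa = np.array([_kappa_from_rbar(r) for r in rbar])
    return pi, mu, kappa
```

Applied to bimodal cross-bedding azimuths, the two fitted mean directions separate the paleocurrent modes that a single circular normal fit would blur together.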

19.
The modified Weibull spectrum is utilized to calculate the zeroth spectral moment (m0) using Monte Carlo integration methods. The significant wave height (Hs) is then calculated using the standard relation Hs = 4√m0. This is validated against observed buoy data and significant wave heights predicted by the numerical wave model WAM. The Weibull parameters have been calculated by maximum likelihood estimation (MLE) using energy densities from observed spectra recorded by the DS5 buoy (13.80° N, 82.52° E, depth 3355.48 m). The relative root mean square error (RRMS) and relative bias error criteria show that the significant wave heights estimated from the modified Weibull spectrum are better than those predicted by the WAM model. The monthly averaged observed wave power spectra for the year 2005 recorded by the deep water buoy DS5 are considered in this work. The spectra exhibit bimodal sea states for several months of the year.
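The Monte Carlo estimate of the zeroth moment and the Hs relation can be sketched as follows; the single-peak demo spectrum is purely illustrative, not the modified Weibull form used in the paper:

```python
import numpy as np

def hs_from_spectrum(spec, f_lo, f_hi, n=200000, seed=0):
    # Zeroth spectral moment m0 = integral of S(f) df by plain Monte Carlo
    # integration, then the standard deep-water relation Hs = 4*sqrt(m0).
    rng = np.random.default_rng(seed)
    f = rng.uniform(f_lo, f_hi, n)
    m0 = (f_hi - f_lo) * spec(f).mean()   # uniform-sampling MC estimator
    return 4.0 * np.sqrt(m0)

def demo_spectrum(f, a=0.02, fp=0.1):
    # hypothetical single-peak spectral density, for illustration only
    return a * np.exp(-((f - fp) / 0.03) ** 2)
```

For a smooth one-dimensional spectrum a quadrature rule would converge faster, but Monte Carlo integration generalizes cheaply, which is presumably why the paper adopts it.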

20.
This paper presents an approach to modelling fracture networks in hot dry rock geothermal reservoirs. A detailed understanding of the fracture network within a geothermal reservoir is critically important for assessments of reservoir potential and optimal production design. One important step in fracture network modelling is to estimate the fracture density and the fracture geometries, particularly the size and orientation of fractures. As fracture networks in these reservoirs can never be directly observed there is significant uncertainty about their true nature and the only feasible approach to modelling is a stochastic one. We propose a global optimization approach using simulated annealing which is an extension of our previous work. The fracture model consists of a number of individual fractures represented by ellipses passing through the micro-seismic points detected during the fracture stimulation process, i.e. the fracture model is conditioned on the seismic points. The distances of the seismic points from fitted fracture planes (ellipses) are, therefore, important in assessing the goodness-of-fit of the model. Our aims in the proposed approach are to formulate an appropriate objective function for the optimal fitting of a set of fracture planes to the micro-seismic data and to derive an efficient modification scheme to update the model parameters. The proposed objective function consists of three components: orthogonal projection distances of the seismic points from the nearest fitted fractures, the amount of fracturing (fitted fracture areas) and the volumes of the convex hull of the associated points of fitted fractures. The functions used in the model update scheme allow the model to achieve an acceptable fit to the points and to converge to acceptable fitted fracture sizes. These functions include two groups of proposals: one for updating fracture parameters and the other for determining the size of the fracture network. 
To increase the efficiency of the optimization, a spatial clustering approach, the Distance-Directional Transform, was developed to generate parameters for newly proposed fractures. A simulated dataset was used as an example to evaluate our approach, and we compared the results to those derived using our previously published algorithm on a real dataset from the Habanero geothermal field in the Cooper Basin, South Australia. In a real application such as the Habanero dataset, it is difficult to determine definitively which algorithm performs better because of the many uncertainties, but the number of associated points, the number of final fractures, and the fitting error are three important factors that quantify the effectiveness of our algorithm.

