Similar Documents (20 results)
1.
This paper is concerned with vector random fields on spheres with second-order increments, which are intrinsically stationary and mean square continuous and have isotropic variogram matrix functions. A characterization of the continuous and isotropic variogram matrix function on a sphere is derived, in terms of an infinite sum of the products of positive definite matrices and ultraspherical polynomials. It is valid for Gaussian or elliptically contoured vector random fields, but may not be valid for other non-Gaussian vector random fields on spheres such as a χ², log-Gaussian, or skew-Gaussian vector random field. Some parametric variogram matrix models are derived on spheres via different constructional approaches. A simulation study is conducted to illustrate the implementation of the proposed model in estimation and cokriging, whose performance is compared with that using the linear model of coregionalization.
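A schematic rendering of that characterization, as a hedged illustration only (the precise normalization of the ultraspherical polynomials C_n^λ and the summability condition on the positive definite matrices B_n are as in the paper, not reproduced here):

```latex
% Schematic form: the variogram matrix as an infinite sum over ultraspherical
% polynomials weighted by positive definite matrices B_n (normalization of
% C_n^\lambda and convergence conditions omitted here).
\gamma(\vartheta) \;=\; \sum_{n=1}^{\infty} B_n
  \left( 1 - \frac{C_n^{\lambda}(\cos\vartheta)}{C_n^{\lambda}(1)} \right),
  \qquad \vartheta \in [0,\pi].
```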

2.
The variogram matrix function is an important measure for the dependence of a vector random field with second-order increments, and is a useful tool for linear prediction or cokriging. This paper proposes an efficient approach to construct variogram matrix functions, based on three ingredients: a univariate variogram, a conditionally negative definite matrix, and a Bernstein function, and derives three classes of variogram matrix functions for vector elliptically contoured random fields. Moreover, various dependence structures among components can be derived through appropriate mixture procedures demonstrated in this paper. We also obtain covariance matrix functions for second-order vector random fields through the Schoenberg–Lévy kernels.

3.
    
An algorithm for producing a nonconditional simulation by multiplying the square root of the covariance matrix by a random vector is described. First, the square root of a matrix (or a function of a matrix in general) is defined. The square root of the matrix can be approximated by a minimax matrix polynomial. The block Toeplitz structure of the covariance matrix is used to minimize storage. Finally, multiplication of the block Toeplitz matrix by the random vector can be evaluated as a convolution using the fast Fourier transform. This results in an algorithm which is not only efficient in terms of storage and computation but also easy to implement.
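As a rough illustration of the final step only (a Toeplitz matrix–vector product evaluated as a convolution with the FFT), here is a minimal one-dimensional sketch; the paper works with block Toeplitz matrices and a minimax polynomial square root, neither of which is reproduced here:

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, v):
    """Compute T @ v for a Toeplitz matrix T (given by its first column and
    first row) by embedding T in a circulant matrix and using the FFT."""
    n = len(v)
    circ_col = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    v_pad = np.concatenate([v, np.zeros(len(circ_col) - n)])
    prod = np.fft.ifft(np.fft.fft(circ_col) * np.fft.fft(v_pad)).real
    return prod[:n]

# Quick check against a dense Toeplitz multiplication (exponential covariance row).
rng = np.random.default_rng(0)
col = np.exp(-np.arange(8) / 3.0)
T = np.array([[col[abs(i - j)] for j in range(8)] for i in range(8)])
v = rng.standard_normal(8)
assert np.allclose(T @ v, toeplitz_matvec_fft(col, col, v))
```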

4.
An algorithm for producing a nonconditional simulation by multiplying the square root of the covariance matrix by a random vector is described. First, the square root of a matrix (or a function of a matrix in general) is defined. The square root of the matrix can be approximated by a minimax matrix polynomial. The block Toeplitz structure of the covariance matrix is used to minimize storage. Finally, multiplication of the block Toeplitz matrix by the random vector can be evaluated as a convolution using the fast Fourier transform. This results in an algorithm which is not only efficient in terms of storage and computation but also easy to implement.

5.
Because of autocorrelation and spatial clustering, not all data within a given dataset carry the same statistical weight for the estimation of global statistics such as the mean, variance, or quantiles of the population distribution. A measure of redundancy (or nonredundancy) of any given regionalized random variable Z(u_α) within any given set (of size N) of random variables is proposed. It is defined as the ratio of the determinant of the N × N correlation matrix to the determinant of the (N − 1) × (N − 1) correlation matrix excluding random variable Z(u_α). This ratio measures the increase in redundancy when adding the random variable Z(u_α) to the (N − 1) remaining ones. It can be used as a declustering weight for any outcome (datum) z(u_α). When the redundancy matrix is a kriging covariance matrix, the proposed ratio is the cross-validation simple kriging variance. The covariance of the uniform scores of the clustered data is proposed as a redundancy measure robust with respect to data clustering.
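A hypothetical numerical sketch of that determinant ratio used as a declustering weight (the locations and the exponential correlation model below are invented for illustration):

```python
import numpy as np

def redundancy_ratios(corr):
    """det(C) / det(C without datum a), for each datum a of an N x N
    correlation matrix C; clustered (redundant) data give smaller ratios."""
    n = corr.shape[0]
    full = np.linalg.det(corr)
    ratios = np.empty(n)
    for a in range(n):
        keep = [i for i in range(n) if i != a]
        ratios[a] = full / np.linalg.det(corr[np.ix_(keep, keep)])
    return ratios

# Five 1-D locations, three of them clustered, exponential correlation model.
x = np.array([0.0, 0.1, 0.15, 1.0, 2.0])
corr = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.5)
w = redundancy_ratios(corr)
print(w / w.sum())   # normalized declustering weights; clustered points get less
```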

6.
In reservoir characterization, the covariance is often used to describe the spatial correlation and variation in rock properties or the uncertainty in rock properties. The inverse of the covariance, on the other hand, is seldom discussed in geostatistics. In this paper, I show that the inverse is required for simulation and estimation of Gaussian random fields, and that it can be identified with the differential operator in regularized inverse theory. Unfortunately, because the covariance matrix for parameters in reservoir models can be extremely large, calculation of the inverse can be a problem. In this paper, I discuss four methods of calculating the inverse of the covariance, two of which are analytical, and two of which are purely numerical. By taking advantage of the assumed stationarity of the covariance, none of the methods require inversion of the full covariance matrix.
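One way to see why stationarity helps, sketched under a simplifying assumption that is not necessarily one of the paper's four methods: if the stationary covariance is wrapped onto a periodic grid, the covariance matrix becomes circulant and its inverse acts diagonally in Fourier space, so C⁻¹v can be applied without ever forming or storing the inverse:

```python
import numpy as np

n = 256
lag = np.minimum(np.arange(n), n - np.arange(n))   # periodic lag distances
c = np.exp(-lag / 10.0)                            # first row of a circulant covariance
eig = np.fft.fft(c).real                           # its eigenvalues (real, positive here)

def apply_inv_cov(v):
    """Apply C^{-1} to a vector in O(n log n) time and O(n) storage."""
    return np.fft.ifft(np.fft.fft(v) / eig).real

# Check against the dense circulant matrix.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
v = np.random.default_rng(1).standard_normal(n)
assert np.allclose(C @ apply_inv_cov(v), v)
```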

7.
This paper describes a novel approach for creating an efficient, general, and differentiable parameterization of large-scale non-Gaussian, non-stationary random fields (represented by multipoint geostatistics) that is capable of reproducing complex geological structures such as channels. Such parameterizations are appropriate for use with gradient-based algorithms applied to, for example, history-matching or uncertainty propagation. It is known that the standard Karhunen–Loève (K–L) expansion, also called linear principal component analysis or PCA, can be used as a differentiable parameterization of input random fields defining the geological model. The standard K–L model is, however, limited in two respects. It requires an eigen-decomposition of the covariance matrix of the random field, which is prohibitively expensive for large models. In addition, it preserves only the two-point statistics of a random field, which is insufficient for reproducing complex structures. In this work, kernel PCA is applied to address the limitations associated with the standard K–L expansion. Although widely used in machine learning applications, it does not appear to have found any application for geological model parameterization. With kernel PCA, an eigen-decomposition of a small matrix called the kernel matrix is performed instead of the full covariance matrix. The method is much more efficient than the standard K–L procedure. Through use of higher order polynomial kernels, which implicitly define a high-dimensionality feature space, kernel PCA further enables the preservation of high-order statistics of the random field, instead of just two-point statistics as in the K–L method. The kernel PCA eigen-decomposition proceeds using a set of realizations created by geostatistical simulation (honoring two-point or multipoint statistics) rather than the analytical covariance function. We demonstrate that kernel PCA is capable of generating differentiable parameterizations that reproduce the essential features of complex geological structures represented by multipoint geostatistics. The kernel PCA representation is then applied to history match a water flooding problem. This example demonstrates that kernel PCA can be used with gradient-based history matching to provide models that match production history while maintaining multipoint geostatistics consistent with the underlying training image.
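A minimal, generic kernel-PCA sketch on an ensemble of realizations (a toy random ensemble stands in for geostatistical realizations; the polynomial kernel, its degree, and the grid size are arbitrary, and the pre-image step needed to map coefficients back to a geological model is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_cells = 200, 50 * 50
Y = rng.standard_normal((n_real, n_cells))   # rows = flattened realizations (toy data)

degree = 3
K = (Y @ Y.T / n_cells + 1.0) ** degree      # small n_real x n_real kernel matrix

# Center the kernel matrix in feature space.
J = np.ones((n_real, n_real)) / n_real
Kc = K - J @ K - K @ J + J @ K @ J

evals, evecs = np.linalg.eigh(Kc)            # eigen-decomposition of the small matrix
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Low-dimensional coordinates (scores) of each realization in feature space.
n_keep = 20
scores = evecs[:, :n_keep] * np.sqrt(np.maximum(evals[:n_keep], 0.0))
print(scores.shape)                          # (200, 20)
```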

8.

A spectral algorithm is proposed to simulate an isotropic Gaussian random field on a sphere equipped with a geodesic metric. This algorithm supposes that the angular power spectrum of the covariance function is explicitly known. Direct analytic calculations are performed for exponential and linear covariance functions. In addition, three families of covariance functions are presented where the calculation of the angular power spectrum is simplified (shot-noise random fields, Yadrenko covariance functions and solutions of certain stochastic partial differential equations). Numerous illustrative examples are given.


9.
Spatial declustering weights
Because of autocorrelation and spatial clustering, not all data within a given dataset carry the same statistical weight for the estimation of global statistics such as the mean, variance, or quantiles of the population distribution. A measure of redundancy (or nonredundancy) of any given regionalized random variable Z(u_α) within any given set (of size N) of random variables is proposed. It is defined as the ratio of the determinant of the N × N correlation matrix to the determinant of the (N − 1) × (N − 1) correlation matrix excluding random variable Z(u_α). This ratio measures the increase in redundancy when adding the random variable Z(u_α) to the (N − 1) remaining ones. It can be used as a declustering weight for any outcome (datum) z(u_α). When the redundancy matrix is a kriging covariance matrix, the proposed ratio is the cross-validation simple kriging variance. The covariance of the uniform scores of the clustered data is proposed as a redundancy measure robust with respect to data clustering.

10.
This paper studies vector (multivariate, multiple, or multidimensional) random fields in space and/or time with second-order increments, for which the variogram matrix is an important tool to measure the dependence within each component and between each pair of distinct components. We introduce an efficient approach to construct Gaussian or non-Gaussian vector random fields from a univariate random field with a higher-dimensional index domain, and in particular to generate a class of variogram matrices.

11.
Numerical models encompassing source zones and receptors, based on representative conceptual models and accounting for aquifer heterogeneity, are needed to understand contaminant migration and fate; however, aquifer characterization seldom provides the necessary data. This study aimed to develop a workflow for field characterization and data integration, which could: (1) be adapted to the definition of subwatershed-scale aquifer heterogeneity (over 10 km²) and (2) adequately support mass transport model development. The study involved the field investigation of a shallow granular aquifer in a 12-km² subwatershed in Saint-Lambert-de-Lauzon, Canada, in which a decommissioned landfill is emitting a leachate plume managed by natural attenuation. Using proven field methods, the characterization sequence was designed to optimize each method in terms of location, scale of acquisition, density and quality. The emphasis was on the acquisition of detailed indirect geophysical data that were integrated with direct hydraulic and geochemical data. This report focuses on the first qualitative and geostatistical data integration steps of the workflow leading to the development of a hydrogeological conceptual model. This is a prerequisite for further integration steps: prediction of hydrofacies and hydraulic conductivity (K), geostatistical simulations of K, studies of geochemical processes and numerical modeling.

12.
In this paper, the maximum likelihood method for inferring the parameters of spatial covariances is examined. The advantages of maximum likelihood estimation are discussed and it is shown that this method, derived assuming a multivariate Gaussian distribution for the data, gives a sound criterion for fitting covariance models irrespective of the multivariate distribution of the data. However, this distribution is impossible to verify in practice when only one realization of the random function is available. Then, the maximum entropy method is the only sound criterion for assigning probabilities in the absence of information. Because the multivariate Gaussian distribution has the maximum entropy property for a fixed vector of means and covariance matrix, the multinormal distribution is the most logical choice as a default distribution for the experimental data. Nevertheless, it should be clear that the assumption of a multivariate Gaussian distribution is maintained only for the inference of spatial covariance parameters and not necessarily for other operations such as spatial interpolation, simulation or estimation of spatial distributions. Various results from simulations are presented to support the claim that the simultaneous use of the maximum likelihood method and the classical nonparametric method of moments can considerably improve results in the estimation of geostatistical parameters.
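A hypothetical sketch of the Gaussian likelihood criterion in practice: fit the sill and range of an exponential covariance model to one synthetic realization by minimizing the multivariate-normal negative log-likelihood (locations, true parameters, and the optimizer choice are all illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(60, 2))          # synthetic sample locations
D = squareform(pdist(xy))                      # pairwise distance matrix

def neg_log_lik(theta, z):
    sill, a = np.exp(theta)                    # optimize log-parameters for positivity
    C = sill * np.exp(-D / a)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))
    alpha = np.linalg.solve(L, z)
    return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

# Simulate data with known parameters (sill 2, range 3), then recover them by ML.
z = np.linalg.cholesky(2.0 * np.exp(-D / 3.0)) @ rng.standard_normal(len(xy))
res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), args=(z,), method="Nelder-Mead")
print(np.exp(res.x))                           # estimated (sill, range)
```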

13.
Moving averages for Gaussian simulation in two and three dimensions
The square-root method provides a simple and computationally inexpensive way to generate multidimensional Gaussian random fields. It is applied by factoring the multidimensional covariance operator analytically, then sampling the factorization at discrete points to compute an array of weighted averages that can be convolved with an array of random normal deviates to generate a correlated random field. In many respects this is similar to the LU decomposition method and to the one-dimensional method of moving averages. However, it has been assumed that the method of moving averages could not be used in higher dimensions, whereas direct application of the matrix decomposition approach is too expensive to be practical on large grids. In this paper, I show that it is possible to calculate the square root of many two- and three-dimensional covariance operators analytically so that the method of moving averages can be applied directly to the problem of multidimensional simulation. A few numerical examples of nonconditional simulation on a 256×256 grid are included to show the simplicity of the method. The method is fast and can be applied easily to nested and anisotropic variograms.
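An illustrative sketch of the moving-average idea on a 256×256 grid (a Gaussian kernel is used because its self-convolution is again Gaussian, so it acts as a convolution square root of a Gaussian covariance; normalization constants and edge effects are glossed over, and this is not the paper's analytic factorization):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
n, a = 256, 10.0                                  # grid size and kernel range

x = np.arange(-4 * a, 4 * a + 1)
X, Y = np.meshgrid(x, x)
kernel = np.exp(-(X**2 + Y**2) / a**2)            # convolution square root of a
kernel /= np.sqrt(np.sum(kernel**2))              # Gaussian covariance, unit variance

noise = rng.standard_normal((n, n))
field = fftconvolve(noise, kernel, mode="same")   # nonconditional correlated field
print(field.shape, field.std())                   # std close to 1 up to edge effects
```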

14.
Isotropic covariance functions are successfully used to model spatial continuity in a multitude of scientific disciplines. Nevertheless, a satisfactory characterization of the class of permissible isotropic covariance models has been missing. The intention of this note is to review, complete, and extend the existing literature on the problem. As it turns out, a famous conjecture of Schoenberg (1938) holds true: any measurable, isotropic covariance function on ℝ^d (d ≥ 2) admits a decomposition as the sum of a pure nugget effect and a continuous covariance function. Moreover, any measurable, isotropic covariance function defined on a ball in ℝ^d can be extended to an isotropic covariance function defined on the entire space ℝ^d.
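The decomposition asserted above, written out for reference (the notation here is mine, not the note's):

```latex
% An isotropic covariance on R^d (d >= 2) splits into a nugget part and a
% continuous covariance part.
C(h) \;=\; c_0\,\mathbf{1}_{\{h = 0\}} \;+\; C_{\mathrm{c}}(h),
\qquad c_0 \ge 0,\quad C_{\mathrm{c}} \text{ continuous}.
```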

15.
The question being tackled in this study is to what extent grain rearrangement contributes to porosity reduction in very well sorted quartzose sands (ideal reservoir sands). A numerical model, RAMPAGE (an acronym of random packing generator), has been developed to address this long-standing problem. RAMPAGE represents a synthesis of various algorithms designed to simulate packing of equal-sized spheres, which have been used to represent ideal solids, liquids, and gases, as well as natural porous media. The results of RAMPAGE simulations compare favourably to theoretical and experimental data from various disciplines and allow delineation of the field of gravitationally stable random packing of equal-sized spheres in the 2-D state space of porosity (P) versus mean coordination number (N). Three end-member packing states have been identified: random loose packing (RLP: P = 45.4%, N = 5.2), random close packing (RCP: P = 36.3%, N = 7.0), and bridged random close packing (bridged RCP: P = 39.5%, N = 5.2). Unlike previously proposed models, RAMPAGE can simulate the transition from RLP to any other point in the stability field. The RLP state is fully consistent with wet-packed porosities of synthetic sands with lognormal mass-size distributions reported in the literature. The much higher in-situ porosity values reported for modern (air-packed) sands are unlikely to be preserved at depth on geological time scales. Data on the relation between intergranular volume and burial depth indicate that the observed intergranular volume reduction in the upper ~800 m of the sediment column corresponds to the evolution of RLP to RCP, and is thus fully explained by non-destructive grain rearrangement.

16.
The numerical stability of linear systems arising in kriging, estimation, and simulation of random fields is studied analytically and numerically. In the state-space formulation of kriging, as developed here, the stability of the kriging system depends on the condition number of the prior, stationary covariance matrix. The same is true for conditional random field generation by the superposition method, which is based on kriging, and the multivariate Gaussian method, which requires factoring a covariance matrix. A large condition number corresponds to an ill-conditioned, numerically unstable system. In the case of stationary covariance matrices and uniform grids, as occurs in kriging of uniformly sampled data, the degree of ill-conditioning generally increases indefinitely with sampling density and, to a limit, with domain size. The precise behavior is, however, highly sensitive to the underlying covariance model. Detailed analytical and numerical results are given for five one-dimensional covariance models: (1) hole-exponential, (2) exponential, (3) linear-exponential, (4) hole-Gaussian, and (5) Gaussian. This list reflects an approximate ranking of the models, from best to worst conditioned. The methods developed in this work can be used to analyze other covariance models. Examples of such representative analyses, conducted in this work, include the spherical and periodic hole-effect (hole-sinusoidal) covariance models. The effect of small-scale variability (nugget) is addressed and extensions to irregular sampling schemes and higher dimensional spaces are discussed.
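A small numerical illustration of that sensitivity (parameters arbitrary): on a fixed-length 1-D grid, the condition number of the covariance matrix grows with sampling density, and far faster for a Gaussian model than for an exponential one:

```python
import numpy as np

for n in (20, 40, 80):
    x = np.linspace(0.0, 10.0, n)
    h = np.abs(x[:, None] - x[None, :])
    C_exp = np.exp(-h / 2.0)                 # exponential covariance matrix
    C_gau = np.exp(-(h / 2.0) ** 2)          # Gaussian covariance matrix
    print(n, np.linalg.cond(C_exp), np.linalg.cond(C_gau))
```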

17.
Computational power poses heavy limitations to the achievable problem size for Kriging. In separate research lines, Kriging algorithms based on FFT, the separability of certain covariance functions, and low-rank representations of covariance functions have been investigated, all three leading to drastic speedup factors. The current study combines these ideas, and so combines the individual speedup factors of all ideas. This way, we reduce the mathematics behind Kriging to a computational complexity of only O(dL* log L*), where L* is the number of points along the longest edge of the involved lattice of estimation points, and d is the physical dimensionality of the lattice. For separable (factorized) covariance functions, the results are exact, and nonseparable covariance functions can be approximated well through sums of separable components. Only outputting the final estimate as an explicit map causes computational costs of O(n), where n is the number of estimation points. In illustrative numerical test cases, we achieve speedup factors of up to 10⁸ (eight orders of magnitude), and we can treat problem sizes of up to 15 trillion and two quadrillion estimation points for Kriging and spatial design, respectively, within seconds on a contemporary desktop computer. The current study assumes second-order stationarity and simple Kriging on a regular, equispaced lattice, without working with restricted neighborhoods. Extensions to many other cases are straightforward.
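A sketch of the separability trick mentioned above (exponential covariances on a small lattice, chosen only for illustration): for a covariance that factorizes over the lattice directions, C = Cx ⊗ Cy, a matrix–vector product never needs the full n × n matrix:

```python
import numpy as np

nx, ny = 60, 40
gx, gy = np.arange(nx), np.arange(ny)
Cx = np.exp(-np.abs(gx[:, None] - gx[None, :]) / 5.0)
Cy = np.exp(-np.abs(gy[:, None] - gy[None, :]) / 8.0)

v = np.random.default_rng(4).standard_normal(nx * ny)
V = v.reshape(nx, ny)                    # row-major layout over (x, y)

# With row-major flattening, (Cx kron Cy) @ v equals (Cx @ V @ Cy.T).ravel().
fast = (Cx @ V @ Cy.T).ravel()           # O(nx*ny*(nx+ny)) work, no big matrix
full = np.kron(Cx, Cy) @ v               # O((nx*ny)^2) work, for checking only
assert np.allclose(fast, full)
```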

18.
The concept of a random function and, consequently, the application of kriging call for the implicit assumption that the data locations are embedded within an infinite domain. An implication of this assumption is that, all else being equal, outlying data locations will receive greater weight because they are seen as less redundant, hence more informative of the infinite domain. A two-step kriging procedure is proposed for correcting this string effect. The first step is to establish the total kriging weight attributable to each string. The distribution of that total weight to the samples in the string is accomplished by a second stage of kriging. In the second stage, a spatial redundancy measure r(n) is used in place of the covariance measure in the data–data kriging matrix. This measure is constructed such that each datum has the same redundancy with the (n) data of the string to which it belongs. This paper documents the problem of kriging with strings of data, develops the redundancy measure r(n), and presents a number of examples.

19.
A triangular-element local-average method for discretizing random fields of geotechnical parameters
Wang Tao, Zhou Guoqing, Yin Qixiang. Rock and Soil Mechanics, 2014, 35(5): 1482-1488
Uncertain geotechnical parameters are modeled as random fields rather than as random variables in the traditional sense. Based on the local average theory of random fields, a triangular-element local-average method for discretizing two-dimensional random fields is proposed. Using an area-coordinate transformation and Gaussian numerical integration, both analytical and numerical methods are given for computing the covariance matrix of the locally averaged random field over triangular elements. A worked example reproduces the analysis procedure and demonstrates the effectiveness of the proposed method, which is compared with the traditional quadrilateral-element discretization of two-dimensional random fields. The results show that the proposed triangular-element discretization combines seamlessly with triangular finite-element meshes, with a clear correspondence between random-field elements and finite elements, making stochastic finite-element programs easy to code; for the means of the random-field elements, the traditional quadrilateral-element method and the proposed method give identical results, whereas for the variances the traditional quadrilateral-element method yields values that are too small, so the proposed method is more scientifically sound and reasonable.
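A hypothetical sketch of the numerical (quadrature) route described above: the covariance between the local averages of a random field over two triangular elements, approximated with a three-point Gauss rule in area (barycentric) coordinates; the isotropic covariance model and the triangles are placeholders, not taken from the paper:

```python
import numpy as np

# Three-point Gauss rule on a triangle: barycentric points, equal weights 1/3.
BARY = np.array([[2/3, 1/6, 1/6],
                 [1/6, 2/3, 1/6],
                 [1/6, 1/6, 2/3]])
W = np.array([1/3, 1/3, 1/3])

def cov(h, sill=1.0, a=2.0):
    return sill * np.exp(-h / a)               # placeholder covariance model

def local_average_cov(tri_i, tri_j):
    """Covariance of the field averaged over two triangles (3 x 2 vertex arrays);
    the element areas cancel against the quadrature scaling."""
    pts_i = BARY @ tri_i                       # Gauss points in physical coordinates
    pts_j = BARY @ tri_j
    h = np.linalg.norm(pts_i[:, None, :] - pts_j[None, :, :], axis=-1)
    return np.einsum("p,q,pq->", W, W, cov(h)) # double quadrature sum

tri_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri_b = np.array([[2.0, 0.0], [3.0, 0.0], [2.0, 1.0]])
print(local_average_cov(tri_a, tri_a), local_average_cov(tri_a, tri_b))
```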

20.
Updating of Population Parameters and Credibility of Discriminant Analysis
The uncertainty of classification in discriminant analysis may result from the original characteristics of the phenomena studied, the approach used to infer population parameters, and the credibility of the parameters estimated by the geologist or statistician. A credibility function and a significance function are proposed. Both can be used to appraise the uncertainty of classification. The former addresses the uncertainty resulting from errors in the reward-penalty matrix, while the latter addresses the uncertainty resulting from the original characteristics of the phenomena studied and the statistical approach. Inappropriate classification results may originate from biased estimates of the population parameters (mean vector and covariance matrix) obtained from biased samples. These biased estimates can be updated by constraining the region in which the mean vector may vary. The equations for updating the Bayesian estimates of the mean vector and the covariance matrix are derived for the case in which the mean vector is restricted to a subregion of the entire real space. Results for a gas reservoir indicate that the discriminant rules based on the updated equations are more efficient than the traditional discriminant rules.
