Similar Literature
20 similar documents found (search time: 15 ms)
1.
A class of non-stationary covariance functions with compact support   (Total citations: 1; self-citations: 1; others: 0)
This article describes the use of non-stationary covariance functions with compact support to estimate and simulate a random function. Based on kernel convolution theory, the functions are derived by convolving hyperspheres in \(\mathbb{R}^n\), followed by a Radon transform. The order of the Radon transform controls the differentiability of the covariance functions. By spatially varying the hypersphere radius, one defines non-stationary isotropic versions of the spherical, cubic and penta-spherical models, and closed-form expressions for these non-stationary covariances are derived. Simulation of the different non-stationary models is easily obtained as a weighted average of independent standard Gaussian variates, in both the isotropic and the anisotropic case. The non-stationary spherical covariance model is applied to estimate the overburden thickness over an area composed of two different geological domains. The results are compared to the estimation with a single stationary model and the estimation with two stationary models, one for each geological domain. It is shown that the non-stationary model enables a reduction of the mean square error and a more realistic transition between the two geological domains.
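For orientation, the stationary isotropic spherical covariance that this non-stationary family generalizes has a simple closed form; a minimal NumPy sketch (the function name and parameters are illustrative, not from the article):

```python
import numpy as np

def spherical_cov(h, a, sill=1.0):
    """Stationary isotropic spherical covariance:
    C(h) = sill * (1 - 1.5*(h/a) + 0.5*(h/a)**3) for h <= a,
    and exactly 0 beyond the range a (compact support)."""
    h = np.asarray(h, dtype=float)
    r = np.clip(h / a, 0.0, 1.0)   # clipping at 1 makes C vanish for h >= a
    return sill * (1.0 - 1.5 * r + 0.5 * r ** 3)
```

In the article's non-stationary version the range a becomes a function of location; here it is a single constant.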

2.
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data.
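The singular-value selection step can be sketched as follows (a toy diagonal system matrix with a decaying spectrum; the cutoff fraction is an illustrative assumption, not a rule from the paper):

```python
import numpy as np

# Toy system matrix with a rapidly decaying singular spectrum.
G = np.diag(10.0 ** -np.arange(20.0))
s = np.linalg.svd(G, compute_uv=False)   # singular values, largest first
# Pick the first singular value that "approaches zero", taken here as the
# first one below a small fraction of the largest (fraction is illustrative).
k = int(np.argmax(s < 1e-6 * s[0]))
reg_param = s[k]                         # candidate regularization parameter
```

In a real inversion, G would be the Jacobian of the last iteration, and the singular value plot would be inspected rather than a hard-coded cutoff.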

3.
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues, such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively, but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data without losing significant information. This allows the complexity of the algorithm to grow as O(nm^2), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance, so it makes sense to use a covariance model that encompasses this belief, since it leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation.
Ben Ingram
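The sparsity argument can be illustrated directly: with a compactly supported covariance (a spherical model is used here as one illustrative choice), most pairwise covariances are exactly zero and a sparse matrix format pays off. A sketch, with the range value and all names chosen for illustration:

```python
import numpy as np
from scipy import sparse
from scipy.spatial.distance import cdist

def spherical(h, a):
    """Compactly supported spherical covariance: exactly 0 for h >= a."""
    r = np.clip(h / a, 0.0, 1.0)
    return 1.0 - 1.5 * r + 0.5 * r ** 3

pts = np.random.default_rng(1).uniform(0.0, 10.0, size=(200, 2))
C = sparse.csr_matrix(spherical(cdist(pts, pts), a=1.5))  # zeros are dropped
fill = C.nnz / float(np.prod(C.shape))  # fraction of non-zero entries stored
```

With a range of 1.5 in a 10 x 10 domain, only a few percent of the entries are non-zero, which is what makes optimised sparse solvers attractive here.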

4.
Radial basis functions with compact support for multivariate geostatistics   (Total citations: 3; self-citations: 3; others: 0)
Matrix-valued radially symmetric covariance functions (also called radial basis functions in the numerical analysis literature) are crucial for the analysis, inference and prediction of Gaussian vector-valued random fields. This paper provides different methodologies for the construction of matrix-valued mappings that are positive definite and compactly supported over the ball of a given radius in d-dimensional space. In particular, we offer a representation based on scaled mixtures of Askey functions; we also suggest a method of construction based on B-splines. Finally, we show that the very appealing convolution arguments are indeed effective when working in one dimension, prohibitive in two, and feasible, but substantially useless, when working in three dimensions. We exhibit the statistical performance of the proposed models through a simulation study and then discuss the computational gains that come from our constructions when the parameters are estimated via maximum likelihood. We finally apply our constructions to a North American Pacific Northwest temperatures dataset.

5.
This paper presents an algorithm for simulating Gaussian random fields with zero mean and non-stationary covariance functions. The simulated field is obtained as a weighted sum of cosine waves with random frequencies and random phases, with weights that depend on the location-specific spectral density associated with the target non-stationary covariance. The applicability and accuracy of the algorithm are illustrated through synthetic examples, in which scalar and vector random fields with non-stationary Gaussian, exponential, Matérn or compactly-supported covariance models are simulated.
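The stationary special case of such a cosine-wave simulator is easy to sketch. Here is a 1-D field with Gaussian covariance C(h) = exp(-h^2/a^2), whose normalized spectral density is a normal density; the function name and defaults are illustrative:

```python
import numpy as np

def spectral_sim_1d(x, a, n_waves=2000, seed=0):
    """Sum of cosine waves with random frequencies and phases:
    Z(x) = sqrt(2/N) * sum_k cos(w_k * x + phi_k),
    with w_k drawn from the spectral density of C(h) = exp(-h**2 / a**2),
    which is the N(0, 2/a**2) density."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, np.sqrt(2.0) / a, size=n_waves)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_waves)
    return np.sqrt(2.0 / n_waves) * np.cos(np.outer(x, w) + phi).sum(axis=1)
```

In the paper's non-stationary algorithm the spectral density, and hence the distribution of the frequencies w_k, varies with location; a constant a recovers the stationary case.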

6.
Despite their apparent high dimensionality, spatially distributed hydraulic properties of geologic formations can often be compactly (sparsely) described in a properly designed basis. Hence, the estimation of high-dimensional subsurface flow properties from dynamic performance and monitoring data can be formulated and solved as a sparse reconstruction inverse problem. Recent advances in statistical signal processing, formalized under the compressed sensing paradigm, provide important guidelines on formulating and solving sparse inverse problems, primarily for linear models and using a deterministic framework. Given the uncertainty in describing subsurface physical properties, even after integration of the dynamic data, it is important to develop a practical sparse Bayesian inversion approach to enable uncertainty quantification. In this paper, we use sparse geologic dictionaries to compactly represent uncertain subsurface flow properties and develop a practical sparse Bayesian method for effective data integration and uncertainty quantification. The multi-Gaussian assumption that is widely used in classical probabilistic inverse theory is not appropriate for representing sparse prior models. Following the results of the compressed sensing paradigm, the Laplace (or double-exponential) probability distribution is found to be more suitable for representing sparse parameters. However, combining Laplace priors with the frequently used Gaussian likelihood functions leads to neither a Laplace nor a Gaussian posterior distribution, which complicates the analytical characterization of the posterior. Here, we first express the form of the maximum a posteriori (MAP) estimate for Laplace priors and then use the Monte-Carlo-based Randomized Maximum Likelihood (RML) method to generate approximate samples from the posterior distribution. The proposed Sparse RML (SpRML) approximate sampling approach can be used to assess the uncertainty in the calibrated model with a relatively modest computational complexity. We demonstrate the suitability and effectiveness of the SpRML formulation using a series of numerical experiments of two-phase flow systems in both Gaussian and non-Gaussian property distributions in petroleum reservoirs and successfully apply the method to an adapted version of the PUNQ-S3 benchmark reservoir model.

7.
This paper is concerned with developing computational methods and approximations for maximum likelihood estimation and minimum mean square error smoothing of irregularly observed two-dimensional stationary spatial processes. The approximations are based on various Fourier expansions of the covariance function of the spatial process, expressed in terms of the inverse discrete Fourier transform of the spectral density function of the underlying spatial process. We assume that the underlying spatial process is governed by elliptic stochastic partial differential equations (SPDEs) driven by a Gaussian white noise process. SPDEs have often been used to model the underlying physical phenomenon, and elliptic SPDEs are generally associated with steady-state problems. A central problem in the estimation of the underlying model parameters is to identify the covariance function of the process. The cumbersome exact analytical calculation of the covariance function, by inverting the spectral density function of the process, has commonly been used in the literature. The present work develops various Fourier approximations for the covariance function of the underlying process which are in easily computable form and allow easy application of Newton-type algorithms for maximum likelihood estimation of the model parameters. This work also develops an iterative search algorithm which combines the Gauss-Newton algorithm with a type of generalized expectation-maximization (EM) algorithm, namely the expectation-conditional maximization (ECM) algorithm, for maximum likelihood estimation of the parameters. We analyze the accuracy of the covariance function approximations for the spatial autoregressive-moving average (ARMA) models analyzed in Vecchia (1988) and illustrate the performance of our iterative search algorithm in obtaining the maximum likelihood estimates of the model parameters on simulated and actual data.
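The core numerical idea, recovering covariance samples from the spectral density via an inverse discrete Fourier transform, can be sketched in 1-D. The grid sizes and the example spectrum (that of the exponential covariance exp(-|h|)) are illustrative choices, not the paper's setup:

```python
import numpy as np

n, dx = 1024, 0.1
omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # angular frequency grid
S = 1.0 / (1.0 + omega ** 2)                   # spectral density of exp(-|h|), up to scale
c = np.fft.ifft(S).real                        # inverse DFT of the sampled spectrum
c = c / c[0]                                   # normalize so c[k] ~ C(k*dx) = exp(-k*dx)
```

The approximation error comes from truncating and discretizing the spectrum; refining the grid (larger n, smaller dx) tightens it, which is the trade-off such Fourier approximations manage.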

8.
The estimation of overburden sediment thickness is important in hydrogeology, geotechnics and geophysics. Usually, thickness is known precisely only at a few sparse borehole locations. To improve the precision of estimation, one useful piece of complementary information is the known position of outcrops. One intuitive approach is to code the outcrops as zero-thickness data. A problem with this approach is that the outcrops are observed preferentially compared to other thickness information, which introduces a strong bias in the thickness estimation that kriging is not able to remove. We consider a new approach to incorporate point or surface outcrop information based on the use of a non-stationary covariance model in kriging. The non-stationary model is defined so as to restrict the distance of influence of the outcrops. Within this distance of influence, the covariance parameters are assumed to be simple regular functions of the distance to the nearest outcrop. Outside the distance of influence of the outcrops, the thickness covariance is assumed stationary. The distance of influence is obtained through cross-validation. Compared to kriging based on a stationary model, with or without zero thickness at outcrop locations, the non-stationary model provides more precise estimation, especially at points close to an outcrop. Moreover, the thickness map obtained with the non-stationary covariance model is more realistic, since it forces the estimates to zero close to outcrops without the bias incurred when outcrops are simply treated as zero thickness in a stationary model.

9.
The spatial (or spatio-temporal) resolution of an inverse problem, also called the resolution length, is an important parameter for assessing how finely a model is resolved; it determines the range of applications and the value of the model. Estimating the resolution length, however, is a mathematical problem even more complex and troublesome than the inversion itself. Besides the recovery tests with synthetic models widely used in tomography to extract spatial resolution lengths qualitatively, resolution lengths can be obtained quantitatively by solving for the resolution matrix. Resolution matrices obtained through matrix operations fall into three classes: the direct resolution matrix, the regularized resolution matrix, and the hybrid resolution matrix. These three classes carry information on different aspects of the inversion, so providing all three in an inversion application gives a more complete assessment of the resolution distribution of the inverse model. Recently, An (2012) proposed a method for statistically extracting the resolution matrix from a large set of random synthetic models and their inverted solutions. Because this resolution matrix is derived by inversion from input and output models that simulate the real inversion experiment, it better reflects the many factors and processes involved in the entire inversion; moreover, because its computation requires no matrix operations and does not depend on the specific forward or inverse method, it can be applied to more general inverse problems. Practical applications demonstrate that the statistical resolution analysis method is suitable for resolution analysis of two- and three-dimensional tomographic inversion models.

10.
Data assimilation is widely used to improve flood forecasting capability, especially through parameter inference requiring statistical information on the uncertain input parameters (upstream discharge, friction coefficient) as well as on the variability of the water level and its sensitivity with respect to the inputs. For the particle filter or the ensemble Kalman filter, stochastically estimating probability density functions and covariance matrices from Monte Carlo random sampling requires a large ensemble of model evaluations, limiting their use in real-time applications. To tackle this issue, fast surrogate models based on polynomial chaos and Gaussian processes can be used to represent the spatially distributed water level in place of solving the shallow water equations. This study investigates the use of these surrogates to estimate probability density functions and covariance matrices at a reduced computational cost and without loss of accuracy, in the perspective of ensemble-based data assimilation. The study focuses on 1-D steady-state flow simulated with MASCARET over the Garonne River (South-West France). Results show that both surrogates feature performance similar to Monte Carlo random sampling, but for a much smaller computational budget; a few MASCARET simulations (on the order of 10–100) are sufficient to accurately retrieve covariance matrices and probability density functions all along the river, even where the flow dynamics are more complex due to heterogeneous bathymetry. This paves the way for the design of surrogate strategies suitable for representing unsteady open-channel flows in data assimilation.

11.
In this paper we present a stochastic model reduction method for efficiently solving nonlinear unconfined flow problems in heterogeneous random porous media. The input random fields of the flow model are parameterized in a stochastic space for simulation. This often results in high stochastic dimensionality due to the small correlation lengths of the covariance functions of the input fields. To treat the high-dimensional stochastic problem efficiently, we extend a recently proposed hybrid high-dimensional model representation (HDMR) technique to high-dimensional problems with multiple random input fields and integrate it with a sparse grid stochastic collocation method (SGSCM). Hybrid HDMR can decompose the high-dimensional model into a moderate M-dimensional model and a few one-dimensional models. The moderate-dimensional model depends only on the M most important random dimensions, which are identified from the full stochastic space by sensitivity analysis. To extend the hybrid HDMR, we consider two different criteria for the sensitivity test. Each of the derived low-dimensional stochastic models is solved by the SGSCM. This leads to a set of uncoupled deterministic problems at the collocation points, which can be solved by a deterministic solver. To demonstrate the efficiency and accuracy of the proposed method, a few numerical experiments are carried out for unconfined flow problems in heterogeneous porous media with different correlation lengths. The results show that a good trade-off between computational complexity and approximation accuracy can be achieved for stochastic unconfined flow problems by selecting a suitable number of the most important dimensions in the M-dimensional model of hybrid HDMR.

12.
Conventional cokriging inversion of gravity or gravity-gradient data has good noise tolerance and easily incorporates prior information. The inverted subsurface density distribution can identify the center of an anomalous body and recover its basic shape, but the inverted image is smooth and of low resolution, because the density covariance matrix estimated by the conventional method is globally diffuse and stationary. To obtain a focused density distribution with cokriging, the properties of the density covariance matrix must be improved. First, this paper derives a theoretical density covariance formula whose properties show that, when the theoretical model is focused, its density covariance matrix is non-stationary and focused as well. To break the globally stationary, diffuse character of the conventional covariance matrix, we apply a density threshold to the covariance matrix and carry out the cokriging inversion iteratively, updating the covariance matrix at each iteration, finally obtaining a relatively focused inversion result. Applying the method to gravity and gravity-gradient data for two density models yields inversion results that match the forward models; applying it to measured gravity and gravity-gradient data from the Vinton salt dome yields results in good agreement with the known geology.
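The covariance-update loop described above can be rendered schematically; this is an illustrative reading of the thresholding step based on the abstract, not the paper's exact formula, and all names are hypothetical:

```python
import numpy as np

def threshold_update(C, rho, tau, floor=1e-3):
    """Schematic update: keep covariance only between cells whose current
    density estimate rho exceeds the threshold tau; a small diagonal floor
    keeps the matrix usable in the next cokriging iteration."""
    mask = np.abs(rho) >= tau
    C_new = C * np.outer(mask, mask)               # zero rows/cols of weak cells
    C_new[np.diag_indices_from(C_new)] = np.where(
        mask, np.diag(C), floor * np.diag(C))      # floor variances of weak cells
    return C_new
```

Iterating inversion and update with such a rule progressively concentrates the covariance, and hence the inverted density, around the strong-anomaly cells.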

13.
This paper develops concepts and methods to study stochastic hydrologic models. Problems regarding the application of the existing stochastic approaches in the study of groundwater flow are acknowledged, and an attempt is made to develop efficient means for their solution. These problems include: the spatial multi-dimensionality of the differential equation models governing transport-type phenomena; physically unrealistic assumptions and approximations; and the inadequacy of ordinary perturbation techniques. Multi-dimensionality creates serious mathematical and technical difficulties in the stochastic analysis of groundwater flow, due to the need for large mesh sizes and the poorly conditioned matrices arising from numerical approximations. An alternative to the purely computational approach is to simplify the complex partial differential equations analytically. This can be achieved efficiently by means of a space transformation approach, which transforms the original multi-dimensional problem to a much simpler unidimensional space. The space transformation method is applied to stochastic partial differential equations whose coefficients are random functions of space and/or time. Such equations constitute an integral part of groundwater flow and solute transport. Ordinary perturbation methods for studying stochastic flow equations are in many cases physically inadequate and may lead to questionable approximations of the actual flow. To address these problems, a perturbation analysis based on Feynman-diagram expansions is proposed in this paper. This approach incorporates important information on spatial variability and fulfills essential physical requirements, both important advantages over ordinary hydrologic perturbation techniques. Moreover, the diagram-expansion approach reduces the original stochastic flow problem to a closed set of equations for the mean and the covariance function.


15.
16.
Transient wave propagation in three-dimensional unbounded domains is studied. An efficient numerical approach is proposed, which is based on using the displacement unit-impulse response matrix representing the interaction force–displacement relationship on the near field/far field interface. Spatially, an approximation is used to reduce the computational effort associated with the large size of three-dimensional problems. It is based on subdividing the fully coupled unbounded domain into multiple subdomains. The displacement unit-impulse response matrices of all subdomains are calculated separately. The error associated with this spatial decoupling can be reduced by placing the near field/far field interface further away from the domain of interest. Detailed parameter studies have been conducted using numerical examples, in order to provide guidelines for the proposed spatially local schemes, and to demonstrate the accuracy and high efficiency of the proposed method for three-dimensional soil–structure interaction problems.

17.
Multidimensional scaling (MDS) has played an important role in non-stationary spatial covariance structure estimation and in analyzing the spatiotemporal processes underlying environmental studies. A combined cluster-MDS model, including geographical spatial constraints, has previously been proposed by the authors to address the estimation problem in oversampled domains in a least squares framework. In this paper, a general latent class model with spatial constraints is formulated that, in a maximum likelihood framework, allows the sample stations to be partitioned into classes and the cluster centers to be represented simultaneously in a low-dimensional space, while the stations and clusters retain their spatial relationships. A model selection strategy is proposed to determine the number of latent classes and the dimensionality of the problem. Real and artificial data sets are analyzed to test the performance of the model.

18.
The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. The method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1-D linear model and two 2-D water-flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be performed judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
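Two of the four thresholding rules are one-liners; here is a sketch of hard and soft thresholding applied entrywise to a covariance estimate (SCAD is omitted for brevity, and in practice the diagonal is usually left unthresholded so variances are preserved):

```python
import numpy as np

def hard_threshold(C, t):
    """Zero out entries with magnitude below t; keep the rest unchanged."""
    return np.where(np.abs(C) >= t, C, 0.0)

def soft_threshold(C, t):
    """Shrink every entry toward zero by t (lasso-style shrinkage)."""
    return np.sign(C) * np.maximum(np.abs(C) - t, 0.0)
```

Applied to a small-ensemble forecast covariance, either rule suppresses the weak long-range entries that are most likely spurious sampling noise.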

19.
Traditional probabilistic seismic hazard analysis (PSHA) uses ground-motion models that are based on the ergodic assumption, which means that the distribution of ground motions over time at a given site is the same as their spatial distribution over different sites. Evaluations of ground-motion data sets with multiple measurements at a given site and multiple earthquakes in a given region have shown that the ergodic assumption is not appropriate, as there are strong systematic region-specific source terms and site-specific path and site terms that are spatially correlated. We model these correlations using a spatial Gaussian process model. Different correlation functions are employed, both stationary and non-stationary, and the results are compared in terms of their predictive power. Spatial correlations of residuals are investigated on a Taiwanese strong-motion data set and on ground motions collected at the ANZA array (California). Source effects are spatially correlated, but provide a much stronger benefit in terms of prediction for the ANZA data set than for the Taiwanese data set. We find that systematic path effects are best modeled by a non-stationary covariance function that depends on source-to-site distance and magnitude. The correlation structure estimated from Californian data can be transferred to Taiwan if one carefully accounts for differences in magnitudes. About 50% of the aleatory variance can be explained by accounting for spatial correlation.

20.
Permissibility of a covariance function (in the sense of Bochner) depends on the norm (or metric) that determines spatial distance in several dimensions. A covariance function that is permissible for one norm may not be so for another. We prove that for a certain class of covariances of weakly homogeneous random fields, the spatial distance can be defined only in terms of the Euclidean norm. This class includes commonly used covariance functions. Functions that do not belong to this class may be permissible covariances for some non-Euclidean metric. Thus, a different class of covariances, for which non-Euclidean norms are valid spatial distances, is also discussed. The choice of a coordinate system and associated norm to describe a physical phenomenon depends on the nature of the properties being described. Norm-dependent permissibility analysis has important consequences in spatial statistics applications (e.g., spatial estimation or mapping), in which one is concerned about the validity of covariance functions associated with a physically meaningful norm (Euclidean or non-Euclidean).
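Permissibility under the Euclidean norm can be checked numerically for a given configuration of points: build the covariance matrix and inspect its smallest eigenvalue. The exponential model below is one commonly used member of the class discussed; the point set and sizes are illustrative:

```python
import numpy as np
from scipy.spatial.distance import cdist

pts = np.random.default_rng(0).uniform(0.0, 1.0, size=(100, 3))
C = np.exp(-cdist(pts, pts))            # exponential covariance, Euclidean distance
min_eig = np.linalg.eigvalsh(C).min()   # non-negative (up to round-off) if permissible
```

A negative smallest eigenvalue for some point configuration is a certificate of non-permissibility of the covariance/norm pair; the converse check for a single configuration is, of course, only a necessary condition.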


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号