Similar literature
20 similar documents retrieved.
1.
It is common in geostatistics to use the variogram to describe the spatial dependence structure and to use kriging as the spatial prediction methodology. Both methods are sensitive to outlying observations and are strongly influenced by the marginal distribution of the underlying random field. Hence, they lead to unreliable results when applied to extreme-value or multimodal data. As an alternative to traditional spatial modeling and interpolation we consider the use of copula functions. This paper extends existing copula-based geostatistical models. We show how location-dependent covariates, e.g. a spatial trend, can be accounted for in spatial copula models. Furthermore, we introduce geostatistical copula-based models that are able to deal with random fields having discrete marginal distributions. We propose three different copula-based spatial interpolation methods. By exploiting the relationship between bivariate copulas and indicator covariances, we present indicator kriging and disjunctive kriging. As a second method we present simple kriging of the rank-transformed data. The third method is a plug-in prediction and generalizes the frequently applied trans-Gaussian kriging. Finally, we report on the results obtained for the so-called Helicopter data set, which contains extreme radioactivity measurements.

2.
Spatial interpolation methods for nonstationary plume data
Plume interpolation consists of estimating contaminant concentrations at unsampled locations using the available contaminant data surrounding those locations. The goal of ground water plume interpolation is to maximize the accuracy in estimating the spatial distribution of the contaminant plume given the data limitations associated with sparse monitoring networks with irregular geometries. Beyond data limitations, contaminant plume interpolation is a difficult task because contaminant concentration fields are highly heterogeneous, anisotropic, and nonstationary phenomena. This study provides a comprehensive performance analysis of six interpolation methods for scatter-point concentration data, ranging in complexity from intrinsic kriging based on intrinsic random function theory to a traditional implementation of inverse-distance weighting. High resolution simulation data of perchloroethylene (PCE) contamination in a highly heterogeneous alluvial aquifer were used to generate three test cases, which vary in the size and complexity of their contaminant plumes as well as the number of data available to support interpolation. Overall, the variability of PCE samples and preferential sampling controlled how well each of the interpolation schemes performed. Quantile kriging was the most robust of the interpolation methods, showing the least bias from both of these factors. This study provides guidance to practitioners balancing opposing theoretical perspectives, ease-of-implementation, and effectiveness when choosing a plume interpolation method.
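Of the six methods compared above, inverse-distance weighting is the simplest to state. As a rough illustration only (not the study's implementation, and with a hypothetical default power of 2), a minimal IDW estimator might look like:

```python
import math

def idw(points, values, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from scattered samples."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return v  # estimation point coincides with a sample: return it
        w = d ** -power  # closer samples get larger weights
        num += w * v
        den += w
    return num / den
```

Raising `power` concentrates weight on the nearest samples; the study's kriging variants replace these geometric weights with ones derived from a spatial covariance model.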

3.
In geostatistics, stochastic simulation is often used either as an improved interpolation algorithm or as a measure of the spatial uncertainty. Hence, it is crucial to assess how fast realization-based statistics converge towards model-based statistics (i.e. histogram, variogram), since in theory such a match is guaranteed only on average over a number of realizations. This can be strongly affected by the random number generator being used. Moreover, the usual assumption of independence among simulated realizations of a random process may also be affected by the random number generator used. Simulation results, obtained by using three different random number generators implemented in the Geostatistical Software Library (GSLib), are compared. Some practical aspects are pointed out and some suggestions are given to users of the unconditional LU simulation method.

4.
Data collected along transects are becoming more common in environmental studies as indirect measurement devices, such as geophysical sensors, that can be attached to mobile platforms become more prevalent. Because exhaustive sampling is not always possible under constraints of time and costs, geostatistical interpolation techniques are used to estimate unknown values at unsampled locations from transect data. It is known that outlying observations can receive significantly greater ordinary kriging weights than centrally located observations when the data are contiguously aligned along a transect within a finite search window. Deutsch (1994) proposed a kriging algorithm, finite domain kriging, that uses a redundancy measure in place of the covariance function in the data-to-data kriging matrix to address the problem of overweighting the outlying observations. This paper compares the performances of two kriging techniques, ordinary kriging (OK) and finite domain kriging (FDK), on examining unexploded ordnance (UXO) densities by comparing prediction errors at unsampled locations. The impact of sampling design on object count prediction is also investigated using data collected from transects and at random locations. The Poisson process is used to model the spatial distribution of UXO for three 5000 × 5000 m fields: one has no ordnance target (homogeneous field), while the other two have an ordnance target in the center of the site (isotropic and anisotropic fields). In general, for a given sampling transect width, the differences between OK and FDK in terms of the mean error and the mean square error are not significant regardless of the sampled area and the choice of the field. When 20% or more of the site is sampled, the estimation of object counts is unbiased on average for all three fields regardless of the choice of the transect width and the choice of the kriging algorithm.
However, for non-homogeneous fields (isotropic and anisotropic fields), the mean error fluctuates considerably when a small number of transects are sampled. The difference between transect sampling and random sampling in terms of prediction errors becomes almost negligible if more than 20% of the site is sampled. Overall, FDK is no better than OK in terms of prediction performance when the transect sampling procedure is used.
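The homogeneous Poisson process used above to model UXO locations is straightforward to sketch. A minimal version (illustrative only; the function name and intensity are hypothetical, and the study's target fields add an inhomogeneous component):

```python
import math
import random

def poisson_field(intensity, width, height, rng):
    """Homogeneous Poisson point process on [0, width] x [0, height].

    The point count is Poisson(intensity * area), drawn with Knuth's
    multiplication method; locations are then uniform and independent.
    """
    mean = intensity * width * height  # expected number of objects
    limit = math.exp(-mean)            # valid for moderate `mean` only
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            break
        k += 1
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(k)]
```

For the isotropic and anisotropic fields of the study, the constant `intensity` would be replaced by a location-dependent rate peaking at the ordnance target.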

5.
Kriging in the hydrosciences
Most of the methods currently used in the hydrosciences for interpolation and spatial averaging fail to quantify the accuracy of the estimates. The theory of regionalized variables enables one to point out the relationship between the spatial correlation of hydrometeorological or hydrogeological fields and the precision of interpolation, or determination of average values, over these fields. A new estimation method called kriging has proven to be quite well adapted to solving water resources problems. The author presents a series of case studies in automatic contouring, data input for numerical models, estimation of average precipitation over a given catchment area, and measurement network design.

6.
Liu Yang, Zhang Peng, Liu Cai, Zhang Yachen. Chinese Journal of Geophysics (地球物理学报), 2018, 61(4): 1400-1412
Because of constraints imposed by field acquisition geometry and economic factors, seismic data acquired with artificial sources are always irregularly distributed in the spatial direction. Many seismic data processing techniques (e.g., multiple attenuation, migration, and time-lapse seismic), however, require data regularly sampled in space. Data interpolation is therefore one of the key steps in the seismic processing workflow; a failed interpolation introduces spurious information that seriously affects subsequent processing. Iterative interpolation is a widely used strategy for seismic data reconstruction, but conventional iterative methods often struggle to guarantee interpolation accuracy and converge slowly; in particular, when random noise is present, there is a large signal-to-noise difference between interpolated and original traces. Developing fast and effective iterative interpolation methods is therefore of significant industrial value. This paper casts seismic data interpolation as a basis-pursuit problem and, within the compressed sensing framework, proposes a new nonlinear Bregman shaping iteration to solve the constrained minimization problem, together with two matched iteration-control criteria; the missing data are reconstructed through an effective sparse transform. Tests on synthetic models and field data, and comparison with conventional iterative interpolation algorithms, show that the Bregman shaping iteration recovers missing seismic information contaminated by random noise more effectively.

7.
Interpolations of groundwater table elevation in dissected uplands
Chung JW, Rogers JD. Ground Water, 2012, 50(4): 598-607
The variable elevation of the groundwater table in the St. Louis area was estimated using multiple linear regression (MLR), ordinary kriging, and cokriging as part of a regional program seeking to assess liquefaction potential. Surface water features were used to determine the minimum water table for MLR and supplement the principal variables for ordinary kriging and cokriging. By evaluating the known depth to the water and the minimum water table elevation, the MLR analysis approximates the groundwater elevation for a contiguous hydrologic system. Ordinary kriging and cokriging estimate values in unsampled areas by calculating the spatial relationships between the unsampled and sampled locations. In this study, ordinary kriging did not incorporate topographic variations as an independent variable, while cokriging included topography as a supporting covariable. Cross validation suggests that cokriging provides a more reliable estimate at known data points with less uncertainty than the other methods. Profiles extending through the dissected uplands terrain suggest that: (1) the groundwater table generated by MLR mimics the ground surface and elicits an exaggerated interpolation of groundwater elevation; (2) the groundwater table estimated by ordinary kriging tends to ignore local topography and exhibits oversmoothing of the actual undulations in the water table; and (3) cokriging appears to give a realistic water surface, which rises and falls in proportion to the overlying topography. The authors concluded that cokriging provided the most realistic estimate of the groundwater surface, which is the key variable in assessing soil liquefaction potential in unconsolidated sediments.

8.
Estimating and mapping spatial uncertainty of environmental variables is crucial for environmental evaluation and decision making. For a continuous spatial variable, estimation of spatial uncertainty may be conducted in the form of estimating the probability of (not) exceeding a threshold value. In this paper, we introduce a Markov chain geostatistical approach for estimating threshold-exceeding probabilities. The differences of this approach from the conventional indicator approach lie in its nonlinear estimators (Markov chain random field models) and its incorporation of interclass dependencies through transiograms. We estimated threshold-exceeding probability maps of clay layer thickness through simulation (i.e., using a number of realizations simulated by Markov chain sequential simulation) and interpolation (i.e., direct conditional probability estimation using only the indicator values of sample data), respectively. To evaluate the approach, we also estimated those probability maps using sequential indicator simulation and indicator kriging interpolation. Our results show that (i) the Markov chain approach provides an effective alternative for spatial uncertainty assessment of environmental spatial variables, and the probability maps from this approach are more reasonable than those from conventional indicator geostatistics, and (ii) the probability maps estimated through sequential simulation are more realistic than those through interpolation, because the latter display some uneven transitions caused by spatial structures of the sample data.

9.
In many fields of study, and certainly in hydrogeology, uncertainty propagation is a recurring subject. Usually, parametrized probability density functions (PDFs) are used to represent data uncertainty, which limits their use to particular distributions. Often, this problem is solved by Monte Carlo simulation, with the disadvantage that one needs a large number of calculations to achieve reliable results. In this paper, a method is proposed based on a piecewise linear approximation of PDFs. The uncertainty propagation with these discretized PDFs is distribution independent. The method is applied to the upscaling of transmissivity data, and carried out in two steps: the vertical upscaling of conductivity values from borehole data to aquifer scale, and the spatial interpolation of the transmissivities. The results of this first step are complete PDFs of the transmissivities at borehole locations reflecting the uncertainties of the conductivities and the layer thicknesses. The second step results in a spatially distributed transmissivity field with a complete PDF at every grid cell. We argue that the proposed method is applicable to a wide range of uncertainty propagation problems.
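The distribution-independent propagation idea can be illustrated with an even cruder discretization than the paper's piecewise-linear one: represent each PDF by a finite set of weighted points and combine them directly. A toy sketch for the sum of two independent variables (the function name is hypothetical):

```python
def convolve_pmf(pa, pb):
    """Distribution of X + Y for independent X, Y, each given as a
    discretized {value: probability} dictionary; no parametric
    distributional assumption is made."""
    out = {}
    for xa, wa in pa.items():
        for xb, wb in pb.items():
            # each pair of support points contributes its joint probability
            out[xa + xb] = out.get(xa + xb, 0.0) + wa * wb
    return out
```

Unlike Monte Carlo, a single pass over the support points yields the complete output PDF; the paper's piecewise-linear version refines this by interpolating between support points.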

10.
The Bayesian maximum entropy (BME) method can be used to predict the value of a spatial random field at an unsampled location given precise (hard) and imprecise (soft) data. It has mainly been used when the data are non-skewed. When the data are skewed, the method has been used by transforming the data (usually through the logarithmic transform) in order to remove the skew. The BME method is applied for the transformed variable, and the resulting posterior distribution transformed back to give a prediction of the primary variable. In this paper, we show how the implementation of the BME method that avoids the use of a transform, by including the logarithmic statistical moments in the general knowledge base, gives more appropriate results, as expected from the maximum entropy principle. We use a simple illustration to show this approach giving more intuitive results, and use simulations to compare the approaches in terms of the prediction errors. The simulations show that the BME method with the logarithmic moments in the general knowledge base reduces the errors, and we conclude that this approach is more suitable to incorporate soft data in a spatial analysis for lognormal data.

11.
Flow and transport in porous media are determined by the structure of the medium. Besides spatial correlation, the connectivity of heterogeneous conductivities in particular is acknowledged to be a key factor. This has been demonstrated for well-defined random fields having different topological properties. Yet, it remains an open question which morphological measures carry sufficient information to actually predict flow and transport in porous media. We analyze flow and transport in classical, two-dimensional random fields showing different topology, and we determine a selection of structural characteristics including classical two-point statistics, chord-length distribution and Minkowski functions (four-point statistics), including the Euler number as a topological measure. Using the approach of simulated annealing for global optimization we generate analog random fields that are forced to reproduce one or several of these structural characteristics. Finally we evaluate to what extent the generated analogs reproduce the original flow and transport behavior as well as some more elaborate structural characteristics, including percolation probabilities and the pair connectivity function. The results confirm that two-point statistics is insufficient to capture functional properties, since it is not sensitive to connectivity. In contrast, the combination of Minkowski functions and chord-length distributions carries sufficient information to reproduce the breakthrough curve of a conservative solute. Hence, global topology provided by the Euler number, together with local clustering provided by the chord-length distribution, seems to be a powerful condensation of structural complexity with respect to functional properties.

12.
DC (direct current) electrical and shallow seismic methods are indispensable to near-surface geophysical exploration, but near-surface areas are very difficult environments for any geophysical survey because of the random noise caused by near-surface inhomogeneities. As a new algorithm based on higher-order statistics theory, the higher-order correlation stacking algorithm for seismic data smoothing in the wavelet domain has been developed and applied efficiently to filter correlated noise that conventional second-order correlation stacking could not suppress. In this paper, this higher-order correlation stacking technique is presented for DC electrical data in the wavelet domain. Taking into account both single-section and multi-section data, we present two new formulations of correlation stacking for DC electrical data. Synthetic examples with Gaussian noise are designed to analyze the overall efficiency of the new algorithm and to determine its efficacy. Meanwhile, comparison with the traditional least-squares optimization inversion method on field examples from electrical imaging surveys and time-domain IP measurements in China shows its significant advantages. The quality of the new algorithm has also been assessed by physical simulation experiments. This new technology in DC electrical exploration measurements provides a new application in engineering and mining investigation.

13.
Spatial interpolation methods used for estimation of missing precipitation data generally under and overestimate the high and low extremes, respectively. This is a major limitation that plagues all spatial interpolation methods as observations from different sites are used in local or global variants of these methods for estimation of missing data. This study proposes bias‐correction methods similar to those used in climate change studies for correcting missing precipitation estimates provided by an optimal spatial interpolation method. The methods are applied to post‐interpolation estimates using quantile mapping, a variant of equi‐distant quantile matching and a new optimal single best estimator (SBE) scheme. The SBE is developed using a mixed‐integer nonlinear programming formulation. K‐fold cross validation of estimation and correction methods is carried out using 15 rain gauges in a temperate climatic region of the U.S. Exhaustive evaluation of bias‐corrected estimates is carried out using several statistical, error, performance and skill score measures. The differences among the bias‐correction methods, the effectiveness of the methods and their limitations are examined. The bias‐correction method based on a variant of equi‐distant quantile matching is recommended. Post‐interpolation bias corrections have preserved the site‐specific summary statistics with minor changes in the magnitudes of error and performance measures. The changes were found to be statistically insignificant based on parametric and nonparametric hypothesis tests. The correction methods provided improved skill scores with minimal changes in magnitudes of several extreme precipitation indices. The bias corrections of estimated data also brought site‐specific serial autocorrelations at different lags and transition states (dry‐to‐dry, dry‐to‐wet, wet‐to‐wet and wet‐to‐dry) close to those from the observed series. 
Bias corrections of missing data estimates provide better serially complete precipitation time series useful for climate change and variability studies in comparison to uncorrected filled data series.
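Empirical quantile mapping, the baseline for the corrections evaluated above, can be sketched in a few lines (a nearest-rank version; the study's exact variants, including equi-distant quantile matching, differ in detail):

```python
def quantile_map(value, modeled, observed):
    """Empirical quantile mapping (nearest-rank): locate `value` in the
    modeled sample's CDF, return the same quantile of the observed sample."""
    m, o = sorted(modeled), sorted(observed)
    q = sum(1 for v in m if v <= value) / len(m)   # empirical CDF position
    idx = min(len(o) - 1, max(0, round(q * len(o)) - 1))
    return o[idx]
```

Applied to interpolated precipitation estimates, the mapping pulls the estimate's distribution toward the gauge's observed distribution, which is how the tails (high and low extremes) get corrected.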

14.

Gridded meteorological data are available for all of Norway as time series dating from 1961. A new way of interpolating precipitation in space from observed values is proposed. Based on the criterion that interpolated precipitation fields in space should be consistent with observed spatial statistics, such as spatial mean, variance and intermittency, spatial fields of precipitation are simulated from a gamma distribution with parameters determined from observed data, adjusted for intermittency. The simulated data are distributed in space, using the spatial pattern derived from kriging. The proposed method is compared to indicator kriging and to the current methodology used for producing gridded precipitation data. Cross-validation gave similar results for the three methods with respect to RMSE, temporal mean and standard deviation, whereas a comparison on estimated spatial variance showed that the new method has a near-perfect agreement with observations. Indicator kriging underestimated the spatial variance by 60–80% and the current method produced a significant scatter in its estimates.

Citation Skaugen, T. & Andersen, J. (2010) Simulated precipitation fields with variance-consistent interpolation. Hydrol. Sci. J. 55(5), 676–686.
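The core sampling idea above (dry with some probability, otherwise a gamma draw with parameters fitted to observations) can be sketched pointwise; the spatial pattern from kriging is omitted here, and the function name and parameters are hypothetical:

```python
import random

def simulate_precip(n, p_dry, shape, scale, rng):
    """n precipitation values: zero (dry) with probability p_dry, otherwise
    a gamma draw -- the intermittency-adjusted gamma model in miniature."""
    return [0.0 if rng.random() < p_dry else rng.gammavariate(shape, scale)
            for _ in range(n)]
```

Because values are drawn from the fitted gamma distribution rather than smoothed between gauges, the simulated field preserves the observed spatial variance, which is the property the paper validates.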

15.
Remotely sensed images, as a major data source for observing the earth, have been extensively integrated into spatial-temporal analysis in environmental research. Information on the spatial distribution and spatial-temporal dynamics of natural entities recorded by series of images, however, usually bears various kinds of uncertainty. To deepen our insight into the uncertainties inherent in these image-based observations of natural phenomena, a general data modeling methodology is developed to embrace different kinds of uncertainty. The aim of this paper is to propose a random set method for uncertainty modeling of spatial objects extracted from images in environmental studies. Basic concepts of random set theory are introduced and primary random spatial data types are defined based on them. The method has been applied to dynamic wetland monitoring in the Poyang Lake national nature reserve in China. Four Landsat images have been used to monitor grassland and vegetation patches. Their broad gradual boundaries are represented by random sets, and their statistical mean and median are estimated. Random sets are well suited to estimating these boundaries. We conclude that our method based on random set theory has the potential to serve as a general framework for uncertainty modeling and is applicable in spatial environmental analysis.

16.
This paper presents a review of methods for stochastic representation of non-Gaussian random fields. One category of such methods is through transformation from Gaussian random fields, and the other category is through direct simulation. This paper also gives a reflection on the simulation of non-Gaussian random fields, with the focus on its primary application for uncertainty quantification, which is usually associated with a large number of simulations. Dimension reduction is critical in the representation of non-Gaussian random fields with the aim of efficient uncertainty quantification. Aside from introducing the methods for simulating non-Gaussian random fields, critical components related to suitable stochastic approaches for efficient uncertainty quantification are stressed in this paper. Numerical examples of stochastic groundwater flow are also presented to investigate the applicability and efficiency of the methods for simulating non-Gaussian random fields for the purpose of uncertainty quantification.
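The first category above, transformation from Gaussian random fields, is easy to illustrate pointwise: generate Gaussian values and push them through a nonlinear map (here exp, giving lognormal marginals). This sketch ignores spatial correlation entirely and its function name is hypothetical:

```python
import math
import random

def lognormal_series(n, mu, sigma, rng):
    """Non-Gaussian values by transformation: Box-Muller Gaussian draws
    pushed through exp(), yielding lognormal marginals."""
    out = []
    for _ in range(n):
        u1 = 1.0 - rng.random()  # in (0, 1], keeps log() finite
        u2 = rng.random()
        z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
        out.append(math.exp(mu + sigma * z))
    return out
```

A full field version would first simulate a correlated Gaussian field (e.g. by spectral or LU methods) and then apply the same marginal transform at every grid node; the review's direct-simulation category avoids the Gaussian intermediate altogether.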

17.
Intrinsic random fields of order k, defined as random fields whose high-order increments (generalized increments of order k) are second-order stationary, are used in spatial statistics to model regionalized variables exhibiting spatial trends, a feature that is common in earth and environmental sciences applications. A continuous spectral algorithm is proposed to simulate such random fields in a d-dimensional Euclidean space, with given generalized covariance structure and with Gaussian generalized increments of order k. The only condition needed to run the algorithm is to know the spectral measure associated with the generalized covariance function (case of a scalar random field) or with the matrix of generalized direct and cross-covariances (case of a vector random field). The algorithm is applied to synthetic examples to simulate intrinsic random fields with power generalized direct and cross-covariances, as well as an intrinsic random field with power and spline generalized direct covariances and Matérn generalized cross-covariance.

18.
In Europe, since 1990, an environmental monitoring survey has taken place every 5 years, using moss samples to study the distribution of heavy metal concentrations and assess contamination sources, resulting in the identification of statistical associations among several heavy metal concentrations in mosses. With this work, we propose an extension of an existing spatio-temporal model, introduced in Høst et al. (JASA 90(431):853–861, 1995), allowing for prediction at unsampled locations of pollution data in the presence of covariates related to each country's specificities, by separately modelling the spatial mean field, the spatial variance field and the space–time residual field. Moreover, this model allows estimation of an interpolation error, as an accuracy measure, derived dependently on the case study. For validation purposes, a simulation study is conducted, showing that the use of the proposed model leads to more accurate prediction values. Results obtained by the proposed methodology for the most recent available survey are compared with results obtained with no temporal information, namely when Ordinary Kriging, according to the definition in Cressie (Statistics for Spatial Data, Wiley, New York, 1993), is used to derive illustrative prediction maps based only on the most recent data. A cross-validation exercise is performed for each of the scenarios considered, and the average interpolation errors are presented. In assessing interpolation errors, we conclude that the monitoring specificities of each country and the information from preceding surveys allow for more accurate prediction results.

19.
20.
Five gridding models (minimum curvature, kriging, modified Shepard, inverse-distance weighting, and radial basis functions) were applied to grid lithospheric magnetic-field data from the densified geomagnetic total-intensity survey of the Xiaojiang fault zone. The gridding results were evaluated with metrics including the root-mean-square prediction error and the RMS of the interpolation residuals, which showed that kriging and inverse-distance weighting achieve the highest accuracy. A further comparison of the gridded-map quality of these two methods shows that kriging balances smoothness of the data against the spatial relationship between each observation and the estimation point, avoiding systematic error. It is concluded that kriging is the more suitable method for gridding lithospheric magnetic-field data.
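The cross-validated prediction error used above to rank gridding methods is method-agnostic and can be sketched generically. Here a hypothetical nearest-neighbour predictor stands in for any of the five gridding models; both function names are illustrative:

```python
import math

def loo_rmse(points, values, predict):
    """Leave-one-out cross-validation RMSE for a gridding method `predict`:
    hold out each sample, predict it from the rest, accumulate the error."""
    errs = []
    for i in range(len(points)):
        rest_p = points[:i] + points[i + 1:]
        rest_v = values[:i] + values[i + 1:]
        errs.append(predict(rest_p, rest_v, *points[i]) - values[i])
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def nearest(points, values, x, y):
    """Stand-in predictor: value of the closest remaining sample."""
    return min((math.hypot(x - px, y - py), v)
               for (px, py), v in zip(points, values))[1]
```

Running `loo_rmse` with each candidate gridding function on the same observation set gives directly comparable error scores, which is the basis of the comparison reported above.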

