Similar Documents
20 similar documents found.
1.
This work deals with the geostatistical simulation of a family of stationary random field models with bivariate isofactorial distributions. Such models are defined as the sum of independent random fields with mosaic-type bivariate distributions and infinitely divisible univariate distributions. For practical applications, dead leaf tessellations are used, since they provide a wide range of models and allow conditioning the realizations to a set of data via an iterative procedure (simulated annealing). The model parameters can be determined by comparing the data variogram and madogram, and make it possible to control the spatial connectivity of the extreme values in the realizations. An illustration with a forest dataset is presented, in which a negative binomial model is used to characterize the distribution of coniferous trees over a wooded area.

2.
Nonparametric bias-corrected variogram estimation under non-constant trend
In geostatistics, the approximation of the spatial dependence structure of a process, through the estimation of the variogram or the covariogram of the variable under consideration, is an important issue. In this work, under a general spatial model that includes a mean or trend function, and without assuming any parametric model for this function or for the dependence structure of the process, a general nonparametric estimator of the variogram is proposed. The new approach consists of applying an iterative algorithm that uses the residuals obtained from a nonparametric local linear estimation of the trend function, jointly with a correction of the bias due to the use of these residuals. A simulation study checks the validity of the presented approaches in practice. The broad applicability of the procedures is demonstrated on a real data set.
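The two building blocks involved, a local linear trend fit and the empirical variogram of its residuals, might be sketched as follows. This omits the paper's bias correction entirely; the Gaussian kernel and the bandwidth handling are our own illustrative assumptions, not the authors' estimator:

```python
import numpy as np

def local_linear_trend(x, y, bandwidth):
    """Kernel-weighted local linear estimate of the trend at each sample point."""
    m_hat = np.empty_like(y, dtype=float)
    for i, x0 in enumerate(x):
        # sqrt of the Gaussian kernel, so lstsq minimizes the kernel-weighted SSE
        w = np.sqrt(np.exp(-0.5 * ((x - x0) / bandwidth) ** 2))
        A = np.column_stack([np.ones_like(x), x - x0])  # local linear design matrix
        beta, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        m_hat[i] = beta[0]                              # fitted value at x0
    return m_hat

def residual_variogram(x, resid, lags, tol):
    """Classical (Matheron) empirical variogram of the detrended residuals."""
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (resid[:, None] - resid[None, :]) ** 2
    return np.array([sq[np.abs(d - h) <= tol].mean() for h in lags])
```

Because the trend is estimated rather than known, the residual variogram computed this way is biased downward at large lags, which is precisely the bias the paper's iterative correction targets.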

3.
Most lumped rainfall-runoff models separate the interflow and groundwater components from the measured runoff hydrograph in an attempt to model these as hydrologic reservoir units. Similarly, rainfall losses due to infiltration and other abstractions are separated from the measured rainfall hyetograph, and the resulting series are used as inputs to the various hydrologic reservoir units. This data pre-processing is necessary in order to use linear unit hydrograph theory, as well as to maintain a hydrologic budget between the surface and subsurface flow processes. Since infiltration determines the shape of the runoff hydrograph, it must be estimated as accurately as possible. When measured infiltration data are available, Horton's exponential infiltration model is preferable due to its simplicity. However, estimating the parameters of Horton's model constitutes a nonlinear least squares fitting problem, so an iterative procedure is required that needs initialization and is not guaranteed to converge. In a similar context, the separation of direct runoff, interflow, and baseflow from the total hydrograph is typically done in an ad hoc manner; many practitioners use exponential models in a "layer peeling" fashion to perform this separation, which in essence is also an exponential data fitting problem. Likewise, certain variogram functions can be fitted using exponential data fitting techniques. In this paper we show that fitting a Hortonian model to experimental data, as well as performing hydrograph separation, and total hydrograph and variogram fitting, can all be formulated as a system identification problem using Hankel-based realization algorithms. The main advantage is that the parameters can be estimated in a noniterative fashion, using robust numerical linear algebra techniques. As such, the system identification algorithms overcome the convergence problems inherent in iterative techniques. In addition, the algorithms are robust to noise in the data, since they optimally separate the signal and noise subspaces from the observed noisy data. The algorithms are tested with real data from field experiments performed in Surinam, as well as with real hydrograph data from a watershed in Louisiana. The system identification techniques presented herein can also be used with any other type of exponential data, such as exponential decays from nuclear experiments, tracer studies, and compartmental analysis studies.
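For contrast with the noniterative Hankel approach, the classical fit of Horton's model f(t) = fc + (f0 - fc)·exp(-k·t) can be sketched as a log-linearization. Fixing the final capacity fc (assumed known here, e.g. read from the tail of the record) is exactly the simplification the full problem does not allow; estimating fc jointly is what makes the fit nonlinear and iterative:

```python
import numpy as np

def fit_horton(t, f, fc):
    """Fit Horton's infiltration model f(t) = fc + (f0 - fc) * exp(-k * t).

    Sketch of the classical least-squares approach: with fc held fixed
    (an assumption for illustration), regressing ln(f - fc) on t turns the
    problem into an ordinary linear fit. Returns (f0, k).
    """
    slope, intercept = np.polyfit(t, np.log(f - fc), 1)
    return fc + np.exp(intercept), -slope  # intercept = ln(f0 - fc), slope = -k
```

On noise-free data with the true fc this recovers f0 and k exactly; with noisy data and an unknown fc, the fit degrades and one must iterate over fc, which is the convergence issue the paper's realization algorithms avoid.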

4.
Modern methods of geostatistics deliver an essential contribution to Environmental Impact Assessment (EIA). These methods allow for spatial interpolation, forecasting and risk assessment of the expected impact during and after mining projects by integrating different sources of data and information. Geostatistical estimation and simulation algorithms are designed to provide both a most likely forecast and information about the accuracy of the prediction. The representativeness of these measures depends strongly on the quality of the inferred model parameters, which are mainly defined by the parameters of the variogram or the covariance function. Available data may be sparse, trend-affected and of different data types, making the inference of representative geostatistical model parameters difficult. This contribution introduces a new method for best fitting the geostatistical model parameters in the presence of a trend, which utilizes the empirical and theoretical differences between Universal Kriging and trend predictions. The method extends well-known approaches of cross validation in two aspects. Firstly, the model evaluation is not limited to sample data locations but is performed at any prediction location of the attribute in the domain. Secondly, it extends the measure used in cross validation, based on a single point replacement, by using error curves. These allow defining rings of influence that represent errors resulting from separate variogram lags. By analyzing the different variogram lags, the fit of the complete covariance can be assessed and the influence of the several model parameters separated. The use of the proposed method in an EIA context is illustrated in a case study related to the prediction of mining-induced ground movements.

5.
Accurate estimation of aquifer parameters, especially in crystalline hard rock areas, assumes a special significance for the management of groundwater resources. Aquifer parameters are usually estimated through pumping tests carried out on water wells. Since carrying out pumping tests at a number of sites is costly and time consuming, the application of geophysical methods in combination with hydro-geochemical information proves to be a practical and cost-effective way to estimate aquifer parameters. Here a method to estimate aquifer parameters such as hydraulic conductivity, formation factor, porosity and transmissivity is presented, utilizing electrical conductivity values obtained via hydro-geochemical analysis of existing wells and the respective vertical electrical sounding (VES) points of Sindhudurg district, western Maharashtra, India. Further, prior to interpolating the distribution of aquifer parameters over the study area, variogram modelling was carried out using kriging as well as the data-driven techniques of automatic relevance determination based Bayesian neural networks (ARD-BNN) and the adaptive neuro-fuzzy inference system (ANFIS). In total, four variogram model fitting techniques, spherical, exponential, ARD-BNN and ANFIS, were compared. According to the obtained results, the spherical variogram model in interpolating transmissivity, the ARD-BNN variogram model in interpolating porosity, the exponential variogram model in interpolating aquifer thickness and the ANFIS variogram model in interpolating hydraulic conductivity outperformed the rest of the variogram models. Accordingly, accurate aquifer parameter maps of the study area were produced using the best variogram model in each case. The present results suggest relatively high values of hydraulic conductivity, porosity and transmissivity at Parule, Mogarne, Kudal, and Zarap, which would be useful to characterize the aquifer system over western Maharashtra.

6.
We analyze the impact of the choice of the variogram model adopted to characterize the spatial variability of natural log-transmissivity on the evaluation of the leading (statistical) moments of hydraulic heads and of contaminant travel times and trajectories within mildly (randomly) heterogeneous two-dimensional porous systems. The study is motivated by the fact that in several practical situations the differences between various variogram types and a typical noisy sample variogram are small enough that one would often have a hard time deciding which of the tested models provides the best fit. Likewise, choosing amongst a set of seemingly likely variogram models estimated by means of geostatistical inverse modeling of the flow equations can be difficult due to the lack of sensitivity of available model discrimination criteria. We tackle the problem within the framework of numerical Monte Carlo simulations for mean uniform and radial flow scenarios. The effect of three commonly used isotropic variogram models, i.e., Gaussian, exponential and spherical, is analyzed. Our analysis clearly shows that (ensemble) mean values of the quantities of interest are not considerably influenced by the variogram shape for the range of parameters examined. Conversely, prediction variances of the quantities examined are significantly affected by the choice of the variogram model of the log-transmissivity field. The spatial distribution of the largest/lowest values of the relative differences observed amongst the tested models depends on a combination of variogram shape and parameters and on the relative distance from internal sources and the outer domain boundary. Our findings suggest the need to develop robust techniques to discriminate amongst a set of seemingly equally likely alternative variogram models in order to provide reliable uncertainty estimates of state variables.
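The three covariance shapes compared in the study can be made concrete with a small unconditional simulation, here by Cholesky (LU) factorization of the covariance matrix. This is only an illustrative 1-D sketch: the practical-range scaling factors and the diagonal jitter are our own implementation choices, not taken from the paper:

```python
import numpy as np

def covariance(h, model, rang, sill=1.0):
    """Isotropic covariance functions (scaled so `rang` is a practical range)."""
    h = np.abs(h)
    if model == "gaussian":
        return sill * np.exp(-3.0 * (h / rang) ** 2)
    if model == "exponential":
        return sill * np.exp(-3.0 * h / rang)
    if model == "spherical":
        return sill * np.where(h < rang,
                               1.0 - 1.5 * h / rang + 0.5 * (h / rang) ** 3,
                               0.0)
    raise ValueError(model)

def simulate(x, model, rang, n_real, seed=0):
    """Unconditional Gaussian realizations via Cholesky (LU) decomposition."""
    C = covariance(x[:, None] - x[None, :], model, rang)
    L = np.linalg.cholesky(C + 1e-6 * np.eye(x.size))  # jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal((x.size, n_real))   # columns = realizations
```

All three models share the same sill, so single realizations look superficially similar; the differences show up in short-range smoothness and, as the abstract stresses, in prediction variances rather than ensemble means.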

7.
The multivariate Gaussian random function model is commonly used in stochastic hydrogeology to model the spatial variability of log-conductivity. The multi-Gaussian model is attractive because it is fully characterized by an expected value and a covariance function or matrix, hence its mathematical simplicity and easy inference. Field data may support a Gaussian univariate distribution for log hydraulic conductivity, but, in general, there are not enough field data to support a multi-Gaussian distribution. A univariate Gaussian distribution does not imply a multi-Gaussian model. In fact, many multivariate models can share the same Gaussian histogram and covariance function, yet differ in their patterns of spatial continuity at different threshold values. Hence the decision to use a multi-Gaussian model to represent the uncertainty associated with the spatial heterogeneity of log-conductivity is not data-based. Of greatest concern is the fact that a multi-Gaussian model implies minimal spatial correlation of extreme values, a feature critical for mass transport and one that may contradict some geological settings, e.g. channeling. The possibility for high conductivity values to be spatially correlated should not be discarded by adopting a congenial model just because data shortage prevents refuting it. In this study, three alternatives to a multi-Gaussian model, all sharing the same Gaussian histogram and the same covariance function, but with different continuity patterns for extreme values, were considered to model the spatial variability of log-conductivity. The three alternative models, plus the traditional multi-Gaussian model, are used to perform Monte Carlo analyses of groundwater travel times from a hypothetical nuclear repository to the ground surface through a synthetic formation similar to the Finnsjön site in Sweden.
The results show that the groundwater travel times predicted by the multi-Gaussian model could be ten times slower than those predicted by the other models. The probabilities of very short travel times could be severely underestimated using the multi-Gaussian model. Consequently, if field measured data are not sufficient to determine the higher-order moments necessary to validate the multi-Gaussian model, which is the usual situation in practice, other alternatives to the multi-Gaussian model ought to be considered.

8.
The variogram is a key input for geostatistical estimation and simulation. Preferential sampling may bias the spatial structure and often leads to noisy and unreliable variograms. A novel technique is proposed to weight variogram pairs in order to compensate for preferential or clustered sampling. Weighting the variogram pairs by global kriging of the quadratic differences between the tail and head values gives each pair the appropriate weight, removes noise and minimizes artifacts in the experimental variogram. Moreover, variogram uncertainty can be computed with this technique. The required covariance between the pairs going into the variogram calculation is a fourth-order covariance that must be calculated from second-order moments. This introduces some circularity into the calculation, whereby an initial variogram must be assumed before calculating how the pairs should be weighted for the experimental variogram. The methodology is assessed on synthetic and realistic examples. For the synthetic example, a comparison between the traditional and declustered variograms shows that the declustered variograms are better estimates of the true underlying variograms. The realistic example also shows that the declustered sample variogram is closer to the true variogram.

9.
10.
Spatial prediction of river channel topography by kriging
Topographic information is fundamental to geomorphic inquiry, and spatial prediction of bed elevation from irregular survey data is an important component of many reach‐scale studies. Kriging is a geostatistical technique for obtaining these predictions along with measures of their reliability, and this paper outlines a specialized framework intended for application to river channels. Our modular approach includes an algorithm for transforming the coordinates of data and prediction locations to a channel‐centered coordinate system, several different methods of representing the trend component of topographic variation and search strategies that incorporate geomorphic information to determine which survey data are used to make a prediction at a specific location. For example, a relationship between curvature and the lateral position of maximum depth can be used to include cross‐sectional asymmetry in a two‐dimensional trend surface model, and topographic breaklines can be used to restrict which data are retained in a local neighborhood around each prediction location. Using survey data from a restored gravel‐bed river, we demonstrate how transformation to the channel‐centered coordinate system facilitates interpretation of the variogram, a statistical model of reach‐scale spatial structure used in kriging, and how the choice of a trend model affects the variogram of the residuals from that trend. Similarly, we show how decomposing kriging predictions into their trend and residual components can yield useful information on channel morphology. Cross‐validation analyses involving different data configurations and kriging variants indicate that kriging is quite robust and that survey density is the primary control on the accuracy of bed elevation predictions. 
The root mean‐square error of these predictions is directly proportional to the spacing between surveyed cross‐sections, even in a reconfigured channel with a relatively simple morphology; sophisticated methods of spatial prediction are no substitute for field data. Copyright © 2007 John Wiley & Sons, Ltd.
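The channel-centered coordinate transformation at the core of this framework can be sketched as a projection of each survey point onto a centerline polyline. This is a minimal illustration with our own function names and sign convention, not the paper's algorithm (which adds trend models and geomorphic search strategies on top):

```python
import numpy as np

def channel_coords(points, centerline):
    """Transform (x, y) survey points into channel-centered (s, n) coordinates.

    s = arc length along the centerline polyline at the nearest foot point,
    n = signed lateral offset (positive to the left of the flow direction).
    """
    seg = np.diff(centerline, axis=0)                 # polyline segment vectors
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    s0 = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at each vertex
    out = []
    for p in points:
        best = (np.inf, 0.0, 0.0)                     # (distance, s, n)
        for i in range(len(seg)):
            a, d = centerline[i], seg[i]
            t = np.clip(np.dot(p - a, d) / np.dot(d, d), 0.0, 1.0)
            off = p - (a + t * d)                     # offset from foot point
            dist = np.hypot(off[0], off[1])
            if dist < best[0]:
                # 2-D cross product decides the bank side
                sign = 1.0 if d[0] * off[1] - d[1] * off[0] >= 0.0 else -1.0
                best = (dist, s0[i] + t * seg_len[i], sign * dist)
        out.append(best[1:])
    return np.array(out)                              # columns: s, n
```

Working in (s, n) rather than (x, y) is what makes the variogram interpretable in terms of along-channel and cross-channel spatial structure.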

11.
Determination of the hydraulic head, H, as a function of spatial coordinates and time in ground water flow is the basis for aquifer management and for prediction of contaminant transport. Several computer codes are available for this purpose. The spatial distribution of the transmissivity, T(x,y), is a required input to these codes. In most aquifers, T varies in an erratic manner, and it can be characterized statistically in terms of a few moments: the expected value, the variance, and the variogram. Knowledge of these moments, combined with a few measurements, permits one to estimate T at any point using geostatistical methods. In a review of transmissivity data from 19 unconsolidated aquifers, Hoeksema and Kitanidis (1985) identified two types of log-transmissivity Y = ln(T) variations: correlated variations with variance σ²_Yc and correlation scale I_Y on the order of kilometers, and uncorrelated variations with variance σ²_Yn. Direct identification of the log-transmissivity variogram, Γ_Y, from measurements is difficult because T data are generally scarce. However, many head measurements are commonly available. The aim of the paper is to introduce a methodology to identify the transmissivity variogram parameters (σ²_Yc, I_Y, and σ²_Yn) using head data in formations characterized by a large log-transmissivity variance. The identification methodology uses a combination of precise numerical simulations (carried out using the analytic element method) and a theoretical model. The main objective is to demonstrate the application of the methodology to regional ground water flow in the Eagle Valley basin in west-central Nevada, for which abundant transmissivity and head measurements are available.

12.
Sequential Gaussian simulation is one of the most widespread algorithms for simulating regionalized variables in the earth sciences. The simplicity and flexibility of this algorithm are the most important reasons for its popularity, but its implementation relies heavily on a screen-effect approximation that allows users to use a moving neighborhood instead of a unique neighborhood. Because of this, the size of the moving neighborhood, the number of conditioning data and the size of the variogram range are important in the simulation process and should be chosen carefully. In this work, different synthetic and real case studies are presented to show the effect of the neighborhood size, the number of conditioning data and the size of the variogram range on the simulation result, with respect to the reproduction of the model first- and second-order parameters. Results indicate that, in both conditional and non-conditional simulation cases, using a neighborhood with <50 conditioning data may lead to an inaccurate reproduction of the model statistics, and some cases require considering more than 200 conditioning data. The results of example 3 also suggest that when the variogram range is big compared to the simulation domain, an inaccurate simulation setup is harder to detect.
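To make the moving-neighborhood issue concrete, a bare-bones 1-D sequential Gaussian simulation with simple kriging might look like the sketch below. The exponential covariance, the random visiting order and the nearest-neighbor search are our own simplifying choices; this is not a production implementation and not the specific setup used in the paper's examples:

```python
import numpy as np

def sgs_1d(x, n_neigh, rang, seed=0):
    """Sequential Gaussian simulation at 1-D locations x, conditioning each
    node on a moving neighborhood of the n_neigh nearest simulated values."""
    cov = lambda h: np.exp(-3.0 * np.abs(h) / rang)  # exponential covariance
    rng = np.random.default_rng(seed)
    order = rng.permutation(x.size)                  # random visiting order
    z = np.full(x.size, np.nan)
    for idx in order:
        known = np.flatnonzero(~np.isnan(z))
        if known.size == 0:
            z[idx] = rng.standard_normal()           # first node: N(0, 1) draw
            continue
        near = known[np.argsort(np.abs(x[known] - x[idx]))[:n_neigh]]
        C = cov(x[near, None] - x[None, near])       # data-to-data covariance
        c0 = cov(x[near] - x[idx])                   # data-to-node covariance
        lam = np.linalg.solve(C + 1e-10 * np.eye(near.size), c0)
        mean = lam @ z[near]                         # simple-kriging mean
        var = max(1.0 - lam @ c0, 0.0)               # simple-kriging variance
        z[idx] = mean + np.sqrt(var) * rng.standard_normal()
    return z
```

Re-running this with small n_neigh versus large n_neigh (or with a range close to the domain size) and comparing the realization variograms against the input model reproduces the kind of degradation the abstract describes.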

13.
José Návar, Hydrological Processes, 2013, 27(11): 1626-1633
The quantitative importance of rainfall interception loss and the performance of the reformulated Gash model were evaluated as a function of basal area in Mexico's northeastern temperate forest communities. A sensitivity analysis, as well as an iterative search of parameters, matched interception loss measurements and assessments and isolated the coefficient values that drive the model's performance. The stated hypothesis was tested with a total of 73 rainfalls recorded on four forest stands with different canopy cover, used for model fitting (39) and validation (34). The reformulated Gash model predicted rainfall interception loss well, because the mean deviations (MD) between recorded and modelled interception loss as a function of gross rainfall were <2.6% and 5.3% for the fitting and validation parameter data sets, respectively. Basal area was negatively related to model performance, but maximum projected MD range values can be found in most interception loss studies, for example, <7% when basal area is <5 m² ha⁻¹. The wet canopy evaporation rate and the canopy storage coefficient drive interception loss, and the iterative parameter search showed that high wet canopy evaporation rates are to be expected in these forests. These parameters must be further studied to physically explain the drivers of high wet canopy evaporation rates. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Exploring a valid model for the variogram of an isotropic spatial process
The variogram is one of the most important tools in the assessment of spatial variability and a crucial input for kriging. It is widely known that an empirical variogram estimator cannot be used directly as the variogram model in some contexts, because it may lack conditional negative semi-definiteness. Consequently, once the variogram is estimated, a valid family must be chosen to fit an appropriate model. Under isotropy, this selection is usually carried out by eye from inspection of the estimated variogram curve. In this paper, a statistical methodology is proposed to explore a valid model for the variogram. The statistic for this approach is based on quadratic forms depending on smoothed random variables which gather the underlying spatial variation. The distribution of the test statistic is approximated by a shifted chi-square distribution. A simulation study is also carried out to check the power and size of the test. Reference bands, as a complementary graphical tool, are calculated. An example from the literature is used to illustrate the methodologies presented.

15.
Comprehensive analysis and study of iterative methods in gravity and magnetic potential-field transformation computations
Processing and transformation computations play an important role in the interpretation of gravity and magnetic data, but some of them, such as downward continuation and reduction to the pole, are at times highly unstable; in the frequency domain this manifests as transform factors with strong amplification, so the direct FFT-based theoretical result is unstable. Much research has therefore focused on increasing the stability and improving the quality of these computations, and iterative methods have recently received wide attention and achieved good results. However, iterative methods have not yet been studied deeply enough, and their shortcomings are not fully or objectively recognized. For example, although iterative downward continuation and reduction to the pole can yield reasonably good results in some specific applications, the results do not keep improving as the number of iterations increases; for an inherently unstable computation, the result obtained by an iterative method remains unstable when the iteration count is large. On the basis of an analysis of iterative methods, this paper derives a general formula for them and examines the various factors that affect their convergence. The analysis shows that the decisive factor in whether an iterative method converges to the direct FFT theoretical result is the choice of the mapping function from the original data to the target data; once a suitable mapping function has been chosen, the number of iterations determines not only the computational cost but also the quality of the result. Although increasing the number of iterations can make the computation converge to the direct FFT theoretical result, if that theoretical result is itself unstable, then the iterative computation, if it converges at all, converges to an unstable result. Therefore, applying iterative methods to unstable transformations in potential-field processing does not fundamentally solve the instability problem.
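The generic fixed-point iteration discussed here can be sketched for downward continuation of a 1-D profile: repeatedly upward-continue the current estimate (a stable operation) and add the misfit with the observed field. This is an illustrative scheme of the general form x_{j+1} = x_j + (d - U·x_j), not the authors' specific formulation; it converges per frequency because the upward-continuation factor lies in (0, 1], but it inherits the instability of the underlying problem when high-frequency noise is present, which is exactly the paper's point:

```python
import numpy as np

def downward_continue_iterative(field, dx, depth, n_iter=100):
    """Iterative downward continuation of a 1-D potential-field profile.

    Fixed-point scheme: est <- est + (observed - upward(est)).
    The upward-continuation factor exp(-|k| * depth) is stable, so each
    step is well behaved even though direct downward continuation
    (division by that factor) amplifies high frequencies.
    """
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(field.size, d=dx))
    up = np.exp(-k * depth)                  # upward-continuation factor, <= 1
    est = field.copy()
    for _ in range(n_iter):
        resid = field - np.fft.ifft(up * np.fft.fft(est)).real
        est = est + resid                    # x_{j+1} = x_j + (d - U x_j)
    return est
```

On noise-free synthetic data this recovers the field at depth; add noise to `field` and the high-frequency components blow up with increasing n_iter, illustrating that the iteration converges toward the unstable direct FFT result rather than around it.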

16.
Cartesian coordinate transformation between two erroneous coordinate systems is considered within the Errors-In-Variables (EIV) model. The adjustment of this model is usually called total least-squares (LS). There are many iterative algorithms for this adjustment in the geodetic literature. They give equivalent results for the same example and for the same user-defined convergence error tolerance. However, their convergence speed and stability are affected adversely if the coefficient matrix of the normal equations in the iterative solution is ill-conditioned. The well-known numerical techniques for fixing this problem, such as regularization or shifting and scaling of the variables in the model, are not easily applied to the complicated equations of these algorithms. The EIV model for coordinate transformations can be considered as the nonlinear Gauss-Helmert (GH) model. The (weighted) standard LS adjustment of the iteratively linearized GH model yields the (weighted) total LS solution. It is uncomplicated to use the above-mentioned numerical techniques in this LS adjustment procedure. In this contribution, it is shown how properly diminished coordinate systems can be used in the iterative solution of this adjustment. Although its equations are mainly studied herein for 3D similarity transformation with differential rotations, they can be derived for other kinds of coordinate transformations, as shown in the study. The convergence properties of the algorithms established on the basis of the LS adjustment of the GH model are studied using numerical examples. These examples show that using the diminished coordinates for both systems increases the numerical efficiency of the iterative solution for total LS in geodetic datum transformation: the corresponding algorithm working with the diminished coordinates converges much faster, with an error at least 10⁻⁵ times smaller than that of the one working with the original coordinates.
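The benefit of "diminishing" (centering) the coordinates can be illustrated with a much simpler estimator than the full Gauss-Helmert total-LS adjustment: a linearized 3D similarity (Helmert) transform with differential rotations, fitted by ordinary least squares on centered coordinates. The function name, the small-angle rotation convention and the reduction to four parameters (centering removes the translation) are our own illustrative assumptions:

```python
import numpy as np

def similarity_transform_lsq(src, dst):
    """Estimate scale and differential rotations of a 3-D similarity
    transform dst ≈ t + (1 + s) * R(ex, ey, ez) @ src by linear LS.

    Both coordinate sets are centered ('diminished') first, which removes
    the translation t and keeps the design matrix well conditioned.
    Returns (s, ex, ey, ez) under a small-angle linearization.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    rows, rhs = [], []
    for (x, y, z), (X, Y, Z) in zip(src_c, dst_c):
        # linearized observation equations in the unknowns (s, ex, ey, ez)
        rows += [[x, 0.0, z, -y],
                 [y, -z, 0.0, x],
                 [z, y, -x, 0.0]]
        rhs += [X - x, Y - y, Z - z]
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return params
```

With raw national-grid coordinates (magnitudes of 10⁵-10⁶ m) the equivalent design matrix mixes columns of wildly different scales; centering both systems is the one-line remedy the abstract advocates for the iterative GH adjustment.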

17.
The sequential algorithm is widely used to simulate Gaussian random fields. However, a rigorous application of this algorithm is impractical and some simplifications are required; in particular, a moving neighborhood has to be defined. To examine the effect of such a restriction on the quality of the realizations, a reference case is presented and several parameters are reviewed, mainly the histogram, variogram and indicator variograms, as well as the ergodic fluctuations of the first- and second-order statistics. The study concludes that, even in a favorable case where the simulated domain is large with respect to the range of the model, the realizations may poorly reproduce the second-order statistics and be inconsistent with the stationarity and ergodicity assumptions. Practical tips such as the multiple-grid strategy do not overcome these impediments. Finally, extending the original algorithm by using ordinary kriging should be avoided, unless an intrinsic random function model is sought.

18.
Stochastic models can generate profiles that resemble topography by taking uncorrelated, zero-average noise as input, introducing some correlation in the time series of noise, and integrating the resulting correlated noise. The output profile will depict a nonstationary, randomly rough surface. Two models have been chosen for comparison: a fractal model, in which the noise is correlated even at large distances, and an autoregressive model of order 1, in which the correlation of the noise decays rapidly. Both models have as an end-member a random walk, which is the integration of uncorrelated noise. The models have been fitted to profiles of submarine topography, and the sample autocorrelation, power spectrum and variogram have been compared to the theoretical predictions. The results suggest that a linear system approach is a viable method to model and classify sea-floor topography. The comparison does not show substantial disagreement of the data with either the autoregressive or the fractal model, although a fractal model seems to give a better fit. However, the amplitudes predicted by a nonstationary fractal model for long wavelengths (of the order of 1000 km) are unreasonably large. When viewed through a large window, ocean floor topography is likely to have an expected value determined by isostasy, and to be stationary. Nonstationary models are best applied to wavelengths of the order of 100 km or less.
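The generate-by-integration recipe described here (correlate white noise, then integrate it) is easy to sketch for the autoregressive end of the comparison. The parameter name `phi` and the AR(1)-only scope are our own; the fractal (long-memory) end-member is not reproduced here:

```python
import numpy as np

def synthetic_profile(n, phi=0.0, seed=0):
    """Topography-like profile: integrate AR(1)-correlated noise.

    phi = 0 gives the random-walk end-member (integrated white noise);
    0 < phi < 1 introduces rapidly decaying correlation in the noise
    before integration, as in the order-1 autoregressive model.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)                # uncorrelated, zero-mean input
    noise = np.empty(n)
    noise[0] = eps[0]
    for i in range(1, n):                       # AR(1): correlate the noise
        noise[i] = phi * noise[i - 1] + eps[i]
    return np.cumsum(noise)                     # integration -> nonstationary
```

Increasing phi lengthens the apparent "wavelength" of the roughness while the integration step is what makes the profile nonstationary, mirroring the paper's decomposition of the two models into a correlation stage and an integration stage.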

19.
In geostatistics, stochastic simulation is often used either as an improved interpolation algorithm or as a measure of the spatial uncertainty. Hence, it is crucial to assess how fast realization-based statistics converge towards model-based statistics (i.e. histogram, variogram), since in theory such a match is guaranteed only on average over a number of realizations. This convergence can be strongly affected by the random number generator being used. Moreover, the usual assumption of independence among simulated realizations of a random process may also be affected by the random number generator. Simulation results obtained using three different random number generators implemented in the Geostatistical Software Library (GSLib) are compared. Some practical aspects are pointed out and some suggestions are given to users of the unconditional LU simulation method.

20.
Moving window kriging with geographically weighted variograms
This study adds to our ability to predict the unknown by empirically assessing the performance of a novel geostatistical-nonparametric hybrid technique that provides accurate predictions of the value of an attribute, together with locally relevant measures of prediction confidence, at point locations for a single-realisation spatial process. The nonstationary variogram technique employed generalises a moving window kriging (MWK) model in which classic variogram (CV) estimators are replaced with information-rich, geographically weighted variogram (GWV) estimators. The GWVs are constructed using kernel smoothing. The resultant, novel MWK–GWV model is compared with a standard MWK model (MWK–CV), a standard nonlinear model (Box–Cox kriging, BCK) and a standard linear model (simple kriging, SK), using four example datasets. Exploratory local analyses suggest that each dataset may benefit from a MWK application, and this expectation was broadly confirmed once the models were applied. Model performance results indicate much promise in the MWK–GWV model. Situations where a MWK model is preferred to a BCK model, and where a MWK–GWV model is preferred to a MWK–CV model, are discussed with respect to model performance, parameterisation and complexity, and with respect to sample scale, information and heterogeneity.
