Similar Documents
20 similar documents found (search time: 31 ms)
1.
Geostatistical models should be checked to ensure consistency with conditioning data and statistical inputs. These are minimum acceptance criteria. Often the first- and second-order statistics such as the histogram and variogram of simulated geological realizations are compared to the input parameters to check the reasonableness of the simulation implementation. Assessing the reproduction of statistics beyond second order is often not considered because the “correct” higher-order statistics are rarely known. With multiple point simulation (MPS) geostatistical methods, practitioners are now explicitly modeling higher-order statistics taken from a training image (TI). This article explores methods for extending minimum acceptance criteria to multiple point statistical comparisons between geostatistical realizations made with MPS algorithms and the associated TI. The intent is to assess how well the geostatistical models have reproduced the input statistics of the TI, akin to assessing histogram and variogram reproduction in traditional semivariogram-based geostatistics. A number of metrics are presented to compare the input multiple point statistics of the TI with the statistics of the geostatistical realizations. These metrics are (1) first- and second-order statistics, (2) trends, (3) the multiscale histogram, (4) the multiple point density function, and (5) the missing bins in the multiple point density function. A case study using MPS realizations is presented to demonstrate the proposed metrics; however, the metrics are not limited to specific MPS realizations. Comparisons could be made between any reference numerical analogue model and any simulated categorical variable model.
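Metrics (4) and (5) can be sketched for a binary image as follows. This is an illustrative sketch, not the authors' exact implementation: the 2×2 window, the random stand-in images, and the L1 distance between pattern frequencies are all assumptions for demonstration.

```python
import numpy as np

def mp_density(img, size=2):
    """Frequency of each binary pattern in a sliding (size x size) window."""
    counts = {}
    n0, n1 = img.shape
    for i in range(n0 - size + 1):
        for j in range(n1 - size + 1):
            key = tuple(img[i:i + size, j:j + size].ravel())
            counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def mpdf_distance(p, q):
    """L1 distance between two multiple-point density functions; bins
    missing from one distribution contribute their full mass."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

rng = np.random.default_rng(0)
ti = (rng.random((50, 50)) < 0.3).astype(int)    # stand-in training image
real = (rng.random((50, 50)) < 0.3).astype(int)  # stand-in realization
d = mpdf_distance(mp_density(ti), mp_density(real))
```

The "missing bins" metric falls out for free: patterns present in the TI density but absent from the realization density appear as keys of one dictionary missing from the other.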

2.

Prediction of true classes of surficial and deep earth materials using multivariate spatial data is a common challenge for geoscience modelers. Most geological processes leave a footprint that can be explored by geochemical data analysis. These footprints are normally complex statistical and spatial patterns buried deep in the high-dimensional compositional space. This paper proposes a spatial predictive model for classification of surficial and deep earth materials derived from the geochemical composition of surface regolith. The model is based on a combination of geostatistical simulation and machine learning approaches. A random forest predictive model is trained, and features are ranked based on their contribution to the predictive model. To generate potential and uncertainty maps, compositional data are simulated at unsampled locations via a chain of transformations (isometric log-ratio transformation followed by the flow anamorphosis) and geostatistical simulation. The simulated results are subsequently back-transformed to the original compositional space. The trained predictive model is used to estimate the probability of classes for simulated compositions. The proposed approach is illustrated through two case studies. In the first case study, the major crustal blocks of the Australian continent are predicted from the surface regolith geochemistry of the National Geochemical Survey of Australia project. The aim of the second case study is to discover the superficial deposits (peat) from the regional-scale soil geochemical data of the Tellus Project. The accuracy of the results in these two case studies confirms the usefulness of the proposed method for geological class prediction and geological process discovery.
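The first link of the transformation chain, the isometric log-ratio (ilr) transform, can be sketched as follows; the flow anamorphosis and geostatistical simulation steps are omitted, and the three-part composition is hypothetical.

```python
import numpy as np

def helmert_basis(D):
    """Orthonormal basis of the clr hyperplane (each row sums to zero)."""
    H = np.zeros((D - 1, D))
    for i in range(1, D):
        H[i - 1, :i] = 1.0 / i
        H[i - 1, i] = -1.0
        H[i - 1] /= np.sqrt((i + 1) / i)  # normalize the row
    return H

def ilr(x, H):
    """Isometric log-ratio transform of a composition x (parts sum to 1)."""
    clr = np.log(x) - np.mean(np.log(x))
    return H @ clr

def ilr_inv(z, H):
    """Back-transform ilr coordinates to the original compositional space."""
    clr = H.T @ z
    w = np.exp(clr)
    return w / w.sum()

x = np.array([0.5, 0.3, 0.2])  # hypothetical 3-part geochemical composition
H = helmert_basis(3)
z = ilr(x, H)                  # unconstrained coordinates for simulation
```

Simulation would operate on the unconstrained `z` coordinates; `ilr_inv` then maps simulated values back onto the simplex, which is the back-transformation step the abstract describes.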


3.
The algorithmic theory and methods of traditional geostatistics (e.g., kriging) depend heavily on sample data, and the variogram involves many parameters, which makes geological simulation difficult. Markov-chain-based modeling of geological attributes instead uses transition probabilities to describe the parameter variables of the sampled region, deriving the distribution proportions and mean lengths of geological attributes directly from the transition probability matrix. This simplifies the handling of spatial anisotropy and overcomes two drawbacks of traditional geostatistics: its many complex, hard-to-compute parameters and its inability to represent the asymmetry present in the distribution of geological bodies. The whole modeling process thus becomes more concise, clear, and easy to understand, and the resulting models reflect the complexity of the spatial distribution of geological bodies well. This paper applies the Markov chain approach to model the geological attributes of the recent strata in the Hexi district of Nanjing. The case study shows that models built this way can support further numerical simulation.
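The derivation of facies proportions and mean lengths from a transition probability matrix can be sketched as follows; the three-stratum matrix is hypothetical, and the mean run length is expressed in cells.

```python
import numpy as np

# Hypothetical vertical transition-probability matrix for three strata
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.25, 0.70]])

# Stationary proportions: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()  # normalize to proportions

# Mean run length (in cells) of stratum i is 1 / (1 - p_ii)
mean_len = 1.0 / (1.0 - np.diag(P))
```

Multiplying `mean_len` by the vertical cell size would give mean thicknesses in meters; this is the direct derivation of proportions and average lengths that the abstract attributes to the transition matrix.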

4.
Spatial uncertainty analysis is a complex and difficult task in orebody estimation for the mining industry. Conventional models (kriging and its variants) based on variogram statistics fail to capture the spatial complexity of an orebody. As a result, grade and tonnage are estimated incorrectly, producing inaccurate mine plans and costly financial decisions. Multiple-point geostatistical simulation can overcome the limitations of conventional two-point spatial models. In this study, a multiple-point geostatistical method, SNESIM, was applied to generate multiple equiprobable orebody models for a copper deposit in Africa and to analyze the uncertainty of the deposit's ore tonnage. Grade uncertainty was evaluated by sequential Gaussian simulation within each equiprobable orebody model. The results were validated by checking reproduction of the marginal distribution and of two- and three-point statistics. The volumes of the simulated orebody models deviate from -3 to 5% relative to the training image. The grade simulations show average grades ranging from 3.77 to 4.92% across realizations, with an overall average of 4.33%. The results also show that the volume-and-grade uncertainty model estimates a larger orebody volume than the conventional orebody model. This study demonstrates that incorporating grade and volume uncertainty leads to significant changes in resource estimates.
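The tonnage- and grade-uncertainty post-processing described can be sketched as follows; the realizations, block volume, and density below are stand-ins for illustration, not data from the African copper deposit.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 20 equiprobable orebody realizations: each holds the block
# grades (%) falling inside that realization's simulated ore volume
realizations = [rng.normal(4.3, 0.5, size=rng.integers(900, 1100))
                for _ in range(20)]

block_volume = 1000.0  # m^3 per block (assumed)
density = 2.7          # t/m^3 (assumed)

tonnages = np.array([len(r) * block_volume * density for r in realizations])
avg_grades = np.array([r.mean() for r in realizations])

# Uncertainty summaries analogous to those reported in the study
ton_p10, ton_p90 = np.percentile(tonnages, [10, 90])
grade_range = (avg_grades.min(), avg_grades.max())
```

The spread of `tonnages` across realizations quantifies volume uncertainty from the MPS step, while the spread of `avg_grades` reflects the grade uncertainty from the nested Gaussian simulations.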

5.
Minimum Acceptance Criteria for Geostatistical Realizations   (Total citations: 2; self-citations: 0; cited by others: 2)
Geostatistical simulation is being used increasingly for numerical modeling of natural phenomena. The development of simulation as an alternative to kriging is the result of improved characterization of heterogeneity and a model of joint uncertainty. The popularity of simulation has increased in both mining and petroleum industries. Simulation is widely available in commercial software. Many of these software packages, however, do not necessarily provide the tools for careful checking of the geostatistical realizations prior to their use in decision-making. Moreover, practitioners may not understand all that should be checked. There are some basic checks that should be performed on all geostatistical models. This paper identifies (1) the minimum criteria that should be met by all geostatistical simulation models, and (2) the checks required to verify that these minimum criteria are satisfied. All realizations should honor the input information including the geological interpretation, the data values at their locations, the data distribution, and the correlation structure, within acceptable statistical fluctuations. Moreover, the uncertainty measured by the differences between simulated realizations should be a reasonable measure of uncertainty. A number of different applications are shown to illustrate the various checks. These checks should be an integral part of any simulation modeling work flow.
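One such basic check, reproduction of the data distribution within acceptable statistical fluctuation, can be sketched as follows. The check function, the tolerance, and the lognormal stand-in data are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def histogram_check(data, realization, tol=0.1):
    """Maximum gap between the empirical CDFs of the conditioning data and
    a realization; the realization fails the check if the gap exceeds tol."""
    grid = np.union1d(data, realization)
    def ecdf(x):
        return np.searchsorted(np.sort(x), grid, side='right') / len(x)
    gap = float(np.max(np.abs(ecdf(data) - ecdf(realization))))
    return gap, gap <= tol

rng = np.random.default_rng(2)
data = rng.lognormal(0.0, 0.5, 1000)  # stand-in conditioning data
good = rng.lognormal(0.0, 0.5, 5000)  # realization honoring the histogram
bad = rng.lognormal(0.8, 0.5, 5000)   # realization with a shifted mean
```

The same pattern extends to the other checks: compute a summary of the realization, compare it to the corresponding input statistic, and flag deviations beyond a stated tolerance.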

6.

Incorporating locally varying anisotropy (LVA) in geostatistical modeling improves estimates for structurally complex domains where a single set of anisotropic parameters modeled globally does not account for all geological features. In this work, the properties of two LVA-geostatistical modeling frameworks are explored through application to a complexly folded gold deposit in Ghana. The inference of necessary parameters is a significant requirement of geostatistical modeling with LVA; this work focuses on the case where LVA orientations, derived from expert geological interpretation, are used to improve the grade estimates. The different methodologies for inferring the required parameters in this context are explored. The results of considering different estimation frameworks and alternate methods of parameterization are evaluated with a cross-validation study, as well as visual inspection of grade continuity along select cross sections. Results show that stationary methodologies are outperformed by all LVA techniques, even when the LVA framework has minimal guidance on parameterization. Findings also show that additional improvements are gained by considering parameter inference where the LVA orientations and point data are used to infer the local range of anisotropy. Considering LVA for geostatistical modeling of the deposit considered in this work results in better reproduction of curvilinear geological features.


7.
Additional Samples: Where They Should Be Located   (Total citations: 2; self-citations: 0; cited by others: 2)
Information for mine planning must be more closely spaced than the grid used for exploration and resource assessment. Additional samples collected during quasi-mining are usually located on the same pattern as the original diamond drillhole grid, only more closely spaced. Mathematically, this is not the best way to select sample locations: the impact of additional information on reducing uncertainty about the modeled parameter is not the same everywhere within the deposit, and some locations reduce local and global uncertainty more than others. This study introduces a methodology for selecting additional sample locations based on stochastic simulation. The procedure takes into account data variability and spatial location. Multiple equally probable models of a geological attribute are generated via geostatistical simulation; these models share essentially the same histogram and variogram as the original data set. At each block of the model, the values obtained from the n simulations are combined to assess local variability, which is measured with a proposed uncertainty index. This index is used to map zones of high variability. A value extracted from a given simulation, in a zone identified as erratic in these maps, is then added to the original data set; the simulation is repeated and the benefit of the additional sample is evaluated, with uncertainty reduction measured both locally and globally. The procedure proved robust and theoretically sound, mapping the zones where additional information is most beneficial. A case study of coal seam thickness in a coal mine illustrates the method.
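A minimal sketch of the uncertainty-index idea: rank blocks by their variability across simulated models and propose the most erratic block for the next sample. The index shown here (coefficient of variation) and the stand-in simulations are assumptions; the paper's own index may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_blocks = 50, 500
# Stand-in for n_sims equally probable simulated models of seam thickness (m)
sims = rng.normal(2.0, 0.3, (n_sims, n_blocks))
sims[:, 100] += rng.normal(0.0, 1.5, n_sims)  # one deliberately erratic zone

# Uncertainty index per block: coefficient of variation across simulations
cv = sims.std(axis=0) / sims.mean(axis=0)
best_block = int(np.argmax(cv))  # candidate location for the next sample
```

After drilling (or, as in the paper, borrowing a value from one simulation) at `best_block`, the simulations would be regenerated with the augmented data set and the drop in the index evaluated locally and globally.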

8.
An important aim of modern geostatistical modeling is to quantify uncertainty in geological systems. Geostatistical modeling requires many input parameters. The input univariate distribution or histogram is perhaps the most important. A new method for assessing uncertainty in the histogram, particularly uncertainty in the mean, is presented. This method, referred to as the conditional finite-domain (CFD) approach, accounts for the size of the domain and the local conditioning data. It is a stochastic approach based on a multivariate Gaussian distribution. The CFD approach is shown to be convergent, design independent, and parameterization invariant. The performance of the CFD approach is illustrated in a case study focusing on the impact of the number of data and the range of correlation on the limiting uncertainty in the parameters. The spatial bootstrap method and CFD approach are compared. As the number of data increases, uncertainty in the sample mean decreases in both the spatial bootstrap and the CFD. Contrary to spatial bootstrap, uncertainty in the sample mean in the CFD approach decreases as the range of correlation increases. This is a direct result of the conditioning data being more correlated to unsampled locations in the finite domain. The sensitivity of the limiting uncertainty relative to the variogram and the variable limits is also discussed.
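The spatial-bootstrap side of this comparison can be illustrated with a small calculation: the variance of the sample mean for correlated data is Var(m) = (1/n^2) sum_ij C(i, j), which grows with the range of correlation. The exponential covariance model, unit sill, and ranges below are assumptions for illustration.

```python
import numpy as np

def mean_variance(C):
    """Variance of the sample mean for data with covariance matrix C."""
    n = len(C)
    return C.sum() / n ** 2

n = 50
coords = np.arange(n, dtype=float)  # data on a regular 1D transect

def expo_cov(a):
    """Exponential covariance, unit sill, practical range a (assumed model)."""
    h = np.abs(coords[:, None] - coords[None, :])
    return np.exp(-3.0 * h / a)

v_short = mean_variance(expo_cov(1.0))    # nearly independent data
v_long = mean_variance(expo_cov(100.0))   # strongly correlated data
```

With a short range the result approaches the independent-data value sill/n, while a long range inflates it sharply; this is the behavior the abstract attributes to the spatial bootstrap, and the point of contrast with the CFD approach.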

9.
Five decades of geostatistical development are reviewed to summarize the state of the art for spatial interpolation vis-à-vis kriging or a form thereof. Although a search of the literature reveals a variety of kriging methods, there are but two infrastructures for geostatistical interpolation: simple cokriging, for estimating a single variable using two variables, and generalized cokriging, for estimating one or more variables using the same number of variables that are estimated. The many forms of kriging are varieties of these two interpolation infrastructures. This notion is emphasized to aid the selection of an appropriate interpolation model for a nonrenewable resource. These models are discussed, and literature for the models and for applicable software is cited. Additionally, all aspects of spatial interpolation are discussed, including the adequacy of spatial sampling, distribution characteristics of spatial samples, semivariograms, search parameters, and selection of interpolation models in conformance with spatial data characteristics. Finally, the relationship between interpolation and raster-based geographic information systems is emphasized.

10.
This article addresses the problem of predicting the elevation of the breccia pipe named Braden at the El Teniente mine in Chile. This mine is one of the world’s largest known porphyry-copper ore bodies. Knowing the exact location of the pipe surface is important, as it constitutes the internal limit of the deposit. The problem is tackled by applying a non-stationary geostatistical method based on space deformation, which involves transforming the study domain into a new domain where a standard stationary geostatistical approach is more appropriate. Data from the study domain are mapped into the deformed domain, and classical stationary geostatistical techniques for prediction can then be applied. The predicted results are then mapped back into the original domain. According to the results, this non-stationary geostatistical method outperforms the conventional stationary one in terms of prediction accuracy and conveys a more informative uncertainty model of the predictions.

11.
This paper focuses on two common problems encountered when using Light Detection And Ranging (LiDAR) data to derive digital elevation models (DEMs). Firstly, LiDAR measurements are obtained in an irregular configuration and on a point, rather than a pixel, basis. There is usually a need to interpolate from these point data to a regular grid, so it is necessary to identify the approaches that make best use of the sample data to derive the most accurate DEM possible. Secondly, raw LiDAR data contain information on above-surface features such as vegetation and buildings. It is often desirable to (digitally) remove these features and predict the surface elevations beneath them, thereby obtaining a DEM that does not contain any above-surface features. This paper explores the use of geostatistical approaches for prediction in this situation. The approaches used are inverse distance weighting (IDW), ordinary kriging (OK) and kriging with a trend model (KT). It is concluded that, for the case studies presented, OK offers greater accuracy of prediction than IDW while KT demonstrates benefits over OK. The absolute differences are not large, but to make the most of the high-quality LiDAR data, KT seems the most appropriate technique in this case.
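The IDW baseline compared in this study can be sketched as follows (OK and KT require variogram modeling and are omitted here); the sample coordinates, elevations, and power parameter are hypothetical.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_pred, power=2.0, eps=1e-12):
    """Inverse distance weighted prediction at points xy_pred from
    irregularly spaced observations (xy_obs, z_obs)."""
    # Pairwise distances: (n_pred, n_obs)
    d = np.linalg.norm(xy_pred[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power       # eps keeps coincident points finite
    w /= w.sum(axis=1, keepdims=True)  # normalize weights per prediction
    return w @ z_obs

# Toy LiDAR ground returns: (x, y) coordinates and elevations (m), assumed
obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 12.0, 11.0, 13.0])
pred = idw(obs, z, np.array([[0.5, 0.5], [0.0, 0.0]]))
```

At the grid-cell center the four samples are equidistant, so the prediction is their mean; at a location coinciding with a sample, the weight concentrates on that sample and the prediction honors it.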

12.
Cause-Effect Analysis in Assessment of Mineral Resources   (Total citations: 1; self-citations: 0; cited by others: 1)
Cause-effect analysis is a deterministic methodology intended for processing qualitative data (e.g., texts, conventional maps) and mixed qualitative and quantitative data. The main idea employed in cause-effect analysis is the plurality and interaction of causes. This idea is described by mathematical logic formulae that can be converted into a single Boolean equation. The latter represents a mathematical model of the general shape of cause-effect relations for the study problem. In particular, such a model can express relations between some property of the mineralization and features of other geological phenomena. By processing data, logical dependencies satisfying the theoretical model are determined in a data file. These dependencies, expressed by Boolean function formulae, describe cause-effect relations for a case study, and they are used for prediction. Software implementing cause-effect analysis is an expert system with artificial intelligence capabilities. There are two methods of using cause-effect analysis in the assessment of mineral resources. The first consists in detecting the regularity in the locations of known mineral deposits and occurrences, then using the regularity formula to generate predictive maps. The second is the evaluation of individual mineral occurrences by the obtained Boolean formulae expressing cause-effect relations between deposit sizes and the geological environment of deposits. Both methods are illustrated by case studies of predicting gold-bearing deposits of Middle Asia in the former USSR.

13.
Topodata: Brazilian full coverage refinement of SRTM data   (Total citations: 2; self-citations: 0; cited by others: 2)
This work presents the selection of a set of geostatistical coefficients suitable for a unified SRTM data refinement from 3″ to 1″ through kriging over the entire Brazilian territory. This selection aimed at data potential for geomorphometric derivations, given by the preservation of detailed geometric characteristics of the resulting digital elevation models (DEM), which are sensitive to refining procedures. The development contained a long-term experimentation stage, when data refinement through kriging was locally developed to support distinct regional projects, followed by a unified selection stage, where the acquired experience was applied to select a single and unified interpolation scheme. In this stage, the selected geostatistical models with promising performances were tested for unified refinement on 40 Brazilian areas with distinct geological settings. Tested areas encompass reliefs varying from mountainous to plain. The effects of data preparation were observed on the perception of patterns (texture and roughness), as well as of singularities (edges, peaks, thalwegs, etc.). Results were evaluated mainly through the examination of shaded reliefs, transects and perspectives observed at different scales. Terrains with low slopes and small amplitudes had their DEM promptly affected by the refining methods, as opposed to mountainous terrains. The evaluation, unambiguously confirmed by all consulted interpreters, converged on a refining model with outstanding performance in all tested conditions.

14.
Physical Geography, 2013, 34(2): 130-153
Contamination of ground water has been a major environmental concern in recent years. The potential for ground-water contamination by pesticides depends on porous media, solute, and hydrologic parameters. Although sophisticated deterministic computer models are available for assessing aquifer-contamination potential on a site-by-site basis, most deterministic models are too complex for vulnerability assessment on a regional scale because they require input data that are spatially and temporally variable, and which may not be available at this scale. Therefore, an affordable model that is robust under conditions of uncertainty at the watershed scale, with minimal input of field data, would be a useful ground-water management tool. The purpose of this study was to examine the usefulness of fuzzy rule-based techniques in predicting aquifer vulnerability to pesticides at the regional scale. The objectives were to (1) develop fuzzy rule-based models using the same input parameters contained in an index-based model (i.e., the modified DRASTIC model), (2) determine the sensitivity of fuzzy rule model predictions, (3) compare the outputs of the fuzzy rule-based models with those of the modified DRASTIC model and with the results of aquifer water-quality analyses, and (4) examine the spatial variability of field parameters around contaminated wells of the Alluvial aquifer in Woodruff County, Arkansas. The fuzzy rule-based model for objective (1) was developed using parameter weights and ratings similar to those of the modified DRASTIC model. For objective (2), fuzzy rule-based models were created using fewer parameters than the modified DRASTIC model. Sensitivity of the fuzzy rule-based models was determined using different combinations of weights of the four input parameters in DRASTIC.
It was found that variations in the weights of the input parameters and number of fuzzy sets influenced the location of the aquifer-vulnerability categories as well as the area within each fuzzy category. The fuzzy rule models tended to predict somewhat higher vulnerabilities of the Alluvial aquifer than the modified DRASTIC model. The fuzzy rule base that had the soil-leaching index (S) as the highest weight was chosen as the best fuzzy rule model in predicting potential contamination by pesticides of the aquifer. In general, the fuzzy rule models tended to overestimate the vulnerability of the aquifer in the study area.

15.

The temperature distribution at depth is a key variable when assessing the potential of a supercritical geothermal resource as well as a conventional geothermal resource. Data-driven estimation by a machine-learning approach is a promising way to estimate temperature distributions at depth in geothermal fields. In this study, we developed two methodologies—one based on Bayesian estimation and the other on neural networks—to estimate temperature distributions in geothermal fields. These methodologies can be used to supplement existing temperature logs, by estimating temperature distributions in unexplored regions of the subsurface, based on electrical resistivity data, observed geological/mineralogical boundaries, and microseismic observations. We evaluated the accuracy and characteristics of these methodologies using a numerical model of the Kakkonda geothermal field, Japan, where a temperature above 500 °C was observed below a depth of about 3.7 km. When using geological and geophysical knowledge as prior information for the machine learning methods, the results demonstrate that the approaches can provide subsurface temperature estimates that are consistent with the temperature distribution given by the numerical model. Using a numerical model as a benchmark helps to understand the characteristics of the machine learning approaches and may help to identify ways of improving these methods.


16.
In this study, we demonstrate a novel use of comaps to explore spatially the performance, specification and parameterisation of a non-stationary geostatistical predictor. The comap allows the spatial investigation of the relationship between two geographically referenced variables via conditional distributions. Rather than investigating bivariate relationships in the study data, we use comaps to investigate bivariate relationships in the key outputs of a spatial predictor. In particular, we calibrate moving window kriging (MWK) models, where a local variogram is found at every target location. This predictor has often proved worthy for processes that are heterogeneous, and most standard (global variogram) kriging algorithms can be adapted in this manner. We show that the use of comaps enables a better understanding of our chosen MWK models, which in turn allows a more informed choice when selecting one MWK specification over another. As case studies, we apply four variants of MWK to two heterogeneous example data sets: (i) freshwater acidification critical load data for Great Britain and (ii) London house price data. As both of these data sets are strewn with local anomalies, three of our chosen models are robust (and novel) extensions of MWK, at least one of which is shown to perform better than its non-robust counterpart.

17.
A fundamental task for petroleum exploration decision-making is to evaluate the uncertainty of well outcomes. The recent development of geostatistical simulation techniques provides an effective means for generating a full uncertainty model for any random variable. Sequential indicator simulation has been used as a tool to generate alternative, equiprobable stochastic models, from which various representations of uncertainty can be created. These results can be used as input for the quantification of various risks associated with a wildcat drilling program or the estimation of petroleum resources. A simple case study is given to demonstrate the use of sequential indicator simulation. The data set consists of wildcat wells in a gas play. The multiple simulated stochastic models are then post-processed to characterize various uncertainties associated with drilling outcomes.
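The post-processing step, turning equiprobable indicator realizations into outcome probabilities, can be sketched as follows; the grid size, success rate, and well locations are hypothetical, and the random fields below stand in for actual sequential indicator simulation output.

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for 200 equiprobable indicator realizations on a 20x20 grid
# (1 = gas success, 0 = dry), as produced by sequential indicator simulation
reals = (rng.random((200, 20, 20)) < 0.35).astype(int)

# Per-cell probability of success across realizations
p_success = reals.mean(axis=0)

# Probability that a 3-well program at fixed cells yields at least one success
cells = [(5, 5), (10, 12), (15, 3)]
hits = np.array([[r[c] for c in cells] for r in reals])
p_at_least_one = float(hits.any(axis=1).mean())
```

Because each realization is a full, internally consistent model, joint statements like "at least one success among three wells" come out correctly, which a cell-by-cell probability map alone cannot provide.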

18.
The term physical accessibility has long been used by geographers, economists, and urban planners and reflects the relative ease of access to/from several urban/rural services by considering the traveling costs. Numerous accessibility measures, ranging from simple to sophisticated, can be observed in the geographical information systems (GIS)-based accessibility modeling literature. However, these measures are generally calculated from a constant catchment boundary (a most likely or average catchment boundary) based on constant deterministic transportation costs. This is one of the fundamental shortcomings of the current GIS-based accessibility modeling and creates uncertainty about the accuracy and reliability of the accessibility measures, especially when highly variable speeds in road segments are considered. The development of a new stochastic approach by using global positioning system (GPS)-based floating car data and Monte Carlo simulation (MCS) technique could enable handling the variations in transportation costs in a probabilistic manner and help to consider all possible catchment boundaries, instead of one average catchment boundary, in accessibility modeling process. Therefore, this article proposes a stochastic methodology for GIS-based accessibility modeling by using GPS-based floating car data and MCS technique. The proposed methodology is illustrated with a case study on medical emergency service accessibility in Eskisehir, Turkey. Moreover, deterministic and stochastic accessibility models are compared to demonstrate the differences between the models. The proposed model could provide better decision support for the decision-makers who are supposed to deal with accessibility, location/allocation, and service/catchment area related issues.
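The core MCS step can be sketched as follows: sample segment speeds from distributions fitted to floating car data and estimate the probability that a destination falls inside a travel-time catchment, rather than a yes/no answer from average speeds. The route, lognormal speed model, and 8-minute threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
lengths = np.array([1.2, 0.8, 2.5, 1.0])       # segment lengths, km (assumed)
mean_speed = np.array([40.0, 25.0, 60.0, 30.0])  # median speeds, km/h (assumed)

n = 10000
# Per-segment speeds drawn around the floating-car medians (assumed model)
speeds = rng.lognormal(np.log(mean_speed), 0.25, (n, 4))
travel_min = (lengths / speeds).sum(axis=1) * 60.0

# Probability the destination lies inside an 8-minute service catchment
p_within_8 = float((travel_min <= 8.0).mean())

# Deterministic counterpart: a single yes/no from the median speeds
det_min = float((lengths / mean_speed).sum() * 60.0)
```

The deterministic model answers only "inside or outside" for one average boundary; the stochastic version yields a membership probability, which is the distinction the article draws between the two approaches.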

19.
Jeuken, R., Xu, C., & Dowd, P. Natural Resources Research, 2020, 29(4): 2529-2546

In most modern coal mines, there are many coal quality parameters that are measured on samples taken from boreholes. These data are used to generate spatial models of the coal quality parameters, typically using inverse distance as an interpolation method. At the same time, downhole geophysical logging of numerous additional boreholes is used to measure various physical properties but no coal quality samples are taken. The work presented in this paper uses two of the most important coal quality variables—ash and volatile matter—and assesses the efficacy of using a number of geostatistical interpolation methods to improve the accuracy of the interpolated models, including the use of auxiliary variables from geophysical logs. A multivariate spatial statistical analysis of ash, volatile matter and several auxiliary variables is used to establish a co-regionalization model that relates all of the variables as manifestations of an underlying geological characteristic. A case study of a coal mine in Queensland, Australia, is used to compare the interpolation methods of inverse distance to ordinary kriging, universal kriging, co-kriging, regression kriging and kriging with an external drift. The relative merits of these six methods are compared using the mean error and the root mean square error as measures of bias and accuracy. The study demonstrates that there is significant opportunity to improve the estimations of coal quality when using kriging with an external drift. The results show that when using the depth of a sample as an external drift variable there is a significant improvement in the accuracy of estimation for volatile matter, and when using wireline density logs as the drift variable there is improvement in the estimation of the in situ ash. The economic benefit of these findings is that cheaper proxies for coal quality parameters can significantly increase data density and the quality of estimations.
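The two criteria used to rank the six interpolation methods can be computed as follows; this is a minimal sketch with hypothetical cross-validation values, not data from the Queensland case study.

```python
import numpy as np

def bias_and_accuracy(z_true, z_est):
    """Mean error (a measure of bias) and root mean square error (a measure
    of accuracy), as used to compare the interpolation methods."""
    err = np.asarray(z_est, float) - np.asarray(z_true, float)
    return float(err.mean()), float(np.sqrt(np.mean(err ** 2)))

# Hypothetical cross-validation pairs: true vs. estimated ash (%)
me, rmse = bias_and_accuracy([10.0, 12.0, 14.0], [11.0, 12.0, 13.0])
```

A method can be unbiased (mean error near zero) yet inaccurate (large RMSE), which is why the study reports both measures when comparing inverse distance against the kriging variants.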


20.
Two general approaches have been applied to understanding the fractal structure of fluvial topography: (1) deterministic, process-based models, and (2) stochastic partial differential equations (PDE). Deterministic models reproduce the fractal behavior of fluvial topography but have two limitations: they often underestimate the amount of lateral valley and ridge migration that occurs in nature, and the complexity has made it difficult to identify the precise origin of fractal behavior in fluvial landscapes. The simplicity of stochastic PDE models has made them useful for investigating fractal behavior, but they incorrectly suggest that fractal behavior is only possible with stochastic forcing. In this paper I investigate whether simplified, deterministic PDE models of landform evolution also exhibit fractal behavior and other features of complexity (i.e. deterministic chaos). These models are based on the KPZ equation, well known in the physics literature. This equation combines diffusion (i.e. hillslope processes) and nonlinear advection (i.e. bedrock or alluvial channel incision). Two models are considered: (1) a deterministic model with uniform erodibility and random initial topography, and (2) a deterministic model with random erodibility and uniform initial topography. Results illustrate that both of these deterministic models exhibit fractal behavior and deterministic chaos. In this context, chaotic behavior means that valley and ridge migration and nonlinear amplification of small perturbations in these models prevent an ideal steady state landscape from ever developing in the large-system limit. These results suggest that fractal structure and deterministic chaos are intrinsic features of the evolution of fluvial landforms, and that these features result from an inverse cascade of energy from small to large wavelengths in drainage basins. 
This inverse cascade differs from the direct cascade of three-dimensional turbulence in which energy flows from large to small wavelengths.
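The KPZ equation referenced above combines linear diffusion with nonlinear advection. As it appears in the physics literature (sign and coefficient conventions vary), it reads:

```latex
\frac{\partial h}{\partial t} \;=\; \nu \nabla^{2} h \;+\; \frac{\lambda}{2}\,\lvert \nabla h \rvert^{2} \;+\; \eta(\mathbf{x}, t)
```

Here h is surface elevation, the diffusion term with coefficient ν represents hillslope processes, the nonlinear gradient term with coefficient λ represents bedrock or alluvial channel incision, and η is a noise forcing. In the deterministic models of this paper the forcing term is dropped (η = 0), with randomness entering instead through the initial topography or the erodibility field.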


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号