Article search: 22 records (geology 18, geophysics 2, atmospheric science 1, physical geography 1; publication years 2005–2022).
1.
2.

In the field of mineral resources extraction, a main challenge is to meet production targets in terms of geometallurgical properties. These properties influence the processing of the ore and are often represented in resource modeling by coregionalized variables with complex relationships between them. Thanks to sensor technologies used for production monitoring, valuable data are available about geometallurgical properties and their interaction with the beneficiation process. The aim of this research is to update resource models as new observations become available. A popular method for updating is the ensemble Kalman filter, which relies on Gaussian assumptions and uses a set of realizations of the simulated models to derive sample covariances that can propagate the uncertainty between real observations and simulated ones. However, the relationship among the variables has a compositional nature, so updating the models while honoring the compositional constraints is a practical requirement for improving the accuracy of the updated models. This paper presents an updating framework for compositional data based on the ensemble Kalman filter, in which compositions are transformed into a multivariate Gaussian space by a log-ratio transformation followed by a flow anamorphosis. The flow anamorphosis transforms the distribution of the variables to joint normality while reasonably preserving the dependencies between components, and the positivity of the variables is guaranteed after the simulated models are updated. The method is applied to a bauxite deposit, demonstrating the performance of the proposed approach.
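A minimal numpy sketch of the updating idea, with the illustrative simplification of an additive log-ratio (alr) transform in place of the paper's log-ratio-plus-flow-anamorphosis chain; the toy sensor model, ensemble size, and observation error are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def alr(x):
    # additive log-ratio: maps compositions (rows sum to 1) to R^(D-1)
    return np.log(x[:, :-1] / x[:, -1:])

def alr_inv(z):
    # inverse alr: back to the simplex, components positive and summing to 1
    e = np.hstack([np.exp(z), np.ones((z.shape[0], 1))])
    return e / e.sum(axis=1, keepdims=True)

def enkf_update(Z, Y, d, r):
    # stochastic EnKF update in the transformed (Gaussian) space.
    # Z: ensemble of transformed states (n_ens x p); Y: simulated
    # observations (n_ens x 1); d: real observation; r: obs-error variance
    n = Z.shape[0]
    Zp, Yp = Z - Z.mean(axis=0), Y - Y.mean(axis=0)
    Cxy = Zp.T @ Yp / (n - 1)              # cross-covariance (p x 1)
    Cyy = Yp.T @ Yp / (n - 1) + r          # innovation variance (1 x 1)
    K = Cxy / Cyy                          # Kalman gain
    noise = rng.normal(0.0, np.sqrt(r), size=Y.shape)
    return Z + (d - (Y + noise)) @ K.T

# toy ensemble of 3-part compositions (e.g. mineral grades)
comp = rng.dirichlet([4.0, 3.0, 2.0], size=200)
Z = alr(comp)
Y = Z[:, :1] + rng.normal(0, 0.05, (200, 1))   # hypothetical sensor response
comp_upd = alr_inv(enkf_update(Z, Y, d=0.8, r=0.05**2))
```

Because the update happens in log-ratio space and is back-transformed through `alr_inv`, every updated realization is positive and closed by construction, which is the practical point of the compositional framework.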

3.
Mathematical Geosciences - In the geosciences it is still uncommon to include measurement uncertainties in statistical methods such as discriminant analysis, but, especially for trace elements,...
4.
Frequently, regionalized positive variables are treated by first applying a logarithm, and kriging estimates are back-transformed using the classical formula for the expectation of a lognormal random variable. This practice has several problems (lack of robustness, non-optimal confidence intervals, etc.), particularly when estimating block averages. Therefore, many practitioners simply take exponentials of the kriging estimates, although the resulting estimates are deemed non-optimal. Another approach arises when the nature of the sample space and the scale of the data are considered. Since these concepts can be suitably captured by a Euclidean space structure, an optimal kriging estimator can be defined for positive variables, with all properties analogous to those of linear geostatistical techniques, even for the estimation of block averages. In this case, no assumption on preservation of lognormality is needed. From a practical point of view, the proposed method coincides with the median estimator and provides theoretical grounding for this widespread practice. Thus, existing software and routines remain fully applicable.
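The difference between the two back-transforms can be checked numerically; the estimate and kriging variance below are illustrative numbers, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# y_star: kriging estimate of log Z at a target point; s2: kriging variance
y_star, s2 = 1.2, 0.4

# classical lognormal back-transform targets the conditional mean
mean_bt = np.exp(y_star + s2 / 2.0)

# the plain exponential of the estimate is the *median* of the lognormal
# predictive distribution -- the estimator the paper justifies through the
# Euclidean (log-scale) geometry of the sample space
median_bt = np.exp(y_star)

# Monte Carlo check against the lognormal predictive distribution
z = rng.lognormal(mean=y_star, sigma=np.sqrt(s2), size=200_000)
print(mean_bt, median_bt)   # the two back-transforms differ by exp(s2/2)
```

Note that `mean_bt / median_bt = exp(s2/2)` grows with the kriging variance, which is why the two practices diverge most in poorly sampled areas and for block estimates.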
5.
It has been suggested that climate change might modify the occurrence rate of large storms and their magnitude, due to a higher availability of energy in the atmosphere-ocean system. Forecasting physical models are commonly used to assess these effects. No one expects the physical model forecast for one specific day to be accurate; the forecasts are considered good if they adequately describe the statistical characteristics of the climate. The peaks-over-threshold (POT) method is a common way to statistically treat the occurrence and magnitude of hazardous events: occurrence is modelled as a Poisson process, and magnitude over a given threshold is assumed to follow a generalized Pareto distribution (GPD). We restrict our attention to Weibull-related GPDs, which exhibit an upper bound, to comply with the fact that any physical process has a finite upper limit. This contribution uses this framework to model time series of log-significant wave height constructed by joining quasi-collocated hindcast data and buoy measurements. Two of the POT model parameters (the inhomogeneous Poisson rate and the logarithm of the GPD shape parameter) are considered to be a combination of a linear function of time and a series indicator function. The third parameter, the logarithm of the GPD upper bound, is considered to have only a series indicator component. The parameters are estimated using Bayesian methods. By using hindcast and buoy series together, the time span of the data set is extended, enhancing the precision of statistical results about potential linear changes, while the statistical behaviour of the hindcast and buoy series can be compared. At the same time, the step function allows the statistical reproduction of storms by hindcasting to be calibrated.
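A sketch of the basic POT machinery on synthetic data; the threshold choice, the synthetic series, and the use of scipy's unconstrained GPD fit are assumptions (the paper's Bayesian estimation with time-varying, bounded parameters is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# synthetic stand-in for a daily log-significant-wave-height series
x = rng.gumbel(loc=0.5, scale=0.3, size=20_000)

u = np.quantile(x, 0.98)    # POT threshold (hypothetical choice)
exc = x[x > u] - u          # excesses over the threshold

# occurrence: Poisson rate of exceedances per "year" of 365 values
rate = exc.size / (x.size / 365.0)

# magnitude: fit a generalized Pareto distribution to the excesses.
# A Weibull-related fit would constrain the shape c < 0, implying the
# finite upper bound u - scale / c; scipy's ML fit leaves c free.
c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
print(rate, c, scale)
```

With a negative fitted shape, the implied upper bound `u - scale / c` is the quantity whose logarithm the paper models with a series indicator component.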
6.
Geostatistics for Compositional Data: An Overview
Mathematical Geosciences - This paper presents an overview of results for the geostatistical analysis of collocated multivariate data sets, whose variables form a composition, where the components...
7.

Prediction of true classes of surficial and deep earth materials using multivariate spatial data is a common challenge for geoscience modelers. Most geological processes leave a footprint that can be explored by geochemical data analysis. These footprints are normally complex statistical and spatial patterns buried deep in the high-dimensional compositional space. This paper proposes a spatial predictive model for classification of surficial and deep earth materials derived from the geochemical composition of surface regolith. The model is based on a combination of geostatistical simulation and machine learning approaches. A random forest predictive model is trained, and features are ranked based on their contribution to the predictive model. To generate potential and uncertainty maps, compositional data are simulated at unsampled locations via a chain of transformations (isometric log-ratio transformation followed by the flow anamorphosis) and geostatistical simulation. The simulated results are subsequently back-transformed to the original compositional space. The trained predictive model is used to estimate the probability of classes for simulated compositions. The proposed approach is illustrated through two case studies. In the first case study, the major crustal blocks of the Australian continent are predicted from the surface regolith geochemistry of the National Geochemical Survey of Australia project. The aim of the second case study is to discover the superficial deposits (peat) from the regional-scale soil geochemical data of the Tellus Project. The accuracy of the results in these two case studies confirms the usefulness of the proposed method for geological class prediction and geological process discovery.
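The transformation chain relies on moving compositions to an unconstrained space and back. A small sketch of the isometric log-ratio (ilr) step, assuming numpy and a standard Helmert-type orthonormal basis (the flow anamorphosis, geostatistical simulation, and the random forest itself are omitted):

```python
import numpy as np

def helmert_basis(D):
    # orthonormal ilr basis built from Helmert contrasts: (D-1) x D,
    # rows orthonormal and orthogonal to the vector of ones
    H = np.zeros((D - 1, D))
    for i in range(1, D):
        H[i - 1, :i] = 1.0 / i
        H[i - 1, i] = -1.0
        H[i - 1] *= np.sqrt(i / (i + 1.0))
    return H

def ilr(x, psi):
    # isometric log-ratio transform: compositions -> R^(D-1)
    return np.log(x) @ psi.T

def ilr_inv(z, psi):
    # inverse ilr: exponentiate and re-close onto the simplex
    e = np.exp(z @ psi)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
comp = rng.dirichlet([2.0, 3.0, 4.0, 5.0], size=100)  # toy 4-part compositions
psi = helmert_basis(4)
z = ilr(comp, psi)          # unconstrained coordinates, safe for simulation/ML
back = ilr_inv(z, psi)      # back-transformed simulated values stay valid
```

Because `ilr`/`ilr_inv` form an exact bijection between the simplex and Euclidean space, any simulation or classifier applied to `z` yields back-transformed compositions that are automatically positive and closed.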

8.
Nowadays, numerical modeling is a common tool in the study of sedimentary basins, since it allows the simulated processes to be quantified and the interactions among them to be determined. One such program is SIMSAFADIM-CLASTIC, a 3D forward, process-based code that simulates sedimentation in a marine basin at a geological time scale. It models fluid flow, siliciclastic transport and sedimentation, and carbonate production. In this article, we present the latest improvements in the carbonate production model, in particular the use of generalized Lotka-Volterra equations that include logistic growth and interaction among species. Logistic growth is constrained by environmental parameters such as water depth, energy of the medium, and depositional profile. The environmental parameters are converted to factors and combined into a single environmental value to model the evolution of species. The interaction among species is quantified using the community matrix, which captures the beneficial or detrimental effects of the presence of each species on the others. A theoretical example of a carbonate ramp is computed to show the interaction between carbonate and siliciclastic sediment, the effect of the environmental parameters on the modeled species associations, and the interaction among these species associations. The distribution of the modeled species associations in the theoretical example is compared with the carbonate Oligocene-Miocene Asmari Formation in Iran and the Miocene Ragusa Platform in Italy.
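A minimal sketch of generalized Lotka-Volterra dynamics with logistic growth and a community matrix, using made-up rates and interaction coefficients (the environmental-factor machinery of SIMSAFADIM-CLASTIC is not reproduced):

```python
import numpy as np

def glv_step(n, r, K, A, dt):
    # one explicit-Euler step of generalized Lotka-Volterra dynamics:
    #   dn_i/dt = r_i * n_i * (1 - (A @ n)_i / K_i)
    # A is the community matrix (A[i, j]: effect of species j on species i)
    return n + dt * r * n * (1.0 - (A @ n) / K)

r = np.array([0.8, 0.5])       # intrinsic growth rates (hypothetical)
K = np.array([1.0, 1.0])       # carrying capacities (the "environmental value")
A = np.array([[1.0, 0.4],      # mild mutual competition between two
              [0.3, 1.0]])     # species associations (hypothetical)

n = np.array([0.1, 0.1])
for _ in range(2000):
    n = glv_step(n, r, K, A, dt=0.05)

# the coexistence equilibrium of competitive GLV solves A @ n* = K
n_eq = np.linalg.solve(A, K)
```

The off-diagonal entries of `A` are where the beneficial (negative) or detrimental (positive, competitive) effects between species associations enter; with `A` the identity, each species reduces to independent logistic growth toward its own `K`.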
9.
Joint Consistent Mapping of High-Dimensional Geochemical Surveys
Geochemical surveys often contain several tens of components, obtained from different horizons and with different analytical techniques. These are used either to obtain elemental concentration maps or to explore links between the variables. The first task involves interpolation; the second, principal component analysis (PCA) or a related technique. Interpolation of all geochemical variables (in wt% or ppm) should guarantee consistent results: at any location, all variables must be positive and sum to 100%. This is not ensured by any conventional geostatistical technique. Moreover, the maps should ideally preserve any link present in the data. PCA also presents some problems, derived from the spatial dependence between the observations and the compositional nature of the data. Log-ratio geostatistical techniques offer a consistent solution to all these problems. Variation-variograms are introduced to capture the spatial dependence structure: these are direct variograms of all possible log ratios of two components. They can be modeled with a function analogous to the linear model of coregionalization (LMC), where for each spatial structure there is an associated variation matrix describing the links between the components. Eigenvalue decompositions of these matrices provide a PCA of each particular spatial scale. The whole data set can then be interpolated by cokriging. Factorial cokriging can also be used to map a certain spatial structure, possibly projected onto those principal components (PCs) of that structure with a relevant contribution to the spatial variability. If only one PC is used for a certain structure, the maps obtained represent the spatial variability of a geochemical link between the variables. These procedures and their advantages are illustrated with the horizon C Kola data set, with 25 components and 605 samples covering most of the Kola Peninsula (Finland, Norway, Russia).
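A variation-variogram can be sketched as the direct empirical variogram of one log ratio; the 1-D coordinates, lag tolerances, and synthetic compositions below are illustrative assumptions:

```python
import numpy as np

def variation_variogram(coords, comp, i, j, lags, tol):
    # empirical direct variogram of the log ratio ln(x_i / x_j):
    #   gamma(h) = 0.5 * E[(lr(s) - lr(s + h))^2]
    lr = np.log(comp[:, i] / comp[:, j])
    d = np.abs(coords[:, None] - coords[None, :])   # pairwise 1-D separations
    dv = 0.5 * (lr[:, None] - lr[None, :]) ** 2
    gam = []
    for h in lags:
        m = (d > h - tol) & (d <= h + tol) & (d > 0)
        gam.append(dv[m].mean())
    return np.array(gam)

rng = np.random.default_rng(4)
coords = np.sort(rng.uniform(0, 100, 300))
# smooth random-walk signal closed into positive 3-part compositions
s = np.cumsum(rng.normal(size=(300, 3)) * 0.2, axis=0)
comp = np.exp(s) / np.exp(s).sum(axis=1, keepdims=True)

lags = np.array([5.0, 20.0, 40.0])
gam = variation_variogram(coords, comp, 0, 1, lags, tol=2.5)
```

In the full method one such variogram is computed for every pair `(i, j)`, and at each lag the values assemble into the variation matrix whose eigen-decomposition gives the scale-specific PCA described above.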
10.
Indicator Kriging without Order Relation Violations
Indicator kriging (IK) is a spatial interpolation technique aimed at estimating the conditional cumulative distribution function (ccdf) of a variable at an unsampled location. The results obtained form a discrete approximation to this ccdf, and the corresponding discrete probability density function (cpdf) should be a vector in which each component gives the probability of occurrence of a class. This vector must therefore have positive components summing to one, like a composition in the simplex. This suggests a simplicial approach to IK, based on the algebraic-geometric structure of this sample space: simplicial IK actually works with log-odds. Interpolated log-odds can afterwards easily be re-expressed as the desired cpdf or ccdf. An alternative but equivalent approach may also be based on log-likelihoods. Both versions of the method avoid, by construction, all the standard drawbacks of conventional IK: estimates always lie within the (0,1) interval and present no order-relation problems (whether with kriging or cokriging). Even the modeling of indicator structural functions is clarified.
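A toy sketch of the simplicial idea, assuming numpy and a simple two-point average in place of actual kriging weights: interpolate in log-odds, back-transform, and the cpdf/ccdf constraints hold by construction:

```python
import numpy as np

def logodds(p):
    # alr-type log-odds of a class-probability vector (last class as reference)
    return np.log(p[:-1] / p[-1])

def inv_logodds(z):
    # back to a valid cpdf: positive components summing to one
    e = np.append(np.exp(z), 1.0)
    return e / e.sum()

# cpdfs observed at two neighbouring samples, and hypothetical 50/50 weights
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.2, 0.5, 0.3])
z_interp = 0.5 * logodds(p1) + 0.5 * logodds(p2)
p_interp = inv_logodds(z_interp)

ccdf = np.cumsum(p_interp)   # strictly increasing: no order-relation violations
```

Any affine combination of log-odds maps back into the open simplex, so negative probabilities and non-monotone ccdfs, the classic IK pathologies, cannot occur regardless of the weights.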