1.
For mineral resource assessment, techniques based on fuzzy logic are attractive because they can incorporate the uncertainty associated with measured variables and can also quantify the uncertainty of the estimated grade, tonnage, etc. The fuzzy grade estimation model is independent of the distribution of the data, avoiding the assumptions and constraints made during advanced geostatistical simulation, e.g., the turning bands method. Initially, fuzzy modelling classifies the data using all the component variables in the data set. We adopt a novel approach that takes into account the spatial irregularity of mineralisation patterns using the Gustafson–Kessel clustering algorithm. The uncertainty at the point of estimation is derived through antecedent memberships in the input space (i.e., spatial coordinates) and transformed onto the output space (i.e., grades) through consequent memberships at the point of estimation. Rather than probabilistic confidence intervals, this uncertainty is expressed in terms of fuzzy memberships, which indicate the occurrence of mixtures of different mineralogical phases at the point of estimation. Data from sources other than grades can also be utilised during estimation. Application of the proposed technique to a real data set gave results comparable to those obtained from a turning bands simulation.
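The Gustafson–Kessel algorithm referred to here extends fuzzy c-means with a cluster-specific, covariance-adapted distance metric, which is what allows it to follow elongated or irregular mineralisation zones. The sketch below shows the core iteration in Python with numpy; the function name, parameter defaults and synthetic usage are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gustafson_kessel(X, c=2, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Fuzzy clustering of the rows of X into c clusters (sketch;
    no regularisation of near-singular covariance matrices)."""
    rng = np.random.default_rng(seed)
    N, n = X.shape
    U = rng.random((c, N))
    U /= U.sum(axis=0)                          # memberships of each point sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)            # cluster centres
        D2 = np.empty((c, N))
        for i in range(c):
            diff = X - V[i]
            F = (Um[i, :, None] * diff).T @ diff / Um[i].sum()  # fuzzy covariance
            A = np.linalg.det(F) ** (1.0 / n) * np.linalg.inv(F)  # volume-normalised metric
            D2[i] = np.einsum('kj,jl,kl->k', diff, A, diff)     # squared GK distances
        D2 = np.fmax(D2, 1e-12)
        U_new = 1.0 / ((D2[:, None, :] / D2[None, :, :]) ** (1.0 / (m - 1))).sum(axis=1)
        if np.abs(U_new - U).max() < tol:
            return V, U_new
        U = U_new
    return V, U

# Illustrative use: two elongated 'zones' in easting-northing space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], [1.0, 0.1], size=(100, 2)),
               rng.normal([2.0, 1.0], [0.1, 1.0], size=(100, 2))])
centres, memberships = gustafson_kessel(X, c=2)
```

Because each cluster carries its own covariance-scaled metric, the algorithm recovers the two elongated zones that ordinary (spherical) fuzzy c-means would split incorrectly.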
2.
In the assessment of potentially contaminated land, the number of samples and the uncertainty of the measurements (including the contribution from sampling) are both important factors in the planning and implementation of an investigation. Both parameters also affect the interpretation of the measurements produced and the process of making decisions based upon those measurements. Despite their importance, however, there has previously been no method for assessing whether an investigation is fit-for-purpose with respect to both of these parameters. The Whole Site Optimised Contaminated Land Investigation (WSOCLI) method has been developed to address this issue and to allow the optimisation of an investigation with respect to both the number of samples and the measurement uncertainty, using an economic loss function. This function was developed to calculate an 'expectation of (financial) loss', incorporating the costs of the investigation itself, subsequent land remediation, and potential consequential costs. To allow the evaluation of the WSOCLI method, a computer program 'OCLISIM' has been developed to produce sample data from simulated contaminated land investigations. One advantage of this approach is that the 'true' contaminant concentrations are created by the program and are therefore known, which is not the case in a real contaminated land investigation; this enables direct comparisons between functions of the 'true' concentrations and functions of the simulated measurements. A second advantage of simulation for this purpose is that the WSOCLI method can be tested on many different patterns and intensities of contamination. The WSOCLI method performed particularly well at high sampling densities, producing expectations of financial loss that approximated the true costs, which were also calculated by the program. WSOCLI was shown to produce notable trends in the relationship between the overall cost (i.e., expectation of loss) and both the number of samples and the measurement uncertainty: (a) low measurement uncertainty was optimal when the decision threshold lay between the mean background and the mean hot spot concentrations; (b) when the hot spot mean concentration was equal to or near the decision threshold, mid-range measurement uncertainties were optimal; (c) when the decision threshold exceeded the mean of the hot spot, mid-range measurement uncertainties were again optimal, and the trends indicate that the optimal uncertainty may continue to rise as the difference between the hot spot mean and the decision threshold increases further; (d) in all of the above scenarios, the optimal measurement uncertainty was lower when there was a large geochemical variance (i.e., heterogeneity) within the hot spot; (e) the optimal number of samples indicated by the WSOCLI method was generally between 50 and 100 for the scenarios considered, although there was significant noise in the predictions, which needs to be addressed in future work before such conclusions can be made clearer.
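The abstract does not reproduce the loss function itself, so the following Python sketch only illustrates the general shape of such an 'expectation of loss' calculation under an assumed normal measurement error; all cost figures, the function name and the single-location framing are hypothetical placeholders rather than the WSOCLI formulation.

```python
import numpy as np
from scipy.stats import norm

def expected_loss(n_samples, u_meas, true_conc, threshold,
                  cost_per_sample=50.0, remediation_cost=5.0e4,
                  consequence_cost=2.0e5):
    """Expectation of (financial) loss for one decision unit (sketch).

    n_samples:  number of samples taken on the unit
    u_meas:     measurement standard uncertainty (incl. sampling)
    true_conc:  'true' concentration -- known here, as in a simulation
    threshold:  decision (action) threshold
    All costs are illustrative placeholders.
    """
    se = u_meas / np.sqrt(n_samples)           # uncertainty of the mean result
    p_pass = norm.cdf(threshold, loc=true_conc, scale=se)
    investigation = cost_per_sample * n_samples
    if true_conc > threshold:
        # Truly contaminated: passing it incurs consequential costs,
        # correctly failing it incurs the remediation cost.
        return investigation + p_pass * consequence_cost \
                             + (1.0 - p_pass) * remediation_cost
    # Truly clean: failing it triggers unnecessary remediation.
    return investigation + (1.0 - p_pass) * remediation_cost
```

Minimising this quantity over a grid of candidate (number of samples, measurement uncertainty) pairs, summed over all decision units of a simulated site, is the kind of optimisation the WSOCLI method performs.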
3.
The aim of this study was to estimate the uncertainties in the streamflow simulated by a rainfall–runoff model. Two sources of uncertainty in hydrological modelling were considered: uncertainties in the model parameters and uncertainties in the model structure. The uncertainties were calculated with Bayesian statistics, and the Metropolis–Hastings algorithm was used to simulate the posterior parameter distribution. The parameter uncertainty calculated by the Metropolis–Hastings algorithm was compared to maximum likelihood estimates, which assume that both the parameters and the model residuals are normally distributed. The study was performed using the model WASMOD on 25 basins in central Sweden. Confidence intervals in the simulated discharge due to the parameter uncertainty and the total uncertainty were calculated. The results indicate that (a) the Metropolis–Hastings algorithm and the maximum likelihood method give almost identical estimates of the parameter uncertainty, and (b) for this simple model with few parameters, the uncertainties in the simulated streamflow due to parameter uncertainty are less important than uncertainties originating from other sources.
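For readers unfamiliar with the sampler, a minimal random-walk Metropolis–Hastings implementation is sketched below; WASMOD itself is not reproduced, so `log_post` stands in for any user-supplied log-posterior of the hydrological model parameters (the function name and step settings are assumptions, not from the paper).

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampler (symmetric Gaussian proposal).

    log_post: callable returning log p(theta | data) up to a constant.
    Returns an (n_iter, dim) array of draws from the posterior.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        draws[i] = theta
    return draws
```

Discarding an initial burn-in and propagating the remaining parameter draws through the rainfall–runoff model yields the kind of confidence intervals on simulated discharge described above.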
4.
The well-established physical and mathematical principle of maximum entropy (ME) is used to explain the distributional and autocorrelation properties of hydrological processes, including the scaling behaviour both in state and in time. In this context, maximum entropy is interpreted as maximum uncertainty. The conditions used for the maximization of entropy are as simple as possible, i.e. that hydrological processes are non-negative with specified coefficients of variation (CV) and lag-one autocorrelation. In this first part of the study, the marginal distributional properties of hydrological variables and the state scaling behaviour are investigated. Application of the ME principle under these very simple conditions results in the truncated normal distribution for small values of CV and in a non-exponential (Pareto-type) distribution for high values of CV. In addition, the normal and the exponential distributions appear as limiting cases of these two distributions. Testing of these theoretical results against numerous hydrological data sets on several scales validates the applicability of the ME principle, thus emphasizing the dominance of uncertainty in hydrological processes. Both theoretical and empirical results show that state scaling is only an approximation for high return periods, which is valid merely when processes have high variation on small time scales. In other cases the normal distributional behaviour, which does not have state scaling properties, is a more appropriate approximation. Interestingly, however, as discussed in the second part of the study, the normal distribution combined with positive autocorrelation of a process results in time scaling behaviour due to the ME principle.
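As a hedged illustration of the first of these results (the high-CV Pareto case requires the paper's fuller treatment and is omitted here), the entropy maximization for the low-CV case can be written as a constrained variational problem:

```latex
\max_{f}\; h[f] = -\int_0^{\infty} f(x)\,\ln f(x)\,dx
\quad\text{subject to}\quad
\int_0^{\infty} f(x)\,dx = 1,\qquad
\mathrm{E}[x]=\mu,\qquad
\operatorname{Var}[x]=(\mathrm{CV}\cdot\mu)^2 .
```

The Lagrange-multiplier solution has the form f(x) = exp(−λ₀ − λ₁x − λ₂x²) for x ≥ 0, i.e. a normal density truncated at zero. When CV is small the truncation becomes negligible and the plain normal distribution is recovered, while dropping the variance constraint (λ₂ = 0) yields the exponential distribution, consistent with the limiting cases mentioned above.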
5.
IPCC reports provide a synthesis of the state of the science in order to inform the international policy process. This task is made difficult by the presence of deep uncertainty in the climate problem that results from long time scales and complexity. This paper focuses on how deep uncertainty can be effectively communicated. We argue that existing schemes do an inadequate job of communicating deep uncertainty and propose a simple approach that distinguishes between various levels of subjective understanding in a systematic manner. We illustrate our approach with two examples. To cite this article: M. Kandlikar et al., C. R. Geoscience 337 (2005).
6.
The preparation and characterisation of three nickel ore and two nickel concentrate certified reference materials are described in this paper. The samples of nickel ore and nickel concentrate were collected from the Hongqiling nickel deposit in Jilin province. The raw materials were crushed and passed through a 2.0-mm sieve. The rough samples were then ground for 48 hr in a high-alumina ball mill to a final size of < 0.074 mm. Homogeneity of the samples was tested by wavelength-dispersive X-ray fluorescence spectrometry (WD-XRF) and inductively coupled plasma-atomic emission spectrometry (ICP-AES). The relative standard deviations (RSD) of the mass fraction measurements by WD-XRF were < 1.0% for eighteen components, and F-tests showed that all five samples were homogeneous. Nineteen laboratories contributed measurement results (2127 in total) for the certification of mass fractions for twenty-three elements and compounds. Twenty-three components in the nickel ores and twenty components in the nickel concentrates were characterised with certified values; the Ni mass fractions range from 0.1 to 9.02% m/m across these certified reference materials. The five samples were approved as national certified reference materials by the National Organisation of Reference Materials of China in 2012.
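A between-bottle homogeneity F-test of the kind mentioned here is commonly computed by one-way ANOVA; the sketch below shows that standard calculation in Python (the replicate layout and the 95% confidence level are assumptions, since the abstract does not give the experimental design).

```python
import numpy as np
from scipy import stats

def homogeneity_f_test(measurements, alpha=0.05):
    """One-way ANOVA F-test for between-bottle homogeneity.

    measurements: 2-D array, rows = bottles, columns = replicate
    determinations of one component (e.g., WD-XRF mass fractions).
    Returns (F, F_critical, is_homogeneous).
    """
    a, n = measurements.shape                         # bottles, replicates each
    grand = measurements.mean()
    ms_between = n * ((measurements.mean(axis=1) - grand) ** 2).sum() / (a - 1)
    ms_within = measurements.var(axis=1, ddof=1).mean()
    F = ms_between / ms_within
    F_crit = stats.f.ppf(1.0 - alpha, a - 1, a * (n - 1))
    return F, F_crit, F < F_crit
```

The material is accepted as homogeneous for a component when F stays below the critical value, i.e. the between-bottle variation is not significantly larger than the repeatability of the measurements.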
7.
The Institute of Hydrogeology and Environmental Geology, Chinese Academy of Geological Sciences recently prepared four certified reference materials for the hydrogen and oxygen stable isotope composition of water, called 'China Standard Water' (CSW)-HO1–HO4 (hereafter referred to as HO1–HO4). These reference materials are intended for calibration purposes and provide reference values for the relative differences of the ²H/¹H and ¹⁸O/¹⁶O isotope-amount ratios, expressed in delta notation and normalised to the VSMOW–SLAP scale. The certified values of the reference materials were determined by an interlaboratory comparison of results from eleven participating laboratories. This paper describes in detail the production and certification procedure for the four reference materials. The first analytical data for the reference materials are also provided, obtained with a variety of analytical techniques: CO₂–H₂O equilibration and laser spectroscopy for δ¹⁸O, and Cr reduction, H₂–H₂O equilibration, laser spectroscopy, and high-temperature conversion for δ²H. The reference values for materials HO1–HO4 and their associated uncertainties are assigned.
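Normalisation to the VSMOW–SLAP scale conventionally uses a two-point linear stretch between the defined values of the two international references (δ²H = −428.0 ‰ and δ¹⁸O = −55.5 ‰ for SLAP, 0 ‰ for VSMOW). A minimal sketch of that standard calculation follows; the function name is illustrative and this is not necessarily the exact procedure used for HO1–HO4.

```python
# Defined delta values of SLAP on the VSMOW-SLAP scale (per mil);
# VSMOW itself defines the zero point of the scale.
SLAP_D2H = -428.0
SLAP_D18O = -55.5

def normalise_to_vsmow_slap(delta_raw, vsmow_raw, slap_raw, slap_assigned):
    """Two-point normalisation of a raw instrument delta (per mil).

    delta_raw:     raw delta of the sample vs the working reference
    vsmow_raw:     raw delta measured on VSMOW in the same sequence
    slap_raw:      raw delta measured on SLAP in the same sequence
    slap_assigned: the defined SLAP value for the isotope in question
    """
    return slap_assigned * (delta_raw - vsmow_raw) / (slap_raw - vsmow_raw)

# Example: a raw d18O of -8.2 per mil, with VSMOW and SLAP measuring
# +0.3 and -55.1 per mil raw, normalises to about -8.52 per mil.
print(normalise_to_vsmow_slap(-8.2, 0.3, -55.1, SLAP_D18O))
```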
8.
The utility of Global Climate Model (GCM) simulations for regional water resources prediction and management on the Korean Peninsula was assessed with a probabilistic measure. GCM simulations of an indicator variable (e.g. surface precipitation or temperature) were used to discriminate high vs low regional observations of a target variable (e.g. watershed precipitation or reservoir inflow). The formulation uses the significance probability of the Kolmogorov–Smirnov test for detecting differences between two distributions. High-resolution Atmospheric Model Intercomparison Project-II (AMIP-II) type GCM simulations performed by the European Centre for Medium-Range Weather Forecasts (ECMWF) and AMIP-I type GCM simulations performed by the Korean Meteorological Research Institute (METRI) were used to obtain the indicator variables. Observed mean areal precipitation and temperature, and watershed-outlet discharge values for seven major river basins in Korea were used as the target variables. The results suggest that using the climate-model nodal output from both climate models in the vicinity of the target basin, at monthly resolution, would benefit water resources planning and management analyses that depend on watershed mean areal precipitation and temperature, and outlet discharge.
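The abstract does not spell out the probabilistic measure in full, so the sketch below only illustrates the underlying idea: testing, via the two-sample Kolmogorov–Smirnov statistic, whether a GCM indicator variable is distributed differently in periods with high versus low observations of the target variable. The median split and function name are assumptions.

```python
import numpy as np
from scipy import stats

def ks_discrimination(indicator, target):
    """Two-sample KS significance probability as a discrimination measure.

    indicator: GCM output at a grid node (e.g., monthly precipitation)
    target:    concurrent regional observations (e.g., reservoir inflow)
    A small p-value means the indicator distributions conditioned on
    high vs low target values differ, i.e. the GCM output is informative
    for the regional variable.
    """
    median = np.median(target)
    high = indicator[target > median]    # indicator values in high-target periods
    low = indicator[target <= median]    # indicator values in low-target periods
    return stats.ks_2samp(high, low).pvalue
```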
9.
10.
The 'Appropriate Sampling for Optimised Measurement' (ASOM) approach considers measurement to be the focus of the sampling process, and sampling to be only the first part of the measurement process. To achieve ASOM, the uncertainty of measurements, including the contribution from sampling, needs to be estimated and optimised in order to achieve fitness-for-purpose; such samples are then 'sufficiently' representative. The 'Theory of Sampling' (TOS) focuses on the processes of primary sampling and sample preparation and assumes that samples are 'representative' if they are prepared by nominally 'correct' protocols. It defines around ten sampling 'errors', which are either modelled or minimised to improve sampling quality. It is argued that the ASOM approach is more effective in achieving appropriate measurement quality than applying TOS to just the first part of the measurement process. The comparison is made more difficult by the different objectives, scopes, terminology and assumptions of the two approaches. ASOM can be applied to in situ materials that are too variable to be modelled accurately, or where sources of uncertainty are unsuspected. The integration of ASOM with TOS proposed by Esbensen and Wagner (2014, Trends in Analytical Chemistry, 57, 93–106) is therefore effectively impossible. However, some TOS procedures can be useful within the ASOM approach.
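In the ASOM literature, the measurement uncertainty including its sampling contribution is typically estimated with the duplicate method and analysis of variance. The sketch below shows the classical-ANOVA version of that variance partition for a balanced duplicate design; robust ANOVA, which the approach usually prefers for outlier resistance, is omitted for brevity, and the function name is illustrative.

```python
import numpy as np

def duplicate_method(data):
    """Partition measurement variance from a balanced duplicate design.

    data: array of shape (targets, 2, 2) -- two field samples per
    sampling target, two chemical analyses per sample.
    Returns (u_sampling, u_analysis, u_measurement) as standard
    uncertainties (classical ANOVA; no outlier protection).
    """
    s2_analysis = data.var(axis=2, ddof=1).mean()      # within-sample variance
    sample_means = data.mean(axis=2)                   # (targets, 2)
    between = sample_means.var(axis=1, ddof=1).mean()  # estimates s2_samp + s2_anal/2
    s2_sampling = max(between - s2_analysis / 2.0, 0.0)
    u_meas = np.sqrt(s2_sampling + s2_analysis)        # combined measurement uncertainty
    return np.sqrt(s2_sampling), np.sqrt(s2_analysis), u_meas
```

Comparing the resulting measurement uncertainty against a fitness-for-purpose criterion is what allows the sampling protocol to be optimised, rather than merely declared 'correct'.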