Similar Documents
20 similar documents found (search time: 171 ms)
1.
Uncertainty assessment in mineral potential prediction is an important step in quantitative mineral prediction. Its main subjects include the uncertainty in the number of undiscovered deposits and the uncertainty in their grade, tonnage, and contained resources. This paper briefly introduces the main sources of uncertainty in mineral prediction and the approaches and methods for assessing it, and applies fuzzy sets to evaluate the uncertainty in the number, grade, tonnage, and resources of undiscovered deposits.
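As a hedged illustration of the fuzzy-set idea in this abstract (not the paper's actual procedure), the sketch below represents the number of undiscovered deposits and their mean tonnage as triangular fuzzy numbers and propagates them through alpha-cut interval arithmetic; all numerical values are invented.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

# Hypothetical fuzzy estimates: number of undiscovered deposits and mean tonnage (Mt).
n_deposits = (2.0, 5.0, 9.0)
tonnage    = (0.5, 1.2, 3.0)

# Propagate through the product N * tonnage with interval arithmetic at each alpha-cut
# (valid endpoint-wise here because both intervals are positive).
for alpha in (0.0, 0.5, 1.0):
    n_lo, n_hi = alpha_cut(n_deposits, alpha)
    t_lo, t_hi = alpha_cut(tonnage, alpha)
    print(f"alpha={alpha:.1f}: total undiscovered resources in [{n_lo*t_lo:.2f}, {n_hi*t_hi:.2f}] Mt")
```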

2.
A preliminary uncertainty analysis of seismic migration imaging   (total citations: 7; self-citations: 7; other citations: 0)
Exploration geophysics involves many physical-mathematical representations of macroscopic dynamical processes, together with uncertainty phenomena that still await deeper study; one class of these arises from the non-commutativity of conjugate (dual) quantities. Starting from uncertainty analysis in seismic migration imaging, this paper examines how migration imaging is affected by uncertainty in the wavefield extrapolation process. It presents uncertainty distributions and images for several depth migration algorithms applied to the Marmousi model, and compares several wavefield extrapolation methods from the standpoint of uncertainty estimation. The authors argue that macroscopic uncertainty analysis in applied fields deepens rational understanding of macroscopic dynamical processes and offers a way to evaluate migration algorithms; above all, such analysis helps characterize the uncertainty distribution of the migration algorithm in use, so that targeted improvements can be made to raise its overall degree of determinacy.

3.
Mineral target prediction is a nonlinear pattern-recognition process that identifies exploration targets from a set of statistical cells. Because a Boltzmann machine can encode and reconstruct external stimuli, it can be used to implement nonlinear statistical prediction of mineral targets. Accordingly, the authors define a three-layer Boltzmann machine model for mineral target prediction: the number of input-layer neurons equals the number of exploration evidence layers, the output layer has a single neuron, and the number of hidden-layer neurons is chosen by the user according to the required prediction accuracy. The model is trained with a stochastic learning algorithm that combines Hebbian encoding with simulated annealing, and the weight coefficients of the evidence layers are determined from the trained connection weights between the input and hidden layers. The mineralization favorability of each cell is computed from the evidence weights and the cell's combination of evidence, and exploration targets are delineated accordingly. A raster-oriented Boltzmann machine program for mineral target prediction was developed in VC++ on top of the GDAL digital image I/O library and applied to target prediction in the Altay region of Xinjiang. The results show that the cell favorability predicted by the Boltzmann machine correctly reflects the spatial distribution of known deposits (occurrences) in the study area, demonstrating that the Boltzmann-machine-based nonlinear statistical prediction model is effective.
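The abstract describes a full three-layer Boltzmann machine trained by Hebbian encoding plus simulated annealing; the sketch below is a drastically simplified stand-in, not the authors' GDAL/VC++ program. It fits weights for binary evidence layers by simulated annealing against known deposit cells and scores a logistic favorability per cell; the synthetic data and the loss function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 statistical cells x 4 binary exploration-evidence layers,
# with a binary label marking cells containing known deposits/occurrences.
X = rng.integers(0, 2, size=(200, 4)).astype(float)
true_w = np.array([1.5, 0.2, 2.0, -0.5])
y = (1 / (1 + np.exp(-(X @ true_w - 1.5))) > rng.random(200)).astype(float)

def favorability(X, w, b):
    return 1 / (1 + np.exp(-(X @ w + b)))          # logistic favorability score

def loss(X, y, w, b):
    p = favorability(X, w, b)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Simulated-annealing search over evidence weights (stochastic, loosely echoing
# the stochastic training of a Boltzmann machine).
w, b = np.zeros(4), 0.0
cur = loss(X, y, w, b)
T = 1.0
for step in range(5000):
    w_new = w + rng.normal(0, 0.1, size=4)
    b_new = b + rng.normal(0, 0.1)
    l_new = loss(X, y, w_new, b_new)
    # Metropolis acceptance: always take improvements, sometimes accept worse moves
    if l_new < cur or rng.random() < np.exp((cur - l_new) / T):
        w, b, cur = w_new, b_new, l_new
    T = max(T * 0.999, 1e-3)                       # cooling schedule

print("learned evidence weights:", np.round(w, 2))
print("top target cells:", np.argsort(-favorability(X, w, b))[:10])
```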

4.
Mineral resource prediction is one of the effective means of integrating geoscience information to select prospective targets. Using MapInfo as the platform and VC as the secondary development tool, and guided by the theory and methods of integrated-information mineral prediction, this paper builds a MapInfo-based integrated-information mineral resource evaluation system. The workflow of the system is described in some detail, and gold deposits in southern Shaanxi are evaluated as a case study. The evaluation results are similar or close to those of other systems, demonstrating the system's effectiveness and providing another technical method for mineral prediction.

5.
Starting from the major international debate over the predictive performance of the VAN method, this paper discusses uncertainty in evaluating the correlation between precursory anomalies and earthquakes. Statistical tests of precursor-earthquake correlation involve theoretical research on, and understanding of, earthquake preparation and development and the physical basis of precursors, as well as many uncertainties encountered when turning predictions into concrete operational procedures. These uncertainties are introduced one by one. The consensus reached after the debate, that "the seismological community needs a basic scheme for evaluating earthquake prediction methods", should also become an important research topic for Chinese scholars.

6.
Advances in modeling lake eutrophication response and optimal watershed management decisions   (total citations: 2; self-citations: 0; other citations: 2)
Lake eutrophication is a long-term challenge in water environment management worldwide, and models of eutrophication response and watershed-scale optimal decision-making are key to designing economical and efficient control schemes. However, existing model reviews focus on single aspects such as model development, case applications, sensitivity analysis, and uncertainty analysis, and lack a synthesis aimed at the latest lake-management challenges such as nonlinear responses and long-term ecosystem evolution. This paper reviews data-driven statistical models, causality-driven mechanistic models, and decision-oriented optimization models. Statistical models, including classical statistics, Bayesian statistics, and machine learning, are commonly used to establish response relationships, analyze time-series characteristics, and issue forecasts and early warnings. Mechanistic models simulate watershed hydrology and pollutant transport as well as lake hydrology, hydrodynamics, water quality, and ecology across different spatio-temporal scales; for complex mechanistic models, sensitivity analysis, parameter calibration, and uncertainty analysis carry high computational cost. Optimization models combine with mechanistic models to form "simulation-optimization" frameworks, and under uncertainty have spawned stochastic, interval, and other methods; parallel computing and simplified or surrogate models can partly relieve the computational bottleneck. The paper identifies the challenges facing lake management: (1) how to quantitatively characterize the nonlinear superposition of external inputs and the spatial non-uniformity of nitrogen, phosphorus, and algal dynamics in lakes; (2) how to tighten the link between optimal control decisions and water-quality targets and improve their precision; and (3) how to reveal the long-term trajectories and drivers of lake ecosystem change. Finally, research prospects are proposed for these challenges: (1) fusing multi-source data with machine-learning algorithms to improve short-term water-quality forecasts; (2) upscaling or downscaling coupling of biomass-based mechanistic models with behavior-driven individual-based models to represent material interactions across multiple scales; (3) direct coupling or data assimilation between machine-learning algorithms and mechanistic models to reduce simulation error; and (4) integrating multi-media models of different spatio-temporal scales to achieve precise, dynamic optimal control.

7.
Application of magnetic methods to mineral prediction in China   (total citations: 9; self-citations: 1; other citations: 8)
As an effective auxiliary technique, magnetic methods have been applied to mineral prediction for many years. With the economy's growing demand for resources and the increasing difficulty of exploration, magnetic methods are becoming ever more important in mineral prediction, especially in the search for concealed structures and rock or ore bodies. This paper reviews the current status and methods of magnetic techniques in mineral prediction and looks ahead to their future prospects.

8.
Duan Wenjuan, 《地球》 (The Earth), 2014, (3): 72-75
Research on the metallogenic geological setting of China's important mineral resources is a key component of the national mineral resource potential evaluation, and also a fundamental task and important technical approach for mineral prediction.

9.
Grey numerical simulation of groundwater seepage systems   (total citations: 4; self-citations: 0; other citations: 4)
Based on grey sets, grey numbers, and grey arithmetic rules, a grey numerical model of groundwater seepage systems is established, and a complete set of grey numerical algorithms for solving such models is proposed; it is proved within grey mathematics that the ordinary grey solution method is merely a special case of the grey numerical algorithm. The grey numerical algorithm, the ordinary grey method, and classical numerical methods are compared comprehensively, demonstrating that the grey numerical algorithm better captures the propagation of grey information through a groundwater system. On this theoretical basis, two typical seepage cases, mine-inflow prediction and water-supply resource evaluation for a well field, are analyzed systematically, covering generalization and "greying" of hydrogeological conditions, grey numerical model construction, model identification, and predictive evaluation. It is proposed that when predicting mine water inflow with a grey numerical model, the upper bound of the drawdown "grey band" must stay below the design water level for safe mining, whereas in evaluating exploitable well-field resources, the lower bound of the groundwater-level "grey band" must not fall below the critical control level.
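A minimal sketch of the grey (interval) idea, assuming a Thiem-type steady drawdown formula rather than the paper's full numerical model: a grey transmissivity interval is propagated to a drawdown "grey band", whose upper bound is then checked against a hypothetical mine-safety design level, echoing the rule stated in the abstract.

```python
import numpy as np

# Grey (interval) transmissivity [m^2/day]: only its bounds are known.
T_grey = (200.0, 400.0)
Q, R, r = 5000.0, 1000.0, 50.0   # pumping rate [m^3/day], influence radius, obs. radius

def thiem_drawdown(Q, T, R, r):
    """Steady-state Thiem drawdown at radius r for transmissivity T."""
    return Q / (2 * np.pi * T) * np.log(R / r)

# Propagate the grey interval: drawdown decreases monotonically with T,
# so the endpoints of T give the endpoints of the drawdown "grey band".
s_hi = thiem_drawdown(Q, T_grey[0], R, r)   # worst case: low T, large drawdown
s_lo = thiem_drawdown(Q, T_grey[1], R, r)
print(f"drawdown grey band: [{s_lo:.2f}, {s_hi:.2f}] m")

# Mine-safety style check from the abstract: the band's upper bound must stay
# below the design drawdown for safe operation (the limit here is hypothetical).
design_limit = 15.0
print("safe design:", s_hi < design_limit)
```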

10.
Permeability is a key parameter in reservoir evaluation and hydrocarbon field development. Permeabilities estimated by traditional well-log methods and conventional machine learning are single fixed values. However, because well-log data are noisy, permeability predictions may suffer random measurement-type error (aleatoric uncertainty); moreover, when test data differ from training data, a machine-learning model may exhibit uncertainty in its parameters (epistemic uncertainty). To predict permeability accurately while quantifying both kinds of uncertainty, this paper proposes combining a data-distribution domain transform with a Bayesian neural network. The method has two parts: transformation between the data distributions of different domains, and Bayesian neural-network modeling for permeability prediction and uncertainty estimation. Because a Bayesian neural network assumes a data distribution, it learns the relationships in the data better when the label distribution matches the network's assumed distribution. A function is therefore sought that maps the original-domain permeability labels into a permeability-related variable in a target domain (the "target-domain permeability") that satisfies the network's distributional assumption. The Bayesian neural network predicts the target-domain permeability together with its aleatoric and epistemic uncertainty, after which the inverse domain transform maps the prediction back to the original domain. The method was applied to well logs from 18 wells in an oilfield...
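The paper's exact domain transform and Bayesian network are not reproduced here; as a hedged stand-in, the sketch below uses Monte Carlo dropout with a heteroscedastic Gaussian head (a common approximate Bayesian neural network) on synthetic data to separate aleatoric uncertainty (mean predicted noise variance) from epistemic uncertainty (variance of predicted means across stochastic passes).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for log-derived features -> permeability; the paper's
# field data and domain-transformed labels are not reproduced.
X = torch.randn(500, 5)
y = X[:, :1] * 2.0 + 0.5 * torch.randn(500, 1)    # noisy synthetic target

class MCDropoutNet(nn.Module):
    """Predicts a mean and a log-variance; dropout stays active at test time."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Dropout(0.2),
                                  nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2))
        self.mu, self.logvar = nn.Linear(64, 1), nn.Linear(64, 1)
    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

net = MCDropoutNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):                               # heteroscedastic Gaussian NLL
    mu, logvar = net(X)
    loss = (0.5 * (logvar + (y - mu) ** 2 / logvar.exp())).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Monte Carlo dropout at inference: epistemic = spread of means across passes,
# aleatoric = average predicted noise variance.
net.train()                                        # keep dropout active
with torch.no_grad():
    mus, vars_ = zip(*[(m, lv.exp()) for m, lv in (net(X[:5]) for _ in range(100))])
mus, vars_ = torch.stack(mus), torch.stack(vars_)
epistemic = mus.var(dim=0)
aleatoric = vars_.mean(dim=0)
print("epistemic:", epistemic.squeeze(), "\naleatoric:", aleatoric.squeeze())
```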

11.
It is the goal of remote sensing to infer information about objects or a natural process from a remote location. This implies that uncertainty in measurement should be viewed as central to remote sensing. In this study, the uncertainty associated with water stages derived from a single SAR image of the Alzette (G.D. of Luxembourg) 2003 flood is assessed using a stepped GLUE procedure. The main uncertain input factors to the SAR processing chain for estimating water stages include geolocation accuracy, spatial filter window size, image thresholding value, DEM vertical precision, and the number of river cross-sections at which water stages are estimated. Initial results show that even with plausible parameter values, uncertainty in water stages over the entire river reach is 2.8 m on average. Adding spatially distributed field water stages to the GLUE analysis following a one-at-a-time approach helps to considerably reduce SAR water stage uncertainty (0.6 m on average), thereby identifying appropriate value ranges for each uncertain SAR water stage processing factor. For the GLUE analysis a Nash-like efficiency criterion adapted to spatial data is proposed, whereby acceptable SAR model simulations are required to outperform a simpler regression model based on the field-surveyed average river-bed gradient. Weighted CDFs for all factors based on the proposed efficiency criterion allow the generation of reliable uncertainty quantile ranges and 2D maps that show the uncertainty associated with SAR-derived water stages. The stepped GLUE procedure demonstrated that not all field data collected are necessary to achieve maximum constraining. A possible efficient way to decide on relevant locations at which to sample in the field is proposed. It is also suggested that the resulting uncertainty ranges and flood extent or depth maps may be used to evaluate 1D or 2D flood inundation models in terms of water stages, depths or extents. For this, the extended GLUE approach, which copes with the presence of uncertainty in the observed data, may be adopted.
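A minimal GLUE sketch under invented data and a toy stand-in for the SAR processing chain: Monte Carlo sampling of the uncertain processing factors, a Nash-like efficiency as the informal likelihood, and likelihood-weighted quantiles that play the role of the paper's weighted CDFs.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_water_stage(params, n_sections=20):
    """Hypothetical stand-in for the SAR processing chain: maps uncertain
    processing factors to water stages at river cross-sections."""
    geoloc, win, thresh = params
    base = np.linspace(295.0, 290.0, n_sections)            # falling stage profile
    return base + 0.5 * geoloc + 0.2 * (win - 3) + 1.5 * (thresh - 0.5)

observed = np.linspace(295.2, 290.1, 20) + rng.normal(0, 0.1, 20)

# GLUE: Monte Carlo sampling of the uncertain factors, likelihood-weight the
# behavioural runs, then derive weighted uncertainty quantiles per section.
n_runs = 5000
samples = np.column_stack([rng.normal(0, 1, n_runs),        # geolocation shift
                           rng.integers(1, 8, n_runs),      # filter window size
                           rng.uniform(0.2, 0.8, n_runs)])  # threshold value
sims = np.array([simulate_water_stage(p) for p in samples])
eff = 1 - np.sum((sims - observed) ** 2, axis=1) / np.sum((observed - observed.mean()) ** 2)
behavioural = eff > 0.0                                     # acceptability threshold
w = eff[behavioural] / eff[behavioural].sum()               # GLUE likelihood weights

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return values[order][np.searchsorted(cdf, q)]

stages = sims[behavioural, 0]                               # cross-section 0
print("5-95% uncertainty band at section 0:",
      weighted_quantile(stages, w, 0.05), weighted_quantile(stages, w, 0.95))
```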

12.
Journal of Hydrology, 2002, 255(1-4): 90-106
A detailed uncertainty analysis of three-component mixing models based on the Haute-Mentue watershed (Switzerland) is presented. Two types of uncertainty are distinguished: "model uncertainty", which is affected by model assumptions, and "statistical uncertainty", which is due to the temporal and spatial variability of the chemical tracer concentrations of the components. The statistical uncertainty is studied using a Monte Carlo procedure. The model uncertainty is investigated by comparing four different mixing models, all based on the same tracers but each considering alternative hypotheses about component concentrations and their spatio-temporal variability. This analysis indicates that despite the uncertainty, the flow sources that generate the streamflow are clearly identified at the catchment scale by applying the mixing model. However, the precision and coherence of hydrograph separations can be improved by taking into account any available information about the temporal and spatial variability of component chemical concentrations.
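A sketch of the Monte Carlo treatment of statistical uncertainty in a three-component mixing model, with invented tracer signatures: component concentrations are resampled, the 3x3 mass-balance system is solved for the component fractions, and only physically valid mixtures are retained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tracer signatures (e.g. silica, calcium) for three components:
# means and standard deviations express their spatio-temporal variability.
components = {            # (tracer1 mean, sd), (tracer2 mean, sd)
    "rainfall":    ((1.0, 0.3), (2.0, 0.5)),
    "soil water":  ((4.0, 0.6), (6.0, 1.0)),
    "groundwater": ((9.0, 0.8), (3.0, 0.6)),
}
stream = np.array([5.0, 4.2])  # tracer concentrations measured in the stream

# Monte Carlo: resample component concentrations, solve the 3x3 mixing system
#   f1 + f2 + f3 = 1 ;  sum_i f_i * c_(i,tracer) = c_(stream,tracer)
fractions = []
for _ in range(10000):
    cols = [[1.0, rng.normal(*t1), rng.normal(*t2)]
            for (t1, t2) in components.values()]
    A = np.array(cols).T                   # rows: mass balance, tracer1, tracer2
    f = np.linalg.solve(A, np.array([1.0, *stream]))
    if np.all(f >= 0) and np.all(f <= 1):  # keep only physically valid mixtures
        fractions.append(f)

fractions = np.array(fractions)
for name, (lo, med, hi) in zip(components,
        np.percentile(fractions, [5, 50, 95], axis=0).T):
    print(f"{name}: fraction {med:.2f} ({lo:.2f}-{hi:.2f})")
```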

13.
The effect of sampling uncertainty and spatial correlation on the pooling of site and regional information is studied in the context of the empirical Bayes (EB) normal probability model, chosen for its simplicity and generality. Because the EB model parameters must be evaluated from observed hydrologic data, they are subject to uncertainty, which can be considerable when samples are small and inter-site correlation exists. Procedures are developed that permit approximate assessment of the effect of uncertainty in these EB parameters on inference. Hence, the gains that pooled inference can make over site or regional inference can be realistically evaluated. It is also shown that when the site estimate differs much more than expected from the regional estimate, pooling of information can be counterproductive, in the sense that site or regional inference may be more precise.
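A minimal sketch of empirical Bayes pooling under the normal model: the at-site estimate is shrunk toward the regional estimate in proportion to the relative precisions of the two sources. The numbers are illustrative; the paper's treatment of sampling uncertainty in the EB parameters themselves is not reproduced.

```python
import numpy as np

# At-site estimate from a short record vs. a regional (prior) estimate; the
# variances encode the precision of each information source (invented values).
site_mean, site_var = 120.0, 25.0 ** 2 / 12      # at-site mean, n = 12 years
regional_mean, regional_var = 100.0, 15.0 ** 2   # regional estimate

# Precision-weighted pooling: the posterior mean under the EB normal model.
w = (1 / site_var) / (1 / site_var + 1 / regional_var)
pooled_mean = w * site_mean + (1 - w) * regional_mean
pooled_var = 1 / (1 / site_var + 1 / regional_var)

# If |site_mean - regional_mean| is much larger than the combined sampling
# spread, this pooling can be counterproductive, as the abstract notes.
print(f"shrinkage weight on site data: {w:.2f}")
print(f"pooled estimate: {pooled_mean:.1f} +/- {np.sqrt(pooled_var):.1f}")
```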

14.
In this paper we extend the generalized likelihood uncertainty estimation (GLUE) technique to estimate spatially distributed uncertainty in models conditioned against binary pattern data contained in flood inundation maps. Untransformed binary pattern data have already been used within GLUE to estimate domain-averaged (zero-dimensional) likelihoods, yet the pattern information embedded within such sources has not been used to estimate distributed uncertainty. Where pattern information has been used to map distributed uncertainty, it has been transformed into a continuous function prior to use, which may introduce additional errors. To solve this problem we use "raw" binary pattern data to define a zero-dimensional global performance measure for each simulation in a Monte Carlo ensemble. Thereafter, for each pixel of the distributed model we evaluate the probability that the pixel was inundated. This probability is then weighted by the measure of global model performance, thus taking into account how well a given parameter set performs overall. The result is a distributed uncertainty measure mapped over real space. The advantage of the approach is that it both captures distributed uncertainty and contains information on global likelihood that can be used to condition predictions of further events for which observed data are not available. The technique is applied to the problem of flood inundation prediction at two test sites representing different hydrodynamic conditions. In both cases, the method reveals the spatial structure in simulation uncertainty and simultaneously enables mapping of the flood probability predicted by the model. Spatially distributed uncertainty analysis is shown to contain information over and above that available from global performance measures. Overall, the paper highlights the different types of information that may be obtained from mappings of model uncertainty over real and n-dimensional parameter spaces.
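A hedged sketch of the per-pixel weighting the paper describes, with random stand-in maps: each Monte Carlo run gets a zero-dimensional performance score computed directly from the raw binary patterns (here a critical-success-index-style measure, one plausible choice), and the per-pixel inundation probability is the performance-weighted ensemble average.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ensemble: 500 Monte Carlo runs of a flood model, each producing
# a binary inundation map over a 50x50 grid, plus an observed binary map.
n_runs, ny, nx = 500, 50, 50
sims = rng.random((n_runs, ny, nx)) < 0.3
obs = rng.random((ny, nx)) < 0.3

# Zero-dimensional global performance per run from the raw binary patterns:
# F = A / (A + B + C), with A = hits, B = false alarms, C = misses.
A = (sims & obs).sum(axis=(1, 2))
B = (sims & ~obs).sum(axis=(1, 2))
C = (~sims & obs).sum(axis=(1, 2))
F = A / (A + B + C)

# GLUE-style distributed uncertainty: probability that each pixel is inundated,
# weighted by how well the generating parameter set performed globally.
w = F / F.sum()
p_inundated = np.tensordot(w, sims.astype(float), axes=1)   # (ny, nx) map
print("max pixel flood probability:", p_inundated.max().round(3))
```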

15.
A probabilistic description of inversion of noisy data   (total citations: 5; self-citations: 4; other citations: 1)
Based on Bayesian theory, a concrete workflow and methods for processing noisy geophysical data are given, comprising mainly likelihood estimation and posterior computation. We extend the concept of a data vector to a set of data vectors; by introducing a degree of confidence in data space, the data noise is transferred onto the probability density function in model space, yielding a likelihood function that reflects the uncertainty of the data themselves. Because this avoids manual intervention in data space during processing, it guarantees that the probability density in model space purely reflects the data noise, offering high information fidelity and retention of feasible solutions. To obtain a posterior that incorporates prior information, a probabilistic analysis method using weighting matrices is proposed; it introduces geological information directly in model space and strongly constrains the non-uniqueness of the inversion caused by noise. The entire workflow is demonstrated with magnetotelluric inversion as the example.
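A toy grid-based Bayesian sketch in the spirit of this workflow (not the paper's magnetotelluric implementation): per-datum confidence weights transfer the data noise into a model-space likelihood, and a Gaussian prior standing in for geological information is multiplied in to give the posterior.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy noisy inverse problem: forward model g(m), noisy data, weighted misfit.
def g(m):
    return np.array([m, m ** 2, np.exp(0.1 * m)])  # toy forward responses

m_true = 2.0
sigma = np.array([0.1, 0.5, 0.2])                  # per-datum noise levels
d_obs = g(m_true) + rng.normal(0, sigma)

# Confidence (trust) weights on each datum transfer the data noise into the
# model-space pdf; W plays the role of the paper's weighting matrix.
W = np.diag(1.0 / sigma)

m_grid = np.linspace(0.0, 4.0, 801)
dm = m_grid[1] - m_grid[0]
log_like = np.array([-0.5 * np.sum((W @ (d_obs - g(m))) ** 2) for m in m_grid])
log_prior = -0.5 * ((m_grid - 1.8) / 1.0) ** 2     # stand-in geological prior
log_post = log_like + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum() * dm                            # normalize on the grid

print("posterior mean:", round(float(np.sum(m_grid * post) * dm), 3))
```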

16.
Applications of artificial neural network (ANN) models have been reported over the last two decades for solving a variety of water-resources and environmental problems, including prediction, forecasting, and classification. Although numerous studies have demonstrated improved estimates from ANN models, their practical application is sometimes limited. The black-box nature of ANN models and their parameters hardly conveys the physical meaning of catchment characteristics, resulting in a lack of transparency. In addition, the point predictions provided by ANN models carry no information about prediction uncertainty, which reduces their reliability. Thus, there is an increasing consensus among researchers on the need for methods to quantify the uncertainty of ANN models, and comprehensive evaluation of the uncertainty methods applied to ANN models is an emerging field calling for further improvement. In this paper, methods used for quantifying the prediction uncertainty of ANN-based hydrologic models are reviewed based on research articles published from 2002 to 2015 that focus on streamflow forecasting or prediction. While flood forecasting with uncertainty quantification has frequently been reported for approaches other than ANNs, uncertainty quantification for ANN models is a more recent development in the field, emerging from 2002. Based on the review, it is found that the best ways of incorporating the various aspects of uncertainty into ANN modeling require further investigation. Although model input, parameter, and structure uncertainty are generally considered the main sources, information on their mutual interaction is still lacking when estimating total prediction uncertainty. The network topology, including the number of layers and nodes, the activation function, and the training algorithm, has often been optimized for model accuracy but not for model uncertainty. Finally, the effective use of various uncertainty evaluation indices should be encouraged for meaningful quantification of uncertainty. This review also discusses the effectiveness and drawbacks of each method and makes recommendations for further improvement.

17.
This paper proposes a new orientation to address the problem of hydrological model calibration in ungauged basins. Satellite radar altimetry observations of river water level at the basin outlet are used to calibrate the model as a surrogate for streamflow data. To shift the calibration objective, the hydrological model is coupled with a hydraulic model describing the relation between streamflow and water stage. The methodology is illustrated by a case study in the Upper Mississippi Basin using TOPEX/Poseidon (T/P) satellite data. Generalized likelihood uncertainty estimation (GLUE) is employed for model calibration and uncertainty analysis. We found that even without any streamflow information to regulate model behaviour, the calibrated hydrological model can make fairly reasonable streamflow estimates. To illustrate the degree of additional uncertainty associated with shifting the calibration objective and to identify its sources, the posterior distributions of hydrological parameters derived from calibration against T/P data, against streamflow data, and against T/P data with fixed hydraulic parameters are compared. The results show that the main source is model parameter uncertainty; the contribution of remote sensing data uncertainty is minor. Furthermore, the influence of removing high-error satellite observations on streamflow estimation is examined. Provided the calibration data retain sufficient temporal coverage, such data screening can eliminate some unrealistic parameter sets from the behavioural group. The study contributes to improving streamflow estimation in ungauged basins and to evaluating the value of remote sensing in hydrological modelling.

18.
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available for the northern Yucca Flat area. Combining the recharge and geological components with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given the available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using model averaging. Two model-averaging techniques (based on information criteria and on GLUE) are discussed. This study shows that the contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has a more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more across the different geological models than across the different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
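A minimal sketch of information-criterion model averaging, one of the two techniques discussed (GLUE-based averaging is omitted): AIC differences give model weights, and the averaged prediction's variance splits into a within-model (parametric) part and a between-model part, mirroring the paper's comparison of the two uncertainty contributions. All numbers are invented.

```python
import numpy as np

# AIC-based model averaging over alternative model structures.
aic = np.array([210.3, 212.1, 208.7, 215.0])       # AIC of 4 alternative models
pred = np.array([15.2, 14.1, 16.0, 12.8])          # predicted head [m] per model
var_within = np.array([0.4, 0.6, 0.5, 0.9])        # parametric variance per model

# Akaike weights: exp(-delta/2), normalized.
delta = aic - aic.min()
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()

mean = np.sum(w * pred)
var_between = np.sum(w * (pred - mean) ** 2)       # model (structural) uncertainty
var_total = np.sum(w * var_within) + var_between

print("model weights:", w.round(3))
print(f"averaged prediction: {mean:.2f} m, "
      f"parametric var: {np.sum(w * var_within):.2f}, model var: {var_between:.2f}")
```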

19.
Testing competing conceptual model hypotheses in hydrology is complicated by uncertainties from a wide range of sources, which result in multiple simulations that explain catchment behaviour. In this study, the limits-of-acceptability uncertainty analysis approach is used to discriminate between 78 competing hypotheses in the Framework for Understanding Structural Errors for 24 catchments in the UK. During model evaluation, we test the model's ability to represent observed catchment dynamics and processes by defining key hydrologic signatures and time-step-based metrics from the observed discharge time series. We explicitly account for uncertainty in the evaluation data by constructing uncertainty bounds from errors in the stage-discharge rating curve relationship. Our study revealed large differences in model performance both between catchments and depending on the type of diagnostic used to constrain the simulations. Model performance varied with catchment characteristics and was best in wet catchments with a simple rainfall-runoff relationship. The analysis showed that the value of different diagnostics in constraining catchment response and discriminating between competing conceptual hypotheses varies according to catchment characteristics. The information content of water-balance signatures better captured catchment dynamics in chalk catchments, where behaviour is predominantly controlled by seasonal and annual changes in rainfall, whereas the information content of the flow-duration curve and time-step performance metrics better captured the dynamics of rainfall-driven catchments. We also investigate the effect of model structure on model performance and demonstrate its (in)significance in reproducing catchment dynamics for different catchments.

20.
Information theory is the basis for understanding how information is transmitted as observations. Observation data can be used to compare uncertainty in parameter estimates and predictions between models. Jacobian Information (JI) is quantified as the determinant of the weighted Jacobian (sensitivity) matrix. Fisher Information (FI) is quantified as the determinant of the weighted FI matrix. FI measures the relative disorder of a model (entropy) within a set of models. One-dimensional models are used to demonstrate the relationship between JI and FI, and the resulting uncertainty in estimated parameter values and model predictions, for increasing model complexity, different model structures, different boundary conditions, and over-fitted models. Greater model complexity results in increased JI, accompanied by an increase in parameter and prediction uncertainty. FI generally increases with increasing model complexity unless model error is large. Models with lower FI have a higher level of disorder (an increase in entropy), which results in greater uncertainty in parameter estimates and model predictions. A constant-head boundary constrains the heads in the area near the boundary, reducing the sensitivity of simulated equivalents to estimated parameters. JI and FI are lower for this boundary condition than for a constant-outflow boundary, in which the heads in the area of the boundary can adjust freely. Complex, over-fitted models, in which the structure of the model is not supported by the observation dataset, result in lower JI and FI because there is insufficient information to estimate all parameters in the model.
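A sketch of the determinant-based JI and FI quantities for a toy sensitivity matrix. Since the abstract does not spell out how a non-square weighted Jacobian is reduced to a determinant, the JI surrogate below, sqrt(det(J^T W J)), is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

# Jacobian Information and Fisher Information for a small linear-in-parameters
# model, following the determinant-based definitions in the abstract. J holds
# sensitivities d(simulated)/d(parameter); W holds inverse error variances.
n_obs, n_par = 12, 3
J = rng.normal(size=(n_obs, n_par))                # sensitivity matrix (toy values)
W = np.diag(1.0 / rng.uniform(0.5, 2.0, n_obs) ** 2)

FI_matrix = J.T @ W @ J                            # weighted Fisher Information matrix
FI = np.linalg.det(FI_matrix)

# With more observations than parameters the weighted Jacobian is not square;
# a common surrogate is sqrt(det(J^T W J)), used here as an assumption.
JI = np.sqrt(FI)

# Lower FI implies higher disorder (entropy) and larger parameter uncertainty,
# e.g. via the linearized covariance of the estimates:
cov = np.linalg.inv(FI_matrix)
print(f"FI determinant: {FI:.3g}, JI surrogate: {JI:.3g}")
print("parameter standard errors:", np.sqrt(np.diag(cov)).round(3))
```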
