Similar documents (20 results)
1.
The similarity between maximum entropy (MaxEnt) and minimum relative entropy (MRE) allows recent advances in probabilistic inversion to obviate some of the shortcomings of the former method. The purpose of this paper is to review and extend the theory and practice of minimum relative entropy. In this regard, we illustrate important philosophies of inversion and the similarities and differences between maximum entropy, minimum relative entropy, classical smallest model (SVD) and Bayesian solutions for inverse problems. MaxEnt is applicable when we are determining a function that can be regarded as a probability distribution. The approach can be extended to the case of the general linear problem and is interpreted as the model which fits all the constraints and has the greatest multiplicity or “spread-out”, i.e., can be realized in the greatest number of ways. The MRE solution to the inverse problem differs from the maximum entropy viewpoint as noted above. The relative entropy formulation provides the advantage of allowing for non-positive models, a prior bias in the estimated pdf and 'hard' bounds if desired. We outline how MRE can be used as a measure of resolution in linear inversion and show that MRE provides us with a method to explore the limits of model space. The Bayesian methodology readily lends itself to the problem of updating prior probabilities based on uncertain field measurements, and its validity follows from the theorems of total and compound probability. In the Bayesian approach information is complete and Bayes' theorem gives a unique posterior pdf. In comparing the results of the classical, MaxEnt, MRE and Bayesian approaches we notice that they produce different results. In comparing MaxEnt with MRE for Jaynes' die problem we see excellent agreement between the results.
We compare the MaxEnt, smallest model and MRE approaches for the density distribution of an equivalent spherically-symmetric earth and for the contaminant plume-source problem. Theoretical comparisons between MRE and Bayesian solutions for the case of the linear model and Gaussian priors may show different results. The Bayesian expected-value solution approaches that of MRE and that of the smallest model as the prior distribution becomes uniform, but the Bayesian maximum a posteriori (MAP) solution may not exist for an underdetermined case with a uniform prior.
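The Jaynes die problem mentioned above has a closed-form MaxEnt answer that is easy to check numerically. The sketch below is an illustration, not code from the paper: it maximizes Shannon entropy over the six face probabilities subject to a prescribed mean of 4.5, using the exponential-family form of the solution and a simple bisection on the Lagrange multiplier.

```python
import math

def maxent_die(target_mean, faces=range(1, 7), tol=1e-12):
    # MaxEnt solution has the form p_i ∝ exp(lam * i); solve for lam so that
    # the distribution's mean matches the observed average of throws.
    def mean(lam):
        w = [math.exp(lam * f) for f in faces]
        z = sum(w)
        return sum(f * wi for f, wi in zip(faces, w)) / z

    lo, hi = -10.0, 10.0          # mean(lam) is increasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * f) for f in faces]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' "loaded die" with average throw 4.5 instead of the fair 3.5
p = maxent_die(4.5)
```

Because the constrained mean (4.5) exceeds the fair value, the MaxEnt probabilities increase monotonically from face 1 to face 6.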

3.
Bayesian data fusion in a spatial prediction context: a general formulation
In spite of the exponential growth in the amount of data that one may expect to provide greater modeling and prediction opportunities, the number and diversity of sources over which this information is fragmented is growing at an even faster rate. As a consequence, there is a real need for methods that aim at reconciling them inside an epistemically sound theoretical framework. In a statistical spatial prediction framework, classical methods are based on a multivariate approach to the problem, at the price of strong modeling hypotheses. Though new avenues have recently been opened by focusing on the integration of uncertain data sources, to the best of our knowledge there have been no systematic attempts to explicitly account for information redundancy through a data fusion procedure. Starting from the simple concept of measurement errors, this paper proposes an approach for integrating multiple information sources as part of the prediction process itself through a Bayesian approach. A general formulation is first proposed for deriving the prediction distribution of a continuous variable of interest at unsampled locations using more or less uncertain (soft) information at neighboring locations. The case of multiple information sources is then considered, with a Bayesian solution to the problem of fusing sources that are provided as separate conditional probability distributions. Well-known methods and results are derived as limit cases. The convenient hypothesis of conditional independence is discussed in the light of information theory and the maximum entropy principle, and a methodology is suggested for the optimal selection of the most informative subset of sources, if needed. Based on a synthetic case study, an application of the methodology is presented and discussed.
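Under the conditional independence hypothesis discussed above, fusing several soft sources that each arrive as a separate conditional distribution reduces to multiplying their likelihood ratios against the common prior. A minimal discrete sketch with invented numbers (not from the paper):

```python
import numpy as np

def fuse_conditionals(prior, conditionals):
    # Fuse separate conditional pmfs p(x | y_i) under conditional independence:
    #   p(x | y_1, ..., y_n)  ∝  prior(x) * Π_i [ p(x | y_i) / prior(x) ]
    prior = np.asarray(prior, dtype=float)
    post = prior.copy()
    for c in conditionals:
        post = post * (np.asarray(c, dtype=float) / prior)
    return post / post.sum()

# two soft sources, both favoring state 0, fused over a flat prior
fused = fuse_conditionals([0.5, 0.5], [[0.8, 0.2], [0.7, 0.3]])
```

Note the redundancy effect the paper targets: two mildly informative sources combine into a fused probability (about 0.90 for state 0) sharper than either source alone.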

4.
This paper gives a review of Bayesian parameter estimation. The Bayesian approach is fundamental and applicable to all kinds of inverse problems. Its basic formulation is probabilistic. Information from data is combined with a priori information on model parameters. The result is called the a posteriori probability density function, and it is the solution to the inverse problem. In practice an estimate of the parameters is obtained by taking its maximum. Well-known estimation procedures like least-squares or l1-norm inversion result, depending on the type of noise and the a priori information given. Due to the a priori information the maximum will be unique and the estimation procedures will be stable except (in theory) for the most pathological problems, which are very unlikely to occur in practice. The approach of Tarantola and Valette can be derived within classical probability theory. The Bayesian approach allows a full resolution and uncertainty analysis, which is discussed in Part II of the paper.
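For the Gaussian-noise, Gaussian-prior case described above, the MAP estimate is exactly a damped least-squares solution, and the a priori term keeps it unique and stable even when the forward operator is rank-deficient. A small sketch under those assumptions (the symbols G, d, m0 are generic illustration, not the paper's notation):

```python
import numpy as np

def map_linear_gaussian(G, d, sigma_d, m0, sigma_m):
    # MAP estimate for d = G m + noise, with Gaussian noise (std sigma_d)
    # and Gaussian prior m ~ N(m0, sigma_m^2 I):
    #   m_map = argmin ||d - G m||^2 / sigma_d^2 + ||m - m0||^2 / sigma_m^2
    G = np.asarray(G, dtype=float)
    d = np.asarray(d, dtype=float)
    m0 = np.asarray(m0, dtype=float)
    n = G.shape[1]
    A = G.T @ G / sigma_d**2 + np.eye(n) / sigma_m**2   # always invertible
    b = G.T @ d / sigma_d**2 + m0 / sigma_m**2
    return np.linalg.solve(A, b)
```

With precise data the estimate follows the data; with very noisy data it falls back on the prior, which is the stabilizing behavior described in the abstract.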

5.
Seismic conditioning of static reservoir model properties such as porosity and lithology has traditionally been approached as the solution of an inverse problem. Dynamic reservoir model properties have been constrained by time-lapse seismic data. Here, we propose a methodology to jointly estimate rock properties (such as porosity) and dynamic property changes (such as pressure and saturation changes) from time-lapse seismic data. The methodology is based on a full Bayesian approach to seismic inversion and can be divided into two steps. First we estimate the conditional probability of elastic properties and their relative changes; then we estimate the posterior probability of rock properties and dynamic property changes. We apply the proposed methodology to a synthetic reservoir study where we have created a synthetic seismic survey for a real dynamic reservoir model including pre-production and production scenarios. The final result is a set of point-wise probability distributions that allow us to predict the most probable reservoir models at each time step and to evaluate the associated uncertainty. Finally we also show an application to real field data from the Norwegian Sea, where we estimate changes in gas saturation and pressure from time-lapse seismic amplitude differences. The inverted results show the hydrocarbon displacement at the times of two repeated seismic surveys.

6.
A probabilistic description of the inversion of noisy data
Based on Bayesian theory, this paper presents a concrete workflow and method for processing noisy geophysical data, chiefly comprising likelihood-function estimation and posterior-probability computation. We extend the concept of a data vector to a set of data vectors and, by introducing a degree of confidence within data space, transfer the data noise onto the probability density function in model space, thereby obtaining a likelihood function that reflects the uncertainty of the data themselves. Because this approach avoids manual intervention in data space during processing, it guarantees that the probability density in model space reflects only the data noise, offering high information fidelity and retention of feasible solutions. To obtain a posterior distribution that incorporates prior information, we propose a probabilistic analysis method using weighting matrices, which introduces geological information directly in model space and strongly constrains the inversion non-uniqueness caused by noise. The whole workflow is demonstrated with a magnetotelluric inversion example.

7.
Compositional Bayesian indicator estimation
Indicator kriging is widely used for mapping spatial binary variables and for estimating the global and local spatial distributions of variables in the geosciences. For continuous random variables, indicator kriging gives an estimate of the cumulative distribution function, for a given threshold, which is then the estimate of a probability. Like any other kriging procedure, indicator kriging provides an estimation variance that, although not often used in applications, should be taken into account as it assesses the uncertainty of the estimate. An alternative approach to indicator estimation is proposed in this paper, in which the complete probability density function of the indicator estimate is evaluated. The procedure is described in a Bayesian framework, using a multivariate Gaussian likelihood and an a priori distribution, which are combined according to Bayes' theorem to obtain a posterior distribution for the indicator estimate. From this posterior distribution, point estimates, interval estimates and uncertainty measures can be obtained. Among the point estimates, the median of the posterior distribution is the maximum entropy estimate because there is a fifty-fifty chance of the unknown value being larger or smaller than the median; that is, there is maximum uncertainty in the choice between the two alternatives. Thus in some sense the median is an indicator estimator, alternative to the kriging estimator, that includes its own uncertainty. On the other hand, the mode of the posterior distribution, assuming a uniform prior, coincides with the simple kriging estimator. Additionally, because the indicator estimate can be considered as a two-part composition whose domain of definition is the simplex, the method is extended to compositional Bayesian indicator estimation.
Bayesian indicator estimation and compositional Bayesian indicator estimation are illustrated with an environmental case study in which the probability of the content of a geochemical element in soil being over a particular threshold is of interest. The computer codes and their user guides are in the public domain and freely available.
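The distinction drawn above between the posterior median (the maximum-entropy point estimate) and the posterior mode (which, under a uniform prior, coincides with the kriging-type estimate) can be seen in the simplest conjugate setting: with a uniform prior and k of n indicator samples above the threshold, the posterior for the exceedance probability is Beta(k+1, n−k+1). This toy sketch is only an analogy to the paper's spatial setting, not its actual estimator:

```python
from scipy.stats import beta

# k of n indicator observations exceed the threshold; uniform prior on the probability
k, n = 3, 10
a, b = k + 1, n - k + 1              # posterior is Beta(a, b)

post_mode = (a - 1) / (a + b - 2)    # = k/n, the relative frequency
post_median = beta.ppf(0.5, a, b)    # the "fifty-fifty" maximum-entropy point estimate
post_mean = a / (a + b)
```

For a skewed posterior (k/n far from 0.5) the three point estimates differ, which is exactly why reporting the full posterior, rather than a single number, carries the uncertainty along.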

8.
The Bayesian maximum entropy (BME) method can be used to predict the value of a spatial random field at an unsampled location given precise (hard) and imprecise (soft) data. It has mainly been used when the data are non-skewed. When the data are skewed, the method has been applied after transforming the data (usually through a logarithmic transform) to remove the skew. The BME method is applied to the transformed variable, and the resulting posterior distribution is transformed back to give a prediction of the primary variable. In this paper, we show how an implementation of the BME method that avoids the use of a transform, by including the logarithmic statistical moments in the general knowledge base, gives more appropriate results, as expected from the maximum entropy principle. We use a simple illustration to show this approach giving more intuitive results, and use simulations to compare the approaches in terms of the prediction errors. The simulations show that the BME method with the logarithmic moments in the general knowledge base reduces the errors, and we conclude that this approach is more suitable for incorporating soft data in a spatial analysis of lognormal data.
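The kind of bias the authors address can be illustrated with a plain lognormal sample: exponentiating the mean of the logs recovers the median, not the mean, so a naive transform-and-back-transform prediction is systematically low unless the logarithmic moments enter the analysis explicitly. A hedged numerical sketch (synthetic data, not the paper's simulations):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 1.0, 0.8
x = rng.lognormal(mean=mu, sigma=sigma, size=200_000)

logs = np.log(x)
naive = np.exp(logs.mean())                       # back-transformed log-mean -> MEDIAN of x
corrected = np.exp(logs.mean() + logs.var() / 2)  # lognormal mean needs the log-variance term
```

Here `naive` lands near exp(mu) ≈ 2.72 while the true mean is exp(mu + sigma²/2) ≈ 3.74, a roughly 27 % underestimate that carrying the logarithmic moments avoids.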

9.
The pioneering work of E. T. Jaynes in the field of Bayesian/Maximum Entropy methods has been successfully explored in a number of disciplines. The principle of maximum entropy (PME) is remarkably powerful and versatile and leads to results which are devoid of spurious structure. Minimum relative entropy (MRE) is a method which has all the important attributes of the maximum-entropy (ME) approach with the advantage that prior information may be easily included. These ‘soft’ prior constraints play a fundamental role in the solution of underdetermined problems. The MRE approach, like ME, has achieved considerable success in the field of spectral analysis where the spectrum is estimated from incomplete autocorrelations. In this paper we apply the MRE philosophy to 1D inverse problems where the model is not necessarily positive, and thus we show that MRE is a general method of tackling linear, underdetermined, inverse problems. We illustrate our discussion with examples which deal with the famous die problem introduced by Jaynes, the question of aliasing, determination of interval velocities from stacking velocities and, finally, the universal problem of band-limited extrapolation. It is found that the MRE solution for the interval velocities, when a uniform prior velocity is assumed, is exactly the Dix formulation which is generally used in the seismic industry.
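The Dix formulation recovered by the MRE solution with a uniform prior can be written down directly: the k-th interval velocity satisfies v_int,k² = (t_k v_k² − t_{k−1} v_{k−1}²)/(t_k − t_{k−1}) for rms (stacking) velocities v_k at zero-offset times t_k. A quick round-trip sketch (illustrative values, not the paper's example):

```python
import math

def dix_interval_velocities(t, v_rms):
    # Dix: v_int,k^2 = (t_k v_k^2 - t_{k-1} v_{k-1}^2) / (t_k - t_{k-1})
    out = []
    for k in range(1, len(t)):
        num = t[k] * v_rms[k] ** 2 - t[k - 1] * v_rms[k - 1] ** 2
        out.append(math.sqrt(num / (t[k] - t[k - 1])))
    return out

# round trip: build rms velocities from known interval velocities (2000, 3000 m/s
# over equal 1 s intervals), then recover the intervals with the Dix formula
t = [0.0, 1.0, 2.0]
v_rms = [2000.0, 2000.0, math.sqrt((2000.0**2 + 3000.0**2) / 2.0)]
v_int = dix_interval_velocities(t, v_rms)
```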

10.
Being a non-linear method based on a rigorous formalism and an efficient processing of various information sources, the Bayesian maximum entropy (BME) approach has proven to be a very powerful method in the context of continuous spatial random fields, providing much more satisfactory estimates than those obtained from traditional linear geostatistics (i.e., the various kriging techniques). This paper aims at presenting an extension of the BME formalism to the context of categorical spatial random fields. In the first part of the paper, the indicator kriging and cokriging methods are briefly presented and discussed. A special emphasis is put on their inherent limitations, both from the theoretical and practical point of view. The second part presents the theoretical developments of the BME approach for the case of categorical variables. The three-stage procedure is explained and the formulations for obtaining prior joint distributions and computing posterior conditional distributions are given for various typical cases. The last part of the paper consists of a simulation study for assessing the performance of BME over the traditional indicator (co)kriging techniques. The results of these simulations highlight the theoretical limitations of the indicator approach (negative probability estimates, probability distributions that do not sum to one, etc.) as well as the much better performance of the BME approach. Estimates are very close to the theoretical conditional probabilities, which can be computed according to the stated simulation hypotheses.

11.
This paper, the first in a series of two, applies the entropy (or information) theory to describe the spatial variability of synthetic data that can represent spatially correlated groundwater quality data. The application involves calculating information measures such as transinformation, the information transfer index and the correlation coefficient. These measures are calculated using discrete and analytical approaches. The discrete approach uses the contingency table and the analytical approach uses the normal probability density function. The discrete and analytical approaches are found to be in reasonable agreement. The analysis shows that transinformation is useful and comparable with correlation to characterize the spatial variability of the synthetic data set, which is correlated with distance. Copyright © 2004 John Wiley & Sons, Ltd.
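Both routes described above, the discrete contingency-table estimate and the analytical bivariate-normal expression I = −(1/2) ln(1 − ρ²), can be sketched in a few lines (illustrative only; the paper's synthetic data are not reproduced):

```python
import numpy as np

def transinformation_gaussian(rho):
    # analytical transinformation of a bivariate normal: I = -0.5 * ln(1 - rho^2)
    return -0.5 * np.log(1.0 - rho ** 2)

def transinformation_discrete(table):
    # transinformation (mutual information) from a contingency table of joint counts
    p = np.asarray(table, dtype=float)
    p = p / p.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of the row variable
    py = p.sum(axis=0, keepdims=True)   # marginal of the column variable
    nz = p > 0                          # skip empty cells (0 * log 0 := 0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

An independent table gives zero transinformation, a perfectly dependent 2×2 table gives ln 2, and the Gaussian expression grows monotonically with |ρ|, mirroring the correlation-vs-transinformation comparison in the abstract.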

12.
Due to the fast-paced increase in the availability and diversity of information sources in environmental sciences, there is a real need for sound statistical mapping techniques that can use them jointly inside a unique theoretical framework. As these information sources may vary with respect to their nature (continuous vs. categorical or qualitative), their spatial density and their intrinsic quality (soft vs. hard data), the design of such techniques is a challenging issue. In this paper, an efficient method for combining spatially non-exhaustive categorical and continuous data in a mapping context is proposed, based on the Bayesian maximum entropy paradigm. This approach relies first on the definition of a mixed random field that can account for a stochastic link between categorical and continuous random fields through the use of a cross-covariance function. When incorporating general knowledge about the first- and second-order moments of these fields, it is shown that, under mild hypotheses, their joint distribution can be expressed as a mixture of conditional Gaussian prior distributions, with parameter estimation obtained by entropy maximization. A posterior distribution that incorporates the various (soft or hard) continuous and categorical data at hand can then be obtained by a straightforward conditionalization step. The use and potential of the method are illustrated by way of a simulated case study. A comparison with a few common geostatistical methods in some limit cases also emphasizes their similarities and differences, both from the theoretical and practical viewpoints. As expected, adding categorical information may significantly improve the spatial prediction of a continuous variable, making this approach powerful and very promising.

13.
A probabilistic characterization method for seismic lithofacies identification
The distribution of reservoir lithofacies is a key parameter in reservoir characterization, and identifying lithofacies from seismic data is usually subject to strong uncertainty. Traditional methods yield only a single deterministic lithofacies distribution and cannot resolve the uncertainty of the inversion result, which increases the risk in reservoir evaluation. This paper introduces a multi-step, probability-based inversion method for seismic lithofacies identification: statistical relations between the input and output quantities are established at each step, and the probabilistic information from all steps is then fused to build the conditional probability relation between seismic data and reservoir lithofacies, from which the probability of the lithofacies distribution is inverted. Compared with traditional methods, the proposed approach characterizes, through these statistical relations, the uncertainty of the geophysical response at every step of lithofacies identification, and by fusing the per-step probabilities it numerically simulates the propagation of uncertainty. The final inverted facies probabilities objectively and accurately reflect the uncertainty of the identification result, providing important reference information for reservoir evaluation and reservoir modeling. Applications to synthetic and field data verify the effectiveness of the method.

14.
Assimilation of fuzzy data by the BME method
Modern spatiotemporal geostatistics provides a powerful framework for generation of predictive maps over a spatiotemporal domain by accounting for general knowledge to define a space of plausible events and then restricting this space of plausible events to be consistent with available site-specific knowledge. The Bayesian maximum entropy (BME) method is one of the most widely used modern geostatistics methods. BME results from assigning probabilities of plausible events based on general knowledge through information maximization and then applying operational Bayesian conditionalization that can explicitly assimilate stochastic representations of various uncertain (soft) data bases. The paper demonstrates that fuzzy data sets can be indirectly assimilated by BME through a two-step process: (a) reinterpretation of the fuzzy data as probabilistic through a generalized defuzzification procedure, and (b) efficient assimilation of the probabilistic results of generalized defuzzification by the BME method. A numerical demonstration involves site-specific probabilistic results obtained from the generalized defuzzification of a simulated fuzzy data set and general knowledge that includes the spatial mean trend and correlation structure models. The parameters of these models can be inferred from the hard data equivalent values of the probabilistic results. Accordingly, details of inference based on probabilistic soft data are also considered.

15.
In this paper we employ a novel method to find the optimal design for problems where the likelihood is not available analytically, but simulation from the likelihood is feasible. To approximate the expected utility we make use of approximate Bayesian computation methods. We detail the approach for a model on spatial extremes, where the goal is to find the optimal design for efficiently estimating the parameters determining the dependence structure. The method is applied to determine the optimal design of weather stations for modeling maximum annual summer temperatures.
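The core of approximate Bayesian computation used above can be sketched as plain rejection sampling: draw parameters from the prior, simulate the summary statistic, and keep the draws whose simulated statistic falls within a tolerance of the observed one. The toy model below (a normal mean with a uniform prior) is only an illustration, not the spatial-extremes model of the paper:

```python
import numpy as np

def abc_rejection(observed_stat, simulate, prior_draw, n=5000, eps=0.1, seed=0):
    # rejection ABC: accept a prior draw theta when its simulated summary
    # statistic lands within eps of the observed statistic
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_stat) < eps:
            kept.append(theta)
    return np.array(kept)

# toy example: infer a normal mean from a sample-mean summary of 50 draws
post = abc_rejection(
    observed_stat=1.0,
    simulate=lambda th, rng: rng.normal(th, 1.0, 50).mean(),
    prior_draw=lambda rng: rng.uniform(-5.0, 5.0),
)
```

The accepted draws approximate the posterior; in a design context such a sample is what feeds the expected-utility approximation.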

16.
Flooding hazard evaluation is the basis of flooding risk assessment, which has significance for the natural environment, human life and the social economy. This study develops a spatial framework integrating naïve Bayes (NB) and a geographic information system (GIS) to assess flooding hazard at the regional scale. The methodology was demonstrated in the Bowen Basin in Australia as a case study. The inputs into the framework are five indices: elevation, slope, soil water retention, drainage proximity and density. They were derived from spatial data processed in ArcGIS. NB, a simplified and efficient type of Bayesian method, was used, with the assistance of remotely sensed flood inundation extent in the sampling process, to infer flooding probability on a cell-by-cell basis over the study area. A likelihood-based flooding hazard map was output from the GIS-based framework. The results reveal that elevation and slope have more significant impacts on the evaluation than the other input indices. The area of high likelihood of flooding hazard is mainly located in the west and the southwest, where there is a high water-channel density, and along the water channels in the east of the study area. High likelihood of flooding hazard covers 45 % of the total area, medium likelihood accounts for about 12 %, and low and very low likelihood represent 19 % and 24 %, respectively. The results provide baseline information for identifying and assessing flooding hazard when making adaptation strategies and implementing mitigation measures in the future. The framework and methodology developed in the study offer an integrated approach to the evaluation of flooding hazard with spatial distributions and indicative uncertainties. It can also be applied to other hazard assessments.
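A Gaussian naïve Bayes classifier of the kind used above combines per-index likelihoods as if the indices were conditionally independent given the flood state. The sketch below uses invented per-class means and standard deviations for just two indices (elevation and slope); the real framework fits five indices from sampled cells:

```python
import numpy as np

def naive_bayes_flood_prob(cell, flooded_stats, dry_stats, prior_flood=0.5):
    # Gaussian naive Bayes: per-index log-likelihoods add up under the
    # conditional-independence assumption; probabilities computed stably in log space
    def loglik(x, stats):
        s = 0.0
        for xi, (mu, sd) in zip(x, stats):
            s += -0.5 * ((xi - mu) / sd) ** 2 - np.log(sd)
        return s

    lf = loglik(cell, flooded_stats) + np.log(prior_flood)
    ld = loglik(cell, dry_stats) + np.log(1.0 - prior_flood)
    m = max(lf, ld)
    return float(np.exp(lf - m) / (np.exp(lf - m) + np.exp(ld - m)))

# hypothetical per-class (mean, sd): elevation (m), slope (degrees)
flooded = [(10.0, 5.0), (1.0, 0.5)]
dry = [(50.0, 10.0), (5.0, 2.0)]
p_low_flat = naive_bayes_flood_prob([12.0, 1.2], flooded, dry)   # low, flat cell
p_high_steep = naive_bayes_flood_prob([48.0, 5.0], flooded, dry) # high, steep cell
```

Run cell by cell over a raster, this yields exactly the kind of likelihood-based hazard map the abstract describes.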

17.
Some recent research on fluvial processes suggests the idea that some hydrological variables, such as flood flows, are upper-bounded. However, most probability distributions that are currently employed in flood frequency analysis are unbounded to the right. This paper describes an exploratory study on the joint use of an upper-bounded probability distribution and non-systematic flood information, within a Bayesian framework. Accordingly, the current PMF maximum discharge appears as a reference value and a reasonable estimate of the upper-bound for maximum flows, despite the fact that PMF determination is not unequivocal and depends strongly on the available data. In the Bayesian context, the uncertainty on the PMF can be included into the analysis by considering an appropriate prior distribution for the maximum flows. In the sequence, systematic flood records, historical floods, and paleofloods can be included into a compound likelihood function which is then used to update the prior information on the upper-bound. By combining a prior distribution describing the uncertainties of PMF estimates along with various sources of flood data into a unified Bayesian approach, the expectation is to obtain improved estimates of the upper-bound. The application example was conducted with flood data from the American river basin, near the Folsom reservoir, in California, USA. The results show that it is possible to put together concepts that appear to be incompatible: the deterministic estimate of PMF, taken as a theoretical limit for floods, and the frequency analysis of maximum flows, with the inclusion of non-systematic data. As compared to conventional analysis, the combination of these two concepts within the logical context of Bayesian theory contributes an advance towards more reliable estimates of extreme floods.

18.
Journal of Hydrology, 2006, 316(1–4): 28–42
This paper presents a synthesis of a probability-based approach and the underpinning, mathematical and philosophical foundations that have evolved to date, as well as applications in modeling of vertical and two-dimensional velocity distributions that have direct implications to measurements and estimation of transport of mass, momentum and energy in fluid flows. The approach draws inferences from a probability law identified by maximizing the information entropy under the constraints imposed by the available information. It gives predictions considered to be the most probable or objective on the basis of the available information. The probabilistic approach complements the deterministic approach of hydrodynamics. The difference in the point of view between the two approaches creates a different view about the available information. Some information, such as the location and magnitude of maximum velocity, the ratio of mean and maximum velocities, that may not appear to have direct use to the deterministic approach in flow predictions become important and useful to the probabilistic approach.
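One classical outcome of this entropy approach is Chiu's velocity distribution (stated here as a hedged sketch; the paper reviews this family of results rather than this exact code): maximizing entropy under the stated constraints gives u(ξ) = (u_max/M) ln[1 + (e^M − 1)ξ] for normalized position ξ ∈ [0, 1], and the mean-to-maximum velocity ratio mentioned in the abstract is φ(M) = e^M/(e^M − 1) − 1/M.

```python
import math

def chiu_velocity(xi, u_max, M):
    # entropy-derived velocity profile: u(xi) = (u_max / M) * ln(1 + (e^M - 1) * xi)
    return (u_max / M) * math.log(1.0 + (math.exp(M) - 1.0) * xi)

def mean_to_max_ratio(M):
    # phi(M) = e^M / (e^M - 1) - 1 / M, the ratio of mean to maximum velocity
    return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M
```

The entropy parameter M is typically calibrated precisely from the observed mean/maximum velocity ratio, which is why that ratio is "important and useful" to the probabilistic approach; numerically integrating the profile reproduces φ(M).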

19.
In this paper, a Bayesian sequential sensor placement algorithm, based on the robust information entropy, is proposed for multiple types of sensors. The presented methodology has two salient features. It is a holistic approach in which the overall performance of various types of sensors at different locations is assessed. Therefore, it provides a rational and effective strategy to design the sensor configuration, which optimizes the use of the various available resources. This sequential algorithm is very efficient due to its Bayesian nature, in which a prior distribution can be incorporated. Therefore, it avoids the possible unidentifiability problem encountered in a sequential process that starts with a small number of sensors. The proposed algorithm is demonstrated using a shear building and a lattice tower with consideration of up to four types of sensors. Copyright © 2014 John Wiley & Sons, Ltd.

20.
The well-known “Maximum Entropy Formalism” offers a powerful framework for deriving probability density functions given a relevant knowledge base and an adequate prior. The majority of results based on this approach have been derived assuming a flat uninformative prior, but this assumption is to a large extent arbitrary (any one-to-one transformation of the random variable will change the flat uninformative prior into some non-constant function). In a companion paper we introduced the notion of a natural reference point for dimensional physical variables, and used this notion to derive a class of physical priors that are form-invariant to changes in the system of dimensional units. The present paper studies the effects of these priors on the probability density functions derived using the maximum entropy formalism. Analysis of real data shows that when the maximum entropy formalism uses the physical prior it yields significantly better results than when it is based on the commonly used flat uninformative prior. This improvement reflects the significance of incorporating the additional information (contained in physical priors), which is ignored when flat priors are used in the standard form of the maximum entropy formalism. A potentially serious limitation of the maximum entropy formalism is the assumption that sample moments are available. This is not the case in many macroscopic real-world problems, where the knowledge base available is a finite sample rather than population moments. As a result, the maximum entropy formalism generates a family of “nested models” parameterized by the unknown values of the population parameters. In this work we combine this formalism with a model selection scheme based on Akaike's information criterion to derive the maximum entropy model that is most consistent with the available sample. This combination establishes a general inference framework of wide applicability in scientific/engineering problems.
