Revisiting prior distributions,Part II: Implications of the physical prior in maximum entropy analysis
Authors: Rafi Baker, George Christakos
Institution: (1) Faculty of Civil and Environmental Engineering, Technion, Israel Institute of Technology, Technion City, Haifa 32000, Israel; (2) Department of Geography, San Diego State University, San Diego, CA 92182-4493, USA
Abstract: The well-known "Maximum Entropy Formalism" offers a powerful framework for deriving probability density functions given a relevant knowledge base and an adequate prior. The majority of results based on this approach have been derived assuming a flat uninformative prior, but this assumption is to a large extent arbitrary (any one-to-one transformation of the random variable will change the flat uninformative prior into some non-constant function). In a companion paper we introduced the notion of a natural reference point for dimensional physical variables, and used this notion to derive a class of physical priors that are form-invariant under changes of the system of dimensional units. The present paper studies the effects of these priors on the probability density functions derived using the maximum entropy formalism. Analysis of real data shows that when the maximum entropy formalism uses the physical prior it yields significantly better results than when it is based on the commonly used flat uninformative prior. This improvement reflects the significance of incorporating additional information (contained in physical priors), which is ignored when flat priors are used in the standard form of the maximum entropy formalism. A potentially serious limitation of the maximum entropy formalism is the assumption that population moments are available. This is not the case in many macroscopic real-world problems, where the available knowledge base is a finite sample rather than population moments. As a result, the maximum entropy formalism generates a family of "nested models" parameterized by the unknown values of the population parameters. In this work we combine this formalism with a model selection scheme based on Akaike's information criterion to derive the maximum entropy model that is most consistent with the available sample. This combination establishes a general inference framework of wide applicability in scientific and engineering problems.
Keywords:Probability  Random  Information  Prior  Entropy  Knowledge integration  Model selection
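The combination described in the abstract can be illustrated with a minimal sketch. Maximum entropy under k moment constraints yields an exponential-family density whose Lagrange multipliers, when estimated from a finite sample, coincide with maximum likelihood estimates; Akaike's information criterion (AIC = 2k − 2 log L) then selects among the nested candidates. The example below is not the authors' procedure, only an assumed two-model instance: one moment constraint on (0, ∞) gives the exponential density, two moment constraints on the real line give the Gaussian.

```python
import numpy as np

def aic_exponential(x):
    # MaxEnt density under one moment constraint E[X] on (0, inf) is
    # exponential; the MLE of the rate is 1/sample mean.  k = 1 parameter.
    n = len(x)
    lam = 1.0 / x.mean()
    loglik = n * np.log(lam) - lam * x.sum()
    return 2 * 1 - 2 * loglik

def aic_gaussian(x):
    # MaxEnt density under two moment constraints E[X], E[X^2] on the real
    # line is Gaussian; the MLE uses the sample mean and (biased) variance.
    # k = 2 parameters.
    n = len(x)
    var = x.var()
    loglik = -0.5 * n * (np.log(2 * np.pi * var) + 1)
    return 2 * 2 - 2 * loglik

# Hypothetical sample: exponential data, so the one-constraint MaxEnt
# model should attain the lower (better) AIC.
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=500)
scores = {"exponential (1 moment)": aic_exponential(sample),
          "gaussian (2 moments)": aic_gaussian(sample)}
best = min(scores, key=scores.get)
print(best)
```

The design choice mirrors the abstract: each additional sample moment tightens the MaxEnt fit but adds a parameter, and AIC arbitrates that trade-off to pick the nested model most consistent with the finite sample.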
This article is indexed in SpringerLink and other databases.