Similar documents
20 similar documents found (search time: 86 ms)
1.
We modeled the spatial distribution of the most important Chagas disease vectors in Argentina, in order to obtain a predictive mapping method for the probability of presence of the vector species. We analyzed both the binary presence–absence variable of the vectors and the vector species richness in Argentina, in combination with climatic and topographical covariates associated with the region of interest. We used several statistical techniques to produce distribution maps of presence–absence for the different insect species as well as species richness, using a hierarchical Bayesian framework within the context of multivariate geostatistical modeling. Our results show that the inclusion of covariates improves the quality of the fitted models, and that there is spatial interaction between neighboring cells/pixels; mapping methods used in the past, which assumed spatial independence, are therefore not adequate, as they provide unreliable results.

2.
The estimation of site intensity occurrence probabilities in low seismic activity regions has been studied from different points of view. However, no method has been definitively established, because several problems arise when macroseismic historical data are incomplete and the active zones are not well determined. The purpose of this paper is to present a method that estimates site occurrence probabilities and, at the same time, measures the uncertainties inherent in these probabilities in low activity regions. The region to be studied is divided into very broad seismic zones. An exponential intensity probability law is fitted for each zone, and the degree of uncertainty in the assumed incompleteness of the catalogue is evaluated for each intensity. These probabilities are used to establish what may be termed ‘prior site occurrence models’. A Bayesian method is used to improve the ‘prior models’ and to obtain the ‘posterior site occurrence models’. Epicentre locations are used to recover spatial information lost in the prior broad zoning. This Bayesian correction permits the use of specific attenuation for different events and may take into account, by means of conservative criteria, epicentre location errors. Following Bayesian methods, probabilities are treated as random variables, and their distribution may be used to estimate the degree of uncertainty arising from (a) the statistical variance of estimators, (b) catalogue incompleteness and (c) mismatch of data to prior assumptions, such as a Poisson distribution for events and an exponential distribution for intensities. The results are maps of probability and uncertainty for each intensity. These maps exhibit better spatial definition than those obtained by means of simple, broad zones. Some results for Catalonia (NE of the Iberian Peninsula) are shown.
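The Bayesian improvement of the ‘prior site occurrence models’ can be illustrated with the simplest conjugate special case. The sketch below is not the paper's method (which also handles catalogue incompleteness and attenuation); it only shows how a Beta prior on an annual exceedance probability is sharpened by site data, with all counts hypothetical:

```python
import math

def posterior_exceedance(prior_a, prior_b, n_years, n_exceed):
    """Beta-Binomial conjugate update: Beta(a, b) prior on the annual
    probability that a given intensity is reached at the site, updated
    with n_exceed occurrences observed over n_years of catalogue."""
    a = prior_a + n_exceed
    b = prior_b + (n_years - n_exceed)
    mean = a / (a + b)                                     # posterior probability
    std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # its uncertainty
    return mean, std

# hypothetical zone prior (roughly 1% per year) and a 300-year site record
mean, std = posterior_exceedance(1.0, 99.0, n_years=300, n_exceed=4)
```

Mapping the posterior standard deviation alongside the posterior mean is one way to produce the paired probability/uncertainty maps the abstract describes.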

3.
Categorical data play an important role in a wide variety of spatial applications, while modeling and predicting this type of statistical variable has proved to be complex in many cases. Among other possible approaches, the Bayesian maximum entropy methodology has been developed and advocated for this goal and has been successfully applied in various spatial prediction problems. This approach aims at building a multivariate probability table from bivariate probability functions used as constraints that need to be fulfilled, in order to compute a posterior conditional distribution that accounts for hard or soft information sources. In this paper, our goal is to generalize the theoretical results further, in order to account for a much wider range of information sources, such as probability inequalities. We first show how the maximum entropy principle can be implemented efficiently using a linear iterative approximation based on a minimum norm criterion, where the minimum norm solution is obtained at each step from simple matrix operations, converging to the requested maximum entropy solution. Based on this result, we then show how the maximum entropy problem can be related to the more general minimum divergence problem, which may involve equality and inequality constraints and which can be solved from iterated minimum norm solutions. This allows us to account for a much larger set of information types, where more qualitative information, such as probability inequalities, can be used. When combined with a Bayesian data fusion approach, the method also handles potentially conflicting information. Although the theoretical results presented in this paper can be applied to any study (spatial or non-spatial) involving categorical data in general, the results are illustrated in a spatial context, where the goal is to best predict the occurrence of cultivated land in Ethiopia based on crowdsourced information. The results emphasize the benefit of the methodology, which integrates conflicting information and provides a spatially exhaustive map of these occurrence classes over the whole country.
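As a toy illustration of the table-building step, the classical iterative proportional fitting (IPF) scheme below computes the minimum-divergence joint table satisfying marginal constraints. It is a heavily simplified stand-in for the paper's iterated minimum-norm algorithm (which also handles inequality constraints), and all numbers are hypothetical:

```python
def ipf(seed, row_marg, col_marg, iters=200):
    """Iterative proportional fitting: rescales a seed joint table so its
    row/column marginals match the given constraints. The fixed point is
    the minimum-divergence table (maximum entropy for a uniform seed)."""
    p = [row[:] for row in seed]
    for _ in range(iters):
        for i, r in enumerate(row_marg):          # match row marginals
            s = sum(p[i])
            p[i] = [x * r / s for x in p[i]]
        for j, c in enumerate(col_marg):          # match column marginals
            s = sum(p[i][j] for i in range(len(p)))
            for i in range(len(p)):
                p[i][j] *= c / s
    return p

# two binary categorical variables, uniform seed, hypothetical marginals
table = ipf([[0.25, 0.25], [0.25, 0.25]], row_marg=[0.7, 0.3], col_marg=[0.4, 0.6])
```

With a uniform seed and only marginal constraints, the maximum entropy table is the independent (outer-product) table, which the iteration converges to.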

4.
A methodological approach for modelling the occurrence patterns of species for the purpose of fisheries management is proposed here. The presence/absence of the species is modelled with a hierarchical Bayesian spatial model using the geographical and environmental characteristics of each fishing location. Maps of predicted probabilities of presence are generated using Bayesian kriging. Bayesian inference on the parameters and prediction of presence/absence in new locations (Bayesian kriging) are made by treating the model as a latent Gaussian model, which allows the use of the integrated nested Laplace approximation (INLA) software (considerably faster than the well-known MCMC methods). In particular, the spatial effect has been implemented with the stochastic partial differential equation (SPDE) approach. The methodology is evaluated on Mediterranean horse mackerel (Trachurus mediterraneus) in the Western Mediterranean. The analysis shows that environmental and geographical factors can play an important role in driving the local distribution and variability in the occurrence of species. Although this approach is used here to characterize the habitat of mackerel, it could also be applied to other species and life stages in order to improve knowledge of fish populations and communities.

5.
Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971–1988.
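Product (i) can be caricatured by the textbook Gaussian special case: a climatological prior on seasonal runoff is combined with a forecast of known error variance via precision weighting. The real Bayesian Processor of Forecasts calibrates the forecast through a likelihood model estimated from historical forecast–observation pairs; the numbers here are hypothetical:

```python
def bayesian_processor(prior_mean, prior_var, forecast, forecast_err_var):
    """Normal-normal conjugate filter: combines a climatological prior on
    seasonal runoff with a forecast of known error variance, returning the
    posterior mean and variance (precision-weighted average)."""
    w = prior_var / (prior_var + forecast_err_var)       # weight on forecast
    post_mean = (1.0 - w) * prior_mean + w * forecast
    post_var = prior_var * forecast_err_var / (prior_var + forecast_err_var)
    return post_mean, post_var

# hypothetical climatology (mean 100, var 400) and forecast (140, err var 100)
m, v = bayesian_processor(prior_mean=100.0, prior_var=400.0,
                          forecast=140.0, forecast_err_var=100.0)
```

The posterior variance is always smaller than both the prior and forecast-error variances, which is the sense in which the filter sharpens the message sent to users.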

6.
Hydrologists use the generalized Pareto (GP) distribution in peaks-over-threshold (POT) modelling of extremes. A model with similar uses is the two-parameter kappa (KAP) distribution. KAP has had fewer hydrological applications than GP, but some studies have shown it to merit wider use. The problem of choosing between GP and KAP arises quite often in frequency analyses. This study, by comparing several methods for discriminating between these two models, aims to show which method(s) can be recommended. Three specific methods are considered: one uses the Anderson-Darling goodness-of-fit (GoF) statistic, another uses the ratio of maximized likelihoods (closely related to the Akaike information criterion and the Bayesian information criterion), and the third employs a normality transformation followed by application of the Shapiro-Wilk statistic. We show this last method to be the most recommendable, owing to its advantages at the sample sizes typically encountered in hydrology. We apply the simulation results to several flood POT datasets.
EDITOR D. Koutsoyiannis; ASSOCIATE EDITOR E. Volpi
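The ratio-of-maximized-likelihood criterion can be sketched in a few lines. Because GP and KAP maximum-likelihood fits require numerical optimization, the dependency-free sketch below compares two stand-in candidates with closed-form MLEs (exponential vs. lognormal); the recipe, fit each model and compare penalized maximized log-likelihoods, is the same:

```python
import math

def aic_exponential(x):
    """AIC of an exponential fit; the rate MLE is 1/mean (closed form)."""
    lam = len(x) / sum(x)
    ll = sum(math.log(lam) - lam * v for v in x)
    return 2 * 1 - 2 * ll          # k = 1 fitted parameter

def aic_lognormal(x):
    """AIC of a lognormal fit; the MLEs are the mean/variance of log-data."""
    logs = [math.log(v) for v in x]
    mu = sum(logs) / len(logs)
    sig2 = sum((l - mu) ** 2 for l in logs) / len(logs)
    ll = sum(-0.5 * math.log(2 * math.pi * sig2)
             - (l - mu) ** 2 / (2.0 * sig2) - l for l in logs)
    return 2 * 2 - 2 * ll          # k = 2 fitted parameters

# the candidate with the smaller AIC is preferred; data are made up
excesses = [0.4, 1.1, 0.7, 2.9, 0.2, 5.3, 1.8, 0.9]
preferred = ("exponential" if aic_exponential(excesses) < aic_lognormal(excesses)
             else "lognormal")
```

Swapping in GP and KAP log-likelihoods (maximized numerically) turns this sketch into the second discrimination method of the abstract.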

7.
Due to the rapidly increasing availability and diversity of information sources in the environmental sciences, there is a real need for sound statistical mapping techniques that can use them jointly within a single theoretical framework. As these information sources may vary with respect to their nature (continuous vs. categorical or qualitative), their spatial density, and their intrinsic quality (soft vs. hard data), the design of such techniques is a challenging issue. In this paper, an efficient method for combining spatially non-exhaustive categorical and continuous data in a mapping context is proposed, based on the Bayesian maximum entropy paradigm. The approach relies first on the definition of a mixed random field that can account for a stochastic link between categorical and continuous random fields through the use of a cross-covariance function. When incorporating general knowledge about the first- and second-order moments of these fields, it is shown that, under mild hypotheses, their joint distribution can be expressed as a mixture of conditional Gaussian prior distributions, with parameter estimates obtained from entropy maximization. A posterior distribution that incorporates the various (soft or hard) continuous and categorical data at hand can then be obtained by a straightforward conditionalization step. The use and potential of the method are illustrated by way of a simulated case study. A comparison with a few common geostatistical methods in some limit cases also emphasizes their similarities and differences, from both the theoretical and practical viewpoints. As expected, adding categorical information may significantly improve the spatial prediction of a continuous variable, making this approach powerful and very promising.

8.
If a parameter field to be calibrated consists of more than one statistical population, usually not only the parameter values are uncertain, but the spatial distributions of the populations are uncertain as well. In this study, we demonstrate the potential of the multimodal calibration method we proposed recently for the calibration of such fields, as applied to real-world groundwater models with several additional stochastic parameter fields. Our method enables the calibration of the spatial distribution of the statistical populations, as well as their spatially correlated parameterization, while honoring the complete prior geostatistical definition of the multimodal parameter field. We illustrate the implications of the method for the reliability of the posterior model by comparing its performance to that of a "conventional" calibration approach in which the positions of the statistical populations are not allowed to change. Information from synthetic calibration runs is used to show how ignoring the uncertainty in the positions of the statistical populations not only denies the modeler the opportunity to use the measurement information to improve these positions, but also unduly influences the posterior intra-population distributions, causes unjustified adjustments to the co-calibrated parameter fields, and results in poorer observation reproduction. The proposed multimodal calibration allows a more complete treatment of the relevant uncertainties, which prevents the above-mentioned adverse effects and renders a more trustworthy posterior model.

9.
Chain-dependent models for daily precipitation typically model the occurrence process as a Markov chain and the precipitation intensity process using one of several probability distributions. It has been argued that the mixed exponential distribution is a superior model for the rainfall intensity process, since the value of its information criterion (Akaike information criterion or Bayesian information criterion) when fit to precipitation data is usually less than that of the more commonly used gamma distribution. The differences between the criterion values of the best and lesser models are generally small relative to the magnitude of the criterion value, which raises the question of whether these differences are statistically significant. Using a likelihood ratio statistic and nesting the gamma and mixed exponential distributions in a parent distribution, we show indirectly that, in general, the superiority of the mixed exponential distribution over the gamma distribution for modeling precipitation intensity is statistically significant. Comparisons are also made with a common-α gamma model, which prove less informative.
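A minimal version of the mixed exponential fit at the heart of such comparisons can be written as a short EM loop. This is an illustrative sketch (the paper's nesting and likelihood-ratio machinery is not reproduced), with a made-up bimodal sample; the fitted log-likelihood can then be compared against a single-exponential fit to form a likelihood ratio statistic:

```python
import math

def fit_mixed_exponential(x, iters=200):
    """EM fit of the two-component mixed exponential density
    f(v) = w*l1*exp(-l1*v) + (1-w)*l2*exp(-l2*v)."""
    mean = sum(x) / len(x)
    w, l1, l2 = 0.5, 2.0 / mean, 0.5 / mean          # crude starting values
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        resp = []
        for v in x:
            p1 = w * l1 * math.exp(-l1 * v)
            p2 = (1.0 - w) * l2 * math.exp(-l2 * v)
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate the weight and both rates
        n1 = sum(resp)
        w = n1 / len(x)
        l1 = n1 / sum(r * v for r, v in zip(resp, x))
        l2 = (len(x) - n1) / sum((1.0 - r) * v for r, v in zip(resp, x))
    ll = sum(math.log(w * l1 * math.exp(-l1 * v)
                      + (1.0 - w) * l2 * math.exp(-l2 * v)) for v in x)
    return w, l1, l2, ll

sample = [0.05, 0.1, 0.15, 0.2, 2.5, 3.0, 4.0, 5.0]   # made-up intensities
w, l1, l2, ll_mix = fit_mixed_exponential(sample)
```

For this clearly bimodal sample, the mixture log-likelihood exceeds the single-exponential one; on real rainfall the margin is smaller, which is exactly why the abstract asks whether the difference is significant.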

10.
The Bayesian maximum entropy (BME) method can be used to predict the value of a spatial random field at an unsampled location given precise (hard) and imprecise (soft) data. It has mainly been used when the data are non-skewed. When the data are skewed, the method has been applied by transforming the data (usually through the logarithmic transform) in order to remove the skew. The BME method is applied to the transformed variable, and the resulting posterior distribution is transformed back to give a prediction of the primary variable. In this paper, we show how an implementation of the BME method that avoids the use of a transform, by including the logarithmic statistical moments in the general knowledge base, gives more appropriate results, as expected from the maximum entropy principle. We use a simple illustration to show that this approach gives more intuitive results, and use simulations to compare the approaches in terms of their prediction errors. The simulations show that the BME method with the logarithmic moments in the general knowledge base reduces the errors, and we conclude that this approach is more suitable for incorporating soft data in a spatial analysis of lognormal data.

11.
Populus euphratica is a dominant tree species in riparian Tugai forests and forms a natural barrier that maintains the stability of local oases in arid inland river basins. Despite being critical information for local environmental protection and recovery, the specific spatial distribution of P. euphratica has rarely been established via precise and reliable species distribution models in such areas. In this research, the potential geographic distribution of P. euphratica in the Heihe River Basin was simulated with MaxEnt software based on species occurrence data and 29 environmental variables. The result showed that in the Heihe River Basin, 820 km² of land, primarily distributed along the banks of the lower reaches of the river, is suitable habitat for P. euphratica. We built other MaxEnt models based on different environmental variables, and another eight models employing different mathematical algorithms based on the same 29 environmental variables, to demonstrate the superiority of this method. MaxEnt based on the 29 environmental variables performed the best among these models, as it precisely described the essential characteristics of the distribution of P. euphratica forest land. This study verified that MaxEnt can serve as an effective tool for modelling species distribution in extremely arid regions, given sufficient and reliable environmental variables. The results suggest that there may be a larger area of P. euphratica forest distribution in the study area and that ecological conservation and management of P. euphratica should prioritize suitable habitat. This research provides valuable insights for the conservation and management of degraded P. euphratica riparian forests.

12.
We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and on electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. The different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid biasing the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth models. Accordingly, the ERT data, collected using a two-dimensional acquisition geometry, are recast into a set of equivalent vertical electric soundings. The different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed one-dimensional model estimates and for mapping the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of geophysical tools, showing whether application of an individual method for further investigation of a specific site is beneficial.
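The sampling engine can be sketched in its simplest, fixed-dimension form. The trans-dimensional birth/death moves and the geophysical forward models are omitted; the toy target below is a hypothetical Gaussian posterior for a single log-resistivity parameter:

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Random-walk Metropolis sampler: draws from a density known only up
    to a constant through its log. The trans-dimensional algorithm in the
    text adds moves that change the model dimension; this fixed-dimension
    core is otherwise the same accept/reject loop."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)              # symmetric proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:      # Metropolis acceptance
            x, lp = xp, lpp
        chain.append(x)
    return chain

# hypothetical target: posterior of one log-resistivity ~ N(2.0, 0.5^2)
chain = metropolis(lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2, 0.0, 0.6, 20000)
```

Histogramming the post-burn-in chain recovers the posterior, from which Shannon information measures of the kind used in the paper can be estimated.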

13.
The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measurement (GPM) mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling), where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincident high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.
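The Laplace-prior/ℓ1 duality mentioned above has a one-line computational signature: the proximal (shrinkage) operator. The sketch below applies it to a hypothetical list of wavelet/derivative coefficients; the full variational downscaling alternates such a shrinkage step with a data-fidelity update:

```python
def soft_threshold(coeffs, lam):
    """Proximal operator of lam * ||c||_1: the MAP rule under a Laplace
    prior with a Gaussian data term. Small coefficients (noise) are set to
    zero; large ones (sharp rainfall gradients) are kept, shrunk by lam."""
    return [max(abs(c) - lam, 0.0) * (1.0 if c > 0 else -1.0) for c in coeffs]

# hypothetical derivative-space coefficients of a rainfall field
shrunk = soft_threshold([3.0, -0.5, 0.2, -4.0], lam=1.0)
```

This is why the ℓ1/total-variation choice preserves the steep gradients the abstract emphasizes: large coefficients survive thresholding, whereas a Gaussian (ℓ2) prior would shrink everything proportionally.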

14.
Snow water equivalent (SWE) estimates at the end of the winter season have been compared for the 2002–2006 period in a 200 km² mountainous area in Switzerland, using three different models. The first model, ALPINE3D, is a physically based process-oriented model, which solves the snowpack energy and mass balance equations. The other two models, SWE-SEM and HS-SWE, are statistical algorithms interpolating snow data on a grid. While SWE-SEM interpolates local estimates of SWE, HS-SWE converts interpolated snow depth maps into maps of SWE using a regionally-calibrated conversion model. We discuss similarities and differences among the models’ results, both in terms of total volume and spatial distribution of SWE. The comparison shows generally good agreement among the results of the three models, with a mean difference in the total volumes between the two statistical models of ∼8%, and between the physical model and the statistical ones of ∼−3% to −10%.

15.
Several risk and decision analysis applications are characterized by spatial elements: there are spatially dependent uncertain variables of interest, decisions are made at spatial locations, and there are opportunities for spatial data acquisition. Spatial dependence implies that data gathered at one coordinate could inform and assist a decision maker at other locations as well, and one should account for this learning effect when analyzing and comparing information gathering schemes. In this paper, we present concepts and methods for evaluating sequential information gathering schemes in spatial decision situations. Static and sequential information gathering schemes are outlined using the decision-theoretic notion of value of information, and we use heuristics to approximate the value of sequential information in large spatial applications. We illustrate the concepts with a Bayesian network example motivated by the risks associated with CO2 sequestration. We then present a case study from mining, where there is a risk of rock hazard in the tunnels, and information about the spatial distribution of joints in the rocks may lead to a better allocation of resources when choosing rock reinforcement locations. In this application, the spatial variables are modeled by a Gaussian process. In both examples, adaptive information gathering can carry large value.
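The value-of-information computation behind both examples reduces, in the static discrete case, to a preposterior analysis. The sketch below uses a hypothetical two-state (hazard/no hazard), two-action (reinforce/do nothing) version of the rock-hazard decision with a perfect survey; sequential schemes repeat this evaluation after each observation:

```python
def value_of_information(prior, payoff, lik):
    """Static preposterior analysis. prior[s]: state probabilities;
    payoff[a][s]: payoff of action a in state s; lik[s][y]: p(signal y | s).
    Returns VOI = E_y[max_a E(payoff | y)] - max_a E(payoff)."""
    n_s, n_y = len(prior), len(lik[0])
    prior_value = max(sum(prior[s] * row[s] for s in range(n_s)) for row in payoff)
    pre_value = 0.0
    for y in range(n_y):
        joint = [prior[s] * lik[s][y] for s in range(n_s)]     # p(s, y)
        if sum(joint) > 0.0:
            # max over actions of the unnormalized posterior expectation
            # equals p(y) * max_a E(payoff | y)
            pre_value += max(sum(joint[s] * row[s] for s in range(n_s))
                             for row in payoff)
    return pre_value - prior_value

prior = [0.3, 0.7]                    # P(rock hazard), P(no hazard) -- made up
payoff = [[-1.0, -1.0],               # reinforce: fixed cost either way
          [-10.0, 0.0]]               # do nothing: large loss if hazard
perfect = [[1.0, 0.0], [0.0, 1.0]]    # perfect joint survey: lik[s][y]
voi = value_of_information(prior, payoff, perfect)
```

Comparing the VOI against the survey cost tells the decision maker whether the data acquisition is worth buying; imperfect surveys just use a non-diagonal likelihood table.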

16.
Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, one question remains poorly addressed: uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis–Hastings (M–H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous number of forward simulations and thus appear to be extremely demanding computationally, if not prohibitively so, when a 3-D setup is invoked. This urges the development of fast and scalable 3-D modelling codes that can run large-scale 3-D models of practical interest in fractions of a second on high-performance multi-core platforms. But even with such codes, the challenge for M–H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to sample. In this paper we address both of these issues. First, we introduce a variant of the M–H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit the adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M–H significantly accelerates sampling of the posterior probability distribution. In addition, we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately, we discuss uncertainty analysis based on stochastic inversion results, and demonstrate how this analysis can be performed within a deterministic approach. In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes, with special emphasis on recent advances in the integral equation approach.

17.
18.
All Quaternary dating methods involve the measurement of one or more variables to estimate the age of a sample. Each measured quantity has an associated error and uncertainty, and may also be subject to natural variation. We review the statistical estimation of such uncertainties and variation for comparing and interpreting age estimates, with specific reference to the estimation of equivalent dose (De) values in the optically stimulated luminescence (OSL) dating of sediments. We discuss statistical aspects of OSL signal and background estimation, the determination of De values for multi-grain aliquots and individual mineral grains from the same and different samples, and the extent of variation commonly observed among such estimates. Examples are drawn from geological and archaeological contexts. We discuss the strengths and weaknesses of various graphical methods of displaying multiple, independent estimates of De, along with statistical tests and models to compare and appropriately combine them. Many of our recommendations are also applicable to the clear presentation of data obtained using other Quaternary dating methods. We encourage the use of models and methods that are based on well-established statistical principles and, ideally, are validated by appropriate numerical simulations; and we discourage the adoption of ad hoc methods developed under a particular set of measurement conditions and tested on a limited number of samples, as these may not be applicable more generally. We emphasise that the choice of statistical models should not be made solely on statistical grounds (or arbitrary rules) but should take into account the broader scientific context of each sample and any additional pertinent information.
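One standard way to "appropriately combine" independent De estimates is the inverse-variance weighted mean (the common-age limit of the models discussed); it is appropriate only when the estimates share a single true value, which is precisely the assumption the review urges readers to check first. A minimal sketch with made-up values:

```python
import math

def common_age(values, errors):
    """Inverse-variance weighted mean of independent De estimates and its
    standard error -- the 'common age' combination, valid only when all
    aliquots/grains share a single true dose."""
    w = [1.0 / e ** 2 for e in errors]
    sw = sum(w)
    mean = sum(wi * v for wi, v in zip(w, values)) / sw
    return mean, math.sqrt(1.0 / sw)

# made-up De estimates (Gy) with 1-sigma errors
de_mean, de_se = common_age([10.2, 11.8, 10.9], [0.5, 0.8, 0.6])
```

When the estimates are overdispersed relative to their stated errors, a central-age-style model with an extra dispersion parameter is preferred over this simple pooling.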

19.
In many studies, the distribution of soil attributes depends on both spatial location and environmental factors, and prediction and process identification are performed using existing methods such as kriging. However, it is often too restrictive to model soil attributes as dependent on a known, parametric function of environmental factors, as kriging typically assumes. This paper investigates a semiparametric approach for identifying and modeling the nonlinear relationships of spatially dependent soil constituent levels with environmental variables, and for obtaining point and interval predictions over a spatial region. Frequentist and Bayesian versions of the proposed method are applied to measured soil nitrogen levels throughout Florida, USA, and are compared to competing models, including frequentist and Bayesian kriging, based on an array of point and interval measures of out-of-sample forecast quality. The semiparametric models outperformed the competing models in all cases. Bayesian semiparametric models yielded the best predictive results and provided empirical coverage probability nearly equal to nominal.

20.
The Ko-g and Ma-f~j tephras are two key isochronous marker layers in northern Japan, deriving from the largest Plinian eruptions of Komagatake volcano (VEI = 5) and Mashu caldera (VEI = 6), respectively. Despite extensive radiocarbon studies associated with the two tephras, individual calibrated results show considerable variation, and thus accurate ages of these important eruptions remain controversial. Bayesian statistical approaches to calibrating radiocarbon determinations have proven successful in increasing accuracy, and sometimes precision, for dating tephras; this is achieved through the incorporation of additional stratigraphic information and the combination of evidence from multiple records. Here we use Bayesian approaches to analyse the proximal and distal information associated with the two tephra markers. Through establishing phase and deposition models, we have taken into account all of the currently available stratigraphic and chronological information. The cross-referencing of phase models with the deposition model allows the refinement of the eruption ages and of the deposition model itself. Using this, we are able to provide the most robust current age estimates for the two tephra layers. The Ko-g and Ma-f~j tephras are hereby dated to 6657–6505 (95.4%; 6586 ± 40, μ ± σ) cal yr BP and 7670–7395 (95.4%; 7532 ± 72, μ ± σ) cal yr BP, respectively. These updated age determinations underpin the reported East Asian Holocene tephrostratigraphic framework, and allow sites where the tephra layers are present to be dated more precisely and accurately. Our results encourage further applications of Bayesian modelling techniques in the volcanically active East Asian region.

