Similar articles
20 similar articles found (search time: 38 ms)
1.
Contemporary variants of the lichenometric dating technique depend upon statistical correlations between surface age and maximum lichen sizes, rather than an understanding of lichen biology. To date three terminal moraines of an Alaskan glacier, we used a new lichenometric technique in which surfaces are dated by comparing lichen population distributions with the predictions of ecological demography models with explicit rules for the biological processes that govern lichen populations: colonization, growth, and survival. These rules were inferred from size-frequency distributions of lichens on calibration surfaces, but could be taken directly from biological studies. Working with two lichen taxa, we used multinomial-based likelihood functions to compare model predictions with measured lichen populations, using only the thalli in the largest 25% of the size distribution. Joint likelihoods that combine the results of both species estimated moraine ages of AD 1938, 1917, and 1816. Ages predicted by Rhizocarpon alone were older than those of P. pubescens. Predicted ages are geologically plausible, and reveal glacier terminus retreat after a Little Ice Age maximum advance around AD 1816, with accelerated retreat starting in the early to mid twentieth century. Importantly, our technique permits calculation of prediction and model uncertainty. We attribute large confidence intervals for some dates to the use of the biologically variable Rhizocarpon subgenus, small sample sizes, and high inferred lichen mortality. We also suggest the need for improvement in demographic models. A primary advantage of our technique is that a process-based approach to lichenometry will allow direct incorporation of ongoing advances in lichen biology.

2.
Different calibration methods and data manipulations are being employed for quantitative paleoenvironmental reconstructions, but are rarely compared using the same data. Here, we compare several diatom-based models [weighted averaging (WA), weighted averaging with tolerance-downweighting (WAT), weighted averaging partial least squares, artificial neural networks (ANN) and Gaussian logit regression (GLR)] under different data manipulations. We tested whether log-transformation of environmental gradients and square-root transformation of species data improved the predictive abilities and the reconstruction capabilities of the different calibration methods, and discuss them with regard to species response models along environmental gradients. Using a calibration data set from New England, we showed that all methods adequately modelled the variables pH, alkalinity and total phosphorus (TP), as indicated by similar root mean square errors of prediction. However, WAT had lower performance statistics than simple WA and showed some unusual values in reconstruction; setting a minimum tolerance for the modern species, as available in the new computer program C2 version 1.4, resolved these problems. Validation with the instrumental record from Walden Pond (Massachusetts, USA) showed that WA and WAT most closely reconstructed pH and that GLR reconstructions showed the best agreement with measured alkalinity, whereas ANN and GLR models were superior in reconstructing the secondary gradient variable TP. Log-transformation of environmental gradients improved model performance for alkalinity, but not much for TP. While square-root transformation of species data improved the performance of the ANN models, it did not affect the WA models.
Untransformed species data resulted in better accordance of the TP inferences with the instrumental record using WA, indicating that, in some cases, ecological information encoded in the modern and fossil species data might be lost by square-root transformation. Thus it may be useful to consider different species data transformations for different environmental reconstructions. This study showed that the tested methods are equally suitable for the reconstruction of parameters that mainly control the diatom assemblages, but that ANN and GLR may be superior in modelling a secondary gradient variable. For example, ANN and GLR may be advantageous for modelling lake nutrient levels in North America, where TP gradients are relatively short.
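Several of the entries above and below (items 2, 9 and 19) rest on weighted averaging (WA) calibration. As a minimal sketch of the core idea — not the authors' code, and omitting the deshrinking step real WA implementations apply — each taxon's optimum is the abundance-weighted mean of the environmental variable across calibration lakes, and a fossil sample's value is inferred as the abundance-weighted mean of the optima of the taxa it contains (the taxa, abundances and pH values are invented):

```python
def wa_optima(abundances, env):
    """Abundance-weighted average optimum per taxon.

    abundances: list of samples, each a dict taxon -> relative abundance
    env: measured environmental value (e.g. pH) per calibration sample
    """
    num, den = {}, {}
    for sample, x in zip(abundances, env):
        for taxon, a in sample.items():
            num[taxon] = num.get(taxon, 0.0) + a * x
            den[taxon] = den.get(taxon, 0.0) + a
    return {t: num[t] / den[t] for t in num}

def wa_infer(sample, optima):
    """Infer the environmental value for one (fossil) sample."""
    taxa = [t for t in sample if t in optima]
    total = sum(sample[t] for t in taxa)
    return sum(sample[t] * optima[t] for t in taxa) / total

# Hypothetical 3-lake calibration set: diatom abundances and measured pH.
calib = [{"A": 0.7, "B": 0.3}, {"A": 0.2, "B": 0.8}, {"B": 0.5, "C": 0.5}]
ph = [5.0, 6.0, 7.0]
optima = wa_optima(calib, ph)
inferred = wa_infer({"A": 0.5, "B": 0.5}, optima)
```

The inferred value necessarily falls inside the range of the calibration pH values, one reason WA underestimates extremes (the bias that deshrinking and the Bayesian approaches in entries 3 and 17 try to address).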

3.
A Bayesian approach to palaeoecological environmental reconstruction deriving from the unimodal responses generally exhibited by organisms to an environmental gradient is described. The approach uses Bayesian model selection to calculate a collection of probability-weighted, species-specific response curves (SRCs) for each taxon within a training set, with an explicit treatment for zero abundances. These SRCs are used to reconstruct the environmental variable from sub-fossilised assemblages. The approach enables a substantial increase in computational efficiency (several orders of magnitude) over existing Bayesian methodologies. The model is developed from the Surface Water Acidification Programme (SWAP) training set and is demonstrated to exhibit comparable predictive power to existing Weighted Averaging and Maximum Likelihood methodologies, though with improvements in bias; the additional explanatory power of the Bayesian approach lies in an explicit calculation of uncertainty for each individual reconstruction. The model is applied to reconstruct the Holocene acidification history of the Round Loch of Glenhead, including a reconstruction of recent recovery derived from sediment trap data. The Bayesian reconstructions display similar trends to conventional (Weighted Averaging Partial Least Squares) reconstructions but provide a better reconstruction of extreme pH and are more sensitive to small changes in diatom assemblages. The validity of the posteriors as an apparently meaningful representation of assemblage-specific uncertainty and the high computational efficiency of the approach open up the possibility of highly constrained multiproxy reconstructions.

4.
This study introduces a transition probability-based Bayesian updating (BU) approach for spatial classification through an expert system. Transition probabilities are interpreted as expert opinions for updating the prior marginal probabilities of categorical response variables. The main objective of this paper is to provide a spatial categorical variable prediction method that has a solid theoretical foundation and yields relatively higher classification accuracy than conventional ones. The basic idea is to first build a linear Bayesian updating (LBU) model that corresponds to an application of Bayes' theorem. Since the linear opinion pool is intrinsically suboptimal and underconfident, the beta-transformed Bayesian updating (BBU) model is proposed to overcome this limitation. Another type of BU approach, conditionally independent Bayesian updating (CIBU), is derived based on conditionally independent experts. It is shown that traditional Markovian-type categorical prediction (MCP) is equivalent to a particular CIBU model with specific parameters. The three variants of the BU method are illustrated in synthetic and real-world case studies; comparisons with both LBU and MCP favor the BBU model.
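To make the LBU/BBU contrast concrete: a linear opinion pool averages expert probability vectors, and because the pooled result is underconfident, a beta-type transform re-sharpens it. The sketch below uses a simple power transform as a crude stand-in for the beta transform of the paper (expert probabilities and weights are invented):

```python
def linear_pool(expert_probs, weights):
    """Linear opinion pool: weighted average of expert probability vectors."""
    k = len(expert_probs[0])
    pooled = [sum(w * p[i] for w, p in zip(weights, expert_probs))
              for i in range(k)]
    s = sum(pooled)
    return [p / s for p in pooled]

def sharpen(probs, alpha):
    """Power-transform sharpening (a stand-in for the beta transform):
    raising pooled probabilities to alpha > 1 counteracts underconfidence."""
    raised = [p ** alpha for p in probs]
    s = sum(raised)
    return [p / s for p in raised]

# Two hypothetical experts over three land-cover classes.
experts = [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]]
pooled = linear_pool(experts, [0.5, 0.5])
sharpened = sharpen(pooled, 2.0)
```

Sharpening pushes probability mass toward the class the pool already favors, which is the qualitative effect the BBU model exploits.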

5.
A case-based reasoning approach for predicting land use change   Total citations: 3 (self-citations: 0, other citations: 0)
杜云艳, 王丽敬, 季民, 曹峰. 《地理学报》, 2009, 64(12): 1421-1429
Case-based reasoning (CBR) currently has clear limitations when applied to complex geoscience problems, both in how geoscience cases are represented and in how similarity to historical cases is computed and reasoned about; the CBR representation model and its spatial similarity computation and inference algorithms therefore need to be extended. Focusing on the problem of land use change, this paper first analyses existing quantitative methods for land use change and proposes a research approach that applies CBR to land use change analysis. Second, to account for the spatial characteristics of land use change and the spatial relationships implicit in it, the paper presents a representation model for land use change cases, an algorithm for extracting the spatial relationships embedded among cases, and a CBR similarity-inference model that takes the spatial relationships between cases into account. Finally, a CBR experiment on land use change in the Pearl River estuary region achieves a prediction accuracy of 80%. To further evaluate the effectiveness of the CBR method for predicting land use change, a Bayesian network prediction experiment was carried out on the same data; the comparison of the two methods shows that CBR is an effective way of solving geoscience problems by proceeding from the complex to the simple.

6.
We have developed a new geodetic inversion method for the space-time distribution of fault slip velocity with time-varying smoothing regularization, in order to reconstruct accurate time histories of aseismic fault slip transients. We introduce a temporal smoothing regularization on slip and slip velocity through a Bayesian state space approach in which the strength of regularization (temporal smoothness of slip velocity) is controlled by a hyperparameter. The time-varying smoothing regularization is realized by treating the hyperparameter as a time-dependent stochastic variable and adopting a hierarchical Bayesian state space model, in which a prior distribution on the hyperparameter is introduced in addition to a conventional Bayesian state space model. We have tested this inversion method on two synthetic data sets generated by simulated aseismic slip transients. Results show that our method reproduces both rapid changes of slip velocity and steady-state velocity well, without the significant oversmoothing and undersmoothing that have been hard to avoid with the conventional Bayesian approach and its time-independent smoothing regularization. Application of this method to transient deformation in 2002 caused by a silent earthquake off the Boso peninsula, Japan, also shows similar advantages of this method over the conventional approach.

7.
How to Choose Priors for Bayesian Estimation of the Discovery Process Model   Total citations: 1 (self-citations: 0, other citations: 1)
The Bayesian version of the discovery process model provides an effective way to estimate the parameters of the superpopulation, the efficiency of the exploration effort, the number of pools and the undiscovered potential in a play. The posterior estimates are greatly influenced by the prior distribution of these parameters. Some empirical and statistical relationships for these parameters can be obtained from Monte Carlo simulations of the discovery model. For example, there is a linear relationship between the expectation of a pool size in logarithms and the order of its discovery, the slope of which is related to the discoverability factor. Some simple estimates for these unknown play parameters can be derived based upon these empirical and statistical conclusions and may serve as priors for the Bayesian approach. The priors and posteriors from this empirical Bayesian approach are compared with the estimates from Lee and Wang's modified maximum likelihood approach using the same data.

8.
Traditionally, one form of preprocessing in multivariate calibration methods such as principal component regression and partial least squares is mean centering the independent variables (responses) and the dependent variables (concentrations). However, upon examination of the statistical issue of error propagation in multivariate calibration, it was found that mean centering is not advised for some data structures. In this paper it is shown that for response data which (i) vary linearly with concentration, (ii) have no baseline (when there is a component with a non-zero response that does not change in concentration) and (iii) have no closure in the concentrations (for each sample the concentrations of all components add to a constant, e.g. 100%), it is better not to mean center the calibration data. That is, the prediction errors as evaluated by a root mean square error statistic will be smaller for a model made with the raw data than for a model made with mean-centered data. With simulated data, relative improvements ranging from 1% to 13% were observed, depending on the amount of error in the calibration concentrations and responses.
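The point about mean centering can be illustrated with a one-component calibration: for baseline-free linear data the true model passes through the origin, and mean centering effectively fits an intercept the model does not need, spending a degree of freedom. A toy simulation under these assumptions (not the paper's data; the through-origin fit typically, though not on every noise draw, gives the smaller error):

```python
import math
import random

def fit_raw(c, r):
    """Least squares through the origin: r = k*c (no mean centering)."""
    return sum(ci * ri for ci, ri in zip(c, r)) / sum(ci * ci for ci in c)

def fit_centered(c, r):
    """Least squares on mean-centered data: equivalent to fitting an
    intercept alongside the slope."""
    mc, mr = sum(c) / len(c), sum(r) / len(r)
    k = (sum((ci - mc) * (ri - mr) for ci, ri in zip(c, r))
         / sum((ci - mc) ** 2 for ci in c))
    return k, mr - k * mc

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

random.seed(0)
k_true = 2.0
c_cal = [0.5 * i for i in range(1, 11)]          # concentrations
r_cal = [k_true * c + random.gauss(0, 0.1) for c in c_cal]  # noisy responses

k_raw = fit_raw(c_cal, r_cal)
k_cen, b_cen = fit_centered(c_cal, r_cal)
true_resp = [k_true * c for c in c_cal]
err_raw = rmse([k_raw * c for c in c_cal], true_resp)
err_cen = rmse([k_cen * c + b_cen for c in c_cal], true_resp)
```

Both models recover the slope closely; the centered model additionally estimates an intercept whose true value is zero, which adds estimation variance.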

9.
This study investigated the distribution of subfossil diatom assemblages in surficial sediments of 100 lakes along steep ecological and climatic gradients in northernmost Sweden (Abisko region, 67.07° N to 68.48° N latitude, 17.67° E to 23.52° E longitude) to develop and cross-validate transfer functions for paleoenvironmental reconstruction. Of 19 environmental variables determined for each site, 15 were included in the statistical analysis. Lake-water pH (8.0%), sedimentary loss-on-ignition (LOI, 5.9%) and estimated mean July air temperature (July T, 4.8%) explained the greatest amounts of variation in the distribution of diatom taxa among the 100 lakes. Temperature and pH optima and tolerances were calculated for abundant taxa. Transfer functions, based on WA-PLS (weighted averaging partial least squares), were developed for pH (r2 = 0.77, root-mean-square-error of prediction (RMSEP) = 0.19 pH units, maximum bias = 0.31, as assessed by leave-one-out cross-validation) based on 99 lakes and for July T (r2 = 0.75, RMSEP = 0.96 °C, max. bias = 1.37 °C) based on the full 100-lake set. We subsequently assessed the ability of the diatom transfer functions to estimate lake-water pH and July T using a form of independent cross-validation. To do this, the 100-lake set was divided into two subsets. An 85-lake training-set (based on single limnological measurements) was used to develop transfer functions with similar performance as those based on the full 100 lakes, and a 15-lake test-set (with 2 years of monthly limnological measurements throughout the ice-free seasons) was used to test the transfer functions developed from the 85-lake training-set. Results from the intra-set cross-validation exercise demonstrated that lake-specific prediction errors (RMSEP) for the 15-lake test-set corresponded closely with the median measured values (pH) and the estimations based on spatial interpolations of data from weather stations (July T).
The prediction errors associated with diatom inferences were usually within the range of seasonal and interannual variability. Overall, our results confirm that diatoms can provide reliable and robust estimates of lake-water pH and July T, that WA-PLS is a robust calibration method and that long-term environmental data are needed for further improvement of paleolimnological transfer functions.
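The RMSEP and maximum-bias figures quoted above come from leave-one-out cross-validation: each lake is predicted by a model refit on the remaining lakes. A generic sketch of that loop (the mean-of-training "model" below is only a placeholder; any calibration method can be plugged in via the fit/predict callables):

```python
import math

def loo_rmsep(x, y, fit, predict):
    """Leave-one-out cross-validation: refit without sample i, predict sample i,
    and summarize the held-out errors as RMSEP and maximum absolute bias."""
    errors = []
    for i in range(len(x)):
        x_train = x[:i] + x[i + 1:]
        y_train = y[:i] + y[i + 1:]
        model = fit(x_train, y_train)
        errors.append(predict(model, x[i]) - y[i])
    rmsep = math.sqrt(sum(e * e for e in errors) / len(errors))
    max_abs_error = max(abs(e) for e in errors)
    return rmsep, max_abs_error

# Placeholder "model": predict the training mean, ignoring the predictors.
fit = lambda x_train, y_train: sum(y_train) / len(y_train)
predict = lambda model, xi: model

rmsep, max_err = loo_rmsep([[0], [1], [2], [3]], [5.1, 5.3, 5.2, 5.4],
                           fit, predict)
```

Because a root mean square can never exceed the largest absolute error, RMSEP is always bounded above by the maximum held-out error.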

10.
Based on the relationship between the visible/near-infrared leaf spectra of 19 wetland forest tree species from the Baguang silver-leaf tree garden and the Dapeng Peninsula Nature Reserve in Shenzhen and their total nitrogen (TN), total phosphorus (TP) and total potassium (TK) contents, we analysed the effects of 11 spectral preprocessing methods, 3 spectral dimensionality-reduction methods and 2 modelling methods on model accuracy. The results show that standard normal variate (SNV) transformation combined with first-derivative (1st) preprocessing gave the highest model accuracy; principal component analysis (PCA) performed best among the dimensionality-reduction methods; and support vector regression (SVR) produced the most accurate models. The coefficients of determination of the best prediction models for TN, TP and TK all exceeded 0.80, and their RPD values all exceeded 2.0, so the SVR models can be used for rapid determination of leaf TN, TP and TK.
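The SNV-plus-first-derivative preprocessing found best above can be sketched as follows: SNV centers and scales each spectrum to remove offset and multiplicative scatter effects, and a first difference approximates the spectral derivative (the synthetic spectrum is invented, not the study's data):

```python
import math

def snv(spectrum):
    """Standard normal variate: center each spectrum and scale it to unit
    (population) variance, removing per-sample offset and gain."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in spectrum) / n)
    return [(v - mean) / sd for v in spectrum]

def first_derivative(spectrum):
    """First-difference approximation of the spectral derivative."""
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

# Synthetic reflectance spectrum with an additive baseline offset.
raw = [0.30 + 0.1 * math.sin(i / 5.0) for i in range(50)]
pre = first_derivative(snv(raw))
```

Applied per sample before modelling, this pipeline leaves each spectrum with zero mean and unit variance, and the derivative suppresses any remaining constant baseline.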

11.
Daily runoff records from 5839 gauging stations worldwide were screened, and flood occurrence frequency and timing were extracted with a peaks-over-threshold sampling method, taking the seasonal maximum daily discharge as the seasonal flood magnitude. Using the optimal lead-month series of several selected large-scale climate factors as candidate predictors, a global medium- to long-range flood forecasting model was built with Bayesian model averaging, and its forecast skill was evaluated with the mean square error skill score (MSESS). The results show that, globally, the stations with acceptable forecast skill (0.6 > MSESS > 0.2) for flood magnitude and frequency account for 48% and 28% of stations, respectively. Driven by preceding-season climate factor data, the constructed medium- to long-range forecasting model successfully predicted the anomalously high flood magnitude in the Poyang Lake basin in 2020.
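Bayesian model averaging weights each candidate model's forecast by its posterior model probability, and MSESS measures forecast error against a climatology baseline. A minimal sketch with equal model priors and invented numbers (not the study's models or data):

```python
import math

def bma_weights(log_likelihoods):
    """Posterior model weights from log-likelihoods, assuming equal priors."""
    m = max(log_likelihoods)
    raw = [math.exp(ll - m) for ll in log_likelihoods]  # shift for stability
    s = sum(raw)
    return [r / s for r in raw]

def bma_forecast(predictions, weights):
    """BMA point forecast: posterior-weighted average of model forecasts."""
    return sum(w * p for w, p in zip(weights, predictions))

def msess(pred, obs, clim):
    """Mean square error skill score against a constant climatology forecast."""
    mse = sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
    mse_clim = sum((clim - o) ** 2 for o in obs) / len(obs)
    return 1.0 - mse / mse_clim

weights = bma_weights([-10.0, -12.0, -11.0])      # three candidate models
forecast = bma_forecast([120.0, 90.0, 105.0], weights)
skill = msess([1.1, 1.9, 3.2], [1.0, 2.0, 3.0], clim=2.0)
```

MSESS is 1 for a perfect forecast, 0 for climatology-equivalent skill, and negative when the model is worse than climatology; the 0.6 > MSESS > 0.2 band above marks "acceptable" stations.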

12.
The explosive growth of geographic and temporal data has attracted much attention in the information retrieval (IR) field. Since geographic and temporal information is often available only in unstructured text, the IR task becomes a non-straightforward process. In this article, we propose a novel geo-temporal context mining approach and a geo-temporal ranking model for improving search performance. Queries implicitly target ‘what’, ‘when’ and ‘where’ components. We model geographic and temporal query-dependent frequent patterns, called contexts. These contexts are derived by extracting and ranking geographic and temporal entities found in pseudo-relevance feedback documents. Two methods are proposed for inferring the query-dependent contexts: (1) a frequency-based statistical approach and (2) a frequent pattern mining approach using a support threshold. The derived geographic and temporal query contexts are then exploited in a probabilistic ranking model. Finally, geographic, temporal and content-based scores are combined to improve geo-temporal search performance. We evaluate our approach on the New York Times news collection. The experimental results show that our proposed approach significantly outperforms a well-known baseline, the probabilistic BM25 ranking model, as well as state-of-the-art approaches in the field.
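For reference, the BM25 baseline mentioned above scores a document against a query by combining term-frequency saturation, inverse document frequency and document-length normalization. A compact sketch with the common defaults k1 = 1.2 and b = 0.75 (the toy corpus is invented):

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """BM25 score of one tokenized document against a list of query terms."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n  # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
        tf = doc.count(term)
        # tf saturates as it grows; longer documents are penalized via b.
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [
    "earthquake in japan 2002".split(),
    "flood forecast poyang lake".split(),
    "weather in new york".split(),
]
query = ["flood", "lake"]
scores = [bm25_score(query, d, corpus) for d in corpus]
```

Only the document containing the query terms receives a positive score, which is why BM25 alone cannot exploit the geographic and temporal contexts the article mines.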

13.
An October–June precipitation reconstruction was developed from a Pinus halepensis regional tree-ring chronology from four sites in northwestern Tunisia for the period of 1771–2002. The reconstruction is based on a reliable and replicable statistical relationship between climate and tree-ring growth and shows climate variability on both interannual and interdecadal time scales. Thresholds (12th and 88th percentiles) based on the empirical cumulative distribution of observed precipitation for the 1902–2002 calibration period were used to delineate dry years and wet years of the long-term reconstruction. The longest reconstructed drought by this classification in the 232-year reconstruction is 2 years, which occurred in the 19th century. Analysis of 500 mb height data for the period 1948–2002 suggests reconstructed extreme dry and wet events can provide information on past atmospheric circulation anomalies over a broad region including the Mediterranean, Europe and eastern Asia.
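The dry/wet classification above uses the empirical 12th and 88th percentiles of observed precipitation as thresholds; a sketch with a synthetic precipitation series (not the Tunisian record):

```python
def percentile(values, q):
    """Empirical percentile by linear interpolation between sorted values."""
    s = sorted(values)
    pos = q / 100.0 * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def classify_years(series, low_q=12, high_q=88):
    """Flag years below the low percentile as dry and above the high as wet."""
    precip = [p for _, p in series]
    lo, hi = percentile(precip, low_q), percentile(precip, high_q)
    dry = [yr for yr, p in series if p < lo]
    wet = [yr for yr, p in series if p > hi]
    return dry, wet

# Synthetic (year, precipitation) series with a repeating pattern.
series = [(1900 + i, 200 + 10 * ((i * 7) % 13)) for i in range(40)]
dry, wet = classify_years(series)
```

In the paper the thresholds are fixed on the 1902–2002 calibration period and then applied to the full 232-year reconstruction, so pre-instrumental years are classified against the modern distribution.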

14.
For a long-term predictor from which a joint distribution of earthquake occurrence time and magnitude has been obtained, together with a record of past successes, false alarms and failures, Bayesian statistical methods yield predictive information of the kind needed as a basis for decision-making on precautionary measures. The information is presented in terms of risk refinement, intensity probability and success probability. After the event, the relative likelihood that a prediction was a success or failure can be estimated. Comparisons can also be made of the performance of different forecasting models. The application of these methods is illustrated by an example based on the proposed swarm-magnitude predictor.
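In its simplest form, the predictive information described above reduces to Bayes' theorem applied to the predictor's track record: the probability of an earthquake given an alarm follows from the hit rate, the false alarm rate and the prior base rate. A toy sketch (all rates invented, not from the paper):

```python
def success_probability(hit_rate, false_alarm_rate, base_rate):
    """P(earthquake | alarm) from the predictor's past performance.

    hit_rate:          P(alarm | earthquake occurs)
    false_alarm_rate:  P(alarm | no earthquake)
    base_rate:         prior P(earthquake) in the alarm window
    """
    p_alarm = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_alarm

p = success_probability(hit_rate=0.8, false_alarm_rate=0.1, base_rate=0.05)
```

Even a predictor with an 80% hit rate yields only a modest posterior probability when the base rate is low, yet the "risk refinement" (the ratio of posterior to prior) can still be large enough to justify precautionary measures.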

15.
This study addresses the question of what diatom taxa to include in a modern calibration set based on their relative contribution in a palaeolimnological calibration model. Using a pruning algorithm for Artificial Neural Networks (ANNs) which determines the functionality of individual taxa in terms of model performance, we pruned the Surface Water Acidification Project (SWAP) pH-diatom data-set until the predictive performance of the pruned set (as assessed by a jackknifing procedure) was statistically different from the initial full-set. Our results, based on the validation at each 5% data-set reduction, show that (i) 85% of the taxa can be removed without any effect on the pH model calibration performance, and (ii) that the complexity and the dimensionality reduction of the model by the removal of these non-essential or redundant taxa greatly improve the robustness of the calibration. A comparison between the commonly used marginal criteria for inclusion (species tolerance and Hill's N2) and our functionality criterion shows that the importance of each taxon in an ANN palaeolimnological model calibration does not appear to depend on these marginal characteristics.

16.
This is the second of two papers which elaborates a framework for embedding urban models within GIS. In the first paper (Batty and Xie 1994), we outlined how the display functions of a proprietary GIS could be used to organize a series of external software modules which contained the central elements of the modelling process, namely dataset selection and analysis, and model specification, calibration, and prediction. In that paper, we dwelt on display and data analysis functions, whereas here we outline the model-based functions of the system. We begin by reviewing residential location models based on population density theory, stating continuous and discrete model forms, and calibration methods. We then illustrate a pass through the software using data for the Buffalo urban region, showing how observed data and model estimates can be evaluated through graphic display. We present ways in which the system can be used to explore and fit a variety of models to different zoning systems and, in so doing, show how subset selection and aggregation can be used to find models with good fit. Finally we draw conclusions and outline an agenda for further research.
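Residential density models of the kind reviewed here commonly take the negative-exponential form rho(r) = rho0 * exp(-b*r), which can be calibrated by ordinary least squares on log densities. A sketch with synthetic zones (not the Buffalo data, and only one of the continuous model forms the paper considers):

```python
import math

def fit_negative_exponential(radii, densities):
    """Fit ln(rho) = ln(rho0) - b*r by ordinary least squares."""
    ys = [math.log(d) for d in densities]
    n = len(radii)
    mx = sum(radii) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(radii, ys))
             / sum((x - mx) ** 2 for x in radii))
    rho0 = math.exp(my - slope * mx)
    return rho0, -slope  # b is the (positive) density gradient

# Synthetic zones following rho0 = 5000, b = 0.3 exactly.
radii = [1.0, 2.0, 5.0, 10.0, 15.0]
dens = [5000 * math.exp(-0.3 * r) for r in radii]
rho0, b = fit_negative_exponential(radii, dens)
```

The log-linear fit is the classic calibration route; with real zonal data the residuals from this fit are exactly the kind of model-versus-observation comparison the paper evaluates through graphic display.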

17.
It is possible to reconstruct the past variation of an environmental variable from measured historical indicators when the modern values of the variable and the indicators are known. In a Bayesian statistical approach, the selection of a prior probability distribution for the past values of the environmental variable can then be crucial and the selection therefore should be made carefully. This is particularly the case when the data are noisy and the statistical model used is complex, since the influence of the prior on the results can then be especially strong. It can be difficult to elicit the prior probability distribution from the available information, since usually there are no measured data on the past values of the variable one wants to reconstruct and different reconstructions are typically consistent with each other only at a coarse level. To overcome these difficulties we propose to use a non-informative smoothing prior, possibly in combination with an informative prior, that simply penalizes for roughness of the reconstruction as measured by the variability of its values. We believe that it can sometimes be easier to set an overall prior distribution on the roughness than to agree on a prior for the actual values of the reconstructed variable. Note that by using a smoothing prior one incorporates into the model itself the smoothing step usually done before or after the actual numerical reconstruction. Another idea proposed in this paper is to integrate the reconstruction model with a multiscale feature analysis technique known as SiZer. Multiscale analysis of the posterior distribution of the reconstructed variable makes it possible to infer its statistically significant features such as trends, maxima and minima at several different time scales. While only temperature is considered in this paper, the technique can be applied to other environmental variables.
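A smoothing prior of the kind proposed above penalizes the roughness of the reconstruction; with a Gaussian likelihood, the posterior mode minimizes a penalized sum of squares, which a few Gauss–Seidel sweeps can approximate. The sketch below uses a first-difference roughness penalty and synthetic noisy data (the paper's model is richer; everything here is illustrative):

```python
import math
import random

def smooth(y, lam=5.0, sweeps=200):
    """Posterior mode under a Gaussian likelihood and a first-difference
    smoothing prior: minimize sum((x-y)^2) + lam*sum((x[i+1]-x[i])^2),
    approximated by Gauss-Seidel sweeps over the optimality conditions."""
    n = len(y)
    x = list(y)
    for _ in range(sweeps):
        for i in range(n):
            num, den = y[i], 1.0
            if i > 0:                # pull toward the left neighbour
                num += lam * x[i - 1]
                den += lam
            if i < n - 1:            # pull toward the right neighbour
                num += lam * x[i + 1]
                den += lam
            x[i] = num / den
    return x

def roughness(x):
    """Sum of squared first differences (the quantity the prior penalizes)."""
    return sum((b - a) ** 2 for a, b in zip(x, x[1:]))

random.seed(1)
truth = [math.sin(i / 8.0) for i in range(60)]
noisy = [t + random.gauss(0, 0.3) for t in truth]
recon = smooth(noisy)
```

The hyperparameter `lam` plays the role of the prior's roughness scale: larger values give smoother reconstructions, which is exactly the trade-off the paper proposes to set at the prior level rather than by pre- or post-smoothing.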

18.
An increasing number of social media users routinely share their activities through geotagged posts. The massive volume of available geotagged posts enables the collection of users’ footprints over time and offers effective opportunities for mobility prediction. Using geotagged posts for spatio-temporal prediction of future locations, however, is challenging. Previous studies either focus on next-place prediction or rely on dense data sources such as GPS data. Introduced in this article is a novel method for predicting the future locations of individuals based on geotagged social media data. This method employs the hierarchical density-based clustering algorithm with adaptive parameter selection to identify the regions frequently visited by a social media user. A multi-feature weighted Bayesian model is then developed to forecast users’ spatio-temporal locations by combining multiple factors affecting human mobility patterns. Further, an updating strategy is designed to efficiently adjust, over time, the proposed model to the dynamics in users’ mobility patterns. Based on two real-life datasets, the proposed approach outperforms a state-of-the-art method in prediction accuracy by up to 5.34% and 3.30%. Tests show that prediction reliability is high for quality predictions, but low for the identification of erroneous locations.

19.
The resolution achievable for chironomid identifications has increased in recent years because of significant improvements in taxonomic literature. However, high taxonomic resolution requires more training for analysts. Furthermore, with greater taxonomic resolution, misidentifications and the number of rare, poorly represented taxa in chironomid calibration datasets may increase. We assessed the effects of various levels of taxonomic resolution on the performance of chironomid-based temperature inference models (transfer functions) and temperature reconstruction. A calibration dataset consisting of chironomid assemblage and temperature data from 100 lakes was examined at four levels of taxonomic detail. The coarsest taxonomic resolution primarily represented identifications to genus or suprageneric level. At the highest level of taxonomic resolution, identification to genus level was possible for 37% of taxa, and identification below genus was possible for 60% of taxa. Transfer functions were obtained using Weighted Averaging (WA) and Weighted Averaging-Partial Least Squares (WA-PLS) regression. Cross-validated performance statistics, such as the root mean square error of prediction (RMSEP) and the coefficient of determination (r2) between inferred and observed values, improved considerably from the lowest taxonomic resolution level (WA: RMSEP 1.91°C, r2 0.78; WA-PLS: RMSEP 1.59°C, r2 0.86) to the highest taxonomic resolution level (WA: RMSEP 1.66°C, r2 0.84; WA-PLS: RMSEP 1.41°C, r2 0.89). Reconstructed July air temperatures during the Lateglacial period based on fossil chironomid assemblages from Hijkermeer (The Netherlands) were similar for all levels of taxonomic resolution, except the coarsest level. At the coarsest taxonomic level, reconstruction failed to infer one of the known Lateglacial cold episodes in the record.
Also, the difference in reconstructed values based on lowest and highest taxonomic resolutions exceeded sample-specific estimated standard errors of prediction in several instances. Our results suggest that chironomid-based transfer functions at the highest taxonomic resolution outperform models based on lower-resolution calibration data. However, transfer functions of intermediate taxonomic resolution produced results very similar to models based on high-resolution taxonomic data. In studies that include analysts with different levels of expertise, inference models based on intermediate taxonomic resolution, therefore, might provide an alternative to transfer functions of maximum taxonomic detail in order to ensure taxonomic consistency between calibration datasets and down-core records produced by different analysts.

20.
Polar Science, 2014, 8(3): 242-254
In this paper we examine 2- and 3-way chemometric methods for analysis of Arctic and Antarctic water samples. Standard CTD (conductivity–temperature–depth) sensor devices were used during two oceanographic expeditions (July 2007 in the Arctic; February 2009 in the Antarctic) covering a total of 174 locations. The output from these devices can be arranged in a 3-way data structure (according to sea water depth, measured variables, and geographical location). We used and compared 2- and 3-way statistical tools including PCA, PARAFAC, PLS, and N-PLS for exploratory analysis, spatial patterns discovery and calibration. Particular importance was given to the correlation and possible prediction of fluorescence from other physical variables. MATLAB's mapping toolbox was used for geo-referencing and visualization of the results. We conclude that: 1) PCA and PARAFAC models were able to describe data in a satisfactory way, but PARAFAC results were easier to interpret; 2) applying a 2-way model to 3-way data raises the risk of flattening the covariance structure of the data and losing information; 3) the distinction between Arctic and Antarctic seas was revealed mostly by PC1, relating to the physico-chemical properties of the water samples; and 4) we confirm the ability to predict fluorescence values from physical measurements when the 3-way data structure is used in N-way PLS regression.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号