Similar Documents
20 similar documents found (search time: 31 ms)
1.
An important aim of modern geostatistical modeling is to quantify uncertainty in geological systems. Geostatistical modeling requires many input parameters. The input univariate distribution or histogram is perhaps the most important. A new method for assessing uncertainty in the histogram, particularly uncertainty in the mean, is presented. This method, referred to as the conditional finite-domain (CFD) approach, accounts for the size of the domain and the local conditioning data. It is a stochastic approach based on a multivariate Gaussian distribution. The CFD approach is shown to be convergent, design independent, and parameterization invariant. The performance of the CFD approach is illustrated in a case study focusing on the impact of the number of data and the range of correlation on the limiting uncertainty in the parameters. The spatial bootstrap method and the CFD approach are compared. As the number of data increases, uncertainty in the sample mean decreases in both the spatial bootstrap and the CFD approach. Contrary to the spatial bootstrap, uncertainty in the sample mean in the CFD approach decreases as the range of correlation increases. This is a direct result of the conditioning data being more correlated to unsampled locations in the finite domain. The sensitivity of the limiting uncertainty to the variogram and to the variable limits is also discussed.
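As an illustration of the spatial-bootstrap side of this comparison, here is a minimal sketch, assuming hypothetical data locations and an exponential covariance model (the coordinates, sill, and range are made up, not taken from the paper). Realizations drawn with the data's spatial covariance yield a distribution of sample means whose spread grows with the range of correlation, the behavior contrasted with CFD above:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D data locations and an exponential covariance model.
coords = rng.uniform(0, 100, size=(50, 2))
sill, cov_range = 1.0, 30.0

def exp_cov(h):
    return sill * np.exp(-3.0 * h / cov_range)

# Covariance matrix between all pairs of data locations.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = exp_cov(d)

# LU (Cholesky) simulation: each realization honors the spatial correlation,
# so the spread of realization means reflects the effective, not nominal,
# number of independent data.
L = np.linalg.cholesky(C + 1e-9 * np.eye(len(coords)))
means = [np.mean(L @ rng.standard_normal(len(coords))) for _ in range(2000)]
print("std. dev. of the sample mean:", np.std(means))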

2.
Applications of geostatistical methods in soil science. Total citations: 24 (self-citations: 1, citations by others: 24)
Geostatistics is now widely applied and developed in soil science and has become an important tool for understanding soil characteristics. Its spatial variogram and kriging interpolation methods are the principal means of analyzing soil properties: the spatial variogram is mainly used to describe the spatial variability of soil physical and chemical properties; different interpolation methods can be used to optimize field experiment designs and field sampling schemes; and kriging is particularly suited to estimating soil attribute values at unmeasured locations. In recent years, interpolation methods have also been widely applied to determine regional soil environmental capacity and soil quality standards, while stochastic simulation is used for uncertainty estimation of soil properties. Geostatistical methods therefore hold great promise for integrating and analyzing the large body of soil data in China.
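A minimal sketch of the experimental semivariogram that underlies such analyses, using synthetic sample points (the data and lag bins are illustrative only):

import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(200, 2))                # sample locations
values = np.sin(coords[:, 0] / 20.0) + 0.3 * rng.standard_normal(200)

def semivariogram(coords, values, lags):
    """gamma(h): mean of 0.5*(z_i - z_j)^2 over point pairs in each lag bin."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)                 # each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(lags[:-1], lags[1:])])

print(semivariogram(coords, values, np.linspace(0, 50, 11)))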

3.
Abstract

Error and uncertainty in spatial databases have gained considerable attention in recent years. The concern is that, as in other computer applications and, indeed, all analyses, poor-quality input data will yield even worse output. Various methods for analysis of uncertainty have been developed, but none has been shown to be directly applicable to an actual geographical information system application in the area of natural resources. In spatial data on natural resources in general, and in soils data in particular, a major cause of error is the inclusion of unmapped units within areas delineated on the map as uniform. In this paper, two alternative algorithms for simulating inclusions in categorical natural resource maps are detailed. Their usefulness is shown by simplified Monte Carlo testing to evaluate the accuracy of agricultural land valuation using land use and soil information. Using two test areas, it is possible to show that errors of as much as 6 per cent may result in the process of land valuation, with simulated valuations both above and below the actual values. Thus, although an actual monetary cost of the error term is estimated here, it is not found to be large.
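A minimal sketch of the Monte Carlo idea described here, with a hypothetical two-class soil map, inclusion rate, and per-class values (none of these numbers come from the paper):

import numpy as np

rng = np.random.default_rng(2)
soil_map = np.full((100, 100), 1)          # mapped unit delineated as class 1
soil_map[:, 50:] = 2                       # mapped unit delineated as class 2

def add_inclusions(cat_map, rate, classes):
    """Replace a random fraction of cells with a different (unmapped) class."""
    out = cat_map.copy()
    mask = rng.random(cat_map.shape) < rate
    for ij in zip(*np.nonzero(mask)):
        out[ij] = rng.choice([c for c in classes if c != cat_map[ij]])
    return out

# Value each class differently and see how inclusions shift total valuation.
value = {1: 100.0, 2: 60.0}
base = sum(value[c] * (soil_map == c).sum() for c in value)
sims = []
for _ in range(100):
    m = add_inclusions(soil_map, rate=0.05, classes=[1, 2])
    sims.append(sum(value[c] * (m == c).sum() for c in value))
err = (np.array(sims) - base) / base
print("relative valuation error: mean %.4f, max %.4f" % (abs(err).mean(), abs(err).max()))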

4.

Strong correlations have been observed between potential indices and the densities of spatial variables. The conventional null hypothesis of bivariate regression is inappropriate for testing their significance. A randomization test is proposed and is applied to 1975 US population data by state. The resulting relationship has a significant correlation, but its slope could occur frequently under the null hypothesis. The correlation is shown to be related to the spatial autocorrelation of densities by constructing arrangements with prescribed values of the modified Moran index.
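A minimal sketch of such a randomization test, applied to synthetic index and density values (the data are made up; the paper's modified Moran index is not reproduced):

import numpy as np

rng = np.random.default_rng(3)
potential_index = rng.standard_normal(48)                  # e.g. one value per state
density = 0.6 * potential_index + 0.8 * rng.standard_normal(48)

observed_r = np.corrcoef(potential_index, density)[0, 1]
perm_r = np.array([np.corrcoef(potential_index, rng.permutation(density))[0, 1]
                   for _ in range(9999)])

# Two-sided p-value: how often a random arrangement is at least as extreme.
p = (1 + np.sum(np.abs(perm_r) >= abs(observed_r))) / (1 + len(perm_r))
print(f"r = {observed_r:.3f}, randomization p = {p:.4f}")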

5.
Abstract

An error model for spatial databases is defined here as a stochastic process capable of generating a population of distorted versions of the same pattern of geographical variation. The differences between members of the population represent the uncertainties present in raw or interpreted data, or introduced during processing. Defined in this way, an error model can provide estimates of the uncertainty associated with the products of processing in geographical information systems. A new error model is defined in this paper for categorical data. Its application to soil and land cover maps is discussed in two examples: the measurement of area and the measurement of overlay. Specific details of implementation and use are reviewed. The model provides a powerful basis for visualizing error in area class maps, and for measuring the effects of its propagation through processes of geographical information systems.
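A minimal sketch of the error-model idea, generating a population of distorted versions of a categorical map and summarizing the spread of a derived product, here the area of one class (the map, flip rate, and classes are hypothetical; the paper's model is spatially structured rather than an independent cell flip):

import numpy as np

rng = np.random.default_rng(8)
truth = (rng.random((60, 60)) < 0.4).astype(int)   # binary land-cover map

def distort(m, p_flip=0.03):
    """One distorted member of the population: flip a small share of cells."""
    flip = rng.random(m.shape) < p_flip
    return np.where(flip, 1 - m, m)

areas = [distort(truth).sum() for _ in range(500)]  # class-1 area per realization
print("area of class 1:", truth.sum(), "+/- about", round(float(np.std(areas)), 1))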

6.

Prediction of true classes of surficial and deep earth materials using multivariate spatial data is a common challenge for geoscience modelers. Most geological processes leave a footprint that can be explored by geochemical data analysis. These footprints are normally complex statistical and spatial patterns buried deep in the high-dimensional compositional space. This paper proposes a spatial predictive model for classification of surficial and deep earth materials derived from the geochemical composition of surface regolith. The model is based on a combination of geostatistical simulation and machine learning approaches. A random forest predictive model is trained, and features are ranked based on their contribution to the predictive model. To generate potential and uncertainty maps, compositional data are simulated at unsampled locations via a chain of transformations (isometric log-ratio transformation followed by the flow anamorphosis) and geostatistical simulation. The simulated results are subsequently back-transformed to the original compositional space. The trained predictive model is used to estimate the probability of classes for simulated compositions. The proposed approach is illustrated through two case studies. In the first case study, the major crustal blocks of the Australian continent are predicted from the surface regolith geochemistry of the National Geochemical Survey of Australia project. The aim of the second case study is to discover the superficial deposits (peat) from the regional-scale soil geochemical data of the Tellus Project. The accuracy of the results in these two case studies confirms the usefulness of the proposed method for geological class prediction and geological process discovery.
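As a sketch of one link in the chain of transformations, here is the isometric log-ratio (ilr) step for a synthetic four-part composition (the basis construction is one standard choice; the flow anamorphosis and the geostatistical simulation itself are not reproduced):

import numpy as np

def helmert_basis(D):
    """Orthonormal (D, D-1) contrast matrix with zero-sum columns."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / j
        V[j, j - 1] = -1.0
        V[:, j - 1] /= np.linalg.norm(V[:, j - 1])
    return V

def ilr(x, V):
    clr = np.log(x) - np.log(x).mean(axis=-1, keepdims=True)
    return clr @ V                                  # to unconstrained space

def ilr_inverse(y, V):
    z = np.exp(y @ V.T)
    return z / z.sum(axis=-1, keepdims=True)        # back onto the simplex

comp = np.array([[0.5, 0.2, 0.2, 0.1]])             # a 4-part composition
V = helmert_basis(4)
y = ilr(comp, V)
print(y, ilr_inverse(y, V))                         # round-trips to the original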


7.
ABSTRACT

We argue that using American Community Survey (ACS) data in spatial autocorrelation statistics without considering error margins is critically problematic. Public health and geographical research have been slow to recognize the high data uncertainty of ACS estimates, even though ACS data are widely accepted data sources in neighborhood health studies and health policies. Spatial autocorrelation patterns of health indicators detected on ACS data can be distorted to the point that scholars may have difficulty perceiving the true pattern. We examine the statistical properties of spatial autocorrelation statistics of areal incidence rates based on ACS data. In a case study of teen birth rates in Mecklenburg County, North Carolina, in 2010, Global and Local Moran's I statistics estimated on 5-year ACS estimates (2006–2010) are compared to ground-truth rate estimates based on actual counts from birth certificate records and decennial-census data (2010). The detected spatial autocorrelation patterns differ significantly between the two data sources, so that actual spatial structures are misrepresented. We warn of the risk of misjudging reality, and of consequent policy failure, and argue for new spatially explicit methods that mitigate the bias that the uncertainty of ACS data imposes on statistical estimates.
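For reference, a minimal sketch of the Global Moran's I computed in such a study, with a toy rook-contiguity grid standing in for census-tract geometry (a real ACS analysis would also carry the published margins of error):

import numpy as np

rng = np.random.default_rng(4)
rates = rng.random((5, 5))                  # e.g. areal incidence rates on a 5x5 grid

n = rates.size
x = rates.ravel() - rates.mean()
W = np.zeros((n, n))
for i in range(5):                          # rook-contiguity weights
    for j in range(5):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < 5 and 0 <= j + dj < 5:
                W[i * 5 + j, (i + di) * 5 + (j + dj)] = 1.0

moran_I = (n / W.sum()) * (x @ W @ x) / (x @ x)
print("Global Moran's I:", moran_I)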

8.
ABSTRACT

The focus of this work is general methods for prioritization or screening of project sites based on the favorability of multiple spatial criteria. We present a threshold-based transformation of each underlying spatial favorability factor into a continuous scale with a common favorability interpretation across all criteria. We compare several methods of computing site favorability and propagating uncertainty from the data to the favorability metrics. Including uncertainty allows decision makers to determine whether apparent differences among sites are significant. We address uncertainty using Taylor series approximations and analytical distributions, which are compared to computationally intensive Monte Carlo simulations. Our methods are applied to siting direct-use geothermal energy projects in the Appalachian Basin, where our knowledge about any particular site is limited, yet sufficient data exist to estimate favorability. We consider four factors that contribute to site favorability: the thermal resource, described by the depth to 80°C rock; natural reservoir productivity, described by rock permeability and thickness; the potential for induced seismicity; and the estimated cost of surface infrastructure for heat distribution. Those factors are combined in three ways. We develop favorability uncertainty propagation and sensitivity analysis methods. All methods are general and can be applied to other multi-criteria spatial screening problems.
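A minimal sketch of the two propagation routes for a favorability score that multiplies two uncertain factors (the factor means and standard deviations are hypothetical, and the paper's actual combination rules are not reproduced):

import numpy as np

rng = np.random.default_rng(5)
mu1, sd1 = 0.7, 0.10        # e.g. thermal-resource favorability
mu2, sd2 = 0.5, 0.15        # e.g. reservoir-productivity favorability

# First-order Taylor: var(f1*f2) ~ mu2^2 sd1^2 + mu1^2 sd2^2 under independence.
taylor_sd = np.sqrt(mu2**2 * sd1**2 + mu1**2 * sd2**2)

# Computationally intensive Monte Carlo reference.
f1 = rng.normal(mu1, sd1, 100_000)
f2 = rng.normal(mu2, sd2, 100_000)
print("Taylor sd:", taylor_sd, " Monte Carlo sd:", (f1 * f2).std())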

9.
ABSTRACT

Choropleth mapping provides a simple but effective visual presentation of geographical data. Traditional choropleth mapping methods assume that the data to be displayed are certain. This may not be true for many real-world problems. For example, attributes generated from surveys may contain sampling and non-sampling error, and results generated using statistical inference often come with a certain level of uncertainty. In recent years, several studies have incorporated uncertain geographical attributes into choropleth mapping, with a primary focus on identifying the most homogeneous classes. However, no studies have yet accounted for the possibility that an areal unit might be placed in the wrong class due to data uncertainty. This paper addresses this issue by proposing a robustness measure and incorporating it into the optimal design of choropleth maps. In particular, this study proposes a discretization method to solve the new optimization problem, along with a novel theoretical bound to evaluate solution quality. The new approach is applied to map American Community Survey data. Test results suggest a tradeoff between within-class homogeneity and robustness. The study provides an important perspective on addressing data uncertainty in choropleth map design and offers a new approach for spatial analysts and decision-makers to incorporate robustness into the mapmaking process.
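One way to make the wrong-class possibility concrete: given an ACS estimate, its margin of error, and the breaks of the class it was assigned to, the probability that the true value lies inside the class can be computed under a normal error model (the numbers below are hypothetical; this illustrates the issue rather than the paper's robustness measure):

from scipy.stats import norm

estimate, moe_90 = 42.0, 6.0            # ACS margins of error are at the 90% level
sigma = moe_90 / 1.645                  # convert the MOE to a standard error
lower, upper = 35.0, 50.0               # breaks of the assigned class

p_in_class = norm.cdf(upper, estimate, sigma) - norm.cdf(lower, estimate, sigma)
print("probability the unit is correctly classed:", p_in_class)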

10.
ABSTRACT

Missing data is a common problem in the analysis of geospatial information. Existing methods introduce spatiotemporal dependencies to reduce imputation errors but ignore ease of use in practice. Classical interpolation models are easy to build and apply; however, their imputation accuracy is limited by their inability to capture the spatiotemporal characteristics of geospatial data. Consequently, a lightweight ensemble model was constructed by modelling the spatiotemporal dependencies in a classical interpolation model. Temporally, average correlation coefficients were introduced into a simple exponential smoothing model to automatically select the time window, ensuring that the sample data had the strongest correlation with the missing data. Spatially, Gaussian equivalent and correlation distances were introduced into an inverse distance-weighting model to assign a weight to each spatial neighbor and reflect changes in the spatiotemporal pattern. Finally, the temporal and spatial estimates of the missing values were aggregated into the final result with an extreme learning machine. Compared to existing models, the proposed model achieves higher imputation accuracy, lowering the mean absolute error by 10.93–52.48% on the road-network dataset and by 23.35–72.18% on the air-quality-station dataset, and it remains robust under abrupt spatiotemporal changes.
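A minimal sketch of the spatial side, inverse distance weighting with a Gaussian kernel (the stations, values, and bandwidth are synthetic; the paper's correlation-adjusted distances and the learning-based aggregation are not reproduced):

import numpy as np

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
values = np.array([10.0, 12.0, 11.0, 20.0])
target = np.array([0.5, 0.5])           # location of the missing record
bandwidth = 1.0

d = np.linalg.norm(stations - target, axis=1)
w = np.exp(-(d / bandwidth) ** 2)       # Gaussian distance weights
print("imputed value:", (w * values).sum() / w.sum())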

11.
Abstract

A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.
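A minimal sketch of a box-counting fractal dimension for one of the three test processes, a random-walk trace (the walk and box sizes are illustrative; multi-scale variance and the vegetation-map analysis are not reproduced):

import numpy as np

rng = np.random.default_rng(6)
path = np.cumsum(rng.choice([-1, 1], size=(5000, 2)), axis=0)
path -= path.min(axis=0)
grid = np.zeros((path[:, 0].max() + 1, path[:, 1].max() + 1), dtype=bool)
grid[path[:, 0], path[:, 1]] = True                # rasterize the trace

sizes, counts = [], []
for s in (2, 4, 8, 16, 32):
    n0, n1 = (np.array(grid.shape) // s) * s       # crop to a multiple of s
    boxes = grid[:n0, :n1].reshape(n0 // s, s, n1 // s, s).any(axis=(1, 3))
    sizes.append(s)
    counts.append(boxes.sum())                     # occupied boxes at this scale

# Dimension = negative slope of log(count) against log(box size).
D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print("box-counting dimension:", D)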

12.
ABSTRACT

The stochastic perturbation in urban cellular automata (CA) models is difficult to fine-tune: when a stochastic variable is used, it does not take the constraints of known factors into account, and when the Monte Carlo method is used, the simulation results can differ widely between runs, reducing the accuracy of the simulated results. Therefore, in this paper, we optimize the stochastic component of an urban CA model with a maximum entropy model to differentially control the intensity of the stochastic perturbation across the spatial domain. We use the kappa coefficient, figure of merit, and landscape metrics to evaluate the accuracy of the simulated results. Experimental results for Wuhan, China, demonstrate the effectiveness of the optimization. They show that, after the optimization, the kappa coefficient and figure of merit of the simulated results are significantly improved when the stochastic variable is used and slightly improved when the Monte Carlo method is used. The landscape metrics of the simulated results are much closer to those of the actual data when the stochastic variable is used and slightly closer when the Monte Carlo method is used, while the differences among simulated results are narrowed, indicating that the results are more reliable.

13.
Abstract

This paper reports on software to construct alternative weight matrices and to compute spatial autocorrelation statistics, namely the Moran and Geary coefficients, using Arc/Info's data structure. As such, it is an addition to recent efforts in linking GIS with exploratory spatial data analysis. The software is interfaced with Arc/Info via the Arc Macro Language (AML) so that it can be run in the ARC environment. This allows the user to perform exploratory analysis within the GIS, which may provide insights for subsequent spatial analysis and modelling.
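For reference, a minimal sketch of the Geary coefficient the software computes, with a toy rook-contiguity lattice standing in for the Arc/Info coverage (values near 1 indicate no spatial autocorrelation):

import numpy as np

rng = np.random.default_rng(7)
z = rng.random(25)                       # attribute values on a 5x5 lattice
n = z.size
W = np.zeros((n, n))
for i in range(5):                       # rook-contiguity weight matrix
    for j in range(5):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < 5 and 0 <= j + dj < 5:
                W[i * 5 + j, (i + di) * 5 + (j + dj)] = 1.0

num = (n - 1) * (W * (z[:, None] - z[None, :]) ** 2).sum()
den = 2 * W.sum() * ((z - z.mean()) ** 2).sum()
print("Geary's C:", num / den)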

14.
Abstract.

A formal, yet practical, GeoRelational Data Model (GRDM) is presented for the logical database design phase of the development of spatial information systems. Geographic applications are viewed in the context of information systems development. The generic needs of modelling spatial data are analyzed; it is concluded that they are not served satisfactorily by existing data models, so specifications of modelling tools for spatial application design are given. GRDM provides a set of representational constructs (relations and layers for the logical schema; virtual layers, object classes and spatial constraints for the user views) on top of well-established models. It constitutes part of a full, easily automated application design methodology. Extensive examples demonstrate the relevance and ease of use of the platform-independent GRDM.

15.
Notes
The Journal of Geography, 2012, 111(6): 279–288
Abstract

The concept of correlation is becoming increasingly important to students of all ages as the use of electronic database technology becomes more common. Data maps offer a significant new format for secondary students to use, along with tables of values and scatter plots, as they learn and apply the correlation concept. Data maps are particularly effective in promoting the interdisciplinary treatment of important content by allowing students to interpret social or physical relationships within the geographic context in which they arise.

16.
ABSTRACT

This paper proposes a new classification method for spatial data that adjusts prior class probabilities according to local spatial patterns. First, the proposed method uses a classical statistical classifier to model the training data. Second, the prior class probabilities are estimated according to the local spatial pattern, and the classifier for each unseen object is adapted using the estimated prior probability. Finally, each unseen object is classified using its adapted classifier. Because the new method can be coupled with both generative and discriminative statistical classifiers, it generally performs more accurately than other methods across a variety of spatial datasets. Experimental results show that this method has a lower prediction error than statistical classifiers that take no spatial information into account. Moreover, in the experiments, the new method also outperforms spatial auto-logistic regression and Markov random field-based methods when an appropriate estimate of the local prior class distribution is used.
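A minimal sketch of the prior-adjustment step for a single unseen object (all probabilities here are made up; how the local prior is estimated from neighbors is the paper's contribution and is not reproduced):

import numpy as np

global_prior = np.array([0.7, 0.3])      # class priors in the training data
local_prior = np.array([0.3, 0.7])       # priors estimated from the object's neighbors
p_posterior = np.array([0.6, 0.4])       # classifier output for the unseen object

# Divide out the training prior, multiply in the local one, renormalize.
adjusted = p_posterior / global_prior * local_prior
adjusted /= adjusted.sum()
print("adapted posterior:", adjusted)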

17.
ABSTRACT

Spatial interpolation is a traditional geostatistical operation that aims at predicting the attribute values of unobserved locations given a sample of data defined on point supports. However, the continuity and heterogeneity underlying spatial data are too complex to be approximated by classic statistical models. Deep learning models, especially the idea of conditional generative adversarial networks (CGANs), provide a perspective for formalizing spatial interpolation as a conditional generative task. In this article, we design a novel deep learning architecture named conditional encoder-decoder generative adversarial neural networks (CEDGANs) for spatial interpolation, combining the encoder-decoder structure with adversarial learning to capture deep representations of sampled spatial data and their interactions with local structural patterns. A case study on elevations in China demonstrates the ability of our model to achieve interpolation results that outperform benchmark methods. Further experiments uncover the spatial knowledge learned in the model's hidden layers and test the potential to generalize our adversarial interpolation idea across domains. This work is an endeavor to investigate deep spatial knowledge using artificial intelligence. The proposed model can benefit practical scenarios and enlighten future research in various geographical applications related to spatial prediction.

18.

Accurately mapping a region’s ground water quality depends upon the spatial sampling strategies employed, including where and how often field data are collected. This study compares the relative values of three field sampling strategies for mapping a known migrating plume of volcanic ground water in Sierra Valley, California. The first strategy sampled wells once each year during 1957, 1972, and 1980 (n=63, 45, and 57, respectively) and portrayed spatial–temporal changes in ground water quality more clearly on maps than did two alternative sampling strategies. One of these alternatives, Strategy 2, sampled one well per township per year during 1957, 1972, and 1980 (n=11) and did not detect the migrating plume, despite being a recommended strategy. The other alternative, Strategy 3, frequently sampled in time a small, fixed group of indicator wells (n=13) every four years for the same period, again producing maps with little correlation to the original pattern detected by Strategy 1.

19.
Abstract

GIS is a technology which is ideally suited to analysis of the market values of properties, since such values are based upon spatial comparisons as well as individual property attributes. Great Britain now has a new mechanism of local taxation, the council tax, which is based upon the capital values of properties. Central to the implementation of this tax has been the potentially controversial assignment of properties to valuation ‘bands’. This paper posits that a geographical model embedded within a GIS provides an alternative means of devising credible capital values, and anticipates some of the prospects for the use of GIS in local revenue-raising.

20.
Abstract

In previous work, a relational data structure aimed at the exchange of spatial data between systems was developed. As this data structure was relational, it was in first normal form, but compliance with the higher normal forms was not investigated. Recently, a new procedural method for composing fully normalized data structures from the basic data fields has been developed by H. C. Smith, as an alternative to the process of non-loss decomposition, which is difficult to understand. Smith's method has been applied to the data fields required to store points, lines and polygons in a chain-node spatial data model. When geographic domain, coverage layer and map are also considered, the procedure naturally leads to a catalogue model, needed for the exchange of spatial data. Although the method produces a fully normalized data structure, it is not easy to identify which normal forms are responsible for the ultimate arrangement of the data fields into relations; the benefits of these criteria for database development nevertheless also apply to spatial data structures and related ancillary data.
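A minimal sketch of chain-node relations in fully normalized form (the field names are illustrative, not the paper's exact schema); each class below corresponds to one relation in which every non-key field depends on the whole key:

from dataclasses import dataclass

@dataclass
class Node:                  # relation NODE(node_id, x, y)
    node_id: int
    x: float
    y: float

@dataclass
class Chain:                 # a chain runs between two nodes and
    chain_id: int            # separates two polygons
    from_node: int
    to_node: int
    left_polygon: int
    right_polygon: int

@dataclass
class Polygon:               # links each polygon to its coverage layer,
    polygon_id: int          # which in turn feeds the exchange catalogue
    coverage_layer: str

nodes = [Node(1, 0.0, 0.0), Node(2, 1.0, 0.0)]
chains = [Chain(10, 1, 2, left_polygon=100, right_polygon=101)]
polygons = [Polygon(100, "soils"), Polygon(101, "soils")]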
