Similar Documents
20 similar documents found (search time: 31 ms)
1.
ABSTRACT

The abstract classification system Nature in Norway (NiN) has detailed ecological definitions of a large number of ecosystem units, but its applicability in practical vegetation mapping is unknown because it was not designed with a specific mapping method in mind. To investigate this further, two methods for mapping – 3D aerial photographic interpretation of colour infrared photos and field survey – were used to map comparable neighbouring sites of 1 km2 in Hvaler Municipality, south-eastern Norway. The classification accuracy of each method was evaluated using a consensus classification of 160 randomly distributed plots within the study sites. The results showed an overall classification accuracy of 62.5% for 3D aerial photographic interpretation and 82.5% for field survey. However, the accuracy varied among the ecosystem units mapped. The classification accuracy of ecosystem units in acidic, dry and open terrain was similar for both methods, whereas the classification accuracy of calcareous units was highest using field survey. Mapping with 3D aerial photographic interpretation progressed more than twice as fast as field survey. Based on the results, the authors recommend a method combining 3D aerial photographic interpretation and field survey to achieve accurate and efficient mapping in practical applications of the NiN system.
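The overall classification accuracy reported above can be sketched as the share of reference plots whose mapped unit matches the consensus unit. The plot labels below are invented for illustration; only the accuracy formula reflects the abstract.

```python
# Illustrative only: overall classification accuracy against consensus
# reference plots. Unit codes below are made up for the example.
def overall_accuracy(mapped, reference):
    """Fraction of plots where the mapped unit matches the consensus unit."""
    assert len(mapped) == len(reference)
    correct = sum(m == r for m, r in zip(mapped, reference))
    return correct / len(mapped)

reference = ["T4", "T4", "V1", "T2", "T4", "V1", "T2", "T2"]
field     = ["T4", "T4", "V1", "T2", "T4", "V1", "T2", "V1"]
photo     = ["T4", "T2", "V1", "T2", "V1", "V1", "T2", "V1"]

print(overall_accuracy(field, reference))  # 0.875
print(overall_accuracy(photo, reference))  # 0.625
```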

2.
We examined the impact of temporal dependence between patterns of error in classified time-series imagery through a simulation modeling approach. This research extended the land-cover-change simulation model we previously developed to investigate: (1) the assumption of temporal independence between patterns of error in classified time-series imagery; and (2) the interaction of patterns of change and patterns of error in a post-classification change analysis. In this research, the thematic complexity of the classified land-cover maps was increased by increasing the number of simulated land-cover classes. Simulating maps with increased categorical resolution permitted the incorporation of: (1) higher-order, more complex spatial and temporal interactions between land-cover classes; and (2) patterns of error that better reproduce the complex error interactions that often occur in time-series classified imagery. The overall modeling framework was divided into two primary components: (1) generation of a map representing true change; and (2) generation of a suite of change maps that had been perturbed by specific patterns of error. All component maps in the model were produced using simulated annealing, which enabled us to create a series of map realizations with user-defined spatial and temporal patterns. Comparing the true map of change to the error-perturbed maps of change using accuracy assessment statistics showed that increasing the temporal dependence between classification errors did not improve the accuracy of resulting maps of change when the categorical scale of the land-cover classified maps was increased. The increased structural complexity within the time series of maps effectively inhibited the impact of temporal dependence. However, results demonstrated that there are interactions between patterns of error and patterns of change in a post-classification change analysis. These interactions played a major role in determining the accuracy associated with the maps of change.
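The core post-classification step the study perturbs can be sketched minimally (not the authors' code): two classified maps are compared cell by cell to derive a change map, so a classification error at either date propagates into the change result.

```python
# Sketch: post-classification change detection. A cell is "change" when
# its class differs between the two dates; errors in either
# classification propagate into the change map. Toy maps are invented.
def change_map(map_t1, map_t2):
    return [[a != b for a, b in zip(r1, r2)] for r1, r2 in zip(map_t1, map_t2)]

def change_accuracy(true_change, observed_change):
    cells = [(t, o) for trow, orow in zip(true_change, observed_change)
             for t, o in zip(trow, orow)]
    return sum(t == o for t, o in cells) / len(cells)

true_t1 = [[1, 1], [2, 2]]
true_t2 = [[1, 3], [2, 2]]            # one cell truly changed
noisy_t2 = [[1, 3], [3, 2]]           # one classification error adds a false change
truth = change_map(true_t1, true_t2)
observed = change_map(true_t1, noisy_t2)
print(change_accuracy(truth, observed))  # 0.75
```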

3.
With the increase in the number of applications using digital vector maps and the development of surveying techniques, a large volume of GIS (geographic information system) vector maps with high accuracy and precision is being produced. However, to achieve effective transmission while preserving their high positional quality, these large amounts of vector map data need to be compressed. This paper presents a compression method based on a bin space partitioning data structure, which preserves a high level of accuracy and exact precision of spatial data. To achieve this, the proposed method first divides a map into rectangular local regions and classifies the bits of each object in the local regions into three types of bins, defined as category bin (CB), direction bin (DB), and accuracy bin (AB). Then, it encodes objects progressively using the properties of the classified bins, such as adjacency and orientation, to obtain the optimum compression ratio. Experimental results verify that our method can encode vector map data to less than 20% of the original map data at 1-cm accuracy and to less than 9% at 1-m accuracy. In addition, its compression efficiency is greater than that of previous methods, while its complexity is low enough for near-real-time applications.
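The bin-partitioning scheme itself is specific to the paper, but the general principle it exploits can be illustrated with a much simpler sketch (not the paper's algorithm): quantizing coordinates to a chosen accuracy and storing small integer deltas between consecutive vertices compresses far better than raw doubles while bounding positional error.

```python
# Not the paper's algorithm: a minimal illustration of precision-bounded
# vector compression. Coordinates are quantized to a chosen accuracy and
# stored as integer deltas from the previous vertex.
def encode(coords, accuracy=0.01):  # accuracy in map units (1 cm here)
    q = [(round(x / accuracy), round(y / accuracy)) for x, y in coords]
    deltas = [q[0]] + [(x2 - x1, y2 - y1)
                       for (x1, y1), (x2, y2) in zip(q, q[1:])]
    return deltas

def decode(deltas, accuracy=0.01):
    xs, ys = deltas[0]
    out = [(xs * accuracy, ys * accuracy)]
    for dx, dy in deltas[1:]:
        xs, ys = xs + dx, ys + dy
        out.append((xs * accuracy, ys * accuracy))
    return out

line = [(1000.00, 2000.00), (1000.05, 2000.02), (1000.11, 1999.98)]
restored = decode(encode(line))
# positional error never exceeds half the chosen accuracy
print(max(abs(a - b) for p, q_ in zip(line, restored) for a, b in zip(p, q_)))
```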

4.
白燕, 廖顺宝, 孙九林. Acta Geographica Sinica (《地理学报》), 2011, 66(5): 709-717
The 1:250,000 land-cover vector data of Sichuan Province for 2005 were rasterized in ArcGIS at 16 scales from 600 m to 30 km using the common Rule of Maximum Area (RMA) method. Two methods for evaluating attribute accuracy loss — the conventional analysis method and a new method based on grid cells — were applied to compare how the attribute (here, area) accuracy loss of RMA rasterization varies with scale. The results show that: (1) at the same scale, the average attribute accuracy loss of the study area estimated by the grid-cell-based method is larger than that obtained by the conventional method, and the difference between the two is pronounced between 1 km and 10 km; when the grid cell exceeds 10 km, the difference stabilizes and the two curves become nearly parallel with scale; (2) the grid-cell-based method not only quantifies the attribute accuracy loss of RMA rasterization accurately but also objectively reflects its spatial distribution; (3) the suitable scale for RMA rasterization of the 1:250,000 land-cover data of Sichuan Province should not exceed 800 m, within which the data volume is manageable and the attribute accuracy loss of RMA rasterization is below 2.5%.

5.
ABSTRACT

Cellular automata (CA) models are in growing use for land-use change simulation and future scenario prediction. It is necessary to conduct model assessment that reports the quality of simulation results and how well the models reproduce reliable spatial patterns. Here, we review 347 CA articles published during 1999–2018 identified by a Google Scholar search using ‘cellular automata’, ‘land’ and ‘urban’ as keywords. Our review demonstrates that, during the past two decades, 89% of the publications include model assessment related to dataset, procedure and result using more than ten different methods. Among all methods, cell-by-cell comparison and landscape analysis were most frequently applied in CA model assessment; specifically, overall accuracy and the standard Kappa coefficient rank first and second among all metrics. End-state assessment is often criticized by modelers because it cannot adequately reflect the modeling ability of CA models. We provide five suggestions for method selection, aiming to offer a background framework for future method choices and urging attention to the assessment of input data and error propagation, procedure, quantitative and spatial change, and the impact of driving factors.
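The two metrics the review ranks first and second can be computed from a confusion matrix. A hedged sketch (the matrix below is invented):

```python
# Overall accuracy and the standard Kappa coefficient from a confusion
# matrix (rows = simulated class, columns = observed class).
def overall_and_kappa(matrix):
    n = sum(sum(row) for row in matrix)
    po = sum(matrix[i][i] for i in range(len(matrix))) / n   # observed agreement
    pe = sum(sum(matrix[i]) * sum(r[i] for r in matrix)      # chance agreement
             for i in range(len(matrix))) / n ** 2
    return po, (po - pe) / (1 - pe)

m = [[40, 5], [10, 45]]
oa, kappa = overall_and_kappa(m)
print(round(oa, 3), round(kappa, 3))  # 0.85 0.7
```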

6.
Volume 111 Index     
《The Journal of geography》2012,111(6):264-265
Abstract

Concepts related to alternative map projections can be difficult to explain to students given the diversity and complexity of available projections. Students frequently have trouble understanding how distortions caused by the choice of a projection can affect map readability and comprehension. Programs available for personal computers now provide geography and cartography instructors with a method for interactively educating students concerning the distortions associated with alternative map projections. Such software can be incorporated into laboratory assignments in introductory geography courses or in more advanced courses that deal with map design or thematic cartography.

7.
ABSTRACT

This paper proposes a new classification method for spatial data that adjusts prior class probabilities according to local spatial patterns. First, the proposed method uses a classical statistical classifier to model the training data. Second, the prior class probabilities are estimated according to the local spatial pattern, and the classifier for each unseen object is adapted using the estimated prior probability. Finally, each unseen object is classified using its adapted classifier. Because the new method can be coupled with both generative and discriminative statistical classifiers, it generally performs more accurately than other methods on a variety of spatial datasets. Experimental results show that this method has a lower prediction error than statistical classifiers that take no spatial information into account. Moreover, in the experiments, the new method also outperforms spatial auto-logistic regression and Markov random field-based methods when an appropriate estimate of the local prior class distribution is used.
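The general idea of adapting a classifier with locally estimated priors can be sketched with Bayes rescaling (an illustration of the principle, not the paper's exact estimator; all numbers are invented):

```python
# Illustrative sketch: rescale a classifier's posterior so that the
# training-set priors are replaced by priors estimated from the local
# spatial pattern, then renormalize.
def adjust_posteriors(posterior, train_prior, local_prior):
    scaled = {c: posterior[c] * local_prior[c] / train_prior[c]
              for c in posterior}
    z = sum(scaled.values())
    return {c: p / z for c, p in scaled.items()}

posterior   = {"urban": 0.6, "forest": 0.4}   # from a global classifier
train_prior = {"urban": 0.5, "forest": 0.5}
local_prior = {"urban": 0.2, "forest": 0.8}   # neighbourhood is mostly forest
adjusted = adjust_posteriors(posterior, train_prior, local_prior)
print(max(adjusted, key=adjusted.get))  # forest
```

The locally adapted prior flips the decision: the global classifier favoured "urban", but the mostly-forest neighbourhood shifts the adjusted posterior to "forest".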

8.
Experiments on rasterization methods for accumulated temperature data
廖顺宝, 李泽辉. Geographical Research (《地理研究》), 2004, 23(5): 633-640
Based on 1995 accumulated temperature data from more than 400 meteorological stations in China, the correlations of ≥0°C and ≥10°C accumulated temperature with longitude, latitude and elevation are r² = 0.9656 and r² = 0.9402, respectively. Taking the ≥0°C accumulated temperature data as an example, cluster analysis was used to divide the country into 7 accumulated-temperature calculation zones, a model was built for each zone, and the national accumulated temperature data were rasterized by a "regression equation + spatial residual" approach. Validation shows a linear correlation of r² = 0.9889 between calculated and observed values, a mean relative error of 3.56%, and relative errors within 5% at 86% of the validation stations.
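The "regression equation + spatial residual" scheme can be sketched as a trend fit plus an interpolated residual correction. The sketch below is illustrative only: it regresses on elevation alone for brevity (the paper also uses longitude and latitude), uses inverse-distance weighting for the residual surface, and the station data are invented.

```python
# Sketch of "regression + spatial residual": fit a trend, then add back
# station residuals interpolated by inverse-distance weighting so the
# surface honours the observations. Station records are invented.
import math

stations = [  # (lon, lat, elev_m, accumulated_temperature)
    (104.0, 30.7, 500.0, 5600.0),
    (103.0, 31.5, 1500.0, 4100.0),
    (105.5, 29.5, 300.0, 6000.0),
    (102.0, 33.0, 3200.0, 1800.0),
]

def fit_elev_trend(data):
    # least squares on elevation alone (for brevity)
    n = len(data)
    xs = [s[2] for s in data]; ys = [s[3] for s in data]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

def predict(lon, lat, elev, data):
    a, b = fit_elev_trend(data)
    trend = a + b * elev
    # inverse-distance-weighted residual correction
    w_sum = r_sum = 0.0
    for slon, slat, selev, y in data:
        w = 1.0 / (math.hypot(lon - slon, lat - slat) + 1e-9) ** 2
        w_sum += w
        r_sum += w * (y - (a + b * selev))
    return trend + r_sum / w_sum

# at a station location, the surface reproduces the observation closely
print(round(predict(104.0, 30.7, 500.0, stations), 1))
```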

9.
ABSTRACT

Choropleth mapping provides a simple but effective visual presentation of geographical data. Traditional choropleth mapping methods assume that data to be displayed are certain. This may not be true for many real-world problems. For example, attributes generated based on surveys may contain sampling and non-sampling error, and results generated using statistical inferences often come with a certain level of uncertainty. In recent years, several studies have incorporated uncertain geographical attributes into choropleth mapping with a primary focus on identifying the most homogeneous classes. However, no studies have yet accounted for the possibility that an areal unit might be placed in a wrong class due to data uncertainty. This paper addresses this issue by proposing a robustness measure and incorporating it into the optimal design of choropleth maps. In particular, this study proposes a discretization method to solve the new optimization problem along with a novel theoretical bound to evaluate solution quality. The new approach is applied to map the American Community Survey data. Test results suggest a tradeoff between within-class homogeneity and robustness. The study provides an important perspective on addressing data uncertainty in choropleth map design and offers a new approach for spatial analysts and decision-makers to incorporate robustness into the mapmaking process.

10.
In the field of digital terrain analysis (DTA), the principle and method of uncertainty in surface area calculation (SAC) have not been deeply developed and need to be further studied. This paper considers the uncertainty of data sources from the digital elevation model (DEM) and SAC in DTA to perform the following investigations: (a) truncation error (TE) modeling and analysis, and (b) modeling and analysis of SAC propagation error (PE) by using Monte Carlo simulation techniques and spatially autocorrelated error to simulate DEM uncertainty. The simulation experiments show that (a) without the introduction of DEM error, higher DEM resolution and lower terrain complexity lead to smaller TE and absolute error (AE); (b) with the introduction of DEM error, the DEM resolution and terrain complexity influence the AE and standard deviation (SD) of the SAC, but the trends by which the two values change may not be consistent; and (c) the spatial distribution of the introduced random error determines the size and degree of the deviation between the calculated result and the true value of the surface area. This study provides insights regarding the principle and method of uncertainty in SACs in geographic information science (GIScience) and provides guidance to quantify SAC uncertainty.
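The Monte Carlo propagation-error setup can be sketched as follows (a hedged illustration: Gaussian noise is added to a toy DEM many times and the spread of recomputed areas is examined; the spatial autocorrelation of the error, which the paper models, is omitted here).

```python
# Monte Carlo sketch of surface-area uncertainty: perturb a DEM with
# Gaussian noise, recompute the surface area each time, and summarize
# the spread. DEM values and noise level are invented.
import math
import random

def surface_area(dem, cell=1.0):
    # triangulate each grid cell into two triangles and sum their areas
    def tri(p, q, r):
        ux, uy, uz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        vx, vy, vz = r[0] - p[0], r[1] - p[1], r[2] - p[2]
        cx = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
        return 0.5 * math.sqrt(sum(c * c for c in cx))
    area = 0.0
    for i in range(len(dem) - 1):
        for j in range(len(dem[0]) - 1):
            a = (j * cell, i * cell, dem[i][j])
            b = ((j + 1) * cell, i * cell, dem[i][j + 1])
            c = (j * cell, (i + 1) * cell, dem[i + 1][j])
            d = ((j + 1) * cell, (i + 1) * cell, dem[i + 1][j + 1])
            area += tri(a, b, c) + tri(b, d, c)
    return area

random.seed(0)
dem = [[0.0, 1.0], [1.0, 2.0]]      # a planar ramp: exact area = sqrt(3)
areas = []
for _ in range(200):
    noisy = [[z + random.gauss(0, 0.1) for z in row] for row in dem]
    areas.append(surface_area(noisy))
mean = sum(areas) / len(areas)
sd = math.sqrt(sum((a - mean) ** 2 for a in areas) / len(areas))
print(round(surface_area(dem), 3), round(mean, 3), round(sd, 4))
```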

11.
Evaluation methods for attribute accuracy loss in rasterization and their scale effect (in English)
Rasterization is a conversion process accompanied by information loss, including loss of features' shape, structure, position, attribute and so on. Two chief factors affect the estimation of attribute accuracy loss in rasterization: grid cell size and the evaluating method. That is, attribute accuracy loss in rasterization is closely related to grid cell size and is also influenced by the evaluating method, so it is important to analyze these two influencing factors together. Taking the 1:250,000 land cover data of Sichuan in 2005 as a case, and in view of the data volume and processing time for the study region, this study selects 16 spatial scales from 600 m to 30 km, applies the rasterizing method based on the Rule of Maximum Area (RMA) in ArcGIS, and uses two methods of evaluating attribute accuracy loss — the Normal Analysis Method (NAM) and a new Method Based on Grid Cells (MBGC) — to comparatively analyze the scale effect of attribute (here, area) accuracy loss at the 16 scales. The results show that: (1) At the same scale, the average area accuracy loss of the entire study region evaluated by MBGC is significantly larger than the one estimated using NAM, and this discrepancy is obvious in the range of 1 km to 10 km. When the grid cell is larger than 10 km, the average area accuracy losses calculated by the two evaluating methods are stable, and the two curves tend to become parallel. (2) MBGC can not only estimate the attribute accuracy loss of RMA rasterization accurately but also express the spatial distribution of the loss objectively. (3) The suitable scale for RMA rasterization of the 1:250,000 land cover data of Sichuan in 2005 is 800 m or finer, at which the data volume is favorable, the processing time is not too long, and the area accuracy loss is below 2.5%.
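The Rule of Maximum Area and the area loss it introduces can be sketched on a toy grid. As a simplification, a fine grid stands in for the vector polygons (so "maximum area" reduces to a majority vote over sub-cells); the class layout is invented.

```python
# Illustration of the Rule of Maximum Area (RMA): each coarse cell takes
# the class occupying the largest share of it, and per-class area before
# and after aggregation quantifies the attribute (area) accuracy loss.
from collections import Counter

def rma_aggregate(fine, factor):
    size = len(fine) // factor
    coarse = []
    for I in range(size):
        row = []
        for J in range(size):
            counts = Counter(fine[I * factor + i][J * factor + j]
                             for i in range(factor) for j in range(factor))
            row.append(counts.most_common(1)[0][0])
        coarse.append(row)
    return coarse

def class_area(grid, cls, cell_area):
    return sum(row.count(cls) for row in grid) * cell_area

fine = [
    ["A", "A", "B", "B"],
    ["A", "A", "B", "B"],
    ["A", "B", "B", "B"],
    ["A", "A", "B", "B"],
]
coarse = rma_aggregate(fine, 2)
print(coarse)                          # [['A', 'B'], ['A', 'B']]
a_fine = class_area(fine, "A", 1)      # 7 fine cells
a_coarse = class_area(coarse, "A", 4)  # 2 coarse cells of area 4 -> 8
print(abs(a_coarse - a_fine) / a_fine) # relative area loss for class A
```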

12.
Abstract

Error and uncertainty in spatial databases have gained considerable attention in recent years. The concern is that, as in other computer applications and, indeed, all analyses, poor quality input data will yield even worse output. Various methods for analysis of uncertainty have been developed, but none has been shown to be directly applicable to an actual geographical information system application in the area of natural resources. In spatial data on natural resources in general, and in soils data in particular, a major cause of error is the inclusion of unmapped units within areas delineated on the map as uniform. In this paper, two alternative algorithms for simulating inclusions in categorical natural resource maps are detailed. Their usefulness is shown by a simplified Monte Carlo testing to evaluate the accuracy of agricultural land valuation using land use and the soil information. Using two test areas it is possible to show that errors of as much as 6 per cent may result in the process of land valuation, with simulated valuations both above and below the actual values. Thus, although an actual monetary cost of the error term is estimated here, it is not found to be large.
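A bare-bones version of this Monte Carlo test can be sketched as follows (not the paper's algorithms: inclusions are placed purely at random here, and the soil classes and per-cell values are invented).

```python
# Monte Carlo sketch: a share of cells inside a "uniform" map unit is
# randomly flipped to an unmapped inclusion soil with a different value,
# and the resulting land-valuation error is recorded over many runs.
import random

VALUE = {"good_soil": 1000.0, "inclusion": 400.0}  # value per cell (invented)

def valuation(cells):
    return sum(VALUE[c] for c in cells)

random.seed(42)
n_cells, inclusion_rate = 500, 0.05
mapped = ["good_soil"] * n_cells       # the map says the unit is uniform
errors = []
for _ in range(300):
    actual = ["inclusion" if random.random() < inclusion_rate else "good_soil"
              for _ in range(n_cells)]
    errors.append((valuation(mapped) - valuation(actual)) / valuation(actual))
# mean overvaluation caused by ignoring inclusions, in per cent
print(round(100 * sum(errors) / len(errors), 1))
```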

13.
Landscape metrics have been widely used to characterize geographical patterns which are important for many geographical and ecological analyses. Cellular automata (CA) are attractive for simulating settlement development, landscape evolution, urban dynamics, and land-use changes. Although various methods have been developed to calibrate CA, landscape metrics have not been explicitly used to ensure the simulated pattern best fits the actual one. This article presents a pattern-calibrated method, based on a number of landscape metrics, for implementing CA by using genetic algorithms (GAs). A pattern-calibrated GA–CA is proposed by incorporating percentage of landscape (PLAND), largest patch index (LPI), and landscape division (D) into the fitness function of the GA. The sensitivity analysis allows users to explore various combinations of weights and examine their effects. The comparison between Logistic-CA, Cell-calibrated GA–CA, and Pattern-calibrated GA–CA indicates that the last method yields the best results for calibrating CA, according to both the training and validation data. For example, Logistic-CA has an average simulation error of 27.7%, but Pattern-calibrated GA–CA (the proposed method) reduces this error to only 7.2% using the training data set in 2003. The validation is further carried out using new validation data in 2008 and additional landscape metrics (e.g., landscape shape index, edge density, and aggregation index) which were not incorporated for calibrating the CA models. The comparison shows that this pattern-calibrated CA performs better than the other two conventional models.
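A pattern-based fitness term in the spirit of this approach can be sketched as follows (the metric subset, weights, and toy maps are illustrative, not the paper's): PLAND and the largest patch index (LPI) of a simulated map are compared with those of the observed map, and closer patterns score higher.

```python
# Sketch of a landscape-metric fitness term: compare PLAND and LPI of a
# simulated map against the observed map; smaller metric gaps -> higher
# (less negative) fitness. Maps and weights are invented.
def pland(grid, cls):
    total = sum(len(r) for r in grid)
    return 100.0 * sum(r.count(cls) for r in grid) / total

def lpi(grid, cls):
    # largest 4-connected patch of cls, as a percentage of landscape area
    rows, cols = len(grid), len(grid[0])
    seen, best = set(), 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] != cls or (i, j) in seen:
                continue
            stack, size = [(i, j)], 0
            seen.add((i, j))
            while stack:
                y, x = stack.pop()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and \
                       grid[ny][nx] == cls and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            best = max(best, size)
    return 100.0 * best / (rows * cols)

def pattern_fitness(sim, obs, cls="u", w=(0.5, 0.5)):
    return -(w[0] * abs(pland(sim, cls) - pland(obs, cls)) +
             w[1] * abs(lpi(sim, cls) - lpi(obs, cls)))

obs  = [["u", "u", "."], ["u", ".", "."], [".", ".", "."]]
good = [["u", "u", "."], [".", "u", "."], [".", ".", "."]]  # same pattern
bad  = [["u", ".", "u"], [".", ".", "."], ["u", ".", "."]]  # fragmented
print(pattern_fitness(good, obs) > pattern_fitness(bad, obs))  # True
```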

14.
Abstract

Accurate quantification of gully shoulder lines (gully borderlines) will help better understand gully formation and evolution. Surveying and mapping are the most important ways to obtain precise morphology. To evaluate the influences of different survey step lengths and of curve-fitting methods on the morphology of the shoulder line as characterized by fractal dimensions, 13 shoulder lines at gully heads were surveyed using a total station and then mapped with different methods of curve fitting, and the fractal dimensions calculated from the maps were compared with those measured in the field. Fractal dimensions by field measurement ranged from 1.185 to 1.456. Compared with field measurements, the average absolute errors of the polygonal line, quadratic B-spline, and arc-fitting methods are 0.045, 0.040, and 0.046, respectively; the average relative errors are 3.48, 3.13, and 3.59%. Therefore, the quadratic B-spline method has a higher accuracy. The standard error of the fractal dimension tends to be larger as the average step length increases. The error is ~5% when the step length is 0.7 m, which is advisable for field surveying. This study will help promote the efficiency of field surveying and mapping, and thus the accuracy and credibility of gully morphology.
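The abstract does not say how the fractal dimensions were estimated; one standard estimator for a digitized line is box counting, sketched below on a synthetic polyline (a straight segment, whose dimension should come out close to 1).

```python
# Box-counting estimate of fractal dimension: count occupied grid boxes
# at several box sizes and fit the slope of log(N) vs log(1/size).
import math

def box_count_dim(points, sizes):
    logs = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    return sum((a - mx) * (b - my) for a, b in logs) / \
           sum((a - mx) ** 2 for a, _ in logs)

# densely sampled straight segment: dimension should be close to 1
line = [(i * 0.001, i * 0.001) for i in range(10000)]
d = box_count_dim(line, [0.5, 0.25, 0.125, 0.0625])
print(round(d, 2))  # 1.0
```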

15.
16.
Abstract

Crucial aspects of the referent phenomena which provide the subject matter for cartographic symbolism are often overlooked by map interpreters in their haste to “read maps at a glance.” The blame rests largely on those practitioners of the environmental sciences who stress the simple, intuitive nature of maps, while effectively ignoring the true complexity of the cartographic communication process. Those persons who are responsible for traditional map interpretation training programs are also at fault for implying, through long emphasis, that symbol identification, position location, and navigation constitute the essence of map analysis. The intuitive acceptability of map symbols in spite of their abstract character is also a deceptive factor. The fact that there is far “more than meets the eye” to map interpretation is easily demonstrated by looking closely at several basic cartographic symbols. In order to become an effective map user the environmental scientist apparently must go well beyond the mastery of conventional map reading principles and learn to deal with diverse informational dimensions in the context of the map use purpose and the physical/cultural make-up of the geographical region under study.

17.
钱翌, 于洪, 王灵. Arid Land Geography (《干旱区地理》), 2013, 36(2): 303-310
Using a geographic information system (GIS) and geostatistical methods, the spatial variability of heavy metals (Hg, Cu, Zn, Pb, Ni, Cd and Cr) in farmland soils of Midong District, Urumqi was analyzed. The results show that although Cu, Zn, Pb and Hg exceed the soil background values, the mean contents of all seven heavy metals are below the national Grade II environmental quality standard (GB15618-1995). All seven metals exhibit well-defined spatial variability structures that can be fitted with exponential, spherical and Gaussian models, with nugget effects of varying degree. Cd and Cr show strong spatial autocorrelation, while Pb, Ni, Cu, Zn and Hg show moderate spatial correlation, indicating that their contents are considerably affected by external pollution sources. Spatial distribution maps of the seven metals were produced by ordinary kriging interpolation; except for Cd, whose spatial pattern is indistinct, the other six metals show significant spatial distribution patterns.
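The geostatistical grading behind such results can be sketched with an empirical semivariogram: the nugget-to-sill ratio is commonly used to grade spatial autocorrelation (one widespread convention: below 25% strong, 25–75% moderate, above 75% weak). The data below are synthetic; the lag-based estimator is a crude stand-in for fitted spherical/exponential/Gaussian models.

```python
# Empirical semivariogram sketch: semivariance at several lags, with the
# nugget/sill ratio used to grade spatial autocorrelation strength.
import math
import random

def semivariance(samples, lag, tol):
    pairs = [(a, b) for i, a in enumerate(samples) for b in samples[i + 1:]
             if abs(math.dist(a[0], b[0]) - lag) <= tol]
    if not pairs:
        return None
    return sum((a[1] - b[1]) ** 2 for a, b in pairs) / (2 * len(pairs))

random.seed(1)
# a smooth spatial field -> low nugget relative to sill -> strong autocorrelation
samples = [((x, y), x + y + random.gauss(0, 0.05))
           for x in range(8) for y in range(8)]
gammas = [semivariance(samples, lag, 0.5) for lag in (1, 2, 3, 4)]
nugget, sill = gammas[0], gammas[-1]
ratio = 100 * nugget / sill
grade = "strong" if ratio < 25 else "moderate" if ratio <= 75 else "weak"
print(grade)  # strong
```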

18.
Abstract

The weighted Kappa coefficient is applied to the comparison of thematic maps. Weighted Kappa is a useful measure of accuracy when the map classes are ordered, or when the relative seriousness of the different possible errors may vary. The calculation and interpretation of weighted Kappa are demonstrated by two examples from forest surveys. First, the accuracy of thematic site quality maps classified according to an ordinal scale is assessed. Error matrices are derived from map overlays, and two different sets of agreement weights are used for the calculation. Weighted Kappa ranges from 0.34 to 0.55, but it does not differ significantly between two separate areas. Secondly, weighted Kappa is calculated for a tree species cover classified according to a nominal scale. Weights reflecting the economic loss for the forest owner due to erroneous data are used for the computation. The value of weighted Kappa is 0.56.
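The weighted Kappa computation can be sketched as follows: agreement weights w[i][j] (1 on the diagonal, smaller off it) downweight the more serious misclassifications in both the observed and the chance agreement. The error matrix and weights below are invented for illustration, not taken from the surveys above.

```python
# Weighted Kappa from an error matrix and a matrix of agreement weights
# (rows = map classes, columns = reference classes; both invented).
def weighted_kappa(matrix, weights):
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    row_m = [sum(matrix[i]) / n for i in range(k)]
    col_m = [sum(matrix[i][j] for i in range(k)) / n for j in range(k)]
    po = sum(weights[i][j] * matrix[i][j] / n          # weighted agreement
             for i in range(k) for j in range(k))
    pe = sum(weights[i][j] * row_m[i] * col_m[j]       # weighted chance agreement
             for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

# ordinal site-quality classes: adjacent confusions are less serious
w = [[1.0, 0.5, 0.0],
     [0.5, 1.0, 0.5],
     [0.0, 0.5, 1.0]]
m = [[30, 8, 2],
     [6, 25, 7],
     [1, 5, 16]]
print(round(weighted_kappa(m, w), 2))  # 0.62
```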

19.
Abstract

A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.

20.
Abstract

Mapping forest soils using conventional methods is time consuming and expensive. An expert system is described and applied to the mapping of five forest soil-landscape units formed on a single granitoid parent material. Three thematic maps were considered important in influencing the distribution of soils. The first showed the distribution of nine classes of native eucalypt forests, and the second and third were derived from a digital elevation model and represented slope gradient and a soil wetness index combined with topographical position. These layers were input to a raster-based geographical information system (GIS) and then geometrically co-registered to a regular 30 m grid. From a knowledge of soil distributions, the relationships between the soil-landscape units and the three data layers were quantified by an experienced soil scientist and used as rules in a rule-based expert system. The thematic layers accessed from the GIS provided data for the expert system to infer the forest soil-landscape unit most likely to occur at any given pixel. The soil-landscape map output by the expert system compared favourably with a conventional soil-landscape map generated using interpretation of aerial photographs.
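A minimal sketch of such per-pixel rule-based inference over co-registered layers follows; the forest classes, slope thresholds, wetness classes, and soil-landscape unit names are all invented for the example, not taken from the study.

```python
# Rule-based inference sketch: each pixel's three layer values (forest
# class, slope %, wetness class) are matched against expert rules to
# assign a soil-landscape unit. All rules and names are hypothetical.
def infer_unit(forest_class, slope_pct, wetness):
    if wetness == "high" and slope_pct < 5:
        return "valley_gley"
    if forest_class in ("wet_eucalypt", "riparian") and slope_pct < 15:
        return "lower_slope_duplex"
    if slope_pct >= 25:
        return "steep_lithosol"
    if forest_class == "dry_eucalypt":
        return "upper_slope_gradational"
    return "mid_slope_duplex"

# one pixel per tuple: (forest class, slope %, wetness index class)
pixels = [("dry_eucalypt", 10, "low"),
          ("wet_eucalypt", 8, "moderate"),
          ("dry_eucalypt", 30, "low"),
          ("riparian", 2, "high")]
print([infer_unit(*p) for p in pixels])
```

The rule order matters: wetter, flatter conditions are tested first so that a high-wetness valley pixel is never captured by a forest-class rule.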
