Similar Documents
20 similar documents found; search took 979 ms.
1.
This entry discusses rules for simplifying the graphic outlines of areal features and methods for acquiring spatial knowledge about areas. Drawing on the characteristics of areal-feature outlines that turn at right angles, it focuses on a progressive method for simplifying such outlines.
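For illustration, here is a minimal Python sketch of one progressive simplification step for a right-angle outline. The collapse-shortest-edge rule and the sample coordinates are assumptions chosen for demonstration, not the method described in the abstract.

```python
# A minimal sketch (not the abstract's algorithm): collapse the shortest edge
# of a rectilinear outline and shift its two neighbours so every turn stays a
# right angle, then drop the duplicate/collinear vertices this creates.
import math

def collapse_shortest_edge(poly):
    """poly: list of (x, y) vertices of a closed rectilinear outline
    (axis-parallel edges, last vertex != first). Returns a new vertex list."""
    n = len(poly)
    lengths = [math.dist(poly[i], poly[(i + 1) % n]) for i in range(n)]
    i = min(range(n), key=lengths.__getitem__)       # shortest edge index
    v1, v2 = poly[i], poly[(i + 1) % n]
    pts = list(poly)
    idx = ((i - 1) % n, i, (i + 1) % n, (i + 2) % n)
    if v1[0] == v2[0]:                  # vertical edge: average the y of its
        ym = (v1[1] + v2[1]) / 2.0      # two horizontal neighbour edges
        for j in idx:
            pts[j] = (pts[j][0], ym)
    else:                               # horizontal edge: average the x
        xm = (v1[0] + v2[0]) / 2.0
        for j in idx:
            pts[j] = (xm, pts[j][1])
    # Remove duplicate vertices, then vertices collinear with both neighbours.
    out = []
    for p in pts:
        if not out or p != out[-1]:
            out.append(p)
    cleaned = []
    for k, p in enumerate(out):
        a, b = out[k - 1], out[(k + 1) % len(out)]
        if not ((a[0] == p[0] == b[0]) or (a[1] == p[1] == b[1])):
            cleaned.append(p)
    return cleaned

outline = [(0, 0), (6, 0), (6, 1), (8, 1), (8, 4), (0, 4)]  # one small step
print(collapse_shortest_edge(outline))  # step removed, corners stay square
```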

2.
Progressive transmission of vector river-network data is the inverse process of map generalization, and curve simplification methods that start from geometric features alone ignore the morphological characteristics of the curves. Accordingly, taking the curve axis as a reference, river curve data are extracted hierarchically according to the bends of the curve, and the extracted data are organized in layers to build a multi-scale representation model of the river network that combines an object layer with geometric-detail layers. A progressive transmission system for vector river-network data was developed on this model, verifying the effectiveness of the method.
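As a rough illustration of the layered model, the sketch below streams a polyline coarse to fine; the fixed-stride layering is only a stand-in assumption for the bend-based hierarchical extraction the abstract describes.

```python
# A minimal sketch of progressive transmission: a coarse base layer is sent
# first, then detail layers that re-insert vertices between known ones.
def build_layers(line, levels=(8, 4, 2, 1)):
    """Layer 0 keeps every 8th vertex; each later layer adds only the
    vertices that a finer sampling reveals (a stand-in for bend extraction)."""
    sent, layers = set(), []
    for step in levels:
        idx = set(range(0, len(line), step)) | {len(line) - 1}
        layers.append(sorted(idx - sent))   # only the new vertex indices
        sent |= idx
    return layers

def transmit(line, layers):
    """Yield successively refined polylines, as a client would see them."""
    have = []
    for layer in layers:
        have = sorted(set(have) | set(layer))
        yield [line[i] for i in have]

river = [(x, (x % 3) * 0.5) for x in range(25)]   # toy zig-zag 'river'
for approx in transmit(river, build_layers(river)):
    print(len(approx), "vertices")
```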

3.
Applications of computational geometry in map generalization   (Cited: 3; self-citations: 1; by others: 2)
应申  李霖  王明常  翟亮 《测绘科学》2005,30(3):64-66
Map generalization is an application problem of computational geometry. Computational geometry supplies effective basic geometric rules and algorithms, providing a powerful basis for describing and analyzing the complex spatial relations among map objects. The article discusses the conditions and requirements of map generalization, highlighting three key problems: the imperceptibility caused by scale reduction, and the selection, object crowding, and intersections introduced by simplification and symbolization that it triggers. Building on computational-geometry data structures such as the Voronoi diagram and the Delaunay triangulation, methods for object selection, object clustering, and consistent simplification are briefly explored.
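A small sketch of one way a Delaunay structure supports object clustering, using scipy; the short-edge union-find grouping is an illustrative choice, not the authors' procedure.

```python
# A minimal sketch of Delaunay-based clustering: map objects joined by short
# Delaunay edges are grouped, a simple proxy for detecting object crowding.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(points, max_edge):
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    parent = list(range(len(pts)))          # union-find over short edges
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    for simplex in tri.simplices:           # each triangle contributes 3 edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = simplex[a], simplex[b]
            if np.linalg.norm(pts[i] - pts[j]) <= max_edge:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(pts)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pts = [(0, 0), (1, 0), (0.5, 1), (10, 10), (11, 10)]
print(delaunay_clusters(pts, max_edge=2.0))  # -> two clusters
```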

4.
The article is composed of two sections. In the first, the authors describe the application of minimum line dimensions that depend on line shape, width, and the operational scale of the map. The proposed solutions are based on Euclidean metric space, to which the minimum dimensions of Saliszczew's elementary triangle were adapted. (The elementary triangle is a model standard triangle of least dimensions that secures the recognizability of a line; its dimensions depend on the scale of the map and the width of the line representing it. In the simplification process, triangles whose sides lie on an arbitrary line, with bases completing those sides, are compared against the lengths of the shorter side and the base of the elementary triangle.) The second section describes an application of minimum line dimensions for verifying and assessing generalized data. The authors also propose a method for determining drawing line resolution to evaluate the accuracy of algorithmic simplification. Using the proposed method, well-known simplification algorithms were compared on the basis of qualitative and quantitative evaluation. Moreover, consistent with these methods of assessing the accuracy of simplified data, the authors extend the solutions to the rejected data, a procedure that allows the identification of map areas where graphic conflicts occur.
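A minimal sketch of the elementary-triangle test, assuming illustrative minimum dimensions (0.4 mm base, 0.3 mm side) rather than Saliszczew's published values.

```python
# A vertex is removable when the triangle it forms with its neighbours is
# smaller than the minimum recognizable triangle at the target scale.
import math

def too_small(a, b, c, scale, base_mm=0.4, side_mm=0.3):
    """a, b, c: ground coordinates (metres) of three consecutive vertices.
    The 'base' is the segment a-c closing the triangle; the sides are a-b, b-c.
    base_mm/side_mm are assumed thresholds, not the paper's exact values."""
    mm = scale / 1000.0                     # ground metres per map millimetre
    base = math.dist(a, c)
    side = min(math.dist(a, b), math.dist(b, c))
    return base < base_mm * mm and side < side_mm * mm

def simplify(line, scale):
    pts = list(line)
    changed = True
    while changed and len(pts) > 2:
        changed = False
        for i in range(1, len(pts) - 1):
            if too_small(pts[i - 1], pts[i], pts[i + 1], scale):
                del pts[i]                  # drop the unrecognizable detail
                changed = True
                break
    return pts

line = [(0, 0), (3, 2), (6, 0), (60, 1), (120, 0)]
print(simplify(line, scale=25_000))         # small wiggle removed at 1:25 000
```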

5.
Two basic strategies have been identified for handling multiple representations of line features in digital databases: either a finite number of scale-dependent representations of cartographic lines is explicitly stored, or a scale-independent database is generated from which representations at any subsequent scale can be extracted as needed. These two strategies parallel the difference between hierarchical and non-hierarchical line simplification operators. Non-hierarchical line simplification can produce the most geometrically accurate simplification at any scale, while hierarchical operators are often associated with scale-independent databases. In this research, hierarchical and non-hierarchical line simplification operators are evaluated by comparing both the points retained by the different algorithms and the overall quality of the graphic portrayal for sample lines of different complexity and at different scales. Visual inspection of the results did not reveal any discernible difference at any scale for any line. Subsequent numerical analyses show some differences, but overall little geometric quality is lost by using a hierarchical operator rather than a non-hierarchical one; given the greater flexibility of scale representation that hierarchical methods allow, they appear to be more satisfactory.
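A sketch of the hierarchical, scale-independent strategy using Douglas-Peucker importance values (a BLG-tree-like scheme); this is a generic illustration, not one of the operators evaluated in the study.

```python
# Run Douglas-Peucker once, record each vertex's 'importance' (its offset when
# chosen as a split point), then extract any scale later by thresholding,
# with no re-simplification needed.
import math

def point_line_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.dist(p, a)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def dp_importance(line):
    """Return one importance value per vertex (endpoints = infinity)."""
    imp = [0.0] * len(line)
    imp[0] = imp[-1] = float("inf")
    def recurse(lo, hi):
        if hi - lo < 2:
            return
        k, dmax = lo + 1, -1.0
        for i in range(lo + 1, hi):
            d = point_line_dist(line[i], line[lo], line[hi])
            if d > dmax:
                k, dmax = i, d
        imp[k] = dmax
        recurse(lo, k)
        recurse(k, hi)
    recurse(0, len(line) - 1)
    return imp

def extract(line, imp, tolerance):
    return [p for p, w in zip(line, imp) if w >= tolerance]

line = [(0, 0), (1, 0.1), (2, 3), (3, 0.2), (4, 0)]
imp = dp_importance(line)                   # stored once in the database
print(extract(line, imp, tolerance=1.0))    # a coarse-scale representation
print(extract(line, imp, tolerance=0.05))   # a fine-scale representation
```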

6.
Successful implementation of an intelligent system for automated map generalization requires formalization of cartographic principles that are, in many cases, only intuitively understood. Formalizing these principles requires acquisition and re-expression in the form of semantic nets, frames, production rules, or similar formalization methods. The various techniques for cartographic knowledge acquisition have been discussed on a theoretical basis; however, little empirical research has been conducted. This paper reports on empirical acquisition of cartographic knowledge by reverse engineering; that is, on trying to recapitulate decisions made on published documents or maps. The work is based on a computer-assisted multi-scale inventory of the Austrian National Topographic Map Series. Queries of the relational database, within which inventory data are stored, lead to the formulation of prototype production rules for modifying map symbols during automatic scale changes. Components of map generalization expressed in such rules include the selection behavior of settlement, transportation, and hydrographic objects, and the degree of simplification of settlement domains and building clusters. The acquired cartographic knowledge reveals quantitative relations between map elements and the changes in these relations that occur with scale transition. These insights can guide subsequent knowledge refinement using other acquisition methods. This paper provides, in addition, a conceptual framework by which other topographic map series may be compared at multiple scales.
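A toy sketch of what such a production rule might look like once formalized; every class name and threshold below is invented for illustration, not taken from the Austrian inventory.

```python
# A minimal sketch of scale-change production rules: IF object class and
# target scale match a condition THEN select/omit/simplify the symbol.
RULES = [
    # (object class, largest denominator at which it is still selected, action beyond)
    ("settlement/hamlet",   50_000, "omit"),
    ("road/footpath",      100_000, "omit"),
    ("river/minor",        200_000, "collapse to single line"),
    ("building/cluster",    25_000, "aggregate into built-up area"),
]

def apply_rules(obj_class, target_denominator):
    for cls, max_denom, action in RULES:
        if obj_class == cls and target_denominator > max_denom:
            return action
    return "select"                          # default: keep the symbol

print(apply_rules("settlement/hamlet", 100_000))  # -> omit
print(apply_rules("river/minor", 100_000))        # -> select
```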

7.
A scale-independent database that allows derived maps to be dynamically updated from a centrally maintained data source is an appealing alternative to traditional map revision techniques, which by today's standards are costly and inefficient. This paper presents a dynamic spatial updating model that supports automated updating of non-standard maps in a scale-independent database-centric map production environment. Maps derived from the database are not separate data sets, but rather active views of the database. Each derived map is displayed in a unique way by implementing cartographic operations at the map level. While the operations applied require user involvement for strategic cartographic decisions, and algorithmic initiation and control, the technique allows geographic data to be processed cartographically without affecting the geometric integrity of the database. Each time a derived map is opened it retrieves the spatial data (and updates) from the database and applies the unique cartographic representation methods that persist on the individual derived maps. Database updates are automatically triggered to cartographic products, as process-dependent updates, according to their individual product-specific behaviour. This paper investigates product-specific behaviour (product multiplicities) and the cartographic processing requirements to support dynamic spatial updating techniques in an object-oriented map publishing environment. These techniques are implemented in an off-the-shelf software environment using ArcGIS.

8.
This paper first analyzes the three-level meaning of scale in remote sensing imagery. Focusing on the pixel scale, it analyzes the pixel-scale effect and its fractal mechanism. Because existing fractal methods do not take the scale of the image itself (its spatial resolution) into account, changes in the pixel-scale effect are difficult to reflect when fractal dimensions are compared across scales. To address this, two improved, surface-area-based fractal methods are proposed: windowed fractional Brownian motion and the windowed double blanket. To verify their reliability, a series of supervised classifications at different pixel scales was carried out. The experiments show that the fractal dimension of every land-cover type exhibits an overall decreasing trend as the spatial resolution decreases (i.e., as the pixel scale coarsens), and that inflection points heralding particular land-cover structures appear at certain characteristic scales; these inflection points are informative for observing the land cover of the region. The accuracy of the supervised classification series also demonstrates, to some extent, the feasibility of the two improved fractal methods for analyzing scale effects. The approach therefore has theoretical value for analyzing pixel-scale effects in remote sensing and for exploring the rules by which land-cover features aggregate across scales.
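A sketch of the classic (unwindowed) double-blanket estimator that the proposed windowed variants build on; the synthetic surfaces are assumptions for demonstration.

```python
# Grow an upper and a lower blanket over the grey-level surface and read the
# fractal dimension from how the blanket surface area shrinks with radius.
import numpy as np

def blanket_dimension(img, max_eps=8):
    u = img.astype(float).copy()            # upper blanket
    b = img.astype(float).copy()            # lower blanket
    areas, epss = [], []
    for eps in range(1, max_eps + 1):
        # dilate/erode with the 4-neighbourhood (np.roll wraps at the edges)
        grow = np.maximum.reduce([np.roll(u, s, a) for a in (0, 1) for s in (1, -1)])
        u = np.maximum(u + 1, grow)
        shrink = np.minimum.reduce([np.roll(b, s, a) for a in (0, 1) for s in (1, -1)])
        b = np.minimum(b - 1, shrink)
        areas.append((u - b).sum() / (2 * eps))
        epss.append(eps)
    slope, _ = np.polyfit(np.log(epss), np.log(areas), 1)
    return 2.0 - slope                      # D = 2 - d(log A)/d(log eps)

rng = np.random.default_rng(0)
smooth = rng.normal(100, 1, (64, 64))       # near-flat surface -> D near 2
rough = rng.normal(100, 30, (64, 64))       # noisy surface -> larger D
print(blanket_dimension(smooth), blanket_dimension(rough))
```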

9.
Simultaneous curve simplification   (Cited: 1; self-citations: 0; by others: 1)
In this paper we present a method for simultaneous simplification of a collection of piecewise linear curves in the plane. The method is based on triangulations, and the main purpose is to remove line segments from the piecewise linear curves without changing the topological relations between the curves. The method can also be used to construct a multi-level representation of a collection of piecewise linear curves. We illustrate the method by simplifying cartographic contours and a set of piecewise linear curves representing a road network.
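A sketch of the same goal, topology-safe simultaneous simplification, implemented here with shapely predicates rather than the paper's triangulation.

```python
# Simplify each curve, then reject any result that changes the crossing
# relations between curve pairs, halving the tolerance until the relations
# survive. Per-curve simplify() cannot see other curves, hence the check.
from itertools import combinations
from shapely.geometry import LineString

def simplify_together(lines, tolerance):
    originals = [LineString(l) for l in lines]
    tol = tolerance
    while tol > 1e-9:
        simplified = [g.simplify(tol, preserve_topology=True) for g in originals]
        ok = all(
            simplified[i].crosses(simplified[j]) == originals[i].crosses(originals[j])
            for i, j in combinations(range(len(lines)), 2)
        )
        if ok:
            return simplified, tol
        tol /= 2                            # back off until relations survive
    return originals, 0.0

roads = [[(0, 0), (5, 0.4), (10, 0)], [(0, 1), (5, 0.6), (10, 1)]]
out, used = simplify_together(roads, tolerance=1.0)
print(used, [list(g.coords) for g in out])
```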

10.
Classification of high-resolution remote sensing imagery fusing pixel and multi-scale regional features   (Cited: 1; self-citations: 0; by others: 1)
刘纯  洪亮  陈杰  楚森森  邓敏 《遥感学报》2015,19(2):228-239
To counter the "salt-and-pepper" effect of pixel-based multi-feature classification of high-resolution remote sensing imagery and the "smoothed detail" effect of object-based image analysis, a classification algorithm fusing pixel features with multi-scale regional features is proposed. (1) The mean-shift algorithm is first applied to over-segment the original image, and the initial over-segmentation is then merged region by region across multiple scales to form a multi-scale segmentation; the optimal segmentation scale is determined from the change of the RMI index during region merging and from the influence of segmentation scale on classification accuracy. (2) Spectral features, the Pixel Shape Index (PSI), and regional features at the initial and optimal scales are fused and normalized, and classification is finally performed with a support vector machine (SVM). Experiments show that the algorithm effectively reduces the "salt-and-pepper" effect of pixel-based multi-feature classification while preserving the completeness and detail of ground objects, improving the accuracy for easily confused classes (such as shadow versus street, and bare land versus grassland).
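A sketch of the feature-fusion and SVM stage with scikit-learn; the segment labels stand in for mean-shift output, and the synthetic data and label rule are assumptions.

```python
# Per-pixel spectra are concatenated with the mean spectrum of the enclosing
# segment at two scales, normalized, and fed to an SVM, mirroring the fusion
# step the abstract describes (segmentation itself is assumed done).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def region_means(pixels, labels):
    """Replace each pixel's features with its segment's mean features."""
    out = np.empty_like(pixels, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = pixels[mask].mean(axis=0)
    return out

rng = np.random.default_rng(1)
pixels = rng.normal(size=(200, 4))          # 200 pixels, 4 spectral bands
seg_fine = rng.integers(0, 40, 200)         # initial (over-)segmentation
seg_opt = seg_fine // 4                     # a coarser, 'optimal-scale' merge
y = (pixels[:, 0] + region_means(pixels, seg_opt)[:, 1] > 0).astype(int)

fused = np.hstack([pixels,                            # pixel features
                   region_means(pixels, seg_fine),    # initial-scale region
                   region_means(pixels, seg_opt)])    # optimal-scale region
X = StandardScaler().fit_transform(fused)   # normalize the mixed feature types
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```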

11.
As quality evaluation of map generalization receives growing attention, it has become necessary to assess the quality of simplification algorithms for line features, which account for the largest share of map elements. Considering the characteristics of line simplification algorithms, the problems that tend to arise during simplification are summarized from both geometric and semantic perspectives. On the basis of an analysis of the constraints that line simplification should satisfy, evaluation indicators for assessing such algorithms are proposed, and three indicators, including the horizontal root-mean-square error, are applied to evaluate three line simplification algorithms. The experimental results demonstrate the soundness of the proposed evaluation method.
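A sketch of one plausible geometric indicator, a horizontal RMSE against the simplified line; the exact formula used in the paper may differ.

```python
# RMS of the distances from every original vertex to the simplified polyline,
# plus a length-change ratio as a second, cruder indicator.
import math
from shapely.geometry import LineString, Point

def horizontal_rmse(original, simplified):
    simp = LineString(simplified)
    sq = [simp.distance(Point(p)) ** 2 for p in original]
    return math.sqrt(sum(sq) / len(sq))

def length_change(original, simplified):
    return LineString(simplified).length / LineString(original).length

orig = [(0, 0), (1, 0.3), (2, -0.2), (3, 0.4), (4, 0)]
simp = [(0, 0), (2, 0), (4, 0)]
print("RMSE:", horizontal_rmse(orig, simp))
print("length ratio:", length_change(orig, simp))
```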

12.
Evaluating the color values and styles of map symbols is an important part of cartography competitions, and this paper proposes a rapid evaluation method to improve efficiency. First, data preprocessing and the production of a vector control file are carried out from the standard data and the corresponding map symbol specification. Next, corners are detected with an improved Harris corner detection algorithm, followed by feature-point matching and geometric correction. Finally, symbol color values are evaluated with buffer analysis, and symbol styles are evaluated using image-moment theory combined with the GrabCut algorithm. The technique has been applied to the rapid evaluation of map symbols in cartography competitions and has played an important role in vocational skills contests in the surveying, mapping, and geoinformation industry.
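A sketch of the two OpenCV building blocks named in the abstract (Harris corner detection and GrabCut) run on a synthetic image; the competition-specific matching, correction, and colour-buffer steps are omitted.

```python
import cv2
import numpy as np

img = np.full((120, 120, 3), 255, np.uint8)
cv2.rectangle(img, (30, 30), (90, 90), (0, 128, 0), -1)   # a 'map symbol'

# Harris corner detection on the grey image.
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
resp = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(resp > 0.01 * resp.max())
print("corner pixels found:", len(corners))

# GrabCut initialized from a rectangle around the symbol.
mask = np.zeros(img.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, (20, 20, 80, 80), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
symbol = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
print("segmented symbol pixels:", int(symbol.sum()))
```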

13.
Combinatorial representation of spatial relations between objects on 2D vector maps   (Cited: 9; self-citations: 2; by others: 7)
郭庆胜 《测绘学报》2000,29(2):155-161
Planar spatial objects in vector-mode maps are common in map data processing, and describing and handling their spatial relations is very important. This paper first analyzes the unification of scale space and topological space on planar maps, together with the basic pairwise spatial relations between points and straight-line segments. Taking combinations of these basic relations as a foundation, it uses a combinatorial method to distinguish and describe the pairwise spatial relations among point, line, and area objects in map space, and derives the number of each kind of spatial relation.

14.
Taking the Hainan 1:10 000 DLG data as an example, this entry describes a production workflow based on the ArcGIS cartographic representation mechanism in which cartographic data and GIS data are produced from templates and stored in an integrated way. It offers a reference for updating the Hainan 1:10 000 DLG data and supports real-time acquisition, automated processing, networked transmission, and public service delivery of geographic information.

15.
A new method of cartographic line simplification is presented. Regular hexagonal tessellations are used to sample lines for simplification, where hexagon width, reflecting sampling fidelity, is varied in proportion to target scale and drawing resolution. Tesserae constitute loci at which new sets of vertices are defined by vertex clustering quantization, and these vertices are used to compose simplified lines retaining only visually resolvable detail at target scale. Hexagon scaling is informed by the Nyquist-Shannon sampling theorem. The hexagonal quantization algorithm is also compared to an implementation of the Li-Openshaw raster-vector algorithm, which undertakes a similar process using square raster cells. Lines produced by either algorithm using like tessera widths are compared for fidelity to the original line in two ways: Hausdorff distances to the original lines are statistically analyzed, and simplified lines are presented against input lines for visual inspection. Results show that hexagonal quantization offers advantages over square tessellations for vertex clustering line simplification in that simplified lines are significantly less displaced from input lines. Visual inspection suggests lines produced by hexagonal quantization retain informative geographical shapes for greater differences in scale than do those produced by quantization in square cells. This study yields a scale-specific cartographic line simplification algorithm, following Li and Openshaw's natural principle, which is readily applicable to cartographic linework. Open-source Java code implementing the hexagonal quantization algorithm is available online.
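A sketch of hexagonal vertex clustering using standard axial/cube hex-grid arithmetic; this is a simplified stand-in, not the published Java implementation the abstract mentions.

```python
# Consecutive vertices falling in the same pointy-top hexagon (width =
# sqrt(3) * size) are replaced by their centroid.
import math

def hex_cell(x, y, width):
    s = width / math.sqrt(3)                # centre-to-corner size
    q = (math.sqrt(3) / 3 * x - y / 3) / s  # axial coordinates
    r = (2 / 3 * y) / s
    cx, cy, cz = q, -q - r, r               # cube rounding to nearest centre
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return (rx, rz)

def hex_simplify(line, width):
    out, run, cell = [], [], None
    for p in line:
        c = hex_cell(p[0], p[1], width)
        if c != cell and run:               # hexagon changed: flush the run
            out.append((sum(q[0] for q in run) / len(run),
                        sum(q[1] for q in run) / len(run)))
            run = []
        cell = c
        run.append(p)
    out.append((sum(q[0] for q in run) / len(run),
                sum(q[1] for q in run) / len(run)))
    return out

line = [(i / 4, math.sin(i / 4)) for i in range(40)]
print(len(line), "->", len(hex_simplify(line, width=1.0)), "vertices")
```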

16.
Detailed land-cover mapping is essential for a range of research issues addressed by the sustainability and land system sciences and planning. This study uses an object-based approach to create a 1 m land-cover classification map of the expansive Phoenix metropolitan area through the use of high spatial resolution aerial photography from the National Agricultural Imagery Program. It employs an expert knowledge decision rule set and incorporates the cadastral GIS vector layer as auxiliary data. The classification rule was established on a hierarchical image object network, and the properties of parcels in the vector layer were used to establish land cover types. Image segmentations were initially utilized to separate the aerial photos into parcel-sized objects, and were further used for detailed land type identification within the parcels. Characteristics of image objects from contextual and geometrical aspects were used in the decision rule set to reduce the spectral limitation of the four-band aerial photography. Classification results include 12 land-cover classes and subclasses that may be assessed from the sub-parcel to the landscape scales, facilitating examination of scale dynamics. The proposed object-based classification method provides robust results, uses minimal and readily available ancillary data, and reduces computational time.

17.
To address the multi-scale representation of digital elevation model (DEM) data, a multi-scale representation model is constructed from the "low frequency, high energy, large scale" correspondence exhibited by DEM grid data in the energy spectral density, with terrain semantic features linked into the simplification. Experiments show that the model can derive DEM data at different scales dynamically and in real time. Inspection of contours generated from the derived DEMs shows that they satisfy the basic principle of terrain representation, spatial cognition, and map generalization: retain the major terrain features and discard the minor ones. Quantitative comparison with commonly used DEM simplification methods, in terms of elevation statistics and slope-shape change, shows that the method performs well in both the statistical and the structural sense.
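A sketch of the "low frequency, high energy, large scale" idea as a plain FFT low-pass of a DEM grid; the coupling to terrain semantics described in the abstract is not modelled, and the synthetic DEM is an assumption.

```python
# Derive a coarser-scale DEM by keeping only the low-frequency part of the
# grid's 2-D spectrum.
import numpy as np

def spectral_generalize(dem, keep_fraction=0.1):
    """Zero out all but the lowest `keep_fraction` of frequencies per axis."""
    spec = np.fft.fftshift(np.fft.fft2(dem))
    rows, cols = dem.shape
    kr, kc = int(rows * keep_fraction / 2), int(cols * keep_fraction / 2)
    mask = np.zeros_like(spec)
    cr, cc = rows // 2, cols // 2
    mask[cr - kr:cr + kr + 1, cc - kc:cc + kc + 1] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 128), np.linspace(0, 4 * np.pi, 128))
dem = 100 * np.sin(x / 4) + 5 * np.sin(8 * x) * np.cos(8 * y)  # hill + roughness
coarse = spectral_generalize(dem, keep_fraction=0.1)
print("std before/after:", dem.std().round(1), coarse.std().round(1))
```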

18.
This article presents an area-preservation approach for polygonal boundary simplification by the use of structured total least squares adjustment with constraints (STLSC), with the aim being to maintain the area of the original polygons after the simplification. Traditionally, a simplified line is represented by critical points selected from the original one. However, this study focuses on maintaining the areas of the polygons in the process of simplification of polygonal boundaries. Therefore, the proposed method in this article is a supplement to the existing line simplification methods, and it improves the quality of the simplification of polygonal boundaries in terms of positional and area errors. Based on the sub-divisions of the original polyline, using the critical points detected from the polyline by the use of line simplification methods, the framework of the proposed method includes three main components, as follows: (1) establishment of the straight-line-segment fitting model based on both the critical and intermediate points on the sub-polyline; (2) introduction of both area and end-point constraints to reduce the geometric distortions due to the line simplification; and (3) derivation of the solution of boundary simplification by the use of STLSC. An empirical example was conducted to test the applicability of the proposed method. The results showed that: (1) by imposing the linear fitting model on both the critical and intermediate points on the sub-polylines in the proposed STLSC method, the positional differences between the original points and the simplified line are approximately in a normal distribution; and (2) by introducing both end-point and area constraints in the proposed STLSC method, the areas of the simplified polygons are the same as those of the original ones at different scales, and the two neighboring fitted lines are connected to each other at the optimized position.
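A sketch enforcing the area constraint only, by uniform scaling about the centroid; this is a far cruder device than the STLSC adjustment the article develops, shown just to make the constraint concrete.

```python
# Restore the original polygon area after simplification by scaling the
# simplified polygon about its centroid (area scales with factor squared).
import math
from shapely.geometry import Polygon
from shapely import affinity

def restore_area(original_coords, simplified_coords):
    orig = Polygon(original_coords)
    simp = Polygon(simplified_coords)
    factor = math.sqrt(orig.area / simp.area)
    c = simp.centroid
    return affinity.scale(simp, xfact=factor, yfact=factor, origin=(c.x, c.y))

orig = [(0, 0), (4, 0), (4, 1), (5, 1), (5, 4), (0, 4)]
simp = [(0, 0), (4, 0), (4.5, 1), (5, 4), (0, 4)]   # hand-simplified boundary
fixed = restore_area(orig, simp)
print(Polygon(orig).area, Polygon(simp).area, round(fixed.area, 6))
```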

19.
The generalization of digital terrain models (DTMs) is a tool of great potential for simultaneous cartographic and photogrammetric generation processes at different scales, the main objective of which is to feed different geographic information systems (GIS). These GIS enable multi-scale analysis and visualization through different databases. This research proposes a semi-automatic DTM generalization process conditioned by a series of predefined parameters, resulting in the generation of hybrid DTMs at different scales starting from a single point cloud obtained through large-scale massive data acquisition processes. The generalization results, applied to areas of different relief, offer specific application ranges for each parameter with great precision, in contrast with DTMs obtained directly at each scale.

20.
We review recent developments in cartographic research in North America, in the context of informing the 29th International Cartographic Conference and 18th General Assembly in 2019. The titles of papers published since 2015 in four leading cartographic journals yielded a corpus of 245 documents containing 1109 unique terms. These terms were analyzed using Latent Dirichlet Allocation and visual analytics to produce 14 topic groups that mapped onto five classes. These classes were named information visualization, cartographic data, spatial analysis and applications, methods and models, and GIScience. The classes were then used as themes to discuss the recent cartographic literature more broadly: first, to review recent trends in the research and identify research gaps, and second, to examine prospects for new research over the next 20 years. A conclusion draws some broad findings from the review, suggesting that cartographic research in the future will be aimed less at dealing with data, and more at generating insight and knowledge to better inform society about global challenges.
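A sketch of the title-topic pipeline with scikit-learn's LDA; the miniature corpus below is a stand-in for the 245-document one analyzed in the article.

```python
# Vectorize paper titles, fit Latent Dirichlet Allocation, and list the top
# terms per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

titles = [
    "interactive visualization of uncertainty in flow maps",
    "visual analytics for movement data exploration",
    "deep learning for building footprint generalization",
    "automated map generalization with neural networks",
    "volunteered geographic information data quality",
    "assessing openstreetmap data quality measures",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(titles)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, row in enumerate(lda.components_):
    top = [terms[i] for i in row.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```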
