Similar Literature

20 similar records found.
1.
Abstract

We present the notion of a natural tree as an efficient method for storing spatial information for quick access. A natural tree is a representation of spatial adjacency, organised to allow efficient addition of new data, access to existing data, or deletions. The nodes of a natural tree are compound elements obtained by a particular Delaunay triangulation algorithm. Improvements to that algorithm allow both the construction of the triangulation and subsequent access to neighbourhood information to be O(N log N). Applications include geographical information systems, contouring, and dynamical systems reconstruction.

2.
Over the last two decades, the Delaunay triangulation has been the only choice for most geographical information system (GIS) users and researchers to build triangulated irregular networks (TINs). The classical Delaunay triangulation for creating TINs considers only the 2D distribution of data points. Recent research efforts have been devoted to generating data-dependent triangulations, which incorporate information on both the distribution and the values of input data in the triangulation process. This paper compares the traditional Delaunay triangulation with several variant data-dependent triangulations based on Lawson's local optimization procedure (LOP). Two USGS digital elevation models (DEMs) are used in the comparison. It is clear from the experiments that the quality of TINs depends not only on the vertex placement but also on the vertex connection. Traditional two-step processes for TIN construction, which separate point selection from the triangulation, generate far worse results than methods which iteratively select points during the triangulation process. A pure data-dependent triangulation contains a large number of sliver and steep triangles, which greatly affect the quality of the TINs constructed. Among the triangulation methods tested, the classical Delaunay triangulation is still the most successful technique for constructing TINs that approximate natural terrain surfaces.
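The flip criterion at the core of Lawson's LOP can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names are ours. The classical Delaunay variant flips the shared edge of two adjacent triangles when the opposite vertex falls inside the circumcircle, whereas the data-dependent variants compared in the paper replace this purely geometric test with a cost based on vertex values.

```python
def in_circle(a, b, c, d):
    """Return > 0 if d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c); points are (x, y) tuples."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    # Standard 3x3 in-circle determinant, expanded.
    return (
        (ax * ax + ay * ay) * (bx * cy - cx * by)
        - (bx * bx + by * by) * (ax * cy - cx * ay)
        + (cx * cx + cy * cy) * (ax * by - bx * ay)
    )

def lawson_should_flip(a, b, c, d):
    # Triangles (a, b, c) and (a, b, d) share edge a-b; the classical
    # (empty-circumcircle) criterion flips it to c-d when d is inside
    # the circumcircle of (a, b, c).
    return in_circle(a, b, c, d) > 0
```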

3.
The triangulated irregular network (TIN) can model terrain surfaces realistically and is therefore widely used in the geosciences. Delaunay triangulation is the optimal algorithm for constructing a TIN. This paper analyses traditional Delaunay triangulation construction algorithms and proposes an efficient divide-and-merge algorithm for generating a TIN from large-scale scattered data points. The algorithm first partitions the scattered points into quadtree regions according to their location and density; it then treats the bounding quadrilateral of each leaf node as a convex hull and builds a triangulation by incremental point insertion; finally, a vertex-merging method merges, bottom-up, the four child nodes that share the same parent to produce the Delaunay triangulation. Experimental results show that the algorithm has low time complexity and effectively improves the efficiency of TIN construction.
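The first stage described above can be sketched roughly as follows. This is a hypothetical illustration under our own assumptions (midpoint splitting, a fixed leaf capacity), not the paper's code; in the paper each leaf region would then be triangulated by incremental insertion before the bottom-up merge.

```python
def quadtree_partition(points, cap=64):
    """Recursively split a list of (x, y) points into quadtree leaves,
    each holding at most `cap` points; returns the list of leaf sets."""
    if len(points) <= cap:
        return [points]
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    xmid = (min(xs) + max(xs)) / 2.0
    ymid = (min(ys) + max(ys)) / 2.0
    quads = [[], [], [], []]          # SW, SE, NW, NE
    for p in points:
        quads[(p[0] > xmid) + 2 * (p[1] > ymid)].append(p)
    if max(len(q) for q in quads) == len(points):
        return [points]               # degenerate (e.g. coincident points)
    leaves = []
    for q in quads:
        if q:
            leaves.extend(quadtree_partition(q, cap))
    return leaves
```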

4.
Several algorithms have been proposed to generate a polygonal ‘footprint’ to characterize the shape of a set of points in the plane. One widely used type of footprint is the χ-shape. Based on the Delaunay triangulation (DT), χ-shapes are guaranteed to be simple (Jordan) polygons. This paper presents for the first time an incremental χ-shape algorithm capable of processing point data streams. Our incremental χ-shape algorithm allows both insertion and deletion operations, and can handle streaming individual points and multiple point sets. The experimental results demonstrate that the incremental algorithm is significantly more efficient than the existing batch χ-shape algorithm for processing a wide variety of point data streams.

5.
6.
In this paper, we propose a new graphics processing unit (GPU) method able to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight-line graph consisting of points and segments. All existing methods compute the Delaunay triangulation of the given point set, insert all the segments, and then finally transform the resulting triangulation into the CDT. To the contrary, our novel approach simultaneously inserts points and segments into the triangulation, taking special care to avoid conflicts during retriangulations due to concurrent insertion of points or concurrent edge flips. Our implementation using the Compute Unified Device Architecture programming model on NVIDIA GPUs improves, in terms of running time, the best known GPU-based approach to the CDT problem.

7.
8.
This study introduces a new Triangulated Irregular Network (TIN) compression method and a progressive visualization technique using Delaunay triangulation. The compression strategy is based on the assumption that most triangulated 2.5-dimensional terrains are very similar to their Delaunay triangulation. Therefore, the compression algorithm only needs to maintain the few edges that are not included in the Delaunay edges. An efficient encoding method is presented for this set of edges, using vertex reordering and a general bracketing method. In the experiments, the compression method was applied to several sets of TIN data at various resolutions, generated by five typical terrain simplification algorithms. The connectivity structures of common terrain data are compressed to 0.17 bits per vertex on average, which is superior to the results of previous methods. The results are shown with a progressive visualization method for web-based GIS.

9.
As a basic and significant operator in map generalization, polyline simplification needs to work across scales. Perkal’s ε-circle rolling approach, in which a circle with diameter ε is rolled on both sides of the polyline so that the small bend features can be detected and removed, is considered as one of the few scale-driven solutions. However, the envelope computation, which is a key part of this method, has been difficult to implement. Here, we present a computational method that implements Perkal’s proposal. To simulate the effects of a rolling circle, Delaunay triangulation is used to detect bend features and further to construct the envelope structure around a polyline. Then, different connection methods within the enveloping area are provided to output the abstracted result, and a strategy to determine the best connection method is explored. Experiments with real land-use polygon data are implemented, and comparison with other algorithms is discussed. In addition to the scale-specificity inherited from Perkal’s proposal, the results show that the proposed algorithm can preserve the main shape of the polyline and meet the area-maintaining constraint during large-scale change. This algorithm is also free from self-intersection.

10.
ABSTRACT

The aim of this article is to describe a convenient but robust method for defining neighbourhood relations among buildings based on ordinary Delaunay diagrams (ODDs) and area Delaunay diagrams (ADDs). ODDs and ADDs are defined as a set of edges connecting the generators of adjacent ordinary Voronoi cells (points representing centroids of building polygons) and a set of edges connecting two centroids of building polygons, which are the generators of adjacent area Voronoi cells, respectively. Although ADDs are more robust than ODDs, computation time of ODDs is shorter than that of ADDs (the order of their computation time complexity is O(n log n)). If ODDs can approximate ADDs with a certain degree of accuracy, the former can be used as an alternative. Therefore, we computed the ratio of the number of ADD edges to that of ODD edges overlapping ADDs at building and regional scales. The results indicate that: (1) for approximately 60% of all buildings, ODDs can exactly overlap ADDs with extra ODD edges; (2) at a regional scale, ODDs can overlap approximately 90% of ADDs with 10% extra ODD edges; and (3) focusing on judging errors, although ADDs are more accurate than ODDs, the difference is only approximately 1%.
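The edge-set comparison described above can be sketched as follows. This is a minimal illustration under our own assumptions (edges as unordered pairs of building identifiers; function names are ours): it reports the share of ADD edges that the ODD reproduces, and the share of ODD edges that are extra.

```python
def edge_overlap(add_edges, odd_edges):
    """Compare two Delaunay-diagram edge sets given as iterables of
    (id_a, id_b) pairs; direction of an edge is ignored."""
    add = {tuple(sorted(e)) for e in add_edges}
    odd = {tuple(sorted(e)) for e in odd_edges}
    covered = len(add & odd) / len(add)   # fraction of ADD matched by ODD
    extra = len(odd - add) / len(odd)     # fraction of ODD with no ADD match
    return covered, extra
```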

11.
Design and Implementation of an Algorithm for Extracting the Main Skeleton Line of a Polygon
Skeleton-line nodes are classified on the basis of the Delaunay triangulation. By determining the two endpoints of the main skeleton line, a backtracking method is used to extract the main skeleton line of the polygon; detailed algorithm steps are given, and the algorithm was implemented in the Visual C++ 2003 environment. Compared with other algorithms, this one is simple in concept and easy to program, and the resulting main skeleton line is well shaped, reflecting the polygon's main shape characteristics and principal direction of extension.

12.
An Adaptive IDW Interpolation Method and Its Application to Air Temperature Fields
段平  盛业华  李佳  吕海洋  张思阳 《地理研究》2014,33(8):1417-1426
Inverse distance weighting (IDW) interpolation usually selects its reference points by a distance search; when the sample points are unevenly distributed, this search tends to cluster the reference points on one side of the interpolation point, degrading accuracy. Natural-neighbour relations adapt well to the point distribution and can effectively resolve this problem. Combining natural-neighbour relations, an adaptive inverse distance weighting (Adaptive-IDW, AIDW) interpolation method is proposed. An initial Delaunay triangulation is first built from the sample data; each point to be interpolated is then inserted into the triangulation by incremental insertion, and the triangulation is locally updated, so that the first-order natural neighbours of the interpolation point serve as the IDW reference points, adaptively and evenly distributed around it; the IDW estimate is then computed. Interpolation experiments on the Franke test function and on national air temperature observations show that the method achieves higher accuracy and reduces the "bull's-eye" effect.
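The IDW step that AIDW builds on can be sketched as follows. This is a minimal sketch assuming the common power p = 2; in the AIDW method described above, `neighbours` would be the first-order Delaunay neighbours of the interpolation point rather than the result of a plain distance search. The names are ours, not the authors'.

```python
import math

def idw(point, neighbours, p=2.0):
    """Inverse-distance-weighted estimate at `point` (x, y), given
    `neighbours` as a list of ((x, y), value) reference samples."""
    num = den = 0.0
    for (x, y), value in neighbours:
        d = math.hypot(x - point[0], y - point[1])
        if d == 0.0:              # coincides with a sample: return it exactly
            return value
        w = 1.0 / d ** p          # weight decays with distance
        num += w * value
        den += w
    return num / den
```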

13.
Recently, point-of-interest (POI) recommendation has evolved into a hot research topic with real-world applications. In this paper, we propose a novel semantics-enhanced density-based clustering algorithm, SEM-DTBJ-Cluster, to extract semantic POIs from GPS trajectories. We then take into account three different factors (popularity, temporal and geographical features) that can influence the recommendation score of a POI. We characterize the impacts of popularity, temporal and geographical information using different scoring functions based on three developed recommendation models. Finally, we combine the three scoring functions into a unified framework, PTG-Recommend, for recommending candidate POIs to a mobile user. To the best of our knowledge, this work is the first to consider popularity, temporal and geographical information together. Experimental results on two real-world data sets strongly demonstrate that our framework is robust and effective, and outperforms the baseline recommendation methods in terms of precision and recall.
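The final combination step can be sketched as follows. This is a hypothetical illustration only: the abstract does not specify how PTG-Recommend fuses the three scores, so the linear weighting, the equal default weights, and all names here are our assumptions, not the authors' formula.

```python
def ptg_score(popularity, temporal, geographical, weights=(1/3, 1/3, 1/3)):
    """Fuse the three per-POI scores (assumed normalised to [0, 1])
    into one recommendation score via an assumed linear combination."""
    wp, wt, wg = weights
    return wp * popularity + wt * temporal + wg * geographical

def recommend(candidates, scores, k=5):
    """Rank candidate POI ids by combined score and return the top k.
    `scores` maps id -> (popularity, temporal, geographical)."""
    ranked = sorted(candidates, key=lambda c: ptg_score(*scores[c]),
                    reverse=True)
    return ranked[:k]
```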

14.
15.
Several alternative estimation and interpolation methods for making annual precipitation maps of Asturias are analysed. The data series in this study corresponds to the year 2003. There exists an evident relationship between precipitation and altitude, with a high correlation coefficient of 0.70, that reflects the hillside effect; that is, the increase in the amount of precipitation in more mountainous areas. The direct spatial variability of precipitation and of altitude and the cross variability of precipitation–altitude are defined by two exponential variogram models: one with a short-range structure (15–30 km) that reflects the control exerted by the lesser, local mountain ranges over the amount of precipitation; and another with a long-range structure (80 km) that supposes the influence over precipitation of the major mountainous alignments of the inland areas of the Cantabrian Mountain Range (Cordillera Cantábrica) situated between 60 and 90 km from the coastline. These variogram models had to be validated for coregionalization by the Pardo-Igúzquiza and Dowd method so as to be able to make the cokriging map. The geometric estimation methods employed were triangulation and inverse distance. The geostatistical estimation methods developed were simple kriging, ordinary kriging, kriging with a trend model (universal kriging), lognormal kriging, and cokriging. In all of these methods, a 3 × 3 km² grid was selected with a total of 2580 points to estimate, a circular search window of 60 km, and a relatively small number of samples with the aim of highlighting the local features and variations on isohyet maps. The kriging methods were implemented using the WinGslib software, incorporating two specific programs, Prog2 and Fichsurf, so as to be able then to make isohyet maps using the Surfer software. All the methods employed, apart from triangulation, rendered realistic maps with good fits to the values of the original data (precipitation) of the sample maps. 
The problem with triangulation lies not in the reliability of the estimates but in the fact that it gives rise to contrived maps because of the tendency of isohyets to present abundant triangular facets. The reliability of the methods was based on cross-validation analysis and on evaluation of the different types of errors, both in their values and in their graphical representations. Substantial differences were not found in the values of the errors that might discriminate some methods from others in an evident way. Bearing the aforesaid in mind, should we have to make an evaluation of the different estimation methods in decreasing order of acceptance, this would be: kriging with a trend model, inverse distance, cokriging, lognormal kriging, ordinary kriging, simple kriging, and triangulation. The application of other estimation methods such as colocated cokriging, kriging with an external drift, and kriging of variable local means (residual kriging) is dependent on the availability of a digital model of the terrain with an altitude grid of the region.

16.
Abstract

A technique is discussed for obtaining a contour tree efficiently as a byproduct of an operational contouring system. This tree may then be used to obtain contour symbolism or interval statistics as well as for further geomorphological study. Alternatively, the tree may be obtained without the computational expense of detailed contour interpolation. The contouring system proceeds by assuming a Voronoi neighbourhood or domain about each data point and generating a dual-graph Delaunay triangulation accordingly. Since a triangulation may be traversed in a tree order, individual triangles may be processed in a guaranteed top-to-bottom sequence on the map. At the active edge of the map under construction a linked list is maintained of the contour ‘stubs’ available to be updated by the next triangle processed. Any new contour segment may extend an existing stub, open two new stubs or close (connect) two previous stubs. Extending this list of edge links backwards into the existing map permits storage of contour segments within main memory until a dump (either to plotter or disc) is required by memory overflow, contour closure, contour labelling or job completion. Maintenance of an appropriate status link permits the immediate distinction of local closure (where the newly-connected segments are themselves not connected) from global closure (where a contour loop is completed and no longer required in memory). The resulting contour map may be represented as a tree, the root node being the bounding contour of the map. The nature of the triangle-ordering procedure ensures that inner contours are closed before enclosing ones, and hence a preliminary contour tree may be generated as conventional contour generation occurs. A final scan through the resulting tree eliminates any inconsistencies.

17.
Abstract

To achieve high levels of performance in parallel geoprocessing, the underlying spatial structure and relations of spatial models must be accounted for and exploited during decomposition into parallel processes. Spatial models are classified from two perspectives, the domain of modelling and the scope of operations, and a framework of strategies is developed to guide the decomposition of models with different characteristics into parallel processes. Two models are decomposed using these strategies: hill-shading on digital elevation models and the construction of Delaunay triangulations. Performance statistics are presented for implementations of these algorithms on a MIMD computer.

18.
An Algorithm for Deleting Arbitrary Points from a 2D Delaunay Triangulation
To address the shortcomings of existing point-deletion algorithms based on re-triangulating the influence polygon, a point-deletion algorithm for 2D Delaunay triangulations is proposed. The influence polygon is first located by searching the topologically linked triangulation; the polygon is then triangulated using the signed (vector) area of triangles as the basic tool; finally, the optimized sub-triangulation is patched back into the mesh to complete the deletion while satisfying the Delaunay criterion. Tests demonstrate the reliability and efficiency of the algorithm.
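The signed-area predicate used above as the tool for triangulating the influence polygon can be sketched as follows. A minimal illustration, not the paper's code: the sign tells on which side of the directed edge a→b the point c lies, which is the basic test when cutting valid triangles out of the polygon.

```python
def signed_area(a, b, c):
    """Signed area of triangle (a, b, c): > 0 if the vertices turn
    counter-clockwise, < 0 if clockwise, 0 if collinear."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                  - (c[0] - a[0]) * (b[1] - a[1]))
```

A candidate triangle inside the influence polygon is acceptable only if its signed area is positive (consistent orientation) and no remaining polygon vertex lies inside it.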

19.
Geographic data themes modelled as planar partitions are found in many GIS applications (e.g. topographic data, land cover, zoning plans, etc.). When generalizing this kind of 2D map, this specific nature has to be respected and generalization operations should be carefully designed. This paper presents a design and implementation of an algorithm to perform a split operation of faces (polygonal areas).

The result of the split operation has to fit in with the topological data structure supporting variable-scale data. The algorithm, termed SPLITAREA, obtains the skeleton of a face using a constrained Delaunay triangulation. The new split operator is especially relevant in urban areas with many infrastructural objects such as roads. The contribution of this work is twofold: (1) the quality of the split operation is formally assessed by comparing the results on actual test data sets with a goal/metric we defined beforehand for the ‘balanced’ split and (2) the algorithm allows a weighted split, where different neighbours have different weights due to different compatibility. With the weighted split, the special case of unmovable boundaries is also explicitly addressed.

The developed split algorithm can also be used outside the generalization context in other settings. For example, to make two cross-border data sets fit, the algorithm could be applied to allow splitting of slivers.


20.
Existing spatial clustering methods primarily focus on points distributed in planar space. However, occurrence locations and background processes of most human mobility events within cities are constrained by the road network space. Here we describe a density-based clustering approach for objectively detecting clusters in network-constrained point events. First, the network-constrained Delaunay triangulation is constructed to facilitate the measurement of network distances between points. Then, a combination of network kernel density estimation and potential entropy is executed to determine the optimal neighbourhood size. Furthermore, all network-constrained events are tested under a null hypothesis to statistically identify core points with significantly high densities. Finally, spatial clusters can be formed by expanding from the identified core points. Experimental comparisons performed on the origin and destination points of taxis in Beijing demonstrate that the proposed method can ascertain network-constrained clusters precisely and significantly. The resulting time-dependent patterns of clusters will be informative for taxi route selections in the future.
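The final expansion step can be sketched as follows. This is a simplified, DBSCAN-style illustration under our own assumptions: a plain adjacency dict stands in for the network-distance neighbourhoods, and the core-point set is taken as given (in the paper it comes from the density significance test).

```python
def expand_clusters(core, neighbours):
    """Grow clusters outward from core points.
    core: set of core point ids; neighbours: id -> iterable of ids.
    Returns a dict mapping each reached point id to a cluster id."""
    labels, next_id = {}, 0
    for seed in core:
        if seed in labels:
            continue                  # already absorbed by another cluster
        labels[seed] = next_id
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in neighbours.get(p, ()):
                if q not in labels:
                    labels[q] = next_id
                    if q in core:     # only core points keep expanding
                        stack.append(q)
        next_id += 1
    return labels
```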
