Similar Literature
20 similar documents found
1.
Viewshed analysis, often supported by geographic information systems, is widely used in many application domains. However, as terrain data continue to become larger and available at higher resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address such challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory- and computation-intensive tasks, while central processing units are used to achieve efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy has been developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of the GPU-based architecture. Computational experiments were designed to evaluate the performance of the approach. The experiments demonstrate significant performance improvement over a well-known sequential computing method, as well as an enhanced ability to analyze sizable datasets that the sequential method cannot handle.
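A minimal sequential line-of-sight sketch (not the article's GPU code) illustrates why viewshed computation maps naturally onto GPUs: every target cell can be tested independently. The grid layout, the `visible` helper, and the 1.7 m viewer height are assumptions for illustration:

```python
import math

def visible(dem, viewer, target, viewer_height=1.7):
    """Line-of-sight test on a grid DEM: the target is visible if no
    intermediate cell subtends a larger vertical angle than the target."""
    (r0, c0), (r1, c1) = viewer, target
    eye = dem[r0][c0] + viewer_height
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True
    max_angle = -math.inf
    for i in range(1, steps + 1):
        # Sample the ray at evenly spaced fractional grid positions.
        t = i / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        dist = t * math.hypot(r1 - r0, c1 - c0)
        angle = math.atan2(dem[r][c] - eye, dist)
        if i < steps and angle > max_angle:
            max_angle = angle
        if i == steps:
            return angle >= max_angle
    return True

def viewshed(dem, viewer):
    # Each target cell is independent, which is what makes the
    # computation easy to map onto one GPU thread per cell.
    return [[visible(dem, viewer, (r, c)) for c in range(len(dem[0]))]
            for r in range(len(dem))]
```

In a GPU formulation, the outer double loop in `viewshed` becomes the thread grid; the per-cell work is unchanged.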

2.
As increasingly large-scale and higher-resolution terrain data have become available, for example from airborne and space-borne sensors, the volume of these datasets reveals scalability problems with existing GIS algorithms. To address this problem, a serial algorithm was developed to generate viewsheds on large grid-based digital elevation models (DEMs). We first divide the whole DEM into rectangular blocks in the row and column directions (block partitioning), then process these blocks along four axes followed by four sectors sequentially. When processing a particular block, we adopt the ‘reference plane’ algorithm to calculate the visibility of the target point on the block, and adjust the calculation sequence according to the spatial relationship between the block and the viewpoint, since the viewpoint is not always inside the DEM. By adopting the reference-plane algorithm and using block partitioning to segment and load the DEM dynamically, it is possible to generate viewsheds efficiently in PC-based environments. Experiments showed that each partitioned block should be dynamically loaded into main memory as a whole, and that the suggested approach retains the accuracy of the reference-plane algorithm with near-linear computational complexity.

3.
ABSTRACT

The aim of site planning based on multiple viewshed analysis is to select the minimum number of viewpoints that maximize visual coverage over a given terrain. However, increasingly high-resolution terrain data mean that the number of terrain points grows rapidly, which leads to rapidly increasing computational requirements for multiple viewshed site planning. In this article, we propose a fast Candidate Viewpoints Filtering (CVF) algorithm for multiple viewshed site planning to lay a foundation for viewpoint optimization. First, terrain feature points are selected as candidate viewpoints. These candidate viewpoints are then clustered, and those belonging to each cluster are sorted according to the index of viewshed contribution (IVC). Finally, candidate viewpoints with relatively low viewshed contribution are removed gradually using the CVF algorithm, through which viewpoints with high viewshed contribution are preserved, and the number of preserved viewpoints can be controlled by the number of clusters. To evaluate the effectiveness of our CVF algorithm, we compare it with the Region Partitioning for Filtering (RPF) and Simulated Annealing (SA) algorithms. Experimental results show that our CVF algorithm offers substantial improvements in both computational efficiency and total viewshed coverage rate.
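The actual CVF algorithm clusters candidates and ranks them by IVC; the greedy sketch below is a simplified stand-in that captures the core idea of discarding low-contribution viewpoints while preserving coverage. Viewsheds are modeled as plain sets of visible cells, an assumption for illustration:

```python
def filter_candidates(viewsheds, keep):
    """Greedy sketch of contribution-based filtering: repeatedly drop the
    candidate whose viewshed adds the least coverage beyond the others.
    `viewsheds` maps viewpoint id -> set of visible cells."""
    kept = dict(viewsheds)
    while len(kept) > keep:
        def marginal(vp):
            # Cells only this viewpoint can see, i.e. its contribution.
            others = set().union(*(s for k, s in kept.items() if k != vp))
            return len(kept[vp] - others)
        worst = min(kept, key=marginal)
        del kept[worst]
    return set(kept)
```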

4.
A Real-Time Rendering Algorithm for Large-Scale Terrain
This paper proposes a simple and efficient LOD simplification algorithm suitable for real-time rendering of large-scale terrain. The algorithm uses a compact and effective regular-grid representation, optimizes the number of mesh nodes, reduces the amount of computation during visualization, and lowers the extra memory overhead. Key issues related to the algorithm are discussed, including data organization, view-frustum culling, LOD level selection, crack elimination, and triangulation. Experimental results show that the algorithm is simple to implement, has low memory overhead and CPU cost, places low demands on the graphics card, and enables real-time roaming over large-scale terrain on ordinary machines.

5.
With the increasing sizes of digital elevation models (DEMs), there is a growing need to design parallel schemes for existing sequential algorithms that identify and fill depressions in raster DEMs. The Priority-Flood algorithm is the fastest sequential algorithm in the literature for depression identification and filling of raster DEMs, but it has had no parallel implementation since it was proposed approximately a decade ago. A parallel Priority-Flood algorithm based on the fastest sequential variant is proposed in this study. The algorithm partitions a DEM into stripes, processes each stripe using the sequential variant in many rounds, and progressively identifies more slope cells that are misidentified as depression cells in previous rounds. Both Open Multi-Processing (OpenMP)- and Message Passing Interface (MPI)-based implementations are presented. The speed-up ratios of the OpenMP-based implementation over the sequential algorithm are greater than four for all tested DEMs with eight computing threads. The mean speed-up ratio of our MPI-based implementation is greater than eight over TauDEM, which is a widely used MPI-based library for hydrologic information extraction. The speed-up ratios of our MPI-based implementation generally become larger with more computing nodes. This study shows that the Priority-Flood algorithm can be implemented in parallel, which makes it an ideal algorithm for depression identification and filling on both single computers and computer clusters.
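The sequential variant that the parallel scheme builds on can be sketched compactly. This is a standard single-pass Priority-Flood with a min-heap seeded from the DEM border; 8-connectivity is an assumption:

```python
import heapq

def priority_flood(dem):
    """Sequential Priority-Flood depression filling: seed a min-heap with
    the DEM border, then repeatedly pop the lowest cell and raise each
    unvisited neighbour to at least that elevation."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r][c], r, c))
                seen[r][c] = True
    while heap:
        elev, r, c = heapq.heappop(heap)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                    seen[nr][nc] = True
                    # A neighbour below the spill elevation is a pit: fill it.
                    filled[nr][nc] = max(filled[nr][nc], elev)
                    heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled
```

The parallel version described above runs this routine per stripe and reconciles stripe boundaries over several rounds.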

6.
Selecting the set of candidate viewpoints (CVs) is one of the most important procedures in multiple viewshed analysis. However, the quantity of CVs remains excessive even when only terrain feature points are selected. Here we propose the Region Partitioning for Filtering (RPF) algorithm, which uses a region partitioning method to filter CVs of a multiple viewshed. The region partitioning method is used to decompose an entire area into several regions. The quality of CVs can be evaluated by summarizing the viewshed area of each CV in each region. First, the RPF algorithm apportions each CV to a region wherein the CV has a larger viewshed than that in other regions. Then, CVs with relatively small viewshed areas are removed from their original regions or reassigned to another region in each iterative step. In this way, a set of high-quality CVs can be preserved, and the size of the preserved CVs can be controlled by the RPF algorithm. To evaluate the computational efficiency of the RPF algorithm, its performance was compared with simple random (SR), simulated annealing (SA), and ant colony optimization (ACO) algorithms. Experimental results indicate that the RPF algorithm provides more than a 20% improvement over the SR algorithm, and that, on average, the computation time of the RPF algorithm is 63% that of the ACO algorithm.

7.
This study presents a massively parallel spatial computing approach that uses general-purpose graphics processing units (GPUs) to accelerate Ripley’s K function for univariate spatial point pattern analysis. Ripley’s K function is a representative spatial point pattern analysis approach that allows the spatial dispersion characteristics of point patterns to be evaluated quantitatively. However, considerable computation is often required when analyzing large spatial data using Ripley’s K function. In this study, we developed a massively parallel implementation of Ripley’s K function to accelerate spatial point pattern analysis. GPUs serve as a massively parallel platform built on a many-core architecture for speeding up Ripley’s K function. Variable-grained domain decomposition and thread-level synchronization based on shared memory are the parallel strategies designed to exploit concurrency in the spatial algorithm of Ripley’s K function for efficient parallelization. Experimental results demonstrate that substantial acceleration is obtained for Ripley’s K function parallelized within GPU environments.
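The computation being accelerated is, at its core, an O(n²) pairwise distance count, which is exactly the concurrency a GPU exploits (one thread per point or per pair). A naive sequential estimator, with edge correction omitted as a simplifying assumption:

```python
import math

def ripley_k(points, t, area):
    """Naive O(n^2) estimator of Ripley's K: mean number of neighbours
    within distance t of each point, scaled by the point intensity.
    For complete spatial randomness, K(t) is approximately pi * t**2."""
    n = len(points)
    lam = n / area  # intensity: points per unit area
    count = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= t:
                count += 1
    return count / (lam * n)
```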

8.
Abstract

Large spatial interpolation problems present significant computational challenges even for the fastest workstations. In this paper we demonstrate how parallel processing can be used to reduce computation times to levels that are suitable for interactive interpolation analyses of large spatial databases. Though the approach developed in this paper can be used with a wide variety of interpolation algorithms, we specifically contrast the results obtained from a global ‘brute force’ inverse-distance-weighted interpolation algorithm with those obtained using a much more efficient local approach. The parallel versions of both implementations are superior to their sequential counterparts. However, the local version of the parallel algorithm provides the best overall performance.
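The global/local contrast can be sketched in a few lines: the global form weighs every sample, while the local form restricts the sum to a search radius. The `(x, y, value)` sample layout and the radius-based neighbourhood are assumptions for illustration:

```python
import math

def idw(samples, x, y, power=2, radius=None):
    """Inverse-distance-weighted estimate at (x, y).  With radius=None this
    is the global 'brute force' form; a finite radius gives the local
    variant, which is far cheaper on large sample sets."""
    num = den = 0.0
    for sx, sy, sv in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return sv  # exact hit on a sample point
        if radius is not None and d > radius:
            continue   # local variant ignores distant samples
        w = 1.0 / d ** power
        num += w * sv
        den += w
    return num / den if den else None
```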

9.
10.
Abstract

In most documentation of geographical information systems (GIS) it is very rare to find details of the algorithms used in the software, but alternative formulations of the same process may derive different results. In this research several alternatives in the design of viewshed algorithms are explored. Three major features of viewshed algorithms are examined: how elevations in the digital elevation model are inferred, how viewpoint and target are represented, and the mathematical formulation of the comparison. It is found that the second of these produces the greatest variability in the viewable area (up to 50 per cent over the mean viewable area), while the last gives the least. The same test data are run in a number of different GIS implementations of the viewshed operation, and smaller, but still considerable, variability in the viewable area is observed. The study highlights three issues: the need for standards and/or empirical benchmark datasets for GIS functions; the desirability of publication of algorithms used in GIS operations; and the fallacy of the binary representation of a complex GIS product such as the viewshed.

11.
This article presents an approach to computing K shortest paths in large buildings with complex horizontal and vertical connectivity. The building topology is obtained from a Building Information Model (BIM) and implemented using directed multigraphs. Hierarchical design allows the calculation of feasible paths without the need to load the topology of the entire building into memory. It is possible to expand the graph with new connectivity on-the-fly. The paths calculated may be composed of traversable building components that are located inside the buildings or those that are both inside and outside buildings. The performance (computational time and heap size used) is optimized by using the appropriate collections (maps, lists and sets). The proposed algorithm is evaluated in several use-case scenarios – complete graphs and real building environments. In all test scenarios, the proposed path-finding algorithm is faster and uses less memory when compared to the fast version of Yen’s KSP algorithm. The proposed approach can be successfully used as a first level of coarse-to-fine path-finding algorithms.

12.
This paper presents a non-linear algorithmic approach to seismic traveltime inversion. It is based on large-scale optimization using non-linear least-squares and trust-region methods. These methods provide a natural way to stabilize algorithms based on Newton’s iteration for non-linear minimization. They also correspond to an alternative (and often more efficient) view of the Levenberg-Marquardt method. Numerical experience on synthetic data and on real borehole-to-borehole problems is presented. In particular, results produced by the new algorithm are compared with those of Ivansson (1985) for the Kråkemåla experiment.
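As a toy illustration of the damping idea (not the paper's tomography code), the sketch below fits a one-parameter exponential model with Levenberg-Marquardt; the model, data, and step-acceptance schedule are assumptions for demonstration:

```python
def levenberg_marquardt(residual, jacobian, a, lam=1e-3, iters=50):
    """One-parameter Levenberg-Marquardt: the damping factor lam blends
    Gauss-Newton steps (lam -> 0) with short gradient steps (lam large),
    stabilizing Newton's iteration for non-linear least squares."""
    def cost(p):
        return sum(r * r for r in residual(p))
    for _ in range(iters):
        r = residual(a)
        J = jacobian(a)
        g = sum(Ji * ri for Ji, ri in zip(J, r))  # gradient J^T r
        h = sum(Ji * Ji for Ji in J)              # scalar J^T J
        step = -g / (h * (1.0 + lam))             # damped normal equation
        if cost(a + step) < cost(a):
            a, lam = a + step, lam / 3            # accept; trust the model more
        else:
            lam *= 3                              # reject; damp harder
    return a
```

The accept/reject loop on `lam` is the trust-region flavour: a failed step shrinks the region, a successful one enlarges it.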

13.
ABSTRACT

Crime often clusters in space and time. Near-repeat patterns improve understanding of crime communicability and its space-time interactions. Near-repeat analysis requires extensive computing resources to assess the statistical significance of space-time interactions: a computationally intensive Monte Carlo simulation-based approach is used to evaluate the statistical significance of the space-time patterns underlying near-repeat events. Currently available software for identifying near-repeat patterns is not scalable to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis on large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the newly developed software with an existing implementation, assess the performance gain due to parallel computation, test the scalability of the software on large crime datasets, and assess the utility of the new software for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms that leverage high-performance computing, along with performance optimization techniques, can be used to develop software that is scalable to large datasets and provides solutions for computationally intensive, statistical simulation-based approaches in crime analysis.
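The simulation-based significance test can be sketched as a Knox-style permutation test; this is a simplified stand-in for the near-repeat calculator, with hypothetical space/time thresholds `ds` and `dt`:

```python
import random

def knox_statistic(events, ds, dt):
    """Count event pairs that are close in both space and time."""
    n, close = len(events), 0
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1, t1), (x2, y2, t2) = events[i], events[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= ds \
                    and abs(t1 - t2) <= dt:
                close += 1
    return close

def knox_pvalue(events, ds, dt, sims=999, seed=42):
    """Monte Carlo significance: shuffle event times to break any
    space-time link and see how often the shuffled statistic reaches
    the observed one.  The simulations are independent, which is why
    this loop parallelizes well."""
    rng = random.Random(seed)
    observed = knox_statistic(events, ds, dt)
    times = [t for _, _, t in events]
    hits = 0
    for _ in range(sims):
        rng.shuffle(times)
        permuted = [(x, y, t) for (x, y, _), t in zip(events, times)]
        if knox_statistic(permuted, ds, dt) >= observed:
            hits += 1
    return (hits + 1) / (sims + 1)
```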

14.
The concept of computing least-cost paths has been proposed and discussed for a few decades, but only in simplified forms due to the limited computational resources of the past. With the advancement of computer technology in speed and data storage, it is now possible to implement least-cost path algorithms under realistic conditions. In this paper, we present our implementations of least-cost paths that integrate viewshed information computed from digital elevation models. Our implementations and analyses include four possible types of paths: scenic paths, strategic paths, hidden paths, and withdrawn paths. Possible applications of these least-cost paths include civil engineering, military, and environmental planning, and other extensions can be formulated without much difficulty.
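A hedged sketch of the core machinery: Dijkstra's algorithm on a cost grid. In the simplest reading, the viewshed integration described above amounts to editing the cost surface before the search (e.g., inflating costs on visible cells to obtain a hidden path); the grid layout and 4-connectivity are assumptions:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a cost grid; path cost is the sum of entered cells
    (including the start cell).  A 'hidden path' is obtained simply by
    inflating `cost` on cells inside some viewshed before calling this."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return d, path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf"), []
```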

15.
Viewshed analysis remains one of the most popular GIS tools for assessing visibility, despite the recognition of several limitations when quantifying visibility from a human perspective. The visual significance of terrain is heavily influenced by the vertical dimension (i.e. slope, aspect and elevation) and by distance from the observer, neither of which is adjusted for in standard viewshed analyses. Based on these limitations, this study aimed to develop a methodology that extends the standard viewshed to represent the visible landscape as more realistically perceived by a human, called the ‘Vertical Visibility Index’ (VVI). This method is intended to overcome the primary limitations of the standard viewshed by calculating the vertical degrees of visibility between the eye level of a human and the top and bottom points of each visible cell in a viewshed. Next, the validity of the VVI was assessed using two comparison methods: 1) the known proportion of vegetation visible, as assessed through imagery for 10 locations; and 2) standard viewshed analysis for 50 viewpoints in an urban setting. While positive, significant correlations were observed between the VVI values and both comparators, the correlation was strongest between the VVI values and the image-verified known values (r = 0.863, p = 0.001). The validation results indicate that the VVI is a valid method that can be used as an improvement on standard viewshed analyses for the accurate representation of landscape visibility from a human perspective.
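The per-cell quantity described above, the vertical angle between eye level and the top and bottom of each visible cell, can be sketched directly; the `(distance, bottom, top)` cell representation is an assumption for illustration:

```python
import math

def vertical_visibility_index(eye_height, cells):
    """Sum of vertical angles (in degrees) subtended by visible cells.
    Each cell is (distance, bottom_elev, top_elev), with elevations
    measured in the same datum as the viewer's eye height."""
    total = 0.0
    for dist, bottom, top in cells:
        ang_top = math.degrees(math.atan2(top - eye_height, dist))
        ang_bottom = math.degrees(math.atan2(bottom - eye_height, dist))
        total += ang_top - ang_bottom
    return total
```

Note how the formulation automatically down-weights distant cells: the same object subtends a smaller vertical angle the farther it is, which is exactly the distance effect a plain binary viewshed ignores.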

16.
GPU-Accelerated Polygon Overlay Analysis
Overlay analysis is one of the most important analysis functions of a GIS, and overlaying polygon layers is time-consuming. To address this, the GPU is applied to two stages of polygon overlay analysis: MBR filtering and polygon clipping. For the MBR filtering stage, a GPU-based filtering algorithm implemented with histograms and parallel prefix sums is proposed. For the polygon clipping stage, a GPU-based clipping algorithm is proposed by improving the Weiler-Atherton algorithm with a new intersection-insertion method and a simplified entry/exit point marking scheme, combined with the parallel prefix-sum algorithm. For the load imbalance that may arise during implementation, a load-balancing method based on dynamic programming is given. Applying these algorithms accelerates both the filtering and refinement stages. Experimental results show that the GPU-based MBR filtering achieves a speed-up of 3.8 over the CPU implementation, and the GPU-based polygon clipping is 3.4 times faster than the CPU implementation. Overall, compared with the CPU implementation, GPU-accelerated polygon overlay provides a speed-up of more than 3.
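A sequential sketch of the MBR filtering stage (the GPU version distributes the pair tests across threads and compacts the hits with a parallel prefix sum); representing polygons as plain vertex lists is an assumption for illustration:

```python
def mbr(polygon):
    """Minimum bounding rectangle of a polygon given as (x, y) vertices."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return min(xs), min(ys), max(xs), max(ys)

def mbr_filter(layer_a, layer_b):
    """Filter step of polygon overlay: keep only polygon pairs whose
    bounding rectangles intersect; the exact clipping stage then runs
    on this much smaller candidate list."""
    candidates = []
    for i, pa in enumerate(layer_a):
        ax1, ay1, ax2, ay2 = mbr(pa)
        for j, pb in enumerate(layer_b):
            bx1, by1, bx2, by2 = mbr(pb)
            # Standard axis-aligned rectangle overlap test.
            if ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2:
                candidates.append((i, j))
    return candidates
```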

17.
Hybrid terrain models combine large regular data sets and high-resolution irregular meshes [triangulated irregular network (TIN)] for topographically and morphologically complex terrain features such as man-made microstructures or cliffs. In this paper, a new method to generate and visualize this kind of 3D hybrid terrain models is presented. This method can integrate geographic data sets from multiple sources without a remeshing process to combine the heterogeneous data of the different models. At the same time, the original data sets are preserved without modification, and, thus, TIN meshes can be easily edited and replaced, among other features. Specifically, our approach is based on the utilization of the external edges of convexified TINs as the fundamental primitive to tessellate the space between both types of meshes. Our proposal is eminently parallel, requires only a minimal preprocessing phase, and minimizes the storage requirements when compared with the previous proposals.

18.
This article provides a decentralized and coordinate-free algorithm, called decentralized gradient field (DGraF), to identify critical points (peaks, pits, and passes) and the topological structure of the surface network connecting those critical points. Algorithms that can operate in the network without centralized control and without coordinates are important in emerging resource-constrained spatial computing environments, in particular geosensor networks. Our approach accounts for the discrepancies between finite-granularity sensor data and the underlying continuous field, ignored by previous work. Empirical evaluation shows that our DGraF algorithm can improve the accuracy of critical-point identification when compared with the current state-of-the-art decentralized algorithm, and matches the accuracy of a centralized algorithm for peaks and pits. The DGraF algorithm is efficient, requiring O(n) overall communication complexity, where n is the number of nodes in the geosensor network. Further, empirical investigations of our algorithm across a range of simulations demonstrate improved load balance of DGraF when compared with an existing decentralized algorithm. Our investigation highlights a number of important issues for future research on the detection of holes and the monitoring of dynamic events in a field.
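A minimal, coordinate-free sketch of the peak/pit classification (not the DGraF protocol itself, which runs on in-network message passing): each node needs only its own sensed value and its neighbours' values, no coordinates:

```python
def critical_points(field, neighbours):
    """Classify nodes of a sensor network: a peak has no higher neighbour,
    a pit no lower one.  `field` maps node -> sensed value; `neighbours`
    maps node -> list of adjacent nodes.  No coordinates are needed,
    matching the coordinate-free setting."""
    peaks, pits = set(), set()
    for node, value in field.items():
        vals = [field[n] for n in neighbours[node]]
        if all(value > v for v in vals):
            peaks.add(node)
        if all(value < v for v in vals):
            pits.add(node)
    return peaks, pits
```

In a geosensor network, each node evaluates its own test after one round of neighbour broadcasts, which is what keeps the overall communication cost O(n).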

19.
Ocean Surface Wind Field Retrieval Based on Maximum Likelihood Estimation
Maximum likelihood estimation is regarded as the best method for ocean surface wind field retrieval and is currently used to process SeaWinds scatterometer data. The wind vector solution algorithm is the core of wind field retrieval. The form of the maximum-likelihood objective function requires that the wind vector be solved numerically during retrieval, and traditional numerical solution methods are computationally expensive. This paper examines the basic principle and procedure of maximum-likelihood wind field retrieval and, based on the general distribution characteristics of the objective function, proposes a more efficient numerical wind-vector search algorithm. The feasibility of the algorithm is verified using measured SeaWinds scatterometer L2A data and the corresponding L2B data.

20.
Distances from points to the closest shorelines in a given direction are used, for example, in some models for estimating wave exposure. Such distances, also called fetch lengths, can be determined using standard geographic information systems. However, performance may be a problem if these distances are required for a great number of study points. Two new algorithms for determining fetch lengths for study points in the same directions are presented in this paper. It is assumed that the two-dimensional map is stored in vector format, i.e. shorelines of islands and the mainland are stored as polygons. The first algorithm works on a set of undirected line segments derived from the shoreline polygons. The second works on a raster representation of the map, and saves memory by postponing rasterisation until necessary. Both of the new algorithms are markedly more efficient than a previously reported algorithm when the number of study points is large.
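In the vector-based setting, computing a single fetch length reduces to intersecting a ray with the shoreline segments and keeping the nearest hit; this is a generic ray-segment sketch, not the paper's optimized batch algorithm:

```python
def fetch_length(origin, direction, segments, max_fetch=float("inf")):
    """Distance from `origin` along the unit vector `direction` to the
    nearest shoreline segment; `max_fetch` is returned if nothing is hit."""
    ox, oy = origin
    dx, dy = direction
    best = max_fetch
    for (x1, y1), (x2, y2) in segments:
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:
            continue  # ray parallel to the segment
        # Solve origin + t*d == (x1, y1) + u*e for t (ray) and u (segment).
        t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
        u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
        if t >= 0 and 0 <= u <= 1:
            best = min(best, t)
    return best
```

The batch problem the paper targets is this computation repeated over many study points in the same directions, which is where shared preprocessing of the segments pays off.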


Copyright©北京勤云科技发展有限公司  京ICP备09084417号