Similar Documents
20 similar documents found (search time: 31 ms)
1.
This study presents a massively parallel spatial computing approach that uses general-purpose graphics processing units (GPUs) to accelerate Ripley's K function for univariate spatial point pattern analysis. Ripley's K function is a representative spatial point pattern analysis approach that quantitatively evaluates the spatial dispersion characteristics of point patterns, but it often requires considerable computation when applied to large spatial data. In this study, we developed a massively parallel formulation of Ripley's K function, using GPUs, whose many-core architecture provides a massively parallel platform, to speed up the analysis. Variable-grained domain decomposition and thread-level synchronization based on shared memory are the parallel strategies designed to exploit concurrency in the spatial algorithm of Ripley's K function for efficient parallelization. Experimental results demonstrate that substantial acceleration is obtained when Ripley's K function is parallelized within GPU environments.
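As an illustration of the computation being accelerated, the following is a minimal NumPy sketch of the uncorrected univariate Ripley's K estimator. The paper's GPU version maps the per-point distance test to individual threads; the vectorized inner loop below plays that role on the CPU. The names and the absence of edge correction are simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def ripleys_k(points, radii, area):
    """points: (n, 2) coordinates; radii: 1-D array of distances r; area: study-area size."""
    n = len(points)
    counts = np.zeros(len(radii))
    for i in range(n):                        # on a GPU, each i would be one thread
        d = np.hypot(*(points - points[i]).T)
        d[i] = np.inf                         # exclude the self-pair
        counts += (d[:, None] <= radii).sum(axis=0)
    return area * counts / (n * (n - 1))      # estimator without edge correction

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(500, 2))
print(ripleys_k(pts, np.array([5.0, 10.0, 20.0]), area=100 * 100))
```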

2.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to large point data sets, which are common in this big data era. This article presents a graphics processing unit (GPU)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations improved performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores, the GPU-enabled algorithm accelerated point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as data set size increases. Given the significant acceleration brought by the GPU-enabled adaptive KDE algorithm, point pattern analysis with the adaptive KDE approach can be performed efficiently on large point data sets. Point pattern analysis on spatial big data, computationally prohibitive with the sequential algorithm, can be conducted routinely with the GPU-accelerated algorithm. The GPU-accelerated adaptive KDE approach thus contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
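The bandwidth-determination step that dominates the cost can be sketched as follows, using Abramson-style adaptive bandwidths derived from a fixed-bandwidth pilot estimate. The quadratic pilot loop is the kind of work the paper optimizes and offloads to the GPU; the specific rule, kernel, and names are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def adaptive_bandwidths(points, h0):
    """Pilot fixed-bandwidth KDE, then per-point bandwidths h_i = h0 * (f_i / g)^(-1/2)."""
    n = len(points)
    pilot = np.zeros(n)
    for i in range(n):                              # O(n^2): the costly part on the CPU
        d2 = ((points - points[i]) ** 2).sum(axis=1)
        pilot[i] = np.exp(-d2 / (2 * h0**2)).sum() / (n * 2 * np.pi * h0**2)
    g = np.exp(np.log(pilot).mean())                # geometric mean of pilot densities
    return h0 * np.sqrt(g / pilot)                  # larger bandwidth where density is low

rng = np.random.default_rng(1)
pts = rng.normal(size=(1000, 2))
print(adaptive_bandwidths(pts, h0=0.3)[:5])
```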

3.
The aim of site planning based on multiple viewshed analysis is to select the minimum number of viewpoints that maximize visual coverage over a given terrain. However, increasingly high-resolution terrain data mean that the number of terrain points grows rapidly, and with it the computational requirements of multiple viewshed site planning. In this article, we propose a fast Candidate Viewpoints Filtering (CVF) algorithm for multiple viewshed site planning to lay a foundation for optimized viewpoint selection. First, terrain feature points are selected as candidate viewpoints. These candidate viewpoints are then clustered, and those belonging to each cluster are sorted according to the index of viewshed contribution (IVC). Finally, the candidate viewpoints with relatively low viewshed contribution rates are removed gradually using the CVF algorithm; in this way, the viewpoints with high viewshed contribution are preserved, and the number of preserved viewpoints can be controlled by the number of clusters. To evaluate the effectiveness of the CVF algorithm, we compare it with the Region Partitioning for Filtering (RPF) and Simulated Annealing (SA) algorithms. Experimental results show that the CVF algorithm achieves substantial improvements in both computational efficiency and total viewshed coverage rate.
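A toy sketch of the filtering idea follows: cluster the candidate viewpoints, rank each cluster's members by an index of viewshed contribution (taken here simply as viewshed size), and keep the top contributor per cluster. The coarse grid clustering and the mocked viewsheds are assumptions for brevity, not the paper's exact procedure.

```python
import numpy as np

def cvf_filter(xy, viewsheds, cell=25.0):
    """xy: (n, 2) viewpoint coordinates; viewsheds: list of sets of visible cell ids."""
    clusters = {}
    for i, (x, y) in enumerate(xy):
        clusters.setdefault((int(x // cell), int(y // cell)), []).append(i)
    kept = []
    for members in clusters.values():
        # index of viewshed contribution (IVC) ~ viewshed size in this sketch
        kept.append(max(members, key=lambda i: len(viewsheds[i])))
    return kept  # the number kept is controlled by the number of clusters

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(200, 2))
vs = [set(rng.integers(0, 5000, size=rng.integers(50, 500))) for _ in range(200)]
print(len(cvf_filter(xy, vs)), "candidate viewpoints preserved")
```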

4.
As increasingly large-scale and higher-resolution terrain data have become available, for example from airborne and space-borne sensors, the volume of these datasets reveals scalability problems with existing GIS algorithms. To address this problem, we developed a serial algorithm to generate viewsheds on large grid-based digital elevation models (DEMs). We first divided the whole DEM into rectangular blocks in the row and column directions (block partitioning), then processed these blocks sequentially along four axes followed by four sectors. When processing a particular block, we adopted the 'reference plane' algorithm to calculate the visibility of target points in the block, and adjusted the calculation sequence according to the spatial relationship between the block and the viewpoint, since the viewpoint is not always inside the DEM. By adopting the reference plane algorithm and using block partitioning to segment and load the DEM dynamically, it is possible to generate viewsheds efficiently in PC-based environments. Experiments showed that each partitioned block should be dynamically loaded whole into main memory, and that the suggested approach retains the accuracy of the reference plane algorithm and has near-linear computational complexity.
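A minimal sketch of the block-partitioning idea, under the assumption of a raw binary DEM file: the DEM is divided into rectangular row/column blocks, and each block is pulled into memory only while it is being processed. The per-block reference-plane visibility test itself is elided.

```python
import numpy as np

def iter_blocks(dem_path, shape, block=(1024, 1024), dtype=np.float32):
    dem = np.memmap(dem_path, dtype=dtype, mode="r", shape=shape)
    for r0 in range(0, shape[0], block[0]):
        for c0 in range(0, shape[1], block[1]):
            # copy one block into main memory, release it after processing
            yield (r0, c0), np.asarray(dem[r0:r0 + block[0], c0:c0 + block[1]])

# Demo: write a small DEM to disk, then stream it back block by block.
np.arange(64 * 64, dtype=np.float32).tofile("dem.bin")
for (r0, c0), z in iter_blocks("dem.bin", (64, 64), block=(32, 32)):
    print((r0, c0), float(z.max()))   # the reference-plane test would run here
```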

5.
A rapid and flexible parallel approach for viewshed computation on large digital elevation models is presented. Our work focuses on the implementation of a derivative of the R2 viewshed algorithm. Emphasis has been placed on the input/output (IO) efficiency that can be achieved by memory segmentation and coalesced memory access. An implementation of the parallel viewshed algorithm on the Compute Unified Device Architecture (CUDA), which exploits the high parallelism of the graphics processing unit, is presented; this version is referred to as r.cuda.visibility. The accuracy of our algorithm is compared to that of the r.los R3 algorithm (integrated into the open-source Geographic Resources Analysis Support System (GRASS) geographic information system environment) and other IO-efficient algorithms. Our results demonstrate that the proposed implementation of the R2 algorithm is faster and more IO efficient than previously presented IO-efficient algorithms, and that it achieves moderate calculation precision compared to the R3 algorithm. Thus, to the best of our knowledge, the algorithm presented here is the most efficient viewshed approach, in terms of computational speed, for large data sets.
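For reference, here is a simple CPU sketch of the R2 idea that the paper ports to CUDA: one ray is cast to every boundary cell, and a running maximum elevation angle along each ray decides visibility, so each ray maps naturally to one GPU thread. Using the step index as the distance proxy is a simplification of this sketch, not of the paper.

```python
import numpy as np

def r2_viewshed(dem, vr, vc, observer_h=1.7):
    rows, cols = dem.shape
    vis = np.zeros_like(dem, dtype=bool)
    vis[vr, vc] = True
    z0 = dem[vr, vc] + observer_h
    border = [(r, c) for r in (0, rows - 1) for c in range(cols)] + \
             [(r, c) for c in (0, cols - 1) for r in range(rows)]
    for tr, tc in border:                      # one ray per boundary cell
        steps = max(abs(tr - vr), abs(tc - vc))
        max_tan = -np.inf
        for s in range(1, steps + 1):          # walk outward along the ray
            r = vr + round((tr - vr) * s / steps)
            c = vc + round((tc - vc) * s / steps)
            tan_a = (dem[r, c] - z0) / s       # step count as distance proxy
            if tan_a >= max_tan:
                vis[r, c] = True
                max_tan = tan_a
    return vis

dem = np.random.default_rng(3).random((101, 101)) * 50
print("visible fraction:", r2_viewshed(dem, 50, 50).mean())
```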

6.
As an important spatiotemporal simulation approach and an effective tool for developing and examining spatial optimization strategies (e.g., land allocation and planning), geospatial cellular automata (CA) models often require multiple data layers and consist of complicated algorithms in order to deal with the complex dynamic processes of interest and the intricate relationships and interactions between those processes and their driving factors. Massive amounts of data may also be used in CA simulations, as high-resolution geospatial and non-spatial data are widely available. Geospatial CA models can therefore be both computationally intensive and data intensive, demanding long computing times and vast memory space. Based on a hybrid parallelism that combines processes with distributed memory and threads with global memory, we developed a parallel geospatial CA model for urban growth simulation on a heterogeneous computer architecture composed of multiple central processing units (CPUs) and graphics processing units (GPUs). Experiments with datasets of California showed that the overall computing time for a 50-year simulation dropped from 13,647 seconds on a single CPU to 32 seconds using 64 GPU/CPU nodes. We conclude that hybrid parallelism of geospatial CA over emerging heterogeneous computer architectures provides scalable solutions for complex simulations and optimizations with massive amounts of data that were previously infeasible, sometimes impossible, using individual computing approaches.
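The structure of such a hybrid scheme can be sketched as follows: the lattice is decomposed into row strips (one per process or GPU in the paper; a plain loop here), and each strip is updated with a vectorized neighborhood rule. The logistic transition rule and all parameters are toy assumptions, not the paper's calibrated model.

```python
import numpy as np

def grow_step(strip, padded):
    # count developed neighbors in the 3x3 Moore neighborhood via shifted slices
    nbrs = sum(padded[1 + dr : 1 + dr + strip.shape[0], 1 + dc : 1 + dc + strip.shape[1]]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)) - strip
    p = 1 / (1 + np.exp(-(nbrs - 4.0)))        # toy development probability
    rnd = np.random.default_rng(0).random(strip.shape)
    return strip | ((rnd < p) & (strip == 0))

land = (np.random.default_rng(4).random((400, 400)) > 0.98).astype(np.uint8)
pad = np.pad(land, 1)                          # shared halo, as in a ghost-zone exchange
strips = []
for k in range(4):                             # each strip -> one GPU/CPU node in the paper
    r0, r1 = k * 100, (k + 1) * 100
    strips.append(grow_step(land[r0:r1], pad[r0:r0 + 102]))
land = np.vstack(strips)
print("developed cells after one step:", int(land.sum()))
```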

7.
Large spatial interpolation problems present significant computational challenges even for the fastest workstations. In this paper we demonstrate how parallel processing can be used to reduce computation times to levels that are suitable for interactive interpolation analyses of large spatial databases. Though the approach developed in this paper can be used with a wide variety of interpolation algorithms, we specifically contrast the results obtained from a global 'brute force' inverse-distance weighted interpolation algorithm with those obtained using a much more efficient local approach. The parallel versions of both implementations are superior to their sequential counterparts. However, the local version of the parallel algorithm provides the best overall performance.
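The contrast between the two interpolators can be sketched as below: a global 'brute force' IDW that weights every sample versus a local IDW that weights only the k nearest samples found with a KD-tree (SciPy's cKDTree here, an implementation choice, not the paper's). Either loop over output locations parallelizes trivially by assigning blocks of targets to workers.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_global(samples, values, targets, p=2.0):
    # all-pairs distances: every sample contributes to every target
    d = np.linalg.norm(targets[:, None, :] - samples[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** p
    return (w * values).sum(axis=1) / w.sum(axis=1)

def idw_local(samples, values, targets, k=12, p=2.0):
    d, idx = cKDTree(samples).query(targets, k=k)   # only k nearest samples
    w = 1.0 / np.maximum(d, 1e-12) ** p
    return (w * values[idx]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
pts, z = rng.uniform(0, 1, (2000, 2)), rng.random(2000)
grid = rng.uniform(0, 1, (1000, 2))
print(np.abs(idw_global(pts, z, grid) - idw_local(pts, z, grid)).max())
```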

8.
As one of the core functions of GIS, spatial analysis is evolving toward massive data volumes and increasingly complex analytical processes, and traditional serial algorithms can no longer meet the demands placed on spatial analysis in terms of computational efficiency and performance; parallel spatial analysis algorithms have therefore attracted growing attention as an effective way to address these problems. After briefly introducing spatial analysis methods and parallel computing technology, this article reviews current research progress on parallel spatial analysis algorithms from two aspects, vector algorithms and raster algorithms; comments on the development directions and open problems of parallel spatial analysis algorithms given the special characteristics of spatial data; and discusses the opportunities and challenges facing the design of parallel spatial analysis algorithms against the backdrop of rapidly advancing computer hardware and software.

9.
The continually increasing size of geospatial data sets poses a computational challenge when conducting interactive visual analytics using conventional desktop-based visualization tools. In recent decades, improvements in parallel visualization using state-of-the-art computing techniques have significantly enhanced our capacity to analyse massive geospatial data sets. However, only a few strategies have been developed to maximize the utilization of parallel computing resources to support interactive visualization. In particular, an efficient visualization intensity prediction component is lacking from most existing parallel visualization frameworks. In this study, we propose a data-driven, view-dependent visualization intensity prediction method, which can dynamically predict the visualization intensity based on the distribution patterns of spatio-temporal data. The predicted results are used to schedule the allocation of visualization tasks. We integrated this strategy with a parallel visualization system deployed in a Compute Unified Device Architecture (CUDA)-enabled graphics processing unit (GPU) cloud. To evaluate the flexibility of this strategy, we performed experiments using dust storm data sets produced by a regional climate model. The results showed that the proposed method yields stable and accurate prediction results with acceptable computational overheads under different types of interactive visualization operations. The results also showed that our strategy improves overall visualization efficiency by incorporating intensity-based scheduling.

10.
With the wide adoption of big spatial data and the emergence of CyberGIS, the nontrivial computational intensity introduced by massive amounts of data poses great challenges to the performance of vector map visualization. Parallel computing technologies provide promising solutions to such problems. Evenly decomposing the visualization task into multiple subtasks is one of the key issues in parallel visualization of vector data. This study focuses on the decomposition of polyline and polygon data for parallel visualization. Two key factors impacting the computational intensity were identified: the number of features and the number of vertices per feature. Computational intensity transform functions (CITFs) were constructed based on the linear relationships between these factors and computing time. A computational intensity grid (CIG) can then be constructed using the CITFs to represent the spatial distribution of computational intensity. A noninterlaced continuous space-filling curve is used to group the lattices of the CIG into multiple sub-domains such that each sub-domain entails the same amount of computational intensity as the others. The experiments demonstrated that the proposed approach is able to effectively estimate and spatially represent the computational intensity of visualizing polylines and polygons. Compared with regular domain decomposition methods, the new approach generated a much more balanced decomposition of computational intensity for parallel visualization and achieved near-linear speedups, especially when the data are highly heterogeneously distributed in space.
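The decomposition step can be sketched as follows: cells of a computational intensity grid are ordered along a space-filling curve (Morton/Z-order here, whereas the paper uses a noninterlaced continuous curve) and the ordered run is cut into sub-domains of roughly equal total intensity. Data and parameters are illustrative.

```python
import numpy as np

def morton(r, c, bits=16):
    # interleave the bits of (r, c) to get the Z-order curve position
    code = 0
    for b in range(bits):
        code |= ((r >> b) & 1) << (2 * b + 1) | ((c >> b) & 1) << (2 * b)
    return code

def decompose(cig, n_parts):
    rows, cols = cig.shape
    cells = sorted(((r, c) for r in range(rows) for c in range(cols)),
                   key=lambda rc: morton(*rc))
    target = cig.sum() / n_parts
    parts, current, acc = [], [], 0.0
    for rc in cells:                 # greedy cut of the curve into equal-work runs
        current.append(rc)
        acc += cig[rc]
        if acc >= target and len(parts) < n_parts - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts                     # each part: a contiguous curve segment

cig = np.random.default_rng(6).random((32, 32)) ** 3   # skewed intensity surface
sizes = [sum(cig[rc] for rc in p) for p in decompose(cig, 4)]
print([round(s, 1) for s in sizes])                    # near-equal workloads
```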

11.
Cellular automata (CA) models can simulate complex urban systems through simple rules and have become important tools for studying the spatio-temporal evolution of urban land use. However, the multiple large-volume data layers, massive geospatial processing, and complicated algorithms for automatic calibration in urban CA models require a high level of computational capability. Unfortunately, the limited performance of sequential computation on a single computing unit (i.e., a central processing unit (CPU) or a graphics processing unit (GPU)) and the high cost of parallel design and programming make it difficult to establish a high-performance urban CA model. As a result of its powerful computational ability and scalability, the vectorization paradigm is becoming increasingly important and has received wide attention with regard to this kind of computational problem. This paper presents a high-performance CA model using vectorization and parallel computing technology for the computation-intensive and data-intensive geospatial processing in urban simulation. To transform the original algorithm into a vectorized algorithm, we define the neighborhood set of the cell space and improve the operation paradigm of neighborhood computation, transition probability calculation, and cell state transition. The experiments undertaken in this study demonstrate that the vectorized algorithm can greatly reduce computation time, especially in the environment of a vector programming language, and that it is possible to parallelize the algorithm as the data volume increases. The execution time of the simulation at 5-m resolution with a 3 × 3 neighborhood decreased from 38,220.43 s to 803.36 s with the vectorized algorithm, and was further shortened to 476.54 s by dividing the domain across four computing units. The experiments also indicated that the computational efficiency of the vectorized algorithm is closely related to the neighborhood size and configuration, as well as the shape of the research domain. We conclude that the combination of vectorization and parallel computing technology can provide scalable solutions that significantly improve the applicability of urban CA.
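A sketch of the vectorization step: the per-cell neighborhood loop is replaced by a single whole-grid convolution, so neighborhood counts, transition probabilities, and state flips all become array operations. The 3 × 3 kernel follows the experiment above; the transition rule is a toy stand-in, not the paper's calibrated model.

```python
import numpy as np
from scipy.ndimage import convolve

state = (np.random.default_rng(7).random((2000, 2000)) > 0.99).astype(np.uint8)
kernel = np.ones((3, 3), dtype=np.uint8)
kernel[1, 1] = 0                                       # exclude the center cell

nbr_count = convolve(state, kernel, mode="constant")   # one pass, no per-cell loop
prob = nbr_count / 8.0                                 # toy transition probability
flips = (np.random.default_rng(8).random(state.shape) < prob) & (state == 0)
state |= flips.astype(np.uint8)                        # whole-grid state transition
print("newly developed cells:", int(flips.sum()))
```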

12.
Research progress in parallel computing for distributed hydrological models
Distributed hydrological simulation of large basins at high resolution with multiple coupled processes is computationally demanding; traditional serial computing cannot satisfy its need for computing power, so the support of parallel computing is required. This paper first analyzes the parallelizability of distributed hydrological models from three perspectives (space, time, and sub-processes), argues that spatial decomposition is the preferred approach to parallelizing distributed hydrological models, and classifies hydrological sub-process computation methods and distributed hydrological models from the perspective of spatial decomposition. It then reviews the state of research on parallel computing for distributed hydrological models: in spatial-decomposition parallelism, most existing studies take the sub-basin as the basic scheduling unit of parallel computation, while for parallelism in the time dimension, some researchers have carried out preliminary studies on parallel methods that discretize both the spatial and temporal domains. Finally, key open problems and future directions are discussed in three respects: parallel algorithm design, parallel computing frameworks for integrated watershed-system simulation, and high-performance data input/output methods that support parallel computing.

13.
Selecting the set of candidate viewpoints (CVs) is one of the most important procedures in multiple viewshed analysis. However, the quantity of CVs remains excessive even when only terrain feature points are selected. Here we propose the Region Partitioning for Filtering (RPF) algorithm, which uses a region partitioning method to filter the CVs of a multiple viewshed. The region partitioning method decomposes the entire area into several regions, and the quality of a CV can be evaluated by summarizing its viewshed area in each region. First, the RPF algorithm assigns each CV to the region in which it has a larger viewshed than in any other region. Then, in each iterative step, CVs with relatively small viewshed areas are removed from their original regions or reassigned to other regions. In this way, a set of high-quality CVs can be preserved, and the size of the preserved set can be controlled by the RPF algorithm. To evaluate the computational efficiency of the RPF algorithm, its performance was compared with the simple random (SR), simulated annealing (SA), and ant colony optimization (ACO) algorithms. Experimental results indicate that the RPF algorithm provides more than a 20% improvement over the SR algorithm and that, on average, its computation time is 63% of that of the ACO algorithm.

14.
High-performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and decomposing domains for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in evaluating computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition, which divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing across different cases. Comparative experiments between this approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection of vector data. The results not only demonstrate the advantage of the approach but also provide hints on how traditional GIS computation can be improved by AI machine learning.
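A minimal sketch of the idea, with everything illustrative (features, model, and synthetic timings): learn a regressor that maps cheap features of a spatial subdivision, such as feature and vertex counts, to observed computing time, then use its predictions as the computational-intensity surface that drives decomposition.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n_features = rng.integers(1, 1000, size=500)          # per-tile feature counts
n_vertices = n_features * rng.integers(4, 200, size=500)
# synthetic "observed" runtimes with noise, standing in for profiled timings
runtime = 0.002 * n_features + 1e-5 * n_vertices + rng.normal(0, 0.05, 500)

X = np.column_stack([n_features, n_vertices])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, runtime)

tiles = np.column_stack([rng.integers(1, 1000, 50), rng.integers(1, 200000, 50)])
cit = model.predict(tiles)                            # predicted intensity per tile
print("heaviest tile:", int(cit.argmax()), "predicted", round(float(cit.max()), 2), "s")
```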

15.
Crime often clusters in space and time. Near-repeat patterns improve understanding of crime communicability and its space-time interactions. Near-repeat analysis requires extensive computing resources to assess the statistical significance of space-time interactions: a computationally intensive Monte Carlo simulation-based approach is used to evaluate the statistical significance of the space-time patterns underlying near-repeat events. Currently available software for identifying near-repeat patterns does not scale to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis in large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the newly developed software with an existing implementation, assess the performance gain due to parallel computation, test the scalability of the software on large crime datasets, and assess the utility of the new software for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms leveraging high-performance computing, along with performance optimization techniques, can be used to develop software that scales to large datasets and provides solutions for computationally intensive, statistical simulation-based approaches in crime analysis.
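The simulation at the core of such an analysis can be sketched as a Knox-style count of event pairs close in both space and time, with significance from Monte Carlo permutation of the timestamps; the permutations are independent and so are farmed out to worker processes. Thresholds and data are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def knox_count(args):
    xy, t, ds, dt = args
    sd = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    td = np.abs(t[:, None] - t[None, :])
    pairs = (sd <= ds) & (td <= dt)
    return (pairs.sum() - len(t)) // 2          # upper triangle, no self-pairs

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    xy, t = rng.uniform(0, 1000, (400, 2)), rng.uniform(0, 365, 400)
    observed = knox_count((xy, t, 100.0, 7.0))
    sims = [(xy, rng.permutation(t), 100.0, 7.0) for _ in range(99)]
    with Pool() as pool:
        null = pool.map(knox_count, sims)       # one permutation per worker task
    p = (1 + sum(n >= observed for n in null)) / 100
    print(f"observed={observed}, p={p:.2f}")
```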

16.
Viewshed analysis remains one of the most popular GIS tools for assessing visibility, despite the recognition of several limitations when quantifying visibility from a human perspective. The visual significance of terrain is heavily influenced by the vertical dimension (i.e., slope, aspect, and elevation) and by distance from the observer, neither of which is adjusted for in standard viewshed analyses. Based on these limitations, this study aimed to develop a methodology that extends the standard viewshed to represent the visible landscape as it is more realistically perceived by a human, called the 'Vertical Visibility Index' (VVI). This method is intended to overcome the primary limitations of the standard viewshed by calculating the vertical degrees of visibility between the eye level of a human observer and the top and bottom points of each visible cell in a viewshed. The validity of the VVI was then assessed using two comparison methods: (1) the known proportion of visible vegetation, assessed through imagery for 10 locations; and (2) standard viewshed analysis for 50 viewpoints in an urban setting. While positive, significant correlations were observed between the VVI values and both comparators, the correlation was strongest between the VVI values and the image-verified, known values (r = 0.863, p = 0.001). The validation results indicate that the VVI is a valid method that improves on standard viewshed analyses for accurately representing landscape visibility from a human perspective.
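The core VVI computation described above can be sketched directly: for each visible cell, take the vertical angle subtended between its top and bottom relative to the observer's eye level, and sum over the viewshed. The visible-cell distances and heights below are mocked inputs that a standard viewshed and surface model would supply in practice.

```python
import numpy as np

def vvi(eye_z, cell_dist, cell_base_z, cell_top_z):
    """All arguments are 1-D arrays over the visible cells of a viewshed."""
    top = np.degrees(np.arctan2(cell_top_z - eye_z, cell_dist))
    bottom = np.degrees(np.arctan2(cell_base_z - eye_z, cell_dist))
    return np.sum(top - bottom)   # total vertical degrees of visible surface

rng = np.random.default_rng(11)
dist = rng.uniform(5, 500, 1000)          # distance to each visible cell (m)
base = rng.uniform(-5, 20, 1000)          # cell base elevation near observer level (m)
top = base + rng.uniform(0.1, 15, 1000)   # e.g., vegetation or building height
print(round(vvi(eye_z=1.6, cell_dist=dist, cell_base_z=base, cell_top_z=top), 1))
```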

17.
The impact of grid technology on the development of GIS
As a new generation of Web technology, grid technology is bound to profoundly influence the development of GIS. Grid computing provides resource support for data-intensive spatial analysis, and data grids offer an integrated solution for the distributed storage, management, transmission, and analysis of massive spatial data. Grid technology also suggests a new approach to real-time scene rendering and massive scene-data storage in VRGIS, and to solving GIS interoperability problems. Dynamically assembling application software from intelligent-agent components on the grid will profoundly change how GIS applications are developed. Building a spatial information grid could enable leapfrog development of China's GIS industry.

18.
Comparing models of debris-flow susceptibility in the alpine environment
Debris flows are widespread in Val di Fassa (Trento Province, Eastern Italian Alps), where they constitute one of the most dangerous gravity-induced surface processes. From a large set of environmental characteristics and a detailed inventory of debris flows, we developed five models to predict the location of debris-flow source areas. The models differ in approach (statistical vs. physically based) and in the type of terrain unit of reference (slope unit vs. grid cell). In the statistical models, a mix of several environmental factors classified areas with different debris-flow susceptibility; however, the factors that exert strong discriminant power reduce to conditions of high slope gradient, pasture or no vegetation cover, availability of detrital material, and active erosional processes. Since slope and land use are also used in the physically based approach, all model results are largely controlled by the same leading variables. Overlaying susceptibility maps produced by the different methods (statistical vs. physically based) for the same terrain unit of reference (grid cell) reveals a large difference: nearly 25% spatial mismatch. The spatial discrepancy exceeds 30% for susceptibility maps generated by the same method (discriminant analysis) but different terrain units (slope unit vs. grid cell). The size of the terrain unit also led to different susceptibility maps (almost 20% spatial mismatch). Maps based on different statistical tools (discriminant analysis vs. logistic regression) differed least (less than 10%). Hence, method and terrain unit proved to be equally important in mapping susceptibility. Model performance was evaluated from the percentage of terrain units that each model correctly classifies, the number of debris flows falling within the area classified as unstable by each model, and the metric of ROC curves. Although all the techniques implemented yielded essentially comparable results, the discriminant model based on partitioning the study area into small slope units may constitute the most suitable approach to regional debris-flow assessment in the Alpine environment.
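The overlay comparison used above reduces to a simple computation: the spatial mismatch between two susceptibility maps is the share of terrain units the two models classify differently. The maps below are random stand-ins constructed only to exercise the function.

```python
import numpy as np

def spatial_mismatch(map_a, map_b):
    return 100.0 * np.mean(map_a != map_b)   # % of cells classified differently

rng = np.random.default_rng(12)
statistical = rng.random((500, 500)) > 0.7                 # unstable = True
physical = statistical ^ (rng.random((500, 500)) > 0.75)   # perturbed copy
print(f"{spatial_mismatch(statistical, physical):.1f}% spatial mismatch")
```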

19.
The concept of cloud GIS and research progress
林德根, 梁勤欧. 《地理科学进展》 (Progress in Geography), 2012, 31(11): 1519-1528
Cloud computing will be the next-generation computing platform, and its development is bound to drive the development of geographic information systems, a discipline closely tied to computational science. Using a literature-analysis approach, this paper reviews the characteristics of cloud computing and the concept, key technologies, and scientific problems of cloud GIS. Cloud GIS uses cloud infrastructure to obtain large-scale computing power for key scientific problems in GIS, including the distributed storage of massive spatial data, the partitioning of processing tasks, query and retrieval, interoperability, and virtualization, thereby improving GIS data processing and management and providing high-performance processing for computation-intensive and data-intensive GIS services. Its essence lies in the cloud characteristics of spatial data and the cloud-computing characteristics of spatial data management. The paper also argues that cloud GIS will break GIS out of its established 'professional circle', achieving a revolutionary breakthrough for GIS itself and greatly expanding its market. Cloud GIS platforms are then introduced, along with the strengths and weaknesses of current cloud GIS development. Finally, the prospects for cloud GIS research in China are considered from three aspects: technological trends of the cloud GIS model, application demands, and demands for education and professional training.
