Similar Literature
20 similar documents retrieved
1.
The demand for parallel geocomputation based on raster data is constantly increasing as the volume of raster data used in applications and the complexity of geocomputation processing grow. The difficulty of parallel programming and the poor portability of parallel programs between different parallel computing platforms greatly limit the development and application of parallel raster-based geocomputation algorithms. A strategy that hides the parallel details from the developer of raster-based geocomputation algorithms provides a promising way towards solving this problem. However, existing parallel raster-based libraries cannot solve the problem of the poor portability of parallel programs. This paper presents such a strategy to overcome the poor portability, along with a set of parallel raster-based geocomputation operators (PaRGO) designed and implemented under this strategy. The developed operators are compatible with three popular types of parallel computing platforms: graphics processing units supported by the compute unified device architecture (CUDA), Beowulf clusters supported by the message passing interface (MPI), and symmetric multiprocessing clusters supported by MPI and open multiprocessing (OpenMP), which makes the details of parallel programming and the parallel hardware architecture transparent to users. By using PaRGO in a style similar to sequential program coding, geocomputation developers can quickly develop parallel raster-based geocomputation algorithms compatible with these three parallel computing platforms. Practical applications in implementing two algorithms for digital terrain analysis show the effectiveness of PaRGO.
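
For illustration only, a minimal Python sketch (not PaRGO's actual API, which the abstract does not give) of the programming style described above: the algorithm developer writes a per-neighborhood function in a sequential style, and the framework applies it over the raster; the loop below is the part that a library of this kind would transparently distribute across CUDA, MPI, or OpenMP.

```python
# Illustrative sketch only: this mimics the "write it like sequential code"
# style described in the abstract; it is NOT PaRGO's actual C++ API.
import numpy as np

def apply_focal_operator(raster, operator, radius=1, nodata=np.nan):
    """Apply a user-defined focal (neighborhood) operator to every cell.

    In a framework like the one described above, this loop is what the
    library would split across MPI ranks, OpenMP threads, or CUDA blocks;
    the algorithm developer only supplies `operator`.
    """
    rows, cols = raster.shape
    out = np.full_like(raster, nodata, dtype=float)
    for r in range(radius, rows - radius):
        for c in range(radius, cols - radius):
            window = raster[r - radius:r + radius + 1, c - radius:c + radius + 1]
            out[r, c] = operator(window)
    return out

# Example: a 3 x 3 mean filter expressed as a per-neighborhood function.
dem = np.random.rand(100, 100)
smoothed = apply_focal_operator(dem, operator=np.mean)
```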

2.
As one of the core functions of GIS, spatial analysis is evolving toward processing massive data volumes and increasingly complex analytical procedures. Traditional sequential algorithms can no longer satisfy the demands placed on spatial analysis in terms of computational efficiency and performance, and parallel spatial analysis algorithms have therefore attracted growing attention as an effective way to address these problems. After briefly introducing spatial analysis methods and parallel computing technology, this paper reviews the current research progress of parallel spatial analysis algorithms from two perspectives, vector algorithms and raster algorithms; comments on the development directions and open problems of parallel spatial analysis algorithms under the influence of the particular characteristics of spatial data; and discusses the opportunities and challenges facing the design of parallel spatial analysis algorithms against the backdrop of rapidly advancing computer hardware and software technology.

3.
Polygon intersection is an important spatial data-handling process on which many spatial operations are based. However, this process is computationally intensive because it involves the detection and calculation of polygon intersections. We addressed this computational issue from two perspectives. First, we improved a method called boundary algebra filling to efficiently rasterize the input polygons; polygon intersections were subsequently detected in the cells of the raster. Owing to the use of a raster data structure, this method offers the advantages of reduced task dependence and improved performance. Based on this method, we developed parallel strategies for the different procedures in terms of workload decomposition and task scheduling, so that the workload across different parallel processes can be balanced. The results suggest that our method can effectively accelerate the process of polygon intersection. When addressing datasets with 1,409,020 groups of overlapping polygons, our method reduced the total execution time from 987.82 to 53.66 s, achieving an optimal speedup ratio of 18.41 while consistently balancing the workloads. We also tested the effect of task scheduling on parallel efficiency, showing that it is effective in reducing the total runtime, especially for lower numbers of processes. Finally, the good scalability of the method is demonstrated.

4.
As an important spatiotemporal simulation approach and an effective tool for developing and examining spatial optimization strategies (e.g., land allocation and planning), geospatial cellular automata (CA) models often require multiple data layers and consist of complicated algorithms in order to deal with the complex dynamic processes of interest and the intricate relationships and interactions between the processes and their driving factors. Moreover, massive amounts of data may be used in CA simulations as high-resolution geospatial and non-spatial data become widely available. Thus, geospatial CA models can be both computationally intensive and data intensive, demanding long computing times and vast memory space. Based on a hybrid parallelism that combines processes with discrete memory and threads with global memory, we developed a parallel geospatial CA model for urban growth simulation over a heterogeneous computer architecture composed of multiple central processing units (CPUs) and graphics processing units (GPUs). Experiments with datasets of California showed that the overall computing time for a 50-year simulation dropped from 13,647 seconds on a single CPU to 32 seconds using 64 GPU/CPU nodes. We conclude that the hybrid parallelism of geospatial CA over emerging heterogeneous computer architectures provides scalable solutions for enabling complex simulations and optimizations with massive amounts of data that were previously infeasible, and sometimes impossible, using individual computing approaches.

5.
Abstract

The current research focuses upon the development of a methodology for undertaking real-time spatial analysis in a supercomputing environment, specifically using massively parallel SIMD computers. Several approaches that can be used to explore the parallelization characteristics of spatial problems are introduced. Within a methodology directed toward spatial data parallelism, strategies based on both location-based and object-based data decomposition are proposed, and a programming logic for spatial operations at local, neighborhood and global levels is recommended. An empirical study of real-time traffic flow analysis shows the utility of the suggested approach for a complex spatial analysis situation. The empirical example demonstrates that the proposed methodology, especially when combined with appropriate programming strategies, is preferable in situations where critical, real-time spatial analysis computations are required. The implementation of this example in a parallel environment also raises some interesting questions with respect to the theoretical basis underlying the analysis of large networks.

6.
7.
Viewshed analysis, often supported by geographic information systems, is widely used in many application domains. However, as terrain data continue to become increasingly large and available at high resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address such challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory-intensive and computation-intensive tasks, while central processing units are used to achieve efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy has been developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of the GPU-based architecture. Computational experiments were designed to evaluate the computational performance of the approach. The experiments demonstrate significant performance improvement over a well-known sequential computing method, as well as an enhanced ability to analyze sizable datasets that the sequential computing method cannot handle.

8.
With the increasing sizes of digital elevation models (DEMs), there is a growing need to design parallel schemes for existing sequential algorithms that identify and fill depressions in raster DEMs. The Priority-Flood algorithm is the fastest sequential algorithm in the literature for depression identification and filling of raster DEMs, but it has had no parallel implementation since it was proposed approximately a decade ago. A parallel Priority-Flood algorithm based on the fastest sequential variant is proposed in this study. The algorithm partitions a DEM into stripes, processes each stripe using the sequential variant over multiple rounds, and progressively identifies more slope cells that were misidentified as depression cells in previous rounds. Both Open Multi-Processing (OpenMP)- and Message Passing Interface (MPI)-based implementations are presented. The speed-up ratios of the OpenMP-based implementation over the sequential algorithm are greater than four for all tested DEMs with eight computing threads. The mean speed-up ratio of our MPI-based implementation is greater than eight over TauDEM, a widely used MPI-based library for hydrologic information extraction. The speed-up ratios of our MPI-based implementation generally become larger with more computing nodes. This study shows that the Priority-Flood algorithm can be implemented in parallel, which makes it an ideal algorithm for depression identification and filling on both single computers and computer clusters.
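
As background, a minimal sketch of the sequential Priority-Flood idea that the parallel algorithm above builds on: seed a priority queue with the DEM border, always pop the lowest cell, and raise any unvisited lower neighbor so that it can drain. This is a generic textbook version, not the stripe-based parallel variant proposed in the paper.

```python
# Minimal sequential Priority-Flood sketch (not the paper's parallel variant).
import heapq
import numpy as np

def priority_flood_fill(dem):
    rows, cols = dem.shape
    filled = dem.astype(float).copy()
    visited = np.zeros_like(dem, dtype=bool)
    pq = []
    # Seed the priority queue with all edge cells of the DEM.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(pq, (filled[r, c], r, c))
                visited[r, c] = True
    # Always process the lowest-elevation cell on the queue first.
    while pq:
        elev, r, c = heapq.heappop(pq)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                visited[nr, nc] = True
                # Raise depression cells to the spill elevation.
                filled[nr, nc] = max(filled[nr, nc], elev)
                heapq.heappush(pq, (filled[nr, nc], nr, nc))
    return filled

dem = np.random.rand(200, 200) * 100
filled = priority_flood_fill(dem)
assert (filled >= dem).all()
```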

9.
This study presents a massively parallel spatial computing approach that uses general-purpose graphics processing units (GPUs) to accelerate Ripley's K function for univariate spatial point pattern analysis. Ripley's K function is a representative spatial point pattern analysis approach that allows the spatial dispersion characteristics of point patterns to be evaluated quantitatively. However, considerable computation is often required when analyzing large spatial data using Ripley's K function. In this study, we developed a massively parallel implementation of Ripley's K function for accelerating spatial point pattern analysis. GPUs serve as a massively parallel platform built on a many-core architecture for speeding up Ripley's K function. Variable-grained domain decomposition and thread-level synchronization based on shared memory are the parallel strategies designed to exploit concurrency in the spatial algorithm of Ripley's K function for efficient parallelization. Experimental results demonstrate that substantial acceleration is obtained for Ripley's K function parallelized within GPU environments.
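
For reference, the uncorrected estimator the abstract refers to is $\hat{K}(d) = \frac{A}{n^{2}} \sum_{i} \sum_{j \neq i} \mathbf{1}(d_{ij} \le d)$, where $A$ is the study-area size and $n$ the number of points. The sketch below is a plain NumPy version of that pairwise computation (the part the paper offloads to GPUs), with no edge correction and no GPU code.

```python
# Uncorrected Ripley's K estimator: K(d) = (A / n^2) * sum_{i != j} 1(d_ij <= d).
import numpy as np

def ripley_k(points, distances, area):
    points = np.asarray(points, dtype=float)
    n = len(points)
    # All pairwise distances (O(n^2) memory; fine for a sketch).
    diff = points[:, None, :] - points[None, :, :]
    dij = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dij, np.inf)          # exclude self-pairs
    return np.array([area / (n ** 2) * np.sum(dij <= d) for d in distances])

# Example: 1,000 uniformly random points in a unit square; under complete
# spatial randomness K(d) should be close to pi * d^2.
pts = np.random.rand(1000, 2)
print(ripley_k(pts, distances=[0.05, 0.10], area=1.0))
```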

10.
As geospatial researchers' access to high-performance computing clusters continues to increase alongside the availability of high-resolution spatial data, it is imperative that techniques are devised to exploit these clusters' ability to quickly process and analyze large amounts of information. This research concentrates on the parallel computation of A Multidirectional Optimal Ecotope-Based Algorithm (AMOEBA). AMOEBA is used to derive spatial weight matrices for spatial autoregressive models and as a method for identifying irregularly shaped spatial clusters. While improvements have been made to the original 'exhaustive' algorithm, the resulting 'constructive' algorithm can still take a significant amount of time to complete with large datasets. This article outlines a parallel implementation of AMOEBA (P-AMOEBA) written in Java utilizing the message-passing library MPJ Express. In order to account for differing types of spatial grid data, two decomposition methods are developed and tested. The benefits of using the new parallel algorithm are demonstrated on an example dataset. Results show that different decompositions of spatial data affect the computational load balance across multiple processors and that the parallel version of AMOEBA achieves substantially faster runtimes than those reported in related publications.

11.
Cellular automata (CA) models can simulate complex urban systems through simple rules and have become important tools for studying the spatio-temporal evolution of urban land use. However, the multiple large-volume data layers, massive geospatial processing and complicated algorithms for automatic calibration in urban CA models require a high level of computational capability. Unfortunately, the limited performance of sequential computation on a single computing unit (i.e. a central processing unit (CPU) or a graphics processing unit (GPU)) and the high cost of parallel design and programming make it difficult to establish a high-performance urban CA model. As a result of its powerful computational ability and scalability, the vectorization paradigm is becoming increasingly important and has received wide attention for this kind of computational problem. This paper presents a high-performance CA model using vectorization and parallel computing technology for the computation-intensive and data-intensive geospatial processing in urban simulation. To transform the original algorithm into a vectorized algorithm, we define the neighborhood set of the cell space and improve the operation paradigm of neighborhood computation, transition probability calculation, and cell state transition. The experiments undertaken in this study demonstrate that the vectorized algorithm can greatly reduce the computation time, especially in the environment of a vector programming language, and that it is possible to parallelize the algorithm as the data volume increases. The execution time for the simulation with 5-m resolution and a 3 × 3 neighborhood decreased from 38,220.43 s to 803.36 s with the vectorized algorithm, and was further shortened to 476.54 s by dividing the domain into four computing units. The experiments also indicated that the computational efficiency of the vectorized algorithm is closely related to the neighborhood size and configuration, as well as to the shape of the research domain. We conclude that the combination of vectorization and parallel computing technology can provide scalable solutions that significantly improve the applicability of urban CA.
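
A hedged illustration of the vectorization idea described above, not the paper's code: the per-cell loop over a 3 × 3 Moore neighborhood rewritten as whole-array shifted additions in NumPy. The shift-based version wraps around at the borders, a simplification kept here for brevity.

```python
# Neighborhood counting: explicit per-cell loop vs. vectorized whole-array form.
import numpy as np

def neighbor_count_loop(state):
    """Count developed (value 1) cells in each 3 x 3 Moore neighborhood, looping."""
    rows, cols = state.shape
    counts = np.zeros_like(state)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            counts[r, c] = state[r - 1:r + 2, c - 1:c + 2].sum() - state[r, c]
    return counts

def neighbor_count_vectorized(state):
    """Same result, expressed as eight shifted whole-array additions."""
    counts = np.zeros_like(state)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            counts += np.roll(np.roll(state, dr, axis=0), dc, axis=1)
    return counts

state = (np.random.rand(500, 500) > 0.8).astype(int)
# Interior cells agree; the vectorized version wraps around at the borders.
assert (neighbor_count_loop(state)[1:-1, 1:-1]
        == neighbor_count_vectorized(state)[1:-1, 1:-1]).all()
```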

12.
High-performance simulation of flow dynamics remains a major challenge in the use of physically based, fully distributed hydrologic models. Parallel computing has been widely used to overcome this efficiency limitation by partitioning a basin into sub-basins and distributing calculations among multiple processors. However, existing partition-based parallelization strategies are still hampered by the dependency between inter-connected sub-basins. This study proposes a particle-set strategy to parallelize the flow-path network (FPN) model for achieving higher performance in the simulation of flow dynamics. The FPN model replaces the hydrological calculations on sub-basins with the movements of water packages along the upstream and downstream flow paths. Unlike previous partition-based task decomposition approaches, the proposed particle-set strategy decomposes the computational workload by randomly allocating runoff particles to concurrent computing processors. Simulation experiments of the flow-routing process were undertaken to validate the developed particle-set FPN model. The resulting hourly outlet discharges were compared with field-gauged records, and up to 128 computing processors were tested to explore the speedup capability of the approach in parallel computing. The experimental results showed that the proposed framework can achieve prediction accuracy and parallel efficiency similar to those of the Triangulated Irregular Network (TIN)-based Real-Time Integrated Basin Simulator (tRIBS).

13.
The Journal of Geography, 2012, 111(5): 258–263
Abstract

Many universities are introducing courses to teach students the principles of geographic information systems (GIS). In addition to lectures, exercises with commercial GIS software are offered to demonstrate basic operations. Although students learn to execute such operations, the software may hide the operations' internal structure and logic. We propose using a spreadsheet program as a teaching tool for raster operations such as filter and overlay. Spreadsheets offer a practical way to demonstrate and experiment with raster operations, because the raster structure is captured in the form of rows and columns. With this tool, students are able to perform and visualize operations as well as to see how the data are processed by the algorithms. Our approach is new in that we concentrate on the algorithms of the operations: we make explicit which raster functions are actually evaluated when performing a particular operation. We conclude that there are good reasons for using spreadsheets instead of traditional GIS software when teaching raster operations: demonstration in class, a simple user interface, familiarity to students, low cost, the flexibility of changing cell values, the ease of changing parameters, an easy programming environment, and the possibility to look behind the scenes of operations by viewing the code.
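
As a rough companion to the abstract, a short sketch of the two raster operations it names, a focal mean filter and a local overlay. In a spreadsheet the filter is literally a formula such as =AVERAGE(A1:C3) copied across the sheet (an illustrative formula, not taken from the article); the Python below makes the same cell-by-cell evaluation explicit.

```python
# Two classic raster operations made explicit, cell by cell.
import numpy as np

def mean_filter_3x3(raster):
    """Focal operation: each interior output cell is the mean of its 3 x 3 window."""
    rows, cols = raster.shape
    out = raster.astype(float).copy()
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            out[r, c] = raster[r - 1:r + 2, c - 1:c + 2].mean()
    return out

def overlay_and(layer_a, layer_b):
    """Local overlay: a cell is True only where both input layers are 1."""
    return (np.asarray(layer_a) == 1) & (np.asarray(layer_b) == 1)

slope_ok   = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
landuse_ok = np.array([[1, 0, 0], [1, 1, 1], [1, 0, 0]])
print(overlay_and(slope_ok, landuse_ok).astype(int))
```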

14.
We have formulated a 3-D inverse solution for the magnetotelluric (MT) problem using the non-linear conjugate gradient method. Finite difference methods are used to efficiently compute predicted data and objective functional gradients. Only six forward modelling applications per frequency are typically required to produce the model update at each iteration. This efficiency is achieved by incorporating a simple line search procedure that calls for a sufficient reduction in the objective functional, instead of an exact determination of its minimum along a given descent direction. Additional efficiencies in the scheme are sought by incorporating preconditioning to accelerate solution convergence. Even with these efficiencies, the solution's realism and complexity are still limited by the speed and memory of serial processors. To overcome this barrier, the scheme has been implemented on a parallel computing platform where tens to thousands of processors operate on the problem simultaneously. The inversion scheme is tested by inverting data produced with a forward modelling code that is algorithmically different from the one employed in the inversion algorithm. This check provides independent verification of the scheme, since the two forward modelling algorithms are prone to different types of numerical error.
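
A generic sketch of the optimization loop described above: non-linear conjugate gradients with a backtracking line search that only requires a sufficient (Armijo-type) reduction of the objective rather than an exact 1-D minimum, shown on a toy quadratic. The actual MT inversion wraps such a loop around finite-difference forward modelling and preconditioning, both omitted here.

```python
# Non-linear conjugate gradients with a sufficient-decrease (Armijo) line search.
import numpy as np

def nlcg(f, grad, x0, max_iter=100, tol=1e-8, c1=1e-4):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        # Backtracking line search: shrink alpha until f decreases "enough".
        alpha = 1.0
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        # Polak-Ribiere update of the conjugate descent direction.
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
grad = lambda x: A.dot(x) - b
print(nlcg(f, grad, x0=[0.0, 0.0]))   # converges to the solution of A x = b
```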

15.
A rapid and flexible parallel approach for viewshed computation on large digital elevation models is presented. Our work focuses on the implementation of a derivative of the R2 viewshed algorithm. Emphasis has been placed on the input/output (I/O) efficiency that can be achieved through memory segmentation and coalesced memory access. An implementation of the parallel viewshed algorithm on the Compute Unified Device Architecture (CUDA), which exploits the high parallelism of the graphics processing unit, is presented; this version is referred to as r.cuda.visibility. The accuracy of our algorithm is compared to that of the r.los R3 algorithm (integrated into the open-source Geographic Resources Analysis Support System (GRASS) GIS environment) and other I/O-efficient algorithms. Our results demonstrate that the proposed implementation of the R2 algorithm is faster and more I/O efficient than previously presented I/O-efficient algorithms, and that it achieves moderate calculation precision compared to the R3 algorithm. Thus, to the best of our knowledge, the algorithm presented here is the most efficient viewshed approach, in terms of computational speed, for large data sets.

16.
Abstract

High-performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and applying domain decomposition for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in its evaluation of computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition that divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing in different cases. Comparative experiments between the approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection on vector data. The results not only demonstrate the advantage of the approach but also provide hints on how traditional GIS computation can be improved by AI machine learning.

17.
With the wide adoption of big spatial data and the emergence of CyberGIS, the nontrivial computational intensity introduced by massive amounts of data poses great challenges to the performance of vector map visualization. Parallel computing technologies provide promising solutions to such problems. Evenly decomposing the visualization task into multiple subtasks is one of the key issues in parallel visualization of vector data. This study focuses on the decomposition of polyline and polygon data for parallel visualization. Two key factors impacting the computational intensity were identified: the number of features and the number of vertices of each feature. Computational intensity transform functions (CITFs) were constructed based on the linear relationships between these factors and the computing time. A computational intensity grid (CIG) can then be constructed using the CITFs to represent the spatial distribution of computational intensity. A noninterlaced, continuous space-filling curve is used to group the lattices of the CIG into multiple sub-domains such that each sub-domain entails the same amount of computational intensity as the others. The experiments demonstrated that the approach proposed in this paper is able to effectively estimate and spatially represent the computational intensity of visualizing polylines and polygons. Compared with regular domain decomposition methods, the new approach generated a much more balanced decomposition of computational intensity for parallel visualization and achieved near-linear speedups, especially when the data are highly heterogeneously distributed in space.
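
A simplified sketch of the decomposition step described above, under two stated simplifications: the CITF coefficients below are made up for illustration, and the grid cells are linearized in plain row-major order instead of the space-filling curve used in the paper.

```python
# Build a computational intensity grid from a linear CITF, then cut the
# linearized cells into contiguous chunks of roughly equal total intensity.
import numpy as np

def intensity_grid(feature_count, vertex_count, a=1.0, b=0.02):
    # Linear CITF: intensity ~ a * features + b * vertices (illustrative coefficients).
    return a * feature_count + b * vertex_count

def balanced_partition(cit, n_workers):
    """Greedily cut the linearized cells into n_workers contiguous chunks."""
    flat = cit.ravel()                      # row-major linearization (simplification)
    target = flat.sum() / n_workers
    assignment = np.empty(flat.size, dtype=int)
    worker, acc = 0, 0.0
    for i, w in enumerate(flat):
        if acc >= target and worker < n_workers - 1:
            worker, acc = worker + 1, 0.0
        assignment[i] = worker
        acc += w
    return assignment.reshape(cit.shape)

features = np.random.poisson(50, size=(8, 8))
vertices = np.random.poisson(2000, size=(8, 8))
cit = intensity_grid(features, vertices)
print(balanced_partition(cit, n_workers=4))
```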

18.
The availability of continental- and global-scale spatio-temporal geographical data sets, and the requirement to efficiently process, analyse and manage them, led to the development of the temporally enabled Geographic Resources Analysis Support System (GRASS GIS). We present the temporal framework that extends GRASS GIS with spatio-temporal capabilities. The framework provides comprehensive functionality for implementing a full-featured temporal geographic information system (GIS) based on a combined field- and object-based approach. A significantly improved snapshot approach is used to manage spatial fields of raster, three-dimensional raster and vector type in time. The resulting timestamped spatial fields are organised into spatio-temporal fields referred to as space-time data sets. Both types of fields are handled as objects in our framework. The spatio-temporal extent of the objects and the related metadata are stored in relational databases, thus providing additional functionality for SQL-based analysis. We present our combined field- and object-based approach in detail and show the management, analysis and processing of spatio-temporal data sets with complex spatio-temporal topologies. A key feature is the hierarchical processing of spatio-temporal data, ranging from topological analysis of spatio-temporal fields, over Boolean operations on spatio-temporal extents, to single pixel, voxel and vector feature access. The linear scalability of our approach is demonstrated by handling up to 1,000,000 raster layers in a single space-time data set. We provide several code examples to show the capabilities of the GRASS GIS Temporal Framework and present the spatio-temporal intersection of trajectory data, which demonstrates the object-based ability of our framework.

19.
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help us deal with complex calculations over such data sets, we investigated the performance constraints of a classic master–worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation–communication overlap, thus reducing the idle times of the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the Geographic Resources Analysis Support System (GRASS) environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from a Long Term Evolution (LTE) network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master–worker approach. We successfully tackled real-world-sized data sets while greatly reducing the processing time and saturating the hardware utilization.

20.
Data mining of spatial association rules between terrain features and mountain climate change (cited 2 times: 0 self-citations, 2 by others)
Taking the terrain and climate of Sichuan Province as the study object, and addressing the problem that traditional methods such as statistical analysis and non-linear fitting lack the ability to process massive data and extract implicit information in studies of mountain terrain features and climate change, this paper proposes a research method that combines association rule data mining with raster image processing and terrain analysis. The method uses raster image processing and terrain analysis techniques to preprocess the terrain and climate raster images through coordinate transformation, clipping, classification, factor extraction and discretization, and then applies the Apriori algorithm to the extracted terrain feature factors and climate factors to obtain strong association rules reflecting the correlation between the two. From the analysis of more than 600,000 groups of data, 22 association rules satisfying the minimum support and confidence thresholds were obtained, and 6 composite association rules were further derived from them. The experiments show that these association rules, which reflect the correlation between terrain features and the magnitude of climate change, have high reliability.
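
To make the support and confidence thresholds mentioned above concrete, here is a minimal, library-free sketch of the rule-filtering step that Apriori performs, applied to a few hypothetical discretized terrain/climate records (not the study's data; the category names and thresholds are invented for illustration).

```python
# Support/confidence filtering of candidate association rules.
from itertools import combinations

# Hypothetical discretized cells: elevation band, slope class, temperature-change class.
transactions = [
    {"elev=high", "slope=steep", "dT=large"},
    {"elev=high", "slope=steep", "dT=large"},
    {"elev=high", "slope=gentle", "dT=small"},
    {"elev=low",  "slope=gentle", "dT=small"},
    {"elev=low",  "slope=gentle", "dT=small"},
]

def support(itemset):
    """Fraction of transactions containing the whole itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Enumerate simple one-item -> one-item rules that pass both thresholds,
# which is the filtering Apriori applies to its candidate itemsets.
items = sorted(set().union(*transactions))
for a, c in combinations(items, 2):
    sup = support({a, c})
    conf = confidence({a}, {c})
    if sup >= 0.4 and conf >= 0.8:
        print(f"{a} -> {c}  support={sup:.2f}  confidence={conf:.2f}")
```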
