Similar Documents
20 similar documents found (search time: 103 ms)
1.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to large point data sets, which are common in the big data era. This article presents a graphics processing unit (GPU)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination step for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations improved performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores, the GPU-enabled algorithm accelerated point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as the size of the data set increases. Given this significant acceleration, point pattern analysis with the adaptive KDE approach can be performed efficiently on large point data sets. Point pattern analysis on spatial big data, computationally prohibitive with the sequential algorithm, can be conducted routinely with the GPU-accelerated algorithm. The GPU-accelerated adaptive KDE approach thus contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
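The bandwidth-determination step this abstract refers to can be sketched in a few lines of NumPy. This is a minimal serial illustration, not the article's GPU implementation; the k-nearest-neighbour bandwidth rule and the function names are illustrative assumptions:

```python
import numpy as np

def adaptive_bandwidths(points, k=5):
    """Per-point bandwidth = distance to the k-th nearest neighbour.

    Brute-force O(n^2) pairwise distances; this is exactly the step the
    article's optimizations and GPU kernels target.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the point itself (distance 0)

def adaptive_kde(points, grid, bandwidths):
    """Gaussian KDE with a different bandwidth per sample point."""
    diff = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1)
    h = bandwidths[None, :]
    kernels = np.exp(-0.5 * (diff / h) ** 2) / (2 * np.pi * h ** 2)
    return kernels.mean(axis=1)

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))
h = adaptive_bandwidths(pts, k=5)
density = adaptive_kde(pts, np.array([[0.0, 0.0], [5.0, 5.0]]), h)
```

The quadratic all-pairs distance matrix is what makes large inputs prohibitive on a single core, and why the pairwise step parallelizes so naturally on a GPU.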

2.
3.
In recent years, the evolution and improvement of LiDAR (Light Detection and Ranging) hardware have increased the quality and quantity of the gathered data, making its storage, processing, and management particularly challenging. In this work we present a novel, multi-resolution, out-of-core technique for web-based visualization, implemented through a non-redundant point organization method that we call Hierarchically Layered Tiles (HLT) and a tree-like structure called the Tile Grid Partitioning Tree (TGPT). These elements are designed to attain very low levels of memory consumption, disk storage usage, and network traffic on both the client and server side, while delivering high-performance interactive visualization of massive LiDAR point clouds (up to 28 billion points) in multiplatform environments (mobile devices or desktop computers). HLT and TGPT were incorporated and tested in ViLMA (Visualization for LiDAR data using a Multi-resolution Approach), our own web-based visualization software specially designed to work with massive LiDAR point clouds.

4.
Performing point pattern analysis with Ripley’s K function on large sets of point events is computationally intensive, as it involves massive point-wise comparisons, time-consuming calculation of edge effect correction weights, and a large number of simulations. This article presents two strategies to optimize the algorithm for point pattern analysis using Ripley’s K function and utilizes cloud computing to further accelerate the optimized algorithm. The first optimization sorts the points on their x and y coordinates, narrowing the search for neighboring points down to a rectangular area around each point when estimating the K function. Using the actual study area to compute edge effect correction weights is essential for an unbiased estimate of the K function, but is very computationally intensive if the study area has a complex shape. The second optimization therefore reuses previously computed weights to avoid repeating the expensive weight calculation. The optimized algorithm was then parallelized using Open Multi-Processing (OpenMP) and hybrid Message Passing Interface (MPI)/OpenMP on a cloud computing platform. Performance testing showed that the optimizations accelerated point pattern analysis with the K function by a factor of 8 in both the sequential and OpenMP-parallel versions of the algorithm. While the OpenMP-based parallelization achieved good scalability with respect to the number of CPU cores and the problem size, the hybrid MPI/OpenMP-based parallelization significantly shortened the time for estimating the K function and performing simulations by utilizing computing resources on multiple computing nodes. The computational challenge posed by point pattern analysis on large sets of point events with many simulations can thus be addressed by utilizing elastic, distributed cloud resources.
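The first optimization (sorting on x so the neighbour scan can terminate early) can be sketched as follows. This is a minimal serial illustration without edge-effect correction, and the function name is an assumption:

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate, no edge correction:
    K(r) = area / n^2 * (number of ordered pairs within distance r).

    Points are pre-sorted on x so the inner scan stops as soon as the
    x-gap alone exceeds r -- the sort-based optimization described above.
    """
    pts = points[np.argsort(points[:, 0])]
    n = len(pts)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = pts[j, 0] - pts[i, 0]
            if dx > r:  # sorted on x: no later point can be within r
                break
            if dx * dx + (pts[j, 1] - pts[i, 1]) ** 2 <= r * r:
                count += 1
    return area * (2 * count) / (n * n)
```

For clustered or uniformly dense data the early break cuts the inner scan from O(n) to roughly the number of points in an r-wide vertical strip.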

5.
This article presents a decentralized and coordinate-free algorithm, called decentralized gradient field (DGraF), to identify critical points (peaks, pits, and passes) and the topological structure of the surface network connecting them. Algorithms that can operate in a network without centralized control and without coordinates are important in emerging resource-constrained spatial computing environments, in particular geosensor networks. Our approach accounts for the discrepancies between finite-granularity sensor data and the underlying continuous field, which previous work has ignored. Empirical evaluation shows that DGraF can improve the accuracy of critical-point identification compared with the current state-of-the-art decentralized algorithm, and matches the accuracy of a centralized algorithm for peaks and pits. The DGraF algorithm is efficient, requiring O(n) overall communication complexity, where n is the number of nodes in the geosensor network. Further, empirical investigations across a range of simulations demonstrate improved load balance for DGraF compared with an existing decentralized algorithm. Our investigation highlights a number of important issues for future research on the detection of holes and the monitoring of dynamic events in a field.
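The local test that such an algorithm decentralizes, each node comparing its own reading against its neighbours', can be sketched centrally in a few lines. This simplified sketch covers only peaks and pits (passes and the surface network require more machinery), and the function name is illustrative:

```python
def find_critical_points(values, adjacency):
    """Label each node a peak (strictly higher than all neighbours) or a
    pit (strictly lower than all neighbours).

    values:    dict node -> sensed field value
    adjacency: dict node -> list of neighbouring nodes
    Each decision uses only one node's value and its neighbours' values,
    which is why the test can run in-network without coordinates.
    """
    peaks, pits = [], []
    for node, nbrs in adjacency.items():
        v = values[node]
        if all(v > values[m] for m in nbrs):
            peaks.append(node)
        elif all(v < values[m] for m in nbrs):
            pits.append(node)
    return peaks, pits
```

Note the granularity caveat from the abstract: with coarse sensing, a node can pass this local test without marking a true critical point of the underlying continuous field.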

6.
Reconstruction of 3D trees from incomplete point clouds is challenging due to their large variety and natural geometric complexity. In this paper, we develop a novel method to effectively model trees from a single laser scan. First, coarse tree skeletons are extracted by using the L1-median skeleton to compute the dominant direction of each point and the local point density of the point cloud. We then propose a data completion scheme that guides the compensation for missing data: an iterative optimization process based on the dominant direction and local density of each point. Finally, we present an L1-minimum spanning tree (MST) algorithm that refines tree skeletons from the optimized point cloud, integrating the advantages of both the L1-median skeleton and MST algorithms. The proposed method has been validated on various point clouds captured from single laser scans. The experimental results demonstrate the effectiveness and robustness of our method in coping with complex branching structures and occlusions.

7.
A storage-efficient contour generation method, focusing on planar contours, is developed. Given cartographic elevations on a rectangular lattice, a continuous bivariate function, z = f(x, y), is determined by interpolating the elevation values. Then, we focus on a contour determined by z = constant. The contour curve is partitioned into multiple sections, each of which is exactly or approximately round. Three curvature types are introduced to evaluate the roundness of each section. The area and perimeter of the contour are computed by one-dimensional line integration using Green’s theorem. If the contour is open, it is divided into two curves starting from the same initial point, with the control points advancing in opposite directions. Two types of numerical experiments are performed to validate the effectiveness of the proposed method. One experiment uses an analytically defined elevation function and investigates the number of control points and computation time for a resulting computation error. The second experiment uses actual digital elevation model data of an isolated island in Japan and compares the proposed method with existing ones. Because the algorithm does not require lattice subdivision and the number of control points is drastically reduced, the proposed method is storage efficient.
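For a piecewise-linear closed contour, the Green's-theorem line integral mentioned above reduces to the shoelace sum A = 1/2 |Σ (x_i·y_{i+1} − x_{i+1}·y_i)|. A minimal sketch (the function name is illustrative, and the article's curved, roundness-classified sections are approximated here by straight segments):

```python
import math

def contour_area_perimeter(xs, ys):
    """Area and perimeter of a closed contour given by control points,
    via Green's theorem: A = 1/2 * |sum of (x_i*y_j - x_j*y_i)| over
    consecutive vertex pairs (the shoelace formula)."""
    n = len(xs)
    area = 0.0
    perim = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the contour
        area += xs[i] * ys[j] - xs[j] * ys[i]
        perim += math.hypot(xs[j] - xs[i], ys[j] - ys[i])
    return abs(area) / 2.0, perim
```

The one-dimensional integral is the storage advantage: only the boundary control points are needed, never a subdivided lattice of interior cells.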

8.
As increasingly large-scale and higher-resolution terrain data have become available, for example from airborne and space-borne sensors, the volume of these datasets reveals scalability problems in existing GIS algorithms. To address this problem, a serial algorithm was developed to generate viewsheds on large grid-based digital elevation models (DEMs). We first divide the whole DEM into rectangular blocks in the row and column directions (block partitioning), then process these blocks along four axes followed by four sectors sequentially. When processing a particular block, we adopt the 'reference plane' algorithm to calculate the visibility of the target point on the block, and adjust the calculation sequence according to the spatial relationship between the block and the viewpoint, since the viewpoint is not always inside the DEM. By adopting the reference plane algorithm and using block partitioning to segment and load the DEM dynamically, viewsheds can be generated efficiently in PC-based environments. Experiments showed that each divided block should be dynamically loaded whole into main memory when partitioning, and that the suggested approach retains the accuracy of the reference plane algorithm while having near-linear computational complexity.
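The idea underlying line-of-sight visibility tests of this kind can be illustrated in one dimension, along a single ray of cells leaving the viewpoint: a cell is visible if its elevation angle is at least the maximum angle seen so far. This simplified sketch is not the block-partitioned reference-plane algorithm itself, and the function name is an assumption:

```python
def visible_along_profile(elevations, viewer_height=0.0):
    """Mark which cells along one ray from the viewpoint (index 0, unit
    cell spacing) are visible, by tracking the maximum tangent of the
    elevation angle encountered so far."""
    z0 = elevations[0] + viewer_height
    visible = [True]  # the viewpoint cell sees itself
    max_tan = float("-inf")
    for d, z in enumerate(elevations[1:], start=1):
        tan = (z - z0) / d          # tangent of the elevation angle
        visible.append(tan >= max_tan)
        max_tan = max(max_tan, tan)
    return visible
```

A full viewshed repeats this test over every ray; the reference-plane method and the block ordering described above exist to avoid redundant per-ray work on large DEMs.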

9.
10.
This article evaluates the potential of 1-m resolution, 128-band hyperspectral imagery for mapping in-stream habitats, depths, and woody debris in third- to fifth-order streams in the northern Yellowstone region. Maximum likelihood supervised classification using principal component images provided overall classification accuracies for in-stream habitats (glides, riffles, pools, and eddy drop zones) ranging from 69% for third-order streams to 86% for fifth-order streams. This scale dependency of classification accuracy was probably driven by the greater proportion of transitional boundary areas in the smaller streams. Multiple regressions of measured depths (y) versus principal component scores (x1, x2, …, xn) generated R2 values ranging from 67% for high-gradient riffles to 99% for glides in a fifth-order reach. R2 values were lower in third-order reaches, ranging from 28% for runs and glides to 94% for pools. The less accurate depth estimates obtained for smaller streams probably resulted from the relative increase in the number of mixed pixels, where a wide range of depths and surface turbulence occurred within a single pixel. Matched filter (MF) mapping of woody debris generated overall accuracies of 83% in the fifth-order Lamar River. Accuracy figures for the in-stream habitat and wood mapping may have been misleadingly low because the fine-resolution imagery captured fine-scale variations not mapped by field teams, which in turn generated false “misclassifications” when the image and field maps were compared. The use of high spatial resolution hyperspectral (HSRH) imagery for stream mapping is limited by the need for clear water to measure depth, by tree cover obscuring the stream, and by the limited availability of airborne hyperspectral sensors. Nonetheless, the high accuracies achieved in northern Yellowstone streams indicate that HSRH imagery can be a powerful tool for watershed-wide mapping, monitoring, and modeling of streams.

11.
This article presents a multidisciplinary study, implemented in a Geographic Information System (GIS) environment, investigating the palaeo-hydrography of a sector of the Magra River alluvial plain (north-western Italy) where the famous ruins of the Roman colony of Luna (now Luni) are located. The approach combines results obtained from different remote sensing images (satellite and airborne photos) with data derived from historical cartography and recent field surveys. The traces are mapped and organised into two vector databases of linear and polygonal features, corresponding to fluvial elements (e.g. palaeo-channels, abandoned streams) and marshy/swamp areas, respectively. This database represents a useful starting point that can be extended by further, more detailed studies aimed at better understanding the evolution of the landscape and its possible relationship with the history of the archaeological site of Luna, about which many questions remain unresolved.

12.
The triangulated irregular network (TIN) can realistically model terrain surfaces and is therefore widely used in the geosciences. Delaunay triangulation is the optimal algorithm for constructing a TIN. This paper analyzes traditional Delaunay triangulation algorithms and proposes an efficient composite algorithm for generating a TIN from large-scale scattered point data. The algorithm first partitions the scattered points into quadtree regions according to their spatial distribution and density; it then triangulates each leaf node by incremental point insertion, using the leaf's bounding quadrilateral as the convex hull; finally, it merges the four child nodes sharing a parent, bottom-up, by vertex merging to produce the Delaunay triangulation. Experimental results show that the algorithm has low time complexity and effectively improves the efficiency of TIN construction.
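The first stage of this pipeline, quadtree partitioning of the scattered points, can be sketched as follows. The capacity value and function name are illustrative assumptions, and the per-leaf triangulation and bottom-up merge are omitted:

```python
def quadtree_partition(points, capacity=64, bounds=None):
    """Recursively split a 2-D point set into quadrants until each leaf
    holds at most `capacity` points; each leaf would then be
    triangulated independently before the bottom-up merge.
    Returns the list of leaf point lists."""
    if bounds is None:
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        bounds = (min(xs), min(ys), max(xs), max(ys))
    if len(points) <= capacity:
        return [points]
    x0, y0, x1, y1 = bounds
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    quads = {q: [] for q in range(4)}
    boxes = {0: (x0, y0, cx, cy), 1: (cx, y0, x1, cy),
             2: (x0, cy, cx, y1), 3: (cx, cy, x1, y1)}
    for p in points:
        q = (1 if p[0] > cx else 0) + (2 if p[1] > cy else 0)
        quads[q].append(p)
    leaves = []
    for q in range(4):
        if quads[q]:
            leaves += quadtree_partition(quads[q], capacity, boxes[q])
    return leaves
```

Bounding the leaf size is what keeps the per-leaf incremental insertion cheap; the overall cost is then dominated by merging triangulations along quadrant boundaries.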

13.
During previous work in the San Juan Mountains of Colorado, we observed that headwater (first-order) streams draining landslides were often characterized by the presence of beaver (Castor canadensis) dams whereas other headwater tributaries typically lacked evidence of beaver. Here, we hypothesize that hummocky landslide topography attracts beaver. To test the hypothesis, we examined 10 landslides and 11 adjacent headwater streams in the area, noting location, vegetation, elevation, and evidence of beaver activity, and then compared the landslide and non-landslide headwater streams using the G-test to determine whether or not variables were independent of one another. We reject the null hypothesis that beaver dam presence is unrelated to landslide deposits (p = 0.003). We further hypothesize that this relationship results from differences in stream gradient and concavity between landslide streams and other streams. We found streams on landslides to have a greater portion of their gradients below what geologic and ecologic literature suggests is a reasonable upper threshold (12%) for beaver dam maintenance. Additionally, streams on landslides are more concave. We conclude that the relationship between beaver presence and landslides results from a higher proportion of reaches below the 12% threshold and increased concavity of headwater streams on landslides.

14.
Bankfull channel width is a fundamental measure of stream size and a key parameter of interest for many applications in hydrology, fluvial geomorphology, and stream ecology. We developed downstream hydraulic geometry relationships for bankfull channel width w as a function of drainage area A, w = αA^β (DHGwA), for nine aggregate ecoregions comprising the conterminous United States using 1588 sites from the U.S. Environmental Protection Agency's National Wadeable Streams Assessment (WSA), including 1152 sites from a randomized probability survey sample. Sampled stream reaches ranged from 1 to 75 m in bankfull width and 1 to 10,000 km2 in drainage area. The DHGwA exponent β, which expresses the rate at which bankfull stream width scales with drainage area, fell into three distinct clusters ranging from 0.22 to 0.38. Width increases more rapidly with basin area in the humid Eastern Highlands (encompassing the Northern and Southern Appalachians and the Ozark Mountains) and the Upper Midwest (Great Lakes region) than for the West (both mountainous and xeric areas), the southeastern Coastal Plain, and the Northern Plains (the Dakotas and Montana). Stream width increases least rapidly with basin area in the Temperate Plains (cornbelt) and Southern Plains (Great Prairies) in the heartland. The coefficient of determination (r2) was least in the noncoastal plains (0.36–0.41) and greatest in the Appalachians and Upper Midwest (0.68–0.77). DHGwA equations differed between streams with dominantly fine bed material (silt/sand) and those with dominantly coarse bed material (gravel/cobble/boulder) in six of the nine analysis regions. Where DHGwA equations varied by sediment size, fine-bedded streams were consistently narrower than coarse-bedded streams. 
Within the Western Mountains ecoregion, where there were sufficient sites to develop DHGwA relationships at a finer spatial scale, α and β ranged from 1.23 to 3.79 and 0.23 to 0.40, respectively, with r2 > 0.50 for 10 of 13 subregions (range: 0.36 to 0.92). Enhanced DHG equations incorporating additional data for three landscape variables that can be derived from GIS (mean annual precipitation, elevation, and mean reach slope) significantly improved equation fit and predictive value in several regions, most notably the Western Mountains and the Temperate Plains. Channel width was also related to human disturbance, which we examined using several indices of local and basinwide disturbance. Contrary to our expectations, the data suggest that the dominant response of channel width to human disturbance in the United States is a reduction in bankfull width in streams with greater disturbance, particularly in the Western Mountains (where population density, road density, agricultural land use, and local riparian disturbance were all negatively related to channel width) and in the Appalachians and New England (where urban and agricultural land cover and riparian disturbance were all negatively associated with channel width).
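Power-law relationships of the form w = αA^β are ordinarily fitted by ordinary least squares in log-log space, since log w = log α + β log A is linear. A minimal sketch of that standard procedure (not the WSA analysis itself; the function name is illustrative):

```python
import math

def fit_dhg(widths, areas):
    """Fit w = alpha * A**beta by ordinary least squares on
    (log A, log w); returns (alpha, beta, r_squared)."""
    lx = [math.log(a) for a in areas]
    ly = [math.log(w) for w in widths]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    sxx = sum((x - mx) ** 2 for x in lx)
    sxy = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    beta = sxy / sxx                      # slope = scaling exponent
    log_alpha = my - beta * mx            # intercept = log(alpha)
    ss_res = sum((y - (log_alpha + beta * x)) ** 2
                 for x, y in zip(lx, ly))
    ss_tot = sum((y - my) ** 2 for y in ly)
    return math.exp(log_alpha), beta, 1.0 - ss_res / ss_tot

# Exact power-law data recovers the parameters
areas = [1.0, 10.0, 100.0, 1000.0]
widths = [2.5 * a ** 0.3 for a in areas]
alpha, beta, r2 = fit_dhg(widths, areas)
```

With field data the r2 values reported above (0.36 to 0.92) reflect the scatter of log w around this line rather than numerical error.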

15.
Identification of steps and pools from stream longitudinal profile data
Field research on step–pool channels has largely focused on the dimensions and sequence of steps and pools and how these features vary with slope, grain size, and other governing variables. Measurements by different investigators are frequently compared, yet no objective means of identifying steps and pools has been in use. Automated surveying instruments record the morphology of streams in unprecedented detail, making it possible to identify steps and pools objectively, provided an appropriate classification procedure can be developed. To achieve objective identification of steps and pools from long-profile survey data, we applied a series of scale-free geometric rules that include minimum step length (2.25% of bankfull width (Wb)), minimum pool length (10% of Wb), minimum residual depth (0.23% of Wb), minimum drop height (3.3% of Wb), and minimum step slope (10° greater than the mean slope). The rules perform as well as the mean response of 11 step–pool researchers who were asked to classify four long profiles, and the results correspond well with the channel morphologies identified during the field surveys from which the long profiles were generated. The method outperforms four other techniques that have been proposed. Sensitivity analysis shows that the method is most sensitive to the choice of minimum pool length and minimum drop height. Detailed bed scans of a step–pool channel created in a flume reveal that a single long profile with a fixed sampling interval poorly characterizes the steps and pools; five or more long profiles spread across the channel are required if a fixed sampling interval is used, and the data suggest that survey points should be spaced more closely than the diameter of the step-forming material. A single long profile collected by a surveyor who chooses breaks in slope and representative survey points was found to characterize the mean bed profile adequately.
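Two of the scale-free rules quoted above (minimum drop height of 3.3% of Wb, and step slope at least 10° steeper than the mean bed slope) translate directly into code. A simplified sketch that omits the step-length, pool-length, and residual-depth rules; the function name is illustrative:

```python
import math

def classify_step(drop_height, segment_length, mean_slope_deg, wb):
    """Return True if a profile segment satisfies two of the scale-free
    step criteria: drop height >= 3.3% of bankfull width Wb, and
    segment slope >= mean bed slope + 10 degrees."""
    slope_deg = math.degrees(math.atan2(drop_height, segment_length))
    return (drop_height >= 0.033 * wb
            and slope_deg >= mean_slope_deg + 10.0)
```

Expressing every threshold as a fraction of Wb is what makes the rules scale-free, so surveys of channels of very different sizes can be classified, and compared, consistently.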

16.
The suitability of limestone for industrial use and its commercial value are determined by its calcium oxide (CaO) content and the amounts of impurities. From 244 sample points at 18 drillhole sites in a limestone mine in southwestern Japan, data on four impurity elements (SiO2, Fe2O3, MnO, and P2O5) were collected. It is generally difficult to estimate the spatial distributions of these contents, because most limestone bodies in Japan are located in accretionary complex lithologies of Paleozoic and Mesozoic age. Because variogram analysis showed no clear spatial correlation in the content data, a feedforward neural network was applied to estimate the content distributions. The network consists of three layers: input, middle, and output. The input layer has 17 neurons and the output layer four. Three neurons in the input layer correspond to the x, y, z coordinates of a sample point; the others encode rock types, such as crystalline and conglomeratic limestones, and fossil types related to the geologic age of the limestone. The four neurons in the output layer correspond to the amounts of SiO2, Fe2O3, MnO, and P2O5. The numbers of neurons in the middle layer and the training data differ for each estimation point to avoid overfitting the network. Through the network we detected several important characteristics of the three-dimensional content distributions, such as the continuity of low-SiO2 zones along a Lower Permian fossil zone trending NE-SW, and low-quality zones at depths shallower than 50 m. The capability of the neural network-based method relative to the geostatistical method is demonstrated in terms of estimation errors and the spatial characteristics of the multivariate data. To evaluate the uncertainty of the estimates, we propose a method that draws several outputs by slightly perturbing the coordinates of the target point and feeding them to the same trained network. Uncertainty differs among the impurity elements and does not depend solely on the spatial arrangement of the data points.

17.
Abstract

We present a model for describing the visibility of a polyhedral terrain from a fixed viewpoint, based on a collection of nested horizons. We briefly introduce the concepts of mathematical and digital terrain models, together with some background notions for visibility problems on terrains. We then define horizons on a polyhedral terrain and introduce a visibility model that we call the horizon map. We present a construction algorithm and a data structure for encoding the horizon map, and show how it can be used to solve point visibility queries with respect to a fixed viewpoint.

18.
Spatio-temporal evolution of energy-consumption-based carbon footprints across China's provincial regions
卢俊宇, 黄贤金, 陈逸, 肖潇. 《地理研究》 (Geographical Research), 2013, 32(2): 326-336
The carbon footprint measures the CO2 emitted, directly or indirectly, over the life cycle of a product, and thus reflects the pressure that a human activity or product places on the ecological environment. Using provincial-level fossil energy consumption data and land-use structure data for China from 1997 to 2008, this paper constructs a carbon footprint model to calculate the carbon footprint, carbon carrying capacity, and carbon deficit for different periods and regions. Introducing the concept of the center of gravity from physics, it computes the center of gravity of the provincial carbon footprints for 1997-2008 and analyzes its spatio-temporal trajectory, capturing interregional differences in energy-related carbon emissions. The paper also constructs a carbon footprint pressure index model, computes each province's pressure index for 1997-2008, grades the study regions by ecological pressure intensity, and classifies the intensity of change in each province's index between adjacent time points.

19.
We explore the response of bedrock streams to eustatic and tectonically induced fluctuations in base level. A numerical model coupling onshore fluvial erosion with offshore wave-base erosion is developed. A series of simulations of simple transgressions with a constant rate of sea-level change (SLR) shows that the response depends on the relative rates of rock uplift (U) and wave-base erosion (εw). Simple regression runs highlight the importance of nearshore bathymetry. Shoreline position during sea-level fall is set by the relative rate of base-level fall (U − SLR) and εw, and is horizontally constant when these two quantities are equal. Results of models forced by a realistic Late Quaternary sea-level curve are presented. These runs show that a stable shoreline position cannot be obtained if offshore uplift rates exceed εw. Only in the presence of a relatively stable shoreline position can fluvial profiles begin to approximate a steady-state condition, with U balanced by the fluvial erosion rate (εf). In the presence of a rapid offshore decrease in the rock-uplift rate U, short (~5 km) fluvial channels respond to significant changes in rock-uplift rate in just a few eustatic cycles. The results of the model are compared to real stream-profile data from the Mendocino triple junction region of northern California. The late Holocene sea-level stillstand response exhibited by the simulated channels is similar to the low-gradient mouths seen in the California streams.

20.
ABSTRACT

The density-based spatial clustering of applications with noise (DBSCAN) method is often used to identify individual activity clusters (i.e., zones) from digital footprints captured on social networks. However, DBSCAN is sensitive to its two parameters, eps and minpts. This paper introduces an improved density-based clustering algorithm, Multi-Scaled DBSCAN (M-DBSCAN), to mitigate the uncertainty in the clusters DBSCAN detects at different scales of density and cluster size. Instead of a single global parameter setting, M-DBSCAN iteratively calibrates suitable local eps and minpts values to detect clusters of varying densities, and proves effective for detecting potential activity zones. In addition, M-DBSCAN can significantly reduce the noise ratio by identifying all points capturing the activities performed in each zone. Using historic geo-tagged tweets of users in Washington, D.C. and Madison, Wisconsin, the results reveal that: 1) M-DBSCAN can capture dispersed clusters with a low density of points, thereby detecting more activity zones per user; 2) a value of 40 m or higher should be used for eps to reduce the possibility of collapsing distinctive activity zones; and 3) a value between 200 and 300 m is recommended for eps when using DBSCAN to detect activity zones.
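For reference, the baseline DBSCAN that M-DBSCAN generalizes can be written compactly. A minimal pure-Python sketch with a single global eps/minpts pair (M-DBSCAN's local calibration of the two parameters is not shown):

```python
import math

def dbscan(points, eps, minpts):
    """Minimal DBSCAN: returns one label per point (-1 = noise).
    A point is a core point if it has >= minpts neighbours (itself
    included) within distance eps; clusters grow from core points."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        return [j for j in range(n)
                if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < minpts:
            labels[i] = -1          # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= minpts:    # core point: keep expanding
                queue.extend(k for k in nj
                             if labels[k] is None or labels[k] == -1)
    return labels
```

Running this with different eps values on the same footprint data makes the sensitivity discussed above concrete: a small eps fragments one activity zone into several clusters plus noise, while a large eps collapses distinct zones into one.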
