Similar Literature
20 similar records retrieved.
1.
Reconstruction of 3D trees from incomplete point clouds is a challenging problem due to their large variety and natural geometric complexity. In this paper, we develop a novel method to effectively model trees from a single laser scan. First, coarse tree skeletons are extracted by utilizing the L1-median skeleton to compute the dominant direction of each point and the local point density of the point cloud. Then we propose a data completion scheme that guides the compensation for missing data. It is an iterative optimization process based on the dominant direction of each point and the local point density. Finally, we present an L1-minimum spanning tree (MST) algorithm to refine tree skeletons from the optimized point cloud, which integrates the advantages of both the L1-median skeleton and MST algorithms. The proposed method has been validated on various point clouds captured from single laser scans. The experimental results demonstrate the effectiveness and robustness of our method in coping with complex branching structures and occlusions.
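The refinement step above joins skeleton points with a minimum spanning tree. As a minimal sketch of just the MST half (Prim's algorithm over an invented toy "trunk plus branch" point set, not the authors' L1-MST implementation):

```python
import math

def mst_edges(points):
    """Prim's algorithm over the complete graph of a point set.
    Returns (i, j) index pairs forming a minimum spanning tree."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known cost to join each point to the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            d = math.dist(points[u], points[v])
            if not in_tree[v] and d < best[v]:
                best[v], parent[v] = d, u
    return edges

# toy "tree" cloud: a vertical trunk with one horizontal side branch
pts = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 2), (2, 2)]
skeleton = mst_edges(pts)
total_length = sum(math.dist(pts[i], pts[j]) for i, j in skeleton)
```

The paper's contribution is in weighting such edges by the L1-median dominant directions; the plain Euclidean weights here are only a placeholder.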

2.
Separating ground points from non-ground points in airborne LiDAR point clouds to generate urban DEMs is a prerequisite for building digital cities. This paper investigates key filtering algorithms in airborne LiDAR data processing. First, the commonly used region-growing method was applied to filter the airborne LiDAR point cloud; then, point cloud filtering was performed with a band filtering method based on orthogonal polynomials. The running efficiency of the two algorithms was compared, their characteristics were analyzed, and the threshold parameters were optimized through experimental comparison. The experiments show that the algorithm achieves good filtering results and has strong practical value for 3D digital-city modeling in flat urban areas.
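Neither the region-growing filter nor the orthogonal-polynomial band filter is reproduced here, but the basic ground/non-ground separation task can be sketched with a much simpler grid-minimum filter (cell size, tolerance, and test points are all invented for illustration):

```python
def ground_filter(points, cell=1.0, tol=0.3):
    """Toy ground filter: within each grid cell, points within `tol` metres
    of the cell's lowest elevation are labelled ground; the rest (roofs,
    trees, ...) are labelled non-ground."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        lowest[key] = min(lowest.get(key, z), z)
    ground, nonground = [], []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        (ground if z - lowest[key] <= tol else nonground).append((x, y, z))
    return ground, nonground

pts = [(0.2, 0.2, 10.0), (0.8, 0.4, 10.1), (0.5, 0.5, 14.0),  # 14.0 = roof return
       (1.5, 0.5, 10.2), (1.6, 0.6, 12.5)]                    # 12.5 = tree return
g, ng = ground_filter(pts)
```

Real filters such as those compared in the paper add slope-aware growing or polynomial surface fitting on top of this kind of local-minimum seed.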

3.
Point cloud classification plays a critical role in many applications of airborne light detection and ranging (LiDAR) data. In this paper, we present a deep feature-based method for accurately classifying multiple ground objects from airborne LiDAR point clouds. With several selected attributes of LiDAR point clouds, our method first creates a group of multi-scale contextual images for each point in the data using interpolation. Taking the contextual images as inputs, a multi-scale convolutional neural network (MCNN) is then designed and trained to learn the deep features of LiDAR points across various scales. A softmax regression classifier (SRC) is finally employed to generate classification results of the data from a combination of the deep features learned at various scales. Compared with most traditional classification methods, which often require users to manually define a group of complex discriminant rules or extract a set of classification features, the proposed method can automatically learn the deep features and generate more accurate classification results. The performance of our method is evaluated qualitatively and quantitatively using the International Society for Photogrammetry and Remote Sensing benchmark dataset, and the experimental results indicate that our method can effectively distinguish eight types of ground objects, including low vegetation, impervious surface, car, fence/hedge, roof, facade, shrub and tree, and achieves higher accuracy than other existing methods.
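The final stage described above, softmax regression over the learned deep features, is a standard construction; a minimal numerically stable softmax (the class scores below are invented, not outputs of the MCNN) looks like:

```python
import math

def softmax(scores):
    """Numerically stable softmax: subtract the max score before
    exponentiating so large scores cannot overflow."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical per-class scores for one LiDAR point (e.g. roof / tree / car)
probs = softmax([2.0, 1.0, 0.1])
predicted = probs.index(max(probs))
```

Training the SRC then amounts to fitting the linear layer that produces such scores from the concatenated multi-scale features.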

4.
Abstract

The concept of GRASS (Geographic Resources Analysis Support System) as an open system has created a favourable environment for the integration of process-based modelling and GIS. To support this integration, a new generation of tools is being developed in the following areas: (a) interpolation from multidimensional scattered point data, (b) analysis of surfaces and hypersurfaces, (c) modelling of spatial processes, and (d) 3D dynamic visualization. Two example applications are given: spatial and temporal modelling of erosion and deposition, and multivariate interpolation and visualization of nitrogen concentrations in the Chesapeake Bay.

5.
Increasing research interest in global climate change and rising public awareness have generated a significant demand for new tools that support effective visualization of big climate data in a cyber environment, such that anyone with an Internet connection and a web browser can easily view and comprehend the data from any location. In response to this demand, this paper introduces a new web-based platform for visualizing multidimensional, time-varying climate data on a virtual globe. The platform is built upon the virtual globe system Cesium, which is open source, highly extensible, and easily integrated into a web environment. The emerging WebGL technique is adopted to support interactive rendering of 3D graphics with hardware graphics acceleration. To address the challenges of transmitting and visualizing voluminous, complex climate data over the Internet in real time, we develop a stream encoding and transmission strategy based on video-compression techniques. This strategy allows dynamic provision of scientific data at different precisions to balance the needs of scientific analysis against visualization cost. Approaches to represent, encode and decode the processed data are also introduced in detail to show the operational workflow. Finally, we conduct several experiments to demonstrate the performance of the proposed strategy under different network conditions. A prototype, PolarGlobe, has been developed to visualize climate data in the Arctic regions from multiple angles.
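The precision-versus-size trade-off behind such a streaming strategy can be sketched with plain linear quantization: floats are packed into small integers for transfer and linearly restored on the client (this is only the quantization idea, not the paper's video-compression pipeline; the function names and sample values are invented):

```python
def encode(values, bits=8):
    """Lossy stream encoding sketch: linearly quantize floats to `bits`-bit
    integer codes, trading precision for transfer size."""
    lo, hi = min(values), max(values)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((v - lo) / scale) for v in values]
    return codes, lo, scale

def decode(codes, lo, scale):
    """Restore approximate values from integer codes."""
    return [lo + c * scale for c in codes]

temps = [250.0, 260.5, 271.3, 255.2]        # hypothetical gridded temperatures
codes, lo, scale = encode(temps, bits=8)
restored = decode(codes, lo, scale)
```

Raising `bits` shrinks the reconstruction error (bounded by half the quantization step) at the cost of a larger stream, which mirrors the paper's dynamic-precision provisioning.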

6.
Three-dimensional (3D) building models are essential for 3D Geographic Information Systems and play an important role in various urban management applications. Although several light detection and ranging (LiDAR) data-based reconstruction approaches have made significant advances toward the fully automatic generation of 3D building models, the process is still tedious and time-consuming, especially for massive point clouds. This paper introduces a new framework that utilizes a spatial database to achieve high performance via parallel computation for fully automatic 3D building roof reconstruction from airborne LiDAR data. The framework integrates data-driven and model-driven methods to produce building roof models of the primary structure with detailed features. The framework is composed of five major components: (1) a density-based clustering algorithm to segment individual buildings, (2) an improved boundary-tracing algorithm, (3) a hybrid method for segmenting planar patches that selects seed points in parameter space and grows the regions in spatial space, (4) a boundary regularization approach that considers outliers and (5) a method for reconstructing the topological and geometrical information of building roofs using the intersections of planar patches. The entire process is based on a spatial database, which has the following advantages: (a) managing and querying data efficiently, especially for millions of LiDAR points, (b) utilizing the spatial analysis functions provided by the system, reducing tedious and time-consuming computation, and (c) using parallel computing while reconstructing 3D building roof models, improving performance.
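Component (1), segmenting individual buildings by density-based clustering, can be sketched with a DBSCAN-like flood fill (a toy stand-in, not the paper's algorithm; points and the `eps` radius are invented):

```python
import math
from collections import deque

def cluster(points, eps=1.0):
    """Toy density-based clustering: points within `eps` of one another are
    chained into the same cluster by breadth-first flood fill."""
    n = len(points)
    labels = [-1] * n
    cid = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        labels[s] = cid
        q = deque([s])
        while q:                        # expand via eps-neighbour hops
            u = q.popleft()
            for v in range(n):
                if labels[v] == -1 and math.dist(points[u], points[v]) <= eps:
                    labels[v] = cid
                    q.append(v)
        cid += 1
    return labels

pts = [(0, 0), (0.5, 0), (1.0, 0.2),    # footprint of building 1
       (10, 10), (10.4, 10.2)]          # footprint of building 2
labels = cluster(pts)
```

A production version would use a spatial index (as the paper's spatial database does) instead of this O(n²) neighbour scan.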

7.
Errors in LiDAR-derived shrub height and crown area on sloped terrain
This study developed and tested four methods for shrub height measurement with airborne LiDAR data in a semiarid shrub-steppe in southwestern Idaho, USA. Unique to this study was its focus on sagebrush height measurement on sloped terrain. The study also developed one of the first methods for estimating the crown area of sagebrush from LiDAR. Both sagebrush height and crown area were underestimated by LiDAR. Sagebrush height was estimated to within ±0.26-0.32 m (two standard deviations of standard error). Crown area was underestimated by a mean of 49%. Further, hillslope had a relatively low impact on sagebrush height and crown area estimation. From a management perspective, estimation of individual shrubs over large geographic areas can be accomplished using a 0.5 m rasterized vegetation height derivative from LiDAR. While the underestimation of crown area is substantial, we suggest that it would improve with higher LiDAR point density (>4 points/m2). Further studies can estimate shrub biomass using LiDAR height and crown area derivatives.

8.
Viewshed and line-of-sight are spatial analysis functions used in applications ranging from urban design to archaeology to hydrology. Vegetation, a variable that is difficult to emulate effectively in computer models, is typically omitted from visibility calculations or unrealistically simulated. In visibility analyses performed at a small scale, where calculation distances are a few hundred meters or less, ineffective incorporation of vegetation can lead to significant modeling error. Using an aerial LiDAR (light detection and ranging) data set of a lodgepole pine (Pinus contorta) dominant ecosystem in Idaho, USA, tree obstruction metrics were derived and integrated into a short-range visibility model. A total of 15 visibility plots were set at a micro-scale level, with visibility modeled to a maximum of 50 m from an observation point. Digital photographs of a 1 m2 target set at 5 m increments along three sightline paths for each visibility plot were used to establish control visibility values. Trunk obstructions, derived from mean vegetation height LiDAR data and processed through a series of tree structure algorithms, were factored into visibility calculations and compared to reference data. Results indicate that the model calculated using trunk obstructions with LiDAR demonstrated a mean error of 8.8% underestimation of target visibility, while the alternative methods using mean vegetation height and bare-earth models underestimated by 65.7% and overestimated by 31.1%, respectively.
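A short-range line-of-sight test with obstruction columns can be sketched as follows (an assumed simplification, not the paper's model; the flat DEM, trunk position, and step size are invented):

```python
import math

def visible(dem, obstacles, p0, p1, eye=1.7, step=0.5):
    """Walk from observer p0 to target p1 in `step`-metre increments and
    fail if any cell's ground + obstruction height rises above the sight ray.
    dem(x, y) -> ground elevation; obstacles maps (col, row) cells to
    obstruction heights (e.g. tree trunks)."""
    (x0, y0), (x1, y1) = p0, p1
    dist = math.hypot(x1 - x0, y1 - y0)
    z0 = dem(x0, y0) + eye
    z1 = dem(x1, y1) + eye
    n = max(1, int(dist / step))
    for k in range(1, n):
        t = k / n
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        ray_z = z0 + t * (z1 - z0)
        if dem(x, y) + obstacles.get((round(x), round(y)), 0.0) > ray_z:
            return False
    return True

flat = lambda x, y: 0.0
trunks = {(5, 0): 6.0}                  # one 6 m trunk on the sightline
open_view = visible(flat, {}, (0, 0), (10, 0))
blocked = visible(flat, trunks, (0, 0), (10, 0))
```

The paper's contribution lies in deriving realistic `obstacles` values (trunk metrics) from LiDAR rather than in the ray walk itself.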

9.
The analysis of the spatial structure of animal communities requires spatial data to determine the distribution of individuals and their limiting factors. New technologies such as very precise GPS, as well as satellite imagery and aerial photographs of very high spatial resolution, are now available. Data from airborne LiDAR (Light Detection and Ranging) sensors can provide digital models of ground and vegetation surfaces with pixel sizes of less than 1 m. We present the first study in terrestrial herpetology using LiDAR data. We aim to identify the spatial patterns of a community of four species of lizards (Lacerta schreiberi, Timon lepidus, Podarcis bocagei, and P. hispanica), and to determine how the habitat influences the spatial distribution of the species. The study area is located in Northern Portugal. The position of each lizard was recorded during 16 surveys of 1 h with a very precise GPS (error < 1 m). LiDAR data provided digital models of surface, terrain, and normalised height. From these data, we derived slope, ruggedness, orientation, and hill-shading variables. We applied spatial statistics to determine the spatial structure of the community. We computed Maxent ecological niche models to determine the importance of environmental variables. The community and its species presented a clustered distribution. We identified 14 clusters, composed of 1-3 species. Species records showed two distribution patterns, with clusters associated with steep and flat areas. Cluster outliers had the same patterns. Juveniles and subadults were associated with areas of low quality, while the sexes used space in similar ways. Maxent models identified suitable habitats across the study area for two species and in the flat areas for the other two species. LiDAR allowed us to understand the local distributions of a lizard community. Remotely sensed data and LiDAR are giving new insights into the study of species ecology. Images of higher spatial resolution are necessary to map important factors such as refuges.

10.
Abstract

We present a model for describing the visibility of a polyhedral terrain from a fixed viewpoint, based on a collection of nested horizons. We briefly introduce the concepts of mathematical and digital terrain models, and some background notions for visibility problems on terrains. Then we define horizons on a polyhedral terrain and introduce a visibility model, which we call the horizon map. We present a construction algorithm and a data structure for encoding the horizon map, and show how it can be used for solving point visibility queries with respect to a fixed viewpoint.
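Restricted to a single direction from the viewpoint, the horizon idea reduces to tracking the steepest elevation angle seen so far along the terrain profile: a cell is visible only if its own angle reaches the running horizon. A 1-D sketch (the profile values are invented; the full horizon map generalizes this over all directions and nested horizons):

```python
import math

def horizon_angles(profile, viewpoint_z, spacing=1.0):
    """For elevations sampled at regular `spacing` moving away from the
    viewpoint, return (running horizon angle, per-cell visibility)."""
    horizon, vis = [], []
    best = -math.inf
    for i, z in enumerate(profile, start=1):
        a = math.atan2(z - viewpoint_z, i * spacing)  # elevation angle of cell i
        vis.append(a >= best)                         # visible iff it tops the horizon
        best = max(best, a)
        horizon.append(best)
    return horizon, vis

profile = [1.0, 5.0, 2.0, 3.0, 9.0]
horizon, vis = horizon_angles(profile, viewpoint_z=0.0)
```

Note the last cell (elevation 9.0) is hidden: it is high, but too far away to beat the horizon set by the nearby peak.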

11.
This paper focuses on two common problems encountered when using Light Detection And Ranging (LiDAR) data to derive digital elevation models (DEMs). Firstly, LiDAR measurements are obtained in an irregular configuration and on a point, rather than a pixel, basis. There is usually a need to interpolate from these point data to a regular grid, so it is necessary to identify the approaches that make best use of the sample data to derive the most accurate DEM possible. Secondly, raw LiDAR data contain information on above-surface features such as vegetation and buildings. It is often desirable to (digitally) remove these features and predict the surface elevations beneath them, thereby obtaining a DEM that does not contain any above-surface features. This paper explores the use of geostatistical approaches for prediction in this situation. The approaches used are inverse distance weighting (IDW), ordinary kriging (OK) and kriging with a trend model (KT). It is concluded that, for the case studies presented, OK offers greater accuracy of prediction than IDW, while KT demonstrates benefits over OK. The absolute differences are not large, but to make the most of the high-quality LiDAR data, KT seems the most appropriate technique in this case.
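Of the three predictors compared, IDW is the simplest to sketch (OK and KT additionally require a fitted variogram model). A minimal version, with invented sample values:

```python
import math

def idw(samples, x, y, power=2.0):
    """Inverse distance weighting: the prediction at (x, y) is a weighted
    mean of samples [(xi, yi, vi)] with weights 1 / distance**power."""
    num = den = 0.0
    for xi, yi, vi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return vi                   # exact hit on a sample point
        w = 1.0 / d ** power
        num += w * vi
        den += w
    return num / den

samples = [(0, 0, 10.0), (2, 0, 20.0)]  # two elevation samples
mid = idw(samples, 1, 0)                # prediction halfway between them
```

IDW is an exact interpolator bounded by the sample extremes, which is one reason kriging, with its modelled spatial covariance, can outperform it on LiDAR terrain.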

12.
13.
Abstract

Vector data storage has various advantages in a cartographic or geographical information system (GIS) environment, but lacks internal spatial relationships between individual features. Quadtree structures have been used extensively to store and access raster data. This paper shows how quadtree methods may be adapted for use in spatially indexing vector data. It demonstrates that a vector quadtree stored in floating-point representation overcomes the classical raster-quadtree problem of data approximation. Examples of vector quadtrees applied to realistic-size data sets are given.
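A floating-point point quadtree illustrates the general quadtree-indexing idea (this is a generic sketch, not the paper's vector quadtree; capacities and coordinates are invented):

```python
class QuadTree:
    """Minimal point quadtree: each node covers a square and splits into
    four children once it holds more than `cap` points."""
    def __init__(self, x, y, size, cap=2):
        self.x, self.y, self.size, self.cap = x, y, size, cap
        self.points, self.children = [], None

    def insert(self, px, py):
        if self.children is not None:
            self._child(px, py).insert(px, py)
            return
        self.points.append((px, py))
        if len(self.points) > self.cap:        # split and push points down
            half = self.size / 2
            self.children = [QuadTree(self.x + dx * half, self.y + dy * half,
                                      half, self.cap)
                             for dy in (0, 1) for dx in (0, 1)]
            pts, self.points = self.points, []
            for qx, qy in pts:
                self._child(qx, qy).insert(qx, qy)

    def _child(self, px, py):
        half = self.size / 2
        ix = 1 if px >= self.x + half else 0
        iy = 1 if py >= self.y + half else 0
        return self.children[iy * 2 + ix]

    def query(self, x0, y0, x1, y1):
        """All stored points inside the box [x0, x1] x [y0, y1]."""
        if x1 < self.x or y1 < self.y or \
           x0 > self.x + self.size or y0 > self.y + self.size:
            return []                          # box misses this quadrant
        out = [p for p in self.points if x0 <= p[0] <= x1 and y0 <= p[1] <= y1]
        if self.children:
            for c in self.children:
                out += c.query(x0, y0, x1, y1)
        return out

qt = QuadTree(0, 0, 100)
for p in [(10, 10), (20, 80), (75, 75), (60, 20), (55, 55)]:
    qt.insert(*p)
hits = qt.query(50, 50, 100, 100)
```

The floating-point coordinates stored at the leaves are exact, which is the property the paper exploits to avoid raster-quadtree approximation error.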

14.
A light detection and ranging (LiDAR) survey was conducted in a densely built-up area to generate a high-resolution digital elevation model (DEM) for identifying active faults. The urban district of Matsumoto City in central Japan is located in a 3-km2 basin along the Itoigawa–Shizuoka Tectonic Line active fault system, one of the Japanese onshore fault systems with the highest earthquake probability. A high-resolution DEM at a 0.5-m grid interval was obtained after removing the effects of laser returns from buildings, clouds and vegetation. It revealed a continuous scarp up to ~2 m in height. Borehole data and archaeological studies indicate that the scarp was formed during the most recent faulting event, associated with historical earthquakes. In addition, the fault scarp strongly supports the interpretation that the urban district lies in a pull-apart basin related to a fault step-over between two left-lateral strike-slip faults. Consequently, accurate interpretation of fault geometry is crucial to provide estimates of future surface deformation and to allow modeling of basin structure and strong ground motion. Thus, LiDAR mapping surveys in urban districts are effective for detailed active fault mapping, in order to constrain basin structure and to forecast the exact location of surface rupturing associated with large earthquakes.

15.
ABSTRACT

The analysis of geographically referenced data, specifically point data, is predicated on the accurate geocoding of those data. Geocoding refers to the process in which geographically referenced data (addresses, for example) are placed on a map. This process may lead to issues with positional accuracy or the inability to geocode an address. In this paper, we conduct an international investigation into the impact of the (in)ability to geocode an address on the resulting spatial pattern. We use a variety of point data sets of crime events (varying numbers of events and types of crime), a variety of areal units of analysis (varying the number and size of areal units) from a variety of countries (varying underlying administrative systems), and a locally based spatial point pattern test to find the levels of geocoding match rates needed to maintain the spatial patterns of the original data when addresses are missing at random. We find that the level of geocoding success depends on the number of points and the number of areal units under analysis, but show that, in general, the necessary levels of geocoding success are lower than those found in previous research. This finding is consistent across different national contexts.
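The missing-at-random experiment can be mimicked in a few lines: drop each event with probability (1 - match rate) and check whether the busiest areal unit survives the loss (a toy version with invented data; the paper uses a formal local point pattern test rather than this max-cell comparison):

```python
import random

def subsample(points, match_rate, seed=0):
    """Simulate geocoding failures missing at random: each event is
    successfully geocoded with probability `match_rate`."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() < match_rate]

def cell_counts(points, cell=10.0):
    """Aggregate point events into square areal units."""
    counts = {}
    for x, y in points:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    return counts

# a dense hot spot of 200 events plus 50 scattered background events
rng = random.Random(42)
hot = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(200)]
background = [(rng.uniform(10, 50), rng.uniform(10, 50)) for _ in range(50)]
events = hot + background

full = cell_counts(events)
kept = cell_counts(subsample(events, match_rate=0.8))
hot_cell = max(full, key=full.get)
```

Even at an 80% match rate the dominant cell is unchanged here, which is the intuition behind the paper's finding that required match rates are lower than previously assumed.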

16.
ABSTRACT

High performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and domain decomposition for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in evaluating computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition, which divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing across different cases. Comparative experiments between the approach and traditional methods were performed using two cases: DEM generation from point clouds and spatial intersection on vector data. The results not only demonstrate the advantage of the approach, but also provide hints on how traditional GIS computation can be improved by AI machine learning.
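Once per-tile CIT values are predicted, the decomposition step reduces to splitting the domain into parts of roughly equal total cost. A greedy 1-D sketch (a hypothetical stand-in for the paper's decomposition; the cost values are invented):

```python
def balanced_split(costs, k):
    """Greedy contiguous decomposition: split a row of per-tile predicted
    computational intensities into k subdivisions of roughly equal cost."""
    target = sum(costs) / k
    parts, current, acc = [], [], 0.0
    for i, c in enumerate(costs):
        current.append(i)
        acc += c
        if acc >= target and len(parts) < k - 1:  # close this subdivision
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

costs = [1, 1, 1, 9, 1, 1, 1, 9]   # two expensive tiles among cheap ones
parts = balanced_split(costs, 2)
```

A naive equal-area split (four tiles each) would give loads of 12 versus 12 here only by luck of placement; driving the split by predicted cost guarantees it, which is the load-balancing argument made above.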

17.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to the analysis of large point data sets, which are common in this big data era. This article presents a graphics processing unit (GPU)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations effectively improved the performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores for adaptive KDE, the GPU-enabled algorithm accelerated point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as the size of the data set increases. Given the significant acceleration brought by the GPU-enabled adaptive KDE algorithm, point pattern analysis with the adaptive KDE approach on large point data sets can be performed efficiently. Point pattern analysis on spatial big data, computationally prohibitive with the sequential algorithm, can be conducted routinely with the GPU-accelerated algorithm. The GPU-accelerated adaptive KDE approach contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
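The adaptive-bandwidth idea itself (as opposed to its GPU acceleration) can be sketched in 1-D with an Abramson-style pilot estimate: bandwidths widen where a pilot fixed-KDE density is low and narrow where it is high (function names and data are invented; the paper's algorithm and its optimizations are not reproduced):

```python
import math

def fixed_kde(xs, data, h):
    """Gaussian KDE with one global bandwidth h."""
    n = len(data)
    return [sum(math.exp(-0.5 * ((x - d) / h) ** 2) / (h * math.sqrt(2 * math.pi))
                for d in data) / n for x in xs]

def adaptive_kde(xs, data, h0=1.0, alpha=0.5):
    """Adaptive-bandwidth KDE sketch: a pilot fixed-KDE value at each sample
    sets that sample's local bandwidth."""
    pilot = fixed_kde(data, data, h0)
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))  # geometric mean
    hs = [h0 * (g / p) ** alpha for p in pilot]                 # local bandwidths
    n = len(data)
    return [sum(math.exp(-0.5 * ((x - d) / h) ** 2) / (h * math.sqrt(2 * math.pi))
                for d, h in zip(data, hs)) / n for x in xs]

data = [0.0, 0.1, 0.2, 5.0]             # a tight cluster plus one outlier
dens = adaptive_kde([0.1, 5.0], data)
```

The per-sample bandwidth computation is exactly the part that is quadratic in the data size, which is why the paper targets it for algorithmic and GPU optimization.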

18.
This article aims to develop a new type of temporal pattern analysis—temporal point pattern analysis (TPPA)—by treating the distribution of activities as a point pattern on a two-dimensional plane, using the start time and end time of activities as axes. Geographic information system (GIS) methods originally used in spatial point pattern analysis are introduced to support TPPA. This article presents a case study of the temporal patterns of university students' library-visiting activities, using a four-week smart card data set from Chengdu City, China. Several GIS methods are applied, including the measurement of mean centers, kernel density, nearest neighbor distances, and optimized hot spot analysis. Results show that these GIS methods can reveal a great deal of information about the temporal pattern of activities, thereby demonstrating the feasibility of the proposed TPPA of activities. Key Words: GIS, human activities, library visiting, smart card data, visualization.
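Two of the listed measures, the mean center and the mean nearest-neighbour distance, transfer directly to the (start time, end time) plane. A sketch with invented visit records:

```python
import math

def mean_center(points):
    """Centroid of a 2-D point pattern."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def mean_nn_distance(points):
    """Mean nearest-neighbour distance; small values indicate clustering."""
    dists = [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
             for i, p in enumerate(points)]
    return sum(dists) / len(dists)

# activities as (start_hour, end_hour) points on the temporal plane:
# three similar morning visits plus one afternoon visit
visits = [(9.0, 11.0), (9.2, 11.1), (9.1, 10.9), (14.0, 17.0)]
center = mean_center(visits)
nn = mean_nn_distance(visits)
```

Because the morning visits cluster tightly, dropping the afternoon outlier shrinks the mean nearest-neighbour distance, which is the kind of clustering signal TPPA reads off the temporal plane.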

19.
Abstract

Multiresolution data structures provide a means of retrieving geographical features from a database at levels of detail which are adaptable to different scales of representation. A database design is presented which integrates multi-scale storage of point, linear and polygonal features, based on the line generalization tree, with a multi-scale surface model based on the Delaunay pyramid. The constituent vertices of topologically-structured geographical features are thus distributed between the triangulated levels of a Delaunay pyramid in which triangle edges are constrained to follow those features at differing degrees of generalization. Efficient locational access is achieved by imposing a spatial index on each level of the pyramid.

20.
ABSTRACT

Spatial interpolation is a traditional geostatistical operation that aims at predicting the attribute values of unobserved locations given a sample of data defined on point supports. However, the continuity and heterogeneity underlying spatial data are too complex to be approximated by classic statistical models. Deep learning models, especially the idea of conditional generative adversarial networks (CGANs), provide a perspective for formalizing spatial interpolation as a conditional generative task. In this article, we design a novel deep learning architecture named conditional encoder-decoder generative adversarial neural networks (CEDGANs) for spatial interpolation, combining the encoder-decoder structure with adversarial learning to capture deep representations of sampled spatial data and their interactions with local structural patterns. A case study on elevations in China demonstrates the ability of our model to achieve outstanding interpolation results compared to benchmark methods. Further experiments uncover the spatial knowledge learned in the model's hidden layers and test the potential to generalize our adversarial interpolation idea across domains. This work is an endeavor to investigate deep spatial knowledge using artificial intelligence. The proposed model can benefit practical scenarios and enlighten future research in various geographical applications related to spatial prediction.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号