Similar Documents
20 similar documents retrieved (search time: 142 ms)
1.
ABSTRACT

Spatial interpolation is a traditional geostatistical operation that aims at predicting the attribute values of unobserved locations given a sample of data defined on point supports. However, the continuity and heterogeneity underlying spatial data are too complex to be approximated by classic statistical models. Deep learning models, especially the idea of conditional generative adversarial networks (CGANs), provide a perspective for formalizing spatial interpolation as a conditional generative task. In this article, we design a novel deep learning architecture named conditional encoder-decoder generative adversarial neural networks (CEDGANs) for spatial interpolation, combining the encoder-decoder structure with adversarial learning to capture deep representations of sampled spatial data and their interactions with local structural patterns. A case study on elevations in China demonstrates the ability of our model to achieve outstanding interpolation results compared to benchmark methods. Further experiments uncover the spatial knowledge learned in the model's hidden layers and test the potential to generalize our adversarial interpolation idea across domains. This work is an endeavor to investigate deep spatial knowledge using artificial intelligence. The proposed model can benefit practical scenarios and inform future research in various geographical applications related to spatial prediction.
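As a concrete illustration of the adversarial interpolation idea, the following PyTorch sketch pairs an encoder-decoder generator that fills a sparsely sampled raster with a discriminator conditioned on the sample. All layer sizes, the 64 x 64 raster shape and the loss weighting are illustrative assumptions, not the published CEDGAN architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder: takes (sampled values, sampling mask) and emits a full surface."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

class Discriminator(nn.Module):
    """Judges a candidate surface conditioned on the sparse sample (assumes 64x64 rasters)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 16 * 16, 1))
    def forward(self, cond, surface):
        return self.net(torch.cat([cond, surface], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(cond, real):
    """cond: (B,2,H,W) sampled values + mask; real: (B,1,H,W) complete surface."""
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    fake = G(cond)
    # Discriminator: real surfaces vs. generated ones, both conditioned on the sample.
    opt_d.zero_grad()
    loss_d = bce(D(cond, real), ones) + bce(D(cond, fake.detach()), zeros)
    loss_d.backward(); opt_d.step()
    # Generator: fool D while staying faithful to the observed cells (weight assumed).
    opt_g.zero_grad()
    mask = cond[:, 1:2]
    loss_g = bce(D(cond, fake), ones) + 10.0 * ((fake - real) * mask).abs().mean()
    loss_g.backward(); opt_g.step()

train_step(torch.randn(8, 2, 64, 64), torch.randn(8, 1, 64, 64))
```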

2.
Abstract

This research focuses on the development of a methodology for undertaking real-time spatial analysis in a supercomputing environment, specifically using massively parallel SIMD computers. Several approaches that can be used to explore the parallelization characteristics of spatial problems are introduced. Within a methodology directed toward spatial data parallelism, strategies based on both location-based and object-based data decomposition are proposed, and a programming logic for spatial operations at the local, neighborhood and global levels is recommended, as sketched below. An empirical study of real-time traffic flow analysis shows the utility of the suggested approach for a complex spatial analysis situation. The example demonstrates that the proposed methodology, especially when combined with appropriate programming strategies, is preferable in situations where critical, real-time spatial analysis computations are required. Its implementation in a parallel environment also raises some interesting questions about the theoretical basis underlying the analysis of large networks.
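A minimal sketch of location-based data decomposition for a local-level operation, where each output cell depends only on its own input cell, so row blocks can be processed independently. Python worker processes stand in for the SIMD machinery purely for illustration; a neighborhood-level operation would additionally require halo rows shared between blocks.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def local_op(block):
    """A local-level operation: each output cell depends only on its input cell."""
    return np.where(block > 0.5, 1, 0)

def parallel_local_op(raster, n_workers=4):
    # Location-based decomposition: split the raster into contiguous row blocks.
    blocks = np.array_split(raster, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(local_op, blocks))
    return np.vstack(results)

if __name__ == "__main__":
    raster = np.random.rand(1000, 1000)
    print(parallel_local_op(raster).sum(), "cells above threshold")
```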

3.
ABSTRACT

High-performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and decomposing the domain for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in evaluating computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition that divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing across different cases. Comparative experiments between the approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection on vector data. The results not only demonstrate the advantage of the approach but also provide hints on how traditional GIS computation can be improved by machine learning.
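The following sketch, with hypothetical tile features and timings, shows the two steps in miniature: a regression model is trained to predict computational intensity from per-tile features, and the predictions then drive a balanced assignment of tiles to workers.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: per-tile features (e.g. point count, density,
# extent) and the measured runtime of past geoprocessing jobs.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))
y_train = X_train @ np.array([5.0, 2.0, 0.5]) + rng.random(500)

model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

# Predict CIT for the tiles of a new job, then assign tiles to workers
# greedily (heaviest tile to the least-loaded worker) to balance the load.
tiles = rng.random((64, 3))
cit = model.predict(tiles)
n_workers = 8
loads = np.zeros(n_workers)
assignment = {}
for tile_id in np.argsort(cit)[::-1]:   # heaviest predicted tiles first
    w = int(np.argmin(loads))           # least-loaded worker so far
    loads[w] += cit[tile_id]
    assignment[tile_id] = w
print("predicted load per worker:", np.round(loads, 2))
```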

4.
ABSTRACT

Crime often clusters in space and time. Near-repeat patterns improve understanding of crime communicability and its space-time interactions. Near-repeat analysis requires extensive computing resources to assess the statistical significance of space-time interactions: a computationally intensive Monte Carlo simulation-based approach is used to evaluate the significance of the space-time patterns underlying near-repeat events. Currently available software for identifying near-repeat patterns does not scale to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis in large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the newly developed software with an existing implementation, assess the performance gain due to parallel computation, test the scalability of the software to large crime datasets and assess the utility of the new software for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms that leverage high-performance computing, along with performance optimization techniques, can be used to develop software that is scalable to large datasets and provides solutions for computationally intensive statistical simulation-based approaches in crime analysis.
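The following simplified sketch uses a Knox-style statistic, counting event pairs that are close in both space and time, with a Monte Carlo permutation test; the permutations are independent and so parallelize trivially. The thresholds, data and exact statistic are illustrative, not those of the published calculator.

```python
import numpy as np
from multiprocessing import Pool
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(300, 2))   # event locations (m)
t = rng.uniform(0, 365, size=300)          # event times (days)
S_NEAR, T_NEAR = 200.0, 14.0               # assumed "near" thresholds

d_space = pdist(xy)                        # all pairwise spatial distances

def knox(times):
    """Count pairs near in both space and time."""
    d_time = pdist(times[:, None])
    return int(np.sum((d_space <= S_NEAR) & (d_time <= T_NEAR)))

def permuted_knox(seed):
    """One Monte Carlo replicate: shuffle event times, recompute the statistic."""
    r = np.random.default_rng(seed)
    return knox(r.permutation(t))

if __name__ == "__main__":
    observed = knox(t)
    with Pool() as pool:                   # replicates run in parallel
        sims = pool.map(permuted_knox, range(999))
    p = (1 + sum(s >= observed for s in sims)) / (1 + len(sims))
    print(f"observed={observed}, p-value={p:.3f}")
```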

5.
The availability of spatial data on an unprecedented scale, as well as advancements in analytical and visualization techniques, gives researchers the opportunity to study complex problems over large urban and regional areas. Nevertheless, few individual data sets provide both the requisite spatial and temporal observational frequency to truly facilitate detailed investigations. Some data are collected frequently over time but only at a few geographic locations (e.g., weather stations). Similarly, other data are collected with a high level of spatial resolution but not at regular or frequent time intervals (e.g., satellite data). The purpose of this article is to present an interpolation approach that leverages the relative temporal richness of one data set and the relative spatial richness of another to fill in the gaps. Because some interpolation techniques are more appropriate than others for specific types of data, we propose a space–time interpolation approach whereby two interpolation methods – one for the temporal and one for the spatial dimension – are used in tandem to improve accuracy.

We call our ensemble approach the space–time interpolation environment (STIE). The primary steps within this environment include a spatial interpolation processor, a temporal interpolation processor, and a calibration processor that enforces phenomenon-related behavioral constraints. The specific interpolation techniques used within the STIE can be chosen on the basis of suitability for the data and application at hand, as in the sketch below. In this article, we first describe STIE conceptually, including the data input requirements, output structure, details of the primary steps, and the mechanism for coordinating the data within those steps. We then describe a case study focusing on urban land cover in Phoenix, Arizona, using our working implementation. Our empirical results show that our approach estimated urban land cover more accurately than any single interpolation technique.
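A minimal sketch of the tandem idea, assuming linear interpolation for the temporal dimension and inverse-distance weighting for the spatial one; both are stand-ins for whichever techniques suit the data at hand, and the station layout and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # x, y of stations
obs_times = np.array([0.0, 6.0, 12.0, 18.0, 24.0])           # observation hours
obs = rng.random((3, 5)) * 10                                 # value per station/time

def temporal_interp(target_t):
    """Step 1: linear interpolation through time at every station."""
    return np.array([np.interp(target_t, obs_times, obs[i])
                     for i in range(len(stations))])

def idw(points, values, targets, power=2.0):
    """Step 2: inverse-distance-weighted spatial interpolation."""
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Estimate the field at t = 9 h on a small grid of unobserved locations.
gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 8, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = idw(stations, temporal_interp(9.0), grid)
print(surface.reshape(5, 5).round(2))
```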

6.
Abstract

In this paper we address the problem of computing visibility information on digital terrain models in parallel. We propose a parallel algorithm for computing the region visible from an observation point located on the terrain. The algorithm is based on the sequential triangle-sorting visibility approach proposed by De Floriani et al. (1989). Static and dynamic parallelization strategies, in terms of both partitioning criteria and scheduling policies, are discussed. The different parallelization strategies are implemented on an MIMD multicomputer and evaluated through experimental results.

7.
Abstract

Kriging is an optimal method of spatial interpolation that produces an error for each interpolated value. Block kriging is a form of kriging that computes averaged estimates over blocks (areas or volumes) within the interpolation space. If this space is sampled sparsely and divided into blocks of a constant size, a variable estimation error is obtained for each block, with blocks near to sample points having smaller errors than blocks farther away. An alternative strategy for sparsely sampled spaces is to vary the sizes of blocks in such a way that a block's interpolated value is just sufficiently different from that of an adjacent block, given the errors on both blocks. This has the advantage of increasing spatial resolution in many regions while reducing it in others where maintaining a constant block size is unjustified (hence achieving data compression). Such a variable subdivision of space can be achieved by regular recursive decomposition using a hierarchical data structure, as sketched below. An implementation of this alternative strategy employing a split-and-merge algorithm operating on a hierarchical data structure is discussed. The technique is illustrated using an oceanographic example involving the interpolation of satellite sea surface temperature data. Consideration is given to the problem of error propagation when combining variable-resolution interpolated fields in GIS modelling operations.
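A simplified sketch of the variable-block decomposition follows. For brevity the kriging error is replaced by the standard error of the samples falling in each block; a full implementation would use the block kriging variance instead, and the toy field and minimum block size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(400, 2))
val = np.sin(pts[:, 0] / 15) * 10 + rng.normal(0, 0.5, 400)   # toy field

def block_stats(x0, y0, size):
    """Mean and a crude error proxy (standard error) for samples in a block."""
    m = (pts[:, 0] >= x0) & (pts[:, 0] < x0 + size) & \
        (pts[:, 1] >= y0) & (pts[:, 1] < y0 + size)
    v = val[m]
    if len(v) == 0:
        return None, None
    return v.mean(), v.std() / np.sqrt(len(v))

def decompose(x0, y0, size, min_size=6.25, out=None):
    """Recursively split a block only while its children are distinguishable."""
    out = [] if out is None else out
    mean, _ = block_stats(x0, y0, size)
    if mean is None or size <= min_size:
        out.append((x0, y0, size, mean))
        return out
    half = size / 2
    kids = [block_stats(x0 + dx, y0 + dy, half)
            for dx in (0, half) for dy in (0, half)]
    means = [m for m, e in kids if m is not None]
    errs = [e for m, e in kids if m is not None]
    # Split if some pair of children differs by more than their combined errors.
    split = any(abs(a - b) > 2 * (ea + eb)
                for i, (a, ea) in enumerate(zip(means, errs))
                for b, eb in zip(means[i + 1:], errs[i + 1:]))
    if split:
        for dx in (0, half):
            for dy in (0, half):
                decompose(x0 + dx, y0 + dy, half, min_size, out)
    else:
        out.append((x0, y0, size, mean))
    return out

blocks = decompose(0, 0, 100)
print(f"{len(blocks)} variable-size blocks instead of {16 * 16} fixed ones")
```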

8.
ABSTRACT

Agent-based models (ABMs) are used to represent a variety of complex systems by simulating the local interactions between system components from which observable system-level spatial patterns emerge. The degree to which these interactions are represented correctly must therefore be evaluated. Networks can be used to discretely represent and quantify interactions between system components and the emergent system structure. The main objective of this study is thus to develop and implement a novel validation approach called NEtworks for ABM Testing (NEAT), which integrates geographic information science, ABM approaches and spatial network representations to simulate complex systems as measurable, dynamic spatial networks. The simulated spatial network structures are measured using graph theory and compared with empirical regularities of observed real networks. The approach is implemented to validate a theoretical ABM representing the spread of influenza in the City of Vancouver, Canada. Results demonstrate that the NEAT approach can validate whether the internal model processes are represented realistically, thus better enabling the use of ABMs in decision-making processes.
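A small sketch of the NEAT-style check using networkx: graph measures of a simulated network (here a synthetic stand-in for an ABM's contact network) are compared against a random baseline, since real contact networks typically show high clustering at short path lengths.

```python
import networkx as nx

# Stand-in for a contact network produced by an ABM run.
sim_net = nx.watts_strogatz_graph(n=500, k=6, p=0.05, seed=42)
# Random baseline with the same density for comparison.
rand_net = nx.gnm_random_graph(500, sim_net.number_of_edges(), seed=42)

for name, g in [("simulated", sim_net), ("random baseline", rand_net)]:
    # Path length is computed on the giant component in case g is disconnected.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name}: clustering={nx.average_clustering(g):.3f}, "
          f"avg path length={nx.average_shortest_path_length(giant):.2f}")
# A plausibly realistic (small-world) contact structure should show much
# higher clustering than the random baseline at a comparable path length.
```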

9.
In this paper, we report efforts to develop a parallel implementation of the p-compact regionalization problem suitable for multi-core desktop and high-performance computing environments. Regionalization for data aggregation is a key component of many spatial analytical workflows and is known to be NP-hard. We utilize a low-communication-cost parallel implementation technique that provides a benchmark for more complex implementations of this algorithm. Both the initialization phase, which uses a Memory-based Randomized Greedy and Edge Reassignment (MERGE) algorithm, and the local search phase, which uses simulated annealing, are distributed over the available compute cores. Our results suggest that the proposed parallelization strategy is capable of solving the compactness-driven regionalization problem both efficiently and effectively. We expect this work to advance CyberGIS research by extending its application areas into regionalization and to contribute to the spatial analysis community by providing a parallelization strategy for solving large regionalization problems efficiently.

10.
Abstract

Utilising the powerful resources of a parallel computer has become a technique available to the GIS software engineer for increasing the performance of such complex software systems. This paper discusses the effectiveness of both automatic and manual parallelising techniques, with a view to assessing whether the inherently sequential structure of GIS software is a detrimental factor inhibiting the use of such techniques. Using the Scan Line Fill (SLF) algorithm in GIMMS, it is shown that whilst automated parallelisation has no merit in this case, a significant performance benefit can be achieved by redesigning the algorithm at the macro level to exploit the natural geometric parallelism inherent within it (sketched below). However, the results illustrate that the full potential of this approach will not be realised until the I/O bottleneck is completely overcome, as opposed to merely avoided.
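For reference, the following sketch shows the scan-line fill logic itself: each scan line intersects the polygon boundary at an even number of x positions, and cells between successive pairs are filled. Each scan line is independent of the others, which is exactly the geometric parallelism the macro-level redesign exploits (rows could be mapped to processors). The polygon and grid size are illustrative.

```python
import numpy as np

def scanline_fill(poly, height, width):
    """Rasterise a polygon given as a list of (x, y) vertices (even-odd rule)."""
    grid = np.zeros((height, width), dtype=np.uint8)
    n = len(poly)
    for row in range(height):                       # each row is independent
        y = row + 0.5                               # sample at cell centres
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            if (y1 <= y < y2) or (y2 <= y < y1):    # edge crosses this scan line
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for xa, xb in zip(xs[0::2], xs[1::2]):      # fill between crossing pairs
            lo = max(0, int(np.ceil(xa - 0.5)))
            hi = min(width - 1, int(np.floor(xb - 0.5)))
            grid[row, lo:hi + 1] = 1
    return grid

polygon = [(2, 2), (28, 5), (25, 25), (10, 18)]
print(scanline_fill(polygon, 30, 30).sum(), "cells filled")
```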

11.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to large point data sets, which are common in this big data era. This article presents a graphics processing unit (GPU)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations effectively improved performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores, the GPU-enabled algorithm accelerated point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as the size of the data set increases. Given the significant acceleration brought by the GPU-enabled algorithm, point pattern analysis with the adaptive KDE approach can be performed efficiently on large point data sets: analyses that are computationally prohibitive with the sequential algorithm can be conducted routinely with the GPU-accelerated one. The GPU-accelerated adaptive KDE approach thus contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
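The sequential baseline being accelerated can be sketched as follows, using an Abramson/Silverman-style pilot density to derive per-point bandwidths; the O(n^2) kernel sums are the part that maps naturally onto GPU threads. The pilot bandwidth and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal([0, 0], [1, 3], size=(1000, 2))   # toy 2-D point pattern

def gauss2(dx, h):
    """Bivariate Gaussian kernel; h holds one bandwidth per data point."""
    return np.exp(-0.5 * (dx / h[..., None]) ** 2).prod(axis=-1) / (2 * np.pi * h ** 2)

h0 = 0.5                                           # pilot bandwidth (assumed)
diff = pts[:, None, :] - pts[None, :, :]           # O(n^2): the GPU-friendly part
pilot = gauss2(diff, np.full(len(pts), h0)).mean(axis=1)

g = np.exp(np.mean(np.log(pilot)))                 # geometric mean of pilot densities
h_local = h0 * (pilot / g) ** -0.5                 # wider kernels in sparse areas

def adaptive_kde(x):
    """Density at locations x (m, 2) using the per-point bandwidths h_local."""
    d = x[:, None, :] - pts[None, :, :]
    return gauss2(d, h_local).mean(axis=1)

print(adaptive_kde(np.array([[0.0, 0.0], [3.0, 6.0]])))
```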

12.
Existing sensor network query processors (SNQPs) have demonstrated that in-network processing is an effective and efficient means of interacting with wireless sensor networks (WSNs) for data collection tasks. Inspired by these findings, this article investigates whether spatial analysis over WSNs can be built upon established distributed query processing techniques, with emphasis here on the spatial aspects of sensed data, which are not adequately addressed by existing SNQPs. By spatial analysis we mean the ability to detect topological relationships between spatially referenced entities (e.g. whether mist intersects a vineyard or is disjoint from it) and to derive representations grounded on such relationships (e.g. the geometrical extent of that part of a vineyard that is covered by mist). To support the efficient representation, querying and manipulation of spatial data, we use an algebraic approach. We revisit a previously proposed centralized spatial algebra comprising a set of spatial data types and a comprehensive collection of operations. We have redefined and re-conceptualized the algebra for distributed evaluation and shown that it can be efficiently implemented for in-network execution. The article provides rigorous, formal definitions of the spatial data types (points, lines and regions), together with spatial-valued and topological operations over them, and shows how the algebra can characterize complex, expressive topological relationships between spatial entities and spatial phenomena that, due to their dynamic, evolving nature, cannot be represented a priori.
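For illustration only, the kinds of types and topological operations the algebra defines can be mimicked centrally with Shapely; the article's contribution is evaluating such operations in-network, distributed across sensor nodes. Geometries here are made up.

```python
from shapely.geometry import Polygon

vineyard = Polygon([(0, 0), (100, 0), (100, 60), (0, 60)])
mist = Polygon([(70, 30), (140, 30), (140, 90), (70, 90)])

# Topological predicates between spatially referenced entities.
print("mist intersects vineyard:", mist.intersects(vineyard))
print("mist disjoint from vineyard:", mist.disjoint(vineyard))

# A spatial-valued operation: the part of the vineyard covered by mist.
covered = vineyard.intersection(mist)
print("vineyard area under mist:", covered.area)
```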

13.
Fine-resolution population mapping using OpenStreetMap points-of-interest
Data on population at the building level are required for various purposes. However, to protect privacy, government population data are aggregated. Population estimates at finer scales can be obtained through areal interpolation, a process in which data from one spatial unit system are transferred to another. Areal interpolation can be conducted with ancillary data that guide the redistribution of population. For population estimation at the building level, common ancillary data include three-dimensional data on buildings, obtained through costly processes such as LiDAR. Meanwhile, volunteered geographic information (VGI) is emerging as a new category of data and is already used for purposes related to urban management. The objective of this paper is to present an alternative approach to building-level areal interpolation that uses VGI as ancillary data. The proposed method integrates existing interpolation techniques, i.e., multi-class dasymetric mapping and interpolation by surface volume integration; data on building footprints and points-of-interest (POIs) extracted from OpenStreetMap (OSM) are used to refine population estimates at the building level. A case study was conducted for the city of Hamburg, and the results were compared using different types of POIs. The results suggest that VGI can be used to accurately estimate population distribution, but that further research is needed to understand how POIs can reveal population distribution patterns.
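A minimal sketch of the POI-weighted dasymetric step: a district's published population total is redistributed to its buildings in proportion to footprint area, with POI-derived class weights separating residential from non-residential use. The buildings, weights and total are illustrative, and the full method additionally integrates interpolation by surface volume integration.

```python
district_population = 1200

# (building id, footprint area in m^2, POI-derived class): a building whose
# OSM POIs are shops or offices gets a low residential weight.
buildings = [("b1", 400, "residential"), ("b2", 900, "commercial"),
             ("b3", 250, "residential"), ("b4", 600, "mixed")]
class_weight = {"residential": 1.0, "mixed": 0.5, "commercial": 0.05}

# Dasymetric redistribution: share proportional to area times class weight.
shares = {bid: area * class_weight[cls] for bid, area, cls in buildings}
total = sum(shares.values())
for bid, s in shares.items():
    print(f"{bid}: ~{district_population * s / total:.0f} residents")
```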

14.
Spatial interpolation of marine environment data using P-MSN
ABSTRACT

When a marine study area is large, the environmental variables often present spatially stratified non-homogeneity, violating the second-order stationarity assumption. The stratified non-homogeneous surface can be divided into several stationary strata with different means or variances, but still with close relationships between neighboring strata. To give the best linear unbiased estimator for such environmental variables, an interpolated version of the mean of the surface with stratified non-homogeneity (MSN) method, called point mean of the surface with stratified non-homogeneity (P-MSN), was derived. P-MSN distinguishes the spatial mean and variogram in different strata and borrows information from neighboring strata to improve the interpolation precision near the strata boundaries. This paper also introduces the implementation of the method, and its performance is demonstrated in two case studies, one using ocean color remote sensing data and the other using marine environment monitoring data. The predictions of P-MSN were compared with ordinary kriging, stratified kriging, kriging with an external drift and empirical Bayesian kriging, the most frequently used methods that can handle some degree of spatial non-homogeneity. The results illustrate that, for spatially stratified non-homogeneous environmental variables, P-MSN outperforms the other methods by simultaneously improving interpolation precision and avoiding artificial abrupt changes along the strata boundaries.

15.
ABSTRACT

This paper proposes a new classification method for spatial data that adjusts prior class probabilities according to local spatial patterns. First, the proposed method uses a classical statistical classifier to model the training data. Second, the prior class probabilities are estimated from the local spatial pattern, and the classifier for each unseen object is adapted using the estimated prior. Finally, each unseen object is classified using its adapted classifier, as in the sketch below. Because the new method can be coupled with both generative and discriminative statistical classifiers, it generally performs more accurately than other methods across a variety of spatial datasets. Experimental results show that this method has a lower prediction error than statistical classifiers that take no spatial information into account. Moreover, in the experiments, the new method also outperforms spatial auto-logistic regression and Markov random field-based methods when an appropriate estimate of the local prior class distribution is used.
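A sketch of the prior-adjustment idea under assumed data: a Gaussian naive Bayes classifier supplies the evidence, the local prior is estimated from the labels of the k nearest training locations, and by Bayes' rule replacing the global prior p(c) with a local one only requires rescaling the posterior by local_prior(c) / global_prior(c). The data, features and choice of k are illustrative.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, (300, 2))                  # training locations
labels = (coords[:, 0] + rng.normal(0, 15, 300) > 50).astype(int)
feats = rng.normal(labels[:, None], 1.2, (300, 1))      # attribute features

clf = GaussianNB().fit(feats, labels)                   # classical statistical classifier
nn = NearestNeighbors(n_neighbors=15).fit(coords)       # spatial neighbourhoods

def classify(feat, coord):
    post = clf.predict_proba(feat.reshape(1, -1))[0]    # posterior under global priors
    _, idx = nn.kneighbors(coord.reshape(1, -1))
    local_prior = np.bincount(labels[idx[0]], minlength=2) / idx.shape[1]
    # Swap the global prior for the locally estimated one and renormalise.
    adjusted = post * (local_prior + 1e-9) / clf.class_prior_
    return adjusted / adjusted.sum()

print(classify(np.array([0.8]), np.array([20.0, 50.0])))  # deep in class-0 territory
```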

16.
17.
This study presents a massively parallel spatial computing approach that uses general-purpose graphics processing units (GPUs) to accelerate Ripley's K function for univariate spatial point pattern analysis. Ripley's K function is a representative spatial point pattern analysis approach that quantitatively evaluates the spatial dispersion characteristics of point patterns; however, considerable computation is often required when applying it to large spatial data. In this study, we developed a massively parallel implementation of Ripley's K function, using GPUs as a many-core platform. Variable-grained domain decomposition and thread-level synchronization based on shared memory are the parallel strategies designed to exploit concurrency in the spatial algorithm for efficient parallelization. Experimental results demonstrate that substantial acceleration is obtained for Ripley's K function parallelized within GPU environments.
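The sequential estimator being accelerated can be sketched in a few lines of numpy, using the uncorrected form K(t) = A / (n(n-1)) * sum over ordered pairs of 1(d_ij <= t); the pairwise-distance work is what the GPU parallelizes, and edge correction is omitted for brevity.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(1000, 2))     # points in a 100 x 100 window
area, n = 100.0 * 100.0, len(pts)

d = pdist(pts)                                # all unordered pairwise distances
radii = np.linspace(1, 25, 25)
# Factor 2 converts unordered pairs to the ordered-pair count in the estimator.
k = np.array([2.0 * np.sum(d <= t) * area / (n * (n - 1)) for t in radii])

# Under complete spatial randomness K(t) ~ pi t^2; values above indicate
# clustering and values below indicate dispersion.
print(np.column_stack([radii, k, np.pi * radii ** 2])[:5].round(1))
```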

18.
Abstract

This paper presents an argument for introducing location-allocation theory to advanced undergraduate and beginning graduate students in a simplified continuous-space environment that is relatively free of the distorting effects of networks and other aspects of more differentiated "real-world" environments. This approach enables instructors to visually reinforce the role of different models and their objective functions. In this simplified setting, students can initially concentrate on the link between these mathematical programming techniques and the spatial nature of the problems being solved, and can move on to more advanced methods in more differentiated environments in a short time. A freeware program entitled NEWLAP was developed to facilitate this approach. The software features a variety of spatial allocation models and their associated constraints that can be applied on both the plane and the sphere. The paper outlines how the software can be used to show alternative solutions from different models on the same data set, as well as its application to a "real-world" problem on a global scale.

19.
ABSTRACT

Recently developed urban air quality sensor networks are used to monitor air pollutant concentrations at a fine spatial and temporal resolution. The measurements are, however, limited to point support. To obtain areal coverage in space and time, interpolation is required. A spatio-temporal regression kriging approach was applied to predict nitrogen dioxide (NO2) concentrations at unobserved space-time locations in the city of Eindhoven, the Netherlands. Prediction maps were created at 25 m spatial resolution and hourly temporal resolution. In regression kriging, the trend is modelled separately from the autocorrelation in the residuals. The trend part of the model, consisting of a set of spatial and temporal covariates, was able to explain 49.2% of the spatio-temporal variability in NO2 concentrations in Eindhoven in November 2016. Spatio-temporal autocorrelation in the residuals was modelled by fitting a sum-metric spatio-temporal variogram model, adding smoothness to the prediction maps. The accuracy of the predictions was assessed using leave-one-out cross-validation, resulting in a Root Mean Square Error of 9.91 μg m⁻³, a Mean Error of −0.03 μg m⁻³ and a Mean Absolute Error of 7.29 μg m⁻³. The method allows for easy prediction and visualization of air pollutant concentrations and can be extended to a near real-time procedure.
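A condensed, purely spatial sketch of regression kriging under assumed data and an assumed exponential residual variogram; the article's model additionally fits a sum-metric variogram over space and time.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, (80, 2))                       # sensor locations (m)
covars = rng.uniform(0, 1, (80, 2))                      # e.g. traffic, land use
no2 = 20 + 15 * covars[:, 0] + 5 * covars[:, 1] + rng.normal(0, 3, 80)

# Step 1: the trend is modelled on the covariates.
trend = LinearRegression().fit(covars, no2)
resid = no2 - trend.predict(covars)

def gamma(h, nugget=1.0, sill=9.0, rng_=300.0):
    """Exponential semivariogram (parameters assumed, not fitted here)."""
    return np.where(h > 0, nugget + sill * (1 - np.exp(-h / rng_)), 0.0)

def krige_residual(target_xy):
    """Step 2: ordinary kriging of the trend residuals at one location."""
    n = len(xy)
    H = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = gamma(H)
    A[n, :], A[:, n] = 1.0, 1.0                          # unbiasedness constraint
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - target_xy, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]                        # kriging weights
    return w @ resid

target = np.array([500.0, 500.0])
pred = trend.predict(np.array([[0.4, 0.7]]))[0] + krige_residual(target)
print(f"predicted NO2 at {target}: {pred:.1f} ug/m3")
```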

20.

Experimental variograms are crucial for most geostatistical studies. In kriging, for example, the variography has a direct influence on the interpolation weights. Despite the great importance of variogram estimators in predicting geostatistical features, they are commonly influenced by outliers in the dataset. A few randomly distributed outliers can mask the pattern of the experimental variogram and produce a destructuration effect, meaning that the true spatial continuity of the data cannot be reproduced. In this paper, an algorithm to detect and remove the effect of outliers in experimental variograms using the Mahalanobis distance is proposed (sketched below). An example of the algorithm's application is presented, showing that the developed technique satisfactorily detects and removes outliers from a variogram.
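A sketch of the idea under assumed data: each variogram-cloud pair (lag, semivariance) is scored by Mahalanobis distance, pairs beyond a chi-square cutoff are dropped, and the experimental variogram is recomputed. The cutoff and binning are illustrative choices rather than the published algorithm's settings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import chi2

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, (150, 2))
z = np.sin(xy[:, 0] / 20) + rng.normal(0, 0.1, 150)
z[:5] += 8.0                                         # inject a few outliers

h = pdist(xy)                                        # pairwise lags
g = pdist(z[:, None], metric="sqeuclidean") / 2.0    # semivariance cloud

# Squared Mahalanobis distance of each (lag, semivariance) pair.
cloud = np.column_stack([h, g])
diff = cloud - cloud.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(cloud.T))
md2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
keep = md2 <= chi2.ppf(0.975, df=2)                  # drop extreme pairs

def experimental_variogram(h, g, n_bins=12):
    edges = np.linspace(0, h.max(), n_bins + 1)
    idx = np.minimum(np.digitize(h, edges) - 1, n_bins - 1)
    return np.array([g[idx == b].mean() for b in range(n_bins)])

print("with outliers :", experimental_variogram(h, g).round(2))
print("cleaned       :", experimental_variogram(h[keep], g[keep]).round(2))
```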

