Similar Documents
20 similar documents found (search time: 31 ms).
1.
Abstract

A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real-world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.
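The multi-scale variance index mentioned above can be illustrated with a small sketch (hypothetical Python, not the authors' code; the aggregation scales and the 1-D series are arbitrary choices for illustration): block-average the data at successively coarser scales and record the variance of the block means at each scale.

```python
import random
import statistics

def multiscale_variance(values, scales=(1, 2, 4, 8)):
    """Variance of block-averaged values at each aggregation scale.

    A variance profile that decays roughly as 1/s is what uncorrelated
    noise produces; slower decay indicates spatial autocorrelation.
    """
    profile = {}
    for s in scales:
        blocks = [statistics.mean(values[i:i + s])
                  for i in range(0, len(values) - s + 1, s)]
        profile[s] = statistics.pvariance(blocks)
    return profile

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
profile = multiscale_variance(noise)
```

For the Gaussian-noise case discussed in the abstract, the variance shrinks quickly with scale; a slowly changing field would keep its variance across scales.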

2.
Abstract

Spatial join indices are join indices constructed for spatial objects. Like join indices in relational database systems, spatial join indices improve the efficiency of spatial join operations. In this paper, a spatial-information-associated join indexing mechanism is developed to speed up spatial queries, especially spatial range queries. Three distance-associated join index structures are developed and studied: basic, ring-structured, and hierarchical. Such join indexing structures can be further extended to include orientation information for flexible applications, which leads to zone-structured and other spatial-information-associated join indices. Our performance study and analysis show that spatial-information-associated join indices substantially improve the performance of spatial queries and that different structures are best suited to different applications.
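The ring-structured idea can be sketched as follows (illustrative Python, not the paper's implementation; the brute-force pair enumeration and `ring_width` value are assumptions). Precomputed object pairs are filed by distance ring, so a range query scans only the rings that can contain qualifying pairs.

```python
from collections import defaultdict
import math

def build_ring_index(objects, targets, ring_width=10.0):
    """Ring-structured distance-associated join index (a sketch).

    Each (object, target) pair is filed under ring floor(d / ring_width),
    where d is the pair's Euclidean distance.
    """
    index = defaultdict(list)  # ring number -> [(obj_id, tgt_id, distance)]
    for oid, (ox, oy) in objects.items():
        for tid, (tx, ty) in targets.items():
            d = math.hypot(ox - tx, oy - ty)
            index[int(d // ring_width)].append((oid, tid, d))
    return index

def range_join(index, ring_width, radius):
    """All (obj, tgt) pairs within `radius`, scanning only rings 0..r."""
    hits = []
    for ring in range(int(radius // ring_width) + 1):
        hits.extend((o, t) for o, t, d in index.get(ring, []) if d <= radius)
    return hits
```

A range query then touches only the inner rings rather than every precomputed pair, which is the source of the speed-up the abstract reports.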

3.
Abstract

Many data structures are possible for the storage of topological information for computer-based maps. The PAN graph is here suggested as an aid in the selection of a strategy appropriate to the application. Examples are given for the mapping of triangular networks and Thiessen polygons. Application of the technique is appropriate to both education in, and design of, spatial data structures for automated cartography and geographical information systems.

4.
Abstract

In previous work, a relational data structure aimed at the exchange of spatial data between systems was developed. As this data structure was relational, it was in first normal form, but compliance with the higher normal forms was not investigated. Recently, a new procedural method for composing fully normalized data structures from the basic data fields has been developed by H. C. Smith, as an alternative to the process of non-loss decomposition, which is difficult to understand. Smith's method has been applied to the data fields required to store points, lines and polygons in a chain-node spatial data model. When geographic domain, coverage layer and map are also considered, the procedure naturally leads to a catalogue model, needed for the exchange of spatial data. Although the method produces a fully normalized data structure, it is not easy to identify which normal forms are responsible for the ultimate arrangement of the data fields into relations; nevertheless, the benefits of these criteria for data base development also apply to spatial data structures and related ancillary data.

5.
Abstract

The accumulation of geological information in digital form, due to modern exploration methods, has introduced the possibility of applying geographical information system technology to the field of geology. To achieve the benefits in information management and in data analysis and interpretation, however, it will be necessary to develop spatial models and associated data structures specifically designed for working in three dimensions. Some progress in this direction has already been demonstrated, with the application of octree spatial subdivision techniques to the storage of uniform volume elements representing mineral properties. By imposing octree tessellations on more precisely defined geometric data, such as triangulated surfaces and polygon line segments, it may now be possible to combine efficient spatial addressing with topologically-coded boundary representations of geological strata. The development of storage schemes capable of representing such geological boundary models at different scales poses a particular problem; a possible solution is hierarchical classification of the vertices of triangulated surfaces according to their shape contribution.
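The octree spatial subdivision mentioned above can be sketched as follows (a minimal, hypothetical Python illustration; the fixed `max_depth` and the point-sample semantics are assumptions, not the paper's storage scheme). Each node either holds a uniform property value or splits into eight child octants.

```python
class Octree:
    """Minimal octree over a cubic volume (a sketch)."""

    def __init__(self, origin, size, depth=0, max_depth=4):
        self.origin, self.size = origin, size
        self.depth, self.max_depth = depth, max_depth
        self.value, self.children = None, None

    def insert(self, point, value):
        """Record a property value at the leaf octant containing `point`."""
        if self.depth == self.max_depth:
            self.value = value
            return
        if self.children is None:
            half = self.size / 2
            ox, oy, oz = self.origin
            # Children generated with dx outermost: index = dx*4 + dy*2 + dz.
            self.children = [
                Octree((ox + dx * half, oy + dy * half, oz + dz * half),
                       half, self.depth + 1, self.max_depth)
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
        self.children[self._octant(point)].insert(point, value)

    def lookup(self, point):
        """Value stored at the deepest node containing `point` (None if empty)."""
        if self.children is None:
            return self.value
        return self.children[self._octant(point)].lookup(point)

    def _octant(self, point):
        half = self.size / 2
        px, py, pz = point
        ox, oy, oz = self.origin
        return ((px >= ox + half) * 4 + (py >= oy + half) * 2
                + (pz >= oz + half))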

6.
In integration of road maps modeled as road vector data, the main task is matching pairs of objects that represent, in different maps, the same segment of a real-world road. In ad hoc integration, the matching is done for a specific need and, thus, is performed in real time, where only limited preprocessing is possible. Usually, ad hoc integration is performed as part of some interaction with a user and, hence, the matching algorithm is required to complete its task in time short enough for human users to provide feedback to the application, that is, in no more than a few seconds. Such interaction is typical of services on the World Wide Web and of applications in car-navigation systems or handheld devices.

Several algorithms were proposed in the past for matching road vector data; however, these algorithms are not efficient enough for ad hoc integration. This article presents algorithms for ad hoc integration of maps in which roads are represented as polylines. The main novelty of these algorithms is in using only the locations of the endpoints of the polylines rather than trying to match whole lines. The efficiency of the algorithms is shown both analytically and experimentally. In particular, these algorithms do not require the existence of a spatial index, and they are more efficient than an alternative approach based on using a grid index. Extensive experiments using various maps of three different cities show that our approach to matching road networks is efficient and accurate (i.e., it provides high recall and precision).

General Terms: Algorithms, Experimentation
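The endpoint-only matching idea can be sketched as follows (illustrative Python; this brute-force version ignores the efficiency machinery that is the article's actual contribution, and the tolerance `tol` is an assumed parameter). Only the two endpoints of each polyline are compared, never the interior vertices.

```python
import math

def match_by_endpoints(roads_a, roads_b, tol=5.0):
    """Match polylines from two maps by endpoint proximity (a sketch).

    Two polylines match when both endpoints of one lie within `tol`
    of the other's endpoints, in either orientation.
    """
    def close(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) <= tol

    matches = []
    for ia, a in enumerate(roads_a):
        a0, a1 = a[0], a[-1]
        for ib, b in enumerate(roads_b):
            b0, b1 = b[0], b[-1]
            if (close(a0, b0) and close(a1, b1)) or \
               (close(a0, b1) and close(a1, b0)):
                matches.append((ia, ib))
    return matches
```

This quadratic loop conveys only the matching criterion; the algorithms in the article achieve the same result efficiently enough for interactive, ad hoc use without a spatial index.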

7.
Existing sensor network query processors (SNQPs) have demonstrated that in-network processing is an effective and efficient means of interacting with wireless sensor networks (WSNs) for data collection tasks. Inspired by these findings, this article investigates the question as to whether spatial analysis over WSNs can be built upon established distributed query processing techniques, but, here, emphasis is on the spatial aspects of sensed data, which are not adequately addressed in the existing SNQPs. By spatial analysis, we mean the ability to detect topological relationships between spatially referenced entities (e.g. whether mist intersects a vineyard or is disjoint from it) and to derive representations grounded on such relationships (e.g. the geometrical extent of that part of a vineyard that is covered by mist). To support the efficient representation, querying and manipulation of spatial data, we use an algebraic approach. We revisit a previously proposed centralized spatial algebra comprising a set of spatial data types and a comprehensive collection of operations. We have redefined and re-conceptualized the algebra for distributed evaluation and shown that it can be efficiently implemented for in-network execution. This article provides rigorous, formal definitions of the spatial data types (points, lines and regions), together with spatial-valued and topological operations over them. The article shows how the algebra can be used to characterize complex and expressive topological relationships between spatial entities and spatial phenomena that, due to their dynamic, evolving nature, cannot be represented a priori.

8.
Abstract

The current research focuses on the development of a methodology for undertaking real-time spatial analysis in a supercomputing environment, specifically using massively parallel SIMD computers. Several approaches that can be used to explore the parallelization characteristics of spatial problems are introduced. Within a methodology directed toward spatial data parallelism, strategies based on both location-based data decomposition and object-based data decomposition are proposed, and a programming logic for spatial operations at local, neighborhood and global levels is also recommended. An empirical study of real-time traffic flow analysis shows the utility of the suggested approach for a complex spatial analysis situation. The empirical example demonstrates that the proposed methodology, especially when combined with appropriate programming strategies, is preferable in situations where critical, real-time spatial analysis computations are required. The implementation of this example in a parallel environment also raises some interesting questions about the theoretical basis underlying the analysis of large networks.
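Location-based data decomposition for a neighborhood-level operation can be sketched as follows (hypothetical Python run sequentially for clarity; on the parallel hardware discussed above, each row strip would be assigned to its own processors, with each strip reading one halo row beyond its boundary).

```python
def mean_filter_strip(grid, top, bottom):
    """3x3 neighborhood mean for rows top..bottom-1, reading halo rows."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(top, bottom):
        row = []
        for c in range(cols):
            nb = [grid[rr][cc]
                  for rr in range(max(0, r - 1), min(rows, r + 2))
                  for cc in range(max(0, c - 1), min(cols, c + 2))]
            row.append(sum(nb) / len(nb))
        out.append(row)
    return out

def decompose_and_run(grid, n_strips=2):
    """Location-based decomposition: each strip is an independent task.

    Run here in a simple loop; in a parallel environment each call to
    mean_filter_strip would execute on a separate set of processors.
    """
    rows = len(grid)
    bounds = [round(i * rows / n_strips) for i in range(n_strips + 1)]
    result = []
    for i in range(n_strips):
        result.extend(mean_filter_strip(grid, bounds[i], bounds[i + 1]))
    return result
```

Because the neighborhood operation only reads a one-cell halo, the strips are independent and the decomposition is embarrassingly parallel, which is what makes location-based strategies attractive for real-time workloads.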

9.
Abstract

Vector data storage has various advantages in a cartographic or geographical information system (GIS) environment, but lacks internal spatial relationships between individual features. Quadtree structures have been extensively used to store and access raster data. This paper shows how quadtree methods may be adapted for use in spatially indexing vector data. It demonstrates that a vector quadtree stored in floating point representation overcomes the classical problem with raster quadtrees of data approximation. Examples of vector quadtrees applied to realistic size data sets are given.

10.
Abstract

Error and uncertainty in spatial databases have gained considerable attention in recent years. The concern is that, as in other computer applications and, indeed, all analyses, poor-quality input data will yield even worse output. Various methods for analysis of uncertainty have been developed, but none has been shown to be directly applicable to an actual geographical information system application in the area of natural resources. In spatial data on natural resources in general, and in soils data in particular, a major cause of error is the inclusion of unmapped units within areas delineated on the map as uniform. In this paper, two alternative algorithms for simulating inclusions in categorical natural resource maps are detailed. Their usefulness is shown by simplified Monte Carlo testing to evaluate the accuracy of agricultural land valuation using land use and soil information. Using two test areas, it is possible to show that errors of as much as 6 per cent may result in the process of land valuation, with simulated valuations both above and below the actual values. Thus, although an actual monetary cost of the error term is estimated here, it is not found to be large.
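The effect of unmapped inclusions on valuation can be illustrated with a toy Monte Carlo run (hypothetical Python; the class names, per-class values and the 10 per cent inclusion rate are invented for illustration and are not the paper's algorithms). Each run randomly relabels some cells of a "uniform" map unit and re-values the area.

```python
import random

def simulate_valuation(cells, value_per_class, inclusion_rate=0.1,
                       alt_class='B', runs=200, seed=42):
    """Monte Carlo sketch of the effect of unmapped inclusions.

    Each run relabels roughly `inclusion_rate` of the cells to a
    hypothetical included class, then re-values the whole area; the
    spread of totals indicates the valuation error.
    """
    rng = random.Random(seed)
    true_total = sum(value_per_class[c] for c in cells)
    totals = []
    for _ in range(runs):
        perturbed = [alt_class if rng.random() < inclusion_rate else c
                     for c in cells]
        totals.append(sum(value_per_class[c] for c in perturbed))
    return true_total, totals

# A 100-cell unit mapped as class 'A', with a lower-valued included class.
true_total, totals = simulate_valuation(['A'] * 100,
                                        {'A': 100.0, 'B': 40.0})
relative_errors = [abs(t - true_total) / true_total for t in totals]
```

With these invented numbers the mean relative error lands in the same few-per-cent range the abstract reports, though the correspondence is illustrative only.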

11.
Abstract

To achieve high levels of performance in parallel geoprocessing, the underlying spatial structure and relations of spatial models must be accounted for and exploited during decomposition into parallel processes. Spatial models are classified from two perspectives, the domain of modelling and the scope of operations, and a framework of strategies is developed to guide the decomposition of models with different characteristics into parallel processes. Two models are decomposed using these strategies: hill-shading on digital elevation models and the construction of Delaunay triangulations. Performance statistics are presented for implementations of these algorithms on a MIMD computer.
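The per-cell hill-shading computation that gets decomposed can be sketched as follows (a minimal Python illustration using Horn's slope estimate; border handling, cell-size scaling choices and the parallel decomposition itself are omitted or simplified here).

```python
import math

def hillshade(dem, cellsize=1.0, azimuth=315.0, altitude=45.0):
    """Hill-shading over the interior of a DEM (a sketch).

    Returns illumination in [0, 1]; border cells are left at 0.0
    for brevity. azimuth/altitude give the light source position.
    """
    az = math.radians(360.0 - azimuth + 90.0)
    alt = math.radians(altitude)
    rows, cols = len(dem), len(dem[0])
    shade = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Horn's third-order finite-difference slope components.
            dzdx = ((dem[r-1][c+1] + 2*dem[r][c+1] + dem[r+1][c+1])
                    - (dem[r-1][c-1] + 2*dem[r][c-1] + dem[r+1][c-1])) / (8*cellsize)
            dzdy = ((dem[r+1][c-1] + 2*dem[r+1][c] + dem[r+1][c+1])
                    - (dem[r-1][c-1] + 2*dem[r-1][c] + dem[r-1][c+1])) / (8*cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            v = (math.sin(alt) * math.cos(slope)
                 + math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            shade[r][c] = max(0.0, v)
    return shade
```

Because each output cell depends only on its 3x3 neighborhood, the operation is a natural candidate for the location-based decomposition strategies the paper classifies.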

12.
The discovery of spatial clusters formed by proximal spatial units with similar non-spatial attribute values plays an important role in spatial data analysis. Although several spatial contiguity-constrained clustering methods are currently available, almost all of them discover clusters in a geographical dataset even when the dataset has no natural clustering structure. Statistically evaluating the significance of the degree of homogeneity within a single spatial cluster is difficult. To overcome this limitation, this study develops a permutation test approach. Specifically, the homogeneity of a spatial cluster is measured based on the local variance and cluster member permutation, and two-stage permutation tests are developed to determine the significance of the degree of homogeneity within each spatial cluster. The proposed permutation tests can be integrated into existing spatial clustering algorithms to detect homogeneous spatial clusters. The proposed tests are compared with four existing tests (i.e., Park's test, the contiguity-constrained nonparametric analysis of variance (COCOPAN) method, the spatial scan statistic, and the q-statistic) using two simulated and two meteorological datasets. The comparison shows that the proposed two-stage permutation tests are more effective at identifying homogeneous spatial clusters and determining homogeneous clustering structures in practical applications.
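The core idea of a homogeneity permutation test can be sketched in one stage (illustrative Python; the study's two-stage procedure, contiguity constraints and local-variance details are not reproduced): compare the within-cluster variance against variances of random same-size draws from the study area.

```python
import random
import statistics

def homogeneity_p_value(cluster_values, all_values, n_perm=999, seed=1):
    """One-stage permutation test for cluster homogeneity (a sketch).

    Counts how often a random draw of the same size has variance no
    larger than the observed within-cluster variance; a small p-value
    suggests the cluster is more homogeneous than chance would allow.
    """
    observed = statistics.pvariance(cluster_values)
    rng = random.Random(seed)
    k = len(cluster_values)
    extreme = sum(
        1 for _ in range(n_perm)
        if statistics.pvariance(rng.sample(all_values, k)) <= observed)
    return (extreme + 1) / (n_perm + 1)
```

Integrating such a test into a clustering algorithm lets it reject "clusters" found in data with no real clustering structure, which is the limitation the abstract identifies.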

13.
ABSTRACT

Crime often clusters in space and time. Near-repeat patterns improve understanding of crime communicability and its space–time interactions. Near-repeat analysis requires extensive computing resources for the assessment of statistical significance of space–time interactions. A computationally intensive Monte Carlo simulation-based approach is used to evaluate the statistical significance of the space–time patterns underlying near-repeat events. Currently available software for identifying near-repeat patterns is not scalable to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis in large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the newly developed software with an existing implementation, assess the performance gain due to parallel computation, test the scalability of the software on large crime datasets and assess the utility of the new software for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms that leverage high-performance computing, along with performance optimization techniques, can be used to develop software that is scalable to large datasets and can provide solutions for computationally intensive statistical simulation-based approaches in crime analysis.

14.
Abstract

During the 1980s, techniques for the analysis of geographical patterns were refined to the point that they may be applied to data from many fields. Quantitative spatial analysis and existing functions available in geographical information systems (GIS) enable computerized implementations of these spatial analysis methods. This paper describes the application of quantitative spatial analysis and GIS functions to the analysis of language data, using the extensive files of the Linguistic Atlas of the Middle and South Atlantic States (LAMSAS). A brief review of recent developments in the use of quantitative and statistical methods for analysing linguistic data is also included.

15.
ABSTRACT

We argue that the use of American Community Survey (ACS) data in spatial autocorrelation statistics without considering error margins is critically problematic. Public health and geographical research has been slow to recognize the high data uncertainty of ACS estimates, even though ACS data are widely accepted data sources in neighborhood health studies and health policies. Spatial autocorrelation patterns of health indicators detected on ACS data can be distorted to the point that scholars may have difficulty perceiving the true pattern. We examine the statistical properties of spatial autocorrelation statistics of areal incidence rates based on ACS data. In a case study of teen birth rates in Mecklenburg County, North Carolina, in 2010, Global and Local Moran's I statistics estimated on 5-year ACS estimates (2006–2010) are compared to ground-truth rate estimates based on actual counts from birth certificate records and decennial census data (2010). Detected spatial autocorrelation patterns are found to differ significantly between the two data sources, so that actual spatial structures are misrepresented. We warn of the possibility of misjudging reality and of policy failure, and argue for new spatially explicit methods that mitigate the bias in statistical estimation imposed by the uncertainty of ACS data.
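For reference, global Moran's I, the statistic the case study computes on ACS estimates, has the following standard form (a straightforward Python sketch over a full weight matrix; real analyses use sparse, row-standardized weights and significance testing, none of which is shown here).

```python
def morans_i(values, weights):
    """Global Moran's I for areal data (a textbook-formula sketch).

    `weights[i][j]` is the spatial weight between areas i and j,
    with zeros on the diagonal. Positive I indicates similar values
    cluster in space; negative I indicates checkerboard-like dispersion.
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    s0 = sum(weights[i][j] for i in range(n) for j in range(n))
    return (n / s0) * (num / den)
```

The abstract's point is that feeding noisy ACS rate estimates into this formula can flip or blur the detected pattern relative to ground-truth rates, independent of the formula itself being correct.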

16.
Abstract

One motivation for organizing maps hierarchically in different resolutions is the fact that, in most applications, low-resolution maps require less computation than maps represented at higher resolutions. Spatial data structures that permit the generation of lower levels of resolution in a hierarchical fashion already exist, e.g., quad-trees and resolution pyramids. Many other spatial data structures that are non-hierarchical, and therefore do not permit the generation of resolution hierarchies, also exist. One such structure is the run-length-code (RLC), which has many powerful advantages that make the structure feasible in geographical information systems. In this article an approach to the problem of generating a resolution hierarchy from RLC is described and discussed.
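The idea of deriving a coarser level directly from run-length-coded rows can be sketched as follows (hypothetical Python; the sampling rule of keeping the value at every second cell is one simple choice, not necessarily the article's scheme).

```python
def rle_encode(row):
    """Run-length-code one raster row as (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def rle_halve(runs):
    """One coarser resolution level derived from RLC (a sketch).

    Samples the value covering every second cell by walking the run
    intervals, without expanding back to a full raster row.
    """
    intervals, pos = [], 0
    for v, n in runs:
        intervals.append((pos, pos + n, v))
        pos += n
    coarse = [v for c in range(0, pos, 2)
              for start, end, v in intervals if start <= c < end]
    return rle_encode(coarse)
```

Working on run intervals rather than expanded cells is what makes a hierarchy derivable from RLC without giving up its compactness.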

17.
Abstract

Digital map data are currently available based on a variety of data structures, depending on the uses to which the data are to be put. Within the major categories of vector and raster data, as well as other structures, there is a multiplicity of data formats. Further to this, coordinate data for digital maps are frequently stored in a different way from attribute data pertaining to points, lines and polygons. Given these problems, this paper investigates the possibility of handling different kinds of data structures, as well as both coordinate and attribute information, within a unified conceptual scheme. This scheme is expressed in terms of the design of an integrated geographical information system called GEO VIEW, which can be implemented in a relational data base environment. The structure of the tables in the data base is outlined, together with the methodology for coding different kinds of data structure into a standard form. Examples of queries are provided, using the SQL query language, to indicate how the system might be used, and problems in optimizing spatial searching on a data base of this kind are addressed.

18.
ABSTRACT

Six routing algorithms, describing how flow (and water-borne material) will be routed over digital elevation models, are described and compared. The performance of these algorithms is determined based on both the calculation of the contributing area and the prediction of ephemeral gullies. Three groups of routing algorithms could be identified. From both a statistical and a spatial viewpoint these groups produce significantly different results, with a major distinction between single-flow and multiple-flow algorithms. Single-flow algorithms cannot accommodate divergent flow and are very sensitive to small errors; therefore, they are not acceptable for hillslopes. The flux decomposition algorithm, proposed here, seems preferable to other multiple-flow algorithms as it is mathematically straightforward, needs only up to two neighbours and yields more realistic results for drainage lines. The implications of the routing algorithms for the prediction of ephemeral gullies seem somewhat counterintuitive: the single-flow algorithms that, at first sight, seem to mimic the process of overland flow do not yield optimal prediction results.
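A single-flow (D8-style) contributing-area computation, the family the abstract finds unsuitable for hillslopes, can be sketched as follows (illustrative Python, not the flux decomposition algorithm itself; diagonal distances are not weighted here).

```python
def d8_contributing_area(dem):
    """Single-flow (D8) contributing area (a sketch).

    Each cell sends all of its accumulated flow to its single lowest
    neighbour; cells are processed from highest to lowest so donors are
    settled before receivers. This all-or-nothing routing is exactly
    why single-flow algorithms cannot represent divergent flow.
    """
    rows, cols = len(dem), len(dem[0])
    area = [[1.0] * cols for _ in range(rows)]  # each cell contributes itself
    cells = sorted(((dem[r][c], r, c)
                    for r in range(rows) for c in range(cols)), reverse=True)
    for z, r, c in cells:
        best, drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    if z - dem[rr][cc] > drop:
                        best, drop = (rr, cc), z - dem[rr][cc]
        if best:
            area[best[0]][best[1]] += area[r][c]
    return area
```

A multiple-flow scheme such as the flux decomposition proposed in the paper would instead split each cell's outflow between up to two downslope neighbours in proportion to the flow direction.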

19.
ABSTRACT

Big data have shifted spatial optimization from a purely computation-intensive problem to a data-intensive challenge. This is especially the case for spatiotemporal (ST) land use/land cover change (LUCC) research. In addition to greater variety, for example from sensing platforms, big data offer datasets at higher spatial and temporal resolutions; these new offerings require new methods to optimize data handling and analysis. We propose a LUCC-based geospatial cyberinfrastructure (GCI) that optimizes big data handling and analysis, in this case with raster data. The GCI provides three levels of optimization. First, we employ spatial optimization with graph-based image segmentation. Second, we propose the ST Atom Model to temporally optimize the image segments for LUCC. Finally, the first two domain-specific ST optimizations are supported by computational optimization for big data analysis. The evaluation is conducted using DMTI (DMTI Spatial Inc.) Satellite StreetView imagery datasets acquired for the Greater Montreal area, Canada, in 2006, 2009, and 2012 (534 GB, 60 cm spatial resolution, RGB imagery). Our LUCC-based GCI builds an optimization bridge among LUCC, ST modelling, and big data.

20.
The Journal of Geography, 2012, 111(5): 181-191
Abstract

The human brain appears to have several “regions” that are structured to do different kinds of spatial thinking, according to a large and rapidly growing body of research in a number of disciplines. Building on a previous review of research with older children and adults, this article summarizes the research on spatial thinking by young children. Three conclusions emerge: brain structures for spatial reasoning are fully functional at a very early age, adult intervention can enhance both use and representational ability, and practice in the early grades is an important, perhaps even essential, part of the scaffold for later learning.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号