Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
Taking Clifford algebra as the mathematical foundation, a unified spatio-temporal data model and a hierarchical spatio-temporal index are constructed by linking basic data objects, such as point clouds and spatio-temporal hypercubes of different dimensions, within the multivector structure of Clifford algebra. Transforming geographic space into homogeneous and conformal space allows the geometric, metric, and many other operators of Clifford algebra to be implemented, and shortest-path, high-dimensional Voronoi, and unified spatio-temporal process analyses are then designed with spacetime algebra. Tests with real-world data suggest that these traditional GIS analysis algorithms can be extended and constructed under the Clifford algebra framework, which accommodates multiple dimensions. The prototype software system CAUSTA (Clifford Algebra based Unified Spatial-Temporal Analysis) provides a useful tool for investigating and modeling the distribution characteristics and dynamic processes of complex geographical phenomena under a unified spatio-temporal structure.
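The abstract only names the multivector structure; as a rough, hedged illustration of what that structure involves, the following toy Python sketch implements the geometric product of 2D Euclidean geometric algebra (basis 1, e1, e2, e12) and uses a rotor to rotate a point. It is a minimal sketch for intuition only, not CAUSTA's implementation, and the restriction to 2D is an assumption made here.

```python
import math

# Basis order: [scalar, e1, e2, e12] for 2D Euclidean geometric algebra.
def gp(a, b):
    """Geometric product of two 2D multivectors given as [s, e1, e2, e12]."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return [
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e12 (bivector) part
    ]

def reverse(a):
    """Reverse of a 2D multivector (flips the sign of the bivector part)."""
    return [a[0], a[1], a[2], -a[3]]

def rotate(point, theta):
    """Rotate a 2D point with the rotor R = cos(t/2) - sin(t/2) e12, v' = R v R~."""
    v = [0.0, point[0], point[1], 0.0]                    # embed the point as a vector
    r = [math.cos(theta / 2), 0.0, 0.0, -math.sin(theta / 2)]
    v2 = gp(gp(r, v), reverse(r))
    return (v2[1], v2[2])

if __name__ == "__main__":
    print(rotate((1.0, 0.0), math.pi / 2))                # approximately (0, 1)
```

In CAUSTA the same algebraic machinery is lifted to homogeneous and conformal spaces of higher dimension; the 2D case is shown here only because its multiplication table fits in a few lines.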

2.
Spatial data infrastructures, which are characterized by multi-represented datasets, are prevalent throughout the world. Multi-represented datasets contain different representations of identical real-world entities; update propagation is therefore useful and necessary for maintaining them. The key to update propagation is the detection of identical features in different datasets that represent the same real-world entities and the detection of changes in updated datasets. Using polygonal settlement features as examples, this article addresses these key problems and proposes an approach for multi-represented feature matching based on spatial similarity and a back-propagation neural network (BPNN). Although this approach uses only measures of distance, area, direction and length, it determines the weight of each measure dynamically and objectively through learning, whereas traditional approaches determine weights from expert knowledge. The weights therefore vary with the data context rather than with the level of expertise. This approach can be applied not only to one-to-one matching but also to one-to-many and many-to-many matching. Experiments are designed using two different approaches and four datasets covering an area in China. The goals are to demonstrate the weight differences in different data contexts and to measure the performance of the BPNN-based feature matching approach.
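As a hedged illustration of learning measure weights rather than fixing them by expertise, the sketch below trains a tiny one-hidden-layer back-propagation network (NumPy only) that maps four similarity measures to a match score. The network size, learning rate, and toy data are placeholders, not the configuration or datasets used in the article.

```python
import numpy as np

# Toy training data: each row holds four similarity measures for a candidate
# pair of settlement polygons (distance, area, direction, length similarity,
# all scaled to [0, 1]); y = 1 if the pair matches, 0 otherwise.
# The values are fabricated placeholders for illustration only.
X = np.array([[0.9, 0.8, 0.9, 0.8],
              [0.2, 0.3, 0.4, 0.1],
              [0.8, 0.9, 0.7, 0.9],
              [0.1, 0.2, 0.3, 0.2]])
y = np.array([[1.0], [0.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 6))   # input -> hidden weights
b1 = np.zeros((1, 6))
W2 = rng.normal(scale=0.5, size=(6, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (mean squared error, as in a classic BPNN).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

# Score a new candidate pair: the network has effectively learned from the
# training data how to weight the four measures.
candidate = np.array([[0.85, 0.7, 0.8, 0.75]])
print(sigmoid(sigmoid(candidate @ W1 + b1) @ W2 + b2))
```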

3.
Explicit information about places is captured in an increasing number of geospatial datasets. This article presents evidence that relationships between places can also be captured implicitly. It demonstrates that the hierarchy of central places in Germany is reflected in the link structure of the German-language edition of Wikipedia. The upper and middle centers officially designated under German spatial planning law are used as a reference dataset. The characteristics of the link structure around their Wikipedia pages, which link to or mention each other with varying frequency, are used to develop a bottom-up method for extracting central places from Wikipedia. The method relies solely on the structure and number of links and mentions between the corresponding Wikipedia pages; no spatial information is used in the extraction process. The output of this method shows significant overlap with the official central-place structure, especially for the upper centers. The results indicate that real-world relationships are in fact reflected in the link structure of the web, at least in the case of Wikipedia.
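A minimal sketch of the bottom-up idea, assuming only a toy link structure: count how often each place page is linked to or mentioned by other place pages and rank places by that count. The place names, counts, and cut-off below are placeholders, not the German reference data or the article's actual extraction rules.

```python
from collections import Counter

# Toy link structure: each key is a place page; the list gives the place
# pages it links to or mentions (duplicates count as repeated mentions).
links = {
    "SmallTownA": ["CityX", "CityX", "CityY"],
    "SmallTownB": ["CityX"],
    "CityY":      ["CityX", "SmallTownA"],
    "CityX":      ["CityY"],
}

# Rank places by how often other place pages link to / mention them.
inbound = Counter()
for source, targets in links.items():
    for target in targets:
        if target != source:
            inbound[target] += 1

ranking = inbound.most_common()
print(ranking)                       # e.g. CityX first -> candidate "upper center"

# A crude bottom-up classification: places above an (arbitrary) cut-off
# become candidate central places.
central_places = [place for place, count in ranking if count >= 2]
print(central_places)
```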

4.
Augmented reality (AR) overlays real‐world views or scenes with virtual, computer‐generated objects that appear to visually coexist in the same space. Location‐based social networks (LBSNs) are platforms for individuals to be connected through the interdependency derived from their physical locations and their location‐tagged social media content. Current research and development in both areas focuses on integrating mobile‐based AR and LBSNs. Several applications (e.g., Sekai Camera and Wallame) have been developed and commercialized successfully. However, little research has been done on the potential impacts and successful evaluation methods of AR‐integrated LBSNs in the GIScience field. To close this gap, the article outlines the impacts and benefits of AR‐integrated LBSNs and highlights the importance of LBSNs in GIScience research. Based on the status quo of AR‐integrated LBSNs, this article discusses—from theoretical and application‐oriented perspectives—how AR‐integrated LBSNs could enrich the GIScience research agenda in three aspects: data conflation, platial GIS, and multimedia storytelling. The article concludes with guidelines on visualization, functionality, and ethics that aim to help users develop and evaluate AR‐integrated LBSNs.

5.
The wide use of various sensors makes real-time data acquisition possible. A new spatiotemporal data model, the Event-driven Spatiotemporal Data Model (E-ST), is proposed to dynamically express and simulate the spatiotemporal processes of geographic phenomena. In E-ST, a sensor object is introduced into the model as a flexible real-time data source. An event type, together with its generating and driving conditions, is registered to a geographic object, so that an event can not only express spatiotemporal change in one geographic object but also drive spatiotemporal change in others. As a dynamic GIS data model, the E-ST has five characteristics: Temporality and Spatiality, Real-time, Extendability, Causality, and Realizability. Described and realized in UML, a test-case deployment on the impact of urban waterlogging on traffic confirms that the spatiotemporal change process of a geographic phenomenon can be expressed and simulated by this model. Four directions for future research are outlined.
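A hedged sketch of how the event-driven idea could look in code: an event type with a generating condition and a driven effect is registered on a geographic object, and sensor readings trigger the registered events. The class names, the water-level threshold, and the road example are illustrative assumptions, not the E-ST model's actual UML design.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    """An event type with a generating condition and the change it drives."""
    name: str
    condition: Callable[["GeoObject", float], bool]   # (object, reading) -> fire?
    effect: Callable[["GeoObject"], None]             # change driven by the event

@dataclass
class GeoObject:
    """A geographic object that registers event types and records its state."""
    name: str
    state: str = "normal"
    events: List[Event] = field(default_factory=list)

    def register(self, event: Event) -> None:
        self.events.append(event)

    def on_sensor_reading(self, reading: float) -> None:
        # Each registered event type checks its generating condition; if it
        # fires, it drives a spatiotemporal change in this object.
        for event in self.events:
            if event.condition(self, reading):
                event.effect(self)

# Example: a road segment whose state is driven by a water-level sensor.
road = GeoObject("RoadSegment-42")
road.register(Event(
    name="waterlogging",
    condition=lambda obj, level: level > 0.3,          # threshold is a placeholder
    effect=lambda obj: setattr(obj, "state", "flooded"),
))
road.on_sensor_reading(0.5)
print(road.state)    # "flooded"
```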

6.
As one of the key technologies for data updating and fusion, spatial data matching is receiving increasing attention. Urban settlements are among the most actively changing spatial objects and have become a major focus of spatial data updating. Unlike existing approaches that match features directly, a new method for matching areal settlement features is proposed: the areal settlements are processed with a dimension-reduction technique, the skeleton lines of the areal settlements in both datasets are extracted, and the areal settlements are then matched by matching their skeleton lines. Experiments show that, after the 2D areal settlements are converted into 1D skeleton lines, the complexity and uncertainty of matching are greatly reduced, and matching efficiency and speed are effectively improved.
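As a hedged sketch of the matching step only (skeleton extraction from the settlement polygons is not shown), the code below matches already-extracted skeleton polylines between two datasets using the discrete Hausdorff distance; the distance measure, the tolerance, and the toy coordinates are assumptions for illustration, not the paper's exact procedure.

```python
import math

def hausdorff(line_a, line_b):
    """Discrete Hausdorff distance between two polylines given as vertex lists."""
    def directed(p_list, q_list):
        return max(min(math.dist(p, q) for q in q_list) for p in p_list)
    return max(directed(line_a, line_b), directed(line_b, line_a))

def match_skeletons(source, target, threshold=5.0):
    """For each source skeleton line, find the closest target skeleton line.

    `source`/`target` map feature ids to skeleton polylines (already extracted
    from the areal settlement polygons); `threshold` is a placeholder tolerance.
    """
    matches = {}
    for sid, s_line in source.items():
        best_id, best_d = None, float("inf")
        for tid, t_line in target.items():
            d = hausdorff(s_line, t_line)
            if d < best_d:
                best_id, best_d = tid, d
        if best_d <= threshold:
            matches[sid] = (best_id, best_d)
    return matches

source = {"A1": [(0, 0), (10, 0), (20, 2)]}
target = {"B1": [(0, 1), (11, 0), (20, 1)], "B2": [(50, 50), (60, 50)]}
print(match_skeletons(source, target))    # A1 -> B1
```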

7.
8.
With the rapid advance of geospatial technologies, the availability of geospatial data from a wide variety of sources has increased dramatically. It is beneficial to integrate (conflate) these multi-source geospatial datasets, since their integration can provide insights and capabilities not possible with individual datasets. However, multi-source datasets over the same geographical area are often disparate, and accurately integrating geospatial data from different sources is a challenging task. Among the subtasks of integration/conflation, the most crucial one is feature matching, which identifies features from different datasets as representations of the same real-world geographic entity. In this article we present a new relaxation-based point feature matching approach to match road intersections from two GIS vector road datasets. The relaxation labeling algorithm uses iterated local context updates to achieve a globally consistent result. Contextual constraints (relative distances between points) are incorporated into the compatibility function employed in each iteration's updates, and the point-to-point matching confidence matrix is initialized using the road connectivity information at each point. Both the traditional proximity-based approach and our relaxation-based point matching approach are implemented, and experiments are conducted over 18 test sites in rural and suburban areas of Columbia, MO. The test results show that the relaxation labeling approach performs much better than the proximity matching approach in both simple and complex situations.
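A minimal sketch of a relaxation-labeling loop in the spirit described above: the confidence matrix is updated iteratively with a compatibility term that rewards agreement of relative distances. It simplifies the paper's method, e.g. the matrix is initialized uniformly rather than from road connectivity, and the compatibility function and its scale are placeholders.

```python
import numpy as np

def relaxation_match(P, Q, iterations=20, scale=10.0):
    """Match points in P to points in Q by relaxation labeling.

    P, Q: (n, 2) and (m, 2) arrays of intersection coordinates. Returns the
    final point-to-point confidence matrix. Uniform initialization is used
    here instead of the paper's road-connectivity-based initialization.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    conf = np.full((n, m), 1.0 / m)

    # Contextual constraint: distances between points within each dataset.
    dP = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    dQ = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)

    for _ in range(iterations):
        support = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                # Support for "i matches j" from every other assignment (k, l),
                # weighted by how well the relative distances agree.
                s = 0.0
                for k in range(n):
                    if k == i:
                        continue
                    compat = np.exp(-np.abs(dP[i, k] - dQ[j]) / scale)  # shape (m,)
                    s += float(compat @ conf[k])
                support[i, j] = s / max(n - 1, 1)
        conf = conf * (1.0 + support)
        conf /= conf.sum(axis=1, keepdims=True)       # renormalize each row
    return conf

# Toy example: the second dataset is a slightly shifted copy of the first.
P = np.array([[0, 0], [100, 0], [0, 80]])
Q = np.array([[5, 3], [104, 2], [3, 84]])
print(relaxation_match(P, Q).round(2))   # diagonal entries should dominate
```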

9.
This article introduces a type of DBMS, the Intentionally-Linked Entities (ILE) DBMS, for use as the basis of temporal and historical geographical information systems. ILE represents each entity in a database only once, largely eliminating redundancy and fragmentation, two major problems in relational and other database systems. These advantages are realized by using relationship objects and pointers to implement all relationships among data entities natively, with dynamically allocated linked data structures. ILE can be considered a modern, extended implementation of the E/R data model. ILE also facilitates storing information that is more faithful to the historical record, such as gazetteer entries for places with imprecisely known or unknown locations; this is difficult in relational database systems but routine in ILE, because ILE is implemented using modern memory-allocation techniques. We use the China Historical GIS (CHGIS) and other databases to illustrate the advantages of ILE by modeling these databases in ILE and comparing them to the existing relational implementations.
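A hedged, toy illustration of the intentionally-linked idea: each entity is stored once, and relationships are first-class objects holding pointers to their member entities, so temporal validity and imprecise locations can be attached directly where they belong. The class names and example dates are placeholders, not the ILE implementation or actual CHGIS content.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class Entity:
    """A real-world entity stored exactly once; attributes may be uncertain."""
    name: str
    attributes: Dict[str, Any] = field(default_factory=dict)
    links: List["Relationship"] = field(default_factory=list)

@dataclass
class Relationship:
    """A first-class relationship object holding pointers to the entities it links."""
    kind: str
    members: List[Entity]
    valid_from: Optional[int] = None      # e.g. starting year in a historical GIS
    valid_to: Optional[int] = None

def link(kind, members, valid_from=None, valid_to=None):
    rel = Relationship(kind, list(members), valid_from, valid_to)
    for entity in members:
        entity.links.append(rel)          # each entity points at the relationship
    return rel

# A gazetteer entry with an imprecisely known location: the place is stored
# once, and its administrative affiliation is a dated relationship object.
place = Entity("Old Prefecture", {"location": "unknown"})
province = Entity("Province A")
link("administered_by", [place, province], valid_from=742, valid_to=907)
print([(r.kind, r.valid_from, r.valid_to) for r in place.links])
```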

10.
The use of cellular automata (CA) has long been considered among the most appropriate approaches for modeling land-use change. Each cell in a traditional CA model has a single state that evolves according to transition rules that consider its own and its neighbors' states and characteristics. Here, we present a multi-label CA model in which a cell may simultaneously hold more than one state. The model uses a multi-label learning method, a multi-label support vector machine (Rank-SVM), to define the transition rules. The model was applied to a multi-label land-use dataset for Luxembourg, built from vector-based land-use data using a method presented here. The proposed multi-label CA model showed promising performance in capturing and modeling the details and complexities of changes in land-use patterns. Applied to historical land-use data, it estimated land-use change with an accuracy of 87.2% for exact matching, and 98.84% when cells with a single misclassified label are included, compared with 83.6% for a classical multi-class model. The multi-label CA also outperformed a model combining CA and artificial neural networks. All goodness-of-fit comparisons were quantified using various performance metrics for predictive models.
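A rough sketch of a multi-label CA update loop, under the assumption that a simple neighborhood-frequency rule stands in for the Rank-SVM transition rules trained in the article; the grid, labels, and 0.5 threshold are placeholders for illustration only.

```python
from collections import Counter

# Each cell holds a SET of land-use labels (a multi-label state).
grid = [
    [{"residential"}, {"residential", "green"}, {"green"}],
    [{"industrial"},  {"residential"},          {"green"}],
    [{"industrial"},  {"industrial", "green"},  {"green"}],
]

def neighbors(grid, r, c):
    """Moore neighborhood of cell (r, c)."""
    cells = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0]):
                cells.append(grid[r + dr][c + dc])
    return cells

def transition(own_labels, neighbor_label_counts, total_neighbors):
    """Stand-in multi-label transition rule (the article trains a Rank-SVM here):
    existing labels are kept, and a label is gained if at least half of the
    neighbors carry it."""
    new_labels = set(own_labels)
    for label, count in neighbor_label_counts.items():
        if count / total_neighbors >= 0.5:
            new_labels.add(label)
    return new_labels

def step(grid):
    new_grid = [[None] * len(grid[0]) for _ in grid]
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            nbrs = neighbors(grid, r, c)
            counts = Counter(label for cell in nbrs for label in cell)
            new_grid[r][c] = transition(grid[r][c], counts, len(nbrs))
    return new_grid

print(step(grid))
```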

11.
Assessing spatial scenes for similarity is difficult from a cognitive and computational perspective. Solutions to spatial‐scene similarity assessments are sensible only if corresponding elements in the compared scenes are identified correctly. This matching process becomes increasingly complex and error‐prone for large spatial scenes as it is questionable how to choose one set of associations over another or how to account quantitatively for unmatched elements. We develop a comprehensive methodology for similarity queries over spatial scenes that incorporates cognitively motivated approaches about scene comparisons, together with explicit domain knowledge about spatial objects and their relations for the relaxation of spatial query constraints. Along with a sound graph‐theoretical methodology, this approach provides the foundation for plausible reasoning about spatial‐scene similarity queries.

12.
Agent-based modeling provides a means for addressing the way human and natural systems interact to change landscapes over time. Until recently, evaluation of simulation models has focused on map comparison techniques that evaluate the degree to which predictions match real-world observations. However, methods that change the focus of evaluation from patterns to processes have begun to surface; that is, rather than asking whether a model simulates a correct pattern, models are evaluated on their ability to simulate a process of interest. We build on an existing agent-based modeling validation method to present a temporal variant-invariant analysis (TVIA). The enhanced method, which focuses on analyzing the uncertainty in simulation results, examines the degree to which outcomes from multiple model runs match a reference describing how land-use parcels transition from one land-use class to another over time. We apply TVIA to results from an agent-based model that simulates the relationships between landowner decisions and wildfire risk in the wildland-urban interface of the southern Willamette Valley, Oregon, USA. The TVIA approach demonstrates a novel ability to examine uncertainty across time, providing an understanding of how the model emulates the system of interest.

13.
The pan-spatial information system abstracts real-world entities as multi-granularity spatio-temporal objects and represents and analyzes them in that form, which conforms better to human cognition than traditional geographic information systems. An analysis of the basic characteristics of high-speed rail (HSR) network objects and of existing transportation network modeling methods shows that current methods cannot adequately represent HSR network objects, which are dynamic, highly interrelated, and multi-granular. Following the multi-granularity spatio-temporal object modeling approach, a modeling method suited to the comprehensive representation and manipulation of HSR traffic data is proposed. The advantages of object-based modeling of the HSR network and the modeling process are discussed, and conceptual and logical models are designed. The proposed method is verified with a real case, and visualization is used to check the validity of the HSR network spatio-temporal object data. The results show that the spatio-temporal object-based modeling method can describe the multi-dimensional dynamic characteristics of HSR spatio-temporal entities well and can support subsequent spatio-temporal queries and analyses of the HSR network.
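As a hedged sketch of what a multi-granularity spatio-temporal object might look like in code, the snippet below defines objects with time-stamped attributes, time-stamped positions, and typed relations, composed across granularities (station, line, network). The class design, granularity names, and example values are assumptions for illustration, not the paper's conceptual or logical model.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SpatioTemporalObject:
    """A multi-granularity spatio-temporal object: identity, time-stamped
    attributes, time-stamped positions, and relations to other objects."""
    oid: str
    granularity: str                                   # e.g. "station", "line", "network"
    attributes: Dict[str, List[Tuple[str, object]]] = field(default_factory=dict)
    positions: List[Tuple[str, Tuple[float, float]]] = field(default_factory=list)
    relations: Dict[str, List["SpatioTemporalObject"]] = field(default_factory=dict)

    def set_attribute(self, name, timestamp, value):
        self.attributes.setdefault(name, []).append((timestamp, value))

    def relate(self, kind, other):
        self.relations.setdefault(kind, []).append(other)

# Example granularities: stations compose a line, lines compose the network.
# Names, coordinates, and speeds below are placeholders.
station_a = SpatioTemporalObject("StationA", "station",
                                 positions=[("2020-01-01", (116.4, 39.9))])
station_b = SpatioTemporalObject("StationB", "station",
                                 positions=[("2020-01-01", (117.2, 39.1))])
line_1 = SpatioTemporalObject("Line1", "line")
line_1.relate("consists_of", station_a)
line_1.relate("consists_of", station_b)
line_1.set_attribute("operating_speed_kmh", "2020-01-01", 300)
line_1.set_attribute("operating_speed_kmh", "2021-06-01", 350)   # dynamic attribute

print(line_1.attributes["operating_speed_kmh"])
```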

14.
This article studies how moving-object data collected by location-aware devices, such as GPS receivers, can be analyzed using graph databases. Raw trajectories can be transformed into so-called semantic trajectories, which are sequences of stops that occur at "places of interest." Trajectory data analysis can be enriched if spatial and non-spatial contextual data associated with the moving objects are taken into account, and aggregation of trajectory data can reveal hidden patterns within such data. When trajectory data are stored in relational databases, there is an "impedance mismatch" between the representation and storage models. Graphs in which the nodes and edges are annotated with properties are gaining increasing interest for modeling a variety of networks. This article therefore proposes the use of graph databases (Neo4j in this case) to represent and store trajectory data, which can then be analyzed at different aggregation levels using graph query languages (Cypher, for Neo4j). Through a real-world public data case study, the article shows that trajectory queries are expressed more naturally on the graph-based representation than over the relational alternative, and perform better in many typical cases.
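A hedged sketch of such an aggregation query using the Neo4j Python driver and Cypher: it counts stops per place of a given category across trajectories. The graph schema (node labels, relationship types, properties), the connection URI, and the credentials are assumptions for illustration and require a running Neo4j instance; they are not the article's actual data model.

```python
from neo4j import GraphDatabase

# Assumed toy schema: (:Trajectory)-[:HAS_STOP]->(:Stop)-[:AT]->(:Place),
# with Place.name and Place.category properties.
query = """
MATCH (t:Trajectory)-[:HAS_STOP]->(s:Stop)-[:AT]->(p:Place)
WHERE p.category = $category
RETURN p.name AS place, count(s) AS visits
ORDER BY visits DESC
LIMIT 10
"""

# URI and credentials are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(query, category="restaurant"):
        print(record["place"], record["visits"])
driver.close()
```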

15.
Progress toward developing a GIS of place can only follow from an understanding of what place is, and this understanding draws on geographical theory. Here—following Agnew, Tuan, and others—we consider place as being made up of three components—location, locale, and sense of place—which are recognizable at multiple scales and vary historically as a product of social and political processes. Using the testimonies of two survivors of the Holocaust, we sketch the components of a model for a GIS of place that allows for this theory of place to be visualized and analyzed. The model is, crucially, both multi‐scalar and sensitive to uncertainty, as a GIS of place needs to be able to zoom in and out of the different scales at which place is experienced, as well as capture both uncertain data and uncertainty as data. We see potential in the representations proposed for scaling up from the anecdotal to the general in the sense that any narrative can be grouped and classified according to places and scales as shown here. The challenge in developing a GIS of place along the lines we propose here is to design a new set of functionalities that can do so.

16.
Spatial modeling methods usually use pixels and image objects as the fundamental processing units for addressing real-world objects (geo-objects) in image space. Both pixel-based and object-based approaches typically employ a linear, two-stage workflow of segmentation and classification: pixel-based methods segment a classified image to delineate geo-objects in image space, whereas object-based approaches classify a segmented image to identify geo-objects from raster datasets. These methods cannot simultaneously integrate the geometry and the theme of geo-objects in image space. This article explores Geographical Vector Agents (GVAs) as automated, intelligent processing units that directly address real-world objects during remote sensing image classification. The GVA is a distinct type of geographic automaton characterized by elastic geometry, a dynamic internal structure, neighborhoods, and their respective rules. We test this concept by modeling a set of objects on a subset of an IKONOS image and a LiDAR DSM dataset, without the parameter settings (e.g., scale and shape) usually required by conventional Geographic Object-Based Image Analysis (GEOBIA) approaches. The results show that, compared with GEOBIA, the GVA approach achieves an improvement of more than 3.5% in correctness and 2% in quality, though no significant improvement in completeness, demonstrating the competitive performance of GVA-based classification.

17.
Much research has been conducted on using sketch maps to search spatial databases; nevertheless, such approaches face challenges including modeling the data abstraction level, handling aggregated features in sketches, modeling semantic aspects of the data, data redundancy, and evaluating the results. Considering these challenges, this article presents a new solution for searching databases based on data matching. The main difference between this solution and other approaches lies in the parameters introduced to match data and in how the matching problem is solved. Using geometrical, topological, and semantic parameters in the matching, and performing the matching process in two phases (partial and global), yields a result of about 78%. The evaluation is performed on the basis of the matching parameters and the matching procedure; the result compares acceptably with previous implementations.

18.
We propose a method for geometric areal object matching based on multi-criteria decision making. The method determines matched areal object pairs in all relations, from one-to-one to many-to-many, across different spatial datasets by fusing geometric criteria without user intervention. First, candidate corresponding areal object pairs are identified with a graph-based approach in training data. Second, three matching criteria (areal Hausdorff distance, intersection ratio, and turning-function distance) are calculated for the candidate corresponding pairs and normalized. Third, the shape similarity is calculated as a weighted linear combination of the normalized criteria (similarities), with weights obtained by the criteria importance through intercriteria correlation (CRITIC) method. Fourth, a shape-similarity threshold (0.738), estimated from the plot of precision versus recall over all possible thresholds on the training data, is applied to determine the matched pairs. Finally, we visually validated the detected pairs of similar areal features and conducted a statistical evaluation using the precision, recall, and F-measure values from a confusion matrix; these were 0.905, 0.848, and 0.876, respectively. The results indicate that the proposed classifier, which detects 87.6% of matched areal pairs, is highly accurate.
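A minimal sketch of the combination-and-threshold step under stated assumptions: given already-normalized similarity criteria for candidate pairs, CRITIC-style weights are computed, the weighted linear combination is thresholded at the article's reported 0.738, and pairs above it are declared matches. The candidate values below are placeholders, and the CRITIC formula shown is one common formulation rather than necessarily the article's exact computation.

```python
import numpy as np

# Rows = candidate areal-feature pairs; columns = normalized similarity
# criteria (areal Hausdorff-distance similarity, intersection ratio,
# turning-function-distance similarity). The values are placeholders.
S = np.array([
    [0.92, 0.88, 0.90],
    [0.35, 0.20, 0.40],
    [0.75, 0.70, 0.65],
    [0.10, 0.05, 0.15],
])

# CRITIC weighting: weight = std of the criterion * sum of (1 - correlation)
# with the other criteria, then normalized to sum to 1.
std = S.std(axis=0, ddof=1)
corr = np.corrcoef(S, rowvar=False)
info = std * (1.0 - corr).sum(axis=0)
weights = info / info.sum()

# Shape similarity = weighted linear combination of the normalized criteria.
shape_similarity = S @ weights

# Pairs at or above the threshold reported in the article (0.738) are matches.
matched = shape_similarity >= 0.738
for i, (score, ok) in enumerate(zip(shape_similarity, matched)):
    print(f"pair {i}: similarity={score:.3f} matched={bool(ok)}")
```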

19.
20.
When analyzing spatial issues, geographers are often confronted with problems arising from the imprecision of the available information. Representation and design methods suited to imprecise spatiotemporal data are therefore needed, which led to the recent proposal of the F-Perceptory approach. F-Perceptory models fuzzy primitive geometries that are appropriate for representing homogeneous regions. However, the real world often contains much more complex cases describing geographic features with composite structures, such as aggregations or combinations of geometries. From a conceptual point of view, these cases have not yet been managed with F-Perceptory. This article proposes modeling fuzzy geographic objects with composite geometries by extending the pictographic language of F-Perceptory and its mapping to the Unified Modeling Language (UML), which is necessary to manage such objects in object-relational databases. Until now, the most commonly used object-modeling tools have not considered imprecise data. The extended F-Perceptory is implemented in a UML-based modeling tool to support users in fuzzy conceptual data modeling. In addition, to properly define the related database design, an automatic derivation process is implemented to generate the fuzzy database model.

