Similar Documents
20 similar documents found (search time: 656 ms)
1.
With the rapid advance of geospatial technologies, the availability of geospatial data from a wide variety of sources has increased dramatically. It is beneficial to integrate/conflate these multi‐source geospatial datasets, since integration can provide insights and capabilities not possible with the individual datasets. However, multi‐source datasets over the same geographical area are often disparate, and accurately integrating geospatial data from different sources is a challenging task. Among the subtasks of integration/conflation, the most crucial is feature matching, which identifies features from different datasets as representations of the same real‐world geographic entity. In this article we present a new relaxation‐based point feature matching approach to match the road intersections from two GIS vector road datasets. The relaxation labeling algorithm uses iterated local context updates to achieve a globally consistent result. Contextual constraints (relative distances between points) are incorporated into the compatibility function employed in each iteration's updates, and the point‐to‐point matching confidence matrix is initialized using the road connectivity information at each point. Both the traditional proximity‐based approach and our relaxation‐based point matching approach were implemented, and experiments were conducted over 18 test sites in rural and suburban areas of Columbia, MO. The test results show that the relaxation labeling approach performs much better than the proximity matching approach in both simple and complex situations.
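The iterated update at the heart of relaxation labeling can be sketched as follows. This is a minimal illustration, assuming a precomputed compatibility tensor; the article's actual compatibility function (built from relative distances) and connectivity‐based initialization are not reproduced here.

```python
import numpy as np

# A minimal sketch of relaxation labeling for point matching, assuming a
# precomputed 4-D compatibility tensor compat[i, j, k, l]: how compatible
# "i matches j" is with "k matches l" (e.g. derived from relative
# distances between point pairs). Illustrative only, not the article's code.

def relax_match(P, compat, iters=20):
    """P: (n, m) initial match-confidence matrix; returns refined matrix."""
    P = P.astype(float).copy()
    for _ in range(iters):
        # support for each candidate assignment from the current context
        S = np.einsum('ijkl,kl->ij', compat, P)
        P *= S
        P /= P.sum(axis=1, keepdims=True) + 1e-12  # row-normalize
    return P
```

With a compatibility tensor that rewards mutually consistent assignments, the rows converge toward the contextually consistent matching.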

2.
Object matching facilitates spatial data integration, updating, evaluation, and management. However, the data to be matched often originate from different sources and exhibit positional discrepancies and different levels of detail. To resolve these problems, this article designs an iterative matching framework that effectively combines the advantages of contextual information and an artificial neural network. The proposed method can correctly aggregate one‐to‐many (1:N) and many‐to‐many (M:N) potential matching pairs using contextual information in the presence of positional discrepancies and a high spatial distribution density. The method iteratively detects new landmark pairs (matched pairs), using the prior landmark pairs as references, until all landmark pairs are obtained. Our approach has been experimentally validated using two topographic datasets at 1:50 and 1:10k. It outperformed a method based on a back‐propagation neural network: precision increased by 4.5% and recall by 21.6%.

3.
We propose a method for geometric areal object matching based on multi‐criteria decision making. The method determines matched areal object pairs in all relations, from one‐to‐one to many‐to‐many, across different spatial data sets by fusing geometric criteria without user intervention. First, we identified candidate corresponding areal object pairs with a graph‐based approach in training data. Second, three matching criteria (areal Hausdorff distance, intersection ratio, and turning‐function distance) were calculated for the candidate corresponding pairs, and these criteria were normalized. Third, the shape similarity was calculated as a weighted linear combination of the normalized matching criteria (similarities), with weights derived by the criteria importance through intercriteria correlation (CRITIC) method. Fourth, a shape‐similarity threshold (0.738), estimated from the plot of precision and recall versus all possible thresholds on the training data, was applied to determine the matched pairs. Finally, we visually validated the detected pairs of similar areal features and conducted a statistical evaluation using the precision, recall, and F‐measure values from a confusion matrix; these were 0.905, 0.848, and 0.876, respectively. These results validate that the proposed classifier, which detects 87.6% of matched areal pairs, is highly accurate.
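Steps two through four can be sketched as below. This is a minimal illustration: the criteria values are assumed precomputed per candidate pair, the weights stand in for those produced by the CRITIC method, and only the 0.738 threshold is taken from the abstract.

```python
import numpy as np

# Sketch of the weighted-linear-combination step. Each row holds the three
# criteria for one candidate pair (areal Hausdorff distance, intersection
# ratio, turning-function distance); the weights here are placeholders for
# the CRITIC-derived ones.

def shape_similarity(criteria, weights):
    c = np.asarray(criteria, dtype=float)
    rng = c.max(axis=0) - c.min(axis=0)
    norm = (c - c.min(axis=0)) / np.where(rng == 0, 1, rng)  # min-max scale
    norm[:, [0, 2]] = 1 - norm[:, [0, 2]]  # distance criteria -> similarities
    return norm @ np.asarray(weights, dtype=float)

def is_match(sims, threshold=0.738):  # threshold from the abstract
    return sims >= threshold
```

Pairs whose combined similarity clears the threshold are reported as matches.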

4.
Spatial data conflation involves the matching and merging of counterpart features in multiple datasets. It has applications in practical spatial analysis in a variety of fields. Conceptually, the feature‐matching problem can be viewed as an optimization problem: seeking a match plan that minimizes the total discrepancy between datasets. In this article, we propose a powerful yet efficient optimization model for feature matching based on the classic network flow problem in operations research. We begin with a review of the existing optimization‐based methods and point out limitations of current models. We then demonstrate how to utilize the structure of the network‐flow model to approach the feature‐matching problem, as well as the important factors in designing optimization‐based conflation models. The proposed model can be solved by general linear programming solvers or network flow solvers; due to the network flow formulation we adopt, it can be solved in polynomial time. Computational experiments show that the proposed model significantly outperforms existing optimization‐based conflation models. We conclude with a summary of findings and directions for future research.
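The objective being optimized can be made concrete with a deliberately tiny brute‐force version: choose the match plan minimizing total discrepancy, with a fixed cost for leaving a feature unmatched. The article's contribution is solving this in polynomial time via a network‐flow formulation; the exhaustive sketch below only illustrates the objective.

```python
from itertools import permutations

# Toy stand-in for the optimization view of feature matching: matching
# feature i to j costs dist[i][j]; leaving any feature unmatched costs
# no_match_cost. Exhaustive search, illustrative for tiny inputs only.

def best_match_plan(dist, no_match_cost):
    n, m = len(dist), len(dist[0])
    best_cost, best_plan = float('inf'), []
    # enumerate injective partial assignments; slots >= m mean "unmatched"
    for perm in permutations(range(m + n), n):
        cost, plan = 0.0, []
        for i, j in enumerate(perm):
            if j < m and dist[i][j] < no_match_cost:
                cost += dist[i][j]
                plan.append((i, j))
            else:
                cost += no_match_cost          # feature i stays unmatched
        cost += no_match_cost * (m - len(plan))  # unmatched right-side features
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_plan
```

A network‐flow or assignment solver computes the same optimum without enumerating plans.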

5.
Many past space‐time GIS data models viewed the world mainly from a spatial perspective, attaching a time stamp to each state of an entity or of the entire study area. This approach is less efficient for certain spatio‐temporal analyses that focus on how locations change over time, which require researchers to view each location from a temporal perspective. In this article, we present a data model that organizes multi‐temporal remote sensing datasets and tracks their changes at the individual pixel level. The data model can also integrate raster datasets from heterogeneous sources under a unified framework. It consists of several object classes in a hierarchical structure, each associated with specific properties and behaviors to facilitate efficient spatio‐temporal analyses. We apply this data model to a case study analyzing the impact of the 2007 freeze in Knoxville, Tennessee, comparing the characteristics of different vegetation clusters before, during, and after the freeze event. Our findings indicate that the majority of the study area was impacted by this freeze event, and that different vegetation types showed different response patterns to it.
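The "temporal perspective" the model takes can be sketched as a per‐pixel time‐series view over a raster stack; the class and method names below are illustrative, not the article's.

```python
import numpy as np

# Minimal sketch of the location-first access pattern: each pixel exposes
# its own time series instead of being read out of time-stamped rasters.

class PixelTimeSeries:
    def __init__(self, dates, rasters):
        self.dates = list(dates)
        self.cube = np.stack(rasters)          # shape (t, rows, cols)

    def history(self, row, col):
        """Per-pixel temporal profile: list of (date, value)."""
        return list(zip(self.dates, self.cube[:, row, col]))

    def change(self, row, col):
        """Net change between first and last observation at one location."""
        series = self.cube[:, row, col]
        return series[-1] - series[0]
```

Queries like "how did this location respond to the 2007 freeze" become a single slice along the time axis.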

6.
Scientific inquiry often requires analysis of multiple spatio‐temporal datasets, ranging in type and size, using complex multi‐step processes that demand an understanding of GIS theory and software. Cumulative spatial impact layers (CSIL) is a GIS‐based tool that summarizes spatio‐temporal datasets based on overlapping features and attributes. Leveraging a recursive quadtree method and applying multiple additive frameworks, the CSIL tool allows users to analyze raster and vector datasets by calculating data, record, or attribute density. By providing an efficient and robust method for summarizing disparate, multi‐format, multi‐source geospatial data, CSIL addresses the need for a new integration approach and resulting geospatial product. The built‐in flexibility of the CSIL tool allows users to answer a range of spatially driven questions. Example applications are provided in this article to illustrate the versatility and variety of uses for the CSIL tool and method; use cases include regulatory decision‐making, economic modeling, and resource management. Performance reviews for each use case are also presented, demonstrating how CSIL provides a more efficient and robust approach for assessing a range of multivariate spatial data.

7.
The availability of geospatial data has increased significantly over recent decades. As a result, how to update spatial data across different scales has become an attractive research topic. One promising strategy is to use an updated larger‐scale dataset as a reference for detecting and updating changed objects represented in a to‐be‐updated smaller‐scale dataset. For such an update method, an understanding of the different types of changes that can occur is crucial. Using polygonal building data as an example, this study examines the various possible changes from different perspectives, such as the reasons for their occurrence, the forms in which they manifest, and their effects on the output. We then apply map algebra theory to establish a cartographic model for updating polygonal building data. Supported by concepts of map algebra, an update procedure involving change detection, filtering, and fusion is implemented through a series of set operations. In addition to traditional polygon overlay functions, the constrained Delaunay triangulation model and knowledge of map generalization procedures are employed to construct the set operations. The proposed method has been validated through tests using real‐world data. The experimental results show that our method is effective for updating 1:10k map data using 1:2k map data.
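The idea of expressing the update procedure as set operations can be illustrated with a toy change‐detection step; a per‐feature geometry fingerprint stands in here for the polygon overlay and triangulation machinery the article actually uses.

```python
# Toy change detection as set operations, in the spirit of the map-algebra
# formulation above. A stable per-feature fingerprint (e.g. a geometry hash)
# substitutes for spatial overlay in this sketch.

def detect_changes(old, new):
    """old, new: dicts mapping feature id -> geometry fingerprint."""
    added    = set(new) - set(old)
    deleted  = set(old) - set(new)
    modified = {k for k in set(old) & set(new) if old[k] != new[k]}
    return added, deleted, modified
```

Filtering and fusion then operate on these three sets to produce the updated smaller‐scale dataset.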

8.
Spatial data are usually described through a vector model in which geometries are represented by a set of coordinates embedded into a Euclidean space. The use of a finite representation, instead of the real numbers theoretically required, causes many robustness problems that are well known in the literature. Such problems are made even worse in a distributed context, where data are exchanged between different systems and several perturbations can be introduced into the data representation. In order to discuss the robustness of a spatial dataset, two implementation models have to be distinguished: the identity model and the tolerance model. The robustness of a dataset in the identity model has been widely discussed in the literature, and some algorithms of the Snap Rounding (SR) family can be successfully applied in such contexts. Conversely, this problem has been less explored in the tolerance model. The aim of this article is to propose an algorithm, inspired by those of the SR family, for establishing or restoring the robustness of a vector dataset in the tolerance model. The main ideas are to introduce an additional operation that spreads, instead of snapping, geometries, in order to preserve the original relations between them, and to use a tolerance region for this operation instead of a single snapping location. Finally, some experiments on real‐world datasets are presented, confirming that the proposed algorithm can establish the robustness of a dataset.
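For contrast with the tolerance‐model algorithm proposed in the article, here is a sketch of the baseline snap‐rounding idea in the identity model: vertices snap to grid‐cell centers so that nearly coincident locations become bit‐identical. The article's spreading operation and tolerance regions are not shown.

```python
import math

# Baseline snap-rounding step (identity model): every vertex moves to the
# center of its grid cell, so coordinates representable at finite precision
# coincide exactly. Illustrative contrast to the tolerance-model variant.

def snap_to_grid(points, cell=1.0):
    return [((math.floor(x / cell) + 0.5) * cell,
             (math.floor(y / cell) + 0.5) * cell) for x, y in points]
```

Snapping every geometry to the same grid removes the near‐coincidence ambiguities that finite precision introduces, at the price of moving vertices; the tolerance model avoids some of that movement by spreading instead.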

9.
In geography, the invariant aspects of sketches are essential to study because they reflect the human perception of real‐world places. A person's perception of a place can be expressed in sketches. In this article, we quantitatively and qualitatively analyze the characteristics of single objects, and the characteristics among objects, in sketches and in the real world to find reliable invariants that can be used to establish references/correspondences between a sketch and the world in a matching process. These characteristics include the category, shape, name, and relative size of each object. Moreover, quantity and spatial relationships among all objects, such as topological, ordering, and location relationships, are also analyzed to assess consistency between sketched and actual places. The approach presented in this study extracts the reliable invariants for query‐by‐sketch and prioritizes their relevance for a sketch‐map matching process.

10.
Dynamic geospatial complex systems are inherently four‐dimensional (4D) processes, and spatio‐temporal models capable of realistic representation are needed for improved understanding and analysis. Such systems include changes in geological structures, dune formation, landslides, pollutant propagation, forest fires, and urban densification. However, these phenomena are frequently analyzed and represented with modeling approaches that consider only two spatial dimensions and time. Consequently, the main objectives of this study are to design and develop a modeling framework for 4D agent‐based modeling and to apply the approach to a 4D case study of forest‐fire smoke propagation. The study area covers central and southern British Columbia and the western parts of Alberta, Canada, for forest fires that occurred in the summer of 2017. The simulation results produced realistic spatial patterns of the smoke propagation dynamics.

11.
Recent urban studies have used human mobility data, such as taxi trajectories and smartcard data, as a complementary way to identify the social functions of land use. However, little work has been conducted to reveal how multi‐modal transportation data affect this identification process. In our study, we propose a data‐driven approach that addresses the relationships between travel behavior and urban structure: first, multi‐modal transportation data are aggregated to extract explicit statistical features; then, topic modeling methods are applied to transform these explicit statistical features into latent semantic features; and finally, a classification method is used to identify functional zones with similar latent topic distributions. Two 10‐day‐long "big" datasets, from the 2,370 bicycle stations of the public bicycle‐sharing system and from up to 9,992 taxi cabs within the core urban area of Hangzhou City, China, as well as point‐of‐interest data, are tested to reveal the extent to which different travel modes contribute to the detection and understanding of urban land functions. Our results show that: (1) using latent semantic features delineated by the topic modeling process as the classification input outperforms approaches using explicit statistical features; (2) combining multi‐modal data visibly improves the accuracy and consistency of the identified functional zones; and (3) the proposed data‐driven approach is also capable of identifying mixed land use in the urban space. This work presents a novel attempt to uncover the hidden linkages between urban transportation patterns and urban land use and its functions.

12.
Spatial co‐location pattern mining aims to discover collections of Boolean spatial features that are frequently located in close geographic proximity to each other. Existing methods for identifying spatial co‐location patterns usually require users to specify two thresholds: a prevalence threshold for measuring the prevalence of candidate co‐location patterns and a distance threshold for searching the spatial co‐location patterns. However, these two thresholds are difficult to determine in practice, and improper thresholds may lead to the misidentification of useful patterns and the incorrect reporting of meaningless patterns. The multi‐scale approach proposed in this study overcomes this limitation. Initially, the prevalence of candidate co‐location patterns is measured statistically by a significance test, and a non‐parametric model is developed to construct the null distribution of features while accounting for spatial auto‐correlation. Next, the spatial co‐location patterns are explored at multiple scales instead of at a single scale (or distance threshold). The validity of the co‐location patterns is evaluated based on the concept of lifetime. Experiments on both synthetic and ecological datasets show that spatial co‐location patterns are discovered correctly and completely by the proposed method, while the subjectivity in the discovery of spatial co‐location patterns is reduced significantly.
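The significance‐testing idea can be sketched with a toy Monte Carlo test; unlike the article's non‐parametric null model, this version relocates instances uniformly at random and therefore ignores spatial auto‐correlation.

```python
import random

# Toy Monte Carlo significance test for a co-location pattern between two
# feature types A and B. The observed prevalence is compared against
# prevalences under a uniform-random null; the article's null model is more
# careful (it preserves spatial auto-correlation).

def prevalence(pts_a, pts_b, d):
    """Fraction of A instances with at least one B instance within d."""
    near = sum(1 for (ax, ay) in pts_a
               if any((ax - bx) ** 2 + (ay - by) ** 2 <= d * d
                      for (bx, by) in pts_b))
    return near / len(pts_a)

def p_value(pts_a, pts_b, d, extent=100.0, n_sim=199, seed=0):
    rng = random.Random(seed)
    observed = prevalence(pts_a, pts_b, d)
    hits = sum(
        prevalence(pts_a,
                   [(rng.uniform(0, extent), rng.uniform(0, extent))
                    for _ in pts_b], d) >= observed
        for _ in range(n_sim))
    return (hits + 1) / (n_sim + 1)  # standard Monte Carlo p-value
```

Repeating the test across a range of distances d, instead of at one fixed threshold, is the multi‐scale idea the abstract describes.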

13.
Multiple representation of geographic information occurs when a real‐world entity is represented more than once in the same or different databases. This occurs frequently in practice, and it invariably results in inconsistencies among the different representations of the same entity. In this paper, we propose an approach to the modeling of multiply represented entities that is based on the relationships among the entities and their representations. Central to our approach is the Multiple Representation Schema Language, which, by intuitive and declarative means, is used to specify rules that match objects representing the same entity, maintain consistency among these representations, and restore consistency if necessary. The rules configure a Multiple Representation Management System, the aim of which is to manage multiple representations over a number of autonomous federated databases. We present a graphical and a lexical binding to the schema language; the graphical binding is built on an extension to the Unified Modeling Language and the Object Constraint Language. We demonstrate that it is possible to implement the constructs of the schema language in the object‐relational model of a commercial RDBMS.

14.
One characteristic of a Geographic Information System (GIS) is that it addresses the need to handle large amounts of data at multiple scales. Lands span an area greater than 15 million km2 across the globe, and information types are highly variable. In addition, multi‐scale analyses involve both spatial and temporal integration of datasets deriving from different sources. The system of latitude and longitude coordinates currently used worldwide can help avoid limitations in data use due to biases and approximations. In this article, a fast and reliable algorithm implemented in Arc Macro Language (AML) is presented to provide automatic computation of the surface area of the cells of a regularly spaced longitude‐latitude (geographic) grid at different resolutions. The approach is based on the well‐known approximation of the spheroidal Earth's surface by the authalic (i.e. equal‐area) sphere. After verifying the algorithm's accuracy by comparison with a numerical solution for the reference spheroidal model, specific case studies are introduced to evaluate the differences when switching from geographic to projected coordinate systems. This is done at different resolutions and using different formulations to calculate cell areas. Although the percentage differences are low, they become relevant when reported in absolute terms (hectares).
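The underlying formula is compact enough to restate. On the authalic sphere, a grid cell bounded by two meridians and two parallels has area R^2 * (λ2 − λ1) * (sin φ2 − sin φ1); the sketch below is in Python rather than the article's AML.

```python
import math

# Area of a longitude-latitude grid cell on the authalic (equal-area)
# sphere. The radius below is the usual WGS84-derived authalic value.

R_AUTHALIC_KM = 6371.007  # authalic Earth radius in km

def cell_area_km2(lon_w, lon_e, lat_s, lat_n, r=R_AUTHALIC_KM):
    """Area in km^2 of the cell [lon_w, lon_e] x [lat_s, lat_n] (degrees)."""
    dlam = math.radians(lon_e - lon_w)
    return r * r * dlam * (math.sin(math.radians(lat_n)) -
                           math.sin(math.radians(lat_s)))
```

Summing one‐degree cells over the whole globe recovers the full sphere area 4\*pi\*R^2, a quick consistency check on the formula.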

15.
Volunteered geographic information contains abundant valuable data that can be applied to various spatiotemporal geographical analyses. However, the useful information may be scattered across different, low‐quality data sources, a problem that can be addressed by data integration. Generally, the primary task of integration is data matching. Unfortunately, due to the complexity and irregularities of multi‐source data, existing studies have found it difficult to efficiently establish the correspondence between different sources. Therefore, we present a multi‐stage method to match multi‐source data using points of interest. A spatial filter is constructed to obtain candidate sets for geographical entities, and the weights of non‐spatial characteristics are learned by a machine‐learning algorithm from artificially labeled random samples. A case study on Fuzhou reveals that an average of 95% of instances are accurately matched. Our study thus provides a novel solution for researchers engaged in data mining and related work who need to accurately match multi‐source data via knowledge obtained from machine learning.
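The two‐stage structure (spatial filter, then weighted non‐spatial similarity) can be sketched as follows; using a single name‐similarity score with a fixed threshold is a simplification of the learned weighting the article describes.

```python
import difflib

# Sketch of two-stage POI matching: a spatial filter limits candidates to a
# radius, then a non-spatial score picks the best match. Here only name
# similarity (difflib) is scored; the article learns weights over several
# non-spatial characteristics.

def match_poi(poi, candidates, radius, threshold=0.7):
    x, y, name = poi
    best_score, best = 0.0, None
    for cx, cy, cn in candidates:
        if (cx - x) ** 2 + (cy - y) ** 2 > radius ** 2:
            continue  # stage 1: spatial filter
        score = difflib.SequenceMatcher(None, name, cn).ratio()
        if score > best_score:  # stage 2: non-spatial similarity
            best_score, best = score, (cx, cy, cn)
    return best if best_score >= threshold else None
```

The spatial filter keeps the expensive string comparison off distant candidates, which is what makes the multi‐stage design efficient.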

16.
Address ranges used in linear‐interpolation geocoding often have errors and omissions that result in input address numbers falling outside of known address ranges. Geocoding systems may match these input addresses to the closest available nearby address range and assign low confidence values (match scores) to increase match rates, but little is published describing the matching or scoring techniques used in these systems. This article sheds light on these practices by investigating the need for, the technical approaches to, and the utility of nearby matching methods used to increase match rates in geocoded data. The scope of the problem is motivated by an analysis of a commonly used health dataset. The technical approach of a geocoding system that includes nearby matching is described, along with a method for scoring candidates based on spatially varying neighborhoods. This method, termed dynamic nearby reference feature scoring, identifies, scores, ranks, and returns the most probable candidate to which the input address feature belongs or is spatially near. The approach is evaluated against commercial systems to assess its effectiveness and resulting spatial accuracy. Results indicate that it is viable for improving match rates while maintaining acceptable levels of spatial accuracy.

17.
The use of cellular automata (CA) has for some time been considered among the most appropriate approaches for modeling land‐use changes. Each cell in a traditional CA model has a state that evolves according to transition rules, taking into consideration its own and its neighbors' states and characteristics. Here, we present a multi‐label CA model in which a cell may simultaneously have more than one state. The model uses a multi‐label learning method, a multi‐label support vector machine (Rank‐SVM), to define the transition rules. The model was used with a multi‐label land‐use dataset for Luxembourg, built from vector‐based land‐use data using a method presented here. The proposed multi‐label CA model showed promising performance in its ability to capture and model the details and complexities of changes in land‐use patterns. Applied to historical land‐use data, the proposed model estimated land‐use change with an exact‐match accuracy of 87.2%, rising to 98.84% when cells misclassified in a single label are included, which compares favorably with a classical multi‐class model that achieved 83.6%. The multi‐label cellular automata also outperformed a model combining CA and artificial neural networks. All model goodness‐of‐fit comparisons were quantified using various performance metrics for predictive models.

18.
Existing multi‐source contour matching methods rely mainly on constructing contour topological relations and on similarity measures based on Euclidean distance, without considering the geometric‐shape similarity of the contours; mismatches therefore occur easily in areas of dense contours, at map‐sheet boundaries, and in areas of sharp terrain change. To address this, this paper proposes a coarse‐to‐fine multi‐source contour matching strategy based on geometric‐feature similarity. A hybrid feature descriptor that accounts for local characteristics, based on vertex curvature and the angle between the normal vector and the x‐axis, is proposed to convert contour point sequences into sequences of geometric‐shape descriptors. The longest common subsequence algorithm is then introduced to quantify the degree of similarity between multi‐source contour datasets, and corresponding contours are matched according to this similarity. The reliability and efficiency of the method are verified using both simulated and real data. The experiments show that the proposed matching strategy, which considers both the spatial‐position and the geometric‐shape characteristics of contours, achieves high matching accuracy and efficiency and has a wide range of applicability.
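The fine‐matching step built on the longest common subsequence can be sketched as follows; here quantized descriptor symbols stand in for the paper's curvature and normal‐angle descriptors.

```python
# Sketch of LCS-based contour similarity: contour vertices are assumed
# already converted into descriptor symbols (quantized curvature classes
# stand in for the paper's hybrid descriptors); the LCS length, normalized
# by the shorter sequence, scores how similar two contours are.

def lcs_len(a, b):
    # standard O(len(a) * len(b)) dynamic program
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if x == y
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[-1][-1]

def contour_similarity(desc_a, desc_b):
    return lcs_len(desc_a, desc_b) / min(len(desc_a), len(desc_b))
```

Because LCS tolerates insertions and deletions, contours sampled at different densities can still score as similar, which is what makes it suitable for multi‐source data.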

19.
Spatial object matching is an important prerequisite for multi‐source spatial information fusion, spatial object change detection, and dynamic updating. To address the problem of matching residential areas across multiple scales, a relaxation‐based iterative matching method using neighborhood patterns is proposed. The method first uses buffer analysis and spatial‐proximity relations to detect candidate matching objects and neighborhood patterns, and computes the geometric similarity between candidate objects or neighborhood patterns to obtain an initial matching‐probability matrix. It then models the contextual compatibility of neighboring candidate pairs and solves the optimal matching model for multi‐scale residential areas by relaxation iteration, selecting as the final matching result the candidate object or neighborhood pattern that has the maximum matching probability and satisfies contextual consistency. Experimental results show that the proposed method achieves high matching accuracy, effectively overcomes shape‐contour homogenization and non‐uniform positional offsets, and accurately identifies complex 1:M and M:N matching relationships.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号