Similar Literature
20 similar articles found (search time: 15 ms)
6.
In this article we analyze a well-known and extensively researched problem: how to find, on the one hand, all datasets and, on the other hand, only those datasets that are of value to a user dealing with a specific spatially oriented task. By analogy with existing approaches to similar problems in other fields, we call this software solution a 'spatial data recommendation service.' In its final version, the service should match requests formed in the user's mind against the content of existing datasets, while taking into account preferences learned from the user's previous use of the service. As a result, the service should recommend a list of datasets best suited to the user's needs. In this regard, we consider metadata, particularly natural language definitions of spatial entities, a crucial piece of the solution. To use this information when matching the user's request with dataset content, it must first be semantically preprocessed. To automate this task we applied a machine learning approach: using inductive logic programming (ILP), our system learns rules that identify and extract values for the five most frequent relations/properties found in Slovene natural language definitions of spatial entities. The initially established quality criterion for identification and extraction was met in three of the five cases. We therefore conclude that ILP offers a promising approach to developing the information extraction component of a spatial data recommendation service.
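To make the extraction task concrete, here is a hedged sketch of what one learned rule might do once expressed as a pattern: pull a property value (here, a purpose) out of a natural-language definition. The English toy definition and the hand-written pattern are illustrative stand-ins; the paper's rules are ILP-induced and operate on Slovene text.

```python
import re

def extract_purpose(definition):
    """Toy extraction rule: capture the phrase following 'used for'.
    A stand-in for one ILP-learned relation/property extractor."""
    m = re.search(r"\bused for ([a-z ]+)", definition)
    return m.group(1).strip() if m else None

# A definition of a spatial entity (invented example)
print(extract_purpose("A reservoir is an artificial lake used for water storage."))
```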

7.
In this article, we present the GeoCorpora corpus building framework and software tools as well as a geo-annotated Twitter corpus built with these tools to foster research and development in the areas of microblog/Twitter geoparsing and geographic information retrieval. The developed framework employs crowdsourcing and geovisual analytics to support the construction of large corpora of text in which the mentioned location entities are identified and geolocated to toponyms in existing geographical gazetteers. We describe how the approach has been applied to build a corpus of geo-annotated tweets that will be made freely available to the research community alongside this article to support the evaluation, comparison and training of geoparsers. Additionally, we report lessons learned related to corpus construction for geoparsing as well as insights about the notions of place and natural spatial language that we derive from application of the framework to building this corpus.
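The core annotation step, resolving a location mention to a gazetteer toponym, can be sketched as below. The in-memory gazetteer, its record layout, and the disambiguation-by-context heuristic are all invented for illustration and are not the GeoCorpora implementation.

```python
# Tiny invented gazetteer: mention -> list of (name, admin, lat, lon)
GAZETTEER = {
    "paris": [("Paris", "France", 48.8566, 2.3522),
              ("Paris", "Texas, USA", 33.6609, -95.5555)],
    "london": [("London", "UK", 51.5074, -0.1278)],
}

def geolocate(mention, context_hint=None):
    """Return candidate gazetteer records for a location mention,
    preferring candidates whose admin field matches a context hint
    (e.g. another place name in the same tweet)."""
    candidates = GAZETTEER.get(mention.lower(), [])
    if context_hint:
        preferred = [c for c in candidates if context_hint in c[1]]
        if preferred:
            return preferred
    return candidates

print(geolocate("Paris", context_hint="Texas"))
```

In the crowdsourced setting, ambiguous cases like "Paris" with no usable context would be forwarded to human annotators rather than resolved automatically.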

9.
Urban land use information plays an essential role in a wide variety of urban planning and environmental monitoring processes. During the past few decades, with the rapid development of remote sensing (RS), geographic information systems (GIS) and geospatial big data, numerous methods have been developed to identify urban land use at a fine scale. Points-of-interest (POIs) have been widely used to extract information about urban land use types and functional zones. However, it is difficult to quantify the relationship between the spatial distribution of POIs and regional land use types owing to a lack of reliable models, and previous methods may ignore abundant spatial features that can be extracted from POIs. In this study, we establish a framework that detects urban land use distributions at the scale of traffic analysis zones (TAZs) by integrating Baidu POIs with Word2Vec, an open-source deep-learning language model released by Google in 2013. First, data for the Pearl River Delta (PRD) are transformed into a TAZ-POI corpus using a greedy algorithm that considers the spatial distributions of TAZs and their inner POIs. Then, high-dimensional characteristic vectors of POIs and TAZs are extracted using the Word2Vec model. Finally, to validate the reliability of the POI/TAZ vectors, we implement a K-Means-based clustering model to analyze correlations between the POI/TAZ vectors, and deploy the TAZ vectors to identify urban land use types using a random forest algorithm (RFA) model. Compared with several state-of-the-art probabilistic topic models (PTMs), the proposed method achieves the highest accuracy (OA = 0.8728, kappa = 0.8399). Moreover, the results can help urban planners monitor dynamic urban land use and evaluate the impact of urban planning schemes.
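A minimal sketch of the downstream steps, assuming POI embeddings already exist: represent each TAZ by the average of its POI vectors, then cluster TAZs. The 2-D "embeddings" below are invented stand-ins for Word2Vec output, and the inline k-means replaces the paper's K-Means/RFA stages.

```python
import math, random

# Invented toy embeddings for POI categories (stand-ins for Word2Vec vectors)
POI_VECS = {
    "restaurant": (0.9, 0.1), "cafe": (0.8, 0.2),
    "factory": (0.1, 0.9), "warehouse": (0.2, 0.8),
}

def taz_vector(pois):
    """TAZ vector = mean of the vectors of its member POIs."""
    vecs = [POI_VECS[p] for p in pois]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def kmeans(points, k, iters=20, seed=0):
    """Tiny Lloyd's k-means; returns a cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: math.dist(p, centers[i]))].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: math.dist(p, centers[i])) for p in points]

tazs = [taz_vector(["restaurant", "cafe"]), taz_vector(["factory", "warehouse"])]
labels = kmeans(tazs, k=2)
print(labels)  # a commercial-like TAZ and an industrial-like TAZ separate
```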

10.
With recent advances in remote sensing, location-based services and other related technologies, the production of geospatial information has increased exponentially in recent decades. Furthermore, to facilitate discovery and efficient access to such information, spatial data infrastructures were promoted and standardized, with the consideration that metadata are essential to describing data and services. Standardization bodies such as the International Organization for Standardization have defined well-known metadata models such as ISO 19115. However, current metadata assets exhibit heterogeneous quality levels because they are created by different producers with different perspectives. To address quality-related concerns, several initiatives have attempted to define a common framework and test the suitability of metadata through automatic controls. Nevertheless, these controls focus on interoperability by testing the format of metadata and a set of controlled elements. In this paper, we propose a methodology for testing the quality of metadata that considers aspects other than interoperability. The proposal adapts ISO 19157 to the metadata case and has been applied to a corpus of the Spanish Spatial Data Infrastructure. The results demonstrate that our quality check helps identify different types of errors across all metadata elements and can be almost completely automated to enhance the significance of metadata.
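The idea of checking metadata quality beyond format validation can be sketched as rule-per-element checks. The element names, thresholds and rules below are assumptions for illustration, not the paper's adaptation of ISO 19157.

```python
import re

# Invented per-element quality rules (completeness / plausibility style checks)
RULES = {
    "title":    lambda v: bool(v) and len(v) >= 10,          # non-trivial title
    "abstract": lambda v: bool(v) and len(v.split()) >= 20,  # descriptive abstract
    "date":     lambda v: bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", v or "")),
    "contact":  lambda v: bool(v) and "@" in v,              # reachable contact
}

def check_metadata(record):
    """Return element -> pass/fail for each quality rule."""
    return {field: rule(record.get(field)) for field, rule in RULES.items()}

record = {"title": "Land cover map of Aragon",
          "abstract": "short",
          "date": "2020-05-01",
          "contact": "gis@example.org"}
report = check_metadata(record)
print(report)  # the one-word abstract fails the completeness rule
```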

11.
12.
Evaluation is a key step in examining the quality of generalized maps with respect to map requirements. Map generalization facilitates the recognition of pattern-generating processes by preserving and highlighting patterns at smaller scales. This article focuses specifically on the evaluation of building patterns in topographic maps that are generalized from large to mid scales. Currently, there is a lack of knowledge and functionality for automatically evaluating how these patterns are generalized; the issues range from missing formal map requirements on building alignments to missing automated evaluation techniques. This article first analyses the requirements (constraints) related to the generalization of building alignments, and then focuses on three more specific constraints: the existence and orientation of alignments, and the spatial distribution of their composing buildings. A three-step approach is proposed to (1) recognize and (2) match alignments from source and generalized datasets and (3) evaluate building alignments in generalized datasets. Many-to-many and partial matching between initial and target alignments is a side effect of generalization that reduces the reliability of the evaluation results, so this article introduces a confidence indicator to document that reliability and to inform intended users (e.g. cartographers) and/or systems about the reliability of evaluation decisions. The effectiveness of our approach is demonstrated by evaluating the alignments in both interactively (manually) generalized maps and automatically generalized maps. Finally, we discuss how our approach can be used to control automated generalization and identify further improvements.
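Steps (2) and (3) can be sketched as below: match alignments between source and generalized datasets by their shared member buildings, and attach a confidence value that drops for partial or many-to-many matches. Representing an alignment as a set of building ids, and using Jaccard overlap as the confidence indicator, are simplifying assumptions rather than the paper's method.

```python
def match_alignments(source, generalized):
    """Each alignment is a set of building ids. Returns
    (source_index, generalized_index, confidence) triples, where
    confidence is the Jaccard overlap of member buildings; values
    below 1.0 flag partial or many-to-many matches."""
    matches = []
    for i, s in enumerate(source):
        for j, g in enumerate(generalized):
            inter = len(s & g)
            if inter:
                matches.append((i, j, round(inter / len(s | g), 2)))
    return matches

source = [{"b1", "b2", "b3"}, {"b4", "b5"}]
generalized = [{"b1", "b2"}, {"b4", "b5", "b6"}]
print(match_alignments(source, generalized))
```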

14.
Local search services allow a user to search for businesses that satisfy a given geographical constraint. In contrast to traditional web search engines, current local search services rely heavily on static, structured data. Although this yields very accurate systems, it also implies limited coverage and limited support for using landmarks and neighborhood names in queries. To overcome these limitations, we propose to augment the structured information available to a local search service with the vast amount of unstructured and semi-structured data available on the web. This requires a computational framework to represent vague natural language information about the nearness of places, as well as the spatial extent of vague neighborhoods. In this paper, we propose such a framework based on fuzzy set theory, and show how natural language information can be translated into this framework. We provide experimental results that show the effectiveness of the proposed techniques, and demonstrate that local search based on natural language hints about the location of places with an unknown address is feasible.
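The fuzzy-set representation of a vague spatial relation like "near" can be sketched with a simple membership function; the trapezoidal shape and the breakpoint distances below are invented for illustration, not taken from the paper.

```python
def near_membership(distance_km, full=1.0, zero=5.0):
    """Trapezoidal fuzzy membership for 'near': 1 up to `full` km,
    decreasing linearly to 0 at `zero` km."""
    if distance_km <= full:
        return 1.0
    if distance_km >= zero:
        return 0.0
    return (zero - distance_km) / (zero - full)

# Rank candidate businesses by how well they satisfy "near the station"
candidates = {"cafe A": 0.4, "cafe B": 3.0, "cafe C": 7.5}  # km from landmark
ranked = sorted(candidates, key=lambda c: -near_membership(candidates[c]))
print(ranked)
```

Graded membership, rather than a crisp distance cutoff, is what lets the system rank results for queries like "near the old town" without a hard boundary.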

15.
This paper proposes a novel rough set approach to discovering classification rules in real-valued spatial data in general and remotely sensed data in particular. A knowledge induction process is formulated to select optimal decision rules with a minimal set of features necessary and sufficient for a remote sensing classification task. The approach first converts a real-valued or integer-valued decision system into an interval-valued information system. A knowledge induction procedure is then formulated to discover all classification rules hidden in the information system. Two real-life applications verify and substantiate the conceptual arguments, demonstrating that the proposed approach can effectively discover in remotely sensed data the optimal spectral bands and an optimal rule set for a classification task. It is also capable of unraveling the critical spectral band(s) discerning certain classes. The framework paves the road for data mining in mixed spatial databases consisting of qualitative and quantitative data.
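The first step of the approach, converting real-valued attributes into interval values, can be sketched as below. Equal-width binning is a simplifying stand-in for the paper's conversion procedure, and the rough-set rule induction itself is not reproduced here.

```python
def to_intervals(values, n_bins=3):
    """Map each real value to the (lower, upper) bounds of its
    equal-width bin, turning a real-valued attribute into an
    interval-valued one."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    def bin_of(v):
        i = min(int((v - lo) / width), n_bins - 1) if width else 0
        return (lo + i * width, lo + (i + 1) * width)
    return [bin_of(v) for v in values]

bands = [0.12, 0.15, 0.40, 0.43, 0.80, 0.86]  # toy spectral reflectances
print(to_intervals(bands))  # values in the same bin share an interval
```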

16.
The introduction of automated generalisation procedures in map production systems requires that generalisation systems are capable of processing large amounts of map data in acceptable time and that cartographic quality is similar to traditional map products. With respect to these requirements, we examine two complementary approaches that should improve generalisation systems currently in use by national topographic mapping agencies. Our focus is particularly on self-evaluating systems, taking as an example those systems that build on the multi-agent paradigm. The first approach aims to improve cartographic quality by utilising cartographic expert knowledge relating to spatial context. More specifically, we introduce expert rules for the selection of generalisation operations based on a classification of buildings into five urban structure types: inner city, urban, suburban, rural, and industrial and commercial areas. The second approach aims to utilise machine learning techniques to extract heuristics that allow us to reduce the search space and hence the time in which a good cartographic solution is reached. Both approaches are tested individually and in combination for the generalisation of buildings from map scale 1:5000 to the target map scale of 1:25 000. Our experiments show improvements in terms of efficiency and effectiveness, and provide evidence that the approaches complement each other: a combination of expert and machine-learnt rules gives better results than either approach alone. Both approaches are sufficiently general to be applicable to other forms of self-evaluating, constraint-based systems than multi-agent systems, and to other feature classes than buildings. Problems remain, however, resulting from the difficulty of formalising cartographic quality by means of constraints for the control of the generalisation process.
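The first approach, expert rules keyed to the five urban structure types, can be sketched as a small decision function. The input measures, thresholds and operation names below are invented for illustration; only the five structure types come from the text.

```python
def structure_type(density, mean_area, distance_to_centre_km):
    """Assign one of the five urban structure types from simple,
    invented measures of building density, mean footprint area (m^2)
    and distance to the city centre."""
    if distance_to_centre_km < 1 and density > 0.6:
        return "inner city"
    if mean_area > 2000:
        return "industrial and commercial"
    if density > 0.4:
        return "urban"
    if density > 0.15:
        return "suburban"
    return "rural"

# Invented mapping: structure type -> preferred generalisation operation
OPERATIONS = {
    "inner city": "typification",
    "urban": "simplification",
    "suburban": "aggregation",
    "rural": "elimination",
    "industrial and commercial": "simplification",
}

t = structure_type(density=0.7, mean_area=300, distance_to_centre_km=0.5)
print(t, "->", OPERATIONS[t])
```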

17.
This paper addresses the problems associated with the integration of data between incongruent boundary systems. Currently, the majority of spatial boundaries are designed in an uncoordinated manner with individual organizations generating individual boundaries to meet individual needs. As a result, current technologies for analysing geospatial information, such as geographical information systems (GISs), are not reaching their full potential. In response to the problem of uncoordinated boundaries, the authors present an algorithm for the hierarchical structuring of administrative boundaries. This algorithm applies hierarchical spatial reasoning (HSR) theory to the automated structuring of polygons. In turn, these structured boundary systems facilitate accurate data integration and analysis whilst meeting the spatial requirements of selected agencies. The algorithm is presented in two parts. The first part outlines previous research undertaken by the authors into the delineation of administrative boundaries in metropolitan regions. The second part outlines the distinctly different constraints required for administrative-boundary design in rural areas. The development of the algorithm has taken place in a GIS environment utilizing Avenue, an object-orientated programming language that operates under ArcView, the desktop software developed and distributed by ESRI.

18.
Research on the spatial layout characteristics of ethnic-minority traditional villages based on pattern language (Cited by: 1; self-citations: 0; other citations: 1)
李伯华, 徐崇丽, 郑始年, 王莎, 窦银娣. 《地理科学》(Scientia Geographica Sinica), 2020, 40(11): 1784-1794
Four typical Dong traditional villages (Yutou, Huangdu, Gaoshang and Pingtan) were selected to analyze and summarize the spatial layout characteristics of Dong traditional villages, with the aim of exploring the spatial nesting of pattern language across different regions. The study shows: (1) The pattern-language vocabularies of different Dong traditional villages are diverse but share commonalities; applying the spatial nesting of pattern language to Pingtan village verified the validity and general applicability of the pattern-language system for the spatial layout of Dong traditional villages. (2) Dong traditional villages have developed in an orderly way under the basic principle of "following nature and adapting to local conditions", mostly presenting a spatial sequence of "water space - residential space - connecting space - public space". (3) Overall, Dong traditional villages take public buildings as their core, with surrounding buildings distributed in a centripetal, inward-clustering pattern. (4) Throughout their expansion, Dong traditional villages have followed grammatical principles of locality and order, with the design and layout of spatial elements mostly guided by traditional fengshui concepts. (5) The pattern-language system of traditional village spatial layout plays a positive role in the spatial restoration and renewal of villages of the same type; the spatial generality of its vocabulary, syntax and grammar helps achieve effective protection and sustainable development of traditional village landscapes.

19.
Research on locational factors influencing the siting of capital cities (Cited by: 1; self-citations: 0; other citations: 1)
袁俊, 吴殿廷, 常旭. 《世界地理研究》(World Regional Studies), 2007, 16(2): 32-37, 88
The choice of a capital's location is of vital importance to any state or regime, yet no previous work has comprehensively and systematically analyzed the factors that influence it. This paper first describes five main types of capital location. On this basis, it statistically analyzes the distribution of location types among the world's capitals and derives their distribution patterns. Finally, drawing on the spatio-temporal evolution of capital locations, it comprehensively examines the factors influencing capital location from six aspects: natural, social, historical, economic, political and military.

20.
Scale analysis of the spatial variability of land use based on geostatistical methods (Cited by: 1; self-citations: 0; other citations: 1)
Using land-use thematic maps derived from remote sensing and GIS interpretation, landscape-index and geostatistical methods were applied to analyze the relationship between the spatial variability of land-use patterns and scale, and its controlling factors, in Chenggong County, Yunnan Province, for 2001, 2003 and 2006. The spatial variability of land-use patterns in the study area exists at multiple scales and exhibits a hierarchical structure: different scales correspond to different spatial patterns; processes at different hierarchical scales are the controlling factors of pattern change; and patterns at different hierarchical scales can be transformed into one another.
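The geostatistical core of such a scale analysis is the empirical semivariogram, which shows how variability grows with lag distance. The sketch below uses synthetic 1-D samples rather than the study's land-use data; real analyses would work on 2-D locations and fit a model (spherical, exponential) to read off range and sill.

```python
def empirical_semivariogram(locations, values, lags, tol=0.5):
    """gamma(h) = mean of 0.5 * (z_i - z_j)^2 over all sample pairs
    whose separation distance is within `tol` of the lag h."""
    gamma = {}
    n = len(locations)
    for h in lags:
        sq = [0.5 * (values[i] - values[j]) ** 2
              for i in range(n) for j in range(i + 1, n)
              if abs(abs(locations[i] - locations[j]) - h) <= tol]
        gamma[h] = sum(sq) / len(sq) if sq else None
    return gamma

# Synthetic samples along a transect: a clear spatial trend, so the
# semivariogram should increase with lag distance
locs = [0, 1, 2, 3, 4, 5]
vals = [1.0, 1.2, 1.9, 2.4, 3.1, 3.0]
print(empirical_semivariogram(locs, vals, lags=[1, 2, 3]))
```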


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号