Similar documents: 20 results found
1.
The volume and potential value of user-generated content (UGC) are ever growing. Multiply sourced, its value is greatly increased by the inclusion of metadata that adequately and accurately describes that content – particularly if such data are to be integrated with more formal data sets. Typically, digital photography is tagged with location and attribute information that variously describes the location, events or objects in the image. Often inconsistent and incomplete, these attributes reflect concepts at a range of geographic scales. From a spatial data integration perspective, the information relating to “place” is of primary interest. The challenge, therefore, is to select the tags that best describe the geography of the image. This article presents a methodology based on an information retrieval technique that separates out “place-related tags” from the remainder of the tags. Different scales of geography are identified by varying the size of the sampling area within which the imagery falls. This is applied in the context of urban environments, using Flickr imagery. Empirical analysis is then used to assess the correctness of the chosen tags (i.e. whether the tag correctly describes the geographic region in which the image was taken). Logistic regression and Bayesian inference are used to attach a probability value to each place tag. The high correlation values achieved indicate that this methodology can be used to automatically select place tags for any urban region and thus hierarchically structure UGC so that it can be semantically integrated with other data sources.
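A minimal Python sketch of the core idea, assuming toy Flickr-style tag lists and a hand-labelled training set; the features and labels are illustrative, not the authors' exact formulation:

```python
from collections import Counter
from sklearn.linear_model import LogisticRegression

# Tags of photos taken inside the sampling area vs. in the whole city (toy data).
inside_tags = ["templebar", "dublin", "pub", "nikon", "templebar", "dublin"]
all_tags = inside_tags + ["nikon", "holiday", "dublin", "pub", "cat", "nikon"]
inside, overall = Counter(inside_tags), Counter(all_tags)

def concentration(tag):
    """Share of a tag's overall occurrences that fall inside the sampling area."""
    return inside[tag] / overall[tag]

# Hand-labelled training rows: [concentration, overall frequency] -> place tag (1) or not (0).
X = [[1.0, 2], [0.75, 3], [0.33, 3], [0.0, 1]]
y = [1, 1, 0, 0]
model = LogisticRegression().fit(X, y)

for tag in overall:
    prob = model.predict_proba([[concentration(tag), overall[tag]]])[0][1]
    print(f"{tag:10s} P(place tag) = {prob:.2f}")
```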

2.
Reverse geocoding, which transforms machine-readable GPS coordinates into human-readable location information, is widely used in a variety of location-based services and analysis. The output quality of reverse geocoding is critical because it can greatly impact these services provided to end-users. We argue that the output of reverse geocoding should be spatially close to and topologically correct with respect to the input coordinates, contain multiple suggestions ranked by a uniform standard, and incorporate GPS uncertainties. However, existing reverse geocoding systems often fail to fulfill these aims. To further improve the reverse geocoding process, we propose a probabilistic framework that includes: (1) a new workflow that can adapt all existing address models and utilizes distance and topology relations among retrieved reference data for candidate selection; (2) an advanced scoring mechanism that quantifies characteristics of the entire workflow and orders candidates according to their likelihood of being the best candidate; and (3) a novel algorithm that derives statistical surfaces for input GPS uncertainties and propagates such uncertainties into the final output lists. The efficiency of the proposed approaches is demonstrated through comparisons to four commercial reverse geocoding systems and through human judgments. We envision that more advanced reverse geocoding output ranking algorithms specific to different application scenarios can be built upon this work.
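A hedged sketch of candidate scoring in this spirit: a Gaussian term models GPS uncertainty around the input fix and a small bonus rewards topological consistency. The weights, sigma, and the topology test are assumptions, not the framework's actual scoring mechanism:

```python
import math

def score_candidate(distance_m, sigma_m=10.0, on_correct_side=True, w_topo=0.2):
    """Gaussian distance likelihood for GPS uncertainty, plus a small bonus when the
    candidate is topologically consistent (e.g. on the same side of the street as the
    fix). All weights are illustrative."""
    p_dist = math.exp(-(distance_m ** 2) / (2 * sigma_m ** 2))
    return (1 - w_topo) * p_dist + (w_topo if on_correct_side else 0.0)

candidates = [
    {"address": "12 Main St",  "distance_m": 8.0,  "on_correct_side": True},
    {"address": "15 Main St",  "distance_m": 6.0,  "on_correct_side": False},
    {"address": "3 Side Lane", "distance_m": 25.0, "on_correct_side": True},
]
ranked = sorted(candidates,
                key=lambda c: score_candidate(c["distance_m"], on_correct_side=c["on_correct_side"]),
                reverse=True)
for c in ranked:
    s = score_candidate(c["distance_m"], on_correct_side=c["on_correct_side"])
    print(f'{c["address"]:12s} score {s:.3f}')
```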

3.
The amount of volunteered geographic information (VGI) has increased over the past decade, and several studies have been conducted to evaluate the quality of VGI data. In this study, we evaluate the completeness of the road network in the VGI data set OpenStreetMap (OSM). The evaluation is based on an accurate and efficient network-matching algorithm. The study begins with a comparison of the two main strategies for network matching: segment-based and node-based matching. The comparison shows that the result quality is comparable for the two strategies, but the node-based approach is considerably more computationally efficient. Therefore, we improve the accuracy of the node-based algorithm by handling topological relationships and detecting patterns of complicated network components. Finally, we conduct a case study on the extended node-based algorithm in which we match OSM to the Swedish National Road Database (NVDB) in Scania, Sweden. The case study reveals that OSM has a completeness of 87% in the urban areas and 69% in the rural areas of Scania. The accuracy of the matching process is approximately 95%. The conclusion is that the extended node-based algorithm is sufficiently accurate and efficient for conducting surveys of the quality of OSM and other VGI road data sets in large geographic regions.
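A minimal illustration of node-based matching on toy coordinates; the paper's extended algorithm additionally handles topological relationships and complicated network components, which are omitted here:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy planar coordinates (metres); real inputs would be projected road nodes.
reference_nodes = np.array([[0, 0], [100, 0], [200, 0], [300, 0], [400, 0]], dtype=float)
osm_nodes = np.array([[2, 1], [101, -3], [205, 4], [390, 120]], dtype=float)

tolerance_m = 10.0
tree = cKDTree(osm_nodes)
dist, _ = tree.query(reference_nodes, k=1)     # nearest OSM node per reference node
matched = dist <= tolerance_m

completeness = matched.sum() / len(reference_nodes)
print(f"node-level completeness: {completeness:.0%}")   # 3 of 5 reference nodes matched
```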

4.
The Feature Manipulation Engine (FME) platform is a data conversion tool. Building on a study of the composition and execution process of FME semantic mapping files, this work investigates the extraction and conversion of OSM tag information into a MySQL database by modifying FME semantic mappings, achieving fast and efficient extraction and conversion of OSM tag data. Based on the structural characteristics of OSM data, a database schema is designed so that the key-value tag structure of OSM data is fully reflected in the conversion results. The results show that storing OSM attribute data in MySQL supports the further development of semantic feature analysis.
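A small sketch of the same extraction idea outside FME, assuming a tiny inline OSM XML snippet and using SQLite as a stand-in for MySQL (the table layout is illustrative, not the paper's schema):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Minimal OSM XML snippet; a real extract would come from the OSM API or a .osm file.
osm_xml = """<osm>
  <node id="1" lat="48.2" lon="16.4">
    <tag k="amenity" v="cafe"/>
    <tag k="name" v="Kaffeehaus"/>
  </node>
  <way id="2">
    <tag k="highway" v="residential"/>
  </way>
</osm>"""

# SQLite stands in for MySQL here; with a MySQL driver the SQL has the same shape.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE osm_tags (element_type TEXT, element_id INTEGER, k TEXT, v TEXT)")

root = ET.fromstring(osm_xml)
for element in root:                         # <node>, <way>, <relation> elements
    for tag in element.findall("tag"):
        db.execute("INSERT INTO osm_tags VALUES (?, ?, ?, ?)",
                   (element.tag, int(element.attrib["id"]), tag.attrib["k"], tag.attrib["v"]))

for row in db.execute("SELECT * FROM osm_tags"):
    print(row)
```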

5.
The Annotation Process in OpenStreetMap
In this article we describe the analysis of 25,000 objects from the OpenStreetMap (OSM) databases of Ireland, the United Kingdom, Germany, and Austria. The objects are selected as exhibiting the characteristics of “heavily edited” objects, which we define as having 15 or more versions over the object's lifetime. Our results indicate that there are some serious issues arising from the way contributors tag or annotate objects in OSM. Values assigned to the “name” and “highway” attributes are often subject to frequent and unexpected change. However, this “tag flip-flopping” is not found to be strongly correlated with increasing numbers of contributors. We also show problems with usage of the OSM ontology/controlled vocabulary. The majority of errors were caused by contributors choosing values from the ontology “by hand” and spelling these values incorrectly. These issues could have a potentially detrimental effect on the quality of OSM data while at the same time damaging the perception of OSM in the GIS community. The current state of tagging and annotation in OSM is not perfect. We feel that the problems identified are a combination of the flexibility of the tagging process in OSM and the lack of a strict mechanism for checking adherence to the OSM ontology for specific core attributes. More studies comparing the names of features in OSM to recognized ground-truth datasets are required.
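A hedged sketch of how "tag flip-flopping" could be detected in an object's version history; the data shape is a simplification for illustration, not the OSM history API format:

```python
def tag_changes(version_history, key):
    """Count value changes of one tag across an object's version history and flag
    'flip-flops' (a value that returns after having been replaced)."""
    values = [v[key] for v in version_history if key in v]
    changes = sum(1 for a, b in zip(values, values[1:]) if a != b)
    seen, flip_flop = set(), False
    for a, b in zip(values, values[1:]):
        if a != b:
            seen.add(a)
            if b in seen:
                flip_flop = True
    return changes, flip_flop

# Toy version history of one heavily edited way (15+ versions in the paper's definition).
history = [{"highway": "residential"}, {"highway": "unclassified"},
           {"highway": "residential"}, {"highway": "residential"},
           {"highway": "tertiary"}]
print(tag_changes(history, "highway"))   # (3, True): the 'highway' value flip-flopped
```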

6.
ABSTRACT

Natural disasters, such as wildfires, earthquakes, landslides, or floods, lead to an increase in topical information shared on social media and to increased mapping activity in volunteered geographic information (VGI) platforms. Using earthquakes in Nepal and Central Italy as case studies, this research analyzes the effects of natural disasters on short-term (weeks) and longer-term (half year) changes in OpenStreetMap (OSM) mapping behavior and tweet activity in the affected regions. An increase in OSM activity during the events can be partially attributed to focused OSM mapping campaigns, for example through the Humanitarian OSM Team (HOT). Using source tags in OSM changesets, it was found that only a small portion of external mappers actually travels to the affected regions, whereas the majority of external mappers relies on desktop mapping instead. Furthermore, the study analyzes the spatio-temporal sequence of posted tweets together with keyword filters to identify a subset of users who most likely traveled to the affected regions for support and rescue operations. It also explores where, geographically, earthquake information spreads within social networks.

7.
Until recently, land surveys and digital interpretation of remotely sensed imagery have been used to generate land use inventories. These techniques, however, are often cumbersome and costly, demanding substantial technical resources and time. The technological advances of Web 2.0 have brought a wide array of achievements that stimulate participation in collaborative and crowdsourced mapping products. This has been fostered by GPS-enabled devices and accessible tools that enable visual interpretation of the high-resolution satellite images and air photos provided in collaborative mapping projects. Such technologies offer an integrative approach to geography by promoting public participation and allowing accurate assessment and classification of land use as well as geographical features. OpenStreetMap (OSM) has supported the evolution of such techniques, contributing to a large inventory of spatial land use information. This paper explores the introduction of this novel participatory phenomenon for land use classification in Europe's metropolitan regions. We adopt a positivistic approach to comparatively assess the accuracy of OSM contributions for land use classification in seven large European metropolitan regions. Thematic accuracy and degree of completeness of OSM data were compared with the available Global Monitoring for Environment and Security Urban Atlas (GMESUA) datasets for the chosen metropolises. We further place our land use findings within a novel framework for geography, arguing that volunteered geographic information (VGI) sources are of great benefit for land use mapping, depending on location and the degree of VGI dynamism, and offer a strong alternative to traditional mapping techniques for metropolitan regions throughout Europe. Evaluation of several land use types at the local level suggests that a number of OSM classes (such as anthropogenic land use, agricultural and some natural environment classes) are viable alternatives for land use classification. These classes are highly accurate and can be integrated into planning decisions by stakeholders and policymakers.
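A minimal sketch of a thematic-accuracy comparison of this kind, assuming an illustrative OSM-to-reference class correspondence and made-up sample pairs rather than the GMESUA classes used in the paper:

```python
from collections import defaultdict

# Paired land-use labels at the same sample locations: (OSM class, reference class).
pairs = [("residential", "residential"), ("residential", "residential"),
         ("farmland", "agricultural"),   ("forest", "forest"),
         ("industrial", "residential"),  ("forest", "forest")]

# Assumed correspondence table from OSM land-use values to reference classes.
osm_to_reference = {"residential": "residential", "farmland": "agricultural",
                    "forest": "forest", "industrial": "industrial"}

confusion = defaultdict(int)
correct = 0
for osm_class, ref_class in pairs:
    mapped = osm_to_reference[osm_class]
    confusion[(mapped, ref_class)] += 1
    correct += mapped == ref_class

print(f"overall thematic accuracy: {correct / len(pairs):.0%}")   # 5 of 6 samples agree
for (mapped, ref_class), n in sorted(confusion.items()):
    print(mapped, "->", ref_class, n)
```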

8.
Acquiring geographic information for neighboring regions is a difficult problem in China's geo-environmental research, and the rise of volunteered geographic information (VGI) offers a feasible way to address it. Among current VGI projects, OpenStreetMap (OSM) is one of the leading applications, but the OSM data model differs from the professional vector data models used in applications for China's neighboring regions, so OSM data must first undergo model conversion before use. This paper therefore proposes a rule-based method for converting OSM data to a professional-application vector data model. The method first uses the geometry types and feature attributes defined by OSM as the classification basis to build a base library of conversion rules; targets that volunteers have tagged according to their own understanding and that are not covered by the base rule library are converted interactively, and the rule library is continuously refined during this process. Experiments with data for Vietnam and Pakistan produced a conversion rule library of 2,344 rules, providing a feasible route for converting the OSM data model to professional-application vector data models.
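A hedged sketch of the rule-based lookup with an interactive fallback; the rules shown are placeholders, not entries from the paper's 2,344-rule library:

```python
# (geometry type, tag key, tag value) -> target feature class; unmatched features
# are routed to interactive review, and the operator's decision extends the rule base.
rules = {
    ("line",    "highway",  "primary"):     "ROAD_MAJOR",
    ("line",    "waterway", "river"):       "HYDRO_LINE",
    ("polygon", "landuse",  "residential"): "BUILTUP_AREA",
}

def convert(feature):
    for key, value in feature["tags"].items():
        target = rules.get((feature["geometry"], key, value))
        if target:
            return target, False
    return None, True                      # no rule matched -> needs interactive conversion

features = [
    {"geometry": "line",    "tags": {"highway": "primary", "name": "N1"}},
    {"geometry": "polygon", "tags": {"leisure": "park"}},
]
for f in features:
    target, needs_review = convert(f)
    print(f["tags"], "->", target or "interactive review")
    if needs_review:
        # In the described workflow the interactive decision is fed back into the rule base.
        rules[("polygon", "leisure", "park")] = "GREEN_AREA"
```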

9.
Virtual globes (VGs) allow Internet users to view geographic data of heterogeneous quality created by other users. This article presents a new approach for collecting and visualizing information about the perceived quality of 3D data in VGs, aiming to improve users' awareness of the quality of 3D objects. Instead of relying on existing metadata or on formal accuracy assessments that are often impossible in practice, we propose a crowd-sourced quality recommender system based on the five-star visualization method successful in other types of Web applications. Four alternative five-star visualizations were implemented in a Google Earth-based prototype and tested through a formal user evaluation. These tests helped identify the most effective method for a 3D environment. Results indicate that while most websites use a visualization approach that shows a ‘number of stars’, this method was the least preferred by participants. Instead, participants ranked the ‘number within a star’ method highest, as it reduced visual clutter in urban settings, suggesting that 3D environments such as VGs require different design approaches than 2D or non-geographic applications. Results also confirmed that expert and non-expert users of geographic data share similar preferences for the most and least preferred visualization methods.

10.
As tools for collecting data continue to evolve and improve, the information available for research is expanding rapidly. Increasingly, this information is of a spatio-temporal nature, which enables tracking of phenomena through both space and time. Despite the increasing availability of spatio-temporal data, however, the methods for processing and analyzing these data are lacking. Existing geocoding techniques are no exception. Geocoding enables the geographic location of people and events to be known and tracked. However, geocoded information is highly generalized and subject to various interpolation errors. In addition, geocoding of spatio-temporal data is especially challenging because of the inherent dynamism of the associated data. This article presents a methodology for geocoding spatio-temporal data in ArcGIS that utilizes several additional supporting procedures to enhance spatial accuracy, including the use of supplementary land use information, aerial photographs and local knowledge. This hybrid methodology allows phenomena to be tracked through space and over time. It is also able to account for reporting inconsistencies, a common feature of spatio-temporal data. The utility of this methodology is demonstrated through an application to spatio-temporal address records for a highly mobile group of convicted felons in Hamilton County, Ohio.

11.
OpenStreetMap (OSM) represents one of the most well-known examples of a collaborative mapping project. Major research efforts have so far dealt with data quality analysis, but the way OSM evolves across space and time has barely been examined. This study aims to analyze spatio-temporal patterns of contributions in OSM by proposing a contribution index (CI) in order to investigate the dynamism of OSM. The CI is based on a per-cell analysis of node quantity, interactivity, semantics, and attractivity (the ability to attract contributors). Additionally, this research explores whether OSM has been constantly attracting new users and contributions or whether it has experienced a decline in its ability to attract continued contributions. Using the Stuttgart region of Germany as a case study, the empirical findings of the CI over time confirm that since 2007, OSM has been constantly attracting new users, who create new features, edit the existing spatial objects, and enrich them with attributes. This rate has been growing dramatically since 2011. The use of a Cellular Automata-Markov (CA-Markov) model provides evidence that by the end of 2016 and 2020, the rise of the CI will spread out over the study area and only a few cells without OSM features will remain.
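A minimal sketch of a per-cell contribution index; the four components follow the paper's wording, but the numbers, normalisation, and equal weights are assumptions:

```python
import numpy as np

# Toy per-cell measurements standing in for node quantity, interactivity,
# semantics, and attractivity.
cells = {
    "cell_a": {"nodes": 520, "edits_per_node": 3.1, "tags_per_node": 2.4, "new_contributors": 14},
    "cell_b": {"nodes":  60, "edits_per_node": 1.2, "tags_per_node": 0.8, "new_contributors":  2},
    "cell_c": {"nodes": 300, "edits_per_node": 2.0, "tags_per_node": 1.5, "new_contributors":  7},
}

components = ["nodes", "edits_per_node", "tags_per_node", "new_contributors"]
maxima = {c: max(cell[c] for cell in cells.values()) for c in components}

def contribution_index(cell):
    # Normalise each component to [0, 1] and average with equal weights.
    return float(np.mean([cell[c] / maxima[c] for c in components]))

for name, cell in cells.items():
    print(name, round(contribution_index(cell), 2))
```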

12.
OpenStreetMap (OSM) currently represents the most popular project of Volunteered Geographic Information (VGI): geodata are collected by ordinary people and made available for public use. Airborne Laser Scanning (ALS) enables the acquisition of high-resolution digital elevation models that are used in many applications. This study combines the advantages of both ALS and OSM, offering a promising new approach that enhances data quality and allows change detection: the mostly up-to-date 2D data of OSM can be combined with the high-resolution – but rarely updated – elevation information provided by ALS. This case study investigates building objects from OSM and ALS data of the city of Bregenz, Austria. Data quality of OSM is assessed by comparing building footprints using different true-positive definitions (e.g. overlapping area). High quality of OSM data is revealed, yet limitations of each method with respect to heterogeneous regions and building outlines are also identified. For the first time, an up-to-date Digital Surface Model (DSM) combining 2D OSM and ALS data is achieved. A multitude of applications such as flood simulations and solar potential assessments can directly benefit from this data combination, since their value and reliability strongly depend on an up-to-date DSM.
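A small sketch of footprint comparison with an overlap-area true-positive definition, using made-up rectangles in place of OSM and ALS-derived footprints:

```python
from shapely.geometry import box

# Toy footprints in projected metres.
osm_buildings = [box(0, 0, 10, 10), box(20, 0, 28, 9), box(50, 50, 55, 55)]
reference     = [box(1, 0, 11, 10), box(20, 1, 29, 10), box(80, 80, 90, 90)]

def matches(a, b, min_overlap=0.5):
    """True-positive test used here: the intersection covers at least half of the
    smaller footprint. Other definitions (e.g. centroid containment) are possible."""
    return a.intersection(b).area / min(a.area, b.area) >= min_overlap

true_positives = sum(any(matches(r, o) for o in osm_buildings) for r in reference)
completeness = true_positives / len(reference)
print(f"footprint completeness: {completeness:.0%}")   # 2 of 3 reference buildings matched
```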

13.
ABSTRACT

Nowadays, several research projects are interested in employing volunteered geographic information (VGI) to improve their systems with up-to-date and detailed data. The European project CAP4Access is one successful example of such international research projects; it aims to improve the accessibility of people with restricted mobility using crowdsourced data. In this project, OpenStreetMap (OSM) is used to extend OpenRouteService, a well-known routing platform. However, a basic challenge that this project tackled was the incompleteness of OSM data with regard to certain information required for wheelchair accessibility (e.g. sidewalk information, kerb data, etc.). In this article, we present the results of an initial assessment of sidewalk data in OSM at the beginning of the project, as well as our approach to raising awareness and to using tools for tagging accessibility data in the OSM database in order to improve sidewalk data completeness. Several experiments were carried out in different European cities, and the results of the experiments as well as the lessons learned are discussed. The lessons learned provide recommendations that help in organizing better mapping party events in the future. We conclude by reporting on how, and to what extent, OSM sidewalk data completeness in these study areas benefited from the mapping parties by the end of the project.

14.
Today, many services that can geocode addresses are available to domain scientists and researchers, software developers, and end-users. For a number of reasons, including the quality of the reference database and the interpolation technique, a given address geocoded by different services often does not result in the same location. Considering that there are many widely available and accessible geocoding services and that each geocoding service may utilize a different reference database and interpolation technique, selecting a suitable geocoding service that meets the requirements of an application or user is a challenging task. This is especially true for online geocoding services, which are often used as black boxes and do not provide knowledge about the reference databases and the interpolation techniques they employ. In this article, we present a geocoding recommender algorithm that can recommend optimal online geocoding services by accounting for the characteristics (positional accuracy and match rate) of the services and the preferences of the user and/or their application. The algorithm is simulated and analyzed using six popular online geocoding services for different address types (agricultural, commercial, industrial, residential) and preferences (match rate, positional accuracy).
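A hedged sketch of the recommender idea: rank services by a weighted combination of match rate and positional accuracy driven by a user preference. Service names and figures are invented for illustration:

```python
services = {
    "service_a": {"match_rate": 0.96, "median_error_m": 85},
    "service_b": {"match_rate": 0.88, "median_error_m": 30},
    "service_c": {"match_rate": 0.91, "median_error_m": 55},
}

def recommend(preference_accuracy=0.7):
    """preference_accuracy in [0, 1]: 1 = only positional accuracy matters,
    0 = only match rate matters."""
    worst_error = max(s["median_error_m"] for s in services.values())
    def score(s):
        accuracy = 1 - s["median_error_m"] / worst_error      # higher is better
        return preference_accuracy * accuracy + (1 - preference_accuracy) * s["match_rate"]
    return sorted(services, key=lambda name: score(services[name]), reverse=True)

print(recommend(preference_accuracy=0.9))   # accuracy-driven user
print(recommend(preference_accuracy=0.1))   # match-rate-driven user: a different ranking
```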

15.
With the growth of Internet applications, most of the unstructured text being produced is associated with geographic locations, and geographic information retrieval (GIR) has therefore become a research hotspot in both the GIS and IR communities. Text geocoding, the process of establishing correspondences between text and geographic coordinates, is the foundation of GIR. This paper classifies and summarizes the key techniques involved in text geocoding, including geographic entity recognition, geographic entity disambiguation, text location focusing, and regional language modeling, and outlines future research directions and open challenges, offering new ideas for further research on text geocoding.
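A toy illustration of two of the surveyed steps, gazetteer-based toponym recognition and population-based disambiguation; real systems use NER models and much richer disambiguation evidence:

```python
# Tiny hand-made gazetteer; coordinates and populations are approximate.
gazetteer = {
    "springfield": [{"country": "US-IL", "lat": 39.80, "lon": -89.64, "population": 114_000},
                    {"country": "US-MA", "lat": 42.10, "lon": -72.59, "population": 155_000}],
    "paris":       [{"country": "FR",    "lat": 48.86, "lon": 2.35,   "population": 2_160_000},
                    {"country": "US-TX", "lat": 33.66, "lon": -95.56, "population": 25_000}],
}

def geoparse(text):
    resolved = {}
    for token in text.lower().replace(",", "").split():
        candidates = gazetteer.get(token)
        if candidates:                                                # recognition: gazetteer hit
            best = max(candidates, key=lambda c: c["population"])     # disambiguation: largest place
            resolved[token] = (best["country"], best["lat"], best["lon"])
    return resolved

print(geoparse("Flooding reported near Springfield and Paris this morning"))
```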

16.
ArcGIS Server is a platform for building enterprise-level GIS applications introduced by ESRI in the ArcGIS 9.0 product line. Its arrival takes distributed GIS applications into new territory: beyond data sharing, advanced GIS functions such as cartography, spatial analysis, geocoding, and multi-user editing become available to ordinary users over the Internet/Intranet. This paper introduces the relevant concepts, analyzes the architecture of ArcGIS Server, and, on that basis, demonstrates through an example the feasibility of building distributed GIS applications with ArcGIS Server.

17.
With the widespread use of tag clouds, multiple map-based variations have been proposed. Like standard tag clouds (also called word clouds), these ‘tag maps’ all share the basic strategy of displaying words within a ‘geographic space’ and scaling the word size to depict the frequency (or importance) of those words within some dataset. While some tag maps simply plot a standard tag cloud on top of a map, the subset of tag maps we focus on here is those in which the collection of words is displayed within the bounded geographic regions (often of irregular shape) for which the words are relevant. For this form of tag map, map scale and polygon shape add constraints to word size and position that have not been considered in most prior approaches to tag map word layout. In this paper, we present a layout strategy for tag map generation that takes into account the shape and size of the geographical regions acting as containers for the tags. The method introduced here uses a triangulated irregular network (TIN) to subdivide the geographical region into many triangular subareas, with the centroid of each triangle being a potential location on which to centre a tag. All the triangles are sorted by their area and all the tags are sorted by their weight value (e.g. frequency, importance or popularity). Positioning of tags is undertaken sequentially from the most important (or frequent, or popular), with potential locations being the TIN triangle centroids (tried from the largest to the smallest triangle). After each tag placement, the TIN is recalculated to integrate the tag centroid and bounding corners into the TIN creation. The limited whitespace in the geographical region, at any specific scale, is used fully by dynamically adjusting the font size along with the number and direction of tags. The method can be applied to add tags within geographic polygons that are convex, concave, or more complex regions containing holes or islands.
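A simplified sketch of the placement loop described above, assuming a convex toy region and ignoring font sizing, label extents, and holes:

```python
import numpy as np
from scipy.spatial import Delaunay

# Triangulate the region's points, put the heaviest remaining tag on the centroid of
# the largest triangle, add that centroid to the point set, and re-triangulate.
region_points = np.array([[0, 0], [100, 0], [100, 60], [0, 60], [50, 30]], dtype=float)
tags = sorted([("museum", 5), ("park", 9), ("cafe", 3)], key=lambda t: -t[1])

points = region_points.copy()
placements = []
for name, weight in tags:
    tri = Delaunay(points)
    triangles = points[tri.simplices]                     # (n, 3, 2) corner coordinates
    v1 = triangles[:, 1] - triangles[:, 0]
    v2 = triangles[:, 2] - triangles[:, 0]
    areas = 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])
    centroid = triangles[np.argmax(areas)].mean(axis=0)   # centre of the largest triangle
    placements.append((name, weight, tuple(centroid)))
    points = np.vstack([points, centroid])                # occupied spot becomes a TIN vertex

for name, weight, (x, y) in placements:
    print(f"{name:7s} weight {weight}  ->  ({x:.1f}, {y:.1f})")
```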

18.
Geocoding urban addresses usually requires the use of an underlying address database. Under the influence of the format defined for TIGER files decades ago, most address databases and street geocoding algorithms are organized around street centerlines, associating numbering ranges to thoroughfare segments between two street crossings. While this method has been successfully employed in the USA for a long time, its transposition to other countries may lead to increased errors. This article presents an evaluation of the centerline-geocoding resources provided by Google Maps, as compared to the point-geocoding method used in the city of Belo Horizonte, Brazil, which we took as a baseline. We generated a textual address for each point object found in the city's point-based address database, and submitted it to the Google Maps geocoding API. We then compared the resulting coordinates with the ones recorded in Belo Horizonte's GIS. We demonstrate that the centerline segment interpolation method, employed by the online resources following the American practice, has problems that can considerably influence the quality of the geocoding outcome. Completeness and accuracy have been found to be irregular, especially within lower income areas. Such errors in online services can have a significant impact on geocoding efforts related to social applications, such as public health and education, since the online service can be faulty and error-prone in the most socially demanding areas of the city. In the conclusion, we point out that a volunteered geographic information (VGI) approach can help with the enrichment and enhancement of current geocoding resources, and can possibly lead to their transformation into more reliable point-based geocoding services.
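A minimal sketch of the error measurement behind such a comparison: great-circle distances between each service result and the corresponding baseline point, summarised with simple statistics (the coordinates are made up):

```python
import math
import statistics

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (baseline point-geocode, online centerline-geocode) pairs.
pairs = [((-19.9245, -43.9352), (-19.9251, -43.9360)),
         ((-19.9301, -43.9401), (-19.9378, -43.9455)),
         ((-19.9102, -43.9210), (-19.9104, -43.9209))]

errors = [haversine_m(b[0], b[1], g[0], g[1]) for b, g in pairs]
print(f"median error: {statistics.median(errors):.0f} m, max error: {max(errors):.0f} m")
```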

19.
Consistency among parts and aggregates: A computational model
Heterogeneous geographic databases contain multiple views of the same geographic objects at different levels of spatial resolution. When users perceive geographic objects as one spatial unit, although they are physically separated into multiple parts, appropriate methods are needed to assess the consistency among the aggregate and the parts. The critical aspect is that the overall spatial relationships with respect to other geographic objects must be preserved throughout the aggregation process. We developed a systematic model for the constraints that must hold with respect to other spatial objects when two parts of an object are aggregated. We found three sets of configurations that require increasingly more information in order to make a precise statement about their consistency: (1) configurations that are satisfied by the topological relations between the two parts and the object of interest; (2) configurations that need further information about the topological relation between the object of concern and the connector in order to be resolved unambiguously; and (3) configurations that require additional information about the topological relation between the aggregate's boundary and the boundary or interior of the object of interest to be uniquely described. The formalism extends immediately to relations between two regions with disconnected parts as well as to relations between a region and an arbitrary number of separations.

20.
OpenStreetMap (OSM) is an extraordinarily large and diverse spatial database of the world. Road networks are amongst the most frequently occurring spatial content within the OSM database. These road network representations are usable in many applications; however, their quality can vary between locations. Comparing OSM road networks with authoritative road datasets for a given area or region is an important task in assessing OSM's fitness for use in applications like routing and navigation. Such comparisons can be technically challenging, and no software implementation exists which facilitates them easily and automatically. In this article we develop and propose a flexible methodology for comparing the geometry of OSM road network data with other road datasets. Quantitative measures of the completeness and spatial accuracy of OSM are computed, including the compatibility of OSM road data with other map databases. Our methodology provides users with significant flexibility in adjusting the parameterization to suit their needs. The software implementation is built exclusively on open source software, and a significant degree of automation is provided for these comparisons. This software can subsequently be extended and adapted for comparisons between OSM and other external road datasets.
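A hedged sketch of one common geometric comparison that such a methodology could include, a buffer-based completeness measure, on toy projected geometries:

```python
from shapely.geometry import LineString
from shapely.ops import unary_union

# Toy road geometries in projected metres; real inputs would be the OSM and
# authoritative networks for the same extent, in the same projection.
osm_roads = [LineString([(0, 0), (100, 0)]), LineString([(0, 50), (100, 55)]),
             LineString([(200, 200), (260, 200)])]
reference = [LineString([(0, 2), (100, 2)]), LineString([(0, 48), (100, 50)])]

buffer_m = 10.0
reference_zone = unary_union([r.buffer(buffer_m) for r in reference])

osm_total = sum(line.length for line in osm_roads)
osm_inside = sum(line.intersection(reference_zone).length for line in osm_roads)

# Share of OSM length lying within the tolerance buffer of the reference network;
# the reverse ratio (reference length within an OSM buffer) indicates completeness.
print(f"OSM length within {buffer_m:.0f} m of reference: {osm_inside / osm_total:.0%}")
```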
