Similar literature
20 similar documents retrieved.
1.
Abstract

In recent years, geographical information systems have been employed in a wide variety of application domains, and as a result many research efforts are being devoted to the problems that arise in them. Geospatial data security, especially access control, has attracted increasing research interest within the academic community. The tendency towards sharing and interoperability of geospatial data and applications makes it common to acquire and integrate geospatial data from multiple organisations to accomplish a complex task. Meanwhile, many organisations need to secure access to the sensitive or proprietary geospatial data they hold. In this heterogeneous and distributed environment, consistent access control functionality is crucial to promoting controlled accessibility. As an extension of general access control mechanisms in the IT domain, access control for geospatial data has its own requirements and characteristics in terms of granularity and geospatial logic. In this paper, we address several fundamental aspects of the design and implementation of an access control system for geospatial data, including the classification, requirements, authorisation models, storage structures and management approaches for authorisation rules, matching and decision-making algorithms between authorisation rules and access requests, and policy enforcement mechanisms. This paper also presents a system framework for realising access control functionality for geospatial data, and explains the access control procedures in detail.
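To make the rule-matching step concrete, the following is a minimal sketch, under assumed rule fields and a simple containment-plus-default-deny policy, of how an access request might be checked against spatially scoped authorisation rules. The class names, attributes, and decision logic are illustrative assumptions, not the paper's actual design; the geometric test relies on the shapely library.

```python
# Minimal sketch: matching an access request against geospatial authorization rules.
# Field names and the default-deny policy are illustrative assumptions.
from dataclasses import dataclass
from shapely.geometry import box, Polygon

@dataclass
class AuthRule:
    subject: str        # role or user the rule applies to
    action: str         # e.g. "read", "write"
    layer: str          # geospatial layer the rule governs
    extent: Polygon     # spatial region the rule covers
    allow: bool         # permit or deny

def decide(rules, subject, action, layer, request_extent):
    """Return True only if a matching rule covers the whole requested extent and permits it."""
    for rule in rules:
        if (rule.subject == subject and rule.action == action
                and rule.layer == layer
                and rule.extent.contains(request_extent)):
            return rule.allow
    return False  # default-deny when no rule matches

rules = [AuthRule("analyst", "read", "parcels", box(116.0, 39.5, 117.0, 40.5), True)]
print(decide(rules, "analyst", "read", "parcels", box(116.2, 39.8, 116.5, 40.0)))  # True
```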

2.
The provision of open data by governments at all levels has rapidly increased over recent years. Given that one of the dominant motivations for the provision of open data is to generate ‘value’, both economic and civic, there are valid concerns over the costs incurred in this pursuit. Typically, costs of open data are framed as internal to the data providing government. Building on the strong history of GIScience research on data provision via spatial data infrastructures, this article considers both the direct and indirect costs of open data provision, framing four main areas of indirect costs: citizen participation challenges, uneven provision across geography and user types, subsidy of private sector activities, and the creation of inroads for corporate influence on government. These areas of indirect cost lead to the development of critical questions, including constituency, purpose, enablement, protection, and priorities. These questions are posed as a guide to governments that provide open data in addressing the indirect costs of open data.

3.
With the rapid advance of big data development, using big data to modernise government administration has become a focus of big data applications. Drawing on the construction of the geospatial big data application demonstration project in Zhejiang Province, this paper analyses the demand for spatiotemporal big data in government applications and, in light of the bottlenecks in existing public geographic information services, proposes an overall framework, main content, and key technologies for geospatial big data development. The effectiveness of the project is further validated through demonstration applications in government administration.

4.
5.
ABSTRACT

The availability and quantity of remotely sensed and terrestrial geospatial data sets are on the rise. Historically, these data sets have been analyzed and queried on 2D desktop computers; however, immersive technologies and specifically immersive virtual reality (iVR) allow for the integration, visualization, analysis, and exploration of these 3D geospatial data sets. iVR can deliver remote and large-scale geospatial data sets to the laboratory, providing embodied experiences of field sites across the earth and beyond. We describe a workflow for the ingestion of geospatial data sets and the development of an iVR workbench, and present the application of these for an experience of Iceland’s Thrihnukar volcano where we: (1) combined satellite imagery with terrain elevation data to create a basic reconstruction of the physical site; (2) used terrestrial LiDAR data to provide a geo-referenced point cloud model of the magmatic-volcanic system, as well as the LiDAR intensity values for the identification of rock types; and (3) used Structure-from-Motion (SfM) to construct a photorealistic point cloud of the inside of the volcano. The workbench provides tools for the direct manipulation of the georeferenced data sets, including scaling, rotation, and translation, and a suite of geometric measurement tools, including length, area, and volume. Future developments will be inspired by an ongoing user study that formally evaluates the workbench’s mature components in the context of fieldwork and analysis activities.

6.
Global geospatial data from Earth observation: status and issues
ABSTRACT

Data covering the whole of the surface of the Earth in a homogeneous and reliable manner has been accumulating over many years. This type of data became available from meteorological satellites from the 1960s and from Earth-observing satellites at a small scale from the early 1970s, and has gradually accumulated at larger scales up to the present day, when we now have data covering many environmental themes at large scales. These data have been used to generate information which is presented in the form of global data sets. This paper will give a brief introduction to the development of Earth observation and to the organisations and sensors which collect data and produce global geospatial data sets. Means of accessing global data sets are set out, along with the types of data available. Digital elevation models are discussed in a separate section because of their importance in georeferencing image data as well as their application to the analysis of thematic data. The paper will also examine issues of availability, accuracy, validation and reliability, and will look at future challenges.

7.
Current search engines in most geospatial data portals tend to induce users to focus on a single data-characteristic dimension (e.g. popularity or release date). This approach largely fails to take account of users’ multidimensional preferences for geospatial data, and hence is likely to result in a less than optimal user experience in discovering the most applicable dataset. This study reports a machine learning framework to address the ranking challenge, the fundamental obstacle in geospatial data discovery, by (1) identifying a number of ranking features of geospatial data to represent users’ multidimensional preferences, considering semantics, user behavior, spatial similarity, and static dataset metadata attributes; (2) applying a machine learning method to automatically learn a ranking function; and (3) proposing a system architecture that combines existing search-oriented open source software, a semantic knowledge base, ranking feature extraction, and the machine learning algorithm. Results show that the machine learning approach outperforms other methods in terms of both precision at K and normalized discounted cumulative gain. As an early attempt at utilizing machine learning to improve search ranking in the geospatial domain, we expect this work to set an example for further research and open the door towards intelligent geospatial data discovery.
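For readers unfamiliar with the reported metrics, here is a small sketch of normalized discounted cumulative gain at K computed from graded relevance scores; the relevance values are invented for illustration and do not come from the study.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k graded relevance scores."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the ideal (descending-sorted) ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevance of the top results returned for one query.
print(round(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5), 3))
```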

8.
ABSTRACT

Many visions for geospatial technology have been advanced over the past half century. Initially researchers saw the handling of geospatial data as the major problem to be overcome. The vision of geographic information systems arose as an early international consensus. Later visions included spatial data infrastructure, Digital Earth, and a nervous system for the planet. With accelerating advances in information technology, a new vision is needed that reflects today’s focus on open and multimodal access, sharing, engagement, the Web, Big Data, artificial intelligence, and data science. We elaborate on the concept of geospatial infrastructure, and argue that it is essential if geospatial technology is to contribute to the solution of problems facing humanity.

9.
Abstract

The geospatial sciences face grand information technology (IT) challenges in the twenty-first century: data intensity, computing intensity, concurrent access intensity and spatiotemporal intensity. These challenges require the readiness of a computing infrastructure that can: (1) better support discovery, access and utilization of data and data processing so as to relieve scientists and engineers of IT tasks and let them focus on scientific discoveries; (2) provide real-time IT resources to enable real-time applications, such as emergency response; (3) deal with access spikes; and (4) provide more reliable and scalable service for massive numbers of concurrent users to advance public knowledge. The emergence of cloud computing provides a potential solution: an elastic, on-demand computing platform to integrate observation systems, parameter-extracting algorithms, phenomena simulations, analytical visualization and decision support, and to provide social impact and user feedback, the essential elements of the geospatial sciences. We discuss the utilization of cloud computing to support the intensities of the geospatial sciences by reporting our investigations into how cloud computing could enable the geospatial sciences and how spatiotemporal principles, the kernel of the geospatial sciences, could be utilized to ensure the benefits of cloud computing. Four research examples are presented to analyze how to: (1) search, access and utilize geospatial data; (2) configure computing infrastructure to enable the computability of intensive simulation models; (3) disseminate and utilize research results for massive numbers of concurrent users; and (4) adopt spatiotemporal principles to support spatiotemporally intensive applications. The paper concludes with a discussion of opportunities and challenges for spatial cloud computing (SCC).

10.
ABSTRACT

Linked Data is known as one of the best solutions for multisource and heterogeneous web data integration and discovery in this era of Big Data. However, data interlinking, which is the most valuable contribution of Linked Data, remains incomplete and inaccurate. This study proposes a multidimensional and quantitative interlinking approach for Linked Data in the geospatial domain. According to the characteristics and roles of geospatial data in data discovery, eight elementary data characteristics are adopted as data interlinking types. These elementary characteristics are further combined to form compound and overall data interlinking types. Each data interlinking type possesses one specific predicate to indicate the actual relationship of the linked data and uses data similarity to represent the correlation degree quantitatively. Therefore, geospatial data interlinking can be expressed by a directed edge associated with a relation predicate and a similarity value. The approach transforms existing simple and qualitative geospatial data interlinking into complete and quantitative interlinking and promotes the establishment of high-quality and trusted Linked Geospatial Data. The approach is applied to build data intra-links within the Chinese National Earth System Scientific Data Sharing Network (NSTI-GEO) and data inter-links between NSTI-GEO and the Chinese Meteorological Data Network and the National Population and Health Scientific Data Sharing Platform.
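As a hedged illustration of the kind of quantified link described above (a directed edge carrying a relation predicate and a similarity value), such an interlink could be represented roughly as follows; the predicate name, URIs, and similarity score are placeholders, not NSTI-GEO's actual vocabulary or scoring.

```python
from dataclasses import dataclass

@dataclass
class GeoLink:
    source: str        # URI of the source dataset
    predicate: str     # relation type, e.g. a spatial-coverage predicate
    target: str        # URI of the target dataset
    similarity: float  # quantitative correlation degree in [0, 1]

link = GeoLink(
    source="http://example.org/dataset/landcover-2015",
    predicate="ex:hasSimilarSpatialCoverage",   # illustrative predicate
    target="http://example.org/dataset/soil-survey",
    similarity=0.82,
)
print(f"{link.source} -[{link.predicate} ({link.similarity})]-> {link.target}")
```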

11.
ABSTRACT

Photo-sharing services provide a rich resource of crowdsourced spatial data consisting of georeferenced imagery and metadata. Shared photos can provide valuable information for a variety of applications and geospatial analysis tasks, such as identifying tourist hot spots or traveled routes. Understanding the spatiotemporal patterns of photo contributions will allow analysts to assess the suitability of these data for related analysis tasks. Using California as a study area, this paper analyzes various aspects of photo contribution patterns of Panoramio and Flickr. It identifies areas where annual photo contributions are still growing and areas that undergo a decline in annual contributions. Multiple regression is used to identify which environmental correlates are associated with an increase in photo-sharing activities. Furthermore, panel data of annual contributions between 2006 and 2013 for California subcounties will be used in a regression model to demonstrate that there is a positive feedback effect between Panoramio and Flickr photo contributions, but no neighborhood effect. The results of this paper provide insight into the data quality of crowdsourced image collections. These collections are commonly used for geospatial applications, including tourist information services and the computation of scenic routes.

12.
Existing geospatial access control models are incomplete because they lack a detailed analysis of the security threats involved in controlling access to geospatial data. To address this, the paper proposes the STALE model, a security threat model for geospatial data access control. Incorporating geospatial data characteristics such as spatial relationships, multiple scales, and attributes, the model describes in detail the security threats that geospatial data files and databases face under access control. On this basis, countermeasures are proposed for each class of threat in the model and validated experimentally, demonstrating the practicality of the STALE model.

13.
ABSTRACT

Earth observations and model simulations are generating big multidimensional array-based raster data. However, it is difficult to efficiently query these big raster data due to the inconsistency among the geospatial raster data model, distributed physical data storage model, and the data pipeline in distributed computing frameworks. To efficiently process big geospatial data, this paper proposes a three-layer hierarchical indexing strategy to optimize Apache Spark with Hadoop Distributed File System (HDFS) from the following aspects: (1) improve I/O efficiency by adopting the chunking data structure; (2) keep the workload balance and high data locality by building the global index (k-d tree); (3) enable Spark and HDFS to natively support geospatial raster data formats (e.g., HDF4, NetCDF4, GeoTiff) by building the local index (hash table); (4) index the in-memory data to further improve geospatial data queries; (5) develop a data repartition strategy to tune the query parallelism while keeping high data locality. The above strategies are implemented by developing the customized RDDs, and evaluated by comparing the performance with that of Spark SQL and SciSpark. The proposed indexing strategy can be applied to other distributed frameworks or cloud-based computing systems to natively support big geospatial data query with high efficiency.
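A rough sketch of the local, hash-table-style chunk index idea follows. It is a plain-Python illustration under assumed chunk sizes and key layout, not the authors' Spark RDD implementation, but it shows how a query window can be restricted to only the chunks it overlaps.

```python
# Minimal sketch of a local chunk index: chunk grid coordinates -> byte range.
# Chunk sizes, key layout, and the toy query are illustrative assumptions.

CHUNK_ROWS, CHUNK_COLS = 100, 100          # chunking of a 2D raster variable

def build_chunk_index(n_rows, n_cols, bytes_per_cell=4):
    """Hash table from (chunk_row, chunk_col) to (offset, length) within the file."""
    index, offset = {}, 0
    for cr in range(0, n_rows, CHUNK_ROWS):
        for cc in range(0, n_cols, CHUNK_COLS):
            rows = min(CHUNK_ROWS, n_rows - cr)
            cols = min(CHUNK_COLS, n_cols - cc)
            length = rows * cols * bytes_per_cell
            index[(cr // CHUNK_ROWS, cc // CHUNK_COLS)] = (offset, length)
            offset += length
    return index

def chunks_for_window(index, row0, row1, col0, col1):
    """Select only the chunks overlapping a query window (half-open row/col ranges)."""
    return {k: v for k, v in index.items()
            if k[0] * CHUNK_ROWS < row1 and (k[0] + 1) * CHUNK_ROWS > row0
            and k[1] * CHUNK_COLS < col1 and (k[1] + 1) * CHUNK_COLS > col0}

idx = build_chunk_index(1000, 1000)
print(len(chunks_for_window(idx, 150, 420, 80, 260)))  # number of chunks touched
```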

14.
尹志华  唐斌 《测绘科学》2011,36(2):162-164
Examining the two mainstream approaches to spatial data integration and weighing their respective strengths and weaknesses, this paper concludes that adopting GML as the standard for geospatial data conversion and transmission is a practical and promising solution. It then proposes a GML-based model for integrating multi-source, heterogeneous spatial data and discusses several issues in achieving spatial data sharing on the basis of this model.

15.
Crowdsourcing geospatial data
In this paper we review recent developments in crowdsourcing geospatial data. While traditional mapping is nearly exclusively coordinated, and often also carried out, by large organisations, crowdsourcing geospatial data refers to generating a map using informal social networks and Web 2.0 technology. Key differences are that users lacking formal training in map making create the geospatial data themselves rather than relying on professional services; that potentially very large user groups collaborate voluntarily and often without financial compensation, with the result that open datasets become available at very low monetary cost; and that mapping and change detection occur in real time. This situation is similar to that found in the Open Source software environment. We briefly explain the basic technology needed for crowdsourcing geospatial data, discuss the underlying concepts including quality issues, and give some examples of this novel way of generating geospatial data. We also point to applications for which no alternatives exist, such as live traffic information systems. Finally we explore the future of crowdsourcing geospatial data and give some concluding remarks.

16.
Population data have significant application value and scientific importance in resource use, public health, public transportation, disaster assessment, and environmental management. However, traditional census data cannot show population density differences within census units. Furthermore, census data are not uniform across countries, and reconciling these differences when using data from multiple countries requires considerable effort. Finally, there are scale differences between census data and geospatial data (e.g., land use/cover), making data analysis and the needed research difficult. These challenges significantly limit the applications of census data. The advent of gridded population mapping (GPM) technology has overcome these challenges. GPM technology has developed rapidly in recent years; the research data and models are rich and diverse, and many achievements have been made. A systematic review of the current state of GPM research will help relevant researchers and data users. This article begins by summarizing the core elements of GPM research in four aspects: auxiliary data, models, accuracy, and products. It then turns to four problems prevalent in GPM research that directly or indirectly affect the accuracy of GPM. Finally, the article discusses prospects for GPM research from four aspects, based on the current state of research.

17.
Research on sharing models for land and resources spatial data
This paper analyses four main models for spatial data sharing, argues that image-based rapid browsing of spatial data is currently an effective sharing model, and introduces a rapid-browsing system for land and resources spatial data that applies this model.

18.
A new method for AR visualization of geographic data based on terrain contour matching
陈科  尹啸  陈晨  杨忠祥 《测绘科学》2011,36(1):119-120,115
Augmented reality (AR) visualization of geographic data is an important direction for visualization in geographic information science. Focusing on regions with distinct structural features, the authors make a preliminary exploration of applying augmented reality technology to geoscience visualization. First, the visible contours of video imagery and the horizontal contours of DEM terrain data are extracted separately; the two sets of terrain contours are then matched, registering the geographic coordinate system of the geographic data with the image coordinate system of the video imagery. Finally, AR visualization of the geographic data based on the video imagery is simulated…

19.
ABSTRACT

Open data are currently a hot topic and are associated with realising ambitions such as a more transparent and efficient government, solving societal problems, and increasing economic value. To describe and monitor the state of open data in countries and organisations, several open data assessment frameworks have been developed. Despite high scores in these assessment frameworks, the actual (re)use of open government data (OGD) fails to live up to expectations. Our review of existing open data assessment frameworks reveals that these cover only parts of the open data ecosystem. We have developed a framework which assesses open data supply, open data governance, and open data user characteristics holistically. This holistic open data framework assesses the maturity of the open data ecosystem and proves to be a useful tool to indicate which aspects of the open data ecosystem are successful and which require attention. Our initial assessment in the Netherlands indicates that traditional geographical data perform significantly better than non-geographical data, such as healthcare data. Therefore, open geographical data policies in the Netherlands may provide useful cues for other OGD strategies.

20.
ABSTRACT

Volunteered geographic information (VGI) has entered a phase where there is both a substantial amount of crowdsourced information available and strong interest from organizations in using it. But the issue of assessing the quality of VGI without resorting to a comparison with authoritative data remains an open challenge. This article first formulates the problem of quality assessment of VGI data. It then presents a model to measure the trustworthiness of information and the reputation of contributors by analyzing geometric, qualitative, and semantic aspects of edits over time. An implementation of the model is run on a small dataset for a preliminary empirical validation. The results indicate that the computed trustworthiness provides a valid approximation of VGI quality.
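As a loose sketch of how edit-level signals might be aggregated into a trust score, consider the toy example below; the weights, signal names, and plain averaging are assumptions for illustration, not the authors' actual model.

```python
# Toy aggregation of feature trustworthiness: each edit contributes geometric,
# qualitative, and semantic signals in [0, 1]; the weights here are illustrative.
WEIGHTS = {"geometric": 0.4, "qualitative": 0.3, "semantic": 0.3}

def edit_trust(signals):
    """Weighted combination of one edit's signals into a single trust value."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

def feature_trust(edit_history):
    """Average trust over a feature's edits, a stand-in for a time-aware model."""
    scores = [edit_trust(e) for e in edit_history]
    return sum(scores) / len(scores) if scores else 0.0

history = [
    {"geometric": 0.9, "qualitative": 0.7, "semantic": 0.8},
    {"geometric": 0.6, "qualitative": 0.8, "semantic": 0.5},
]
print(round(feature_trust(history), 3))
```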
