Similar Articles
20 similar articles found (search time: 46 ms).
1.
Since the Bonn 2011 conference, the “water-energy-food” (WEF) nexus has aroused global concern as a means to promote sustainable development. The WEF nexus is a complex, dynamic, and open system containing interrelated and interdependent elements. However, nexus studies have mainly focused on natural elements based on massive earth observation data. Human elements (e.g., society, economy, politics, culture) are described insufficiently, because traditional earth observation technologies cannot effectively ...

2.
The Digital Elevation Model that was derived from the February 2000 Shuttle Radar Topography Mission (SRTM) has been one of the most important publicly available new spatial data sets in recent years. However, the ‘finished’ grade version of the data (also referred to as Version 2) still contains data voids (some 836,000 km²), as well as other anomalies, that prevent immediate use in many applications. These voids can be filled using a range of interpolation algorithms in conjunction with other sources of elevation data, but there is little guidance on the most appropriate void-filling method. This paper describes: (i) a method to fill voids using a variety of interpolators; (ii) a method to determine the most appropriate void-filling algorithms using a classification of the voids based on their size and a typology of their surrounding terrain; and (iii) the classification of the most appropriate algorithm for each of the 3,339,913 voids in the SRTM data. Based on a sample of 1304 artificial but realistic voids across six terrain types and eight void size classes, we found that the choice of void-filling algorithm depends on both the size and the terrain type of the void. Contrary to some previous findings, the best methods can be generalised as: kriging or inverse distance weighting interpolation for small and medium-sized voids in relatively flat low-lying areas; spline interpolation for small and medium-sized voids in high-altitude and dissected terrain; triangular irregular network or inverse distance weighting interpolation for large voids in very flat areas; and an advanced spline method (ANUDEM) for large voids in other terrains.
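The paper's closing generalization maps void size and terrain type to a recommended interpolator. A minimal sketch of that decision rule, using illustrative class labels (not the paper's exact size/terrain classes):

```python
def recommend_interpolator(void_size: str, terrain: str) -> str:
    """Suggest a void-filling interpolator for an SRTM void, following
    the paper's broad generalization.  The size and terrain labels here
    are hypothetical stand-ins for the paper's classification."""
    small_medium = void_size in ("small", "medium")
    if small_medium and terrain == "flat_lowland":
        return "kriging_or_idw"          # flat, low-lying areas
    if small_medium and terrain in ("high_altitude", "dissected"):
        return "spline"                  # rugged, high terrain
    if void_size == "large" and terrain == "very_flat":
        return "tin_or_idw"              # large voids in very flat areas
    if void_size == "large":
        return "anudem"                  # advanced spline elsewhere
    return "idw"                         # fallback when no rule applies
```

In practice the chosen interpolator would then be run over elevation values sampled around the void's perimeter.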

3.
Despite a long history of synergy, current techniques for integrating Geographic Information System (GIS) software with hydrologic simulation models do not fully utilize the potential of GIS for modeling hydrologic systems. Part of the reason for this is a lack of GIS data models appropriate for representing fluid flow in space and time. Here we address this challenge by proposing a spatiotemporal data model designed specifically for large-scale river basin systems. The data model builds from core concepts in geographic information science and extends these concepts to accommodate mathematical representations of fluid flow at a regional scale. Space–time is abstracted into three basic objects relevant to hydrologic systems: a control volume, a flux and a flux coupler. A control volume is capable of storing mass, energy or momentum through time, a flux represents the movement of these quantities within space–time, and a flux coupler ensures conservation of the quantities within an overall system. To demonstrate the data model, a simple case study is presented to show how the data model could be applied to digitally represent a river basin system.
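The three abstractions could be sketched as plain data structures. A hypothetical minimal version (the class names, per-step flux semantics and conservation check are assumptions for illustration, not the paper's schema):

```python
from dataclasses import dataclass

@dataclass
class ControlVolume:
    """Stores a conserved quantity (e.g. water mass) through time."""
    name: str
    storage: float = 0.0

@dataclass
class Flux:
    """Moves a quantity from one control volume to another per time step."""
    source: ControlVolume
    target: ControlVolume
    rate: float  # quantity moved per time step

class FluxCoupler:
    """Applies all fluxes and checks that the total quantity is conserved."""
    def __init__(self, volumes, fluxes):
        self.volumes, self.fluxes = volumes, fluxes

    def step(self):
        total_before = sum(v.storage for v in self.volumes)
        for f in self.fluxes:
            moved = min(f.rate, f.source.storage)  # cannot move more than stored
            f.source.storage -= moved
            f.target.storage += moved
        total_after = sum(v.storage for v in self.volumes)
        assert abs(total_before - total_after) < 1e-9  # conservation
```

For example, a flux of rate 2.0 from an upstream reach holding 10.0 units leaves 8.0 upstream and 2.0 downstream after one step, with the coupler verifying nothing was created or lost.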

4.
Interactive statistical graphics are reviewed in the contexts of spatial data and geographical information systems (GIS). GIS provide the user with an active geographical view of the data—a map that can be used as an entry point to the data base. Prototype software—SPIDER—illustrates the possibilities of using statistical graphics as further views of the data, which can be made active and thus provide alternative means of querying the data. These views can be cross-referenced by 'linking'. It is argued that such a system can provide a very rich environment for pursuing exploratory statistical analysis of spatial data.

5.
New sources of data such as ‘big data’ and computational analytics have stimulated innovative pedestrian-oriented research. Current studies, however, are still limited and subjective in their use of Google Street View and other online sources for environment audits or pedestrian counts, because information extraction and compilation are manual, especially for large areas. This study aims to provide future research with an alternative method for conducting large-scale data collection on pedestrian counts, and possibly for environment audits, more consistently and objectively, and to stimulate discussion of the use of ‘big data’ and recent computational advances for planning and design. We explore and report the information needed to automatically download and assemble Google Street View images, as well as other image parameters for a wide range of analysis and visualization, and explore extracting pedestrian count data from these images using machine vision and learning technology. The reliability test results, based on pedestrian information collected from over 200 street segments in Buffalo, NY, Washington, D.C., and Boston, MA, suggest that the image detection method used in this study is capable of determining the presence of pedestrians with a reasonable level of accuracy. The limitations and potential improvements of the proposed method are also discussed.

6.
When different spatial databases are combined, an important issue is the identification of inconsistencies between data. Quite often, representations of the same geographical entities in databases are different and reflect different points of view. In order to fully take advantage of these differences when object instances are associated, a key issue is to determine whether the differences are normal, i.e. explained by the database specifications, or if they are due to erroneous or outdated data in one database. In this paper, we propose a knowledge‐based approach to partially automate the consistency assessment between multiple representations of data. The inconsistency detection is viewed as a knowledge‐acquisition problem, the source of knowledge being the data. The consistency assessment is carried out by applying a proposed method called MECO. This method is itself parameterized by some domain knowledge obtained from a second method called MACO. MACO supports two approaches (direct or indirect) to perform the knowledge acquisition using data‐mining techniques. In particular, a supervised learning approach is defined to automate the knowledge acquisition so as to drastically reduce the human‐domain expert's work. Thanks to this approach, the knowledge‐acquisition process is sped up and less expert‐dependent. Training examples are obtained automatically upon completion of the spatial data matching. Knowledge extraction from data following this bottom‐up approach is particularly useful, since the database specifications are generally complex, difficult to analyse, and manually encoded. Such a data‐driven process also sheds some light on the gap between textual specifications and those actually used to produce the data. The methodology is illustrated and experimentally validated by comparing geometrical representations and attribute values of different vector spatial databases. 
The advantages and limits of such partially automatic approaches are discussed, and some future work is suggested.

7.
Abstract

The need for rapid access to large amounts of data is one of the central problems in the field of geographical information systems. This paper describes the use of a hardware solution to this problem, in the particular case of selecting digitized boundary data from a very large file. The hardware is International Computers Limited's (ICL's) award-winning Content Addressable File Store (CAFS), a special unit fitted to ICL disc drives that allows the fast selection of records from large files. By reformatting the boundary data, it was possible to perform searches using CAFS. A command-driven package was written to allow users to select the boundaries for any named zone or zones, display them on a graphics terminal and write them out in a format suitable for input to the GIMMS package. This package was tested on a file containing the boundaries of all the wards of Great Britain. After reformatting, this file was 6 Mbytes in size, but by means of CAFS it could be searched interactively with response times of the order of 5-10 seconds.

8.
In this paper, conformal geometric algebra (CGA) is introduced to construct a Delaunay-Triangulated Irregular Network (DTIN) intersection for change detection with 3D vector data. A multivector-based representation model is first constructed to unify the representation and organization of the multidimensional objects of a DTIN. The intersection relations between DTINs are obtained using the meet operator with a sphere-tree index. The change of area/volume between objects at different times can then be extracted by topological reconstruction. The method has been tested with Antarctic ice-change simulation data, and its characteristics and efficiency are compared with those of the Möller method and the Guigue–Devillers method. The comparison shows that the new method produces one-fifth as many redundant segments for DTIN intersection, while its computational complexity is comparable to that of the Möller and Guigue–Devillers methods. In addition, our method can easily be implemented in a parallel computation environment, as shown in our case study. The new method not only realizes the unified expression of multidimensional objects with DTIN but also achieves the unification of geometry and topology in change detection. It can also serve as an effective candidate method for universal vector data change detection.

9.
Quantitative reconstructions of mean July temperatures (Tjul) based on new and previously published pollen-stratigraphical data covering the last 2000 years from 11 lakes in northern Fennoscandia and the Kola Peninsula are presented. Tjul values are based on a previously published pollen-climate transfer function for the region with a root-mean-square error of prediction (RMSEP) of 0.99°C. The most obvious trend in the inferred temperatures from all sites is the general decrease in Tjul during the last 2000 years. Pollen-inferred Tjul values on average 0.18 ± 0.56°C (n = 91) higher than present (where “present” refers to the last three decades, based on pollen-inferred Tjul in core-top samples) are indicated between 0 and 1100 AD (2000–850 cal year BP), and temperatures 0.2 ± 0.47°C (n = 78) below present are inferred between 1100 and 1900 AD (850–50 cal year BP). No consistent temperature peak is observed during the ‘Medieval Warm Period’, ca. 900–1200 AD (1100–750 cal year BP), but the cooler period between 1100 and 1900 AD (850–50 cal year BP) corresponds in general with the ‘Little Ice Age’ (LIA). Consistent with independent stable-isotope data, the composite pollen-based record suggests that the coldest periods of the LIA date to 1500–1600 AD (450–350 cal year BP) and 1800–1850 AD (150–100 cal year BP). An abrupt warming occurred at about 1900 AD, and the twentieth century is the warmest century since about 1000 AD (950 cal year BP).

10.
This paper presents a typology of local-government data sharing arrangements in the US at a time when spatial data infrastructures (SDI) are moving into a second generation. In the first generation, the US National Spatial Data Infrastructure (NSDI) theoretically involved a pyramid of data integration resting on local-government data sharing; availability of local-government data is the foundation for all SDI-related data sharing in this model. However, first-generation SDI data-sharing activities and principles have gained only a tenuous hold in local governments. Some formalized data sharing occurs, but only infrequently in response to SDI programmes and policies. Previous research suggests that local-government data sharing aligns with immediate organizational and practical concerns rather than state or national policies and programmes. We present research findings extending this work to show that local-government data sharing is largely informal in nature and is undertaken to support existing governmental activities. NSDI principles remain simply irrelevant for the majority of surveyed local governments. The typology we present distinguishes four distinct types of local-government data sharing arrangements that reflect institutional, political, and economic factors. The effectiveness of second-generation, client-service-based SDI will be seriously constrained if the problems of local-government take-up are not addressed.

11.
The monitoring of the environment's status at continental scale involves the integration of information derived by the analysis of multiple, complex, multidisciplinary, and large-scale phenomena. Thus, there is a need to define synthetic Environmental Indicators (EIs) that concisely represent these phenomena in a manner suitable for decision-making. This research proposes a flexible system to define EIs based on a soft fusion of contributing environmental factors derived from multi-source spatial data (mainly Earth Observation data). The flexibility is twofold: the EI can be customized based on the available data, and the system is able to cope with a lack of expert knowledge. The proposal allows a soft quantifier-guided fusion strategy to be defined, as specified by the user through a linguistic quantifier such as ‘most of’. The linguistic quantifiers are implemented as Ordered Weighted Averaging operators. The proposed approach is applied in a case study to demonstrate the periodical computation of anomaly indicators of the environmental status of Africa, based on a 7-year time series of dekadal Earth Observation datasets. Different experiments have been carried out on the same data to demonstrate the flexibility and robustness of the proposed method.
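Quantifier-guided Ordered Weighted Averaging can be sketched directly: weights are derived from a regular increasing monotone quantifier Q via w_i = Q(i/n) − Q((i−1)/n) and applied to the inputs sorted in descending order. The quantifier Q(r) = r² used below is one common encoding of ‘most of’, not necessarily the paper's exact choice:

```python
def owa_weights(n, quantifier):
    """Yager's quantifier-guided weights: w_i = Q(i/n) - Q((i-1)/n)."""
    return [quantifier(i / n) - quantifier((i - 1) / n)
            for i in range(1, n + 1)]

def owa(values, quantifier):
    """Ordered Weighted Averaging: weights apply to the sorted values,
    not to particular inputs, so the operator is symmetric."""
    ordered = sorted(values, reverse=True)
    w = owa_weights(len(values), quantifier)
    return sum(wi * vi for wi, vi in zip(w, ordered))

# One common RIM quantifier for 'most of' (an assumption here):
most_of = lambda r: r ** 2
```

With Q(r) = r² the later (smaller) ordered values get larger weights, so a high fused score requires that ‘most of’ the contributing factors score high.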

12.
Abstract

Kernel Density Estimation (KDE) is an important approach to analysing the spatial distribution of point features and linear features over 2-D planar space. Some network-based KDE methods have been developed in recent years, which focus on estimating the density distribution of point events over 1-D network space. However, the existing KDE methods are not appropriate for analysing the distribution characteristics of certain kinds of features or events, such as traffic jams, queues at intersections and taxi passenger pick-ups. These events occur and are distributed in 1-D road network space, and present a continuous linear distribution along the network. This paper presents a novel Network Kernel Density Estimation method for Linear features (NKDE-L) to analyse the space–time distribution characteristics of linear features over 1-D network space. We first analyse the density distribution of each linear feature along the network, then estimate the density distribution for the whole network space in terms of network distance and network topology. In the case study, we apply the NKDE-L to analyse the space–time dynamics of taxi pick-up events, with real road network and taxi trace data from Wuhan. Taxi pick-up events are defined and extracted as linear events (LE) in this paper. We first compute space–time statistics of pick-up LE at different temporal granularities. Then we analyse the space–time density distribution of the pick-up events in the road network using the NKDE-L, and uncover some dynamic patterns of people's activities and traffic conditions. In addition, we compare the NKDE-L with the quadrat method and planar KDE. The comparison results demonstrate the advantages of the NKDE-L in analysing spatial distribution patterns of linear features in network space.
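A toy version of kernel density estimation for linear events, reduced to a single road segment, conveys the idea; the real NKDE-L operates over network distance and topology on a full graph, and the kernel choice below is an assumption:

```python
def nkde_linear(events, length, bandwidth, n_bins=100):
    """Toy 1-D KDE for linear events on one road segment.

    events   : list of (start, end) intervals along the road
    length   : road length in the same units
    bandwidth: kernel bandwidth
    Returns bin centres and the estimated density at each centre.
    """
    xs = [length * (i + 0.5) / n_bins for i in range(n_bins)]
    dens = []
    for x in xs:
        d = 0.0
        for a, b in events:
            # distance from x to the nearest point of the linear event
            dist = 0.0 if a <= x <= b else min(abs(x - a), abs(x - b))
            if dist < bandwidth:          # Epanechnikov-style kernel
                u = dist / bandwidth
                d += 0.75 * (1.0 - u * u)
        dens.append(d)
    return xs, dens
```

Points inside an event interval receive the kernel's peak contribution, and the density decays with distance from the interval's endpoints, which is the key difference from treating each event as a single point.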

13.
This research is motivated by the need for 3D GIS data models that allow for 3D spatial query, analysis and visualization of the subunits and internal network structure of ‘micro-spatial environments’ (the 3D spatial structure within buildings). It explores a new way of representing the topological relationships among 3D geographical features such as buildings and their internal partitions or subunits. The 3D topological data model is called the combinatorial data model (CDM). It is a logical data model that simplifies and abstracts the complex topological relationships among 3D features through a hierarchical network structure called the node-relation structure (NRS). This logical network structure is abstracted by using the property of Poincaré duality. It is modelled and presented in the paper using graph-theoretic formalisms. The model was implemented with real data for evaluating its effectiveness for performing 3D spatial queries and visualization.

14.
Moving objects produce trajectories, which are stored in databases by means of finite samples of time-stamped locations. When speed limitations at these sample points are also known, space–time prisms (also called beads) (Pfoser and Jensen 1999, Egenhofer 2003, Miller 2005) can be used to model the uncertainty about an object's location in between sample points. In this setting, a query of particular interest that has been studied in the literature of geographic information systems (GIS) is the alibi query. This boolean query asks whether two moving objects could have physically met. This amounts to deciding whether the chains of space–time prisms (also called necklaces of beads) of these objects intersect, a problem that can be reduced to deciding whether two space–time prisms intersect.

The alibi query can be seen as a constraint database query. In the constraint database model, spatial and spatiotemporal data are stored as boolean combinations of polynomial equalities and inequalities over the real numbers. The relational calculus augmented with polynomial constraints is the standard first-order query language for constraint databases, and the alibi query can be expressed in it. The evaluation of the alibi query in the constraint database model relies on the elimination of a block of three existential quantifiers. Implementations of general-purpose elimination algorithms, such as those provided by QEPCAD, Redlog, and Mathematica, are, for practical purposes, too slow in answering the alibi query for two specific space–time prisms. These software packages completely fail to answer the alibi query in the parametric case (i.e., when it is formulated in terms of parameters representing the sample points and speed constraints).

The main contribution of this article is an analytical solution to the parametric alibi query, which can be used to answer the alibi query on two specific space–time prisms in constant time (a matter of milliseconds in our implementation). It solves the alibi query for chains of space–time prisms in time proportional to the sum of the lengths of the chains. To back this claim up, we implemented our method in Mathematica alongside the traditional quantifier elimination method. The solutions we propose are based on geometric argumentation, and they illustrate the fact that some practical problems require creative solutions where, at least in theory, existing systems could provide a solution.
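The underlying geometry can be illustrated with a brute-force numeric check: a point (x, y, t) lies inside a space–time prism exactly when it is reachable from the first anchor and can still reach the second anchor under the speed limit, and a sampled alibi test looks for a point inside both prisms. This is the naive approximation that the paper's analytical solution replaces with an exact constant-time answer; the sampling resolution is an arbitrary assumption:

```python
import math

def in_prism(x, y, t, p1, t1, p2, t2, vmax):
    """True if (x, y, t) is inside the space-time prism anchored at
    p1 (time t1) and p2 (time t2) with maximum speed vmax."""
    if not (t1 <= t <= t2):
        return False
    d1 = math.hypot(x - p1[0], y - p1[1])   # distance travelled so far
    d2 = math.hypot(x - p2[0], y - p2[1])   # distance still to cover
    return d1 <= vmax * (t - t1) and d2 <= vmax * (t2 - t)

def alibi_sampled(prism_a, prism_b, n=30):
    """Approximate alibi query: sample a space-time grid and report True
    if some sampled point lies in both prisms (a possible meeting)."""
    p1a, t1a, p2a, t2a, va = prism_a
    p1b, t1b, p2b, t2b, vb = prism_b
    lo, hi = max(t1a, t1b), min(t2a, t2b)
    if lo > hi:                     # time intervals do not overlap
        return False
    xs = [p[0] for p in (p1a, p2a, p1b, p2b)]
    ys = [p[1] for p in (p1a, p2a, p1b, p2b)]
    for i in range(n + 1):
        t = lo + (hi - lo) * i / n
        for j in range(n + 1):
            x = min(xs) + (max(xs) - min(xs)) * j / n
            for k in range(n + 1):
                y = min(ys) + (max(ys) - min(ys)) * k / n
                if (in_prism(x, y, t, p1a, t1a, p2a, t2a, va) and
                        in_prism(x, y, t, p1b, t1b, p2b, t2b, vb)):
                    return True
    return False
```

The sampled test can miss thin intersections and costs O(n³) prism checks per prism pair, which is precisely why an analytical solution matters.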

15.
Abstract

Vector data storage has various advantages in a cartographic or geographical information system (GIS) environment, but lacks internal spatial relationships between individual features. Quadtree structures have been extensively used to store and access raster data. This paper shows how quadtree methods may be adapted for use in spatially indexing vector data. It demonstrates that a vector quadtree stored in floating-point representation overcomes the classical problem of data approximation in raster quadtrees. Examples of vector quadtrees applied to realistically sized data sets are given.
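A minimal floating-point quadtree for spatially indexing 2-D vector vertices conveys the core idea; this is a generic point-region quadtree sketch, not the paper's vector-quadtree structure:

```python
class QuadTree:
    """Point-region quadtree over floating-point coordinates."""

    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity      # max points before subdividing
        self.points = []
        self.children = None

    def insert(self, p):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= p[0] <= x1 and y0 <= p[1] <= y1):
            return False              # point outside this node
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append(p)
                return True
            self._split()
        return any(c.insert(p) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my, self.capacity),
                         QuadTree(mx, y0, x1, my, self.capacity),
                         QuadTree(x0, my, mx, y1, self.capacity),
                         QuadTree(mx, my, x1, y1, self.capacity)]
        for q in self.points:         # push stored points down a level
            any(c.insert(q) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        """Return all stored points inside the query rectangle."""
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return []                 # query box misses this node
        hits = [p for p in self.points
                if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1]
        if self.children:
            for c in self.children:
                hits.extend(c.query(qx0, qy0, qx1, qy1))
        return hits
```

Because coordinates stay in floating point, geometry is not snapped to a raster grid; the quadtree decomposition serves purely as a spatial index.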

16.
A common concern when working with health-related data is that national standard guidelines are designed to preserve individual statistical information, usually recorded as text or in a spreadsheet format (‘statistical confidentiality’), but lack appropriate rules for visualizing this information on maps (‘spatial confidentiality’). Privacy rules to protect spatial confidentiality become more and more important, as governmental agencies increasingly incorporate Geographic Information Systems (GIS) as a tool for collecting, storing, analysing, and disseminating spatial information. The purpose of this paper is to propose the first step of a general framework for presenting the location of confidential point data on maps using empirical perceptual research. The overall objective is to identify geographic masking methods that preserve both the confidentiality of individual locations, and at the same time the essential visual characteristics of the original point pattern.
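One widely used family of geographic masking methods is random perturbation with a minimum displacement, often called ‘donut’ masking. A sketch (the paper evaluates masks perceptually rather than prescribing this particular one, and the coordinate units are assumed planar):

```python
import math
import random

def donut_mask(x, y, r_min, r_max, rng=random):
    """Displace a confidential point by a random distance in
    [r_min, r_max] (planar map units) in a random direction.

    The minimum radius prevents the masked point from landing on or
    next to the true location; the maximum bounds the distortion of
    the overall point pattern."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.uniform(r_min, r_max)
    return x + r * math.cos(theta), y + r * math.sin(theta)
```

Whatever angle is drawn, the masked point always lies between r_min and r_max from the original, which is the property a re-identification attacker must overcome.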

17.
This paper evaluates errors and uncertainties in representing landscapes that arise from different data rasterization methods, spatial resolutions, and downscaled land-use change (LUC) scenarios. A vector LU dataset for Luxembourg (minimum mapping unit: 0.15 ha; year 2000) was used as the baseline reference map. This map was rasterized at three spatial resolutions using three cell class assignment methods, and the landscape composition and configuration of the resulting maps were compared. Four alternative scenarios of future LUC were also generated for the three resolutions using existing LUC scenarios and a statistical downscaling method, creating 37 maps of LUC for the year 2050. These maps were compared in terms of composition and spatial configuration using simple metrics of landscape fragmentation and an analysis of variance (ANOVA). Differences in landscape composition and configuration between the three cell class assignment methods and the three spatial resolutions were found to be at least as large as the differences between the LUC scenarios, in spite of the large LUC projected by the scenarios. This demonstrates the importance of the rasterization method and the level of aggregation as contributions to uncertainty when developing future LUC scenarios and when analysing landscape structure in ecological studies.
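The cell class assignment step can be sketched for two common rasterization rules, majority and central-point; the paper compares three such methods, and these two (with hypothetical class labels) are purely illustrative:

```python
from collections import Counter

def assign_cell_class(subcells, method="majority"):
    """Assign a land-use class to one coarse raster cell from the
    classes of the fine sub-cells it covers.

    'majority' keeps the most frequent class in the cell;
    'central' keeps the class of the middle sub-cell, a stand-in
    for the class found under the cell centre."""
    if method == "majority":
        return Counter(subcells).most_common(1)[0][0]
    if method == "central":
        return subcells[len(subcells) // 2]
    raise ValueError(f"unknown method: {method}")
```

The two rules can disagree for the same cell (e.g. a forest-dominated cell whose centre falls on a road), which is exactly the source of the composition and configuration differences the paper measures.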

18.
The role of remote sensing in phenological studies is increasingly regarded as key to understanding large-area seasonal phenomena. This paper describes the application of Moderate Resolution Imaging Spectroradiometer (MODIS) time series data to forest phenological patterns. The forest phenological phases of Northeast China (NE China) and their spatial characteristics were inferred using 1-km 10-day MODIS normalized difference vegetation index (NDVI) datasets for 2002. A threshold-based method was used to estimate three key forest phenological variables: the start of the growing season (SOS), the end of the growing season (EOS) and the growing season length (GSL). The spatial patterns of the forest phenological variables of NE China were then mapped and analyzed. The derived phenological variables were validated against field-observation data from published papers in the same study area. Results indicate that the forest phenological phases from MODIS data are comparable with the observed data. As the derived forest phenological pattern is related to forest type distribution, it is helpful for discriminating between forest types.
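The threshold-based extraction of SOS, EOS and GSL from a yearly NDVI series can be sketched as follows; the threshold value and the first/last-crossing convention are assumptions, not necessarily the paper's exact rule:

```python
def growing_season(ndvi, threshold):
    """Estimate SOS, EOS and GSL from a yearly NDVI time series
    (one value per 10-day composite).

    SOS = index of the first composite at or above the threshold,
    EOS = index of the last composite at or above the threshold,
    GSL = number of composites between them, inclusive.
    Indices are composite numbers, not calendar days."""
    sos = next((i for i, v in enumerate(ndvi) if v >= threshold), None)
    eos = next((i for i in range(len(ndvi) - 1, -1, -1)
                if ndvi[i] >= threshold), None)
    if sos is None or eos is None:
        return None, None, 0          # threshold never reached
    return sos, eos, eos - sos + 1
```

Multiplying the composite indices by the 10-day compositing period converts them to approximate day-of-year values.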

19.
Estimation of net primary productivity in China using remote sensing data
1 Introduction

As a major part of the terrestrial ecosystem, vegetation plays an important role in the energy, matter and momentum exchange between the land surface and the atmosphere. Through the process of photosynthesis, land plants assimilate carbon from the atmosphere and incorporate it into dry matter, while part of the carbon is emitted into the atmosphere again through plant respiration. The difference between photosynthesis and respiration is called net primary productivity (NPP), which is important in the global carbon…
