Similar Articles
20 similar articles found.
1.
In recent years, the evolution and improvement of LiDAR (Light Detection and Ranging) hardware has increased the quality and quantity of the gathered data, making the storage, processing and management thereof particularly challenging. In this work we present a novel, multi-resolution, out-of-core technique, used for web-based visualization and implemented through a non-redundant data point organization method, which we call Hierarchically Layered Tiles (HLT), and a tree-like structure called Tile Grid Partitioning Tree (TGPT). The design of these elements is mainly focused on attaining very low levels of memory consumption, disk storage usage and network traffic on both the client and server side, while delivering high-performance interactive visualization of massive LiDAR point clouds (up to 28 billion points) in multiplatform environments (mobile devices or desktop computers). HLT and TGPT were incorporated and tested in ViLMA (Visualization for LiDAR data using a Multi-resolution Approach), our own web-based visualization software specially designed to work with massive LiDAR point clouds.

2.
ABSTRACT

A wide range of environmental process simulations would benefit from the development of GIS able to cope with the additional dimensions of vertical space and time, and having extended spatial modelling facilities. The results of a project in the first of these areas, namely on the development of techniques for handling four-dimensional (4-D) data, are described. The key topics of data models, visualization and interpolation have been studied. Problems include the large size of some 4-D data sets, the sparseness of sampling in some dimensions compared with others, and the need to combine data sets from different sensors which may be of different dimensionalities and scales. The gridded volume data common to most environmental models are stored in a 4-D bintree form in which all dimensions are treated identically. Simple 4-D objects may also be stored, and links are provided between volume and object databases. The techniques have been implemented within a computational testbed allowing parallel processing to cope with large data sizes.

3.
Abstract

Geographical Information Systems (GIS) are becoming basic tools for a wide variety of earth science and land-use applications. This article presents linear programming (LP) as a promising tool for spatial modelling within a GIS. Although LP is not properly a spatial technique, it may be used to optimize spatial distributions or to guide the integration of variables. An example of the use of LP in land-use planning is described, with minimizing rural unemployment as the main goal. Technical, financial and ecological constraints are established to show the influence of several limitations on achieving the optimal solution. LP makes it possible to achieve optimal land-use, where the objective is maximized and the constraints respected. LP can also be used to simulate different planning scenarios, by modifying both the objective function coefficients and the constraints. The integration of LP and GIS is presented in two phases: (i) acquisition of attribute data for the LP model, and (ii) modelling and mapping the results.

4.
Physical Geography, 2013, 34(6): 517-527
Short-term monthly mean temperature (Tm) and short-term daily mean temperature (Td), rather than long-term monthly and daily mean temperatures (T̄m and T̄d), are preferred for some ecosystem studies such as carbon sources and sinks, pine beetle mortality, and snow melting. The recent progress in modeling Tm and Td (building on previous work on T̄m), supported by climatologically aided interpolation (CAI), is reported over the mountainous Yellowstone National Park. At the spatial scale of a 30 m digital elevation model (DEM), the slope, aspect, and shadows cast by surrounding topography, which could not be well captured by a very coarse DEM, could be taken into account. Data from 12 months (Jan-Dec 2008) and 12 dates (25 Jan-Dec 2008) were used to demonstrate the approach. Inverse distance weighting (IDW) interpolation of limited temperature anomalies was adopted to represent the deviations from normals. T̄m, as a preexisting climatology surface, was added to the deviations in order to model Tm. Linear temporal interpolation of adjacent T̄m was used to create a climatology surface, which was then added to the deviations in order to model Td. Results show the mean absolute errors (MAEs) for Tm ranged from 0.75 °C to 1.78 °C, while the MAEs for Td ranged from 1.14 °C to 2.02 °C. The four factors of elevation, seasonal change of lapse rate, temperature difference caused by variation in solar radiation, and the preexisting climatology surface were comprehensively considered in the CAI approach.
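The CAI step described above (interpolate station anomalies, then add them back to a gridded climatology surface) can be sketched as follows. The function names (`idw`, `cai_monthly`) are illustrative assumptions, and the plain unclipped IDW is a simplification of the paper's method:

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0, eps=1e-12):
    """Inverse distance weighting of point observations onto grid locations."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def cai_monthly(climatology, station_xy, station_obs, station_clim, grid_xy):
    """Climatologically aided interpolation: interpolate the station anomalies
    (observation minus long-term normal) with IDW, then add the resulting
    deviation field back onto the gridded climatology surface."""
    anomalies = station_obs - station_clim
    deviation_field = idw(station_xy, anomalies, grid_xy)
    return climatology + deviation_field
```

Because only anomalies are interpolated, fine-scale structure (elevation, slope, aspect effects) enters through the climatology surface rather than through the sparse station network.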

5.
Abstract

Kriging is an optimal method of spatial interpolation that produces an error for each interpolated value. Block kriging is a form of kriging that computes averaged estimates over blocks (areas or volumes) within the interpolation space. If this space is sampled sparsely, and divided into blocks of a constant size, a variable estimation error is obtained for each block, with blocks near to sample points having smaller errors than blocks farther away. An alternative strategy for sparsely sampled spaces is to vary the sizes of blocks in such a way that a block's interpolated value is just sufficiently different from that of an adjacent block given the errors on both blocks. This has the advantage of increasing spatial resolution in many regions, and conversely reducing it in others where maintaining a constant block size is unjustified (hence achieving data compression). Such a variable subdivision of space can be achieved by regular recursive decomposition using a hierarchical data structure. An implementation of this alternative strategy employing a split-and-merge algorithm operating on a hierarchical data structure is discussed. The technique is illustrated using an oceanographic example involving the interpolation of satellite sea surface temperature data. Consideration is given to the problem of error propagation when combining variable resolution interpolated fields in GIS modelling operations.
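The split step of such a variable-block decomposition can be illustrated with a simple recursive sketch: a block is subdivided only when its internal value range exceeds what the estimation error can justify. Here `error` is a single user-supplied stand-in for the per-block kriging error, and `quadtree_blocks` is a hypothetical helper, not the article's algorithm:

```python
import numpy as np

def quadtree_blocks(field, error, x0=0, y0=0, size=None):
    """Recursively subdivide a square field (stand-in for kriged estimates)
    into blocks whose internal value range is within the estimation error.
    Returns a list of (x, y, size, mean) tuples; a block is kept whole when
    further subdivision is not justified by the error."""
    if size is None:
        size = field.shape[0]
    block = field[y0:y0 + size, x0:x0 + size]
    if size == 1 or block.max() - block.min() <= 2 * error:
        return [(x0, y0, size, float(block.mean()))]
    h = size // 2
    blocks = []
    for dy in (0, h):
        for dx in (0, h):
            blocks += quadtree_blocks(field, error, x0 + dx, y0 + dy, h)
    return blocks
```

Uniform regions collapse into a few large blocks (the data-compression effect mentioned above), while regions of rapid change are resolved down to unit blocks.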

6.
We present an interpolation model that describes Holocene groundwater level rise and the creation of accommodation space in 3D in the Rhine‐Meuse delta, the Netherlands. The model area (ca. 12 400 km²) covers two palaeovalleys of Late Pleistocene age (each 30 km wide) and the overlying Holocene deposits of the Rhine‐Meuse delta, the Holland coastal plain, and the Zuiderzee former lagoon. Water table rise is modelled from 10 800 to 1000 cal. BP, making use of age‐depth relations based on 384 basal peat index points, and producing output in the form of stacked palaeo groundwater surfaces, groundwater age‐depth curves, and voxel sets. These products make it possible to resolve (i) regional change and variations of inland water table slopes, (ii) spatial differences in the timing and pacing of transgression, and (iii) the interplay of coastal, fluvial and subsidence controls on the provision of accommodation space. The interpolation model is a multi‐parameter trend function, to which a 3D‐kriging procedure of the residuals is added. This split design deploys a generic approach for modelling the provision of accommodation space in deltas and coastal lowlands, aiming to work both in areas of intermediate data availability and in the most data‐rich environments. Major provision of accommodation space occurred from 8500 cal. BP onwards, but a different evolution occurred in each of the two palaeovalleys. In the northern valley, creation of accommodation space began to stall at 7500 cal. BP, while in the southern valley provision of new accommodation space in considerable quantities continued longer. The latter is due to the floodplain gradient that was maintained by the Rhine, which distinguishes the fluvial deltaic environment from the rest of the back‐barrier coastal plain. The interpolation results allow advanced mapping and investigation of apparent spatial differences in Holocene aggradation in larger coastal sedimentary systems. Furthermore, they provide a means to generate first‐order age information with centennial precision for 3D geological subsurface models of Holocene deltas and valley fills. As such, the interpolation is of use in studies into past and present land subsidence and into lowland sedimentation.

7.
8.

The tree-limit altitudes of Norway spruce (Picea abies) and Scots pine (Pinus sylvestris) from 180 sites (within an area of 95 km × 165 km) in the southern Scandes were correlated with the geographical variables latitude, longitude and distance to the sea. The results were compared with a similar investigation of the tree-limit of mountain birch (Betula pubescens Ehrh. ssp. tortuosa (Ledeb.) Nyman) in the same area. The three tree-limit altitudes showed good negative correlation with latitude, poor correlation with longitude and good positive correlation with the distance to the sea, suggesting that on a regional scale the altitudes are controlled by macroclimate. At some sites, local topoclimatic features, some of which were partially aspect-dependent, may cause deviations in the regional pattern of tree-limit altitude that is set primarily by summer temperature. Tree-limit responses to potential future climate warming will probably differ substantially in magnitude from site to site in relation to local topography and associated ecological constraints.

9.
ABSTRACT

Missing data is a common problem in the analysis of geospatial information. Existing methods introduce spatiotemporal dependencies to reduce imputation errors yet ignore ease of use in practice. Classical interpolation models are easy to build and apply; however, their imputation accuracy is limited by their inability to capture the spatiotemporal characteristics of geospatial data. Consequently, a lightweight ensemble model was constructed by modelling the spatiotemporal dependencies in a classical interpolation model. Temporally, average correlation coefficients were introduced into a simple exponential smoothing model to automatically select the time window, ensuring that the sample data had the strongest correlation with the missing data. Spatially, Gaussian equivalent and correlation distances were introduced into an inverse distance weighting model to assign weights to each spatial neighbor and sufficiently reflect changes in the spatiotemporal pattern. Finally, the temporal and spatial estimates of the missing values were aggregated into the final results with an extreme learning machine. Compared to existing models, the proposed model achieves higher imputation accuracy, lowering the mean absolute error by 10.93% to 52.48% on the road network dataset and by 23.35% to 72.18% on the air quality station dataset, and exhibits robust performance under spatiotemporal mutations.
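The two component estimators can be sketched simply. Both function names are hypothetical, the Gaussian kernel is a simplified stand-in for the paper's Gaussian equivalent and correlation distances, and in the paper the two estimates are aggregated by an extreme learning machine rather than a fixed rule:

```python
import numpy as np

def gaussian_idw(d, neighbor_values, bandwidth=1.0):
    """Spatial estimate for one missing value: Gaussian-kernel weights on the
    distances d to observed spatial neighbors."""
    w = np.exp(-(d / bandwidth) ** 2)
    return float((w * neighbor_values).sum() / w.sum())

def exp_smoothing(series, alpha=0.5):
    """Temporal estimate: simple exponential smoothing over the most recent
    observations at the same location (oldest first)."""
    est = series[0]
    for x in series[1:]:
        est = alpha * x + (1 - alpha) * est
    return est
```

A minimal aggregation would average the two estimates; the learned aggregation is what lets the ensemble adapt to spatiotemporal mutations.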

10.
11.

This paper describes the application of an unsupervised clustering method, fuzzy c-means (FCM), to generate mineral prospectivity models for Cu±Au±Fe mineralization in the Feizabad District of NE Iran. Various evidence layers relevant to indicators or potential controls on mineralization, including geochemical data, geological–structural maps and remote sensing data, were used. The FCM clustering approach was employed to reduce the dimensions of nine key attribute vectors derived from different exploration criteria. Multifractal inverse distance weighting interpolation coupled with factor analysis was used to generate enhanced multi-element geochemical signatures of areas with Cu±Au±Fe mineralization. The GIS-based fuzzy membership function MSLarge was used to transform values of the different evidence layers, including geological–structural controls as well as alteration, into a [0–1] range. Four FCM-based validation indices, including Bezdek's partition coefficient (VPc) and partition entropy (VPe) indices, the Fukuyama and Sugeno (VFS) index and the Xie and Beni (VXB) index, were employed to derive the optimum number of clusters and subsequently generate prospectivity maps. Normalized density indices were applied for quantitative evaluation of the classes of the FCM prospectivity maps. The quantitative evaluation of the results demonstrates that the higher favorability classes derived from VFS and VXB (Nd = 9.19) appear more reliable than those derived from VPc and VPe (Nd = 6.12) in detecting existing mineral deposits and defining new zones of potential Cu±Au±Fe mineralization in the study area.

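The standard FCM iteration underlying this kind of study, plus Bezdek's partition coefficient (one of the validity indices mentioned), can be sketched as follows; this is the textbook algorithm, not the authors' specific pipeline, and the helper names are illustrative:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: alternate center and membership updates.
    Returns (centers, U) where U[i, k] is the degree to which sample i
    belongs to cluster k (rows of U sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m                                   # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))              # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def partition_coefficient(U):
    """Bezdek's partition coefficient VPc: 1 for crisp partitions, 1/c for
    completely fuzzy ones; higher values indicate better-defined clusters."""
    return float((U ** 2).sum() / U.shape[0])
```

Running FCM for a range of cluster counts and comparing such indices is how the optimum number of clusters is typically selected.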

12.
Abstract

Resource models integrating disparate nominal or class grid-cell data can be implemented using spatial filters. Most modelling procedures do not adequately handle the noise created during the process of merging and integrating multiple grid-cell data sets. Data integration is best accomplished in an environment where ready access to statistical and database management systems supports the reclassification of noise grid-cells. These systems provide access to the functionality and information needed to design the spatial filter and to evaluate the results of both the spatial filter and the resource model.

13.

Substantial changes in a core idea of geography, integration, have occurred since Alexander von Humboldt published Kosmos (1845-1862). These changes are part of a larger shift in Western civilization towards mechanistic reasoning. This shift led to the strengthening of system-based analysis, central to the development of geographic information systems (GIS). The duality of holism and the systems approach has led to an apparent contradiction in geography. R. Hartshorne described this contradiction in The Nature of Geography but, as Alfred Hettner and Emil Wisotzki had before him, moved to partial systems as the core concept of geographic integration. Hartshorne's concept of vertical integration is the antecedent of the ubiquitous GIS layer model. The reduction of geographic relationships and processes to mechanistic components (layers) aids the systematic approach, but may lessen geographic understanding of a place's interrelationships. Although the partiality of the systems approach was already acknowledged by Finch and Hartshorne in the 1930s, the tension between holistic and systems approaches in geography remains. Holism and system-based approaches are in fact complementary methods for developing geographic understanding. Using holistic approaches to understand geographic phenomena, before we teleologically (following a purpose) analyze phenomena as a system, extends GIS to include the broader interrelationships of geography in specific locations.

14.
ABSTRACT

Recently developed urban air quality sensor networks are used to monitor air pollutant concentrations at a fine spatial and temporal resolution. The measurements are, however, limited to point support. To obtain areal coverage in space and time, interpolation is required. A spatio-temporal regression kriging approach was applied to predict nitrogen dioxide (NO2) concentrations at unobserved space-time locations in the city of Eindhoven, the Netherlands. Prediction maps were created at 25 m spatial resolution and hourly temporal resolution. In regression kriging, the trend is modelled separately from the autocorrelation in the residuals. The trend part of the model, consisting of a set of spatial and temporal covariates, was able to explain 49.2% of the spatio-temporal variability in NO2 concentrations in Eindhoven in November 2016. Spatio-temporal autocorrelation in the residuals was modelled by fitting a sum-metric spatio-temporal variogram model, adding smoothness to the prediction maps. The accuracy of the predictions was assessed using leave-one-out cross-validation, resulting in a Root Mean Square Error of 9.91 μg m−3, a Mean Error of −0.03 μg m−3 and a Mean Absolute Error of 7.29 μg m−3. The method allows for easy prediction and visualization of air pollutant concentrations and can be extended to a near real-time procedure.
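The two-stage structure of regression kriging (trend from covariates, plus interpolated residuals) can be sketched as follows. The function name is hypothetical, and plain IDW of the residuals stands in for the fitted sum-metric variogram model used in the paper:

```python
import numpy as np

def regression_kriging_predict(X_obs, y_obs, xy_obs, X_new, xy_new, power=2.0):
    """Regression-kriging sketch: fit a linear trend on covariates by least
    squares, then add an interpolation of the residuals at the prediction
    locations (IDW here as a stand-in for variogram-based kriging)."""
    A = np.column_stack([np.ones(len(X_obs)), X_obs])
    beta, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
    resid = y_obs - A @ beta
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + 1e-12) ** power
    resid_new = (w * resid).sum(axis=1) / w.sum(axis=1)
    trend_new = np.column_stack([np.ones(len(X_new)), X_new]) @ beta
    return trend_new + resid_new
```

Separating trend and residual is what lets the covariates explain the large-scale variability while the residual model adds local smoothness.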

15.
In the integration of road maps modeled as road vector data, the main task is matching pairs of objects that represent, in different maps, the same segment of a real-world road. In an ad hoc integration, the matching is done for a specific need and, thus, is performed in real time, where only limited preprocessing is possible. Usually, ad hoc integration is performed as part of some interaction with a user and, hence, the matching algorithm is required to complete its task in time that is short enough for human users to provide feedback to the application, that is, in no more than a few seconds. Such interaction is typical of services on the World Wide Web and of applications in car-navigation systems or handheld devices.

Several algorithms were proposed in the past for matching road vector data; however, these algorithms are not efficient enough for ad hoc integration. This article presents algorithms for ad hoc integration of maps in which roads are represented as polylines. The main novelty of these algorithms is in using only the locations of the endpoints of the polylines rather than trying to match whole lines. The efficiency of the algorithms is shown both analytically and experimentally. In particular, these algorithms do not require the existence of a spatial index, and they are more efficient than an alternative approach based on using a grid index. Extensive experiments using various maps of three different cities show that our approach to matching road networks is efficient and accurate (i.e., it provides high recall and precision).
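The core idea of matching only polyline endpoints can be illustrated with a deliberately naive O(n·m) sketch; the article's algorithms achieve the same matching far more efficiently and without a spatial index, and the names and tolerance rule here are illustrative assumptions:

```python
import math

def endpoints(polyline):
    """A polyline is a list of (x, y) vertices; only its two endpoints
    participate in matching."""
    return polyline[0], polyline[-1]

def match_roads(map_a, map_b, tol=1.0):
    """Match each polyline of map_a to the first polyline of map_b whose two
    endpoints both lie within tol of its own, in either orientation."""
    def close(p, q):
        return math.dist(p, q) <= tol
    matches = {}
    for i, a in enumerate(map_a):
        a0, a1 = endpoints(a)
        for j, b in enumerate(map_b):
            b0, b1 = endpoints(b)
            if (close(a0, b0) and close(a1, b1)) or (close(a0, b1) and close(a1, b0)):
                matches[i] = j
                break
    return matches
```

Ignoring interior vertices is what makes endpoint-based matching cheap: two maps may digitize the same road with very different vertex sequences, yet its endpoints stay nearly fixed.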


16.
Abstract

This paper describes a new joint Dutch research initiative, ‘GIS-cartography’, combining the research efforts of the cartographers of Utrecht University, Delft University of Technology and the International Institute for Aerospace Survey and Earth Sciences (ITC) in Enschede. The research initiative focuses on the quantification and visualization of data quality, which will be placed in the context of providing automated visual decision support in specific map use strategies. As these map use strategies can only be performed if the relevant cartographic images can be created, the project also includes studies of physical access to the data, of user interfaces, and of the provision of sufficient support to allow the user to understand the data and derive sensible conclusions from it. Before modules that automatically visualize data quality can be implemented, data documentation, standardization and integration have to be effected; these issues are therefore also covered.

17.
ABSTRACT

This paper examines the potential of using GIS (Geographical Information Systems) in the field for environmental characterization, modelling and decision support particularly in isolated areas where data collection is difficult. Observations are based on experiences gained during two Anglo-Russian expeditions to the Altai Mountains of south central Siberia aimed at evaluating proposals for a new national park in the Katunsky Ridge area of the Belukha Massif. The use of GIS together with GPS (Global Positioning Systems) for primary data collection and verification/update of existing data is described and the use of field-based systems for on-the-spot environmental modelling and decision support is evaluated.

18.
Abstract. Three-dimensional imaging is a powerful technique for the visualization and interpretation of environmental data. The success of the process is linked to careful, technically justifiable selection of variable parameters during the gridding and imaging process. The impacts of various approaches to gridding, and of possible parameter settings, on the final image and volume calculations were examined by generating alternative images for a very well characterized contaminated site in layered coastal plain sediments. To properly image scattered data collected at close intervals in wells in layered geological media, a higher grid density in the z direction is required, along with a weighting factor to emphasize the influence of data in the x and y directions. For steeply varying contaminant concentration data, the best results were obtained by gridding the log of the property value; an anti-log transformation is then carried out to restore property values to the correct value before the visualization file is prepared. The techniques and recommendations made in this article were designed for modelling contaminant values with very steep gradients dispersed in strongly anisotropic media. These recommendations may not apply directly to other sites, but the process of selecting parameters should be similar.
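The log-transform gridding recommendation above is a simple wrapper around any gridding routine; this sketch assumes a hypothetical `grid_log_property` name and takes the gridding function as a parameter:

```python
import numpy as np

def grid_log_property(values, interpolate):
    """Grid steeply varying concentration data in log space, then antilog:
    `interpolate` is any gridding function mapping an array of sample
    values to an array of gridded values. Values must be positive."""
    return 10.0 ** interpolate(np.log10(values))
```

Interpolating in log space keeps estimates positive and prevents a few extreme concentrations from dominating the gridded surface.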

19.
ABSTRACT

Spatial interpolation is a traditional geostatistical operation that aims at predicting the attribute values of unobserved locations given a sample of data defined on point supports. However, the continuity and heterogeneity underlying spatial data are too complex to be approximated by classic statistical models. Deep learning models, especially the idea of conditional generative adversarial networks (CGANs), provide us with a perspective for formalizing spatial interpolation as a conditional generative task. In this article, we design a novel deep learning architecture named conditional encoder-decoder generative adversarial neural networks (CEDGANs) for spatial interpolation, therein combining the encoder-decoder structure with adversarial learning to capture deep representations of sampled spatial data and their interactions with local structural patterns. A case study on elevations in China demonstrates the ability of our model to achieve outstanding interpolation results compared to benchmark methods. Further experiments uncover the learned spatial knowledge in the model’s hidden layers and test the potential to generalize our adversarial interpolation idea across domains. This work is an endeavor to investigate deep spatial knowledge using artificial intelligence. The proposed model can benefit practical scenarios and enlighten future research in various geographical applications related to spatial prediction.

20.
Abstract

Mapping by sampling and prediction of local and regional values of two-dimensional surfaces is a frequent, complex task in geographical information systems. This article describes a method for the approximation of two-dimensional surfaces by optimizing sample size, arrangement and prediction accuracy simultaneously. First, a grid of an ancillary data set is approximated by a quadtree to determine a predefined number of homogeneous mapping units. This approximation is optimal in the sense of minimizing Kullback-divergence between the quadtree and the grid of ancillary data. Then, samples are taken from each mapping unit. The performance of this sampling has been tested against other sampling strategies (regular and random) and found to be superior in reconstructing the grid using three interpolation techniques (inverse squared Euclidean distance, kriging, and Thiessen-polygonization). Finally, the discrepancy between the ancillary grid and the surface to be mapped is modelled by different levels and spatial structures of noise. Conceptually this method is advantageous in cases when sampling strata cannot be well defined a priori and the spatial structure of the phenomenon to be mapped is not known, but ancillary information (e.g., remotely-sensed data), corresponding to its spatial pattern, is available.  
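A quadtree partition into a predefined number of mapping units can be sketched greedily: repeatedly split the most heterogeneous unit until enough units exist. Block variance here is a simple stand-in for the Kullback-divergence criterion of the article, and `mapping_units` is a hypothetical helper. Because each split turns one unit into four, the final count is the smallest value of the form 1 + 3k that reaches `n_units`:

```python
import heapq
import numpy as np

def mapping_units(grid, n_units):
    """Greedy quadtree partition of a square ancillary grid: repeatedly split
    the unit with the largest size-weighted internal variance. Returns a list
    of (x, y, size) unit corners/sizes; one sample would then be drawn from
    each unit."""
    def heterogeneity(x0, y0, s):
        block = grid[y0:y0 + s, x0:x0 + s]
        return float(block.var()) * s * s
    units = [(-heterogeneity(0, 0, grid.shape[0]), 0, 0, grid.shape[0])]
    heapq.heapify(units)
    while len(units) < n_units:
        neg_h, x0, y0, s = heapq.heappop(units)
        if neg_h == 0.0 or s == 1:          # already homogeneous: stop early
            heapq.heappush(units, (neg_h, x0, y0, s))
            break
        h = s // 2
        for dy in (0, h):
            for dx in (0, h):
                heapq.heappush(
                    units,
                    (-heterogeneity(x0 + dx, y0 + dy, h), x0 + dx, y0 + dy, h))
    return [(x0, y0, s) for _, x0, y0, s in units]
```

Sampling one point per unit then concentrates observations where the ancillary data are heterogeneous, which is the advantage over regular and random designs reported above.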
