Similar Literature
20 similar documents found.
1.
Abstract

Error and uncertainty in spatial databases have gained considerable attention in recent years. The concern is that, as in other computer applications and, indeed, all analyses, poor-quality input data will yield even worse output. Various methods for the analysis of uncertainty have been developed, but none has been shown to be directly applicable to an actual geographical information system application in the area of natural resources. In spatial data on natural resources in general, and in soils data in particular, a major cause of error is the inclusion of unmapped units within areas delineated on the map as uniform. In this paper, two alternative algorithms for simulating inclusions in categorical natural resource maps are detailed. Their usefulness is shown through simplified Monte Carlo testing that evaluates the accuracy of agricultural land valuation based on land use and soil information. Using two test areas, it is possible to show that errors of as much as 6 per cent may result in the process of land valuation, with simulated valuations both above and below the actual values. Thus, although an actual monetary cost of the error term is estimated here, it is not found to be large.
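The abstract does not reproduce the two simulation algorithms, so the Python sketch below only illustrates the general Monte Carlo idea under stated assumptions: a gridded categorical soil map, a single hypothetical inclusion rate applied uniformly within each mapped unit, and a simple per-class valuation table. All names and parameter values are hypothetical; this is not the paper's method.

```python
import numpy as np

def simulate_inclusions(soil_map, inclusion_rate, candidate_classes, rng):
    """Randomly relabel a fraction of cells in each mapped unit to a
    different (inclusion) class -- a hypothetical simplification."""
    simulated = soil_map.copy()
    for unit in np.unique(soil_map):
        cells = np.flatnonzero(soil_map == unit)
        n_incl = int(round(inclusion_rate * cells.size))
        chosen = rng.choice(cells, size=n_incl, replace=False)
        others = [c for c in candidate_classes if c != unit]
        simulated.flat[chosen] = rng.choice(others, size=n_incl)
    return simulated

def monte_carlo_valuation(soil_map, value_per_class, inclusion_rate=0.1,
                          n_runs=500, seed=0):
    """Distribution of total land value under simulated inclusions."""
    rng = np.random.default_rng(seed)
    classes = list(value_per_class)
    totals = []
    for _ in range(n_runs):
        sim = simulate_inclusions(soil_map, inclusion_rate, classes, rng)
        totals.append(sum(value_per_class[c] for c in sim.ravel()))
    return np.array(totals)
```

Repeating the valuation over many simulated maps yields a distribution of total values from which a percentage error comparable to the 6 per cent figure above could be derived.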

2.
This paper presents a new approach to the retrieval of suspended sediment concentration that uses both the water-leaving radiance from remote sensing data and the grain size of the suspended sediment. A principal component model and a neural network model based on those two parameters were constructed. The results indicate that the testing errors of the models using the two parameters are 0.256 and 0.244, while the errors using only water-leaving radiance are 0.384 and 0.390. The stability of the models with the grain size parameter is also better than that of the models without it. This research shows that introducing the grain size parameter into suspended sediment concentration retrieval models is necessary to improve their retrieval precision.

3.
ABSTRACT

Present-day indoor navigation systems are often not well adapted to the specific needs and requirements of their users. This research aims at improving indoor navigation systems by providing navigation support that is cognitively closer to user preferences and behaviour. More specifically, the focus is on the implementation of an accurate turn calculation method in a turn minimization algorithm, aiming to lower the complexity of routes and route instructions. This newly introduced perception-based turn calculation procedure is based on a direct door-to-door walking pattern and, in contrast to previous algorithms, is independent of the underlying indoor network type. It takes into account the effects of the geometry of indoor space on human movement. To evaluate its functioning, both the traditional algorithm and the proposed perception-based algorithm are applied in the fewest turns path algorithm. It is demonstrated that the proposed algorithm accurately calculates turns in alignment with people's perception. The implementation of the calculation algorithm in the fewest turns path algorithm also allows future applications in indoor simplest path algorithms, and overall contributes to cognitively richer indoor navigation systems.

4.
The Journal of Geography, 2012, 111(3): 167–172
Abstract

The SYMAP program for producing line-printed maps on the computer is considered with respect to problems the new user is likely to have. Two groups of electives or options in the F-MAP package are discussed in detail: those that control map size and orientation and those that permit manipulation of the interpolation algorithm. Some classroom examples are suggested that emphasize the heuristic nature of the SYMAP program.

5.
ABSTRACT

Missing data is a common problem in the analysis of geospatial information. Existing methods introduce spatiotemporal dependencies to reduce imputation errors yet ignore ease of use in practice. Classical interpolation models are easy to build and apply; however, their imputation accuracy is limited by their inability to capture the spatiotemporal characteristics of geospatial data. Consequently, a lightweight ensemble model was constructed by modelling the spatiotemporal dependencies in a classical interpolation model. Temporally, average correlation coefficients were introduced into a simple exponential smoothing model to automatically select the time window, ensuring that the sample data had the strongest correlation with the missing data. Spatially, Gaussian equivalent and correlation distances were introduced into an inverse distance-weighting model to assign weights to each spatial neighbor and to reflect changes in the spatiotemporal pattern. Finally, the temporal and spatial estimates of the missing values were aggregated into the final results with an extreme learning machine. Compared to existing models, the proposed model achieves higher imputation accuracy, lowering the mean absolute error by 10.93% to 52.48% on the road network dataset and by 23.35% to 72.18% on the air quality station dataset, and it exhibits robust performance under spatiotemporal mutations.
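The exact formulation of the Gaussian equivalent and correlation distances, and the extreme-learning-machine aggregation, are not given in the abstract. The sketch below shows only a generic inverse-distance-weighted spatial estimate with an optional Gaussian damping term as a rough stand-in; all function names and parameters are hypothetical.

```python
import numpy as np

def idw_impute(target_xy, neighbor_xy, neighbor_values, power=2.0, sigma=None):
    """Generic inverse-distance-weighted estimate of a missing value.
    If sigma is given, weights are additionally damped by a Gaussian
    kernel (a rough stand-in for a 'Gaussian equivalent distance')."""
    d = np.linalg.norm(neighbor_xy - target_xy, axis=1)
    d = np.maximum(d, 1e-9)                      # avoid division by zero
    w = 1.0 / d ** power
    if sigma is not None:
        w *= np.exp(-(d ** 2) / (2 * sigma ** 2))
    return float(np.sum(w * neighbor_values) / np.sum(w))

# usage: estimate a missing reading from three nearby stations
xy = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
vals = np.array([10.0, 12.0, 15.0])
print(idw_impute(np.array([1.0, 1.0]), xy, vals, sigma=1.5))
```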

6.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to the analysis of large point data sets, which are common in this big data era. This article presents a graphics processing unit (GPU)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations effectively improved performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores for adaptive KDE, the GPU-enabled algorithm accelerated point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as the size of the data sets increases. Given the significant acceleration brought by the GPU-enabled adaptive KDE algorithm, point pattern analysis with the adaptive KDE approach on large point data sets can be performed efficiently. Point pattern analysis on spatial big data, computationally prohibitive with the sequential algorithm, can be conducted routinely with the GPU-accelerated algorithm. The GPU-accelerated adaptive KDE approach contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
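The GPU implementation is not described in enough detail to reproduce here. The following is a plain NumPy/SciPy sketch of the adaptive-bandwidth idea itself (Abramson/Silverman-style local bandwidths derived from a fixed-bandwidth pilot estimate), intended only to show why bandwidth determination dominates the cost: it requires pairwise distances among all points.

```python
import numpy as np
from scipy.spatial.distance import cdist

def adaptive_kde(points, grid, h0, alpha=0.5):
    """CPU sketch of adaptive KDE with Abramson-style local bandwidths;
    not the GPU-optimised algorithm described above."""
    # pilot density at each data point with a fixed bandwidth h0
    d_pp = cdist(points, points)
    pilot = np.mean(np.exp(-0.5 * (d_pp / h0) ** 2), axis=1) / (2 * np.pi * h0 ** 2)
    # local bandwidth: shrink where the pilot density is high
    g = np.exp(np.mean(np.log(pilot)))           # geometric mean of pilot densities
    h_i = h0 * (pilot / g) ** (-alpha)
    # evaluate the adaptive estimate on the grid
    d_gp = cdist(grid, points)                   # (n_grid, n_points)
    k = np.exp(-0.5 * (d_gp / h_i) ** 2) / (2 * np.pi * h_i ** 2)
    return k.mean(axis=1)
```

The pilot step and the per-point bandwidths make the computation quadratic in the number of points, which is the part that the article's optimizations and GPU parallelism target.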

7.
Abstract

Utilising the powerful resources of a parallel computer has become a technique available to the GIS software engineer for increasing the performance of complex software systems. This paper discusses the effectiveness of both automatic and manual parallelising techniques, with a view to assessing whether the inherently sequential structure of GIS software is a detrimental factor inhibiting the use of such techniques. With the aid of the Scan Line Fill (SLF) algorithm in GIMMS, it has been shown that, whilst automated parallelisation has no merits in this case, a significant performance benefit can be achieved through algorithm redesign at the macro level to exploit the natural geometric parallelism inherent within the algorithm. However, the results illustrate that the full potential of this approach will not be appreciated until the I/O bottleneck is completely overcome, as opposed to merely avoided.

8.
Abstract

Kriging is an optimal method of spatial interpolation that produces an error for each interpolated value. Block kriging is a form of kriging that computes averaged estimates over blocks (areas or volumes) within the interpolation space. If this space is sampled sparsely and divided into blocks of a constant size, a variable estimation error is obtained for each block, with blocks near to sample points having smaller errors than blocks farther away. An alternative strategy for sparsely sampled spaces is to vary the sizes of blocks in such a way that a block's interpolated value is just sufficiently different from that of an adjacent block, given the errors on both blocks. This has the advantage of increasing spatial resolution in many regions and, conversely, reducing it in others where maintaining a constant block size is unjustified (hence achieving data compression). Such a variable subdivision of space can be achieved by regular recursive decomposition using a hierarchical data structure. An implementation of this alternative strategy, employing a split-and-merge algorithm operating on a hierarchical data structure, is discussed. The technique is illustrated using an oceanographic example involving the interpolation of satellite sea surface temperature data. Consideration is given to the problem of error propagation when combining variable-resolution interpolated fields in GIS modelling operations.

9.
Summary. It is the supposed presence of intermediate-depth earthquakes in areas of continental collision which supports the existence of subducting slabs in regions such as the Zagros mountains of Iran. Mounting field evidence from that region suggests that the intermediate focal depths allocated by teleseismic locations are wrong. No pP depth control exists for earthquakes in the Zagros, and all teleseismic locations use P phases alone.
This paper examines the effect of random noise in arrival-time data on the variances calculated for origin time and hypocentral depth. These can be simply related to the distribution of recording stations, and it can be shown that in any one region the smaller shocks will tend to have greater errors in depth than the larger ones. However, this effect alone cannot account for all the probable mislocation in depth of some small shocks in the Zagros. The discrepancy may, however, be explained by the difference in the quality of arrival-time data between large and small events. It is also shown that crustal earthquakes will have greater errors in depth than earthquakes of equivalent size in the mantle. If reliably read, PKP phases can help improve the accuracy of hypocentral depth. The conclusion is that, although errors in origin time and depth are well correlated for teleseismic locations of all earthquakes, the errors themselves may be small for the bigger shocks, and this may explain why standard bulletin locations seem to give very reasonable focal depths for the biggest events in Iran. Thus the focal depths of the smaller earthquakes (which include all the published deeper ones in the Zagros) are unreliable, while the focal depths for the largest events are likely to be much better.

10.
DEMs derived from LIDAR data are nowadays widely used for quantitative analyses and modelling in geology and geomorphology. High-quality DEMs are required for the accurate morphometric and volumetric measurement of land features. We propose a rigorous automatic algorithm for correcting systematic errors in LIDAR data in order to assess sub-metric variations in surface morphology over wide areas, such as those associated with landslide, slump, and volcanic deposits. Our procedure does not require a priori knowledge of the surface, such as the presence of known ground control points. Systematic errors are detected on the basis of distortions in the areas of overlap among different strips. Discrepancies between overlapping strips are assessed at a number of chosen computational tie points. At each tie point, a local surface is constructed for each strip containing the point. Displacements between different strips are then calculated at each tie point, and minimization of these discrepancies allows the identification of major systematic errors. These errors are identified as a function of the variables that describe the data acquisition system. Significant errors, mainly caused by a non-constant misestimation of the roll angle, are highlighted and corrected. Comparison of DEMs constructed first from uncorrected and then from corrected LIDAR data from different Mt. Etna surveys shows a meaningful improvement in quality: most of the systematic errors are removed and the accuracy of morphometric and volumetric measurements of volcanic features increases. These corrections are particularly important for the following studies of Mt. Etna: calculation of lava flow volume; calculation of erosion and deposition volume of pyroclastic cones; mapping of areas newly covered by volcanic ash; and morphological evolution of a portion of an active lava field over a short time span.

11.
Abstract

This study examines the propagation of thematic error through GIS overlay operations. Existing error propagation models for these operations are shown to yield results that are inconsistent with actual levels of propagation error. An alternate model is described that yields more consistent results. This model is based on the frequency of errors of omission and commission in input data. Model output can be used to compute a variety of error indices for data derived from different overlay operations.

12.
Abstract

Accurate quantification of gully shoulder lines (gully borderlines) will help better understand gully formation and evolution. Surveying and mapping are the most important ways to obtain precise morphology. To evaluate the influence of different survey step lengths and of the curve-fitting methods used in mapping on the morphology of the shoulder line, characterized by fractal dimensions, 13 shoulder lines at gully heads were surveyed using a total station and then mapped with different curve-fitting methods, and the fractal dimensions calculated from the maps were compared with those measured in the field. Fractal dimensions from field measurement ranged from 1.185 to 1.456. Compared with the field measurements, the average absolute errors of the polygonal line, quadratic B-spline, and arc-fitting methods are 0.045, 0.040, and 0.046, respectively; the average relative errors are 3.48%, 3.13%, and 3.59%. Therefore, the quadratic B-spline method has the highest accuracy. The standard error of the fractal dimension tends to be larger as the average step length increases. The error is ~5% when the step length is 0.7 m, which is advisable for field surveying. This study will help improve the efficiency of field surveying and mapping, and thus the accuracy and credibility of gully morphology.
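The abstract does not state which fractal-dimension estimator was used. The sketch below applies the generic box-counting method to a digitised shoulder line purely as an illustration, so the numbers it produces are not comparable to the 1.185–1.456 range reported above.

```python
import numpy as np

def box_counting_dimension(xy, box_sizes):
    """Box-counting estimate of the fractal dimension of a digitised
    line (a generic method; the paper's exact procedure may differ)."""
    xy = np.asarray(xy, dtype=float)
    xy = xy - xy.min(axis=0)                 # shift to the origin
    counts = []
    for s in box_sizes:
        # count the grid cells of size s touched by at least one vertex
        occupied = {tuple(cell) for cell in np.floor(xy / s).astype(int)}
        counts.append(len(occupied))
    # slope of log(count) against log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```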

13.
14.

Maps are an important source of data for planning and land use analysis of flood-prone areas. Map users with inadequate training are not aware that map errors can lead to ineffective decisions. Although inherent errors introduced by transformation, map construction, and symbolization are never identified on maps, they limit the effectiveness of maps as sources of data. Additional vertical and horizontal errors can be introduced during map use. Knowledge of the sources and amounts of such errors should result in more effective decisions regarding flood hazards.

15.
ABSTRACT

The analysis of geographically referenced data, specifically point data, is predicated on the accurate geocoding of those data. Geocoding refers to the process in which geographically referenced data (addresses, for example) are placed on a map. This process may lead to issues with positional accuracy or to the inability to geocode an address. In this paper, we conduct an international investigation into the impact of the (in)ability to geocode an address on the resulting spatial pattern. We use a variety of point data sets of crime events (varying numbers of events and types of crime) and a variety of areal units of analysis (varying the number and size of areal units) from a variety of countries (with varying underlying administrative systems), together with a locally based spatial point pattern test, to find the geocoding match rates needed to maintain the spatial patterns of the original data when addresses are missing at random. We find that the level of geocoding success required depends on the number of points and the number of areal units under analysis, but we show that, in general, the necessary levels of geocoding success are lower than those found in previous research. This finding is consistent across different national contexts.

16.
ABSTRACT

Road intersection data have been used across a range of geospatial analyses. However, many datasets dating from before the advent of GIS are only available as historical printed maps. To be analyzed by GIS software, they need to be scanned and transformed into a usable (vector-based) format. Because the number of scanned historical maps is very large, automated methods of digitization and transformation are needed. Frequently, these processes are based on computer vision algorithms. However, the key challenges are (1) the low conversion accuracy for low-quality and visually complex maps, and (2) the selection of optimal parameters. In this paper, we used a region-based deep convolutional neural network framework (RCNN) for object detection in order to automatically identify road intersections in historical maps of several cities in the United States of America. We found that the RCNN approach is more accurate than traditional computer vision algorithms for double-line cartographic representations of roads, though its accuracy does not surpass all traditional methods used for single-line symbols. The results suggest that the number of errors in the outputs is sensitive to the complexity and blurriness of the maps, and to the number of distinct red-green-blue (RGB) combinations within them.

17.
Abstract

The weighted Kappa coefficient is applied to the comparison of thematic maps. Weighted Kappa is a useful measure of accuracy when the map classes are ordered, or when the relative seriousness of the different possible errors may vary. The calculation and interpretation of weighted Kappa are demonstrated by two examples from forest surveys. First, the accuracy of thematic site quality maps classified according to an ordinal scale is assessed. Error matrices are derived from map overlays, and two different sets of agreement weights are used for the calculation. Weighted Kappa ranges from 0.34 to 0.55, but it does not differ significantly between two separate areas. Secondly, weighted Kappa is calculated for a tree species cover classified according to a nominal scale. Weights reflecting the economic loss for the forest owner due to erroneous data are used for the computation. The value of weighted Kappa is 0.56.
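For reference, Cohen's weighted Kappa as described here can be computed directly from an error matrix and a matrix of agreement weights. The example matrix and linear weights below are hypothetical, not the forest-survey data.

```python
import numpy as np

def weighted_kappa(confusion, weights):
    """Cohen's weighted Kappa from an error (confusion) matrix and a
    matrix of agreement weights (1 on the diagonal, values in [0, 1]
    off the diagonal)."""
    p = confusion / confusion.sum()            # joint proportions
    row = p.sum(axis=1)                        # map 1 marginals
    col = p.sum(axis=0)                        # map 2 marginals
    po = np.sum(weights * p)                   # weighted observed agreement
    pe = np.sum(weights * np.outer(row, col))  # weighted chance agreement
    return (po - pe) / (1.0 - pe)

# usage with a hypothetical 3-class error matrix and linear weights
cm = np.array([[30, 5, 1], [6, 25, 4], [2, 5, 22]], dtype=float)
k = cm.shape[0]
w = 1.0 - np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
print(weighted_kappa(cm, w))
```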

18.
ABSTRACT

The aim of this article is to describe a convenient but robust method for defining neighbourhood relations among buildings based on ordinary Delaunay diagrams (ODDs) and area Delaunay diagrams (ADDs). ODDs are defined as a set of edges connecting the generators of adjacent ordinary Voronoi cells (points representing the centroids of building polygons), and ADDs as a set of edges connecting the centroids of two building polygons that are the generators of adjacent area Voronoi cells. Although ADDs are more robust than ODDs, the computation time of ODDs is shorter than that of ADDs (the order of their computational time complexity is O(n log n)). If ODDs can approximate ADDs with a certain degree of accuracy, the former can be used as an alternative. Therefore, we computed the ratio of the number of ADD edges to that of ODD edges overlapping ADDs at the building and regional scales. The results indicate that: (1) for approximately 60% of all buildings, ODDs can exactly overlap ADDs with extra ODD edges; (2) at a regional scale, ODDs can overlap approximately 90% of ADDs with 10% extra ODD edges; and (3) in terms of judgement errors, although ADDs are more accurate than ODDs, the difference is only approximately 1%.
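An ODD, as defined above, is simply the edge set of the Delaunay triangulation of the building centroids (the dual of their ordinary Voronoi diagram), so it can be extracted directly; ADDs require area Voronoi diagrams of the polygons and are not sketched here. The SciPy-based sketch below uses hypothetical centroids.

```python
import numpy as np
from scipy.spatial import Delaunay

def odd_edges(centroids):
    """Ordinary Delaunay diagram (ODD) edges between building centroids,
    i.e. pairs of generators whose ordinary Voronoi cells are adjacent."""
    tri = Delaunay(np.asarray(centroids))
    edges = set()
    for a, b, c in tri.simplices:              # each triangle contributes 3 edges
        for i, j in ((a, b), (b, c), (a, c)):
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

# usage with a few hypothetical building centroids
pts = np.array([[0, 0], [2, 0], [1, 2], [3, 2], [2, 4]])
print(odd_edges(pts))
```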

19.
ABSTRACT

The size and spatial distribution of loess slides are important for estimating the yield of eroded materials and determining landslide risk. While previous studies have investigated landslide size distributions, the spatial distribution pattern of landslides at different spatial scales is poorly understood. The results indicate that the loess slide distribution exhibits power-law scaling over a range of the size distribution. The mean landslide size and the size distribution differ among the different geomorphic types. The double Pareto and inverse gamma functions coincide well with the empirical probability distribution of the loess slide areas and can quantitatively reveal the rollover location, maximum probability, and scaling exponents. The frequency of loess slides increases with mean monthly precipitation. Moreover, point distance analysis showed that >80% of landslides are located <3 km from other loess slides. We found that the loess slides at the two study sites (Zhidan and Luochuan County) in northern Shaanxi Province, China, show a significantly clustered distribution. Furthermore, analysis of the correlation fractal dimension shows that the landslides exhibit a dispersed distribution at smaller spatial scales and a clustered distribution at larger spatial scales.

20.
Performing point pattern analysis using Ripley's K function on point events of large size is computationally intensive, as it involves massive point-wise comparisons, time-consuming calculation of edge-effect correction weights, and a large number of simulations. This article presented two strategies to optimize the algorithm for point pattern analysis using Ripley's K function and utilized cloud computing to further accelerate the optimized algorithm. The first optimization sorted the points on their x and y coordinates and thus narrowed the search for neighboring points down to a rectangular area around each point when estimating the K function. Using the actual study area in computing edge-effect correction weights is essential for estimating an unbiased K function, but it is very computationally intensive if the study area has a complex shape. The second optimization reused previously computed weights to avoid repeating this expensive calculation. The optimized algorithm was then parallelized using Open Multi-Processing (OpenMP) and hybrid Message Passing Interface (MPI)/OpenMP on a cloud computing platform. Performance testing showed that the optimizations accelerated point pattern analysis using the K function by a factor of 8 for both the sequential version and the OpenMP-parallel version of the optimized algorithm. While the OpenMP-based parallelization achieved good scalability with respect to the number of CPU cores utilized and the problem size, the hybrid MPI/OpenMP-based parallelization significantly shortened the time for estimating the K function and performing simulations by utilizing computing resources on multiple computing nodes. The computational challenge imposed by point pattern analysis on large sets of point events involving a large number of simulations can thus be addressed by utilizing elastic, distributed cloud resources.
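The article's OpenMP/MPI code is not reproduced here. The serial Python sketch below only illustrates the first optimization — sorting on one coordinate so that the neighbour search for each point is restricted to a narrow strip — and omits edge-effect correction weights entirely.

```python
import numpy as np

def ripley_k(points, r, area):
    """Sketch of Ripley's K(r) using a sort-based neighbour search;
    edge-effect correction weights are omitted in this illustration."""
    pts = points[np.argsort(points[:, 0])]      # sort on x
    x = pts[:, 0]
    n = len(pts)
    count = 0
    for i in range(n):
        # only points whose x lies within r of x[i] can be within distance r
        lo = np.searchsorted(x, x[i] - r, side="left")
        hi = np.searchsorted(x, x[i] + r, side="right")
        d = np.linalg.norm(pts[lo:hi] - pts[i], axis=1)
        count += np.count_nonzero(d <= r) - 1   # drop the point itself
    return area * count / (n * n)

# usage on a random pattern in a unit square
rng = np.random.default_rng(1)
p = rng.random((2000, 2))
print(ripley_k(p, 0.05, 1.0), np.pi * 0.05 ** 2)
```

Under complete spatial randomness the estimate should be close to πr², which provides a quick sanity check for the sketch.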
