Similar Literature
20 similar records found (search time: 78 ms)
1.
We analysed the sensitivity of a decision tree derived forest type mapping to simulated data errors in input digital elevation model (DEM), geology and remotely sensed (Landsat Thematic Mapper) variables. We used a stochastic Monte Carlo simulation model coupled with a one-at-a-time approach. The DEM error was assumed to be spatially autocorrelated with its magnitude being a percentage of the elevation value. The error of categorical geology data was assumed to be positional and limited to boundary areas. The Landsat data error was assumed to be spatially random following a Gaussian distribution. Each layer was perturbed using its error model with increasing levels of error, and the effect on the forest type mapping was assessed. The results of the three sensitivity analyses were markedly different: the classification was most sensitive to the DEM error, less sensitive to the Landsat data errors, and only marginally sensitive to the geology data error used. A linear increase in error resulted in non-linear increases in effect for the DEM and Landsat errors, while the effect was linear for geology. As an example, a DEM error as small as ±2% reduced the overall test accuracy by more than 2%. More importantly, the same uncertainty level caused nearly 10% of the study area to change its initial class assignment at each perturbation, on average. A spatial assessment of the sensitivities indicates that most of the pixel changes occurred within those forest classes expected to be more sensitive to data error. In addition to characterising the effect of errors on forest type mapping using decision trees, this study has demonstrated the generality of employing Monte Carlo analysis for the sensitivity and uncertainty analysis of categorical outputs, whose characteristics are distinct from those of numerical outputs.
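The perturbation scheme this abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: a smoothed white-noise field stands in for their autocorrelated DEM error model, and a simple elevation threshold stands in for the decision-tree classifier.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perturb_dem(dem, pct_error, corr_sigma, rng):
    """One realization of spatially autocorrelated DEM error whose
    magnitude is a percentage of each cell's elevation."""
    noise = gaussian_filter(rng.standard_normal(dem.shape), corr_sigma)
    noise /= noise.std()                          # rescale to unit variance
    return dem + noise * (pct_error / 100.0) * dem

rng = np.random.default_rng(0)
dem = 500.0 + 100.0 * rng.random((50, 50))        # toy DEM, 500-600 m

# one-at-a-time: perturb only the DEM, hold other layers fixed,
# and count per-cell class changes over repeated realizations
base_class = dem > 550.0                          # threshold stands in for the classifier
changed = [np.mean((perturb_dem(dem, 2.0, 3.0, rng) > 550.0) != base_class)
           for _ in range(100)]
print(f"mean fraction of cells changing class: {np.mean(changed):.3f}")
```

Replacing the threshold with an actual trained classifier and repeating the loop for each input layer reproduces the one-at-a-time design.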

2.
Digital elevation models (DEMs) have been widely used for a range of applications and form the basis of many GIS-related tasks. An essential aspect of a DEM is its accuracy, which depends on a variety of factors, such as source data quality, interpolation methods, data sampling density and the surface topographical characteristics. In recent years, point measurements acquired directly from land surveying such as differential global positioning system and light detection and ranging have become increasingly popular. These topographical data points can be used as the source data for the creation of DEMs at a local or regional scale. The errors in point measurements can be estimated in some cases. The focus of this article is on how the errors in the source data propagate into DEMs. The interpolation method considered is a triangulated irregular network (TIN) with linear interpolation. Both horizontal and vertical errors in source data points are considered in this study. An analytical method is derived for the error propagation into any particular point of interest within a TIN model. The solution is validated using Monte Carlo simulations and survey data obtained from a terrestrial laser scanner.
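The analytical propagation described here has a compact core for the vertical-error case. The sketch below (hedged: it ignores the horizontal error component the article also treats) propagates vertical standard errors of the three TIN vertices through the linear (barycentric) interpolation weights and checks the result by Monte Carlo.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of p with respect to triangle (a, b, c)."""
    m = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0, 1.0, 1.0]])
    return np.linalg.solve(m, np.array([p[0], p[1], 1.0]))

# TIN facet: vertex coordinates, heights and vertical standard errors (toy values)
a, b, c = (0.0, 0.0), (10.0, 0.0), (0.0, 10.0)
z = np.array([100.0, 105.0, 110.0])
sz = np.array([0.10, 0.15, 0.12])

p = (3.0, 4.0)
w = barycentric_weights(p, a, b, c)            # linear-interpolation weights, sum to 1
z_p = w @ z
sd_analytic = np.sqrt(np.sum((w * sz) ** 2))   # variance propagation through a linear map

# Monte Carlo validation of the analytic result
rng = np.random.default_rng(1)
sims = (z + rng.standard_normal((100_000, 3)) * sz) @ w
print(f"z(p) = {z_p:.2f}; analytic sd {sd_analytic:.4f}; MC sd {sims.std():.4f}")
```

Because linear interpolation is a weighted sum, independent vertical errors propagate exactly as the root sum of squared weighted standard deviations; the simulation confirms this.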

3.
Space–time prism (STP) is an important concept for modeling object movements in space and time. An STP can be conceptualized as the volume traced out in three-dimensional space by the potential paths of a moving object. Though the concept has found applications in time geography, research on the analysis and propagation of uncertainty in STPs, particularly under a high degree of nonlinearity, is scanty. Building on the efficiency and effectiveness of the moment-design (M-D) method, this paper proposes an approach to nonlinear error propagation problems in the potential path areas (PPAs) of STPs and their intersections. Propagation of errors to the PPA and its boundary, and to the intersection of two PPAs, is investigated. Performance of the proposed method is evaluated via a series of experimental studies. In comparison with the Monte Carlo method and the implicit function method, simulation results show the advantages of the M-D method in the analysis of error propagation in STPs.
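For intuition, the PPA itself is easy to compute on a grid, and anchor-point uncertainty can be propagated to its area by plain Monte Carlo (the baseline the M-D method is compared against; the M-D method itself is not reproduced here). All numbers below are illustrative.

```python
import numpy as np

def ppa_area(a, b, v, T, grid):
    """Grid-based area of the potential path area: points x with
    (|x - a| + |x - b|) / v <= T, i.e. an ellipse with foci a and b."""
    da = np.hypot(grid[0] - a[0], grid[1] - a[1])
    db = np.hypot(grid[0] - b[0], grid[1] - b[1])
    cell = (grid[0][0, 1] - grid[0][0, 0]) ** 2
    return float(np.sum((da + db) / v <= T) * cell)

xs = np.linspace(-5.0, 15.0, 400)
grid = np.meshgrid(xs, xs)
a = np.array([0.0, 0.0])                 # anchor: start location
b = np.array([10.0, 0.0])                # anchor: end location
v, T = 2.0, 6.0                          # maximum speed and time budget

area0 = ppa_area(a, b, v, T, grid)       # analytic: pi * (vT/2) * sqrt((vT/2)**2 - 25)

# propagate anchor-position error (sd 0.3) to the PPA area by Monte Carlo
rng = np.random.default_rng(2)
areas = [ppa_area(a + rng.normal(0, 0.3, 2), b + rng.normal(0, 0.3, 2), v, T, grid)
         for _ in range(200)]
print(f"PPA area {area0:.1f}; with anchor error {np.mean(areas):.1f} +/- {np.std(areas):.1f}")
```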

4.
Delay-time tomography can be either linearized or non-linear. In the case of linearized tomography, an error due to the linearization is introduced. If the tomography is performed in a non-linear fashion, the theory used is more accurate from the physical point of view, but if the data have a statistical error, a noise bias in the model is introduced due to the non-linear propagation of errors. We investigate the error propagation of a weakly non-linear delay-time tomography example using second-order perturbation theory. This enables us to compare the linearization error with the noise bias. We show explicitly that whether a non-linear inversion method leads to a better estimation of the model parameters than a linearized method depends on the signal-to-noise ratio. We also show that, in cases of poor data quality, a linearized inversion method leads to a better estimation of the model parameters than a non-linear method.
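The signal-to-noise trade-off can be demonstrated with a one-parameter toy problem (an assumed forward model g(m) = sqrt(m), not the paper's tomography operator): the exact nonlinear inverse suffers a noise bias that grows with the noise variance, while the linearized inverse carries a fixed linearization error.

```python
import numpy as np

# toy inverse problem: forward model g(m) = sqrt(m), true model m = 4
g = lambda m: np.sqrt(m)
m_true, m_ref = 4.0, 3.5            # m_ref: reference model for the linearization
d_true = g(m_true)
rng = np.random.default_rng(3)

def rmse_pair(noise_sd, n=200_000):
    d_obs = d_true + rng.standard_normal(n) * noise_sd
    m_nonlin = d_obs ** 2                                        # exact inverse: noise bias ~ noise_sd**2
    m_lin = m_ref + (d_obs - g(m_ref)) / (0.5 / np.sqrt(m_ref))  # linearized: fixed linearization error
    return (np.sqrt(np.mean((m_nonlin - m_true) ** 2)),
            np.sqrt(np.mean((m_lin - m_true) ** 2)))

hi_snr = rmse_pair(noise_sd=0.005)    # good data: the nonlinear inversion wins
lo_snr = rmse_pair(noise_sd=1.0)      # poor data: the linearized inversion wins
print(hi_snr, lo_snr)
```

The crossover the abstract describes appears directly: the nonlinear estimator's RMSE contains a bias term quadratic in the noise level, which dominates at low signal-to-noise ratio.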

5.
Depression filling in the assessment of DEM uncertainty effects
Depressions (sinks) are widespread in DEM realizations, and how they are treated affects the results of DEM uncertainty assessment. This paper uses the Monte Carlo method to simulate DEM uncertainty and a deviation index to evaluate the effect of DEM uncertainty on slope and the topographic index. The influence of depression filling on the uncertainty assessment is quantified as the difference between the deviation indices obtained with and without filling. The study finds that depressions affect the uncertainty assessment of different terrain parameters differently, and that their influence grows as DEM uncertainty increases.
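A compact way to reproduce the core of this experiment: fill depressions with a priority-flood sweep (a standard algorithm, sketched here from scratch rather than taken from the paper) and compare a terrain derivative with and without filling.

```python
import heapq
import numpy as np

def fill_depressions(dem):
    """Priority-flood depression filling: grow inward from the border,
    always expanding the lowest frontier cell, raising pits to spill level."""
    h, w = dem.shape
    filled = dem.copy()
    closed = np.zeros((h, w), bool)
    pq = []
    for i in range(h):                       # seed the queue with the DEM border
        for j in range(w):
            if i in (0, h - 1) or j in (0, w - 1):
                heapq.heappush(pq, (dem[i, j], i, j))
                closed[i, j] = True
    while pq:
        z, i, j = heapq.heappop(pq)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not closed[ni, nj]:
                closed[ni, nj] = True
                filled[ni, nj] = max(dem[ni, nj], z)   # raise pits to spill level
                heapq.heappush(pq, (filled[ni, nj], ni, nj))
    return filled

# toy DEM: an inclined plane with an artificial depression punched into it
dem = np.fromfunction(lambda i, j: 100.0 + i + j, (40, 40))
dem[15:20, 15:20] -= 30.0

mean_slope = lambda d: np.mean(np.hypot(*np.gradient(d)))
base_gap = mean_slope(fill_depressions(dem)) - mean_slope(dem)
print(f"change in mean slope due to filling: {base_gap:.3f}")
```

Embedding this comparison inside a Monte Carlo loop over DEM error realizations gives the filled-versus-unfilled deviation contrast the abstract quantifies.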

6.
In a mountainous region, the glacier area and length extracted from satellite imagery are the projected area and length of the land surface, which cannot represent reality; there are always some errors. In this paper, methods for calculating glacier area and length were put forward based on satellite imagery and a digital elevation model (DEM). The pure pixels and mixed pixels were extracted using a linear spectral unmixing approach, the slope of each pixel was calculated from the DEM, and the area calculation method was then derived. The projected length was obtained from the satellite imagery, the elevation difference was calculated from the DEM, and the length calculation method was based on the Pythagorean theorem. For a glacier in the study area in the western Qilian Mountains, northwestern China, the projected area and length were 140.93 km² and 30.82 km, respectively, compared with 155.16 km² and 32.11 km calculated by the methods in this paper; the relative errors of the projected area and length extracted directly from the Landsat Thematic Mapper (TM) image thus reach −9.2 and −4.0 percent, respectively. The calculation method accords better with reality and can provide a reference for monitoring the area and length of other objects in mountainous regions.
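The two corrections described reduce to simple trigonometry per pixel. The numbers below are illustrative, not the paper's glacier data.

```python
import numpy as np

def true_area(projected_area, slope_deg):
    """Surface area from projected area: A_surface = A_projected / cos(slope)."""
    return projected_area / np.cos(np.radians(slope_deg))

def true_length(projected_length, elevation_diff):
    """Pythagorean theorem on projected length and elevation difference."""
    return np.hypot(projected_length, elevation_diff)

# toy numbers: a 30 m x 30 m pixel on a 25-degree slope, and a short segment
a = true_area(projected_area=900.0, slope_deg=25.0)          # m^2 per pixel
l = true_length(projected_length=1.20, elevation_diff=0.35)  # km
print(f"corrected pixel area = {a:.1f} m^2, corrected length = {l:.3f} km")
```

Summing the per-pixel corrected areas (weighting mixed pixels by their unmixed glacier fraction) and the per-segment corrected lengths gives the glacier totals.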

7.
New expressions are derived for the standard errors in the eigenvalues of a cross-product matrix by the method of error propagation. Cross-product matrices frequently arise in multivariate data analysis, especially in principal component analysis (PCA). The derived standard errors account for the variability in the data as a result of measurement noise and are therefore essentially different from the standard errors developed in multivariate statistics. Those standard errors were derived in order to account for the finite number of observations on a fixed number of variables, the so-called sampling error, and they can be used for making inferences about the population eigenvalues. Making inferences about the population eigenvalues is often not the purpose of PCA in the physical sciences. This is particularly true if the measurements are performed on an analytical instrument that produces two-dimensional arrays for one chemical sample: the rows and columns of such a data matrix cannot be identified with observations on variables at all. However, PCA can still be used as a general data reduction technique, but now the effect of measurement noise on the standard errors in the eigenvalues has to be considered. The consequences for significance testing of the eigenvalues, as well as the usefulness of error estimates for scores and loadings of PCA, multiple linear regression (MLR) and the generalized rank annihilation method (GRAM), are discussed. The adequacy of the derived expressions is tested by Monte Carlo simulations.
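To first order, for i.i.d. measurement noise of standard deviation sigma in the data matrix, the standard error of an eigenvalue of the cross-product matrix comes out to approximately 2·sigma·sqrt(lambda_i). The sketch below is my own first-order derivation checked against Monte Carlo, not necessarily the paper's exact expression.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((30, 5)) @ np.diag([4.0, 3.0, 2.0, 1.0, 0.5])
lam = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]    # eigenvalues of the cross-product matrix

sigma = 0.05                                        # i.i.d. measurement-noise level
se_first_order = 2.0 * sigma * np.sqrt(lam)         # first-order propagated standard errors

def noisy_eigs():
    """Eigenvalues of the cross-product matrix of one noisy data realization."""
    Xn = X + rng.standard_normal(X.shape) * sigma
    return np.sort(np.linalg.eigvalsh(Xn.T @ Xn))[::-1]

lams = np.array([noisy_eigs() for _ in range(5000)])
se_mc = lams.std(axis=0)
print(np.round(se_first_order, 3), np.round(se_mc, 3))
```

The agreement holds when eigenvalues are well separated; near-degenerate eigenvalues mix their eigenvectors and require higher-order terms.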

8.
9.
10.
Spatial data uncertainty models (SDUM) are necessary tools that quantify the reliability of results from geographical information system (GIS) applications. One technique used by SDUM is Monte Carlo simulation, a technique that quantifies spatial data and application uncertainty by determining the possible range of application results. A complete Monte Carlo SDUM for generalized continuous surfaces typically has three components: an error magnitude model, a spatial statistical model defining error shapes, and a heuristic that creates multiple realizations of error fields added to the generalized elevation map. This paper introduces a spatial statistical model that represents multiple statistics simultaneously and weighted against each other. This paper's case study builds a SDUM for a digital elevation model (DEM). The case study accounts for relevant shape patterns in elevation errors by reintroducing specific topological shapes, such as ridges and valleys, in appropriate localized positions. The spatial statistical model also minimizes topological artefacts, such as cells without outward drainage and inappropriate gradient distributions, which are frequent problems with random field-based SDUM. Multiple weighted spatial statistics enable two conflicting SDUM philosophies to co-exist. The two philosophies are ‘errors are only measured from higher quality data’ and ‘SDUM need to model reality’. This article uses an automatic parameter fitting random field model to initialize Monte Carlo input realizations followed by an inter-map cell-swapping heuristic to adjust the realizations to fit multiple spatial statistics. The inter-map cell-swapping heuristic allows spatial data uncertainty modelers to choose the appropriate probability model and weighted multiple spatial statistics which best represent errors caused by map generalization. This article also presents a lag-based measure to better represent gradient within a SDUM. 
This article covers the inter-map cell-swapping heuristic as well as both probability and spatial statistical models in detail.

11.
In many cases of model evaluation in physical geography, the observed data to which model predictions are compared may not be error free. This paper addresses the effect of observational errors on the mean squared error, the mean bias error and the mean absolute deviation through the derivation of a statistical framework and Monte Carlo simulation. The effect of bias in the observed values may either decrease or increase the expected values of the mean squared error and mean bias error, depending on whether model and observational biases have the same or opposite signs, respectively. Random errors in observed data tend to inflate the mean squared error and the mean absolute deviation, and also increase the variability of all the error indices considered here. The statistical framework is applied to a real example, in which sampling variability of the observed data appears to account for most of the difference between observed and predicted values. Examination of scaled differences between modelled and observed values, where the differences are divided by the estimated standard errors of the observed values, is suggested as a diagnostic tool for determining whether random observational errors are significant.
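The inflation of the mean squared error by random observational noise is easy to reproduce: for independent errors, the expected MSE against observations equals the MSE against truth plus the observational variance. A hedged sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
truth = rng.normal(20.0, 5.0, n)              # the (unknown) true values
pred = truth + 1.0                            # a model with a constant +1 bias
obs_sd = 2.0
obs = truth + rng.normal(0.0, obs_sd, n)      # observations carry random error

mse_vs_truth = np.mean((pred - truth) ** 2)   # the error we actually want: 1.0
mse_vs_obs = np.mean((pred - obs) ** 2)       # inflated by obs_sd**2, so about 5.0

# diagnostic suggested by the paper: differences scaled by the estimated
# standard error of the observations; model bias appears as a shifted mean
scaled = (pred - obs) / obs_sd
print(f"MSE vs truth {mse_vs_truth:.2f}; vs obs {mse_vs_obs:.2f}; "
      f"scaled-difference mean {scaled.mean():.2f}")
```

If the scaled differences look like a unit-variance distribution apart from a mean shift, random observational error plausibly accounts for the residual spread.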

12.
Using the MM5 model and 1°×1° NCEP reanalysis data, a simulation experiment at 3 km × 3 km resolution was carried out for a "representative year" of wind energy resources in Xinjiang, and the simulated 10 m wind speeds were compared with 10 m observations at meteorological stations. The results show: (1) The model reproduces the spatial distribution of annual and monthly mean wind speeds fairly realistically, but with systematic biases whose magnitude varies clearly by month and by region. On average, biases are smaller in the summer half-year than in the winter half-year and markedly smaller in the major wind zones than elsewhere, with the smallest simulation biases in the Dabancheng–Xiaocaohu, southeastern Hami, and Santanghu–Naomaohu wind zones. (2) Linear regression correction of the simulated hourly mean wind speeds can effectively reduce the speed bias in some wind zones, but its effect on correcting the number of effective-wind-speed hours is very limited, and the corrected biases remain random. (3) On a monthly basis, regression correction of the cube of the simulated hourly mean wind speed effectively reduces the simulation error of the annual mean wind power density, and the hourly air density in the model can simply be replaced by the monthly mean air density at the observation site. The experiment is of practical significance for choosing the optimal combination of mesoscale-model parameterization schemes and adjusting the horizontal resolution in the recently completed detailed survey and comprehensive assessment of Xinjiang's wind energy resources, and it also provides material for research on improving the accuracy of short-term wind power forecasts.
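Point (3) of the abstract, regressing the cube of the simulated speed because wind power density scales with the cube of wind speed, can be sketched as follows (synthetic data; the Weibull parameters and air density are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(7)
rho = 1.1                                      # assumed monthly-mean air density, kg/m^3
obs = rng.weibull(2.0, 720) * 7.0              # synthetic "observed" hourly speeds, one month
sim = 1.15 * obs + rng.normal(0.0, 1.0, 720)   # simulated speeds: systematic + random error

# wind power density ~ 0.5 * rho * mean(v**3); correct the *cubed* simulated
# speeds by linear regression against the cubed observations, month by month
A = np.vstack([sim ** 3, np.ones_like(sim)]).T
coef, *_ = np.linalg.lstsq(A, obs ** 3, rcond=None)
sim3_corr = A @ coef

wpd = lambda v3: 0.5 * rho * np.mean(v3)
print(f"obs {wpd(obs**3):.0f}, raw sim {wpd(sim**3):.0f}, "
      f"corrected {wpd(sim3_corr):.0f} W/m^2")
```

Because the regression includes an intercept, the corrected cubed speeds match the observed mean cube exactly in-sample, which is why the power-density error collapses; out-of-sample skill depends on the stability of the monthly coefficients.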

13.
In the field of digital terrain analysis (DTA), the principle and method of uncertainty in surface area calculation (SAC) have not been deeply developed and need to be further studied. This paper considers the uncertainty of data sources from the digital elevation model (DEM) and SAC in DTA to perform the following investigations: (a) truncation error (TE) modeling and analysis, and (b) modeling and analysis of SAC propagation error (PE) by using Monte Carlo simulation techniques and spatially autocorrelated error to simulate DEM uncertainty. The simulation experiments show that (a) without the introduction of the DEM error, higher DEM resolution and lower terrain complexity lead to smaller TE and absolute error (AE); (b) with the introduction of the DEM error, the DEM resolution and terrain complexity influence the AE and standard deviation (SD) of the SAC, but the trends by which the two values change may not be consistent; and (c) the spatial distribution of the introduced random error determines the size and degree of the deviation between the calculated result and the true value of the surface area. This study provides insights regarding the principle and method of uncertainty in SACs in geographic information science (GIScience) and provides guidance to quantify SAC uncertainty.

14.
A number of methods have been developed over the last few decades to model the gravitational gradients using digital elevation data. All methods are based on second-order derivatives of the Newtonian mass integral for the gravitational potential. Foremost are algorithms that divide the topographic masses into prisms or more general polyhedra and sum the corresponding gradient contributions. Other methods are designed for computational speed and make use of the fast Fourier transform (FFT), require a regular rectangular grid of data, and yield gradients on the entire grid, but only at constant altitude. We add to these the ordinary numerical integration (in horizontal coordinates) of the gradient integrals. In total we compare two prism, two FFT and two ordinary numerical integration methods using 1" elevation data in two topographic regimes (rough and moderate terrain). Prism methods depend on the type of finite elements that are generated with the elevation data; in particular, alternative triangulations can yield significant differences in the gradients (up to tens of Eötvös). The FFT methods depend on a series development of the topographic heights, requiring terms up to 14th order in rough terrain; and, one popular method has significant bias errors (e.g. 13 Eötvös in the vertical–vertical gradient) embedded in its practical realization. The straightforward numerical integrations, whether on a rectangular or triangulated grid, yield sub-Eötvös differences in the gradients when compared to the other methods (except near the edges of the integration area) and they are as efficient computationally as the finite element methods.

15.
The area increment of the land surface relative to its projected area is an effect of topographic relief and is also a source of environmental variation. To examine the effects of topography and data resolution on surface area calculation, we calculated incremental area coefficients (IACs), based on two different algorithms, for a DEM of China at a series of spatial resolutions. Sampling the DEM with a regional network of 50 km × 50 km cells, we explored the relationships between the two IACs and topographic features. Both IACs were exponential functions of resolution. At 30-m resolution, the IACs were 4.31 and 4.89% over China, respectively. The largest increment for a 50 km × 50 km cell was >45%. Between the IACs there was a linear relationship that varied with DEM resolution. Hierarchical variation partitioning revealed that the included factors contributed in very similar proportions to the two IACs: mean slope (37.5 or 38.7%) and standard deviation of slope (22.3 or 19.6%) at the local scale dominated the area increment, followed by regional elevation range. Data resolution contributed about 10%, while the deviation of slope exposure had only a minimal (1.4 or 1.7%) impact on the surface-area increment. For a specific type of geomorphology, a threshold resolution of DEM can be determined, below which the surface-area increment (i.e., IAC) is negligible. Our results provide the first comprehensive estimate of the contributions of topographic features, DEM resolution, and algorithms to the surface-area increment, and indicate the scale-related properties and potential environmental consequences of topographic heterogeneity in various estimates of natural resources and ecosystem functions when area needs to be taken into account.
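An IAC can be computed directly from a DEM by summing per-cell surface areas, cellsize² / cos(slope). The sketch below uses one common slope-based algorithm (the paper compares two) on a synthetic hill, and shows the resolution dependence by resampling:

```python
import numpy as np

def iac(dem, cellsize):
    """Incremental area coefficient in percent: how much larger the land
    surface is than its planimetric projection."""
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    surface = np.sum(cellsize ** 2 / np.cos(slope))
    projected = dem.size * cellsize ** 2
    return 100.0 * (surface / projected - 1.0)

# a synthetic Gaussian hill sampled at 10 m, then coarsened to 50 m
x = np.linspace(-500.0, 500.0, 101)
xx, yy = np.meshgrid(x, x)
dem = 200.0 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 150.0 ** 2))

iac_fine = iac(dem, 10.0)
iac_coarse = iac(dem[::5, ::5], 50.0)
print(f"10 m IAC = {iac_fine:.2f}%, 50 m IAC = {iac_coarse:.2f}%")
```

Coarsening smooths the slope field, so the IAC shrinks with cell size, which is the resolution dependence the study quantifies over China.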

16.
This paper explores three theoretical approaches for estimating the degree of correctness to which the accuracy figures of a gridded Digital Elevation Model (DEM) have been estimated depending on the number of checkpoints involved in the assessment process. The widely used average‐error statistic Mean Square Error (MSE) was selected for measuring the DEM accuracy. The work was focused on DEM uncertainty assessment using approximate confidence intervals. Those confidence intervals were constructed both from classical methods which assume a normal distribution of the error and from a new method based on a non‐parametric approach. The first two approaches studied, called Chi‐squared and Asymptotic Student t, consider a normal distribution of the residuals. That is especially true in the first case. The second case, due to the asymptotic properties of the t distribution, can perform reasonably well with even slightly non‐normal residuals if the sample size is large enough. The third approach developed in this article is a new method based on the theory of estimating functions which could be considered much more general than the previous two cases. It is based on a non‐parametric approach where no particular distribution is assumed. Thus, we can avoid the strong assumption of distribution normality accepted in previous work and in the majority of current standards of positional accuracy. The three approaches were tested using Monte Carlo simulation for several populations of residuals generated from originally sampled data. Those original grid DEMs, considered as ground data, were collected by means of digital photogrammetric methods from seven areas displaying differing morphology employing a 2 by 2 m sampling interval. The original grid DEMs were subsampled to generate new lower‐resolution DEMs. Each of these new DEMs was then interpolated to retrieve its original resolution using two different procedures. 
Height differences between original and interpolated grid DEMs were calculated to obtain residual populations. One interpolation procedure resulted in slightly non‐normal residual populations, whereas the other produced very non‐normal residuals with frequent outliers. Monte Carlo simulations allow us to report that the estimating function approach was the most robust and general of those tested. In fact, the other two approaches, especially the Chi‐squared method, were clearly affected by the degree of normality of the residual population distribution, producing less reliable results than the estimating functions approach. This last method shows good results when applied to the different datasets, even in the case of more leptokurtic populations. In the worst cases, no more than 64–128 checkpoints were required to construct an estimate of the global error of the DEM with 95% confidence. The approach therefore is an important step towards saving time and money in the evaluation of DEM accuracy using a single average‐error statistic. Nevertheless, we must take into account that MSE is essentially a single global measure of deviations, and thus incapable of characterizing the spatial variations of errors over the interpolated surface.
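The parametric chi-squared interval and a distribution-free alternative are both short to write down. Here a percentile bootstrap stands in for the paper's estimating-function approach; it is not the same method, only a comparably assumption-free baseline.

```python
import numpy as np

def mse_ci_chi2(res, alpha=0.05):
    """CI for the MSE assuming zero-mean normal residuals:
    n * MSE / sigma^2 follows a chi-squared distribution with n dof."""
    from scipy.stats import chi2
    n, mse = len(res), np.mean(res ** 2)
    return n * mse / chi2.ppf(1 - alpha / 2, n), n * mse / chi2.ppf(alpha / 2, n)

def mse_ci_boot(res, alpha=0.05, n_boot=2000, rng=np.random.default_rng(8)):
    """Distribution-free percentile-bootstrap CI for the MSE."""
    boots = [np.mean(rng.choice(res, len(res)) ** 2) for _ in range(n_boot)]
    return tuple(np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

rng = np.random.default_rng(9)
res = rng.standard_t(df=4, size=128) * 0.5    # heavy-tailed residuals at 128 checkpoints
print("chi2 CI:", mse_ci_chi2(res), "bootstrap CI:", mse_ci_boot(res))
```

With heavy-tailed residuals like these, the chi-squared interval's normality assumption is violated and its nominal coverage degrades, which is the failure mode that motivates the non-parametric approach.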

17.
While error propagation in GIS is a topic that has received a lot of attention, it has not been researched with 3D GIS data. We extend error propagation to 3D city models using a Monte Carlo simulation on a use case of annual solar irradiation estimation of building rooftops for assessing the efficiency of installing solar panels. Besides investigating the extension of the theory of error propagation in GIS from 2D to 3D, this paper presents the following contributions. We (1) introduce varying XY/Z accuracy levels of the geometry to reflect actual acquisition outcomes; (2) run experiments on multiple accuracy classes (121 in total); (3) implement an uncertainty engine for simulating acquisition positional errors to procedurally modelled (synthetic) buildings; (4) perform the uncertainty propagation analysis on multiple levels of detail (LODs); and (5) implement Solar3Dcity – a CityGML-compliant software for estimating the solar irradiation of roofs, which we use in our experiments. The results show that in the case of the city of Delft in the Netherlands, a 0.3/0.6 m positional uncertainty yields an error of 68 kWh/m2/year (10%) in solar irradiation estimation. Furthermore, the results indicate that the planar and vertical uncertainties have a different influence on the estimations, and that the results are comparable between LODs. In the experiments we use procedural models, implying that analyses are carried out in a controlled environment where results can be validated. Our uncertainty propagation method and the framework are applicable to other 3D GIS operations and/or use cases. We released Solar3Dcity as open-source software to support related research efforts in the future.

18.
When data on environmental attributes such as those of soil or groundwater are manipulated by logical cartographic modelling, the results are usually assumed to be exact. However, in reality the results will be in error because the values of input attributes cannot be determined exactly. This paper analyses how errors in such values propagate through Boolean and continuous modelling, involving the intersection of several maps. The error analysis is carried out using Monte Carlo methods on data interpolated by block kriging to a regular grid which yields predictions and prediction error standard deviations of attribute values for each pixel. The theory is illustrated by a case study concerning the selection of areas of medium textured, non-saline soil at an experimental farm in Alberta, Canada. The results suggest that Boolean methods of sieve mapping are much more prone to error propagation than the more robust continuous equivalents. More study of the effects of errors and of the choice of attribute classes and of class parameters on error propagation is recommended.
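The Boolean-versus-continuous contrast can be reproduced with a toy Monte Carlo: crisp thresholding flips a pixel's selection outright near class boundaries, while a continuous (fuzzy-style) membership, used here as an assumed stand-in for the paper's continuous classification, changes gradually. The thresholds and error levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
clay = rng.normal(30.0, 2.0, n)          # kriged clay-content predictions (%)
ec = rng.normal(3.5, 0.5, n)             # kriged salinity predictions (dS/m)

def boolean_select(clay_sd, ec_sd):
    """Crisp sieve map (medium texture AND non-saline) after adding
    prediction errors of the given standard deviations."""
    c = clay + rng.normal(0.0, clay_sd, n)
    e = ec + rng.normal(0.0, ec_sd, n)
    return (c < 32.0) & (e < 4.0)

def membership(c, e):
    """Continuous alternative: gradual memberships around each threshold."""
    return (1.0 / (1.0 + np.exp((c - 32.0) / 4.0))) * \
           (1.0 / (1.0 + np.exp((e - 4.0) / 1.0)))

base = boolean_select(0.0, 0.0)
noisy = boolean_select(2.0, 0.5)
bool_change = np.mean(np.abs(base.astype(float) - noisy.astype(float)))

cont_change = np.mean(np.abs(
    membership(clay, ec) -
    membership(clay + rng.normal(0.0, 2.0, n), ec + rng.normal(0.0, 0.5, n))))
print(f"mean change per pixel - Boolean: {bool_change:.3f}, continuous: {cont_change:.3f}")
```

The Boolean map pays a full 0-to-1 change at every flipped pixel, so its mean change under the same input error exceeds that of the continuous map, which matches the abstract's conclusion.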

19.
20.
1 Introduction
Subsidence, the vertical movement of the earth's crust, has occurred in many parts of the world, particularly in densely populated deltaic regions. With the occurrence of surface subsidence, a great deal of damage has been induced. Surface subsidence can result from natural causes, such as tectonic motion and sea-level rise; from man-made causes, such as excessive withdrawal of groundwater, geothermal fluids, oil and gas, or extraction of coal, sulphur, gold and other solids through mining or underground construction (tunnelling); or from other mixed causes such as hydro…
