Similar Literature
1.
The calculation of surface area is meaningful for a variety of space-filling phenomena, e.g., the packing of plants or animals within an area of land. With Digital Elevation Model (DEM) data we can calculate the surface area by using a continuous surface model, such as a Triangulated Irregular Network (TIN). However, as with the triangle-based surface area discussed in this paper, the surface area is generally biased because it is a nonlinear mapping of the DEM data, which contain measurement errors. To reduce this bias, we propose a second-order bias correction obtained by applying nonlinear error propagation to the triangle-based surface area. This process reveals that random errors in the DEM data result in a bias in the triangle-based surface area, while systematic errors in the DEM data can be reduced by using height differences. The bias is theoretically given by a probability integral, which can be approximated by numerical approaches including numerical integration and the Monte Carlo method; but these approaches require a theoretical distribution assumption about the DEM measurement errors and have a very high computational cost. In most cases, we only have variance information on the measurement errors; thus, a bias estimation based on nonlinear error propagation is proposed. Based on the proposed second-order bias estimation, the variance of the surface area can be improved immediately by removing the bias from the original variance estimation. The main results are verified by the Monte Carlo method and by numerical integration. They show that an unbiased surface area can be obtained by removing the proposed bias estimate from the triangle-based surface area originally calculated from the DEM data.
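A minimal Python sketch (not the authors' implementation) of the effect described: because surface area is a convex, nonlinear function of the heights, zero-mean DEM noise systematically inflates the triangle-based area, and a Monte Carlo run makes the bias visible. The test surface, grid spacing, and noise level are illustrative assumptions.

```python
import numpy as np

def tri_area(p, q, r):
    # Area of a 3D triangle from the cross product of two edge vectors
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def tin_surface_area(z, dx=1.0, dy=1.0):
    # Split every grid cell into two triangles and sum their 3D areas
    area = 0.0
    n, m = z.shape
    for i in range(n - 1):
        for j in range(m - 1):
            p00 = np.array([j * dx, i * dy, z[i, j]])
            p10 = np.array([(j + 1) * dx, i * dy, z[i, j + 1]])
            p01 = np.array([j * dx, (i + 1) * dy, z[i + 1, j]])
            p11 = np.array([(j + 1) * dx, (i + 1) * dy, z[i + 1, j + 1]])
            area += tri_area(p00, p10, p01) + tri_area(p10, p11, p01)
    return area

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(20.0), np.arange(20.0))
z_true = 5.0 * np.sin(x / 5.0) + 3.0 * np.cos(y / 4.0)  # error-free surface (assumed)
a_true = tin_surface_area(z_true)

sigma = 0.5  # assumed DEM measurement-error standard deviation (m)
areas = [tin_surface_area(z_true + rng.normal(0.0, sigma, z_true.shape))
         for _ in range(200)]
print(f"true area {a_true:.1f}, mean noisy area {np.mean(areas):.1f}, "
      f"positive bias {np.mean(areas) - a_true:.2f}")
```

The bias visible in the Monte Carlo mean is the quantity the paper's second-order error propagation approximates analytically from the height variances alone.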

2.
3.
This paper explores three theoretical approaches for estimating the degree of correctness to which the accuracy figures of a gridded Digital Elevation Model (DEM) have been estimated, depending on the number of checkpoints involved in the assessment process. The widely used average-error statistic Mean Square Error (MSE) was selected for measuring DEM accuracy. The work focused on DEM uncertainty assessment using approximate confidence intervals, constructed both from classical methods that assume a normal distribution of the error and from a new method based on a non-parametric approach. The first two approaches studied, Chi-squared and Asymptotic Student t, assume a normal distribution of the residuals; the assumption is strict in the first case, whereas the second, owing to the asymptotic properties of the t distribution, can perform reasonably well even with slightly non-normal residuals if the sample size is large enough. The third approach developed in this article is a new method based on the theory of estimating functions, which can be considered much more general than the previous two. It rests on a non-parametric approach in which no particular distribution is assumed, thereby avoiding the strong assumption of distribution normality accepted in previous work and in the majority of current standards of positional accuracy. The three approaches were tested using Monte Carlo simulation on several populations of residuals generated from originally sampled data. The original grid DEMs, considered as ground data, were collected by means of digital photogrammetric methods from seven areas displaying differing morphology, employing a 2 × 2 m sampling interval. The original grid DEMs were subsampled to generate new lower-resolution DEMs, each of which was then interpolated to retrieve its original resolution using two different procedures. Height differences between original and interpolated grid DEMs were calculated to obtain residual populations. One interpolation procedure produced slightly non-normal residual populations, whereas the other produced very non-normal residuals with frequent outliers. The Monte Carlo simulations allow us to report that the estimating-function approach was the most robust and general of those tested. In fact, the other two approaches, especially the Chi-squared method, were clearly affected by the degree of normality of the residual population distribution, producing less reliable results than the estimating-functions approach. This last method shows good results when applied to the different datasets, even in the case of more leptokurtic populations. In the worst cases, no more than 64–128 checkpoints were required to construct an estimate of the global error of the DEM with 95% confidence. The approach therefore is an important step towards saving time and money in the evaluation of DEM accuracy using a single average-error statistic. Nevertheless, we must take into account that MSE is essentially a single global measure of deviations, and is thus incapable of characterizing the spatial variations of errors over the interpolated surface.
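Among the three approaches, the classical Chi-squared interval is the simplest to reproduce. The sketch below (an illustration, not the paper's code) assumes zero-mean, normally distributed residuals at n checkpoints, so that n·MSE/σ² follows a chi-squared distribution with n degrees of freedom, giving a confidence interval for the true mean square error.

```python
import numpy as np
from scipy import stats

def mse_confidence_interval(residuals, alpha=0.05):
    # Under normality, n * MSE / sigma^2 ~ chi-squared with n degrees of freedom
    r = np.asarray(residuals, dtype=float)
    n = r.size
    mse = np.mean(r ** 2)
    lower = n * mse / stats.chi2.ppf(1 - alpha / 2, df=n)
    upper = n * mse / stats.chi2.ppf(alpha / 2, df=n)
    return mse, (lower, upper)

rng = np.random.default_rng(1)
checkpoints = rng.normal(0.0, 0.8, size=128)  # simulated DEM residuals (m), assumed normal
mse, (lo, hi) = mse_confidence_interval(checkpoints)
print(f"MSE = {mse:.3f} m^2, 95% CI = [{lo:.3f}, {hi:.3f}] m^2")
```

As the paper stresses, this interval degrades quickly when the residuals are non-normal, which is what motivates the estimating-function alternative.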

4.
Terrain attributes such as slope and aspect calculated from a digital elevation model (DEM) are important inputs to landslide susceptibility assessment models. DEM error leads to uncertainty in the computed terrain attributes, which in turn affects the results of landslide susceptibility assessment. This paper selects an expert-knowledge-based landslide susceptibility model and a logistic regression model, and uses Monte Carlo simulation to study the uncertainty in assessment results caused by DEM error. The study area is Kaixian County, Chongqing, in the middle and upper reaches of the Yangtze River. Based on a 5 m resolution DEM, sequential Gaussian simulation was used to generate 12 classes of DEM error fields of different magnitudes (error standard deviations of 1 m, 7.5 m, and 15 m) and spatial autocorrelation levels (ranges of 0 m, 30 m, 60 m, and 120 m), which were then used in the susceptibility assessment. Each simulation comprised 100 realizations, and for each simulation a standard-deviation layer and a classification-consistency-percentage layer of the assessment results were computed to evaluate the resulting uncertainty. The results show that, under different DEM accuracies, the overall uncertainties of the two models vary differently with the degree of spatial autocorrelation. At different levels of DEM spatial autocorrelation, the overall uncertainty of the expert-knowledge-based model shows varying trends as DEM error increases, whereas that of the logistic regression model increases monotonically with DEM error magnitude. In terms of overall result uncertainty, the logistic regression model is generally more dependent on DEM data quality than the expert-knowledge-based model.
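The sketch below illustrates the Monte Carlo design in miniature. Sequential Gaussian simulation is replaced by a simpler stand-in (Gaussian smoothing of white noise rescaled to a target standard deviation) that still exposes the two factors varied in the study, error magnitude and spatial autocorrelation; the susceptibility model is a placeholder slope rule, not either of the paper's models.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def error_field(shape, sd, corr_cells, rng):
    # Spatially autocorrelated error: smoothed white noise, rescaled to sd
    noise = rng.normal(size=shape)
    if corr_cells > 0:
        noise = gaussian_filter(noise, sigma=corr_cells / 3.0)
    return noise * (sd / noise.std())

def susceptibility(dem, cellsize=5.0):
    # Toy stand-in for a susceptibility model: rescaled slope gradient
    gy, gx = np.gradient(dem, cellsize)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    return np.clip(slope / 45.0, 0.0, 1.0)

rng = np.random.default_rng(2)
dem = np.cumsum(rng.normal(size=(100, 100)), axis=0)  # synthetic terrain
realizations = np.stack([
    susceptibility(dem + error_field(dem.shape, sd=7.5, corr_cells=12, rng=rng))
    for _ in range(100)  # 100 realizations per error scenario, as in the study
])
uncertainty = realizations.std(axis=0)  # per-cell standard-deviation layer
print("mean susceptibility uncertainty:", round(float(uncertainty.mean()), 4))
```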

5.
Digital elevation models (DEMs) have been widely used for a range of applications and form the basis of many GIS-related tasks. An essential aspect of a DEM is its accuracy, which depends on a variety of factors, such as source data quality, interpolation methods, data sampling density and the surface topographical characteristics. In recent years, point measurements acquired directly from land surveying such as differential global positioning system and light detection and ranging have become increasingly popular. These topographical data points can be used as the source data for the creation of DEMs at a local or regional scale. The errors in point measurements can be estimated in some cases. The focus of this article is on how the errors in the source data propagate into DEMs. The interpolation method considered is a triangulated irregular network (TIN) with linear interpolation. Both horizontal and vertical errors in source data points are considered in this study. An analytical method is derived for the error propagation into any particular point of interest within a TIN model. The solution is validated using Monte Carlo simulations and survey data obtained from a terrestrial laser scanner.
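For the vertical-error part of the propagation, a linearly interpolated TIN height is a barycentric (weighted) sum of the three vertex heights, so with independent vertex errors its variance is the weighted sum of vertex variances. The sketch below shows this and checks it by Monte Carlo; the horizontal error component treated in the paper is omitted for brevity.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    # Barycentric coordinates of 2D point p in triangle (a, b, c)
    t = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l1, l2 = np.linalg.solve(t, np.asarray(p, float) - np.asarray(a, float))
    return np.array([1.0 - l1 - l2, l1, l2])

a, b, c = (0.0, 0.0), (10.0, 0.0), (0.0, 10.0)  # triangle vertices (x, y), assumed
z = np.array([5.0, 7.0, 6.0])                   # vertex heights (m)
sigma = np.array([0.10, 0.15, 0.12])            # vertex height std devs (m), assumed

w = barycentric_weights((3.0, 4.0), a, b, c)
z_hat = w @ z
var_analytic = np.sum(w ** 2 * sigma ** 2)      # error propagation for a linear map

rng = np.random.default_rng(3)
sims = (z + rng.normal(0.0, sigma, size=(20000, 3))) @ w  # Monte Carlo check
print(f"z = {z_hat:.3f} m, analytic var {var_analytic:.5f}, MC var {sims.var():.5f}")
```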

6.

When data on environmental attributes such as those of soil or groundwater are manipulated by logical cartographic modelling, the results are usually assumed to be exact. However, in reality the results will be in error because the values of input attributes cannot be determined exactly. This paper analyses how errors in such values propagate through Boolean and continuous modelling, involving the intersection of several maps. The error analysis is carried out using Monte Carlo methods on data interpolated by block kriging to a regular grid, which yields predictions and prediction-error standard deviations of attribute values for each pixel. The theory is illustrated by a case study concerning the selection of areas of medium-textured, non-saline soil at an experimental farm in Alberta, Canada. The results suggest that Boolean methods of sieve mapping are much more prone to error propagation than the more robust continuous equivalents. More study of the effects of errors, and of the choice of attribute classes and class parameters, on error propagation is recommended.
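A hedged sketch of the paper's central contrast: the same joint criterion evaluated as a crisp Boolean sieve on the kriged predictions versus a continuous, per-pixel probability obtained by Monte Carlo simulation of the prediction errors. The two attributes, thresholds, and error magnitudes are illustrative assumptions, not the Alberta case-study values.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_sim = 1000, 500
clay_pred = rng.uniform(20.0, 50.0, n_pix)  # kriging predictions (% clay)
ec_pred = rng.uniform(1.0, 8.0, n_pix)      # kriging predictions (salinity, dS/m)
clay_sd, ec_sd = 4.0, 1.0                   # kriging prediction std devs (assumed)

hits = np.zeros(n_pix)
for _ in range(n_sim):  # Monte Carlo over the prediction errors
    clay = clay_pred + rng.normal(0.0, clay_sd, n_pix)
    ec = ec_pred + rng.normal(0.0, ec_sd, n_pix)
    hits += (clay < 35.0) & (ec < 4.0)
prob_ok = hits / n_sim                      # continuous: probability map

boolean_map = (clay_pred < 35.0) & (ec_pred < 4.0)  # crisp sieve map
shaky = boolean_map & (prob_ok < 0.5)       # accepted despite weak joint support
print(f"{shaky.sum()} of {boolean_map.sum()} 'suitable' pixels have <50% support")
```

Pixels flagged as `shaky` are where the Boolean sieve is most error-prone, which is the behaviour the case study documents.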

7.
The weights-of-evidence model (a Bayesian probability model) was applied to the task of evaluating landslide susceptibility using GIS. Using landslide locations and a spatial database containing information such as topography, soil, forest, geology, land cover and lineaments, the weights-of-evidence model was applied to calculate each relevant factor's rating for the Boun area in Korea, which had suffered substantial landslide damage following heavy rain in 1998. In the topographic database, the factors were slope, aspect and curvature; in the soil database, they were soil texture, soil material, soil drainage, soil effective thickness and topographic type; in the forest map, they were forest type, timber diameter, timber age and forest density; lithology was derived from the geological database; land-use information came from Landsat TM satellite imagery; and lineament data from IRS satellite imagery. Tests of conditional independence were performed for the selection of factors, allowing 43 combinations of factors to be analysed. For the analysis of mapping landslide susceptibility, the contrast values, W+ and W−, of each factor's rating were overlaid spatially. The results of the analysis were validated using the known landslide locations. The combination of slope, curvature, topography, timber diameter, geology and lineament showed the best results. The results can be used for hazard prevention and land-use planning.
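A minimal sketch of the weights-of-evidence ratings named above: for a binary evidence layer B (e.g. one slope class) and landslide occurrences D, W+ = ln[P(B|D)/P(B|~D)] and W− = ln[P(~B|D)/P(~B|~D)], and the contrast C = W+ − W− rates the class. The arrays here are synthetic stand-ins for the Boun-area layers.

```python
import numpy as np

def weights_of_evidence(evidence, landslide):
    b, d = evidence.astype(bool), landslide.astype(bool)
    p_b_d = (b & d).sum() / d.sum()        # P(B | D)
    p_b_nd = (b & ~d).sum() / (~d).sum()   # P(B | ~D)
    w_plus = np.log(p_b_d / p_b_nd)
    w_minus = np.log((1.0 - p_b_d) / (1.0 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus

rng = np.random.default_rng(5)
steep = rng.random(10000) < 0.3            # evidence layer: steep-slope class
# Seed landslides more densely on steep cells so the class earns a positive contrast
slide = rng.random(10000) < np.where(steep, 0.08, 0.02)
wp, wm, contrast = weights_of_evidence(steep, slide)
print(f"W+ = {wp:.3f}, W- = {wm:.3f}, contrast = {contrast:.3f}")
```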

8.
Spatial data uncertainty models (SDUM) are necessary tools that quantify the reliability of results from geographical information system (GIS) applications. One technique used by SDUM is Monte Carlo simulation, a technique that quantifies spatial data and application uncertainty by determining the possible range of application results. A complete Monte Carlo SDUM for generalized continuous surfaces typically has three components: an error magnitude model, a spatial statistical model defining error shapes, and a heuristic that creates multiple realizations of error fields added to the generalized elevation map. This paper introduces a spatial statistical model that represents multiple statistics simultaneously and weighted against each other. This paper's case study builds a SDUM for a digital elevation model (DEM). The case study accounts for relevant shape patterns in elevation errors by reintroducing specific topological shapes, such as ridges and valleys, in appropriate localized positions. The spatial statistical model also minimizes topological artefacts, such as cells without outward drainage and inappropriate gradient distributions, which are frequent problems with random field-based SDUM. Multiple weighted spatial statistics enable two conflicting SDUM philosophies to co-exist. The two philosophies are ‘errors are only measured from higher quality data’ and ‘SDUM need to model reality’. This article uses an automatic parameter fitting random field model to initialize Monte Carlo input realizations followed by an inter-map cell-swapping heuristic to adjust the realizations to fit multiple spatial statistics. The inter-map cell-swapping heuristic allows spatial data uncertainty modelers to choose the appropriate probability model and weighted multiple spatial statistics which best represent errors caused by map generalization. This article also presents a lag-based measure to better represent gradient within a SDUM. This article covers the inter-map cell-swapping heuristic as well as both probability and spatial statistical models in detail.
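A heavily simplified sketch of the cell-swapping idea: start from a random-field realization and repeatedly swap pairs of cells, keeping a swap only if it lowers a weighted objective built from several spatial statistics. Swapping preserves the error histogram while reshaping its spatial pattern. The two statistics and their targets below are assumptions standing in for the paper's weighted set.

```python
import numpy as np

def lag1_autocorr(f):
    # East-west lag-1 autocorrelation of the field
    return np.corrcoef(f[:, :-1].ravel(), f[:, 1:].ravel())[0, 1]

def objective(f, target_rho=0.8, target_grad=0.5, w=(1.0, 1.0)):
    grad = np.abs(np.diff(f, axis=1)).mean()  # lag-based gradient summary
    return (w[0] * (lag1_autocorr(f) - target_rho) ** 2
            + w[1] * (grad - target_grad) ** 2)

rng = np.random.default_rng(6)
field = rng.normal(0.0, 1.0, (40, 40))  # initial random-field realization
score = objective(field)
for _ in range(20000):
    (i1, j1), (i2, j2) = rng.integers(0, 40, size=(2, 2))
    field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]
    new = objective(field)
    if new < score:
        score = new  # keep the improving swap
    else:
        field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]  # undo
print("final objective:", round(float(score), 5))
```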

9.
Terrain attributes such as slope gradient and slope shape, computed from a gridded digital elevation model (DEM), are important input data for landslide susceptibility mapping. Errors in DEM can cause uncertainty in terrain attributes and thus influence landslide susceptibility mapping. Monte Carlo simulations have been used in this article to compare uncertainties due to DEM error in two representative landslide susceptibility mapping approaches: a recently developed expert knowledge and fuzzy logic-based approach to landslide susceptibility mapping (efLandslides), and a logistic regression approach that is representative of multivariate statistical approaches to landslide susceptibility mapping. The study area is located in the middle and upper reaches of the Yangtze River, China, and includes two adjacent areas with similar environmental conditions – one for efLandslides model development (approximately 250 km2) and the other for model extrapolation (approximately 4600 km2). Sequential Gaussian simulation was used to simulate DEM error fields at 25-m resolution with different magnitudes and spatial autocorrelation levels. Nine sets of simulations were generated. Each set included 100 realizations derived from a DEM error field specified by possible combinations of three standard deviation values (1, 7.5, and 15 m) for error magnitude and three range values (0, 60, and 120 m) for spatial autocorrelation. The overall uncertainties of both efLandslides and the logistic regression approach attributable to each model-simulated DEM error were evaluated based on a map of standard deviations of landslide susceptibility realizations. The uncertainty assessment showed that the overall uncertainty in efLandslides was less sensitive to DEM error than that in the logistic regression approach and that the overall uncertainties in both efLandslides and the logistic regression approach for the model-extrapolation area were generally lower than in the model-development area used in this study. Boxplots were produced by associating an independent validation set of 205 observed landslides in the model-extrapolation area with the resulting landslide susceptibility realizations. These boxplots showed that for all simulations, efLandslides produced more reasonable results than logistic regression.

10.
We use a GIS-based agent-based model (ABM), named dynamic ecological exurban development (DEED), with spatial data in hypothetical scenarios to evaluate the individual and interacting effects of lot-size zoning and municipal land-acquisition strategies on possible forest-cover outcomes in Scio Township, a municipality in Southeastern Michigan. Agent types, characteristics, behavioural methods, and landscape perceptions (i.e. landscape aesthetics) are empirically informed using survey data, spatial analyses, and a USDA methodology for mapping landscape aesthetic quality. Results from our scenario experiments computationally verified literature showing that large lot-size zoning policies lead to greater sprawl, and that large lot-size zoning policies can lead to increased forest cover, although we found this effect to be small relative to municipal land acquisition. The return on land acquisition for forest conservation was strongly affected by the location strategy used to select parcels for conservation. Furthermore, the location strategy for forest-conservation land acquisition was more effective at increasing aggregate forest levels than the independent zoning policies, the quantity of area acquired for forest conservation, or any combination of the two. The results, using an integrated GIS and ABM framework for evaluating land-use development policies on forest cover, provide additional insight into how these types of policies may play out over time and which aspects of the policies are more influential towards the goal of maximising forest cover.

11.
As sea level is projected to rise throughout the twenty-first century due to climate change, there is a need to ensure that sea level rise (SLR) models accurately and defensibly represent future flood inundation levels to allow for effective coastal zone management. Digital elevation models (DEMs) are integral to SLR modelling, but are subject to error, including in their vertical resolution. Error in DEMs leads to uncertainty in the output of SLR inundation models, which, if not considered, may result in poor coastal management decisions. However, DEM error is not usually described in detail by DEM suppliers; commonly only the RMSE is reported. This research explores the impact of stated vertical error in delineating zones of inundation in two locations along the Devon, United Kingdom, coastline (Exe and Otter Estuaries). We explore the consequences of needing to make assumptions about the distribution of error in the absence of detailed error data using a 1 m, publicly available composite DEM with a maximum RMSE of 0.15 m, typical of recent LiDAR-derived DEMs. We compare uncertainty using two methods: (i) the NOAA inundation uncertainty mapping method, which assumes a normal distribution of error, and (ii) a hydrologically correct bathtub method where the DEM is uniformly perturbed between the upper and lower bounds of a 95% linear error in 500 Monte Carlo simulations (HBM+MCS). The NOAA method produced a broader zone of uncertainty (an increase of 134.9% on the HBM+MCS method), which is particularly evident in the flatter topography of the upper estuaries. The HBM+MCS method generates a narrower band of uncertainty for these flatter areas, but very similar extents where shorelines are steeper. The differences in inundation extents produced by the methods relate to a number of underpinning assumptions and, particularly, to how the stated RMSE is interpreted and used to represent error in a practical sense. Unlike the NOAA method, the HBM+MCS model is computationally intensive, depending on the area under consideration and the number of iterations. We therefore used the HBM+MCS method to derive a regression relationship between elevation and inundation probability for the Exe Estuary. We then apply this to the adjacent Otter Estuary and show that it can defensibly reproduce zones of inundation uncertainty, avoiding the computationally intensive step of the HBM+MCS. The equation-derived zone of uncertainty was 112.1% larger than that of the HBM+MCS method, compared with the NOAA method, which produced an uncertain area 423.9% larger. Each approach has advantages and disadvantages and requires value judgements to be made. Their use underscores the need for transparency in assumptions and communication of outputs. We urge DEM publishers to move beyond provision of a generalised RMSE and provide more detailed estimates of spatial error and complete metadata, including locations of ground control points and associated land cover.
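A hedged sketch of the two uncertainty mappings being compared. The NOAA-style map treats each cell's elevation error as N(0, RMSE) and reports the probability that the true surface lies below the water level; the HBM+MCS-style map perturbs the DEM uniformly within the 95% linear error over repeated realizations and counts how often each cell floods. Hydrological connectivity, which the paper's bathtub method enforces, is omitted here for brevity.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
dem = rng.uniform(0.0, 2.0, (200, 200))  # synthetic coastal DEM (m), assumed
water = 1.0                              # scenario water level (m)
rmse = 0.15                              # supplier-stated RMSE (m)

# (i) NOAA-style: per-cell normal error model
p_noaa = norm.cdf((water - dem) / rmse)

# (ii) bathtub + Monte Carlo: uniform perturbation within the 95% linear error
le95 = 1.96 * rmse
flooded = np.zeros(dem.shape)
for _ in range(500):                     # 500 realizations, as in the paper
    flooded += (dem + rng.uniform(-le95, le95)) < water
p_mcs = flooded / 500.0

uncertain = lambda p: float(((p > 0.025) & (p < 0.975)).mean())
print(f"uncertain-zone share: NOAA {uncertain(p_noaa):.3f}, MCS {uncertain(p_mcs):.3f}")
```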

12.
Mineral exploration activities require robust predictive models that result in accurate mapping of the probability that mineral deposits can be found at a certain location. Random forest (RF) is a powerful data-driven machine-learning predictive method that remains largely unknown in mineral potential mapping. In this paper, the performance of RF regression for mapping the likelihood of gold deposits in the Rodalquilar mining district is explored. The RF model was developed using a comprehensive exploration GIS database composed of: gravimetric and magnetic surveys, a lithogeochemical survey of 59 elements, lithology and fracture maps, a Landsat 5 Thematic Mapper image, and gold occurrence locations. The results of this study indicate that the use of RF for the integration of the large multisource data sets used in mineral exploration, and for the prediction of mineral deposit occurrences, offers several advantages over existing methods. Key advantages of RF include: (1) the simplicity of parameter setting; (2) an internal unbiased estimate of the prediction error; (3) the ability to handle complex data of different statistical distributions, responding to nonlinear relationships between variables; (4) the capability to use categorical predictors; and (5) the capability to determine variable importance. Additionally, the variables that RF identified as most important coincide with well-known geologic expectations. To validate and assess the effectiveness of the RF method, gold prospectivity maps were also prepared using the logistic regression (LR) method. Statistical measures of map quality indicate that the RF method performs better than LR, with mean square errors equal to 0.12 and 0.19, respectively. The efficiency of RF is also better, achieving an optimum success rate when half of the area predicted by LR is considered.
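A minimal sketch of the RF-versus-LR comparison on synthetic stand-ins for the exploration layers (the Rodalquilar database itself is not reproduced here). It also surfaces the two RF by-products the paper highlights: the internal out-of-bag error estimate and the variable importances.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.normal(size=(2000, 6))  # stand-ins for gravity, magnetics, geochemistry...
signal = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + X[:, 2]   # nonlinear ground truth
y = (signal + rng.normal(0.0, 1.0, 2000)) > 1.0          # "deposit present"

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                            random_state=0).fit(Xtr, ytr)
lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

for name, model in (("RF", rf), ("LR", lr)):
    mse = brier_score_loss(yte, model.predict_proba(Xte)[:, 1])  # MSE of probabilities
    print(f"{name}: mean square error {mse:.3f}")
print("RF out-of-bag score:", round(rf.oob_score_, 3))
print("RF variable importances:", rf.feature_importances_.round(3))
```

On nonlinear data like this, RF typically posts the lower mean square error, mirroring the 0.12 versus 0.19 contrast reported above.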

13.
Areal interpolation is the process by which data collected from one set of zonal units can be estimated for another zonal division of the same space that shares few or no boundaries with the first. In previous research, we outlined the use of dasymetric mapping for areal interpolation and showed it to be the most accurate method tested. There we used control information derived from classified satellite imagery to parameterize the dasymetric method, but because such data are rife with errors, here we extend the work to examine the sensitivity of the population estimates to error in the classified imagery. Results show the population estimates by dasymetric mapping to be largely insensitive to the errors of classification in the Landsat image when compared with the other methods tested. The dasymetric method deteriorates to the accuracy of the next worst estimate only when 40% error occurs in the classified image, a level of error that may easily be bettered within most remote sensing projects.
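A minimal sketch of dasymetric areal interpolation with zone geometries reduced to labelled grids: the control layer (a binary inhabited mask derived, in the paper, from classified imagery) restricts where each source zone's population may be spread before re-aggregating to the target zones. The zone layouts and totals are illustrative assumptions.

```python
import numpy as np

def dasymetric(source_pop, source_zones, target_zones, inhabited):
    # Spread each source zone's total uniformly over its inhabited cells,
    # then sum the resulting density surface within each target zone.
    density = np.zeros(source_zones.shape, dtype=float)
    for zone_id, pop in source_pop.items():
        cells = (source_zones == zone_id) & inhabited
        if cells.any():
            density[cells] = pop / cells.sum()
    return {t: float(density[target_zones == t].sum())
            for t in np.unique(target_zones)}

source_zones = np.repeat([[1, 2]], 4, axis=0).repeat(2, axis=1)    # two west/east zones
target_zones = np.repeat([[1], [2]], 2, axis=0).repeat(4, axis=1)  # two north/south zones
inhabited = np.array([[1, 1, 0, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 1],
                      [0, 1, 1, 0]], dtype=bool)  # control layer from imagery
print(dasymetric({1: 1000, 2: 400}, source_zones, target_zones, inhabited))
```

Misclassified control cells simply shift where the population is spread within a zone, which is why the estimates degrade only gradually with classification error.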

14.
Impact of land-cover change on land surface temperature in a mountainous city
彭征  廖和平  郭月婷  李清 《地理研究》2009,28(3):673-684
To address the complex urban landforms and land-surface types of a mountainous city, this paper uses TM and ETM+ remote sensing imagery together with DEM data to extract land use/cover types for Chongqing, and retrieves land surface temperature for 1988 and 2000 from the thermal infrared bands of TM and MSS data. Land-cover change in Chongqing over the roughly ten-year period and its impact on land surface temperature are then analysed. The results show that land cover changed markedly between 1988 and 2000, with the area of urban land in particular increasing significantly. Changes in land-cover type alter the spatial distribution of land surface temperature; in particular, urban expansion raises it. Land surface temperatures of four types of cultivated land (mountainous, hilly, flat, and steep-slope) were analysed in depth. The results show that land-cover change in the mountainous city altered fractional vegetation cover, which in turn affected land surface temperature: for every 10% decrease in fractional vegetation cover, land surface temperature rose by 0.49 K.

15.
张春华  李修楠  吴孟泉  秦伟山  张筠 《地理科学》2018,38(11):1904-1913
Using 2015 Landsat 8 OLI imagery and a DEM as classification data sources, combined with field survey data, an object-oriented classification method was applied to extract land-cover information for the Kunyushan (昆嵛山) area, and the classification results were assessed for accuracy and compared with an alternative. The study shows that the classes extracted by object-oriented classification are continuous with clear boundaries, and the classification results agree well with actual conditions. The dominant land-cover type in the Kunyushan area is coniferous forest, covering 1,546.81 km². The overall accuracy and Kappa coefficient of the land-cover classification are 91.5% and 0.88, respectively, with producer's accuracies for coniferous forest, grassland, water, and built-up land all above 87%. Compared with supervised classification, the land-cover extraction method proposed here improves overall accuracy and the Kappa coefficient by 14.7% and 0.17, respectively. Object-oriented classification of medium-resolution remote sensing imagery can thus yield land-cover information of relatively high accuracy and provides a methodological reference for land-cover classification over large areas.
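A minimal sketch of the reported accuracy measures (overall accuracy, Kappa coefficient, producer's accuracy) computed from a confusion matrix of reference versus mapped labels; the matrix values are illustrative, not the study's.

```python
import numpy as np

cm = np.array([[520, 12,  5,  3],   # rows: reference class
               [ 15, 88,  4,  2],   # columns: mapped class
               [  4,  3, 70,  1],
               [  6,  2,  2, 63]], dtype=float)

n = cm.sum()
overall = np.trace(cm) / n
p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # expected chance agreement
kappa = (overall - p_chance) / (1.0 - p_chance)
producers = np.diag(cm) / cm.sum(axis=1)  # per-class producer's accuracy
print(f"overall {overall:.3f}, kappa {kappa:.3f}, producer's {producers.round(3)}")
```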

16.
Based on Lanzhou's 1:50,000 land-use map from 1995, 1:4,000 colour-infrared aerial photographs from 2001, Landsat TM imagery from 2005, a 1:50,000 DEM, and the 2001–2010 urban master plan, and combining GIS with the DUEM model, two scenarios were designed to simulate urban land-use change in Lanzhou. The results show that urban land-use growth in Lanzhou is concentrated in Yantan in Chengguan District; Matan and Cuijiadatan in Qilihe District; and Anningbao Township, Shajingyi Township, Cuijiazhuang, and Yingmentan in Anning District. The trend of urban land-use change is strongly influenced by the urban master plan, government policy, and related factors. The study also found that the DUEM model has certain limitations in practical applications.

17.
Influence of survey strategy and interpolation model on DEM quality
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, the field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey-strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded that found between interpolation techniques for a specific survey strategy. Strong relationships were also found between local surface topographic variation (defined as the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors, with much greater errors found at slope breaks such as bank edges. A series of curves is presented that demonstrates these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
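A hedged sketch of the comparison workflow: sample a reference surface (standing in for the TLS scan), rebuild it with different interpolators, and summarise the vertical error. SciPy's `griddata` supplies the TIN-with-linear-interpolation case; point kriging would need an extra package such as pykrige, so it is not shown.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(9)
gx, gy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
reference = np.sin(gx) * np.cos(gy)     # stand-in for the TLS reference surface

pts = rng.uniform(0, 10, (300, 2))      # one simulated survey strategy (random points)
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])

for method in ("linear", "cubic", "nearest"):
    dem = griddata(pts, vals, (gx, gy), method=method)
    err = dem - reference
    rmse = np.sqrt(np.nanmean(err ** 2))  # NaNs occur outside the convex hull
    print(f"{method:8s} RMSE = {rmse:.4f}")
```

Swapping `pts` for points arranged as cross sections or breakline outlines reproduces the survey-strategy axis of the experiment.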

18.
The increasing use of Geographical Information System applications has generated strong interest in the assessment of data quality. As an example of quantitative raster data, we analysed errors in Digital Terrain Models (DTMs). Errors may be classified as systematic (strongly dependent on the production methodology) and random. The present work attempts to locate some types of randomly distributed, weakly spatially correlated errors by applying a new methodology based on Principal Components Analysis. The Principal Components approach presented is very different from the typical scheme used in image processing. A prototype implementation has been conducted using MATLAB, and the overall procedure has been numerically tested using a Monte Carlo approach. A DTM of Stockholm, with integer-valued heights varying from 0 to 59 m, has been used as a testbed. The model was contaminated by adding randomly located errors, distributed uniformly between −4 m and +4 m. The procedure has been applied using both spike-shaped (isolated) errors and pyramid-like errors. The preliminary results show that for the former, roughly half of the errors were located with a Type I error probability of 4.6 per cent on average, checking up to 1 per cent of the dataset. The associated Type II error for the largest errors (of exactly −4 m or +4 m) drops from an initial value of 1.21 per cent down to 0.63 per cent. By checking another 1 per cent of the dataset, this error drops to 0.34 per cent, implying that about 71 per cent of the ±4 m errors were located; the Type I error was below 11.27 per cent. The results for pyramid-like errors are slightly worse, with a Type I error of 25.80 per cent on average for the first 1 per cent effort, and a Type II error drop from an initial value of 0.81 per cent down to 0.65 per cent. The procedure can be applied both for error detection during DTM generation and by end users. It might also be used for other types of quantitative raster data.
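A hedged sketch, far simpler than the paper's MATLAB procedure, of PCA-based screening for spike errors: treat the grid rows as observations, reconstruct the surface from a few principal components, and flag the largest reconstruction residuals as candidate errors to check. The component count and the seeded error pattern are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
x, y = np.meshgrid(np.arange(200.0), np.arange(100.0))
dtm = 10.0 * np.sin(x / 30.0) + 5.0 * np.cos(y / 15.0)  # smooth "true" terrain
spikes = rng.choice(dtm.size, 30, replace=False)        # 30 seeded spike errors
dtm.ravel()[spikes] += rng.choice([-4.0, 4.0], 30)      # +/-4 m style contamination

mean = dtm.mean(axis=0)
u, s, vt = np.linalg.svd(dtm - mean, full_matrices=False)
k = 5                                # principal components kept (assumed)
recon = (u[:, :k] * s[:k]) @ vt[:k] + mean
residual = np.abs(dtm - recon)       # spikes reconstruct poorly

budget = dtm.size // 100             # "1 per cent of the dataset" checking effort
check = np.argsort(residual.ravel())[::-1][:budget]
print(f"checking 1% of cells finds {np.isin(check, spikes).sum()} of 30 spikes")
```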

19.
Digital elevation model (DEM) elevation accuracy and spatial resolution are typically considered before a given DEM is used for the assessment of coastal flooding, sea-level rise or erosion risk. However, limitations of DEMs arising from their original data source can often be overlooked during DEM selection. Global elevation error statistics supplied by DEM providers can give a useful indicator of actual DEM error, but these statistics can understate elevation errors occurring outside of idealised ground reference areas. The characteristic limitations of a range of DEM sources that may be used for the assessment of coastal inundation and erosion risk are tested using high-resolution photogrammetric, low- and medium-resolution global positioning system (GPS)-derived, and very high-resolution terrestrial laser scanning point data sets. Errors detected in a high-resolution photogrammetric DEM are found to be substantially beyond the quoted error, demonstrating the degree to which quoted DEM accuracy can understate local DEM error and highlighting the extent to which spatial resolution can fail to provide a reliable indicator of DEM accuracy. Superior accuracies and inundation prediction results are achieved from much lower-resolution GPS points, confirming the conclusions drawn in the case of the photogrammetric DEM data. This suggests scope for the use of GPS-derived DEMs in preference to photogrammetric DEM data in large-scale risk-mapping studies. The DEM accuracies and superior representation of micro-topography achieved using high-resolution terrestrial laser scan data confirm its advantages for the prediction of subtle inundation and erosion risk. However, the requirement to fuse GPS data in order to remove ground-vegetation error highlighted limitations of side-scan laser data in densely vegetated areas.

20.
Loci of extreme curvature of the topographic surface may be defined by the derivation function (T), which depends on the first-, second-, and third-order partial derivatives of elevation. The loci may partially describe ridge and thalweg lines. The first- and second-order partial derivatives are commonly calculated from a digital elevation model (DEM) by fitting a second-order polynomial to a 3×3 window; this approach cannot be used to compute the third-order partial derivatives and T. We deduced formulae to estimate the first-, second-, and third-order partial derivatives from a DEM by fitting a third-order polynomial to a 5×5 window. The polynomial is approximated (rather than interpolated) to the elevation values of the window, which produces a local denoising that may enhance the calculations. For the same DEM grid size and root mean square error (RMSE) of elevation, calculating the second-order partial derivatives by the method developed results in a significantly lower RMSE of the derivatives than using the second-order polynomial and the 3×3 window. An RMSE expression for the derivation function is deduced. The method proposed can be applied to derive any local topographic variable, such as slope gradient, aspect, curvatures, and T. Treatment of a DEM by the method developed demonstrated that T mapping may not substitute for regional logistic algorithms in detecting ridge/thalweg networks. However, the third-order partial derivatives of elevation can be used in digital terrain analysis, particularly in landform classifications.
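A minimal sketch of the estimator described: fit a full third-order bivariate polynomial (10 coefficients) to the 25 elevations of a 5×5 window by least squares, then read the partial derivatives at the central cell from the low-order coefficients. The check surface is a known polynomial, so the recovered derivatives are exact.

```python
import numpy as np

def derivatives_5x5(window, w=1.0):
    """First- and second-order partials at the centre of a 5x5 DEM window."""
    offs = np.arange(-2, 3) * w
    x, y = np.meshgrid(offs, offs)
    x, y, z = x.ravel(), y.ravel(), window.ravel()
    # Design-matrix columns: 1, x, y, x^2, xy, y^2, x^3, x^2*y, x*y^2, y^3
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y,
                         x ** 3, x * x * y, x * y * y, y ** 3])
    c, *_ = np.linalg.lstsq(A, z, rcond=None)  # least-squares (approximating) fit
    zx, zy = c[1], c[2]                        # first-order partials at (0, 0)
    zxx, zxy, zyy = 2 * c[3], c[4], 2 * c[5]   # second-order partials
    return zx, zy, zxx, zxy, zyy

# Check on z = 0.5*x^2 - x*y + y^2 + 2*x: expect zx=2, zy=0, zxx=1, zxy=-1, zyy=2
offs = np.arange(-2.0, 3.0)
X, Y = np.meshgrid(offs, offs)
Z = 0.5 * X ** 2 - X * Y + Y ** 2 + 2.0 * X
print(np.round(derivatives_5x5(Z), 6))
```

The third-order coefficients c[6]..c[9] supply the third-order partials needed for the derivation function T, which the 3×3 second-order fit cannot provide.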
