Similar Documents
1.
Soil erodibility is one of the most important factors in spatial soil erosion risk assessment. Soil information derived from soil maps is used to generate the soil erodibility factor map, but soil maps are rarely available at an appropriate scale. In general, small-scale soil maps are used to derive the erodibility map, which largely ignores spatial variability because soil map units are discrete polygons. The present study attempted to generate a soil erodibility map using terrain indices derived from a digital terrain model (DTM) together with surface soil sample data, since soil variability in hilly landscapes is largely controlled by topography. The CartoDEM (30 m) was used to derive terrain indices such as the terrain wetness index (TWI), stream power index (SPI), sediment transport index (STI) and slope parameters. A total of 95 surface soil samples were collected to compute soil erodibility factor (K) values. The K values ranged from 0.23 to 0.81 t ha⁻¹ R⁻¹ in the watershed. Correlation analysis between the K factor and terrain parameters showed the highest correlation of soil erodibility with TWI (r² = 0.561), followed by slope (r² = 0.33). A multiple linear regression model was developed to derive soil erodibility from terrain parameters. A set of 20 soil sample points was used to assess the accuracy of the model; the coefficient of determination (r²) and RMSE were computed to be 0.76 and 0.07 t ha⁻¹ R⁻¹, respectively. The proposed methodology is quite useful for generating a soil erodibility factor map from a digital elevation model (DEM) for any hilly terrain, although the equation/model needs to be established for the particular terrain under study. The developed model was used to generate a spatial soil erodibility factor (K) map of the watershed in the lower Himalayan range.
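The regression step can be sketched as an ordinary least-squares fit of K against TWI and slope via the normal equations. This is a generic OLS sketch, not the study's fitted model: the coefficients and sample points below are invented for illustration.

```python
def fit_mlr(rows, ys):
    """Ordinary least squares for K = b0 + b1*TWI + b2*slope (or any
    number of terrain predictors): solve the normal equations
    (X^T X) b = X^T y by Gaussian elimination with partial pivoting."""
    n = len(rows[0]) + 1                      # predictors + intercept
    X = [[1.0] + list(r) for r in rows]       # prepend intercept column
    m = len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(m)) for q in range(n)]
         for p in range(n)]                   # X^T X
    v = [sum(X[i][p] * ys[i] for i in range(m)) for p in range(n)]  # X^T y
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0] * n                             # back-substitution
    for r in range(n - 1, -1, -1):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, n))) / A[r][r]
    return b

# invented (TWI, slope) points and a made-up true relation for the demo
pts = [(4, 5), (6, 10), (8, 3), (10, 20), (12, 15), (5, 25)]
k_obs = [0.20 + 0.03 * t - 0.004 * s for t, s in pts]
b0, b1, b2 = fit_mlr(pts, k_obs)
```

With noise-free synthetic data the fit recovers the generating coefficients up to floating-point error; with the 95 field samples the same machinery would yield a K = f(TWI, slope) model of the kind the abstract describes.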

2.
With the rapid development of the World Wide Web, remote sensing (RS) data have become available to a wider range of public and professional users than ever before. Web Map Services (WMSs) provide a simple Web interface for requesting RS data from distributed geospatial databases. RS data providers typically expect to provide lightweight WMSs: they have a low construction cost, and can be easily managed and deployed on standard hardware/software platforms. However, existing systems for WMSs are often heavyweight and inherently hard to manage, due to their improper usage of databases or data storage; that is, they are not suitable for public data services on the Web. In addition, RS data are moving toward a multi-dimensional paradigm characterized by multi-sensor, multi-spectral, multi-temporal and high-resolution data. Therefore, an efficient organization and storage approach for multi-dimensional RS data is needed for lightweight WMSs, and efficient WMSs must support multi-dimensional Web browsing. In this paper, we propose a Global Remote Sensing Data Hierarchical Model (GRHM) based on image pyramid and tiling techniques. GRHM is a logical model that is independent of physical storage. To support lightweight WMSs, we propose a physical storage structure and deploy multi-dimensional RS data on Web servers. To further improve the performance of WMSs, a data-declustering method based on the Hilbert space-filling curve is adopted for distributed storage. We also provide an Open Geospatial Consortium (OGC) WMS and a Web map system running in Web browsers. Experiments conducted on real RS datasets show promising performance of the proposed lightweight WMSs.
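The Hilbert-curve declustering can be sketched with the standard iterative index computation: each tile's (x, y) position on a 2^k × 2^k pyramid level is mapped to its distance d along the curve, and tiles are then spread over servers. The round-robin modulo assignment shown is an illustrative assumption, not necessarily the paper's exact scheme.

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) of an n-by-n grid (n a power of two) to its
    position d along the Hilbert space-filling curve (0 <= d < n*n).
    Classic iterative formulation: peel off one quadrant bit per level,
    then rotate/reflect the remaining bits into that quadrant's frame."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/reflect lower-order bits
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def server_for_tile(n, x, y, num_servers):
    """Decluster: neighbouring tiles get near-consecutive Hilbert
    indices, so modulo assignment spreads any viewport across servers."""
    return hilbert_index(n, x, y) % num_servers
```

Because the Hilbert curve preserves locality, the tiles covering any rectangular viewport carry near-consecutive indices and therefore land on different servers, balancing the request load.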

3.
In karst areas, accurately measuring and managing the spatial variability of soil water content (SWC) is critical for settling numerous issues such as karst rocky desertification, ecosystem reconstruction, etc. In these areas, SWC exhibits strong spatial dependence, and measuring its spatial variability is a time- and labor-consuming procedure. Therefore, estimating this kind of soil property at an acceptable level of accuracy is of great significance. This study was conducted to evaluate and compare the spatial estimation of SWC using ordinary kriging (OK) and cokriging (COK) with prime terrain variables, aiming to predict SWC from limited sample data for a 2,363.7 km² study area in Mashan County, Guangxi Zhuang Autonomous Region, Southwest China. The measured SWC ranged from 3.36 to 26.69 %, with a mean of 17.34 %. Correlation analysis between SWC and the prime terrain variables indicated that SWC showed a significantly positive correlation with elevation (r = 0.46, P < 0.01) and a significantly negative correlation with slope (r = −0.30, P < 0.01); however, SWC was not significantly correlated with aspect in the study area. Therefore, elevation and slope together were used as auxiliary data for SWC prediction with the COK method, and mean error (ME) and root mean square error (RMSE) were adopted to validate the predictions. Results indicated that COK with prime terrain variables was superior to OK, with a relative improvement of 28.52 % in the case of limited available data, and also revealed that such elevation and slope data have the potential to improve the precision and reliability of SWC prediction as useful auxiliary variables.
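The 28.52 % figure is the usual relative-improvement statistic comparing the cross-validation RMSEs of the two interpolators. A minimal sketch follows; the RMSE values in the demo are invented, chosen only so the ratio reproduces the reported figure.

```python
def relative_improvement(rmse_ok, rmse_cok):
    """Percentage reduction in RMSE achieved by cokriging (COK)
    relative to ordinary kriging (OK)."""
    return 100.0 * (rmse_ok - rmse_cok) / rmse_ok

# invented RMSEs whose ratio reproduces the reported 28.52 % gain
ri = relative_improvement(2.0, 1.4296)
```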

4.
5.
Digital soil mapping relies on field observations, laboratory measurements and remote sensing data, integrated with quantitative methods to map spatial patterns of soil properties. The study was undertaken in a hilly watershed in the Indian Himalayan region of Mandi district, Himachal Pradesh, for mapping soil nutrients by employing an artificial neural network (ANN), a potent data mining technique. Soil samples collected from the surface layer (0–15 cm) at 75 locations in the watershed, through a grid sampling approach during the fallow period of November 2015, were preprocessed and analysed for soil nutrients such as soil organic carbon (SOC), nitrogen (N) and phosphorus (P). Spectral indices such as the Colouration Index, Brightness Index, Hue Index and Redness Index derived from Landsat 8 satellite data, and terrain parameters such as the Terrain Wetness Index, Stream Power Index and slope from CartoDEM (30 m), were used. Spectral and terrain indices sensitive to different nutrients were identified using correlation analysis and thereafter used for predictive modelling of nutrients with an ANN, employing a feed-forward neural network with backpropagation architecture and the Levenberg–Marquardt training algorithm. The prediction of SOC was obtained with an R² of 0.83 and a mean squared error (MSE) of 0.05, whereas available nitrogen was predicted with an R² of 0.62 and an MSE of 0.0006. The prediction accuracy for phosphorus was low, since the phosphorus content in the area was far below the normal P values of typical Indian soils; the R² value observed was only 0.511. Attempts to develop prediction models for available potassium (K) and clay (%) failed to give satisfactory results. The developed models were validated using independent data sets and used for mapping the spatial distribution of SOC and N in the watershed.
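The network architecture can be sketched in miniature: one hidden tanh layer trained by backpropagation. The study used the Levenberg–Marquardt trainer on several spectral/terrain inputs; the single-input, plain-SGD version below is only a structural sketch on an invented toy target, not a reproduction of the nutrient models.

```python
import math, random

def train_mlp(xs, ys, hidden=6, lr=0.05, epochs=5000, seed=1):
    """One-hidden-layer feed-forward network (tanh activations) trained
    with stochastic gradient descent on squared error; returns a
    predict(x) closure."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            err = sum(w2[j] * h[j] for j in range(hidden)) + b2 - y
            for j in range(hidden):
                dh = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * dh
                w1[j] -= lr * dh * x
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# toy demonstration: learn y = x^2 on [-1, 1]
xs = [i / 5.0 - 1.0 for i in range(11)]
ys = [x * x for x in xs]
predict = train_mlp(xs, ys)
```

Swapping the toy target for laboratory nutrient values, and the scalar input for a vector of spectral and terrain indices, gives the workflow the abstract describes.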

6.
Within the TERENO initiative, four terrestrial observatories, collecting huge amounts of environmental data, have been set up since 2008. To manage, describe, exchange and publish these data, the distributed Spatial Data Infrastructure TEODOOR (http://www.tereno.net) was created. Each institution responsible for an individual observatory sets up its own local data infrastructure; these infrastructures communicate with each other to exchange data and metadata internally, and with the public through OGC-compliant Web services. The TEODOOR data portal serves as a database node to provide scientists and decision makers with reliable and well-accessible data and data products. Various tools such as hierarchical search and Web-GIS functions allow a deeper insight into the different observatories, test sites and sensor networks. Sensor data can be queried and selected by measured parameter, station and/or time period, and can be visualized and downloaded according to a shared TERENO data policy. Currently, TEODOOR provides free access to data from more than 500 monitoring stations.

7.
Zhang Chuancai, Qin Fen, Zhang Xiwang, Wang Hang, Xiao Peiqing. 《水文》 (Hydrology), 2018, 38(2): 15-24
DEM resolution has an important influence on distributed water-and-sediment process simulation, but the internal mechanism of this influence remains unclear. The structure of the physically based water-and-sediment model CASC2D-SED was modified: slope, instead of being extracted from the DEM inside the model, is computed by a separate module and supplied as an independent input parameter, so that the effect of slope on the DEM scale effect can be studied by varying the slope parameter alone. Based on the improved CASC2D-SED model, a small watershed near Shagedu Town, Jungar Banner, Inner Mongolia was taken as the study area. Using 1 m resolution DEM data from UAV aerial survey, soil-property data from field measurements and laboratory experiments, land-use data and rainfall data, three simulation schemes were applied to model water and sediment processes at multiple cell scales, in order to explore the DEM scale effect and its mechanism. The results show that: (1) over the 4–20 m grid-resolution range, simulated runoff varied between 323.18 m³ and 411.43 m³, fluctuating little; (2) over the 2–20 m grid-resolution range, simulated erosion discharge varied between 3.43 m³ and 65.61 m³, fluctuating strongly; (3) slope and runoff path are two opposing factors in the DEM scale effect on hydrological simulation, which is the main reason the scale effect on runoff is weak; (4) the DEM scale effect has an important influence on erosion and sediment transport, with terrain slope as the main controlling factor; (5) the spatial fluctuation of terrain slope as DEM resolution decreases is the cause of the fluctuation of simulated erosion and sediment yield with decreasing DEM resolution.

8.
Estimation of the degree of local seismic wave amplification (site effects) requires precise information about the local site conditions. In many regions of the world, local geologic information is either sparse or not readily available. Because of this, seismic hazard maps for countries such as Mozambique, Pakistan and Turkey are developed without consideration of site factors and, therefore, do not provide a complete assessment of future hazards. Where local geologic information is available, details on the traditional maps often lack the precision (better than 1:10,000 scale) or the level of information required for modern seismic microzonation. We use high-resolution (1:50,000) satellite imagery and newly developed image analysis methods to begin addressing this problem. Our imagery, consisting of optical data and digital elevation models (DEMs), is recorded by the ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) sensor system. We apply a semi-automated, object-oriented, multi-resolution feature segmentation method to identify and extract local terrain features. We then classify the terrain types into mountain, piedmont and basin units using geomorphometry (topographic slope) as our parameter. Next, on the basis of the site classification schemes from the Wills and Silva (1998) study and the Wills et al. (2000) and Wills and Clahan (2006) maps of California, we assign the local terrain units Vs30 (the average seismic shear-wave velocity through the upper 30 m of the subsurface) ranges for selected regions in Mozambique, Pakistan and Turkey. We find that our site class assignments in each region are a good first approximation for quantifying local site conditions, and that additional work, such as verification of the terrain’s compositional rigidity, is needed.
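The geomorphometric classification reduces to thresholding slope. The sketch below uses invented break points and placeholder Vs30 ranges purely to illustrate the two-step mapping (terrain unit, then Vs30 range); the actual class boundaries and velocity ranges come from the Wills et al. California schemes cited above.

```python
def classify_terrain(slope_deg):
    """Assign a terrain unit from topographic slope (degrees).
    The 2 and 8 degree break points are illustrative assumptions,
    not the study's values."""
    if slope_deg >= 8.0:
        return "mountain"
    if slope_deg >= 2.0:
        return "piedmont"
    return "basin"

# placeholder Vs30 ranges (m/s) per terrain unit -- hypothetical values
VS30_RANGE = {"mountain": (760, 1500), "piedmont": (360, 760), "basin": (180, 360)}

unit = classify_terrain(4.2)          # -> "piedmont"
vs30_lo, vs30_hi = VS30_RANGE[unit]
```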

9.
A diverse set of computer programs has been developed at the Lawrence Livermore National Laboratory (LLNL) to process geophysical data obtained from boreholes. These programs support such services as digitizing analog records, reading and processing raw data, cataloging and storing processed data, retrieving selected data for analysis, and generating data plots on several different devices. A variety of geophysical data types are accommodated, including both wireline logs and laboratory analyses of downhole samples. Many processing tasks are handled by means of a single, flexible, general-purpose, data-manipulation program. Separate programs are available for processing data from density, gravity, velocity, and epithermal neutron logs. The computer-based storage and retrieval system, which has been in operation since 1973, currently contains over 4400 data files. Most of this data was obtained from the Nevada Test Site (NTS) in conjunction with the nuclear test program. Each data file contains a single geophysical parameter as a function of depth. Automatic storage and retrieval are facilitated by the assignment of unique file names that define the storage location of each data file. Files of interest to the user may be located and retrieved by means of a search program that examines the catalog. A convention recognized by all programs in the system is that of a zero ordinate denoting a gap in an otherwise continuous data trace. This convention provides a simple mechanism for editing and displaying data files in an automated and consistent manner.

10.
《Comptes Rendus Geoscience》2005,337(1-2):203-217
Advances in flood forecasting have been constrained by the difficulty of estimating rainfall continuously over space, for catchment-, national- and continental-scale areas. This has had a concomitant impact on the choice of appropriate model formulations for given flood-forecasting applications. Whilst weather radar used in combination with raingauges – and extended to utilise satellite remote sensing and numerical weather prediction models – has offered the prospect of progress, there have been significant problems to overcome. These problems have curtailed the development and adoption of more complete distributed model formulations that aim to increase forecast accuracy. Advanced systems for weather radar display and processing, and for flood forecast construction, are now available to ease the task of implementation. Applications requiring complex networks of models to make forecasts at many locations can be undertaken without new code development and readily revised to take account of changing requirements. These systems make use of forecast-updating procedures that assimilate data from telemetry networks to improve flood forecast performance, while coping with the possibility of data loss. Flood forecasting systems that integrate rainfall monitoring and forecasting with flood forecasting and warning are now operational in many areas. Present practice in flood modelling and forecast updating is outlined from a UK perspective. Challenges for improvement are identified, particularly against a background of greater access to spatial datasets on terrain, soils, geology, land cover, and weather variables. Representing the effective runoff production and translation processes operating at a given grid or catchment scale may prove key to improved flood simulation, and to robust application to ungauged basins through physics-based linkages with these spatial datasets.
The need to embrace uncertainty in flood-warning decision-making is seen as a major challenge for the future. To cite this article: R.J. Moore et al., C. R. Geoscience 337 (2005).

11.
Research and Implementation of GML-based WFS
Luo Xiangang, Xie Zhong, Wu Liang, Liu Dan. Earth Science (《地球科学》), 2006, 31(5): 639-644
The Web Feature Service (WFS) is an important component of spatial data interoperability, providing feature-level interaction between different GIS data formats. A WFS based on the Geography Markup Language (GML) offers simple yet effective basic data access, feature editing (insert, delete, update) and combined feature queries for spatial-data interoperability and spatial-information processing in the Web environment. Following the Open GIS Consortium (OGC) specifications, a WFS was implemented on the MAPGIS platform in the .NET environment, using XML to transmit and store geographic information, including both the attributes and the geometry of geographic features. The architecture and implementation of this GML-based WFS built on MAPGIS are presented. WFS solves only part of spatial data interoperability; for fuller interoperability, implementations of WCS and WCTS are also indispensable.

12.
Wu Liang, Xie Zhong. Earth Science (《地球科学》), 2006, 31(5): 649-652
How to improve the extensibility and interoperability of the information-exchange framework for geological-map spatial databases is a research focus in this field. The ISO/OGC specifications and standards for spatial-information sharing were first studied, and a spatial-information integration module with service capability was built. On this basis, a consistent spatial-information sharing and service framework was developed, forming a software-independent representation pattern suited to geological spatial-information interoperability and providing a data-description mechanism usable in heterogeneous, distributed architectures, which makes managing geological spatial data in applications feasible. A geological spatial-information exchange mechanism framed on the Geography Markup Language (GML) model is proposed, and a schema for domain application data is given to meet geoscience application needs, providing a feasible approach to geological spatial-information exchange.

13.
We consider the main population of cosmic voids in a hierarchical clustering model. Based on the Press-Schechter formalism modified for regions of the Universe with reduced or enhanced matter densities, we construct the mass functions for gravitationally bound objects of dark matter occupying voids or superclusters. We show that the halo mass functions in voids and superclusters differ substantially. In particular, the spatial density of massive (M ~ 10¹² M⊙) halos is appreciably lower in voids than in superclusters, with the difference in the mass functions being greater for larger masses. According to our computations, an appreciable fraction of the mass of matter in voids should be preserved to the present epoch in the form of primordial gravitationally bound objects (POs) with modest masses (up to 10% for M_PO < 10⁹ M⊙) that keep their baryons. These primordial objects represent “primary blocks” in the hierarchical clustering model. We argue that the oldest globular clusters in the central regions of massive galaxies are the stellar remnants of these primordial objects: they can form in molecular clouds in these objects, only later being captured into the central regions of massive galaxies in the process of gravitational clustering. Primordial objects in voids can be observed as faint dwarf galaxies or Lyα absorption systems.

14.
Soil erodibility (K) affects sediment delivery to streams and needs to be appropriately quantified and interpolated as a fundamental geographic variable for implementing suitable catchment management and conservation practices. The spatial distribution of K for erosion modelling at unsampled grid locations has traditionally been estimated using interpolation algorithms such as kriging, which do not adequately represent the uncertainty of the estimates: these methods cause smoothing effects by overestimating low values and underestimating large values. In this study observed values were used to implement a sequential Gaussian simulation (SGS) procedure to evaluate the certainty of modelled data. Soil erodibility values were computed using 41 soil samples taken from the top 10 cm soil layer, regularly distributed across four catchments, 367–770 ha in area, within Kangaroo River State forest, New South Wales (NSW). One hundred realisations were applied in the simulation process to provide spatial uncertainty and error estimates of soil erodibility. The results indicated that values simulated by the SGS algorithm produced similar K values for neighbouring cells. At the pixel level, the SGS approach generated a reliable estimation of soil erodibility in most areas. The spatial variation of the K factor in this study was strongly related to soil landscape differences across the catchments; within catchments, slope gradient did not have a substantial impact on the numerical values of the K factor in pixel-by-pixel comparisons of raster grid maps.

15.
This study was prompted by the massive and unprecedented failure experienced, within a very short time after construction, of “Ada-George Road”, whose deltaic lateritic sub-base had been stabilised with undisclosed and most probably scientifically uncontrolled proportions of cement and geosta (a relatively new chemical stabiliser). Samples of the deltaic laterite were taken from two of the borrow pits within the Port Harcourt metropolis from which materials were quarried for the construction of the sub-base. The results showed that although geosta addition to deltaic laterite mixed with cement gives considerable improvement in the strength of the resulting mixture, this stabilisation is only effective at very low geosta contents, not exceeding 2%, depending on the optimum geosta content (OGC), which in turn is a function of the percentage of fines in the soil. The Ada-George road project, however, appeared to be politically rather than scientifically motivated. Consequently, the OGC must have been grossly exceeded in the construction of this geosta-stabilised sub-base; as a result, the as-built CBR must have been much lower than expected, hence the massive failure. Even at 2% geosta content, this composite stabilisation was found to be most effective at 4% cement content.

16.
The release of a digital elevation model (DEM) for Australia on a 9″ (~250 m) grid has enabled the computation of gravimetric terrain corrections, thus allowing the computation of complete Bouguer anomalies across the continent. The terrain correction was calculated through a two-dimensional fast Fourier transform algorithm applied to a linear, planar approximation of the terrain-correction formula, with a constant topographic density of 2670 kg m⁻³. The technique was applied to two datasets in order to test for instabilities in the terrain-correction algorithm: the original 9″ DEM, and a 27″ DEM averaged from the 9″ data. The 27″ terrain corrections were compared with values supplied by the Australian Geological Survey Organisation in Tasmania: 86% of these data were found to agree within 3.91 μm s⁻²; 98% agreed to within 5.32 μm s⁻² (1σ).

17.
This paper describes the main characteristics of the newly developed landslide model r.massmov, which is based on the shallow-water equations and is capable of simulating landslide propagation over complex topographies. The model is a reimplementation of MassMov2D in the free and open-source GRASS GIS, with a series of enhancements aimed at allowing its integration into innovative early-warning monitoring systems, and specifically into Web processing services. These improvements, intended to significantly reduce computational times, include a new automatic stopping criterion, a fluidization-process algorithm, and parallel computing. Moreover, the results of a multi-spatial-resolution analysis conducted on a real case study in southern Switzerland are presented. This analysis, comprising a sensitivity analysis and a calibration process, allowed the model's capabilities to be evaluated at different input-data resolutions. The results show that the introduced modifications lead to important reductions in computational time (more than 90 % faster) and that, using the lowest dataset resolution still capable of guaranteeing reliable results, the model can be run in about 1 s instead of the 3.5 h required by the previous model with a non-optimized dataset resolution. In addition, the research produced a series of new GRASS GIS modules for sensitivity analysis and for calibration; the latter integrates the automated calibration program “UCODE” with any GRASS raster module. Finally, the research workflow presented in this paper illustrates best practice in applying r.massmov to real cases.

18.
The formation and evolution of supermassive (10²–10¹⁰ M⊙) black holes (SMBHs) in the dense cores of globular clusters and galaxies is investigated. The raw material for the construction of the SMBHs is stellar black holes produced during the evolution of massive (25–150 M⊙) stars. The first SMBHs, with masses of ~1000 M⊙, arise in the centers of the densest and most massive globular clusters. Current scenarios for the formation of SMBHs in the cores of globular clusters are analyzed. The dynamical deceleration of the most massive and slowly moving stellar-mass (<100 M⊙) black holes, accompanied by the radiation of gravitational waves in late stages, is a probable scenario for the formation of SMBHs in the most massive and densest globular clusters. The dynamical friction of the most massive globular clusters close to the dense cores of their galaxies, with the formation of close binary black holes due to the radiation of gravitational waves, leads to the formation of SMBHs with masses ≳10³ M⊙ in these regions. The stars of these galaxies form galactic bulges, providing a possible explanation for the correlation between the masses of the bulge and of the central SMBH. The deceleration of the most massive galaxies in the central regions of the most massive and dense clusters of galaxies could lead to the appearance of the most massive (up to 10¹⁰ M⊙) SMBHs in the cores of cD galaxies. A side product of this cascade scenario for the formation of massive galaxies with SMBHs in their cores is the appearance of stars with high spatial velocities (>300 km/s). The velocities of neutron stars and stellar-mass black holes can reach ~10⁵ km/s.

19.
Starting from the grid concept, this paper introduces the SIG (Spatial Information Grid) technology used to realize collaborative editing of geological maps, together with the architecture and service mechanism of the national geological spatial information grid. The national geological spatial information grid adopts a Web-based architecture, centered on resource and information sharing and collaboration, and is organized as a three-layer architecture comprising the grid application layer, the grid platform layer, and the grid infrastructure. For the sharing and serving of geological-map spatial data, a geological-map service workflow was designed, information-query and information-service pages were provided, and online collaborative editing of geological maps was demonstrated.

20.
Lagrangian retention and flushing are examined by advecting neutrally buoyant point particles within a circulation field generated by a numerical ocean model of Tampa Bay. Large temporal variations in Lagrangian residence time are found under realistic changes in boundary conditions. Two 90-day time periods are examined. The first (P1) is characterized by low freshwater inflow and weak baroclinic circulation; the second (P2) by high freshwater inflow and strong baroclinic circulation. At the beginning of both periods, 686,400 particles are released uniformly throughout the bay. Particle distribution and flushing are examined at three spatial scales: (1) the entire bay, (2) the four major regions within the bay, and (3) individual model grid cells. Two simple theoretical models for the particle number over time, N(t), are fit to the particle counts from the ocean model. The theoretical models are shown to represent N(t) reasonably well when considering the entire bay, allowing for straightforward calculation of baywide residence times: 156 days for P1 and 36 days for P2. However, the accuracy of these simple models decreases with decreasing spatial scale, likely because particles may exit, reenter, or redistribute from one region to another in any sequence; the smaller the domain under consideration, the more this exchange process dominates. Therefore, definitions of residence time need to be modified for “non-local” situations. After choosing a reasonable definition, and removing the tidal and synoptic signals, the residence times at individual grid cells in P1 are found to vary spatially from a few days to 90 days (the limit of the calculation), with an average of 53 days. For P2, the overall spatial pattern is more homogeneous, and the residence times average 26 days.
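The simplest theoretical model of this kind is exponential flushing, N(t) = N0·exp(−t/τ), where the e-folding time τ is the baywide residence time. A sketch of recovering τ by least squares on log N follows; the synthetic particle counts are invented for the demo, not Tampa Bay model output.

```python
import math

def fit_residence_time(times, counts):
    """Fit N(t) = N0 * exp(-t / tau) by ordinary least squares on
    log N(t); returns (N0, tau), where tau is the e-folding
    residence time."""
    logs = [math.log(c) for c in counts]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))   # slope = -1/tau
    return math.exp(ml - slope * mt), -1.0 / slope

# synthetic 90-day record decaying with a 36-day residence time
days = list(range(0, 91, 5))
counts = [686400 * math.exp(-t / 36.0) for t in days]
n0, tau = fit_residence_time(days, counts)
```

On real particle counts the fit would be applied after removing the tidal and synoptic signals, as the abstract notes, and only where the exponential model represents N(t) well.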


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号