Similar Documents
20 similar records found (search time: 46 ms)
1.
Migration of two-dimensional magnetotelluric data    (Cited: 3; self-citations: 2; by others: 3)
This paper images magnetotelluric (MT) data using the reflection-imaging (U/D imaging) principle: the wavefield observed at the surface is decomposed into upgoing and downgoing waves, each of which is downward-continued separately, and the time-coincidence of the up- and downgoing waves is used to locate subsurface electrical interfaces. Processing the TE- and TM-mode MT responses separately yields two depth sections, and the agreement between the two sections provides the best estimate of the actual reflecting interfaces. Computations on both synthetic and field data show that MT migration is quite effective and yields an intuitive image of subsurface interfaces. Compared with conventional MT inversion, the method has the advantages of a simple algorithm, and it produces an MT depth section that gives a true image of the subsurface geological structure.

2.
Lin YF, Anderson MP. Ground Water, 2003, 41(3): 306–315
A digital procedure to estimate recharge/discharge rates that requires relatively short preparation time and uses readily available data was applied to a setting in central Wisconsin. The method requires only measurements of the water table, fluxes such as stream baseflows, the elevation of the bottom of the system, and hydraulic conductivity to delineate approximate recharge/discharge zones and to estimate rates. The method uses interpolation of the water table surface, recharge/discharge mapping, pattern recognition, and a parameter estimation model. The surface interpolator used is based on the theory of radial basis functions with thin-plate splines. The recharge/discharge mapping is based on a mass-balance calculation performed using MODFLOW. The results of the recharge/discharge mapping are critically dependent on the accuracy of the water table interpolation and the accuracy and number of water table measurements. The recharge pattern recognition is performed with the help of a graphical user interface (GUI) program based on several algorithms used in image processing. Pattern recognition is needed to identify the recharge/discharge zonations and zone the results of the mapping method. The parameter estimation program UCODE calculates the parameter values that provide a best fit between simulated heads and flows and calibration head-and-flow targets. A model of the Buena Vista Ground Water Basin in the Central Sand Plains of Wisconsin is used to demonstrate the procedure.
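As a hedged illustration of the surface interpolation step described above (thin-plate-spline radial basis functions), the sketch below fits such a surface to scattered head measurements with SciPy; the well coordinates and heads are invented placeholders, not values from the study.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical scattered water-table measurements: well (x, y) in metres, head in metres.
xy = np.array([[0.0, 0.0], [500.0, 120.0], [230.0, 760.0],
               [900.0, 400.0], [640.0, 880.0]])
head = np.array([101.2, 99.8, 100.5, 98.9, 99.4])

# Thin-plate-spline radial basis function surface, as in the interpolation step above.
tps = RBFInterpolator(xy, head, kernel="thin_plate_spline")

# Evaluate on a regular grid to approximate the water-table surface.
gx, gy = np.meshgrid(np.linspace(0, 900, 10), np.linspace(0, 880, 10))
surface = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(surface.round(2))
```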

3.
Accurate mapping of water surface boundaries in rivers is an important step for monitoring water stages, estimating discharge, flood extent, and geomorphic response to changing hydrologic conditions, and assessing riverine habitat. Nonetheless, it is a challenging task in spatially and spectrally heterogeneous river environments, commonly characterized by high spatiotemporal variations in morphology, bed material, and bank cover. In this study, we investigate the influence of channel morphology and bank characteristics on the delineation of water surface boundaries in rivers using high spatial resolution passive remote sensing and a template-matching (object-based) algorithm, and compare its efficacy with that of a Support Vector Machine (SVM) (pixel-based) algorithm. We perform a detailed quantitative evaluation of boundary-delineation accuracy using spatially explicit error maps in tandem with the spatial maps of geomorphic and bank classes. Results show that template matching is more successful than SVM in delineating water surface boundaries in river sections with spatially challenging geomorphic landforms (e.g. sediment bar structures, partially submerged sediment deposits) and shallow water conditions. However, overall delineation accuracy by SVM is higher than that of template matching (without iterative hierarchical learning). Vegetation and water indices, especially when combined with texture information, improve the accuracy of template matching, for example, in river sections with overhanging trees and shadows, the two most problematic conditions in water surface boundary delineation. By identifying the influence of channel morphology and bank characteristics on water surface boundary mapping, this study helps determine river sections with higher uncertainty in delineation. In turn, the most suitable methods and data sets can be selectively utilized to improve geomorphic/hydraulic characterization. The methodology developed here can also be applied to similar studies on other geomorphic landforms including floodplains, wetlands, lakes, and coastlines.
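For concreteness, here is a minimal sketch of the pixel-based side of that comparison: an SVM classifier over band values, assuming scikit-learn and invented training pixels. The template-matching workflow of the paper is more involved and is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training pixels: rows are [green, NIR] reflectance; labels 1=water, 0=land.
X_train = np.array([[0.08, 0.03], [0.10, 0.05], [0.12, 0.30],
                    [0.15, 0.35], [0.09, 0.04], [0.14, 0.33]])
y_train = np.array([1, 1, 0, 0, 1, 0])

clf = SVC(kernel="rbf").fit(X_train, y_train)

# Classify a small invented image (2x3 pixels, 2 bands) into water/land.
image = np.array([[[0.09, 0.04], [0.13, 0.32], [0.08, 0.03]],
                  [[0.11, 0.29], [0.10, 0.05], [0.15, 0.36]]])
labels = clf.predict(image.reshape(-1, 2)).reshape(2, 3)
print(labels)  # 1 = water, 0 = land
```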

4.
The aim of refracted arrivals inversion is the computation of near-surface information, i.e. first-layer thicknesses and refractor velocities, in order to estimate the initial static corrections for the seismic data. The present trend is moving towards totally automatic inversion techniques, which start by picking the first breaks and end by aligning the seismic traces at the datum plane. Accuracy and computational time savings are necessary requirements. These are not straightforward, because accuracy means noise immunity, which implies the processing of large amounts of data to take advantage of redundancy; moreover, owing to the non-linearity of the problem, accuracy also means high-order modelling and, as a consequence, complex algorithms for making the inversion. The available methods are considered here with respect to the expected accuracy, i.e. to the model they assume. It is shown that the inversion of the refracted arrivals with a linear model leads to an ill-conditioned problem with the result that complete separation between the weathering thickness and the refractor velocity is not possible. This ambiguity is carefully analysed both in the spatial domain and in the wavenumber domain. An error analysis is then conducted with respect to the models and to the survey configurations that are used. Tests on synthetic data sets validate the theories and also give an idea of the magnitude of the error. This is largely dependent on the structure; here quantitative analysis is extended up to second derivative effects, whereas up to now seismic literature has only dealt with first derivatives. The topographical conditions which render the traditional techniques incorrect are investigated and predicted by the error equations. Improved solutions, based on more accurate models, are then considered: the advantages of the Generalized Reciprocal Method are demonstrated by applying the results of the error analysis to it, and the accuracy of the non-linear methods is discussed with respect to the interpolation technique which they adopt. Finally, a two-step procedure, consisting of a linear model inversion followed by a local non-linear correction, is suggested as a good compromise between accuracy and computational speed.
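As a worked illustration of the linear (single-refractor) model that such an error analysis starts from, the sketch below recovers the weathering thickness from a picked intercept time via the standard intercept-time formula; the velocities and the pick are invented.

```python
import math

# Invented single-refractor example: weathering velocity v1, refractor velocity v2.
v1, v2 = 800.0, 2400.0   # m/s
t_i = 0.040              # picked intercept time of the refracted branch (s)

# Intercept-time relation: t_i = 2*h*sqrt(v2**2 - v1**2) / (v1*v2), solved for h.
h = t_i * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))
print(f"weathering thickness = {h:.1f} m")  # ~17.0 m
```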

5.
Cone penetration test (CPT) and standard penetration test (SPT) are widely used for the site specific evaluation of liquefaction potential and are getting increased use in the regional mapping of liquefaction hazard. This paper compares CPT and SPT-based liquefaction potential characterizations of regional geologic units using the liquefaction potential index (LPI) across the East Bay of the San Francisco Bay, California, USA and examines the statistical and spatial variability of LPI across and within geologic units. Overall, CPT-based LPI characterizations result in higher hazard than those derived from the SPT. This bias may result from either mis-classifications of soil type in the CPT or a bias in the CPT simplified procedure for liquefaction potential. Regional mapping based on cumulative distribution of LPI values show different results depending on which dataset is used. For both SPT and CPT-based characterizations, the geologic units in the area have broad LPI distributions that overlap between units and are not distinct from the population as a whole. Regional liquefaction classifications should therefore give a distribution, rather than a single hazard rating that does not provide for variability within the area. The CPT-based LPI values have a higher degree of spatial correlation and a lower variance over a greater distance than those estimated from SPTs. As a result, geostatistical interpolation can provide a detailed map of LPI when densely sampled CPT data are available. The statistical distribution of LPI within specific geologic units and interpolated maps of LPI can be used to understand the spatial variability of liquefaction potential.
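The abstract does not spell out the LPI definition; assuming the standard Iwasaki-type form, LPI = integral from 0 to 20 m of F(z)*(10 - 0.5*z) dz with F = 1 - FS where FS < 1 (else 0), a minimal sketch with an invented factor-of-safety profile looks like this:

```python
import numpy as np

def lpi(z, fs):
    """Liquefaction potential index (Iwasaki-type form assumed here):
    LPI = integral over 0-20 m of F(z) * (10 - 0.5*z) dz,
    with F = 1 - FS where FS < 1, else F = 0."""
    F = np.where(fs < 1.0, 1.0 - fs, 0.0)
    w = 10.0 - 0.5 * z
    y = F * w
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z))  # trapezoidal rule

z = np.linspace(0.0, 20.0, 41)        # depths, 0-20 m
fs_profile = 0.7 + 0.04 * z           # invented FS profile; liquefiable above ~7.5 m
print(f"LPI = {lpi(z, fs_profile):.1f}")  # ~9.8
```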

6.
Two different methods for the construction of an approximation to bicubic splines for interpolating irregularly spaced two-dimensional data are described. These are referred to as the least squares line (LSL) and linear segment (LINSEG) construction procedures. A quantitative test is devised for investigating the absolute accuracy and efficiency of the two spline interpolation procedures. The test involves (i) laying of artificial flight lines on the analytically known field of a model, (ii) interpolation of field values along the flight lines and their subtraction from the original field values to compute the residuals. This test is applied to fields due to four models (three prism models and one dyke model) placed at different depths below the flight lines, and for each case the error estimates (the mean error, the maximum error and the standard deviation) are tabulated. An analysis of the error estimates shows in all cases the LSL interpolation to be more accurate than the LINSEG, although the latter is about 50% faster in computer time. The relative accuracy and efficiency of the LSL interpolation is also tested against a recent method based on a harmonization procedure, which shows the latter to be more precise, though much slower in speed.
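A hedged sketch of the quantitative test described: sample an analytically known field at irregular points, fit a bicubic-type spline, and tabulate the residual statistics (mean, maximum, standard deviation). SciPy's smoothing bivariate spline stands in for the paper's LSL and LINSEG constructions, which are not reproduced, and the test field is invented rather than one of the prism or dyke models.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def field(x, y):
    # Analytically known stand-in field (not one of the paper's models).
    return np.exp(-((x - 5.0)**2 + (y - 5.0)**2) / 8.0)

# Irregular "flight line" samples.
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
spl = SmoothBivariateSpline(xs, ys, field(xs, ys), s=1e-3)  # near-interpolating bicubic spline

# Residuals at check points: interpolated minus true field values.
xt, yt = rng.uniform(1, 9, 100), rng.uniform(1, 9, 100)
res = spl.ev(xt, yt) - field(xt, yt)
print(f"mean={res.mean():.2e}  max={np.abs(res).max():.2e}  std={res.std():.2e}")
```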

7.
Hydraulic tomography for detecting fracture zone connectivity    (Cited: 1; self-citations: 0; by others: 1)
Hao Y, Yeh TC, Xiang J, Illman WA, Ando K, Hsu KC, Lee CH. Ground Water, 2008, 46(2): 183–192
Fracture zones and their connectivity in geologic media are of great importance to ground water resources management as well as ground water contamination prevention and remediation. In this paper, we applied a recently developed hydraulic tomography (HT) technique and an analysis algorithm (sequential successive linear estimator) to synthetic fractured media. The application aims to explore the potential utility of the technique and the algorithm for characterizing fracture zone distribution and their connectivity. Results of this investigation showed that using HT with a limited number of wells, the fracture zone distribution and its connectivity (general pattern) can be mapped satisfactorily although estimated hydraulic property fields are smooth. As the number of wells and monitoring ports increases, the fracture zone distribution and connectivity become vivid and the estimated hydraulic properties approach true values. We hope that the success of this application may promote the development and application of the new generations of technology (i.e., hydraulic, tracer, pneumatic tomographic surveys) for mapping fractures and other features in geologic media.

8.
Gravity data are often acquired over long periods of time using different instruments and various survey techniques, resulting in data sets of non-uniform accuracy. As station locations are inhomogeneously distributed, gravity values are interpolated on to a regular grid to allow further processing, such as computing horizontal or vertical gradients. Some interpolation techniques can estimate the interpolation error. Although estimation of the error due to interpolation is of importance, it is more useful to estimate the maximum gravity anomaly that may have gone undetected by a survey. This is equivalent to the determination of the maximum mass whose gravity anomaly will be undetected at any station location, given the data accuracy at each station. Assuming that the maximum density contrast present in the survey area is known or can be reasonably assumed from a knowledge of the geology, the proposed procedure is as follows: at every grid node, the maximum mass whose gravity anomaly does not disturb any of the surrounding observed gravity values by more than their accuracies is determined. A finite vertical cylinder is used as the mass model in the computations. The resulting map gives the maximum detection error and, as such, it is a worst-case scenario. Moreover, the map can be used to optimize future gravity surveys: new stations should be located at, or near, map maxima. The technique is applied to a set of gravity observations obtained from different surveys made over a period of more than 40 years in the Abitibi Greenstone Belt in eastern Canada.
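A hedged sketch of the core idea, simplified to a point-mass source instead of the paper's finite vertical cylinder: at a grid node, find the largest buried mass whose anomaly stays within every surrounding station's accuracy. The station layout, accuracies and depth are invented.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def max_undetected_mass(node_xy, depth, stations_xy, accuracies):
    """Largest buried mass at (node_xy, depth) whose surface anomaly stays
    within every station's accuracy (in m/s^2). Point-mass approximation;
    the paper uses a finite vertical cylinder instead.
    Anomaly at horizontal offset r: g = G*M*depth / (r**2 + depth**2)**1.5"""
    r2 = np.sum((stations_xy - node_xy) ** 2, axis=1)
    limits = accuracies * (r2 + depth**2) ** 1.5 / (G * depth)
    return limits.min()

# Invented station layout (m) with 0.05 mGal accuracy (1 mGal = 1e-5 m/s^2).
stations = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])
acc = np.full(4, 0.05e-5)
m = max_undetected_mass(np.array([200.0, 200.0]), 300.0, stations, acc)
print(f"max undetected mass ~ {m:.2e} kg")
```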

9.
Delineation of regional arid karstic aquifers: an integrative data approach    (Cited: 1; self-citations: 0; by others: 1)
This research integrates data procedures for the delineation of regional ground water flow systems in arid karstic basins with sparse hydrogeologic data using surface topography data, geologic mapping, permeability data, chloride concentrations of ground water and precipitation, and measured discharge data. This integrative data analysis framework can be applied to evaluate arid karstic aquifer systems globally. The accurate delineation of ground water recharge areas in developing aquifer systems with sparse hydrogeologic data is essential for their effective long-term development and management. We illustrate the use of this approach in the Cuatrociénegas Basin (CCB) of Mexico. Aquifers are characterized using geographic information systems for ground water catchment delineation, an analytical model for interbasin flow evaluation, a chloride balance approach for recharge estimation, and a water budget for mapping contributing catchments over a large region. The test study area includes the CCB of Coahuila, Mexico, a UNESCO World Biosphere Reserve containing more than 500 springs that support ground water-dependent ecosystems with more than 70 endemic organisms and irrigated agriculture. We define recharge areas that contribute local and regional ground water discharge to springs and the regional flow system. Results show that the regional aquifer system follows a topographic gradient that during past pluvial periods may have linked the Río Nazas and the Río Aguanaval of the Sierra Madre Occidental to the Río Grande via the CCB and other large, currently dry, upgradient lakes.
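The chloride-balance step lends itself to a one-line worked example. Assuming the standard chloride mass balance R = P * Cl_precip / Cl_groundwater (the abstract does not give the exact form used), with invented numbers:

```python
# Chloride mass balance: recharge R = P * Cl_precip / Cl_groundwater.
P = 250.0        # mean annual precipitation (mm/yr), invented
cl_precip = 0.4  # chloride concentration in precipitation (mg/L), invented
cl_gw = 20.0     # chloride concentration in ground water (mg/L), invented

R = P * cl_precip / cl_gw
print(f"recharge ~ {R:.1f} mm/yr")  # ~5.0 mm/yr
```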

10.
Seismic data interpolation based on sparse inversion is an important class of interpolation methods, but most such methods are designed for noise-free or high signal-to-noise-ratio data. In practice, seismic data contain various kinds of noise, which makes the interpolation problem considerably harder. Projection onto convex sets (POCS) is an efficient interpolation algorithm, but its results on noisy data are unsatisfactory; the weighted POCS method proposed for noisy data can interpolate and denoise simultaneously, but in addition to requiring careful selection of the minimum threshold, it introduces a weighting factor to achieve the denoising. In this paper, the weighted POCS method is derived from the iterative thresholding algorithm, which shows that it is a method for solving an unconstrained optimization problem and that the weighting factor can be regarded as the coefficient of the data-misfit term. An improved POCS method is also proposed; compared with the original POCS method it requires no additional computation, performing both interpolation and denoising purely through the choice of thresholds. Numerical experiments demonstrate the computational efficiency of the algorithm and show that it achieves good interpolation results on noisy data; comparison with interpolate-then-denoise results confirms the reliability and stability of the simultaneous denoising-and-interpolation algorithm.
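A minimal 1-D sketch of the basic POCS iteration that the paper builds on (FFT-domain thresholding with reinsertion of the observed samples); this is the textbook form, not the weighted or improved variants proposed in the paper, and the data are synthetic.

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=100):
    """Basic POCS: iterate FFT -> hard threshold -> inverse FFT -> reinsert
    observed samples. The threshold decays linearly over the iterations."""
    x = observed.copy()
    fmax = np.abs(np.fft.fft(observed)).max()
    for k in range(n_iter):
        thresh = fmax * (1.0 - (k + 1) / n_iter)  # linearly decaying threshold
        spec = np.fft.fft(x)
        spec[np.abs(spec) < thresh] = 0.0          # promote spectral sparsity
        x = np.real(np.fft.ifft(spec))
        x[mask] = observed[mask]                   # keep the observed samples
    return x

# Synthetic trace with ~40% of the samples removed.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
rng = np.random.default_rng(1)
mask = rng.random(t.size) > 0.4                   # True where a sample was observed
data = np.where(mask, signal, 0.0)

rec = pocs_interpolate(data, mask)
print(f"relative error = {np.linalg.norm(rec - signal) / np.linalg.norm(signal):.3f}")
```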

11.
In many practical cases, it is necessary to characterize the explored area with a regular set of geodata. Regular matrix data (e.g., ordinary maps) are calculated via existing data interpolation and extrapolation. For low frequency (oversampled) data acquired within a dense profile net (e.g., seismic three-dimensional structural or gravity mapping), this procedure is mathematically more or less stable and, to a certain extent, unique since we might neglect discrepancies resulting from different interpolations. The situation is quite different for high-resolution and high-frequency contaminated data (e.g., raw seismic attributes or geochemistry measurements) represented by sparse profiling. Considering the variety of exploration cases, the investigation of different interpolation algorithm efficiency seems very important. Since it is impossible to compare all algorithms by means of formal mathematics, we have designed a test program. A representative set of seismic attribute maps has been artificially destroyed by introducing blank values (from 20% up to 95%) and then restored by different interpolation algorithms: bicubic, bilinear, nearest neighbor, and "smart averaging". Smart averaging interpolation is done in a "live" window. The position, form, and size of the window are determined by some mathematical criterion on a trial-and-error basis. Discrepancies between restored and initial (true) data have been assessed and analysed. It is shown that the total (absolute) efficiency and comparative (relative) efficiency of the algorithms depend mostly upon the initial interpolant data characteristics. Identifying the best interpolation algorithm for all interpretive cases seems impossible. Some aspects of data processing are discussed in connection with interpolation accuracy.
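A hedged sketch of that test protocol: blank most of a gridded attribute map, restore it with different interpolators, and compare the discrepancies. SciPy's griddata methods stand in for the four algorithms examined in the paper, and the attribute map is synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
attr = np.sin(3 * gx) * np.cos(4 * gy)        # synthetic stand-in "attribute map"

keep = rng.random(attr.shape) > 0.80           # destroy 80% of the values
pts = np.column_stack([gx[keep], gy[keep]])

for method in ("nearest", "linear", "cubic"):
    rec = griddata(pts, attr[keep], (gx, gy), method=method)
    err = np.nanstd(rec - attr)                # nanstd: cubic/linear leave NaNs outside the hull
    print(f"{method:8s} residual std = {err:.4f}")
```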

12.
Digital elevation models have been used in many applications since they came into use in the late 1950s. They are essential tools for applications concerned with the Earth's surface, such as hydrology, geology, cartography, geomorphology, engineering applications, and landscape architecture. However, there are some differences in assessing the accuracy of digital elevation models for specific applications: different applications require different levels of accuracy from digital elevation models. In this study, the magnitudes and spatial patterning of elevation errors were therefore examined, using different interpolation methods. Measurements were performed with theodolite and levelling. Previous research has demonstrated the effects of interpolation methods and the nature of errors in digital elevation models obtained with indirect survey methods for small-scale areas. The purpose of this study was therefore to investigate the size and spatial patterning of errors in digital elevation models obtained with direct survey methods for large-scale areas, comparing Inverse Distance Weighting, Radial Basis Functions and Kriging interpolation methods to generate digital elevation models. The study is important because it shows how the accuracy of the digital elevation model is related to data density and the interpolation algorithm used. Cross validation, split-sample and jack-knifing validation methods were used to evaluate the errors. Global and local spatial auto-correlation indices were then used to examine the error clustering. Finally, slope and curvature parameters of the area were modelled depending on the error residuals using ordinary least squares regression analyses. In this case, the best results were obtained using the thin plate spline algorithm.
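As a hedged sketch of the validation step, the code below runs leave-one-out cross-validation with a simple inverse-distance-weighting interpolator over invented survey points; the kriging and radial-basis-function variants compared in the paper would slot into the same loop.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting for a single query point (x, y)."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):
        return z_known[d == 0][0]      # exact hit on a known point
    w = 1.0 / d**power
    return np.sum(w * z_known) / np.sum(w)

# Invented survey points: x, y (m) and elevation z (m).
rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(30, 2))
z = 50.0 + 0.1 * xy[:, 0] + rng.normal(0, 0.2, 30)

# Leave-one-out cross validation: predict each point from all the others.
errors = [idw(np.delete(xy, i, 0), np.delete(z, i), xy[i]) - z[i]
          for i in range(len(z))]
print(f"LOO RMSE = {np.sqrt(np.mean(np.square(errors))):.3f} m")
```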

13.
In recent years airborne laser scanning (ALS) has evolved into a state-of-the-art technology for topographic data acquisition. We present a novel, automatic method for water surface classification and delineation that combines the geometrical and signal intensity information provided by ALS. The reflection characteristics of water surfaces at the near-infrared wavelength (1064 nm) of the ALS system, along with surface roughness information, provide the basis for differentiating between water and land areas. Water areas are characterized by a high number of laser shot dropouts and predominantly low backscatter energy. In a preprocessing step, the recorded intensities are corrected for spherical loss and atmospheric attenuation, and the locations of laser shot dropouts are modeled. A seeded region growing segmentation, applied to the point cloud and the modeled dropouts, is used to detect potential water regions. Object-based classification of the resulting segments determines the final separation of water and non-water points. The water-land boundary is defined by the central contour line of the transition zone between water and land points. We demonstrate that the proposed workflow succeeds for a regulated river (Inn, Austria) with a smooth water surface as well as for a pro-glacial braided river (Hintereisfernerbach, Austria). A multi-temporal analysis of the pro-glacial river channel over five years emphasizes the applicability of the developed method for different ALS systems and acquisition settings (e.g. point density). The validation, based on a real-time kinematic (RTK) global positioning system (GPS) field survey and a terrestrial orthophoto, indicates point cloud classification accuracy above 97% with 0.45 m planimetric accuracy (root mean square error) of the water-land boundary. This article shows the capability of ALS data for water surface mapping with a high degree of automation and accuracy. This provides valuable datasets for a number of applications in geomorphology, hydrology and hydraulics, such as monitoring of braided rivers, flood modeling and mapping.
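A hedged sketch of the intensity preprocessing mentioned above: normalization for spherical (range-squared) loss to a reference range, plus a simple two-way exponential term standing in for atmospheric attenuation. The reference range and the coefficient are invented, not the paper's values.

```python
import numpy as np

def correct_intensity(i_raw, r, r_ref=1000.0, atm_coeff=2e-5):
    """Normalize ALS return intensities for spherical loss (R^2 falloff)
    and a two-way exponential atmospheric term. Coefficients illustrative."""
    return i_raw * (r / r_ref) ** 2 * np.exp(2.0 * atm_coeff * (r - r_ref))

ranges = np.array([800.0, 1000.0, 1300.0])  # slant ranges (m), invented
raw = np.array([120.0, 100.0, 60.0])        # recorded intensities, invented
print(correct_intensity(raw, ranges).round(1))
```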

14.
Protection of groundwater-dependent ecosystems (GDEs) is an important criterion in sustainable groundwater management, particularly when human water consumption is in competition with environmental water demands; however, the delineation of GDEs is commonly a challenging task. The Groundwater-dependent Ecosystem Mapping (GEM) method proposed here is based on interpretation of the land surface response to the drying process derived from combined changes in two multispectral indices, the Normalised Difference Vegetation Index and the Normalised Difference Wetness Index, both derived from Landsat imagery. The GEM method predicts three land cover classes used for delineation of potential GDEs: vegetation with permanent access to groundwater; vegetation with diminishing access to groundwater; and water bodies that can persist through a prolonged dry period. The method was applied to a study site in the Ellen Brook region of Western Australia, where a number of GDEs associated with localised groundwater, diffuse discharge zones, and riparian vegetation were known. The estimated accuracy of the method indicated a good agreement between the predicted and known GDEs; Producer's accuracy was calculated as up to 91% for some areas. The method is most applicable for mapping GDEs in regions with a distinct drying period.
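The two multispectral indices behind the GEM classes are standard band ratios; a minimal sketch with invented Landsat reflectances, assuming the Gao (NIR-SWIR) form of the wetness index:

```python
import numpy as np

# Invented surface reflectances for three pixels: red, NIR, SWIR bands.
red = np.array([0.10, 0.05, 0.08])
nir = np.array([0.35, 0.30, 0.12])
swir = np.array([0.20, 0.10, 0.15])

ndvi = (nir - red) / (nir + red)     # Normalised Difference Vegetation Index
ndwi = (nir - swir) / (nir + swir)   # Normalised Difference Wetness Index (Gao form assumed)
print(ndvi.round(2), ndwi.round(2))
```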

15.
16.
17.
A study of an algae growth model for Lake Taihu    (Cited: 32; self-citations: 1; by others: 31)
This paper presents an improved algae growth model together with a novel computer simulation algorithm. The model not only accounts for the effects of factors such as water temperature, total nitrogen, total phosphorus, zooplankton grazing, and solar radiation on the algal growth rate, but also corrects the algal mortality rate according to factors such as water volume, total phosphorus, and algal concentration. Because numerical stability and accuracy are critical, an absolutely stable numerical algorithm with second-order accuracy is constructed to solve the system of partial differential equations in the model. In addition, the model is validated against field measurements; since the measurements are available only for a single day in each month, spline interpolation is used to estimate the daily input values needed for the simulation to proceed correctly. The simulated results agree with the measured data on the whole, and the places where they differ noticeably are explained in the paper. The results show that the proposed algae growth model and its algorithm are effective, with the simulated algal concentrations at the sampling points fitting the observed values well.
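A hedged sketch of the interpolation step in that abstract: cubic-spline estimation of daily driving values from one measurement per month. The monthly water temperatures below are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# One invented measurement per month: day-of-year and water temperature (deg C).
days = np.array([15, 46, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349])
temp = np.array([4.0, 5.0, 9.0, 15.0, 20.0, 25.0, 29.0, 28.0, 24.0, 18.0, 11.0, 6.0])

spline = CubicSpline(days, temp)
daily = spline(np.arange(15, 350))   # daily values to drive the simulation
print(daily[:5].round(2))
```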

18.
Five gridding algorithms (minimum curvature, kriging, modified Shepard, inverse distance weighting, and radial basis functions) were selected to grid the lithospheric magnetic field data from the densified total-geomagnetic-intensity survey area along the Xiaojiang fault. The gridding results were evaluated using indicators such as the root-mean-square prediction error and the root-mean-square of the interpolation residuals; the results show that kriging and inverse distance weighting achieve the highest accuracy. A further comparison of the quality of the grids produced by these two methods shows that kriging balances the smoothness of the data against the spatial relationships between the measured points and the points being estimated, avoiding systematic error. It is concluded that kriging is the better choice for gridding lithospheric magnetic field data.

19.
Spatial interpolation methods used for estimation of missing precipitation data generally under and overestimate the high and low extremes, respectively. This is a major limitation that plagues all spatial interpolation methods as observations from different sites are used in local or global variants of these methods for estimation of missing data. This study proposes bias-correction methods similar to those used in climate change studies for correcting missing precipitation estimates provided by an optimal spatial interpolation method. The methods are applied to post-interpolation estimates using quantile mapping, a variant of equi-distant quantile matching and a new optimal single best estimator (SBE) scheme. The SBE is developed using a mixed-integer nonlinear programming formulation. K-fold cross validation of estimation and correction methods is carried out using 15 rain gauges in a temperate climatic region of the U.S. Exhaustive evaluation of bias-corrected estimates is carried out using several statistical, error, performance and skill score measures. The differences among the bias-correction methods, the effectiveness of the methods and their limitations are examined. The bias-correction method based on a variant of equi-distant quantile matching is recommended. Post-interpolation bias corrections have preserved the site-specific summary statistics with minor changes in the magnitudes of error and performance measures. The changes were found to be statistically insignificant based on parametric and nonparametric hypothesis tests. The correction methods provided improved skill scores with minimal changes in magnitudes of several extreme precipitation indices. The bias corrections of estimated data also brought site-specific serial autocorrelations at different lags and transition states (dry-to-dry, dry-to-wet, wet-to-wet and wet-to-dry) close to those from the observed series. Bias corrections of missing data estimates provide better serially complete precipitation time series useful for climate change and variability studies in comparison to uncorrected filled data series.
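A hedged sketch of empirical quantile mapping, the first of the correction schemes listed: each interpolated estimate is assigned its quantile in a reference series of estimates and replaced by the observed value at that quantile. All series below are synthetic.

```python
import numpy as np

def quantile_map(estimated, obs_ref, est_ref):
    """Empirical quantile mapping: look up each estimate's quantile in the
    reference estimated series, then read off the observed value at the
    same quantile."""
    q = np.searchsorted(np.sort(est_ref), estimated) / len(est_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(4)
obs_ref = rng.gamma(2.0, 5.0, 1000)                 # observed amounts (mm), synthetic
est_ref = 0.8 * obs_ref + rng.normal(0, 1.0, 1000)  # biased interpolation estimates

raw = np.array([3.0, 12.0, 30.0])
print(quantile_map(raw, obs_ref, est_ref).round(2))  # bias-corrected values
```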

20.
Comparative study of methods for WHPA delineation    (Cited: 3; self-citations: 0; by others: 3)
Human activities, whether agricultural, industrial, commercial, or domestic, can contribute to ground water quality deterioration. In order to protect the ground water exploited by a production well, it is essential to develop a good knowledge of the flow system and to adequately delineate the area surrounding the well within which potential contamination sources should be managed. Many methods have been developed to delineate such a wellhead protection area (WHPA). The integration of more information on the geologic and hydrogeologic characteristics of the study area increases the precision of any given WHPA delineation method. From a practical point of view, the WHPA delineation methods allowing the simplest and least expensive integration of the available information should be favored. This paper presents a comparative study in which nine different WHPA delineation methods were applied to a well and a spring in an unconfined granular aquifer and to a well in a confined highly fractured rock aquifer. These methods range from simple approaches to complex computer models. Hydrogeological mapping and numerical modeling with MODFLOW-MODPATH were used as reference methods to respectively compare the delineation of the zone of contribution and the zone of travel obtained from the various WHPA methods. Although applied to simple ground water flow systems, these methods provided a relatively wide range of results. To allow a realistic delineation of the WHPA in aquifers of variable geometry, a WHPA delineation method should ensure a water balance and include observed or calculated regional flow characteristics.
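Of the simple approaches such comparisons typically include (the abstract does not name them), the calculated fixed radius is the easiest to illustrate; a hedged sketch using the standard volumetric equation r = sqrt(Q*t / (pi*n*b)) with invented well parameters:

```python
import math

# Calculated fixed radius: the cylinder holding the volume pumped over time t.
Q = 500.0      # pumping rate (m^3/day), invented
t = 10 * 365   # time-of-travel criterion: 10 years, in days
n = 0.25       # effective porosity, invented
b = 20.0       # screened aquifer thickness (m), invented

r = math.sqrt(Q * t / (math.pi * n * b))
print(f"WHPA radius ~ {r:.0f} m")  # ~341 m
```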
