1.
Remote sensing techniques allow monitoring of the Earth's surface and the acquisition of worthwhile information that can be used efficiently in agro-hydrological systems. Satellite images combined with computational models are reliable resources for estimating actual evapotranspiration fluxes, ETa, from the surface energy balance. Knowledge of ETa and its spatial distribution is crucial for a broad range of applications at different scales, from single fields to large irrigation districts. In single plots and/or in irrigation districts, linking the water volumes delivered to the plots with remotely sensed ETa estimates has great potential for developing new cost-effective indicators of irrigation performance and for increasing water use efficiency. To assess irrigation system performance and the opportunities to save irrigation water at the “SAT Llano Verde” district in Albacete, Castilla-La Mancha (Spain), the Surface Energy Balance Algorithm for Land (SEBAL) was applied to cloud-free Landsat 5 Thematic Mapper (TM) images, resampled by cubic convolution, for three irrigation seasons (May to September of 2006, 2007 and 2008). The model quantified instantaneous, daily, monthly and seasonal ETa over the irrigation district. Comparing the monthly irrigation volumes distributed by each hydrant with the corresponding spatially averaged ETa, assuming an overall irrigation-network efficiency of 85%, allowed the irrigation system performance to be assessed for the area served by each hydrant as well as for the whole district. In all the investigated years, the irrigation volumes applied monthly by farmers were generally higher than the corresponding evapotranspiration fluxes retrieved by SEBAL, with the exception of May, when abundant rainfall occurred. Over the entire irrigation seasons, a considerable amount of water could have been saved in the district, equal to 26.2, 28.0 and 16.4% of the total water consumption evaluated in the three years, respectively.
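As a rough illustration of the hydrant-level comparison described in this abstract, the sketch below (hypothetical numbers, not the study's data) converts delivered volumes into net depths with the assumed 85% network efficiency and contrasts them with spatially averaged SEBAL ETa.

```python
# A minimal sketch with hypothetical monthly values for one hydrant; none of the
# numbers come from the paper.
import numpy as np

efficiency = 0.85                      # overall irrigation-network efficiency assumed in the study

gross_irrigation_mm = np.array([60.0, 110.0, 150.0, 140.0, 90.0])   # delivered depth, May-Sep
rainfall_mm         = np.array([55.0, 10.0, 0.0, 5.0, 20.0])        # effective rainfall (assumed)
eta_sebal_mm        = np.array([95.0, 105.0, 130.0, 120.0, 85.0])   # SEBAL ETa, field average

net_supply_mm = efficiency * gross_irrigation_mm + rainfall_mm
excess_mm = net_supply_mm - eta_sebal_mm            # > 0 means over-irrigation in that month
seasonal_saving_pct = 100 * excess_mm.clip(min=0).sum() / gross_irrigation_mm.sum()

for month, e in zip(["May", "Jun", "Jul", "Aug", "Sep"], excess_mm):
    print(f"{month}: supply - ETa = {e:+.1f} mm")
print(f"potential seasonal saving ≈ {seasonal_saving_pct:.1f}% of delivered water")
```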
2.
Time-frequency peak filtering (TFPF) is an effective method for seismic random noise attenuation. The linearity of the signal has a significant influence on the accuracy of TFPF: the higher the linearity of the signal to be filtered, the better the denoising result. With this in mind, and taking the lateral coherence of reflected events into account, we apply TFPF along the reflected events to improve the degree of linearity and enhance the continuity of these events. The key to realizing this idea is finding the traces of the reflected events; however, these traces are very hard to obtain in complicated field seismic data. In this paper, we propose a Multiple Directional TFPF (MD–TFPF), in which filtering is performed on directional components of the seismic data obtained with a directional filter bank. In each directional component, TFPF is applied along the decomposed reflected events (the local direction of the events) instead of along the channel direction. The final result is obtained by summing the filtering results of all decomposition directions. In this way, filtering along the reflected events is implemented without accurately determining their directions. The effectiveness of the proposed method is tested on synthetic and field seismic data. The experimental results demonstrate that MD–TFPF eliminates random noise and enhances the continuity of reflected events more effectively, with better signal preservation, than conventional TFPF, the curvelet denoising method and F–X deconvolution.
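The sketch below is not the authors' algorithm; it only illustrates the decompose-filter-sum structure of MD–TFPF, approximating the directional filter bank with angular wedge masks in the 2D Fourier domain and standing in for the per-direction TFPF step with an identity placeholder.

```python
# Schematic sketch of "decompose into directional components, filter each, sum".
# The actual time-frequency peak filter is NOT implemented; a trivial identity
# placeholder is used, so the sketch only verifies that the directional
# components reconstruct the section.
import numpy as np

def directional_components(section, n_dir=8):
    """Split a 2D section into n_dir angular wedge components (Fourier-domain masks)."""
    F = np.fft.fftshift(np.fft.fft2(section))
    ny, nx = section.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    angle = np.mod(np.arctan2(ky, kx), np.pi)            # wavenumber orientation in [0, pi)
    edges = np.linspace(0.0, np.pi, n_dir + 1)
    comps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (angle >= lo) & (angle < hi)
        comps.append(np.real(np.fft.ifft2(np.fft.ifftshift(F * mask))))
    return comps

def tfpf_1d(trace):
    # Placeholder for TFPF applied along the local event direction.
    return trace

noisy = np.random.randn(128, 64)                          # stand-in for a noisy seismic section
filtered = sum(np.apply_along_axis(tfpf_1d, 1, c) for c in directional_components(noisy))
print("max reconstruction error with identity filter:", np.abs(filtered - noisy).max())
```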
3.
The ensemble Kalman filter (EnKF) has been widely applied in land surface data assimilation because it is easy to use; however, it is built on the assumptions of a linear model and normally distributed errors, whereas the actual soil moisture equation is highly nonlinear and the ensemble becomes skewed when the soil is too dry or too wet. To comprehensively evaluate its performance in assimilating surface soil moisture observations to retrieve the soil moisture profile, the sampling importance resampling particle filter, which does not require the above assumptions, is introduced to compare the effects of nonlinearity and skewness on the assimilation algorithms. The results show that the EnKF approaches the sample mean quickly and accurately for both small and large ensembles, whereas the particle filter converges only slowly and only with a large ensemble. In addition, the marginal probability density of the EnKF particles, and its skewness and kurtosis, differ completely from those of the particle filter: the EnKF particles, although not strictly normally distributed, remain unimodal throughout, whereas the particle-filter particles evolve from unimodal to bimodal and back to unimodal as the assimilation proceeds.
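A toy sketch of the comparison described in this abstract, using a scalar state instead of a land surface model: an ensemble Kalman filter analysis step (perturbed observations) next to a sampling importance resampling (SIR) particle filter. All numbers are illustrative assumptions.

```python
# Toy scalar comparison of an EnKF analysis step and a SIR particle filter;
# not the study's soil-moisture model.
import numpy as np

rng = np.random.default_rng(0)
n, obs_err, true_state = 100, 0.05, 0.30          # ensemble size, obs std, "truth" (assumed)
prior = rng.beta(2.0, 5.0, n)                     # skewed prior ensemble bounded in [0, 1]
y = true_state + rng.normal(0.0, obs_err)         # synthetic soil-moisture observation

# --- EnKF analysis step (linear/Gaussian assumptions) ---
K = np.var(prior) / (np.var(prior) + obs_err**2)                    # scalar Kalman gain
enkf_post = prior + K * (y + rng.normal(0, obs_err, n) - prior)     # perturbed observations

# --- SIR particle filter (no Gaussian assumption) ---
w = np.exp(-0.5 * ((y - prior) / obs_err) ** 2)
w /= w.sum()
pf_post = prior[rng.choice(n, size=n, p=w)]       # resample with importance weights

print("EnKF mean: %.3f  PF mean: %.3f  truth: %.3f"
      % (enkf_post.mean(), pf_post.mean(), true_state))
```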
4.
In this paper, the authors examine models of probability distributions for the sampling error of rainfall estimates obtained from discrete satellite sampling in time, based on 5 years of 15-min radar rainfall data in the central United States. The sampling errors considered include all combinations of 3, 6, 12, or 24 h sampling of rainfall over 32, 64, 128, 256, or 512 km square domains, and 1, 5, or 30 day rainfall accumulations. The results reveal that the sampling error distribution depends strongly on the rain rate; hence the conditional distribution of sampling error is more informative than its marginal distribution. The distribution of sampling error conditional on rain rate is strongly affected by the sampling interval. At sampling intervals of 3 or 6 h, the logistic distribution fits the conditional sampling error quite well, while the shifted-gamma, shifted-Weibull, shifted-lognormal, and normal distributions fit poorly. At sampling intervals of 12 or 24 h, the shifted-gamma, shifted-Weibull, or shifted-lognormal distribution fits the conditional sampling error better than the logistic or normal distribution. These results are vital for understanding the accuracy of satellite rainfall products, for performing validation assessments of these products, and for analyzing the effects of rainfall-related errors in hydrological models.
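As an illustration of the model-comparison step described here (not the paper's radar-derived errors), the sketch below fits the candidate distributions to a synthetic conditional sampling-error sample by maximum likelihood and ranks them with a Kolmogorov-Smirnov statistic.

```python
# Fit candidate distributions to a simulated sampling-error sample and rank
# them by the KS statistic; the data are synthetic, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sampling_error = rng.logistic(loc=0.0, scale=0.8, size=2000)   # stand-in for 3-h sampling error

candidates = {
    "logistic": stats.logistic,
    "normal": stats.norm,
    "shifted-gamma": stats.gamma,          # loc parameter provides the shift
    "shifted-lognormal": stats.lognorm,
    "shifted-Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(sampling_error)                              # MLE fit (shape(s), loc, scale)
    ks = stats.kstest(sampling_error, dist.cdf, args=params).statistic
    print(f"{name:>18s}: KS = {ks:.3f}")
```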
5.
6.
Various uncertainties arising during the acquisition of geoscience data may result in anomalous data instances (i.e., outliers) that do not conform to the expected pattern of regular data instances. With sparse multivariate data obtained from geotechnical site investigation, it is impossible to identify outliers with certainty, because outliers distort the statistics of geotechnical parameters and data sparsity introduces statistical uncertainty. This paper develops a probabilistic outlier detection method for sparse multivariate data obtained from geotechnical site investigation. The proposed approach quantifies the outlying probability of each data instance based on the Mahalanobis distance and identifies as outliers those data instances with outlying probabilities greater than 0.5. It tackles the distortion of statistics estimated from a dataset containing outliers by a re-sampling technique and rationally accounts for the statistical uncertainty by Bayesian machine learning. Moreover, the proposed approach also provides a dedicated method to determine the outlying components of each outlier. The approach is illustrated and verified using simulated and real-life datasets. It is shown that the approach properly identifies, in a probabilistic manner, outliers among sparse multivariate data and their corresponding outlying components, and that it significantly reduces the masking effect (i.e., missing actual outliers because of the distortion of statistics by the outliers and the statistical uncertainty). It is also found that outliers among sparse multivariate data significantly affect the construction of the multivariate distribution of geotechnical parameters for uncertainty quantification. This emphasizes the necessity of a data cleaning process (e.g., outlier detection) for uncertainty quantification based on geoscience data.
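A simplified sketch of the Mahalanobis-distance idea: the paper's Bayesian machine-learning treatment of statistical uncertainty is replaced here by plain bootstrap re-sampling, and the outlying probability of each instance is taken as the fraction of resamples in which it exceeds a chi-square distance threshold.

```python
# Bootstrap-based outlying probability via Mahalanobis distance; a stand-in for
# the paper's Bayesian treatment, on simulated two-variable data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
X = rng.multivariate_normal([10.0, 50.0], [[4.0, 3.0], [3.0, 9.0]], size=25)
X = np.vstack([X, [20.0, 20.0]])                      # one injected outlier

def outlying_probability(X, n_boot=500, level=0.95):
    n, d = X.shape
    threshold = stats.chi2.ppf(level, df=d)
    flags = np.zeros(n)
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)     # resample to reflect statistical uncertainty
        mu = X[idx].mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X[idx], rowvar=False))
        diff = X - mu
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distances
        flags += (d2 > threshold)
    return flags / n_boot

p_out = outlying_probability(X)
print("points with outlying probability > 0.5:", np.where(p_out > 0.5)[0])
```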
7.
Exploratory data analysis (EDA) is a toolbox of data manipulation methods for looking at data to see what they seem to say, i.e. one tries to let the data speak for themselves. In this way there is hope that the data will lead to indications about 'models' of relationships not expected a priori. In this respect EDA is a pre-step to confirmatory data analysis, which delivers measures of how adequate a model is. In this tutorial the focus is on multivariate exploratory data analysis for quantitative data using linear methods for dimension reduction and prediction. Purely graphical multivariate tools such as 3D rotation and scatterplot matrices are discussed after introducing the univariate and bivariate tools on which they are based. The main tasks of multivariate exploratory data analysis are identified as 'search for structure' by dimension reduction and 'model selection' by comparing predictive power. Resampling is used to support validity, and variable selection to improve interpretability.
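A minimal sketch of the 'search for structure' by dimension reduction task, on simulated data: principal components computed from the SVD of the centred and scaled data matrix; in an EDA session one would next inspect scatterplots of the leading component scores.

```python
# PCA via SVD of the standardised data matrix; data are simulated.
import numpy as np

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 2))                        # hidden 2-D structure
X = latent @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(200, 6))

Z = (X - X.mean(axis=0)) / X.std(axis=0)                  # centre and scale each variable
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / (s**2).sum()
scores = Z @ Vt.T                                         # principal component scores

print("variance explained by each component:", np.round(explained, 3))
print("first two components carry %.0f%% of the variance" % (100 * explained[:2].sum()))
```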
8.
The cumulative distribution function (CDF) of the magnitude of seismic events is one of the most important probabilistic characteristics in Probabilistic Seismic Hazard Analysis (PSHA). The magnitude distribution of mining-induced seismicity is complex and is therefore estimated using kernel nonparametric estimators. Because of its model-free character, however, the nonparametric approach cannot provide confidence interval estimates for the CDF using the classical methods of mathematical statistics. To assess errors in the estimation of seismic event magnitudes, and thereby in the evaluation of seismic hazard parameters in the nonparametric approach, we propose the use of resampling methods. Resampling techniques applied to a single dataset provide many replicas of this sample that preserve its probabilistic properties. To estimate confidence intervals for the CDF of magnitude, we have developed an algorithm based on the bias-corrected and accelerated (BCa) method. This procedure uses the smoothed bootstrap and second-order bootstrap samples; we refer to it as the iterated BCa method. The algorithm's performance is illustrated through the analysis of Monte Carlo simulated seismic event catalogues and actual data from an underground copper mine in the Legnica–Głogów Copper District in Poland. The studies show that the iterated BCa technique provides satisfactory results regardless of the sample size and the actual shape of the magnitude distribution.
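A minimal sketch of a standard BCa confidence interval for one point of the magnitude CDF on a simulated catalogue; the iterated BCa algorithm with smoothed and second-order bootstrap samples proposed in the paper is not reproduced here.

```python
# Standard BCa interval for F(m0) using SciPy's bootstrap on a toy catalogue.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
magnitudes = rng.exponential(scale=0.4, size=300) + 1.0    # toy Gutenberg-Richter-like catalogue
m0 = 2.0                                                   # magnitude at which the CDF is evaluated

def cdf_at_m0(sample, axis=-1):
    return np.mean(np.asarray(sample) <= m0, axis=axis)

res = stats.bootstrap((magnitudes,), cdf_at_m0, confidence_level=0.95,
                      method="BCa", random_state=rng)
print("F(%.1f) = %.3f, 95%% BCa CI = (%.3f, %.3f)"
      % (m0, cdf_at_m0(magnitudes), res.confidence_interval.low, res.confidence_interval.high))
```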
9.
In this paper, a methodology for selecting statistical models to describe extreme wave heights on the basis of resampling techniques is presented. Two such techniques are evaluated: the jackknife and the bootstrap. The methods are applied to two high-quality datasets of wave measurements in the Mediterranean and one from the East Coast of the USA. The robustness of the estimates of extreme wave heights at return periods important for coastal engineering design is explored further. In particular, we demonstrate how an ensemble error norm can be used to select the most appropriate extreme probability model from a choice of cumulative distribution functions (CDFs). This error norm is based on the mean error norm of the optimised CDF for each resampled (replicate) data series. The resampling approach is also used to present confidence intervals for the CDF parameters. We provide a brief discussion of the sensitivity of these parameters and of the suitability of each model in terms of the uncertainty obtained with resampling techniques. The advantages of resampling are outlined, and the superiority of the bootstrap over the jackknife in quantifying the uncertainty of extreme quantiles is demonstrated for these records.
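A sketch of the bootstrap part of such an analysis on synthetic wave data: the extreme-value sample is resampled, a candidate CDF is refitted (here a GEV fitted with SciPy, standing in for whichever model the error norm would select), and the spread of the design quantile gives its confidence interval.

```python
# Bootstrap confidence interval for a 100-year wave height from a GEV fit;
# the annual-maximum sample is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
hs_annual_max = stats.genextreme.rvs(c=-0.1, loc=4.0, scale=0.8, size=40, random_state=rng)
return_period = 100                                   # years
p = 1.0 - 1.0 / return_period

levels = []
for _ in range(500):                                  # bootstrap replicates
    resample = rng.choice(hs_annual_max, size=hs_annual_max.size, replace=True)
    c, loc, scale = stats.genextreme.fit(resample)
    levels.append(stats.genextreme.ppf(p, c, loc=loc, scale=scale))

lo, hi = np.percentile(levels, [2.5, 97.5])
print("100-year Hs: %.2f m, 95%% bootstrap CI = (%.2f, %.2f) m"
      % (np.median(levels), lo, hi))
```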
10.
Stochastic models have been widely used for simulation studies. However, reproducing the skewness of observed series is difficult, and stochastic models that preserve skewness were therefore developed. Whereas skewness preservation has usually been handled through the skewness of the residuals of the stochastic model, this study uses random resampling of residuals from the stochastic model for the simulation study and for the investigation of the skewness coefficient. The main advantage of this resampling scheme, called the bootstrap method, is that it does not rely on an assumption about the population distribution; this study therefore uses a model that combines the stochastic and bootstrap approaches. The stochastic and bootstrapped stochastic (combined) models are used to investigate skewness preservation and the reproduction of the probability density function of the simulated series. The models are applied to the annual and monthly streamflows of the Yongdam site in Korea and the Yakima River, Washington, USA, and the statistics and probability density functions of the observed and simulated streamflows are compared. The results show that the bootstrapped stochastic model reproduces the skewness and the probability density function much better than the stochastic model. This evidence suggests that the bootstrapped stochastic model may be more appropriate than the stochastic model for preserving skewness and for simulation of such series.
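A minimal sketch of the combined model described here: an AR(1) stochastic model whose residuals are drawn either from a fitted normal distribution or bootstrapped from the observed residuals, with the skewness of the simulations compared afterwards. The "observed" series is simulated, not the Yongdam or Yakima data.

```python
# AR(1) simulation with normal residuals versus bootstrapped residuals;
# compares how well each preserves the skewness of the "observed" series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
obs = np.exp(rng.normal(3.0, 0.5, size=600))              # skewed stand-in for monthly flows

phi = np.corrcoef(obs[:-1], obs[1:])[0, 1]                # lag-1 autocorrelation
mu = obs.mean()
resid = obs[1:] - mu - phi * (obs[:-1] - mu)              # AR(1) residuals

def simulate(draw_resid, n=600):
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + draw_resid()
    return x

gaussian_sim = simulate(lambda: rng.normal(0.0, resid.std()))   # classical stochastic model
bootstrap_sim = simulate(lambda: rng.choice(resid))             # bootstrapped residuals

print("skewness  observed: %.2f  AR(1)+normal: %.2f  AR(1)+bootstrap: %.2f"
      % (stats.skew(obs), stats.skew(gaussian_sim), stats.skew(bootstrap_sim)))
```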