Similar Articles
 20 similar articles found (search time: 15 ms)
1.
General circulation models (GCMs), the climate models most often used to assess the impact of climate change, operate on a coarse spatial scale, so their simulation output is of limited direct use for hydrology at the comparatively small scale of a river basin. This article presents a statistical downscaling methodology based on sparse Bayesian learning and the Relevance Vector Machine (RVM) to model streamflow at the river-basin scale for the monsoon period (June, July, August, September) from GCM-simulated climatic variables. NCEP/NCAR reanalysis data are used to train the model and establish a statistical relationship between streamflow and the climatic variables; this relationship is then used to project future streamflow from GCM simulations. The methodology combines principal component analysis, fuzzy clustering, and the RVM, and different kernel functions are compared. The model is applied to the Mahanadi river basin in India, and the RVM results are compared with those of the state-of-the-art Support Vector Machine (SVM) to illustrate the advantages of RVMs over SVMs. Under the CCSR/NIES GCM and the B2 scenario, a decreasing trend is projected for the monsoon streamflow of the Mahanadi, attributed to strong future surface warming.
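Entry 1's downscaling chain (predictor reduction followed by kernel regression) can be sketched in a few lines. This is a toy illustration on synthetic data, not the paper's pipeline: scikit-learn has no RVM, so SVR stands in for it, the fuzzy-clustering step is omitted, and all shapes and hyperparameters are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-ins: 40 monsoon seasons x 10 gridded climate variables,
# and the streamflow they (noisily) drive.
X = rng.normal(size=(40, 10))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=40)

# Standardize -> PCA (dimension reduction) -> kernel regression
# (SVR here as a stand-in for the paper's RVM).
model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:30], y[:30])
pred = model.predict(X[30:])
print(pred.shape)  # (10,)
```

In the actual study the predictors would be NCEP/NCAR reanalysis fields and the target would be observed monsoon streamflow.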

2.
This article employs the Support Vector Machine (SVM) and Relevance Vector Machine (RVM) for prediction of evaporation losses (E) in reservoirs. The SVM, which is firmly grounded in statistical learning theory, is used in its regression form with an ε-insensitive loss function. The RVM is based on a Bayesian formulation of a linear model with an appropriate prior that results in a sparse representation. The inputs of the SVM and RVM models are mean air temperature (T) (°C), average wind speed (WS) (m/sec), sunshine hours (SH) (hrs/day), and mean relative humidity (RH) (%). Equations have also been developed for the prediction of E, and the developed RVM model additionally gives the variance of the predicted E. A comparative study between the SVM, RVM, and ANN models indicates that the developed SVM and RVM can be used as practical tools for prediction of E. Copyright © 2011 John Wiley & Sons, Ltd.
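The predictive variance that entry 2 highlights as an advantage of the RVM comes from its sparse Bayesian formulation. A minimal numpy sketch of Tipping-style evidence iterations, with synthetic stand-ins for the (T, WS, SH, RH) inputs; the kernel, data, and hyperparameter schedule are all invented for illustration.

```python
import numpy as np

def rbf(X, Z, s=1.0):
    """Gaussian kernel matrix between row-sample sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))                 # stand-ins for T, WS, SH, RH
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=60)

Phi = rbf(X, X)                              # one basis function per sample
alpha = np.ones(60)                          # per-weight precisions
beta = 100.0                                 # noise precision
for _ in range(50):                          # evidence (type-II ML) iterations
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ y
    gamma = 1.0 - alpha * np.diag(Sigma)     # "well-determined" measure
    alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-6, 1e6)
    beta = (60 - gamma.sum()) / ((y - Phi @ mu) ** 2).sum()

# Predictive mean and variance at a new point; the variance is the
# quantity the abstract highlights as an advantage of the RVM.
phi = rbf(rng.normal(size=(1, 4)), X)
mean = float(phi @ mu)
var = 1.0 / beta + float(phi @ Sigma @ phi.T)
print(var > 0)  # True
```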

3.
By organically combining kernel-function theory with Quantification Theory IV, a kernel Quantification Theory IV model is proposed, and an algorithmic framework for the large-sample version of the model is designed on the basis of the Lanczos algorithm for computing the extreme eigenpairs of high-order symmetric matrices. The model is applied to dimensionality-reduction experiments on hyperspectral images. The results show that, with a reasonable choice of kernel function and parameters, the kernel Quantification Theory IV model can represent the cluster structure of the original data in a low-dimensional scaling space and thereby achieve satisfactory classification results. The model provides a new theoretical tool for the analysis and processing of large-sample geoscientific observation data.

4.
Setting the parameters of the least-squares support vector machine (LS-SVM) for regression has long been difficult, as the choice is affected by the signal type and strength, the kernel type, the noise level, and the required computational accuracy. This paper addresses the removal of random noise from seismic exploration signals using an LS-SVM with a Ricker-wavelet kernel, and discusses how the machine parameter and the kernel parameter affect denoising performance. Experiments show that the kernel parameter f can be set to the dominant frequency of the seismic record, and when it cannot be estimated accurately it is better to overestimate than to underestimate it; the machine parameter γ is generally acceptable as long as it is not set too small. The method is applied to denoise seismic exploration signals containing noise of different strengths.
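The LS-SVM regression that entry 4 tunes reduces to a single linear solve: (K + I/γ)α + b·1 = y with Σα = 0. A minimal numpy sketch on a synthetic noisy trace; a Gaussian kernel stands in for the paper's Ricker-wavelet kernel, and the kernel width and γ below are arbitrary illustrative choices (the paper's advice: tie the kernel's frequency parameter to the record's dominant frequency, and keep γ from being too small).

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 5 * t)                 # 5 Hz "signal"
y = clean + rng.normal(scale=0.3, size=t.size)    # noisy trace

# LS-SVM dual system:  [0  1^T ] [b]   [0]
#                      [1  K+I/g] [a] = [y]
sig, gamma = 0.02, 10.0
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sig ** 2))
n = t.size
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
b, a = sol[0], sol[1:]
denoised = K @ a + b

mse_noisy = np.mean((y - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_denoised < mse_noisy)  # True
```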

5.
To predict earthquake magnitude indirectly from easily obtained seismicity-related factors, a prediction model based on the Relevance Vector Machine (RVM) is proposed. Through training on samples, a nonlinear mapping is established between magnitude and six main influencing factors: cumulative event frequency, cumulative released energy, mean magnitude, b value, η value, and the magnitude in the correlated region; magnitude is then predicted from the known factors. The results show that the RVM predictions outperform those of BP and SOM-BP neural networks. A sensitivity analysis comparing the factors shows that the b value and η value are the most influential and deserve particular attention in magnitude studies. Overall, the RVM model offers high accuracy and low scatter and has good potential for wider use in earthquake magnitude prediction.

6.
A new approach for streamflow simulation using nonparametric methods was described in a recent publication (Sharma et al. 1997). Nonparametric methods have the advantage that they avoid the need to select a probability distribution and can represent nonlinear features of the probability structure of hydrologic variables such as streamflow and precipitation (e.g., asymmetry and bimodality) that hitherto were difficult to represent. The nonparametric method used was kernel density estimation, which requires the selection of bandwidth (smoothing) parameters. This study documents some of the tests that were conducted to evaluate the performance of bandwidth estimation methods for kernel density estimation. Issues related to the selection of optimal smoothing parameters with small samples (200 or fewer data points) are examined. Both a Gaussian reference density and data-based specifications are applied to estimate bandwidths for samples from bivariate normal mixture densities. The three data-based methods studied are Maximum Likelihood Cross Validation (MLCV), Least Square Cross Validation (LSCV), and Biased Cross Validation (BCV2). Modifications for estimating optimal local bandwidths using MLCV and LSCV are also examined. We found that the use of local bandwidths does not necessarily improve the density estimate with small samples. Of the global bandwidth estimators compared, MLCV and LSCV are better because they show lower variability and higher accuracy, while Biased Cross Validation suffers from multiple optimal bandwidths for samples from strongly bimodal densities. These results, of particular interest in stochastic hydrology where small samples are common, may also matter in other applications of nonparametric density estimation with similar sample sizes and distribution shapes. Received: November 12, 1997
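Of the bandwidth selectors entry 6 compares, MLCV is the simplest to sketch: choose h to maximize the leave-one-out log-likelihood of the kernel density estimate. A 1-D numpy illustration on a synthetic bimodal sample (the study itself uses bivariate mixtures; the sample, grid, and sizes here are invented).

```python
import numpy as np

rng = np.random.default_rng(3)
# Small bimodal sample, loosely echoing the paper's mixture experiments.
x = np.concatenate([rng.normal(-2, 0.5, 60), rng.normal(2, 0.5, 60)])
n = x.size

def mlcv(h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(K, 0.0)                 # leave one out
    f = K.sum(axis=1) / ((n - 1) * h)
    return np.log(f).sum()

grid = np.linspace(0.05, 2.0, 100)           # candidate bandwidths
h_star = grid[np.argmax([mlcv(h) for h in grid])]
print(round(float(h_star), 3))               # MLCV-optimal bandwidth
```

LSCV follows the same pattern with a squared-error cross-validation score in place of the log-likelihood.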

7.
This paper proposes the least-squares support vector machine (LSSVM) and relevance vector machine (RVM) for predicting the magnitude (M) of reservoir-induced earthquakes from reservoir parameters. A comprehensive parameter (E) and the maximum reservoir depth (H) are used as input variables of the LSSVM and RVM, and M is the output. Equations are presented based on the developed LSSVM and RVM, and the developed RVM also gives the variance of the predicted M. A comparative study between the developed LSSVM, RVM, artificial neural network (ANN), and linear regression models demonstrates the effectiveness and efficiency of the LSSVM and RVM models.

9.
Kernel density estimators are useful building blocks for empirical statistical modeling of precipitation and other hydroclimatic variables. Data-driven estimates of the marginal probability density function of these variables (which may have discrete or continuous arguments) provide a useful basis for Monte Carlo resampling and are also useful for posing and testing hypotheses (e.g., bimodality) about the frequency distribution of the variable. In this paper, issues related to the selection and design of univariate kernel density estimators are reviewed, strategies for bandwidth and kernel selection are discussed in an applied context, and recommendations for parameter selection are offered. This paper complements the nonparametric wet/dry spell resampling methodology presented in Lall et al. (1996).
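A quick way to see the "Gaussian reference" bandwidth strategy discussed in entry 9 is scipy's gaussian_kde, whose "silverman" rule scales the sample standard deviation; the data below are synthetic, rainfall-like draws invented for illustration.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=10.0, size=300)   # skewed, rainfall-like sample

# Gaussian-reference (Silverman) bandwidth vs. a manual narrow factor.
kde_ref = gaussian_kde(x, bw_method="silverman")
kde_narrow = gaussian_kde(x, bw_method=0.1)      # scalar = factor on std dev

grid = np.linspace(0.0, x.max(), 400)
mass = trapezoid(kde_ref(grid), grid)
print(round(float(mass), 2))  # close to 1 (some mass leaks below zero)
```

The narrow-factor estimate will look rougher; for skewed variables like rainfall a reference rule tends to oversmooth, which is one motivation for the data-based selectors above.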

10.
A novel approach to infer streamflow signals for ungauged basins
In this paper, we present a novel paradigm for inference of streamflow for ungauged basins. Our innovative procedure fuses concepts from both kernel methods and data assimilation. Based on the modularity and flexibility of kernel techniques and the strengths of the variational Bayesian Kalman filter and smoother, we can infer streamflow for ungauged basins whose hydrological and system properties and/or behavior are non-linear and non-Gaussian. We apply the proposed approach to two watersheds, one in California and one in West Virginia. The inferred streamflow signals for the two watersheds appear promising. These preliminary and encouraging validations demonstrate that our new paradigm is capable of providing accurate conditional estimates of streamflow for ungauged basins with unknown and non-linear dynamics.
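Entry 10's variational Bayesian Kalman filter is beyond a short sketch, but the recursion it generalizes is the linear-Gaussian Kalman filter. A scalar toy version (all dynamics and noise levels invented) shows the predict/update cycle that conditions a latent state on noisy observations.

```python
import numpy as np

rng = np.random.default_rng(5)
a, q, r = 0.9, 0.2, 0.5          # state transition, process var, obs var
T = 100
x_true = np.zeros(T)
y_obs = np.zeros(T)
for t in range(1, T):            # simulate an AR(1) "storage" state
    x_true[t] = a * x_true[t - 1] + rng.normal(scale=np.sqrt(q))
    y_obs[t] = x_true[t] + rng.normal(scale=np.sqrt(r))

m, P = 0.0, 1.0                  # filter mean / variance
est = np.zeros(T)
for t in range(1, T):
    m_pred, P_pred = a * m, a * a * P + q      # predict
    K = P_pred / (P_pred + r)                  # Kalman gain
    m = m_pred + K * (y_obs[t] - m_pred)       # update with observation
    P = (1 - K) * P_pred
    est[t] = m

mse_filter = np.mean((est - x_true) ** 2)
mse_obs = np.mean((y_obs - x_true) ** 2)
print(mse_filter < mse_obs)  # True
```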

11.
The support vector machine (SVM) method, grounded in small-sample statistical learning theory, can be used to build nonlinear predictive models. Using the SVM with a kernel function constructed by automatic fitting to the sample data, the resulting relationship achieves both high fitting accuracy and good generalization. Since a seismic wave's spectrum and its waveform are related by the forward and inverse Fourier transforms, the waveform and the spectrum are two representations of the same physical phenomenon. Lateral and vertical variations in waveform character reflect differences in the subsurface media, while differences in reflection spectra reflect changes in lithology, fluid content, and layer thickness. Predicting fan sandstone thickness directly from seismic waveforms makes full use of the seismic information and greatly improves the accuracy of the prediction model. A model study and a field example verify the applicability of the method.

12.
Xu Chong, Xu Xiwei. Chinese Journal of Geophysics (地球物理学报), 2012, 55(9): 2994-3005
Spatial prediction of earthquake-triggered landslide hazard based on statistical learning theory and geographic information system (GIS) technology is an important research direction, allowing prediction of the areas where landslides will occur under similar seismic conditions. The Mw 6.9 Yushu earthquake struck Yushu County, Qinghai Province, at 07:49 (Beijing time) on 14 April 2010. Based on interpretation of high-resolution remote-sensing imagery verified by field investigation, the authors mapped 2036 landslides triggered by this earthquake, distributed over a roughly rectangular area of 1455.3 km2. Taking this rectangle as the study area, spatial prediction models of seismic landslides based on different kernel functions were developed using GIS and the support vector machine (SVM). A spatial database of the landslides and related controlling factors was built with GIS, and twelve predictors were selected: elevation, slope angle, slope aspect, slope curvature, slope position, drainage, lithology, faults, roads, normalized difference vegetation index (NDVI), coseismic surface rupture, and peak ground acceleration (PGA). SVM models with four kernel functions (linear, polynomial, radial basis, and sigmoid) were used to produce landslide susceptibility index maps, susceptibility zoning maps, and prediction maps. The correct-classification rates of the four models were 79.87%, 83.45%, 84.16%, and 64.62%, respectively. Training and testing with different samples showed that the radial basis kernel yields the most suitable spatial prediction model for this area. The work provides a basis for the scientific choice of kernel function in seismic landslide spatial prediction models and a reference for landslide hazard mitigation in earthquake regions.
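The four kernel families compared in entry 12 are available directly in scikit-learn's SVC. A toy re-creation on synthetic data: the real study's twelve predictors and mapped landslides are replaced here by invented features with a nonlinear class boundary, so the accuracies are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 4))                            # stand-in factor grid
y = ((X[:, 0] ** 2 + X[:, 1] ** 2) < 2.0).astype(int)    # nonlinear "landslide" label
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

scores = {}
for kernel in ["linear", "poly", "rbf", "sigmoid"]:      # the four families compared
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(Xtr, ytr)
    scores[kernel] = clf.score(Xte, yte)
print(scores)
```

On a curved boundary like this one, the radial basis kernel comfortably beats the linear kernel, mirroring the study's ranking.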

13.
Carbonate reservoirs have complex pore structures, which not only significantly affect the elastic properties and seismic responses of the reservoirs but also affect the accuracy with which their physical parameters can be predicted. Existing rock-physics inversion methods are mainly designed for clastic rocks; the inversion targets are generally porosity and water saturation, the data used are primarily elastic parameters, and the inversion schemes are mainly linear approximations. To date, there has been no method for the simultaneous inversion of pore structure and physical parameters in carbonate reservoirs. To solve these problems, a new Bayesian nonlinear simultaneous inversion method based on elastic impedance is proposed. The method integrates the differential effective medium model of multiple-porosity rocks, the Gassmann equation, Amplitude Versus Offset (AVO) theory, Bayesian theory, and a nonlinear inversion algorithm to achieve simultaneous quantitative prediction of the pore structure and physical parameters of complex porous reservoirs. Forward modeling indicates that the contribution of the pore structure, i.e., the pore aspect ratio, to the AVO response and elastic impedance is second only to that of porosity and is far greater than that of water saturation. Application to real data shows that the new inversion method, which determines the pore structure and physical parameters directly from pre-stack data, can accurately predict a reservoir's porosity and water saturation and can evaluate the pore structure of the effective reservoir.

14.
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.

16.
This paper introduces an extension of the traditional stationary linear coregionalization model to handle the lack of stationarity. Under the proposed model, coregionalization matrices are spatially dependent, and the basic univariate spatial dependence structures are non-stationary. A parameter estimation procedure for the proposed non-stationary linear coregionalization model is developed under the local stationarity framework. It is based on the method of moments and involves a matrix-valued local stationary variogram kernel estimator and a weighted local least squares method combined with a kernel smoothing technique. Local parameter estimates are knitted together for prediction and simulation purposes. The proposed non-stationary multivariate spatial modeling approach is illustrated using two real bivariate data examples, and its prediction performance is compared with that of the classical stationary multivariate spatial modeling approach. According to several criteria, the prediction performance of the proposed non-stationary approach is significantly better.

17.
The authors propose a new analysis method, called the macro-micro analysis method (MMAM), in a companion paper (Earthquake Engng. Struct. Dyn., this issue) for strong-motion prediction with higher resolution and accuracy. The MMAM takes advantage of the bounding medium theory, which yields optimistic and pessimistic estimates of the expected strong motion, and of a singular perturbation expansion that leads to an efficient multi-scale analysis. The results of numerical simulation with the MMAM are given as the sum of low-resolution waves covering the whole city and high-resolution waves for each part of the city. Although the MMAM reduces the amount of computation, it remains very large; to resolve this problem, the present paper applies the finite element method with voxel elements to the numerical simulation tools after some numerical verification. To reproduce the complicated material properties of soft surface deposits, fundamental hysteresis attenuation is implemented in the three-dimensional simulation code. The proposed method is verified by carrying out strong-motion prediction with the MMAM and comparing the results with measured data. In addition, the effects of the three-dimensional soil structure and of the frequency content on the maximum velocity distribution, simulated by the proposed method with high spatial resolution, are discussed. Copyright © 2005 John Wiley & Sons, Ltd.

18.
In comparison with the traditional analysis of annual maximums, the peaks-over-threshold method provides many advantages when performing flood frequency analysis and trend analysis. However, the choice of the threshold remains an important question without a definite answer, and common visual diagnostic tools are difficult to reproduce on a large scale. This study investigates the behaviour of some automatic methods for threshold selection based on the generalized Pareto model for flood peak exceedances of the threshold and the Anderson-Darling test for fitting this model. In particular, the choice of a critical significance level to define an interval of acceptable values is addressed. First, the automatic methods are investigated using a simulation study to assess fitting and prediction performance in a controlled environment. It is shown that P values approximated by an existing table of critical values can speed up computation without affecting the quality of the outcomes. Second, a case study compares automatically and manually selected thresholds for 285 sites across Canada by flood regime and by super regions based on site characteristics. Correspondences are examined in terms of prediction of flood quantiles and trend analysis. Results show that trend detection is sensitive to the threshold selection method when studying the evolution of the number of peaks per year. Finally, a hybrid method is developed to combine the automatic methods and is calibrated on the basis of super regions. The outcomes of the hybrid method are shown to reproduce the estimates of the manually selected thresholds more closely while reducing the model uncertainty.
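The core loop behind automatic threshold selection in entry 18 (fit a generalized Pareto distribution to exceedances, score the fit with an Anderson-Darling statistic, repeat over candidate thresholds) can be sketched with scipy. The peak series and candidate thresholds below are synthetic, and the raw statistic is shown without the critical-value table the study relies on.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
peaks = rng.gamma(2.0, 5.0, size=1000)            # synthetic flood peaks

def ad_stat(u):
    """Anderson-Darling statistic for a GP fit to exceedances of threshold u."""
    exc = np.sort(peaks[peaks > u] - u)
    n = exc.size
    c, loc, scale = genpareto.fit(exc, floc=0.0)  # shape & scale, location fixed
    F = np.clip(genpareto.cdf(exc, c, loc=loc, scale=scale), 1e-10, 1 - 1e-10)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

stats = {u: ad_stat(u) for u in [5.0, 15.0, 25.0]}   # candidate thresholds
for u, a2 in stats.items():
    print(u, round(a2, 3))                            # smaller = better GP fit
```

A full implementation would convert each statistic to an approximate P value and pick the lowest threshold whose fit is not rejected at the chosen significance level.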

19.
The electrical potential generated by a point source of current on the ground surface is studied for a multi-layered earth formed by layers alternately characterized by a constant conductivity and by a conductivity varying linearly with depth. The problem is solved through a Laplace differential equation for the uniform layers and a Poisson differential equation for the transitional layers. Then, by a simple algorithm and the introduction of a suitable kernel function, the general expression of the apparent resistivity for a Schlumberger array placed on the surface is obtained. Details are also given for the solution of particular cases: (1) the presence of an infinitely resistive basement, (2) the absence of one or more uniform layers, and (3) the absence of one or more transitional layers. The new theory proves to be rather general, as it includes the theory for uniform layers with sharp boundaries as a particular case. Some mathematical properties of the kernel function are studied in view of the application of a direct system of quantitative interpretation. Two steps are considered for the solution of the direct problem: (i) the determination of the kernel function from field measurements of the apparent resistivity. Owing to the identical mathematical formalism of the old and the new resistivity theories, the procedures developed for this step carry over without change, and some previously published graphical and numerical procedures are recalled. (ii) The determination of the layer distribution from the kernel function. A recurrent procedure is proposed and studied in detail; it follows the principle of reduction to a lower boundary plane, as originally suggested by Koefoed for the old geoelectrical theory. Here the method differs mainly in the presence of reduction coefficients, which must be calculated each time when passing to a reduced earth section.

20.
The development and implementation of an earthquake early warning system (EEWS), in either a regional or an on-site configuration, can help to mitigate the losses due to moderate-to-large earthquakes in densely populated and/or industrialized areas. The capability of an EEWS to provide real-time estimates of source parameters (location and magnitude) can be used to take countermeasures during the earthquake and before the arrival of the most destructive waves at the site of interest. However, some critical issues are peculiar to EEWSs and need further investigation: (1) the uncertainties in earthquake magnitude and location estimates based on measurements from the very early portion of the recorded signals; (2) the selection of the most appropriate parameter for predicting the ground-motion amplitude in both near- and far-source ranges; and (3) the use of the estimates provided by the EEWS for structural engineering and risk mitigation applications. In the present study, these issues are discussed using the Campania-Lucania region (Southern Apennines) in Italy as a test-site area. In this region a prototype system for earthquake early warning, and more generally for seismic alert management, is under development; it is based on a dense, wide-dynamic-range accelerometric network deployed in the area where the causative fault systems of moderate-to-large earthquakes are located. The uncertainty analysis is performed through a real-time probabilistic seismic hazard analysis using two approaches. The first is a Bayesian approach that integrates the time-evolving estimates of the earthquake parameters, their probability density functions, and the variability of ground-motion propagation, providing the most complete information. The second is a classical point-estimate approach that does not account for the probability density function of the magnitude and uses only the average of the estimates made at each seismic station. Both approaches are applied to two main towns located in the area of interest, Napoli and Avellino, for which a missed- and false-alarm analysis is presented by means of a scenario earthquake: an M 7.0 event located at the centre of the seismic network. Concerning ground-motion prediction, attention is focused on response spectra as the most appropriate quantity for characterizing the ground motion in earthquake engineering applications of EEWSs.

