Similar Articles
1.
Study on the relationship between seasonal design flood frequency and flood control standards   Cited: 16 (self: 4, others: 12)
Seasonal design flood values estimated with the current seasonal design flood approach are all smaller than or equal to the annual-maximum design value, and therefore fail to meet the prescribed flood control standard. A Gumbel-Hougaard copula is used to describe the dependence structure between the seasonal maximum floods of two periods, and a joint distribution of the seasonal maxima with Pearson Type III (P-III) margins is constructed. From this, a relationship between the seasonal maximum floods and the annual maximum flood is established, the relationship that seasonal design flood frequencies must satisfy with respect to the flood control standard is discussed, and a new seasonal design flood approach capable of meeting the standard is explored. A worked example shows that, under the new approach, the design value for the main flood season increases only slightly relative to the annual-maximum design value, while the design value for the non-main flood season remains below the annual-maximum value. The approach thus meets the requirement of not lowering the flood control standard while still optimizing the design flood, providing a new line of thought for seasonal design flood research.
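A minimal Python sketch (not the authors' code) of the core relationship described above: if the two seasonal maxima have Pearson Type III margins F1 and F2 and their dependence is described by a Gumbel-Hougaard copula C, then the annual maximum flood satisfies F_annual(x) = C(F1(x), F2(x)), which is the link between seasonal design frequencies and the annual flood control standard. All distribution parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearson3
from scipy.optimize import brentq

# Illustrative Pearson Type III (P-III) margins for the two seasonal maxima
# (skew, loc, scale are assumed values, not taken from the paper).
F1 = pearson3(skew=1.2, loc=800.0, scale=300.0)   # main flood season
F2 = pearson3(skew=1.5, loc=300.0, scale=120.0)   # non-main flood season

def gumbel_hougaard(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls the dependence."""
    u = np.clip(u, 1e-12, 1 - 1e-12)
    v = np.clip(v, 1e-12, 1 - 1e-12)
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def annual_cdf(x, theta=2.0):
    """P(annual max <= x) = P(X1 <= x and X2 <= x) = C(F1(x), F2(x))."""
    return gumbel_hougaard(F1.cdf(x), F2.cdf(x), theta)

# Annual design value for a 100-year standard (non-exceedance 0.99),
# obtained by inverting the copula-based annual CDF numerically.
p = 0.99
x_annual = brentq(lambda x: annual_cdf(x) - p, 1.0, 1e5)
print(f"100-year annual design flood (illustrative): {x_annual:.1f}")
```

Inverting annual_cdf at the target non-exceedance probability gives the annual design value implied by any chosen pair of seasonal design frequencies, which is the kind of consistency check the new seasonal design approach relies on.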

2.
A common characteristic of gold deposits is highly skewed frequency distributions. Lognormal and three-parameter lognormal distributions have worked well for Witwatersrand-type deposits. Epithermal gold deposits show evidence of multiple pulses of mineralization, which make fitting simple distribution models difficult. A new approach is proposed which consists of the following steps: (1) order the data in descending order; (2) compute the cumulative coefficient of variation (CV) for each datum and look for the quantile at which the cumulative CV suddenly accelerates (typically above 0.85); (3) fit a lognormal model to the data above that quantile and establish the mean above the quantile, Z_H*, by fitting a single or double truncated lognormal model; (4) use variograms to establish the spatial continuity of the below-quantile data (Z_L) and of the indicator variable (1 if below the quantile, 0 if above); (5) estimate the grade of a block as (1*)(Z_L*) + (1 - 1*)(Z_H*), where 1* is the kriged estimate of the indicator and Z_L* is the kriged estimate of the below-quantile portion of the distribution. The method is illustrated for caldera, Carlin-type, and hot-springs-type deposits. For the latter two types, slight variants of the above steps are developed.
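A hedged sketch of steps (1)-(2) and (5) of this procedure. The kriged quantities in step (5) are replaced by placeholder numbers, since the indicator and grade kriging themselves are beyond the scope of this snippet; the grade data are synthetic.

```python
import numpy as np

def cumulative_cv(grades):
    """Steps (1)-(2) as read from the abstract: sort grades in descending order
    and compute the coefficient of variation of the top-k values for k = 1..n,
    to locate the quantile where the cumulative CV starts to accelerate."""
    z = np.sort(np.asarray(grades, dtype=float))[::-1]      # descending order
    k = np.arange(1, z.size + 1)
    csum = np.cumsum(z)
    csum2 = np.cumsum(z ** 2)
    mean = csum / k
    var = np.maximum(csum2 / k - mean ** 2, 0.0)
    cv = np.sqrt(var) / mean
    return z, cv

# Illustrative skewed gold grades (g/t); values are made up.
rng = np.random.default_rng(0)
grades = rng.lognormal(mean=0.0, sigma=1.4, size=500)
z_desc, cv = cumulative_cv(grades)

# Cumulative CV after including the top 15% of grades (the part above the
# 0.85 quantile mentioned in the abstract).
q85_index = int(0.15 * z_desc.size)
print(f"cumulative CV near the 0.85 quantile: {cv[q85_index]:.2f}")

# Step (5): blend the two components at a block. i_star, zL_star, zH_star would
# come from indicator kriging, kriging of below-quantile grades, and the
# truncated-lognormal mean above the quantile; the numbers here are placeholders.
i_star, zL_star, zH_star = 0.9, 0.8, 12.5
block_grade = i_star * zL_star + (1.0 - i_star) * zH_star
print(f"blended block grade estimate: {block_grade:.2f} g/t")
```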

3.
A large number of models have been proposed in recent years for regional flood frequency analysis in northern regions. However, these models have generally dealt with snowmelt-driven spring floods. This paper deals with the adaptation, application, and comparison of two regional frequency analysis methods, canonical correlation analysis (CCA) and universal canonical kriging (UCK), on autumnal floods at 29 stations in the Côte-Nord region (QC, Canada). Three possible periods during which autumnal floods can take place are tested, and both absolute and specific flood peak and volume quantiles are studied. A jack-knife resampling procedure is applied to compare the performance of each model according to the selected period and the type of quantile. The period from September 1st to December 15th is found to be optimal for representing autumnal floods, and specific quantiles lead to better results than absolute quantiles. The variables that best explain the autumnal floods are the basin area, the fraction of the area covered by lakes, and the average of the mean July, August, and September maximum temperatures. The CCA model performs slightly better than UCK.
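A small sketch of the jack-knife (leave-one-out) comparison described above, with a hypothetical fit_predict callable standing in for the CCA or UCK regional estimator; the station data are synthetic.

```python
import numpy as np

def jackknife_relative_error(sites_X, sites_q, fit_predict):
    """Leave-one-out (jack-knife) comparison: each station is removed in turn,
    the regional model is refit on the remaining stations, and the withheld
    station's quantile is predicted. `fit_predict(X_train, q_train, x_test)`
    is a hypothetical callable standing in for the CCA or UCK estimator."""
    n = len(sites_q)
    rel_err = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        q_hat = fit_predict(sites_X[keep], sites_q[keep], sites_X[i])
        rel_err[i] = abs(q_hat - sites_q[i]) / sites_q[i]
    return rel_err

# Toy usage with a trivial "regional mean" stand-in estimator.
rng = np.random.default_rng(1)
X = rng.normal(size=(29, 3))                      # 3 catchment descriptors
q = rng.lognormal(mean=5.0, sigma=0.3, size=29)   # at-site autumn flood quantiles
errors = jackknife_relative_error(X, q, lambda Xt, qt, x: qt.mean())
print(f"median jack-knife relative error: {np.median(errors):.2%}")
```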

4.
Regional flood frequency analysis (RFFA) is often used in hydrology to estimate flood quantiles when at-site recorded flood data are limited. One commonly used RFFA method is the index flood method, which is based on the assumptions that a region satisfies the criterion of simple scaling and can be treated as homogeneous. Another RFFA method is the quantile regression technique, in which prediction equations are developed for the flood quantiles of interest as functions of catchment characteristics. In this paper, the scaling property of regional floods in New South Wales (NSW), Australia, is investigated. The results indicate that annual maximum floods in NSW satisfy the simple scaling assumption. The application of a heterogeneity test, however, reveals that the NSW flood data set does not satisfy the criteria for a homogeneous region. Finally, a set of prediction equations is developed for NSW using the quantile regression technique; an independent test shows that these equations can provide reasonably accurate design flood estimates, with a median relative error of about 27%.
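The "quantile regression technique" in RFFA is commonly implemented as a power-law prediction equation fit by least squares in log space; the sketch below illustrates that generic form on synthetic data (it is not the NSW model or data set, and the predictor names are assumptions).

```python
import numpy as np

# Synthetic catchments: a power-law relation between the 100-year flood and
# two catchment characteristics, plus lognormal noise.
rng = np.random.default_rng(2)
n = 80
area = rng.lognormal(5.0, 1.0, n)            # catchment area (km^2)
rain = rng.lognormal(6.5, 0.3, n)            # design rainfall intensity proxy
q100 = 2.0 * area ** 0.7 * rain ** 0.9 * rng.lognormal(0.0, 0.2, n)

# Fit Q100 = a * Area^b1 * Rain^b2 by ordinary least squares in log space.
X = np.column_stack([np.ones(n), np.log(area), np.log(rain)])
coef, *_ = np.linalg.lstsq(X, np.log(q100), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"Q100 ~= {a:.2f} * Area^{b1:.2f} * Rain^{b2:.2f}")

# Median relative error of the (in-sample) fit, analogous in spirit to the
# ~27% reported from an independent test in the abstract.
pred = a * area ** b1 * rain ** b2
rel_err = np.abs(pred - q100) / q100
print(f"median relative error: {np.median(rel_err):.1%}")
```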

5.
There is a close relationship between the groundwater level in a shallow aquifer and the surface ecological environment; hence, it is important to accurately simulate and predict the groundwater level in eco-environmental construction projects. The multiple linear regression (MLR) model is one of the most widely used methods for predicting groundwater level (depth); however, the values predicted by this model only reflect the mean distribution of the observations and cannot effectively fit extreme values (outliers). The study reported here builds a prediction model of groundwater-depth dynamics in a shallow aquifer using the quantile regression (QR) method, on the basis of observed groundwater depths and related factors. The proposed approach was applied to five sites in Tianjin city, north China, and the groundwater depth was calculated at different quantiles, from which the optimal quantile was screened out according to the box-plot method and compared with the values predicted by the MLR model. The results showed that the related factors at the five sites did not follow the standard normal distribution and that there were outliers in the precipitation and last-month (initial-state) groundwater-depth factors, so the basic assumptions of the MLR model could not be satisfied, thereby causing errors. These conditions had no effect on the QR model, which could more effectively describe the distribution of the original data and fitted the outliers with higher precision.
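A minimal sketch of the MLR-versus-QR comparison using statsmodels on synthetic, outlier-prone data; the variable names and values are assumptions, not the Tianjin observations. Fitting the QR model at several quantiles and screening out the best one, as the study does, amounts to looping over the q argument.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic monthly records: precipitation and last-month (initial-state)
# groundwater depth, with heavy-tailed noise to mimic outliers.
rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "precip": rng.gamma(2.0, 30.0, n),        # mm, skewed
    "depth_prev": rng.normal(8.0, 1.0, n),    # m, last-month depth
})
df["depth"] = (0.9 * df["depth_prev"] - 0.01 * df["precip"]
               + rng.standard_t(df=3, size=n) * 0.3)   # outlier-prone noise

# Multiple linear regression (mean model) vs. median (0.5-quantile) regression.
mlr = smf.ols("depth ~ precip + depth_prev", data=df).fit()
qr_median = smf.quantreg("depth ~ precip + depth_prev", data=df).fit(q=0.5)

print(mlr.params)
print(qr_median.params)
```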

6.
Consideration of order relations is key to indicator kriging, indicator cokriging, and probability kriging, especially for the latter two methods, wherein the additional modeling of cross-covariance increases the chance of violating order relations. Herein, Gaussian-type curves are fit to estimates of the cumulative distribution function (cdf) at data quantiles to (1) yield smoothed estimates of the cdf and (2) correct violations of order relations (i.e., situations wherein the estimate of the cdf for a larger quantile is less than that for a smaller quantile). Smoothed estimates of the cdf are sought as a means to improve the approximation to the integral equation for the expected value of the regionalized variable in probability kriging. Experiments show that this smoothing yields slightly improved estimation of the expected value (in probability kriging). Another experiment, one that uses the same variogram for all indicator functions, does not yield improved estimates. Presented at the 25th Anniversary Meeting of the IAMG, Prague, Czech Republic, October 10–15, 1993.
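A short sketch of the smoothing idea, assuming "Gaussian-type curve" means a normal cdf with free location and scale: fitting it to possibly non-monotone cdf estimates at the data quantiles yields a monotone, smoothed cdf, which removes order-relation violations by construction. The numbers below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Cutoff values and (kriged) cdf estimates; note the order-relation violation
# at the second/third cutoffs (0.40 < 0.42). Values are illustrative only.
z_cut = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])
F_hat = np.array([0.18, 0.42, 0.40, 0.71, 0.88, 0.97])

def gaussian_cdf(z, mu, sigma):
    """Gaussian-type curve: a normal cdf with free location and scale."""
    return norm.cdf(z, loc=mu, scale=sigma)

# Least-squares fit of the monotone curve to the cdf estimates.
(mu, sigma), _ = curve_fit(gaussian_cdf, z_cut, F_hat, p0=[2.0, 2.0])
F_smooth = gaussian_cdf(z_cut, mu, sigma)
print(np.round(F_smooth, 3))   # smoothed, monotone cdf estimates
```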

7.
In Mexico, poverty has forced people to live practically on the banks of rivers. This situation, along with the occurrence of floods, is a serious problem for local governments. In order to protect lives and property, it is very important to have a mathematical tool that can reduce the uncertainties in computing design events for different return periods. In this paper, the logistic model for the bivariate extreme value distribution with Weibull-2 and Mixed Weibull marginals is proposed for flood frequency analysis, and a procedure to estimate its parameters based on the maximum likelihood method is developed. A region in northwestern Mexico with 16 gauging stations has been selected to apply the model, and regional at-site quantiles were estimated. A significant improvement, measured with a goodness-of-fit test, occurs when parameters are estimated using the bivariate distribution instead of its univariate counterpart. The results suggest that it is very important to consider the Mixed Weibull distribution and its bivariate option when analyzing floods generated by a mixture of two populations.

8.
吴俊梅  林炳章  邵月红 《水文》2015,35(5):15-22
This paper introduces the concept of the regional L-moments method based on a hydrometeorological approach, which combines parameter estimation with L-moments derived from order statistics and regional analysis over hydrometeorologically homogeneous regions, and applies it to storm-rainfall frequency analysis using annual-maximum 1-day rainfall data for the Taihu Lake basin. Applying the criteria for identifying hydrometeorologically homogeneous regions, the Taihu basin is divided into eight homogeneous regions. Considering three goodness-of-fit tests together, the optimal distributions selected for regions 1-8 are GEV, GLO, GEV, GEV, GNO, GNO, GEV, and GNO, respectively, and design storm-rainfall values at each rain gauge are estimated according to the regional analysis procedure. The analysis shows that the spatial pattern of annual-maximum rainfall in the Taihu basin is broadly consistent across return periods, and that the mountainous southwest is the basin's high-risk storm area, which deserves attention in regional flood control planning. The results indicate that the regional L-moments method has high academic and practical value; it is recommended for nationwide application as top-level design and foundational work for flood control planning, serving engineering flood design, regional flood control planning, flash-flood warning, and urban waterlogging and flood control planning.
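A compact sketch of the building block of the regional L-moments approach: sample L-moments and L-moment ratios computed from probability-weighted moments of one station's annual-maximum series. Regional averaging, the heterogeneity test, and the goodness-of-fit selection among GEV/GLO/GNO would be built on top of these quantities; the rainfall series below is synthetic.

```python
import numpy as np

def sample_l_moments(x):
    """Sample L-moments (l1, l2) and L-moment ratios (L-CV, L-skewness)
    computed from probability-weighted moments of the ordered sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l2 / l1, l3 / l2   # mean, L-scale, L-CV (t), L-skewness (t3)

# Illustrative annual-maximum 1-day rainfall series (mm); values are synthetic.
rng = np.random.default_rng(4)
amax = rng.gumbel(loc=80.0, scale=25.0, size=40)
l1, l2, t, t3 = sample_l_moments(amax)
print(f"l1={l1:.1f} mm, l2={l2:.1f} mm, L-CV={t:.3f}, L-skew={t3:.3f}")
```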

9.
Flood frequency analysis based on simulated peak discharges   Cited: 2 (self: 0, others: 2)
Flood frequency approaches range from statistical methods applied directly to the observed annual maximum flood series to rainfall-runoff simulation models that transform design rainfalls into flood discharges. The reliability of statistical flood frequency analysis depends on several factors, such as the selected probability distribution function, the estimation of its parameters, possible outliers, and the length of the observed flood series. In the simulation approach adopted in this paper, watershed-average rainfalls of various occurrence probabilities were transformed into the corresponding peak discharges using a calibrated hydrological model. A Monte Carlo scheme was employed to account for the uncertainties in rainfall spatial patterns and antecedent soil moisture conditions (AMC). For any given rainfall depth, realizations of the rainfall spatial distribution and the AMC were entered as inputs to the model, and floods of different return periods were then simulated by transforming rainfall to runoff. The approach was applied to the Tangrah watershed in northeastern Iran. It was found that the spatial rainfall distribution and the AMC exerted a varying influence on the peak discharge of different return periods. Comparing the results of the simulation approach with those of the statistical frequency analysis revealed that, for a given return period, flood quantiles based on the observed series were greater than the corresponding simulated discharges. It is also worth noting that the existence of outliers and the selection of the statistical distribution function have a major effect in increasing the differences between the results of the two approaches.
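A hedged sketch of the Monte Carlo scheme as described: for a design rainfall depth, sample a spatial rainfall pattern and an antecedent moisture condition (AMC) per run and push them through a hydrological model. run_model/toy_model is a hypothetical stand-in for the calibrated model, not the model used in the paper.

```python
import numpy as np

def simulate_peak_quantile(rain_depth, n_runs, run_model, rng):
    """For a design rainfall depth of given probability, sample a spatial
    rainfall pattern and an AMC class per run, transform them to a peak
    discharge with `run_model`, and summarize the resulting peaks."""
    peaks = np.empty(n_runs)
    for k in range(n_runs):
        pattern = rng.dirichlet(np.ones(5))                 # fraction of depth per subbasin
        amc = rng.choice([1, 2, 3], p=[0.25, 0.5, 0.25])    # AMC I / II / III
        peaks[k] = run_model(rain_depth, pattern, amc)
    return np.median(peaks), np.percentile(peaks, [5, 95])

def toy_model(depth, pattern, amc):
    """Toy stand-in: peak grows with depth, wetter AMC, and concentrated rain."""
    return 0.8 * depth * (1 + 0.3 * (amc - 2)) * (1 + pattern.max())

rng = np.random.default_rng(5)
median_peak, (p5, p95) = simulate_peak_quantile(60.0, 1000, toy_model, rng)
print(f"median peak {median_peak:.1f} m^3/s, 90% band [{p5:.1f}, {p95:.1f}]")
```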

10.
11.
Variograms of Order ω: A Tool to Validate a Bivariate Distribution Model   Cited: 1 (self: 0, others: 1)
The multigaussian model is used in mining geostatistics to simulate the spatial distribution of grades or to estimate the recoverable reserves of an ore deposit. Checking the suitability of such a model against the available data often constitutes a critical step of the geostatistical study. In general, the marginal distribution is not a problem because the data can be transformed to normal scores, so the check is usually restricted to the bivariate distributions. In this work, several tests for diagnosing the two-point normality of a set of Gaussian data are reviewed and discussed. An additional criterion is proposed, based on the comparison between the usual variogram and the variograms of lower order: the latter are defined as half the mean absolute increments of the attribute raised to a power between 0 and 2. This criterion is then extended to other bivariate models, namely the bigamma, Hermitian, and Laguerrian models. The concepts are illustrated on two real data sets. Finally, some conditions that ensure the internal consistency of the variogram under a given model are given.
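A minimal sketch of the diagnostic quantity itself: the experimental variogram of order ω, i.e., half the mean absolute increment raised to the power ω (ω = 2 recovers the usual variogram), computed here for a regularly spaced 1-D series of synthetic Gaussian data.

```python
import numpy as np

def variogram_of_order(z, lags, omega=1.0):
    """Experimental variogram of order omega for a regularly spaced 1-D series:
    gamma_omega(h) = 0.5 * mean(|Z(x+h) - Z(x)|**omega). omega = 2 gives the
    usual variogram; lower orders (0 < omega < 2) give the curves the abstract
    compares against it."""
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean(np.abs(z[h:] - z[:-h]) ** omega) for h in lags])

# Synthetic data on a 1-D transect (illustrative only).
rng = np.random.default_rng(6)
z = np.cumsum(rng.normal(size=500)) * 0.1
lags = np.arange(1, 21)
g2 = variogram_of_order(z, lags, omega=2.0)   # usual variogram
g1 = variogram_of_order(z, lags, omega=1.0)   # order-1 variogram (madogram)
print(np.round(g2[:5], 3), np.round(g1[:5], 3))
```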

12.
A good prediction of solid waste landfill settlement is important for landfill design and rehabilitation. A one-dimensional model that accounts for mechanical settlement and biodegradation processes is developed to simulate the settlement behavior of municipal solid waste landfills. Analytical solutions are derived for specific conditions, and a numerical approach capable of handling more general conditions is also presented to estimate the spatial and temporal distribution of landfill settlement. The proposed model can reproduce typical features of short- and long-term landfill settlement behavior and, with proper selection of parameter values, simulates field measurements well. The effects of some design parameters on the settlement behavior of municipal solid waste landfills are also examined with the model.

13.
Statistical models are among the most widely used methods for landslide susceptibility assessment. Because landslide occurrences and influencing factors vary spatially, global models such as neural networks or logistic regression (LR) ignore the spatial dependence or autocorrelation between observations in susceptibility assessment. However, to assess the probability of a landslide within a specified period of time and within a given area, it is important to understand the spatial correlation between landslide occurrences and influencing factors; including these relations increases the predictive ability of the developed model. In this respect, spatial regression (SR) and geographically weighted regression (GWR) techniques, which account for spatial variability in the parameters, are proposed in this study for landslide hazard assessment, to provide a more realistic representation of landslide susceptibility. The proposed model was applied to a case study area in the Møre og Romsdal region of Norway. Topographic (morphometric) parameters (slope angle, slope aspect, curvature, and plan and profile curvatures), geological parameters (geological formations, tectonic uplift, and lineaments), a land cover parameter (vegetation coverage), and a triggering factor (precipitation) were considered as landslide influencing factors. These influencing factors, together with the past rock avalanche inventory of the study region, were used to obtain landslide susceptibility maps with the SR and LR models. A comparison of the susceptibility maps obtained from SR and LR shows that the SR model has higher predictive performance. In addition, the performance of the SR and LR models at the local scale was investigated by mapping the differences between the GWR and SR maps and between the GWR and LR maps. These comparison maps help show how the models estimate the coefficients at the local scale and identify the regions where the SR and LR models over- or underestimate the landslide hazard potential.

15.
Kriging with Inequality Constraints   Cited: 1 (self: 0, others: 1)
A Gaussian random field with an unknown linear trend for the mean is considered. Methods for obtaining the distribution of the trend coefficients given exact data and inequality constraints are established. Moreover, the conditional distribution for the random field at any location is calculated so that predictions using e.g. the expectation, the mode, or the median can be evaluated and prediction error estimates using quantiles or variance can be obtained. Conditional simulation techniques are also provided.

16.
The top twenty carbon-emitting nations contribute around 80% of global CO2 emissions. Transforming business operations, products, and services by establishing a digital economy (DGE) might help these nations accomplish the Sustainable Development Goals (SDGs) and carbon neutrality. However, digitalization has both direct and indirect effects on the environment, and emission and digitalization levels vary across nations. Further, the need to decouple economic growth from emissions makes it very challenging to reduce emissions without slowing growth. Against this background, this research assesses the impacts of DGE and financial expansion (FE) on emissions in the top twenty emitters, considering the direct effect of DGE as well as its indirect effects through economic growth. The recently proposed method of moments quantile regression (MM-QR) is adopted to unveil the associations between variables while accounting for distributional and heterogeneous variation over 2003-2019. The findings demonstrate that DGE significantly boosts emissions, whereas the indirect effects of DGE on emissions through economic growth reduce emissions and thereby improve environmental quality. Both direct and indirect effects of DGE are noticeable only from quantiles 6 to 9, and these impacts exhibit an increasing trend. FE decreases CO2 and improves environmental quality in all quantiles without much variation. Economic growth (GR) increases CO2; however, the magnitude of its effect decreases from the lower to the upper quantiles. Population density (PDN) alleviates environmental deterioration, and its effect intensifies from the lower to the upper quantiles. The Driscoll-Kraay (DK) regression test confirms the MM-QR results. Based on these results, a policy framework is proposed to reduce electronic waste and accelerate digital penetration in different sectors of the economy so as to enhance resource saving and achieve carbon neutrality.

17.
Spatial declustering weights   Cited: 1 (self: 0, others: 1)
Because of autocorrelation and spatial clustering, the data within a given dataset do not all carry the same statistical weight for the estimation of global statistics such as the mean, variance, or quantiles of the population distribution. A measure of the redundancy (or nonredundancy) of any given regionalized random variable Z(u_α) within any given set of N random variables is proposed. It is defined as the ratio of the determinant of the N × N correlation matrix to the determinant of the (N − 1) × (N − 1) correlation matrix obtained by excluding the random variable Z(u_α). This ratio measures the increase in redundancy when the random variable Z(u_α) is added to the (N − 1) remainder, and it can be used as a declustering weight for any outcome (datum) z(u_α). When the redundancy matrix is a kriging covariance matrix, the proposed ratio is the cross-validation simple kriging variance. The covariance of the uniform scores of the clustered data is proposed as a redundancy measure robust with respect to data clustering.
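A small numerical sketch of the proposed redundancy measure: the ratio of the determinant of the full correlation matrix to the determinant of the matrix with one datum removed, used (after normalization) as a declustering weight. The correlation matrix below is a toy example.

```python
import numpy as np

def redundancy_ratio(C, alpha):
    """Ratio of det(N x N correlation matrix) to det((N-1) x (N-1) matrix with
    row/column `alpha` removed). A small ratio means datum alpha is highly
    redundant, so the ratio itself can serve as its declustering weight."""
    keep = np.delete(np.arange(C.shape[0]), alpha)
    return np.linalg.det(C) / np.linalg.det(C[np.ix_(keep, keep)])

# Toy correlation matrix: data 0 and 1 are strongly correlated (clustered),
# datum 2 is nearly independent of both.
C = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
weights = np.array([redundancy_ratio(C, a) for a in range(3)])
weights /= weights.sum()      # normalize to declustering weights
print(np.round(weights, 3))   # the clustered data receive lower weights
```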

18.
A procedure is considered for estimating maximum values of seismic peak ground acceleration at the examined site and the quantiles of its probability distribution over a future time interval of given length. The input information for the method is a seismic catalog and a regression relation between the peak seismic acceleration at a given point and the magnitude and distance from the site to the epicenter (a seismic attenuation law). The method is based on a Bayesian approach, which naturally accounts for the influence of uncertainties in the seismic acceleration values. Its main assumptions are a Poissonian flow of seismic events and a distribution law of Gutenberg-Richter type. The method is applied to seismic hazard estimation at six selected sites in Greece.
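A hedged Monte Carlo sketch of the hazard quantity described above: quantiles of the maximum peak ground acceleration (PGA) at a site over a future interval of T years, assuming a Poissonian event flow, a truncated Gutenberg-Richter magnitude law, and a simple attenuation relation. The attenuation coefficients and rates are illustrative assumptions, and the full Bayesian treatment of acceleration uncertainty is not reproduced here.

```python
import numpy as np

def max_pga_quantiles(T, rate, b, m_min, m_max, site_dists, n_sims, rng, q=(0.5, 0.9)):
    """Distribution of the maximum PGA at a site over T years: Poisson number
    of events, truncated Gutenberg-Richter magnitudes, distances resampled from
    a catalog-like list, and an illustrative attenuation relation."""
    beta = b * np.log(10.0)
    max_pga = np.zeros(n_sims)
    for k in range(n_sims):
        n_ev = rng.poisson(rate * T)
        if n_ev == 0:
            continue
        # Truncated-exponential (Gutenberg-Richter) magnitudes on [m_min, m_max].
        u = rng.uniform(size=n_ev)
        m = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
        r = rng.choice(site_dists, size=n_ev)              # epicentral distance, km
        ln_pga = -3.5 + 1.0 * m - 1.3 * np.log(r + 10.0)   # illustrative attenuation law
        max_pga[k] = np.exp(ln_pga).max()
    return np.quantile(max_pga, q)

rng = np.random.default_rng(7)
dists = rng.uniform(10.0, 150.0, size=200)                 # catalog-like distances
print(max_pga_quantiles(T=50, rate=0.4, b=1.0, m_min=4.5, m_max=7.5,
                        site_dists=dists, n_sims=2000, rng=rng))
```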

19.
We present a methodology for conditioning the spatial distribution of geological and petrophysical properties of reservoir model realizations on available production data. The approach is fully consistent with modern concepts depicting natural reservoirs as composite media in which the distributions of both lithological units (or facies) and the associated attributes are modeled as stochastic processes in space. We represent the uncertain spatial distribution of the facies through a Markov mesh (MM) model, which allows complex and detailed facies geometries to be described in a rigorous Bayesian framework. The MM model is then embedded within a history matching workflow based on an iterative form of the ensemble Kalman filter (EnKF). We test the proposed methodology on a synthetic study characterized by the presence of two distinct facies, and analyze the accuracy and computational efficiency of our algorithm and its ability, relative to the standard EnKF, to properly estimate model parameters and assess future reservoir production. We show the feasibility of integrating the MM model in a data assimilation scheme: our methodology yields a set of updated model realizations characterized by a realistic spatial distribution of facies and their log-permeabilities, and these updated realizations correctly capture the production dynamics.

20.
Approximate local confidence intervals can be produced by nonlinear methods designed to estimate indicator variables. The most precise of these methods, the conditional expectation, can only be used in practice in the multi-Gaussian context; theoretically less efficient methods have to be used in more general cases. The methods considered here are indicator kriging, probability kriging (indicator-rank co-kriging), and disjunctive kriging (indicator co-kriging). The properties of these estimators are studied in this paper in the multi-Gaussian context, because this allows a more detailed study than more general models. The approximation of the conditional distribution is studied first, and exact results are given for mean squared errors and conditional bias. Conditional quantile estimators are then compared empirically. Finally, confidence intervals are compared from the points of view of bias and precision.
