Similar Documents
 20 similar documents found (search time: 93 ms)
1.
Parametric flood frequency analysis (FFA) involves fitting a probability distribution to the observed flood data at the site of interest. When the record length at a given site is relatively long and the flood data exhibit skewness, a distribution having more than three parameters, such as the log-Pearson type 3 distribution, is often used in FFA. This paper examines the suitability of the five-parameter Wakeby distribution for annual maximum flood data in eastern Australia. We adopt a Monte Carlo simulation technique to select an appropriate plotting position formula and to derive a probability plot correlation coefficient (PPCC) test statistic for the Wakeby distribution. The Weibull plotting position formula has been found to be the most appropriate for the Wakeby distribution. Regression equations for the PPCC test statistics associated with the Wakeby distribution at different significance levels have been derived. Furthermore, a power study to estimate the rejection rate associated with the derived PPCC test statistics has been undertaken. Finally, an application using annual maximum flood series data from 91 catchments in eastern Australia is presented. Results show that the developed regression equations can be used with a high degree of confidence to test whether the Wakeby distribution fits the annual maximum flood series at a given station. The methodology developed in this paper can be adapted to other probability distributions and other study areas. Copyright © 2014 John Wiley & Sons, Ltd.
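The PPCC screening described above combines two standard ingredients: the Weibull plotting position formula, p_i = i / (n + 1), and the correlation between the ordered sample and the fitted quantiles at those positions. A minimal sketch (not the authors' code; the exponential parent used in the check is purely illustrative):

```python
import numpy as np

def weibull_plotting_positions(n):
    """Weibull plotting position formula: p_i = i / (n + 1)."""
    return np.arange(1, n + 1) / (n + 1)

def ppcc(sample, quantile_func):
    """Probability plot correlation coefficient between the ordered
    sample and the candidate distribution's quantiles evaluated at
    the plotting positions; values near 1 indicate a good fit."""
    x = np.sort(sample)
    q = quantile_func(weibull_plotting_positions(len(x)))
    return np.corrcoef(x, q)[0, 1]

# Illustrative check against an exponential parent (quantile: -ln(1 - p)).
rng = np.random.default_rng(0)
sample = rng.exponential(size=200)
r = ppcc(sample, lambda p: -np.log(1.0 - p))
print(round(r, 3))
```

In the PPCC test, this statistic is compared with a critical value for the chosen significance level; the paper's regression equations supply such critical values for the Wakeby case.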

2.
Abstract

The identification of flood seasonality is a procedure with many practical applications in hydrology and water resources management. Several statistical methods for capturing flood seasonality have emerged during the last decade. So far, however, little attention has been paid to the uncertainty involved in the use of these methods, or to the reliability of their estimates. This paper compares the performance of annual maximum (AM) and peaks-over-threshold (POT) sampling models in flood seasonality estimation. Flood seasonality is determined by the two most frequently used methods, one based on directional statistics (DS) and the other on the distribution of monthly relative frequencies of flood occurrence (RF). The performance is evaluated for the AM model and three common POT sampling models depending on the estimation method, flood seasonality type and sample record length. The results demonstrate that the POT models outperform the AM model in most analysed scenarios. POT sampling provides significantly more information on flood seasonality than AM sampling: for certain flood seasonality types, POT samples match the estimation uncertainty of AM samples up to ten times longer. The performance of the RF method does not depend on the flood seasonality type as much as that of the DS method, which performs poorly on samples generated from complex seasonality distributions.
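The directional statistics (DS) estimator mentioned above maps each flood date to an angle on the annual circle and averages the resulting unit vectors; the mean direction gives the mean flood date and the vector length r measures the strength of seasonality. A hedged sketch (illustrative dates, not the study's data):

```python
import numpy as np

def mean_flood_date(days_of_year, year_length=365.25):
    """Directional-statistics estimate of flood seasonality: map each
    occurrence date to an angle, average the unit vectors, and return
    the mean date and the concentration r (r -> 1: strong seasonality,
    r -> 0: no seasonality)."""
    theta = 2.0 * np.pi * np.asarray(days_of_year) / year_length
    x, y = np.cos(theta).mean(), np.sin(theta).mean()
    r = np.hypot(x, y)
    mean_day = (np.arctan2(y, x) % (2.0 * np.pi)) * year_length / (2.0 * np.pi)
    return mean_day, r

# Floods clustered in early June (days ~148-161) -> strong seasonality.
day, r = mean_flood_date([148, 152, 155, 158, 161])
print(round(day), round(r, 2))
```

The circular averaging is what lets the method handle dates that straddle the year boundary (e.g. late December and early January) without bias.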

3.
A methodology is proposed for the inference, at the regional and local scales, of flood magnitude and associated probability. Once properly set up, this methodology is able to provide flood frequency distributions at gauged and ungauged river sections pertaining to the same homogeneous region, using information extracted from rainfall observations. A proper flood frequency distribution can therefore be predicted even in ungauged watersheds, for which no discharge time series is available.

4.
A scanning t-test method for multi-scale abrupt changes and its coherency analysis
This paper describes how the Student's t-test for the difference between two sub-sample means can be extended into a scanning method for detecting abrupt changes at multiple time scales. A preliminary correction is introduced to deal with the t-test's requirement that the series be independent, and a formula is given for detecting the coherency of multi-scale abrupt changes between two series. The scanning t-test not only matches the ability of the wavelet transform to detect multi-scale abrupt changes, but also resolves the wavelet transform's lack of critical values for significance testing. Because the t statistic contains the second-moment standard deviation, it cannot serve as a decomposition tool like the wavelet transform; on the other hand, its scale parameter need not be restricted to integer powers of 2, which makes scanning detection possible. Applied to the historical series of annual maximum and minimum water levels of the Nile (AD 622-1470), the method objectively and accurately detects coherent (in-phase or anti-phase) variations between the two series at certain scales, and on this basis the relative wet and dry periods of the basin at scales of several decades to over a century are re-delineated. The results agree with the historical records of famines in Egypt consulted to date.
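The scanning t-test can be sketched as follows: at each reference point, the means of the n values before and after are compared with a two-sample t statistic, and the window length n plays the role of the time scale. A hypothetical implementation (not the authors' code; the step-change series is synthetic):

```python
import numpy as np

def scanning_t(series, n):
    """Scanning t-test: at each point i, compare the means of the n
    values before and the n values after with a two-sample t statistic;
    a large |t| flags an abrupt change at time scale n."""
    x = np.asarray(series, dtype=float)
    t = np.full(len(x), np.nan)
    for i in range(n, len(x) - n):
        a, b = x[i - n:i], x[i:i + n]
        s2 = (a.var(ddof=1) + b.var(ddof=1)) / 2.0   # pooled variance
        t[i] = (b.mean() - a.mean()) / np.sqrt(2.0 * s2 / n)
    return t

# A step change at index 50 shows up as a large |t| at scale n = 20.
x = np.r_[np.zeros(50), np.ones(50)] + np.random.default_rng(1).normal(0, 0.3, 100)
t = scanning_t(x, 20)
print(int(np.nanargmax(np.abs(t))))
```

Because n is free rather than restricted to powers of 2, the statistic can be scanned over a continuum of scales, which is the point the abstract makes against the wavelet transform.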

5.
The annual peak flow series of Polish rivers are mixtures of summer and winter flows. As Part II of a sequence of two papers, practical aspects of the applicability of the seasonal approach to flood frequency analysis (FFA) of Polish rivers are discussed. Taking a Two-Component Extreme Value (TCEV1) model as an example, it was shown in the first part that, regardless of the estimation method, the seasonal approach can improve the accuracy of upper quantile estimation; the gain rises with the return period of the quantile and is greatest when there is no seasonal variation. In this part, an assessment of the annual maxima (AM) versus seasonal maxima (SM) approach to FFA was carried out with respect to seasonal and annual peak flow series of 38 Polish gauging stations. First, the assumption of mutual independence of the seasonal maxima was tested. The smoothness of the SM and AM empirical probability distribution functions was analysed and compared. The TCEV1 model with seasonally estimated parameters was found to be inappropriate for most Polish data, as it considerably underestimates the skewness of the AM distributions and the upper quantile values as well. Consequently, discrepancies between the SM and AM estimates of TCEV1 are observed. Taking SM and the TCEV1 distribution, the dominating season in the AM series was confronted with the predominant season for extreme floods. The key argument for the presumptive superiority of the SM approach, namely that SM samples are more statistically homogeneous than AM samples, has not been confirmed by the data. An analysis of the fit of seven distributions to the SM and AM Polish datasets pointed to the Pearson (3) distribution as the best for AM and summer maxima, whereas it was impossible to select a single best model for winter samples. In the multi-model approach to FFA, three functions, i.e. Pe(3), CD3 and LN3, should be involved for both SM and AM. The Warsaw gauge on the Vistula River was selected as the case study. While most AM elements there come from the winter season, the prevailing majority of extreme annual floods are summer maxima. The upper quantile estimates obtained by the classical annual and the two-season methods happen to be fairly close; moreover, they are nearly equal to the quantiles calculated just for the season of dominating extreme floods. Copyright © 2011 John Wiley & Sons, Ltd.

6.
The annual peak flow series of the Polish rivers are mixtures of summer and winter flows. In Part I of a sequence of two papers, theoretical aspects of the applicability of the seasonal approach to flood frequency analysis (FFA) in Poland are discussed. A testing procedure is introduced for the seasonal model and the overall fitness of the data. Conditions for an objective comparative assessment of the accuracy of the annual maxima (AM) and seasonal maxima (SM) approaches to FFA are formulated, and finally the Gumbel (EV1) distribution is chosen as the seasonal distribution for detailed investigation. Sampling properties of the AM quantile x(F) estimates are analysed and compared for the SM and AM models for equal seasonal variances. For this purpose, four estimation methods were used, employing both an asymptotic approach and sampling experiments. The superiority of the SM over the AM approach is evident in the upper quantile range, particularly in the case of no seasonal variation in the parameters of the Gumbel distribution. In order to learn whether the standard two- and three-parameter flood frequency distributions can be used to model samples generated from the Two-Component Extreme Value 1 (TCEV1) distribution, the shape of the TCEV1 probability density function (PDF) has been tested for bimodality. Then the use of the upper quantile estimate obtained from the dominant season of extreme floods (DEFS) as the AM upper quantile estimate is studied and the respective systematic error is assessed. The second part of the paper deals with the advantages and disadvantages of the SM and AM approaches when applied to real flow data of Polish rivers. Copyright © 2011 John Wiley & Sons, Ltd.
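The SM-to-AM relation underlying this comparison is simple: under mutual independence of the seasonal maxima, the annual-maximum CDF is the product of the seasonal CDFs. A minimal sketch with Gumbel seasonal margins (illustrative parameter values, not the paper's data):

```python
import numpy as np

def gumbel_cdf(x, mu, beta):
    return np.exp(-np.exp(-(x - mu) / beta))

def am_quantile_from_seasons(F_target, seasons, lo=-100.0, hi=1000.0):
    """Under mutual independence of seasonal maxima, the annual-maximum
    CDF is the product of the seasonal CDFs; invert it by bisection."""
    def F(x):
        p = 1.0
        for mu, beta in seasons:
            p *= gumbel_cdf(x, mu, beta)
        return p
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < F_target else (lo, mid)
    return 0.5 * (lo + hi)

# 100-year quantile (F = 0.99) for two identical seasonal Gumbels.
q = am_quantile_from_seasons(0.99, [(100.0, 20.0), (100.0, 20.0)])
print(round(q, 1))
```

With two identical Gumbel seasons the product is itself a Gumbel shifted by beta * ln 2, which is the "no seasonal variation" case where the abstract reports the largest gain.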

7.
Long flood series are required to accurately estimate flood quantiles associated with high return periods, in order to design and assess the risk in hydraulic structures such as dams. However, observed flood series are commonly short. Flood series can be extended through hydro-meteorological modelling, yet the computational effort can be very demanding when a distributed model with a short time step is used to obtain an accurate flood hydrograph characterisation. Statistical models can also be used, among which the copula approach is increasingly applied for multivariate flood frequency analyses. Nevertheless, selecting a copula to characterise the dependence structure of short data series involves a large uncertainty. In the present study, a methodology to extend flood series by combining both approaches is introduced. First, the minimum number of flood hydrographs required to be simulated by a spatially distributed hydro-meteorological model is identified in terms of the uncertainty of quantile estimates obtained by both the copula and the marginal distributions. Second, a large synthetic sample is generated by a bivariate copula-based model, reducing the computation time required by the hydro-meteorological model. The hydro-meteorological modelling chain consists of the RainSim stochastic rainfall generator and the Real-time Interactive Basin Simulator (RIBS) rainfall-runoff model. The proposed procedure is applied to a case study in Spain. As a result, a large synthetic sample of peak-volume pairs is stochastically generated, preserving the statistical properties of the simulated series generated by the hydro-meteorological model at a fraction of the computation time. The extended sample, consisting of the joint simulated and synthetic sample, can be used to improve flood risk assessment studies.
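The copula-based generation step can be sketched with a Gaussian copula: draw correlated standard normals, push them through the normal CDF to get dependent uniforms, then through the marginal quantile functions. This is only an illustration of the mechanism; the paper does not specify these margins, and the Gumbel/lognormal choices below are assumptions:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, rho, peak_dist, vol_dist, seed=0):
    """Sample peak-volume pairs whose dependence is a Gaussian copula
    with correlation rho and whose margins are the given scipy frozen
    distributions (hypothetical margins, for illustration only)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)                            # dependent uniforms
    return peak_dist.ppf(u[:, 0]), vol_dist.ppf(u[:, 1])

# Assumed margins: Gumbel peaks, lognormal volumes.
peaks, vols = gaussian_copula_sample(
    5000, 0.7, stats.gumbel_r(100, 20), stats.lognorm(0.5, scale=50))
rho_s = np.corrcoef(stats.rankdata(peaks), stats.rankdata(vols))[0, 1]
print(round(rho_s, 2))
```

The rank correlation of the output is preserved regardless of the margins, which is why a fitted copula can cheaply extend a short simulated sample while keeping its dependence structure.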

8.
Hydrological frequency analysis is the most widely used method to estimate the risk of extreme values. The statistical distributions most commonly used to fit extreme value data in hydrology can be grouped, according to their tail behavior, into three classes: class C of regularly varying distributions, class D of subexponential distributions, and class E of exponential distributions. The Halphen distributions (Halphen type A (HA) and Halphen type B (HB)) are separated by the Gamma distribution; these three distributions belong to class D and can be displayed in the (δ1, δ2) moment-ratio diagram. In this study, a statistical test for discriminating between HA, HB and the Gamma distribution is developed. The methodology is based on: (1) the generation of N samples of different sizes n around the Gamma curve; (2) the determination of confidence zones around the Gamma curve for each fixed couple of (δ1, δ2) moment-ratios; and (3) the calculation, for a fixed significance level α, of the type II error β and hence the power of the test, 1 - β. Results showed that the test is powerful, especially for high coefficients of skewness. This test will be included in the Decision Support System of the HYFRAN-PLUS software.

9.
The deterioration of the condition of process plant assets has a major negative impact on the safety of their operation. Risk-based integrity modeling provides a methodology to quantify the risks posed by an aging asset, and thereby a means of protecting human life, financial investment and the environment from the consequences of failures. The methodology models the uncertainty in material degradation using probability distributions, known as priors. Using Bayes' theorem, one may update a prior distribution with actual inspection data to obtain a posterior distribution. Although the choice of priors is often subjective, a rational consensus can be achieved through judgmental studies and by analyzing generic data from the same or similar installations. The first part of this paper presents a framework for risk-based integrity modeling. This includes a methodology to select the prior distributions for the various types of corrosion degradation mechanisms, namely uniform, localized and erosion corrosion. Several statistical tests were conducted on data extracted from the literature to check which prior distribution fits the data best. Once the underlying distribution has been confirmed, its parameters can be estimated. In the second part, the selected priors are tested and validated using actual plant inspection data obtained from existing assets in operation. It is found that uniform corrosion can be best described using 3P-Weibull and 3P-Lognormal distributions. Localized corrosion can be best described using Type 1 extreme value and 3P-Weibull, while erosion corrosion can best be described using the 3P-Weibull, Type 1 extreme value, or 3P-Lognormal distributions.
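The prior-to-posterior updating step can be illustrated with the simplest conjugate case: a normal prior on a corrosion rate updated with inspection measurements of known noise. This is a sketch of the Bayesian mechanism only, not the paper's Weibull/extreme-value models, and all numbers are hypothetical:

```python
import numpy as np

def normal_update(mu0, var0, data, noise_var):
    """Posterior of a normal mean under a normal prior with known
    measurement noise (conjugate update via precision addition)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    post_var = 1.0 / (1.0 / var0 + n / noise_var)
    post_mu = post_var * (mu0 / var0 + data.sum() / noise_var)
    return post_mu, post_var

# Hypothetical corrosion rate (mm/yr): generic-data prior N(0.10, 0.05^2)
# updated with three inspection readings of noise sd 0.02.
mu, var = normal_update(0.10, 0.05**2, [0.14, 0.16, 0.15], 0.02**2)
print(round(mu, 3), round(np.sqrt(var), 4))
```

The posterior mean is pulled from the generic prior toward the plant-specific readings, and the posterior variance is always smaller than the prior's, which is the sense in which inspection data "improve" the prior.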

10.
Abstract

Flood frequency analysis can be performed using two types of flood peak series, i.e. the annual maximum (AM) and the peaks-over-threshold (POT) series. This study presents a comparison of the results of both methods for data from the Litija 1 gauging station on the Sava River in Slovenia. Six commonly used distribution functions and three different parameter estimation techniques were considered in the AM analyses. The results showed a better performance for the method of L-moments (ML) when compared with conventional moments and maximum likelihood estimation. The combination of the ML and the log-Pearson type 3 distribution gave the best results of all the considered AM cases. The POT method gave better results than the AM method. The binomial distribution did not offer any noticeable improvement over the Poisson distribution for modelling the annual number of exceedances above the threshold.
Editor D. Koutsoyiannis

Citation Bezak, N., Brilly, M., and Šraj, M., 2014. Comparison between the peaks-over-threshold method and the annual maximum method for flood frequency analysis. Hydrological Sciences Journal, 59 (5), 959–977.
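A POT model with Poisson-distributed exceedance counts and exponential exceedance magnitudes, as compared above, yields a closed-form return level: u + scale * ln(lambda * T). A hedged sketch with method-of-moments estimates and synthetic data (not the study's series):

```python
import numpy as np

def pot_return_level(peaks, threshold, years, T):
    """Poisson-exponential POT model: exceedance rate lam = m / years,
    exponential scale estimated by the mean exceedance; the T-year
    return level is u + scale * ln(lam * T)."""
    exc = np.asarray(peaks, dtype=float) - threshold
    lam = len(exc) / years
    scale = exc.mean()
    return threshold + scale * np.log(lam * T)

# Synthetic series: 30 exceedances of a 200 m3/s threshold in 20 years.
rng = np.random.default_rng(2)
peaks = 200.0 + rng.exponential(50.0, size=30)
q100 = pot_return_level(peaks, 200.0, 20.0, 100.0)
print(round(q100))
```

The extra information POT extracts relative to AM comes from using all m exceedances (here 30 in 20 years) rather than one value per year.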

11.
Hydrological Sciences Journal, 2013, 58 (5): 974-991
Abstract

The aim is to build a seasonal flood frequency analysis model and estimate seasonal design floods. The importance of seasonal flood frequency analysis and the advantages of considering seasonal design floods in the derivation of reservoir planning and operating rules are discussed, recognising that seasonal flood frequency models have been in use for over 30 years. A set of non-identical models with non-constant parameters is proposed and developed to describe flows that reflect seasonal flood variation. The peak-over-threshold (POT) sampling method was used, as it is considered to provide significantly more information on flood seasonality than annual maximum (AM) sampling and has better performance in flood seasonality estimation. The number of exceedances is assumed to follow the Poisson distribution (Po), while the peak exceedances are described by the exponential (Ex) and generalized Pareto (GP) distributions and a combination of both, resulting in three models, viz. Po-Ex, Po-GP and Po-Ex/GP. Their performances are analysed and compared. The Geheyan and the Baiyunshan reservoirs were chosen for the case study. The application and statistical experiment results show that each model has its merits and that the Po-Ex/GP model performs best. Use of the Po-Ex/GP model is recommended in seasonal flood frequency analysis for the purpose of deriving reservoir operation rules.

12.
Abstract

The exact distribution of the ratio of any magnitude to the sum of all magnitudes in an annual flood series satisfying the usual distribution-free assumptions of independence and identical distribution, and the additional parametric assumption of exponential tail behaviour with truncation, is shown to be a beta distribution of the first kind. A two-parameter linear transformation of the beta distribution completes the derivation and yields a Wakeby distribution which has the number of members in a series as a given parameter. The Wakeby distribution is developed to illustrate how, in principle, some perceived deficiencies in current flood frequency analysis may be met: more complex parametric assumptions should lead to distributions of wider application. In particular, the distribution has a secure theoretical basis and is hydrologically more realistic because it bounds the variate and requires the definition of a temporally finite annual series. Analytical expressions are obtained for estimating the two distribution parameters, the quantile standard error and a plotting rule. An example is given of the application of the distribution to the design flood problem, and an annual flood series is modelled. It is further suggested that a suitable design value for the largest flood to be withstood by a protection work is a statistic of the largest flood occurring during its lifetime. For the derived Wakeby distribution this criterion specifies the risk and probability of non-exceedance of the design flood once a lifetime is selected.
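The beta-of-the-first-kind result can be checked by Monte Carlo in the simplest exponential case: if X_1, ..., X_n are iid exponential, the ratio X_1 / (X_1 + ... + X_n) follows Beta(1, n - 1). A quick illustrative check (this verifies only the stated exponential special case, not the paper's truncated derivation):

```python
import numpy as np

# Monte Carlo check: for iid exponential magnitudes, the ratio of one
# magnitude to the sum of n magnitudes is Beta(1, n - 1), which has
# mean 1/n and CDF F(r) = 1 - (1 - r)**(n - 1).
rng = np.random.default_rng(3)
n, reps = 10, 100_000
x = rng.exponential(size=(reps, n))
ratio = x[:, 0] / x.sum(axis=1)

print(round(ratio.mean(), 3))            # ~ 1/n
print(round((ratio <= 0.2).mean(), 3))   # ~ 1 - (1 - 0.2)**(n - 1)
```

The agreement of both the mean and the empirical CDF with the Beta(1, n - 1) formulas illustrates the exactness of the distributional claim.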

13.
The most popular practice for analysing the nonstationarity of flood series is to use a fixed single-type probability distribution incorporating time-varying moments. However, the type of probability distribution can be both complex, because of distinct flood populations, and time-varying under changing environments. To allow the investigation of this complex nature, the time-varying two-component mixture distributions (TTMD) method is proposed in this study, which considers time variations not only in the moments of the component distributions but also in the weighting coefficients. Having identified the existence of mixed flood populations based on circular statistics, the proposed TTMD was applied to model the annual maximum flood series of two stations in the Weihe River basin, with the model parameters calibrated by the meta-heuristic maximum likelihood method. The performance of TTMD was evaluated by different diagnostic plots and indexes and compared with stationary single-type distributions, stationary mixture distributions and time-varying single-type distributions. The results highlighted the advantages of TTMD with physically-based covariates for both stations. The optimal TTMD models were capable of addressing nonstationarity and capturing the mixed flood populations satisfactorily. Copyright © 2016 John Wiley & Sons, Ltd.
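The TTMD density has the form f(x, t) = w(t) f1(x; θ1(t)) + (1 - w(t)) f2(x; θ2(t)), with both the component parameters and the weight varying in time. A hedged sketch with Gumbel components, linear parameter trends and a logistic weight (all purely illustrative choices; the paper's covariates are physically based):

```python
import numpy as np

def gumbel_pdf(x, mu, beta):
    z = (x - mu) / beta
    return np.exp(-z - np.exp(-z)) / beta

def ttmd_pdf(x, t, params):
    """Sketch of a time-varying two-component mixture density: both the
    component parameters and the weighting coefficient depend on time t
    (linear location trends, logistic weight; illustrative only)."""
    (mu1, a1, b1), (mu2, a2, b2), (w0, w1) = params
    w = 1.0 / (1.0 + np.exp(-(w0 + w1 * t)))     # weight kept in (0, 1)
    return (w * gumbel_pdf(x, mu1 + a1 * t, b1)
            + (1.0 - w) * gumbel_pdf(x, mu2 + a2 * t, b2))

# The density must integrate to 1 at any fixed time t.
x = np.linspace(-200.0, 1200.0, 20000)
p = ttmd_pdf(x, 10.0, [(100, 2.0, 30), (400, 1.0, 60), (0.0, 0.05)])
total = p.sum() * (x[1] - x[0])
print(round(total, 3))
```

Keeping the weight in (0, 1) via a logistic link is one simple way to make the mixture well defined for every covariate value during likelihood maximisation.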

14.
Asymmetric copula in multivariate flood frequency analysis
Univariate flood frequency analysis is widely used in hydrological studies, yet often only the flood peak or the flood volume is statistically analyzed. A more complete analysis requires the three main characteristics of a flood event, i.e. peak, volume and duration. To fully understand these variables and their relationships, a multivariate statistical approach is necessary. The main aim of this paper is to define the trivariate probability density and cumulative distribution functions. When the joint distribution is known, it is possible to define the bivariate distribution of volume and duration conditioned on the peak discharge. Consequently, volume-duration pairs statistically linked to peak values become available. The authors build the trivariate joint distribution of flood event variables using fully nested, or asymmetric, Archimedean copula functions. They describe the properties of this copula class and perform extensive simulations to highlight differences from the well-known symmetric Archimedean copulas. They apply the asymmetric distributions to observed flood data and compare the results with those obtained using distributions built with symmetric copulas and the standard Gumbel logistic model.

15.
Currently used goodness-of-fit (GOF) indicators (i.e. efficiency criteria) are largely empirical, and different GOF indicators emphasize different aspects of model performance; a thorough assessment of model skill may therefore require robust skill metrics. In this study, based on the maximum likelihood method, a statistical measure termed the BC-GED error model is proposed, which first uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model residuals, and then employs the generalized error distribution (GED) with zero mean to fit the distribution of the transformed residuals. Various distance-based GOF indicators can be explicitly expressed by the BC-GED error model for different values of the BC transformation parameter λ and the GED kurtosis coefficient β. Our study shows that (1) the shape of the error distribution implied by a GOF indicator affects model performance on high or low flow discharges, because a large error-power value β implies a low probability of large residuals, while a small β leads to a high probability of near-zero residuals; and (2) the mean absolute error balances consideration of low and high flow values, as its assumed error distribution (the Laplace distribution, where β = 1) is the turning point of the GED derivative at zero. The results of a study performed in the Baocun watershed, comparing SWAT model-calibration results using six distance-based GOF indicators, show that even though the formal BC-GED is theoretically reasonable, the calibrated model parameters do not always correspond to high performance of model-simulation results, because of the imperfection of the hydrologic model. However, the distance-based GOF indicators derived using the maximum likelihood method offer an easy way of choosing GOF indicators for different study purposes and of developing multi-objective calibration strategies.
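The two building blocks of the BC-GED idea can be sketched directly: the Box-Cox transformation of discharges, and the GED negative log-likelihood of zero-mean residuals, whose β = 1 case is the Laplace distribution (so maximising that likelihood is equivalent to minimising the MAE). A sketch under one common GED parameterisation, f(r) = β / (2σΓ(1/β)) exp(-(|r|/σ)^β), which is an assumption here:

```python
import numpy as np
from math import gamma, log

def box_cox(q, lam):
    """Box-Cox transformation used to stabilise residual variance."""
    q = np.asarray(q, dtype=float)
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

def ged_neg_log_lik(res, beta, sigma):
    """Negative log-likelihood of zero-mean GED residuals; beta = 1
    is the Laplace case, whose ML fit minimises the MAE, and beta = 2
    is the Gaussian case, whose ML fit minimises the MSE."""
    res = np.asarray(res, dtype=float)
    c = beta / (2.0 * sigma * gamma(1.0 / beta))
    return -len(res) * log(c) + np.sum((np.abs(res) / sigma) ** beta)

res = np.array([-2.0, -0.5, 0.1, 0.4, 1.5])
nll = ged_neg_log_lik(res, 1.0, 1.0)
print(round(nll, 3))
```

For β = 1 and σ = 1 the expression reduces to n ln 2 plus the sum of absolute residuals, making the MAE-Laplace correspondence explicit.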

16.
The objective of the study was to compare the relative accuracy of three methodologies of regional flood frequency analysis in areas with limited flood records. Thirty-two drainage basins of different characteristics, located mainly in the southwest region of Saudi Arabia, were selected for the study. In the first methodology, region curves were developed and used, together with the mean annual flood estimated from the characteristics of the drainage basin, to estimate flood flows at a location in the basin. The second methodology fitted probability distribution functions to the annual maximum rainfall intensity in a drainage basin; the best-fitting probability function was then used with common peak flow models to estimate the annual maximum flood flows in the basin. In the third methodology, duration reduction curves were developed and used, together with the average flood flow in a basin, to estimate the peak flood flows in the basin. The results obtained from each methodology were compared with the flood records of the selected stations using three statistical measures of goodness of fit. The first methodology performed best when the record at a drainage basin was short, and the second methodology produced satisfactory results. The first methodology is therefore recommended in areas where data are insufficient and/or unreliable.

17.
ABSTRACT

Classification of floods is often based on return periods of their peaks estimated from probability distributions, and hence depends on assumptions. The choice of an appropriate distribution function and the estimation of its parameters are often connected with high uncertainties. In addition, the limited length of data series and the stochastic character of the occurrence of extreme events add further uncertainty. Here, a distribution-free classification approach based on statistical moments is proposed. By using robust estimators, sampling effects are reduced and time series of different lengths can be analysed together. With the developed optimization procedure, locally and regionally consistent flood categories can be defined. In application, it is shown that the resulting flood categories can be used to assess the spatial extent of extreme floods and their coincidences. Moreover, groups of gauges where simultaneous events belong to the same classes are indicators of homogeneous groups of gauges in regionalization.

18.
Modelling the raindrop size distribution (DSD) is fundamental to connecting remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask are how often, and how well, such models fit empirical data, given that advances in both data availability and the technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. We therefore present a comprehensive follow-up of a previous study on the statistical fitting of three common DSD models against 2D Video Disdrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. By the maximum likelihood method, we fit models based on the lognormal, gamma and Weibull distributions to more than 42,000 1-minute drop-by-drop records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. To check the adequacy of the models to the measured data, we assess the goodness of fit of each distribution using the Kolmogorov-Smirnov (KS) test, and then apply a model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking, for each minute, the statistical models that pass the KS test suggests that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, are adequate to model the natural variability of DSDs. However, in line with our previous study, we also find that frequency distributions of empirical DSDs can be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
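The per-minute screening described above, i.e. fitting the three candidate families by maximum likelihood and checking each with the KS test, can be sketched with scipy on a synthetic drop-size sample (the gamma-distributed diameters below are an assumption, not 2DVD data):

```python
import numpy as np
from scipy import stats

# Fit gamma, lognormal and Weibull models to a synthetic drop-size
# sample and compare them by the Kolmogorov-Smirnov statistic,
# mirroring the per-minute screening (illustration only).
rng = np.random.default_rng(4)
drops = rng.gamma(shape=3.0, scale=0.5, size=1000)   # synthetic diameters, mm

results = {}
for name, dist in [("gamma", stats.gamma),
                   ("lognorm", stats.lognorm),
                   ("weibull", stats.weibull_min)]:
    params = dist.fit(drops, floc=0)                 # fix location at zero
    d, p = stats.kstest(drops, dist.cdf, args=params)
    results[name] = (d, p)

best = min(results, key=lambda k: results[k][0])
print(best, {k: round(v[0], 3) for k, v in results.items()})
```

One caveat worth keeping in mind: the standard KS critical values are conservative when the parameters have been fitted from the same data, so per-minute rejection rates from this naive form understate the true rejection frequency.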

19.
Probabilistic performance assessment requires the development of probability distributions that can predict different performance levels of structures with reasonable accuracy. This study evaluates the performance of a non-seismically designed multi-column bridge bent retrofitted with four different alternatives and, based on their performance under an ensemble of earthquake records, proposes accurate prediction models and distribution fits for different performance criteria as a case study. Finite element methods are implemented, with each retrofitting technique modeled and numerically validated against experimental results. Different statistical distributions are employed to represent the variation in the considered performance criteria for the retrofitted bridge bents, and the Kolmogorov-Smirnov goodness-of-fit test is carried out to compare the distributions and find the most suitable one for each performance criterion. An important conclusion is that the yield displacements of the CFRP, steel and ECC jacketed bridge bents are best described by a gamma distribution, while the crushing displacement and crushing base shear of all four retrofitted bents follow a normal and a Weibull distribution, respectively. A probabilistic model is developed to approximate the seismic performance of retrofitted bridge bents. The probabilistic models and response functions developed in this study allow for the performance prediction of retrofitted bridge bents.

20.
The objective of this study was to examine a new resampling methodology for estimating reference levels of 137Cs in uneroded locations. Accurate and precise measurement of 137Cs at reference locations is required to estimate long-term (c. 40 years) sediment redistribution (SRD) and landscape stability. Without reliable long-term quantitative erosion data it is extremely difficult for land managers to make optimal decisions that ensure landscape sustainability. To determine the influence of 137Cs reference site sampling, particularly under-sampling, on SRD and landscape stability, two statistical approaches were applied to a grid-based data set. Caesium-137 inventories in the reference location (n=36) were normally distributed, with a mean inventory of 2150±130 Bq m−2 (±95% confidence band) and a coefficient of variation of 18%. The two approaches used to determine the effect of under-sampling were: (1) one-time random subsampling from the total sample collected, with subsample sizes ranging from n=3 to n=30, from which means and parametric confidence bands were calculated; and (2) random subsamples (n=3 to n=36) selected from the total 137Cs reference sample, each resampled 1000 times with replacement to establish a sampling distribution of means, yielding an empirically derived mean and 95% confidence bands. Caesium-137 activities determined from each approach were input into equations to estimate SRD for two cultivated fields. Results indicate that the one-time random sampling approach with subsamples of size ≤12 significantly over- or under-estimated net SRD, particularly for the gently sloping agricultural field. Computer-intensive resampling produced significantly better estimates of net SRD than the random one-sample approach, especially for a subsample of size three. Landscape stability, based on partitioning the agricultural fields into areas exhibiting erosion, stability and deposition, was better approximated for both fields by applying resampling. © 1998 John Wiley & Sons, Ltd.
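The resample-with-replacement scheme in approach (2) is a standard percentile bootstrap of the mean. A hedged sketch (synthetic inventories with the stated mean and coefficient of variation, not the study's grid data):

```python
import numpy as np

def bootstrap_mean_ci(sample, reps=1000, alpha=0.05, seed=5):
    """Resample the reference-site inventories with replacement to
    build a sampling distribution of the mean and an empirical
    confidence band (percentile method)."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    means = rng.choice(sample, size=(reps, len(sample)),
                       replace=True).mean(axis=1)
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return means.mean(), lo, hi

# Synthetic 137Cs inventories (Bq m-2) with ~18% coefficient of variation.
inv = np.random.default_rng(6).normal(2150, 390, size=36)
m, lo, hi = bootstrap_mean_ci(inv)
print(round(m), round(lo), round(hi))
```

Because the confidence band comes from the empirical sampling distribution rather than a normality assumption, the same procedure remains usable for the small subsamples (down to n=3) examined in the study.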
