1.
Selection of a flood frequency distribution and associated parameter estimation procedure is an important step in flood frequency analysis. This is, however, a difficult task due to problems in selecting the best-fit distribution from the large number of candidate distributions and parameter estimation procedures available in the literature. This paper presents a case study with flood data from Tasmania in Australia, which examines four model selection criteria: the Akaike Information Criterion (AIC), the second-order variant of the Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC) and a modified Anderson–Darling Criterion (ADC). Monte Carlo simulation shows that the ADC is more successful than the AIC and BIC in recognizing the parent distribution correctly when the parent is a three-parameter distribution, whereas the AIC and BIC are better when the parent is a two-parameter distribution. Of the seven probability distributions examined for Tasmania, two-parameter distributions are found preferable to three-parameter ones, with the Log Normal appearing to be the best choice. The paper also evaluates the three most widely used parameter estimation procedures for the Log Normal distribution: the method of moments (MOM), the method of maximum likelihood (MLE) and the Bayesian Markov chain Monte Carlo method (BAY). The BAY procedure is found to provide better parameter estimates for the Log Normal distribution, resulting in flood quantile estimates with smaller bias and standard error than the MOM and MLE. The findings of this study should be useful in flood frequency analyses in other Australian states and in other countries, in particular when selecting an appropriate probability distribution from a number of alternatives.
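The criterion-based selection step is easy to prototype. The sketch below is a minimal illustration, not the authors' code: the candidate set and the use of scipy's MLE fits are assumptions. It ranks candidate distributions for an annual-maximum series by AIC, AICc and BIC.

```python
import numpy as np
from scipy import stats

def rank_candidates(data, candidates=("gumbel_r", "lognorm", "genextreme", "pearson3")):
    """Fit each candidate by maximum likelihood and rank by AIC."""
    n = len(data)
    results = []
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(data)                      # MLE fit
        loglik = np.sum(dist.logpdf(data, *params))
        k = len(params)                              # number of fitted parameters
        aic = 2 * k - 2 * loglik
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
        bic = k * np.log(n) - 2 * loglik
        results.append((name, aic, aicc, bic))
    return sorted(results, key=lambda r: r[1])

# toy annual-maximum flood series (m3/s); real data would come from a gauge record
amax = stats.lognorm.rvs(0.5, scale=300.0, size=40, random_state=1)
for name, aic, aicc, bic in rank_candidates(amax):
    print(f"{name:12s} AIC={aic:7.1f} AICc={aicc:7.1f} BIC={bic:7.1f}")
```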

2.
The reliability of a levee system is a crucial factor in flood risk management. In this study, we present a probabilistic methodology to assess the effects of levee cover strength on levee failure probability, triggering time, flood propagation and the consequent impacts on population and assets. A method for determining fragility curves is used in combination with the results of a one-dimensional hydrodynamic model to estimate the conditional probability of levee failure in each river section. A levee breach model is then applied to calculate the possible flood hydrographs, and for each breach scenario a two-dimensional hydrodynamic model is used to estimate flood hazard (flood extent and timing, maximum water depths) and flood impacts (economic damage and affected population) in the areas at risk along the river reach. We show an application for levee overtopping and different flood scenarios for a 98 km reach of the lower Po River in Italy. The results show how different design solutions for the levee cover can influence the probability of levee failure and the consequent flood scenarios. In particular, good grass cover strength can significantly delay levee failure and reduce maximum flood depths in flood-prone areas, thus aiding the implementation of flood risk management actions.
EDITOR D. Koutsoyiannis

ASSOCIATE EDITOR A. Viglione
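A fragility curve maps hydraulic load to a conditional failure probability. The sketch below is a generic illustration, not the paper's method: it assumes a lognormal fragility curve (a common parameterisation in reliability analysis) with a hypothetical median capacity h_crit and log-standard deviation beta.

```python
import numpy as np
from scipy import stats

def levee_fragility(h, h_crit, beta):
    """Conditional failure probability given water level h (m):
    lognormal fragility curve with median capacity h_crit and
    log-standard deviation beta (an assumed parameterisation)."""
    return stats.norm.cdf(np.log(h / h_crit) / beta)

# failure probability per river section for one simulated flood profile
levels = np.array([7.2, 7.8, 8.1, 8.4])   # water levels from a 1-D model (m)
print(levee_fragility(levels, h_crit=8.0, beta=0.15))
```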

3.
An important problem in frequency analysis is the selection of an appropriate probability distribution for given sample data. This selection is generally based on goodness-of-fit tests, which are an effective means of examining how well sample data agree with an assumed population distribution. However, goodness-of-fit tests based on empirical distribution functions give equal weight to the differences between empirical and theoretical distribution functions at all observations. To overcome this drawback, the modified Anderson–Darling test was suggested by Ahmad et al. (1988b). In this study, the critical values of the modified Anderson–Darling test statistic are revised using simulation experiments with extended ranges of the shape parameters for the GEV and GLO distributions, and a power study is performed to assess the performance of the modified Anderson–Darling test. The results of the power study show that the modified Anderson–Darling test is more powerful than traditional tests such as the χ2, Kolmogorov–Smirnov and Cramér–von Mises tests. In addition, to compare the results of these goodness-of-fit tests, the modified Anderson–Darling test is applied to annual maximum rainfall data in Korea.
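Simulation-based critical values of the kind revised here can be sketched as follows. This fragment is an illustration under assumptions: it uses the classical Anderson-Darling statistic as a stand-in for the modified, tail-weighted variant, and simulates its null distribution for a GEV parent with parameters refitted to each sample.

```python
import numpy as np
from scipy import stats

def ad_statistic(sample, dist, params):
    """Classical Anderson-Darling statistic (stand-in for the modified,
    tail-weighted variant used in the paper)."""
    u = np.clip(np.sort(dist.cdf(sample, *params)), 1e-12, 1 - 1e-12)
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

def critical_value(shape, n, alpha=0.05, nsim=1000, seed=0):
    """Simulate the null distribution of the statistic for a GEV parent
    with the given shape, refitting the parameters to each sample."""
    rng = np.random.default_rng(seed)
    null_stats = []
    for _ in range(nsim):
        x = stats.genextreme.rvs(shape, size=n, random_state=rng)
        params = stats.genextreme.fit(x)
        null_stats.append(ad_statistic(x, stats.genextreme, params))
    return np.quantile(null_stats, 1 - alpha)

print(critical_value(shape=-0.1, n=50))
```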

4.
Abstract

Physically-based flood frequency models use readily available rainfall data and catchment characteristics to derive the flood frequency distribution. In the present study, a new physically-based flood frequency distribution has been developed. The model uses a bivariate exponential distribution for rainfall intensity and duration, and the Soil Conservation Service Curve Number (SCS-CN) method for deriving the probability density function (pdf) of effective rainfall. The effective rainfall-runoff model is based on kinematic-wave theory. The results of applying the derived model to three Indian basins indicate that it is a useful alternative for estimating flood flow quantiles at ungauged sites.
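The SCS-CN step of such a derivation reduces to one algebraic relation. The sketch below states the standard SCS-CN runoff-depth formula; the curve number and storm depth are illustrative values, not taken from the paper.

```python
def scs_cn_effective_rainfall(p_mm, cn, ia_ratio=0.2):
    """Effective rainfall (runoff depth, mm) from storm depth p_mm via the
    SCS-CN method. S is the potential maximum retention; the initial
    abstraction Ia = ia_ratio * S (0.2 is the conventional value)."""
    s = 25400.0 / cn - 254.0          # retention (mm) for CN on [0, 100]
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_effective_rainfall(p_mm=80.0, cn=75))  # approx. 27 mm of runoff
```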

5.
The objective of the study was to compare the relative accuracy of three methodologies of regional flood frequency analysis in areas with limited flood records. Thirty-two drainage basins of different characteristics, located mainly in the southwest region of Saudi Arabia, were selected for the study. In the first methodology, region curves were developed and used, together with the mean annual flood estimated from the characteristics of the drainage basin, to estimate flood flows at a location in the basin. The second methodology fitted probability distribution functions to annual maximum rainfall intensity in a drainage basin; the best-fitting probability function was then used together with common peak flow models to estimate the annual maximum flood flows in the basin. In the third methodology, duration reduction curves were developed and used together with the average flood flow in a basin to estimate the peak flood flows in the basin. The results obtained from each methodology were compared to the flood records of the selected stations using three statistical measures of goodness of fit. The first methodology was found to perform best when the flood record at a drainage basin is short, while the second methodology produced satisfactory results. The first methodology is therefore recommended in areas where data are insufficient and/or unreliable.

6.
Hydrologists use the generalized Pareto (GP) distribution in peaks-over-threshold (POT) modelling of extremes. A model with similar uses is the two-parameter kappa (KAP) distribution. KAP has had fewer hydrological applications than GP, but some studies have shown it to merit wider use. The problem of choosing between GP and KAP arises quite often in frequency analyses. By comparing discrimination methods between these two models, this study aims to show which method(s) can be recommended. Three specific methods are considered: one uses the Anderson-Darling goodness-of-fit (GoF) statistic; another uses the ratio of maximized likelihoods (closely related to the Akaike information criterion and the Bayesian information criterion); and the third employs a normality transformation followed by application of the Shapiro-Wilk statistic. We show this last method to be the most recommendable, owing to its advantages at sample sizes typically encountered in hydrology. We apply the simulation results to some flood POT datasets.
EDITOR D. Koutsoyiannis; ASSOCIATE EDITOR E. Volpi
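The normality-transformation method can be sketched directly. In the fragment below, the probability integral transform maps the sample to uniformity under each fitted candidate, the inverse normal CDF maps it to normality, and the Shapiro-Wilk W serves as the selection score. Because scipy provides no two-parameter kappa distribution, lognorm stands in as the second candidate; that substitution is an assumption of the sketch, not part of the study.

```python
import numpy as np
from scipy import stats

def normality_score(sample, dist):
    """Fit dist by MLE, map the sample to normality via the probability
    integral transform, and return the Shapiro-Wilk W statistic."""
    params = dist.fit(sample)
    u = np.clip(dist.cdf(sample, *params), 1e-10, 1 - 1e-10)
    z = stats.norm.ppf(u)
    return stats.shapiro(z).statistic

# POT exceedances; lognorm stands in for the two-parameter kappa,
# which scipy does not provide
pot = stats.genpareto.rvs(0.15, scale=40.0, size=80, random_state=2)
scores = {d.name: normality_score(pot, d) for d in (stats.genpareto, stats.lognorm)}
print(max(scores, key=scores.get), scores)  # the model with W closest to 1 wins
```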

7.
The objective of the paper is to show that using a discrimination procedure to select a flood frequency model, without knowledge of its performance for the underlying distributions considered, may lead to erroneous conclusions. The problem considered is that of choosing between the lognormal (LN) and convective diffusion (CD) distributions for a given random sample of flood observations. The probability density functions of these distributions are similarly shaped in the range of the main probability mass, and the discrepancies grow as the coefficient of variation (CV) increases. The problem was addressed using the likelihood ratio (LR) procedure. Simulation experiments were performed to determine the probability of correct selection (PCS) for the LR method. Pseudo-random samples were generated for several combinations of sample size and coefficient of variation from each of the two distributions. Surprisingly, the PCS for the LN model was about half that for the CD model, rarely exceeding 50%. The simulation results were analyzed and compared both with results obtained using real data and with results from another selection procedure known as the QK method. The results from the QK method are the opposite of those from the LR procedure.
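Estimating a PCS by simulation takes only a few lines. The sketch below illustrates the idea for a likelihood-ratio rule; since the convective diffusion distribution has no standard scipy implementation, the gamma distribution stands in as the rival model, an assumption of this illustration only.

```python
import numpy as np
from scipy import stats

def pcs_likelihood_ratio(rival, n, cv, nsim=500, seed=3):
    """Probability of correct selection for an LR rule: draw lognormal
    samples with unit mean and coefficient of variation cv, select the
    model with the larger maximized log-likelihood, count correct picks."""
    sigma = np.sqrt(np.log(1 + cv**2))       # lognormal sigma for the given CV
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(nsim):
        x = stats.lognorm.rvs(sigma, scale=np.exp(-sigma**2 / 2),
                              size=n, random_state=rng)
        ll_ln = np.sum(stats.lognorm.logpdf(x, *stats.lognorm.fit(x)))
        ll_rival = np.sum(rival.logpdf(x, *rival.fit(x)))
        hits += ll_ln > ll_rival
    return hits / nsim

# gamma stands in for the convective diffusion (CD) distribution
print(pcs_likelihood_ratio(stats.gamma, n=50, cv=0.5))
```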

8.
ABSTRACT

The problem of estimation of suspended load carried by a river is an important topic for many water resources projects. Conventional estimation methods are based on the assumption of exact observations. In practice, however, a major source of natural uncertainty is due to imprecise measurements and/or imprecise relationships between variables. In this paper, using the Multivariate Adaptive Regression Splines (MARS) technique, a novel fuzzy regression model for imprecise response and crisp explanatory variables is presented. The investigated fuzzy regression model is applied to forecast suspended load by discharge based on two real-world datasets. The accuracy of the proposed method is compared with two well-known parametric fuzzy regression models, namely, the fuzzy least-absolutes model and the fuzzy least-squares model. The comparison results reveal that the MARS-fuzzy regression model performs better than the other models in suspended load estimation for the particular datasets. This comparison is done based on four goodness-of-fit criteria: the criterion based on similarity measure, the criterion based on absolute errors and the two objective functions of the fuzzy least-absolutes model and the fuzzy least-squares model. The proposed model is general and can be used for modelling natural phenomena whose available observations are reported as imprecise rather than crisp.
Editor D. Koutsoyiannis; Associate editor H. Aksoy

9.
Abstract

The segmentation of flood seasons has both theoretical and practical importance in hydrological sciences and water resources management. The probability change-point analysis technique is applied to segment a defined flood season into a number of sub-seasons. Two alternative sampling methods, annual maximum and peaks-over-threshold, are used to construct the new flow series. The series is assumed to follow the binomial distribution and is analysed with the probability change-point analysis technique. A Monte Carlo experiment is designed to evaluate the performance of the proposed flood season segmentation models. It is shown that the change-point-based models for flood season segmentation can rationally partition a flood season into appropriate sub-seasons. China's new Three Gorges Reservoir, located on the upper Yangtze River, was selected as a case study, since a hydrological station with observed flow data from 1882 to 2003 is located 40 km downstream of the dam. The flood season of the reservoir can reasonably be divided into three sub-seasons: the pre-flood season (1 June–2 July); the main flood season (3 July–10 September); and the post-flood season (11–30 September). The results of the flood season segmentation and the characteristics of the flood events are reasonable for this region.

Citation Liu, P., Guo, S., Xiong, L. & Chen, L. (2010) Flood season segmentation based on the probability change-point analysis technique. Hydrol. Sci. J. 55(4), 540–554.
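The binomial change-point idea can be prototyped compactly. The sketch below is a simplified single-change-point version (the paper's models can locate multiple change points): for each day of the season it counts the years in which that day carried a sampled flood event, then finds the split maximizing the two-segment binomial log-likelihood. All data here are synthetic.

```python
import numpy as np

def binomial_changepoint(k, n):
    """Most likely change-point in a binomial series: k[d] flood occurrences
    out of n[d] years for each day d of the flood season. Returns the day
    index that maximizes the two-segment binomial log-likelihood."""
    def loglik(ks, ns):
        p = ks.sum() / ns.sum()
        if p in (0.0, 1.0):
            return 0.0
        return ks.sum() * np.log(p) + (ns.sum() - ks.sum()) * np.log(1 - p)
    return max(range(1, len(k)),
               key=lambda d: loglik(k[:d], n[:d]) + loglik(k[d:], n[d:]))

rng = np.random.default_rng(4)
years = 60
# toy season: daily flood probability jumps from 0.1 to 0.5 at day 30
p_true = np.r_[np.full(30, 0.1), np.full(60, 0.5)]
k = rng.binomial(years, p_true)
n = np.full_like(k, years)
print(binomial_changepoint(k, n))  # should be near 30
```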

10.
In this paper we extend the generalized likelihood uncertainty estimation (GLUE) technique to estimate spatially distributed uncertainty in models conditioned against binary pattern data contained in flood inundation maps. Untransformed binary pattern data have already been used within GLUE to estimate domain-averaged (zero-dimensional) likelihoods, yet the pattern information embedded within such sources has not been used to estimate distributed uncertainty. Where pattern information has been used to map distributed uncertainty, it has been transformed into a continuous function prior to use, which may introduce additional errors. To solve this problem we use 'raw' binary pattern data to define a zero-dimensional global performance measure for each simulation in a Monte Carlo ensemble. Thereafter, for each pixel of the distributed model we evaluate the probability that the pixel was inundated. This probability is then weighted by the measure of global model performance, thus taking into account how well a given parameter set performs overall. The result is a distributed uncertainty measure mapped over real space. The advantage of the approach is that it both captures distributed uncertainty and contains information on global likelihood that can be used to condition predictions of further events for which observed data are not available. The technique is applied to the problem of flood inundation prediction at two test sites representing different hydrodynamic conditions. In both cases, the method reveals the spatial structure in simulation uncertainty and simultaneously enables mapping of the flood probability predicted by the model. Spatially distributed uncertainty analysis is shown to contain information over and above that available from global performance measures. Overall, the paper highlights the different types of information that may be obtained from mappings of model uncertainty over real and n-dimensional parameter spaces. Copyright © 2002 John Wiley & Sons, Ltd.
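The per-pixel weighting is straightforward to sketch. In the fragment below (an illustration under assumptions: the global performance measure is taken to be the intersection-over-union F-statistic, one common choice in the flood inundation literature), each ensemble member's binary map is weighted by its global score and the weighted mean gives a flood probability map.

```python
import numpy as np

def glue_flood_probability(sim_maps, obs_map):
    """GLUE-style flood probability map from an ensemble of binary
    inundation maps. Each simulation is weighted by a global performance
    measure (here |A∩B| / |A∪B| against the observed map, an assumed
    choice) and the weighted mean gives a per-pixel probability."""
    weights = []
    for sim in sim_maps:
        inter = np.logical_and(sim, obs_map).sum()
        union = np.logical_or(sim, obs_map).sum()
        weights.append(inter / union if union else 0.0)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, sim_maps.astype(float), axes=1)

rng = np.random.default_rng(5)
obs = rng.random((50, 50)) < 0.3         # observed inundation pattern
ens = rng.random((100, 50, 50)) < 0.3    # Monte Carlo ensemble of binary maps
prob_map = glue_flood_probability(ens, obs)
print(prob_map.shape, prob_map.max())
```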

11.
The determination of the probability distribution of recurrence times is the most important problem in the calculation of long-term conditional probabilities for the recurrence of large and great earthquakes. The principle of maximum entropy, in conjunction with a goodness-of-fit test (chi-square or Kolmogorov-Smirnov test), may be employed to obtain estimates of these densities using recurrence data for some seismic regions. Four different distributions are characterized by the property of maximum entropy as possible laws for the recurrence times of the largest earthquakes: uniform, exponential, Gaussian and log-normal. To discriminate among these probability distributions, we use probability theory and the chi-square test to check the goodness of fit to the distribution of recurrence times of shocks of magnitude 6.5 and larger that occurred in the west-northwestern zone of the Hellenic arc from 1791 to 1983. It is found that the recurrence-time data for this zone cannot be represented by the uniform or the Gaussian probability densities, but can be described accurately by the exponential and log-normal probability densities, both of which were predicted from the principle of maximum entropy. In other words, the principle of maximum entropy does not necessarily lead to a unique solution. In turn, the mathematical properties of these distributions could be used to derive different physical properties of the earthquake process in the west-northwestern zone of the Hellenic arc.
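A chi-square goodness-of-fit check of this kind is easy to reproduce. The sketch below tests an exponential law for recurrence times using equal-probability bins and one estimated parameter; the bin count and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def chi2_gof_exponential(times, nbins=6):
    """Chi-square goodness of fit of an exponential law to recurrence
    times: transform to uniformity under the fitted model, bin into
    equal-probability classes, and subtract the one fitted parameter
    from the degrees of freedom."""
    rate = 1.0 / np.mean(times)                      # MLE of the rate
    u = stats.expon.cdf(times, scale=1.0 / rate)     # uniform under H0
    observed, _ = np.histogram(u, bins=nbins, range=(0.0, 1.0))
    expected = np.full(nbins, len(times) / nbins)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    dof = nbins - 1 - 1                              # bins - 1 - fitted params
    return chi2, stats.chi2.sf(chi2, dof)

# toy recurrence times (years) between large shocks
t = stats.expon.rvs(scale=12.0, size=30, random_state=6)
print(chi2_gof_exponential(t))
```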

12.
Asymmetric copula in multivariate flood frequency analysis
Univariate flood frequency analysis is widely used in hydrological studies, and often only the flood peak or flood volume is statistically analyzed. A more complete analysis requires the three main characteristics of a flood event, i.e. peak, volume and duration. To fully understand these variables and their relationships, a multivariate statistical approach is necessary. The main aim of this paper is to define the trivariate probability density and cumulative distribution functions. When the joint distribution is known, it is possible to define the bivariate distribution of volume and duration conditioned on the peak discharge; consequently, volume–duration pairs statistically linked to peak values become available. The authors build the trivariate joint distribution of flood event variables using fully nested, or asymmetric, Archimedean copula functions. They describe the properties of this copula class and perform extensive simulations to highlight its differences from the well-known symmetric Archimedean copulas. They apply the asymmetric distributions to observed flood data and compare the results with those obtained using distributions built with symmetric copulas and with the standard Gumbel Logistic model.
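The fully nested construction can be written in a few lines. The sketch below evaluates a trivariate nested Gumbel copula, C(u1, C(u2, u3)); the parameter values are illustrative, and the nesting constraint theta_outer <= theta_inner is the standard sufficient condition for a valid fully nested Archimedean copula.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Bivariate Gumbel (Archimedean) copula, theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

def nested_gumbel_3d(u1, u2, u3, theta_outer, theta_inner):
    """Fully nested (asymmetric) trivariate Archimedean copula:
    C(u1, u2, u3) = C_outer(u1, C_inner(u2, u3)). The inner pair
    (e.g. volume-duration) carries the stronger dependence."""
    assert 1 <= theta_outer <= theta_inner
    return gumbel_copula(u1, gumbel_copula(u2, u3, theta_inner), theta_outer)

# joint non-exceedance probability for marginal quantiles of
# peak (u1), volume (u2) and duration (u3)
print(nested_gumbel_3d(0.9, 0.8, 0.85, theta_outer=1.5, theta_inner=2.5))
```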

13.
The parametric method of flood frequency analysis (FFA) involves fitting a probability distribution to the observed flood data at the site of interest. When the record length at a given site is relatively long and the flood data exhibit skewness, a distribution with three or more parameters, such as the log-Pearson type 3 distribution, is often used in FFA. This paper examines the suitability of the five-parameter Wakeby distribution for annual maximum flood data in eastern Australia. We adopt a Monte Carlo simulation technique to select an appropriate plotting position formula and to derive a probability plot correlation coefficient (PPCC) test statistic for the Wakeby distribution. The Weibull plotting position formula has been found to be the most appropriate for the Wakeby distribution. Regression equations for the PPCC test statistics associated with the Wakeby distribution at different levels of significance have been derived. Furthermore, a power study to estimate the rejection rate associated with the derived PPCC test statistics has been undertaken. Finally, an application using annual maximum flood series data from 91 catchments in eastern Australia is presented. The results show that the developed regression equations can be used with a high degree of confidence to test whether the Wakeby distribution fits the annual maximum flood series data at a given station. The methodology developed in this paper can be adapted to other probability distributions and to other study areas. Copyright © 2014 John Wiley & Sons, Ltd.
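The Wakeby distribution is defined through an explicit quantile function, which makes the PPCC with Weibull plotting positions simple to compute. The sketch below uses Hosking's parameterisation; the parameter values are illustrative assumptions, and a real test would use values fitted to the station record.

```python
import numpy as np

def wakeby_quantile(f, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function (Hosking's parameterisation):
    x(F) = xi + (alpha/beta)(1-(1-F)^beta) - (gamma/delta)(1-(1-F)^-delta)."""
    return (xi + (alpha / beta) * (1 - (1 - f) ** beta)
               - (gamma / delta) * (1 - (1 - f) ** (-delta)))

def ppcc_weibull(data, quantile_fn):
    """PPCC: correlation between the ordered data and fitted quantiles at
    Weibull plotting positions p_i = i / (n + 1)."""
    x = np.sort(data)
    p = np.arange(1, len(x) + 1) / (len(x) + 1)
    return np.corrcoef(x, quantile_fn(p))[0, 1]

# illustrative (assumed) Wakeby parameters for an annual-maximum series
params = dict(xi=50.0, alpha=120.0, beta=2.0, gamma=30.0, delta=0.1)
rng = np.random.default_rng(7)
sample = wakeby_quantile(rng.random(60), **params)   # exact Wakeby sample
print(ppcc_weibull(sample, lambda p: wakeby_quantile(p, **params)))
```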

14.
Abstract

Hydrological models are commonly used to perform real-time runoff forecasting for flood warning. Their application requires catchment characteristics and precipitation series that are not always available. An alternative approach is nonparametric modelling based only on runoff series. However, the following questions arise: can nonparametric models provide reliable forecasts, and can they perform as reliably as hydrological models? We performed probabilistic forecasting one, two and three hours ahead for a runoff series, with the aim of ascribing a probability density function to the predicted discharge, using time series analysis based on stochastic dynamics theory. The derived dynamic terms were compared to a hydrological model, LARSIM. Our procedure was able to forecast 1-, 2- and 3-h-ahead discharge probability functions within the 95% confidence interval, with a range of about 1.40 m3/s and relative errors (%) in the range [-30, 30]. The LARSIM model and the best nonparametric approaches gave similar results, but the range of relative errors was larger for the nonparametric approaches.

Editor D. Koutsoyiannis; Associate editor K. Hamed

Citation Costa, A.C., Bronstert, A. and Kneis, D., 2012. Probabilistic flood forecasting for a mountainous headwater catchment using a nonparametric stochastic dynamic approach. Hydrological Sciences Journal, 57 (1), 10–25.

15.
Based on the earthquake catalogue of mainland China, this study tests spatial distribution models of earthquakes by taking the Poisson model as the null hypothesis and the sub-models of the Neyman-Scott spatial cluster process as alternative models. The parameters of each model are estimated with the K-function point-process analysis method and maximum likelihood estimation, and goodness of fit is judged by the AIC criterion. The results show that the Poisson model fits worst, indicating that the spatial distribution of earthquakes is not completely random, while the generalized Thomas model fits best, indicating that the spatial distribution of earthquakes is clustered and can be well described by a generalized Thomas model composed of two Gaussian kernels. The results also show that, within the same study region, the spatial cluster scales computed from catalogues of different periods with different minimum magnitudes of completeness are almost unchanged, which implies that the spatial cluster scale of earthquakes is not controlled by small earthquakes and may be related to the fault scale of the study region.
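The K-function analysis used here is easy to illustrate. The sketch below gives a naive Ripley's K estimate without edge correction (the paper's estimator and catalogue data are not reproduced); under complete spatial randomness (Poisson), K(r) is close to pi*r^2, and values above that indicate clustering.

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction) for a 2-D point set:
    K(r) = area / (n(n-1)) * number of ordered pairs within distance r."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = (d < r).sum() - n            # drop the self-pairs on the diagonal
    return area * pairs / (n * (n - 1))

rng = np.random.default_rng(8)
pts = rng.random((300, 2)) * 100.0       # Poisson-like pattern on a 100x100 km window
for r in (5.0, 10.0):
    print(r, ripley_k(pts, r, area=100.0 * 100.0), np.pi * r**2)  # estimate vs CSR
```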

16.
ABSTRACT

Water operating rules have been universally used to operate single reservoirs because of their practicability, but the efficiency of operating rules for multi-reservoir systems is unsatisfactory in practice. For better performance, the combination of water and power operating rules is proposed and developed in this paper. The framework for deriving operating rules for multi-reservoir systems consists of three modules. First, a deterministic optimal operation module is used to determine the optimal reservoir storage strategies. Second, a fitting module is used to identify and estimate the operating rules using multiple linear regression (MLR) analysis and an artificial neural network (ANN) approach. Finally, a testing module is used to test the fitted operating rules with observed inflows. The Three Gorges and Qing River cascade reservoirs in the Changjiang River basin, China, are selected for a case study. It is shown that the combination of water and power operating rules can improve not only the assurance probability of output power but also the annual average hydropower generation, compared with the designed operating rules. It is indicated that the characteristics of flood and non-flood seasons, as well as the sample input (water or power), should be considered when operating rules are developed for multi-reservoir systems.

EDITOR D. Koutsoyiannis; ASSOCIATE EDITOR not assigned

17.
ABSTRACT

The index flood method of the regional L-moments approach is adapted to annual maximum rainfall (AMR) series of successively increasing durations from 5 minutes to 24 hours. In Turkey, there are 14 such AMR series with standard durations of 5, 10, 15, 30, 60, 120, 180, 240, 300, 360, 480, 720, 1080 and 1440 min. The parameters of the probability distributions found suitable for these AMR series in a homogeneous region need to be adjusted so that their quantile functions will not cross each other over the entire range of probabilities. This adjustment is made so that (1) the derivative of the quantile function with respect to the Gumbel reduced variate for a longer-duration AMR is greater than or equal to that for the shorter-duration AMR, and (2) the quantile of a longer-duration AMR is greater than that of the shorter-duration AMR, both conditions holding for any specific probability (they are restated compactly after this item). Accordingly, the parameters of a probability distribution fitted to an AMR series must either increase, decrease or remain constant with increasing rainstorm duration, and the parameters of different distributions fitted to two sequential AMR series must be interrelated. The index flood method by the L-moments approach, modified in this manner for successive-duration AMR series, is applied to the Inland Anatolia region of Turkey using data recorded at 31 rain-gauging stations with record lengths from 31 to 66 years.
EDITOR Z.W. Kundzewicz; ASSOCIATE EDITOR A. Viglione
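With y = -ln(-ln F) denoting the Gumbel reduced variate and Q_D(F) the quantile function of the duration-D AMR series, the two non-crossing conditions above read, for any durations D1 < D2 and all probabilities F:

```latex
\frac{\mathrm{d}Q_{D_2}(F)}{\mathrm{d}y} \;\ge\; \frac{\mathrm{d}Q_{D_1}(F)}{\mathrm{d}y},
\qquad\qquad
Q_{D_2}(F) \;>\; Q_{D_1}(F).
```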

18.
Hydrological Sciences Journal, 2013, 58(5), 992–1003
Abstract

The extreme Tyne (Northumbria, UK) flood in January 2005 provided the opportunity to reassess flood risk and to link peak discharge and flooded area to the probability of occurrence. However, in spite of the UK guidance on flood risk assessment given in the Flood Estimation Handbook (FEH), there is still considerable subjectivity in deriving risk estimates. A particular problem for the Tyne arises from the effects of river-bed gravel extraction, both on the reliability of gauged discharges and on the interpretation of historical level data. In addition, attenuation and drawdown of Kielder Water has reduced downstream flood risk since 1982. Estimates from single-site analysis, pooled analysis and historical information are compared. It is concluded that the return period of the observed flood was around 71 years on the lower Tyne and that it was probably the largest flood since 1815.

19.
Abstract

The seasonal flood-limited water level (FLWL), which reflects seasonal flood information, plays an important role in governing the trade-off between reservoir flood control and conservation. A risk analysis model for flood control operation at the seasonal FLWL, incorporating the inflow forecasting error, is proposed and developed. Variable kernel estimation is implemented to derive the density of the inflow forecasting error. Synthetic inflows incorporating forecasting error are simulated by Monte Carlo simulation (MCS) according to this error density. The risk of seasonal FLWL control is then estimated by MCS based on a combination of the inflow forecast lead time, seasonal design flood hydrographs and seasonal operation rules. The Three Gorges reservoir is selected as a case study. The application results indicate that seasonal FLWL control can effectively enhance the flood water utilization rate without lowering the annual flood control standard.
Editor D. Koutsoyiannis; Associate editor A. Viglione

Citation Zhou, Y.-L. and Guo, S.-L., 2014. Risk analysis for flood control operation of seasonal flood-limited water level incorporating inflow forecasting error. Hydrological Sciences Journal, 59 (5), 1006–1019.
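The MCS core of such a risk analysis can be sketched briefly. The fragment below is a strong simplification under stated assumptions: a Gaussian KDE stands in for the paper's variable kernel estimate, relative forecast errors are resampled to perturb a short forecast inflow sequence, and "risk" is the fraction of realizations whose total inflow volume exceeds an assumed storage margin above the FLWL.

```python
import numpy as np
from scipy import stats

def flwl_overtopping_risk(forecast, errors, margin, nsim=10000, seed=10):
    """Monte Carlo risk sketch: resample historical relative forecast
    errors with a Gaussian KDE, perturb the forecast inflows and count
    how often the total inflow volume exceeds the storage margin
    available above the seasonal FLWL (all in consistent volume units)."""
    kde = stats.gaussian_kde(errors)              # stand-in for variable kernel
    e = kde.resample(nsim * len(forecast), seed=seed)[0].reshape(nsim, -1)
    totals = np.sum(forecast * (1.0 + e), axis=1)
    return np.mean(totals > margin)

hist_errors = np.random.default_rng(11).normal(0.0, 0.1, 200)  # relative errors
forecast = np.array([12000.0, 15000.0, 18000.0])  # inflow over the lead time
print(flwl_overtopping_risk(forecast, hist_errors, margin=50000.0))
```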

20.
Abstract

Two probability density functions (pdfs) popular in hydrological analyses, the log-Gumbel (LG) and the log-logistic (LL), are discussed with respect to (a) their applicability to hydrological data and (b) the drawbacks resulting from their mathematical properties. This paper, the first in a two-part series, examines the classical problem in which the considered pdf is assumed to be the true distribution. The most significant drawback is that the statistical moments of the LG and LL exist only for a very limited range of parameters. For these parameters, a very rapid increase of the skewness coefficient as a function of the coefficient of variation is observed (especially for the log-Gumbel distribution), which is seldom seen in hydrological data. These probability distributions can therefore be applied with confidence only to extreme situations. In other cases there is an important disagreement between empirical data and the theoretical distributions in their tails, which matters greatly for the characterization of distribution asymmetry. The limited range of shape parameters in both distributions makes analyses that rely on the interpretation of moments (such as the method of moments) inconvenient. It is also shown that the often-used L-moments are not sufficient for characterizing the location, scale and shape parameters of pdfs, particularly when attention is paid to the tail of the probability distribution. The maximum likelihood method guarantees asymptotic convergence of the estimators beyond the domain of existence of the first two moments (or L-moments), but it is not sensitive enough to the shape of the upper tail.
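The moment-existence restriction can be made explicit with a standard result (stated here for illustration; the paper's own derivation is not reproduced). If X follows a Gumbel distribution with location mu and scale sigma, and Y = e^X follows the log-Gumbel distribution, the Gumbel moment generating function gives

```latex
\mathbb{E}\left[Y^{r}\right] = \mathbb{E}\left[e^{rX}\right]
  = e^{r\mu}\,\Gamma(1 - r\sigma), \qquad r\sigma < 1,
```

so the r-th moment exists only for r*sigma < 1; with sigma >= 1/2 even the variance is undefined, consistent with the very limited parameter range noted above.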
