Similar Articles (20 results)
1.
A comparison of different methods for estimating T-year events is presented, all based on the Extreme Value Type I distribution. Series of annual maximum floods from ten gauging stations on the South Island of New Zealand have been used. Different methods of predicting the 100-year event and the associated uncertainty have been applied: at-site estimation and regional index-flood estimation, with and without accounting for intersite correlation, using either the method of moments or the method of probability weighted moments for parameter estimation. Furthermore, estimation at ungauged sites was considered, applying either a log-linear relationship between the at-site mean annual flood and catchment characteristics or a direct log-linear relationship between 100-year events and catchment characteristics. Comparison of the results shows that the existence of at-site measurements significantly diminishes the prediction uncertainty and that the presence of intersite correlation tends to increase the uncertainty. A simulation study revealed that, in regional index-flood estimation, the method of probability weighted moments is preferable to method-of-moments estimation with regard to bias and RMSE.
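A minimal sketch of the at-site EV1/PWM step described above, assuming an annual maximum series is available; the synthetic data and function name are illustrative, not the study's:

```python
import numpy as np

def gumbel_pwm_quantile(ams, T=100.0):
    """Fit an EV1 (Gumbel) distribution to an annual maximum series by
    probability weighted moments (PWM) and return the T-year event."""
    x = np.sort(np.asarray(ams, dtype=float))           # ascending order
    n = len(x)
    b0 = x.mean()                                       # first sample PWM
    j = np.arange(1, n + 1)
    b1 = np.sum((j - 1) / (n - 1) * x) / n              # unbiased estimator of beta_1
    alpha = (2.0 * b1 - b0) / np.log(2.0)               # EV1 scale
    xi = b0 - 0.5772156649 * alpha                      # EV1 location (Euler's constant)
    return xi - alpha * np.log(-np.log(1.0 - 1.0 / T))  # T-year quantile

# Synthetic 40-year record drawn from a known EV1 parent (illustrative only):
rng = np.random.default_rng(1)
ams = 100.0 - 25.0 * np.log(-np.log(rng.uniform(size=40)))
print(f"Estimated 100-year event: {gumbel_pwm_quantile(ams):.1f}")
```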

2.
The best information on which to base estimates of future flood frequencies is the record of past flood events. Where there is a substantial record at the location for which estimates are desired, the estimation process is generally straightforward, although a variety of methods is used and there is major uncertainty in the estimates. In general, the frequency of future events is assumed to be indicated by the observed frequency of past events under constant controlling watershed conditions.

Techniques are available for using information on historical (pre-record) flood data to improve the reliability of flood frequency estimates. There are methods for detecting and managing extremely unusual events (outliers) and for improving the reliability of short-record estimates based on long-record data at related locations. Regional correlation analysis can be used to establish flood frequency estimates for locations where records are not available.

Detailed hydrologic analysis, usually involving rainfall-runoff studies, is required for establishing flood frequency relationships for modified watershed conditions or, in many cases, for newly formed drainage systems such as those in urban areas and at airports.

The principal use of flood frequency functions is to compare expected changes in flood damages (due to a contemplated action) with the economic and social costs or benefits of that action.

3.
Radar rainfall estimation for flash flood forecasting in small, urban catchments is examined through analyses of radar, rain gage and discharge observations from the 14.3 km2 Dead Run drainage basin in Baltimore County, Maryland. The flash flood forecasting problem pushes the envelope of rainfall estimation to time and space scales that are commensurate with the scales at which the fundamental governing laws of land surface processes are derived. Analyses of radar rainfall estimates are based on volume scan WSR-88D reflectivity observations for 36 storms during the period 2003–2005. Gage-radar analyses show large spatial variability of storm total rainfall over the 14.3 km2 basin for flash flood producing storms. The ability to capture the detailed spatial variation of rainfall for flash flood producing storms by WSR-88D rainfall estimates varies markedly from event to event. As spatial scale decreases from the 14.3 km2 scale of the Dead Run watershed to 1 km2 (and the characteristic time scale of flash flood producing rainfall decreases from 1 h to 15 min) the predictability of flash flood response from WSR-88D rainfall estimates decreases sharply. Storm to storm variability of multiplicative bias in storm total rainfall estimates is a dominant element of the error structure of radar rainfall estimates, and it varies systematically over the warm season and with flood magnitude. Analyses of the 7 July 2004 and 28 June 2005 storms illustrate microphysical and dynamical controls on radar estimation error for extreme flash flood producing storms.
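The multiplicative bias discussed above is simply the ratio of gage to radar storm totals; a small illustration with hypothetical collocated totals:

```python
import numpy as np

def storm_total_bias(gage_totals, radar_totals):
    """Multiplicative bias of radar storm-total rainfall: ratio of the
    gage storm total to the collocated radar storm total."""
    g = np.asarray(gage_totals, dtype=float)
    r = np.asarray(radar_totals, dtype=float)
    return g.sum() / r.sum()

# Hypothetical storm totals (mm) at five collocated gage/radar pixels:
gage  = [42.0, 55.3, 38.1, 61.7, 47.9]
radar = [30.5, 41.0, 29.8, 44.2, 36.6]
print(f"Multiplicative bias B = {storm_total_bias(gage, radar):.2f}")
# B > 1 means the radar underestimates; multiplying the radar field by B
# removes the event-mean bias but not its spatial structure.
```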

4.
The generalized gamma (GG) distribution has a density function that can take on many possible forms commonly encountered in hydrologic applications. This fact has led many authors to study the properties of the distribution and to propose various estimation techniques (method of moments, mixed moments, maximum likelihood etc.). We discuss some of the most important properties of this flexible distribution and present a flexible method of parameter estimation, called the generalized method of moments (GMM), which combines any three moments of the GG distribution. The main advantage of this general method is that it has many of the previously proposed methods of estimation as special cases. We also give a general formula for the variance of the T-year event X_T obtained by the GMM, along with a general formula for the parameter estimates and also for the covariances and correlation coefficients between any pair of such estimates. By applying the GMM and carefully choosing the order of the moments that are used in the estimation, one can significantly reduce the variance of T-year events for the range of return periods that are of interest.
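A sketch of the GMM idea, assuming scipy's gengamma parameterization stands in for the three-parameter GG above; the moment orders, starting values and data are illustrative:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import gammaln
from scipy.stats import gengamma

def gg_moment(r, a, c, s):
    """r-th non-central moment of GG(a, c, scale=s):
    E[X^r] = s**r * Gamma(a + r/c) / Gamma(a)."""
    return s**r * np.exp(gammaln(a + r / c) - gammaln(a))

def fit_gg_gmm(x, orders=(1.0, 2.0, 3.0)):
    """Generalized method of moments: solve for (a, c, s) so that the model
    moments of the chosen orders match the sample moments. orders=(1, 2, 3)
    reproduces ordinary MOM; other combinations can reduce the variance of
    T-year event estimates."""
    x = np.asarray(x, dtype=float)
    sample = [np.mean(x**r) for r in orders]
    eqs = lambda p: [gg_moment(r, *p) - m for r, m in zip(orders, sample)]
    return fsolve(eqs, x0=[2.0, 1.0, x.mean()])

# Hypothetical annual maxima drawn from a known GG parent:
x = gengamma.rvs(2.0, 1.5, scale=50.0, size=200, random_state=1)
a, c, s = fit_gg_gmm(x)
x100 = gengamma.ppf(1.0 - 1.0 / 100.0, a, c, scale=s)   # 100-year event
print(f"a={a:.2f}, c={c:.2f}, scale={s:.1f}, x100={x100:.1f}")
```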

5.
The log-Gumbel distribution is one of the extreme value distributions which has been widely used in flood frequency analysis. This distribution has been examined in this paper regarding quantile estimation and confidence intervals of quantiles. Specific estimation algorithms based on the methods of moments (MOM), probability weighted moments (PWM) and maximum likelihood (ML) are presented. The applicability of the estimation procedures and comparison among the methods have been illustrated based on an application example considering the flood data of the St. Mary's River.
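One of the three estimation routes above (ML) amounts to fitting a Gumbel to the log-transformed flows and back-transforming the quantile; a minimal sketch with hypothetical data, not the St. Mary's River record:

```python
import numpy as np
from scipy.stats import gumbel_r

def log_gumbel_quantile_ml(flows, T=100.0):
    """Log-Gumbel T-year quantile by maximum likelihood: fit a Gumbel (EV1)
    to the log-transformed annual maxima, take the Gumbel quantile,
    and back-transform."""
    y = np.log(np.asarray(flows, dtype=float))
    loc, scale = gumbel_r.fit(y)                  # ML estimates on the log scale
    yT = gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
    return np.exp(yT)

# Hypothetical annual maximum series (m^3/s):
flows = [412, 530, 289, 651, 478, 390, 720, 505, 334, 599, 455, 612]
print(f"100-year flood (log-Gumbel, ML): {log_gumbel_quantile_ml(flows):.0f} m^3/s")
```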

6.
Weather radar has been widely employed to measure precipitation and to predict flood risks. However, it is still not considered accurate enough because of radar errors. Most previous studies have focused primarily on removing errors from the radar data. In the current study, therefore, we examined the effects of radar rainfall errors on rainfall-runoff simulation using the spatial error model (SEM). The SEM was used to synthetically generate random or cross-correlated errors, and a number of error realizations were generated to investigate the effect of spatially dependent errors in radar rainfall estimates on runoff simulation. For the runoff simulation, the Nam River basin in South Korea was modelled with the distributed rainfall-runoff model Vflo. The results indicated that spatially dependent errors caused much higher variations in peak discharge than independent random errors. To further investigate the effect of the magnitude of cross-correlation among radar errors, different magnitudes of spatial cross-correlation were employed during the rainfall-runoff simulation. The results demonstrated that stronger correlation led to higher variation in peak discharge up to the observed correlation structure, while correlation stronger than the observed case resulted in lower variability in peak discharge. We conclude that the error structure in radar rainfall estimates significantly affects predictions of the runoff peak. Therefore, efforts not only to remove radar rainfall errors but also to weaken the cross-correlation structure of the errors need to be made in order to forecast flood events accurately.
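The paper's SEM is not reproduced here; as a stand-in, the sketch below generates cross-correlated multiplicative radar-error fields with an assumed exponential correlation model and a Cholesky factorization, which illustrates the same experiment (error variance and correlation length are assumed values):

```python
import numpy as np

def correlated_radar_errors(coords, sigma=0.3, corr_length=5.0, rng=None):
    """One realization of spatially cross-correlated multiplicative radar
    errors on pixel coordinates (km), via an exponential correlation model
    and a Cholesky factorization; returns unit-mean lognormal factors."""
    if rng is None:
        rng = np.random.default_rng()
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sigma**2 * np.exp(-d / corr_length)             # assumed covariance model
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))
    eps = L @ rng.standard_normal(len(coords))          # correlated Gaussian field
    return np.exp(eps - sigma**2 / 2.0)                 # mean-one multiplicative errors

# Perturb a 10 x 10 radar field; corr_length -> 0 recovers the independent case:
xy = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
factors = correlated_radar_errors(xy, rng=np.random.default_rng(42))
print(factors.reshape(10, 10).round(2))
```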

7.
Conventional flood frequency analysis is concerned with providing an unbiased estimate of the magnitude of the design flow exceeded with probability p, but sampling uncertainties imply that such estimates will, on average, be exceeded more frequently. An alternative approach, therefore, is to derive an estimator which gives an unbiased estimate of flow risk: the difference between the two magnitudes reflects uncertainties in parameter estimation. An empirical procedure has been developed to estimate the mean true exceedance probabilities of conventional estimates made using a GEV distribution fitted by probability weighted moments, and adjustment factors have been determined to enable the estimation of flood magnitudes exceeded, on average, with the desired probability.
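The paper's adjustment factors are empirical; the Monte Carlo sketch below merely illustrates why the mean true exceedance exceeds the nominal p (scipy's genextreme.fit is ML rather than the PWM fitting used in the paper; all settings are illustrative):

```python
import numpy as np
from scipy.stats import genextreme

def mean_true_exceedance(p=0.01, n=30, nsim=1000, shape=-0.1, seed=0):
    """Sample records of length n from a known GEV parent, fit a GEV to
    each (ML here), and average the parent's true exceedance probability
    of the fitted (1-p)-quantile."""
    rng = np.random.default_rng(seed)
    probs = np.empty(nsim)
    for k in range(nsim):
        x = genextreme.rvs(shape, size=n, random_state=rng)
        c_hat, loc, scale = genextreme.fit(x)
        q_hat = genextreme.ppf(1.0 - p, c_hat, loc, scale)
        probs[k] = genextreme.sf(q_hat, shape)   # true exceedance of the estimate
    return probs.mean()

# Sampling error makes the mean true exceedance larger than the nominal p:
print(f"nominal p = 0.01, mean true exceedance = {mean_true_exceedance():.4f}")
```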

8.
9.
10.
Applicability of log-Gumbel (LG) and log-logistic (LL) probability distributions in hydrological studies is critically examined under real conditions, where the assumed distribution differs from the true one. The set of alternative distributions consists of five two-parameter distributions with zero lower bound, including LG and LL as well as the lognormal (LN), linear diffusion analogy (LD) and gamma (Ga) distributions. The log-Gumbel distribution is considered as both a false and a true distribution. The model error of upper quantiles and of the first two moments is analytically derived for three estimation methods: the method of moments (MOM), the linear moments method (LMM) and the maximum likelihood method (MLM). These estimation methods are used as methods of approximation of one distribution by another distribution. As recommended in the first of this two-part series of papers, MLM turns out to be the worst method if the assumed LG or LL distribution is not the true one. It produces a huge bias of upper quantiles, at least an order of magnitude larger than that of the other two methods. However, the reverse case, i.e. acceptance of LN, LD or Ga as a hypothetical distribution while the LG or LL distribution is the true one, gives an MLM bias of reasonable magnitude in upper quantiles. Therefore, one should avoid choosing the LG and LL distributions in flood frequency analysis, especially if MLM is to be applied.
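A sketch of the reverse case discussed above: accepting a lognormal as the hypothetical distribution (fitted by ML) while a log-Gumbel is the true parent, and reading off the model error of an upper quantile; all parameters are illustrative:

```python
import numpy as np
from scipy.stats import gumbel_r, lognorm

# True parent: log-Gumbel, i.e. ln(X) ~ Gumbel(xi, alpha); values illustrative.
xi, alpha, T = 4.0, 0.4, 100.0
p = 1.0 - 1.0 / T
x_true = np.exp(gumbel_r.ppf(p, xi, alpha))             # true 100-year quantile

# MLM as a method of approximation: ML-fit the false lognormal model to a
# large sample from the true parent, so sampling error is negligible and
# the remaining quantile error is model error.
rng = np.random.default_rng(7)
x = np.exp(gumbel_r.rvs(xi, alpha, size=100_000, random_state=rng))
s, loc, scale = lognorm.fit(x, floc=0.0)                # two-parameter LN
x_false = lognorm.ppf(p, s, loc, scale)

print(f"true x100 = {x_true:.0f}, LN-MLM x100 = {x_false:.0f}, "
      f"model error = {(x_false - x_true) / x_true:+.1%}")
```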

11.
Selection of a flood frequency distribution and associated parameter estimation procedure is an important step in flood frequency analysis. This is, however, a difficult task due to problems in selecting the best-fit distribution from the large number of candidate distributions and parameter estimation procedures available in the literature. This paper presents a case study with flood data from Tasmania in Australia, which examines four model selection criteria: the Akaike Information Criterion (AIC), its second-order variant (AICc), the Bayesian Information Criterion (BIC) and a modified Anderson–Darling Criterion (ADC). It has been found from Monte Carlo simulation that ADC is more successful than AIC and BIC in recognizing the parent distribution correctly when the parent is a three-parameter distribution; conversely, AIC and BIC are better when the parent is a two-parameter distribution. From the seven probability distributions examined for Tasmania, it has been found that two-parameter distributions are preferable to three-parameter ones, with the Log Normal appearing to be the best choice. The paper also evaluates three widely used parameter estimation procedures for the Log Normal distribution: the method of moments (MOM), the method of maximum likelihood (MLE) and the Bayesian Markov chain Monte Carlo method (BAY). It has been found that the BAY procedure provides better parameter estimates for the Log Normal distribution, resulting in flood quantile estimates with smaller bias and standard error than MOM and MLE. The findings from this study should be useful in flood frequency analyses in other Australian states and in other countries, particularly when selecting an appropriate probability distribution from a number of alternatives.
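A minimal sketch of AIC/BIC-based selection among candidate distributions using scipy's generic ML fitting (note that scipy also fits a location parameter, so its Log Normal is effectively three-parameter unless loc is fixed); the data and candidate set are illustrative:

```python
import numpy as np
from scipy import stats

def aic_bic(dist, x):
    """ML-fit a scipy.stats distribution and return its (AIC, BIC).
    AICc would add the small-sample term 2k(k+1)/(n-k-1) to the AIC."""
    params = dist.fit(x)
    ll = np.sum(dist.logpdf(x, *params))
    k, n = len(params), len(x)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(3)
x = rng.lognormal(mean=5.0, sigma=0.5, size=50)   # hypothetical annual maxima
candidates = {"Log Normal": stats.lognorm, "Gumbel": stats.gumbel_r,
              "GEV": stats.genextreme, "Pearson III": stats.pearson3}
for name, dist in candidates.items():
    aic, bic = aic_bic(dist, x)
    print(f"{name:12s} AIC={aic:8.1f}  BIC={bic:8.1f}")
```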

12.
Various regional flood frequency analysis procedures are used in hydrology to estimate hydrological variables at ungauged or partially gauged sites. Relatively few studies have been conducted to evaluate the accuracy of these procedures and to estimate the error induced in regional flood frequency estimation models. The objective of this paper is to assess the overall error induced in the residual kriging (RK) regional flood frequency estimation model. The two main error sources in specific flood quantile estimation using RK are the error induced in the local quantile estimation procedure and the error resulting from the regional quantile estimation process. Therefore, for an overall error assessment, the errors associated with these two steps must be quantified. Results show that the main source of error in RK is the error induced by the regional quantile estimation method. Results also indicate that the accuracy of the regional estimates increases with decreasing return period.
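RK as described above combines a regression step with kriging of its residuals; the sketch below uses a log-linear regression and simple kriging with an assumed exponential covariance (no nugget), with hypothetical sites and descriptors:

```python
import numpy as np

def residual_kriging(X_g, y_g, coords_g, X_u, coords_u, range_km=50.0):
    """Regional quantile at ungauged sites: log-linear regression on
    catchment descriptors plus simple kriging of the regression residuals
    with an assumed exponential covariance."""
    A = np.column_stack([np.ones(len(X_g)), X_g])
    beta, *_ = np.linalg.lstsq(A, np.log(y_g), rcond=None)  # regression step
    resid = np.log(y_g) - A @ beta
    d_gg = np.linalg.norm(coords_g[:, None] - coords_g[None, :], axis=-1)
    d_ug = np.linalg.norm(coords_u[:, None] - coords_g[None, :], axis=-1)
    w = np.linalg.solve(np.exp(-d_gg / range_km),           # kriging step
                        np.exp(-d_ug / range_km).T)
    A_u = np.column_stack([np.ones(len(X_u)), X_u])
    return np.exp(A_u @ beta + w.T @ resid)

# Hypothetical example: three gauged sites (descriptor: area, km^2), one target.
X_g = np.array([[120.0], [300.0], [80.0]]); y_g = np.array([450.0, 900.0, 300.0])
c_g = np.array([[0.0, 0.0], [40.0, 10.0], [10.0, 35.0]])
X_u = np.array([[200.0]]); c_u = np.array([[20.0, 15.0]])
print(residual_kriging(X_g, y_g, c_g, X_u, c_u))
```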

13.
Flood frequency estimation is crucial in both engineering practice and hydrological research. Regional analysis of flood peak discharges is used to obtain more accurate estimates of flood quantiles in ungauged or poorly gauged catchments. It is based on the identification of homogeneous zones, where the probability distribution of annual maximum peak flows is invariant except for a scale factor represented by an index flood. The numerous applications of this method have highlighted that obtaining accurate estimates of the index flood is a critical step, especially in ungauged or poorly gauged sections, where direct estimation by the sample mean of the annual flood series (AFS) is not possible, or is inaccurate. In such cases, indirect methods have to be used. Most indirect methods are based upon empirical relationships that link the index flood to hydrological, climatological and morphological catchment characteristics, developed by means of multi-regression analysis or simplified lumped representations of rainfall-runoff processes. The limits of these approaches become increasingly evident as the size and spatial variability of the catchment increase. In these cases, the use of a spatially distributed, physically based hydrological model and time-continuous simulation of discharge can improve estimation of the index flood. This work presents an application of the FEST-WB model for the reconstruction of 29 years of hourly streamflows for an Alpine snow-fed catchment in northern Italy, to be used for index flood estimation. To extend the length of the simulated discharge time series, meteorological forcings given by daily precipitation and temperature at automatic ground weather stations are disaggregated to hourly resolution and then fed to FEST-WB. The accuracy of the method in estimating the index flood as a function of the length of the simulated series is discussed, and suggestions for use of the methodology are provided.
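The index-flood step itself is just the sample mean of the annual maxima of the continuous simulation; the sketch below uses a random stand-in series rather than FEST-WB output and illustrates the sensitivity to simulated record length discussed above:

```python
import numpy as np
import pandas as pd

# Stand-in for a 29-year hourly discharge reconstruction (FEST-WB output
# would be used in practice); the index flood is the mean of its annual maxima.
idx = pd.date_range("1986-01-01", "2014-12-31 23:00", freq="h")
rng = np.random.default_rng(5)
q_sim = pd.Series(rng.gamma(2.0, 8.0, size=len(idx)), index=idx)  # m^3/s

annual_max = q_sim.resample("YS").max()      # annual flood series from simulation
print(f"{len(annual_max)} years, index flood = {annual_max.mean():.1f} m^3/s")

# Sensitivity of the index flood to the length of the simulated series:
for n in (10, 20, len(annual_max)):
    print(f"first {n:2d} years -> {annual_max.iloc[:n].mean():.1f} m^3/s")
```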

14.
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs.
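A minimal sketch of the two-step scheme, assuming scikit-learn and synthetic predictors (the paper's actual predictors and model forms are not reproduced): logistic regression for occurrence, then a wet-day amount regression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Synthetic predictors (e.g. nearby-station observations) and target site data:
rng = np.random.default_rng(11)
X = rng.normal(size=(1000, 3))
wet = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0
amount = np.where(wet, np.exp(0.8 * X[:, 0] + rng.normal(scale=0.3, size=1000)), 0.0)

occ_model = LogisticRegression().fit(X, wet)                      # step 1: occurrence
amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))   # step 2: wet-day amounts

def estimate_precip(X_new, threshold=0.5):
    """Estimated daily precipitation: zero where the occurrence probability
    is below the threshold, else the back-transformed regression amount."""
    p_wet = occ_model.predict_proba(X_new)[:, 1]
    amt = np.exp(amt_model.predict(X_new))
    return np.where(p_wet >= threshold, amt, 0.0)

print(estimate_precip(X[:5]).round(2))
```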

15.
16.
The estimation of flood frequency is vital for flood control strategies and hydraulic structure design. Generating synthetic flood events according to the statistical properties of observations is one plausible method of analyzing flood frequency. Owing to the statistical dependence among the flood event variables (i.e. the flood peak, volume and duration), a multidimensional joint probability estimation is required. Recently, the copula method has been widely used for constructing multivariate dependence structures; however, the copula family must be chosen before application, and the choice process is sometimes rather subjective. The entropy copula, a new copula family employed in this research, provides a way to avoid this relatively subjective step by combining the theories of copulas and entropy. The analysis shows the effectiveness of the entropy copula for probabilistic modelling of the flood events at two hydrological gauges, and a comparison of accuracy with popular copulas was made. The Gibbs sampling technique was applied for trivariate flood event simulation in order to mitigate the difficulty of extending the calculation directly to three dimensions. The simulation results indicate that the entropy copula is a simple and effective copula family for trivariate flood simulation.
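The entropy copula itself is not reconstructed here; as a clearly labelled stand-in, the sketch below uses a Gaussian copula with assumed gamma margins to show how dependent (peak, volume, duration) triples are simulated from any copula-based model:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm, gamma

# Gaussian copula stand-in (NOT the paper's entropy copula); all parameters
# are hypothetical.
R = np.array([[1.0, 0.7, 0.5],
              [0.7, 1.0, 0.6],
              [0.5, 0.6, 1.0]])                 # assumed dependence structure
margins = [gamma(3.0, scale=200.0),             # peak (m^3/s)
           gamma(2.5, scale=40.0),              # volume (10^6 m^3)
           gamma(4.0, scale=12.0)]              # duration (h)

rng = np.random.default_rng(21)
z = multivariate_normal(cov=R).rvs(size=5000, random_state=rng)
u = norm.cdf(z)                                 # copula sample on [0,1]^3
events = np.column_stack([m.ppf(u[:, k]) for k, m in enumerate(margins)])
print(np.corrcoef(events, rowvar=False).round(2))
```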

17.
Conventional design practice aims at obtaining optimal estimates of floods with specified exceedance probabilities. Such estimates are, however, known on average to be exceeded more frequently than expected. Alternatively, methods focusing on the expected exceedance probability can be used. Two different methods are considered here; the first is based on the sample distribution of true exceedance probabilities, while the second is a Bayesian analogue using the likelihood function and a noninformative prior to describe the variability of exceedance probabilities. Appropriate analytical solutions are presented in both cases using the partial duration series approach.

18.

19.
Probabilistic characterization of environmental variables or data typically involves distributional fitting. Correlations, when present in variables or data, can considerably complicate the fitting process. In this work, effects of high-order correlations on distributional fitting were examined, and how they are technically accounted for was described using two multi-dimensional formulation methods: maximum entropy (ME) and Koehler–Symanowski (KS). The ME method formulates a least-biased distribution by maximizing its entropy, and the KS method uses a formulation that conserves specified marginal distributions. Two bivariate environmental data sets, ambient particulate matter and water quality, were chosen for illustration and discussion. Three metrics (log-likelihood function, root-mean-square error, and bivariate Kolmogorov–Smirnov statistic) were used to evaluate distributional fit. Bootstrap confidence intervals were also employed to help inspect the degree of agreement between distributional and sample moments. It is shown that both methods are capable of fitting the data well and have the potential for practical use. The KS distributions were found to be of good quality, and using the maximum likelihood method for the parameter estimation of a KS distribution is computationally efficient.

20.
Spatial interpolation methods used to estimate missing precipitation data generally underestimate the high extremes and overestimate the low extremes. This is a major limitation that plagues all spatial interpolation methods, as observations from different sites are used in local or global variants of these methods to estimate missing data. This study proposes bias-correction methods, similar to those used in climate change studies, for correcting missing precipitation estimates provided by an optimal spatial interpolation method. The methods are applied to post-interpolation estimates using quantile mapping, a variant of equidistant quantile matching, and a new optimal single best estimator (SBE) scheme. The SBE is developed using a mixed-integer nonlinear programming formulation. K-fold cross-validation of the estimation and correction methods is carried out using 15 rain gauges in a temperate climatic region of the U.S. Exhaustive evaluation of the bias-corrected estimates is carried out using several statistical, error, performance and skill-score measures. The differences among the bias-correction methods, their effectiveness and their limitations are examined. The bias-correction method based on a variant of equidistant quantile matching is recommended. Post-interpolation bias corrections preserved the site-specific summary statistics with minor changes in the magnitudes of the error and performance measures. The changes were found to be statistically insignificant based on parametric and nonparametric hypothesis tests. The correction methods provided improved skill scores with minimal changes in the magnitudes of several extreme precipitation indices. The bias corrections of estimated data also brought site-specific serial autocorrelations at different lags and transition states (dry-to-dry, dry-to-wet, wet-to-wet and wet-to-dry) close to those of the observed series. Bias correction of missing-data estimates provides serially complete precipitation time series that are more useful for climate change and variability studies than uncorrected filled data series.
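A minimal sketch of the quantile-mapping correction named above, using empirical CDFs; the reference samples and the smoothing of the hypothetical estimates are illustrative:

```python
import numpy as np

def quantile_map(estimates, obs_ref, est_ref):
    """Quantile mapping: replace each interpolated estimate by the observed
    value at the same non-exceedance probability, using reference samples
    from the target site (obs_ref) and from the interpolator (est_ref)."""
    est_ref = np.sort(np.asarray(est_ref, dtype=float))
    obs_ref = np.sort(np.asarray(obs_ref, dtype=float))
    p = np.interp(estimates, est_ref,
                  (np.arange(len(est_ref)) + 0.5) / len(est_ref))
    return np.interp(p, (np.arange(len(obs_ref)) + 0.5) / len(obs_ref), obs_ref)

# Interpolators compress the distribution; correct a too-smooth estimate:
rng = np.random.default_rng(9)
obs = rng.gamma(0.8, 12.0, size=3000)            # observed wet-day amounts (mm)
est = 0.6 * obs + 0.4 * obs.mean()               # hypothetical smoothed estimates
fixed = quantile_map(est, obs, est)
print(f"P99 obs={np.percentile(obs, 99):.1f}  est={np.percentile(est, 99):.1f}  "
      f"corrected={np.percentile(fixed, 99):.1f}")
```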
