Similar Literature
20 similar articles found.
1.
Binary data such as survival, hatching and mortality are commonly assumed to be best described by a binomial distribution. This article provides a simple and straightforward approach for deriving a no/lowest observed effect level (NOEL/LOEL) in a one-to-many control-versus-treatments setup. In practice, NOEL and LOEL values can be derived by different procedures, e.g. Fisher’s exact test combined with adjusted p values. However, adjusting p values heavily decreases statistical power. Alternatively, multiple t tests (e.g. the Dunnett test procedure) together with arcsine-square-root transformation can be applied to account for the variance heterogeneity of binomial data. The arcsine-square-root transformation, however, conflicts with the normality assumption, because the transformed data are bounded while a normal distribution has support on \((-\infty ,\infty )\). Furthermore, results of statistical tests relying on an approximate normal distribution are themselves only approximate. When testing for trends in probabilities of success (probs), the step-down Cochran–Armitage trend test (CA) can be applied. Its test statistic is approximately normal; however, if probs approach 0 or 1, the normal approximation of the null distribution is poor, so critical values and p values lack statistical accuracy. We propose applying the closure principle (CP) together with the Fisher–Freeman–Halton test (FISH). The resulting CPFISH solves the problems mentioned above: CP is used to overcome \(\alpha\)-inflation, while FISH tests for differences in probs between the control and any subset of treatment groups. Its applicability is demonstrated on real data sets. Additionally, we performed a simulation study of 81 different setups (differing numbers of control replicates, numbers of treatments, etc.) and compared CPFISH to CA, allowing us to point out the advantages and disadvantages of CPFISH.
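A minimal sketch of the baseline approach the abstract contrasts with CPFISH (per-treatment Fisher's exact tests with Holm-adjusted p values); the closure principle and the Fisher–Freeman–Halton test are not available in scipy, and all counts below are hypothetical:

```python
import numpy as np
from scipy.stats import fisher_exact

def noel_loel(control, treatments, alpha=0.05):
    """Derive NOEL/LOEL from (successes, trials) counts.

    control: (successes, trials) for the control group.
    treatments: list of (successes, trials), ordered by increasing dose.
    """
    cs, cn = control
    pvals = [fisher_exact([[cs, cn - cs], [ts, tn - ts]])[1]
             for ts, tn in treatments]
    # Holm step-down adjustment of the raw p values
    order = np.argsort(pvals)
    adj = np.empty(len(pvals))
    running = 0.0
    for rank, idx in enumerate(order):
        running = max(running, (len(pvals) - rank) * pvals[idx])
        adj[idx] = min(1.0, running)
    # LOEL: lowest dose with a significant effect; NOEL: one dose below
    loel = next((i + 1 for i, p in enumerate(adj) if p < alpha), None)
    noel = len(treatments) if loel is None else loel - 1
    return noel, loel, adj

print(noel_loel((19, 20), [(18, 20), (14, 20), (8, 20)]))
```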

2.
Abstract

Spearman’s rho, a distribution-free statistic, has been suggested in the literature for testing the significance of trend in time series data. Although the use of the test based on Spearman’s rho (also known as the Daniels test) is less widespread than that based on Kendall’s tau (the Mann-Kendall test), the two tests have been shown in the literature to be equivalent for time series with independent observations. The distribution of the Mann-Kendall trend statistic for persistent data has been previously addressed in the literature. In this paper, the distribution of Spearman’s rho as a trend test statistic for persistent data is studied. Following the same procedures used for Kendall’s tau in earlier work, an exact expression for the variance of Spearman’s rho for persistent data with multivariate Gaussian dependence is derived, and a method for calculating the exact full distribution of rho for small sample sizes is also outlined. Approximations for moderate and large sample sizes are also discussed. A case study of testing the significance of trends in a group of world river flow station data using both Kendall’s tau and Spearman’s rho is presented. Both the theoretical results and those of the case study confirm the equivalence of trend testing based on Spearman’s rho and Kendall’s tau for persistent hydrologic data.
Editor Z. W. Kundzewicz; Associate editor S. Grimaldi
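A minimal sketch of the Daniels test itself: Spearman's rho between the observations and the time index. Note that the returned p-value assumes independent observations; for persistent data the variance of rho must be corrected as derived in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def daniels_trend_test(series):
    """Spearman's rho trend test: correlate values with the time index."""
    t = np.arange(len(series))
    rho, p = spearmanr(t, series)
    return rho, p

rng = np.random.default_rng(0)
print(daniels_trend_test(rng.normal(size=50) + 0.05 * np.arange(50)))
```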

3.
Abstract

A complete regional analysis of daily precipitation is carried out for the southern half of the province of Quebec, Canada. The first step of the regional estimation procedure consists of delineating homogeneous regions within the study area and testing for homogeneity within each region. The delineation of homogeneous regions is based on L-moment ratios, and a simulation-based test of statistical homogeneity is used to verify the inter-site variability. The second step deals with the identification of the regional distribution and the estimation of its parameters. The generalized extreme value (GEV) distribution was identified as an appropriate parent distribution; it has already been recommended by several previous studies for regional frequency analysis of precipitation extremes. The parameters of the GEV distribution are estimated from the regional L-CV, L-CS and the mean of annual maximum daily precipitation. The third step consists of estimating precipitation quantiles corresponding to various return periods; the final procedure allows these quantiles to be estimated at sites where no precipitation information is available. A jack-knife resampling procedure applied to the Quebec data demonstrates the robustness and efficiency of the regional estimation procedure: values of the root mean square error were below 10% for a return period of 20 years and below 20% for a return period of 100 years.
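A minimal sketch of the quantile step only: fit a GEV to one site's annual-maximum daily precipitation and evaluate the 20- and 100-year quantiles. The paper estimates the GEV parameters from regional L-moments (L-CV, L-CS); maximum likelihood on synthetic data stands in here.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Synthetic stand-in for 40 years of annual-maximum daily precipitation
annual_maxima = genextreme.rvs(-0.1, loc=50, scale=15, size=40,
                               random_state=rng)
c, loc, scale = genextreme.fit(annual_maxima)
for T in (20, 100):  # return periods in years
    q = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(f"{T}-year quantile: {q:.1f} (same units as input)")
```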

4.
Testing a dataset for exponentiality against a specific aging property constitutes an interesting problem in reliability analysis, and a wide variety of tests have been proposed in the literature. In this paper, the excess-wealth function is recalled and new asymptotic properties are studied. Using the characterization of the exponential distribution based on the excess-wealth function, a new exponentiality test is proposed. Simulations show that this new test works well for small sample sizes. The exact null distribution and the asymptotic normality of the proposed statistic are also obtained. The test and a new empirical graph based on the excess-wealth function are applied to extreme-value examples.
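A minimal sketch of the characterization the test builds on, not the test statistic itself: the empirical excess-wealth value \(W(p) = E[(X - q_p)_+]\), which for an exponential distribution equals \((1-p)\,E[X]\).

```python
import numpy as np

def excess_wealth(sample, p):
    """Empirical excess wealth W(p) = mean of (X - q_p)+, q_p the p-quantile."""
    q = np.quantile(sample, p)
    return np.mean(np.clip(sample - q, 0, None))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)
for p in (0.25, 0.5, 0.75):
    # Empirical W(p) vs the exponential reference (1 - p) * mean
    print(p, excess_wealth(x, p), (1 - p) * x.mean())
```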

5.
Goodness-of-fit tests for the spatial spectral density
Detecting and modeling spatial correlation is an important issue in spatial data analysis. In this work we extend two goodness-of-fit testing techniques for the spatial spectral density. The first approach is based on a smoothed version of the ratio between the periodogram and a parametric estimator of the spectral density. The second is a generalized likelihood ratio test statistic, based on the log-periodogram representation as the response variable in a regression model. As a particular case, we provide tests for independence. The asymptotic normal distribution of both statistics is obtained under the null hypothesis. For practical application, a resampling procedure for calibrating these tests is also given. The performance of the method is checked in a simulation study, and an application to real data is provided.
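A minimal sketch of the first ingredient of both tests, the 2-D periodogram of a gridded spatial process; the smoothing, the regression representation, and the test calibration are omitted. For white noise the spectral density is flat, so the ratio of periodogram to variance should fluctuate around 1.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 64))          # independent values on a 64x64 grid
z = z - z.mean()
n = z.size
periodogram = np.abs(np.fft.fft2(z)) ** 2 / n   # E[periodogram] = variance
ratio = periodogram / z.var()
print(ratio.mean())                    # ~1 under independence
```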

6.
A test of the occurrence probability distribution (Gaussian distribution) was carried out at selected fixed heights on electron density profiles, N(h), for selected hours around midday and midnight in representative months within 1995 and 1999/2000. The main objective of this work is to investigate the normality of the frequency distribution of the data sets around the mean value. Although the distribution is not perfectly symmetrical about the mean, the results show that the percentage of data within ±1 standard deviation of the mean is at least 69% between 100 km and the F2 peak height around midday and midnight for all selected hours in the two years investigated.
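A minimal sketch of the reported check: the fraction of observations within one standard deviation of the mean, to be compared with the Gaussian reference value of about 68.3%.

```python
import numpy as np

def coverage_within_one_sigma(values):
    """Fraction of observations within +/- 1 sample standard deviation."""
    values = np.asarray(values, float)
    mu, sigma = values.mean(), values.std(ddof=1)
    return np.mean(np.abs(values - mu) <= sigma)

rng = np.random.default_rng(0)
print(coverage_within_one_sigma(rng.normal(size=1000)))  # ~0.683
```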

7.
Exploring a valid model for the variogram of an isotropic spatial process
The variogram is one of the most important tools in the assessment of spatial variability and a crucial input for kriging. It is widely known that a variogram estimator cannot be used directly in these contexts because it generally fails to be conditionally negative semidefinite. Consequently, once the variogram is estimated, a valid family must be chosen and an appropriate model fitted. Under isotropy, this selection is usually carried out by eye, from inspection of the estimated variogram curve. In this paper, a statistical methodology is proposed to explore a valid model for the variogram. The statistic for this approach is based on quadratic forms of smoothed random variables that capture the underlying spatial variation. The distribution of the test statistic is approximated by a shifted chi-square distribution. A simulation study is carried out to check the power and size of the test, and reference bands are calculated as a complementary graphical tool. An example from the literature illustrates the proposed methodology.
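A minimal sketch of the estimation step that precedes the proposed test: the classical method-of-moments (Matheron) variogram estimator, producing the curve to which a valid parametric model is then fitted.

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Half the mean squared difference over all point pairs per lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # each pair once
    d, sq = d[iu], sq[iu]
    out = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (d >= lo) & (d < hi)
        out.append(sq[sel].mean() if sel.any() else np.nan)
    return np.array(out)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(80, 2))
values = rng.normal(size=80)
print(empirical_variogram(coords, values, bins=np.linspace(0, 5, 6)))
```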

8.
Hydrologists use the generalized Pareto (GP) distribution in peaks-over-threshold (POT) modelling of extremes. A model with similar uses is the two-parameter kappa (KAP) distribution. KAP has had fewer hydrological applications than GP, but some studies have shown it to merit wider use. The problem of choosing between GP and KAP arises quite often in frequency analyses. This study compares several methods for discriminating between these two models in order to show which method(s) can be recommended. Three specific methods are considered: the first uses the Anderson-Darling goodness-of-fit (GoF) statistic; the second uses the ratio of maximized likelihoods (closely related to the Akaike information criterion and the Bayesian information criterion); the third employs a normality transformation followed by application of the Shapiro-Wilk statistic. We show this last method to be the most recommendable, owing to its advantages at sample sizes typically encountered in hydrology. We apply the simulation results to some flood POT datasets.
EDITOR D. Koutsoyiannis; ASSOCIATE EDITOR E. Volpi
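A minimal sketch of the ratio-of-maximized-likelihood method evaluated in the paper. The two-parameter kappa distribution is not available in scipy, so a gamma distribution stands in for the competing model here; the mechanics (fit both by maximum likelihood, pick the larger maximized log-likelihood) are the same.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic POT exceedances drawn from a GP distribution
exceedances = stats.genpareto.rvs(0.2, scale=10, size=200, random_state=rng)

gp_params = stats.genpareto.fit(exceedances, floc=0)
alt_params = stats.gamma.fit(exceedances, floc=0)   # stand-in for KAP
ll_gp = stats.genpareto.logpdf(exceedances, *gp_params).sum()
ll_alt = stats.gamma.logpdf(exceedances, *alt_params).sum()
print("choose GP" if ll_gp > ll_alt else "choose alternative")
```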

9.
陈中天, 郭星, 潘华, 李金臣. Acta Seismologica Sinica (地震学报), 2016, 38(6): 898-905
Under the basic assumption that the magnitudes of characteristic earthquakes follow a normal distribution, the magnitude distribution of characteristic earthquakes is studied quantitatively. Because magnitude data for large earthquakes that recur in place are scarce, it is proposed to derive magnitude data by converting the coseismic displacements of large earthquakes. The magnitude data obtained at different observation points are then normalized so that all normalized magnitudes follow a common distribution with zero mean, from which a standard deviation σ reflecting the magnitude variability of characteristic earthquakes can be estimated. Finally, based on extensively collected coseismic displacements of 54 characteristic earthquakes in mainland China, the statistical method presented here yields a general standard deviation σ = 0.08, providing an important basis for further study of occurrence probability models for characteristic earthquakes.
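A minimal sketch of the pooling step described in the abstract. The per-site magnitude lists below are hypothetical placeholders for magnitudes inferred from coseismic displacements; each site is centered to zero mean before pooling, and the common σ is the standard deviation of the pooled values.

```python
import numpy as np

site_magnitudes = [          # hypothetical magnitudes per observation site
    [7.8, 7.9, 7.7],
    [7.2, 7.3],
    [8.0, 8.1, 7.95],
]
# Center each site at zero mean, then pool all normalized values
pooled = np.concatenate([np.asarray(m) - np.mean(m) for m in site_magnitudes])
sigma = pooled.std(ddof=1)
print(f"pooled sigma = {sigma:.3f}")
```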

10.
The uphole method is a field seismic test that uses receivers on the ground surface and an underground source. A modified form of the uphole method is introduced to obtain the shear wave velocity (VS) profile of a site efficiently. The method is called the standard penetration test (SPT)-uphole method because it uses the impact energy of the split-spoon sampler in the SPT as the source. Since the SPT-uphole method can be performed simultaneously with the SPT, it is economical and less labor-intensive than the original uphole methods, which use small explosives or a mechanical source. Field testing and interpretation procedures for the proposed method are described. To obtain reliable travel time information for the shear wave, picking the first peak of the shear wave using two-component geophones is recommended. The procedure was verified through a numerical study using the finite element method (FEM). Finally, the SPT-uphole method was performed at several sites, and its field applicability was verified by comparing the VS profiles determined by the SPT-uphole method with profiles determined by the downhole method, the spectral analysis of surface waves (SASW) method, and SPT-N values.
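A minimal sketch of the basic data reduction for uphole-type measurements, assuming travel times have already been picked at successive source depths. Real processing also corrects for the slant path between the downhole source and the surface receivers.

```python
import numpy as np

def interval_vs(depths_m, travel_times_s):
    """Interval shear-wave velocities between successive source depths."""
    depths = np.asarray(depths_m, float)
    t = np.asarray(travel_times_s, float)
    return np.diff(depths) / np.diff(t)   # m/s per depth interval

# Hypothetical picks: source depths and first-peak S-wave travel times
print(interval_vs([2, 4, 6, 8], [0.010, 0.018, 0.025, 0.031]))
```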

11.
Rainfall intensity–duration–frequency (IDF) relationships describe rainfall intensity as a function of duration and return period; they are significant for water resources planning as well as for the design of hydraulic structures. In this study, the two-parameter lognormal (LN2) and Gumbel distributions are used as parent distribution functions. Derivation of the IDF relationship by this approach is quite simple, because it only requires an appropriate function for the mean of annual maximum rainfall intensity as a function of rainfall duration. It is shown that the monotonic temporal trend in mean rainfall intensity can be described successfully by a parametric function that combines the parameters of the quantile function a(T) and the duration function b(d) of the separable IDF relationship. In the case study of the Aegean Region (Turkey), the IDF relationships derived through this simple generalization procedure (SGP) are as successful as those from the well-known robust estimation procedure (REP), which is based on minimizing the nonparametric Kruskal–Wallis test statistic with respect to the parameters θ and η of the duration function. Because the approach proposed herein is based on lower-order sample statistics, risks and uncertainties arising from sampling errors in higher-order sample statistics are significantly reduced. The authors recommend establishing separable IDF relationships by the SGP for a statistically favorable two-parameter parent distribution: it uses the same assumptions as the REP, it additionally preserves the observed temporal trend in the mean, and it is easy to handle analytically and requires considerably less computational effort. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献
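A minimal sketch of a separable IDF relationship of the form i(d, T) = a(T) / b(d), with a Gumbel parent for a(T) and b(d) = (d + θ)^η as in the duration function the abstract describes. All parameter values below are illustrative, not those of the Aegean Region study.

```python
import numpy as np

def idf_intensity(d_minutes, T_years, mu=20.0, beta=6.0, theta=10.0, eta=0.75):
    """Separable IDF: Gumbel quantile a(T) divided by b(d) = (d+theta)**eta."""
    a = mu - beta * np.log(-np.log(1 - 1 / T_years))   # Gumbel quantile a(T)
    return a / (d_minutes + theta) ** eta

print(idf_intensity(60, 100))   # illustrative 100-year, 1-hour intensity
```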

12.
13.
The Kolmogorov–Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other, or whether an underlying probability distribution differs from a hypothesized one. Application of the test requires that the sample be unbiased and the outcomes independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographic elevation. A generalized form of the bootstrap method is used here to model the distribution of the statistic D of the Kolmogorov–Smirnov test. The innovation is in the resampling: in the traditional formulation of the bootstrap, resampling is done by drawing from the empirical sample with replacement, presuming independence; the generalization consists of preparing resamples with the same spatial correlation as the empirical sample. This is accomplished by reading the values of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested on two empirical samples taken from an exhaustive sample closely following a lognormal distribution: one a regular, unbiased sample, the other a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size.
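A minimal sketch of a bootstrap null distribution for the one-sample K-S statistic D. The paper's innovation is drawing resamples that reproduce the sample's spatial correlation (via simulated-annealing realizations); plain iid resampling is used here, which is exactly the baseline the generalized method improves on.

```python
import numpy as np
from scipy.stats import kstest, lognorm

rng = np.random.default_rng(0)
sample = lognorm.rvs(0.8, size=100, random_state=rng)

shape, loc, scale = lognorm.fit(sample, floc=0)
fitted_cdf = lognorm(shape, loc, scale).cdf
d_obs = kstest(sample, fitted_cdf).statistic

d_boot = []
for _ in range(500):
    resample = rng.choice(sample, size=sample.size, replace=True)  # iid bootstrap
    d_boot.append(kstest(resample, fitted_cdf).statistic)
p_value = np.mean(np.asarray(d_boot) >= d_obs)
print(d_obs, p_value)
```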

14.
A distribution free plotting position
Many plotting position formulae have been proposed over the past few decades. These formulae are derived under specific assumptions about the probability distribution. Because in practice the data are often plotted precisely in order to determine their probability distribution, this causes difficulty and confusion in selecting a plotting position formula. The objective of this study is to find a plotting position formula that is distribution free. The plotting position formulae corresponding to the order-statistic mean, mode and median are investigated: the order-statistic mean, mode and median values are determined by numerical integration and differentiation, and the corresponding plotting position formulae are obtained by regression analysis. The results indicate that the plotting position formulae for the order-statistic mean and mode both vary with the distribution of the data, but the formula for the order-statistic median is distribution free. The distribution-free plotting position formula for the order-statistic median proposed in this study is (i−0.326)/(n+0.348).
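A minimal sketch applying the formula proposed in the paper, p_i = (i − 0.326)/(n + 0.348), to generate distribution-free plotting positions for a ranked sample.

```python
import numpy as np

def median_plotting_positions(n):
    """Plotting positions for the order-statistic median: (i-0.326)/(n+0.348)."""
    i = np.arange(1, n + 1)
    return (i - 0.326) / (n + 0.348)

# Usage: pair the sorted data with these probabilities on probability paper.
print(median_plotting_positions(10))
```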

15.
Efficient testing of earthquake forecasting models
Computationally efficient alternatives are proposed to the likelihood-based tests employed by the Collaboratory for the Study of Earthquake Predictability for assessing the performance of earthquake likelihood models in the earthquake forecast testing centers. For the conditional L-test, which tests the consistency of the earthquake catalogue with a model, an exact test using convolutions of distributions is available when the number of earthquakes in the test period is small, and the central limit theorem provides an approximate test when the number is large. Similar methods are available for the R-test, which compares the likelihoods of two competing models. However, the R-test, like the N-test and L-test, is fundamentally a test of the consistency of data with a model. We propose an alternative test, based on the classical paired t-test, to compare the likelihoods of two models more directly. Although approximate and predicated on a normality assumption, this new T-test is not computer-intensive, is easier to interpret than the R-test, and becomes increasingly dependable as the number of earthquakes increases.
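A minimal sketch of the proposed T-test: a classical paired t-test on per-earthquake log-likelihood scores under two competing models. The arrays below are hypothetical placeholders for scores a testing center would compute for each observed event.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
ll_model_a = rng.normal(-2.0, 0.5, size=120)   # hypothetical per-event scores
ll_model_b = rng.normal(-2.1, 0.5, size=120)

t_stat, p = ttest_rel(ll_model_a, ll_model_b)
print(t_stat, p)   # positive t favors model A
```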

16.
A procedure is presented for developing a rainfall intensity–duration–frequency (IDF) relationship that is consistent with bivariate normal distribution modeling. The Box–Cox transformation was used to derive the relation, and two methods of determining the parameters of this transformation were evaluated. To assess parameter uncertainty, a confidence interval was constructed and verified with the non-parametric bootstrap method. Additionally, the effect of sample size on the bivariate normality assumption was examined. Case studies based on data from major gauge stations in Korea were performed. The results show that use of the bivariate normal model as an IDF relationship is particularly recommended when the available data size is small.
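A minimal sketch of the transformation step: Box-Cox each margin (intensity and duration) toward normality before fitting the bivariate normal model. The data below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(0)
intensity = rng.lognormal(3.0, 0.4, size=60)   # hypothetical mm/h values
duration = rng.lognormal(1.0, 0.6, size=60)    # hypothetical hours

ti, lam_i = boxcox(intensity)   # transformed data and fitted lambda
td, lam_d = boxcox(duration)
corr = np.corrcoef(ti, td)[0, 1]   # correlation of the bivariate normal
print(lam_i, lam_d, corr)
```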

17.
This study models the joint probability distribution of periodic hydrologic data using meta-elliptical copulas. Monthly precipitation data from a gauging station (410120) in Texas, US, were used to illustrate parameter estimation and goodness-of-fit for univariate drought distributions using the chi-square test, Kolmogorov–Smirnov test, Cramér–von Mises statistic, Anderson–Darling statistic, modified weighted Watson statistic, and Liao and Shimokawa statistic. Pearson's classical correlation coefficient \(r_n\), Spearman's \(\rho_n\), Kendall's \(\tau\), chi-plots, and K-plots were employed to assess the dependence of the drought variables. Several meta-elliptical copulas and the Gumbel–Hougaard, Ali–Mikhail–Haq, Frank and Clayton copulas were tested to determine the best-fit copula. Based on the root mean square error and the Akaike information criterion, the meta-Gaussian and t copulas gave a better fit. A bootstrap version of the goodness-of-fit test based on Rosenblatt's transformation was employed for the meta-Gaussian and t copulas; neither could be rejected at the given significance level. The meta-Gaussian copula was therefore employed to model the dependence, and the results were found satisfactory.
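A minimal sketch of fitting a meta-Gaussian copula to two dependent variables: convert each margin to pseudo-observations (normalized ranks), map them through the standard normal quantile, and estimate the correlation. The data below are synthetic placeholders for, e.g., drought duration and severity.

```python
import numpy as np
from scipy.stats import norm, kendalltau

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 1.5, size=200)
y = 0.6 * x + rng.gamma(1.5, 1.0, size=200)

def pseudo_obs(v):
    """Normalized ranks in (0, 1), the copula's pseudo-observations."""
    ranks = np.argsort(np.argsort(v)) + 1
    return ranks / (len(v) + 1)

zx, zy = norm.ppf(pseudo_obs(x)), norm.ppf(pseudo_obs(y))
rho = np.corrcoef(zx, zy)[0, 1]          # Gaussian-copula correlation
print(rho, kendalltau(x, y)[0])          # compare with Kendall's tau
```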

18.
Abstract

Two entities of importance in hydrological droughts, viz. the longest duration, \(L_T\), and the largest magnitude, \(M_T\) (in standardized terms), over a desired time period T (which could also correspond to a specific return period), have been analysed for weekly flow sequences of Canadian rivers. The analysis is carried out in terms of week-by-week standardized values of the flow sequences, designated as SHI (standardized hydrological index). The SHI sequence is truncated at the median level for identification and evaluation of the expected values of the above random variables, \(E(L_T)\) and \(E(M_T)\). SHI sequences tended to be strongly autocorrelated and are modelled as autoregressive of order 1 or order 2, or as autoregressive moving average of order (1,1). A drought model built on the theorem of extremes of random numbers of random variables was found less satisfactory for predicting \(E(L_T)\) and \(E(M_T)\) on a weekly basis; however, the model worked well on a monthly (weakly Markovian) and an annual (random) basis. An alternative procedure based on a second-order Markov chain model provided satisfactory prediction of \(E(L_T)\). Parameters such as the mean, standard deviation (or coefficient of variation), and lag-1 serial correlation of the original weekly flow sequences (obeying a gamma probability distribution function) were used to estimate the simple and first-order drought probabilities through closed-form equations. Second-order probabilities were estimated from the original flow sequences as well as the SHI sequences using a counting method. \(E(M_T)\) can be predicted as the product of the drought intensity (which obeys a truncated normal distribution) and \(E(L_T)\) (which is based on a mixture of first- and second-order Markov chains).

Citation Sharma, T. C. & Panu, U. S. (2010) Analytical procedures for weekly hydrological droughts: a case of Canadian rivers. Hydrol. Sci. J. 55(1), 79–92.
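A minimal sketch of the drought-duration quantity itself (without the paper's Markov-chain prediction machinery): standardize the weekly flows into an SHI series, truncate at the median, and find the longest run below the truncation level.

```python
import numpy as np

def longest_drought_run(flows):
    """Longest run of weeks with SHI below the median truncation level."""
    flows = np.asarray(flows, float)
    shi = (flows - flows.mean()) / flows.std()   # standardized hydrological index
    below = shi < np.median(shi)
    longest = current = 0
    for b in below:
        current = current + 1 if b else 0
        longest = max(longest, current)
    return longest

rng = np.random.default_rng(0)
print(longest_drought_run(rng.gamma(2.0, 5.0, size=520)))  # ~10 years of weeks
```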

19.
This paper provides a generic equation for the evaluation of the maximum earthquake magnitude \(m_{\max}\) for a given seismogenic zone or entire region. The equation is capable of generating solutions in different forms, depending on the assumptions of the statistical distribution model and/or the available information regarding past seismicity. It includes the cases (i) when earthquake magnitudes are distributed according to the doubly-truncated Gutenberg-Richter relation, (ii) when the empirical magnitude distribution deviates moderately from the Gutenberg-Richter relation, and (iii) when no specific type of magnitude distribution is assumed. Both synthetic, Monte Carlo simulated seismic event catalogues and actual data from Southern California are used to demonstrate the procedures for evaluating \(m_{\max}\). The three estimates of \(m_{\max}\) for Southern California, obtained by the three procedures mentioned above, are respectively 8.32 ± 0.43, 8.31 ± 0.42 and 8.34 ± 0.45. All three estimates are nearly identical, although higher than the value 7.99 obtained by Field et al. (1999). In general, since the third procedure is non-parametric and does not require specification of the functional form of the magnitude distribution, its estimate of \(m_{\max}\) is considered more reliable than the other two, which are based on the Gutenberg-Richter relation.
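A minimal one-step sketch of the generic estimator behind case (i): \(m_{\max} \approx m_{\max}^{obs} + \int_{m_{\min}}^{m_{\max}^{obs}} [F(m)]^n \, dm\) with a doubly-truncated Gutenberg-Richter CDF F (the full procedure iterates on the truncation point). The b-value and inputs below are illustrative.

```python
import numpy as np

def mmax_gutenberg_richter(m_obs_max, m_min, n_events, b=1.0, grid=4000):
    """One-step m_max estimate under a truncated Gutenberg-Richter CDF."""
    beta = b * np.log(10.0)
    m = np.linspace(m_min, m_obs_max, grid)
    F = (1 - np.exp(-beta * (m - m_min))) / \
        (1 - np.exp(-beta * (m_obs_max - m_min)))
    dm = m[1] - m[0]
    return m_obs_max + np.sum(F ** n_events) * dm   # Riemann-sum integral

print(mmax_gutenberg_richter(m_obs_max=7.9, m_min=4.0, n_events=500))
```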

20.
Nermin Sarlak. Hydrological Processes, 2008, 22(17): 3403-3409
Classical autoregressive (AR) models have been used for forecasting streamflow data in spite of restrictive assumptions, such as the normality assumption for the innovations. The main reason for making this assumption is the difficulty of finding model parameters for non-normal distribution functions. However, the modified maximum likelihood (MML) procedure used for estimating autoregressive model parameters accommodates a non-normally distributed residual series. The aim of this study is to compare the performance of an AR(1) model with asymmetric innovations with that of the classical autoregressive model for annual hydrological data. The models considered are applied to annual streamflow data obtained from two streamflow gauging stations in the Kızılırmak Basin, Turkey. Copyright © 2008 John Wiley & Sons, Ltd.
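A minimal sketch of the classical baseline model only: a moment-based AR(1) fit using the lag-1 autocorrelation. The paper's MML procedure for asymmetric innovations is more involved and is not reproduced here.

```python
import numpy as np

def fit_ar1(x):
    """Fit x_t - mu = phi * (x_{t-1} - mu) + e_t by moments."""
    x = np.asarray(x, float)
    mu = x.mean()
    d = x - mu
    phi = np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1])   # lag-1 correlation
    resid = d[1:] - phi * d[:-1]
    return mu, phi, resid.std(ddof=1)

rng = np.random.default_rng(0)
series = 100 + rng.normal(size=80).cumsum() * 0.2   # synthetic annual flows
mu, phi, sigma_e = fit_ar1(series)
# One-step-ahead forecast from the last observation:
print(mu + phi * (series[-1] - mu))
```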
