Similar Articles
20 similar articles found (search time: 348 ms)
1.
This paper presents a reliability analysis method for concrete containment vessels of nuclear power plants under dead load and earthquake action. First, based on the semi-analytical finite ring element method and random mode-shape analysis, the conditional limit-state probability of the containment for given system parameters is obtained from the first-passage probability. Then, based on this conditional probability, the limit-state probability of the containment accounting for uncertainty in the system parameters is solved using an extended first-order second-moment method. A case study of a prestressed concrete containment shows that uncertainty in the system parameters has a significant influence on the reliability of the containment.
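The first-order second-moment step described above can be illustrated with a minimal sketch. The limit state g = R - S and the capacity/demand statistics below are illustrative assumptions, not values from the paper:

```python
import math

def fosm_reliability(mu_R, sig_R, mu_S, sig_S):
    """First-order second-moment reliability for the limit state g = R - S.

    Returns the reliability index beta and the corresponding
    failure probability under a normal approximation.
    """
    mu_g = mu_R - mu_S
    sig_g = math.hypot(sig_R, sig_S)  # R and S assumed independent
    beta = mu_g / sig_g
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta)
    return beta, pf

# Illustrative numbers only: capacity R versus seismic demand S.
beta, pf = fosm_reliability(mu_R=10.0, sig_R=1.0, mu_S=6.0, sig_S=1.5)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```

In the paper's setting the conditional limit-state probability from the first-passage analysis would replace this simple g = R - S; the sketch only shows how parameter means and standard deviations propagate to a failure probability.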

2.
Following a companion article, ground motion acceleration time histories during earthquakes can be described as realizations of non-stationary stochastic processes with evolutionary frequency content and instantaneous intensity. The parameters characterizing those processes can be handled as uncertain variables with probabilistic distributions that depend on the magnitude of each seismic event and the corresponding source-to-site distance. Accordingly, the generation of finite samples of artificial ground motion acceleration time histories for earthquakes of given intensities is formulated as a two-stage Monte Carlo simulation process. The first stage includes the simulation of samples of sets of the parameters of the stochastic process models of earthquake ground motion. The second stage includes the simulation of the time histories themselves, given the parameters of the associated stochastic process model. In order to account for the dependence of the probability distribution of the latter parameters on magnitude and source-to-site distance, the joint conditional probability distribution of these variables must be obtained for a given value of the ground motion intensity. This is achieved by resorting to Bayes' theorem about the probabilities of alternate assumptions.

Two options for the conditional simulation of ground motion time histories are presented. The more refined option makes use of all the information about the conditional distribution of magnitude and distance for the purpose of simulating values of the statistical parameters of the ground motion stochastic process models. The second option considers all probabilities concentrated at the most likely combination of magnitude and distance for each of the seismic sources that contribute significantly to the seismic hazard at the site of interest.
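The two-stage Monte Carlo procedure can be sketched as follows. The magnitude/distance marginals, the amplitude scaling law, and the envelope shape are toy assumptions standing in for the paper's evolutionary-spectrum models:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_event():
    """Stage 1: draw magnitude M and source-to-site distance R (toy marginals)."""
    M = rng.uniform(5.0, 7.5)
    R = rng.uniform(10.0, 100.0)  # km
    return M, R

def simulate_history(M, R, n=2000, dt=0.01):
    """Stage 2: amplitude-modulated white noise whose intensity depends
    on (M, R); a stand-in for a full evolutionary-spectrum model."""
    t = np.arange(n) * dt
    amp = 10.0 ** (0.5 * M - 2.0) / (R + 10.0)    # toy scaling law
    envelope = (t / 5.0) * np.exp(1.0 - t / 5.0)  # rises, then decays
    return amp * envelope * rng.standard_normal(n)

# Two-stage Monte Carlo: uncertain parameters first, then each record.
records = [simulate_history(*sample_event()) for _ in range(50)]
print(len(records), records[0].shape)
```

Conditioning on a given intensity, as in the article, would replace `sample_event` with draws from the joint conditional distribution of (M, R) obtained via Bayes' theorem.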

3.
4.
The paper deals with probability estimates of temperature extremes (annual temperature maxima and heat waves) in the Czech Republic. Two statistical methods of probability estimation are compared: one based on stochastic modelling of the time series of daily maximum temperature (TMAX) using the first-order autoregressive (AR(1)) model, the other consisting in fitting an extreme value distribution to the sample of annual temperature peaks. The AR(1) model is able to reproduce the main characteristics of heat waves, though the estimated probabilities should be treated as upper limits because of deficiencies in simulating the temperature variability inherent to the AR(1) model. Theoretical extreme value distributions do not yield good results when applied to maximum annual lengths of heat waves and periods of tropical days (TMAX ≥ 30°C), but fitting them is the best method for estimating the probability and recurrence time of annual one-day temperature extremes. However, there are some difficulties in the application: the use of the two-parameter Gumbel distribution and the three-parameter generalized extreme value (GEV) distribution may lead to different results, particularly for long return periods. The resulting values also depend on the chosen procedure of parameter estimation. Based on our findings, testing the shape parameter of the GEV distribution and using the L-moments technique for parameter estimation may be recommended. The application of the appropriate statistical tools indicates that the heat wave, and particularly the long period of consecutive tropical days, in 1994 was probably a rarer event than the record-breaking temperatures in July 1983 exceeding 40°C. An improvement of the probability estimate of the 1994 heat wave may be expected from a more sophisticated model of the temperature series.
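A minimal version of the AR(1) approach described above can be sketched as follows; the mean, autocorrelation, and noise level are illustrative placeholders, not coefficients fitted to Czech data:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_tmax(n_days, mean=24.0, phi=0.7, sigma=3.0):
    """AR(1) model for daily maximum temperature anomalies:
    x[t] = phi * x[t-1] + e[t],  TMAX = mean + x."""
    x = np.zeros(n_days)
    eps = rng.normal(0.0, sigma, n_days)
    for t in range(1, n_days):
        x[t] = phi * x[t - 1] + eps[t]
    return mean + x

def longest_tropical_spell(tmax, threshold=30.0):
    """Longest run of consecutive days with TMAX >= threshold."""
    best = run = 0
    for v in tmax:
        run = run + 1 if v >= threshold else 0
        best = max(best, run)
    return best

# Monte Carlo estimate of P(longest tropical spell >= 5 days) in a 92-day summer.
spells = [longest_tropical_spell(simulate_tmax(92)) for _ in range(2000)]
print(sum(s >= 5 for s in spells) / len(spells))
```

As the abstract notes, probabilities obtained this way should be read as upper limits, since the AR(1) model understates persistent temperature variability.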

5.
A continuous Soil Conservation Service (SCS) curve number (CN) method that uses time-varied SCS CN values was developed from the original SCS CN method with a revised soil-moisture accounting approach, in order to estimate run-off depth for long-term discontinuous storm events. The method was applied to spatially distributed long-term hydrologic simulation of rainfall-run-off flow, with an underlying assumption about its spatial variability, using a geographic information systems-based spatially distributed Clark's unit hydrograph method (Distributed-Clark, a hybrid hydrologic model): a simple, few-parameter run-off routing method that takes spatiotemporally varied run-off depth as input and adopts conditional unit hydrographs, according to run-off precipitation depth, for direct run-off flow convolution. Case studies of spatially distributed long-term (total of 6 years) hydrologic simulation for four river basins using daily NEXRAD quantitative precipitation estimations demonstrate overall performances of Nash–Sutcliffe efficiency (ENS) 0.62, coefficient of determination (R2) 0.64, and percent bias 0.33% in direct run-off, and ENS 0.71, R2 0.72, and percent bias 0.15% in total streamflow, for model results compared against observed streamflow. These results show a better fit (improvement in ENS of 42.0% and R2 of 33.3% for total streamflow) than the same model using spatially averaged gauged rainfall. Incorporating logic for conditional initial abstraction into the continuous SCS CN method, which can accommodate initial run-off loss amounts based on previous rainfall, slightly enhances model simulation performance; both ENS and R2 increased by 1.4% for total streamflow in a 4-year calibration period.
The continuous SCS CN method-based hybrid hydrologic model presented in this study is therefore potentially significant for improving long-term hydrologic applications of spatially distributed rainfall-run-off generation and routing, as a relatively simple modelling approach suited to more reliable gridded quantitative precipitation estimations.
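The core SCS CN run-off relation underlying such methods is compact enough to state directly. These are the standard SCS equations; the curve number and rainfall depth below are illustrative:

```python
def scs_runoff_depth(P, CN, ia_ratio=0.2):
    """SCS curve number run-off depth (inches).

    S  = 1000/CN - 10                (potential maximum retention)
    Ia = ia_ratio * S                (initial abstraction)
    Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
    """
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# 3 inches of rain on a CN = 80 catchment.
print(round(scs_runoff_depth(3.0, 80.0), 3))  # -> 1.25
```

The continuous variant described in the abstract makes CN (and hence S) time-varying via soil-moisture accounting, and conditions Ia on previous rainfall rather than fixing `ia_ratio`.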

6.
Hydrological scientists develop perceptual models of the catchments they study, using field measurements and observations to build an understanding of the dominant processes controlling the hydrological response. However, conceptual and numerical models used to simulate catchment behaviour often fail to take advantage of this knowledge. It is common instead to use a pre-defined model structure which can only be fitted to the catchment via parameter calibration. In this article, we suggest an alternative approach where different sources of field data are used to build a synthesis of dominant hydrological processes and hence provide recommendations for representing those processes in a time-stepping simulation model. Using analysis of precipitation, flow and soil moisture data, recommendations are made for a comprehensive set of modelling decisions, including Evapotranspiration (ET) parameterization, vertical drainage threshold and behaviour, depth and water holding capacity of the active soil zone, unsaturated and saturated zone model architecture and deep groundwater flow behaviour. The second article in this two-part series implements those recommendations and tests the capability of different model sub-components to represent the observed hydrological processes. Copyright © 2010 John Wiley & Sons, Ltd.

7.
In this paper, we apply the approach of conditional nonlinear optimal perturbation related to the parameter (CNOP-P) to study parameter uncertainties that lead to the stability (maintenance or degradation) of a grassland ecosystem. The maintenance of the grassland ecosystem refers to the unchanged or increased quantity of living biomass and wilted biomass in the ecosystem, and the degradation of the grassland ecosystem refers to the reduction in the quantity of living biomass and wilted biomass or its transformation into a desert ecosystem. Based on a theoretical five-variable grassland ecosystem model, 32 physical model parameters are selected for numerical experiments. Two types of parameter uncertainties could be obtained. The first type of parameter uncertainty is the linear combination of each parameter uncertainty that is computed using the CNOP-P method. The second type is the parameter uncertainty from multi-parameter optimization using the CNOP-P method. The results show that for the 32 model parameters, at a given optimization time and with greater parameter uncertainty, the patterns of the two types of parameter uncertainties are different. The different patterns represent physical processes of soil wetness. This implies that the variations in soil wetness (surface layer and root zone) are the primary reasons for uncertainty in the maintenance or degradation of grassland ecosystems, especially for the soil moisture of the surface layer. The above results show that the CNOP-P method is a useful tool for discussing the abovementioned problems.

8.
C. Dobler, F. Pappenberger. Hydrological Processes, 2013, 27(26): 3922-3940
The increasing complexity of hydrological models results in a large number of parameters to be estimated. To better understand how these complex models work, efficient screening methods are required to identify the most important parameters. This is of particular importance for models that are used within an operational real-time forecasting chain such as HQsim. The objectives of this investigation are to (i) identify the most sensitive parameters of the complex HQsim model applied in the Alpine Lech catchment and (ii) compare model parameter sensitivity rankings attained from three global sensitivity analysis techniques. The techniques presented are (i) regional sensitivity analysis, (ii) Morris analysis and (iii) state-dependent parameter modelling. The results indicate that parameters affecting snow melt as well as processes in the unsaturated soil zone reveal high significance in the analysed catchment. The snow melt parameters show clear temporal patterns in their sensitivity, whereas most of the parameters affecting processes in the unsaturated soil zone do not vary in importance across the year. Overall, the maximum degree-day factor (meltfunc_max) has been identified to play a key role within the HQsim model. Although the parameter sensitivity rankings are equivalent between methods for a number of parameters, differing results were obtained for several key parameters. An uncertainty analysis demonstrates that a parameter ranking attained from only one method is subject to large uncertainty. Copyright © 2012 John Wiley & Sons, Ltd.
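As a sketch of the screening idea, a simplified radial variant of the Morris elementary-effects method (not the exact implementation used in the paper) can be written as:

```python
import numpy as np

rng = np.random.default_rng(2)

def morris_mu_star(model, n_params, n_traj=50, delta=0.1):
    """Morris screening: mean absolute elementary effect per parameter,
    using radial one-at-a-time perturbations from random base points."""
    effects = np.zeros((n_traj, n_params))
    for r in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)
        y0 = model(x)
        for i in range(n_params):
            x2 = x.copy()
            x2[i] += delta
            effects[r, i] = abs(model(x2) - y0) / delta
    return effects.mean(axis=0)

# Toy model: parameter 0 matters far more than parameters 1 and 2.
model = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.1 * x[2]
print(morris_mu_star(model, 3))  # -> approximately [10.  1.  0.1]
```

A large mean absolute elementary effect flags a parameter as influential; for the linear toy model above the effects are exact, while for a real model like HQsim they vary with the base point.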

9.
Prediction of the peak break‐up water level, which is the maximum instantaneous stage during ice break‐up, is desirable to allow effective ice flood mitigation, but traditional hydrologic flood routing techniques are not efficient in addressing the large uncertainties caused by numerous factors driving the peak break‐up water level. This research provides a probability prediction framework based on vine copulas. The predictor variables of the peak break‐up water level are first chosen, the pair copula structure is then constructed by using vine copulas, the conditional density distribution function is derived to perform a probability prediction, and the peak break‐up water level value can then be estimated from the conditional density distribution function given the conditional probability and fixed values of the predictor variables. This approach is exemplified using data from 1957 to 2005 for the Toudaoguai and Sanhuhekou stations, which are located in the Inner Mongolia Reach of the Yellow River, and the calibration and validation periods are divided at 1986. The mean curve of the peak break‐up water level estimated from the conditional distribution function can capture the tendency of the observed series at both the Toudaoguai and Sanhuhekou stations, and more than 90% of the observed values fall within the 90% prediction uncertainty bands, which are approximately twice the standard deviation of the observed series. The probability prediction results for the validation period are consistent with those for the calibration period when the nonstationarity of the marginal distributions for the Sanhuhekou station are considered. Compared with multiple linear regression results, the uncertainty bands from the conditional distribution function are much narrower; moreover, the conditional distribution function is more capable of addressing the nonstationarity of predictor variables, and the conclusions are confirmed by jackknife analysis. 
Scenario predictions for cases in which the peak break-up water level is likely to be higher than the bankfull water level can also be conducted based on the conditional distribution function, with good performance for the two stations.

10.
Petroleum hydrocarbon vapors biodegrade aerobically in the subsurface. Depth profiles of petroleum hydrocarbon vapor and oxygen concentrations from seven locations in sandy and clay soils across four states of Australia are summarized. The data are evaluated to support a simple model of biodegradation that can be used to assess hydrocarbon vapors migrating toward built environments. Multilevel samplers and probes that allow near-continuous monitoring of oxygen and total volatile organic compounds (VOCs) were used to determine concentration depth profiles and changes over time. Collation of all data across all sites showed distinct separation of oxygen from hydrocarbon vapors, and that most oxygen and hydrocarbon concentration profiles were linear or near linear with depth. The low detection limit of the oxygen probe data, and the fact that it is an in situ measurement, strengthened the case that little or no overlap of oxygen and hydrocarbon vapor concentration profiles occurred, and that oxygen and hydrocarbon vapors were largely coincident only near the location where they both decreased to zero. First-order biodegradation rates determined from all depth profiles were generally lower than other published rates. With lower biodegradation rates, overlapping of depth profiles might be expected, and yet such overlapping was not observed. A model of rapid (instantaneous) reaction of oxygen and hydrocarbon vapors, relative to diffusive transport processes, is shown to explain the important aspects of the 13 depth profiles. The model is based simply on the ratio of the diffusion coefficients of oxygen and hydrocarbon vapors, the ratio of the maximum concentrations of oxygen and hydrocarbon vapors, the depth to the maximum hydrocarbon source concentration, and the stoichiometry coefficient.
Whilst simple, the model offers the potential to incorporate aerobic biodegradation into an oxygen-limited flux-reduction approach for vapor intrusion assessments of petroleum hydrocarbon compounds.
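Under the instantaneous-reaction assumption, the depth at which both linear profiles reach zero follows from a steady-state flux balance built from exactly the ingredients the abstract lists. The formula below is a reconstruction of that balance, and the concentrations, diffusion coefficients, and stoichiometric ratio are illustrative:

```python
def reaction_plane_depth(L, D_o, D_h, C_o, C_h, gamma):
    """Depth z* where the linear O2 profile (from the surface) and the
    linear hydrocarbon-vapor profile (from the source at depth L) both
    reach zero, assuming instantaneous reaction and steady diffusion.

    Flux balance: D_o*C_o/z* = gamma * D_h*C_h/(L - z*)
    =>  z* = L / (1 + gamma*D_h*C_h / (D_o*C_o))
    """
    r = (gamma * D_h * C_h) / (D_o * C_o)
    return L / (1.0 + r)

# Illustrative values: 3 m to source; O2 diffuses faster and is abundant.
z = reaction_plane_depth(L=3.0, D_o=2.0e-6, D_h=1.0e-6,
                         C_o=280.0, C_h=100.0, gamma=3.5)
print(round(z, 2))  # -> 1.85
```

The separation of the two profiles observed in the field data corresponds to z* sitting strictly between the surface and the source, with overlap confined to the neighbourhood of z*.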

11.
Being a non-linear method based on a rigorous formalism and an efficient processing of various information sources, the Bayesian maximum entropy (BME) approach has proven to be a very powerful method in the context of continuous spatial random fields, providing much more satisfactory estimates than those obtained from traditional linear geostatistics (i.e., the various kriging techniques). This paper aims at presenting an extension of the BME formalism to the context of categorical spatial random fields. In the first part of the paper, the indicator kriging and cokriging methods are briefly presented and discussed, with special emphasis on their inherent limitations from both the theoretical and practical points of view. The second part presents the theoretical developments of the BME approach for the case of categorical variables. The three-stage procedure is explained, and the formulations for obtaining prior joint distributions and computing posterior conditional distributions are given for various typical cases. The last part of the paper consists of a simulation study assessing the performance of BME against the traditional indicator (co)kriging techniques. The results of these simulations highlight the theoretical limitations of the indicator approach (negative probability estimates, probability distributions that do not sum to one, etc.) as well as the much better performance of the BME approach, whose estimates are very close to the theoretical conditional probabilities that can be computed according to the stated simulation hypotheses.

12.
A depth-averaged linearized meander evolution model was calibrated and tested using the field data collected at the Quinn River in the Black Rock Desert, Nevada. Two approaches used to test the model were: (1) simulating meander evolution and comparing the results with the observed 38 year migration pattern; and (2) fitting the model parameters to present bank asymmetry (the ratio of the maximum bank gradients on opposite sides of the channel). The data required as input were collected in the field during a high flow in May 2011 and from aerial photographs and LiDAR data. Both approaches yielded similar results for the best fit parameter values. The bank asymmetry analysis showed that the bank asymmetry and the velocity perturbation have high correlation at close to zero spatial lag while the maximum correlation between the bank asymmetry and maximum bend curvature is offset by about 25 m. The model sufficiently replicated 38 years of channel migration, with a few locations significantly under- or over-predicted. Inadequacies of the flow model and/or variation in bank properties unaccounted for are most likely the causes for these discrepancies. Flow through the Quinn River was also simulated by a more general 3D model. The downstream pattern of near-bank shear stresses simulated by the 3D model is nearly identical to those resulting from the linearized flow model. Topographic profiles across interior bends are essentially invariant over a wide range of migration rates, suggesting that the traditional formulation that cut bank erosion processes govern migration rates is appropriate for the Quinn River. Copyright © 2014 John Wiley & Sons, Ltd.

13.
Satya P. Ojha. Hydrological Processes, 2014, 28(18): 4829-4842
This study presents an analysis of velocity fluctuations to describe the conditional statistics of the Reynolds shear stress in flow over two-dimensional dunes in the presence of surface waves of varying frequency. The flow velocity measurements over the dunes are made using a 16-MHz 3D acoustic Doppler velocimeter. The joint probability distributions of the normalized stream-wise and vertical velocity fluctuations at different vertical locations are calculated in the trough region of a selected dune in the quasi-steady region of the flow. Third-order moments of the stream-wise and vertical velocity components over one dune length are also calculated throughout the flow depth to understand the effect of surface waves on the relative contributions to the Reynolds shear stress from the four quadrant events. The structure of instantaneous Reynolds stresses is analysed using the quadrant analysis technique. It is shown that the contributions of second and fourth quadrant events to the Reynolds shear stress increase with the frequency of surface waves. In fact, the largest contribution to turbulent stresses comes from the second quadrant. The cumulant discard method is applied to describe the statistical properties of the covariance term u′w′. Conditional statistics and conditional sampling are used to compare the experimental and theoretical relative contributions to the Reynolds shear stress from the four quadrant events. Copyright © 2013 John Wiley & Sons, Ltd.
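The quadrant decomposition itself is straightforward to sketch. The synthetic anticorrelated series below is a stand-in for measured velocity fluctuations, not the ADV data of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def quadrant_contributions(u, w):
    """Fractional contribution of each quadrant of (u', w') to the
    total covariance sum of u'w' (i.e. to the Reynolds shear stress)."""
    up, wp = u - u.mean(), w - w.mean()
    uw = up * wp
    total = uw.sum()
    quads = {
        "Q1 (outward interaction)": (up > 0) & (wp > 0),
        "Q2 (ejection)":            (up < 0) & (wp > 0),
        "Q3 (inward interaction)":  (up < 0) & (wp < 0),
        "Q4 (sweep)":               (up > 0) & (wp < 0),
    }
    return {name: uw[mask].sum() / total for name, mask in quads.items()}

# Synthetic anticorrelated series: ejections and sweeps should dominate.
u = rng.standard_normal(10000)
w = -0.5 * u + 0.5 * rng.standard_normal(10000)
for name, frac in quadrant_contributions(u, w).items():
    print(f"{name}: {frac:+.2f}")
```

For shear flows with negative u′w′ correlation, the Q2 and Q4 fractions exceed those of Q1 and Q3, mirroring the dominance of ejections and sweeps reported in the abstract.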

14.
Uplift and the accompanying reduction in overburden result in anomalously high velocity in the uplifted rock unit relative to its current depth. The present work utilizes the non-uniqueness of the parameters of instantaneous velocity versus depth functions as an effective tool for uplift studies. The linear function with its two parameters, V0 and k, is a very simple function and is used as the illustrative vehicle. In the parameter space, i.e. in a plot where one axis represents V0 and the other axis represents k, non-uniqueness can be represented by contours of equal goodness-of-fit values between the observed data and the fitted function. The contour delimiting a region of equivalent solutions in the parameter space is called a 'solution trough'. Uplift corresponds to a rotation of the solution trough in the parameter space. It is shown that, in terms of relative depth changes, there are five possible configurations (five cases) of uplift in a given area (the mobile location) relative to another area (the reference location). The cases depend on whether the uplifted location had attained a (pre-uplift) maximum depth of burial that was greater than, similar to, or smaller than the maximum depth of burial at the reference location. Interpretation of the relationships between the solution troughs corresponding to the different locations makes it possible to establish which of the five cases applies to the uplifted location and to estimate the amount of uplift that the unit had undergone at that location. The difficulty in determining the reduction in velocity due to decompaction resulting from uplift is a main source of uncertainty in the estimate of the amount of uplift. This is a common problem with all velocity-based methods of uplift estimation. To help circumvent this difficulty, the present work proposes a first-order approximation method for estimating the effect of decompaction on velocity in an uplifted area.

15.
A six-parameter stochastic point process model, known as the modified Bartlett-Lewis Rectangular Pulses Model, is applied to fairly long hourly rainfall records from Valentia (a relatively wet location) and Shannon Airport (a relatively dry location), Ireland. Five different sets of statistics of the rainfall data of each month, assuming local stationarity within the month, are used to estimate the parameters and to simulate model output. The problems of parameter stability/sensitivity and identification are discussed, and it is shown that the sensitivity of the model parameters to the choice of six statistics can be avoided by estimating the six parameters by optimization from 16 statistics, namely the mean, variance, lag-1 autocorrelation coefficient and proportion dry of hourly, 6-hourly, 12-hourly, and 24-hourly rainfalls. Some useful properties of the rainfall depth process are analysed using the notion of event-based statistics. The conditional distributions of rainfall depth and maximum intensity, mean event profiles, and various other features of the rainfall depth process obtained from the model-simulated samples compare favourably with the historical ones.

16.
A study on the integrated probability-gain prediction method for reservoir-induced earthquakes
An integrated probability-gain prediction model is introduced into reservoir-induced earthquake prediction for the first time, and the basic principle of the method is described. The factors influencing the occurrence of the maximum magnitude of reservoir-induced earthquakes are discussed. Using data on Chinese reservoirs and their induced-earthquake cases, the comprehensive influence factor E, reservoir capacity, and reservoir water depth are statistically evaluated as factors or indicators for predicting the maximum magnitude of reservoir-induced earthquakes, in terms of the prediction efficiency R and the empirical probability gain K. The results show that integrated probability-gain prediction enables a quantitative analysis of how well various methods predict reservoir-induced earthquakes. On this basis, retrospective tests of the maximum magnitude of reservoir-induced earthquakes were conducted for several reservoirs in China and abroad, showing that the method is feasible as a new approach to predicting the maximum magnitude of reservoir-induced earthquakes.

17.
The parametric method of flood frequency analysis (FFA) involves fitting a probability distribution to the observed flood data at the site of interest. When the record length at a given site is relatively long and the flood data exhibit skewness, a distribution having more than three parameters, such as the log-Pearson type 3 distribution, is often used in FFA. This paper examines the suitability of a five-parameter Wakeby distribution for the annual maximum flood data in eastern Australia. We adopt a Monte Carlo simulation technique to select an appropriate plotting position formula and to derive a probability plot correlation coefficient (PPCC) test statistic for the Wakeby distribution. The Weibull plotting position formula has been found to be the most appropriate for the Wakeby distribution. Regression equations for the PPCC test statistics associated with the Wakeby distribution for different levels of significance have been derived. Furthermore, a power study to estimate the rejection rate associated with the derived PPCC test statistics has been undertaken. Finally, an application using annual maximum flood series data from 91 catchments in eastern Australia is presented. Results show that the developed regression equations can be used with a high degree of confidence to test whether the Wakeby distribution fits the annual maximum flood series data at a given station. The methodology developed in this paper can be adapted to other probability distributions and to other study areas. Copyright © 2014 John Wiley & Sons, Ltd.
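A minimal PPCC computation with the Weibull plotting position and the Wakeby quantile function can be sketched as follows; the Wakeby parameter values are illustrative, not the fitted Australian values:

```python
import numpy as np

rng = np.random.default_rng(4)

def wakeby_quantile(p, xi, a, b, c, d):
    """Wakeby quantile function:
    x(F) = xi + (a/b)*(1 - (1-F)**b) - (c/d)*(1 - (1-F)**(-d))"""
    F = np.asarray(p, dtype=float)
    return xi + (a / b) * (1 - (1 - F) ** b) - (c / d) * (1 - (1 - F) ** (-d))

def ppcc_weibull(sample, quantile_fn):
    """Probability plot correlation coefficient using the Weibull
    plotting position p_i = i / (n + 1)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1)
    return np.corrcoef(x, quantile_fn(p))[0, 1]

# Illustrative Wakeby parameters; sample drawn by inverse transform.
params = (0.0, 5.0, 2.0, 1.0, 0.1)
sample = wakeby_quantile(rng.uniform(size=200), *params)
r = ppcc_weibull(sample, lambda p: wakeby_quantile(p, *params))
print(round(r, 3))  # close to 1 when the assumed distribution is correct
```

The PPCC test compares r against a significance threshold (in the paper, from the derived regression equations): values well below the threshold reject the Wakeby fit.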

18.
Stream flow predictions in ungauged basins are one of the most challenging tasks in surface water hydrology because of the nonavailability of data and system heterogeneity. This study proposes a method to quantify the stream flow predictive uncertainty of distributed hydrologic models for ungauged basins. The method is based on deriving the probability distribution of a model's sensitive parameters using measured data from a gauged basin, and transferring that distribution to hydrologically similar ungauged basins for stream flow predictions. A Monte Carlo simulation of the hydrologic model using sampled parameter sets with an assumed probability distribution is conducted. The posterior probability distributions of the sensitive parameters are then computed using a Bayesian approach. In addition, preselected threshold values of the likelihood measure of simulations are employed for sizing the parameter range, which helps reduce the predictive uncertainty. The proposed method is illustrated through two case studies using two hydrologically independent sub-basins in the Cedar Creek watershed located in Texas, USA, using the Soil and Water Assessment Tool (SWAT) model. The probability distribution of the SWAT parameters is derived from the data from one of the sub-basins and is applied for simulation in the other sub-basin, considered as pseudo-ungauged. In order to assess the robustness of the method, the numerical exercise is repeated by reversing the gauged and pseudo-ungauged basins. The results are subsequently compared with the measured stream flow from the sub-basins. It is observed that the measured stream flow in the pseudo-ungauged basin lies well within the estimated confidence band of predicted stream flow. Copyright © 2013 John Wiley & Sons, Ltd.
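The likelihood-weighting step at the core of such a Bayesian parameter transfer can be sketched as follows; the one-parameter "model" and the Gaussian likelihood are toy assumptions, not the SWAT setup of the study:

```python
import numpy as np

rng = np.random.default_rng(5)

def posterior_weights(param_sets, simulate, observed, sigma=1.0):
    """Weight Monte Carlo parameter samples by a Gaussian likelihood of
    simulated vs. observed streamflow (Bayes with a uniform prior)."""
    w = np.empty(len(param_sets))
    for i, theta in enumerate(param_sets):
        resid = simulate(theta) - observed
        w[i] = np.exp(-0.5 * np.sum(resid ** 2) / sigma ** 2)
    return w / w.sum()

# Toy "model": streamflow proportional to a single parameter theta.
observed = 2.0 * np.ones(10)
simulate = lambda theta: theta * np.ones(10)
thetas = rng.uniform(0.0, 5.0, 500)
w = posterior_weights(thetas, simulate, observed)
print(round(float(np.sum(w * thetas)), 2))  # posterior mean near 2.0
```

The weighted samples approximate the posterior of the sensitive parameter in the gauged basin; transferring them to the pseudo-ungauged basin and re-running the model yields the predictive confidence band described in the abstract.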

19.
Reverse-time migration gives high-quality, complete images by using full-wave extrapolations. It is thus not subject to important limitations of other migrations that are based on high-frequency or one-way approximations. The cross-correlation imaging condition in two-dimensional pre-stack reverse-time migration of common-source data explicitly sums the product of the (forward-propagating) source and (backward-propagating) receiver wavefields over all image times. The primary contribution at any image point travels a minimum-time path that has only one (specular) reflection, and it usually corresponds to a local maximum amplitude. All other contributions at the same image point are various types of multipaths, including prismatic multi-arrivals, free-surface and internal multiples, converted waves, and all crosstalk noise, which are imaged at later times, and potentially create migration artefacts. A solution that facilitates inclusion of correctly imaged, non-primary arrivals and removal of the related artefacts is to save the depth versus incident angle slice at each image time (rather than automatically summing them). This results in a three-parameter (incident angle, depth, and image time) common-image volume that integrates, into a single unified representation, attributes that were previously computed by separate processes. The volume can be post-processed by selecting any desired combination of primary and/or multipath data before stacking over image time. Separate images (with or without artefacts) and various projections can then be produced without having to remigrate the data, providing an efficient tool for optimization of migration images.
A second example, using synthetic data from the Sigsbee2 model, shows that the contributions to subsalt images of primary and multipath (in this case, turning wave) reflections are different. The primary reflections contain most of the information in regions away from the salt, but both primary and multipath data contribute in the subsalt region.

20.
In this paper we extend the generalized likelihood uncertainty estimation (GLUE) technique to estimate spatially distributed uncertainty in models conditioned against binary pattern data contained in flood inundation maps. Untransformed binary pattern data already have been used within GLUE to estimate domain‐averaged (zero‐dimensional) likelihoods, yet the pattern information embedded within such sources has not been used to estimate distributed uncertainty. Where pattern information has been used to map distributed uncertainty it has been transformed into a continuous function prior to use, which may introduce additional errors. To solve this problem we use here ‘raw’ binary pattern data to define a zero‐dimensional global performance measure for each simulation in a Monte Carlo ensemble. Thereafter, for each pixel of the distributed model we evaluate the probability that this pixel was inundated. This probability is then weighted by the measure of global model performance, thus taking into account how well a given parameter set performs overall. The result is a distributed uncertainty measure mapped over real space. The advantage of the approach is that it both captures distributed uncertainty and contains information on global likelihood that can be used to condition predictions of further events for which observed data are not available. The technique is applied to the problem of flood inundation prediction at two test sites representing different hydrodynamic conditions. In both cases, the method reveals the spatial structure in simulation uncertainty and simultaneously enables mapping of flood probability predicted by the model. Spatially distributed uncertainty analysis is shown to contain information over and above that available from global performance measures. Overall, the paper highlights the different types of information that may be obtained from mappings of model uncertainty over real and n‐dimensional parameter spaces. 
Copyright © 2002 John Wiley & Sons, Ltd.
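The per-pixel weighting that produces the distributed uncertainty map can be sketched as follows; the toy binary maps and the form of the global performance measure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def glue_flood_probability(wet_maps, fit_obs):
    """Likelihood-weighted probability of inundation per pixel.

    wet_maps: (n_sims, ny, nx) boolean arrays, one per Monte Carlo run
    fit_obs:  (n_sims,) global performance measure per run (e.g. the
              fraction of pixels matching the observed binary flood map)
    """
    w = np.asarray(fit_obs, dtype=float)
    w = w / w.sum()  # normalize global likelihoods to weights
    return np.tensordot(w, wet_maps.astype(float), axes=1)

# 100 toy simulations on a 4x4 grid; better-fitting runs count more.
wet = rng.random((100, 4, 4)) < 0.5
fit = rng.uniform(0.3, 0.9, 100)
prob = glue_flood_probability(wet, fit)
print(prob.shape)  # -> (4, 4), each entry a flood probability in [0, 1]
```

This reproduces the essential idea of the abstract: each pixel's inundation frequency across the ensemble, weighted by how well each parameter set performs globally, yields a flood probability map over real space.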
