Similar literature
A total of 20 similar records were retrieved.
1.
Model uncertainty was investigated for a groundwater model characterized by a lack of field data on hydraulic parameters and boundary conditions, combined with many observation data sets available for calibration. Seven different conceptual models with a stepwise increase from 0 to 30 adjustable parameters were calibrated using PEST. Residuals, sensitivities, the Akaike information criterion (AIC and AICc), Bayesian information criterion (BIC), and Kashyap's information criterion (KIC) were calculated for the set of seven inversely calibrated models of increasing complexity. Finally, the likelihood of each model was computed. Comparing only the residuals of the different conceptual models leads to overparameterization and a loss of certainty in the conceptual model approach. The model employing only uncalibrated hydraulic parameters, estimated from sedimentological information, obtained the worst AIC, BIC, and KIC values. Using only sedimentological data to derive hydraulic parameters introduces a systematic error into the simulation results and cannot be recommended for generating a valuable model. For numerical investigations with large numbers of calibration data, the BIC and KIC select a simpler model as optimal than the AIC does. The model with 15 adjusted parameters was evaluated by the AIC as the best option and obtained a likelihood of 98%. The AIC disregards the potential model structure error, and selection by the KIC is therefore more appropriate. Sensitivities to piezometric heads were highest for the model with only five adjustable parameters, and sensitivity coefficients were directly influenced by the changes in extracted groundwater volumes.
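
The information criteria compared in this abstract can be computed directly from each calibrated model's residuals. Below is a minimal sketch of the standard least-squares forms of AIC, AICc and BIC, and of converting AIC differences into model likelihoods (Akaike weights); the observation count, parameter counts and residual sums of squares are invented for illustration, and KIC is omitted because it additionally requires the Fisher information matrix.

```python
# Sketch: comparing calibrated models of increasing complexity with
# least-squares information criteria and Akaike weights.
# All numbers below are hypothetical, not the paper's results.
import numpy as np

def info_criteria(sse, k, n):
    """Least-squares forms of AIC, AICc and BIC (constant terms dropped)."""
    aic = n * np.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(sse / n) + k * np.log(n)
    return aic, aicc, bic

n_obs = 120                                               # hypothetical number of head observations
models = {"0 par": (950.0, 0), "5 par": (410.0, 5),
          "15 par": (210.0, 15), "30 par": (190.0, 30)}   # (SSE, adjustable parameters)

aic = {name: info_criteria(sse, k, n_obs)[0] for name, (sse, k) in models.items()}
best = min(aic.values())
weights = {name: np.exp(-0.5 * (a - best)) for name, a in aic.items()}
total = sum(weights.values())
for name in models:
    print(f"{name:7s}  AIC = {aic[name]:8.2f}  model likelihood = {weights[name] / total:6.1%}")
```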

2.
Selection of a flood frequency distribution and associated parameter estimation procedure is an important step in flood frequency analysis. This is, however, a difficult task owing to the large number of candidate distributions and parameter estimation procedures available in the literature. This paper presents a case study with flood data from Tasmania in Australia, which examines four model selection criteria: Akaike Information Criterion (AIC), Akaike Information Criterion—second order variant (AICc), Bayesian Information Criterion (BIC) and a modified Anderson–Darling Criterion (ADC). It has been found from the Monte Carlo simulation that ADC is more successful in recognizing the parent distribution correctly than the AIC and BIC when the parent is a three-parameter distribution. On the other hand, AIC and BIC are better in recognizing the parent distribution correctly when the parent is a two-parameter distribution. Of the seven different probability distributions examined for Tasmania, two-parameter distributions are found to be preferable to three-parameter ones, with the Log Normal appearing to be the best selection. The paper also evaluates the three most widely used parameter estimation procedures for the Log Normal distribution: method of moments (MOM), method of maximum likelihood (MLE) and Bayesian Markov Chain Monte Carlo method (BAY). It has been found that the BAY procedure provides better parameter estimates for the Log Normal distribution, which results in flood quantile estimates with smaller bias and standard error as compared to the MOM and MLE. The findings from this study would be useful in flood frequency analyses in other Australian states and other countries, in particular when selecting an appropriate probability distribution from a number of alternatives.
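
As a worked illustration of two of the estimation procedures named here, the sketch below fits a two-parameter Log Normal distribution to a flood series by MLE and by MOM and compares the resulting 100-year quantiles. The flood values are synthetic, not the Tasmanian data, and the Bayesian (BAY) procedure is not reproduced.

```python
# Sketch: MLE versus method-of-moments fits of a two-parameter Log Normal
# distribution to an annual-maximum flood series, with AIC and the 1% AEP
# (100-year) quantile. The sample values are made up.
import numpy as np
from scipy import stats

q = np.array([212., 340., 198., 510., 275., 430., 160., 620., 305., 255.,
              390., 225., 480., 180., 350.])           # hypothetical annual maxima (m3/s)

# MLE for the Log Normal: mean and (biased) std of the log-flows
logq = np.log(q)
mu_mle, sigma_mle = logq.mean(), logq.std(ddof=0)

# Method of moments: match mean and variance of the untransformed flows
cv2 = q.var(ddof=1) / q.mean() ** 2
sigma_mom = np.sqrt(np.log(1.0 + cv2))
mu_mom = np.log(q.mean()) - 0.5 * sigma_mom ** 2

# Log-likelihood of the Log Normal: log f(q) = log N(ln q; mu, sigma) - ln q
loglik = stats.norm.logpdf(logq, mu_mle, sigma_mle).sum() - logq.sum()
aic = -2 * loglik + 2 * 2                               # two fitted parameters

z99 = stats.norm.ppf(0.99)
print("AIC (MLE fit):", round(aic, 2))
print("100-yr quantile, MLE:", round(np.exp(mu_mle + z99 * sigma_mle), 1))
print("100-yr quantile, MOM:", round(np.exp(mu_mom + z99 * sigma_mom), 1))
```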

3.
Z. X. Xu  J. Y. Li 《水文研究》2002,16(12):2423-2439
The primary objective of this study is to investigate the possibility of including more temporal and spatial information in short-term inflow forecasting, which is not easily attained in traditional time-series models or conceptual hydrological models. In order to achieve this objective, an artificial neural network (ANN) model for short-term inflow forecasting is developed and several issues associated with the use of an ANN model are examined in this study. The formulated ANN model is used to forecast 1- to 7-h ahead inflows into a hydropower reservoir. The root-mean-squared error (RMSE), the Nash–Sutcliffe coefficient (NSC), the Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the 1- to 7-h ahead forecasts, and the cross-correlation coefficient between the forecast and observed inflows are estimated. Model performance is analysed and some quantitative analysis is presented. The results obtained are satisfactory. Perceived strengths of the ANN model are the capability for representing complex and non-linear relationships as well as being able to include more information in the model easily. Although the results obtained may not be universal, they are expected to reveal some possible problems in ANN models and provide some helpful insights in the development and application of ANN models in the field of hydrology and water resources. Copyright © 2002 John Wiley & Sons, Ltd.
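
Two of the performance measures quoted here are simple to reproduce. The sketch below computes the root-mean-squared error and the Nash–Sutcliffe coefficient for a short, made-up pair of observed and forecast inflow series.

```python
# Sketch: RMSE and Nash-Sutcliffe coefficient for hypothetical observed
# and 1-h-ahead forecast inflows (values are invented).
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [120., 135., 160., 210., 185., 150., 140.]   # observed inflow (m3/s)
fc  = [118., 140., 150., 200., 195., 155., 138.]   # ANN forecast
print("RMSE:", round(rmse(obs, fc), 2), " NSC:", round(nash_sutcliffe(obs, fc), 3))
```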

4.
The temporal distribution of earthquakes is an important basis for earthquake prediction and seismic hazard analysis. Using the unified earthquake catalogue of the China sea areas as the basic data, the exponential, gamma, Weibull and lognormal distributions and the Brownian passage time (BPT) model were taken as candidate models, and their parameters were estimated by the maximum likelihood method. The optimal model for describing the temporal distribution of offshore earthquakes was identified according to the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and Kolmogorov–Smirnov (K-S) test results. The results show that, for relatively small earthquakes (M < 6), the exponential, gamma and Weibull distributions all describe the temporal distribution well; over large regions (such as the entire sea area), the temporal distribution of relatively large earthquakes (M > 6) can be fully described by the exponential distribution; within smaller regions (such as individual seismic belts), the inter-event times of large earthquakes may be better described by the lognormal and BPT distributions. In addition, diffusion entropy analysis was used to study the clustering and temporal correlation of earthquakes. The results indicate that seismicity exhibits long-term memory, and that relatively small earthquakes (M < 6) are influenced by larger ones and therefore cluster in time. These findings provide a useful reference for selecting temporal distribution models and estimating seismicity parameters in earthquake prediction and seismic hazard calculations, and are of scientific significance for understanding the mechanisms of earthquake generation.
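
A minimal sketch of the model-comparison step described above: two candidate inter-event-time distributions are fitted by maximum likelihood and compared with AIC and a Kolmogorov–Smirnov test. The inter-event times are randomly generated, and only two of the five candidate models from the abstract are shown.

```python
# Sketch: MLE fits of exponential and lognormal inter-event-time models,
# compared with AIC and the K-S test. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dt = rng.exponential(scale=2.5, size=200)          # hypothetical inter-event times (years)

candidates = {"exponential": stats.expon, "lognormal": stats.lognorm}
for name, dist in candidates.items():
    params = dist.fit(dt, floc=0)                  # MLE with the location fixed at zero
    loglik = dist.logpdf(dt, *params).sum()
    k = len(params) - 1                            # free parameters (location held fixed)
    aic = -2 * loglik + 2 * k
    ks = stats.kstest(dt, dist.cdf, args=params)
    print(f"{name:11s} AIC={aic:8.2f}  K-S D={ks.statistic:.3f}  p={ks.pvalue:.3f}")
```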

5.
Fast, accurate and reliable automatic phase identification not only provides governments with rapid and reliable earthquake information for post-earthquake decision-making, but is also of great value for reducing earthquake losses and increasing public confidence in government. Based on observed records from the Yunnan strong-motion network, more than 20 earthquake events with magnitudes between M5.0 and M7.0 during 2008–2017 were selected. Drawing on domestic and international research on automatic P-wave picking, two combined approaches—the widely used short-term/long-term average (STA/LTA) method combined with the AIC criterion, and STA/LTA combined with the BIC criterion—were applied to pick P-wave arrival times from records covering earthquake-prone regions of Yunnan such as Yingjiang, Tengchong, Yiliang, Eryuan and Jinggu, and the two approaches were compared in terms of picking accuracy, reliability and speed. The statistical analysis shows that, in the precise-picking stage, the BIC criterion has a simpler and more flexible structure and algorithm than the AIC criterion, is more robust against interfering signals, can effectively avoid false triggers caused by noise, strikes the best balance between missed picks and false picks, and allows fast and effective real-time processing of seismic data, which is more favourable for the development of earthquake early warning in Yunnan Province.
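
The sketch below illustrates the general two-stage scheme referred to above: a coarse STA/LTA trigger followed by an Akaike-style changepoint picker (the waveform-based AIC formulation often attributed to Maeda) to refine the onset within the triggered window. The synthetic trace, window lengths and threshold are illustrative assumptions, not the settings used for the Yunnan records, and the BIC variant is not reproduced.

```python
# Sketch: STA/LTA trigger plus AIC changepoint refinement on a synthetic trace.
import numpy as np

def sta_lta(x, nsta, nlta):
    """STA/LTA of the squared signal, for windows ending at samples nlta-1 ... len(x)-1."""
    e = x ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    ends = np.arange(nlta, len(x) + 1)
    sta = (csum[ends] - csum[ends - nsta]) / nsta
    lta = (csum[ends] - csum[ends - nlta]) / nlta
    return sta / (lta + 1e-12)

def aic_pick(x):
    """Index k minimizing AIC(k) = k*log(var(x[:k])) + (n-k-1)*log(var(x[k:]))."""
    n = len(x)
    ks = np.arange(2, n - 2)
    aic = np.array([k * np.log(np.var(x[:k]) + 1e-12)
                    + (n - k - 1) * np.log(np.var(x[k:]) + 1e-12) for k in ks])
    return int(ks[np.argmin(aic)])

fs = 100                                              # sampling rate (Hz), illustrative
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 10 * fs)                 # 10 s of noise
trace[6 * fs:] += 8.0 * rng.normal(0.0, 1.0, 4 * fs)  # synthetic "P arrival" at t = 6 s

nsta, nlta = int(0.5 * fs), 5 * fs
ratio = sta_lta(trace, nsta, nlta)
trigger = nlta - 1 + int(np.argmax(ratio > 4.0))      # first sample where the ratio exceeds 4
window = trace[trigger - 2 * fs: trigger + 2 * fs]    # refine the onset within +/- 2 s
onset = (trigger - 2 * fs + aic_pick(window)) / fs
print("STA/LTA trigger at %.2f s, AIC-refined onset at %.2f s" % (trigger / fs, onset))
```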

6.
Modelling and prediction of hydrological processes (e.g. rainfall–runoff) can be influenced by discontinuities in observed data, and one particular case may arise when the time scale (i.e. resolution) is coarse (e.g. monthly). This study investigates the application of catastrophe theory to examine its suitability to identify possible discontinuities in the rainfall–runoff process. A stochastic cusp catastrophe model is used to study possible discontinuities in the monthly rainfall–runoff process at the Aji River basin in Azerbaijan, Iran. Monthly-averaged rainfall and flow data observed over a period of 20 years (1981–2000) are analysed using the Cuspfit program. In this model, rainfall serves as a control variable and runoff as a behavioural variable. The performance of this model is evaluated using four measures: correlation coefficient, log-likelihood, Akaike information criterion (AIC) and Bayesian information criterion (BIC). The results indicate the presence of discontinuities in the rainfall–runoff process, with a significant sudden jump in flow (cusp signal) when rainfall reaches a threshold value. The performance of the model is also found to be better than that of linear and logistic models. The present results, though preliminary, are promising in the sense that catastrophe theory can play a possible role in the study of hydrological systems and processes, especially when the data are noisy.

Citation Ghorbani, M. A., Khatibi, R., Sivakumar, B. & Cobb, L. (2010) Study of discontinuities in hydrological data using catastrophe theory. Hydrol. Sci. J. 55(7), 1137–1151.
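
As a pointer to the mechanics behind the cusp signal mentioned in this abstract, the sketch below evaluates the deterministic skeleton of a cusp model: equilibrium states of the behavioural variable solve a cubic, and Cardan's discriminant separates the single-equilibrium region from the bistable region where a sudden jump can occur. The mapping from rainfall to the control factor and all coefficient values are invented, not the Cuspfit results.

```python
# Sketch: equilibria of a cusp model, a + b*y - y**3 = 0, with rainfall
# mapped (hypothetically) to the control factor a and a fixed splitting
# factor b. Cardan's discriminant flags the bistable region.
import numpy as np

def cusp_equilibria(a, b):
    """Real roots of y**3 - b*y - a = 0 (stationary points of the cusp potential)."""
    roots = np.roots([1.0, 0.0, -b, -a])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

b = 1.5                                        # hypothetical splitting factor
for rainfall in (10.0, 40.0, 70.0):            # mm/month, hypothetical
    a = 0.05 * (rainfall - 40.0)               # hypothetical control mapping
    delta = 27 * a ** 2 - 4 * b ** 3           # Cardan's discriminant
    regime = "bistable (jump possible)" if delta < 0 else "single equilibrium"
    eq = cusp_equilibria(a, b)
    print(f"rain={rainfall:5.1f}  a={a:+.2f}  {regime}: equilibria {np.round(eq, 3)}")
```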

7.
Considering complexity in groundwater modeling can aid in selecting an optimal model and can help avoid overparameterization, model uncertainty, and misleading conclusions. This study was designed to determine the uncertainty arising from model complexity, and to identify how complexity affects model uncertainty. The Ajabshir aquifer, located in East Azerbaijan, Iran, was used for comprehensive hydrogeological studies and modeling. Six unique conceptual models with four different degrees of complexity measured by the number of calibrated model parameters (6, 10, 10, 13, 13 and 15 parameters) were compared and characterized with alternative geological interpretations, recharge estimates and boundary conditions. The models were developed with ModelMuse and calibrated using UCODE with the same set of observed hydraulic-head data. Different methods were used to calculate model probability and model weight to explore model complexity, including Bayesian model averaging, model selection criteria, and multicriteria decision-making (MCDM). With the model selection criteria of AIC, AICc and BIC, the simplest model received the highest model probability. The model selection criterion KIC and the MCDM method, in addition to considering the quality of model fit between observed and simulated data and the number of calibrated parameters, also consider uncertainty in parameter estimates via a Fisher information matrix. KIC and MCDM selected a model with moderate complexity (10 parameters) and the best parameter estimation (model 3) as the best model, over another model with the same degree of complexity (model 2). The results of these comparisons show that, in choosing between models, priority should be given to the quality of the data and parameter estimation rather than to the degree of complexity.

8.
Neural computing has moved beyond simple demonstration to more significant applications. Encouraged by recent developments in artificial neural network (ANN) modelling techniques, we have developed committee machine (CM) networks for converting well logs to porosity and permeability, and have applied the networks to real well data from the North Sea. Simple three-layer back-propagation ANNs constitute the blocks of a modular system where the porosity ANN uses sonic, density and resistivity logs for input. The permeability ANN is slightly more complex, with four inputs (density, gamma ray, neutron porosity and sonic). The optimum size of the hidden layer, the number of training data required, and alternative training techniques have been investigated using synthetic logs. For both networks an optimal number of neurons in the hidden layer is in the range 8–10. With a lower number of hidden units the network fails to represent the problem, and for higher complexity overfitting becomes a problem when data are noisy. A sufficient number of training samples for the porosity ANN is around 150, while the permeability ANN requires twice as many in order to keep network errors well below the errors in core data. For the porosity ANN the overtraining strategy is the suitable technique for bias reduction and an unconstrained optimal linear combination (OLC) is the best method of combining the CM output. For permeability, on the other hand, the combination of overtraining and OLC does not work. Error reduction by validation, simple averaging combined with range-splitting provides the required accuracy. The accuracy of the resulting CM is restricted only by the accuracy of the real data. The ANN approach is shown to be superior to multiple linear regression techniques even with minor non-linearity in the background model.
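
The "unconstrained optimal linear combination" of committee members mentioned here can be obtained by ordinary least squares on held-out data. The sketch below shows that combination step only, with made-up member predictions and core-porosity targets; the member ANNs themselves are not reproduced.

```python
# Sketch: unconstrained optimal linear combination (OLC) of committee
# members, fitted by least squares on a validation set. All values invented.
import numpy as np

# rows = validation samples, columns = predictions of three member ANNs
members = np.array([[0.21, 0.23, 0.20],
                    [0.15, 0.14, 0.16],
                    [0.30, 0.28, 0.31],
                    [0.25, 0.27, 0.24],
                    [0.10, 0.12, 0.11]])
core = np.array([0.22, 0.15, 0.29, 0.26, 0.11])        # core porosity (target)

X = np.column_stack([members, np.ones(len(core))])     # allow a bias term
w, *_ = np.linalg.lstsq(X, core, rcond=None)
combined = X @ w
print("OLC weights:", np.round(w, 3))
print("RMSE, members :", np.round(np.sqrt(((members - core[:, None]) ** 2).mean(0)), 4))
print("RMSE, combined:", round(float(np.sqrt(((combined - core) ** 2).mean())), 4))
```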

9.
Evaporation, as a major component of the hydrologic cycle, plays a key role in water resources development and management in arid and semi-arid climatic regions. Although there are empirical formulas available, their performances are not all satisfactory due to the complicated nature of the evaporation process and data availability. This paper explores evaporation estimation methods based on artificial neural networks (ANN) and adaptive neuro-fuzzy inference system (ANFIS) techniques. It has been found that ANN and ANFIS techniques have much better performances than the empirical formulas (for the test data set, ANN R2 = 0.97, ANFIS R2 = 0.92 and Marciano R2 = 0.54). Between ANN and ANFIS, the ANN model is slightly better, albeit the difference is small. Although ANN and ANFIS techniques seem to be powerful, their data input selection process is quite complicated. In this research, the Gamma test (GT) has been used to tackle the problem of selecting the best input data combination and determining how many data points should be used in the model calibration. More studies are needed to gain wider experience with this data selection tool and with how it could be used in assessing the validation data.

10.
This paper deals with the transient response of a non-linear dynamical system with random uncertainties. The non-parametric probabilistic model of random uncertainties recently published and extended to non-linear dynamical system analysis is used in order to model random uncertainties related to the linear part of the finite element model. The non-linearities are due to restoring forces whose parameters are uncertain and are modeled by the parametric approach. Jaynes' maximum entropy principle with the constraints defined by the available information allows the probabilistic model of such random variables to be constructed. Therefore, a non-parametric–parametric formulation is developed in order to model all the sources of uncertainties in such a non-linear dynamical system. Finally, a numerical application for earthquake engineering analysis is proposed concerning a reactor cooling system under seismic loads. Copyright © 2003 John Wiley & Sons, Ltd.

11.
Wensheng Wang  Jing Ding 《水文研究》2007,21(13):1764-1771
A p-order multivariate kernel density model based on kernel density theory has been developed for synthetic generation of multivariate variables. It is a data-driven approach that avoids prior assumptions as to the form of the probability distribution (normal or Pearson III) and the form of dependence (linear or non-linear). The p-order multivariate kernel density model is a non-parametric method for synthesis of streamflow. The model is more flexible than conventional parametric models used in stochastic hydrology. Its effectiveness and suitability are illustrated through its application to the simultaneous synthetic generation of daily streamflow from Pingshan station and the Yibin-Pingshan region (Yi-Ping region) of the Jinsha River in China. Copyright © 2007 John Wiley & Sons, Ltd.
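
To give a concrete flavour of kernel-density-based streamflow synthesis, the sketch below implements a heavily simplified, single-site, first-order (p = 1) generator: Gaussian product kernels are placed on observed (previous-day, current-day) pairs and a synthetic sequence is drawn from the kernel-conditional distribution. The flow data, bandwidth rule and log transform are all illustrative assumptions and do not reproduce the authors' multivariate formulation.

```python
# Sketch: first-order kernel-density generator for a single site (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
flows = np.exp(rng.normal(5.0, 0.4, 2000))              # hypothetical daily flows

x = np.log(flows)
pairs = np.column_stack([x[:-1], x[1:]])                # (x_{t-1}, x_t) kernel centres
h = 1.06 * pairs.std(axis=0) * len(pairs) ** (-1 / 6)   # reference bandwidths (2-D rule)

def next_value(prev, pairs, h, rng):
    """Sample x_t | x_{t-1} = prev from the product-Gaussian kernel density."""
    w = np.exp(-0.5 * ((prev - pairs[:, 0]) / h[0]) ** 2)
    w /= w.sum()
    centre = rng.choice(len(pairs), p=w)                # pick a kernel by its weight
    return rng.normal(pairs[centre, 1], h[1])           # sample from that kernel

synthetic = [x[-1]]
for _ in range(365):
    synthetic.append(next_value(synthetic[-1], pairs, h, rng))
synthetic_flows = np.exp(np.array(synthetic[1:]))
print("observed mean %.1f, synthetic mean %.1f" % (flows.mean(), synthetic_flows.mean()))
```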

12.
Despite the widespread application of nonlinear mathematical models, comparative studies of different models remain a huge task for modellers. This is because a large number of trial-and-error steps are needed to develop each model, so the workload multiplies to an unmanageable level if many types of models are involved. This study presents an efficient approach that uses the Gamma test (GT) to select the input variables and the training data length, so that the trial-and-error workload can be greatly reduced. The methodology is tested in estimating solar radiation at the Brue catchment, UK. Several nonlinear models have been developed efficiently with the aid of the GT, including local linear regression, multi-layer perceptron (MLP), Elman neural network, neural network auto-regressive model with exogenous inputs (NNARX) and adaptive neuro-fuzzy inference system (ANFIS). This work is only feasible within the time and resource constraints because the GT greatly reduces the workload of the trial-and-error process.
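
A minimal sketch of the Gamma test itself: for each candidate input set, half the mean-squared output differences of near neighbours are regressed on their mean-squared input distances, and the intercept estimates the output noise variance, so a smaller (normalized) intercept points to a more informative input combination. The data, number of neighbours and scaling below are illustrative assumptions.

```python
# Sketch: a basic Gamma test for screening candidate input combinations.
import numpy as np
from scipy.spatial import cKDTree

def gamma_test(X, y, p=10):
    X = (X - X.mean(0)) / X.std(0)                     # scale inputs
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=p + 1)                 # first neighbour is the point itself
    delta = (dist[:, 1:] ** 2).mean(axis=0)            # mean squared input distance, k = 1..p
    gamma = 0.5 * ((y[idx[:, 1:]] - y[:, None]) ** 2).mean(axis=0)
    slope, intercept = np.polyfit(delta, gamma, 1)
    return intercept, intercept / y.var()              # Gamma statistic and V-ratio

rng = np.random.default_rng(3)
n = 1000
X = rng.uniform(-1, 1, (n, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)   # true noise var 0.01

print("informative inputs: Gamma=%.4f  V-ratio=%.3f" % gamma_test(X, y))
print("irrelevant input:   Gamma=%.4f  V-ratio=%.3f" % gamma_test(X[:, 2:], y))
```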

13.
This paper presents results of the earthquake response analysis on a large-scale seismic test (LSST) structure which was built at Hualien in Taiwan for an international cooperative research project. The analysis is carried out using a computer program which has been developed based on axisymmetric finite element method incorporating dynamic infinite elements for far-field soil region and a substructured wave input technique. The non-linear behaviour of the soil medium is taken into account using an iterative equivalent linearization procedure. Two sets of the soil and structural properties, namely the unified and the FVT-correlated models, are utilized as the initial linear values. The unified model was provided by a group of experts in charge of the geotechnical experiments, and the correlated model was obtained through a system identification procedure using the forced vibration test (FVT) results by the present authors. Three components of ground accelerations are artificially generated through an averaging process of the Fourier amplitude spectra of the ground accelerations measured near the test structure, and they are used as the control input motions for the earthquake analysis. It has been found that the earthquake responses predicted using the generated control motions and with the FVT-correlated model as the initial linear properties in the equivalent linearization procedure compare very well with the observed responses. Copyright © 2001 John Wiley & Sons, Ltd.

14.
Soil particle-size distributions (PSD) have been used to estimate soil hydraulic properties. Various parametric PSD models have been proposed to describe the soil PSD from sparse experimental data. It is important to determine which PSD model best represents specific soils. Fourteen PSD models were examined in order to determine the best model for representing the deposited soils adjacent to dams in the China Loess Plateau; these were: Skaggs (S-1, S-2, and S-3), fractal (FR), Jaky (J), Lima and Silva (LS), Morgan (M), Gompertz (G), logarithm (L), exponential (E), log-exponential (LE), Weibull (W), van Genuchten type (VG) as well as Fredlund (F) models. Four hundred and eighty samples were obtained from soils deposited in the Liudaogou catchment. The coefficient of determination (R2), Akaike's information criterion (AIC), and the modified AIC (mAIC) were used. Based upon R2 and AIC, the three- and four-parameter models were both good at describing the PSDs of deposited soils, and the LE, FR, and E models were the poorest. However, the mAIC, in conjunction with the R2 and AIC results, indicated that the W model was optimal for describing the PSD of the deposited soils when the effect of the number of parameters is emphasized. This analysis was also helpful in identifying the best model. Our results are applicable to the China Loess Plateau.

15.
Non-linear structural identification problems have attracted considerable research effort for decades, in which the Bouc–Wen model is generally utilized to simulate non-linear structural constitutive characteristics. Support vector regression (SVR), a promising data processing method, is studied for versatile-type structural identification. First, a model selection strategy is utilized to determine the unknown power parameter of the Bouc–Wen model. Meanwhile, optimum SVR parameters are selected automatically, instead of being tuned manually. Consequently, the non-linear structural equation is rewritten in linear form, and is solved by the SVR technique. A five-floor versatile-type structure is studied to show the effectiveness of the proposed method, in which cases with both known and unknown power parameters are investigated. Copyright © 2007 John Wiley & Sons, Ltd.
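
For readers unfamiliar with the constitutive model named here, the sketch below integrates the standard Bouc–Wen evolution equation with an explicit Euler scheme for a prescribed displacement history and returns the hysteretic restoring force. The parameter values, time step and loading are illustrative; the paper's SVR-based identification procedure is not reproduced.

```python
# Sketch: Bouc-Wen hysteretic restoring force for a prescribed displacement
# history (explicit Euler integration, illustrative parameters).
import numpy as np

def bouc_wen_force(x, dt, A=1.0, beta=0.5, gamma=0.5, n=2.0, alpha=0.2, k=100.0):
    """Restoring force f = alpha*k*x + (1-alpha)*k*z, with the Bouc-Wen evolution of z."""
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        xdot = (x[i] - x[i - 1]) / dt
        zdot = (A * xdot
                - beta * abs(xdot) * abs(z[i - 1]) ** (n - 1) * z[i - 1]
                - gamma * xdot * abs(z[i - 1]) ** n)
        z[i] = z[i - 1] + zdot * dt
    return alpha * k * x + (1 - alpha) * k * z

dt = 0.001
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 0.5 * t) * np.minimum(1.0, t / 2)   # slowly growing sine (generic units)
f = bouc_wen_force(x, dt)
print("peak displacement %.2f, peak restoring force %.1f" % (x.max(), f.max()))
```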

16.
A semi-active fuzzy control strategy for seismic response reduction using a magnetorheological (MR) damper is presented. When a control method based on fuzzy set theory is used for vibration reduction of a structure with an MR damper, it offers inherent robustness, ease in treating the uncertainties of input data from the ground-motion and structural-vibration sensors, and the ability to handle the non-linear behavior of the structure, because there is no longer the need for an exact mathematical model of the structure. For a clipped-optimal control algorithm, the command voltage of an MR damper is set at either zero or the maximum level. A semi-active fuzzy control system, however, has the benefit of producing the required voltage to be input to the damper so that a desirable damper force can be produced, thus decreasing the control force and reducing the structural response. Moreover, the proposed control strategy is fail-safe in that the bounded-input, bounded-output stability of the controlled structure is guaranteed. The results of the numerical simulations show that the proposed semi-active control system consisting of a fuzzy controller and an MR damper can be beneficial in reducing seismic responses of structures. Copyright © 2004 John Wiley & Sons, Ltd.

17.
Although artificial neural networks (ANNs) have been applied in rainfall runoff modelling for many years, there are still many important issues unsolved that have prevented this powerful non-linear tool from wide applications in operational flood forecasting activities. This paper describes three ANN configurations and it is found that a dedicated ANN for each lead-time step has the best performance and a multiple output form has the worst result. The most popular form with multiple inputs and single output has the average performance. In comparison with a linear transfer function (TF) model, it is found that ANN models are uncompetitive against the TF model in short-range predictions and should not be used in operational flood forecasting owing to their complicated calibration process. For longer range predictions, ANN models have an improved chance to perform better than the TF model; however, this is highly dependent on the training data arrangement and there are undesirable uncertainties involved, as demonstrated by bootstrap analysis in the study. To tackle the uncertainty issue, two novel approaches are proposed: distance analysis and response analysis. Instead of discarding the training data after the model's calibration, the data should be retained as an integral part of the model during its prediction stage and the uncertainty for each prediction could be judged in real time by measuring the distances against the training data. The response analysis is based on an extension of the traditional unit hydrograph concept and has a very useful potential to reveal the hydrological characteristics of ANN models, hence improving user confidence in using them in real time. Copyright © 2006 John Wiley & Sons, Ltd.

18.
A hybrid model that blends two non-linear data-driven models, i.e. an artificial neural network (ANN) and a moving block bootstrap (MBB), is proposed for modelling annual streamflows of rivers that exhibit complex dependence. In the proposed model, the annual streamflows are modelled initially using a radial basis function ANN model. The residuals extracted from the neural network model are resampled using the non-parametric resampling technique MBB to obtain innovations, which are then added back to the ANN-modelled flows to generate synthetic replicates. The model has been applied to three annual streamflow records with variable record length, selected from different geographic regions, namely Africa, USA and former USSR. The performance of the proposed ANN-based non-linear hybrid model has been compared with that of the linear parametric hybrid model. The results from the case studies indicate that the proposed ANN-based hybrid model (ANNHM) is able to reproduce the skewness present in the streamflows better compared to the linear parametric-based hybrid model (LPHM), owing to the effective capturing of the non-linearities. Moreover, the ANNHM, being a completely data-driven model, reproduces the features of the marginal distribution more closely than the LPHM, but offers less smoothing and no extrapolation value. It is observed that even though the preservation of the linear dependence structure by the ANNHM is inferior to the LPHM, the effective blending of the two non-linear models helps the ANNHM to predict the drought and the storage characteristics efficiently. Copyright © 2007 John Wiley & Sons, Ltd.
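
The resampling step described above is straightforward to sketch: residuals are resampled in randomly placed, overlapping blocks so that short-range dependence is preserved, and the resampled innovations are added back to the modelled flows. In the sketch, the "ANN-modelled flows" and residuals are placeholders and the block length is an arbitrary choice.

```python
# Sketch: moving block bootstrap (MBB) of model residuals added back to
# modelled annual flows to form one synthetic replicate. Values are invented.
import numpy as np

def moving_block_bootstrap(residuals, block_len, rng):
    """Resample a series by concatenating randomly chosen overlapping blocks."""
    n = len(residuals)
    starts = rng.integers(0, n - block_len + 1, size=int(np.ceil(n / block_len)))
    blocks = [residuals[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(4)
ann_flows = 100 + 20 * np.sin(np.linspace(0, 6 * np.pi, 60))   # stand-in for ANN-modelled flows
residuals = rng.normal(0, 8, 60)                               # stand-in for ANN residuals
synthetic = ann_flows + moving_block_bootstrap(residuals, block_len=5, rng=rng)
print("synthetic replicate, first 5 values:", np.round(synthetic[:5], 1))
```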

19.
In this paper, the applicability of an auto-regressive model with exogenous inputs (ARX) in the frequency domain to structural health monitoring (SHM) is established. Damage sensitive features that explicitly consider non-linear system input/output relationships are extracted from the ARX model. Furthermore, because of the non-Gaussian nature of the extracted features, Extreme Value Statistics (EVS) is employed to develop a robust damage classifier. EVS provides superior performance to standard statistical methods because the data of interest are in the tails (extremes) of the damage sensitive feature distribution. The suitability of the ARX model, combined with EVS, to non-linear damage detection is demonstrated using vibration data obtained from a laboratory experiment of a three-story building model. It is found that the vibration-based method, while able to discern when damage is present in the structure, is unable to localize the damage to a particular joint. An impedance-based active sensing method using piezoelectric (PZT) material as both an actuator and a sensor is then investigated as an alternative solution to the problem of damage localization. Copyright © 2005 John Wiley & Sons, Ltd.
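
As background to the feature-extraction step, the sketch below fits a simple time-domain ARX(2,2) model by least squares to simulated input/output data and uses the residual standard deviation as a damage-sensitive feature. This is a generic illustration of ARX fitting, not the authors' frequency-domain formulation.

```python
# Sketch: least-squares ARX(2,2) fit, y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + b2*u[t-2] + e[t].
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares estimate of the ARX coefficients and residuals."""
    start = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(start, len(y))]
    Phi, target = np.array(rows), y[start:]
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    residuals = target - Phi @ theta
    return theta, residuals

rng = np.random.default_rng(5)
u = rng.normal(0, 1, 2000)                       # base excitation (input)
y = np.zeros_like(u)
for t in range(2, len(u)):                       # simulated "healthy" response
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + 0.5 * u[t - 1] + rng.normal(0, 0.05)

theta, res = fit_arx(y, u)
print("estimated [a1 a2 b1 b2]:", np.round(theta, 3))
print("residual std (damage-sensitive feature):", round(res.std(), 4))
```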

20.
Recently, several new ground-motion prediction equations (GMPEs) have been developed in the U.S.A. (the NGA project) and elsewhere. Unfortunately, the predictions obtained by using different models still differ considerably, although starting from the same database. In this paper, a non-parametric approach, called the Conditional Average Estimator (CAE) method, has been used for ground-motion prediction. The comparison between the CAE results and the predictions obtained by five NGA and one European model suggests that the model predictions depend substantially on the selection of the effective database and on the adopted functional form. Both decisions rely to some extent on judgement, and their influence is especially important at short distances from the source. The differences between the results obtained from the European and NGA databases seem to be of the same or even smaller magnitude than the differences observed between different NGA models, at least at short and moderate distances. Aftershocks in the database generally decrease the median values and increase dispersion. The non-parametric CAE method has proved to be a simple but powerful tool for ground-motion prediction, especially in a research environment. It can be used for quick predictions with different databases and different input parameters within the range of available data. It is easy to add to or remove data from the database, and to check the influence of additional input parameters. With the availability of high-quality data, the non-parametric approach will become more reliable and more attractive also for practical applications. Copyright © 2010 John Wiley & Sons, Ltd.
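
The core of a conditional average estimator is a kernel-weighted average of observed target values in a normalized input space. The sketch below predicts log-PGA from magnitude and distance using a handful of invented records and an arbitrary smoothing width; it only illustrates the estimator's mechanics, not the paper's database or input-parameter set.

```python
# Sketch: a conditional-average-style estimator with Gaussian weights.
import numpy as np

# hypothetical records: (magnitude, log10 distance in km, log10 PGA in g)
data = np.array([[5.5, 1.0, -1.8], [6.0, 1.3, -1.6], [6.5, 1.1, -1.0],
                 [7.0, 1.7, -1.3], [6.8, 0.9, -0.7], [5.8, 1.5, -2.1],
                 [6.2, 1.8, -1.9], [7.2, 1.2, -0.6]])
X, y = data[:, :2], data[:, 2]
scale = X.std(axis=0)                                  # normalize each input parameter

def cae_predict(query, X, y, scale, width=0.5):
    d2 = (((query - X) / scale) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))                 # Gaussian weights in input space
    return (w @ y) / w.sum(), w.sum()                  # estimate and a data-support measure

query = np.array([6.5, 1.2])                           # M 6.5 at roughly 16 km
est, support = cae_predict(query, X, y, scale)
print("predicted log10 PGA: %.2f  (effective data support %.2f)" % (est, support))
```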
