Similar Documents
20 similar documents found (search time: 46 ms)
1.
2.
Analytical models prepared from field drawings do not generally produce results that match experimental measurements. The discrepancy may be due to uncertainties in material properties, member sizes, and errors in the modelling process, so it is important to improve analytical models using experimentally obtained data. For the past several years, data obtained from ambient vibration testing have been used successfully in many cases to update analytical models so that their dynamic behaviour matches that of the real structure. This paper presents a comparison between artificial neural network (ANN) and eigensensitivity-based model updating of an existing multi-story building. A simple spring-mass analytical model, developed from the structural drawings of the building, is considered, and the corresponding spring stiffness and lumped mass of each floor are chosen as updating parameters. The advantages and disadvantages of these updating methods are discussed. Both methods ensure a physically meaningful model, which can be further employed in determining structural response and in health monitoring.
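For a single degree of freedom, the updating idea above reduces to the one-line relation f = (1/2π)√(k/m), so a stiffness parameter can be rescaled until the model frequency matches the measured one. The sketch below is a hedged illustration only: the mass, stiffness and frequency values are hypothetical, and a real multi-story update would work on the full eigenvalue problem rather than a single spring.

```python
import math

def natural_frequency(k, m):
    """Natural frequency (Hz) of a single spring-mass oscillator."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def update_stiffness(k0, m, f_measured, tol=1e-9, max_iter=50):
    """Iteratively scale stiffness so the model frequency matches the
    measured one (sensitivity-style update; exact for one DOF, since
    frequency scales with the square root of stiffness)."""
    k = k0
    for _ in range(max_iter):
        f_model = natural_frequency(k, m)
        if abs(f_model - f_measured) < tol:
            break
        # frequency ~ sqrt(k), so scale k by the squared frequency ratio
        k *= (f_measured / f_model) ** 2
    return k

m = 2.0e5        # lumped floor mass, kg (illustrative)
k0 = 1.0e8       # stiffness estimated from drawings, N/m (illustrative)
f_meas = 2.8     # frequency from ambient vibration test, Hz (illustrative)
k_upd = update_stiffness(k0, m, f_meas)
print(k_upd, natural_frequency(k_upd, m))
```

The multiplicative update converges in one step here because the single-DOF frequency–stiffness relation is exact; for a multi-DOF model the same ratio would only be a first-order (eigensensitivity) correction.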

3.
4.
Hydrological model and observation errors are often non-Gaussian and/or biased, and their statistical properties are often unknown or only partially known. Determining the true error covariance matrices is therefore a challenge for data assimilation approaches such as the widely used Kalman filter (KF) and its extensions, which assume Gaussian errors and require fully known error statistics. This paper introduces the H-infinity filter (HF) to hydrological modeling and compares HF with KF under various model and observation error conditions. HF is essentially a robust version of KF. When model performance is not well known, or changes unpredictably, HF may be preferred over KF; it is especially suitable when estimation performance in the worst error case must be guaranteed. Through the application of HF to a hypothetical hydrologic model, this paper shows that HF is less sensitive to uncertainty in the initial condition, corrects system bias more effectively, and converges to the true state faster after interruptions than KF. In particular, HF performs better in dealing with sudden human inputs (irrigation is used as an example), which are characterized by non-stationary, non-Gaussian and not fully known errors. However, HF design can be more difficult than KF design because HF performance is sensitive to its design parameters (the weights for the model and observation error terms). Through sensitivity analysis, this paper shows that these parameters have a certain range in which their "best" values are located. Tuning the HF design parameters, which can be based on the user's prior knowledge of the nature of model and observation errors, is critical for implementing HF.
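A scalar example helps fix ideas about what the KF assumes. The sketch below implements a textbook one-dimensional Kalman filter for a random-walk state; all numbers are illustrative, and an H-infinity filter would replace the gain computation with one chosen to bound worst-case, rather than mean-square, error.

```python
import random

def kalman_1d(z_obs, x0, P0, Q, R):
    """Scalar Kalman filter for a random-walk state: predict, then
    update with each observation. Returns the filtered estimates."""
    x, P = x0, P0
    estimates = []
    for z in z_obs:
        P = P + Q                 # predict: random-walk state, variance grows
        K = P / (P + R)           # Kalman gain (mean-square optimal if
                                  # Q and R are the true error variances)
        x = x + K * (z - x)       # update with the innovation
        P = (1.0 - K) * P         # posterior variance
        estimates.append(x)
    return estimates

random.seed(1)
truth = 5.0
obs = [truth + random.gauss(0, 0.5) for _ in range(200)]  # synthetic data
est = kalman_1d(obs, x0=0.0, P0=10.0, Q=1e-4, R=0.25)
print(est[-1])
```

The gain here is optimal only because Q and R match the data-generating noise exactly; the paper's point is that when they are misspecified, a robust (H-infinity) gain can be preferable.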

5.
The theory of statistical communication provides an invaluable framework within which it is possible to formulate design criteria and actually obtain solutions for digital filters. These are then applicable in a wide range of geophysical problems. The basic model for the filtering process considered here consists of an input signal, a desired output signal, and an actual output signal. If one minimizes the energy or power existing in the difference between desired and actual filter outputs, it becomes possible to solve for the so-called optimum, or least squares filter, commonly known as the “Wiener” filter. In this paper we derive from basic principles the theory leading to such filters. The analysis is carried out in the time domain in discrete form. We propose a model of a seismic trace in terms of a statistical communication system. This model trace is the sum of a signal time series plus a noise time series. If we assume that estimates of the signal shape and of the noise autocorrelation are available, we may calculate Wiener filters which will attenuate the noise and sharpen the signal. The net result of these operations can then in general be expected to increase seismic resolution. We show a few numerical examples to illustrate the model's applicability to situations one might find in practice.
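The least-squares filter described above is obtained by solving the normal equations R f = g, where R is the Toeplitz matrix of input autocorrelations and g is the crosscorrelation of the desired output with the input. The sketch below reproduces the classic two-term spiking-filter example (input wavelet (2, 1), desired spike (1, 0, 0)); the wavelet and filter length are illustrative choices, not values from the paper.

```python
def autocorr(x, k):
    """Lag-k autocorrelation (unnormalised) of a sequence."""
    return sum(x[t] * x[t - k] for t in range(k, len(x)))

def crosscorr(d, x, k):
    """Lag-k crosscorrelation of desired output d with input x."""
    return sum(d[t] * x[t - k] for t in range(len(d)) if 0 <= t - k < len(x))

def solve(A, b):
    """Gaussian elimination without pivoting (adequate for the
    positive-definite Toeplitz systems arising here)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        for j in range(i + 1, n):
            factor = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= factor * A[i][k]
            b[j] -= factor * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def wiener_filter(x, d, n):
    """Length-n least-squares (Wiener) filter mapping input x toward
    desired output d, via the normal equations R f = g."""
    R = [[autocorr(x, abs(i - j)) for j in range(n)] for i in range(n)]
    g = [crosscorr(d, x, k) for k in range(n)]
    return solve(R, g)

wavelet = [2.0, 1.0]         # input wavelet
desired = [1.0, 0.0, 0.0]    # desired spike at time zero
f = wiener_filter(wavelet, desired, 2)
print(f)  # classic spiking-filter result [10/21, -4/21]
```

Convolving the wavelet with this filter gives an output close to the desired spike; the residual error energy is the minimum attainable with two filter coefficients.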

6.
There is a general lack of awareness among ‘lay’ professionals (geophysicists included) regarding the limitations of least-squares. Using a simple numerical model under simulated conditions of observational error, the performance of least-squares and other goodness-of-fit criteria under various error conditions is investigated. The results are presented in a simplified manner that can be readily understood by the lay earth scientist. It is shown that the use of least-squares is, strictly, only valid either when the errors follow a normal probability distribution or under certain fortuitous conditions. The correct power to use (e.g. square, cube, square root, etc.) depends on the form of the error distribution. In many fairly typical practical situations, least-squares is one of the worst criteria to use. In such cases, data treatment, ‘robust statistics’ or similar processes provide an alternative approach.
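The point about the choice of power can be demonstrated directly: minimizing Σ|x − c|^p over a constant c gives the mean for p = 2 and the median for p = 1, and one gross outlier drags the former far more than the latter. The brute-force grid search below is a hedged toy with made-up data, not the paper's numerical model.

```python
def lp_location(data, p):
    """Brute-force the constant c minimizing sum(|x - c| ** p),
    searching a fine grid between the data extremes."""
    lo, hi = min(data), max(data)
    candidates = [lo + i * (hi - lo) / 2000 for i in range(2001)]
    return min(candidates, key=lambda c: sum(abs(x - c) ** p for x in data))

data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]   # one gross outlier
mean_like = lp_location(data, 2)     # least-squares: pulled toward 25
median_like = lp_location(data, 1)   # least-absolute: stays near 10
print(mean_like, median_like)
```

With Gaussian errors p = 2 is correct; with the heavy-tailed error simulated here, the p = 1 criterion is clearly the better estimator of location.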

7.
Many past studies have verified numerical simulations of tsunamis using only qualitative and subjective methods. This paper investigates the relative merits of several indices that can be used to objectively verify tsunami model performance. A number of commonly used indices, such as error in the maximum amplitude and root-mean-square error, are considered, as well as some further indices that have been developed for other specific applications. Desirable qualities of the indices are presented and these include computational efficiency, invariance when applied to tsunamis of any size or to time series of varying length (including relatively short series), and the ability to clearly identify a single best prediction from within a set of simulations. A scenario from the T2 tsunami scenario database is chosen as the control. From this, time series of sea-level elevations are extracted at designated test points located at a range of distances from the tsunami source region. Parameters of the T2 database are perturbed in order to examine the performance of the indices. Of the indices examined, several performed better than others, with Willmott's index of agreement and Watterson's transformed Mielke index found to be the best. Combining data from multiple locations was shown to improve the performance of the indices. This study forms the basis for future evaluation of the indices using real observations of tsunamis.
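Willmott's index of agreement, one of the two best-performing indices in the study, is simple to compute: d = 1 − Σ(Pᵢ − Oᵢ)² / Σ(|Pᵢ − Ō| + |Oᵢ − Ō|)², bounded between 0 (no skill) and 1 (perfect). The sketch below uses invented sea-level series purely to illustrate the formula alongside RMSE.

```python
def rmse(obs, pred):
    """Root-mean-square error between observed and predicted series."""
    n = len(obs)
    return (sum((p - o) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5

def willmott_d(obs, pred):
    """Willmott's index of agreement: 1 = perfect, 0 = no skill."""
    obar = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - obar) + abs(o - obar)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den

# invented sea-level elevations (m) at one test point
obs = [0.0, 0.4, 1.2, 0.7, -0.3, -0.8]
good = [0.1, 0.5, 1.1, 0.6, -0.2, -0.7]   # close prediction
bad = [0.5, -0.2, 0.3, 1.0, 0.4, 0.1]     # poor prediction
print(willmott_d(obs, good), willmott_d(obs, bad))
print(rmse(obs, good), rmse(obs, bad))
```

Unlike raw RMSE, d is dimensionless and bounded, which is one reason it ranks simulations consistently across tsunamis of different sizes.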

8.
Due to the complexity of the influencing factors and the limitations of existing scientific knowledge, current monthly inflow prediction accuracy does not yet meet the requirements of various water users. A flow time series is usually considered a combination of quasi-periodic signals contaminated by noise, so prediction accuracy can be improved by data preprocessing. Singular spectrum analysis (SSA), an efficient preprocessing method, is used to decompose the original inflow series into a filtered series and noise. Current applications of SSA select only the filtered series as model input, without considering the noise. This paper attempts to show that the noise may contain hydrological information and cannot be ignored, and a new method that considers both the filtered and noise series is proposed. Support vector machine (SVM), genetic programming (GP), and seasonal autoregressive (SAR) models are chosen as the prediction models. Four criteria are selected to evaluate prediction performance: Nash–Sutcliffe efficiency, water balance efficiency, relative error of the annual average maximum (REmax) monthly flow, and relative error of the annual average minimum (REmin) monthly flow. The monthly inflow data of the Three Gorges Reservoir is analyzed as a case study. The main results are as follows: (1) coupled with SSA, the SVM and GP models improve significantly in predicting the inflow series, whereas there is no significant positive change in the performance of the SAR(1) model; (2) after considering the noise, the modified SSA-SVM and modified SSA-GP models both perform better than the SSA-SVM and SSA-GP models. The results indicate that the SSA preprocessing method can significantly improve the prediction precision of SVM and GP models, and also show that the noise series still contains information and has an important influence on model performance.
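A minimal SSA decomposition can be written in a few lines, assuming NumPy is available: embed the series in a trajectory matrix, take the SVD, and rebuild a filtered series from the leading components by diagonal averaging, leaving the remainder as the noise series that the modified method would also feed to the predictor. The window length, component count and synthetic series below are illustrative, not the settings used for the Three Gorges data.

```python
import numpy as np

def ssa_decompose(x, window, n_components):
    """Minimal singular spectrum analysis: embed the series in a Hankel
    trajectory matrix, take the SVD, and rebuild a filtered series from
    the leading components by anti-diagonal averaging.
    Returns (filtered, noise) with filtered + noise == x."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # window x k
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # anti-diagonal averaging back to a 1-D series: entry (i, j) of the
    # trajectory matrix corresponds to sample i + j of the series
    filtered = np.array([Xr[::-1].diagonal(j - window + 1).mean()
                         for j in range(n)])
    return filtered, x - filtered

t = np.arange(200)
signal = np.sin(2 * np.pi * t / 25)          # quasi-periodic "inflow"
rng = np.random.default_rng(0)
x = signal + rng.normal(0, 0.3, size=t.size)  # contaminated series
filtered, noise = ssa_decompose(x, window=40, n_components=2)
print(np.std(x - signal), np.std(filtered - signal))
```

A pure sinusoid occupies only two singular components of the trajectory matrix, so keeping the two leading components recovers the signal while the residual goes to the noise series; the paper's modification would then use both series as predictor inputs rather than discarding the second.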

9.
The deformation of the wind field above complex (mountainous) terrain is a popular subject of research related to numerical modelling using GIS techniques. This type of modelling requires, as input data, information on terrain roughness and a digital terrain/elevation model. This information may be provided by remote sensing data, so its accuracy and spatial resolution may affect the results of modelling. This paper represents an attempt to conduct wind-field modelling in the area of the Śnieżnik Massif (Eastern Sudetes). The modelling was conducted in WindStation 2.0.10 software (using the computational fluid dynamics solver Canyon). Two different elevation models were used: the Global Land Survey Digital Elevation Model (GLS DEM) and Digital Terrain Elevation Data (DTED) Level 2. The terrain roughness raster was generated on the basis of Corine Land Cover 2006 (CLC 2006) data. The output data were post-processed in ArcInfo 9.3.1 software to achieve a high-quality cartographic presentation. Experimental modelling was conducted for situations on 26 November 2011, 25 May 2012, and 26 May 2012, based on a limited number of field measurements and using parameters of the atmospheric boundary layer derived from the aerological surveys provided by the closest meteorological stations. The model was run at 100-m and 250-m spatial resolutions. To verify the model's performance, leave-one-out cross-validation was used, and the calculated indices allowed a comparison with the results of former studies of WindStation's performance. The experiment demonstrated very subtle differences between the results obtained using DTED and GLS DEM elevation data. The CLC 2006 roughness data provided more noticeable improvements in the model's performance, but only at the resolution corresponding to the original roughness data.
The best input data configuration yielded the following mean error measures: root-mean-square error of velocity = 1.0 m/s and mean absolute error of direction = 30°. The author concludes that, under specific meteorological conditions (relatively strong and constant synoptic forcing) and using the aforementioned input data, the Canyon model provides fairly acceptable results. Similarly, the quality of the presented remote sensing data is suitable for wind velocity modelling at the proposed resolution. However, CLC 2006 land use data should first be verified against higher-resolution satellite or aerial imagery.

10.
In this paper we evaluate the ability of several altimeter systems, considered separately as well as together with tide gauges, to control the time evolution of a barotropic model of the North Sea shelf. The evaluation is performed in the framework of the particular model errors due to uncertainties in bathymetry. An Ensemble Kalman Filter (EnKF) data assimilation approach is adopted, and observing-system simulation experiments (OSSEs) are carried out using ensemble spread statistics. The skill criterion for comparing observing networks is therefore not based on the misfit between two simulations, as in classic twin experiments, but on the reduction of ensemble variance resulting from the assimilation. Future altimeter systems, such as the Wide Swath Ocean Altimeter (WSOA) and satellite constellations, are considered in this work. A single WSOA exhibits, for instance, performance similar to two nadir satellites in terms of sea-level correction, and is better than three satellites in terms of model velocity control. Generally speaking, the temporal resolution of observations is shown to be of major importance for controlling the model error in these experiments. This result is clearly related to this study's focus on the specific high-frequency response of the ocean to meteorological forcing. Altimeter systems lack adequate temporal sampling to properly correct the major part of the model error in this context, whereas tide gauges, which provide a much finer time resolution, lead to better global statistical performance. In further detail, tide gauges and altimetry are shown to be complementary over the whole shelf: tide gauge networks make it possible to properly control model error in a ∼100-km coastal band, while high-resolution altimeter systems are more efficient farther from the coast.

11.
Based on the Matlab platform, and considering simultaneously the saturation dosage point of a polycarboxylate high-performance superplasticizer, the initial Marsh time or fluidity of the cement paste, the fluidity loss rate over time, the setting time, and the bleeding rate, the factors comparing the tested cement containing the superplasticizer with a reference cement are taken as indices for judging compatibility. Corresponding membership functions are established and a fuzzy mathematical evaluation method is applied, overcoming the limitations of the reference cement, to give a fairly comprehensive and accurate overall evaluation of the compatibility between polycarboxylate high-performance superplasticizers and cement. Analysis of the test results shows that the performance of ready-mixed concrete containing a polycarboxylate superplasticizer rated as excellent is consistent with that of its cement paste, with good correlation. The study offers guidance for the preparation of concrete with green high-performance superplasticizers.

12.
The use of shear wave velocity data as a field index for evaluating the liquefaction potential of sands is receiving increased attention because both shear wave velocity and liquefaction resistance are similarly influenced by many of the same factors, such as void ratio, state of stress, stress history and geologic age. In this paper, a support vector machine (SVM) based classification approach has been used to assess liquefaction potential from actual shear wave velocity data. This approach is an approximate implementation of the structural risk minimization (SRM) induction principle, which aims at minimizing a bound on the generalization error of a model rather than minimizing only the mean square error over the data set. Here SVM has been used as a classification tool to predict the liquefaction potential of a soil based on shear wave velocity. The dataset consists of information on soil characteristics such as effective vertical stress (σ′v0), soil type and shear wave velocity (Vs), and earthquake parameters such as peak horizontal acceleration (amax) and earthquake magnitude (M). Of the available 186 datasets, 130 are used for training and the remaining 56 for testing the model. The study indicates that SVM can successfully model the complex relationship between seismic parameters, soil parameters and liquefaction potential. The model based on soil characteristics uses σ′v0, soil type, Vs, amax and M as input parameters; the other model, based on shear wave velocity alone, uses Vs, amax and M. It is demonstrated that Vs alone can be used to predict the liquefaction potential of a soil using a support vector machine model.

13.
Hydrological models have been widely applied in flood forecasting, water resource management and other environmental sciences. Most hydrological models calibrate and validate parameters against available records; however, the first step of hydrological simulation is always to split the samples quantitatively and objectively into calibration and validation sets. In this paper, we propose a framework to address this issue by combining a hierarchical scheme, applied by trial and error, for systematic testing of hydrological models with hypothesis testing to check the statistical significance of goodness-of-fit indices. That is, the framework evaluates the performance of a hydrological model using sample splitting for calibration and validation, and assesses the statistical significance of the Nash–Sutcliffe efficiency index (Ef), which is commonly used to assess the performance of hydrological models. A sample splitting scheme is judged acceptable if the Ef values exceed the threshold of the hypothesis test. According to the requirements of the hierarchical scheme for systematic testing of hydrological models, cross calibration and validation help to increase the reliability of the splitting scheme and to reduce the effective range of sample sizes for both calibration and validation. It is illustrated that the threshold of Ef depends on the significance level, the evaluation criteria (both regarded as the population), the distribution type, and the sample size. The performance rating of Ef is largely dependent on the evaluation criteria. Three types of distributions, based on an approximately standard normal distribution, a Chi-square distribution, and a bootstrap method, are used to investigate their effects on the thresholds, at two commonly used significance levels.
The highest threshold comes from the bootstrap method, the middle one from the approximately standard normal distribution, and the lowest from the Chi-square distribution. The smaller the sample size, the higher the threshold values; sample splitting was improved by providing more records. In addition, outliers with a large bias between the simulation and the observation can affect the sample values of Ef, and hence the output of the sample splitting scheme; physical hydrological processes and the purpose of the model should be considered carefully when assessing outliers. The proposed framework cannot guarantee the best splitting scheme, but the results show the necessary conditions, from a statistical point of view, for splitting schemes used to calibrate and validate hydrological models.
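The Ef (Nash–Sutcliffe) statistic and a bootstrap threshold of the kind discussed can be sketched as follows; the synthetic records, significance level and replicate count are illustrative, and the paper's actual procedure also involves the normal and Chi-square alternatives.

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the observed mean."""
    obar = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - obar) ** 2 for o in obs)
    return 1.0 - num / den

def bootstrap_threshold(obs, sim, level=0.05, n_boot=2000, seed=42):
    """Bootstrap the sampling distribution of Ef by resampling
    (obs, sim) pairs with replacement; return the lower `level`
    quantile as an acceptance threshold."""
    rng = random.Random(seed)
    pairs = list(zip(obs, sim))
    values = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        o, s = zip(*sample)
        values.append(nse(o, s))
    values.sort()
    return values[int(level * n_boot)]

random.seed(0)
obs = [10 + 5 * random.random() for _ in range(60)]      # synthetic record
sim = [o + random.gauss(0, 1.0) for o in obs]            # synthetic model run
print(nse(obs, sim), bootstrap_threshold(obs, sim))
```

A splitting scheme would be accepted, in the paper's terms, if the Ef of the validation period exceeds such a threshold at the chosen significance level.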

14.
This paper describes an application of the ensemble Kalman filter (EnKF) in which streamflow observations are used to update states in a distributed hydrological model. We demonstrate that the standard implementation of the EnKF is inappropriate because of non-linear relationships between model states and observations. Transforming streamflow into log space before computing error covariances improves filter performance. We also demonstrate that model simulations improve when we use a variant of the EnKF that does not require perturbed observations. Our attempt to propagate information to neighbouring basins was unsuccessful, largely due to inadequacies in modelling the spatial variability of hydrological processes. New methods are needed to produce ensemble simulations that both reflect total model error and adequately simulate the spatial variability of hydrological states and fluxes.
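Why the log transform helps can be seen from the gain computation: the EnKF gain is built from linear covariances between states and predicted observations, and if streamflow responds roughly exponentially to storage, those covariances understate the relationship. The toy below (a hypothetical exponential storage-flow relation, not the paper's model) shows the correlation becoming exactly linear in log space.

```python
import math
import random

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Ensemble of (standardised) storage states; streamflow responds
# exponentially to storage -- a common source of non-linearity.
random.seed(3)
states = [random.gauss(0.0, 1.0) for _ in range(500)]
flows = [math.exp(1.5 * s) for s in states]

r_raw = pearson(states, flows)                         # weakened by skew
r_log = pearson(states, [math.log(q) for q in flows])  # exactly linear
print(r_raw, r_log)
```

Since the EnKF exploits only this linear correlation, computing error covariances on log-transformed streamflow recovers information that the raw-space filter would lose.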

15.
The automatic calibration is done not by a hill-climbing method but by a trial-and-error method carried out automatically by a computer program. The feedback procedure compares criteria obtained from the observed hydrograph with those from the calculated hydrograph output by the working tank model. The two criteria are discharge volume and the shape of the hydrograph; their feedbacks correspond to displacement feedback and velocity feedback in automatic control. The output of the working tank model is composed of components, the outputs from each of the tanks. Correspondingly, the whole period is divided into subperiods, in each of which one of the components plays the main part. The volume and shape criteria are calculated in each subperiod and are used to adjust the respective tanks. The feedback procedure starts from some initial model and converges very quickly, after several (usually fewer than 15) iterations, and the result obtained is very good.
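The volume-feedback part of this procedure can be sketched for a single hypothetical linear tank: scale the outlet coefficient by the ratio of observed to simulated discharge volume and repeat. The real method also uses the hydrograph-shape criterion and adjusts several tanks per subperiod; everything below (rainfall, coefficients) is invented for illustration.

```python
def run_tank(c, rain, s0=50.0):
    """One linear tank: outflow each step is c times current storage."""
    s, flow = s0, []
    for r in rain:
        s += r
        q = c * s
        s -= q
        flow.append(q)
    return flow

def calibrate_by_volume(rain, observed, c0=0.05, n_iter=100):
    """Trial-and-error feedback: scale the outlet coefficient by the
    ratio of observed to simulated discharge volume each iteration."""
    c = c0
    for _ in range(n_iter):
        sim = run_tank(c, rain)
        c *= sum(observed) / sum(sim)
    return c

rain = [0, 5, 20, 10, 2, 0, 0, 8, 0, 0, 1, 0] * 3   # invented rainfall
true_c = 0.2
observed = run_tank(true_c, rain)   # synthetic "observed" hydrograph
c_cal = calibrate_by_volume(rain, observed)
print(c_cal)
```

Because total simulated volume increases monotonically with the outlet coefficient, the multiplicative correction converges steadily toward the coefficient that generated the observations, mirroring the displacement-feedback idea in the abstract.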

16.
New concepts in ecological risk assessment: where do we go from here?
Through the use of safety factors, single-species test data have been adequate for protective hazard assessments and criteria setting; but because hazard quotients consider neither the presence of multiple species, each with its own sensitivity, nor the interactions that can occur between these species in a functioning community, they are ill-suited to environmental risk assessment. Significant functional redundancy occurs in most ecosystems, but this is poorly captured by single-species tests conducted under laboratory conditions. A significant advance in effects assessment was the use of the microcosm as a unit within which to test interacting populations of organisms. The microcosm has allowed the measurement of environmental effect measures such as the NOAEC(community) under laboratory or field conditions, and the application of this and other similarly derived measures to ecological risk assessment (ERA). More recently, distributions of single-species laboratory test data have been used for criteria setting and, combined with distributions of exposure concentrations, for risk assessment. Distributions of species sensitivity values have been used a priori for setting environmental quality criteria such as the final acute value (FAV) derived for water quality criteria. Similar distributional approaches have been combined with modeled or measured concentrations to produce estimates of the joint probability of a single species being affected, or of a proportion of organisms in a community being impacted, in a posteriori risk assessments. These techniques have not been widely applied to risk assessment of dredged materials; however, with appropriate consideration of bioavailability and of the spatial extent and nature of the data, they can be applied to soils and sediments.

17.
To date, an outstanding issue in hydrologic data assimilation is a proper way of dealing with forecast bias. A frequently used method to bypass this problem is to rescale the observations to the model climatology. While this approach improves the variability in the modeled soil wetness and discharge, it is not designed to correct the results for any bias. Alternatively, attempts have been made towards incorporating dynamic bias estimates into the assimilation algorithm. Persistent bias models are most often used to propagate the bias estimate, where the a priori forecast bias error covariance is calculated as a constant fraction of the unbiased a priori state error covariance. The latter approach is a simplification to the explicit propagation of the bias error covariance. The objective of this paper is to examine to which extent the choice for the propagation of the bias estimate and its error covariance influence the filter performance. An Observation System Simulation Experiment (OSSE) has been performed, in which ground water storage observations are assimilated into a biased conceptual hydrologic model. The magnitudes of the forecast bias and state error covariances are calibrated by optimizing the innovation statistics of groundwater storage. The obtained bias propagation models are found to be identical to persistent bias models. After calibration, both approaches for the estimation of the forecast bias error covariance lead to similar results, with a realistic attribution of error variances to the bias and state estimate, and significant reductions of the bias in both the estimates of groundwater storage and discharge. Overall, the results in this paper justify the use of the traditional approach for online bias estimation with a persistent bias model and a simplified forecast bias error covariance estimation.

18.
A framework formula for performance-based earthquake engineering, advocated and used by researchers at the Pacific Earthquake Engineering Research (PEER) Center, is closely examined. The formula was originally intended for computing the mean annual rate of a performance measure exceeding a specified threshold. However, it has also been used for computing the probability that a performance measure will exceed a specified threshold during a given period of time. It is shown that the use of the formula to compute such probabilities could lead to errors when non-ergodic variables (aleatory or epistemic) are present. Assuming a Poisson model for the occurrence of earthquakes in time, an exact expression is derived for the probability distribution of the maximum of a performance measure over a given period of time, properly accounting for non-ergodic uncertainties. This result is used to assess the approximation involved in the PEER formula for computing probabilities. It is found that the PEER approximation of the probability has a negligible error for probabilities less than about 0.01. For larger probabilities, the error depends on the magnitude of non-ergodic uncertainties and the duration of time considered and can be as much as 20% for probabilities around 0.05 and 30% for probabilities around 0.10. The error is always on the conservative side. Copyright © 2005 John Wiley & Sons, Ltd.
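The simplest piece of this comparison, ignoring non-ergodic uncertainties, is the gap between the exact Poisson exceedance probability 1 − e^(−λt) and the rate-times-time reading λt. The sketch below (with illustrative rates) shows that this part of the approximation is always on the conservative side and that its relative error grows with the probability level; the much larger errors quoted in the abstract come from the non-ergodic terms, which this toy does not model.

```python
import math

def p_exact(rate, t):
    """Probability of at least one exceedance in time t under a
    Poisson occurrence model: P = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate * t)

def p_approx(rate, t):
    """Rate-times-time reading, i.e. taking the mean annual rate of
    exceedance directly as a probability over the period."""
    return rate * t

t = 50.0  # exposure period in years (illustrative)
for lam in (0.0002, 0.001, 0.002):  # illustrative mean annual rates
    exact, approx = p_exact(lam, t), p_approx(lam, t)
    print(lam, exact, approx, (approx - exact) / exact)
```

Since λt ≥ 1 − e^(−λt) for all λt ≥ 0, the simple reading never understates the risk, consistent with the abstract's finding that the error is always conservative.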

19.
李静渊  杨江  李农发  余剑锋  李震 《地震》2020,40(3):167-178
The SS-Y extensometer has been installed and used for many years in crustal deformation observation in China. To ensure an adequate observational signal-to-noise ratio, its baseline length is generally 10–30 m, but such a long baseline makes the instrument susceptible to environmental disturbance. Moreover, because it lacks a sealed design, long-term use leads to problems such as scale-value (calibration) error. To address these shortcomings of the SS-Y extensometer, a new short-baseline extensometer was developed with a baseline length of 1 m, using a high-precision capacitive sensor for displacement measurement and a newly designed, fully sealed structure that gives the instrument better immunity to interference. Tests of linearity, sensitivity and other parameters, together with observation trials, show that the new short-baseline extensometer has good performance characteristics and a good observational signal-to-noise ratio, laying a foundation for further optimization of the instrument.

20.
Simulations from hydrological models are affected by potentially large uncertainties stemming from various sources, including model parameters and observational uncertainty in the input/output data. Understanding the relative importance of such sources of uncertainty is essential to support model calibration, validation and diagnostic evaluation and to prioritize efforts for uncertainty reduction. It can also support the identification of ‘disinformative data’ whose values are the consequence of measurement errors or inadequate observations. Sensitivity analysis (SA) provides the theoretical framework and the numerical tools to quantify the relative contribution of different sources of uncertainty to the variability of the model outputs. In traditional applications of global SA (GSA), model outputs are aggregations of the full set of a simulated variable. For example, many GSA applications use a performance metric (e.g. the root mean squared error) as model output that aggregates the distances of a simulated time series to available observations. This aggregation of propagated uncertainties prior to GSA may lead to a significant loss of information and may cover up local behaviour that could be of great interest. Time‐varying sensitivity analysis (TVSA), where the aggregation and SA are repeated at different time steps, is a viable option to reduce this loss of information. In this work, we use TVSA to address two questions: (1) Can we distinguish between the relative importance of parameter uncertainty versus data uncertainty in time? (2) Do these influences change in catchments with different characteristics? To our knowledge, the results present one of the first quantitative investigations on the relative importance of parameter and data uncertainty across time. We find that the approach is capable of separating influential periods across data and parameter uncertainties, while also highlighting significant differences between the catchments analysed. 
Copyright © 2016 The Authors. Hydrological Processes. Published by John Wiley & Sons Ltd.
