Similar Documents
20 similar documents were retrieved for this query.
1.
Dense networks of wireless structural health monitoring systems can effectively remove the disadvantages associated with current wire‐based sparse sensing systems. However, recorded data sets may have relative time‐delays due to interference in radio transmission or inherent internal sensor clock errors. For structural system identification and damage detection purposes, sensor data require that they are time synchronized. The need for time synchronization of sensor data is illustrated through a series of tests on asynchronous data sets. Results from the identification of structural modal parameters show that frequencies and damping ratios are not influenced by the asynchronous data; however, the error in identifying structural mode shapes can be significant. The results from these tests are summarized in Appendix A. The objective of this paper is to present algorithms for measurement data synchronization. Two algorithms are proposed for this purpose. The first algorithm is applicable when the input signal to a structure can be measured. The time‐delay between an output measurement and the input is identified based on an ARX (auto‐regressive model with exogenous input) model for the input–output pair recordings. The second algorithm can be used for a structure subject to ambient excitation, where the excitation cannot be measured. An ARMAV (auto‐regressive moving average vector) model is constructed from two output signals and the time‐delay between them is evaluated. The proposed algorithms are verified with simulation data and recorded seismic response data from multi‐story buildings. The influence of noise on the time‐delay estimates is also assessed. Copyright © 2004 John Wiley & Sons, Ltd.  相似文献   
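
A minimal sketch of the first algorithm's idea, assuming a measurable input: scan candidate delays, fit an ARX model for each candidate, and keep the delay giving the smallest residual. The model orders, function name, and delay range below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def arx_delay_estimate(u, y, na=4, nb=4, max_delay=50):
    """Scan candidate delays d, fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-d-j]
    by least squares, and return the delay giving the smallest residual."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = len(y)
    best_d, best_err = 0, np.inf
    for d in range(max_delay + 1):
        k0 = max(na, nb + d)
        # regressor rows: [y[k-1..k-na], u[k-d-1..k-d-nb]]
        Phi = np.array([np.concatenate([y[k - na:k][::-1], u[k - d - nb:k - d][::-1]])
                        for k in range(k0, n)])
        theta, *_ = np.linalg.lstsq(Phi, y[k0:], rcond=None)
        err = np.mean((y[k0:] - Phi @ theta) ** 2)
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```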

2.
In this paper, the applicability of an auto‐regressive model with exogenous inputs (ARX) in the frequency domain to structural health monitoring (SHM) is established. Damage sensitive features that explicitly consider non‐linear system input/output relationships are extracted from the ARX model. Furthermore, because of the non‐Gaussian nature of the extracted features, Extreme Value Statistics (EVS) is employed to develop a robust damage classifier. EVS provides superior performance to standard statistical methods because the data of interest are in the tails (extremes) of the damage sensitive feature distribution. The suitability of the ARX model, combined with EVS, to non‐linear damage detection is demonstrated using vibration data obtained from a laboratory experiment of a three‐story building model. It is found that the vibration‐based method, while able to discern when damage is present in the structure, is unable to localize the damage to a particular joint. An impedance‐based active sensing method using piezoelectric (PZT) material as both an actuator and a sensor is then investigated as an alternative solution to the problem of damage localization. Copyright © 2005 John Wiley & Sons, Ltd.  相似文献   
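
As a hedged illustration of the EVS step only (not the paper's full feature-extraction pipeline), one can fit a generalized extreme value distribution to block maxima of a baseline damage-sensitive feature and place an alarm threshold at a high quantile; the block size and quantile below are arbitrary choices.

```python
import numpy as np
from scipy.stats import genextreme

def evs_threshold(baseline_feature, block=64, quantile=0.99):
    """Fit a GEV distribution to block maxima of a baseline damage-sensitive
    feature (e.g. ARX residual magnitude) and return an alarm threshold."""
    x = np.abs(np.asarray(baseline_feature, float))
    n_blocks = len(x) // block
    maxima = x[:n_blocks * block].reshape(n_blocks, block).max(axis=1)
    c, loc, scale = genextreme.fit(maxima)
    return genextreme.ppf(quantile, c, loc=loc, scale=scale)
```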

3.
With the development of seismological observation techniques and the broadening of seismic data applications, especially the digitization of data in earthquake station networks and the improvement of measurement precision, data volumes are growing geometrically, which makes storing and transferring these data difficult. To preserve all information, seismic data, like medical images, must in many applications be compressed without error. In general, traditional compression meth…

4.
Due to the particular nature of seismic data, they must in some cases be handled with lossless compression algorithms. In this paper, a lossless compression algorithm based on the integer wavelet transform is studied. Compared with traditional algorithms, it achieves a better compression ratio. The CDF(2, n) biorthogonal wavelet family yields a higher compression ratio than other CDF families, SWE, and CRF, owing to its ability to cancel data redundancies and concentrate data characteristics. The CDF(2, n) family is therefore a suitable choice of wavelet for lossless compression of seismic data. Contribution No.04FE1019, Institute of Geophysics, China Earthquake Administration.
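
For context, the n = 2 member of the CDF(2, n) family corresponds to the reversible 5/3 integer lifting transform; a one-level sketch is shown below with simplified boundary handling (the inverse undoes the update and predict steps in reverse order, so no information is lost).

```python
import numpy as np

def cdf22_forward(x):
    """One level of the reversible integer CDF(2,2) (5/3) lifting transform.
    Returns approximation s and detail d coefficients. Boundary handling is a
    simple sample repeat, and an even-length signal is assumed in this sketch."""
    x = np.asarray(x, dtype=np.int64)
    assert x.size % 2 == 0, "even-length signal assumed in this sketch"
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict step: d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
    even_r = np.append(even, even[-1])
    d = odd - (even_r[:len(odd)] + even_r[1:len(odd) + 1]) // 2
    # update step: s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
    d_l = np.append(d[0], d)
    s = even + (d_l[:len(even)] + d_l[1:len(even) + 1] + 2) // 4
    return s, d
```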

5.
Due to the large scale and complexity of civil infrastructures, structural health monitoring typically requires a substantial number of sensors, which consequently generate huge volumes of sensor data. Innovative sensor data compression techniques are highly desired to facilitate efficient data storage and remote retrieval of sensor data. This paper presents a vibration sensor data compression algorithm based on the Differential Pulse Code Modulation (DPCM) method and the consideration of effects of signal distortion due to lossy data compression on structural system identification. The DPCM system concerned consists of two primary components: linear predictor and quantizer. For the DPCM system considered in this study, the Least Square method is used to derive the linear predictor coefficients and Jayant quantizer is used for scalar quantization. A 5-DOF model structure is used as the prototype structure in numerical study. Numerical simulation was carried out to study the performance of the proposed DPCM-based data compression algorithm as well as its effect on the accuracy of structural identification including modal parameters and second order structural parameters such as stiffness and damping coefficients. It is found that the DPCM-based sensor data compression method is capable of reducing the raw sensor data size to a significant extent while having a minor effect on the modal parameters as well as second order structural parameters identified from reconstructed sensor data.  相似文献   
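
A minimal closed-loop DPCM sketch in the spirit of the abstract: a least-squares linear predictor followed by a scalar quantizer. The adaptive (Jayant) step-size logic is replaced here by a fixed step for brevity, so this is not the paper's exact coder.

```python
import numpy as np

def dpcm_compress(x, order=4, step=0.01):
    """Least-squares linear predictor plus a fixed uniform scalar quantizer.
    Returns the quantized prediction residuals and the predictor coefficients."""
    x = np.asarray(x, dtype=float)
    # fit coefficients a so that x[k] ~ a . [x[k-1], ..., x[k-order]]
    Phi = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    a, *_ = np.linalg.lstsq(Phi, x[order:], rcond=None)
    # closed-loop DPCM: predict from reconstructed samples so encoder and decoder stay in sync
    recon = list(x[:order])
    codes = []
    for k in range(order, len(x)):
        pred = np.dot(a, recon[-1:-order - 1:-1])
        q = int(round((x[k] - pred) / step))   # quantized residual (the transmitted symbol)
        codes.append(q)
        recon.append(pred + q * step)
    return np.array(codes), a
```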

6.
Civil engineering structures are often subjected to multidirectional actions such as earthquake ground motion, which lead to complex structural responses. The contributions from the latter multidirectional actions to the response are highly coupled, leading to a MIMO system identification problem. Compared with single‐input, multiple‐output (SIMO) system identification, MIMO problems are more computationally complex and error prone. In this paper, a new system identification strategy is proposed for civil engineering structures with multiple inputs that induce strong coupling in the response. The proposed solution comprises converting the MIMO problem into separate SIMO problems, decoupling the outputs by extracting the contribution from the respective input signals to the outputs. To this end, a QR factorization‐based decoupling method is employed, and its performance is examined. Three factors, which affect the accuracy of the decoupling result, including memory length, input correlation, and system damping, are investigated. Additionally, a system identification method that combines the autoregressive model with exogenous input (ARX) and the Eigensystem Realization Algorithm (ERA) is proposed. The associated extended modal amplitude coherence and modal phase collinearity are used to delineate the structural and noise modes in the fitted ARX model. The efficacy of the ARX‐ERA method is then demonstrated through identification of the modal properties of a highway overcrossing bridge. Copyright © 2013 John Wiley & Sons, Ltd.  相似文献   
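
A sketch of the decoupling idea under simplifying assumptions (an FIR contribution model, all inputs measured, more samples than coefficients): the output is regressed on lagged copies of every input, the stacked regressor is QR-factorized, and each input's fitted contribution is extracted to form per-input SIMO records. The FIR form and names are illustrative, not the paper's exact formulation; `memory` corresponds to the memory-length factor discussed above.

```python
import numpy as np

def decouple_outputs(inputs, y, memory=64):
    """Split one output record into per-input contributions by fitting one FIR
    kernel of length `memory` per input with a QR-based least-squares solve."""
    y = np.asarray(y, float)
    n = len(y)
    blocks = []
    for u in inputs:                      # one lagged-regressor block per input
        u = np.asarray(u, float)
        cols = [u[memory - 1 - j:n - j] for j in range(memory)]
        blocks.append(np.column_stack(cols))
    Phi = np.hstack(blocks)
    Q, R = np.linalg.qr(Phi)              # QR factorization of the stacked regressor
    h = np.linalg.solve(R, Q.T @ y[memory - 1:])
    # contribution of input m to the output (what a SIMO identification would then use)
    return [blocks[m] @ h[m * memory:(m + 1) * memory] for m in range(len(inputs))]
```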

7.
Structural identification is the inverse problem of estimating physical parameters of a structural system from its vibration response measurements. Incomplete instrumentation and ambient vibration testing generally result in incomplete and arbitrarily normalized measured modal information, often leading to an ill‐conditioned inverse problem and non‐unique identification results. The identifiability of any parameter set of interest depends on the amount of independent available information. In this paper, we consider the identifiability of the mass and stiffness parameters of shear‐type systems in output‐only situations with incomplete instrumentation. A mode shape expansion‐cum‐mass normalization approach is presented to obtain the complete mass normalized mode shape matrix, starting from the incomplete non‐normalized modes identified using any operational modal analysis technique. An analysis is presented to determine the minimum independent information carried by any given sensor set‐up. This is used to determine the minimum necessary number and location of sensors from the point of view of minimum necessary information for identification. The different theoretical discussions are illustrated using numerical simulations and shake table experiments. It is shown that the proposed identification algorithm is able to obtain reliably accurate physical parameter estimates under the constraints of minimal instrumentation, minimal a priori information, and unmeasured input. The sensor placement rules can be used in experiment design to determine the necessary number and location of sensors on the monitored system. John Wiley & Sons, Ltd.  相似文献   
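
For reference, the target of the expansion-cum-normalization step is a mass-normalized mode shape matrix; the elementary scaling it aims to reproduce is shown below for the case where the mass matrix is known (the paper's contribution is recovering this when the mass matrix and parts of the mode shapes are not directly available).

```python
import numpy as np

def mass_normalize(Phi, M):
    """Scale arbitrarily normalized mode shapes (columns of Phi) so that
    phi_j^T M phi_j = 1 for each mode j, assuming the mass matrix M is known."""
    scales = np.sqrt(np.einsum('ij,ik,kj->j', Phi, M, Phi))
    return Phi / scales
```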

8.
Images from satellite platforms are a valid aid in order to obtain distributed information about hydrological surface states and parameters needed in calibration and validation of the water balance and flood forecasting. Remotely sensed data are easily available on large areas and with a frequency compatible with land cover changes. In this paper, remotely sensed images from different types of sensor have been utilized as a support to the calibration of the distributed hydrological model MOBIDIC, currently used in the experimental system of flood forecasting of the Arno River Basin Authority. Six radar images from ERS‐2 synthetic aperture radar (SAR) sensors (three for summer 2002 and three for spring–summer 2003) have been utilized and a relationship between soil saturation indexes and backscatter coefficient from SAR images has been investigated. Analysis has been performed only on pixels with meagre or no vegetation cover, in order to legitimize the assumption that water content of the soil is the main variable that influences the backscatter coefficient. Such pixels have been obtained by considering vegetation indexes (NDVI) and land cover maps produced by optical sensors (Landsat‐ETM). In order to calibrate the soil moisture model based on information provided by SAR images, an optimization algorithm has been utilized to minimize the regression error between saturation indexes from model and SAR data and error between measured and modelled discharge flows. Utilizing this procedure, model parameters that rule soil moisture fluxes have been calibrated, obtaining not only a good match with remotely sensed data, but also an enhancement of model performance in flow prediction with respect to a previous calibration with river discharge data only. Copyright © 2006 John Wiley & Sons, Ltd.  相似文献   

9.
Input data selection for solar radiation estimation
Model input data selection is a complicated process, especially for non‐linear dynamic systems. The questions of which inputs should be used and how long the training data should be for model development have been hard to solve in practice. Despite the importance of this subject, there have been insufficient reports in the published literature about inter‐comparison between different model input data selection techniques. In this study, several methods (i.e. the Gamma test, entropy theory, and AIC (Akaike's information criterion)/BIC (Bayesian information criterion)) have been explored with the aid of non‐linear models of LLR (local linear regression) and ANN (artificial neural networks). The methodology is tested in estimation of solar radiation in the Brue Catchment of England. It has been found that the conventional model selection tools such as AIC/BIC failed to demonstrate their functionality. Although the entropy theory is quite powerful and efficient to compute, it failed to pick up the best input combinations. On the other hand, it is very encouraging to find that the new Gamma test was able to choose the best input selection. However, it is surprising to note that the Gamma test significantly underestimated the required training data while the entropy theory did a better job in this aspect. This is the first study to compare the performance of those techniques for model input selections and still there are many unsolved puzzles. Copyright © 2009 John Wiley & Sons, Ltd.
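
As an illustration of the conventional baseline the study compares against, the sketch below ranks candidate input subsets of a linear surrogate model by AIC and BIC; the Gamma test and entropy measures themselves are not reproduced here.

```python
import itertools
import numpy as np

def aic_bic_input_selection(X, y, max_inputs=3):
    """Rank candidate input subsets of a linear surrogate model by AIC/BIC.
    X: (n_samples, n_candidate_inputs) array, y: target vector."""
    n, p = X.shape
    results = []
    for r in range(1, max_inputs + 1):
        for subset in itertools.combinations(range(p), r):
            A = np.column_stack([X[:, list(subset)], np.ones(n)])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            k = len(subset) + 1                       # fitted parameters incl. intercept
            aic = n * np.log(rss / n) + 2 * k
            bic = n * np.log(rss / n) + k * np.log(n)
            results.append((subset, aic, bic))
    return sorted(results, key=lambda t: t[1])        # lowest AIC first
```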

10.
This paper verifies the feasibility of the proposed system identification methods by utilizing shaking table tests of a full‐scale four‐story steel building at E‐Defense in Japan. The natural frequencies, damping ratios and modal shapes are evaluated by single‐input‐four‐output ARX models. These modal parameters are prepared to identify the mass, damping and stiffness matrices when the objective structure is modelled as a four degrees of freedom (4DOF) linear shear building in each horizontal direction. The nonlinearity in stiffness is expressed as a Bouc–Wen hysteretic system when it is modelled as a 4DOF nonlinear shear building. The identified hysteretic curves of all stories are compared to the corresponding experimental results. The simple damage detection is implemented using single‐input‐single‐output ARX models, which require only two measurements in each horizontal direction. The modal parameters are equivalent‐linearly evaluated by the recursive Least Squares Method with a forgetting factor. When the structure is damaged, its natural frequencies decrease, and the corresponding damping ratios increase. The fluctuation of the identified modal properties is the indirect information for damage detection of the structure. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   
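
The equivalent-linear tracking mentioned above relies on recursive least squares with a forgetting factor; a generic textbook form of that recursion (not tied to the paper's specific ARX structure) is given below.

```python
import numpy as np

def rls_forgetting(phi_rows, y, lam=0.98):
    """Recursive least squares with forgetting factor lam for tracking slowly
    time-varying parameters: one regressor row phi_k and one output y_k per step."""
    n_par = phi_rows.shape[1]
    theta = np.zeros(n_par)
    P = np.eye(n_par) * 1e3        # large initial covariance
    history = []
    for phi, yk in zip(phi_rows, y):
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (yk - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
        history.append(theta.copy())
    return np.array(history)       # parameter trajectory over time
```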

11.
Optimization of sub-band coding method for seismic data compression
Seismic data volumes, which require huge transmission capacities and massive storage media, continue to increase rapidly due to acquisition of 3D and 4D multiple streamer surveys, multicomponent data sets, reprocessing of prestack seismic data, calculation of post‐stack seismic data attributes, etc. We consider lossy compression as an important tool for efficient handling of large seismic data sets. We present a 2D lossy seismic data compression algorithm, based on sub‐band coding, and we focus on adaptation and optimization of the method for common‐offset gathers. The sub‐band coding algorithm consists of five stages: first, a preprocessing phase using an automatic gain control to decrease the non‐stationary behaviour of seismic data; second, a decorrelation stage using a uniform analysis filter bank to concentrate the energy of seismic data into a minimum number of sub‐bands; third, an iterative classification algorithm, based on an estimation of variances of blocks of sub‐band samples, to classify the sub‐band samples into a fixed number of classes with approximately the same statistics; fourth, a quantization step using a uniform scalar quantizer, which gives an approximation of the sub‐band samples to allow for high compression ratios; and fifth, an entropy coding stage using a fixed number of arithmetic encoders matched to the corresponding statistics of the classified and quantized sub‐band samples to achieve compression. Decompression basically performs the opposite operations in reverse order. We compare the proposed algorithm with three other seismic data compression algorithms. The high performance of our optimized sub‐band coding method is supported by objective and subjective results.  相似文献   
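
As a hedged illustration of the first stage only, an automatic gain control can be implemented as division by a smoothed local RMS amplitude; the window length is an arbitrary choice, and the gains must be retained so the trace can be restored after decompression.

```python
import numpy as np

def agc(trace, window=256, eps=1e-12):
    """Automatic gain control: divide each sample by a smoothed local RMS
    amplitude so the trace becomes closer to stationary. Returns the
    normalized trace and the gain needed for reconstruction."""
    trace = np.asarray(trace, float)
    power = np.convolve(trace ** 2, np.ones(window) / window, mode='same')
    gain = np.sqrt(power) + eps
    return trace / gain, gain
```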

12.
Lossless compression of hyperspectral images based on spectral angle matching prediction
Hyperspectral remote sensing is a principal Earth-observation technique for acquiring surface information, but the observations also create a massive storage and transmission problem for hyperspectral imagery. The study finds that hyperspectral images have distinctive spectral-context characteristics: spectral correlation can be analysed along the spectral dimension, and the spectral angle can be used to measure differences in spectral similarity between neighbouring pixels and to detect horizontal or vertical spectral boundaries. On this basis, a lossless compression algorithm based on spectral angle matching prediction (SAMP) is proposed. Experiments show that SAMP predicts better than several state-of-the-art algorithms reported in the literature, while having low complexity.
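
The core similarity measure in a SAMP-style predictor is the spectral angle between neighbouring pixel spectra; a minimal version is shown below (the prediction and entropy-coding stages of the algorithm are not reproduced).

```python
import numpy as np

def spectral_angle(a, b, eps=1e-12):
    """Spectral angle (radians) between two pixel spectra a and b, used to decide
    which neighbour (horizontal or vertical) is spectrally most similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```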

13.
Snow water equivalent (SWE) is an important indicator used in hydrology, water resources, and climate change impact. There are various methods of estimating SWE (falling in 3 categories: indirect sensors, empirical models, and process‐based models), but few studies that provide comparison across these different categories to help users make decisions on monitoring site design or method selection. Five SWE estimation methods were compared against manual snow course data collected over 2 years (2015–2016) from the Dorset Environmental Science Centre, including the gamma‐radiation‐based CS725 sensor, 3 empirical estimation models (Sexstone snow density model, McCreight & Small snow density model, and a meteorology‐based model), and the University of British Columbia Watershed Model snow energy‐balance model. Snow depth, density, and SWE were measured at the Dorset Environmental Science Centre weather station in south‐central Ontario, on a daily basis over 6 winters from 2011 to 2016. The 2 snow density‐based models, requiring daily snow depth as input, gave the best performance (R2 of .92 and .92 for McCreight & Small and Sexstone models, respectively). The CS725 sensor that receives radiation coming from soil penetrating the snowpack provided the same performance (R2 = .92), proving that the sensor is an applicable method, although it is expensive. The meteorology‐based empirical model, requiring daily climate data including temperature, precipitation and solar radiation, gave the poorest performance (R2 = .77). The energy‐balance‐based University of British Columbia Watershed Model snow module, only requiring climate data, worked better than the empirical meteorology‐based model (R2 = .9) but performed worse than the density models or CS725 sensor. Given differences in application objectives, site conditions, and budget, this comparison across SWE estimation methods may help users choose a suitable method. For ongoing and new monitoring sites, installation of a CS725 sensor coupled with intermittent manual snow course measurements (e.g., weekly) is recommended for further SWE method estimation testing and development of a snow density model.  相似文献   
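
For reference, the relation that the density-based approaches rely on once snow density is estimated is simply SWE in mm equals depth in m times bulk density in kg/m^3, since 1 kg/m^2 of water corresponds to 1 mm of SWE.

```python
def swe_mm(depth_m, density_kg_m3):
    """Snow water equivalent in mm: 1 kg/m^2 of water equals 1 mm of SWE,
    so SWE [mm] = depth [m] * bulk density [kg/m^3]."""
    return depth_m * density_kg_m3

# example: 0.5 m of snow at 240 kg/m^3 gives 120.0 mm of SWE
print(swe_mm(0.5, 240))
```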

14.
In a previous study a spatially distributed hydrological model, based on the MIKE SHE code, was constructed and validated for the 375 000 km2 Senegal River basin in West Africa. The model was constructed using spatial data on topography, soil types and vegetation characteristics together with time‐series of precipitation from 112 stations in the basin. The model was calibrated and validated based on river discharge data from nine stations in the basin for 11 years. Calibration and validation results suggested that the spatial resolution of the input data in parts of the area was not sufficient for a satisfactory evaluation of the modelling performance. The study further examined the spatial patterns in the model input and output, and it was found that particularly the spatial resolution of the precipitation input had a major impact on the model response. In an attempt to improve the model performance, this study examines a remotely sensed dryness index for its relationship to simulated soil moisture and evaporation for six days in the wet season 1990. The index is derived from observations of surface temperature and vegetation index as measured by the NOAA Advanced Very High Resolution Radiometer (AVHRR) sensor. The correlation results between the index and the simulation results are of mixed quality. A sensitivity analysis, conducted on both estimates, reveals significant uncertainties in both. The study suggests that the remotely sensed dryness index with its current use of NOAA AVHRR data does not offer information that leads to a better calibration or validation of the simulation model in a spatial sense. The method potentially may become more suitable with the use of the upcoming high‐resolution temporal Meteosat Second Generation data. Copyright © 2002 John Wiley & Sons, Ltd.  相似文献   

15.
16.
Hydrological modelling depends highly on the accuracy and uncertainty of model input parameters such as soil properties. Since most of these data are field surveyed, geostatistical techniques such as kriging, classification and regression trees or more sophisticated soil‐landscape models need to be applied to interpolate point information to the area. Most of the existing interpolation techniques require a random or regular distribution of points within the study area but are not adequate to satisfactorily interpolate soil catena or transect data. The soil landscape model presented in this study is predicting soil information from transect or catena point data using a statistical mean (arithmetic, geometric and harmonic mean) to calculate the soil information based on class means of merged spatial explanatory variables. A data set of 226 soil depth measurements covering a range of 0–6·5 m was used to test the model. The point data were sampled along four transects in the Stubbetorp catchment, SE‐Sweden. We overlaid a geomorphology map (8 classes) with digital elevation model‐derived topographic index maps (2–9 classes) to estimate the range of error the model produces with changing sample size and input maps. The accuracy of the soil depth predictions was estimated with the root mean square error (RMSE) based on a testing and training data set. RMSE ranged generally between 0·73 and 0·83 m ± 0·013 m depending on the amount of classes the merged layers had, but were smallest for a map combination with a low number of classes predicted with the harmonic mean (RMSE = 0·46 m). The results show that the prediction accuracy of this method depends on the number of point values in the sample, the value range of the measured attribute and the initial correlations between point values and explanatory variables, but suggests that the model approach is in general scale invariant. Copyright © 2009 John Wiley & Sons, Ltd.  相似文献   
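
A sketch of the class-mean predictor described above, assuming the merged explanatory layers have already been reduced to a single class label per location and that the attribute (e.g. soil depth) is strictly positive so the geometric and harmonic means are defined.

```python
import numpy as np

def class_mean_predict(classes_train, y_train, classes_test, mean='harmonic'):
    """Predict an attribute for each test location from the class of merged
    explanatory layers it falls in, using the arithmetic, geometric or harmonic
    mean of the training points in that class."""
    classes_train = np.asarray(classes_train)
    y_train = np.asarray(y_train, float)
    classes_test = np.asarray(classes_test)
    preds = np.full(len(classes_test), np.nan)
    for c in np.unique(classes_train):
        vals = y_train[classes_train == c]
        if mean == 'arithmetic':
            m = vals.mean()
        elif mean == 'geometric':
            m = np.exp(np.log(vals).mean())
        else:                                   # harmonic
            m = len(vals) / np.sum(1.0 / vals)
        preds[classes_test == c] = m
    return preds
```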

17.
In order to couple spatial data from frequency‐domain helicopter‐borne electromagnetics with electromagnetic measurements from ground geophysics (transient electromagnetics and radiomagnetotellurics), a common 1D weighted joint inversion algorithm for helicopter‐borne electromagnetics, transient electromagnetics and radiomagnetotellurics data has been developed. The depth of investigation of helicopter‐borne electromagnetics data is rather limited compared to time‐domain electromagnetics sounding methods on the ground. In order to improve the accuracy of model parameters of shallow depth as well as of greater depth, the helicopter‐borne electromagnetics, transient electromagnetics, and radiomagnetotellurics measurements can be combined by using a joint inversion methodology. The 1D joint inversion algorithm is tested for synthetic data of helicopter‐borne electromagnetics, transient electromagnetics and radiomagnetotellurics. The proposed concept of the joint inversion takes advantage of each method, thus providing the capability to resolve near surface (radiomagnetotellurics) and deeper electrical conductivity structures (transient electromagnetics) in combination with valuable spatial information (helicopter‐borne electromagnetics). Furthermore, the joint inversion has been applied on the field data (helicopter‐borne electromagnetics and transient electromagnetics) measured in the Cuxhaven area, Germany. In order to avoid the lessening of the resolution capacities of one data type, and thus balancing the use of inherent and ideally complementary information content, a parameter reweighting scheme that is based on the exploration depth ranges of the specific methods is proposed. A comparison of the conventional joint inversion algorithm, proposed by Jupp and Vozoff ( 1975 ), and of the newly developed algorithm is presented. The new algorithm employs the weighting on different model parameters differently. It is inferred from the synthetic and field data examples that the weighted joint inversion is more successful in explaining the subsurface than the classical joint inversion approach. In addition to this, the data fittings in weighted joint inversion are also improved.  相似文献   

18.
This paper presents an input and system identification technique for a soil–structure interaction system using earthquake response data. Identification is carried out on the Hualien large‐scale seismic test structure, which was built in Taiwan for international joint research. The identified quantities are the input ground acceleration as well as the shear wave velocities of the near‐field soil regions and Young's moduli of the shell sections of the structure. The earthquake response analysis on the soil–structure interaction system is carried out using the finite element method incorporating the infinite element formulation for the unbounded layered soil medium and the substructured wave input technique. The criterion function for the parameter estimation is constructed using the frequency response amplitude ratios of the earthquake responses measured at several points of the structure, so that the information on the input motion may be excluded. The constrained steepest descent method is employed to obtain the revised parameters. The simulated earthquake responses using the identified parameters and input ground motion show excellent agreement with the measured responses. Copyright © 2003 John Wiley & Sons, Ltd.  相似文献   

19.
Dynamic characteristics of structures — viz. natural frequencies, damping ratios, and mode shapes — are central to earthquake‐resistant design. These values identified from field measurements are useful for model validation and health‐monitoring. Most system identification methods require both the input excitation motions and the structural response to be measured; however, the true input motions are seldom recordable. For example, when soil–structure interaction effects are non‐negligible, neither the free‐field motions nor the recorded responses of the foundations may be assumed as ‘input’. Even in the absence of soil–structure interaction, in many instances, the foundation responses are not recorded (or are recorded with a low signal‐to‐noise ratio). Unfortunately, existing output‐only methods are limited to free vibration data, or weak stationary ambient excitations. However, it is well‐known that the dynamic characteristics of most civil structures are amplitude‐dependent; thus, parameters identified from low‐amplitude responses do not match well with those from strong excitations, which arguably are more pertinent to seismic design. In this study, we present a new identification method through which a structure's dynamic characteristics can be extracted using only seismic response (output) signals. In this method, first, the response signals' spatial time‐frequency distributions are used for blindly identifying the classical mode shapes and the modal coordinate signals. Second, cross‐relations among the modal coordinates are employed to determine the system's natural frequencies and damping ratios on the premise of linear behavior for the system. We use simulated (but realistic) data to verify the method, and also apply it to a real‐life data set to demonstrate its utility. Copyright © 2012 John Wiley & Sons, Ltd.
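
As a simplified stand-in for the second step (the cross-relation formulation itself is not reproduced), once a modal coordinate has been blindly separated, its natural frequency and damping ratio can be read from the spectral peak and the half-power bandwidth:

```python
import numpy as np

def freq_damping_from_modal_coord(q, fs):
    """Estimate natural frequency (Hz) from the FFT peak of a separated modal
    coordinate q(t), and damping ratio from the half-power bandwidth."""
    Q = np.abs(np.fft.rfft(q * np.hanning(len(q))))
    f = np.fft.rfftfreq(len(q), d=1.0 / fs)
    k = np.argmax(Q[1:]) + 1                    # skip the DC bin
    half = Q[k] / np.sqrt(2.0)
    lo = k
    while lo > 0 and Q[lo] > half:
        lo -= 1
    hi = k
    while hi < len(Q) - 1 and Q[hi] > half:
        hi += 1
    zeta = (f[hi] - f[lo]) / (2.0 * f[k])       # half-power bandwidth estimate
    return f[k], zeta
```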

20.
This paper presents a theoretical study of a predictive active control system used to improve the response of multi‐degree‐of‐freedom (MDOF) structures to earthquakes. As an example a building frame equipped with electrorheological (ER) dampers is considered. The aim of the design is to find a combination of forces that are produced by the ER dampers in order to obtain an optimal structural response. The mechanical response of ER fluid dampers is regulated by an electric field. Linear auto‐regressive model with exogenous input (ARX) is used to predict the displacements and the velocities of the frame in order to overcome the time‐delay problem in the control system. The control forces in the ER devices are calculated at every time step by the optimal control theory (OCT) according to the values of the displacements and of the velocities that are predicted at the next time step at each storey of the structure. A numerical analysis of a seven‐storey ER damped structure is presented as an example. It shows a significant improvement of the structural response when the predictive active control system is applied compared to that of an uncontrolled structure or that of a structure with controlled damping forces with time delay. The structure's displacements and velocities that were used to obtain the optimal control forces were predicted according to an ‘occurring’ earthquake by the ARX model (predictive control). The response was similar to that of the structure with control forces that were calculated from a ‘known’ complete history of the earthquake's displacement and velocity values, and were applied without delay (instantaneous control). Copyright © 2000 John Wiley & Sons, Ltd.  相似文献   
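
A minimal sketch of the delay-compensation idea: an identified ARX model is iterated forward so the controller can act on predicted displacements and velocities rather than delayed measurements. Holding the future input at its last measured value is an assumption made here for brevity, not the paper's choice.

```python
import numpy as np

def arx_predict_ahead(y_hist, u_hist, a, b, steps=1):
    """Predict the response `steps` samples ahead with an identified ARX model
    y[k] = sum_i a[i]*y[k-1-i] + sum_j b[j]*u[k-1-j], feeding predictions back in.
    Future inputs are held at the last measured value (an assumption)."""
    y_hist, u_hist = list(y_hist), list(u_hist)
    for _ in range(steps):
        y_next = sum(a[i] * y_hist[-1 - i] for i in range(len(a))) \
               + sum(b[j] * u_hist[-1 - j] for j in range(len(b)))
        y_hist.append(y_next)
        u_hist.append(u_hist[-1])
    return y_hist[-steps:]
```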
