Similar Articles
20 similar articles retrieved.
1.
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types.
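As an illustration of the linear data-worth idea summarized above, the following Python sketch scores candidate sensors by the reduction in predictive variance of a scalar forecast under a first-order (Bayesian linear) update and selects them greedily. The Jacobian, covariances and the greedy search (a simple stand-in for the paper's modified genetic algorithm) are illustrative assumptions, not the PEST workflow itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_par, n_cand = 6, 12                      # model parameters, candidate sensors
J = rng.normal(size=(n_cand, n_par))       # sensitivities of candidate obs to parameters
C = np.diag(rng.uniform(0.5, 2.0, n_par))  # prior parameter covariance
r = rng.uniform(0.05, 0.20, n_cand)        # measurement-error variance per candidate
y = rng.normal(size=n_par)                 # sensitivity of the forecast (e.g. mean travel time)

def forecast_variance(subset):
    """Posterior variance of the forecast y @ p after collecting obs in `subset`."""
    if not subset:
        return float(y @ C @ y)
    Js, R = J[list(subset)], np.diag(r[list(subset)])
    gain = C @ Js.T @ np.linalg.inv(Js @ C @ Js.T + R)
    return float(y @ (C - gain @ Js @ C) @ y)

# Greedy selection of k sensors (a simple stand-in for the paper's genetic search).
k, chosen = 4, []
for _ in range(k):
    best = min((c for c in range(n_cand) if c not in chosen),
               key=lambda c: forecast_variance(chosen + [c]))
    chosen.append(best)
print("chosen sensors:", chosen, "| prior var %.3f -> posterior var %.3f"
      % (forecast_variance([]), forecast_variance(chosen)))
```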

2.
王戈 《地震工程学报》2020,42(3):799-805
To address the shortcomings of traditional machine-learning-based network intrusion detection, which can only detect known intrusion behaviours and suffers from high false-alarm rates and poor timeliness for unknown intrusions, an intrusion detection method for seismic information networks based on a chaos algorithm is proposed. A mapping model between candidate seismic-information-network features and chaotic variables is built to convert between the two; a chaotic-variable iterative evolution algorithm is then used for feature selection; and a support vector machine (SVM) is trained on the selected optimal features. To improve detection accuracy, a Cauchy artificial bee colony algorithm is used to tune the SVM parameters, yielding an optimized network intrusion detection model. Simulation experiments show that the chaos-based method achieves intrusion detection with a high detection rate and a low false-alarm rate, and therefore has strong practical value.
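The following sketch illustrates the general idea of chaos-driven feature selection wrapped around an SVM, using a logistic map to evolve candidate feature subsets on synthetic data; the paper's specific feature-to-chaotic-variable mapping and the Cauchy bee colony parameter tuning are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for labelled network-traffic records (normal vs. intrusion).
X, labels = make_classification(n_samples=400, n_features=20, n_informative=6,
                                random_state=0)

def subset_score(mask):
    """3-fold cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(C=1.0, gamma="scale"), X[:, mask], labels, cv=3).mean()

rng = np.random.default_rng(1)
z = rng.uniform(0.1, 0.9, X.shape[1])      # one chaotic variable per candidate feature
best_mask = np.ones(X.shape[1], dtype=bool)
best_score = subset_score(best_mask)
for _ in range(30):                        # iterative chaotic evolution of the subset
    z = 4.0 * z * (1.0 - z)                # logistic map in its fully chaotic regime
    mask = z > 0.5                         # map chaotic variables to a binary feature subset
    s = subset_score(mask)
    if s > best_score:
        best_mask, best_score = mask.copy(), s

print("selected features:", np.flatnonzero(best_mask), "| cv accuracy: %.3f" % best_score)
```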

3.
Estimation of lateral displacement and acceleration responses is essential to assess the safety and serviceability of high-rise buildings under dynamic loadings, including earthquake excitations. However, the measurement information from the limited number of sensors installed in a building structure is often insufficient for a complete structural performance assessment. An integrated multi-type sensor placement and response reconstruction method has thus been proposed by the authors to tackle this problem. To validate the feasibility and effectiveness of the proposed method, an experimental investigation using a cantilever beam with multi-type sensors is performed and reported in this paper. The experimental setup is first introduced. The finite element (FE) modelling and model updating of the cantilever beam are then performed. The optimal sensor placement for the best response reconstruction is determined by the proposed method based on the updated FE model of the beam. After the sensors are installed on the physical cantilever beam, a number of experiments are carried out. The responses at key locations are reconstructed and compared with the measured ones. The reconstructed responses achieve a good match with the measured ones, demonstrating the feasibility and effectiveness of the proposed method. In addition, the proposed method is examined for the cases of different excitations and unknown excitation, and the results prove it to be robust and effective. The superiority of the optimized sensor placement scheme is finally demonstrated through comparison with two other sensor placement schemes: an accelerometer-only scheme and a non-optimal sensor placement scheme. The proposed method can be applied to high-rise buildings for seismic performance assessment.
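A minimal sketch of the underlying reconstruction idea (not the authors' integrated multi-type formulation): estimate modal coordinates by least squares from a few measured degrees of freedom and expand them to unmeasured locations, using synthetic mode shapes and responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_modes, n_steps = 10, 3, 500
Phi = np.linalg.qr(rng.normal(size=(n_dof, n_modes)))[0]   # synthetic mode shapes
q_true = rng.normal(size=(n_modes, n_steps))               # modal coordinate histories
resp = Phi @ q_true                                        # "true" responses at all DOFs

measured = [0, 4, 7]                        # DOFs carrying sensors
target = [2, 9]                             # DOFs whose responses are reconstructed
noisy = resp[measured] + 0.02 * rng.normal(size=(len(measured), n_steps))

q_hat, *_ = np.linalg.lstsq(Phi[measured], noisy, rcond=None)  # estimated modal coordinates
recon = Phi[target] @ q_hat                                    # reconstructed responses

err = np.linalg.norm(recon - resp[target]) / np.linalg.norm(resp[target])
print("relative reconstruction error: %.3f" % err)
```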

4.
This paper deals with the design of optimal spatial sampling of water quality variables in remote regions, where logistics are complicated and the optimization of monitoring networks may be critical to maximize the effectiveness of human and material resources. A methodology that combines the probability of exceeding particular thresholds with a measure of the information provided by each pair of experimental points is introduced. This network optimization concept, in which the basic unit of information is not a single spatial location but a pair of spatial locations, is used to emphasize the locations with the greatest information, which are those at the border of the phenomenon (for example, contamination or a quality variable exceeding a given threshold), that is, where the variable at one of the locations in the pair is above the threshold value and at the other is below it. The methodology is illustrated with a case of optimizing the monitoring network by optimal selection of the subset that best describes the information provided by an exhaustive survey done at a given moment in time but which cannot be repeated systematically due to time or economic constraints.

5.
《水文科学杂志》2013,58(2):352-361
A real-life problem involving pumping of groundwater from a series of existing wells along a river flood plain underlain with geologically saline water is examined within a conceptual framework. Unplanned pumping results in upconing of saline water. Therefore, it is necessary to determine optimal locations of fixed-capacity pumping wells in space and time from a set of pre-selected candidate wells that minimize total salinity concentration in space and time. The nonlinear, non-convex, combinatorial problem involving zero-one decision variables is solved in a simulation-optimization (S/O) framework. Optimization is accomplished by using simulated annealing (SA), a search algorithm. The computational burden is primarily managed by replacing the numerical model with a surrogate simulator, an artificial neural network (ANN). The computational burden is further reduced through intuitive algorithmic guidance. The model results suggest that the skimming wells must be operated from optimal locations such that they are staggered in space and time to obtain the least saline water.
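The sketch below shows a simulated-annealing search over zero-one well selections of the kind described above, with a simple quadratic function standing in for the ANN surrogate of the groundwater model; the well data and the "salinity" objective are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wells, n_select = 15, 5
w = rng.uniform(1.0, 3.0, n_wells)                  # per-well salinity contribution
inter = 0.3 * rng.uniform(size=(n_wells, n_wells))  # interaction (upconing) penalty
inter = (inter + inter.T) / 2

def surrogate_salinity(x):
    """Stand-in for the ANN surrogate: total salinity for the 0-1 selection x."""
    return float(w @ x + x @ inter @ x)

x = np.zeros(n_wells)
x[rng.choice(n_wells, n_select, replace=False)] = 1
fx = surrogate_salinity(x)
best_x, best_f, T = x.copy(), fx, 1.0

for _ in range(2000):
    # neighbour move: swap one selected well with one unselected well
    on, off = np.flatnonzero(x == 1), np.flatnonzero(x == 0)
    x_new = x.copy()
    x_new[rng.choice(on)], x_new[rng.choice(off)] = 0, 1
    f_new = surrogate_salinity(x_new)
    if f_new < fx or rng.uniform() < np.exp(-(f_new - fx) / T):  # Metropolis acceptance
        x, fx = x_new, f_new
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    T *= 0.995                                      # geometric cooling schedule

print("selected wells:", np.flatnonzero(best_x), "| surrogate salinity: %.2f" % best_f)
```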

6.
To reduce the interference of small, non-damaging high-frequency earthquakes with high-speed railway earthquake alarms, the spectral intensity (SI) is introduced as an alarm parameter and its applicability to high-speed railway earthquake alarming is analysed. The study finds that low-frequency motion affects SI more than high-frequency motion, so SI can effectively screen out small high-frequency events near high-speed railway lines; however, large, distant earthquakes with little damaging potential may still trigger alarms, producing false alarms that reduce train operating efficiency. To lower the SI false-alarm rate for such events, PGA and SI are combined as joint alarm parameters, and reference thresholds for the combined PGA and SI alarm at different train speeds are given.
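As a hedged illustration of a joint PGA/SI alarm check, the sketch below computes PGA directly from a synthetic record and SI as a Housner-type integral of the pseudo-velocity spectrum (20% damping, 0.1 to 2.5 s, divided by 2.4); the exact SI definition and the thresholds are assumptions, not the values recommended in the paper.

```python
import numpy as np
from scipy import signal

dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = 0.3 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)   # synthetic ground accel. (m/s^2)

def pseudo_velocity(acc, t, period, damping=0.20):
    """Peak pseudo-velocity of a damped SDOF oscillator under base excitation."""
    wn = 2.0 * np.pi / period
    # relative displacement u satisfies: u'' + 2*xi*wn*u' + wn^2*u = -a_g
    _, u, _ = signal.lsim(([-1.0], [1.0, 2.0 * damping * wn, wn ** 2]), acc, t)
    return wn * np.max(np.abs(u))

periods = np.linspace(0.1, 2.5, 25)
psv = np.array([pseudo_velocity(acc, t, Tp) for Tp in periods])
SI = np.trapz(psv, periods) / 2.4          # Housner-type spectral intensity (m/s)
PGA = np.max(np.abs(acc))                  # peak ground acceleration (m/s^2)

PGA_LIMIT, SI_LIMIT = 0.40, 0.03           # hypothetical alarm thresholds
alarm = PGA >= PGA_LIMIT and SI >= SI_LIMIT   # joint criterion (assumed): both must exceed
print("PGA = %.3f m/s^2, SI = %.4f m/s, alarm = %s" % (PGA, SI, alarm))
```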

7.
This paper presents a linear predictor (LP)-based lossless sensor data compression algorithm for efficient transmission, storage and retrieval of seismic data. An Auto-Regressive with eXogenous input (ARX) model is selected as the model structure of the LP. Since earthquake ground motion is typically measured at the base of monitored structures, the ARX model parameters are calculated in a system identification framework using sensor network data and measured input signals. In this way, sensor data compression takes advantage of structural system information to maximize the compression performance. Numerical simulation results show that several factors, including LP order, measurement noise, input and the limited number of sensors, affect the performance of the proposed lossless sensor data compression algorithm. Generally, the lossless data compression algorithm is capable of reducing the size of raw sensor data while causing no information loss. Copyright © 2005 John Wiley & Sons, Ltd.
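A minimal sketch of the ARX linear-predictor idea behind lossless compression (not the paper's codec): fit ARX coefficients by least squares, store only the integer residuals between the record and the rounded one-step prediction, and reconstruct the record exactly from the residuals, the coefficients and the measured input. The sensor record and input are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, na, nb = 2000, 2, 2
u = rng.integers(-50, 50, n).astype(float)            # measured input (e.g. base motion counts)
y = np.zeros(n)
for t in range(2, n):                                 # synthetic integer sensor record
    y[t] = np.round(1.2 * y[t-1] - 0.5 * y[t-2] + 0.8 * u[t-1] + rng.normal(scale=2.0))

start = max(na, nb)
Phi = np.column_stack([y[start-1:n-1], y[start-2:n-2], u[start-1:n-1], u[start-2:n-2]])
theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)   # identified ARX parameters

def predict(rec, t):
    """Rounded one-step ARX prediction from past outputs and inputs."""
    reg = np.array([rec[t-1], rec[t-2], u[t-1], u[t-2]])
    return np.round(reg @ theta)

# Encode: the small integer residuals are all that needs to be transmitted/stored.
resid = np.array([y[t] - predict(y, t) for t in range(start, n)], dtype=int)

# Decode: exact reconstruction from residuals, theta, u and the first samples.
y_rec = np.zeros(n)
y_rec[:start] = y[:start]
for t in range(start, n):
    y_rec[t] = predict(y_rec, t) + resid[t - start]

print("lossless:", np.array_equal(y_rec, y),
      "| residual std %.2f vs signal std %.2f" % (resid.std(), y.std()))
```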

8.
The objective of this paper is to quantify the use of past seismicity to forecast the locations of future large earthquakes and to introduce optimization methods for the model parameters. To achieve this, the binary forecast approach is used, in which the surface of the Earth is divided into 1° × 1° cells. The cumulative Benioff strain of m ≥ m_c earthquakes that occurred during the training period, ΔT_tr, is used to retrospectively forecast the locations of large target earthquakes with magnitudes ≥ m_T during the forecast period, ΔT_for. The success of a forecast is measured in terms of hit rates (fraction of earthquakes forecast) and false alarm rates (fraction of alarms that do not forecast earthquakes). This binary forecast approach is quantified using a receiver operating characteristic diagram and an error diagram. An optimal forecast can be obtained by taking the maximum value of Peirce's skill score.
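The bookkeeping behind such a binary forecast can be sketched as follows: cells whose cumulative Benioff strain exceeds a threshold are alarmed, and the forecast is scored by the hit rate, the false-alarm fraction and the Peirce skill score. The strain grid, the target-earthquake cells and the threshold sweep are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
strain = rng.lognormal(mean=0.0, sigma=1.0, size=(20, 20))  # cumulative Benioff strain per cell
targets = rng.random((20, 20)) < 0.05                       # cells hit by target earthquakes

def scores(threshold):
    alarms = strain >= threshold
    hits = np.sum(alarms & targets)
    misses = np.sum(~alarms & targets)
    false_alarms = np.sum(alarms & ~targets)
    correct_negatives = np.sum(~alarms & ~targets)
    hit_rate = hits / (hits + misses)                    # fraction of target quakes forecast
    false_fraction = false_alarms / max(hits + false_alarms, 1)  # alarms without a quake
    pofd = false_alarms / (false_alarms + correct_negatives)     # prob. of false detection
    return hit_rate, false_fraction, hit_rate - pofd     # last value: Peirce skill score

# ROC-style sweep: pick the alarm threshold that maximises the Peirce skill score.
thresholds = np.quantile(strain, np.linspace(0.05, 0.95, 19))
best = max(thresholds, key=lambda th: scores(th)[2])
print("best threshold %.2f -> hit rate %.2f, false fraction %.2f, PSS %.2f"
      % ((best,) + scores(best)))
```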

9.
Eleven years of daily 500 m gridded Terra Moderate Resolution Imaging Spectroradiometer (MODIS) (MOD10A1) snow cover fraction (SCF) data are evaluated in terms of snow presence detection in Colorado and Washington states. The SCF detection validation study is performed using in-situ measurements and expressed in terms of snow and land detection and misclassification frequencies. A major aspect addressed in this study is the shifting of pixel values in time due to sensor viewing angles and gridding artifacts of MODIS sensor products. To account for this error, 500 m gridded pixels are grouped and aggregated to different-sized areas to incorporate neighboring pixel information. With pixel aggregation, both the probability of detection (POD) and the false alarm ratios increase for almost all cases. Of the false negative (FN) and false positive values (referred to as the total error when combined), FN estimates dominate most of the total error and are greatly reduced with aggregation. The greatest POD increases and total error reductions occur when going from a single 500 m pixel to 3×3-pixel averaged areas. Since the MODIS SCF algorithm was developed under ideal conditions, SCF detection is also evaluated for varying conditions of vegetation, elevation, cloud cover and air temperature. Finally, using a direct-insertion data assimilation approach, pixel-averaged MODIS SCF observations are shown to improve modeled snowpack conditions over single-pixel observations, owing to the smoothing of more error-prone observations and the more accurate snow classification of the averaged pixels. Copyright © 2012 John Wiley & Sons, Ltd.
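The verification statistics described above can be illustrated with a short sketch that computes the probability of detection (POD) and false-alarm ratio for binary snow flags, with and without 3 × 3 pixel averaging; the gridded "truth" and "observations" and the 0.1 snow threshold are synthetic assumptions, not MOD10A1 data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
truth_scf = np.clip(rng.normal(0.3, 0.3, (200, 200)), 0, 1)               # "true" snow cover fraction
obs_scf = np.clip(truth_scf + rng.normal(0, 0.15, truth_scf.shape), 0, 1)  # noisy MODIS-like SCF

def pod_far(obs, truth, thresh=0.1):
    obs_snow, true_snow = obs > thresh, truth > thresh
    hits = np.sum(obs_snow & true_snow)
    misses = np.sum(~obs_snow & true_snow)          # false negatives
    false_pos = np.sum(obs_snow & ~true_snow)
    pod = hits / (hits + misses)
    far = false_pos / max(hits + false_pos, 1)      # false-alarm ratio
    return pod, far

print("single pixel : POD=%.3f FAR=%.3f" % pod_far(obs_scf, truth_scf))
obs_3x3 = uniform_filter(obs_scf, size=3)           # aggregate neighbouring pixels
print("3x3 averaged : POD=%.3f FAR=%.3f" % pod_far(obs_3x3, truth_scf))
```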

10.
Automatic identification of seismic phases from three-component data
An algorithm for automatically detecting P-wave and S-wave onsets from three-component seismic data is proposed. The algorithm combines a short-time-window detector (DSp) with a long-time-window detector (DBp), retaining the higher precision of the short window and the lower false-trigger rate of the long window. Tests on earthquake data from Guangdong, Fujian and Yunnan provinces show that the automatic picks are reliably accurate: for local and near-regional events, more than 84% of P-wave arrival-time errors are smaller than 0.2 s, and more than 74% of S-wave arrival-time errors are smaller than 0.5 s.
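A classical STA/LTA onset picker gives a feel for how a short-window and a long-window detector are combined; the sketch below is generic and does not reproduce the paper's DSp/DBp detectors or their joint decision logic, and the trace is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100, 3000                            # 100 Hz sampling, 30 s synthetic trace
trace = rng.normal(scale=1.0, size=n)        # background noise
trace[1200:] += 6.0 * np.exp(-np.arange(n - 1200) / 400.0) * rng.normal(size=n - 1200)

def sta_lta(x, n_sta, n_lta):
    """Trailing short-term / long-term average ratio of the signal energy."""
    c = np.concatenate(([0.0], np.cumsum(x ** 2)))
    ends = np.arange(n_lta, len(x) + 1)      # indices where both windows fit
    sta = (c[ends] - c[ends - n_sta]) / n_sta
    lta = (c[ends] - c[ends - n_lta]) / n_lta
    return ends - 1, sta / np.maximum(lta, 1e-12)

ends, ratio = sta_lta(trace, n_sta=int(0.5 * fs), n_lta=int(5 * fs))
trigger = ends[np.argmax(ratio > 3.0)]       # first sample where the ratio trips the threshold
print("picked onset at t = %.2f s (true onset at 12.00 s)" % (trigger / fs))
```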

11.
A methodology is developed for the optimal operation of reservoirs to control water quality requirements at downstream locations. The physicochemical processes involved are incorporated using a numerical simulation model. This simulation model is then linked externally with an optimization algorithm. This linked simulation–optimization-based methodology is used to obtain an optimal reservoir operation policy. An elitist genetic algorithm is used as the optimization algorithm. This elitist-genetic-algorithm-based linked simulation–optimization model is capable of evolving short-term optimal operation strategies for controlling water quality downstream of a reservoir. The performance of the methodology is evaluated for an illustrative example problem. Different plausible management scenarios are considered. The operation policies obtained are tested by simulating the resulting pollutant concentrations downstream of the reservoir. These performance evaluations consider various scenarios of inflow, permissible concentration limits, and number of management periods. They establish the potential applicability of the developed methodology for optimal control of water quality downstream of a reservoir. Copyright © 2007 John Wiley & Sons, Ltd.

12.
In this paper, a Bayesian sequential sensor placement algorithm, based on the robust information entropy, is proposed for multiple types of sensors. The presented methodology has two salient features. It is a holistic approach, in that the overall performance of various types of sensors at different locations is assessed; therefore, it provides a rational and effective strategy to design the sensor configuration, which optimizes the use of the various available resources. The sequential algorithm is also very efficient due to its Bayesian nature, in which a prior distribution can be incorporated; it therefore avoids the possible unidentifiability problem encountered in a sequential process that starts with a small number of sensors. The proposed algorithm is demonstrated using a shear building and a lattice tower with consideration of up to four types of sensors. Copyright © 2014 John Wiley & Sons, Ltd.
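A simplified sketch of sequential, information-based placement over multiple sensor types: each candidate is a (location, type) pair with its own noise level, and at every step the candidate that most increases the log-determinant of the Fisher information of the modal parameters (equivalently, most reduces the information entropy of their estimates under Gaussian assumptions) is added. The mode shapes, noise levels and candidate set are synthetic, and the sketch is not the paper's full Bayesian formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_modes = 12, 4
Phi = rng.normal(size=(n_dof, n_modes))          # mode shapes at candidate locations
types = {"accelerometer": 0.05, "strain gauge": 0.10}   # sensor type -> assumed noise std
candidates = [(dof, typ) for dof in range(n_dof) for typ in types]

def logdet_info(selection):
    F = 1e-9 * np.eye(n_modes)                   # tiny prior keeps F invertible
    for dof, typ in selection:
        phi = Phi[dof:dof + 1, :]
        F += phi.T @ phi / types[typ] ** 2       # Fisher information contribution
    return np.linalg.slogdet(F)[1]

selected = []
for _ in range(6):                               # place six sensors sequentially
    best = max((c for c in candidates if c not in selected),
               key=lambda c: logdet_info(selected + [c]))
    selected.append(best)
print("selected (location, type):", selected)
```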

13.
Optimal transducer placement for health monitoring of long span bridge
In experimental modal testing, the measurement locations and the number of measurements have a major influence on the quality of the results. In general, there are several alternative schemes for sensor placement, and the accuracy of the data increases as the number of sensors utilized increases. However, the number of transducers that can be attached to a real structure is limited by economic constraints. Therefore, algorithms that address the issue of limited instrumentation and its effects on resolution and accuracy are important from the standpoint of experimental modal analysis. The authors are particularly interested in structural-dynamics-based damage evaluation of large structures, and the development and implementation of suitable sensor location algorithms are critical for such a problem. A kinetic energy optimization technique (EOT) has been derived; numerical issues are addressed, and the technique is applied to real experimental data obtained from a model of an asymmetric long-span bridge. Using experimental data from the bridge model, the algorithm proposed in this paper is compared to Kammer's EIM algorithm, which optimizes the transducer placement for identification and control purposes.
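For reference, Kammer's effective independence idea (the EIM benchmark mentioned above) can be sketched compactly: candidate DOFs are deleted one at a time according to their contribution to the linear independence of the target mode shapes; a kinetic-energy variant such as the EOT would weight the mode shapes by the mass matrix first. The mode shapes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cand, n_modes, n_keep = 30, 5, 8
Phi = rng.normal(size=(n_cand, n_modes))   # target mode shapes at candidate DOFs
keep = list(range(n_cand))

while len(keep) > n_keep:
    P = Phi[keep]
    # Effective-independence value of each remaining DOF: the diagonal of the
    # projector P (P^T P)^(-1) P^T, i.e. its leverage on the mode-shape fit.
    Ed = np.einsum("ij,jk,ik->i", P, np.linalg.inv(P.T @ P), P)
    keep.pop(int(np.argmin(Ed)))           # drop the least informative DOF
print("retained sensor DOFs:", sorted(keep))
```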

14.
In this paper, a new approach to structural damage localization is presented that uses, as the damage feature, the interpolation error associated with modeling the operational deformed shapes of the structure with a spline function. Statistically significant variations of the interpolation error between the undamaged and the inspection phases indicate the onset of damage. A threshold value of the damage feature is defined in terms of the tolerable probability of false alarm, so as to distinguish variations of the interpolation error caused by damage from those due to random sources. The method is successfully applied to a calibrated model of the Factor Building, a real, densely instrumented building at the University of California, Los Angeles. Results show that the method is effective for damage localization for both single and multiple damage locations, even when the responses are corrupted by noise. Copyright © 2014 John Wiley & Sons, Ltd.
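The interpolation-error damage feature can be sketched as follows: at each sensor location, interpolate the deformed shape through all the other locations with a cubic spline and record the absolute deviation from the measured value; locations where this error grows from the baseline to the inspection phase point to damage. The shapes, the injected damage and the threshold below are synthetic (in the paper the threshold follows from a tolerable false-alarm probability).

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 15)                      # sensor positions along the structure
baseline = np.sin(np.pi * x)                       # undamaged operational deformed shape
damaged = baseline.copy()
damaged[8] += 0.06                                 # local change in the shape near x[8]

def interpolation_error(shape):
    """Per-sensor error of a spline fitted through all the other sensors."""
    err = np.zeros_like(shape)
    for i in range(len(x)):
        mask = np.ones(len(x), dtype=bool)
        mask[i] = False
        spline = CubicSpline(x[mask], shape[mask])
        err[i] = abs(shape[i] - spline(x[i]))
    return err

delta = interpolation_error(damaged) - interpolation_error(baseline)
threshold = 0.01                                   # stand-in for the false-alarm-based threshold
print("flagged locations:", np.flatnonzero(delta > threshold), "(damage injected at index 8)")
```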

15.
A combined simulation–genetic algorithm (GA) optimization model is developed to determine optimal reservoir operational rule curves for the Nam Oon Reservoir and Irrigation Project in Thailand. The GA and simulation models operate in parallel over time, with interactions through their solution procedure. A GA is selected as the optimization model, instead of traditional techniques, owing to its powerful and robust performance and the simplicity of combining it with a simulation technique. A GA differs from conventional optimization techniques in that it uses only objective-function information and does not require derivatives, whereas in real-world optimization problems the search space may include discontinuities and a number of sub-optimum peaks. This may cause difficulties for calculus-based and enumerative schemes, but not for a GA. The simulation model is run to determine the net system benefit associated with the state and control variables. The combined simulation–GA model is applied to determine the optimal upper and lower rule curves on a monthly basis for the Nam Oon Reservoir, Thailand. The objective function is the maximum net system benefit subject to given constraints for three scenarios of cultivated areas. The monthly release is calculated by the simulation model in accordance with the given release policy, which depends on water demand. The optimal upper and lower rule curves are compared with the results of the HEC-3 model (Reservoir System Analysis for Conservation model) calculated by the Royal Irrigation Department, Thailand, and with those obtained using the standard operating policy. It was found that the optimal rule curves yield the maximum benefit and the minimum damages caused by floods and water shortages. The combined simulation–GA model shows excellent performance in terms of its optimization results and computational efficiency. Copyright © 2007 John Wiley & Sons, Ltd.

16.
Improved sea ice parcel trajectories in the Arctic via data assimilation
An assimilated sea ice motion product is used to track ice parcels in several regions of the Arctic over time periods of one day to several weeks during 1992-1993. Motions simulated using a two-dimensional, dynamic-thermodynamic sea ice model are combined with motions derived from daily 85 GHz Special Sensor Microwave/Imager (SSM/I) imagery using an optimal interpolation method that minimizes error covariance. Assimilation reduces the tracking error relative to the stand-alone model when compared with buoy trajectories having the same starting location and time. The average 14-day assimilated trajectory displacement error is as much as 34% lower than that of the model trajectory, while the RMS direction error is decreased by up to 10 degrees (24%). Assimilation can also yield an estimate of dispersion, which is not retrievable from point buoy observations. The assimilation approach improves estimates of ice drift and has the potential to further the understanding of ice mass flux, freshwater flux, and pollutant transport in the polar regions.
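A one-dimensional optimal-interpolation update shows the mechanics of the assimilation step: analysis = background + K (observation - H background), with the gain K built from assumed background and observation error covariances. The model field, the satellite-like observations and the covariance parameters below are synthetic placeholders, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                             # grid points along a transect
x = np.arange(n, dtype=float)
truth = 0.1 * np.sin(2 * np.pi * x / n) + 0.05     # "true" ice drift speed (m/s)
background = truth + rng.normal(0, 0.02, n)        # model-simulated motion

obs_idx = np.arange(0, n, 5)                       # points with satellite-derived motion
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0          # observation operator (point sampling)
obs = truth[obs_idx] + rng.normal(0, 0.01, len(obs_idx))

# Gaussian-correlated background errors, uncorrelated observation errors.
L, sig_b, sig_o = 5.0, 0.02, 0.01
B = sig_b ** 2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)
R = sig_o ** 2 * np.eye(len(obs_idx))

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)       # optimal (Kalman-type) gain
analysis = background + K @ (obs - H @ background)

print("background RMSE %.4f -> analysis RMSE %.4f" %
      (np.sqrt(np.mean((background - truth) ** 2)),
       np.sqrt(np.mean((analysis - truth) ** 2))))
```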

17.
An adequate and reliable raingauge network is essential for observing rainfall data in hydrology and water resource applications. A raingauge network developed for a catchment area is commonly extended periodically to increase data accuracy. Due to financial constraints, the network is reviewed to determine the optimal number of stations. A new optimization approach is developed in this study by coupling a cross-validation technique with a geostatistical method for raingauge network optimization, in order to prioritize raingauge stations. The spatial interpolation error of the rainfall distribution, measured as the root mean square error (Erms), is used as the optimization criterion and applied to a raingauge network in a tropical urban area. The results indicate that this method can successfully optimize the number of rainfall stations in an existing raingauge network, as the stations are prioritized based on their importance in the network.
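A sketch of cross-validation-based prioritisation: leave each station out, interpolate rainfall at its location from the remaining stations, and rank stations by how much the network Erms degrades when they are removed. Inverse-distance weighting stands in for the geostatistical interpolator, and the station records are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sta, n_events = 25, 60
xy = rng.uniform(0.0, 50.0, (n_sta, 2))                  # station coordinates (km)

def smooth_field(p, e):
    """Synthetic 'true' rainfall surface for event e, evaluated at points p."""
    return 20 + 10 * np.sin(p[:, 0] / 15 + e) + 5 * np.cos(p[:, 1] / 10 - e)

rain = np.column_stack([smooth_field(xy, e) + rng.normal(0, 1, n_sta)
                        for e in range(n_events)])       # station records (mm)

def idw(target, pts, vals, power=2.0):
    """Inverse-distance-weighted estimate at `target` (stand-in for kriging)."""
    d = np.linalg.norm(pts - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w[:, None] * vals).sum(axis=0) / w.sum()

def loo_rmse(active):
    """Leave-one-out Erms of the network formed by the `active` stations."""
    errs = []
    for i in active:
        others = [j for j in active if j != i]
        errs.append(rain[i] - idw(xy[i], xy[others], rain[others]))
    return np.sqrt(np.mean(np.square(errs)))

base = loo_rmse(list(range(n_sta)))
# A station is more important the more Erms increases when it is removed.
impact = {i: loo_rmse([j for j in range(n_sta) if j != i]) - base for i in range(n_sta)}
ranked = sorted(impact, key=impact.get, reverse=True)
print("network Erms = %.2f mm; most critical stations:" % base, ranked[:5])
```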

18.
Alaa Ali, Journal of Hydrology, 2009, 374(3-4): 338-350
Wetland restoration is often measured by how close the spatial and temporal water level (stage) patterns are to the pre-drainage conditions. Driven by rainfall, such multivariate conditions are governed by nonstationary, nonlinear, and non-Gaussian processes and are often simulated by physically based distributed models, which are difficult to run in real time due to extensive data requirements. The objective of this study is to provide wetland restorationists with a real-time rainfall–stage modeling tool with a simpler input structure and the capability to recognize the complexity of the wetland system. A dynamic multivariate Nonlinear AutoRegressive network with eXogenous inputs (NARX) combined with Principal Component Analysis (PCA) was developed. An implementation procedure is proposed and an application to the Florida Everglades wetland system is presented. Inputs to the model are time-lagged rainfall, evapotranspiration and previously simulated stages. Data locations, preliminary time-lag selection, and spatial and temporal nonstationarity are identified through exploratory data analysis. PCA was used to eliminate input-variable interdependence and to reduce the problem dimensions by more than 90% while retaining more than 80% of the process variance. A structured approach to selecting optimal time lags and network parameters is provided. NARX model results were compared with those of the linear Multivariate AutoRegressive model with eXogenous inputs. While one-step-ahead prediction shows comparable results, recursive prediction by NARX is far superior to that of the linear model. NARX testing under climatic conditions drastically different from those used in the development also demonstrates very good and robust performance. Driven by net rainfall, NARX exhibited robust stage prediction with an overall efficiency coefficient of 88%, a mean square error less than 0.004 m², a standard error less than 0.06 m, a bias close to zero, and normal probability plots showing that the errors are close to normally distributed.

19.
SIM-France is a large connected atmosphere/land surface/river/groundwater modelling system that simulates the water cycle throughout metropolitan France. The work presented in this study investigates the replacement of the river routing scheme in SIM-France by a river network model called RAPID, to enhance the capacity to relate simulated flows to river gauges and to take advantage of the automated parameter estimation procedure of RAPID. RAPID was run with SIM-France over a 10-year period and the results were compared with those of the previous river routing scheme. We found that while the formulation of RAPID enhanced the functionality of SIM-France, the flow simulations are comparable in accuracy to those previously obtained by SIM-France. Sub-basin optimization of RAPID parameters was found to increase model efficiency. A single criterion for quantifying the quality of river flow simulations using several river gauges globally in a river network is developed; it normalizes the square error of modelled flow to allow equal treatment of all gauging stations regardless of the magnitude of flow. The use of this criterion as the cost function for parameter estimation in RAPID gives better results than increasing the degree of spatial variability in the optimization of model parameters. Likewise, increased spatial variability of RAPID parameters through accounting for topography is shown to enhance model performance. Copyright © 2011 John Wiley & Sons, Ltd.

20.
This article presents a method to estimate flow variables for an open channel network governed by the linearized Saint-Venant equations and subject to periodic forcing. The discharge at the upstream end of the system and the stage at the downstream end of the system are defined as the model inputs; the flow properties at selected internal locations, as well as the other external boundary conditions, are defined as the outputs. Both inputs and outputs are affected by noise, and we use the model to improve the data quality. A spatially dependent transfer matrix in the frequency domain is constructed to relate the model inputs and outputs using modal decomposition. A data reconciliation technique is used to incorporate the error in the measured data, resulting in a set of reconciled external boundary conditions; subsequently, the flow properties at any location in the system can be accurately estimated from the input measurements. The applicability and effectiveness of the method are demonstrated with a case study of river flow subject to tidal forcing in the Sacramento-San Joaquin Delta in California. We used existing USGS sensors in place in the Delta as measurement points, and deployed our own sensors at selected locations to produce data used for the validation. The proposed method gives an accurate estimation of the flow properties at intermediate locations within the channel network.
