Similar Literature
20 similar documents found (search time: 31 ms)
1.
We consider seismic zoning under a design criterion that minimizes the expected present value of the total cost, including the initial cost as well as losses due to damage and failure. The problem is as follows: given the number of zones, their boundaries and design coefficients must be chosen so as to minimize the expected present value of the cost of all structures built in the region. Solutions are sought in one or more dimensions, depending on the number of types of structures built in the region to be zoned. Two methods are proposed to solve the problem. The first is based on evaluating different combinations of zone boundaries and design coefficients in order to attain optimum zoning. The second uses an analogy to the evolution of biological systems. The work ends by applying the methods developed to a region of known seismicity. Copyright © 2003 John Wiley & Sons, Ltd.
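The abstract's second method, an analogy to biological evolution, can be sketched as a simple genetic algorithm. Everything below is a hypothetical toy model, not the authors' formulation: sites lie on a line with a made-up hazard profile, the per-site cost function `site_cost` is invented for illustration, and the design is reduced to one zone boundary plus two design coefficients.

```python
import random

# Toy cost model (hypothetical, for illustration only): sites along a line
# with hazard rising linearly; each zone has one design coefficient c.
SITES = [i / 99.0 for i in range(100)]          # normalized site positions
HAZARD = [0.2 + 0.8 * x for x in SITES]         # assumed hazard level per site

def site_cost(c, hazard):
    # initial cost grows with c, expected discounted loss falls with c
    return 1.0 + 2.0 * c + hazard / (0.1 + c)

def total_cost(boundary, c_low, c_high):
    """Expected present value over all sites for a two-zone layout."""
    return sum(site_cost(c_low if x < boundary else c_high, h)
               for x, h in zip(SITES, HAZARD))

def evolve(generations=200, pop_size=30, seed=1):
    """Evolutionary search: genomes are (boundary, c_low, c_high)."""
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: total_cost(*g))      # fitness = total cost
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # crossover by averaging, mutation by Gaussian noise, clipped to [0, 1]
            child = tuple(max(0.0, min(1.0, (x + y) / 2 + rng.gauss(0, 0.05)))
                          for x, y in zip(a, b))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda g: total_cost(*g))

best = evolve()
```

The evolved design should at least beat an unzoned design with no seismic provisions (all coefficients zero) under this toy cost model.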

2.
How to test the reasonableness of a probabilistic seismic zoning map is a problem of wide concern, and so far there has been no good method for testing a zoning map against actual intensity data. The author first introduces the concept of a random field, then shows by the Monte Carlo method that the mean of the random field is ergodic, so that the spatial average tends to the mean of the random field associated with the probabilities of the zoning map. This yields a method of testing a probabilistic seismic zoning map using spatially distributed intensity samples from actual earthquakes. The Chinese seismic zoning map compiled in 1990 was tested using intensity samples from the most recent 15-year and 50-year periods; the results show that this zoning map is reasonable. The method provided in this paper can also be applied in other circumstances where random field methods are used. The Chinese version of this paper appeared in the Chinese edition of Acta Seismologica Sinica, 15, 53–60, 1993.
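The ergodicity argument can be illustrated with a minimal Monte Carlo sketch. This is an assumption-laden toy, not the paper's method: sites are treated as independent, and each site's 50-year intensity exceedance is reduced to a Bernoulli draw at the map's nominal probability. The point is only that the spatial average of exceedances over many sites converges to the mapped probability.

```python
import random

def fraction_exceeding(n_sites, p_exceed, seed=0):
    """Simulate whether each site's 50-year maximum intensity exceeds the
    mapped value (a Bernoulli draw in this toy model) and return the
    spatial average of exceedances."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_sites) if rng.random() < p_exceed)
    return hits / n_sites

# If the zoning map's 10%-in-50-years level is right, the spatial average of
# observed exceedances over many sites should be near 0.10.
est = fraction_exceeding(20000, 0.10, seed=42)
```

In the actual test, the Bernoulli draws are replaced by observed intensities at real sites, and the spatial average is compared with the map's exceedance probability.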

3.
The declining costs of small Unmanned Aerial Systems (sUAS), in combination with Structure-from-Motion (SfM) photogrammetry, have triggered renewed interest in image-based topography reconstruction. However, the potential uptake of sUAS-based topography is limited by the need for ground control acquired with expensive survey equipment. Direct georeferencing (DG) is a workflow that obviates ground control and uses only the camera positions to georeference the SfM results. However, the absence of ground control poses significant challenges for the data quality of the final geospatial outputs. Notably, it is generally accepted that ground control is required to georeference, to refine the camera calibration parameters, and to remove any artefacts of optical distortion from the topographic model. Here, we present an examination of DG carried out with low-cost consumer-grade sUAS. We begin with a study of surface deformations resulting from systematic perturbations of the radial lens distortion parameters. We then test a number of flight patterns and develop a novel error quantification method to assess the outcomes. Our perturbation analysis shows that there exist families of predictable equifinal solutions of K1 and K2 which minimize doming in the output model. The equifinal solutions can be expressed as K2 = f(K1) and have been observed for both the DJI Inspire 1 and Phantom 3 sUAS platforms. This equifinality relationship can be used as an external reliability check of the self-calibration, allowing a DG workflow to produce topography free of non-affine deformations, with random errors of 0.1% of the flying height, linear offsets below 10 m and off-vertical tilts below 1°.
Whilst not yet of survey-grade quality, these results demonstrate that low-cost sUAS are capable of producing reliable topography products without recourse to expensive survey equipment, and we argue that direct georeferencing and low-cost sUAS could transform survey practice in both academic and commercial disciplines. Copyright © 2016 John Wiley & Sons, Ltd.
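The K2 = f(K1) equifinality has a simple least-squares analogue. Under the standard polynomial radial-distortion model, the net displacement is dr(r) = K1·r³ + K2·r⁵; for any fixed K1, the K2 that minimizes the mean-square displacement over the image is a linear function of K1. This sketch is a toy illustration of that structure, not the paper's perturbation analysis; the radius sampling and normalization are assumptions.

```python
# Sample radii across a normalized image (assumed uniform sampling in r).
RADII = [i / 100.0 for i in range(1, 101)]

def compensating_k2(k1):
    """K2 that minimizes sum_r (k1*r^3 + k2*r^5)^2: a linear family
    K2 = f(K1) of (approximately) equifinal distortion pairs."""
    s8 = sum(r ** 8 for r in RADII)
    s10 = sum(r ** 10 for r in RADII)
    return -k1 * s8 / s10

def rms_displacement(k1, k2):
    """RMS net radial displacement for a (K1, K2) pair."""
    n = len(RADII)
    return (sum((k1 * r ** 3 + k2 * r ** 5) ** 2 for r in RADII) / n) ** 0.5
```

The compensating pair leaves a much smaller net displacement than K1 alone, which is why such pairs are hard to distinguish without ground control.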

4.
5.
There are many fundamental problems with the injection of nano-zero-valent iron (NZVI) particles to create a permeable reactive barrier (PRB) treatment zone. Among them is the loss of medium porosity, or pore blocking, over time, which reduces permeability and causes the flow and contaminant plume to bypass the PRB up-gradient. The present study addresses such problems by confining the target injection zone to the gate of a funnel-and-gate configuration. A laboratory-scale experimental setup is used in this work. The designed PRB gate contains no additional porous-medium material. NZVI particles (d50 = 52 ± 5 nm) are synthesized in a water-ethanol solvent system. A steady-state condition is assumed for sizing the PRB, based on the required contact time, to obtain the optimum width of the PRB gate. A batch experiment is carried out and its results are used in the design of the PRB gate width (~50 mm). The effects of high initial NO3-N concentration, NZVI concentration, and pore velocity of water, within the range of laminar groundwater flow through porous media, on nitrate-N reduction in the PRB system are evaluated. The results indicate that increasing the initial NO3-N concentration and pore velocity inhibits NO3-N removal, in contrast to the effect of NZVI concentration. The settlement velocity (S.V.) of injected NZVI at different concentrations in the PRB is also investigated. The results indicate that the proposed PRB can overcome the low permeability of the down-gradient medium, but the increase of the S.V., especially at higher concentrations, remains a problem with this system that needs further investigation.

6.
The basic objective of this study is the assessment of the European seismic design codes, in particular EC2 and EC8, with respect to the recommended behaviour factor q. The assessment is performed on two reinforced concrete multi-storey buildings, with symmetrical and non-symmetrical plan views respectively, which were optimally designed under four different values of the behaviour factor. In the mathematical formulation of the optimization problem, the initial construction cost is the objective function to be minimized, while the cross-sections and steel reinforcement of the beams and columns constitute the design variables. The provisions of Eurocodes 2 and 8 are imposed as constraints on the optimization problem. Life-cycle cost analysis, in conjunction with structural optimization, is believed to be a reliable procedure for assessing the performance of structures during their lifetime. The two most important findings are summarized as follows: (1) The proposed Eurocode behaviour factor does not lead to a more economical design with respect to the total life-cycle cost compared to other values of q (q = 1, 2). (2) The differences in total life-cycle cost may be substantially greater than those observed for the initial construction cost for the four different values of q (q = 1, 2, 3, 4).
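The life-cycle trade-off the abstract describes can be made concrete with a toy present-value computation. All numbers below are hypothetical (initial costs, event rate, per-event losses, discount rate); the sketch only shows why a cheaper initial design can cost more over the structure's lifetime once expected earthquake losses are discounted and summed.

```python
def life_cycle_cost(initial_cost, annual_event_rate, expected_loss_per_event,
                    years=50, discount=0.03):
    """Initial cost plus the expected present value of earthquake losses,
    assuming a constant annual rate of damaging events (toy model)."""
    pv_losses = sum(annual_event_rate * expected_loss_per_event
                    / (1 + discount) ** t for t in range(1, years + 1))
    return initial_cost + pv_losses

# Hypothetical designs: a higher behaviour factor lowers the initial cost
# but raises the loss expected in each damaging event.
cheap = life_cycle_cost(initial_cost=100.0, annual_event_rate=0.02,
                        expected_loss_per_event=90.0)
robust = life_cycle_cost(initial_cost=130.0, annual_event_rate=0.02,
                         expected_loss_per_event=20.0)
```

Under these assumed numbers the more robust design has the lower total life-cycle cost despite its 30% higher initial cost, mirroring finding (1).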

7.
We present the chain of time-reverse modeling, image-space wavefield decomposition and several imaging conditions as a migration-like algorithm called time-reverse imaging. The algorithm locates subsurface sources in passive seismic data and diffractors in active data. We use elastic propagators to capitalize on the full waveforms available in multicomponent data, although an acoustic example is presented as well. For the elastic case, we perform wavefield decomposition in the image domain with spatial derivatives to calculate P and S potentials. To locate sources, the time axis is collapsed by extracting the zero-lag of auto- and cross-correlations to return images in physical space. The impulse response of the algorithm is strongly dependent on acquisition geometry and needs to be evaluated with point sources before processing field data. Band-limited data processed with these techniques image the radiation pattern of the source rather than just its location. We present several imaging conditions, and we imagine others could be designed to investigate specific hypotheses concerning the nature of the source mechanism. We illustrate this flexible technique with synthetic 2D passive data examples and a surface acquisition geometry specifically designed to investigate tremor-type signals that are not easily identified or interpreted in the time domain.
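The zero-lag correlation imaging condition described above is easy to sketch. The wavefields below are synthetic toys (separable Gaussians standing in for two time-reversed, decomposed wavefield components that both focus at the true source), not outputs of an elastic propagator; the grid, source position and focusing time are all assumptions.

```python
import math

# Two toy time-reversed wavefield components on a 1-D grid, each focusing
# at the true source location x0 at the focusing time t0.
NX, NT = 101, 200
x0, t0 = 60, 100

def gauss(a):
    return math.exp(-0.5 * a * a)

u = [[gauss((x - x0) / 3.0) * gauss((t - t0) / 5.0) for t in range(NT)]
     for x in range(NX)]
v = [[gauss((x - x0) / 4.0) * gauss((t - t0) / 6.0) for t in range(NT)]
     for x in range(NX)]

# Zero-lag cross-correlation imaging condition: collapse the time axis by
# summing the sample-by-sample product of the two components at each point.
image = [sum(ut * vt for ut, vt in zip(u[x], v[x])) for x in range(NX)]
located = max(range(NX), key=lambda x: image[x])
```

The image peaks where both components focus simultaneously, which is the essence of collapsing the time axis to return an image in physical space.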

8.
High-tech equipment engaged in the production of ultra-precision products has very stringent vibration criteria for its functionality in normal operating conditions and its safety during an earthquake. Most previous investigations were based on simplified planar models of building structures, despite the fact that real ground motions and structures are always three-dimensional. This paper hence presents a three-dimensional analytical study of a hybrid platform on which high-tech equipment is mounted for vibration mitigation. The design methodology of the hybrid platform proposed in this study is based on dual-level performance objectives for high-tech equipment: safety against seismic hazard and functionality against traffic-induced microvibration. The passive devices (represented by springs and viscous dampers) and the active actuators are designed, respectively, to meet the vibration criteria corresponding to the safety level and the functionality level. A prototype three-storey building with high-tech equipment installed on the second floor is selected in the case study to evaluate the effectiveness of the hybrid platform. The optimal location of the platform on the second building floor is determined during the design procedure in terms of the minimal H2 cost function of absolute velocity response. The simulation of the coupled actuator-platform-building system subjected to three-dimensional ground motions indicates that the optimally designed hybrid platform can achieve the dual target performance and effectively mitigate vibration at both ground motion levels. Copyright © 2009 John Wiley & Sons, Ltd.

9.
Large-scale flow structures (LSFS) in the streamwise direction are important features of gravel-bed river flows, because they may contribute to sediment transport and gas exchange. In the present study, these structures are detected using Huang's empirical mode decomposition and reconstructed with phase-averaging techniques based on a Hilbert transform of the velocity signal. The analysis is based on the fluctuating component of 15 quasi-instantaneous velocity profiles measured with a three-dimensional (3D) acoustic Doppler velocity profiler (ADVP) in an armoured gravel-bed river with a low relative submergence of 2.9 (ratio between flow depth and bed grain diameter). LSFS were identified in most of the measured profiles and consistently showed similar features. We were able to characterize the geometry of these large-scale coherent structures: the front has a vertical linear shift in the time domain and a vertical profile corresponding to a first quarter moon with the apex situated at z/h ≈ 0.4. In the vertical, the front scales with flow depth h, and in the streamwise direction, LSFS scale with three to seven times the mean flow depth. On the bed, the effect of LSFS is a periodic non-linear variation of the friction velocity, on average between 0.90 and 1.10 times the mean value. A model for the friction velocity cycle resulting from LSFS oscillation is presented. Copyright © 2014 John Wiley & Sons, Ltd.

10.
Marine seismic data are always affected by noise. An effective method to handle a broad range of noise problems is a time-frequency de-noising algorithm. In this paper we explain details regarding the implementation of such a method. Special emphasis is given to the choice of threshold values, where several different strategies are investigated. In addition, we present a number of processing results where time-frequency de-noising has been successfully applied to attenuate noise resulting from swell, cavitation, strumming and seismic interference. Our seismic interference noise removal approach applies time-frequency de-noising to slowness gathers (the τ-p domain). This processing trick represents a novel approach that efficiently handles certain types of seismic interference noise which are otherwise difficult to attenuate. We show that time-frequency de-noising is an effective, amplitude-preserving and robust tool that gives superior results compared with many other conventional de-noising algorithms (for example, frequency filtering, τ-p or f-x prediction). As background, some of the physical mechanisms responsible for the different types of noise are also explained. Such physical understanding is important because it can provide guidelines for future survey planning and for the actual processing.
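The core of a time-frequency de-noising scheme is thresholding of small spectral coefficients, window by window. The sketch below is a deliberately minimal stand-in, not the paper's algorithm: it uses non-overlapping rectangular windows, a naive DFT, and a fixed hard threshold whose value is an arbitrary assumption.

```python
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]

def tf_denoise(signal, win=64, thresh=8.0):
    """Hard-threshold small spectral coefficients in each time window:
    coherent signal survives, incoherent noise is suppressed."""
    out = []
    for i in range(0, len(signal), win):
        X = dft(signal[i:i + win])
        X = [c if abs(c) > thresh else 0.0 for c in X]
        out.extend(idft(X))
    return out

rng = random.Random(7)
clean = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]  # on-bin tone
noisy = [c + rng.gauss(0, 0.3) for c in clean]
denoised = tf_denoise(noisy)
```

A production implementation would use overlapping tapered windows, an FFT, and the data-adaptive threshold strategies the paper investigates.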

11.
The evaluation of functionality and its evolution in the aftermath of extreme events and during the restoration phase is a critical step in disaster resilience assessment. To this end, this paper presents the 'Functionality-Fragility Surface' (FFS), a tool for probabilistic functionality and resilience evaluation of damaged structures, infrastructure systems, and communities. The FFS integrates two well-known tools, namely fragility curves and restoration functions, to present the probability of loss of functionality of a system as a function of the extreme-event intensity and of the elapsed time from the initiation of the restoration process. Because of their versatility, FFSs can be applied to components and systems belonging to different infrastructure sectors (e.g., transportation, power distribution, and telecommunication), so they provide a common rigorous paradigm for integrated resilience analyses of multiple sectors, as well as for studies on interdependencies within and across sectors. While it is shown that FFSs can be developed using available data and simple computations for different types of structures and infrastructure systems, this paper also proposes a sophisticated simulation-based methodology to develop FFSs for individual bridges, taking into account the uncertainties involved in the response, damage, and restoration scheduling of bridges. A multi-span simply supported steel girder bridge is used to showcase the application of the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
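The "simple computations" version of an FFS can be sketched directly from its two ingredients. The functional forms and parameters below are assumptions for illustration (a single damage state, a lognormal fragility curve, an exponential restoration function), not the paper's bridge model.

```python
import math

def lognormal_cdf(x, median, beta):
    """Lognormal fragility curve: P(damage | intensity measure x)."""
    return 0.5 * (1 + math.erf(math.log(x / median) / (beta * math.sqrt(2))))

def restoration(t, t50):
    """Fraction of functionality restored t days after the event
    (toy exponential restoration function)."""
    return 1 - math.exp(-t / t50)

def ffs(im, t, median=0.6, beta=0.5, t50=30.0):
    """Functionality-Fragility Surface: probability the system is still
    non-functional as a function of intensity im and elapsed time t."""
    p_damaged = lognormal_cdf(im, median, beta)
    return p_damaged * (1 - restoration(t, t50))
```

The surface decreases along the time axis (restoration) and increases along the intensity axis (fragility), which is exactly the coupling the FFS is meant to capture; a multi-damage-state version would sum this product over damage states.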

12.
The assessment of seismic design codes has been the subject of intensive research aimed at revealing weak points that originate from the limitations in predicting, with acceptable precision, the response of structures under moderate or severe earthquakes. The objective of this work is to evaluate the European seismic design code, Eurocode 8 (EC8), when used for the design of 3D reinforced concrete buildings, against a performance-based design (PBD) procedure, in the framework of a multi-objective optimization concept. The initial construction cost and the maximum interstorey drift for the 10/50 hazard level are the two objectives considered in the formulation of the multi-objective optimization problem. The solution of such optimization problems is represented by the Pareto front curve, the geometric locus of all Pareto optimal solutions. Limit-state fragility curves for selected designs, taken from the Pareto front curves of the EC8 and PBD formulations, are developed for assessing the two seismic design procedures. Through this comparison it was found that a linear analysis in conjunction with the behaviour factor q of EC8 cannot capture the nonlinear behaviour of an RC structure. Consequently, the corrected EC8 Pareto front curve, obtained using the nonlinear static procedure, differs significantly from the corresponding Pareto front obtained according to EC8. Furthermore, designs that are similar with respect to initial construction cost, obtained through the EC8 and PBD formulations, were found to exhibit different maximum interstorey drifts and limit-state fragility curves. Copyright © 2007 John Wiley & Sons, Ltd.
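A Pareto front for the two objectives named above (initial cost, maximum interstorey drift, both minimized) is just the set of non-dominated designs. The candidate designs below are invented numbers purely to illustrate the extraction; a brute-force dominance check is enough at this scale.

```python
def pareto_front(designs):
    """Return the non-dominated designs for two minimized objectives
    (initial construction cost, maximum interstorey drift)."""
    front = []
    for d in designs:
        # d is dominated if some other design is no worse in both objectives
        dominated = any(o[0] <= d[0] and o[1] <= d[1] and o != d
                        for o in designs)
        if not dominated:
            front.append(d)
    return sorted(front)

# Hypothetical (cost, drift) pairs for five candidate designs.
designs = [(100, 0.020), (110, 0.014), (125, 0.009), (115, 0.016), (140, 0.008)]
front = pareto_front(designs)
```

Design (115, 0.016) drops out because (110, 0.014) is cheaper and stiffer at once; the remaining designs trace the cost-drift trade-off curve.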

13.
State-of-the-art 3D seismic acquisition geometries have poor sampling along at least one dimension. This results in coherent migration noise that always contaminates pre-stack migrated data, including high-fold surveys, if prior-to-migration interpolation was not applied. We present a method for effective noise suppression in migrated gathers, competing with data interpolation before pre-stack migration. The proposed technique is based on a dip decomposition of common-offset volumes and a semblance-type measure computed over offset for all constant-dip gathers. Thus the processing engages six dimensions: offset, inline, crossline, depth, inline dip, and crossline dip. To reduce computational costs, we apply a two-pass (4D in each pass) noise suppression: inline processing and then crossline processing (or vice versa). Synthetic and real-data examples verify that the technique preserves signal amplitudes, including amplitude-versus-offset dependence, and that faults are not smeared.

14.
Seismic imaging is an important step for imaging the subsurface structures of the Earth. One of the attractive domains for seismic imaging is explicit frequency-space (f-x) prestack depth migration. To date, work in this domain has focused on migrating seismic data in acoustic media, and very little has assumed visco-acoustic media. In reality, the amplitudes of seismic exploration data suffer from attenuation. To tackle the problem of attenuation, new operators are required that compensate for it. We propose a weighted L1-error minimisation technique to design visco-acoustic f-x wavefield extrapolators. The L1-error wavenumber responses provide superior extrapolator designs compared with the previously designed equiripple L4-norm and L-norm extrapolation wavenumber responses. To verify the new compensating designs, prestack depth migration is performed on the challenging Marmousi model dataset. A reference migrated section is obtained using non-compensating f-x extrapolators on an acoustic dataset. Then, both compensating and non-compensating extrapolators are applied to a visco-acoustic dataset, and the two migrated sections are compared. The final images show that the proposed weighted L1-error method enhances the resolution and results in practically stable images.

15.
China's new seismic zoning map (1990 edition) was compiled using probabilistic seismic hazard analysis. The map gives, for each site, the seismic intensity value with a 0.1 probability of being exceeded within 50 years. Questions of general concern are how this map differs from the earlier zoning maps China compiled (1957 and 1977 editions), what the concept of exceedance probability entails, and why an exceedance probability of 0.1 in 50 years was adopted. This paper discusses these questions. The analysis shows that the two earlier zoning maps were fundamentally oriented toward earthquake prediction, whereas the new zoning map is oriented toward predicting ground motion at individual sites. Because the new zoning map is produced by site-specific seismic hazard analysis, the hazard it expresses applies only to specific sites and cannot fully reflect the regional characteristics of seismic hazard. Clarifying the difference between site hazard and regional hazard is the basis for formulating sound regional disaster-prevention countermeasures. The author hopes this discussion will be of benefit to the correct use of the new seismic zoning map.

16.
This paper discusses an analytical study that quantifies the expected earthquake-induced losses in typical office steel frame buildings designed with perimeter special moment frames (SMFs) in highly seismic regions. It is shown that for seismic events associated with low probabilities of occurrence, losses due to demolition and collapse may be significantly overestimated when the expected loss computations are based on analytical models that ignore the composite beam effects and the interior gravity framing system of a steel frame building. For frequently occurring seismic events, building losses are dominated by non-structural content repairs; in this case, the choice of the analytical model representation of the steel frame building becomes less important. Losses due to demolition and collapse in steel frame buildings with SMFs designed with a strong-column/weak-beam ratio larger than 2.0 are reduced by a factor of two compared with those in the same frames designed with a strong-column/weak-beam ratio larger than 1.0, as recommended in ANSI/AISC-341-10. The expected annual losses (EALs) of steel frame buildings with SMFs vary from 0.38% to 0.74% over the building life expectancy. The EALs are dominated by repairs of acceleration-sensitive non-structural content, followed by repairs of drift-sensitive non-structural components. It is found that the effect of the strong-column/weak-beam ratio on EALs is negligible. This is not the case when the present value of life-cycle costs is selected as a loss metric. It is advisable to employ a combination of loss metrics to assess the earthquake-induced losses in steel frame buildings with special moment frames, depending on the seismic performance level of interest. Copyright © 2017 John Wiley & Sons, Ltd.
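An expected annual loss can be computed by integrating the expected loss given intensity against the derivative of the hazard curve. The sketch below is a generic illustration of that integral with invented power-law hazard and lognormal vulnerability functions; it is not the paper's loss model, and all parameter values are assumptions.

```python
import math

def hazard_rate(im, k0=1e-4, k=3.0):
    """Toy annual exceedance rate of intensity im (power-law hazard)."""
    return k0 * im ** -k

def loss_ratio(im, median=1.0, beta=0.5):
    """Expected loss, as a fraction of replacement cost, given intensity im
    (lognormal vulnerability function, assumed)."""
    return 0.5 * (1 + math.erf(math.log(im / median) / (beta * math.sqrt(2))))

def expected_annual_loss(im_min=0.05, im_max=3.0, n=2000):
    """EAL = integral of loss(im) * |d(lambda)/d(im)| d(im), by midpoints."""
    dim = (im_max - im_min) / n
    eal = 0.0
    for i in range(n):
        im = im_min + (i + 0.5) * dim
        dldim = (hazard_rate(im + 0.5 * dim)
                 - hazard_rate(im - 0.5 * dim)) / dim
        eal += loss_ratio(im) * abs(dldim) * dim
    return eal

eal = expected_annual_loss()
```

Multiplying the resulting loss fraction by the replacement cost gives the EAL in currency units; repeating the integral per component group reproduces the content/component breakdown the abstract reports.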

17.
Shear-wall dominant multistorey reinforced concrete structures, constructed using a special tunnel-form technique, are commonly built in countries facing substantial seismic risk, such as Chile, Japan, Italy and Turkey. In spite of their high resistance to earthquake excitations, current seismic code provisions, including the Uniform Building Code (International Conference of Building Officials, Whittier, CA, 1997) and the Turkish Seismic Code (Specification for Structures to be Built in Disaster Areas, Ankara, Turkey, 1998), present limited information on their design criteria. In this study, the consistency of the equations in those seismic codes related to the dynamic properties of these structures is investigated, and it is observed that the given empirical equations for predicting the fundamental periods of this specific type of structure yield inaccurate results. For that reason, a total of 80 different building configurations were analysed using three-dimensional finite-element modelling, and a set of new empirical equations was proposed. The results of the analyses demonstrate that the given formulas, which include new parameters, provide accurate predictions for a broad range of architectural configurations, roof heights and shear-wall distributions, and may be used as an efficient tool for the implicit design of these structures. Copyright © 2003 John Wiley & Sons, Ltd.
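Empirical period equations of this kind are typically calibrated by regression on analysis results. The sketch below fits a power law T = a·H^b by ordinary least squares in log-log space; the "data" are synthetic points generated from an assumed relation with mild scatter, not the paper's 80 finite-element models, and the form of the equation is a common convention rather than the authors' proposal.

```python
import math

# Synthetic (hypothetical) period data: roof height H in metres vs period T
# in seconds, generated from T = 0.05 * H**0.9 with a mild cyclic scatter.
data = [(h, 0.05 * h ** 0.9 * (1 + 0.02 * ((i % 5) - 2)))
        for i, h in enumerate(range(10, 60, 5))]

def fit_power_law(samples):
    """Least-squares fit of T = a * H**b in log-log space."""
    n = len(samples)
    lx = [math.log(h) for h, _ in samples]
    ly = [math.log(t) for _, t in samples]
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

a, b = fit_power_law(data)
```

With more parameters (e.g. shear-wall area ratios), the same log-linear regression extends to multi-variable period formulas.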

18.
Based on an analysis of the characteristics of earthquake disasters, this paper describes the purpose and significance, research content, working approach, and issues requiring attention in developing earthquake insurance zoning for Liaoning Province, and points out that earthquake insurance zoning will play an important role in promoting both the development of the insurance industry in Liaoning and earthquake disaster prevention and mitigation work.

19.
Real-time testing with dynamic substructuring is a novel experimental technique capable of assessing the behaviour of structures subjected to dynamic loadings, including earthquakes. The technique involves recreating the dynamics of the entire structure by combining an experimental test piece, consisting of part of the structure, with a numerical model simulating the remainder of the structure. These substructures interact in real time to emulate the behaviour of the entire structure. Time integration is the most versatile method for analysing the general case of linear and non-linear semi-discretized equations of motion. In this paper we propose, for substructure testing, L-stable real-time (LSRT) compatible integrators with two and three stages derived from the Rosenbrock methods. These algorithms are unconditionally stable for uncoupled problems and entail a moderate computational cost for real-time performance. They can also effectively deal with stiff problems, i.e. complex emulated structures for which solutions can change on a time scale that is very short compared with the interval of time integration, but where the solution of interest changes on a much longer time scale. Stability conditions of the coupled substructures are analysed by means of the zero-stability approach, and the accuracy of the novel algorithms in the coupled case is assessed in both unforced and forced conditions. LSRT algorithms are shown to be more competitive than popular Runge-Kutta methods in terms of stability, accuracy and ease of implementation. Numerical simulations and real-time substructure tests are used to demonstrate the favourable properties of the proposed algorithms. Copyright © 2007 John Wiley & Sons, Ltd.
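The flavour of a two-stage L-stable Rosenbrock scheme can be shown on a scalar autonomous ODE. This is the classical ROS2 method with γ = 1 + 1/√2 (second-order and L-stable for that choice), sketched here as a generic illustration; it is not the paper's LSRT algorithm, which addresses coupled real-time substructuring.

```python
def ros2_step(f, df, y, h, gamma=1.0 + 2 ** -0.5):
    """One step of the two-stage Rosenbrock (ROS2) method for a scalar
    autonomous ODE y' = f(y), with Jacobian df. With gamma = 1 + 1/sqrt(2)
    the method is second-order accurate and L-stable."""
    w = 1.0 - gamma * h * df(y)           # (I - gamma*h*J), scalar case
    k1 = f(y) / w
    k2 = (f(y + h * k1) - 2.0 * k1) / w
    return y + 0.5 * h * (3.0 * k1 + k2)

def integrate(f, df, y0, h, steps):
    y = y0
    for _ in range(steps):
        y = ros2_step(f, df, y, h)
    return y

# Accuracy on the mild problem y' = -y over [0, 1] (exact answer exp(-1)).
y_end = integrate(lambda y: -y, lambda y: -1.0, 1.0, 0.1, 10)

# L-stability: a single huge step on a very stiff decay y' = -1e6*y is
# damped to essentially zero instead of oscillating or blowing up.
y_stiff = integrate(lambda y: -1e6 * y, lambda y: -1e6, 1.0, 1.0, 1)
```

Only linear solves with the Jacobian appear (no nonlinear iteration), which is what makes Rosenbrock-type schemes attractive for hard real-time deadlines.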

20.
Historic land use in the Chesapeake Bay drainage basin induced large fluxes of fluvial sediment to subestuarine tributaries. Stratigraphic and palaeoecologic analyses of deltaic deposits may be used to infer changes on the landscape, but are not sufficient to quantify past sediment supply. When viewed as an inverse boundary-value problem, reconstruction of the sediment supply function may be achieved by combining deltaic sedimentation chronologies with an equation governing delta progradation. We propose that the diffusion equation is appropriate for simulating delta progradation and obtaining the sediment supply function, provided a suitable diffusion constant (D) can be determined. Three new methods for estimating D are presented for the case of estuarine deltas. When the inverse boundary-value technique was applied to Otter Point Creek, a tidal freshwater delta at the head of Bush River in upper Chesapeake Bay, D values ranged from 3763 to 6199 m² a⁻¹. Delta growth simulations showed a 1740–1760 initial pulse, a 1760–1780 erosive/redistributive interval, a 1780–1920 growth period, and a 1920–present erosive/redistributive era. Coupling of simulated delta elevations with an empirical plant habitat predictive equation allowed for comparison of predicted versus actual relative habitat areas. Also, the model yielded reconstructed watershed erosion rates and stream suspended sediment concentrations that could be useful for the development of water quality regulations. Copyright © 2001 John Wiley & Sons, Ltd.
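The forward half of the inverse problem, diffusive delta progradation driven by a sediment supply at the landward boundary, can be sketched with an explicit finite-difference scheme. The grid, time step, supply flux and boundary treatment below are all assumptions for illustration; only the diffusion constant is chosen inside the paper's reported range.

```python
def diffuse(elev, D, dx, dt, steps, supply):
    """Explicit finite-difference integration of dh/dt = D * d2h/dx2 with a
    sediment supply flux imposed at the landward boundary (toy model)."""
    h = list(elev)
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        new = h[:]
        for i in range(1, len(h) - 1):
            new[i] = h[i] + r * (h[i - 1] - 2 * h[i] + h[i + 1])
        # flux boundary condition: -D * dh/dx = supply at the landward end
        new[0] = new[1] + supply * dx / D
        h = new
    return h

# D = 5000 m^2/yr (within the paper's 3763-6199 range); 100 m cells,
# half-year steps, and an arbitrary supply flux.
profile = diffuse([0.0] * 50, D=5000.0, dx=100.0, dt=0.5, steps=200,
                  supply=2.0)
```

The inverse problem then adjusts the supply history until the simulated chronology matches the dated deltaic deposits.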
