Similar Articles (20 results)
1.
In regions that undergo low deformation rates, as is the case for metropolitan France (i.e. the part of France in Europe), the use of historical seismicity in addition to instrumental data is necessary when dealing with seismic hazard assessment. This paper presents the strategy adopted to develop a parametric earthquake catalogue using moment magnitude Mw as the reference magnitude scale to cover both the instrumental and historical periods for metropolitan France. Work performed within the framework of the SiHex (SIsmicité de l’HEXagone; Cara et al. Bull Soc Géol Fr 186:3–19, 2015. doi: 10.2113/gssgfbull.186.1.3) and SIGMA (SeIsmic Ground Motion Assessment; EDF-CEA-AREVA-ENEL) projects, on instrumental and historical earthquakes respectively, has been combined to produce the French seismic CATalogue, version 2017 (FCAT-17). The SiHex catalogue is composed of ~40,000 natural earthquakes, for which the hypocentral location and Mw magnitude are given. Within the SIGMA research program, an integrated study of historical seismicity was carried out, from the calibration of intensity prediction equations (IPEs) in Mw, detailed in the companion paper by Baumont et al. (submitted), to their application to earthquakes of the SISFRANCE macroseismic database (BRGM, EDF, IRSN), through a dedicated strategy developed in the companion paper by Traversa et al. (Bull Earthq Eng, 2017. doi: 10.1007/s10518-017-0178-7), to compute their Mw magnitudes and depths. The macroseismic data, epicentral locations and intensities used in both the IPE calibration and the inversion process are those of SISFRANCE, without any revision. The inversion process allows the main specificities of the macroseismic fields reported by SISFRANCE to be taken into account through an exploration-tree approach. It also captures the epistemic uncertainties associated with the macroseismic data and with the IPE selection. For events that exhibit a poorly constrained macroseismic field (mainly old, cross-border or offshore earthquakes), joint inversion of Mw and depth is not possible, and the depth needs to be fixed to calculate Mw. Regional a priori depths have been defined for this purpose, based on the analysis of earthquakes with a well-constrained macroseismic field for which joint inversion of Mw and depth is possible. As a result, the seismological parameters of 27% of the SISFRANCE earthquakes have been jointly inverted; for the remaining 73%, Mw has been calculated assuming a priori depths. The FCAT-17 catalogue is composed of the SIGMA historical parametric catalogue (magnitude range 3.5–7.0), covering AD 463 to 1965, and of the SiHex instrumental catalogue, extending from 1965 to 2009. The historical part of the catalogue results from an automatic inversion of the SISFRANCE data. A quality index is estimated for each historical earthquake according to the way the event is processed. All magnitudes are given in Mw, which makes this catalogue directly usable as an input for probabilistic or deterministic seismic hazard studies. Uncertainties on magnitudes and depths are provided for historical earthquakes following the calculation scheme presented in Traversa et al. (2017). Uncertainties on magnitudes for instrumental events are from Cara et al. (J Seismol 21:551–565, 2017. doi: 10.1007/s10950-016-9617-1).
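As an illustration of the a-priori-depth branch of such a procedure (the 73% case above), the sketch below recovers Mw from intensity data points at fixed depth, assuming a generic IPE of the form I = c1 + c2·Mw − β·log10(R_hyp). The coefficient values are placeholders, not the calibrated IPEs of Baumont et al.

```python
# Hypothetical sketch: recovering Mw from macroseismic intensity data
# points for a fixed a-priori depth, assuming a generic IPE of the form
#   I = c1 + c2*Mw - beta*log10(R_hyp),  R_hyp = sqrt(R_epi^2 + h^2).
# Coefficients are placeholders, not the Baumont et al. values.
import numpy as np

def invert_mw(intensities, epi_dists_km, depth_km, c1=1.5, c2=1.5, beta=3.0):
    """Least-squares Mw for one event, depth fixed a priori."""
    r_hyp = np.hypot(epi_dists_km, depth_km)
    # Rearranged IPE: c2*Mw = I - c1 + beta*log10(R_hyp)
    mw_samples = (intensities - c1 + beta * np.log10(r_hyp)) / c2
    return mw_samples.mean(), mw_samples.std(ddof=1)

I = np.array([7.0, 6.0, 5.0, 4.0])      # intensity data points (toy)
R = np.array([5.0, 20.0, 60.0, 150.0])  # epicentral distances (km)
mw, sigma = invert_mw(I, R, depth_km=10.0)
print(f"Mw = {mw:.2f} +/- {sigma:.2f}")
```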

2.
In the Lake Chad basin, the Quaternary phreatic aquifer (hereafter QPA) presents large piezometric anomalies referred to as domes and depressions, whose depths are ~15 and ~60 m, respectively. A previous study (Leblanc et al. in Geophys Res Lett, 2003, doi: 10.1029/2003GL018094) noticed that brightness temperatures from METEOSAT infrared images of the Lake Chad basin are correlated with the QPA piezometry: at the same latitude, domes are ~4–5 K warmer than the depressions. Leblanc et al. (2003) suggested that such a thermal behaviour results from an evapotranspiration excess above the piezometric depressions, an interpretation implicitly assuming that the QPA is separated from the other aquifers by the clay-rich Pliocene formation. Based on satellite visible images, here we find evidence of giant polygons, an observation that suggests instead a local vertical connectivity between the different aquifers. We developed a numerical water convection model that provides an alternative explanation for the development of the QPA depressions and domes. Beneath the depressions, a cold descending convective current draws down the overlying QPA, while beneath the domes a warm ascending current produces overpressure. Such a basin-wide circulation is consistent with the water geochemistry. We further propose that the diurnal thermal and evaporation/condensation cycles specific to the ascending water current explain why the domes are warmer. We finally discuss the possible influence of the inferred convective circulation on the transient variations of the QPA reported from observations of piezometric levels and GRACE-based water mass changes over the region.

3.
In this paper we propose universal trace co-kriging, a novel methodology for the interpolation of multivariate Hilbert-space-valued functional data. Such data commonly arise in multi-fidelity numerical modeling of the subsurface and are part of many modern uncertainty quantification studies. Besides theoretical developments, we also present a methodological evaluation and comparisons with the recently published projection-based approach of Bohorquez et al. (Stoch Environ Res Risk Assess 31(1):53–70, 2016. https://doi.org/10.1007/s00477-016-1266-y). Our evaluations and analyses were performed on synthetic (oil reservoir) and real field (uranium contamination) subsurface uncertainty quantification case studies. Monte Carlo analyses were conducted to draw important conclusions and to provide practical guidelines for future practitioners.
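As background, in the finite-dimensional textbook setting a co-kriging predictor of the primary variable at an unsampled location s_0 is a weighted sum over all observed variables,

\[ \hat{Z}_1(\mathbf{s}_0) = \sum_{j=1}^{p} \sum_{i=1}^{n_j} \lambda_{ij}\, Z_j(\mathbf{s}_{ij}), \]

with the weights λ_ij obtained from the co-kriging system under unbiasedness constraints. The trace co-kriging of the paper generalizes this construction to Hilbert-space-valued (functional) data; the finite-dimensional formula above is only an orientation, not the paper's formulation.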

4.
Downscaling techniques are the tools required to link global climate model outputs, provided at a coarse grid resolution, to the finer-scale surface variables appropriate for climate change impact studies. Besides the at-site temporal persistence, the downscaled variables have to satisfy the spatial dependence naturally observed between climate variables at different locations. Furthermore, the spatial intermittency of precipitation should be reproduced. Because of the complexity of describing these properties, they are often ignored, which can affect the effectiveness of hydrologic process modeling. This study is a continuation of the work by Khalili and Nguyen (Clim Dyn 49(7–8):2261–2278. https://doi.org/10.1007/s00382-016-3443-6, 2017) on the multi-site statistical downscaling of daily precipitation series. A different approach to multi-site statistical downscaling, based on the concept of spatial autocorrelation, is presented in this paper. This approach has proven to give effective results for the multi-site multivariate statistical downscaling of daily extreme temperature time series (Khalili et al. in Int J Climatol 33:15–32. https://doi.org/10.1002/joc.3402, 2013). However, the precipitation variable presents more challenges because of its high spatio-temporal variability and intermittency. The proposed approach consists of logistic and multiple regression models, linking the global climate predictors to the precipitation occurrences and amounts respectively, and using the spatial autocorrelation concept to reproduce the spatial dependence observed between the precipitation series at different sites. An empirical technique is also incorporated into the approach in order to reproduce the intermittency of precipitation. The proposed approach was applied using observed daily precipitation data from ten weather stations located in the southwest region of Quebec and the southeast region of Ontario in Canada, and climate predictors from the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis dataset. The results demonstrate the ability of the proposed approach to adequately reproduce the observed precipitation occurrence and amount characteristics, temporal and spatial dependence, spatial intermittency and temporal variability.
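A minimal sketch of the two-stage occurrence/amount structure described above, assuming scikit-learn and toy data. The spatial-autocorrelation step that correlates the stochastic terms across sites is only indicated by a comment, and none of this is the authors' implementation.

```python
# Illustrative two-stage downscaling sketch (not the authors' code):
# a logistic model for wet/dry occurrence and a regression model for
# amounts on wet days, both driven by large-scale predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # NCEP/NCAR-style predictors (toy)
wet = (X[:, 0] + rng.normal(size=1000)) > 0.5
amount = np.where(wet, np.exp(0.5 * X[:, 1] + rng.normal(size=1000)), 0.0)

occ_model = LogisticRegression().fit(X, wet)
amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))

p_wet = occ_model.predict_proba(X)[:, 1]
# In a multi-site scheme, the uniform random numbers used below would be
# spatially correlated across stations to honour the observed dependence.
sim_wet = rng.uniform(size=1000) < p_wet
sim_amt = np.where(sim_wet, np.exp(amt_model.predict(X)), 0.0)
print(f"simulated wet-day fraction: {sim_wet.mean():.2f}")
```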

5.
Nowadays, most site classification schemes are based on the predominant period of the site, as determined from the average horizontal-to-vertical spectral ratio of seismic motion or microtremor. The difficulty lies in identifying the predominant period, in particular when the observed average response spectral ratio does not present a clear peak but rather a broadband amplification or multiple peaks. In this work, based on the Eurocode-8 (2004) site classification, and assuming bounded random fields for the shear- and compression-wave velocities, the damping coefficient, and the natural period and depth of the soil profile, we propose a new site-classification approach based on “target” simulated average H/V spectral ratios defined for each soil class. Taking advantage of the relationship of Kawase et al. (Bull Seismol Soc Am 101:2001–2014, 2011), which links the H/V spectral ratio to the horizontal (HTF) and vertical (VTF) transfer functions, statistics of the H/V spectral ratio are computed for the four soil classes via deterministic visco-elastic seismic analyses based on wave propagation theory. The results show that H/V and HTF have remarkably different amplitudes and shapes among the four soil classes, while their fundamental peaks fall in remarkably similar period ranges. Moreover, the “target” simulated average H/V spectral ratios for the four soil classes are in good agreement with the experimental ones obtained by Zhao et al. (Bull Seismol Soc Am 96:914–925, 2006) from the abundant and reliable Japanese strong-motion database KiK-net, by Ghasemi et al. (Soil Dyn Earthq Eng 29:121–132, 2009) from the Iranian strong-motion data, and by Di Alessandro et al. (Bull Seismol Soc Am, https://doi.org/10.1785/0120110084) from the Italian strong-motion data. In addition to the four EC-8 standard soil classes (A, B, C and D), the superposition of the four target H/V ratios reveals three new boundary site classes (AB, BC and CD) for overlapping VS30 ranges, where the predominant peak is not clearly consistent with any of the four proposed classes. Finally, we propose a site classification index based on the ratio between the cross-correlation and the mean quadratic error between the in-situ H/V spectral ratio and the “target” one. In order to test the reliability of the proposed approach, data from 139 sites were used: 132 collected from the Japanese KiK-net database and 7 from Algeria. The site classification success rates per class are around 93, 82, 89 and 100% for rock, hard soil, medium soil and soft soil, respectively. Zhao et al. (2006) found an average success rate for the four soil classes close to 60%, similar to that found in the present study (63%) without considering the new soil classes, but much smaller than when they are considered (86%). In the absence of VS30 data, the proposed approach can be an alternative for site classification.
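A minimal sketch of the proposed classification index, assuming it is computed as the cross-correlation divided by the mean quadratic error between the in-situ and target H/V curves sampled on a common period grid (the authors' exact normalisation may differ):

```python
import numpy as np

def classification_index(hv_site, hv_target):
    """Ratio of cross-correlation to mean quadratic error between an
    in-situ H/V curve and a 'target' class curve on the same period
    grid. Higher values indicate a better class match (assumed form)."""
    r = np.corrcoef(hv_site, hv_target)[0, 1]
    mse = np.mean((hv_site - hv_target) ** 2)
    return r / mse

# Toy curves on a common period grid: assign the site to the best class.
periods = np.linspace(0.05, 2.0, 100)
site = np.exp(-((periods - 0.5) / 0.2) ** 2) + 1.0
targets = {c: np.exp(-((periods - t) / 0.2) ** 2) + 1.0
           for c, t in [("A", 0.1), ("B", 0.3), ("C", 0.6), ("D", 1.2)]}
best = max(targets, key=lambda c: classification_index(site, targets[c]))
print("assigned class:", best)
```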

6.
According to the now-widespread idea that macroseismic intensity should be expressed in probabilistic terms, a beta-binomial model has been proposed in the literature to estimate the probability of the intensity at a site in a Bayesian framework, and a clustering procedure has been adopted to define the learning sets of macroseismic fields required to assign prior distributions to the model parameters. This article presents the results concerning the learning sets obtained by exploiting the large Italian macroseismic database DBMI11 (Locati et al. in DBMI11, the 2011 version of the Italian Macroseismic Database, 2011. http://emidius.mi.ingv.it/DBMI11/) and discusses the problems related to their use in the probabilistic modelling of attenuation in seismic regions of the European countries that are partners of the UPStrat-MAFA project (2012), namely South Iceland, Portugal, SE Spain and the Mt Etna volcano area (Italy). Anisotropy and the presence of offshore earthquakes are some of the problems faced. All the work has been carried out in the framework of Task B of the project.
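As a hedged reminder of the model family (the paper's exact parameterisation may differ), a beta-binomial description of the intensity at a site with n+1 ordinal levels assigns

\[ P(I = k) = \binom{n}{k}\, \frac{B(k+\alpha,\; n-k+\beta)}{B(\alpha, \beta)}, \qquad k = 0, \dots, n, \]

where B is the beta function and the prior parameters (α, β), which in an attenuation model vary with distance from the epicentre, are what the learning sets of macroseismic fields serve to constrain.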

7.
Laboratory experiments on decaying grid-generated stratified turbulence were performed in a two-layer fluid, varying the stratification intensity. Turbulence was generated by towing an array of cylinders in a square vessel; the grid was moved at a constant velocity along the total vertical extent of the tank. In order to investigate the influence of the stratification intensity on the turbulence decay, both 2C-PIV and stereo-PIV were used to provide time-resolved velocity fields in the horizontal plane as well as the out-of-plane velocity. As expected, the decay of the turbulence level along the vertical axis became faster, and the collapse into quasi-horizontal motion stronger, as the buoyancy frequency N increased. In order to characterise the decay process, we investigated the time evolution of the vortex statistics, the turbulence scales, and the kinetic energy and enstrophy of the horizontal flow. The exponents recovered in the corresponding scaling laws were compared with theoretical predictions and with reference values obtained in previous experimental studies. Both the spectral analysis and the evolution of the characteristic length scales indicate that, in the examined range of N, the dynamics is substantially independent of the stratification intensity. The results were explained in terms of the scaling analysis of decaying turbulence in strongly stratified fluids introduced by Brethouwer et al. (J Fluid Mech 585:343–368. https://doi.org/10.1017/S0022112007006854, 2007).

8.
A vulnerability analysis of about 300 unreinforced masonry churches in New Zealand is presented. The analysis uses a recently developed vulnerability index method (Cattari et al. in Proceedings of the New Zealand Society for Earthquake Engineering NZSEE 2015 conference, Rotorua, New Zealand, 2015a; Cattari et al. in SECED 2015 conference: earthquake risk and engineering towards a resilient world, Cambridge, 2015b; Goded et al. in Vulnerability analysis of unreinforced masonry churches (EQC 14/660)—final report, 2016; Lagomarsino et al. in Bull Earthq Eng, 2018), specifically designed for New Zealand churches and based on a widely tested approach for European historical buildings. It consists of a macroseismic approach in which the seismic hazard is defined by the intensity and correlated with post-earthquake damage. The many differences in typology between New Zealand and European churches, the former having very simple architectural designs and a majority of one-nave layouts, justified the need to develop a method specifically created for this country. A statistical analysis of the churches damaged during the 2010–2011 Canterbury earthquake sequence was previously carried out to develop the vulnerability index modifiers for New Zealand churches. The new method has been applied to generate seismic scenarios for each church, based on the most likely seismic event for a 500-year return period, using the latest version of New Zealand’s National Seismic Hazard Model. Results show that highly vulnerable churches (e.g. stone churches and/or those with a weak structural design) tend to produce higher expected damage even when the intensity level is lower than for less vulnerable churches in areas with slightly higher seismicity. The results of this paper provide a preliminary tool to identify buildings requiring in-depth structural analyses. This paper is a first step towards a vulnerability analysis of all the historical buildings in the country, in order to preserve New Zealand’s cultural and historical heritage.
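Methods of this family typically convert a macroseismic intensity I and a vulnerability index V into an expected mean damage grade μ_D through a closed form; the widely used European macroseismic expression, quoted here only as an indicative example (the New Zealand calibration may differ), is

\[ \mu_D = 2.5\left[1 + \tanh\!\left(\frac{I + 6.25\,V - 13.1}{2.3}\right)\right], \]

so that the vulnerability index modifiers developed from the Canterbury data shift V, and hence the expected damage, for each church typology.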

9.
We reviewed joint inversion studies of the rupture processes of significant earthquakes, defining a joint inversion in earthquake source imaging as a source inversion of multiple kinds of datasets (waveform, geodetic, or tsunami). Yoshida and Koketsu (Geophys J Int 103:355–362, 1990) and Wald and Heaton (Bull Seismol Soc Am 84:668–691, 1994) independently initiated joint inversion methods, finding that joint inversion provides more reliable rupture process models than single-dataset inversion; this finding led to an increase in joint inversion studies. A list of these studies was compiled using the finite-source rupture model database (Mai and Thingbaijam in Seismol Res Lett 85:1348–1357, 2014). Outstanding issues regarding joint inversion are also discussed.

10.
The main goal of this article is to decluster the Iranian plateau seismic catalog with the epidemic-type aftershock sequence (ETAS) model and to compare the results with those of some older methods. For this purpose, the Iranian plateau, bounded by 24°–42°N and 43°–66°E, is subdivided into three major tectonic zones: (1) north of Iran, (2) Zagros, and (3) east of Iran. The extracted earthquake catalog contains a total of 6034 earthquakes (Mw > 4) in the time span 1983–2017. The ETAS model is an accepted stochastic approach for seismicity evaluation and for declustering earthquake catalogs; however, it has not yet been used to decluster the seismic catalog of Iran. Until now, traditional methods like the Gardner and Knopoff space–time window method and the Reasenberg link-based method have been used in most studies for declustering the Iranian earthquake catalog. Finally, the results of declustering by the ETAS model are compared with the results of the Gardner and Knopoff (Bull Seismol Soc Am 64(5):1363–1367, 1974), Uhrhammer (Earthq Notes 57(1):21, 1986), Gruenthal (pers. comm.) and Reasenberg (J Geophys Res 90:5479–5495, 1985) declustering methods. An overall conclusion is difficult to draw, but the results confirm the high ability of the ETAS model to decluster the Iranian earthquake catalog. Use of the ETAS model is still in its early steps in Iranian seismological research, and more parametric studies are needed.
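For reference, the temporal ETAS model describes the seismicity rate conditional on the history H_t as

\[ \lambda(t \mid \mathcal{H}_t) = \mu + \sum_{t_i < t} \frac{K\, e^{\alpha (m_i - m_0)}}{(t - t_i + c)^{p}}, \]

where μ is the background rate, m_0 the magnitude threshold, and (K, α, c, p) the triggering parameters; stochastic declustering assigns each event a probability of being background versus triggered from these fitted rates.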

11.
We examine the implementation of a wave-breaking mechanism in a nonlinear potential flow solver. The success of the mechanism is studied by implementing it in the numerical model HOS-NWT, a computationally efficient, open-source code that solves for the free surface in a numerical wave tank using the high-order spectral (HOS) method. Once the breaking mechanism is validated, it can be implemented in other nonlinear potential flow models. To model wave breaking, first a wave-breaking onset parameter is identified, and then a method for computing the energy loss associated with breaking is determined. Wave-breaking onset is calculated using the breaking criterion introduced by Barthelemy et al. (J Fluid Mech, submitted; https://arxiv.org/pdf/1508.06002.pdf) and validated against the experiments of Saket et al. (J Fluid Mech 811:642–658, 2017). Wave-breaking energy dissipation is calculated by adding a viscous diffusion term computed using the eddy viscosity parameter introduced by Tian et al. (Phys Fluids 20(6):066604, 2008; Phys Fluids 24(3), 2012), which is estimated from the pre-breaking wave geometry. A set of two-dimensional experiments was conducted to validate the implemented wave-breaking mechanism at large scale. Breaking waves are generated using the traditional methods of focused-wave evolution and modulational instability, as well as irregular breaking waves with a range of primary frequencies, providing a wide range of breaking conditions against which to validate the solver. Furthermore, adjustments to the method of application and to the coefficient of the viscous diffusion term produce negligible differences, supporting the robustness of the eddy viscosity parameter. The model accurately predicts the surface elevation and the corresponding frequency/amplitude spectrum, as well as the energy dissipation, when compared with the experimental measurements. This suggests that the model is capable of calculating wave-breaking onset and energy dissipation successfully for a wide range of breaking conditions. The model is also able to calculate the transfer of energy between frequencies due to wave focusing and wave breaking. This study is limited to unidirectional waves but provides a valuable basis for future application of the wave-breaking model to a multidirectional wave field. By including parameters that remove energy due to wave breaking in a nonlinear potential flow solver, the risk of developing numerical instabilities due to an overturning wave is decreased, thereby increasing the application range of the model, including the calculation of more extreme sea states. A computationally efficient and accurate model for the generation of a nonlinear random wave field is useful for predicting the dynamic response of offshore vessels and marine renewable energy devices, predicting loads on marine structures, and studying open-ocean wave generation and propagation in a realistic environment.
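In essence, the onset criterion referenced here is kinematic: a breaking parameter B compares the horizontal fluid velocity U at the crest with the local crest speed C,

\[ B = \frac{U}{C}, \]

with breaking predicted once B exceeds a threshold that this line of work places at roughly 0.85; the precise threshold value and velocity definitions are those of Barthelemy et al. and Saket et al., not a value asserted here.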

12.
The region of Blida is characterized by relatively high seismic activity, particularly evident during the past two centuries. Indeed, it has experienced a significant number of destructive earthquakes, such as those of March 2, 1825 and January 2, 1867, with intensities of X and IX, respectively. This study aims to investigate the potential seismic hazard in Blida city and its surrounding regions. For this purpose, a seismic catalog was compiled from historical macroseismic events that occurred over a period of a few hundred years and from the recent instrumental seismicity dating back to 1900. The parametric-historic procedure introduced by Kijko and Graham (1998, 1999) was applied to assess the seismic hazard in the study region; it is designed to deal with incomplete catalogs and does not use any subjective delineation of active seismic zones. Because of the lack of recorded strong-motion data, three ground-motion prediction models were considered, as they appear the best suited to the seismicity of the study region. Results are presented as peak ground acceleration (PGA) seismic hazard maps, showing the expected peak accelerations with a 10% probability of exceedance in a 50-year period. As the most significant result, hot-spot regions with high PGA values are mapped; for example, a PGA of 0.44 g has been found in a small geographical area centered on Blida city.
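The hazard level used here, a 10% probability of exceedance in 50 years, corresponds to the familiar ~475-year return period under the standard Poisson assumption; a quick check:

```python
import math

# Poisson model: P = 1 - exp(-t / T)  =>  T = -t / ln(1 - P)
t, P = 50.0, 0.10
T = -t / math.log(1.0 - P)
print(f"return period ~ {T:.0f} years")  # ~475 years
```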

13.
In this short note, I comment on the research of Pisarenko et al. (Pure Appl Geophys 171:1599–1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD), as an asymptotic model for the block maxima of a random variable, and the generalized Pareto distribution (GPD), as a model for the peaks over threshold (POT) of the same random variable, is presented more clearly. Pisarenko et al. (2014) neglected to note that the approximations by the GEVD and GPD work only asymptotically in most cases. This is particularly true of the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions; consequently, these classical methods should be applied with caution when estimating the upper bound magnitude and the corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. (2014) and propose alternatives. I argue why the GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process assumed by Pisarenko et al. (2014). The crucial point for earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
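For the reader's convenience, the two asymptotic models under discussion are, in standard notation,

\[ G(x) = \exp\!\left\{-\left[1 + \xi\,\frac{x-\mu}{\sigma}\right]^{-1/\xi}\right\} \quad \text{(GEVD, block maxima)}, \]
\[ H(y) = 1 - \left(1 + \frac{\xi\, y}{\tilde{\sigma}}\right)^{-1/\xi} \quad \text{(GPD, excesses over a threshold)}, \]

both defined where the bracketed terms are positive and sharing the same shape parameter ξ. A truncated exponential has a tail that these limits approximate only slowly, which is the convergence issue raised in the note.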

14.
We summarize the main elements of a ground-motion model, as built in a three-year effort within the Earthquake Model of the Middle East (EMME) project. Together with the earthquake source model, the ground-motion models are used for a probabilistic seismic hazard assessment (PSHA) of a region covering eleven countries: Afghanistan, Armenia, Azerbaijan, Cyprus, Georgia, Iran, Jordan, Lebanon, Pakistan, Syria and Turkey. Given the wide variety of ground-motion predictive models, selecting the appropriate ones for modeling the intrinsic epistemic uncertainty can be challenging. In this respect, we provide a strategy for ground-motion model selection based on data-driven testing and sensitivity analysis. Our testing procedure highlights the models of good performance in terms of both data-driven and non-data-driven testing criteria: the former measure the match between the ground-motion data and the prediction of each model, whereas the latter identify discrepancies between the models. The selected set of ground-motion models was directly used in the sensitivity analyses that eventually led to decisions on the final logic tree structure. The strategy described in detail hereafter was successfully applied to shallow active crustal regions, and the final logic tree consists of four models (Akkar and Çağnan in Bull Seismol Soc Am 100:2978–2995, 2010; Akkar et al. in Bull Earthq Eng 12(1):359–387, 2014; Chiou and Youngs in Earthq Spectra 24:173–215, 2008; Zhao et al. in Bull Seismol Soc Am 96:898–913, 2006). For the other tectonic provinces in the considered region (i.e., subduction), we adopted the predictive models selected within the 2013 Euro-Mediterranean Seismic Hazard Model (Woessner et al. in Bull Earthq Eng 13(12):3553–3596, 2015). Finally, we believe that this framework for selecting and building a regional ground-motion model represents a step forward in ground-motion modeling, particularly for large-scale PSHA models.
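For reference, the log-likelihood (LLH) measure of Scherbaum et al. (2009) used in such data-driven testing is

\[ \mathrm{LLH}(g, \mathbf{x}) = -\frac{1}{N}\sum_{i=1}^{N} \log_2 g(x_i), \]

where g is the probability density predicted by a candidate GMPE and x_1, …, x_N are the observed ground-motion samples; lower LLH indicates a model whose predicted distribution better matches the data.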

15.
The third-generation wave model WAVEWATCH III was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and for the energy transfer from wind to waves were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497–518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917–1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The numerical simulation results were compared with altimeter-derived significant wave heights and with wave parameters measured at two stations in the northern part of the Persian Gulf, using statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed an underestimation of wave height for all wind sources; the performance of the model was best when the GFS-Analysis wind data were used. In general, when the wind veered from southeast to northwest and the wind speed was high during the rotation, the model's underestimation of wave height was severe. Except for the Tolman and Chalikov (1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performances of the formulations were practically similar. In terms of statistics, however, the Ardhuin et al. (2010) source terms with the TEST405 parameterization were the most successful formulation for the Persian Gulf when compared with the in-situ and altimeter-derived observations.
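A short sketch of the statistics a Taylor diagram summarises (correlation, normalised standard deviation, centred RMS difference), here applied to toy modelled/observed significant wave heights rather than the study's data:

```python
import numpy as np

def taylor_stats(model, obs):
    """Correlation, std-dev ratio and centred RMS difference between
    modelled and observed series (e.g. significant wave height)."""
    r = np.corrcoef(model, obs)[0, 1]
    sigma_ratio = model.std() / obs.std()
    crmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    return r, sigma_ratio, crmsd

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 0.5, size=500)            # toy Hs observations (m)
model = 0.9 * obs + rng.normal(0, 0.2, 500)    # toy model with bias/noise
r, sr, crmsd = taylor_stats(model, obs)
print(f"r = {r:.2f}, sigma_m/sigma_o = {sr:.2f}, cRMSD = {crmsd:.2f} m")
```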

16.
Point-measurement-based estimation of bedload transport in the coastal zone is very difficult. The only way to assess the magnitude and direction of bedload transport in larger areas, particularly those characterized by complex bottom topography and hydrodynamics, is to use a holistic approach. This requires modeling of waves, currents, the critical bed shear stress and the bedload transport magnitude, with due consideration of the realistic bathymetry and the distribution of surface sediment types. Such a holistic approach is presented in this paper, which describes the modeling of bedload transport in the Gulf of Gdańsk. Extreme storm conditions, defined on the basis of 138 years of NOAA data, were assumed. The SWAN model (Booij et al. 1999) was used to define the wind-wave fields, wave-induced currents were calculated using the Kołodko and Gic-Grusza (2015) model, and the magnitude of bedload transport was estimated using the modified Meyer-Peter and Müller (1948) formula. The calculations were performed within a GIS model. The results obtained are innovative, and the approach presented appears to be a valuable source of information on bedload transport in the coastal zone.
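For orientation, the original Meyer-Peter and Müller (1948) law (the study applies a modified variant) relates the dimensionless bedload rate to the excess Shields stress:

\[ \Phi = \frac{q_b}{\sqrt{(s-1)\,g\,d^{3}}} = 8\,(\theta - \theta_c)^{3/2}, \qquad \theta > \theta_c, \]

where q_b is the volumetric bedload transport rate per unit width, s the relative sediment density, d the grain diameter, θ the Shields parameter computed from the modeled bed shear stress, and θ_c ≈ 0.047 its critical value.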

17.
Ground-motion prediction equations (GMPEs) are essential tools in seismic hazard studies for estimating the ground motions generated by potential seismic sources. Global GMPEs, which are based on well-compiled global strong-motion databanks, have certain advantages over local GMPEs, including more sophisticated parameters in terms of distance, faulting style, and site classification, but they cannot guarantee the local or region-specific shear-wave propagation characteristics (e.g., geometric spreading behavior, quality factor) of different seismic regions at larger distances (beyond about 80 km). Here, strong-motion records from northern Iran are used to estimate the shear-wave propagation characteristics and to determine region-specific adjustment parameters that make three of the NGA-West2 GMPEs applicable to northern Iran. The dataset consists of 260 three-component records from 28 earthquakes recorded at 139 stations, with moment magnitudes between 4.9 and 7.4, horizontal distances to the surface projection of the rupture (RJB) of less than 200 km, and average shear-wave velocities over the top 30 m of the subsurface (VS30) between 155 and 1500 m/s. The paper also presents ranking results for three of the NGA-West2 GMPEs against strong motions recorded in northern Iran, before and after adjustment for the region-dependent attenuation characteristics. The ranking is based on the likelihood and log-likelihood methods (LH and LLH) proposed by Scherbaum et al. (Bull Seismol Soc Am 94:2164–2185, 2004; Bull Seismol Soc Am 99:3234–3247, 2009), the Nash–Sutcliffe model efficiency coefficient (Nash and Sutcliffe, J Hydrol 10:282–290, 1970), and the EDR method of Kale and Akkar (Bull Seismol Soc Am 103:1069–1084, 2013). The best-fitting models over the whole frequency range are the ASK14 and BSSA14 models. Given that the models' performances improved after applying the adjustment factors, at least a moderate regional variation of ground motions is highlighted. The regional adjustment based on the Iranian database reveals an upward trend (indicating a high Q factor) for the selected database. Further investigation to determine adjustment factors based on a much richer database of Iranian strong-motion records is of utmost importance for seismic hazard and risk analysis studies in northern Iran, which contains major cities, including the capital Tehran.
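Of the ranking measures listed, the Nash–Sutcliffe coefficient is the simplest to state; a minimal sketch with toy data (not the paper's dataset):

```python
import numpy as np

def nash_sutcliffe(predicted, observed):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 matches
    the observed mean, and negative values are worse than the mean."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 - (np.sum((observed - predicted) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))

# Toy example: GMPE median predictions vs observations in ln units.
obs = np.log([0.12, 0.30, 0.05, 0.21])
pred = np.log([0.10, 0.28, 0.06, 0.19])
print(f"NSE = {nash_sutcliffe(pred, obs):.3f}")
```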

18.
Southwest Turkey, along the Mediterranean coast, is prone to large earthquakes resulting from the subduction of the African plate under the Eurasian plate and from shallow crustal faults. The maximum observed magnitude of subduction earthquakes is Mw = 6.5, whereas that of crustal earthquakes is Mw = 6.6. Crustal earthquakes originate on faults related to the Isparta Angle and Cyprus Arc tectonic structures. The primary goal of this study is to assess the seismic hazard for the Antalya area (SW Turkey) using a probabilistic approach. A new earthquake catalog for the Antalya area, with a unified moment magnitude scale, was prepared within the scope of the study. The seismicity of the area was evaluated using the Gutenberg–Richter recurrence relationship. For the hazard computation, the CRISIS2007 software was used, following the standard Cornell–McGuire methodology. The attenuation model of Youngs et al. (Seismol Res Lett 68(1):58–73, 1997) was used for deep subduction earthquakes, and the Chiou and Youngs (Earthq Spectra 24(1):173–215, 2008) model for shallow crustal earthquakes. A seismic hazard map was developed for peak ground acceleration on rock ground, with a hazard level of a 10% probability of exceedance in 50 years. The results show that peak ground acceleration values on bedrock range between 0.215 and 0.23 g in the center of Antalya.
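For reference, the Gutenberg–Richter recurrence relationship used to characterize the seismicity takes the standard form

\[ \log_{10} N(\geq M) = a - b\,M, \]

where N(≥M) is the number of earthquakes of magnitude at least M per unit time, a measures the overall activity rate, and b (often close to 1) sets the relative proportion of small to large events.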

19.
The estimation of the seismological parameters of historical earthquakes is a key step in seismic hazard assessment for moderate-seismicity regions such as France. We propose an original method to assess the magnitude and depth of historical earthquakes using intensity data points. A flowchart based on an exploration tree (ET) approach allows a consistent methodology to be applied to all the different configurations of an earthquake’s macroseismic field and the inherent uncertainties to be explored. The method is applied to French test-case historical earthquakes, using the SisFrance (BRGM, IRSN, EDF) macroseismic database and the intensity prediction equations (IPEs) calibrated in the companion paper (Baumont et al. Bull Earthq Eng, 2017). A weighted least-squares scheme allowing the joint inversion of magnitude and depth is applied to earthquakes that exhibit a decay of intensity with distance. Two cases are distinguished: (1) a “Complete ET” is applied to earthquakes located within the metropolitan territory, while (2) a “Simplified ET” is applied to offshore and cross-border events, which lack information at short distances but have reliable data at large ones. Finally, an a-priori-depth-based magnitude computation is applied to ancient or poorly documented events described only by single or sporadic intensity data or by a few macroseismic testimonies. Specific processing of “felt” testimonies allows this complementary information to be exploited for poorly described earthquakes. The uncertainties associated with the magnitude and depth estimates result both from the full propagation of the uncertainties in the original macroseismic information and from the epistemic uncertainty related to the IPE selection procedure.
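A hedged sketch of what a weighted least-squares joint inversion of (Mw, depth) can look like, assuming an IPE of the generic form I = c1 + c2·Mw − β·log10 √(R_epi² + h²); the coefficients, weights and depth grid are illustrative placeholders, not the calibrated IPEs or the exploration-tree logic of the paper:

```python
import numpy as np

def joint_invert(I, R_epi, weights, c1=1.5, c2=1.5, beta=3.0,
                 depths=np.arange(2.0, 25.0, 0.5)):
    """Grid search on depth h; for each h the weighted least-squares
    Mw is closed-form because the model is linear in Mw."""
    best = None
    for h in depths:
        r = np.sqrt(R_epi**2 + h**2)
        mw = np.sum(weights * (I - c1 + beta * np.log10(r))) / (c2 * np.sum(weights))
        resid = I - (c1 + c2 * mw - beta * np.log10(r))
        cost = np.sum(weights * resid**2)
        if best is None or cost < best[0]:
            best = (cost, mw, h)
    return best  # (misfit, Mw, depth_km)

I = np.array([7.0, 6.0, 5.0, 4.0])      # intensity data points (toy)
R = np.array([5.0, 20.0, 60.0, 150.0])  # epicentral distances (km)
print(joint_invert(I, R, weights=np.ones(4)))
```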

20.
In 2000, the world population was 6.2 billion people; it reached 7 billion in 2012 and is expected to reach 9.5 billion (±0.4) in 2050 and 11 billion (±1.5) in 2100, according to the 2012 UN projections (Gerland et al. in Science 346:234–237, 2014). The trend after 2100 is still one of global demographic growth, but after 2060 Africa is the only continent where the population would still be increasing. The amount of water consumed annually to produce the food necessary to meet the needs of the population varies greatly between countries, from about 600 to 2500 m3/year per capita (Zimmer in L’empreinte eau. Les faces cachées d’une ressource vitale. Charles Léopold Meyer, Paris, 2013), depending on wealth, food habits, and the percentage of food waste generated (on average, 30% of the food produced is wasted). In 2000, total food production was on the order of 3300 million tons (in cereal equivalents). In 2014, an estimated 0.8 billion inhabitants of the planet suffered from hunger (FAO in World agriculture: towards 2030–2050. FAO, Rome, 2014. http://www.fao.org/docrep/004/Y3557E/y3557e00.HTM) and did not get the nutrition they need to be in good health or, in the case of children, to grow properly (both physically and intellectually). This food deficit was on the order of 40 million tons of cereal equivalents in 2014. The number of inhabitants with a food deficit was about 0.85 billion before the 2008 crisis and was decreasing annually, but it rose abruptly after 2008 to 1 billion inhabitants and is now slowly decreasing. Assuming a world-average water consumption for food of 1300 m3/year per capita in 2000, 1400 m3/year in 2050, and 1500 m3/year in 2100, a volume of around 8200 km3/year of water was needed in 2000, 13,000 km3/year will be needed in 2050, and 16,500 km3/year in 2100 (Marsily in L’eau, un trésor en partage. Dunod, Paris, 2009). Can bioenergy production be added on top of food production? Will that much water be available on Earth, and where will it come from? Is climate change going to modify the answers to these questions? Can severe droughts occur? Can there be conflicts related to a food deficit? Some preliminary answers and scenarios for food production will be given in this paper from a hydrologist’s viewpoint.
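The volumes quoted follow from straightforward arithmetic (population × per-capita consumption, with 1 km3 = 10^9 m3); a quick check of the stated orders of magnitude:

```python
# population (billions), per-capita water use for food (m3/year)
scenarios = {2000: (6.2, 1300), 2050: (9.5, 1400), 2100: (11.0, 1500)}
for year, (pop_billion, m3_per_capita) in scenarios.items():
    km3 = pop_billion * 1e9 * m3_per_capita / 1e9
    print(f"{year}: ~{km3:,.0f} km3/year")
# 2000: ~8,060   2050: ~13,300   2100: ~16,500
# (the paper quotes ~8,200, 13,000 and 16,500 km3/year, i.e. the same
# orders of magnitude with slightly different roundings)
```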

