Similar Articles
A total of 20 similar articles were found (search time: 203 ms).
1.
Large data sets covering large areas and time spans and composed of many different independent sources raise the question of the degree of harmonization obtained. The present study is an analysis of the harmonization with respect to the moment magnitude M w within the earthquake catalogue for central, northern, and northwestern Europe (CENEC). The CENEC earthquake catalogue (Grünthal et al., J Seismol, 2009) contains parameters for over 8,000 events in the time period 1000–2004 with magnitude M w ≥ 3.5. Only about 2% of the data used for CENEC have original M w magnitudes derived directly from digital data. Some of the local catalogues and data files give M w, but calculated by the respective agency from other magnitude measures or from intensity. About 60% of the local data give strength measures other than M w, and these have to be transformed by us using available formulae or new regressions based on original M w data. Although all events are thus unified to M w magnitude, inhomogeneity in the M w obtained from over 40 local catalogues and data files and 50 special studies is inevitable. Two different approaches have been followed to investigate the compatibility of the different M w sets throughout CENEC. The first harmonization check is performed using M w from moment tensor solutions from SMTS and Pondrelli et al. (Phys Earth Planet Inter 130:71–101, 2002; Phys Earth Planet Inter 164:90–112, 2007). The method to derive the SMTS is described, e.g., by Braunmiller et al. (Tectonophysics 356:5–22, 2002) and Bernardi et al. (Geophys J Int 157:703–716, 2004), and such data have been available in greater numbers since 1997. One check is made against the M w given in national catalogues and another against the M w derived by applying different empirical relations developed for CENEC. The second harmonization check concerns the vast majority of data in CENEC, related to earthquakes prior to 1997 or for which no moment tensor–based M w exists. In this case, an empirical relation for the M w dependence on epicentral intensity (I 0) and focal depth (h) was derived from 41 master events, i.e., earthquakes with high-quality data located all over central Europe. To also cover data lacking h, the corresponding depth-independent relation for these 41 events was derived as well. These equations are compared with the different sets of data from which CENEC has been composed, and the goodness of fit is demonstrated for each set. The vast majority of the events are very well or reasonably consistent with the respective relation, so the data can be said to be harmonized with respect to M w; the exceptions are discussed in detail.
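The depth-dependent and depth-independent relations mentioned above can be illustrated with a simple least-squares fit. The functional form M w = a + b·I 0 + c·log10(h) and all sample values in this sketch are illustrative assumptions, not the CENEC master-event data or coefficients.

```python
# Minimal sketch: fitting an empirical Mw(I0, h) relation by least squares.
# The functional form Mw = a + b*I0 + c*log10(h) and the sample values are
# illustrative assumptions; they are NOT the coefficients derived for CENEC.
import numpy as np

# Hypothetical master-event data: epicentral intensity I0, focal depth h (km), Mw
I0 = np.array([5.0, 6.0, 6.5, 7.0, 7.5, 8.0])
h = np.array([8.0, 10.0, 12.0, 7.0, 15.0, 10.0])
Mw = np.array([3.8, 4.3, 4.6, 4.9, 5.3, 5.6])

# Design matrix for Mw = a + b*I0 + c*log10(h)
A = np.column_stack([np.ones_like(I0), I0, np.log10(h)])
coef, *_ = np.linalg.lstsq(A, Mw, rcond=None)
print(f"Mw ≈ {coef[0]:.2f} + {coef[1]:.2f}*I0 + {coef[2]:.2f}*log10(h)")

# Depth-independent variant for events lacking h
A2 = np.column_stack([np.ones_like(I0), I0])
coef2, *_ = np.linalg.lstsq(A2, Mw, rcond=None)
print(f"Mw ≈ {coef2[0]:.2f} + {coef2[1]:.2f}*I0")
```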

2.
In their article, “New light on a dark subject: On the use of fluorescence data to deduce redox states of natural organic matter,” Macalady and Walton-Day (2009) subjected natural organic matter (NOM) samples to oxidation, reduction, and photochemical transformation. Fluorescence spectra were obtained on samples that were diluted “to bring maximum UV–visible absorbance values below 1.0.” The spectra were fit to the Cory and McKnight (2005) parallel factor analysis (PARAFAC) model, and consistent variation in the redox state of quinone-like moieties was not detected. Based on these results, they concluded that fitting fluorescence spectra to the Cory and McKnight (2005) PARAFAC model to obtain information about the redox state of quinone-like moieties in NOM is problematic. Recognizing that the collection and correction of fluorescence spectra require consideration of many factors, we investigated the potential for inner-filter effects to obscure the ability of fluorescence spectroscopy to quantify the redox state of quinone-like moieties. We collected fluorescence spectra on Pony Lake and Suwannee River fulvic acid standards that were diluted to cover a range of absorbance values, and fit these spectra to the Cory and McKnight (2005) PARAFAC model. Our results suggest that, in order for the commonly used inner-filter correction to effectively remove inner-filter effects, samples should be diluted such that the absorbance at 254 nm is less than 0.3 prior to the collection of fluorescence spectra. This finding indicates that inner-filter effects may have obscured changes in the redox signature of fluorescence spectra of the highly absorbing samples studied by Macalady and Walton-Day (2009).
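For readers unfamiliar with the correction discussed here, the sketch below applies the commonly used absorbance-based inner-filter correction, F corr = F obs · 10^((A ex + A em)/2), assuming 1-cm pathlengths; the example matrix and spectra are invented and are not the Pony Lake or Suwannee River data.

```python
# Minimal sketch of the standard absorbance-based inner-filter correction,
# F_corr = F_obs * 10**((A_ex + A_em) / 2), assuming 1-cm pathlengths.
# The example EEM and absorbance spectra are illustrative, not from the study.
import numpy as np

def inner_filter_correct(eem, abs_ex, abs_em):
    """eem: fluorescence matrix (n_em x n_ex); abs_ex/abs_em: absorbance spectra."""
    correction = 10 ** ((abs_ex[np.newaxis, :] + abs_em[:, np.newaxis]) / 2.0)
    return eem * correction

def dilution_ok(abs_254, threshold=0.3):
    """Abstract's recommendation: dilute until A(254 nm) < ~0.3 before measuring."""
    return abs_254 < threshold

# Toy example
eem = np.ones((5, 4))
abs_ex = np.array([0.10, 0.08, 0.05, 0.03])
abs_em = np.array([0.06, 0.05, 0.04, 0.03, 0.02])
print(inner_filter_correct(eem, abs_ex, abs_em).round(3))
print("dilute further?", not dilution_ok(0.45))
```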

3.
Three-dimensional attenuation structures are related to the subsurface heterogeneities present in the Earth's crust. An algorithm for estimation of the three-dimensional attenuation structure in a part of the Garhwal Himalaya, India, has been presented by Joshi (Curr Sci 90:581–585, 2006b; Nat Hazards 43:129–146, 2007). In continuation of our earlier approach, we present a method in which strong motion data are used to estimate the frequency-dependent three-dimensional attenuation structure of the region. The border district of Pithoragarh in the Higher Himalaya, India, lies in the central seismic gap region of the Himalaya. This region falls in seismic zones IV and V of the seismic zoning map of India. A dense network consisting of eight accelerographs has been installed in this region and has recorded several local events. An algorithm based on inversion of strong motion digital data is developed in this paper to estimate the attenuation structure at different frequencies using the data recorded by this network. Twenty strong motion records observed at five stations have been used to estimate site amplification factors using the inversion algorithm defined in this paper. Site effects obtained from the inversion have been compared with those obtained using the approaches of Nakamura (1988) and Lermo et al. (Bull Seis Soc Am 83:1574–1594, 1993). The obtained site amplification term has been used to correct the spectral acceleration data at different stations, and the corrected spectral acceleration data have been used as input to the developed algorithm to remove the effect of the near-site soil amplification term. The attenuation structure is estimated by dividing the entire area into several three-dimensional blocks, each with a different frequency-dependent shear wave quality factor Q β (f). The input to this algorithm is the spectral acceleration of the S phase of the corrected accelerogram. The outcome of the algorithm is given in terms of attenuation coefficients and source acceleration spectra. In the present study, the region has been divided into 25 rectangular blocks with a thickness of 10 km and a surface dimension of 12.5 × 12.1 km. The present study gives a three-dimensional attenuation model of the region that can be used for both hazard estimation and simulation of strong ground motion.
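The forward problem behind such an inversion can be linearized at each frequency as ln A obs(f) = ln A src(f) − πf Σ t i /Q i , with t i the S-wave travel time spent in block i. The sketch below solves this least-squares problem for an invented three-block geometry; it is not the algorithm of the cited papers.

```python
# Sketch of a linearized attenuation inversion at one frequency:
# ln A_obs = ln A_src - pi*f * sum_i (t_i / Q_i), with t_i the S-wave travel
# time spent in block i.  Geometry, travel times and spectra are invented
# for illustration; this is not the algorithm of the cited papers.
import numpy as np

f = 5.0                                 # frequency (Hz)
# travel-time matrix T[j, i]: time ray j spends in block i (s)
T = np.array([[1.0, 0.5, 0.0],
              [0.4, 1.2, 0.3],
              [0.0, 0.6, 1.1]])
ln_src = 2.0                            # known (or previously inverted) source term
Q_true = np.array([150.0, 300.0, 600.0])
ln_obs = ln_src - np.pi * f * T @ (1.0 / Q_true)

# Solve for 1/Q_i by least squares
rhs = (ln_src - ln_obs) / (np.pi * f)
invQ, *_ = np.linalg.lstsq(T, rhs, rcond=None)
print("recovered Q:", np.round(1.0 / invQ, 1))
```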

4.
The issue addressed in this paper is the objective selection of appropriate ground motion models for seismic hazard assessment in the Pyrenees. The method of Scherbaum et al. (2004a) is applied in order to rank eight published ground motion models relevant to intraplate or to low deformation rate contexts. This method is based on a transparent and data-driven process which quantifies the model fit and also measures how well the underlying model assumptions are met. The method is applied to 15 accelerometric records obtained in the Pyrenees for events of local magnitude between 4.8 and 5.1, corresponding to moment magnitudes ranging from 3.7 to 3.9. Only stations at rock sites are considered. A total of 720 spectral amplitudes are used to rank the selected ground motion models. Some control parameters of these models, such as magnitude and distance definitions, may vary from one model to the other. It is thus important to correct the selected models for their difference with respect to the magnitude and distance definitions used for the Pyrenean data. Our analysis shows that, with these corrections, some of the ground motion models successfully fit the data. These are the Lussou et al. (2001) and the Berge-Thierry et al. (2003) models. According to the selected ground motion models, a possible scenario of a magnitude 6 event is proposed; it predicts response spectra accelerations of 0.08–0.1 g at 1 Hz at a hypocentral distance of 10 km.  相似文献   

5.
Let {Y, Y i , −∞ < i < ∞} be a doubly infinite sequence of identically distributed and asymptotically linearly negative quadrant dependent random variables, and {a i , −∞ < i < ∞} an absolutely summable sequence of real numbers. We are inspired by Wang et al. (Econometric Theory 18:119–139, 2002) and by Salvadori (Stoch Environ Res Risk Assess 17:116–140, 2003), who used linear combinations of order statistics to estimate the quantiles of generalized Pareto and extreme value distributions. In this paper, we prove the complete convergence of the corresponding weighted sums under some suitable conditions. The results obtained improve and generalize the results of Li et al. (1992) and Zhang (1996), and extend those for negatively associated sequences and ρ*-mixing sequences. CIC Number O211; AMS (2000) Subject Classification 60F15, 60G50. Research supported by the National Natural Science Foundation of China.
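The abstract does not reproduce the exact statement proved, so only the generic Hsu–Robbins form of complete convergence for such moving-average (weighted-sum) processes is written out below; the normalization n^{1/p} and the moment conditions are assumptions here, not the paper's precise hypotheses.

```latex
% Generic statement of complete convergence (Hsu--Robbins sense) for the
% moving-average process X_k = \sum_i a_i Y_{i+k}; the normalization n^{1/p}
% and moment conditions used in the paper are not given in the abstract, so
% this is only the standard form of such results.
\[
  X_k \;=\; \sum_{i=-\infty}^{\infty} a_i\,Y_{i+k},
  \qquad
  \sum_{n=1}^{\infty} P\!\left( \Bigl| \sum_{k=1}^{n} X_k \Bigr| > \varepsilon\, n^{1/p} \right) < \infty
  \quad \text{for all } \varepsilon > 0 .
\]
```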

6.
7.
After site clean-up teams have removed all of what they believe to be UXO within a specific impact area, statistical compliance sampling is a possible method for verifying with a specified probability that this area has been cleaned to specifications. Schilling [J Qual Technol 10(2):47–51, 1978, Acceptance sampling in quality control. Marcel Dekker, Inc., New York, 1982] developed a compliance sampling methodology based on the hypergeometric distribution. Bowen and Bennett (1987) also use compliance sampling where they provide an approximation for estimating the number of samples (n) required to state with desired probability that the entire population of sample units (N, where n < N) are in compliance with cleanup goals. This article describes two methods (anomaly and transect) for applying the Schilling [J Qual Technol 10(2):47–51, 1978, Acceptance sampling in quality control. Marcel Dekker, Inc., New York, 1982] compliance sampling method to military training sites. After describing these methods, a simulation study is presented which demonstrates the performance of transect compliance sampling calculations based on varied degrees of clustered UXO within a specific impact area and different types of sampling routines.  相似文献   
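The hypergeometric compliance-sampling idea reduces to a small search for the required sample size: find the fewest sampled units such that, if a postulated number of UXO-containing units remained, the probability of detecting none of them is acceptably small. The population size, postulated number of remaining UXO, and confidence level below are illustrative assumptions, not values from the cited studies.

```python
# Minimal sketch of compliance sample-size calculation with the hypergeometric
# distribution: find the smallest number of sampled units n such that, if D or
# more UXO-containing units remained among N units, the chance of seeing none
# in the sample is at most 1 - confidence.  N, D and confidence are illustrative.
from scipy.stats import hypergeom

def compliance_sample_size(N, D, confidence=0.95):
    """Smallest n with P(0 detections | D defective units among N) <= 1 - confidence."""
    for n in range(1, N + 1):
        # scipy ordering: pmf(k, population size, number defective, sample size)
        p_miss = hypergeom.pmf(0, N, D, n)
        if p_miss <= 1.0 - confidence:
            return n
    return N

# Example: 500 transect units; detect with 95% confidence if >= 10 remain contaminated
print(compliance_sample_size(N=500, D=10, confidence=0.95))
```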

8.
The extension of MODFLOW onto the landscape with the Farm Process (MF-FMP) facilitates fully coupled simulation of the use and movement of water from precipitation, streamflow and runoff, groundwater flow, and consumption by natural and agricultural vegetation throughout the hydrologic system at all times. This allows for a more complete analysis of conjunctive-use water-resource systems than previously possible with MODFLOW by combining relevant aspects of the landscape with the groundwater and surface water components. This analysis is accomplished using distributed cell-by-cell supply-constrained and demand-driven components across the landscape within “water-balance subregions” comprised of one or more model cells that can represent a single farm, a group of farms, or other hydrologic or geopolitical entities. Simulations of micro-agriculture in the Pajaro Valley and macro-agriculture in the Central Valley are used to demonstrate the utility of MF-FMP. For the Pajaro Valley, the simulation of an aquifer storage and recovery system and a related coastal water distribution system to supplant coastal pumpage was analyzed subject to climate variations and additional supplemental sources such as local runoff. For the Central Valley, analysis of conjunctive use in the different hydrologic settings of the northern and southern subregions shows how and when precipitation, surface water, and groundwater are important to conjunctive use. The examples show that, through MF-FMP's ability to simulate natural and anthropogenic components of the hydrologic cycle, the distribution and dynamics of supply and demand can be analyzed, understood, and managed. Such an analysis of conjunctive use would be difficult without embedding these components in the simulation, as they are difficult to estimate a priori.

9.
Faulting, shallow seismicity (0–30 km), and seismic hazard of the Costa Rican Central Valley were analyzed. Faults in the study area are oriented northwest or northeast. There is an active fault system on the south flank of the Central Volcanic Ridge and another on the north flank of the Talamanca Ridge. Faults of these systems have generated 15 destructive earthquakes in the area during the last 228 years, all of them shallow; their locations show one cluster near the Poas Volcano and another south of the Central Valley. These earthquakes have damaged cities of the Central Valley; two of them destroyed the city of Cartago, and almost 1,000 people were killed. Regarding recent seismicity, there are three main seismic sources in the Central Volcanic Ridge (Irazu, Bajo de la Hondura, and Poas) and another three in the Talamanca Ridge (Puriscal, Los Santos, and Pejibaye). A seismic hazard map for the Metropolitan Area of San José has been elaborated, based on local tectonic and seismic information. The hazard computation covers an area of 20×15 km2 and includes the zone where most of the population and socioeconomic activity is concentrated. The computations are based on area sources and faults, each characterized by recurrence parameters, geometry, minimum and maximum magnitude, and source depth. A recent local spectral attenuation model, which includes relations for shallow crustal sources and subduction zone earthquakes, has been applied in this study. The seismic hazard results are presented in terms of contour plots of estimated peak ground acceleration (PGA) for bedrock conditions for return periods of 50, 100, and 500 years. In the Central Park of San José, the following PGA values were found: 0.29g for 50 years, 0.36g for 100 years, and 0.53g for 500 years.

10.
We have developed a community velocity model for the Pacific Northwest region from northern California to southern Canada and carried out the first 3D simulation of a Mw 9.0 megathrust earthquake rupturing along the Cascadia subduction zone using a parallel supercomputer. A long-period (<0.5 Hz) source model was designed by mapping the inversion results for the December 26, 2004 Sumatra–Andaman earthquake (Han et al., Science 313(5787):658–662, 2006) onto the Cascadia subduction zone. Representative peak ground velocities for the metropolitan centers of the region include 42 cm/s in the Seattle area and 8–20 cm/s in the Tacoma, Olympia, Vancouver, and Portland areas. Combined with an extended duration of the shaking up to 5 min, these long-period ground motions may inflict significant damage on the built environment, in particular on the highrises in downtown Seattle.  相似文献   

11.
As part I of a sequence of two papers, the L-moments previously developed by Hosking (J R Stat Soc Ser B Methodol 52(2):105–124, 1990) and the LH-moments of Wang (Water Resour Res 33(12):2841–2848, 1997) are revisited. New relationships are developed for regional homogeneity analysis using the LH-moments, and further establishment of regional homogeneity is investigated. The previous work of Hosking (J R Stat Soc Ser B Methodol 52(2):105–124, 1990) and Wang (Water Resour Res 33(12):2841–2848, 1997) on L-moments and LH-moments for the generalized extreme value (GEV) distribution is extended to the generalized Pareto (GPA) and generalized logistic (GLO) distributions. The Karkhe watershed, located in western Iran, is used as a case study area. Regional homogeneity was investigated by first treating the entire study area as one regional cluster. The entire study area was then designated “homogeneous” by the L-moments (L) and “heterogeneous” by all four levels of the LH-moments (L1 to L4). The k-means method was used to investigate the case of two regional clusters. All levels of the L- and LH-moments designated the upper watershed (region A) “homogeneous” and the lower watershed (region B) “possibly homogeneous”. The L3 level of the GPA and the L4 level of the GLO were selected for regions A and B, respectively. Wang (Water Resour Res 33(12):2841–2848, 1997) identified a reversing trend in the improved performance of the GEV distribution at the LH-moments level of L3 (during the goodness-of-fit test); similar results were also obtained in this research for the GEV distribution. However, for the GPA distribution the reversing trend started at L4 for region A and at L2 for region B. For the GLO, improved performance was observed for all levels (moving from L to L4) in both regions.
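Sample L-moments underpinning such a homogeneity analysis can be computed from unbiased probability-weighted moments as in Hosking (1990). The data below are invented annual maxima; the LH-moment extension of Wang (1997), which shifts weight toward the upper part of the sample, is not reproduced here.

```python
# Minimal sketch of sample L-moments from unbiased probability-weighted moments
# (Hosking 1990).  The annual-maximum data are invented for illustration.
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(n)                          # 0-based ranks of the ordered sample
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum(j * (j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2           # mean, L-scale, L-skewness, L-kurtosis

annual_maxima = [120., 95., 180., 210., 88., 140., 300., 160., 110., 240.]
print(np.round(sample_l_moments(annual_maxima), 3))
```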

12.
13.
Liverpool Bay is a region of freshwater influence that receives significant freshwater loading from a number of major English and Welsh rivers. Strong tidal current flow interacts with a persistent freshwater-induced horizontal density gradient to produce strain-induced periodic stratification (SIPS). Recent work (Palmer in Ocean Dyn 60:219–226, 2010; Verspecht et al. in Geophys Res Lett 37:L18602, 2010) has identified significant modification to tidal ellipses in Liverpool Bay during stratification due to an associated reduction in pycnocline eddy viscosity. Palmer (Ocean Dyn 60:219–226, 2010) identified that this modification results in an asymmetry between the upper- and lower-layer flows capable of permanently transporting freshwater away from the Welsh coastline via a SIPS pumping mechanism. Data from a new set of observations at the Irish Sea Observatory site B confirm these results: the measured residual flow is 4.0 cm s−1 to the north in the surface mixed layer and 2.4 cm s−1 to the south in the bottom mixed layer. A realistically forced 3D hydrodynamic ocean model, POLCOMS, reproduces many of the characteristics of the flow and vertical density structure at site B and is used to estimate the transport of water through a transect, WT, that runs parallel to the Welsh coast. Model results show that SIPS is the dominant steady state, occurring 78.2% of the time, whilst enduring stratification exists for only 21.0% of the year and enduring mixed periods for <1%. SIPS produces a persistent offshore flow of freshened surface water throughout the year. The estimated net flux of water in the surface mixed layer is 327 km3 year−1, of which 281 km3 year−1 is attributable to SIPS periods. Whilst the freshwater component of this flux is small, the net flux of freshwater through WT during SIPS is significant: the model estimates that 1.69 km3 year−1 of freshwater is transported away from the coast during SIPS periods, equivalent to 23% of the annual average river flow from the four catchment areas feeding Liverpool Bay. The results show SIPS pumping to be an important process in determining the fate of freshwater and associated loads entering Liverpool Bay.

14.
Shear wave splitting parameters represent a useful tool to detail the stress changes occurring in volcanic environments before impending eruptions. In the present paper, we display the parameter estimates obtained through implementation of a semiautomatic algorithm applied to all useful datasets of the following Italian active volcanic areas: Mt. Vesuvius, Campi Flegrei, and Mt. Etna. Most of these datasets have been the object of several studies (Bianco et al., Annali di Geofisica, XXXXIX 2:429–443, 1996, J Volcanol Geotherm Res 82:199–218, 1998a, Geophys Res Lett 25(10):1545–1548, 1998b, Phys Chem Earth 24:977–983, 1999, J Volcanol Geotherm Res 133:229–246, 2004, Geophys J Int 167(2):959–967, 2006; Del Pezzo et al., Bull Seismol Soc Am 94(2):439–452, 2004). Applying the semiautomatic algorithm, we confirmed the results obtained in previous studies, so we do not discuss in much detail each of our findings but give a general overview of the anisotropic features of the investigated Italian volcanoes. In order to make a comparison among the different volcanic areas, we present our results in terms of the main direction of the fast polarization (φ) and percentage of shear wave anisotropy (ξ).  相似文献   
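A common way to estimate the splitting parameters φ and δt is a grid search over fast-axis azimuth and delay that maximizes the similarity of the rotated horizontal components. The sketch below uses a synthetic split wavelet and assumed search ranges; it is not the semiautomatic algorithm applied in the cited studies.

```python
# Minimal sketch of a cross-correlation splitting measurement: grid-search the
# fast-axis azimuth (phi) and delay time (dt) that best align the rotated
# horizontal components.  Waveforms and search ranges are synthetic/assumed.
import numpy as np

def splitting_grid_search(north, east, dt_samp, max_delay=0.2):
    best = (-np.inf, 0.0, 0.0)
    for phi in np.radians(np.arange(0.0, 180.0, 2.0)):
        fast = north * np.cos(phi) + east * np.sin(phi)
        slow = -north * np.sin(phi) + east * np.cos(phi)
        for k in range(1, int(max_delay / dt_samp)):
            cc = np.corrcoef(fast[:-k], slow[k:])[0, 1]   # advance slow by k samples
            if cc > best[0]:
                best = (cc, np.degrees(phi), k * dt_samp)
    return best                                           # (max corr, phi in deg, delay in s)

# Synthetic example: wavelet split with phi = 30 deg and a 0.06 s delay
dt_samp = 0.01
t = np.arange(0.0, 2.0, dt_samp)
wavelet = np.exp(-((t - 0.5) / 0.1) ** 2)
fast0, slow0 = wavelet, np.roll(wavelet, 6)
phi0 = np.radians(30.0)
north = fast0 * np.cos(phi0) - slow0 * np.sin(phi0)
east = fast0 * np.sin(phi0) + slow0 * np.cos(phi0)
print(splitting_grid_search(north, east, dt_samp))
```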

15.
We present a simple and efficient hybrid technique for simulating earthquake strong ground motion. This procedure combines the envelope-function technique (Midorikawa et al., Tectonophysics 218:287–295, 1993) and the composite source model (Zeng et al., Geophys Res Lett 21:725–728, 1994). The first step of the technique is the construction of the envelope function of the large earthquake by superposition of envelope functions for smaller earthquakes. The smaller earthquakes (sub-events) of varying sizes are distributed randomly on the fault plane, instead of a uniform distribution of same-size sub-events. The accelerogram of the large event is then obtained by combining the envelope function with band-limited white noise. The low-cut frequency of the band-limited white noise is chosen to correspond to the corner frequency for the target earthquake magnitude, and the high-cut to Boore's f max or a desired frequency for the simulation. Below the low-cut frequency, the fall-off slope is 2 in accordance with the ω2 earthquake source model. The technique requires parameters such as fault area, fault orientation, hypocenter, sub-event size, stress drop, rupture velocity, duration, source–site distance, and attenuation parameter. The fidelity of the technique has been demonstrated by successful modeling of the 1991 Uttarkashi, Himalaya earthquake (Ms 7). The acceptable locations of the sub-events on the fault plane have been determined using a genetic algorithm. The main characteristics of the simulated accelerograms (duration of strong ground shaking, peak ground acceleration, and Fourier and response spectra) are, in general, in good agreement with those observed at most of the sites. At some of the sites the simulated accelerograms differ from the observed ones by a factor of 2–3; local site geology and topography may cause such a difference, as these effects have not been considered in the present technique. The advantage of the technique is that it does not require detailed parameters such as velocity–Q structures or empirical Green's functions, and it can be applied when records of actual time histories from past earthquakes are not available. This method may find application in preparing a wide range of simulation-based scenarios, providing information complementary to that available in probabilistic hazard maps.
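The core idea of modulating band-limited white noise with an envelope function can be sketched in a few lines; the envelope shape, corner frequency, f max, and durations below are illustrative assumptions rather than the parameters of the technique described above.

```python
# Minimal sketch of the envelope-times-filtered-noise idea: band-limited
# Gaussian noise (between an assumed corner frequency fc and fmax) modulated
# by a simple envelope.  Envelope shape, fc, fmax and durations are
# illustrative assumptions, not the parameters of the cited technique.
import numpy as np
from scipy.signal import butter, filtfilt

dt, dur = 0.01, 20.0
t = np.arange(0.0, dur, dt)
fc, fmax = 0.2, 15.0                       # assumed corner and cutoff frequencies

noise = np.random.default_rng(0).standard_normal(t.size)
b, a = butter(4, [fc, fmax], btype="bandpass", fs=1.0 / dt)
noise_bp = filtfilt(b, a, noise)

# Simple exponential-rise/decay envelope (illustrative shape only, peak at 2 s)
envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)
accel = envelope * noise_bp
print("peak simulated acceleration (arbitrary units):", np.abs(accel).max())
```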

16.
The earthquakes in Uttarkashi (October 20, 1991, M w 6.8) and Chamoli (March 8, 1999, M w 6.4) are among the recent well-documented earthquakes that occurred in the Garhwal region of India and caused extensive damage as well as loss of life. Using strong-motion data of these two earthquakes, we estimate their source, path, and site parameters. The quality factor (Q β ) as a function of frequency is derived as Q β (f) = 140f^1.018. The site amplification functions are evaluated using the horizontal-to-vertical spectral ratio technique. The ground motions of the Uttarkashi and Chamoli earthquakes are simulated using the stochastic method of Boore (Bull Seismol Soc Am 73:1865–1894, 1983). The estimated source, path, and site parameters are used as input for the simulation. The simulated time histories are generated for a few stations and compared with the observed data. The simulated response spectra at 5% damping are in fair agreement with the observed response spectra for most of the stations over a wide range of frequencies. The residual trends confirm the close match between the observed and simulated response spectra. The synthetic data are in rough agreement with the ground-motion attenuation equation available for the Himalayas (Sharma, Bull Seismol Soc Am 98:1063–1069, 1998).
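In stochastic (Boore-type) simulation, the target Fourier amplitude spectrum combines an omega-squared source, geometric spreading, and the anelastic term exp(−πfR/(Q(f)β)). The sketch below uses the abstract's Q β (f) = 140f^1.018, while the seismic moment, stress drop, and distance are illustrative assumptions and constant factors (radiation pattern, density, site terms) are omitted.

```python
# Minimal sketch of a stochastic-method target spectrum with the abstract's
# attenuation Q_beta(f) = 140 f**1.018.  Moment, stress drop, distance and the
# omitted constant factors are illustrative simplifications.
import numpy as np

def target_fas(f, M0_Nm=1e19, stress_drop_bar=50.0, R_km=50.0, beta_kms=3.5):
    M0_dyncm = M0_Nm * 1e7
    fc = 4.9e6 * beta_kms * (stress_drop_bar / M0_dyncm) ** (1.0 / 3.0)  # Brune corner (Hz)
    source = (2 * np.pi * f) ** 2 * M0_Nm / (1.0 + (f / fc) ** 2)        # omega-squared acceleration source
    Q = 140.0 * f ** 1.018                                               # Q_beta(f) from the abstract
    path = np.exp(-np.pi * f * R_km / (Q * beta_kms)) / R_km             # anelastic attenuation and 1/R spreading
    return source * path

freqs = np.logspace(-1, 1.3, 8)
print(np.round(target_fas(freqs), 3))
```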

17.
The volume of groundwater stored in the subsurface in the United States decreased by almost 1000 km3 during 1900–2008. The aquifer systems with the three largest volumes of storage depletion include the High Plains aquifer, the Mississippi Embayment section of the Gulf Coastal Plain aquifer system, and the Central Valley of California. Depletion rates accelerated during 1945–1960, averaging 13.6 km3/year during the last half of the century, and after 2000 increased again to about 24 km3/year. Depletion intensity is a new parameter, introduced here, to provide a more consistent basis for comparing storage depletion problems among various aquifers by factoring in time and areal extent of the aquifer. During 2001–2008, the Central Valley of California had the largest depletion intensity. Groundwater depletion in the United States can explain 1.4% of observed sea‐level rise during the 108‐year study period and 2.1% during 2001–2008. Groundwater depletion must be confronted on local and regional scales to help reduce demand (primarily in irrigated agriculture) and/or increase supply.  相似文献   
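The sea-level contribution quoted above follows from simple arithmetic: spreading the depleted volume over the ocean surface and comparing it with the observed rise. The ocean area and the observed rise rate in the sketch below are standard round numbers assumed here, not values taken from the paper.

```python
# Back-of-the-envelope check of the sea-level-rise fraction quoted in the
# abstract, assuming a global ocean area of ~3.61e8 km^2 and an average
# observed 20th-century rise of ~1.8 mm/yr (both assumptions, not values
# from the paper).
depletion_km3 = 1000.0            # cumulative US groundwater depletion, 1900-2008
ocean_area_km2 = 3.61e8
years = 108

equiv_rise_mm = depletion_km3 / ocean_area_km2 * 1e6    # km -> mm
observed_rise_mm = 1.8 * years
print(f"equivalent rise: {equiv_rise_mm:.2f} mm "
      f"({100 * equiv_rise_mm / observed_rise_mm:.1f}% of observed rise)")
```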

18.
The goal of this study was to estimate the stress field acting in the Irpinia Region, an area of southern Italy that has been struck in the past by destructive earthquakes and that is now characterized by low to moderate seismicity. The dataset consists of records of 2,352 aftershocks of the last strong event, the 23 November 1980 earthquake (M 6.9). The earthquakes were recorded at seven seismic stations on average and have been located using a three-dimensional (3D) P-wave velocity model and a probabilistic, non-linear, global search technique. The use of a 3D velocity model yielded a more stable estimation of take-off angles, a crucial parameter for focal mechanism computation. The earthquake focal mechanisms were computed from the P-wave first-motion polarity data using the FPFIT algorithm. Fault plane solutions mostly show faulting with a normal component (pure normal faulting and normal faulting with a strike-slip component); only a few fault plane solutions show strike-slip or reverse faulting. The stress field is estimated by inverting selected focal mechanisms using the method proposed by Michael (J Geophys Res 92:357–368, 1987a), and the results show that the Irpinia Region is subject to NE–SW extension with horizontal σ 3 (plunge 0°, trend 230°) and subvertical σ 1 (plunge 80°, trend 320°), in agreement with the results derived from other stress indicators.

19.
Coseismic deformation can be determined from strong-motion records of large earthquakes. Iwan et al. (Bull Seismol Soc Am 75:1225–1246, 1985) showed that baseline corrections are often required to obtain reliable coseismic deformation because baseline offsets lead to unrealistic permanent displacements. Boore (Bull Seismol Soc Am 91:1199–1211, 2001) demonstrated that different choices of time points for baseline correction can yield realistic-looking displacements, but with variable amplitudes. The baseline correction procedure of Wu and Wu (J Seismol 11:159–170, 2007) improved upon Iwan et al. (Bull Seismol Soc Am 75:1225–1246, 1985) and achieved stable results. However, their time points for baseline correction were chosen by a recursive process with an artificial criterion. In this study, we follow the procedure of Wu and Wu (J Seismol 11:159–170, 2007) but use the ratio of energy distribution in accelerograms as the criterion to determine the time points of baseline correction automatically, thus avoiding the manual choice of time points and speeding up the estimation of coseismic deformation. We use the 1999 Chi-Chi earthquake in central Taiwan and the 2003 Chengkung and 2006 Taitung earthquakes in eastern Taiwan to illustrate this new approach. Comparison between the results from this and previous studies shows that our new procedure is suitable for quick and reliable determination of coseismic deformation from strong-motion records.
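The automatic selection of baseline-correction time points from the energy distribution of an accelerogram can be sketched as follows; the 5%/95% cumulative-energy thresholds and the toy record are assumptions for illustration, not the exact criterion used in the study.

```python
# Minimal sketch of choosing baseline-correction time points automatically from
# the energy build-up of an accelerogram: t1 and t2 are taken where the
# normalized cumulative squared acceleration crosses assumed thresholds
# (5% and 95% here, purely for illustration).
import numpy as np

def energy_time_points(acc, dt, lo=0.05, hi=0.95):
    energy = np.cumsum(acc ** 2) * dt
    energy /= energy[-1]                       # normalized cumulative energy
    t1 = np.searchsorted(energy, lo) * dt      # onset of strong shaking
    t2 = np.searchsorted(energy, hi) * dt      # end of strong shaking
    return t1, t2

# Toy record: strong shaking between 10 s and 25 s plus a small post-event offset
dt = 0.01
t = np.arange(0.0, 60.0, dt)
acc = np.where((t > 10) & (t < 25), np.sin(2 * np.pi * t), 0.0)
acc[t > 25] += 0.002
t1, t2 = energy_time_points(acc, dt)
print(f"baseline-correction time points: t1 = {t1:.1f} s, t2 = {t2:.1f} s")
# Baselines fitted before t1 and after t2 would then be removed prior to
# double integration of the record to displacement.
```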

20.
Previous works based mainly on strong-motion recordings of large Japanese earthquakes showed that site amplification and soil fundamental frequency could vary over long and short time scales. These phenomena were attributed to non-linear soil behaviour: the initial fundamental frequency and amplification both decrease instantaneously and then recover over a time varying from a few seconds to several months. The recent April 6, 2009 earthquake (M W 6.3), which occurred in the L'Aquila district (central Italy), gave us the opportunity to test hypotheses on the time variation of the amplification function and soil fundamental frequency, thanks to the recordings provided by a pre-existing strong-motion array and by a large number of temporary stations. We investigated the intra- and inter-event soil frequency variations through different spectral analyses, including time–frequency spectral ratios and the S-transform (Stockwell et al., IEEE Trans Signal Process 44:998–1001, 1996). Finally, analyses of noise recordings were performed in order to study the soil behaviour under linear conditions. The results provided puzzling evidence. Concerning the long time scale, little variation was observed at the permanent stations of the Aterno Valley array. As for the short time-scale variation, the evidence was often contrasting, with some stations showing time-varying behaviour while others did not change their frequency with respect to the one evaluated from noise measurements. Even when a time-varying fundamental frequency was observed, it was difficult to attribute it to a classical, softening non-linear behaviour. Even for the strongest recorded shocks, with peak ground acceleration reaching 0.7 g, variations in frequency and amplitude do not seem relevant from a building-design standpoint. The only exception seems to be the site named AQV, where the analyses show the fundamental frequency of the soil shifting from 3 Hz to about 1.5 Hz during the mainshock.
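Time–frequency spectral ratios of the kind used here can be sketched with short-time Fourier transforms of the horizontal and vertical components; the synthetic signals, window length, and the simple H/V peak-picking below are illustrative assumptions, not the processing applied to the L'Aquila records.

```python
# Minimal sketch of a time-frequency spectral ratio: short-time Fourier
# transforms of horizontal and vertical components are divided to track how
# the apparent fundamental frequency evolves through a record.  The synthetic
# signals and the window length are illustrative assumptions.
import numpy as np
from scipy.signal import stft

fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(1)
# Toy horizontal component resonating near 3 Hz early and near 1.5 Hz later
freq_of_t = np.where(t < 30.0, 3.0, 1.5)
horiz = np.sin(2 * np.pi * freq_of_t * t) + 0.1 * rng.standard_normal(t.size)
vert = 0.3 * rng.standard_normal(t.size)

f, times, H = stft(horiz, fs=fs, nperseg=1024)
_, _, V = stft(vert, fs=fs, nperseg=1024)
ratio = np.abs(H) / (np.abs(V) + 1e-12)

# Apparent fundamental frequency (peak of the H/V ratio) in each time window
peak_freqs = f[np.argmax(ratio, axis=0)]
print(np.round(peak_freqs, 2))
```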

