Similar Documents
20 similar documents found (search time: 46 ms)
1.
V. Kárník  K. Klíma 《Tectonophysics》1993,220(1-4):309-323
The European-Mediterranean earthquake catalogue from 1901 to 1985, which comprises uniformly determined magnitudes MS and mB (h ≥ 60 km) of 13,300 events, was used in a study of the cumulative magnitude-frequency relationships Nc(M) compiled for 75 earthquake regions and 25 larger provinces. Over the whole observed magnitude range, the Gutenberg-Richter formula log Nc(M) = a − bM only very rarely fits the cumulative (log Nc, M) distributions. The b-values of log-linear segments of Nc(M) vary regionally from b = 0.7 to b = 1.3; averaging all values yields a single overall b-value for shallow events (MS).

Most distributions pertain to the Mediterranean area (b = 0.86 from the graph for shallow events), and many of them indicate the existence of characteristic earthquakes, in accordance with the theoretical single-fault model. Other observed shapes of Nc(M) can be explained by the superposition of populations with different Mmax values or by the presence of swarm-type activity. The observed Nc(M) distributions depend strongly on the delineation of earthquake regions, i.e., on the number and dimensions of seismoactive faults in the investigated region.

A premonitory enhancement of medium earthquake activity (M = 4.5–5.5) can be observed only very rarely.
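Gutenberg-Richter b-values like those above are usually estimated by maximum likelihood rather than by regressing on cumulative counts. A minimal sketch in Python (the synthetic catalog, threshold, and b = 1 target are illustrations, not the paper's data):

```python
import math
import random

def b_value_aki_utsu(mags, m_c, dm=0.0):
    """Maximum-likelihood b-value (Aki 1965; Utsu's correction for
    magnitudes binned to width dm).  Uses only events with M >= m_c."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Synthetic catalog drawn from an exact G-R distribution with b = 1:
# magnitudes above m_c are exponential with rate b * ln(10).
random.seed(1)
m_c = 4.0
catalog = [m_c + random.expovariate(1.0 * math.log(10.0)) for _ in range(5000)]
b_hat = b_value_aki_utsu(catalog, m_c)  # should be close to 1.0
```

Aki's estimator uses only the mean magnitude above the completeness threshold; the dm term is Utsu's correction for binned magnitudes and can be left at zero for continuous values.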


2.
We propose a modification of the Pattern Informatics (PI) method, which has been developed for forecasting the locations of future large earthquakes. The forecast is based on analyzing the space–time patterns of past earthquakes to find possible locations where future large earthquakes are expected to occur. A characteristic of our modification is that the effect of errors in the locations of past earthquakes on the output forecast is reduced. We apply the modified and original methods to seismicity in the central part of Japan and compare the forecast performances. We also invoke the Relative Intensity (RI) of seismic activity and randomized catalogs to constitute null hypotheses. Statistical tests using the Molchan and Relative Operating Characteristic (ROC) diagrams and the log-likelihoods show that the forecast using the modified PI method is generally better than the competing original-PI forecast and the forecasts from the null hypotheses. Using the bootstrap technique with Monte-Carlo simulations, we further confirm that earthquake sequences simulated from the modified-PI forecast can be statistically indistinguishable from the real earthquake sequence, so the forecast is acceptable. The main innovation of this paper is the modification of the PI method and the demonstration of its applicability, which shows considerable promise as an intermediate-term earthquake forecasting tool.
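A Molchan diagram of the kind used in these tests plots, for each alarm threshold, the fraction of space–time occupied by alarms against the fraction of target earthquakes missed. A minimal sketch (the cell ids and scores are hypothetical, not the paper's grid):

```python
def molchan_curve(cell_scores, target_cells):
    """Points of a Molchan trajectory: for each alarm threshold, the
    fraction of cells occupied by alarms (tau) vs. the fraction of target
    earthquakes missed (nu).  cell_scores maps cell id -> forecast score;
    target_cells lists the cells where large earthquakes actually occurred."""
    n_cells = len(cell_scores)
    n_targets = len(target_cells)
    # Order cells from highest to lowest score; alarms are the top-k cells.
    ranked = sorted(cell_scores, key=cell_scores.get, reverse=True)
    points = [(0.0, 1.0)]  # no alarms: every target is missed
    alarmed = set()
    for k, cell in enumerate(ranked, start=1):
        alarmed.add(cell)
        missed = sum(1 for c in target_cells if c not in alarmed)
        points.append((k / n_cells, missed / n_targets))
    return points

# Hypothetical 4-cell grid with targets in the best- and third-ranked cells:
curve = molchan_curve({"a": 3, "b": 2, "c": 1, "d": 0}, ["a", "c"])
```

A skillful forecast traces a curve well below the diagonal; the diagonal itself is the expected trajectory of a random, unskilled forecast.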

3.
Based on the tectonic framework of central Japan, including the surrounding submarine areas, the space-time relationship between destructive inland earthquakes of magnitudes M 6.4 or greater and great offshore earthquakes along the Nankai trough was examined. From east to west, four tectonic lines are defined as lines linking active faults: the Itoigawa-Shizuoka tectonic line (ISTL), the Tsurugawan-Isewan tectonic line (TITL), the Hanaore-Kongo fault line (HKFL), and the Arima-Takatsuki tectonic line (ATTL). The TITL divides central Japan into the Chubu and Kinki districts, and probably extends southward to the Nankai trough. The Chubu district is subdivided into four blocks by boundary lines linking NW-SE trending active faults having left-lateral strike slip. In the Kinki district, N-S trending, active reverse, steep-dip faults are dominant in the triangular region north of the Median Tectonic Line, between the TITL and HKFL, forming a basin-and-range province.

Starting from 1586 A.D., a seismic space-time sequence of high seismic activity in the Chubu district, in which earthquake occurrence migrates from the eastern to the western tectonic lines of central Japan, was identified. The sequence also revealed that inland earthquakes preceded the great offshore earthquakes along the Nankai trough. It was also found that a destructive earthquake tends to occur on the HKFL within 30 years after an occurrence on the TITL, and that the western Nankai trough generated great earthquakes of M ≥ 7.0 at intervals ranging from 8 to 49 years after the HKFL earthquakes. If the eastern Nankai trough is coupled with the western Nankai trough, a forthcoming greater earthquake measuring M 8.5 may be expected. Since such great earthquakes are always accompanied by large tsunamis, much attention should be focussed on possible tsunami disasters along the Pacific coast of central Japan.

Based on its tectonic structure, a tectonic model of central Japan is proposed. The seismic space-time sequence, which may explain the cause of the sequential earthquake generation, is also discussed.


4.
We found a characteristic space–time pattern of the tidal triggering effect on earthquake occurrence in the subducting Philippine Sea plate beneath the locked zone of the plate interface in the Tokai region, central Japan, where a large interplate earthquake may be impending. We measured the correlation between the Earth tide and earthquake occurrence using microearthquakes that took place in the Philippine Sea plate over about two decades. For each event, we assigned the tidal phase angle at the origin time by theoretically calculating the tidal shear stress on the fault plane. Based on the distribution of the tidal phase angles, we statistically tested whether they concentrate near some particular angle by using Schuster's test. In this test, the result is evaluated by the p-value, which represents the significance level at which to reject the null hypothesis that earthquakes occur randomly irrespective of the tidal phase angle. No correlation was found for the data set including all the earthquakes. However, we found a systematic pattern in the temporal variation of the tidal effect: the p-value significantly decreased preceding the occurrence of M ≥ 4.5 earthquakes, and it recovered to a high level afterwards. We note that those M ≥ 4.5 earthquakes were considerably larger than the normal background seismicity in the study area. The frequency distribution of tidal phase angles in the pre-event period exhibited a peak at the phase angle where the tidal shear stress is at its maximum to accelerate the fault slip. This indicates that the observed small p-value is a physical consequence of the tidal effect. We also found a distinctive feature in the spatial distribution of p-values. The small p-values appeared just beneath the strongly coupled portion of the plate interface, as inferred from the seismicity rate change in the past few years.
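Schuster's test reduces to a short computation: sum unit vectors at each event's tidal phase angle and compare the length of the resultant with that expected for uniformly random phases. A minimal sketch:

```python
import math

def schuster_p(phase_angles_deg):
    """Schuster's test: p-value for the null hypothesis that the tidal
    phase angles are uniformly distributed.  p = exp(-R**2 / N), where R
    is the length of the vector sum of N unit phasors."""
    n = len(phase_angles_deg)
    cx = sum(math.cos(math.radians(a)) for a in phase_angles_deg)
    sy = sum(math.sin(math.radians(a)) for a in phase_angles_deg)
    r2 = cx * cx + sy * sy
    return math.exp(-r2 / n)
```

A p-value near 1 means the phases are effectively uniform; a small p-value (e.g. below 0.05) rejects the null hypothesis that earthquakes occur irrespective of the tidal phase.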

5.
The seismic character of the Hindukush–Pamir–Himalaya (HPH) region and its vicinity is very peculiar, and the region has experienced many widely distributed large earthquakes. Recent work on time-dependent seismicity in the Hindukush–Pamir–Himalayas is mainly based on the so-called "regional time-predictable model", expressed by the relation log T = cMp + a, where T is the inter-event time between two successive main shocks of a region and Mp is the magnitude of the preceding main shock. Parameter a is a function of the magnitude of the minimum earthquake considered and of the tectonic loading, and c is a positive constant (about 0.3). In 90% of the cases with sufficient data, parameter c was found to be positive, which strongly supports the validity of the model. In the present study, a different approach, which assumes no prior regionalization of the area, is attempted to check the validity of the model. Nine seismic sources were defined within the considered region, and the inter-event times of strong shallow main shocks were determined and used for each source in an attempt at long-term prediction. Each source shows the clustering and occurrence of at least three earthquakes of magnitude 5.5 ≤ Ms ≤ 7.5, giving two repeat times and satisfying the necessary and sufficient conditions of the time-predictable (TP) model. Further, using the global applicability of the regional time- and magnitude-predictable model, the following relations have been obtained: log Tt = 0.19Mmin + 0.52Mp + 0.29 log m0 − 10.63 and Mf = 1.31Mmin − 0.60Mp − 0.72 log m0 + 21.01, where Tt is the inter-event time, measured in years; Mmin the surface-wave magnitude of the smallest main shock considered; Mp the magnitude of the preceding main shock; Mf the magnitude of the following main shock; and m0 the moment rate in each source per year.

These relations may be used for seismic hazard assessment in the region. Based on these relations and taking into account the time of occurrence and the magnitude of the last main shock in each seismogenic source, time-dependent conditional probabilities for the occurrence of the next large (Ms≥5.5) shallow main shocks during the next 20 years as well as the magnitudes of the expected main shocks are determined.
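The two regression relations above can be applied directly. A minimal sketch using the coefficients quoted in the abstract (the example input values in the test are hypothetical, not a real source from the paper):

```python
import math

def predicted_interevent_time(m_min, m_p, moment_rate):
    """Inter-event time T_t in years from the regional time-predictable
    relation quoted in the abstract:
        log T_t = 0.19*M_min + 0.52*M_p + 0.29*log(m0) - 10.63
    (base-10 logarithms; m0 is the annual moment rate of the source)."""
    log_t = (0.19 * m_min + 0.52 * m_p
             + 0.29 * math.log10(moment_rate) - 10.63)
    return 10.0 ** log_t

def predicted_following_magnitude(m_min, m_p, moment_rate):
    """Magnitude of the following main shock, from the abstract:
        M_f = 1.31*M_min - 0.60*M_p - 0.72*log(m0) + 21.01."""
    return (1.31 * m_min - 0.60 * m_p
            - 0.72 * math.log10(moment_rate) + 21.01)
```

Note that c in log T = cMp + a being positive encodes the time-predictable idea: the larger the preceding shock, the longer the wait for the next one.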


6.
P-wave first motion and synthetic seismogram analysis of P- and SH-waveforms recorded at teleseismic distances on the WWSSN are used to estimate source parameters of seven of the largest earthquakes (6.1 ≤ mb ≤ 6.3) that occurred in the vicinity of North Island, New Zealand since 1965. The source parameters of three other (mb ≥ 6.1) events determined outside of this study are included and considered in the final analysis. Four of the earthquakes occurred at shallow depths (< 20 km), of which three were located within and to the north of North Island. Two of the shallow events show strike-slip and normal focal mechanisms with T-axes oriented in a manner consistent with their location in an area of known back-arc extension. One of the shallow events occurred in northern South Island and shows a reverse-type mechanism indicating horizontal contraction of the crust in an easterly azimuth. Six events occurred at intermediate depths (h = 39 to 195 km) of which five exhibit thrust mechanisms with T-axes consistently oriented near vertical. In the light of previously published plate tectonic models, the near vertical orientation of T-axes of the intermediate-depth events may be used to infer that the southern Kermadec plate boundary immediately north of North Island is not strongly coupled, and hence, not likely capable of producing great earthquakes. A similar inference cannot be made for the section of the Hikurangi Margin adjacent to North Island since the intermediate-depth events considered in this study lie to the north of this segment of the plate boundary.

7.
The frequency-size relation of earthquakes in a region can be approximated by the Gutenberg-Richter (GR) law. This power-law model involves two parameters: the a-value, measuring seismic activity or earthquake productivity, and the b-value, describing the relation between the frequencies of small and large earthquakes. The spatial and temporal variations of these two parameters, especially the b-value, have been substantially investigated. For example, it has been shown that the b-value depends inversely on differential stress. The b-value has also been utilized as an earthquake precursor in large earthquake prediction. However, the physical meaning and properties of the b-value, including its value range, still remain an open fundamental question. We explore the properties of the b-value from the frequency-size GR model in a new form which relates the average energy release to the probability of large earthquakes. Based on this new form of the GR relation, the b-value can be related to the singularity index (1 − 2b/3) of a fractal energy-probability power-law model. Applied to the global database of earthquakes with size M ≥ 5 from 1964 to 2015, this model indicates a systematic increase of singularity from earthquakes occurring on mid-ocean ridges, to those in subduction zones, to those in collision zones.
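The mapping from b-value to singularity index quoted above is a one-line function. A minimal sketch illustrating that lower b-values map to stronger singularity (the example b-values are illustrative, not the paper's estimates for specific tectonic settings):

```python
def singularity_index(b):
    """Singularity index alpha = 1 - (2/3)*b of the fractal
    energy-probability power-law form of the G-R relation, as stated in
    the abstract; lower b-values give a larger (stronger) singularity."""
    return 1.0 - (2.0 / 3.0) * b
```

So the reported increase of singularity from mid-ocean ridges to collision zones is equivalent to a systematic decrease of b-value along the same sequence.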

8.
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum-likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historical data.
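The tapered Pareto distribution discussed above has a closed-form survival function and log-likelihood, which makes the joint two-parameter estimation straightforward to set up. A minimal sketch following the common Kagan–Schoenberg parameterization (the numeric values in the tests are illustrative, not fitted hazard parameters):

```python
import math

def tapered_pareto_sf(x, theta, beta, corner):
    """Survival function of the tapered Pareto distribution:
    S(x) = (theta/x)**beta * exp((theta - x)/corner) for x >= theta,
    where theta is the measurement threshold (smallest size kept),
    beta the power-law exponent, and corner the taper (corner) size."""
    if x < theta:
        return 1.0
    return (theta / x) ** beta * math.exp((theta - x) / corner)

def log_likelihood(data, theta, beta, corner):
    """Log-likelihood for joint (beta, corner) estimation.  The density
    is f(x) = (beta/x + 1/corner) * S(x), i.e. -dS/dx."""
    ll = 0.0
    for x in data:
        ll += (math.log(beta / x + 1.0 / corner)
               + beta * math.log(theta / x) + (theta - x) / corner)
    return ll
```

Maximizing this log-likelihood over (beta, corner) jointly, rather than fixing one parameter, is what accounts for the estimation dependence the abstract describes; a very large corner recovers the pure Pareto limit.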

9.
In this work, we apply the Pattern Informatics technique for evaluating one surface expression of the underlying stress field, the seismicity, in order to study the Parkfield–Coalinga interaction over the years preceding the 1983 Coalinga earthquake. We find that significant anomalous seismicity changes occur during the mid-1970s in this region prior to the Coalinga earthquake that illustrate a reduction in the probability of an event at Parkfield, while the probability of an event at Coalinga is seen to increase. This suggests that the one event did not trigger or hinder the other, rather that the dynamics of the earthquake system are a function of stress field changes on a larger spatial and temporal scale.

10.
Based on a block structure model of the inner belt of central Japan, an examination was conducted of the space-time distribution patterns of destructive earthquakes of magnitudes M 6.4 or greater (M = Japan Meteorological Agency scale). The distribution patterns revealed a periodicity in earthquake activity and the presence of seismic gaps. Major NW—SE trending left-lateral active faults divide the inner belt of central Japan into four blocks, 20–80 km wide. Historical earthquakes with M ≥ 6.4 that caused significant damage were documented in the inner belt of central Japan. The epicenters of these earthquakes lie close to the block boundaries.

Using the relationship between the magnitude of earthquakes which occurred in the Japanese Islands and the active length of the faults that generated them, the fault length of movement is calculated for each historical earthquake. Space—time distributions of earthquakes were obtained from the calculated lengths and the locations and times of generation. When an active period begins, a portion or segment of the block boundary creates an earthquake, which in turn appears on the ground surface. The active period ends when the block boundary has generated earthquakes over its entire length without overlapping.

Five seismic gaps with fault lengths of 20 km or longer can be found in the inner belt of central Japan. It is predicted that the gaps will generate earthquakes with magnitudes of about 7.0. These data are significant for estimating regional earthquake risk over central Japan and for the design of large earthquake-resistant structures.

The time sequences of earthquakes on the block boundaries reveal a similar tendency, with active periods of seismic activity alternating with quiet periods of little activity. The inner belt of central Japan is now in the last stage of an active period. The next active period is predicted to occur around 2500 A.D.


11.
The network nature of the occurrence of large earthquakes — with a discussion of the debate on earthquake prediction   Cited: 22 (self-citations: 0, citations by others: 22)
Xu Daoyi (徐道一) 《地学前缘 (Earth Science Frontiers)》2001,8(2):211-216
Most existing hypotheses on the mechanism of earthquake generation rely only on evidence from the source and its neighboring area. This paper proposes that the generation mechanism of large earthquakes has a network character, with a large earthquake viewed as a node in a multi-level network. The occurrence of an earthquake is the result of multiple driving forces, including astronomical factors. The network hypothesis of earthquake generation can better integrate existing concepts and explain many new phenomena discovered in earthquake prediction research. Viewed from the network hypothesis, the recent debate on whether earthquakes can be predicted offers many new insights. If the network hypothesis is applied, at least some earthquakes should be predictable.

12.
The magnitude frequency relation of an asperity model is constructed by means of the percolation theory. The Gutenberg-Richter relation is obtained with a b-value of 1 in the range of intermediate earthquakes. A relative enhancement in the probability of occurrence of large earthquakes is also observed. This effect is associated with “characteristic earthquakes”, whose magnitude is related to the size of the active fault.

13.
Whether earthquake occurrences follow a Poisson process model is a widely debated issue. The Poisson process model has great conceptual appeal, and those who rejected it under pressure of empirical evidence have tried to restore it by identifying main events and suppressing foreshocks and aftershocks. The approach here is to estimate the density functions for the waiting times of future earthquakes. For this purpose, the notion of the Gram-Charlier series, a standard method for the estimation of density functions, has been extended based on the orthogonality properties of certain polynomials such as Laguerre and Legendre. It is argued that it is best to estimate density functions in the context of a particular null hypothesis. Using the results of the estimation, a simple test has been designed to establish that earthquakes do not occur as independent events, thus violating one of the postulates of the Poisson process model. Both methodological and utilitarian aspects are dealt with.
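A quick diagnostic in the same spirit: for a Poisson process the inter-event (waiting) times are exponential, so their coefficient of variation is 1, while clustered catalogs give values above 1. A minimal sketch (this simple CV check is an illustration, not the Gram-Charlier test of the paper):

```python
def waiting_time_cv(event_times):
    """Coefficient of variation of inter-event (waiting) times for a
    time-sorted catalog.  A stationary Poisson process has exponential
    waiting times and CV = 1; aftershock clustering pushes CV above 1,
    while quasi-periodic recurrence pushes it below 1."""
    waits = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    mean = sum(waits) / len(waits)
    var = sum((w - mean) ** 2 for w in waits) / len(waits)
    return (var ** 0.5) / mean
```

A CV significantly different from 1 is evidence against independent, memoryless occurrence, which is the postulate the paper's test targets.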

14.
The Parkfield Area Seismic Observatory (PASO) was a dense, telemetered seismic array that operated for nearly 2 years in a 15 km aperture centered on the San Andreas Fault Observatory at Depth (SAFOD) drill site. The main objective of this deployment was to refine the locations of earthquakes that will serve as potential targets for SAFOD drilling and in the process develop a high (for passive seismological techniques) resolution image of the fault zone structure. A challenging aspect of the analysis of this data set was the known existence of large (20–25%) contrasts in seismic wavespeed across the San Andreas Fault. The resultant distortion of raypaths could challenge the applicability of approximate ray tracing techniques. In order to test the sensitivity of our hypocenter locations and tomographic image to the particular ray tracing and inversion technique employed, we compare an initial determination of locations and structure developed using a coarse grid and an approximate ray tracer [Thurber, C., Roecker, S., Roberts, K., Gold, M., Powell, M.L. , and Rittger, K., 2003. Earthquake locations and three-dimensional fault zone structure along the creeping section of the San Andreas fault near Parkfield, CA: Preparing for SAFOD, Geophys. Res. Lett., 30 3, 10.1029/2002GL016004.] with one derived from a relatively fine grid and an application of a finite difference algorithm [Hole, J.A., and Zelt, B.C., 1995. 3-D finite-difference reflection traveltimes, Geophys. J. Int., 121, 2, 427–434.]. In both cases, we inverted arrival-time data from about 686 local earthquakes and 23 shots simultaneously for earthquake locations and three-dimensional Vp and Vp/Vs structure. Included are data from an active source seismic experiment around the SAFOD site as well as from a vertical array of geophones installed in the 2-km-deep SAFOD pilot hole, drilled in summer 2002. 
Our results show that the main features of the original analysis are robust: hypocenters are located beneath the trace of the fault in the vicinity of the drill site, and the positions of major contrasts in wavespeed are largely the same. At the same time, we determine that shear wave speeds in the upper 2 km of the fault zone are significantly lower than previously estimated, and our estimate of the depth of the main part of the seismogenic zone decreases in places by about 100 m. Tests using “virtual earthquakes” (borehole receiver gathers of picks for surface shots) indicate that our event locations near the borehole currently are accurate to a few tens of meters horizontally and vertically.

15.
Worldwide analysis of the clustering of earthquakes has led to the hypothesis that the occurrence of abnormally large clusters indicates an increase in the probability of a strong earthquake in the next 3–4 years within the same region. Three long-term premonitory seismicity patterns, which correspond to different non-contradictory definitions of abnormally large clusters, were tested retrospectively in 15 regions. The results of the tests suggest that about 80% of the strongest earthquakes can be predicted by monitoring these patterns. Most of the results concern pattern B (“burst of aftershocks”), i.e., an earthquake of medium magnitude with an abnormally large number of aftershocks during the first few days. Two other patterns, S and Σ, often complement pattern B and can replace it in some regions where the catalogs show very few aftershocks. The practical application of these patterns is strongly limited by the fact that neither the location of the coming earthquake within the region nor its time of occurrence within the 3–4 years is indicated. However, these patterns present the possibility of increasing the reliability of medium- and short-term precursors; also, they allow activation of some important early preparatory measures. The results impose the following empirical constraint on the theory of the generation of a strong earthquake: it is preceded by abnormal clustering of weaker earthquakes in the space-time-energy domain; the corresponding clusters are few but may occur in a wide region around the location of the coming strong earthquake; the distances are of the same order as for other reported precursors.
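Pattern B as described above amounts to counting aftershocks in a short window after each medium-magnitude event. A minimal sketch (the magnitude range, window length, and count threshold are illustrative choices, not the calibrated values of the original studies):

```python
def burst_of_aftershocks(catalog, main_mag_range=(5.0, 6.5),
                         window_days=2.0, burst_threshold=10):
    """Flag 'pattern B' candidates: medium-magnitude events followed by an
    abnormally large number of smaller events within the first few days.
    catalog: time-sorted list of (time_in_days, magnitude) pairs.
    Returns (time, magnitude, aftershock_count) for each flagged event."""
    flagged = []
    for i, (t0, m0) in enumerate(catalog):
        if not (main_mag_range[0] <= m0 <= main_mag_range[1]):
            continue
        n_after = sum(1 for t, m in catalog[i + 1:]
                      if t - t0 <= window_days and m < m0)
        if n_after >= burst_threshold:
            flagged.append((t0, m0, n_after))
    return flagged

# A mainshock at t = 0 with 12 aftershocks in the first day, then a quiet
# medium event at t = 30 with no aftershocks:
demo = ([(0.0, 5.5)] + [(0.1 * k, 3.0) for k in range(1, 13)]
        + [(30.0, 5.2)])
bursts = burst_of_aftershocks(demo)
```

In practice the burst threshold would be set relative to the regional aftershock statistics, so that "abnormally large" is defined against the local background.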

16.
In this paper, the energy flux of strong earthquakes at a station is determined considering the progressive rupture of a fault as the source of earthquakes. It is found that the motion of the source and the relative position of the station with respect to the fault are important in determining the energy density, the energy flux and the duration of the earthquake at this station. There is a “sphere of influence” beyond which the source may be assumed to be stationary. The analytical results are in good agreement with those of the 5 strong motion records obtained very near the fault from the Parkfield event of 27th June, 1966. 21 strong motion records are studied for energy densities at the stations from which a magnitude-energy relationship is obtained which agrees closely with other existing relationships.

17.
The ground motion hazard for Sumatra and the Malaysian peninsula is calculated in a probabilistic framework, using procedures developed for the US National Seismic Hazard Maps. We constructed regional earthquake source models and used standard published and modified attenuation equations to calculate peak ground acceleration at 2% and 10% probability of exceedance in 50 years for rock site conditions. We developed or modified earthquake catalogs and declustered these catalogs to include only independent earthquakes. The resulting catalogs were used to define four source zones that characterize earthquakes in four tectonic environments: subduction zone interface earthquakes, subduction zone deep intraslab earthquakes, strike-slip transform earthquakes, and intraplate earthquakes. The recurrence rates and sizes of historical earthquakes on known faults and across zones were also determined from this modified catalog. In addition to the source zones, our seismic source model considers two major faults that are known historically to generate large earthquakes: the Sumatran subduction zone and the Sumatran transform fault. Several published studies were used to describe earthquakes along these faults during historical and pre-historical time, as well as to identify segmentation models of faults. Peak horizontal ground accelerations were calculated using ground motion prediction relations that were developed from seismic data obtained from the crustal interplate environment, crustal intraplate environment, along the subduction zone interface, and from deep intraslab earthquakes. Most of these relations, however, have not been developed for large distances that are needed for calculating the hazard across the Malaysian peninsula, and none were developed for earthquake ground motions generated in an interplate tectonic environment that are propagated into an intraplate tectonic environment. 
For the interplate and intraplate crustal earthquakes, we have applied ground-motion prediction relations that are consistent with California (interplate) and India (intraplate) strong motion data that we collected for distances beyond 200 km. For the subduction zone equations, we recognized that the published relationships at large distances were not consistent with global earthquake data that we collected and modified the relations to be compatible with the global subduction zone ground motions. In this analysis, we have used alternative source and attenuation models and weighted them to account for our uncertainty in which model is most appropriate for Sumatra or for the Malaysian peninsula. The resulting peak horizontal ground accelerations for 2% probability of exceedance in 50 years range from over 100% g to about 10% g across Sumatra and generally less than 20% g across most of the Malaysian peninsula. The ground motions at 10% probability of exceedance in 50 years are typically about 60% of the ground motions derived for a hazard level at 2% probability of exceedance in 50 years. The largest contributors to hazard are from the Sumatran faults.
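The two hazard levels quoted above correspond, under the usual Poissonian assumption of probabilistic seismic hazard analysis, to fixed annual exceedance rates. A minimal sketch of the conversion:

```python
import math

def annual_rate_from_poe(poe, years=50.0):
    """Annual exceedance rate implied by a probability of exceedance
    (PoE) over an exposure window, assuming Poissonian occurrence:
    PoE = 1 - exp(-rate * years)."""
    return -math.log(1.0 - poe) / years

def return_period(poe, years=50.0):
    """Mean return period (years) for the given PoE and exposure time."""
    return 1.0 / annual_rate_from_poe(poe, years)
```

This is why 2% and 10% in 50 years are conventionally described as the roughly 2475-year and 475-year ground motions, respectively.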

18.
Basic features of an entropy model of the energy accumulation and release process in earthquake source regions (in English)   Cited: 1 (self-citations: 0, citations by others: 1)
Seismicity in the local crust of a seismically active region is largely random, but in some cases the magnitude time series of small local earthquakes contains a deterministic component, very likely related to the nucleation of a large earthquake. As the largest of the small events become smaller and the smallest become larger, with the difference between them steadily decreasing, this component appears in the seismic record as a so-called energy wedge produced by two opposing real-time trends in magnitude. Using phase portraits, the authors have interpreted the evolution of the seismic process and the size distribution of small earthquakes in the source region of a large nucleating event in terms of nonlinear dynamics. This new treatment and mathematical model of the seismic process has been applied to a large body of earthquake catalog data from regions around the world, in particular Chinese earthquake data.

19.
M. Murru  R. Console  G. Falcone   《Tectonophysics》2009,470(3-4):214-223
We have applied an earthquake clustering epidemic model to real-time data at the Italian Earthquake Data Center operated by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) for short-term forecasting of moderate and large earthquakes in Italy. In this epidemic-type model, every earthquake is regarded, at the same time, as being triggered by previous events and as triggering following earthquakes. The model uses earthquake data only, with no explicit use of tectonic, geologic, or geodetic information. The forecasts are displayed as time-dependent maps showing both the expected rate density of Ml ≥ 4.0 earthquakes and the probability of ground shaking exceeding Modified Mercalli Intensity VI (PGA ≥ 0.01 g) in an area of 100 × 100 km2 around the zone of maximum expected rate density in the following 24 h. For testing purposes, the overall probability of occurrence of an Ml ≥ 4.5 earthquake in the same area of 100 × 100 km2 is also estimated. The whole procedure is tested in real time, for internal use only, at the INGV Earthquake Data Center. Forecast verification procedures have been carried out in a forward-retrospective way on the 2006–2007 INGV data set, making use of statistical tools such as Relative Operating Characteristic (ROC) diagrams. These procedures show that the clustering epidemic model performs up to several hundred times better than a simple random forecasting hypothesis. The seismic hazard modeling approach so developed, after a suitable period of testing and refinement, is expected to provide a useful contribution to real-time earthquake hazard assessment, with a possible practical application to decision making and public information.
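The conditional rate of such an epidemic-type (ETAS-like) model can be written as a background rate plus a sum of Omori-law contributions from past events. A minimal sketch (all parameter values are illustrative defaults, not the calibrated INGV values):

```python
import math

def epidemic_rate(t, catalog, mu=0.2, k=0.05, alpha=1.2,
                  c=0.01, p=1.1, m_c=4.0):
    """Conditional intensity of an epidemic-type (ETAS-like) model:
        lambda(t) = mu + sum_i k*exp(alpha*(m_i - m_c)) / (t - t_i + c)**p
    where mu is the background rate and each past event (t_i, m_i) with
    t_i < t contributes an Omori-law aftershock term scaled by its size."""
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:
            rate += k * math.exp(alpha * (m_i - m_c)) / (t - t_i + c) ** p
    return rate
```

Every event raises the rate for all later times, which is exactly the sense in which each earthquake is "at the same time triggered and triggering": the same kernel describes both roles.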

20.
Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their ultimate goal is to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual fault or fault network it simulates (just as, for example, meteorologists synchronize their models with the atmosphere by incorporating current atmospheric data into them). However, lithospheric dynamics is largely unobservable: important parameters cannot (or can only rarely) be measured in Nature. Earthquakes, though, provide indirect but measurable clues to the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault models with one another and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture areas of the synthetic earthquakes of this model on other models, the latter become partially synchronized with the first one. We use these partially synchronized models to successfully forecast most of the largest earthquakes generated by the first model. This forecasting strategy outperforms others that take into account only the earthquake series. Our results suggest that a good way to synchronize more detailed models with real faults is probably to force them to reproduce the sequence of previous earthquake ruptures on the faults. This hypothesis could be tested in the future with more detailed models and actual seismic data.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号