Similar Articles
20 similar articles found.
1.
Very little work has been done on generating alternatives to the Poisson process model. The work reported here develops alternatives to the Poisson process model for earthquakes and checks them against empirical data using the apparatus of statistical hypothesis testing. The strategy used for generating hypotheses is to compound the Poisson process: the parameter of the Poisson process is replaced by a random variable with a prescribed density function. The density functions used are the gamma, the chi and an extended (gamma/chi) form. The original distribution is then averaged with respect to these density functions. For the compound Poisson processes, the waiting-time distributions for future events are derived. As the parameters of the various statistical models for earthquake occurrence are not known, the problem is essentially one of composite hypothesis testing. One way of designing a test is to estimate these parameters and use them as true values; moment matching is used here to estimate the parameters. Results of hypothesis testing with data from the Hindukush and North East India are presented.
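The compounding step described in this abstract can be sketched numerically. In the illustrative case below (the gamma mixing density and the parameter values are assumptions for demonstration, not taken from the paper), averaging the exponential waiting time over a Gamma(shape a, rate b) distribution of the Poisson rate gives the closed-form survival function P(T > t) = (b/(b+t))^a:

```python
import numpy as np

def gamma_compound_survival(t, a, b):
    """P(T > t) when the Poisson rate is Gamma(shape=a, rate=b) distributed.

    Averaging exp(-lambda * t) over the gamma density gives (b / (b + t))**a,
    a Pareto-type (heavier than exponential) waiting-time tail.
    """
    return (b / (b + t)) ** a

# Monte Carlo check: draw rates from the gamma mixing density, then
# exponential waiting times conditional on each rate, and compare tails.
rng = np.random.default_rng(0)
a, b = 2.0, 5.0
rates = rng.gamma(shape=a, scale=1.0 / b, size=200_000)
waits = rng.exponential(1.0 / rates)
t = 3.0
empirical = (waits > t).mean()
analytic = gamma_compound_survival(t, a, b)
print(empirical, analytic)  # the two should agree to about two decimals
```

The heavier-than-exponential tail is the qualitative feature such compound models exploit when fitted against earthquake catalogues.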

2.
The Poisson point-process model is too crude for earthquake locations in space and time: earthquakes tend to cluster at intermediate distances and to repel at large ones. A Poisson point model with variable density can describe the tendency to cluster but does not reveal the periodicity of clusters. The author proposes a point-process model in which the locations of points are determined not by the density of the point distribution but by the density of the distribution of interpoint differences. In this model a latent periodicity is revealed and used for prediction of the point process. In 1983 a point-process prediction was made for the Kuril Islands for 1983–1987, and two indications of danger, in time and in location, were identified; they were subsequently confirmed by strong earthquakes. In 1989 a similar prediction was made for North Armenia. The 1988 Spitak earthquake is clearly visible in the data from previous earthquakes.

3.
A multidimensional version of the time-varying periodogram has been developed. An estimation method based on the multidimensional time-varying periodogram has been applied to a nonstationary multidimensional storm model. This work proposes that the multidimensional time-varying periodogram is capable of estimating nonstationary spectral density functions in space and time.

4.
Recently, a special nonhomogeneous Poisson process known as the Weibull process was proposed by C-H. Ho for fitting historical volcanic eruptions. Revisiting this model, we find that it possesses some undesirable features that make it an unsatisfactory tool in this context. We then consider the entire question of a nonstationary model in the light of the availability and completeness of data. In our view, a nonstationary model is unnecessary and perhaps undesirable. We propose the Weibull renewal process as an alternative to the simple (homogeneous) Poisson process. For a renewal process the interevent times are independent and identically distributed with distribution function F; in the Weibull renewal process, F is the Weibull distribution, which has the exponential as a special case. Testing for a Weibull distribution can be achieved by testing for exponentiality of the data under a simple transformation. Another alternative considered is the lognormal distribution for F. Whereas the homogeneous Poisson process represents purely random (memoryless) occurrences, the lognormal distribution corresponds to periodic behaviour and the Weibull distribution encompasses both periodicity and clustering, which aids in characterizing the volcano. Data from the same volcanoes considered by Ho were reanalyzed, and we found no reason to reject the hypothesis of Weibull interevent times, whereas lognormal interevent times were not supported. Prediction intervals for the next event are compared with Ho's nonhomogeneous model, and the Weibull renewal process appears to produce more plausible results.
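The transformation-based test for Weibull interevent times mentioned above can be sketched as follows (synthetic data; the SciPy routines and the shape/scale values are illustrative assumptions, not the authors' code). If interevent times X follow a Weibull with shape k and scale c, then (X/c)^k is standard exponential, so exponentiality of the transformed data checks the Weibull fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic interevent times from a Weibull renewal process
# (shape < 1 -> clustering, shape > 1 -> quasi-periodic, shape = 1 -> Poisson).
times = rng.weibull(1.5, size=500) * 2.0  # shape 1.5, scale 2.0

# Fit a two-parameter Weibull (location fixed at zero).
shape_hat, loc, scale_hat = stats.weibull_min.fit(times, floc=0)

# Under the fitted model, (x / scale)**shape is standard exponential,
# so a Kolmogorov-Smirnov test against "expon" checks the Weibull fit.
transformed = (times / scale_hat) ** shape_hat
ks_stat, p_value = stats.kstest(transformed, "expon")
print(shape_hat, p_value)  # p > 0.05: no reason to reject Weibull interevent times
```

Note that using fitted parameters inside a KS test makes the p-value conservative; a Lilliefors-style correction would tighten it, but the sketch conveys the idea.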

5.
The EEPAS (“Every Earthquake a Precursor According to Scale”) model is a space–time point-process model based on the precursory scale increase (Ψ) phenomenon and associated predictive scaling relations. It has previously been fitted to the New Zealand earthquake catalogue and applied successfully in quasi-prospective tests on the CNSS catalogue of California, forecasting earthquakes with magnitudes above 5.75, and on the JMA catalogue of Japan for magnitudes above 6.75. Here we test whether the Ψ scaling relations extend to lower magnitudes by applying EEPAS to depth-restricted subsets of the NIED catalogue of the Kanto area, central Japan, for magnitudes above 4.75. As in previous studies, the EEPAS model is found to be more informative than a quasi-static baseline model based on proximity to past earthquakes, and much more informative than the stationary uniform Poisson model. The information it provides is illustrated by maps of earthquake occurrence rate density, covering magnitudes from 5.0 to 8.0, for the central Japan region at the beginning of 2004, using the NIED and JMA catalogues to mid-2003.

6.
Some Bayesian methods for dealing with inaccurate or vague data are introduced in the framework of seismic hazard assessment. Inaccurate data affected by heterogeneous errors are modeled by a probability distribution instead of the usual value-plus-random-error representation; such data are generically called imprecise. The earthquake size and the number of events in a given time are modeled as imprecise data. Imprecise data allow us to introduce into the estimation procedures the uncertainty inherent in the inaccuracy and heterogeneity of the measuring systems from which the data were obtained. The problem of estimating the parameter of a Poisson process is shown to be feasible using Bayesian techniques and imprecise data. This background technique can be applied to the general problem of seismic hazard estimation. Initially, the data in a regional earthquake catalog are assumed imprecise in both size and location (i.e., errors in the epicenter or spreading over a given source). By means of scattered attenuation laws, the regional catalog can be translated into a so-called site catalog of imprecise events. The site catalog is then used to estimate return periods or occurrence probabilities, taking into account all sources of uncertainty. Special attention is paid to the priors in the Bayesian estimation; they can be used to introduce additional information as well as scattered frequency–size laws for local events. A simple example is presented to illustrate the capabilities of this methodology.

7.
A number of high-profile seismic events have occurred in recent years, with wide variation in the resulting economic damage and loss of life. This variation has been attributed in part to the stringency of the seismic building codes implemented in different regions. Using the HAZUS Earthquake Model, a benefit–cost analysis was performed on varying levels of standard building codes for Haiti and Puerto Rico. The methodology computes expected loss assuming a Poisson event process with lognormally distributed event magnitude and idealized damage–magnitude response functions. The event frequency and magnitude distributions are estimated from the historical record, while the damage functions are fit using HAZUS simulation results for events with systematically varying magnitudes and different seismic code levels. To validate the approach, a single-event analysis was conducted using alternative building codes and mean-magnitude earthquakes. A probabilistic analysis was then used to evaluate the long-term expected value of alternative levels of building codes. To account for the relationship between lives saved and economic loss, the implicit cost of saving a life is computed for each code option. It was found that in the two areas studied, the expected loss of life was reduced most by high seismic building code levels, but lower code levels were more cost-effective when considering only building damage and the cost of code implementation. The methodology presented is meant to provide a basic framework for the future development of an economic-behavioral model of code adoption.
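A minimal sketch of the probabilistic loss computation described above, assuming a Poisson event process, lognormal magnitudes and an idealized logistic damage–magnitude function (all parameter values and the two code levels are made-up illustrations, not HAZUS outputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (made-up) hazard parameters: event rate and magnitude law.
rate_per_year = 0.2                       # Poisson rate of damaging events
mu_logM, sigma_logM = np.log(6.0), 0.15   # lognormal magnitude parameters

def damage_fraction(m, code_level):
    """Idealized damage-magnitude response: logistic in magnitude,
    shifted right for stricter building codes (purely illustrative)."""
    threshold = {"low": 6.0, "high": 6.8}[code_level]
    return 1.0 / (1.0 + np.exp(-3.0 * (m - threshold)))

def expected_annual_loss(code_level, exposure=1e9, years=50, n_sims=10_000):
    """Monte Carlo estimate of expected annual loss for one code level."""
    total = 0.0
    for _ in range(n_sims):
        n_events = rng.poisson(rate_per_year * years)
        mags = rng.lognormal(mu_logM, sigma_logM, size=n_events)
        total += exposure * damage_fraction(mags, code_level).sum()
    return total / (n_sims * years)

low = expected_annual_loss("low")
high = expected_annual_loss("high")
print(low, high)  # stricter code -> lower expected annual loss
```

A benefit–cost comparison would then weigh the reduction (low − high) against the annualized cost of implementing the stricter code.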

8.
We report an empirical determination of the probability density functions Pdata(r) of the number r of earthquakes in finite space–time windows for the California catalog, over a range of spatial (5 × 5 to 50 × 50 km2) and temporal (0.1 to 1000 days) windows. The data can be represented by asymptotic power-law tails together with several crossovers, reasonably well explained by one of the most widely used reference models in seismology (ETAS), which assumes that each earthquake can trigger other earthquakes through complex cascades. These results are useful for constraining the physics of earthquakes and for estimating the performance of seismicity forecasting models.

9.
Whether intraplate earthquakes have different average source properties from interplate events has long been debated. It has been proposed that intraplate events tend to rupture smaller areas with higher stress drops than the average interplate earthquake. Here we estimate the rupture lengths of several Brazilian earthquakes by accurately locating their immediate aftershocks. The sparsity of stations in low-seismicity regions such as Brazil hinders accurate epicentral determination, so we use cross-correlation of P, S and Lg waves to locate the aftershocks accurately relative to a reference event. In several cases it was possible to infer the rupture length from the distribution of the early aftershocks, with the later aftershocks tending to span a larger area. We studied six different aftershock sequences using regional stations up to several hundred km away. The mainshock occurs close to the foreshocks, which act as triggers of the main rupture. The immediate aftershocks tend to occur in a ring around a central (presumably stress-free) zone, which we interpret as the rupture of the mainshock. Published data from other events, based mainly on local networks, were added to provide an empirical relationship between rupture length and magnitude. These data suggest that stress drops in Brazil vary mostly between 0.1 and 10 MPa, a range similar to many other studies worldwide. However, the mean stress drop (about 1 MPa) is smaller than the mean values of both interplate and intraplate events globally (mostly between 2 and 10 MPa). A possible dependence of stress drop on hypocentral depth may explain this difference: Brazilian intraplate earthquakes tend to be shallower than those of most other midplate regions, giving rise to smaller stress drops on average. This result has important implications for seismic hazard estimation when ground-motion prediction equations (GMPEs) from other intraplate regions are used in Brazil.
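The step from rupture length and magnitude to stress drop can be illustrated with the standard circular-crack (Eshelby) relation Δσ = (7/16)·M0/r³ together with M0 = 10^(1.5·Mw + 9.1) N·m. The event below is hypothetical, not one of the Brazilian earthquakes studied:

```python
def seismic_moment(mw):
    """Seismic moment (N*m) from moment magnitude: M0 = 10**(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def stress_drop_circular(mw, rupture_length_km):
    """Static stress drop (MPa) for a circular crack of radius = length / 2,
    using Eshelby's relation: delta_sigma = (7/16) * M0 / r**3."""
    r = rupture_length_km * 1e3 / 2.0    # radius in metres
    delta_sigma_pa = (7.0 / 16.0) * seismic_moment(mw) / r ** 3
    return delta_sigma_pa / 1e6          # Pa -> MPa

# Hypothetical example: an Mw 4.5 event with a 1.5 km rupture length.
print(round(stress_drop_circular(4.5, 1.5), 2))  # about 7.3 MPa
```

With the rupture length held fixed, the stress drop grows with moment, which is why accurately mapped aftershock zones constrain the estimate so strongly.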

10.
Parameter estimation has attracted increasing interest over the last few decades across a variety of engineering topics. One may face problems such as (a) how to estimate parameters for which only erroneous measurements are available (direct estimation), or (b) how to estimate the coefficients of a process model governing a geological phenomenon when these coefficients are inaccessible or difficult to access by direct investigation (inverse estimation, or identification). Both problems are examined in this presentation from a modern stochastic viewpoint, in which the parameters sought are interpreted mathematically as random functions, generated and estimated in space or time with the aid of recursive models. The advantages of this methodology over conventional statistics or nonrecursive estimators are remarkable, from both theoretical and physical points of view. In particular, it may offer more accurate estimators, better representation of spatial variation, and a means of overcoming difficulties such as excessive computational time or computer storage. To test the effectiveness of this type of estimation, a series of representative case studies from geotechnical practice has been computed in detail.

11.
In Iran, earthquakes cause enormous damage to people and the economy. With a proper estimate of human losses in an earthquake disaster, the response can be planned appropriately and impacts and losses reduced. Neural networks can be trained to solve problems involving imprecise and highly complex nonlinear data. Given the many possible earthquake scenarios and the diversity of construction types, it is difficult to estimate the number of injured people. Drawing on these capabilities, this paper describes a back-propagation neural network method for modeling and estimating the severity and distribution of human loss as a function of building damage in an earthquake disaster. Data from the 2003 Bam earthquake were used to train the network. The final results demonstrate that this neural network model can provide much more accurate estimates of fatalities and injuries for different earthquakes in Iran, and can supply the information required to develop realistic mitigation policies, especially for rescue operations.

12.
The Weibull process is a parsimoniously parameterized nonhomogeneous Poisson process with monotonic trend, which has been widely used in reliability applications. It has also been used in volcanology to model the process of eruption onsets for a volcano with waning or waxing activity, and thus to produce hazard forecasts. However, particularly in the latter application, missing or spurious data can strongly influence the parameter estimates, which are usually obtained by maximizing the log-likelihood function, and hence the forecast hazard. We show how theory developed for robust estimation of a nonhomogeneous Poisson process can be implemented for the Weibull process. The record of flank eruptions of Mt. Etna, in Sicily, is one of the most complete and best studied records of volcanism; nevertheless, a number of different catalogs exist. We show how these can be at least partially reconciled by robust estimation, and how the more dubious regions of the catalogs can be identified.

13.
Some problems in the study of injection-induced earthquakes
张宝红  邱泽华 《现代地质》1994,8(3):329-333
This paper argues that injection-induced seismicity is quite complex. Regarding injection merely as a triggering factor, or regarding all injection-related earthquakes as tectonic earthquakes, is one-sided. Microseismic studies of the injection process show that injection can directly cause earthquakes. Studies of oilfield seismicity show that the depletion of the subsurface medium caused by large-scale oil extraction can likewise cause earthquakes. Similar situations arise in the extraction of some other underground mineral deposits. Mistakenly treating all injection-related earthquakes as injection-triggered tectonic earthquakes will lead to erroneous estimates of the character of crustal tectonic activity.

14.
We investigated the detailed three-dimensional (3-D) isotropic and anisotropic structure of the crust and upper mantle under the NE Japan forearc region using a large number of P and S wave arrival-time data from onshore and offshore earthquakes. The suboceanic earthquakes used in this study are well relocated using sP depth phases. We also determined the 3-D distribution of Poisson's ratio, crack density and saturation rate using the 3-D P and S wave velocity model obtained in this study. The relatively complex anisotropic structure in the megathrust zone may reflect the complex geological structure, lithological variations and fluids in the accretionary prism under the forearc region. The tomographic images reveal strong lateral heterogeneity in the megathrust zone under the Tohoku forearc. Areas with low velocity, high Poisson's ratio, high crack density and high saturation rate may reflect the entrapment of fluid-filled, unconsolidated sediments on the plate interface close to the Japan Trench. Most of the large megathrust earthquakes since 1900 (M ≥ 6.0) and the large 2011 Tohoku-oki earthquakes (M 6.0–9.0) are located in areas with high velocity, high Poisson's ratio, low crack density and high saturation rate, which may represent strongly coupled asperities in the megathrust zone resulting from subducted oceanic ridges and/or seamounts. The high Poisson's ratio in these areas may indicate that fluids have infiltrated the strongly coupled patches. We think that the great Tohoku-oki earthquakes were caused not only by stress concentration but also by in situ structural heterogeneity in the megathrust zone.
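Poisson's ratio, one of the mapped quantities above, follows directly from the P and S wave speeds via ν = (Vp² − 2Vs²) / (2(Vp² − Vs²)); the velocity values below are illustrative, not tomographic results from the paper:

```python
import numpy as np

def poisson_ratio(vp, vs):
    """Poisson's ratio from P and S wave speeds:
    nu = (vp**2 - 2*vs**2) / (2 * (vp**2 - vs**2))."""
    vp, vs = np.asarray(vp, float), np.asarray(vs, float)
    return (vp ** 2 - 2 * vs ** 2) / (2 * (vp ** 2 - vs ** 2))

# A vp/vs ratio of sqrt(3), a typical crustal value, gives nu = 0.25;
# fluids raise vp/vs and push nu toward 0.3 and above, which is why
# high Poisson's ratio is read as a fluid signature in the megathrust zone.
print(poisson_ratio(np.sqrt(3.0), 1.0))   # should be 0.25
print(round(float(poisson_ratio(6.6, 3.5)), 3))
```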

15.
The Himalayas have experienced varying rates of earthquake occurrence in the past across their seismo-tectonically distinct segments, which may be attributed to different physical processes of stress accumulation and release, and this must be accounted for when working out the seismic hazard. The present paper revisits the various earthquake occurrence models applied to the Himalayas and examines them in the light of recent damaging earthquakes in the Himalayan belt. Because of the discordant seismicity of the Himalayas, three types of region have been considered for estimating longer-return-period events: (1) the North-West Himalayan Fold and Thrust Belt, which is seismically very active; (2) the Garhwal Himalaya, which has never experienced a large earthquake although sufficient stress exists; and (3) the Nepal region, which is seismically very active owing to unlocked rupture and has frequently experienced large earthquakes. The seismicity parameters have been revisited using two earthquake recurrence models, constant seismicity and constant moment release. For the constant moment release model, strain rates derived from the global strain rate model are converted into the seismic moment of earthquake events, considering the geometry of the finite source and assuming the rates are consumed fully by contemporary seismicity. The probability of earthquake occurrence over time has been estimated for each region using both models and compared assuming a Poisson distribution. The results show that the seismicity of the North-West region is relatively low when estimated using the constant seismicity model, which implies either that the accumulated stress is not being released in the form of earthquakes or that the compiled earthquake catalogue is insufficient. A similar trend is observed for the seismic gap area, but with a smaller difference between the two methods. For the Nepal region, however, the estimated seismicity is relatively low when estimated using the constant moment release model, which implies that accumulated strain there is being released in the form of large earthquakes. The partial release in a second event of similar size in May 2015 shows that the physical process releases the accumulated energy through large earthquakes. Had this occurred in another region, such as the seismic gap, the fault might not have released the energy and could be building toward an even bigger event. It is therefore necessary to examine seismicity from strain rates as well, for its due interpretation in predicting the seismic hazard in the various segments of the Himalayas.

16.
Time-independent seismic hazard analysis in Alborz and surrounding area
Bayesian probability estimation has properties that make it well suited to calculating different parameters of seismicity. In general the method is able to combine prior information on seismicity with the statistical uncertainty associated with estimating the parameters used to quantify seismicity, in addition to the probabilistic uncertainty associated with the inherent randomness of earthquake occurrence. In this article a time-independent Bayesian approach, which yields the probability that a certain cut-off magnitude will be exceeded in certain time intervals, is examined for the Alborz region of Iran, in order to assess the consequences for the city of Tehran. This area lies within the Alpine-Himalayan active mountain belt. Many active faults affect the Alborz, most of which are parallel to the range and accommodate the present-day oblique convergence across it. Tehran, the capital of Iran, with millions of inhabitants, is located near the foothills of the southern Central Alborz. The region has been affected several times by historical and recent earthquakes, which confirms the importance of seismic hazard assessment for it. As the first step in this study an updated earthquake catalog is compiled for the Alborz. Then, assuming a Poisson distribution for the number of earthquakes occurring in a given time interval, the probabilistic earthquake occurrence is computed by the Bayesian approach. The highest probabilities are found for zone AA and the lowest for zones KD and CA, while the overall probability is high.
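A minimal sketch of the kind of Bayesian occurrence estimate described above, using the conjugate gamma–Poisson model (the prior, the event count and the catalogue span are synthetic assumptions, not the Alborz data):

```python
# Conjugate gamma-Poisson model for the rate of M >= Mc earthquakes.
# Prior Gamma(shape=a0, rate=b0) and the observed count / catalogue span
# below are illustrative assumptions, not values from the paper.
a0, b0 = 1.0, 10.0           # vague prior: mean rate 0.1 events/year
n_events, t_obs = 12, 100.0  # 12 exceedances in a 100-year catalogue

a_post, b_post = a0 + n_events, b0 + t_obs  # conjugate update

def prob_at_least_one(t_future):
    """Posterior predictive P(N >= 1 in t_future years).

    Integrating the Poisson zero-count probability over the gamma
    posterior gives P(N = 0) = (b / (b + t))**a, hence the complement."""
    return 1.0 - (b_post / (b_post + t_future)) ** a_post

for t in (10, 50, 100):
    print(t, round(prob_at_least_one(t), 3))
```

The closed form shows how the posterior folds both parameter uncertainty and Poisson randomness into a single exceedance probability, which is the combination the abstract emphasizes.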

17.
Öncel  A. O.  Alptekin  Ö. 《Natural Hazards》1999,19(1):1-11
In order to investigate the effect of aftershocks on earthquake hazard estimation, the earthquake hazard parameters (m, b and Mmax) have been estimated by the maximum likelihood method from both the main shocks catalogue and the raw earthquake catalogue for the North Anatolian Fault Zone (NAFZ). The main shocks catalogue was compiled from the raw earthquake catalogue by eliminating aftershocks using the window method. The raw catalogue consists of instrumentally recorded earthquakes between 1900 and 1992 and historical earthquakes that occurred between 1000 and 1900. The Poisson process fits the events of the main shocks catalogue but not those of the raw catalogue. The paper demonstrates the differences in hazard outputs when the main shocks catalogue is used on the one hand and the raw catalogue on the other. The maximum likelihood method, which allows the use of a mixed earthquake catalogue containing incomplete (historical) and complete (instrumental) data, is used to determine the hazard parameters. The maximum regional magnitude (Mmax), the seismic activity rate (m), the mean return period (R) and the b value of the magnitude–frequency relation have been estimated for the 24°–31° E, 31°–41° E and 41°–45° E sections of the North Anatolian Fault Zone from both catalogues. Our results indicate that inclusion of aftershocks changes the b value and the seismic activity rate m, depending on the proportion of aftershocks in a region, while it does not significantly affect the maximum regional magnitude, since that is tied to the maximum observed magnitude. These changes in the hazard parameters cause the return periods to be over- and underestimated for smaller and larger events, respectively.
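The maximum-likelihood b-value estimation mentioned above is commonly done with the Aki/Utsu formula b = log10(e) / (M̄ − (Mc − ΔM/2)); the sketch below applies it to a synthetic Gutenberg–Richter catalogue (not the NAFZ data):

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_c,
    with the usual half-bin correction for binned magnitudes."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalogue with true b = 1.0: continuous
# magnitudes above the bin edge m_c - dm/2, then binned to 0.1 units.
rng = np.random.default_rng(7)
m_c, dm, b_true = 3.0, 0.1, 1.0
cont = (m_c - dm / 2) + rng.exponential(1 / (b_true * np.log(10)), size=5000)
mags = np.round(cont, 1)
print(round(b_value_mle(mags, m_c), 2))  # close to the true value 1.0
```

Running the same estimator on a declustered and a raw catalogue is exactly the comparison the paper makes: aftershock-rich samples shift M̄ and hence b.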

18.
In order to identify whether observed seismic signals are generated by an underground nuclear explosion or an earthquake, it is desirable to rely on one efficient identifier that provides a reasonably good clue in an unambiguous way. Although it is generally accepted that multi-station, multi-parameter discrimination can separate explosions from earthquakes, cases do arise where the signal characteristics cannot be established distinctly and satisfactorily. For such “difficult” cases, which involve some ambiguity in deducing the nature of the source from single-station seismograms, this paper shows that a reliable estimate of source depth proves extremely useful. Of the eleven typical examples of “not-easy-to-discriminate” events recorded at the Gauribidanur short-period seismic array in Southern India, seven could be successfully identified as earthquakes and the remaining four as probable underground explosions on the basis of focal-depth estimates from multi-station data.

19.
20.
Use of tsunami waveforms for earthquake source study
Tsunami waveforms recorded on tide gauges, like seismic waves recorded on seismograms, can be used to study earthquake source processes. Tsunami propagation can be evaluated accurately, since bathymetry is much better known than the seismic velocity structure of the Earth. Using waveform inversion techniques, we can estimate the spatial distribution of coseismic slip on the fault plane from tsunami waveforms. This method has been applied to several earthquakes around Japan. Two recent earthquakes, the 1968 Tokachi-oki and 1983 Japan Sea earthquakes, were examined for calibration purposes; both show nonuniform slip distributions very similar to those obtained from seismic wave analyses. The use of tsunami waveforms is most valuable for the study of unusual or old earthquakes. The 1984 Torishima earthquake caused unusually large tsunamis for its size; waveform modeling shows that part of the abnormal size of this tsunami is due to a propagation effect along the shallow ridge system. For old earthquakes, many tide gauge records exist with quality comparable to modern records, while good-quality seismic records are few. The 1944 Tonankai and 1946 Nankaido earthquakes are examined as examples of old events, and slip distributions are obtained; such estimates are possible only with tsunami records. Since tide gauge records are available as far back as the 1850s, their use will provide unique and important information on long-term global seismicity.
