Similar Documents
20 similar documents found (search time: 853 ms)
1.
Postglacial rebound and fault instability in Fennoscandia
The best available rebound model is used to investigate the role that postglacial rebound plays in triggering seismicity in Fennoscandia. The salient features of the model include tectonic stress due to spreading at the North Atlantic Ridge, overburden pressure, gravitationally self-consistent ocean loading, and the realistic deglaciation history and compressible earth model which best fits the sea-level and ice data in Fennoscandia. The model predicts the spatio-temporal evolution of the state of stress, the magnitude of fault instability, the timing of the onset of this instability, and the mode of failure of lateglacial and postglacial seismicity. The consistency of the predictions with the observations suggests that postglacial rebound is probably the cause of the large postglacial thrust faults observed in Fennoscandia. The model also predicts a uniform stress field and instability in central Fennoscandia for the present, with thrust faulting as the predicted mode of failure. However, the lack of spatial correlation of the present seismicity with the region of uplift, and the existence of strike-slip and normal modes of current seismicity are inconsistent with this model. Further unmodelled factors such as the presence of high-angle faults in the central region of uplift along the Baltic coast would be required in order to explain the pattern of seismicity today in terms of postglacial rebound stress. The sensitivity of the model predictions to the effects of compressibility, tectonic stress, viscosity and ice model is also investigated. For sites outside the ice margin, it is found that the mode of failure is sensitive to the presence of tectonic stress and that the onset timing is also dependent on compressibility. For sites within the ice margin, the effect of Earth rheology is shown to be small. However, ice load history is shown to have larger effects on the onset time of earthquakes and the magnitude of fault instability.

2.
This paper compares two land change models in terms of appropriateness for various applications and predictive power. Cellular Automata Markov (CA_Markov) and Geomod are the two models, which have similar options to allow for specification of the predicted quantity and location of land categories. The most important structural difference is that CA_Markov has the ability to predict any transition among any number of categories, while Geomod predicts only a one‐way transition from one category to one alternative category.

To assess the predictive power, each model is run several times to predict land change in central Massachusetts, USA. The models are calibrated with information from 1971 to 1985, and then the models predict the change from 1985 to 1999. The method to measure the predictive power: 1) separates the calibration process from the validation process, 2) assesses the accuracy at multiple resolutions, and 3) compares the predictive model vis‐à‐vis a null model that predicts pure persistence. Among 24 model runs, the predictive models are more accurate than the null model at resolutions coarser than two kilometres, but not at resolutions finer than one kilometre. The choice of the options accounts for more variation in accuracy of runs than the choice of the model per se. The most accurate model runs are those that did not use spatial contiguity explicitly. For this particular study area, the added complexity of CA_Markov is of no benefit.

3.
Hydro power schemes operating in a free electricity market seek to maximize profits by varying generation rates to take best advantage of fluctuating selling prices, subject to the constraints of keeping storage lakes within their operational bounds and avoiding spillage losses. Various computer algorithms can be used in place of manual scheme operation to aid this maximization process, so it is desirable to quantify any profit gained from a given algorithm. A standard approach involves applying the algorithm to a period of past river flow records to see how much additional scheme income might have been obtained. This process requires the use of a hydro power scheme model, which inevitably can only approximate operational details, so the anticipated income gains are likely to be biased estimates of actual income gained from implementation of the algorithm. In addition to preliminary algorithm evaluation, it is desirable that hydro scheme managers have methodology to confirm anticipated income gain. Such confirmation can be difficult because true income gains are typically of the order of a few per cent and may not be easily distinguishable from background noise. We develop an approach that allows estimation of true income gain for the situation where a change is made from manual to computer control of hydro power scheme operations, or upgrading from one maximization algorithm to another. The method uses a regression model to describe the former period of scheme operation. Post-implementation residuals from the regression predictions then provide estimates of actual income gain. The method can be sensitive to small but consistent income gains. Also, there is no requirement to construct any hydro scheme simulation model, so bias effects should be considerably reduced. The approach was developed in the context of evaluating an income-maximization algorithm applied to a small hydro power scheme in the Kaimai Ranges of New Zealand. However, the methodology seems sufficiently simple and general to be applicable, with modification, to other power schemes moving toward increasing income through operational changes.
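As a rough illustration of the residual-based approach, the sketch below fits a regression to a pre-change period of hypothetical income, inflow and price data and treats the post-implementation residuals as estimates of the income gain; the variable names and the linear functional form are illustrative assumptions, not the paper's model.

```python
# A minimal sketch of the residual-based gain estimate: fit a regression describing the
# former (manual-operation) period, then average the post-implementation residuals.
# Column names (income, inflow, price) and the linear form are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

def estimate_income_gain(pre, post):
    """pre/post: dicts of 1-D numpy arrays with keys 'income', 'inflow', 'price'."""
    X_pre = sm.add_constant(np.column_stack([pre["inflow"], pre["price"]]))
    model = sm.OLS(pre["income"], X_pre).fit()          # describes the pre-change era

    X_post = sm.add_constant(np.column_stack([post["inflow"], post["price"]]))
    residuals = post["income"] - model.predict(X_post)  # post-implementation residuals
    gain = residuals.mean()                             # average extra income per period
    se = residuals.std(ddof=1) / np.sqrt(len(residuals))
    return gain, se

# Synthetic example: a +3.0 income shift is injected into the post period.
rng = np.random.default_rng(0)
pre = {"inflow": rng.gamma(5, 10, 200), "price": rng.normal(60, 8, 200)}
pre["income"] = 2.0 * pre["inflow"] + 1.5 * pre["price"] + rng.normal(0, 5, 200)
post = {"inflow": rng.gamma(5, 10, 100), "price": rng.normal(60, 8, 100)}
post["income"] = 2.0 * post["inflow"] + 1.5 * post["price"] + 3.0 + rng.normal(0, 5, 100)
print(estimate_income_gain(pre, post))  # the gain estimate should recover roughly +3.0
```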

4.
For applications in animal movement, we propose a random trajectory generator (RTG) algorithm that combines the concepts of random walks, space-time prisms, and the Brownian bridge movement model and is capable of efficiently generating random trajectories between a given origin and a destination point, with the least directional bias possible. Since we provide both a planar and a spherical version of the algorithm, it is suitable for simulating trajectories ranging from the local scale up to the (inter-)continental scale, as exemplified by the movement of migrating birds. The algorithm accounts for physical limitations, including maximum speed and maximum movement time, and provides the user with either single or multiple trajectories as a result. Single trajectories generated by the RTG algorithm can be used as a null model to test hypotheses about movement stimuli, while the multiple trajectories can be used to create a probability density surface akin to Brownian bridges.
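The sketch below is a minimal planar illustration in the spirit of the RTG idea: a Brownian-bridge-like walk from an origin to a destination whose individual steps are rejected if they exceed a maximum speed. It is not the authors' algorithm, and the parameters (n_steps, sigma, max_speed) are illustrative.

```python
# A minimal planar sketch: a Brownian-bridge-style trajectory between origin and
# destination with a maximum-speed constraint enforced by rejection sampling.
# Assumes max_speed comfortably exceeds the mean speed needed to reach the target.
import numpy as np

def random_trajectory(origin, destination, total_time, n_steps, max_speed, sigma, rng):
    origin, destination = np.asarray(origin, float), np.asarray(destination, float)
    dt = total_time / n_steps
    pos, path = origin.copy(), [origin.copy()]
    for k in range(1, n_steps + 1):
        time_left = total_time - (k - 1) * dt
        drift = (destination - pos) / time_left          # pulls the walk toward the target
        while True:
            step = drift * dt + rng.normal(0.0, sigma * np.sqrt(dt), size=2)
            if np.linalg.norm(step) / dt <= max_speed:   # enforce the physical speed limit
                break
        pos = pos + step
        path.append(pos.copy())
    return np.array(path)

rng = np.random.default_rng(42)
path = random_trajectory((0, 0), (100, 50), total_time=10.0, n_steps=200,
                         max_speed=60.0, sigma=5.0, rng=rng)
print(path[0], path[-1])   # starts at the origin, ends at (or very near) the destination
```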

5.
This paper proposes a cellular automata model of urban expansion based on random forests (RF-CA). By introducing random factors into both the training sample set and the candidate spatial variables at the splitting nodes during the construction of multiple decision trees, the model extracts the transition rules of the urban-expansion cellular automaton. The model is easy to build in parallel, improves prediction accuracy without a significant increase in computational cost, and is highly tolerant of the stochastic factors present in urban expansion. The RF-CA model supports out-of-bag error estimation, so model parameters can be obtained quickly, and it can measure the importance of spatial variables to explain the role each plays in urban expansion. The model was applied to simulate the urban expansion of Foshan from 1988 to 2012. The results show that, compared with the commonly used logistic regression model, the RF-CA model improves simulation and prediction accuracy by 1.7% and 2.6%, respectively, making it well suited to modelling the evolution and expansion of urban systems with complex nonlinear characteristics. Measuring the importance of the spatial variables affecting urban expansion in Foshan shows that distance to national roads and distance to the city centre play the most important roles in simulating Foshan's urban expansion.
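A minimal sketch of the RF-CA idea follows: a random forest is trained on synthetic spatial driving variables to predict the probability that a cell converts to urban, the out-of-bag score provides the error estimate mentioned above, and the feature importances rank the spatial variables. The variable names, data and threshold are illustrative, not those of the Foshan study.

```python
# A minimal sketch of the RF-CA idea: a random forest learns the probability that a
# non-urban cell converts to urban from spatial driving variables, and that probability
# feeds a cellular-automaton update. All data and parameter values are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_cells = 5000
# Hypothetical spatial variables: distance to national road, distance to city centre, slope.
X = np.column_stack([rng.gamma(2, 3, n_cells),     # dist_national_road (km)
                     rng.gamma(2, 5, n_cells),     # dist_city_centre (km)
                     rng.uniform(0, 25, n_cells)]) # slope (degrees)
# Observed conversion label (1 = became urban between t1 and t2), synthetic for the sketch.
y = (rng.random(n_cells) < 1.0 / (1.0 + np.exp(0.3 * X[:, 0] + 0.2 * X[:, 1] - 3))).astype(int)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)              # the out-of-bag error estimate mentioned above
print("variable importances:", rf.feature_importances_)

# CA step: convert cells whose predicted conversion probability (in a full model, combined
# with neighbourhood and stochastic terms) exceeds a threshold.
p_convert = rf.predict_proba(X)[:, 1]
new_urban = p_convert > 0.8
print("cells converted this step:", int(new_urban.sum()))
```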

6.
Ecological optima and tolerances with respect to autumn pH were estimated for 63 diatom taxa in 47 Finnish lakes. The methods used were weighted averaging (WA), least squares (LS) and maximum likelihood (ML), the two latter methods assuming the Gaussian response model. WA produces optimum estimates which are necessarily within the observed lake pH range, whereas there is no such restriction in ML and LS. When the most extreme estimates of ML and LS were excluded, a reasonably close agreement among the results of different estimation methods was observed. When the species with unrealistic optima were excluded, the tolerance estimates were also rather similar, although the ML estimates were systematically greater. The parameter estimates were used to predict the autumn pH of 34 other lakes by weighted averaging. The ML and LS estimates including the extreme optima produced inferior predictions. A good prediction was obtained, however, when prediction with these estimates was additionally scaled with inverse squared tolerances, or when the extreme values were removed (censored). Tolerance downweighting was perhaps more efficient, and when it was used, no additional improvement was gained by censoring. The WA estimates produced good predictions without any manipulations, but these predictions tended to be biased towards the centroid of the observed range of pH values. At best, the average bias in prediction, as measured by mean difference between predicted and observed pH, was 0.082 pH units and the standard deviation of the differences, measuring the average random prediction error, was 0.256 pH units.
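For reference, the following sketch shows the standard weighted-averaging estimates of species optima and tolerances, and a WA reconstruction of lake pH with optional inverse-squared-tolerance downweighting, applied to synthetic data; the original study's data handling (including any deshrinking of the biased predictions mentioned above) is not reproduced.

```python
# A minimal sketch of weighted averaging (WA): species pH optima and tolerances from a
# calibration set, and a tolerance-downweighted WA reconstruction of lake pH.
# The synthetic abundances stand in for real diatom counts.
import numpy as np

def wa_optima_tolerances(abundance, ph):
    """abundance: (n_lakes, n_taxa) array; ph: (n_lakes,) observed autumn pH."""
    w = abundance.sum(axis=0)
    optima = (abundance * ph[:, None]).sum(axis=0) / w
    tol2 = (abundance * (ph[:, None] - optima) ** 2).sum(axis=0) / w
    return optima, np.sqrt(tol2)

def wa_predict(abundance, optima, tolerances, downweight=True):
    """WA prediction of pH for new lakes, optionally weighting by 1/tolerance^2."""
    w = abundance / tolerances ** 2 if downweight else abundance
    return (w * optima).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
ph = rng.uniform(4.5, 7.5, 47)
true_opt = rng.uniform(4.5, 7.5, 63)
abund = np.exp(-((ph[:, None] - true_opt[None, :]) ** 2) / 0.5) + 0.01 * rng.random((47, 63))
opt, tol = wa_optima_tolerances(abund, ph)
print(np.round(wa_predict(abund[:5], opt, tol), 2))   # reconstructed pH
print(np.round(ph[:5], 2))                            # observed pH for comparison
```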

7.
The Loma Prieta earthquake (magnitude 7.0), which occurred in October 1989 in central California, was preceded by a period during which the mean magnitude of background seismicity in a small region near the eventual epicentre was abnormally low. This period may have begun as early as 1979, and it continued until mid-1988, after which the mean magnitude increased to a higher than normal value until the main earthquake. These changes were observed in the seismicity of an area 40 km in radius, centred on the Loma Prieta epicentre, and are consistent with the predictions of fracture mechanics studies. The 1988 change correlates with a reported change in long-term strain.
A procedure has been developed for resolving such temporal changes in seismicity using CUSUM statistics. It demonstrates that the anomaly was highly significant, on the basis of analyses of two independent catalogues. There was also a significant anomaly before the 1994 Northridge earthquake.
The hypothesis that large earthquakes are preceded by periods in which the mean magnitude of background activity is abnormally low, in the immediate vicinity of the eventual epicentre, is a tantalizing one. The analysis tool examined here may be useful for resolving such changes. Care needs to be taken, however, in routine surveillance of earthquake populations that contain large aftershock sequences.
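A generic CUSUM of magnitude deviations, of the kind that could be used to resolve such changes, is sketched below on a synthetic catalogue; the exact statistic and significance test used in the paper may differ.

```python
# A generic CUSUM sketch for spotting periods of abnormally low mean magnitude in an
# earthquake catalogue; the synthetic catalogue and the simple statistic are illustrative.
import numpy as np

def cusum(magnitudes):
    """Cumulative sum of deviations from the catalogue mean magnitude.

    A persistently decreasing trace indicates a period of below-average magnitudes;
    a change in slope marks the onset or end of the anomaly.
    """
    m = np.asarray(magnitudes, float)
    return np.cumsum(m - m.mean())

rng = np.random.default_rng(7)
# Synthetic background catalogue: the mean magnitude drops by 0.2 units in the middle third.
mags = np.concatenate([rng.normal(3.0, 0.4, 300),
                       rng.normal(2.8, 0.4, 300),
                       rng.normal(3.0, 0.4, 300)])
trace = cusum(mags)
print("minimum of CUSUM trace at event index:", int(trace.argmin()))  # ~600, end of the low period
```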

8.
The time-dependence of earthquake occurrence is mostly ignored in standard seismic hazard assessment even though earthquake clustering is well known. In this work, we attempt to quantify the impact of more realistic dynamics on the seismic hazard estimations. We include the time and space dependences between earthquakes into the hazard analysis via Monte Carlo simulations. Our target region is the Lower Rhine Embayment, a low seismicity area in Germany. Including aftershock sequences by using the epidemic type aftershock-sequence (ETAS) model, we find that on average the hypothesis of uncorrelated random earthquake activity underestimates the hazard by 5–10 per cent. Furthermore, we show that aftershock activity of past large earthquakes can locally increase the hazard even centuries later. We also analyse the impact of the so-called long-term behaviour, assuming a quasi-periodic occurrence of main events on a major fault in that region. We find that a significant impact on hazard is only expected for the special case of a very regular recurrence of the main shocks.

9.
Selecting the set of candidate viewpoints (CVs) is one of the most important procedures in multiple viewshed analysis. However, the quantity of CVs remains excessive even when only terrain feature points are selected. Here we propose the Region Partitioning for Filtering (RPF) algorithm, which uses a region partitioning method to filter CVs of a multiple viewshed. The region partitioning method is used to decompose an entire area into several regions. The quality of CVs can be evaluated by summarizing the viewshed area of each CV in each region. First, the RPF algorithm apportions each CV to a region wherein the CV has a larger viewshed than that in other regions. Then, CVs with relatively small viewshed areas are removed from their original regions or reassigned to another region in each iterative step. In this way, a set of high-quality CVs can be preserved, and the size of the preserved CVs can be controlled by the RPF algorithm. To evaluate the computational efficiency of the RPF algorithm, its performance was compared with simple random (SR), simulated annealing (SA), and ant colony optimization (ACO) algorithms. Experimental results indicate that the RPF algorithm provides more than a 20% improvement over the SR algorithm, and that, on average, the computation time of the RPF algorithm is 63% that of the ACO algorithm.

10.
In order to understand the underlying physics of distributed seismicity better we have considered a 2-D array of slider blocks connected by springs and interacting via static friction with a surface. There is no driving plate in this model. The time evolution of the system is found from numerical simulations in a cellular automata formulation. Energy is conserved and is the single control parameter. The distribution of energies in the springs is found to obey a modified Maxwell-Boltzmann statistics. It is found that the number-size statistics of clusters of unstable sliding blocks are identical to those of percolation clusters in the site-to-site percolation model. There is a well-defined critical point when unstable blocks become connected across the array. It has been previously suggested that distributed seismicity in a seismic zone is the percolation backbone of a 3-D percolation cluster. The fact that low-level seismicity satisfies the Gutenberg-Richter frequency-magnitude relation and is nearly constant in time also suggests that this background seismicity is similar to thermally induced noise.

11.
Note on rain-triggered earthquakes and their dependence on karst geology
Recently reported rain-triggered seismicity from three separate storms occurred exclusively in karst geology. In this paper, I discuss how the hydrogeology of karst controls rain-triggered seismicity by channeling watershed runoff after intense rainfall directly into the karst network. Such channeling results in very large increases in hydraulic head and, more importantly, substantially increases the vertical stress acting on the underlying poroelastic medium. Rapid loading of a poroelastic medium induces seismicity by increasing pore pressure at depth in a manner similar to that observed from reservoir impounding. Using a simple 1-D model of a poroelastic medium, it is shown that the instantaneous fluid pressure increase at depth is a substantial fraction of the pressure step applied at the boundary, followed by time-dependent pore pressure increases associated with the typical linear diffusion problem. These results have implications for the change in fluid pressure necessary to trigger earthquakes, and lead to the following hypothesis to be tested: unambiguous rain-triggered seismicity will only occur in karst regions.
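A minimal 1-D sketch of this loading problem is given below, assuming an instantaneous undrained pressure rise equal to a fraction gamma of the stress step everywhere at depth, a boundary held at the full step, and linear diffusion with diffusivity c; the parameter values are illustrative, not the paper's.

```python
# A minimal 1-D sketch of the loading problem described above: an instantaneous
# (undrained) pore-pressure rise of gamma*dsigma at depth, followed by diffusion from a
# boundary held at the full stress step dsigma. gamma (loading efficiency) and c
# (hydraulic diffusivity) are assumed illustrative values.
import numpy as np
from scipy.special import erfc

def pore_pressure(z, t, dsigma=1.0e5, gamma=0.6, c=1.0):
    """Pore pressure (Pa) at depth z (m) and time t (s) after a surface stress step.

    Solution of the linear diffusion equation with initial condition gamma*dsigma and
    boundary condition dsigma at z = 0.
    """
    return gamma * dsigma + (1.0 - gamma) * dsigma * erfc(z / (2.0 * np.sqrt(c * t)))

z = np.array([100.0, 500.0, 2000.0])
for t in (3600.0, 86400.0, 30 * 86400.0):                 # 1 hour, 1 day, 30 days
    print(t, np.round(pore_pressure(z, t) / 1e5, 3))      # in units of the applied step
```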

12.
Summary. In this paper, we present a matrix form of Backus' theory of linear inference with multiple predictions. The Bayesian approach used by Backus allows the treatment of erroneous data and the imposition of the essential a priori bound on the model norm. We introduce a modification which involves translating the problem in accordance with an estimated model. Such a model may be known a priori or it may be constructed from the data. We are effectively able to bound the norm of all acceptable models from above and below and this results in more confining estimates of the predictions than provided by just an upper bound. In addition, the model construction approach allows us to make maximum use of the data in the inference computation. Our algorithm is robust and efficient, and estimates comparable to those obtained from linear programming techniques have been achieved.

13.
Seismic imaging of the laterally varying D" region beneath the Cocos Plate
We use an axisymmetric, spherical Earth finite difference algorithm to model SH-wave propagation through cross-sections of laterally varying lower mantle models beneath the Cocos Plate derived from recent data analyses. Synthetic seismograms with dominant periods as short as 4 s are computed for several models: (1) a D" reflector 264 km above the core–mantle boundary with laterally varying S-wave velocity increases of 0.9–2.6 per cent, based on localized structures from a 1-D double-array stacking method; (2) an undulating D" reflector with large topography and uniform velocity increase obtained using a 3-D migration method; and (3) cross-sections through the 3-D mantle S-wave velocity tomography model TXBW. We apply double-array stacking to assess model predictions of data. Of the models explored, the S-wave tomography model TXBW displays the best overall agreement with data. The undulating reflector produces a double Scd arrival that may be useful in future studies for distinguishing between D" volumetric heterogeneity and D" discontinuity topography. Synthetics for the laterally varying models show waveform variability not observed in 1-D model predictions. It is challenging to predict 3-D structure based on localized 1-D models when lateral structural variations are on the order of a few wavelengths of the energy used, particularly for the grazing geometry of our data. Iterative approaches of computing synthetic seismograms and adjusting model characteristics by considering path integral effects are necessary to accurately model fine-scale D" structure.

14.
A systematic test of time-to-failure analysis
Time-to-failure analysis is a technique for predicting earthquakes in which a failure function is fit to a time-series of accumulated Benioff strain. Benioff strain is computed from regional seismicity in areas that may produce a large earthquake. We have tested the technique by fitting two functions, a power law proposed by Bufe & Varnes (1993) and a log-periodic function proposed by Sornette & Sammis (1995). We compared predictions from the two time-to-failure models to observed activity and to predicted levels of activity based upon the Poisson model. Likelihood ratios show that the most successful model is Poisson, with the simple Poisson model four times as likely to be correct as the best time-to-failure model. The best time-to-failure model is a blend of 90 per cent Poisson and 10 per cent log-periodic predictions. We tested the accuracy of the error estimates produced by the standard least-squares fitter and found greater accuracy for fits of the simple power law than for fits of the more complicated log-periodic function. The least-squares fitter underestimates the true error in time-to-failure functions because the error estimates are based upon linearized versions of the functions being fitted.
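The power-law fit can be sketched as follows: the Bufe & Varnes failure function is fitted to a synthetic accelerating Benioff-strain series with nonlinear least squares, and the linearized parameter errors discussed above are read from the covariance matrix. The starting values and data are illustrative, not the study's.

```python
# A minimal sketch of the power-law time-to-failure fit (Bufe & Varnes 1993):
# cumulative Benioff strain eps(t) = A + B*(tf - t)**m, fitted by nonlinear least
# squares to a synthetic accelerating sequence.
import numpy as np
from scipy.optimize import curve_fit

def failure_func(t, A, B, tf, m):
    # Clip (tf - t) at a small positive value so trial parameters with tf < t stay finite.
    return A + B * np.maximum(tf - t, 1e-9) ** m

# Synthetic cumulative Benioff strain accelerating toward a "failure" at t = 10.0
rng = np.random.default_rng(11)
t = np.linspace(0.0, 9.5, 80)
eps = failure_func(t, A=50.0, B=-10.0, tf=10.0, m=0.3) + rng.normal(0, 0.2, t.size)

p0 = [eps[-1], -5.0, t[-1] + 1.0, 0.5]           # rough starting values; fits are sensitive to these
popt, pcov = curve_fit(failure_func, t, eps, p0=p0, maxfev=20000)
perr = np.sqrt(np.diag(pcov))                    # linearized error estimates (see caveat above)
print("estimated failure time tf = %.2f +/- %.2f" % (popt[2], perr[2]))
```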

15.
Summary. A method is presented for processing three-component digital recordings of micro-earthquakes to obtain near-vertical reflection profiles in regions of shallow seismicity. The processing includes magnitude and focal-depth normalization and event stacking, where stacking is by small localized groups, with ray theoretical time and distance corrections applied to compensate for varying focal depths. In areas with high seismicity, this procedure allows earthquakes to be treated as "controlled" sources to probe layered structures of the deep crust and upper mantle. The validity of our approach is demonstrated using S-waves from aftershocks of the Borah Peak, Idaho, earthquake (Ms = 7.3) of 1983.

16.
Many cities in the United States and Canada offer a 311 helpline to their residents for submitting requests for non-emergency municipal services. By dialing 311, urban residents can report a range of public issues that require governmental attention, including potholes, graffiti, sanitation complaints, and tree debris. The demand for these municipal services fluctuates greatly with time and location, which poses multiple challenges to effective deployment of limited resources. To address these challenges, this study uses a locally adaptive space-time kernel approach to model 311 requests as an inhomogeneous Poisson process and presents an analytical framework to generate predictions of 311 demand in space and time. The predictions can be used to optimally allocate resources and staff, reduce response time, and allow long-term dynamic planning. We use a bivariate spatial kernel to identify the spatial structure and weigh each kernel by corresponding past observations to capture the temporal dynamics. Short-term serial dependency and weekly periodicity are modeled through the temporal weights, which are adaptive to local community areas. We also transform the computation-intensive parameter estimation procedure into a low-dimensional optimization problem by fitting to the autocorrelation function of historical requests. The presented method is demonstrated and validated with sanitation service requests in Chicago. The results indicate that it performs better than common industry practice and conventional spatial models with a comparable computational cost.

17.
Seismic hazard estimations are compared using two approaches based on two different seismicity models: one which models earthquake recurrence by applying the truncated Gutenberg-Richter law and a second one which smoothes the epicentre locations of past events according to the fractal distribution of earthquakes in space (Woo 1996). The first method requires the definition of homogeneous source zones and the determination of maximum possible magnitudes, whereas the second method requires the definition of a smoothing function. Our results show that the two approaches lead to similar hazard estimates in low seismicity regions. In regions of increased seismic activity, on the other hand, the smoothing approach yields systematically lower estimates than the zoning method. This epicentre-smoothing approach can thus be considered as a lower bound estimator for seismic hazard and can help in decision making in moderate seismicity regions where source zone definition and estimation of maximum possible magnitudes can lead to a wide variety of estimates due to lack of knowledge. The two approaches lead, however, to very different earthquake scenarios. Disaggregation studies at a representative number of sites show that although the distributions of contributions according to source–site distance are comparable between the two approaches, the distributions of contributions according to magnitude differ, reflecting the very different seismicity models used. The epicentre-smoothing method leads to scenarios dominated by intermediate-magnitude events (5 ≤ M ≤ 5.5), while the zoning method leads to scenarios with magnitudes that increase with the return period from the minimum to the maximum magnitudes considered. These trends demonstrate that the seismicity model used plays a fundamental role in determining the controlling scenarios, and finding ways to discriminate between the most appropriate models remains an important issue.
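For concreteness, the doubly truncated Gutenberg-Richter recurrence used by the zoning approach can be expressed as an annual rate of events at or above magnitude m; the sketch below uses illustrative a- and b-values and magnitude bounds, not those of the study region.

```python
# A minimal sketch of the truncated Gutenberg-Richter recurrence model used by the
# zoning approach: annual rate of events with magnitude >= m, truncated at m_max.
# The a/b values and magnitude bounds are illustrative.
import numpy as np

def truncated_gr_rate(m, a=3.0, b=1.0, m_min=4.0, m_max=6.5):
    """Annual rate of earthquakes with magnitude >= m (doubly truncated G-R)."""
    beta = b * np.log(10.0)
    lam_min = 10.0 ** (a - b * m_min)                      # rate of events >= m_min
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return lam_min * np.clip(num, 0.0, None) / den

for m in (4.0, 5.0, 6.0, 6.5):
    print(m, round(float(truncated_gr_rate(m)), 5))        # the rate falls to zero at m_max
```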

18.
A TEST OF SIGNIFICANCE FOR PARTIAL LEAST SQUARES REGRESSION
Partial least squares (PLS) regression is a commonly used statistical technique for performing multivariate calibration, especially in situations where there are more variables than samples. Choosing the number of factors to include in a model is a decision that all users of PLS must make, but is complicated by the large number of empirical tests available. In most instances predictive ability is the most desired property of a PLS model and so interest has centred on making this choice based on an internal validation process. A popular approach is the calculation of a cross-validated r² to gauge how much variance in the dependent variable can be explained from leave-one-out predictions. Using Monte Carlo simulations for different sizes of data set, the influence of chance effects on the cross-validation process is investigated. The results are presented as tables of critical values which are compared against the values of cross-validated r² obtained from the user's own data set. This gives a formal test for the predictive ability of a PLS model with a given number of dimensions.
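The cross-validated r² (often written q²) whose chance behaviour the paper tabulates can be computed as sketched below, here with scikit-learn's PLS implementation rather than the authors' code; on random data it should hover near or below zero, which is why an observed value needs to be compared against critical-value tables before being trusted.

```python
# A minimal sketch of the leave-one-out cross-validated r^2 (q^2) for a PLS model with a
# given number of components, evaluated on random data as a chance-effect illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def cv_r2(X, y, n_components):
    """Leave-one-out cross-validated r^2 (1 - PRESS / total sum of squares)."""
    press = 0.0
    for train, test in LeaveOneOut().split(X):
        pls = PLSRegression(n_components=n_components)
        pls.fit(X[train], y[train])
        pred = pls.predict(X[test]).ravel()[0]
        press += (y[test][0] - pred) ** 2
    return 1.0 - press / float(((y - y.mean()) ** 2).sum())

# Random data with more variables than samples, as in the calibration setting above.
rng = np.random.default_rng(5)
X, y = rng.normal(size=(20, 50)), rng.normal(size=20)
print([round(cv_r2(X, y, k), 3) for k in (1, 2, 3)])   # typically near or below zero
```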

19.
Predictive vegetation modeling can be used to statistically relate the distribution of vegetation across a landscape to important environmental variables. Often these models are developed without considering the spatial pattern that is inherent in biogeographical data, resulting from either biotic processes or missing or misspecified environmental variables. Including spatial dependence explicitly in a predictive model can be an efficient way to improve model accuracy with the available data. In this study, model residuals were interpolated and added to model predictions, and the resulting prediction accuracies were assessed. Adding kriged residuals improved model accuracy more often than adding simulated residuals, although some alliances showed no improvement or worse accuracy when residuals were added. In general, the alliances whose prediction accuracy was not increased by adding kriged residuals were either rare in the sample or already had high nonspatial model accuracy. Regression interpolation methods can be an important addition to the current tools used in predictive vegetation models, as they allow observations that are predicted well by environmental variables to be left alone, while adjusting over‐ and underpredicted observations based on local factors.
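The residual-interpolation idea can be sketched on synthetic data as follows, with inverse-distance weighting standing in for the kriging used in the study: a non-spatial environmental model is fitted, its residuals at training sites are interpolated to validation sites, and the interpolated residuals are added back to the model predictions.

```python
# A minimal sketch of residual interpolation: fit an environmental model, interpolate its
# residuals spatially, and add them back to the predictions. Inverse-distance weighting
# stands in for kriging; all data and variable names are synthetic and illustrative.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of values to the query locations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(9)
xy = rng.uniform(0, 100, size=(200, 2))               # plot locations
elev = 0.5 * xy[:, 0] + rng.normal(0, 2, 200)         # hypothetical environmental predictor
spatial = np.sin(xy[:, 1] / 15.0)                     # smooth spatial structure the predictor misses
y = 0.1 * elev + spatial + rng.normal(0, 0.1, 200)    # observed response

train, test = np.arange(150), np.arange(150, 200)
beta = np.polyfit(elev[train], y[train], 1)           # non-spatial environmental model
pred = np.polyval(beta, elev)
resid_train = y[train] - pred[train]

adjusted = pred[test] + idw(xy[train], resid_train, xy[test])
rmse = lambda e: round(float(np.sqrt(np.mean(e ** 2))), 3)
print("RMSE, environmental model only:   ", rmse(y[test] - pred[test]))
print("RMSE, plus interpolated residuals:", rmse(y[test] - adjusted))
```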

20.
The details of a general multiblock partial least squares (PLS) algorithm based on one originally presented by Wold et al. have been developed and are completely presented. The algorithm can handle most types of relationships between the blocks and constitutes a significant advancement in the modeling of complex chemical systems. The algorithm has been programmed in FORTRAN and has been tested on two simulated multiblock problems, a three-block and a five-block problem. The algorithm combines the score vectors for all blocks predicting a particular block into a new block. This new block is used to predict the predicted block in a manner analogous to the two-block PLS. In a similar manner, if one block predicts more than one other block, the score vectors of all predicted blocks are combined to form a new block, which is then predicted by the predictor block as in the two-block PLS. Blocks that both predict and are predicted are treated in such a way that both of these roles can be taken into account when calculating interblock relationships. The results of numerical simulations indicate that the computer program is operating properly and that the multiblock PLS produces meaningful and consistent results.
