Similar literature
20 similar records found
1.
Summary. In this paper computer modelling is used to test simple approximations for simulating strong ground motions for moderate and large earthquakes in the Mexicali–Imperial Valley region. Initially, we represent an earthquake rupture process as a series of many independent small earthquakes distributed in a somewhat random manner in both space and time along the rupture surface. By summing real seismograms for small earthquakes (used as empirical Green's functions), strong ground motions at specific sites near a fault are calculated. Alternatively, theoretical Green's functions that include frequencies up to 20 Hz are used in essentially similar simulations. The model uses random numbers to emulate some of the non-deterministic irregularities associated with real earthquakes, due either to complexities in the rupture process itself or to strong variations in the material properties of the medium. Simulations of the 1980 June 9 Victoria, Baja California earthquake (ML = 6.1) approximately agree with the duration of shaking, the maximum ground acceleration, and the frequency content of strong ground motion records obtained at distances of up to 35 km for this moderate earthquake. In the initial stages of modelling we do not introduce any scaling of spectral shape with magnitude, in order to see at what stage the data require it. Surprisingly, such scaling is not critical in going from M = 4–5 events to the M = 6.1 Victoria earthquake. However, it is clearly required by the El Centro accelerogram for the 1940 Imperial Valley earthquake, which had a much higher moment (Ms ∼ 7). We derive the spectral modification function for this event. The resulting model for this magnitude ∼7 earthquake is then used to predict the ground motions at short distances from the fault. Predicted peak horizontal accelerations for the M ∼ 7 event are about 25–50 per cent higher than those observed for the M = 6.1 Victoria event.
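
A minimal Python sketch of the empirical Green's function summation idea described above (not the authors' code): a larger event is approximated by summing many randomly delayed copies of a small-event record. The synthetic record, sampling rate and sub-event count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_large_event(small_event, dt, n_subevents, rupture_duration, scale=1.0):
    """Sum randomly delayed copies of a small-earthquake record (empirical
    Green's function) to emulate the extended rupture of a larger event."""
    delays = rng.uniform(0.0, rupture_duration, size=n_subevents)   # random rupture times
    n_out = len(small_event) + int(rupture_duration / dt) + 1
    out = np.zeros(n_out)
    for t0 in delays:
        i0 = int(round(t0 / dt))
        out[i0:i0 + len(small_event)] += scale * small_event
    return out

# Illustrative use with a synthetic "small event" record sampled at 100 Hz.
dt = 0.01
small = np.exp(-np.arange(0, 2, dt)) * np.sin(2 * np.pi * 5 * np.arange(0, 2, dt))
large = simulate_large_event(small, dt, n_subevents=50, rupture_duration=8.0)
```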

2.
An iterative solution to the non-linear 3-D electromagnetic inverse problem is obtained by successive linearized model updates using the method of conjugate gradients. Full wave-equation modelling for controlled sources is employed to compute model sensitivities and predicted data in the frequency domain with an efficient 3-D finite-difference algorithm. Necessity dictates that the inverse be underdetermined, since realistic reconstructions require the solution for tens of thousands of parameters. In addition, large-scale 3-D forward modelling is required, and this can easily involve several million electric-field unknowns per solve. A massively parallel computing platform has therefore been utilized to obtain reasonable execution times, and results are given for the 1840-node Intel Paragon. The solution is demonstrated with a synthetic example with added Gaussian noise, where the data were produced with an integral-equation forward-modelling code, which is different from the finite-difference code embedded in the inversion algorithm.
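
A hedged sketch of one linearized model update solved with conjugate gradients, in the spirit of the abstract: the Jacobian is accessed only through matrix-vector products. The damping value and the random dense Jacobian standing in for the 3-D finite-difference sensitivities are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_cg_step(jacobian_matvec, jacobian_rmatvec, residual, n_params, damping=1e-2):
    """One linearized model update dm solving (J^T J + damping I) dm = J^T r
    with conjugate gradients; J is used only through matrix-vector products."""
    def normal_matvec(v):
        return jacobian_rmatvec(jacobian_matvec(v)) + damping * v
    A = LinearOperator((n_params, n_params), matvec=normal_matvec)
    rhs = jacobian_rmatvec(residual)
    dm, info = cg(A, rhs, maxiter=200)
    return dm

# Illustrative dense Jacobian standing in for the 3-D finite-difference sensitivities.
J = np.random.default_rng(1).normal(size=(400, 100))
r = np.random.default_rng(2).normal(size=400)
dm = gauss_newton_cg_step(lambda v: J @ v, lambda w: J.T @ w, r, n_params=100)
```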

3.
Using ECMWF gridded field data for June-September of 2003-2007, a library of forecast predictors reflecting local synoptic-dynamic characteristics was constructed with finite-difference, synoptic-diagnostic and factor-combination methods. Predictors were pre-selected with the PRESS criterion, and an optimal-subset approach was explored for building a neural-network model to forecast summer (June-September) high temperatures of ≥35 °C. The forecast system went into operational use in July 2008, and verification shows that the constructed neural-network high-temperature forecast model achieves better fitting and forecasting performance, providing a new approach for applying neural networks to severe-weather forecasting.

4.
Improving the accuracy of drought prediction can provide reliable data support for drought response and risk management in river basins, and building and comparing suitable drought models is a current research focus. Using the standardized precipitation index (SPI) at four time scales (3, 6, 9 and 12 months) as the indicator, drought prediction models for the northern Haihe River system were built with three machine learning algorithms: wavelet neural network (WNN), support vector regression (SVR) and random forest (RF). Model performance and stability were assessed with the Kendall, K-S and MAE tests. The results show that: (1) WNN and SVR results differ across SPI time scales; WNN is best suited to 12-month SPI drought prediction and SVR to 6-month SPI drought prediction. (2) For the 3- and 12-month SPI, RF performs best (Kendall > 0.898, MAE < 0.05); for the 6- and 9-month SPI, SVR performs best (Kendall > 0.95, MAE < 0.04). (3) The models differ in prediction stability: RF is the most stable, followed by SVR. (4) The performance differences arise mainly because SVR converts the problem into a convex optimization, avoiding the tendency of WNN to become trapped in local optima, while RF ensembles diverse regression trees, reducing the negative influence of weak learners and improving accuracy and stability; RF also handles noisy precipitation data better.
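
A minimal sketch of the RF and SVR side of such an SPI prediction model, assuming scikit-learn and a synthetic SPI-like series with lagged values as predictors; the lag count, train/test split and hyperparameters are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

def lagged_matrix(series, n_lags):
    """Build (X, y) pairs where X holds the previous n_lags SPI values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

# Illustrative monthly SPI-like series; in the study this would be SPI-3/6/9/12.
rng = np.random.default_rng(0)
spi = np.convolve(rng.normal(size=240), np.ones(6) / 6, mode="same")
X, y = lagged_matrix(spi, n_lags=6)
X_train, X_test, y_train, y_test = X[:180], X[180:], y[:180], y[180:]

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
svr = SVR(kernel="rbf", C=1.0, epsilon=0.05).fit(X_train, y_train)
mae_rf = np.mean(np.abs(rf.predict(X_test) - y_test))
mae_svr = np.mean(np.abs(svr.predict(X_test) - y_test))
```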

5.
Neural-network-based cellular automata (CA) for realistic and optimized urban simulation
黎夏  叶嘉安 《地理学报》2002,57(2):159-166
A neural-network-based cellular automaton (CA) is proposed. CA have been increasingly applied to simulating cities and other geographical phenomena. The biggest difficulty in CA simulation is determining the model structure and parameters. Simulating real cities involves many spatial variables and parameters, and when the model becomes complex it is hard to determine the parameter values. The proposed model has a simple structure, and its parameters can be obtained automatically by training the neural network. The analysis shows that the method achieves higher simulation accuracy and greatly reduces the time needed to find the parameters. By screening the training data, the model can also perform optimized urban simulation and provide a reference for urban planning.
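
A minimal sketch of the neural-network CA idea, assuming scikit-learn: a small network trained on cell attributes gives development probabilities that replace hand-tuned transition rules, and a CA loop converts cells above a probability threshold. The variables, threshold and synthetic training data are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative spatial variables per cell (e.g. distance to centre, distance to
# roads, neighbourhood development density); the trained network replaces
# hand-tuned CA transition rules.
rng = np.random.default_rng(0)
n_cells = 5000
features = rng.uniform(size=(n_cells, 3))
developed = (features @ np.array([-2.0, -1.0, 3.0]) + rng.normal(0, 0.3, n_cells)) > 0

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(features, developed)

def ca_step(features, state, threshold=0.8):
    """One CA iteration: undeveloped cells convert where the network's
    development probability exceeds a threshold."""
    p = net.predict_proba(features)[:, 1]
    return state | ((~state) & (p > threshold))

state = np.zeros(n_cells, dtype=bool)
for _ in range(10):
    state = ca_step(features, state)
```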

6.
In view of increasing damage due to earthquakes, and the current problems of earthquake prediction, real-time warning of strong ground motion is attracting more interest. In principle, it allows short-term warning of earthquakes while they are occurring. With warning times of up to tens of seconds it is possible to send alerts to potential areas of strong shaking before the arrival of the seismic waves and to mitigate the damage, but only if the seismic source parameters are determined rapidly. The major problem of an early-warning system is the real-time estimation of the earthquake's size.
We investigated digitized strong-motion accelerograms from 244 earthquakes that occurred in North and Central America between 1940 and 1986 to find out whether their initial portions reflect the size of the ongoing earthquake. Applying conventional methods of time-series analysis, we calculate appropriate signal parameters and describe their uncertainties in relation to magnitude and epicentral distance. The study reveals that the magnitude of an earthquake can be predicted from the first second of a single accelerogram to within ±1.36 magnitude units. The uncertainty can be reduced to about ±0.5 magnitude units if a larger number (≥8) of accelerograms are available, which requires a dense network of seismic stations in areas of high seismic risk.
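
A hedged sketch of the single-station and network magnitude estimates implied above; the regression coefficients a, b, c are placeholders rather than the paper's fitted values, and the distance correction is only one plausible form.

```python
import numpy as np

def magnitude_estimate(first_second_accel, epicentral_km, a=1.0, b=1.5, c=2.0):
    """Illustrative single-station magnitude proxy from the first second of an
    accelerogram: a log peak-amplitude term plus a distance correction.
    Coefficients a, b, c are placeholders, not the paper's regression values."""
    peak = np.max(np.abs(first_second_accel))
    return a * np.log10(peak) + b * np.log10(epicentral_km) + c

def network_estimate(records):
    """Averaging several station estimates shrinks the scatter roughly as 1/sqrt(N)."""
    return float(np.mean([magnitude_estimate(acc, r) for acc, r in records]))

# Illustrative use with a synthetic one-second record sampled at 100 Hz.
acc = 0.05 * np.random.default_rng(0).normal(size=100)
m_single = magnitude_estimate(acc, epicentral_km=30.0)
```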

7.
Nonlinear system identification based on a novel associative-memory neural network
The Hopfield network model can act as an associative memory but is not suitable for system identification, while the Elman neural network, which has dynamic memory, has relatively poor generalization ability. This paper proposes a new associative-memory neural network structure and learning algorithm in which an associative-memory decay factor is introduced to improve the identification of nonlinear systems. Simulation comparisons with the Elman dynamic neural network identification method show that the associative-memory neural network has good dynamic identification and generalization ability.

8.
The magnitude mbLg 5.0 Mont-Laurier earthquake of 1990 October 19, in Quebec, Canada, was one of the largest to have occurred in eastern North America during the past decade. High-frequency ground motions recorded on regional network instruments exceeded values anticipated for an event of its size by a factor of 3. A commonly favoured explanation for the discrepancy is that the source was a rare 'high-stress' event. In this paper, detailed fault-slip models are derived to fit waveform and spectral characteristics of the regional data. The results establish that the effective rupture stress was normal (about 100 bars), that the fault rupture developed asymmetrically, and that the average slip time for points inside the rupture area (approx. 0.1 s) was significantly less than that associated with the standard Brune (1970) source spectral model. The rupture area developed in at least four distinct episodes, each extending the previously ruptured area. Taken together with similar results for the mbLg 6.5 Saguenay earthquake of 1988 November, the results indicate that a widely used assumption in hazard analyses, that earthquake spectra are adequately represented by the standard Brune spectral model, is unreliable for the interpretation and prediction of strong ground motion.
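
For reference, a minimal sketch of the standard Brune (1970) omega-squared source spectrum against which the derived slip models are contrasted; scaling constants are omitted and the moment and corner frequency shown are arbitrary.

```python
import numpy as np

def brune_spectrum(f, moment, corner_freq):
    """Brune (1970) omega-squared displacement source spectrum: flat at the
    seismic moment below the corner frequency, falling as f^-2 above it.
    Scaling constants (radiation pattern, density, velocity) are omitted."""
    return moment / (1.0 + (f / corner_freq) ** 2)

f = np.logspace(-1, 2, 200)          # 0.1-100 Hz
spec = brune_spectrum(f, moment=1e17, corner_freq=1.0)
```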

9.
Seabed sediment textural parameters such as mud, sand and gravel content can be useful surrogates for predicting patterns of benthic biodiversity. Multibeam sonar mapping can provide near-complete spatial coverage of high-resolution bathymetry and backscatter data that are useful in predicting sediment parameters. Multibeam acoustic data collected across a ~1000 km² area of the Carnarvon Shelf, Western Australia, were used in a predictive modelling approach to map eight seabed sediment parameters. Four machine learning models were used for the predictive modelling: boosted decision tree, random forest decision tree, support vector machine and generalised regression neural network. The results indicate overall satisfactory statistical performance, especially for %Mud, %Sand, Sorting, Skewness and Mean Grain Size. The study also demonstrates that predictive modelling using the combination of machine learning models has provided the ability to generate prediction uncertainty maps. However, the single models were shown to have overall better prediction performance than the combined models. Another important finding was that choosing an appropriate set of explanatory variables, through a manual feature selection process, was a critical step for optimising model performance. In addition, machine learning models were able to identify important explanatory variables, which are useful in identifying underlying environmental processes and checking predictions against the existing knowledge of the study area. The sediment prediction maps obtained in this study provide reliable coverage of key physical variables that will be incorporated into the analysis of covariance of physical and biological data for this area.
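
A minimal sketch of the multi-model prediction-and-uncertainty idea, assuming scikit-learn: several regressors are fitted to the same predictors, their mean gives a combined prediction, and their spread serves as an uncertainty proxy. The synthetic predictors stand in for bathymetry/backscatter derivatives, and the generalised regression neural network is omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR

# Illustrative predictors standing in for multibeam bathymetry/backscatter
# derivatives; the target stands in for a sediment parameter such as %Mud.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 5))
y = 30 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(0, 2, 1000)

models = {
    "gbt": GradientBoostingRegressor(random_state=0).fit(X, y),
    "rf": RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y),
    "svr": SVR(kernel="rbf", C=10.0).fit(X, y),
}
preds = np.column_stack([m.predict(X) for m in models.values()])
combined = preds.mean(axis=1)          # combined prediction
uncertainty = preds.std(axis=1)        # spread between models as an uncertainty proxy
importance = models["rf"].feature_importances_   # explanatory-variable importance
```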

10.
Cluster analysis was used to divide the runoff series into sub-series of different types, and neural network models were built for each sub-series. An Elman dynamic neural network was used to predict daily runoff for the Linyi sub-basin in the upper Yishu River basin, and the results were compared with those of a single, unclassified neural network. Four statistical indices (coefficient of determination, correlation coefficient, mean relative error and mean relative root-mean-square error), together with the basin hydrograph and flood-event error analyses, show that the Elman dynamic neural network can simulate daily runoff well, but the rainfall-runoff model based on runoff classification performs better and considerably improves runoff simulation accuracy.
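
A minimal sketch of the cluster-then-model pattern described above, assuming scikit-learn and substituting a plain feed-forward network for the Elman network for brevity; the synthetic rainfall-runoff samples and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Illustrative daily rainfall-runoff samples: cluster the samples first, then
# train one network per cluster instead of a single global network.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 4))                       # e.g. recent rainfall/runoff lags
y = 5 * X[:, 0] ** 2 + 2 * X[:, 1] + rng.normal(0, 0.1, 2000)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {}
for label in range(3):
    mask = kmeans.labels_ == label
    models[label] = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                                 random_state=0).fit(X[mask], y[mask])

def predict(x_new):
    """Route each new sample to the network trained on its cluster."""
    labels = kmeans.predict(x_new)
    return np.array([models[k].predict(x_new[i:i + 1])[0] for i, k in enumerate(labels)])

y_hat = predict(X[:10])
```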

11.
王钧  李广  聂志刚  刘强 《干旱区地理》2020,43(2):398-405
To address the difficulty of effectively predicting the complex soil water erosion process in the loess hilly-gully region of central Gansu, experimental data from the artificial-grassland runoff plots at the Anjiagou soil and water conservation experimental station in Dingxi City for January-December of 2005-2016 were used as the main data source. Eight factors (monthly rainfall, monthly erosive rainfall, monthly runoff, monthly rainfall intensity, runoff-plot area, runoff-plot slope, soil sand content and soil clay content) were taken as inputs and monthly soil water erosion as the output. Partial least-squares regression (PLSR) and a long short-term memory (LSTM) recurrent neural network were combined to build a soil water erosion prediction model for artificial grassland, and its effectiveness was evaluated against common BP (back propagation), RNN (recurrent neural network) and LSTM neural network models. The results show that PLSR reduces the eight input factors to four, effectively addressing the LSTM network's demand for large sample sizes, and that combining PLSR and LSTM improves both prediction accuracy and convergence speed: the mean relative error of the predictions is below 4%, the correlation coefficient is higher than that of the other three neural network models, and the number of iterations, root-mean-square error and mean absolute error are all lower than those of the other three models. The study also found that slope has a marked influence on soil water erosion of artificial grassland: when rainfall is below 25 mm, erosion does not increase noticeably with slope, but when rainfall exceeds 25 mm it increases markedly with slope. The PLSR-LSTM soil water erosion prediction model can accurately predict soil water erosion of artificial grassland in the loess hilly-gully region of central Gansu and provides a new approach for accurate prediction of soil and water loss in this region.
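
A hedged sketch of the PLSR-then-LSTM pipeline, assuming scikit-learn and PyTorch: PLS reduces eight raw factors to four components, which feed a small LSTM trained on short monthly sequences. The synthetic data, sequence length and training loop are illustrative, not the study's configuration.

```python
import numpy as np
import torch
from sklearn.cross_decomposition import PLSRegression

# Illustrative monthly records: 8 raw factors -> 4 PLS components -> LSTM.
rng = np.random.default_rng(0)
X_raw = rng.uniform(size=(144, 8))                     # 12 years of monthly factors
y = (X_raw[:, :3].sum(axis=1) + rng.normal(0, 0.1, 144)).astype(np.float32)

pls = PLSRegression(n_components=4).fit(X_raw, y)
X_red = pls.transform(X_raw).astype(np.float32)        # reduced predictors

seq_len = 6                                            # six months of history per sample
seqs = np.stack([X_red[i:i + seq_len] for i in range(len(X_red) - seq_len)])
targets = y[seq_len:]

class ErosionLSTM(torch.nn.Module):
    def __init__(self, n_in=4, hidden=16):
        super().__init__()
        self.lstm = torch.nn.LSTM(n_in, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

model = ErosionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
xb, yb = torch.from_numpy(seqs), torch.from_numpy(targets)
for _ in range(200):                                   # brief illustrative training loop
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(xb), yb)
    loss.backward()
    opt.step()
```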

12.
The large thickness of Upper Carboniferous strata found in the Netherlands suggests that the area was subject to long-term subsidence. However, the mechanisms responsible for subsidence are not quantified and are poorly known. In the area north of the London Brabant Massif, onshore United Kingdom, subsidence during the Namurian–Westphalian B has been explained by Dinantian rifting, followed by thermal subsidence. In contrast, south and east of the Netherlands, along the southern margin of the Northwest European Carboniferous Basin, flexural subsidence caused the development of a foreland basin. It has been proposed that foreland flexure due to Variscan orogenic loading was also responsible for Late Carboniferous subsidence in the Netherlands. In the first part of this paper, we present a series of modelling results in which the geometry and location of the Variscan foreland basin was calculated on the basis of kinematic reconstructions of the Variscan thrust system. Although several uncertainties exist, it is concluded that most subsidence calculated from well data in the Netherlands cannot be explained by flexural subsidence alone. Therefore, we investigated whether a Dinantian rifting event could adequately explain the observed subsidence by inverse modelling. The results show that if only a Dinantian rifting event is assumed, such as is found in the United Kingdom, a very high palaeowater depth at the end of the Dinantian is required to accommodate the Namurian–Westphalian B sedimentary sequence. To better explain the observed subsidence curves, we propose (1) an additional stretching event during the Namurian and (2) a model incorporating an extra dynamic component, which might well explain the very high wavelength of the observed subsidence compared with the wavelength of the predicted flexural foreland basin.

13.
We have formulated a 3-D inverse solution for the magnetotelluric (MT) problem using the non-linear conjugate gradient method. Finite-difference methods are used to compute predicted data and objective functional gradients efficiently. Only six forward modelling applications per frequency are typically required to produce the model update at each iteration. This efficiency is achieved by incorporating a simple line search procedure that calls for a sufficient reduction in the objective functional, instead of an exact determination of its minimum along a given descent direction. Additional efficiencies in the scheme are sought by incorporating preconditioning to accelerate solution convergence. Even with these efficiencies, the solution's realism and complexity are still limited by the speed and memory of serial processors. To overcome this barrier, the scheme has been implemented on a parallel computing platform where tens to thousands of processors operate on the problem simultaneously. The inversion scheme is tested by inverting data produced with a forward modelling code algorithmically different from that employed in the inversion algorithm. This check provides independent verification of the scheme, since the two forward modelling algorithms are prone to different types of numerical error.
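
A minimal sketch of non-linear conjugate gradients with a sufficient-decrease (backtracking) line search, the strategy described above; the Polak-Ribiere update, step-halving rule and quadratic test objective standing in for the MT misfit functional are illustrative choices.

```python
import numpy as np

def nlcg_minimize(objective, gradient, m0, n_iter=20, c1=1e-4):
    """Non-linear conjugate gradient (Polak-Ribiere) with a backtracking line
    search that only asks for a sufficient (Armijo) decrease, instead of an
    exact 1-D minimization along each descent direction."""
    m = m0.copy()
    g = gradient(m)
    d = -g
    for _ in range(n_iter):
        step, f0, slope = 1.0, objective(m), g @ d
        while objective(m + step * d) > f0 + c1 * step * slope:   # sufficient decrease
            step *= 0.5
            if step < 1e-12:
                break
        m = m + step * d
        g_new = gradient(m)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))            # Polak-Ribiere+
        d = -g_new + beta * d
        g = g_new
    return m

# Illustrative quadratic objective standing in for the MT data-misfit functional.
A = np.diag(np.linspace(1.0, 50.0, 30))
obj = lambda m: 0.5 * m @ A @ m
grad = lambda m: A @ m
m_est = nlcg_minimize(obj, grad, np.ones(30))
```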

14.
Dynamic evolution of urbanization in the Jiuquan-Jiayuguan-Yumen region based on a self-organization model
李铭  方创琳 《地理研究》2006,25(3):551-559
Based on the self-organization modelling principle of dissipative structure theory, the coupled effect of inter-regional population migration is incorporated into the simulation and prediction model of regional urbanization as an influencing factor, so that the spatio-temporal dynamics and differentiation of regional urbanization can be simulated more meaningfully. Using historical population and urbanization data for the three cities of Jiuquan, Jiayuguan and Yumen, a self-organization-based urbanization dynamics model was used to simulate the past 50 years and predict the next 50 years of urbanization in the three cities, and the roles of economic growth, resource development, ecological construction, transport location, national policy and population migration were analysed to identify the drivers of the differences in urbanization dynamics among the three cities. The results show that the deviation between the simulated and actual urbanization levels of the three cities over the past 50 years is within 5%, so the model can be used to predict their urbanization levels over the next 50 years. The prediction indicates that by 2050 the total population of the three cities will reach 1.074 million and the urbanization level will reach 72.56%, with Jiuquan reaching 62.5%, Jiayuguan 88.62% and Yumen 66.6%.

15.
Advances in uncertainty research on large-scale water cycle simulation systems
The water cycle is affected by numerous natural and human factors, which makes the water cycle system variable and complex. Water cycle system models, as important tools for studying basin hydrological processes and their evolution, inevitably contain considerable uncertainty, especially for large-scale land-atmosphere coupled water cycle simulation systems, whose uncertainty sources include input and parameter uncertainty, structural uncertainty, methodological uncertainty, and initial- and boundary-condition uncertainty. Building on an analysis of uncertainty quantification methods and of uncertainty research on traditional hydrological models, this paper reviews current progress and bottleneck problems in uncertainty research on large-scale water cycle simulation systems, introduces PSUADE, an uncertainty quantification solution and tool system for large complex dynamical systems, and discusses its advantages for quantifying uncertainty in large-scale water cycle simulation systems.

16.
刘柯 《地理科学进展》2007,26(6):133-137
The size of an urban built-up area is affected by many social, economic and urban-environment factors, and traditional statistical methods have difficulty predicting built-up area accurately. Artificial neural networks have good nonlinear mapping and approximation capability and have been widely used in prediction studies, especially the BP neural network. Principal component analysis (PCA) can reduce data dimensionality while effectively retaining the information in the data; it is combined with the BP neural network mainly at the data input end, reducing the number of input-layer neurons, strengthening network performance and improving prediction accuracy. Taking Beijing as an example, this paper builds a prediction model combining PCA and a BP neural network, using data for 1986-2003 as training samples and data for 2004 as the test sample, to simulate and predict the built-up area of Beijing in 2005. The results show that the relative error of the PCA-based BP neural network prediction against the actual value is 2.8%, 1.8 percentage points better than the traditional BP neural network, with faster training convergence; both prediction accuracy and efficiency are improved.
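
A minimal sketch of the PCA-plus-BP-network pipeline, assuming scikit-learn: PCA at the input end reduces the number of input-layer neurons before a multilayer perceptron is trained. The synthetic indicators, component count and network size are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative socio-economic indicators (population, GDP, road length, ...)
# for 18 years; the target stands in for built-up area (km^2).
rng = np.random.default_rng(0)
X = rng.uniform(size=(18, 10))
y = 300 + 200 * X[:, 0] + 100 * X[:, 1] + rng.normal(0, 5, 18)

# PCA at the input end reduces the number of input-layer neurons before the
# BP (multilayer perceptron) network is trained.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),
    MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0),
)
model.fit(X[:-1], y[:-1])              # train on the earlier years
prediction = model.predict(X[-1:])     # predict the held-out final year
```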

17.
The controls on an earthquake's size are examined in a heterogeneous cellular automaton that includes stress concentrations which scale with rupture size. Large events only occur when stress is highly correlated with strength over the entire fault. Although the largest events occur when this correlation is highest, the magnitude of the correlation has no predictive value, as events of all magnitudes occur during times of high stress/strength correlation. Rather, the size of any particular event depends on the local stress heterogeneity encountered by the growing rupture. Patterns of energy release with time for individual ruptures reflect this heterogeneity and many show nucleation-type behaviour, although there is no relation between the duration of the nucleation phase and the size of the event. These results support the view that earthquake size is determined by complex interactions between previous event history and dynamic stress concentrations and suggest that deterministic earthquake prediction based on monitoring nucleation zones will not be possible.
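
A loosely related sketch of a simple 1-D stress/strength automaton with nearest-neighbour stress transfer; it does not reproduce the paper's rupture-size-dependent stress concentrations, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_automaton(n=200, n_steps=10000, load=0.001):
    """Illustrative 1-D heterogeneous stress/strength automaton: cells are loaded
    uniformly, a failing cell passes part of its stress to its neighbours, and an
    event's size is set by the local stress heterogeneity the growing rupture meets."""
    strength = rng.uniform(1.0, 2.0, n)              # heterogeneous strength
    stress = rng.uniform(0.0, 1.0, n)
    sizes = []
    for _ in range(n_steps):
        stress += load                               # slow uniform tectonic loading
        failing = np.flatnonzero(stress >= strength)
        size = 0
        while failing.size:
            for i in failing:
                transfer = 0.5 * stress[i]
                stress[(i - 1) % n] += 0.5 * transfer    # stress passed to neighbours
                stress[(i + 1) % n] += 0.5 * transfer
                stress[i] = 0.0
                size += 1
            failing = np.flatnonzero(stress >= strength)
        if size:
            sizes.append(size)
    return np.array(sizes)

event_sizes = run_automaton()
```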

18.
This is the first of two papers elaborating a framework for embedding urban models within GIS. This framework is based upon using the display capabilities of GIS as the user interface to the conventional modelling process, beginning with data selection and analysis, moving to model specification and calibration, and thence to prediction. In this paper, we outline how various stages in this process, based on purpose-built software outside the system, are accessed and operated through the GIS. We first deal with display based on thematic maps, surfaces, graphs and linked windows, standard to any data from whatever source, be it observations, model estimates or predictions. We then describe how various datasets are selected, how the spatial system can be partitioned or aggregated, and how rudimentary exploratory spatial data analysis enables scatterplots to be associated with thematic maps. We illustrate all these functions and operations using the proprietary GIS ARC-INFO applied to population data at the tract level in the Buffalo region. In the second paper, various residential location models are outlined and the full modelling framework is assembled and demonstrated.

19.
Statistical study of the occurrence of shallow earthquakes
Summary. The time-space-magnitude interaction of shallow earthquakes has been investigated for three catalogues: worldwide (M ≥ 7.0), Southern and Northern California (M ≥ 4.0) and Central California (M ≥ 1.5). The earthquake sequences are considered as a multi-dimensional stochastic point process; the estimates of the parameters for a branching model of the seismic process are obtained by a maximum-likelihood procedure. After applying magnitude-time and magnitude-distance scaling, the pattern of relationship among earthquakes of different magnitude ranges is almost identical. The number of foreshocks diminishes as the magnitude difference between the main shock and the foreshocks increases, while the magnitude distribution of aftershocks has the opposite property. The strongest aftershocks are likely to occur at the beginning of the sequence; later they migrate away with velocities of the order of km/day. The sequences which are composed of smaller aftershocks last longer and there are indications that they remain essentially in the focal region. Foreshocks also appear to migrate, but in this case, toward the main shock. The rate of occurrence of dependent shocks increases as t⁻¹ as the origin time of the main shock is approached, effectively making every earthquake a multi-shock event. This interaction of earthquakes was modelled by a Monte-Carlo simulation technique. The statistical inversion of simulated catalogues was undertaken to derive the information we would be able to retrieve from actual data, as well as possible errors of estimates. The possibility of using these results as a tool for seismic risk prediction is discussed and evaluated.
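
A minimal sketch of a Monte-Carlo branching (epidemic-type) catalogue simulation in the spirit described above, with offspring delays drawn from an Omori-like power law so that the dependent-event rate rises towards the parent's origin time; the branching ratio and Omori parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_branching(n_background=50, t_max=1000.0, branching_ratio=0.6,
                       omori_c=0.01, omori_p=1.1):
    """Illustrative Monte-Carlo branching catalogue: background events trigger
    offspring whose delays follow an Omori-like ~(t + c)^-p law, so the
    dependent-event rate rises sharply towards each parent's origin time."""
    times = list(rng.uniform(0.0, t_max, n_background))
    queue = list(times)
    while queue:
        parent = queue.pop()
        for _ in range(rng.poisson(branching_ratio)):        # number of direct offspring
            u = rng.uniform()
            delay = omori_c * ((1.0 - u) ** (1.0 / (1.0 - omori_p)) - 1.0)
            child = parent + delay
            if child < t_max:
                times.append(child)
                queue.append(child)
    return np.sort(times)

catalogue = simulate_branching()
```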

20.
Most previous research on areas with abundant rainfall shows that simulations using rainfall-runoff models have very high prediction accuracy and applicability when using a back-propagation (BP), feed-forward, multilayer perceptron artificial neural network (ANN). However, in runoff areas with relatively low rainfall or a dry climate, more studies are needed. In these areas, of which oasis-plain areas are a particularly good example, the existence and development of runoff depend largely on runoff generated in alpine regions. Quantitative analysis of the uncertainty of runoff simulation under climate change is the key to improving the utilization and management of water resources in arid areas. Therefore, in this context, three kinds of BP feed-forward, three-layer ANNs with similar structures were chosen as models in this paper. Taking the oasis-plain region traversed by the Qira River Basin in Xinjiang, China, as the research area, the monthly accumulated runoff of the Qira River in the next month was simulated and predicted. The results showed that the training precision of a compact wavelet neural network is low, but from the forecasting results it could be concluded that its training algorithm better reflects the overall behaviour of the samples. The traditional artificial neural network (TANN) model and radial basis-function neural network (RBFNN) model showed higher accuracy in the training and prediction stages. However, the TANN model, which is more sensitive to the selection of input variables, requires a large number of numerical simulations to determine the appropriate input variables and the number of hidden-layer neurons. Hence, the RBFNN model is more suitable for the study of such problems, and it can be extended to other similar arid-oasis areas on the southern edge of the Kunlun Mountains, providing a reference for sustainable water-resource management of arid-oasis areas.

