11.
In 1988, the ESC Working Group "Macroseismic Scales" began upgrading the MSK-81 intensity scale. This paper presents the background and the decisions made with respect to the so-called seismogeological effects. The discussion showed that, for many reasons, these effects cannot be treated and used in the same way as the effects on humans, objects and buildings. The Working Group therefore adopted the solution of using them as a side tool for intensity assessment, providing a comprehensive table in which the empirical relations between seismogeological effects and intensity degrees, assessed by means of other effects, are presented.
12.
Observations of the tidal jet issuing from Quatsino Narrows into Rupert-Holberg Inlet, B.C., are discussed. Two types of flow are observed: a buoyant surface jet and a negatively buoyant jet. The buoyant flow is parameterized by an initial densimetric Froude number, and agreement is good between the observed vertical penetration of the jet and that predicted by several existing models. The negatively buoyant jet entrains several times its initial volume; entrainment constants for the flow are larger than those observed for the two-dimensional plume on similar inclines, yet smaller than those for neutrally buoyant jets. A time scale of 2 to 3 weeks is calculated for the flushing of the inlet during times of negatively buoyant inflow. The buoyant jet is observed to reduce the overall density of the water column, and estimated vertical eddy diffusivities are considerably larger than in most other fjords. Changes in the Froude number of the jet are controlled primarily by changes in the density and speed of the inflow. During the period of observations, the density of the jet appears to be controlled by runoff.
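As a minimal illustration of the parameterization mentioned above, the sketch below computes an initial densimetric Froude number Fr = U / sqrt(g'h), with reduced gravity g' = g * Δρ / ρ. All numerical values are hypothetical placeholders, not observations from the Quatsino Narrows survey.

```python
import math

def densimetric_froude(u, rho_jet, rho_ambient, h, g=9.81):
    """Initial densimetric Froude number Fr = U / sqrt(g' h),
    where g' = g * |rho_ambient - rho_jet| / rho_ambient is the
    reduced gravity and h is a characteristic depth scale."""
    g_prime = g * abs(rho_ambient - rho_jet) / rho_ambient
    return u / math.sqrt(g_prime * h)

# Hypothetical values for illustration only (not from the survey):
# a 1 m/s inflow, a 2 kg/m^3 density deficit, 10 m channel depth.
print(densimetric_froude(u=1.0, rho_jet=1023.0, rho_ambient=1025.0, h=10.0))
```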
14.
A first tentative comparison between the structural framework related to active tectonics and the long-term seismicity of the Umbria–Marche Apennines (affected by the 1997 seismic sequence) provides some insight into the seismotectonic characteristics of the area. This Apennine sector is affected by 15- to 20-km-long active fault systems consisting of minor fault segments. Each of these fault segments may be responsible for earthquakes with magnitudes ranging between 5.5 and 6.0 (such as those that occurred in 1599, 1730, 1838, 1859 and 1979). However, the occurrence of one large-magnitude event (1703, Ms = 6.7) and of seismic sequences (1747–1751; 1997–1998) indicates that an entire fault system may be activated suddenly (at least in the southern part of the investigated area) or during seismic crises lasting many months. The comparison between the active faulting framework and the long-term seismicity also indicates that no significant earthquake since 1000 AD can be related to the Mt. Vettore Fault System.
15.
Multicomponent seismic data are acquired by orthogonal geophones that record a vectorial wavefield. Since the single-component recordings are not independent, processing should be performed jointly on all the components. One way to achieve this is to exploit quaternions, hyper-complex numbers whose four components make them naturally suited to representing multidimensional data. Quaternion algebra allows us to extend coherence functionals defined for scalar observations to multicomponent data. By means of quaternions we therefore implement semblance as well as other methods based on matched filtering and on the covariance properties of the data. As an application, we show the results of a quaternion velocity analysis carried out by combining information from the geophones and the hydrophones of an ocean-bottom cable (OBC) survey, thus recognizing the true vectorial nature of the incoming wavefield. This also allows one to relax, at least partially, vector-fidelity constraints. We demonstrate that quaternion velocity analysis yields improved resolution with respect to single-component velocity analysis for any coherence functional chosen, and that it simultaneously highlights velocity trends pertaining to different wave modes. This helps the interpreter to estimate the interval Vp/Vs ratio by means of event correlation and to make use of a priori information from VSP data and well logs. It also speeds up velocity picking, which can be performed in a single pass on a multicomponent velocity panel rather than once for each single-component velocity panel.
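A minimal sketch of the idea, not the authors' implementation: since semblance requires only quaternion addition and magnitude, a quaternion-extended semblance can be prototyped by storing the four recorded components (hydrophone plus three orthogonal geophones) as the four quaternion parts of each sample and applying the standard semblance formula to quaternion magnitudes. Array names, shapes and the window length are assumptions.

```python
import numpy as np

def quaternion_semblance(data, window):
    """Semblance of multicomponent traces encoded as quaternions.

    data   : array of shape (n_traces, n_samples, 4); the last axis holds
             the quaternion parts (e.g. hydrophone, x-, y-, z-geophone).
    window : number of samples in the sliding time window.

    Semblance = |sum of quaternions over traces|^2 summed in the window,
    divided by n_traces times the summed squared quaternion magnitudes.
    """
    n_traces = data.shape[0]
    stack = data.sum(axis=0)                 # quaternion sum over traces
    num = (stack ** 2).sum(axis=-1)          # |sum q|^2 per sample
    den = (data ** 2).sum(axis=(0, -1))      # sum |q|^2 per sample
    kernel = np.ones(window)
    num_w = np.convolve(num, kernel, mode="same")
    den_w = np.convolve(den, kernel, mode="same")
    return num_w / (n_traces * den_w + 1e-12)

# Illustrative use: 24 traces, 500 samples, 4 components of random noise.
rng = np.random.default_rng(0)
s = quaternion_semblance(rng.standard_normal((24, 500, 4)), window=11)
print(s.shape, float(s.max()) <= 1.0 + 1e-9)
```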
16.
The wavenumber iterative modelling (WIM) method was first introduced to estimate the static corrections for 2D land profiles by performing first-break inversion in the wavenumber domain. The WIM algorithm offers some useful advantages: robustness, stability and flexibility. Robustness is obtained by intensive exploitation of all the available data and by application of an automatic function for mispick removal. Stability is the result of an iterative procedure that ensures convergence towards a stable and plausible solution even at the ends of the profile, where the problem is normally ill-posed. Finally, flexibility derives from the possibility of solving for multilayered structures and of estimating vertical velocity gradients.

This work extends the WIM method to three dimensions. The extension is feasible because the three-dimensional (3D) problem can be decomposed into a number of small independent problems, one for each pair of wavenumbers (kx, ky). The extension preserves the above-mentioned advantages. The parameters of the estimated model are affected differently by noise: analysis of the input/output noise transfer function demonstrates that the high spatial frequencies of the velocity distributions are the components most affected by noise; thus, the algorithm includes a gradual damping of the higher wavenumbers of the velocity parameter. Although the WIM 3D algorithm requires a larger amount of RAM than other standard approaches, a considerable reduction in CPU run time can be achieved because every wavenumber pair can be treated as an independent linear problem.
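A schematic sketch of the decomposition property described above, under stated assumptions rather than the published WIM formulation: once the data are transformed to the wavenumber domain, each pair (kx, ky) defines a small independent linear problem, and the higher wavenumbers of the estimated velocity can be gradually damped. The per-wavenumber system here is a deliberately trivial stand-in (a least-squares fit of a constant), and the Gaussian damping law is an assumption.

```python
import numpy as np

def wim3d_sketch(data_k, kx, ky, k_damp=0.05):
    """Independent per-wavenumber solves with high-wavenumber damping.

    data_k : complex array (n_kx, n_ky, n_obs); observations already
             transformed to the wavenumber domain (hypothetical layout).
    Returns a damped model spectrum, one small solve per (kx, ky) pair.
    """
    n_kx, n_ky, _ = data_k.shape
    model_k = np.zeros((n_kx, n_ky), dtype=complex)
    for i in range(n_kx):
        for j in range(n_ky):
            # Stand-in linear problem: the least-squares fit of a single
            # constant to the observations is simply their mean.
            m = data_k[i, j].mean()
            # Gradual damping of higher wavenumbers, since they are the
            # components most affected by noise (Gaussian taper assumed).
            k2 = kx[i] ** 2 + ky[j] ** 2
            model_k[i, j] = m * np.exp(-k2 / (2.0 * k_damp ** 2))
    return model_k

# Illustrative use: random wavenumber-domain "observations".
kx = np.fft.fftfreq(16)
ky = np.fft.fftfreq(16)
rng = np.random.default_rng(0)
obs = rng.standard_normal((16, 16, 5)) + 1j * rng.standard_normal((16, 16, 5))
print(wim3d_sketch(obs, kx, ky).shape)   # (16, 16)
```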
17.
Electrical resistivity tomography is a non-linear and ill-posed geophysical inverse problem that is usually solved through gradient-descent methods. This strategy is computationally fast and easy to implement but impedes accurate uncertainty appraisal. We present a probabilistic approach to two-dimensional electrical resistivity tomography in which a Markov chain Monte Carlo algorithm is used to numerically evaluate the posterior probability density function that fully quantifies the uncertainty affecting the recovered solution. The main drawback of Markov chain Monte Carlo approaches is the considerable number of sampled models needed to achieve accurate posterior assessments in high-dimensional parameter spaces. Therefore, to reduce the computational burden of the inversion process, we employ the differential evolution Markov chain, a hybrid of non-linear optimization and Markov chain Monte Carlo sampling that exploits multiple interacting chains to speed up the probabilistic sampling. Moreover, a discrete cosine transform reparameterization is employed to reduce the dimensionality of the parameter space by removing the high-frequency components of the resistivity model, to which the data are insensitive. In this framework, the unknown parameters become the coefficients associated with the retained discrete cosine transform basis functions. First, synthetic data inversions are used to validate the proposed method and to demonstrate the benefits provided by the discrete cosine transform compression. To this end, we compare the outcomes of the implemented approach with those provided by a differential evolution Markov chain algorithm running in the full, un-reduced model space. Then, we apply the method to invert field data acquired along a river embankment. The results yielded by the implemented approach are also benchmarked against a standard local inversion algorithm. The proposed Bayesian inversion provides posterior mean models in agreement with the predictions achieved by the gradient-based inversion, but it also provides model uncertainties, which can be used to identify the penetration depth and the resolution limits.
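A minimal sketch of the two ingredients named above, with assumed grid sizes and coefficient counts rather than the paper's actual settings: the resistivity model is parameterized by a truncated two-dimensional discrete cosine transform, and the differential evolution Markov chain proposal perturbs one chain with the scaled difference of two randomly chosen others (the 2.38/sqrt(2d) scaling is the standard DE-MC choice).

```python
import numpy as np
from scipy.fft import idctn

NZ, NX, KEEP = 24, 48, 8   # grid size and retained DCT coefficients (assumed)

def to_model(coeffs):
    """Expand the retained low-order DCT coefficients to a model grid."""
    c = np.zeros((NZ, NX))
    c[:KEEP, :KEEP] = coeffs.reshape(KEEP, KEEP)
    return idctn(c, norm="ortho")

def demc_proposal(chains, i, rng, gamma=None, eps=1e-6):
    """Differential evolution Markov chain proposal for chain i:
    x* = x_i + gamma * (x_a - x_b) + small noise, with a != b != i."""
    n_chains, n_dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * n_dim)   # standard DE-MC scaling
    a, b = rng.choice([k for k in range(n_chains) if k != i], 2, replace=False)
    return chains[i] + gamma * (chains[a] - chains[b]) \
        + eps * rng.standard_normal(n_dim)

# Illustrative use: propose a move and map it to a model grid.
rng = np.random.default_rng(1)
chains = rng.standard_normal((10, KEEP * KEEP))
candidate = demc_proposal(chains, i=0, rng=rng)
print(to_model(candidate).shape)   # (24, 48)
```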
18.
The comparison of macroseismic intensity scales (total citations: 5; self-citations: 1; citations by others: 4)
The number of different macroseismic scales that have been used to express earthquake shaking over the course of the last 200 years is not known; it may reach three figures. The number of important scales that have been widely adopted is much smaller, perhaps about eight, not counting minor variants. Where data sets exist that are expressed in different scales, it is often necessary to establish some sort of equivalence between them, although best practice would be to reassign intensity values rather than convert them. This is particularly true because the differences between workers in assigning intensity are often greater than the differences between the scales themselves, particularly where one scale may not be very well defined. The extent to which a scale guides the user to a correct assessment of the intensity is a measure of the quality of the scale. There are a number of reasons why one might prefer one scale to another for routine use, and some of these pull in different directions. If a scale has many tests (diagnostics) for each degree, it is more likely that the scale can be applied to any case that comes to hand; but if the diagnostics are so numerous that they include some that do not accurately indicate any one intensity level, then use of the scale will tend to produce false values. The purpose of this paper is chiefly to discuss, in a general way, the principles involved in the analysis of intensity scales. Conversions from different scales to the European Macroseismic Scale are also discussed.
20.
We compare the performance of four stochastic optimisation methods using four analytic objective functions and two highly non-linear geophysical optimisation problems: one-dimensional elastic full-waveform inversion and residual statics computation. The four methods we consider, namely adaptive simulated annealing, the genetic algorithm, the neighbourhood algorithm and particle swarm optimisation, are frequently employed for solving geophysical inverse problems. Because geophysical optimisations typically involve many unknown model parameters, we are particularly interested in comparing the performance of these stochastic methods as the number of unknowns increases. The four analytic functions we choose simulate common types of objective functions encountered in geophysical optimisation: a convex function, two multi-minima functions that differ in the distribution of their minima, and a nearly flat function. As with the analytic tests, the two seismic optimisation problems we analyse are characterised by very different objective functions. The first problem is a one-dimensional elastic full-waveform inversion, which is strongly ill-conditioned and exhibits a nearly flat objective function with a valley of minima extended along the density direction. The second problem is residual statics computation, which is characterised by a multi-minima objective function produced by the so-called cycle-skipping phenomenon. According to the tests on the analytic functions and on the seismic data, the genetic algorithm generally displays the best scaling with the number of parameters. It encounters problems only in the case of an irregular distribution of minima, that is, when the global minimum lies at the border of the search space and a number of important local minima are distant from it. The adaptive simulated annealing method is often the best-performing method for low-dimensional model spaces, but its performance worsens as the number of unknowns increases. Particle swarm optimisation is effective in finding the global minimum in low-dimensional model spaces with few local minima or with a narrow flat valley. Finally, the neighbourhood algorithm is competitive with the other methods only for low-dimensional model spaces; its performance worsens considerably in the case of multi-minima objective functions.
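A minimal sketch of this kind of scaling test, using SciPy's stock global optimisers as stand-ins for the methods compared in the paper (SciPy ships differential evolution and dual annealing; it does not provide the specific genetic, neighbourhood or particle-swarm codes used by the authors): each optimiser is run on a multi-minima analytic function at increasing dimension and the best misfit is recorded.

```python
import numpy as np
from scipy.optimize import differential_evolution, dual_annealing

def rastrigin(x):
    """Multi-minima analytic test function (global minimum 0 at the origin)."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

for n_dim in (2, 8, 32):            # scaling with the number of unknowns
    bounds = [(-5.12, 5.12)] * n_dim
    de = differential_evolution(rastrigin, bounds, seed=0, maxiter=200)
    sa = dual_annealing(rastrigin, bounds, seed=0, maxiter=200)
    print(f"{n_dim:3d} unknowns  DE: {de.fun:10.4f}  SA: {sa.fun:10.4f}")
```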