Introduction. Trial-and-error forward modeling of wide-angle seismic reflection/refraction traveltimes for 2-D velocity structure is extremely time-consuming, even for experienced data interpreters. For wide-angle seismic reflection/refraction experiments that consist of numerous shots along a single line, it is quite difficult to construct, through repeated trial-and-error forward modeling, a 2-D model that fits the data within acceptable limits (Cerveny et al., 1977; Zhang et al., 200…). In ad…
A field test and analysis method has been developed to estimate the vertical distribution of hydraulic conductivity in shallow unconsolidated aquifers. The field method uses fluid-injection ports and pressure transducers in a hollow auger to measure the hydraulic head outside the auger at several distances from the injection point. A constant injection rate is maintained long enough for the system to reach a steady state. Exploiting the analogy between electrical resistivity in geophysics and hydraulic flow, two methods are used to estimate conductivity with depth: a half-space model based on spherical flow from a point injection at each measurement site, and a one-dimensional inversion of the entire dataset.
The injection methodology, conducted in three separate drilling operations, was investigated for repeatability, reproducibility, linearity, and dependence on the injection source. Repeatability tests, conducted at 10 levels, showed standard deviations generally below 10%. Reproducibility tests conducted in three closely spaced drilling operations generally showed a standard deviation of less than 20%, which is probably due to lateral variations in hydraulic conductivity. Linearity tests, made to determine any dependence on flow rate, showed no indication of a flow-rate bias. To obtain estimates of the hydraulic conductivity by independent means, a series of measurements was made by injecting water through screens installed at two separate depths in a monitoring pipe near the measurement site. These estimates differed from the corresponding estimates obtained by injection in the hollow auger by a factor of less than 3.5, which can be attributed to variations in geology and to inaccurate estimates of the distance between the measurement and injection points at depth.
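The steady-state point-injection interpretation underlying the half-space model can be sketched as follows. Assuming, for illustration, uniform spherical flow from a point source in a full space, the head rise at distance r is dh = Q/(4·π·K·r), so K follows by rearrangement; near a boundary the geometric factor changes (e.g. to 2·π for a source on a half-space surface). The function name and parameters are illustrative, not taken from the paper:

```python
import math

def conductivity_spherical(Q, r, dh):
    """Estimate hydraulic conductivity K [m/s] from steady spherical flow
    away from a point injection, using dh = Q / (4*pi*K*r).
    (Hypothetical helper; assumes a uniform full space.)

    Q  : injection rate [m^3/s]
    r  : distance from injection point to pressure transducer [m]
    dh : observed steady-state head increase at r [m]
    """
    return Q / (4.0 * math.pi * r * dh)

# Example: 0.1 L/s injected, 5 cm head rise measured 0.5 m away
K = conductivity_spherical(Q=1e-4, r=0.5, dh=0.05)
```

In practice one such estimate would be formed at each transducer distance and depth level, which is how a vertical conductivity profile could be assembled from the auger measurements.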
Abstract. A simple, fast moment-tensor inversion method using bandpass-filtered P-amplitudes was used to study the moment-tensor statistics of Long Valley caldera microearthquakes. The events were recorded in the summer of 1997, during a swarm in the caldera. The swarm was associated with geodetic extension, uplift, and subsequent moderate earthquake activity. Moment-tensor solutions for 1,993 events were calculated using the new method. The majority of the resulting focal mechanisms can be explained in terms of double-couple mechanisms. Since some events exhibited considerable deviation from double couples, the moment data were examined for statistical significance. The moments of the actual data were compared to the moments of synthetic data with varying degrees of random noise in their spectra. The results suggest that unless data from more than 20 stations are used and the earthquake epicenter is located inside or very close to the network area, moment-tensor inversion does not correctly resolve the non-double-couple components of microearthquakes. Analysis of the inversion residuals shows that the average noise in the P-wave spectra was close to 20%. The fluctuations of the volumetric components of the moment tensors are in good agreement with those of synthetic pure double couples with 20% added noise. Thus the moment-tensor statistics suggest that little if any volume change is required to explain the observed seismic energy release in the swarm. However, the statistics do show that a significant compensated-linear-vector-dipole (CLVD) component may be present in the bulk of the seismicity. Given the network used in this study, such a component could not be precisely resolved for individual earthquakes. This possibility deserves further investigation because of its bearing on the nature of fluid-fault-earthquake processes in swarms.
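The volumetric and CLVD measures discussed above come from the standard eigenvalue decomposition of the moment tensor. A generic sketch of that decomposition (not the paper's own code) is:

```python
import numpy as np

def dc_clvd_measure(M):
    """Return (iso, eps) for a symmetric 3x3 moment tensor M.

    iso : tr(M)/3, the volumetric (isotropic) part.
    eps : the standard CLVD measure eps = -lambda'_min / |lambda'_max|,
          computed from the deviatoric eigenvalues sorted by absolute
          size; eps = 0 for a pure double couple, +/-0.5 for a pure CLVD.
    Generic sketch of the usual decomposition, not the paper's method.
    """
    lam = np.linalg.eigvalsh(np.asarray(M, dtype=float))
    iso = lam.sum() / 3.0
    dev = lam - iso                      # deviatoric eigenvalues
    dev = dev[np.argsort(np.abs(dev))]   # |dev[0]| <= |dev[1]| <= |dev[2]|
    eps = -dev[0] / abs(dev[2]) if dev[2] != 0 else 0.0
    return iso, eps

iso, eps = dc_clvd_measure(np.diag([1.0, 0.0, -1.0]))   # pure double couple
```

Comparing the scatter of iso and eps over many events against their scatter for noisy synthetic double couples is one way the statistical argument in the abstract could be framed.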
In conventional seismic processing, the classical algorithm of Hubral and Krey is routinely applied to extract an initial macrovelocity model consisting of a stack of homogeneous layers bounded by curved interfaces. Inputs to the algorithm are identified primary reflections together with normal-moveout (NMO) velocities derived from a previous velocity analysis conducted on common-midpoint (CMP) data. This work presents a modified version of the Hubral and Krey algorithm that extends the original in two ways: (a) it makes advantageous use of previously obtained common-reflection-surface (CRS) attributes as its input, and (b) it allows for gradient layer velocities in depth. A new strategy to recover interfaces as optimized cubic splines is also proposed. Synthetic examples are provided to illustrate and explain the implementation of the method.
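The layer-stripping idea behind the Hubral and Krey algorithm reduces, for flat horizontal interfaces, to the classical Dix formula, which converts zero-offset times and NMO (rms) velocities into interval velocities. The sketch below shows only this flat-layer, constant-velocity special case; the full algorithm handles curved interfaces (and the modified version above adds gradient layers), neither of which is attempted here:

```python
import math

def dix_interval_velocities(t0, v_nmo):
    """Recover interval velocities from zero-offset two-way times t0[k]
    and NMO (rms) velocities v_nmo[k] via the Dix formula:

        v_int_k^2 = (t0[k]*v_nmo[k]^2 - t0[k-1]*v_nmo[k-1]^2)
                    / (t0[k] - t0[k-1])

    Flat-layer sketch only; not the Hubral-Krey algorithm itself.
    """
    v_int = [v_nmo[0]]  # first layer: interval velocity equals NMO velocity
    for k in range(1, len(t0)):
        num = t0[k] * v_nmo[k] ** 2 - t0[k - 1] * v_nmo[k - 1] ** 2
        v_int.append(math.sqrt(num / (t0[k] - t0[k - 1])))
    return v_int
```

For example, two layers with interval velocities 2000 m/s and 3000 m/s and equal two-way times give an rms velocity of sqrt(6.5e6) m/s at the second interface, from which the formula recovers 3000 m/s for the second layer.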