Similar Documents
20 similar documents retrieved.
1.
Average steady source flow in heterogeneous porous formations is modelled by regarding the hydraulic conductivity K(x) as a stationary random space function (RSF). As a consequence, the flow variables become RSFs as well, and we are interested in calculating their moments. This problem has been studied intensively for a Neumann-type boundary condition at the source. However, there are many applications (such as well-type flows) for which the required boundary condition is of Dirichlet type. To fulfil such a requirement the strength of the source must be proportional to K(x), and therefore the source itself is a RSF. To solve flows driven by sources whose strength is spatially variable, we have used a perturbation procedure similar to that developed by Indelman and Abramovich (Water Resour Res 30:3385–3393, 1994) to analyze flows generated by sources of deterministic strength. Owing to the linearity of the mathematical problem, we have focused on the explicit derivation of the mean head distribution G_d(x) generated by a unit pulse. Such a distribution represents the fundamental solution to the average flow equations, and is termed the mean Green function. The function G_d(x) is derived here at second order of approximation in the variance σ² of the conductivity fluctuation (K_A being the mean value of K(x)), for an arbitrary correlation function ρ(x) and any dimensionality d of the flow domain. We represent G_d(x) as the product of the homogeneous Green function G_d^(0)(x), valid in a domain with constant conductivity K_A, and a distortion term Ψ_d(x) = 1 + σ²ψ_d(x) which modifies G_d^(0)(x) to account for the medium heterogeneity. In the case of isotropic formations ψ_d(x) is expressed via a single quadrature, which can be calculated analytically after adopting a specific (e.g., exponential or Gaussian) shape for ρ(x). These general results are subsequently used to investigate flow toward a partially penetrating well in a semi-infinite domain. Indeed, we construct a σ²-order approximation to the mean as well as the variance of the head by replacing the well with a singular segment, and we show how the well length combined with the medium heterogeneity affects the head distribution. We have introduced the concept of an equivalent conductivity K_eq(r,z); the main result is a relationship in which a characteristic function ψ^(w)(r,z) adjusts the homogeneous conductivity K_A to account for the impact of the heterogeneity. In this way, a procedure can be developed to identify the aquifer hydraulic properties by means of field-scale head measurements. Finally, in the case of a fully penetrating well we have expressed the equivalent conductivity in analytical form, and we have shown that it reduces to the effective conductivity for mean uniform flow, in agreement with the numerical simulations of Firmani et al. (Water Resour Res 42:W03422, 2006).
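As a pointer to what the decomposition G_d(x) = G_d^(0)(x)·[1 + σ²ψ_d(x)] involves in practice, the sketch below evaluates the homogeneous (constant-K_A) Green functions for a unit source in two and three dimensions and applies a generic distortion factor; the ψ_d used here is a dummy placeholder, not the quadrature derived in the paper.

```python
import numpy as np

def green_homogeneous(r, k_a, dim=3):
    """Mean-head Green function for a unit source in a homogeneous medium of
    conductivity k_a: 1/(4*pi*k_a*r) in 3-D, -ln(r)/(2*pi*k_a) in 2-D."""
    r = np.asarray(r, dtype=float)
    if dim == 3:
        return 1.0 / (4.0 * np.pi * k_a * r)
    if dim == 2:
        return -np.log(r) / (2.0 * np.pi * k_a)
    raise ValueError("dim must be 2 or 3")

def psi_placeholder(r):
    """Dummy stand-in for the heterogeneity correction psi_d(r) (decays with distance)."""
    return np.exp(-np.asarray(r, dtype=float))

def green_heterogeneous(r, k_a, sigma2, dim=3):
    """G_d(r) = G_d^(0)(r) * (1 + sigma2 * psi_d(r)), second order in sigma2."""
    return green_homogeneous(r, k_a, dim) * (1.0 + sigma2 * psi_placeholder(r))

r = np.array([0.5, 1.0, 2.0])
print(green_heterogeneous(r, k_a=1e-4, sigma2=0.3, dim=3))
```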

2.
The possibility-probability risk calculated using the interior-outer set model is referred to as fuzzy risk. A fuzzy expected value of the possibility-probability distribution is a set with E_α(x) and Ē_α(x) as its boundaries. The fuzzy expected values E_α(x) and Ē_α(x) of a possibility-probability distribution represent the fuzzy risk values being calculated. For a given α level, three risk values can be calculated: a conservative risk value, a venture risk value and a maximum-probability risk value. As α takes all values in the set [0, 1], a series of risk values is obtained; the fuzzy risk is therefore a multi-valued, or set-valued, risk. Calculation of the fuzzy expected value of Yiwu city's water resource risk has been performed based on the interior-outer set model. We obtain a conservative risk value (R_C) of 800 mm for Yiwu city's water resource risk, a venture risk value (R_V) of 1020 mm, and a maximum-probability risk value (R_M) of 988 mm for the α = 0.1 level cut set.

3.
This study extends the stochastic analysis of transient two-phase flow in randomly heterogeneous porous media (Chen et al. in Water Resour Res 42:W03425, 2006) by incorporating direct measurements of the random soil properties. The log-transformed intrinsic permeability, the soil pore size distribution parameter, and the van Genuchten fitting parameter are treated as stochastic variables that are normally distributed with a separable exponential covariance model. These three random variables, conditioned on the given measurements, are decomposed via Karhunen–Loève decomposition. Combined with the conditional eigenvalues and eigenfunctions of the random variables, we conduct a series of numerical simulations using the stochastic transient water–oil flow model (Chen et al. in Water Resour Res 42:W03425, 2006) based on the KLME approach to investigate how the number and location of measurement points, the different random soil properties, and the correlation length of the random soil properties affect the stochastic behavior of water and oil flow in heterogeneous porous media.

4.
Small local earthquakes from two aftershock sequences in Porto dos Gaúchos, Amazon craton, Brazil, were used to estimate the coda wave attenuation in the frequency band of 1 to 24 Hz. The time-domain coda-decay method of a single backscattering model is employed to estimate the frequency dependence of the quality factor Q_c of coda waves, modeled as Q_c = Q_0 f^η, where Q_0 is the coda quality factor at a frequency of 1 Hz and η is the frequency parameter. We also used the independent frequency model approach (Morozov, Geophys J Int, 175:239–252, 2008), based on the temporal attenuation coefficient χ(f) instead of Q(f), given by χ(f) = γ + πf/Q_e, for the calculation of the geometrical attenuation γ and the effective attenuation Q_e⁻¹. Q_c values have been computed at central frequencies (and bands) of 1.5 (1–2), 3.0 (2–4), 6.0 (4–8), 9.0 (6–12), 12 (8–16), and 18 (12–24) Hz for five different datasets selected according to the geotectonic environment as well as the ability to sample shallow or deeper structures, particularly the sediments of the Parecis basin and the crystalline basement of the Amazon craton. For the Parecis basin Q_c = (98±12)f^(1.14±0.08), for the surrounding shield Q_c = (167±46)f^(1.03±0.04), and for the whole region of Porto dos Gaúchos Q_c = (99±19)f^(1.17±0.02). Using the independent frequency model, we found: for the cratonic zone, γ = 0.014 s⁻¹, Q_e⁻¹ = 0.0001, ν ≈ 1.12; for the basin zone with sediments of ~500 m, γ = 0.031 s⁻¹, Q_e⁻¹ = 0.0003, ν ≈ 1.27; and for the Parecis basin with sediments of ~1,000 m, γ = 0.047 s⁻¹, Q_e⁻¹ = 0.0005, ν ≈ 1.42. Analysis of the attenuation factor Q_c for different values of the geometrical spreading parameter ν indicated that an increase of ν generally causes an increase in Q_c, both in the basin and in the craton, but the differences in attenuation between different geological environments are maintained for different models of geometrical spreading. It was shown that the energy of coda waves is attenuated more strongly in the sediments, Q_c = (78±23)f^(1.17±0.14) (in the deepest part of the basin), than in the basement, Q_c = (167±46)f^(1.03±0.04) (in the craton). Thus, coda wave analysis can contribute to studies of geological structures in the upper crust, as the average coda quality factor depends on the thickness of the sedimentary layer.
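A minimal numerical illustration of the power-law relation quoted above: Q_c = Q_0 f^η can be fitted to band-wise coda-Q estimates by least squares in log-log space. The frequency and Q_c values below are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical band-centre frequencies (Hz) and coda-Q estimates (placeholders).
f = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])
qc = np.array([105.0, 210.0, 420.0, 630.0, 820.0, 1150.0])

# Fit log10(Qc) = log10(Q0) + eta * log10(f) by ordinary least squares.
eta, log_q0 = np.polyfit(np.log10(f), np.log10(qc), deg=1)
q0 = 10.0 ** log_q0

print(f"Q0 ~ {q0:.0f}, eta ~ {eta:.2f}")  # i.e. Qc = Q0 * f**eta
```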

5.
The aim of this paper is to compare four different methods for binary classification with an underlying Gaussian process with respect to theoretical consistency and practical performance. Two of the inference schemes, namely classical indicator kriging and simplicial indicator kriging, are analytically tractable and fast. However, these methods rely on simplifying assumptions which are inappropriate for categorical class labels. A consistent and previously described model extension involves a doubly stochastic process. There, the unknown posterior class probability f(·) is considered a realization of a spatially correlated Gaussian process that has been squashed to the unit interval, and a label at position x is considered an independent Bernoulli realization with success parameter f(x). Unfortunately, inference for this model is not known to be analytically tractable. In this paper, we propose two new computational schemes for the inference in this doubly stochastic model, namely the “Aitchison Maximum Posterior” and the “Doubly Stochastic Gaussian Quadrature”. Both methods are analytical up to a final step where optimization or integration must be carried out numerically. For the comparison of practical performance, the methods are applied to storm forecasts for the Spanish coast based on wave heights in the Mediterranean Sea. While the error rate of the doubly stochastic models is slightly lower, their computational cost is much higher.
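A minimal simulation sketch of the doubly stochastic model described above, assuming a one-dimensional grid, a squared-exponential covariance and a logistic squashing function (all illustrative choices, not the paper's exact setup): the latent class probability f(·) is a squashed Gaussian process, and each label is an independent Bernoulli draw with success parameter f(x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of locations and a squared-exponential covariance (illustrative choices).
x = np.linspace(0.0, 10.0, 200)
ell, sigma2 = 1.0, 4.0
cov = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# One realization of the latent Gaussian process.
g = rng.multivariate_normal(mean=np.zeros_like(x), cov=cov + 1e-8 * np.eye(x.size))

# Squash to the unit interval: f(x) is the posterior class probability surface.
f = 1.0 / (1.0 + np.exp(-g))

# Labels are independent Bernoulli realizations with success parameter f(x).
labels = rng.binomial(n=1, p=f)

print(labels[:20], f[:5].round(2))
```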

6.
It is shown that within the framework of the Kolmogorov model the “energy” of the pole, E(t) = x_1² + x_2², can be interpreted as a Markovian process. An exact analytical expression has been obtained for the density of the conditional probability of the quantity E(t), and the problem of the first passage time of the process E(t) has been analyzed. It was shown that the available data on the swing of the function E(t) are not at variance with the Kolmogorov model, and that the short-period drop of the amplitude of the Chandler wobble in the early 20th century also fits this model at Q = 50–200; values of Q > 350 are less reasonable.
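A small Monte Carlo sketch of the quantities discussed above: with the two polar-motion components caricatured as a damped, noise-driven process (a stand-in for illustration, not the Kolmogorov model itself), the pole "energy" E(t) = x_1² + x_2² and the first time E(t) crosses a threshold can be estimated by simulation. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discrete-time damped, noise-driven model for the two pole components.
n_steps, dt = 20000, 0.01
damping, noise = 0.05, 0.3
x = np.zeros((n_steps, 2))
for k in range(1, n_steps):
    x[k] = x[k - 1] - damping * x[k - 1] * dt + noise * np.sqrt(dt) * rng.standard_normal(2)

# Pole "energy" E(t) = x1^2 + x2^2 and its first passage above a threshold.
energy = (x**2).sum(axis=1)
threshold = 2.0 * energy.mean()
crossings = np.nonzero(energy > threshold)[0]
first_passage = crossings[0] * dt if crossings.size else None

print(f"mean E = {energy.mean():.3f}, first passage above threshold at t = {first_passage}")
```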

7.
In this study, the KLME approach, a moment-equation approach based on the Karhunen–Loève decomposition developed by Zhang and Lu (J Comput Phys 194(2):773–794, 2004), is applied to unconfined flow with multiple random inputs. The log-transformed hydraulic conductivity F, the recharge R, the Dirichlet boundary condition H, and the Neumann boundary condition Q are assumed to be Gaussian random fields with known means and covariance functions. F, R, H, and Q are first decomposed into finite series in terms of standard Gaussian random variables by the Karhunen–Loève expansion. The hydraulic head h is then represented by a perturbation expansion, and each term in the perturbation expansion is written as a product of unknown coefficients and the standard Gaussian random variables obtained from the Karhunen–Loève expansions. A series of deterministic partial differential equations is derived from the stochastic partial differential equations. The resulting equations for the uncorrelated and perfectly correlated cases are developed. The equations can be solved sequentially from low to high order by the finite element method. We examine the accuracy of the KLME approach for groundwater flow subject to uncorrelated or perfectly correlated random inputs and study the capability of the KLME method for predicting the head variance in the presence of various spatially variable parameters. It is shown that the proposed numerical model gives accurate results at a much smaller computational cost than Monte Carlo simulation.
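To make the decomposition step concrete, here is a minimal numerical sketch of a Karhunen–Loève expansion for a one-dimensional Gaussian field with an exponential covariance, truncated to a few terms; the grid size, variance and correlation length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D grid and exponential covariance C(x, x') = sigma2 * exp(-|x - x'| / lam).
n, sigma2, lam = 200, 1.0, 0.2
x = np.linspace(0.0, 1.0, n)
cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)

# Discrete KL expansion: eigen-decomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncate to the leading n_kl modes and synthesize one realization:
# F(x) ~ sum_k sqrt(lambda_k) * phi_k(x) * xi_k, with xi_k ~ N(0, 1).
n_kl = 20
xi = rng.standard_normal(n_kl)
field = eigvecs[:, :n_kl] @ (np.sqrt(np.maximum(eigvals[:n_kl], 0.0)) * xi)

print("captured variance fraction:", eigvals[:n_kl].sum() / eigvals.sum())
```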

8.
We develop a new method for the statistical estimation of the tail of the distribution of earthquake sizes recorded in the Harvard catalog of seismic moments converted to m_W-magnitudes (1977–2004 and 1977–2006). For this, we suggest a new parametric model for the distribution of main-shock magnitudes, composed of two branches: the pure Gutenberg-Richter distribution up to an upper magnitude threshold m_1, followed by another branch with a maximum upper magnitude bound M_max, which we refer to as the two-branch model. We find that the number of main events in the catalog (N = 3975 for 1977–2004 and N = 4193 for 1977–2006) is insufficient for a direct estimation of the parameters of this model, due to the inherent instability of the estimation problem. This problem is likely to be the same for any other two-branch model. This inherent limitation can be explained by the fact that only a small fraction of the empirical data populates the second branch. We then show that using the set of maximum magnitudes (the set of T-maxima) in windows of duration T days provides a significant improvement, in particular (i) by minimizing the negative impact of time-clustering of foreshock/main shock/aftershock sequences in the estimation of the tail of the magnitude distribution, and (ii) by providing, via a simulation method, reliable estimates of the biases in the Moment estimation procedure (which turns out to be more efficient than Maximum Likelihood estimation). We propose a method for determining the optimal value of T that minimizes the mean-squared error of the estimation of the form parameter of the GEV distribution approximating the sample distribution of T-maxima, which yields T_optimal = 500 days. We have estimated the following quantiles of the distribution of T-maxima for the whole period 1977–2006: Q_16%(M_max) = 9.3, Q_50%(M_max) = 9.7 and Q_84%(M_max) = 10.3. Finally, we suggest two more stable statistical characteristics of the tail of the distribution of earthquake magnitudes: the quantile Q_T(q) of a high probability level q for the T-maxima, and the probability of exceedance of a high threshold magnitude, ρ_T(m*) = P{m_k ≥ m*}. Sample estimates of these characteristics were obtained for the global Harvard catalog. The comparison between our estimates for the two periods 1977–2004 and 1977–2006, where the latter period includes the great Sumatra earthquake of 24 December 2004 (m_W = 9.0), confirms the instability of the estimation of the parameter M_max and the stability of Q_T(q) and ρ_T(m*).
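As an illustration of the T-maxima construction described above, the sketch below extracts window maxima from a synthetic magnitude catalog and fits a GEV distribution to them with scipy; the synthetic catalog and the window length are placeholders, not the Harvard data or the paper's estimation procedure (which uses Moment rather than Maximum Likelihood estimation).

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Synthetic catalog: event times (days) and magnitudes above a completeness level.
n_events = 5000
times = np.sort(rng.uniform(0.0, 30 * 365.0, n_events))
mags = 5.5 + rng.exponential(scale=1.0 / np.log(10), size=n_events)  # GR-like tail, b ~ 1

# T-maxima: maximum magnitude in consecutive windows of T days.
T = 500.0
bins = np.arange(times.min(), times.max() + T, T)
idx = np.digitize(times, bins)
t_maxima = np.array([mags[idx == k].max() for k in np.unique(idx)])

# Fit a GEV distribution to the T-maxima and report a high quantile Q_T(q).
shape, loc, scale = genextreme.fit(t_maxima)
q95 = genextreme.ppf(0.95, shape, loc=loc, scale=scale)
print(f"GEV shape={shape:.2f}, loc={loc:.2f}, scale={scale:.2f}, Q_T(0.95)={q95:.2f}")
```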

9.
Transport of non-ergodic solute plumes by steady-state groundwater flow with a uniform mean velocity, μ, was simulated with a Monte Carlo approach in a two-dimensional heterogeneous and statistically isotropic aquifer whose transmissivity, T, is log-normally distributed with an exponential covariance. The ensemble averages of the second spatial moments of the plume about its center of mass, ⟨S_ii(t)⟩, and the plume centroid covariance, R_ii(t) (i = 1, 2), were simulated for variances of Y = log T of σ_Y² = 0.1, 0.5 and 1.0, and for line sources normal or parallel to μ of three dimensionless lengths, 1, 5, and 10. For σ_Y² = 0.1, all simulated ⟨S_ii(t)⟩ − S_ii(0) and R_ii(t) agree well with the first-order theoretical values, where S_ii(0) are the initial values of S_ii(t). For σ_Y² = 0.5 and 1.0 and line sources normal to μ, the simulated longitudinal moments, ⟨S_11(t)⟩ − S_11(0) and R_11(t), agree well with the first-order theoretical results, but the simulated transverse moments, ⟨S_22(t)⟩ − S_22(0) and R_22(t), are significantly larger than the first-order values. For the same two larger values of σ_Y² but line sources parallel to μ, the simulated ⟨S_11(t)⟩ − S_11(0) are larger than, and the simulated R_11 smaller than, the first-order values, while both the simulated ⟨S_22(t)⟩ − S_22(0) and R_22(t) remain larger than the first-order values. For a fixed value of σ_Y², the sums of ⟨S_ii(t)⟩ − S_ii(0) and R_ii, i.e., X_ii (i = 1, 2), remain almost the same no matter what kind of source is simulated. The simulated X_11 are in good agreement with the first-order theory, but the simulated X_22 are significantly larger than the first-order values. The simulated X_22, however, are in excellent agreement with a previous modeling result, and both are very close to the values derived using Corrsin's conjecture. It is found that the transverse moments may be significantly underestimated if less accurate hydraulic head solutions are used, and that a decrease of ⟨S_22(t)⟩ − S_22(0) with time, or a negative effective dispersivity, may occur in the case of a line source parallel to μ when σ_Y² is small.
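For readers unfamiliar with these moment definitions, the following sketch shows how the ensemble-averaged second spatial moment about the plume centroid, ⟨S_11(t)⟩, and the centroid covariance, R_11(t), would be computed from particle positions in a Monte Carlo run; the particle positions here are synthetic placeholders rather than the output of a flow and transport solver.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder particle x-positions at one time t: shape (n_realizations, n_particles),
# standing in for the output of a flow/transport Monte Carlo simulation.
n_real, n_part = 500, 1000
x = rng.normal(loc=10.0, scale=2.0, size=(n_real, n_part)) \
    + rng.normal(scale=1.0, size=(n_real, 1))  # realization-to-realization centroid scatter

# Per-realization centroid and second spatial moment about the centroid.
centroid = x.mean(axis=1)                       # X_c for each realization
s11 = ((x - centroid[:, None]) ** 2).mean(axis=1)

# Ensemble statistics: <S_11(t)> and centroid covariance R_11(t).
mean_s11 = s11.mean()
r11 = centroid.var(ddof=0)

print(f"<S_11> = {mean_s11:.2f}, R_11 = {r11:.2f}, X_11 = {mean_s11 + r11:.2f}")
```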

11.
Velocity measurements with a vertical resolution of 0.02 m were conducted in the lowest 0.5 m of the water column using an acoustic Doppler current profiler (ADCP) at a test site in the western part of the East China Sea. The friction velocity u_* and the turbulent kinetic energy dissipation rate profile ε_wl(ζ) were calculated using log-layer fits; ζ is the height above the bottom. During a semidiurnal tidal cycle, u_* was found to vary in the range (1–7) × 10⁻³ m/s. The law-of-the-wall dissipation profiles ε_wl(ζ) were consistent with the dissipation profiles ε_mc(ζ) evaluated using independent microstructure measurements of small-scale shear, except in the presence of westward currents. It was hypothesized that an isolated bathymetric rise (25 m high on a 50-m-deep seafloor) located to the east of the measurement site is responsible for the latter. Calculation of the depth-integrated internal-tide-generating body force in the region showed that the flanks of the rise are hotspots of internal wave energy that may locally produce a significant turbulent zone while emitting tidal and shorter nonlinear internal waves. This distant topographic source of turbulence may enhance the microstructure-based dissipation levels ε_mc(ζ) in the bottom boundary layer (BBL) beyond the dissipation ε_wl(ζ) associated with purely locally generated turbulence by skin currents. Semi-empirical estimates of the dissipation at a distance from the bathymetric rise agree well with the BBL values of ε_mc measured 15 km upslope.
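A minimal sketch of the log-layer estimates mentioned above: assuming the classical law of the wall u(ζ) = (u_*/κ) ln(ζ/z_0) and the associated dissipation scaling ε_wl(ζ) = u_*³/(κζ), the friction velocity can be obtained from a straight-line fit of velocity against ln ζ. The velocity profile below is a synthetic placeholder, not ADCP data.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

# Synthetic near-bottom velocity profile (placeholder for ADCP bins).
rng = np.random.default_rng(5)
zeta = np.arange(0.02, 0.52, 0.02)            # height above bottom (m)
u_star_true, z0 = 4e-3, 1e-4
u = (u_star_true / KAPPA) * np.log(zeta / z0) + 1e-4 * rng.standard_normal(zeta.size)

# Log-layer fit: u = (u*/kappa) * ln(zeta) - (u*/kappa) * ln(z0).
slope, intercept = np.polyfit(np.log(zeta), u, deg=1)
u_star = KAPPA * slope

# Law-of-the-wall dissipation profile.
eps_wl = u_star**3 / (KAPPA * zeta)

print(f"fitted u* = {u_star:.1e} m/s; eps_wl at 0.1 m = {u_star**3 / (KAPPA * 0.1):.2e} W/kg")
```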

12.
Predictive relations are developed for peak ground acceleration (PGA) from the engineering seismoscope (SRR) records of the 2001 Mw 7.7 Bhuj earthquake and 239 strong-motion records of 32 significant aftershocks of 3.1 ≤ Mw ≤ 5.6 at epicentral distances of 1 ≤ R ≤ 288 km. We have taken advantage of the recent increase in strong-motion data at close distances to derive a new attenuation relation for peak horizontal acceleration in the Kachchh seismic zone, Gujarat. The new analysis uses the Joyner-Boore method for a magnitude-independent shape of the attenuation curve, based on geometrical spreading and anelastic attenuation. In the resulting attenuation equation,
Y is peak horizontal acceleration in g, Mw is moment magnitude, rjb is the closest distance to the surface projection of the fault rupture in kilometers, and S is a variable taking the values 0 and 1 according to the local site geology: S is 0 for a rock site and 1 for a soil site. The relation differs from previous work in the improved reliability of the input parameters and the large number of strong-motion PGA data recorded at short distances (0–50 km) from the source. The relation is in demonstrable agreement with the recorded strong-ground-motion data from earthquakes of Mw 3.5, 4.1, 4.5, 5.6, and 7.7. There are insufficient data from the Kachchh region to adequately judge the relation for the magnitude range 5.7 ≤ Mw ≤ 7.7, but our ground-motion prediction model shows a reasonable correlation with the PGA data of the 29 March 1999 Chamoli main shock (Mw 6.5), validating the model for an Mw 6.5 event. However, our ground-motion prediction shows no correlation with the PGA data of the 10 December 1967 Koyna main shock (Mw 6.3). Our predictions show more scatter in the estimated residuals for the distance range 0–30 km, which could be due to amplification/noise at near stations situated in the Kachchh sedimentary basin, and smaller residuals for the distance range 30–300 km, which could be due to less amplification/noise at sites distant from the Kachchh basin. The smaller residuals observed at the longer distances (100–300 km), however, are less reliable owing to the scarcity of PGA values in that distance range.
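Because the equation itself is not reproduced above, here is only a generic sketch of a Joyner-Boore-type functional form (geometrical spreading plus anelastic attenuation plus a site term). Every coefficient below is a made-up placeholder for illustration and is not the relation derived in the paper.

```python
import numpy as np

def pga_sketch(mw, rjb_km, soil, a=-1.8, b=0.45, h=8.0, c=0.003, e=0.12):
    """Hypothetical Joyner-Boore-style form: log10(Y) = a + b*(Mw - 6)
    - log10(R) - c*R + e*S, with R = sqrt(rjb^2 + h^2). All coefficients are placeholders."""
    r = np.sqrt(np.asarray(rjb_km, dtype=float) ** 2 + h**2)
    log_y = a + b * (mw - 6.0) - np.log10(r) - c * r + e * (1 if soil else 0)
    return 10.0 ** log_y  # peak horizontal acceleration in g

# Example: hypothetical PGA at 10, 50 and 100 km for an Mw 5.6 event on rock.
print(pga_sketch(5.6, [10.0, 50.0, 100.0], soil=False))
```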

13.
Generous statistical tests
A common statistical problem is deciding which of two possible sources, A and B, of a contaminant is most likely the actual source. The situation considered here, based on an actual problem of polychlorinated biphenyl contamination discussed below, is one in which the data strongly support the hypothesis that source A is responsible. The approach to the problem here is twofold: first, accurately estimating this extreme probability; second, since the statistics involved will be used in a legal setting, estimating the extreme probability in such a way as to be as generous as possible toward the defendant's claim that the other site, B, could be responsible, thereby leaving little room for argument when this assertion is shown to be highly unlikely. The statistical testing for this problem is modeled by random variables {X_i} and the corresponding sample mean; the problem considered is providing a bound ε on the probability that the sample mean is at least a given number a_0. Under the hypothesis that the random variables {X_i} satisfy E(X_i) ≤ μ for some 0 < μ < 1, statistical tests are given, described as “generous” because ε is maximized. The intent is to be able to reject the hypothesis that a_0 is a value of the sample mean while eliminating any possible objections to the model distributions chosen for the {X_i}, by choosing those distributions which maximize the value of ε for the test used.
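One way to read the "generous" construction, under the added assumption that each X_i takes values in [0, 1] (an assumption introduced here for illustration, not stated in the abstract): a natural worst-case choice is the most dispersed distribution with mean μ, the two-point law on {0, 1} with P(X_i = 1) = μ, for which the chance that the sample mean reaches a_0 is a binomial tail probability and can be reported as a candidate bound ε.

```python
import math
from scipy.stats import binom

# Illustrative numbers (hypothetical): n samples, hypothesised mean bound mu,
# and an observed sample-mean level a0 we would like to bound.
n, mu, a0 = 50, 0.2, 0.4

# Under the two-point model X_i in {0, 1} with P(X_i = 1) = mu (the most dispersed
# [0, 1]-valued distribution with mean mu), the sample mean reaches a0 only if
# at least ceil(n * a0) of the X_i equal 1.
k = math.ceil(n * a0)
epsilon = binom.sf(k - 1, n, mu)   # P(Binomial(n, mu) >= k)

print(f"generous bound epsilon = {epsilon:.3e}")
```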

14.
Alignment of silkworms as seismic animal anomalous behavior (SAAB) and electromagnetic model of a fault: a theory and laboratory experiment. MOTO...

15.
Fermat's variational principle states that the signal propagates from point S to R along a curve which renders Fermat's functional I(l) stationary. Fermat's functional I(l) depends on curves l which connect points S and R, and represents the travel time from S to R along l. In seismology, it is mostly expressed by the integral I(l) = ∫ L(x_k, x_k′) du, taken along curve l, where L(x_k, x_k′) is the relevant Lagrangian, x_k are coordinates, u is a parameter used to specify the position of points along l, and x_k′ = dx_k/du. If the Lagrangian L(x_k, x_k′) is a homogeneous function of the first degree in x_k′, Fermat's principle is valid for an arbitrary monotonic parameter u. We then speak of the first-degree Lagrangian L^(1)(x_k, x_k′). It is shown that the conventional Legendre transform cannot be applied to the first-degree Lagrangian L^(1)(x_k, x_k′) to derive the relevant Hamiltonian H^(1)(x_k, p_k) and the Hamiltonian ray equations, because the Hessian determinant of the transform vanishes identically for first-degree Lagrangians L^(1)(x_k, x_k′). The Lagrangian must be modified so that the Hessian determinant differs from zero. A modification to overcome this difficulty is proposed in this article, based on second-degree Lagrangians L^(2). The parameter u along the curves is taken to correspond to the travel time τ, and the second-degree Lagrangian L^(2)(x_k, ẋ_k) is then introduced by the relation L^(2)(x_k, ẋ_k) = [L^(1)(x_k, ẋ_k)]², with ẋ_k = dx_k/dτ. The second-degree Lagrangian L^(2)(x_k, ẋ_k) yields the same Euler-Lagrange equations for rays as the first-degree Lagrangian L^(1)(x_k, ẋ_k); its Hessian determinant, however, does not vanish identically. Consequently, the Legendre transform can then be used to compute the Hamiltonian H^(2)(x_k, p_k) from the Lagrangian L^(2)(x_k, ẋ_k), and vice versa, and the Hamiltonian canonical equations can be derived from the Euler-Lagrange equations. Both L^(2)(x_k, ẋ_k) and H^(2)(x_k, p_k) can be expressed in terms of the wave propagation metric tensor g_ij(x_k, ẋ_k), which depends not only on position x_k but also on the direction of the vector ẋ_k. It is defined in a Finsler space, in which distance is measured by the travel time. It is shown that the standard form of the Hamiltonian derived from the elastodynamic equation and representing the eikonal equation, which has been broadly used in the seismic ray method, corresponds to the second-degree Lagrangian L^(2)(x_k, ẋ_k), not to the first-degree Lagrangian L^(1)(x_k, ẋ_k). It is also shown that the relations L^(2)(x_k, ẋ_k) = const and H^(2)(x_k, p_k) = const are valid at any point of the ray and that they represent the group velocity surface and the slowness surface, respectively. All procedures and derived equations are valid for general anisotropic inhomogeneous media and for general curvilinear coordinates x_i. To make certain procedures and equations more transparent and objective, the simpler cases of isotropic and ellipsoidally anisotropic media are briefly discussed as special cases.
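The degeneracy argument can be checked symbolically in a simple isotropic 2-D example, where the first-degree Lagrangian is L^(1) = sqrt(ẋ² + ẏ²)/v(x, y) (travel time per unit parameter) and the second-degree Lagrangian is its square. The sketch below, which assumes this isotropic form purely for illustration, verifies that the velocity Hessian of L^(1) is singular while that of L^(2) is not, so only L^(2) admits the Legendre transform.

```python
import sympy as sp

x, y, xd, yd = sp.symbols("x y xdot ydot", real=True, positive=True)
v = sp.Function("v")(x, y)               # local propagation velocity

L1 = sp.sqrt(xd**2 + yd**2) / v          # first-degree homogeneous Lagrangian
L2 = L1**2                               # second-degree Lagrangian

# Hessians with respect to the "velocity" variables (xdot, ydot).
H1 = sp.hessian(L1, (xd, yd))
H2 = sp.hessian(L2, (xd, yd))

print(sp.simplify(H1.det()))             # 0 -> Legendre transform degenerate
print(sp.simplify(H2.det()))             # 4/v**4 (nonzero) -> transform well defined
```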

16.
The magnetoconvection problem under the magnetostrophic approximation is investigated as the nonlinear regime is entered. The model consists of a fluid-filled sphere, internally heated and rapidly rotating in the presence of a prescribed, axisymmetric, toroidal magnetic field. For simplicity only a dipole parity and a single azimuthal wavenumber (m = 2) are considered here. The leading-order nonlinearity at small amplitude is the geostrophic flow U_g, which is introduced into the previously linear model (Walker and Barenghi, 1997a, b). Walker and Barenghi (1997c) considered parameter space above critical and found that U_g acts as an equilibration mechanism for moderately supercritical solutions. However, for solutions well above critical a Taylor state is approached and the system can no longer equilibrate. More importantly, in the context of this paper, subcritical solutions were found. Here subcritical solutions are considered in more detail. It was found that, at the critical value of the modified Rayleigh number, Rm (a measure of the maximum amplitude of the generated geostrophic flow) is strongly dependent on the Elsasser number, which defines the strength of the prescribed toroidal field. Rm at this critical point proves to be the key measure in determining how far into the subcritical regime the system can advance.

17.
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. Using the correspondence principle of the theory of rheologic mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains ε_xx(r, t), ε_yy(r, t) and ε_zz(r, t), and the bulk strain θ(r, t) at an arbitrary point (x, y, z) in the three directions of the X, Y and Z axes, produced by a three-dimensional inclusion in a semi-infinite rheologic medium defined by the standard linear rheologic model. After computing the spatial-temporal variation of the bulk strain produced at the ground surface by such a spherical rheologic inclusion, interesting results are obtained, suggesting that the bulk strain produced by a hard inclusion changes with time in three stages (α, β, γ) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent-term precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
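For readers unfamiliar with the "standard linear rheologic model" invoked above, the sketch below evaluates the creep compliance of one common parameterization of it (a spring in series with a Kelvin-Voigt element); the moduli and viscosity are illustrative numbers, and the code is not the paper's derivation, only a reminder of the time dependence that the correspondence principle introduces.

```python
import numpy as np

def sls_creep_compliance(t, e1=30e9, e2=60e9, eta=1e18):
    """Creep compliance J(t) of a standard linear solid modelled as a spring (E1)
    in series with a Kelvin-Voigt element (E2, eta). Units Pa and Pa*s are illustrative."""
    t = np.asarray(t, dtype=float)
    return 1.0 / e1 + (1.0 / e2) * (1.0 - np.exp(-e2 * t / eta))

# Instantaneous (elastic) and progressively relaxed compliances under constant stress.
t = np.array([0.0, 1e7, 1e8, 1e9])  # seconds
print(sls_creep_compliance(t))
```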

18.
Recently, substantial progress has been made in the detection and observation of non-aqueous phase liquids (NAPLs) in the subsurface using different experimental techniques. However, there is still a lack of appropriate direct methods to measure the saturation of NAPL (θ_NAPL). This paper provides a guide for estimating θ_NAPL and the water content (θ_w) in unsaturated and saturated sand based on direct measurements of the soil dielectric constant (K_a) and electrical conductivity (σ_a) using time domain reflectometry (TDR). The results show that the previously used dielectric mixing model fails to predict θ_NAPL in the case of a four-phase system. A new methodology is suggested and exemplified by showing that the measured K_a gives an accurate estimate of θ_NAPL for a three-phase system, while in a four-phase system both σ_a and K_a need to be measured. The results show that, using the suggested methodology, accurate predictions of θ_w (R² = 0.9998) and of θ_NAPL lower than 0.20 m³ m⁻³ (average R² = 0.9756) are possible.
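As background to the dielectric-mixing discussion, the sketch below inverts a CRIM-type (complex refractive index) mixing rule for θ_NAPL in a water-saturated three-phase sand system, given a measured bulk dielectric constant; the phase permittivities, porosity and the choice of the CRIM rule itself are illustrative assumptions, not the calibration developed in the paper.

```python
import numpy as np

# Illustrative phase dielectric constants and porosity (assumptions, not the paper's values).
K_SOLID, K_WATER, K_NAPL = 4.7, 80.0, 2.0
POROSITY = 0.38

def theta_napl_saturated(ka_measured, porosity=POROSITY):
    """Invert a CRIM-type mixing rule, sqrt(Ka) = (1-phi)*sqrt(Ks) + theta_w*sqrt(Kw)
    + theta_n*sqrt(Kn), for theta_n in a water-saturated three-phase system
    (solid + water + NAPL, so theta_w = phi - theta_n)."""
    rhs = np.sqrt(ka_measured) - (1.0 - porosity) * np.sqrt(K_SOLID) - porosity * np.sqrt(K_WATER)
    return rhs / (np.sqrt(K_NAPL) - np.sqrt(K_WATER))

# Example: a hypothetical TDR reading of Ka = 20 for the bulk medium.
print(f"theta_NAPL ~ {theta_napl_saturated(20.0):.3f} m3/m3")
```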

19.
In this paper we suggest that a conditional estimator/predictor of rockburst probability (and rockburst hazard), P_T(t), can be approximated by the formula P_T(t) = P_1(θ_1)···P_N(θ_N)·P_T^dyn(t), where P_T^dyn(t) is a time-dependent probability of rockburst given only the predicted seismic energy parameters, while the P_i(θ_i) are amplifying coefficients due to local geologic and mining conditions, as defined by the Expert Method of (rockburst) Hazard Evaluation (MRG) known in the Polish mining industry. All the elements of the formula are (approximately) calculable on-line, and the resulting P_T value satisfies the inequalities 0 ≤ P_T(t) ≤ 1. As a result, the hazard space (0–1) can always be divided into smaller subspaces (e.g., 0–10⁻⁵, 10⁻⁵–10⁻⁴, 10⁻⁴–10⁻³, 10⁻³–1), possibly labelled with symbols (e.g., A, B, C, D, …) called “hazard states”, which spares the prediction users from dealing with probabilities directly. The estimator P_T can be interpreted as a formal statement of the (reformulated) Comprehensive Method of Rockburst State of Hazard Evaluation, well known in the Polish mining industry. The estimator P_T is natural, logically consistent and physically interpretable. Owing to its full formalization, it can easily be generalized to incorporate relevant information from other sources/methods.
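A minimal sketch of the product formula and of the mapping of P_T onto named hazard states; the coefficient values and the dynamic probability are placeholders, and the state boundaries simply follow the example subspaces quoted above.

```python
import bisect

def rockburst_hazard(p_dyn, amplifiers):
    """P_T(t) = P_1(theta_1) * ... * P_N(theta_N) * P_dyn_T(t), clipped to [0, 1]."""
    p = p_dyn
    for a in amplifiers:
        p *= a
    return min(max(p, 0.0), 1.0)

# Hazard-state boundaries following the example subspaces in the abstract.
BOUNDS = [1e-5, 1e-4, 1e-3]          # splits (0, 1) into four subspaces
STATES = ["A", "B", "C", "D"]

def hazard_state(p_t):
    return STATES[bisect.bisect_right(BOUNDS, p_t)]

# Hypothetical example: dynamic probability with three local amplifying coefficients.
p_t = rockburst_hazard(p_dyn=2e-5, amplifiers=[1.5, 2.0, 0.8])
print(p_t, hazard_state(p_t))
```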

20.
Two entities of importance in hydrological droughts, viz. the longest duration, L_T, and the largest magnitude, M_T (in standardized terms), over a desired time period (which could also correspond to a specific return period) T, have been analysed for weekly flow sequences of Canadian rivers. The analysis has been carried out in terms of week-by-week standardized values of flow sequences, designated as SHI (standardized hydrological index). The SHI sequence is truncated at the median level for identification and evaluation of the expected values of the above random variables, E(L_T) and E(M_T). SHI sequences tended to be strongly autocorrelated and are modelled as autoregressive order-1, order-2, or autoregressive moving average order-1,1. The drought model built on the theorem of extremes of random numbers of random variables was found to be less satisfactory for the prediction of E(L_T) and E(M_T) on a weekly basis. However, the model has worked well on a monthly (weakly Markovian) and an annual (random) basis. An alternative procedure based on a second-order Markov chain model provided satisfactory prediction of E(L_T). Parameters such as the mean, standard deviation (or coefficient of variation), and lag-1 serial correlation of the original weekly flow sequences (obeying a gamma probability distribution function) were used to estimate the simple and first-order drought probabilities through closed-form equations. Second-order probabilities have been estimated based on the original flow sequences as well as the SHI sequences, utilizing a counting method. E(M_T) can be predicted as the product of the drought intensity (which obeys the truncated normal distribution) and E(L_T) (which is based on a mixture of first- and second-order Markov chains).

Citation Sharma, T. C. & Panu, U. S. (2010) Analytical procedures for weekly hydrological droughts: a case of Canadian rivers. Hydrol. Sci. J. 55(1), 79–92.
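To illustrate the kind of quantity E(L_T) denotes, the sketch below simulates a two-state (drought / no-drought) first-order Markov chain for weekly SHI below or above the median and estimates the expected longest below-median run over a T-week horizon; the transition probability and horizon are illustrative, and the paper's actual procedure (second-order chains and closed-form equations) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def longest_run(states):
    """Length of the longest consecutive run of 1s (drought weeks) in a 0/1 sequence."""
    best = current = 0
    for s in states:
        current = current + 1 if s else 0
        best = max(best, current)
    return best

def expected_longest_drought(T_weeks=52 * 30, p_dd=0.75, n_sim=500):
    """Monte Carlo estimate of E(L_T) for a symmetric two-state first-order Markov chain:
    P(drought -> drought) = p_dd and P(wet -> drought) = 1 - p_dd, which keeps the
    marginal drought probability at 0.5 (truncation at the median)."""
    runs = []
    for _ in range(n_sim):
        s = np.empty(T_weeks, dtype=int)
        s[0] = rng.integers(0, 2)
        for t in range(1, T_weeks):
            p_next_drought = p_dd if s[t - 1] == 1 else 1.0 - p_dd
            s[t] = 1 if rng.random() < p_next_drought else 0
        runs.append(longest_run(s))
    return float(np.mean(runs))

print(f"E(L_T) ~ {expected_longest_drought():.1f} weeks")
```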
