Two different goals in fitting straight lines to data are to estimate a true linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating true straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables all influence the degree of agreement; for example, differences between the two line-fitting methods decrease as the error in X becomes small relative to the error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA; again, the difference diminishes as the error in X shrinks relative to that in Y. With respect to estimation of slope and intercept and prediction of Y, agreement between the Monte Carlo results and large-sample theory was very good for sample sizes of 100 and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples.
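The OLS-versus-SA comparison above can be illustrated with a small Monte Carlo sketch. The structural-analysis method is closely related to errors-in-variables regression, so the sketch below uses Deming regression with a known error-variance ratio as a stand-in; the slope, intercept, error levels, and sample sizes are hypothetical, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    # ordinary least-squares slope (minimizes vertical residuals only)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def deming_slope(x, y, lam=1.0):
    # errors-in-variables (Deming) slope, lam = var(error in Y) / var(error in X)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    d = syy - lam * sxx
    return (d + np.sqrt(d * d + 4.0 * lam * sxy * sxy)) / (2.0 * sxy)

true_slope, true_intercept = 2.0, 1.0
n, trials = 100, 2000
sx = sy = 0.5                     # equal error s.d. in X and Y, so lam = 1

ols_est, eiv_est = [], []
for _ in range(trials):
    xi = rng.uniform(0.0, 10.0, n)                  # true, error-free X
    x = xi + rng.normal(0.0, sx, n)                 # observed X (with error)
    y = true_intercept + true_slope * xi + rng.normal(0.0, sy, n)
    ols_est.append(ols_slope(x, y))
    eiv_est.append(deming_slope(x, y))

# OLS is attenuated toward zero when X carries error; Deming is not
print(np.mean(ols_est), np.mean(eiv_est))
```

When X carries measurement error, the OLS slope is biased toward zero while the errors-in-variables estimate stays near the true slope, consistent with the finding that the two methods converge as the error in X shrinks.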
The elastic properties of a physical model representing a damaged rock matrix were studied using a square lattice deformed under tensile stress. The elastic modulus M of such a system varies in agreement with percolation theory as M ∝ |x − x_c|^f, where x is the damage parameter, x_c the threshold value of the damage parameter, and f ≈ 3.6. At x ≈ x_c the scale dependence of M can be expressed as M ∝ L^(−f/ν), where L is the size of the sample and ν the correlation exponent in percolation theory. The experimental results are of interest in assessing elastic properties in earthquake focal zones and fault zones in general.
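The power-law scaling M ∝ |x − x_c|^f can be recovered from modulus data by a straight-line fit in log-log coordinates. The sketch below uses noise-free synthetic data; the threshold x_c and prefactor are hypothetical, and only the exponent f ≈ 3.6 comes from the abstract.

```python
import numpy as np

# synthetic elastic-modulus data following M = C * (xc - x)**f below threshold
f_true, xc, C = 3.6, 0.5, 1.0        # exponent from the text; xc and C are invented
x = np.linspace(0.0, 0.45, 20)       # damage parameter values below xc
M = C * (xc - x) ** f_true

# the exponent is the slope of log M against log(xc - x)
f_est, _ = np.polyfit(np.log(xc - x), np.log(M), 1)
print(round(f_est, 3))
```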
This work is devoted to the physico-chemical study of cadmium and lead interaction with diatom–water interfaces for two marine planktonic (Thalassiosira weissflogii = TW, Skelet onema costatum = SC) and two freshwater periphytic species (Achnanthidium minutissimum = AMIN, Navicula minima = NMIN), combining adsorption measurements with surface complexation modeling. Reversible adsorption experiments were performed at 20 °C after 3 h of exposure as a function of pH, metal concentration in solution, and ionic strength. While the shape of the pH-dependent adsorption edge is similar among all four diatom species, the constant-pH adsorption isotherms and maximal binding capacities differ. These observations allowed us to construct a surface complexation model for cadmium and lead binding by diatom surfaces that postulates a constant capacitance of the electric double layer and considers Cd and Pb complexation with mainly carboxylic and, partially, silanol groups. The parameters of this model agree with previous acid–base titration results and allow quantitative reproduction of all adsorption experiments.
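The constant-pH adsorption isotherms mentioned above are commonly summarized by a maximal binding capacity and an affinity constant. The full constant-capacitance surface complexation model involves electrostatics and acid–base chemistry; as a much simpler stand-in, the sketch below fits a Langmuir isotherm by linearization, with all constants invented for illustration.

```python
import numpy as np

# Langmuir isotherm: q = qmax * K * C / (1 + K * C)
qmax_true, K_true = 2.0, 5.0                        # hypothetical capacity and affinity
C = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])  # dissolved metal, arbitrary units
q = qmax_true * K_true * C / (1.0 + K_true * C)     # adsorbed amount (noise-free)

# linearized form: C/q = C/qmax + 1/(K*qmax), so a straight line recovers both
slope, intercept = np.polyfit(C, C / q, 1)
qmax_est = 1.0 / slope
K_est = 1.0 / (intercept * qmax_est)
print(qmax_est, K_est)
```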
Ice and snow have often helped physicists understand the world. By contrast, it took them a very long time to understand the flow of glaciers. Naturalists only began to take an interest in glaciers at the beginning of the 19th century, during the last phase of glacier advances. Once the flow of glaciers from upslope became obvious, it was necessary to understand how the ice flowed. It was only in 1840, the year of the discovery of the Antarctic ice sheet by Dumont d'Urville, that two books laid the basis for the future field of glaciology: one by Agassiz on the ice age and glaciers, the other by canon Rendu on glacier theory. During the 19th century, the ice-flow theories adopted by most leading scientists were based on melting/refreezing processes. Even though the word 'fluid' was first used in 1773 to describe ice, more than 130 years would go by before the laws of fluid mechanics were applied to ice. Even now, the parameter of Glen's law, which glaciologists use to model ice deformation, can take a very wide range of values, so that no unique ice-flow law has yet been defined. To cite this article: F. Rémy, L. Testut, C. R. Geoscience 338 (2006).
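Glen's law, mentioned in the closing sentence, relates strain rate to stress as a power law, ε̇ = Aτⁿ. The numbers below are rough order-of-magnitude literature values for temperate ice, given for illustration only; the wide spread of reported exponents is precisely the "no unique flow law" point made above.

```python
# Glen's flow law: edot = A * tau**n (shear strain rate from shear stress)
A = 2.4e-24      # rate factor near 0 deg C, Pa^-3 s^-1 (order of magnitude)
n = 3            # exponent; commonly taken as 3, but reported values vary widely
tau = 1.0e5      # driving shear stress, Pa (a typical ~1 bar value)

edot = A * tau ** n
print(edot)      # shear strain rate in s^-1
```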
Argillaceous rocks cover about one third of the Earth's surface. The major engineering problems encountered with weak- to medium-strength argillaceous rocks include slaking, erosion, slope instability, settlement, and reduction in strength. One of the key properties for classifying and predicting the behavior of such rocks is slake durability. The concept of the slake durability index (SDI) has been the subject of numerous studies investigating the factors that affect its numerical value. In this regard, this paper approaches the matter by evaluating the effects of the overall shape and surface roughness of the test material on the resulting slake durability indices.
For this purpose, different rock types (marl, clayey limestone, tuff, sandstone, weathered granite) were broken into chunks, intentionally shaped as angular, subangular, or rounded, and tested for slake durability. Before testing the aggregate pieces of each rock type, their surface roughness was determined using the fractal dimension. Despite the variation in final SDI test results (Id values), the rounded aggregate groups plot in a relatively narrow range, whereas a greater scatter was obtained for the angular and subangular groups. The best results are obtained with well-rounded samples having the lowest fractal values. An attempt was made to link the surface roughness analytically to the Id parameter, and an empirical relationship was proposed, along with a chart of fractal values of surface roughness to serve as a guide for slake durability tests. The proposed method is useful when well-rounded aggregates are not available: in such conditions, the approximate fractal value of the surface-roughness profile of the test aggregates can be read from the chart and substituted into the empirical relation to obtain the corrected Id value. The results presented here pertain to the particular rock types used in this study, and care should be taken when applying these methods to other rock types.
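The fractal characterization of surface roughness used above can be sketched with a simple box-counting estimate on a height profile. The test signals, scales, and normalization below are invented for illustration; the paper's actual profiling procedure is not specified here.

```python
import numpy as np

def box_count_dimension(profile, scales=(2, 4, 8, 16, 32)):
    # estimate the fractal dimension of a 1-D roughness profile by box counting
    n = len(profile)
    z = (profile - profile.min()) / (np.ptp(profile) + 1e-12)  # normalize height
    t = np.linspace(0.0, 1.0, n)                               # normalize length
    counts = []
    for s in scales:
        # count occupied cells of an s-by-s grid over the unit square
        boxes = {(min(int(ti * s), s - 1), min(int(zi * s), s - 1))
                 for ti, zi in zip(t, z)}
        counts.append(len(boxes))
    # dimension = slope of log N(s) against log s (box size 1/s)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
smooth = np.sin(np.linspace(0.0, 2.0 * np.pi, 1024))  # smooth profile, D near 1
rough = rng.normal(size=1024)                         # jagged profile, D nearer 2
d_smooth = box_count_dimension(smooth)
d_rough = box_count_dimension(rough)
print(d_smooth, d_rough)
```

A smoother profile yields a lower dimension, matching the observation that the best-behaved samples have the lowest fractal values.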
A new earthquake catalogue for central, northern and northwestern Europe with unified Mw magnitudes, in part derived from chi-square maximum-likelihood regressions, forms the basis for seismic hazard calculations for the Lower Rhine Embayment. Uncertainties in the various input parameters are introduced, a detailed seismic zonation is performed, and a recently developed technique for estimating the maximum expected magnitude is adopted and quantified. Applying the logic-tree algorithm, hazard values with error estimates are obtained as fractile curves (median, 16% and 84% fractiles, and mean) plotted for pga (peak ground acceleration; median values for Cologne 0.7 and 1.2 m/s² for probabilities of exceedance of 10% and 2%, respectively, in 50 years), 0.4 s (0.8 and 1.5 m/s²) and 1.0 s (0.3 and 0.5 m/s²) pseudoaccelerations, and intensity (I0 = 6.5 and 7.2). For the ground-motion parameters, rock foundation is assumed. For the area near Cologne and Aachen, maps show the median and 84% fractile hazard for 2% probability of exceedance in 50 years based on pga (maximum median value about 1.5 m/s²) and the 0.4 s (>2 m/s²) and 1.0 s (about 0.8 m/s²) pseudoaccelerations, all for rock. The pga 84% fractile map also has a maximum value above 2 m/s² and shows similarities with the median map for 0.4 s. In all maps, the maximum values fall within the area 6.2–6.3° E and 50.8–50.9° N, i.e., east of Aachen.
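The logic-tree fractiles reported above (median, 16% and 84% fractiles, and mean) come from weighting the hazard estimates of the individual branches. A minimal sketch of that bookkeeping for a single site and return period follows; the branch values and weights are invented and are not the study's actual logic tree.

```python
import numpy as np

# hypothetical logic-tree branches: each has a weight and a pga value (m/s^2)
# for one probability of exceedance; values are illustrative only
branch_pga = np.array([0.55, 0.65, 0.70, 0.80, 0.95])
weights    = np.array([0.10, 0.20, 0.40, 0.20, 0.10])

def weighted_fractile(values, weights, q):
    # fractile of a discrete weighted distribution:
    # first value whose cumulative weight reaches q
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, q)]

f16 = weighted_fractile(branch_pga, weights, 0.16)
median = weighted_fractile(branch_pga, weights, 0.50)
f84 = weighted_fractile(branch_pga, weights, 0.84)
mean = float(np.dot(weights, branch_pga))
print(f16, median, f84, mean)
```

In a full analysis this is repeated for every ground-motion level to produce the fractile hazard curves and maps described in the abstract.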