Similar Documents
20 similar documents found.
1.
Monthly scenarios of relative humidity (RH) were obtained for the Malaprabha river basin in India using a statistical downscaling technique. Large-scale atmospheric variables (air temperature and specific humidity at 925 mb, surface air temperature and latent heat flux) were chosen as predictors. The predictor variables were extracted from (1) the National Centers for Environmental Prediction reanalysis dataset for the period 1978–2000, and (2) simulations of the third-generation Canadian Coupled Global Climate Model for the period 1978–2100. The objective of this study was to investigate the uncertainties in the regional RH scenarios arising from the choice of emission scenarios (A1B, A2, B1 and COMMIT) and of the predictors selected. Multilinear regression with stepwise screening is the downscaling technique used in this study. To study the uncertainty in the regional RH scenarios due to the selected predictors, eight sets of predictors were chosen and a downscaling model was developed for each set. Performance of the downscaling models in the baseline period (1978–2000) was assessed using three measures: (1) the Nash–Sutcliffe error estimate (Ef), (2) the mean absolute error (MAE), and (3) the product-moment correlation (P). Results show that performance varies between 0.59 and 0.68 for Ef, 0.42 and 0.50 for MAE, and 0.77 and 0.82 for P. Cumulative distribution functions were prepared from the regional RH scenarios developed for combinations of predictors and emission scenarios. Results show a variation of 1 to 6% RH across the scenarios developed for the combinations of predictor sets in the baseline period. For the future period (2001–2100), a variation of 6 to 15% RH was observed across the combinations of emission scenarios and predictors. The variation was highest for the A2 scenario and lowest for the COMMIT and B1 scenarios.
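The core of the downscaling workflow above — a multilinear regression from large-scale predictors to RH, scored with the Nash–Sutcliffe efficiency (Ef) and MAE — can be sketched as follows. The data, coefficients, and series length are hypothetical, and the stepwise predictor screening is omitted:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash–Sutcliffe efficiency Ef: 1 is a perfect fit; 0 means the model
    does no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def downscale_fit(predictors, rh):
    """Ordinary least-squares multilinear regression of RH on the
    large-scale predictors. Returns (coefficients, fitted series)."""
    X = np.column_stack([np.ones(len(rh)), predictors])
    beta, *_ = np.linalg.lstsq(X, rh, rcond=None)
    return beta, X @ beta

# Hypothetical monthly series: two predictors driving RH, plus noise.
rng = np.random.default_rng(0)
pred = rng.normal(size=(120, 2))
rh = 60.0 + 3.0 * pred[:, 0] - 2.0 * pred[:, 1] + rng.normal(0.0, 1.0, 120)
beta, fitted = downscale_fit(pred, rh)
ef = float(nash_sutcliffe(rh, fitted))
mae = float(np.mean(np.abs(rh - fitted)))
print(round(ef, 2), round(mae, 2))
```

On these clean synthetic data Ef comes out close to 1; on real monthly RH series the abstract reports Ef between 0.59 and 0.68.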

2.
Spatial fracture intensity (P32, fracture area per unit volume) is an important characteristic of a jointed rock mass. Although it can hardly ever be measured directly, P32 can be modeled from available geological information such as spatial data on the fracture network. Flow in a mass composed of low-permeability hard rock is controlled by joints and fractures. In this article, models were developed from a geological data set of fractured andesite on LanYu Island (Taiwan), where a site is being investigated for possible disposal of low-level and intermediate-level radionuclide waste. Three different types of conceptual models of the spatial fracture intensity distribution were generated: an Enhanced Baecher model (EBM), a Levy–Lee Fractal model (LLFM) and a Nearest Neighborhood model (NNM). Modeling was conducted on a 10 × 10 × 10 m synthetic fractured block. Simulated flow was forced by a 1% hydraulic gradient between two vertical xz faces of the cube (from North to South), with the other boundaries set to no-flow conditions. The resulting flow vectors are very sensitive to spatial fracture intensity (P32); flow velocity increases with higher fracture intensity. R-squared values of the regression between velocity (V/Vmax) and fracture intensity (P32) are 0.293, 0.353, and 0.408 for the linear fit and 0.028, 0.08, and 0.084 for the power fit. Higher R2 values are positively linked with structural features, but the relation between velocity and fracture intensity is non-linear. Possible flow channels are identified by stream-traces in the Levy–Lee Fractal model.

3.
Geologists unfamiliar with the application of probability theory to discrete data in other fields of research are usually acquainted with only three discrete theoretical frequency distributions: Poisson, binomial, and negative binomial distributions. In some situations these distributions may fail to adequately describe a set of experimental data. Other distributions such as the Poisson with zeros, Neyman type A, logarithmic with zeros, Poisson-binomial, and Thomas double Poisson together with the more common Poisson, binomial, and negative binomial form a generalized subset of discrete theoretical distributions, one of which should fit almost any experimental data set. A computer program is presented which allows testing of any combination of these distributions against observed discrete data.
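A minimal illustration of the kind of test the abstract describes — fitting one member of the subset (here the plain Poisson, by maximum likelihood) and computing a chi-square statistic against the observed frequencies. The data are simulated, and the other seven distributions are not implemented:

```python
import math
import random
from collections import Counter

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def fit_poisson_chisq(counts):
    """Fit a Poisson by maximum likelihood (lambda = sample mean) and
    return (lambda, chi-square statistic of observed vs. expected)."""
    n = len(counts)
    lam = sum(counts) / n
    freq = Counter(counts)
    chi2 = 0.0
    for k in range(max(counts) + 1):
        expected = n * poisson_pmf(k, lam)
        chi2 += (freq.get(k, 0) - expected) ** 2 / expected
    return lam, chi2

def rpois(lam):
    """Crude Poisson sampler (Knuth's method), for the illustration only."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(1)
data = [rpois(2.0) for _ in range(500)]
lam_hat, chi2 = fit_poisson_chisq(data)
print(round(lam_hat, 2), round(chi2, 1))
```

A full program in the abstract's spirit would repeat this for each candidate distribution and compare the goodness-of-fit statistics.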

4.
Commonly used methods for calculating component scores are reviewed. Means, variances, and the covariance structures of the resulting sets of scores are examined both by calculations based on a large set of electron microprobe analyses of melilite (supplied by D. Velde) and by a survey of recent geological applications of principal component analysis. Most of the procedures used to project raw data into the new vector space yield uncorrelated scores. In exceptions so far encountered, correlations between scores seem to have been occasioned by the use of unstandardized variables with components calculated from a correlation matrix. In a number of cases substantive interpretations of such correlations have been proposed. A different set of correlations results for the same data if scores are computed from standardized variables and components based on the covariance matrix. If unscaled components are rotated by the varimax procedure, the result is a return to the original space. In the work reported here, nevertheless, scores calculated from varimax-rotated scaled vectors are uncorrelated.

5.
Payne, R. J., Lamentowicz, M. & Mitchell, E. A. D. 2010: The perils of taxonomic inconsistency in quantitative palaeoecology: experiments with testate amoeba data. Boreas, 10.1111/j.1502-3885.2010.00174.x. ISSN 0300-9483. A fundamental requirement of quantitative palaeoecology is consistent taxonomy between a modern training set and palaeoecological data. In this study we assess the possible consequences of violation of this requirement by simulating taxonomic errors in testate amoeba data. Combinations of easily confused taxa were selected, and data manipulated to reflect confusion of these taxa; transfer functions based on unmodified data were then applied to these modified data sets. Initially these experiments were carried out one error at a time using four modern training sets; subsequently, multiple errors were separately simulated both in four modern training sets and in four palaeoecological data sets. Some plausible taxonomic confusions caused major biases in reconstructed values. In the case of two palaeoecological data sets, a single consistent taxonomic error was capable of changing the pattern of environmental reconstruction beyond all recognition, totally removing any real palaeoenvironmental signal. The issue of taxonomic consistency is one that many researchers would rather ignore; our results show that the consequences of this may ultimately be severe.

6.
The weight-percent values of four mineralogic variables (quartz, K feldspar, color index, and muscovite) for 10 sets of granitic rocks (20–50 samples in each set) from magmatic units of the Singhbhum granite were used for (1) computation of the Mahalanobis generalized distance functions (D2) between all pairs of the 10 sets, (2) testing the significance of the difference between the multivariate means, and (3) computation of the linear discriminant functions between all possible pairs of the sets. The 10 data sets are for six magmatic units which belong to three successive but closely related phases of emplacement. The multivariate means for all sets are significantly different except for those between two of the sets of phase I. Cluster analysis on the basis of the D2 values enables the 10 sets to be placed into four distinct groups. Group A includes two subgroups, one of which consists of two sets representing typical members of phase I; the other subgroup includes two sets which are typical of phase II. Group B includes two sets which are typical of phase III. The other four sets do not group with the typical representatives of the three phases, probably because of certain special conditions of their emplacement. A separate series of D2 computations from the same data, but excluding the color index, was unsuccessful in making the four aberrant sets group with the typical members of their respective phases. Efficient LDFs could be determined for discrimination between most pairs of the 10 sets of granitic rocks.
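A sketch of the Mahalanobis generalized distance D2 between two sample sets, using a pooled within-group covariance matrix (one common convention; the abstract does not give the computation details). The two "granite sets" below are synthetic:

```python
import numpy as np

def mahalanobis_d2(x, y):
    """Mahalanobis generalized distance D2 between the means of two sample
    groups, using the pooled within-group covariance matrix."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False)
                + (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    d = x.mean(axis=0) - y.mean(axis=0)
    return float(d @ np.linalg.solve(s_pooled, d))

rng = np.random.default_rng(7)
# Two hypothetical sets of modal analyses: four variables, shifted means.
a = rng.normal([30.0, 20.0, 5.0, 3.0], 2.0, size=(40, 4))
b = rng.normal([25.0, 25.0, 8.0, 3.0], 2.0, size=(40, 4))
d2 = mahalanobis_d2(a, b)
print(round(d2, 2))
```

The same pooled matrix also yields the linear discriminant function coefficients (S⁻¹ applied to the mean difference), which is step (3) of the abstract.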

7.
York's (1969) method of regression, determining the best-fit line to data with errors in both variables using a least-squares solution, has become an integral part of isotope geochemistry. Although other methods agree with York's best-fit line (e.g., maximum likelihood), there is little agreement on the standard-error estimates for slope and intercept values. The reasons for this are differing levels of approximation used to compute the standard error, doubts concerning procedures for determining a confidence interval once the standard error has been estimated, and a typographical error in the original publication. This paper examines York's method of regression and standard errors of the parameters of a best-fit line. A very accurate method for determining the standard error in slope and intercept values is introduced, which eliminates the need to multiply the standard-error estimate by the goodness-of-fit parameter known as MSWD. In addition, a derivation of a fixed-intercept method of regression is introduced, and interpretations of MSWD and use of the t-adjustment in confidence intervals are discussed. The accuracy of the standard-error computations is determined by comparing the results to slope and intercept statistics generated from several thousand Monte Carlo regressions using synthetic 40Ar/39Ar inverse isochron data.
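York's iterative best-fit line can be sketched as below. This version assumes uncorrelated x and y errors and omits the standard-error and MSWD machinery that the paper focuses on:

```python
import numpy as np

def york_fit(x, y, sx, sy, tol=1e-12, maxit=100):
    """Iterative York (1969) regression: best-fit line when both x and y
    carry measurement errors (error correlation assumed zero here).
    Returns (slope, intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    wx = 1.0 / np.asarray(sx, float) ** 2
    wy = 1.0 / np.asarray(sy, float) ** 2
    b = np.polyfit(x, y, 1)[0]                # ordinary least-squares start
    for _ in range(maxit):
        W = wx * wy / (wx + b ** 2 * wy)      # point weights for slope b
        xbar = np.sum(W * x) / np.sum(W)
        ybar = np.sum(W * y) / np.sum(W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = ybar - b * xbar
    return b, a

# Synthetic isochron-like data lying exactly on y = 2x + 1, equal errors:
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
b, a = york_fit(x, y, sx=np.full(5, 0.1), sy=np.full(5, 0.1))
print(round(b, 6), round(a, 6))
```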

8.
In this research, the equilibrium sorption of Zn(II) and Cu(II) by kaolinite was described using the Freundlich, Langmuir and Redlich–Peterson isotherms, via both linear and non-linear regression analyses. For the non-linear regression method, the best-fitting model was evaluated using six different error functions, namely the coefficient of determination (r2), the hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), the sum of the squared errors (SSE) and the sum of the absolute errors (EABS). The examination of error estimation methods showed that the Langmuir model provides the best fit to the experimental equilibrium data for both linear and non-linear regression analyses. The SSE function was found to be the better option for minimizing the error distribution between the experimental equilibrium data and the predicted two-parameter isotherms. For the three-parameter isotherm, HYBRID was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and the theoretical isotherms. The non-linear method was found to be the more appropriate method for estimating the isotherm parameters.
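A sketch of non-linear isotherm fitting by direct minimization of the SSE error function. A brute-force grid search stands in for a proper optimizer, and the equilibrium data are hypothetical, generated from assumed Langmuir parameters:

```python
import numpy as np

def langmuir(ce, qm, kl):
    """Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)."""
    return qm * kl * ce / (1.0 + kl * ce)

def fit_langmuir_sse(ce, qe, qm_grid, kl_grid):
    """Brute-force non-linear fit minimizing the SSE error function
    (sum of squared errors). Returns (qm, KL, sse)."""
    best = (None, None, np.inf)
    for qm in qm_grid:
        for kl in kl_grid:
            sse = float(np.sum((qe - langmuir(ce, qm, kl)) ** 2))
            if sse < best[2]:
                best = (qm, kl, sse)
    return best

# Hypothetical equilibrium data from assumed qm = 12 mg/g, KL = 0.5 L/mg.
ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
qe = langmuir(ce, 12.0, 0.5)
qm, kl, sse = fit_langmuir_sse(ce, qe,
                               np.arange(8.0, 16.01, 0.1),
                               np.arange(0.1, 1.01, 0.01))
print(round(qm, 1), round(kl, 2))
```

Swapping the SSE line for HYBRID, MPSD, ARE or EABS reproduces the comparison of error functions described in the abstract.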

9.
Computing with functions on the rotation group is a task carried out in various areas of application. When it comes to approximation, kernel-based methods are a suitable tool to handle such functions. In this paper, we present an algorithm which allows us to evaluate linear combinations of functions on the rotation group, as well as a truly fast algorithm to sum up radial functions on the rotation group. These approaches, based on nonequispaced FFTs on SO(3), take O(M + N) arithmetic operations for M and N arbitrarily distributed source and target nodes, respectively. We investigate a selection of radial functions and give explicit theoretical error bounds, as well as numerical examples of approximation errors. Moreover, we provide an application of our method, namely kernel density estimation from electron backscatter diffraction (EBSD) data, a problem relevant in texture analysis.

10.
A model of the influence of water and salt on vegetation (IWSV) was developed to evaluate their influence on plant species. The main function of this model was to calculate a comprehensive index value for evaluating the suitability of plant growth. This model consists of five explanatory variables (vadose zone moisture content, vadose zone salinity, vadose zone lithology, depth to the water table, and groundwater mineralization) and two response variables (plant species and their cover). A set of independent data on three plant species, Artemisia ordosica, Salix psammophila, and Carex enervis, which are dominant species in the Mu Us Desert of northern China, were used to validate the model. Validation results show an overall correct prediction for the distribution of these three species. The results demonstrated that the IWSV model can be a useful tool for groundwater management and nature conservation in a semi-arid desert region, especially for predicting the vegetation distribution in areas with groundwater extraction.

11.
Dimensional Reduction of Pattern-Based Simulation Using Wavelet Analysis
A pattern-based simulation technique using wavelet analysis (wavesim) is proposed for the simulation of categorical and continuous variables. Patterns are extracted by scanning a training image with a template and then storing them in a pattern database. Dimension reduction of the patterns in the database is performed by wavelet decomposition at a certain scale, and the approximate sub-band is used for pattern database classification. The classification is performed by the k-means clustering algorithm, and classes are represented by a class prototype. For the simulation of categorical variables, the conditional cumulative distribution function (ccdf) for each class is generated based on the frequency of the individual categories at the central node of the template. During the simulation process, the similarity of the conditioning data event to the class prototypes is measured using the L2-norm. When simulating categorical variables, the ccdf of the best-matched class is used to draw a pattern from that class; when continuous variables are simulated, a random pattern is drawn from the best-matched class. Several examples of conditional and unconditional simulation with two- and three-dimensional data sets show that the spatial continuity of geometric features and shapes is well reproduced. A comparative study with the filtersim algorithm shows that wavesim performs better than filtersim in all examples. A full-field case study at the Olympic Dam base metals deposit, South Australia, simulates the lithological rock-type units as categorical variables. Results show that the proportions of the various rock-type units in the hard data are well reproduced when similar to those in the training image; when rock-type proportions in the training image and hard data differ, the results show a compromise between the two.

12.
The engineering-geological environment, like any other geological environment, can be described by a number of variables. Clustering of those variables, or of their quantities, makes it possible to divide the environment into taxonomic types. It is also possible to determine factors which are functions of those variables and which characterize the environment or its parts. In this paper we apply R- and Q-mode factor analysis to engineering-geological research, concentrating on establishing criteria for subdividing an environment by its engineering-geological characteristics.

13.
All methods proposed to date for determining surface temperature history from temperature profiles measured in boreholes are based on the assumption that the borehole is a hole in a semi-infinite homogeneous earth of constant diffusivity, and more or less ignore the fact that the mathematical formulation of this problem is improperly posed. This assumption, which frequently represents a gross oversimplification of the situation, was originally introduced as a computational expedient. We propose a computational procedure which is independent of this assumption and takes the improperly posed nature of the problem into account. The essence of the method is: (a) determine the set of borehole profiles corresponding to a given set of linearly independent surface temperature history functions, and then (b) take the coefficients of the least-squares fit of these borehole profiles to the given borehole data as the coefficients in the linear combination of surface temperature history functions which defines the required approximation to the surface temperature history. An analogous procedure can be used to determine the lower boundary condition for the heat-flow problem if the surface-temperature history is assumed to be known. Results of numerical experimentation indicate the extent to which the method is viable in practice.
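Steps (a) and (b) can be sketched generically: build a matrix whose columns are the borehole profiles of the basis history functions, then least-squares-fit the measured profile. The "forward model" below is a made-up exponential kernel, not a real heat-conduction solution:

```python
import numpy as np

rng = np.random.default_rng(3)
depths = np.linspace(0.0, 500.0, 50)

# Step (a): one synthetic "borehole profile" per basis surface-temperature
# history function (the exponential kernels are an illustrative stand-in).
A = np.column_stack([np.exp(-depths / (100.0 * (k + 1))) for k in range(4)])

# A "measured" profile: a linear combination of basis profiles plus noise.
c_true = np.array([1.5, -0.5, 0.25, 0.0])
profile = A @ c_true + rng.normal(0.0, 0.01, len(depths))

# Step (b): the least-squares coefficients of the basis profiles define the
# reconstructed surface-temperature history.
c_hat, *_ = np.linalg.lstsq(A, profile, rcond=None)
misfit = float(np.max(np.abs(A @ c_hat - profile)))
print(c_hat.shape, misfit < 0.1)
```

Because the basis profiles are nearly collinear, the data misfit stays small even when individual coefficients are poorly constrained — exactly the improper posedness the abstract stresses.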

14.
Soil erosion is one of the most widespread processes of degradation. The erodibility of a soil is a measure of its susceptibility to erosion and depends on many soil properties. The soil erodibility factor varies greatly over space and is commonly estimated using the revised universal soil loss equation. Neglecting information about estimation uncertainty may lead to improper decision-making. One geostatistical approach to spatial analysis is sequential Gaussian simulation, which draws alternative, equally probable, joint realizations of a regionalised variable. Differences between the realizations provide a measure of spatial uncertainty and allow an error analysis to be carried out. The objective of this paper was to assess the model output error of soil erodibility resulting from the uncertainties in the input attributes (texture and organic matter). The study area covers about 30 km2 (Calabria, southern Italy). Topsoil samples were collected at 175 locations within the study area in 2006, and the main chemical and physical soil properties were determined. As soil textural size fractions are compositional data, the additive log-ratio (alr) transformation was used to remove the non-negativity and constant-sum constraints on the compositional variables. A Monte Carlo analysis was performed, which consisted of drawing a large number (500) of identically distributed input attributes from the multivariable joint probability distribution function. We incorporated spatial cross-correlation information through joint sequential Gaussian simulation, because the model inputs were spatially correlated. The erodibility model was then estimated for each of the 500 joint realisations of the input variables, and the ensemble of model outputs was used to infer the erodibility probability distribution function. This approach also allowed delineation of the areas characterised by greater uncertainty, suggesting efficient supplementary sampling strategies for further improving the precision of K-value predictions.
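The compositional-data step can be sketched as follows: alr-transform the texture fractions, perturb them in the unconstrained alr space (a crude stand-in for the joint sequential Gaussian simulation), back-transform, and push each realization through an erodibility model. The texture values and the "erodibility proxy" are made up:

```python
import numpy as np

def alr(comp):
    """Additive log-ratio transform of a composition (last part is the
    divisor), removing the constant-sum constraint."""
    comp = np.asarray(comp, float)
    return np.log(comp[..., :-1] / comp[..., -1:])

def alr_inv(z):
    """Back-transform alr coordinates to a composition summing to 1."""
    e = np.exp(np.concatenate([z, np.zeros(z.shape[:-1] + (1,))], axis=-1))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
mean_texture = np.array([0.40, 0.35, 0.25])   # sand, silt, clay (hypothetical)
# 500 Monte Carlo realizations: Gaussian noise is legitimate in alr space.
z = alr(mean_texture) + rng.normal(0.0, 0.1, size=(500, 2))
texture = alr_inv(z)
k_proxy = 0.2 + 0.3 * texture[:, 1] - 0.1 * texture[:, 2]  # made-up model
print(texture.shape, round(float(k_proxy.std()), 4))
```

The spread of `k_proxy` across realizations is the Monte Carlo estimate of the output uncertainty; the real study additionally imposes spatial cross-correlation between realizations.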

15.
In this contribution, a methodology is reported for building an interval fuzzy model of the pollution index PLI (a composite index using relevant heavy metal concentrations) with magnetic parameters as input variables. In general, modelling based on fuzzy set theory is designed to mimic how the human brain classifies imprecise information or data. The "interval fuzzy model" reported here, based on fuzzy logic and the arithmetic of fuzzy numbers, calculates an "estimation interval" and appears to be an adequate mathematical tool for this nonlinear problem. For this model, fuzzy c-means clustering is used to partition the data, from which the membership functions and rules are built. In addition, interval arithmetic is used to obtain the fuzzy intervals. The studied data sets are examples of pollution by different anthropogenic sources in two different study areas: (a) soil samples collected in Antarctica and (b) road-deposited sediments collected in Argentina. The datasets comprise magnetic and chemical variables, and in both cases the relevant variables were selected: magnetic concentration-dependent variables, magnetic feature-dependent variables and one chemical variable. The model output gives an estimation interval whose width depends on the data density around the measured values. The results not only show satisfactory agreement between the estimation interval and the data, but also provide valuable information from the rule analysis that allows an understanding of the magnetic behaviour of the studied variables under different conditions.

16.
In this paper, we study the problem of constructing a smooth approximant of a surface defined by the equation z = f(x1, x2), the data being a finite set of patches on this surface. This problem occurs, for example, after geophysical processing such as migration of time-maps or depth-maps. The usual algorithms to solve this problem either pick points on the patches to obtain Lagrange data, or try to obtain local junctions between patches. But the first method does not use the continuous aspect of the data, and the second does not perform well in producing a globally regular approximant (C1 or more). As an approximant of f, we propose a discrete smoothing spline belonging to a suitable piecewise polynomial space. The originality of the method lies in the fidelity criterion used to fit the data, which takes their particular nature (surface patches) into account: the idea is to define a function that minimizes the volume located between the data patches and the function, and which is globally Ck. We first present the theoretical foundations of the method, and then give numerical results on real data.

17.

This paper presents a reliability analysis of foundation failure against bearing capacity using concepts from fuzzy set theory. A surface strip footing is considered for the analysis, and the bearing capacity is estimated using the conventional finite element method (FEM). The spatial variability of the variables is taken into consideration to capture the physical randomness of the soil parameters for an isotropic field. The variation of the probability of failure (Pf) with a varying limiting applied pressure (q) is presented for different coefficients of variation (COV) of the variables and different scales of fluctuation (θ). The results reveal that the friction angle of the soil (φ) is the most influential parameter among the variables. Further, the influence of the scale of fluctuation (θ) on the probability of failure (Pf) is examined. It is observed that, for a particular COV of φ, a higher value of θ predicts a higher Pf, whereas Pf increases as the COV of φ increases for a particular θ value. Finally, a comparison study is carried out to verify the viability of the present method, which is found to compare well with another reliability method (the First Order Reliability Method) to a reasonably good extent.
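The COV-versus-Pf behaviour described above can be illustrated with a toy Monte Carlo: sample the friction angle φ, push it through a monotone capacity function (illustrative only, not the paper's FEM model), and count failures against a limiting pressure:

```python
import numpy as np

def prob_failure(mean_phi, cov_phi, q_lim, n=20000, seed=0):
    """Monte Carlo sketch of Pf: sample phi (degrees) with the given COV,
    evaluate a toy bearing-capacity function, count capacity < q_lim.
    The capacity formula is made up for illustration."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(mean_phi, cov_phi * mean_phi, n)
    capacity = 50.0 * np.exp(0.1 * phi)       # toy monotone capacity (kPa)
    return float(np.mean(capacity < q_lim))

pf_low = prob_failure(30.0, 0.05, q_lim=900.0)
pf_high = prob_failure(30.0, 0.20, q_lim=900.0)
print(pf_low, pf_high)
```

Even this toy model reproduces the reported trend: a larger COV of φ widens the capacity distribution and raises Pf at a fixed limiting pressure.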

18.
19.
Semivariogram parameters are estimated by a weighted least-squares method and a jackknife kriging method. The weighted least-squares method is investigated by varying the lag increment and the maximum lag used in the fit. The jackknife kriging method minimizes the variance of the jackknifing error as a function of the semivariogram parameters. The effects of data sparsity and the presence of a trend are investigated using 400-, 200-, and 100-point synthetic data sets. When the two methods yield significantly different results, more data may be needed to determine the semivariogram parameters reliably, or a trend may be present in the data.
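A sketch of the weighted least-squares approach: fit a spherical semivariogram model to empirical values, weighting each lag by its pair count (one common weighting scheme; Cressie weights are another). A grid search stands in for a proper optimizer, and the empirical values are synthetic:

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical semivariogram model with range a."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, g)

def wls_fit(lags, gamma_emp, npairs, a_grid, sill_grid, nugget=0.0):
    """Weighted least squares over a parameter grid, weighting each lag
    by its pair count. Returns the best (range, sill)."""
    best, best_err = None, np.inf
    for a in a_grid:
        for s in sill_grid:
            err = float(np.sum(npairs *
                               (gamma_emp - spherical(lags, nugget, s, a)) ** 2))
            if err < best_err:
                best, best_err = (a, s), err
    return best

lags = np.arange(10.0, 110.0, 10.0)
gamma_emp = spherical(lags, 0.0, 2.0, 60.0)   # synthetic "empirical" values
npairs = np.linspace(400, 50, len(lags))      # fewer pairs at long lags
a_hat, s_hat = wls_fit(lags, gamma_emp, npairs,
                       np.arange(30.0, 90.1, 5.0),
                       np.arange(1.0, 3.01, 0.25))
print(a_hat, s_hat)
```

Repeating the fit with different lag increments and maximum lags, as the abstract describes, shows how sensitive the estimated range and sill are to those choices.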

20.
Spatial declustering weights
Because of autocorrelation and spatial clustering, the data within a given dataset do not all have the same statistical weight for the estimation of global statistics such as the mean, variance, or quantiles of the population distribution. A measure of redundancy (or nonredundancy) of any given regionalized random variable Z(uα) within any given set (of size N) of random variables is proposed. It is defined as the ratio of the determinant of the N × N correlation matrix to the determinant of the (N − 1) × (N − 1) correlation matrix excluding the random variable Z(uα). This ratio measures the increase in redundancy when the random variable Z(uα) is added to the (N − 1) remainder. It can be used as a declustering weight for any outcome (datum) z(uα). When the redundancy matrix is a kriging covariance matrix, the proposed ratio is the cross-validation simple kriging variance. The covariance of the uniform scores of the clustered data is proposed as a redundancy measure robust with respect to data clustering.
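The determinant-ratio redundancy measure can be computed directly. In this sketch the correlation matrix comes from an assumed exponential covariance on made-up 1-D coordinates, and normalizing the ratios into weights is an illustrative choice:

```python
import numpy as np

def redundancy_ratio(corr, i):
    """Ratio of det of the N x N correlation matrix to det of the
    (N-1) x (N-1) matrix with variable i removed: a small ratio flags a
    highly redundant (clustered) datum."""
    sub = np.delete(np.delete(corr, i, axis=0), i, axis=1)
    return float(np.linalg.det(corr) / np.linalg.det(sub))

# Hypothetical configuration: two clustered points (1.0, 1.2) and one
# isolated point (5.0), correlated via an exponential model.
coords = np.array([1.0, 1.2, 5.0])
corr = np.exp(-np.abs(coords[:, None] - coords[None, :]))
ratios = np.array([redundancy_ratio(corr, i) for i in range(len(coords))])
weights = ratios / ratios.sum()   # normalization is an illustrative choice
print(np.round(weights, 3))
```

As expected, the isolated point receives the largest declustering weight and the two clustered points share reduced weights.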


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.), 京ICP备09084417号