Similar Documents
20 similar documents found (search time: 843 ms)
1.
The extended method of Q-mode factor analysis developed by Miesch for data matrices with constant row sums is generalized to data matrices with variable row sums. The algorithm provided makes it possible to compute factor scores in the metric of the original data, to compute goodness-of-fit statistics, and to model geological systems that are not constrained by constant row sums.
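As a sketch of the idea rather than Miesch's published algorithm, a truncated SVD of the raw (unclosed) data matrix already yields Q-mode loadings plus factor scores expressed in the metric of the original variables, with no constant-row-sum constraint; the function name and demo data below are illustrative:

```python
import numpy as np

def qmode_factors(X, k):
    """Q-mode factor sketch via SVD: no constant-row-sum constraint.

    Returns sample (Q-mode) loadings, factor scores in the metric of the
    original variables, the k-factor reconstruction of the data, and a
    goodness-of-fit statistic based on the reconstruction residuals."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = U[:, :k] * s[:k]      # one row per sample
    scores = Vt[:k]                  # one row per factor, original metric
    X_hat = loadings @ scores        # k-factor model of the data
    fit = 1.0 - ((X - X_hat) ** 2).sum() / (X ** 2).sum()
    return loadings, scores, X_hat, fit

rng = np.random.default_rng(0)
X = rng.random((10, 5))              # row sums vary freely
_, _, X_hat, fit = qmode_factors(X, k=5)
```

With k equal to the full rank the reconstruction is exact; smaller k trades fit for parsimony, and the residual-based statistic quantifies the trade.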

3.
It is mathematically possible to extract both R-mode and Q-mode factors simultaneously (RQ-mode factor analysis) by invoking the Eckart–Young theorem. The resulting factors will be expressed in measures determined by the form of the scalings applied to the original data matrix. Unless the measures for both solutions are meaningful for the problem at hand, the factor results may be misleading or uninterpretable. Correspondence analysis uses a symmetrical scaling of both rows and columns to achieve measures of proportional similarity between objects and variables. In the literature, the resulting similarity is a χ² distance appropriate for the analysis of enumerated data, the original application of correspondence analysis. Justification for the use of this measure with interval or ratio data is unconvincing, but a minor modification of the scaling procedure yields the profile similarity, which is an appropriate measure. Symmetrical scaling of rows and columns is unnecessary for RQ-mode factor analysis. If the data are scaled so that the minor product W′W is the correlation matrix, the major product WW′ is expressed in the Euclidean distances between objects. Therefore, RQ-mode factor analysis can be performed so that the R-mode is a principal components solution and the Q-mode is a principal coordinates solution. For applications where the magnitudes of differences are important, this approach will yield more interpretable results than correspondence analysis.
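A minimal sketch of that last point, assuming column standardization so that W′W (up to the n − 1 divisor) is the correlation matrix: a single SVD of W then supplies both R-mode principal-component loadings and Q-mode principal coordinates whose Euclidean inter-object distances match those of the scaled data. The function name `rq_mode` is chosen here:

```python
import numpy as np

def rq_mode(X, k):
    """RQ-mode sketch: standardize columns, then one SVD of W = U S V'.

    R-mode: loadings V S / sqrt(n-1) reproduce the correlation matrix.
    Q-mode: coordinates U S preserve Euclidean distances between objects."""
    n = X.shape[0]
    W = (X - X.mean(0)) / X.std(0, ddof=1)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r_loadings = Vt.T[:, :k] * s[:k] / np.sqrt(n - 1)
    q_coords = U[:, :k] * s[:k]
    return r_loadings, q_coords, W

rng = np.random.default_rng(1)
X = rng.random((12, 4))
R, Q, W = rq_mode(X, k=4)
```

Because the two solutions come from the same decomposition, the R-mode and Q-mode results are mutually consistent by construction.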

4.
    
Many data sets can be viewed as collections of samples representing mixtures of a relatively small number of end members. When the end members are present in the sample set, the algorithm QMODEL by Klovan and Miesch can efficiently determine their proportionate contributions. EXTENDED QMODEL by Full, Ehrlich, and Klovan was designed to deduce the composition of realistic end members when the end members are not represented by samples. However, in the presence of high levels of random variation, or of outliers that do not belong to the system of interest, EXTENDED QMODEL may not be reliable, inasmuch as it depends largely on extreme values for the definition of an initial mixing polyhedron. FUZZY QMODEL utilizes the fuzzy c-means algorithm of Bezdek to provide an alternative initial mixing polyhedron. This algorithm draws on the collective properties of all the data rather than on outliers, and so can produce suitable solutions in the presence of noisy or messy data points.
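A compact sketch of the Bezdek fuzzy c-means iteration that FUZZY QMODEL leans on (this is the generic algorithm, not the QMODEL code itself; the demo data are invented):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Bezdek fuzzy c-means sketch: alternate weighted-centroid and
    membership updates.  Soft memberships let every point vote, so the
    resulting centers are far less sensitive to outliers than the
    vertices of an extreme-value mixing polyhedron."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)         # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))      # standard membership update
        U /= U.sum(1, keepdims=True)
    return centers, U

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)),    # cluster near (0, 0)
               rng.normal(10.0, 0.5, (30, 2))])  # cluster near (10, 10)
centers, U = fuzzy_cmeans(X, c=2)
```

The cluster centers, driven by all points collectively, can then seed an initial mixing polyhedron in place of extreme data values.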

5.
Hundreds of samples, each with 17 variables, collected from the coalfields of the major coal-bearing strata across China (except Tibet and Taiwan) were used in this study. The dry, ash-free volatile matter (V_r) and the caking index (G_RI) were chosen by means of correlation analysis and stepwise discriminant analysis as the major indices of a new classification. By optimum partitioning, the boundary values on the ordinate (G_RI) and the abscissa (V_r) were determined for the classification system. A V_r–G_RI classification scheme diagram was thus formed, and bituminous coal was divided into nine classes. Correspondence analysis reduced the dimensionality of the sample space without losing the initial information. The trend on the factor surface of the samples shows that the classification obtained from correspondence analysis conforms to the V_r–G_RI classification result, further verifying the reliability of classification by the two indices. At the same time, certain relationships between the properties of a great variety of coals and their attributes can be explained. Hence, bituminous coal classification becomes more scientific, reasonable, and practical than before.
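A two-index scheme of this kind reduces to simple boundary lookups on the classification diagram; the cut points below are placeholders for illustration, not the published boundary values:

```python
def classify_bituminous(V_r, G_RI):
    """V_r-G_RI classification sketch: boundary values on the volatile
    matter (dry, ash-free, %) and caking index axes partition the plane
    into a 3 x 3 grid of nine classes.  Cut points are illustrative."""
    v_band = 0 if V_r < 20.0 else (1 if V_r < 28.0 else 2)
    g_band = 0 if G_RI < 50.0 else (1 if G_RI < 65.0 else 2)
    return 3 * v_band + g_band + 1       # class number 1..9
```

Each (V_r, G_RI) pair falls into exactly one of the nine cells, which is what makes the diagram usable as a routine classification tool.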

6.
An interesting feature of recently published experimental data on the high-temperature deformation of Solnhofen limestone and Carrara marble is that, for either rock, it is not possible to fit isothermal points on a log strain-rate versus log stress plot to a single straight line, as required for a flow law of the familiar form ė = A exp(−H/RT)σⁿ. Instead, for Solnhofen limestone the data can be well fitted by two straight-line segments, suggesting a change from a power law with a high stress exponent at high stress to a power law with a low stress exponent at low stress. However, the constant strain-rate data are even better fitted by a single composite flow law formed by adding the two power laws: a single flow law operates throughout, but the strain-rate contributions of the two components change in response to changing stress. Published microstructural evidence supports this composite flow-law approach.

For Carrara marble the constant strain-rate data provide much poorer control, and it is possible to propose several composite flow laws (formed by adding two or three separate power-law components), all of which correspond reasonably well with the data. Stress-relaxation data are then used both to test these flow models and to suggest others. Flow models that are broadly compatible with the constant strain-rate and stress-relaxation data can then be tested against microstructural measurements.

It is suggested that, by treating a set of composite flow laws as alternative hypotheses to be tested against all available data, a more realistic rheological model will result. Composite flow laws have the major advantage of being able to represent a smooth transition from one dominant deformation mechanism to another, irrespective of how wide the transition zone may be.
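The additive construction can be sketched numerically; the parameter values below are invented for illustration and are not fitted to the limestone or marble data:

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def composite_rate(sigma, T, A1, H1, n1, A2, H2, n2):
    """Composite flow law: two power-law mechanisms acting in parallel,
    so their strain-rate contributions simply add."""
    return (A1 * np.exp(-H1 / (R_GAS * T)) * sigma ** n1
            + A2 * np.exp(-H2 / (R_GAS * T)) * sigma ** n2)

# Illustrative parameters: an n = 7 mechanism overtakes an n = 2
# mechanism as stress rises, giving a smooth bend (not a kink) on the
# log strain-rate vs. log stress plot.
sigma = np.logspace(0, 3, 50)                      # stress, MPa
rate = composite_rate(sigma, 1000.0, 1e-10, 2e5, 7.0, 1e-2, 2e5, 2.0)
slope = np.gradient(np.log(rate), np.log(sigma))   # local stress exponent
```

The local slope rises continuously from the low-stress exponent to the high-stress exponent, which is exactly the smooth mechanism transition the abstract describes.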

7.
Changes in the stress field of an aquifer system induced by seismotectonic activity may change the mixing ratio of groundwaters of different compositions in a well, leading to hydrochemical signals that could in principle be related to discrete earthquake events. Owing to the complexity of the interactions and the multitude of factors involved, identifying such relationships is a difficult task. In this study we present an empirical statistical approach suitable for analysing whether there is an interdependency between changes in the chemical composition of monitoring wells and the regional seismotectonic activity of an area. To allow a rigorous comparison with the hydrochemistry, the regional earthquake time series was aggregated into a univariate time series. This was realized by expressing each earthquake as a parameter "e" that takes into consideration both an energetic parameter (the magnitude of the seismic event) and spatial parameters (the position of the epicentre/hypocentre relative to the monitoring site). The earthquake and hydrochemical time series were synchronised by aggregating the e-parameters into "earthquake activity" functions E, which take into account the time of sampling relative to the earthquakes that occurred in the area. A variety of different "e" parameters were considered in defining the aggregation functions. The set of earthquake functions E was grouped by means of factor analysis in order to select a limited number of significant and representative earthquake functions E for the subsequent relationship analysis with the multivariate hydrochemical data set. From the hydrochemical data a restricted number of hydrochemical factors were extracted; factor scores make it possible to represent and analyse the variation of the hydrochemical factors as a function of time.
Finally, regression analysis was used to detect those hydrochemical factors that correlate significantly with the aggregated earthquake functions. This methodological approach was tested with a hydrochemical data set collected over two years from a monitored deep well in the seismically active Vrancea region, Romania. Three of the hydrochemical factors were found to correlate significantly with the considered earthquake activities. A screening with different time combinations revealed that the correlations are strongest when the cumulative seismicity over several weeks is considered. The case study also showed that the character of the interdependency sometimes depends on the geometrical distribution of the earthquake foci. By using aggregated earthquake information it was possible to detect interrelationships that could not have been identified by analysing only the relations between single geochemical signals and single earthquake events. Furthermore, the approach makes it possible to determine the influence of different seismotectonic patterns on the hydrochemical composition of the sampled well. The method is suitable as a decision instrument for assessing whether a monitoring site should be included in a monitoring network within a complex earthquake-prediction strategy.
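The aggregation step might be sketched as follows; the particular form of "e" (an energy-like magnitude term damped by distance) and the trailing-window aggregation are assumptions chosen for illustration, since the study deliberately tests many variants:

```python
import numpy as np

def e_param(mag, dist_km):
    """One candidate 'e': radiated-energy proxy (log10 E ~ 1.5 M) damped
    by squared epicentral distance.  Purely illustrative weighting."""
    return 10.0 ** (1.5 * mag) / dist_km ** 2

def activity_E(sample_days, eq_days, eq_e, window_days):
    """Earthquake-activity function E: sum the e-parameters of all
    events in a trailing window before each hydrochemical sampling date,
    synchronising the dense seismic series with the sparse sampling."""
    sample_days = np.asarray(sample_days)[:, None]
    eq_days = np.asarray(eq_days)[None, :]
    in_window = (eq_days <= sample_days) & (eq_days > sample_days - window_days)
    return in_window @ eq_e

eq_days = np.array([1.0, 5.0, 9.0])
eq_e = e_param(np.array([4.0, 5.0, 6.0]), np.array([10.0, 20.0, 30.0]))
E = activity_E([10.0, 30.0], eq_days, eq_e, window_days=5.0)
```

A family of such E functions (different windows, different "e" weightings) is what the factor analysis then screens down to a representative few.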

8.
The application of R-mode principal components analysis to a set of closed chemical data is described using previously published chemical analyses of rocks from Gough Island. Different measures of similarity have been used, and the results are compared by calculating the correlation coefficients between each element of the extracted eigenvectors and each of the original variables. These correlations provide a convenient measure of the contribution of each variable to each of the principal components. The choice of similarity measure (variance-covariance or correlation coefficient) should reflect the nature of the data and the investigator's view of the proper weighting of the variables: by their sample variances, or equally. If the data are appropriate for principal components analysis, then the Chayes and Kruskal concept of hypothetical open and closed arrays and the expected closure correlations would seem to be useful in defining the structure to be expected in the absence of significant departures from randomness. If the data are not multivariate normally distributed, it is possible that the principal components will not be independent. This may result in significant nonzero covariances between various pairs of principal components.
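The choice between the two similarity measures, and the variable-component correlations used to judge contributions, can be sketched directly (generic PCA, not the paper's Gough Island computation; `r_mode_pca` is a name used here):

```python
import numpy as np

def r_mode_pca(X, use_correlation=True):
    """R-mode principal components from either similarity measure.

    Correlation matrix: variables weighted equally.
    Variance-covariance matrix: variables weighted by sample variance."""
    Xc = X - X.mean(0)
    if use_correlation:
        Xc = Xc / X.std(0, ddof=1)
    S = Xc.T @ Xc / (len(X) - 1)
    evals, evecs = np.linalg.eigh(S)
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    # correlation of each original variable with each component score is
    # a convenient measure of that variable's contribution
    comp_scores = Xc @ evecs
    m = X.shape[1]
    contrib = np.corrcoef(np.hstack([X, comp_scores]), rowvar=False)[:m, m:]
    return evals, evecs, contrib

rng = np.random.default_rng(3)
X = rng.random((40, 5))
evals, evecs, contrib = r_mode_pca(X, use_correlation=True)
```

Switching `use_correlation` to `False` reweights the variables by their sample variances, which is the whole substance of the similarity-measure choice.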

9.
    
Compositional variations in the lavas of Parícutin volcano, Mexico, have been examined by an extended method of Q-mode factor analysis. Each sample composition is treated as a vector projected from an original eight-dimensional space into a three-dimensional vector system. The compositions represented by the vectors after projection are closely similar to the original compositions, except for Na2O and Fe2O3. The vectors in the three-dimensional system cluster about three different planes that represent three stages of compositional change in the Parícutin lavas. Because chemical data on the compositions of the minerals in the lavas are presently lacking, interpretations of the mineral phases that may have been involved in fractional crystallization are based on CIPW norm calculations. Changes during the first stage are attributed largely to the fractional crystallization of plagioclase and olivine. Changes during the second stage can be explained by the separation of plagioclase and pyroxene. Changes during the final stage may have resulted mostly from the assimilation of a granitic material, as previously proposed by R. E. Wilcox.

10.
Factor analysis utilizing textural data from 81 bottom samples was used to analyze the surficial sediments covering a 40,000-sq-km area, or roughly one input data point per 500 sq km. The surficial geology of the area studied is complex, however, as some map units are only 1 km wide in places. Under these circumstances it is noteworthy that factor analysis nonetheless converges toward a reasonable geological solution. If the premise is accepted that factor analysis provides the solution best fitted to the data, the geologist has carried the research one step further and is left with the problem of interpreting the factor results correctly. In this experiment, the interpretation of the factors representing the gravel and the mud is relatively simple, although the two factors representing sands are more difficult to explain. The proper interpretation of factors leads naturally to an inquiry into the optimum number of factors to use, but this problem can be solved objectively by considering the factor loadings.

11.
Load-displacement analysis of drilled shafts can be accomplished with the "t-z" method, which models the soil resistance along the length and at the tip of the drilled shaft as a series of springs. For non-linear soil springs, the governing differential equation that describes the soil-structure interaction may be discretized into a set of algebraic equations using finite difference methods. This system of algebraic equations may be solved to determine the load-displacement behavior of the drilled shaft when subjected to compression or pullout. By combining the finite difference method with Monte Carlo simulation techniques, a probabilistic load-displacement analysis can be conducted. The probabilistic analysis is advantageous compared with standard factor-of-safety design because uncertainties in the shaft-soil interface and tip properties can be quantified independently. This paper presents a reliability analysis of drilled-shaft behavior that combines the finite difference technique for analyzing non-linear load-displacement behavior with the Monte Carlo simulation method. As a result, we develop probabilistic relationships for drilled-shaft design for both total stress (undrained) and effective stress (drained) parameters. The results are presented in the form of factors of safety or resistance factors suitable for serviceability design of drilled shafts.
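As a drastically simplified sketch (a rigid shaft with one lumped hyperbolic side spring and one tip spring, rather than the paper's full finite difference discretization along the shaft), Monte Carlo sampling of uncertain side and tip capacities yields a serviceability exceedance probability; every parameter value below is invented:

```python
import numpy as np

def rigid_shaft_displacement(Q, Rs_ult, Rt_ult, zs_ref=0.005, zt_ref=0.02):
    """Solve R(z) = Q for head displacement z (m) by bisection, with
    hyperbolic mobilisation R_ult * z / (z_ref + z) for side and tip."""
    if Q >= Rs_ult + Rt_ult:
        return np.inf                      # load exceeds ultimate capacity
    R = lambda z: Rs_ult * z / (zs_ref + z) + Rt_ult * z / (zt_ref + z)
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if R(mid) < Q else (lo, mid)
    return 0.5 * (lo + hi)

# Monte Carlo over lognormal side/tip capacities (illustrative statistics)
rng = np.random.default_rng(1)
Rs = rng.lognormal(np.log(2000.0), 0.2, 5000)   # side capacity, kN
Rt = rng.lognormal(np.log(1000.0), 0.3, 5000)   # tip capacity, kN
z = np.array([rigid_shaft_displacement(1500.0, a, b) for a, b in zip(Rs, Rt)])
p_exceed = np.mean(z > 0.025)   # P(head settlement > 25 mm) at Q = 1500 kN
```

Separating the side and tip uncertainties, as here, is what lets the probabilistic treatment go beyond a single lumped factor of safety.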

12.
This is the first application of minimum residuals (minres), a type of factor analysis, to the study of hypersthene minerals from a mafic norite formation at the Strathcona Mine near Sudbury, Ontario. Minres, because it yields the highest communalities for some variables, is preferred to other types of factoring solutions, including a common factor model with Chayes' null correlations as factor input. Oblique rotation of factors is rejected as a model for statistical and geochemical reasons. A five-oxide-variable model that determines hypersthene reasonably well is reduced by minres to a two-factor model which is statistically significant. Because of the small number of variables in the analysis, it is difficult to interpret the isolated factors in terms of specific geologic processes. The factors, however, even if surrogate, are linked with substitution phenomena in the hypersthene.

13.
The dominant feature distinguishing one method of principal components analysis from another is the manner in which the original data are transformed prior to the other computations. The only other distinguishing feature of any importance is whether the eigenvectors of the inner product-moment of the transformed data matrix are taken directly as the Q-mode scores, or are scaled by the square roots of their associated eigenvalues and called the R-mode loadings. If the eigenvectors are extracted from the product-moment correlation matrix, the variables, in effect, were transformed by column standardization (zero means and unit variances), and the sum of the p largest eigenvalues divided by the sum of all the eigenvalues indicates the degree to which a model containing p components will account for the total variance in the original data. However, if the data were transformed in any manner other than column standardization, the eigenvalues cannot be used in this way; they can only be used to determine the degree to which the model accounts for the transformed data. Regardless of the type of principal components analysis that is performed, and even whether it is R- or Q-mode, the goodness-of-fit of the model to the original data is given better by the eigenvalues of the correlation matrix than by those of the matrix that was actually factored.
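The correlation-matrix eigenvalue criterion can be shown in a few lines (a generic sketch; `variance_explained` is a name chosen here):

```python
import numpy as np

def variance_explained(X, p):
    """Fraction of total variance in the ORIGINAL data accounted for by
    a p-component model: sum of the p largest eigenvalues of the
    correlation matrix over the sum of all of them (the trace, which
    equals the number of variables)."""
    evals = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    return evals[:p].sum() / evals.sum()

rng = np.random.default_rng(5)
X = rng.random((50, 6))
```

Whatever transform was actually factored, this ratio is computed from the correlation matrix of the untransformed data, which is the abstract's recommendation.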

14.
The engineering-geological environment, like any other geological environment, can be described by a number of variables. Clustering those variables, or clustering their quantities, makes it possible to divide the environment into taxonomic types. It is also possible to determine factors which are functions of those variables and which characterize the environment or its parts. In this paper we apply R- and Q-mode factor analysis to engineering-geological research, concentrating on establishing criteria for subdividing an environment according to its engineering-geological characteristics.

15.
The chemical composition and origin of alkaline granitic rocks in the Keivy area of the Kola Peninsula were investigated. Linear correlation analysis and principal-component analysis were used to determine the interrelations of the major petrogenetic elements in the alkaline granite and the surrounding alkaline metasomatites. The estimated linear correlation coefficients differed between the two rock types, and principal-component analysis of the chemical data revealed three main components influencing the variation of the chemical composition. These factors can be interpreted in terms of petrological processes, which are different for the alkaline granite and for the surrounding metasomatites, indicating a different origin of the rocks.

16.
Geomicrobial and geochemical studies were carried out in Argentina (Patagonia, Chubut Province) on four Au and polymetallic sulfide vein-type deposits. A-horizon soils were analyzed for Bacillus reacting to lecithin [Bacillus L.(+)], Au, and 12 additional elements. At two of the four sampling sites, which exhibit known and relatively simple mineralized structures, Bacillus L.(+) populations are clearly related to the Au, As, Pb, Zn, and Cu sulfide mineralization. In areas containing more complex mineralized structures, the spatial relationship between Bacillus L.(+) and metals in the A horizon is more difficult to interpret. Results of a factor analysis performed on all the analytical data (n = 130) suggest a partial relationship between Bacillus L.(+) and Au-As-Y pedochemical associations located above the known Au mineralization. Bacillus L.(+) was first analyzed in Argentina in December 1994 and re-analyzed in Belgium five to seven months later. Most of the Bacillus contents (85%) of the Belgian tests are higher than those determined in Argentina. The present results, and data from a previous study in Mexico (Melchior et al., 1994a, b), suggest that this may be the result of temperature variations during sample storage between the periods of microbial analysis. From a strictly analytical point of view, the geomicrobial method is not an accurate, reproducible technique. However, Bacillus L.(+) can be used as a microbiological indicator of Au and polymetallic mineralization in a reconnaissance-level regional survey. At a local scale, this microbiological tool should be combined with classical exploration techniques such as soil geochemistry. It is recommended that the collection of all A-horizon samples (for microbial study) be accompanied by B- or C-horizon soils (for potential geochemical study after prioritizing targets), so that a second field sampling program does not have to be undertaken.

17.
Conventional methods of analyzing sonic log data do not always yield accurate information on each velocity segment of a well. It is shown here that the velocity-depth parameters, and the sections of approximately constant velocity, may be more precisely defined by using an exponential spline to model the data.

18.
    
Analysis of empirical data considered to be mixtures of a finite number of end members has recently been a topic of increasing interest. The algorithms EXTENDED CABFAC and QMODEL by Klovan and Miesch (1976) represent a satisfactory solution to this problem if pure end members are captured within the data set, or if the compositions of the true end members are known a priori. Where neither condition is satisfied, the compositions of external end members can, under certain conditions, be deduced from the structure of the data. Described herein is an algorithm termed EXTENDED QMODEL, which defines feasible end members closest to the data envelope. This research was supported in part by a grant from the Office of Naval Research (N00014-78C-0698, Code 483).

19.
A Spatial Analysis Neural Network (SANN) algorithm was applied to the analysis of geospatial data, on the basis of nonparametric statistical analysis and the concepts of traditional artificial neural networks. SANN consists of a number of layers in which the neurons, or nodes, between layers are interconnected successively in a feed-forward direction. The Gaussian kernel function layer has several nodes, and each node has a transfer (activation) function that responds only when the input pattern falls within its receptive field, which is defined by its smoothing parameter, or width. The activation widths are functions of the model's structural parameters: the number of nearest-neighbor points P and a control factor F. The estimation method is based on two operational modes: a training-validation mode, in which the model structure is constructed and validated, and an interpolation mode. In this paper we discuss the effect of varying F and P on the accuracy of the estimation in a two-dimensional domain for different input field sizes, using spatial data on wheat crop yield from eastern Colorado. Crop yield is estimated as a function of the two-dimensional Cartesian coordinates (easting and northing). The results lead to the conclusion that the optimal values of F and P depend on the sample size: for small data sets F=1.5 and P=7, while for large data sets F=2.5 and P=9. In addition, the accuracy of the interpolated field varies with the sample size; as expected, for small sample sizes the interpolated field and its variability may be significantly underestimated.
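A minimal sketch of a SANN-style Gaussian-kernel estimate, assuming (as one plausible reading of the abstract) that each kernel width is the distance to the P-th nearest neighbour scaled by the control factor F; the width rule in the actual model may differ:

```python
import numpy as np

def sann_estimate(x, pts, vals, P=7, F=1.5):
    """Gaussian-kernel estimate at x from the P nearest observations.
    The smoothing width h is F times the P-th neighbour distance, so F
    and P jointly control how local and how smooth the surface is."""
    d = np.linalg.norm(pts - x, axis=1)
    idx = np.argsort(d)[:P]
    h = F * d[idx[-1]] + 1e-12
    w = np.exp(-((d[idx] / h) ** 2))
    return float((w * vals[idx]).sum() / w.sum())

rng = np.random.default_rng(9)
pts = rng.random((100, 2)) * 10.0        # easting, northing
vals = pts[:, 0] + pts[:, 1]             # a simple planar "yield" field
est = sann_estimate(np.array([5.0, 5.0]), pts, vals)
```

Raising F widens every receptive field and smooths the interpolated surface; raising P admits more neighbours into each estimate, which is why their optima shift with sample size.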

20.
Based on analysis of geological and geophysical data, including core observation, thin-section identification, physical properties, scanning electron microscopy, X-ray diffraction, and well logs, 16 factors of sedimentation, diagenesis, and fluid pressure, and their relationships with reservoir physical properties, were analyzed. The results indicate that sedimentation is the internal factor controlling reservoir quality, diagenesis is the external and finally decisive factor, and abnormal fluid pressure is an important factor preserving the quality of deep reservoirs. Quantitative characterization of diagenesis indicates that compaction and dissolution are more important than cementation; they cause porosity changes of −23.6%, +7.7%, and −6.2%, respectively. By optimizing 11 main controlling factors and constructing a reservoir evaluation index (REI) by hierarchical cluster and principal component analysis, a reservoir classification standard was established and the reservoirs were divided into four classes. The studies show that Es32SQ4 consists mainly of classes I and II, while Es32SQ6 is mainly of classes III and II; the favorable zones are the north and south slopes of the Qibei sub-sag and the Liujianfang fault nose. The successful application of this quantitative, comprehensive evaluation in the Qibei area verifies that the method, which involves little subjective judgment, is suitable for low-porosity, low-permeability reservoirs.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号