Similar Literature
20 similar documents found.
1.
An important problem in frequency analysis is the selection of an appropriate probability distribution for a given data sample, a choice generally based on goodness-of-fit tests. A goodness-of-fit test is an effective means of examining how well a sample agrees with an assumed probability distribution as its population. However, goodness-of-fit tests based on empirical distribution functions give equal weight to the differences between the empirical and theoretical distribution functions at all observations. To overcome this drawback, the modified Anderson–Darling test was suggested by Ahmad et al. (1988b). In this study, the critical values of the modified Anderson–Darling test statistic are revised using simulation experiments that extend the shape-parameter ranges of the GEV and GLO distributions, and a power study is performed to assess the performance of the modified Anderson–Darling test. The power study shows that the modified Anderson–Darling test is more powerful than traditional tests such as the χ², Kolmogorov–Smirnov, and Cramér–von Mises tests. In addition, to compare the results of these goodness-of-fit tests, the modified Anderson–Darling test is applied to annual maximum rainfall data in Korea.
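A minimal Python sketch of the simulation approach described above, assuming the classical A² statistic in place of the exact modified (tail-weighted) statistic of Ahmad et al., with synthetic GEV data:

```python
# Sketch: Monte Carlo critical values for an Anderson-Darling-type statistic
# under a fitted GEV parent. Classical A^2 is used; the modified variant of
# Ahmad et al. would replace `a_squared`. All data are synthetic.
import numpy as np
from scipy import stats

def a_squared(x, cdf):
    """Classical Anderson-Darling statistic for a fully specified CDF."""
    n = len(x)
    u = np.clip(cdf(np.sort(x)), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

def mc_critical_value(shape, n, alpha=0.05, n_sim=500, seed=0):
    """Simulate the null distribution of A^2 when parameters are re-estimated."""
    rng = np.random.default_rng(seed)
    sims = np.empty(n_sim)
    for k in range(n_sim):
        sample = stats.genextreme.rvs(shape, size=n, random_state=rng)
        c, loc, sc = stats.genextreme.fit(sample)   # re-fit each replicate
        sims[k] = a_squared(sample, stats.genextreme(c, loc, sc).cdf)
    return np.quantile(sims, 1 - alpha)

print(mc_critical_value(shape=-0.1, n=50))
```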

2.
Modelling air pollution data by the skew-normal distribution
Particulate matter with an aerodynamic equivalent diameter of up to 10 μm is commonly referred to as PM10, and its harmful effects on human health are well known. We model annual means of daily PM10 concentrations in Italy by the skew-normal distribution, giving theoretical as well as empirical motivations for this choice of model. Its adequacy is checked with the Anderson–Darling statistic. The skew-normal distribution fits the data very well and is particularly useful for estimating probabilities of high PM10 concentrations.
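A minimal sketch of the modelling step, using scipy's skewnorm with invented illustrative PM10 values; the Anderson–Darling check is shown in its classical form (a parametric bootstrap would supply the p-value):

```python
# Sketch: fit a skew-normal to annual-mean PM10 values, check fit with an
# Anderson-Darling-type statistic, and estimate an upper-tail probability.
import numpy as np
from scipy import stats

pm10 = np.array([31.2, 28.4, 40.1, 35.6, 29.9, 44.3, 38.7, 33.0, 36.5, 41.8])  # illustrative

a, loc, scale = stats.skewnorm.fit(pm10)
fitted = stats.skewnorm(a, loc, scale)

# classical A^2 against the fitted CDF
u = np.clip(fitted.cdf(np.sort(pm10)), 1e-12, 1 - 1e-12)
i = np.arange(1, len(pm10) + 1)
a2 = -len(pm10) - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

print("A^2 =", a2)
print("P(PM10 > 40 ug/m3) =", fitted.sf(40.0))  # probability of high concentrations
```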

3.
ABSTRACT

Several commonly used nonparametric change-point detection methods are analysed in terms of detection power and the accuracy of the estimated change-point location. The analysis is performed with synthetic data for different sample sizes, two types of change and different magnitudes of change. The methods studied are the Pettitt method, a method based on the Cramér–von Mises (CvM) two-sample test statistic, and a variant of the CUSUM method. The methods differ considerably in behaviour. For all methods, the spread of the estimated change-point location increases significantly for change points near either end of the sample. Series of annual maximum runoff for four stations on the Yangtze River in China are used to examine the performance of the methods on real data. The CvM-based test gave the best results, but all three methods suffer from bias and low detection rates for change points near the ends of the series.
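The Pettitt method named above is straightforward to implement; a minimal sketch with a synthetic mid-series shift:

```python
# Sketch: the Pettitt change-point test, one of the three methods compared above.
import numpy as np

def pettitt(x):
    """Return (change-point index, K statistic, approximate p-value)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sgn = np.sign(x[None, :] - x[:, None])   # sgn[i, j] = sign(x_j - x_i)
    # U_t = sum_{i<=t} sum_{j>t} sign(x_j - x_i)
    u = np.array([sgn[: t + 1, t + 1:].sum() for t in range(n - 1)])
    k = np.abs(u).max()
    t_hat = int(np.abs(u).argmax())
    p = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))   # asymptotic approximation
    return t_hat, k, min(p, 1.0)

rng = np.random.default_rng(1)
series = np.r_[rng.normal(0, 1, 50), rng.normal(1, 1, 50)]  # shift at t = 50
print(pettitt(series))
```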

4.
Selection of a flood frequency distribution and an associated parameter estimation procedure is an important step in flood frequency analysis. It is, however, a difficult task because the best-fit distribution must be chosen from the large number of candidate distributions and parameter estimation procedures available in the literature. This paper presents a case study with flood data from Tasmania, Australia, which examines four model selection criteria: the Akaike Information Criterion (AIC), its second-order variant (AICc), the Bayesian Information Criterion (BIC) and a modified Anderson–Darling Criterion (ADC). Monte Carlo simulation shows that the ADC is more successful than the AIC and BIC in correctly recognizing the parent distribution when the parent is a three-parameter distribution, whereas the AIC and BIC are better when the parent is a two-parameter distribution. Of the seven probability distributions examined for Tasmania, two-parameter distributions are preferable to three-parameter ones, with the Log Normal appearing to be the best choice. The paper also evaluates the three most widely used parameter estimation procedures for the Log Normal distribution: the method of moments (MOM), maximum likelihood estimation (MLE) and a Bayesian Markov chain Monte Carlo method (BAY). The BAY procedure provides better parameter estimates for the Log Normal distribution, yielding flood quantile estimates with smaller bias and standard error than MOM and MLE. The findings of this study should be useful for flood frequency analyses in other Australian states and in other countries, in particular when selecting an appropriate probability distribution from a number of alternatives.
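A minimal sketch of the AIC/AICc/BIC comparison on synthetic annual maxima (the modified Anderson–Darling criterion is not reproduced; note that scipy fits a location parameter for the lognormal, so the parameter counts here are assumptions of the sketch):

```python
# Sketch: ranking candidate flood-frequency distributions by AIC, AICc and BIC.
import numpy as np
from scipy import stats

def information_criteria(dist, data, k):
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    n = len(data)
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = k * np.log(n) - 2 * loglik
    return aic, aicc, bic

rng = np.random.default_rng(0)
peaks = stats.lognorm.rvs(0.5, scale=200, size=40, random_state=rng)  # synthetic maxima

candidates = {"lognormal": (stats.lognorm, 3),   # scipy also fits loc
              "Gumbel": (stats.gumbel_r, 2),
              "GEV": (stats.genextreme, 3)}
for name, (dist, k) in candidates.items():
    print(name, information_criteria(dist, peaks, k))
```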

5.
A maximum entropy-Gumbel-Hougaard copula (MEGHC) method is proposed for monthly streamflow simulation. The marginal distributions of monthly streamflows are estimated by the maximum entropy (ME) method with the first four moments (mean, standard deviation, skewness and kurtosis) as constraints. The Lagrange multipliers in the ME-based marginal distributions are determined with the conjugate gradient (CG) method, which offers superlinear convergence, a simple recurrence formula and a low computational cost. The joint distributions of two adjacent monthly streamflows are then constructed with the Gumbel-Hougaard copula (GHC). The MEGHC method is applied to monthly streamflow simulation for the Xiangxi River, China. Goodness-of-fit tests, consisting of the Kolmogorov–Smirnov test, the Anderson–Darling test, the RMSE and the Rosenblatt transformation with the Cramér–von Mises statistic, show that the MEGHC method captures the dependence structure of adjacent monthly streamflows of the Xiangxi River. Comparison between streamflows simulated by MEGHC and observations indicates satisfactory performance, with small relative errors.
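A minimal sketch of the copula step under simplified assumptions: the Gumbel-Hougaard parameter is taken from the moment relation θ = 1/(1 − τ) rather than the paper's full estimation procedure, and the adjacent-month flows are synthetic:

```python
# Sketch: Gumbel-Hougaard copula for two adjacent months, dependence parameter
# estimated from Kendall's tau.
import numpy as np
from scipy import stats

def gumbel_hougaard_cdf(u, v, theta):
    """C(u,v) = exp(-[(-ln u)^th + (-ln v)^th]^(1/th)), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

rng = np.random.default_rng(2)
q_jun = stats.gamma.rvs(3, scale=40, size=60, random_state=rng)                 # month t
q_jul = 0.6 * q_jun + stats.gamma.rvs(2, scale=20, size=60, random_state=rng)   # month t+1

tau, _ = stats.kendalltau(q_jun, q_jul)
theta = 1 / (1 - tau)                    # moment relation for the GH copula
print("theta =", theta, " C(0.5, 0.5) =", gumbel_hougaard_cdf(0.5, 0.5, theta))
```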

6.
This study aims to model the joint probability distribution of periodic hydrologic data using meta-elliptical copulas. Monthly precipitation data from a gauging station (410120) in Texas, US, were used to illustrate parameter estimation and goodness-of-fit testing for univariate drought distributions using the chi-square test, the Kolmogorov–Smirnov test, the Cramér–von Mises statistic, the Anderson–Darling statistic, the modified weighted Watson statistic, and the Liao and Shimokawa statistic. Pearson's classical correlation coefficient r_n, Spearman's ρ_n, Kendall's τ, chi-plots and K-plots were employed to assess the dependence of the drought variables. Several meta-elliptical copulas as well as the Gumbel-Hougaard, Ali-Mikhail-Haq, Frank and Clayton copulas were tested to determine the best-fit copula. Based on the root mean square error and the Akaike information criterion, the meta-Gaussian and t copulas gave the better fit. A bootstrap goodness-of-fit test based on Rosenblatt's transformation was employed for the meta-Gaussian and t copulas, and neither could be rejected at the given significance level. The meta-Gaussian copula was therefore employed to model the dependence, and the results were found satisfactory.
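A minimal sketch of fitting a meta-Gaussian copula: ranks are transformed to normal scores and the correlation matrix is estimated on that scale. The drought variables are synthetic stand-ins:

```python
# Sketch: meta-Gaussian copula fit via the normal-scores transform.
import numpy as np
from scipy import stats

def normal_scores(x):
    u = stats.rankdata(x) / (len(x) + 1)    # empirical margins on (0, 1)
    return stats.norm.ppf(u)

rng = np.random.default_rng(3)
duration = stats.expon.rvs(scale=3, size=80, random_state=rng)
severity = duration * stats.gamma.rvs(2, size=80, random_state=rng)  # dependent variables

z = np.column_stack([normal_scores(duration), normal_scores(severity)])
rho = np.corrcoef(z, rowvar=False)   # correlation matrix of the meta-Gaussian copula
print(rho)
```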

7.
The non-parametric Mann–Whitney (MW) test has been widely used to assess the significance of a shift in the median or mean of hydro-meteorological time series. The test is often considered more suitable for non-normally distributed data and insensitive to the distribution type of the sample data, but little evidence has been provided to support these claims. This study investigates the power of the test in various circumstances by means of Monte Carlo simulation. The simulation results demonstrate that the power of the test is very sensitive to the properties of the sample data: it depends on the pre-assigned significance level, the magnitude of the shift, the sample size and the position of the shift within the series, and it is also strongly affected by the variation, skewness and distribution type of the series. The larger the magnitude of the shift, the more powerful the test; the larger the sample size, the more powerful the test; and the larger the variation within a time series, the less powerful the test. The test has the highest power when the shift occurs at the midpoint of the series. For samples with different distribution types, the power differs dramatically: it is highest for time series with the extreme value type III (EV3) distribution and lowest for time series with the lognormal distribution.
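A minimal sketch of the kind of Monte Carlo power experiment described, for a mean shift at the midpoint of a normal series:

```python
# Sketch: Monte Carlo power of the Mann-Whitney test for a mid-series shift.
import numpy as np
from scipy import stats

def mw_power(n, shift, n_sim=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        before = rng.normal(0.0, 1.0, n // 2)
        after = rng.normal(shift, 1.0, n - n // 2)   # shift at the midpoint
        _, p = stats.mannwhitneyu(before, after, alternative="two-sided")
        rejections += p < alpha
    return rejections / n_sim

for n in (20, 50, 100):
    print(n, mw_power(n, shift=0.5))   # power grows with sample size
```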

8.
A mixed model is proposed to fit the earthquake interevent time distribution. In this model, the whole distribution is constructed by mixing the distribution of clustered seismicity with a suitable distribution of background seismicity: the clustered component is modeled by a non-homogeneous Poisson process, and the background component is modeled using different hypothetical distributions (exponential, gamma and Weibull). For southern California, Japan and Turkey, the best fit is found when a Weibull distribution is used for the background seismicity. Our study uses the earthquake random sampling method we introduced recently; it is applied here to account for the space–time clustering of earthquakes at different distances from a given source and to increase the number of samples used to estimate the interevent time distribution and its power-law scaling. For Japan, the contribution of clustered pairs of events to the whole distribution is analysed for different magnitude cutoffs m_c and different time periods. The results show that the power laws are mainly produced by the dominance of correlated pairs at short and long time ranges. In particular, both power laws, observed at short and long time ranges, can be attributed to the time–space clustering revealed by the standard Gardner and Knopoff declustering windows.
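A minimal sketch of comparing candidate background models by maximum likelihood on synthetic interevent times; the clustered non-homogeneous Poisson component is not reproduced:

```python
# Sketch: fitting exponential, gamma and Weibull models to interevent times
# and comparing their log-likelihoods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
dt = stats.weibull_min.rvs(0.7, scale=30.0, size=500, random_state=rng)  # synthetic waiting times (days)

for name, dist in [("exponential", stats.expon),
                   ("gamma", stats.gamma),
                   ("Weibull", stats.weibull_min)]:
    params = dist.fit(dt, floc=0)           # fix location at zero for waiting times
    print(name, np.sum(dist.logpdf(dt, *params)))
```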

9.
Benthic faunal investigations in the Aare and Rhine (Bodenfaunistische Untersuchungen in Aare und Rhein)
Summary: From 1969 to 1972 the benthos of the rivers Aare (near Beznau) and Rhine (near Kaiseraugst) was investigated. At two stations on each river we collected 8 samples distributed regularly across the river width. Both collecting and counting were performed quantitatively. Physical and chemical data, and grain-size analyses of the substratum, are given for both rivers. The results of the extensive counts were summarized in a list of quantitative abundance. The benthic biocoenoses of both rivers are dominated by oligochaetes and chironomids; in the Aare the trichopteran larvae of Hydropsyche sp. are also highly abundant. In the Aare, 79 species or genera and an average of 48,100 individuals per square meter were collected. This benthic fauna is richer and denser than in the Rhine, where 41 species or genera and an average of 11,400 individuals per square meter were recorded. The distribution of the organisms is inhomogeneous (contagious): within a square meter, the total number of individuals per 855 cm2 ranged between 4,056 and 23,140. The average biomass in the Aare, 8.6–41.5 g/m2, is again higher than in the Rhine (0.5–3.1 g/m2). The Rhine river bed is covered with gravel and sand, whereas the Aare bed is muddy. As the physical and chemical conditions are much the same in both rivers, we attribute the different population densities to the differences in substratum.

10.
Summary: In the Überlinger See (Lake of Constance) the Secchi-disk transparency is usually recorded three times a month; the results obtained from 1953–1962 have been compared with each other. They give an indication of planktic primary production. Transparency sometimes varies considerably over short periods. The average transparency is about 7.4 m; the extremes are 1.9 m and 15.0 m. Every year, from spring to autumn, there is a minimum of transparency induced by planktic production; it is interrupted by a maximum occurring in early summer (production collapse). From 1953–1962, despite considerable scatter, a yearly decrease in transparency of 2 cm was observed; between the averages for 1920–1924 (9.5 m) and 1953–1962 (7.4 m), however, the yearly decrease amounts to 6 cm. Since 1900 the decline in transparency has followed a sigmoid curve. The flatter rise of the curve over 1953–1962 can no longer be correlated with the progressively rising phosphate-phosphorus, which is now only rarely a limiting factor of planktic production in the Lake of Constance.


11.
In this paper, a distribution-free test, the randomization test, is briefly described. It requires no distributional assumption or associated parameter estimation and is applicable to both random and nonrandom samples. It is then used to test the migration of strong earthquakes along the Xianshuihe Fault Belt and the "immunity" of large earthquakes in the large northern region of China. The results show a 98.7% confidence level for the migration of strong earthquakes along the Xianshuihe Fault Belt, and the "immunity" of earthquakes with M_S ⩾ 8 to those with M_S ⩾ 7 is significant in the large northern region of China. Both the test results and the method itself have practical applications. The Chinese version of this paper appeared in the Chinese edition of Acta Seismologica Sinica, 15, 484–489, 1993.
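A minimal sketch of a randomization (permutation) test for a difference in means, illustrating the distribution-free idea on synthetic data:

```python
# Sketch: randomization test -- no distributional assumption is needed.
import numpy as np

def randomization_test(a, b, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    observed = np.mean(a) - np.mean(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)   # reassign group labels at random
        count += abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= abs(observed)
    return count / n_perm     # two-sided permutation p-value

rng = np.random.default_rng(5)
print(randomization_test(rng.normal(0, 1, 30), rng.normal(0.8, 1, 30)))
```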

12.
Intensity–duration–frequency (IDF) curves are used extensively in engineering to assess the return periods of rainfall events and often steer design decisions for urban water structures such as sewers, pipes and retention basins. In the province of Québec, precipitation time series are often short, leading to considerable uncertainty in the parameters of the probabilistic distributions describing rainfall intensity. In this paper, we apply Bayesian analysis to the estimation of IDF curves. The results show the extent of the uncertainties in IDF curves and the ensuing risk of their misinterpretation. This uncertainty is even more problematic when IDF curves are used to estimate the return period of a given event: standard methods provide overly large return period estimates, leading to a false sense of security. The Bayesian and classical approaches are compared using different prior assumptions for the return period and different estimation methods. A new prior distribution is also proposed, based on subjective appraisal by witnesses of the extreme character of the event.
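A minimal sketch of the Bayesian idea under simplifying assumptions: a flat-prior random-walk Metropolis sampler for Gumbel parameters of a short synthetic record, propagated to the 100-year return level (the paper's informative prior on the return period is not reproduced):

```python
# Sketch: posterior uncertainty of a return level from a short rainfall record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = stats.gumbel_r.rvs(loc=30, scale=8, size=25, random_state=rng)  # short record

def log_post(mu, beta):
    if beta <= 0:
        return -np.inf                       # flat prior, positive scale
    return np.sum(stats.gumbel_r.logpdf(x, loc=mu, scale=beta))

mu, beta = x.mean(), x.std()
lp = log_post(mu, beta)
samples = []
for _ in range(20000):                       # random-walk Metropolis
    mu_p, beta_p = mu + rng.normal(0, 1.0), beta + rng.normal(0, 0.5)
    lp_p = log_post(mu_p, beta_p)
    if np.log(rng.uniform()) < lp_p - lp:
        mu, beta, lp = mu_p, beta_p, lp_p
    samples.append((mu, beta))

mu_s, beta_s = np.array(samples[5000:]).T            # drop burn-in
r100 = mu_s - beta_s * np.log(-np.log(1 - 1 / 100))  # 100-year return level
print(np.percentile(r100, [2.5, 50, 97.5]))          # wide credible interval
```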

13.
Computer-assisted image analysis can be used successfully to derive quantitative textural data from pyroclastic rock samples. The method automatically provides a large number of measurements, such as grain size, particle shape and the 2D orientation of particle main axes (directional or shape fabric), in a relatively short time. Reduction of the orientation data requires specific statistical tests, mainly devoted to defining the kind of particle distribution pattern, the possible occurrence of preferred particle orientation, the confidence interval of the mean direction, and the degree of randomness with respect to pre-assigned theoretical frequency distributions. Data obtained from image analysis of seven lithified ignimbrite samples from the Vulsini Volcanic District (Central Italy) are used to test different statistics and to provide insight into directional fabrics. First, a possible significant deviation from a theoretical circular uniform distribution was evaluated using the Rayleigh and Tukey χ² tests. Then the Kuiper test was performed to evaluate whether the observations fit a unimodal, von Mises-like theoretical frequency distribution. Finally, the confidence interval of the mean direction was calculated. With the exception of one sample (FPD10), which showed well-developed bimodality, all the analysed samples display significantly anisotropic and unimodal distributions. The minimum number of measurements needed to obtain reasonable variability of the calculated statistics and mean directions was evaluated by repeated random subsampling of the measured particles, in increments of 100 particles per sample. Although the observed variability depends largely on the distribution pattern and no absolute minimum number can be stated, approximately 1500–2000 measurements are required to obtain meaningful mean directions for the analysed samples.
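A minimal sketch of the Rayleigh uniformity test on synthetic orientation data; doubling the angles for axial data is a standard device and an assumption of this sketch:

```python
# Sketch: Rayleigh test for circular uniformity of particle orientations.
import numpy as np

def rayleigh_test(angles_deg, axial=True):
    theta = np.deg2rad(angles_deg) * (2 if axial else 1)   # double axial data
    n = len(theta)
    r_bar = np.abs(np.exp(1j * theta).mean())              # mean resultant length
    z = n * r_bar**2
    p = np.exp(-z) * (1 + (2 * z - z**2) / (4 * n))        # approximate p-value
    return r_bar, max(min(p, 1.0), 0.0)

rng = np.random.default_rng(7)
fabric = rng.vonmises(mu=0.6, kappa=1.5, size=1500) % np.pi  # preferred orientation
print(rayleigh_test(np.rad2deg(fabric)))                     # small p: anisotropic
```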

14.
In this paper, the spatial distribution of earthquakes in the large northern region of China is analysed by means of statistical methods for stochastic spatial point processes. Emphasis is placed on testing complete spatial randomness, the correlation between earthquake distributions in different magnitude intervals, and random labeling. The analysis shows that the spatial distribution of earthquakes in the large northern region is "clustered", and that the distributions of earthquakes in different magnitude intervals are positively correlated and can be modeled by a two-dimensional process. The results can be used to establish a reasonable spatial distribution model and have application to the reasonable estimation of seismic hazard. The Chinese version of this paper appeared in the Chinese edition of Acta Seismologica Sinica, 15, 129–135, 1993.
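A minimal sketch of a quadrat-count chi-square test of complete spatial randomness (CSR) on synthetic point patterns; clustering inflates the dispersion of the counts:

```python
# Sketch: quadrat-count test of complete spatial randomness for point patterns.
import numpy as np
from scipy import stats

def csr_quadrat_test(xy, n_bins=4):
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_bins,
                                  range=[[0, 1], [0, 1]])
    expected = len(xy) / n_bins**2
    chi2 = ((counts - expected) ** 2 / expected).sum()
    return chi2, stats.chi2.sf(chi2, n_bins**2 - 1)

rng = np.random.default_rng(8)
uniform_pts = rng.uniform(size=(200, 2))
clustered = (rng.uniform(size=(20, 1, 2))
             + rng.normal(0, 0.02, (20, 10, 2))).reshape(-1, 2) % 1.0
print("CSR pattern     :", csr_quadrat_test(uniform_pts))
print("clustered pattern:", csr_quadrat_test(clustered))   # tiny p-value
```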

15.
The need to compare the distributions of directions of the discontinuities present in rock masses prompted the development of a new surrogate measure for non-parametric statistical tests. It is used to quantify the degree of matching between polymodal azimuth direction distributions determined from remotely sensed data for different areas, and between these and field measurements. The approach is based on the Kolmogorov–Smirnov (K–S) goodness-of-fit test. In this application, however, the main interest is in accepting the null hypothesis (instead of rejecting it), so there is a risk of committing a Type II statistical error when it is false, particularly if sample sizes are too small. A method employing a set of empirical criteria for calibrating the statistical decision was therefore devised. The statistic used (the D ratio) measures the reliability of the decision to accept or reject the hypothesis. The methodology is tested and implemented using existing geological data and a tectonic model valid over a limited region, within which two study areas were taken. The results indicate a reasonable improvement in the performance of K–S tests for inferential purposes when the empirical reliability criteria are used, reflected in increased matching between occurring and inferred discontinuities (tectonic structures) and reduced error rates. Other envisaged applications include different data sources such as climate and soil data.
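A minimal sketch of the underlying two-sample K–S comparison of azimuth distributions on synthetic data. The "D ratio" printed here, the ratio of the asymptotic critical distance to the observed distance, is an assumed stand-in, not the authors' calibrated definition:

```python
# Sketch: two-sample Kolmogorov-Smirnov comparison of azimuth distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
remote = np.concatenate([rng.normal(40, 8, 120), rng.normal(130, 10, 80)]) % 180  # bimodal azimuths
field = np.concatenate([rng.normal(42, 9, 60), rng.normal(128, 12, 40)]) % 180

res = stats.ks_2samp(remote, field)
n, m = len(remote), len(field)
d_crit = 1.36 * np.sqrt((n + m) / (n * m))   # alpha = 0.05 asymptotic critical value
print("D =", res.statistic, " p =", res.pvalue, " D ratio ~", d_crit / res.statistic)
```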

16.
Data assimilation provides a framework for decreasing the uncertainty of hydrological modelling by sequentially incorporating observations into a numerical model. The process involves estimating statistical moments of different orders based on the evolution of the conditional probability distribution function. Because many hydrological dynamics are nonlinear, explicit analytical solutions for the moments of the state distribution are often impossible. Evensen [J Geophys Res 99(C5): 10143–10162 (1994)] introduced the Ensemble Kalman Filter (EnKF) to address such problems. We test and evaluate the performance of the EnKF in fusing model predictions and observations for a saturated–unsaturated integral-balance subsurface model. We find that the EnKF improves the model predictions, and we conclude that a good estimate of the state variance is essential for its success.
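A minimal sketch of a single EnKF analysis step with a linear observation operator and synthetic states; the subsurface model itself is not reproduced:

```python
# Sketch: one Ensemble Kalman Filter update (Evensen 1994) -- the ensemble
# covariance stands in for the analytical error covariance of the state.
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, h, rng):
    """ensemble: (n_ens, n_state); h: linear observation operator (n_obs, n_state)."""
    n_ens = ensemble.shape[0]
    a = ensemble - ensemble.mean(axis=0)                # anomalies
    p = a.T @ a / (n_ens - 1)                           # sample covariance
    r = np.eye(len(obs)) * obs_err_std**2
    k = p @ h.T @ np.linalg.inv(h @ p @ h.T + r)        # Kalman gain
    perturbed = obs + rng.normal(0, obs_err_std, (n_ens, len(obs)))
    return ensemble + (perturbed - ensemble @ h.T) @ k.T

rng = np.random.default_rng(10)
ens = rng.normal([0.3, 0.6], 0.1, size=(100, 2))        # e.g. [saturation, head]
h = np.array([[1.0, 0.0]])                              # observe the first state only
updated = enkf_update(ens, obs=np.array([0.42]), obs_err_std=0.05, h=h, rng=rng)
print(updated.mean(axis=0))                             # mean drawn toward the observation
```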

17.
Summary: The exceptional thermal conditions of the winter of 1946–47 and the summer of 1947 are discussed. These conditions do not imply a change in the climate of Western Europe; they merely coincide with a period of intense solar activity. In general, there is a close connection between periods of large (small) climate variability and periods of strong (weak) solar activity.

18.
Conclusions: In the winter of 1965 a boring was made with the Hiller sampler at a depth of 0.60 m in the middle of Lake Kolno, a reserve of the mute swan (Cygnus olor Gm.). Ninety-two samples of the bottom sediment were analysed for structure, water content, inactive chlorophyll, organic matter, iron, nitrogen, calcium and silica; in addition, the absolute age (C14) was determined. This yielded a characterization of the vertical structure of the bottom sediments and several characteristic periods in the formation of the lake. The first period, with a sediment thickness of 50 cm (4.70–4.20 m), is characterized by a sandy sediment containing small amounts of inactive chlorophyll, organic matter and other indicators, pointing to an extremely small production. Beginning at a depth of 4.20 m, great changes are observed in the lake: the silica content drops from 90% to 3%, organic matter increases from 2.1–7.6% to more than 50%, and the calcium content likewise rises from a few percent to 60%, as do iron and inactive chlorophyll. The absolute age determination of the sediment layer between 4.10 and 3.80 m, made in 1965 by the C14 method, showed that these changes took place about 6520±330 years ago, corresponding to the first half of the Atlantic period. These sediments extended up to a depth of 2.20 m. The sediments of the following period, 80 cm thick (2.20–1.40 m), are characterized by a higher content of inactive chlorophyll, organic matter, nitrogen and iron. These elevated values are probably due to a greater production of organic matter that did not decompose completely. This layer would correspond to the dry, warm Subboreal period. The following sediments (1.40–0.85 m) probably correspond to the Subatlantic period, whose beginning was characterized by a colder and more humid climate, as shown in Figure 2. The next period began with a rapid improvement of the climate (at a depth of 0.80 m), manifested in a considerable increase of inactive chlorophyll, nitrogen and iron; according to the generally accepted classification, this sediment would correspond to the Neoboreal period. After this period the climate again became colder and more humid and, apart from small deviations, has remained so to the present day.

The research was subsidized by the Hydrobiological Committee and the Commission for the Protection of Nature and its Resources of the Polish Academy of Sciences.

19.
Vibrating buildings can influence the free-field ground motion and thus affect earthquake recordings collected inside or near them. Some evidence is known for large structures, but small buildings can also adversely affect the quality of the recordings. An example is given for a station of the Italian Accelerometric Network whose recordings show a clear mark of the frequency of the host building. To tackle the problem more generally, we performed numerical simulations whose first aim was to validate existing empirical evidence from a test site. Gallipoli et al. (Bull Seismol Soc Am 96:2457–2464, 2006) monitored a release test on a 2-storey R.C. building in Bagnoli (Italy), showing that a single vibrating building may affect the "free-field" motion by up to 20% of peak ground acceleration. We re-analysed the data of that experiment following Safak's (Soil Dyn Earthq Eng 17:509–517, 1998) approach to building-soil motion, described as the propagation of up- and down-going S-waves. The numerical model is a chain of single-degree-of-freedom oscillators whose dynamic behaviour depends on mass, stiffness and damping. The agreement between the synthetic and real data encouraged us to use this model to represent generalised structures as single-degree-of-freedom systems. We ran multiple tests varying the distance between building and station and the building-soil coupling, obtaining a statistical distribution of the influence of a single vibrating building on the free-field ground motion as a function of distance.
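A minimal sketch of the single-degree-of-freedom oscillator used as a building model, integrated with the Newmark average-acceleration scheme on a toy accelerogram; the frequency and damping values are illustrative assumptions:

```python
# Sketch: relative displacement of an SDOF oscillator (unit mass) under a
# base acceleration record, Newmark average acceleration (gamma=1/2, beta=1/4).
import numpy as np

def sdof_response(ag, dt, f_n=4.0, zeta=0.05):
    wn = 2 * np.pi * f_n
    k, c = wn**2, 2 * zeta * wn                 # stiffness and damping per unit mass
    u = v = 0.0
    a = -ag[0]                                  # equilibrium at t = 0
    out = np.empty(len(ag))
    for i, agi in enumerate(ag):
        u_p = u + dt * v + dt**2 / 4 * a        # displacement predictor
        v_p = v + dt / 2 * a                    # velocity predictor
        a = (-agi - c * v_p - k * u_p) / (1 + c * dt / 2 + k * dt**2 / 4)
        u = u_p + dt**2 / 4 * a
        v = v_p + dt / 2 * a
        out[i] = u
    return out

rng = np.random.default_rng(11)
record = rng.normal(0, 0.5, 2000) * np.exp(-np.linspace(0, 3, 2000))  # toy accelerogram
print(np.abs(sdof_response(record, dt=0.01)).max())
```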

20.
Conventional nonparametric tests have been widely used in many fields for residual analysis of models fitted to observations. More recently, the BDS (Brock–Dechert–Scheinkman) statistic has been shown to be a powerful tool for residual analysis, especially for nonlinear systems. The purpose of this study is to compare the power of the conventional nonparametric tests and the BDS statistic through residual analysis of fitted models. The study evaluates stochastic models for four monthly rainfall series in Korea: SARIMA and AR error models are fitted to each series, and residual analysis is performed with both kinds of test. We find that the BDS statistic is more suitable than the conventional nonparametric tests for residual analysis, and that the AR error model may be more appropriate than the SARIMA model for modelling monthly rainfall. This work was supported by grant No. R01-2001-000-00474-0 from the Basic Research Program of the Korea Science & Engineering Foundation.
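A minimal sketch in the spirit of the BDS statistic: the correlation-integral contrast C_m − C_1^m is computed, with a permutation null standing in for the considerably more involved asymptotic variance of the true BDS test:

```python
# Sketch: correlation-integral test of residual independence (BDS-like).
# For i.i.d. residuals C_m ~ C_1^m; serial dependence inflates the contrast.
import numpy as np

def correlation_integral(x, m, eps):
    emb = np.lib.stride_tricks.sliding_window_view(x, m)       # m-histories
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)  # Chebyshev distance
    n = len(emb)
    return (d[np.triu_indices(n, 1)] < eps).mean()

def bds_like_pvalue(resid, m=2, n_perm=100, seed=0):
    rng = np.random.default_rng(seed)
    eps = 1.5 * resid.std()
    def contrast(x):
        return correlation_integral(x, m, eps) - correlation_integral(x, 1, eps) ** m
    t_obs = contrast(resid)
    t_null = [contrast(rng.permutation(resid)) for _ in range(n_perm)]
    return np.mean(np.abs(t_null) >= abs(t_obs))

rng = np.random.default_rng(12)
print("i.i.d. residuals p ~", bds_like_pvalue(rng.normal(size=300)))
```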
