51.
As an application of the theory and methods of the GPS/gravity boundary value problem, and following a brief introduction to and analysis of the principle of determining the (quasi-)geoid with the GPS/gravity method, the (quasi-)geoid of region N is computed from 600 GPS/gravity observations and 48 high-precision GPS/levelling points collected in that area. Accuracy analyses using both a fitting method and direct correction of the systematic difference show that the (quasi-)geoid determined for this region from GPS/gravity data combined with levelling reaches centimetre-level accuracy.
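To make the fitting step concrete, here is a minimal sketch (not the paper's actual procedure) of removing the systematic difference between a gravimetric quasi-geoid and GPS/levelling heights with a simple four-parameter corrector surface; the coordinates, heights, and noise levels are synthetic placeholders.

```python
import numpy as np

# Hypothetical inputs: plane coordinates (x, y) of GPS/levelling benchmarks,
# quasi-geoid heights from GPS/levelling and from the gravimetric
# (GPS/gravity) solution, all in metres.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 50e3, 48), rng.uniform(0, 50e3, 48)   # 48 benchmarks
zeta_gps  = 25.0 + 1e-6 * x - 2e-6 * y + rng.normal(0, 0.02, 48)
zeta_grav = zeta_gps - (0.15 + 0.5e-6 * x) + rng.normal(0, 0.03, 48)

# Four-parameter (bias + tilts + bilinear) corrector surface fitted to the
# differences; one simple form of the "fitting method" for removing the
# systematic part of zeta_GPS - zeta_grav.
d = zeta_gps - zeta_grav
A = np.column_stack([np.ones_like(x), x, y, x * y])
coeff, *_ = np.linalg.lstsq(A, d, rcond=None)

residuals = d - A @ coeff
print("RMS before fit: %.1f cm" % (100 * d.std()))
print("RMS after  fit: %.1f cm" % (100 * residuals.std()))
```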
52.
Seismic hazard analysis is based on data and models, both of which are imprecise and uncertain. In particular, the interpretation of historical information into earthquake parameters, e.g. earthquake size and location, yields ambiguous and imprecise data. Models based on probability distributions have been developed in order to quantify and represent these uncertainties. Nevertheless, the majority of the procedures applied in seismic hazard assessment neither take these uncertainties into account nor show the variance of the results. Therefore, a procedure based on Bayesian statistics was developed to estimate return periods for different ground motion intensities (MSK scale). Bayesian techniques provide a mathematical model to estimate the distribution of random variables in the presence of uncertainties. The developed method estimates the probability distribution of the number of occurrences in a Poisson process described by the parameter λ. The input data are the historical occurrences of intensities for a particular site, represented by a discrete probability distribution for each earthquake. The calculation of these historical occurrences requires a careful preparation of all input parameters, i.e. a modelling of their uncertainties. The obtained results show that the variance of the recurrence rate is smaller in regions with higher seismic activity than in less active regions. It can also be demonstrated that long return periods cannot be estimated with confidence, because the time period of observation is too short. This indicates that the long return periods obtained by seismic source methods only reflect the delineated seismic sources and the chosen earthquake size distribution law.
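As an illustration of the Bayesian idea, the following sketch uses the conjugate Gamma-Poisson model to obtain a posterior for the occurrence rate λ of a given intensity, and hence for its return period. The prior, event count, and observation window are hypothetical, and the paper's handling of per-earthquake intensity uncertainty is not reproduced.

```python
import numpy as np
from scipy import stats

# Hypothetical data: number of exceedances of intensity I >= VII observed
# at a site over an observation window of 300 years.
n_events, t_obs = 4, 300.0

# Gamma prior on the Poisson rate lambda (events per year); with a Poisson
# likelihood the posterior is again Gamma (conjugacy).
a0, b0 = 0.5, 0.01                       # vague prior: shape, rate
a_post, b_post = a0 + n_events, b0 + t_obs
post = stats.gamma(a=a_post, scale=1.0 / b_post)

# Posterior summaries of the rate and of the return period T = 1 / lambda.
lam_mean = post.mean()
lam_lo, lam_hi = post.ppf([0.05, 0.95])
print("rate: mean %.4f /yr, 90%% interval [%.4f, %.4f]" % (lam_mean, lam_lo, lam_hi))
print("return period: %.0f yr, 90%% interval [%.0f, %.0f] yr"
      % (1 / lam_mean, 1 / lam_hi, 1 / lam_lo))
```

Notice that the width of the interval for the return period grows quickly once the return period exceeds the observation window, which is the effect described in the abstract.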
53.
Multivariate statistical analyses have been extensively applied to geochemical measurements to analyze and aid interpretation of the data. Estimation of the covariance matrix of multivariate observations is the first task in multivariate analysis. However, geochemical data for the rare elements, especially Ag, Au, and platinum-group elements, usually contain observations below the detection limits. In particular, Instrumental Neutron Activation Analysis (INAA) for the rare elements produces multilevel and possibly extremely high detection limits depending on the sample weight. Traditionally, in applying multivariate analysis to such incomplete data, the observations below detection limits are first substituted, for example, each observation below the detection limit is replaced by a certain percentage of that limit, and then the standard statistical computer packages or techniques are used to obtain the analysis of the data. If the number of samples with observations below detection limits is small, or the detection limits are relatively near zero, the results may be reasonable and most geological interpretations or conclusions are probably valid. In this paper, a new method is proposed to estimate the covariance matrix from a dataset containing observations below multilevel detection limits by using the marginal maximum likelihood estimation (MMLE) method. For each pair of variables, say Y and Z, whose observations contain values below detection limits, the proposed method consists of three steps: (i) for each variable separately, obtaining the marginal MLEs of the means and variances, μ̂_Y, μ̂_Z, σ̂²_Y, and σ̂²_Z, for Y and Z; (ii) defining new variables C = (Y - μ̂_Y)/σ̂_Y and D = (Z - μ̂_Z)/σ̂_Z, letting A = C + D and B = C - D, and obtaining the MLEs of the variances σ̂²_A and σ̂²_B for A and B; (iii) estimating the correlation coefficient ρ_YZ by (σ̂²_A - σ̂²_B)/(σ̂²_A + σ̂²_B) and the covariance σ_YZ by ρ̂_YZ σ̂_Y σ̂_Z. The procedure is illustrated using a precious metal geochemical data set from the Fox River Sill, Manitoba, Canada.
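The sketch below illustrates steps (i)-(iii) on synthetic data, assuming normality of the (e.g. log-transformed) values: a censored-normal likelihood is maximised numerically for each variable, and the correlation follows from the variances of A and B. For brevity, censored values are crudely replaced by their detection limits when forming A and B, a shortcut the actual MMLE procedure avoids.

```python
import numpy as np
from scipy import stats, optimize

def censored_normal_mle(values, limits, censored):
    """Marginal MLE of (mu, sigma) for a normal variable that is
    left-censored at per-sample detection limits (step (i))."""
    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(values[~censored], mu, sigma).sum()
        ll += stats.norm.logcdf(limits[censored], mu, sigma).sum()
        return -ll
    x0 = [values[~censored].mean(), np.log(values[~censored].std() + 1e-6)]
    res = optimize.minimize(nll, x0, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

# Synthetic pair (Y, Z), e.g. log-transformed Au and Ag, with detection limits.
rng = np.random.default_rng(1)
n = 200
y = rng.normal(1.0, 0.5, n)
z = 0.6 * y + rng.normal(0.0, 0.4, n)
dl_y, dl_z = np.full(n, 0.8), np.full(n, 0.5)       # limits may vary per sample
cy, cz = y < dl_y, z < dl_z                         # censoring indicators

mu_y, s_y = censored_normal_mle(np.where(cy, dl_y, y), dl_y, cy)   # step (i), Y
mu_z, s_z = censored_normal_mle(np.where(cz, dl_z, z), dl_z, cz)   # step (i), Z

# Step (ii): standardise, form A = C + D and B = C - D (censored values are
# replaced by their limits here purely for brevity).
c = (np.where(cy, dl_y, y) - mu_y) / s_y
d = (np.where(cz, dl_z, z) - mu_z) / s_z
s_a, s_b = (c + d).std(), (c - d).std()

# Step (iii): correlation and covariance.
rho = (s_a**2 - s_b**2) / (s_a**2 + s_b**2)
print("rho =", round(rho, 3), " cov =", round(rho * s_y * s_z, 3))
```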
54.
Expectation Estimation of Parameters and Its Application in Deformation Analysis   (Total citations: 1; self-citations: 0; citations by others: 1)
"Expectation estimation of parameters" can accurately locate and quantify gross errors, and its estimates are themselves unaffected by gross errors. Exploiting these special properties to identify the deforming and non-deforming parts of the crust, to determine quasi-stable points, and then to carry out a quasi-stable transformation for analysing crustal deformation promises to be an effective method.
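A minimal sketch of the quasi-stable transformation step for the simplest case, a levelling network whose only datum defect is a common translation; the displacement values and the set of quasi-stable points are invented, and the expectation-based screening itself is not implemented.

```python
import numpy as np

# Hypothetical vertical deformation network: displacements (m) between two
# epochs at 6 benchmarks from a free-network adjustment (arbitrary datum).
d = np.array([0.004, 0.006, 0.005, 0.041, 0.038, 0.005])

# Suppose the gross-error/deformation screening has flagged points 3 and 4
# as deforming; the remaining points are taken as quasi-stable.
quasi_stable = np.array([True, True, True, False, False, True])

# Quasi-stable datum transformation for a height network with a pure
# translation defect: shift the solution so the mean displacement over the
# quasi-stable points is zero.
d_qs = d - d[quasi_stable].mean()
print("displacements in the quasi-stable datum:", np.round(d_qs, 4))
```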
55.
Many stochastic process models for environmental data sets assume a process of relatively simple structure which is in some sense partially observed. That is, there is an underlying process (X_n, n ≥ 0) or (X_t, t ≥ 0) for which the parameters are of interest and physically meaningful, and an observable process (Y_n, n ≥ 0) or (Y_t, t ≥ 0) which depends on the X process but not otherwise on those parameters. Examples are wide ranging: the Y process may be the X process with missing observations; the Y process may be the X process observed with a noise component; the X process might constitute a random environment for the Y process, as with hidden Markov models; the Y process might be a lower-dimensional function or reduction of the X process. In principle, maximum likelihood estimation for the X process parameters can be carried out by some form of the EM algorithm applied to the Y process data. In the paper we review some current methods for exact and approximate maximum likelihood estimation. We illustrate some of the issues by considering how to estimate the parameters of a stochastic Nash cascade model for runoff. In the case of k reservoirs, the outputs of these reservoirs form a k-dimensional vector Markov process, of which only the k-th coordinate process is observed, usually at a discrete sample of time points.
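The following sketch shows the kind of partially observed model described here: a discretised k-reservoir Nash cascade written as a linear-Gaussian state-space model in which only the outflow of the last reservoir is observed, with the Kalman filter supplying the marginal likelihood that an EM or direct maximum-likelihood routine would maximise. The noise levels, forcing, and the simple grid search over the storage coefficient are illustrative only.

```python
import numpy as np

def kalman_loglik(y, u, K, dt=1.0, q=1e-3, r=1e-3, k_res=3):
    """Log-likelihood of observed outflow y from a k-reservoir Nash cascade
    with storage coefficient K, treated as a linear-Gaussian state-space
    model in which only the last reservoir is observed."""
    a = np.exp(-dt / K)                        # per-step storage retention
    F = np.diag(np.full(k_res, a)) + np.diag(np.full(k_res - 1, 1 - a), -1)
    H = np.zeros((1, k_res)); H[0, -1] = (1 - a) / dt   # outflow of last store
    Q, R = q * np.eye(k_res), np.array([[r]])
    x, P, ll = np.zeros(k_res), np.eye(k_res), 0.0
    for t in range(len(y)):
        x = F @ x; x[0] += u[t]                # predict (rainfall enters store 1)
        P = F @ P @ F.T + Q
        v = y[t] - H @ x                       # innovation
        S = H @ P @ H.T + R
        ll += -0.5 * (np.log(2 * np.pi * S[0, 0]) + v[0] ** 2 / S[0, 0])
        G = P @ H.T / S[0, 0]                  # Kalman gain
        x = x + (G * v[0]).ravel()
        P = P - G @ H @ P
    return ll

# Illustrative use: profile the likelihood over the storage coefficient K
# for a rough synthetic rainfall-runoff record.
rng = np.random.default_rng(2)
u = rng.exponential(1.0, 200)
y = np.convolve(u, np.exp(-np.arange(30) / 5.0), "full")[:200] * 0.02 \
    + rng.normal(0, 0.03, 200)
best_K = max(np.arange(2.0, 10.0, 0.5), key=lambda K: kalman_loglik(y, u, K))
print("ML estimate of storage coefficient K =", best_K)
```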
56.
On the Measurement Accuracy of the Eppley PIR Precision Infrared Radiometer   (Total citations: 1; self-citations: 0; citations by others: 1)
Based on the structure, performance, and working principle of the precision infrared radiometer (PIR), this paper discusses its measurement accuracy and the factors affecting it, and compares the characteristics of the coarse and precise measurement methods. On this basis, the applicable range and achievable accuracy of each method are given, together with measures for improving accuracy; measurement examples for some specific situations and concrete ways of raising the accuracy are also presented. The analysis shows that the precise method offers very good accuracy and strong adaptability to the environment. Within a limited temperature range, the coarse method has the advantages of simple measurement, convenient conversion, and easy automation, and its accuracy still meets general requirements, but outside that range the accuracy deteriorates sharply. Among the factors affecting accuracy, the measurement of the instrument body's thermal emission within the temperature response has the largest influence, and in the coarse method the performance of the internal battery is closely tied to the accuracy. Strong solar radiation also affects the accuracy to some extent, but this can essentially be removed after correction.
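A minimal sketch of the distinction between the two methods, assuming the commonly used pyrgeometer equation in which the precise reading adds the case (body) emission and subtracts a dome-heating correction, while the coarse reading relies on a fixed compensation of the body emission; the sensitivity, dome coefficient, and voltages are illustrative values, not Eppley calibration constants.

```python
SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m-2 K-4

def pir_precise(v_thermopile, t_case, t_dome, sens=4.0e-6, k_dome=3.5):
    """Downwelling longwave (W m-2) with the full (precise) equation:
    thermopile signal plus case emission minus a dome-heating correction.
    sens in V per (W m-2); k_dome dimensionless."""
    return (v_thermopile / sens
            + SIGMA * t_case**4
            - k_dome * SIGMA * (t_dome**4 - t_case**4))

def pir_coarse(v_compensated, sens=4.0e-6, t_ref=288.15):
    """Coarse method: the battery circuit is assumed to compensate the body
    emission exactly at a reference temperature, so only the output voltage
    is converted; away from t_ref the error grows rapidly."""
    return v_compensated / sens + SIGMA * t_ref**4

# Illustrative comparison at a case temperature 15 K above the reference.
v = 2.0e-4                      # thermopile output, V (hypothetical)
print("precise:", round(pir_precise(v, t_case=303.15, t_dome=304.15), 1), "W/m2")
print("coarse :", round(pir_coarse(v), 1), "W/m2")
```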
57.
Using 11 factors for Chengdu, including total population and built-up area, principal-component regression with both least-squares (L-S) estimation and M-estimation is carried out to identify the main factors through which the city's development influences "heat island" intensity. The results show that urban building floor area and total population are the main factors affecting the urban climate (air temperature), followed by the number of urban households, built-up area, and others. The regression equations are fitted and evaluated, and the fits are fairly satisfactory, especially for the robust regression.
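A brief sketch of the two tools named in the abstract, principal-component regression followed by ordinary least squares and a Huber M-estimator, applied to synthetic data standing in for the 11 urban factors and the heat-island intensity series; the factor structure and the outlying year are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, p = 30, 11                                   # 30 years, 11 urban factors
X = rng.normal(size=(n, p))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n)   # correlated factors
heat_island = 0.9 * X[:, 0] + 0.4 * X[:, 2] + rng.normal(0, 0.3, n)
heat_island[5] += 3.0                           # one outlying year

# Principal-component step: keep the leading components of the standardised
# factor matrix to remove the collinearity among the 11 factors.
Xs = (X - X.mean(0)) / X.std(0)
_, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = np.sum(np.cumsum(s**2) / np.sum(s**2) < 0.9) + 1   # ~90 % of variance
scores = Xs @ Vt[:k].T

# Regression on the components: ordinary least squares vs. Huber M-estimation.
ols = sm.OLS(heat_island, sm.add_constant(scores)).fit()
rlm = sm.RLM(heat_island, sm.add_constant(scores), M=sm.robust.norms.HuberT()).fit()
print("OLS coefficients       :", np.round(ols.params, 2))
print("M-estimate coefficients:", np.round(rlm.params, 2))
```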
58.
Two different goals in fitting straight lines to data are to estimate a true linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating true straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples.
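The contrast can be reproduced with a short Monte Carlo sketch. Here the structural-analysis fit is implemented as the classical errors-in-variables (Deming-type) slope with a known error-variance ratio, which is one common form of SA; the true line, error variances, and sample size are arbitrary choices.

```python
import numpy as np

def deming_slope(x, y, lam):
    """Structural-analysis / errors-in-variables slope with known error
    variance ratio lam = var(err_Y) / var(err_X)."""
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return ((syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy**2))
            / (2 * sxy))

rng = np.random.default_rng(4)
beta0, beta1 = 2.0, 0.5                     # "true" physical law y = 2 + 0.5 x
sd_x, sd_y = 1.0, 0.5                       # measurement errors in X and Y
ols_slopes, sa_slopes = [], []
for _ in range(2000):                       # Monte Carlo replications, n = 20
    xt = rng.uniform(0, 10, 20)
    x = xt + rng.normal(0, sd_x, 20)
    y = beta0 + beta1 * xt + rng.normal(0, sd_y, 20)
    ols_slopes.append(np.polyfit(x, y, 1)[0])
    sa_slopes.append(deming_slope(x, y, (sd_y / sd_x) ** 2))

print("true slope            :", beta1)
print("mean OLS slope        :", round(np.mean(ols_slopes), 3))   # attenuated
print("mean structural slope :", round(np.mean(sa_slopes), 3))    # ~unbiased
```

With error in X, the OLS slope is attenuated toward zero while the structural estimate is close to the true slope on average, which mirrors the first conclusion of the abstract.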
59.
There is a correspondence between flow in a reservoir and large-scale permeability trends. This correspondence can be derived by constraining reservoir models using observed production data. One of the challenges in deriving the permeability distribution of a field from production data is determining the scale of resolution of the permeability. Adaptive Multiscale Estimation (AME) seeks to overcome the problems related to choosing the resolution of the permeability field by a dynamic parameterisation selection. The standard AME uses a gradient algorithm to solve several optimisation problems with increasing permeability resolution. This paper presents a hybrid algorithm which combines a gradient search and a stochastic algorithm to improve the robustness of the dynamic parameterisation selection. At low dimension, we use the stochastic algorithm to generate several optimised models. We use information from all these models to find new optimal refinements, and start new optimisations from several different suggested parameterisations. At higher dimensions we change to a gradient-type optimiser, where the initial solution is chosen from the ensemble of models suggested by the stochastic algorithm. The selection is based on a predefined criterion. We demonstrate the robustness of the hybrid algorithm on several synthetic cases, most of which could not be solved using the standard AME algorithm.
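The hybrid idea can be sketched generically as below: a stochastic stage (here differential evolution) run several times at a coarse parameterisation produces an ensemble of optimised models, the best of which seeds a gradient-type refinement at a finer parameterisation. The objective function, the refinement rule, and the selection criterion are stand-ins and do not reproduce the AME algorithm or a reservoir simulator.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def misfit(perm, n_cells=16):
    """Stand-in objective: mismatch between 'observed' and modelled data for
    a permeability field given on len(perm) zones, refined piecewise-
    constantly onto a fine grid of n_cells cells."""
    fine = np.repeat(perm, n_cells // len(perm))
    target = np.sin(np.linspace(0, np.pi, n_cells)) + 2.0   # synthetic truth
    return np.sum((fine - target) ** 2)

# Stage 1: stochastic search at a coarse parameterisation (2 zones), run
# several times to obtain an ensemble of optimised coarse models.
ensemble = [differential_evolution(misfit, [(0.0, 5.0)] * 2, seed=s, tol=1e-6).x
            for s in range(5)]

# Stage 2: choose the ensemble member with the lowest misfit, refine the
# parameterisation (2 -> 8 zones), and switch to a gradient-type optimiser.
best_coarse = min(ensemble, key=misfit)
x0 = np.repeat(best_coarse, 4)                 # refined initial solution
fine_fit = minimize(misfit, x0, method="L-BFGS-B", bounds=[(0.0, 5.0)] * 8)

print("coarse misfit :", round(misfit(best_coarse), 3))
print("refined misfit:", round(fine_fit.fun, 3))
```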
60.
Within the framework of recent research projects, basic tools for GIS-based seismic risk assessment were developed and applied to the building stock and regional particularities of German earthquake regions. Two study areas are investigated, comparable in their level of seismic hazard and in the hazard-consistent scenario events (related to mean return periods of 475, 2475 and 10000 years). Significant differences exist with respect to the number of inhabitants, the grade and extent of urbanisation, and the quality and quantity of the building inventory: the case study of Schmölln in Eastern Thuringia seems to be representative of the majority of smaller towns in Germany, while the case study of Cologne (Köln) stands for larger cities. Because hazard and scenario intensities are similar, the considerable differences not only require proper decisions concerning appropriate methods and acceptable effort, they also enable conclusions about future research strategies and needs for disaster reduction management. Not least, the results can sharpen the focus of public interest. Seismic risk maps are prepared for different scenario intensities, recognising the scatter and uncertainties of site-dependent ground motion and of the applied vulnerability functions. The paper illustrates the impact of model assumptions and of step-wise refinements of input variables such as site conditions, building stock, or vulnerability functions on the distribution of expected building damage within the study areas. Furthermore, and in contrast to common research strategies, the results support the conclusion that in the case of stronger earthquakes the damage will be more concentrated within smaller cities like Schmölln, due to the site-amplification potential and/or the increased vulnerability of the building stock. The extent of damage will be accentuated by the large number of masonry buildings to which lower vulnerability classes have to be assigned. Due to the effect of deep sedimentary layers and the composition of building types, the urban centre of Cologne will be less affected by an earthquake of comparable intensity.