By access type:
  Full text (subscription): 18012 articles
  Free: 2335 articles
  Free (domestic): 2568 articles
By subject area:
  Surveying and Mapping: 5624 articles
  Atmospheric Sciences: 2971 articles
  Geophysics: 3257 articles
  Geology: 4162 articles
  Oceanography: 1795 articles
  Astronomy: 1389 articles
  Multidisciplinary: 1766 articles
  Physical Geography: 1951 articles
By publication year:
  2024: 83 articles
  2023: 187 articles
  2022: 552 articles
  2021: 720 articles
  2020: 752 articles
  2019: 865 articles
  2018: 612 articles
  2017: 913 articles
  2016: 869 articles
  2015: 886 articles
  2014: 1036 articles
  2013: 1295 articles
  2012: 1136 articles
  2011: 1078 articles
  2010: 856 articles
  2009: 1058 articles
  2008: 1111 articles
  2007: 1301 articles
  2006: 1156 articles
  2005: 936 articles
  2004: 899 articles
  2003: 699 articles
  2002: 578 articles
  2001: 516 articles
  2000: 449 articles
  1999: 392 articles
  1998: 327 articles
  1997: 251 articles
  1996: 240 articles
  1995: 200 articles
  1994: 188 articles
  1993: 167 articles
  1992: 113 articles
  1991: 103 articles
  1990: 69 articles
  1989: 67 articles
  1988: 61 articles
  1987: 38 articles
  1986: 21 articles
  1985: 20 articles
  1984: 17 articles
  1983: 5 articles
  1982: 19 articles
  1981: 10 articles
  1980: 10 articles
  1979: 5 articles
  1978: 6 articles
  1977: 12 articles
  1971: 5 articles
  1954: 11 articles
A total of 10000 query results were returned (search time: 156 ms).
1.
Average velocity in streams is a key variable for the analysis and modelling of hydrological and hydraulic processes underpinning water resources science and practice. The present study evaluates the impact of the sampling duration on the quality of average velocity measurements acquired with contemporary instruments such as Acoustic Doppler Velocimeters (ADV) and Acoustic Doppler Current Profilers (ADCP). The evaluation combines considerations of turbulent flow and of the principles and configurations of acoustic instruments with practical experience in carrying out customized analyses for uncertainty estimation. The study offers new insight into the spatial and temporal variability of the uncertainty in measured average velocities caused by variable sampling durations acting in isolation from other sources of uncertainty. Sampling durations of 90 and 150 s are found to be sufficient for ADV and ADCP, respectively, to obtain reliable average velocities in a flow affected only by natural turbulence and instrument noise. Longer sampling durations are needed for measurements in most natural streams, which are exposed to additional sources of data variability.
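The central idea, that the uncertainty of a time-averaged velocity shrinks as the sampling duration grows, can be illustrated with a minimal simulation. The sketch below is not the authors' procedure: it assumes a synthetic AR(1) velocity record standing in for turbulence plus white instrument noise (all numbers are illustrative) and simply measures how the spread of duration-limited means decays with averaging time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "turbulent" velocity record: AR(1) process around a 1.0 m/s mean,
# sampled at 25 Hz (a typical ADV rate), plus white instrument noise.
fs = 25.0                      # sampling frequency [Hz]
dt = 1.0 / fs
tau = 2.0                      # assumed integral time scale of turbulence [s]
phi = np.exp(-dt / tau)        # AR(1) coefficient matching that time scale
sigma_u = 0.10                 # std of turbulent fluctuations [m/s]
sigma_n = 0.02                 # std of instrument noise [m/s]
n_records = 500                # independent synthetic records
n_samples = int(300 * fs)      # 300 s per record

eps = rng.normal(0.0, sigma_u * np.sqrt(1 - phi**2), (n_records, n_samples))
u = np.empty_like(eps)
u[:, 0] = eps[:, 0]
for k in range(1, n_samples):
    u[:, k] = phi * u[:, k - 1] + eps[:, k]
velocity = 1.0 + u + rng.normal(0.0, sigma_n, u.shape)

# Spread (std across records) of the time-averaged velocity for growing durations.
for duration in (10, 30, 60, 90, 150, 300):
    n = int(duration * fs)
    means = velocity[:, :n].mean(axis=1)
    print(f"T = {duration:4d} s  ->  std of mean = {means.std():.4f} m/s")
```

The decay of this spread with duration is the qualitative behaviour that the study quantifies for real ADV and ADCP records when isolating sampling duration from other uncertainty sources.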
2.
The 33,086 ha mixed land-use Fall Creek watershed in upstate New York is part of the Great Lakes drainage system. Results from more than 3,500 water samples are available in a data set that compiles flow data and measurements of various water quality analytes collected between 1972 and 1995, in all seasons and under all flow regimes, in Fall Creek and its tributaries. The data are freely accessible at https://ecommons.cornell.edu/handle/1813/8148 and include measurements of suspended solids, pH, alkalinity, calcium, magnesium, potassium, sodium, chloride, nitrate nitrogen (NO3-N), sulphate sulphur (SO4-S), the phosphorus (P) fractions molybdate-reactive P (MRP) and total dissolved P (TDP), percent P in sediment, and ammonium nitrogen (NH4-N). Methods, sub-watershed areas, and coordinates of the sampling sites are also included. The work represented in this data set has made important scientific contributions to the understanding of hydrological and biogeochemical processes that influence loading in mixed-use watersheds and that affect algal productivity in receiving water bodies. In addition, the work has been foundational for important regulatory and management decisions in the region.
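As a hedged illustration of how paired concentration and discharge records of this kind are typically turned into constituent loads, the sketch below computes a simple time-integrated NO3-N load. The column names, units, and values are assumptions for the example, not the schema or contents of the Fall Creek data set.

```python
import pandas as pd

# Hypothetical frame of paired observations; the real data set's column names
# and units may differ (here: discharge in m^3/s, concentration in mg/L).
samples = pd.DataFrame(
    {
        "datetime": pd.to_datetime(
            ["1985-04-01 08:00", "1985-04-02 08:00", "1985-04-03 08:00"]
        ),
        "discharge_m3s": [12.4, 30.1, 18.7],
        "no3_n_mgL": [1.8, 2.6, 2.1],
    }
).set_index("datetime")

# Instantaneous NO3-N flux: (m^3/s) * (mg/L) * (1000 L/m^3) / (1e6 mg/kg) = kg/s
flux_kgs = samples["discharge_m3s"] * samples["no3_n_mgL"] * 1e-3

# Integrate the flux over the intervals between samples (trapezoid rule, seconds).
dt_s = samples.index.to_series().diff().dt.total_seconds().fillna(0.0)
load_kg = ((flux_kgs + flux_kgs.shift()).fillna(0.0) / 2 * dt_s).sum()
print(f"Approximate NO3-N load over the period: {load_kg:.0f} kg")
```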
3.
A constitutive model that captures material behavior under a wide range of loading conditions is essential for simulating complex boundary value problems. In recent years, attempts have been made to develop constitutive models for finite element analysis using self-learning simulation (SelfSim). Self-learning simulation is an inverse analysis technique that extracts material behavior from boundary measurements (e.g., load and displacement). At the heart of the self-learning framework is a neural network that is trained to represent the material behavior. It is generally known that neural networks suffer from a number of drawbacks. This paper instead uses evolutionary polynomial regression (EPR) within the SelfSim framework, in an automated process coded in the MATLAB environment. EPR is a hybrid data mining technique that combines a genetic algorithm with the least-squares method to search for mathematical equations that represent the behavior of a system. Two material-modeling strategies are considered in the SelfSim-based finite element analysis: a total stress-strain strategy applied to the analysis of a truss structure using synthetic measurement data, and an incremental stress-strain strategy applied to the simulation of triaxial tests using experimental data. The results show that effective and accurate constitutive models can be developed with the proposed EPR-based self-learning finite element method, which provides accurate predictions for engineering problems. The main advantages of EPR over neural networks are highlighted.
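Since the abstract only names the ingredients of EPR, a genetic search over candidate polynomial terms whose coefficients are fitted by least squares, a minimal sketch of that idea is given below. It is a generic illustration, not the authors' implementation or the SelfSim coupling; the term library, mutation scheme, and fitness measure are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_terms(X, exponents):
    """Column matrix of monomial terms x1^e1 * x2^e2 for each term row."""
    cols = [np.prod(X ** e, axis=1) for e in exponents]
    return np.column_stack([np.ones(len(X))] + cols)   # include a bias term

def fitness(X, y, exponents):
    """Least-squares fit of the coefficients, scored by mean squared error."""
    A = build_terms(X, exponents)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((A @ coef - y) ** 2), coef

# Toy data: two inputs, target is a polynomial the search should rediscover.
X = rng.uniform(-1, 1, (200, 2))
y = 3.0 * X[:, 0] ** 2 + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.01, 200)

n_terms, max_exp, pop_size = 3, 2, 40
pop = [rng.integers(0, max_exp + 1, (n_terms, 2)) for _ in range(pop_size)]

for generation in range(60):
    scored = sorted(pop, key=lambda e: fitness(X, y, e)[0])
    survivors = scored[: pop_size // 2]                  # truncation selection
    children = []
    for parent in survivors:
        child = parent.copy()
        i, j = rng.integers(n_terms), rng.integers(2)
        child[i, j] = rng.integers(0, max_exp + 1)       # point mutation of one exponent
        children.append(child)
    pop = survivors + children

best = min(pop, key=lambda e: fitness(X, y, e)[0])
mse, coef = fitness(X, y, best)
print("exponent matrix:\n", best)
print("coefficients:", np.round(coef, 3), " mse:", round(mse, 5))
```

The appeal of this kind of search over a pure neural network is that the result is a compact, inspectable equation rather than an opaque set of weights.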
4.
Using ambient (microtremor) noise data from 12 stations in western Inner Mongolia, the site response of each station is studied with the noise spectral ratio method. The analysis shows that the station site responses can be divided into three classes according to the shape of the response curve, and that they are likely influenced by topography and landform, local structure, and station foundation conditions. A comparison of the site responses obtained with the noise spectral ratio method and with the Moya method shows that the response curves are essentially consistent in shape, with clear differences at only a very few stations. The results indicate that the foundation conditions of most stations in western Inner Mongolia are generally good: the site response curves are relatively flat, with no obvious frequency amplification.
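The noise spectral ratio approach referred to here is commonly implemented as a horizontal-to-vertical (H/V) ratio of smoothed ambient-noise spectra. The sketch below illustrates that generic idea with scipy on synthetic three-component noise; it is an assumption-laden stand-in, not the specific processing chain used for the Inner Mongolia stations.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)

fs = 100.0                        # assumed sample rate [Hz]
n = int(600 * fs)                 # 10 minutes of synthetic ambient noise
east, north, vertical = rng.normal(0, 1, (3, n))

# Welch power spectral densities of each component (window length ~20 s).
f, p_e = welch(east, fs=fs, nperseg=2048)
_, p_n = welch(north, fs=fs, nperseg=2048)
_, p_v = welch(vertical, fs=fs, nperseg=2048)

# Noise spectral ratio: combined horizontal amplitude over vertical amplitude.
hv = np.sqrt((p_e + p_n) / 2.0) / np.sqrt(p_v)

band = (f >= 0.5) & (f <= 20.0)   # frequency band of engineering interest
peak = f[band][np.argmax(hv[band])]
print(f"H/V peak within 0.5-20 Hz at about {peak:.2f} Hz")
```

For the white noise used here the ratio is essentially flat, which is the kind of curve the abstract describes for stations on good foundations; a site with real amplification would instead show a peak near its resonant frequency.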
5.
In the atmospheric Čerenkov technique, γ-rays are detected against the abundant background produced by hadronic showers. To improve the signal-to-noise ratio of the experiment, it is necessary to reject a significant fraction of the hadronic showers. Traditional background rejection methods based on image shape parameters have been used extensively on data from imaging telescopes. Non-imaging Čerenkov telescopes, however, must develop very different means of statistically identifying and removing cosmic-ray events. Parameters that could be important for non-imaging arrays include the temporal and spectral differences, the lateral distributions, and the density fluctuations of Čerenkov photons generated by γ-ray and hadron primaries. Here we study the differences in the fluctuations of Čerenkov photon density in the light pool at the observation level between showers initiated by photons and those initiated by protons or heavier nuclei. A database of simulated events for the PACT array has been used to evaluate the efficiency of the new technique. Several types of density fluctuation, including short-range and medium-range fluctuations as well as a flatness parameter, are studied. The estimated quality factors reflect the efficiencies with which hadrons can be rejected from the data. Since some of these parameters are independent, the cuts can be applied in tandem, and we demonstrate that a proton rejection efficiency of ∼90% can be achieved. The use of density fluctuations is particularly suited to wavefront-sampling observations and appears to be a good technique for improving the signal-to-noise ratio.
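The quality factor mentioned here is conventionally defined as the γ-ray acceptance divided by the square root of the hadron acceptance after a cut. The short sketch below evaluates that figure of merit for a cut on a single made-up discriminating parameter, purely to show how a rejection efficiency translates into a quality factor; the distributions are invented and do not reproduce the PACT simulations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up distributions of one density-fluctuation parameter:
# gamma showers smoother (lower values), proton showers more ragged.
gamma_param = rng.normal(1.0, 0.3, 20000)
proton_param = rng.normal(2.0, 0.6, 20000)

best = (0.0, None)
for cut in np.linspace(0.5, 3.0, 101):
    eff_gamma = np.mean(gamma_param < cut)       # fraction of gammas kept
    eff_proton = np.mean(proton_param < cut)     # fraction of protons surviving
    if eff_proton == 0:
        continue
    q = eff_gamma / np.sqrt(eff_proton)          # quality factor
    if q > best[0]:
        best = (q, cut)

q, cut = best
print(f"best cut at {cut:.2f}: Q = {q:.2f}, "
      f"proton rejection = {100 * (1 - np.mean(proton_param < cut)):.0f}%")
```

Because independent parameters reject different subsets of hadrons, applying several such cuts in tandem multiplies the individual acceptances, which is how the combined proton rejection can approach the ∼90% quoted in the abstract.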
6.
7.
8.
The characteristics of the MATLAB language and the basic theory of system modeling methods are introduced. Based on the actual modeling and processing of South China Sea meteorological data, the detailed modeling steps, their implementation in MATLAB, and the main MATLAB programs are presented. The experiments and results show that the MATLAB language makes it convenient to model and process South China Sea meteorological data with system modeling methods, and that MATLAB offers clear advantages for this kind of work.
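The abstract does not spell out which system-modeling method was used, so the sketch below shows one common choice for meteorological time series, a least-squares fit of an autoregressive (AR) model, written in Python rather than MATLAB; the synthetic data and the model order are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily temperature-like series: seasonal cycle plus correlated noise.
t = np.arange(730)
series = 27 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.8, t.size)

def fit_ar(x, order):
    """Least-squares AR(order) fit: x[k] ~ c + a1*x[k-1] + ... + ap*x[k-p]."""
    rows = [x[order - i - 1 : len(x) - i - 1] for i in range(order)]
    A = np.column_stack([np.ones(len(x) - order)] + rows)
    target = x[order:]
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

coef = fit_ar(series, order=3)
print("intercept and AR coefficients:", np.round(coef, 3))

# One-step-ahead prediction for the day after the record ends.
recent = series[-3:][::-1]            # most recent value first, matching a1..a3
forecast = coef[0] + coef[1:] @ recent
print("one-step forecast:", round(float(forecast), 2))
```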
9.
In the first paper of this series, we presented EBAS (Eclipsing Binary Automated Solver), a new fully automated algorithm to analyse the light curves of eclipsing binaries, based on the EBOP code. Here, we apply the new algorithm to the whole sample of 2580 binaries found in the Optical Gravitational Lensing Experiment (OGLE) Large Magellanic Cloud (LMC) photometric survey and derive the orbital elements for 1931 systems. To obtain the statistical properties of the short-period binaries of the LMC, we construct a well-defined subsample of 938 eclipsing binaries with main-sequence B-type primaries. Correcting for observational selection effects, we derive the distributions of the fractional radii of the two components and their sum, the brightness ratios and the periods of the short-period binaries. Somewhat surprisingly, the results are consistent with a flat distribution in log P between 2 and 10 d. We also estimate the total number of binaries in the LMC with the same characteristics, not only the eclipsing ones, to be about 5000. This figure leads us to suggest that (0.7 ± 0.4) per cent of the main-sequence B-type stars in the LMC are found in binaries with periods shorter than 10 d. This frequency is substantially smaller than the fraction of binaries found by small Galactic radial-velocity surveys of B stars. On the other hand, the binary frequency found by Hubble Space Telescope (HST) photometric searches within the late main-sequence stars of 47 Tuc is only slightly higher and still consistent with the frequency we deduced for the B stars in the LMC.
10.
In this paper, we present a new method to estimate, for each turbulent layer labelled i, the horizontal wind speed v(h_i), the standard deviation of the horizontal wind-speed fluctuations σ_v(h_i), and the value of C_n² integrated over the thickness Δh_i of the turbulent layer, C_n²(h_i)Δh_i, where h_i is the altitude of the layer. These parameters are extracted from single-star scintillation spatio-temporal cross-correlation functions of atmospheric speckles obtained in the generalized mode. The method uses the simulated annealing algorithm to search for the optimal solution of this inverse problem. Astrophysical parameters for adaptive optics are also calculated from the C_n²(h_i) and v(h_i) values. Results from other techniques support the new method.
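The abstract names simulated annealing as the optimizer used to match model correlation functions to the measured ones. The sketch below is a generic simulated-annealing loop on a stand-in least-squares misfit; the toy objective, cooling schedule, and step size are assumptions for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in objective: recover two "layer" parameters (say, a wind speed and a
# turbulence weight) by matching a synthetic observation in a least-squares sense.
true_params = np.array([12.0, 0.7])

def misfit(params):
    model = np.array([params[0], params[0] * params[1]])
    data = np.array([true_params[0], true_params[0] * true_params[1]])
    return np.sum((model - data) ** 2)

current = np.array([5.0, 0.2])          # initial guess
best = current.copy()
temperature, cooling, step = 10.0, 0.995, 0.5

for iteration in range(5000):
    candidate = current + rng.normal(0.0, step, size=2)
    delta = misfit(candidate) - misfit(current)
    # Accept improvements always, and worse moves with a Boltzmann probability.
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        current = candidate
        if misfit(current) < misfit(best):
            best = current.copy()
    temperature *= cooling                # geometric cooling schedule

print("recovered parameters:", np.round(best, 3), " misfit:", round(misfit(best), 6))
```

The random restarts and the gradual freezing of the acceptance probability are what let this kind of search escape local minima of the misfit surface, which is why it suits non-convex inversions such as fitting layered turbulence profiles.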