Search results: 950 articles in total (subscription full text: 912; free: 34; free within China: 4).

By subject: Surveying and mapping 47; Atmospheric sciences 56; Geophysics 214; Geology 350; Oceanography 45; Astronomy 158; Interdisciplinary 6; Physical geography 74.

By year: 2022: 5; 2021: 13; 2020: 13; 2019: 23; 2018: 23; 2017: 23; 2016: 33; 2015: 24; 2014: 29; 2013: 62; 2012: 45; 2011: 28; 2010: 36; 2009: 53; 2008: 40; 2007: 40; 2006: 30; 2005: 38; 2004: 32; 2003: 22; 2002: 31; 2001: 20; 2000: 21; 1999: 17; 1998: 9; 1997: 11; 1996: 9; 1995: 7; 1992: 5; 1990: 11; 1989: 9; 1987: 9; 1985: 6; 1984: 7; 1983: 8; 1982: 8; 1981: 7; 1980: 7; 1979: 7; 1978: 7; 1977: 6; 1976: 5; 1973: 9; 1970: 11; 1965: 5; 1962: 4; 1954: 5; 1952: 8; 1949: 4; 1939: 4.
931.
Chemical and Sr, Nd and Pb isotopic compositions of Late Cenozoic to Quaternary small-volume phonolite, trachyte and related mafic rocks from the Darfur volcanic province, NW Sudan, have been investigated. Isotope signatures indicate variable but minor crustal contributions. Some phonolitic and trachytic rocks show the same isotopic composition as their primitive mantle-derived parents, and no crustal contributions are visible in the trace element patterns of these samples. The magmatic evolution of the evolved rocks is dominated by crystal fractionation. The Si-undersaturated, strongly alkaline phonolite and the Si-saturated, mildly alkaline trachyte can be modelled by fractionation of basanite and basalt, respectively. The suite of basanite–basalt–phonolite–trachyte with characteristic isotope signatures from the Darfur volcanic province fits the compositional features of other Cenozoic intra-plate magmatism scattered in North and Central Africa (e.g., Tibesti, Maghreb, Cameroon line), which evolved on a lithosphere that was reworked or formed during the Neoproterozoic.
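The abstract above attributes the evolution from basanitic and basaltic parents to phonolite and trachyte mainly to crystal fractionation. As a purely illustrative aid (not the authors' actual model), the sketch below evaluates the standard Rayleigh fractional-crystallization relation for an incompatible trace element; the initial concentration and bulk partition coefficient are assumed placeholder values.

```python
# Illustrative Rayleigh fractional-crystallization calculation.
# C_L / C_0 = F**(D - 1), where F is the remaining melt fraction and
# D is the bulk solid/melt partition coefficient of the element.
# The numbers below are placeholders, not values from the paper.

def rayleigh_liquid_concentration(c0: float, f: float, bulk_d: float) -> float:
    """Concentration of a trace element in the residual liquid."""
    if not 0.0 < f <= 1.0:
        raise ValueError("melt fraction F must be in (0, 1]")
    return c0 * f ** (bulk_d - 1.0)

if __name__ == "__main__":
    c0 = 50.0      # ppm in the parental basanite (assumed)
    bulk_d = 0.1   # incompatible element, D << 1 (assumed)
    for f in (1.0, 0.5, 0.2, 0.1):
        c = rayleigh_liquid_concentration(c0, f, bulk_d)
        print(f"F = {f:4.2f}  ->  C_L = {c:6.1f} ppm")
```

The strong enrichment of incompatible elements at small residual melt fractions is the kind of signature that distinguishes fractionation-dominated evolution from crustal contamination.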
932.
Abstract– Whether a target is penetrated or not during hypervelocity impact depends strongly on typical impactor dimensions (Dp) relative to the absolute target thickness (T). We have therefore conducted impact experiments in aluminum 1100 and Teflon FEP targets that systematically varied Dp/T (=D*), ranging from genuine cratering events in thick targets (Dp << T) to the nondisruptive passage of the impactor through very thin films (Dp >> T). The objectives were to (1) delineate the transition from cratering to penetration events, (2) characterize the diameter of the penetration hole (Dh) as a function of D*, and (3) determine the threshold target thickness that yields Dh = Dp. We employed spherical soda-lime glass (SLG) projectiles of Dp = 50–3175 μm at impact velocities (V) from 1 to 7 km s−1, and varied target thicknesses from microns to centimeters. The transition from cratering to penetration processes in thick targets forms a continuum in all morphologic aspects. The entrance side of the target resembles that of a standard crater even when the back of the target suffers substantial, physical perforations via spallation and plastic deformation. We thus suggest that the cratering-to-penetration transition does not occur when the target becomes physically perforated (i.e., at the “ballistic limit”), but when the shock pulse duration in the projectile (tp) is identical to that in the target (tt), i.e., tp = tt. This condition is readily calculated from equation-of-state data. As a consequence, in reconstructing impactor dimensions from observations of space-exposed substrates, we recommend that crater size (Dc) be used for the case of tp < tt, and that penetration hole diameter (Dh) be used when tp > tt. The morphologic evolution of the penetration hole and its size also forms a continuum that strongly depends on both the scaled parameter D* and on V, but it is independent of the absolute scale. The condition of Dh = Dp is approached at D* > 50. The dependence of Dh on T and V, however, is very systematic. This has led to new and detailed calibration curves, permitting the reconstruction of Dp from the measurement of either crater diameter or penetration-hole size in Al 1100 and Teflon FEP targets of arbitrary thickness. We also placed witness plates behind penetrated targets to intercept the down-range debris plume, which is generally a mixture of both target and impactor fragments and melts. These witness plates also reveal that the debris plume systematically and diagnostically depends on D*. Thick targets shed spall debris only, and target thickness must be less than crater depth (Tc) to allow projectile material on the witness plate. Concentric plume patterns, accented by characteristic “hole saw” rings, characterize penetrated Al targets at D* = 1–10, but they give way to distinctly radial geometries at D* = 10–20. Most of the target debris occupies the periphery of the plume, while the projectile fragments or melts reside in its central parts. The periphery of the plume is also typically more fine-grained than its center. At D* > 50, the exit plume is dominated by solid projectile fragments that progressively coagulate and overlap with each other, giving rise to compound craters. The latter have irregular crater interiors on account of the heterogeneous mass distribution of a collisionally produced, aggregate impactor. Similarly, complex craters are observed on LDEF and Stardust, and they are produced by aggregate cosmic-dust particles containing large, dense components within a relatively low-density, fine-grained matrix. The witness-plate observations can also be used to address the enigmatic clustering of impact sites observed on Stardust’s aerogel and aluminum surfaces. We suggest that this clustering is difficult to produce by the collision of particles from comet Wild 2 with the Stardust spacecraft, and that it is more likely due to particle disaggregation in the comet’s coma.
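The cratering-to-penetration criterion proposed above compares the shock pulse duration in the projectile (tp) with that in the target (tt), which the authors derive from equation-of-state data. The sketch below is only a first-order proxy for that comparison: it approximates each pulse duration by a one-way shock transit time (characteristic dimension divided by an assumed shock velocity). The shock velocities and dimensions are placeholders, not results from the paper.

```python
# First-order proxy for the cratering-to-penetration criterion t_p = t_t.
# Pulse durations are approximated here by shock transit times
# (dimension / shock velocity); the paper computes them from
# equation-of-state data.  All numbers below are placeholders.

def transit_time(length_m: float, shock_velocity_m_s: float) -> float:
    """Time for a shock to traverse a body of the given length."""
    return length_m / shock_velocity_m_s

def regime(dp_m: float, t_m: float, us_proj: float, us_targ: float) -> str:
    """Classify the event by comparing projectile and target pulse proxies."""
    tp = transit_time(dp_m, us_proj)   # proxy for pulse duration in projectile
    tt = transit_time(t_m, us_targ)    # proxy for pulse duration in target
    return "penetration-dominated (use Dh)" if tp > tt else "cratering-dominated (use Dc)"

if __name__ == "__main__":
    dp = 100e-6            # projectile diameter, 100 micrometres (assumed)
    us_glass = 5000.0      # assumed shock velocity in soda-lime glass, m/s
    us_al = 6000.0         # assumed shock velocity in aluminum, m/s
    for t in (1000e-6, 100e-6, 10e-6):   # target thicknesses
        print(f"T = {t*1e6:6.0f} um  D* = {dp/t:6.1f}  ->  "
              f"{regime(dp, t, us_glass, us_al)}")
```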
933.
The novel method of inclusion barometry coupled with the calculation of the required affinity for garnet nucleation is applied to three samples from the previously well-characterized Connecticut Valley Synclinorium in central Vermont. Raman shifts for quartz inclusions record a range of maximum peak shifts of the quartz 464 cm−1 peak from 2.4 to 3.0 cm−1. The temperature of garnet nucleation was constrained by calculating mineral assemblage diagrams in the MnNCKFMASHT system and plotting the intersection of quartz-in-garnet (QuiG) inclusion barometry with Zr-in-rutile thermometry. Utilizing the intersection of Zr-in-rutile thermometry with QuiG barometry, garnet nucleation is inferred to have occurred within a P–T range of ~8.6–9.5 kbar and ~560–575°C. These P–T conditions for garnet nucleation are significantly higher than the calculated equilibrium garnet-in isograds for the three samples. Affinities for garnet nucleation were calculated as the difference between the free energy of a fictive garnet composition based on the matrix assemblage and the free energy of the nucleated garnet. The calculated nucleation affinity varied from 300 to 600 kJ/mol O for St–Ky grade samples. These results suggest that the assumption that metamorphism proceeds as a sequence of near-equilibrium conditions cannot, in general, be made for regional metamorphic terranes. This body of work agrees with numerous recent studies showing that garnet-producing reactions must be overstepped in order for garnet to nucleate.
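QuiG barometry first converts the measured shift of the quartz 464 cm−1 Raman band into a residual inclusion pressure; entrapment conditions then follow from an elastic inclusion model intersected with Zr-in-rutile temperatures, as described above. The sketch below illustrates only the first, shift-to-pressure step; the linear coefficient is a hypothetical placeholder, not the calibration used in the study.

```python
# Illustrative conversion of a quartz 464 cm-1 Raman peak shift into a
# residual inclusion pressure.  The linear coefficient is a placeholder
# for demonstration only; published (polynomial) calibrations and an
# elastic model are needed to recover entrapment P-T in real work.

ASSUMED_MPA_PER_WAVENUMBER = 100.0   # hypothetical ~0.1 GPa per cm-1

def inclusion_pressure_mpa(raman_shift_cm1: float) -> float:
    """Approximate residual inclusion pressure (MPa) from the 464 cm-1 shift."""
    return ASSUMED_MPA_PER_WAVENUMBER * raman_shift_cm1

if __name__ == "__main__":
    for shift in (2.4, 3.0):          # range of peak shifts reported above
        p = inclusion_pressure_mpa(shift)
        print(f"peak shift {shift:.1f} cm-1  ->  ~{p:.0f} MPa inclusion pressure")
```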
934.
The problem of assimilating biased and inaccurate observations into inadequate models of the physical systems from which the observations were taken is common in the petroleum and groundwater fields. When large amounts of data are assimilated without accounting for model error and observation bias, predictions tend to be both overconfident and incorrect. In this paper, we propose a workflow for calibration of imperfect models to biased observations that involves model construction, model calibration, model criticism and model improvement. Model criticism is based on computation of model diagnostics, which provide an indication of the validity of assumptions. During the model improvement step, we advocate identification of additional physically motivated parameters based on examination of data mismatch after calibration and addition of bias correction terms. If model diagnostics indicate the presence of residual model error after parameters have been added, then we advocate estimation of a “total” observation error covariance matrix, whose purpose is to reduce weighting of observations that cannot be matched because of deficiency of the model. Although the target applications of this methodology are in the subsurface, we illustrate the approach with two simplified examples involving prediction of the future velocity of fall of a sphere from models calibrated to a short time series of biased measurements with independent additive random noise. The models into which the data are assimilated contain model errors due to neglect of physical processes and neglect of uncertainty in parameters. In every case, the estimated total error covariance is larger than the true observation covariance, implying that the observations need not be matched to the accuracy of the measuring instrument. Predictions are much improved when all model improvement steps are taken.
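The falling-sphere example above calibrates an imperfect model to biased, noisy measurements and then inflates the observation error to a "total" error covariance when residual model error remains. The following is a minimal sketch of that last step under simple assumptions (a drag-free model calibrated by least squares to data generated with drag and a constant bias); the variable names and numbers are invented for illustration and are not those of the paper's examples.

```python
# Minimal illustration of estimating a "total" observation error variance:
# residual data mismatch after calibrating an imperfect (drag-free) model
# is folded into the observation error.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# "Truth": sphere falling with linear drag, v(t) = (g/k) * (1 - exp(-k t))
g, k = 9.81, 0.3
t_obs = np.linspace(0.5, 5.0, 10)
v_true = (g / k) * (1.0 - np.exp(-k * t_obs))

# Biased, noisy observations (additive bias + independent noise)
sigma_obs, bias = 0.2, 0.5
v_obs = v_true + bias + rng.normal(0.0, sigma_obs, t_obs.size)

# Imperfect model: free fall without drag, v_model(t) = g_eff * t.
# Calibrate the single parameter g_eff by least squares.
g_eff = float(np.sum(t_obs * v_obs) / np.sum(t_obs ** 2))
residuals = v_obs - g_eff * t_obs

# "Total" observation error variance: the larger of the instrument variance
# and the residual model/bias error (a scalar stand-in for the covariance
# matrix used in the paper's workflow).
total_var = max(float(np.var(residuals, ddof=1)), sigma_obs ** 2)
print(f"calibrated g_eff = {g_eff:.2f} m/s^2")
print(f"instrument variance = {sigma_obs**2:.3f}, total variance = {total_var:.3f}")

# Prediction with the imperfect model, carrying the inflated uncertainty
# instead of the (overconfident) instrument error.
t_pred = 8.0
print(f"predicted v({t_pred:.0f} s) = {g_eff * t_pred:.1f} "
      f"+/- {np.sqrt(total_var):.2f} m/s (illustrative)")
```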
935.
936.
The heat waves of 2003 in Western Europe and 2010 in Russia, commonly labelled as rare climatic anomalies outside of previous experience, are often taken as harbingers of more frequent extremes in the global warming-influenced future. However, a recent reconstruction of spring–summer temperatures for Western Europe resulted in the likelihood of significantly higher temperatures in 1540. In order to check the plausibility of this result we investigated the severity of the 1540 drought by putting forward the argument of the known soil desiccation-temperature feedback. Based on more than 300 first-hand documentary weather report sources originating from an area of 2 to 3 million km², we show that Europe was affected by an unprecedented 11-month-long Megadrought. The estimated number of precipitation days and precipitation amount for Central and Western Europe in 1540 is significantly lower than the 100-year minima of the instrumental measurement period for spring, summer and autumn. This result is supported by independent documentary evidence about extremely low river flows and Europe-wide wildfires, forest fires and settlement fires. We found that an event of this severity cannot be simulated by state-of-the-art climate models.
937.
Abstract

The magnitudes of the largest known floods of the River Rhine in Basel since 1268 were assessed using a hydraulic model drawing on a set of pre-instrumental evidence and daily hydrological measurements from 1808. The pre-instrumental evidence, consisting of flood marks and documentary data describing extreme events with the customary reference to specific landmarks, was “calibrated” by comparing it with the instrumental series for the overlapping period between the two categories of evidence (1808–1900). Summer (JJA) floods were particularly frequent in the century between 1651 and 1750, when precipitation was also high. Severe winter (DJF) floods have not occurred since the late 19th century despite a significant increase in winter precipitation. Six catastrophic events involving a runoff greater than 6000 m³ s−1 are documented prior to 1700. They were initiated by spells of torrential rainfall of up to 72 h (1480 event) and preceded by long periods of substantial precipitation that saturated the soils, and/or by abundant snowmelt. All except two (1999 and 2007) of the 43 identified severe events (SEs: defined as having runoff > 5000 and < 6000 m³ s−1) occurred prior to 1877. Not a single SE is documented from 1877 to 1998. The intermediate 121-year-long “flood disaster gap” is unique over the period since 1268. The effect of river regulations (1714 for the River Kander; 1877 for the River Aare) and the building of reservoirs in the 20th century upon peak runoff was investigated using a one-dimensional hydraulic flood-routing model. Results show that anthropogenic effects only partially account for the “flood disaster gap”, suggesting that variations in climate should also be taken into account in explaining these features.

Citation: Wetter, O., Pfister, C., Weingartner, R., Luterbacher, J., Reist, T., & Trösch, J. (2011) The largest floods in the High Rhine basin since 1268 assessed from documentary and instrumental evidence. Hydrol. Sci. J. 56(5), 733–758.
938.
This study presents a methodology for estimating extreme current speeds from numerical model results using extremal analysis techniques. This method is used to estimate the extreme near-surface and near-bottom current speeds of the northwest Atlantic Ocean with 50-year return periods from 17 years of model output. The non-tidal currents produced by a three-dimensional ocean circulation model for the 1988–2004 period were first used to estimate and map the 17-year return period extreme current speeds at the surface and near the bottom. Extremal analysis techniques (i.e., fitting the annual maxima to the Type I probability distribution) are used to estimate and map the 50-year extreme current speeds. Tidal currents are dominant in some parts of the northwest Atlantic, and a Monte Carlo-based methodology is developed to take into account the fact that large non-tidal extrema may occur at different tidal phases. The inclusion of tidal currents in this way modifies the estimated 50-year extreme current speeds, and this is illustrated along several representative transects and depth profiles. Seasonal variations are examined by calculating the extreme current speeds for fall–winter and spring–summer. Finally, the distribution of extreme currents is interpreted taking into account (1) variability about the time-mean current speeds, (2) wind-driven Ekman currents, and (3) flow along isobaths.
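The 50-year extreme current speeds above come from fitting annual maxima to the Type I (Gumbel) distribution and reading off the corresponding return level. A minimal sketch of that extremal-analysis step, using synthetic annual maxima rather than output from the circulation model, could look like this:

```python
# Minimal Type I (Gumbel) extremal analysis: fit annual maxima and
# evaluate the 50-year return level.  The annual maxima below are
# synthetic placeholders, not values from the study.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
annual_maxima = gumbel_r.rvs(loc=0.8, scale=0.15, size=17,
                             random_state=rng)   # current speed maxima, m/s

loc, scale = gumbel_r.fit(annual_maxima)         # fit the Type I distribution

# Return level: the speed exceeded on average once every `return_period` years.
return_period = 50.0
u50 = gumbel_r.ppf(1.0 - 1.0 / return_period, loc=loc, scale=scale)

print(f"fitted location = {loc:.3f} m/s, scale = {scale:.3f} m/s")
print(f"estimated 50-year extreme current speed ~ {u50:.2f} m/s (illustrative)")
```

The Monte Carlo treatment of tidal phase described in the abstract would wrap around this fit, repeatedly adding tidal currents sampled at random phases to the non-tidal extrema before re-estimating the return level.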
939.
940.