11.
The Néel temperature (T_N) of α-Fe2SiO4 (fayalite) was measured as a function of pressure by Mössbauer spectroscopy in the range 0–16 GPa. High pressure was generated with a clamp-type miniature diamond anvil cell inserted into a cryostat. The Néel temperature increased linearly with pressure at a rate of dT_N/dp = 2.2 ± 0.2 K/GPa. The result is discussed on the basis of the model proposed for the magnetic structure of fayalite by Santoro et al. (1966). The observed dT_N/dp suggests that the superexchange interactions vary as the −10/3 power of the volume, while the volume dependence of the direct exchange interactions is positive and small.
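A hedged back-of-envelope check (not from the abstract) of how the quoted −10/3 volume exponent maps onto the measured pressure derivative, assuming the mean-field proportionality T_N ∝ J; the ambient-pressure Néel temperature (~65 K) and isothermal bulk modulus of fayalite (~135 GPa) used at the end are assumed values, not quoted in the abstract:

% assumes T_N proportional to the superexchange integral J
T_N \propto J, \qquad J \propto V^{-10/3}
\;\Longrightarrow\;
\frac{d\ln T_N}{dP}
  = \frac{d\ln J}{d\ln V}\,\frac{d\ln V}{dP}
  = \left(-\frac{10}{3}\right)\left(-\frac{1}{K_T}\right)
  = \frac{10}{3K_T},
\qquad
\frac{dT_N}{dP} \approx \frac{10\,T_N}{3K_T} \approx 1.6~\mathrm{K/GPa}.

This is of the same order as the measured 2.2 ± 0.2 K/GPa; the exact value depends on the magnetic model adopted and on the direct-exchange contribution mentioned above.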
12.
Balloon-borne soundings of the vertical distributions of atmospheric ozone and aerosol   (total citations: 17; self-citations: 2; citations by others: 17)
This paper presents the vertical distributions of atmospheric ozone and aerosol measured with a high-altitude scientific balloon over Xianghe, Hebei Province, on 12 September 1993. The main findings are: (1) the ozone number density is low throughout the troposphere (~10^12 molecules/cm^3) and decreases slightly from the surface to the tropopause; above the tropopause it increases rapidly, peaking at 4.78×10^12 molecules/cm^3 near 24 km, and the ozone partial pressure has a similar profile with a maximum of 146×10^-4 Pa at the same altitude; (2) in the lower stratosphere the ozone partial pressure shows a secondary maximum of 62×10^-4 Pa at 15–16 km; (3) between 0 and 30 km the aerosol number density exhibits three peaks, of 143, 8, and 1.1 particles/cm^3, located near the surface, at 5 km, and at 21 km, respectively; (4) the aerosol number density spectrum is bimodal in the troposphere, while in the stratosphere the secondary mode disappears. The results are also compared with other observations.
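As a quick plausibility check (not part of the original study), the quoted peak ozone number density and partial pressure near 24 km should be linked by the ideal-gas relation p = n k_B T; the ~220 K temperature it implies is an inference, not a value reported in the paper.

# Consistency check linking the reported peak ozone number density and partial pressure.
# The implied ~220 K stratospheric temperature is an inference, not a value from the paper.
k_B = 1.380649e-23            # Boltzmann constant, J/K
n_peak = 4.78e12 * 1e6        # peak ozone number density, cm^-3 converted to m^-3
p_peak = 146e-4               # peak ozone partial pressure, Pa
T_implied = p_peak / (n_peak * k_B)
print(f"implied temperature near 24 km: {T_implied:.0f} K")   # ~221 K, plausible at 24 km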
13.
The 1995 Hyogoken–Nambu earthquake caused severe liquefaction over wide areas of reclaimed land. The liquefaction also induced large horizontal ground displacements, which caused serious damage to the foundations of structures. However, few analyses of steel pipe piles based on field investigation have so far been conducted to identify the causes and process of such damage. The authors conducted a soil–pile–structure interaction analysis by applying a multi-lumped-mass-spring model to a steel pipe pile foundation in order to evaluate the causes and process of its damage. The damage process analyzed in the time domain corresponded well with the results of a detailed field investigation. It was found that a bending moment exceeding the ultimate plastic moment of the pile foundation was induced mainly by the large ground displacement caused by liquefaction before lateral spreading of the ground, and that this displacement developed while the excess pore water pressure was accumulating.
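To make the modelling approach concrete, the Python sketch below sets up a crude multi-lumped-mass-spring pile in the time domain: pile nodes are point masses joined by elastic springs and tied to a prescribed free-field ground displacement through soil springs. All masses, stiffnesses, and the synthetic ground motion are hypothetical placeholders; this illustrates the technique, not the authors' model.

# Minimal lumped-mass-spring sketch of a pile subjected to free-field ground displacement.
# Every numerical value here is a hypothetical placeholder, not from the study.
import numpy as np

n, dt, steps = 10, 0.001, 20000          # pile nodes, time step [s], number of steps
m = np.full(n, 500.0)                    # lumped masses [kg]
k_pile = 5.0e7                           # pile spring between adjacent nodes [N/m]
k_soil = np.linspace(2.0e6, 8.0e6, n)    # soil springs, stiffer with depth [N/m]
c = 0.02 * k_soil                        # light viscous damping [N s/m]

t = np.arange(steps) * dt
# synthetic free-field displacement: a slow lateral drift imitating liquefaction-induced movement
u_g = np.outer(0.3 * (1 - np.exp(-t / 5.0)), np.linspace(1.0, 0.2, n))

u = np.zeros(n); v = np.zeros(n)
history = np.zeros((steps, n))
for i in range(steps):
    f = np.zeros(n)
    f[:-1] += k_pile * (u[1:] - u[:-1])          # spring force from the node below
    f[1:]  += k_pile * (u[:-1] - u[1:])          # spring force from the node above
    f += k_soil * (u_g[i] - u) + c * (0.0 - v)   # soil springs drag the pile toward the ground motion
    a = f / m
    v += a * dt
    u += v * dt
    history[i] = u

# head-to-tip relative displacement is a rough proxy for bending demand along the pile
print("peak head-tip relative displacement [m]:",
      np.max(np.abs(history[:, 0] - history[:, -1])))

In the actual analysis the soil springs would be degraded as excess pore water pressure builds up, and the input would be a recorded or simulated free-field motion rather than the synthetic drift used here.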
14.
15.
Based on the analysis of observations from a 213-m meteorological tower at Tsukuba, Japan, we have investigated the conditions favourable for the predominance of large-scale turbulence structures in the near-neutral atmospheric boundary layer (ABL). From the wavelet variance spectrum of the streamwise velocity component ($u$) measured by a sonic anemometer-thermometer at the highest level (200 m), large-scale structures (time scales of 100–300 s) predominate under slightly unstable, close-to-neutral conditions. The emergence of large-scale structures can also be related to the diurnal evolution of the ABL. The large-scale structures play an important role in the overall flow structure of the lower boundary layer; for example, the $u$ components at the 200-m and 50-m levels are relatively highly correlated when large-scale structures are present. Under slightly unstable (near-neutral) conditions, a low-speed region ahead of the high-speed structure shows a positive temperature deviation and appears as a plume structure forced by buoyancy in the heated lower layer. Despite the difference in buoyancy effects between the near-neutral and unstable cases, large-scale structures are frequently observed in both, with the same vertical correlation of the $u$ components; however, the vertical wind shear is smaller in the unstable cases. On the other hand, in the near-neutral cases the transport efficiency of momentum at the higher level and the flux contribution of sweep motions are larger than in the unstable cases.
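For readers unfamiliar with the method, the Python sketch below computes a wavelet variance spectrum of a synthetic streamwise velocity record and picks out its dominant time scale; the sampling rate, Morlet parameters, and the 200-s test signal are assumptions, not the authors' data or processing chain.

# Wavelet variance spectrum of a synthetic "u" record (illustrative only).
import numpy as np
from scipy.signal import fftconvolve

fs = 10.0                                    # sampling frequency [Hz] (assumed)
t = np.arange(0, 3600.0, 1.0 / fs)           # one hour of data
rng = np.random.default_rng(0)
# small-scale noise plus a slow, large-scale modulation with a ~200 s period
u = rng.normal(0.0, 0.5, t.size) + 1.0 * np.sin(2 * np.pi * t / 200.0)
u -= u.mean()

def morlet(tau, scale, w0=6.0):
    """Real Morlet wavelet at the given time scale, with a simple 1/sqrt(scale) normalisation."""
    x = tau / scale
    return np.exp(-0.5 * x**2) * np.cos(w0 * x) / np.sqrt(scale)

scales = np.geomspace(1.0, 300.0, 40)        # time scales from 1 s to 300 s
variance = np.empty_like(scales)
for i, s in enumerate(scales):
    tau = np.arange(-4 * s, 4 * s + 1.0 / fs, 1.0 / fs)
    coeff = fftconvolve(u, morlet(tau, s), mode="same") / fs
    variance[i] = np.mean(coeff**2)          # wavelet variance at this scale

peak_scale = scales[np.argmax(variance)]
print(f"scale of maximum wavelet variance: {peak_scale:.0f} s")

With this synthetic input the variance maximum falls near the 200-s mode, mimicking how the 100–300 s structures stand out in the tower data.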
16.
Climate warming and human disturbance in north-western Canada have been accompanied by degradation of permafrost, which introduces considerable uncertainty to the future availability of northern freshwater resources. This study demonstrates the rate and spatial pattern of permafrost loss in a region that typifies the southern boundary of permafrost. Remote-sensing analysis of a 1.0 km2 area indicates that permafrost occupied 0.70 km2 in 1947 and had decreased to 0.43 km2 by 2008. Ground-based measurements demonstrate the importance of horizontal heat flows in thawing discontinuous permafrost, and show that such thaw produces dramatic land-cover changes that can alter basin runoff production in this region. A major challenge for northern water resources management in the twenty-first century therefore lies in predicting stream flows dynamically in the context of widely occurring permafrost thaw. The need for appropriate water resource planning, mitigation, and adaptation strategies is explained.
17.
Clean Development Mechanism (CDM) project developers have long complained about the complexities of project-specific baseline setting and the vagaries of additionality determination. In response to this, the CDM Executive Board took bold steps towards the standardization of CDM methodologies, culminating in the approval of guidelines for the establishment of performance standards in November 2011. The guidelines specify a performance standard stringency level for both baseline and additionality of 80% for several priority sectors and 90% for all other sectors. However, an analysis of 14 large-scale CDM methodologies that use performance standard approaches challenges this top-down approach to the performance standard design. An appropriate performance standard stringency level strongly depends on sector and technology characteristics. A single stringency level for baseline and additionality determination is appropriate only for greenfield projects, but not for retrofit ones. Overly simple, highly aggregated performance standards are unlikely to ensure high environmental integrity, and difficult questions regarding stringency and updating frequency will eventually have to be addressed on a rather disaggregated level. A careful balance between data requirements and the practicability of performance standards is essential because the heavy data requirements of the existing performance standard methodologies have been the key barrier to their actual implementation.

Policy relevance

CDM regulators have been pushed by many stakeholders to standardize baseline setting and eliminate project-specific additionality determination. At first glance, performance standards seem to provide the perfect solution for both tasks. However, a one-size-fits-all political decision – e.g. the average of the top 20% performers as enshrined in the Marrakech Accords – is inappropriate. Substantial disaggregation of performance standards is required, both technologically and geographically, in order to limit over- and under-crediting and to close loopholes for non-additional projects. As a lack of reliable and complete data has been, and will remain, a key bottleneck for the development of performance standards, international support for data collection will be indispensable but costly and time-consuming. Empirically driven, techno-economic assessments of performance standard stringency levels must be the central task of future work on standardized methodologies, and should not be sidelined by the perceived need of policy makers to take bold decisions under time pressure.
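As a toy illustration of what a performance-standard benchmark means in practice, the Python fragment below takes invented plant emission intensities and sets the baseline at the average of the best 20% of performers, echoing the "average of the top 20% performers" idea cited above; none of the numbers are real sector data.

# Toy performance-standard benchmark from invented plant data (tCO2/MWh).
import numpy as np

intensities = np.array([0.95, 0.88, 0.81, 0.79, 0.76, 0.74, 0.71, 0.69, 0.66, 0.62])
top_share = 0.20                                  # stringency: best 20% of performers
n_top = max(1, int(round(top_share * intensities.size)))
benchmark = np.sort(intensities)[:n_top].mean()   # lower intensity = better performer
print(f"benchmark baseline intensity: {benchmark:.3f} tCO2/MWh")

project_intensity = 0.55                          # hypothetical candidate project
print("project beats the benchmark:", project_intensity < benchmark)

Tightening top_share to 0.10 (roughly the 90% stringency level mentioned above) moves the benchmark toward the single best plant, which is exactly the kind of sector-dependent sensitivity the abstract warns about.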
18.
The Helioseismic and Magnetic Imager (HMI) began near-continuous full-disk solar measurements on 1 May 2010 from the Solar Dynamics Observatory (SDO). An automated processing pipeline keeps pace with observations to produce observable quantities, including the photospheric vector magnetic field, from sequences of filtergrams. The basic vector-field frame list cadence is 135 seconds, but to reduce noise the filtergrams are combined to derive data products every 720 seconds. The primary 720 s observables were released in mid-2010, including Stokes polarization parameters measured at six wavelengths, as well as intensity, Doppler velocity, and the line-of-sight magnetic field. More advanced products, including the full vector magnetic field, are now available. Automatically identified HMI Active Region Patches (HARPs) track the location and shape of magnetic regions throughout their lifetime. The vector field is computed using the Very Fast Inversion of the Stokes Vector (VFISV) code optimized for the HMI pipeline; the remaining 180° azimuth ambiguity is resolved with the Minimum Energy (ME0) code. The Milne–Eddington inversion is performed on all full-disk HMI observations. The disambiguation, until recently run only on HARP regions, is now implemented for the full disk. Vector and scalar quantities in the patches are used to derive active region indices potentially useful for forecasting; the data maps and indices are collected in the SHARP data series, hmi.sharp_720s. Definitive SHARP processing is completed only after the region rotates off the visible disk; quick-look products are produced in near real time. Patches are provided in both CCD and heliographic coordinates. HMI provides continuous coverage of the vector field, but has modest spatial, spectral, and temporal resolution. Coupled with limitations of the analysis and interpretation techniques, effects of the orbital velocity, and instrument performance, the resulting measurements have a certain dynamic range and sensitivity and are subject to systematic errors and uncertainties that are characterized in this report.
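For anyone wanting to look at these products, a minimal sketch using the third-party drms client for JSOC is given below; the series name hmi.sharp_720s comes from the text, but the HARP number, time window, and keyword list are illustrative examples and may need adjusting.

# Minimal sketch: query SHARP summary keywords from the hmi.sharp_720s series via JSOC.
# The HARP number, time range and keyword list below are examples, not prescribed values.
import drms

client = drms.Client()
records = client.query(
    "hmi.sharp_720s[377][2011.02.14_00:00:00_TAI/1d]",   # one HARP, one day of 720 s records
    key="T_REC, HARPNUM, USFLUX, MEANGAM",               # e.g. total unsigned flux, mean inclination
)
print(records.head())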
19.
We have carried out noble gas measurements on graphite from a large graphite-metal inclusion in Canyon Diablo. The Ne data of the low-temperature fractions lie on the mixing line between air and the spallogenic component, whereas those of the high-temperature fractions appear to lie on the mixing line between Ne-HL and the spallogenic component. The Ar isotope data indicate the presence of Q in addition to air, the spallogenic component, and Ar-HL. Because the elemental concentration of Ne in Q is low, Ne-Q could not be detected in the Ne data. On the other hand, Xe-HL was not observed in our Xe data. Since the Xe concentration and the Xe/Ne ratio in Q are much higher than those in the HL component, it is likely that only the contribution of Q is seen in the Xe data. The Xe isotopic data can be explained as a mixture of Q, air, and “El Taco Xe.” The Canyon Diablo graphite thus contains both HL and Q, much like carbonaceous chondrites, retaining the signatures of various primordial noble gas components. This indicates that the graphite was formed in a primitive nebular environment and was not heated to high, igneous temperatures. Furthermore, a large excess of 129Xe was observed, which indicates that the graphite formed at a very early stage of the solar system, when 129I was still present. The HL/Q ratios in the graphite in Canyon Diablo are lower than those in carbonaceous chondrites, indicating that the graphite experienced some thermal metamorphism. From the difference in the thermal retentivities of HL and Q, we estimate the temperature of this metamorphism at about 500–600 °C. It is also noted that “El Taco Xe” is commonly observed in many IAB iron meteorites, but its presence in carbonaceous chondrites has not yet been established.
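The "mixing line" language can be made explicit with the standard two-component relation for a neon three-isotope plot (a textbook result, not specific to this paper): when both ratios share the common denominator 22Ne, a mixture of end-members A and B satisfies

\left(\frac{^{20}\mathrm{Ne}}{^{22}\mathrm{Ne}}\right)_{\mathrm{mix}}
 = f\left(\frac{^{20}\mathrm{Ne}}{^{22}\mathrm{Ne}}\right)_{A}
 + (1-f)\left(\frac{^{20}\mathrm{Ne}}{^{22}\mathrm{Ne}}\right)_{B},
\qquad
f = \frac{(^{22}\mathrm{Ne})_{A}}{(^{22}\mathrm{Ne})_{A}+(^{22}\mathrm{Ne})_{B}},

with the same f multiplying the 21Ne/22Ne ratios, so varying f traces the straight line between the two end-members (e.g. air and the spallogenic component, or Ne-HL and the spallogenic component).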
20.
In this paper, the physico-chemical effects of the nebula gas on the planets are reviewed from the standpoint of planetary formation in the solar nebula. The proto-Earth growing in the nebula was surrounded by a primordial atmosphere of solar chemical and isotopic composition. When the mass of the proto-Earth exceeded 0.3 times the present Earth mass, the surface was molten because of the blanketing effect of the atmosphere. Therefore, the primordial rare gases contained in the primordial atmosphere dissolved into the molten Earth material without fractionation, and in particular the dissolved neon is expected to be conserved in the present Earth material. Hence, if dissolved neon with a solar isotopic ratio is discovered in Earth material, it will indicate that the Earth was formed in the nebula and that the dissolved rare gases were one of the sources that degassed to form the present atmosphere.
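The dissolution step invoked above is normally written as a Henry's-law equilibrium; a generic hedged form (standard thermodynamics, not a result of this paper) is

C_i^{\mathrm{melt}} = S_i \, P_i^{\mathrm{atm}},

where S_i is the solubility constant of rare gas i in the silicate melt and P_i is its partial pressure in the overlying primordial atmosphere. Because the equilibrium solubilities of the isotopes of a given gas are essentially identical, dissolution of this kind preserves the solar isotopic ratios, which is why finding neon with a solar isotopic signature in Earth material would be diagnostic.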