Full-text access type
Paid full text | 1112 articles |
Free | 229 articles |
Free (domestic) | 181 articles |
Subject category
Surveying and mapping | 192 articles |
Atmospheric sciences | 142 articles |
Geophysics | 512 articles |
Geology | 357 articles |
Oceanography | 101 articles |
Astronomy | 5 articles |
Multidisciplinary | 55 articles |
Physical geography | 158 articles |
Publication year
2024 | 7 articles |
2023 | 12 articles |
2022 | 25 articles |
2021 | 28 articles |
2020 | 60 articles |
2019 | 62 articles |
2018 | 63 articles |
2017 | 74 articles |
2016 | 78 articles |
2015 | 69 articles |
2014 | 74 articles |
2013 | 168 articles |
2012 | 78 articles |
2011 | 70 articles |
2010 | 60 articles |
2009 | 58 articles |
2008 | 64 articles |
2007 | 96 articles |
2006 | 74 articles |
2005 | 49 articles |
2004 | 44 articles |
2003 | 22 articles |
2002 | 31 articles |
2001 | 15 articles |
2000 | 25 articles |
1999 | 17 articles |
1998 | 18 articles |
1997 | 18 articles |
1996 | 11 articles |
1995 | 10 articles |
1994 | 12 articles |
1993 | 5 articles |
1992 | 3 articles |
1991 | 9 articles |
1990 | 7 articles |
1989 | 3 articles |
1988 | 1 article |
1987 | 1 article |
1983 | 1 article |
Sort by: 1522 results found (search time: 46 ms)
161.
Jefferson S. Wong, Jim E. Freer, Paul D. Bates, Jeff Warburton, Tom J. Coulthard 《Earth Surface Processes and Landforms》2021, 46(10): 1981-2003
Landscape evolution models (LEMs) have the capability to characterize key aspects of geomorphological and hydrological processes. However, their usefulness is hindered by model equifinality and a paucity of available calibration data. Estimating uncertainty in the parameter space and the resultant model predictions is rarely attempted, as it is computationally intensive and the uncertainties inherent in the observed data are large. Therefore, a limits-of-acceptability (LoA) uncertainty analysis approach was adopted in this study to assess the value of uncertain hydrological and geomorphic data. These were used to constrain simulations of catchment responses and to explore the parameter uncertainty in model predictions. We applied this approach to the River Derwent and Cocker catchments in the UK using the LEM CAESAR-Lisflood. Results show that the model was generally able to produce behavioural simulations within the uncertainty limits of the streamflow. Reliability metrics ranged from 24.4% to 41.2% and captured the high-magnitude, low-frequency sediment events. Since different sets of behavioural simulations were found across different parts of the catchment, evaluating LEM performance, in quantifying and assessing both at-a-point behaviour and the spatial catchment response, remains a challenge. Our results show that evaluating LEMs within an uncertainty-analysis framework, while taking into account the varying quality of different observations, constrains behavioural simulations and parameter distributions, and is a step towards a full-ensemble uncertainty evaluation of such models. We believe that this approach will have benefits for reflecting uncertainties in flooding events where channel morphological changes are occurring and diverse (and often sparse) data have been collected over such events.
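The limits-of-acceptability screening described above can be sketched in a few lines: a candidate simulation is kept as "behavioural" only if it stays within the observation uncertainty bounds for a required fraction of time steps. This is an illustrative sketch with synthetic data, not the CAESAR-Lisflood workflow; the ±20% bounds, the flow series, and the 0.9 acceptance threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loa_behavioural(simulations, lower, upper, min_score=0.9):
    """Limits-of-acceptability screen: a parameter set is 'behavioural'
    if its simulated series falls inside the observation uncertainty
    bounds [lower, upper] for at least min_score of the time steps."""
    inside = (simulations >= lower) & (simulations <= upper)  # (n_sets, n_steps)
    scores = inside.mean(axis=1)                              # per-set reliability
    return scores, scores >= min_score

# Synthetic example: 200 candidate simulations of a 365-step flow series.
obs = 10 + 3 * np.sin(np.linspace(0, 6 * np.pi, 365))
lower, upper = obs * 0.8, obs * 1.2                 # +/-20% observation uncertainty
sims = obs + rng.normal(0, 1.5, size=(200, 365))    # candidate model runs
scores, behavioural = loa_behavioural(sims, lower, upper, min_score=0.9)
```

The per-set score plays the role of the reliability metric quoted in the abstract; only the runs passing the threshold would be carried forward into the behavioural ensemble.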
162.
Barry Hankin, Trevor J. C. Page, Nick A. Chappell, Keith J. Beven, Paul J. Smith, Ann Kretzschmar, Rob Lamb 《Hydrological Processes》2021, 35(11): e14418
The Q-natural flood management project, co-developed with the Environment Agency, has instrumented 18 monitored micro-catchments (~1 km²) in Cumbria, UK, installing calibrated flumes aimed at quantifying the potential shift in observed flows resulting from a range of nature-based solutions (NbS) installed by local organizations. The small scale reduces the influence of the variability that characterizes larger catchments and would otherwise mask such shifts, which we attempt to relate to shifts in model parameters. This paper demonstrates an approach for applying donor parameter shifts, obtained from modelling two of the paired micro-catchments, at a much larger scale, in order to assess the potential for improved distributed modelling of nature-based solutions in the form of additional tree planting. The models include a rainfall-runoff model, Dynamic Topmodel, and a 2D hydrodynamic model, JFlow, permitting analysis of changes in hillslope processes and channel hydrodynamics resulting from a range of distributed measures designed to emulate natural hydrological processes that evaporate, store or infiltrate flows. We report on attempts to detect a shift in hydrological response at one of the paired moorland-versus-forestry micro-catchment sites in Lorton using Dynamic Topmodel. The donor-parameter-shift approach is used in a hypothetical experiment to represent new woodland in a much larger catchment, although testing all combinations of spatial planting strategies, responses to multiple extremes, failure modes and changes to synchronization becomes intractable as a basis for good decision making. We argue that the problem can be re-framed to apply donor parameter shifts at multiple local-scale catchments above communities known to be at risk, commensurate with most of the evidence that NbS impacts are effective at the small scale (ca. 10 km²).
This might lead to more effective modelling that helps catchment managers prioritize those communities at risk for which there is more evidence that NbS might be effective.
163.
Fuzzy uncertainty in remote sensing data and methods for handling it. Cited by: 11 (self-citations: 0; citations by others: 11)
An analysis of how remote sensing data are generated shows that such data carry uncertainty, and it is further argued that this uncertainty includes a fuzzy component; treating it as such makes the handling of remote sensing data uncertainty more comprehensive and reasonable, with the aim of improving the accuracy of remote sensing data and reducing their uncertainty. Drawing together domestic and international research on the fuzzy uncertainty of remote sensing data, several processing methods are discussed; as yet, no single method has been found that resolves the fuzzy uncertainty of remote sensing data satisfactorily.
164.
M. A. Hariri-Ardebili, P. Boodagh 《Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards》2019, 13(1): 34-52
This paper presents a methodology for evaluating the seismic reliability of geostructures in an optimal way. Taguchi design of experiments is adopted to find the most efficient and cost-effective combination of material properties in the uncertainty domain. Twelve uniform and mixed design models are tested. A polynomial-based response-surface meta-model is built for each one, and the accuracy of prediction is examined using 10,000 Monte Carlo simulations. A two-dimensional gravity dam is used as a vehicle for probabilistic transient analyses. Ground-motion record-to-record variability is added as well, using over one hundred earthquake records selected based on probabilistic seismic hazard analysis. The dynamic sensitivity of the epistemic random variables is evaluated for the first time. Finally, an efficient and practical procedure is proposed to determine the reliability index of geostructures. This approach can, in fact, be generalised to any type of engineering structure dealing with multi-hazard problems.
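The design-of-experiments-plus-response-surface workflow summarized in this abstract can be illustrated with a minimal sketch: fit a quadratic polynomial surrogate to a small factorial design of material properties, then push cheap Monte Carlo samples through the surrogate instead of the expensive transient model. The two-variable "dam response" function, the design levels, and the capacity threshold below are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_response(E, rho):
    # Stand-in for an expensive transient dam analysis: peak stress demand
    # as a function of two normalised material properties.
    return 4.0 - 0.8 * E + 0.05 * rho + 0.1 * E * rho

# 1) Small design of experiments over two material properties (normalised).
levels = np.array([-1.0, 0.0, 1.0])
design = np.array([(e, r) for e in levels for r in levels])  # 3x3 full factorial
y = np.array([true_response(e, r) for e, r in design])

# 2) Quadratic polynomial response surface fitted by least squares.
def basis(X):
    e, r = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(e), e, r, e * r, e**2, r**2])

coef, *_ = np.linalg.lstsq(basis(design), y, rcond=None)

# 3) 10,000 Monte Carlo samples evaluated through the cheap surrogate.
samples = rng.normal(0.0, 0.4, size=(10_000, 2))
demand = basis(samples) @ coef
capacity = 5.0
pf = np.mean(demand > capacity)   # crude failure-probability estimate
```

The point of the meta-model is step 3: once the surrogate is fitted from a handful of expensive runs, the 10,000-sample Monte Carlo loop costs almost nothing.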
165.
Kok-Kwang Phoon 《Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards》2019, 13(2): 101-130
The calculated response from a numerical model will deviate from the measured one given the presence of modelling idealizations and real-world construction effects. This deviation can be directly captured by a ratio between the measured and the calculated quantity. The ratio is also called a model factor in many design guides. The probabilistic distribution of the model factor is arguably the most common and simplest complete representation of model uncertainty. The characterisation of model uncertainty is identified as one of the critical elements in a geotechnical reliability-based design process in Annex D of ISO 2394:2015 "General Principles on Reliability of Structures". This Spotlight paper reviews the databases for various geo-structures and determines their associated model statistics. Foundation load test databases are the most prevalent. A recent effort to compile a large generic database (PILE/2739), which contains 2739 field load tests conducted on various piles installed in different soils and countries, is highlighted. This systematic compilation of load test data is part of a broader research agenda to digitalise foundation design for "precision construction", which is targeted at characterising "site-specific" model factors and soil parameters based on both site-specific and generic data for further customisation of design to a particular site. The mean and COV of the model factor for a range of geo-structures, geomaterials, and limit states (both ultimate and serviceability) are summarized in a form suitable for adoption in design and codes of practice. Based on this summary, it is proposed that a model factor for a design model can be classified as: (1) moderately conservative (1 ≤ mean < 2), (2) highly conservative (2 ≤ mean < 3), or (3) very highly conservative (mean ≥ 3).
The model uncertainty can be classified as: (1) low dispersion (COV < 0.3), (2) medium dispersion (0.3 ≤ COV < 0.6), (3) high dispersion (0.6 ≤ COV < 0.9), or (4) very high dispersion (COV ≥ 0.9). This summary represents the most extensive and significant update of Table 3.7.5.1 in the 2006 JCSS Probabilistic Model Code.
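The classification bands quoted above (model factor M = measured/calculated, graded by mean for conservatism and by COV for dispersion) translate directly into a small helper. The pile-capacity numbers in the example are hypothetical; only the band boundaries come from the abstract, and the "unconservative" label for mean < 1 is an added assumption for completeness.

```python
import numpy as np

def classify_model_factor(measured, calculated):
    """Classify a model factor M = measured/calculated by its sample
    mean (degree of conservatism) and COV (dispersion)."""
    m = np.asarray(measured, float) / np.asarray(calculated, float)
    mean = m.mean()
    cov = m.std(ddof=1) / mean
    if mean >= 3:
        conservatism = "very highly conservative"
    elif mean >= 2:
        conservatism = "highly conservative"
    elif mean >= 1:
        conservatism = "moderately conservative"
    else:
        conservatism = "unconservative"   # assumed label, not in the abstract
    if cov >= 0.9:
        dispersion = "very high dispersion"
    elif cov >= 0.6:
        dispersion = "high dispersion"
    elif cov >= 0.3:
        dispersion = "medium dispersion"
    else:
        dispersion = "low dispersion"
    return mean, cov, conservatism, dispersion

# Hypothetical pile capacities (kN): measured load-test values vs calculated.
measured = np.array([1250.0, 980.0, 1430.0, 1100.0, 1320.0])
calculated = np.array([1000.0, 900.0, 1200.0, 1000.0, 1100.0])
mean, cov, cons, disp = classify_model_factor(measured, calculated)
```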
166.
J. Ignacio López-Moreno, Leena Leppänen, Bartłomiej Luks, Ladislav Holko, Ghislain Picard, Alba Sanmiguel-Vallelado, Esteban Alonso-González, David C. Finger, Ali N. Arslan, Katalin Gillemot, Aynur Sensoy, Arda Sorman, M. Cansaran Ertaş, Steven R. Fassnacht, Charles Fierz, Christoph Marty 《Hydrological Processes》2020, 34(14): 3120-3133
Manually collected snow data are often considered ground truth for many applications, such as climatological or hydrological studies. However, there are many sources of uncertainty that are not quantified in detail. For determining the water equivalent of snow cover (SWE), different snow core samplers and scales are used, but they are all based on the same measurement principle. We conducted two field campaigns with nine samplers commonly used in observational measurements and research in Europe and North America to better quantify uncertainties when measuring depth, density and SWE with core samplers. During the first campaign, as a first approach to distinguishing snow variability measured at the plot and at the point scale, repeated measurements were taken along two 20 m long snow pits. The results revealed a much higher variability of SWE at the plot scale (resulting from both natural variability and instrumental bias) compared with repeated measurements at the same spot (resulting mostly from observer-induced error or very small-scale variability of snow depth). The exceptionally homogeneous snowpack found in the second campaign made it possible to almost neglect the natural variability of the snowpack properties and focus on separating instrumental bias from observer-induced error. Reported uncertainties refer to a shallow, homogeneous tundra-taiga snowpack less than 1 m deep (loose, mostly recrystallised snow and no wind impact). Under such measurement conditions, the uncertainty in bulk snow density estimation is about 5% for an individual instrument and close to 10% among different instruments. Results confirmed that instrumental bias exceeded both the natural variability and the observer-induced error, even when observers were not familiar with a given snow core sampler.
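The quoted density uncertainties (about 5% for a single instrument, close to 10% among instruments) can be propagated to bulk SWE with a standard quadrature sum, assuming independent fractional errors in depth and density. The depth value, the 2% depth uncertainty, and the independence assumption are illustrative, not taken from the study.

```python
import math

def swe_mm(depth_m, density_kgm3):
    # SWE [mm] = depth [m] * bulk density [kg m^-3] / rho_water [1000 kg m^-3]
    # expressed as a water-column height in mm; units cancel to depth * density.
    return depth_m * density_kgm3

def swe_relative_uncertainty(u_depth, u_density):
    # Independent fractional errors combine in quadrature.
    return math.sqrt(u_depth**2 + u_density**2)

swe = swe_mm(0.8, 250.0)                            # 0.8 m pack at 250 kg/m^3
u_single = swe_relative_uncertainty(0.02, 0.05)     # one instrument (~5% density)
u_multi = swe_relative_uncertainty(0.02, 0.10)      # across instruments (~10%)
```

Because density dominates the quadrature sum, the instrument-to-instrument density bias found in the campaigns carries over almost one-to-one into the SWE uncertainty.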
167.
168.
169.
Uncertainty in surface roughness is the main factor causing uncertainty in SAR soil moisture retrievals. Most existing studies focus on the uncertainty of a single roughness parameter (mainly the correlation length); few directly address the uncertainty of the combined surface roughness. This paper uses three metrics (skewness, kurtosis and interquartile range) to quantify uncertainty. By randomly perturbing the combined roughness with Gaussian noise of different magnitudes, we study how the uncertainty of the combined roughness propagates through the retrieval process and quantitatively analyse the resulting uncertainty in the retrieved soil moisture. We further examine how the root-mean-square error of the retrieved soil moisture responds to different proportional error ranges of the combined roughness, and derive the error-control range for the combined roughness that satisfies the retrieval accuracy requirement. Experimental analysis of the study area shows that, for Gaussian noise standard deviations between 0 and 0.045, kurtosis ranges from -0.1984 to 1.2501, skewness from 0.0191 to 0.6791, and the interquartile range from 0.0018 to 0.0167; all three metrics increase with the magnitude of the Gaussian noise, retrieved soil moisture values tend to cluster near the mode, and the tendency to underestimate soil moisture is more pronounced than the tendency to overestimate it. The proposed error-control range for the combined roughness meets the retrieval accuracy requirement and is negatively correlated with the incidence angle.
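The perturbation experiment described in this abstract (Gaussian noise of increasing magnitude added to the combined roughness, with skewness, kurtosis and interquartile range quantifying the spread of the retrieved soil moisture) can be sketched as follows. The retrieval operator here is a hypothetical monotone function standing in for the SAR inversion model, so the metric values will differ from those reported in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def uncertainty_metrics(x):
    """Skewness, excess kurtosis and interquartile range of a sample."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z**3)
    kurt = np.mean(z**4) - 3.0
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    return skew, kurt, iqr

def retrieve_soil_moisture(roughness):
    # Hypothetical monotone retrieval operator standing in for the SAR
    # inversion: wetter retrievals for rougher surfaces, saturating smoothly.
    return 0.05 + 0.4 * roughness / (roughness + 0.1)

# Perturb a nominal combined-roughness value with Gaussian noise of
# increasing standard deviation and track the retrieved-moisture spread.
nominal = 0.12
results = {}
for sigma in (0.015, 0.030, 0.045):
    noisy = np.clip(nominal + rng.normal(0.0, sigma, size=5000), 0.0, None)
    sm = retrieve_soil_moisture(noisy)
    results[sigma] = uncertainty_metrics(sm)
```

As in the abstract, the interquartile range of the retrieved moisture grows with the noise magnitude; the sign and size of the skewness depend on the shape of the retrieval operator.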
170.