A total of 1314 results were returned (search time: 31 ms); results 201–210 follow.
201.
Metacognition is people's self-awareness, self-monitoring and self-regulation of their own cognitive activities, and metacognitive theory is broadly applicable in learning practice. Applying metacognitive strategies in college English teaching to reform instruction and testing models can improve students' capacity for autonomous learning and achieve good teaching results.
202.
Full waveform inversion (FWI) uses the kinematic and dynamic information of prestack seismic wavefields to reconstruct subsurface velocity structure, and it has the potential to reveal fine structural and lithological detail in complex geological settings. Its enormous computational cost, however, remains a bottleneck. Source-encoding techniques have been proposed to reduce this cost, but they introduce random crosstalk noise during model updates, degrading the accuracy of the inversion. This paper therefore proposes a gradually decreasing (decremental) random source sampling method that computes FWI efficiently while preserving accuracy. The method is applied to full waveform inversion of the frequency-domain 2D viscoacoustic wave equation, initiating the study of random source sampling methods in the frequency domain; eight successively higher frequency bands are used in the computation, and the Overthrust model is used to verify the correctness of this class of random source sampling methods. The experiments show that the misfit between the inversion result of the decremental random source sampling method and the true Overthrust model is 0.06565, versus 0.06464 for FWI using all sources, a negligible difference; the run time, however, drops from 740 min to 291.2 min, a 2.54-fold gain in efficiency. To further confirm the method's effectiveness, it was also tested on the Marmousi model: the misfits of the decremental random source and all-source inversion results against the true Marmousi model are 0.08012 and 0.07897, respectively, again close, while the run time falls from 1218.9 min to 274.4 min, a 4.44-fold gain. In summary, while preserving inversion accuracy, frequency-domain FWI based on decremental random source sampling greatly reduces the computational cost without introducing random crosstalk noise.
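The sampling schedule lends itself to a compact illustration. Below is a minimal runnable sketch in which a linear least-squares toy problem stands in for the frequency-domain viscoacoustic wave-equation misfit (the operators, data, and schedule parameters are all invented); only the idea of drawing a gradually shrinking random subset of sources per iteration reflects the method described above.

```python
import numpy as np

# Toy sketch of decremental random source sampling. Each "source" i
# contributes a data term ||A_i m - d_i||^2; every iteration samples a
# shrinking random subset of sources to form the gradient.

rng = np.random.default_rng(0)
n_sources, n_params = 64, 30
A = rng.normal(size=(n_sources, 10, n_params))   # one toy operator per source
m_true = rng.normal(size=n_params)
d = A @ m_true                                   # "observed" data per source

def gradient(m, idx):
    """Average least-squares gradient over the sampled sources."""
    res = A[idx] @ m - d[idx]
    return np.einsum('sij,si->j', A[idx], res) / len(idx)

m = np.zeros(n_params)
n_iters, n_start, n_min = 60, n_sources, 4
for it in range(n_iters):
    # Decremental schedule: start with all sources, shrink linearly to n_min.
    k = max(n_min, round(n_start - (n_start - n_min) * it / (n_iters - 1)))
    idx = rng.choice(n_sources, size=k, replace=False)
    m -= 0.01 * gradient(m, idx)                 # fixed-step gradient descent

print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```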
203.
With the spread of WLANs, Wi-Fi-based indoor positioning has become a focus of research and application. Fingerprint-based positioning algorithms in particular have been studied extensively and perform well, yet existing fingerprint methods and systems still suffer from three problems: (1) data labelling and model training in the offline phase consume considerable labour, resources and time, which makes such systems hard to deploy in practice; (2) WLAN signals in real environments fluctuate strongly, so collected data age quickly and cannot guarantee valid positioning over long periods; (3) access points (APs) change frequently in real environments, so the feature dimensions of the training data and the positioning data no longer match, invalidating the model. To address these problems, this paper proposes a crowdsourced-data-based model update method that keeps the positioning model current by continually fusing incremental data. The method has three components: a semi-supervised extreme learning machine (SELM), an incremental positioning method with a timeliness mechanism (TMELM), and a feature-adaptive online sequential extreme learning machine (FA-OSELM). On this basis, an indoor positioning platform driven by crowdsourced data was designed and implemented. Practical deployment shows that the proposed method markedly reduces the data collection workload in the model training phase, speeds up model training, and maintains high positioning accuracy over long periods.
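For readers unfamiliar with the extreme learning machine (ELM) family underlying SELM and FA-OSELM, the sketch below shows the basic building block: a random, fixed hidden layer whose output weights are solved in closed form by regularized least squares. The Wi-Fi fingerprint data (RSSI vectors mapped to 2-D positions) are synthetic, and the class is a generic illustration rather than the paper's implementation.

```python
import numpy as np

# Minimal extreme learning machine (ELM) regressor -- the building block
# behind SELM / FA-OSELM variants. Toy stand-in for a Wi-Fi fingerprint
# regressor: RSSI vector in, (x, y) position out.

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)      # random feature map

    def fit(self, X, y):
        # Hidden weights are random and fixed; only the output weights are
        # learned, by regularized least squares (the core ELM idea).
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Hypothetical usage: 120 fingerprints of RSSI from 8 APs -> 2-D positions.
rng = np.random.default_rng(1)
X = rng.uniform(-90, -30, size=(120, 8))         # RSSI in dBm (synthetic)
y = rng.uniform(0, 50, size=(120, 2))            # positions in metres
model = ELM().fit(X, y)
print(model.predict(X[:3]))
```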
204.
The introduction of automated generalisation procedures in map production systems requires that generalisation systems are capable of processing large amounts of map data in acceptable time and that cartographic quality is similar to traditional map products. With respect to these requirements, we examine two complementary approaches that should improve generalisation systems currently in use by national topographic mapping agencies. Our focus is particularly on self-evaluating systems, taking as an example those systems that build on the multi-agent paradigm. The first approach aims to improve cartographic quality by utilising cartographic expert knowledge relating to spatial context. More specifically, we introduce expert rules for the selection of generalisation operations based on a classification of buildings into five urban structure types: inner city, urban, suburban, rural, and industrial and commercial areas. The second approach aims to utilise machine learning techniques to extract heuristics that allow us to reduce the search space and hence the time in which a good cartographic solution is reached. Both approaches are tested individually and in combination for the generalisation of buildings from map scale 1:5000 to the target map scale of 1:25 000. Our experiments show improvements in terms of efficiency and effectiveness, and we provide evidence that the two approaches complement each other: a combination of expert and machine-learnt rules gives better results than either approach alone. Both approaches are sufficiently general to be applicable to self-evaluating, constraint-based systems other than multi-agent systems, and to feature classes other than buildings. Some problems remain, arising from the difficulty of formalising cartographic quality as constraints for the control of the generalisation process.
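To make the first approach concrete, here is a hypothetical sketch of such an expert rule table: the five urban structure types come from the abstract, while the operator sequences assigned to them are invented purely for illustration.

```python
# Illustrative expert rules that pick generalisation operators by urban
# structure type. The five classes come from the abstract; the operator
# sequences are invented placeholders, not the authors' actual rules.

GENERALISATION_RULES = {
    "inner_city":            ["aggregate", "simplify", "squaring"],
    "urban":                 ["aggregate", "simplify"],
    "suburban":              ["eliminate_small", "simplify", "enlarge_to_minimum"],
    "rural":                 ["eliminate_small", "enlarge_to_minimum", "displace"],
    "industrial_commercial": ["simplify", "squaring"],
}

def plan_operations(building):
    """Return the candidate operator sequence for one building record."""
    structure_type = building["structure_type"]  # e.g. output of a classifier
    return GENERALISATION_RULES[structure_type]

print(plan_operations({"id": 42, "structure_type": "suburban"}))
```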
205.
Abstract

Sedimentation in navigable waterways and harbours is of concern for many water and port managers. One potential source of variability in sedimentation is the annual sediment load of the river that empties into the harbour. The main objective of this study was to use some of the regularly monitored hydro-meteorological variables to compare estimates of hourly suspended sediment concentration in the Saint John River using a sediment rating curve and a model tree (M5′) with different combinations of predictors. Estimated suspended sediment concentrations were multiplied by measured flows to estimate suspended sediment loads. Best results were obtained using M5′ with four predictors, returning an R² of 0.72 on calibration data and an R² of 0.46 on validation data. Total load was underestimated by 1.41% for the calibration period and overestimated by 2.38% for the validation period. Overall, the model tree approach is recommended for its relative ease of implementation and consistent performance.
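As context for the comparison, the sediment rating curve baseline amounts to a power law C = aQ^b fitted in log-log space; the sketch below fits one to synthetic flow/concentration pairs and converts predicted concentration to hourly load (all values are invented, not the study's data).

```python
import numpy as np

# Sketch of the sediment rating curve baseline: fit C = a * Q**b by least
# squares in log-log space, then multiply predicted concentration by flow
# to get hourly suspended sediment load. All values synthetic.

rng = np.random.default_rng(0)
Q = rng.uniform(50, 2000, size=500)                   # discharge, m^3/s
C = 0.02 * Q**1.4 * np.exp(rng.normal(0, 0.3, 500))   # SSC, g/m^3 (synthetic)

b, log_a = np.polyfit(np.log(Q), np.log(C), 1)        # straight line in log space
a = np.exp(log_a)
C_hat = a * Q**b                                      # predicted concentration

# g/m^3 * m^3/s * 3600 s = g per hour; x 1e-6 converts to tonnes per hour.
load_t_per_h = C_hat * Q * 3600e-6
print(f"rating curve: C = {a:.4f} * Q^{b:.3f}; mean load = {load_t_per_h.mean():.1f} t/h")
```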
206.
Abstract

The quantification of the sediment carrying capacity of a river is a difficult task that has received much attention. For sand-bed rivers especially, several sediment transport functions have appeared in the literature based on various concepts and approaches; however, since they present significant discrepancies in their results, none of them has become universally accepted. This paper employs three machine learning techniques, namely artificial neural networks, symbolic regression based on genetic programming and an adaptive-network-based fuzzy inference system, for the derivation of sediment transport formulae for sand-bed rivers from field and laboratory flume data. For the determination of the input parameters, some of the most prominent fundamental approaches that govern the phenomenon, such as shear stress, stream power and unit stream power, are utilized, and a comparison of their efficacy is provided. The results obtained from the machine learning techniques are superior to those of the commonly used sediment transport formulae, and each of the input combinations tested has its own merit, producing similarly good results irrespective of the data-driven technique employed.
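The three governing concepts named above translate directly into candidate model inputs; the sketch below computes them from their standard hydraulic definitions (the numerical values are illustrative, not the study's data).

```python
# Sketch of the three hydraulic predictors named in the abstract, computed
# from standard definitions. Input values are illustrative only.

RHO, G = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)

def predictors(V, R, S):
    """V: mean velocity (m/s), R: hydraulic radius (m), S: energy slope (-)."""
    tau = RHO * G * R * S    # bed shear stress, N/m^2
    omega = tau * V          # stream power per unit bed area, W/m^2
    vs = V * S               # unit stream power, m/s (Yang's approach)
    return tau, omega, vs

print(predictors(V=1.2, R=0.8, S=0.0005))
```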
207.
Abstract

This study investigates misregistration between Landsat-8/Operational Land Imager and Sentinel-2A/Multi-Spectral Instrument imagery at 30 m resolution, and between multi-temporal Sentinel-2A images at 10 m resolution, using a phase-correlation approach and multiple transformation functions. Co-registration of 45 Landsat-8 to Sentinel-2A pairs and 37 Sentinel-2A to Sentinel-2A pairs was analyzed. Phase correlation proved to be a robust approach that allowed us to identify hundreds to thousands of control points on images acquired more than 100 days apart. Overall, misregistration of up to 1.6 pixels at 30 m resolution between Landsat-8 and Sentinel-2A images, and of 1.2 and 2.8 pixels at 10 m resolution between multi-temporal Sentinel-2A images from the same and different orbits, respectively, was observed. The non-linear random forest regression used for constructing the mapping function showed the best results in terms of root mean square error (RMSE), yielding an average RMSE of 0.07 ± 0.02 pixels at 30 m resolution, and 0.09 ± 0.05 and 0.15 ± 0.06 pixels at 10 m resolution for the same and adjacent Sentinel-2A orbits, respectively, over multiple tiles and multiple conditions. A simpler first-order polynomial function (affine transformation) yielded an RMSE of 0.08 ± 0.02 pixels at 30 m resolution, and 0.12 ± 0.06 (same Sentinel-2A orbits) and 0.20 ± 0.09 (adjacent orbits) pixels at 10 m resolution.
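The phase-correlation step at the heart of the approach is compact enough to sketch: the normalized cross-power spectrum of two images has a sharp peak at their relative translation. This is a generic illustration on synthetic data, not the study's implementation.

```python
import numpy as np

# Phase correlation: the normalized cross-power spectrum of two images
# peaks at their relative translation. Demonstrated with a known shift.

def phase_correlation(a, b):
    """Return the (row, col) translation of image b relative to image a."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized spectrum
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis back to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.normal(size=(256, 256))
shifted = np.roll(img, shift=(7, -12), axis=(0, 1))
print(phase_correlation(img, shifted))                  # -> (7, -12)
```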
208.
The Cartographic Journal, 2013, 50(2): 144–156
Abstract

Isolines have proved to be a highly effective way of conveying the shape of a surface (most commonly in the form of height contours conveying the geographical landscape). Selecting the right contour interval is a compromise between showing sufficient detail in flat regions and avoiding excessive crowding of lines in steep and morphologically complex areas. The traditional way of avoiding coalescence and confusion across steep regions has been to manually remove short sections of intermediate contours while retaining index contours, but keeping humans in the loop is not viable in automated production environments. This research reports on the design, implementation and evaluation of an automated solution to this problem, involving the automatic identification of coalescing lines and the removal of line segments to ensure clarity in the interpretation of contour information. Evaluation was carried out by subjective comparison with Ordnance Survey products; the results were found to be very close to the quality associated with manual techniques.
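A minimal sketch of the central test, coalescence detection, follows: vertices of an intermediate contour that fall closer to a neighbouring contour than a minimum legible separation are flagged, and the line is split into the legible runs that remain. The threshold, data, and run-splitting details are invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Flag vertices of an intermediate contour that crowd a neighbouring
# contour, then split the line into the legible segments that remain.
# Polylines are (N, 2) arrays of map coordinates; data are synthetic.

MIN_SEPARATION = 0.3  # minimum legible line spacing in map units (assumed)

def segments_to_keep(intermediate, neighbours, min_sep=MIN_SEPARATION):
    """Return (start, end) vertex-index runs where the contour stays legible."""
    tree = cKDTree(np.vstack(neighbours))
    dist, _ = tree.query(intermediate)       # distance to nearest neighbour line
    keep = dist >= min_sep
    runs, start = [], None
    for i, k in enumerate(keep):             # contiguous True runs are kept
        if k and start is None:
            start = i
        elif not k and start is not None:
            runs.append((start, i)); start = None
    if start is not None:
        runs.append((start, len(keep)))
    return runs

t = np.linspace(0, 2 * np.pi, 200)
c_mid = np.c_[np.cos(t), np.sin(t)]          # intermediate contour (unit circle)
r = 1.02 + 0.6 * np.abs(np.sin(t))           # neighbour squeezes in twice
c_up = np.c_[r * np.cos(t), r * np.sin(t)]
print(segments_to_keep(c_mid, [c_up]))
```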
209.
Abstract

A novel artificial intelligence approach of Bayesian Logistic Regression (BLR) and its ensembles [Random Subspace (RS), AdaBoost (AB), MultiBoost (MB) and Bagging] was introduced for landslide susceptibility mapping in part of Kamyaran city in Kurdistan Province, Iran. A spatial database was generated which includes a total of 60 landslide locations and a set of conditioning factors tested by the Information Gain Ratio technique. Performance of these models was evaluated using the area under the ROC curve (AUROC) and statistical index-based methods. Results showed that the hybrid ensemble models could significantly improve the performance of the base BLR classifier (AUROC = 0.930). The RS model (AUROC = 0.975) had the highest performance among the ensemble models, followed by Bagging (AUROC = 0.972), MB (AUROC = 0.970) and AB (AUROC = 0.957), respectively.
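A rough scikit-learn analogue of this comparison can be sketched on synthetic data. Plain logistic regression stands in for the Bayesian variant, Random Subspace is approximated by feature-subsampling Bagging, and MultiBoost is omitted for lack of a scikit-learn counterpart, so this mirrors the experimental design only loosely.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Compare a base classifier against its ensembles by AUROC, as in the study,
# but on synthetic data and with stand-in models (see lead-in caveats).

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = LogisticRegression(max_iter=1000)
models = {
    "BLR stand-in (base)": base,
    "Random Subspace": BaggingClassifier(base, n_estimators=50,
                                         max_features=0.5, bootstrap=False,
                                         random_state=0),
    "Bagging": BaggingClassifier(base, n_estimators=50, random_state=0),
    "AdaBoost": AdaBoostClassifier(base, n_estimators=50, random_state=0),
}
for name, m in models.items():
    auc = roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```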
210.
Abstract

Making and sharing maps is easier than ever, and social media platforms make it possible for maps to rapidly attain widespread visibility and engagement. Such maps can be considered examples of viral cartography – maps that reach rapid popularity via social media dissemination. In this research we propose a framework for evaluating the design and social dissemination characteristics of viral maps. We apply this framework in two case studies using maps that reached wide audiences on Twitter. We then analyze collections of maps derived from and inspired by viral maps using image analysis and machine learning to characterize their design elements. Based on our initial work to conceptualize and analyze virality in cartography, we propose a set of new research challenges to better understand viral mapmaking and leverage its social affordances.