Access: subscription full text, 996 articles; free, 80; domestic free, 154.
By discipline: Surveying and Mapping 338; Atmospheric Sciences 138; Geophysics 129; Geology 182; Oceanography 84; Astronomy 11; Interdisciplinary 120; Physical Geography 228.
By year: 2024: 23; 2023: 61; 2022: 163; 2021: 178; 2020: 153; 2019: 117; 2018: 59; 2017: 66; 2016: 31; 2015: 26; 2014: 24; 2013: 53; 2012: 73; 2011: 25; 2010: 20; 2009: 17; 2008: 15; 2007: 18; 2006: 20; 2005: 13; 2004: 12; 2003: 9; 2002: 5; 2001: 11; 2000: 7; 1999: 10; 1998: 2; 1997: 6; 1996: 3; 1995: 4; 1994: 1; 1993: 1; 1991: 2; 1985: 1; 1984: 1.
1,230 results found (search time: 31 ms); results 31–40 are listed below.
31.
Building damage maps produced after disasters help to manage rescue operations more effectively. Researchers have used Light Detection and Ranging (LiDAR) data to extract such maps, and producing them rapidly from LiDAR data requires an understanding of how effective different features and classifiers are. However, there has been no comprehensive study of the performance of features and classifiers in identifying damaged areas. In this study, the effectiveness of three texture extraction methods and three fuzzy systems for producing building damage maps was investigated. In the proposed method, a pre-processing stage first applied the essential processing to the post-event LiDAR data. Second, textural features were extracted from the pre-processed LiDAR data. Third, fuzzy inference systems were generated to relate the extracted textural features of buildings to their damage extents. The proposed method was tested on three areas affected by the 2010 Haiti earthquake, yielding building damage maps with overall accuracies of 75.0%, 78.1% and 61.4%. Based on these outcomes, the fuzzy inference systems outperformed random forest, bagging, boosting and support vector machine classifiers in detecting damaged buildings.
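A minimal sketch of the pipeline described above—per-building texture features computed from post-event LiDAR, fed into a fuzzy inference system that maps them to a damage score—is given below. The specific features (roughness, entropy), the triangular membership breakpoints, and the two-rule base are illustrative assumptions, not the authors' actual feature set or rule base.

```python
import numpy as np

def texture_features(patch):
    """Per-building texture statistics from a post-event LiDAR height patch
    (illustrative stand-ins for the paper's texture extraction methods)."""
    p = patch.astype(float)
    counts, _ = np.histogram(p, bins=16)
    prob = counts[counts > 0] / counts.sum()
    entropy = -np.sum(prob * np.log2(prob))
    return {"roughness": float(np.std(p)), "entropy": float(entropy)}

def tri(x, a, b, c):
    """Triangular fuzzy membership function with breakpoints a <= b <= c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_damage_score(feats):
    """Tiny Mamdani-style inference: rough, high-entropy roof surfaces -> more damage.
    The membership breakpoints below are assumptions chosen only for illustration."""
    rough_low  = tri(feats["roughness"], -0.5, 0.0, 1.0)
    rough_high = tri(feats["roughness"],  0.5, 2.0, 5.0)
    ent_low    = tri(feats["entropy"],   -1.0, 0.0, 2.0)
    ent_high   = tri(feats["entropy"],    1.5, 3.0, 4.5)
    intact  = min(rough_low, ent_low)      # rule 1: smooth, uniform roof -> intact
    damaged = min(rough_high, ent_high)    # rule 2: rough, irregular roof -> damaged
    return damaged / (intact + damaged + 1e-9)   # weighted-average defuzzification

patch = np.random.default_rng(0).normal(0.0, 1.8, size=(32, 32))  # rubble-like fake roof patch
print(fuzzy_damage_score(texture_features(patch)))
```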
32.
For many researchers, government agencies, and emergency responders, access to geospatial data on US electric power infrastructure is invaluable for analysis, planning, and disaster recovery. Historically, however, access to high-quality geospatial energy data has been limited to a few agencies because of commercial license restrictions, and the resources that are widely accessible have been of poor quality, particularly with respect to reliability. Recent efforts to develop a highly reliable and publicly accessible alternative to the existing datasets met numerous challenges, not least filling the gaps in power transmission line voltage ratings. To address the line voltage rating problem, we developed and tested a methodology that fuses knowledge and techniques from the power systems, geography, and machine learning domains. Specifically, we identified predictors of nominal voltage that can be extracted from aerial imagery and developed a tree-based classifier to classify nominal line voltage ratings. Overall, we found that line support height, support span, and conductor spacing are the best predictors of voltage ratings, and that a classifier built with these predictors has reliable predictive accuracy (that is, within one voltage class for four of the five classes sampled). We applied our approach to a study area in Minnesota.
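The classification step might be sketched as follows, using the three predictors reported above (support height, support span, conductor spacing). The random forest model, the synthetic data, and the class construction are assumptions for illustration; the paper only specifies a tree-based classifier and does not publish its training data here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data for the three predictors measured from aerial imagery.
rng = np.random.default_rng(42)
n = 600
support_height = rng.uniform(10, 60, n)      # metres
support_span = rng.uniform(80, 500, n)       # metres
conductor_spacing = rng.uniform(1, 15, n)    # metres
X = np.column_stack([support_height, support_span, conductor_spacing])

# Fake nominal-voltage classes derived from structure size (illustrative only).
bins = [0, 25, 35, 45, 55, np.inf]
y = np.digitize(support_height + conductor_spacing, bins)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# "Within one voltage class" accuracy, mirroring the tolerance reported above.
pred = clf.predict(X_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)
print(f"exact: {clf.score(X_te, y_te):.2f}, within one class: {within_one:.2f}")
```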
33.
We report on how visual realism influences map-based route learning performance in a controlled laboratory experiment with 104 male participants in a competitive context. Using animations of a dot moving through routes of interest, we find that participants recall routes more accurately with abstract road maps than with more realistic satellite maps. We also find that, irrespective of visual realism, participants with higher spatial abilities (high-spatial participants) memorize map-based routes more accurately than participants with lower spatial abilities (low-spatial participants). On the other hand, added visual realism slows high-spatial participants' route recall, while it appears not to influence the recall speed of low-spatial participants. Competition affects participants' overall confidence positively but does not affect their route recall performance in terms of either accuracy or speed. With this study, we provide further empirical evidence that it is important to choose the appropriate map type with task characteristics and spatial abilities in mind. While satellite maps might be perceived as more fun to use, or visually more attractive, than road maps, they also require more cognitive resources for many map-based tasks, and this holds even for high-spatial users.
34.
35.
China has built high-quality exploration geochemistry databases containing massive volumes of data, providing essential data support for mineral exploration, environmental assessment and geological surveys. How to process exploration geochemical data efficiently and to mine and identify deeper-level information from them has long been a hot and frontier topic in exploration geochemistry. Based on a systematic review of the work published by Chinese and international researchers over the past decade, this paper analyses and compares exploration geochemical data processing methods and summarizes the main advances made in China in this field over the last ten years, covering database construction, geochemical anomaly identification and the evaluation of its uncertainty: (1) fractal and multifractal models, which account for the complexity and scale invariance of geochemical spatial patterns, have been greatly developed and popularized worldwide, with Chinese researchers leading fractal- and multifractal-based processing of exploration geochemical data; (2) machine learning and big-data thinking have emerged in this field, quickly attracted attention and are becoming a research hotspot and frontier, with Chinese researchers pioneering machine-learning-based mining of exploration geochemical big data; (3) Chinese researchers need to further strengthen research on the treatment of missing values and on the closure effect of compositional data. Future work in this field should further address the recognition of weak and subtle geochemical anomalies, the evaluation of anomaly uncertainty, and the integration of anomaly recognition with its formation mechanisms.
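As an illustration of the fractal family of methods mentioned in point (1), the sketch below implements a simple concentration–area (C–A) style separation of geochemical background and anomaly on synthetic data. The two-segment log–log fit, the breakpoint criterion, and the synthetic mixture are illustrative assumptions, not a procedure taken from the reviewed literature.

```python
import numpy as np

def concentration_area(values, cell_area=1.0, n_thresholds=50):
    """Concentration-Area (C-A) model: for each concentration threshold v,
    A(v) = area of cells with concentration >= v.  On a log-log plot, background
    and anomaly populations tend to fall on straight segments with different slopes."""
    v = np.logspace(np.log10(values.min()), np.log10(values.max()), n_thresholds)
    area = np.array([(values >= t).sum() * cell_area for t in v])
    keep = area > 0
    return np.log10(v[keep]), np.log10(area[keep])

def two_segment_break(logv, loga):
    """Pick the breakpoint minimising the total residual of two straight-line fits;
    concentrations above the break are flagged as anomalous (illustrative criterion)."""
    best, best_err = None, np.inf
    for k in range(3, len(logv) - 3):
        err = 0.0
        for sl in (slice(None, k), slice(k, None)):
            coef = np.polyfit(logv[sl], loga[sl], 1)
            err += np.sum((np.polyval(coef, logv[sl]) - loga[sl]) ** 2)
        if err < best_err:
            best, best_err = k, err
    return 10 ** logv[best]   # anomaly threshold in concentration units

# Synthetic mixture: lognormal background plus a small high-grade anomalous population.
rng = np.random.default_rng(1)
vals = np.concatenate([rng.lognormal(1.0, 0.4, 5000), rng.lognormal(2.5, 0.3, 200)])
logv, loga = concentration_area(vals)
print("anomaly threshold ~", round(two_segment_break(logv, loga), 2))
```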
36.
As a class of models that has developed explosively in recent years, machine learning offers new thinking and new research methods for mineral exploration. This paper examines the theoretical and methodological framework of mineral prospectivity research, reviews the current applications of machine learning in two aspects of mineral prediction—extraction of predictive feature information and integrated information synthesis—and discusses the difficulties and challenges machine learning faces in quantitative mineral resource prediction, including scarce and imbalanced training samples, the lack of uncertainty assessment in model training, the lack of feedback studies, and method selection. Using the Makeng-type iron deposits of southwestern Fujian as a case study, the basic workflow of machine-learning-based mineral prediction is then described: (1) build a metallogenic model through metallogenic-system analysis and determine the ore-controlling factors of the deposits; (2) build an exploration model through exploration-system analysis and provide the relevant exploration data for prediction and evaluation; (3) build a prediction model through prediction-and-evaluation-system analysis and extract the predictive factors; (4) integrate the predictive factors with a machine-learning model to obtain a mineral-favourability map; (5) evaluate the uncertainty of the prediction performance and results; (6) delineate exploration targets / prospective areas and estimate resources. Finally, the paper summarizes a research vision for a theory and methodology of quantitative mineral resource prediction based on geoscience big data, guided by geoscience big data and Earth-system theory and following the research route of "Earth system – metallogenic system – exploration system – prediction and evaluation system".
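Steps (4)–(6) of the workflow above—integrating the predictive factors with a machine-learning model into a favourability map, attaching an uncertainty estimate, and delineating targets—might look like the following sketch. The random forest model, the per-tree vote dispersion used as the uncertainty measure, the synthetic raster layers, and the 5% cut-off are all illustrative assumptions, not the approach actually applied to the Makeng-type iron deposits.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
h, w, n_layers = 60, 60, 5                      # grid size and number of predictive factors
layers = rng.normal(size=(n_layers, h, w))      # stand-ins for gridded exploration layers
X_all = layers.reshape(n_layers, -1).T          # one row of factor values per grid cell

# A handful of known deposit cells (positives) and random non-deposit cells (negatives).
pos_idx = rng.choice(h * w, 25, replace=False)
neg_idx = rng.choice(np.setdiff1d(np.arange(h * w), pos_idx), 25, replace=False)
X = np.vstack([X_all[pos_idx], X_all[neg_idx]])
y = np.r_[np.ones(25), np.zeros(25)]

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Step 4: favourability map = predicted probability of the "deposit" class per cell.
favourability = rf.predict_proba(X_all)[:, 1].reshape(h, w)

# Step 5: a crude uncertainty map = standard deviation of per-tree votes per cell.
votes = np.stack([t.predict(X_all) for t in rf.estimators_])
uncertainty = votes.std(axis=0).reshape(h, w)

# Step 6: delineate targets as the top 5% most favourable cells (illustrative cut-off).
targets = favourability >= np.quantile(favourability, 0.95)
print(targets.sum(), "target cells; mean uncertainty in targets:",
      round(float(uncertainty[targets].mean()), 3))
```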
37.
In recent years, landslide susceptibility mapping has improved substantially with advances in machine learning. However, challenges remain because of the limited availability of inventory data. This paper presents a novel method that improves the performance of machine learning techniques by creating synthetic inventory data with Generative Adversarial Networks (GANs) to improve landslide prediction. An inventory of 156 landslide locations in Cameron Highlands, Malaysia, compiled from the authors' previous projects, was used. Elevation, slope, aspect, plan curvature, profile curvature, total curvature, lithology, land use and land cover (LULC), distance to the road, distance to the river, stream power index (SPI), sediment transport index (STI), terrain roughness index (TRI), topographic wetness index (TWI) and vegetation density were the geo-environmental factors considered, based on suggestions from previous work on Cameron Highlands. To show the capability of GANs to improve landslide prediction models, the proposed GAN model was tested against benchmark models, namely Artificial Neural Network (ANN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and bagging ensembles of the ANN and SVM models. The models were validated using the area under the receiver operating characteristic curve (AUROC). The DT, RF, SVM, ANN and bagging ensemble models achieved AUROC values of 0.90, 0.94, 0.86, 0.69 and 0.82 on the training data and 0.76, 0.81, 0.85, 0.72 and 0.75 on the test data, respectively. With the additional synthetic samples, the same models achieved AUROC values of 0.92, 0.94, 0.88, 0.75 and 0.84 for training and 0.78, 0.82, 0.82, 0.78 and 0.80 for testing, respectively. The additional samples improved the test accuracy of all models except SVM. In data-scarce settings, this research therefore shows that using GANs to generate supplementary samples is promising, as it can improve the predictive capability of common landslide prediction models.
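A minimal sketch of the core idea—training a small GAN on the tabular conditioning factors of known landslide cells and drawing synthetic "landslide" rows to augment the training set—is given below in PyTorch. The network sizes, training schedule, and random stand-in table are assumptions; the abstract does not specify the GAN architecture used.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_real, n_feat, latent = 156, 15, 8          # 156 landslide records, 15 conditioning factors
real = torch.rand(n_real, n_feat)            # stand-in for the min-max scaled inventory table

G = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_feat), nn.Sigmoid())
D = nn.Sequential(nn.Linear(n_feat, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: real rows -> 1, generated rows -> 0.
    z = torch.randn(n_real, latent)
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(n_real, 1)) + bce(D(fake), torch.zeros(n_real, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to make D label generated rows as real.
    z = torch.randn(n_real, latent)
    loss_g = bce(D(G(z)), torch.ones(n_real, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Draw synthetic landslide samples to append to the training set of ANN/SVM/DT/RF models.
synthetic = G(torch.randn(100, latent)).detach()
print(synthetic.shape)  # torch.Size([100, 15])
```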
38.
An important step in binary modelling of environmental problems is the generation of absence datasets, which are traditionally produced by random sampling and can undermine the quality of the outputs. To solve this problem, this study develops the Absence Point Generation (APG) toolbox, a Python-based ArcGIS toolbox for the automated construction of absence datasets for geospatial studies. The APG employs a frequency-ratio analysis of four commonly used and important driving factors—altitude, slope degree, topographic wetness index, and distance from rivers—and considers a buffer around the presence locations and a presence-density layer to define the low-potential or low-susceptibility zones in which absence datasets are generated. To test the APG toolbox, two benchmark algorithms, random forest (RF) and boosted regression trees (BRT), were applied in a case study of groundwater potential using three absence datasets: those generated by the APG, by random sampling, and by the selection of absence samples (SAS) toolbox. BRT-APG and RF-APG achieved area under the receiver operating curve (AUC) values of 0.947 and 0.942, while BRT and RF performed worse with the SAS and random datasets. This corresponds to AUC improvements for BRT and RF of 7.2% and 9.7% over the random dataset, and of 6.1% and 5.4% over the SAS dataset, respectively. The APG also affected the importance of the input factors and the pattern of the groundwater potential maps, which underlines the importance of absence points in binary environmental problems. The proposed APG toolbox could easily be applied to other environmental hazards such as landslides, floods, gully erosion, and land subsidence.
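The frequency-ratio step at the heart of the APG idea—scoring cells by how strongly each factor class is associated with presence locations and then drawing absence points from the lowest-potential zone—can be sketched as below. The synthetic factor rasters, the 25% quantile cut-off, and the omission of the presence buffer and density layers are simplifying assumptions; the real toolbox is an ArcGIS Python toolbox rather than this standalone script.

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 100, 100
# Stand-ins for classified driving-factor rasters (e.g. altitude, slope, TWI, river distance).
factors = [rng.integers(0, 5, size=(h, w)) for _ in range(4)]
presence = rng.choice(h * w, 40, replace=False)            # known presence cells (e.g. wells)

def frequency_ratio(factor, presence_idx):
    """FR per class = (share of presence points in the class) / (share of area in the class)."""
    flat = factor.ravel()
    fr = np.zeros_like(factor, dtype=float)
    for c in np.unique(flat):
        in_class = flat == c
        p_share = np.isin(presence_idx, np.where(in_class)[0]).mean() + 1e-9
        a_share = in_class.mean()
        fr[factor == c] = p_share / a_share
    return fr

# Summed FR acts as a crude potential surface; absences are drawn from the lowest zone.
potential = sum(frequency_ratio(f, presence) for f in factors)
low_zone = potential.ravel() <= np.quantile(potential, 0.25)
low_zone[presence] = False                                  # never sample on presence cells
absence = rng.choice(np.where(low_zone)[0], size=len(presence), replace=False)
print("absence points generated:", len(absence))
```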
39.
The selection of a suitable discretization method (DM) for spatially continuous variables (SCVs) is critical in ML-based natural hazard susceptibility assessment. However, few studies have considered the influence of the chosen DM or how to efficiently select a suitable DM for each SCV. This study addresses these issues. The information loss rate (ILR), an index based on information entropy, appears usable for selecting the optimal DM for each SCV. However, the ILR fails to reflect the actual influence of discretization because it considers only the total amount of information by which the discretized variable departs from the original SCV. To address this, we propose an index, the information change rate (ICR), that measures the amount of information changed by discretization on a per-cell basis, enabling identification of the optimal DM. We develop a case study with random forest (training/testing ratio of 7:3) to assess flood susceptibility in Wanan County, China. Approaches based on the area under the curve and on the susceptibility maps were used to compare the ILR and ICR. The results show that the ICR-based optimal DMs are more rational than the ILR-based ones in both cases. Moreover, the ILR values are unnaturally small (<1%), whereas the ICR values are far more in line with general expectations (usually 10%–30%). These results demonstrate the superiority of the ICR. We consider that this study fills an existing research gap and improves ML-based natural hazard susceptibility assessment.
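The abstract does not reproduce the ILR or ICR formulas, so the sketch below only illustrates the general contrast it describes: an entropy-based index computed from the discretized variable as a whole (ILR-like) versus a per-cell index measuring how much each cell's value changes when replaced by its class representative (ICR-like). Both definitions are illustrative assumptions, not the authors' exact indices.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def loss_style_index(x, bins):
    """Whole-variable view (ILR-like, illustrative): compare the entropy of a fine
    histogram of the original values with the entropy of the discretized classes."""
    fine = np.histogram(x, bins=256)[0] / x.size
    classes = np.histogram(x, bins=bins)[0] / x.size
    return 1.0 - entropy(classes) / entropy(fine)

def change_style_index(x, bins):
    """Per-cell view (ICR-like, illustrative): average relative change between each
    cell's original value and the mean of the class it falls into."""
    idx = np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)
    class_means = np.array([x[idx == k].mean() if np.any(idx == k) else 0.0
                            for k in range(len(bins) - 1)])
    return np.mean(np.abs(x - class_means[idx]) / (np.abs(x) + 1e-9))

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 50.0, 20_000)            # stand-in for a continuous factor (e.g. slope length)
for n_classes in (4, 8, 16):
    bins = np.quantile(x, np.linspace(0, 1, n_classes + 1))   # equal-frequency discretization
    print(n_classes, "classes:",
          f"loss-style {loss_style_index(x, bins):.4f}",
          f"change-style {change_style_index(x, bins):.4f}")
```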
40.
Natural-resource indicator data with high spatio-temporal resolution are essential for large-scale dynamic observation and trend assessment of natural resources, and the massive multi-source data of the big-data era make efficient data fusion and utilization possible. Taking the reconstruction of the Normalized Difference Vegetation Index (NDVI) for the Han River basin as an example, this study built a PostgreSQL-based processing architecture for natural-resource spatio-temporal big data, integrated data-level, feature-level and decision-level fusion methods, and, based on machine-learning algorithms, developed an intelligent fusion technique for multi-source heterogeneous data oriented to natural-resource information extraction, achieving efficient use of multi-source data and optimal selection of the feature space. An annual 1 km NDVI dataset of the Han River basin for 2000–2019 was reconstructed, comprehensively reflecting vegetation dynamics in the basin. The results provide a scientific reference for the efficient extraction and simulation analysis of geoscience spatio-temporal big data, and offer a more accurate and convenient technical means for quantitatively accounting for forest and grassland resource endowments and for exploring the spatio-temporal evolution of ecosystems.
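A minimal sketch of the feature-level fusion idea—training a machine-learning regressor that maps co-registered multi-source predictor grids to NDVI and using it to reconstruct cells with no valid observation—is given below on synthetic rasters. The predictor layers, the random forest regressor, and the gap mask are assumptions; the actual pipeline runs on a PostgreSQL-backed architecture and basin-scale remote-sensing data not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)
h, w = 120, 120
# Stand-ins for co-registered multi-source predictor grids (e.g. coarse NDVI, red/NIR
# reflectance, land cover, elevation) and a reference NDVI grid with observation gaps.
predictors = np.stack([rng.normal(size=(h, w)) for _ in range(4)])
ndvi_true = np.tanh(0.5 * predictors[0] - 0.3 * predictors[1] + 0.1 * rng.normal(size=(h, w)))
mask_missing = rng.random((h, w)) < 0.4         # 40% of cells lack a valid NDVI observation

X = predictors.reshape(4, -1).T
y = ndvi_true.ravel()
train = ~mask_missing.ravel()

# Feature-level fusion: one regressor learns NDVI from all predictor layers jointly.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], y[train])
ndvi_filled = y.copy()
ndvi_filled[~train] = model.predict(X[~train])

rmse = np.sqrt(np.mean((ndvi_filled[~train] - y[~train]) ** 2))
print(f"reconstructed {np.sum(~train)} gap cells, RMSE vs withheld truth: {rmse:.3f}")
```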