21.
A long short-term memory (LSTM) recurrent neural network was trained with gravity observations from the southwestern margin of the Ordos block, and the results show that the network can obtain good interpolation estimates from limited data. Comparing the LSTM network with traditional kriging on free-air gravity anomaly data shows that the network's predictive ability is superior to kriging's, although its computational efficiency is lower. When the free-air gravity anomaly is estimated over the whole region, the LSTM method clearly outperforms kriging, and adding elevation data as a constraint effectively improves the accuracy with which the LSTM method estimates the free-air gravity anomaly field.
22.
This paper derives the ordinary bivariate and universal bivariate cokriging equations. Equations for the standard deviation of the estimation error of both methods are also given, which can be used to optimize the density of groundwater monitoring networks; the approach is likewise applicable to the optimal estimation of groundwater levels and of ion concentrations in water.
23.
Based on an environmental geochemistry case study carried out in the neighbourhood of an abandoned W–Sn mine, the pollution in stream sediments was modelled through a Global Contamination Index. Such an index summarizes the combination of deleterious elements in a single variable, obtained by projecting the samples onto the first axis of a PCASD (Principal Components Analysis of Standardized Data) applied to the entire n × p matrix containing the available concentrations of p = 16 elements in the set of n = 220 collected samples. In order to provide a sound basis for coherent planning of the remediation process to be put in operation in the affected area, it is necessary to balance the costs of reclaiming against the probabilities of exceeding the upper limits accepted for concentrations of environmentally harmful elements in sediments. These limits are back-transformed into index values, providing a practical threshold between ‘clean’ and ‘contaminated’ samples. On the other hand, the minimum dimension of the cell to be reclaimed is constrained by the remediation process selected for the affected area. Hence, to meet the constraints of such a remediation process, it is necessary to estimate the probabilities of exceeding the index threshold in technologically meaningful sub-areas. To this end, the Indicator Block Kriging technique was applied, producing a series of maps on which the sub-areas to be reclaimed can be spotted at different probability levels. These maps, on which the decision-making remediation agency can rely for its cost-benefit analysis, take into account both the spatial structure of ‘clean’ vs. ‘contaminated’ samples and the constraints of the reclaiming process.
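The Global Contamination Index described above is simply each sample's score on the first principal axis of the standardized concentration matrix. A minimal sketch of that projection, assuming standardization to unit variance and an SVD-based PCA (the function name, the sign convention, and the implementation details are illustrative assumptions, not the paper's code):

```python
import numpy as np

def first_pc_index(X):
    """Score of each sample (row) on the first principal axis of
    standardized data: PCA of the column-standardized matrix via SVD."""
    X = np.asarray(X, float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Right singular vectors of the standardized matrix are the PCA axes.
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = Xs @ Vt[0]
    # Orient the axis so a higher index means higher overall concentrations
    # (an assumed convention; the sign of an SVD axis is arbitrary).
    if Vt[0].sum() < 0:
        scores = -scores
    return scores
```

With all p element concentrations rising together, the index reduces to a (scaled) average of the standardized concentrations, which is why a single threshold on it can separate ‘clean’ from ‘contaminated’ samples.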
24.
Geospatial technology is in increasing demand for many applications in the geosciences. The spatial variability of bedrock is vital for many geotechnical and earthquake engineering problems, such as the design of deep foundations, site amplification, ground response studies, liquefaction, and microzonation. In this paper, the reduced level of rock in Bangalore, India is derived from 652 borehole records covering an area of 220 km2. To predict the reduced level of rock in the subsurface of Bangalore and to study its spatial variability, a geostatistical model based on the Ordinary Kriging technique, an Artificial Neural Network (ANN) model, and a Support Vector Machine (SVM) model have been developed. In Ordinary Kriging, the semi-variogram of the reduced level of rock, estimated from the 652 points, is used to predict the reduced level at any point in the subsurface of Bangalore where field measurements are not available; a newly developed type of cross-validation analysis demonstrates the robustness of the Ordinary Kriging model. The ANN model, based on multilayer perceptrons (MLPs) trained with the Levenberg–Marquardt backpropagation algorithm, was trained with 90% of the available data. The SVM, a learning machine grounded in statistical learning theory that performs regression by introducing a loss function, was used to predict the reduced level of rock from the large data set. A comparative study of the three numerical models for predicting the reduced level of rock is presented and discussed.
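Of the three models compared, Ordinary Kriging is the most compact to illustrate. The sketch below solves the Ordinary Kriging system for a single prediction point under an assumed spherical variogram; the function names, the variogram choice, and its parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    """Spherical variogram model; gamma(0) = 0 by convention."""
    h = np.asarray(h, float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(coords, z, x0, variogram):
    """Point Ordinary Kriging estimate and kriging variance at x0.

    Solves [Gamma 1; 1' 0] [w; mu] = [gamma0; 1] where Gamma is the
    variogram matrix between data points and gamma0 the vector of
    variogram values between data points and the target location.
    """
    coords = np.asarray(coords, float)
    z = np.asarray(z, float)
    n = len(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, :n] = A[:n, n] = 1.0       # unbiasedness constraint: weights sum to 1
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - x0, axis=-1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z
    variance = w @ b                # kriging variance = lambda'gamma0 + mu
    return estimate, variance
```

With a zero nugget, the estimator is exact: at a sampled location it returns the observed value with zero kriging variance, which is the property the paper's cross-validation exploits.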
25.
Research on and application of methods for computing the experimental variogram
This paper systematically and thoroughly analyzes methods for computing the experimental variogram under different sample-data conditions, and puts forward original ideas on the design of the computation workflow and on determining the effective sample search neighborhood. Finally, the methods are tested with ore-control engineering data from a mine. The results show that the proposed approach readily yields robust, accurate, and information-rich experimental variograms.
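As a rough illustration of what such a computation involves (not the paper's workflow), the classical Matheron estimator bins sample pairs by separation distance and averages half the squared value differences in each bin; function and parameter names here are my own:

```python
import numpy as np

def empirical_variogram(coords, values, lag_width, n_lags):
    """Classical (Matheron) experimental variogram:
    gamma(h) = 1 / (2 N(h)) * sum over pairs at distance ~h of (z_i - z_j)^2."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    n = len(values)
    # All pairwise distances and squared value differences.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    sqdiff = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(n, k=1)          # count each pair once
    dist, sqdiff = dist[iu], sqdiff[iu]
    lags, gammas, counts = [], [], []
    for k in range(n_lags):
        lo, hi = k * lag_width, (k + 1) * lag_width
        mask = (dist >= lo) & (dist < hi)
        if mask.any():
            lags.append(dist[mask].mean())
            gammas.append(0.5 * sqdiff[mask].mean())
            counts.append(int(mask.sum()))
    return np.array(lags), np.array(gammas), np.array(counts)
```

The per-bin pair counts are worth returning: bins with few pairs give unstable gamma values, which is one reason the choice of lag width and search neighborhood matters so much in practice.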
26.
杜英坤, 燕琴, 童李霞, 王晓波. 《测绘科学》2016, 41(9): 87-90, 169
To address the limited accuracy of fractional vegetation cover estimates from the pixel dichotomy model, this paper proposes an OSAVI-based method for selecting the model parameters (OSAVIs and OSAVIv) and applies it to estimating fractional vegetation cover in Qinghai Province. Pure bare-soil and pure vegetation sample points are selected within the study area from high-resolution imagery; the OSAVI of each bare-soil sample point is taken as that pixel's OSAVIs, and the OSAVI of each vegetation sample point as that pixel's OSAVIv. Ordinary kriging interpolation of these sample values then yields OSAVIs and OSAVIv for every pixel in the study area. Accuracy validation shows that, compared with the conventional parameter-selection method, RMSE drops from 0.170 to 0.156 and MAE from 0.137 to 0.124. Further analysis shows that the method improves the estimation accuracy at both edge and non-edge validation points; owing to co-registration error and diffuse reflection from surrounding land surfaces, the accuracy at edge validation points remains lower than at non-edge points.
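Once per-pixel OSAVIs and OSAVIv surfaces are available (e.g. from kriging, as above), the pixel dichotomy model itself is a one-line linear unmixing. A minimal sketch, with illustrative names and an assumed clip of the result to [0, 1]:

```python
import numpy as np

def fvc_pixel_dichotomy(osavi, osavi_s, osavi_v):
    """Pixel dichotomy model for fractional vegetation cover:
    FVC = (OSAVI - OSAVI_s) / (OSAVI_v - OSAVI_s), clipped to [0, 1].
    osavi_s / osavi_v may be scalars or per-pixel arrays
    (e.g. kriged parameter surfaces)."""
    fvc = (np.asarray(osavi, float) - osavi_s) / (osavi_v - osavi_s)
    return np.clip(fvc, 0.0, 1.0)
```

The paper's contribution lies in making OSAVIs and OSAVIv vary per pixel rather than using two fixed scene-wide constants; the unmixing formula itself is unchanged.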
27.
Studies in transportation planning routinely use data in which location attributes are an important source of information, so using spatial attributes in urban travel forecasting models seems reasonable. The main objective of this paper is to estimate transit trip production using Factorial Kriging with External Drift (FKED), through an aggregated-data case study of Traffic Analysis Zones in São Paulo city, Brazil. The method consists of a sequential application of Principal Components Analysis (PCA) and Kriging with External Drift (KED). A traditional Linear Regression (LR) model was adopted to validate the proposed method. The results show that PCA summarizes and combines 23 socioeconomic variables into 4 components. The first component is introduced in KED as secondary information to estimate transit trip production by public transport at geographic coordinates where the values are not known in advance. Cross-validation of the FKED model yielded high correlation coefficients between estimated and observed values, together with low error values. The accuracy of the LR model was similar to that of FKED; however, the proposed method is able to map transit trip production at geographical coordinates that were not sampled.
28.
费龙, 田秋艳. 《地理科学》2016, 36(4): 597-602
Different landform regions call for different spatial interpolation methods. Taking elevation in the Jingyuetan area of Changchun, a typical hilly landform region, as the study object, elevation was interpolated in ArcGIS 9.2 with the inverse distance weighting, nearest neighbor, trend surface analysis, and kriging methods. Cross-validation was applied to assess the accuracy of each interpolation and to check its soundness and reliability, yielding the respective accuracies of the inverse distance weighting, nearest neighbor, trend surface, spline, and kriging elevation interpolations, and the applicability of each method is discussed. The results show that the nearest neighbor method is the most accurate, followed in order by ordinary kriging, spline, inverse distance weighting, and trend surface. This provides a reference for choosing elevation interpolation methods in hilly landform regions similar to the Jingyuetan area.
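Of the interpolators compared above, inverse distance weighting is the simplest to sketch, and the cross-validation used to rank the methods is typically leave-one-out: each sample is withheld in turn, predicted from the rest, and the errors are pooled into an RMSE. Both are sketched below with illustrative names (this is not the paper's ArcGIS workflow):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighted interpolation at the query points."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    z = (w * z_known).sum(axis=1) / w.sum(axis=1)
    # If a query coincides with a sample, return the sample value exactly.
    hit = d.min(axis=1) < 1e-12
    if hit.any():
        z[hit] = z_known[d.argmin(axis=1)[hit]]
    return z

def loocv_rmse(xy, z, interp):
    """Leave-one-out cross-validation RMSE for any interpolator with
    the signature interp(xy_known, z_known, xy_query) -> predictions."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i          # withhold sample i
        zhat = interp(xy[mask], z[mask], xy[i:i + 1])[0]
        errs.append(zhat - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

Running `loocv_rmse` with each candidate interpolator on the same elevation samples is exactly the kind of comparison the abstract reports: the method with the lowest cross-validation RMSE is ranked best.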
29.
A review of methods for removing the effects of cloud cover in remote sensing images
This paper reviews the principles, application status, and outstanding processing problems of current methods for removing the effects of cloud cover in remote sensing images, and compares the performance of homomorphic filtering, kriging interpolation, and other methods on a common test area. The results show that kriging interpolation offers certain advantages.
30.
Bayesian data fusion in a spatial prediction context: a general formulation
In spite of the exponential growth in the amount of data that one may expect to provide greater modeling and prediction opportunities, the number and diversity of sources over which this information is fragmented is growing at an even faster rate. As a consequence, there is a real need for methods that aim at reconciling them inside an epistemically sound theoretical framework. In a statistical spatial prediction framework, classical methods are based on a multivariate approach to the problem, at the price of strong modeling hypotheses. Though new avenues have recently been opened by focusing on the integration of uncertain data sources, to the best of our knowledge there have been no systematic attempts to explicitly account for information redundancy through a data fusion procedure. Starting from the simple concept of measurement errors, this paper proposes an approach for integrating multiple information processing as part of the prediction process itself, through a Bayesian approach. A general formulation is first proposed for deriving the prediction distribution of a continuous variable of interest at unsampled locations, using more or less uncertain (soft) information at neighboring locations. The case of multiple information sources is then considered, with a Bayesian solution to the problem of fusing multiple pieces of information provided as separate conditional probability distributions. Well-known methods and results are derived as limit cases. The convenient hypothesis of conditional independence is discussed in the light of information theory and the maximum entropy principle, and a methodology is suggested for the optimal selection of the most informative subset of information, if needed. Based on a synthetic case study, an application of the methodology is presented and discussed.
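Under the conditional-independence hypothesis the abstract discusses, fusing K separate Gaussian assessments of the same quantity has a closed form. The sketch below assumes the fusion rule p(x | all sources) ∝ p(x)^(1−K) · Π_k p_k(x), which I believe matches the general Bayesian data fusion formulation in spirit, though the paper's exact notation and generality differ; all names are illustrative:

```python
import numpy as np

def fuse_gaussian_sources(means, variances, prior_mean, prior_var):
    """Fuse K independent Gaussian soft assessments of one quantity.

    Assumed fusion rule (conditional independence):
        p(x | y_1..y_K) proportional to p(x)^(1-K) * prod_k p(x | y_k)
    For Gaussians this stays Gaussian: precisions combine additively,
    with the prior counted (1 - K) times to avoid double-counting it.
    Returns (fused mean, fused variance)."""
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    K = len(means)
    precision = (1 - K) / prior_var + (1.0 / variances).sum()
    mean = ((1 - K) * prior_mean / prior_var + (means / variances).sum()) / precision
    return mean, 1.0 / precision
```

With a single source (K = 1) the formula collapses to that source's distribution, and with a very diffuse prior it reduces to the familiar precision-weighted average, two of the "limit cases" behaviors the abstract alludes to.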