1.
To meet the needs of fine development in the middle and late stages of an oilfield's life, and of subsequent adjustment and potential tapping, more refined reserve studies are urgently needed; one of their most important steps is the vertical subdivision of calculation units. For multi-layer structural reservoirs, vertical subdivision is based mainly on the distribution of baffles and interbeds within each oil layer group and on the petrophysical properties of the sublayers, refining the units down to individual sublayers, or to vertically continuous combinations of sublayers with similar distribution and properties. From a theoretical derivation of how reserve parameter values change before and after vertical subdivision, combined with the depositional architecture of the reservoir, we conclude: when there is no reservoir pinch-out within the oil-bearing area, only the average effective thickness decreases, and only for reservoirs whose pay thickness varies strongly in plan view; when the reservoir does pinch out within the oil-bearing area, all reserve parameters of a normal-delta reservoir generally decrease, whereas for a braided-river delta reservoir the average effective thickness generally decreases while the average effective porosity and average oil saturation increase. This conclusion can effectively guide the design of vertical subdivision schemes in reserve evaluation and provides technical support for refined reserve studies of similar oilfields.
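The parameter shifts described above follow from how volumetric reserve parameters are averaged: thickness is area-weighted, porosity is weighted by area times thickness, and saturation by area times thickness times porosity, so isolating a thin, poor-quality fringe into its own unit moves each average in a different direction. A minimal sketch, with hypothetical cells rather than field data:

```python
def avg_params(cells):
    """cells: list of (area, net_thickness, porosity, oil_saturation).
    Averages are weighted the way volumetric reserve estimation weights
    them: thickness by area, porosity by area*thickness, and saturation
    by area*thickness*porosity."""
    A = sum(a for a, h, p, s in cells)
    Ah = sum(a * h for a, h, p, s in cells)
    Ahp = sum(a * h * p for a, h, p, s in cells)
    Ahps = sum(a * h * p * s for a, h, p, s in cells)
    return Ah / A, Ahp / Ah, Ahps / Ahp

# One coarse calculation unit: a thick, good-quality sand body plus a
# thin, poorer pinch-out fringe (hypothetical areas and thicknesses).
coarse = [(10.0, 8.0, 0.25, 0.70),   # main sand body
          (5.0, 1.0, 0.15, 0.50)]    # pinch-out fringe
avg_h, avg_phi, avg_so = avg_params(coarse)

# Subdividing so the fringe is excluded from this unit raises its
# average porosity and saturation, the direction of change the abstract
# describes for braided-river delta reservoirs with pinch-outs.
h_main, phi_main, so_main = avg_params(coarse[:1])
```
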
2.
Because of the uncertainty of earthquake disasters, both the operating speed and the utilization efficiency of emergency rescue equipment are affected, so parallel optimization is needed. This paper proposes an optimization method for earthquake emergency rescue equipment based on bidirectional parallel computing. Starting from an assessment of the disaster severity level in the affected area, the relationships between earthquake magnitude and rescue equipment, and between pieces of equipment, are standardized and cast as an optimal-solution problem. Taking uncertainty into account, communication time and equipment demand are processed bidirectionally in parallel to optimize the allocation of rescue equipment. Experimental results show that the improved method accurately predicts the demand for earthquake emergency rescue equipment, raises the equipment's operating speed, shortens communication time, and improves equipment utilization.
3.
To address the characteristics of "integrated station-bridge" railway stations, with their many structural nodes and significant spatial interaction, this paper proposes a fast multi-dimensional pseudo-excitation method (PEM) that properly accounts for variation in the input angle of seismic excitation, built from power spectral density expressions along the global coordinate axes of the structure and a rational method for determining spectral intensity factors. Taking Zone II of Tianjin West Railway Station as an engineering case, results from the fast multi-dimensional PEM are compared against time-history analysis and the existing PEM. The three methods agree well, confirming that the fast multi-dimensional PEM gives reasonable results; moreover, its computational efficiency is significantly higher than that of the existing PEM, making it better suited to seismic analysis of "integrated station-bridge" railway stations.
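The pseudo-excitation method mentioned above converts a stationary random-vibration analysis into a series of deterministic harmonic analyses: for an input power spectral density S_xx(w), the structure is excited by sqrt(S_xx(w)) * exp(i*w*t), and the squared magnitude of the harmonic response is the response PSD. A minimal single-degree-of-freedom sketch (all values hypothetical, unrelated to the Tianjin West Station model):

```python
import numpy as np

m, c, k = 1.0, 0.1, 4.0             # mass, damping, stiffness (hypothetical)
omegas = np.linspace(0.1, 5.0, 50)  # frequency grid (rad/s)
S_xx = np.ones_like(omegas)         # white-noise input PSD (illustrative)

# Frequency response function of the SDOF oscillator.
H = 1.0 / (k - m * omegas**2 + 1j * c * omegas)

# Pseudo-excitation route: respond to the deterministic harmonic
# sqrt(S_xx) * exp(i*w*t); the response PSD is |pseudo response|^2.
y_pseudo = H * np.sqrt(S_xx)
S_yy_pem = np.abs(y_pseudo) ** 2

# Classical random-vibration result for comparison: S_yy = |H|^2 * S_xx.
S_yy_classic = np.abs(H) ** 2 * S_xx
```

The two routes agree exactly; the payoff of the pseudo-excitation form is that cross-spectra of many responses come for free from products of pseudo responses, which is what makes the method fast for structures with many nodes.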
4.
Peng Yue, Fan Gao, Boyi Shangguan, Zheren Yan. International Journal of Geographical Information Science, 2020, 34(11): 2243-2274
ABSTRACT High performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and domain decomposition for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in evaluating computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning, a core technique of Artificial Intelligence (AI), shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition that divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing across different cases. Comparative experiments between the approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection of vector data. The results not only demonstrate the advantage of the approach but also provide hints on how traditional GIS computation can be improved by AI machine learning.
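The core of the pipeline above, predict per-tile computational intensity and then cut the domain into contiguous load-balanced pieces, can be sketched as follows. The linear "model" (CIT proportional to point count per tile) and all numbers are stand-ins for the paper's trained ML models, chosen only to keep the example self-contained:

```python
def greedy_balanced_strips(cit, n_workers):
    """Cut a row of tiles into contiguous strips whose summed predicted
    CIT is as close to total/n_workers as a single greedy pass allows."""
    target = sum(cit) / n_workers
    strips, current, acc = [], [], 0.0
    for i, w in enumerate(cit):
        # Close the strip early if adding this tile would overshoot the
        # target by more than stopping now undershoots it.
        if current and len(strips) < n_workers - 1 and \
                (acc + w) - target > target - acc:
            strips.append(current)
            current, acc = [], 0.0
        current.append(i)
        acc += w
    strips.append(current)
    return strips

# Hypothetical predicted CIT: proportional to point count per tile,
# with one dense "hot" tile at the end of the row.
points_per_tile = [10, 10, 10, 10, 10, 10, 10, 130]
predicted_cit = [0.5 * n for n in points_per_tile]   # stand-in model

parts = greedy_balanced_strips(predicted_cit, 2)
# A naive equal-tile-count split ([0-3] vs [4-7]) would load the two
# workers 20 vs 80; the CIT-aware cut gives 35 vs 65.
```
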
5.
Jayakrishnan Ajayakumar, Eric Shook. International Journal of Geographical Information Science, 2020, 34(9): 1683-1707
ABSTRACT Crime often clusters in space and time. Near-repeat patterns improve understanding of crime communicability and its space–time interactions. Near-repeat analysis requires extensive computing resources to assess the statistical significance of space–time interactions: a computationally intensive Monte Carlo simulation-based approach is used to evaluate the significance of the space–time patterns underlying near-repeat events, and currently available software for identifying near-repeat patterns is not scalable to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis on large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the newly developed software with an existing implementation, assess the performance gain due to parallel computation, test the scalability of the software to large crime datasets, and assess the utility of the new software for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms that leverage high-performance computing, together with performance optimization techniques, can be used to develop software that is scalable to large datasets and provides solutions for computationally intensive statistical simulation-based approaches in crime analysis.
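The simulation-based significance test described above can be illustrated with a Knox-style statistic: count event pairs close in both space and time, then permute event dates to break any space–time link, taking the p-value as the share of permutations scoring at least the observed count. Because every permutation is independent, the loop parallelizes trivially, which is the property the parallel calculator exploits. Thresholds and events below are illustrative:

```python
import random
from itertools import combinations

def knox(events, d_max, t_max):
    """events: list of (x, y, t). Count pairs close in space AND time."""
    hits = 0
    for (x1, y1, t1), (x2, y2, t2) in combinations(events, 2):
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= d_max ** 2 \
                and abs(t1 - t2) <= t_max:
            hits += 1
    return hits

def knox_p_value(events, d_max, t_max, n_sim=999, seed=42):
    """Permute event dates; each permutation is independent, so this
    loop is the embarrassingly parallel part."""
    rng = random.Random(seed)
    observed = knox(events, d_max, t_max)
    times = [t for _, _, t in events]
    extreme = 0
    for _ in range(n_sim):
        shuffled = rng.sample(times, len(times))
        permuted = [(x, y, t) for (x, y, _), t in zip(events, shuffled)]
        if knox(permuted, d_max, t_max) >= observed:
            extreme += 1
    return (extreme + 1) / (n_sim + 1)

# Toy events (x, y, day): one genuine near-repeat pair near the origin.
events = [(0, 0, 0), (1, 0, 2), (50, 50, 100), (51, 50, 200), (90, 10, 400)]
p = knox_p_value(events, d_max=5, t_max=7)
```
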
6.
Given the technical characteristics and functional requirements of the JOPENS software system for digital seismic networks, and after comparing the current mainstream cloud computing platforms, this paper proposes a scheme for deploying JOPENS in a cloud environment based on the Alibaba Cloud platform, which proved the most suitable. Test results show that deploying JOPENS in the cloud improves the stability and scalability of the seismic network center's operations and reduces operation and maintenance costs. The study offers a useful reference for the construction and operation of the Yunnan seismic network under the new conditions of triple-network convergence.
7.
Fangli Zhang. International Journal of Geographical Information Science, 2019, 33(10): 1984-2010
High-performance simulation of flow dynamics remains a major challenge in the use of physically based, fully distributed hydrologic models. Parallel computing has been widely used to overcome efficiency limitations by partitioning a basin into sub-basins and executing calculations among multiple processors. However, existing partition-based parallelization strategies are still hampered by the dependencies between inter-connected sub-basins. This study proposed a particle-set strategy to parallelize the flow-path network (FPN) model for achieving higher performance in the simulation of flow dynamics. The FPN model replaces the hydrological calculations on sub-basins with the movements of water packages along the upstream and downstream flow paths. Unlike previous partition-based task decomposition approaches, the proposed particle-set strategy decomposes the computational workload by randomly allocating runoff particles to concurrent computing processors. Simulation experiments of the flow routing process were undertaken to validate the developed particle-set FPN model. The hourly outlet discharges were compared with field-gauged records, and up to 128 computing processors were tested to explore the model's speedup capability in parallel computing. The experimental results showed that the proposed framework can achieve prediction accuracy and parallel efficiency similar to those of the Triangulated Irregular Network (TIN)-based Real-Time Integrated Basin Simulator (tRIBS).
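The contrast above, dependency-laden sub-basin partitions versus independent water packages, can be sketched as follows. The 5-node flow-path network, particle counts, and "workers" are hypothetical; the point is that particles can be dealt out to processors at random and the partial outlet counts simply add:

```python
import random

# Flow-path network: node -> downstream node; node 4 is the outlet.
downstream = {0: 2, 1: 2, 2: 3, 3: 4, 4: None}

def route(particle_nodes, n_steps):
    """Advance each water package one node per step; count the packages
    that pass the outlet. Each package moves independently of the rest."""
    arrivals = 0
    nodes = list(particle_nodes)
    for _ in range(n_steps):
        nxt = []
        for node in nodes:
            if downstream[node] is None:
                arrivals += 1          # package leaves through the outlet
            else:
                nxt.append(downstream[node])
        nodes = nxt
    return arrivals

particles = [0, 0, 1, 1, 2, 3] * 10    # 60 runoff packages on the network
serial = route(particles, n_steps=4)

# Random allocation to 4 "workers": each routes its share independently,
# with no inter-worker dependency, and the partial counts just add up.
rng = random.Random(0)
shuffled = rng.sample(particles, len(particles))
parallel = sum(route(shuffled[w::4], n_steps=4) for w in range(4))
```
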
8.
Boussinesq wave models are phase-resolving models; solving them in the time domain requires high spatial and temporal resolution to guarantee accuracy. To improve computational efficiency, parallel algorithms for this class of models need to be investigated. Compared with a traditional central processing unit (CPU), a graphics processing unit (GPU) has a large number of arithmetic units and can markedly increase computational throughput. Using the CUDA C language of the unified compute device architecture and a GPU, a parallel implementation of a Boussinesq model was developed. The model's results agree closely with CPU-based numerical results and with analytical solutions. A comparison of CPU and GPU performance further shows that the GPU model is substantially faster, and the speedup grows as the number of grid cells increases.
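The GPU kernel above updates every grid point from its immediate neighbours, so each point maps naturally to one CUDA thread. As a stand-in for CUDA C, the sketch below mimics that one-thread-per-point structure with a NumPy whole-array update, and substitutes the 1-D wave equation for the full Boussinesq equations (all parameters hypothetical):

```python
import numpy as np

nx, nt = 201, 200
c, dx, dt = 1.0, 0.05, 0.02       # wave speed and grid; CFL = c*dt/dx = 0.4
r2 = (c * dt / dx) ** 2

x = np.linspace(0, 10, nx)
u_prev = np.exp(-((x - 5.0) ** 2) / 0.1)   # Gaussian hump initial state
u = u_prev.copy()                           # zero initial velocity

for _ in range(nt):
    u_next = np.empty_like(u)
    # Interior update: the per-point formula one CUDA thread would
    # evaluate; every point depends only on its two neighbours.
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + \
        r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_next[0] = u_next[-1] = 0.0            # fixed boundary values
    u_prev, u = u, u_next
```

Because the interior update has no dependency between points within a time step, the whole-array statement above translates line-for-line into a CUDA kernel launched with one thread per grid point, which is where the reported speedup on finer grids comes from.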
9.
Yasaman Jafariavval. Marine Georesources & Geotechnology, 2020, 38(2): 214-222
Abstract Liquefaction of loose saturated soil deposits is a hazardous type of ground failure occurring under earthquake excitation. An accurate estimation of liquefaction potential is therefore extremely important in geotechnical engineering. In the current study, a new model is proposed that estimates the level of strain energy needed for liquefaction initiation. A compiled database of cyclic tests gathered from previously published works was used to develop new models to predict liquefaction potential. The M5′ algorithm was used to find the best correlation between parameters. The derived formulas were shown to be not only acceptably accurate but also very simple in structure compared with available formulas in the literature. The proposed equations are accurate, physically sound and uncomplicated. Furthermore, safety factors are given for different levels of risk, which can be useful in engineering practice. In addition, the influence of different predictors on liquefaction potential was evaluated, and the significance of the input variables was assessed via sensitivity analysis. Finally, a new model was introduced for preliminary estimation of liquefaction potential.
10.