191.
The degrees of freedom (DOF) in standard ensemble-based data assimilation are limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF be sufficiently large. Too few DOF relative to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: increase the DOF or decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching is an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, so a moderately sized ensemble is standard. Realizing the potential of seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also because of the imbalance between the large IC and the small number of DOF. Distance-based localization is often applied to increase the DOF, but it is example-specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the influence of the data space: subspace pseudo inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows the DOF to be increased by enlarging the ensemble without increasing the total computational cost.
We also consider a combination of decreasing the IC and increasing the DOF by proposing a novel method that combines data coarsening with coarse-scale simulation. The methods were compared on one small and one moderately large example, with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows for calculation of a reference solution obtained with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick against which the quality of the other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulation gave the best results. With very restricted computational resources available, it was the only method that gave satisfactory results.
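The subspace pseudo inversion mentioned above can be illustrated with a minimal sketch: a perturbed-observation ensemble update in which the predicted-data anomalies are inverted only on a truncated SVD subspace, so the effective data dimension seen by the update is capped. This is an illustrative reconstruction under simple assumptions (scalar observation-error standard deviation `R_std`, energy-fraction truncation via `svd_trunc`), not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, Y, d_obs, R_std, svd_trunc=0.99):
    """Perturbed-observation ensemble update with a truncated-SVD
    (subspace) pseudo inverse that limits the effective data dimension.

    X: state ensemble (nx, ne); Y: predicted data (nd, ne);
    d_obs: observed data (nd,); R_std: obs-error std (assumed scalar).
    """
    ne = X.shape[1]
    # ensemble anomalies of state and predicted data
    A = X - X.mean(axis=1, keepdims=True)
    S = Y - Y.mean(axis=1, keepdims=True)
    # perturbed observations, one realization per ensemble member
    D = d_obs[:, None] + R_std * rng.standard_normal(Y.shape)
    # SVD of predicted-data anomalies; retain singular values covering
    # a fraction `svd_trunc` of the energy (the subspace pseudo inversion)
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), svd_trunc) + 1
    U, s = U[:, :keep], s[:keep]
    # invert (S S^T + (ne-1) R) on the retained subspace only
    C_inv = U @ np.diag(1.0 / (s**2 + (ne - 1) * R_std**2)) @ U.T
    K = A @ S.T @ C_inv          # ensemble approximation of the gain
    return X + K @ (D - Y)       # updated ensemble
```

Lowering `svd_trunc` (or coarsening the data before calling the update) reduces the IC that the ensemble must absorb, which is the balance the abstract discusses.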
192.
In recent years, data assimilation techniques have been applied to an increasingly wide spectrum of problems. Monte Carlo variants of the Kalman filter, in particular the ensemble Kalman filter (EnKF), have gained significant popularity. The EnKF is used for a wide variety of applications, among them updating reservoir simulation models. The EnKF is a Monte Carlo method, and its reliability depends on the actual size of the sample. In applications, a moderately sized sample (40–100 members) is used for computational convenience. Problems due to the resulting Monte Carlo effects require a more thorough analysis of the EnKF. Earlier, we presented a method for assessing the error that emerges at the EnKF update step (Kovalenko et al., SIAM J Matrix Anal Appl, in press), studying a particular energy norm of the EnKF error after a single update step. That energy norm, however, is hard to interpret. In this paper, we derive the distribution of the Euclidean norm of the sampling error under the same assumptions as before, namely normality of the forecast distribution and negligibility of the observation error. The distribution depends on the ensemble size, the number and spatial arrangement of the observations, and the prior covariance. The distribution is used to study the error propagation in a single update step on several synthetic examples. The examples illustrate how the reliability of the EnKF changes as the parameters governing the error distribution vary.
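The sampling error whose Euclidean norm is studied above can be probed empirically with a minimal Monte Carlo sketch: compare the exact Kalman update (known prior covariance, negligible observation error, as in the paper's assumptions) with the sample-based EnKF update, and collect the Euclidean norm of the difference over repeated trials. The concrete setup (identity prior, observing the first few components) is an illustrative assumption, not the paper's derivation, which gives the distribution analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_error_norms(ne, n_trials=200, nx=10, n_obs=3):
    """Monte Carlo estimate of the Euclidean norm of the EnKF update
    sampling error for ensemble size `ne`, assuming a normal forecast
    and negligible observation error."""
    H = np.eye(n_obs, nx)          # observe the first n_obs components
    P = np.eye(nx)                 # prior covariance (assumed known)
    # exact Kalman gain in the limit of negligible observation error
    K_true = P @ H.T @ np.linalg.inv(H @ P @ H.T)
    d = np.zeros(n_obs)            # observed data
    norms = []
    for _ in range(n_trials):
        X = rng.standard_normal((nx, ne))      # forecast ensemble ~ N(0, P)
        A = X - X.mean(axis=1, keepdims=True)  # ensemble anomalies
        S = H @ A
        # sample-based gain (observation-error term neglected)
        K_hat = A @ S.T @ np.linalg.pinv(S @ S.T)
        innov = d[:, None] - H @ X
        err = (K_hat - K_true) @ innov         # sampling error of the update
        norms.append(np.linalg.norm(err.mean(axis=1)))
    return np.array(norms)
```

Running this for increasing `ne` shows the norm shrinking as the ensemble grows, mirroring the dependence of the error distribution on ensemble size noted in the abstract.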