Similar Literature (20 results)
1.
A combination of factorial kriging and probability field simulation is proposed to correct realizations, from any simulation algorithm, that show either too high a nugget effect (noise) or poor histogram reproduction. First, factorial kriging filters the noise out of the noisy realization. Second, the uniform scores of the filtered realization are used as a probability field to sample the local probability distributions conditional to the same dataset used to generate the original realization; this second step restores the data variance. The result is a corrected realization that better reproduces the target variogram and histogram models while still honoring the conditioning data.
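A minimal sketch of the second (p-field) step, assuming Gaussian local conditional distributions parameterized by kriging means and standard deviations; the inputs `filtered`, `local_mean`, `local_std`, and the conditioning mask are hypothetical, and the factorial kriging filter itself is not shown:

```python
import numpy as np
from scipy.stats import norm, rankdata

def pfield_correction(filtered, local_mean, local_std, data_mask, data_values):
    """Resample local conditional distributions using the uniform scores of a
    noise-filtered realization as the probability field."""
    # Uniform scores: rank transform of the filtered realization into (0, 1)
    u = rankdata(filtered.ravel()).reshape(filtered.shape) / (filtered.size + 1)
    # Sample each local conditional cdf at probability u (Gaussian assumption)
    corrected = norm.ppf(u, loc=local_mean, scale=local_std)
    corrected[data_mask] = data_values   # honor the conditioning data exactly
    return corrected
```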

2.
Correcting the Smoothing Effect of Estimators: A Spectral Postprocessor
The postprocessing algorithm introduced by Yao for imposing the spectral amplitudes of a target covariance model is shown to be efficient in correcting the smoothing effect of estimation maps, whether obtained by kriging or any other interpolation technique. As opposed to stochastic simulation, Yao's algorithm yields a unique map starting from an original, typically smooth, estimation map. Most importantly, it is shown that reproduction of a covariance/semivariogram model (global accuracy) is necessarily obtained at the cost of reduced local accuracy and increased conditional bias. When working on one location at a time, kriging remains the most accurate estimator in the least-squared-error sense. However, kriging estimates should only be listed, not mapped, since they do not reflect the correct (target) spatial autocorrelation. This mismatch in spatial autocorrelation can be corrected via stochastic simulation, or imposed a posteriori via Yao's algorithm.
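A hedged sketch of the core spectral idea — keep the phase spectrum of the smooth map and impose the amplitude spectrum implied by a target covariance grid; Yao's published algorithm involves additional steps (spectrum smoothing, conditioning checks) omitted here, and the `target_cov` grid is an assumed input:

```python
import numpy as np

def spectral_postprocess(kriged_map, target_cov):
    """Keep the phases of the smooth map, impose target spectral amplitudes."""
    F = np.fft.fft2(kriged_map)
    phase = np.angle(F)
    # Density spectrum = FFT of the covariance map (Bochner); amplitudes are
    # its square root (scaling depends on the FFT normalization convention)
    density = np.abs(np.fft.fft2(target_cov))
    amplitude = np.sqrt(density * kriged_map.size)
    return np.fft.ifft2(amplitude * np.exp(1j * phase)).real
```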

3.
Fluid inclusion microthermometric data are often reported as homogenization temperature (Th) frequency histograms. Interpretation of such histograms for a single fluid inclusion assemblage (FIA) of non-reequilibrated fluid inclusions is usually straightforward and provides an accurate determination of the original density (Th) of that FIA. However, interpretation of such histograms for reequilibrated inclusions is more problematic. Decompression experiments using synthetic inclusions in natural quartz, conducted at 2–5 kbar and 600–700 °C with a maximum internal overpressure of 2 kbar, indicate that histogram shape reflects the sample's P-T history. Our results further indicate that the mean, mode, range, standard deviation, and extreme values are all significant with respect to the P-T history of the sample. Thus, a mound-shaped, unimodal histogram with low range is indicative of a nearly isochoric cooling P-T path. A unimodal histogram that is slightly skewed to the right, with a low standard deviation but high range, results from inclusion deformation in the plastic regime (high temperature/low strain rates); fluid inclusions deformed plastically show no correlation between size and density. Histogram outliers should not be ignored and may be used to determine an isochore that passes close to the conditions of entrapment (minimum Th) or close to the final reequilibration conditions (maximum Th). The histogram mean Th value corresponds to an isochore representing the internal overpressure (about 1 kbar) that can be maintained over geologic time by a majority of reequilibrated fluid inclusions. A multimodal histogram with high range and high standard deviation indicates brittle inclusion deformation (low temperature/high strain environments); fluid inclusions deformed in a brittle manner show a strong positive correlation between size and density. Histograms produced in the laboratory show many similarities to histograms for natural samples, offering the hope that laboratory results may be used to interpret P-T histories of natural samples.

4.
The cumulative distribution function (CDF) of the magnitude of seismic events is one of the most important probabilistic characteristics in Probabilistic Seismic Hazard Analysis (PSHA). The magnitude distribution of mining-induced seismicity is complex and is therefore estimated using kernel nonparametric estimators. Because of its model-free character, however, the nonparametric approach cannot provide confidence interval estimates for the CDF using the classical methods of mathematical statistics. To assess errors in the estimation of seismic event magnitudes, and thereby in the evaluation of seismic hazard parameters in the nonparametric approach, we propose the use of resampling methods. Resampling techniques applied to a single dataset provide many replicas of this sample that preserve its probabilistic properties. In order to estimate the confidence intervals for the CDF of magnitude, we have developed an algorithm based on the bias-corrected and accelerated (BCa) method. This procedure uses the smoothed bootstrap and second-order bootstrap samples; we refer to it as the iterated BCa method. The algorithm's performance is illustrated through the analysis of Monte Carlo simulated seismic event catalogues and actual data from an underground copper mine in the Legnica–Głogów Copper District in Poland. The studies show that the iterated BCa technique provides satisfactory results regardless of the sample size and the actual shape of the magnitude distribution.
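The full iterated BCa machinery (bias correction, acceleration, smoothed and second-order resamples) is beyond a short example, but the underlying resampling idea can be sketched with a plain percentile bootstrap for the CDF at a fixed magnitude:

```python
import numpy as np

def bootstrap_cdf_ci(magnitudes, m, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for F(m) = P(M <= m)."""
    rng = np.random.default_rng(seed)
    n = len(magnitudes)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(magnitudes, size=n, replace=True)
        stats[b] = np.mean(resample <= m)          # empirical CDF at m
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))
```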

5.
The application of spectral simulation is gaining acceptance because it honors the spatial distribution of petrophysical properties, such as reservoir porosity and shale volume. While it has been widely assumed that spectral simulation will reproduce the mean and variance of important properties such as the observed net/gross ratio or the global average of porosity, this paper shows that the traditional implementation of spectral simulation yields a mean and variance that deviate from the observed ones. Corrections (shift and rescale) can be applied to generate geologic models with the observed mean and variance; however, this correction implicitly rescales the input variogram model, so the variogram of the generated realizations has a higher sill than the input model. The spectral simulation algorithm therefore cannot build geologic models honoring the desired mean, variance, and variogram model simultaneously, contrary to the widely accepted assumption that spectral simulation reproduces all the target statistics. However, by using the Fourier transform just once to generate values at all cells, instead of visiting each cell sequentially, spectral simulation does reproduce the observed variogram better than sequential Gaussian simulation: the variograms calculated from the generated geologic models show smaller fluctuations around the target variogram. The larger the generated model size relative to the variogram range, the smaller the observed fluctuations.
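A sketch of one-shot spectral simulation on a 1-D grid together with the shift-and-rescale correction the paper analyzes; the exponential covariance model and all parameters are illustrative only:

```python
import numpy as np

def spectral_sim(n, corr_range, mean, var, seed=0):
    """One-shot spectral simulation on an n-cell 1-D grid."""
    rng = np.random.default_rng(seed)
    lags = np.minimum(np.arange(n), n - np.arange(n))      # circulant lags
    cov = var * np.exp(-3.0 * lags / corr_range)           # exponential model
    density = np.maximum(np.fft.fft(cov).real, 0.0)        # spectral density
    phases = rng.uniform(0.0, 2.0 * np.pi, n)              # random phases
    z = np.fft.ifft(np.sqrt(density * n) * np.exp(1j * phases))
    z = mean + np.sqrt(2.0) * z.real   # real part keeps half the power
    # The realized mean/variance deviate from the targets (the paper's point);
    # the shift-and-rescale below restores them but rescales the variogram sill:
    return mean + (z - z.mean()) * np.sqrt(var / z.var())
```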

6.
Covariance models provide the basic measure of spatial continuity in geostatistics. Traditionally, a closed-form analytical model is fitted to allow interpolation of sample covariance values while ensuring the positive definiteness condition. For cokriging, the modeling task is made even more difficult by the restriction imposed by the linear coregionalization model. Bochner's theorem maps the positive definiteness constraints into much simpler constraints on the Fourier transform of the covariance, that is, the density spectrum. Accordingly, we propose to transform the experimental (cross) covariance tables into quasi-density spectrum tables using the Fast Fourier Transform (FFT). These quasi-density spectrum tables are then smoothed under constraints of positivity and unit sum. A back transform (inverse FFT) yields permissible, jointly positive definite (cross) covariance tables. At no point is any analytical modeling called for, and the algorithm is not restricted by the linear coregionalization model. A case study shows the proposed covariance modeling to be easier and much faster than traditional analytical covariance modeling, while yielding comparable kriging and simulation results.
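A compact sketch of the workflow under simplifying assumptions (a plain moving-average smoother stands in for the paper's constrained smoothing, only the auto-covariance case is shown, and lag zero is assumed at index (0, 0)):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def permissible_covariance(cov_table):
    """Make an experimental covariance table positive definite via its spectrum."""
    spectrum = np.fft.fft2(cov_table).real         # quasi-density spectrum
    spectrum = uniform_filter(spectrum, size=3)    # smooth (illustrative choice)
    spectrum = np.maximum(spectrum, 0.0)           # positivity constraint
    spectrum /= spectrum.sum()                     # unit-sum constraint
    spectrum *= cov_table[0, 0] * cov_table.size   # restore the variance (sill)
    return np.fft.ifft2(spectrum).real             # permissible covariance table
```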

7.
Accounting for Estimation Optimality Criteria in Simulated Annealing
This paper presents both estimation and simulation as optimization problems that differ in their optimization criteria: minimization of a local expected loss for estimation, and reproduction of global statistics (semivariogram, histogram) for simulation. An intermediate approach is proposed whereby an initial random image is gradually modified using simulated annealing so as to better match both local and global constraints. The relative weights of the different constraints in the objective function allow the user to strike a balance between the smoothness of the estimated map and the reproduction of spatial variability by simulated maps. The procedure is illustrated using a synthetic dataset. The proposed approach is shown to enhance the influence of observations on neighboring simulated values, hence the final realizations appear better conditioned to the sample information. It also produces maps that are more accurate (smaller prediction error) than stochastic simulation ignoring local constraints, though not as accurate as E-type estimation. Flow simulation results show that accounting for local constraints yields, on average, smaller errors in production forecasts than a smooth estimated map or a simulated map that reproduces only the histogram and semivariogram. The approach thus reduces the risk associated with using a single realization for forecasting and planning.
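A minimal annealing loop with a two-part weighted objective (local squared error at data locations plus a global quantile mismatch standing in for histogram reproduction); the variogram term and the cooling schedule are simplified, and all weights are illustrative:

```python
import numpy as np

def anneal(initial, data_idx, data_val, target_quantiles,
           w_local=1.0, w_global=1.0, n_iter=20000, t0=1.0, seed=0):
    """Simulated annealing on a 1-D field mixing local and global constraints."""
    rng = np.random.default_rng(seed)
    z = initial.copy()

    def objective(x):
        local = np.mean((x[data_idx] - data_val) ** 2)
        global_ = np.mean((np.quantile(x, [0.1, 0.25, 0.5, 0.75, 0.9])
                           - target_quantiles) ** 2)
        return w_local * local + w_global * global_

    obj = objective(z)
    for it in range(n_iter):
        t = t0 * 0.999 ** it                          # geometric cooling
        i = rng.integers(z.size)
        old, z[i] = z[i], z[i] + rng.normal(0, 0.1)   # perturb one cell
        new_obj = objective(z)
        if new_obj > obj and rng.random() > np.exp((obj - new_obj) / t):
            z[i] = old                                # reject the move
        else:
            obj = new_obj                             # accept the move
    return z
```

Raising `w_local` pushes the result toward a smooth, estimation-like map; raising `w_global` pushes it toward a simulation-like map, which is exactly the trade-off the abstract describes.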

8.
Segmenting sand-sample images into single grains and identifying their composition can markedly improve the accuracy and efficiency of lithological analysis. Existing sand-sample image segmentation methods rely mainly on the traditional watershed algorithm or on convolutional neural networks, but because they extract insufficient contour detail for single rock-cuttings grains, their mis-segmentation rate is high. This paper proposes a single-grain image segmentation and extraction method that combines a convolutional neural network with the watershed algorithm, using an image-fusion algorithm as the bridge. First, an improved Mask R-CNN network rapidly segments the original sand-sample image to obtain an initial segmentation. Next, the initial segmentation is fused with the original image, and an improved watershed algorithm segments the fusion result. Finally, a coordinate-matching step against the original image corrects the watershed output and completes single-grain cuttings extraction. Experimental results show that this automatic single-grain segmentation and extraction method reaches an accuracy of 96.77% with a lighter and more precise model, providing a feasible and effective approach to cuttings image segmentation that can support estimating structural changes in reservoir strata, locating potential sediment sources, and tracking dynamic reservoir changes.
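A hedged sketch of the watershed half of such a pipeline using scikit-image; the Mask R-CNN stage, the image fusion, and the coordinate-matching correction are beyond a short example, so distance-transform maxima stand in as grain markers:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_grains(binary_mask):
    """Separate touching grains in a binary mask via marker-based watershed."""
    distance = ndi.distance_transform_edt(binary_mask)
    # One marker per grain: local maxima of the distance map inside the mask
    peaks = peak_local_max(distance, min_distance=10,
                           labels=ndi.label(binary_mask)[0])
    markers = np.zeros(binary_mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_mask)
```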

9.
Direct Sequential Simulation and Cosimulation
Sequential simulation of a continuous variable usually requires its transformation into a binary or a Gaussian variable, giving rise to the classical algorithms of sequential indicator simulation and sequential Gaussian simulation. Journel (1994) showed that sequential simulation of a continuous variable, without any prior transformation, succeeds in reproducing the covariance model, provided the simulated values are drawn from local distributions centered at the simple kriging estimates with a variance equal to the simple kriging estimation variance. Unfortunately, this does not reproduce the histogram of the original variable, which is one of the basic requirements of any simulation method, and has been the most serious limitation to the practical application of the direct simulation approach. In this paper, a new approach to direct sequential simulation is proposed. The idea is to use the local simple kriging (SK) estimates of the mean and variance not to define the local cdf but to sample from the global cdf: simulated values of the original variable are drawn from intervals of the global cdf calculated with the local estimates of the mean and variance. One of the main advantages of the direct sequential simulation method is that it allows joint simulation of N_v variables without any transformation. A set of examples of direct simulation and cosimulation is presented.
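A sketch of the key sampling step in the spirit of the approach described: the local SK mean and variance position an interval of the global cdf through a Gaussian intermediary, and the simulated value is drawn from the global distribution; rescaling the SK variance into standard-Gaussian units as below is an assumption of this sketch:

```python
import numpy as np
from scipy.stats import norm

def dss_draw(global_vals, sk_mean, sk_var, rng):
    """Draw one value from an interval of the global cdf positioned by the
    local simple-kriging mean and variance (Gaussian intermediary)."""
    sorted_vals = np.sort(global_vals)
    # Position of the local SK mean within the global cdf
    p_mean = np.searchsorted(sorted_vals, sk_mean) / len(sorted_vals)
    y_mean = norm.ppf(np.clip(p_mean, 1e-6, 1 - 1e-6))
    # Spread of the interval: SK variance rescaled to standard-Gaussian units
    y = rng.normal(y_mean, np.sqrt(sk_var / global_vals.var()))
    # Map back through the global cdf (quantile of the data distribution)
    return np.quantile(sorted_vals, norm.cdf(y))
```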

10.
The two sides of a rock fracture are often geometrically very different. The difference, or heterogeneity, can be captured using aperture; this is also of great practical significance because locally low or zero aperture in a rock fracture impedes flow. The aperture measurements of each of five paired rock blocks are discretely digitized (0–9). Each resulting weakly continuous two-dimensional matrix is then traversed by the nflow (new-flow) algorithm, once from each entry at one end to the other end. The paths are either incomplete, ending surrounded by zeros, or completed at the other (exit) end. The numerical digit distribution is held constant from block to block. Fracture through-flow is defined as the average percentage of complete paths from one end to the other for that block. Fracture roughness is negatively correlated with fracture through-flow (r ≈ −0.893): a rough fracture will allow only little flow compared with a fracture with a smoother aperture distribution. Channeling of complete paths toward exit points is discussed for all blocks; channeling is not related to roughness. A long complete sample path in a very rough fracture is shown, and the distribution of numbers of complete paths is also investigated.
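The nflow path rules are specific to the paper; as a hedged stand-in, the sketch below counts a path as complete when 4-connected non-zero aperture cells link an entry on one edge to the opposite edge, which matches the through-flow definition (percentage of complete paths) if not nflow's exact traversal:

```python
from collections import deque
import numpy as np

def through_flow(aperture):
    """Percentage of top-edge entries from which non-zero aperture cells
    connect (4-connectivity) to the bottom edge."""
    rows, cols = aperture.shape
    entries = [c for c in range(cols) if aperture[0, c] > 0]
    complete = 0
    for start in entries:
        seen = {(0, start)}
        queue = deque([(0, start)])
        while queue:
            r, c = queue.popleft()
            if r == rows - 1:          # reached the exit edge
                complete += 1
                break
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and aperture[nr, nc] > 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return 100.0 * complete / max(len(entries), 1)
```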

11.
An upscaling algorithm has been developed that generates an irregular coarse grid preserving flow connectivity by applying a rule-based upscaling algorithm to a fine-scale facies distribution. The algorithm is demonstrated using stochastically generated paleo-fluvial facies distributions. First, an irregular grid honoring the channel facies is created, followed by computation of effective anisotropic parameters for all coarse-grid cells. For the apparent layer-cake geometry of overbank deposits seen in outcrop, two local upscaling methods are compared: (1) the layered-system approximation and (2) the mode. To assess upscaling performance, flow simulations for the original and upscaled grids are compared. The horizontal layered approximation (arithmetic mean) performs poorly, over-predicting lateral connectivity where even infrequent disconnection becomes important. Performance of the mode as an upscaling rule depends on the probability that a coarse-grid cell is dominated by a single facies, and it performs surprisingly well because the coarse-grid generation honors the channels, thereby informing the upscaling process. Lastly, the irregular coarse grid was compared to a uniform coarse grid and showed superior performance. The reduction in grid size achieved by irregular-grid generation is a function of the geometrical complexity of the geologic objects to be honored.
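A toy comparison of the two local upscaling rules on a regular coarsening (the paper's grid is irregular and honors channels; uniform 2×2 blocks are used here only to show the operators):

```python
import numpy as np
from scipy import stats

def upscale(facies, block=2, rule="mode"):
    """Coarsen a 2-D facies grid by block aggregation: 'mode' or 'mean'."""
    r, c = facies.shape
    blocks = facies[:r - r % block, :c - c % block] \
        .reshape(r // block, block, c // block, block) \
        .swapaxes(1, 2).reshape(-1, block * block)
    if rule == "mode":
        coarse = stats.mode(blocks, axis=1, keepdims=False).mode
    else:   # arithmetic mean, i.e., the horizontal layered approximation
        coarse = blocks.mean(axis=1)
    return coarse.reshape(r // block, c // block)
```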

12.
This paper presents a reformulation of the original Matsuoka–Nakai criterion that overcomes the limitations making its use in a stress-point algorithm problematic. Its graphical representation in principal stress space is not convex, as it comprises several branches that also plot in negative octants, and it does not increase monotonically as the distance of the stress point from the failure surface grows. The proposed mathematical reformulation plots as a single convex surface that lies entirely in the positive octant of the stress space and evaluates to a quantity that increases monotonically as the stress point moves away from the failure surface. It is an exact reproduction, not an approximation, of the only significant branch of the original criterion, and it is also suitable for shaping, in the deviatoric plane, the yield and plastic potential surfaces of complex constitutive models. A very efficient numerical algorithm for the implicit integration of the proposed formulation is presented, which evaluates the stress at the end of each increment by solving a single scalar equation, for both associated and non-associated plasticity. The algorithm can easily be adapted to other smooth surfaces with a linear meridian section. Finally, a closed-form expression of the consistent Jacobian matrix is given for achieving quadratic convergence in the external structural Newton loop. It is shown that all this results in extremely fast solutions of boundary value problems. Copyright © 2013 John Wiley & Sons, Ltd.
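For reference, the original Matsuoka–Nakai criterion is usually written in terms of the principal-stress invariants (the paper's convex single-branch reformulation is not reproduced here):

$$\frac{I_1 I_2}{I_3} \;=\; \frac{9-\sin^2\varphi}{1-\sin^2\varphi} \;=\; 9 + 8\tan^2\varphi,$$

with $I_1=\sigma_1+\sigma_2+\sigma_3$, $I_2=\sigma_1\sigma_2+\sigma_2\sigma_3+\sigma_3\sigma_1$, $I_3=\sigma_1\sigma_2\sigma_3$, and $\varphi$ the friction angle. The spurious extra branches arise because this equality is also satisfied by stress states outside the compression octant, which is the non-convexity problem the reformulation removes.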

13.
Estimation or simulation? That is the question
The issue of smoothing in kriging has been addressed either by estimation or by simulation. The solution via estimation calls for postprocessing kriging estimates in order to correct the smoothing effect. Stochastic simulation provides equiprobable images presenting no smoothing and reproducing the covariance model; consequently, these images reproduce both the sample histogram and the sample semivariogram. A problem remains, however: simulated images lack local accuracy. In this paper, a postprocessing algorithm for correcting the smoothing effect of ordinary kriging estimates is compared with sequential Gaussian simulation realizations. Based on samples drawn from exhaustive data sets, the postprocessing algorithm is shown to be superior to any individual simulation realization, at the expense of providing a single deterministic estimate of the random function.

14.
赵荣军 《物探与化探》2000,24(2):150-153
Based on visual program design, the selection of the number of histogram bins and the drawing of the histogram were implemented with Borland Delphi 4. The program can plot frequency histograms of raw or log-transformed data for any bin count in the interval [5, 25], so users can visually choose an appropriate grouping from the plots. The program has a friendly interface and is easy to use.
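The same exploration is straightforward to reproduce outside Delphi; a small Python sketch (matplotlib assumed) that draws a frequency histogram of raw or log-transformed data for any bin count in [5, 25]:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_histogram(data, bins, use_log=False):
    """Frequency histogram (%) with a user-chosen bin count in [5, 25]."""
    assert 5 <= bins <= 25, "bin count restricted to [5, 25]"
    data = np.asarray(data, dtype=float)
    values = np.log10(data[data > 0]) if use_log else data
    plt.hist(values, bins=bins,
             weights=np.full(values.size, 100.0 / values.size))
    plt.xlabel("log10(value)" if use_log else "value")
    plt.ylabel("frequency (%)")
    plt.show()
```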

15.
The Necessity of a Multiple-Point Prior Model
Any interpolation, any hand contouring or digital drawing of a map or a numerical model, necessarily calls for a prior model of the multiple-point statistics that link the data to the unsampled nodes, and these unsampled nodes to one another. That prior model can be implicit and poorly defined, as in hand contouring, or explicit through an algorithm, as in digital mapping. The multiple-point statistics involved go well beyond single-point histogram and two-point covariance models; the challenge is to define algorithms that can control more of such statistics, particularly those that most affect the utilization of the resulting maps beyond their visual appearance. The newly introduced multiple-point simulation (mps) algorithms borrow the high-order statistics from a visually and statistically explicit model, a training image. It is shown that mps can simulate realizations with a high-entropy character as well as traditional Gaussian-based algorithms can, while offering the flexibility of considering alternative training images with various levels of low-entropy (organized) structures. The impact on flow performance (spatial connectivity) of choosing a wrong training image among many sharing the same histogram and variogram is demonstrated.

16.
A multivariate probability transformation between random variables, known as the Nataf transformation, is shown to be the appropriate transformation for multi-Gaussian kriging. It assumes a diagonal Jacobian matrix for the transformation of the random variables between the original space and the Gaussian space. This allows writing the probability transformation between the local conditional probability density function in the original space and the local conditional Gaussian probability density function in the Gaussian space as a ratio equal to the ratio of their respective marginal distributions. Under stationarity, the marginal distribution in the original space is modeled from the data histogram. The stationary marginal standard Gaussian distribution is obtained from the normal scores of the data, and the local conditional Gaussian distribution is modeled from the kriging mean and kriging variance of the normal scores of the data. The equality of ratios of distributions has the same form as Bayes' rule, and the assumption of stationarity of the data histogram can be reinterpreted as the gathering of the prior distribution; multi-Gaussian kriging can then be reinterpreted as an updating of the data histogram by a Gaussian likelihood. Bayes' rule allows an even more general interpretation of spatial estimation: the ratio of the conditional distribution over the marginal distribution in the original data uncertainty space equals the same ratio for a model of uncertainty whose distribution can be modeled using the mean and variance from direct kriging of the original data values. The approach is based on the principle of conservation of probability ratios, and no transformation is required. The local conditional distribution has a data-dependent variance. When used in sequential simulation mode, it reproduces the histogram and variogram of the data, thus providing a new approach for direct simulation in the original value space.
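A sketch of the stationary ingredients mentioned above — the normal-score transform that yields the standard Gaussian marginal, and the back-transform through the data histogram (ties and tail extrapolation are ignored for brevity):

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """Forward normal-score transform: empirical cdf -> standard Gaussian."""
    p = rankdata(x) / (len(x) + 1)   # empirical cdf values in (0, 1)
    return norm.ppf(p)

def back_transform(y, reference):
    """Map Gaussian values back through the reference data histogram."""
    p = norm.cdf(y)
    return np.quantile(np.sort(reference), p)
```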

17.
The Markov chain random field (MCRF) theory provides the theoretical foundation for a nonlinear Markov chain geostatistics. In a MCRF, the single Markov chain is also called a "spatial Markov chain" (SMC). This paper introduces an efficient fixed-path SMC algorithm for conditional simulation of discrete spatial variables (i.e., multinomial classes) on point samples, with incorporation of interclass dependencies. The algorithm considers the four nearest known neighbors in orthogonal directions. Transiograms are estimated from samples and model-fitted to provide the parameter input to the simulation algorithm. Results from a simulation example show that this efficient method can effectively capture the spatial patterns of the target variable and generate all classes fairly. Because interclass dependencies are incorporated in the simulation algorithm, simulated realizations are relatively imitative of each other in their patterns, and large-scale patterns are well reproduced. Spatial uncertainty is visualized as occurrence probability maps, and transition zones between classes are demonstrated by maximum occurrence probability maps. Transiogram analysis shows that the algorithm can reproduce the spatial structure of multinomial classes described by transiograms, with some ergodic fluctuations. A special characteristic of the method is that when simulation is conditioned on a number of sample points, simulated transiograms tend to follow the experimental ones, which implies that conditioning sample data play a crucial role in determining the spatial patterns of multinomial classes. The efficient algorithm may provide a powerful tool for large-scale structure simulation and spatial uncertainty analysis of discrete spatial variables.
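A minimal experimental-transiogram estimator along a 1-D sequence of class labels (the fixed-path MCRF simulation step with four orthogonal neighbors is not shown):

```python
import numpy as np

def transiogram(classes, class_i, class_j, max_lag):
    """Experimental transiogram p_ij(h): probability of class j at lag h
    given class i at the origin, along a 1-D sequence."""
    classes = np.asarray(classes)
    probs = []
    for h in range(1, max_lag + 1):
        heads, tails = classes[:-h], classes[h:]   # pairs separated by lag h
        from_i = heads == class_i
        probs.append(np.mean(tails[from_i] == class_j) if from_i.any()
                     else np.nan)
    return np.array(probs)
```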

18.
张鲁渝  张建民 《岩土力学》2006,27(11):1902-1908
The Monte Carlo search technique proposed by Abdallah I. Husein et al. is improved in four ways: (1) several geometric admissibility conditions are added; (2) a mechanism is added to prevent coincident nodes; (3) upper and lower bounds are set on the rotation angle of each slip-surface segment, so that the method also applies to convex-upward slip surfaces; and (4) a node-count adjustment mechanism is added so that the critical slip surface found is smoother. Worked examples show that the improved algorithm retains the advantages of the original method while being more practical, and the automatic search for the critical slip surface becomes more reliable and stable. The algorithm has been incorporated into the in-house ZSlope slope stability analysis software.
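A hedged sketch of the random-search skeleton with two of the admissibility ideas (monotonic, non-coincident nodes and bounded segment rotation); the factor-of-safety function `fos` is an assumed black box, and the actual ZSlope implementation details are not public:

```python
import numpy as np

def search_critical_surface(fos, x0, x1, n_nodes=8, n_trials=5000,
                            max_rot=np.radians(25), seed=0):
    """Monte Carlo search for the slip surface minimizing fos(surface)."""
    rng = np.random.default_rng(seed)
    best, best_fos = None, np.inf
    for _ in range(n_trials):
        xs = np.sort(rng.uniform(x0, x1, n_nodes))      # monotonic in x
        if np.min(np.diff(xs)) < 1e-3 * (x1 - x0):      # reject coincident nodes
            continue
        angle = rng.uniform(-max_rot, 0.0)              # initial dip
        ys = [0.0]
        for dx in np.diff(xs):
            angle += rng.uniform(0.0, max_rot / n_nodes)  # bounded rotation
            ys.append(ys[-1] + dx * np.tan(angle))
        surface = np.column_stack([xs, ys])
        f = fos(surface)                                # assumed FoS evaluator
        if f < best_fos:
            best, best_fos = surface, f
    return best, best_fos
```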

19.
Stochastic simulation techniques that do not depend on a back-transform step to reproduce a prior marginal cumulative distribution function (cdf) may lead to deviations from that distribution that are deemed unacceptable. This paper presents an algorithm to postprocess simulated realizations, or any spatial distribution, to reproduce the target cdf in the case of continuous variables, or the target proportions in the case of categorical variables, while still honoring the conditioning data. Validations conducted for both continuous and categorical cases show that, by adjusting the value of a correction-level parameter, the target cdf or proportions can be well reproduced without significant modification of the spatial correlation patterns of the original simulated realizations.
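A quantile-mapping sketch of such a postprocessor with a correction-level parameter `level` that blends the realization toward the target distribution and then re-freezes the conditioning data; the published algorithm's exact blending rule is an assumption here:

```python
import numpy as np

def correct_cdf(realization, target_quantile_fn, level, data_mask):
    """Blend a realization toward a target cdf; level in [0, 1]."""
    flat = realization.ravel()
    ranks = flat.argsort().argsort()                 # ranks 0 .. n-1
    p = (ranks + 0.5) / flat.size
    mapped = target_quantile_fn(p)                   # values with the target cdf
    corrected = ((1 - level) * flat + level * mapped).reshape(realization.shape)
    corrected[data_mask] = realization[data_mask]    # honor conditioning data
    return corrected
```

With `level=1` the output histogram matches the target exactly (before the conditioning data are re-frozen); intermediate values trade histogram reproduction against preservation of the original spatial pattern.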

20.
To remove power-line (50 Hz) interference from magnetotelluric (MT) sounding data, a blind source separation algorithm based on DWT-EEMD is proposed. It exploits the favorable properties of the discrete wavelet transform (DWT), ensemble empirical mode decomposition (EEMD), and blind source separation, applying blind source separation after the DWT and EEMD processing stages to suppress the noise. The main advantages of the method are the DWT-EEMD model and the introduction of an adaptive weight factor, which reduce the amplitude uncertainty inherent in signals recovered by independent component analysis while still separating the original signal well even when the amplitude of the power-line interference greatly exceeds that of the original signal. Processing of measured magnetotelluric signals shows that the method makes both the apparent resistivity and phase curves smooth and stable, effectively removing the power-line interference from the MT data.
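A hedged sketch of two of the three stages using PyWavelets and scikit-learn's FastICA; the EEMD stage and the adaptive weight factor are omitted, and pairing the measured channel with a synthetic 50 Hz reference for the separation is an assumption of this sketch:

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def dwt_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of a DWT decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

def separate_powerline(mt_signal, reference_50hz):
    """Blind source separation of an MT channel and a 50 Hz reference."""
    X = np.column_stack([dwt_denoise(mt_signal), reference_50hz])
    sources = FastICA(n_components=2, random_state=0).fit_transform(X)
    return sources   # one column ~ MT signal, the other ~ power-line noise
```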
