Similar Documents
20 similar documents found.
1.
Precipitation is one of the main components of the hydrological cycle, and knowledge of its spatial distribution is fundamental for the prediction of other closely related environmental variables, for example, runoff, flooding and aquifer recharge. Most of the precipitation in Mexico City is due to convective storms characterized by high spatial variability, which makes modeling its behavior very complex. In this work, stochastic simulation techniques with a geostatistical approach were applied to model the spatial variability of the rainfall of three convective storms. The analysis of the results shows that the proposed methodology yields spatial distributions of rain that reproduce the statistical characteristics present in the available information.
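The core idea of geostatistical stochastic simulation, drawing fields whose statistics honour a chosen covariance model, can be sketched as follows. This is a minimal, hypothetical illustration (Cholesky-based unconditional Gaussian simulation with invented parameters), not the storm data or code of the study above:

```python
import numpy as np

def exponential_cov(coords, sill=1.0, corr_range=10.0):
    """Exponential covariance matrix for a set of 2-D coordinates."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-d / corr_range)

def simulate_gaussian_field(coords, mean=0.0, sill=1.0, corr_range=10.0, seed=0):
    """Draw one unconditional realisation of a stationary Gaussian random field."""
    C = exponential_cov(coords, sill, corr_range)
    # Small jitter keeps the Cholesky factorisation numerically stable
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))
    rng = np.random.default_rng(seed)
    return mean + L @ rng.standard_normal(len(coords))

# 20 x 20 grid of hypothetical rain-gauge locations (arbitrary units)
xs, ys = np.meshgrid(np.arange(20.0), np.arange(20.0))
coords = np.column_stack([xs.ravel(), ys.ravel()])
field = simulate_gaussian_field(coords, mean=5.0, sill=2.0, corr_range=5.0)
```

Each call with a different seed gives a different equally probable realisation, which is how simulation (unlike kriging) reproduces the variability of the data rather than smoothing it.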

2.
In this study, we focus on a hydrogeological inverse problem, specifically monitoring soil moisture variations using tomographic ground penetrating radar (GPR) travel time data. Technical challenges exist in the inversion of GPR tomographic data in handling the non-uniqueness, nonlinearity and high dimensionality of the unknowns. We have developed a new method for estimating soil moisture fields from crosshole GPR data. It uses a pilot-point method to provide a low-dimensional representation of the relative dielectric permittivity field of the soil, which is the primary object of inference: the field can be converted to soil moisture using a petrophysical model. We integrate a multi-chain Markov chain Monte Carlo (MCMC)–Bayesian inversion framework with the pilot-point concept, a curved-ray GPR travel time model, and a sequential Gaussian simulation algorithm to estimate the dielectric permittivity at pilot-point locations distributed within the tomogram, as well as the corresponding geostatistical parameters (i.e., spatial correlation range). We infer the dielectric permittivity as a probability density function, thus capturing the uncertainty in the inference. The multi-chain MCMC enables addressing the high-dimensional inverse problems required by the inversion setup. The method is scalable in the number of chains and processors, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. The proposed inversion approach successfully approximates the posterior density distributions of the pilot points and captures the true values. The computational efficiency, accuracy, and convergence behavior of the inversion approach were also systematically evaluated by comparing the inversion results obtained with different levels of noise in the observations, increased amounts of observational data, and increased numbers of pilot points.

3.
Prestack seismic inversion based on the MCMC method
The Markov chain Monte Carlo (MCMC) method is a heuristic global optimization algorithm [1]. Within a Bayesian framework, it uses available data as constraints, so that the optimal solution honours the statistical properties of the parameters, while the incorporated prior information improves the accuracy of the solution; the search can escape local optima and reach the global optimum. With the MCMC method, a large number of samples from the posterior probability distribution can be obtained, yielding not only an estimate of each unknown parameter but also...
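As a concrete illustration of the sampling idea described in this abstract (not the authors' seismic implementation), a minimal random-walk Metropolis sampler for a one-dimensional toy posterior, with an arbitrarily chosen Gaussian prior and Gaussian likelihood, looks like this:

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis: a minimal special case of MCMC sampling."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy posterior: N(0, 2^2) prior times N(3, 1) likelihood -> posterior N(2.4, 0.8)
def log_post(m):
    return -0.5 * (m / 2.0)**2 - 0.5 * (m - 3.0)**2

chain = metropolis(log_post, x0=0.0, n_steps=20000)
posterior_mean = chain[5000:].mean()   # discard burn-in
```

The retained samples approximate the full posterior, so any statistic (mean, variance, credible intervals) can be read off the chain rather than from a single point estimate.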

4.
Geostatistical seismic inversion methods are routinely used in reservoir characterisation studies because of their potential to infer the spatial distribution of the petro‐elastic properties of interest (e.g., density, elastic, and acoustic impedance) along with the associated spatial uncertainty. Within the geostatistical seismic inversion framework, the retrieved inverse elastic models are conditioned by a global probability distribution function and a global spatial continuity model as estimated from the available well‐log data for the entire inversion grid. However, the spatial distribution of the real subsurface elastic properties is complex, heterogeneous, and, in many cases, non‐stationary since they directly depend on the subsurface geology, i.e., the spatial distribution of the facies of interest. In these complex geological settings, the application of a single distribution function and a spatial continuity model is not enough to properly model the natural variability of the elastic properties of interest. In this study, we propose a three‐dimensional geostatistical inversion technique that is able to incorporate the reservoir's heterogeneities. This method uses a traditional geostatistical seismic inversion conditioned by local multi‐distribution functions and spatial continuity models under non‐stationary conditions. The procedure of the proposed methodology is based on a zonation criterion along the vertical direction of the reservoir grid. Each zone can be defined by conventional seismic interpretation, with the identification of the main seismic units and significant variations of seismic amplitudes. The proposed method was applied to a highly non‐stationary synthetic seismic dataset with different levels of noise. The results of this work clearly show the advantages of the proposed method against conventional geostatistical seismic inversion procedures. 
The impact of this technique is notable in the higher convergence between real and inverted reflection seismic data and in the more realistic approximation of the true subsurface geology compared with traditional techniques.

5.
The assessment of soil heavy metal contamination and the quantification of its sources and spatial extent represent a serious challenge to environmental scientists and engineers. To date, statistical and spatial analysis tools have been used successfully to assess the amount and spatial distribution of soil contamination. However, these techniques require vast numbers of samples and a good historical record of the study area. Furthermore, they cannot be applied in cases of complex or poorly recorded contamination and provide only a qualitative assessment of the pollution sources. The author has developed a methodology that combines statistical and geostatistical analysis tools with geographic information systems for the quantitative and spatial assessment of contamination sources. This paper focuses on the techniques that may be employed to explore the structure of a soil data set. Soil contamination data from the Lavrio old mine site in Greece were used to illustrate the methodology. Through the research, it was found that principal component and factor analysis tools delineate the principal processes that drive pollution distribution. However, these tools alone cannot resolve the spatial assessment and quantification of multiple pollution sources. This aspect is explored in detail in the second paper of the series, which focuses on the exploitation of principal component and factor analysis results as inputs for canonical correlation, geostatistical analysis and geographic information systems tools.
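The principal component step described above can be sketched on synthetic data: here two hypothetical "pollution processes" drive five invented concentration variables, and PCA (via SVD of the centred data) recovers the fact that two components explain most of the variance. All names and loadings are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Two hypothetical pollution processes driving five invented "metal" variables
source_a = rng.standard_normal(n)
source_b = rng.standard_normal(n)
noise = lambda: 0.1 * rng.standard_normal(n)
X = np.column_stack([
    2.0 * source_a + noise(),      # Pb-like variable (process A)
    1.5 * source_a + noise(),      # Zn-like variable (process A)
    1.8 * source_b + noise(),      # As-like variable (process B)
    1.2 * source_b + noise(),      # Cd-like variable (process B)
    0.5 * rng.standard_normal(n),  # background-only variable
])

# PCA via SVD of the centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
```

Inspecting the rows of `Vt` (the loadings) shows which variables move together, which is the sense in which PCA "delineates the principal processes" even though it cannot, by itself, place the sources in space.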

6.
Site effects in Mexico City are discussed in terms of simple 1D, one-layer, linear models. The analysis is focussed on two parameters: dominant period and maximum amplification relative to a firm site within the city. The data used are a compilation of strong motion data and microtremor measurements. Strong motion data consist of digital acceleration records for nine events recorded by the Accelerographic Network of Mexico City. The authors analyzed spectral ratios of horizontal components of soft soil sites relative to an average of firm site observations for this data set. Dominant period, maximum relative amplification and an estimate of material damping were computed from the empirical transfer functions thus obtained. Microtremor data were compiled from measurements by different groups during the period 1985–1992. In all, 409 measurement points were analyzed. Values of dominant period obtained from microtremor measurements are in excellent agreement with those obtained from empirical transfer functions for strong motion data. The synthesis of results allows us to draw a detailed and robust map of dominant period for Mexico City. Based on this map, the authors propose some modifications to the current microzonation of Mexico City and evaluate a proposed model to account for site effects in this city.
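A sketch of the empirical-transfer-function idea: compute the spectral ratio of a synthetic soft-site record to a firm-site record and read off the dominant period. The one-layer resonance parameters below are entirely hypothetical, not the Mexico City values of the study:

```python
import numpy as np

dt, n = 0.01, 4096
freqs = np.fft.rfftfreq(n, dt)

# Hypothetical one-layer transfer function: resonance at f0 = 0.5 Hz (2 s period);
# peak amplification is roughly 1/(2*damping) = 10
f0, damping = 0.5, 0.05
H = 1.0 / np.sqrt((1.0 - (freqs / f0)**2)**2 + (2.0 * damping * freqs / f0)**2)

rng = np.random.default_rng(0)
firm = rng.standard_normal(n)                   # synthetic firm-site record
soft = np.fft.irfft(np.fft.rfft(firm) * H, n)   # synthetic soft-site record

# The empirical spectral ratio soft/firm recovers |H|; its peak gives the
# dominant period of the soft site
ratio = np.abs(np.fft.rfft(soft)) / np.abs(np.fft.rfft(firm))
dominant_period = 1.0 / freqs[np.argmax(ratio)]
```

With real records the ratio is noisy and is usually smoothed and averaged over events, but the dominant period is still taken from the peak of the ratio in the same way.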

7.
The Markov chain Monte Carlo (MCMC) method is a heuristic global optimization algorithm that can be used to solve probabilistic inversion problems. MCMC-based inversion does not depend on an accurate initial model and can incorporate arbitrarily complex prior information; by sampling the prior probability density function, a large number of samples of the posterior probability distribution are obtained, and the search can escape local optima to reach the global optimum. Because of its enormous computational cost, the MCMC method is difficult to apply; in...

8.
In order to understand and simulate site effects on strong ground motion records of recent earthquakes in Mexico City, it is fundamental to determine the in situ elastic and anelastic properties of the shallow stratigraphy of the basin. The main properties of interest are the shear wave velocities and Q-quality factors and their correlation with similar parameters in zones of the city. Despite population density and paved surfaces, it is feasible to gather shallow refraction data to obtain laterally homogeneous subsoil structures at some locations. We focused our analysis on the Texcoco Lake region of the northeastern Mexico City basin. This area consists of unconsolidated clay sediments, similar to those of the lake bed zone in Mexico City, where ground motion amplification and long duration disturbances are commonly observed. We recorded Rayleigh and Love waves using explosive and sledgehammer sources and 4.5 Hz vertical and horizontal geophones, respectively. Additionally, for the explosive source, we recorded three-component seismograms using 1 Hz seismometers. We obtained phase velocity dispersion curves from ray parameter-frequency domain analyses and inverted them for the vertical distribution of S wave velocity. The initial model was obtained from a standard first-break refraction analysis. We also obtained an estimate of the QS shear wave quality factor for the uppermost stratigraphy. Results compare well with tilt and cone penetrometer resistance measurements at the same test site, emphasizing the importance of these studies for engineering purposes.

9.
Sudden water pollution accidents in surface waters occur with increasing frequency. These accidents significantly threaten people's health and lives. To prevent the diffusion of pollutants, it is necessary to identify the pollution sources. Identifying pollution sources, especially multi-point sources, is one of the difficult problems in the inverse-problem area. This study examines this issue. A new method is designed by combining the differential evolution algorithm (DEA) and Metropolis–Hastings–Markov chain Monte Carlo (MH–MCMC) based on Bayesian inference to identify multi-point sudden water pollution sources. The effectiveness and accuracy of the proposed method are verified through outdoor experiments and comparison with DEA and MH–MCMC alone. The average absolute errors of the sources' position and intensity, the relative error and the average standard deviations obtained using the proposed method are less than those of DEA and MH–MCMC. Moreover, the relative error and the sampling relative error under four different standard deviations of measurement error (σ = 0.01, 0.05, 0.1, 0.15) are less than 2 % and 0.11 %, respectively. The proposed method (i.e., DEMH–MCMC) is effective even when the standard deviation of the measurement error increases to 0.15. Therefore, the proposed method can identify sources of multi-point sudden water pollution accidents efficiently and accurately.
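The differential-evolution half of such a source-identification scheme can be sketched on a toy one-dimensional river: a hypothetical instantaneous point source with known advection and dispersion, whose mass and location are recovered from noise-free synthetic observations. This is a deliberately simplified single-source stand-in for the outdoor multi-point experiments above, with all parameter values invented:

```python
import numpy as np

def river_concentration(x, t, mass, x0, D=1.0, u=0.5):
    """1-D instantaneous point-source solution of the advection-dispersion equation."""
    return mass / np.sqrt(4.0 * np.pi * D * t) * np.exp(-(x - x0 - u * t)**2 / (4.0 * D * t))

# Noise-free synthetic observations from a "true" source: mass 5 released at x0 = 2
xs = np.linspace(0.0, 20.0, 50)
obs = river_concentration(xs, t=10.0, mass=5.0, x0=2.0)

def misfit(theta):
    mass, x0 = theta
    return np.sum((river_concentration(xs, 10.0, mass, x0) - obs)**2)

def differential_evolution(f, bounds, pop=30, gens=200, F=0.7, CR=0.9, seed=0):
    """A bare-bones DE/rand/1/bin optimiser (illustrative, not tuned)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = lo + rng.random((pop, len(bounds))) * (hi - lo)
    cost = np.array([f(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], size=3, replace=False)
            a, b, c = P[idx]
            # Binomial crossover between the mutant a + F*(b - c) and the target
            trial = np.where(rng.random(len(bounds)) < CR, a + F * (b - c), P[i])
            trial = np.clip(trial, lo, hi)
            f_trial = f(trial)
            if f_trial <= cost[i]:
                P[i], cost[i] = trial, f_trial
    return P[np.argmin(cost)]

best_mass, best_x0 = differential_evolution(misfit, bounds=[(0.1, 20.0), (0.0, 10.0)])
```

In the paper's hybrid, an optimiser of this kind seeds an MH–MCMC stage, which then supplies uncertainty estimates that a point optimiser alone cannot.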

10.
Forecasting of space–time groundwater level is important for sparsely monitored regions. Time series analysis using soft computing tools is powerful for temporal data analysis, while classical geostatistical methods provide the best estimates of spatial data. In the present work, a hybrid framework for space–time groundwater level forecasting is proposed by combining a soft computing tool with a geostatistical model. Three time series forecasting models, namely artificial neural network, least square support vector machine and genetic programming (GP), are individually combined with the geostatistical ordinary kriging model. The experimental variogram thus obtained fits a linear combination of a nugget effect model and a power model. The efficacy of the space–time models was assessed using both visual interpretation (spatial maps) and calculated error statistics. It was found that the GP–kriging space–time model gave the most satisfactory results in terms of average absolute relative error, root mean square error, normalized mean bias error and normalized root mean square error.
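The ordinary-kriging building block of such a hybrid can be sketched as follows. The well locations, water levels, and exponential covariance parameters are invented for illustration; the study itself fitted a nugget-plus-power variogram to its own data:

```python
import numpy as np

def ordinary_kriging(obs_xy, obs_z, grid_xy, nugget=0.0, sill=1.0, corr_range=5.0):
    """Ordinary kriging with an exponential covariance model (a simple sketch)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / corr_range)
    n = len(obs_xy)
    # Kriging system with a Lagrange row/column enforcing weights that sum to 1
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(obs_xy, obs_xy) + nugget * np.eye(n)
    K[n, :], K[:, n], K[n, n] = 1.0, 1.0, 0.0
    k = np.vstack([cov(obs_xy, grid_xy), np.ones((1, len(grid_xy)))])
    w = np.linalg.solve(K, k)
    return w[:n].T @ obs_z

# Hypothetical groundwater levels at four wells on a 10 x 10 square
wells = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
levels = np.array([100.0, 102.0, 101.0, 103.0])
est = ordinary_kriging(wells, levels, np.array([[5.0, 5.0], [0.0, 0.0]]))
```

With zero nugget the estimator interpolates exactly at the wells, and at the symmetric centre point it returns the arithmetic mean of the four levels; in the hybrid framework the time series model supplies the per-well forecasts that kriging then spreads over space.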

11.
Prestack geostatistical inversion combines stochastic simulation with prestack inversion; it can not only invert for various reservoir elastic parameters but also improves the resolution of the inversion results. The direct sequential co-simulation method based on joint probability distributions can simulate data in the original data domain without requiring a Gaussian transform, which extends the scope of application of geostatistical inversion, while the use of joint probability distributions ensures the correlation between inverted parameters and improves inversion accuracy. This paper combines direct sequential co-simulation based on joint probability distributions with a Monte Carlo sampling algorithm and, following a global stochastic inversion strategy, proposes a globally iterative geostatistical inversion method based on Monte Carlo optimization. To improve the stability of the inversion, we modify the formula for the local correlation coefficient, proposing a new, objective-function-based optimized local correlation coefficient that is applied in the co-simulation. Model tests and a field-data application show that the method performs well in prestack inversion.

12.
Due to the rapidly increasing availability and diversity of information sources in the environmental sciences, there is a real need for sound statistical mapping techniques that can use them jointly within a single theoretical framework. As these information sources may vary with respect to their nature (continuous vs. categorical or qualitative), their spatial density and their intrinsic quality (soft vs. hard data), the design of such techniques is a challenging issue. In this paper, an efficient method for combining spatially non-exhaustive categorical and continuous data in a mapping context is proposed, based on the Bayesian maximum entropy paradigm. This approach relies first on the definition of a mixed random field that can account for a stochastic link between categorical and continuous random fields through the use of a cross-covariance function. When incorporating general knowledge about the first- and second-order moments of these fields, it is shown that, under mild hypotheses, their joint distribution can be expressed as a mixture of conditional Gaussian prior distributions, whose parameters can be estimated by entropy maximization. A posterior distribution that incorporates the various (soft or hard) continuous and categorical data at hand can then be obtained by a straightforward conditionalization step. The use and potential of the method are illustrated by way of a simulated case study. A comparison with a few common geostatistical methods in some limit cases also emphasizes their similarities and differences, from both the theoretical and practical viewpoints. As expected, adding categorical information may significantly improve the spatial prediction of a continuous variable, making this approach powerful and very promising.

13.
In this paper we combine a multiscale data integration technique introduced in [Lee SH, Malallah A, Datta-Gupta A, Higdon D. Multiscale data integration using Markov random fields. SPE Reservoir Evaluat Eng 2002;5(1):68–78] with upscaling techniques for spatial modeling of permeability. The main goal of this paper is to find fine-scale permeability fields based on coarse-scale permeability measurements. The approach introduced in the paper is hierarchical, and the conditional information from different length scales is incorporated into the posterior distribution using a Bayesian framework. Because of the complicated structure of the posterior distribution, Markov chain Monte Carlo (MCMC) based approaches are used to draw samples of the fine-scale permeability field.

14.
This paper proposes a non-parametric method of classification of maps (i.e., variable fields such as wave energy maps for the Western Mediterranean Sea) into a set of D typical regimes (calm, E-, SW- or N/NW-wind dominated storms, the 4 synoptic situations more often occurring in this region). Each map in the training set is described by its values at P measurement points and one of these regime classes. A map is thus identified as a labelled point in a P-dimensional feature space, and the problem is to find a discrimination rule that may be used for attaching a classification probability to future unlabelled maps. The discriminant model proposed assumes that some log-contrasts of these classification probabilities form a Gaussian random field on the feature space. Then, available data (labelled maps of the training set) are linked to these latent probabilities through a multinomial model. This model is quite common in model-based Geostatistics and the Gaussian process classification literature. Inference is here approximated numerically using likelihood based techniques. The multinomial likelihood of labelled features is combined in a Bayesian updating with the Gaussian random field, playing the role of prior distribution. The posterior corresponds to an Aitchison distribution. Its maximum posterior estimates are obtained in two steps, exploiting several properties of this family. The first step is to obtain the mode of this distribution for labelled features, by solving a mildly non-linear system of equations. The second step is to propagate these estimates to unlabelled features, with simple kriging of log-contrasts. These inference steps can be extended via Markov-chain Monte Carlo (MCMC) sampling to a hierarchical Bayesian problem. This MCMC sampling can be improved by further exploiting the Aitchison distribution properties, though this is only outlined here. 
Results for the application case study suggest that E- and N/NW-dominated storms can be successfully discriminated from calm situations, but not so easily distinguished from each other.

15.
Kil Seong Lee, Sang Ug Kim. Hydrological Processes, 2008, 22(12): 1949–1964
This study employs the Bayesian Markov chain Monte Carlo (MCMC) method with the Metropolis–Hastings algorithm and maximum likelihood estimation (MLE) using a quadratic approximation of the likelihood function to evaluate uncertainties in low flow frequency analysis using a two-parameter Weibull distribution. Two types of prior distributions, a non-data-based distribution and a data-based distribution using regional information collected from neighbouring stations, are used to establish a posterior distribution. Eight case studies using synthetic data with a sample size of 100, generated from a two-parameter Weibull distribution, are performed to compare the results of analyses using MLE and Bayesian MCMC. Also, Bayesian MCMC and MLE are applied to 36 years of gauged data to validate the efficiency of the developed scheme. These examples illustrate the advantages of Bayesian MCMC and the limitations of MLE based on a quadratic approximation. From the point of view of uncertainty analysis, Bayesian MCMC is more effective than MLE using a quadratic approximation when the sample size is small. In particular, the Bayesian MCMC method is more attractive than MLE based on a quadratic approximation because the sample size of low flows at the site of interest is usually not large enough to perform low flow frequency analysis. Copyright © 2007 John Wiley & Sons, Ltd.
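The MLE half of the comparison can be sketched for the two-parameter Weibull. This is a generic textbook profile-likelihood fit on synthetic data (shape 2, scale 3 chosen arbitrarily), not the study's gauged low-flow records:

```python
import numpy as np

def weibull_mle(x, tol=1e-10):
    """MLE for the two-parameter Weibull: bisection on the profile score for the shape."""
    lx = np.log(x)
    def score(k):  # monotone increasing in k; its root is the shape MLE
        xk = x ** k
        return np.sum(xk * lx) / np.sum(xk) - 1.0 / k - lx.mean()
    lo, hi = 1e-3, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    shape = 0.5 * (lo + hi)
    # Given the shape, the scale MLE has a closed form
    scale = np.mean(x ** shape) ** (1.0 / shape)
    return shape, scale

rng = np.random.default_rng(42)
flows = 3.0 * rng.weibull(2.0, size=5000)   # synthetic "low flow" sample
shape_hat, scale_hat = weibull_mle(flows)
```

The point estimate alone carries no uncertainty statement; the quadratic (normal) approximation around it is exactly what the study finds inadequate for small samples compared with full Bayesian MCMC.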

16.
Iterative posterior inference for Bayesian Kriging
We propose a method for estimating the posterior distribution of a standard geostatistical model. After choosing the model formulation and specifying a prior, we use normal mixture densities to approximate the posterior distribution. The approximation is improved iteratively. Some difficulties in estimating the normal mixture densities, including determining tuning parameters concerning bandwidth and localization, are addressed. The method is applicable to other model formulations as long as all the parameters, or transforms thereof, are defined on the whole real line, $(-\infty, \infty).$ Ad hoc treatments in the posterior inference such as imposing bounds on an unbounded parameter or discretizing a continuous parameter are avoided. The method is illustrated by two examples, one using digital elevation data and the other using historical soil moisture data. The examples in particular examine convergence of the approximate posterior distributions in the iterations.

17.
Exposure estimation using repeated blood concentration measurements
Physiologically based toxicokinetic (PBTK) modeling is well established for studying the distributions of chemicals in target tissues. In addition, the hierarchical Bayesian statistical approach using Markov chain Monte Carlo (MCMC) simulations has been applied successfully for parameter estimation. The aim was to estimate an assumed-constant inhalation exposure concentration from repeated measurements in venous blood using a PBTK model, so that exposures could be reconstructed. By treating the constant exterior exposure as an unknown parameter of a four-compartment PBTK model, we applied MCMC simulations to estimate the exposure based on a hierarchical Bayesian approach. A dataset on 16 volunteers exposed to 100 ppm (≅0.538 mg/L) trichloroethylene vapors for 4 h was reanalyzed as an illustration. Cases of time-dependent exposures with a constant mean were also studied via 100 simulated datasets. The posterior geometric mean of 0.571, with a narrow 95% posterior confidence interval (CI) of (0.506, 0.645), estimated the true trichloroethylene inhalation concentration (0.538 mg/L) with very high precision. The proposed method also estimated the overall constant mean of the simulated time-dependent exposure scenarios well, with slightly wider 95% CIs. The proposed method demonstrates numerically, from a real dataset and simulation studies, the accuracy of exposure estimation from biomonitoring data using a PBTK model and MCMC simulations, and provides a starting point for future applications in occupational exposure assessment.

18.
Markov chain Monte Carlo (MCMC) methods are often used to probe the posterior probability distribution in inverse problems. This allows for computation of estimates of uncertain system responses conditioned on given observational data by means of approximate integration. However, MCMC methods suffer from computational complexity in the case of expensive models, such as subsurface flow models. Hence, it is of great interest to develop alternative, efficient methods utilizing emulators that are cheap to evaluate, in order to replace the full physics simulator. In the current work, we develop a technique based on sparse response surfaces to represent the flow response within a subsurface reservoir and thus enable efficient exploration of the posterior probability density function and the conditional expectations given the data.
Polynomial chaos expansion (PCE) is a powerful tool to quantify uncertainty in dynamical systems when there is probabilistic uncertainty in the system parameters. In the context of subsurface flow models, it has been shown to be more accurate and efficient than traditional experimental design (ED). PCEs have a significant advantage over other response surfaces because convergence to the true probability distribution as the order of the PCE is increased can be proved for random variables with finite variances. However, the major drawback of PCE is the curse of dimensionality, as the number of terms to be estimated grows drastically with the number of input random variables. This renders the computational cost of classical PCE schemes unaffordable for reservoir simulation purposes when the deterministic finite element model is expensive to evaluate. To address this issue, we propose a reduced-terms polynomial chaos representation which uses an impact factor to retain only the most relevant terms of the PCE decomposition.
Accordingly, the reduced-terms polynomial chaos proxy can be used as a pseudo-simulator for efficient sampling of the probability density function of the uncertain variables. The reduced-terms PCE is evaluated on a two-dimensional subsurface flow model with fluvial channels to demonstrate that, with a few hundred trial runs of the actual reservoir simulator, it is feasible to construct a polynomial chaos proxy which accurately approximates the posterior distribution of the high permeability zones in analytical form. We show that the proxy precision improves as the order of the PCE increases, with a corresponding increase in the number of initial runs used to estimate the PCE coefficients.
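The reduced-terms idea can be sketched in one dimension with probabilists' Hermite polynomials: fit a PCE to a toy "simulator" by least squares, then rank terms by their variance contribution (a simple stand-in for the paper's impact factor). The target function and sample counts are invented for illustration:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Toy scalar "simulator" of one standard-normal input (a hypothetical stand-in
# for an expensive reservoir model)
def simulator(xi):
    return np.exp(0.5 * xi)

rng = np.random.default_rng(0)
xi = rng.standard_normal(2000)           # training "runs" of the simulator
order = 6
Phi = hermevander(xi, order)             # He_0 ... He_6 evaluated at the samples
coef, *_ = np.linalg.lstsq(Phi, simulator(xi), rcond=None)

pce_mean = coef[0]                       # PCE mean = He_0 coefficient; exact value exp(1/8)
# Variance contribution of each non-constant term: c_n^2 * E[He_n^2] = c_n^2 * n!
var_contrib = np.array([coef[n]**2 * math.factorial(n) for n in range(1, order + 1)])
```

Terms whose `var_contrib` falls below a threshold would be dropped from the proxy; in many input dimensions this pruning is what keeps the number of retained coefficients, and hence the number of required simulator runs, affordable.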

19.
20.
Two new algorithms are presented for efficiently selecting suites of ground motions that match a target multivariate distribution or conditional intensity measure target. The first algorithm is a Markov chain Monte Carlo (MCMC) approach in which records are sequentially added to a selected set such that the joint probability density function (PDF) of the target distribution is progressively approximated by the discrete distribution of the selected records. The second algorithm derives from the concept of the acceptance ratio within MCMC but does not involve any sampling. The first method takes advantage of MCMC's ability to efficiently explore a sampling distribution through the implementation of a traditional MCMC algorithm. This method is shown to enable very good matches to multivariate targets when the number of records to be selected is relatively large. Its weaker performance for fewer records can be circumvented by the second method, which uses greedy optimisation to impose additional constraints upon properties of the target distribution. A preselection approach based upon values of the multivariate PDF is proposed that enables near-optimal record sets to be identified with a very close match to the target. Both methods are applied to a number of response analyses associated with different sizes of record sets and rupture scenarios. Comparisons are made throughout with the Generalised Conditional Intensity Measure (GCIM) approach. The first method provides similar results to GCIM but with slightly worse performance for small record sets, while the second method outperforms the first method and GCIM in all considered cases.

