Similar Literature
20 similar records found (search time: 24 ms)
1.
This paper introduces an efficiency improvement to the sparse-grid geometric sampling methodology for assessing uncertainty in non-linear geophysical inverse problems. Traditional sparse-grid geometric sampling works by sampling in a reduced-dimension parameter space bounded by a feasible polytope, i.e., a generalization of a polygon to dimensions above two. The feasible polytope is approximated by a hypercube. When the polytope is very irregular, the hypercube can be a poor approximation, leading to computational inefficiency in sampling. We show how the polytope can be regularized using a rotation and scaling based on principal component analysis. This simple regularization increases the efficiency of the sampling and, by extension, reduces the computational cost of the uncertainty solution. We demonstrate this on two synthetic 1D examples related to controlled-source electromagnetic and amplitude-versus-offset inversion. The results show an improvement of about 50% in the performance of the proposed methodology compared with the traditional one. However, as the amplitude-versus-offset example shows, the efficiency gains of the proposed methodology are likely to depend on the shape and complexity of the original polytope. Further investigation of polytope regularization is needed to fully understand when a simple regularization step based on rotation and scaling is sufficient.
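A minimal sketch of the rotation-and-scaling idea described above (the toy point cloud and all names are illustrative, not the paper's implementation): principal component analysis rotates an elongated, tilted region onto its principal axes and rescales each axis, so that an axis-aligned hypercube fits the region more tightly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an irregular feasible region: a tilted, elongated 2-D point cloud.
pts = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts = pts @ rot.T

# PCA via SVD: rotate into principal axes, then scale each axis to unit variance.
centered = pts - pts.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
regularized = (centered @ vt.T) / (s / np.sqrt(len(pts)))

def edge_ratio(p):
    """Anisotropy of the axis-aligned bounding box (1.0 = cube-like)."""
    e = p.max(axis=0) - p.min(axis=0)
    return e.max() / e.min()

# The bounding hypercube of the regularized cloud is much less anisotropic,
# which is the source of the sampling-efficiency gain.
ratio_before = edge_ratio(centered)
ratio_after = edge_ratio(regularized)
```

The same rotation and scaling, stored once, can be applied to every candidate sample, so the extra cost per sample is a single small matrix-vector product.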

2.
Despite their apparent high dimensionality, spatially distributed hydraulic properties of geologic formations can often be compactly (sparsely) described in a properly designed basis. Hence, the estimation of high-dimensional subsurface flow properties from dynamic performance and monitoring data can be formulated and solved as a sparse reconstruction inverse problem. Recent advances in statistical signal processing, formalized under the compressed sensing paradigm, provide important guidelines on formulating and solving sparse inverse problems, primarily for linear models and using a deterministic framework. Given the uncertainty in describing subsurface physical properties, even after integration of the dynamic data, it is important to develop a practical sparse Bayesian inversion approach to enable uncertainty quantification. In this paper, we use sparse geologic dictionaries to compactly represent uncertain subsurface flow properties and develop a practical sparse Bayesian method for effective data integration and uncertainty quantification. The multi-Gaussian assumption that is widely used in classical probabilistic inverse theory is not appropriate for representing sparse prior models. Following the results presented by the compressed sensing paradigm, the Laplace (or double exponential) probability distribution is found to be more suitable for representing sparse parameters. However, combining Laplace priors with the frequently used Gaussian likelihood functions leads to neither a Laplace nor a Gaussian posterior distribution, which complicates the analytical characterization of the posterior. Here, we first express the form of the Maximum A-Posteriori (MAP) estimate for Laplace priors and then use the Monte-Carlo-based Randomized Maximum Likelihood (RML) method to generate approximate samples from the posterior distribution.
The proposed Sparse RML (SpRML) approximate sampling approach can be used to assess the uncertainty in the calibrated model with relatively modest computational complexity. We demonstrate the suitability and effectiveness of the SpRML formulation using a series of numerical experiments on two-phase flow systems with both Gaussian and non-Gaussian property distributions in petroleum reservoirs, and successfully apply the method to an adapted version of the PUNQ-S3 benchmark reservoir model.
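For a linear forward model with a Gaussian likelihood and a Laplace prior, the MAP estimate described above is an l1-regularized least-squares problem. A hedged sketch follows: ISTA (a standard proximal-gradient solver with soft thresholding) stands in for whatever solver the authors use, and the toy operator `G` and sparse model are invented for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def map_laplace(G, d, lam, n_iter=1000):
    """MAP under Laplace prior: argmin_m 0.5||d - G m||^2 + lam ||m||_1 (ISTA)."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2   # 1 / Lipschitz constant of the misfit gradient
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d)             # gradient of the quadratic data misfit
        m = soft_threshold(m - step * grad, step * lam)
    return m

rng = np.random.default_rng(1)
G = rng.normal(size=(50, 100))               # toy underdetermined forward operator
m_true = np.zeros(100)
m_true[[3, 40, 77]] = [2.0, -1.5, 1.0]       # sparse "dictionary" coefficients
d = G @ m_true
m_map = map_laplace(G, d, lam=0.5)
```

In an RML-style scheme, the same minimization would be repeated with perturbed data and prior realizations to generate approximate posterior samples.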

3.
4.
In this study, we focus on a hydrogeological inverse problem targeting the monitoring of soil moisture variations using tomographic ground penetrating radar (GPR) travel time data. Technical challenges exist in the inversion of GPR tomographic data in handling the non-uniqueness, nonlinearity and high dimensionality of the unknowns. We have developed a new method for estimating soil moisture fields from crosshole GPR data. It uses a pilot-point method to provide a low-dimensional representation of the relative dielectric permittivity field of the soil, which is the primary object of inference: the field can be converted to soil moisture using a petrophysical model. We integrate a multi-chain Markov chain Monte Carlo (MCMC) Bayesian inversion framework with the pilot-point concept, a curved-ray GPR travel time model, and a sequential Gaussian simulation algorithm to estimate the dielectric permittivity at pilot-point locations distributed within the tomogram, as well as the corresponding geostatistical parameters (i.e., the spatial correlation range). We infer the dielectric permittivity as a probability density function, thus capturing the uncertainty in the inference. The multi-chain MCMC makes it possible to address the high-dimensional inverse problems that this inversion setup requires. The method is scalable in the number of chains and processors and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. The proposed inversion approach successfully approximates the posterior density distributions of the pilot points and captures the true values. The computational efficiency, accuracy, and convergence behavior of the inversion approach were also systematically evaluated by comparing inversion results obtained with different levels of noise in the observations, increased amounts of observational data, and increased numbers of pilot points.
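A bare-bones single-chain Metropolis sketch of the kind of sampler this pilot-point framework builds on. The paper uses multi-chain MCMC, a curved-ray travel-time forward model and sequential Gaussian simulation; here a toy linear map stands in for the forward model, and the "pilot-point" values and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "travel-time" forward model acting on two pilot-point values.
forward = lambda k: np.array([k.sum(), k[0] - k[1]])
k_true = np.array([4.0, 1.0])                 # true "permittivity" at 2 pilot points
d_obs = forward(k_true) + rng.normal(0, 0.1, size=2)

def log_post(k, sigma=0.1):
    """Gaussian likelihood plus a weak Gaussian prior centred at 3."""
    misfit = d_obs - forward(k)
    return -0.5 * np.sum(misfit**2) / sigma**2 - 0.5 * np.sum((k - 3.0)**2)

k = np.array([3.0, 3.0])                      # start at the prior mean
lp = log_post(k)
samples = []
for _ in range(5000):
    prop = k + rng.normal(0, 0.2, size=2)     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        k, lp = prop, lp_prop
    samples.append(k)
samples = np.array(samples[1000:])            # discard burn-in
k_mean = samples.mean(axis=0)
```

The posterior is returned as samples rather than a single model, so each pilot point gets a full probability density, as the abstract describes.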

5.
Electrical resistivity tomography is a non-linear and ill-posed geophysical inverse problem that is usually solved through gradient-descent methods. This strategy is computationally fast and easy to implement but impedes accurate uncertainty appraisals. We present a probabilistic approach to two-dimensional electrical resistivity tomography in which a Markov chain Monte Carlo algorithm is used to numerically evaluate the posterior probability density function that fully quantifies the uncertainty affecting the recovered solution. The main drawback of Markov chain Monte Carlo approaches is the considerable number of sampled models needed to achieve accurate posterior assessments in high-dimensional parameter spaces. Therefore, to reduce the computational burden of the inversion process, we employ the differential evolution Markov chain, a hybrid of non-linear optimization and Markov chain Monte Carlo sampling that exploits multiple interacting chains to speed up the probabilistic sampling. Moreover, a discrete cosine transform reparameterization is employed to reduce the dimensionality of the parameter space, removing the high-frequency components of the resistivity model to which the data are not sensitive. In this framework, the unknown parameters become the coefficients of the retained discrete cosine transform basis functions. First, synthetic data inversions are used to validate the proposed method and to demonstrate the benefits provided by the discrete cosine transform compression. To this end, we compare the outcomes of the implemented approach with those provided by a differential evolution Markov chain algorithm running in the full, un-reduced model space. Then, we apply the method to invert field data acquired along a river embankment. The results yielded by the implemented approach are also benchmarked against a standard local inversion algorithm.
The proposed Bayesian inversion provides posterior mean models in agreement with the predictions of the gradient-based inversion, but it also provides model uncertainties, which can be used to identify penetration depth and resolution limits.
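A sketch of the dimensionality-reduction idea above (grid size, the smooth "resistivity" model, and the number of retained coefficients are all assumed for illustration): represent a 2-D model by the low-frequency coefficients of its discrete cosine transform, so a sampler explores a short coefficient vector instead of every cell.

```python
import numpy as np
from scipy.fft import dctn, idctn

nx, nz, keep = 64, 32, 8                      # grid size; retained coefficients per axis
x = np.linspace(0.0, 1.0, nx)
z = np.linspace(0.0, 1.0, nz)
# Smooth synthetic "resistivity" model (ohm-m) on an nz-by-nx grid.
model = 100.0 + 50.0 * np.outer(np.sin(np.pi * z), np.cos(np.pi * x))

coeffs = dctn(model, norm="ortho")            # 2-D DCT of the model
compressed = np.zeros_like(coeffs)
compressed[:keep, :keep] = coeffs[:keep, :keep]   # keep only low frequencies
model_rec = idctn(compressed, norm="ortho")   # back to the spatial domain

rel_err = np.linalg.norm(model_rec - model) / np.linalg.norm(model)
# A smooth model is captured by keep*keep = 64 unknowns instead of nx*nz = 2048.
```

In the inversion itself, the sampler would perturb the `keep * keep` retained coefficients and apply the inverse transform before each forward solve.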

6.
A new methodology is proposed for the development of parameter-independent reduced models for transient groundwater flow. The model reduction technique is based on Galerkin projection of a highly discretized model onto a subspace spanned by a small number of optimally chosen basis functions. We propose two greedy algorithms that iteratively select optimal parameter sets and snapshot times from the parameter space and the time domain in order to generate snapshots. The snapshots are used to build the Galerkin projection matrix, which covers the entire parameter space of the full model. We then apply the reduced model to solve two inverse problems: a deterministic inverse problem and a Bayesian inverse problem with a Markov chain Monte Carlo (MCMC) method. The proposed methodology is validated with a conceptual one-dimensional groundwater flow model. We then apply the methodology to a basin-scale, conceptual aquifer in the Oristano plain of Sardinia, Italy. Using the methodology, the full model, governed by 29,197 ordinary differential equations, is reduced by two to three orders of magnitude, resulting in a drastic reduction in computational requirements.
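A hedged sketch of snapshot-based Galerkin reduction for a linear ODE system h' = -B h. The paper's greedy selection of parameters and snapshot times is replaced here by a fixed snapshot set, and the diagonal operator is a stand-in for a discretized flow operator: collect snapshots, extract a small basis by SVD, and project the operator onto it.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 200                                       # full-model dimension
B = np.diag(np.linspace(1.0, 5.0, n))         # stand-in for the (diagonalized) flow operator
h0 = rng.normal(size=n)                       # initial heads

# Snapshots of the full solution h(t) = exp(-B t) h0 at a few times.
times = np.linspace(0.0, 1.0, 20)
snapshots = np.column_stack([np.exp(-np.diag(B) * t) * h0 for t in times])

# POD basis: left singular vectors capturing 99.99% of snapshot energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
Phi = U[:, :r]                                # reduced basis, r << n

B_r = Phi.T @ B @ Phi                         # Galerkin-projected operator (r x r)

# Compare full and reduced solutions at an intermediate time.
t = 0.5
h_full = np.exp(-np.diag(B) * t) * h0
h_red = Phi @ (expm(-B_r * t) @ (Phi.T @ h0))
rel_err = np.linalg.norm(h_red - h_full) / np.linalg.norm(h_full)
```

Every subsequent forward solve inside an MCMC loop then costs an r-dimensional matrix exponential instead of an n-dimensional one, which is where the two-to-three order-of-magnitude savings come from.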

7.
How can spatially explicit nonlinear regression modelling be used for obtaining nonpoint source loading estimates in watersheds with limited information? What is the value of additional monitoring, and where should future data-collection efforts focus? In this study, we address two frequently asked questions in watershed modelling by implementing Bayesian inference techniques to parameterize SPAtially Referenced Regressions On Watershed attributes (SPARROW), a model that empirically estimates the relation between in-stream measurements of nutrient fluxes and the sources/sinks of nutrients within the watershed. Our case study is the Hamilton Harbour watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. The proposed Bayesian approach explicitly accounts for the uncertainty associated with the existing knowledge from the system and the different types of spatial correlation typically underlying the parameter estimation of watershed models. Informative prior parameter distributions were formulated to overcome the problem of inadequate data quantity and quality, whereas the potential bias introduced by the pertinent assumptions is subsequently examined by quantifying the relative change of the posterior parameter patterns. Our modelling exercise offers the first estimates of export coefficients and delivery rates from the different subcatchments and thus generates testable hypotheses regarding the nutrient export 'hot spots' in the studied watershed. Despite substantial uncertainties characterizing our calibration dataset, ranging from 17% to nearly 400%, we arrived at an uncertainty level for the whole-basin nutrient export estimates of only 36%. Finally, we conduct modelling experiments that evaluate the potential improvement of the model parameter estimates and the decrease of the predictive uncertainty if the uncertainty associated with the current nutrient loading estimates is reduced.
Copyright © 2012 John Wiley & Sons, Ltd.

8.
Finding an operational parameter vector is always challenging in the application of hydrologic models: over-parameterization and limited information from observations lead to uncertainty about the best parameter vectors. It is therefore beneficial to find every possible behavioural parameter vector. This paper presents a new methodology, the patient rule induction method for parameter estimation (PRIM-PE), to determine where the behavioural parameter vectors are located in the parameter space. PRIM-PE was used to discover all regions of the parameter space containing acceptable model behaviour. The algorithm consists of an initial sampling procedure, which generates a parameter sample that represents the response surface with a uniform distribution within the “good-enough” region (i.e., performance better than a predefined threshold), and a rule induction component (PRIM), which is then used to define the regions of the parameter space in which the acceptable parameter vectors are located. To investigate its ability in different situations, the methodology is evaluated on four test problems. The PRIM-PE sampling procedure was also compared against a Markov chain Monte Carlo sampler, the differential evolution adaptive Metropolis (DREAM(ZS)) algorithm. Finally, a spatially distributed hydrological model calibration problem with two settings (a three-parameter calibration problem and a 23-parameter calibration problem) was solved using the PRIM-PE algorithm. The results show that the PRIM-PE method captured the good-enough region in the parameter space successfully, using 8 and 107 boxes for the three-parameter and 23-parameter problems, respectively. This good-enough region can be used in a global sensitivity analysis to provide a broad range of parameter vectors that produce acceptable model performance. Moreover, for a specific objective function and model structure, the size of the boxes can be used as a measure of equifinality.
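A bare-bones sketch of PRIM-style "peeling" (a deliberate simplification of PRIM-PE; the 2-D parameter space, the rectangular "good-enough" region and the peeling fraction are all invented): starting from the whole parameter space, repeatedly trim the box face that most increases the fraction of behavioural samples inside the box.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(size=(2000, 2))                      # uniform parameter sample
good = (X[:, 0] > 0.6) & (X[:, 1] < 0.4)             # true behavioural region

lo, hi = np.zeros(2), np.ones(2)                     # current box
alpha = 0.05                                         # fraction peeled per step
for _ in range(200):
    inside = np.all((X >= lo) & (X <= hi), axis=1)
    purity = good[inside].mean()                     # behavioural fraction in box
    if purity > 0.95 or inside.sum() < 50:
        break
    best = None
    for j in range(2):                               # try peeling each face
        for side in ("lo", "hi"):
            lo2, hi2 = lo.copy(), hi.copy()
            width = hi[j] - lo[j]
            if side == "lo":
                lo2[j] += alpha * width
            else:
                hi2[j] -= alpha * width
            ins2 = np.all((X >= lo2) & (X <= hi2), axis=1)
            if ins2.sum() == 0:
                continue
            p2 = good[ins2].mean()
            if best is None or p2 > best[0]:
                best = (p2, lo2, hi2)
    _, lo, hi = best                                 # keep the best peel
```

The final `[lo, hi]` box is one of the "boxes" the abstract counts; several boxes together cover a disconnected good-enough region, and their sizes give the equifinality measure.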

9.
In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty within the context of Monte Carlo (MC) analysis coupled with Bayesian estimation and propagation of uncertainty. Because of its flexibility, ease of implementation and its suitability for parallel implementation on distributed computer systems, the GLUE method has been used in a wide variety of applications. However, the MC based sampling strategy of the prior parameter space typically utilized in GLUE is not particularly efficient in finding behavioral simulations. This becomes especially problematic for high-dimensional parameter estimation problems, and in the case of complex simulation models that require significant computational time to run and produce the desired output. In this paper we improve the computational efficiency of GLUE by sampling the prior parameter space using an adaptive Markov Chain Monte Carlo scheme (the Shuffled Complex Evolution Metropolis (SCEM-UA) algorithm). Moreover, we propose an alternative strategy to determine the value of the cutoff threshold based on the appropriate coverage of the resulting uncertainty bounds. We demonstrate the superiority of this revised GLUE method with three different conceptual watershed models of increasing complexity, using both synthetic and real-world streamflow data from two catchments with different hydrologic regimes.
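A minimal GLUE sketch of the baseline the paper improves upon (the paper replaces the uniform Monte Carlo sampler below with the adaptive SCEM-UA sampler; the toy recession model, parameter ranges and cutoff are illustrative): sample parameters, keep "behavioural" runs above a likelihood cutoff, and form prediction bounds from their simulations.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(theta, t):
    """Toy 'watershed' model: exponential streamflow recession."""
    q0, k = theta
    return q0 * np.exp(-k * t)

t = np.linspace(0.0, 10.0, 25)
q_obs = model((5.0, 0.3), t) + rng.normal(0, 0.1, t.size)   # noisy observations

# Uniform Monte Carlo sampling of the prior parameter space (the inefficient step).
thetas = np.column_stack([rng.uniform(1.0, 10.0, 5000),
                          rng.uniform(0.05, 1.0, 5000)])
sims = np.array([model(th, t) for th in thetas])

# Nash-Sutcliffe efficiency as the informal GLUE likelihood measure.
nse = 1 - np.sum((sims - q_obs) ** 2, axis=1) / np.sum((q_obs - q_obs.mean()) ** 2)
behavioural = nse > 0.8                                     # cutoff threshold

# Prediction bounds from the behavioural ensemble (unweighted quantiles for brevity).
lower = np.quantile(sims[behavioural], 0.025, axis=0)
upper = np.quantile(sims[behavioural], 0.975, axis=0)
coverage = np.mean((q_obs >= lower) & (q_obs <= upper))
```

The paper's threshold-selection idea corresponds to adjusting the cutoff until `coverage` of the bounds is appropriate, rather than fixing the 0.8 value a priori.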

10.
Anyone working on inverse problems is aware of their ill-posed character. This concept, proposed by J. Hadamard in 1902, admits revision in the case of discrete inverse problems, since it is closely related to their ill-conditioning and to the use of local optimization methods to find their solution. A more general and interesting approach regarding risk analysis and epistemological decision making consists in analyzing the existence of families of equivalent model parameters that are compatible with the prior information and predict the observed data within the same error bounds. Put differently, the ill-posed character (ill-conditioning) of discrete inverse problems means that their solution is uncertain. Traditionally, nonlinear inverse problems in discrete form have been solved via local optimization methods with regularization, but linear analysis techniques fail to account for the uncertainty in the solution that is adopted. As a result, uncertainty analysis in nonlinear inverse problems has been approached in a probabilistic (Bayesian) framework, but these methods are hindered by the curse of dimensionality and by the high computational cost needed to solve the corresponding forward problems. Global optimization techniques are attractive, but most are heuristic and share the limitations of Monte Carlo methods. New research is needed to provide uncertainty estimates, especially for high-dimensional nonlinear inverse problems with very costly forward problems. After the discredit of deterministic methods and some initial years of Bayesian fever, the pendulum now seems to be swinging back, because practitioners are aware that uncertainty analysis in high-dimensional nonlinear inverse problems cannot (and should not) be addressed via random sampling methodologies alone.
The main reason is that the uncertainty “space” of nonlinear inverse problems has a mathematical structure that is embedded in the forward physics and in the observed data. Problems with structure should therefore be approached via linear algebra and optimization techniques. This paper provides new insights for understanding uncertainty from a deterministic point of view, a necessary step in designing more efficient methods to sample the uncertainty region(s) of equivalent solutions.

11.
We study the appraisal problem for the joint inversion of seismic and controlled-source electromagnetic (CSEM) data and utilize rock-physics models to integrate these two disparate data sets. The appraisal problem is solved by adopting a Bayesian model in which we incorporate four representative sources of uncertainty: uncertainties in (1) seismic wave velocity, (2) electric conductivity, (3) seismic data and (4) CSEM data. The uncertainties in porosity and water saturation are quantified by posterior random sampling in the model space of porosity and water saturation for a marine one-dimensional structure. We study the relative contributions of the four individual sources of uncertainty by performing several statistical experiments. The uncertainties in the seismic wave velocity and electric conductivity play a more significant role in the variation of posterior uncertainty than do the seismic and CSEM data noise. The numerical simulations also show that the uncertainty in porosity is most affected by the uncertainty in the seismic wave velocity and that the uncertainty in water saturation is most influenced by the uncertainty in electric conductivity. The framework of the uncertainty analysis presented in this study can be utilized to effectively reduce the uncertainty of the porosity and water saturation derived from the integration of seismic and CSEM data.

12.
This paper concerns efficient uncertainty quantification techniques for inverse problems in Richards' equation that use coarse-scale simulation models. We consider the problem of determining saturated hydraulic conductivity fields conditioned to some integrated response. We use a stochastic parameterization of the saturated hydraulic conductivity and sample using Markov chain Monte Carlo (MCMC) methods. The main advantage of the method presented in this paper is the use of multiscale methods within an MCMC method based on Langevin diffusion. Additionally, we discuss techniques to combine multiscale methods with stochastic solution techniques, specifically sparse-grid collocation methods. We show that the proposed algorithms dramatically reduce the computational cost associated with traditional Langevin MCMC methods while providing similar sampling performance.
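A minimal sketch of the Langevin-diffusion MCMC (MALA) the paper accelerates with multiscale coarse models. The target here is a toy 2-D Gaussian posterior rather than a Richards' equation inversion, and the step size is chosen by hand: proposals drift along the gradient of the log-posterior, with a Metropolis correction for the asymmetric proposal.

```python
import numpy as np

rng = np.random.default_rng(6)
Sigma_inv = np.array([[2.0, 0.8], [0.8, 1.0]])        # toy posterior precision

def log_p(x):
    return -0.5 * x @ Sigma_inv @ x

def grad_log_p(x):
    return -Sigma_inv @ x

eps = 0.3                                             # Langevin step size
x = np.array([3.0, -3.0])                             # deliberately bad start
samples = []
for _ in range(20000):
    mean_x = x + 0.5 * eps**2 * grad_log_p(x)         # drift toward high probability
    prop = mean_x + eps * rng.normal(size=2)
    mean_p = prop + 0.5 * eps**2 * grad_log_p(prop)
    # Langevin proposals are asymmetric, so both transition densities enter the ratio.
    log_q_fwd = -np.sum((prop - mean_x) ** 2) / (2 * eps**2)
    log_q_rev = -np.sum((x - mean_p) ** 2) / (2 * eps**2)
    if np.log(rng.uniform()) < log_p(prop) - log_p(x) + log_q_rev - log_q_fwd:
        x = prop
    samples.append(x)
samples = np.array(samples[2000:])                    # discard burn-in
cov_est = np.cov(samples.T)
```

In the paper's setting the expensive part is `grad_log_p`, which requires a forward solve; evaluating it on a coarse multiscale model is what cuts the cost.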

13.
Parameter uncertainty in hydrologic modeling is crucial to flood simulation and forecasting. The Bayesian approach allows one to estimate parameters according to prior expert knowledge as well as observational data about model parameter values. This study assesses the performance of two popular uncertainty analysis (UA) techniques, generalized likelihood uncertainty estimation (GLUE) and a Bayesian method implemented with a Markov chain Monte Carlo sampling algorithm, in evaluating model parameter uncertainty in flood simulations. The two methods were applied to the semi-distributed topographic hydrologic model TOPMODEL, which includes five parameters. A case study was carried out for a small humid catchment in southeastern China. The performance assessment of the GLUE and Bayesian methods was conducted with advanced tools suited to probabilistic simulations of continuous variables such as streamflow. Graphical tools and scalar metrics were used to test several attributes of the simulation quality of selected flood events: deterministic accuracy and the accuracy of the 95% prediction probability uncertainty band (95PPU). Sensitivity analysis was conducted to identify the sensitive parameters that most affect the model output. Subsequently, the GLUE and Bayesian methods were used to analyze the uncertainty of the sensitive parameters and to produce their posterior distributions. Based on these posterior parameter samples, TOPMODEL simulations and the corresponding UA were conducted. Results show that the form of exponential decline in conductivity and the overland flow routing velocity were the sensitive parameters of TOPMODEL in our case; small changes in these two parameters lead to large differences in flood simulation results. Results also suggest that, for both UA techniques, most streamflow observations were bracketed by the 95PPU, with a containing ratio larger than 80%.
In comparison, GLUE gave narrower prediction uncertainty bands than the Bayesian method. The mode estimates of the parameter posterior distributions yielded better deterministic performance than the 50% percentiles for both the GLUE and Bayesian analyses. In addition, the simulation results calibrated with the Rosenbrock optimization algorithm show better agreement with the observations than the UA 50% percentiles, but slightly worse than the hydrographs from the mode estimates. The results clearly emphasize the importance of using model uncertainty diagnostic approaches in flood simulations.

14.
Regularization is the most popular technique to overcome the null space of model parameters in geophysical inverse problems, and is implemented by including a constraint term as well as the data‐misfit term in the objective function being minimized. The weighting of the constraint term relative to the data‐fitting term is controlled by a regularization parameter, and its adjustment to obtain the best model has received much attention. The empirical Bayes approach discussed in this paper determines the optimum value of the regularization parameter from a given data set. The regularization term can be regarded as representing a priori information about the model parameters. The empirical Bayes approach and its more practical variant, Akaike's Bayesian Information Criterion, adjust the regularization parameter automatically in response to the level of data noise and to the suitability of the assumed a priori model information for the given data. When the noise level is high, the regularization parameter is made large, which means that the a priori information is emphasized. If the assumed a priori information is not suitable for the given data, the regularization parameter is made small. Both these behaviours are desirable characteristics for the regularized solutions of practical inverse problems. Four simple examples are presented to illustrate these characteristics for an underdetermined problem, a problem adopting an improper prior constraint and a problem having an unknown data variance, all frequently encountered geophysical inverse problems. Numerical experiments using Akaike's Bayesian Information Criterion for synthetic data provide results consistent with these characteristics. 
In addition, concerning the selection of an appropriate type of a priori model information, a comparison between four types of difference-operator model – the zeroth-, first-, second- and third-order difference-operator models – suggests that the automatic determination of the optimum regularization parameter becomes more difficult with increasing order of the difference operators. Accordingly, taking the effect of data noise into account, it is better to employ the lower-order difference-operator models for inversions of noisy data.
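A hedged sketch of the empirical-Bayes idea above, for a linear Gaussian model (the sizes, the first-difference prior and the true "smooth" model are invented; this is not the paper's ABIC implementation). With d = G m + e, e ~ N(0, s2 I), and a smoothness prior m ~ N(0, (s2/lam) (L^T L)^{-1}), the marginal likelihood of d is N(0, s2 (I + (1/lam) G (L^T L)^{-1} G^T)); a criterion of ABIC type is minus twice its logarithm, and minimizing it over lam picks the regularization parameter automatically.

```python
import numpy as np

rng = np.random.default_rng(7)
n_d, n_m, s2 = 40, 30, 0.05 ** 2
G = rng.normal(size=(n_d, n_m)) / np.sqrt(n_m)        # toy forward operator

# First-difference operator, kept full rank so the prior covariance is proper.
L = np.eye(n_m) - np.diag(np.ones(n_m - 1), 1)

m_true = np.cumsum(rng.normal(0.0, 0.2, n_m))          # smooth (random-walk) model
d = G @ m_true + rng.normal(0.0, np.sqrt(s2), n_d)

P = np.linalg.inv(L.T @ L)                             # prior covariance (up to scale)

def abic(lam):
    """-2 log marginal likelihood of d for regularization weight lam."""
    C = s2 * (np.eye(n_d) + (1.0 / lam) * G @ P @ G.T)
    _, logdet = np.linalg.slogdet(C)
    return n_d * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(C, d)

lams = np.logspace(-4, 4, 81)
best_lam = lams[np.argmin([abic(l) for l in lams])]
```

As the abstract describes, noisier data (larger s2 relative to the signal) push the minimizer toward stronger regularization, while a poorly matched prior pushes it toward weaker regularization.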

15.
A method for using remotely sensed snow cover information in updating a hydrological model is developed, based on Bayes' theorem. A snow cover mass balance model structure adapted to such use of satellite data is specified, using a parametric snow depletion curve in each spatial unit to describe the subunit variability in snow storage. The snow depletion curve relates the accumulated melt depth to snow-covered area, accumulated snowmelt runoff volume, and remaining snow water equivalent. The parametric formulation enables updating of the complete snow depletion curve, including mass balance, by satellite data on snow coverage. Each spatial unit (i.e. grid cell) in the model maintains a specific depletion curve state that is updated independently. The uncertainty associated with the variables involved is formulated in terms of a joint distribution, from which the joint expectancy (mean value) represents the model state. The Bayesian updating modifies the prior (pre-update) joint distribution into a posterior, and the posterior joint expectancy replaces the prior as the current model state. Three updating experiments are run in a 2400 km² mountainous region in Jotunheimen, central Norway (61°N, 9°E) using two Landsat 7 ETM+ images separately and together. At 1 km grid scale in this alpine terrain, three parameters are needed in the snow depletion curve. Despite the small amount of measured information compared with the dimensionality of the updated parameter vector, updating reduces uncertainty substantially for some state variables and parameters. Parameter adjustments resulting from using each image separately differ, but are positively correlated. For all variables, uncertainty reduction is larger with two images used in conjunction than with any single image. Where the observation is in strong conflict with the prior estimate, increased uncertainty may occur, indicating that prior uncertainty may have been underestimated. Copyright © 2006 John Wiley & Sons, Ltd.

16.
In this paper, we present the uncertainty analysis of the 2D electrical tomography inverse problem using model reduction, performing the sampling via an explorative member of the Particle Swarm Optimization family called Regressive-Regressive Particle Swarm Optimization. The procedure begins with a local inversion to find a good resistivity model located in the nonlinear equivalence region of the set of plausible solutions. The dimension of this geophysical model is then reduced using spectral decomposition, and the uncertainty space is explored via Particle Swarm Optimization. Using this approach, we show that it is possible to sample the uncertainty space of the electrical tomography inverse problem. We illustrate the methodology with applications to a synthetic dataset and to a real dataset from a karstic geological setting. By computing the uncertainty of the inverse solution, it is possible to segment the resistivity images obtained from inversion. This segmentation is based on the set of equivalent models that have been sampled and makes it possible to answer geophysical questions in a probabilistic way, enabling risk analysis.
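A bare-bones standard particle swarm sketch of the family the paper's sampler belongs to (the Regressive-Regressive variant and the model reduction are omitted; the toy misfit surface and swarm settings are invented): particles explore a misfit landscape, and the ensemble of low-misfit models they visit can serve as samples of the uncertainty region.

```python
import numpy as np

rng = np.random.default_rng(8)

def misfit(m):
    """Toy misfit surface with minimum at (1, -2)."""
    return (m[..., 0] - 1.0) ** 2 + 2.0 * (m[..., 1] + 2.0) ** 2

n_part, n_iter = 30, 200
x = rng.uniform(-5.0, 5.0, size=(n_part, 2))      # particle positions
v = np.zeros_like(x)                              # particle velocities
pbest, pbest_f = x.copy(), misfit(x)              # personal bests
gbest = pbest[np.argmin(pbest_f)]                 # global best

w_in, c1, c2 = 0.7, 1.5, 1.5                      # inertia and attraction weights
for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_part, 1))
    v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = misfit(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
```

For uncertainty appraisal one would record every visited `x` whose misfit falls below an equivalence tolerance, rather than keeping only `gbest`.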

17.
The inverse problem of parameter structure identification in a distributed parameter system remains challenging. Identifying a more complex parameter structure requires more data. There is also the problem of over-parameterization. In this study, we propose a modified Tabu search for parameter structure identification. We embed an adjoint state procedure in the search process to improve the efficiency of the Tabu search. We use Voronoi tessellation for automatic parameterization to reduce the dimension of the distributed parameter. Additionally, a coarse-fine grid technique is applied to further improve the effectiveness and efficiency of the proposed methodology. To avoid over-parameterization, at each level of parameter complexity we calculate the residual error for parameter fitting, the parameter uncertainty error and a modified Akaike Information Criterion. To demonstrate the proposed methodology, we conduct numerical experiments with synthetic data that simulate both discrete hydraulic conductivity zones and a continuous hydraulic conductivity distribution. Our results indicate that the Tabu search allied with the adjoint state method significantly improves computational efficiency and effectiveness in solving the inverse problem of parameter structure identification.

18.
Modern ground water characterization and remediation projects routinely require calibration and inverse analysis of large three-dimensional numerical models of complex hydrogeological systems. Hydrogeologic complexity can be prompted by various aquifer characteristics including complicated spatial hydrostratigraphy and aquifer recharge from infiltration through an unsaturated zone. To keep the numerical models computationally efficient, compromises are frequently made in the model development, particularly, about resolution of the computational grid and numerical representation of the governing flow equation. The compromise is required so that the model can be used in calibration, parameter estimation, performance assessment, and analysis of sensitivity and uncertainty in model predictions. However, grid properties and resolution as well as applied computational schemes can have large effects on forward-model predictions and on inverse parameter estimates. We investigate these effects for a series of one- and two-dimensional synthetic cases representing saturated and variably saturated flow problems. We show that "conformable" grids, despite neglecting terms in the numerical formulation, can lead to accurate solutions of problems with complex hydrostratigraphy. Our analysis also demonstrates that, despite slower computer run times and higher memory requirements for a given problem size, the control volume finite-element method showed an advantage over finite-difference techniques in accuracy of parameter estimation for a given grid resolution for most of the test problems.

19.
Geophysical methods based on wavefield cross-correlation, such as prestack reverse time migration, place enormous demands on computation and storage, so choosing a suitable wavefield reconstruction method is particularly important. The conventional random boundary method tends to produce imaging noise, the effective boundary method remains difficult to implement in three dimensions, and checkpointing has small memory requirements but a high recomputation rate. This paper therefore proposes a wavefield reconstruction method based on checkpointing with interpolation. Subject to the Nyquist sampling theorem, the wavefield between adjacent checkpoints is sampled regularly; the sampled wavefields serve as interpolation nodes, and polynomial interpolation reconstructs the wavefield at any time, avoiding the efficiency loss caused by the repeated recursion of optimized checkpointing. Numerical experiments show that the interpolation-based checkpoint reconstruction recovers the wavefield effectively: cubic-spline interpolation is the most accurate, while Newton interpolation has the lowest computational cost and suits fast reconstruction. Prestack reverse time migration of the Sigsbee model demonstrates the feasibility of the interpolation scheme and shows that it greatly improves the efficiency of wavefield reconstruction. Analysis on a 3D model shows that, with a small increase in storage, the recomputation rate of the interpolation method drops sharply and storage falls to 7.1% of that required by the effective boundary method, which is of practical significance for 3D prestack reverse time migration.
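A 1-D stand-in for the checkpoint-interval interpolation idea (the band-limited "wavefield" trace, frequencies and sampling rate are invented): store the wavefield at a rate satisfying the Nyquist criterion between checkpoints, then reconstruct intermediate time steps with a cubic spline instead of re-running the forward recursion.

```python
import numpy as np
from scipy.interpolate import CubicSpline

f_max = 25.0                                  # highest frequency in the "wavefield" (Hz)
t_fine = np.linspace(0.0, 1.0, 2001)          # all time steps the migration needs

def wavefield(t):
    """Toy band-limited trace standing in for one wavefield point."""
    return np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.sin(2 * np.pi * f_max * t)

trace = wavefield(t_fine)                     # reference: the full recursion's output

# Store samples at 4x the Nyquist rate between checkpoints.
dt_samp = 1.0 / (2.0 * f_max) / 4.0
n_samp = int(round(1.0 / dt_samp)) + 1
t_samp = np.linspace(0.0, 1.0, n_samp)        # ~10x fewer values stored than needed
stored = wavefield(t_samp)

# Reconstruct every needed time step from the stored nodes by cubic spline.
rec = CubicSpline(t_samp, stored)(t_fine)
rel_err = np.linalg.norm(rec - trace) / np.linalg.norm(trace)
```

The trade described in the abstract is visible here: a modest amount of extra storage (the spline nodes) replaces the repeated forward recursions of optimized checkpointing.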

20.
Improved Monte Carlo inversion of surface wave data
Inversion of surface wave data suffers from solution non-uniqueness and is hence strongly biased by the initial model. The Monte Carlo approach can handle this non-uniqueness by revealing the local minima, but it is inefficient for high-dimensionality problems and relies on subjective criteria, such as misfit thresholds, to interpret the results. If a smart sampling of the model parameter space, exploiting scale properties of the modal curves, is introduced, the method becomes more efficient and, compared with traditional global search methods, avoids the subjective use of control parameters that are barely related to the physical problem. The results are interpreted by means of a statistical test that selects an ensemble of feasible shear wave velocity models according to data quality and model parameterization. Tests on synthetic data demonstrate that the application of scale properties concentrates the sampling of the model parameter space in high-probability-density zones and makes it only weakly sensitive to the initial bounds on the model parameters. Tests on synthetic and field data, where boreholes are available, prove that the statistical test selects final results that are consistent with the true model and sensitive to data quality. The implemented strategies make Monte Carlo inversion efficient for practical applications and able to retrieve subsoil models even in complex and challenging situations such as velocity inversions.
