Similar Documents
A total of 20 similar documents were found (search time: 312 ms).
1.
Categorical parameter distributions consisting of geologic facies with distinct properties, for example, high-permeability channels embedded in a low-permeability matrix, are common at contaminated sites. At these sites, low-permeability facies store solute mass, acting as secondary sources to higher-permeability facies, sustaining concentrations for decades while increasing risk and cleanup costs. Parameter estimation is difficult in such systems because the discontinuities in the parameter space hinder the inverse problem. This paper presents a novel approach based on Traveling Pilot Points (TRIPS) and an iterative ensemble smoother (IES) to solve the categorical inverse problem. Groundwater flow and solute transport in a hypothetical aquifer with a categorical parameter distribution are simulated using MODFLOW 6. Heads and concentrations are recorded at multiple monitoring locations. IES is used to generate posterior ensembles assuming a TRIPS prior and an approximate multi-Gaussian prior. The ensembles are used to predict solute concentrations and mass into the future. The evaluation also includes an assessment of how the number of measurements and the choice of the geological prior determine the characteristics of the posterior ensemble and the resulting predictions. The results indicate that IES was able to efficiently sample the posterior distribution and showed that even with an approximate geological prior, a high degree of parameterization and history matching could lead to parameter ensembles that can be useful for making certain types of predictions (heads, concentrations). However, the approximate geological prior was insufficient for predicting mass. The analysis demonstrates how decision-makers can quantify uncertainty and make informed decisions with an ensemble-based approach.
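
As a rough illustration of the ensemble-smoother machinery described above, the following Python sketch performs ES-MDA-style iterative ensemble updates on a toy linear forward model. All names (`ies_update`, `forward`, `G`) are hypothetical stand-ins; the TRIPS parameterization and the MODFLOW 6 simulations are not reproduced.

```python
import numpy as np

def ies_update(M, d_obs, sigma_d, forward, alpha=4.0, rng=np.random.default_rng(0)):
    """One ES-MDA-style update: M is (n_ens, n_par), d_obs is (n_obs,)."""
    D = np.array([forward(m) for m in M])      # simulated data, (n_ens, n_obs)
    dM = M - M.mean(axis=0)                    # parameter anomalies
    dD = D - D.mean(axis=0)                    # data anomalies
    C_dd = dD.T @ dD / (len(M) - 1)            # data covariance
    C_md = dM.T @ dD / (len(M) - 1)            # parameter-data cross-covariance
    R = alpha * sigma_d**2 * np.eye(len(d_obs))
    K = C_md @ np.linalg.inv(C_dd + R)         # Kalman-like gain
    d_pert = d_obs + rng.normal(0.0, np.sqrt(alpha) * sigma_d, size=D.shape)
    return M + (d_pert - D) @ K.T              # updated ensemble

# toy usage: recover two parameters of a linear "aquifer" response
G = np.array([[1.0, 0.5], [0.2, 1.3], [0.8, 0.8]])
truth = np.array([2.0, -1.0])
M = np.random.default_rng(1).normal(0.0, 1.0, size=(200, 2))
for _ in range(4):                             # 4 MDA steps, alpha = 4
    M = ies_update(M, G @ truth, 0.05, lambda m: G @ m)
print(M.mean(axis=0))                          # close to [2.0, -1.0]
```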

2.
The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration, and to the exploration of posterior predictive uncertainty, of a complex model that can overcome these problems in many modelling contexts. The methodology relies on conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix. Meanwhile, testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity.
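
A minimal sketch of the conjunctive idea, in which derivatives come from a cheap surrogate while parameter upgrades are tested with the complex model. The callables `complex_model` and `surrogate_model` are hypothetical; the density-dependent seawater intrusion model is not reproduced.

```python
import numpy as np

def jacobian_fd(model, p, eps=1e-6):
    """Finite-difference Jacobian computed with the cheap surrogate model."""
    f0 = model(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy(); dp[j] += eps
        J[:, j] = (model(dp) - f0) / eps
    return J

def calibrate(complex_model, surrogate_model, p, d_obs, n_iter=20):
    for _ in range(n_iter):
        r = d_obs - complex_model(p)            # residual from the COMPLEX model
        J = jacobian_fd(surrogate_model, p)     # sensitivities from the SURROGATE
        step = np.linalg.lstsq(J, r, rcond=None)[0]
        while np.linalg.norm(d_obs - complex_model(p + step)) > np.linalg.norm(r):
            step *= 0.5                         # test upgrades with the complex model
            if np.linalg.norm(step) < 1e-12:
                return p
        p = p + step
    return p

# toy demo: the surrogate is a slightly biased version of the complex model
complex_model = lambda p: np.array([p[0]**2 + p[1], p[0] - p[1]])
surrogate_model = lambda p: np.array([1.05 * p[0]**2 + p[1], p[0] - p[1]])
d_obs = complex_model(np.array([2.0, 3.0]))
print(calibrate(complex_model, surrogate_model, np.array([1.0, 1.0]), d_obs))
```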

3.
ABSTRACT

With the increasing use of telemetry in the control of water resource systems, a considerable amount of effort is being devoted to the development of models and parameter estimation techniques for on-line use. A variety of models and parameter estimation algorithms have been considered, ranging from complex conceptual models of the soil moisture accounting type, which are traditionally calibrated off-line, to state-space/Kalman filter models which, perhaps, have enjoyed undue popularity in the recent literature due to their mathematical elegance. The fundamental assumptions underlying the various approaches are reviewed, and the validity of these assumptions in the hydrological forecasting context is assessed. The paper draws on some results obtained during a recent workshop at the Institute of Hydrology in making assessments of the relative merits of different models and parameter estimation algorithms; these results have been derived from an intercomparison of a number of real time forecasting models.

4.
Abstract

An approach is presented to solve the inverse problem for simultaneous identification of different aquifer parameters under steady-state conditions. The proposed methodology is formulated as a maximum likelihood parameter estimation problem. Gauss-Newton and full Newton algorithms are used for optimization, with an adjoint-state method for calculating the complete Hessian matrix. The methodology is applied to a realistic groundwater model, and Monte Carlo analysis is used to check the results.
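
To make the optimization step concrete, here is a hedged sketch of one Gauss-Newton step versus one full Newton step for a Gaussian maximum-likelihood (least-squares) objective. Explicit derivatives of a toy two-parameter model stand in for the paper's adjoint-state calculation of the complete Hessian.

```python
import numpy as np

def newton_steps(f, jac, res_hessians, p, d_obs):
    """One Gauss-Newton and one full-Newton step for 0.5*||f(p)-d_obs||^2."""
    r = f(p) - d_obs
    J = jac(p)
    H_gn = J.T @ J                               # Gauss-Newton (first-order) Hessian
    H_full = H_gn + sum(ri * Hi for ri, Hi in zip(r, res_hessians(p)))
    g = J.T @ r                                  # gradient of the objective
    return -np.linalg.solve(H_gn, g), -np.linalg.solve(H_full, g)

# toy model with two observations and two parameters
f = lambda p: np.array([np.exp(p[0]), p[0] * p[1]])
jac = lambda p: np.array([[np.exp(p[0]), 0.0],
                          [p[1], p[0]]])
res_hessians = lambda p: [np.array([[np.exp(p[0]), 0.0], [0.0, 0.0]]),
                          np.array([[0.0, 1.0], [1.0, 0.0]])]
p0 = np.array([0.5, 2.0])
print(newton_steps(f, jac, res_hessians, p0, d_obs=np.array([1.0, 1.0])))
```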

5.
Markov chain Monte Carlo algorithms are commonly employed for accurate uncertainty appraisals in non-linear inverse problems. The downside of these algorithms is the considerable number of samples needed to achieve reliable posterior estimations, especially in high-dimensional model spaces. To overcome this issue, the Hamiltonian Monte Carlo algorithm has recently been introduced to solve geophysical inversions. Unlike classical Markov chain Monte Carlo algorithms, this approach exploits the derivative information of the target posterior probability density to guide the sampling of the model space. However, its main downside is the computational cost of the derivative computation (i.e. the computation of the Jacobian matrix around each sampled model). Possible strategies to mitigate this issue are the reduction of the dimensionality of the model space and/or the use of efficient methods to compute the gradient of the target density. Here we focus on the estimation of elastic properties (P- and S-wave velocities and density) from pre-stack data through a non-linear amplitude versus angle inversion in which the Hamiltonian Monte Carlo algorithm is used to sample the posterior probability. To decrease the computational cost of the inversion procedure, we employ the discrete cosine transform to reparametrize the model space, and we train a convolutional neural network to predict the Jacobian matrix around each sampled model. The training data set for the network is also parametrized in the discrete cosine transform space, thus allowing for a reduction of the number of parameters to be optimized during the learning phase. Once trained, the network can be used to compute the Jacobian matrix associated with each sampled model in real time. The outcomes of the proposed approach are compared and validated against the predictions of Hamiltonian Monte Carlo inversions in which a computationally expensive but accurate finite-difference scheme is used to compute the Jacobian matrix, and against those obtained by replacing the Jacobian with a matrix operator derived from a linear approximation of the Zoeppritz equations. Synthetic and field inversion experiments demonstrate that the proposed approach dramatically reduces the cost of the Hamiltonian Monte Carlo inversion while preserving an accurate and efficient sampling of the posterior probability.
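
For orientation, a minimal Hamiltonian Monte Carlo sampler with leapfrog integration is sketched below on a toy 2-D Gaussian posterior. The discrete cosine transform reparametrization and the trained network are not reproduced; `logp_grad` is a hypothetical callable returning the log-posterior and its gradient, which is where the network-predicted Jacobian would enter in the paper's scheme.

```python
import numpy as np

def hmc(logp_grad, m0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    rng = np.random.default_rng(seed)
    m = m0.copy()
    logp, grad = logp_grad(m)
    samples = []
    for _ in range(n_samples):
        p = rng.normal(size=m.size)                   # draw auxiliary momentum
        H0 = -logp + 0.5 * p @ p
        m_new, g, lp = m.copy(), grad.copy(), logp
        for _ in range(n_leap):                       # leapfrog integration
            p = p + 0.5 * eps * g
            m_new = m_new + eps * p
            lp, g = logp_grad(m_new)
            p = p + 0.5 * eps * g
        H1 = -lp + 0.5 * p @ p
        if rng.random() < np.exp(min(0.0, H0 - H1)):  # Metropolis accept/reject
            m, logp, grad = m_new, lp, g
        samples.append(m.copy())
    return np.array(samples)

# toy posterior: correlated 2-D Gaussian standing in for the AVA posterior
C_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
logp_grad = lambda m: (-0.5 * m @ C_inv @ m, -C_inv @ m)
print(hmc(logp_grad, np.array([3.0, -3.0])).mean(axis=0))  # roughly [0, 0]
```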

6.
With the popularity of complex hydrologic models, the time taken to run these models is increasing substantially. Comparing and evaluating the efficacy of different optimization algorithms for calibrating computationally intensive hydrologic models is becoming a nontrivial issue. In this study, five global optimization algorithms (genetic algorithms, shuffled complex evolution, particle swarm optimization, differential evolution, and artificial immune system) were tested for automatic parameter calibration of a complex hydrologic model, the Soil and Water Assessment Tool (SWAT), in four watersheds. The results show that genetic algorithms (GA) outperform the other four algorithms when the number of model evaluations exceeds 2000, while particle swarm optimization (PSO) obtains better parameter solutions than the other algorithms with fewer model runs (less than 2000). Given limited computational time, the PSO algorithm is preferred, while GA should be chosen given plenty of computational resources. When applying GA and PSO for parameter optimization of SWAT, a small population size should be chosen. Copyright © 2008 John Wiley & Sons, Ltd.
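
A compact global-best PSO sketch with a small population, as the study recommends for SWAT; here SWAT is replaced by a cheap Rosenbrock-style test misfit, and all parameter names are illustrative.

```python
import numpy as np

def pso(obj, bounds, n_particles=10, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([obj(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
        x = np.clip(x + v, lo, hi)                                 # keep in bounds
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# toy "calibration": minimize a Rosenbrock-style misfit in two parameters
obj = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
print(pso(obj, np.array([[-2.0, 2.0], [-2.0, 2.0]])))              # near (1, 1)
```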

7.
Ye Zhang, Ground Water, 2014, 52(3): 343-351
Modeling and calibration of natural aquifers with multiple scales of heterogeneity is a challenging task due to limited subsurface access. While computer modeling plays an essential role in aquifer studies, large uncertainty exists in developing a conceptual model of an aquifer and in calibrating the model for decision making. Due to uncertainties such as a lack of understanding of subsurface processes and a lack of techniques to parameterize the subsurface environment (including hydraulic conductivity, source/sink rates, and aquifer boundary conditions), existing aquifer models often suffer from nonuniqueness in calibration, leading to poor predictive capability. A robust calibration methodology is needed that can address the simultaneous estimation of aquifer parameters, sources/sinks, and boundary conditions. In this paper, we propose a multistage and multiscale approach that addresses subsurface heterogeneity at multiple scales while reducing uncertainty in estimating the model parameters and model boundary conditions. The key to this approach lies in the appropriate development, verification, and synthesis of existing and new techniques of static and dynamic data integration. In particular, based on a given set of observation data, new inversion techniques can first be used to estimate large-scale effective aquifer parameters and smoothed boundary conditions, based on which parameter and boundary condition estimation can be refined at increasing detail using standard or highly parameterized estimation techniques.

8.
Global optimization methods such as simulated annealing, genetic algorithms and tabu search are being increasingly used to solve groundwater remediation design and parameter identification problems. While these methods enjoy some unique advantages over traditional gradient-based methods, they typically require thousands to tens of thousands of forward simulation runs before reaching optimal or near-optimal solutions. Thus, one severe limitation associated with these global optimization methods is very long computation time. To mitigate this limitation, this paper presents a new approach for obtaining, repeatedly and efficiently, the solutions of a linear forward simulation model subject to successive perturbations. The proposed approach takes advantage of the fact that successive forward simulation runs, as required by a global optimization procedure, usually involve only slight changes in the coefficient matrices of the resultant linear equations. As a result, the new solution to a system of linear equations perturbed by changes in aquifer properties and/or sinks/sources can be obtained as the sum of a non-perturbed base solution and the solution to the perturbed portion of the linear equations. The computational efficiency of the proposed approach arises from the fact that the perturbed solution can be derived directly, without solving the linear equations again. A two-dimensional test problem with 20 by 30 nodes demonstrates that the proposed approach is much more efficient than repeatedly running the simulation model, achieving a speedup of more than 15 times after a fixed number of model evaluations. The speedup increases with the number of model evaluations and with the size of the simulation model. The main limitation of the proposed approach is the large amount of computer memory required to store the inverse matrix. Effective ways of limiting the storage requirement are briefly discussed.
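
Under the assumption that the coefficient change is low-rank, the base-plus-perturbation idea can be illustrated with the Sherman-Morrison-Woodbury identity, which likewise reuses a stored inverse instead of re-solving. This is a sketch of the general principle, not the paper's exact scheme.

```python
import numpy as np

def perturbed_solve(A_inv, x_base, U, V):
    """Solve (A + U V^T) x = b reusing A_inv and x_base = A_inv @ b."""
    AiU = A_inv @ U                                       # n x k, cheap when k << n
    core = np.linalg.inv(np.eye(U.shape[1]) + V.T @ AiU)  # small k x k system
    return x_base - AiU @ (core @ (V.T @ x_base))

rng = np.random.default_rng(0)
n = 200
A = 4.0 * np.eye(n) + rng.normal(0.0, 0.1, (n, n))        # base coefficient matrix
b = rng.normal(size=n)
A_inv = np.linalg.inv(A)                                  # stored once: the memory cost
x_base = A_inv @ b                                        # non-perturbed base solution
# perturb two "aquifer property" coefficients: a rank-2 change A + U V^T
U = np.zeros((n, 2)); U[5, 0], U[17, 1] = 0.5, -0.3
V = np.zeros((n, 2)); V[5, 0], V[17, 1] = 1.0, 1.0
x_new = perturbed_solve(A_inv, x_base, U, V)
print(np.allclose(x_new, np.linalg.solve(A + U @ V.T, b)))  # True
```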

9.
An important stage in two-dimensional magnetotelluric modelling is the calculation of the Earth's response functions for an assumed conductivity model and the calculation of the associated Jacobian relating those response functions to the model parameters. The efficiency of the calculation of the Jacobian will affect the efficiency of the inversion modelling. Rodi (1976) produced all the Jacobian elements by inverting a single matrix and using an approximate first-order algorithm. Since only one inverse matrix required calculation, the procedure sped up the inversion. An iterative scheme to improve the approximation to the Jacobian information is presented in this paper. While this scheme takes a little longer than Rodi's algorithm, it enables a more accurate determination of the Jacobian information. It is found that the Jacobian elements can be produced in 10% of the time required to calculate an inverse matrix or to calculate a 2D starting model. A modification of the algorithm can also be used to improve the accuracy of the original inverse matrix calculated in a 2D finite difference program, and hence the solution this program produces. The convergence of the iteration scheme is found to be related both to the originally calculated inverse matrix and to the change in the newly formed matrix arising from perturbation of the model parameters. A ridge regression inverse algorithm is used in conjunction with the iterative scheme for forward modelling described in this paper to produce a 2D conductivity section from field data.
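
The flavour of such an iterative improvement can be sketched with the Newton-Schulz iteration for refining an approximate inverse matrix; this is an illustrative analogue rather than Rodi's or the paper's exact algorithm, and the magnetotelluric forward problem is not reproduced.

```python
import numpy as np

def refine_inverse(A, X, n_iter=5):
    """Newton-Schulz refinement of an approximate inverse X of A."""
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2.0 * I - A @ X)    # quadratically convergent if ||I - AX|| < 1
    return X

rng = np.random.default_rng(0)
A = 3.0 * np.eye(50) + rng.normal(0.0, 0.05, (50, 50))
X0 = np.linalg.inv(A) + rng.normal(0.0, 1e-3, (50, 50))       # a "stale" inverse
print(np.linalg.norm(np.eye(50) - A @ X0))                    # initial residual
print(np.linalg.norm(np.eye(50) - A @ refine_inverse(A, X0))) # near machine precision
```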

10.
Information theory is the basis for understanding how information is transmitted as observations. Observation data can be used to compare uncertainty on parameter estimates and predictions between models. Jacobian Information (JI) is quantified as the determinant of the weighted Jacobian (sensitivity) matrix. Fisher Information (FI) is quantified as the determinant of the weighted FI matrix. FI measures the relative disorder of a model (entropy) in a set of models. One-dimensional models are used to demonstrate the relationship between JI and FI, and the resulting uncertainty on estimated parameter values and model predictions, for increasing model complexity, different model structures, different boundary conditions, and over-fitted models. Greater model complexity results in increased JI accompanied by an increase in parameter and prediction uncertainty. FI generally increases with increasing model complexity unless model error is large. Models with lower FI have a higher level of disorder (increase in entropy), which results in greater uncertainty of parameter estimates and model predictions. A constant-head boundary constrains the heads in the area near the boundary, reducing the sensitivity of simulated equivalents to estimated parameters. JI and FI are lower for this boundary condition than for a constant-outflow boundary, in which the heads in the area of the boundary can adjust freely. Complex, over-fitted models, in which the structure of the model is not supported by the observation dataset, result in lower JI and FI because there is insufficient information to estimate all parameters in the model.
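
A hedged numerical sketch of the determinant-based measures and the complexity effect described above, assuming Gaussian errors so that the Fisher matrix reduces to the weighted normal matrix; a random sensitivity matrix replaces the paper's one-dimensional flow models.

```python
import numpy as np

def information_determinant(J, w, sigma2=1.0):
    """Determinant of the weighted normal/Fisher matrix J^T W J / sigma^2."""
    return np.linalg.det(J.T @ np.diag(w) @ J / sigma2)

rng = np.random.default_rng(0)
J_simple = rng.normal(size=(20, 2))                  # 20 observations, 2 parameters
J_complex = np.hstack([J_simple, 0.1 * rng.normal(size=(20, 4))])  # 6 parameters
w = np.ones(20)
for J in (J_simple, J_complex):
    cov = np.linalg.inv(J.T @ np.diag(w) @ J)        # Cramer-Rao parameter covariance
    print(information_determinant(J, w), np.sqrt(np.diag(cov))[:2])
# the four extra weakly informed parameters inflate the uncertainty of the
# two shared parameters, mirroring the complexity effect described above
```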

11.
With the recent development of distributed hydrological models, the use of multi-site observed data to evaluate model performance is becoming more common. Distributed hydrological models have many advantages, but they also face the challenge of calibrating a large number of parameters. As a typical distributed hydrological model, the Soil and Water Assessment Tool (SWAT) shares these parameter calibration problems. In this paper, four different uncertainty approaches – Particle Swarm Optimization (PSO) techniques, Generalized Likelihood Uncertainty Estimation (GLUE), the Sequential Uncertainty Fitting algorithm (SUFI-2) and Parameter Solution (PARASOL) – are compared using the SWAT model applied to the Peace River Basin, central Florida. In our study, the observed river discharge data used in SWAT model calibration were collected from three gauging stations on the main tributary of the Peace River. A shared philosophy lies behind these approaches: all methods seek out many parameter sets to account for the uncertainty due to non-uniqueness in model parameter evaluation. On the basis of the statistical results of the four uncertainty methods, the difficulty level of each method, the number of runs, and the theoretical basis, the factors that affected the accuracy of simulation were analysed and compared. Furthermore, for the four uncertainty methods applied with the SWAT model in the study area, the pairwise correlations between parameters and the distributions of model-fit summary statistics, computed from sampling over the behavioural parameter space and the entire feasible model calibration parameter space, were identified and examined. This provided additional insight into the relative identifiability of the four uncertainty methods. Copyright © 2014 John Wiley & Sons, Ltd.
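
For reference, one of the four methods lends itself to a very short sketch: GLUE with uniform Monte Carlo sampling, a Nash-Sutcliffe informal likelihood, and a behavioural threshold. SWAT and the Peace River discharge data are replaced by a hypothetical exponential-recession toy model.

```python
import numpy as np

def glue(model, obs, bounds, n_samples=5000, threshold=0.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    thetas = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    sse0 = np.sum((obs - obs.mean())**2)
    ns = np.array([1.0 - np.sum((model(t) - obs)**2) / sse0 for t in thetas])
    keep = ns > threshold                          # behavioural parameter sets
    w = ns[keep] / ns[keep].sum()                  # informal likelihood weights
    return thetas[keep], w

# toy "discharge" model: exponential recession q(t) = a * exp(-k t)
t = np.arange(30.0)
model = lambda p: p[0] * np.exp(-p[1] * t)
obs = model((2.0, 0.3)) + np.random.default_rng(1).normal(0.0, 0.05, t.size)
sets, w = glue(model, obs, np.array([[0.1, 5.0], [0.01, 1.0]]))
print(len(sets), (sets * w[:, None]).sum(axis=0))  # weighted mean near (2.0, 0.3)
```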

12.
Two new algorithms have been introduced as a further development of a robust interferometric method for structural health monitoring (SHM) of buildings during earthquakes using data from seismic sensors. The SHM method is intended for use in an automatic seismic alert system, to issue a warning of significant damage during or immediately after an earthquake and to facilitate decision making on evacuation, avoiding loss of life and injury from possible collapse of the weakened structure during aftershock shaking. The method identifies a wave velocity profile of the building by fitting an equivalent layered shear-beam model to impulse response functions (virtual source at the roof) of the recorded earthquake response. The structural health is monitored by detecting changes in the identified velocities in moving time windows, with the initial window used as reference. Because the fit essentially involves matching phase differences between motion at different floors, the identified velocity profile is not affected by rigid-body rocking, or by soil-structure interaction in general, as demonstrated in this paper. Consequently, detected changes in wave velocity during an earthquake are not affected by changes in the soil-foundation system, which is a major advantage over SHM based on detecting changes in the observed modal frequencies. Further, the method is robust when applied to real buildings and large-amplitude earthquake response, as demonstrated in previous work. The new fitting algorithms introduced are the nonlinear least squares (LSQ) fit and the time shift matching (TSM) algorithm. The former involves waveform inversion of the impulse responses, and the latter iterative matching of the pulse time shifts; both markedly reduce the identification error compared with the previously used direct ray algorithm, especially for more detailed models, i.e., those with fewer floors per layer. Results are presented for identification of the NS, EW and torsional responses of the densely instrumented Millikan Library (a 9-story reinforced concrete building in Pasadena, California) during a small earthquake.
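
The core identification step, namely impulse responses by regularized deconvolution with the roof record as virtual source and a velocity from the pulse time shift, can be sketched on synthetic pulses as follows. The shear-beam fitting, the LSQ and TSM algorithms, and the Millikan Library records are not reproduced, and the 15 m height used below is an arbitrary stand-in.

```python
import numpy as np

def impulse_response(x_floor, x_roof, eps=0.01):
    """Regularized deconvolution: floor response to a virtual source at the roof."""
    Xf, Xr = np.fft.rfft(x_floor), np.fft.rfft(x_roof)
    H = Xf * np.conj(Xr) / (np.abs(Xr)**2 + eps * np.mean(np.abs(Xr)**2))
    return np.fft.irfft(H, n=len(x_floor))

# two synthetic records: the same pulse arriving 0.05 s later at the upper floor
fs, n = 100.0, 1024                          # sampling rate (Hz), record length
t = np.arange(n) / fs
pulse = lambda t0: np.exp(-((t - t0) / 0.05)**2)
roof, floor = pulse(2.00), pulse(2.05)
h = impulse_response(floor, roof)
lag = np.argmax(h) / fs                      # pulse travel time, ~0.05 s
print(lag, 15.0 / lag)                       # wave velocity for a 15 m storey stack
```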

13.
A new parameter estimation algorithm based on the ensemble Kalman filter (EnKF) is developed. The developed algorithm, combined with the proposed problem parametrization, offers an efficient parameter estimation method that converges using very small ensembles. The inverse problem is formulated as a sequential data integration problem. Gaussian process regression is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen–Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative regularized EnKF algorithm. The filter is converted to an optimization algorithm by using a pseudo time-stepping technique such that the model output matches the time-dependent data. The EnKF Kalman gain matrix is regularized using truncated SVD to filter out noisy correlations. Numerical results show that the proposed algorithm is a promising approach for parameter estimation of subsurface flow models.
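
A hedged sketch of three building blocks named above: a Karhunen–Loève basis from a prior covariance, an ensemble Kalman update on the KL weights, and a gain regularized by truncated SVD. A toy linear observation operator stands in for the subsurface flow model, and the pseudo time-stepping iteration is reduced to a single update.

```python
import numpy as np

def kl_basis(C, n_modes):
    """Leading Karhunen-Loeve basis functions of a prior covariance C."""
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:n_modes]
    return vecs[:, idx] * np.sqrt(vals[idx])

def tsvd_gain(C_md, C_dd, R, k):
    """Kalman-like gain with the data-space inverse truncated after k singular values."""
    U, s, Vt = np.linalg.svd(C_dd + R)
    return C_md @ (Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T)

# toy setup: 1D exponential prior covariance, 8 KL modes, 5 point observations
ngrid = 60
xg = np.arange(ngrid)
Phi = kl_basis(np.exp(-np.abs(xg[:, None] - xg[None, :]) / 10.0), 8)
cells = np.arange(0, ngrid, 12)
G = np.zeros((cells.size, ngrid)); G[np.arange(cells.size), cells] = 1.0
rng = np.random.default_rng(0)
w_true = rng.normal(size=8)
d_obs = G @ Phi @ w_true
W = rng.normal(size=(100, 8))                        # ensemble of KL weights
D = (G @ Phi @ W.T).T                                # predicted data
dW, dD = W - W.mean(0), D - D.mean(0)
K = tsvd_gain(dW.T @ dD / 99, dD.T @ dD / 99, 0.01 * np.eye(cells.size), k=4)
W = W + (d_obs + rng.normal(0.0, 0.1, D.shape) - D) @ K.T
print(np.abs(W.mean(0) - w_true))                    # weights pulled toward truth
```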

14.
Under the assumption that the seismic wavelet is noncausal and mixed-phase, this paper parameterizes the seismic wavelet with an autoregressive moving-average (ARMA) model and proposes a parameter estimation scheme that combines a linear method (the matrix-equation method) with a nonlinear method (ARMA model fitting). After the matrix-equation method is used to determine the range of the model parameters, a cumulant-fitting method is used to estimate the parameters precisely. Theoretical analysis and simulation results show that this scheme adapts well: on the one hand, it improves the accuracy of wavelet estimation and avoids the estimation error that the matrix-equation method alone may introduce for short seismic records; on the other hand, it improves the computational efficiency of wavelet extraction, reduces the complexity of determining the parameter range for ARMA model fitting, and avoids the excessive computational burden caused by estimating too many parameters when a moving-average (MA) model fitting method is used alone. Preliminary application results show that the method is effective and feasible.

15.
Calibration is typically used to improve the predictability of mechanistic simulation models by adjusting a set of model parameters and fitting model predictions to observations. Calibration does not, however, account for or correct potential misspecifications in the model structure, limiting the accuracy of modeled predictions. This paper presents a new approach that addresses both parameter error and model structural error to improve the predictive capability of a model. The new approach simultaneously conducts a numeric search for model parameter estimation and a symbolic (regression) search to determine a function that corrects misspecifications in the model equations. It is based on an evolutionary computation approach that integrates genetic algorithm and genetic programming operators. While the new approach is designed generically and can be applied to a broad array of mechanistic models, it is demonstrated for an illustrative case study involving water quality modeling and prediction. Results based on extensive testing and evaluation show that the new procedure performs consistently well in fitting a set of training data as well as in predicting a set of validation data, and outperforms a calibration procedure and an empirical model fitting procedure.

16.
A consistent approach to the frequency analysis of hydrologic data in arid and semiarid regions, i.e. data series containing several zero values (e.g. monthly precipitation in dry seasons, annual peak flow discharges, etc.), requires the use of discontinuous probability distribution functions. Such an approach has received relatively limited attention. Along the lines of physically based models, extensions of the Muskingum-based models to three-parameter forms are considered. Using 44 peak flow series from the USGS data bank, the fitting ability of four three-parameter models was investigated: (1) the Dirac delta combined with the Gamma distribution; (2) the Dirac delta combined with the two-parameter generalized Pareto distribution; (3) the Dirac delta combined with the two-parameter Weibull (DWe) distribution; (4) the kinematic diffusion model with one additional parameter that controls the probability of the zero event (KD3). The goodness of fit of the models was assessed and compared both by evaluating discrepancies between the results of the two estimation methods (the method of moments (MOM) and the maximum likelihood method (MLM)) and by using the log-likelihood function as a criterion. In most cases, the DWe distribution with MLM-estimated parameters showed the best fit of all the three-parameter models. Copyright © 2005 John Wiley & Sons, Ltd.
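
For the DWe model specifically, the mixed discrete-continuous log-likelihood and its MLM estimation can be sketched as follows, with synthetic data in place of the USGS peak-flow series; the Muskingum-based KD3 model is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(theta, q):
    """Mixed likelihood: point mass p0 at zero, Weibull(c, scale) for q > 0."""
    p0, c, scale = theta
    nz = q[q > 0]
    return -(np.sum(q == 0) * np.log(p0) + nz.size * np.log(1.0 - p0)
             + weibull_min.logpdf(nz, c, scale=scale).sum())

rng = np.random.default_rng(0)
flows = np.where(rng.random(500) < 0.3, 0.0,
                 weibull_min.rvs(1.5, scale=20.0, size=500, random_state=1))
res = minimize(neg_loglik, x0=[0.5, 1.0, 10.0], args=(flows,),
               bounds=[(1e-3, 1 - 1e-3), (0.1, 10.0), (0.1, 200.0)])
print(res.x)                                   # close to (0.3, 1.5, 20.0)
```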

17.
In this work a new algorithm for the fast and efficient 3D inversion of conventional 2D surface electrical resistivity tomography lines is presented. The proposed approach relies on the assumption that, for every surface measurement, there is a large number of 3D parameters with very small absolute Jacobian matrix values, which can be excluded in advance from the Jacobian matrix calculation, as they do not contribute significant information to the inversion procedure. A sensitivity analysis for both homogeneous and inhomogeneous earth models showed that each measurement has a specific region of influence, which can be limited to parameters within a critical rectangular prism volume. Application of the proposed algorithm accelerated the Jacobian (sensitivity) matrix calculation almost threefold for the data sets tested in this work. Moreover, application of the least squares regression iterative inversion technique resulted in a new 3D resistivity inversion algorithm that is more than 2.7 times faster than the original, with less than half the computer memory requirements. The efficiency and accuracy of the algorithm were verified using synthetic models representing typical archaeological structures, as well as field data collected from two archaeological sites in Greece employing different electrode configurations. The applicability of the presented approach is demonstrated for archaeological investigations, and the basic idea of the proposed algorithm can easily be extended to the inversion of other geophysical data.

18.
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data.
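
A sketch of the diagnostic described: sort the singular values from large to small, choose the regularization level where they first approach zero (a simple relative cutoff is used here as a proxy), and form the model resolution matrix and unit-covariance error bars at that truncation. The surface-wave kernels themselves are not reproduced.

```python
import numpy as np

def tradeoff_assessment(J, rel_cutoff=1e-3):
    """Truncation level from the singular value plot, plus resolution and error bars."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    k = int(np.searchsorted(-s, -s[0] * rel_cutoff))  # first s_i approaching zero
    V = Vt[:k].T
    R = V @ V.T                                       # model resolution matrix
    C = V @ np.diag(1.0 / s[:k]**2) @ V.T             # unit model covariance
    return s, k, R, np.sqrt(np.diag(C))

rng = np.random.default_rng(0)
J = rng.normal(size=(40, 12)) @ np.diag(np.logspace(0, -6, 12))  # ill-posed kernel
s, k, R, err = tradeoff_assessment(J)
print(k, np.round(np.diag(R), 2))    # resolved directions vs damped ones
```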

19.
Yager RM, Ground Water, 2004, 42(3): 390-400
Nonlinear regression is increasingly applied to the calibration of hydrologic models through the use of perturbation methods to compute the Jacobian or sensitivity matrix required by the Gauss-Newton optimization method. Sensitivities obtained by perturbation methods can be less accurate than those obtained by direct differentiation, however, and concern has arisen that the optimal parameter values and the associated parameter covariance matrix computed by perturbation could also be less accurate. Sensitivities computed by both perturbation and direct differentiation were applied in nonlinear regression calibration of seven ground water flow models. The two methods gave virtually identical optimum parameter values and covariances for the three models that were relatively linear and two of the models that were relatively nonlinear, but gave widely differing results for two other nonlinear models. The perturbation method performed better than direct differentiation in some regressions with the nonlinear models, apparently because approximate sensitivities computed for an interval yielded better search directions than did more accurately computed sensitivities for a point. The method selected to avoid overshooting minima on the error surface when updating parameter values with the Gauss-Newton procedure appears for nonlinear models to be more important than the method of sensitivity calculation in controlling regression convergence.
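
The two sensitivity calculations can be contrasted directly on a toy nonlinear model: a perturbation (forward-difference) Jacobian computed over a finite interval versus analytically differentiated point sensitivities. This is purely illustrative; none of the seven ground water flow models is reproduced.

```python
import numpy as np

f = lambda p: np.array([np.exp(-p[0]) + p[1]**2, np.sin(p[0] * p[1])])

def jac_exact(p):
    """Direct differentiation: point sensitivities."""
    return np.array([[-np.exp(-p[0]), 2.0 * p[1]],
                     [p[1] * np.cos(p[0] * p[1]), p[0] * np.cos(p[0] * p[1])]])

def jac_perturb(p, rel=0.01):
    """Perturbation method: forward-difference sensitivities over an interval."""
    f0, J = f(p), np.empty((2, 2))
    for j in range(2):
        dp = p.copy()
        dp[j] += rel * max(abs(p[j]), 1e-8)
        J[:, j] = (f(dp) - f0) / (dp[j] - p[j])
    return J

p = np.array([0.8, 1.2])
print(np.abs(jac_perturb(p) - jac_exact(p)))   # interval-averaged vs point values
```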

20.
This paper investigates the effects of uncertainty in rock-physics models on reservoir parameter estimation using seismic amplitude variation with angle and controlled-source electromagnetics data. The reservoir parameters are related to electrical resistivity by the Poupon model, and to elastic moduli and density by the Xu-White model. To handle uncertainty in the rock-physics models, we consider their outputs to be random functions with modes or means given by the predictions of those rock-physics models, and we consider the parameters of the rock-physics models to be random variables defined by specified probability distributions. Using a Bayesian framework and Markov chain Monte Carlo sampling methods, we are able to obtain estimates of reservoir parameters and information on the uncertainty in the estimation. The developed method is applied to a synthetic case study based on a layered reservoir model, and the results show that uncertainty in both rock-physics models and in their parameters may have significant effects on reservoir parameter estimation. When the biases in rock-physics models and in their associated parameters are unknown, conventional joint inversion approaches, which consider rock-physics models as deterministic functions and the model parameters as fixed values, may produce misleading results. The developed stochastic method in this study provides an integrated approach for quantifying how uncertainty and biases in rock-physics models and in their associated parameters affect the estimates of reservoir parameters, and is therefore a more robust method for reservoir parameter estimation.
