Similar Documents
20 similar documents retrieved.
1.
A multivariate spatial sampling design based on spatial vine copulas is presented that aims to simultaneously reduce the prediction uncertainty of multiple variables by selecting additional sampling locations according to the multivariate relationship between variables, the spatial configuration of existing locations and the values of the observations at those locations. Novel aspects of the methodology include optimal designs that use spatial vine copulas to estimate prediction uncertainty and, additionally, transformation methods for dimension reduction to model multivariate spatial dependence. Spatial vine copulas capture non-linear spatial dependence within variables, whilst a chained transformation based on non-linear principal component analysis captures the non-linear multivariate dependence between variables. The design methodology is applied to two environmental case studies, and its performance is evaluated through partial redesigns of the original spatial designs. The first application, a soil contamination example, demonstrates the ability of the methodology to address spatial non-linearity in the data; the second, a forest biomass study, highlights its strength in incorporating non-linear multivariate dependence into the design.

2.
Incremental dynamic analysis (IDA) of complex structures is important for seismic design and assessment, but it requires a large number of nonlinear time-history analyses and is therefore computationally expensive. This paper combines a Kriging metamodel with adaptive sequential sampling for structural IDA to improve computational efficiency and accuracy: the Kriging metamodel predicts the structural seismic response, and the sequential sampling scheme adds nonlinear time-history analyses according to the entropy of candidate points, progressively improving the predictive accuracy of the Kriging model. With the proposed approach, IDA curves of high accuracy can be obtained from only a small number of time-history analyses. To verify its feasibility and effectiveness, two- and nine-story steel frame models are analyzed with direct IDA, the hunt & fill method and the proposed method, and the computational error, efficiency and resulting IDA curves of the three methods are compared. The adaptive sequential-sampling Kriging method is then applied to IDA considering uncertain structural parameters and compared with conventional Monte Carlo simulation. The results show that the method achieves high computational efficiency while preserving the accuracy of the IDA curves.
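The surrogate-plus-enrichment loop described in this abstract can be illustrated with a short, generic sketch. The code below is not the authors' implementation: it uses a scikit-learn Gaussian-process (Kriging) surrogate, a hypothetical `run_time_history` placeholder in place of a nonlinear time-history analysis, and the predictive standard deviation as a simple stand-in for the entropy-based selection criterion.

```python
# Minimal sketch: Kriging surrogate with adaptive sequential sampling for IDA.
# `run_time_history` is a hypothetical stand-in for a nonlinear time-history
# analysis returning peak drift for a given intensity measure.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def run_time_history(im):
    # Placeholder structural response (replace with a real FE analysis).
    return 0.02 * im + 0.005 * np.sin(3.0 * im)

im_grid = np.linspace(0.1, 2.0, 200).reshape(-1, 1)   # candidate intensity levels
X = np.array([[0.1], [1.0], [2.0]])                   # initial design points
y = np.array([run_time_history(im[0]) for im in X])

gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
for _ in range(10):                                   # sequential enrichment
    gp.fit(X, y)
    _, std = gp.predict(im_grid, return_std=True)
    new = im_grid[np.argmax(std)]                     # most uncertain candidate
    X = np.vstack([X, new])
    y = np.append(y, run_time_history(new[0]))

ida_curve = gp.predict(im_grid)                       # surrogate IDA curve
```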

3.
4.
Groundwater characterization involves the resolution of unknown system characteristics from observation data, and is often classified as an inverse problem. Inverse problems are difficult to solve due to natural ill-posedness and computational intractability. Here we adopt a simulation–optimization approach that couples a numerical pollutant-transport simulation model with evolutionary search algorithms for solution of the inverse problem. In this approach, the numerical transport model is solved iteratively during the evolutionary search. This process can be computationally intensive since several hundreds to thousands of forward model evaluations are typically required for solution. Given the potential computational intractability of such a simulation–optimization approach, parallel computation is employed to ease and enable the solution of such problems. In this paper, several variations of a groundwater source identification problem are examined in terms of solution quality and computational performance. The computational experiments were performed on the TeraGrid cluster available at the National Center for Supercomputing Applications. The results demonstrate the performance of the parallel simulation–optimization approach in terms of solution quality and computational performance.
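A sketch of the parallel evaluation step in such a simulation–optimization loop is given below. It is only an illustration of the idea, not the TeraGrid implementation: a toy analytic plume stands in for the numerical transport model, and a local process pool stands in for the cluster.

```python
# Minimal sketch of parallel evaluation inside a simulation-optimization loop:
# each candidate source configuration requires a forward transport simulation,
# so one generation of candidates is farmed out to a process pool.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

OBS_WELLS = np.array([[50.0, 10.0], [80.0, -5.0], [120.0, 20.0]])
OBS_CONC = np.array([3.2, 1.1, 0.4])

def forward_model(candidate):
    """Toy steady-state plume: concentration decays with distance from the source."""
    x0, y0, strength = candidate
    d = np.hypot(OBS_WELLS[:, 0] - x0, OBS_WELLS[:, 1] - y0)
    return strength * np.exp(-d / 40.0)

def misfit(candidate):
    return float(np.sum((forward_model(candidate) - OBS_CONC) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    # One generation of candidate sources (x, y, strength) from an evolutionary search.
    population = np.column_stack([rng.uniform(0, 150, 50),
                                  rng.uniform(-30, 30, 50),
                                  rng.uniform(0.5, 10.0, 50)])
    with ProcessPoolExecutor() as pool:
        fitness = list(pool.map(misfit, population))
    best = population[int(np.argmin(fitness))]
    print("best candidate source:", best)
```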

5.
Highly detailed physically based groundwater models are often applied to make predictions of system states under unknown forcing. The required analysis of uncertainty is often infeasible due to the high computational demand. We combine two possible solution strategies: (1) the use of faster surrogate models; and (2) a robust data worth analysis combining quick first-order second-moment uncertainty quantification with null-space Monte Carlo techniques to account for parametric uncertainty. A structurally and parametrically simplified model and a proper orthogonal decomposition (POD) surrogate are investigated. Data worth estimations by both surrogates are compared against estimates by a complex MODFLOW benchmark model of an aquifer in New Zealand. Data worth is defined as the change in post-calibration predictive uncertainty of groundwater head, river–groundwater exchange flux, and drain flux data, compared to the calibrated model. It incorporates existing observations, potential new measurements of system states (“additional” data) as well as knowledge of model parameters (“parametric” data). The data worth analysis is extended to account for non-uniqueness of model parameters by null-space Monte Carlo sampling. Data worth estimates of the surrogates and the benchmark show good agreement for both surrogates in estimating the worth of existing data. The structural simplification surrogate only partially reproduces the worth of “additional” data and is unable to estimate “parametric” data, while the POD model is in agreement with the complex benchmark for both “additional” and “parametric” data. The variance of the POD data worth estimates suggests the need to account for parameter non-uniqueness, as presented here, for robust results.
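The POD surrogate mentioned above can be sketched generically: snapshots of model output are compressed onto their leading singular vectors. The example below uses a synthetic snapshot matrix; it illustrates the reduction step only, not the study's MODFLOW surrogate or its data worth analysis.

```python
# Minimal sketch of a POD (proper orthogonal decomposition) surrogate:
# snapshots of model states are compressed onto their leading singular vectors.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical snapshot matrix: each column is a model state (e.g. heads)
# for one parameter set / stress period; replace with real model output.
n_cells, n_snapshots = 500, 40
snapshots = rng.standard_normal((n_cells, 5)) @ rng.standard_normal((5, n_snapshots))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1     # retain 99% of the variance
basis = U[:, :r]                               # POD modes

def project(state):
    """Reduced coordinates of a full model state."""
    return basis.T @ state

def reconstruct(coeffs):
    """Surrogate reconstruction from reduced coordinates."""
    return basis @ coeffs

approx = reconstruct(project(snapshots[:, 0]))
print("relative error:", np.linalg.norm(approx - snapshots[:, 0]) / np.linalg.norm(snapshots[:, 0]))
```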

6.
The calibration of hydrological models is formulated as a blackbox optimization problem where the only information available is the objective function value. Distributed hydrological models are generally computationally intensive, and their calibration may require several hours or days which can be an issue for many operational contexts. Different optimization algorithms have been developed over the years and exhibit different strengths when applied to the calibration of computationally intensive hydrological models. This paper shows how the dynamically dimensioned search (DDS) and the mesh adaptive direct search (MADS) algorithms can be combined to significantly reduce the computational time of calibrating distributed hydrological models while ensuring robustness and stability regarding the final objective function values. Five transitional features are described to adequately merge both algorithms. The hybrid approach is applied to the distributed and computationally intensive HYDROTEL model on three different river basins located in Québec (Canada).
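A minimal sketch of the DDS component of such a hybrid is shown below, following the usual Tolson–Shoemaker formulation. The objective function is a placeholder where a full HYDROTEL run would normally go, and the bounds and parameter values are illustrative only.

```python
# Minimal sketch of dynamically dimensioned search (DDS) for model calibration.
# The objective is a placeholder; in practice it would run the hydrological model.
import numpy as np

def dds(objective, lower, upper, max_evals=500, r=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_best = lower + rng.random(lower.size) * (upper - lower)
    f_best = objective(x_best)
    for i in range(1, max_evals):
        # Probability of perturbing each dimension shrinks as the search matures.
        p = 1.0 - np.log(i) / np.log(max_evals)
        perturb = rng.random(lower.size) < p
        if not perturb.any():
            perturb[rng.integers(lower.size)] = True
        x_new = x_best.copy()
        step = r * (upper - lower) * rng.standard_normal(lower.size)
        x_new[perturb] += step[perturb]
        # Reflect candidates that leave the feasible box, then clip as a safeguard.
        x_new = np.where(x_new < lower, 2 * lower - x_new, x_new)
        x_new = np.where(x_new > upper, 2 * upper - x_new, x_new)
        x_new = np.clip(x_new, lower, upper)
        f_new = objective(x_new)
        if f_new < f_best:                     # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Toy objective standing in for a model calibration error.
best_x, best_f = dds(lambda x: np.sum((x - 0.3) ** 2), lower=[0, 0, 0], upper=[1, 1, 1])
```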

7.
The ensemble Kalman filter (EnKF) performs well because the background-error covariance evolves in time, providing a dynamic estimate of the background error and representing its statistical characteristics reasonably. However, the model ensemble makes the EnKF computationally expensive. In this study, two methods, referred to as the static and dynamic sampling methods, are proposed to retain good performance while reducing the computational cost. The ensemble adjustment Kalman filter (EAKF) is used in a global surface-wave model to provide the reference EnKF performance. The 24-h differences of simulated significant wave height (SWH) over one year are used to compose static samples of the ensemble errors, and these errors are used to construct the ensemble states each time observations are available; the same state-update procedure as in the EAKF is then applied to the ensemble states constructed by the static sampling method. The dynamic sampling method constructs the ensemble states in a similar way, but the period of simulated SWH used changes with time; here, the 7 days before and after the observation time are used. To examine the performance of the three schemes (EAKF, static sampling and dynamic sampling), observations from the Jason-2 satellite in 2014 are assimilated into a global wave model, and observations from the Saral satellite are used for validation. The results indicate that the EAKF performs best, while the static sampling method performs worst. The dynamic sampling method improves the assimilation dramatically compared with the static sampling method, and its overall performance is close to that of the EAKF; at low latitudes it even has a slight advantage over the EAKF. In the dynamic and static sampling methods only one model run is required, so the computational cost is reduced sharply. Accordingly, the dynamic sampling method can be treated as an effective alternative to the EnKF that reduces the computational cost while providing good data-assimilation performance.
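The static sampling idea can be illustrated generically: ensemble perturbations are drawn from a library of lagged model differences and a standard stochastic EnKF-style analysis is applied around a single model run. The sketch below uses synthetic numbers throughout and is not the EAKF implementation of the study.

```python
# Minimal sketch of a "static sampling" ensemble update: ensemble perturbations
# are taken from a library of lagged model differences (here synthetic), and a
# stochastic EnKF-style analysis is applied. Generic illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens = 100, 30

# Library of historical 24-h SWH differences (synthetic stand-in).
diff_library = rng.standard_normal((365, n_state))

background = rng.standard_normal(n_state)                 # single model run
perturbations = diff_library[rng.choice(365, n_ens, replace=False)]
perturbations -= perturbations.mean(axis=0)               # centre the sample
ensemble = background + perturbations                     # shape (n_ens, n_state)

H = np.zeros((1, n_state)); H[0, 10] = 1.0                # observe one grid point
obs, obs_err = 0.5, 0.2

X = ensemble - ensemble.mean(axis=0)
HX = ensemble @ H.T
P_HT = X.T @ (HX - HX.mean(axis=0)) / (n_ens - 1)         # cross-covariance
S = np.cov(HX.ravel()) + obs_err**2
K = P_HT / S                                              # Kalman gain, (n_state, 1)

perturbed_obs = obs + obs_err * rng.standard_normal((n_ens, 1))
analysis = ensemble + (perturbed_obs - HX) @ K.T
```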

8.
The main goal of this study is to assess the potential of evolutionary algorithms to solve highly non-linear and multi-modal tomography problems (such as first-arrival traveltime tomography) and their ability to estimate reliable uncertainties. Classical tomography methods apply derivative-based optimization algorithms that require the user to determine the value of several parameters (such as regularization level and initial model) prior to the inversion, as they strongly affect the final inverted model. In addition, derivative-based methods only perform a local search dependent on the chosen starting model. Global optimization methods based on Markov chain Monte Carlo that thoroughly sample the model parameter space are theoretically insensitive to the initial model but turn out to be computationally expensive. Evolutionary algorithms are population-based global optimization methods and are thus intrinsically parallel, allowing them to fully exploit available computer resources. We apply three evolutionary algorithms to solve a refraction traveltime tomography problem, namely differential evolution, competitive particle swarm optimization and the covariance matrix adaptation evolution strategy. We apply these methodologies to a smoothed version of the Marmousi velocity model and compare their performances in terms of optimization and estimates of uncertainty. By performing scalability and statistical analysis over the results obtained with several runs, we assess the benefits and shortcomings of each algorithm.
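As a generic illustration of population-based search applied to traveltime inversion, the sketch below recovers the layer slownesses of a toy straight-ray problem with SciPy's differential evolution; the geometry and noise level are invented for the example and are not the Marmousi setup of the study.

```python
# Minimal sketch: differential evolution applied to a toy traveltime inversion
# (straight rays through a layered slowness model).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
true_slowness = np.array([0.5, 0.3, 0.2])           # s/km per layer (toy values)
ray_lengths = rng.uniform(1.0, 5.0, size=(20, 3))   # km each ray spends per layer
observed_t = ray_lengths @ true_slowness + 0.01 * rng.standard_normal(20)

def misfit(slowness):
    # L2 misfit between predicted and observed first-arrival traveltimes.
    return np.sum((ray_lengths @ slowness - observed_t) ** 2)

result = differential_evolution(misfit, bounds=[(0.05, 1.0)] * 3, seed=3, tol=1e-8)
print("recovered slowness:", result.x)
```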

9.
10.
Smoothing is essential to many oceanographic, meteorological, and hydrological applications. There are two predominant classes of smoothing problems. The first is fixed-interval smoothing, where the objective is to estimate model states within a time interval using all available observations in the interval. The second is fixed-lag smoothing, where the objective is to sequentially estimate model states over a fixed or indefinitely growing interval by restricting the influence of observations within a fixed window of time ahead of the evolving estimation time. In this paper, we use an ensemble-based approach to fixed-interval and fixed-lag smoothing, and synthesize two algorithms. The first algorithm is a fixed-interval smoother whose computation time is linear in the interval. The second algorithm is a fixed-lag smoother whose computation time is independent of the lag length. The complexity of these algorithms is presented, shown to improve upon existing implementations and verified with identical-twin experiments conducted with the Lorenz-95 system. Results suggest that ensemble methods yield efficient fixed-interval and fixed-lag smoothing solutions in the sense that the additional increment for smoothing is a small fraction of either filtering or model propagation costs in a practical ensemble application. We also show that fixed-interval smoothing can perform as fast as fixed-lag smoothing, and it may not be necessary to use a fixed-lag approximation for computational savings alone.
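The fixed-lag idea can be sketched with a toy scalar model: each incoming observation updates not only the current ensemble but also the lagged ensembles still inside the window. The code below is a generic stochastic-EnKF-style illustration, not the algorithms synthesized in the paper.

```python
# Minimal sketch of a fixed-lag ensemble smoother: each new observation updates
# the current ensemble and the lagged states stored in a fixed window.
# Toy scalar random-walk model with identity observation operator.
import numpy as np
from collections import deque

rng = np.random.default_rng(4)
n_ens, lag, obs_err = 50, 3, 0.3
ensemble = rng.standard_normal(n_ens)        # current state ensemble
window = deque(maxlen=lag)                   # lagged ensembles awaiting updates

def enkf_update(states, predicted_obs, perturbed_obs, obs_err):
    """Update an ensemble (current or lagged) through its covariance with
    the predicted observation (stochastic EnKF form)."""
    cov = np.cov(states, predicted_obs)[0, 1]
    gain = cov / (np.var(predicted_obs, ddof=1) + obs_err**2)
    return states + gain * (perturbed_obs - predicted_obs)

for step in range(20):
    ensemble = ensemble + 0.1 * rng.standard_normal(n_ens)     # model propagation
    window.append(ensemble.copy())
    obs = 0.5 + obs_err * rng.standard_normal()                 # synthetic observation
    predicted = ensemble                                        # H = identity here
    perturbed = obs + obs_err * rng.standard_normal(n_ens)      # shared perturbed obs
    # Smooth every state still inside the lag window; the newest entry is the filter update.
    window = deque((enkf_update(s, predicted, perturbed, obs_err) for s in window), maxlen=lag)
    ensemble = window[-1]
```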

11.
Split-operator methods are commonly used to approximate environmental models. These methods facilitate the tailoring of different approximation approaches to different portions of the differential operator and provide a means to split large coupled problems into pieces that are more amenable to parallel computation than the original fully-coupled problem. However, split-operator methods introduce an additional source of approximation error into the solution, which is typically either ignored or controlled heuristically. In this work, we develop two methods to estimate and control the error in split-operator methods, which lead to a dynamic adjustment of the temporal splitting step based upon the error estimators. The proposed methods are shown to yield robust solutions that provide the desired control of error. In addition, for a typical nonlinear reaction problem, the new methods are shown to reduce the solution error by more than two orders of magnitude compared to standard methods for an identical level of computational effort. The algorithms introduced and evaluated have widespread applicability in environmental modeling.
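One common way to estimate and control splitting error is step doubling: a full splitting step is compared against two half steps and the temporal split step is adapted accordingly. The sketch below applies this to a toy scalar problem with Lie splitting; it illustrates the general idea rather than the specific estimators developed in the paper.

```python
# Minimal sketch of error control for operator splitting: the splitting step is
# compared with two half-steps (step doubling) and the split step is adapted
# to keep the estimated local error below a tolerance. Toy scalar problem.
import numpy as np

def linear_step(y, dt, k=-1.0):
    return y * np.exp(k * dt)              # exact solve of y' = k*y

def reaction_step(y, dt, r=-0.5):
    return y / (1.0 - r * dt * y)          # exact solve of y' = r*y**2 (decaying for r < 0)

def lie_split(y, dt):
    return reaction_step(linear_step(y, dt), dt)

y, t, t_end, dt, tol = 1.0, 0.0, 2.0, 0.1, 1e-5
while t < t_end:
    dt = min(dt, t_end - t)
    coarse = lie_split(y, dt)
    fine = lie_split(lie_split(y, dt / 2), dt / 2)
    err = abs(fine - coarse)                # step-doubling error estimate
    if err <= tol:
        y, t = fine, t + dt                 # accept the more accurate solution
    # Step-size controller: first-order (Lie) splitting -> exponent 1/2 on the ratio.
    dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
```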

12.
In recent years sampling approaches have been used more widely than optimization algorithms to find parameters of conceptual rainfall–runoff models, but the difficulty of calibration of such models remains in dispute. The problem of finding a set of optimal parameters for conceptual rainfall–runoff models is interpreted differently in various studies, ranging from simple to relatively complex and difficult. In many papers, it is claimed that novel calibration approaches, so-called metaheuristics, outperform the older ones when applied to this task, but contradictory opinions are also plentiful. The present study aims at calibration of two simple lumped conceptual hydrological models, HBV and GR4J, by means of a large number of metaheuristic algorithms. The tests are performed on four catchments located in regions with relatively similar climatic conditions, but on different continents. The comparison shows that, although parameters found may somehow differ, the performance criteria achieved with simple lumped models calibrated by various metaheuristics are very similar and differences are insignificant from the hydrological point of view. However, occasionally some algorithms find slightly better solutions than those found by the vast majority of methods. This means that the problem of calibration of simple lumped HBV or GR4J models may be deceptive from the optimization perspective, as the vast majority of algorithms that follow a common evolutionary principle of survival of the fittest lead to sub-optimal solutions.
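A generic calibration loop of this kind can be sketched with a toy two-parameter linear-reservoir model standing in for HBV or GR4J and SciPy's dual annealing as one representative metaheuristic, minimizing 1 − NSE; all forcing and "observed" data here are synthetic.

```python
# Minimal sketch of metaheuristic calibration of a lumped conceptual model.
# A two-parameter linear-reservoir toy model stands in for HBV/GR4J.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(5)
rain = rng.gamma(shape=0.3, scale=10.0, size=365)           # synthetic forcing

def simulate(params, rain):
    k, split = params                                       # recession, runoff split
    store, q = 0.0, np.zeros(rain.size)
    for i, p in enumerate(rain):
        store += split * p
        q[i] = k * store                                    # linear reservoir outflow
        store -= q[i]
    return q

obs = simulate([0.3, 0.6], rain) + 0.1 * rng.standard_normal(365)

def neg_nse(params):
    sim = simulate(params, rain)
    return np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)  # = 1 - NSE

result = dual_annealing(neg_nse, bounds=[(0.01, 0.99), (0.1, 1.0)], seed=6)
print("calibrated parameters:", result.x, "NSE:", 1 - result.fun)
```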

13.
Field Demonstrations Using the Waterloo Ground Water Profiler
Use of direct-push sampling tools for rapid investigations of contaminated sites has proliferated in the past several years. A direct-push device, referred to as a ground water sampling profiler, was recently developed at the University of Waterloo. This tool differs from other direct-push tools in that point samples are collected at multiple depths in the same hole without retrieving, decontaminating, and re-driving the tool after each sampling event. The collection of point samples, rather than samples from a longer screened interval, allows an exceptional level of detail to be generated about the vertical distribution of contamination from each hole. The benefits of acquiring this level of detail are contingent on minimization of vertical cross contamination of samples caused by drag down from high concentration zones into underlying low concentration zones. In a detailed study of chlorinated solvent plumes in sandy aquifers, we found that drag down using the profiler is minimal or non-detectable even when the tool is driven through high concentration zones of dissolved chlorinated solvent contamination. Chlorinated solvent concentrations, primarily PCE and TCE at or below a detection limit of 1 μg/L, were obtained directly beneath plumes with maximum concentrations up to thousands of μg/L. Minimal drag down, on the order of a few μg/L to 20 μg/L, may have been observed below chlorinated solvent concentrations of several tens of thousands to hundreds of thousands of μg/L. Drag down through DNAPL zones was not evaluated.

14.
IAEA-MEL participated in five expeditions to the Kara Sea with the aim of assessing the radiological consequences of dumped radioactive wastes in the Novaya Zemlya Bays and Trough. The programme included sampling, in-situ underwater investigations, laboratory analyses of water, sediment and biota samples, the development of a marine radioactivity database, modelling and radiological assessment, the organization of intercomparison exercises and the evaluation of distribution coefficients. Radiometric investigations have shown that no radiologically significant environmental contamination has occurred. Leakages which have led to locally increased levels of radionuclides in sediment have only been observed in Stepovoy and Abrosimov Bays. Computer modelling results suggest that only radiological effects on local and regional scales may be of importance. The global radiological impact of the disposals in the Arctic Seas will be negligible.

15.
The paper discusses the performance and robustness of the Bayesian (probabilistic) approach to seismic tomography enhanced by the numerical Monte Carlo sampling technique. The approach is compared with two other popular techniques, namely the damped least-squares (LSQR) method and the general optimization approach. The theoretical considerations are illustrated by an analysis of seismic data from the Rudna (Poland) copper mine. Contrary to the LSQR and optimization techniques the Bayesian approach allows for construction of not only the “best-fitting” model of the sought velocity distribution but also other estimators, for example the average model which is often expected to be a more robust estimator than the maximum likelihood solution. We demonstrate that using the Markov Chain Monte Carlo sampling technique within the Bayesian approach opens up the possibility of analyzing tomography imaging uncertainties with minimal additional computational effort compared to the robust optimization approach. On the basis of the considered example it is concluded that the Monte Carlo based Bayesian approach offers new possibilities of robust and reliable tomography imaging.
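The Monte Carlo ingredient can be illustrated with a random-walk Metropolis sampler on a toy straight-ray tomography problem, which yields the posterior mean model and its spread in addition to a best-fitting solution. This is a generic sketch with invented numbers, not the Rudna-mine workflow.

```python
# Minimal sketch of Bayesian sampling for a toy tomography problem: random-walk
# Metropolis on layer slownesses, yielding an average model and its uncertainty.
import numpy as np

rng = np.random.default_rng(7)
true_slowness = np.array([0.5, 0.3, 0.2])
ray_lengths = rng.uniform(1.0, 5.0, size=(20, 3))
obs = ray_lengths @ true_slowness + 0.01 * rng.standard_normal(20)
sigma = 0.01

def log_posterior(m):
    if np.any(m <= 0) or np.any(m > 1.0):                 # flat prior on (0, 1]
        return -np.inf
    resid = ray_lengths @ m - obs
    return -0.5 * np.sum(resid**2) / sigma**2

m = np.array([0.4, 0.4, 0.4])
logp = log_posterior(m)
samples = []
for _ in range(20000):
    prop = m + 0.01 * rng.standard_normal(3)              # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:           # Metropolis acceptance
        m, logp = prop, logp_prop
    samples.append(m)

samples = np.array(samples[5000:])                        # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))
```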

16.
During flood events, breaching of flood defences along a river system can have a significant reducing effect on downstream water levels and flood risks. This paper presents a Monte Carlo based flood risk framework for policy decision making, which takes this retention effect into account. The framework is developed to estimate societal flood risk in terms of potential numbers of fatalities and associated probabilities. It is tested on the Rhine–Meuse delta system in the Netherlands, where floods can be caused by high flows in the Rhine and Meuse rivers and/or high sea water levels in the North Sea. Importance sampling is applied in the Monte Carlo procedure to increase computational efficiency of the flood risk computations. This paper focuses on the development and testing of efficient importance sampling strategies for the framework. The development of an efficient importance sampling strategy for river deltas is more challenging than for non-tidal rivers where only discharges are relevant, because the relative influence of river discharge and sea water level on flood levels differs from location to location. As a consequence, sampling methods that are efficient and accurate for one location may be inefficient for other locations or, worse, may introduce errors in computed design water levels. Nevertheless, in the case study described in this paper the required simulation time was reduced by a factor 100 after the introduction of an efficient importance sampling method in the Monte Carlo framework, while at the same time the accuracy of the Monte Carlo estimates was improved.
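The importance sampling idea is illustrated below for a toy exceedance probability: peak levels are drawn from a distribution shifted toward the tail and re-weighted by the likelihood ratio, then compared against crude Monte Carlo. The distributions and threshold are invented for the example and are unrelated to the Rhine–Meuse model.

```python
# Minimal sketch of importance sampling for a rare-event probability: the peak
# water level is sampled from a distribution shifted toward the tail, and each
# sample is weighted by the likelihood ratio.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
nominal = stats.gumbel_r(loc=10.0, scale=1.0)     # "true" peak-level distribution
proposal = stats.gumbel_r(loc=18.0, scale=1.0)    # shifted toward the dangerous tail
threshold = 18.0                                  # level at which flooding occurs
n = 10_000

x = proposal.rvs(size=n, random_state=rng)
weights = nominal.pdf(x) / proposal.pdf(x)        # likelihood ratios
p_is = np.mean((x > threshold) * weights)         # importance-sampling estimate

x_mc = nominal.rvs(size=n, random_state=rng)
p_mc = np.mean(x_mc > threshold)                  # crude Monte Carlo for comparison
print(f"importance sampling: {p_is:.2e}   crude MC: {p_mc:.2e}")
```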

17.
This paper presents an effective approach for achieving minimum cost designs for seismic retrofitting using viscous fluid dampers. A new and realistic retrofitting cost function is formulated and minimized subject to constraints on inter-story drifts at the peripheries of frame structures. The components of the new cost function are related both to the topology and to the sizes of the dampers. This constitutes an important step forward towards a realistic definition of the optimal retrofitting problem. The optimization problem is first posed and solved as a mixed-integer problem. To improve the efficiency of the solution scheme, the problem is then re-formulated and solved by nonlinear programming using only continuous variables. Material interpolation techniques, which have been successfully applied in topology optimization and in multi-material optimization, play a key role in achieving practical final design solutions with a reasonable computational effort. Promising results attained for 3-D irregular frames are presented and compared with those achieved using genetic algorithms.

18.
Seismic data interpolation based on sparse inversion is an important class of interpolation methods, but most such methods are designed for noise-free or high signal-to-noise-ratio data. In practice, seismic data contain various kinds of noise, which makes the interpolation problem considerably harder. The projection-onto-convex-sets (POCS) method is an efficient interpolation algorithm, but its performance on noisy data is unsatisfactory; the weighted POCS method proposed for noisy data can interpolate and denoise simultaneously, but, besides requiring careful choice of the minimum threshold, it introduces an additional weighting factor to realize the denoising. In this paper the weighted POCS method is derived from the iterative thresholding algorithm and shown to be a method for solving an unconstrained optimization problem, in which the weighting factor can be interpreted as the coefficient of the data-misfit term. An improved POCS method is also proposed that, compared with the original POCS method, requires no extra computation and performs interpolation and denoising solely through the choice of thresholds. Numerical experiments demonstrate the computational efficiency of the algorithm and its good interpolation performance on noisy data; results obtained by interpolating first and denoising afterwards confirm the reliability and stability of the simultaneous denoising-and-interpolation algorithm.
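A generic POCS-style loop of the kind discussed above alternates Fourier-domain thresholding with a weighted re-insertion of the observed samples. The 1-D sketch below illustrates only this structure, with an invented signal, sampling mask and weight; it is not the exact scheme derived in the paper.

```python
# Minimal sketch of POCS-style interpolation of incomplete, noisy data: iterate
# between Fourier-domain thresholding (sparsity) and re-inserting the observed
# samples, with a weight controlling how strongly noisy observations are honoured.
import numpy as np

rng = np.random.default_rng(9)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
mask = rng.random(n) < 0.5                             # 50% of samples observed
data = (signal + 0.1 * rng.standard_normal(n)) * mask  # noisy, decimated data

x = data.copy()
n_iter, a = 100, 0.7                                   # a < 1 down-weights noisy obs
for k in range(n_iter):
    spec = np.fft.fft(x)
    thresh = np.quantile(np.abs(spec), 0.90)           # keep the 10% largest coefficients
    spec[np.abs(spec) < thresh] = 0.0                  # hard thresholding (sparsity constraint)
    x = np.fft.ifft(spec).real
    # Weighted data-consistency step: blend the reconstruction with the observations.
    x[mask] = a * data[mask] + (1 - a) * x[mask]

print("rms error:", np.sqrt(np.mean((x - signal) ** 2)))
```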

19.
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise from the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
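The multilevel Monte Carlo structure can be sketched in a few lines: cheap coarse levels are sampled heavily, while expensive fine levels only contribute correction terms through level differences. The `simulate` function below is a hypothetical stand-in for a pore-scale solver with a level-dependent discretization error.

```python
# Minimal sketch of a multilevel Monte Carlo estimator: coarse levels are cheap
# and sampled heavily, while fine levels only correct the bias via level differences.
import numpy as np

rng = np.random.default_rng(10)

def simulate(level, sample):
    """Hypothetical pore-scale quantity of interest (e.g. permeability) computed
    at discretization `level`; the bias decays as the level increases."""
    exact = np.exp(-sample**2)
    return exact + 2.0 ** (-(level + 1)) * np.sin(10 * sample)   # level-dependent error

levels = [0, 1, 2, 3]
n_samples = [4000, 1000, 250, 60]          # fewer samples on expensive fine levels

estimate = 0.0
for level, n in zip(levels, n_samples):
    samples = rng.standard_normal(n)
    fine = np.array([simulate(level, s) for s in samples])
    if level == 0:
        diff = fine                                      # base level: plain MC
    else:
        coarse = np.array([simulate(level - 1, s) for s in samples])
        diff = fine - coarse                             # correction term E[Q_l - Q_{l-1}]
    estimate += diff.mean()

print("MLMC estimate:", estimate)
```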

20.
Geofísica Internacional, 2014, 53(2): 183–198
As a result of a gasoline spill in an urban area, Electrical Resistivity Tomography (ERT), Electromagnetic Profiling (EMP) and Volatile Organic Compounds (VOC) methods were used in order to define the contamination plume and to optimize the drilling and soil sampling activities. The VOC anomalies (recent contamination) indicated that a gas station located at the study site is an active contamination source. The mature contaminated zones defined by the ERT and EMP methods corresponded with low-resistivity anomalies due to the degradation of the hydrocarbon contaminants. The ERT, EMP and VOC results were integrated on a map, allowing the final configuration of contamination plumes and the optimization of drilling and soil/free-product sampling. Laboratory analyses of free-product samples suggest the existence of more than one contamination event at the site, with the presence of recent and degraded hydrocarbon contaminants classified in the gasoline range. This study shows the advantages of the joint application of ERT, EMP and VOC methods at sites with an active contamination source, where the existence of recent and mature contaminants in the subsoil is assumed.
