Similar Literature
20 similar documents retrieved.
1.
Pump‐and‐treat systems can prevent the migration of groundwater contaminants, and candidate systems are typically evaluated with groundwater models. Such models should be rigorously assessed to determine their predictive capabilities, and numerous tools and techniques for model assessment are available. While various assessment methodologies (e.g., model calibration, uncertainty analysis, and Bayesian inference) are well‐established for groundwater modeling, this paper calls attention to an alternative assessment technique known as screening‐level sensitivity analysis (SLSA). SLSA can quickly quantify first‐order (i.e., main effects) measures of parameter influence in connection with various model outputs. Subsequent comparisons of parameter influence with respect to calibration vs. prediction outputs can suggest gaps in model structure and/or data. Thus, while SLSA has received little attention in the context of groundwater modeling and remedial system design, it can nonetheless serve as a useful and computationally efficient tool for preliminary model assessment. To illustrate the use of SLSA in the context of designing groundwater remediation systems, four SLSA techniques were applied to a hypothetical, yet realistic, pump‐and‐treat case study to determine the relative influence of six hydraulic conductivity parameters. The considered methods were: Taguchi design‐of‐experiments (TDOE); Monte Carlo statistical independence (MCSI) tests; average composite scaled sensitivities (ACSS); and elementary effects sensitivity analysis (EESA). In terms of performance, the various methods identified the same parameters as being the most influential for a given simulation output. Furthermore, the results indicate that the background hydraulic conductivity (KBK) is important for predicting system performance, but calibration outputs are insensitive to this parameter. The observed insensitivity is attributed to a nonphysical specified‐head boundary condition used in the model formulation, which effectively "staples" head values located within the conductivity zone. Thus, potential strategies for improving model predictive capabilities include additional data collection targeting the KBK parameter and/or revision of the model structure to reduce the influence of the specified‐head boundary.
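As a concrete illustration of the EESA technique listed above, the sketch below implements elementary effects (Morris) screening for a generic scalar-output model. It is a minimal sketch assuming a six-parameter toy function; names, bounds, and settings are illustrative, not those of the case-study model.

```python
import numpy as np

def elementary_effects(model, bounds, r=10, delta=0.5, seed=0):
    """Morris/EESA screening: along r random one-at-a-time trajectories,
    perturb each parameter once by `delta` (in unit-scaled space) and
    record the resulting change in model output."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in unit cube
        y = model(lo + x * (hi - lo))
        for i in rng.permutation(k):                # one-at-a-time moves
            x[i] += delta
            y_new = model(lo + x * (hi - lo))
            effects[i].append((y_new - y) / delta)
            y = y_new
    # mu* (mean absolute elementary effect) ranks parameter influence
    return np.array([np.mean(np.abs(e)) for e in effects])

# toy stand-in for a simulation output, with one dominant parameter
model = lambda p: 3.0 * p[0] + 0.5 * p[1] ** 2 + 0.1 * p[2:].sum()
print(elementary_effects(model, [(0.0, 1.0)] * 6))
```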

2.
Hydrological models demand large numbers of input parameters, which must be optimally identified for better simulation of the various hydrological processes. Identifying the most relevant parameters and their values using efficient sensitivity analysis methods helps to better understand model performance. In this study, the physically-based distributed model SHETRAN is used for hydrological simulation of the Netravathi River Basin in south India, and the most important parameters are identified using the Morris screening method. Further, the influence of each individual model parameter on streamflow is quantified using local sensitivity analysis, and optimal parameters are obtained for calibration of the SHETRAN model. The results demonstrate the capability of two-stage sensitivity analysis, combining qualitative and quantitative methods, in the initial screening-out of insignificant model parameters, identifying parameter interactions, and quantifying the contribution of each model parameter to the streamflow. The results of the sensitivity analysis simplified the calibration procedure of SHETRAN for the study area.
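The second, quantitative stage described above can be as simple as a central-difference local sensitivity around the calibrated parameter set. The sketch below shows one hedged way to do this; the parameter vector and model function are placeholders, not SHETRAN's.

```python
import numpy as np

def local_sensitivity(model, p0, rel_step=0.01):
    """Central-difference local sensitivity, scaled to an elasticity
    (% output change per % parameter change) so parameters with
    different units can be compared."""
    p0 = np.asarray(p0, dtype=float)
    y0 = model(p0)
    s = np.empty_like(p0)
    for i in range(p0.size):
        dp = rel_step * p0[i]
        up, dn = p0.copy(), p0.copy()
        up[i] += dp
        dn[i] -= dp
        s[i] = (model(up) - model(dn)) / (2.0 * dp) * (p0[i] / y0)
    return s

params = np.array([5.0, 0.3, 120.0])                 # placeholder values
model = lambda p: p[0] ** 0.5 * p[1] + 0.01 * p[2]   # toy streamflow proxy
print(local_sensitivity(model, params))
```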

3.
C. Dobler & F. Pappenberger, Hydrological Processes, 2013, 27(26): 3922-3940
The increasing complexity of hydrological models results in a large number of parameters to be estimated. To better understand how these complex models work, efficient screening methods are required to identify the most important parameters. This is of particular importance for models that are used within an operational real‐time forecasting chain such as HQsim. The objectives of this investigation are to (i) identify the most sensitive parameters of the complex HQsim model applied in the Alpine Lech catchment and (ii) compare model parameter sensitivity rankings obtained from three global sensitivity analysis techniques. The techniques presented are (i) regional sensitivity analysis, (ii) Morris analysis and (iii) state‐dependent parameter modelling. The results indicate that parameters affecting snow melt as well as processes in the unsaturated soil zone are highly influential in the analysed catchment. The snow melt parameters show clear temporal patterns in their sensitivity, whereas most of the parameters affecting processes in the unsaturated soil zone do not vary in importance across the year. Overall, the maximum degree day factor (meltfunc_max) plays a key role within the HQsim model. Although the parameter sensitivity rankings are equivalent between methods for a number of parameters, differing results were obtained for several key parameters. An uncertainty analysis demonstrates that a parameter ranking obtained from only one method is subject to large uncertainty. Copyright © 2012 John Wiley & Sons, Ltd.
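Of the three techniques compared above, regional sensitivity analysis is the simplest to sketch: Monte Carlo samples are split into behavioural and non-behavioural sets by an output threshold, and the separation of the two marginal parameter distributions measures influence. A minimal sketch with an illustrative toy model rather than HQsim:

```python
import numpy as np
from scipy.stats import ks_2samp

def regional_sensitivity(samples, outputs, threshold):
    """Hornberger-Spear regional sensitivity: a large two-sample
    Kolmogorov-Smirnov statistic between behavioural and non-behavioural
    parameter marginals flags an influential parameter."""
    ok = outputs <= threshold
    return [ks_2samp(samples[ok, j], samples[~ok, j]).statistic
            for j in range(samples.shape[1])]

rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 4))                  # 4 illustrative parameters
err = (X[:, 0] - 0.7) ** 2 + 0.05 * X[:, 1]      # only two affect the misfit
print(regional_sensitivity(X, err, np.median(err)))
```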

4.
Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics makes it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring thousands, tens of thousands, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and to obtain greater scientific insight from ongoing and future modeling efforts.

5.
Advances in modeling lake eutrophication response and optimal watershed regulation decisions
Lake eutrophication is a long-term challenge for the global water environment, and models of eutrophication response and optimal watershed regulation are key to designing cost-effective control schemes. Existing reviews of such models, however, have focused on single aspects such as model development, case applications, sensitivity analysis, or uncertainty analysis, and lack a synthesis addressing the newest lake-management challenges such as nonlinear responses and long-term ecosystem evolution. This paper reviews data-driven statistical models, causally driven mechanistic models, and decision-oriented optimization models. Statistical models include classical statistics, Bayesian statistics, and machine learning, and are commonly used to establish response relationships, characterize time series, and provide forecasts and early warnings. Mechanistic models cover watershed hydrology and pollutant transport as well as lake hydrological, hydrodynamic, water-quality, and ecological processes, and are used to simulate change at different spatial and temporal scales; for complex mechanistic models, sensitivity analysis, parameter calibration, and uncertainty analysis carry high computational costs. Optimization models are combined with mechanistic models into simulation-optimization frameworks; under uncertainty, variants such as stochastic and interval optimization have been developed, and the computational bottleneck can be partly relieved through parallel computing and simplified or surrogate models. The paper identifies the challenges facing lake management, including: (1) how to quantitatively characterize the nonlinear superposition of external inputs and the non-uniformity of lake nitrogen, phosphorus, and algal dynamics; (2) how to strengthen the linkage and precision between optimized regulation decisions and water-quality targets; and (3) how to reveal the long-term trajectories and drivers of lake ecosystem change. Finally, research prospects are proposed for these challenges, mainly including: (1) fusing multi-source data with machine learning algorithms to improve short-term water-quality forecasting accuracy; (2) upscaled or downscaled coupling of biomass-based mechanistic models with behaviour-driven individual-based models to represent material interactions across multiple scales; (3) direct coupling or data assimilation of machine learning algorithms with mechanistic models to reduce simulation error; and (4) integration of multimedia simulation models across different spatial and temporal scales to achieve precise and dynamic optimized regulation.

6.
Realistic environmental models used for decision making typically require a highly parameterized approach. Calibration of such models is computationally intensive because widely used parameter estimation approaches require individual forward runs for each parameter adjusted. These runs construct a parameter-to-observation sensitivity, or Jacobian, matrix used to develop candidate parameter upgrades. Parameter estimation algorithms are also commonly adversely affected by numerical noise in the calculated sensitivities within the Jacobian matrix, which can result in unnecessary parameter estimation iterations and poorer model-to-measurement fit. Ideally, approaches to reduce the computational burden of parameter estimation will also increase the signal-to-noise ratio of observations influential to the parameter estimation even as the number of forward runs decreases. In this work a simultaneous-increments approach, an iterative ensemble smoother (IES), and a randomized Jacobian approach were compared to a traditional approach that uses a full Jacobian matrix. All approaches were applied to the same model developed for decision making in the Mississippi Alluvial Plain, USA. Both the IES and the randomized Jacobian approach achieved a desirable fit and similar parameter fields in many fewer forward runs than the traditional approach; in both cases the fit was obtained in fewer runs than the number of adjustable parameters. The simultaneous-increments approach did not perform as well as the other methods due to its inability to overcome suboptimal dropping of parameter sensitivities. This work indicates that the use of highly efficient algorithms can greatly speed parameter estimation, which in turn increases calibration vetting and the utility of realistic models used for decision making.
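The efficiency of the IES approach comes from using ensemble cross-covariances in place of a finite-difference Jacobian, so the run count scales with ensemble size rather than parameter count. The sketch below is a minimal ensemble-smoother-style update on a synthetic linear problem; it illustrates the idea only and is not the implementation used in the study.

```python
import numpy as np

def ensemble_smoother_update(X, Y, d_obs, obs_err_var, seed=0):
    """One smoother iteration: the parameter-output cross-covariance of the
    ensemble (X: n_par x n_ens, Y: n_obs x n_ens) stands in for the
    Jacobian, so no per-parameter forward runs are required."""
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    Cxy = Xa @ Ya.T / (n_ens - 1)
    Cyy = Ya @ Ya.T / (n_ens - 1) + np.diag(obs_err_var)
    rng = np.random.default_rng(seed)
    D = d_obs[:, None] + rng.normal(0.0, np.sqrt(obs_err_var)[:, None],
                                    size=Y.shape)   # perturbed observations
    return X + Cxy @ np.linalg.solve(Cyy, D - Y)

# synthetic linear "model": 50 parameters, 20 observations, 100 members
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 50))
d_obs = G @ rng.normal(size=50) + rng.normal(0.0, 0.1, size=20)
X = rng.normal(size=(50, 100))                      # prior ensemble
for it in range(3):                                 # a few smoother iterations
    X = ensemble_smoother_update(X, G @ X, d_obs, np.full(20, 0.01), seed=it)
print("fit RMSE:", np.sqrt(np.mean((G @ X.mean(axis=1) - d_obs) ** 2)))
```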

7.
We present results from the resolution and sensitivity analysis of 1D DC resistivity and IP sounding data using non-linear inversion. The inversion scheme uses a theoretically correct Metropolis-Gibbs sampling technique and an approximate method using the numerous models sampled by a global optimization algorithm called very fast simulated annealing (VFSA). VFSA has recently been found to be computationally efficient in several geophysical parameter estimation problems. Unlike conventional simulated annealing (SA), in VFSA the perturbations are generated from the model parameters according to a Cauchy-like distribution whose shape changes with each iteration. This results in an algorithm that converges much faster than standard SA. In the course of finding the optimal solution, VFSA samples several models from the search space. All these models can be used to obtain estimates of uncertainty in the derived solution. This method makes no assumptions about the shape of the a posteriori probability density function in the model space. Here, we carry out a VFSA-based sensitivity analysis with several synthetic and field sounding data sets for resistivity and IP. The resolution capability of the VFSA algorithm, as seen from the sensitivity analysis, is satisfactory. The interpretation of VES and IP sounding data by VFSA, incorporating resolution, sensitivity and uncertainty of layer parameters, would generally be more useful than conventional best-fit techniques.
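A minimal sketch of the VFSA idea described above, using Ingber's Cauchy-like perturbation whose width shrinks with temperature; the three-layer toy misfit and all settings are illustrative stand-ins, not the paper's resistivity/IP forward model.

```python
import numpy as np

def vfsa(objective, lo, hi, n_iter=2000, T0=1.0, c=1.0, seed=0):
    """Very fast simulated annealing: steps are drawn from a Cauchy-like
    distribution whose shape changes with temperature
    T(k) = T0 * exp(-c * k**(1/D)), so moves shrink as the system cools."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    D = lo.size
    x = rng.uniform(lo, hi)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    for k in range(1, n_iter + 1):
        T = T0 * np.exp(-c * k ** (1.0 / D))
        u = rng.uniform(size=D)
        step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
        x_new = np.clip(x + step * (hi - lo), lo, hi)
        f_new = objective(x_new)
        if f_new < fx or rng.uniform() < np.exp(-(f_new - fx) / T):
            x, fx = x_new, f_new                 # Metropolis acceptance
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# toy misfit: recover a 3-layer "resistivity" vector (values illustrative)
true = np.array([10.0, 50.0, 5.0])
misfit = lambda p: np.sum((np.log(p) - np.log(true)) ** 2)
print(vfsa(misfit, lo=[1.0, 1.0, 1.0], hi=[100.0, 100.0, 100.0]))
```

Keeping every sampled model, not just the best one, is what enables the uncertainty estimates discussed in the abstract.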

8.
Modern ground water characterization and remediation projects routinely require calibration and inverse analysis of large three-dimensional numerical models of complex hydrogeological systems. Hydrogeologic complexity can arise from various aquifer characteristics, including complicated spatial hydrostratigraphy and aquifer recharge from infiltration through an unsaturated zone. To keep the numerical models computationally efficient, compromises are frequently made in model development, particularly regarding the resolution of the computational grid and the numerical representation of the governing flow equation. The compromise is required so that the model can be used in calibration, parameter estimation, performance assessment, and analysis of sensitivity and uncertainty in model predictions. However, grid properties and resolution as well as the applied computational schemes can have large effects on forward-model predictions and on inverse parameter estimates. We investigate these effects for a series of one- and two-dimensional synthetic cases representing saturated and variably saturated flow problems. We show that "conformable" grids, despite neglecting terms in the numerical formulation, can lead to accurate solutions of problems with complex hydrostratigraphy. Our analysis also demonstrates that, despite slower computer run times and higher memory requirements for a given problem size, the control volume finite-element method showed an advantage over finite-difference techniques in accuracy of parameter estimation for a given grid resolution for most of the test problems.

9.
Global sensitivity analysis techniques are better suited to analyzing input-output relationships over the full range of parameter variations and model outcomes, as opposed to local sensitivity analysis carried out around a reference point. This article describes three such techniques: (1) stepwise rank regression analysis for building input-output models to identify key contributors to output variance, (2) mutual information (entropy) analysis for determining the strength of nonmonotonic patterns of input-output association, and (3) classification tree analysis for determining what variables or combinations are responsible for driving model output into extreme categories. These techniques are best applied in conjunction with Monte Carlo simulation-based probabilistic analyses. Two examples are presented to demonstrate the applicability of these methods. The usefulness of global sensitivity techniques is examined vis-à-vis local sensitivity analysis methods, and recommendations are provided for their application in ground water modeling practice.
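The first two techniques can be sketched in a few lines against a Monte Carlo sample; the classification tree step is omitted here. Standardized rank-regression coefficients capture monotonic influence, while mutual information also flags the nonmonotonic sine term. A hedged toy example:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)
X = rng.uniform(size=(1000, 5))                  # Monte Carlo inputs
y = 4 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=1000)

# (1) rank regression: standardized coefficients on rank-transformed data
Xr = np.apply_along_axis(rankdata, 0, X)
yr = rankdata(y)
srrc = LinearRegression().fit((Xr - Xr.mean(0)) / Xr.std(0),
                              (yr - yr.mean()) / yr.std()).coef_
print("SRRC:", np.round(srrc, 2))

# (2) mutual information: detects the nonmonotonic sin(6*x2) association
print("MI:  ", np.round(mutual_info_regression(X, y, random_state=0), 2))
```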

10.
Spatial (two-dimensional) distributions in ecology are often influenced by spatial autocorrelation. In standard regression models, however, observations are assumed to be statistically independent. In this paper we present an alternative to other methods that allow for autocorrelation. We show that the theory of wavelets provides an efficient method to remove autocorrelations in regression models using data sampled on a regular grid. Wavelets are particularly suitable for data analysis without any prior knowledge of the underlying correlation structure. We illustrate our new method, called the wavelet-revised model, by applying it to multiple regression for both normal linear models and logistic regression. Results are presented for computationally simulated data and real ecological data (distribution of species richness and distribution of the plant species Dianthus carthusianorum throughout Germany). These results are compared to those of generalized linear models and models based on generalized estimating equations. We recommend wavelet-revised models, in particular, as a method for logistic regression using large datasets.
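The sketch below conveys the core idea under simplifying assumptions (it is not the exact wavelet-revised algorithm of the paper, and it assumes the PyWavelets package is available): the coarse wavelet scales that carry the spatial autocorrelation are stripped from the gridded variables before an ordinary regression.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

def remove_coarse_scales(grid, wavelet="haar", level=3):
    """Zero the approximation (coarse-scale) coefficients of a 2D wavelet
    transform; the reconstruction keeps only fine-scale variation, which
    weakens spatial autocorrelation before the regression step."""
    coeffs = pywt.wavedec2(grid, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])
    return pywt.waverec2(coeffs, wavelet)

rng = np.random.default_rng(3)
n = 64
trend = np.add.outer(np.linspace(0, 1, n), np.linspace(0, 1, n))  # smooth field
x = trend + 0.1 * rng.normal(size=(n, n))
y = 2.0 * x + trend + 0.1 * rng.normal(size=(n, n))   # shared spatial trend

xf, yf = remove_coarse_scales(x), remove_coarse_scales(y)
print(LinearRegression().fit(xf.reshape(-1, 1), yf.ravel()).coef_)  # near 2
```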

11.
Numerical modeling of groundwater-surface water interactions provides vital information for determining the extent of nutrient transport, quantifying water budgets, and delineating zones of ecological support. The hydrologic data that drive these models are often collected at disparate scales and subsequently incorporated into numerical models through upscaling techniques such as piecewise constancy or geostatistical methods. However, these techniques either use basic interpolation methods, which often oversimplify the system of interest, or utilize complex statistical methods that are computationally expensive, time consuming, and generate complex subsurface configurations. We propose a bulk parameter termed “vertically integrated hydraulic conductivity” (KV), defined as the depth-integrated resistance to fluid flow sensed at the groundwater-surface water interface, as an alternative to hydraulic conductivity when investigating vertical fluxes across the groundwater-surface water interface. This bulk parameter replaces complex subsurface configurations in situations dominated by vertical fluxes and where heterogeneity is not of primary importance. To demonstrate the utility of KV, we extracted synthetic temperature time series data from a forward numerical model under a variety of scenarios and used those data to quantify vertical fluxes using the amplitude ratio method. These quantified vertical fluxes and the applied hydraulic head gradient were subsequently input into Darcy's law and used to quantify KV. This KV was then directly compared to the equivalent hydraulic conductivity (KT) of an infinitely extending layer. Vertically integrated hydraulic conductivity allows for more accurate and robust flow modeling across the groundwater-surface water interface in instances where complex heterogeneities are not of primary concern.
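The workflow above reduces to two steps that are easy to sketch: extract the diurnal amplitude ratio from paired temperature records, and invert Darcy's law, KV = qL/Δh, for the bulk parameter. In this hedged illustration the flux value is a placeholder standing in for the amplitude-ratio-derived flux (the conversion from amplitude ratio to flux is omitted); all numbers are synthetic.

```python
import numpy as np

def diurnal_amplitude(series, dt_hours):
    """Amplitude of the 1 cycle-per-day component via FFT."""
    n = len(series)
    freqs = np.fft.rfftfreq(n, d=dt_hours)            # cycles per hour
    spec = 2.0 * np.abs(np.fft.rfft(series - series.mean())) / n
    return spec[np.argmin(np.abs(freqs - 1.0 / 24.0))]

t = np.arange(0.0, 24 * 10, 0.25)                     # 10 days of 15-min data
shallow = 15 + 3.0 * np.sin(2 * np.pi * t / 24)
deep = 15 + 1.2 * np.sin(2 * np.pi * t / 24 - 0.8)    # damped and lagged

Ar = diurnal_amplitude(deep, 0.25) / diurnal_amplitude(shallow, 0.25)
print("amplitude ratio:", round(Ar, 3))               # 0.4 here

# Darcy step, with a placeholder flux q standing in for the Ar-derived value
q = 1.5e-6           # vertical flux, m/s (hypothetical)
dh, L = 0.05, 0.5    # head difference (m) over sensor spacing (m)
K_V = q * L / dh     # vertically integrated hydraulic conductivity, m/s
print("K_V =", K_V, "m/s")
```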

12.
This paper evaluates a class of practical optimization techniques for parameter identification of realistic structural dynamic systems. The techniques involve quasi-Newton methods together with an efficient procedure for estimating complicated error functions. The optimization procedures are verified through their application to several representative examples, including finite-element models of realistic structural systems. Extensive numerical and graphical results demonstrate the effects of various optimization algorithm parameters on the rate of convergence of the objective function, the parameter vector error norm and the gradient norm. Guidelines are presented as an aid for addressing several significant issues in the practical application of structural dynamics optimization procedures, such as sensitivity problems, uniqueness, initial value definition for the parameter vector, convergence rates, constraints, the effect of alternative cost function definitions, accuracy of alternative gradient evaluation procedures, alternative procedures for estimating the inverse of the Hessian matrix and the use of a quadratic approximation of the objective function.
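A minimal quasi-Newton illustration in the spirit of the paper: identifying two parameters of a single-degree-of-freedom oscillator from noisy free-vibration data with BFGS, which builds up the inverse Hessian from gradient differences rather than assembling it. Consistent with the paper's point about initial values, the starting vector is chosen near the solution; the model and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 500)

def response(p, t):
    """Free-vibration response of a 1-DOF oscillator with natural
    frequency wn and damping ratio zeta (the parameters to identify)."""
    wn, zeta = p
    wd = wn * np.sqrt(max(1.0 - zeta ** 2, 1e-9))
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

rng = np.random.default_rng(4)
measured = response([5.0, 0.1], t) + 0.01 * rng.normal(size=t.size)

# quasi-Newton (BFGS) minimization of the squared-error objective
res = minimize(lambda p: np.sum((response(p, t) - measured) ** 2),
               x0=[4.5, 0.15], method="BFGS")
print(res.x)   # should approach [5.0, 0.1]
```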

13.
The application of performance-based design and assessment procedures requires an accurate estimation of local component deformation demands. In the case of steel moment-resisting frames, these are usually defined in terms of plastic rotations. A rigorous estimation of this response parameter is not straightforward, requiring not only the adoption of complex nonlinear structural models but also time-consuming numerical integration calculations. Moreover, the majority of existing codes and guidelines do not provide any guidance on how these response parameters should be estimated. Part 3 of Eurocode 8 (EC8-3) requires the quantification of plastic rotations even when linear methods of analysis are used. Therefore, the aim of the research presented in this paper is to evaluate different methods of quantifying local component demands and to answer the question of how reliable the estimates obtained using the EC8-3 linear analysis procedures are in comparison with more accurate nonlinear methods of analysis, particularly when the linear analysis applicability criterion proposed by EC8-3 is satisfied. An alternative methodology to assess the applicability of linear analysis is proposed which overcomes the important limitations identified in the EC8-3 criterion.

14.
Satellites provide important information on many meteorological and oceanographic variables. State-space models are commonly used to analyse such data sets with measurement errors. In this work, we propose to extend the usual linear and Gaussian state-space model to analyse time series with irregular time sampling, such as the one obtained when keeping all the satellite observations available at some specific location. We discuss parameter estimation using the method of moments and the method of maximum likelihood. Simulation results indicate that the method of moments leads to a computationally efficient and numerically robust estimation procedure suitable for initializing the Expectation-Maximisation algorithm, which is combined with a standard numerical optimization procedure to maximize the likelihood function. The model is validated on sea surface temperature (SST) data from a particular satellite. The results indicate that the proposed methodology can be used to reconstruct realistic SST time series at a specific location and also gives useful information on the quality of satellite measurements and the dynamics of the SST.
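Irregular sampling is handled naturally by a Kalman filter whose state transition depends on the gap length. The sketch below evaluates the Gaussian log-likelihood of irregularly sampled data under a univariate Ornstein-Uhlenbeck state; it is a minimal stand-in for the paper's SST model, with the method-of-moments and EM steps omitted.

```python
import numpy as np

def kalman_loglik(times, obs, lam, sig2, obs_var, x0=0.0, p0=1.0):
    """Log-likelihood under an Ornstein-Uhlenbeck state observed with noise:
    between observations the mean decays by exp(-lam*dt), so irregular
    gaps enter only through the transition step."""
    x, P, ll, t_prev = x0, p0, 0.0, times[0]
    for t, y in zip(times, obs):
        phi = np.exp(-lam * (t - t_prev))
        x, P = phi * x, phi ** 2 * P + sig2 * (1 - phi ** 2) / (2 * lam)
        S = P + obs_var                        # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (y - x) ** 2 / S)
        K = P / S                              # Kalman gain
        x, P, t_prev = x + K * (y - x), (1 - K) * P, t
    return ll

rng = np.random.default_rng(5)
times = np.cumsum(rng.exponential(1.0, size=300))   # irregular sampling
x = np.zeros(300)
for i in range(1, 300):
    phi = np.exp(-0.5 * (times[i] - times[i - 1]))
    x[i] = phi * x[i - 1] + rng.normal(0, np.sqrt((1 - phi ** 2) / 1.0))
y = x + rng.normal(0.0, 0.1, size=300)
# the likelihood prefers the true decay rate (0.5) over a wrong one (2.0)
print(kalman_loglik(times, y, 0.5, 1.0, 0.01),
      kalman_loglik(times, y, 2.0, 1.0, 0.01))
```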

15.
Watershed water quality models are increasingly used in management. However, simulations by such complex models often involve significant uncertainty, especially for non-conventional pollutants, which are often poorly monitored. This study first proposes an integrated framework for watershed water quality modeling. Within this framework, the Probabilistic Collocation Method (PCM) was applied to a WARMF model of diazinon pollution to assess the modeling uncertainty. Based on PCM, a global sensitivity analysis method named PCM-VD (VD stands for variance decomposition) was also developed, which quantifies the variance contribution of all uncertain parameters. The study results validated the applicability of PCM and PCM-VD to the WARMF model. The PCM-based approach is much more computationally efficient than conventional Monte Carlo methods. It has also been demonstrated that analysis using the PCM-based approach can provide insights into data collection, model structure improvement, and management practices. It was concluded that the PCM-based approach could play an important role in watershed water quality modeling, as an alternative to conventional Monte Carlo methods, to account for parametric uncertainty and uncertainty propagation.
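The essence of PCM and PCM-VD can be shown with a toy two-parameter polynomial chaos expansion: run the model only at collocation points (roots of the next-order Hermite polynomial), fit the expansion, and read variance contributions from the squared coefficients. The model function below is a stand-in for a WARMF run, and the attribution of the interaction term is simplified.

```python
import numpy as np
from itertools import product

# 2nd-order Hermite chaos basis in two standard-normal inputs;
# He2(x) = x**2 - 1, and E[psi_k**2] gives the norms below
basis = [lambda a, b: np.ones_like(a),
         lambda a, b: a,
         lambda a, b: b,
         lambda a, b: a ** 2 - 1,
         lambda a, b: b ** 2 - 1,
         lambda a, b: a * b]
norms = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 1.0])

model = lambda a, b: np.exp(0.3 * a) + 0.5 * b      # stand-in for a model run

pts = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])  # roots of He3: collocation
X = np.array(list(product(pts, pts)))               # only 9 model runs needed
A = np.column_stack([f(X[:, 0], X[:, 1]) for f in basis])
coef, *_ = np.linalg.lstsq(A, model(X[:, 0], X[:, 1]), rcond=None)

var_terms = coef ** 2 * norms                       # per-term variance share
total = var_terms[1:].sum()
print("S1 =", (var_terms[1] + var_terms[3]) / total,
      "S2 =", (var_terms[2] + var_terms[4] + var_terms[5]) / total)
```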

16.
Quantitative analyses of groundwater flow and transport typically rely on a physically‐based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data‐driven models (DDMs) to reduce the predictive error of physically‐based groundwater models. Two machine learning techniques, instance‐based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real‐world case studies: the Republican River Compact Administration model and the Spokane Valley‐Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterizations, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root‐mean‐square error (RMSE) of the temporal, spatial, and spatiotemporal predictions of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of structure in the error of the physically‐based model.
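A hedged sketch of the SVR-based DDM idea: learn the structured part of the physically-based model's error from features available at prediction time, then add the learned correction to the model output. The synthetic "head" series and feature choices are illustrative, not those of the two case-study models.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
t = np.arange(600.0)
head_true = 100 + 2 * np.sin(2 * np.pi * t / 365) + 0.002 * t
head_model = head_true - 0.5 * np.sin(2 * np.pi * t / 365 + 0.4)  # structured error
obs = head_true + 0.05 * rng.normal(size=t.size)

# features: the model prediction itself plus simple seasonal encodings
X = np.column_stack([head_model,
                     np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365)])
resid = obs - head_model                       # the error the DDM must learn

ddm = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
ddm.fit(X[:450], resid[:450])                  # train on the first 450 steps

corrected = head_model[450:] + ddm.predict(X[450:])
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("model RMSE:", rmse(head_model[450:], obs[450:]).round(3),
      "corrected:", rmse(corrected, obs[450:]).round(3))
```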

17.
The paper presents a computationally efficient algorithm that integrates a probabilistic, non-Gaussian parameter estimation approach for nonlinear finite element models with the performance-based earthquake engineering (PBEE) framework for accurate performance evaluations of instrumented civil infrastructure. The algorithm first utilizes a minimum variance framework to fuse predictions from a numerical model of a civil infrastructure with its measured behavior during a past earthquake, updating the parameters of the numerical model, which is then used for performance prediction during future earthquakes. A nonproduct quadrature rule, based on the conjugate unscented transformation, enables the computationally efficient model prediction, model-data fusion, and performance evaluation. The algorithm is illustrated and validated on the Meloland Road overpass, a heavily instrumented highway bridge in El Centro, CA, which experienced three moderate earthquake events in the past. The benefits of integrating measurement data into the PBEE framework are highlighted by comparing the damage fragilities and annual damage probabilities of the bridge estimated using the presented algorithm with those estimated using the conventional PBEE approach.

18.
Thermal methods are promising for remediating fractured geologic media contaminated with volatile organic compounds, and the success of this process depends on coupled heat transfer, multiphase flow, and thermodynamics. This study analyzed field‐scale removal of trichloroethylene (TCE) and heat transfer behavior in boiling fractured geologic media using the multiple interacting continua method. This method can resolve local gradients in the matrix and is less computationally demanding than alternative methods such as discrete fracture‐matrix models. A 2D axisymmetric model was used to simulate a single element of symmetry in a repeated pattern of extraction wells inside a large heated zone and to evaluate the effects of parameter sensitivity on contaminant recovery. The results showed that the removal of TCE increased with matrix permeability, and the removal rate was more sensitive to matrix permeability than to any other parameter. Increasing fracture density promoted TCE removal, especially when the matrix permeability was low (e.g., <10^-17 m2). A 3D model was used to simulate an entire treatment zone and the surrounding groundwater in fractured material, with the interaction between them being considered. Boiling was initiated in the center of the upper part of the heated region and expanded toward the boundaries. This boiling process resulted in a large increase in the TCE removal rate and the spread of TCE to the vadose zone and the peripheries of the heated zone. The incorporation of extraction wells helped keep the contaminant from migrating to far regions. After 22 d, more than 99.3% of the TCE mass was recovered in the simulation.

19.
Forecasting the state of large marine ecosystems is important for many economic and public health applications. However, advanced three-dimensional (3D) ecosystem models, such as the European Regional Seas Ecosystem Model (ERSEM), are computationally expensive, especially when implemented within an ensemble data assimilation system requiring several parallel integrations. As an alternative to 3D ecological forecasting systems, we propose to implement a set of regional one-dimensional (1D) water-column ecological models that run at a fraction of the computational cost. The 1D model domains are determined using a Gaussian mixture model (GMM)-based clustering method and satellite chlorophyll-a (Chl-a) data. Regionally averaged Chl-a data are assimilated into the 1D models using the singular evolutive interpolated Kalman (SEIK) filter. To laterally exchange information between subregions and improve the forecasting skill, we introduce a new correction step to the assimilation scheme, in which we assimilate a statistical forecast of future Chl-a observations based on information from neighbouring regions. We apply this approach to the Red Sea and show that the assimilative 1D ecological models can forecast surface Chl-a concentration with high accuracy. The statistical assimilation step further improves the forecasting skill by as much as 50%. This general approach of clustering large marine areas and running several interacting 1D ecological models is very flexible. It allows many combinations of clustering, filtering and regression techniques to be used and can be applied to build efficient forecasting systems in other large marine ecosystems.
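The clustering step that defines the 1D model domains can be sketched with scikit-learn: each grid cell is described by its monthly Chl-a climatology and assigned to a mixture component. The two synthetic regimes below are illustrative, not Red Sea data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# synthetic log-Chl-a climatologies: grid cells x 12 monthly means
bloom_early = [0.5, 0.8, 1.0, 0.6, 0.3, 0.1, 0.0, 0.0, 0.1, 0.2, 0.3, 0.4]
bloom_late  = [0.1, 0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 0.7, 0.5, 0.3, 0.2, 0.1]
cells = np.vstack([rng.normal(bloom_early, 0.1, (200, 12)),
                   rng.normal(bloom_late, 0.1, (200, 12))])

gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(cells)
labels = gmm.predict(cells)            # each cell -> one 1D-model subregion
print(np.bincount(labels[:200], minlength=2),
      np.bincount(labels[200:], minlength=2))
```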

20.
Concern has been expressed that anthropogenic climate change may lead to a slowdown or even collapse of the Atlantic thermohaline circulation (THC). Because of the possibly severe consequences that such an event could have for the northern North Atlantic and northwestern Europe, integrated assessment models (IAMs) are needed to explore the associated political and socioeconomic implications. State-of-the-art climate models representing the THC are, however, often too complex to be incorporated into an integrated assessment framework. In this paper we present a low-order model of the Atlantic THC which meets the main requirements of IAMs: it (1) is physically based, (2) is computationally highly efficient, (3) allows for comprehensive uncertainty analysis and (4) can be linked to the globally aggregated climate models that are mostly used in IAMs. The model is an interhemispheric extension of the seminal Stommel model. Its parameters are determined by a least-squares fit to the output of a coupled climate model of intermediate complexity. Results of a number of transient global warming simulations indicate that the model is able to reproduce many features of the behaviour of coupled ocean-atmosphere circulation models, such as the sensitivity of the THC to the amount, regional distribution and rate of climate change.
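For orientation, a minimal sketch of Stommel-type box dynamics in a common non-dimensional form (parameter values are illustrative; the paper's model is an interhemispheric extension fitted to an intermediate-complexity climate model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def stommel(t, y, eta1=3.0, eta2=1.0, eta3=0.3):
    """Non-dimensional Stommel two-box model: T and S are the pole-equator
    temperature and salinity differences; overturning strength q = |T - S|."""
    T, S = y
    q = abs(T - S)
    return [eta1 - T * (1.0 + q), eta2 - S * (eta3 + q)]

sol = solve_ivp(stommel, (0.0, 100.0), [2.0, 1.0])
T, S = sol.y[:, -1]
print("equilibrium overturning strength q =", round(abs(T - S), 3))
```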
