Similar Literature
1.
Like tree rings, high-resolution soil sampling of low-permeability (low-k) zones can be used to evaluate the style of source history at contaminated sites (i.e., the historical pattern of concentration and composition versus time since releases occurred at the interface with the low-k zone). This is valuable for the development of a conceptual site model (CSM) and can serve as an important line of evidence supporting monitored natural attenuation (MNA) as a long-term remedy. Source histories were successfully reconstructed at two sites at Naval Air Station Jacksonville using a simple one-dimensional (1D) model. The plume arrival time and historical composition were reconstructed from the time of the initial releases, which were suspected to have occurred decades earlier. At the first site (Building 106), the source reconstructions showed relatively constant source concentrations but significant attenuation over time in the downgradient plume in the transmissive zone, suggesting MNA may not be an appropriate remedy if source control is a requirement, although attenuation processes are clearly helping to maintain plume stability and reduce risk. At the second site (Building 780), source concentrations in the transmissive zone showed approximately a one-order-of-magnitude decline over time, but apparently less attenuation in the downgradient plume. The source reconstruction method appeared to reflect site remediation efforts (excavation, soil vapor extraction) implemented in the 1990s. Finally, a detailed analysis using molecular biological tools, carbon isotopes, and by-products suggests that most degradation activity is associated with high-k zones rather than with low-k zones at these source areas. Overall, the source reconstruction methodology provided insight into historical concentration trends not obtainable otherwise given the limited long-term monitoring data.
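A minimal sketch of the kind of 1D forward model such a reconstruction rests on: diffusion from a constant-concentration source at the transmissive/low-k interface, fit to a measured soil-core profile. The erfc form, parameter values, and "data" below are illustrative assumptions, not the Jacksonville results.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import least_squares

def profile(z, t_years, C0, D=1e-10):
    """C(z, t) = C0 * erfc(z / (2*sqrt(D*t))) for a constant source C0 at
    the interface; D is an effective diffusion coefficient (m^2/s)."""
    t = t_years * 365.25 * 86400.0
    return C0 * erfc(z / (2.0 * np.sqrt(D * t)))

# Hypothetical core data: depth into the low-k zone (m) vs. conc. (mg/L).
z_obs = np.array([0.05, 0.15, 0.30, 0.60, 1.00])
c_obs = np.array([9.1, 6.8, 4.1, 1.2, 0.2])

# Fit the source concentration and the elapsed time since release.
res = least_squares(lambda p: profile(z_obs, p[1], p[0]) - c_obs,
                    x0=[10.0, 30.0], bounds=([0.0, 1.0], [100.0, 80.0]))
C0_fit, t_fit = res.x
print(f"fitted source ~{C0_fit:.1f} mg/L, release ~{t_fit:.0f} years ago")
```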

2.
A new steady-state analytical solution to the two-dimensional radial-flow equation was developed for drawdown (head) conditions in an aquifer with constant transmissivity, no-flow conditions at the top and bottom, constant-head conditions at a known radial distance, and a partially completed pumping well. The solution was evaluated for accuracy by comparison to numerical simulations using MODFLOW. The solution was then used to estimate the rise of the salt water-fresh water interface (upconing) that occurs under a pumping well, and to calculate the critical pumping rate at which the interface becomes unstable, allowing salt water to enter the pumping well. The analysis of salt water-fresh water interface rise assumed no significant effect of recharge on upconing; this assumption was tested and supported using results from a new steady-state analytical solution developed for recharge under two-dimensional radial-flow conditions. The upconing analysis results were evaluated for accuracy by comparison to those from numerical simulations using SEAWAT for salt water-fresh water interface positions under mild pumping conditions. The results from the equation were also compared with those of a published numerical sharp-interface model applied to a case on Cape Cod, Massachusetts. This comparison indicates that the analytical method will likely yield less conservative estimates of the interface rise and maximum allowable pumping rate than a numerical sharp-interface model.
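For orientation, a sketch of the classical (Schmorak-and-Mercado-type) steady upconing estimate, which plays the same role as the analysis described; the paper's new solution is not reproduced here, and all parameter values are illustrative.

```python
import math

def upconing_rise(Q, K, d, rho_f=1000.0, rho_s=1025.0):
    """Steady interface rise (m) directly below a well pumping at Q (m^3/d);
    K = hydraulic conductivity (m/d), d = well-to-interface depth (m)."""
    return Q * rho_f / (2.0 * math.pi * d * K * (rho_s - rho_f))

def critical_rate(K, d, rho_f=1000.0, rho_s=1025.0, crit_frac=1.0 / 3.0):
    """Pumping rate at which the rise reaches crit_frac*d, beyond which
    the interface is commonly taken to become unstable."""
    return 2.0 * math.pi * K * d**2 * (rho_s - rho_f) / rho_f * crit_frac

K, d = 30.0, 20.0                                   # m/d, m
print(f"rise at Q = 500 m^3/d: {upconing_rise(500.0, K, d):.2f} m")
print(f"critical rate:         {critical_rate(K, d):.0f} m^3/d")
```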

3.
Contaminated groundwater in fractured bedrock can expose ecosystems to undesired levels of risk for extended periods due to prolonged back-diffusion from rock matrix to permeable fractures. Therefore, it is key to characterize the diffusive mass loading (intrusion) of contaminants into the rock matrix for successful management of contaminated bedrock sites. Even the most detailed site characterization techniques often fail to delineate contamination in rock matrix. This study presents a set of analytical solutions to estimate the diffusive mass intrusion into matrix blocks, its recovery by pumping, and the concentration rebound when pumping ceases. The analytical models were validated by comparing their results with (1) numerical model results using the same model parameters and (2) observed chloride mass recovery, rebound concentration, and concentration in pumped groundwater at a highly fractured bedrock site in Alberta, Canada. It is also demonstrated that the analytical solutions can be used to estimate the total mass stored in the fractured bedrock prior to any remediation, thereby providing insight into the site contamination history. The predictive results of the analytical models clearly show that successful remediation by pumping depends largely on the duration of diffusive intrusion. The initial mass estimated by the analytical model was used to successfully calibrate a three-dimensional discrete fracture network numerical model, further highlighting the utility of the simple analytical solutions in supplementing more detailed numerical site modeling. Overall, the study shows the utility of simple analytical methods to support long-term management of a contaminated fractured bedrock site, including site investigations and complex numerical modeling.
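A minimal sketch of the assumed form of such a solution for the loading stage: one-sided diffusion into a semi-infinite matrix block held at a constant fracture concentration C0 gives M(t) = 2·φ·C0·√(De·t/π) per unit fracture-wall area. The constants below are illustrative, not the Alberta site values.

```python
import numpy as np

def mass_intruded(t_years, C0=100.0, phi=0.05, De=5e-11):
    """Cumulative diffused mass per unit fracture-wall area (g/m^2).
    C0 in g/m^3, phi = matrix porosity, De = effective diffusion (m^2/s)."""
    t = np.asarray(t_years, dtype=float) * 365.25 * 86400.0
    return 2.0 * phi * C0 * np.sqrt(De * t / np.pi)

for yrs in (1, 10, 50):
    print(f"{yrs:3d} yr of loading: {mass_intruded(yrs):.2f} g/m^2")
# Multiplying by total fracture-wall area gives the kind of pre-remediation
# mass inventory (and hence contamination-history) estimate described above.
```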

4.
A systematic hydrogeologic site characterization has been completed in a fractured rock flow system, with the objective of identifying contaminant migration and fate pathways from a historical release of 1,1,1-trichloroethane (TCA). The study integrated hydrogeologic analysis techniques such as borehole geophysical logging, pumping test analysis, and hydrochemical facies analysis to study the impact of a dense nonaqueous phase liquid (DNAPL) in a sparsely fractured crystalline bedrock. The assessment methodology can be divided into two parts: (1) characterization of the source area, where DNAPL is acting as a residual source of TCA, and (2) characterization of the downgradient plume. Reduction in DNAPL mass in the source area has resulted in significant and sustained reductions in downgradient concentrations, suggesting that remediation of fractured crystalline bedrock contaminated with DNAPL is possible and not "technically infeasible."

5.
Contaminant plumes whose characteristic length is smaller than the horizontal integral scale of the hydraulic conductivity, K, are abundant in shallow, phreatic aquifers. In such cases, the aquifer can be regarded as layered, with K being only a function of the vertical coordinate. The heterogeneity of K plays a critical role in the efficiency of remediation of such sites, for example, by pump-and-treat schemes. The expected efficiency is a random variable, with uncertainty. Quantifying this uncertainty can be of great importance to decision making. In this study, we focus on a case study in the coastal aquifer of Israel and compare two different approaches for constructing realizations of K: continuous and indicator. We observe a significant difference between the constructed realizations, which results in a considerable difference in the predicted remediation efficiency and its uncertainty. Furthermore, we study the effect of conditioning the realizations on a rather limited number of K data points. We find that the conditioning results in a major reduction of the uncertainty. In addition, we compare the results of the transport model to a simplified semi-analytical solution based on the assumption of radial flow. We find good agreement with the three-dimensional numerical model. This result illustrates that the simplified solution can be used to predict the remediation efficiency when the flow in the plume's vicinity can be regarded as radial.
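A minimal sketch of the layered radial-flow idealization invoked at the end of the abstract, under a piston-flow simplification of this note's own (not the authors' semi-analytical solution): each layer takes flow in proportion to its transmissivity, so swept fractions diverge between layers. The lognormal K field and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_layers, b, n_por = 20, 1.0, 0.3      # layer count, thickness (m), porosity
K = rng.lognormal(mean=2.0, sigma=1.0, size=n_layers)   # m/d, one K per layer

Q, R = 200.0, 30.0                     # well rate (m^3/d), plume radius (m)
Qi = Q * K * b / np.sum(K * b)         # per-layer share of the discharge
t_flush = np.pi * R**2 * n_por * b / Qi   # days to sweep one pore volume

def efficiency(t):
    """Fraction of plume volume swept after t days (piston-flow idealization)."""
    return np.mean(np.minimum(t / t_flush, 1.0))

for t in (100, 1000, 10000):
    print(f"t = {t:6d} d: swept fraction = {efficiency(t):.2f}")
```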

6.
This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and incorporates it with MODFLOW/MT3DMS to develop a groundwater simulation-optimization (SO) framework, based on modular design, for the optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy maintains all nondominated solutions throughout the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, increasing the diversity of the searched space. Moreover, neighborhood solutions are generated uniformly using Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of EMOTS, we consider a synthetic groundwater remediation example. The problem formulation consists of two objective functions with continuous decision variables (pumping rates) while meeting water quality requirements. In particular, a sensitivity analysis is carried out on the synthetic case to determine the optimal combination of the heuristic parameters. Furthermore, EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. For both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in the performance metrics of optimality and diversity of the nondominated frontiers, with desirable stability and robustness.
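The neighborhood move rule lends itself to a short sketch: Latin hypercube samples drawn uniformly in a bounded box around a seed solution, clipped to the decision-variable bounds. This is a generic reading of the rule described, with illustrative bounds and sizes.

```python
import numpy as np
from scipy.stats import qmc

def lhs_neighborhood(seed_sol, radius, lb, ub, n_neighbors=10, rng_seed=0):
    """n_neighbors LHS points in [seed - radius, seed + radius], clipped to
    the global decision-variable bounds [lb, ub]."""
    seed_sol, radius = np.asarray(seed_sol), np.asarray(radius)
    sampler = qmc.LatinHypercube(d=seed_sol.size, seed=rng_seed)
    u = sampler.random(n_neighbors)                    # LHS in [0, 1)^d
    lo = np.maximum(seed_sol - radius, lb)
    hi = np.minimum(seed_sol + radius, ub)
    return qmc.scale(u, lo, hi)

# Example: four pumping rates (m^3/d) around a current seed design.
seed_design = [120.0, 80.0, 60.0, 100.0]
nbrs = lhs_neighborhood(seed_design, radius=[20.0] * 4,
                        lb=[0.0] * 4, ub=[200.0] * 4)
print(nbrs.round(1))
```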

7.
The characterization of heterogeneity in hydraulic conductivity (K) is a major challenge for subsurface remediation projects. There are a number of field studies that compare the K estimates obtained using various techniques, but to our knowledge, no field-based studies exist that compare the performance of estimated K heterogeneity fields or the associated characterization costs. In this paper, we compare the costs of characterizing the three-dimensional K heterogeneity, and its uncertainty estimates, for a glaciofluvial aquifer-aquitard sequence at a 15 m × 15 m × 18 m field site situated on the University of Waterloo campus. We compare geostatistical analysis of high-resolution permeameter K data obtained from repacked core samples in five boreholes with hydraulic tomography analysis of four pumping tests consisting of up to 41 monitoring points per test. Aside from the comparison of costs, we also assess the performance of each method by predicting several pumping tests. Our analysis reveals that hydraulic tomography is somewhat more costly than the geostatistical analysis of high-resolution permeameter K data due to the higher capital costs associated with the method. However, the equipment may be reused at other sites; hence these costs may be recovered over the life of the equipment. More significantly, hydraulic tomography is able to capture the most important features of the aquifer-aquitard sequence, leading to more accurate predictions of independent pumping tests. This suggests that more robust remediation systems may be designed if site characterization is performed with hydraulic tomography.

8.
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weights of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of irrigation-data uncertainty and different calibration conditions. The results from the OLS method show the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
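A minimal linear illustration of the weighting idea, not the authors' iterative IUWLS algorithm: residuals whose error budget is inflated by uncertain pumping inputs are down-weighted through a generalized least-squares covariance. (In this zero-mean linear toy the gain is mainly in variance; the bias the paper addresses arises in the nonlinear calibration setting.)

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par, n_wells = 40, 3, 5
J = rng.normal(size=(n_obs, n_par))    # head sensitivities to parameters
G = rng.normal(size=(n_obs, n_wells))  # head sensitivities to pumping rates
theta_true = np.array([1.0, -0.5, 2.0])

sigma_obs, sigma_q = 0.05, 0.3         # measurement vs. pumping error levels
y = (J @ theta_true
     + G @ rng.normal(0.0, sigma_q, n_wells)    # error from uncertain pumping
     + rng.normal(0.0, sigma_obs, n_obs))       # measurement noise

# Effective data covariance = observation noise + propagated input uncertainty.
C = sigma_obs**2 * np.eye(n_obs) + sigma_q**2 * (G @ G.T)
W = np.linalg.inv(C)

theta_ols = np.linalg.lstsq(J, y, rcond=None)[0]
theta_gls = np.linalg.solve(J.T @ W @ J, J.T @ W @ y)
print("OLS estimate:           ", theta_ols.round(3))
print("input-weighted estimate:", theta_gls.round(3))
```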

9.
Community-scale simulations were performed to investigate the risk to groundwater and indoor air receptors downgradient of a contaminated site following the remediation of a long-term source. Six suites of Monte Carlo simulations were performed using a numerical model that accounted for groundwater flow, reactive solute transport, soil gas flow, and vapour intrusion in buildings. The model was applied to a three-dimensional, community-scale (250 m × 1000 m × 14 m) domain containing heterogeneous, spatially correlated distributions of the hydraulic conductivity, fraction of organic carbon, and biodegradation rate constant, which were varied between realizations. The analysis considered results from both individual realizations and the suite of Monte Carlo simulations expressed through several novel, integrated parameters, such as the probability of exceeding a regulatory standard in either groundwater or indoor air. Results showed that exceedance probabilities varied considerably with the consideration of biodegradation in the saturated zone, and were less sensitive to changes in the variance of hydraulic conductivity or the incorporation of heterogeneous distributions of organic carbon at this spatial scale. A sharp gradient in exceedance probability existed at the lateral edges of the plumes due to variability in lateral dispersion, which defined a narrow region of exceedance uncertainty. Differences in exceedance probability between realizations (i.e., due to heterogeneity uncertainty) were similar to differences attributed to changes in the variance of hydraulic conductivity or fraction of organic carbon. Simulated clean-up times, defined by reaching an acceptable exceedance probability, were found to be on the order of decades to centuries in these community-scale domains. Results also showed that the choice of the acceptable exceedance probability level (e.g., 1% vs. 5%) would likely affect clean-up times on the order of decades. Moreover, in the scenarios examined here, the risk of exceeding indoor air standards was greater than that of exceeding groundwater standards at all times and places. Overall, simulations of coupled transport processes, combined with novel spatial and temporal quantification metrics for Monte Carlo analyses, provide practical tools for assessing risk in wider communities when considering site remediation.
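The exceedance-probability metric is simple to state in code. A minimal sketch with a synthetic ensemble standing in for the coupled-model Monte Carlo suites:

```python
import numpy as np

def exceedance_probability(conc, standard):
    """conc: (n_realizations, n_times) simulated concentrations at one
    receptor; returns P(exceedance) vs. time across the ensemble."""
    return np.mean(conc > standard, axis=0)

def cleanup_time(times, p_exceed, acceptable=0.05):
    """First time at which P(exceedance) drops to the acceptable level."""
    below = p_exceed <= acceptable
    return times[np.argmax(below)] if below.any() else np.inf

# Illustrative synthetic ensemble: first-order plume decay with
# realization-to-realization variability (lognormal multiplier).
rng = np.random.default_rng(3)
times = np.linspace(0.0, 200.0, 101)                       # years
conc = rng.lognormal(0.0, 0.8, (500, 1)) * np.exp(-times / 40.0)

p = exceedance_probability(conc, standard=0.2)
print(f"clean-up time at the 5% level: {cleanup_time(times, p):.0f} years")
```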

10.
The values of parameters in a groundwater flow model govern the precision of predictions of future system behavior. Predictive precision, thus, typically depends on an ability to infer values of system properties from historical measurements through calibration. When such data are scarce, or when their information content with respect to parameters that are most relevant to predictions of interest is weak, predictive uncertainty may be high, even if the model is "calibrated." Recent advances help recognize this condition, quantitatively evaluate predictive uncertainty, and suggest a path toward improved predictive accuracy by identifying sources of predictive uncertainty and by determining what observations will most effectively reduce this uncertainty. We demonstrate linear and nonlinear predictive error/uncertainty analyses as applied to a groundwater flow model of Yucca Mountain, Nevada, the United States' proposed site for disposal of high-level radioactive waste. Linear and nonlinear uncertainty analyses are readily implemented as an adjunct to model calibration with medium to high parameterization density. Linear analysis yields contributions made by each parameter to a prediction's uncertainty and the worth of different observations, both existing and yet-to-be-gathered, toward reducing this uncertainty. Nonlinear analysis provides more accurate characterization of the uncertainty of model predictions while yielding their (approximate) probability distribution functions. This article applies the above methods to a prediction of specific discharge and confirms the uncertainty bounds on specific discharge supplied in the Yucca Mountain Project License Application.
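A minimal sketch of the linear (first-order) machinery referred to: posterior parameter covariance from calibration sensitivities, the variance it implies for a prediction, and the "worth" of a yet-to-be-gathered observation measured by the variance reduction it would buy. The matrices are random stand-ins for real model sensitivities.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_par = 30, 4
J = rng.normal(size=(n_obs, n_par))   # calibration Jacobian (stand-in)
y_sens = rng.normal(size=n_par)       # prediction sensitivity vector
sigma = 0.1                           # observation noise std

def prediction_variance(J):
    """Linear posterior parameter covariance, propagated to the prediction."""
    Cpost = np.linalg.inv(J.T @ J / sigma**2)
    return y_sens @ Cpost @ y_sens

base = prediction_variance(J)
j_new = rng.normal(size=(1, n_par))   # candidate yet-to-be-gathered observation
worth = base - prediction_variance(np.vstack([J, j_new]))
print(f"prediction variance: {base:.4f}; reduction from new obs: {worth:.4f}")
# Per-parameter contributions follow from the terms y_i^2 * Cpost[i, i]
# (plus cross terms), which is how each parameter's share is reported.
```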

11.
Soil vapor extraction (SVE) is a prevalent remediation remedy for volatile organic compound (VOC) contaminants in the vadose zone. To support selection of an appropriate condition at which SVE may be terminated for site closure or for transition to another remedy, an evaluation is needed to determine whether vadose zone VOC contamination has been diminished sufficiently to keep groundwater concentrations below threshold values. A conceptual model for this evaluation was developed for VOC fate and transport from a vadose zone source to groundwater when vapor-phase diffusive transport is the dominant transport process. A numerical analysis showed that, for these conditions, the groundwater concentration is controlled by a limited set of parameters, including site-specific dimensions, vadose zone properties, and source characteristics. On the basis of these findings, a procedure was then developed for estimating groundwater concentrations using results from three-dimensional multiphase transport simulations for a matrix of parameter value combinations covering a range of potential site conditions. Interpolation and scaling processes are applied to estimate groundwater concentrations at compliance (monitoring) wells for specific site conditions of interest using the data from the simulation results. The interpolation and scaling methodology using these simulation results provides a far less computationally intensive alternative to site-specific three-dimensional multiphase site modeling, while still allowing for parameter sensitivity and uncertainty analyses. With iterative application, the approach can be used to consider the effect of a diminishing vadose zone source over time on future groundwater concentrations. This novel approach and related simulation results have been incorporated into a user-friendly Microsoft® Excel®-based spreadsheet tool entitled SVEET (Soil Vapor Extraction Endstate Tool), which has been made available to the public.
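A minimal sketch of the interpolation step described, with a toy three-parameter grid standing in for the SVEET simulation matrix; the axes, grid values, and response surface are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Axes of a toy 3-D simulation matrix: source depth (m), recharge (mm/yr),
# source vapor concentration (ppmv). All hypothetical.
depth = np.array([2.0, 5.0, 10.0])
rech = np.array([5.0, 50.0])
csrc = np.array([10.0, 100.0, 1000.0])

# cgw[i, j, k]: "simulated" well concentration (ug/L) for each combination,
# generated here from an arbitrary smooth function in place of real runs.
D, R, C = np.meshgrid(depth, rech, csrc, indexing="ij")
cgw = 0.02 * C * np.exp(-D / 8.0) * (1.0 + R / 100.0)

interp = RegularGridInterpolator((depth, rech, csrc), cgw)
site = np.array([[4.0, 20.0, 300.0]])    # site conditions between grid points
print(f"interpolated well concentration: {interp(site)[0]:.2f} ug/L")
```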

12.
Saez JA, Harmon TC. Ground Water, 2006, 44(2): 244-255.
This work focuses on improving pump-and-treat remediation by optimizing a two-stage operational scheme to reduce the volumes extracted when confronted with nonequilibrium desorption, low-permeability units, and continuous contaminant sources such as non-aqueous phase liquids (NAPL). Q1 and Q2 are the initial short-term high pumping rate and the later long-term low pumping rate, respectively. A two-dimensional ground water flow and transport management model was used to test the proposed strategy for plumes developed from finite (NAPL-free) and continuous (NAPL-driven) contaminant sources in homogeneous and nonhomogeneous (zoned) aquifers. Remediation scenarios were simulated over durations of 2000, 6000, and 15,000 d to determine (1) the optimal time to switch from a preset Q1 to Q2 and (2) the value of Q2. The problem was constrained by mass removal requirements, maximum allowable downgradient concentrations, and practical bounds on Q2. Q1 was fixed at preset values 50% to 200% higher than the single-stage pumping rates (i.e., steady pumping rates maintained during the entire remediation period) necessary to achieve a desired cleanup level and capture the plume. Results for the NAPL-free homogeneous case under nonequilibrium desorption conditions achieved the same level of cleanup as single-stage pumping, while reducing extracted volumes by up to 36%. Comparable savings were obtained with NAPL-driven sources only when the source concentration was reduced by at least 2 orders of magnitude. For the zoned aquifer, the proposed strategy provided volume savings of up to 24% under NAPL-free and reduced-source conditions.
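A minimal sketch of the two-stage scheme's bookkeeping: for a preset Q1, grid-search the switch time and Q2 for the smallest extracted volume that still meets a cleanup constraint. The exponential surrogate below (removal sub-linear in Q, mimicking nonequilibrium tailing) stands in for the 2D flow/transport management model; all numbers are illustrative.

```python
import numpy as np

T = 2000.0                              # total remediation window (d)
Q1 = 150.0                              # preset stage-1 rate (m^3/d)
Q2_bounds = (20.0, 150.0)               # practical bounds on stage-2 rate

def mass_left(t_switch, Q2, m0=1.0, k=1.5e-4):
    """Crude surrogate: removal 'work' sub-linear in Q (Q**0.7) to mimic
    the diminishing returns of nonequilibrium desorption at high rates."""
    work = Q1**0.7 * t_switch + Q2**0.7 * (T - t_switch)
    return m0 * np.exp(-k * work)

best = None
for t_sw in np.linspace(100.0, 1500.0, 57):
    for Q2 in np.linspace(*Q2_bounds, 66):
        if mass_left(t_sw, Q2) <= 0.01:          # require 99% mass removal
            vol = Q1 * t_sw + Q2 * (T - t_sw)    # total extracted volume
            if best is None or vol < best[0]:
                best = (vol, t_sw, Q2)

vol, t_sw, Q2 = best
print(f"switch at {t_sw:.0f} d to Q2 = {Q2:.0f} m^3/d; volume = {vol:.3g} m^3")
```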

13.
An improved seismic hazard model for use in performance-based earthquake engineering is presented. The model is an improved approximation relative to the so-called ‘power law’ model, which is linear in log–log space. The mathematics of the model and the incorporation of uncertainty are briefly discussed. Various means of fitting the approximation to hazard data derived from probabilistic seismic hazard analysis are discussed, including the limitations of the model. Based on ‘exact’ hazard data for major centres in New Zealand, the parameters of the proposed model are calibrated. To illustrate the significance of the proposed model, a performance-based assessment is conducted on a typical bridge via probabilistic seismic demand analysis. The new hazard model is compared to the current power law relationship to illustrate its effects on the risk assessment. The propagation of epistemic uncertainty in the seismic hazard is also considered. To allow further use of the model in conceptual calculations, a semi-analytical method is proposed to calculate the demand hazard in closed form. For the case study shown, the resulting semi-analytical closed-form solution is significantly more accurate than the analytical closed-form solution using the power law hazard model, capturing the ‘exact’ numerical integration solution to within 7% accuracy over the entire range of exceedance rates. Copyright © 2007 John Wiley & Sons, Ltd.
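For reference, the power-law benchmark the paper improves upon admits the well-known closed-form demand hazard: with H(im) = k0·im^(−k) and median demand a·im^b with lognormal dispersion β, λ_EDP(y) = k0·(y/a)^(−k/b)·exp(k²β²/(2b²)). A sketch checking this against numerical integration (illustrative parameters; the paper's improved curved model is not reproduced):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

k0, k = 1e-4, 3.0                 # power-law hazard: H(im) = k0 * im**-k
a, b, beta = 0.05, 1.2, 0.4       # demand: median = a*im**b, dispersion beta

def demand_hazard_closed(edp):
    """Classical closed form for power-law hazard + lognormal demand."""
    return k0 * (edp / a) ** (-k / b) * np.exp(k**2 * beta**2 / (2 * b**2))

def demand_hazard_numeric(edp):
    """lambda(edp) = integral of P(EDP > edp | im) * |dH/dim| dim."""
    im = np.logspace(-3, 1.5, 4000)
    dHdim = k0 * k * im ** (-k - 1)
    p_exc = norm.sf((np.log(edp) - np.log(a * im**b)) / beta)
    return trapezoid(p_exc * dHdim, im)

edp = 0.1
print(f"closed form: {demand_hazard_closed(edp):.3e} /yr")
print(f"numerical:   {demand_hazard_numeric(edp):.3e} /yr")
```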

14.
Assessing uncertainty in estimation of seismic response for PBEE
State-of-the-art approaches to probabilistic assessment of seismic structural reliability are based on simulation of structural behavior via nonlinear dynamic analysis of computer models. Simulations are carried out considering samples of ground motions supposedly drawn from specific populations of signals virtually recorded at the site of interest. This serves to produce samples of structural response from which the failure rate is evaluated, which in turn allows computation of the failure risk (probability) over a time interval of interest. This procedure alone implies that estimation uncertainty affects the probabilistic results; such uncertainty is seldom quantified in risk analyses, although it may be relevant. This short paper discusses some basic issues and some simple statistical tools that can aid the analyst in assessing the impact of sample variability on fragility functions and on the resulting seismic structural risk. On the statistical inference side, the addressed strategies are based on consolidated results such as the well-known delta method and on some resampling plans belonging to the bootstrap family. On the structural side, they rely on assumptions and methods typical of performance-based earthquake engineering applications. Copyright © 2017 John Wiley & Sons, Ltd.
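A minimal sketch of the bootstrap strategy mentioned: resample the (IM, fail/no-fail) pairs from the dynamic analyses, refit a lognormal fragility each time, and read estimation-uncertainty bands off the refits. The data here are synthetic, not from a real structure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(11)
im = np.repeat([0.2, 0.4, 0.6, 0.8, 1.0], 20)       # 20 records per IM stripe
true_med, true_beta = 0.55, 0.45
fails = rng.random(im.size) < norm.cdf(np.log(im / true_med) / true_beta)

def fit_fragility(im, fails):
    """Maximum-likelihood lognormal fragility fit to binary outcomes."""
    def nll(p):
        med, beta = np.exp(p)                        # keep both positive
        pf = norm.cdf(np.log(im / med) / beta).clip(1e-9, 1 - 1e-9)
        return -np.sum(np.where(fails, np.log(pf), np.log1p(-pf)))
    res = minimize(nll, x0=np.log([0.5, 0.4]), method="Nelder-Mead")
    return np.exp(res.x)

medians = []
for _ in range(200):                                 # bootstrap replicates
    idx = rng.integers(0, im.size, im.size)          # resample with replacement
    medians.append(fit_fragility(im[idx], fails[idx])[0])

lo, hi = np.percentile(medians, [5, 95])
point = fit_fragility(im, fails)[0]
print(f"median capacity: {point:.3f} g (90% bootstrap interval {lo:.3f}-{hi:.3f} g)")
```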

15.
In this study, a simulation-based fuzzy chance-constrained programming (SFCCP) model is developed based on possibility theory. The model is solved through an indirect search approach which integrates fuzzy simulation, artificial neural network, and simulated annealing techniques. This approach has the advantages of: (1) handling simulation and optimization problems under uncertainty associated with fuzzy parameters, (2) providing additional information (i.e., the possibility of constraint satisfaction) indicating how much confidence one can place in the decision results, (3) alleviating computational burdens in the optimization process, and (4) reducing the chances of being trapped in local optima. The model is applied to a petroleum-contaminated aquifer located in western Canada to support the optimal design of groundwater remediation systems. The model solutions provide optimal groundwater pumping rates for 3-, 5-, and 10-year pumping schemes. It is observed that the uncertainty significantly affects the remediation strategies. To mitigate such impacts, additional cost is required, either for an increased pumping rate or for reinforced site characterization.

16.
17.
This paper investigates numerical optimization of dense nonaqueous phase liquid (DNAPL) site remediation design considering effects of prediction and measurement uncertainty. Results are presented for a hypothetical problem involving remediation using thermal source reduction (TSR) and bioremediation with electron donor (ED) injection. Pump-and-treat is utilized as a backup measure if compliance criteria are not met. Remediation system design variables are optimized to minimize expected net present value (ENPV) cost. Adaptive criteria are assumed for real-time control of TSR and ED duration. Source zone dissolved concentration data enabled more reliable and lower cost operation of TSR than soil concentration data, but using both soil and dissolved data improved results sufficiently to more than offset the additional cost. Decisions to terminate remediation and monitoring or to initiate pump-and-treat are complicated by measurement noise. Simultaneous optimization of monitoring frequency, averaging period, and lookback periods to confirm decisions, in addition to remediation design variables, reduced ENPV cost. Results indicate that remediation design under conditions of uncertainty is affected by subtle interactions and tradeoffs between design variables, compliance rules, site characteristics, and uncertainty in model predictions and monitoring data. Optimized designs yielded cost savings of up to approximately 50% compared with a nonoptimized design based on common engineering practices. Significant improvements in accuracy and reductions in cost were achieved by recalibrating the model to data collected during remediation and re-optimizing design variables. Repeating this process periodically is advisable to minimize total costs and maximize reliability.

18.
The last decade of performance-based earthquake engineering (PBEE) research has seen a rapidly increasing emphasis placed on the explicit quantification of uncertainties. This paper examines uncertainty consideration in input ground-motion and numerical seismic response analyses as part of PBEE, with particular attention given to the physical consistency and completeness of uncertainty consideration. It is argued that the use of the commonly adopted incremental dynamic analysis leads to a biased representation of the seismic intensity and that when considering the number of ground motions to be used in seismic response analyses, attention should be given to both reducing parameter estimation uncertainty and also limiting ground-motion selection bias. Research into uncertainties in system-specific numerical seismic response analysis models to date has been largely restricted to the consideration of ‘low-level’ constitutive model parameter uncertainties. However, ‘high-level’ constitutive model and model methodology uncertainties are likely significant and therefore represent a key research area in the coming years. It is also argued that the common omission of high-level seismic response analysis modelling uncertainties leads to a fallacy that ground-motion uncertainty is more significant than numerical modelling uncertainty. The author's opinion of the role of uncertainty analysis in PBEE is also presented. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Anyone working on inverse problems is aware of their ill-posed character. This concept (ill-posedness), proposed by J. Hadamard in 1902, admits revision in the case of discrete inverse problems, since it is related to their ill-conditioning and to the use of local optimization methods to find their solution. A more general and interesting approach regarding risk analysis and epistemological decision making would consist in analyzing the existence of families of equivalent model parameters that are compatible with the prior information and predict the observed data within the same error bounds. In other words, the ill-posed character of discrete inverse problems (ill-conditioning) means that their solution is uncertain. Traditionally, nonlinear inverse problems in discrete form have been solved via local optimization methods with regularization, but linear analysis techniques fail to account for the uncertainty in the solution that is adopted. As a result, uncertainty analysis in nonlinear inverse problems has been approached in a probabilistic framework (the Bayesian approach), but these methods are hindered by the curse of dimensionality and by the high computational cost needed to solve the corresponding forward problems. Global optimization techniques are very attractive, but most of the time they are heuristic and have the same limitations as Monte Carlo methods. New research is needed to provide uncertainty estimates, especially in the case of high-dimensional nonlinear inverse problems with very costly forward problems. After the discredit of deterministic methods and some initial years of Bayesian fever, the pendulum now seems to swing back, because practitioners are aware that uncertainty analysis in high-dimensional nonlinear inverse problems cannot (and should not) be solved via random sampling methodologies. The main reason is that the uncertainty "space" of nonlinear inverse problems has a mathematical structure that is embedded in the forward physics and also in the observed data. Thus, problems with structure should be approached via linear algebra and optimization techniques. This paper provides new insights to understand uncertainty from a deterministic point of view, which is a necessary step in designing more efficient methods to sample the uncertainty region(s) of equivalent solutions.

20.
A stochastic optimization model based on an adaptive feedback correction process and surrogate model uncertainty was proposed and applied for remediation strategy design at a dense non-aqueous phase liquid (DNAPL)-contaminated groundwater site. One hundred initial training samples were obtained using the Latin hypercube sampling method. A surrogate model of a multiphase flow simulation model was constructed based on these samples employing the self-adaptive particle swarm optimization kriging (SAPSOKRG) method. An optimization model was built, using the SAPSOKRG surrogate model as a constraint. Then, an adaptive feedback correction process was designed and applied to iteratively update the training samples, surrogate model, and optimization model. Results showed that the training samples, the surrogate model, and the optimization model were effectively improved. However, the surrogate model is an approximation of the simulation model, and some degree of uncertainty remains even after this improvement. Therefore, residuals between the surrogate model and the simulation model were calculated, and an uncertainty analysis was conducted. Based on the uncertainty analysis results, a stochastic optimization model was constructed and solved to obtain optimal remediation strategies at different confidence levels (60, 70, 80, 90, 95%) and under different remediation objectives (average DNAPL removal rate ≥ 70%, ≥ 75%, ≥ 80%, ≥ 85%, ≥ 90%). The optimization results demonstrated that the higher the confidence level and the remediation objective, the more expensive the remediation. Decision makers can therefore weigh remediation costs, confidence levels, and remediation objectives to make an informed choice. This also allows decision makers to assess the reliability of a selected strategy and provides a new tool for DNAPL-contaminated groundwater remediation design.
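A minimal sketch of the surrogate-plus-uncertainty idea: a kriging-type emulator (here an off-the-shelf Gaussian process standing in for SAPSOKRG) replaces the multiphase simulator, and a chance constraint requires the predicted removal rate to clear the objective at a stated confidence using the emulator's own predictive spread. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 100.0, (100, 2))      # 100 training designs: 2 pump rates

def simulator(X):
    """Stand-in for the multiphase flow model: removal rate (%) + noise."""
    return 100.0 * (1.0 - np.exp(-0.02 * X.sum(axis=1))) + rng.normal(0, 1, len(X))

y = simulator(X)
gp = GaussianProcessRegressor(kernel=RBF(20.0), alpha=1.0).fit(X, y)

def feasible(x, target=80.0, confidence=0.90):
    """Chance constraint: P(removal >= target) >= confidence under the GP."""
    mu, sd = gp.predict(np.atleast_2d(x), return_std=True)
    return mu[0] - norm.ppf(confidence) * sd[0] >= target

for x in ([30.0, 30.0], [60.0, 80.0]):
    print(x, "feasible" if feasible(x) else "infeasible")
```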
