Similar articles
20 similar articles found.
1.
A multivariate spatial sampling design that uses spatial vine copulas is presented that aims to simultaneously reduce the prediction uncertainty of multiple variables by selecting additional sampling locations based on the multivariate relationship between variables, the spatial configuration of existing locations and the values of the observations at those locations. Novel aspects of the methodology include the development of optimal designs that use spatial vine copulas to estimate prediction uncertainty and, additionally, use transformation methods for dimension reduction to model multivariate spatial dependence. Spatial vine copulas capture non-linear spatial dependence within variables, whilst a chained transformation that uses non-linear principal component analysis captures the non-linear multivariate dependence between variables. The proposed design methodology is applied to two environmental case studies. Performance of the proposed methodology is evaluated through partial redesigns of the original spatial designs. The first application is a soil contamination example that demonstrates the ability of the proposed methodology to address spatial non-linearity in the data. The second application is a forest biomass study that highlights the strength of the methodology in incorporating non-linear multivariate dependence into the design.  相似文献   

2.
We examine the effect of uncertainty due to limited information on the remediation design of a contaminated aquifer using the pump-and-treat method. The hydraulic conductivity and contaminant concentration distributions for a fictitious contaminated aquifer are generated assuming a limited number of sampling locations. Stochastic optimization with multiple realizations is used to account for aquifer uncertainty. The optimization process involves a genetic algorithm (GA). As the number of realizations increases, a greater extraction rate and more wells are needed; the total cost increases, but the optimal remediation designs become more reliable. Stochastic optimization analysis also determines the locations of extraction wells, the variation in extraction rates as a function of well location, and the reliability of the optimal designs. The number of realizations (stack number) at which the design factors converge can be determined, so that effective stochastic optimization is achieved with reduced computational effort. An increase in the variability of the conductivity distribution requires more extraction wells. Information about potential extraction wells can be used to prevent failure of the remediation task.
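As a rough illustration of the multiple-realization stochastic optimization described above, the sketch below couples a simple genetic algorithm to a placeholder cost model: each candidate set of extraction rates is scored by its expected cost over an ensemble of hydraulic-conductivity realizations, with a penalty whenever cleanup fails. The `cleanup_ok` surrogate, the cost coefficients, and the realization ensemble are invented stand-ins for the flow-and-transport simulation, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WELLS, N_REAL, POP, GENS = 5, 50, 40, 60
K_FIELDS = rng.lognormal(mean=-9, sigma=1.0, size=(N_REAL, N_WELLS))  # stand-in K realizations

def cleanup_ok(rates, k_field):
    # Placeholder surrogate for a flow/transport simulation:
    # cleanup succeeds when the total capture capacity exceeds a threshold.
    return np.sum(rates * k_field) > 2.5e-7

def cost(rates, k_field):
    fixed = 1.0e4 * np.count_nonzero(rates > 1e-6)        # well installation
    operating = 5.0e2 * np.sum(rates) * 3.15e7            # pumping over one year
    penalty = 0.0 if cleanup_ok(rates, k_field) else 1.0e6
    return fixed + operating + penalty

def expected_cost(rates):
    # Stochastic objective: average the cost over all K-field realizations.
    return np.mean([cost(rates, k) for k in K_FIELDS])

pop = rng.uniform(0.0, 0.01, size=(POP, N_WELLS))          # extraction rates [m^3/s]
for _ in range(GENS):
    fitness = np.array([expected_cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[: POP // 2]]          # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_WELLS) < 0.5                    # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0.0, 1e-3, N_WELLS) * (rng.random(N_WELLS) < 0.1)  # mutation
        children.append(np.clip(child, 0.0, 0.01))
    pop = np.vstack([parents, children])

best = pop[np.argmin([expected_cost(ind) for ind in pop])]
print("best design (m^3/s):", np.round(best, 4), "expected cost:", expected_cost(best))
```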

3.
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types, and both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification with a modified genetic algorithm for discrete multi-location, multi-type search. The efficiency of the global optimization is enhanced by an archive of past samples and by parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river–groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types.
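A minimal sketch of the linear (first-order) data-worth calculation that underlies this kind of design, not the PEST utilities themselves: the posterior parameter covariance implied by a candidate observation subset is computed from a sensitivity (Jacobian) matrix and propagated to a scalar prediction, and all candidate sensor pairs are ranked by the resulting predictive variance. The sensitivity matrices here are random stand-ins.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_par, n_cand = 8, 12                      # parameters, candidate observation locations/types

J = rng.normal(size=(n_cand, n_par))       # sensitivities of candidate observations to parameters
j_pred = rng.normal(size=n_par)            # sensitivity of the prediction (e.g., mean travel time)
C_prior = np.eye(n_par)                    # prior parameter covariance
obs_var = 0.05                             # measurement-error variance

def predictive_variance(idx):
    """First-order posterior predictive variance for a chosen observation subset."""
    Ji = J[list(idx), :]
    C_post = np.linalg.inv(np.linalg.inv(C_prior) + Ji.T @ Ji / obs_var)
    return j_pred @ C_post @ j_pred

prior_var = j_pred @ C_prior @ j_pred
ranked = sorted(combinations(range(n_cand), 2), key=predictive_variance)
best = ranked[0]
print(f"prior variance {prior_var:.3f}, best sensor pair {best}, "
      f"posterior variance {predictive_variance(best):.3f}")
```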

4.
A number of optimization approaches regarding the design location of groundwater pumping facilities in heterogeneous porous media have elicited little discussion. However, the location of groundwater pumping facilities is an important factor because it affects water resource usage. This study applies two optimization approaches to estimate the best recharge zone and suitable locations of the pumping facilities in southwestern Taiwan for different hydrogeological scales. First, for the regional scale, this study employs numerical modelling, MODFLOW‐96, to simulate groundwater direction and the optimal recharge zone in the study area. Based on the model's calibration and verification results, this study preliminarily utilizes the simulated spatial direction of groundwater and compares the safe yield for each well group in order to determine the best recharge zone. Additionally, for the local scale, the micro‐hydrogeological characteristics are considered before determining the design locations of the pumping facilities. According to drawdown record data from six observation wells derived from pumping tests at the best recharge area, this study further utilizes the modified artificial neural network approach to improve the accuracy of the estimation parameters as well as to analyse the direction and anisotropy of the hydraulic conductivities of an equivalent homogeneous aquifer. The results suggested that the best locations for the pumping facilities are along the more permeable major direction. Copyright © 2011 John Wiley & Sons, Ltd.  相似文献   

5.
We analyze the optimal design of a pumping test for estimating hydrogeologic parameters that are subsequently used to predict stream depletion caused by groundwater pumping in a leaky aquifer. A global optimization method is used to identify the test's optimal duration and the number and locations of observation wells. The objective is to minimize the predictive uncertainty (variance) of the estimated stream depletion, which depends on the sensitivities of depletion and drawdown to the relevant hydrogeologic parameters. The sensitivities are computed analytically from the solutions of Zlotnik and Tartakovsky [Zlotnik, V.A., Tartakovsky, D.M., 2008. Stream depletion by groundwater pumping in leaky aquifers. ASCE Journal of Hydrologic Engineering 13, 43–50], and the results are presented in dimensionless form, facilitating their use for planning pumping tests at a variety of sites with similar hydrogeological settings. We show that stream depletion is generally very sensitive to the aquitard's leakage coefficient and the stream-bed's conductance. The optimal number of observation wells is two, with one located close to the stream and the other close to the pumping well. We also provide guidelines on the test's optimal duration and demonstrate that under certain conditions estimation of the aquitard's leakage coefficient and the stream-bed's conductance requires an unrealistic test duration and/or signal-to-noise ratio.

6.
This study evaluates and compares two methodologies, Monte Carlo simple genetic algorithm (MCSGA) and noisy genetic algorithm (NGA), for cost-effective sampling network design in the presence of uncertainties in the hydraulic conductivity (K) field. Both methodologies couple a genetic algorithm (GA) with a numerical flow and transport simulator and a global plume estimator to identify the optimal sampling network for contaminant plume monitoring. The MCSGA approach yields one optimal design each for a large number of realizations generated to represent the uncertain K-field. A composite design is developed on the basis of those potential monitoring wells that are most frequently selected by the individual designs for different K-field realizations. The NGA approach relies on a much smaller sample of K-field realizations and incorporates the average of objective functions associated with all K-field realizations directly into the GA operators, leading to a single optimal design. The efficacy of the MCSGA-based composite design and the NGA-based optimal design is assessed by applying them to 1000 realizations of the K-field and evaluating the relative errors of global mass and higher moments between the plume interpolated from a sampling network and that output by the transport model without any interpolation. For the synthetic application examined in this study, the optimal sampling network obtained using NGA achieves a potential cost savings of 45% while keeping the global mass and higher moment estimation errors comparable to those errors obtained using MCSGA. The results of this study indicate that NGA can be used as a useful surrogate of MCSGA for cost-effective sampling network design under uncertainty. Compared with MCSGA, NGA reduces the optimization runtime by a factor of 6.5.  相似文献   
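A minimal sketch of the MCSGA composite-design step described above, with the per-realization GA results replaced by random stand-ins: each realization contributes one optimal design, and the composite design keeps the most frequently selected wells. The NGA variant would instead average the objective over a small sample of realizations inside each GA fitness evaluation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_candidates, n_realizations, budget = 30, 200, 8

# Stand-in for the per-realization GA results: one boolean design per K-field realization,
# each selecting `budget` monitoring wells out of the candidate set.
designs = np.zeros((n_realizations, n_candidates), dtype=bool)
for r in range(n_realizations):
    designs[r, rng.choice(n_candidates, size=budget, replace=False)] = True

selection_freq = designs.mean(axis=0)                    # how often each well is chosen
composite = np.argsort(selection_freq)[::-1][:budget]    # most frequently selected wells
print("composite design (well indices):", np.sort(composite))
print("their selection frequencies:", np.round(selection_freq[np.sort(composite)], 2))
```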

7.
Setting limits on groundwater extraction is important to ensure sustainable groundwater management. Lack of extraction data can affect interpretations of historical pressure changes, predictions of future impacts, the accuracy of groundwater model calibration, and the identification of sustainable management options. Yet many groundwater extractions are unmetered. There is therefore a need for models that estimate extraction rates and quantify the uncertainty in model outputs arising from the lack of data. This paper develops such a model within the Generalized Linear Modeling (GLM) framework, using a case study of stock and domestic (SD) extractions in the Surat Cumulative Management Area, a predominantly cattle-farming region in eastern Australia. Various types of extraction observations were used, ranging from metered records to analytically derived estimates. GLMs were developed and applied to estimate property-level extraction amounts, with observation types weighted by their perceived relative accuracy, and to estimate well usage status. The primary variables found to affect property-level extraction rates were yearly average temperature and rainfall, pasture, property area, and number of active wells, while the variables most affecting well usage were well-water electrical conductivity, spatial coordinates, and well age. Results were compared with analytical estimates of property-level extraction, illustrating uncertainties and potential biases across 20 hydrogeological units. Spatial patterns of mean extraction rates (and standard deviations) are presented. It is concluded that GLMs are well suited to the problem of extraction-rate estimation and uncertainty analysis, and are ideal when model verification is supported by measurement of a random sample of properties.
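A minimal sketch of a weighted GLM of the kind described, using statsmodels with a log-link Gamma family for positive extraction volumes; the predictor names, weights, and family choice are illustrative assumptions rather than the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400

# Synthetic property-level predictors (illustrative names only).
temp = rng.normal(20, 3, n)            # yearly average temperature
rain = rng.normal(600, 150, n)         # yearly rainfall
area = rng.lognormal(6, 1, n)          # property area
wells = rng.integers(1, 6, n)          # number of active wells
X = sm.add_constant(np.column_stack([temp, rain, area, wells]))

# Synthetic positive extraction volumes and accuracy weights
# (e.g., metered records weighted higher than analytical estimates).
mu = np.exp(-1.0 + 0.03 * temp - 0.001 * rain + 0.0002 * area + 0.15 * wells)
y = rng.gamma(shape=2.0, scale=mu / 2.0)
w = rng.choice([1.0, 0.3], size=n, p=[0.2, 0.8])

model = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()),
               var_weights=w)
res = model.fit()
print(res.summary())
print("predicted extraction for first 5 properties:", np.round(res.predict(X[:5]), 2))
```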

8.
The design and the management of pump-and-treat (PAT) remediation systems for contaminated aquifers under uncertain hydrogeological settings and parameters often involve decisions that trade off cost optimality against reliability. Both design objectives can be improved by planning site characterization programs that reduce subsurface parameter uncertainty. However, the cost for subsurface investigation often weighs heavily upon the budget of the remedial action and must thus be taken into account in the trade-off analysis. In this paper, we develop a stochastic data-worth framework with the purpose of estimating the economic opportunity of subsurface investigation programs. Since the spatial distribution of hydraulic conductivity is most often the major source of uncertainty, we focus on the direct sampling of hydraulic conductivity at prescribed locations of the aquifer. The data worth of hydraulic conductivity measurements is estimated from the reduction of the overall management cost ensuing from the reduction in parameter uncertainty obtained from sampling. The overall cost is estimated as the expected value of the cost of installing and operating the PAT system plus penalties incurred due to violations of cleanup goals and constraints. The crucial point of the data-worth framework is represented by the so-called pre-posterior analysis. Here, the tradeoff between decreasing overall costs and increasing site-investigation budgets is assessed to determine a management strategy proposed on the basis of the information available at the start of remediation. The goal of the pre-posterior analysis is to indicate whether the proposed management strategy should be implemented as is, or re-designed on the basis of additional data collected with a particular site-investigation program. The study indicates that the value of information is ultimately related to the estimates of cleanup target violations and decision makers’ degree of risk-aversion.  相似文献   
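A minimal Monte Carlo sketch of the pre-posterior idea: the worth of a proposed site-investigation program is the expected reduction in total management cost (remediation plus penalty) once the design can be re-optimized with the new data, compared against the cost of the investigation itself. The two-design decision model, thresholds, and costs below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_mc = 20_000

# Uncertain "true" hydraulic conductivity (log10 m/s) and a simple decision model:
# a cheap PAT design works only if K is low; a conservative design always works.
logK = rng.normal(-4.0, 0.5, n_mc)
COST_CHEAP, COST_SAFE, PENALTY = 1.0e5, 2.5e5, 1.0e6
SAMPLING_COST = 3.0e4

def expected_cost(logk_samples):
    """Pick the design minimizing expected cost under the current belief about K."""
    p_fail_cheap = np.mean(logk_samples > -3.8)
    return min(COST_CHEAP + p_fail_cheap * PENALTY, COST_SAFE)

prior_cost = expected_cost(logK)

# Pre-posterior analysis: for each hypothetical measurement outcome (a noisy K sample),
# form the posterior belief by likelihood weighting, re-optimize, then average.
noise = 0.2
posterior_costs = []
for true_logk in rng.choice(logK, size=500):
    obs = true_logk + rng.normal(0.0, noise)
    w = np.exp(-0.5 * ((logK - obs) / noise) ** 2)   # observation likelihood over prior sample
    post = rng.choice(logK, size=2_000, p=w / w.sum())
    posterior_costs.append(expected_cost(post))

evoi = prior_cost - np.mean(posterior_costs)
print(f"expected value of information: {evoi:,.0f}  (sampling cost {SAMPLING_COST:,.0f})")
print("worth sampling" if evoi > SAMPLING_COST else "not worth sampling")
```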

9.
In this study, uncertainty in model input data (precipitation) and parameters is propagated through a physically based, spatially distributed hydrological model based on the MIKE SHE code. Precipitation uncertainty is accounted for using an ensemble of daily rainfall fields that incorporate four different sources of uncertainty, whereas parameter uncertainty is considered using Latin hypercube sampling. Model predictive uncertainty is assessed for multiple simulated hydrological variables (discharge, groundwater head, evapotranspiration, and soil moisture). Utilizing an extensive set of observational data, effective observational uncertainties for each hydrological variable are assessed. Considering not only model predictive uncertainty but also effective observational uncertainty leads to a notable increase in the number of instances, for which model simulation and observations are in good agreement (e.g., 47% vs. 91% for discharge and 0% vs. 98% for soil moisture). Effective observational uncertainty is in several cases larger than model predictive uncertainty. We conclude that the use of precipitation uncertainty with a realistic spatio‐temporal correlation structure, analyses of multiple variables with different spatial support, and the consideration of observational uncertainty are crucial for adequately evaluating the performance of physically based, spatially distributed hydrological models.  相似文献   
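A minimal sketch of Latin hypercube sampling of model parameters with SciPy; the parameter names and ranges are placeholders, and each sampled row would normally drive one forward run of the hydrological model.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative parameter ranges: horizontal K [m/s], drainage time constant [1/s],
# root depth [m], Manning's M [m^(1/3)/s].
lower = np.array([1e-6, 1e-8, 0.2, 10.0])
upper = np.array([1e-3, 1e-6, 1.5, 40.0])

sampler = qmc.LatinHypercube(d=4, seed=42)
unit = sampler.random(n=100)                 # 100 stratified samples in [0, 1)^4
params = qmc.scale(unit, lower, upper)       # mapped into parameter space

for i, p in enumerate(params[:3]):
    print(f"run {i}: K={p[0]:.2e}, drain={p[1]:.2e}, root={p[2]:.2f}, M={p[3]:.1f}")
# Each row of `params` would parameterize one forward simulation; the spread of the
# resulting discharge/head/ET/soil-moisture outputs gives the model predictive uncertainty.
```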

10.
The comparison between two series of optimal remediation designs using deterministic and stochastic approaches showed a number of converging features. Limited sampling measurements in a hypothetical contaminated aquifer were used to construct the hydraulic conductivity field and the initial concentration distribution used in the optimization process. The deterministic and stochastic approaches employed a single simulation–optimization method and a multiple-realization approach, respectively. For both approaches, the optimization model made use of a genetic algorithm. In the deterministic approach, the total cost, extraction rate, and number of wells used increase when the design must satisfy an intensified concentration constraint. Increasing the stack size in the stochastic approach brings about the same effects. In particular, the change in the selection frequency of the extraction wells with increasing stack size in the stochastic approach can indicate where additional wells are required in the deterministic approach under the intensified constraints. These converging features between the two approaches reveal that a deterministic optimization approach with suitably controlled constraints is sufficient to design reliable remediation strategies, and that the results of a stochastic optimization approach are readily applicable to real contaminated sites.

11.
This paper deals with the design of optimal spatial sampling of water quality variables in remote regions, where logistics are complicated and the optimization of monitoring networks may be critical to maximize the effectiveness of human and material resources. A methodology is introduced that combines the probability of exceeding particular thresholds with a measure of the information provided by each pair of experimental points. This network optimization concept, in which the basic unit of information is not a single spatial location but a pair of spatial locations, is used to emphasize the locations with the greatest information, which are those at the border of the phenomenon (for example, contamination or a quality variable exceeding a given threshold), that is, where the variable at one of the locations in the pair is above the threshold value and at the other is below it. The methodology is illustrated with a case of optimizing the monitoring network by selecting the subset that best reproduces the information provided by an exhaustive survey carried out at a given moment in time but which cannot be repeated systematically because of time or economic constraints.
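A minimal sketch of the pair-based criterion described above: each pair of locations is scored by the probability that exactly one member of the pair exceeds the threshold, which favours pairs straddling the boundary of the phenomenon, and a monitoring subset is then built greedily. The exceedance probabilities are random stand-ins rather than geostatistical estimates.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n_sites, n_keep = 25, 8

p_exceed = rng.uniform(0, 1, n_sites)   # P(variable > threshold) at each site (stand-in)

def pair_information(i, j):
    # Probability that exactly one location of the pair exceeds the threshold,
    # i.e. that the pair straddles the boundary of the phenomenon.
    return p_exceed[i] * (1 - p_exceed[j]) + p_exceed[j] * (1 - p_exceed[i])

scores = {(i, j): pair_information(i, j) for i, j in combinations(range(n_sites), 2)}

# Greedy subset selection: start at the most ambiguous site, then repeatedly add the
# site that contributes the most pair information with respect to those already chosen.
selected = [int(np.argmin(np.abs(p_exceed - 0.5)))]
while len(selected) < n_keep:
    remaining = [s for s in range(n_sites) if s not in selected]
    gains = [sum(scores[tuple(sorted((s, t)))] for t in selected) for s in remaining]
    selected.append(remaining[int(np.argmax(gains))])

print("selected monitoring sites:", sorted(selected))
```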

12.
The present study demonstrates a methodology for optimization of environmental data acquisition. Based on the premise that the worth of data increases in proportion to its ability to reduce the uncertainty of key model predictions, the methodology can be used to compare the worth of different data types, gathered at different locations within study areas of arbitrary complexity. The method is applied to a hypothetical nonlinear, variable density numerical model of salt and heat transport. The relative utilities of temperature and concentration measurements at different locations within the model domain are assessed in terms of their ability to reduce the uncertainty associated with predictions of movement of the salt water interface in response to a decrease in fresh water recharge. In order to test the sensitivity of the method to nonlinear model behavior, analyses were repeated for multiple realizations of system properties. Rankings of observation worth were similar for all realizations, indicating robust performance of the methodology when employed in conjunction with a highly nonlinear model. The analysis showed that while concentration and temperature measurements can both aid in the prediction of interface movement, concentration measurements, especially when taken in proximity to the interface at locations where the interface is expected to move, are of greater worth than temperature measurements. Nevertheless, it was also demonstrated that pairs of temperature measurements, taken in strategic locations with respect to the interface, can also lead to more precise predictions of interface movement.  相似文献   

13.
Ye Zhang, Ground Water, 2014, 52(3): 343–351
Modeling and calibration of natural aquifers with multiple scales of heterogeneity is a challenging task due to limited subsurface access. While computer modeling plays an essential role in aquifer studies, large uncertainty exists in developing a conceptual model of an aquifer and in calibrating the model for decision making. Due to uncertainties such as a lack of understanding of subsurface processes and a lack of techniques to parameterize the subsurface environment (including hydraulic conductivity, source/sink rate, and aquifer boundary conditions), existing aquifer models often suffer nonuniqueness in calibration, leading to poor predictive capability. A robust calibration methodology is needed that can address the simultaneous estimations of aquifer parameters, source/sink, and boundary conditions. In this paper, we propose a multistage and multiscale approach that addresses subsurface heterogeneity at multiple scales, while reducing uncertainty in estimating the model parameters and model boundary conditions. The key to this approach lies in the appropriate development, verification, and synthesis of existing and new techniques of static and dynamic data integration. In particular, based on a given set of observation data, new inversion techniques can be first used to estimate aquifer large‐scale effective parameters and smoothed boundary conditions, based on which parameter and boundary condition estimation can be refined at increasing detail using standard or highly parameterized estimation techniques.  相似文献   

14.
Gurdak JJ, McCray JE, Thyne G, Qi SL, Ground Water, 2007, 45(3): 348–361
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability.  相似文献   
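A minimal sketch of propagating model error (coefficient uncertainty) and data error (explanatory-variable uncertainty) through a logistic regression with Latin hypercube sampling, for a single grid cell; the coefficients, standard errors, and cell values are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc, norm
from scipy.special import expit

# Fitted logistic-regression model (illustrative): intercept + 2 explanatory variables,
# with standard errors from the regression (model error) and from the GIS data (data error).
beta_mean = np.array([-2.0, 0.8, 1.5])
beta_se   = np.array([0.4, 0.2, 0.5])
x_mean    = np.array([1.0, 0.6, 0.3])     # explanatory variables at one grid cell (x0 = 1)
x_se      = np.array([0.0, 0.1, 0.05])

# Latin hypercube sample over all 6 uncertain inputs (3 coefficients + 3 data values),
# mapped to normal distributions through the inverse CDF.
sampler = qmc.LatinHypercube(d=6, seed=7)
z = norm.ppf(sampler.random(n=5000))
betas = beta_mean + z[:, :3] * beta_se
xs    = x_mean    + z[:, 3:] * x_se

p = expit(np.sum(betas * xs, axis=1))     # vulnerability probability for each LHS sample
lo, med, hi = np.percentile(p, [2.5, 50, 97.5])
print(f"predicted vulnerability: median {med:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```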

15.
This research incorporates the generalized likelihood uncertainty estimation (GLUE) methodology in a high-resolution Environmental Protection Agency Storm Water Management Model (SWMM), which we developed for a highly urbanized sewershed in Syracuse, NY, to assess SWMM modelling uncertainties and estimate parameters. We addressed two issues that have long been suggested to have a great impact on GLUE uncertainty estimation: the observations used to construct the likelihood measure, and the sampling approach used to obtain the posterior samples of the input parameters and the prediction bounds of the model output. First, on the basis of Bayes' theorem, we compared the prediction bounds generated from the same Gaussian-distribution likelihood measure conditioned on flow observations of varying magnitude. Second, we employed two sampling techniques, the sampling importance resampling (SIR) and threshold sampling methods, to generate posterior parameter distributions and prediction bounds, on the basis of which the sampling efficiency was compared. In addition, for a better understanding of the hydrological responses of different pervious land covers in urban areas, we developed new parameter sets in SWMM representing the hydrological properties of trees and lawns, which were estimated through the GLUE procedure. The results showed that SIR was a more effective alternative to the conventional threshold sampling method. The combined total-flow and peak-flow data were an efficient alternative to the intensive 5-min flow data for reducing SWMM parameter and output uncertainties. Several runoff control parameters were found to have a great effect on peak flows, including the newly introduced parameters for trees. Copyright © 2013 John Wiley & Sons, Ltd.
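A minimal sketch of GLUE with sampling importance resampling (SIR), using a one-parameter runoff surrogate in place of SWMM; the Gaussian likelihood measure follows the general approach described above, but the toy model and synthetic observations are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def model(theta, rain):
    # Toy surrogate for SWMM: runoff = runoff-coefficient * rainfall.
    return theta * rain

rain = rng.gamma(2.0, 5.0, size=50)                    # synthetic storm depths
q_obs = model(0.45, rain) + rng.normal(0, 1.0, 50)     # synthetic observed flows

# 1. Prior (behavioural range) sampling of the parameter.
theta = rng.uniform(0.1, 0.9, size=20_000)

# 2. Gaussian likelihood measure conditioned on the flow observations.
sigma = 1.0
resid = q_obs[None, :] - model(theta[:, None], rain[None, :])
loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()

# 3. SIR: resample parameters with probability proportional to the likelihood weights.
posterior = rng.choice(theta, size=5_000, replace=True, p=w)

# 4. Posterior summary and prediction bounds for a new 12 mm storm.
q_new = model(posterior, 12.0)
print("posterior runoff coefficient: %.3f ± %.3f" % (posterior.mean(), posterior.std()))
print("90%% prediction bounds for a 12 mm storm: [%.1f, %.1f]"
      % tuple(np.percentile(q_new, [5, 95])))
```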

16.
This review and commentary sets out the need for authoritative and concise information on the expected error distributions and magnitudes in observational data. We discuss the necessary components of a benchmark of dominant data uncertainties and the recent developments in hydrology which increase the need for such guidance. We initiate the creation of a catalogue of accessible information on characteristics of data uncertainty for the key hydrological variables of rainfall, river discharge and water quality (suspended solids, phosphorus and nitrogen). This includes demonstration of how uncertainties can be quantified, summarizing current knowledge and the standard quantitative results available. In particular, synthesis of results from multiple studies allows conclusions to be drawn on factors which control the magnitude of data uncertainty and hence improves provision of prior guidance on those uncertainties. Rainfall uncertainties were found to be driven by spatial scale, whereas river discharge uncertainty was dominated by flow condition and gauging method. Water quality variables presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitudes exceeded 40%. We consider how data uncertainties impact on the interpretation of catchment dynamics, model regionalization and model evaluation. In closing the review, we make recommendations for future research priorities in quantifying data uncertainty and highlight the need for an improved ‘culture of engagement’ with observational uncertainties. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献   

17.
Acoustic emission (AE) monitoring is a non-invasive method of monitoring fracturing both in situ and in experimental rock deformation studies. Until recently, the major impediment to imaging brittle failure within a rock mass has been the accuracy with which the hypocenters can be located. However, recent advances in the location of regional-scale earthquakes have successfully reduced hypocentral uncertainties by an order of magnitude. The least-squares Geiger, master-event relocation, and double-difference methods have been considered in a series of synthetic experiments that investigate their ability to resolve AE hypocentral locations. The effects on AE hypocenter location accuracy of seismic velocity perturbations, uncertainty in the first-arrival pick, array geometry, and the inversion of a seismically anisotropic structure with an isotropic velocity model were tested. Hypocenters determined using the Geiger procedure for a homogeneous, isotropic sample with a known velocity model gave an RMS error in the hypocenter locations of 2.6 mm; in contrast, the double-difference method is capable of reducing the location error of these hypocenters by an order of magnitude. We test uncertainties in the velocity model of up to ±10% and show that the double-difference method can attain the same RMS error as the standard Geiger procedure with a known velocity model. The double-difference method is also capable of precise locations even in a 40% anisotropic velocity structure using an isotropic model for location, attaining an RMS mislocation error of 2.6 mm that is comparable to the RMS mislocation error produced with a known isotropic velocity model using the Geiger approach. We test the effect of sensor geometry on location accuracy and find that, even when sensors are missing, the double-difference method is capable of a 1.43 mm total RMS mislocation compared with 4.58 mm for the Geiger method. The accuracy of the automatic picking algorithms used for AE studies is ±0.5 μs (1 time sample when the sampling interval is 0.2 μs). We investigate how AE locations are affected by the accuracy of first-arrival picking by randomly delaying the actual first arrival by up to 5 time samples. We find that even when noise levels are set to 5 time samples, the double-difference method successfully relocates the synthetic AE events.
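A minimal sketch of the linearized (Geiger-type) location step for a single AE event in a homogeneous, isotropic sample: travel-time residuals are iteratively inverted for origin-time and hypocenter corrections by least squares. The sensor geometry, velocity, and noise level are arbitrary; the double-difference method extends this by inverting differential residuals between event pairs.

```python
import numpy as np

rng = np.random.default_rng(9)
V = 2.8e3                                   # assumed P-wave velocity [m/s]
sensors = rng.uniform(0, 0.1, size=(12, 3)) # receiver positions on a 10 cm sample [m]
true_xyz, true_t0 = np.array([0.04, 0.06, 0.05]), 1.0e-4

def tt(xyz):
    return np.linalg.norm(sensors - xyz, axis=1) / V

# Synthetic first-arrival times with 0.2 microsecond picking noise.
t_obs = true_t0 + tt(true_xyz) + rng.normal(0, 2e-7, len(sensors))

# Geiger iteration: linearize t = t0 + |s - x| / V around the current hypocenter guess.
x, t0 = np.array([0.05, 0.05, 0.05]), 0.0
for _ in range(10):
    d = sensors - x
    r = np.linalg.norm(d, axis=1)
    G = np.column_stack([np.ones(len(sensors)), -d / (r[:, None] * V)])  # dt/d(t0, x, y, z)
    res = t_obs - (t0 + r / V)
    dm, *_ = np.linalg.lstsq(G, res, rcond=None)
    t0 += dm[0]
    x += dm[1:]

print("located hypocenter [mm]:", np.round(x * 1e3, 2))
print("true hypocenter    [mm]:", np.round(true_xyz * 1e3, 2))
print("mislocation [mm]: %.3f" % (np.linalg.norm(x - true_xyz) * 1e3))
```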

18.
Mixed extreme wave climate model for reanalysis databases
Hindcast or wave reanalysis databases (WRDB) constitute a powerful complement to instrumental records in the design of offshore and coastal structures, since they offer important advantages for the statistical characterization of wave climate variables, such as continuous long time records of significant wave heights, mean and peak periods, etc. However, reanalysis data are less accurate than instrumental records, making extreme-value analysis derived from WRDB prone to underpredicting design return-period values. This paper proposes a mixed extreme value model to deal with maxima, which takes full advantage of both (i) hindcast or wave reanalysis data and (ii) instrumental records, reducing the uncertainty in its predictions. The resulting mixed model consistently merges the information given by both kinds of data sets, and it can be applied with any extreme value distribution, such as the generalized extreme value, peaks-over-threshold or Pareto–Poisson models. The methodology is illustrated using both synthetically generated and real data, the latter taken from a location on the northern Spanish coast.
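A minimal sketch of one way to merge the two data sources in a single extreme-value fit: annual maxima from the instrumental record and from the reanalysis enter one GEV log-likelihood with shared parameters, with the less accurate reanalysis maxima down-weighted. This weighting scheme and the synthetic data are assumptions, a simplified stand-in for the mixed model proposed in the paper.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

# Synthetic annual-maximum significant wave heights [m]:
# a short, accurate instrumental record and a long, slightly biased reanalysis record.
true_c, true_loc, true_scale = -0.1, 6.0, 1.2            # scipy's shape c corresponds to -xi
hs_buoy = genextreme.rvs(true_c, loc=true_loc, scale=true_scale, size=15, random_state=1)
hs_wrdb = genextreme.rvs(true_c, loc=true_loc - 0.4, scale=true_scale, size=60, random_state=2)

W_BUOY, W_WRDB = 1.0, 0.5                                # down-weight the reanalysis maxima

def neg_loglik(params):
    c, loc, scale = params
    if scale <= 0:
        return np.inf
    ll = (W_BUOY * genextreme.logpdf(hs_buoy, c, loc=loc, scale=scale).sum()
          + W_WRDB * genextreme.logpdf(hs_wrdb, c, loc=loc, scale=scale).sum())
    return -ll if np.isfinite(ll) else np.inf

fit = minimize(neg_loglik, x0=[0.0, hs_buoy.mean(), hs_buoy.std()], method="Nelder-Mead")
c, loc, scale = fit.x
rp100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print("fitted GEV (c, loc, scale):", np.round(fit.x, 3))
print("100-year significant wave height: %.2f m" % rp100)
```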

19.
Comparison of surface and borehole locations of induced seismicity
Monitoring of induced microseismic events has become an important tool in hydraulic fracture diagnostics and in understanding fractured reservoirs in general. We compare microseismic event locations and their uncertainties using data sets obtained with surface and downhole arrays of receivers. We first model the uncertainties to understand the effect of different acquisition geometries on location accuracy. For a vertical array of receivers in a single monitoring borehole, we find that the largest part of the final location uncertainty is related to estimation of the backazimuth, followed by uncertainty in the vertical position and the radial distance from the receivers. For surface monitoring, the largest uncertainty lies in the vertical position due to the use of only a single phase (usually the P-wave) in the estimation of the event location. In surface monitoring results, lateral positions are estimated robustly and are not sensitive to the velocity model. In this case study, we compare event location solutions from two catalogues of microseismic events: one from a downhole array and the second from a surface array of 1C geophones. Our results show that origin time can be reliably used to find matching events between the downhole and surface catalogues. The locations of the corresponding events display a systematic shift consistent with a poorly calibrated velocity model for the downhole dataset. For this case study, locations derived from surface monitoring show less scatter in both the vertical and horizontal directions.
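A minimal sketch of the catalogue-matching step: events from the surface and downhole catalogues are paired by origin time within a tolerance, and the systematic location shift between matched pairs is summarized. The field layout, tolerance, and synthetic shift are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic catalogues: columns are [origin time (s), east (m), north (m), depth (m)].
n = 40
t = np.sort(rng.uniform(0, 3600, n))
xyz_true = rng.normal([0, 0, 2500], [150, 150, 30], size=(n, 3))
downhole = np.column_stack([t + rng.normal(0, 0.02, n), xyz_true + rng.normal(0, 10, (n, 3))])
surface = np.column_stack([t + rng.normal(0, 0.02, n),
                           xyz_true + [5, -8, 25] + rng.normal(0, 15, (n, 3))])  # systematic shift

TOL = 0.1   # origin-time matching tolerance [s]
shifts = []
for ev in surface:
    dt = np.abs(downhole[:, 0] - ev[0])
    j = np.argmin(dt)
    if dt[j] < TOL:
        shifts.append(ev[1:] - downhole[j, 1:])

shifts = np.array(shifts)
print(f"matched {len(shifts)} of {n} events")
print("mean surface-minus-downhole shift (E, N, depth) [m]:", np.round(shifts.mean(axis=0), 1))
print("scatter (std) [m]:", np.round(shifts.std(axis=0), 1))
```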

20.