Similar Literature
20 similar records found (search time: 31 ms)
1.
2.
Abstract

A hydrological simulation model was developed for conjunctive representation of surface and groundwater processes. It comprises a conceptual soil moisture accounting module, based on an enhanced version of the Thornthwaite model for the soil moisture reservoir, a Darcian multi-cell groundwater flow module, and a module for partitioning water abstractions among water resources. The resulting integrated scheme is highly flexible in the choice of time scales (monthly to daily) and space scales (catchment scale, aquifer scale). Model calibration involved successive phases of manual and automatic sessions. For the latter, an innovative optimization method, the evolutionary annealing-simplex algorithm, was devised. The objective function involves weighted goodness-of-fit criteria for multiple variables with different observation periods, as well as penalty terms for restricting unrealistic water storage trends and deviations from the observed intermittency of spring flows. Checks of the unmeasured catchment responses, made by manually adjusting parameter bounds, guided the choice of the final parameter sets. The model is applied to the particularly complex Boeoticos Kephisos basin, Greece, where it accurately reproduced the main basin response, i.e. the runoff at the basin outlet, as well as other important components. Emphasis is put on the principle of parsimony, which resulted in a computationally efficient model. This is crucial, since the model is to be integrated within a stochastic simulation framework.
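As a rough illustration of the Thornthwaite-type soil moisture accounting mentioned above, a minimal single-bucket sketch in Python (the function name, the single-reservoir structure and all numbers are illustrative assumptions, not the authors' module):

# Minimal Thornthwaite-type soil moisture accounting bucket (monthly step).
# S: soil moisture storage, capped at capacity Smax; excess becomes runoff/percolation.
def soil_moisture_step(S, precip, pet, Smax=150.0):
    S = S + precip
    evap = min(pet, S)           # actual ET limited by available storage
    S = S - evap
    excess = max(0.0, S - Smax)  # water above capacity leaves the reservoir
    S = S - excess
    return S, evap, excess

S = 80.0
for p, e in [(120.0, 40.0), (60.0, 70.0), (10.0, 90.0)]:
    S, aet, q = soil_moisture_step(S, p, e)
    print(f"storage={S:6.1f}  AET={aet:5.1f}  excess={q:5.1f}")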

3.
Automatic calibration of complex subsurface reaction models involves numerous difficulties, including the existence of multiple plausible models, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study investigated a novel procedure for performing simultaneous calibration of multiple models (SCMM). By combining a hybrid global-plus-polishing search heuristic with a biased-but-random adaptive model evaluation step, the new SCMM method calibrates multiple models via efficient exploration of the multi-model calibration space. Central algorithm components are an adaptive assignment of model preference weights, mapping functions relating the uncertain parameters of the alternative models, and a shuffling step that efficiently exploits pseudo-optimal configurations of the alternative models. The SCMM approach was applied to two nitrate contamination problems involving batch reactions and one-dimensional reactive transport. For the chosen problems, the new method produced improved model fits (i.e. up to 35% reduction in the objective function) at significantly reduced computational expense (i.e. a 40–90% reduction in model evaluations), relative to previously established benchmarks. Although the method was effective for the test cases, SCMM relies on a relatively ad hoc approach to assigning intermediate preference weights and parameter mapping functions. Despite these limitations, the results of the numerical experiments are empirically promising, and the reasoning and structure of the approach provide a strong foundation for further development.
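One plausible reading of the "biased-but-random adaptive model evaluation" step is a weighted random choice among models, with weights favouring the currently best-fitting model. A toy sketch under that assumption (the paper's actual weight-assignment rule is described as ad hoc and is not reproduced here):

import numpy as np

rng = np.random.default_rng(4)
best_obj = np.array([12.0, 8.0, 20.0])         # current best objective per model

def preference_weights(best_obj):
    # Bias selection toward better-fitting models but keep every model alive.
    w = 1.0 / best_obj
    return w / w.sum()

for _ in range(3):
    w = preference_weights(best_obj)
    chosen = rng.choice(len(best_obj), p=w)    # biased-but-random model pick
    best_obj[chosen] *= 0.95                   # pretend the search improved it
    print(f"weights={np.round(w, 2)}  evaluated model {chosen}")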

4.
Uplift and the accompanying reduction in overburden result in anomalously high velocity in the uplifted rock unit relative to its current depth. The present work utilizes the non-uniqueness of the parameters of instantaneous velocity versus depth functions as an effective tool for uplift studies. The linear function with its two parameters, V0 and k, is a very simple function and is used as the illustrative vehicle. In the parameter space, i.e. in a plot where one axis represents V0 and the other axis represents k, non-uniqueness can be represented by contours of equal goodness-of-fit values between the observed data and the fitted function. The contour delimiting a region of equivalent solutions in the parameter space is called a ‘solution trough’. Uplift corresponds to a rotation of the solution trough in the parameter space. It is shown that, in terms of relative depth changes, there are five possible configurations (five cases) of uplift in a given area (the mobile location) relative to another area (the reference location). The cases depend on whether the uplifted location had attained a (pre-uplift) maximum depth of burial that was greater than, similar to, or smaller than the maximum depth of burial at the reference location. Interpretation of the relationships between the solution troughs corresponding to the different locations makes it possible to establish which of the five cases applies to the uplifted location and to estimate the amount of uplift that the unit has undergone at that location. The difficulty in determining the reduction in velocity due to decompaction resulting from uplift is a main source of uncertainty in the estimate of the amount of uplift. This is a problem common to all velocity-based methods of uplift estimation. To help circumvent this difficulty, the present work proposes a first-order approximation method for estimating the effect of decompaction on velocity in an uplifted area.
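The 'solution trough' can be made concrete by contouring a goodness-of-fit measure over the (V0, k) plane for the linear function V(z) = V0 + kz. A minimal sketch with synthetic data standing in for observed velocity-depth pairs (plotting of the contours is omitted):

import numpy as np

# Synthetic observed velocity-depth pairs (placeholders for real well data).
z = np.linspace(200.0, 2000.0, 20)                 # depth (m)
v_obs = 1800.0 + 0.6 * z + np.random.default_rng(0).normal(0, 30, z.size)

# RMS misfit of V(z) = V0 + k*z over a (V0, k) grid; near-equivalent
# (V0, k) pairs trace an elongated 'solution trough' in this surface.
V0_grid = np.linspace(1500.0, 2100.0, 121)
k_grid = np.linspace(0.3, 0.9, 121)
V0m, km = np.meshgrid(V0_grid, k_grid)
misfit = np.sqrt(((V0m[..., None] + km[..., None] * z - v_obs) ** 2).mean(axis=-1))

i, j = np.unravel_index(misfit.argmin(), misfit.shape)
print(f"best fit: V0={V0m[i, j]:.0f} m/s, k={km[i, j]:.2f} 1/s, rms={misfit[i, j]:.1f}")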

5.
Forecasting of extreme events and phenomena that follow non-Gaussian heavy-tailed distributions (e.g., extreme environmental events, rock permeability, rock fracture intensity, earthquake magnitudes) is essential to environmental and geoscience risk analysis. In this paper, new parametric heavy-tailed distributions are devised starting from the exponential power probability density function (pdf), which is modified by explicitly including higher-order “cumulant parameters” in the pdf. Instead of dealing with whole power random variables, novel “residual” random variables are proposed to reconstruct the cumulant generating function. The expected value of a residual random variable with the corresponding pdf for order G gives the input higher-order cumulant parameter. Thus, each parametric pdf is used to simulate a random variable containing residuals that yield, on average, the expected cumulant parameter. The cumulant parameters allow the formulation of heavy-tailed skewed pdfs beyond the lognormal to handle extreme events. Monte Carlo simulation of heavy-tailed distributions with higher-order parameters is demonstrated with a simple example for permeability.
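The exponential power pdf that serves as the starting point is available in scipy as gennorm; a minimal sketch contrasting tail weight for different shape parameters (the paper's cumulant-parameter modification itself is not reproduced):

import numpy as np
from scipy.stats import gennorm, norm

# Generalized normal (exponential power) pdf ~ exp(-|x|^beta):
# beta = 2 recovers the Gaussian; beta < 2 gives heavier tails.
rng = np.random.default_rng(42)
for beta in (2.0, 1.0, 0.5):
    x = gennorm.rvs(beta, size=100_000, random_state=rng)
    x = (x - x.mean()) / x.std()            # standardize for comparison
    tail = np.mean(np.abs(x) > 4.0)         # empirical exceedance beyond 4 sigma
    print(f"beta={beta:3.1f}  P(|X|>4sd) = {tail:.2e}")
print(f"Gaussian reference: {2 * norm.sf(4.0):.2e}")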

6.
Abstract

This paper examines the efficiency of various methods of calibrating a rainfall-runoff model. The model used is a 12-parameter version of the Boughton model which has been developed for large tropical basins. Attempts were made to improve the efficiency of calibration in three areas: selection of the best nonlinear programming algorithms; reduction of the number of objective function evaluations required for calibration; and simplification of the model structure. The best algorithms were found to be those of Powell, Rosenbrock, and the simplex method of Nelder and Mead. The Davidon method did not perform well. The number of objective function evaluations can be reduced by performing a sensitivity analysis on the model and selecting a small group of parameters which are not interdependent and to which the objective function is sensitive. This may yield a substantial reduction in the computer time required to calibrate the model. Simplification of the model structure can also yield substantial savings, especially where it removes redundant calculations and reduces the number of model parameters.
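The Nelder-Mead simplex method singled out above is available in scipy; a minimal sketch calibrating a toy two-parameter linear-reservoir model against synthetic flows (the toy model and data are assumptions, not the 12-parameter Boughton model):

import numpy as np
from scipy.optimize import minimize

rain = np.array([5.0, 0.0, 12.0, 3.0, 0.0, 8.0, 1.0, 0.0])

def simulate(params, rain):
    # Toy linear reservoir: runoff coefficient c, recession constant k.
    c, k = params
    S, q = 0.0, []
    for p in rain:
        S += c * p
        out = k * S
        S -= out
        q.append(out)
    return np.array(q)

q_obs = simulate([0.6, 0.3], rain)     # synthetic "observations"

sse = lambda params: np.sum((simulate(params, rain) - q_obs) ** 2)
res = minimize(sse, x0=[0.3, 0.5], method="Nelder-Mead")
print(res.x, res.fun)                  # recovers approximately c=0.6, k=0.3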

7.
Factors influencing double seismic zones    Cited by: 8 (self-citations: 4, others: 4)
Zhang Keliang, Wei Dongping. Chinese Journal of Geophysics (地球物理学报), 2011, 54(11): 2838-2850
The interrelations among the interlayer spacing of double seismic zones, stress type, and subduction parameters are discussed for 39 subduction zones worldwide. The subduction parameters include dynamic parameters (plate age, thermal parameter, slab pull), kinematic parameters (subducting plate velocity, overriding plate velocity, trench migration velocity, back-arc deformation style), geometric parameters (shallow dip angle, deep dip angle, subduction depth, length), and the nature of the overriding plate. The results show that: (1) Type I double seismic zones tend to form in relatively old (...

8.
Inverse modeling is widely used to assist with forecasting problems in the subsurface. However, full inverse modeling can be time-consuming, requiring iteration over a high-dimensional parameter space with computationally expensive forward models and complex spatial priors. In this paper, we investigate a prediction-focused approach (PFA) that aims at building a statistical relationship between data variables and forecast variables, avoiding the inversion of model parameters altogether. The statistical relationship is built by first applying the forward model related to the data variables and the forward model related to the prediction variables on a limited set of spatial prior model realizations, typically generated through geostatistical methods. The relationship observed between data and prediction is highly non-linear for many forecasting problems in the subsurface. In this paper we propose a Canonical Functional Component Analysis (CFCA) to map the data and forecast variables into a low-dimensional space where, if successful, the relationship is linear. CFCA consists of (1) functional principal component analysis (FPCA) for dimension reduction of time-series data and (2) canonical correlation analysis (CCA), the latter aiming to establish a linear relationship between data and forecast components. If such mapping is successful, then we illustrate with several cases that (1) simple regression techniques with a multi-Gaussian framework can be used to directly quantify uncertainty on the forecast without any model inversion and that (2) such uncertainty is a good approximation of the uncertainty obtained from full posterior sampling with rejection sampling.
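The dimension-reduction-plus-CCA idea can be sketched with scikit-learn, with ordinary PCA standing in for the functional PCA used in the paper and synthetic data throughout:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 200
latent = rng.normal(size=(n, 2))                       # shared controls
data_ts = latent @ rng.normal(size=(2, 50)) + 0.1 * rng.normal(size=(n, 50))
fcst_ts = latent @ rng.normal(size=(2, 30)) + 0.1 * rng.normal(size=(n, 30))

# (1) dimension reduction of the time series (FPCA in the paper; plain PCA here)
d = PCA(n_components=3).fit_transform(data_ts)
h = PCA(n_components=3).fit_transform(fcst_ts)

# (2) CCA to align data and forecast components linearly
cca = CCA(n_components=2).fit(d, h)
d_c, h_c = cca.transform(d, h)
for i in range(2):
    r = np.corrcoef(d_c[:, i], h_c[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.3f}")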

9.
Finding an operational parameter vector is always challenging in the application of hydrologic models, with over-parameterization and limited information from observations leading to uncertainty about the best parameter vectors. Thus, it is beneficial to find every possible behavioural parameter vector. This paper presents a new methodology, called the patient rule induction method for parameter estimation (PRIM-PE), to define where the behavioural parameter vectors are located in the parameter space. The PRIM-PE was used to discover all regions of the parameter space containing an acceptable model behaviour. This algorithm consists of an initial sampling procedure to generate a parameter sample that sufficiently represents the response surface with a uniform distribution within the "good-enough" region (i.e., performance better than a predefined threshold) and a rule induction component (PRIM), which is then used to define regions in the parameter space in which the acceptable parameter vectors are located. To investigate its ability in different situations, the methodology is evaluated using four test problems. The PRIM-PE sampling procedure was also compared against a Markov chain Monte Carlo sampler known as the differential evolution adaptive Metropolis (DREAM(ZS)) algorithm. Finally, a spatially distributed hydrological model calibration problem with two settings (a three-parameter calibration problem and a 23-parameter calibration problem) was solved using the PRIM-PE algorithm. The results show that the PRIM-PE method captured the good-enough region in the parameter space successfully, using 8 and 107 boxes for the three-parameter and 23-parameter problems, respectively. This good-enough region can be used in a global sensitivity analysis to provide a broad range of parameter vectors that produce acceptable model performance. Moreover, for a specific objective function and model structure, the size of the boxes can be used as a measure of equifinality.
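The rule-induction component (PRIM) iteratively peels thin slices off the parameter space to isolate boxes of behavioural parameter vectors. A one-box peeling sketch on a synthetic two-dimensional response (the peel fraction, stopping rule and test response are assumptions):

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 2))               # parameter sample
good = (np.abs(X[:, 0] - 0.3) < 0.1) & (X[:, 1] > 0.6)  # behavioural vectors

box = np.array([[0.0, 1.0], [0.0, 1.0]])                # [low, high] per dim
inside = np.ones(len(X), dtype=bool)
alpha = 0.05                                            # peel fraction
while inside.sum() > 50:
    best = None
    for dim in (0, 1):
        for side in (0, 1):                             # candidate face peels
            b = box.copy()
            width = alpha * (b[dim, 1] - b[dim, 0])
            b[dim, side] += width if side == 0 else -width
            m = ((X >= b[:, 0]) & (X <= b[:, 1])).all(axis=1)
            score = good[m].mean() if m.any() else 0.0
            if best is None or score > best[0]:
                best = (score, b, m)
    if best[0] <= good[inside].mean():                  # stop when no peel helps
        break
    _, box, inside = best
print("final box:", np.round(box, 2), " purity:", round(float(good[inside].mean()), 2))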

10.
ABSTRACT

Bias correction is a necessary post-processing procedure for using regional climate model (RCM)-simulated local climate variables as input data for hydrological models, owing to the systematic errors of RCMs. Most present bias-correction methods adjust statistical properties between observed and simulated data based on a predefined duration (e.g. a month or a season). However, there is a lack of analysis of the optimal period for bias correction. This study attempted to address the question of whether there is an optimal number of bias-correction groups (i.e. an optimal bias-correction period). To explore this we used a catchment in southwest England with regional climate model HadRM3 precipitation data. The proposed methodology used only one RCM grid cell in the Exe catchment, one emissions scenario (A1B) and one member (Q0) among the 11 members of HadRM3. We tried 13 different bias-correction periods, from 3-day to 360-day (i.e. the whole year) correction, using the quantile mapping method. After the bias correction, a low-pass filter was used to remove the high frequencies (i.e. noise), followed by estimating Akaike's information criterion. For the case study catchment with HadRM3 precipitation, the results showed that a bias-correction period of about 8 days performs best. We hope this preliminary study on the optimum bias-correction period for daily RCM precipitation will stimulate more research to improve the methodology under different climatic conditions. Future efforts on several unsolved problems are suggested, such as how strong the filter should be and the impact of the number of bias-correction groups on river flow simulations.
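The quantile-mapping correction itself takes only a few lines; a minimal empirical sketch for a single bias-correction group (the grouping into 3- to 360-day periods, the low-pass filtering and the AIC step are not shown, and the data are synthetic):

import numpy as np

def quantile_map(sim, obs, sim_future):
    # Empirical quantile mapping: replace each simulated value by the
    # observed value at the same empirical quantile.
    q = np.linspace(0.0, 1.0, 101)
    sim_q = np.quantile(sim, q)
    obs_q = np.quantile(obs, q)
    return np.interp(sim_future, sim_q, obs_q)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 3.0, 5000)          # stand-in for observed precipitation
sim = rng.gamma(2.0, 4.0, 5000) + 1.0    # biased RCM simulation
corrected = quantile_map(sim, obs, sim)
print(f"obs mean {obs.mean():.2f} | raw sim {sim.mean():.2f} | corrected {corrected.mean():.2f}")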

11.
A method for using remotely sensed snow cover information in updating a hydrological model is developed, based on Bayes' theorem. A snow cover mass balance model structure adapted to such use of satellite data is specified, using a parametric snow depletion curve in each spatial unit to describe the subunit variability in snow storage. The snow depletion curve relates the accumulated melt depth to snow-covered area, accumulated snowmelt runoff volume, and remaining snow water equivalent. The parametric formulation enables updating of the complete snow depletion curve, including mass balance, by satellite data on snow coverage. Each spatial unit (i.e. grid cell) in the model maintains a specific depletion curve state that is updated independently. The uncertainty associated with the variables involved is formulated in terms of a joint distribution, from which the joint expectancy (mean value) represents the model state. The Bayesian updating modifies the prior (pre-update) joint distribution into a posterior, and the posterior joint expectancy replaces the prior as the current model state. Three updating experiments are run in a 2400 km2 mountainous region in Jotunheimen, central Norway (61°N, 9°E) using two Landsat 7 ETM+ images separately and together. At 1 km grid scale in this alpine terrain, three parameters are needed in the snow depletion curve. Despite the small amount of measured information compared with the dimensionality of the updated parameter vector, updating reduces uncertainty substantially for some state variables and parameters. Parameter adjustments resulting from using each image separately differ, but are positively correlated. For all variables, uncertainty reduction is larger with two images used in conjunction than with any single image. Where the observation is in strong conflict with the prior estimate, increased uncertainty may occur, indicating that prior uncertainty may have been underestimated. Copyright © 2006 John Wiley & Sons, Ltd.
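The Bayesian update of the joint distribution can be illustrated with a weighted prior ensemble: each member implies a snow-covered area through its depletion curve, and an observed SCA reweights the members. A deliberately simplified sketch (the one-parameter power depletion curve, Gaussian observation error and all numbers are assumptions, not the paper's three-parameter formulation):

import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Prior ensemble: snow water equivalent (swe) and a depletion-curve shape
# parameter; snow-covered area (SCA) follows from the depletion curve.
swe = rng.lognormal(mean=4.0, sigma=0.4, size=n)       # mm
shape = rng.uniform(0.5, 3.0, size=n)
melt = 40.0                                            # accumulated melt (mm)
sca = np.clip(1.0 - (melt / swe) ** shape, 0.0, 1.0)   # assumed depletion curve

# Observation from a satellite image: SCA = 0.55 +/- 0.05 (Gaussian error).
obs, sigma = 0.55, 0.05
w = np.exp(-0.5 * ((sca - obs) / sigma) ** 2)
w /= w.sum()

post_swe = np.sum(w * swe)                             # posterior expectancy
print(f"prior mean SWE {swe.mean():.1f} mm -> posterior {post_swe:.1f} mm")
print(f"prior sd {swe.std():.1f} -> posterior sd "
      f"{np.sqrt(np.sum(w * (swe - post_swe) ** 2)):.1f}")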

12.
The correlation dimension (CD) of a time series provides information on the number of dominant variables present in the evolution of the underlying system dynamics. In this study, we explore, using logistic regression (LR), possible physical connections between the CD and the mathematical modeling of the risk of arsenic contamination in groundwater. Our database comprises a large-scale arsenic survey conducted in Bangladesh. Following the recommendation of Hossain and Sivakumar (Stoch Environ Res Risk Assess 20(1–2):66–76, 2006), who reported CD values ranging from 8 to 11 for this database, 11 variables are considered herein as indicators of the aquifer's geochemical regime with potential influence on the arsenic concentration in groundwater. A total of 2,048 possible combinations of influencing variables are considered as candidate LR risk models to delineate the impact of the number of variables on the prediction accuracy of the model. We find that the uncertainty associated with the prediction of wells as safe and unsafe by the LR risk model declines systematically as the total number of influencing variables increases from 7 to 11. The sensitivity of the mean predictive performance also increases noticeably over this range. The consistent reduction in predictive uncertainty, coupled with the increased sensitivity of the mean predictive behavior within the universal sample space, exemplifies the ability of CD to function as a proxy for the number of dominant influencing variables. Such a rapid proxy, based on non-linear dynamic concepts, appears to have considerable merit for application in current management strategies on arsenic contamination in developing countries, where both time and resources are very limited.
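The enumeration of candidate LR risk models can be sketched with scikit-learn by fitting a logistic regression for every subset of candidate predictors; synthetic data and three candidate variables (8 combinations) stand in for the 11 variables (2,048 combinations) of the study:

import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 500
X = rng.normal(size=(n, 3))              # stand-ins for geochemical indicators
logit = 1.2 * X[:, 0] - 0.8 * X[:, 2]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))   # well unsafe / safe

names = ["var1", "var2", "var3"]
for k in range(1, 4):
    for cols in combinations(range(3), k):
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, cols], y, cv=5).mean()
        print([names[c] for c in cols], f"accuracy={score:.3f}")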

13.
This study investigates the dynamic behavior of suspended sediment load transport at different temporal scales in the Mississippi River basin. Data corresponding to five successively doubled temporal scales (i.e. daily, two-day, four-day, eight-day and 16-day) from the St. Louis gaging station in Missouri are analyzed. The investigation is focused on identifying possible low-dimensional deterministic behavior in the suspended sediment load transport dynamics, with an aim towards reduction in model complexity. The correlation dimension method is used to identify low-dimensional determinism. The suspended sediment load dynamics are represented through phase-space reconstruction, and the variability is estimated using the (proximity of) reconstructed vectors in the phase space. The results indicate the presence of low-dimensional determinism in the suspended sediment load series at each of the five temporal scales, with the variables dominantly governing the dynamics on the order of three or four. These results not only suggest the appropriateness of relatively simpler models but also hint at possible scale invariance in the suspended sediment load transport dynamics. Copyright © 2006 John Wiley & Sons, Ltd.
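The correlation dimension method estimates the slope of log C(r) versus log r from a phase-space reconstruction. A compact Grassberger-Procaccia-style sketch (embedding parameters, scaling range and the Henon-map test signal are assumptions):

import numpy as np

def correlation_dimension(series, m=3, tau=1):
    # Phase-space reconstruction with embedding dimension m and delay tau.
    n = len(series) - (m - 1) * tau
    emb = np.column_stack([series[i * tau:i * tau + n] for i in range(m)])
    # Pairwise distances and the correlation integral C(r).
    dist = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    dist = dist[np.triu_indices(n, k=1)]
    r = np.logspace(np.log10(np.percentile(dist, 2)),
                    np.log10(np.percentile(dist, 50)), 12)
    C = np.array([(dist < ri).mean() for ri in r])
    return np.polyfit(np.log(r), np.log(C), 1)[0]   # scaling exponent

# Henon map as a test signal; its attractor has correlation dimension ~1.2.
x, y, xs = 0.1, 0.1, []
for _ in range(1700):
    x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
    xs.append(x)
print(f"estimated correlation dimension: {correlation_dimension(np.array(xs[500:])):.2f}")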

14.
Metallic iron (Fe0) is often reported as a reducing agent for environmental remediation. There is still controversy as to whether Fe0 plays any significant direct role in the process of contaminant reductive transformation. The view that Fe0 is mostly a generator of reducing agents (e.g. H, H2 and FeII) and Fe oxyhydroxides has been either severely refuted or merely tolerated. The tolerance is based on the simplification that, without Fe0, no secondary reducing agents could be available. Accordingly, Fe0 serves as the original source of electron donors (including H, H2 and FeII). The objective of this communication is to refute this simplification and establish that quantitative reduction results from secondary reducing agents. For this purpose, reports on aqueous contaminant removal by Al0, Fe0 and Zn0 are comparatively discussed. The results indicate that reduction may be quantitative in aqueous systems containing Fe0 and Zn0, while no significant reduction is observed in Al0/H2O systems. Given that Al0 is a stronger reducing agent than Fe0 and Zn0, it is concluded that contaminant reduction in Fe0/H2O systems results from synergistic interactions between H/H2 and FeII within porous Fe oxyhydroxides. This conclusion corroborates the operating mode of Fe0 bimetallics as H/H2-producing systems for indirect contaminant reduction.

15.
A new methodology is proposed for the development of parameter-independent reduced models for transient groundwater flow models. The model reduction technique is based on Galerkin projection of a highly discretized model onto a subspace spanned by a small number of optimally chosen basis functions. We propose two greedy algorithms that iteratively select optimal parameter sets and snapshot times between the parameter space and the time domain in order to generate snapshots. The snapshots are used to build the Galerkin projection matrix, which covers the entire parameter space in the full model. We then apply the reduced subspace model to solve two inverse problems: a deterministic inverse problem and a Bayesian inverse problem with a Markov Chain Monte Carlo (MCMC) method. The proposed methodology is validated with a conceptual one-dimensional groundwater flow model. We then apply the methodology to a basin-scale, conceptual aquifer in the Oristano plain of Sardinia, Italy. Using the methodology, the full model governed by 29,197 ordinary differential equations is reduced by two to three orders of magnitude, resulting in a drastic reduction in computational requirements.
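The Galerkin projection step can be sketched by taking the SVD of a snapshot matrix and projecting the full operators onto the leading left singular vectors; a generic stable linear ODE system stands in for the discretized groundwater model, and the greedy snapshot selection is not shown:

import numpy as np

rng = np.random.default_rng(5)
n = 400                                    # full model dimension
# Stable linear system dx/dt = A x + b as a stand-in for the discretized model.
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
b = rng.normal(size=n)

def integrate(A, b, x0, dt=0.01, steps=300):
    x, out = x0.copy(), []
    for _ in range(steps):
        x = x + dt * (A @ x + b)           # explicit Euler, small dt
        out.append(x.copy())
    return np.array(out).T                 # columns are snapshots

snapshots = integrate(A, b, np.zeros(n))
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :10]                            # 10 POD basis vectors

# Galerkin projection: reduced operators Ar = Phi^T A Phi, br = Phi^T b.
Ar, br = Phi.T @ A @ Phi, Phi.T @ b
x_full = snapshots[:, -1]
x_red = Phi @ integrate(Ar, br, np.zeros(10))[:, -1]
err = np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full)
print(f"relative error of 10-dof reduced model: {err:.2e}")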

16.
Recent research recognized that a slope of 18% can be used to distinguish between the ‘gentle slope’ case and that of ‘steep slope’ with respect to detected differences in hydraulic variables (flow depth, velocity, Reynolds number, Froude number) and in variables representative of sediment transport (flow transport capacity, actual sediment load). In this paper, using previous measurements carried out in mobile-bed rills and flume experiments characterized by steep slopes (i.e., slope greater than or equal to 18%), a theoretical rill flow resistance equation to estimate the Darcy-Weisbach friction factor is tested. The main aim is to deduce a relationship between the velocity profile parameter Γ, the channel slope, the Reynolds number, the Froude number and the textural classes, using a database characterized by a wide range of hydraulic conditions, plot or flume slopes (18%–84%) and textural classes (clay ranging from 3% to 71%). The obtained relationship is also tested using 47 experimental runs carried out in the present investigation with mobile-bed rills incised in an 18%-sloping plot with a clay loam soil, together with literature data. The analysis demonstrated that: (1) the soil texture affects the estimate of the Γ parameter and the theoretical flow resistance law (Equation 25), (2) the proposed Equation (25) fits the independent measurements of the testing database well, (3) the estimate of the Darcy-Weisbach friction factor is affected by soil particle detachability and transportability and (4) the Darcy-Weisbach friction factor is linearly related to the rill slope.
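For reference, the Darcy-Weisbach friction factor for such flows follows directly from the measured hydraulic radius, slope and mean velocity; a one-line sketch with illustrative numbers (not values from the paper's data set):

def darcy_weisbach_f(R, S, V, g=9.81):
    # f = 8 g R S / V^2, with hydraulic radius R (m), slope S (-), velocity V (m/s)
    return 8.0 * g * R * S / V ** 2

# Illustrative rill-flow values on an 18% slope.
print(f"f = {darcy_weisbach_f(R=0.005, S=0.18, V=0.4):.3f}")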

17.
The most common noise-reduction methods employed in the vibroseis technique (e.g. spike and burst reduction, vertical stacking) are applied in the field to reduce noise at a very early stage. In addition, vibrator phase control systems prevent signal distortions produced by non-linearity of the source itself. However, the success of these automatic correction methods depends on parameter justification by the operator and on the actual characteristics of the distorting noise. More specific noise-reduction methods (e.g. Combisweep (trade mark of Geco-Prakla), elimination of harmonics) increase production costs or require uncorrelated data for the correction process. Because the field data are usually correlated and vertically stacked in the field to minimize logistical and processing costs, it is not possible to make subsequent parameter corrections to optimize the noise reduction after correlation and vertical stacking of a production record. The noise-reduction method described here uses the final recorded, correlated and stacked vibroseis field data. By combining a frequency–time analysis with a standard convolution process, it eliminates signal artifacts caused, for example, by incorrect vibroseis source signals being used in parameter estimation. Depending on the nature of the distortions, a synthetically generated, nearly recursive noise-separation operator compresses the noise artifact in time using a trace-by-trace filter. After elimination of this compressed noise, re-application of the separation operator leads to a noise-corrected replacement of the input data. The method is applied to a synthetic data set and to a real vibroseis field record from deep seismic sounding, with good results.

18.
Seasonality of low flows and dominant processes in the Rhine River    Cited by: 5 (self-citations: 5, others: 0)
Low flow forecasting is crucial for sustainable cooling water supply and for planning river navigation in the Rhine River. The first step in reliable low flow forecasting is to understand the characteristics of low flow. In this study, several methods are applied to understand the low flow characteristics of the Rhine River basin. In 108 catchments of the Rhine River, winter and summer low flow regions are determined with the seasonality ratio (SR) index. To understand whether different numbers of processes are acting in generating different low flow regimes in seven major sub-basins (namely, East Alpine, West Alpine, Middle Rhine, Neckar, Main, Mosel and Lower Rhine) aggregated from the 108 catchments, the dominant variable concept is adopted from chaos theory. The number of dominant processes within the seven major sub-basins is determined with correlation dimension analysis. Results of the correlation dimension analysis show that the minimum and maximum required numbers of variables to represent the low flow dynamics of the seven major sub-basins, except the Middle Rhine and Mosel, are 4 and 9, respectively. For the Mosel and Middle Rhine, the required minimum numbers of variables are 2 and 6, and the maximum numbers of variables are 5 and 13, respectively. These results show that the low flow processes of the major sub-basins of the Rhine could be considered non-stochastic, or chaotic, processes. To confirm this conclusion, rescaled range analysis is applied to verify persistence (i.e. non-randomness) in the processes. The estimated rescaled range statistics (i.e. Hurst exponents) are all above 0.5, indicating that persistent long-term memory characteristics exist in the runoff processes. Finally, the mean values of the SR indices are compared with the nonlinear analysis results to find significant relationships. The results show that the minimum and maximum numbers of required variables (i.e. processes) to model the dynamic characteristics for five out of the seven major sub-basins are the same, but the observed low flow regimes are different (winter low flow regime versus summer low flow regime). These results support the conclusion that a few interrelated nonlinear variables can yield completely different behaviour (i.e. a different dominant low flow regime).
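The rescaled range statistic used above can be sketched directly: for each window length, the range of the cumulative mean-adjusted series is divided by its standard deviation, and the Hurst exponent is the slope of log(R/S) against log(window size). A minimal version with assumed window sizes:

import numpy as np

def hurst_rs(series, window_sizes=(16, 32, 64, 128, 256)):
    rs = []
    for w in window_sizes:
        vals = []
        for start in range(0, len(series) - w + 1, w):
            seg = series[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative mean-adjusted series
            s = seg.std()
            if s > 0:
                vals.append((dev.max() - dev.min()) / s)
        rs.append(np.mean(vals))
    # Hurst exponent = slope of log(R/S) versus log(window size)
    return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(9)
white = rng.normal(size=4096)
print(f"white noise H ~ {hurst_rs(white):.2f}   (expected near 0.5)")
persistent = np.cumsum(white)                   # integrated series, H near 1
print(f"integrated series H ~ {hurst_rs(persistent):.2f}")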

19.
The space and time resolutions used for the input variables of a distributed hydrological model have a significant impact on the model results. The appropriate resolution depends on the required accuracy, the experimental site, and the processes and variables taken into account in the hydrological model. The influence of space and time resolution is studied here for the case of TOPMODEL, a model based on the variable contributing area concept, applied to an experimental 12 km2 catchment (Coët-Dan, Brittany, France) over a two-month winter period. A sensitivity analysis to space and time resolution is performed first for the input variables derived from the digital elevation data, secondly for the optimized values of the TOPMODEL parameters, and finally for modelling efficiency. This analysis clearly shows that a relevant domain of space and time resolutions, where efficiency is fairly constant, can be defined for the input topographic variables, as opposed to another domain of coarser resolutions that induces a strong decrease in modelling efficiency. It also shows that the use of a single set of parameters, defined as mean values of the parameters over this relevant domain of resolutions, does not modify the accuracy of the modelling. The sensitivity of the parameters to space and time resolution allows the physical significance of the parameter values to be discussed.
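TOPMODEL's central topographic input derived from the digital elevation data is the index ln(a / tan β); its sensitivity to grid resolution can be probed by recomputing it on aggregated grids. A minimal sketch on a toy DEM with single-direction (D8) routing, which is itself a simplification:

import numpy as np

def topographic_index(dem, cell=50.0):
    # D8 flow accumulation from high to low, then TI = ln(a / tan(beta)),
    # where a is specific upslope area and tan(beta) the steepest slope.
    rows, cols = dem.shape
    area = np.full(dem.shape, cell * cell)            # own-cell contribution
    slope = np.full(dem.shape, 1e-3)                  # floor for flat/pit cells
    for idx in np.argsort(dem, axis=None)[::-1]:      # highest cells first
        r, c = divmod(idx, cols)
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / (cell * np.hypot(dr, dc))
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target is not None:
            area[target] += area[r, c]                # pass accumulated area on
            slope[r, c] = best
    return np.log(area / cell / slope)

rng = np.random.default_rng(2)
x = np.arange(20) * 50.0
dem = 100.0 - 0.05 * x[None, :] + rng.normal(0.0, 0.2, (20, 20))
ti = topographic_index(dem)
print(f"topographic index range: {ti.min():.1f} to {ti.max():.1f}")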

20.
Symmetry distribution of cities in China    Cited by: 1 (self-citations: 0, others: 1)
The authors of this paper induced five principles of geographical symmetry from the spatial distributions of cities and towns in China. Cities and towns exhibit a symmetry distribution with the following characteristics: (i) the average coordination number of cities (including large cities, medium-sized cities and county towns) is 6 (i.e. rotation symmetry); (ii) the distribution of large and medium-sized cities forms a lattice whose two directions are parallel to the two main tectonic directions in China; (iii) the distribution of county towns within a province likewise forms a lattice whose two directions are parallel to the two tectonic directions in that province (i.e. two-dimensional translation); and (iv) the concentric circle distribution of cities (CCDC) is centered on a large city (i.e. rotation symmetry).
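Principle (i), an average coordination number of 6, is what a Delaunay triangulation of scattered points predicts; a quick sketch with random points standing in for city locations:

import numpy as np
from scipy.spatial import Delaunay

# The mean number of Delaunay neighbours of a planar point set tends to 6
# (slightly less here because of points on the convex hull).
rng = np.random.default_rng(6)
pts = rng.uniform(0.0, 100.0, size=(2000, 2))      # stand-ins for city locations
tri = Delaunay(pts)

edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))
degree = np.zeros(len(pts))
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(f"mean coordination number: {degree.mean():.2f}")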
