Similar Documents
20 similar documents retrieved.
1.
2.
The implementation of Monte Carlo simulations (MCSs) for the propagation of uncertainty in real-world seawater intrusion (SWI) numerical models often becomes computationally prohibitive due to the large number of deterministic solves needed to achieve an acceptable level of accuracy. Previous studies have mostly relied on parallelization and grid computing to decrease the computational time of MCSs. However, another approach, which has received less attention in the literature, is to decrease the number of deterministic simulations by using more efficient sampling strategies. Sampling efficiency is a measure of the optimality of a sampling strategy. A more efficient sampling strategy requires fewer simulations and less computational time to reach a certain level of accuracy. The efficiency of a sampling strategy is highly related to its space-filling characteristics. This paper illustrates that the use of optimized Latin hypercube sampling (OLHS) strategies, instead of the widely employed simple random sampling (SRS) and Latin hypercube sampling (LHS) strategies, can significantly improve sampling efficiency and hence decrease the simulation time of MCSs. Nine OLHS strategies are evaluated, including: improved Latin hypercube sampling (IHS); optimum Latin hypercube (OLH) sampling; genetic optimum Latin hypercube (GOLH) sampling; three sampling strategies based on the enhanced stochastic evolutionary (ESE) algorithm, namely φp-ESE, which employs the φp space-filling criterion, CLD-ESE, which utilizes the centered L2-discrepancy (CLD) space-filling criterion, and SLD-ESE, which uses the star L2-discrepancy (SLD) space-filling criterion; and three sampling strategies based on the simulated annealing (SA) algorithm, namely φp-SA, which employs the φp criterion, CLD-SA, which uses the CLD criterion, and SLD-SA, which utilizes the SLD criterion. The study applies SRS, LHS and the nine OLHS strategies to MCSs of two synthetic test cases of SWI. The two test cases are the Henry problem and a two-dimensional radial representation of SWI in a circular island. The comparison demonstrates that the CLD-ESE strategy is the most efficient among the evaluated strategies. This paper also demonstrates how the space-filling characteristics of different OLHS designs change with variations in the input arguments of their optimization algorithms.
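As a point of reference for the sampling terminology above, the following minimal sketch (illustrative only, not the authors' code) generates a plain Latin hypercube design and evaluates the centered L2-discrepancy (CLD) space-filling criterion against which OLHS designs are optimized; the crude keep-the-best loop at the end merely stands in for the ESE or SA optimizers discussed in the abstract.

```python
# Hedged sketch (not from the paper): plain Latin hypercube sampling and the
# centered L2-discrepancy (CLD) space-filling measure it can be optimized against.
# A lower CLD indicates a more uniformly space-filling design.
import numpy as np

def latin_hypercube(n, d, seed=None):
    """n samples in [0,1]^d, one sample per stratum in every dimension."""
    rng = np.random.default_rng(seed)
    # random position inside each of the n strata, independently per dimension
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # shuffle the strata independently in each dimension
    for k in range(d):
        rng.shuffle(u[:, k])
    return u

def centered_l2_discrepancy(x):
    """Hickernell's centered L2-discrepancy of a design x of shape (n, d) in [0,1]^d."""
    n, d = x.shape
    xc = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.sum(np.prod(1 + 0.5 * xc - 0.5 * xc**2, axis=1))
    diff = np.abs(x[:, None, :] - x[None, :, :])
    prod = np.prod(1 + 0.5 * xc[:, None, :] + 0.5 * xc[None, :, :] - 0.5 * diff, axis=2)
    term3 = np.sum(prod) / n**2
    return np.sqrt(term1 - term2 + term3)

# A crude "optimized" LHS: keep the best of many random LHS designs by CLD.
best = min((latin_hypercube(30, 2, s) for s in range(200)), key=centered_l2_discrepancy)
print(centered_l2_discrepancy(best))
```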

3.
In this paper, the numerical errors associated with finite difference solutions of the two-dimensional advection–dispersion equation with linear sorption are obtained from a Taylor analysis and are removed from the numerical solution. The error expressions are based on a general form of the corresponding difference equation. The variation of these numerical truncation errors is presented as a function of the Peclet and Courant numbers in the X and Y directions, a sink/source dimensionless number, and new forms of the Peclet and Courant numbers in the X–Y plane. It is shown that the Crank–Nicolson method is the most accurate scheme based on the truncation error analysis. The effects of these truncation errors on the numerical solution of a two-dimensional advection–dispersion equation with a first-order reaction or degradation are demonstrated by comparison with an analytical solution for predicting the contaminant plume distribution in a uniform flow field. Considering computational efficiency, an alternating direction implicit method is used for the numerical solution of the governing equation. The results show that removing these errors improves the numerical results and reduces the differences between the numerical and analytical solutions.
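For reference, the truncation errors discussed above are usually organized in terms of the grid Peclet and Courant numbers. The short sketch below uses the standard per-direction definitions only (not the paper's generalized X–Y plane forms) with hypothetical grid and flow values.

```python
# Hedged sketch: the textbook grid Peclet and Courant numbers that control the
# truncation error of finite-difference advection-dispersion schemes.
def grid_peclet(v, dx, D):
    """Grid Peclet number Pe = v*dx/D (advection vs. dispersion on the grid)."""
    return v * dx / D

def courant(v, dt, dx):
    """Courant number Cr = v*dt/dx (cells advected per time step)."""
    return v * dt / dx

# Example (made-up values): flow at 1 m/d, D = 0.5 m^2/d, 2 m grid, 1 d time step
vx, dx, Dx, dt = 1.0, 2.0, 0.5, 1.0
print(grid_peclet(vx, dx, Dx), courant(vx, dt, dx))  # Pe = 4.0, Cr = 0.5
```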

4.
Muro Leccese (Lecce) contains one of the most important Messapian archaeological sites in southern Italy. The archaeological interest of the site arises from the discovery of the remains of Messapian walls, tombs, roads, etc. (4th–2nd centuries BC) in the neighbourhood. The archaeological remains were found at about 0.3 m depth. At present the site belongs to the municipality, which intends to build a new sewer network through it. The risk of destroying potentially interesting ancient archaeological structures during the works prompted an archaeological survey of the area. The relatively large dimensions of the area (almost 10,000 m2), together with time and cost constraints, made it necessary to use geophysical investigations as a faster means to ascertain the presence of archaeological items. Since the most important targets were expected to be located at a soil depth of about 0.3 m, a ground-penetrating radar (GPR) survey was carried out in an area located near the archaeological excavations. Unfortunately the geological complexity did not allow an easy interpretation of the GPR data. Therefore a 3D electrical resistivity tomography (ERT) survey was conducted in order to resolve these interpretation problems. A three-way comparison was performed between the results of the dense ERT measurements parallel to the x axis, the results of the measurements parallel to the y axis, and the combined results. Subsequently the synthetic model approach was used to provide a better characterization of the resistivity anomalies visible in the ERT field data. The 3D inversion results clearly illustrate the capability of the method to resolve 3D structures of archaeological interest. According to the presented data, inversion models along a single direction (x or y) seem to be adequate for reconstructing the subsurface structures. Naturally, field data produce good-quality reconstructions of the archaeological features only if the x-line and y-line measurements are considered together. Despite the increased computational time required by the 3D acquisition and 3D inversion schemes, good-quality results can be produced.

5.
The celebrated Boltzmann–Gibbs (BG) entropy, \(S_{BG} = -k\sum_i p_i \ln p_i\), and the associated statistical mechanics are essentially based on hypotheses such as ergodicity, i.e., when ensemble averages coincide with time averages. This dynamical simplification occurs in classical systems (and quantum counterparts) whose microscopic evolution is governed by a positive largest Lyapunov exponent (LLE). Under such circumstances, relevant microscopic variables behave, from the probabilistic viewpoint, as (nearly) independent. Many phenomena exist, however, in natural, artificial and social systems (geophysics, astrophysics, biophysics, economics, and others) that violate ergodicity. To cover a (possibly) wide class of such systems, a generalization (nonextensive statistical mechanics) of the BG theory was proposed in 1988. This theory is based on nonadditive entropies such as \(S_q = k\frac{1 - \sum\nolimits_i p_i^q}{q - 1}\) (with \(S_1 = S_{BG}\)). Here we comment on some central aspects of this theory, and briefly review typical predictions, verifications and applications in geophysics and elsewhere, as illustrated through theoretical, experimental, observational, and computational results.
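As a quick consistency check on the two entropies quoted above (a textbook calculation, not taken from the paper), the nonadditive entropy \(S_q\) reduces to the BG form in the limit \(q \to 1\), using \(\sum_i p_i = 1\) and \(p_i^q = p_i e^{(q-1)\ln p_i}\):

```latex
% Standard limit q -> 1 (textbook result): expand p_i^q to first order in (q-1).
\begin{aligned}
S_q &= k\,\frac{1-\sum_i p_i^{\,q}}{q-1}
     = k\,\frac{1-\sum_i p_i\,e^{(q-1)\ln p_i}}{q-1} \\
    &= k\,\frac{1-\sum_i p_i\left[1+(q-1)\ln p_i + O\!\big((q-1)^2\big)\right]}{q-1} \\
    &= -k\sum_i p_i \ln p_i + O(q-1)
      \;\xrightarrow[\;q\to 1\;]{}\; S_{BG}.
\end{aligned}
```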

6.
We describe an algorithm for rapidly computing the surface displacements induced by a general polygonal load on a layered, isotropic, elastic half-space. The arbitrary surface pressure field is discretized using a large number, n, of equally-sized circular loading elements. The problem is to compute the displacement at a large number, m, of points (or stations) distributed over the surface. The essence of our technique is to reorganize all but a computationally insignificant part of this calculation into an equivalent problem: compute the displacements due to a single circular loading element at a total of m × n stations. We solve this “parallel” problem at high computational speed by utilizing the sparse evaluation and massive interpolation (SEMI) method. Because the product m × n that arises in our parallel problem is normally very large, we take maximum possible advantage of the acceleration achieved by the SEMI algorithm.

7.
We propose and validate a new sampling method to assess the presence, abundance and distribution of macrophytes in circular-shaped lakes according to the requirements of the Water Framework Directive (WFD 2000/60/EC). The results of the macrophyte survey, and in particular of macrophyte diversity, obtained using this method are also discussed. The sampling is based on randomly selected transects homogeneously distributed around the perimeter of the lake. The number of transects is proportional to the lake's size. The method was validated on six Italian volcanic lakes using computational resampling procedures on a total of 126 transects. Using resampling procedures, we show that the proposed approach identifies more than 75% of the overall species richness with a moderate sampling effort. According to our results, charophytes dominate the aquatic vegetation in Italian volcanic lakes. Species diversity is highest at shallow depths, whereas the most abundant species, such as Chara polyacantha, are located at an intermediate depth between the shoreline and the maximum growing depth.
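The resampling validation described above can be illustrated with a minimal sketch (toy data and most species labels below are invented for illustration): for each sampling effort k, random subsets of k transects are drawn and the recovered fraction of the overall species pool is averaged.

```python
# Hedged sketch (illustrative only, not the authors' procedure): how the fraction
# of total species richness captured grows with the number of transects sampled.
import numpy as np

def richness_vs_effort(transect_species, n_draws=1000, seed=None):
    """transect_species: list of sets, the species observed on each transect.
    Returns, for each effort k, the mean fraction of the species pool recovered
    by k randomly chosen transects."""
    rng = np.random.default_rng(seed)
    pool = set().union(*transect_species)
    n = len(transect_species)
    fractions = []
    for k in range(1, n + 1):
        found = []
        for _ in range(n_draws):
            idx = rng.choice(n, size=k, replace=False)
            seen = set().union(*(transect_species[i] for i in idx))
            found.append(len(seen) / len(pool))
        fractions.append(np.mean(found))
    return fractions

# Toy example with 6 transects; species labels other than Chara polyacantha are made up.
transects = [{"Chara polyacantha", "sp_A"}, {"sp_A"}, {"sp_B", "sp_C"},
             {"Chara polyacantha"}, {"sp_C", "sp_D"}, {"sp_A", "sp_D"}]
print(richness_vs_effort(transects, n_draws=200, seed=1))
```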

8.
9.
A modified domain reduction method (MDRM) that introduces damping terms into the original DRM is presented in this paper. To verify the proposed MDRM and compare the computational accuracy of the two methods, a numerical test is designed. The numerical results of the MDRM and DRM are compared using an extended meshed model. The results show that the MDRM significantly improves the computational accuracy of the DRM. The MDRM is then compared with two existing conventional methods, namely Liao's transmitting boundary and the viscous-spring boundary with Liu's method, and shows a clear advantage in computational accuracy, stability and range of applications. This paper also discusses the influence of the boundary location on computational accuracy. It can be concluded that smaller models tend to have larger errors. By introducing two dimensionless parameters, φ_1 and φ_2, a rational distance between the observation point and the MDRM boundary is suggested. When φ_1 ≥ 2 or φ_2 ≥ 13, the relative PGA error can be limited to 5%. In practice, the appropriate model size can be chosen based on these two parameters to achieve the desired computational accuracy.

10.
Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important, and also the most difficult, parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by KiK-net in Japan, together with aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τc and Pd methods. The standard deviations of the magnitude estimates from these two relationships are ±0.65 and ±0.56, respectively. The Pd value can also be used to estimate the peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. In order to ensure the stability and reliability of the magnitude estimates, we propose a compatibility test based on the nature of these two parameters. The reliability of the early warning information is significantly improved through this test.
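For context, the τc and Pd parameters mentioned above are conventionally computed from the first few seconds of the P-wave displacement record. The sketch below uses the standard definitions from the EEW literature; it is not the paper's exact processing chain (filtering, instrument correction and the regression itself are omitted).

```python
# Hedged sketch: standard tau_c (characteristic period) and Pd (peak initial
# displacement) over a short window after the P arrival.
import numpy as np

def tau_c_and_pd(u, dt, window=3.0):
    """u: vertical displacement record starting at the P arrival [m];
    dt: sample interval [s]; window: analysis window length [s].
    Returns (tau_c [s], Pd [m])."""
    n = int(window / dt)
    u = np.asarray(u[:n], dtype=float)
    v = np.gradient(u, dt)                 # displacement -> velocity
    r = np.sum(v**2) / np.sum(u**2)        # power ratio (dt cancels)
    tau_c = 2.0 * np.pi / np.sqrt(r)       # characteristic period of the initial P wave
    pd = np.max(np.abs(u))                 # peak initial displacement
    return tau_c, pd

# Toy example: a 1 Hz sinusoidal "P onset" sampled at 100 Hz gives tau_c close to 1 s.
t = np.arange(0, 3.0, 0.01)
print(tau_c_and_pd(1e-4 * np.sin(2 * np.pi * 1.0 * t), dt=0.01))
```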

11.
Input variable selection (IVS) is a necessary step in modeling water resources systems. Neglecting this step may lead to unnecessary model complexity and reduced model accuracy. In this paper, we apply the minimum redundancy maximum relevance (MRMR) algorithm to identify the most relevant set of inputs for modeling a water resources system. We further introduce two modified versions of the MRMR algorithm (α-MRMR and β-MRMR), where α and β are correction factors that are found to increase and decrease as a power-law function, respectively, as the input selection algorithm progresses and the number of selected input variables increases. We apply the proposed algorithms to 22 reservoirs in California to predict daily releases from a set of 121 potential input variables. Results indicate that the two proposed algorithms are good measures of model inputs, as reflected in enhanced model performance. The α-MRMR and β-MRMR values exhibit a strong negative correlation with model performance, as depicted in lower root-mean-square-error (RMSE) values.
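As background, the greedy MRMR selection loop that the abstract builds on can be sketched as follows. For simplicity this illustration uses absolute Pearson correlation in place of mutual information for the relevance and redundancy terms, and the α/β corrections of the proposed variants are not included.

```python
# Hedged sketch (not the authors' implementation): greedy minimum-redundancy
# maximum-relevance (MRMR) input selection with a correlation-based score.
import numpy as np

def mrmr_select(X, y, n_select):
    """X: (n_samples, n_features) candidate inputs; y: target.
    Returns indices of selected features in selection order."""
    n_features = X.shape[1]
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
    selected, remaining = [], list(range(n_features))
    for _ in range(n_select):
        best_j, best_score = None, -np.inf
        for j in remaining:
            if selected:
                redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                      for s in selected])
            else:
                redundancy = 0.0
            score = relevance[j] - redundancy      # "max relevance, min redundancy"
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy example: y depends on columns 0 and 3; column 1 nearly duplicates column 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)); X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=500)
y = 2 * X[:, 0] + X[:, 3] + 0.1 * rng.normal(size=500)
print(mrmr_select(X, y, 3))   # expected to pick 0 and 3 before the redundant 1
```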

12.
To address the multiple time delays that exist in control devices installed on spatial structures, in this study a discrete analysis using the 2^N precise algorithm was selected to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from the computational structural mechanics and optimal control research areas, and it translates the design of the MBC multi-time-delay controller into a solution for the segment matrix. This approach transforms a serial algorithm in time into parallel computing in space, greatly improving the solving efficiency and numerical stability. The designed controller is able to account for the time delay with a linear combination of controlling forces and is especially effective under large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.
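For orientation, the core of a 2^N precise algorithm is the computation of the state-transition matrix exp(AΔt) by repeated doubling of a tiny-step increment. The generic sketch below illustrates that idea only; it is not the paper's multi-time-delay controller or segment-matrix solution.

```python
# Hedged sketch: 2^N "precise integration" of T = exp(A*dt). dt is split into
# 2^N tiny steps; the increment Ta = exp(A*tau) - I is propagated by repeated
# doubling, which avoids the precision loss of adding the identity too early.
import numpy as np
from scipy.linalg import expm   # only used as a reference check below

def precise_expm(A, dt, N=20, taylor_terms=4):
    """Approximate exp(A*dt) with the 2^N precise integration algorithm."""
    n = A.shape[0]
    tau = dt / 2.0**N
    Ta = np.zeros_like(A, dtype=float)      # Taylor series of exp(A*tau) - I
    term = np.eye(n)
    for k in range(1, taylor_terms + 1):
        term = term @ (A * tau) / k
        Ta += term
    # doubling identity: exp(2h) - I = 2*(exp(h)-I) + (exp(h)-I)^2, applied N times
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta
    return np.eye(n) + Ta

# Check against a reference for a single-DOF oscillator state matrix (made-up values).
A = np.array([[0.0, 1.0], [-4.0, -0.2]])
print(np.max(np.abs(precise_expm(A, 0.01) - expm(A * 0.01))))  # ~machine precision
```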

13.
In acoustic logging-while-drilling (ALWD) finite-difference time-domain (FDTD) simulations, the large drill collar occupies most of the fluid-filled borehole and divides the borehole fluid into two thin fluid columns (radius ~27 mm). Fine grids and large computational models are required to model the thin fluid region between the tool and the formation. As a result, a small time step and more iterations are needed, which increases the cumulative numerical error. Furthermore, due to the high impedance contrast between the drill collar and the fluid in the borehole (the difference is >30 times), the stability and efficiency of the perfectly matched layer (PML) scheme are critical for simulating complicated wave modes accurately. In this paper, we compared four different PML implementations in a staggered-grid FDTD ALWD simulation: field-splitting PML (SPML), multiaxial PML (M-PML), non-splitting PML (NPML), and complex frequency-shifted PML (CFS-PML). The comparison indicated that NPML and CFS-PML can absorb the guided-wave reflection from the computational boundaries more efficiently than SPML and M-PML. For long simulation times, SPML, M-PML, and NPML are numerically unstable; however, the stability of M-PML can be improved further to some extent. Based on this analysis, we proposed that the CFS-PML method be used in FDTD LWD modeling to eliminate the numerical instability and to improve the efficiency of absorption in the PML layers. The optimal values of the CFS-PML parameters in the LWD simulation were investigated based on thousands of 3D simulations. For typical LWD cases, the best maximum value of the quadratic damping profile was obtained. The optimal parameter space for the maximum value of the linear frequency-shifted factor (α0) and the scaling factor (β0) depended on the thickness of the PML layer. For typical formations, if the PML thickness is 10 grid points, the global error can be reduced to <1% using the optimal PML parameters, and the error decreases as the PML thickness increases.
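For reference, the CFS-PML mentioned above is characterized by a complex frequency-shifted coordinate-stretching factor. The standard form is shown below in textbook notation (not necessarily the symbols used in the paper, where α0 and β0 denote the maximum frequency-shifted and scaling factors).

```latex
% Standard CFS-PML coordinate-stretching factor: kappa is the scaling factor,
% d the damping profile, alpha the frequency-shifted factor. Setting kappa = 1
% and alpha = 0 recovers the classical (unshifted) PML stretch.
s_x(\omega) \;=\; \kappa_x \;+\; \frac{d_x}{\alpha_x + \mathrm{i}\,\omega},
\qquad \kappa_x \ge 1,\quad d_x \ge 0,\quad \alpha_x \ge 0 .
```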

14.
Many broadcast-spawning benthic invertebrates are subject to sperm limitation yet achieve high population densities, as for example the dreissenid mussels (Dreissena polymorpha and Dreissena bugensis) that were introduced into the Laurentian Great Lakes. The question remains whether biological or ecological/physical mechanisms reduce sperm limitation. Gamete dilution/longevity experiments were undertaken to determine whether dreissenid mussels are subject to sperm limitation, and computational fluid dynamic modeling was used to determine the potential influence of bottom roughness on sperm dilution in nature. Results indicated that dreissenid mussels may be sperm limited, but the extent to which sperm dilution affects them is lower than what has been reported for other broadcast-spawning invertebrates. Importantly, model mussel clusters influenced external fertilization by retaining sperm in downstream eddies while allowing downstream transport from one cluster to another. This, in addition to high sperm potency at low sperm concentrations, may help to explain the success of dreissenid mussels as invasive species.

15.
This paper documents our development and evaluation of a numerical solver for systems of sparsely linked ordinary differential equations in which the connectivity between equations is determined by a directed tree. These types of systems arise in distributed hydrological models. The numerical solver is based on dense-output Runge–Kutta methods that allow for asynchronous integration. A partition of the system is used to distribute the workload among different processes, enabling a parallel implementation that capitalizes on a distributed memory system. Communication between processes is performed asynchronously. We illustrate the solver capabilities by integrating flow transport equations for a ~17,000 km² river basin subdivided into 305,000 sub-watersheds that are interconnected by the river network. Numerical experiments for a few models are performed, and the runtimes and scalability on our parallel computer are presented. Efficient numerical integrators such as the one demonstrated here bring closer to reality the goal of implementing fully distributed real-time flood forecasting systems supported by physics-based hydrological models and high-quality/high-resolution rainfall products.
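To make the tree-structured coupling concrete, the following toy sketch (illustrative only, not the authors' asynchronous parallel solver) integrates a three-link river network in upstream-to-downstream order, passing each solved link's outflow to its downstream neighbour through a dense-output interpolant. The network, runoff values and linear-reservoir routing are all invented for illustration.

```python
# Hedged sketch: each sub-watershed's storage obeys a linear reservoir ODE forced
# by its upstream links, so the directed tree can be integrated link by link.
import numpy as np
from scipy.integrate import solve_ivp

downstream = {0: 2, 1: 2, 2: None}     # child -> parent links; None marks the outlet
runoff = {0: 1.0, 1: 0.5, 2: 0.2}      # local runoff into each link [m^3/s]
k = 1.0 / 3600.0                       # linear reservoir constant [1/s]
t_span = (0.0, 6 * 3600.0)

outflow = {}                           # dense-output discharge of solved links
for link in [0, 1, 2]:                 # a topological (upstream-first) order
    upstream = [c for c, p in downstream.items() if p == link]
    def dSdt(t, S, ups=upstream, q=runoff[link]):
        inflow = q + sum(outflow[c](t) for c in ups)   # upstream links already solved
        return inflow - k * S                          # dS/dt = inflow - outflow
    sol = solve_ivp(dSdt, t_span, [0.0], dense_output=True)
    # store this link's outflow as a callable for its downstream neighbour
    outflow[link] = (lambda s: (lambda t: k * s.sol(t)[0]))(sol)

print(outflow[2](t_span[1]))           # discharge at the outlet after 6 hours
```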

16.
A parallel soil–structure interaction (SSI) model is presented for applications on distributed computer systems. The substructuring method is applied to the SSI system, and a coupled finite–infinite element based parallel computer program is developed. In the SSI system, infinite elements are used to represent the soil, which extends to infinity. In this case, a large finite element mesh is required to define the near field for reliable predictions. The resulting large-scale problems are solved on distributed computer systems in this study. The domain is represented by separate substructures and an interface. The number of substructures is determined by the available processors in the parallel platform. To avoid the formation of large interface equations, smaller interface equations are distributed to the processors while the substructure contributions are computed. This saves considerable memory and computational effort. Direct solution techniques are used for the solution of the interface and substructure equation systems. The program is investigated through some example problems. The example problems exposed the need to solve large-scale problems in order to reach better results, and their results demonstrated the benefits of the parallel SSI algorithm.
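As a reference point for the substructuring idea above, the sketch below shows the generic static condensation (Schur complement) of one substructure onto its interface degrees of freedom; each process can form such a condensed contribution independently, and only the small interface system needs to be assembled and solved. This is a textbook illustration, not the paper's parallel program, and the 4-DOF matrix is a made-up example.

```python
# Hedged sketch: static condensation of a substructure onto its interface DOFs.
import numpy as np

def condense(K, f, interior, boundary):
    """Return the condensed (Schur complement) interface stiffness and load.
    K, f: substructure stiffness matrix and load vector;
    interior, boundary: index arrays of interior and interface DOFs."""
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Kbi = K[np.ix_(boundary, interior)]
    Kbb = K[np.ix_(boundary, boundary)]
    X = np.linalg.solve(Kii, Kib)             # Kii^{-1} Kib
    y = np.linalg.solve(Kii, f[interior])     # Kii^{-1} f_i
    K_hat = Kbb - Kbi @ X                     # condensed interface stiffness
    f_hat = f[boundary] - Kbi @ y             # condensed interface load
    return K_hat, f_hat

# Toy 4-DOF "substructure": DOFs 0, 1 interior and 2, 3 on the interface.
K = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
f = np.array([1.0, 0.0, 0.0, 2.0])
print(condense(K, f, interior=np.array([0, 1]), boundary=np.array([2, 3])))
```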

17.
18.
A method for the simultaneous determination of mixed model parameters, which have different physical dimensions or different responses to data, is presented. Mixed parameter estimation from observed data within a single model space shows instabilities and trade-offs of the solutions. We separate the model space into N subspaces based on their physical properties or computational convenience and solve the N subsystems by damped least squares and singular value decomposition. Since the condition number of each subsystem is smaller than that of the single global system, the approach can greatly increase the stability of the inversion. We also introduce different damping factors into the subsystems to reduce the trade-offs between the different parameters. The damping factors depend on the conditioning of the subsystems and may be adequately chosen in a range from 0.1% to 10% of the largest singular value. We illustrate the method with an example of simultaneous determination of source history, source geometry, and hypocentral location from regional seismograms, although it is applicable to any geophysical inversion.
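The damped least-squares solution of one subsystem via the SVD, with the damping factor taken as a fraction of the largest singular value as suggested above, can be sketched as follows (a generic illustration with made-up numbers, not the paper's inversion code).

```python
# Hedged sketch: damped least squares via the SVD, eps = frac * s_max.
import numpy as np

def damped_lsq(G, d, frac=0.01):
    """Solve min ||G m - d||^2 + eps^2 ||m||^2 with eps = frac * (largest singular value)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    eps = frac * s[0]                       # e.g. 0.1%-10% of the largest singular value
    filt = s / (s**2 + eps**2)              # damped inverse of each singular value
    return Vt.T @ (filt * (U.T @ d))

# Toy ill-conditioned subsystem: the damping stabilizes the tiny singular value.
G = np.array([[1.0, 1.0], [1.0, 1.0001], [2.0, 2.0]])
d = np.array([2.0, 2.0001, 4.0])
print(damped_lsq(G, d, frac=0.001))
```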

19.
A common way to simulate fluid flow in porous media is to use Lattice Boltzmann (LB) methods. Permeability predictions from such flow simulations are controlled by parameters whose settings must be calibrated in order to produce realistic modelling results. Herein we focus on the simplest and most commonly used implementation of the LB method: the single-relaxation-time BGK model. A key parameter in the BGK model is the relaxation time τ, which controls the flow velocity and has a substantial influence on the permeability calculation. Currently there is no rigorous scheme to calibrate its value for models of real media. We show that the standard method of calibration, by matching the flow profile of the analytic Hagen–Poiseuille pipe-flow model, results in a BGK-LB model that is unable to accurately predict permeability even in simple realistic porous media (herein, Fontainebleau sandstone). In order to reconcile the differences between predicted permeability and experimental data, we propose a method to calibrate τ using an enhanced Transitional Markov Chain Monte Carlo method, which is suitable for parallel computer architectures. We also propose a porosity-dependent τ calibration that provides an excellent fit to experimental data and creates an empirical model that can be used to choose τ for new samples of known porosity. Our Bayesian framework thus provides robust predictions of the permeability of realistic porous media, herein demonstrated on the BGK-LB model, and should therefore replace the standard pipe-flow based methods of calibration for more complex media. The calibration methodology can also be extended to more advanced LB methods.
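For context, in the BGK-LB model the relaxation time τ fixes the lattice kinematic viscosity through the standard relation below (a textbook result, not taken from the paper), which is one reason permeability predictions are so sensitive to how τ is calibrated.

```latex
% Standard BGK-LB viscosity relation for common lattices (e.g. D2Q9, D3Q19):
\nu \;=\; c_s^{2}\left(\tau - \tfrac{1}{2}\right)\Delta t,
\qquad c_s^{2} \;=\; \tfrac{1}{3}\,\frac{\Delta x^{2}}{\Delta t^{2}} .
```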

20.
A wave-type-based method for real-time prediction of strong ground motion (SGM) accelerograms is developed. Real-time prediction of SGM is required in predictive building control systems to trigger and control actuator systems, with the goal of reducing structural deformations during an ongoing earthquake. It is well known that SGM is a classic example of a non-stationary stochastic process with temporal variation of both amplitude and frequency content. The developed non-parametric model considers the non-homogeneity of the seismic process, which contains different wave types with individual frequency contents and time-dependent amplitude distribution patterns. An important part of the method is therefore to detect the dominant seismic wave phases. Prediction of the seismic signal is undertaken by applying a frequency-adaptive windowing approach, which predicts the on-coming signal in the time window t + Δt based on the measured data in the time window t. Besides the frequency-adaptive windowing, constant windowing and semi-adaptive windowing approaches are deployed. The results show that using adaptive time windows matched to the dominant frequency of the signal enables the model to capture and predict the most dominant frequencies. Performance of the proposed model is verified using 97 free-field accelerograms, which were applied to train and validate the prediction model. The selected accelerograms were recorded on soil types C and D according to Eurocode 8, and their moment magnitudes range between 6.2 and 7.7. The learning capability of a radial basis function artificial neural network is used to reconstruct the SGM accelerogram. The most significant advantage of the proposed model is the concept of wave-type-based modeling, which has the advantage of a conceptual physical modeling of the seismic process. Comparison of the real-time predicted and observed accelerograms shows a high correlation when the frequency-adaptive approach is applied. This paper lays a foundation for more effective use of real-time predictive control systems and potential future extensions in active structural control as well as in real-time seismology.
