Similar Literature
20 similar documents found.
1.
A simple a posteriori local error estimate for Newmark time integration schemes in dynamic analysis is presented, based on the concept of a so-called ‘post-processing’ technique. In conjunction with the error estimate, an adaptive time-stepping algorithm is described, which adjusts the time step size so that the local error of each time step is within a prescribed error tolerance. Numerical examples given in the paper indicate that the error estimate is asymptotically convergent, computationally efficient and convenient, and that the adaptive time-stepping scheme can predict a nearly optimal step size from time to time, thus making the numerical solution reliable in an efficient manner.
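The Newmark scheme underlying estimators like this one can be sketched for a single-degree-of-freedom system. This is a minimal illustration, not the paper's implementation; the constant-average-acceleration parameters (beta = 1/4, gamma = 1/2) and the test oscillator are conventional choices.

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, n_steps, beta=0.25, gamma=0.5, u0=0.0, v0=0.0):
    """Newmark time integration for a single-DOF system m*u'' + c*u' + k*u = f(t).
    beta=0.25, gamma=0.5 is the constant-average-acceleration (trapezoidal) rule."""
    u = np.zeros(n_steps + 1)
    v = np.zeros(n_steps + 1)
    a = np.zeros(n_steps + 1)
    u[0], v[0] = u0, v0
    a[0] = (f(0.0) - c * v0 - k * u0) / m
    k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(n_steps):
        t1 = (i + 1) * dt
        f_eff = (f(t1)
                 + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                        + (1.0 / (2.0 * beta) - 1.0) * a[i])
                 + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1.0) * v[i]
                        + dt * (gamma / (2.0 * beta) - 1.0) * a[i]))
        u[i + 1] = f_eff / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
                    - (1.0 / (2.0 * beta) - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Free vibration of an undamped oscillator with natural period T = 1 s:
# after one full period the displacement should return to its initial value.
u, v, a = newmark_sdof(1.0, 0.0, (2.0 * np.pi)**2, lambda t: 0.0,
                       dt=1e-3, n_steps=1000, u0=1.0)
```

A post-processing local error estimate of the kind the abstract describes would be computed on top of such a step, e.g. by comparing the step result against a higher-order reconstruction of the displacement over the interval.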

2.
A simple local error estimator is presented for time integration schemes in dynamic analysis. This error estimator involves only a small computational cost. The time step size is adaptively adjusted so that the local error at each time step is within a prescribed accuracy. It is found that the estimator performs well under various circumstances and provides an economical adaptive process. Attempts to estimate the global time integration error are also reported.
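Step-halving is one generic way to realise the kind of low-cost local error control described here; this sketch is a stand-in for the paper's estimator, and the controller constants (safety factor 0.9, growth/shrink limits 0.2 to 2.0) are conventional assumptions rather than values from the paper.

```python
def adapt_step(step_fn, y, t, dt, tol, order=1):
    """Local-error control by step halving: advance one step of size dt and two
    of dt/2, take their difference as the local error estimate, and rescale dt
    so the error stays near the prescribed tolerance."""
    y_full = step_fn(y, t, dt)
    y_half = step_fn(step_fn(y, t, dt / 2.0), t + dt / 2.0, dt / 2.0)
    err = abs(y_half - y_full)
    # Conventional controller: scale by (tol/err)^(1/(order+1)), with a 0.9
    # safety factor and clipping to keep the step-size sequence smooth.
    scale = 0.9 * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    dt_new = dt * min(2.0, max(0.2, scale))
    return y_half, err, dt_new

# One controlled step of explicit Euler (order 1) on y' = -y from y = 1.
euler = lambda y, t, dt: y + dt * (-y)
y_new, err, dt_new = adapt_step(euler, 1.0, 0.0, 0.1, tol=1e-4, order=1)
```

Here the error estimate 0.0025 exceeds the tolerance, so the controller shrinks the step to its lower clip of 0.2 times the old step.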

3.
4.
Monitoring three-dimensional surface deformation by fusing InSAR and GPS with the BFGS method (cited 7 times: 1 self-citation, 6 by others)
Although InSAR offers high precision, wide coverage and high spatial resolution, it can measure only one-dimensional surface deformation along the radar line of sight; GPS can measure three-dimensional surface deformation, but at very low spatial resolution. This paper studies the fusion of InSAR and GPS for monitoring high-spatial-resolution three-dimensional surface deformation. We first prove that a simple local optimization iteration can reach the global optimum of the energy-function model that combines InSAR and GPS surface deformation-rate observations. We then propose inverting for the optimal three-dimensional surface deformation rates with the BFGS local optimization algorithm. This approach avoids both the heavy computation and poor convergence of global optimization algorithms and the numerical instability of the traditional analytical method. Finally, simulated experiments and a real-data experiment in Southern California, USA, show that the method yields high-precision three-dimensional surface deformation-rate fields; moreover, when observation or interpolation errors make the analytical solution inaccurate, BFGS still delivers a stable, high-precision global optimum.
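The fusion idea can be illustrated with a toy weighted least-squares problem: one InSAR observation constrains only the line-of-sight projection of the 3-D deformation, while GPS constrains all three components. This is a sketch of the underlying observation model, not the paper's BFGS energy-function inversion; the look vector, deformation values and weights are all illustrative assumptions.

```python
import numpy as np

# "True" 3-D deformation rate (east, north, up; units arbitrary) and a
# hypothetical InSAR line-of-sight (LOS) unit vector.
d_true = np.array([10.0, -5.0, 3.0])
look = np.array([0.38, -0.08, 0.92])
look = look / np.linalg.norm(look)

obs_los = look @ d_true      # InSAR observes only the LOS projection
obs_gps = d_true.copy()      # GPS observes all three components (coarsely)

# Stack the observation equations A d = b and weight the high-precision
# InSAR row more heavily than the (noisier, low-resolution) GPS rows.
A = np.vstack([look, np.eye(3)])
b = np.concatenate([[obs_los], obs_gps])
w = np.array([10.0, 1.0, 1.0, 1.0])
d_hat, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
```

With noiseless, consistent observations the weighted least-squares estimate recovers the true deformation exactly; the paper's contribution concerns minimising this kind of objective robustly when the analytical solution becomes unstable.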

5.
In defining possible scenarios of earthquake-induced landslides, Arias intensity is frequently used as a shaking parameter, being considered the most suitable for characterising earthquake impact, while Newmark's sliding-block model is widely used to predict the performance of natural slopes during earthquake shaking. In the present study we aim to provide tools for assessing the hazard related to earthquake-induced landslides at regional scale, by means of new empirical equations for the prediction of Arias intensity along with an empirical estimator of coseismic landslide displacements based on Newmark's model. The regression data, consisting of 205 strong motion recordings from 98 earthquakes, were subdivided into a training dataset, used to calculate equation parameters, and a validation dataset, used to compare the prediction performance among different possible functional forms and with equations derived from previous studies carried out for other regions using global and/or regional datasets. Equations predicting the Arias intensities expected in Greece at known distances from seismic sources of defined magnitude proved to provide more accurate estimates when site conditions and focal mechanism can be taken into account. Concerning the empirical estimator of Newmark displacements, we conducted rigorous Newmark analysis on 267 one-component records, yielding a dataset of 507 Newmark displacements, with the aim of developing a regression equation that is more suitable and effective for the seismotectonic environment of Greece and could be used for regional-scale seismic landslide hazard mapping. The regression analysis showed a noticeably higher goodness of fit for the proposed relations compared to formulas derived from worldwide data, suggesting a significant improvement in effectiveness from the use of a region-specific strong-motion dataset.
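The rigorous Newmark sliding-block analysis named here admits a compact numerical sketch: the block accumulates relative velocity whenever ground acceleration exceeds the yield acceleration and coasts to rest afterwards. The downslope-only sliding assumption and the rectangular test pulse are simplifications for illustration, not the paper's strong-motion processing.

```python
import numpy as np

def newmark_sliding_displacement(acc, dt, ay):
    """Rigorous Newmark rigid sliding-block analysis, downslope sliding only:
    the block gains relative velocity while ground acceleration exceeds the
    yield acceleration ay, decelerates at ay otherwise, and locks at v = 0."""
    v = 0.0  # relative (block-ground) velocity
    d = 0.0  # accumulated permanent displacement
    for a in acc:
        if v > 0.0 or a > ay:
            v = max(v + (a - ay) * dt, 0.0)
            d += v * dt
    return d

# Rectangular test pulse: 1 s at twice the yield acceleration, then quiescence.
# The block slides during the pulse and coasts to rest afterwards; for this
# pulse the sliding and coasting phases each contribute about 0.5 units.
acc = np.concatenate([np.full(1000, 2.0), np.zeros(1000)])
d = newmark_sliding_displacement(acc, 1e-3, ay=1.0)
```

Regional-scale regressions of the kind the abstract proposes are fitted to displacements computed exactly this way, as functions of the ratio of yield to peak acceleration.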

6.
The pseudodynamic (PSD) test method imposes command displacements on a test structure for a given time step. The measured restoring forces and the displaced position achieved in the test structure are then used to integrate the equations of motion to determine the command displacements for the next time step. Multi-directional displacements of the test structure can introduce error into the measured restoring forces and displaced position. The subsequently determined command displacements will not be correct unless the effects of the multi-directional displacements are considered. This paper presents two approaches for correcting kinematic errors in planar multi-directional PSD testing, where the test structure is loaded through a rigid loading block. The first approach, referred to as the incremental kinematic transformation method, employs linear displacement transformations within each time step. The second, referred to as the total kinematic transformation method, is based on exact nonlinear displacement transformations. Using three displacement sensors and the trigonometric law of cosines, this second method enables the simultaneous nonlinear equations that express the motion of the loading block to be solved without iteration. The formulation and example applications for each method are given. Results from numerical simulations and laboratory experiments show that the total transformation method maintains accuracy, while the incremental transformation method may accumulate error if the incremental rotation of the loading block is not small over the time step. A procedure for estimating the incremental error in the incremental kinematic transformation method is presented as a means to predict and possibly control the error. Copyright © 2009 John Wiley & Sons, Ltd.
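The contrast between the two approaches can be sketched for a single point on a rigid planar block: the total transformation applies the exact rotation, while the incremental transformation applies a linearized small-rotation update each step and therefore drifts. This toy example omits the three-sensor law-of-cosines solution and uses an illustrative 90-degree rotation split into 1000 increments.

```python
import numpy as np

def total_transform(p, u, theta):
    """Exact (nonlinear) planar rigid-body motion of point p: rotation by
    theta about the origin followed by translation u."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ p + u

def incremental_transform(p, du, dtheta):
    """Linearized small-rotation update, as applied once per time step."""
    return p + du + dtheta * np.array([-p[1], p[0]])

# Rotate the point (1, 0) by 90 degrees: once exactly, and in 1000 linearized
# increments.  The incremental path drifts slightly off the unit circle,
# illustrating the accumulation of error when rotations are not small.
p = np.array([1.0, 0.0])
exact = total_transform(p, np.zeros(2), np.pi / 2.0)
q = p.copy()
for _ in range(1000):
    q = incremental_transform(q, np.zeros(2), np.pi / 2.0 / 1000.0)
drift = np.linalg.norm(q - exact)
```

Even with 1000 sub-steps the linearized update leaves a small but nonzero radial drift, which is the error mechanism the paper's estimation procedure targets.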

7.
We present a new inversion method to estimate, from prestack seismic data, blocky P- and S-wave velocity and density images and the associated sparse reflectivity levels. The method uses the three-term Aki and Richards approximation to linearise the seismic inversion problem. To this end, we adopt a weighted mixed l2,1-norm that promotes structured forms of sparsity, thus leading to blocky solutions in time. In addition, our algorithm incorporates a covariance or scale matrix to simultaneously constrain P- and S-wave velocities and density. This a priori information is obtained from nearby well-log data. We also include a term containing a low-frequency background model. The mixed l2,1-norm leads to a convex objective function that can be minimised using proximal algorithms; in particular, we use the fast iterative shrinkage-thresholding algorithm. A key advantage of this algorithm is that it requires only matrix-vector multiplications and no direct matrix inversion, which makes it numerically stable, easy to apply, and economical in terms of computational cost. Tests on synthetic and field data show that the proposed method, unlike conventional l2- or l1-norm regularised solutions, is able to provide consistent blocky and/or sparse estimates of P- and S-wave velocities and density from a noisy and limited number of observations.
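The fast iterative shrinkage-thresholding algorithm (FISTA) named in the abstract can be illustrated on the simpler l1-regularised problem rather than the paper's mixed l2,1-norm; the structure (gradient step, soft threshold, momentum extrapolation) is the same. The toy identity operator in the usage line is an assumption chosen so the answer is known in closed form.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    """FISTA for the l1-regularised least-squares problem
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Only matrix-vector products are needed; no direct matrix inversion."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = y - (A.T @ (A @ y - b)) / L                             # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)               # momentum
        x, t = x_new, t_new
    return x

# Toy problem with an identity operator: the minimiser is the soft-thresholded
# data, so the first component shrinks from 3 to 2 and the rest stay zero.
x_hat = fista_l1(np.eye(5), np.array([3.0, 0.0, 0.0, 0.0, 0.0]), lam=1.0)
```

The absence of any matrix inversion in the loop is exactly the property the abstract highlights for numerical stability and low cost.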

8.
A standard test of the Newmark method in structural dynamics is its application to the determination of the response of a damped or undamped single-degree-of-freedom system to a prescribed initial displacement or velocity. In this paper formulae for the error, Δj, in the response, after applying the Newmark method for j time-steps each of duration Δt, are proposed and their acceptable accuracy is demonstrated.

9.
Advances in Water Resources, 2003, 26(11): 1189-1198
A two-dimensional finite element based overland flow model was developed and used to study the accuracy and stability of three numerical schemes and watershed parameter aggregation error. The conventional consistent finite element scheme results in oscillations for certain time step ranges. The lumped and the upwind finite element schemes were tested as alternatives to the consistent scheme. The upwind scheme did not improve on the stability or the accuracy of the solution, while the lumped scheme provided stable and accurate solutions for time steps twice the size of those needed for the consistent scheme. A new accuracy-based dynamic time step estimate for the two-dimensional overland flow kinematic wave solution was developed for the lumped scheme. The dynamic time step estimates are functions of the mesh size and the time of concentration of the watershed hydrograph. Owing to the lack of analytical solutions, the time step was developed by comparing numerical solutions at various levels of discretization to a reference solution using a very fine mesh and a very small time step. The time step criteria were tested on a different set of problems and proved adequate for accurate and stable solutions. A sensitivity analysis for the watershed slope, Manning's roughness coefficient and excess rainfall rate was conducted to test the effect of parameter aggregation on the stability and accuracy of the solution. The results of this analysis show that aggregation of the slope data resulted in the highest error. The roughness coefficient had a smaller effect on the solution, while the rainfall intensity did not show any significant effect on the flow rate solution for the range of intensities used. This work pioneers the challenge of providing guidelines for accurate and stable numerical solutions of the two-dimensional kinematic wave equations for overland flow.

10.
Strong motion records taken during earthquakes in Turkey are used to calculate Newmark displacements in slopes. These displacements are then utilized to develop a novel displacement-based methodology for selecting the seismic coefficient used to calculate the pseudostatic safety factor. In the first step of the study, calculated Newmark displacements are evaluated in three different categories: using all data; using data for different earthquake magnitude (M) ranges, with and without a distance constraint; and using data for different peak acceleration (amax) ranges. For all categories, different equations are obtained to assign slope displacements as a function of the ratio of yield acceleration to peak acceleration. The results show that categorization of the data is important, because the displacements depend on earthquake magnitude and peak acceleration. In the second step, the equations obtained for different peak acceleration ranges are used to propose charts linking upper bound slope displacements (D), seismic coefficients (kh) and pseudostatic safety factors (PSF), the three key parameters of a pseudostatic approach. This enables kh values to be chosen based on allowable displacements, instead of the current practice based on judgement and expertise. The results show that kh values for any allowable displacement should be based on anticipated amax values, while use of high PSF values results in lower displacements. Extensive comparison with solutions from the literature is also made. The methodology is best suited to earthquake-triggered shallow landslides in natural slopes consisting of materials that do not lose strength during dynamic loading.

11.
Recent development of 30 m global land characterization datasets (e.g., land cover, vegetation continuous fields) represents the finest spatial resolution inputs for global scale studies. Here, we present results from further improvement to land cover mapping and an impact analysis of spatial resolution on area estimation for different land cover types. We propose a set of methods to aggregate two existing 30 m resolution circa-2010 global land cover maps, namely FROM-GLC (Finer Resolution Observation and Monitoring-Global Land Cover) and FROM-GLC-seg (Segmentation), with two coarser resolution global maps of developed land, i.e., Nighttime Light Impervious Surface Area (NL-ISA) and MODIS urban extent (MODIS-urban), to produce an improved 30 m global land cover map, FROM-GLC-agg (Aggregation). It was post-processed using additional coarse resolution datasets (i.e., MCD12Q1, GlobCover2009, MOD44W, etc.) to reduce land cover type confusion. Around 98.9% of pixels remain at 30 m resolution after this post-processing. Based on this map, majority aggregation and proportion aggregation approaches were employed to create a multi-resolution hierarchy (250 m, 500 m, 1 km, 5 km, 10 km, 25 km, 50 km, 100 km) of land cover maps to meet the resolution requirements of different applications. Through accuracy assessment, we found that the best overall accuracies for the post-processed base map (at 30 m) and the three maps subsequently aggregated at 250 m, 500 m and 1 km resolutions are 69.50%, 76.65%, 74.65%, and 73.47%, respectively. Our analysis of area-estimation biases for different land cover types at different resolutions suggests that maps coarser than 5 km resolution contain at least 5% area estimation error for most land cover types. Proportion layers, which contain precise information on land cover percentage, are suggested for use when coarser resolution land cover data are required.
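Majority aggregation, one of the two approaches the abstract names, can be sketched as follows. The block-based loop and the toy 4x4 class raster are illustrative; a production pipeline would operate on tiled rasters rather than in-memory arrays.

```python
import numpy as np

def majority_aggregate(grid, factor):
    """Aggregate a 2-D class raster by assigning each factor-by-factor block
    the class that occurs most often within it (ties broken by lowest class id,
    since np.unique returns classes in ascending order)."""
    h, w = grid.shape
    out = np.empty((h // factor, w // factor), dtype=grid.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = grid[i * factor:(i + 1) * factor,
                         j * factor:(j + 1) * factor].ravel()
            vals, counts = np.unique(block, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out

# A toy 4x4 class raster aggregated by a factor of 2 (e.g. 30 m -> 60 m cells).
grid = np.array([[1, 1, 2, 2],
                 [1, 3, 2, 1],
                 [4, 4, 5, 5],
                 [4, 4, 5, 6]])
agg = majority_aggregate(grid, 2)
```

Proportion aggregation would instead return, for each coarse cell, the fraction of fine pixels in each class, which is why the abstract recommends proportion layers at coarse resolutions: the majority rule discards minority classes and biases area estimates.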

12.
The paper explores the GC (Gravouil and Combescure) partitioning strategy recently adopted in real-time dynamic substructuring testing (RTDS) and pseudo-dynamic testing (PsD). The GC method is a multi-time-step subdomain algorithm able to couple any time integration schemes from the Newmark family, with the time step size chosen per subdomain. The partitioning method is numerically tested by developing external software able to couple finite element codes based on implicit and explicit time integration schemes. A complex finite element mesh partitioning, exhibiting a large number of interface points, is considered for a full-size reinforced concrete frame structure subjected to earthquake loading: the well-known SPEAR structure, pseudo-dynamically tested at the ELSA laboratory in Ispra, Italy. Implicit and explicit parts of the structure are modelled using multi-fibre beam elements whose cross-section is divided into steel and concrete fibres with cyclic, nonlinear behaviours. The accuracy of the explicit/implicit multi-time-step co-computation is demonstrated by comparison with full explicit and full implicit computations. Despite the long duration of the earthquake excitation and the number of interface nodes involved in the mesh partitioning, the explicit/implicit multi-time-step co-computations provide very accurate global (displacements, forces at the base) and local (maximum strains in concrete) results while halving the computation time. Finally, the dissipated energy at the interface arising from the GC coupling algorithm remains very low at the end of the earthquake loading, less than 2% of the external energy, for ratios between the large and fine time steps ranging from 20 to 200.

13.
We present a comparison of methods for the analysis of the numerical substructure in a real-time hybrid test. A multi-tasking strategy is described, which satisfies the various control and numerical requirements. Within this strategy a variety of explicit and implicit time-integration algorithms have been evaluated. Fully implicit schemes can be used in fast hybrid testing via a digital sub-step feedback technique, but it is shown that this approach requires a large amount of computation at each sub-step, making real-time execution difficult for all but the simplest models. In cases where the numerical substructure poses no harsh stability condition, it is shown that the Newmark explicit method offers advantages of speed and accuracy. Where the stability limit of an explicit method cannot be met, one of several alternatives may be used, such as Chang's modified Newmark scheme or the α-operator splitting method. Appropriate methods of actuator delay compensation are also discussed. Copyright © 2008 John Wiley & Sons, Ltd.

14.
The aim of refracted-arrival inversion is the computation of near-surface information, i.e. first-layer thicknesses and refractor velocities, in order to estimate the initial static corrections for the seismic data. The present trend is towards totally automatic inversion techniques, which start by picking the first breaks and end by aligning the seismic traces at the datum plane. Accuracy and computational time savings are necessary requirements. These are not straightforward, because accuracy means noise immunity, which implies processing large amounts of data to take advantage of redundancy; moreover, owing to the non-linearity of the problem, accuracy also means high-order modelling and, as a consequence, complex inversion algorithms. The available methods are considered here with respect to the expected accuracy, i.e. to the model they assume. It is shown that the inversion of refracted arrivals with a linear model leads to an ill-conditioned problem, with the result that complete separation of the weathering thickness and the refractor velocity is not possible. This ambiguity is carefully analysed both in the spatial domain and in the wavenumber domain. An error analysis is then conducted with respect to the models and the survey configurations used. Tests on synthetic data sets validate the theories and give an idea of the magnitude of the error. This is largely dependent on the structure; here the quantitative analysis is extended to second-derivative effects, whereas the seismic literature has so far dealt only with first derivatives. The topographical conditions that render the traditional techniques incorrect are investigated and predicted by the error equations.
Improved solutions, based on more accurate models, are then considered: the advantages of the Generalized Reciprocal Method are demonstrated by applying the results of the error analysis to it, and the accuracy of the non-linear methods is discussed with respect to the interpolation technique they adopt. Finally, a two-step procedure, consisting of a linear model inversion followed by a local non-linear correction, is suggested as a good compromise between accuracy and computational speed.

15.
This paper concerns the application of the h-adaptive finite element method to dynamic analysis of a pile in liquefiable soil considering large deformation. In finite element analysis of pile behavior in liquefiable soil during an earthquake, especially considering large deformation of the liquefied ground, the discretization error in the zone near the pile becomes very large; our purpose was to refine the finite element approximation there. The updated Lagrangian formulation and a cyclic elasto-plastic model based on the kinematic hardening rule were adopted to deal with the nonlinearity of the soil. The mixed finite element and finite difference methods, together with the u-p formulation and Biot's two-phase mixture theory, were used. To improve the accuracy and efficiency of the finite element analysis, an h-adaptive scheme that includes a posteriori error estimation and h-version mesh refinement was applied. The calculated effective stresses were smoothed locally by extrapolation, and the smoothed stress was used to compute the L2 norm of the effective stress error in the last step of each time increment. The mesh was then refined by a fission procedure driven by the error estimate. As a numerical example, a soil–pile interaction system loaded cyclically was analyzed by our method.

16.
In the Newmark and other approximate step-by-step methods, assumptions are introduced to transform the differential equations characteristic of response problems into simultaneous equations, whose successive solutions lead to a response-time history. In this paper numerical results and formulae are given for the errors generated by this procedure. These errors are oscillatory in nature and, in general, the oscillations increase in magnitude as the number of time steps increases. Recommendations for upper limits on the time step, which provide acceptable accuracy for a wide range of system and excitation parameters, are presented.

17.
Various regional flood frequency analysis procedures are used in hydrology to estimate hydrological variables at ungauged or partially gauged sites. Relatively few studies have been conducted to evaluate the accuracy of these procedures and estimate the error induced in regional flood frequency estimation models. The objective of this paper is to assess the overall error induced in the residual kriging (RK) regional flood frequency estimation model. The two main error sources in specific flood quantile estimation using RK are the error induced in the local quantile estimation procedure and the error resulting from the regional quantile estimation process. Therefore, for an overall error assessment, the errors associated with these two steps must be quantified. Results show that the main source of error in RK is the error induced in the regional quantile estimation method. Results also indicate that the accuracy of the regional estimates increases with decreasing return periods. Copyright © 2010 John Wiley & Sons, Ltd.

18.
The “modified Picard” iteration method, which offers global mass conservation, can also be described as a form of Newton's iteration with lagged nonlinear coefficients. It converges to a time step with first-order discretization error. This paper applies second- and third-order diagonally implicit Runge-Kutta (DIRK) time steps to the modified Picard method in one example. It demonstrates improvements over the first-order time step in rms error and error-times-effort model quality by factors ranging from two to over two orders of magnitude, showing that the “modified Picard” and DIRK methods are compatible.

19.
The Stokes problem describes flow of an incompressible constant-viscosity fluid when the Reynolds number is small, so that inertial and transient effects are negligible. The numerical solution of the Stokes problem requires special care, since classical finite element discretization schemes, such as piecewise linear interpolation for both the velocity and the pressure, fail to perform. Even when an appropriate scheme is adopted, the grid must be selected so that the error is as small as possible. Much of the challenge in solving Stokes problems is how to account for complex geometry and to capture important features such as flow separation. This paper applies adaptive mesh techniques, using a posteriori error estimates, in the finite element solution of the Stokes equations that model flow at pore scales. Selected numerical test cases associated with various porous geometries are presented and discussed to demonstrate the accuracy and efficiency of our methodology.

20.
Abstract

Recent work pertaining to estimating error and accuracies in geomagnetic field modeling is reviewed from a unified viewpoint and illustrated with examples. The formulation of a finite-dimensional approximation to the underlying infinite-dimensional problem is developed. Central to the formulation is an inner product and norm in the solution space through which a priori information can be brought to bear on the problem. Such information is crucial to estimation of the effects of higher degree fields at the Core-Mantle boundary (CMB), because the behavior of higher degree fields is masked in our measurements by the presence of the field from the Earth's crust. Contributions to the errors in predicting geophysical quantities based on the approximate model are separated into three categories: (1) the usual error from the measurement noise; (2) the error from unmodeled fields, i.e. from sources in the crust, ionosphere, etc.; and (3) the error from truncating to a finite-dimensional solution and prediction space. The combination of the first two is termed low degree error, while the third is referred to as truncation error.

The error analysis problem consists of “characterizing” the difference δz = z − ẑ, where z is some quantity depending on the magnetic field and ẑ is the estimate of z resulting from our model. Two approaches are discussed. The method of Confidence Set Inference (CSI) seeks to find an upper bound for |z − ẑ|. Statistical methods, i.e. Bayesian or stochastic estimation, seek to estimate E(δz²), where E is the expectation value. Estimation of both the truncation error and the low degree error is discussed for both approaches, and expressions are found for an upper bound for |δz| and for E(δz²). Of particular interest is the computation of the radial field, Br, at the CMB, for which error estimates are made as examples of the methods. Estimated accuracies of the Gauss coefficients are given for the various methods. In general, the lowest error estimates result when the greatest amount of a priori information is available and, indeed, the estimates for truncation error are completely dependent upon the nature of the a priori information assumed. For the most conservative approach, the error in computing point values of Br at the CMB is unbounded and one must be content with, e.g., averages over some large area. The various assumptions about a priori information are reviewed. Work is needed to extend and develop this information. In particular, information regarding the truncated fields is needed to determine whether the pessimistic bounds presently available are realistic or whether there is a real physical basis for lower error estimates. Characterization of crustal fields for degree greater than 50 is needed, as is more rigorous characterization of the external fields.

