Similar Articles
20 similar articles found (search time: 46 ms)
1.
This paper introduces an extension of the traditional stationary linear coregionalization model to handle the lack of stationarity. Under the proposed model, coregionalization matrices are spatially dependent, and the basic univariate spatial dependence structures are non-stationary. A parameter estimation procedure for the proposed non-stationary linear coregionalization model is developed under the local stationarity framework. The estimation procedure is based on the method of moments and involves a matrix-valued local stationary variogram kernel estimator and a weighted local least squares method in combination with a kernel smoothing technique. Local parameter estimates are knitted together for prediction and simulation purposes. The proposed non-stationary multivariate spatial modeling approach is illustrated using two real bivariate data examples. Prediction performance is compared with that of the classical stationary multivariate spatial modeling approach. According to several criteria, the prediction performance of the proposed non-stationary approach appears to be significantly better.
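A minimal univariate sketch of the kind of kernel-weighted local variogram estimator this approach relies on is shown below, assuming a Gaussian spatial kernel and a toy random field; the matrix-valued (multivariate) estimator and the weighted local least-squares model fit of the paper are not reproduced, and the bandwidth, lags and field are illustrative choices.

```python
import numpy as np

# Univariate sketch of a kernel-weighted local variogram estimator: pairs of points are
# weighted by a Gaussian kernel centred on an anchor location, so the empirical
# variogram reflects the spatial structure near that location only.  The toy field,
# bandwidth and lag tolerance are illustrative assumptions.
rng = np.random.default_rng(5)
coords = rng.uniform(0, 100, size=(300, 2))
z = np.sin(coords[:, 0] / 15.0) + 0.2 * rng.normal(size=300)      # toy spatial field

def local_variogram(coords, z, x0, lags, lag_tol=2.5, bandwidth=20.0):
    d_pairs = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq_diff = 0.5 * (z[:, None] - z[None, :]) ** 2                # semivariance per pair
    w_pt = np.exp(-0.5 * (np.linalg.norm(coords - x0, axis=1) / bandwidth) ** 2)
    w_pair = w_pt[:, None] * w_pt[None, :]                        # kernel weight per pair
    gamma = []
    for h in lags:
        mask = np.abs(d_pairs - h) < lag_tol
        np.fill_diagonal(mask, False)
        gamma.append(np.sum(w_pair[mask] * sq_diff[mask]) / np.sum(w_pair[mask]))
    return np.array(gamma)

lags = np.arange(5.0, 50.0, 5.0)
print(local_variogram(coords, z, x0=np.array([25.0, 25.0]), lags=lags))
```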

2.
This is the first in a series of three papers focused on using variants of a logarithmic objective function approach to full waveform inversion. In this article, we investigate waveform inversion using full logarithmic principles and compare the results with the conventional least-squares approach. We demonstrate theoretically that logarithmic inversion is computationally similar to the conventional method in the sense that it uses exactly the same back-propagation technology as least-squares inversion. In the sense that it produces better results for each of three numerical examples, we conclude that logarithmic inversion is also more robust. We argue that a major reason for this inherent robustness is that the logarithmic approach produces a natural scaling of the amplitude of the residual wavefield by the amplitude of the modelled wavefield, which tends to stabilize the computations and consequently improve the final result. We claim that any superiority of the logarithmic inversion is based on the fact that it tends to be tomographic in the early stage of the inversion and more dependent on amplitude differences in the later stages.
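To illustrate the natural amplitude scaling mentioned above, the following toy sketch compares the conventional least-squares residual with the logarithmic residual for a synthetic frequency-domain wavefield; the data are invented and the sketch is not the authors' full waveform inversion workflow.

```python
import numpy as np

# Toy frequency-domain wavefields (complex); names and values are illustrative only.
rng = np.random.default_rng(0)
d_obs = rng.normal(size=8) + 1j * rng.normal(size=8)   # "observed" data
d_mod = d_obs * (1.0 + 0.1 * rng.normal(size=8))       # "modelled" data, slightly off

# Conventional least-squares residual: sensitive to absolute amplitude.
r_ls = d_obs - d_mod
J_ls = 0.5 * np.sum(np.abs(r_ls) ** 2)

# Logarithmic residual: ln(d_obs / d_mod) = ln|d_obs/d_mod| + i*(phase difference),
# so the amplitude mismatch is naturally normalised by the modelled amplitude.
r_log = np.log(d_obs / d_mod)
J_log = 0.5 * np.sum(np.abs(r_log) ** 2)

print("least-squares misfit:", J_ls, " logarithmic misfit:", J_log)
```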

3.
Multidimensional scaling (MDS) has played an important role in non-stationary spatial covariance structure estimation and in analyzing the spatiotemporal processes underlying environmental studies. A combined cluster-MDS model, including geographical spatial constraints, has previously been proposed by the authors to address the estimation problem in oversampled domains in a least squares framework. In this paper, a general latent class model with spatial constraints is formulated that, in a maximum likelihood framework, allows the sample stations to be partitioned into classes and the cluster centers to be represented simultaneously in a low-dimensional space, while the stations and clusters retain their spatial relationships. A model selection strategy is proposed to determine the number of latent classes and the dimensionality of the problem. Real and artificial data sets are analyzed to test the performance of the model.

4.
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weights of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of irrigation-data uncertainty and different calibration conditions. The results from the OLS method show the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
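A minimal sketch of the underlying weighting idea, assuming a simple linear regression model rather than a groundwater model: observations whose source/sink inputs are more uncertain receive smaller weights in a generalized least-squares solve. In the actual IUWLS procedure the weights are re-adjusted iteratively as the parameters are updated.

```python
import numpy as np

# Generalized-least-squares sketch in the spirit of IUWLS: each observation is
# down-weighted by the variance propagated from its uncertain pumping input.
# The linear model y = X @ beta and all names are illustrative assumptions,
# not the authors' groundwater model.
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])       # design matrix
beta_true = np.array([2.0, -0.5])
sigma_obs = 0.1                                             # measurement error std
sigma_in = 0.3 * np.abs(X[:, 1])                            # std propagated from pumping uncertainty
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma_obs**2 + sigma_in**2))

# OLS ignores the input uncertainty; the weighted solve accounts for it.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / (sigma_obs**2 + sigma_in**2)                      # weights shrink where inputs are uncertain
beta_gls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print("OLS:", beta_ols, " weighted:", beta_gls)
```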

5.
The key problem in nonparametric frequency analysis of floods and droughts is the estimation of the bandwidth parameter, which defines the degree of smoothing. Most of the proposed bandwidth estimators have been based on the density function rather than the cumulative distribution function or the quantile function, which are the primary interest in frequency analysis. We propose a new bandwidth estimator derived from properties of quantile estimators. The estimator builds on work by Altman and Léger (1995). The estimator is compared to the well-known method of least squares cross-validation (LSCV) using synthetic data generated from various parametric distributions used in hydrologic frequency analysis. Simulations suggest that our estimator performs at least as well as, and in many cases better than, the method of LSCV. In particular, the use of the proposed plug-in estimator reduces bias in the estimation as compared to LSCV. When applied to data sets containing observations with identical values, typically the result of rounding or truncation, LSCV and most other techniques generally underestimate the bandwidth. The proposed technique performs very well in such situations.
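For reference, the sketch below implements the standard density-based LSCV bandwidth selection that the paper uses as a benchmark (not the proposed quantile-based plug-in estimator), assuming a Gaussian kernel and a synthetic Gumbel flood sample.

```python
import numpy as np

# Least-squares cross-validation (LSCV) for a Gaussian kernel density estimator:
# the score LSCV(h) = integral(fhat_h^2) - (2/n) * sum_i fhat_{h,-i}(x_i)
# is minimised over a grid of candidate bandwidths.
def lscv_score(x, h):
    n = len(x)
    d = x[:, None] - x[None, :]                              # pairwise differences
    phi = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))
    term1 = phi(d, np.sqrt(2) * h).sum() / n**2              # exact integral of fhat^2 (Gaussian kernel)
    k = phi(d, h)
    loo = (k.sum(axis=1) - phi(0.0, h)) / (n - 1)            # leave-one-out density at each x_i
    return term1 - 2.0 * loo.mean()

rng = np.random.default_rng(2)
x = rng.gumbel(loc=100.0, scale=25.0, size=200)              # synthetic annual flood peaks (illustrative)
grid = np.linspace(2.0, 30.0, 60)
h_lscv = grid[np.argmin([lscv_score(x, h) for h in grid])]
print("LSCV bandwidth:", h_lscv)
```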

6.
Conventional precast concrete column base connections are usually assembled with grouted sleeves. To overcome the problem that earthquake-induced damage concentrates in the connection region, which is unfavorable for seismic resistance and repair, a precast column base connection based on a small ultra-high-performance concrete (UHPC) shell is proposed. A ring-shaped UHPC shell is prefabricated and placed in the base region of the precast concrete column to control where damage occurs in the connection under earthquake loading. Tests on three full-scale specimens were carried out; the hysteretic and skeleton curves, strength and stiffness degradation, and energy dissipation capacity were compared, the influence of the size of the precast UHPC shell on seismic performance was studied, and a simplified calculation model for the skeleton curve was proposed. The results show that, under earthquake loading, the concrete damage zone of this connection shifts to the upper edge of the UHPC shell; the overall seismic performance is good; a thicker and shorter UHPC shell is more favorable for improving the seismic performance of the precast concrete column base connection based on a small UHPC shell; and the proposed simplified model reflects the underlying mechanism of the connection to a certain extent and can be used for the analysis and design of this type of connection.

7.
High-resolution depth imaging with sparseness-constrained inversion (total citations: 2; self-citations: 0; citations by others: 2)
An imaging technique is developed which exceeds the resolution limit of conventional seismic imaging methods. The high-resolution imaging is obtained by introducing a sparseness-constrained least-squares inversion into the imaging process of prestack depth migration. This is implemented by the proposed interference technique. In contrast to conventional depth migration, a decomposed signal or combined event, instead of the source wavelet, is needed in the proposed scheme. The proposed method aims to image a small local region at higher resolution using the prestack data set. It should be applied after conventional depth imaging when higher resolution is needed in a target zone, rather than replacing conventional depth imaging for the entire medium. Synthetic examples demonstrate the significant improvement in resolution achieved by the proposed scheme.
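The sparseness-constrained least-squares idea can be illustrated with a generic L1-regularized solver; the sketch below uses the iterative soft-thresholding algorithm (ISTA) on a toy linear operator and is not the paper's interference technique or migration operator.

```python
import numpy as np

# Generic sparseness-constrained least squares, min 0.5*||A m - d||^2 + lam*||m||_1,
# solved with the iterative soft-thresholding algorithm (ISTA).  This is a stand-in
# illustration of sparsity-promoting inversion; the migration operator is a toy matrix.
def ista(A, d, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-misfit gradient
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ m - d)                # gradient of the data-misfit term
        z = m - g / L
        m = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return m

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 120))               # toy "migration" operator
m_true = np.zeros(120); m_true[[10, 47, 90]] = [1.0, -0.7, 0.5]   # sparse reflectivity
d = A @ m_true + 0.01 * rng.normal(size=60)
m_hat = ista(A, d, lam=0.1)
print("recovered support:", np.flatnonzero(np.abs(m_hat) > 0.1))
```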

8.
This paper presents an analytical procedure for determining ductility damage indices, using static collapse mechanism analysis, for ductile reinforced concrete (RC) frames subjected to prescribed drift limits corresponding to different seismic performance levels. This assessment benefits from the performance-based seismic design (PBSD) concept, which employs rotation ductility factors, pre-defined target damage indices, and the beam sidesway mechanism as key performance objectives to estimate curvature ductility demands at the pre-designated plastic hinges of the beam sidesway mechanism. The proposed ductility-based damage index (DBDI) assessment procedure considers regular frames with secondary effects such as P-Delta and soil-structure interaction (SSI) within a simple non-iterative process suitable for practical applications. A 12-story RC moment frame was chosen to implement the proposed procedure considering the P-Delta effect. Pushover analysis using SAP 2000 was carried out for the frame to verify the results of the DBDI method. The results show that the DBDI seismic assessment procedure can be used to quantify the damage potential at different performance levels and relate it to the local flexural ductility of critical frame members. The research presented in this paper provides a simple yet conservative damage assessment tool for use by practicing engineers.

9.

A procedure for calculating areal rainfall, based on recent innovations in finite element analysis, is presented. The procedure involves the use of interpolation functions, allowing an accurate representation of the shape and relief of the catchment, with numerical integration performed by Gaussian quadrature. Each raingauge is allotted two weights, one associated with the rainfall reduced to a datum, and the other with the rainfall-altitude relationship. The latter weight effectively removes any systematic errors due to altitudinal bias of the network.

The rainfall-altitude relationship, derived for individual storms and for synoptic situations for a small area, is used to show that errors due to the bias of the network can be considerable.
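A much-simplified sketch of the quadrature idea, assuming a single bilinear finite element whose corners coincide with four raingauges and an assumed rainfall-altitude gradient; the paper's general treatment of catchment shape, relief and the two separate weight sets is not reproduced.

```python
import numpy as np

# Areal rainfall by Gaussian quadrature over one bilinear finite element whose corners
# coincide with four raingauges.  Gauge weights are the integrals of the bilinear shape
# functions over the element, evaluated with 2x2 Gauss-Legendre quadrature (point
# weights equal 1).  Geometry, rainfall values and the altitude gradient are invented.
gauge_xy = np.array([[0.0, 0.0], [4.0, 0.0], [5.0, 3.0], [0.5, 3.5]])   # element corners (km)
P_datum = np.array([10.0, 12.0, 11.0, 9.0])          # gauge rainfall reduced to datum (mm)
gauge_z = np.array([200.0, 350.0, 500.0, 420.0])     # gauge altitudes (m)
beta = 0.004                                          # assumed rainfall-altitude gradient (mm per m)

gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)             # Gauss points
weights = np.zeros(4)
area = 0.0
for xi in gp:
    for eta in gp:
        N = 0.25 * np.array([(1-xi)*(1-eta), (1+xi)*(1-eta), (1+xi)*(1+eta), (1-xi)*(1+eta)])
        dN = 0.25 * np.array([[-(1-eta), -(1-xi)], [(1-eta), -(1+xi)],
                              [(1+eta), (1+xi)], [-(1+eta), (1-xi)]])
        detJ = np.linalg.det(dN.T @ gauge_xy)          # Jacobian of the element mapping
        weights += N * detJ                            # integral of each shape function
        area += detJ

areal_rainfall = (weights @ (P_datum + beta * gauge_z)) / area
print("element area (km^2):", area, " areal rainfall (mm):", areal_rainfall)
```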

10.
Dense 3D residual moveout analysis as a tool for HTI parameter estimation (total citations: 1; self-citations: 0; citations by others: 1)
Three‐dimensional residual moveout analysis is the basic step in velocity model refinement. The analysis is generally carried out using horizontal and/or vertical semblances defined on a sparse set of in‐lines or cross‐lines with densely sampled source–receiver offsets. An alternative approach, which we call dense residual moveout analysis (DRMA), is to use all the bins of a three‐dimensional survey but sparsely sampled offsets. The proposed technique is very fast and provides unbiased and statistically efficient estimates of the residual moveout. Indeed, for the sparsest possible offset distribution, when only near‐ and far‐angle stacks are used, the variance of the residual moveout estimate is only 1.4 times larger than the variance of the least‐squares estimate obtained using all offsets. The high performance of DRMA makes it a useful tool for many applications, of which azimuthal velocity analysis is considered here. For a horizontal transverse isotropy (HTI) model, a deterministic procedure is proposed to define, at every point of residual moveout estimation, the azimuthal angle of the HTI axis of symmetry, the Thomsen anisotropy coefficients, and the interval (or root‐mean‐square) velocities in both the HTI isotropy and symmetry planes. The procedure is not restricted by DRMA assumptions; for example, it is also applicable to semblance‐based residual moveout estimates. The high resolution of the technique is illustrated by azimuthal velocity analysis over an oilfield in West Siberia.

11.
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1°-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico and Central America illustrate the capabilities of the method.
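A minimal flat-earth sketch of equivalent-source inversion by least squares is given below; the 1/r kernel, geometry and synthetic anomaly are illustrative assumptions, whereas the paper works with spherical-earth kernels and satellite data.

```python
import numpy as np

# Equivalent point-source inversion sketch: observed anomalies are fit by a
# least-squares distribution of point sources, whose fitted strengths are then used to
# evaluate a linear transformation of the field (continuation to another height here).
rng = np.random.default_rng(4)
obs_xy = rng.uniform(0, 100, size=(80, 2))             # observation locations (km)
src_xy = np.stack(np.meshgrid(np.linspace(5, 95, 8), np.linspace(5, 95, 8)), -1).reshape(-1, 2)
z_obs, z_src = 0.0, -10.0                              # source layer 10 km below observations

def kernel(pts, z_p, src, z_s):
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return 1.0 / np.sqrt(d**2 + (z_p - z_s) ** 2)      # simple 1/r equivalent-source kernel

A = kernel(obs_xy, z_obs, src_xy, z_src)
g_obs = np.sin(obs_xy[:, 0] / 20.0) + 0.01 * rng.normal(size=80)    # synthetic anomaly
m, *_ = np.linalg.lstsq(A, g_obs, rcond=None)          # least-squares source strengths

# Linear transformation of the equivalent-source field: continuation to 5 km height.
g_up = kernel(obs_xy, 5.0, src_xy, z_src) @ m
print(g_up[:5])
```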

12.
This paper evaluates a recent record selection and scaling procedure of the authors that can determine the probabilistic structural response of buildings behaving either in the elastic or the post-elastic range. This feature is a significant strength of the procedure, as the probabilistic structural response distribution conveys important information for probability-based damage assessment. The paper presents case studies that show the use of the proposed record selection and scaling procedure as a tool for the estimation of damage states and the derivation of site-specific and region-specific fragility functions. The method can be used to describe exceedance probabilities of damage limits under a target hazard level with a known annual exceedance rate (via probabilistic seismic hazard assessment). Thus, the resulting fragility models can relate the seismicity of a region (or a site) to the resulting building performance in a more accurate manner. In this context, this simple and computationally efficient record selection and scaling procedure can be of significant benefit to probability-based risk assessment methods, which have come to be considered indispensable for developing robust earthquake loss models. Copyright © 2013 John Wiley & Sons, Ltd.

13.
Least-squares reverse time migration provides a better imaging result than conventional reverse time migration by reducing migration artefacts, improving the resolution of the image, and balancing the amplitudes of the reflectors. However, it is computationally intensive. To reduce its computational cost, we propose an efficient amplitude-encoding least-squares reverse time migration scheme in the time domain. Although the encoding scheme is effective in increasing computational efficiency, it also introduces the well-known crosstalk noise in the gradient, which degrades the quality of the imaging result. We analyse the cause of the crosstalk noise using an encoding correlation matrix and then develop two numerical schemes to suppress the crosstalk noise during the inversion process. We test the proposed method with synthetic and field data. Numerical examples show that the proposed scheme can provide a better imaging result than reverse time migration, and it also generates images comparable with those from common-shot least-squares reverse time migration but at lower computational cost.

14.
Yang SY, Yeh HD. Ground Water, 2004, 42(5): 781-784
Slug test data obtained from tests performed in an unconfined aquifer are commonly analyzed by graphical or numerical approaches to determine the aquifer parameters. This paper derives three fourth-degree polynomials to represent the relationship between Bouwer and Rice's coefficients and the ratio of the screen length to the radius of the gravel envelope. A numerical approach using nonlinear least squares and Newton's method is used to determine hydraulic conductivity from the best fit of the slug test data. The method of nonlinear least squares minimizes the sum of the squares of the differences between the predicted and observed water levels inside the well. With the polynomials, the hydraulic conductivity can be obtained by simply solving the nonlinear least squares equation with Newton's method. A computer code, SLUGBR, was developed from the derived polynomials using the proposed numerical approach. The results of analyzing two slug test data sets show that SLUGBR can determine hydraulic conductivity with very good accuracy.
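A minimal sketch of the nonlinear least-squares fit, assuming a Bouwer-and-Rice-type exponential recovery model and a Gauss-Newton update on K; the geometry, the ln(Re/rw) value and the synthetic data are illustrative, and the paper's fourth-degree polynomials and the SLUGBR code are not reproduced.

```python
import numpy as np

# Nonlinear least-squares fit of hydraulic conductivity K from slug-test data using a
# Bouwer-and-Rice-type exponential recovery model, y(t) = y0 * exp(-c * K * t),
# and a Gauss-Newton (Newton-like) update.  All numbers are illustrative assumptions.
rc, Le, ln_Re_rw = 0.05, 3.0, 2.2         # casing radius (m), screen length (m), assumed ln(Re/rw)
y0 = 0.50                                 # initial water-level displacement (m)
c = 2.0 * Le / (rc**2 * ln_Re_rw)         # decay-rate factor so that y(t) = y0 * exp(-c*K*t)

t = np.linspace(0.0, 600.0, 25)           # seconds
K_true = 1.0e-5                           # m/s
rng = np.random.default_rng(6)
y_obs = y0 * np.exp(-c * K_true * t) + 0.002 * rng.normal(size=t.size)

K = 5.0e-6                                # starting guess
for _ in range(20):
    f = y0 * np.exp(-c * K * t)           # predicted water levels
    r = y_obs - f                         # residuals minimised in the least-squares sense
    J = -c * t * f                        # sensitivity df/dK
    K += np.sum(J * r) / np.sum(J * J)    # Gauss-Newton step
print("fitted K (m/s):", K)
```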

15.
This paper addresses the parametric inverse problem of locating the point of release of atmospheric pollution. A finite set of observed mixing ratios is compared, by use of least squares, with the analogous mixing ratios computed by an adjoint dispersion model for all possible locations of the release. Classically, the least squares are weighted using the covariance matrix of the measurement errors. In practice, however, this matrix cannot be determined for the prevailing part of these errors, which arises from the limited representativity of the dispersion model. An alternative weighting proposed here is related to a unified approach to the parametric and assimilative inverse problems, corresponding, respectively, to identification of the point of emission and estimation of the distributed emissions. The proposed weighting is shown to optimize the resolution and numerical stability of the inversion. The importance of the most common monitoring networks, with point detectors at various locations, is stressed as a misleading singular case. It is also shown that a monitoring network, under given meteorological conditions, itself contains natural statistics about the emissions, irrespective of prior assumptions.
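A minimal sketch of the least-squares source-location search, assuming a toy dispersion kernel in place of the adjoint dispersion model and simple diagonal weights; the alternative weighting proposed by the paper is not reproduced.

```python
import numpy as np

# Locating a point release by weighted least squares over candidate source locations:
# for each candidate, the predicted mixing ratios at the detectors are compared with
# the observations and the candidate with the smallest weighted misfit is retained.
# The Gaussian-plume-like kernel, identity weights and all numbers are illustrative.
rng = np.random.default_rng(7)
detectors = rng.uniform(0, 10, size=(12, 2))                # detector coordinates (km)

def predicted(src, q=1.0, sigma=2.0):
    d = np.linalg.norm(detectors - src, axis=1)
    return q * np.exp(-0.5 * (d / sigma) ** 2)              # toy dispersion kernel

true_src = np.array([6.5, 3.0])
c_obs = predicted(true_src) + 0.02 * rng.normal(size=12)    # observed mixing ratios
w = np.ones(12)                                             # diagonal weights (identity here)

# Grid search over candidate release points.
xs, ys = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
cost = np.array([[np.sum(w * (c_obs - predicted(np.array([x, y]))) ** 2)
                  for x, y in zip(rx, ry)] for rx, ry in zip(xs, ys)])
i, j = np.unravel_index(np.argmin(cost), cost.shape)
print("estimated release point:", xs[i, j], ys[i, j])
```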

16.
In this work a new algorithm for the fast and efficient 3D inversion of conventional 2D surface electrical resistivity tomography lines is presented. The proposed approach relies on the assumption that for every surface measurement there is a large number of 3D parameters with very small absolute Jacobian matrix values, which can be excluded in advance from the Jacobian matrix calculation, as they do not contribute significant information to the inversion procedure. A sensitivity analysis for both homogeneous and inhomogeneous earth models showed that each measurement has a specific region of influence, which can be limited to parameters within a critical rectangular prism volume. Application of the proposed algorithm accelerated the Jacobian (sensitivity) matrix calculation by almost a factor of three for the data sets tested in this work. Moreover, application of the least squares regression iterative inversion technique resulted in a new 3D resistivity inversion algorithm that is more than 2.7 times faster and requires less than half the computer memory of the original algorithm. The efficiency and accuracy of the algorithm were verified using synthetic models representing typical archaeological structures, as well as field data collected from two archaeological sites in Greece, employing different electrode configurations. The applicability of the presented approach is demonstrated for archaeological investigations, and the basic idea of the proposed algorithm can easily be extended to the inversion of other geophysical data.
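The region-of-influence idea can be sketched as follows, assuming an illustrative grid, prism half-widths and a placeholder sensitivity formula rather than the paper's resistivity forward modelling.

```python
import numpy as np

# Sketch of the "region of influence" idea: for each measurement, Jacobian entries are
# computed only for parameters inside a critical rectangular prism around it and are
# taken as zero elsewhere, so most sensitivities are never evaluated.  The grid, prism
# half-widths and the placeholder 1/r^3 sensitivity are illustrative assumptions.
rng = np.random.default_rng(10)
nx, ny, nz = 20, 20, 8
param_xyz = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                                 indexing="ij"), -1).reshape(-1, 3).astype(float)
meas_xyz = np.column_stack([rng.uniform(0, nx, 100),
                            rng.uniform(0, ny, 100),
                            np.zeros(100)])                       # surface measurements
half_width = np.array([3.0, 3.0, 4.0])                            # critical prism half-widths

J = np.zeros((len(meas_xyz), len(param_xyz)))
n_computed = 0
for i, m in enumerate(meas_xyz):
    inside = np.all(np.abs(param_xyz - m) <= half_width, axis=1)  # parameters in the prism
    r = np.linalg.norm(param_xyz[inside] - m, axis=1) + 1.0
    J[i, inside] = 1.0 / r**3                                     # placeholder sensitivity
    n_computed += inside.sum()

print("computed", n_computed, "of", J.size, "Jacobian entries")
```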

17.
Unit hydrographs (UHs), along with design rainfalls, are frequently used to determine the discharge hydrograph for design and evaluation of hydraulic structures. Due to the presence of various uncertainties in its derivation, the resulting UH is inevitably subject to uncertainty. Consequently, the performance of hydraulic structures under the design storm condition is uncertain. This paper integrates the linearly constrained Monte-Carlo simulation with the UH theory and routing techniques to evaluate the reliability of hydraulic structures. The linear constraint is considered because the water volume of each generated design direct runoff hydrograph should be equal to that of the design effective rainfall hyetograph or the water volume of each generated UH must be equal to one inch (or cm) over the watershed. For illustration, the proposed methodology is applied to evaluate the overtopping risk of a hypothetical flood detention reservoir downstream of Tong-Tou watershed in Taiwan.
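A minimal sketch of the linearly constrained Monte-Carlo step, assuming an invented mean UH, variability level and design hyetograph; each generated UH is rescaled so that its volume equals unit runoff depth, and the reservoir routing and overtopping-risk analysis of the paper are not shown.

```python
import numpy as np

# Linearly constrained Monte Carlo generation of unit-hydrograph (UH) realisations:
# random perturbations of a mean UH are rescaled so that every realisation satisfies
# the linear volume constraint (the ordinates sum to one unit of runoff depth), and
# each realisation is convolved with a design effective-rainfall hyetograph.
rng = np.random.default_rng(8)
uh_mean = np.array([0.05, 0.20, 0.30, 0.22, 0.13, 0.07, 0.03])   # ordinates sum to 1

def constrained_uhs(n_sim=1000, cv=0.15):
    u = uh_mean * (1.0 + cv * rng.normal(size=(n_sim, uh_mean.size)))
    u = np.clip(u, 0.0, None)
    return u / u.sum(axis=1, keepdims=True)          # enforce the unit-volume constraint

rain_eff = np.array([0.5, 1.2, 0.8])                 # design effective rainfall (cm per interval)
hydrographs = np.array([np.convolve(rain_eff, u) for u in constrained_uhs()])
peaks = hydrographs.max(axis=1)
print("median peak:", np.median(peaks), " 95th percentile:", np.percentile(peaks, 95))
```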

18.
This study presents a ground-motion selection and scaling methodology that preserves the basic seismological features of the scaled records while reducing scatter in the nonlinear structural response. The methodology modifies each strong-motion recording with known fundamental seismological parameters using the estimates of ground-motion prediction equations for a given target hazard level. It provides robust estimates of the target building response through scaled ground motions and calculates the dispersion about this target. This alternative procedure is not only useful for record scaling and selection but, upon further refinement, can also be advantageous for probabilistic methods that assess engineering demand parameters for a given target hazard level. Case studies that compare the performance of the proposed procedure with some other record selection and scaling methods suggest its usefulness for building performance assessment and loss models. Copyright © 2012 John Wiley & Sons, Ltd.

20.
This paper deals with the analysis of gravity anomalies and precise levelling, in conjunction with GPS-levelling data, for the computation of a gravimetric geoid and an estimate of the height system bias parameter N₀ for the vertical datum in Pakistan by means of the least squares collocation technique. The long-term objective is regional geoid (or quasi-geoid) modeling using a combination of local data with a high-degree-and-order Earth gravity model (EGM), and determination of a bias (if there is one) with respect to a global mean sea surface. An application of collocation with the optimal covariance parameters has made it possible to obtain gravimetric height anomalies in a global geocentric datum. The residual terrain modeling (RTM) technique has been used in combination with EGM96 for the reduction and smoothing of the gravity data. A value of 0.705 m, with a mean square error of 0.07 m, has been estimated for the bias parameter N₀ with reference to the local GPS-levelling datum. The gravimetric height anomalies were compared with height anomalies obtained from GPS-levelling stations using least squares collocation with and without bias adjustment. The bias adjustment minimizes the differences between the gravimetric height anomalies and the residual GPS-levelling data, and the standard deviation of the differences drops from 35 cm to 2.6 cm. The results of this study suggest that the N₀ adjustment may be a good alternative for fitting the final gravimetric geoid, as is generally done when using FFT methods.
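A minimal sketch of the bias-estimation step, assuming a one-parameter model in which the difference between GPS-levelling and gravimetric height anomalies equals N₀ plus noise; the weighted least-squares solution is then simply a weighted mean, and the paper's full least-squares collocation with covariance modelling is not reproduced.

```python
import numpy as np

# Least-squares estimate of a single height-system bias N0 between GPS-levelling and
# gravimetric height anomalies: with zeta_gps - zeta_grav = N0 + noise, the weighted
# least-squares solution is a weighted mean of the differences.  All numbers here are
# synthetic and illustrative.
rng = np.random.default_rng(9)
n = 40
zeta_grav = rng.normal(scale=0.5, size=n)                        # gravimetric height anomalies (m)
zeta_gps = zeta_grav + 0.705 + rng.normal(scale=0.05, size=n)    # GPS-levelling values (m)
sigma = np.full(n, 0.05)                                         # assumed uncertainty of differences (m)

d = zeta_gps - zeta_grav
w = 1.0 / sigma**2
N0 = np.sum(w * d) / np.sum(w)                                   # weighted least-squares bias
err = np.sqrt(1.0 / np.sum(w))                                   # formal error of the estimate
print(f"N0 = {N0:.3f} m  +/- {err:.3f} m")
print("std of differences after bias removal (m):", np.std(d - N0))
```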

