Similar documents
20 similar documents found (search time: 31 ms)
1.
Traditional probabilistic seismic hazard analysis (PSHA) uses ground-motion models that are based on the ergodic assumption, which means that the distribution of ground motions over time at a given site is the same as their spatial distribution over different sites. Evaluations of ground-motion data sets with multiple measurements at a given site and multiple earthquakes in a given region have shown that the ergodic assumption is not appropriate, as there are strong systematic region-specific source terms and site-specific path and site terms that are spatially correlated. We model these correlations using a spatial Gaussian process model. Different correlation functions are employed, both stationary and non-stationary, and the results are compared in terms of their predictive power. Spatial correlations of residuals are investigated on a Taiwanese strong-motion data set and on ground motions collected at the ANZA, CA array. Source effects are spatially correlated, but provide a much stronger benefit in terms of prediction for the ANZA data set than for the Taiwanese data set. We find that systematic path effects are best modeled by a non-stationary covariance function that depends on source-to-site distance and magnitude. The correlation structure estimated from Californian data can be transferred to Taiwan if one carefully accounts for differences in magnitudes. About 50% of the aleatory variance can be explained by accounting for spatial correlation.
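A minimal numpy sketch of the core step above: building a stationary spatial covariance over station coordinates and using it to predict a systematic site term at a held-out station. The exponential kernel and every parameter value here are illustrative assumptions, not the covariance models actually fitted in the study.

```python
import numpy as np

def exp_cov(coords, sill=1.0, length=10.0, nugget=1e-6):
    """Stationary exponential covariance C(h) = sill * exp(-h/length)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-d / length) + nugget * np.eye(len(coords))

rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(30, 2))      # hypothetical station locations, km
K = exp_cov(coords)

# sample spatially correlated site terms, then predict the term at site 0
# from the other 29 sites (simple-kriging conditional mean)
z = np.linalg.cholesky(K) @ rng.standard_normal(30)
K_oo = K[1:, 1:]
k_po = K[0, 1:]
z_hat = k_po @ np.linalg.solve(K_oo, z[1:])
```

Non-stationary variants, as studied in the paper, would replace `exp_cov` with a kernel whose parameters vary with distance and magnitude.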

2.
Due to the fast-paced increase in the availability and diversity of information sources in the environmental sciences, there is a real need for sound statistical mapping techniques that can use them jointly within a single theoretical framework. As these information sources may vary with respect to their nature (continuous vs. categorical or qualitative), their spatial density, and their intrinsic quality (soft vs. hard data), the design of such techniques is a challenging issue. In this paper, an efficient method for combining spatially non-exhaustive categorical and continuous data in a mapping context is proposed, based on the Bayesian maximum entropy paradigm. This approach relies first on the definition of a mixed random field that can account for a stochastic link between categorical and continuous random fields through the use of a cross-covariance function. When incorporating general knowledge about the first- and second-order moments of these fields, it is shown that, under mild hypotheses, their joint distribution can be expressed as a mixture of conditional Gaussian prior distributions, with parameter estimates obtained from entropy maximization. A posterior distribution that incorporates the various (soft or hard) continuous and categorical data at hand can then be obtained by a straightforward conditionalization step. The use and potential of the method is illustrated by way of a simulated case study. A comparison with a few common geostatistical methods in some limit cases also emphasizes their similarities and differences, from both the theoretical and practical viewpoints. As expected, adding categorical information may significantly improve the spatial prediction of a continuous variable, making this approach powerful and very promising.

3.
A new wavelet-based estimation methodology, in the context of spatial functional regression, is proposed to discriminate between small-scale and large-scale variability of spatially correlated functional data, defined by depth-dependent curves. Specifically, the discrete wavelet transform of the data is computed in space and depth to reduce dimensionality. Moment-based regression estimation is applied to approximate the scaling coefficients of the functional response, while its wavelet coefficients are estimated in a Bayesian regression framework. Both regression approaches are implemented from the empirical versions of the scaling and wavelet auto-covariance and cross-covariance operators, which characterize the correlation structure of the spatial functional response. Weather stations on ocean islands are highly spatially concentrated, and the proposed estimation methodology overcomes the difficulties arising in the estimation of the ocean temperature field at different depths from long records of ocean temperature measurements at these stations. Data are collected from The World-Wide Ocean Optics Database. The performance of the presented approach is tested in terms of 10-fold cross-validation and residual spatial and depth correlation analysis. Additionally, an application to soil science, for the prediction of electrical conductivity profiles, is also considered to compare this approach with previous related ones in the statistical analysis of spatially correlated curves in depth.
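The dimension reduction above rests on the discrete wavelet transform. A self-contained one-level orthonormal Haar transform (the simplest wavelet family, chosen here purely for illustration; the paper does not specify this basis) shows the split into scaling (coarse) and wavelet (detail) coefficients and the exact reconstruction:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform: scaling + wavelet coeffs."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # scaling (coarse) coefficients
    w = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # wavelet (detail) coefficients
    return s, w

def haar_idwt(s, w):
    """Invert one Haar level; reconstruction is exact (orthonormal basis)."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + w) / np.sqrt(2.0)
    x[1::2] = (s - w) / np.sqrt(2.0)
    return x

curve = np.sin(np.linspace(0, 3, 16))        # stand-in for a depth-dependent curve
s, w = haar_dwt(curve)
```

Keeping `s` and thresholding `w` halves the dimension while retaining the large-scale shape of the curve.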

4.
This paper presents a novel approach to model and simulate multi-support depth-varying seismic motions (MDSMs) within heterogeneous offshore and onshore sites. Based on 1D wave propagation theory, the three-dimensional ground motion transfer functions on the surface of, or within, an offshore or onshore site are derived by considering the effects of seawater and porous soils on the propagation of seismic P waves. Moreover, the depth-varying and spatial variation properties of seismic ground motions are considered in the ground motion simulation. Using the obtained transfer functions at any location within a site, the offshore or onshore depth-varying seismic motions are stochastically simulated based on the spectral representation method (SRM). The traditional approaches for simulating spatially varying ground motions are improved and extended to generate MDSMs within multiple offshore and onshore sites. The simulation results show that the PSD functions and coherency losses of the generated MDSMs are compatible with their respective target values, which fully validates the effectiveness of the proposed simulation method. The synthesized MDSMs can provide strong support for the precise seismic response prediction and performance-based design of both offshore and onshore large-span engineering structures.
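The spectral representation method mentioned above superposes cosines with random phases so that sample paths match a target power spectral density. A single-point, single-component sketch (the PSD shape, frequency band, and duration are arbitrary stand-ins; no site transfer function or spatial coherency model is included):

```python
import numpy as np

def srm_sample(psd, omegas, duration, n_t, rng):
    """Spectral representation method: one stationary sample path whose
    target one-sided PSD is `psd` evaluated at the frequencies `omegas`."""
    d_omega = omegas[1] - omegas[0]
    amps = np.sqrt(2.0 * psd * d_omega)                  # amplitude per frequency
    phases = rng.uniform(0, 2 * np.pi, size=len(omegas)) # independent random phases
    t = np.linspace(0, duration, n_t)
    x = np.sum(amps[:, None] * np.cos(np.outer(omegas, t) + phases[:, None]), axis=0)
    return t, x

rng = np.random.default_rng(1)
omegas = np.linspace(0.5, 25.0, 200)          # rad/s, illustrative band
psd = 1.0 / (1.0 + omegas**2)                 # illustrative target PSD
t, acc = srm_sample(psd, omegas, 100.0, 4000, rng)
```

The time-averaged variance of the sample path approximates the integral of the target PSD, which is the basic SRM consistency check.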

5.
Data assimilation is widely used to improve flood forecasting capability, especially through parameter inference requiring statistical information on the uncertain input parameters (upstream discharge, friction coefficient) as well as on the variability of the water level and its sensitivity with respect to the inputs. For a particle filter or ensemble Kalman filter, stochastically estimating probability density functions and covariance matrices from Monte Carlo random sampling requires a large ensemble of model evaluations, limiting their use in real-time applications. To tackle this issue, fast surrogate models based on polynomial chaos and Gaussian processes can be used to represent the spatially distributed water level in place of solving the shallow water equations. This study investigates the use of these surrogates to estimate probability density functions and covariance matrices at a reduced computational cost and without loss of accuracy, with a view towards ensemble-based data assimilation. The study focuses on 1-D steady-state flow simulated with MASCARET over the Garonne River (South-West France). Results show that both surrogates achieve performance similar to Monte Carlo random sampling, but for a much smaller computational budget; a few MASCARET simulations (on the order of 10–100) are sufficient to accurately retrieve covariance matrices and probability density functions all along the river, even where the flow dynamics are more complex due to heterogeneous bathymetry. This paves the way for the design of surrogate strategies suitable for representing unsteady open-channel flows in data assimilation.
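As a toy version of the surrogate idea, the sketch below fits a Gaussian-process emulator to a handful of "solver" evaluations and predicts at a new input. The squared-exponential kernel, the analytic stand-in for the hydraulic solver output, and all parameter values are assumptions for illustration only; they are not MASCARET or the study's actual surrogate configuration.

```python
import numpy as np

def gp_fit_predict(x_train, y_train, x_test, length=0.02, sigma=1.0, noise=1e-8):
    """Gaussian-process surrogate with a squared-exponential kernel (1-D inputs)."""
    def k(a, b):
        return sigma**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)       # weights for the kernel regression
    return k(x_test, x_train) @ alpha

# train on 9 "expensive" evaluations of water level vs. friction coefficient
x = np.linspace(0.01, 0.09, 9)                # hypothetical friction coefficients
y = 2.0 + 10.0 * np.sqrt(x)                   # stand-in for a solver output (m)
x_new = np.array([0.05])
y_new = gp_fit_predict(x, y, x_new)
```

Once fitted, the surrogate is evaluated thousands of times at negligible cost to build the ensemble statistics (PDFs, covariances) that the filter needs.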

6.
Truncated pluri-Gaussian simulation (TPGS) is suitable for the simulation of categorical variables that show natural ordering, as the TPGS technique can consider transition probabilities. TPGS assumes that categorical variables are the result of the truncation of underlying latent variables. In practice, only the categorical variables are observed. This turns the practical application of TPGS into a missing data problem in which all latent variables are missing. Latent variables are required at data locations in order to condition categorical realizations to observed categorical data. The imputation of missing latent variables at data locations is often achieved by either assigning constant values or spatially simulating latent variables subject to the categorical observations. Realizations of latent variables can be used to condition all model realizations. Using a single realization or a constant value to condition all realizations is equivalent to assuming that the latent variables are known at the data locations, and this assumption affects uncertainty near data locations. The techniques for imputation of latent variables in the TPGS framework are investigated in this article, and their impact on the uncertainty of simulated categorical models and possible effects on factors affecting decision making are explored. It is shown that the use of a single realization of latent variables leads to underestimation of uncertainty and overestimation of measured resources, while the use of constant values for latent variables may lead to considerable over- or underestimation of measured resources. The results highlight the importance of multiple data imputation in the context of TPGS.
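The multiple-imputation step can be illustrated by repeatedly sampling a latent Gaussian value consistent with an observed category, i.e. from the truncated normal between that category's thresholds. The thresholds and the rejection-sampling shortcut below are illustrative choices (a production code would also condition on neighboring latent values):

```python
import numpy as np

def impute_latent(category, thresholds, rng, max_tries=10000):
    """Sample a latent standard-normal value consistent with an observed
    category: category k corresponds to thresholds[k] < z <= thresholds[k+1]."""
    lo, hi = thresholds[category], thresholds[category + 1]
    for _ in range(max_tries):
        z = rng.standard_normal()
        if lo < z <= hi:                      # rejection sampling into the interval
            return z
    raise RuntimeError("interval too extreme for naive rejection sampling")

rng = np.random.default_rng(2)
thresholds = [-np.inf, -0.5, 0.8, np.inf]     # three ordered categories
# multiple imputations at one data location observed as category 1
latents = [impute_latent(1, thresholds, rng) for _ in range(200)]
```

Using a fresh imputed latent per realization (rather than one fixed value) is what keeps the uncertainty near data locations from collapsing.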

7.
Inverse modeling is widely used to assist with forecasting problems in the subsurface. However, full inverse modeling can be time-consuming, requiring iteration over a high-dimensional parameter space with computationally expensive forward models and complex spatial priors. In this paper, we investigate a prediction-focused approach (PFA) that aims at building a statistical relationship between data variables and forecast variables, avoiding the inversion of model parameters altogether. The statistical relationship is built by first applying the forward model related to the data variables and the forward model related to the prediction variables on a limited set of spatial prior model realizations, typically generated through geostatistical methods. The relationship observed between data and prediction is highly non-linear for many forecasting problems in the subsurface. In this paper we propose a Canonical Functional Component Analysis (CFCA) to map the data and forecast variables into a low-dimensional space where, if successful, the relationship is linear. CFCA consists of (1) functional principal component analysis (FPCA) for dimension reduction of time-series data and (2) canonical correlation analysis (CCA), the latter aiming to establish a linear relationship between data and forecast components. If such mapping is successful, then we illustrate with several cases that (1) simple regression techniques with a multi-Gaussian framework can be used to directly quantify uncertainty on the forecast without any model inversion and that (2) such uncertainty is a good approximation of the uncertainty obtained from full posterior sampling with rejection sampling.
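The CCA step of CFCA can be sketched with plain numpy: whiten the (already dimension-reduced) data and forecast components, then take the singular values of their cross-covariance as the canonical correlations. The synthetic shared factor below is an assumption used only to make the first canonical correlation visibly high:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlations via whitening + SVD.
    Rows are prior-model realizations; columns are reduced components."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    def whiten(A):
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        return U * np.sqrt(n - 1)             # scores with identity covariance
    Xw, Yw = whiten(Xc), whiten(Yc)
    # singular values of the whitened cross-covariance = canonical correlations
    return np.linalg.svd(Xw.T @ Yw / (n - 1), compute_uv=False)

rng = np.random.default_rng(3)
common = rng.standard_normal((500, 1))        # shared signal: data predicts forecast
X = np.hstack([common + 0.1 * rng.standard_normal((500, 1)),
               rng.standard_normal((500, 2))])
Y = np.hstack([common + 0.1 * rng.standard_normal((500, 1)),
               rng.standard_normal((500, 2))])
rho = cca(X, Y)
```

A first canonical correlation near 1 indicates that a linear regression in the canonical space, as PFA exploits, will carry most of the predictive information.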

8.
Covariance functions and models for complex-valued random fields (total citations: 1; self-citations: 1; citations by others: 0)
In geostatistics, primary interest often lies in the study of the spatial, or spatial-temporal, correlation of real-valued random fields; complex-valued random field theory is, however, a natural extension of the real domain. In such a case, it is useful to consider complex covariance functions, which are composed of an even real part and an odd imaginary part. Generating complex covariance functions is not simple, but the procedure developed in this paper generates permissible covariance functions for complex-valued random fields in a straightforward way. In particular, by recalling the spectral representation of the covariance and translating the spectral density function by a shifting factor, complex covariances are obtained. Some general aspects and properties of complex-valued random fields and their moments are pointed out and some examples are given.
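The shifting construction is easy to state concretely: translating the spectral density of a real covariance C0 by a frequency shift multiplies the covariance by a complex exponential, C(h) = exp(i*shift*h) * C0(h), which automatically gives an even real part and an odd imaginary part. A sketch with an exponential base model (the base model and the shift value are illustrative choices):

```python
import numpy as np

def complex_cov(h, length=1.0, shift=2.0):
    """Complex covariance obtained by translating the spectral density of a
    real exponential model: C(h) = exp(i*shift*h) * exp(-|h|/length)."""
    return np.exp(1j * shift * h) * np.exp(-np.abs(h) / length)

h = np.linspace(-3.0, 3.0, 121)               # symmetric lags, h[60] == 0
C = complex_cov(h)

# permissibility check: the covariance matrix on a set of points must be
# Hermitian and positive semi-definite
pts = np.linspace(0.0, 5.0, 20)
K = complex_cov(pts[:, None] - pts[None, :])
```

Because the translated spectral density is still a valid (nonnegative, integrable) density, positive-definiteness of the complex covariance is guaranteed by construction.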

9.
Globally supported covariance functions are generally associated with dense covariance matrices, which entail severe numerical problems that can make solutions infeasible. These problems can be alleviated by considering methods that yield sparse covariance matrices. Indeed, having many zero entries in the covariance matrix can greatly reduce both computer storage requirements and the number of floating-point operations needed in computation. Compactly supported covariance functions considerably reduce the computational burden of kriging and allow the use of computationally efficient sparse matrix techniques, thus becoming a core aspect of spatial prediction when dealing with massive data sets. However, most of the work done in the context of compactly supported covariance functions has been carried out in the stationary context. This assumption is not generally met in real-world problems, and there has been a growing recognition of the need for non-stationary spatial covariance functions in a variety of disciplines. In this paper we present a new class of non-stationary, compactly supported spatial covariance functions, which adapts a class of convolution-based flexible models to non-stationary situations. Some particular examples, computational issues, and connections with existing models are considered.
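Compact support means exact zeros beyond the range, so the covariance matrix is mostly zeros and can be stored and factorized sparsely. A sketch with the classical spherical model, used here only to illustrate the sparsity; the paper's own class is non-stationary:

```python
import numpy as np

def spherical_cov(d, sill=1.0, rng_a=5.0):
    """Spherical covariance: exactly zero beyond the range rng_a."""
    c = sill * (1.0 - 1.5 * d / rng_a + 0.5 * (d / rng_a) ** 3)
    return np.where(d < rng_a, c, 0.0)

rng = np.random.default_rng(4)
pts = rng.uniform(0, 100, size=(300, 2))       # sites in a 100 x 100 domain
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
K = spherical_cov(d)                           # mostly exact zeros
frac_zero = np.mean(K == 0.0)
```

With a range of 5 in a 100 x 100 domain, roughly 99% of the entries are exact zeros, so `K` could be handed to a sparse Cholesky solver instead of a dense one.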

10.
Flow and transport models in heterogeneous geological formations are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing the hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction, and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical–numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping the error at a minimum for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs than those obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone for the optimum physical and statistical resolutions.
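The trade-off described above can be mimicked with a toy error model: discretization error decaying in the grid size h, statistical error decaying as 1/sqrt(N) in the number of realizations, and a fixed computational budget. All exponents, constants, and the budget below are assumptions for illustration, not the paper's derived error expression:

```python
import numpy as np

# toy model: total error = C1*h**2 (discretization) + C2/sqrt(N) (Monte Carlo),
# where one run on a grid of size h costs (1/h)**2 units of the budget
C1, C2, budget = 1.0, 1.0, 1e6

best = None
for h in np.geomspace(1e-3, 1.0, 60):
    cost_per_run = (1.0 / h) ** 2
    N = int(budget // cost_per_run)            # realizations affordable at this h
    if N < 1:
        continue
    err = C1 * h**2 + C2 / np.sqrt(N)
    if best is None or err < best[0]:
        best = (err, h, N)
err_opt, h_opt, n_opt = best
```

The optimum sits strictly between the extremes: neither the finest feasible grid (N = 1) nor the coarsest grid with the most realizations minimizes the combined error.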

11.
Categorical data play an important role in a wide variety of spatial applications, while modeling and predicting this type of statistical variable has proved to be complex in many cases. Among other possible approaches, the Bayesian maximum entropy methodology has been developed and advocated for this goal and has been successfully applied in various spatial prediction problems. This approach aims at building a multivariate probability table from bivariate probability functions used as constraints that need to be fulfilled, in order to compute a posterior conditional distribution that accounts for hard or soft information sources. In this paper, our goal is to further generalize the theoretical results in order to account for a much wider range of information sources, such as probability inequalities. We first show how the maximum entropy principle can be implemented efficiently using a linear iterative approximation based on a minimum norm criterion, where the minimum norm solution obtained at each step from simple matrix operations converges to the requested maximum entropy solution. Based on this result, we then show how the maximum entropy problem can be related to the more general minimum divergence problem, which may involve equality and inequality constraints and can be solved with iterated minimum norm solutions. This allows us to account for a much larger panel of information types, where more qualitative information, such as probability inequalities, can be used. When combined with a Bayesian data fusion approach, the method can also handle potentially conflicting information. Although the theoretical results presented in this paper can be applied to any study (spatial or non-spatial) involving categorical data in general, they are illustrated in a spatial context where the goal is to best predict the occurrence of cultivated land in Ethiopia based on crowdsourced information. The results emphasize the benefit of the methodology, which integrates conflicting information and provides a spatially exhaustive map of these occurrence classes over the whole country.

12.
The spatial distribution of residual light non-aqueous phase liquid (LNAPL) is an important factor in reactive solute transport modeling studies. There is great uncertainty associated with both the areal limits of LNAPL source zones and the smaller scale variability within those limits. A statistical approach is proposed to construct a probabilistic model for the spatial distribution of residual NAPL, and it is applied to a site characterized by ultra-violet-induced cone-penetration testing (CPT–UVIF). The uncertainty in areal limits is explicitly addressed by a novel distance function (DF) approach. In modeling the small-scale variability within the areal limits, the CPT–UVIF data are used as the primary source of information, while soil texture and distance to the water table are treated as secondary data. Two widely used geostatistical techniques are applied for the data integration, namely sequential indicator simulation with locally varying means (SIS–LVM) and Bayesian updating (BU). A close match between the calibrated uncertainty band (UB) and the target probabilities shows the performance of the proposed DF technique in characterizing uncertainty in the areal limits. A cross-validation study also shows that the integration of the secondary data sources substantially improves the prediction of contaminated and uncontaminated locations and that the SIS–LVM algorithm gives a more accurate prediction of residual NAPL contamination. The proposed DF approach is useful in modeling the areal limits of non-stationary continuous or categorical random variables, and in providing a prior probability map for source zone sizes to be used in Monte Carlo simulations of contaminant transport or Monte Carlo type inverse modeling studies.

13.
Simulating fields of categorical geospatial variables from samples is crucial for many purposes, such as spatial uncertainty assessment of natural resource distributions. However, effectively simulating complex categorical variables (i.e., multinomial classes) is difficult because of their nonlinearity and complex interclass relationships. The existing pure Markov chain approach for simulating multinomial classes has an apparent deficiency: underestimation of small classes, which largely impacts the usefulness of the approach. The recently proposed Markov chain random field (MCRF) theory supports theoretically sound multi-dimensional Markov chain models. This paper conducts a comparative study between an MCRF model and the previous Markov chain model for simulating multinomial classes to demonstrate that the MCRF model effectively solves the small-class underestimation problem. Simulated results show that the MCRF model reproduces all classes fairly, generates simulated patterns imitative of the original, and effectively reproduces input transiograms in realizations. Occurrence probability maps are estimated to visualize the spatial uncertainty associated with each class and the optimal prediction map. It is concluded that the MCRF model provides a practically efficient estimator for simulating multinomial classes from grid samples.
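A one-dimensional sequential sketch of simulating ordered classes from transition probabilities. The MCRF model itself is multi-dimensional and conditions on neighbors in several directions; this stripped-down chain, with an arbitrary transition matrix, only illustrates how a small class keeps a positive occurrence probability:

```python
import numpy as np

def simulate_chain(P, n, rng, start=0):
    """Sequentially simulate a 1-D categorical sequence from a one-step
    transition probability matrix P (each row sums to 1)."""
    states = [start]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

# three classes, with class 2 deliberately small
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.30, 0.30, 0.40]])
rng = np.random.default_rng(5)
seq = simulate_chain(P, 5000, rng)
props = np.bincount(seq, minlength=3) / len(seq)   # empirical class proportions
```

Comparing `props` against the chain's stationary distribution is the 1-D analogue of checking whether a simulation method reproduces small classes.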

14.
Using auxiliary information to improve the prediction accuracy of soil properties in a physically meaningful and technically efficient manner has been widely recognized in pedometrics. In this paper, we explored a novel technique to effectively integrate sampling data and auxiliary environmental information, including continuous and categorical variables, within the framework of the Bayesian maximum entropy (BME) theory. Soil samples and observed auxiliary variables were combined to generate probability distributions of the predicted soil variable at unsampled points. These probability distributions served as soft data of the BME theory at the unsampled locations and, together with the hard data (sample points), were used in spatial BME prediction. To gain practical insight, the proposed approach was implemented in a real-world case study involving a dataset of soil total nitrogen (TN) contents in Shayang County of Hubei Province (China). Five terrain indices, soil types, and soil texture were used as auxiliary variables to generate soft data. The spatial distribution of soil TN was predicted by BME, regression kriging (RK) with auxiliary variables, and ordinary kriging (OK). The results of the prediction techniques were compared in terms of the Pearson correlation coefficient (r), mean error (ME), and root mean squared error (RMSE). These results showed that the BME predictions were less biased and more accurate than those of the kriging techniques. In sum, the present work extends the BME approach to implement certain kinds of auxiliary information in a rigorous and efficient manner. Our findings show that the BME prediction technique involving the transformation of variables into soft data can improve prediction accuracy considerably, compared to other techniques currently in use, such as RK and OK.
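The three comparison metrics used above are straightforward to compute from cross-validation pairs; a minimal sketch with made-up observed/predicted values:

```python
import numpy as np

def validation_metrics(obs, pred):
    """Pearson r, mean error (bias), and RMSE for cross-validation results."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]
    me = np.mean(pred - obs)                   # positive = over-prediction on average
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    return r, me, rmse

obs = np.array([1.0, 2.0, 3.0, 4.0])           # hypothetical TN observations
pred = np.array([1.1, 1.9, 3.2, 4.2])          # hypothetical predictions
r, me, rmse = validation_metrics(obs, pred)
```

A less biased method shows ME closer to zero; a more accurate one shows a lower RMSE and a higher r.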

15.
Many recent studies have been devoted to the investigation of the nonlinear dynamics of rainfall or streamflow series based on methods of dynamical systems theory. Although finding evidence for the existence of a low-dimensional deterministic component in rainfall or streamflow is of much interest, not much attention has been given to the nonlinear dependencies between the two, and especially to how the spatio-temporal distribution of rainfall affects the nonlinear dynamics of streamflow at flood time scales. In this paper, a methodology is presented which brings streamflow series, the spatio-temporal structure of precipitation, and catchment geomorphology together into a nonlinear analysis of streamflow dynamics. The proposed framework is based on “hydrologically-relevant” rainfall-runoff phase-space reconstruction, acknowledging the fact that rainfall-runoff is a stochastic spatially extended system rather than a deterministic multivariate one. The methodology is applied to two basins in Central North America using 6-hour streamflow data and radar images for a period of 5 years. The proposed methodology is used to: (a) quantify the nonlinear dependencies between streamflow dynamics and the spatio-temporal dynamics of precipitation; (b) study how streamflow predictability is affected by the trade-offs between the level of detail necessary to explain the spatial variability of rainfall and the reduction of complexity due to the smoothing effect of the basin; and (c) explore the possibility of incorporating process-specific information (in terms of catchment geomorphology and an a priori chosen uncertainty model) into nonlinear prediction. Preliminary results are encouraging and indicate the potential of using the proposed methodology to understand, via nonlinear analysis of observations (i.e., not based on a particular rainfall-runoff model), streamflow predictability and limits to prediction as a function of the complexity of spatio-temporal forcing relative to basin geomorphology.
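Phase-space reconstruction in such analyses is usually done by time-delay embedding; a minimal sketch (the series, embedding dimension, and delay are arbitrary illustrative choices, not values from the study):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding for phase-space reconstruction:
    row i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.arange(0, 60, 0.1)
streamflow = np.sin(t) + 0.5 * np.sin(2.2 * t)   # stand-in for a streamflow series
emb = delay_embed(streamflow, dim=3, tau=8)
```

Nearest-neighbor statistics computed on the rows of `emb` are what quantify predictability and effective dimensionality in this style of nonlinear analysis.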

16.
Spatial prediction and variable selection for a study area are both important issues in geostatistics. If spatially varying means exist among different subareas, globally fitting a spatial regression model for observations over the study area may not be suitable. To alleviate deviations from spatial model assumptions, this paper proposes a methodology to locally select variables for each subarea based on a locally empirical conditional Akaike information criterion. In this situation, the global spatial dependence of observations is considered and the local characteristics of each subarea are also identified. This results in a composite spatial predictor that provides a more accurate spatial prediction for the response variables of interest in terms of mean squared prediction errors. Further, the corresponding prediction variance is also evaluated based on a resampling method. Statistical inferences of the proposed methodology are justified both theoretically and numerically. Finally, an application to a mercury data set for lakes in Maine, USA, is analyzed for illustration.

17.
Geostatistical seismic inversion methods are routinely used in reservoir characterisation studies because of their potential to infer the spatial distribution of the petro-elastic properties of interest (e.g., density, elastic, and acoustic impedance) along with the associated spatial uncertainty. Within the geostatistical seismic inversion framework, the retrieved inverse elastic models are conditioned by a global probability distribution function and a global spatial continuity model as estimated from the available well-log data for the entire inversion grid. However, the spatial distribution of the real subsurface elastic properties is complex, heterogeneous, and, in many cases, non-stationary, since it directly depends on the subsurface geology, i.e., the spatial distribution of the facies of interest. In these complex geological settings, the application of a single distribution function and a single spatial continuity model is not enough to properly model the natural variability of the elastic properties of interest. In this study, we propose a three-dimensional geostatistical inversion technique that is able to incorporate the reservoir's heterogeneities. This method uses a traditional geostatistical seismic inversion conditioned by local multi-distribution functions and spatial continuity models under non-stationary conditions. The procedure of the proposed methodology is based on a zonation criterion along the vertical direction of the reservoir grid. Each zone can be defined by conventional seismic interpretation, with the identification of the main seismic units and significant variations of seismic amplitudes. The proposed method was applied to a highly non-stationary synthetic seismic dataset with different levels of noise. The results of this work clearly show the advantages of the proposed method against conventional geostatistical seismic inversion procedures. It is important to highlight the impact of this technique in terms of better convergence between real and inverted reflection seismic data and a more realistic approximation of the true subsurface geology compared with traditional techniques.

18.
Earthquake-induced slope displacement is an important parameter for safety evaluation and earthquake design of slope systems. Traditional probabilistic seismic hazard analysis usually focuses on evaluating slope displacement at a particular location, and it is not suitable for spatially distributed slopes over a large region. This study proposes a computationally efficient framework for fully probabilistic seismic displacement analysis of spatially distributed slope systems using spatially correlated vector intensity measures (IMs). First, a spatial cross-correlation model for three key ground motion IMs, that is, peak ground acceleration (PGA), Arias intensity, and peak ground velocity, is developed using 2686 ground motion recordings from 11 recent earthquakes. To reduce the computational cost, Monte Carlo simulation and data reduction techniques are utilized to generate spatially correlated random fields for the vector IMs. The slope displacement hazards over the region are further quantified using empirical predictive equations. Finally, an illustrative example is presented to highlight the importance of the spatial correlation and the advantage of using spatially correlated vector IMs in seismic hazard analysis of spatially distributed slopes. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Perspective on theories of non-Fickian transport in heterogeneous media (total citations: 1; self-citations: 0; citations by others: 1)
Subsurface fluid flow and solute transport take place in a multiscale heterogeneous environment. Neither these phenomena nor their host environment can be observed or described with certainty at all scales and locations of relevance. The resulting ambiguity has led to alternative conceptualizations of flow and transport and multiple ways of addressing their scale and space–time dependencies. We focus our attention on four approaches that give rise to nonlocal representations of advective and dispersive transport of nonreactive tracers in randomly heterogeneous porous or fractured continua. We compare these approaches theoretically on the basis of their underlying premises and the mathematical forms of the corresponding nonlocal advective–dispersive terms. One of the four approaches describes transport at some reference support scale by a classical (Fickian) advection–dispersion equation (ADE) in which velocity is a spatially (and possibly temporally) correlated random field. The randomness of the velocity, which is given by Darcy's law, stems from random fluctuations in hydraulic conductivity (and advective porosity, though this is often disregarded). Averaging the stochastic ADE over an ensemble of velocity fields results in a space–time-nonlocal representation of mean advective–dispersive flux, an approach we designate as stnADE. A closely related space–time-nonlocal representation of ensemble mean transport is obtained upon averaging the motion of solute particles through a random velocity field within a Lagrangian framework, an approach we designate stnL. The concept of continuous time random walk (CTRW) yields a representation of advective–dispersive flux that is nonlocal in time but local in space. Closely related to the latter are forms of the ADE entailing fractional derivatives (fADE), which lead to representations of advective–dispersive flux that are nonlocal in space but local in time; nonlocality in time arises in the context of multirate mass transfer models, which we exclude from consideration in this paper. We describe briefly each of these four nonlocal approaches and offer a perspective on their differences, commonalities, and relative merits as analytical and predictive tools.

20.
The seasonally-dry climate of Northern California imposes significant water stress on ecosystems and water resources during the dry summer months. Frequently during summer, the only water inputs occur as non-rainfall water, in the form of fog and dew. However, due to spatially heterogeneous fog interaction within a watershed, estimating fog water fluxes to understand watershed-scale hydrologic effects remains challenging. In this study, we characterized the role of coastal fog, a dominant feature of Northern Californian coastal ecosystems, in a San Francisco Peninsula watershed. To monitor fog occurrence, intensity, and spatial extent, we focused on the mechanisms through which fog can affect the water balance: throughfall following canopy interception of fog, soil moisture, streamflow, and meteorological variables. A stratified sampling design was used to capture the watershed's spatial heterogeneities in relation to fog events. We developed a novel spatial averaging scheme to upscale local observations of throughfall inputs and evapotranspiration suppression and make watershed-scale estimates of fog water fluxes. Inputs from fog water throughfall (10–30 mm/year) and fog suppression of evapotranspiration (125 mm/year) reduced dry-season water deficits by 25% at watershed scales. Evapotranspiration suppression was much more important for this reduction in water deficit than were direct inputs of fog water. The new upscaling scheme was analyzed to explore the sensitivity of its results to the data type and interpolation method employed. This evaluation suggests that our combination of sensors and remote sensing incorporates spatially-averaged fog fluxes into the water balance better than traditional interpolation approaches do.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)