Similar Articles
20 similar articles found.
1.
This article deals with the right-tail behavior of a response distribution \(F_Y\) conditional on a regressor vector \({\mathbf {X}}={\mathbf {x}}\) restricted to the heavy-tailed case of Pareto-type conditional distributions \(F_Y(y|\ {\mathbf {x}})=P(Y\le y|\ {\mathbf {X}}={\mathbf {x}})\), with heaviness of the right tail characterized by the conditional extreme value index \(\gamma ({\mathbf {x}})>0\). We particularly focus on testing the hypothesis \({\mathscr {H}}_{0,tail}:\ \gamma ({\mathbf {x}})=\gamma _0\) of constant tail behavior for some \(\gamma _0>0\) and all possible \({\mathbf {x}}\). When considering \({\mathbf {x}}\) as a time index, the term trend analysis is commonly used. In the recent past several such trend analyses in extreme value data have been published, mostly focusing on time-varying modeling of location or scale parameters of the response distribution. In many such environmental studies a simple test against trend based on Kendall’s tau statistic is applied. This test is powerful when the center of the conditional distribution \(F_Y(y|{\mathbf {x}})\) changes monotonically in \({\mathbf {x}}\), for instance, in a simple location model \(\mu ({\mathbf {x}})=\mu _0+x\cdot \mu _1\), \({\mathbf {x}}=(1,x)'\), but the test is rather insensitive against monotonic tail behavior, say, \(\gamma ({\mathbf {x}})=\eta _0+x\cdot \eta _1\). This has to be considered, since for many environmental applications the main interest is on the tail rather than the center of a distribution. Our work is motivated by this problem and it is our goal to demonstrate the opportunities and the limits of detecting and estimating non-constant conditional heavy-tail behavior with regard to applications from hydrology. We present and compare four different procedures by simulations and illustrate our findings on real data from hydrology: weekly maxima of hourly precipitation from France and monthly maximal river flows from Germany.  相似文献   
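A minimal sketch of one way to look for the kind of tail trend described above: simulate Pareto-type responses whose extreme value index \(\gamma({\mathbf {x}})\) grows with the covariate, compute block-wise Hill estimates of \(\gamma\), and apply Kendall's tau to those estimates (tau on the raw responses is shown for reference). This is not the authors' procedure; the block size, the number of top order statistics k, and the trend coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 5000
x = np.linspace(0.0, 1.0, n)                 # "time" covariate
gamma = 0.3 + 0.4 * x                        # trending extreme value index
y = rng.uniform(size=n) ** (-gamma)          # Pareto-type responses with tail index gamma(x)

# Kendall's tau on the raw responses, for reference
tau_raw, p_raw = kendalltau(x, y)

# Block-wise Hill estimates of gamma, then Kendall's tau on those estimates
block = 250                                   # illustrative block size
k = 50                                        # top order statistics used per block
hills, centers = [], []
for start in range(0, n, block):
    yb = np.sort(y[start:start + block])[::-1]
    hills.append(np.mean(np.log(yb[:k])) - np.log(yb[k]))   # Hill estimator
    centers.append(x[start:start + block].mean())
tau_hill, p_hill = kendalltau(centers, hills)

print(f"raw responses : tau={tau_raw:+.3f}, p={p_raw:.3g}")
print(f"Hill estimates: tau={tau_hill:+.3f}, p={p_hill:.3g}")
```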

2.
This paper introduces a portfolio approach for quantifying pollution risk in the presence of PM\(_{2.5}\) concentration in cities. The model used is based on a copula dependence structure. For assessing model parameters, we analyze a limited data set of PM\(_{2.5}\) levels of Beijing, Tianjin, Chengde, Hengshui, and Xingtai. This process reveals a better fit for the t-copula dependence structure with generalized hyperbolic marginal distributions for the PM\(_{2.5}\) log-ratios of the cities. Furthermore, we show how to efficiently simulate risk measures clean-air-at-risk and conditional clean-air-at-risk using importance sampling and stratified importance sampling. Our numerical results show that clean-air-at-risk at 0.01 probability level reaches up to \(352\,{\mu \hbox {gm}^{-3}}\) (initial PM\(_{2.5}\) concentrations of cities are assumed to be \(100\,{\mu \hbox {gm}^{-3}}\)) for the constructed sample portfolio, and that the proposed methods are much more efficient than a naive simulation for computing the exceeding probabilities and conditional excesses.  相似文献   
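As a rough illustration of the simulation step (naive Monte Carlo only; the importance sampling and stratification of the paper are not reproduced), the sketch below draws from a t-copula with scipy, maps the uniforms to simple normal stand-ins for the generalized hyperbolic log-ratio marginals, and reads off a clean-air-at-risk-style quantile and its conditional excess. The correlation level, degrees of freedom, marginal scale, and equal-weight portfolio are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, nu, n_sim = 5, 6.0, 100_000                 # 5 cities; df and correlation are illustrative
corr = 0.4 * np.ones((d, d)) + 0.6 * np.eye(d)

# Step 1: t-copula samples (uniform margins)
mvt = stats.multivariate_t(loc=np.zeros(d), shape=corr, df=nu)
u = stats.t.cdf(mvt.rvs(size=n_sim, random_state=rng), df=nu)

# Step 2: map to marginal log-ratio distributions (normal stand-ins for the GH margins)
log_ratios = stats.norm.ppf(u, loc=0.0, scale=0.15)

# Step 3: portfolio of PM2.5 concentrations, all cities starting at 100 µg m^-3
pm25 = 100.0 * np.exp(log_ratios)
portfolio = pm25.mean(axis=1)

# Naive Monte Carlo estimates of clean-air-at-risk (0.99 quantile) and the conditional excess
caar = np.quantile(portfolio, 0.99)
ccaar = portfolio[portfolio >= caar].mean()
print(f"clean-air-at-risk ~ {caar:.1f}, conditional clean-air-at-risk ~ {ccaar:.1f} µg m^-3")
```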

3.
Vegetation is known to influence the hydrological state variables of soil, namely suction (\( \psi \)) and volumetric water content (\( \theta_{w} \)). In addition, vegetation induces heterogeneity in the soil porous structure and consequently in the relative permeability (\( k_{r} \)) of water under unsaturated conditions. The indirect method of utilising the soil water characteristic curve (SWCC) is commonly adopted for the determination of \( k_{r} \). In such cases, it is essential to address the stochastic behaviour of the SWCC in order to conduct a robust analysis of the \( k_{r} \) of vegetative cover. The main aim of this study is to address the uncertainties associated with \( k_{r} \), using probabilistic analysis, for vegetative covers (i.e., grass and tree species) with bare cover as the control treatment. We propose two approaches to accomplish this objective. The univariate suction approach predicts the probability distribution functions of \( k_{r} \) on the basis of the identified best-fitting probability distribution of suction. The bivariate suction and water content approach deals with the bivariate modelling of water content and suction (the SWCC), in order to capture the randomness in the permeability curves due to the presence of vegetation. For this purpose, the dependence structure of \( \psi \) and \( \theta_{w} \) is established via copula theory, and the \( k_{r} \) curves are predicted with respect to varying levels of \( \psi - \theta_{w} \) correlation. The results showed that the \( k_{r} \) of vegetative covers is substantially lower than that of bare covers. The reduction in \( k_{r} \) with drying is greater in tree cover than in grass cover, since tree roots induce higher levels of suction. Moreover, the air entry value of the soil depends on the magnitude of the \( \psi - \theta_{w} \) correlation, which, in turn, is influenced by the type of vegetation in the soil. \( k_{r} \) is found to be highly uncertain in the desaturation zone of the relative permeability curve, and its stochastic behaviour is found to be most significant in tree covers. Finally, a simplified case study is presented in order to demonstrate the impact of the uncertainty in \( k_{r} \) on the stability of vegetated slopes. With an increase in the parameter \( \alpha \), the factor of safety (FS) is found to decrease, while the trend with the parameter \( n \) is the opposite. Overall, FS is found to vary by around 4–5% for both bare and vegetated slopes.
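A small sketch of how SWCC uncertainty can be propagated to \( k_{r} \): the van Genuchten SWCC combined with the Mualem model (an assumption — the paper does not state which SWCC/permeability model it uses) with randomly perturbed parameters standing in for vegetation-induced variability.

```python
import numpy as np

rng = np.random.default_rng(2)

def van_genuchten_se(psi, alpha, n):
    """Effective saturation from suction psi (kPa) via the van Genuchten SWCC."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * psi) ** n) ** (-m)

def mualem_kr(se, n):
    """Relative permeability from effective saturation (Mualem model)."""
    m = 1.0 - 1.0 / n
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

psi = np.logspace(-1, 3, 200)                                  # suction range, kPa
# Illustrative parameter scatter standing in for vegetation-induced variability
alpha = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=500)   # kPa^-1
n = rng.normal(loc=1.8, scale=0.1, size=500).clip(1.2, None)

kr_samples = np.array([mualem_kr(van_genuchten_se(psi, a, nn), nn)
                       for a, nn in zip(alpha, n)])
kr_lo, kr_med, kr_hi = np.percentile(kr_samples, [5, 50, 95], axis=0)
idx = psi.searchsorted(100.0)                                  # report at psi = 100 kPa
print("k_r 5th/50th/95th percentiles at psi = 100 kPa:",
      kr_lo[idx], kr_med[idx], kr_hi[idx])
```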

4.
This paper considers the problem of analyzing the temporal and spatial structure of particulate matter (PM) data, with emphasis on high-level \(\text {PM}_{10}\). The proposed method is based on a combination of a generalized extreme value (GEV) distribution and a multiscale concept from scaling property theory used in hydrology. In this study, we use hourly \(\text {PM}_{10}\) data observed for 5 years at 25 stations located in the Seoul metropolitan area, Korea. For our analysis, we calculate monthly maximum values for various duration times and area coverages at each station, and show that their distribution follows a GEV distribution. In addition, we identify that the GEV parameters of the \(\text {PM}_{10}\) maxima obey a new scaling property, termed the ‘piecewise linear scaling property’, for certain duration times. By using this property, we construct a 12-month return level map of hourly \(\text {PM}_{10}\) data for any arbitrary d-hour duration. Furthermore, we extend our study to understand the spatio-temporal multiscale structure of \(\text {PM}_{10}\) extremes over different temporal and spatial scales.
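The basic building block — fitting a GEV to monthly maxima and evaluating a 12-month return level as the \(1-1/12\) quantile — can be sketched with scipy (note that scipy's genextreme uses the opposite sign convention for the shape parameter); the data below are synthetic stand-ins for station maxima.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# Synthetic monthly maxima of hourly PM10 (µg m^-3), standing in for station data
monthly_maxima = genextreme.rvs(c=-0.1, loc=120.0, scale=30.0, size=60, random_state=rng)

# Fit the GEV and compute the 12-month return level, i.e. the 1 - 1/12 quantile
c, loc, scale = genextreme.fit(monthly_maxima)
return_level_12m = genextreme.ppf(1.0 - 1.0 / 12.0, c, loc=loc, scale=scale)
print(f"12-month return level ~ {return_level_12m:.1f} µg m^-3")
```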

5.
High-biomass red tides occur frequently in some semi-enclosed bays of Hong Kong where ambient nutrients are not high enough to support such a high phytoplankton biomass. These high-biomass red tides release massive inorganic nutrients into local waters during their collapse. We hypothesized that the massive inorganic nutrients released from the collapse of red tides would fuel the growth of other phytoplankton species and could thereby influence phytoplankton species composition. We tested the hypothesis using a red tide event caused by Mesodinium rubrum (M. rubrum) in a semi-enclosed bay, Port Shelter. The red tide patch had a cell density as high as \(5.0\times 10^{5}\) cells \(\hbox {L}^{-1}\) and high chlorophyll a (\(63.71\,\mu \hbox {g L}^{-1}\)). Ambient inorganic nutrients (nitrate: \(\rm{NO}_3^-\), ammonium: \(\rm{NH}_4^+\), phosphate: \(\rm{PO}_4^{3-}\), silicate: \(\rm{SiO}_4^{3-}\)) were low both in the red tide patch and in the non-red-tide patch (clear waters outside the red tide patch). Nutrient addition experiments were conducted by adding all the inorganic nutrients to water samples from the two patches, followed by incubation for 9 days. The results showed that the addition of inorganic nutrients did not sustain the high M. rubrum cell density, which collapsed after day 1, and did not drive M. rubrum in the non-red-tide patch sample to the same high cell density as in the red tide patch sample. This confirmed that nutrients were not the driving factor for the formation of this red tide event, or for its collapse. The death of M. rubrum after day 1 released high concentrations of \(\rm{NO}_3^-\), \(\rm{PO}_4^{3-}\), \(\rm{SiO}_4^{3-}\), \(\rm{NH}_4^+\), and urea. Bacterial abundance and heterotrophic activity increased, reaching their highest levels on day 3 or 4, and decreased as the cell density of M. rubrum declined. The released nutrients stimulated the growth of diatoms, such as Chaetoceros affinis var. circinalis, Thalassiothrix frauenfeldii, and Nitzschia sp., and other species, particularly in the \(\rm{SiO}_4^{3-}\)-addition treatments. These results demonstrated that the initiation of M. rubrum red tides in the bay was not directly driven by nutrients. However, the massive inorganic nutrients released from the collapse of the red tide could induce a second bloom in low-ambient-nutrient water, influencing phytoplankton species composition.

6.
In this work, we map the absorption properties of the French crust by analyzing the decay properties of coda waves. Estimation of the coda quality factor \(Q_{c}\) in five non-overlapping frequency bands between 1 and 32 Hz is performed for more than 12,000 high-quality seismograms from about 1700 weak to moderate crustal earthquakes recorded between 1995 and 2013. Based on a sensitivity analysis, \(Q_{c}\) is subsequently approximated as an integral of the intrinsic shear wave quality factor \(Q_{i}\) along the ray connecting the source to the station. After discretization of the medium on a 2-D Cartesian grid, this yields a linear inverse problem for the spatial distribution of \(Q_{i}\). The solution is approximated by redistributing \(Q_{c}\) in the pixels connecting the source to the station and averaging over all paths. This simple procedure yields frequency-dependent maps of apparent absorption that show lateral variations of \(50\%\) at length scales ranging from 50 km to 150 km, in all the frequency bands analyzed. At low frequency, the small-scale geological features of the crust are clearly delineated: the Meso-Cenozoic basins (Aquitaine, Brabant, Southeast) appear as strong absorption regions, while crystalline massifs (Armorican, Central Massif, Alps) appear as low absorption zones. At high frequency, the correlation between the surface geological features and the absorption map disappears, except for the deepest Meso-Cenozoic basins, which exhibit a strong absorption signature. Based on the tomographic results, we explore the implications of lateral variations of absorption for the analysis of both instrumental and historical seismicity. The main conclusions are as follows: (1) the current local magnitude \(M_{L}\) can be over- (resp. under-)estimated when absorption is weaker (resp. stronger) than the nominal value assumed in the amplitude-distance relation; (2) both the forward prediction of the earthquake macroseismic intensity field and the estimation of historical earthquake seismological parameters using macroseismic intensity data are significantly improved by taking into account a realistic 2-D distribution of absorption. In the future, both \(M_{L}\) estimates and macroseismic intensity attenuation models should benefit from high-resolution models of frequency-dependent absorption such as the one produced in this study.
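A toy sketch of the back-projection step described above: each path's \(1/Q_{c}\) is spread over the grid pixels its straight source–station ray crosses, and pixel values are averaged over all paths. The grid, paths, and \(Q_{c}\) values are hypothetical, and rays are sampled rather than intersected exactly.

```python
import numpy as np

def backproject_qc(sources, stations, qc_values, x_edges, y_edges, n_samp=200):
    """Average 1/Qc over the pixels crossed by each straight source-station ray."""
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    acc = np.zeros((nx, ny))
    hits = np.zeros((nx, ny))
    for (sx, sy), (rx, ry), qc in zip(sources, stations, qc_values):
        t = np.linspace(0.0, 1.0, n_samp)
        px, py = sx + t * (rx - sx), sy + t * (ry - sy)        # points along the ray
        ix = np.clip(np.searchsorted(x_edges, px) - 1, 0, nx - 1)
        iy = np.clip(np.searchsorted(y_edges, py) - 1, 0, ny - 1)
        for i, j in set(zip(ix.tolist(), iy.tolist())):        # each crossed pixel once
            acc[i, j] += 1.0 / qc
            hits[i, j] += 1.0
    with np.errstate(divide="ignore", invalid="ignore"):
        return 1.0 / (acc / hits)          # apparent Qi per pixel (nan where no coverage)

# Toy example: two crossing paths on a 4 x 4 grid (coordinates in km)
x_edges = y_edges = np.linspace(0.0, 100.0, 5)
qi_map = backproject_qc(sources=[(5, 5), (95, 5)], stations=[(95, 95), (5, 95)],
                        qc_values=[300.0, 600.0], x_edges=x_edges, y_edges=y_edges)
print(qi_map)
```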

7.
The first part of this paper reviews methods that use effective solar indices to update a background ionospheric model, focusing on those employing the Kriging method for the spatial interpolation. It then proposes a method to update the International Reference Ionosphere (IRI) model through the assimilation of data collected by a European ionosonde network. The method, called International Reference Ionosphere UPdate (IRI UP), which can potentially operate in real time, is mathematically described and validated for the period 9–25 March 2015 (a time window including the well-known St. Patrick's Day storm that occurred on 17 March), using the IRI and IRI Real Time Assimilative Model (IRTAM) models as references. It relies on the foF2 and M(3000)F2 ionospheric characteristics, recorded routinely by a network of 12 European ionosonde stations, which are used to calculate, for each station, effective values of the IRI indices \(IG_{12}\) and \(R_{12}\) (identified as \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\)); then, starting from this discrete dataset of values, two-dimensional (2D) maps of \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\) are generated through the universal Kriging method. Five variogram models are proposed and tested statistically to select the best performer for each effective index. The computed maps of \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\) are then used in the IRI model to synthesize updated values of foF2 and hmF2. To evaluate the ability of the proposed method to reproduce the rapid local changes that are common under disturbed conditions, quality metrics are calculated for the IRI, IRI UP, and IRTAM models at two test stations whose measurements were not assimilated in IRI UP, Fairford (51.7°N, 1.5°W) and San Vito (40.6°N, 17.8°E). The proposed method turns out to be very effective under highly disturbed conditions, with significant improvements in the foF2 representation and noticeable improvements in the hmF2 one. Important improvements are also verified for quiet and moderately disturbed conditions. A visual analysis of foF2 and hmF2 maps highlights the ability of the IRI UP method to capture small-scale changes occurring under disturbed conditions that are not seen by IRI.
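The interpolation step can be sketched with a hand-rolled ordinary-kriging solver and an exponential variogram (a simplification — the paper uses universal Kriging and selects among five variogram models); the station coordinates and effective-index values below are hypothetical.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng_param=10.0):
    """Exponential variogram model gamma(h)."""
    return nugget + sill * (1.0 - np.exp(-h / rng_param))

def ordinary_kriging(xy_obs, z_obs, xy_grid, **vario):
    """Ordinary kriging of scattered effective-index values onto grid points."""
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))                 # kriging system with Lagrange multiplier
    A[:n, :n] = exp_variogram(d_obs, **vario)
    A[-1, -1] = 0.0
    out = np.empty(len(xy_grid))
    for k, p in enumerate(xy_grid):
        b = np.ones(n + 1)
        b[:n] = exp_variogram(np.linalg.norm(xy_obs - p, axis=1), **vario)
        w = np.linalg.solve(A, b)[:n]
        out[k] = w @ z_obs
    return out

# Toy example: IG12eff-like values at a few "ionosonde" locations (degrees lon/lat)
xy_obs = np.array([[2.0, 48.0], [12.0, 41.0], [-1.5, 51.7], [17.8, 40.6]])
z_obs = np.array([95.0, 88.0, 102.0, 85.0])
xy_grid = np.array([[5.0, 45.0], [10.0, 50.0]])
print(ordinary_kriging(xy_obs, z_obs, xy_grid, sill=40.0, rng_param=15.0))
```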

8.
Vulnerability maps are designed to show areas of greatest potential for groundwater contamination on the basis of hydrogeological conditions and human impacts. The objectives of this research are (1) to assess groundwater vulnerability using the DRASTIC method and (2) to improve the DRASTIC method for evaluating groundwater contamination risk using AI methods, such as the ANN, SFL, MFL, NF, and SCMAI approaches. This optimization method is illustrated using a case study. For this purpose, the DRASTIC model is developed using seven parameters. To validate the contamination risk assessment, a total of 243 groundwater samples were collected from different aquifer types of the study area and analyzed for \( {\text{NO}}_{3}^{-} \) concentration. To develop the AI and CMAI models, the 243 data points are divided into two sets, training and validation, based on a cross-validation approach. The vulnerability indices calculated from the DRASTIC method are corrected by the \( {\text{NO}}_{3}^{-} \) data used in the training step. The input data of the AI models comprise the seven parameters of the DRASTIC method, while the output is the vulnerability index corrected using \( {\text{NO}}_{3}^{-} \) concentration data from the study area, which is called the groundwater contamination risk. In other words, there is a target value (known output) that is estimated by a formula from the DRASTIC vulnerability and \( {\text{NO}}_{3}^{-} \) concentration values. After model training, the AI models are verified with the second \( {\text{NO}}_{3}^{-} \) concentration dataset. The results revealed that NF and SFL produced acceptable performance, while ANN and MFL predicted poorly. A supervised committee machine artificial intelligence (SCMAI) model, which combines the results of the individual AI models using a supervised artificial neural network, was developed for better prediction of vulnerability. The performance of SCMAI was also compared to that of the simple-averaging and weighted-averaging committee machine intelligence (CMI) methods. As a result, the SCMAI model produced reliable estimates of groundwater contamination risk.
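A schematic of the supervised-committee idea: predictions of several individual regressors become inputs to a small neural network that learns the combination. The member models here are generic scikit-learn stand-ins for the ANN/SFL/MFL/NF models of the paper, and the data are synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
# Synthetic stand-ins: 7 DRASTIC parameters -> corrected vulnerability index (target)
X = rng.uniform(size=(243, 7))
y = X @ rng.uniform(0.5, 1.5, size=7) + 0.1 * rng.normal(size=243)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# Individual models (stand-ins for ANN, SFL, MFL, NF)
members = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
           DecisionTreeRegressor(max_depth=4, random_state=0),
           Ridge(alpha=1.0)]
preds_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in members])
preds_va = np.column_stack([m.predict(X_va) for m in members])

# Supervised committee: a small ANN combines the member predictions
committee = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
committee.fit(preds_tr, y_tr)
print("committee R^2 on validation set:", committee.score(preds_va, y_va))
```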

9.
Nowadays, most site classification schemes are based on the predominant period of the site as determined from the average horizontal-to-vertical spectral ratios of seismic motion or microtremors. However, the difficulty lies in the identification of the predominant period, in particular if the observed average response spectral ratio does not present a clear peak but rather broadband amplification or multiple peaks. In this work, based on the Eurocode-8 (2004) site classification, and assuming bounded random fields for the shear- and compression-wave velocities, damping coefficient, natural period, and depth of the soil profile, we propose a new site-classification approach based on “target” simulated average \( H/V \) spectral ratios defined for each soil class. Taking advantage of the relationship of Kawase et al. (Bull Seismol Soc Am 101:2001–2014, 2011), which links the \( H/V \) spectral ratio to the ratio of the horizontal (\( HTF \)) to the vertical (\( VTF \)) transfer functions, statistics of the \( H/V \) spectral ratio are computed for the four soil classes via deterministic visco-elastic seismic analysis using wave propagation theory. The results show that \( H/V \) and \( HTF \) have remarkably different amplitudes and shapes among the four soil classes, yet exhibit fundamental peaks in remarkably similar period ranges. Moreover, the “target” simulated average \( H/V \) spectral ratios for the four soil classes are in good agreement with the experimental ones obtained by Zhao et al. (Bull Seismol Soc Am 96:914–925, 2006) from the abundant and reliable Japanese strong-motion database Kik-net, by Ghasemi et al. (Soil Dyn Earthq Eng 29:121–132, 2009) from Iranian strong-motion data, and by Di Alessandro et al. (Bull Seismol Soc Am 106:2, 2011.  https://doi.org/10.1785/0120110084) from Italian strong-motion data. In addition to the four EC-8 standard soil classes (A, B, C, and D), the superposition of the four target \( H/V \) curves reveals three new boundary site classes, AB, BC, and CD, for overlapping \( V_{s,30} \) ranges when the predominant peak is not clearly consistent with any of the four proposed classes. Finally, we propose a site classification index based on the ratio of the cross-correlation to the mean quadratic error between the in situ \( H/V \) spectral ratio and the “target” one. In order to test the reliability of the proposed approach, data from 139 sites were used, 132 collected from the Kik-net network database in Japan and 7 from Algeria. The site classification success rates per site class are around 93, 82, 89, and 100% for rock, hard soil, medium soil, and soft soil, respectively. Zhao et al. (2006) found an average success rate for the four soil classes close to 60%, similar to what is found in the present study (63%) without considering the new soil classes, but much smaller than when they are considered (86%). In the absence of \( V_{s,30} \) data, the proposed approach can be an alternative for site classification.
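The proposed classification index has a compact form — cross-correlation divided by the mean quadratic error between the observed and target \( H/V \) curves; how exactly the two terms are normalised is not spelled out in the abstract, so the sketch below is one plausible reading with toy curves.

```python
import numpy as np

def classification_index(hv_obs, hv_target):
    """Ratio of cross-correlation to mean quadratic error between two H/V curves."""
    cc = np.corrcoef(hv_obs, hv_target)[0, 1]
    mqe = np.mean((hv_obs - hv_target) ** 2)
    return cc / mqe

def classify(hv_obs, targets):
    """Assign the class whose target H/V curve maximises the index."""
    scores = {name: classification_index(hv_obs, hv) for name, hv in targets.items()}
    return max(scores, key=scores.get), scores

# Toy curves on a common period axis (the real targets come from the simulated averages)
period = np.linspace(0.05, 2.0, 100)
targets = {"A": 1.0 + 0.5 * np.exp(-(period - 0.15) ** 2 / 0.002),
           "C": 1.0 + 2.0 * np.exp(-(period - 0.60) ** 2 / 0.02)}
hv_site = (1.0 + 1.8 * np.exp(-(period - 0.55) ** 2 / 0.02)
           + 0.1 * np.random.default_rng(5).normal(size=100))
print(classify(hv_site, targets)[0])   # expected: "C"
```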

10.
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line \(y = a x + b\). This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to \(M_{w}\) vs. \(m_{b}\) and \(M_{w}\) vs. \(M_{S}\) regressions. This improvement is minor, within the typical error of \(M_{w}\). Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.  相似文献   
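The closed-form errors-in-variables (general orthogonal, or Deming-type) fit for \(y = ax + b\) when only the ratio of error variances is known is short enough to sketch; this is the standard homoscedastic solution discussed in that literature, not necessarily the paper's exact expressions, and the magnitude data below are synthetic.

```python
import numpy as np

def ev_regression(X, Y, eta):
    """Errors-in-variables fit of y = a*x + b when only the error-variance
    ratio eta = var(err_Y)/var(err_X) is known (general orthogonal regression)."""
    xm, ym = X.mean(), Y.mean()
    sxx = np.mean((X - xm) ** 2)
    syy = np.mean((Y - ym) ** 2)
    sxy = np.mean((X - xm) * (Y - ym))
    a = (syy - eta * sxx + np.sqrt((syy - eta * sxx) ** 2 + 4.0 * eta * sxy ** 2)) / (2.0 * sxy)
    b = ym - a * xm
    return a, b

# Toy example: mb-like and Mw-like magnitudes, both observed with error
rng = np.random.default_rng(6)
x_true = rng.uniform(4.0, 7.0, size=300)
X = x_true + rng.normal(scale=0.15, size=300)                 # observed "mb"
Y = 1.1 * x_true - 0.4 + rng.normal(scale=0.15, size=300)     # observed "Mw"
print(ev_regression(X, Y, eta=1.0))                           # close to (1.1, -0.4)
```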

11.
Random fields based on energy functionals with local interactions possess flexible covariance functions, lead to computationally efficient algorithms for spatial data processing, and have important applications in Bayesian field theory. In this paper we address the calculation of covariance functions for a family of isotropic local-interaction random fields in two dimensions. We derive explicit expressions for non-differentiable Spartan covariance functions in \({\mathbb{R}}^2\) that are based on the modified Bessel function of the second kind. We also derive a family of infinitely differentiable, Bessel-Lommel covariance functions that exhibit a hole effect and are valid in \({\mathbb{R}}^{d}\), where d > 2. Finally, we define a generalized spectrum of correlation scales that can be applied to both differentiable and non-differentiable random fields in contrast with the smoothness microscale.  相似文献   
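The paper's explicit Spartan and Bessel–Lommel expressions are not reproduced here; purely as an illustration of covariance functions built from the modified Bessel function of the second kind, the sketch below evaluates a Whittle–Matérn covariance with scipy.special.kv.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Whittle-Matern covariance built from the modified Bessel function K_nu."""
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)                  # C(0) = sigma^2
    pos = h > 0
    s = h[pos] / rho
    c[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    return c

lags = np.linspace(0.0, 5.0, 6)
print(matern_cov(lags, nu=0.5))    # nu = 0.5 reduces to the exponential covariance
print(matern_cov(lags, nu=2.5))    # a smoother, differentiable member of the family
```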

12.
Diurnal S\(_1\) tidal oscillations in the coupled atmosphere–ocean system induce small perturbations of Earth’s prograde annual nutation, but matching geophysical model estimates of this Sun-synchronous rotation signal with the observed effect in geodetic Very Long Baseline Interferometry (VLBI) data has thus far been elusive. The present study assesses the problem from a geophysical model perspective, using four modern-day atmospheric assimilation systems and a consistently forced barotropic ocean model that dissipates its energy excess in the global abyssal ocean through a parameterized tidal conversion scheme. The use of contemporary meteorological data does, however, not guarantee accurate nutation estimates per se; two of the probed datasets produce atmosphere–ocean-driven S\(_1\) terms that deviate by more than 30 \(\upmu \)as (microarcseconds) from the VLBI-observed harmonic of \(-16.2+i113.4\) \(\upmu \)as. Partial deficiencies of these models in the diurnal band are also borne out by a validation of the air pressure tide against barometric in situ estimates as well as comparisons of simulated sea surface elevations with a global network of S\(_1\) tide gauge determinations. Credence is lent to the global S\(_1\) tide derived from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) and the operational model of the European Centre for Medium-Range Weather Forecasts (ECMWF). When averaged over a temporal range of 2004 to 2013, their nutation contributions are estimated to be \(-8.0+i106.0\) \(\upmu \)as (MERRA) and \(-9.4+i121.8\) \(\upmu \)as (ECMWF operational), thus being virtually equivalent with the VLBI estimate. This remarkably close agreement will likely aid forthcoming nutation theories in their unambiguous a priori account of Earth’s prograde annual celestial motion.  相似文献   

13.
Temperature data from SABER/TIMED and Empirical Orthogonal Function (EOF) analysis are used to examine possible modulations of the temperature migrating diurnal tide (DW1) by the latitudinal gradient of the zonal mean zonal wind (denoted \(\bar{u}_{y}\)). The results show that \(\bar{u}_{y}\) increases with altitude and displays clear seasonal and interannual variability. In the upper mesosphere and lower thermosphere (MLT), at latitudes between 20°N and 20°S, when \(\bar{u}_{y}\) strengthens (weakens) at equinoxes (solstices), the DW1 amplitude increases (decreases) simultaneously. A stronger maximum at the March–April equinox occurs in both \(\bar{u}_{y}\) and the DW1 amplitude. Besides, a quasi-biennial oscillation of DW1 is also found to be synchronous with \(\bar{u}_{y}\). These resembling spatial–temporal features suggest that \(\bar{u}_{y}\) in the upper tropical MLT probably plays an important role in modulating the semiannual, annual, and quasi-biennial oscillations of DW1 at the same latitudes and altitudes. In addition, \(\bar{u}_{y}\) in the mesosphere possibly affects the propagation of DW1 and produces the semiannual oscillation (SAO) of DW1 in the lower thermosphere. Thus, the SAO of DW1 in the upper MLT may be a combined effect of \(\bar{u}_{y}\) in both the mesosphere and the upper MLT, which modeling studies should examine in the future.

14.
In a previous publication, the seismicity of Japan from 1 January 1984 to 11 March 2011 (the time of the \(M9\) Tohoku earthquake occurrence) has been analyzed in a time domain called natural time \(\chi.\) The order parameter of seismicity in this time domain is the variance of \(\chi\) weighted for normalized energy of each earthquake. It was found that the fluctuations of the order parameter of seismicity exhibit 15 distinct minima—deeper than a certain threshold—1 to around 3 months before the occurrence of large earthquakes that occurred in Japan during 1984–2011. Six (out of 15) of these minima were followed by all the shallow earthquakes of magnitude 7.6 or larger during the whole period studied. Here, we show that the probability to achieve the latter result by chance is of the order of \(10^{-5}\). This conclusion is strengthened by employing also the receiver operating characteristics technique.  相似文献   
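The order parameter referred to above has a compact definition: the variance of natural time \(\chi_k = k/N\) weighted by the normalised energy \(p_k\) of each event. A minimal sketch with synthetic magnitudes (not the Japan catalogue, and without the sliding-window fluctuation analysis):

```python
import numpy as np

def natural_time_kappa1(energies):
    """Variance of natural time chi_k = k/N weighted by normalised energies p_k
    (the order parameter of seismicity in natural time)."""
    E = np.asarray(energies, dtype=float)
    N = len(E)
    chi = np.arange(1, N + 1) / N
    p = E / E.sum()
    return np.sum(p * chi ** 2) - np.sum(p * chi) ** 2

# Toy example: seismic energies from magnitudes via the proportionality E ~ 10^(1.5 M)
rng = np.random.default_rng(7)
mags = rng.uniform(3.5, 6.0, size=300)
energies = 10.0 ** (1.5 * mags)
print(f"kappa_1 = {natural_time_kappa1(energies):.4f}")
```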

15.
During the last 15 years, more attention has been paid to derive analytic formulae for the gravitational potential and field of polyhedral mass bodies with complicated polynomial density contrasts, because such formulae can be more suitable to approximate the true mass density variations of the earth (e.g., sedimentary basins and bedrock topography) than methods that use finer volume discretization and constant density contrasts. In this study, we derive analytic formulae for gravity anomalies of arbitrary polyhedral bodies with complicated polynomial density contrasts in 3D space. The anomalous mass density is allowed to vary in both horizontal and vertical directions in a polynomial form of \(\lambda =ax^m+by^n+cz^t\), where mnt are nonnegative integers and abc are coefficients of mass density. First, the singular volume integrals of the gravity anomalies are transformed to regular or weakly singular surface integrals over each polygon of the polyhedral body. Then, in terms of the derived singularity-free analytic formulae of these surface integrals, singularity-free analytic formulae for gravity anomalies of arbitrary polyhedral bodies with horizontal and vertical polynomial density contrasts are obtained. For an arbitrary polyhedron, we successfully derived analytic formulae of the gravity potential and the gravity field in the case of \(m\le 1\), \(n\le 1\), \(t\le 1\), and an analytic formula of the gravity potential in the case of \(m=n=t=2\). For a rectangular prism, we derive an analytic formula of the gravity potential for \(m\le 3\), \(n\le 3\) and \(t\le 3\) and closed forms of the gravity field are presented for \(m\le 1\), \(n\le 1\) and \(t\le 4\). Besides generalizing previously published closed-form solutions for cases of constant and linear mass density contrasts to higher polynomial order, to our best knowledge, this is the first time that closed-form solutions are presented for the gravitational potential of a general polyhedral body with quadratic density contrast in all spatial directions and for the vertical gravitational field of a prismatic body with quartic density contrast along the vertical direction. To verify our new analytic formulae, a prismatic model with depth-dependent polynomial density contrast and a polyhedral body in the form of a triangular prism with constant contrast are tested. Excellent agreements between results of published analytic formulae and our results are achieved. Our new analytic formulae are useful tools to compute gravity anomalies of complicated mass density contrasts in the earth, when the observation sites are close to the surface or within mass bodies.  相似文献   
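The analytic formulae themselves are lengthy, but the kind of numerical cross-check used for verification can be sketched by direct quadrature of the Newtonian integral for a prism with a depth-dependent polynomial density contrast; the geometry and density coefficients below are arbitrary examples.

```python
import numpy as np
from scipy.integrate import tplquad

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(x1, x2, y1, y2, z1, z2, density):
    """Vertical gravity (m s^-2) at the origin due to a prism with density(z),
    z measured positive downward, computed by direct numerical quadrature."""
    # tplquad expects func(z, y, x) with x as the outermost integration variable
    integrand = lambda z, y, x: density(z) * z / (x * x + y * y + z * z) ** 1.5
    val, _ = tplquad(integrand, x1, x2, y1, y2, z1, z2)
    return G * val

# Example: 200 m cube, top at 50 m depth, quadratic depth-dependent density contrast
rho = lambda z: 200.0 + 0.5 * z + 1e-3 * z ** 2        # kg m^-3
gz = prism_gz(-100.0, 100.0, -100.0, 100.0, 50.0, 250.0, rho)
print(f"g_z = {gz * 1e5:.3f} mGal")
```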

16.
Microstructure measurements were performed along two sections through the Halmahera Sea and the Ombai Strait and at a station in the deep Banda Sea. Contrasting dissipation rates (\(\epsilon\)) and vertical eddy diffusivities (\(K_{z}\)) were obtained, with depth-averaged ranges of \(\sim [9 \times 10^{-10}-10^{-5}]\) W kg\(^{-1}\) and \(\sim [1 \times 10^{-5}-2 \times 10^{-3}]\) m\(^{2}\) s\(^{-1}\), respectively. Similarly, the turbulence intensity, \(I={\epsilon }/(\nu N^{2})\) with \(\nu\) the kinematic viscosity and \(N\) the buoyancy frequency, was found to vary over seven orders of magnitude, with values up to \(10^{7}\). These large ranges of variation were correlated with the internal tide energy level, which highlights the contrast between regions close to and far from internal tide generation sites. Finescale parameterizations of \(\epsilon\) induced by the breaking of weakly nonlinear internal waves were only relevant in regions located far from any generation area (“far field”), at the deep Banda Sea station. Closer to generation areas, at the “intermediate field” station of the Halmahera Sea, a modified formulation of MacKinnon and Gregg (2005) was validated for moderately turbulent regimes with 100 < I < 1000. Near generation areas marked by strong turbulent regimes, such as the “near field” stations within straits and passages, \(\epsilon\) is most adequately inferred from horizontal velocities provided that part of the inertial subrange is resolved, according to Kolmogorov scaling.
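The abstract does not state how \(K_z\) was obtained from \(\epsilon\); a common choice is the Osborn relation \(K_z = \Gamma \epsilon / N^{2}\) with mixing efficiency \(\Gamma \approx 0.2\), which the sketch below uses together with the turbulence intensity \(I = \epsilon/(\nu N^{2})\). The \(\epsilon\) and \(N^{2}\) values are illustrative.

```python
import numpy as np

nu = 1.0e-6                      # kinematic viscosity of seawater, m^2 s^-1
gamma_mix = 0.2                  # canonical mixing efficiency (assumption)

def diffusivity_and_intensity(eps, n2):
    """Vertical eddy diffusivity (Osborn relation) and turbulence intensity from
    dissipation rate eps (W kg^-1) and buoyancy frequency squared n2 (s^-2)."""
    kz = gamma_mix * eps / n2
    intensity = eps / (nu * n2)
    return kz, intensity

# Example values spanning the quiet-to-energetic contrast reported above
eps = np.array([9e-10, 1e-7, 1e-5])          # W kg^-1
n2 = np.array([1e-5, 1e-6, 1e-6])            # s^-2
kz, I = diffusivity_and_intensity(eps, n2)
for e, k, i in zip(eps, kz, I):
    print(f"eps={e:.1e} W/kg -> K_z={k:.1e} m^2/s, I={i:.1e}")
```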

17.
Modelling seismic attenuation is one of the most critical points in the hazard assessment process. In this article we consider the spatial distribution of the effects caused by an earthquake as expressed by the values of the macroseismic intensity recorded at various locations surrounding the epicentre. Considering the ordinal nature of the intensity, a way to show its decay with distance is to draw curves—isoseismal lines—on maps, which bound points of intensity not smaller than a fixed value. These lines usually take the form of closed and nested curves around the epicentre, with highly different shapes because of the effects of ground conditions and of complexities in rupture propagation. Forecasting seismic attenuation of future earthquakes requires stochastic modelling of the decay on the basis of a common spatial pattern. The aim of this study is to consider a statistical methodology that identifies a general shape, if it exists, for isoseismal lines of a set of macroseismic fields. Data depth is a general nonparametric method for analysis of probability distributions and datasets. It has arisen as a statistical method to order points of a multivariate space, e.g., Euclidean space \({\mathbb {R}}^{p}\), \(p \ge 1\), according to the centrality with respect to a distribution or a given data cloud. Recently, this method has been extended to the ordering of functions and trajectories. In our case, for a fixed intensity decay \(\varDelta I\), we build a set of convex hulls that enclose the sites of felt intensity \(I_s \ge I_0 -\varDelta I\), one for each macroseismic field of a set of earthquakes that are considered as similar from the attenuation point of view. By applying data depth functions to this functional dataset, it is possible to identify the most central curve, i.e., the attenuation pattern, and to consider other properties like variability, outlyingness, and possible clustering of such curves. Results are shown for earthquakes that occurred on the Central Po Plain in May 2012, and on the eastern flank of Mt. Etna since 1865.  相似文献   
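The per-earthquake building block — the convex hull enclosing the sites with \(I_s \ge I_0 - \varDelta I\) — is straightforward with scipy; the data-depth ordering of the resulting curves is not reproduced here, and the epicentre, coordinates, and decay law below are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def intensity_hull(lon, lat, intensity, i0, delta_i):
    """Convex hull of the sites with felt intensity >= I0 - delta_I."""
    pts = np.column_stack([lon, lat])[np.asarray(intensity) >= i0 - delta_i]
    hull = ConvexHull(pts)
    return pts[hull.vertices]            # hull vertices, in counter-clockwise order

# Toy macroseismic field: intensities decaying away from an epicentre at (11.0, 44.9)
rng = np.random.default_rng(8)
lon = 11.0 + rng.normal(scale=0.5, size=400)
lat = 44.9 + rng.normal(scale=0.4, size=400)
dist = np.hypot(lon - 11.0, lat - 44.9)
intensity = np.clip(np.rint(7.5 - 6.0 * dist + rng.normal(scale=0.5, size=400)), 1, 8)

hull_d2 = intensity_hull(lon, lat, intensity, i0=8, delta_i=2)   # sites with I >= 6
print(hull_d2)
```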

18.
This paper gives the exact solution in terms of the Karhunen–Loève expansion to a fractional stochastic partial differential equation on the unit sphere \({\mathbb {S}}^{2} \subset {\mathbb {R}}^{3}\) with fractional Brownian motion as driving noise and with random initial condition given by a fractional stochastic Cauchy problem. A numerical approximation to the solution is given by truncating the Karhunen–Loève expansion. We show the convergence rates of the truncation errors in degree and the mean square approximation errors in time. Numerical examples using an isotropic Gaussian random field as initial condition and simulations of evolution of cosmic microwave background are given to illustrate the theoretical results.  相似文献   
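One simple way to synthesise the kind of initial condition used in the numerical examples — an isotropic Gaussian random field on the sphere — is a truncated spherical-harmonic expansion with independent Gaussian coefficients scaled by an assumed angular power spectrum; the truncation degree and power law below are arbitrary.

```python
import numpy as np
from scipy.special import sph_harm

def isotropic_grf_on_sphere(L, angular_power, theta, phi, seed=0):
    """Truncated spherical-harmonic expansion of an isotropic Gaussian random field:
    real part of sum_{l<=L} sum_m sqrt(C_l) xi_lm Y_lm(theta, phi)."""
    rng = np.random.default_rng(seed)
    field = np.zeros_like(theta, dtype=complex)
    for l in range(L + 1):
        cl = angular_power(l)
        for m in range(-l, l + 1):
            xi = rng.normal() + 1j * rng.normal()          # complex Gaussian coefficient
            field += np.sqrt(cl / 2.0) * xi * sph_harm(m, l, theta, phi)
    return field.real

# Evaluate on a coarse grid: theta = azimuth in [0, 2pi), phi = colatitude in [0, pi]
theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, 72), np.linspace(0, np.pi, 36))
f = isotropic_grf_on_sphere(L=20, angular_power=lambda l: (1.0 + l) ** -3, theta=theta, phi=phi)
print(f.shape, f.std())
```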

19.
In situ, airborne and satellite measurements are used to characterize the structure of water vapor in the lower tropical troposphere—below the height, \(z_*,\) of the triple-point isotherm, \(T_*.\) The measurements are evaluated in light of understanding of how lower-tropospheric water vapor influences clouds, convection and circulation, through both radiative and thermodynamic effects. Lower-tropospheric water vapor, which concentrates in the first few kilometers above the boundary layer, controls the radiative cooling profile of the boundary layer and lower troposphere. Elevated moist layers originating from a preferred level of convective detrainment induce a profile of radiative cooling that drives circulations which reinforce such features. A theory for this preferred level of cumulus termination is advanced, whereby the difference between \(T_*\) and the temperature at which primary ice forms gives a ‘first-mover advantage’ to glaciating cumulus convection, thereby concentrating the regions of the deepest convection and leading to more clouds and moisture near the triple point. A preferred level of convective detrainment near \(T_*\) implies relative humidity reversals below \(z_*\) which are difficult to identify using retrievals from satellite-borne microwave and infrared sounders. Isotopologue retrievals provide a hint of such features, and their ability to constrain the structure of the vertical humidity profile merits further study. Nonetheless, it will likely remain challenging to resolve dynamically important aspects of the vertical structure of water vapor from space using only passive sensors.

20.
In this study, the 11 August 2012 \(M_{w}\) 6.4 Ahar earthquake is investigated using ground motion simulation based on a stochastic finite-fault model. The earthquake occurred in northwestern Iran and caused extensive damage in the city of Ahar and surrounding areas. A network consisting of 58 acceleration stations recorded the earthquake within 8–217 km of the epicenter. Strong ground motion records from six significant, well-recorded stations close to the epicenter have been simulated. These stations are installed in areas that experienced significant structural damage and loss of life during the earthquake. The simulation is carried out using the dynamic corner frequency model of rupture propagation with the extended fault simulation program (EXSIM). For this purpose, the propagation features of the shear wave, including the \( {Q}_s \) value, the kappa value \( {k}_0 \), and the soil amplification coefficients at each site, are required. The kappa values are obtained from the slope of the smoothed Fourier amplitude spectrum of acceleration at higher frequencies. The kappa values determined for the vertical and horizontal components are 0.02 and 0.05 s, respectively. Furthermore, an anelastic attenuation parameter is derived from the energy decay of the seismic wave by using the continuous wavelet transform (CWT) for each station. The average frequency-dependent relation estimated for the region is \( Q=\left(122\pm 38\right){f}^{\left(1.40\pm 0.16\right)} \). Moreover, the horizontal-to-vertical spectral ratio \( H/V \) is applied to estimate the site effects at the stations. Spectral analysis of the data indicates that the best match between the observed and simulated spectra occurs for an average stress drop of 70 bars. Finally, the simulated and observed results are compared in terms of pseudo-acceleration spectra and peak ground motions. The comparison of the time-series spectra shows good agreement between the observed and simulated waveforms at frequencies of engineering interest.
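The kappa estimation step follows the usual Anderson–Hough approach: kappa is minus the slope of \(\ln|A(f)|\) versus frequency over a high-frequency band, divided by \(\pi\). The band limits and the synthetic spectrum below are assumptions, used only to check that the value is recovered.

```python
import numpy as np

def estimate_kappa(freq, fourier_amp, f1=10.0, f2=25.0):
    """Spectral decay parameter kappa from the slope of ln|A(f)| vs f
    over the high-frequency band [f1, f2] (Anderson & Hough approach)."""
    band = (freq >= f1) & (freq <= f2)
    slope, _ = np.polyfit(freq[band], np.log(fourier_amp[band]), 1)
    return -slope / np.pi

# Synthetic acceleration spectrum with kappa = 0.05 s plus noise, to check the recovery
rng = np.random.default_rng(9)
f = np.linspace(0.5, 30.0, 600)
amp = 10.0 * np.exp(-np.pi * 0.05 * f) * np.exp(0.05 * rng.normal(size=f.size))
print(f"recovered kappa ~ {estimate_kappa(f, amp):.3f} s")
```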
