Similar Documents
20 similar documents found (search time: 78 ms)
1.
Time-expanded sampling for ensemble Kalman filter assimilation of simulated soundings   Cited by: 2 (self-citations: 0, other citations: 2)
At present, most of the computational cost and time of an ensemble Kalman filter (EnKF) assimilation-forecast cycling system is spent on forecasting the ensemble members. A small ensemble reduces the computational burden, but too few members, especially in the presence of model error, can cause filter divergence. To improve the efficiency of the EnKF assimilation-forecast cycle and alleviate filter divergence, experiments assimilating simulated soundings with a WRF-based time-expanded sampling EnKF were carried out to examine its performance in a mesoscale model. For an ensemble of Nb forecast members, states are sampled not only at the analysis time but also M times before and M times after it at intervals of Δt, so that the number of analysis members grows to Nb + 2M×Nb without increasing the number of forecast runs; this reduces the computational cost of the EnKF while preserving analysis accuracy. A series of experiments examined how the sampling interval Δt and the maximum number of samplings M before and after the analysis time affect the assimilation results. The results show that, with suitable choices of Δt and M, the time-expanded sampling EnKF performs very close to a conventional EnKF with (1+2M)×Nb members, demonstrating its feasibility.
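The sampling trick described above is easy to express in code. Below is a minimal sketch (hypothetical array layout and helper name, not from the paper) of how (1 + 2M)×Nb analysis members are assembled from only Nb forecast runs:

```python
import numpy as np

def time_expanded_ensemble(trajectories, t_analysis, dt, M):
    """Build an expanded analysis ensemble from Nb forecast runs.

    trajectories : (Nb, T, n) model states of Nb runs at T stored time levels
    t_analysis   : index of the analysis time in the stored trajectory
    dt           : sampling offset, in stored-time-level units
    M            : number of extra sampling times before and after analysis

    Returns ((1 + 2*M) * Nb, n): each forecast run contributes its state
    at t_analysis and at t_analysis +/- k*dt, for k = 1..M.
    """
    members = []
    for k in range(-M, M + 1):          # offsets ..., -dt, 0, +dt, ...
        members.append(trajectories[:, t_analysis + k * dt, :])
    return np.concatenate(members, axis=0)

# Toy usage: 5 forecast runs, 11 stored times, 3-variable state.
traj = np.random.default_rng(0).normal(size=(5, 11, 3))
ens = time_expanded_ensemble(traj, t_analysis=5, dt=2, M=2)
print(ens.shape)   # (25, 3): (1 + 2*2) * 5 analysis members from 5 runs
```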

2.
By sampling perturbed state vectors from each ensemble prediction run at properly selected time levels in the vicinity of the analysis time, the recently proposed time-expanded sampling approach can enlarge the ensemble size without increasing the number of prediction runs and, hence, can reduce the computational cost of an ensemble-based filter. In this study, this approach is tested for the first time with real radar data from a tornadic thunderstorm. In particular, four assimilation experiments were performed to test the time-expanded sampling method against the conventional ensemble sampling method used by ensemble-based filters. In these experiments, the ensemble square-root filter (EnSRF) was used with 45 ensemble members generated by the time-expanded sampling and conventional sampling from 15 and 45 prediction runs, respectively, and quality-controlled radar data were compressed into super-observations with properly reduced spatial resolutions to improve the EnSRF performance. The results show that the time-expanded sampling approach not only can reduce the computational cost but also can improve the accuracy of the analysis, especially when the ensemble size is severely limited due to computational constraints for real-radar data assimilation. These potential merits are consistent with those previously demonstrated by assimilation experiments with simulated data.
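For reference, the serial EnSRF update that such experiments build on can be sketched as follows (a generic Whitaker-Hamill-style square-root update for a single scalar observation; variable names are illustrative, and this is not the authors' code):

```python
import numpy as np

def ensrf_update_scalar(X, H, y, R):
    """Serial EnSRF update for one observation.

    X : (n, Ne) ensemble of state vectors
    H : (n,) linear observation operator row, so the simulated obs is H @ x
    y : scalar observation
    R : scalar observation-error variance
    """
    Ne = X.shape[1]
    xbar = X.mean(axis=1)
    Xp = X - xbar[:, None]                          # perturbations
    Hx = H @ X                                      # (Ne,) simulated obs
    Hxp = Hx - Hx.mean()
    PHt = Xp @ Hxp / (Ne - 1)                       # P H^T, shape (n,)
    HPHt = Hxp @ Hxp / (Ne - 1)                     # scalar H P H^T
    K = PHt / (HPHt + R)                            # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(R / (HPHt + R)))   # square-root factor
    xbar_a = xbar + K * (y - Hx.mean())             # mean: full gain
    Xp_a = Xp - alpha * np.outer(K, Hxp)            # perturbations: reduced gain
    return Xp_a + xbar_a[:, None]
```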

3.
The computational cost required by the Ensemble Kalman Filter (EnKF) is much larger than that of some simpler assimilation schemes, such as Optimal Interpolation (OI) or three-dimensional variational assimilation (3DVAR). Ensemble optimal interpolation (EnOI), a crudely simplified implementation of EnKF, is sometimes used as a substitute in some oceanic applications and requires much less computational time than EnKF. In this paper, to compromise between computational cost and dynamic covariance, we use the idea of "dressing" a small dynamical ensemble with a larger number of static ensemble members in order to form an approximate dynamic covariance. The term "dressing" means that a dynamical ensemble seed from model runs is perturbed by adding the anomalies of some static ensemble members. This dressing EnKF (DrEnKF for short) scheme is tested in assimilation of real altimetry data in the Pacific using the HYbrid Coordinate Ocean Model (HYCOM) over a four-year period. Ten dynamical ensemble seeds are each dressed by 10 static ensemble members selected from a 100-member static ensemble. Results are compared to two EnKF assimilation runs that use 10 and 100 dynamical ensemble members. Both temperature and salinity fields from the DrEnKF and the EnKF are compared to observations from Argo floats and an OI SST dataset. The results show that the DrEnKF and the 100-member EnKF yield similar root mean square errors (RMSE) at every model level. Error covariance matrices from the DrEnKF and the 100-member EnKF are also compared and show good agreement.
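A minimal sketch of the dressing idea, under assumed array shapes (not the HYCOM implementation):

```python
import numpy as np

def dress_ensemble(dynamic_seeds, static_pool, n_dress, rng):
    """Dress each dynamical seed with anomalies of static ensemble members.

    dynamic_seeds : (Nd, n) states from Nd model runs (e.g. Nd = 10)
    static_pool   : (Ns, n) large static ensemble (e.g. Ns = 100)
    n_dress       : static members drawn per seed (e.g. 10)

    Returns (Nd * n_dress, n) dressed members: seed + static anomaly,
    approximating a dynamic covariance at the cost of only Nd model runs.
    """
    anomalies = static_pool - static_pool.mean(axis=0)
    members = []
    for seed in dynamic_seeds:
        idx = rng.choice(len(static_pool), size=n_dress, replace=False)
        members.append(seed + anomalies[idx])
    return np.vstack(members)

rng = np.random.default_rng(1)
dressed = dress_ensemble(rng.normal(size=(10, 50)),
                         rng.normal(size=(100, 50)), n_dress=10, rng=rng)
print(dressed.shape)   # (100, 50): 100 members from only 10 model runs
```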

4.
Considering the observational error, the truncation error and the requirements of numerical weather prediction, three formulas are derived for determining, in an optimal sense, the distance between two adjacent stations d1, the observational vertical increment Δp1 and the observational time interval Δt1. Since they depend on the shortest wavelength of concern and the ratio of the maximum observational error to the wave amplitude, the results differ considerably for systems of different scales. For the filtered model, the values of d1, Δp1 and Δt1 are in general close to those required in the MANUAL on the GOS published in 1980 by WMO. For the primitive equation model, however, the estimated value of Δt1 is much smaller than that required in the filtered-model case. Therefore, it is improper to study fast-moving and rapidly developing atmospheric processes on the basis of conventional observations alone. It appears necessary to establish an optimal composite observational system combining surface-based and space-based components.

5.
We perform large-eddy simulation (LES) of a moderately convective atmospheric boundary layer (ABL) using a prognostic subfilter-scale (SFS) model obtained by truncating the full conservation equations for the SFS stresses and fluxes. The truncated conservation equations contain production mechanisms that are absent in eddy-diffusivity closures and, thus, have the potential to better parametrize the SFS stresses and fluxes. To study the performance of the conservation-equation-based SFS closure, we compare LES results from the surface layer with observations from the Horizontal Array Turbulence Study (HATS) experiment. For comparison, we also show LES results obtained using an eddy-diffusivity closure. Following past studies, we plot various statistics versus the non-dimensional parameter Λw/Δ, where Λw is the wavelength corresponding to the peak in the vertical velocity spectrum and Δ is the filter width. The LES runs are designed using different domain sizes, filter widths and surface fluxes, in order to replicate partly the conditions in the HATS experiment. Our results show that statistics from the different LES runs collapse reasonably and exhibit clear trends when plotted against Λw/Δ. The trends exhibited by the production terms in the modelled SFS conservation equations are qualitatively similar to those seen in the HATS data with the exception of SFS buoyant production, which is underpredicted. The dominant production terms in the modelled SFS stress and flux budgets obtained from LES are found to approach asymptotically constant values at low Λw/Δ. For the SFS stress budgets, we show that several of these asymptotes are in good agreement with their corresponding theoretical values in the limit Λw/Δ → 0. The modelled SFS conservation equations yield trends in the mean values and fluctuations of the SFS stresses and fluxes that agree better with the HATS data than do those obtained using an eddy-diffusivity closure. They, however, considerably underpredict the level of SFS anisotropy near the wall when compared to observations, which could be a consequence of shortcomings in the model used for the pressure destruction terms. Finally, we address the computational cost incurred by the use of additional prognostic equations.
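For contrast with the prognostic SFS model, the eddy-diffusivity baseline can be sketched in its common Smagorinsky form (a generic sketch; the paper's exact closure and constants are not specified in the abstract):

```python
import numpy as np

def smagorinsky_sfs_stress(grad_u, delta, Cs=0.16):
    """Eddy-diffusivity (Smagorinsky-type) SFS stress at one grid point.

    grad_u : (3, 3) resolved velocity-gradient tensor dU_i/dx_j
    delta  : filter width
    Cs     : Smagorinsky constant (0.16 is a common choice, assumed here)

    tau_ij = -2 * nu_t * S_ij, with nu_t = (Cs*delta)**2 * |S| and
    |S| = sqrt(2 * S_ij * S_ij); only the resolved strain enters, with
    none of the production mechanisms the prognostic closure retains.
    """
    S = 0.5 * (grad_u + grad_u.T)           # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))    # strain-rate magnitude |S|
    nu_t = (Cs * delta) ** 2 * S_mag        # eddy viscosity
    return -2.0 * nu_t * S                  # modelled SFS stress tensor
```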

6.
刘德强  冯杰  李建平  王金成 《大气科学》2015,39(6):1165-1178
Based on the GRAPES regional mesoscale numerical prediction system (GRAPES_MESO), the effects of the time step and the spatial resolution on forecast performance were examined, under fixed model physics, for the geopotential height H, temperature T, zonal wind U and meridional wind V at 700 hPa, 500 hPa and 200 hPa, and for surface precipitation. The results show that, at a fixed spatial resolution of 0.3°×0.3°, an optimal time step that maximizes forecast skill exists for almost every variable at every level, providing preliminary evidence that the optimal time-step theory applies to complex systems of partial differential equations. The forecasts using the optimal time step (240 s) at 0.3°×0.3° resolution were then compared with the current operational configuration (0.15°×0.15° resolution with a 90-s time step); the former achieves higher forecast skill for H, T, U, V and surface precipitation, indicating that higher spatial resolution does not necessarily yield better forecasts.
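The experimental design, scanning candidate time steps at fixed resolution for the step that maximizes forecast skill, can be sketched as follows (run_model and the skill score are illustrative stand-ins, not the GRAPES_MESO interface):

```python
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Anomaly correlation coefficient, a standard forecast-skill score."""
    f = np.ravel(forecast - climatology)
    a = np.ravel(verification - climatology)
    return (f @ a) / np.sqrt((f @ f) * (a @ a))

def best_time_step(run_model, candidate_steps, verification, climatology):
    """Scan candidate time steps at fixed resolution and return the one
    with the highest skill; run_model(dt) is a hypothetical callable that
    returns the forecast field produced with time step dt."""
    scores = {dt: anomaly_correlation(run_model(dt), verification, climatology)
              for dt in candidate_steps}
    return max(scores, key=scores.get), scores
```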

7.
It is essential to quantify the background reactivity of smog chambers, since this might be the major limitation of experiments carried out at the low pollutant concentrations typical of the polluted atmosphere. Detailed investigation of three chamber experiments at zero NOx in the European Photoreactor (EUPHORE) was carried out by means of rate-of-production analysis and two uncertainty analysis tools: local uncertainty analysis and Monte Carlo simulations with Latin hypercube sampling. The chemical mechanism employed was that for methane plus the inorganic subset of the Master Chemical Mechanism (MCMv3.1). Newly installed instruments in EUPHORE allowed the measurement of nitrous acid and formaldehyde at sub-ppb concentrations with high sensitivity. The presence of HONO and HCHO during the experiments could be explained only by processes taking place on the FEP Teflon walls. The HONO production rate can be described by the empirical equation W(HONO)_dry = a × j(NO2) × exp(−T0/T) in the low relative humidity region (RH < 2%, a = 7.3×10^21 cm^−3, T0 = 8945 K), and by the equation W(HONO)_humid = W(HONO)_dry + j(NO2) × b × RH^q in the higher relative humidity region (2% < RH < 15%, b = 5.8×10^8 cm^−3, q = 0.36, where RH is the relative humidity in percent). For HCHO the expression W(HCHO) = c × j(NO2) × exp(−T0/T) is applicable (c = 3.1×10^17 cm^−3 and T0 = 5686 K). In the 0–15% relative humidity range, OH production from HONO generated at the wall is about a factor of two higher than that from the photolysis of 100 ppb ozone. The effect of added NO2 was found to be consistent with the dark HONO formation rate coefficient of MCMv3.1.
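The empirical wall-source expressions quoted above translate directly into code; the constants are those reported in the abstract, and the units (molecule cm^−3 s^−1, with j(NO2) in s^−1) are assumed from context:

```python
import numpy as np

# Constants as reported in the abstract.
A_HONO, T0_HONO = 7.3e21, 8945.0      # dry HONO term
B_HONO, Q_HONO = 5.8e8, 0.36          # humid HONO term
C_HCHO, T0_HCHO = 3.1e17, 5686.0      # HCHO term

def w_hono(j_no2, T, RH):
    """Wall production rate of HONO; RH in percent, fitted for RH < 15%."""
    dry = A_HONO * j_no2 * np.exp(-T0_HONO / T)
    if RH < 2.0:
        return dry
    return dry + j_no2 * B_HONO * RH ** Q_HONO

def w_hcho(j_no2, T):
    """Wall production rate of HCHO."""
    return C_HCHO * j_no2 * np.exp(-T0_HCHO / T)

# Illustrative conditions (j(NO2) and T are placeholders, not measured values).
print(w_hono(8e-3, 298.0, 10.0), w_hcho(8e-3, 298.0))
```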

8.
An approach for extracting useful monthly dynamical prediction information from ensemble forecasts is studied. Extended-range ensemble forecasts of the 500 hPa height field (8 members, with initial perturbations from the lagged average forecast (LAF) at 0000, 0600, 1200 and 1800 GMT on two consecutive days) were produced with the global spectral model (T63L16) from January to May 1997 and provided by the National Climate Center of China. The relationship between the ensemble spread, measured by the root-mean-square deviation of the ensemble members from the ensemble mean, and forecast skill (the anomaly correlation, or the root-mean-square distance between the ensemble mean forecast and the observation) is significant. The ensemble spread can be used to estimate the number of useful forecast days N for the best estimate of the 30-day mean. A weighted-mean approach based on the ensemble spread is therefore put forward for monthly dynamical prediction. The anomaly correlation of the monthly mean weighted by the ensemble spread is higher than that of both the arithmetic mean and the linear weighted mean. Better results for the monthly mean circulation and anomaly are obtained from the spread-weighted mean. Supported by the Excellent National State Key Laboratory Project (49823002), the National Key Project ‘Study on Chinese Short-Term Climate Forecast System’ (96-908-02) and the IAP Innovation Foundation (8-1308). The data were provided through the National Climate Center of China. The authors wish to thank Ms. Chen Lijuan for her assistance.
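A sketch of the spread diagnostic and one plausible reading of the spread-based weighting (the abstract does not state the exact weights, so the inverse-spread form below is an assumption):

```python
import numpy as np

def ensemble_spread(members):
    """Root-mean-square deviation of the members about the ensemble mean.

    members : (Ne, T) daily field-averaged forecasts from Ne members
    Returns the spread for each of the T forecast days.
    """
    dev = members - members.mean(axis=0, keepdims=True)
    return np.sqrt((dev ** 2).mean(axis=0))

def spread_weighted_mean(ens_mean_daily, spread):
    """Weight each forecast day by the inverse of the ensemble spread
    before averaging (an assumed weighting; the paper's exact
    spread-based weights are not given in the abstract)."""
    w = 1.0 / np.maximum(spread, 1e-12)
    return np.sum(w * ens_mean_daily) / np.sum(w)
```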

9.
At the atmosphere simulation chamber SAPHIR in Jülich, both Laser-Induced Fluorescence Spectroscopy (LIF) and Long-Path Differential Optical Laser Absorption Spectroscopy (DOAS) are operational for the detection of OH radicals at tropospheric levels. The two spectroscopic techniques were compared within the controlled environment of SAPHIR on the basis of all simultaneous measurements acquired in 2003 (13 days). Hydroxyl radicals were scavenged by added CO during four of these days in order to check experimentally the calculated precisions at the detection limit. LIF measurements have a higher precision (σ = 0.88×10^6 cm^−3) and better time resolution (Δt = 60 s), but the DOAS method (σ = 1.24×10^6 cm^−3, Δt = 135 s) is regarded as the primary standard for comparisons because of its good accuracy. A high correlation coefficient of r = 0.95 was found for the whole data set, highlighting the advantage of using a simulation chamber. The data set consists of two groups. The first one includes 3 days on which the LIF measurements yield (1–2)×10^6 cm^−3 higher OH concentrations than observed by the DOAS instrument. The experimental conditions during these days are characterized by increased NOx concentration and a small dynamic range in OH. Excellent agreement is found within the other group of 6 days. The regression to the combined data of this large group yields unity slope without a significant offset.

10.
A study of the oxidation mechanism of N-methyl pyrrolidinone (C5H9NO, NMP) initiated by hydroxyl radicals was made at EUPHORE at atmospheric pressure ((1000 ± 10) mbar of air) and ambient temperature (T = 300 ± 5 K). The main products were N-methyl succinimide (NMS), (52 ± 4)%, and N-formyl pyrrolidinone (FP), (23 ± 9)%. The relative rate technique was used to determine the rate constants of OH with NMP, NMS and FP; the measured values were (in units of cm^3 molecule^−1 s^−1): kNMP = (2.2 ± 0.4) × 10^−11, kNMS = (1.4 ± 0.3) × 10^−12 and kFP = (6 ± 1) × 10^−12. The results are presented and discussed in terms of the atmospheric impact.
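The relative rate technique reduces to a regression of one log-decay on another; a generic sketch (not the EUPHORE analysis code):

```python
import numpy as np

def relative_rate_constant(c_sample, c_ref, k_ref):
    """Generic relative-rate analysis.

    ln([X]_0/[X]_t) = (k_X/k_ref) * ln([ref]_0/[ref]_t), so a
    zero-intercept regression of the sample's log decay against the
    reference's gives k_X/k_ref; multiplying by the known k_ref gives k_X.
    """
    cs, cr = np.asarray(c_sample), np.asarray(c_ref)
    x = np.log(cr[0] / cr)            # reference log decay
    y = np.log(cs[0] / cs)            # sample log decay
    slope = (x @ y) / (x @ x)         # least squares through the origin
    return slope * k_ref
```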

11.
If a spot of tracer is released into a turbulent flow, the peak concentration at some subsequent time will initially be much greater than that implied by a solution for the ensemble average concentration at fixed points. For two-dimensional turbulence three areas may be defined: (1) an area Ad related to the ensemble average concentration field; (2) an area Ap defined in terms of the relative dispersion of particles seeded into the patch after a short initial diffusion time; and (3) the area At occupied by tracer. It is argued that Ad grows linearly with time, whereas Ap and At grow exponentially; Ap faster than At. Thus, the concentration field is significantly streaky, even within the particle domain, until At becomes comparable with Ad. The time taken for this to occur is estimated; after this time, fluctuations about the ensemble average concentration field should not be greater than those given by a simple mixing length argument. In three-dimensional turbulence the volume Vt of the tracer domain grows much more rapidly than the volume Vp of the particle domain if the merging of streaks is ignored. However, Vt cannot be greater than Vp so streaks must merge and Vp can be used to provide a rough estimate of peak concentration, or concentration variance.

12.
A coupled atmosphere-ocean model developed at the Institute for Space Studies at NASA Goddard Space Flight Center (Russell et al., 1995) was used to verify the validity of the Haney-type surface thermal boundary condition, which linearly connects the net downward surface heat flux Q to the air-sea temperature difference ΔT through a relaxation coefficient k. The model was initialized from the National Centers for Environmental Prediction (NCEP) atmospheric observations for 1 December 1977, and from the National Ocean Data Center (NODC) global climatological mean December temperature and salinity fields at 1°×1° resolution. The time step is 7.5 minutes. We integrated the model for 450 days and obtained a complete model-generated global data set of the daily mean downward net surface flux Q, surface air temperature TA, and sea surface temperature To. We then calculated the cross-correlation coefficients (CCC) between Q and ΔT. The ensemble mean CCC fields show (a) no correlation between Q and ΔT in the equatorial regions, and (b) evident correlation (CCC ≥ 0.7) between Q and ΔT in the middle and high latitudes. Additionally, a variance analysis shows that when k = 120 W m^−2 K^−1, the two standard deviations, σQ and σkΔT, are quite close in the middle and high latitudes. These results agree well with a previous study (Chu et al., 1998) analyzing the NCEP re-analyzed surface data, except that a smaller value of k (80 W m^−2 K^−1) was found in the previous study.
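The diagnostic itself is a simple time-series computation; a sketch under assumed daily series at one grid point:

```python
import numpy as np

def haney_check(Q, Ta, To, k=120.0):
    """Correlate daily net downward surface heat flux with the air-sea
    temperature difference at one grid point.

    Q, Ta, To : (T,) daily-mean time series from the model run
    k         : relaxation coefficient, W m^-2 K^-1 (120 per the study)

    Returns the cross-correlation coefficient between Q and dT, and the
    two standard deviations (sigma_Q, sigma_k_dT) whose closeness in the
    middle and high latitudes supports the Haney-type condition Q = k*dT.
    """
    dT = np.asarray(Ta) - np.asarray(To)
    ccc = np.corrcoef(Q, dT)[0, 1]
    return ccc, np.std(Q, ddof=1), np.std(k * dT, ddof=1)
```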

13.
Changes in land surface driving variables, predicted by GCM transient climate change experiments, are confirmed to exhibit linearity in the global mean land temperature anomaly, ΔT_l. The associated constants of proportionality retain the spatial and seasonal characteristics of the GCM output, whilst ΔT_l is related to radiative forcing anomalies. The resultant analogue model is shown to be robust between GCM runs and as such provides a computationally efficient technique for extending existing GCM experiments to a large range of climate change scenarios. As an example impacts study, the analogue model is used to drive a terrestrial ecosystem model, and the predicted changes in terrestrial carbon are found to be similar to those obtained when using GCM anomalies directly.
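The analogue model amounts to pattern scaling; a minimal sketch with a hypothetical grid:

```python
import numpy as np

def analogue_anomaly(pattern, dTl):
    """Analogue (pattern-scaling) model: the anomaly of each land surface
    driving variable is its GCM-diagnosed constant of proportionality
    times the global mean land temperature anomaly dTl."""
    return pattern * dTl

# Hypothetical per-month, per-grid-cell regression slopes from a GCM run:
pattern = np.random.default_rng(2).normal(size=(12, 73, 96))
fields = analogue_anomaly(pattern, dTl=2.5)   # scenario with dTl = 2.5 K
```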

14.
This study examines the performance of coupling the deterministic four-dimensional variational assimilation system (4DVAR) with an ensemble Kalman filter (EnKF) to produce a superior hybrid approach for data assimilation. The coupled assimilation scheme (E4DVAR) benefits from using the state-dependent uncertainty provided by EnKF while taking advantage of 4DVAR in preventing filter divergence: the 4DVAR analysis produces posterior maximum likelihood solutions through minimization of a cost function about which the ensemble perturbations are transformed, and the resulting ensemble analysis can be propagated forward both for the next assimilation cycle and as a basis for ensemble forecasting. The feasibility and effectiveness of this coupled approach are demonstrated in an idealized model with simulated observations. It is found that the E4DVAR is capable of outperforming both 4DVAR and the EnKF under both perfect- and imperfect-model scenarios. The performance of the coupled scheme is also less sensitive to either the ensemble size or the assimilation window length than those for standard EnKF or 4DVAR implementations.

15.
The SF6 gas tracer observations for puffs released near the ground during the Joint Urban 2003 (JU2003) urban dispersion experiment in Oklahoma City have been analysed. The JU2003 observations, at distances of about 100–1,100 m from the source, show that, at small times, when the puff is still within the built-up downtown domain, the standard deviation of the concentration time series, σt, is influenced by the initial puff spread due to buildings near the source and by hold-up in the wakes of large buildings at the sampler locations. This effect is parameterised by assuming an initial σt0 of about 42 s, leading to a comprehensive similarity formula: σt = 42 + 0.1t. The second term, 0.1t, is consistent with an earlier similarity relation, σt = 0.1t, derived from puff observations in many experiments over rural terrain. The along-wind dispersion coefficient, σx, is assumed to equal σt·u, in which u is the puff speed calculated as the distance from the source to the sampler, x, divided by the time after the release at which the maximum concentration is observed at the sampler. σx can be expressed as σx = σx0 + 0.14x, with an initial σx0 of 45 m. This initial σx0 agrees with the suggestion of an initial plume spread of about 40 m made by McElroy and Pooler from analysis of the 1960s' St. Louis urban dispersion experiment. The puff speeds, u, are initially only about 20% of the observed wind speed, averaged over about 80 street-level and rooftop anemometers in the city, but approach the mean observed wind speed as the puffs grow vertically. The scatter in the σt data is within about a factor of two or three at any given travel time. The maximum σt is about 250 s, and the maximum duration of the puff over the sampler, Dt, sometimes called the retention time, is about 1,100 s or 18 min for these puffs and distances.
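The two similarity formulas are directly usable; the constants below are the fitted values quoted in the abstract:

```python
def sigma_t(t):
    """Concentration time-series spread (s) of a JU2003 puff at travel
    time t (s): a 42-s initial spread from building effects plus the
    0.1*t rural similarity term, as fitted in the study."""
    return 42.0 + 0.1 * t

def sigma_x(x):
    """Along-wind dispersion coefficient (m) at distance x (m):
    sigma_x = sigma_x0 + 0.14*x with sigma_x0 = 45 m."""
    return 45.0 + 0.14 * x

# The two forms are linked through sigma_x ~ sigma_t * u, where u is the
# puff speed (source-sampler distance over peak-concentration arrival time).
print(sigma_t(600.0), sigma_x(1000.0))   # 102.0 s and 185.0 m
```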

16.
Extending an earlier study, the best-track minimum sea level pressure (MSLP) data are assimilated for landfalling Hurricane Ike (2008) using an ensemble Kalman filter (EnKF), in addition to data from two coastal ground-based Doppler radars, at a 4-km grid spacing. Treated as a sea level pressure observation, the MSLP assimilation by the EnKF enhances the hurricane warm-core structure and results in a stronger and deeper analyzed vortex than that in the GFS (Global Forecast System) analysis; it also improves the subsequent 18-h hurricane intensity and track forecasts. With a 2-h total assimilation window length, the assimilation of MSLP data interpolated to 10-min intervals results in more balanced analyses with smaller subsequent forecast error growth and better intensity and track forecasts than when the data are assimilated every 60 minutes. Radar data are always assimilated at 10-min intervals. For both intensity and track forecasts, assimilating MSLP only outperforms assimilating radar reflectivity (Z) only. For the intensity forecast, assimilating MSLP at 10-min intervals outperforms radar radial wind (Vr) data (assimilated at 10-min intervals), but assimilating MSLP at 60-min intervals fails to beat Vr data. For the track forecast, MSLP assimilation has a slightly (noticeably) larger positive impact than Vr (Z) data. When Vr or Z is combined with MSLP, both intensity and track forecasts are improved more than with the assimilation of either observation type individually. When the total assimilation window length is reduced to 1 h or less, the assimilation of MSLP alone, even at 10-min intervals, produces poorer 18-h intensity forecasts than assimilating Vr only, indicating that many assimilation cycles are needed to establish balanced analyses when MSLP data alone are assimilated; this is due to the very limited pieces of information that MSLP data provide.

17.
We analyze climate change in a cost-benefit framework, using the emission and concentration profiles of Wigley et al. (Nature 379(6562):240–243, 1996). They present five scenarios that cover the period 1990–2300 and are designed to reach stabilized concentration levels of 350, 450, 550, 650 and 750 ppmv, respectively. We assume that the damage cost in each year t is proportional to the corresponding gross world product and the square of the atmospheric temperature increase (ΔT(t)). The latter is estimated with a simple two-box model (representing the atmosphere and deep ocean). Coupling the damage cost with the abatement cost, we interpolate between the five scenarios to find the one that is optimal in the sense of minimizing the sum of discounted annual (abatement plus damage) costs over a time horizon of N years. Our method is simpler than ‘traditional’ models with the same purpose, and thus allows a more transparent sensitivity study with respect to the uncertainties of all parameters involved. We report our central result in terms of the stabilized emission level E_o and concentration level p_o (i.e. their values at t = 300 years) of the optimal scenario. For the central parameter values (that is, N = 150 years, a discount rate r_dis = 2%/year and a growth rate r_gro = 1%/year of gross world product) we find E_o = 8.0 GtCO2/year and p_o = 496 ppmv. Varying the parameters over a wide range, we find that the optimal emission level remains within a remarkably narrow range, from about 6.0 to 12 GtCO2/year for all plausible parameter values. To assess the significance of the uncertainties we focus on the social cost penalty, defined as the extra cost incurred by society relative to the optimum if one makes the wrong choice of the emission level as a result of erroneous damage and abatement cost estimates. In relative terms the cost penalty turns out to be remarkably insensitive to errors. For example, if the true damage costs are three times larger or smaller than the estimate, the total social cost of global climate change increases by less than 20% above its minimum at the true optimal emission level. Because of the enormous magnitude of the total costs involved with climate change (mitigation), however, even a small relative error implies large additional expenses in absolute terms. To evaluate the benefit of reducing cost uncertainties, we plot the cost penalty as a function of the uncertainty in relative damage and abatement costs, expressed as geometric standard deviation and standard deviation, respectively. If continued externality analysis reduces the geometric standard deviation of relative damage cost estimates from 5 to 4, the benefit is 0.05% of the present value G_tot of total gross world product over 150 years (about $3.9 × 10^15), and if further research reduces the standard deviation of relative abatement costs from 1 to 0.5, the benefit is 0.03% of G_tot.
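The optimization criterion can be sketched as follows, assuming the quadratic damage form stated in the abstract (the scenario inputs and the damage coefficient are placeholders):

```python
import numpy as np

def total_discounted_cost(dT, abatement_cost, damage_coeff,
                          r_dis=0.02, r_gro=0.01, gwp0=1.0, N=150):
    """Present value of annual (abatement + damage) costs over N years.

    dT             : (>=N,) atmospheric temperature increase path, K
    abatement_cost : (>=N,) annual abatement cost of the scenario
    damage_coeff   : placeholder scaling; damage_t is assumed to be
                     damage_coeff * GWP_t * dT_t**2, per the abstract

    Central-case discount rate r_dis and GWP growth rate r_gro as in the
    study; the optimal scenario is the one minimizing this total.
    """
    t = np.arange(N)
    gwp = gwp0 * (1.0 + r_gro) ** t                    # growing gross world product
    damage = damage_coeff * gwp * np.asarray(dT)[:N] ** 2
    discount = (1.0 + r_dis) ** (-t)
    return float(np.sum(discount * (np.asarray(abatement_cost)[:N] + damage)))
```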

18.
With a limited ensemble size, some samples do not adequately reflect the true atmospheric state and can in turn degrade forecast performance. This study explored the feasibility of sample optimization using the ensemble Kalman filter (EnKF) for a simulation of Super Typhoon Rammasun, which made landfall in southern China in July 2014. Under the premise of sufficient ensemble spread, keeping samples with a good fit to observations and eliminating those with a poor fit can improve the performance of the EnKF. In the sample optimization, states were selected based on the spatial correlation between each ensemble state and the observations. The method discarded ensemble states that were less representative and, to maintain the overall ensemble size, generated new ensemble states by reproducing them from well-fitting ensemble states with added random noise. Sample selection was performed based on radar echo data. Results showed that applying the EnKF with optimized samples improved the estimated track, intensity, precipitation distribution, and inner-core structure of Typhoon Rammasun. The authors therefore propose that distinguishing between samples with good and poor fits is vital for ensemble prediction, suggesting that sample optimization is necessary for the effective use of the EnKF.
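A sketch of the selection-and-regeneration step as described (keep_frac and the noise amplitude are illustrative knobs, not values from the paper):

```python
import numpy as np

def optimize_samples(states, sim_obs, obs, keep_frac=0.5, noise=0.1, rng=None):
    """Sample optimization before an EnKF update.

    states  : (Ne, n) ensemble state vectors
    sim_obs : (Ne, ...) simulated radar-echo fields, one per member
    obs     : observed radar-echo field

    Members are ranked by the spatial correlation of their simulated
    field with the observations; the poorly fitting fraction is dropped
    and the ensemble size is restored by cloning well-fitting members
    with small random noise.
    """
    rng = rng or np.random.default_rng()
    states = np.asarray(states)
    corr = np.array([np.corrcoef(np.ravel(f), np.ravel(obs))[0, 1]
                     for f in sim_obs])
    order = np.argsort(corr)[::-1]                  # best fit first
    n_keep = max(1, int(len(states) * keep_frac))
    good = states[order[:n_keep]]
    clones = good[rng.integers(0, n_keep, len(states) - n_keep)]
    clones = clones + noise * rng.standard_normal(clones.shape)
    return np.vstack([good, clones])
```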

19.
Using an incomplete third-order cumulant expansion method (ICEM) and standard second-order closure principles, we show that the imbalance in the stress contribution of sweeps and ejections to momentum transfer (ΔS_o) can be predicted from measured profiles of the Reynolds stress and the longitudinal velocity standard deviation for different boundary-layer regions. The ICEM approximation is independently verified using flume data, atmospheric surface-layer measurements above grass and ice-sheet surfaces, and measurements within the canopy sublayer of maturing Loblolly pine and alpine hardwood forests. The model's skill in discriminating whether sweeps or ejections dominate momentum transfer (i.e. the sign of ΔS_o) agrees well with wind-tunnel measurements in the outer and surface layers, and with flume measurements within the canopy sublayer for both sparse and dense vegetation. The broader impact of this work is that the “genesis” of the imbalance in ΔS_o is primarily governed by how boundary conditions impact the first and second moments.

20.
The spatial peak surface shear stress τ_S″ on the ground beneath vegetation canopies is responsible for the onset of particle entrainment, and its precise and accurate prediction is essential when modelling soil, snow or sand erosion. This study investigates shear-stress partitioning, i.e. the fraction of the total fluid stress on the entire canopy that acts directly on the surface, for live vegetation canopies (plant species: Lolium perenne) using measurements in a controlled wind-tunnel environment. Rigid, non-porous wooden blocks were additionally tested in place of the plants for the purpose of comparison, since previous wind-tunnel studies used exclusively artificial plant imitations in their experiments on shear-stress partitioning. The drag partitioning model presented by Raupach (Boundary-Layer Meteorol 60:375–395, 1992) and Raupach et al. (J Geophys Res 98:3023–3029, 1993), which allows the prediction of the total shear stress τ on the entire canopy as well as the peak (τ_S″/τ)^(1/2) and the average (τ_S′/τ)^(1/2) shear-stress ratios, is tested against measurements to determine the model parameters and the model's ability to account for shape differences among various roughness elements. It was found that the constant c, needed to determine the total stress τ and unspecified to date, can be assigned a value of about c = 0.27. Values for the model parameter m, which accounts for the difference between the spatial surface average τ_S′ and the peak τ_S″ shear stress, are difficult to determine because m is a function of the roughness density, the wind velocity and the roughness-element shape. A new parameter a is suggested as a substitute for m. This a parameter is found to be more nearly universal and solely a function of the roughness-element shape. It is able to predict the peak surface shear stress accurately. Finally, a method is presented to determine the new a parameter for different kinds of roughness elements.
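For orientation, the Raupach et al. (1993) partition that the study tests is commonly written as sketched below; the default parameter values here are illustrative and are not the wind-tunnel fits of this study:

```python
def raupach_stress_ratio(lam, beta=90.0, sigma=1.0, m=0.5):
    """Raupach et al. (1993) shear-stress partition (a sketch; parameter
    values are illustrative placeholders).

    lam   : roughness density (frontal area index) of the elements
    beta  : ratio of element to surface drag coefficients, C_R/C_S
    sigma : basal-to-frontal area ratio of the elements
    m     : peak-vs-average factor (m = 1 recovers the spatial average)

    Returns (tau_S''/tau)**0.5, the normalized peak surface shear stress.
    """
    return (1.0 / ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam))) ** 0.5

print(raupach_stress_ratio(0.05))   # sparse canopy example
```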
