Similar Documents
19 similar documents found
1.
Estuarine tidal processes are influenced by a combination of factors such as upstream river discharge and offshore tidal waves; the dynamic mechanisms are complex and tidal level forecasting is difficult. This paper proposes a hybrid model combining non-stationary tidal harmonic analysis (NS_TIDE) and a long short-term memory (LSTM) neural network for short-term (12–48 h) forecasting of estuarine tidal levels. The model first applies non-stationary harmonic analysis to observed estuarine tidal data and obtains a time series of analysis errors by comparison with the observations; this series is used as input to the LSTM network, which learns and predicts the tidal level forecast errors for the next 12–48 h, and these predicted errors are then used to correct the NS_TIDE predictions in real time. The model was tested by forecasting the 2020 tidal level process in the Changjiang (Yangtze) Estuary. The results show that, compared with the NS_TIDE model alone, the root mean square error (RMSE) of the hybrid model's 12 h, 24 h, 36 h and 48 h short-term water level forecasts was reduced by up to 0.16 m, 0.15 m, 0.14 m and 0.12 m, respectively; for the prediction of the highest water level at the Nanjing station in 2020, the forecast error of the NS_TIDE model was 0.64 m, whereas that of the hybrid model was only 0.10 m.
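As an illustration of the error-correction idea this abstract describes (not the authors' NS_TIDE implementation), the sketch below trains a small LSTM on the residual between a synthetic "observed" water level and a plain harmonic prediction, then adds the predicted residual to the harmonic forecast. PyTorch is assumed available; the network size, window lengths and all data are illustrative assumptions only.

    # Hypothetical sketch of harmonic prediction + LSTM residual correction (synthetic data).
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)
    t = np.arange(0, 24 * 120, 1.0)                      # hourly samples, 120 days

    # "Observed" level: M2-like + S2-like tide plus a slowly varying surge term
    tide = 1.2 * np.cos(2 * np.pi * t / 12.42) + 0.4 * np.cos(2 * np.pi * t / 12.0)
    surge = np.convolve(rng.normal(0, 0.05, t.size), np.ones(24) / 24, mode="same")
    obs = tide + surge

    harmonic_pred = tide                                  # stand-in for the NS_TIDE output
    resid = obs - harmonic_pred                           # analysis-error series fed to the LSTM

    def windows(x, n_in=48, n_out=12):
        X = np.array([x[i:i + n_in] for i in range(x.size - n_in - n_out)])
        y = np.array([x[i + n_in:i + n_in + n_out] for i in range(x.size - n_in - n_out)])
        return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
                torch.tensor(y, dtype=torch.float32))

    X, y = windows(resid)

    class ResidualLSTM(nn.Module):
        def __init__(self, hidden=32, n_out=12):
            super().__init__()
            self.lstm = nn.LSTM(1, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_out)
        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out[:, -1])                  # predict the next n_out residuals

    model = ResidualLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(20):                               # short full-batch training loop
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    # Real-time correction: the predicted residual is added to the harmonic forecast.
    t_future = t[-1] + 1 + np.arange(12)
    harmonic_future = (1.2 * np.cos(2 * np.pi * t_future / 12.42)
                       + 0.4 * np.cos(2 * np.pi * t_future / 12.0))
    current = torch.tensor(resid[-48:], dtype=torch.float32).reshape(1, -1, 1)
    with torch.no_grad():
        resid_hat = model(current).numpy().ravel()
    corrected_12h = harmonic_future + resid_hat           # hybrid 12 h forecast
    print("corrected 12 h forecast (m):", np.round(corrected_12h, 2))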

2.
Neglecting observation errors, the water level can be decomposed into the tide and the residual water level; the latter exhibits strong spatial correlation and non-stationary characteristics and is the main factor limiting water level forecast accuracy. Port engineering and shipping scheduling require real-time, high-precision water level forecasts, which places higher demands on residual water level prediction models; in addition, a high-precision residual water level model can reduce the number of tide gauge stations that need to be deployed. To address the limited accuracy of short-term residual water level prediction, this paper applies ensemble empirical mode decomposition (EEMD) to the residual water level to obtain its intrinsic mode functions (IMFs) in the time domain; the spectral characteristics of each IMF are analyzed with the fast Fourier transform (FFT); a BP neural network is then trained on each IMF to predict the residual water level 6 h, 12 h and 24 h ahead. Predictions for residual water level data from three representative tide gauge stations in the lower Columbia River estuary show that the prediction accuracy reaches the centimeter level within 6 h and 12 h and approaches the centimeter level within 24 h, demonstrating the feasibility of this hybrid model for short-term residual water level prediction.
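A minimal sketch of the EEMD–FFT–BP pipeline described above, under stated assumptions: the PyEMD package is assumed for EEMD, sklearn's MLPRegressor stands in for the BP network, the residual water level series is synthetic, and window lengths and network size are illustrative.

    # Hypothetical EEMD + FFT + per-IMF network sketch (synthetic residual water level).
    import numpy as np
    from PyEMD import EEMD                                      # assumed dependency
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    t = np.arange(0, 24 * 60, 1.0)                              # 60 days of hourly residuals
    resid = 0.2 * np.sin(2 * np.pi * t / 48) + 0.05 * rng.normal(size=t.size)
    resid = np.convolve(resid, np.ones(6) / 6, mode="same")

    imfs = EEMD(trials=50).eemd(resid)                          # ensemble EMD into IMFs

    lead, lookback = 6, 48                                      # predict 6 h ahead from 48 h history
    forecast = np.zeros(lead)
    for imf in imfs:
        # FFT step: characterise the dominant period of this IMF (illustrative only).
        spectrum = np.abs(np.fft.rfft(imf))
        freqs = np.fft.rfftfreq(imf.size, d=1.0)
        peak_period = 1.0 / freqs[np.argmax(spectrum[1:]) + 1]  # could guide the lookback choice

        X = np.array([imf[i:i + lookback] for i in range(imf.size - lookback - lead)])
        y = np.array([imf[i + lookback:i + lookback + lead] for i in range(imf.size - lookback - lead)])
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

        forecast += net.predict(imf[-lookback:][None, :])[0]    # IMF-wise forecasts are summed

    print("6 h residual water level forecast (m):", np.round(forecast, 3))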

3.
Almost all tide tables and tidal current tables published around the world are computed by the harmonic method, and the errors of tidal predictions made in this way have been studied by many researchers; in China, work has also been done on the accuracy of harmonic constants and the selection of constituents, as well as on tidal prediction methods for shallow-water ports. Our institute, together with colleagues from the tidal current group of the Information Research Institute of the State Oceanic Administration, has done some work in this area, improving the accuracy of tidal prediction to a certain extent and meeting practical needs. However, the reduction in the tidal prediction residual (i.e., the difference between the observed water level and the predicted tidal height) is small compared with the residual itself. For example, at the shallow-water port of Wusong, when the 1970 tidal levels were predicted from the analysis of 1963 observed water levels, the low water time error exceeded half an hour in 49% of cases with Doodson's method, but in only 9% of cases with the shallow-water quasi-harmonic constituent method; the standard deviation of the residual was 20.6 cm for the former and about 19.7 cm for the latter, a difference of only 0.9 cm, so the reduction in the overall residual is still small. The hourly water levels recorded at a tide gauge station can in fact be regarded as the sum of periodic and non-periodic water levels: the periodic part is the superposition of the oscillations of the tidal constituents, while the residual obtained by subtracting the predicted tidal height from the observed water level can essentially be regarded as non-periodic. In terms of spectral structure, the observed water level is not only a superposition of constituents characterized by line spectra, but also contains background noise and random fluctuations caused by nonlinear interactions between the two. The accuracy of tidal prediction by the harmonic method is therefore necessarily limited. To study tidal prediction errors further, researchers abroad have carried out spectral analyses of tidal prediction residuals at particular locations and obtained meaningful results. This paper studies the accuracy of tidal prediction and the nature of its errors through the power spectrum of tidal prediction residuals.
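To make the spectral idea concrete, the hedged sketch below estimates the power spectrum of a synthetic prediction residual with scipy's Welch estimator, separating a leftover tidal line from the broadband background; it is not the spectral method of the original paper, and all numbers are invented.

    # Illustrative power spectrum of a tidal prediction residual (synthetic data).
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(2)
    t = np.arange(0, 24 * 365, 1.0)                         # one year of hourly water levels

    true_tide = 1.0 * np.cos(2 * np.pi * t / 12.42) + 0.3 * np.cos(2 * np.pi * t / 12.0)
    noise = np.convolve(rng.normal(0, 0.08, t.size), np.ones(12) / 12, mode="same")
    observed = true_tide + noise

    # An imperfect harmonic prediction (slightly wrong M2 amplitude) leaves a residual
    # mixing a leftover tidal line with the non-periodic background.
    predicted = 0.95 * np.cos(2 * np.pi * t / 12.42) + 0.3 * np.cos(2 * np.pi * t / 12.0)
    residual = observed - predicted

    freqs, psd = welch(residual, fs=1.0, nperseg=24 * 30)   # cycles per hour, 30-day segments
    semidiurnal = (freqs > 1 / 13) & (freqs < 1 / 11)
    print("residual std (cm):", round(residual.std() * 100, 1))
    print("peak PSD in the semidiurnal band:", psd[semidiurnal].max())
    print("median background PSD:", np.median(psd))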

4.
This paper discusses the accuracy requirements of water level control and the shortcomings of current water level control scheme design, and proposes a method of designing water level control schemes using a tidal model. Using the self-developed software CNTideGets, virtual tide gauge stations were set up in a hypothetical survey area; the tidal model was used to compute the chart datum at each virtual station and to predict 15 days of tidal data; the tidal time differences, maximum simultaneous tidal height differences and standard errors of the in-phase tidal range differences between the virtual stations were then computed and mapped, and these served as the basis for designing the water level control scheme for the survey area. Suggestions are given on water level control accuracy and on using tidal models to design water level control schemes.

5.
Tide tables, produced by harmonic analysis of long-term tidal observations, are the main means of tidal prediction for major harbors and have relatively high accuracy, which conventional numerical prediction of the astronomical tide still finds difficult to match. In this study, on the basis of a conventional numerical astronomical tide prediction model, a numerical prediction model assimilating tide table data was established, and both models were used to predict the astronomical tide in the coastal waters of Fujian. The results show that the predictions of the assimilation model are clearly better than those of the conventional model in both tidal time and tidal height; the assimilation model significantly improves the tidal predictions at no fewer than 45 of the 90 water level points in the coastal waters studied, and the predictions at the remaining points are also improved to varying degrees.

6.
宋军  姚志刚  郭俊如  李静  李欢  李程 《海洋通报》2016,35(4):396-405
A high-resolution data assimilation modeling system was established to optimize the open boundary of a tidal model of the Bohai and Yellow Seas. The tidal harmonic constants were extracted from observations at coastal tide gauges and offshore water level gauges. The data assimilation system consists of a forward model integrated forward in time and a tidal inverse model: the forward model is ROMS, a three-dimensional, nonlinear regional ocean model, and the inverse model is TRUXTON, a three-dimensional, linear, finite-element model. The system optimizes the barotropic tidal boundary conditions by inversion, minimizing as far as possible the errors arising from differences among the assimilated data sources. The study shows that assimilation effectively reduces the error in the water level forcing at the tidal open boundary, with model/observation errors reduced by more than 50% after adjustment. The M2 cotidal chart reconstructed from the posterior tidal open boundary is consistent with previous results.

7.
Based on historical marine meteorological observations and reanalysis data, the LSTM deep neural network method is used in a supervised-learning study of short-term sea surface wind forecasting. Taking five representative stations in the China seas as the study area, datasets were constructed from meteorological station observations and ERA-Interim 6 h reanalysis data. Twenty-one variables were selected as predictors, and two LSTM network configurations (OBS_LSTM and ALL_LSTM) were built. Comparison with the 6 h forecasts of the WRF model for 2017 leads to the following conclusions: the two LSTM wind speed models substantially reduce the wind speed forecast error, with RMSE reduced by 41.3% and 38.8% and MAE reduced on average by 43.0% and 40.0%, respectively; statistics of wind speed errors and analysis of extreme winds show that the LSTM models capture sensitive information such as topography, short-duration strong winds and typhoons, and clearly outperform the WRF model for strong wind events; comparison of the two LSTM models shows that ALL_LSTM has the smallest wind speed forecast error and good stability and robustness, while OBS_LSTM has a wider range of application.

8.
To improve the real-time prediction accuracy of tidal levels, this paper proposes a modular grey group method of data handling (Grey-GMDH) model for real-time tidal level prediction. The modular approach decomposes the tide into two parts: the astronomical tide generated by the tide-generating forces of celestial bodies, and the non-astronomical part caused by weather and other environmental factors. The Grey-GMDH model and a harmonic analysis model are used to simulate and predict the non-astronomical and astronomical parts, respectively, and the two predictions are then combined to form the final tidal prediction. Real-time prediction experiments using observed tidal data from the port of San Diego confirm the feasibility and effectiveness of the method and show that the model achieves relatively high prediction accuracy.
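The modular split can be sketched as follows under loudly stated assumptions: a least-squares harmonic fit provides the astronomical part, and a plain GM(1,1) grey model, a simplification standing in for the paper's Grey-GMDH module, extrapolates the non-astronomical residual. All data and parameter values are synthetic.

    # Hypothetical modular tidal prediction: harmonic fit + GM(1,1) residual extrapolation.
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(0, 24 * 30, 1.0)                           # 30 days of hourly levels
    periods = np.array([12.42, 12.00, 25.82, 23.93])         # M2, S2, O1, K1 periods (hours)

    level = (1.1 * np.cos(2 * np.pi * t / 12.42 + 0.4)
             + 0.4 * np.cos(2 * np.pi * t / 12.00 - 1.0)
             + 0.15 * t / t[-1]                               # slow meteorological drift
             + 0.03 * rng.normal(size=t.size))

    # --- astronomical part: least-squares harmonic fit ---
    omega = 2 * np.pi / periods
    A = np.column_stack([np.ones_like(t)]
                        + [f(w * t) for w in omega for f in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(A, level, rcond=None)
    tide_fit = A @ coef
    residual = level - tide_fit                               # non-astronomical part

    # --- non-astronomical part: GM(1,1) grey extrapolation ---
    def gm11_forecast(x0, steps):
        x1 = np.cumsum(x0)
        z1 = 0.5 * (x1[1:] + x1[:-1])
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0), len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1_hat - x1_prev                               # back-difference to level units

    steps = 6
    t_future = t[-1] + 1 + np.arange(steps)
    A_future = np.column_stack([np.ones_like(t_future)]
                               + [f(w * t_future) for w in omega for f in (np.cos, np.sin)])
    # Shift by +1.0 so the grey series stays positive, then shift back.
    prediction = A_future @ coef + gm11_forecast(residual[-48:] + 1.0, steps) - 1.0
    print("6 h tidal level forecast (m):", np.round(prediction, 3))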

9.
Based on the principles of harmonic analysis of the astronomical tide and a statistical analysis of the distribution characteristics of the residual water level, a method is proposed for short-term tidal prediction using only a small amount of observed tidal data, and the corresponding prediction software is developed. Case study results show that the proposed statistical prediction method has two important applications: (1) it achieves high accuracy in repairing gaps in missing observations, with a mean absolute error better than 5 cm; and (2) it performs well in short-term tidal prediction over three days, with a mean absolute error of less than 13 cm.

10.
Using observations from the JASON-1 and TOPEX/POSEIDON altimeters during their cross-calibration phase, the consistency of sea surface wind speed, significant wave height, backscatter cross-section and sea surface height measured by the two instruments over the China seas and the Northwest Pacific is analyzed. The j,v model and the harmonic constants of the major constituents are used to apply a shallow-water tidal correction to the JASON-1 sea surface heights over the shallow China shelf seas, and monthly mean water levels from tide gauge stations are used to verify the correction. The results show that the ocean environmental parameters observed by the two altimeters are strongly correlated, and that JASON-1 is capable of continuing the TOPEX/POSEIDON data record. However, there remain non-negligible differences between the two systems for the same parameters; these differences are analyzed and a correction model is given. The shallow-water tidal correction effectively suppresses the influence of shelf tidal waves on the retrieval of sea surface height over the shallow China shelf seas: the correlation coefficient between the monthly mean water levels at five tide gauge stations in shallow water and the tidally corrected JASON-1 sea surface heights is 0.738, with a standard deviation of 0.096 m. After further merging the JASON-1 and TOPEX/POSEIDON sea surface heights from the tandem flight period and comparing with the tide gauge data, the correlation coefficient increases to 0.83 and the standard deviation is 0.067 m.

11.
Coastal Engineering, 2005, 52(3): 221-236
The notion of data assimilation is common in most wave predictions. This typically means nudging wave observations into numerical predictions so as to drive the predictions towards the observations. In this approach, the predicted wave climate is corrected at each observation time; however, the corrections soon diminish in the absence of future observations. To drive the model state predictions towards real-time climatology, the updating has to be carried out in the forecasting horizon too, which could be achieved if wave forecasts at the observational network were made available. The present study addresses a wave forecasting technique for a discrete observation station using local models. The embedding theorem, based on time-lagged embedding vectors, is the basis for the local model and is a powerful tool for time series forecasting. The efficiency of the forecasting model as an error correction tool (combining the model predictions with the measurements) is demonstrated for forecasting horizons from a few hours to 24 h. The parameters driving the local model are optimised using evolutionary algorithms.
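A minimal sketch of a local model in a time-lagged embedding space, under the assumption that "local model" here means a nearest-neighbour predictor over delay vectors; the evolutionary optimisation of the embedding parameters mentioned in the abstract is omitted and the wave-height series is synthetic.

    # Hypothetical local-model forecast from a time-lagged embedding (synthetic wave heights).
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(0, 24 * 90, 1.0)                           # 90 days of hourly wave heights
    hs = 1.5 + 0.6 * np.sin(2 * np.pi * t / 160) + 0.2 * rng.normal(size=t.size)
    hs = np.convolve(hs, np.ones(5) / 5, mode="same")

    dim, lag, horizon, k = 4, 3, 6, 20                       # embedding dimension/lag, 6 h lead

    # Library of delay vectors and the values observed `horizon` steps later.
    idx = np.arange((dim - 1) * lag, hs.size - horizon)
    library = np.array([hs[i - (dim - 1) * lag:i + 1:lag] for i in idx])
    targets = hs[idx + horizon]

    # Current state: the most recent delay vector.
    query = hs[-1 - (dim - 1) * lag::lag]

    # Local model: distance-weighted average of the targets of the k nearest library states.
    dist = np.linalg.norm(library - query, axis=1)
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-6)
    forecast = np.sum(weights * targets[nearest]) / weights.sum()
    print(f"{horizon} h wave-height forecast (m): {forecast:.2f}")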

12.
刘聚  暴景阳  许军 《海洋测绘》2019,39(2):10-15
To enable accuracy assessment of water level corrections obtained by the tidal time difference method, the error equation of the method is derived from the law of covariance propagation, methods of assessing the accuracy of the determined tidal time differences are discussed, and the results are verified through data experiments. The experiments show that the least squares fitting method satisfies the tidal time difference accuracy requirements of the method better than the correlation coefficient method; according to the Specifications for Hydrographic Survey (《海道测量规范》), the tolerance of the standard error of the tidal time difference in the test sea area is 6.2 min, and the tolerance of the standard error of the closure of tidal time differences in the tide gauge network is 1.0 min; the variance of the water level correction reflects the reasonableness of the correction results and can be used to assess the correction accuracy.
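The two ways of determining the tidal time difference that the abstract compares can be sketched as follows on synthetic records with a known 42-minute lag; the covariance-propagation error equations themselves are not reproduced, and all station data are invented.

    # Hypothetical tidal time difference estimation: correlation vs. least-squares fit.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0, 24 * 15 * 60, 6.0)                       # 15 days of 6-minute levels (minutes)
    true_lag = 42.0                                           # minutes

    ref = 1.0 * np.cos(2 * np.pi * t / (12.42 * 60)) + 0.3 * np.cos(2 * np.pi * t / (12.0 * 60))
    sub = (0.9 * np.cos(2 * np.pi * (t - true_lag) / (12.42 * 60))
           + 0.27 * np.cos(2 * np.pi * (t - true_lag) / (12.0 * 60))
           + 0.02 * rng.normal(size=t.size))

    lags = np.arange(-120, 121, 2.0)                          # candidate lags in minutes

    # (a) correlation coefficient method: lag maximising the correlation
    corr = [np.corrcoef(np.interp(t - L, t, ref), sub)[0, 1] for L in lags]
    lag_corr = lags[int(np.argmax(corr))]

    # (b) least-squares method: lag (and scale) minimising the residual sum of squares
    def rss(L):
        shifted = np.interp(t - L, t, ref)                    # reference shifted by L minutes
        scale = np.dot(shifted, sub) / np.dot(shifted, shifted)
        return np.sum((sub - scale * shifted) ** 2)

    lag_lsq = lags[int(np.argmin([rss(L) for L in lags]))]
    print(f"true lag {true_lag} min, correlation method {lag_corr} min, least-squares {lag_lsq} min")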

13.
Numerous urbanized embayments in California are at risk of flooding during extreme high tides caused by a combination of astronomical, meteorological and climatic factors (e.g., El Niño), and the risk will increase as sea levels rise and storminess intensifies. Across California, the potential exists for billions of dollars in losses by 2100, and predictive inundation models will be relied upon at the local level to plan adaptation strategies and forecast localized flood impacts in support of emergency management. However, the predictive skill of urban inundation models for extreme tide events has not been critically examined, particularly in relation to data quality and flood mapping methodologies. With a case study of Newport Beach, California, we show that tidal flooding can be resolved along streets and at individual parcels using a 2D hydraulic inundation model that captures embayment amplification of the tide, overtopping of flood defenses, and overland flow along streets and into parcels. Furthermore, hydraulic models outperform equilibrium flood mapping methodologies, which ignore hydraulic connectivity and are strongly biased towards over-prediction of flood extent. However, infrastructure geometry data, including flood barriers and street and parcel elevations, are crucial to accurate flood prediction. A real-time kinematic (RTK) survey instrument with an error of approximately 1 cm (RMSE) is found to be suitable for barrier height measurement, but an error of approximately 15 cm (RMSE), typical of aerial laser scanning or LiDAR, is found to be inadequate. Finally, we note that the harbor waterfront in Newport Beach is lined by a patchwork of public and private parcels and flood barriers of varied designs and integrity. Careful attention to hydraulic connectivity (e.g., low points and gaps in barriers) is needed for successful flood prediction.

14.
Salinity is an important parameter influencing the water quality of estuaries, and can pose a serious problem for society given the need for freshwater for industry and agriculture. The determination of salt intrusion length in estuaries is therefore a challenge for managers as well as scientists in this field, and managers tend to prefer simple and reliable tools for assessing salinity variation. Although 2-D and 3-D numerical models are now common tools for the prediction of salinity intrusion, analytical models of salinity variation are much more efficient and require only minimal river data. In this paper, two analytical solutions, Brockway and Savenije, used worldwide to assess longitudinal salinity variation in alluvial estuaries, are applied to the Moroccan Atlantic semi-closed estuaries Sebou and Loukkos. The solutions are derived from salt convection-dispersion equations, with different assumptions for the dispersion coefficient; the estuaries' bathymetry is described by an exponential function. The performance of these two solutions was evaluated by comparing their results with field-measured salinity data. The Brockway model's salinity predictions agree well with observations, especially in the downstream reaches of the two estuaries (Sebou: R2 = 0.95, root mean square error [RMSE] = 1.50‰, normalized root mean square error [NRMSE] = 3.45‰; Loukkos: R2 = 0.95, RMSE = 1.13‰, NRMSE = 3.01‰), while the Savenije model outperforms the Brockway model and better predicts salt intrusion length and salinity variation along the two estuaries (Sebou: R2 = 0.97, RMSE = 1.15‰, NRMSE = 2.85‰; Loukkos: R2 = 0.98, RMSE = 0.95‰, NRMSE = 1.94‰). This shows that both analytical solutions are well suited to estimating salinity variation and predicting salt intrusion in these two estuaries.
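As a hedged illustration (not the Brockway or Savenije formulations themselves), the sketch below solves the steady-state, tidally averaged salt balance for an exponentially converging estuary with a constant dispersion coefficient, a simplification in the spirit of the analytical models compared above; all parameter values are invented, not taken from the Sebou or Loukkos estuaries.

    # Hypothetical analytical salinity profile for an exponentially converging estuary.
    import numpy as np

    S0 = 35.0          # salinity at the mouth (psu)
    Qf = 100.0         # freshwater discharge (m^3/s)
    A0 = 5000.0        # cross-sectional area at the mouth (m^2)
    a = 20e3           # convergence length of the cross-section (m)
    D = 250.0          # dispersion coefficient (m^2/s), assumed constant

    x = np.linspace(0.0, 60e3, 121)                    # distance landward from the mouth (m)

    # Steady salt balance Qf*S + A(x)*D*dS/dx = 0 with A(x) = A0*exp(-x/a);
    # integrating dS/dx = -(Qf/(A0*D))*exp(x/a)*S gives the closed form below.
    S = S0 * np.exp(-(Qf * a / (A0 * D)) * (np.exp(x / a) - 1.0))

    # Salt intrusion length: distance at which salinity drops below a threshold.
    threshold = 2.0
    L_intrusion = x[np.argmax(S < threshold)] if np.any(S < threshold) else np.nan
    print(f"salinity at 10 km: {np.interp(10e3, x, S):.1f} psu")
    print(f"salt intrusion length (S < {threshold} psu): {L_intrusion / 1e3:.1f} km")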

15.
In this paper, two analytical models used worldwide to assess salinity variation in alluvial estuaries are applied to the Ashtamudi estuary, a Ramsar site on the southwest coast of India, and the Bouregreg estuary in the northwest of Morocco. The estuaries' bathymetry is described by an exponential function. Both models are quite similar and use a predictive equation for the dispersion at the estuary mouth (D0). The major difference between the two models is the use of a constant value of K = 0.5 for the Van der Burgh coefficient (K) and the introduction of the correction factor ζ, which is a function of damping (δ) and shape (γ). The performance of these two models was evaluated by comparing their results with field measurements; this revealed that both analytical models apply well to the estimation of salinity distribution and the prediction of salt intrusion in the Ashtamudi and Bouregreg estuaries (Ashtamudi: RMSE = 0.60–1.22 ppt; Bouregreg: RMSE = 0.92–2.71 ppt). One model agrees more closely with the field measurements of salinity distribution along the estuary axis; the second underestimates and overestimates some values of the salinity distribution along the estuaries. Possibly, the constant value of K = 0.5 for the Van der Burgh coefficient has limited applicability for estuaries under tidal conditions; specifying this parameterization may be a subject for further research.

16.
The purpose of this study was to apply probabilistic models to the mapping of potential polychaeta habitat in the Hwangdo tidal flat, Korea. Remote sensing techniques were used to construct spatial datasets of ecological environments, and field observations were carried out to determine the distribution of macrobenthos. Habitat potential mapping was carried out for two polychaeta species, Prionospio japonica and Prionospio pulchra, and eight control factors relating to the tidal macrobenthos distribution were selected. These included the intertidal digital elevation model (DEM), slope, aspect, tidal exposure duration, distance from tidal channels, tidal channel density, spectral reflectance of the near infrared (NIR) bands and surface sedimentary facies from satellite imagery. The spatial relationships between the polychaeta species and each control factor were calculated using a frequency ratio and weights-of-evidence combined with geographic information system (GIS) data. The species occurrence data were randomly divided into a training set (70%) to analyze habitat potential using the frequency ratio and weights-of-evidence, and a test set (30%) to verify the predicted habitat potential map. The relationships were overlaid to produce a habitat potential map with a polychaeta habitat potential (PHP) index value. These maps were verified by comparing them to surveyed habitat locations, i.e., the verification data set. In the verification, the frequency ratio model showed prediction accuracies of 77.71% and 74.87% for P. japonica and P. pulchra, respectively, while those for the weights-of-evidence model were 64.05% and 62.95%. Thus, the frequency ratio model provided a more accurate prediction than the weights-of-evidence model. Our data demonstrate that the frequency ratio and weights-of-evidence models based upon GIS analysis are effective for generating habitat potential maps of polychaeta species in a tidal flat. The results of this study can be applied towards conservation and management initiatives for the macrofauna of tidal flats.
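A minimal sketch of the frequency-ratio calculation on synthetic data: the ratio for each factor class is the share of species occurrences in the class divided by the share of cells in the class, and the per-cell habitat potential index is the sum over factors. The weights-of-evidence variant and the actual GIS layers are omitted; all factor names and values are assumptions for illustration.

    # Hypothetical frequency-ratio habitat potential index (synthetic tidal-flat cells).
    import numpy as np

    rng = np.random.default_rng(6)
    n_cells = 5000

    # Synthetic control factors (elevation in m, tidal exposure in h/day).
    elevation = rng.uniform(-1.0, 2.0, n_cells)
    exposure = rng.uniform(0.0, 12.0, n_cells)

    # Synthetic "observed" presence: the species prefers mid elevation and long exposure.
    p = 1 / (1 + np.exp(-(3 * (1 - np.abs(elevation - 0.5)) + 0.3 * exposure - 4)))
    presence = rng.random(n_cells) < p

    def frequency_ratio(values, presence, bins):
        """Frequency ratio per class and the class index of every cell."""
        cls = np.digitize(values, bins)
        fr = np.zeros(len(bins) + 1)
        for c in np.unique(cls):
            share_occ = presence[cls == c].sum() / max(presence.sum(), 1)
            share_area = (cls == c).sum() / len(values)
            fr[c] = share_occ / share_area
        return fr, cls

    fr_elev, cls_elev = frequency_ratio(elevation, presence, bins=np.linspace(-1, 2, 7))
    fr_expo, cls_expo = frequency_ratio(exposure, presence, bins=np.linspace(0, 12, 7))

    # Habitat potential index: sum of the factor-wise frequency ratios per cell.
    phi = fr_elev[cls_elev] + fr_expo[cls_expo]
    print("mean index at presence cells :", round(phi[presence].mean(), 2))
    print("mean index at absence cells  :", round(phi[~presence].mean(), 2))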

17.
Accurate prediction of bus travel time is important for improving the attractiveness of public transport. Based on historical bus arrival and departure data, and taking into account factors such as time period, stop, inter-stop distance and weather, a static bus travel time prediction model based on a BP neural network is established. On this basis, a dynamic travel time prediction model for consecutive stops is constructed by dynamically iterating and accumulating the predicted travel times of multiple inter-stop segments, enabling prediction of bus travel times across multiple stops. The algorithm is tested on bus route 125 in Qingdao. In the horizontal model comparison, the absolute errors of the proposed model are all within 50 s, with a mean absolute percentage error (MAPE) of 11.74%, a root mean square error (RMSE) of 23.15 and a coefficient of determination (R2) of 0.9051; the corresponding error metrics of the SVM are 12.38%, 38.33 and 0.7436, and those of linear regression (LR) are 12.50%, 25.59 and 0.8841. In the comparison between the static and dynamic models, the MAPE and RMSE of the dynamic model are 11.75% and 23.15, while those of the static model are 11.63% and 26.74. The results show that the dynamic bus travel time prediction model based on the BP neural network achieves higher prediction accuracy than the traditional static prediction method.

18.
To explore new operational forecasting methods for waves, a forecasting model for wave heights at three stations in the Bohai Sea has been developed. This model is based on a long short-term memory (LSTM) neural network with sea surface wind and wave heights as training samples. The prediction performance of the model is evaluated, and the error analysis shows that, when using the same set of numerically predicted sea surface winds as input, the prediction error produced by the proposed LSTM model at Sta. N01 is 20%, 18% and 23% lower than that of the conventional numerical wave models in terms of the total root mean square error (RMSE), scatter index (SI) and mean absolute error (MAE), respectively. In particular, for significant wave heights in the range of 3–5 m, the prediction accuracy of the LSTM model improves the most remarkably, with RMSE, SI and MAE all decreasing by 24%. It is also evident that the number of hidden neurons, the number of buoys used and the time length of the training samples all have an impact on the prediction accuracy. However, the prediction does not necessarily improve with an increase in the number of hidden neurons or the number of buoys used. The experiment trained on the longest record performed the best overall compared with the experiments trained on shorter records. Overall, the long short-term memory neural network proves to be a very promising method for future development and application in wave forecasting.

19.
In the present work we explore the impact of assimilating local tide-gauge and altimetric data on the quality of predictions of the major Adriatic tides (M2 and K1). To that end we compute optimal tidal open boundary conditions for a 3D high-resolution finite-element model using an incremental assimilation formalism. The essence of the method is the use of two dynamical models, where the solution of the complex 3D high-resolution model is sought via assimilation of prediction errors into a simpler 2D model with an explicit inverse. In the central numerical experiment, harmonic constants from 12 tide gauges are assimilated and the results are analysed at 31 locations, 19 of them independent. The data assimilation reduces the maximum amplitude error from 5.6 to 0.5 cm for M2 and from 3.9 to 0.1 cm for K1. The assimilation procedure is repeated with suitably processed Topex/Poseidon altimeter data, again validating the outcome at the 31 tide gauge locations; the result is very similar to that of the gauge-data assimilation. The model output is also validated against current observations not used in the assimilation: at two locations and at three depths the model reproduces the major and minor semi-axes of the tidal ellipses, as well as their orientations, very well.
