Using exogenous variables to improve precipitation predictions of ANNs in arid and hyper-arid climates
Authors: Hossein Bari Abarghouei, Seyed Zeynalabedin Hosseini
Institution: 1. Department of Agriculture, Payame Noor University, Yazd, Iran; 2. Faculty of Natural Resources and Desert Studies, Yazd University, Yazd, Iran
Abstract: On the one hand, because of the scarcity and high variability of rainfall in arid areas, reliable prediction of precipitation in such regions is considerably difficult. Furthermore, in some cases, a shortage of observational data and several other limitations may further complicate forecasting. On the other hand, these regions suffer severely from low water availability, which necessitates the development of an appropriate modeling approach that provides predictions of precipitation as precise as possible. Artificial neural networks (ANNs) are expected to be a powerful tool for capturing and analyzing the high interannual variability of precipitation in arid climates and, subsequently, for properly predicting future precipitation fluctuations. The aim of this paper is to improve ANN predictions of precipitation in arid climates through better training of the network. To this end, two approaches were applied. In the first, only the monthly rainfall data were used as input. In the second, several exogenous variables were used as inputs alongside precipitation itself. The chosen exogenous parameters either influence or are relevant to the precipitation patterns. Several lag times, hidden layer sizes, and training algorithms were then tested for different running sums in order to produce the best forecasts. It was shown that network performance increases significantly when more external factors are included as inputs. Larger time scales also exhibited better performance. Across all five time scales, smaller lag times (especially one month), larger hidden layer sizes (especially between 31 and 40 neurons), and the GDX training algorithm gave the best performance. The highest performance was obtained by a network with 10 inputs, a one-month lag, a hidden layer size of 36, and the CGF training method on the 18-month running sum, with R² of 0.93.
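To make the second approach concrete, the following is a minimal sketch, not the authors' code: a single-hidden-layer feed-forward network predicting an 18-month running sum of precipitation from a lagged value of the series plus lagged exogenous variables. The synthetic data, the build_dataset helper, and the use of scikit-learn's MLPRegressor (with its default solver) are illustrative assumptions; the abstract reports GDX/CGF training, which are typically MATLAB Neural Network Toolbox routines.

```python
# Minimal sketch (assumed, not the paper's implementation) of predicting an
# n-month running sum of precipitation from lagged precipitation plus lagged
# exogenous variables with a one-hidden-layer ANN.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def build_dataset(precip, exog, window=18, lag=1):
    """Return (X, y): y is the `window`-month running sum of precipitation;
    X holds the running sum and each exogenous series `lag` months earlier.
    `precip` is 1-D (months,), `exog` is 2-D (months, n_variables)."""
    running = np.convolve(precip, np.ones(window), mode="valid")  # running sums
    X, y = [], []
    for t in range(lag, len(running)):
        features = [running[t - lag]]                # lagged running sum
        features.extend(exog[t - lag + window - 1])  # exogenous values at the lagged month
        X.append(features)
        y.append(running[t])
    return np.asarray(X), np.asarray(y)

# Hypothetical monthly data: rainfall plus 9 exogenous series, giving 10 inputs
# as in the best-performing configuration reported in the abstract.
rng = np.random.default_rng(0)
months = 600
precip = rng.gamma(2.0, 5.0, months)        # stand-in monthly rainfall
exog = rng.normal(size=(months, 9))         # e.g. temperature, humidity, indices

X, y = build_dataset(precip, exog, window=18, lag=1)
X = StandardScaler().fit_transform(X)

# Single hidden layer of 36 neurons, mirroring the best reported network size.
model = MLPRegressor(hidden_layer_sizes=(36,), max_iter=2000, random_state=0)
model.fit(X[:-60], y[:-60])                 # hold out the last 60 months
print("R^2 on held-out months:", model.score(X[-60:], y[-60:]))
```

With real data, the first approach in the abstract corresponds to passing only the lagged precipitation feature, while the second adds the exogenous columns; comparing the two held-out scores reproduces the kind of comparison the paper reports.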
This document is indexed in SpringerLink and other databases.