
A Low Visibility Recognition Algorithm Based on Surveillance Video
Cite this article: Liu Dongwei, Mu Haizhen, He Qianshan, Shi Jun, Wang Yadong, Wu Xueqin. A low visibility recognition algorithm based on surveillance video[J]. Journal of Applied Meteorological Science, 2022, 33(4): 501-512.
Authors: Liu Dongwei, Mu Haizhen, He Qianshan, Shi Jun, Wang Yadong, Wu Xueqin
Affiliation: 1. Shanghai Ecological Forecasting and Remote Sensing Center, Shanghai 200030
Abstract: To increase the density of visibility observations by exploiting the large number of existing video surveillance devices, an algorithm is proposed that converts real-scene video images and uses a simple convolutional neural network to classify them into visibility levels. The algorithm assumes that the camera is installed horizontally with an open view; it divides the original video image into horizontal blocks, extracts the gradient, saturation, and brightness of each block to compose a new image, and builds a classification model on a simple convolutional neural network. The model is trained with 29,668 video images collected at Yangshan Port Weather Station in Shanghai from September 2019 to December 2020, and tested with 5,757 video images from January to May 2021. Following the fog forecast grades of GB/T 27964-2011, visibility is divided into five levels for verification: the accuracy is 87.99% in the daytime and 81.32% at night, better than using the AlexNet model directly. For low visibility weather below 1000 m, the recognition accuracy exceeds 95%. Using existing video cameras, the method can effectively compensate for the sparse coverage of visibility meters at weather stations and has application value in operational meteorology.
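A minimal sketch of the image-conversion step described above, assuming OpenCV and NumPy: the frame is sliced into horizontal strips, and each strip contributes its mean gradient magnitude, saturation, and brightness to a small three-channel feature image. The function and parameter names (e.g. frame_to_feature_image, n_strips) are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

def frame_to_feature_image(frame_bgr: np.ndarray, n_strips: int = 32) -> np.ndarray:
    """Convert one surveillance frame into an (n_strips x 3) feature image."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Sobel gradient magnitude: a proxy for how sharply scene edges remain visible.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)

    h = frame_bgr.shape[0]
    rows = []
    for i in range(n_strips):
        top, bottom = i * h // n_strips, (i + 1) * h // n_strips
        rows.append([
            grad[top:bottom].mean(),        # mean gradient of the strip
            hsv[top:bottom, :, 1].mean(),   # mean saturation of the strip
            hsv[top:bottom, :, 2].mean(),   # mean brightness (value) of the strip
        ])
    feat = np.asarray(rows, dtype=np.float32)
    # Normalize each channel so strips from different cameras are comparable.
    feat /= np.array([grad.max() + 1e-6, 255.0, 255.0], dtype=np.float32)
    return feat  # shape: (n_strips, 3)
```

The resulting small feature image, rather than the raw frame, is what the simple convolutional network would classify; the strip count and normalization are design choices assumed here for illustration.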

Keywords: low visibility    image recognition    algorithm    convolutional neural network
Received: 2022-02-21

A Low Visibility Recognition Algorithm Based on Surveillance Video
Affiliation: 1. Shanghai Ecological Forecasting and Remote Sensing Center, Shanghai 200030; 2. Shanghai Meteorological Information and Technology Support Center, Shanghai 200030; 3. Shanghai Key Laboratory of Meteorology and Health, Shanghai 200030
Abstract: Low visibility seriously affects highways, ferries, civil aviation, and other modes of transportation, yet visibility observations from meteorological departments are not dense enough to meet the monitoring needs of low visibility weather. Extracting visibility data from existing video surveillance equipment can save much of the cost of deploying and maintaining visibility meters, increase data density, and provide finer-grained support for traffic and urban safety operations. A classification approach based on live video image conversion and a simple convolutional neural network is proposed to extract visibility levels. The algorithm assumes that the video devices are installed horizontally with an open view; it divides the original video image into horizontal blocks, extracts the gradient, color saturation, and brightness of each block, and composes them into a new fixed-size image. A simple convolutional neural network is then trained on the converted images to build a visibility level recognition model. The model is trained with 29,668 video images from Yangshan Port Weather Station in Shanghai taken between September 2019 and December 2020, and tested with 5,757 video images from January to May 2021. The comparison indicates that the recognition model built with this technique is more accurate than a model built directly with the AlexNet network. When observed visibility is classified into five levels (fog-free, light fog, fog, dense fog, and thick fog) according to the fog forecasting grades, the overall accuracy is 87.99% in the daytime and 81.32% at night. The model identifies fog-free and light fog conditions well; however, because the scenery becomes nearly indistinguishable once dense fog appears at night, its recognition of the dense fog level at night is poor and such cases are easily misclassified as fog. Taking 1000 m as the criterion of low visibility weather, the accuracy is 96.18% in the daytime and 96.14% at night. The algorithm trains quickly and is easy to apply, making it suitable for low visibility recognition from video images in most open-field scenes. The model is applied to a radiation fog event in Shanghai on 13 April 2021: video images from areas where automatic weather stations are sparse are collected for visibility identification, and the visibility distribution map formed together with existing automatic station visibility meter data is more complete and accurate. This demonstrates that the model built with this algorithm can effectively compensate for the insufficient density of existing automatic station visibility meter data and has application value in meteorological operations.
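A minimal sketch, assuming PyTorch, of the kind of "simple convolutional neural network" five-level classifier the abstract describes (fog-free, light fog, fog, dense fog, thick fog). The layer sizes, the 32x3 input shape, and the class name VisibilityLevelNet are illustrative assumptions, not the architecture published in the paper.

```python
import torch
import torch.nn as nn

class VisibilityLevelNet(nn.Module):
    def __init__(self, n_strips: int = 32, n_classes: int = 5):
        super().__init__()
        # Treat the converted feature image as a 1-channel map of shape (n_strips, 3).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 3)),
        )
        self.classifier = nn.Linear(32 * 4 * 3, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_strips, 3) converted feature images.
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))

# Usage sketch: predict a visibility level for one converted frame.
model = VisibilityLevelNet()
feature_image = torch.rand(1, 1, 32, 3)     # placeholder for a converted frame
level = model(feature_image).argmax(dim=1)  # 0..4, one of the five fog levels
```

Because the converted feature image is far smaller than the raw frame, such a network has few parameters, which is consistent with the abstract's claim that the model trains quickly and is easy to deploy alongside existing cameras.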
Keywords: low visibility    image recognition    algorithm    convolutional neural network
