Similar Documents
20 similar documents found.
1.
High compression ratio, high decoding performance, and progressive data transmission are the most important requirements of vector data compression algorithms for WebGIS. To meet these requirements, we present a new compression approach. This paper begins with the generation of multiscale data by converting float coordinates to integer coordinates. It is proved that the distance between the converted point and the original point on screen is within 2 pixels, and therefore our approach is suitable for the visualization of vector data on the client side. Integer coordinates are passed to an Integer Wavelet Transformer, and the high-frequency coefficients produced by the transformer are encoded by Canonical Huffman codes. The experimental results on river data and road data demonstrate the effectiveness of the proposed approach: the compression ratio can reach 10% for river data and 20% for road data, respectively. We conclude that more attention needs to be paid to the correlation between curves that contain only a few points.

2.
High compression ratio, high decoding performance, and progressive data transmission are the most important requirements of vector data compression algorithms for WebGIS. To meet these requirements, we present a new compression approach. This paper begins with the generation of multiscale data by converting float coordinates to integer coordinates. It is proved that the distance between the converted point and the original point on screen is within 2 pixels, and therefore our approach is suitable for the visualization of vector data on the client side. Integer coordinates are passed to an Integer Wavelet Transformer, and the high-frequency coefficients produced by the transformer are encoded by Canonical Huffman codes. The experimental results on river data and road data demonstrate the effectiveness of the proposed approach: the compression ratio can reach 10% for river data and 20% for road data, respectively. We conclude that more attention needs to be paid to the correlation between curves that contain only a few points.
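The abstracts above do not specify which integer wavelet is used. As a hedged illustration of the idea, the sketch below applies one level of the integer Haar (S) transform, implemented with lifting, to a sequence of integer coordinates; the high-frequency detail coefficients it produces are the kind of values that would then be entropy coded (e.g., with canonical Huffman codes). The function names and sample coordinates are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def int_haar_forward(x):
    """One level of the integer Haar (S) transform via lifting.

    x: 1-D array of integer coordinates (even length).
    Returns (s, d): integer approximation and detail coefficients.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even                 # detail (high-frequency) coefficients
    s = even + (d >> 1)            # approximation, floor((even + odd) / 2)
    return s, d

def int_haar_inverse(s, d):
    """Exact reconstruction of the original integer sequence."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

if __name__ == "__main__":
    coords = np.array([120003, 120010, 120021, 120019, 120040, 120044], dtype=np.int64)
    s, d = int_haar_forward(coords)
    assert np.array_equal(int_haar_inverse(s, d), coords)   # lossless round trip
    print("details to entropy-code:", d)
```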

3.
An adaptive median filtering algorithm with an improved mean
To address the shortcomings of the adaptive median filtering algorithm in removing high-density salt-and-pepper noise and preserving image edge detail, an improved-mean adaptive median filtering (IMAMF) algorithm is proposed. The algorithm extends the image boundary so that the boundary pixels of the original image can take part in noise detection and filtering under the adaptive filter window, and adds a noise-threshold test when distinguishing noise from signal: pixels identified as noise are output as the value of a corrected mean filter, while signal pixels are output with their original gray values. To verify the feasibility of the algorithm, five different algorithms were compared in simulations and the results were evaluated both subjectively and with objective metrics. The experiments show that the algorithm effectively removes salt-and-pepper noise at densities of 10% to 90%, preserves image detail and edge information better, and clearly outperforms the other algorithms in filtering performance.
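As a rough illustration of the general scheme described in the abstract (boundary extension, an adaptive window, a noise test, and mean-based replacement of noisy pixels), here is a minimal sketch; the window-growth rule, the noise test, and the "corrected mean" are simplified assumptions and not the IMAMF algorithm itself.

```python
import numpy as np

def adaptive_median_mean_filter(img, max_win=7):
    """Simplified adaptive median filter for salt-and-pepper noise.

    Pads the image so border pixels are filtered too, grows the window
    while the local median looks like an impulse, and replaces detected
    noise pixels with the mean of the non-extreme pixels in the window.
    """
    img = img.astype(np.float64)
    pad = max_win // 2
    padded = np.pad(img, pad, mode="reflect")        # extend the image boundary
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            for win in range(3, max_win + 1, 2):
                r = win // 2
                block = padded[i + pad - r:i + pad + r + 1,
                               j + pad - r:j + pad + r + 1]
                lo, med, hi = block.min(), np.median(block), block.max()
                if lo < med < hi:                    # median is not an impulse
                    if lo < img[i, j] < hi:          # centre pixel looks like signal
                        break                        # keep the original value
                    good = block[(block > lo) & (block < hi)]
                    out[i, j] = good.mean()          # corrected mean of non-extremes
                    break
                if win == max_win:                   # window exhausted: fall back to mean
                    out[i, j] = block.mean()
    return out

rng = np.random.default_rng(0)
img = rng.integers(60, 200, size=(64, 64)).astype(np.float64)
noisy = img.copy()
mask = rng.random(img.shape) < 0.3                   # 30% salt-and-pepper noise
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
print("MAE after filtering:", np.abs(adaptive_median_mean_filter(noisy) - img).mean())
```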

4.
Research on map matching algorithms for floating car data
王美玲  程林 《测绘学报》2012,41(1):133-0
To address the key technical difficulties that existing floating-car map matching algorithms face when applied to complex urban road networks, this paper constructs an urban traffic road network on the SuperMap GIS platform from floating car data and studies a new floating-car map matching algorithm: grid-based determination of candidate road segments, matching of positioning points with weights based on distance, heading, and reachability, and selection of the travelled trajectory based on the shortest path. The algorithm meets the accuracy and real-time requirements of floating-car map matching and provides a reliable basis for obtaining information on urban road traffic congestion.
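A minimal sketch of the weighted matching step is given below: each candidate segment found in the grid index is scored from its distance to the GPS fix, its heading difference, and its reachability from the previous match. The weights and score functions are illustrative assumptions, not the values used in the paper.

```python
import math

def match_score(dist_m, heading_diff_deg, reachable,
                w_dist=0.5, w_head=0.3, w_reach=0.2):
    """Score a candidate road segment for one GPS fix.

    dist_m: perpendicular distance from the fix to the segment (metres)
    heading_diff_deg: |vehicle heading - segment direction| in [0, 180]
    reachable: True if the segment can be reached from the previous match
    Higher score = better candidate.  Weights are illustrative only.
    """
    s_dist = math.exp(-dist_m / 30.0)                     # decays with distance
    s_head = math.cos(math.radians(min(heading_diff_deg, 90.0)))
    s_reach = 1.0 if reachable else 0.0
    return w_dist * s_dist + w_head * s_head + w_reach * s_reach

# pick the best of the candidate segments found in the grid index
candidates = [("segA", 12.0, 8.0, True), ("segB", 6.0, 85.0, False)]
best = max(candidates, key=lambda c: match_score(c[1], c[2], c[3]))
print("matched segment:", best[0])
```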

5.
Since current on-board real-time processing of remote sensing images is limited to low-level algorithms, a field-programmable gate array (FPGA) based model for on-board real-time computation of relative attitude with the P-H method is proposed. The model avoids the complex trigonometric computations and initial-value estimation of the traditional Euler-angle approach and also reduces the number of iterations. An FPGA (V7 xc7vx1140t) was chosen as the hardware platform for real-time computation. The FPGA implementation uses a 64-bit floating-point data structure together with a combined serial/parallel strategy, and matrix inversion is realized with a blocked LU (lower-upper) decomposition algorithm. The experimental results show that the model needs 13 fewer iterations than the Euler-angle approach, that the difference between the FPGA and PC implementations of the model is only 5.0×10⁻¹⁴, and that the speed-up ratio is 10. The model is also widely applicable to image processing tasks with demanding real-time requirements.
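As a software-side analogue of the matrix inversion step (the FPGA design itself uses a blocked LU scheme in hardware), the following sketch inverts a matrix from its LU factorization with SciPy; it is only meant to show the numerical idea, not the FPGA implementation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_via_lu(A):
    """Invert a square matrix by solving A X = I from its LU factorization
    (a software analogue of the LU-based inversion described for the FPGA)."""
    lu, piv = lu_factor(A)
    return lu_solve((lu, piv), np.eye(A.shape[0]))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Ainv = inverse_via_lu(A)
print(np.allclose(A @ Ainv, np.eye(3)))   # True
```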

6.
汪荣峰  廖学军 《测绘科学》2013,38(1):130-132
To achieve real-time visualization of massive global terrain data, this paper proposes a new algorithm. Instead of using geometric data, the algorithm builds an initial multi-resolution terrain model from features of the sphere, and then extends the initial result according to the relationship between the view frustum and the nodes to obtain the complete terrain mesh. In addition, a stitching scheme is designed to eliminate cracks between nodes with complex adjacency relations, and a simple method is constructed to eliminate the "wobbling" artifact caused by the 32-bit floating-point precision of the GPU. The implemented algorithm achieves an average roaming speed of more than 95 frames per second on an ordinary PC.

7.
An oracle-based data management method for large database in CyberCity GIS
An Oracle8i-based approach is proposed to manage the integrated databases of large CyberCity. This approach consists of three schemes: ① a special R+-tree index is designed to accelerate spatial retrieving, in which the bounding boxes of local regions have no intersection and all leaf nodes of the R+-tree (geometry records) have no repetition; ② different data compression algorithms are adopted to compress the digital elevation models, 3D vector models and images, such as LZ77 lossless compression algorithm for compression of vector data and JPEG compression algorithms for texture images; ③ in order to communicate with Oracle8i database, a CyberCity GIS spatial database engine (SDE) is designed. On the basis of this SDE prototype a case study is done.

8.
Comparison of cloud-removal algorithms for MODIS NDVI time-series data
Affected by many factors, MODIS NDVI data products contain a large amount of noise and must be denoised and reconstructed. Taking one year of MODIS NDVI time-series data for Shandong Province as test data, this paper compares several commonly used cloud-removal methods for NDVI time series, such as the HANTS method, SPLINE interpolation, and the Savitzky-Golay method, examining their cloud-removal capability and scope of application from different perspectives. The results show that S…
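One of the compared methods, Savitzky-Golay filtering, is easy to demonstrate. The sketch below smooths a synthetic NDVI series with SciPy; the series, window length, and polynomial order are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

# A synthetic 16-day NDVI series (23 composites per year) with cloud-induced drops.
t = np.arange(23)
ndvi = 0.35 + 0.3 * np.sin((t - 4) / 23 * 2 * np.pi)   # idealised seasonal curve
ndvi[[5, 11, 17]] -= 0.25                               # cloud-contaminated dips

# Savitzky-Golay reconstruction: window length and polynomial order are
# tuning choices, not values from the paper.
smoothed = savgol_filter(ndvi, window_length=7, polyorder=3)
print(np.round(smoothed, 3))
```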

9.
An Oracle8i-based approach is proposed to manage the integrated databases of large CyberCity. This approach consists of three schemes: ① a special R+-tree index is designed to accelerate spatial retrieving, in which the bounding boxes of local regions have no intersection and all leaf nodes of the R+-tree (geometry records) have no repetition; ② different data compression algorithms are adopted to compress the digital elevation models, 3D vector models and images, such as LZ77 lossless compression algorithm for compression of vector data and JPEG compression algorithms for texture images; ③ in order to communicate with Oracle8i database, a CyberCity GIS spatial database engine (SDE) is designed. On the basis of this SDE prototype a case study is done.

10.
The Amery Ice Shelf (AIS) is the third largest ice shelf in Antarctica, and its state affects the mass balance of Antarctica and sea-level change, yet little research has addressed determining the position of the ice-shelf front where the AIS meets the sea. This paper proposes an efficient and accurate method for automatically detecting the ice-shelf front from Sentinel-1 synthetic aperture radar (SAR) imagery. Exploiting the distribution of SAR backscatter coefficients in the transition zone between the ice shelf and seawater, the method combines Sentinel-1 SAR imagery with the smallest-of constant false alarm rate (SO-CFAR) detector and morphological filtering to obtain an ice/water binary map, automatically extracts the front point on each profile line with a sliding-window cumulative-sum method, and automatically traces the AIS front contour. The front-detection parameters are optimized by considering the influence of factors such as the spatial resolution of the SAR imagery and the profile resolution, and the effect of floating ice on the extraction accuracy of the front is analyzed. To verify the influence of image spatial resolution on the detection results of the various methods, AIS images free of ice debris were resampled with bilinear interpolation and the accuracy was compared against ice-shelf front extraction algorithms based on the standard deviation and the five-largest-values method. The experiments demonstrate that the proposed profile method is broadly applicable. In addition, analysis of the AIS front with and without ice debris shows that the profile method combining SO-CFAR and morphological filtering extracts the ice-shelf front most accurately, with an optimal detection accuracy of less than one pixel; it is only slightly affected by surface meltwater and ice-shelf fracturing and adapts well to different scenes.
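As a hedged sketch of the profile-line idea (not the full SO-CFAR pipeline), the snippet below locates the ice/water transition on one binary profile with a cumulative-sum statistic; the statistic and the toy profile are assumptions for illustration.

```python
import numpy as np

def front_index(profile):
    """Locate the ice/water transition on one binary profile line.

    profile: 1-D array of 0 (water) / 1 (ice) values sampled along a profile
    crossing the ice-shelf front.  The cumulative sum of the mean-removed
    profile reaches its minimum at the index that best splits the profile
    into a water-like part and an ice-like part.
    """
    p = np.asarray(profile, dtype=float)
    cusum = np.cumsum(p - p.mean())       # drifts down over water, up over ice
    return int(np.argmin(cusum)) + 1      # change point just after the minimum

profile = np.array([0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1])
print("front at sample", front_index(profile))
```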

11.
Traditional point cloud compression algorithms mainly compress small, fine point clouds of small objects and, when compressing the massive data of large objects, suffer from long compression times and low efficiency. To address this, an improved layered point cloud compression algorithm is proposed. Based on the spatial structure of point clouds of large objects, the algorithm combines the speed advantage of layered compression with the efficiency of distance-based compression, overcoming the shortcomings of traditional compression algorithms for large-object point clouds and achieving fast and efficient compression of massive point clouds. Compression experiments on the 3D laser point cloud of the Dayan Pagoda in Xi'an show that the algorithm can quickly compress massive point clouds, greatly shortening the compression time and improving the compression efficiency compared with traditional compression algorithms.
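A minimal sketch of the combination described above, assuming a simple slicing of the cloud into horizontal layers and a distance-threshold filter within each layer; the layer height and distance threshold are illustrative parameters, not the paper's.

```python
import numpy as np

def compress_layered(points, layer_height=0.5, min_dist=0.05):
    """Thin a point cloud by slicing it into horizontal layers and, within
    each layer, keeping a point only if it is at least min_dist away from
    the previously kept point (a distance-threshold filter).

    points: (N, 3) array of x, y, z coordinates.
    """
    points = np.asarray(points, dtype=float)
    kept = []
    layers = np.floor(points[:, 2] / layer_height).astype(int)
    for layer_id in np.unique(layers):
        layer_pts = points[layers == layer_id]
        last = None
        for p in layer_pts:
            if last is None or np.linalg.norm(p - last) >= min_dist:
                kept.append(p)
                last = p
    return np.array(kept)

cloud = np.random.default_rng(0).random((10000, 3)) * [50, 50, 60]   # toy cloud
thinned = compress_layered(cloud, layer_height=1.0, min_dist=0.5)
print(len(cloud), "->", len(thinned), "points")
```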

12.
Saliency-weighted RX anomaly detection for hyperspectral imagery
In hyperspectral anomaly detection, the traditional RX algorithm ignores spatial correlation and estimates the background inaccurately. This paper proposes a weighted RX algorithm based on spectral saliency analysis of local image neighborhoods. By introducing image saliency analysis, the algorithm improves the background modeling that uses probability density as the weight: it builds a spectral saliency weight map, redefines the mean vector and covariance matrix in the RX algorithm, and assigns different weights to different targets, thereby optimizing the background estimate. Anomaly detection experiments on synthetic and real hyperspectral data show that, for the same data set, the proposed algorithm detects more anomalies than the traditional algorithm with a lower false alarm rate, effectively improving the detection rate.
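The sketch below shows the core of a weighted RX detector: a per-pixel weight map enters the background mean vector and covariance matrix, and the anomaly score is the Mahalanobis distance to that weighted background. How the saliency weights are computed is omitted, and all names and parameters are assumptions for illustration.

```python
import numpy as np

def weighted_rx(cube, weights):
    """RX anomaly scores with a weighted background estimate.

    cube:    (rows, cols, bands) hyperspectral image
    weights: (rows, cols) background weights, e.g. derived from a spectral
             saliency map (low weight on likely targets, high on background)
    Returns a (rows, cols) map of Mahalanobis distances to the weighted
    background mean.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    w = weights.reshape(-1).astype(float)
    w = w / w.sum()
    mu = w @ X                                        # weighted mean vector
    Xc = X - mu
    cov = (Xc * w[:, None]).T @ Xc                    # weighted covariance matrix
    cov += 1e-6 * np.eye(b)                           # regularise
    inv = np.linalg.inv(cov)
    scores = np.einsum("ij,jk,ik->i", Xc, inv, Xc)    # Mahalanobis distances
    return scores.reshape(r, c)

rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 64, 20))
cube[30, 30] += 5.0                                   # implant an anomaly
scores = weighted_rx(cube, weights=np.ones((64, 64)))
print("anomaly at", np.unravel_index(scores.argmax(), scores.shape))
```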

13.
This paper analyzes the inconsistent common boundaries produced when conventional compression algorithms (such as the Douglas-Peucker algorithm) compress topology-free polygon data, and attributes the phenomenon to inconsistent choices of the compression starting point on the common boundaries of adjacent polygons. A new constraint-point-based compression algorithm for topology-free polygon data is then proposed. The algorithm works as follows: first, the two endpoints of each common polygon boundary are treated as constraint points, logically splitting each polygon into several segments at the constraint points; then a conventional compression algorithm is applied segment by segment, so that the compression starting point of every common boundary is consistent, which guarantees consistent compression of the topology-free polygon data. Finally, extensive experiments verify the effectiveness of the algorithm.
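A minimal sketch of the constraint-point idea: the boundary is split at the constraint points (the endpoints of shared edges), each piece is simplified independently with Douglas-Peucker, and the pieces are rejoined, so adjacent polygons simplify their common boundary from the same starting points. The tolerance and the toy ring are illustrative assumptions.

```python
import numpy as np

def douglas_peucker(pts, tol):
    """Classic Douglas-Peucker simplification of a polyline (list of (x, y))."""
    if len(pts) < 3:
        return list(pts)
    start, end = np.asarray(pts[0], float), np.asarray(pts[-1], float)
    seg = end - start
    seg_len = np.hypot(seg[0], seg[1]) or 1e-12
    # perpendicular distance of every interior point to the chord start-end
    d = [abs(seg[0] * (p[1] - start[1]) - seg[1] * (p[0] - start[0])) / seg_len
         for p in pts[1:-1]]
    i = int(np.argmax(d)) + 1
    if d[i - 1] > tol:
        left = douglas_peucker(pts[:i + 1], tol)
        right = douglas_peucker(pts[i:], tol)
        return left[:-1] + right
    return [pts[0], pts[-1]]

def compress_with_constraints(boundary, constraint_idx, tol):
    """Split a polygon boundary at constraint points (the shared endpoints of
    common edges), simplify each piece independently, and rejoin them, so that
    adjacent polygons simplify their shared boundary from the same start point."""
    idx = sorted(set([0, len(boundary) - 1] + list(constraint_idx)))
    out = []
    for a, b in zip(idx[:-1], idx[1:]):
        piece = douglas_peucker(boundary[a:b + 1], tol)
        out.extend(piece if not out else piece[1:])   # avoid duplicating joints
    return out

ring = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (3, 1), (3.1, 2), (3, 3), (0, 3), (0, 0)]
print(compress_with_constraints(ring, constraint_idx=[3, 6], tol=0.2))
```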

14.
We propose a new lossless and near-lossless compression algorithm for hyperspectral images based on context-based adaptive lossless image coding (CALIC). Specifically, we propose a novel multiband spectral predictor, along with optimized model parameters and optimization thresholds. The resulting algorithm is suitable for compression of data in band-interleaved-by-line format; its performance evaluation on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data shows that it outperforms 3-D-CALIC as well as other state-of-the-art compression algorithms.

15.
梁明  陈文静  段平  李佳 《测绘通报》2019,(4):60-64,70
One of the key bottlenecks of trajectory big data is that the massive scale of trajectory data limits its analysis, mining, and application, so compression methods for trajectory data are a focus of trajectory big data research. Existing trajectory compression algorithms emphasize preserving a single spatio-temporal characteristic of the data, and the influence of compression algorithms on multi-dimensional spatio-temporal characteristics has not been studied. This paper selects multi-dimensional spatio-temporal characteristics of trajectory data, including MBR area error, distance error, direction error, speed error, compression ratio, and compression speed, and evaluates typical trajectory compression methods at three levels: geometric characteristics, motion characteristics, and compression efficiency. To systematically observe how compression algorithms affect the spatio-temporal characteristics of trajectories at different compression scales, the evaluation is performed on compression results at multiple scales. The results show that, overall, compression algorithms that take the motion characteristics of trajectories into account (such as the TD_TR algorithm) preserve the overall spatio-temporal characteristics of trajectories better, and that the influence of different compression algorithms on the spatio-temporal characteristics varies consistently with scale, indicating that the compression scale is the core factor determining the compression effect.

16.
3D point clouds provide a high-precision 3D representation for the fine digitization of the physical world and are widely used in 3D modeling, smart cities, autonomous navigation systems, augmented reality, and other fields. However, the massive volume, unstructured nature, and uneven density of point cloud data pose great challenges to storage and transmission, so achieving point cloud compression with low bit rates and low distortion under limited storage capacity and network bandwidth has important theoretical and practical value. Centered on the research status, standard frameworks, and evaluation metrics of point cloud compression, this paper reviews the latest progress in point cloud compression algorithms at home and abroad, the Moving Picture Experts Group compression standard framework, and quality evaluation metrics for geometry and attribute information; analyzes and compares the performance of three open-source point cloud compression algorithms on public point cloud compression datasets; and discusses the main development trends of point cloud compression.

17.
A novel algorithm for the lossless compression of hyperspectral sounding data is presented. The algorithm rests upon an efficient technique for three-dimensional image band reordering. The technique is based on a correlation factor. The correlation-based band ordering gives 5% higher compression ratios than natural ordering does. On the other hand, the obtained compression ratios are within a percent of those produced by optimal ordering, but the computational time is much lower compared to the optimal ordering. The low computational complexity of the algorithm is based on the use of correlation for the band ordering. Moreover, the algorithm results in 7% to 12% improvement over fast nearest neighbor reordering scheme versions of JPEG-LS and the context-based adaptive lossless image codec algorithms.
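A hedged sketch of correlation-based band reordering: bands are ordered greedily so that each band follows the not-yet-used band it is most correlated with. The greedy rule and the synthetic cube are assumptions for illustration; the paper's exact ordering criterion may differ.

```python
import numpy as np

def correlation_band_order(cube):
    """Greedy band reordering by inter-band correlation.

    cube: (bands, rows, cols) image.  Starting from band 0, repeatedly append
    the not-yet-used band that is most correlated with the last chosen band,
    so that each band is predicted from a highly correlated neighbour.
    """
    b = cube.shape[0]
    flat = cube.reshape(b, -1).astype(float)
    corr = np.abs(np.corrcoef(flat))          # (b, b) correlation magnitudes
    order, used = [0], {0}
    while len(order) < b:
        last = order[-1]
        candidates = [(corr[last, j], j) for j in range(b) if j not in used]
        _, best = max(candidates)
        order.append(best)
        used.add(best)
    return order

rng = np.random.default_rng(2)
base = rng.normal(size=(64, 64))
cube = np.stack([base * (k % 3 + 1) + rng.normal(scale=0.1, size=(64, 64))
                 for k in range(6)])
print("band order:", correlation_band_order(cube))
```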

18.
Geometric registration is an important prerequisite for subsequent image processing and a research hotspot in remote sensing information processing. High-precision registration of multi-temporal remote sensing images over areas with complex terrain has long been a difficult problem. Optical flow estimation, which solves for per-pixel displacement increments, offers a feasible solution, but it is highly sensitive to land cover change, which often leads to anomalies in the computed optical flow field and in the registered images. This paper therefore proposes a registration method for multi-temporal remote sensing images over complex terrain based on optical flow correction. An initial optical flow field is obtained under joint brightness and gradient constraints, anomalous flow vectors are then detected with a Laplacian-of-Gaussian operator, and the anomalies are corrected by Delaunay triangulated surface interpolation, yielding accurate per-pixel displacements. Experiments show that the proposed method achieves high-fidelity, high-precision registration for multi-temporal remote sensing images of complex terrain with land cover change.
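A rough sketch of the correction step under stated assumptions: anomalous flow vectors are flagged with a Laplacian-of-Gaussian response threshold and refilled by Delaunay-based linear interpolation (SciPy's griddata); the threshold, sigma, and toy flow field are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from scipy.interpolate import griddata

def correct_flow(flow, sigma=2.0, k=3.0):
    """Detect and repair anomalous vectors in a dense optical-flow field.

    flow: (rows, cols, 2) array of per-pixel displacements (u, v).
    Pixels whose Laplacian-of-Gaussian response is an outlier (> k std-devs)
    in either component are discarded and re-interpolated from the remaining
    flow vectors with Delaunay-based linear interpolation.
    """
    rows, cols, _ = flow.shape
    log_u = gaussian_laplace(flow[..., 0], sigma)
    log_v = gaussian_laplace(flow[..., 1], sigma)
    bad = (np.abs(log_u) > k * log_u.std()) | (np.abs(log_v) > k * log_v.std())

    yy, xx = np.mgrid[0:rows, 0:cols]
    good_pts = np.column_stack([yy[~bad], xx[~bad]])
    all_pts = np.column_stack([yy.ravel(), xx.ravel()])
    fixed = flow.copy()
    for c in range(2):
        interp = griddata(good_pts, flow[..., c][~bad], all_pts,
                          method="linear")            # Delaunay triangulation
        interp = np.where(np.isnan(interp), flow[..., c].ravel(), interp)
        fixed[..., c] = interp.reshape(rows, cols)
    return fixed

rng = np.random.default_rng(3)
flow = np.dstack([np.full((50, 50), 2.0), np.full((50, 50), -1.0)])
flow[20:23, 20:23] += rng.normal(scale=10.0, size=(3, 3, 2))   # anomalous patch
print("corrected centre vector:", np.round(correct_flow(flow)[21, 21], 2))
```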

19.
In this paper, we describe a model to parallelize the resampling routine used in the geometric correction of data provided by remote sensing satellites. Our model is a typical master-slave model consisting of N machines, termed hosts, one of which is designated as the master. The input image data resides on the master, and processing of the input image data is done in parallel on the N machines. Issues related to load balancing and various error conditions that may occur during execution, such as one of the machines going down, have been studied and are incorporated in the model. The model also provides the flexibility to add or delete hosts during the execution of the resampling routine. The serial version of this routine involves a huge amount of computation and takes a substantial amount of time even for an image of 473 MB. We have implemented our model with the help of PVM, which is most often used in distributed computing environments. Our approach has been tested for geometric correction on LISS-III 4-band data of size 473 MB. It is found that using 2, 3 or 4 hosts reduces the overall execution time by 33%, 42% and 49%, respectively.

20.
Panchromatic data with a pixel resolution of 5.8 m obtained from the IRS-1C and IRS-1D satellites have proved very useful for mapping purposes. One of the popular data products is the 70 km swath mosaic, which is covered by a combination of 3 CCD line sensors, each with 4096 pixels. Because each CCD line sensor images at a slightly different time, mosaicing the three strips of data together poses geometric problems. In this paper, we propose the design elements of a system that caters to the need for accurate and automatic multi-strip image registration without any second resampling of the data. The systematic geometric correction grid mapping is improved to facilitate accurate mosaicing by an automatic image registration task that makes use of the overlap data within image strips, and image registration is achieved at the sub-pixel level.
