Similar Documents
20 similar documents retrieved.
1.
Principal component analysis (PCA) is deployed in JPEG2000 to provide spectral decorrelation as well as spectral dimensionality reduction. The proposed scheme is evaluated in terms of rate-distortion performance as well as information preservation in an anomaly-detection task. Additionally, the proposed scheme is compared to the common approach of JPEG2000 coupled with a wavelet transform for spectral decorrelation. Experimental results reveal that not only does the proposed PCA-based coder yield rate-distortion and information-preservation performance superior to that of the wavelet-based coder, but the best PCA performance also occurs when a reduced number of PCs is retained and coded. A linear model to estimate the optimal number of PCs to use in such dimensionality reduction is proposed.
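A minimal sketch of the spectral PCA step described in this abstract, assuming a hyperspectral cube stored as a NumPy array of shape (bands, rows, cols); the band count, number of retained PCs, and all names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pca_spectral_reduce(cube, n_pcs):
    """Decorrelate the spectral dimension of a hyperspectral cube with PCA
    and keep only the first n_pcs principal components (illustrative sketch)."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).astype(np.float64)      # one row per band
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean                                        # remove the spectral mean
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)                  # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                    # sort by decreasing variance
    basis = eigvecs[:, order[:n_pcs]]                    # bands x n_pcs
    pcs = basis.T @ Xc                                   # n_pcs x pixels, to be coded
    return pcs.reshape(n_pcs, rows, cols), basis, mean

def pca_spectral_restore(pcs, basis, mean, shape):
    """Approximate inverse: reconstruct the cube from the retained PCs."""
    n_pcs = pcs.shape[0]
    X_hat = basis @ pcs.reshape(n_pcs, -1) + mean
    return X_hat.reshape(shape)

if __name__ == "__main__":
    cube = np.random.rand(64, 32, 32)                    # synthetic 64-band cube
    pcs, basis, mean = pca_spectral_reduce(cube, n_pcs=10)
    approx = pca_spectral_restore(pcs, basis, mean, cube.shape)
    print("reconstruction RMSE:", np.sqrt(np.mean((cube - approx) ** 2)))
```

In the scheme described above, the retained PC planes (rather than the original bands) would then be passed to a JPEG2000 coder.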

2.
We propose a compression algorithm for hyperspectral images featuring both lossy and near-lossless compression. The algorithm is based on JPEG 2000 and provides better near-lossless compression performance than 3D-CALIC. We also show that its effect on the results of selected applications is negligible and, in some cases, better than JPEG 2000.

3.
This paper evaluates the effects of JPEG 2000 compression on automated digital surface model (DSM) extraction using the area-based matching technique. Two stereopairs of aerial photographs, which differ in photo scale, pixel size, and topographic type, are used as the test images. In the experiments, the DSM generated from the uncompressed image is set as the reference, and the elevation accuracy is computed for a range of compression ratios (2:1 to 100:1). The results show that the employed image-quality indices indicate that JPEG 2000 compression is clearly superior to the commonly used JPEG compression technique, especially at high compression ratios. However, the elevation accuracy varies with the compression ratio. JPEG 2000 compression does not have a significant influence on the successful matching rate, but a near-linear fall-off in extracted DSM accuracy is observed with increasing compression ratio.

4.
JPEG 2000 and JPEG-LS are two widely used compression algorithms. This paper introduces the coding workflows and implementations of JPEG 2000 and JPEG-LS and, through experiments, analyses the degree and type of distortion that remote sensing images exhibit at different compression ratios. The results show that at the same low-to-medium compression ratios, JPEG 2000 and JPEG-LS preserve image detail almost equally well; at high compression ratios, however, images compressed with JPEG 2000 and JPEG-LS show pronounced blurring artefacts and intermittent distortion artefacts, respectively. These findings offer useful guidance for the use and improvement of the two algorithms.

5.
徐大卫, 张荣, 吴倩. 《遥感学报》(Journal of Remote Sensing), 2015, 19(2): 263-272
A compression algorithm for hyperspectral images is proposed that combines the wavelet transform with dictionary learning. The algorithm first builds multiscale sample sets through the wavelet transform and, in the wavelet domain, learns multiscale dictionaries with atoms of different sizes using K-means singular value decomposition (K-SVD). During sparse representation, an atom-usage-frequency screening factor is defined; by counting how often each atom is used when sparsely representing locally optimal bands, the dictionary atoms are screened and optimized with this factor, and the pruned dictionary is then used to compute the sparse representation of the remaining bands. Finally, adaptive quantization coding is applied to the representation coefficients at different scales. Experimental results show that, compared with the widely used 3D-SPIHT and other multiscale dictionary-learning algorithms, the proposed algorithm achieves better reconstruction performance at low-to-medium bit rates.
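A minimal sketch of learning a dictionary over wavelet-domain patches and sparse-coding them, using scikit-learn's MiniBatchDictionaryLearning with OMP as a stand-in for a full K-SVD implementation; the patch size, number of atoms, and sparsity level are assumptions, not the paper's settings, and the atom-screening step is not reproduced.

```python
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
band = rng.random((256, 256))                 # synthetic single band

# One-level 2-D wavelet transform; learn on the approximation subband.
cA, (cH, cV, cD) = pywt.dwt2(band, "db2")

def to_patches(arr, p=8):
    """Cut a 2-D array into non-overlapping p x p patches, one patch per row."""
    h, w = arr.shape
    arr = arr[: h - h % p, : w - w % p]
    return arr.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p)

X = to_patches(cA)                            # training samples in the wavelet domain
X = X - X.mean(axis=1, keepdims=True)         # zero-mean patches

# K-SVD substitute: online dictionary learning, then OMP sparse coding.
dico = MiniBatchDictionaryLearning(
    n_components=64,                          # number of atoms (assumed)
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,              # sparsity level (assumed)
    random_state=0,
)
codes = dico.fit(X).transform(X)              # sparse coefficients, one row per patch
recon = codes @ dico.components_              # reconstructed wavelet-domain patches
print("mean patch RMSE:", np.sqrt(np.mean((X - recon) ** 2)))
```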

6.
Certain types of two-dimensional (2-D) numerical remote sensing data can be losslessly and compactly compressed for archiving and distribution using standardized image formats. One common method for archiving and distributing data involves compressing data files using file compression utilities such as gzip and bzip2, which are widely available on UNIX and Linux operating systems. gzip-compressed and bzip2-compressed files must first be uncompressed before they can be read by a scientific application (e.g., MATLAB, IDL). Data stored using an image format, on the other hand, can be read directly by a scientific application supporting that format and, therefore, can be stored in compressed form, saving disk space. Moreover, wide use of image formats by data providers and wide support by scientific applications can reduce the need for providers of geophysical data to develop and maintain software customized for each type of dataset and reduce the need for users to develop and maintain or download and install such software. This letter demonstrates the utility of standardized image formats for losslessly compressing, archiving, and distributing 2-D geophysical data by comparing them with the traditional file compression utilities gzip and bzip2 on several types of remote sensing data. The formats studied include TIFF, PNG, lossless JPEG, JPEG-LS, and JPEG2000. PNG and TIFF are widely supported. JPEG2000 and JPEG-LS could become widely supported in the future. It is demonstrated that when the appropriate image format is selected, the compression ratios can be comparable to or better than those resulting from the use of file compression utilities. In particular, PNG, JPEG-LS, and JPEG2000 show promise for the types of data studied.
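A minimal sketch of the kind of comparison described above, measuring the losslessly compressed size of a 2-D array under gzip, bzip2, and PNG; it uses Pillow for PNG encoding and a synthetic 8-bit grid, so the numbers are only illustrative.

```python
import bz2
import gzip
import io

import numpy as np
from PIL import Image

# Synthetic 8-bit grid with smooth spatial structure plus a little noise.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 4, 512), np.linspace(0, 4, 512))
data = ((np.sin(x) * np.cos(y) + 1) * 100 + rng.integers(0, 20, (512, 512))).astype(np.uint8)

raw = data.tobytes()
sizes = {
    "raw": len(raw),
    "gzip": len(gzip.compress(raw, compresslevel=9)),
    "bzip2": len(bz2.compress(raw, compresslevel=9)),
}

# PNG is lossless for 8-bit grayscale images.
buf = io.BytesIO()
Image.fromarray(data).save(buf, format="PNG")
sizes["png"] = buf.getbuffer().nbytes

for name, size in sizes.items():
    print(f"{name:>5}: {size:>8d} bytes  (CR = {sizes['raw'] / size:.2f}:1)")
```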

7.
This paper analyses the basic principles of the JPEG2000 compression algorithm and carries out compression experiments on digital aerial remote sensing images of a study area. By comparing how various evaluation indices change at different compression ratios, the influence of JPEG2000 compression on the imaging quality of the reconstructed images is studied. The evaluation shows that as the compression ratio increases, the mean gray value, standard deviation, and information entropy of the reconstructed image fluctuate within a certain range; at the same time, the texture of the reconstructed image becomes increasingly coarse, the visual quality degrades, and the consistency between the reconstructed and original images decreases, with the differences growing larger.
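A minimal sketch of the evaluation indices named above (mean gray value, standard deviation, information entropy), computed for an original and a reconstructed image with NumPy; the inputs here are synthetic placeholders rather than decoded JPEG2000 images.

```python
import numpy as np

def gray_statistics(img, levels=256):
    """Mean gray value, standard deviation, and Shannon entropy of an 8-bit image."""
    img = np.asarray(img, dtype=np.float64)
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                                   # drop empty bins before the log
    entropy = -np.sum(p * np.log2(p))              # information entropy in bits
    return img.mean(), img.std(), entropy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, (256, 256)).astype(np.uint8)
    # Stand-in for a decoded image: the original plus mild distortion.
    reconstructed = np.clip(original + rng.normal(0, 3, original.shape), 0, 255).astype(np.uint8)
    for name, img in [("original", original), ("reconstructed", reconstructed)]:
        m, s, h = gray_statistics(img)
        print(f"{name:>13}: mean={m:.2f}  std={s:.2f}  entropy={h:.3f} bits")
```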

8.
The effect of remote sensing image compression on classification accuracy is a problem worth studying. Taking the supervised classification accuracy of a high-resolution remote sensing image (QuickBird) as the yardstick, the image was compressed with the JPEG 2000 compression module of the ER Mapper software, and the images at nine compression ratios were then classified with object-oriented supervised classification in the eCognition software to produce classification accuracy reports. By analysing the changes in classification accuracy, the degree to which JPEG 2000 compression affects remote sensing image classification, and its application potential for remote sensing image compression, are investigated.

9.
Random simulation and GPS decorrelation
(i) A random simulation approach is proposed, which is at the centre of a numerical comparison of the performances of different GPS decorrelation methods. The most significant advantage of the approach is that it does not depend on, nor favour, any particular satellite–receiver geometry and weighting system. (ii) An inverse integer Cholesky decorrelation method is proposed, which is shown to outperform the integer Gaussian decorrelation and the Lenstra, Lenstra and Lovász (LLL) algorithm, indicating that the integer Gaussian decorrelation is not the best decorrelation technique and that further improvement is possible. (iii) The performance study of the LLL algorithm is the first of its kind, and the results show that the algorithm can indeed be used for decorrelation, but that it performs worse than the integer Gaussian decorrelation and the inverse integer Cholesky decorrelation. (iv) Simulations have also shown that no decorrelation technique available to date can guarantee a smaller condition number, especially in the case of high dimension, although reducing the condition number is the goal of decorrelation. Received: 26 April 2000 / Accepted: 5 March 2001
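A minimal sketch of the basic integer Gaussian decorrelation step discussed above, applied to a synthetic, strongly correlated matrix standing in for an ambiguity covariance; the reordering used in the full LAMBDA/LLL methods and the paper's inverse integer Cholesky method are not reproduced, and the condition numbers printed are only a comparison for this one example, not a guarantee.

```python
import numpy as np

def integer_gauss_decorrelate(Q, max_sweeps=10):
    """Iteratively apply integer Gauss transformations Z (unimodular, integer-valued)
    so that Qz = Z Q Z^T has smaller off-diagonal correlation.  This is only the
    basic reduction step, without the reordering of more complete methods."""
    Q = np.array(Q, dtype=float)
    n = Q.shape[0]
    Z = np.eye(n)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                mu = np.round(Q[i, j] / Q[j, j])
                if mu != 0:
                    G = np.eye(n)
                    G[i, j] = -mu                  # integer Gauss transformation
                    Q = G @ Q @ G.T
                    Z = G @ Z
                    changed = True
        if not changed:
            break
    return Q, Z

if __name__ == "__main__":
    # Synthetic, strongly correlated SPD matrix (not taken from the paper).
    L = np.array([[1.0, 0.0, 0.0], [3.7, 1.0, 0.0], [2.3, 5.4, 1.0]])
    Q = L @ np.diag([0.10, 0.05, 0.02]) @ L.T
    Qz, Z = integer_gauss_decorrelate(Q)
    print("condition number before:", np.linalg.cond(Q))
    print("condition number after :", np.linalg.cond(Qz))
    print("integer transform Z:\n", Z)
```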

10.
To address the fact that traditional principal component analysis (PCA) ignores the correlation between the coordinate components of each station, a method combining wavelet denoising and multi-directional principal component analysis (WD-MPCA) is proposed. The method remedies this shortcoming of traditional PCA; compared with the combined empirical mode decomposition and PCA (EMD-PCA) method and the combined wavelet denoising and PCA (WD-PCA) method, WD-MPCA achieves the highest accuracy. After denoising with WD-MPCA, the mean RMS errors are 0.83 mm, 0.85 mm, and 8.30 mm, which are 81.14%, 81.91%, and 40.37% lower than those of the original coordinate residual time series, respectively. WD-MPCA fully accounts for the correlation between the different components of each station and can effectively remove high-frequency random white noise (WN) and low-frequency colored noise (CN) from the signal, which is of practical and theoretical significance for high-rate Global Navigation Satellite System (GNSS) applications.
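A minimal sketch of the wavelet-denoising stage only (the multi-directional PCA step is the paper's contribution and is not reproduced here), using PyWavelets with a universal soft threshold on a synthetic noisy coordinate series; the wavelet, level, threshold rule, and noise levels are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal (VisuShrink) threshold."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Noise sigma estimated from the finest detail coefficients (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(series)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(series)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(1000)
    truth = 2.0 * np.sin(2 * np.pi * t / 365.0)          # seasonal-like signal (mm)
    noisy = truth + rng.normal(0, 0.9, t.size)           # add white noise
    clean = wavelet_denoise(noisy)
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    print("RMS error before:", rms(noisy - truth))
    print("RMS error after :", rms(clean - truth))
```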

11.
It is well recognised that data volume represents a huge overhead for softcopy photogrammetry. For example, a file size of 100 Mbytes will be generated from a black and white aerial photograph if digitised with a resolution of 20μm. Large data volumes not only create storage problems but also affect the speed of image processing. As a consequence, data compression of image data is a matter of great significance. This paper describes an investigation into the effects of image compression on the accuracy of digital terrain models (DTMs) extracted from the compressed images. The JPEG system implemented in the Z/I Imaging ImageStation digital photogrammetric workstation (DPW) was used in the study. A systematic test has been carried out on the effect of different levels of JPEG compression (with Q-factors from 1 to 100) on the resulting DTM, which is automatically generated by the DPW using Match-T software. An analysis of the results from the two sites tested shows that image compression tends to cause more significant degradation when the image texture is richer, but that recommendations on Q-factors for use with the ImageStation appear to err on the side of caution. This analysis leads to some tentative conclusions and recommendations both for future investigation and for photogrammetric practice.

12.
Lossy compression is being increasingly used in remote sensing; however, its effects on classification have scarcely been studied. This paper studies the implications of JPEG (JPG) and JPEG 2000 (J2K) lossy compression for image classification of forests in Mediterranean areas. The results explore the impact of the compression on the images themselves as well as on the obtained classification. They indicate that classifications made with previously compressed, radiometrically corrected images and topoclimatic variables are not negatively affected by compression, even at quite high compression ratios. Indeed, compression can be applied to images at a compression ratio (CR, the ratio between the size of the original file and the size of the compressed file) of 10:1 or even 20:1, for both JPG and J2K. Nevertheless, the fragmentation of the study area must be taken into account: in less fragmented zones, high CR are possible for both JPG and J2K, but in fragmented zones JPG is not advisable, and when J2K is used, only a medium CR is recommended (3.33:1 to 5:1). Taking into account that J2K produces fewer artefacts at higher CR, the study not only provides optimum CR recommendations but also finds that the J2K compression standard (ISO 15444-1) is better than JPG (ISO 10918-1) when applied to image classification. Although J2K is computationally more expensive, this is no longer a critical issue with current computer technology.

13.
Fractal geometry provides a means for describing and analysing the complexity of various features present in digital images. In this paper, the characteristics of fractal-based compression of satellite data are tested on Indian Remote Sensing (IRS) images of different bands and resolutions. The fidelity and efficiency of the algorithm and their relationship with the spatial complexity of the images are also evaluated. Results obtained from fractal compression are compared with popularly used compression methods such as JPEG 2000 and WinRAR. The effect of band and pixel resolution on the compression rate is also examined. The results from this study show that the fractal-based compression method provides a higher compression rate while maintaining the information content of RS images to a greater extent than JPEG. The paper also asserts that the information loss due to fractal compression is minimal. It may be concluded that the fractal technique has many potential advantages for the compression of satellite images.

14.
Adaptive image compression algorithm based on the visual masking effect
By analysing how image distortion arises during compression, an adaptive compression algorithm based on the visual masking effect is proposed. Because an image is a non-stationary source, the algorithm uses a block-wise wavelet transform and adapts the decomposition to the texture complexity of each image block so as to match the statistical characteristics of different image regions. Comparative experiments with JPEG2000 show that at high compression ratios the algorithm effectively improves the visual quality of the decompressed image and better preserves image details and faint information.
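A minimal sketch of the block-wise adaptive decomposition idea described above, using local variance as a stand-in for the texture-complexity measure (the paper's masking-based measure is not reproduced) and PyWavelets for the per-block transform; the block size, wavelet, thresholds, and level mapping are assumptions.

```python
import numpy as np
import pywt

def adaptive_block_wavelet(img, block=32, wavelet="db2"):
    """Split the image into blocks and choose a wavelet decomposition depth per
    block from its variance (a simple proxy for texture complexity)."""
    h, w = img.shape
    results = {}
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = img[r:r + block, c:c + block].astype(np.float64)
            var = tile.var()
            # Assumed mapping: smoother blocks get a deeper decomposition.
            level = 3 if var < 50 else (2 if var < 500 else 1)
            results[(r, c)] = (level, pywt.wavedec2(tile, wavelet, level=level))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((128, 128))
    img[:, :64] = 100 + rng.normal(0, 2, (128, 64))      # smooth half
    img[:, 64:] = 100 + rng.normal(0, 40, (128, 64))     # textured half
    blocks = adaptive_block_wavelet(img)
    for (r, c), (lv, _) in sorted(blocks.items())[:8]:
        print(f"block at ({r:3d},{c:3d}) -> decomposition level {lv}")
```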

15.
To address the loss of predictor-variable information that occurs when traditional land use regression (LUR) models are used to simulate air pollutant concentrations, principal component analysis (PCA) is combined with stepwise multiple linear regression (SMLR), yielding an improved LUR (PCA+SMLR) model for simulating the spatial distribution of PM2.5 concentration over large regions. First, correlation analysis is used to screen the predictor variables that are significantly correlated with PM2.5; the selected predictors are then transformed with PCA; finally, all principal components are retained and used in SMLR to build the regression model for simulating PM2.5 concentration. The method is validated with the Beijing-Tianjin-Hebei region as the study area, and the results of the PCR, SMLR, and PCA+SMLR models are compared. The results show that the PCA+SMLR model increases the contribution of the predictor variables to the regression model, reaching an adjusted R2 of 0.883, and that its accuracy metrics and mapping results are all better than those of the traditional LUR model, demonstrating that the model can effectively improve the accuracy of PM2.5 concentration simulation and provide guidance for regional joint prevention and control of PM2.5.
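A minimal sketch of the PCA-then-stepwise-regression pipeline described above on synthetic data, using scikit-learn PCA and a simple forward-selection loop driven by adjusted R2 as a stand-in for full stepwise regression; the feature counts, coefficients, and selection criterion are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def adjusted_r2(model, X, y):
    n, k = X.shape
    r2 = model.score(X, y)
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_stepwise(X, y):
    """Greedy forward selection of columns of X, accepted while adjusted R2 improves."""
    selected, best = [], -np.inf
    remaining = list(range(X.shape[1]))
    while remaining:
        scores = []
        for j in remaining:
            cols = selected + [j]
            m = LinearRegression().fit(X[:, cols], y)
            scores.append((adjusted_r2(m, X[:, cols], y), j))
        score, j = max(scores)
        if score <= best:
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return selected, best

rng = np.random.default_rng(0)
n, p = 300, 12
predictors = rng.normal(size=(n, p))                      # screened LUR predictors (synthetic)
pm25 = predictors[:, :4] @ np.array([3.0, -2.0, 1.5, 0.5]) + rng.normal(0, 1.0, n)

pcs = PCA().fit_transform(predictors)                     # PCA transform, all components kept
chosen, adj_r2 = forward_stepwise(pcs, pm25)
print("selected principal components:", chosen)
print("adjusted R2:", round(adj_r2, 3))
```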

16.
Velocity estimation of China's CORS stations in CGCS2000 considering the influence of colored noise
Based on the 1999-2009 coordinate time series of China's national CORS (GNSS continuously operating reference station) network computed in CGCS2000 (China Geodetic Coordinate System 2000), this study first applies principal component analysis (PCA) spatial filtering to extract the spatio-temporal characteristics of the common mode errors (CME) in the national CORS coordinate time series under the CGCS2000 frame. Second, power spectral analysis is used to characterize the noise of the spatially filtered coordinate residual time series, and maximum likelihood estimation is used to quantify the colored noise components in the residual series. Finally, weighted least squares is used to assess the annual velocity estimates of the national CORS network under CGCS2000 and their actual accuracy when different noise models are taken into account. The results show that spatial filtering improves the accuracy and reliability of the national CORS network results under CGCS2000: after filtering, the average coordinate repeatability in the north, east, and up directions decreases by 26%, 22%, and 46%, respectively, and the average amplitude of the CORS stations in the up direction decreases by nearly 64%. White noise is not the dominant noise component in the coordinate time series of China's CORS stations under CGCS2000; rather, a combination of white noise, flicker noise, and random walk noise is the basic characteristic of the series. The colored noise of China's CORS stations shows certain regularity in the horizontal and vertical directions; velocity uncertainties estimated with colored noise taken into account are generally 2 to 6 times larger than those estimated with white noise only, and the velocity estimates differ by about 2% to 10%.
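A minimal sketch of the PCA spatial-filtering step described above: the residual series of several stations are stacked, the first principal component is taken as the common mode error, and its projection is subtracted from each station; the station count, series length, and noise levels are synthetic assumptions, and the subsequent noise analysis and velocity estimation are not reproduced.

```python
import numpy as np

def pca_spatial_filter(residuals, n_cme=1):
    """Remove the first n_cme principal components (common mode error) from a
    (n_epochs, n_stations) matrix of coordinate residuals."""
    mean = residuals.mean(axis=0, keepdims=True)
    centered = residuals - mean
    # SVD of the epoch-by-station matrix: rows of Vt are spatial eigenvectors.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    cme = (U[:, :n_cme] * S[:n_cme]) @ Vt[:n_cme, :]       # rank-n_cme common signal
    return residuals - cme, cme

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_epochs, n_stations = 2000, 8
    common = rng.normal(0, 2.0, n_epochs)                  # shared (common-mode) signal, mm
    local = rng.normal(0, 0.8, (n_epochs, n_stations))     # station-specific noise
    residuals = common[:, None] + local
    filtered, cme = pca_spatial_filter(residuals)
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    print("mean RMS before filtering:", rms(residuals))
    print("mean RMS after filtering :", rms(filtered))
```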

17.
Lossless compression of multispectral images based on the integer wavelet transform
Lossless compression of multispectral images is usually achieved with prediction methods that remove spatial and intra-band redundancy. In this work, an integer wavelet transform constructed with the lifting scheme is used to remove spatial redundancy, while an inter-band predictor built with a classification method removes inter-band spectral redundancy; combining the two yields lossless compression. Because the transform decorrelates the data well, the compression performance of the method is greatly improved.
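A minimal sketch of a lifting-based integer wavelet transform of the kind mentioned above, implementing the reversible LeGall 5/3 filter in one dimension; the even-length input and the boundary handling are simplifying assumptions, and the inter-band predictor from the paper is not reproduced.

```python
import numpy as np

def lift_53_forward(x):
    """Reversible LeGall 5/3 integer lifting transform (1-D, even-length input).
    Returns the low-pass (s) and high-pass (d) subbands."""
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    s_ext = np.append(s, s[-1])                     # symmetric extension on the right
    d -= (s_ext[:-1] + s_ext[1:]) // 2              # predict step
    d_ext = np.insert(d, 0, d[0])                   # symmetric extension on the left
    s += (d_ext[:-1] + d_ext[1:] + 2) // 4          # update step
    return s, d

def lift_53_inverse(s, d):
    """Exact inverse of lift_53_forward."""
    s, d = s.copy(), d.copy()
    d_ext = np.insert(d, 0, d[0])
    s -= (d_ext[:-1] + d_ext[1:] + 2) // 4          # undo the update step
    s_ext = np.append(s, s[-1])
    d += (s_ext[:-1] + s_ext[1:]) // 2              # undo the predict step
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    band = rng.integers(0, 256, 64)                 # one image row, 8-bit samples
    low, high = lift_53_forward(band)
    restored = lift_53_inverse(low, high)
    print("perfect reconstruction:", bool(np.array_equal(band, restored)))
```

Because every lifting step is an integer operation that is exactly undone by its counterpart, the transform is fully reversible, which is what makes lossless coding possible.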

18.
A blind algorithm for hiding confidential information with different access levels in fused remote sensing images is proposed. The algorithm uses parity embedding together with the JPEG quantization table to hide confidential information in the fused image to different degrees according to the user's access level, and since neither extracting the hidden information nor restoring the original fused image requires the original remote sensing images, it is a blind algorithm. Experimental results show that the algorithm has strong imperceptibility, a large embedding capacity, and good robustness against common image processing operations. Moreover, hiding confidential information at different levels does not affect the various applications of the remote sensing images.
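A minimal sketch of the parity-embedding idea mentioned above: each message bit is written into the parity (least significant bit) of a pixel value, and extraction needs only the stego image; the quantization-table and access-level mechanisms from the paper are not reproduced.

```python
import numpy as np

def embed_parity(image, bits):
    """Embed a bit sequence into the parity (LSB) of the first len(bits) pixels."""
    stego = image.copy().ravel()
    bits = np.asarray(bits, dtype=np.uint8)
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits   # force LSB to the message bit
    return stego.reshape(image.shape)

def extract_parity(stego, n_bits):
    """Blind extraction: read the parity of the first n_bits pixels."""
    return (stego.ravel()[:n_bits] & 1).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)    # stand-in for a fused image
    message = rng.integers(0, 2, 128, dtype=np.uint8)
    stego = embed_parity(cover, message)
    recovered = extract_parity(stego, message.size)
    print("message recovered exactly:", bool(np.array_equal(message, recovered)))
    print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
```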

19.
In object-oriented remote sensing image classification, feature selection is a key factor for guaranteeing classification accuracy and improving classification speed. To address the curse of dimensionality caused by the large number of features of high-resolution imagery and the low classification accuracy that results when effective features cannot be identified, this paper proposes an optimized feature-selection method that combines feature contribution with principal component analysis (PCA) to quantitatively analyse and extract image features. Feature contribution is first used to select effective features; a PCA transform is then applied to remove the interactions between features and reduce the dimensionality, so that the 143 extracted classification features are reduced, after selection and transformation, to 20 principal component features. The optimized features improve the overall accuracy of artificial neural network (ANN), K-nearest neighbour (KNN), and support vector machine (SVM) classification experiments by 10.56%, 7.78%, and 6.11%, respectively, achieving good classification results and showing that the optimized feature-selection method not only greatly reduces the feature dimensionality and the downstream classification workload, but also effectively improves classification accuracy.
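A minimal sketch of the reduce-then-classify step described above, reducing a synthetic 143-feature set to 20 principal components and classifying with KNN in scikit-learn; the contribution-based pre-selection from the paper is not reproduced, and the data and accuracy numbers are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for 143 per-object image features and 5 land-cover classes.
X, y = make_classification(n_samples=1500, n_features=143, n_informative=25,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Reduce the 143 features to 20 principal components fitted on the training set.
pca = PCA(n_components=20).fit(X_tr)
knn_raw = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
knn_pca = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)

print("KNN accuracy, 143 raw features:", round(knn_raw.score(X_te, y_te), 3))
print("KNN accuracy, 20 PCA features :", round(knn_pca.score(pca.transform(X_te), y_te), 3))
```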

20.
Different image processing algorithms have been evaluated in the context of geological mapping using Landsat TM data. False color composites, principal component imagery, and the IHS decorrelation stretching method for Landsat-5 TM data have been found useful for delineating regional geological features and provide the maximum geological information for the studied area. The study assesses which image processing method yields the best results for geological mapping in arid and semi-arid regions while preserving morphological and spectral information. Generally, the studied area can be divided into three main geological units: basaltic intrusive rocks, metamorphic rocks of varying grade, and sedimentary rocks.
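A minimal sketch of a decorrelation (PCA) stretch of the kind mentioned above, applied to a synthetic 3-band composite with NumPy: the bands are rotated into principal components, each component is rescaled to equal variance, and the result is rotated back; the synthetic bands and the equal-variance target are assumptions.

```python
import numpy as np

def decorrelation_stretch(bands):
    """Decorrelation stretch of a (3, rows, cols) composite: rotate to principal
    components, equalize the variance of each component, rotate back."""
    n, rows, cols = bands.shape
    X = bands.reshape(n, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = np.cov(Xc)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pcs = eigvecs.T @ Xc                                   # rotate into PC space
    target = np.sqrt(eigvals.mean())                       # common standard deviation
    pcs *= target / np.sqrt(eigvals)[:, None]              # equalize component variances
    stretched = eigvecs @ pcs + mean                       # rotate back, restore means
    return stretched.reshape(n, rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((100, 100))
    # Three highly correlated synthetic bands, as in a TM false colour composite.
    bands = np.stack([base + rng.normal(0, 0.05, base.shape) for _ in range(3)])
    out = decorrelation_stretch(bands)
    corr_before = np.corrcoef(bands.reshape(3, -1))[0, 1]
    corr_after = np.corrcoef(out.reshape(3, -1))[0, 1]
    print(f"band 1-2 correlation: before {corr_before:.3f}, after {corr_after:.3f}")
```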
