Similar Articles (20 results)
1.
This paper studies the methods, classification, and principles of digital image compression, and the need for this technique in astronomy. Considering the characteristics and requirements of astronomical observation, and after analysis and comparison, a feasible scheme for astronomical image compression is proposed, and results are obtained through computer programming and compression experiments. The following compression methods were tested on astronomical digital images: Huffman coding and bit-plane coding for lossless compression, and discrete cosine transform coding and hybrid coding for distortion-bounded (lossy) compression. The achievable compression ratios are 2-4.5 for lossless compression and 10-30 for lossy compression; the time required depends on the computer used. Detailed experimental results are listed for each compression method.
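As a concrete illustration of the lossless stage mentioned above, here is a minimal Huffman coder in Python. It is a sketch only: it codes raw pixel values, whereas practical image coders feed the entropy coder prediction residuals or bit planes.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code table for a sequence of pixel values."""
    freq = Counter(data)
    # Heap entries: [frequency, tie-breaker id, [symbol, code], ...]
    heap = [[f, i, [sym, ""]] for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol image
        return {heap[0][2][0]: "0"}
    next_id = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # extend codes on the low branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # and on the high branch
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return {sym: code for sym, code in heap[0][2:]}

pixels = [12, 12, 13, 12, 200, 13, 12, 12]
table = huffman_code(pixels)
encoded = "".join(table[p] for p in pixels)
print(table, f"{len(encoded)} bits vs {8 * len(pixels)} raw")
```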

2.
To address the growing conflict between massive astronomical image data and limited storage space and bandwidth, a lossless compression method is proposed: a very large astronomical image is first divided into blocks, each block is then processed with differential pulse-code modulation (DPCM) and the 5/3 integer wavelet transform, and the result is finally encoded with the Huffman algorithm. The principle and implementation of the method are analysed and described in detail. Experiments show that, compared with the tar, PKZip, WinZip, and WinRar tools commonly used in astronomy, the compression ratio improves by 30%, 29%, 26%, and 2% respectively, while the compression speed far exceeds that of WinZip and WinRar. The algorithm is simple to implement, well suited to hardware implementation, and amenable to parallel processing.
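The 5/3 integer wavelet named above is a standard reversible lifting transform. Below is a minimal 1-D sketch; the method described here applies it separably in 2-D to each image block after DPCM prediction, which this sketch does not reproduce.

```python
import numpy as np

def cdf53_forward(x):
    """One level of the reversible 5/3 (LeGall) integer lifting
    transform on a 1-D integer signal of even length."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left even + right even) / 2),
    # with symmetric extension at the right boundary.
    right = np.append(even[1:], even[-1])
    d = odd - ((even + right) >> 1)
    # Update: approx = even + floor((left d + right d + 2) / 4),
    # with symmetric extension at the left boundary.
    left = np.insert(d[:-1], 0, d[0])
    s = even + ((left + d + 2) >> 2)
    return s, d

def cdf53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)          # undo update
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)           # undo predict
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.array([7, 9, 8, 12, 200, 198, 5, 6])
s, d = cdf53_forward(sig)
assert np.array_equal(cdf53_inverse(s, d), sig)   # perfectly reversible
```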

3.
This paper presents an on-site, real-time, lossless compression method for astronomical observation image data. After a statistical analysis of the compressibility of the raw image data, a "base bits + overflow bits" coding method suited to on-site real-time compression of astronomical observation data is introduced, and various measures for raising the compression ratio are discussed. Taking the on-site compression of raw observation data in the DENIS project as an example, the implementation of information-preserving real-time data compression is described. Experimental results show that, without any loss of information, the method achieves compression ratios close to the theoretical value.
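The abstract does not spell out the "base bits + overflow bits" layout, so the sketch below is one plausible reading, with invented parameters: each sample is stored in k base bits if it fits, and rare large values are escaped to an overflow list.

```python
def base_overflow_encode(values, k):
    """Store each sample in k bits when possible; write an escape code
    (all ones) and push the full value to an overflow list otherwise.
    The exact bit layout of the published method may differ."""
    escape = (1 << k) - 1
    base, overflow = [], []
    for v in values:
        if 0 <= v < escape:
            base.append(v)
        else:
            base.append(escape)
            overflow.append(int(v))
    return base, overflow

def base_overflow_decode(base, overflow, k):
    escape = (1 << k) - 1
    it = iter(overflow)
    return [next(it) if b == escape else b for b in base]

residuals = [3, 1, 0, 6, 4091, 2, 5, 1]       # one rare large residual
base, over = base_overflow_encode(residuals, k=3)
assert base_overflow_decode(base, over, k=3) == residuals
```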

4.
We present a method for radical linear compression of data sets where the data are dependent on some number M of parameters. We show that, if the noise in the data is independent of the parameters, we can form M linear combinations of the data which contain as much information about all the parameters as the entire data set, in the sense that the Fisher information matrices are identical; i.e. the method is lossless. We explore how these compressed numbers fare when the noise is dependent on the parameters, and show that the method, though not precisely lossless, increases errors by a very modest factor. The method is general, but we illustrate it with a problem for which it is well-suited: galaxy spectra, the data for which typically consist of ∼10³ fluxes, and the properties of which are set by a handful of parameters such as age and a parametrized star formation history. The spectra are reduced to a small number of data, which are connected to the physical processes entering the problem. This data compression offers the possibility of a large increase in the speed of determining physical parameters. This is an important consideration as data sets of galaxy spectra reach 10⁶ in size, and the complexity of model spectra increases. In addition to this practical advantage, the compressed data may offer a classification scheme for galaxy spectra which is based rather directly on physical processes.
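For noise independent of the parameters, the weight vectors realizing this lossless compression can be written down directly: b_m is proportional to C^{-1} times the derivative of the mean data vector with respect to parameter m, orthogonalized so the M compressed numbers are uncorrelated. A numpy sketch, with toy data standing in for galaxy spectra:

```python
import numpy as np

def compression_weights(C, dmu):
    """Weight vectors b_m with y_m = b_m . data, such that the M numbers
    y_m carry the same Fisher information as the full data set when the
    noise covariance C does not depend on the parameters. dmu has shape
    (M, N): derivatives of the mean data vector w.r.t. each parameter."""
    Cinv = np.linalg.inv(C)
    bs = []
    for g in dmu:
        b = Cinv @ g
        for prev in bs:                    # Gram-Schmidt in the metric C,
            b = b - (prev @ C @ b) * prev  # so the y_m are uncorrelated
        bs.append(b / np.sqrt(b @ C @ b))  # unit noise variance
    return np.array(bs)

# Toy example: compress N = 1000 "fluxes" to M = 2 numbers.
rng = np.random.default_rng(0)
N = 1000
C = np.eye(N)                              # white noise (assumption)
dmu = rng.normal(size=(2, N))              # stand-in for d(mean)/d(theta)
B = compression_weights(C, dmu)
y = B @ rng.normal(size=N)                 # the two compressed numbers
```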

5.
The Visual Infrared Mapping Spectrometer (VIMS) onboard the CASSINI spacecraft obtained new spectral data of the icy satellites of Saturn after its arrival at Saturn in June 2004. VIMS operates in a spectral range from 0.35 to 5.2 μm, generating image cubes in which each pixel represents a spectrum consisting of 352 contiguous wavebands.

As an imaging spectrometer, VIMS combines the characteristics of both a spectrometer and an imaging instrument. This makes it possible to analyze the spectrum of each pixel separately and to map the spectral characteristics spatially, which is important for studying the relationships between spectral information and geological and geomorphologic surface features.

The spatial analysis of the spectral data requires determining the exact geographic position of each pixel on the surface, and that all 352 spectral elements of each pixel view the same region of the target. We developed a method to reproject each pixel geometrically and to convert the spectral data into map-projected image cubes. This method can also be applied to mosaic different VIMS observations. Based on these mosaics, maps of the spectral properties of each Saturnian satellite can be derived and attributed to geographic positions as well as to geological and geomorphologic surface features. These map-projected mosaics are the basis for all further investigations.

6.
Gaia is the most ambitious space astrometry mission currently envisaged and it will be a technological challenge in all its aspects. Here we describe a proposal for the data compression system of Gaia, specifically designed for this mission but based on concepts that can be applied to other missions and systems as well. Realistic simulations have been performed with our Telemetry CODEC software, which applies stream partitioning and pre-compression to the science data. In this way, standard compressors such as bzip2 or szip boost their performance and decrease their processing requirements when applied to such pre-processed data. These simulations have shown that a lossless compression factor of 3 can be achieved, whereas standard compression systems were unable to reach a factor of 2.
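A sketch of the pre-compression idea, under assumptions: the telemetry fields below are invented, and the partitioning is simple column splitting plus delta coding, after which a standard compressor (bz2 from the Python standard library) sees long runs of similar bytes.

```python
import bz2
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
timestamps = np.arange(n, dtype=np.uint32)             # strictly increasing
counts = (1000 + rng.poisson(5, n)).astype(np.uint32)  # slowly varying
interleaved = np.column_stack([timestamps, counts]).tobytes()

# Partition into homogeneous streams and delta-encode each one.
dts = np.diff(timestamps, prepend=timestamps[:1]).astype(np.uint32)
dcs = np.diff(counts.astype(np.int64), prepend=counts[:1].astype(np.int64))
partitioned = dts.tobytes() + dcs.astype(np.int32).tobytes()

print(len(bz2.compress(interleaved)))    # baseline: compressor sees mixed fields
print(len(bz2.compress(partitioned)))    # partitioned + delta: smaller
```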

7.
Astronomical instruments currently provide a large amount of data. Nowadays, a large part of these data are image frames obtained with receivers of increasing size. The scan of large astronomical plates using fast microdensitometers gives image frames of over 30000×30000 pixels. More and more often, images are transmitted over a network in order to control observations, to process the data, and to examine or populate a data bank. The time taken for archiving, the cost of communication, the memory available on magnetic tapes, and the limited bandwidth of transmission lines are the reasons that lead us to examine data compression for astronomical images.

An astronomical image characteristically consists of astronomical sources on a sky background whose values are not zero; we are, in fact, interested only in the astronomical sources. Once a suitable detection is made, we generally want compression without any distortion. In this paper, we present a method that can be adapted for this purpose. It is based on morphological skeleton transformations. The experimental results show that it can provide efficient compression. Moreover, the flexibility of choosing a structuring element adapted to different images and the simplicity of implementation are further advantages of this method. Thanks to these characteristics, various compression applications may be addressed.
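A minimal sketch of a morphological skeleton decomposition (Lantuéjoul's construction), exactly invertible for a binary source mask; the entropy coding of the sparse skeleton subsets, which produces the actual compression, is omitted.

```python
import numpy as np
from scipy import ndimage

def morphological_skeleton(X, B):
    """Skeleton subsets S_k = (X eroded k times by B) minus its opening
    by B. The sparse S_k (together with k) are what would be coded."""
    skel = []
    eroded = X.copy()
    while eroded.any():
        opened = ndimage.binary_opening(eroded, structure=B)
        skel.append(eroded & ~opened)
        eroded = ndimage.binary_erosion(eroded, structure=B)
    return skel

def reconstruct(skel, B):
    Y = np.zeros_like(skel[0])
    for k, Sk in enumerate(skel):
        Dk = Sk.copy()
        for _ in range(k):                    # dilate k times by B
            Dk = ndimage.binary_dilation(Dk, structure=B)
        Y |= Dk
    return Y

B = np.ones((3, 3), dtype=bool)               # flexible choice, as the text notes
X = np.zeros((64, 64), dtype=bool)
X[10:40, 15:50] = True                        # a detected "source" mask
skel = morphological_skeleton(X, B)
assert np.array_equal(reconstruct(skel, B), X)  # lossless for the mask
```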

8.
Hyperspectral imaging is a ubiquitous technique in solar physics observations, and recent advances in solar instrumentation have enabled us to acquire and record data at an unprecedented rate. The huge amount of data to be archived by upcoming solar observatories presses us to compress the data in order to reduce storage space and transfer times. The correlation present over all dimensions of solar data-sets, spatial, temporal, and spectral, suggests the use of a wavelet decomposition over all three dimensions to achieve higher compression rates. In this work, we evaluate the performance of the recent JPEG2000 Part 10 standard, known as JP3D, for the lossless compression of several types of solar data-cubes. We explore the differences in: a) the compressibility of broad-band versus narrow-band time-sequences, and of I versus V Stokes profiles in spectropolarimetric data-sets; b) compressing data in [x, y, λ] packages at different times versus data in [x, y, t] packages at different wavelengths; c) compressing a single large data-cube versus several smaller data-cubes; d) compressing data which is under-sampled or super-sampled with respect to the diffraction cut-off.
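To see why a transform over all three axes helps, one can compare a full [x, y, λ] wavelet decomposition against slice-by-slice 2-D transforms. The sketch below uses pywt purely for illustration; JP3D specifies its own transform and coder.

```python
import numpy as np
import pywt

# A data-cube with strong correlation along the spectral axis.
cube = np.random.default_rng(2).normal(size=(64, 64, 32)).cumsum(axis=2)

coeffs3d = pywt.wavedecn(cube, wavelet="bior2.2", level=2)  # full 3-D
coeffs2d = [pywt.wavedec2(cube[:, :, i], "bior2.2", level=2)
            for i in range(cube.shape[2])]                  # slice by slice

# Crude compressibility proxy: fraction of near-zero coefficients.
flat3d = pywt.coeffs_to_array(coeffs3d)[0].ravel()
flat2d = np.concatenate([pywt.coeffs_to_array(c)[0].ravel() for c in coeffs2d])
print(np.mean(np.abs(flat3d) < 0.1), np.mean(np.abs(flat2d) < 0.1))
```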

9.
丁祖高. 《天文学报》, 1998, 39(3): 324-332
The application of adaptive vector quantization (SAVQ) to the compression of solar radio spectral data is discussed. An SAVQ compression method with adjustable compression ratio and distortion is presented, and sample integration is identified as one effective way of raising the compression ratio. The practical performance of the technique on observation data from the two bands (1.0-2.0 GHz and 2.6-3.8 GHz) of the solar radio spectrometer at Beijing Astronomical Observatory is reported and compared with the ICON scheme used by international colleagues. Finally, data compression techniques and their astronomical applications are surveyed.
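A minimal vector-quantization sketch using scipy's codebook routines; the data and codebook size are invented, and the adaptive and sample-integration aspects of SAVQ are not modelled. Codebook size is the knob that trades compression ratio against distortion.

```python
import numpy as np
from scipy.cluster import vq

rng = np.random.default_rng(3)
spectra = rng.normal(size=(2000, 8))            # 8-channel spectral vectors

codebook, _ = vq.kmeans(spectra, k_or_guess=64) # train a 64-entry codebook
indices, dist = vq.vq(spectra, codebook)        # quantize: 8 floats -> 1 index
reconstructed = codebook[indices]               # decoder side

ratio = spectra.nbytes / (indices.astype(np.uint8).nbytes + codebook.nbytes)
print(f"compression ratio ~ {ratio:.1f}, mean distortion {dist.mean():.3f}")
```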

10.
A novel method of lossless compression for astronomical spectral images is proposed in this paper. First, the Integer Wavelet Transform is adopted to decorrelate the data. Next, an Embedded Zero-tree Wavelet encoder is employed to describe the zero-tree structure of the wavelet coefficients; the resulting stream can be expressed as a character string over an alphabet of only five characters, which is easily compressed by entropy coding. Finally, an arithmetic encoder is chosen as the entropy coder. Groups of simulated data based on LAMOST and observational data from SDSS are used in experiments to demonstrate the new method, and the experimental results are much better than those of GZIP and JPEG2000.

11.
The increasing use of data compression by space mission experiments raises the question of the quality of the images obtained after the compression-decompression process. Indeed, working with an Image Compression Module (ICM) using the Discrete Cosine Transform (DCT) on 8×8-pixel sub-images (each pixel coded on eight bits), one finds blocking effects at the sub-image boundaries. Avril and Nguyen (1992, hereafter ANG 1992) have shown that One Neighbour Accounting Filters (ONAF), used after image reconstruction without modifying the coding method, provide the best and fastest correction as far as linear filtering is concerned. We present here a non-linear method, also applied after image reconstruction, but working on spatial frequencies. It allows us to separate, in Fourier space, the signal from the defect, and then to remove the defect by applying a filter adapted to the frequency spectrum of each spoiled image. Applying the inverse Fourier transform, we then retrieve the corrected image. The efficiency of this new method was tested in three different ways:
- when Fourier filtering is applied to a reference set of aerial photographs of the Earth, blocking effects are virtually indistinguishable to human vision, even when zooming in on the images, which was not the case with ONAF;
- the improvement in the root-mean-square (RMS) error, calculated between the filtered and original images, is at least three times greater than that obtained with ONAF;
- the reconstruction of a three-dimensional view of a landscape, from two stereoscopic images that have undergone a compression-decompression process with a DCT-based algorithm at a compression rate of about 10, is possible only after Fourier filtering has been applied.
The quite good preliminary results of applying Fourier filtering to the Clementine images of the Moon are also presented.
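A sketch of the frequency-domain idea: 8×8 blocking concentrates spurious energy at multiples of 1/8 cycles per pixel along both axes, which a notch filter can remove. The paper adapts the filter to the spectrum of each spoiled image; the notches below are fixed.

```python
import numpy as np

def deblock_fourier(img, block=8, width=1.0):
    """Zero narrow frequency bands at multiples of 1/block cycles per
    pixel along both axes, then transform back. `width` sets the notch
    half-width in frequency bins."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]    # cycles/pixel, y axis
    fx = np.fft.fftfreq(img.shape[1])[None, :]    # cycles/pixel, x axis
    mask = np.ones(F.shape)
    for k in range(1, block // 2 + 1):
        f0 = k / block
        for f, n in ((fy, img.shape[0]), (fx, img.shape[1])):
            near = np.abs(np.abs(f) - f0) < width / n
            mask = mask * np.where(near, 0.0, 1.0)
    return np.real(np.fft.ifft2(F * mask))        # corrected image
```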

12.
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate and makes efficient onboard data compression a necessity if the mission science goals are to be achieved. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics, and compare the two in the context of solar EUV images. In addition, we validate the scientific use of the lossily compressed images by analysing examples of on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
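Both metrics are available off the shelf; a minimal sketch, with Gaussian noise standing in for actual codec loss:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(4)
original = rng.normal(1000.0, 100.0, size=(256, 256))
compressed = original + rng.normal(0.0, 20.0, size=original.shape)

dr = original.max() - original.min()          # data range for both metrics
print("PSNR :", peak_signal_noise_ratio(original, compressed, data_range=dr))
print("MSSIM:", structural_similarity(original, compressed, data_range=dr))
```

PSNR is a purely pixelwise error measure, while MSSIM (the mean of the local SSIM map) responds to structural distortion, which is why the two can rank compressed solar images differently.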

13.
An efficient algorithm for adaptive kernel smoothing (AKS) of two-dimensional imaging data has been developed and implemented using the Interactive Data Language (IDL). The functional form of the kernel can be varied (top-hat, Gaussian, etc.) to allow different weighting of the event counts registered within the smoothing region. For each individual pixel, the algorithm increases the smoothing scale until the signal-to-noise ratio (S/N) within the kernel reaches a pre-set value. Thus, noise is suppressed very efficiently, while at the same time real structure, that is, signal that is locally significant at the selected S/N level, is preserved on all scales. In particular, extended features in noise-dominated regions are visually enhanced. The asmooth algorithm differs from other AKS routines in that it allows a quantitative assessment of the goodness of the local signal estimation by producing adaptively smoothed images in which all pixel values share the same S/N above the background.

We apply asmooth both to real observational data (an X-ray image of clusters of galaxies obtained with the Chandra X-ray Observatory) and to a simulated data set. We find the asmoothed images to be fair representations of the input data in the sense that the residuals are consistent with pure noise, that is, they possess Poissonian variance and a near-Gaussian distribution around a mean of zero, and are spatially uncorrelated.
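A sketch of the adaptive smoothing loop for a Poisson counts image, using square top-hat kernels for simplicity; asmooth additionally treats the background, which is omitted here.

```python
import numpy as np
from scipy import ndimage

def adaptive_smooth(counts, snr_min=3.0, max_radius=32):
    """For each pixel, use the smallest top-hat kernel whose summed
    counts reach the requested S/N (Poisson: S/N = C / sqrt(C) =
    sqrt(C)), so faint regions get large kernels and bright structure
    stays sharp."""
    result = np.full(counts.shape, np.nan)
    done = np.zeros(counts.shape, dtype=bool)
    for r in range(1, max_radius + 1):
        size = 2 * r + 1
        mean = ndimage.uniform_filter(counts.astype(float), size=size)
        total = mean * size * size                 # summed counts in kernel
        ok = ~done & (np.sqrt(total) >= snr_min)   # Poisson S/N threshold
        result[ok] = mean[ok]
        done |= ok
    # Pixels never reaching the threshold get the largest kernel.
    result[~done] = ndimage.uniform_filter(counts.astype(float),
                                           size=2 * max_radius + 1)[~done]
    return result
```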

14.
Astronomical images currently provide large amounts of data. Lossy compression algorithms have recently been developed for high compression ratios. These compression techniques introduce distortion in the compressed images, and at high compression ratios a blocking effect appears. We propose a modified compression algorithm based on the hcompress scheme, and we introduce a new decompression method based on regularization theory. The image is restored scale by scale in a multiresolution scheme, and the information lost during compression is recovered by applying a Tikhonov regularization constraint. The experimental results show that the blocking effect is reduced, and measurements made on a simulated image show that the astrometric and photometric properties of the restored images are improved.
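The Tikhonov constraint has a closed form in Fourier space; the single-step sketch below illustrates it globally, whereas the method above applies the constraint scale by scale in a multiresolution scheme.

```python
import numpy as np

def tikhonov_restore(blurred, psf, lam=1e-3):
    """Minimizing |H x - y|^2 + lam |x|^2 gives, in Fourier space,
    X = conj(H) Y / (|H|^2 + lam). `psf` must have the same shape as
    the image and be centred, so ifftshift moves its peak to the origin."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Toy usage: blur a point source with a Gaussian PSF and restore it.
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
img = np.zeros((128, 128)); img[40, 60] = 1.0
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = tikhonov_restore(blurred, psf)
```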

15.
Denker C., Yang G., Wang H. Solar Physics, 2001, 202(1): 63-70
In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieving diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only over a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer, which uses off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
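The frame-selection step can be as simple as ranking short-exposure frames by a sharpness proxy; a hedged sketch (real pipelines use more robust seeing metrics than RMS contrast):

```python
import numpy as np

def select_frames(frames, keep=10):
    """Rank frames (shape: n_frames x height x width) by RMS contrast
    and keep the sharpest ones for speckle-masking reconstruction."""
    frames = np.asarray(frames, dtype=float)
    contrast = frames.std(axis=(1, 2)) / frames.mean(axis=(1, 2))
    best = np.argsort(contrast)[::-1][:keep]   # highest contrast first
    return frames[best]
```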

16.
17.
An unbiased method for improving the resolution of astronomical images is presented. The strategy at the core of this method is to establish a linear transformation between the recorded image and an improved image at some desirable resolution. To establish this transformation, only the actual point spread function and a desired point spread function need be known. No actually recorded image is used in establishing the linear transformation between the recorded and improved image.

This method has a number of advantages over other methods currently in use. It is not iterative, which means it is not necessary to impose any criteria, objective or otherwise, to stop the iterations. The method does not require an artificial separation of the image into 'smooth' and 'point-like' components, and is thus unbiased with respect to the character of the structures present in the image. The method produces a linear transformation between the recorded image and the deconvolved image, so the propagation of pixel-by-pixel flux error estimates into the deconvolved image is trivial. It is explicitly constrained to preserve photometry and should be robust against random errors.
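In Fourier space such a transformation can be built from the two point spread functions alone. The sketch below adds a small damping term so that noise is not amplified without bound; the paper's exact construction and its photometric constraints are not reproduced.

```python
import numpy as np

def psf_matching_kernel(actual_psf, desired_psf, eps=1e-3):
    """Fourier-space kernel K such that ifft2(fft2(img) * K) converts
    an image observed with `actual_psf` into one with `desired_psf`.
    Both PSFs are centred arrays with the same shape as the images;
    `eps` damps the division where the actual transfer function is
    small."""
    A = np.fft.fft2(np.fft.ifftshift(actual_psf))
    D = np.fft.fft2(np.fft.ifftshift(desired_psf))
    return D * np.conj(A) / (np.abs(A) ** 2 + eps)
```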

18.
Automatic detection of sub-km craters in high resolution planetary images
Impact craters are among the most studied geomorphic planetary features because they yield information about past geological processes and provide a tool for measuring the relative ages of observed geologic formations. Surveying impact craters is an important task which has traditionally been achieved by visual inspection of images. The sheer number of smaller craters present in high resolution images makes visual counting of such craters impractical. In this paper we present a method that brings together a novel, efficient crater identification algorithm with a data processing pipeline; together they enable fully automatic detection of sub-km craters in large panchromatic images. The technical details of the method are described, and its performance is evaluated using a large, 12.5 m/pixel image centered on Nanedi Valles on Mars. The detection percentage of the method is ∼70%. The system detects over 35,000 craters in this image; localized spots of crater density much higher than the image average are present. The method is designed to produce "million craters" global catalogs of sub-km craters on Mars and other planets wherever high resolution images are available. Such catalogs could be utilized for deriving high spatial resolution and high temporal precision stratigraphy on a regional or even planetary scale.

19.
We describe a method for deriving the position and flux of point and compact sources observed by a scanning survey mission. Results from simulated test data are presented, demonstrating that at least a 10-fold improvement is achievable over extracting the image parameters, position and flux, from the equivalent data in the form of pixel maps. Our method achieves this improvement by analysing the original scan data and performing a combined, iterative solution for the image parameters. This approach allows a full and detailed account of the point-spread function (PSF), or beam profile, of the instrument. Additionally, the positional information from different frequency channels may be combined to provide the flux-detection accuracy at each frequency for the same sky position. Ultimately, a final check and correction of the geometric calibration of the instrument may also be included. The Planck mission was used as the basis for our simulations, but our method will be beneficial for most scanning satellite missions, especially those with non-circularly-symmetric PSFs.
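A 1-D sketch of the combined iterative solution: fit flux and position directly to the scan samples with the beam profile fully accounted for, rather than from a pixel map. The Gaussian beam and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def beam(x, fwhm=5.0):
    """Assumed Gaussian beam profile along the scan direction."""
    s = fwhm / 2.3548
    return np.exp(-0.5 * (x / s) ** 2)

rng = np.random.default_rng(5)
scan_x = np.linspace(-20, 20, 200)              # sample positions along scan
true_flux, true_x0 = 100.0, 3.7
data = true_flux * beam(scan_x - true_x0) + rng.normal(0.0, 1.0, scan_x.size)

residual = lambda p: p[0] * beam(scan_x - p[1]) - data
fit = least_squares(residual, x0=[50.0, 0.0])   # iterate from a rough guess
print(fit.x)                                     # ~ [100, 3.7]
```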

20.
The Heliospheric Imager (HI) instruments on the Solar TErrestrial RElations Observatory (STEREO) observe solar plasma as it streams out from the Sun and into the heliosphere. The telescopes point off-limb (from about 4° to 90° elongation), so the Sun is not in the field of view and cannot be used to confirm the instrument pointing. Until now, the pointing of the instruments has been calculated using the nominal preflight instrument offsets from the STEREO spacecraft together with the spacecraft attitude data. This paper develops a new method for deriving the instrument pointing solutions, along with other optical parameters, by comparing the locations of stars identified in each HI image with the known star positions predicted from a star catalogue. The pointing and optical parameters are varied in an autonomous manner to minimise the discrepancy between the predicted and observed positions of the stars. This method is applied to all HI observations from the beginning of the mission to the end of April 2008. For the vast majority of images a good attitude solution has been obtained, with a mean-squared deviation between the observed and predicted star positions of one image pixel or less. Updated values have been obtained for the instrument offsets relative to the spacecraft and for the optical parameters of the HI cameras. With this method the HI images can be considered "self-calibrating," with the actual instrument offsets calculated as a byproduct. The updated pointing results and their by-products have been implemented in SolarSoft.
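A sketch of the fitting idea: vary a pointing offset and roll angle until catalogue-predicted star positions match those measured in the image. The real solution also fits optical parameters such as plate scale and distortion, which are omitted here.

```python
import numpy as np
from scipy.optimize import least_squares

def predict(params, catalogue_xy):
    """Apply a roll rotation and (dx, dy) shift to catalogue positions."""
    dx, dy, roll = params
    c, s = np.cos(roll), np.sin(roll)
    R = np.array([[c, -s], [s, c]])
    return catalogue_xy @ R.T + np.array([dx, dy])

rng = np.random.default_rng(6)
catalogue_xy = rng.uniform(-500, 500, size=(40, 2))   # predicted star pixels
truth = (3.2, -1.5, 0.002)                            # unknown offsets + roll
observed_xy = predict(truth, catalogue_xy) + rng.normal(0.0, 0.3, (40, 2))

res = lambda p: (predict(p, catalogue_xy) - observed_xy).ravel()
fit = least_squares(res, x0=[0.0, 0.0, 0.0])
print(fit.x, np.sqrt(np.mean(res(fit.x) ** 2)))       # offsets, RMS in pixels
```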
