1.
We investigate the Gaussianity of the 4-yr COBE DMR data (in HEALPix pixelization) using an analysis based on spherical Haar wavelets. We use all the pixels lying outside the Galactic cut and compute the skewness, kurtosis and scale–scale correlation spectra for the wavelet coefficients at each scale. We also take into account the sensitivity of the method to the orientation of the input signal. We find a detection of non-Gaussianity at >99 per cent level in just one of our statistics. Taking into account the total number of statistics computed, we estimate that the probability of obtaining such a detection by chance for an underlying Gaussian field is 0.69. Therefore, we conclude that the spherical wavelet technique shows no strong evidence of non-Gaussianity in the COBE DMR data.
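The abstract does not spell out the estimators; as a minimal sketch (pure Python, standard moment-based definitions, not necessarily the paper's exact implementation), the per-scale skewness and excess kurtosis of the wavelet coefficients could be computed as:

```python
def skewness_kurtosis(coeffs):
    """Sample skewness and excess kurtosis of one scale's wavelet
    coefficients. For a Gaussian field both are expected to be near zero."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    m2 = sum((c - mean) ** 2 for c in coeffs) / n
    m3 = sum((c - mean) ** 3 for c in coeffs) / n
    m4 = sum((c - mean) ** 4 for c in coeffs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0
```

Significance would then be assessed by comparing these values against the same statistics computed on Gaussian simulations.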

2.
The estimation of the frequency, amplitude and phase of a sinusoid from observations contaminated by correlated noise is considered. It is assumed that the observations are regularly spaced, but may suffer missing values or long time stretches with no data. The typical astronomical source of such data is high-speed photoelectric photometry of pulsating stars. The study of the observational noise properties of nearly 200 real data sets is reported: noise can almost always be characterized as a random walk with superposed white noise. A scheme for obtaining weighted non-linear least-squares estimates of the parameters of interest, as well as standard errors of these estimates, is described. Simulation results are presented for both complete and incomplete data. It is shown that, in finite data sets, results are sensitive to the initial phase of the sinusoid.
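The estimation scheme itself is not reproduced in the abstract; as an unweighted, white-noise sketch (the paper's version is weighted and non-linear in the frequency), the amplitude and phase at a fixed trial frequency follow from linear least squares, which copes with gaps in the time axis naturally:

```python
import math

def fit_sinusoid(t, y, freq):
    """For a fixed trial frequency, fit y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t)
    by linear least squares; missing samples are handled naturally because
    no even spacing is assumed. Returns (amplitude, phase) such that
    y ~ amplitude * cos(2*pi*f*t - phase)."""
    w = 2.0 * math.pi * freq
    cc = sum(math.cos(w * ti) ** 2 for ti in t)
    ss = sum(math.sin(w * ti) ** 2 for ti in t)
    cs = sum(math.cos(w * ti) * math.sin(w * ti) for ti in t)
    yc = sum(yi * math.cos(w * ti) for ti, yi in zip(t, y))
    ys = sum(yi * math.sin(w * ti) for ti, yi in zip(t, y))
    det = cc * ss - cs * cs
    a = (ss * yc - cs * ys) / det
    b = (cc * ys - cs * yc) / det
    return math.hypot(a, b), math.atan2(b, a)
```

The full non-linear problem would iterate this over frequency, with weights reflecting the random-walk-plus-white noise model.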

3.
Night-sky stellar magnitude information and its distribution over sky regions at an astronomical observing station can be used to guide survey observations with multiple instruments. An all-sky camera monitoring system is built to monitor the local night sky in real time, and the acquired images require effective processing methods to extract stellar image information from the full frames. Because of the large field of view and high-order distortion of all-sky images, the plate constants are computed with a combination of the zenithal equidistant projection and polynomial functions. The root-mean-square residual of the astrometric solution is about 0.15 pixels. Magnitude differences obtained from photometry of the bright stars in each image are used to correct for atmospheric extinction. Finally, the HEALPix (Hierarchical Equal Area isoLatitude Pixelation) method is used to partition the sky and to store the observable limiting magnitude of each sky region.
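For illustration, the zenithal equidistant projection at the heart of the plate model maps azimuth and altitude to tangent-plane coordinates with radius proportional to zenith distance; a pure-Python sketch (degrees in, degrees out; the polynomial distortion terms fitted on top of this are omitted):

```python
import math

def zenithal_equidistant(az_deg, alt_deg):
    """Project (azimuth, altitude) in degrees to flat (x, y) coordinates:
    the radial distance from the image centre equals the zenith distance,
    which suits all-sky (fish-eye) cameras to first order."""
    r = 90.0 - alt_deg          # zenith distance in degrees
    az = math.radians(az_deg)
    return r * math.sin(az), r * math.cos(az)
```

The plate constants then come from fitting low-order polynomials in (x, y) to the residuals between projected and measured stellar positions.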

4.
5.
In order to derive the stellar population of a galaxy or a star cluster, it is common practice to fit its spectrum with a combination of spectra extracted from a data base (e.g. a library of stellar spectra). If the data to be fitted are equivalent widths, the combination is non-linear, and the problem of finding the 'best' combination of stars that fits the data becomes complex. It is probably because of this complexity that the mathematical aspects of the problem have not received a satisfactory treatment; the question of the uniqueness of the solution, for example, was left unsettled. In this paper we complete the solution of the problem by considering the underdetermined case, where there are fewer equivalent widths to fit than stars in the data base (the overdetermined case was treated previously). The underdetermined case is interesting because it leaves room for the addition of supplementary astrophysical constraints. In fact, it is shown in this paper that when a solution exists it is generally not unique: there are infinitely many solutions, all of them contained within a convex polyhedron in the solution vector space. The vertices of this polyhedron are extremal solutions of the stellar population synthesis. If no exact solution exists, an approximate solution can be found using the method described for the overdetermined case. An algorithm able to solve the problem numerically is also provided; in particular, it finds all the vertices of the polyhedron.
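The vertex-enumeration idea can be illustrated on a linear toy problem (the paper's equivalent-width constraints are non-linear): for A x = b with x >= 0 and two equations in n > 2 unknowns, the vertices of the solution polyhedron are the basic feasible solutions, obtained by solving every 2x2 column subsystem and keeping the non-negative ones. A hedged stdlib sketch:

```python
from itertools import combinations

def solve2(A, b, cols):
    """Cramer's rule for the 2x2 subsystem restricted to two columns."""
    i, j = cols
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if abs(det) < 1e-12:
        return None
    xi = (b[0] * A[1][j] - b[1] * A[0][j]) / det
    xj = (A[0][i] * b[1] - A[1][i] * b[0]) / det
    return xi, xj

def vertices(A, b):
    """Basic feasible solutions (polyhedron vertices) of A x = b, x >= 0,
    for a system with 2 equations and n > 2 unknowns."""
    n = len(A[0])
    out = []
    for cols in combinations(range(n), 2):
        sol = solve2(A, b, cols)
        if sol is None or min(sol) < -1e-9:
            continue
        x = [0.0] * n
        x[cols[0]], x[cols[1]] = sol
        out.append(x)
    return out
```

Every exact solution of the underdetermined system is then a convex combination of these vertices.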

6.
A probabilistic technique for the joint estimation of background and sources with the aim of detecting faint and extended celestial objects is described. Bayesian probability theory is applied to gain insight into the co-existence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. A multiresolution analysis is used for revealing faint and extended objects in the frame of the Bayesian mixture model. All the revealed sources are parametrized automatically providing source position, net counts, morphological parameters and their errors.
We demonstrate the capability of our method by applying it to three simulated data sets characterized by different background and source intensities. Results obtained with two different priors on the source-signal distribution are shown. The probabilistic method allows for the detection of bright and faint sources independently of their morphology and the kind of background. The results from our analysis of the three simulated data sets are compared with other source detection methods. Additionally, the technique is applied to ROSAT All-Sky Survey data.
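The flavour of the two-component mixture can be conveyed with a toy Poisson version (the rates and prior mixing weight below are illustrative assumptions, not the paper's values):

```python
import math

def source_probability(counts, bkg_rate, src_rate, prior_src=0.1):
    """Posterior probability that a pixel's counts were drawn from
    background + source rather than background alone, assuming Poisson
    likelihoods for both components and a fixed prior mixing weight."""
    def pois(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    l_src = pois(counts, bkg_rate + src_rate)
    l_bkg = pois(counts, bkg_rate)
    return prior_src * l_src / (prior_src * l_src + (1.0 - prior_src) * l_bkg)
```

The actual method marginalizes over unknown background and source intensities rather than fixing them, which is what yields consistent uncertainties for both components.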

7.
In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the amplitude and frequency of any such signals, within the sensitivity band of the detector. Such signals are most likely generated by the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which provide useful information on the performance of each technique. The selection of candidate events is then established according to threshold-crossing probabilities, based on the Neyman–Pearson criterion. In particular, it is shown that our approach, based on phase estimation, yields a better signal-to-noise ratio than pure spectral analysis, the most common approach.
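For Gaussian detector noise, the Neyman–Pearson approach fixes the false-alarm probability first and derives the crossing threshold from it; a minimal sketch using only the standard library:

```python
from statistics import NormalDist

def detection_threshold(sigma, false_alarm_prob):
    """Amplitude level whose exceedance probability under zero-mean Gaussian
    noise of standard deviation sigma equals the chosen false-alarm rate."""
    return sigma * NormalDist().inv_cdf(1.0 - false_alarm_prob)
```

Candidate events are then the statistic values that cross this level; lowering the false-alarm rate raises the threshold.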

8.
The theory of low-order linear stochastic differential equations is reviewed. Solutions to these equations give the continuous time analogues of discrete time autoregressive time-series. Explicit forms for the power spectra and covariance functions of first- and second-order forms are given. A conceptually simple method is described for fitting continuous time autoregressive models to data. Formulae giving the standard errors of the parameter estimates are derived. Simulated data are used to verify the performance of the methods. Irregularly spaced observations of the two hydrogen-deficient stars FQ Aqr and NO Ser are analysed. In the case of FQ Aqr the best-fitting model is of second order, and describes a quasi-periodicity of about 20 d with an e-folding time of 3.7 d. The NO Ser data are best fitted by a first-order model with an e-folding time of 7.2 d.
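For reference, the first-order process named above (the continuous-time analogue of an AR(1) series, i.e. an Ornstein–Uhlenbeck process) has the exponential autocovariance through which the e-folding times are quoted:

```python
import math

def car1_autocovariance(lag, variance, efold_time):
    """Autocovariance of a first-order continuous-time autoregressive
    process: variance * exp(-|lag| / e-folding time)."""
    return variance * math.exp(-abs(lag) / efold_time)
```

At a lag equal to the e-folding time the autocovariance has dropped by a factor of e, which is what "e-folding time of 7.2 d" means for the NO Ser fit.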

9.
A selection criterion based on the relative strength of the largest peaks in the amplitude spectra and an information criterion are used in combination to search for multiperiodicities in Hipparcos epoch photometry. The method is applied to all stars classified as variable in the Hipparcos catalogue: periodic, unsolved and microvariables. Results are assessed critically: although there are many problems arising from aliasing, there are also a number of interesting frequency combinations which deserve further investigation. One such result is the possible occurrence of multiple periods of the order of a day in a few early A-type stars. The Hipparcos catalogue also contains a number of these stars with single periodicities: such stars with no obvious variability classification are listed, and information about their properties (e.g. radial velocity variations) is discussed. These stars may constitute a new class of pulsators.

10.
11.
In many astronomical problems one often needs to determine the upper and/or lower boundary of a given data set. An automatic and objective approach consists in fitting the data using a generalized least-squares method, where the function to be minimized is defined to handle the data on either side of the boundary asymmetrically. To minimize the cost function, a numerical approach based on the popular downhill simplex method is employed. The procedure is valid for any numerically computable function. Simple polynomials provide good boundaries in common situations; for data exhibiting complex behaviour, adaptive splines give excellent results. Since the method is sensitive to extreme data points, error weighting and the flexibility of allowing some points to fall outside the fitted frontier supply the parameters that tune the boundary fitting to the nature of the problem considered. Two simple examples are presented, namely the estimation of the pseudo-continuum of spectra and the segregation of scattered data into ranges. Normalizing the data ranges prior to the fit typically reduces both the numerical errors and the number of iterations required during the iterative minimization procedure.
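The asymmetric handling of residuals can be sketched as follows (the weight is an illustrative choice, not the paper's; feeding this cost to a downhill-simplex minimizer pushes the fitted curve toward the upper envelope of the data):

```python
def asymmetric_cost(residuals, weight_above=1000.0):
    """Sum of squared residuals, with points lying above the candidate upper
    boundary (r > 0, where r = data - boundary) penalized far more heavily
    than points below it."""
    return sum(weight_above * r * r if r > 0 else r * r for r in residuals)
```

Swapping the inequality gives a lower boundary instead; error weighting would multiply each term by the point's inverse variance.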

12.
A statistical model is formulated that enables one to analyse jointly the times between maxima and minima in the light curves of monoperiodic pulsating stars. It is shown that the combination of both sets of data into one leads to analyses that are more sensitive. Illustrative applications to the American Association of Variable Star Observers data for a number of long-period variables demonstrate the usefulness of the approach.

13.
The 10.7 cm solar radio flux (F10.7), the solar radio flux density at a wavelength of 10.7 cm, is a useful index of solar activity and a proxy for solar extreme ultraviolet radiation. Accurate predictions of F10.7 are meaningful and important for both long-term (months to years) and short-term (days) forecasting, since the index is often used as input to space weather models. This study applies support vector regression (SVR), a kernel-based machine-learning technique, to forecasting daily values of F10.7, with the aim of examining the feasibility of SVR for short-term F10.7 forecasting. The approach reduces the dimension of the feature space in the training process by using a kernel-based learning algorithm, so the computational complexity is lower and a small amount of training data suffices. Time series of F10.7 from 2002 to 2006 are employed as the data sets. The performance of the approach is estimated by calculating the normalized mean square error and the mean absolute percentage error. It is shown that our approach performs well while using fewer training data points than a traditional neural network.
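The two quoted performance metrics can be written down directly (plain textbook definitions; normalizing the mean square error by the data variance is an assumption, since the abstract does not define it):

```python
def mape(actual, predicted):
    """Mean absolute percentage error of a forecast, in per cent."""
    n = len(actual)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n

def nmse(actual, predicted):
    """Mean square error normalized by the variance of the actual data."""
    n = len(actual)
    mean = sum(actual) / n
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    var = sum((a - mean) ** 2 for a in actual) / n
    return mse / var
```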

14.
We present an extensive frequentist analysis of the one-point statistics (number, mean, variance, skewness and kurtosis) and two-point correlation functions determined for the local extrema of the cosmic microwave background temperature field observed in five years of Wilkinson Microwave Anisotropy Probe (WMAP) data. Application of a hypothesis test on the one-point statistics indicates a low variance of hot and cold spots in all frequency bands of the WMAP data. The consistency of the observations with Gaussian simulations of the best-fitting cosmological model is rejected at the 95 per cent confidence level outside the WMAP KQ75 mask and in the northern hemispheres of the Galactic and ecliptic coordinate frames. We demonstrate that it is unlikely that residual Galactic foreground emission contributes to the observed non-Gaussianities. However, the application of a high-pass filter that removes large angular scale power does improve the consistency with the best-fitting cosmological model.
Two-point correlation functions of the local extrema are calculated for both the temperature pair product [temperature–temperature (T–T)] and spatial pair counting [point–point (P–P)]. The T–T observations demonstrate weak correlation on scales below 20° and lie completely below the lower 3σ confidence region once various temperature thresholds are applied to the extrema determined for the KQ75 mask and northern sky partitions. The P–P correlation structure corresponds to the clustering properties of the temperature extrema, and provides evidence that it is the large angular-scale structures, and some unusual properties thereof, that are intimately connected to the properties of the hot and cold spots observed in the WMAP five-year data.
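A brute-force sketch of the T–T statistic (pure Python; extrema as (lon, lat, temperature) triples in degrees, with naive pair counting rather than whatever optimization the analysis used):

```python
import math

def angular_separation(p1, p2):
    """Great-circle separation in degrees between two (lon, lat) points."""
    lon1, lat1 = map(math.radians, p1)
    lon2, lat2 = map(math.radians, p2)
    c = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def tt_correlation(extrema, bin_width=5.0, max_sep=180.0):
    """Mean temperature pair product T1*T2 binned by angular separation."""
    nbins = int(max_sep / bin_width)
    sums, counts = [0.0] * nbins, [0] * nbins
    for i in range(len(extrema)):
        for j in range(i + 1, len(extrema)):
            lon1, lat1, t1 = extrema[i]
            lon2, lat2, t2 = extrema[j]
            sep = angular_separation((lon1, lat1), (lon2, lat2))
            b = min(int(sep // bin_width), nbins - 1)
            sums[b] += t1 * t2
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

The P–P version would bin plain pair counts instead of temperature products and compare them with the counts expected for unclustered points.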

15.
As galaxy surveys become larger and more complex, keeping track of the completeness, magnitude limit and other survey parameters as a function of direction on the sky becomes an increasingly challenging computational task. For example, typical angular masks of the Sloan Digital Sky Survey contain about N = 300 000 distinct spherical polygons. Managing masks with such large numbers of polygons becomes intractably slow, particularly for tasks that run in O(N²) time with a naive algorithm, such as finding which polygons overlap each other. Here we present a 'divide-and-conquer' solution to this challenge: we first split the angular mask into pre-defined regions called 'pixels', such that each polygon is in only one pixel, and then perform further computations, such as checking for overlap, on the polygons within each pixel separately. This reduces O(N²) tasks to O(N), and also reduces the important task of determining in which polygon(s) a point on the sky lies from O(N) to O(1), resulting in significant computational speedup. Additionally, we present a method to efficiently convert any angular mask to and from the popular HEALPix format. This method can be generically applied to convert to and from any desired spherical pixelization. We have implemented these techniques in a new version of the mangle software package, which is freely available at http://space.mit.edu/home/tegmark/mangle/, along with complete documentation and example applications. These new methods should prove quite useful to the astronomical community, and since mangle is a generic tool for managing angular masks on a sphere, it has the potential to benefit terrestrial mapmaking applications as well.
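The divide-and-conquer idea reduces to a pixel-to-polygon index; a deliberately simplified 1-D toy (RA intervals instead of spherical polygons, fixed 10-degree pixels, every mask assumed to fit in one pixel):

```python
from collections import defaultdict

PIX = 10.0  # pixel width in degrees for this 1-D toy "sky"

def build_index(masks):
    """Index each mask (an RA interval, assumed to lie within a single
    pixel) under its pixel number, mirroring mangle's pixelization step."""
    index = defaultdict(list)
    for name, (lo, hi) in masks.items():
        index[int(lo // PIX)].append(name)
    return index

def masks_containing(index, masks, ra):
    """Only the masks sharing the query point's pixel are tested, turning
    an O(N) scan into an O(1) average-cost lookup."""
    return [m for m in index.get(int(ra // PIX), [])
            if masks[m][0] <= ra < masks[m][1]]
```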

16.
We make use of 3456 d of observations of the low-ℓ p-mode oscillations of the Sun in order to study the evolution over time of the measurement precision of the radial eigenfrequencies. These data were collected by the ground-based Birmingham Solar-Oscillations Network (BiSON) between 1991 January and 2000 June. When the power spectrum of the complete time series is fitted, the analysis yields frequency uncertainties that are close to those expected from the returned coherence times of the modes. The slightly elevated levels compared with the prediction appear to be consistent with a degradation of the signal-to-noise ratio in the spectrum that is the result of the influence of the window function of the observations (duty cycle 71 per cent). The fractional frequency precision reaches levels of a few parts in 10^6 for many of the modes. The corresponding errors reported from observations made by the GOLF instrument on board the ESA/NASA SOHO satellite, when extrapolated to the length of the BiSON data set, are shown to be on average about 25 per cent smaller than their BiSON counterparts owing to the uninterrupted nature of the data from which they were derived.
An analysis of the BiSON data in contiguous segments of different lengths, T, demonstrates that the frequency uncertainties scale as T^(−1/2). This is to be expected in the regime where the coherence (life) times of the modes, τ_nℓ, are smaller than the observing time T (the 'oversampled' regime). We show that mode detections are only now beginning to encroach on the 'undersampled' regime (where T < τ_nℓ).
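The T^(−1/2) scaling gives a simple extrapolation rule, of the kind used above to compare GOLF and BiSON uncertainties at a common data-set length:

```python
def freq_uncertainty(sigma_ref, t_ref, t_new):
    """Rescale a mode-frequency uncertainty measured over observing length
    t_ref to a new length t_new, assuming the oversampled-regime T**-0.5
    law (valid while the mode lifetime is shorter than T)."""
    return sigma_ref * (t_ref / t_new) ** 0.5
```

Quadrupling the observing time thus halves the expected frequency uncertainty, until the undersampled regime is reached.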

17.
It is well known that in the power spectrum solar p modes have asymmetric profiles, which depart from a Lorentzian shape. We present a framework to explain the contribution of correlated background noise, from the acoustic source, to this asymmetry. An important prediction is that observed peak asymmetry may differ depending on the way the p-mode observations are made, and on how the data are prepared. Furthermore, if valid, the proposed framework may provide the basis for separating the contribution of the correlated noise from that of the source location and properties.
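A common parametrization of such departures from a Lorentzian is the Nigam–Kosovichev form, used here purely for illustration (not necessarily the exact profile of the proposed framework):

```python
def asymmetric_lorentzian(x, asym):
    """Line profile ((1 + a*x)**2 + a**2) / (1 + x**2), with x the frequency
    offset in units of the half-linewidth and a the asymmetry parameter;
    a = 0 recovers a plain Lorentzian."""
    return ((1.0 + asym * x) ** 2 + asym ** 2) / (1.0 + x ** 2)
```

A non-zero asymmetry parameter raises one wing of the peak relative to the other, which is the signature attributed above to correlated background noise.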

18.
The X-ray timing data for the Crab pulsar obtained by the Chinese X-ray pulsar navigation test satellite are processed and analyzed. The method used to build the integrated, standard X-ray pulse profiles of the Crab pulsar from the X-ray pulsar observation data and the satellite orbit data is described. The principle and algorithm for determining the pulse time of arrival (TOA) in the frequency domain are briefly introduced. The TOA is calculated from 50-min integrations of each set of observational data. By comparing the observed TOAs of the Crab pulsar at the solar system barycenter with those predicted from the Crab pulsar ephemeris, it is found that the timing accuracy is about 14 μs after the systematic error is removed by a quadratic polynomial fit.
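A time-domain stand-in for the TOA step (the paper works in the frequency domain; this toy maximizes the circular cross-correlation between a standard template and an observed profile, to whole-bin precision only):

```python
def toa_phase_shift(template, profile):
    """Phase shift, in profile bins, that best aligns the standard pulse
    template with the observed integrated profile (exhaustive circular
    search over all shifts)."""
    n = len(template)
    best_shift, best_corr = 0, float("-inf")
    for shift in range(n):
        corr = sum(template[(i + shift) % n] * profile[i] for i in range(n))
        if corr > best_corr:
            best_shift, best_corr = shift, corr
    return best_shift
```

Converting the shift to time via the pulse period, and referring it to the solar system barycenter, yields the TOA residuals quoted above.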

19.
We present a harmonic model for the data analysis of an all-sky cosmic microwave background survey, such as Planck, where the survey is obtained through ring-scans of the sky. In this model, resampling and pixelization of the data are avoided. The spherical transforms of the sky at each frequency, in total intensity and polarization, as well as the bright-point-source catalogue, are derived directly from the data reduced on to the rings. Formal errors and the most significant correlation coefficients for the spherical transforms of the frequency maps are preserved. A clean and transparent path from the original samplings in the time domain to the final scientific products is thus obtained. The data analysis is largely based on Fourier analysis of rings; the positional stability of the instrument's spin axis during these scans is a requirement for the data model and is investigated here for the Planck satellite. Brighter point sources are recognized and extracted as part of the ring reductions and, on the basis of accumulated data, used to build the bright-point-source catalogue. The analysis of the rings is performed in an iterative loop, involving a range of geometric and detector response calibrations. The geometric calibrations are used to reconstruct the paths of the detectors over the sky during a scan and the phase offsets between scans of different detectors; the response calibrations eliminate short- and long-term variations in detector response. Point-source information may allow the reconstruction of the beam profile. The reconstructed spherical transforms of the sky in each frequency channel form the input to the subsequent analysis stages. Although the methods in this paper were developed with the data processing for the Planck satellite in mind, there are many aspects which have wider implementation possibilities, including the construction of real-space pixelized maps.
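The Fourier-analysis-of-rings step reduces, for one ring, to estimating harmonic coefficients from the samples accumulated along it; a stdlib sketch assuming evenly spaced, noise-free samples:

```python
import math

def ring_coefficients(samples, mmax):
    """Fourier cosine/sine coefficients (a_m, b_m), m = 0..mmax, of evenly
    spaced samples around a scan ring (note: with this normalization the
    m = 0 term carries a factor of 2 relative to the ring mean)."""
    n = len(samples)
    out = []
    for m in range(mmax + 1):
        a = 2.0 / n * sum(s * math.cos(2.0 * math.pi * m * k / n)
                          for k, s in enumerate(samples))
        b = 2.0 / n * sum(s * math.sin(2.0 * math.pi * m * k / n)
                          for k, s in enumerate(samples))
        out.append((a, b))
    return out
```

The per-ring coefficients from all scan circles are then combined into spherical transforms of the sky, which is the stage where the geometric calibration of the spin-axis position enters.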

20.