Similar documents
20 similar documents were retrieved.
1.
We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson, which itself significantly outperformed existing Markov chain Monte Carlo techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla Λ cold dark matter model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software, which is fully parallelized using MPI and includes an interface to CosmoMC, is available at http://www.mrao.cam.ac.uk/software/multinest/. It will also be released as part of the SuperBayeS package, for the analysis of supersymmetric theories of particle physics, at http://www.superbayes.org.
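MultiNest itself implements multimodal ellipsoidal nested sampling; purely as a hedged illustration of the underlying evidence calculation, the minimal Python sketch below runs textbook nested sampling on a toy two-dimensional Gaussian likelihood. All names, settings and the rejection-sampling replacement step are illustrative assumptions, not MultiNest's algorithm or API.

    import numpy as np

    rng = np.random.default_rng(0)
    ndim, nlive, niter = 2, 200, 1500

    def loglike(theta):
        # Toy likelihood: isotropic Gaussian of width 0.1 centred on (0.5, 0.5)
        return -0.5 * np.sum(((theta - 0.5) / 0.1) ** 2)

    # Live points drawn from a unit-hypercube prior
    live = rng.uniform(size=(nlive, ndim))
    live_logl = np.array([loglike(p) for p in live])

    logz, x_prev, h = -np.inf, 1.0, 0.0
    for i in range(1, niter + 1):
        worst = np.argmin(live_logl)
        logl_star = live_logl[worst]
        x_i = np.exp(-i / nlive)                 # expected prior-volume shrinkage
        logw = np.log(x_prev - x_i) + logl_star  # weight of the discarded point
        logz_new = np.logaddexp(logz, logw)
        # Running information H, used below for the error estimate on ln Z
        if np.isfinite(logz):
            h = (np.exp(logw - logz_new) * logl_star
                 + np.exp(logz - logz_new) * (h + logz) - logz_new)
        else:
            h = logl_star - logz_new
        logz, x_prev = logz_new, x_i
        # Replace the worst live point by a prior draw with higher likelihood.
        # Plain rejection sampling is used only because this problem is tiny;
        # MultiNest instead draws from bounding ellipsoids around the live points.
        while True:
            cand = rng.uniform(size=ndim)
            cand_logl = loglike(cand)
            if cand_logl > logl_star:
                live[worst], live_logl[worst] = cand, cand_logl
                break

    # Add the (small) contribution of the remaining live points
    logz = np.logaddexp(logz, np.log(x_prev / nlive) + np.logaddexp.reduce(live_logl))
    print(f"ln Z = {logz:.3f} +/- {np.sqrt(h / nlive):.3f}")

For this toy problem the analytic value is ln(2πσ²) ≈ −2.77 (up to negligible edge truncation), which the sketch should recover within its quoted uncertainty.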

2.
3.
The observation of flux sources near the limit of detection requires a careful evaluation of possible biases in magnitude determination. Both the traditional logarithmic magnitudes and the recently proposed inverse hyperbolic sine (asinh) magnitudes are considered. Formulae are derived for three different biasing mechanisms: the statistical spread of the observed flux values arising from e.g. measurement error; the dependence of these errors on the true flux; and the dependence of the observing probability on the true flux. As an example of the results, it is noted that biases at large signal-to-noise ratios R, at which the two types of magnitude are similar, are of the order of −(p+1)/R², where the exponent p parametrizes a power-law dependence of the probability of observation on the true flux.
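The −(p+1)/R² result folds in both the error model and the selection effect; as a hedged illustration of just the simplest ingredient (the bias induced by the statistical spread of the measured flux), the sketch below compares classical and asinh magnitudes in a Monte Carlo with Gaussian flux errors. The softening parameter b and the flux scale are arbitrary choices, and the asinh form follows the usual Lupton, Gunn & Szalay definition rather than anything specific to this paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def classical_mag(flux, f0=1.0):
        # Traditional (Pogson) magnitude; undefined for non-positive flux
        return -2.5 * np.log10(flux / f0)

    def asinh_mag(flux, f0=1.0, b=0.01):
        # Inverse-hyperbolic-sine magnitude; b is the softening parameter.
        # Well behaved at zero and at negative measured flux.
        return -2.5 / np.log(10) * (np.arcsinh(flux / (2 * b * f0)) + np.log(b))

    true_flux = 1.0
    for R in (5, 10, 20, 50):                    # signal-to-noise ratio
        sigma = true_flux / R
        obs = true_flux + sigma * rng.normal(size=1_000_000)
        bias_classical = np.mean(classical_mag(obs[obs > 0])) - classical_mag(true_flux)
        bias_asinh = np.mean(asinh_mag(obs)) - asinh_mag(true_flux)
        print(f"R = {R:2d}: classical bias = {bias_classical:+.4f} mag, "
              f"asinh bias = {bias_asinh:+.4f} mag  (both scale roughly as 1/R^2)")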

4.
A time series is a sample of observations of well-defined data points obtained through repeated measurements over a certain time range. The analysis of such data samples has become increasingly important not only in natural science but also in many other fields of research. Peranso offers a complete set of powerful light curve and period analysis functions to work with large astronomical data sets. Substantial attention has been given to ease of use and data accuracy, making it one of the most productive time-series analysis software packages available. In this paper, we give an introduction to Peranso and its functionality. (© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
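Peranso is a stand-alone GUI application, so no code from it is shown here; purely as a hedged sketch of the kind of period search it automates, the snippet below runs a generic Lomb–Scargle analysis of a simulated unevenly sampled light curve with astropy. The period, amplitude and noise level are invented for illustration and have nothing to do with Peranso's implementation.

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(2)

    # Simulated unevenly sampled light curve: period 0.7654 d, amplitude 0.02 mag
    t = np.sort(rng.uniform(0.0, 60.0, size=400))
    mag = (12.0 + 0.02 * np.sin(2 * np.pi * t / 0.7654)
           + 0.005 * rng.normal(size=t.size))

    ls = LombScargle(t, mag, dy=0.005)
    frequency, power = ls.autopower(minimum_frequency=0.05, maximum_frequency=10.0)

    best = frequency[np.argmax(power)]
    print(f"best period = {1 / best:.4f} d, "
          f"analytic false-alarm probability = {ls.false_alarm_probability(power.max()):.2e}")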

5.
We investigate the application of neural networks to the automation of MK spectral classification. The data set for this project consists of a set of over 5000 optical (3800–5200 Å) spectra obtained from objective prism plates from the Michigan Spectral Survey. These spectra, along with their two-dimensional MK classifications listed in the Michigan Henry Draper Catalogue, were used to develop supervised neural network classifiers. We show that neural networks can give accurate spectral type classifications (σ_68 = 0.82 subtypes, σ_rms = 1.09 subtypes) across the full range of spectral types present in the data set (B2–M7). We show also that the networks yield correct luminosity classes for over 95 per cent of both dwarfs and giants with a high degree of confidence. Stellar spectra generally contain a large amount of redundant information. We investigate the application of principal components analysis (PCA) to the optimal compression of spectra. We show that PCA can compress the spectra by a factor of over 30 while retaining essentially all of the useful information in the data set. Furthermore, it is shown that this compression optimally removes noise and can be used to identify unusual spectra. This paper is a continuation of the work carried out by von Hippel et al. (Paper I).
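As a hedged sketch of the PCA compression step only (not of the neural-network classifier), the snippet below projects spectra onto leading principal components obtained from an SVD. A random stand-in array is used so the sketch runs; the array dimensions and the number of retained components are illustrative, not those of the Michigan data set.

    import numpy as np

    rng = np.random.default_rng(3)
    n_stars, n_pixels, n_components = 500, 800, 25
    spectra = rng.normal(size=(n_stars, n_pixels))   # stand-in for real spectra

    mean_spectrum = spectra.mean(axis=0)
    X = spectra - mean_spectrum

    # Principal components via SVD of the mean-subtracted data matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]                   # (n_components, n_pixels)

    # Compress: project each spectrum onto the leading components
    coeffs = X @ components.T                        # (n_stars, n_components)

    # Reconstruct and measure the fraction of variance retained
    reconstruction = coeffs @ components + mean_spectrum
    explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    print(f"compression factor ~ {n_pixels / n_components:.0f}x, "
          f"variance retained = {explained:.3f}")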

6.
7.
This contribution aims to introduce the idea that a well-evolved HTN of the far future, with the anticipated addition of very large apertures, could also be made to incorporate the ability to carry out photonic astronomy observations, particularly Optical VLBI in a revived Hanbury-Brown and Twiss Intensity Interferometry (HBTII) configuration. Such an HTN could exploit its inherent rapid reconfigurational ability to become a multi-aperture distributed photon-counting network able to study higher-order spatiotemporal photon correlations and provide a unique tool for direct diagnostics of astrophysical emission processes. We very briefly review various considerations associated with the switching of the HTN to a special mode in which single-photon detection events are continuously captured for a posteriori intercorrelation. In this context, photon arrival times should be determined to the highest time resolution possible and extremely demanding absolute time keeping and absolute time distribution schemes should be devised and implemented in the HTN nodes involved. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
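To make the intensity-interferometry idea concrete, here is a hedged toy calculation of the normalised second-order correlation g2(τ) between photon-count streams from two apertures. The Poisson rates and the correlated "shared" component are fabricated purely so that g2(0) exceeds unity; nothing here reflects an actual HTN data format or time-tagging scheme.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy photon-count streams at two apertures, binned at the detector time
    # resolution (counts per bin). Real HBTII data would be time-tagged
    # single-photon events converted to such binned streams.
    n_bins = 200_000
    shared = rng.poisson(0.05, size=n_bins)          # correlated (bunched) component
    n1 = shared + rng.poisson(0.10, size=n_bins)     # aperture 1
    n2 = shared + rng.poisson(0.10, size=n_bins)     # aperture 2

    def g2(n1, n2, max_lag=20):
        """Normalised second-order cross-correlation g2(tau), lag in bins."""
        g = []
        for lag in range(max_lag + 1):
            a, b = n1[:n_bins - lag], n2[lag:]
            g.append(np.mean(a * b) / (np.mean(a) * np.mean(b)))
        return np.array(g)

    corr = g2(n1, n2)
    print(f"g2(0) = {corr[0]:.3f}  (uncorrelated light would give ~1.000)")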

8.
We use Bayesian model selection tools to forecast the Planck satellite's ability to distinguish between different models for the re-ionization history of the Universe, using the large angular scale signal in the cosmic microwave background polarization spectrum. We find that Planck is not expected to be able to distinguish between an instantaneous re-ionization model and a two-parameter smooth re-ionization model, except for extreme values of the additional re-ionization parameter. If it cannot, then it will be unable to distinguish between different two-parameter models either. However, Bayesian model averaging will be needed to obtain unbiased estimates of the optical depth to re-ionization. We also generalize our results to a hypothetical future cosmic variance limited microwave anisotropy survey, where the outlook is more optimistic.
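Model comparison of this kind reduces to differences of log-evidences. The helper below illustrates the bookkeeping, using one commonly used calibration of the Jeffreys scale; the thresholds and the example numbers are illustrative conventions, not the paper's forecasts.

    import numpy as np

    def interpret_bayes_factor(ln_z_model1, ln_z_model2):
        """Compare two models via the logarithm of the Bayes factor.

        ln_z_* are ln-evidences, e.g. as returned by a nested sampler.
        Thresholds follow one common version of the Jeffreys scale."""
        ln_b = ln_z_model1 - ln_z_model2
        if abs(ln_b) < 1.0:
            verdict = "inconclusive"
        elif abs(ln_b) < 2.5:
            verdict = "weak"
        elif abs(ln_b) < 5.0:
            verdict = "moderate"
        else:
            verdict = "strong"
        favoured = "model 1" if ln_b > 0 else "model 2"
        return ln_b, f"{verdict} evidence for {favoured}"

    # Illustrative numbers only
    print(interpret_bayes_factor(-1234.6, -1236.1))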

9.
For the fitting of power-law frequency distributions that contain saturated sample data, a new corrected formula for the maximum-likelihood estimation of the power-law index is proposed. A comparative study shows that the corrected formula is appropriate for power-law frequency fits from which the anomalous saturated values have been removed. If the saturated values are not removed, the power-law index can only be estimated with the uncorrected formula, whose error varies with the power-law index and becomes larger when the index is small. It is therefore recommended that, when fitting frequency distributions containing saturated samples, the anomalous saturated values be removed first and the corrected formula then be applied to the remaining, saturation-free subset to estimate the parameters.
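The corrected estimator itself is not given in the abstract, so the sketch below only sets up the problem: it simulates a power-law sample with a saturation cutoff and applies the standard (uncorrected) continuous-power-law maximum-likelihood estimator, showing why a correction is needed once the saturated values are discarded. All numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic power-law sample, p(x) ~ x^(-alpha) for x >= x_min
    alpha_true, x_min, n = 2.5, 1.0, 50_000
    x = x_min * rng.uniform(size=n) ** (-1.0 / (alpha_true - 1.0))

    # Detector saturation: every value above x_sat is recorded as x_sat
    x_sat = 30.0
    observed = np.minimum(x, x_sat)

    def mle_alpha(sample, x_min):
        # Standard continuous power-law MLE; this is the *uncorrected* textbook
        # formula, used only to illustrate the setup. The paper's corrected
        # formula for the saturation-trimmed subset is not reproduced here.
        return 1.0 + sample.size / np.sum(np.log(sample / x_min))

    with_sat = mle_alpha(observed, x_min)                    # saturated values kept
    trimmed = mle_alpha(observed[observed < x_sat], x_min)   # saturated values removed
    print(f"true alpha = {alpha_true}, uncorrected MLE with saturated values = "
          f"{with_sat:.3f}, uncorrected MLE on the trimmed subset = {trimmed:.3f}")

The residual bias of the trimmed estimate under the uncorrected formula is the situation the proposed correction addresses.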

10.
11.
A selection criterion based on the relative strength of the largest peaks in the amplitude spectra, and an information criterion are used in combination to search for multiperiodicities in Hipparcos epoch photometry. The method is applied to all stars which have been classified as variable in the Hipparcos catalogue: periodic, unsolved and microvariables. Results are assessed critically: although there are many problems arising from aliasing, there are also a number of interesting frequency combinations which deserve further investigation. One such result is the possible occurrence of multiple periods of the order of a day in a few early A-type stars. The Hipparcos catalogue also contains a number of these stars with single periodicities: such stars with no obvious variability classifications are listed, and information about their properties (e.g., radial velocity variations) discussed. These stars may constitute a new class of pulsators.
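As a hedged illustration of the information-criterion side of such a search (the paper's specific peak-strength criterion is not reproduced), the snippet below accepts a second frequency only if it lowers the Bayesian information criterion of a least-squares multi-sinusoid fit. The frequencies, amplitudes and noise level are invented.

    import numpy as np

    def sinusoid_design(t, freqs):
        cols = [np.ones_like(t)]
        for f in freqs:
            cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        return np.column_stack(cols)

    def bic(t, y, freqs):
        """BIC of a least-squares multi-sinusoid fit (smaller is better)."""
        A = sinusoid_design(t, freqs)
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        n, k = t.size, A.shape[1]
        return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

    # Accept a second frequency only if it lowers the BIC
    rng = np.random.default_rng(6)
    t = np.sort(rng.uniform(0, 100, 300))
    y = (0.02 * np.sin(2 * np.pi * 1.13 * t)
         + 0.015 * np.sin(2 * np.pi * 1.47 * t)
         + 0.01 * rng.normal(size=t.size))
    print(bic(t, y, [1.13]), bic(t, y, [1.13, 1.47]))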

12.
The American Association of Variable Star Observers supplies the astronomical community with a large data base of times of light maxima and minima of Mira (long-period pulsating) stars. Period change studies using these data invariably use either times between maxima, or those between minima. A statistical analysis based on the two-component time series of light curve rise and fall times is developed. The results, which enable one to detect changes in the shapes of light curves, are applied to observations of seven long-period variables.

13.
It has been found that the near-infrared flux variations of Seyfert galaxies satisfy relations of the form F_i ≈ α_ij + β_ij F_j, where F_i and F_j are the fluxes in filters i and j, and α_ij and β_ij are constants. These relations have been used to estimate the constant contributions of the non-variable underlying galaxies. The paper attempts a formal treatment of the estimation procedure, allowing for the possible presence of a third component, namely non-variable hot dust. In an analysis of a sample of 38 Seyfert galaxies, inclusion of the hot dust component improves the model fit in approximately half the cases. All derived dust temperatures are either below 300 K, in the range 540–860 K, or above 1300 K. A noteworthy feature is the estimation of confidence intervals for the component contributions: this is achieved by bootstrapping. It is also pointed out that the model implies that such data could be fruitfully analysed in terms of principal components.
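A hedged sketch of the basic two-component case follows: a straight-line fit of F_i against F_j and a pairwise bootstrap for the confidence interval on the intercept, which carries the constant galaxy contribution in this simple picture. The fluxes are simulated, and the paper's actual treatment adds the hot-dust component and a more careful error model.

    import numpy as np

    rng = np.random.default_rng(7)

    # Toy contemporaneous fluxes in two near-infrared bands: a variable nucleus
    # plus constant offsets standing in for the non-variable underlying galaxy
    nucleus = rng.uniform(5.0, 50.0, size=40)
    F_j = 8.0 + nucleus + rng.normal(0.0, 1.0, 40)
    F_i = 12.0 + 0.6 * nucleus + rng.normal(0.0, 1.0, 40)

    def fit(Fi, Fj):
        # Ordinary least squares for F_i = alpha + beta * F_j
        beta, alpha = np.polyfit(Fj, Fi, 1)
        return alpha, beta

    alpha_hat, beta_hat = fit(F_i, F_j)

    # Bootstrap over (F_i, F_j) pairs to get confidence intervals
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, F_j.size, F_j.size)
        boot.append(fit(F_i[idx], F_j[idx]))
    boot = np.array(boot)
    lo, hi = np.percentile(boot[:, 0], [16, 84])
    print(f"alpha = {alpha_hat:.2f}, 68% bootstrap interval [{lo:.2f}, {hi:.2f}]")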

14.
15.
We discuss some commonly used methods for determining the significance of peaks in the periodograms of time series. We review methods for constructing the classical significance tests, their corresponding false alarm probability functions and the role played in these by independent random variables and by empirical and theoretical cumulative distribution functions. We discuss the concepts of independent frequencies and oversampling in periodogram analysis. We then compare the results of new Monte Carlo simulations for evenly spaced time series with results obtained previously by other authors, and present the results of Monte Carlo simulations for a specific unevenly spaced time series obtained for V403 Car.
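The simplest empirical recipe in this context is to build the null distribution of the highest periodogram peak by Monte Carlo, using the actual (uneven) time sampling; a hedged sketch with astropy's Lomb–Scargle implementation is given below. The times, frequency grid and number of simulations are arbitrary stand-ins, not the V403 Car data.

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(8)

    # Unevenly spaced observation times (the real sampling pattern would be
    # used in practice; here we just draw random times)
    t = np.sort(rng.uniform(0.0, 40.0, size=250))
    freq = np.linspace(0.05, 10.0, 4000)

    def max_peak(y):
        return LombScargle(t, y).power(freq).max()

    # Toy "observed" series: pure noise here, so the resulting FAP is
    # typically not small
    y_obs = rng.normal(size=t.size)
    peak_obs = max_peak(y_obs)

    # Monte Carlo null distribution of the highest peak under pure noise,
    # computed with the *same* time sampling (the key point for uneven data)
    n_sim = 500
    null_peaks = np.array([max_peak(rng.normal(size=t.size)) for _ in range(n_sim)])
    fap = np.mean(null_peaks >= peak_obs)
    print(f"highest peak = {peak_obs:.3f}, Monte Carlo false-alarm probability = {fap:.3f}")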

16.
For a lunar probe on a return mission, the error-propagation characteristics of the Earth–Moon and Moon–Earth transfer orbits are studied on the basis of the dynamical features of lunar transfer trajectories and the properties of the error-propagation matrix, and a linear formula based on the error-propagation matrix is given for estimating the velocity increment of the first midcourse correction. Using concrete numerical examples, the error-propagation properties of lunar transfer orbits under a realistic dynamical model are presented, the characteristics and applicable situations of two different correction strategies (aiming at a target point versus a target orbit) are discussed, and the second midcourse correction of the Moon–Earth transfer orbit is analysed and computed in combination with the re-entry constraints.
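A generic linear, fixed-time-of-arrival targeting formula of the kind described (a velocity increment obtained from the partitioned error-propagation, i.e. state-transition, matrix) is sketched below. The partitioning convention, the example matrix and the injection errors are all illustrative assumptions, not the paper's actual formula or numbers.

    import numpy as np

    def fixed_time_correction(phi, dr0, dv0):
        """Linear fixed-time-of-arrival midcourse correction.

        phi : 6x6 error-propagation (state-transition) matrix from the
              correction epoch to the target epoch, state ordered (r, v).
        dr0 : 3-vector position error at the correction epoch.
        dv0 : 3-vector velocity error at the correction epoch.
        Returns the velocity increment dv that nulls the predicted position
        error at the target epoch:
            dr_f = phi_rr dr0 + phi_rv (dv0 + dv) = 0
            =>  dv = -phi_rv^{-1} phi_rr dr0 - dv0
        """
        phi_rr, phi_rv = phi[:3, :3], phi[:3, 3:]
        return -np.linalg.solve(phi_rv, phi_rr @ dr0) - dv0

    # Illustrative numbers only: a made-up propagation matrix (km, km/s) and
    # injection errors; phi_rv entries are of the order of a transfer time in s
    phi = np.eye(6) + 1e-3 * np.arange(36).reshape(6, 6)
    phi[:3, 3:] += 3.0e5 * np.eye(3)
    dv = fixed_time_correction(phi, dr0=np.array([50.0, -20.0, 10.0]),
                               dv0=np.array([1e-3, -5e-4, 2e-4]))
    print("first midcourse correction dv [km/s]:", dv)

The "target orbit" variant mentioned in the abstract would instead null only selected components of the predicted error, which relaxes the arrival-time constraint.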

17.
The entropic prior for distributions with positive and negative values
The maximum entropy method has been used to reconstruct images in a wide range of astronomical fields, but in its traditional form it is restricted to the reconstruction of strictly positive distributions. We present an extension of the standard method to include distributions that can take both positive and negative values. The method may therefore be applied to a much wider range of astronomical reconstruction problems. In particular, we derive the form of the entropy for positive/negative distributions and use direct counting arguments to find the form of the entropic prior. We also derive the measure on the space of positive/negative distributions, which allows the definition of probability integrals and hence the proper quantification of errors.
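A hedged reconstruction of the standard result in this line of work (the notation may differ from the paper's): write the positive/negative image as a difference of two positive images, h_i = u_i − v_i, with separate models m_{u,i} and m_{v,i}, apply the usual cross-entropy to each part, and maximise over the decomposition at fixed h_i. This gives u_i v_i = m_{u,i} m_{v,i} and hence

    S(h) = \sum_i \left[ \psi_i - m_{u,i} - m_{v,i}
           - h_i \ln\frac{\psi_i + h_i}{2\,m_{u,i}} \right],
    \qquad
    \psi_i = \sqrt{h_i^2 + 4\,m_{u,i} m_{v,i}},

with the entropic prior then taken proportional to exp(α S(h)); the measure on the space of positive/negative distributions and the normalisation are what the direct counting argument in the paper supplies.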

18.
The key features of the matphot algorithm for precise and accurate stellar photometry and astrometry using discrete point spread functions (PSFs) are described. A discrete PSF is a sampled version of a continuous PSF, which describes the two-dimensional probability distribution of photons from a point source (star) just above the detector. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS (Flexible Image Transport System) image file. Discrete PSFs are shifted within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. Precise and accurate stellar photometry and astrometry are achieved with undersampled CCD (charge-coupled device) observations by using supersampled discrete PSFs that are sampled two, three or more times more finely than the observational data. The precision and accuracy of the matphot algorithm are demonstrated by using the C-language mpd code to analyse simulated CCD stellar observations; measured performance is compared with a theoretical performance model. Detailed analysis of simulated Next Generation Space Telescope observations demonstrates that millipixel relative astrometry and mmag photometric precision are achievable with complicated space-based discrete PSFs.
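The two numerical ingredients named in the abstract are standard; the sketch below shows one hedged way to implement them in one dimension: a 21-pixel damped-sinc shift of a sampled profile and the five-point central-difference formula for the position partial derivative. The Gaussian damping constant and the step size are illustrative choices, not necessarily those used by the mpd code.

    import numpy as np

    def shift_profile(profile, dx, width=21, damp=3.25):
        """Shift a 1-D sampled profile by a sub-pixel offset dx using a
        damped-sinc interpolation kernel of the given width (pixels)."""
        half = width // 2
        offsets = np.arange(-half, half + 1)
        kernel = np.sinc(offsets - dx) * np.exp(-((offsets - dx) / damp) ** 2)
        kernel /= kernel.sum()          # preserve the total flux
        return np.convolve(profile, kernel, mode="same")

    def position_partial(profile, h=0.05):
        """Partial derivative of the profile with respect to its position,
        via the standard five-point central-difference formula."""
        f = lambda s: shift_profile(profile, s)
        return (-f(2 * h) + 8 * f(h) - 8 * f(-h) + f(-2 * h)) / (12 * h)

    # Toy Gaussian PSF sampled on a 21-pixel grid
    x = np.arange(-10, 11, dtype=float)
    psf = np.exp(-0.5 * (x / 1.5) ** 2)
    print(position_partial(psf)[8:13].round(4))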

19.
The past 5 years have seen a rapid rise in the use of tunable filters in many diverse fields of astronomy, through Taurus Tunable Filter (TTF) instruments at the Anglo-Australian and William Herschel Telescopes. Over this time we have continually refined aspects of operation and developed a collection of special techniques to handle the data produced by these novel imaging instruments. In this paper, we review calibration procedures and summarize the theoretical basis for Fabry–Perot photometry that is central to effective tunable imaging. Specific mention is made of object detection and classification from deep narrow-band surveys containing several hundred objects per field. We also discuss methods for recognizing and dealing with artefacts (scattered light, atmospheric effects, etc.), which can seriously compromise the photometric integrity of the data if left untreated. Attention is paid to the different families of ghost reflections encountered, and the strategies used to minimize their presence. In our closing remarks, future directions for tunable imaging are outlined and contrasted with the Fabry–Perot technology employed in the current generation of tunable imagers.
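For reference, the theoretical basis alluded to is the ideal etalon (Airy) transmission profile; the hedged summary below uses standard textbook notation rather than the paper's.

    T(\lambda,\theta) = \left[ 1 + \frac{4R}{(1-R)^2}
                        \sin^2\!\frac{\delta}{2} \right]^{-1},
    \qquad
    \delta = \frac{4\pi \mu d \cos\theta}{\lambda},

so that peak transmission occurs where mλ = 2μd cosθ, with reflective finesse N ≈ π√R/(1−R) and an on-axis bandpass of roughly λ²/(2μd N). The cosθ dependence shifts the passband blueward away from the optical axis, which is one reason tunable-filter imaging needs the kind of wavelength and photometric calibration reviewed in the paper.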

20.
One of the tools used to identify the pulsation modes of stars is a comparison of the amplitudes and phases as observed photometrically at different wavelengths. Proper application of the method requires that the errors on the measured quantities, and the correlations between them, be known (or at least estimated). It is assumed that contemporaneous measurements of the light intensity of a pulsating star are obtained in several wavebands. It is also assumed that the measurements are regularly spaced in time, although there may be missing observations. The amplitude and phase of the pulsation are estimated separately for each of the wavebands, and amplitude ratios and phase differences are calculated. A general scheme for estimating the covariance matrix of the amplitude ratios and phase differences is described. The first step is to fit a time series to the residuals after pre-whitening the observations by the best-fitting sinusoid. The residuals are then cross-correlated to study the interdependence between the errors in the different wavebands. Once the multivariate time-series structure can be modelled, the covariance matrix can be found by bootstrapping. An illustrative application is described in detail.
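A hedged sketch of the first step only is given below: per-band amplitude and phase from a least-squares sinusoid fit at a known frequency, and the derived amplitude ratio and phase difference. The frequency, amplitudes and noise levels are invented, and the paper's real contribution (modelling the residual time series and bootstrapping the full covariance matrix) is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(9)

    # Regularly spaced epochs with a few missing observations, two wavebands
    # observed contemporaneously; the frequency is assumed known in advance
    freq, n = 1.234, 300
    t = np.arange(n) * 0.02
    t = t[rng.uniform(size=n) > 0.1]          # ~10% missing observations

    def simulate(amp, phase, sigma):
        return amp * np.sin(2 * np.pi * freq * t + phase) + sigma * rng.normal(size=t.size)

    def amp_phase(y):
        """Least-squares amplitude and phase of a sinusoid of known frequency."""
        A = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * freq * t),
                             np.cos(2 * np.pi * freq * t)])
        c0, cs, cc = np.linalg.lstsq(A, y, rcond=None)[0]
        return np.hypot(cs, cc), np.arctan2(cc, cs)

    y_u = simulate(0.030, 0.40, 0.004)        # e.g. u band
    y_v = simulate(0.020, 0.55, 0.003)        # e.g. v band

    a_u, p_u = amp_phase(y_u)
    a_v, p_v = amp_phase(y_v)
    print(f"amplitude ratio A_v/A_u = {a_v / a_u:.3f}, "
          f"phase difference = {np.degrees(p_v - p_u):.2f} deg")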
