Similar Articles
20 similar articles found (search time: 31 ms)
1.
For fitting power-law frequency distributions to data that contain saturated samples, we propose a new corrected formula for the maximum-likelihood estimation of the power-law exponent. A comparative study shows that the corrected formula is appropriate for power-law frequency fitting once the anomalous saturated values have been removed. If the saturated values are not removed, the exponent can only be estimated with the uncorrected formula, whose error varies with the exponent and becomes large when the exponent is small. We therefore recommend that, when fitting frequency distributions that contain saturated samples, the anomalous saturated values be removed first and the corrected formula then be applied to the remaining, saturation-free subset.
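The uncorrected estimator referred to above is the standard continuous power-law MLE. Since the corrected formula is not reproduced in the abstract, the sketch below shows only the standard estimator together with the recommended drop-saturated-values-then-fit workflow; all function names and the saturation threshold are illustrative, not from the paper.

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Standard continuous power-law MLE:
    alpha_hat = 1 + n / sum(ln(x_i / xmin)).
    This is the *uncorrected* textbook estimator; the paper's
    modified formula is not reproduced here."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

def fit_after_desaturation(x, xmin, saturation):
    """Workflow recommended by the abstract: discard the anomalous
    saturated values first, then fit the remaining subset.
    (`saturation` is an illustrative detector ceiling.)"""
    x = np.asarray(x, dtype=float)
    return powerlaw_mle(x[x < saturation], xmin)

# Synthetic check: inverse-transform sampling from p(x) ∝ x^(-2.5), x >= 1.
rng = np.random.default_rng(0)
u = rng.random(100_000)
samples = (1.0 - u) ** (-1.0 / 1.5)      # alpha = 2.5, xmin = 1
alpha_hat = powerlaw_mle(samples, 1.0)   # should lie close to 2.5
```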

2.
3.
In the absence of any compelling physical model, cosmological systematics are often misrepresented as statistical effects and the approach of marginalizing over extra nuisance systematic parameters is used to gauge the effect of the systematic. In this article, we argue that such an approach is risky at best since the key choice of function can have a large effect on the resultant cosmological errors.
As an alternative we present a functional form-filling technique in which an unknown, residual, systematic is treated as such. Since the underlying function is unknown, we evaluate the effect of every functional form allowed by the information available (either a hard boundary or some data). Using a simple toy model, we introduce the formalism of functional form filling. We show that parameter errors can be dramatically affected by the choice of function in the case of marginalizing over a systematic, but that in contrast the functional form-filling approach is independent of the choice of basis set.
We then apply the technique to cosmic shear shape measurement systematics and show that a shear calibration bias of |m(z)| ≲ 10^-3 (1 + z)^0.7 is required for a future all-sky photometric survey to yield unbiased cosmological parameter constraints to per cent accuracy.
A module associated with the work in this paper is available as part of the open-source icosmo code at http://www.icosmo.org.

4.
We present and discuss a method to identify substructures in combined angular-redshift samples of galaxies within clusters. The method relies on the use of the discrete wavelet transform (hereafter DWT) and has already been applied to the analysis of the Coma cluster. The main new ingredient of our method with respect to previous studies lies in the fact that we make use of a 3D data set rather than a 2D one. We test the method on mock cluster catalogues with spatially localized substructures and on an N-body simulation. Our main conclusion is that our method is able to identify the existing substructures provided that: (a) the subclumps are detached in part or all of the phase space; (b) one has a statistically significant number of redshifts, increasing as the distance decreases due to redshift distortions; and (c) one knows a priori the scale on which substructures are to be expected. We have found that to allow an accurate recovery we must have both a significant number of galaxies (≈200 for clusters at z ≥ 0.4 or about 800 at z ≤ 0.4) and a limiting magnitude for completeness m_B = 16.

The only true limitation to our method seems to be the necessity of knowing a priori the scale on which the substructure is to be found. This is an intrinsic drawback of the method, and no improvement in the numerical codes based on this technique could make up for it.

5.
A method to rapidly estimate the Fourier power spectrum of a point distribution is presented. This method relies on a Taylor expansion of the trigonometric functions. It yields the Fourier modes from a number of fast Fourier transforms (FFTs) that is controlled by the order N of the expansion and by the dimension D of the system. In three dimensions, for the practical value N = 3, the number of FFTs required is 20.

We apply the method to the measurement of the power spectrum of a periodic point distribution that is a local Poisson realization of an underlying stationary field. We derive an explicit analytic expression for the spectrum, which allows us to quantify, and correct for, the biases induced by discreteness and by the truncation of the Taylor expansion, and to bound the unknown effects of aliasing of the power spectrum. We show that these aliasing effects decrease rapidly with the order N. For N = 3, they are expected to be smaller than ~10^-4 at half the Nyquist frequency and ~0.02 at the Nyquist frequency of the grid used to perform the FFTs. The only remaining significant source of error is the unavoidable cosmic/sample variance due to the finite size of the sample.

The analytical calculations are successfully checked against a cosmological N-body experiment. We also consider the initial conditions of this simulation, which correspond to a perturbed grid. This allows us to test a case where the local Poisson assumption is incorrect. Even in that extreme situation, the third-order Fourier-Taylor estimator behaves well, with aliasing effects restrained to at most the per cent level at half the Nyquist frequency.

We also show how to reach an arbitrarily large dynamic range in Fourier space (i.e. high wavenumber), while keeping statistical errors under control, by appropriately 'folding' the particle distribution.
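For context, the baseline this kind of estimator improves on is the plain gridded FFT power spectrum with shot-noise subtraction. The sketch below is that baseline (nearest-grid-point assignment), not the paper's Fourier-Taylor method; names and binning choices are illustrative.

```python
import numpy as np

def power_spectrum_ngp(pos, boxsize, ngrid):
    """Reference P(k) estimate for a periodic point set:
    nearest-grid-point (NGP) density assignment, one FFT, spherical
    shell average, then subtraction of the Poisson shot noise V/N."""
    npart = pos.shape[0]
    cells = np.floor(pos / boxsize * ngrid).astype(int) % ngrid
    rho = np.zeros((ngrid, ngrid, ngrid))
    np.add.at(rho, tuple(cells.T), 1.0)
    delta = rho / rho.mean() - 1.0
    dk = np.fft.rfftn(delta)
    pk3d = np.abs(dk) ** 2 * boxsize ** 3 / ngrid ** 6
    # spherical average, |k| measured in units of 2*pi/boxsize
    kx = np.fft.fftfreq(ngrid) * ngrid
    kz = np.fft.rfftfreq(ngrid) * ngrid
    kmag = np.sqrt(kx[:, None, None] ** 2
                   + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    edges = np.arange(0.5, ngrid // 2 + 0.5)   # bins centred on 1..Nyquist-1
    which = np.digitize(kmag.ravel(), edges)
    nbins = len(edges) + 1
    sums = np.bincount(which, weights=pk3d.ravel(), minlength=nbins)
    counts = np.bincount(which, minlength=nbins)
    pk = sums[1:len(edges)] / counts[1:len(edges)]
    k = 2 * np.pi / boxsize * np.arange(1, len(edges))
    shot = boxsize ** 3 / npart
    return k, pk - shot
```

For a purely Poisson point set the shot-subtracted spectrum should scatter around zero, which is a convenient sanity check of the normalization.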

6.
We present a detrending algorithm for the removal of trends in time series. Trends in time series can be caused by various systematic and random noise sources, such as cloud passages, changes of airmass, telescope vibration, CCD noise or defects of photometry. Such trends obscure the intrinsic signals of stars and should be removed. We determine the trends from subsets of stars that are highly correlated among themselves. These subsets are selected with a hierarchical tree clustering algorithm: a bottom-up merging algorithm, based on the departure from a normal distribution in the correlation, is developed to identify the subsets, which we call clusters. After identifying the clusters, we determine one trend per cluster by a weighted sum of the normalized light curves. We then use quadratic programming to detrend all individual light curves with respect to these determined trends. Experimental results with synthetic light curves containing artificial trends and events are presented, and results from other detrending methods are compared. The algorithm can be applied for trend removal in time series from both narrow- and wide-field astronomy.
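The core idea, building a common trend from mutually correlated stars and removing its best-fitting multiple from each light curve, can be sketched in a few lines. This is a simplified stand-in: the cluster selection and the quadratic-programming constraints from the abstract are omitted, and all names are illustrative.

```python
import numpy as np

def master_trend(lightcurves):
    """Mean of median-normalized light curves: a simple stand-in for
    the abstract's cluster-weighted trend (the paper first selects the
    contributing stars by hierarchical tree clustering)."""
    lc = np.asarray(lightcurves, dtype=float)
    return (lc / np.median(lc, axis=1, keepdims=True) - 1.0).mean(axis=0)

def detrend(flux, trend):
    """Remove the best-fitting multiple of the trend by ordinary least
    squares; the paper constrains such coefficients with quadratic
    programming instead."""
    f = flux / np.median(flux) - 1.0
    beta = np.dot(f, trend) / np.dot(trend, trend)
    return f - beta * trend
```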

7.
We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness compared with the original algorithm presented in Feroz & Hobson, which itself significantly outperformed existing Markov chain Monte Carlo techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla Λ cold dark matter model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software, which is fully parallelized using MPI and includes an interface to CosmoMC, is available at http://www.mrao.cam.ac.uk/software/multinest/. It will also be released as part of the SuperBayeS package, for the analysis of supersymmetric theories of particle physics, at http://www.superbayes.org.

8.
A trivial modification to the XML schema of VOEvent v1.1 allows the inclusion of W3C digital signatures. Signatures enable identification, identification enables trust, and trust enables authorization. Such changes would inhibit abuse of the VOEvent networks. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
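The sign-then-verify flow can be illustrated with a toy sketch. Note the heavy caveats: real W3C XML-DSig uses public-key signatures and a canonicalization (C14N) step, whereas this sketch uses a shared-key HMAC and naive re-serialization; the key, element name, and sample event are all illustrative and not part of the VOEvent schema.

```python
import hashlib
import hmac
import xml.etree.ElementTree as ET

SHARED_KEY = b"illustrative-shared-key"   # stand-in for real key material

def canonical(root):
    # naive stand-in for W3C XML canonicalization (C14N)
    return ET.tostring(root, encoding="utf-8")

def sign(xml_text):
    """Append a <Signature> element carrying an HMAC over the event body."""
    root = ET.fromstring(xml_text)
    digest = hmac.new(SHARED_KEY, canonical(root), hashlib.sha256).hexdigest()
    ET.SubElement(root, "Signature").text = digest
    return ET.tostring(root, encoding="unicode")

def verify(signed_xml):
    """Recompute the digest over the body with the signature removed."""
    root = ET.fromstring(signed_xml)
    sig = root.find("Signature")
    root.remove(sig)
    expect = hmac.new(SHARED_KEY, canonical(root), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig.text, expect)
```

Any tampering with the signed event body invalidates the digest, which is the property that lets signatures carry identification and, from there, trust and authorization.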

9.
10.
11.
We investigate the application of neural networks to the automation of MK spectral classification. The data set for this project consists of a set of over 5000 optical (3800–5200 Å) spectra obtained from objective prism plates from the Michigan Spectral Survey. These spectra, along with their two-dimensional MK classifications listed in the Michigan Henry Draper Catalogue, were used to develop supervised neural network classifiers. We show that neural networks can give accurate spectral type classifications (σ_68 = 0.82 subtypes, σ_rms = 1.09 subtypes) across the full range of spectral types present in the data set (B2–M7). We show also that the networks yield correct luminosity classes for over 95 per cent of both dwarfs and giants with a high degree of confidence.

Stellar spectra generally contain a large amount of redundant information. We investigate the application of principal components analysis (PCA) to the optimal compression of spectra. We show that PCA can compress the spectra by a factor of over 30 while retaining essentially all of the useful information in the data set. Furthermore, it is shown that this compression optimally removes noise and can be used to identify unusual spectra.

This paper is a continuation of the work carried out by von Hippel et al. (Paper I).
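PCA compression of spectra reduces each spectrum to a handful of coefficients in a shared basis; keeping, say, 20 components of a 600-pixel spectrum gives the factor-of-30 compression the abstract describes. A minimal SVD-based sketch (function names and component counts illustrative):

```python
import numpy as np

def pca_compress(spectra, n_comp):
    """Project mean-subtracted spectra (rows) onto their top n_comp
    principal components, obtained via SVD."""
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    basis = vt[:n_comp]                        # (n_comp, n_pixels)
    return (spectra - mean) @ basis.T, basis, mean

def pca_reconstruct(coeffs, basis, mean):
    """Invert the compression: coefficients back to pixel space."""
    return coeffs @ basis + mean
```

Spectra that are poorly reconstructed from the retained components are exactly the "unusual spectra" the abstract says this representation helps to identify.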

12.
13.
The theory of low-order linear stochastic differential equations is reviewed. Solutions to these equations give the continuous time analogues of discrete time autoregressive time-series. Explicit forms for the power spectra and covariance functions of first- and second-order forms are given. A conceptually simple method is described for fitting continuous time autoregressive models to data. Formulae giving the standard errors of the parameter estimates are derived. Simulated data are used to verify the performance of the methods. Irregularly spaced observations of the two hydrogen-deficient stars FQ Aqr and NO Ser are analysed. In the case of FQ Aqr the best-fitting model is of second order, and describes a quasi-periodicity of about 20 d with an e-folding time of 3.7 d. The NO Ser data are best fitted by a first-order model with an e-folding time of 7.2 d.
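A first-order continuous-time autoregression (an Ornstein-Uhlenbeck process) can be simulated exactly on a regular grid, and its e-folding time recovered from the lag-1 autocorrelation. This is a minimal sketch of the idea, not the paper's full fitting scheme with standard errors; function names are illustrative.

```python
import numpy as np

def simulate_car1(n, dt, tau, sigma, rng):
    """Exact discrete update for a first-order continuous autoregression:
    x[k+1] = phi * x[k] + e_k with phi = exp(-dt/tau), which has the
    covariance function sigma^2 * exp(-|lag|/tau) of a first-order model."""
    phi = np.exp(-dt / tau)
    innov = sigma * np.sqrt(1.0 - phi ** 2) * rng.standard_normal(n)
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = phi * x[k - 1] + innov[k]
    return x

def efolding_time(x, dt):
    """Estimate tau from the lag-1 autocorrelation: tau = -dt / ln(phi)."""
    phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    return -dt / np.log(phi_hat)
```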

14.
15.
In many astronomical problems one often needs to determine the upper and/or lower boundary of a given data set. An automatic and objective approach is to fit the data using a generalized least-squares method, in which the function to be minimized is defined so as to handle the data on the two sides of the boundary asymmetrically. To minimize the cost function, a numerical approach based on the popular downhill simplex method is employed. The procedure is valid for any numerically computable function. Simple polynomials provide good boundaries in common situations; for data exhibiting complex behaviour, adaptive splines give excellent results. Since the described method is sensitive to extreme data points, error weighting and the flexibility of allowing some points to fall outside the fitted frontier supply parameters that help to tune the boundary fitting to the nature of the problem considered. Two simple examples are presented, namely the estimation of the pseudo-continuum of a spectrum and the segregation of scattered data into ranges. Normalizing the data ranges prior to the fit typically reduces both the numerical errors and the number of iterations required during the iterative minimization procedure.
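The asymmetric cost idea can be sketched compactly. This version minimizes the asymmetric polynomial cost by iterative reweighting rather than the downhill simplex method the abstract describes (both target the same cost function); the weight ratio and function names are illustrative.

```python
import numpy as np

def fit_upper_boundary(x, y, deg=1, asym=100.0, n_iter=30):
    """Polynomial upper-boundary fit by asymmetric least squares:
    residuals of points lying *above* the current curve are penalized
    `asym` times more heavily than points below, so the fit is pushed
    toward the upper envelope of the data."""
    coeffs = np.polyfit(x, y, deg)
    for _ in range(n_iter):
        above = y > np.polyval(coeffs, x)
        w = np.where(above, asym, 1.0)
        # np.polyfit's w multiplies residuals before squaring,
        # so pass the square root of the desired weights
        coeffs = np.polyfit(x, y, deg, w=np.sqrt(w))
    return coeffs
```

A lower boundary follows by weighting the points *below* the curve instead.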

16.
This contribution aims to introduce the idea that a well-evolved HTN of the far future, with the anticipated addition of very large apertures, could also be made to incorporate the ability to carry out photonic astronomy observations, particularly optical VLBI in a revived Hanbury Brown and Twiss intensity interferometry (HBTII) configuration. Such an HTN could exploit its inherent capability for rapid reconfiguration to become a multi-aperture distributed photon-counting network able to study higher-order spatio-temporal photon correlations and provide a unique tool for direct diagnostics of astrophysical emission processes. We very briefly review various considerations associated with switching the HTN to a special mode in which single-photon detection events are continuously captured for a posteriori intercorrelation. In this context, photon arrival times should be determined to the highest possible time resolution, and extremely demanding absolute time-keeping and absolute time-distribution schemes should be devised and implemented in the HTN nodes involved. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

17.
We present a simple and efficient method to set up spherical structure models for N-body simulations with a multimass technique. This technique reduces, by a substantial factor, the computer run time needed to resolve a given scale compared with single-mass models; it therefore allows smaller scales to be resolved in N-body simulations for a given computer run time. Here we present several models, with an effective resolution of up to 1.68 × 10^9 particles within their virial radius, which are stable over cosmologically relevant time-scales. As an application, we confirm the theoretical prediction by Dehnen that in mergers of collisionless structures such as dark matter haloes the cusp of the steepest progenitor is always preserved. We model each merger progenitor with an effective number of approximately 10^8 particles. We also find that in a core-core merger the central density approximately doubles, whereas in the cusp-cusp case the central density increases by only approximately 50 per cent. This may suggest that the central regions of flat structures are better protected and receive less energy input through the merger process.

18.
The observation of flux sources near the limit of detection requires a careful evaluation of possible biases in magnitude determination. Both the traditional logarithmic magnitudes and the recently proposed inverse hyperbolic sine (asinh) magnitudes are considered. Formulae are derived for three different biasing mechanisms: the statistical spread of the observed flux values arising from e.g. measurement error; the dependence of these errors on the true flux; and the dependence of the observing probability on the true flux. As an example of the results, it is noted that biases at large signal-to-noise ratios R, at which the two types of magnitude are similar, are of the order of −(p + 1)/R^2, where the exponent p parametrizes a power-law dependence of the probability of observation on the true flux.
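The two magnitude scales compared in the abstract are easy to state side by side. In the sketch below, flux is expressed in units of the zero-point flux and b is the asinh softening parameter; the specific values used are illustrative.

```python
import numpy as np

LN10 = np.log(10.0)

def log_mag(flux):
    """Classical (Pogson) logarithmic magnitude; diverges as flux -> 0."""
    return -2.5 * np.log10(flux)

def asinh_mag(flux, b):
    """Inverse-hyperbolic-sine magnitude with softening parameter b:
    finite at zero (and even negative) flux, and converging to the
    logarithmic magnitude once flux >> b."""
    return -2.5 / LN10 * (np.arcsinh(flux / (2.0 * b)) + np.log(b))
```

At large signal-to-noise the two scales agree, which is the regime in which the abstract's −(p + 1)/R^2 bias estimate applies to both.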

19.
The estimation of the frequency, amplitude and phase of a sinusoid from observations contaminated by correlated noise is considered. It is assumed that the observations are regularly spaced, but may suffer missing values or long time stretches with no data. The typical astronomical source of such data is high-speed photoelectric photometry of pulsating stars. The study of the observational noise properties of nearly 200 real data sets is reported: noise can almost always be characterized as a random walk with superposed white noise. A scheme for obtaining weighted non-linear least-squares estimates of the parameters of interest, as well as standard errors of these estimates, is described. Simulation results are presented for both complete and incomplete data. It is shown that, in finite data sets, results are sensitive to the initial phase of the sinusoid.
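The estimation problem splits conveniently: at a fixed trial frequency the sinusoid model is linear in amplitude and phase, so only the frequency search is nonlinear. The sketch below uses that structure with a simple frequency grid and per-point weights; it is a stand-in for the abstract's full weighted nonlinear scheme (which also supplies standard errors), and all names are illustrative.

```python
import numpy as np

def fit_sinusoid(t, y, w, freqs):
    """Weighted least-squares sinusoid fit on (possibly gappy) regular
    sampling: for each trial frequency f solve the *linear* problem
    y ≈ a cos(2πft) + b sin(2πft) + c, and keep the best frequency."""
    best = None
    for f in freqs:
        design = np.column_stack([np.cos(2 * np.pi * f * t),
                                  np.sin(2 * np.pi * f * t),
                                  np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(design * w[:, None], y * w, rcond=None)
        chi2 = np.sum((w * (y - design @ coef)) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, f, coef)
    _, f, (a, b, _) = best
    # model expressed as A sin(2πft + phase)
    return f, np.hypot(a, b), np.arctan2(a, b)
```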

20.
