Similar Documents
 20 similar documents found (search time: 779 ms)
1.
We present results of an analysis of the properties of spheroid-like galaxies that form in hydrodynamical, self-consistent cosmological simulations run with the DEVA code. We find that the structural, dynamical and X-ray properties, as well as their correlations, are compatible with observations of early-type galaxies at low z. This revised version was published online in August 2006 with corrections to the Cover Date.

2.
We compare simulations of the Lyman α forest performed with two different hydrodynamical codes, gadget-2 and enzo. A comparison of the dark matter power spectrum for simulations run with identical initial conditions shows differences of 1–3 per cent at the scales relevant for quantitative studies of the Lyman α forest. This allows a meaningful comparison of the effect of the different implementations of the hydrodynamic part of the two codes. Using the same cooling and heating algorithm in both codes, the differences in the temperature and the density probability distribution function are of the order of 10 per cent. The differences are comparable to the effects of box size and resolution on these statistics. When self-converged results for each code are taken into account, the differences in the flux power spectrum – the statistic most widely used for estimating the matter power spectrum and cosmological parameters from Lyman α forest data – are about 5 per cent. This is again comparable to the effects of box size and resolution. Numerical uncertainties due to a particular implementation of solving the hydrodynamic or gravitational equations therefore appear to contribute only moderately to the error budget in estimates of the flux power spectrum from numerical simulations. We further find that the differences in the flux power spectrum for enzo simulations run with and without adaptive mesh refinement are also of the order of 5 per cent or smaller. The latter require 10 times less CPU time, making the CPU time requirement similar to that of a version of gadget-2 that is optimized for Lyman α forest simulations.
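The flux power spectrum compared here is the Fourier power of the transmitted-flux contrast along simulated sight lines. As a rough, generic illustration (not the analysis pipeline of the paper, and using one of several common normalization conventions), a one-dimensional estimate from a single sight line might look like the sketch below, where the optical-depth array `tau` is assumed to come from a simulation.

```python
import numpy as np

def flux_power_spectrum(tau, box_size):
    """Estimate the 1D flux power spectrum P_F(k) from one sight line.

    tau      : array of Lyman-alpha optical depths along the sight line
    box_size : length of the sight line; k is returned in the inverse units
    """
    flux = np.exp(-tau)                            # transmitted flux F = exp(-tau)
    delta = flux / flux.mean() - 1.0               # flux contrast delta_F
    n = delta.size
    dk = 2.0 * np.pi / box_size
    delta_k = np.fft.rfft(delta) * (box_size / n)  # discrete -> continuous Fourier convention
    power = np.abs(delta_k) ** 2 / box_size        # P_F(k) = |delta_F(k)|^2 / L
    k = np.arange(power.size) * dk
    return k[1:], power[1:]                        # drop the k = 0 mode

# toy usage with a random optical-depth field (illustration only)
rng = np.random.default_rng(0)
tau = rng.lognormal(mean=0.0, sigma=1.0, size=2048)
k, pk = flux_power_spectrum(tau, box_size=20.0)
```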

3.
We report on two fully self-consistent hydrodynamical simulations run, in the context of a ΛCDM cosmological model, with the DEVA code. Galaxy-like objects of different morphologies form. The assembly patterns identified in these simulations support the ab initio scenario for morphological differentiation, even if violent events at lower z could also have played a role. This revised version was published online in August 2006 with corrections to the Cover Date.

4.
A revision of Stodółkiewicz's Monte Carlo code is used to simulate the evolution of star clusters. The new method treats each superstar as a single star and follows the evolution and motion of all individual stellar objects. The first calculations for isolated, equal-mass N-body systems with three-body energy generation according to Spitzer's formulae show good agreement with direct N-body calculations for N = 2000, 4096 and 10 000 particles. The density, velocity and mass distributions, energy generation, number of binaries, etc., follow the N-body results. Only the number of escapers is slightly too high compared with the N-body results, and there is no levelling-off of the anisotropy for advanced post-collapse evolution of the Monte Carlo models, as is seen in N-body simulations for N ≤ 2000. For simulations with N > 10 000, gravothermal oscillations are clearly visible. The calculations of N = 2000, 4096, 10 000, 32 000 and 100 000 models take about 2, 6, 20, 130 and 2500 h, respectively. The Monte Carlo code is at least 10^5 times faster than the N-body one for N = 32 768 run with special-purpose hardware. Thus it becomes possible to run several different models to improve the statistical quality of the data and to run individual models with N as large as 100 000. The Monte Carlo scheme can be regarded as a method that lies between direct N-body and Fokker–Planck models and combines most of the advantages of both.

5.
6.
The growth rate of the turbulent mixing zone, which develops from random perturbations under Rayleigh–Taylor instability, has been studied using the 3D version of the hydrodynamical code VULCAN. Previous studies show large differences in the α parameter obtained with different codes. In its Eulerian mode, VULCAN/3D employs a van Leer scheme for the advection of all variables, and can also use interface tracking for multi-phase flows. Simulations using the parallel version of VULCAN/3D give α ≈ 0.06, a value that agrees very well with experiments and with some other simulations.
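For context, the α parameter quoted here is conventionally defined through the self-similar growth of the bubble front, h_b ≈ α A g t², where A is the Atwood number. The toy numbers below are purely illustrative and are not taken from the paper; only α ≈ 0.06 comes from the abstract.

```python
# Self-similar Rayleigh-Taylor growth of the bubble front: h_b(t) ≈ alpha * A * g * t**2.
alpha = 0.06          # value quoted in the abstract
atwood = 0.5          # Atwood number A = (rho_heavy - rho_light) / (rho_heavy + rho_light), toy value
g = 9.81              # acceleration (m/s^2), toy value
for t in (0.1, 0.2, 0.5, 1.0):                   # seconds
    h_bubble = alpha * atwood * g * t**2
    print(f"t = {t:4.1f} s  ->  h_b ≈ {h_bubble:.3f} m")
```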

7.
We study the nucleosynthesis and the induced mixing during the merging of massive stars inside a common envelope. The systems of interest are close binaries, initially consisting of a massive red supergiant and a main-sequence companion of a few solar masses. We apply parameterized results based on hydrodynamical simulations to model the stream–core interaction and the response of the star in a standard stellar-evolution code. Preliminary results are presented illustrating the possibility of unusual nucleosynthesis and post-merging dredge-up, which can cause composition anomalies in the supergiant's envelope. This revised version was published online in September 2006 with corrections to the Cover Date.

8.
9.
We present a method for customizing the root grid of zoom-in initial conditions used for simulations of galaxy formation. Starting from the white noise used to seed the structures of an existing initial condition, we cut out a smaller region of interest and use this trimmed white-noise cube to create a new root grid. The new root grid contains similar structures to the original, but allows for a smaller box volume and a different grid resolution that can be tuned to best suit a given simulation code. To minimally disturb the zoom region, the dark matter particles and gas cells from the original zoom region are placed within the new root grid, with no modification other than a bulk velocity offset to match the systemic velocity of the corresponding region in the new root grid. We validate this method using a zoom-in initial condition containing a Local Group analog. We run collisionless simulations using the original and modified initial conditions and find good agreement. The dark matter halo masses of the two most massive galaxies at z = 0 match the original to within 15%. The times and masses of major mergers are reproduced well, as are the full dark matter accretion histories. While we do not reproduce specific satellite galaxies found in the original simulation, we obtain qualitative agreement in the distributions of the maximum circular velocity and of the distance from the central galaxy. We also examine the runtime speedup provided by this method for full hydrodynamic simulations with the ART code. We find that reducing the root grid cell size improves performance, but the increased particle and cell numbers can negate some of the gain. We test several realizations, with our best runs achieving a speedup of nearly a factor of two.
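The central step described above, cutting a sub-volume out of the original white-noise field to seed a smaller root grid, could be sketched along the following lines. The function name, the periodic wrapping and the array sizes are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def trim_white_noise(noise, center, new_shape):
    """Cut a sub-cube of white noise around `center`, wrapping periodically.

    noise     : (N, N, N) array holding the original white-noise field
    center    : (i, j, k) cell indices of the region of interest
    new_shape : edge length of the new (smaller) root grid in cells
    """
    n = new_shape
    half = n // 2
    idx = [np.arange(c - half, c - half + n) % noise.shape[d]
           for d, c in enumerate(center)]
    return noise[np.ix_(idx[0], idx[1], idx[2])]

# toy usage: 128^3 parent field, 64^3 trimmed field centred on cell (60, 70, 80)
rng = np.random.default_rng(42)
parent = rng.standard_normal((128, 128, 128))
child = trim_white_noise(parent, center=(60, 70, 80), new_shape=64)
print(child.shape)   # (64, 64, 64)
```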

10.
11.
We present a tree code for simulations of collisional systems dominated by a central mass. We describe the implementation of the code and the results of test runs used to assess its performance. A comparison between the behaviour of the tree code and a direct hybrid integrator is also presented. The main result is that tree codes can be useful in numerical simulations of planetary accretion, especially during intermediate stages, where possible runaway accretion and dynamical friction lead to a population with a few large bodies on low-eccentricity, low-inclination orbits embedded in a large swarm of small planetesimals on rather excited orbits. Some strategies to improve the performance of the code are also discussed.

12.
It is logically possible that early two-body relaxation in simulations of cosmological clustering influences the final structure of massive clusters. Convergence studies in which mass and spatial resolution are simultaneously increased cannot eliminate this possibility. We test the importance of two-body relaxation in cosmological simulations using simulations in which there are two species of particles. The cases of two mass ratios, √2:1 and 4:1, are investigated. Simulations are run with both a spatially fixed softening length and adaptive softening, using the publicly available codes gadget and mlapm, respectively.
The effects of two-body relaxation are detected in both the density profiles of haloes and the mass function of haloes. The effects are more pronounced with a fixed softening length, but even in this case they are not so large as to suggest that results obtained with one mass species are significantly affected by two-body relaxation.
The simulations that use adaptive softening are less affected by two-body relaxation and produce slightly higher central densities in the largest haloes. They run about three times faster than the simulations that use a fixed softening length.

13.
We present two-dimensional, cylindrically symmetric hydrodynamic simulations and synthetic emission maps of a stellar wind propagating into an infalling, rotating environment. The resulting outflow morphology, collimation and stability observed in these simulations are relevant to the study of young stellar objects, Herbig–Haro jets and molecular outflows. Our code follows hydrogen gas with molecular, atomic and ionic components, tracking the associated time-dependent molecular chemistry and ionization dynamics with radiative cooling appropriate for a dense molecular gas. We present tests of the code as well as new simulations which indicate the presence of instabilities in the wind-blown bubble's swept-up shell.

14.
15.
Cosmological N-body simulations are used for a variety of applications. Indeed, progress in the study of large-scale structure and galaxy formation would have been very limited without this tool. For nearly 20 yr, the limitations imposed by computing power forced simulators to ignore some of the basic requirements for modelling gravitational instability. One of the limitations of most cosmological codes has been the use of a force softening length that is much smaller than the typical interparticle separation. This leads to departures from the collisionless evolution that is desired in these simulations. We propose a particle-based method with adaptive resolution, in which the force softening length is reduced in high-density regions while ensuring that it remains well above the local interparticle separation. The method, called the Adaptive TreePM (ATreePM), is based on the TreePM code. We present the mathematical model and an implementation of this code, and demonstrate that the results converge over a range of options for the parameters introduced in generalizing the code from the TreePM code. We explicitly demonstrate collisionless evolution in the collapse of an oblique plane wave. We compare the code with the fixed-resolution TreePM code and also with an implementation that mimics adaptive mesh refinement methods, and comment on the agreements and disagreements in the results. We find that in most respects the ATreePM code performs at least as well as the fixed-resolution TreePM code in highly overdense regions, from the clustering and number density of haloes to the internal dynamics of haloes. We also show that the adaptive code is faster than the corresponding high-resolution TreePM code.
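The key constraint described here, a softening length that shrinks in dense regions but stays above the local interparticle separation, might be written schematically as below. The names `eps_max` and `eta` are illustrative parameters, not the quantities defined in the paper.

```python
import numpy as np

def adaptive_softening(number_density, eps_max, eta=1.5):
    """Illustrative adaptive softening: a multiple of the local mean
    interparticle separation n^(-1/3), capped at a maximum value.

    number_density : local particle number density for each particle
    eps_max        : softening used in the lowest-density regions
    eta            : softening in units of the local mean separation
                     (eta >= 1 keeps it above the interparticle spacing,
                     as required for collisionless evolution)
    """
    local_separation = number_density ** (-1.0 / 3.0)
    return np.minimum(eps_max, eta * local_separation)

# toy usage: densities spanning voids to halo centres
n = np.logspace(-2, 4, 7)        # particles per unit volume
print(adaptive_softening(n, eps_max=0.5))
```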

16.
We have developed a parallel Particle–Particle, Particle–Mesh (P3M) simulation code for the Cray T3E parallel supercomputer that is well suited to studying the time evolution of systems of particles interacting via gravity and gas forces in cosmological contexts. The parallel code is based upon the public-domain serial Adaptive P3M-SPH code (http://coho.astro.uwo.ca/pub/hydra/hydra.html) of Couchman et al. (1995) [ApJ, 452, 797]. The algorithm resolves gravitational forces into a long-range component, computed by discretizing the mass distribution and solving Poisson's equation on a grid using an FFT convolution method, and a short-range component, computed by direct force summation for sufficiently close particle pairs. The code consists primarily of a particle–particle computation parallelized by domain decomposition over blocks of neighbouring cells, a more regular mesh calculation distributed in planes along one dimension, and several transformations between the two distributions. The load balancing of the P3M code is static, since this greatly aids the ongoing implementation of parallel adaptive refinements of the particle and mesh systems. Great care was taken throughout to make optimal use of the available memory, so that a version of the current implementation has been used to simulate systems of up to 10^9 particles with a 1024^3 mesh for the long-range force computation. These are the largest cosmological N-body simulations of which we are aware. We discuss these memory optimizations as well as those motivated by computational performance. Performance results are very encouraging, and, even without refinements, the code has been used effectively for simulations in which the particle distribution becomes highly clustered, as well as for other non-uniform systems of astrophysical interest.
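The long-range part of the force split, solving Poisson's equation on a periodic grid with an FFT, can be illustrated with a minimal single-processor solver. This is a generic sketch, not the parallel Cray T3E implementation described in the paper.

```python
import numpy as np

def poisson_fft(density, box_size, G=1.0):
    """Solve nabla^2 phi = 4 pi G (rho - <rho>) on a periodic cubic grid by FFT."""
    n = density.shape[0]
    rho_k = np.fft.fftn(density - density.mean())
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid division by zero at k = 0
    phi_k = -4.0 * np.pi * G * rho_k / k2
    phi_k[0, 0, 0] = 0.0                     # the mean of the potential is arbitrary
    return np.real(np.fft.ifftn(phi_k))

# toy usage: a single clump of mass on a 32^3 mesh
rho = np.zeros((32, 32, 32))
rho[16, 16, 16] = 100.0
phi = poisson_fft(rho, box_size=1.0)
```

In a real P3M code this grid force is supplemented by direct particle–particle summation for close pairs, which is the part the abstract describes as domain-decomposed over blocks of neighbouring cells.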

17.
We present a simple and efficient method to set up spherical structure models for N-body simulations with a multimass technique. This technique reduces, by a substantial factor, the computer run time needed to resolve a given scale compared with single-mass models. It therefore makes it possible to resolve smaller scales in N-body simulations for a given computer run time. Here, we present several models with an effective resolution of up to 1.68 × 10^9 particles within their virial radius which are stable over cosmologically relevant time-scales. As an application, we confirm the theoretical prediction by Dehnen that in mergers of collisionless structures such as dark matter haloes the cusp of the steepest progenitor is always preserved. We model each merger progenitor with an effective resolution of approximately 10^8 particles. We also find that in a core–core merger the central density approximately doubles, whereas in the cusp–cusp case the central density only increases by approximately 50 per cent. This may suggest that the central regions of flat structures are better protected and receive less energy input through the merger process.
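The multimass idea, light and numerous particles in the centre with progressively heavier particles further out so that a given inner scale is resolved at a fraction of the cost, might be sketched as follows. The shell boundaries and the mass ratio below are arbitrary illustrations, not the scheme actually used in the paper.

```python
import numpy as np

def particle_masses(radii, shell_edges, mass_ratio=8.0):
    """Assign relative particle masses that grow by `mass_ratio` per radial shell,
    so the innermost shell is sampled with the lightest (most numerous) particles."""
    shell = np.digitize(radii, shell_edges)   # shell 0 is the innermost region
    return mass_ratio ** shell                # relative masses only

# toy usage: particles out to the virial radius, two shell boundaries
rng = np.random.default_rng(1)
r = rng.uniform(0.0, 1.0, size=10)            # radii in units of r_vir
print(particle_masses(r, shell_edges=[0.1, 0.3]))
```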

18.
We have developed a new stellar evolution and oscillation code, YNEV, which calculates the structure and evolution of stars, taking into account hydrogen and helium burning. A nonlocal turbulent convection theory and an updated overshoot mixing model are optional in this code. The YNEV code can evolve low- and intermediate-mass stars from the pre-main sequence to a thermally pulsing asymptotic giant branch star or a white dwarf. The YNEV oscillation code calculates the eigenfrequencies and eigenfunctions of the adiabatic oscillations for a given stellar structure. The input physics and numerical scheme adopted in the code are introduced. Examples of solar models, stellar evolutionary tracks of low- and intermediate-mass stars with different convection theories (i.e. mixing-length theory and nonlocal turbulent convection theory), and stellar oscillations are shown.

19.
Shear mixing is believed to be the main mechanism to provide extra mixing in stellar interiors. We present results of three-dimensional (3D) simulations of the magnetohydrodynamic Kelvin–Helmholtz instability in a stratified shear layer. The magnetic field is taken to be uniform and parallel to the shear flow. We describe the evolution of the fluid flow and the magnetic field for a range of initial conditions. In particular, we investigate how the mixing rate of the fluid depends on the Richardson number and the magnetic field strength. It is found that the magnetic field can enhance as well as suppress mixing. Moreover, we have performed two-dimensional (2D) simulations and discuss some interesting differences between the 2D and 3D results.
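For reference, the Richardson number in stratified shear flows is usually the gradient Richardson number, Ri = N² / (dU/dz)², with N² = -(g/ρ₀) dρ/dz in the Boussinesq approximation. A minimal sketch of evaluating it from one-dimensional background profiles is given below; the tanh shear layer and linear stratification are toy profiles, not those of the paper.

```python
import numpy as np

def gradient_richardson(z, rho, u, g=9.81, rho0=None):
    """Gradient Richardson number Ri = N^2 / (dU/dz)^2 with
    N^2 = -(g/rho0) * d(rho)/dz (Boussinesq approximation)."""
    rho0 = rho.mean() if rho0 is None else rho0
    drho_dz = np.gradient(rho, z)
    du_dz = np.gradient(u, z)
    n2 = -(g / rho0) * drho_dz
    return n2 / du_dz**2

# toy profiles: linear, stable stratification and a tanh shear layer
z = np.linspace(-1.0, 1.0, 101)
rho = 1.0 - 0.05 * z                 # lighter fluid on top -> stable
u = np.tanh(z / 0.2)
print(gradient_richardson(z, rho, u).min())   # minimum Ri sits at the shear centre
```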

20.
Gravitational lensing calculation using a direct inverse ray-shooting approach is a computationally expensive way to determine magnification maps, caustic patterns and light curves (e.g. as a function of source profile and size). However, as an easily parallelisable calculation, gravitational ray-shooting can be accelerated using programmable graphics processing units (GPUs). We present our implementation of inverse ray-shooting for the NVIDIA G80 generation of graphics processors using the NVIDIA Compute Unified Device Architecture (CUDA) software development kit. We also extend our code to multiple-GPU systems, including a 4-GPU NVIDIA S1070 Tesla unit. We achieve sustained processing performance of 182 Gflop/s on a single GPU, and 1.28 Tflop/s using the Tesla unit. We demonstrate that billion-lens microlensing simulations can be run on a single computer with a Tesla unit on timescales of the order of a day, without the use of a hierarchical tree code.
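As an illustration of the calculation being accelerated, a direct CPU sketch of inverse ray-shooting for point-mass microlenses is given below; the GPU code in the paper parallelizes essentially this inner loop over rays and lenses. The Einstein-radius units, the softening and the grid sizes are assumptions for the example, not the paper's settings.

```python
import numpy as np

def shoot_rays(lens_pos, lens_mass, n_rays=512, half_width=2.0, n_pix=64):
    """Direct inverse ray-shooting for point-mass lenses.

    Rays are shot on a regular grid in the image plane, deflected by every
    lens, and binned where they land in the source plane; the counts per
    pixel are proportional to the magnification.
    """
    # regular grid of rays in the image plane (units of the Einstein radius)
    x = np.linspace(-half_width, half_width, n_rays)
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    y1, y2 = x1.copy(), x2.copy()
    for (l1, l2), m in zip(lens_pos, lens_mass):       # deflection from each lens
        d1, d2 = x1 - l1, x2 - l2
        r2 = d1**2 + d2**2 + 1e-12                     # softened to avoid 0/0
        y1 -= m * d1 / r2
        y2 -= m * d2 / r2
    # bin ray positions in the source plane
    counts, _, _ = np.histogram2d(y1.ravel(), y2.ravel(),
                                  bins=n_pix,
                                  range=[[-half_width, half_width]] * 2)
    return counts                                      # proportional to magnification

# toy usage: two microlenses
mag_map = shoot_rays(lens_pos=[(-0.4, 0.0), (0.4, 0.1)], lens_mass=[1.0, 0.5])
print(mag_map.shape, mag_map.max())
```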

