
Dark Matter and Cosmic Web Story


by Jaan Einasto


  Initial perturbations are random, thus at any point the deformation tensor is triaxial, i.e. there is one direction in which perturbations grow fastest. The essential insight of the Zeldovich approximation is that in this direction the growth reaches the non-linear regime first and leads to the formation of flat structures (pancakes). This result is independent of the scale of perturbations: pancakes form both on large and on small scales.
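
  The kinematics of the Zeldovich approximation can be seen already in one dimension. Below is a minimal Python sketch (the single-mode displacement field and all numerical values are illustrative assumptions, not taken from the papers discussed): particles move ballistically along their initial displacement, and where neighbouring trajectories converge the density grows without bound, forming a pancake (caustic).

```python
import numpy as np

# Illustrative 1-D sketch of the Zeldovich approximation.  Particles start
# on a uniform Lagrangian grid q and move ballistically:
#     x(q, t) = q + D(t) * s(q),
# where D(t) is the linear growth factor and s(q) a displacement field.
n, L = 256, 1.0
q = np.linspace(0.0, L, n, endpoint=False)
s = 0.1 * np.sin(2 * np.pi * q / L)        # toy single-mode displacement

for D in (0.5, 1.0, 1.5):                  # increasing growth factor
    # Density follows from mass conservation: rho/rho_mean = 1/|dx/dq|.
    jac = 1.0 + D * np.gradient(s, q)      # dx/dq
    rho = 1.0 / np.abs(jac)
    print(f"D = {D:3.1f}: max overdensity = {rho.max():6.2f}")
# A pancake (caustic) forms where dx/dq -> 0, here at D ~ 1.6.
```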

  The amplitude of fluctuations on different length scales is described by the power spectrum. The primordial power spectrum is usually assumed to have a power-law dependence on scale: P(k) ∼ kⁿ, where k is the wavenumber. A popular choice is the scale-invariant spectrum with spectral index n = 1, proposed by Harrison and Zeldovich. In this case fluctuations on all length scales correspond to the same amplitude of fluctuations in the gravitational potential. If the matter is baryonic, as was believed in the early 1970's, then during the radiation-dominated era small-scale fluctuations are damped by photon viscosity (Silk damping). For this reason the power spectrum contains only waves longer than a critical length. Large-scale density enhancements grow first, and thereafter they fragment into smaller units. This scenario is therefore called top-down.
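
  A hedged sketch of such a spectrum: a pure power law P(k) ∼ kⁿ with n = 1, multiplied by a Gaussian cutoff standing in for Silk damping. The cutoff form and the value of k_damp below are illustrative choices, not the actual Silk transfer function.

```python
import numpy as np

# Scale-invariant primordial spectrum P(k) ~ k^n with n = 1, with an
# optional exponential cutoff mimicking the damping of small-scale
# (large-k) waves; the Gaussian form and k_damp are illustrative.
def power_spectrum(k, n=1.0, amplitude=1.0, k_damp=None):
    p = amplitude * k**n
    if k_damp is not None:
        p *= np.exp(-(k / k_damp) ** 2)    # damp waves shorter than ~1/k_damp
    return p

k = np.logspace(-3, 1, 200)                # wavenumbers in h/Mpc (illustrative)
p_primordial = power_spectrum(k)           # pure Harrison-Zeldovich spectrum
p_baryonic = power_spectrum(k, k_damp=0.3) # only long waves survive damping
```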

  The first calculations of the evolution of inhomogeneities in the non-linear regime using the Zeldovich approximation were done by Doroshkevich et al. (1973) and Doroshkevich & Shandarin (1973). Tens of thousands of particles were moved according to the growing mode of the Zeldovich approximation until the formation of pancakes, i.e. well into the non-linear regime. A small set of particles (about two dozen) was selected to check the self-consistency of the motion. Each of these particles had a ‘twin’. The original particles moved like all the others, according to the Zeldovich approximation, whereas the trajectories of the twins were obtained by integrating their paths in the gravitational field of all particles, with the forces computed by direct summation of the Newtonian attraction of every particle. Comparing the trajectories of the selected particles with those of their twins allowed the authors to estimate the accuracy of the Zeldovich approximation. The differences between the two sets were small, which indicated the good accuracy of the Zeldovich approximation.
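
  The bookkeeping of this ‘twin’ test can be illustrated with a toy script: evolve all particles with the Zeldovich mapping, integrate a few twins under direct force summation, and compare. The toy below is schematic only (static space, arbitrary units, Plummer-like softening); the original work used the proper cosmological equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic 2-D toy of the 'twin particle' consistency check.
n, n_twins, dt, steps = 400, 20, 1e-3, 200
q = rng.uniform(0.0, 1.0, (n, 2))            # Lagrangian positions
s = 0.05 * np.sin(2 * np.pi * q)             # toy displacement field

idx = rng.choice(n, n_twins, replace=False)  # particles that get twins
x_twin = q[idx].copy()                       # twins start at the same points
v_twin = np.zeros((n_twins, 2))

for step in range(steps):
    D = (step + 1) * dt                      # growth factor stands in for time
    x = q + D * s                            # Zeldovich positions of everyone
    # Direct summation of softened Newtonian forces on the twins only.
    diff = x[None, :, :] - x_twin[:, None, :]
    r2 = (diff ** 2).sum(axis=-1) + 1e-4     # Plummer-like softening
    acc = (diff / r2[..., None] ** 1.5).sum(axis=1) / n   # G * m := 1/n
    v_twin += acc * dt
    x_twin += v_twin * dt

# Accuracy estimate: distance between each twin and its Zeldovich original.
err = np.linalg.norm(x_twin - (q[idx] + D * s[idx]), axis=1).max()
print(f"max twin-original separation: {err:.4f}")
```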

  The distribution of particles shown in Fig. 5.3 was calculated by Sergei Shandarin using only the Zeldovich approximation. The first true N-body simulations (the Zeldovich approximation was applied only in the early phase of the evolution) were done by Doroshkevich et al. (1980) in two dimensions (64² mesh) and by Klypin & Shandarin (1983) in three dimensions (32³ mesh). Both simulations used power spectra cut off on small scales. At this time the non-baryonic nature of dark matter was also a serious possibility to consider, and the first natural candidate was massive neutrinos. Massive neutrinos move with very high speeds, thus density perturbations on small scales are damped. For this reason simulations of a neutrino-dominated Universe were similar to the previous simulations of baryonic matter, where the spectra were also cut off on small scales. In all simulations the formation of a cellular structure of particles was clearly visible, as seen in Fig. 5.3.

  The first simulations in Western countries using the Zeldovich approximation for initial conditions were made by Melott (1983) in two dimensions (100² mesh) and by Centrella & Melott (1983) in three dimensions (32³ mesh, with up to 27 particles per cell). In both cases the Zeldovich method was used to calculate both the initial positions and the evolution. The larger number of particles allowed the evolution of both over-dense and under-dense regions to be seen more clearly. The calculations by Centrella & Melott (1983) confirmed the formation of the cellular structure. Voids at the isodensity level ρ/⟨ρ⟩ = 0.5 are isolated from each other, suggesting a “Swiss-cheese” topology. The distribution of particles and their velocities is similar to that found earlier by Doroshkevich et al. (1980) and Klypin & Shandarin (1983).

  Frenk et al. (1983) and White et al. (1983) also used the Zeldovich method to set initial conditions, but applied direct integration to calculate the evolution of 1000 particles. The formation of a system of filaments was confirmed. However, as shown by White et al. (1983), the structure forms too late. This contradicts observations, which suggest an early formation of galaxies; in other words, the conventional neutrino-dominated cosmology is not viable. Problems with the neutrino-dominated Universe had been discussed earlier by Zeldovich et al. (1982), see Ch. 6. These are the first papers in which Efstathiou and his colleagues used the Zeldovich approximation to set initial conditions.

  I recently discussed the development of numerical simulation methods with Sergei Shandarin. He wrote to me that cosmologists in the West started to use the Zeldovich approximation only after reading the papers of the Zeldovich team cited above, and after listening to Sergei’s talk at the Erice Summer School in 1981, where Simon White was present. Sergei noticed that Frenk et al. (1983) cited the Klypin & Shandarin (1983) paper several times, but not in the context of setting initial conditions, which was actually the core of the Zeldovich method.

  I also asked Sergei how he would explain the fact that astronomers in Western countries were more than ten years behind Moscow scientists in the understanding of the evolution of the Universe. Sergei’s answer was that one possible reason may be the difference in education. In most American and British universities hydrodynamics is not part of the astronomy curriculum. In contrast, to be accepted by the physics community in Moscow, every student had to pass the “Theoretical Minimum” exam, conducted by Landau or Lifshitz in person. The basis of the exam was the Landau–Lifshitz series of books covering all aspects of theoretical physics. Our collaborator Lev Kofman also wanted to be accepted into the Moscow community of physicists and took Lifshitz’s exam, despite having already graduated from Tartu University in theoretical physics.

  In connection with this I remember one winter school in the Caucasus in the early 1980’s, where Andrei Linde gave an overview of inflation theory. All major astrophysicists attended, among them Josif Shklovsky. After the talk Shklovsky said that he had not understood anything of what Andrei said. Andrei’s answer was simple: “This is new physics; it is neither better nor worse than the old physics, it is just different”.

  8.2.2 HDM and CDM simulations

  Numerical simulations of structure evolution for both hot and cold dark matter were made by Melott et al. (1983), and by White et al. (1987) (HDM and CDM models with density parameter Ωm = 1). Both dark matter scenarios were already discussed in Chapters 5 and 6, thus here I repeat only the main results.

  In contrast to the HDM model, in the CDM scenario the structure formation starts at an early epoch. Superclusters consist of a network of filaments of DM halos which can be identified with galaxies, groups, and clusters of galaxies, similar to the observed distribution of galaxies. Thus CDM simulations reproduce quite well the observed structure with clusters, filaments and voids, including quantitative characteristics: the correlation function, the percolation or connectivity, and the multiplicity distribution of systems of galaxies (Melott et al., 1983). The Melott et al. paper ends with a statement: “we see here strong support for the structure formation process in an axion-, gravitino-, or photino-dominated universe. Galaxy formation proceeds from collapse of small-scale perturbations, as in the Hierarchical Clustering theory, but large-scale coherent structure forms as in the Adiabatic theory”.

  The CDM numerical simulation by Adrian Melott was made primarily to explore the viability of the CDM scenario. The CDM power spectrum was modelled by two power-law segments, with n = 1 on large scales and n = −3 on smaller scales, joined by a sharp bend. In spite of this simplification the model behaved surprisingly well on small scales too. This model, and additional models calculated by Adrian and his group with higher resolution, were used by our team to compare the observed distribution of galaxies with the model.
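
  The spectrum just described is easy to write down explicitly; the following snippet is a hedged sketch in which the bend position and amplitude are illustrative, not Melott's actual parameters. The n = −3 branch is normalised so the two segments meet at the bend.

```python
import numpy as np

# Two-segment power spectrum: n = 1 on large scales (small k),
# n = -3 on small scales (large k), with a sharp bend at k_bend.
def broken_power_spectrum(k, k_bend=0.1, amplitude=1.0):
    k = np.asarray(k, dtype=float)
    large = amplitude * k                       # n = 1 branch
    small = amplitude * k_bend**4 * k**-3       # n = -3, continuous at k_bend
    return np.where(k <= k_bend, large, small)
```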

  An extensive series of numerical simulations was made by Marc Davis, George Efstathiou, Carlos Frenk, and Simon White, nicknamed the “Gang of Four”. First, Efstathiou et al. (1985) compared various simulation methods: direct integration, particle–mesh (PM), and particle–particle/particle–mesh (P3M) codes. The last two methods use the cloud-in-cell (CIC) technique to calculate the influence of all particles. The P3M method additionally uses direct integration (particle–particle) to take into account the gravitational interactions of nearby particles, thus its resolution is much higher than that of a simple PM code. The authors used various softening parameters in the calculation of the gravitational force, as well as various density parameters, Ωm = 1 and Ωm = 0.2. In most cases the simulations used 32³ particles in a grid of 64³ cells.
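
  The CIC assignment step shared by the PM and P3M codes is compact enough to show. Below is a minimal one-dimensional version (an illustrative implementation, not the authors' code): each unit-mass particle is shared between its two nearest cells in proportion to its distance from their centres.

```python
import numpy as np

# Minimal 1-D cloud-in-cell (CIC) mass assignment on a periodic grid.
def cic_density(x, n_cells, box):
    """Assign unit-mass particles at positions x to a periodic grid."""
    rho = np.zeros(n_cells)
    h = box / n_cells                   # cell size
    f = x / h - 0.5                     # position relative to cell centres
    i = np.floor(f).astype(int)         # left neighbouring cell
    w = f - i                           # weight given to the right cell
    np.add.at(rho, i % n_cells, 1.0 - w)
    np.add.at(rho, (i + 1) % n_cells, w)
    return rho / h                      # number density per unit length

rng = np.random.default_rng(1)
rho = cic_density(rng.uniform(0, 100.0, 10_000), n_cells=64, box=100.0)
```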

  Davis et al. (1985) described the evolution of large-scale structure in a CDM-dominated Universe, and White et al. (1987) analysed the distribution of clusters, filaments, and voids in such a Universe. Several simulation runs were made using a grid with 64³ cells and 32³ or 60³ particles, in cubic boxes of comoving size L = 280 and L = 360 h⁻¹ Mpc, respectively. In all cases Ωm = 1 was assumed. The modelled structures are in good agreement with observed structures; however, the comparison of models with observations was made only in a qualitative manner.

  This series of models is considered the first comprehensive study of the CDM-dominated Universe.

  8.2.3 Simulations with cosmological constant

  A flat cosmological model with Ωtot = 1 is theoretically preferred (these arguments led to the formulation of inflation theory). On the other hand, direct observational data suggested that the density of matter, including dark matter in galaxies and clusters, yields a value Ωm ≈ 0.2 (Einasto et al., 1974b; Ostriker et al., 1974). Thus the remainder must lie in some smoothly distributed background. The most suitable candidate for such a background is the cosmological term, ΩΛ = Ωtot − Ωm ≈ 0.8. Additional arguments in favour of a flat cosmological model come from data on the Hubble constant and on the age of the Universe. A flat cosmological model with a Λ-term was discussed by Gunn & Tinsley (1975), Turner et al. (1984), Kofman & Starobinsky (1985), as well as during the Second Tartu Cosmology Seminar (Kofman et al., 1986).
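
  The age argument can be made concrete with a short calculation: for a flat model the age is t0 = (1/H0) ∫ da/(a E(a)) with E(a) = sqrt(Ωm/a³ + ΩΛ). The sketch below (H0 = 70 km/s/Mpc and the integration grid are illustrative assumptions) shows that a flat Ωm = 0.2 model is substantially older than an Ωm = 1 model for the same Hubble constant.

```python
import numpy as np

# Age of a flat universe: t0 = (1/H0) * Integral_0^1 da / (a * E(a)),
# with E(a) = sqrt(Om/a^3 + OL) and Om + OL = 1 (flatness).
def age_gyr(omega_m, h0=70.0):
    omega_l = 1.0 - omega_m                    # flatness condition
    a = np.linspace(1e-4, 1.0, 200_000)
    e = np.sqrt(omega_m / a**3 + omega_l)
    integral = np.sum(1.0 / (a * e)) * (a[1] - a[0])
    return (977.8 / h0) * integral             # 1/H0 = 977.8/H0 Gyr

print(f"Om = 1.0 : t0 = {age_gyr(1.0):5.1f} Gyr")
print(f"Om = 0.2 : t0 = {age_gyr(0.2):5.1f} Gyr")  # flat LCDM model is older
```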

  In the 1970’s and early 1980’s our team compared observations with simulations calculated by the Zeldovich team and later by Adrian Melott. By the mid 1980’s it was clear that we needed our own models suited to our particular tasks. Our student Mirt Gramann started to write a program for numerical simulations. As a starting point she took the particle–mesh code developed by Anatoly Klypin. Following Enn’s suggestion, Mirt Gramann calculated models with the cosmological term. The first trial calculations in 1984 were done with a small number of particles. Soon Tartu Observatory got its first UNIX computer with 2 MB of core memory, which made it possible to simulate three-dimensional models with 64³ particles in a 64³ mesh. One run took about a month, which was not too much in those days. Several simulations were made with cube sizes of 40 h⁻¹ Mpc and 80 h⁻¹ Mpc. Some models were also calculated for higher values of the matter density, Ωm = 0.4 and Ωm = 1.0; the latter corresponds to the “standard” CDM model of those days (Gramann, 1987, 1988, 1990). The first application of the model with a cosmological constant was in our topology paper, Einasto et al. (1986a).

  In 1988 I received an invitation to the Institute of Astronomy of Cambridge University. At that time it was possible to take along our post-graduate student Mirt Gramann. Our basic goal was to compare her ΛCDM models with other models and with observational data. Carlos Frenk, a member of the US–British team modelling the structure evolution, gave us a copy of the data of their standard CDM model with density parameter Ωm = 1 (White et al., 1987). The comparison suggested that the structure of the cosmic web is similar in the ΛCDM and standard CDM models. However, in order to get the correct amplitude of density fluctuations, the evolution of the standard CDM model has to be stopped at an earlier epoch (Gramann, 1990). In ΛCDM there are no problems with timing: all model properties fit the observed quantities well at just the right time.
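
  The timing difference can be illustrated with the linear growth factor, D(a) ∝ E(a) ∫ da′/(a′E(a′))³. In an Ωm = 1 model D grows in proportion to a, while in a flat low-density model the Λ term slows the late-time growth, which is why the standard CDM run had to be stopped earlier to match the observed amplitude. A hedged numerical sketch (grids and parameter values are illustrative):

```python
import numpy as np

# Linear growth factor: D(a) ~ E(a) * Integral_0^a da' / (a' E(a'))^3.
def growth(a_eval, omega_m):
    omega_l = 1.0 - omega_m                    # flat model
    a = np.linspace(1e-5, a_eval, 200_000)
    e = np.sqrt(omega_m / a**3 + omega_l)
    integ = np.sum(1.0 / (a * e) ** 3) * (a[1] - a[0])
    return e[-1] * integ

for om in (1.0, 0.2):
    d = growth(1.0, om) / growth(0.5, om)      # growth between a=0.5 and a=1
    print(f"Om={om}: D(a=1)/D(a=0.5) = {d:.2f}")
# Om=1 gives exactly 2 (D ~ a); the flat Om=0.2 model gives less,
# i.e. its late-time growth is suppressed by Lambda.
```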

  We compared our ΛCDM models with observations using various quantitative methods. The first check was always the correlation function. Additional tests included connectivity or percolation, the multiplicity of connected systems, the filling factor of both filled and empty regions at various density levels, the void diameter distribution, and the void probability distribution. All quantitative checks suggested that the ΛCDM model represents real galaxy samples very well (Einasto & Einasto, 1989; Gramann, 1990; Einasto et al., 1991). Our ΛCDM model was probably the first with the presently popular cosmological term.
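
  As an illustration of the first of these checks, here is a minimal correlation function estimator: count pairs in the data, count pairs in an equal-sized random catalogue, and take ξ = DD/RR − 1. The sample size, box, and binning below are illustrative assumptions.

```python
import numpy as np

# Two-point correlation function via pair counting against a random
# catalogue, using the simple natural estimator xi = DD/RR - 1.
def xi_estimate(data, box, bins, rng=np.random.default_rng(2)):
    def pair_dist(p):
        d = p[:, None, :] - p[None, :, :]
        d -= box * np.round(d / box)             # periodic boundaries
        r = np.sqrt((d ** 2).sum(axis=-1))
        return r[np.triu_indices(len(p), k=1)]   # unique pairs only
    rand = rng.uniform(0.0, box, data.shape)     # unclustered comparison set
    dd, _ = np.histogram(pair_dist(data), bins=bins)
    rr, _ = np.histogram(pair_dist(rand), bins=bins)
    return dd / np.maximum(rr, 1) - 1.0          # ~0 for an unclustered sample

pts = np.random.default_rng(3).uniform(0, 100.0, (500, 3))
print(xi_estimate(pts, box=100.0, bins=np.linspace(1, 20, 11)))
```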

  Independent evidence favouring a CDM model with the cosmological term was found by Efstathiou et al. (1990). Efstathiou made a series of N-body simulations with the “standard” Ωm = 1 CDM model and several spatially flat ΛCDM models with matter density Ωm = 0.2, for comoving box sizes of 50, 150, and 300 h⁻¹ Mpc. For all models angular correlation functions were calculated and compared with the correlation functions found for the galaxy survey made with the APM (Automatic Plate Measuring) machine at the Institute of Astronomy of Cambridge University. The authors found that the “standard” model lacks power on large scales, whereas the ΛCDM model fits the observed correlation function well.

  An interesting remark: together with Mirt Gramann I visited the Institute of Astronomy in Cambridge in 1988, 1989, and 1990, and we had many discussions with Cambridge astronomers on numerical simulations of the structure. Our comparison of ΛCDM and CDM models with observations was made mostly in Cambridge, thus George Efstathiou and his colleagues were aware of our ΛCDM models. Their paper, Efstathiou et al. (1990), contains, however, no citations to our papers on this subject.

  An essential property of ΛCDM models is the accelerating expansion of the Universe: for Ωm ≈ 0.2 and ΩΛ ≈ 0.8 the deceleration parameter q0 = Ωm/2 − ΩΛ is negative. Proof of this effect came from the comparison of distant and nearby supernovae, see below.

  8.2.4 Modern cosmological simulations

  In the following years the techniques for numerical simulations of the evolution of the structure of the Universe and of galaxy formation progressed enormously. A detailed description of this development is, however, outside the scope of this review. I mention only one of the largest simulations of the evolution of the structure, the Millennium Simulation, made at the Max Planck Institute for Astrophysics in Garching by Volker Springel and collaborators (Springel et al., 2005; Gao et al., 2005a; Springel et al., 2006). The simulation assumes the ΛCDM initial power spectrum. A cube of comoving size 500 h⁻¹ Mpc was simulated using about 10 billion dark matter particles, which allowed the evolution of small-scale features in galaxies to be followed. Using a semi-analytic model, the formation and evolution of galaxies was also analysed (Di Matteo et al., 2005; Gao et al., 2005b; Croton et al., 2006). For the simulated galaxies, photometric properties, masses, luminosities, and sizes of the principal components (bulge, disk) were found. The comparison of this simulated galaxy catalogue with observations shows that the simulation was very successful.

  The results of the Millennium Simulation are frequently used as a starting point for further, more detailed simulations of the evolution of single galaxies. We used the Millennium Simulation to construct simulated supercluster catalogues and to compare the properties of real and simulated superclusters (Einasto et al., 2007a,c,d). One difference was evident: there are more very rich superclusters than expected from simulations, see Fig. 7.22 (Einasto et al., 2006). One possible explanation of the large difference between the luminosity distributions of real and simulated samples is that the role of very large density perturbations is underestimated. Another feasible explanation is the presence of some unknown process in the very early Universe which gives rise to the formation of extremely luminous and massive superclusters. The size of the simulation box, L = 500 h⁻¹ Mpc, is, however, not sufficient to decide which of these possibilities is correct.

  One difficulty with the original pancake scenario by Zeldovich was the shape of objects formed during the collapse. It was assumed that the forming systems are flat pancake-like objects, whereas the dominant features of the cosmic web are filaments (Einasto et al., 1980a, 1983b). This discrepancy was explained by Bond et al. (1996), who showed that, due to tidal forces, in most cases essentially one-dimensional structures, i.e. filaments, form.

  The ΛCDM model of structure formation and evolution combines the essential aspects of both original structure formation models, the pancake and the hierarchical clustering scenarios. The first structures form at very early epochs, soon after recombination, in places where the primordial matter has the highest density, i.e. in the central regions of future superclusters. The first objects to form are small dwarf galaxies, which grow by the infall of primordial matter and of other small galaxies. Soon after the formation of the central galaxy, other galaxies fall into the gravitational potential well of the supercluster. These clusters have had many merger events and have “eaten” all their nearby companions. During each merger event the cluster suffers a slight shift of its position. Since in the central regions of superclusters merging galaxies come from all directions, the cluster settles more and more accurately into the centre of the gravitational well of the supercluster. This explains the fact that very rich clusters have almost no residual motion with respect to the smooth Hubble flow. According to the old paradigm, galaxies and clusters form by random hierarchical clustering and could have slow motions only in a very low-density universe, an argument against the presence of a large amount of dark matter raised by Materne & Tammann (1976). However, the low random velocity of the central galaxies of clusters holds only for the richest clusters near the centres of superclusters; see the discussion above on the morphology of clusters in superclusters.

 
