To study the topology of model and real samples we used high-resolution density fields and divided them, at a varying threshold density level, into cells of ‘empty’ or ‘filled’ regions, depending on whether the density was lower or higher than the threshold, respectively. Next we found systems consisting of empty cells (voids) or of filled cells (galaxy systems). Neighbouring empty (or filled) cells belong to one system when they have at least one common sidewall. In this way we found all individual voids and galaxy systems in the sample. Finally we found the largest system and calculated its length (the maximal extent in x, y, or z, in units of the cube length of the sample) and its filling factor (its volume in units of the total volume of the sample). The largest systems were found for both voids (E) and clusters (F) over a large range of threshold densities.
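The procedure can be illustrated with a short sketch. This is only a minimal illustration under simplifying assumptions (a cubic density grid, face connectivity for the “common sidewall” condition, non-periodic boundaries); the function name and interface are hypothetical and do not come from the original analysis code.

```python
import numpy as np
from scipy import ndimage

def largest_system_stats(density, threshold, filled=True):
    """Find the largest system of 'filled' (or 'empty') cells and return its
    length and filling factor, both in units of the sample cube."""
    mask = density > threshold if filled else density <= threshold
    # Face (6-neighbour) connectivity: cells belong to one system when they
    # share a common sidewall.
    structure = ndimage.generate_binary_structure(3, 1)
    labels, nsystems = ndimage.label(mask, structure=structure)
    if nsystems == 0:
        return 0.0, 0.0
    sizes = np.bincount(labels.ravel())[1:]          # cells per system
    largest = int(np.argmax(sizes)) + 1              # label of the largest system
    cells = np.argwhere(labels == largest)
    n = density.shape[0]                             # cube length in cells
    # Maximal extent along x, y or z, in units of the cube length.
    length = (cells.max(axis=0) - cells.min(axis=0) + 1).max() / n
    # Volume of the largest system in units of the total sample volume.
    filling_factor = sizes[largest - 1] / density.size
    return length, filling_factor
```

Repeating this for a range of threshold densities, once for filled cells and once for empty cells, gives curves of the kind discussed below.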
Our principal results are shown in Fig. 7.4. The figure shows that the behaviour of voids and clusters in unbiased models is symmetrical with respect to the mean density over the various threshold densities. The length and the volume of voids (clusters) are small at small (high) threshold densities. When the threshold density approaches the mean density, the largest voids (clusters) rapidly increase in length and volume.
The biased model samples and the real samples behave completely differently. For all threshold densities considered the largest void crosses the whole sample volume. The dependence of the length and the volume of the largest cluster on the density threshold is approximately the same (at the present epoch) for all model and real samples. Note, however, the differences between the volumes of voids and clusters in the biased standard CDM and ΛCDM models. The latter model behaves in this respect very similarly to the observed sample, while the standard CDM model does not. This was the first quantitative evidence for the superiority of the ΛCDM model over the standard CDM model.
The main conclusion of our topology study was that the topology of both voids and clusters depends strongly on the biasing and on the density threshold applied in the definition of voids and clusters. A honeycomb-like cellular topology with isolated voids is seen only in unbiased model samples (all particles present) at a very low density threshold. At medium density thresholds all model and real samples have a sponge topology, i.e. both voids and clusters are multiply connected and form percolating systems which span the whole volume under study. At high threshold densities the topology of clusters is of the type “islands in the ocean”.
We sent our topology paper to “Monthly Notices”, but received a very negative referee report. We spent about half a year making a new analysis. To our surprise, the referee rejected the paper again, asking us to make changes almost opposite to his previous suggestions. We did not know what to do; we had a number of other projects running, so the revision of the paper stalled. As a result we did not revise the paper again; we lost the “battle” with the referee. Here again the words of Öpik (1977) are relevant: it is difficult to publish results which do not fit into the conventional world picture or paradigm. The preprint of the paper is now available on our website; at the time, the printed preprint was sent to all observatories immediately after it was printed.
Rich Gott and his collaborators continued the study of the topology of the cosmic web (Park & Gott, 1991; Park et al., 1992b; Vogeley et al., 1994b), and now this is a respectable topic of cosmological studies.
7.1.3 Fractal properties of the cosmic web
The correlation function is probably the most commonly used statistic in cosmology. By the mid-1980s it was clear that filaments and voids are important ingredients of the cosmic web, and we started to look at how they influence the correlation function. We used the CfA redshift survey with magnitude limit mB = 14.5 and the ZCAT compilation by John Huchra. The observed sample was divided into a number of cubic and conical volume-limited samples of various depths and absolute magnitude limits. Velocities had been corrected for solar motion, Virgo-centric flow, and peculiar velocities in groups and clusters. A sample of Abell clusters was also used. For comparison we used unbiased and biased model samples based on the HDM, CDM, and hierarchical clustering models, calculated by Adrian Melott.
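For readers unfamiliar with the statistic, the sketch below shows one simple pair-count estimator of the correlation function, ξ(r) = DD/RR − 1, with pair counts normalised by the squared catalogue sizes. It is only an illustration under simplifying assumptions; it is not the estimator or code used in the papers discussed here, and the random catalogue and function name are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def correlation_function(galaxies, randoms, edges):
    """Estimate xi(r) in separation bins given by `edges` (h^-1 Mpc) from a
    galaxy sample and a random sample filling the same volume, both given
    as (N, 3) coordinate arrays. The randoms should be dense enough that
    no bin is empty."""
    gal_tree, ran_tree = cKDTree(galaxies), cKDTree(randoms)
    # Cumulative pair counts at the bin edges; differencing gives per-bin counts.
    dd = np.diff(gal_tree.count_neighbors(gal_tree, edges))
    rr = np.diff(ran_tree.count_neighbors(ran_tree, edges))
    # Normalise by the number of pairs in each catalogue and form DD/RR - 1.
    return (dd / len(galaxies) ** 2) / (rr / len(randoms) ** 2) - 1.0
```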
Our analysis confirmed earlier results by Zeldovich et al. (1982), Melott et al. (1983) and Einasto et al. (1984) that all galaxy correlation functions have a shoulder, as seen in Fig. 7.2. This phenomenon is due to the presence of galaxy systems of different shapes: spherical or slightly elongated in densely populated clusters, and extremely elongated and less dense in filaments.
Fig. 7.5 The correlation length r0 versus the cube length L or limiting redshift V0. Open circles are samples in the direction of the Coma supercluster; crosses, samples in the direction of the Perseus supercluster; triangles, samples of Abell clusters of galaxies (Einasto et al., 1986b).
Next we found that the correlation length increases with the sample size, see Fig. 7.5 (Einasto et al., 1986b). To check the dependence of the correlation function on the limiting magnitude, we calculated correlation functions for the sample with limiting redshift V0 = 2000 km/s using different limiting absolute magnitudes (−17.5, −18.5, −20.0), and found no difference. Our interpretation was therefore that the increase of the correlation length with the sample size is due to the increasing role of voids in samples of larger size. Our data suggested that the correlation length of a fair sample of the universe is r0 ≈ 10 h−1 Mpc, about twice the generally accepted value. Later we found, using more complete samples of galaxies, that the traditional value, r0 ≈ 5 h−1 Mpc, is correct.
The paper by Einasto et al. (1986b) triggered a number of subsequent studies of the correlation function. Initially we did not understand that our results actually hinted at the fractal nature of the galaxy distribution. The fractal description of the galaxy distribution was suggested by Mandelbrot (1982). Remo Ruffini followed our studies carefully; once he invited Enn and me to participate in the Marcel Grossmann Meeting. We could not attend, but our talk was included in the proceedings (Einasto & Saar, 1982). When the preprint of our correlation function paper (Einasto et al., 1986b) was distributed, he contacted us immediately. He said that our results confirmed the fractal nature of the galaxy distribution, and we started a further joint study of this question (Calzetti et al., 1987).
Pietronero (1987) used our results to suggest that there exists a single fractal (self-similar) structure extending from the galaxy scale up to the present limits of observation. Pietronero emphasised that the present observational data do not support the existence of a length scale above which the distribution of matter becomes homogeneous. If I understand his opinion correctly, the fractal nature casts serious doubt on current determinations of “a fair sample” from which one deduces the average mass density and the cosmological density parameter Ω. Pietronero and a number of other astronomers suggested a fractal cosmology in which the structure of the whole Universe is characterised by a single fractal. According to this concept the mean density of matter is almost zero.
In spring 1987 Enn Saar and I were invited by Bernard Jones to visit Nordita in Copenhagen. The possible fractal nature of the galaxy distribution was very topical at this time, and we started to study this phenomenon in more detail. Soon Vicent Martinez, a postdoc from Valencia, joined our team. My task was the collection and analysis of observational data; Enn, Bernard and Vicent studied the problem from the theoretical point of view. Soon it became clear that the distribution of galaxies in space cannot be described by a simple fractal (as already suggested by the shape of the correlation function); instead a multifractal description is needed. In the fractal description an effective fractal dimension of galaxy systems is defined. As mentioned above, on small scales galaxies are well clustered: they form groups and clusters, which have an approximately spherical shape. At larger distances the role of filaments dominates. Filaments are essentially one-dimensional, thus the effective fractal dimension changes. This is clearly seen in the correlation function, see Fig. 7.2. The results of our joint study were published by Jones et al. (1988).
In summer 1987 an IAU Symposium devoted to the large-scale structure was held near Lake Balaton. We gave a talk on the fractal nature of the distribution of galaxies (Einasto et al., 1988). According to our analysis the structure of the Universe can be described as a multifractal, i.e. the fractal dimension is different on different scales. The talk was presented by Bernard.
One of the main specialists in fractal theory, Benoit Mandelbrot, participated in the Symposium. After the Symposium a smaller group of astronomers held a workshop in Budapest. Here the main speakers were Yakov Zeldovich and Benoit Mandelbrot. Zeldovich disliked the fractal description for two reasons. First, in his opinion the fractal description contains no hints about the physical nature of the distribution of galaxies and the evolution of the structure of the cosmic web. Second, if the structure is described by a simple fractal, then the mean density of the Universe is zero, which contradicts all other independent data. Zeldovich stressed that the fractal description can be valid only in a limited range of scales, because on very large scales the distribution of galaxies is rather uniform: there are no extremely large systems of galaxies. The cosmic web itself is the largest system, and on large scales it is statistically homogeneous.
A few years later Benoit Mandelbrot visited the Soviet Union and invited me to Leningrad to discuss the fractal nature of the galaxy distribution. We agreed that the fractal nature of the distribution is clearly evident. Like Zeldovich, I held the opinion that there exist rather well-defined lower and upper limits to the range of scales where the fractal description can be used. The lower limit is probably the typical scale of galaxies with their dark halos, ~0.1 h−1 Mpc. The upper limit is given by the characteristic scale of the supercluster-void network, ~100 h−1 Mpc.
Together with Anatoli Klypin we continued the study of the fractal properties of galaxy systems. This time our attention was devoted to the Local and Coma superclusters and their environment (Klypin et al., 1989). We noticed that with increasing depth of the sample the ratio of the volume occupied by systems of galaxies to the full volume of the sample (the filling factor) decreases. This phenomenon is a basic property of a fractal distribution. It also hints at the self-similarity of systems of galaxies of various scales. Einasto et al. (1989) and Einasto & Einasto (1989) studied the self-similarity of voids in the galaxy distribution, another manifestation of the hierarchical nature of the distribution of galaxies and voids.
To investigate the scaling law of the distribution of galaxies, Martinez & Jones (1990) reanalysed the data of Einasto et al. (1986b) and confirmed the increase of the correlation length with the size of the sample. However, they suggested that the increase of the correlation length may be due to the increase of the luminosity limit of the subsamples, because all samples are volume (absolute magnitude) limited.
Maret Einasto (1991) analysed the behaviour of the function g(r) = 1 + ξ(r) for volume-limited samples of various sizes and absolute magnitude limits. The apparent magnitude limit used was m = 14.5. The slope γ of the function g(r) in log-log representation is related to the fractal (correlation) dimension D of the sample as D = 3 − γ (Pietronero, 1987; Martinez & Jones, 1990). This time a high-luminosity galaxy subsample was also used; this sample had a larger correlation length. The slope of the function g(r) changes at the scale ~3 h−1 Mpc. The change of the slope of the correlation function at this scale is also visible in Fig. 7.2 of Zeldovich et al. (1982). The increase of the amplitude of the correlation function with luminosity is due to the biasing problem, see below.
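A minimal sketch of this relation follows, assuming the slope γ is obtained by a least-squares fit in log-log representation; the input power law is a placeholder for illustration, not the published data.

```python
import numpy as np

def fractal_dimension(r, xi):
    """Fit the log-log slope of g(r) = 1 + xi(r) and return D = 3 - gamma."""
    g = 1.0 + np.asarray(xi)
    slope = np.polyfit(np.log(r), np.log(g), 1)[0]
    gamma = -slope
    return 3.0 - gamma

# Example with a placeholder power law: on scales where xi >> 1,
# xi(r) = (r / 5.0)**-1.8 gives gamma close to 1.8 and hence D close to 1.2.
r = np.logspace(-1.0, 0.0, 20)       # separations from 0.1 to 1 h^-1 Mpc
xi = (r / 5.0) ** -1.8
print(fractal_dimension(r, xi))      # roughly 1.2
```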
Similar analyses were made by Martinez et al. (1990), Martinez & Jones (1990) and Guzzo et al. (1991), with rather similar results. Guzzo et al. (1991) found correlation dimensions D ≈ 1.2 and D ≈ 2.2 for small and large separations, respectively.
Einasto et al. (1997b) estimated the fractal dimension of a sample of clusters of galaxies using the correlation function. The fractal dimension of the sample of all clusters is D ≈ 2, while for clusters which belong to very rich superclusters it is DSCL8 ≈ 1.4. Thus structures delineated by very rich superclusters are closer to one-dimensional, whereas structures defined by all clusters are closer to two-dimensional.
These results raise the question: what is the size of a representative sample of the Universe? The galaxy correlation function has a correlation length r0 ≈ 5 h−1 Mpc; on scales r ≥ 5 r0 the correlation function is close to zero. Thus it was thought that the size of a representative sample is of the order of 5 r0 ≈ 25 h−1 Mpc. The presence of the cosmic web with filaments, superclusters and voids of sizes up to ≈100 h−1 Mpc suggests that samples smaller than this are probably not representative enough. Einasto & Gramann (1993) found that the transition scale is at least 175 h−1 Mpc, see below. If we expect a representative sample to contain superclusters of various richness as well, then the transition scale to homogeneity is evidently even larger, since extremely rich superclusters are very rare (Einasto et al., 2006).
7.1.4 Physical biasing
One problem to solve was to find an explanation for the absence of galaxies in voids. Observational data show that there are no galaxies in voids, except for galaxy filaments joining superclusters to the web, as seen in Figs. 5.5 and 5.4. In contrast, numerical simulations show the presence of a rarefied population of test particles in voids, see Fig. 5.3. To a first approximation the absence of galaxies in voids was explained by Einasto et al. (1980a). Enn Saar developed an approximate analytical model of the evolution of density perturbations in under- and over-dense regions, based on the ideas of Zeldovich (1970). He found that matter flows out of under-dense regions and collects in over-dense regions, where it eventually collapses (forming pancakes) into galaxies and clusters. In under-dense regions the density decreases continuously, but never reaches zero. In other words, there must be primordial matter in voids, see Fig. 7.6. Galaxy formation occurs not everywhere, but only in regions where the matter has collapsed.
The referee of the paper by Einasto et al. (1980a), Michael Fall, asked us to exclude most theoretical considerations from the paper, so the density evolution formula was published only many years later, when we returned to the void evacuation problem (Einasto et al., 1994a). In autumn 1980 I was invited to visit the Institute of Astronomy, Cambridge, and gave a seminar talk on the large-scale distribution of galaxies. After the talk Michael Fall asked: “all this looks nice, but how can you explain filamentary superclusters and voids?” My answer was: “my task is to find out what the structure of the Universe actually is; explaining the observed structure is the task of theorists”. At that time we had just started to develop quantitative tests to compare observations with theory, so it was too early to make more conclusive statements.
Fig. 7.6 The evolution of over- and under-densities according to an analytical approximation found by Enn Saar. The time t is shown in arbitrary units, the density D is expressed in mean density units. Higher over-density regions collapse earlier; compare the density changes corresponding to tform = 1 and tform = 2. At the structure formation time the density in the under-dense region is D(tform) = 0.5. The evolution was calculated using formulae given by Einasto et al. (1994a).
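The exact formulae of Einasto et al. (1994a) are not reproduced here; the sketch below assumes the standard one-dimensional Zeldovich approximation, in which a mass element with deformation eigenvalue λ evolves in density as D(t) = 1/(1 − λt) (in mean density units, time in arbitrary units). This simple form is consistent with the behaviour described above: over-densities collapse in finite time, while under-densities decrease in density but never reach zero, with D(tform) = 0.5 for an under-density matching an over-density that collapses at tform.

```python
import numpy as np

def density_evolution(lam, t):
    """Density (in mean-density units) of a mass element with deformation
    eigenvalue `lam` in the one-dimensional Zeldovich approximation.
    lam > 0: over-dense, collapses (D -> infinity) at t = 1/lam.
    lam < 0: under-dense, D decreases monotonically but never reaches zero."""
    return 1.0 / (1.0 - lam * np.asarray(t, dtype=float))

t = np.linspace(0.0, 0.9, 10)
print(density_evolution(+1.0, t))    # over-density collapsing at t_form = 1
print(density_evolution(-1.0, t))    # the matching under-density
# At t = 1 the matching under-density has D = 1/(1 + 1) = 0.5, the value
# quoted for D(t_form) in the figure caption.
```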
Soon results of such tests were available (Zeldovich et al., 1982). The multiplicity test clearly shows the difference between observational and model samples. As already mentioned, this test suggests that galaxy formation is a threshold phenomenon: in low-density regions no galaxy formation occurs at all, and there the matter is still in pre-galactic unclustered form. In the following years, when comparing observations with simulations, we always excluded non-clustered particles in low-density regions to get simulated samples that could be compared with observed galaxy samples (Melott et al., 1983; Einasto et al., 1986a; Einasto & Saar, 1987; Einasto et al., 1989).
The term “biasing” was introduced by Kaiser (1984) to denote the difference between the correlation functions of galaxies and clusters of galaxies. The correlation function and the power spectrum are connected by the Fourier transform, thus a similar difference exists between the power spectra of galaxies and clusters. In the following years the term “biasing” was used in a broader context to quantify the difference between the distributions of various populations: dark matter, galaxies of various luminosity, clusters of galaxies, etc.
In our simple analysis of the evolution of over- and under-dense regions we noticed that in over-dense regions matter contracts to form halos which can evolve into galaxies or clusters, while in under-dense regions the density decreases and there are no conditions for galaxy formation. This picture seems natural, as it is well known that only density enhancements exceeding a density contrast of about 1.68 relative to the mean density can collapse within the Hubble time (Bardeen et al., 1986). This biasing scheme was used by White et al. (1987) to find simulated galaxies in N-body experiments.
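As an illustration only, the following sketch applies such a threshold selection to a gridded density field; the smoothing length and the details are arbitrary, and this is not the actual scheme of White et al. (1987).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def galaxy_formation_mask(density, delta_c=1.68, smoothing=1.0):
    """Boolean mask of grid cells allowed to form galaxies: cells whose
    smoothed density contrast exceeds the collapse threshold delta_c."""
    delta = density / density.mean() - 1.0            # density contrast field
    delta_smooth = gaussian_filter(delta, smoothing)  # smoothing in grid cells
    return delta_smooth > delta_c
```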
Another possibility to define the biasing parameter b is to use the ratio of the density contrasts of galaxies and matter at location x: δgal(x) = b δm(x). This definition is based on the tacit assumption that galaxies are randomly placed; in other words, that voids are just regions of lower galaxy density. But there is a problem with this interpretation. Observations show that there are no galaxies in voids, except for faint galaxy filaments crossing voids between clusters or superclusters. Thus we expect b = 0 in void regions outside filaments. If the distribution of galaxies follows the distribution of matter in high-density regions, then in these regions b = 1. In order to apply the above formula and to find the mean value of the biasing parameter, most authors smooth the density field with a rather large smoothing length (up to 8 h−1 Mpc), so that the density of galaxies is everywhere non-zero. But this procedure smooths galaxies into regions which are actually empty, and thus does not take the actual distribution of galaxies into account.
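The sketch below illustrates this smoothing-based estimate of b, and thereby the problem just described; the smoothing length and function names are arbitrary and do not correspond to any particular published analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def biasing_parameter(galaxy_density, matter_density, smoothing):
    """Least-squares slope b of delta_gal = b * delta_m over the grid,
    after smoothing both density contrast fields."""
    d_gal = gaussian_filter(galaxy_density / galaxy_density.mean() - 1.0, smoothing)
    d_m = gaussian_filter(matter_density / matter_density.mean() - 1.0, smoothing)
    # The smoothing makes d_gal non-zero even in cells that contain no
    # galaxies, which is exactly the problem discussed above.
    return float(np.sum(d_gal * d_m) / np.sum(d_m * d_m))
```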