The following content uses material from the Wikipedia article which can be viewed, along with the content contribution references and acknowledgements, at: Dark_energy, and is released under the Creative Commons Attribution-Share-Alike License 3.0. Please note that the GNU Free Documentation License may also apply to some text material. Images may not fall under either of the aforementioned licences, and particular attention needs to be paid when considering the use of images or other media files. For full reuse and copyright policy details, please refer to: Wikipedia content reuse copyright information.
In physical cosmology and astronomy, dark energy is an unknown form of energy that affects the universe on the largest scales. The first observational evidence for its existence came from supernovae measurements, which showed that the universe does not expand at a constant rate; rather, the expansion of the universe is accelerating. Understanding the evolution of the universe requires knowledge of the starting conditions and what it consists of. Prior to these observations, the only forms of matter-energy known to exist were ordinary matter, dark matter, and radiation. Measurements of the cosmic microwave background suggest the universe began in a hot Big Bang, from which general relativity explains its evolution and the subsequent large scale motion. Without introducing a new form of energy, there was no way to explain how an accelerating universe could be measured. Since the 1990s, dark energy has been the most accepted premise to account for the accelerated expansion. As of 2020, there are active areas of cosmology research aimed at understanding the fundamental nature of dark energy: is it a feature of measurement errors, or do modifications to general relativity need to be made?
Assuming that the concordance model of cosmology is correct, the best current measurements indicate that dark energy contributes 68% of the total energy in the present-day observable universe. The mass–energy of dark matter and ordinary (baryonic) matter contributes 27% and 5%, respectively, and other components such as neutrinos and photons contribute a very small amount. The density of dark energy is very low (~7 × 10⁻³⁰ g/cm³), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the mass–energy of the universe because it is uniform across space.
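As a rough consistency check, the quoted dark energy density can be reproduced from the critical density, ρ_c = 3H₀²/8πG, and the dark-energy fraction given above. The Hubble constant used below is an assumed round value, so the result is an order-of-magnitude sketch rather than a precise figure:

```python
import math

# Illustrative estimate of the dark energy density (assumed H0 = 70 km/s/Mpc;
# the 68% fraction is taken from the section above).
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22           # one megaparsec in metres
H0 = 70.0 * 1000 / MPC_IN_M    # Hubble constant converted to s^-1

# Critical density: rho_c = 3 H0^2 / (8 pi G)
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3

omega_lambda = 0.68                        # dark-energy fraction of the total
rho_de = omega_lambda * rho_crit           # kg/m^3
rho_de_gcm3 = rho_de * 1e-3                # kg/m^3 -> g/cm^3
# rho_de_gcm3 comes out in the vicinity of the quoted ~7e-30 g/cm^3
```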
Two proposed forms of dark energy are the cosmological constant, representing a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to the zero-point radiation of space, i.e., the vacuum energy. Scalar fields that change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.
Due to the toy model nature of concordance cosmology, some experts believe that a more accurate general relativistic treatment of the structures that exist on all scales in the real Universe may do away with the need to invoke dark energy. Inhomogeneous cosmologies, which attempt to account for the backreaction of structure formation on the metric, generally do not acknowledge any dark energy contribution to the energy density of the Universe.
The "cosmological constant" is a constant term that can be added to Einstein's field equation of general relativity. If considered as a "source term" in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or "vacuum energy".
The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution of the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Einstein gave the cosmological constant the symbol Λ (capital lambda). Einstein stated that the cosmological constant required that 'empty space takes the role of gravitating negative masses which are distributed all over the interstellar space'.
The mechanism was an example of fine-tuning, and it was later realized that Einstein's static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and not static at all. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder.
Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe.
Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter (CDM) and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al. and in Perlmutter et al., and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background (CMB) experiments observed the first acoustic peak in the CMB, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters.
High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations.
As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.
The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain in the realm of speculation. Dark energy is thought to be very homogeneous and not very dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is quite rarefied and un-massive (roughly 10⁻²⁷ kg/m³), it is unlikely to be detectable in laboratory experiments. The reason dark energy can have such a profound effect on the universe, making up 68% of universal density in spite of being so dilute, is that it uniformly fills otherwise empty space.
Independently of its actual nature, dark energy would need to have a strong negative pressure (repulsive action), like radiation pressure in a metamaterial, to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure and viscosity. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion".
In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. Matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³, while radiation is anything whose energy density scales with the inverse fourth power of the scale factor (ρ ∝ a⁻⁴). This can be understood intuitively: for an ordinary particle in a cube-shaped box, doubling the length of an edge of the box decreases the density (and hence energy density) by a factor of eight (2³). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift.
The final component, dark energy, is an intrinsic property of space, and so has a constant energy density regardless of the volume under consideration (ρ ∝ a⁰). Thus, unlike ordinary matter, it does not get diluted with the expansion of space.
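The three scaling laws above can be sketched directly. The following snippet (with densities normalised to 1 at a = 1 purely for illustration) shows that doubling the scale factor dilutes matter by 2³ = 8 and radiation by 2⁴ = 16 while leaving dark energy untouched:

```python
# How each component's energy density scales with the scale factor a:
# matter ~ a**-3, radiation ~ a**-4, dark energy (cosmological constant) ~ a**0.
def densities(a, rho_m0=1.0, rho_r0=1.0, rho_de0=1.0):
    """Component densities at scale factor a, normalised to 1 at a = 1."""
    return {
        "matter": rho_m0 * a**-3,       # diluted by volume
        "radiation": rho_r0 * a**-4,    # volume dilution plus redshift
        "dark_energy": rho_de0 * a**0,  # constant: a property of space itself
    }

# Doubling the scale factor (a: 1 -> 2):
d = densities(2.0)
# d["matter"] is 1/8, d["radiation"] is 1/16, d["dark_energy"] stays 1.0
```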
The evidence for dark energy is indirect but comes from three independent sources: supernova distance measurements, the cosmic microwave background, and the large-scale structure of the cosmos.
In 1998, the High-Z Supernova Search Team published observations of Type Ia ("one-A") supernovae. In 1999, the Supernova Cosmology Project followed by suggesting that the expansion of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery.
Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model. Some people argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the local measured Hubble constant.
Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme and consistent luminosity.
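The standard-candle logic above reduces to the distance modulus relation m − M = 5 log₁₀(d / 10 pc): knowing the intrinsic brightness M and observing the apparent brightness m fixes the distance d. The magnitudes below are illustrative values, not a real supernova measurement:

```python
# Standard-candle distance from the distance modulus m - M = 5 log10(d / 10 pc).
def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs implied by apparent and absolute magnitude."""
    mu = apparent_mag - absolute_mag   # distance modulus
    return 10 ** (mu / 5 + 1)

# Type Ia supernovae peak at a roughly uniform absolute magnitude near -19.3,
# which is what makes them usable as standard candles.
d_pc = luminosity_distance_pc(apparent_mag=24.0, absolute_mag=-19.3)
d_gpc = d_pc / 1e9   # parsecs -> gigaparsecs (a cosmological distance)
```

A sanity check on the formula: when m = M the distance modulus is zero and the distance is exactly 10 parsecs, by definition of absolute magnitude.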
The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass-energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the CMB spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter. Work done in 2013 based on the Planck spacecraft observations of the CMB gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter.
The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density.
A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence for the existence of dark energy, although the exact physics behind it remains unknown. The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshifts. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids roughly 150 Mpc in diameter, surrounded by galaxies, the voids were used as standard rulers to estimate distances to galaxies as far away as 2,000 Mpc (redshift 0.6), allowing accurate estimates of the speeds of galaxies from their redshifts and distances. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10. This provides a confirmation of cosmic acceleration independent of supernovae.
Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe. It was reported at high significance in 2008 by Ho et al. and Giannantonio et al.
A new approach to testing for dark energy through observational Hubble constant data (OHD) has gained significant attention in recent years. The Hubble constant, H(z), is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers", which serve as standard clocks in the universe. The core of this idea is the measurement of the differential age evolution of these cosmic chronometers as a function of redshift. This provides a direct estimate of the Hubble parameter, H(z) = −1/(1 + z) · dz/dt.
The reliance on a differential quantity, dz/dt, can minimize many common issues and systematic effects; and as a direct measurement of the Hubble parameter instead of its integral (as with supernovae and baryon acoustic oscillations, BAO), it brings more information and is computationally appealing. For these reasons, it has been widely used to examine the accelerated cosmic expansion and study properties of dark energy.
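In practice the derivative dz/dt is approximated by finite differences between galaxy populations at nearby redshifts. The sketch below uses the standard chronometer relation H(z) = −(1/(1+z)) dz/dt with made-up but ballpark-realistic ages and redshifts (real analyses use carefully measured stellar ages):

```python
# Differential-age ("cosmic chronometer") estimate of H(z):
#   H(z) = -1/(1+z) * dz/dt,
# approximated with a finite difference between two galaxy populations.
GYR_IN_S = 3.156e16      # one gigayear in seconds
MPC_IN_KM = 3.0857e19    # one megaparsec in kilometres

def hubble_from_chronometers(z1, age1_gyr, z2, age2_gyr):
    """H at the midpoint redshift, in km/s/Mpc."""
    dz_dt = (z2 - z1) / ((age2_gyr - age1_gyr) * GYR_IN_S)  # per second
    z_mid = 0.5 * (z1 + z2)
    h_per_s = -dz_dt / (1 + z_mid)   # age falls as z rises, so H is positive
    return h_per_s * MPC_IN_KM       # s^-1 -> km/s/Mpc

# Illustrative inputs: galaxies at z = 0.4 aged ~9.3 Gyr, at z = 0.5 aged ~8.5 Gyr.
h = hubble_from_chronometers(0.40, 9.3, 0.50, 8.5)
# h lands in the tens of km/s/Mpc, as expected for H(z ~ 0.45)
```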
An attempt to directly observe dark energy in a laboratory failed to detect a new force.
Dark energy's status as a hypothetical force with unknown properties makes it a very active target of research. The problem is attacked from a great variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data.
The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence Lambda-CDM model). Since energy and mass are related according to the equation E = mc², Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum.
The cosmological constant has negative pressure equal and opposite to its energy density and so causes the expansion of the universe to accelerate. The reason a cosmological constant has negative pressure can be seen from classical thermodynamics. In general, energy must be lost from inside a container (the container must do work on its environment) in order for the volume to increase. Specifically, a change in volume dV requires work done equal to a change of energy −P dV, where P is the pressure. But the amount of energy in a container full of vacuum actually increases when the volume increases, because the energy is equal to ρV, where ρ is the energy density of the cosmological constant. Therefore, P is negative and, in fact, P = −ρ.
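This thermodynamic argument can be checked numerically: if the vacuum energy in a volume V is E(V) = ρV, then the pressure P = −dE/dV comes out to −ρ. A minimal sketch (the value of ρ is arbitrary):

```python
# Numerical check that constant vacuum energy density implies P = -rho.
rho = 2.5                      # arbitrary vacuum energy density (natural units)

def energy(volume):
    """Vacuum energy in a region: grows in proportion to its volume."""
    return rho * volume

# Thermodynamics: P = -dE/dV, estimated with a small finite difference.
dv = 1e-6
pressure = -(energy(1.0 + dv) - energy(1.0)) / dv
# pressure equals -rho (to numerical precision), as derived in the text
```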
There are two major advantages for the cosmological constant. The first is that it is simple. Einstein had in fact introduced this term in his original formulation of general relativity specifically to obtain a static universe. Although he later discarded the term after Hubble found that the universe is expanding, a nonzero cosmological constant can act as dark energy, without otherwise changing the Einstein field equations. The other advantage is that there is a natural explanation for its origin. Most quantum field theories predict vacuum fluctuations that would give the vacuum this sort of energy. This is related to the Casimir effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation).
A major outstanding problem is that the same quantum field theories predict a huge cosmological constant, more than 100 orders of magnitude too large. This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero, which does not help because supersymmetry must be broken. Also, it is unknown if there is a metastable vacuum state in string theory with a positive cosmological constant.
Nonetheless, the cosmological constant is the most economical solution to the problem of cosmic acceleration. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.
In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength.
No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.
The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called "tracker" behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.
In 2004, when scientists fit the evolution of dark energy to the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proven showing that this scenario requires at least two degrees of freedom in dark energy models; it is the so-called quintom scenario.
Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy such as a negative kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.
This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance, or postulate that cold dark matter decays into dark energy. Another class of theories that unify dark matter and dark energy consists of covariant theories of modified gravity. These theories alter the dynamics of spacetime such that the modified dynamics accounts for what has been attributed to the presence of dark energy and dark matter.
The density of the dark energy might have varied in time during the history of the universe. Modern observational data allow us to estimate the present density of the dark energy. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy on the history of the universe and constrain the parameters of its equation of state. To that end, several models have been proposed. One of the most popular is the Chevallier–Polarski–Linder (CPL) model; other common parametrizations include those of Barboza & Alcaniz (2008), Jassal et al. (2005), Wetterich (2004), and Oztas et al. (2018).
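The CPL parametrization mentioned above expresses the equation-of-state parameter as w(a) = w₀ + wₐ(1 − a), equivalently w(z) = w₀ + wₐ · z/(1+z), with w₀ and wₐ as the fit parameters. A minimal sketch:

```python
# Chevallier-Polarski-Linder (CPL) parametrisation of the dark-energy
# equation of state: w(a) = w0 + wa * (1 - a), i.e. w(z) = w0 + wa * z/(1+z).
def w_cpl(z, w0=-1.0, wa=0.0):
    """Equation-of-state parameter at redshift z (w0, wa are fit parameters)."""
    a = 1.0 / (1.0 + z)        # scale factor corresponding to redshift z
    return w0 + wa * (1.0 - a)

# With w0 = -1 and wa = 0 the model reduces to a cosmological constant,
# so fits of (w0, wa) test for departures from a pure Lambda.
w_today = w_cpl(0.0)           # equals w0 by construction
```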
Some alternatives to dark energy, such as inhomogeneous cosmology, aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy doesn't actually exist, and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the statistical methods employed were flawed. It has also been suggested that the anisotropy of the local Universe has been misrepresented as dark energy. This claim was quickly countered by others, including a paper by physicists D. Rubin and J. Heitlauf. A laboratory direct detection attempt failed to detect any force associated with dark energy.
A study published in 2020 questioned the validity of the essential assumption that the luminosity of Type Ia supernovae does not vary with stellar population age, and suggests that dark energy may not actually exist. Lead researcher of the new study, Young-Wook Lee of Yonsei University, said "Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption." Multiple issues with this paper were raised by other cosmologists, including Adam Riess, who won the 2011 Nobel Prize for the discovery of dark energy.
The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity also eliminates the need for dark energy. There are very many such theories, and research is ongoing. The measurement of the speed of gravity in the first gravitational wave measured by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations to dark energy.
Astrophysicist Ethan Siegel states that, while such alternatives gain a lot of mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists, and that none of the competing theories successfully explain observations to the same level of precision as standard dark energy.
Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than that of dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant).
Projections into the future can differ radically for different models of dark energy. For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light. This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually. However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away.
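The quoted ~16 billion light-year horizon can be estimated numerically: for a flat Lambda-CDM universe, the proper distance today to the event horizon is c ∫₁^∞ da / (a² H(a)). The cosmological parameters below are assumed round values, so this is an order-of-magnitude sketch:

```python
import math

# Numerical estimate of the cosmological event horizon distance in flat
# LambdaCDM (assumed parameters: H0 = 67.7 km/s/Mpc, Om = 0.31, OL = 0.69).
H0, OM, OL = 67.7, 0.31, 0.69
HUBBLE_DIST_GLY = (299792.458 / H0) * 3.2616e-3  # c/H0 in Gly (1 Mpc = 3.2616 Mly)

def integrand(a):
    """d(horizon)/da in Hubble-distance units: 1 / (a^2 * H(a)/H0)."""
    return 1.0 / (a * a * math.sqrt(OM / a**3 + OL))

# Trapezoidal integration from a = 1 (today) out to a large cutoff;
# the integrand falls off like 1/a^2, so the truncated tail is tiny.
N, A_MAX = 200_000, 1000.0
h = (A_MAX - 1.0) / N
total = 0.5 * (integrand(1.0) + integrand(A_MAX))
for i in range(1, N):
    total += integrand(1.0 + i * h)
total *= h

d_horizon_gly = HUBBLE_DIST_GLY * total  # lands near the quoted ~16 Gly
```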
As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely (see Future of an expanding universe). Planet Earth, the Milky Way, and the Local Group of which the Milky Way is a part, would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration.
There are other, more speculative ideas about the future of the universe. The phantom energy model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility that gravity might yet rule the day and lead to a universe that contracts in on itself in a "Big Crunch", or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (10¹²) years. While none of these are supported by observations, they are not ruled out.
In philosophy of science, dark energy is an example of an "auxiliary hypothesis", an ad hoc postulate that is added to a theory in response to observations that falsify it. It has been argued that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper.
The following content uses material from the Wikipedia article which can be viewed, along with the content contribution references and acknowledgements, at: Dark_matter, and is released under the Creative Commons Attribution-Share-Alike License 3.0. Please note that the GNU Free Documentation License may also apply to some text material. Images may not fall under either of the aforementioned licences, and particular attention needs to be paid when considering the use of images or other media files. For full reuse and copyright policy details, please refer to: Wikipedia content reuse copyright information.
Dark matter is a form of matter thought to account for approximately 85% of the matter in the universe and about a quarter of its total mass–energy density. Its presence is implied in a variety of astrophysical observations, including gravitational effects that cannot be explained by accepted theories of gravity unless more matter is present than can be seen. For this reason, most experts think that dark matter is abundant in the universe and that it has had a strong influence on its structure and evolution. Dark matter is called dark because it does not appear to interact with the electromagnetic field, which means it does not absorb, reflect, or emit electromagnetic radiation and is therefore difficult to detect.
Primary evidence for dark matter comes from calculations showing that many galaxies would fly apart, or that they would not have formed or would not move as they do, if they did not contain a large amount of unseen matter. Other lines of evidence include observations in gravitational lensing and in the cosmic microwave background, along with astronomical observations of the observable universe's current structure, the formation and evolution of galaxies, mass location during galactic collisions, and the motion of galaxies within galaxy clusters. In the standard Lambda-CDM model of cosmology, the total mass–energy of the universe contains 5% ordinary matter and energy, 27% dark matter and 68% of a form of energy known as dark energy. Thus, dark matter constitutes 85% of total mass, while dark energy plus dark matter constitute 95% of total mass–energy content.
Because dark matter has not yet been observed directly, if it exists, it must barely interact with ordinary baryonic matter and radiation, except through gravity. Most dark matter is thought to be non-baryonic in nature; it may be composed of some as-yet undiscovered subatomic particles. The primary candidate for dark matter is some new kind of elementary particle that has not yet been discovered, in particular, weakly interacting massive particles (WIMPs). Many experiments to directly detect and study dark matter particles are being actively undertaken, but none have yet succeeded. Dark matter is classified as "cold", "warm", or "hot" according to its velocity (more precisely, its free streaming length). Current models favor a cold dark matter scenario, in which structures emerge by gradual accumulation of particles.
Although the existence of dark matter is generally accepted by the scientific community, some astrophysicists, intrigued by certain observations which do not fit some dark matter theories, argue for various modifications of the standard laws of general relativity, such as modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. These models attempt to account for all observations without invoking supplemental non-baryonic matter.
The hypothesis of dark matter has an elaborate history. In a talk given in 1884, Lord Kelvin estimated the number of dark bodies in the Milky Way from the observed velocity dispersion of the stars orbiting around the center of the galaxy. By using these measurements, he estimated the mass of the galaxy, which he determined is different from the mass of visible stars. Lord Kelvin thus concluded "many of our stars, perhaps a great majority of them, may be dark bodies". In 1906 Henri Poincaré in "The Milky Way and Theory of Gases" used "dark matter", or "matière obscure" in French, in discussing Kelvin's work.
The first to suggest the existence of dark matter using stellar velocities was Dutch astronomer Jacobus Kapteyn in 1922. Fellow Dutchman and radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the local galactic neighborhood and found the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be erroneous.
In 1933, Swiss astrophysicist Fritz Zwicky, who studied galaxy clusters while working at the California Institute of Technology, made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass he called dunkle Materie ('dark matter'). Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated the cluster had about 400 times more mass than was visually observable. The gravitational effect of the visible galaxies was far too small to explain such fast orbits, so mass must be hidden from view. Based on these conclusions, Zwicky inferred some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. Nonetheless, Zwicky did correctly conclude from his calculation that the bulk of the matter was dark.
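Zwicky's approach can be sketched numerically. The velocity dispersion and cluster radius below are round illustrative assumptions, not his actual inputs; the point is that the virial theorem turns a measured velocity dispersion into an order-of-magnitude cluster mass:

```python
# Sketch of a virial mass estimate in the spirit of Zwicky's Coma analysis.
# All input numbers are illustrative assumptions, not historical values.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
MPC = 3.086e22           # megaparsec, m

sigma = 1.0e6            # line-of-sight velocity dispersion, m/s (~1000 km/s)
radius = 1.0 * MPC       # characteristic cluster radius (assumed)

# Virial theorem: 2<T> = -<U>  =>  M ~ sigma^2 * R / G
# (order of magnitude only; the exact prefactor depends on the mass profile)
m_virial = sigma**2 * radius / G
print(f"virial mass ~ {m_virial / M_SUN:.1e} solar masses")
```

Even with these rough inputs the estimate lands near 10¹⁴ solar masses, far above the luminous mass of a cluster's galaxies, which is the mismatch Zwicky noticed.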
Further indications the mass-to-light ratio was not unity came from measurements of galaxy rotation curves. In 1939, Horace W. Babcock reported the rotation curve for the Andromeda nebula (known now as the Andromeda Galaxy), which suggested the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral, and not to missing matter. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda galaxy and a mass-to-light ratio of 50, in 1940 Jan Oort discovered and wrote about the large non-visible halo of NGC 3115.
Vera Rubin, Kent Ford, and Ken Freeman's work in the 1960s and 1970s provided further strong evidence, also using galaxy rotation curves. Rubin and Ford worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy. This result was confirmed in 1978. An influential paper presented Rubin and Ford's results in 1980. They showed most galaxies must contain about six times as much dark as visible mass; thus, by around 1980 the apparent need for dark matter was widely recognized as a major unsolved problem in astronomy.
At the same time Rubin and Ford were exploring optical rotation curves, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (H-I) often extends to much larger galactic radii than those accessible by optical studies, extending the sampling of rotation curves, and thus of the total mass distribution, to a new dynamical regime. Early mapping of Andromeda with the 300 foot telescope at Green Bank and the 250 foot dish at Jodrell Bank already showed the H-I rotation curve did not trace the expected Keplerian decline. As more sensitive receivers became available, Morton Roberts and Robert Whitehurst were able to trace the rotational velocity of Andromeda to 30 kpc, much beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii, Figure 16 of that paper combines the optical data (the cluster of points at radii of less than 15 kpc with a single point further out) with the H-I data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic H-I spectroscopy was being developed. In 1972, David Rogstad and Seth Shostak published H-I rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended H-I disks.
A stream of observations in the 1980s supported the presence of dark matter, including gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics.
In standard cosmology, matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³. This is in contrast to radiation, which scales as the inverse fourth power of the scale factor (ρ ∝ a⁻⁴), and a cosmological constant, which is independent of a. These scalings can be understood intuitively: for an ordinary particle in a cubical box, doubling the length of the sides of the box decreases the density (and hence energy density) by a factor of 8 (= 2³). For radiation, the energy density decreases by a factor of 16 (= 2⁴), because on top of the volume dilution the wavelength of each quantum is stretched in proportion to the scale factor, reducing its energy. A cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration.
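The three scalings can be captured in a few lines. This is a minimal illustration (the function name and normalization are invented for the sketch):

```python
# Minimal sketch of how energy densities scale with the scale factor a.
def densities(a, rho_m0=1.0, rho_r0=1.0, rho_lambda0=1.0):
    """Energy densities at scale factor a, each normalized to 1 at a = 1."""
    return {
        "matter": rho_m0 * a**-3,      # dilutes with volume
        "radiation": rho_r0 * a**-4,   # volume dilution plus redshift
        "lambda": rho_lambda0,         # constant energy density of space
    }

d = densities(2.0)  # double the side length of the box
print(d)  # matter falls by 8, radiation by 16, lambda is unchanged
```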
In principle, "dark matter" means all components of the universe which are not visible but still obey ρ ∝ a⁻³. In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons." Context will usually indicate which meaning is intended.
The arms of spiral galaxies rotate around the galactic center. The luminous mass density of a spiral galaxy decreases as one goes from the center to the outskirts. If luminous mass were all the matter, then we could model the galaxy as a point mass in the centre with test masses orbiting around it, similar to the Solar System. From Kepler's Third Law, it is expected that the rotation velocities will decrease with distance from the center, similar to the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat as distance from the center increases.
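A toy calculation makes the discrepancy concrete. Assuming a hypothetical central point mass of 10¹¹ solar masses (an illustrative value, not a measurement), the predicted circular speed falls as 1/√r, whereas observed rotation curves stay roughly flat:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # kiloparsec, m

def v_keplerian(r_m, m_central_kg):
    """Circular orbital speed if all mass sat at the center (Solar-System-like)."""
    return math.sqrt(G * m_central_kg / r_m)

# Illustrative point mass of 1e11 solar masses (assumed):
m = 1e11 * M_SUN
for r_kpc in (5, 10, 20, 40):
    v = v_keplerian(r_kpc * KPC, m)
    print(f"r = {r_kpc:2d} kpc -> v = {v / 1e3:6.1f} km/s")
# The predicted speed halves every time r quadruples; observed curves do not.
```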
If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there is a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.
Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits.
As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter.
Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways: from the scatter in radial velocities of the galaxies within a cluster (via the virial theorem), from X-rays emitted by the hot gas in the cluster, and from gravitational lensing.
Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1.
One of the consequences of general relativity is that massive objects (such as a cluster of galaxies) lying between a more distant source (such as a quasar) and an observer should act as a lens to bend the light from this source. The more massive an object, the more lensing is observed.
Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the dozens of cases where this has been done, the mass-to-light ratios obtained correspond to the dynamical dark matter measurements of clusters. Lensing can lead to multiple copies of an image. By analyzing the distribution of multiple image copies, scientists have been able to deduce and map the distribution of dark matter around the MACS J0416.1-2403 galaxy cluster.
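The step from distortion geometry to mass can be sketched for the simplest case, a point-mass lens, using the standard Einstein-radius formula. The cluster mass and distances below are round assumed values for illustration, not measurements of any particular cluster:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # kg
MPC = 3.086e22     # m

def einstein_radius_rad(mass_kg, d_l, d_s, d_ls):
    """Einstein radius (radians) of a point-mass lens.

    d_l, d_s, d_ls: angular diameter distances to the lens, to the source,
    and from lens to source, in meters.
    """
    return math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_l * d_s))

# Illustrative cluster-scale lens (all numbers assumed for the sketch):
theta = einstein_radius_rad(1e14 * M_SUN, 1000 * MPC, 2000 * MPC, 1200 * MPC)
arcsec = math.degrees(theta) * 3600
print(f"Einstein radius ~ {arcsec:.1f} arcsec")
```

A 10¹⁴ solar-mass lens produces arcs tens of arcseconds across; inverting the same formula turns an observed arc radius into a cluster mass.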
Weak gravitational lensing investigates minute distortions of galaxies, using statistical analyses from vast galaxy surveys. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements. Dark matter does not bend light itself; mass (in this case the mass of the dark matter) bends spacetime. Light follows the curvature of spacetime, resulting in the lensing effect.
Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. Dark matter does not interact directly with radiation, but it does affect the CMB by its gravitational potential (mainly on large scales), and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the cosmic microwave background (CMB).
The cosmic microwave background is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights. The series of peaks can be predicted for any assumed set of cosmological parameters by modern computer codes such as CMBFast and CAMB, and matching theory to data, therefore, constrains cosmological parameters. The first peak mostly shows the density of baryonic matter, while the third peak relates mostly to the density of dark matter, measuring the density of matter and the density of atoms.
The CMB anisotropy was first discovered by COBE in 1992, though this had too coarse resolution to detect the acoustic peaks. After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the Planck spacecraft in 2013–2015. The results support the Lambda-CDM model.
The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the Lambda-CDM model, but difficult to reproduce with any competing model such as modified Newtonian dynamics (MOND).
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.
Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process.
If dark matter does not exist, then the next most likely explanation must be that general relativity, the prevailing theory of gravity, is incorrect and should be modified. The Bullet Cluster, the result of a recent collision of two galaxy clusters, provides a challenge for modified gravity theories because its apparent center of mass is far displaced from the baryonic center of mass. Standard dark matter models can easily explain this observation, but modified gravity has a much harder time, especially since the observational evidence is model-independent.
Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. The data indicate the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, the total energy density of everything in the universe should sum to 1 (Ω_tot ≈ 1). The measured dark energy density is Ω_Λ ≈ 0.68, the observed ordinary (baryonic) matter energy density is Ω_b ≈ 0.05, and the energy density of radiation is negligible. This leaves a missing Ω_dm ≈ 0.27 which nonetheless behaves like matter (see technical definition section above): dark matter.
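The budget arithmetic, using the rounded fractions quoted earlier in this article (68% dark energy, 5% ordinary matter), is simply:

```python
# Flatness budget with the rounded fractions quoted in this article.
omega_total = 1.0        # spatial flatness => the densities sum to ~1
omega_lambda = 0.68      # dark energy (measured)
omega_baryon = 0.05      # ordinary (baryonic) matter (measured)
omega_radiation = 0.0    # negligible today

omega_dm = omega_total - omega_lambda - omega_baryon - omega_radiation
print(f"missing (dark matter) fraction ~ {omega_dm:.2f}")  # ~0.27
```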
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe, and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.
Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. This effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures. It was predicted quantitatively by Nick Kaiser in 1987, and first decisively measured in 2001 by the 2dF Galaxy Redshift Survey. Results are in agreement with the Lambda-CDM model.
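The squashing can be illustrated with a toy calculation. The Hubble constant and peculiar velocities below are round illustrative values, not from the text:

```python
# Sketch of redshift-space distortion (the effect Kaiser quantified in 1987):
# a peculiar velocity is mistaken for Hubble flow when distance is inferred
# from redshift. All numbers are illustrative.
H0 = 70.0  # Hubble constant, km/s/Mpc (assumed value)

def redshift_space_distance(true_d_mpc, v_pec_km_s):
    """Distance inferred from redshift when peculiar velocity is ignored."""
    return true_d_mpc + v_pec_km_s / H0

# Galaxy 100 Mpc away, falling toward a supercluster behind it (+300 km/s,
# i.e., away from us): it appears farther than it is.
front = redshift_space_distance(100.0, +300.0)
# Galaxy on the far side at 120 Mpc, falling back toward us (-300 km/s):
# it appears nearer than it is.
back = redshift_space_distance(120.0, -300.0)
print(front, back)  # the true 20 Mpc separation appears squashed
```

Both galaxies appear displaced toward the supercluster between them, so the structure looks flattened along the line of sight, exactly as described above.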
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
There are various hypotheses about what dark matter could consist of, as set out in the table below.
Some dark matter hypotheses:
- Light bosons: quantum chromodynamics axions, fuzzy cold dark matter, effective field theory
- Other particles: weakly interacting massive particles, self-interacting dark matter, superfluid vacuum theory
- Macroscopic: primordial black holes, massive compact halo objects (MaCHOs), macroscopic dark matter (Macros)
- Modified gravity (MOG): modified Newtonian dynamics (MoND), tensor–vector–scalar gravity (TeVeS)
Dark matter can refer to any substance which interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. However, for the reasons outlined below, most scientists think the dark matter is dominated by a non-baryonic component, which is likely composed of a currently unknown fundamental particle (or similar exotic state).
Baryons (protons and neutrons) make up ordinary stars and planets. However, baryonic matter also encompasses less common non-primordial black holes, neutron stars, faint old white dwarfs and brown dwarfs, collectively known as massive compact halo objects (MACHOs), which can be hard to detect.
However, multiple lines of evidence suggest the majority of dark matter is not made of baryons.
Candidates for non-baryonic dark matter are hypothetical particles such as axions, sterile neutrinos, weakly interacting massive particles (WIMPs), gravitationally-interacting massive particles (GIMPs), supersymmetric particles, or primordial black holes. The three neutrino types already observed are indeed abundant, and dark, and matter, but because their individual masses ? however uncertain they may be ? are almost certainly too tiny, they can only supply a small fraction of dark matter, due to limits derived from large-scale structure and high-redshift galaxies.
Unlike baryonic matter, nonbaryonic matter did not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis) and so its presence is revealed only via its gravitational effects, or weak lensing. In addition, if the particles of which it is composed are supersymmetric, they can undergo annihilation interactions with themselves, possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection).
If dark matter is composed of weakly-interacting particles, an obvious question is whether it can form objects equivalent to planets, stars, or black holes. Historically, the answer has been that it cannot, because of two factors.
In 2015–2017 the idea that dark matter was composed of primordial black holes made a comeback following results of gravitational-wave measurements which detected the merger of intermediate-mass black holes. Black holes with about 30 solar masses are not predicted to form by either stellar collapse (typically less than 15 solar masses) or by the merger of black holes in galactic centers (millions or billions of solar masses). It was proposed that the intermediate-mass black holes causing the detected merger formed in the hot dense early phase of the universe due to denser regions collapsing. A later survey of about a thousand supernovae detected no gravitational lensing events, when about eight would be expected if intermediate-mass primordial black holes above a certain mass range accounted for the majority of dark matter.
The possibility that atom-sized primordial black holes account for a significant fraction of dark matter was ruled out by measurements of positron and electron fluxes outside the Sun's heliosphere by the Voyager 1 spacecraft. Tiny black holes are theorized to emit Hawking radiation. However, the detected fluxes were too low and did not have the expected energy spectrum, suggesting tiny primordial black holes are not widespread enough to account for dark matter. Nonetheless, research and theories proposing that dense dark matter accounts for dark matter continue as of 2018, including approaches to dark matter cooling, and the question remains unsettled. In 2019, the lack of microlensing effects in observations of Andromeda suggested that tiny black holes do not exist in sufficient numbers to account for dark matter.
However, there still exists a largely unconstrained mass range, below what can be probed by optical microlensing observations, where primordial black holes may account for all dark matter.
Dark matter can be divided into cold, warm, and hot categories. These categories refer to velocity rather than an actual temperature, indicating how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion; this is an important distance called the free streaming length (FSL). Primordial density fluctuations smaller than this length get washed out as particles spread from overdense to underdense regions, while larger fluctuations are unaffected; therefore this length sets a minimum scale for later structure formation.
The categories are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): dark matter particles are classified as cold, warm, or hot according to whether their FSL is much smaller than (cold), similar to (warm), or much larger than (hot) a protogalaxy. Mixtures of the above are also possible: a theory of mixed dark matter was popular in the mid-1990s, but was rejected following the discovery of dark energy.
Cold dark matter leads to a bottom-up formation of structure with galaxies forming first and galaxy clusters at a later stage, while hot dark matter would result in a top-down formation scenario with large matter aggregations forming early, later fragmenting into separate galaxies; the latter is excluded by high-redshift galaxy observations.
These categories also correspond to fluctuation spectrum effects and the interval following the Big Bang at which each type became non-relativistic, as discussed by Davis et al. in 1985.
Another approximate dividing line is that warm dark matter became non-relativistic when the universe was approximately 1 year old and 1 millionth of its present size, during the radiation-dominated era (photons and neutrinos), with a photon temperature of 2.7 million kelvins. Standard physical cosmology gives the particle horizon size as 2ct (speed of light multiplied by time) in the radiation-dominated era, thus 2 light-years. A region of this size would expand to 2 million light-years today (absent structure formation). The actual FSL is approximately 5 times the above length, since it continues to grow slowly as particle velocities decrease inversely with the scale factor after they become non-relativistic. In this example the FSL would correspond to 10 million light-years, or 3 megaparsecs, today, around the size containing an average large galaxy.
The 2.7 million K photon temperature gives a typical photon energy of 250 electronvolts, thereby setting a typical mass scale for warm dark matter: particles much more massive than this, such as GeV–TeV mass WIMPs, would become non-relativistic much earlier than one year after the Big Bang and thus have FSLs much smaller than a protogalaxy, making them cold. Conversely, much lighter particles, such as neutrinos with masses of only a few eV, have FSLs much larger than a protogalaxy, thus qualifying them as hot.
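The numbers above can be checked with back-of-envelope arithmetic (a sketch; the constants are standard, the scenario is the 1-year example from the text):

```python
# Back-of-envelope check of the warm-dark-matter numbers in the text.
K_B_EV = 8.617e-5          # Boltzmann constant, eV/K

# Photon temperature at t ~ 1 yr in this example:
T = 2.7e6                  # kelvin
print(f"typical photon energy ~ {K_B_EV * T:.0f} eV")   # ~230 eV, i.e. the ~250 eV scale

# Particle horizon ~ 2ct at t = 1 year => 2 light-years; stretch by the
# factor 1e6 the universe has expanded since, times ~5 for continued
# free streaming after the particles become non-relativistic:
fsl_today_ly = 2.0 * 1e6 * 5            # light-years
fsl_today_mpc = fsl_today_ly / 3.262e6  # 1 Mpc ~ 3.262e6 light-years
print(f"FSL today ~ {fsl_today_ly:.0e} ly ~ {fsl_today_mpc:.1f} Mpc")
```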
Cold dark matter offers the simplest explanation for most cosmological observations. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus for dark matter research, as hot dark matter does not seem capable of supporting galaxy or galaxy cluster formation, and most particle candidates became non-relativistic, and hence cold, very early.
The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes and Preon stars) or RAMBOs (such as clusters of brown dwarfs), to new particles such as WIMPs and axions.
Studies of Big Bang nucleosynthesis and gravitational lensing convinced most cosmologists that MACHOs cannot make up more than a small fraction of dark matter. According to A. Peter: "... the only really plausible dark-matter candidates are new particles."
The 1997 DAMA/NaI experiment and its successor DAMA/LIBRA in 2013 claimed to directly detect dark matter particles passing through the Earth, but many researchers remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results.
Many supersymmetric models offer dark matter candidates in the form of the WIMPy Lightest Supersymmetric Particle (LSP). Separately, heavy sterile neutrinos exist in non-supersymmetric extensions to the standard model which explain the small neutrino mass through the seesaw mechanism.
Warm dark matter comprises particles with an FSL comparable to the size of a protogalaxy. Predictions based on warm dark matter are similar to those for cold dark matter on large scales, but with less small-scale density perturbations. This reduces the predicted abundance of dwarf galaxies and may lead to lower density of dark matter in the central parts of large galaxies. Some researchers consider this a better fit to observations. A challenge for this model is the lack of particle candidates with the required mass, approximately 300 eV to 3000 eV.
No known particles can be categorized as warm dark matter. A postulated candidate is the sterile neutrino: A heavier, slower form of neutrino that does not interact through the weak force, unlike other neutrinos. Some modified gravity theories, such as scalar?tensor?vector gravity, require "warm" dark matter to make their equations work.
Hot dark matter consists of particles whose FSL is much larger than the size of a protogalaxy. The neutrino qualifies as such a particle. Neutrinos were discovered independently, long before the hunt for dark matter: they were postulated in 1930, and detected in 1956. Neutrinos' mass is less than one millionth (10⁻⁶) that of an electron. Neutrinos interact with normal matter only via gravity and the weak force, making them difficult to detect (the weak force only works over a small distance, thus a neutrino triggers a weak force event only if it hits a nucleus head-on). This makes them 'weakly interacting light particles' (WILPs), as opposed to WIMPs.
The three known flavours of neutrinos are the electron, muon, and tau. Their masses are slightly different. Neutrinos oscillate among the flavours as they move. It is hard to determine an exact upper bound on the collective average mass of the three neutrinos (or for any of the three individually). For example, if the average neutrino mass were over 50 eV/c² (less than 10⁻⁴ of the mass of an electron), the universe would collapse. CMB data and other methods indicate that their average mass probably does not exceed 0.3 eV/c². Thus, observed neutrinos cannot explain dark matter.
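The shortfall can be estimated with the standard cosmological relation Ω_ν h² = Σm_ν / 93.14 eV (taken here as an assumption, with an assumed Hubble parameter), applied to the 0.3 eV/c² bound:

```python
# Why ~0.3 eV neutrinos cannot be the dark matter.
# Assumed standard relation: Omega_nu * h^2 = sum(m_nu) / 93.14 eV.
h = 0.68            # dimensionless Hubble parameter (assumed value)
sum_m_ev = 3 * 0.3  # generous bound: three flavours at 0.3 eV/c^2 each

omega_nu = sum_m_ev / 93.14 / h**2
print(f"Omega_nu <~ {omega_nu:.3f}")  # ~0.02, far below the ~0.27 required
```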
Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies. Deep-field observations show instead that galaxies formed first, followed by clusters and superclusters as galaxies clump together.
If dark matter is made up of sub-atomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs are popular search candidates, the Axion Dark Matter Experiment (ADMX) searches for axions. Another candidate is heavy hidden sector particles which only interact with ordinary matter via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection experiments, which look for the products of dark matter particle annihilations or decays.
Direct detection experiments aim to observe low-energy recoils (typically a few keVs) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil the nucleus will emit energy in the form of scintillation light or phonons, which are registered by sensitive detection apparatus. To do this effectively, it is crucial to maintain a low background, and so such experiments operate deep underground to reduce the interference from cosmic rays. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory.
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECA. Noble liquid experiments include ZEPLIN, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO.
To date, no well-established claim of dark matter detection has come from a direct detection experiment; instead, these experiments have set strong upper limits on the mass and nucleon interaction cross-section of dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
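The expected modulation can be sketched from the geometry alone: the detector's speed through the halo is the Sun's speed plus the projection of Earth's orbital velocity onto it, peaking around early June. The numbers below (Sun's halo speed ~230 km/s, Earth's orbital speed ~30 km/s, ~60° tilt of Earth's orbit relative to the Sun's motion) are rough illustrative values.

```python
# Sketch of the annual-modulation effect: the detector's speed through
# the galactic dark matter halo varies as Earth orbits the Sun.
# All parameter values are rough illustrative assumptions.
import math

V_SUN = 230.0    # km/s, Sun's speed through the halo (assumed)
V_EARTH = 29.8   # km/s, Earth's orbital speed
COS_TILT = 0.49  # projection factor for Earth's orbit (assumed ~60 deg tilt)
T_PEAK = 152.5   # day of year of the maximum (around June 2)

def detector_speed(day_of_year):
    """Detector speed through the halo (km/s) on a given day of the year."""
    phase = 2.0 * math.pi * (day_of_year - T_PEAK) / 365.25
    return V_SUN + V_EARTH * COS_TILT * math.cos(phase)

peak = detector_speed(T_PEAK)            # early June: maximum
trough = detector_speed(T_PEAK + 182.6)  # early December: minimum
# Fractional modulation amplitude of the speed, in percent:
print(round(100.0 * V_EARTH * COS_TILT / V_SUN, 1))
```

The speed (and hence, roughly, the event rate) modulates by only several percent around its mean, which is why the DAMA signal claim hinges on detecting a small seasonal variation against the background.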
A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. For example, in regions of high dark matter density (e.g., the centre of our galaxy) two dark matter particles could annihilate to produce gamma rays or Standard Model particle–antiparticle pairs. Alternatively, if the dark matter particle is unstable, it could decay into Standard Model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in our galaxy or others. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery.
A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Thus dark matter may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. The detection of gravitational waves by LIGO in September 2015 opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes.
Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow. The Energetic Gamma Ray Experiment Telescope observed more gamma rays in 2008 than expected from the Milky Way, but scientists concluded this was most likely due to incorrect estimation of the telescope's sensitivity.
The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In April 2012, an analysis of previously available data from its Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.
An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. Any discovery from collider searches must be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, dark matter.
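The "missing energy and momentum" inference rests on momentum conservation: in a collision, momentum transverse to the beam axis must sum to zero, so any imbalance among the visible products is attributed to undetected particles. A minimal sketch, with made-up momenta:

```python
# Sketch: inferring missing transverse momentum at a collider.
# Transverse momentum must balance, so the negative of the vector sum of
# all visible products' transverse momenta is attributed to undetected
# particles (neutrinos or, hypothetically, dark matter).
# The jet momenta below are invented for illustration.
import math

def missing_pt(visible):
    """visible: list of (px, py) in GeV for all detected products.
    Returns the (magnitude, azimuthal angle) of the missing
    transverse momentum."""
    px = -sum(p[0] for p in visible)
    py = -sum(p[1] for p in visible)
    return math.hypot(px, py), math.atan2(py, px)

# Two jets recoiling against "nothing" visible:
jets = [(120.0, 35.0), (-40.0, 10.0)]
magnitude, phi = missing_pt(jets)
print(round(magnitude, 1))  # a large value hints at invisible particles
```

This is why the text notes that other (non-negligible) collision products must be detected: with nothing visible to recoil against, there is no imbalance to measure.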
Because dark matter has not yet been conclusively identified, many other hypotheses have emerged aiming to explain the observational phenomena that dark matter was conceived to explain. The most common method is to modify general relativity. General relativity is well-tested on solar system scales, but its validity on galactic or cosmological scales has not been tested as thoroughly. A suitable modification to general relativity can conceivably eliminate the need for dark matter. The best-known theories of this class are MOND and its relativistic generalization tensor-vector-scalar gravity (TeVeS), f(R) gravity, negative mass dark fluid, and entropic gravity. Alternative theories abound.
A problem with alternative hypotheses is that observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible, but explaining all of them simultaneously is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity.
The prevailing opinion among most astrophysicists is that, while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter.
Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties. Such descriptions are often inconsistent with the hypothesized properties of dark matter in physics and cosmology.