**Solving the Mysteries of the Universe in One Stroke? What Is Entropic Gravity Theory? A New Theory That Rejects Dark Matter and Could Become the Third Gravity Revolution**

http://www.asyura2.com/15/nature6/msg/461.html


2017.1.25 (Wed)  Taro Kotani

The galaxy cluster SDSS J1038+4849 as captured by the Hubble Space Telescope. Because of gravitational lensing it appears to smile, with two eyes and a nose. Dark matter is thought to lie at the cluster's centre, but it does not show up in the photograph. (Photo: NASA)

In the previous article, from December 2016, "Closer to the Truth? Dark Matter, the Universe's Long-standing Mystery" (http://jbpress.ismedia.jp/articles/-/48735), we looked at the dark matter drifting through space: a particle detector aboard the International Space Station had captured data that looked like a trace of dark matter.

Around the same time, however, an unconventional study claiming that dark matter does not exist drew attention. This physics theory, called "entropic gravity theory," is a theory of gravitation built on an entirely new principle, and its proponents say it resolves dark matter and other hard problems of modern physics.

If that claim is true, this is the third gravity revolution, after Newton and Einstein. At present we can conclude neither that it is right nor that it is wrong, but it is interesting enough to be worth introducing.

New theory of gravity, awaken

In the 17th century, Isaac Newton (1643-1727) discovered the law of universal gravitation and with it explained the motion of the Moon, the planets and falling apples. People were astonished.

In the 20th century, Albert Einstein (1879-1955) published the theory of relativity and revealed that space bends and time stretches and shrinks, and that these distortions are what we experience as gravity. People were astonished.

Now, in the 21st century, expectations are growing that a new theory of gravity will appear and astonish people once more, because phenomena the old theories cannot explain have been steadily piling up.

Foremost among them is the evaporation of black holes. A black hole is held to be a body whose gravity is so strong that not even light can escape it.

Yet, contradicting that very definition, applying a little quantum mechanics leads to the conclusion that a black hole gradually shrinks while emitting faint light, and that at the end of its shrinking it bursts and vanishes like a soap bubble.

On the other hand, since a black hole breaking up and vanishing creates various logical difficulties, some argue that this cannot actually happen. These questions are expected to be settled properly once a "quantum theory of gravity" unifying quantum mechanics and relativity is completed.

Besides black holes, the field of gravity is piled high with unsolved homework: why did the Big Bang, the beginning of the universe, occur? What are the dark matter and dark energy that pervade space?

That is why the arrival of a quantum theory of gravity, which should clear this homework away in one stroke, is so eagerly awaited.

What is "entropic gravity"?

In 2010, Professor Erik Verlinde of the University of Amsterdam in the Netherlands published the idea that gravity arises from entropy (http://link.springer.com/article/10.1007%2FJHEP04%282011%29029): the "entropic gravity theory."

But gravity is already an abstract thing that resists intuition, and when told that it arises from entropy, something even harder to pin down, most people's reaction will be bafflement.

We will not go deeply into what entropy really is here; let us just say that it is a physical quantity, like energy or mass. Ordinary matter carries entropy too, but black holes in particular are thought to store enormous amounts of it.
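The Bekenstein-Hawking formula, which makes a black hole's entropy proportional to the area of its horizon, turns that "enormous amount" into a number. A back-of-the-envelope sketch (illustrative only, not taken from the article; constants rounded):

```python
import math

# Bekenstein-Hawking entropy: S = k_B * A * c^3 / (4 * G * hbar),
# where A = 16 * pi * G^2 * M^2 / c^4 is the horizon area of a
# non-rotating black hole of mass M.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m/s]
HBAR = 1.0546e-34    # reduced Planck constant [J s]
M_SUN = 1.989e30     # solar mass [kg]

def bh_entropy_over_k(mass_kg):
    """Black hole entropy in units of Boltzmann's constant k_B."""
    area = 16 * math.pi * G ** 2 * mass_kg ** 2 / C ** 4
    return area * C ** 3 / (4 * G * HBAR)

# For one solar mass this gives S/k_B of order 1e77, dwarfing the
# entropy of an ordinary star of the same mass (of order 1e58).
print(f"S/k_B = {bh_entropy_over_k(M_SUN):.2e}")
```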

When a black hole pulls in an outside object, the black hole's entropy increases slightly. The idea behind entropic gravity theory is to turn this around: might it be that the black hole attracts the object precisely because its entropy thereby increases?

Building on this idea, Professor Verlinde argued that a relation between entropy and gravity holds for ordinary masses as well, not just black holes. With a few quick calculations he then derived several known relations and observed values.

Has entropic gravity theory been proven . . . or has it?

In December 2016, a research team led by Margot Brouwer, a graduate student at Leiden University in the Netherlands, measured the matter distribution of 33,613 galaxies and announced results that match the predictions of entropic gravity theory (https://academic.oup.com/mnras/article-abstract/doi/10.1093/mnras/stw3192/2661916/First-test-of-Verlinde-s-theory-of-Emergent). Moreover, the prediction requires no extra hypothesis such as dark matter.

Light rays passing a galaxy are bent by the galaxy's gravity. Brouwer's team measured this effect to determine the galaxies' mass distributions. (c) APS/Alan Stonebraker; galaxy images from STScI/AURA, NASA, ESA, and the Hubble Heritage Team.


(Part of) the world rejoiced: entropic gravity theory has been proven, and dark matter does not exist!

This research result, however, calls for somewhat more careful scrutiny.

Dark matter, to begin with, refers to mass thought to drift through space over and above the observable matter such as stars and gas. When we measure a galaxy's gravity, there appears to be an unseen source of gravity besides its stars and gas, and that source is provisionally called dark matter.

Professor Verlinde, however, argues not that galaxies contain some gravitational source other than stars and gas, but that on scales as large as a galaxy the law of gravity itself differs from the law of universal gravitation.

He accordingly modified the law of gravity so that a galaxy's gravity can be explained without dark matter, and then constructed entropic gravity theory to be consistent with that modified law.

This is a rather acrobatic procedure.

Even when an entropic gravity theory built in this way agrees with observations of galaxies, it still seems a little early to conclude that dark matter does not exist, or that universal gravitation breaks down on galactic scales.

Where is the correct quantum theory of gravity?

Looking back over the history of gravity theory, Newton's new mechanics required a new mathematics, calculus. Physics became a difficult discipline from that moment on.

Einstein's relativity makes full use of differential geometry, another formidable branch of mathematics.

What the next theory of gravity, the quantum theory of gravity, will look like is not yet clear, but there is no doubt that it will be an extremely difficult affair.

Researchers have been tackling the fascinating subject of quantum gravity for generations now. A multitude of brilliant minds have published countless papers and proposed all manner of ideas; entropic gravity theory is one of them. Mixed among them are ideas that failed, ideas that partly succeeded, and ideas on which the verdict is still out.

Every one of these new theories is a difficult construction wielding advanced mathematics, and understanding one takes years of training. The new theories have split into several schools, and mastering every school to grasp the whole picture is harder still.

Seen this way, quantum gravity resembles a wilderness into which ambitious talents wander. People roam it, searching for the rich, correct quantum gravity said to lie somewhere within. The gravestones standing here and there across the wilderness are the ideas that did not work out. A layperson cannot even decipher the epitaphs.

Will someone walk the right path and arrive at the right new theory? Is entropic gravity theory such a correct theory? No one can answer with confidence.

At any rate, let us keep an eye on the follow-up tests of whether entropic gravity theory really agrees with the observational data.


http://jbpress.ismedia.jp/articles/-/48971

First test of Verlinde's theory of emergent gravity using weak gravitational lensing measurements

Margot M. Brouwer, Manus R. Visser, Andrej Dvornik, Henk Hoekstra, Konrad Kuijken, Edwin A. Valentijn, Maciej Bilicki, Chris Blake, Sarah Brough, Hugo Buddelmeijer, et al.

MNRAS (2016) 466 (3): 2547-2559. DOI: https://doi.org/10.1093/mnras/stw3192

Published: 09 December 2016


Abstract

Verlinde proposed that the observed excess gravity in galaxies and clusters is the consequence of emergent gravity (EG). In this theory, the standard gravitational laws are modified on galactic and larger scales due to the displacement of dark energy by baryonic matter. EG gives an estimate of the excess gravity (described as an apparent dark matter density) in terms of the baryonic mass distribution and the Hubble parameter. In this work, we present the first test of EG using weak gravitational lensing, within the regime of validity of the current model. Although there is no direct description of lensing and cosmology in EG yet, we can make a reasonable estimate of the expected lensing signal of low-redshift galaxies by assuming a background Lambda cold dark matter cosmology. We measure the (apparent) average surface mass density profiles of 33 613 isolated central galaxies and compare them to those predicted by EG based on the galaxies’ baryonic masses. To this end, we employ the ∼180 deg2 overlap of the Kilo-Degree Survey with the spectroscopic Galaxy And Mass Assembly survey. We find that the prediction from EG, despite requiring no free parameters, is in good agreement with the observed galaxy–galaxy lensing profiles in four different stellar mass bins. Although this performance is remarkable, this study is only a first step. Further advancements on both the theoretical framework and observational tests of EG are needed before it can be considered a fully developed and solidly tested theory.

gravitation, gravitational lensing: weak, surveys, galaxies: haloes, cosmology: theory, dark matter


INTRODUCTION

In the past decades, astrophysicists have repeatedly found evidence that gravity on galactic and larger scales is in excess of the gravitational potential that can be explained by visible baryonic matter within the framework of General Relativity (GR). The first evidence through the measurements of the dynamics of galaxies in clusters (Zwicky 1937) and the Local Group (Kahn & Woltjer 1959) and through observations of galactic rotation curves (inside the optical discs by Rubin 1983, and far beyond the discs in hydrogen profiles by Bosma 1981) has been confirmed by more recent dynamical observations (Martinsson et al. 2013; Rines et al. 2013). Furthermore, entirely different methods like gravitational lensing (Hoekstra, Yee & Gladders 2004; von der Linden et al. 2014; Hoekstra et al. 2015; Mandelbaum 2015) of galaxies and clusters, baryon acoustic oscillations (BAOs; Eisenstein et al. 2005; Blake et al. 2011) and the cosmic microwave background (CMB; Spergel et al. 2003; Planck Collaboration XIII 2016) have all acknowledged the necessity of an additional mass component to explain the excess gravity. This interpretation gave rise to the idea of an invisible dark matter (DM) component, which now forms an important part of our standard model of cosmology. In our current Lambda cold dark matter (ΛCDM) model, the additional mass density (the density parameter ΩCDM = 0.266 found by Planck Collaboration XIII 2016) consists of cold (non-relativistic) DM particles, while the energy density in the cosmological constant (ΩΛ = 0.685) explains the observed accelerating expansion of the Universe. In this paradigm, the spatial structure of the sub-dominant baryonic component (with Ωb = 0.049) broadly follows that of the DM. When a DM halo forms through the gravitational collapse of a small density perturbation (Peebles & Yu 1970), baryonic matter is pulled into the resulting potential well, where it cools to form a galaxy in the centre (White & Rees 1978). 
In this framework, the excess mass around galaxies and clusters, which is measured through dynamics and lensing, has hitherto been interpreted as caused by this DM halo.

In this paper, we test the predictions of a different hypothesis concerning the origin of the excess gravitational force: the Verlinde (2016) model of emergent gravity (EG). Generally, EG refers to the idea that space–time and gravity are macroscopic notions that arise from an underlying microscopic description in which these notions have no meaning. Earlier work on the emergence of gravity has indicated that an area law for gravitational entropy is essential to derive Einstein's laws of gravity (Jacobson 1995; Padmanabhan 2010; Verlinde 2011; Faulkner et al. 2014; Jacobson 2016). But due to the presence of positive dark energy in our Universe, Verlinde (2016) argues that in addition to the area law, there exists a volume law contribution to the entropy. This new volume law is thought to lead to modifications of the emergent laws of gravity at scales set by the ‘Hubble acceleration scale’ a0 = cH0, where c is the speed of light and H0 the Hubble constant. In particular, Verlinde (2016) claims that the gravitational force emerging in the EG framework exceeds that of GR on galactic and larger scales, similar to the MOND phenomenology (Modified Newtonian Dynamics; Milgrom 1983) that provides a successful description of galactic rotation curves (e.g. McGaugh, Lelli & Schombert 2016). This excess gravity can be modelled as a mass distribution of apparent DM, which is only determined by the baryonic mass distribution Mb(r) (as a function of the spherical radius r) and the Hubble constant H0. In a realistic cosmology, the Hubble parameter H(z) is expected to evolve with redshift z. But because EG is only developed for present-day de Sitter space, any predictions on cosmological evolution are beyond the scope of the current theory. The approximation used by Verlinde (2016) is that our Universe is entirely dominated by dark energy, which would imply that H(z) indeed resembles a constant.
In any case, a viable cosmology should at least reproduce the observed values of H(z) at low redshifts, which is the regime that is studied in this work. Furthermore, at low redshifts, the exact specifics of the cosmological evolution have a negligible effect on our measurements. Therefore, to calculate distances from redshifts throughout this work, we can adopt an effective ΛCDM background cosmology with Ωm = 0.315 and ΩΛ = 0.685 (Planck Collaboration XIII 2016), without significantly affecting our results. To calculate the distribution of apparent DM, we use the value of H0 = 70 km s−1Mpc−1. Throughout the paper, we use the following definition for the reduced Hubble constant: h ≡ h70 = H0/(70 km s−1Mpc−1).
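The ‘Hubble acceleration scale’ a0 = cH0 quoted above is straightforward to evaluate; a quick numerical sketch with the H0 = 70 km s⁻¹ Mpc⁻¹ used in this work (constants rounded):

```python
# Hubble acceleration scale a0 = c * H0, where Verlinde's volume-law
# entropy is claimed to start modifying the emergent laws of gravity.
C = 2.998e8                  # speed of light [m/s]
MPC = 3.0857e22              # metres per megaparsec
H0 = 70.0 * 1e3 / MPC        # 70 km/s/Mpc converted to s^-1

a0 = C * H0
print(f"a0 = {a0:.2e} m/s^2")  # a few times 1e-10 m/s^2: some ten orders
                               # of magnitude below Earth's 9.8 m/s^2
```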

Because, as mentioned above, EG gives an effective description of GR (with apparent DM as an additional component), we assume that a gravitational potential affects the pathway of photons as it does in the GR framework. This means that the distribution of apparent DM can be observed using the regular gravitational lensing formalism. In this work, we test the predictions of EG specifically relating to galaxy–galaxy lensing (GGL): the coherent gravitational distortion of light from a field of background galaxies (sources) by the mass of a foreground galaxy sample (lenses) (see e.g. Fischer et al. 2000; Hoekstra et al. 2004; Mandelbaum et al. 2006; Velander et al. 2014; van Uitert et al. 2016). Because the prediction of the gravitational potential in EG is currently only valid for static, spherically symmetric and isolated baryonic mass distributions, we need to select our lenses to satisfy these criteria. Furthermore, as mentioned above, the lenses should be at relatively low redshifts since cosmological evolution is not yet implemented in the theory. To find a reliable sample of relatively isolated foreground galaxies at low redshift, we select our lenses from the very complete spectroscopic Galaxy And Mass Assembly survey (GAMA; Driver et al. 2011). In addition, GAMA's stellar mass measurements allow us to test the prediction of EG for four galaxy sub-samples with increasing stellar mass. The background galaxies, used to measure the lensing effect, are observed by the photometric Kilo-Degree Survey (KiDS; de Jong et al. 2013), which was specifically designed with accurate shape measurements in mind.

In Section 2 of this paper, we explain how we select and model our lenses. In Section 3, we describe the lensing measurements. In Section 4, we introduce the EG theory and derive its prediction for the lensing signal of our galaxy sample. In Section 5, we present the measured GGL profiles and our comparison with the predictions from EG and ΛCDM. The discussion and conclusions are described in Section 6.

GAMA LENS GALAXIES

The prediction of the gravitational potential in EG that is tested in this work is only valid for static, spherically symmetric and isolated baryonic mass distributions (see Section 4). Ideally, we would like to find a sample of isolated lenses, but since galaxies are clustered, we cannot use GAMA to find galaxies that are truly isolated. Instead, we use the survey to construct a sample of lenses that dominate their surroundings and a galaxy sample that allows us to estimate the small contribution arising from their nearby low-mass galaxies (i.e. satellites). The GAMA survey (Driver et al. 2011) is a spectroscopic survey with the AAOmega spectrograph mounted on the Anglo-Australian Telescope. In this study, we use the GAMA II (Liske et al. 2015) observations over three equatorial regions (G09, G12 and G15) that together span ∼180 deg². Over these regions, the redshifts and properties of 180 960 galaxies are measured. These data have a redshift completeness of 98.5 per cent down to a Petrosian r-band magnitude of mr = 19.8. This is very useful to accurately determine the positional relation between galaxies, in order to find a suitable lens sample.

Isolated galaxy selection

To select foreground lens galaxies suitable for our study, we consult the 7th GAMA Galaxy Group Catalogue (G3Cv7), which was created by Robotham et al. (2011) using a friends-of-friends group-finding algorithm. In this catalogue, galaxies are classified as either the brightest central galaxy (BCG) or a satellite of a group, depending on their luminosity and their mutual projected and line-of-sight distances. In cases where there are no other galaxies observed within the linking lengths, the galaxy remains ‘non-grouped’ (i.e. it is not classified as belonging to any group). Mock galaxy catalogues, which were produced using the Millennium DM simulation (Springel et al. 2005) and populated with galaxies according to the semi-analytical galaxy formation recipe ‘GALFORM’ (Bower et al. 2006), are used to calibrate these linking lengths and test the resulting group properties.

However, since GAMA is a flux-limited survey, it does not include the satellites of the faintest observed GAMA galaxies when these are fainter than the flux limit. Many fainter galaxies are therefore classified as non-grouped, whereas they are in reality BCGs. This selection effect is illustrated in Fig. 1, which shows that the number of non-grouped galaxies rises towards faint magnitudes whereas the number of BCGs peaks well before. The only way to obtain a sample of ‘isolated’ GAMA galaxies without satellites as bright as fL times their parent's luminosity would be to select only non-grouped galaxies brighter than 1/fL times the flux limit (illustrated in Fig. 1 for fL = 0.1). Unfortunately, such a selection leaves too small a sample for a useful lensing measurement. Moreover, we suspect that in some cases, observational limitations may have prevented the detection of satellites in this sample as well. Instead, we use this selection to obtain a reasonable estimate of the satellite distribution around the galaxies in our lens sample. Because the mass of the satellites is approximately spherically distributed around the BCG and is sub-dominant compared to the BCG's mass, we can still model the lensing signal of this component using the EG theory. How we model the satellite distribution and its effect on the lensing signal is described in Sections 2.2.3 and 4.3, respectively.

Figure 1.

The magnitude distribution of non-grouped galaxies (blue) and BCGs (red). The green dashed line indicates the selection that removes galaxies that might have a satellite beyond the visible magnitude limit. These hypothetical satellites have at most a fraction fL = 0.1 of the central galaxy luminosity, corresponding to the magnitude limit: mr < 17.3. We use this ‘nearby’ sample to obtain a reliable estimate of the satellite distribution around our centrals.


Because centrals are only classified as BCGs if their satellites are detected, whereas non-grouped galaxies are likely centrals with no observed satellites, we adopt the name ‘centrals’ for the combined sample of BCGs and non-grouped galaxies (i.e. all galaxies which are not satellites). As our lens sample, we select galaxies that dominate their surroundings in three ways: (i) they are centrals, i.e. not classified as satellites in the GAMA group catalogue; (ii) they have stellar masses below 10¹¹ h₇₀⁻² M⊙, since we find that galaxies with higher stellar mass have significantly more satellites (see Section 2.2.3); and (iii) they are not affected by massive neighbouring groups, i.e. there is no central galaxy within 3 h₇₀⁻¹ Mpc (which is the maximum radius of our lensing measurement, see Section 3). This last selection suppresses the contribution of neighbouring centrals (known as the ‘2-halo term’ in the standard DM framework) to our lensing signal, which is visible at scales above ∼1 h₇₀⁻¹ Mpc.

Furthermore, we only select galaxies with redshift quality nQ ≥ 3, in accordance with the standard recommendation by GAMA. After these four cuts (central, no neighbouring centrals, M∗ < 10¹¹ h₇₀⁻² M⊙ and nQ ≥ 3), our remaining sample of ‘isolated centrals’ amounts to 33 613 lenses.

Baryonic mass distribution

Because there exists no DM component in the Verlinde (2016) framework of EG, the gravitational potential originates only from the baryonic mass distribution. Therefore, in order to determine the lensing signal of our galaxies as predicted by EG (see Section 4), we need to know their baryonic mass distribution. In this work, we consider two possible models: the point mass approximation and an extended mass profile. We expect the point mass approximation to be valid, given that (i) the bulk mass of a galaxy is enclosed within the minimum radius of our measurement (30 h₇₀⁻¹ kpc) and (ii) our selection criteria ensure that our isolated centrals dominate the total mass distribution within the maximum radius of our measurement (Rmax = 3 h₇₀⁻¹ Mpc). If these two assumptions hold, the entire mass distribution of the isolated centrals can be described by a simple point mass. This allows us to analytically calculate the lensing signal predicted by EG, based on only one observable: the galaxies’ mass Mg that consists of a stellar and a cold gas component. To assess the sensitivity of our interpretation to the mass distribution, we compare the predicted lensing signal of the point mass to that of an extended mass distribution. This more realistic extended mass profile consists of four components: stars, cold gas, hot gas and satellites, which all have an extended density profile. In the following sections, we review each component and make reasonable assumptions regarding their model profiles and corresponding input parameters.

Stars and cold gas

To determine the baryonic masses Mg of the GAMA galaxies, we use their stellar masses M* from version 19 of the stellar mass catalogue, an updated version of the catalogue created by Taylor et al. (2011). These stellar masses are measured from observations of the Sloan Digital Sky Survey (SDSS; Abazajian et al. 2009) and the VISTA Kilo-Degree Infrared Galaxy survey (VIKING; Edge et al. 2013), by fitting Bruzual & Charlot (2003) stellar population synthesis models to the ugrizZYJHK spectral energy distributions (constrained to the rest frame wavelength range 3000–11 000 Å). We correct M* for flux falling outside the automatically selected aperture using the ‘flux-scale’ parameter, following the procedure discussed in Taylor et al. (2011).

In these models, the stellar mass includes the mass locked up in stellar remnants, but not the gas recycled back into the interstellar medium. Because the mass distribution of gas in our galaxies is not measured, we can only obtain realistic estimates from literature. There are two contributions to consider: cold gas consisting of atomic hydrogen (H i), molecular hydrogen (H2) and helium and hot gas consisting of ionized hydrogen and helium. Most surveys find that the mass in cold gas is highly dependent on the galaxies’ stellar mass. For low-redshift galaxies (z < 0.5), the mass in H i (H2) ranges from 20 to 30 per cent (8–10 per cent) of the stellar mass for galaxies with M* = 1010 M⊙, dropping to 5 to 10 per cent (4–5 per cent) for galaxies with M* = 1011 M⊙ (Saintonge et al. 2011; Catinella et al. 2013; Boselli et al. 2014; Morokuma-Matsui & Baba 2015). Therefore, in order to estimate the mass of the cold gas component, we consider a cold gas fraction fcold that depends on the measured M* of our galaxies. We use the best-fitting scaling relation found by Boselli et al. (2014) using the Herschel Reference Survey (Boselli et al. 2010):

(1)  log(fcold) = log(Mcold/M∗) = −0.69 log(M∗) + 6.63.

In this relation, the total cold gas mass Mcold is defined as the combination of the atomic and molecular hydrogen gas, including an additional 30 per cent contribution of helium: Mcold = 1.3 (MHI + MH2). With a maximum measured radius of ∼1.5 times the effective radius of the stellar component, the extent of the cold gas distribution is very similar to that of the stars (Pohlen et al. 2010; Crocker et al. 2011; Mentuch Cooper et al. 2012; Davis et al. 2013). We therefore consider the stars and cold gas to form a single galactic mass distribution with:

(2)  Mg = (M∗ + Mcold) = M∗(1 + fcold).

For both the point mass and the extended mass profile, we use this galactic mass Mg to predict the lensing signal in the EG framework.
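Equations (1) and (2) combine into a few lines of code. A sketch (illustrative only; masses in solar units, numerical constants taken from the scaling relation above):

```python
import math

def cold_gas_fraction(m_star):
    """Eq. (1): cold gas fraction from the Boselli et al. (2014)
    scaling relation, with m_star in solar masses."""
    return 10 ** (-0.69 * math.log10(m_star) + 6.63)

def galactic_mass(m_star):
    """Eq. (2): stars plus cold gas (HI, H2 and 30 per cent helium)."""
    return m_star * (1 + cold_gas_fraction(m_star))

# Gas richness drops steeply with stellar mass:
for m in (1e10, 1e11):
    print(f"M* = {m:.0e} Msun: f_cold = {cold_gas_fraction(m):.2f}")
```

Consistent with the survey values quoted above, the relation gives roughly half the stellar mass in cold gas at M∗ = 10¹⁰ M⊙ but only about a tenth at 10¹¹ M⊙.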

In the point mass approximation, the total density distribution of our galaxies consists of a point source with its mass corresponding to the galactic mass Mg of the lenses. For the extended mass profile, we use Mg as an input parameter for the density profile of the ‘stars and cold gas’ component. Because starlight traces the mass of this component, we use the Sérsic intensity profile (Sérsic 1963; Sérsic 1968) as a reasonable approximation of the density:

(3)  IS(r) ∝ ρS(r) = ρe exp{−bn [(r/re)^(1/n) − 1]}.

Here re is the effective radius, n is the Sérsic index and bn is defined such that Γ(2n) = 2γ(2n, bn). The Sérsic parameters were measured for 167 600 galaxies by Kelvin et al. (2012) on the United Kingdom Infrared Telescope (UKIRT) Infrared Deep Sky Survey Large Area Survey images from GAMA and the ugrizYJHK images of SDSS DR7 (where we use the parameter values as measured in the r-band). Of these galaxies, 69 781 are contained in our GAMA galaxy catalogue. Although not all galaxies used in this work (the 33,613 isolated centrals) have Sérsic parameter measurements, we can obtain a realistic estimate of the mean Sérsic parameter values of our chosen galaxy samples. We use re and n equal to the mean value of the galaxies for which they are measured within each sample, in order to model the density profile ρS(r) of each full galaxy sample. This profile is multiplied by the effective mass density ρe, which is defined such that the mass integrated over the full ρS(r) is equal to the mean galactic mass 〈Mg〉 of the lens sample. The mean measured values of the galactic mass and Sérsic parameters for our galaxy samples can be found in Table 1.

Table 1.

For each stellar mass bin, this table shows the number N and mean redshift 〈zl〉 of the galaxy sample. Next to these, it shows the corresponding measured input parameters of the ESD profiles in EG: the mean stellar mass 〈M*〉, galactic mass 〈Mg〉, effective radius 〈re〉, Sérsic index 〈n〉, satellite fraction 〈fsat〉 and satellite radius 〈rsat〉 of the centrals. All masses are displayed in units of log₁₀(M/h₇₀⁻² M⊙) and all lengths in h₇₀⁻¹ kpc.

M★-bin N 〈zl〉 〈M*〉 〈Mg〉 〈re〉 〈n〉 〈fsat〉 〈rsat〉

8.5–10.5 14974 0.22 10.18 10.32 3.58 1.66 0.27 140.7

10.5–10.8 10500 0.29 10.67 10.74 4.64 2.25 0.25 143.9

10.8–10.9 4076 0.32 10.85 10.91 5.11 2.61 0.29 147.3

10.9–11 4063 0.33 10.95 11.00 5.56 3.04 0.32 149.0
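The Sérsic constant bn in equation (3), defined through Γ(2n) = 2γ(2n, bn), has no closed form. A stdlib-only sketch solves the equivalent condition P(2n, bn) = 1/2 (with P the regularized lower incomplete gamma function) by bisection; scipy.special.gammaincinv(2*n, 0.5) returns the same value directly:

```python
import math

def gammainc_lower_reg(a, x, tol=1e-12):
    """Regularized lower incomplete gamma function P(a, x), via the
    standard power series (adequate for the moderate a, x needed here)."""
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    k = 0
    while abs(term) > tol * abs(total):
        k += 1
        term *= x / (a + k)
        total += term
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def sersic_bn(n):
    """Solve Gamma(2n) = 2*gamma(2n, b_n), i.e. P(2n, b_n) = 1/2,
    by bisection."""
    lo, hi = 1e-6, 8.0 * n + 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gammainc_lower_reg(2.0 * n, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# b_n grows roughly as 2n - 1/3 (de Vaucouleurs n = 4 gives b_n ~ 7.67).
for n in (1.0, 1.66, 2.25, 4.0):  # includes mean indices from Table 1
    print(f"n = {n:4.2f}: b_n = {sersic_bn(n):.4f}")
```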

Hot gas

Hot gas has a more extended density distribution than stars and cold gas and is generally modelled by the β-profile (e.g. Cavaliere & Fusco-Femiano 1976; Mulchaey 2000):

(4)  ρhot(r) = ρcore [1 + (r/rcore)²]^(−3β/2),

which provides a fair description of X-ray observations in clusters and groups of galaxies. In this distribution, rcore is the core radius of the hot gas. The outer slope is characterized by β, which, for a hydrostatic isothermal sphere, corresponds to the ratio of the specific energy in galaxies to that in the hot gas (see e.g. Mulchaey 2000, for a review). Observations of galaxy groups indicate β ∼ 0.6 (Sun et al. 2009). Fedeli et al. (2014) found similar results using the Overwhelmingly Large Simulations (OWLS; Schaye et al. 2010) for the range in stellar masses that we consider here (i.e. with M∗ ∼ 10¹⁰–10¹¹ h₇₀⁻² M⊙). We therefore adopt β = 0.6. Moreover, Fedeli et al. (2014) estimate that the mass in hot gas is at most three times that in stars. As the X-ray properties from the OWLS model of active galactic nuclei match X-ray observations well (McCarthy et al. 2010), we adopt Mhot = 3〈M*〉. Fedeli et al. (2014) find that the simulations suggest a small core radius rcore (i.e. even smaller than the transition radius of the stars). This implies that ρhot(r) is effectively described by a single power law. Observations show a range in core radii, but typical values are tens of kiloparsecs (e.g. Mulchaey et al. 1996) for galaxy groups. We take rcore = re, which is relatively small in order to give an upper limit; a larger value would reduce the contribution of hot gas and thus move the extended mass profile closer to the point mass case. We define the amplitude ρcore of the profile such that the mass integrated over the full ρhot(r) distribution is equal to the total hot gas mass Mhot.
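The statement that a small rcore makes the β-profile of equation (4) effectively a single power law can be checked numerically: its logarithmic slope tends to −3β (−1.8 for the adopted β = 0.6) once r ≫ rcore. A sketch in arbitrary units:

```python
import math

BETA = 0.6   # outer-slope parameter adopted in the text

def rho_hot(r, r_core=1.0, rho_core=1.0):
    """Eq. (4): beta-profile for the hot gas (arbitrary units)."""
    return rho_core * (1.0 + (r / r_core) ** 2) ** (-3.0 * BETA / 2.0)

def log_slope(r, eps=1e-5):
    """Numerical logarithmic slope d ln(rho) / d ln(r) at radius r."""
    up, dn = rho_hot(r * (1 + eps)), rho_hot(r / (1 + eps))
    return (math.log(up) - math.log(dn)) / (2.0 * math.log(1 + eps))

# Inside r_core the profile is flat; well outside it the slope settles
# at -3*beta = -1.8, i.e. a single power law.
for r in (0.1, 1.0, 10.0, 100.0):
    print(f"r = {r:6.1f} r_core: slope = {log_slope(r):+.3f}")
```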

Satellites

As described in Section 2.1, we use our nearby (mr < 17.3) sample of centrals (BCGs and non-grouped galaxies) to find that most of the non-grouped galaxies in the GAMA catalogue might not be truly isolated, but are likely to have satellites beyond the visible magnitude limit. Fortunately, the satellites are a spherically distributed, sub-dominant component of the lens, which means that their (apparent) mass distribution can be described within EG. In order to assess the contribution of these satellites to our lensing signal, we first need to model their average baryonic mass distribution. We follow van Uitert et al. (2016) by modelling the density profile of satellites around the central as a double power law:

(5)  ρsat(r) = ρsat / [(r/rsat)(1 + r/rsat)²],

where ρsat is the density and rsat the scale radius of the satellite distribution. The amplitude ρsat is chosen such that the mass integrated over the full profile is equal to the mean total mass in satellites 〈M∗sat〉 measured around our nearby sample of centrals. By binning these centrals according to their stellar mass M∗cen, we find that for centrals within 10⁹ < M∗cen < 10¹¹ h₇₀⁻² M⊙, the total mass in satellites can be approximated by a fraction fsat = 〈M∗sat〉/〈M∗cen〉 ∼ 0.2–0.3. However, for centrals with masses above 10¹¹ h₇₀⁻² M⊙, the satellite mass fraction rapidly rises to fsat ∼ 1 and higher. For this reason, we choose to limit our lens sample to galaxies below 10¹¹ h₇₀⁻² M⊙. As the value of the scale radius rsat, we pick the half-mass radius (the radius that contains half of the total mass) of the satellites around the nearby centrals. The mean measured mass fraction 〈fsat〉 and half-mass radius 〈rsat〉 of satellites around centrals in our four M∗-bins can be found in Table 1.

LENSING MEASUREMENT

According to GR, the gravitational potential of a mass distribution leaves an imprint on the path of travelling photons. As discussed in Section 1, EG gives an effective description of GR (where the excess gravity from apparent DM detailed in Verlinde 2016 is an additional component). We therefore work under the assumption that a gravitational potential (including that of the predicted apparent DM distribution) has the exact same effect on light rays as in GR. Thus, by measuring the coherent distortion of the images of faraway galaxies (sources), we can reconstruct the projected (apparent) mass distribution (lens) between the background sources and the observer. In the case of GGL, a large sample of foreground galaxies acts as the gravitational lens (for a general introduction, see e.g. Bartelmann & Schneider 2001; Schneider, Kochanek & Wambsganss 2006). Because the distortion of the source images is only ∼1 per cent of their intrinsic shape, the tangential shear γt (which is the source ellipticity tangential to the line connecting the source and the centre of the lens) is averaged for many sources within circular annuli around the lens centre. This measurement provides us with the average shear 〈γt〉(R) as a function of projected radial distance R from the lens centres. In GR, this quantity is related to the excess surface density (ESD) profile ΔΣ(R). Using our earlier assumption, we can also use the same methodology to obtain the ESD of the apparent DM in the EG framework. The ESD is defined as the average surface mass density 〈Σ〉(<R) within R, minus the surface density Σ(R) at that radius:

(6)  ΔΣ(R) = 〈Σ〉(<R) − Σ(R) = 〈γt〉(R) Σcrit.

Here Σcrit is the critical surface mass density at the redshift of the lens:

(7)  Σcrit = (c²/4πG) · D(zs) / [D(zl) D(zl, zs)],

a geometrical factor that is inversely proportional to the strength of the lensing effect. In this equation, D(zl) and D(zs) are the angular diameter distances to the lens and source, respectively, and D(zl, zs) is the distance between the lens and the source.
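Equation (7) can be sketched numerically under the effective ΛCDM background adopted in the introduction (Ωm = 0.315, ΩΛ = 0.685, H0 = 70 km s⁻¹ Mpc⁻¹); the lens and source redshifts in the example below are hypothetical, not values from the paper:

```python
import math

# Effective LambdaCDM background from the introduction.
C_KMS = 299792.458        # speed of light [km/s]
H0 = 70.0                 # Hubble constant [km/s/Mpc]
OM, OL = 0.315, 0.685     # matter and dark-energy density parameters

def comoving_distance(z, steps=2000):
    """D_C = c * integral_0^z dz'/H(z'), trapezoidal rule, in Mpc."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        w = 0.5 if i in (0, steps) else 1.0
        total += w / math.sqrt(OM * (1.0 + zi) ** 3 + OL)
    return C_KMS / H0 * total * dz

def sigma_crit(zl, zs):
    """Eq. (7): critical surface mass density in kg/m^2, using
    angular diameter distances for a flat cosmology."""
    G = 6.674e-11          # m^3 kg^-1 s^-2
    c = 2.998e8            # m/s
    mpc = 3.0857e22        # metres per Mpc
    dcl, dcs = comoving_distance(zl), comoving_distance(zs)
    d_l = dcl / (1.0 + zl) * mpc           # D(zl)
    d_s = dcs / (1.0 + zs) * mpc           # D(zs)
    d_ls = (dcs - dcl) / (1.0 + zs) * mpc  # D(zl, zs), flat universe
    return c ** 2 / (4.0 * math.pi * G) * d_s / (d_l * d_ls)

# Lensing is weaker (Sigma_crit larger) when the source sits just
# behind the lens than when it lies far behind it.
print(sigma_crit(0.2, 0.3), sigma_crit(0.2, 0.6))
```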

For a more extensive discussion of the GGL method and the role of the KiDS and GAMA surveys therein, we refer the reader to the previous KiDS-GAMA lensing papers: Sifón et al. (2015), van Uitert et al. (2016) and Brouwer et al. (2016), and especially section 3 of Viola et al. (2015).

KiDS source galaxies

The background sources used in our GGL measurements are observed by KiDS (de Jong et al. 2013). The KiDS photometric survey uses the OmegaCAM instrument (Kuijken et al. 2011) on the VLT Survey Telescope (Capaccioli & Schipani 2011) that was designed to provide a round and uniform point spread function (PSF) over a square degree field of view, specifically with weak lensing measurements in mind. Of the currently available 454 deg² area from the ‘KiDS-450’ data release (Hildebrandt et al. 2017), we use the ∼180 deg² area that overlaps with the equatorial GAMA fields (Driver et al. 2011). After masking bright stars and image defects, 79 per cent of our original survey overlap remains (de Jong et al. 2015).

The photometric redshifts of the background sources are determined from ugri photometry as described in Kuijken et al. (2015) and Hildebrandt et al. (2017). Due to the bias inherent in measuring the source redshift probability distribution p(zs) of each individual source (as was done in the previous KiDS-GAMA studies), we instead employ the source redshift number distribution n(zs) of the full population of sources. The individual p(zs) is still measured, but only to find the ‘best’ redshift zB at the p(zs)-peak of each source. Following Hildebrandt et al. (2017), we limit the source sample to: zB < 0.9. We also use zB in order to select sources which lie sufficiently far behind the lens: zB > zl + 0.2. The n(zs) is estimated from a spectroscopic redshift sample, which is re-weighted to resemble the photometric properties of the appropriate KiDS galaxies for different lens redshifts (for details, see section 3 of van Uitert et al. 2016 and Hildebrandt et al. 2017). We use the n(z) distribution behind the lens for the calculation of the critical surface density from equation (7):

(8)

$$\Sigma_{\rm crit}^{-1} = \frac{4\pi G}{c^2}\, D(z_{\rm l}) \int_{z_{\rm l}+0.2}^{\infty} \frac{D(z_{\rm l}, z_{\rm s})}{D(z_{\rm s})}\, n(z_{\rm l}, z_{\rm s})\, {\rm d}z_{\rm s}.$$
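As a rough sketch, the n(z)-weighted geometry of equation (8) can be evaluated numerically. The Gaussian n(zs) below is a toy stand-in for the reweighted KiDS source distribution, and the constant prefactor 4πG/c² is omitted so that only the distance-ratio integral is shown:

```python
import numpy as np

# Toy evaluation of the distance-ratio integral in eq. (8); geometry only,
# with the 4*pi*G/c^2 prefactor omitted. The Gaussian n(z_s) is illustrative.
H0, Om, OL = 70.0, 0.315, 0.685
C_KMS = 299792.458

def com_dist(z, n=512):
    """Comoving distance in Mpc (flat LambdaCDM, trapezoidal integration)."""
    zp = np.linspace(0.0, z, n)
    f = C_KMS / (H0 * np.sqrt(Om * (1.0 + zp)**3 + OL))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zp)))

def ang_dist(z1, z2=0.0):
    zl, zh = sorted((z1, z2))
    return (com_dist(zh) - com_dist(zl)) / (1.0 + zh)

def inv_sigma_crit_eff(zl, zs_grid, n_zs):
    """Weighted D(zl)*<D_ls/D_s> of eq. (8), applying the z_B > z_l + 0.2 cut."""
    mask = zs_grid > zl + 0.2          # keep only sources well behind the lens
    zs, nz = zs_grid[mask], n_zs[mask]
    if zs.size == 0:
        return 0.0
    ratio = np.array([ang_dist(zl, z) / ang_dist(z) for z in zs])
    dz = zs_grid[1] - zs_grid[0]
    return ang_dist(zl) * float(np.sum(ratio * nz)) * dz

zs_grid = np.linspace(0.0, 1.4, 141)
n_zs = np.exp(-0.5 * ((zs_grid - 0.6) / 0.2)**2)   # hypothetical n(z_s)
n_zs /= np.sum(n_zs) * (zs_grid[1] - zs_grid[0])   # normalize to unit integral
val = inv_sigma_crit_eff(0.2, zs_grid, n_zs)
```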

By assuming that the intrinsic ellipticities of the sources are randomly oriented, 〈γt〉 from equation (6) can be approximated by the average tangential ellipticity 〈εt〉 given by:

(9)

$$\epsilon_{\rm t} = -\epsilon_1 \cos(2\phi) - \epsilon_2 \sin(2\phi),$$

where ε1 and ε2 are the measured source ellipticity components and ϕ is the angle of the source relative to the lens centre (both with respect to the equatorial coordinate system). The measurement of the source ellipticities is performed on the r-band data, which are observed under superior observing conditions compared to the other bands (de Jong et al. 2015; Kuijken et al. 2015). The images are reduced by the theli pipeline (Erben et al. 2013), as described in Hildebrandt et al. (2017). The sources are detected from the reduced images using the SExtractor algorithm (Bertin & Arnouts 1996), after which the ellipticities of the source galaxies are measured using the improved self-calibrating lensfit code (Miller et al. 2007, 2013; Fenech Conti et al. 2016). Each shape is assigned a weight ws that reflects the reliability of the ellipticity measurement. We incorporate this lensfit weight and the lensing efficiency $\Sigma_{\rm crit}^{-1}$ into the total weight:

(10)

$$W_{\rm ls} = w_{\rm s}\,\Sigma_{\rm crit}^{-2},$$

which is applied to each lens–source pair. This factor downweights the contribution of sources that have less reliable shape measurements and of lenses with a redshift closer to that of the sources (which makes them less sensitive to the lensing effect).

Inside each radial bin R, the weights and tangential ellipticities of all lens–source pairs are combined according to equation (6) to arrive at the ESD profile:

(11)

$$\Delta\Sigma(R) = \frac{1}{1+K}\,\frac{\sum_{\rm ls} W_{\rm ls}\,\epsilon_{\rm t}\,\Sigma_{\rm crit,l}}{\sum_{\rm ls} W_{\rm ls}}.$$

In this equation, K is the average correction of the multiplicative bias m on the lensfit shear estimates. The values of m are determined using image simulations (Fenech Conti et al. 2016) for eight tomographic redshift slices within 0.1 ≤ zB < 0.9 (Dvornik et al., in preparation). The average correction is computed for the lens–source pairs in each respective redshift slice as follows:

(12)

$$K = \frac{\sum_{\rm ls} W_{\rm ls}\, m_{\rm s}}{\sum_{\rm ls} W_{\rm ls}},$$

where the mean value of K over the entire source redshift range is −0.014.
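The estimator chain of equations (9)-(12) can be sketched for a single radial bin as follows. The toy input arrays are hypothetical lens-source pairs, not survey data:

```python
import numpy as np

def tangential_ellipticity(e1, e2, phi):
    """Eq. (9): ellipticity component tangential to the lens-source direction."""
    return -e1 * np.cos(2.0 * phi) - e2 * np.sin(2.0 * phi)

def stacked_esd(e1, e2, phi, w_s, sigma_crit, m_s):
    """Stacked ESD estimator of eqs (10)-(12) for one radial bin.

    All inputs are arrays with one entry per lens-source pair (toy values);
    sigma_crit is the critical surface density of each pair.
    """
    eps_t = tangential_ellipticity(e1, e2, phi)
    W = w_s * sigma_crit**-2                          # eq. (10): W_ls
    K = np.sum(W * m_s) / np.sum(W)                   # eq. (12): mean mult. bias
    raw = np.sum(W * eps_t * sigma_crit) / np.sum(W)  # weighted eps_t * Sigma_crit
    return raw / (1.0 + K)                            # eq. (11)
```

For a pair with ϕ = 0 and ε1 = −γt, the tangential ellipticity reduces to γt, so the estimator returns γt Σcrit/(1 + K), consistent with equation (6).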

We also correct the ESD for systematic effects that arise from the residual shape correlations due to PSF anisotropy. This results in non-vanishing contributions to the ESD signal on large scales and at the survey edges, because the averaging is not done over all azimuthal angles. This spurious signal can be determined by measuring the lensing signal around random points. We use ∼18 million locations from the GAMA random catalogue and find that the resulting signal is small (below 10 per cent for scales up to $\sim 1\,h_{70}^{-1}\,{\rm Mpc}$). We subtract the lensing signal around random locations from all measured ESD profiles.

Following previous KiDS-GAMA lensing papers, we measure the ESD profile for 10 logarithmically spaced radial bins within $0.02 < R < 2\,h_{100}^{-1}\,{\rm Mpc}$, where our estimates of the signal and uncertainty are thoroughly tested.3 However, since we work with the h ≡ h70 definition, we use the approximately equivalent $0.03 < R < 3\,h_{70}^{-1}\,{\rm Mpc}$ as our radial distance range. The errors on the ESD values are given by the diagonal of the analytical covariance matrix. Section 3.4 of Viola et al. (2015) includes the computation of the analytical covariance matrix and shows that up to a projected radius of $R = 2\,h_{100}^{-1}\,{\rm Mpc}$, the square root of the diagonal is in agreement with the error estimate from bootstrapping.

LENSING SIGNAL PREDICTION

According to Verlinde (2016), the gravitational potential Φ(r) caused by the enclosed baryonic mass distribution Mb(r) exceeds that of GR on galactic and larger scales. In addition to the normal GR contribution of Mb(r) to Φ(r), there exists an extra gravitational effect. This excess gravity arises due to a volume law contribution to the entropy that is associated with the positive dark energy in our Universe. In a universe without matter, the total entropy of the dark energy would be maximal, as it would be non-locally distributed over all available space. In our Universe, on the other hand, any baryonic mass distribution Mb(r) reduces the entropy content of the Universe. This removal of entropy due to matter produces an elastic response of the underlying microscopic system, which can be observed on galactic and larger scales as an additional gravitational force. Although this excess gravity does not originate from an actual DM contribution, it can be effectively described by an apparent DM distribution MD(r).

The apparent DM formula

Verlinde (2016) determines the amount of apparent DM by estimating the elastic energy associated with the entropy displacement caused by Mb(r). This leads to the following relation:4

(13)

$$\int_0^r \epsilon_{\rm D}^2(r')\, A(r')\, {\rm d}r' = V_{M_{\rm b}}(r),$$

where we integrate over a sphere with radius r and area A(r) = 4πr2. The strain εD(r) caused by the entropy displacement is given by

(14)

$$\epsilon_{\rm D}(r) = \frac{8\pi G}{c H_0}\,\frac{M_{\rm D}(r)}{A(r)},$$

where c is the speed of light, G the gravitational constant and H0 the present-day Hubble constant (which we choose to be H0 = 70 km s−1 Mpc−1). Furthermore, $V_{M_{\rm b}}(r)$ is the volume that would contain the amount of entropy that is removed by a mass Mb inside a sphere of radius r, if that volume were filled with the average entropy density of the universe:

(15)

$$V_{M_{\rm b}}(r) = \frac{8\pi G}{c H_0}\,\frac{M_{\rm b}(r)\, r}{3}.$$

Now inserting the relations (14) and (15) into (13) yields:

(16)

$$\int_0^r \frac{G M_{\rm D}^2(r')}{r'^2}\, {\rm d}r' = M_{\rm b}(r)\, r\, \frac{c H_0}{6}.$$

Finally, by taking the derivative with respect to r on both sides of the equation, one arrives at the following relation:

(17)

$$M_{\rm D}^2(r) = \frac{c H_0\, r^2}{6 G}\,\frac{{\rm d}\left(M_{\rm b}(r)\, r\right)}{{\rm d}r}.$$

This is the apparent DM formula from Verlinde (2016) that translates a baryonic mass distribution into an apparent DM distribution. This apparent DM only plays a role in the regime where the elastic response of the entropy of dark energy SDE takes place: where $V(r) > V_{M_{\rm b}}(r)$, i.e. SDE ∝ V(r) is large compared to the entropy that is removed by Mb(r) within our radius r. By substituting equation (15) into this condition, we find that this is the case when:

(18)

$$r > \sqrt{\frac{2G}{c H_0}\, M_{\rm b}(r)}.$$

For a lower limit on this radius for our sample, we can consider a point source with a mass of $M = 10^{10}\,h_{70}^{-2}\,{\rm M_{\odot}}$, close to the average mass 〈Mg〉 of galaxies in our lowest stellar mass bin. In this simple case, the regime starts when $r > 2\,h_{70}^{-1}\,{\rm kpc}$. This shows that our observations (which start at $30\,h_{70}^{-1}\,{\rm kpc}$) are well within the EG regime.
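The quoted lower limit follows directly from equation (18); a quick numerical check, assuming rounded SI constants, reproduces it:

```python
import numpy as np

# Quick numerical check of eq. (18), in SI units (constants rounded).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
H0 = 70e3 / 3.086e22   # 70 km/s/Mpc expressed in s^-1
MSUN, KPC = 1.989e30, 3.086e19

def eg_regime_radius(Mb):
    """Eq. (18): radius beyond which the apparent DM description applies (Mb in kg)."""
    return np.sqrt(2.0 * G * Mb / (c * H0))

# For a 10^10 solar-mass point source this evaluates to roughly 2 kpc,
# matching the value quoted in the text.
r_min_kpc = eg_regime_radius(1e10 * MSUN) / KPC
```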

However, it is important to keep in mind that this equation does not represent a new fundamental law of gravity, but is merely a macroscopic approximation used to describe an underlying microscopic phenomenon. Therefore, this equation is only valid under the specific set of circumstances that have been assumed for its derivation. In this case, the system considered was a static, spherically symmetric and isolated baryonic mass distribution. With these limitations in mind, we have selected our galaxy sample to meet these criteria as closely as possible (see Section 2.1).

Finally, we note that in order to test the EG predictions with gravitational lensing, we need to make some assumptions about the used cosmology (as discussed in Section 1). These concern the geometric factors in the lensing equation (equation 7) and the evolution of the Hubble constant (which enters in equation 17 for the apparent DM). We assume that if EG is to be a viable theory, it should predict an expansion history that agrees with the current supernova data (Riess, Press & Kirshner 1996; Kessler et al. 2009; Betoule et al. 2014), specifically over the redshift range that is relevant for our lensing measurements (0.2 < zs < 0.9). If this is the case, the angular diameter distance–redshift relation is similar to what is used in ΛCDM. We therefore adopt a ΛCDM background cosmology with Ωm = 0.315 and ΩΛ = 0.685, based on the Planck Collaboration XIII (2016) measurements. Regarding H0 in equation (17), we note that a Hubble parameter that changes with redshift is not yet implemented in the EG theory. However, for the lens redshifts considered in this work (〈zl〉 ∼ 0.2), the difference resulting from using H0 or H(zl) to compute the lensing signal prediction is ∼5 per cent. This means that considering the statistical uncertainties in our measurements (≳40 per cent, see e.g. Fig. 2), our choice to use H0 = 70 km s−1Mpc−1 instead of an evolving H(zl) has no significant effect on the results of this work.

Figure 2.

The ESD profile predicted by EG for isolated centrals, both in the case of the point mass approximation (dark red, solid) and the extended galaxy model (dark blue, solid). The former consists of a point source with the mass of the stars and cold gas component (red), with the lensing signal evaluated for both the baryonic mass (dash–dotted) and the apparent DM (dashed). The latter consists of a stars and cold gas component modelled by a Sérsic profile (blue), a hot gas component modelled by a β-profile (magenta) and a satellite distribution modelled by a double power law (orange), all with the lensing signal evaluated for both the baryonic mass (dash–dotted) and the apparent DM (dashed). Note that the total ESD of the extended mass distribution is not equal to the sum of its components, due to the non-linear conversion from baryonic mass to apparent DM. All profiles are shown for our highest mass bin ($10^{10.9} < M_{\ast } < 10^{11} \,h_{70}^{-2}\,{\rm M_{\odot }}$), but the difference between the two models is similar for the other galaxy sub-samples. The difference between the ESD predictions of the two models is comparable to the median 1σ uncertainty on our lensing measurements (illustrated by the grey band).


From equation (17), we now need to determine the ESD profile of the apparent DM distribution, in order to compare the predictions from EG to our measured GGL profiles. The next steps towards this ΔΣEG(R) depend on our assumptions regarding the baryonic mass distribution of our lenses. We compute the lensing signal in EG for two models (which are discussed in Section 2.2): the point mass approximation and the more realistic extended mass distribution.

Point mass approximation

In this work, we measure the ESD profiles of galaxies at projected radial distances $R > 30\,h_{70}^{-1}\,{\rm kpc}$. If we assume that beyond this distance the galaxy is almost entirely enclosed within the radius r, we can approximate the enclosed baryonic mass as a constant: Mb(r) = Mb. Re-writing equation (17) accordingly yields:

(19)

$$M_{\rm D}(r) = \sqrt{\frac{c H_0}{6 G}}\; r\, \sqrt{M_{\rm b}} \equiv C_{\rm D}\, r\, \sqrt{M_{\rm b}},$$

where CD is a constant factor determined by c, G and H0. In order to calculate the resulting ΔΣD(R), we first need to determine the spherical density distribution ρD(r). Under the assumption of spherical symmetry, we can use:

(20)

$$\rho_{\rm D}(r) = \frac{1}{4\pi r^2}\,\frac{{\rm d}M_{\rm D}(r)}{{\rm d}r} = \frac{C_{\rm D}\sqrt{M_{\rm b}}}{4\pi r^2}.$$

We calculate the corresponding surface density ΣD(R) as a function of projected distance R in the cylindrical coordinate system (R, ϕ, z), where z is the distance along the line of sight and r2 = R2 + z2, such that:

(21)

$$\Sigma_{\rm D}(R) = \int_{-\infty}^{\infty} \rho_{\rm D}(R, z)\, {\rm d}z.$$

Substituting ρD(R, z) provides the surface density of the apparent DM distribution associated with our point mass:

(22)

$$\Sigma_{\rm D}(R) = \frac{C_{\rm D}\sqrt{M_{\rm b}}}{4\pi}\, 2\int_0^{\infty} \frac{{\rm d}z}{R^2 + z^2} = \frac{C_{\rm D}\sqrt{M_{\rm b}}}{4R}.$$

We can now use equation (6) to find the ESD:

(23)

$$\Delta\Sigma(R) = \langle\Sigma\rangle(<R) - \Sigma(R) = \frac{2\pi \int_0^R \Sigma(R')\, R'\, {\rm d}R'}{\pi R^2} - \Sigma(R).$$

In the case of our point mass:

(24)

$$\Delta\Sigma_{\rm D}(R) = \frac{C_{\rm D}\sqrt{M_{\rm b}}}{2R} - \frac{C_{\rm D}\sqrt{M_{\rm b}}}{4R} = \frac{C_{\rm D}\sqrt{M_{\rm b}}}{4R},$$

which happens to be equal to ΣD(R) from equation (22).5

Apart from the extra contribution from the apparent DM predicted by EG, we also need to add the standard GR contribution from baryonic matter to the ESD. Under the assumption that the galaxy is a point mass we know that Σb(R) = 0 for R > 0, and that the integral over Σb(R) must give the total mass Mb of the galaxy. Substituting this into equation (23) gives us

(25)

$$\Delta\Sigma_{\rm b}(R) = \frac{M_{\rm b}}{\pi R^2}.$$

Ultimately, the total ESD predicted by EG in the point mass approximation is

(26)

$$\Delta\Sigma_{\rm EG}(R) = \Delta\Sigma_{\rm b}(R) + \Delta\Sigma_{\rm D}(R),$$

where the contributions are the ESDs of a point source with mass Mg of our galaxies, both in GR and EG.
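The point mass prediction of equations (19)-(26) can be sketched numerically, including a consistency check that the closed form of equation (19) satisfies the integral relation of equation (16) for constant Mb. The mass value is illustrative:

```python
import numpy as np

# Sketch of the point mass prediction (eqs 19-26), in SI units (rounded).
G, c = 6.674e-11, 2.998e8
H0 = 70e3 / 3.086e22                 # 70 km/s/Mpc in s^-1
C_D = np.sqrt(c * H0 / (6.0 * G))    # constant factor of eq. (19)

def delta_sigma_eg_point(Mb, R):
    """Total EG ESD of eq. (26): baryonic point mass (eq. 25) + apparent DM (eq. 24).

    Mb in kg, R in m; returns kg/m^2.
    """
    return Mb / (np.pi * R**2) + C_D * np.sqrt(Mb) / (4.0 * R)

# Consistency check: for constant Mb, inserting M_D(r) from eq. (19) into the
# left-hand side of eq. (16) must reproduce Mb * r * c * H0 / 6.
Mb = 1.989e40                        # ~1e10 solar masses, illustrative value
r = np.linspace(1.0, 1e21, 10000)    # radius grid in metres (avoiding r = 0)
MD = C_D * r * np.sqrt(Mb)           # eq. (19)
integrand = G * MD**2 / r**2
lhs = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
rhs = Mb * (r[-1] - r[0]) * c * H0 / 6.0
```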

Extended mass distribution

The above derivation only holds under the assumption that our galaxies can be considered point masses. To test whether this is justified, we wish to compare the point mass prediction to a more realistic lens model. This model includes the extended density profile for stars, cold gas, hot gas and satellites as described in Section 2.2. To determine the ESD profile of the extended galaxy model as predicted by EG, we cannot perform an analytical calculation as we did for the point mass approximation. Instead, we need to calculate the apparent DM distribution $M_{\rm D}^{\rm ext}(r)$ and the resulting $\Delta\Sigma_{\rm D}^{\rm ext}(R)$ numerically for the sum of all baryonic components. We start out with the total spherical density distribution $\rho_{\rm b}^{\rm ext}(r)$ of all components:

(27)

$$\rho_{\rm b}^{\rm ext}(r) = \rho_{\rm b}^{\rm S}(r) + \rho_{\rm b}^{\rm hot}(r) + \rho_{\rm b}^{\rm sat}(r),$$

where the respective contributions are: the Sérsic model for stars and cold gas, the β-profile for hot gas and the double power law for satellites. We numerically convert this to the enclosed mass distribution:

(28)

$$M_{\rm b}^{\rm ext}(r) = 4\pi \int_0^r \rho_{\rm b}^{\rm ext}(r')\, r'^2\, {\rm d}r'.$$

We rewrite equation (17) in order to translate $M_{\rm b}^{\rm ext}(r)$ to its corresponding distribution of apparent DM in EG:

(29)

$$M_{\rm D}^{\rm ext}(r) = C_{\rm D}\, r\, \sqrt{\frac{{\rm d}\left(M_{\rm b}^{\rm ext}(r)\, r\right)}{{\rm d}r}},$$

which is numerically converted into the apparent DM density distribution $\rho_{\rm D}^{\rm ext}(r)$ by substituting $M_{\rm D}^{\rm ext}(r)$ into equation (20).

The projected surface density $\Sigma_{\rm D}^{\rm ext}(R)$ from equation (21) is calculated by computing the value of $\rho_{\rm D}^{\rm ext}(R, z)$ in cylindrical coordinates for 10³ values of z and integrating over them. The last step towards computing the ESD profile is the subtraction of $\Sigma_{\rm D}^{\rm ext}(R)$ from the average surface density within R, as in equation (23), where $\langle\Sigma_{\rm D}^{\rm ext}\rangle(<R)$ is calculated by performing the cumulative sum over $2\pi R\, \Sigma_{\rm D}^{\rm ext}(R)$ and dividing the result by its cumulative area. In addition to the lensing signal from apparent DM, we need to include the baryonic ESD profile. We numerically compute $\Delta\Sigma_{\rm b}^{\rm ext}(R)$ from $\rho_{\rm b}^{\rm ext}(r)$ in the same way as we computed $\Delta\Sigma_{\rm D}^{\rm ext}(R)$ from $\rho_{\rm D}^{\rm ext}(r)$. This makes the total ESD predicted by EG for the extended mass distribution:

(30)

$$\Delta\Sigma_{\rm EG}^{\rm ext}(R) = \Delta\Sigma_{\rm b}^{\rm ext}(R) + \Delta\Sigma_{\rm D}^{\rm ext}(R).$$
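The numerical steps above (eqs 28, 29, 20, 21, 23) can be sketched end to end. The Hernquist profile below is a toy stand-in for the paper's Sérsic + β-profile + satellite sum of equation (27), and all grid sizes are illustrative:

```python
import numpy as np

# SI constants (rounded) and the C_D factor of eq. (19).
G, c = 6.674e-11, 2.998e8
H0 = 70e3 / 3.086e22
C_D = np.sqrt(c * H0 / (6.0 * G))
KPC = 3.086e19

def trapz(y, x):
    """Simple trapezoidal integral (avoids version-dependent numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Toy baryonic density: a Hernquist profile standing in for eq. (27).
Mtot, a = 1.989e40, 5.0 * KPC          # ~1e10 Msun, 5 kpc scale radius
r = np.logspace(np.log10(0.01 * KPC), np.log10(3e4 * KPC), 3000)
rho_b = Mtot * a / (2.0 * np.pi * r * (r + a)**3)

# eq. (28): enclosed baryonic mass by cumulative trapezoidal integration.
dM = 4.0 * np.pi * rho_b * r**2
Mb = np.concatenate(([0.0], np.cumsum(0.5 * (dM[1:] + dM[:-1]) * np.diff(r))))

# eq. (29): apparent DM mass, then eq. (20): its density.
MD = C_D * r * np.sqrt(np.gradient(Mb * r, r))
rho_D = np.gradient(MD, r) / (4.0 * np.pi * r**2)

# eq. (21): line-of-sight projection on a grid of projected radii R.
z = np.logspace(np.log10(0.01 * KPC), np.log10(3e4 * KPC), 1000)
Rg = np.logspace(np.log10(0.1 * KPC), np.log10(3e3 * KPC), 60)
Sigma_D = np.array([2.0 * trapz(np.interp(np.sqrt(Ri**2 + z**2), r, rho_D,
                                          right=0.0), z) for Ri in Rg])

# eq. (23): ESD = cumulative mean surface density within R minus Sigma_D(R).
ring = 2.0 * np.pi * Rg * Sigma_D
inner = np.pi * Rg[0]**2 * Sigma_D[0]  # approximate innermost-disc contribution
mean_in = (inner + np.cumsum(0.5 * (ring[1:] + ring[:-1]) * np.diff(Rg))) \
          / (np.pi * Rg[1:]**2)
dSigma_D = mean_in - Sigma_D[1:]
```

At radii far outside the baryonic scale radius, the projected profile approaches the point mass result of equation (22), which provides a sanity check on the numerics.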

When considering the resulting ESD profiles of the extended density models, we must keep in mind that they only represent reasonable estimates that contain uncertainties, for two different reasons.

(i) The extended baryonic density distribution of each component is approximated using reasonable assumptions on the model profiles and their corresponding input parameters. These assumptions are based on observations of the galaxies in our sample and of other galaxies, and also on simulations. Although we try to find suitable input parameters corresponding to the measured stellar mass of our galaxy samples, we cannot be certain that our modelled density distributions are completely correct.

(ii) We cannot model the extended density distribution for each individual GAMA galaxy, but have to assume one average profile per lens sample (based on the average stellar mass 〈M*〉 of that sample). Translating the extended baryonic mass model to the lensing profile of its corresponding apparent DM distribution (as explained above) is a highly non-linear operation. Therefore, we cannot be certain that the calculated lensing profile of an average density distribution is exactly the same as the lensing profile of all individual galaxies combined, although these would only differ greatly in the unlikely case that there is a large spread in the input parameters of the extended mass profiles within each stellar mass sub-sample.

For these two reasons, we cannot use the average profile as a reliable model for the apparent DM lensing signal of our galaxy samples. In the point mass approximation, we do have the measured input parameter (the stellar mass) for each individual galaxy and we can compute the apparent DM lensing profile for each individual galaxy. However, this approach can only be used when the contribution from hot gas and satellites is small. We therefore compare our estimate of the apparent DM lensing profile of the extended mass distribution to that of the point masses, to assess the error margins in our EG prediction.

The total ESD profile predicted for the extended density distribution, and that of each component,6 is shown in Fig. 2. We only show the profiles for the galaxies in our highest stellar mass bin, $10^{10.9} < M_{\ast} < 10^{11}\,h_{70}^{-2}\,{\rm M_{\odot}}$, but since the relations between the mass in hot gas, satellites and their galaxies are approximately linear, the profiles look similar for the other sub-samples. At larger scales, we find that the point mass approximation predicts a lower ESD than the extended mass profile. However, the difference between the ΔΣ(R) predictions of these two models is comparable to the median 1σ uncertainty on the ESD of our sample (which is illustrated by the grey band in Fig. 2). We conclude that, given the current statistical uncertainties in the lensing measurements, the point mass approximation is adequate for isolated centrals within the radial distance range used ($0.03 < R < 3\,h_{70}^{-1}\,{\rm Mpc}$).

RESULTS

We measure the ESD profiles (following Section 3) of our sample of isolated centrals, divided into four sub-samples of increasing stellar mass. The boundaries of the M*-bins, $\log(M_{\ast}/h_{70}^{-2}\,{\rm M_{\odot}}) = [8.5, 10.5, 10.8, 10.9, 11.0]$, are chosen to maintain an approximately equal signal-to-noise ratio in each bin. Fig. 3 shows the measured ESD profiles (with 1σ error bars) of galaxies in the four M*-bins. Together with these measurements, we show the ESD profile predicted by EG, under the assumption that our isolated centrals can be considered point masses at scales within $0.03 < R < 3\,h_{70}^{-1}\,{\rm Mpc}$. The masses Mg of the galaxies in each bin serve as input to equation (26), which provides the ESD profiles predicted by EG for each individual galaxy. The mean baryonic masses of the galaxies in each M*-bin can be found in Table 1. The ESDs of the galaxies in each sample are averaged to obtain the total ΔΣEG(R). It is important to note that the EG profiles shown do not contain any free parameters: both their slopes and amplitudes are fixed by the prediction of the EG theory (as stated in equation 17) and the measured masses Mg of the galaxies in each M*-bin. Although this is only a first attempt at testing the EG theory using lensing data, we can perform a very simple comparison of this prediction with both the lensing observations and the prediction from the standard ΛCDM model.

Figure 3.

The measured ESD profiles of isolated centrals with 1σ error bars (black), compared to those predicted by EG in the point source mass approximation (blue) and for the extended mass profile (blue, dashed). Note that none of these predictions are fitted to the data: they follow directly from the EG theory by substitution of the baryonic masses Mg of the galaxies in each sample (and, in the case of the extended mass profile, reasonable assumptions for the other baryonic mass distributions). The mean measured galaxy mass is indicated at the top of each panel. For comparison, we show the ESD profile of a simple NFW profile as predicted by GR (red), with the DM halo mass Mh fitted as a free parameter in each stellar mass bin.


Model comparison

In standard GGL studies performed within the ΛCDM framework, the measured ESD profile is modelled by two components: the baryonic mass of the galaxy and its surrounding DM halo. The baryonic component is often modelled as a point source with the mean baryonic mass of the galaxy sample, whereas the DM halo component usually contains several free parameters, such as the mass and concentration of the halo, which are evaluated by fitting a model to the observed ESD profiles. Motivated by N-body simulations, the DM halo is most frequently modelled by the NFW density profile (Navarro et al. 1995), very similar to the double power law in equation (5). This profile has two free parameters: the halo mass Mh that gives the amplitude and the scale radius rs that determines where the slope changes. Following previous GAMA-KiDS lensing papers (see e.g. Sifón et al. 2015; Viola et al. 2015; Brouwer et al. 2016; van Uitert et al. 2016), we define Mh as M200: the virial mass contained within r200, and we define the scale radius in terms of the concentration: c ≡ r200/rs. In these definitions, r200 is the radius that encloses a density of 200 times ρm(z), the average matter density of the Universe. Using the Duffy et al. (2008) mass–concentration relation, we can define c in terms of Mh. We translate the resulting density profile that depends exclusively on the DM halo mass, into the projected ESD distribution following the analytical description of Wright & Brainerd (2000). We combine this NFW model with a point mass that models the baryonic galaxy component (as in equation 25). Because our lens selection minimizes the contribution from neighbouring centrals (see Section 2.1), we do not need to add a component that fits the 2-halo term. We fit the NFW model to our measured ESD profiles using the emcee sampler (Foreman-Mackey et al. 2013) with 100 walkers performing 1000 steps. 
The model returns the median posterior values of Mh (including 16th and 84th percentile error margins) displayed in Table 2. The best-fitting ESD profile of the NFW model (including 16th and 84th percentile bands) is shown in Fig. 3.

Table 2.

For each stellar mass bin, this table shows the median values (including 16th and 84th percentile error margins) of the halo mass Mh obtained by the NFW fit, and the 'best' amplitude AB that minimizes the χ2 if the EG profile were multiplied by it (for the point mass and extended mass profile). The halo masses are displayed in units of $\log_{10}(M/h_{70}^{-2}\,{\rm M_{\odot}})$.

M★-bin      Mh                     AB                     AB^ext
8.5–10.5    12.15 (+0.10, −0.11)   1.36 (+0.21, −0.21)    1.21 (+0.19, −0.19)
10.5–10.8   12.45 (+0.10, −0.11)   1.32 (+0.19, −0.19)    1.20 (+0.18, −0.18)
10.8–10.9   12.43 (+0.17, −0.22)   1.07 (+0.27, −0.27)    0.94 (+0.25, −0.25)
10.9–11.0   12.62 (+0.13, −0.16)   1.33 (+0.25, −0.26)    1.20 (+0.23, −0.24)

For both the ΔΣEG predicted by EG (in the point source approximation) and the simple NFW fit ΔΣNFW, we can compare the ΔΣmod of the model with the observed ΔΣobs by calculating the χ2 value:

(31)

$$\chi^2 = (\Delta\Sigma_{\rm obs} - \Delta\Sigma_{\rm mod})^{\intercal} \cdot C^{-1} (\Delta\Sigma_{\rm obs} - \Delta\Sigma_{\rm mod}),$$

where C−1 is the inverse of the analytical covariance matrix (see Section 3). From this quantity, we can calculate the reduced χ2 statistic:7 $\chi_{\rm red}^2 = \chi^2 / N_{\rm DOF}$. It depends on the number of degrees of freedom (DOF) of the model: NDOF = Ndata − Nparam, where Ndata is the number of data points in the measurement and Nparam is the number of free parameters. Due to our choice of 10 R-bins and 4 M*-bins, we use 4 × 10 = 40 data points. In the case of EG, there are no free parameters, which means $N_{\rm DOF}^{\rm EG} = 40$. Our simple NFW model has one free parameter Mh for each M*-bin, resulting in $N_{\rm DOF}^{\rm NFW} = 40 - 4 = 36$. The resulting total $\chi_{\rm red}^2$ over the four M*-bins is 44.82/40 = 1.121 for EG and 33.58/36 = 0.933 for the NFW fit. In other words, both the NFW and EG predictions agree quite well with the measured ESD profile, where the NFW fit has a slightly better $\chi_{\rm red}^2$ value. Since the NFW profile is an empirical description of the surface density of virialized systems, the apparent correspondence of both the NFW fit and the EG prediction with the observed ESD essentially reflects that the predicted EG profile roughly follows that of virialized systems.

A more appropriate way to compare the two models, however, is in the Bayesian framework. We use a very simple Bayesian approach by computing the Bayesian Information Criterion (BIC; Schwarz 1978). This criterion, which is based on the maximum likelihood $L_{\rm max}$ of the data given a model, penalizes model complexity more strongly than the $\chi_{\rm red}^2$. This model comparison method is closely related to other information criteria such as the Akaike Information Criterion (AIC; Akaike 1973), which have become popular because they only require the likelihood at its maximum value, rather than in the whole parameter space, to perform a model comparison (see e.g. Liddle 2007). This approximation only holds when the posterior distribution is Gaussian and the data points are independent. Calculating the BIC, which is defined as:

(32)

$${\rm BIC} = -2\ln(L_{\rm max}) + N_{\rm param}\ln(N_{\rm data}),$$

allows us to consider the relative evidence of two competing models, where the one with the lowest BIC is preferred. The difference ΔBIC gives the significance of evidence against the higher BIC, ranging from '0–2: not worth more than a bare mention' to '>10: very strong' (Kass & Raftery 1995). In the Gaussian case, the likelihood can be rewritten as $-2\ln(L_{\rm max}) = \chi^2$. Using this method, we find that BICEG = 44.82 and BICNFW = 48.33. This shows that, when the number of free parameters is taken into account, the EG model performs at least as well as the NFW fit. However, in order to really distinguish between these two models, we need to reduce the uncertainties in our measurement, in our lens modelling, and in the assumptions related to EG theory and the halo model.
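The model comparison above can be reproduced from the quoted χ² values alone, using the Gaussian substitution −2 ln(Lmax) = χ² in equation (32):

```python
import numpy as np

def reduced_chi2(chi2, n_data, n_param):
    """Reduced chi-squared: chi^2 divided by N_data - N_param."""
    return chi2 / (n_data - n_param)

def bic(chi2, n_data, n_param):
    """Eq. (32), with the Gaussian substitution -2*ln(L_max) = chi^2."""
    return chi2 + n_param * np.log(n_data)

# Values quoted in the text: 40 data points (10 R-bins x 4 M*-bins),
# 0 free parameters for EG, 4 (one halo mass per bin) for the NFW fit.
chi2_eg, chi2_nfw = 44.82, 33.58
bic_eg = bic(chi2_eg, 40, 0)    # 44.82: no complexity penalty for EG
bic_nfw = bic(chi2_nfw, 40, 4)  # 33.58 + 4*ln(40) ~ 48.33
```

Since ΔBIC = BICNFW − BICEG ≈ 3.5, the evidence against the NFW fit sits in the modest range of the Kass & Raftery (1995) scale, consistent with the text's conclusion that EG performs at least as well once model complexity is penalized.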

In order to further assess the quality of the EG prediction across the M*-range, we determine the ‘best’ amplitude AB and index nB: the factors that minimize the χ2 statistic when we fit:

(33)

$$\Delta\Sigma_{\rm EG}(A_{\rm B}, n_{\rm B}, R) = A_{\rm B}\,\frac{C_{\rm D}\sqrt{M_{\rm b}}}{4}\left(\frac{R}{h_{70}^{-1}\,{\rm kpc}}\right)^{-n_{\rm B}}.$$

We find that the slope of the EG prediction is very close to the observed slope of the ESD profiles, with a mean value of $\langle n_{\rm B}\rangle = 1.01^{+0.02}_{-0.03}$. In order to obtain better constraints on AB, we set nB = 1. The values of AB (with 1σ errors) for the point mass are shown in Table 2. We find the amplitude of the point mass prediction to be consistently lower than the measurement. This is expected, since the point mass approximation only takes the mass contribution of the central galaxy into account, and not that of extended components like hot gas and satellites (described in Section 2.2). However, the ESD of the extended profile (which is shown in Fig. 3 for comparison) does not completely solve this problem. When we determine the best amplitude for the extended mass distribution by scaling its predicted ESD, we find that the values of $A_{\rm B}^{\rm ext}$ are still larger than 1, but less so than for the point mass (at a level of ∼1σ, see Table 2). Nevertheless, the comparison of the extended ESD with the measured lensing profile yields a slightly higher reduced χ2: 45.50/40 = 1.138. However, accurately predicting the baryonic and apparent DM contributions of the extended density distribution is challenging (see Section 4.3). Therefore, the extended ESD profile can primarily be used as an indication of the uncertainty in the lens model.

CONCLUSION

Using the $\sim 180\,{\rm deg}^2$ overlap of the KiDS and GAMA surveys, we present the first test of the theory of EG proposed in Verlinde (2016) using weak gravitational lensing. In this theory, there exists an additional component to the gravitational potential of a baryonic mass, which can be described as an apparent DM distribution. Because the prediction of the apparent DM profile as a function of baryonic mass is currently only valid for static, spherically symmetric and isolated mass distributions, we select 33,613 central galaxies that dominate their surrounding mass distribution, and have no other centrals within the maximum radius of our measurement ($R_{\rm max} = 3\,h_{70}^{-1}\,{\rm Mpc}$). We model the baryonic matter distribution of our galaxies using two different assumptions for their mass distribution: the point mass approximation and the extended mass profile. In the point mass approximation we assume that the bulk of the galaxy's mass resides within the minimum radius of our measurement ($R_{\rm min} = 30\,h_{70}^{-1}\,{\rm kpc}$), and model the lens as a point source with the mass of the stars and cold gas of the galaxy. For the extended distribution, we not only model the stars and cold gas component as a Sérsic profile, but also try to make reasonable estimates of the extended hot gas and satellite distributions. We compute the lensing profiles of both models and find that, given the current statistical uncertainties in our lensing measurements, both models give an adequate description of isolated centrals. In this regime (where the mass distribution can be approximated by a point mass) the lensing profile of apparent DM in EG is the same as that of the excess gravity in MOND,8 for the specific value a0 = cH0/6.
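The quoted correspondence can be checked numerically. Below is a minimal sketch (our own illustration, not code from the paper), assuming H0 = 70 km s−1 Mpc−1 as used throughout this work:

```python
# Numerical check of the EG/MOND correspondence quoted above: in the
# point-mass regime, EG's apparent DM reproduces MOND's excess gravity
# for the specific acceleration scale a0 = c*H0/6.
c = 2.998e8          # speed of light [m/s]
Mpc = 3.086e22       # one megaparsec [m]
H0 = 70e3 / Mpc      # H0 = 70 km/s/Mpc, converted to [1/s]

a0_EG = c * H0 / 6   # EG's acceleration scale [m/s^2]
print(f"a0 = cH0/6 = {a0_EG:.2e} m/s^2")  # ~1.1e-10, close to MOND's ~1.2e-10
```

The resulting value lies within about 10 per cent of the empirical MOND scale, which is why the two predictions coincide in this regime.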

When computing the observed and predicted ESD profiles, we need to make several assumptions concerning the EG theory. The first is that, because EG gives an effective description of GR in empty space, the effect of the gravitational potential on light rays remains unchanged. This allows us to use the regular gravitational lensing formalism to measure the ESD profiles of apparent DM in EG. Our second assumption concerns the background cosmology used. Because EG is only developed for a present-day de Sitter space, we need to assume that the evolution of cosmological distances is approximately equal to that in ΛCDM, with the cosmological parameters as measured by the Planck Collaboration XIII (2016). For the relatively low redshifts used in this work (0.2 < zs < 0.9), this is a reasonable assumption. The third assumption is the value of H0 that we use to calculate the apparent DM profile from the baryonic mass distribution. In an evolving universe, the Hubble parameter H(z) is expected to change as a function of the redshift z. This evolution is not yet implemented in EG; instead, it uses the approximation that we live in a dark-energy-dominated universe, where H(z) resembles a constant. We follow Verlinde (2016) by assuming a constant value, in our case H0 = 70 km s−1 Mpc−1, which is reasonable at the mean lens redshift 〈zl〉 ∼ 0.2. However, in order to obtain a more accurate prediction for the cosmology and the lensing signal in the EG framework, all these issues need to be resolved in the future.

Using the mentioned assumptions, we measure the ESD profiles of isolated centrals in four different stellar mass bins, and compare these with the ESD profiles predicted by EG. They exhibit a remarkable agreement, especially considering that the predictions contain no free parameters: both the slopes and the amplitudes within the four M*-bins are completely fixed by the EG theory and the measured baryonic masses Mg of the galaxies. In order to perform a very simple comparison with ΛCDM, we fit the ESD profile of a simple NFW distribution (combined with a baryonic point mass) to the measured lensing profiles. This NFW model contains one free parameter, the halo mass Mh, for each stellar mass bin. We compare the reduced χ2 of the NFW fit (which has 4 free parameters in total) with that of the prediction from EG (which has no free parameters). Although the NFW fit has fewer degrees of freedom (which slightly penalizes $\chi^2_{\rm red}$), the reduced χ2 of this model is slightly lower than that of EG: in the point mass approximation, $\chi^2_{\rm red,NFW} = 0.933$ and $\chi^2_{\rm red,EG} = 1.121$. For both theories, the value of the reduced χ2 is well within reasonable limits, especially considering the very simple implementation of both models. The fact that our observed density profiles resemble both NFW profiles and the prediction from EG suggests that this theory predicts a phenomenology very similar to that of a virialized DM halo. Using the Bayesian Information Criterion, we find that BICEG = 44.82 and BICNFW = 48.33. These BIC values imply that, taking the number of data points and free parameters into account, the EG prediction describes our data at least as well as the NFW fit. However, a thorough and fair comparison between ΛCDM and EG would require a more sophisticated implementation of both theories and a full Bayesian analysis that properly takes the free parameters and priors of the NFW model into account. None the less, provided that the model uncertainties are also addressed, future data should be able to distinguish between the two theories.
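The quoted model-selection numbers can be reproduced approximately from the reduced χ2 values, assuming the common convention BIC = χ2 + k ln N (Schwarz 1978). The sketch below is our own; the total of N = 40 data points is inferred from the quoted degrees of freedom, not stated explicitly here:

```python
import math

# Reproduce the quoted BIC comparison from the reduced chi2 values,
# assuming BIC = chi2 + k*ln(N) with N = 40 ESD data points in total.
N = 40                                     # total data points (assumed)
k_EG, k_NFW = 0, 4                         # free parameters of each model

chi2_EG = 1.121 * (N - k_EG)               # chi2 recovered from 40 dof
chi2_NFW = 0.933 * (N - k_NFW)             # chi2 recovered from 36 dof

BIC_EG = chi2_EG + k_EG * math.log(N)      # ~44.8 (quoted: 44.82)
BIC_NFW = chi2_NFW + k_NFW * math.log(N)   # ~48.3 (quoted: 48.33)
print(f"BIC_EG = {BIC_EG:.2f}, BIC_NFW = {BIC_NFW:.2f}")
```

The k ln N penalty on the four NFW halo masses is what tips the comparison towards EG despite its slightly higher reduced χ2.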

We propose that this analysis should not only be carried out for this specific case, but on multiple scales and using a variety of different probes. From comparing the predictions of EG to observations of isolated centrals, we need to expand our studies to the scales of larger galaxy groups, clusters and eventually to cosmological scales: the cosmic web, BAOs and the CMB power spectrum. Furthermore, there are various challenges for EG, especially concerning observations of dynamical systems such as the Bullet Cluster (Randall et al. 2008) where the dominant mass component appears to be separate from the dominant baryonic component. There is also ongoing research to assess whether there exists an increasing mass-to-light ratio for galaxies of later type (Martinsson et al. 2013), which might challenge EG if confirmed. We conclude that although this first result is quite remarkable, it is only a first step. There is still a long way to go, for both the theoretical groundwork and observational tests, before EG can be considered a fully developed and solidly tested theory. In this first GGL study, however, EG appears to be a good parameter-free description of our observations.

M. Brouwer and M. Visser would like to thank Erik Verlinde for helpful clarifications and discussions regarding his EG theory. We also thank the anonymous referee for the useful comments that helped to improve this paper.

The work of M. Visser was supported by the European Research Council (ERC) Advanced Grant 268088-EMERGRAV, and is part of the Delta Institute for Theoretical Physics (ITP) consortium, a program of the Netherlands Organisation for Scientific Research (NWO). M. Bilicki, H. Hoekstra and C. Sifon acknowledge support from the ERC under FP7 grant number 279396. K. Kuijken is supported by the Alexander von Humboldt Foundation. M. Bilicki acknowledges support from the NWO through grant number 614.001.103. H. Hildebrandt is supported by an Emmy Noether grant (No. Hi 1495/2-1) of the Deutsche Forschungsgemeinschaft. R. Nakajima acknowledges support from the German Federal Ministry for Economic Affairs and Energy (BMWi) provided via DLR under project no. 50QE1103. Dominik Klaes is supported by the Deutsche Forschungsgemeinschaft in the framework of the TR33 ‘The Dark Universe’.

This research is based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 177.A-3016, 177.A-3017 and 177.A-3018, and on data products produced by Target OmegaCEN, INAF-OACN, INAF-OAPD and the KiDS production team, on behalf of the KiDS consortium. OmegaCEN and the KiDS production team acknowledge support by NOVA and NWO-M grants. Members of INAF-OAPD and INAF-OACN also acknowledge the support from the Department of Physics and Astronomy of the University of Padova, and of the Department of Physics of Univ. Federico II (Naples).

GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programs including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO and the participating institutions. The GAMA website is www.gama-survey.org.

This work has made use of python (www.python.org), including the packages numpy (www.numpy.org), scipy (www.scipy.org) and ipython (Pérez & Granger 2007). Plots have been produced with matplotlib (Hunter 2007).

All authors contributed to the development and writing of this paper. The authorship list is given in three groups: the lead authors (M. Brouwer and M. Visser), followed by two alphabetical groups. The first alphabetical group includes those who are key contributors to both the scientific analysis and the data products. The second group covers those who have either made a significant contribution to the data products or to the scientific analysis.

1

These are all galaxies with redshift quality nQ ≥ 2. However, the recommended redshift quality of GAMA (that we use in our analysis) is nQ ≥ 3.

2

Although this double power law is mathematically equivalent to the Navarro–Frenk–White (NFW) profile (Navarro, Frenk & White 1995) that describes virialized DM haloes, it is, in our case, not related to any (apparent) DM distribution. It is merely an empirical fit to the measured distribution of satellite galaxies around their central galaxy.

3

Viola et al. (2015) used the following definition of the reduced Hubble constant: h ≡ h100 = H0/(100 km s−1 Mpc−1).

4

Although Verlinde (2016) derives his relations for an arbitrary number of dimensions d, for the derivation in this paper we restrict ourselves to four space–time dimensions.

5

Note that the ESD of the apparent DM distribution, $\Delta\Sigma_D(R) \propto \sqrt{H_0 M_b}/R \propto \sqrt{h}$, is explicitly dependent on the Hubble constant, which means that an incorrectly measured value of H0 would affect our conclusions.
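For a point baryonic mass this scaling can be made explicit. Under Verlinde's relation the apparent DM mass grows linearly with radius, $M_D(r) = \sqrt{cH_0 M_b/6G}\,r$; the short derivation below is our own sketch of the footnote's proportionality in that point-mass limit:

```latex
M_D(r)=\sqrt{\frac{cH_0M_b}{6G}}\,r
\;\Longrightarrow\;
\rho_D(r)=\frac{1}{4\pi r^2}\sqrt{\frac{cH_0M_b}{6G}},
\qquad
\Sigma_D(R)=\int_{-\infty}^{\infty}\rho_D\,\mathrm{d}z
=\frac{1}{4R}\sqrt{\frac{cH_0M_b}{6G}},
```
```latex
\Delta\Sigma_D(R)=\bar{\Sigma}_D(<R)-\Sigma_D(R)
=\frac{1}{2R}\sqrt{\frac{cH_0M_b}{6G}}-\frac{1}{4R}\sqrt{\frac{cH_0M_b}{6G}}
=\frac{1}{4R}\sqrt{\frac{cH_0M_b}{6G}}
\;\propto\;\frac{\sqrt{H_0M_b}}{R}.
```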

6

Note that due to the non-linear nature of the calculation of the apparent DM distribution, the total ESD profile of the extended mass distribution is not the sum of the components shown in Fig. 2.

7

While the reduced χ2 statistic is shown to be a sub-optimal goodness-of-fit estimator (see e.g. Andrae, Schulze-Hartung & Melchior 2010), it is a widely used criterion and we therefore discuss it here for completeness.

8

After this paper was accepted for publication, it was pointed out to us that Milgrom (2013) showed that galaxy–galaxy lensing measurements from the Canada–France–Hawaii Telescope Legacy Survey (Brimioulle et al. 2013) are consistent with predictions from relativistic extensions of MOND up to a radius of $140\,h_{72}^{-1}\,{\rm kpc}$.

REFERENCES

Abazajian K. N. et al., 2009, ApJS, 182, 543
Akaike H., 1973, Biometrika, 60, 255
Andrae R., Schulze-Hartung T., Melchior P., 2010, preprint (arXiv:1012.3754)
Bartelmann M., Schneider P., 2001, Phys. Rep., 340, 291
Bertin E., Arnouts S., 1996, A&AS, 117, 393
Betoule M. et al., 2014, A&A, 568, A22
Blake C. et al., 2011, MNRAS, 415, 2892
Boselli A. et al., 2010, PASP, 122, 261
Boselli A., Cortese L., Boquien M., Boissier S., Catinella B., Lagos C., Saintonge A., 2014, A&A, 564, A66
Bosma A., 1981, AJ, 86, 1791
Bower R. G., Benson A. J., Malbon R., Helly J. C., Frenk C. S., Baugh C. M., Cole S., Lacey C. G., 2006, MNRAS, 370, 645
Brimioulle F., Seitz S., Lerchster M., Bender R., Snigula J., 2013, MNRAS, 432, 1046
Brouwer M. M. et al., 2016, MNRAS, 462, 4451
Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
Capaccioli M., Schipani P., 2011, The Messenger, 146, 2
Catinella B. et al., 2013, MNRAS, 436, 34
Cavaliere A., Fusco-Femiano R., 1976, A&A, 49, 137
Crocker A. F., Bureau M., Young L. M., Combes F., 2011, MNRAS, 410, 1197
Davis T. A. et al., 2013, MNRAS, 429, 534
de Jong J. T. A., Verdoes Kleijn G. A., Kuijken K. H., Valentijn E. A., 2013, Exp. Astron., 35, 25
de Jong J. T. A. et al., 2015, A&A, 582, A62
Driver S. P. et al., 2011, MNRAS, 413, 971
Duffy A. R., Schaye J., Kay S. T., Dalla Vecchia C., 2008, MNRAS, 390, L64
Edge A., Sutherland W., Kuijken K., Driver S., McMahon R., Eales S., Emerson J. P., 2013, The Messenger, 154, 32
Eisenstein D. J. et al., 2005, ApJ, 633, 560
Erben T. et al., 2013, MNRAS, 433, 2545
Faulkner T., Guica M., Hartman T., Myers R. C., Van Raamsdonk M., 2014, J. High Energy Phys., 3, 51
Fedeli C., Semboloni E., Velliscig M., Van Daalen M., Schaye J., Hoekstra H., 2014, J. Cosmol. Astropart. Phys., 8, 028
Fischer P. et al., 2000, AJ, 120, 1198
Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
Fenech Conti I., Herbonnet R., Hoekstra H., Merten J., Miller L., Viola M., 2016, MNRAS, preprint (arXiv:1606.05337)
Hildebrandt H. et al., 2017, MNRAS, 465, 1454
Hoekstra H., Yee H. K. C., Gladders M. D., 2004, ApJ, 606, 67
Hoekstra H., Herbonnet R., Muzzin A., Babul A., Mahdavi A., Viola M., Cacciato M., 2015, MNRAS, 449, 685
Hunter J. D., 2007, Comput. Sci. Eng., 9, 90
Jacobson T., 1995, Phys. Rev. Lett., 75, 1260
Jacobson T., 2016, Phys. Rev. Lett., 116, 201101
Kahn F. D., Woltjer L., 1959, ApJ, 130, 705
Kass R. E., Raftery A. E., 1995, J. Am. Stat. Assoc., 90, 773
Kelvin L. S. et al., 2012, MNRAS, 421, 1007
Kessler R. et al., 2009, ApJS, 185, 32
Kuijken K. et al., 2011, The Messenger, 146, 8
Kuijken K. et al., 2015, MNRAS, 454, 3500
Liddle A. R., 2007, MNRAS, 377, L74
Liske J. et al., 2015, MNRAS, 452, 2087
McCarthy I. G. et al., 2010, MNRAS, 406, 822
McGaugh S. S., Lelli F., Schombert J. M., 2016, Phys. Rev. Lett., 117, 201101
Mandelbaum R., 2015, in Cappellari M., Courteau S., eds, Proc. IAU Symp. 311, Galaxy Masses as Constraints of Formation Models. Kluwer, Dordrecht, p. 86
Mandelbaum R., Seljak U., Kauffmann G., Hirata C. M., Brinkmann J., 2006, MNRAS, 368, 715
Martinsson T. P. K., Verheijen M. A. W., Westfall K. B., Bershady M. A., Andersen D. R., Swaters R. A., 2013, A&A, 557, A131
Mentuch Cooper E. et al., 2012, ApJ, 755, 165
Miller L., Kitching T., Heymans C., Heavens A., van Waerbeke L., 2007, MNRAS, 382, 315
Miller L. et al., 2013, MNRAS, 429, 2858
Milgrom M., 1983, ApJ, 270, 371
Milgrom M., 2013, Phys. Rev. Lett., 111, 041105
Morokuma-Matsui K., Baba J., 2015, MNRAS, 454, 3792
Mulchaey J. S., 2000, ARA&A, 38, 289
Mulchaey J. S., Davis D. S., Mushotzky R. F., Burstein D., 1996, ApJ, 456, 80
Navarro J. F., Frenk C. S., White S. D., 1995, MNRAS, 275, 56
Padmanabhan T., 2010, Rep. Prog. Phys., 73, 046901
Peebles P. J., Yu J., 1970, ApJ, 162, 815
Pérez F., Granger B. E., 2007, Comput. Sci. Eng., 9, 21
Planck Collaboration XIII, 2016, A&A, 594, A13
Pohlen M. et al., 2010, A&A, 518, L72
Randall S. W., Markevitch M., Clowe D., Gonzalez A. H., Bradač M., 2008, ApJ, 679, 1173
Riess A. G., Press W. H., Kirshner R. P., 1996, ApJ, 473, 88
Rines K., Geller M. J., Diaferio A., Kurtz M. J., 2013, ApJ, 767, 15
Robotham A. S. et al., 2011, MNRAS, 416, 2640
Rubin V. C., 1983, Sci. Am., 248, 96
Saintonge A. et al., 2011, MNRAS, 415, 32
Schaye J. et al., 2010, MNRAS, 402, 1536
Schneider P., Kochanek C., Wambsganss J., 2006, Gravitational Lensing: Strong, Weak and Micro: Saas-Fee Advanced Course 33, Vol. 33. Springer Science & Business Media, Dordrecht
Schwarz G., 1978, Ann. Stat., 6, 461
Sérsic J. L., 1963, Bol. Asoc. Astron. La Plata Argentina, 6, 41
Sérsic J. L., 1968, Atlas de Galaxias Australes. Observatorio Astronomico, Cordoba, Argentina
Sifón C. et al., 2015, MNRAS, 454, 3938
Spergel D. N. et al., 2003, ApJS, 148, 175
Springel V. et al., 2005, Nature, 435, 629
Sun M., Voit G. M., Donahue M., Jones C., Forman W., Vikhlinin A., 2009, ApJ, 693, 1142
Taylor E. N. et al., 2011, MNRAS, 418, 1587
van Uitert E. et al., 2016, MNRAS, 459, 3251
Velander M. et al., 2014, MNRAS, 437, 2111
Verlinde E., 2011, J. High Energy Phys., 4, 29
Verlinde E. P., 2016, preprint (arXiv:1611.02269)
Viola M. et al., 2015, MNRAS, 452, 3529
von der Linden A. et al., 2014, MNRAS, 439, 2
White S. D., Rees M., 1978, MNRAS, 183, 341
Wright C. O., Brainerd T. G., 2000, ApJ, 534, 34
Zwicky F., 1937, ApJ, 86, 217

© 2016 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society

https://academic.oup.com/mnras/article/466/3/2547/2661916/First-test-of-Verlinde-s-theory-of-emergent

JHEP04(2011)029

Published for SISSA by Springer

Received: October 22, 2010

Accepted: March 19, 2011

Published: April 7, 2011

On the origin of gravity and the laws of Newton

Erik Verlinde

Institute for Theoretical Physics, University of Amsterdam,

Valckenierstraat 65, 1018 XE, Amsterdam, The Netherlands

E-mail: e.p.verlinde@uva.nl

Abstract: Starting from first principles and general assumptions we present a heuristic argument that shows that Newton's law of gravitation naturally arises in a theory in which space emerges through a holographic scenario. Gravity is identified with an entropic force caused by changes in the information associated with the positions of material bodies. A relativistic generalization of the presented arguments directly leads to the Einstein equations. When space is emergent even Newton's law of inertia needs to be explained. The equivalence principle suggests that it is actually the law of inertia whose origin is entropic.

Keywords: Gauge-gravity correspondence, Models of Quantum Gravity

ArXiv ePrint: 1001.0785

Open Access doi:10.1007/JHEP04(2011)029

Contents

1 Introduction
2 Entropic force
3 Emergence of the laws of Newton
3.1 Force and inertia
3.2 Newton's law of gravity
3.3 Naturalness and robustness of the derivation
3.4 Inertia and the Newton potential
4 Emergent gravity for general matter distributions
4.1 The Poisson equation for general matter distributions
4.2 The gravitational force for arbitrary particle locations
5 The equivalence principle and the Einstein equations
5.1 The law of inertia and the equivalence principle
5.2 Towards a derivation of the Einstein equations
5.3 The force on a collection of particles at arbitrary locations
6 Conclusion and discussion
6.1 The end of gravity as a fundamental force
6.2 Implications for string theory and relation with AdS/CFT
6.3 Black hole horizons revisited
6.4 Final comments

1 Introduction

Of all forces of Nature gravity is clearly the most universal. Gravity influences and is influenced by everything that carries an energy, and is intimately connected with the structure of space-time. The universal nature of gravity is also demonstrated by the fact that its basic equations closely resemble the laws of thermodynamics and hydrodynamics.1 So far, there has not been a clear explanation for this resemblance.

Gravity dominates at large distances, but is very weak at small scales. In fact, its basic laws have only been tested up to distances of the order of a millimeter. Gravity is also considerably harder to combine with quantum mechanics than all the other forces. The quest for unification of gravity with these other forces of Nature, at a microscopic level, may therefore not be the right approach. It is known to lead to many problems, paradoxes and puzzles. String theory has to a certain extent solved some of these, but not all. And we still have to figure out what the string theoretic solution teaches us.

Many physicists believe that gravity and space-time geometry are emergent. Also string theory and its related developments have given several indications in this direction. Particularly important clues come from the AdS/CFT, or more generally, the open/closed string correspondence. This correspondence leads to a duality between theories that contain gravity and those that don't. It therefore provides evidence for the fact that gravity can emerge from a microscopic description that doesn't know about its existence.

1 An incomplete list of references includes [1–7].

The universality of gravity suggests that its emergence should be understood from general principles that are independent of the specific details of the underlying microscopic theory. In this paper we will argue that the central notion needed to derive gravity is information. More precisely, it is the amount of information associated with matter and its location, in whatever form the microscopic theory likes to have it, measured in terms of entropy. Changes in this entropy when matter is displaced lead to a reaction force. Our aim is to show that this force, given certain reasonable assumptions, takes the form of gravity.

The most important assumption will be that the information associated with a part of space obeys the holographic principle [8, 9]. The strongest supporting evidence for the holographic principle comes from black hole physics [1, 3] and the AdS/CFT correspondence [10]. These theoretical developments indicate that at least part of the microscopic degrees of freedom can be represented holographically, either on the boundary of space-time or on horizons.

The concept of holography appears to be much more general, however. For instance, in the AdS/CFT correspondence one can move the boundary inwards by exploiting a holographic version of the renormalization group. Similarly, in black hole physics there exist ideas that the information can be stored on stretched horizons. Furthermore, by thinking about accelerated observers, one can in principle locate holographic screens anywhere in space. In all these cases the emergence of the holographic direction is accompanied by redshifts, and related to a coarse graining procedure. If all these combined ideas are correct, there should exist a general framework that describes how space emerges together with gravity.

Usually holography is studied in relativistic contexts. However, the gravitational force is also present in the non-relativistic world. The origin of gravity, whatever it is, should therefore also naturally explain why this force appears the way it does, and obeys Newton's law of gravitation. In fact, when space is emergent, the other laws of Newton also have to be re-derived, because standard concepts like position, velocity, acceleration, mass and force are far from obvious. Hence, in such a setting the laws of mechanics have to appear alongside space itself. Even a basic concept like inertia is not given, and needs to be explained again.

In this paper we present a holographic scenario for the emergence of space and address the origins of gravity and inertia, which are connected by the equivalence principle. Starting from first principles, using only space-independent concepts like energy, entropy and temperature, it is shown that Newton's laws appear naturally and practically unavoidably.

A crucial ingredient is that only a finite number of degrees of freedom are associated with a given spatial volume, as dictated by the holographic principle. The energy that is equivalent to the matter is distributed evenly over the degrees of freedom, and thus leads to a temperature. The product of the temperature and the change in entropy due to the displacement of matter is shown to be equal to the work done by the gravitational force. In this way we find that Newton's law of gravity emerges in a surprisingly simple fashion.

The holographic principle has not been easy to extract from the laws of Newton and Einstein, and is deeply hidden within them. Conversely, starting from holography, we find that these well known laws come out. By reversing the logic that led people from the laws of gravity to holography, one obtains a simpler picture of what gravity is.

The presented ideas are consistent with our knowledge of string theory, but if correct they should have important implications for this theory as well. In particular, the description of gravity as being due to the exchange of closed strings has to be explained from an emergent scenario.

We start in section 2 with an exposition of the concept of entropic force. Section 3 illustrates the main heuristic argument in a simple non-relativistic setting. Its generalization to arbitrary matter distributions is explained in section 4. In section 5 we extend these results to the relativistic case, and derive the Einstein equations. The conclusions are presented in section 6.

2 Entropic force

An entropic force is an effective macroscopic force that originates in a system with many degrees of freedom through the statistical tendency to increase its entropy. The force equation is expressed in terms of entropy differences, and is independent of the details of the microscopic dynamics. In particular, there is no fundamental field associated with an entropic force. Entropic forces occur typically in macroscopic systems such as in colloid physics or biophysics. Big colloid molecules suspended in a thermal environment of smaller particles, for instance, experience entropic forces due to excluded volume effects. Osmosis is another phenomenon driven by an entropic force.

Perhaps the best known example is the elasticity of a polymer. A single polymer molecule can be modeled by joining together many monomers of fixed length, where each monomer can freely rotate around the points of attachment and direct itself in any spatial direction. Each of these configurations has the same energy. When the polymer molecule is immersed into a heat bath, it likes to put itself into a randomly coiled configuration, since these are entropically favored. There are many more such configurations when the molecule is short compared to when it is stretched into an extended configuration. The statistical tendency to return to a maximal entropy state translates into a macroscopic force, in this case the elastic force.

By using tweezers one can pull the endpoints of the polymer apart, and bring it out of its equilibrium configuration by an external force F, as shown in figure 1. For definiteness, we keep one end fixed, say at the origin, and move the other endpoint along the x-axis.

Figure 1. A freely jointed polymer is immersed in a heat bath with temperature T and pulled out of its equilibrium state by an external force F. The entropic force points the other way.

The entropy equals

$$S(E, x) = k_B \log \Omega(E, x) \qquad (2.1)$$

where $k_B$ is Boltzmann's constant and $\Omega(E, x)$ denotes the volume of the configuration space for the entire system as a function of the total energy E of the heat bath and the position x of the second endpoint. The x dependence is entirely a configurational effect: there is no microscopic contribution to the energy E that depends on x.

In the canonical ensemble the force F is introduced in the partition function2

$$Z(T, F) = \int \mathrm{d}E\, \mathrm{d}x\; \Omega(E, x)\, e^{-(E+Fx)/k_B T} \qquad (2.2)$$

as an external variable dual to the length x of the polymer. The force F required to keep the polymer at a fixed length x for a given temperature T can be deduced from the saddle point equations

$$\frac{1}{T} = \frac{\partial S}{\partial E}, \qquad \frac{F}{T} = \frac{\partial S}{\partial x}. \qquad (2.3)$$

By the balance of forces, the external force F should be equal to the entropic force, which tries to restore the polymer to its equilibrium position. An entropic force is recognized by the facts that it points in the direction of increasing entropy and that it is proportional to the temperature. For the polymer the force can be shown to obey Hooke's law:

$$F_{\rm polymer} \sim -\mathrm{const} \cdot k_B T\, \langle x \rangle.$$
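This Hooke's-law behaviour can be verified numerically in the simplest case. Below is a toy sketch (our own illustration, not code from the paper): a one-dimensional freely jointed chain of N unit-length monomers, for which the number of configurations at end-to-end distance x is a binomial coefficient, so that S(x) = kB ln Ω(x) and the entropic force is F = T ∂S/∂x (with kB = 1):

```python
import math

# 1D freely jointed chain of N unit monomers: Omega(x) = C(N, (N+x)/2),
# so S(x) = ln Omega(x) and the entropic force F = T*dS/dx reduces to
# Hooke's law F ≈ -T*x/N for |x| << N.
N = 1000

def entropy(x):
    # x must have the same parity as N
    return math.log(math.comb(N, (N + x) // 2))

def entropic_force(x, T=1.0):
    # central finite difference over the smallest allowed step (2 monomers)
    return T * (entropy(x + 2) - entropy(x - 2)) / 4

x = 100
print(entropic_force(x))  # close to -x/N = -0.1: the stretched chain pulls back
```

The force is negative (it opposes the stretching) and grows linearly with x, which is exactly the elastic behaviour quoted above.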

This example makes clear that at a macroscopic level an entropic force can be conservative, at least when the temperature is kept constant. The corresponding potential has no microscopic meaning, however, and is emergent.

It is interesting to study the energy and entropy balance when one gradually lets the polymer return to its equilibrium position, while allowing the force to perform work on an external system. By energy conservation, this work must be equal to the energy that has been extracted from the heat bath. The entropy of the heat bath will hence be reduced. For an infinite heat bath this would be by the same amount as the increase in entropy of the polymer. Hence, in this situation the total entropy will remain constant.

2 We would like to thank B. Nienhuis and M. Shigemori for enlightening discussions on the following part.


This can be studied in more detail in the micro-canonical ensemble, because it takes the total energy into account, including that of the heat bath. To determine the entropic force, one again introduces an external force F and examines the balance of forces. Specifically, one considers the micro-canonical ensemble given by $\Omega(E + Fx, x)$, and imposes that the entropy is extremal. This gives

$$\frac{\mathrm{d}}{\mathrm{d}x} S(E + Fx, x) = 0. \qquad (2.4)$$

One easily verifies that this leads to the same equations (2.3). However, it illustrates that micro-canonically the temperature is in general position dependent, and the force also energy dependent. The term Fx can be viewed as the energy that was put into the system by pulling the polymer out of its equilibrium position. This equation tells us therefore that the total energy is reduced when the polymer slowly returns to its equilibrium position, but that the entropy stays the same. In this sense the force acts adiabatically.

Note added. In the following, our discussion of the polymer is used as a metaphor to illustrate that changes in the amount of information, measured by entropy, can lead to a force. Our aim is to argue that gravity is also an entropic force in this sense. Indeed, we will see that the same kind of reasoning applies to gravity, with some slight modifications. In particular, the notions of entropy and temperature used in this paper should be interpreted as a way of characterizing the amount of information associated with the microscopic degrees of freedom and the energy cost that is associated with changes in this amount of information. Specifically, one may think about this amount of information as the volume of the phase space occupied by the microscopic states.

Another example of an emergent adiabatic force is the reaction force of a fast system when it is influenced by a slowly varying system. In the Born-Oppenheimer approximation this force can be obtained by requiring that the phase space volume of all states with energies below the given state remains constant. This leads to an expression exactly similar to the entropic force in a thermodynamic situation. In this paper we will merge this concept with that of an entropic force. A more detailed explanation of the connection between adiabatic reaction forces and entropic forces, and its relevance for gravity, will be given in a future publication [16].

3 Emergence of the laws of Newton

Space is in the first place a device introduced to describe the positions and movements

of particles. Space is therefore literally a storage space for information associated with

matter. Given that the maximal allowed information is finite for each part of space, it is

impossible to localize a particle with infinite precision at a point of a continuum space. In

fact, points and continuous coordinates should eventually arise only as derived concepts.

One could assume that information is stored in points of a discretized space (like in a lattice

model). But if all the associated information were stored without duplication, one would not obtain a holographic description, and in fact, one would not recover gravity.


JHEP04(2011)029

Thus we are going to assume that information is stored on surfaces, or screens. Screens

separate points, and in this way are the natural place to store information about particles

that move from one side to the other. Thus we imagine that this information about the

locations of particles is stored in discrete bits on the screens. The dynamics on each screen

is given by some unknown rules, which can be thought of as a way of processing the

information that is stored on it. Hence, it does not have to be given by a local field theory,

or anything familiar. The microscopic details are irrelevant for us.

Let us also assume that (like in AdS/CFT) there is one special direction corresponding

to scale or a coarse graining variable of the microscopic theory. This is the direction in which

space is emergent. So the screens that store the information are like stretched horizons.

On one side we imagine that we are using the variables associated with space and time. On

the other side of the screen we describe everything still in terms of the microscopic data

from which space is derived. We will assume that the microscopic theory has a well defined

notion of time, and its dynamics is time translation invariant. This allows one to define

energy, and by employing techniques of statistical physics, temperature. These will be the

basic ingredients together with the entropy associated with the amount of information.

3.1 Force and inertia

Our starting assumption is directly motivated by Bekenstein’s original thought experiment

[1] from which he obtained his famous entropy formula. He considered a particle with

mass m attached to a fictitious “string” that is lowered towards a black hole. Just before

the horizon the particle is dropped in. Due to the infinite redshift the mass increase

of the black hole can be made arbitrarily small, classically. If one were to take a thermal

gas of particles, this fact would lead to problems with the second law of thermodynamics.

Bekenstein solved this by arguing that when a particle is one Compton wavelength from

the horizon, it is considered to be part of the black hole. Therefore, it increases the mass

and horizon area by a small amount, which he identified with one bit of information. This

led him to his area law for the black hole entropy.

We want to mimic this reasoning not near a black hole horizon, but in flat nonrelativistic

space. So we consider a small piece of a holographic screen, and a particle

of mass m that approaches it from the side at which space time has already emerged.

Concretely this means that on one side the physics is described in macroscopic variables

such as the positions of particles. The physics of the other side of the screen is still

formulated in terms of microscopic degrees of freedom. The holographic principle tells us that we can imagine that its associated information can be mapped onto the screen that separates the two regions. Eventually the particle merges with these microscopic degrees of

freedom, but before it does so it already influences the amount of its associated information.

The situation is depicted in figure 2.

Motivated by Bekenstein’s argument, let us postulate that the change of entropy associated

with the information on the boundary equals

∆S = 2πk_B when ∆x = ħ/(mc). (3.1)


Figure 2. A particle with mass m approaches a part of the holographic screen. The screen bounds

the emerged part of space, which contains the particle, and stores data that describe the part of

space that has not yet emerged, as well as some part of the emerged space.

The reason for putting in the factor of 2π will become apparent soon. Let us rewrite this formula in the slightly more general form by assuming that the change in entropy near the screen is linear in the displacement ∆x:

∆S = 2πk_B (mc/ħ) ∆x. (3.2)

To understand why it is also proportional to the mass m, let us imagine splitting the

particle into two or more lighter sub-particles. Each sub-particle then carries its own

associated change in entropy after a shift ∆x. Since entropy and mass are both additive,

it is natural to expect that the entropy change should be proportional to the mass. How

does force arise? The basic idea is to use the analogy with osmosis across a semi-permeable

membrane. When a particle has an entropic reason to be on one side of the membrane and

the membrane carries a temperature, it will experience an effective force equal to

F ∆x = T ∆S. (3.3)

This is the entropic force. Thus, in order to have a non zero force, we need to have a

non vanishing temperature. From Newton’s law we know that a force leads to a non zero

acceleration. Of course, it is well known that acceleration and temperature are closely

related. Namely, as Unruh showed, an observer in an accelerated frame experiences a

temperature

k_B T = (1/2π) ħa/c, (3.4)

where a denotes the acceleration. Let us take this as the temperature associated with the

bits on the screen. Now it is clear why the equation (3.2) for ∆S was chosen to be of the

given form, including the factor of 2π. It is picked precisely in such a way that one recovers

the second law of Newton,

F = ma, (3.5)

as is easily verified by combining (3.4) together with (3.2) and (3.3).
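The verification just mentioned can be done symbolically. The following sketch (illustrative, using sympy) combines the entropy postulate (3.2), the entropic force relation (3.3), and the Unruh temperature (3.4):

```python
import sympy as sp

m, c, a, dx, kB, hbar = sp.symbols('m c a Delta_x k_B hbar', positive=True)

dS = 2 * sp.pi * kB * (m * c / hbar) * dx   # entropy postulate (3.2)
T = hbar * a / (2 * sp.pi * c * kB)         # Unruh temperature (3.4): k_B T = ħa/(2πc)
F = sp.simplify(T * dS / dx)                # entropic force (3.3): F Δx = T ΔS

print(F)  # a*m — Newton's second law (3.5), with ħ and k_B dropping out
```

Both ħ and k_B cancel between (3.2) and (3.4), which is exactly why the factor of 2π had to be chosen as it was.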


Equation (3.4) should be read as a formula for the temperature T that is required

to cause an acceleration equal to a. And not as usual, as the temperature caused by

an acceleration.

3.2 Newton’s law of gravity

Now suppose our boundary is not infinitely extended, but forms a closed surface. More

specifically, let us assume it is a sphere with already emerged space on the outside. For

the following it is best to forget about the Unruh law (3.4), since we don’t need it. It only

served as a further motivation for (3.2). The key statement is simply that we need to have

a temperature in order to have a force. Since we want to understand the origin of gravity,

we need to know where the temperature comes from.

One can think about the boundary as a storage device for information. Assuming

that the holographic principle holds, the maximal storage space, or total number of bits,

is proportional to the area A. In fact, in a theory of emergent space this is how area may be

defined: each fundamental bit occupies by definition one unit cell.

Let us denote the number of used bits by N. It is natural to assume that this number

will be proportional to the area. So we write

N = Ac³/(Għ), (3.6)

where we introduced a new constant G. Eventually this constant is going to be identified

with Newton’s constant, of course. But since we have not assumed anything yet about the

existence of a gravitational force, one can simply regard this equation as the definition of G.

So, the only assumption made here is that the number of bits is proportional to the area.

Nothing more.

Suppose there is a total energy E present in the system. Let us now just make the

simple assumption that the energy is divided evenly over the bits N. The temperature is

then determined by the equipartition rule

E = ½ N k_B T (3.7)

as the average energy per bit. After this we need only one more equation. It is:

E = Mc². (3.8)

Here M represents the mass that would emerge in the part of space enclosed by the screen,

see figure 3. Even though the mass is not directly visible in the emerged space, its presence

is noticed through the energy that is distributed over the screen.

The rest is straightforward: one eliminates E and inserts the expression for the number

of bits to determine T in terms of M and A. Next one uses the postulate (3.2) for the

change of entropy to determine the force. Finally one inserts

A = 4πR²,


Figure 3. A particle with mass m near a spherical holographic screen. The energy is evenly

distributed over the occupied bits, and is equivalent to the mass M that would emerge in the part

of space surrounded by the screen.

and one obtains the familiar law:

F = GMm/R². (3.9)

We have recovered Newton’s law of gravitation, practically from first principles!
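The elimination steps described above can be reproduced symbolically; a brief sketch (illustrative, using sympy):

```python
import sympy as sp

M, m, R, c, hbar, G, kB, dx = sp.symbols('M m R c hbar G k_B Delta_x', positive=True)

A = 4 * sp.pi * R**2                        # area of the spherical screen
N = A * c**3 / (G * hbar)                   # number of bits (3.6)
E = M * c**2                                # enclosed energy (3.8)
T = 2 * E / (N * kB)                        # equipartition (3.7) solved for T
dS = 2 * sp.pi * kB * (m * c / hbar) * dx   # entropy postulate (3.2)
F = sp.simplify(T * dS / dx)                # entropic force (3.3)

print(F)  # G*M*m/R**2 — Newton's law of gravitation (3.9)
```

Note that ħ, c, and k_B all cancel in the final expression, leaving only G, as the text emphasizes.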

These equations do not just come out by accident. It had to work, partly for dimensional

reasons, and also because the laws of Newton have been ingredients in the steps

that led to black hole thermodynamics and the holographic principle. In a sense we have

reversed these arguments. But the logic is clearly different, and sheds new light on the

origin of gravity: it is an entropic force! That is the main statement, which is new and has

not been made before. If true, this should have profound consequences.

3.3 Naturalness and robustness of the derivation

Our starting point was that space has one emergent holographic direction. The additional

ingredients were that (i) there is a change of entropy in the emergent direction (ii) the

number of degrees of freedom are proportional to the area of the screen, and (iii) the energy

is evenly distributed over these degrees of freedom. After that it is unavoidable that the

resulting force takes the form of Newton’s law. In fact, this reasoning can be generalized

to arbitrary dimensions3 with the same conclusion. But how robust and natural are these

heuristic arguments?

Perhaps the least obvious assumption is equipartition, which in general holds only for

free systems. But how essential is it? Energy usually spreads over the microscopic degrees

of freedom according to some non trivial distribution function. When the lost bits are

randomly chosen among all bits, one expects the energy change associated with ∆S still

to be proportional to the energy per unit area E/A. This fact could therefore be true even

when equipartition is not strictly obeyed.

³ In d dimensions (3.6) includes a factor ½ (d−2)/(d−3) to get the right identification with Newton's constant.


Why do we need the speed of light c in this non-relativistic context? It was necessary

to translate the mass M into an energy, which provides the heat bath required for the

entropic force. In the non-relativistic setting this heat bath is infinite, but in principle one

has to take into account that the heat bath loses or gains energy when the particle changes

its location under the influence of an external force. This will lead to relativistic redshifts,

as we will see.

Since the postulate (3.1) is the basic assumption from which everything else follows,

let us discuss its meaning in more detail. Why does the entropy precisely change like this

when one shifts by one Compton wavelength? In fact, one may wonder why we needed to

introduce Planck’s constant in the first place, since the only aim was to derive the classical

laws of Newton. Indeed, ħ eventually drops out of the most important formulas. So, in principle one could multiply it with any constant and still obtain the same result. Hence, ħ

just serves as an auxiliary variable that is needed for dimensional reasons. It can therefore

be chosen at will, and defined so that (3.1) is exactly valid. The main content of this

equation is therefore simply that there is an entropy change perpendicular to the screen

proportional to the mass m and the displacement ∆x. That is all there is to it.

If we were to move further away from the screen, the change in entropy would in general no longer be given by the same rule. Suppose the particle stays at radius R while the screen is moved to R0 < R. The number of bits on the screen is multiplied by a factor (R0/R)², while the temperature is divided by the same factor. Effectively, this means that only ħ is multiplied by that factor, and since it drops out, the resulting force will stay the same. In

this situation the information associated with the particle is no longer concentrated in a

small area, but spreads over the screen. The next section contains a proposal for precisely

how it is distributed, even for general matter configurations.

3.4 Inertia and the Newton potential

To complete the derivation of the laws of Newton we have to understand why the symbol

a, which was basically introduced by hand in (3.4), is equal to the physical acceleration ẍ. In fact, so far our discussion was quasi-static, so we have not determined yet how to

connect space at different times. In fact, it may appear somewhat counter-intuitive that the

temperature T is related to the vector quantity a, while in our identifications the entropy

gradient ∆S/∆x is related to the scalar quantity m. In a certain way it seems more natural

to have it the other way around.

So let us reconsider what happens to the particle with mass m when it approaches the

screen. Here it should merge with the microscopic degrees of freedom on the screen, and

hence it will be made up out of the same bits as those that live on the screen. Since each

bit carries an energy ½k_B T, the number of bits n follows from

mc² = ½ n k_B T. (3.10)

When we insert this into equation (3.2), and use (3.4), we can express the entropy change

in terms of the acceleration as

∆S/n = k_B (a∆x)/(2c²). (3.11)


By combining the above equations one of course again recovers F = ma as the entropic

force. But, by introducing the number of bits n associated with the particle, we succeeded

in making the identifications more natural in terms of their scalar versus vector character.

In fact, we have eliminated ħ from the equations, which in view of our earlier comment is

a good thing.
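This more natural route can also be checked symbolically (an illustrative sketch): the bit count (3.10) and the per-bit entropy change (3.11), combined with F ∆x = T ∆S, again give F = ma, with ħ cancelled.

```python
import sympy as sp

m, c, a, dx, kB, hbar = sp.symbols('m c a Delta_x k_B hbar', positive=True)

T = hbar * a / (2 * sp.pi * c * kB)    # Unruh temperature (3.4)
n = 2 * m * c**2 / (kB * T)            # number of bits carried by the particle (3.10)
dS_bit = kB * a * dx / (2 * c**2)      # entropy change per bit (3.11)
F = sp.simplify(T * n * dS_bit / dx)   # F Δx = T ΔS, with ΔS = n × (3.11)

print(F)  # a*m — ħ has dropped out, as the text notes
```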

Thus we conclude that acceleration is related to an entropy gradient. This will be one

of our main principles: inertia is a consequence of the fact that a particle at rest will stay at rest because there are no entropy gradients. Given this fact it is natural to introduce

the Newton potential Φ and write the acceleration as a gradient

a = −∇Φ.

This allows us to express the change in entropy in the concise way

∆S/n = −k_B ∆Φ/(2c²). (3.12)

We thus reach the important conclusion that the Newton potential Φ keeps track of the

depletion of the entropy per bit. It is therefore natural to identify it with a coarse graining

variable, like the (renormalization group) scale in AdS/CFT. Indeed, in the next section

we propose a holographic scenario for the emergence of space in which the Newton potential

precisely plays that role. This allows us to generalize our discussion to other mass

distributions and arbitrary positions in a natural way, and give additional support for the

presented arguments.

4 Emergent gravity for general matter distributions

Space emerges at a macroscopic level only after coarse graining. One forgets or integrates

out a large number of microscopic degrees of freedom. A certain part of the microscopic

phase space is forgotten. Hence, there will be a finite entropy associated with each matter

configuration, which measures the amount of microscopic information that is made invisible

to the macroscopic observer. In general, this amount will depend on the distribution of the

matter. The microscopic dynamics by which this information is processed looks random

from a macroscopic point of view. Fortunately, to determine the force we don’t need the

details of the information, nor the exact dynamics, only the amount of information given

by the entropy, and the energy that is associated with it. If the entropy changes as a

function of the location of the matter distribution, it will lead to an entropic force.

Therefore, space cannot just emerge by itself. It has to be endowed with a bookkeeping

device that keeps track of the amount of information for a given energy distribution. It

turns out that in a non-relativistic situation this device is provided by Newton's potential

Φ. And the resulting entropic force is called gravity.

We start from microscopic information. It is assumed to be stored on holographic

screens. Note that information has a natural inclusion property: by forgetting certain bits,

by coarse graining, one reduces the amount of information. This coarse graining can be

achieved through averaging, a block spin transformation, integrating out, or some other


Figure 4. The holographic screens are located at equipotential surfaces. The information on the

screens is coarse grained in the direction of decreasing values of the Newton potential Φ. The

maximum coarse graining happens at black hole horizons, when 2Φ/c² = −1.

renormalization group procedure. At each step one obtains a further coarse grained version

of the original microscopic data.

The coarse grained data live on smaller screens obtained by moving the first screen

further into the interior of the space. The information that is removed by coarse graining

is replaced by the emerged part of space between the two screens. In this way one gets a

nested or foliated description of space by having surfaces contained within surfaces. In other

words, just like in AdS/CFT, there is one emerging direction in space that corresponds to

a “coarse graining” variable, something like the cut-off scale of the system on the screens.

A priori there is no preferred holographic direction in flat space. However, this is where

we use our observation about the Newton potential. It is the natural variable that measures

the amount of coarse graining on the screens. Therefore, the holographic direction is given

by the gradient ∇Φ of the Newton potential. In other words, the holographic screens

correspond to equipotential surfaces. This leads to a well defined foliation of space, except

that screens may break up into disconnected parts that each enclose different regions of

space. This is depicted in figure 4.

The amount of coarse graining is measured by the ratio Φ/c², as can be seen from (3.12). Note that −2Φ/c² is a dimensionless number that is always between zero and one, and

is only equal to one on the horizon of a black hole. We interpret this as the point

where all bits have been maximally coarse grained. Thus the foliation naturally stops

at black hole horizons.
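As a simple illustration (not in the text): for a point mass the ratio −2Φ/c² equals r_s/r, the Schwarzschild radius over the distance, which indeed reaches one exactly at the horizon.

```python
import sympy as sp

G, M, r, c = sp.symbols('G M r c', positive=True)

Phi = -G * M / r                         # Newton potential of a point mass
ratio = -2 * Phi / c**2                  # coarse-graining measure −2Φ/c²
r_s = 2 * G * M / c**2                   # Schwarzschild radius
print(sp.simplify(ratio.subs(r, r_s)))   # 1 — maximal coarse graining at the horizon
```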

4.1 The Poisson equation for general matter distributions

Consider a microscopic state, which after coarse graining corresponds to a given mass

distribution in space. All microscopic states that lead to the same mass distribution belong

to the same macroscopic state. The entropy for each of these states is defined as the number

of microscopic states that flow to the same macroscopic state.

We want to determine the gravitational force by using virtual displacements, and

calculating the associated change in energy. So, let us freeze time and keep all the matter


at fixed locations. Hence, it is described by a static matter density ρ(r). Our aim is to obtain the force that the matter distribution exerts on a collection of test particles with masses m_i and positions r_i.

We choose a holographic screen S corresponding to an equipotential surface with fixed

Newton potential Φ0. We assume that the entire mass distribution given by ρ(r) is contained

inside the volume enclosed by the screen, and all test particles are outside this

volume. To explain the force on the particles, we again need to determine the work that

is performed by the force and show that it is naturally written as the change in entropy

multiplied by the temperature. The difference with the spherically symmetric case is that

the temperature on the screen is not necessarily constant. Indeed, the situation is in general

not in equilibrium. Nevertheless, one can locally define temperature and entropy per

unit area.

First let us identify the temperature. We do this by taking a test particle and moving

it close to the screen, and measuring the local acceleration. Thus, motivated by our earlier

discussion we define temperature analogous to (3.4), namely by

k_B T = (1/2π) ħ∇Φ/c. (4.1)

Here the derivative is taken in the direction of the outward pointing normal to the screen.

Note at this point Φ is just introduced as a device to describe the local acceleration, but

we don’t know yet whether it satisfies an equation that relates it to the mass distribution.

The next ingredient is the density of bits on the screen. We again assume that these

bits are uniformly distributed, and so (3.6) is generalized to

dN = (c³/Għ) dA. (4.2)

Now let us impose the analogue of the equipartition relation (3.7). It takes the form of

an integral expression for the energy

E = ½ k_B ∫_S T dN. (4.3)

It is an amusing exercise to work out the consequence of this relation. Of course, the

energy E is again expressed in terms of the total enclosed mass M. After inserting our

identifications for the left hand side one obtains a familiar relation: Gauss’s law!

M = (1/4πG) ∫_S ∇Φ · dA. (4.4)

This should hold for arbitrary screens given by equipotential surfaces. When a bit of mass

is added to the region enclosed by the screen S, for example, by first putting it close to the

screen and then pushing it across, the mass M should change accordingly. This condition

can only hold in general if the potential Φ satisfies the Poisson equation

∇²Φ(r) = 4πG ρ(r). (4.5)
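Both statements can be checked in the simplest case (an illustrative sketch): the point-mass potential Φ = −GM/r satisfies the source-free Poisson equation away from the origin, and Gauss's law (4.4) over any concentric sphere returns the enclosed mass M.

```python
import sympy as sp

G, M, r = sp.symbols('G M r', positive=True)

Phi = -G * M / r   # point-mass Newton potential

# Laplacian in spherical symmetry: (1/r²) d/dr (r² dΦ/dr) — vanishes for r > 0.
laplacian = sp.simplify(sp.diff(r**2 * sp.diff(Phi, r), r) / r**2)
print(laplacian)  # 0

# Gauss's law (4.4): flux of ∇Φ through a sphere of radius r, divided by 4πG.
flux = sp.diff(Phi, r) * 4 * sp.pi * r**2
print(sp.simplify(flux / (4 * sp.pi * G)))  # M
```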

We conclude that by making natural identifications for the temperature and the information

density on the holographic screens, the laws of gravity come out in a straightforward



Figure 5. A general mass distribution inside the not yet emerged part of space enclosed by the screen. A collection of test particles with masses m_i are located at arbitrary points r_i in the already emerged space outside the screen. The forces F_i due to gravity are determined by the virtual work done after infinitesimal displacements δr_i of the particles.

fashion. Note that to obtain Newtonian gravity one has to assume that the bit density on

the screen is uniform, and that the energy follows from an equipartition. Changing either

of these assumptions leads to another form of gravity.

4.2 The gravitational force for arbitrary particle locations

The next issue is to obtain the force acting on matter particles that are located at arbitrary

points outside the screen. For this we need a generalization of the first postulate (3.2) to

this situation. What is the entropy change due to arbitrary infinitesimal displacements δr_i

of the particles? There is only one natural choice here. We want to find the change δs in

the entropy density locally on the screen S. We noted in (3.12) that the Newton potential

Φ keeps track of the changes of information per unit bit. Hence, the right identification for

the change of entropy density is

δs = k_B (δΦ/2c²) dN, (4.6)

where δΦ is the response of the Newton potential due to the shifts δr_i of the positions

of the particles. To be specific, δΦ is determined by solving the variation of the Poisson

equation

∇²δΦ(r) = 4πG Σ_i m_i δr_i · ∇_i δ(r − r_i). (4.7)

One can verify that with this identification one indeed reproduces the entropy shift (3.2)

when one of the particles approaches the screen.

Let us now determine the entropic forces on the particles. The combined work done

by all of the forces on the test particles is determined by the first law of thermodynamics.

However, we need to express it in terms of the local temperature and entropy variation.


Hence,

Σ_i F_i · δr_i = ∫_S T δs. (4.8)

To see that this indeed gives the gravitational force in the most general case, one simply

has to use the electrostatic analogy. Namely, one can redistribute the entire mass M as

a mass surface density over the screen S without changing the forces on the particles.

The variation of the Newton potential can be obtained from the Green's function for the

Laplacian. The rest of the proof is a straightforward application of electrostatics, but then

applied to gravity. The basic identity one needs to prove is

Σ_i F_i · δr_i = (1/4πG) ∫_S (δΦ∇Φ − Φ∇δΦ) · dA, (4.9)

which holds for any location of the screen outside the mass distribution. This is easily

verified by using Stokes' theorem and the Laplace equation. The second term vanishes when the screen is chosen at an equipotential surface. To see this, simply replace Φ by Φ0 and pull it out of the integral. Since δΦ is sourced only by the particles outside the screen,

the remaining integral just gives zero.

The forces we obtained are independent of the choice of the location of the screen. We

could have chosen any equipotential surface, and we would obtain the same values for F_i,

the ones described by the laws of Newton. That all of this works is not just a matter of

dimensional analysis. The invariance under the choice of equipotential surface is very much

consistent with the idea that a particular location corresponds to an arbitrary choice of the

scale that controls the coarse graining of the microscopic data. The macroscopic physics,

in particular the forces, should be independent of that choice.

5 The equivalence principle and the Einstein equations

Since we made use of the speed of light c in our arguments, it is a logical step to try

and generalize our discussion to a relativistic situation. So let us assume that the microscopic

theory knows about Lorentz symmetry, or even has the Poincaré group as a global

symmetry. This means we have to combine time and space into one geometry. A scenario

with emergent space-time quite naturally leads to general coordinate invariance and curved

geometries, since a priori there are no preferred choices of coordinates, nor a reason why

curvatures would not be present. Specifically, we would like to see how Einstein’s general

relativity emerges from similar reasonings as in the previous section. We will indeed show

that this is possible. But first we study the origin of inertia and the equivalence principle.

5.1 The law of inertia and the equivalence principle

Consider a static background with a global timelike Killing vector ξ^a. To see the emergence

of inertia and the equivalence principle, one has to relate the choice of this Killing vector

field with the temperature and the entropy gradients. In particular, we would like to see that the

usual geodesic motion of particles can be understood as being the result of an entropic force.


In general relativity⁴ the natural generalization of Newton's potential is [11]

φ = ½ log(−ξ^a ξ_a). (5.1)

Its exponent e^φ represents the redshift factor that relates the local time coordinate to that

at a reference point with φ = 0, which we will take to be at infinity.

Just like in the non-relativistic case, we would like to use φ to define a foliation of space,

and put our holographic screens at surfaces of constant redshift. This is a natural choice,

since in this case the entire screen uses the same time coordinate. So the processing of the

microscopic data on the screen can be done using signals that travel without time delay.

We want to show that the redshift perpendicular to the screen can be understood

microscopically as originating from the entropy gradients.5 To make this explicit, let us

consider the force that acts on a particle of mass m. In a general relativistic setting force

is less clearly defined, since it can be transformed away by a general coordinate transformation.

But by using the timelike Killing vector one can give an invariant meaning to

the concept of force [11].

The four-velocity u^a of the particle and its acceleration a^b ≡ u^a∇_a u^b can be expressed in terms of the Killing vector ξ^b as

u^b = e^{−φ} ξ^b,  a^b = e^{−2φ} ξ^a∇_a ξ^b.

We can further rewrite the last equation by making use of the Killing equation

∇_a ξ_b + ∇_b ξ_a = 0

and the definition of φ. One finds that the acceleration can again be simply expressed as the gradient

a^b = −∇^b φ. (5.2)

Note that just like in the non-relativistic situation the acceleration is perpendicular to the screen S. So we can turn it into a scalar quantity by contracting it with a unit outward pointing vector N^b normal to the screen S and to ξ^b.

The local temperature T on the screen is now, in analogy with the non-relativistic situation, defined by

T = (ħ/2π) e^φ N^b ∇_b φ. (5.3)

Here we inserted a redshift factor e^φ, because the temperature T is measured with respect

to the reference point at infinity.

To find the force on a particle that is located very close to the screen, we first use

again the same postulate as in section two. Namely, we assume that the change of entropy

⁴ In this subsection and the next we essentially follow Wald's book on general relativity (pp. 288–290). We use a notation in which c and k_B are put equal to one, but we will keep G and ħ explicit.

⁵ In this entire section it will be very useful to keep the polymer example of section 2 in mind, since that will make the logic of our reasoning very clear.


at the screen is 2π for a displacement by one Compton wavelength normal to the screen.

Hence,

∇_a S = −2π (m/ħ) N_a, (5.4)

where the minus sign comes from the fact that the entropy increases when we cross from the

outside to the inside. The comments made on the validity of this postulate in section 3.3

apply here as well. The entropic force now follows from (5.3)

F_a = T ∇_a S = −m e^φ ∇_a φ. (5.5)

This is indeed the correct gravitational force that is required to keep a particle at fixed

position near the screen, as measured from the reference point at infinity. It is the relativistic

analogue of Newton's law of inertia F = ma. The additional factor e^φ is due to the redshift. Note that ħ has again dropped out.
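One can check this against the Schwarzschild geometry (an illustrative sketch, not part of the text): with −ξ^a ξ_a = 1 − 2GM/r and c = 1, the redshifted force (5.5) needed to hold a particle in place, computed with the orthonormal radial gradient of φ, comes out as exactly the Newtonian GMm/r².

```python
import sympy as sp

G, M, m, r = sp.symbols('G M m r', positive=True)

u = 1 - 2 * G * M / r                     # −ξ^a ξ_a for Schwarzschild (c = 1)
phi = sp.log(u) / 2                       # generalized Newton potential (5.1)
e_phi = sp.sqrt(u)                        # redshift factor e^φ
grad_phi = sp.sqrt(u) * sp.diff(phi, r)   # orthonormal radial gradient of φ
F = sp.simplify(m * e_phi * grad_phi)     # magnitude of the force (5.5)

print(F)  # G*M*m/r**2
```

The local proper acceleration diverges at the horizon, but the redshift factor cancels that divergence, leaving the finite force as measured from infinity.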

It is instructive to rewrite the force equation (5.5) in a microcanonical form. Let

S(E, x^a) be the total entropy associated with a system with total energy E that contains a particle with mass m at position x^a. Here E also includes the energy of the particle. The entropy will in general also depend on many other parameters, but we suppress these

in this discussion.

As we explained in section 2, an entropic force can be determined micro-canonically by adding by hand an external force term, and imposing that the entropy is extremal. For

this situation this condition looks like

d/dx^a S(E + e^{φ(x)} m, x^a) = 0. (5.6)

One easily verifies that this leads to the same equation (5.5). This fixes the equilibrium

point where the external force, parametrized by φ(x), and the entropic force statistically

balance each other. Again we stress the point that there is no microscopic force acting

here! The analogy with equation (2.4) for the polymer, discussed in section 2, should be

obvious now.

Equation (5.6) tells us that the entropy remains constant if we move the particle and

simultaneously reduce its energy by the redshift factor. This is true only when the particle

is very light, and does not disturb the other energy distributions. It simply serves as a

probe of the emergent geometry. This also means that the redshift function φ(x) is entirely

fixed by the other matter in the system.

We have arrived at equation (5.6) by making use of the identifications of the temperature

and entropy variations in space time. But actually we should have gone the other way.

We should have started from the microscopics and defined the space dependent concepts

in terms of them. We chose not to follow such a presentation, since it might have appeared

somewhat contrived.

But it is important to realize that the redshift must be seen as a consequence of the

entropy gradient and not the other way around. The equivalence principle tells us that

redshifts can be interpreted in the emergent space time as either due to a gravitational

field or due to the fact that one considers an accelerated frame. Both views are equivalent

in the relativistic setting, but neither view is microscopic. Acceleration and gravity are

both emergent phenomena.


5.2 Towards a derivation of the Einstein equations

We would like to extend our derivation of the laws of gravity to the relativistic case, and

obtain the Einstein equations. We will present a sketch of how this can be done in a very

analogous fashion. Let us again consider a holographic screen on a closed surface of constant

redshift φ. We assume that it is enclosing a certain static mass configuration with total

mass M. The bit density on the screen is again given by

dN = \frac{dA}{G\hbar} \qquad (5.7)

as in (4.2). Following the same logic as before, let us assume that the energy associated

with the mass M is distributed over all the bits. Again by equipartition each bit carries a

mass unit equal to (1/2) T. Hence

M = \frac{1}{2} \int_S T\, dN \qquad (5.8)

After inserting the identifications for T and dN we obtain

M = \frac{1}{4\pi G} \int_S e^{\phi}\, \nabla\phi \cdot dA \qquad (5.9)
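Explicitly, with the temperature identification (5.3), T = (ħ/2π) e^φ N^b ∇_b φ, and the bit density (5.7), the factors of ħ cancel:

```latex
M = \frac{1}{2}\int_S T\, dN
  = \frac{1}{2}\int_S \frac{\hbar}{2\pi}\, e^{\phi}\, N^b \nabla_b\phi \;\frac{dA}{G\hbar}
  = \frac{1}{4\pi G}\int_S e^{\phi}\, \nabla\phi\cdot dA \,.
```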

Note that again ħ drops out, as expected. The equation (5.9) is indeed known to be the

natural generalization of Gauss’s law to General Relativity. Namely, the right hand side is

precisely Komar’s definition of the mass contained inside an arbitrary volume inside any

static curved space time. It can be derived by assuming the Einstein equations. In our

reasoning, however, we are coming from the other side. We made identifications for the

temperature and the number of bits on the screen. But we don’t know yet whether it

satisfies any field equations. The key question at this point is whether the equation (5.9)

is sufficient to derive the full Einstein equations.
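As a simple consistency check, in the non relativistic limit e^φ → 1 and φ → Φ, the Newton potential, equation (5.9) reduces to the ordinary Gauss law:

```latex
M = \frac{1}{4\pi G}\int_S \nabla\Phi\cdot dA
  = \frac{1}{4\pi G}\int_\Sigma \nabla^2\Phi\; dV
  = \int_\Sigma \rho\; dV \,,
```

using the Poisson equation ∇²Φ = 4πGρ for the enclosed mass density ρ.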

An analogous question was addressed by Jacobson for the case of null screens. By

adapting his reasoning to this situation, and combining it with Wald’s exposition of the

Komar mass, it is straightforward to construct an argument that naturally leads to the

Einstein equations. We will present a sketch of this.

First we make use of the fact that the Komar mass (5.9) can alternatively be reexpressed

in terms of the Killing vector ξ^a as

M = \frac{1}{8\pi G} \int_S dx^a \wedge dx^b\; \epsilon_{abcd}\, \nabla^c \xi^d \qquad (5.10)

Next one uses Stokes' theorem and subsequently the relation ∇^a ∇_a ξ^b = −R^b_a ξ^a, which is implied by the Killing equation for ξ^a. This leads to an expression for the mass in terms of the Ricci tensor:

M = \frac{1}{4\pi G} \int_\Sigma R_{ab}\, n^a \xi^b \, dV \qquad (5.11)
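For the reader's convenience, here is a sketch (up to index conventions; see Wald [11]) of where the relation ∇^a ∇_a ξ^b = −R^b_a ξ^a comes from. A Killing vector satisfies ∇_a ξ_b = −∇_b ξ_a and the standard identity ∇_a ∇_b ξ_c = −R_bca^d ξ_d; contracting the latter with g^{ac} and using the symmetries of the Riemann tensor gives

```latex
\nabla^c \nabla_b\, \xi_c = -\nabla^c \nabla_c\, \xi_b \,,
\qquad
-g^{ac} R_{bca}{}^{d}\, \xi_d = R_{bd}\, \xi^{d}
\quad\Longrightarrow\quad
\nabla^c \nabla_c\, \xi_b = -R_{bd}\, \xi^{d} \,.
```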

From this identity one can get to an integrated form of the Einstein equations, by noting

that the left hand side can be expressed as an integral over the enclosed volume of certain

components of the stress energy tensor T_ab. The particular combination of the stress energy


tensor can be fixed by comparing properties on both sides, such as for instance the conservation

laws of the tensors that occur in the integrals. This leads finally to the integral

relation

2 \int_\Sigma \left( T_{ab} - \tfrac{1}{2}\, T g_{ab} \right) n^a \xi^b\, dV = \frac{1}{4\pi G} \int_\Sigma R_{ab}\, n^a \xi^b\, dV \qquad (5.12)

where Σ is the three dimensional volume bounded by the holographic screen S and n^a is its normal.

We arrived at the equation (5.12) directly from (5.9) by performing independent steps

on both the left and the right hand side. It is derived for a general static background with a timelike Killing vector ξ^a. Requiring that it holds for arbitrary screens implies that the integrands on both sides are equal as well. This gives us only a certain component of the Einstein equation. In fact, we can choose the surface Σ in many ways, as long as its boundary is given by S. This means that we can vary the normal n^a. But that still leaves a contraction with the Killing vector ξ^a.

To try and get to the Einstein equations we now use a similar reasoning as Jacobson [7],

except now applied to time-like screens. Let us consider a very small region of space time

and look also at very short time scales. Since locally every geometry looks approximately

like Minkowski space, we can choose approximate time like Killing vectors. Now consider

a small local part of the screen, and impose that when matter crosses it, the value of the

Komar integral will jump precisely by the corresponding mass m. Following the steps described above then leads to (5.12) for all these Killing vectors, and for arbitrary screens.

This kind of reasoning should be sufficient to obtain the full Einstein equations. It is worth

making this sketchy derivation into a complete proof of the Einstein equations.

5.3 The force on a collection of particles at arbitrary locations

We close this section by explaining how the entropic force acts on a collection of particles at arbitrary locations x_i away from the screen and the mass distribution. The

Komar definition of the mass will again be useful for this purpose. The definition of the

Komar mass depends on the choice of Killing vector ξ^a, in particular on its norm, the redshift factor e^φ. If one moves the particles by a virtual displacement δx_i, this will affect the definition of the Komar mass of the matter configuration inside the screen. In fact, the temperature on the screen will be directly affected by a change in the redshift factor.

The virtual displacements can be performed quasi-statically, which means that the

Killing vector itself is still present. Its norm, or the redshift factor, may change, however.

In fact, also the spatial metric may be affected by this displacement. We are not going to

try to solve these dependences, since that would be impossible due to the non linearity of

the Einstein equations. But we can simply use the fact that the Komar mass is going to

be a function of the positions xi of the particles.

Next let us assume that, in addition to this x_i dependence of the Komar mass, the entropy of the entire system also has explicit x_i dependences, simply due to changes in the amount of information. These are the differences that will lead to the entropic force that we

wish to determine. We will now give a natural prescription for the entropy dependence that

is based on a maximal entropy principle and indeed gives the right forces. Namely, assume


that the entropy may be written as a function of the Komar mass M and, in addition, of the x_i. But since the Komar mass should be regarded as a function M(x_i) of the positions x_i, there will be an explicit and an implicit x_i dependence in the entropy. The maximal entropy principle implies that these two dependences should cancel. So we impose

S\!\left(M(x_i + \delta x_i),\; x_i + \delta x_i\right) = S\!\left(M(x_i),\; x_i\right) \qquad (5.13)

By working out this condition and singling out the equations implied by each variation δx_i one finds

\nabla_i M + T\, \nabla_i S = 0 \qquad (5.14)
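Explicitly, expanding (5.13) to first order in δx_i and using ∂S/∂M = 1/T:

```latex
0 = \frac{\partial S}{\partial M}\,\nabla_i M + \nabla_i S
\qquad\Longrightarrow\qquad
\nabla_i M + T\,\nabla_i S = 0 \,,
```

which is (5.14).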

The point is that the first term represents the force that acts on the i-th particle due to

the mass distribution inside the screen. This force is indeed equal to minus the derivative

of the Komar mass, simply because of energy conservation. But again, this is not the

microscopic reason for the force. In analogy with the polymer, the Komar mass represents

the energy of the heat bath. Its dependence on the position of the other particles is caused

by redshifts whose microscopic origin lies in the depletion of the energy in the heat bath

due to entropic effects.

Since the Komar integral is defined on the holographic screen, it is clear that like in the

non relativistic case the force is expressed as an integral over this screen as well. We have

not tried to make this representation more concrete. Finally, we note that this argument

was very general, and did not really use the precise form of the Komar integral, or the

Einstein equations. So it should be straightforward to generalize this reasoning to higher

derivative gravity theories by making use of Wald’s Noether charge formalism [12], which

is perfectly suitable for this purpose.

6 Conclusion and discussion

The ideas and results presented in this paper lead to many questions. In this section we

discuss and attempt to answer some of these. First we present our main conclusion.

6.1 The end of gravity as a fundamental force

Gravity has given many hints of being an emergent phenomenon, yet up to this day it is

still seen as a fundamental force. The similarities with other known emergent phenomena,

such as thermodynamics and hydrodynamics, have been mostly regarded as just suggestive

analogies. It is time we not only notice the analogy, and talk about the similarity, but

finally do away with gravity as a fundamental force.

Of course, Einstein’s geometric description of gravity is beautiful, and in a certain way

compelling. Geometry appeals to the visual part of our minds, and is amazingly powerful

in summarizing many aspects of a physical problem. Presumably this explains why we, as

a community, have been so reluctant to give up the geometric formulation of gravity as

being fundamental. But it is inevitable we do so. If gravity is emergent, so is space time

geometry. Einstein tied these two concepts together, and both have to be given up if we

want to understand one or the other at a more fundamental level.


Our description was clearly motivated by the AdS/CFT correspondence. The gravitational

side of this duality is usually seen as independently defined. But in our view it is

a macroscopic emergent description, which by chance we happened to know about before

we understood it as being the dual of a microscopic theory without gravity. We can’t

resist making the analogy with a situation in which we would have developed a theory for

elasticity using stress tensors in a continuous medium half a century before knowing about

atoms. We probably would have been equally resistant in accepting the obvious. Gravity

and closed strings are not much different, but we just have not yet got used to the idea.

The results of this paper suggest gravity arises as an entropic force, once space and

time themselves have emerged. If gravity and space time can indeed be explained as

emergent phenomena, this should have important implications for many areas in which

gravity plays a central role. It would be especially interesting to investigate the consequences

for cosmology. For instance, the way redshifts arise from entropy gradients could

lead to many new insights.

The derivation of the Einstein equations presented in this paper is analogous to previous

works, in particular [7]. Also other authors have proposed that gravity has an entropic

or thermodynamic origin, see for instance [14]. But we have added an important element

that is new. Instead of only focussing on the equations that govern the gravitational field,

we uncovered what is the origin of force and inertia in a context in which space is emerging.

We identified a cause, a mechanism, for gravity. It is driven by differences in entropy, in whatever way defined, and is a consequence of the statistically averaged random dynamics at the microscopic level. The reason why gravity has to keep track of energies as well as

entropy differences is now clear. It has to, because this is what causes motion!

The presented arguments have admittedly been rather heuristic. One can not expect

otherwise, given the fact that we are entering an unknown territory in which space does

not exist to begin with. The profound nature of these questions in our view justifies the

heuristic level of reasoning. The assumptions we made have been natural: they fit with

existing ideas and are supported by several pieces of evidence. In the following we gather

more supporting evidence from string theory, the AdS/CFT correspondence, and black

hole physics.

6.2 Implications for string theory and relation with AdS/CFT

If gravity is just an entropic force, then what does this say about string theory? Gravity

is seen as an integral part of string theory, which can not be taken out just like that. But

we do know about dualities between closed string theories that contain gravity and decoupled

open string theories that don’t. A particularly important example is the AdS/CFT

correspondence.

The open/closed string and AdS/CFT correspondences are manifestations of the

UV/IR connection that is deeply engrained within string theory. This connection implies

that short and long distance physics can not be seen as totally decoupled. Gravity is a long

distance phenomenon that clearly knows about short distance physics, since it is evident

that Newton’s constant is a measure for the number of microscopic degrees of freedom.

String theory invalidates the “general wisdom” underlying the Wilsonian effective field


Figure 6. The microscopic theory in a) is effectively described by a string theory consisting of

open and closed strings as shown in b). Both types of strings are cut off in the UV.

theory, namely that integrating out short distance degrees of freedom only generates local

terms in the effective action, most of which become irrelevant at low energies. If that were

completely true, the macroscopic physics would be insensitive to the short distance physics.

The reason why the Wilsonian argument fails is that it makes too conservative an

assumption about the asymptotic growth of the number of states at high energies. In

string theory the number of high energy open string states is such that integrating them

out indeed leads to long range effects. Their one loop amplitudes are equivalent to the

tree level contributions due to the exchange of closed string states, which among others are

responsible for gravity. This interaction is, however, equivalently represented by the sum

over all quantum contributions of the open string. In this sense the emergent nature of

gravity is also supported by string theory.

The AdS/CFT correspondence has an increasing number of applications to areas of

physics in which gravity is not present at a fundamental level. Gravitational techniques are

used as tools to calculate physical quantities in a regime where the microscopic description

fails. The latest of these developments is the application to condensed matter theory. No

one doubts that in these situations gravity emerges only as an effective description. It arises

not in the same space as the microscopic theory, but in a holographic scenario with one

extra dimension. No clear explanation exists of where this gravitational force comes from.

The entropic mechanism described in this paper should be applicable to these physical

systems, and explain the emergence of gravity.

The holographic scenario discussed in this paper has certainly been inspired by the way

holography works in AdS/CFT and open closed string correspondences. In string language,

the holographic screens can be identified with D-branes, and the microscopic degrees of

freedom on these screens represented as open strings defined with a cut off in the UV.

The emerged part of space is occupied by closed strings, which are also defined with a

UV cut off, as shown in figure 6. The open and closed string cut offs are related by the

UV/IR correspondence: pushing the open string cut off to the UV forces the closed string

cut off towards the IR, and vice versa. The value of the cut offs is determined by the


location of the screen. Integrating out the open strings produces the closed strings, and

leads to the emergence of space and gravity. Note, however, that from our point of view

the existence of gravity or closed strings is not assumed microscopically: they are emergent

as an effective description.

In this way, the open/closed string correspondence supports the interpretation of gravity

as an entropic force. Yet, many still see the closed string side of these dualities as a well defined fundamental theory. But in our view gravity and closed strings are emergent and only present as a macroscopic concept. It just happened that we already knew about gravity before we understood it could be obtained from a microscopic theory without it.

6.3 Black hole horizons revisited

We saved perhaps the clearest argument for the fact that gravity is an entropic force for the

very last. The first cracks in the fundamental nature of gravity appeared when Bekenstein,

Hawking and others discovered the laws of black hole thermodynamics. In fact, the thought

experiment mentioned in section 3 that led Bekenstein to his entropy law is surprisingly

similar to the polymer problem. The black hole serves as the heat bath, while the particle

can be thought of as the end point of the polymer that is gradually allowed to go back to

its equilibrium situation.

Of course, there is no polymer in the gravity system, and there appears to be no direct

contact between the particle and the black hole. But here we are ignoring the fact that

one of the dimensions is emergent. In the holographic description of this same process,

the particle can be thought of as being immersed in the heat bath representing the black

hole. This fact is particularly obvious in the context of AdS/CFT, in which a black hole is

dual to a thermal state on the boundary, while the particle is represented as a delocalized

operator that is gradually being thermalized. By the time that the particle reaches the

horizon it has become part of the thermal state, just like the polymer. This phenomenon

is clearly entropic in nature, and is the consequence of a statistical process that drives the

system to its state of maximal entropy.

Upon closer inspection Bekenstein’s reasoning can be used to show that gravity becomes

an entropic force near the horizon, and that the equations presented in section 3 are

exactly valid. He argued that one has to choose a location slightly away from the black hole

horizon at a distance of about the order of the Compton wavelength, where we declare

that the particle and the black hole have become one system. Let us say this location

corresponds to choosing a holographic screen. The precise location of this screen can not

be important, however, since there is not a natural preferred distance that one can choose.

The equations should therefore not depend on small variations of this distance.

By pulling out the particle a bit further, one changes its energy by a small amount

equal to the work done by the gravitational force. If one then drops the particle into the

black hole, the mass M increases by this same additional amount. Consistency of the


laws of black hole thermodynamics implies that the additional change in the Bekenstein-Hawking entropy, when multiplied with the Hawking temperature T_H, must be precisely

equal to the work done by gravity. Hence,

F_{\mathrm{gravity}} = T_H\, \frac{\partial S_{BH}}{\partial x} \,. \qquad (6.1)

The derivative of the entropy is defined as the response of S_BH due to a change in the

distance x of the particle to the horizon. This fact is surely known, and probably just

regarded as a consistency check. But let us take it one or two steps further.
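Equation (6.1) is the first law of black hole thermodynamics in disguise. As a quick symbolic sanity check (a sketch only, assuming the standard Schwarzschild expressions with all physical constants restored; the symbol names below are ours), one can verify that T_H dS_BH/dM = c², i.e. that T_H dS_BH is exactly the energy c² dM dropped into the hole:

```python
import sympy as sp

# Physical constants and the black hole mass, all positive symbols
G, hbar, c, k_B, M = sp.symbols('G hbar c k_B M', positive=True)

# Schwarzschild horizon area A = 16 pi G^2 M^2 / c^4
A = 16 * sp.pi * G**2 * M**2 / c**4

# Bekenstein-Hawking entropy S_BH = k_B c^3 A / (4 G hbar)
S_BH = k_B * c**3 * A / (4 * G * hbar)

# Hawking temperature T_H = hbar c^3 / (8 pi G M k_B)
T_H = hbar * c**3 / (8 * sp.pi * G * M * k_B)

# First law: T_H * dS_BH/dM should reduce to c^2
first_law = sp.simplify(T_H * sp.diff(S_BH, M))
print(first_law)  # c**2
```

The cancellation of G, ħ and k_B in this product mirrors the cancellation of ħ noted repeatedly in the derivations above.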

Suppose we take the screen even further away from the horizon. The same argument

applies, but we can also choose to ignore the fact that the screen has moved, and lower

the particle to the location of the previous screen, the one closer to the horizon. This

process would happen in the part of space behind the new screen location, and hence it

should have a holographic representation on the screen. In this system the force in the perpendicular direction has no microscopic meaning, nor does the acceleration. The coordinate x perpendicular to the screen is just some scale variable associated with the holographic image of the particle. Its interpretation as a coordinate is a derived concept: this is what it means to have an emergent space.

The mass is defined in terms of the energy associated with the particle's holographic image, which presumably is a near thermal state. It is not exactly thermal, however, because it is still slightly away from the black hole horizon. We have pulled it out of equilibrium, just like the polymer. One may then ask: what is the cause of the change in energy that is holographically dual to the work done when, in the emergent space, we gradually lower the particle towards the location of the old screen behind the new one? Of course, this can be nothing else than an entropic effect; the force is simply due to the thermalization process. We must conclude that the only microscopic explanation is that there is an emergent entropic force acting. In fact, the correspondence rules between the scale variable and energy on the one side, and the emergent coordinate x and the mass m on the other, must be such that F = T ∇S translates into the gravitational force. It is

straightforward to see that this indeed works and that the equations for the temperature

and the entropy change are exactly as given in section 3.

The horizon is only a special location for observers that stay outside the black hole.

The black hole can be arbitrarily large and the gravitational force at its horizon arbitrarily

weak. Therefore, this thought experiment is not just teaching us about black holes. It

teaches us about the nature of space time, and the origin of gravity. Or more precisely, it

tells us about the cause of inertia. We can do the same thought experiment for a Rindler

horizon, and reach exactly the same conclusion. In this case the correspondence rules must

be such that F = T ∇S translates into the inertial force F = ma. Again the formulas work

out as in section 3.

6.4 Final comments

This brings us to a somewhat subtle and not yet fully understood aspect: namely, the role of ħ. The previous arguments make clear that near the horizon the equations are valid with ħ identified with the actual Planck constant. However, we have no direct confirmation or


proof that the same is true when we go away from the horizon, or especially when there is no horizon present at all. In fact, there are reasons to believe that the equations work slightly differently there. The first is that one is not exactly at thermal equilibrium. Horizons

have well defined temperatures, and clearly are in thermal equilibrium. If one assumes that

the screen at an equipotential surface with Φ = Φ_0 is in equilibrium, the entropy needed

to get the Unruh temperature (3.4) is given by the Bekenstein-Hawking formula, including

the factor 1/4,

S = \frac{c^3}{4 G \hbar} \int_S dA \,. \qquad (6.2)

This value of the entropy appears to be very high, and violates the Bekenstein bound [15], which states that a system contained in a region with radius R and total energy E can not have an entropy larger than ER. The reason for this discrepancy may be that Bekenstein's

argument does not hold for the holographic degrees of freedom on the screen, or because

of the fact that we are far from equilibrium.
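For comparison, the horizon itself exactly saturates the bound. A short symbolic check (a sketch, using the common form of the Bekenstein bound S ≤ 2π k_B E R/(ħc) with constants restored and assuming Schwarzschild values; the text above quotes the bound in natural units as ER):

```python
import sympy as sp

G, hbar, c, k_B, M = sp.symbols('G hbar c k_B M', positive=True)

# Bekenstein-Hawking entropy of a Schwarzschild black hole
S_BH = 4 * sp.pi * k_B * G * M**2 / (hbar * c)

# Bekenstein bound S <= 2 pi k_B E R / (hbar c), with E = M c^2, R = 2 G M / c^2
E = M * c**2
R = 2 * G * M / c**2
S_bound = 2 * sp.pi * k_B * E * R / (hbar * c)

# The horizon saturates the bound exactly: the ratio is 1
print(sp.simplify(S_BH / S_bound))  # 1
```

A screen at larger radius carries more area entropy by (6.2) while enclosing the same energy, which is one way to see why the assigned screen entropy can overshoot the bound away from the horizon.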

But there may also be other ways to reconcile these statements, for example by making use of the freedom to rescale the value of ħ. This would not affect the final outcome for the force, nor the fact that it is entropic. In fact, one can even multiply ħ by a function

f(Φ_0) of the Newton potential on the screen. This rescaling would affect the values of the

entropy and the temperature in opposite directions: T gets multiplied by a factor, while S

will be divided by the same factor. Since a priori we can not exclude this possibility, there

is something to be understood. In fact, there are even other modifications possible, like a description that uses a weighted average over many screens with different temperatures. Even then the essence of our conclusion would not change: gravity and inertia are entropic forces.

Does this view of gravity lead to predictions? The statistical average should give the

usual laws, hence one has to study the fluctuations in the gravitational force. Their size

depends on the effective temperature, which may not be universal and depends on the

effective value of ħ. An interesting thought is that fluctuations may turn out to be more

pronounced for weak gravitational fields between small bodies of matter. But clearly, we

need a better understanding of the theory to turn this into a prediction.

It is well known that Newton was criticized by his contemporaries, especially by Hooke, for the fact that his law of gravity acts at a distance and has no direct mechanical cause like the elastic

force. Ironically, this is precisely the reason why Hooke’s elastic force is nowadays not seen

as fundamental, while Newton’s gravitational force has maintained that status for more

than three centuries. What Newton did not know, and certainly Hooke didn’t, is that the

universe is holographic. Holography is also a hypothesis, of course, and may appear just

as absurd as an action at a distance.

One of the main points of this paper is that the holographic hypothesis provides a

natural mechanism for gravity to emerge. It allows direct “contact” interactions between

degrees of freedom associated with one material body and another, since all bodies inside a

volume can be mapped on the same holographic screen. Once this is done, the mechanisms

for Newton’s gravity and Hooke’s elasticity are surprisingly similar. We suspect that neither

of these rivals would have been happy with this conclusion.


Acknowledgments

This work is partly supported by Stichting FOM. I would like to thank J. de Boer, B. Chowdhuri,

R. Dijkgraaf, P. McFadden, G. ’t Hooft, B. Nienhuis, J.-P. van der Schaar, and especially

M. Shigemori, K. Papadodimas, and H. Verlinde for discussions and comments.

Open Access. This article is distributed under the terms of the Creative Commons

Attribution Noncommercial License which permits any noncommercial use, distribution,

and reproduction in any medium, provided the original author(s) and source are credited.

References

[1] J.D. Bekenstein, Black holes and entropy, Phys. Rev. D 7 (1973) 2333 [SPIRES].

[2] J.M. Bardeen, B. Carter and S.W. Hawking, The Four laws of black hole mechanics,

Commun. Math. Phys. 31 (1973) 161 [SPIRES].

[3] S.W. Hawking, Particle Creation by Black Holes, Commun. Math. Phys. 43 (1975) 199

[SPIRES].

[4] P.C.W. Davies, Scalar particle production in Schwarzschild and Rindler metrics, J. Phys. A

8 (1975) 609 [SPIRES].

[5] W.G. Unruh, Notes on black hole evaporation, Phys. Rev. D 14 (1976) 870 [SPIRES].

[6] T. Damour, Surface effects in black hole physics, in Proceedings of the Second Marcel

Grossmann Meeting on General Relativity, R. Ruffini ed., North Holland, (1982).

[7] T. Jacobson, Thermodynamics of space-time: The Einstein equation of state, Phys. Rev.

Lett. 75 (1995) 1260 [gr-qc/9504004] [SPIRES].

[8] G. ’t Hooft, Dimensional reduction in quantum gravity, gr-qc/9310026 [SPIRES].

[9] L. Susskind, The World as a hologram, J. Math. Phys. 36 (1995) 6377 [hep-th/9409089]

[SPIRES].

[10] J.M. Maldacena, The large-N limit of superconformal field theories and supergravity, Int. J.

Theor. Phys. 38 (1999) 1113 [Adv. Theor. Math. Phys. 2 (1998) 231] [hep-th/9711200]

[SPIRES].

[11] R.M. Wald, General Relativity, The University of Chicago Press, (1984).

[12] R.M. Wald, Black hole entropy is the Noether charge, Phys. Rev. D 48 (1993) 3427

[gr-qc/9307038] [SPIRES].

[13] L. Susskind, The anthropic landscape of string theory, hep-th/0302219 [SPIRES].

[14] T. Padmanabhan, Thermodynamical Aspects of Gravity: New insights, Rept. Prog. Phys. 73

(2010) 046901 [arXiv:0911.5004] [SPIRES].

[15] J.D. Bekenstein, A Universal Upper Bound on the Entropy to Energy Ratio for Bounded

Systems, Phys. Rev. D 23 (1981) 287 [SPIRES].

[16] E.P. Verlinde, to appear.
