From D-branes to M-branes: Up from String Theory Slides:
  1. From D-Branes to M-Branes: Up from String Theory. Neil Lambert (CERN), University of Nis, 22 Oct 2010.
  2. Plan • Introduction • What is String Theory? • D-branes • M-Theory • M-branes • Conclusions
  3. The World (as seen from CERN) [diagram of energy scales: LHC, Standard Model, GUT scale, Quantum Gravity]
  4. • The Standard Model of particle physics is incredibly successful – Describes structure and interactions of all matter* from deep inside nucleons upwards • General Relativity is also very successful – Describes physics on large to cosmologically large scales • But they are famously hard to reconcile – GR is classical – the Standard Model is an effective low-energy theory (* Well, maybe 20% of it)
  5. • String Theory seems capable of describing all that we expect in one consistent framework: – Quantum Mechanics and General Covariance – Standard Model-like gauge theory – General Relativity – Cosmology (inflation)?
  6. What is String Theory? Well, in fact we know an awful lot (although not what string theory really is).
  7. • (Perturbative) quantum field theory assumes that the basic states are point-like particles – Interactions occur when two particles meet
  8. • Point particles are replaced by 1-dimensional strings – The multitude of particles corresponds to the lowest harmonics of an infinite tower of modes
  9. • Feynman diagrams merge and become smooth surfaces • Only one coupling constant: g_s, the vacuum expectation value of a scalar field, the dilaton φ
  10. • A remarkable feature is that gravity comes out of the quantum theory, unified with gauge forces • The dimension of spacetime is 10 • Must compactify to 4D • There appear to be a plethora of models with Standard Model-like behaviour – an estimated 10^{500} 4D vacua (the Landscape)
  11. The World (as seen from the Multiverse)
  12. D-Branes • In addition to strings, String Theory contains D-branes: – p-dimensional surfaces in spacetime • 0-brane = point particle • 1-brane = string • 2-brane = membrane • etc. – Non-perturbative states: Mass ~ 1/g_s – End point of open strings
  13. • These open strings give dynamics to the D-brane • At lowest order the dynamics are those of U(n) Super-Yang-Mills
  14. – g_YM is determined from g_s – Light modes on the worldvolume arise from the open strings (Higgs mechanism) • Mass = length of a stretched string between the branes – Vast applications to model building
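As a quick gloss on the stretched-string statement above (my addition, using the standard convention that the fundamental string tension is 1/2πα'): the mass of an open string stretched a distance L between two separated D-branes is its tension times its length,

    m = T_{\rm F1}\, L = \frac{L}{2\pi\alpha'},

so separating the branes (Higgsing U(n) to, say, U(n-1)×U(1)) gives the off-diagonal gauge bosons a mass proportional to the brane separation.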
  15. • At low energy D-branes appear as (extremal) charged black hole solutions – The singularity is extended along p dimensions • Thus D-branes have both a Yang-Mills description as well as a gravitational one – Exact counting of black hole microstates – AdS/CFT
  16. What is M-Theory? • But not all is perfect in String Theory – Are there really 10^{500} vacua? – Can one make any observable predictions? • What is String Theory really? – The construction of vibrating interacting strings is just a perturbative device, not a definition of the theory • What are strongly coupled strings? • Furthermore, why are there 5 perturbative string theories? – Type I – Type IIA & IIB – Heterotic E8×E8 & SO(32)
  17. • All 5 are now thought to be related as different aspects of a single theory: M-theory • How? Duality • Two theories are dual if they describe the same physics but with different variables, e.g. S-duality: g_s ↔ 1/g_s
  18. • The classic example of duality occurs in Maxwell's equations without sources, which are self-dual: dF = 0, d⋆F = 0
      – 'electric' variables: F = dA, giving d⋆dA = 0
      – 'magnetic' variables: F = ⋆dA_D, giving d⋆dA_D = 0
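For concreteness (my own gloss, not part of the slides): in terms of the usual field-strength components the map F ↔ ⋆F is, up to sign conventions, the familiar exchange of electric and magnetic fields,

    \vec{E} \to \vec{B}, \qquad \vec{B} \to -\vec{E},

under which dF = 0 (the Bianchi identity for A) and d⋆F = 0 (the equation of motion) trade roles between the electric and magnetic descriptions.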
  19. • M-theory moduli space [figure]
  20. • M-theory moduli space at strong coupling (10D) [figure]
  21. • M-theory moduli space in 3D: X^{11} [figure]
  22. • An 11D metric tensor becomes a 10D metric tensor plus a vector and a scalar:
      g_{MN} = \begin{pmatrix} e^{-2\phi/3} g_{\mu\nu} & e^{4\phi/3} A_\nu \\ e^{4\phi/3} A_\mu & e^{4\phi/3} \end{pmatrix}
      with g_{\mu\nu} the 10D metric, A_\mu a U(1) gauge field, and \phi a scalar that controls the size of the 11th dimension.
  23. • Thus the String Theory dilaton φ has a geometric interpretation as the size of the 11th dimension – But the vev of e^φ is g_s – String perturbation theory is an expansion about a degenerate 11th dimension – As g_s → ∞ an extra dimension opens up • 11D theory in the infinite coupling limit • Predicts a complete quantum theory in eleven dimensions: M-Theory – Effective action is 11D supergravity – Little else is known
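To make this quantitative (quoting the usual IIA/M-theory dictionary rather than the slides): the radius R_{11} of the eleventh dimension and the 11D Planck length are tied to the string coupling and string length by

    R_{11} = g_s\,\ell_s, \qquad \ell_{11} = g_s^{1/3}\,\ell_s,

so weak string coupling corresponds to a small, effectively invisible eleventh dimension, while g_s → ∞ decompactifies it.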
  24–26. The branes of M-theory: the Type IIA ↔ M-theory dictionary
      Type IIA String Theory                       M-Theory
      0-branes                                     gravitational wave along X^{11}
      strings, 2-branes, 4-branes, 5-branes        2-branes, 5-branes (the M-branes)
      6-branes                                     Kaluza-Klein monopoles
      (the gravitational wave and the Kaluza-Klein monopole are purely gravitational excitations)
  27. • So there are no strings in M-theory – only 2-branes and 5-branes • In particular there are no open strings and no g_s – No perturbative expansion – No microscopic understanding • The dynamics of a single M-brane act to minimize its worldvolume – With other fields related by supersymmetry • M2 [Bergshoeff, Sezgin, Townsend] • M5 [Howe, Sezgin, West] • What about multiple M-branes?
  28. • In string theory you can derive the dynamics of multiple D-branes from symmetries: – The effective theory has 16 supersymmetries and breaks SO(1,9) → SO(1,p) × SO(9-p) – This is in agreement with maximally supersymmetric Yang-Mills gauge theory:
      L = -\frac{1}{4}\mathrm{tr}(F^2) - \frac{1}{2}\mathrm{tr}(D X^i)^2 + \frac{i}{2}\mathrm{tr}(\bar\Psi \Gamma^\mu D_\mu \Psi) + i\,\mathrm{tr}(\bar\Psi \Gamma^i [X^i, \Psi]) + \frac{1}{4}\mathrm{tr}([X^i, X^j])^2
  29–30. • Can we derive the dynamics of M2-branes from symmetries? – Conformal field theory • Strong coupling (IR) fixed point of 3D SYM – No perturbation expansion – The only maximally supersymmetric Lagrangians are Yang-Mills theories • Wrong symmetries for M-Theory: need SO(1,2) × SO(8), not SO(1,2) × SO(7) • Well, that turns out not to be true
  31. • The Yang-Mills theories living on D-branes are determined by the susy variation
      \delta\Psi = \Gamma^\mu \Gamma^i D_\mu X^i\,\epsilon + [X^i, X^j]\,\Gamma^{ij}\Gamma^{10}\,\epsilon + \dots
  • Here we find a Lie algebra with a bilinear antisymmetric product [\cdot,\cdot] : A \otimes A \to A • Closure of the susy algebra leads to a gauge symmetry \delta X^i = [\Lambda, X^i] • Consistency of this implies the Jacobi identity:
      [\Lambda,[X,Y]] = [[\Lambda,X],Y] + [X,[\Lambda,Y]]
  32. • What is required for M2-branes? – Now \Gamma_{012}\epsilon = \epsilon and \Gamma_{012}\Psi = -\Psi, so we require
      \delta\Psi = \Gamma^\mu \Gamma^I D_\mu X^I\,\epsilon + [X^I, X^J, X^K]\,\Gamma^{IJK}\,\epsilon
  – Thus we need a triple product (a 3-algebra): [\cdot,\cdot,\cdot] : A \otimes A \otimes A \to A – Closure implies a gauge symmetry \delta X = [X, A, B] – Consistency requires a generalization of the Jacobi identity (the fundamental identity):
      [[X,Y,Z],A,B] = [[X,A,B],Y,Z] + [X,[Y,A,B],Z] + [X,Y,[Z,A,B]]
  33. • The fundamental identity implies that the gauge symmetry \delta X = [X, A, B] acts as a (non-simple) Lie algebra \mathfrak{g} acting on A • 3-algebra data is equivalent to specifying a Lie algebra \mathfrak{g} with a (split) metric and a representation acting on a vector space A (with an invariant metric).
  34. • This gives a maximally supersymmetric Lagrangian with SO(8) R-symmetry [Bagger, NL]:
      L = -\frac{1}{2}\mathrm{tr}(D_\mu X^I, D^\mu X^I) + \frac{i}{2}\mathrm{tr}(\bar\Psi, \Gamma^\mu D_\mu \Psi) + \frac{i}{4}\mathrm{tr}(\bar\Psi, \Gamma_{IJ}[X^I, X^J, \Psi]) + \frac{1}{12}\mathrm{tr}([X^I, X^J, X^K])^2 + L_{CS}
      L_{CS} = \frac{k}{4\pi}\,\mathrm{tr}\Big(\tilde A \wedge d\tilde A + \tfrac{2i}{3}\,\tilde A \wedge \tilde A \wedge \tilde A\Big)
  • A 'twisted' Chern-Simons gauge theory • Conformal, parity invariant
  35. • But it turns out there is only one example:
      [T^a, T^b, T^c] = \frac{2\pi}{k}\,\epsilon^{abcd}\,T^d, \qquad a,b,c,d = 1,2,3,4, \quad k \text{ an integer}
  • SU(2)×SU(2) Chern-Simons at level (k,-k) and matter in the bi-fundamental • Vacuum moduli space:
      M_k = (R^8 \times R^8)/D_{2k}, \qquad M_2 = (R^8/Z_2 \times R^8/Z_2)/Z_2
  • Two M2-branes in R^8/Z_2 – agrees with M-theory when k=2
  36. • Need to generalize: – Weak coupling arises from an orbifold – Consider C^4/Z_k:
      \begin{pmatrix} Z^1 \\ Z^2 \\ Z^3 \\ Z^4 \end{pmatrix} \sim \begin{pmatrix} \omega & & & \\ & \omega & & \\ & & \omega^{-1} & \\ & & & \omega^{-1} \end{pmatrix} \begin{pmatrix} Z^1 \\ Z^2 \\ Z^3 \\ Z^4 \end{pmatrix}, \qquad \omega = e^{2\pi i/k}
  37. • From the 3-algebra point of view this is achieved if the triple product is no longer totally antisymmetric: [X, Y; \bar Z] = -[Y, X; \bar Z], where X, Y, Z are complex scalar fields • Consistency requires a related fundamental identity • For example we can take (for n×m matrices):
      [X, Y; \bar Z] = \frac{2\pi}{k}\,(X Z^\dagger Y - Y Z^\dagger X)
  • The resulting action is similar to the N=8 case but: – U(n)×U(m) Chern-Simons theory at level (k,-k) with matter in the bi-fundamental – Moduli space: M_{k,n} = \mathrm{Sym}^n(R^8/Z_k)
  38. • These theories were first proposed by [Aharony, Bergman, Jafferis and Maldacena] • They gave a brane diagram derivation – Consider the following Hanany-Witten picture
  39. • In terms of the D3-brane SYM worldvolume theory: – Integrating out the D5/D3 strings and flowing to the IR gives a U(n)×U(n) CS theory at level (k,-k) coupled to bi-fundamental matter – N=3 is enhanced to N=6
  40. The brane configurations:
      IIB:       D3: 1 2 3;   NS5: 1 2 4 5 6;   (1,k)5: 1 2 4_θ 5_θ 6_θ 7_θ 8_θ 9_θ
      + T-duality along x^3
      IIA:       D2: 1 2;   KK: \hat{3} 7 8 9;   KK/D6: \hat{3} 4_θ 5_θ 6_θ 7_θ 8_θ 9_θ
      + lift to M-theory
      M-theory:  M2: 1 2;   KK: \hat{3} 7 8 9 \hat{10};   KK: \hat{3} 4_θ 5_θ 6_θ 7_θ 8_θ 9_θ \hat{10}
  41. • The final configuration is just n M2s in a curved background preserving 3/16 susys – The metric can be written explicitly – smooth except where the centres intersect – The near horizon limit gives n M2s in R^8/Z_k – The preserved susys are enhanced to 6/16 • Note that this works for all n and all k – even k=1,2 where we expect N=8 susy • Two supersymmetries are not realized in the Lagrangian (they carry U(1) charge) • For k=1 even the centre of mass mode is obscured
  42. • One success of these models is an understanding of the mysterious n^{3/2} growth of the degrees of freedom – Free energy = f(λ) n^2, with λ = n/k • f(λ) = 1 for λ << 1 • f(λ) = λ^{-1/2} for λ >> 1 • This has recently been confirmed in Chern-Simons theory for all λ [Drukker, Marino, Putrov]
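Spelling out the arithmetic behind the slide's claim (my gloss): at strong 't Hooft coupling the quoted scaling gives

    F \sim n^2\, f(\lambda) \sim n^2\, \lambda^{-1/2} = n^2 \left(\frac{n}{k}\right)^{-1/2} = n^{3/2}\, k^{1/2},

which is the n^{3/2} growth of degrees of freedom expected for n M2-branes from the dual AdS_4 × S^7/Z_k supergravity description.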
  43. • How does one recover D2-branes from this? [Mukhi, Papageorgakis] – Give a vev to a scalar field, v = ⟨X^8⟩ • This breaks U(n)×U(n) → U(n) and SO(8) → SO(7) – A Chern-Simons gauge field becomes a dynamical U(n) gauge field • Similar to a Higgs effect, where a non-dynamical vector eats a scalar to become dynamical:
      g^2_{YM} = v^2/k, \qquad L = \frac{k}{v^2}\, L^{U(n)}_{SYM}(X^{I \neq 8}) + O(k v^{-3})
  44. • What can we learn about M-theory? – Hints at microscopic dynamics of M-branes • e.g. in the N=8 theory one finds mass = area of a triangle with vertices on an M2
  45. • Mass deformations give fuzzy vacua: [Z^A, Z^B; \bar Z_B] = m^A_{\ B}\, Z^B – M2-branes blow up into fuzzy M5-branes – Can we learn about M5-branes? • Also M2s can end on M5s: the Chern-Simons gauge fields become dynamical
  46. • There are also infinite-dimensional totally antisymmetric 3-algebras, e.g. the Nambu bracket on functions on a 3-manifold: [X, Y, Z] = ⋆(dX ∧ dY ∧ dZ) – Related to M5-branes? • There are infinitely many totally antisymmetric 3-algebras with a Lorentzian metric – These seem to be equivalent to 3D N=8 SYM but with manifest SO(8) and conformal symmetry
  47. Conclusions • M-Theory and M-branes are poorly understood but there has been much recent progress: – A complete proposal for the effective Lagrangian of n M2s in R^8/Z_k – Novel highly supersymmetric Chern-Simons gauge theories based on a 3-algebra – A Lagrangian description of strongly coupled 3D super Yang-Mills • M5-branes remain very challenging, as does M-Theory itself, but hopefully progress will be made – M2-brane CFTs 'define' M-theory in AdS_4 × X^7
Science Will Never Explain Why There's Something Rather Than Nothing When predicting something that science will never do, it's wise to recall the French philosopher Auguste Comte. In 1835 he asserted that science will never figure out what stars are made of. That seemed like a safe bet, but within decades astronomers started determining the chemical composition of the Sun and other stars by analyzing the spectrum of light they emitted. I'm nonetheless going out on a limb and guessing that science will never, ever answer what I call "The Question": Why is there something rather than nothing? You might think this prediction is safe to the point of triviality, but certain prominent scientists are claiming not merely that they can answer The Question but that they have already done so. Physicist Lawrence Krauss peddles this message in his new book A Universe From Nothing: Why There Is Something Rather Than Nothing (Free Press, 2012). Krauss's answer is nothing new. Decades ago, physicists such as the legendary John Wheeler proposed that, according to the probabilistic dictates of quantum field theory, even an apparently perfect vacuum seethes with particles and antiparticles popping into and out of existence. In 1990, the Russian physicist Andrei Linde assured me that our entire cosmos—as well as an infinite number of other universes—might have sprung from a primordial "quantum fluctuation." I took this notion—and I think Linde presented it—as a bit of mind-titillating whimsy. But Krauss asks us to take the quantum theory of creation seriously, and so does evolutionary biologist Richard Dawkins. "Even the last remaining trump card of the theologian, 'Why is there something rather than nothing?,' shrivels up before your eyes as you read these pages," Dawkins writes in an afterword to Krauss's book. "If On the Origin of Species was biology's deadliest blow to supernaturalism, we may come to see A Universe From Nothing as the equivalent from cosmology." Whaaaa…??!! Dawkins is comparing the most enduringly profound scientific treatise in history to a pop-science book that recycles a bunch of stale ideas from physics and cosmology. This absurd hyperbole says less about the merits of Krauss's derivative book than it does about the judgment-impairing intensity of Dawkins's hatred of religion. Philosopher David Albert, a specialist in quantum theory, offers a more balanced assessment of Krauss's book in The New York Times Book Review. And by balanced assessment, I mean merciless smack down. Albert asks, "Where, for starters, are the laws of quantum mechanics themselves supposed to have come from?" Modern quantum field theories, Albert points out, "have nothing whatsoever to say on the subject of where those fields came from, or of why the world should have consisted of the particular kinds of fields it does, or of why it should have consisted of fields at all, or of why there should have been a world in the first place. Period. Case closed. End of story." If you want a more satisfying exploration of The Question, check out Why Does the World Exist? by the science and philosophy writer Jim Holt, to be published this summer by W.W. Norton. Holt is neither foolish nor arrogant enough to claim that he or anyone else has answered The Question. Rather, he ponders and talks about The Question not only with physicists, notably Linde, Steven Weinberg and David Deutsch, but also with philosophers, theologians and other non-scientists. And why not? 
When it comes to The Question, everyone and no one is an expert, because The Question is different in kind than any other question posed by science. Ludwig Wittgenstein was trying to make this point when he wrote, in typically cryptic fashion, "Not how the world is, is the mystical, but that it is." In my favorite section of Holt's book, he chats with novelist John Updike, whose work explored our yearning for spiritual as well as sexual fulfillment. Updike prided himself on keeping abreast of the latest scientific ideas, and one of his novels, Roger's Version (Random House, 1986), features characters who debate whether science can displace religion as a source of ultimate answers. Updike told Holt that he doubted whether science would ever produce a satisfying answer to The Question. Science, Updike said, "aspires, like theology used to, to explain absolutely everything. But how can you cross this enormous gulf between nothing and something?" The theory of inflation, Updike noted, which Linde and other theorists have promoted as a theory of cosmic creation, "seems sort of put forward on a smile and a shoeshine." Updike, who died in 2009, a year after Holt interviewed him, toyed with the idea that, if there is a God, He created the world out of boredom. Thirty years ago, I had a, shall we say, experience that left me pondering a slightly different theological explanation of creation: If there is a God, He created this heart-breaking world because He was suffering from a cosmic identity crisis, triggered by His own confrontation with The Question. In other words, God is as mystified as we are by existence. This idea, which I divulged in The End of Science (Addison Wesley, 1996) and Rational Mysticism (Houghton Mifflin, 2003), is totally wacky, of course, but no more so, to my mind, than the preposterous claim of Krauss and other scientists that they have solved the riddle of existence. Science has told us so much about our world! We now understand, more or less, what reality is made of and what forces push and pull the stuff of existence to and fro. Scientists have also constructed a plausible, empirically founded narrative of the history of the cosmos and of life on Earth. But when scientists insist that they have solved, or will soon solve, all mysteries, including the biggest mystery of all, they do a disservice to science; they become the mirror images of the religious fundamentalists they despise. Comte was wrong about how science is limited, but not that it is limited.
Massless Black Holes and Charged Wormholes in String Theory We discuss the zero mass pointlike solutions and charged Einstein-Rosen bridges (wormholes) that arise from the dyonic black hole solution of the Einstein-Maxwell-dilaton theory. These massless black holes exist individually in spacetime, unlike the known massless solutions, which come in pairs with opposite signs for their masses. In order to construct a massless object, we choose the integration constants of the solution to have specific values. The massless solutions present some problems: in one case the dilaton field is complex (or the gauge field has negative kinetic energy), and in the other case the solution has negative entropy and temperature, or it is a naked singularity in the extremal limit. For the first case, the observables computed are real quantities. This massless solution also allows the bridge construction, and we obtain an analytical and static charged wormhole solution, which satisfies the null energy condition.
Signatures of extra dimensions in gravitational waves Considering gravitational waves propagating on the most general (4+N)-dimensional space-time, we investigate the effects due to the N extra dimensions on the four-dimensional waves. All wave equations are derived in general and discussed. On Minkowski_4 times an arbitrary Ricci-flat compact manifold, we find: a massless wave with an additional polarization, the breathing mode, and extra waves with high frequencies fixed by Kaluza–Klein masses. We discuss whether these two effects could be observed.
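As background for the last claim (my addition, not part of the abstract): in Kaluza–Klein reduction the masses of the extra towers are set by the eigenvalues of the Laplacian on the compact manifold; for the simplest illustrative case of a single circle of radius R the tower is

    m_n = \frac{n}{R}, \qquad n = 1, 2, 3, \dots \quad (\hbar = c = 1),

so the corresponding extra gravitational-wave modes oscillate at frequencies at or above roughly 1/R, which is why small extra dimensions show up only as very high-frequency signals.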
Spacetime Fluctuations and a Stochastic Schrödinger-Newton Equation We propose a stochastic modification of the Schrödinger-Newton equation which takes into account the effect of extrinsic spacetime fluctuations. We use this equation to demonstrate gravitationally induced decoherence of two gaussian wave-packets, and obtain a decoherence criterion similar to those obtained in the earlier literature in the context of effects of gravity on the Schrödinger equation. Is the apparent collapse of the wave-function during a quantum measurement caused by a dynamical physical process which results from possible modification of the Schrödinger equation? Or can it be explained within the framework of standard quantum theory via environmental decoherence and the many-worlds interpretation, or through a reformulation such as Bohmian mechanics? In the coming years it might become possible to decisively answer this question experimentally, thanks to advances in technology, and new innovative ideas for experiments based on optomechanics and interferometry [1]. The focus of such experiments and ideas for experiments is to test dynamical collapse theories such as Continuous Spontaneous Localisation [CSL] which involve a stochastic nonlinear modification of the Schrödinger equation. CSL is a phenomenological theory with two free parameters, designed to solve the measurement problem, explain the Born probability rule, and to explain the apparent absence of superpositions of macroscopic states [2, 3]. However, at the present state of understanding it is unclear as to what is the fundamental origin of CSL: why should there be a stochastic modification of the Schrödinger equation? Possible explanations include the existence of a fundamental stochastic field in nature, which couples nonlinearly to matter fields and results in an anti-Hermitean modification to the Hamiltonian. Alternatively, quantum theory may be a coarse-grained approximation to a deeper theory such as Trace Dynamics, and stochastic modifications arise when one goes beyond the leading order approximation. A third possible explanation is that gravity plays a role in bringing about collapse of the wave-function [1, 4, 5]. The present paper is concerned with a specific, modest aspect concerning the possible role of gravity. The idea that gravity plays a role in collapse of the wave-function has been around for the last fifty years, and has been pursued by many investigators starting with the works [6–16], and also pursued by Diosi and collaborators [17–20]. The basic principle behind the idea is easy to state and understand. Gravitational fields are produced by material bodies; and largely by macroscopic material bodies. However even macroscopic bodies are not exactly classical, and their position and momenta are subject to the uncertainty principle. It is plausible then [unless one invokes semiclassical gravity] that the gravitational field produced by these bodies is also subject to intrinsic fluctuations, which induce stochasticity in the space-time geometry, which cannot be ignored. Thus when one is studying the Schrödinger evolution of a quantum system on a background spacetime (even a flat Minkowski spacetime), one can in principle not ignore these spacetime fluctuations.
When one makes models to see how these fluctuations affect the standard Schrödinger evolution, it is found [as should be the case] that microscopic objects are not affected by the gravitational fluctuations, so that the conventional picture of quantum theory and the linear superposition principle continues to hold for them. However, the Schrödinger evolution of a macroscopic object is significantly affected, leading to gravitationally induced decoherence, thus providing at least a partial resolution of the measurement problem. While it has not been shown that collapse of the wave-function can be achieved through gravity, models strongly suggest that fundamental decoherence [loss of interference without loss of superposition] can be achieved through gravity [without the need for an environment]. It is hoped that when properly understood, gravity might be able to provide an underlying explanation for CSL. One of the earliest pioneering works investigating gravity induced decoherence is due to Karolyhazy, who proposed that the quantum nature of objects imposes a minimum uncertainty [different from Planck length] on the accuracy with which length and time intervals can be measured. This is interpreted as an intrinsic property of spacetime, which is modelled as resulting from a stochastic metric perturbation having a [non-white] gaussian two-point correlation. The Schrödinger evolution of a quantum object is modified to include the effect of this stochastic potential, and it is shown that gravitational decoherence can be achieved for macroscopic objects. This model has been studied further by Karolyhazy and collaborators. In a different model, Diosi has modelled the intrinsic quantum uncertainty of the Newtonian gravitational potential [resulting from the quantum nature of the probe] by a [white-noise] gaussian correlation, and again demonstrated gravitational decoherence. This model has also been studied further by various authors. The Karolyhazy model and the Diosi model have been recently compared in [21]. Models such as those of Karolyhazy and Diosi study the effect of extrinsic space-time fluctuations on the Schrödinger equation. A different gravitational effect is due to the self-gravity of the quantum object: how does the Schrödinger equation get modified by the gravity of the very particle for which this equation is being written? One possible way to describe this effect is to propose that to leading order the particle produces a classical potential satisfying the Poisson equation, whose source is a density proportional to the quantum probability density. The Schrödinger equation is then modified to include this potential [a kind of back-reaction] and the modified equation is known as the Schrödinger-Newton [SN] equation [22–25]. The SN equation has been studied extensively in many papers, for its properties and possible limitations [26–37]. One important feature of the SN equation is a gravitationally induced inhibition of dispersion of a wave-packet [27]. However, the SN equation is not intended to explain gravitationally induced decoherence or collapse of the wave-function. It cannot achieve that because it lacks a stochastic feature, unlike the Karolyhazy and Diosi models, which employ a stochastic gravitational field in the Schrödinger equation. The SN equation only incorporates the deterministic back-reaction of self-gravity in a semiclassical fashion, and one worrisome outcome of this deterministic nonlinearity is superluminal signalling.
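For reference (a standard form quoted from the Schrödinger-Newton literature, not a new result of this paper), the deterministic SN system for a single particle of mass m is

    i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + m\,\Phi\,\psi, \qquad \nabla^2 \Phi = 4\pi G\, m\, |\psi|^2,

so the Newtonian potential Φ is sourced by the particle's own probability density; the stochastic modification discussed here adds a fluctuating potential on top of this self-interaction term.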
It is desirable to modify the SN equation into a stochastic equation, possibly by including higher order corrections to self-gravity, or otherwise. This brings home the possibility that the SN equation can take into account self-gravity as well as perhaps produce gravitational decoherence, though it remains to be seen whether the superluminal feature can be gotten rid of by including stochasticity. Another interesting aspect which seems worth considering, and which is the subject of the present paper, is to simultaneously take into account the effect of self-gravity and of extrinsic spacetime fluctuations. After all, that seems to be a rather natural and wholesome way of accounting for the role of gravity in Schrödinger evolution. In this spirit, we write down, in the next section, a modified SN equation which includes a stochastic potential representing extrinsic spacetime uncertainty and having Diosi's white noise correlation. In Section III, we use this stochastic equation to demonstrate gravitational decoherence of two gaussian states of a free particle, and obtain decoherence criteria similar to those obtained by Diosi. In Section IV we discuss the implications of our results, and compare them with earlier work. Details of some of the integrals that appear in Section III are given in Appendix I.
What is General Relativity?  Abstract. General relativity is a set of physical and geometric principles, which lead to a set of (Einstein) field equations that determine the gravitational field, and to the geodesic equations that describe light propagation and the motion of particles on the background. But open questions remain, including: What is the scale on which matter and geometry are dynamically coupled in the Einstein equations? Are the field equations valid on small and large scales? What is the largest scale on which matter can be coarse grained while following a geodesic of a solution to Einstein’s equations? We address these questions. If the field equations are causal evolution equations, whose average on cosmological scales is not an exact solution of the Einstein equations, then some simplifying physical principle is required to explain the statistical homogeneity of the late epoch Universe. Such a principle may have its origin in the dynamical coupling between matter and geometry at the quantum level in the early Universe. This possibility is hinted at by diverse approaches to quantum gravity which find a dynamical reduction to two effective dimensions at high energies on one hand, and by cosmological observations which are beginning to strongly restrict the class of viable inflationary phenomenologies on the other. We suggest that the foundational principles of general relativity will play a central role in reformulating the theory of spacetime structure to meet the challenges of cosmology in the 21st century.
Gravity Induced Wave Function Collapse Starting from an idea of S.L. Adler [1], we develop a novel model of gravity-induced spontaneous wave-function collapse. The collapse is driven by complex stochastic fluctuations of the spacetime metric. After having derived the fundamental equations, we prove the collapse and amplification mechanism, the two most important features of a consistent collapse model. Under reasonable simplifying assumptions, we constrain the strength ξ of the complex metric fluctuations with available experimental data. We show that ξ ≥ 10^{-26} in order for the model to guarantee classicality of macro-objects, and at the same time ξ ≤ 10^{-20} in order not to contradict experimental evidence. As a comparison, in the recent discovery of gravitational waves in the frequency range 35 to 250 Hz, the (real) metric fluctuations reached a peak ξ ∼ 10^{-21}.
New survey hints at exotic origin for the Cold Spot: first evidence for the Multiverse? A supervoid is unlikely to explain a 'Cold Spot' in the cosmic microwave background, according to the results of a new survey, leaving room for exotic explanations like a collision between universes. The researchers, led by postgraduate student Ruari Mackenzie and Professor Tom Shanks in Durham University's Centre for Extragalactic Astronomy, publish their results in Monthly Notices of the Royal Astronomical Society. The cosmic microwave background, a relic of the Big Bang, covers the whole sky. At a temperature of 2.73 degrees above absolute zero (or -270.43 degrees Celsius), the CMB has some anomalies, including the Cold Spot. This feature, about 0.00015 degrees colder than its surroundings, was previously claimed to be caused by a huge void, billions of light years across, containing relatively few galaxies. The accelerating expansion of the universe causes voids to leave subtle redshifts on light as it passes through via the integrated Sachs-Wolfe effect. In the case of the CMB this is observed as cold imprints. It was proposed that a very large foreground void could, in part, imprint the CMB Cold Spot which has been a source of tension in models of standard cosmology. Previously, most searches for a supervoid connected with the Cold Spot have estimated distances to galaxies using their colours. With the expansion of the universe more distant galaxies have their light shifted to longer wavelengths, an effect known as a cosmological redshift. The more distant the galaxy is, the higher its observed redshift. By measuring the colours of galaxies, their redshifts, and thus their distances, can be estimated. These measurements though have a high degree of uncertainty. In their new work, the Durham team presented the results of a comprehensive survey of the redshifts of 7,000 galaxies, harvested 300 at a time using a spectrograph deployed on the Anglo-Australian Telescope. From this higher fidelity dataset, Mackenzie and Shanks see no evidence of a supervoid capable of explaining the Cold Spot within the standard theory. The researchers instead found that the Cold Spot region, before now thought to be underpopulated with galaxies, is split into smaller voids, surrounded by clusters of galaxies. This 'soap bubble' structure is much like the rest of the universe, illustrated in Figure 2 by the visual similarity between the galaxy distributions in the Cold Spot area and a control field elsewhere.  Mackenzie commented: "The voids we have detected cannot explain the Cold Spot under standard cosmology. There is the possibility that some non-standard model could be proposed to link the two in the future but our data place powerful constraints on any attempt to do that." If there really is no supervoid that can explain the Cold Spot, simulations of the standard model of the universe give odds of 1 in 50 that the Cold Spot arose by chance. Shanks added: "This means we can't entirely rule out that the Spot is caused by an unlikely fluctuation explained by the standard model. But if that isn't the answer, then there are more exotic explanations. 'Perhaps the most exciting of these is that the Cold Spot was caused by a collision between our universe and another bubble universe. If further, more detailed, analysis of CMB data proves this to be the case then the Cold Spot might be taken as the first evidence for the multiverse - and billions of other universes may exist like our own." 
For the moment, all that can be said is that the lack of a supervoid to explain the Cold Spot has tilted the balance towards these more unusual explanations, ideas that will need to be further tested by more detailed observations of the CMB.
For String-Theory to be a Theory of Fermions as well as Bosons, SuperSymmetry is Needed: Here's why SUSY is True In 2002, Pierre Deligne proved a remarkable theorem on what mathematically is called Tannakian reconstruction of tensor categories. Here I give an informal explanation of what this theorem says and why it has profound relevance for theoretical particle physics: Deligne's theorem on tensor categories combined with Wigner's classification of fundamental particles implies a strong motivation for expecting that fundamental high energy physics exhibits supersymmetry. I explain this in a moment. But that said, before continuing I should make the following Side remark. Recall that what these days is being constrained more and more by experiment are models of "low energy supersymmetry": scenarios where a fundamental high energy supergravity theory sits in a vacuum with the exceptional property that a global supersymmetry transformation survives. Results such as Deligne's theorem have nothing to say about the complicated process of stagewise spontaneous symmetry breaking of a high energy theory down to the low energy effective theory of its vacua. Instead they say something (via the reasoning explained in a moment) about the mathematical principles which underlie fundamental physics fundamentally, i.e. at high energy. Present experiments, for better or worse, say nothing about high energy supersymmetry. Incidentally, it is also high energy supersymmetry, namely supergravity, which is actually predicted by string theory (this is a theorem: the spectrum of the fermionic "spinning string" miraculously exhibits local spacetime supersymmetry), while low energy supersymmetry needs to be imposed by hand in string theory (namely by assuming Calabi-Yau compactifications, there is no mechanism in the theory that would single them out). End of side remark. Now first recall the idea of Wigner's classification of fundamental particles. In order to bring out the fundamental force of Wigner classification, I begin by recalling some basics of the fundamental relevance of local spacetime symmetry groups: Given a symmetry group G and a subgroup H ↪ G, we may regard this as implicitly defining a local model of spacetime: We think of G as the group of symmetries of the would-be spacetime and of H as the subgroup of symmetries that fix a given point. Assuming that G acts transitively, this means that the space itself is the coset X = G/H. For instance if X = R^{d-1,1} is Minkowski spacetime, then its isometry group G = Iso(R^{d-1,1}) is the Poincaré group and H = O(d-1,1) is the Lorentz group. But it also makes sense to consider alternative local spacetime symmetry groups, such as G = O(d-1,2), the anti-de Sitter group, etc. The idea of characterizing a local spacetime as the coset of its local symmetry group by the stabilizer of any one of its points is called Klein geometry. To globalize this, consider a manifold X whose tangent spaces look like G/H, and such that the structure group of its tangent bundle is reduced to the action of H. This is called a Cartan geometry. For the previous example where G/H is Poincaré/Lorentz, Cartan geometry is equivalently pseudo-Riemannian geometry: the reduction of the structure group to the Lorentz group is equivalently a "vielbein field" that defines a metric, hence a field configuration of gravity.
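A worked instance of the coset statement above (my own illustration, using only standard facts): for d = 4 the Poincaré group has dimension 10 and the Lorentz group dimension 6, and the quotient reproduces Minkowski spacetime,

    \mathbb{R}^{3,1} \cong \mathrm{Iso}(\mathbb{R}^{3,1})\,/\,\mathrm{O}(3,1), \qquad \dim = 10 - 6 = 4,

with the Lorentz group O(3,1) appearing as the stabilizer of a chosen point (the origin).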
For other choices of G and H the same construction unifies essentially all concepts of geometry ever considered, see the table of examples here. This is a powerful formulation of spacetime geometry that regards spacetime symmetry groups as more fundamental than spacetime itself. In the physics literature it is essentially known as the first-order formulation of gravity. Incidentally, this is also the way to obtain super-spacetimes: simply replace the Poincaré group by its super-group extension, the super-Poincaré group (super-Cartan geometry). But why should one consider that? We get to this in a moment. Now as we consider quantum fields covariantly on such a spacetime, then locally all fields transform linearly under the symmetry group G, hence they form linear representations of the group G. Given two G-representations, we may form their tensor product to obtain a new representation. Physically this corresponds to combining two fields to the joint field of the composite system. Based on this, Wigner suggested that the elementary particle species are to be identified with the irreducible representations of G, those which are not the tensor product of two non-trivial representations. Indeed, if one computes, in the above example, the irreducible unitary representations of the Poincaré group, then one finds that these are labeled by the quantum numbers of elementary particles seen in experiment, mass and spin, and helicity for massless particles. One may do the same for other model spacetimes, such as (anti-)de Sitter spacetimes. Then the particle content is given by the irreducible representations of the corresponding symmetry groups, the (anti-)de Sitter groups, etc. The point of this digression via Klein geometry and Cartan geometry is to make the following important point: the spacetime symmetry group is more fundamental than the spacetime itself. Therefore we should not be asking: What are all possible types of spacetimes over which we could consider Wigner classification of particles? but we should ask: What are all possible symmetry groups such that their irreducible representations behave like elementary particle species? This is the question that Deligne's theorem on tensor categories is the answer to. To give a precise answer, one first needs to make the question precise. But it is well known how to do this: A collection of things (for us: particle species)
– which may be tensored together (for us: compound systems may be formed),
– where two things may be exchanged in the tensor product (two particles may be exchanged) such that exchanging twice is the identity operation,
– such that every thing has a dual under tensoring (for every particle there is an anti-particle),
– and such that the homomorphisms between things (for us: the possible interaction vertices between particle species) form vector spaces,
is said to be a linear tensor category. We also add the following condition, which physically is completely obvious, but necessary to make explicit to prove the theorem below: Every thing consists of a finite number of particle species, and the compound of n copies of N particle species contains at most N^n copies of fundamental particle species. Mathematically this is the condition of "subexponential growth", see here for the mathematical detail. Key examples of tensor categories are the categories of finite-dimensional representations of groups; but not all tensor categories are necessarily of this form.
The question for those which are is called Tannaka duality: the problem of starting with a given tensor category and reconstructing the group that it is the category of representations of. The case of interest to us here is that of tensor categories which are ℂ-linear, hence where the spaces of particle interaction vertices are complex vector spaces. More generally we could consider k-linear tensor categories, for k any field of characteristic 0. Deligne studied the question: Under which conditions is such a tensor category the representation category of some group, and if so, of which kind of group? Phrased in terms of our setup this question is: Given any collection of things that behave like particle species and spaces of interaction vertices between these, under which condition is there a local spacetime symmetry group such that these are the particles in the corresponding Wigner classification of quanta on that spacetime, and what kinds of spacetime symmetry groups arise this way? Now the answer of Deligne's theorem on tensor categories is this: Every k-linear tensor category satisfying the above conditions is of this form; the class of groups arising this way is precisely the (algebraic) super-groups. This is due to Pierre Deligne, Catégorie Tensorielle, Moscow Math. Journal 2 (2002) no. 2, 227-248 (pdf), based on Pierre Deligne, Catégories Tannakiennes, Grothendieck Festschrift, vol. II, Birkhäuser Progress in Math. 87 (1990) pp. 111-195; reviewed in Victor Ostrik, Tensor categories (after P. Deligne) (arXiv:math/0401347) and in Pavel Etingof, Shlomo Gelaki, Dmitri Nikshych, Victor Ostrik, section 9.11 in Tensor categories, Mathematical Surveys and Monographs, Volume 205, American Mathematical Society, 2015 (pdf). Phrased in terms of our setup this means: Every sensible collection of particle species and spaces of interaction vertices between them is the collection of elementary particles in the Wigner classification for some local spacetime symmetry group; the local spacetime symmetry groups appearing this way are precisely super-symmetry groups. Notice here that a super-group is understood to be a group that may contain odd-graded components. So also an ordinary group is a super-group in this sense. The statement does not say that spacetime symmetry groups need to have odd supergraded components (that would evidently be false). But it says that the largest possible class of those groups that are sensible as local spacetime symmetry groups is precisely the class of possibly-super groups. Not more. Not less. Hence Deligne's theorem — when regarded as a statement about local spacetime symmetry via Wigner classification as above — is a much stronger statement than for instance the Coleman-Mandula theorem + Haag-Lopuszanski-Sohnius theorem which is traditionally invoked as a motivation for supersymmetry: For Coleman-Mandula+Haag-Lopuszanski-Sohnius to be a motivation for supersymmetry, you first of all already need to believe that spacetime symmetries and internal symmetries ought to be unified. Even if you already believe this, then the theorem only tells you that supersymmetry is one possibility to achieve this unification; there might still be an infinitude of other possibilities that you haven't considered yet. For Deligne's theorem the conclusion is much stronger: First of all, the only thing we need to believe about physics, for it to give us information, is an utmost minimum: that particle species transform linearly under spacetime symmetry groups.
For this to be wrong at some fundamental scale we would have to suppose non-linear modifications of quantum physics or other dramatic breakdown of everything that is known about the foundations of fundamental physics. Second, it says not just that local spacetime supersymmetry is one possibility to have sensible particle content under Wigner classification, but that the class of (algebraic) super-groups precisely exhausts the moduli space of possible consistent local spacetime symmetry groups. This does not prove that fundamentally local spacetime symmetry is a non-trivial supersymmetry. But it means that it is well motivated to expect that it might be one.
Lecture for the Fortieth Anniversary of Supergravity In the first part of this lecture, some very basic ideas in supersymmetry and supergravity are presented at a level accessible to readers with modest background in quantum field theory and general relativity. The second part is an outline of a recent paper of the author and his collaborators on the AdS/CFT correspondence applied to the ABJM gauge theory with N = 8 supersymmetry. The first paper on supergravity in D = 4 spacetime dimensions [1] was submitted to the Physical Review in late March, 1976. It was a great honor for me that the fortieth anniversary of this event was one of the features of the 54th Course at the Ettore Majorana Foundation and Centre for Scientific Culture in June, 2016. This note contains some of the material from my lectures there. The first part focuses on the most basic ideas of the subjects of supersymmetry and supergravity, ideas which I hope will be interesting for aspiring physics students. The second part summarizes the results of the paper [2] on what might be called a curiosity of the AdS/CFT correspondence.
On the Phenomenology of String-Theory: Contact with Particle Physics and Cosmology A brief discussion is presented assessing the achievements and challenges of string phenomenology: the subfield dedicated to studying the potential for string-theory to make contact with particle physics and cosmology. Building from the well understood case of the standard model as a very particular example within quantum field theory, we highlight the very few generic observable implications of string theory, most of them inaccessible to low-energy experiments, and indicate the need to extract concrete scenarios and classes of models that could eventually be contrasted with searches in collider physics and other particle experiments as well as in cosmological observations. The impact that this subfield has had in mathematics and in a better understanding of string-theory is emphasised as a spin-off of string phenomenology. Moduli fields, measuring the size and shape of extra dimensions, are highlighted as generic low-energy remnants of string-theory that can play a key role for supersymmetry breaking as well as for inflationary and post-inflationary early universe cosmology. It is argued that the answer to the question in the title should be, as usual, No. Future challenges for this field are briefly mentioned. This essay is a contribution to the conference "Why Trust a Theory?", Munich, December 2015 (arXiv:1612.01569 [hep-th]).
Entanglement of Quantum Clocks through Gravity: the 'Flow' and 'Directionality' of Time are Blurry and Chaotic Significance: We find that there exist fundamental limitations to the joint measurability of time along neighboring space–time trajectories, arising from the interplay between quantum mechanics and general relativity. Because any quantum clock must be in a superposition of energy eigenstates, the mass–energy equivalence leads to a trade-off between the possibilities for an observer to define time intervals at the location of the clock and in its vicinity. This effect is fundamental, in the sense that it does not depend on the particular constitution of the clock, and is a necessary consequence of the superposition principle and the mass–energy equivalence. We show how the notion of time in general relativity emerges from this situation in the classical limit. Abstract: In general relativity, the picture of space–time assigns an ideal clock to each world line. Being ideal, gravitational effects due to these clocks are ignored and the flow of time according to one clock is not affected by the presence of clocks along nearby world lines. However, if time is defined operationally, as a pointer position of a physical clock that obeys the principles of general relativity and quantum mechanics, such a picture is, at most, a convenient fiction. Specifically, we show that the general relativistic mass–energy equivalence implies gravitational interaction between the clocks, whereas the quantum mechanical superposition of energy eigenstates leads to a nonfixed metric background. Based only on the assumption that both principles hold in this situation, we show that the clocks necessarily get entangled through the time dilation effect, which eventually leads to a loss of coherence of a single clock. Hence, the time as measured by a single clock is not well defined. However, the general relativistic notion of time is recovered in the classical limit of clocks. A crucial aspect of any physical theory is to describe the behavior of systems with respect to the passage of time. Operationally, this means establishing a correlation between the system itself and another physical entity, which acts as a clock. In the context of general relativity, time is specified locally in terms of the proper time along world lines. It is believed that clocks along these world lines correlate to the metric field in such a way that their readings coincide with the proper time predicted by the theory—the so-called “clock hypothesis” (1). A common picture of a reference frame uses a latticework of clocks to locate events in space–time (2). An observer, with a particular split of space–time into space and time, places clocks locally, over a region of space. These clocks record the events and label them with the spatial coordinate of the clock nearest to the event and the time read by this clock when the event occurred. The observer then reads out the data recorded by the clocks at his/her location. Importantly, the observer does not need to be sitting next to the clock to do so. We will call an observer who measures time according to a given clock, but not located next to it, a far-away observer. In the clock latticework picture, it is conventionally considered that the clocks are external objects that do not interact with the rest of the universe. This assumption does not treat clocks and the rest of physical systems on equal footing and therefore is artificial.
In the words of Einstein: “One is struck [by the fact] that the theory [of special relativity]… introduces two kinds of physical things, i.e., (1) measuring rods and clocks, (2) all other things, e.g., the electromagnetic field, the material point, etc. This, in a certain sense, is inconsistent…” (3). For the sake of consistency, it is natural to assume that the clocks, being physical, behave according to the principles of our most fundamental physical theories: quantum mechanics and general relativity. In general, the study of clocks as quantum systems in a relativistic context provides an important framework for investigating the limits of the measurability of space–time intervals (4). Limitations to the measurability of time are also relevant in models of quantum gravity (5, 6). It is an open question how quantum mechanical effects modify our conception of space and time and how the usual conception is obtained in the limit where quantum mechanical effects can be neglected. In this work, we show that quantum mechanical and gravitational properties of the clocks put fundamental limits to the joint measurability of time as given by clocks along nearby world lines. As a general feature, a quantum clock is a system in a superposition of energy eigenstates. Its precision, understood as the minimal time in which the state evolves into an orthogonal one, is inversely proportional to the energy difference between the eigenstates (7–11). Due to the mass–energy equivalence, gravitational effects arise from the energies corresponding to the state of the clock. These effects become nonnegligible in the limit of high precision of time measurement. In fact, each energy eigenstate of the clock corresponds to a different gravitational field. Because the clock runs in a superposition of energy eigenstates, the gravitational field in its vicinity, and therefore the space–time metric, is in a superposition. We prove that, as a consequence of this fact, the time dilation of clocks evolving along nearby world lines is ill-defined. We show that this effect is already present in the weak gravity and slow velocities limit, in which the number of particles is conserved. Moreover, the effect leads to entanglement between nearby clocks, implying that there are fundamental limitations to the measurability of time as recorded by the clocks. The limitation, stemming from quantum mechanical and general relativistic considerations, is of a different nature than the ones in which the space–time metric is assumed to be fixed (4). Other works regarding the lack of measurability of time due to the effects the clock itself has on space–time (5, 6) argue that the limitation arises from the creation of black holes. We will show that our effect is independent of this effect, too. Moreover, it is significant in a regime orders of magnitude before a black hole is created. Finally, we recover the classical notion of time measurement in the limit where the clocks are increasingly large quantum systems and the measurement precision is coarse enough not to reveal the quantum features of the system. In this way, we show how the (classical) general relativistic notion of time dilation emerges from our model in terms of the average mass–energy of a gravitating quantum system. From a methodological point of view, we propose a gedanken experiment where both general relativistic time dilation effects and quantum superpositions of space–times play significant roles.
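To make the stated precision/energy trade-off concrete (my illustration, using only the textbook orthogonalization-time argument, not a result specific to this paper): for a clock prepared in an equal superposition of two energy eigenstates, the state first becomes orthogonal to its initial configuration after

    t_\perp = \frac{\pi\hbar}{E_1 - E_0},

since (|E_0⟩ + |E_1⟩)/√2 evolves to (|E_0⟩ + e^{-i(E_1-E_0)t/\hbar}|E_1⟩)/√2, which is orthogonal to the initial state once the relative phase reaches π. Higher clock precision therefore requires a larger energy gap and, via mass–energy equivalence, a larger gravitational disturbance on nearby clocks.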
Our intention, as is the case for gedanken experiments, is to take distinctive features from known physical theories (quantum mechanics and general relativity, in this case) and explore their mutual consistency in a particular physical scenario. We believe, based on the role gedanken experiments played in the early days of quantum mechanics and relativity, that such considerations can shed light on regimes for which there is no complete physical theory and can provide useful insights into the physical effects to be expected at regimes that are not within the reach of current experimental capabilities. Discussion: In the (classical) picture of a reference frame given by general relativity, an observer sets an array of clocks over a region of a spatial hypersurface. These clocks trace world lines and tick according to the value of the metric tensor along their trajectory. Here we have shown that, under an operational definition of time, this picture is untenable. The reason does not only lie in the limitation of the accuracy of time measurement by a single clock, coming from the usual quantum gravity argument in which a black hole is formed when the energy density used to probe space–time lies inside the Schwarzschild radius for that energy. Rather, the effect we predict here comes from the interaction between nearby clocks, given by the mass–energy equivalence, the validity of the Einstein equations, and the linearity of quantum theory. We have shown that clocks interacting gravitationally get entangled due to gravitational time dilation: The rate at which a single clock ticks depends on the energy of the surrounding clocks. This interaction produces a mixing of the reduced state of a single clock, with a characteristic decoherence time after which the system is no longer able to work as a clock. Although the regime of energies and distances in which this effect is considerable is still far away from the current experimental capabilities, the effect is significant at energy scales that exist naturally in subatomic particle bound states. These results suggest that, in the accuracy regime where the gravitational effects of the clocks are relevant, time intervals along nearby world lines cannot be measured with arbitrary precision, even in principle. This conclusion may lead us to question whether the notion of time intervals along nearby world lines is well defined. Because the space–time distance between events, and hence the question as to whether the events are space-like, light-like, or time-like separated, depend on the measurability of time intervals, one can expect that the situations discussed here may lead to physical scenarios with indefinite causal structure (25). The notion of well-defined time measurability is obtained only in the limit of high-dimensional quantum systems subjected to accuracy-limited measurements. Moreover, we have shown that our model reproduces the classical time dilation characteristic of general relativity in the appropriate limit of clocks as spin coherent states. This limit is consistent with the semiclassical limit of gravity in the quantum regime, in which the energy–momentum tensor is replaced by its expectation value, despite the fact that, in general, the effect cannot be understood within this approximation.
The operational approach presented here and the consequences obtained from it suggest that considering clocks as real physical systems instead of idealized objects might lead to new insights concerning the phenomena to be expected at regimes where both quantum mechanical and general relativistic effects are relevant.
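For orientation, here is the promised sketch of the two ingredients invoked in this item, written in our own notation (standard textbook results rather than the paper's specific model). A clock prepared in an equal superposition of two energy eigenstates separated by \(\Delta E\) first reaches an orthogonal (distinguishable) state after
\[
t_\perp = \frac{\pi\hbar}{\Delta E},
\]
so higher precision demands a larger energy spread. At the same time, by mass–energy equivalence that energy gravitates, and a second clock at distance \(x\) ticks at the dilated rate
\[
d\tau \simeq \left(1 + \frac{\Phi(x)}{c^2}\right) dt, \qquad \Phi(x) = -\frac{G E}{c^2 x},
\]
in the weak-field limit. If \(E\) is in a superposition, so is the dilation factor, which is the source of the clock–clock entanglement described above.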
Einstein's Theory of General Relativity as a Quantum Field Theory
There is a major difference in how the Standard Model developed and how General Relativity (GR) did, and this difference still influences how we think about them today. The Standard Model developed hand in hand with Quantum Field Theory (QFT). Quantum Electrodynamics (QED) required the development of renormalization theory. Yang–Mills (YM) theory required the understanding of gauge invariance, path integrals and Faddeev–Popov ghosts. To be useful, Quantum Chromodynamics (QCD) required understanding asymptotic freedom and confinement. The weak interaction needed the Brout–Englert–Higgs mechanism, and also dimensional regularization for 't Hooft's proof of renormalizability. Only after all these QFT developments occurred could we formulate the Standard Model as a theory. In contrast, General Relativity was fully formulated 100 years ago. It has been passed down to us as a geometric theory: "there is no gravitational force, but only geodesic motion in curved spacetime". And the mathematical development of the classical theory has been quite beautiful. But because the theory was formulated so long ago, there were many attempts to make a quantum theory that were really premature, and these gave quantum general relativity a bad reputation. We simply did not yet have the tools to do the job fully. Indeed, making a QFT out of General Relativity requires all the tools of QFT that the Standard Model uses, plus the additional development of Effective Field Theory (EFT). So, although many people made important progress as each new tool came into existence, we really did not have all the tools in place until the 1990s. So, let us imagine starting over. We can set out to develop a theory of gravity from the QFT perspective. While there are remaining problems with quantum gravity, the bad reputation that it initially acquired is not really deserved. The QFT treatment of General Relativity is successful as an EFT and it forms a well-defined QFT in the modern sense; a schematic example of the resulting expansion is sketched after the contents below. Maybe it will even survive longer than the Standard Model.
Contents:
1 Constructing GR as a Gauge Theory: A QFT Point of View
1.1 Preliminaries
1.2 Gauge Theories: Short Reminder
1.2.1 Abelian Case
1.2.2 Non–Abelian Case
1.3 Gravitational Field from Gauging Translations
1.3.1 General Coordinate Transformations
1.3.2 Matter Sector
1.3.3 Gravity Sector
2 Fermions in General Relativity
3 Weak–Field Gravity
3.1 Gauge Transformations
3.2 Newton's Law
3.3 Gauge Invariance for a Scalar Field
3.4 Schrödinger Equation
4 Second Quantization of Weak Gravitational Field
4.1 Second Quantization
4.2 Propagator
4.3 Feynman Rules
5 Background Field Method
5.1 Preliminaries
5.1.1 Toy Example: Scalar QED
5.2 Generalization to Other Interactions
5.2.1 Faddeev–Popov Ghosts
5.3 Background Field Method in GR
6 Heat Kernel Method
6.1 General Considerations
6.2 Applications
6.3 Gauss–Bonnet Term
6.4 The Limit of Pure Gravity
7 Principles of Effective Field Theory
7.1 Three Principles of Sigma–Models
7.2 Linear Sigma–Model
7.2.1 Test of Equivalence
7.3 Loops
7.4 Chiral Perturbation Theory
8 General Relativity as an Effective Field Theory
8.1 Degrees of Freedom and Interactions
8.2 Most General Effective Lagrangian
8.3 Quantization and Renormalization
8.4 Fixing the EFT Parameters
8.4.1 Gravity without Tensor Indices
8.5 Predictions: Newton's Potential at One Loop
8.6 Generation of Reissner–Nordström Metric through Loop Corrections
9 GR as EFT: Further Developments
9.1 Gravity as a Square of Gauge Theory
9.2 Loops without Loops
9.3 Application: Bending of Light in Quantum Gravity
10 Infrared Properties of General Relativity
10.1 IR Divergences at One Loop
10.2 Cancellation of IR Divergences
10.3 Weinberg's Soft Theorem and BMS Transformations
10.4 Other Soft Theorems
10.4.1 Cachazo–Strominger Soft Theorem
10.4.2 One–Loop Corrections to Cachazo–Strominger Soft Theorem
10.4.3 Relation to YM Theories
10.4.4 Double–Soft Limits of Gravitational Amplitudes
11 An Introduction to Non–local Effective Actions
11.1 Anomalies in General
11.2 Conformal Anomalies in Gravity
11.3 Non–local Effective Actions
11.4 An Explicit Example
11.5 Non–local Actions as a Frontier
12 The Problem of Quantum Gravity
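As a schematic illustration of the EFT treatment outlined above (our summary of standard results in this literature, not a quotation from the notes): the low-energy gravitational action is organized as a derivative expansion over curvature invariants,
\[
S = \int d^4x\, \sqrt{-g}\,\Big[\Lambda + \frac{2}{\kappa^2}R + c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu} + \dots\Big], \qquad \kappa^2 = 32\pi G,
\]
and quantum effects at large distances are dominated by the Einstein term. A widely quoted prediction of this framework is the long-distance quantum correction to Newton's potential,
\[
V(r) = -\frac{G m_1 m_2}{r}\left[1 + 3\,\frac{G(m_1+m_2)}{r c^2} + \frac{41}{10\pi}\,\frac{G\hbar}{r^2 c^3} + \dots\right],
\]
where the last term is a parameter-free low-energy quantum gravity effect, numerically tiny but unambiguous.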
Can global internal and spacetime symmetries be connected without supersymmetry? To answer this question, we investigate Minkowski spacetimes with d space-like extra dimensions and point out under which general conditions external symmetries induce internal symmetries in the effective 4-dimensional theories. We further discuss in this context how internal degrees of freedom and spacetime symmetries can mix without supersymmetry, in agreement with the Coleman-Mandula theorem. We present some specific examples which rely on a direct product structure of spacetime, such that orthogonal extra dimensions can have symmetries which mix with global internal symmetries. This mechanism opens up new opportunities to understand global symmetries in particle physics. The nature of spacetime is still a great mystery in fundamental physics: it might be a truly fundamental quantity, or it could be an emergent concept. An appealing and minimalistic possibility is that spacetime and the propagating degrees of freedom have a common origin and stand on an equal footing. In such a scenario, spacetime is an emergent quantity, and there seems to be no reason for it to be restricted to 4-dimensional Poincaré symmetry apart from low-energy phenomenology. The only exceptions are additional time-like dimensions, which typically lead to inconsistencies when requiring causality [1, 2], while there is no consistency problem with additional space-like dimensions. Additional space-like dimensions have therefore been widely studied. If spacetime and particles consist of the same building blocks, then a fundamental connection between these low-energy quantities should exist at high energies. Early attempts in this direction have led to the Coleman-Mandula no-go theorem [3]. The no-go theorem shows, under general assumptions, that a symmetry group accounting for 4-dimensional Minkowski spacetime and internal symmetries has to factor into the direct product of spacetime and internal symmetries. This implies that spacetime and particle symmetries cannot mix in relativistic interacting theories. One way to circumvent the no-go theorem is to study graded symmetry algebras, which introduce fermionic symmetry generators and are known as supersymmetries [4]. The possibility to mix spacetime and internal symmetries in a relativistic theory is a strong theoretical argument for supersymmetry, and supersymmetric extensions of the Standard Model of particle physics are therefore widely studied. However, there is no experimental evidence for supersymmetry, see e.g. [5–7], and it is a fair question to ask: are there alternative ways to circumvent the Coleman-Mandula theorem?
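Stated schematically (our paraphrase of the mechanism, not the paper's notation): in four dimensions the Coleman–Mandula theorem forces the symmetry of an interacting, relativistic S-matrix to factorize as
\[
G \;\cong\; \mathrm{ISO}(3,1) \times G_{\rm int},
\]
whereas on a product spacetime \(M_4 \times \mathbb{R}^d\) the higher-dimensional isometries contain
\[
\mathrm{ISO}(3,1) \times \mathrm{ISO}(d) \;\supset\; \mathrm{ISO}(3,1) \times \mathrm{SO}(d),
\]
so rotations of the orthogonal extra dimensions act, from the 4-dimensional point of view, as a global internal symmetry that can combine with other internal factors without contradicting the theorem.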
The Geometry of Noncommutative Spacetimes
The idea that spacetime may be quantised was first pondered by Werner Heisenberg in the 1930s (see [1] for a historical review). His proposal was motivated by the urgency of providing a suitable regularisation for quantum electrodynamics. The first concrete model of a quantum spacetime, based on a noncommutative algebra of 'coordinates', was constructed by Hartland Snyder in 1947 [2] and extended by Chen-Ning Yang shortly afterwards [3]. With the development of renormalisation theory, the concept of quantum spacetime became, however, less popular. The revival of Heisenberg's idea came in the late 1990s with the development of noncommutative geometry [4,5,6]. The latter is an advanced mathematical theory with roots in functional analysis and differential geometry. It permits one to equip noncommutative algebras with differential calculi, compatible with their inherent topology [7,8,9]. Meanwhile, on the physical side, it became clear that the concept of a point-like event is an idealisation, untenable in the presence of quantum fields. This is because particles can never be strictly localised [10,11,12] and, more generally, quantum states cannot be distinguished by means of observables in a very small region of spacetime (cf. [13], p. 131). Nowadays, there exists a plethora of models of noncommutative (i.e., quantum) spacetimes. Most of them are connected with some quantum gravity theory and founded on the postulate that there exists a fundamental length scale in Nature, of the order of the Planck length λ_P ∼ (Gℏ/c^3)^{1/2} ≈ 1.6 × 10^{-35} m (see, for instance, [14] for a comprehensive review). The 'hypothesis of noncommutative spacetime' is, however, plagued with serious conceptual problems (cf., for instance, [15]). Firstly, one needs to adjust the very notions of space and time. This is not only a philosophical problem, but also a practical one: we need a reasonable physical quantity to parametrise the observed evolution of various phenomena. Secondly, classical spacetime has an inherent Lorentzian geometry, which determines, in particular, the causal relations between events. This raises the question: are noncommutative spacetimes also geometric in any suitable mathematical sense? This riddle affects not only the expected quantum gravity theory, but in fact any quantum field theory, as the latter are deeply rooted in the principles of locality and causality. In this short review we advocate a somewhat different approach to noncommutative spacetime (cf. [16,17]), based on an operational viewpoint. We argue that the latter provides a conceptually transparent framework, although this comes at the price of involving rather abstract mathematical structures. In the next section we introduce the language of C∗-algebras and provide a short survey of the operational viewpoint on noncommutative spacetime. Subsequently, we briefly sketch the rudiments of noncommutative geometry à la Connes [4]. Next, we discuss the notion of causality suitable in this context, summarising the outcome of our recent works [18,19,20,21,22,23,24,25,26]. Finally, we explain how the presumed noncommutative structure of spacetime forces a modification of the axioms of quantum field theory and thus might yield empirical consequences.
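For concreteness, 'a noncommutative algebra of coordinates' usually means commutation relations of the type (standard examples, quoted here for illustration rather than taken from this review):
\[
[\hat{x}^\mu, \hat{x}^\nu] = i\,\ell^2\, \hat{J}^{\mu\nu} \quad \text{(Snyder, with } \hat{J}^{\mu\nu} \text{ the Lorentz generators and } \ell \text{ a fundamental length)},
\]
or the canonical (Moyal-type) case
\[
[\hat{x}^\mu, \hat{x}^\nu] = i\,\theta^{\mu\nu}, \qquad \theta^{\mu\nu} \text{ a constant antisymmetric matrix},
\]
in which sharp localisation of events becomes impossible and a minimal length scale enters the kinematics from the outset.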
Linking the Four Distinct Time-Arrows: On Cosmological Black Holes and the Direction of Time Abstract Macroscopic irreversible processes emerge from fundamental physical laws of reversible character. The source of the local irreversibility seems to be not in the laws themselves but in the initial and boundary conditions of the equations that represent the laws. In this work we propose that the screening of currents by black hole event horizons determines, locally, a preferred direction for the flux of electromagnetic energy. We study the growth of black hole event horizons due to the cosmological expansion and accretion of cosmic microwave background radiation, for different cosmological models. We propose generalized McVittie co-moving metrics and integrate the rate of accretion of cosmic microwave background radiation onto a supermassive black hole over cosmic time. We find that for flat, open, and closed Friedmann cosmological models, the ratio of the total area of the black hole event horizons with respect to the area of a radial co-moving space-like hypersurface always increases. Since accretion of cosmic radiation sets an absolute lower limit to the total matter accreted by black holes, this implies that the causal past and future are not mirror symmetric for any spacetime event. The asymmetry causes a net Poynting flux in the global future direction; the latter is in turn related to the ever increasing thermodynamic entropy. Thus, we expose a connection between four different “time arrows”: cosmological, electromagnetic, gravitational, and thermodynamic.
All of Richard Feynman’s Physics Lectures are now Available Free Online Richard Feynman was something of a rockstar in the physics world, and his lectures at Caltech in the early 1960s were legendary. As Robbie Gonzalez reports for io9, footage of these lectures exists, but they were most famously preserved in a three-volume collection of books called The Feynman Lectures - which has arguably become the most popular collection of physics books ever written. And now you can access the entire collection online for free. The Feynman Lectures on Physics have been made available as part of a collaboration between Caltech and The Feynman Lectures Website, and io9 reports they have been designed to be viewed, equations and all, on any device. The lectures were targeted at first-year university physics students, but they were attended by many graduates and researchers, and even those with a lot of prior physics understanding will be able to get something out of them. And even if you're a physics novice (like me), you can still marvel at the fantastic teaching and amazing science. Like Feynman said: “Physics is like sex: sure, it may give some practical results, but that's not why we do it.” Now stop wasting time online and go and learn from one of the greatest minds in physics.
Time As a Geometric Property of Space
The proper description of time remains a key unsolved problem in science. Newton conceived of time as absolute and universal, something which "flows equably without relation to anything external." In the nineteenth century, the four-dimensional algebraic structure of the quaternions that Hamilton had developed inspired him to suggest that they could provide a unified representation of space and time. With the publication of Einstein's theory of special relativity, these ideas led to the generally accepted Minkowski spacetime formulation of 1908. Minkowski, though, rejected the quaternionic formalism suggested by Hamilton and adopted an approach using four-vectors. The Minkowski framework is indeed found to provide a versatile formalism for describing the relationship between space and time in accordance with Einstein's relativistic principles, but it nevertheless fails to provide more fundamental insights into the nature of time itself. To address this, we begin by exploring the geometric properties of three-dimensional space, which we model using Clifford geometric algebra and which is found to contain sufficient complexity to provide a natural description of spacetime. This description using Clifford algebra is found to provide a natural alternative to the Minkowski formulation, as well as providing new insights into the nature of time. Our main result is that time is the scalar component of a Clifford space and can be viewed as an intrinsic geometric property of three-dimensional space, without the need for the specific addition of a fourth dimension.
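A minimal sketch of the kind of construction involved (the paravector form of Cl(3,0); the conventions here are generic and may differ from the paper's): write an event as the multivector
\[
X = t + x^1 e_1 + x^2 e_2 + x^3 e_3, \qquad e_k^2 = 1,
\]
with Clifford conjugate \(\bar{X} = t - x^k e_k\). Then
\[
X\bar{X} = t^2 - |\mathbf{x}|^2,
\]
so the Minkowski interval emerges with time playing the role of the scalar (grade-zero) part of an element of three-dimensional space, rather than of an extra dimension added by hand.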
John Ellis: Video, MP3, and Slides: 'Where is Particle Physics Going?' The discovery of the Higgs boson at the LHC in 2012 was a watershed in particle physics. Its existence focuses attention on the outstanding questions about physics beyond the Standard Model: Is 'empty' space unstable? What is the dark matter? What is the origin of matter? What is the explanation for the small masses of the neutrinos? How is the hierarchy of mass scales in physics established and stabilized? What drove inflation? How is gravity to be quantized? Many of these issues will be addressed by future runs of the LHC, e.g., by studies of the Higgs boson, and they also motivate possible future colliders.
On Normative Inductive Reasoning and the Status of Theories in Physics: the Impact of String Theory
Evaluating theories in physics used to be easy. Our theories provided very distinct predictions, and experimental uncertainties were small enough that worrying about epistemological problems was not necessary. That is no longer the case. The underdetermination problem between string theory and the Standard Model at currently accessible experimental energies is one example. We need modern inductive methods for this problem: Bayesian methods, or the equivalent Solomonoff induction. To illustrate the proper way to work with induction problems, I will use the concepts of Solomonoff induction to study the status of string theory. Previous attempts have focused on the Bayesian solution, and they run into the question of why string theory is widely accepted with no data backing it. Logically unsupported additions to the Bayesian method were proposed. I will show here that, by studying the problem from the point of view of Solomonoff induction, those additions can be understood much better. They are not ways to update probabilities. Instead, they are considerations about the priors, as well as heuristics for dealing with our finite resources. For the general problem, Solomonoff induction also makes it clear that there is no demarcation problem. Every possible idea can be part of a proper scientific theory; it is just the case that data makes some ideas extremely improbable. Theories where that does not happen must not be discarded. Rejecting ideas outright is just wrong.
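In symbols (our schematic restatement of the standard setup, not the paper's notation): Solomonoff induction assigns each computable theory \(T\) a prior weighted by its algorithmic (description) complexity \(K(T)\), and then updates on data \(D\) in the usual Bayesian way,
\[
P(T) \propto 2^{-K(T)}, \qquad P(T \mid D) = \frac{P(D \mid T)\,P(T)}{\sum_{T'} P(D \mid T')\,P(T')}.
\]
When no available data discriminates between rival theories (the likelihoods are comparable), their ranking is carried almost entirely by the priors, i.e. by how concisely each theory can be described; and since no theory compatible with the data is ever assigned probability zero, there is no sharp demarcation, only degrees of improbability.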
Study Reveals Substantial Evidence of Holographic Universe A UK, Canadian and Italian study has provided what researchers believe is the first observational evidence that our universe could be a vast and complex hologram. Theoretical physicists and astrophysicists, investigating irregularities in the cosmic microwave background (the 'afterglow' of the Big Bang), have found there is substantial evidence supporting a holographic explanation of the universe—in fact, as much as there is for the traditional explanation of these irregularities using the theory of cosmic inflation. The researchers, from the University of Southampton (UK), University of Waterloo (Canada), Perimeter Institute (Canada), INFN, Lecce (Italy) and the University of Salento (Italy), have published findings in the journal Physical Review Letters. A holographic universe, an idea first suggested in the 1990s, is one where all the information that makes up our 3-D 'reality' (plus time) is contained in a 2-D surface on its boundaries. Professor Kostas Skenderis of Mathematical Sciences at the University of Southampton explains: "Imagine that everything you see, feel and hear in three dimensions (and your perception of time) in fact emanates from a flat two-dimensional field. The idea is similar to that of ordinary holograms where a three-dimensional image is encoded in a two-dimensional surface, such as in the hologram on a credit card. However, this time, the entire universe is encoded." Although not an example with holographic properties, it could be thought of as rather like watching a 3-D film in a cinema. We see the pictures as having height, width and crucially, depth—when in fact it all originates from a flat 2-D screen. The difference, in our 3-D universe, is that we can touch objects and the 'projection' is 'real' from our perspective. In recent decades, advances in telescopes and sensing equipment have allowed scientists to detect a vast amount of data hidden in the 'white noise' or microwaves (partly responsible for the random black and white dots you see on an un-tuned TV) left over from the moment the universe was created. Using this information, the team were able to make complex comparisons between networks of features in the data and quantum field theory. They found that some of the simplest quantum field theories could explain nearly all cosmological observations of the early universe. Professor Skenderis comments: "Holography is a huge leap forward in the way we think about the structure and creation of the universe. Einstein's theory of general relativity explains almost everything large scale in the universe very well, but starts to unravel when examining its origins and mechanisms at quantum level. Scientists have been working for decades to combine Einstein's theory of gravity and quantum theory. Some believe the concept of a holographic universe has the potential to reconcile the two. I hope our research takes us another step towards this." The scientists now hope their study will open the door to further our understanding of the early universe and explain how space and time emerged.
From Planck data to Planck era: Observational tests of Holographic Cosmology
We test a class of holographic models for the very early universe against cosmological observations and find that they are competitive with the standard ΛCDM model of cosmology. These models are based on three-dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power law used in ΛCDM, they still provide an excellent fit to the data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to the data without the very low multipoles (i.e. l ≲ 30), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFTs can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.
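The model comparison referred to here is the standard Bayesian-evidence comparison (stated generically; the specific priors and likelihoods are those of the paper): for each model \(M_i\) with parameters \(\theta_i\),
\[
Z_i = \int d\theta_i\, \mathcal{L}(d \mid \theta_i, M_i)\, \pi(\theta_i \mid M_i), \qquad \ln B_{12} = \ln Z_1 - \ln Z_2,
\]
so 'ΛCDM does a better job globally' means the log-Bayes factor favours ΛCDM when all multipoles are included, while the holographic fit is marginally preferred once the non-perturbative region l ≲ 30 is excluded.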
Symmetry, Reference Frames and Relational Quantities in Quantum Mechanics
Abstract. We propose that observables in quantum theory are properly understood as representatives of symmetry-invariant quantities relating one system to another, the latter to be called a reference system. We provide a rigorous mathematical language to introduce and study quantum reference systems, showing that the orthodox "absolute" quantities are good representatives of observable relative quantities if the reference state is suitably localised. We use this relational formalism to critique the literature on the relationship between reference frames and superselection rules, settling a long-standing debate on the subject. In classical physics, symmetry, reference frames and the relativity of physical quantities are intimately connected. The position of a material object is defined as relative to a given frame, and the relative position of object to frame is a shift-invariant quantity (a toy version of this statement is sketched at the end of this item). Galilean directions/angles, velocities and times of events are all relative, and invariant only once the frame-dependence has been accounted for. The relativity of these quantities is encoded in the Galilei group, and the observable quantities are those which are invariant under its action. Einstein's theory engendered a deeper relativity: the length of material bodies and the time between spatially separated events are also frame-dependent quantities, and observables must be sought in accordance with their invariance under the action of the Poincaré group. In quantum mechanics the analogues of those quantities mentioned above (e.g., position, angle) must also be understood as being relative to a reference frame. As in the usual presentation of the classical theory, the reference-frame dependence is implicit. However, in the quantum case, there arises an ambiguity regarding the definition of a reference frame: if it is classical, this raises the spectre of the lack of universality of quantum mechanics, along with technical difficulties surrounding hybrid classical-quantum systems; if quantum, such a frame is subject to difficulties of definition and interpretation arising from indeterminacy, incompatibility, entanglement, and other quantum properties (see, e.g., [1, 2, 3] for early discussions of some of the important issues). In previous work [4, 5], following classical intuition, we have posited that observable quantum quantities are invariant under the relevant symmetry transformations, and examined the properties of quantum reference frames (viewed as physical systems) which allow for the usual description, in which the reference frame is implicit, to be recovered. We constructed a map U which brings out the relative nature of quantities normally presented in "absolute" form in conventional treatments, and which allows for a detailed study of the relativity of states and observables in quantum mechanics and of the crucial role played by reference localisation.
The objectives for this paper are:
1) to provide a mathematically rigorous and conceptually clear framework with which to discuss quantum reference frames, making precise existing work on the subject (e.g., [6]) and providing proofs of the main claims in [5];
2) to construct examples, showing how symmetry dictates that the usual textbook formulation of quantum theory describes the relation between a quantum system and an appropriately localised reference system;
3) to provide further conceptual context for the quantitative trade-off relations proven in [4];
4) to provide an explicit and clear explanation of what it means for states/observables to be defined relative to an external reference frame, and to show how such an external description is compatible with quantum mechanics as a universal theory;
5) to introduce the concepts of absolute coherence and mutual coherence, showing the latter to be required for good approximation of relative quantities by absolute ones, and demonstrating it to be the crucial property for interference phenomena to manifest in the presence of symmetry;
6) to address the questions of dynamics and measurement under symmetry, offering an interpretation of the Wigner-Araki-Yanase theorem based on relational quantities;
7) to analyse simplified models similar to those appearing in the literature purporting to produce superpositions typically thought "forbidden" due to superselection rules, and to provide a critical analysis of large-amplitude limits in this context, guided by two interpretational principles due to Earman and Butterfield, leading directly to
8) to provide a historical account of two differing views on the nature of superselection rules ([7, 8] "versus" [10, 9, 6]), their fundamental status in quantum theory and precisely what restrictions arise in the presence of such a rule, showing how our framework brings a unity to the opposing standpoints;
9) to remove ambiguities and inconsistencies appearing in all previous works on the subject of the connection between superselection rules and reference frames;
10) to offer a fresh perspective, based on the concept of mutual coherence, on the nature and reality of quantum optical coherence, settling a long-standing debate on the subject of whether laser light is "truly" coherent. See also [11] for an important contribution on this topic.
We provide general arguments and many worked examples to show precisely how the framework presented here works in practice; these examples also simplify a number of models appearing in the literature. Our paper constitutes a further effort in a long line of enquiries (e.g., [6, 12, 13, 14, 15, 16, 17]) aimed at capturing the relationalism at the heart of the quantum mechanical world view. The fundamental role of symmetry has not impressed itself strongly upon previous considerations of the relative nature of the quantum description, and we view this work (along with [4, 5]) as opening new lines of enquiry in this direction. Our work is inspired by [6] and visits similar themes, and is complementary to recent work on resource theories (e.g., [6, 18, 19, 20, 21]), which focus primarily on practical questions surrounding, for example, high-precision quantum metrology.
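As the toy illustration promised above (ours, and far simpler than the framework developed in the paper): for a system S and a reference R on a line, a global translation acts as
\[
Q_S \mapsto Q_S + a, \qquad Q_R \mapsto Q_R + a,
\]
so neither "absolute" position is invariant, but the relative position \(Q_S - Q_R\) is. The orthodox description in terms of \(Q_S\) alone is recovered when the reference state is sharply localised around some \(q_0\), since then \(Q_S - Q_R \approx Q_S - q_0\); this is the sense in which absolute quantities are good representatives of relative ones.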
Space, and Spacetime, are Emergent Features of the Universe: they Arise as a Result of Non-Local Dynamical Collapse of the Wave-Function
Collapse models possibly suggest the need for a better understanding of the structure of space-time. We argue that physical space, and space-time, are emergent features of the Universe, which arise as a result of the dynamical collapse of the wave-function. The starting point for this argument is the observation that classical time is external to quantum theory, and that there ought to exist an equivalent reformulation which does not refer to classical time. We propose such a reformulation, based on a non-commutative special relativity. In the spirit of Trace Dynamics, the reformulation is arrived at as a statistical thermodynamics of an underlying classical dynamics in which matter and non-commuting space-time degrees of freedom are matrices obeying arbitrary commutation relations. Inevitable statistical fluctuations around equilibrium can explain the emergence of classical matter fields and classical space-time, in the limit in which the universe is dominated by macroscopic objects. The underlying non-commutative structure of space-time also helps us to better understand the peculiar nature of quantum non-locality, where the effect of wave-function collapse in entangled systems is felt across space-like separations.
Supergravity at 40: Reflections and Perspectives Excellent Read: The fortieth anniversary of the original construction of Supergravity provides an opportunity to combine some reminiscences of its early days with an assessment of its impact on the quest for a quantum theory of gravity. Contents: 1 Introduction 2 The Early Times 3 The Golden Age 4 Supergravity and Particle Physics 5 Supergravity and String Theory 6 Branes and M–Theory 7 Supergravity and the AdS/CFT Correspondence 8 Conclusions and Perspectives.
String-Theory's AdS/dCFT Duality Passes a Crucial Quantum Test We build the framework for performing loop computations in the defect version of N = 4 super Yang-Mills theory which is dual to the probe D5-D3 brane system with background gauge-field flux. In this dCFT, a codimension-one defect separates two regions of space-time with different ranks of the gauge group and three of the scalar fields acquire non-vanishing and space-time-dependent vacuum expectation values. The latter leads to a highly non-trivial mass mixing problem between different colour and flavour components, which we solve using fuzzy-sphere coordinates. Furthermore, the resulting space-time dependence of the theory’s Minkowski space propagators is handled by reformulating these as propagators in an effective AdS4. Subsequently, we initiate the computation of quantum corrections. The one-loop correction to the one-point function of any local gauge-invariant scalar operator is shown to receive contributions from only two Feynman diagrams. We regulate these diagrams using dimensional reduction, finding that one of the two diagrams vanishes, and discuss the procedure for calculating the one-point function of a generic operator from the SU(2) subsector. Finally, we explicitly evaluate the one-loop correction to the one-point function of the BPS vacuum state, finding perfect agreement with an earlier string-theory prediction. This constitutes a highly non-trivial test of the gauge-gravity duality in a situation where both supersymmetry and conformal symmetry are partially broken.
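For orientation, the space-time-dependent vacuum expectation values referred to above take the schematic fuzzy-sphere form familiar from the D3–D5 defect literature (conventions and signs vary; this is a sketch, not a quotation from the paper):
\[
\Phi_i^{\rm cl}(z) = \frac{1}{z}\, t_i \oplus 0_{(N-k)\times(N-k)}, \quad i = 1,2,3, \qquad [t_i, t_j] = i\,\varepsilon_{ijk}\, t_k,
\]
with \(t_i\) a \(k\)-dimensional irreducible representation of su(2), \(z\) the distance to the defect, and the remaining three scalars vanishing. The 1/z profile is what turns the flat-space propagators into effectively AdS_4 ones, and the matrices \(t_i\) are the source of the colour–flavour mixing mentioned in the abstract.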
The Mechanics of Spacetime – A Solid Mechanics Perspective on the Theory of General Relativity
We present an elastic constitutive model of gravity in which we identify physical space with the mid-hypersurface of an elastic hyperplate, called the "cosmic fabric", and spacetime with the fabric's world volume. Using a Lagrangian formulation, we show that the fabric's behavior, as derived from Hooke's Law, is analogous to that of spacetime per the Field Equations of General Relativity. We relate properties of the fabric, such as strain, stress, vibrations, and elastic moduli, to properties of gravity and space, such as gravitational potential, gravitational acceleration, gravitational waves, and the density of the vacuum. By introducing a mechanical analogy of General Relativity, we enable the application of Solid Mechanics tools to problems in Cosmology.
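The core of any such dictionary can be stated schematically (our paraphrase of the general idea, not the paper's precise identifications): in the weak-field limit the deviation of the metric from flatness plays the role of a strain field,
\[
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad g_{00} \approx -\Big(1 + \frac{2\Phi}{c^2}\Big),
\]
so the Newtonian potential \(\Phi\) corresponds to a small fractional deformation of the fabric, gravitational waves correspond to transverse elastic vibrations, and the elastic moduli set the stiffness that fixes their propagation speed.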
Gauge Theories and Fiber Bundles: Applications to Particle Dynamics A theory defined by an action which is invariant under a time-dependent group of transformations can be called a gauge theory. Well known examples of such theories are those defined by the Maxwell and Yang-Mills Lagrangians. It is widely believed nowadays that the fundamental laws of physics have to be formulated in terms of gauge theories. The underlying mathematical structures of gauge theories are known to be geometrical in nature and the local and global features of this geometry have been studied for a long time in mathematics under the name of fibre bundles. It is now understood that the global properties of gauge theories can have a profound influence on physics. For example, instantons and monopoles are both consequences of properties of geometry in the large, and the former can lead to, e.g., CP violation, while the latter can lead to such remarkable results as the creation of fermions out of bosons. Some familiarity with global differential geometry and fibre bundles seems therefore very desirable to a physicist who works with gauge theories. One of the purposes of the present work is to introduce the physicist to these disciplines using simple examples. There exists a certain amount of literature written by general relativists and particle physicists which attempts to explain the language and techniques of fibre bundles. Generally, however, in these admirable reviews, the concepts are illustrated by field theoretic examples like the gravitational and the Yang-Mills systems. This practice tends to create the impression that the subtleties of gauge invariance can be understood only through the medium of complicated field theories. Such an impression, however, is false and simple systems with gauge invariance occur in plentiful quantities in the mechanics of point particles and extended objects. Further, it is often the case that the large scale properties of geometry play an essential role in determining the physics of these systems. They are thus ideal to commence studies of gauge theories from a geometrical point of view. Besides, such systems have an intrinsic physical interest as they deal with particles with spin, interacting charges and monopoles, particles in Yang-Mills fields, etc... We shall present an exposition of these systems and use them to introduce the reader to the mathematical concepts which underlie gauge theories. Many of these examples are known to exponents of geometric quantization, but we suspect that, due in part to mathematical difficulties, the wide community of physicists is not very familiar with their publications. We admit that our own acquaintance with these publications is slight. If we are amiss in giving proper credit, the reason is ignorance and not deliberate intent. The matter is organized as follows. After a brief introduction to the concept of gauge invariance and its relationship to determinism in Section 2, we introduce in Chapters 3 and 4 the notion of fibre bundles in the context of a discussion on spinning point particles and Dirac monopoles. The fibre bundle language provides for a singularity-free global description of the interaction between a magnetic monopole and an electrically charged test particle. Chapter 3 deals with a non-relativistic treatment of the spinning particle. The non-trivial extension to relativistic spinning particles is dealt with in Chapter 5. 
The free particle system, as well as interactions with external electromagnetic and gravitational fields, are discussed in detail. In Chapter 5 we also elaborate on a remarkable relationship between the charge-monopole system and the system of a massless particle with spin. The classical description of Yang-Mills particles with internal degrees of freedom, such as isospin or colour, is given in Chapter 6. We apply the above in a discussion of the classical scattering of particles off a 't Hooft-Polyakov monopole. In Chapter 7 we elaborate on a Kaluza-Klein description of particles with internal degrees of freedom. The canonical formalism and the quantization of most of the preceding systems are discussed in Chapter 8. The dynamical systems given in Chapters 3-7 are formulated on group manifolds. The procedure for obtaining the extension to super-group manifolds is briefly discussed in Chapter 9. In Chapter 10, we show that if a system admits only local Lagrangians for a configuration space Q, then under certain conditions it admits a global Lagrangian when Q is enlarged to a suitable U(1) bundle over Q. Conditions under which a symplectic form is derivable from a Lagrangian are also found. The list of references cited in the text is, of course, not complete, but it is intended instead as a guide to the extensive literature in the field.
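The monopole is the classic case in point; in the standard Wu–Yang form (sketched here in generic conventions, not in the authors' notation), one covers the sphere surrounding the monopole with two patches carrying different potentials,
\[
A_N = g(1-\cos\theta)\, d\phi \ \ (\theta \neq \pi), \qquad A_S = -g(1+\cos\theta)\, d\phi \ \ (\theta \neq 0),
\]
each regular on its own patch and related on the overlap by the gauge transformation \(A_N - A_S = d(2g\phi)\). Single-valuedness of the corresponding transition function \(e^{2ieg\phi/\hbar c}\) acting on the charged particle's wave function then yields the Dirac quantisation condition \(2eg/\hbar c \in \mathbb{Z}\), with no string singularity anywhere in the global description.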
Quantum Field Theory as a Lorentz Invariant Statistical Field Theory
We propose a reformulation of quantum field theory (QFT) as a Lorentz invariant statistical field theory. This rewriting embeds a collapse model within an interacting QFT and thus provides a possible solution to the measurement problem. Additionally, it relaxes structural constraints on standard QFTs and hence might open the way to future mathematically rigorous constructions. Finally, because it shows that collapse models can be hidden within QFTs, this article calls for a reconsideration of the dynamical program, as a possible underpinning rather than as a modification of quantum theory. In its orthodox acceptation, quantum mechanics is not a dynamical theory of the world. It provides accurate predictions about the results of measurements, but leaves the reality of the microscopic substrate supporting their emergence unspecified. The situation is no different, apart from additional technical subtleties, in the relativistic regime. Quantum field theory (QFT) is indeed no more about fields than non-relativistic quantum mechanics is about particles. At best these entities are intermediary mathematical objects entering into the computation of probabilities. They cannot, even in principle, be approximate representations of an underlying physical reality. More precisely, a QFT (even regularized) does not a priori yield a probability measure on fields living in space-time, even if this is a picture one might find intuitively appealing. This does not mean that the very existence of tangible matter is made impossible, but rather that the formalism remains agnostic about its specifics. It seems that most physicists would want more, and it is uncontroversial that it would sometimes be helpful to have more (if only to solve the measurement problem [1, 2]). One would likely feel better with local beables [3] (or a primitive ontology [4, 5]), i.e. with something in the world, some physical "stuff", that the theory is about and that can ultimately be used to derive the statistics of measurement results. In the non-relativistic limit, Bohmian mechanics [6–9] has provided a viable proposition for such an underlying physical theory of the quantum cookbook [10, 11]. It may not be the only one, nor the most appealing to all physicists, but at least it is a working proof of principle. In QFT, finding an underlying description in terms of local beables has proved a more difficult endeavour. Bohmian mechanics can indeed only be made Lorentz invariant in a weak sense [12], and its extension to QFT is subtle [13, 14]. At present, there does not seem to exist a fully Lorentz invariant theory of local beables that reproduces the statistics of QFT in a compact way (even setting aside the technicalities of renormalization), although some groundwork has been done [15]. The first objective of this article is to propose a solution to this problem and provide a reformulation (or interpretation) of QFT as a Lorentz invariant statistical field theory (where the word "field" is understood in its standard "classical" sense). For that matter, we shall get insights from another approach to the foundations of quantum mechanics: the dynamical reduction program. The idea of dynamical reduction models is to slightly modify the linear state equation of quantum mechanics to get definite measurement outcomes in the macroscopic realm, while only marginally modifying microscopic dynamics.
Pioneered by Ghirardi, Rimini, and Weber [16], Diósi [17], Pearle [18, 19], and Gisin [20] (among others), the program has blossomed to give a variety of non-relativistic models that modify the predictions of the Standard Model in a more or less gentle way. The models can naturally be endowed with a clear primitive ontology, made of fields [21], particles [22, 23] or flashes [24]. Some instantiations of the program, such as the Continuous Spontaneous Localization (CSL) model [18, 19] or the Diósi-Penrose (DP) model [17, 25, 26], are currently being put under experimental scrutiny. These models have also been difficult to extend to relativistic settings, despite recent advances by Tumulka [27], Bedingham [28] and Pearle [29]. For subtle reasons we shall discuss later, these latter proposals, albeit crucially insightful for the present inquiry, are difficult to handle and not yet entirely satisfactory. The second objective of this article is thus to construct a theory that can be seen as a fully relativistic dynamical reduction model and that has a transparent operational content. The two aforementioned objectives – redefining a QFT in terms of a relativistic statistical field theory and constructing a fully relativistic dynamical reduction model – shall be two sides of the same coin. Indeed, our dynamical reduction model will have an important characteristic distinguishing it from its predecessors: its empirical content will be the same as that of an orthodox interacting QFT, hence providing a potential interpretation rather than a modification of the Standard Model. This fact may be seen as a natural accomplishment of the dynamical program, yet in some sense also as a call for its reconsideration. Surely, if a dynamical reduction model that is arguably more symmetric and natural than its predecessors can be fully hidden within the Standard Model, it suggests that the "collapse" manifestations currently probed in experiments are but artifacts of retrospectively unnatural choices of non-relativistic models. We should finally warn that the purpose of the present article should not be seen as only foundational or metaphysical. The instrumentalist reader, who may still question the legitimacy of a quest for ontology on positivistic grounds, might nonetheless be interested in its potential mathematical byproducts. As we shall see, because it relaxes some strong constraints on the regularity of QFTs, our proposal might indeed be of help for future mathematically rigorous constructions. The article is structured as follows. We first introduce non-relativistic collapse models in section 2, to gather the main ideas and insights needed for the extension to QFT. The core of our new definition of QFT is provided in section 3. In section 4 we show that the theory allows one to understand the localization of macroscopic objects, providing a possible natural solution to the measurement problem. Finally, we discuss in section 5 the implications for QFT and for the dynamical reduction program, as well as the limits of our approach and its relation to previous work.
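For reference, the kind of modified state equation meant here has, in its non-relativistic CSL-type form, the schematic shape (our notation, quoted for orientation rather than from this article):
\[
d|\psi_t\rangle = \Big[-\tfrac{i}{\hbar} H\,dt + \sqrt{\lambda}\!\int\! d^3x\,\big(\hat{M}(\mathbf{x}) - \langle \hat{M}(\mathbf{x})\rangle_t\big)\,dW_t(\mathbf{x}) - \tfrac{\lambda}{2}\!\int\! d^3x\,\big(\hat{M}(\mathbf{x}) - \langle \hat{M}(\mathbf{x})\rangle_t\big)^2\,dt\Big]|\psi_t\rangle,
\]
with \(\hat{M}(\mathbf{x})\) a smeared mass-density operator and \(W_t(\mathbf{x})\) a field of independent Wiener processes. Superpositions of macroscopically distinct mass distributions are then suppressed at a rate that grows with the mass involved, while microscopic dynamics is left essentially untouched.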