A Brief History of the Multiverse The theory of the inflationary multiverse is based on a unification of inflationary cosmology, anthropic considerations, and particle physics. Its most advanced versions combine eternal inflation and string theory into what is now called the “string theory landscape.” This theory is still a ‘work in progress,’ and the pendulum of public opinion with respect to it swings with a very large amplitude. Some people love this theory; others hate it and write papers defending the integrity of physics. It does not help much that the word “multiverse” is used differently by different people. In this situation it may be useful to remember what exactly we are talking about and why this theory was invented. This is not an easy task. Until the mid-1990s the cosmological anthropic principle pioneered by Dicke, Carter, Rees, Barrow, Rozental and others remained very unpopular. For example, we know that the proton mass is almost exactly equal to the neutron mass. If the proton
were 1% heavier or lighter, life as we know it would be impossible. Similarly, one cannot significantly change the electron charge and its mass without making the universe unsuitable for life. But if physical parameters are just constants in the Lagrangian, nothing can change them. Therefore the standard lore was that one should avoid using anthropic arguments for explaining fundamental properties of our world. As a result, some of the key ideas relating inflation and anthropic considerations to each other were originally expressed in a rather cryptic form and scattered among preprints, conference proceedings and old journals that are hard to find. In this paper I will briefly describe the history of the evolution of these ideas and provide links to some of the original publications which are especially difficult to find.
Historically, there were many different versions of the theory of the multiverse based on the
many-worlds interpretation of quantum mechanics [1] and quantum cosmology [2], on the theory of creation of the universe ‘from nothing’ [3], and on the investigation of the Hartle-Hawking wave function [4]. These ideas are very powerful, but their consistent implementation requires a deep understanding of difficult conceptual issues of quantum cosmology. Moreover, quantum cosmology by itself does not allow us to change fundamental constants. Therefore the main progress in the development of the theory of the inflationary multiverse was achieved in a different, conceptually simpler context. To explain it, let us remember that
one of the starting points of the pre-inflationary cosmology was that the universe is globally
uniform. This was the so-called ‘cosmological principle’, which was invoked by many people,
from Newton to Einstein, to account for the observed large-scale homogeneity of the universe.
The physical mechanism explaining the homogeneity of our part of the world was provided by
inflationary theory. Surprisingly enough, this theory made the cosmological principle obsolete.
Indeed, the main idea of inflationary cosmology is to make our part of the universe homogeneous
by stretching any pre-existing inhomogeneities and by placing all possible ‘defects,’ such as domain walls and monopoles, far away from us, thus rendering them unobservable. If the universe consists of different parts, each of these parts after inflation may become locally homogeneous and so large that its inhabitants will not see other parts of the universe, so they may conclude, incorrectly, that the universe looks the same everywhere. However, properties of different parts of the universe may be dramatically different. In this sense, the universe effectively becomes a multiverse consisting of different exponentially large locally homogeneous parts with different properties. To distinguish these exponentially large parts of our world from more speculative ‘other universes’ entirely disconnected from each other, I called these parts ‘mini-universes,’ others call them ‘pocket universes’. Eventually, we started using the word ‘multiverse’, or ‘inflationary multiverse’ to describe the world consisting of many different 'mini-universes’, or ‘pocket universes’.
An advanced version of this scenario describes our world as an eternally growing, self-reproducing
fractal consisting of many locally homogeneous parts (mini-universes). If the fundamental theory of all interactions has many different vacuum states, or allows different types of compactification, the laws of the low-energy physics and even the dimensionality of space in each of these mini-universes may be different. This provided, for the first time, a simple scientific interpretation of the anthropic principle, which did not rely on the possible existence of ‘other universes’: We can live only in those parts of the world which can support
life as we know it, so there is a correlation between our own properties, and the properties of
the part of the world that we can observe.
BIRS Workshop on 'String-Theory and M-Theory Geometries' Conference Videos: Enjoy!
Diego Marqués:
The Odd story of alpha-prime corrections
Watch video | Download video: 201701231041-Marques.mp4 (156M)
Jeong-Hyuck Park:
Green-Schwarz superstring and Stringy Gravity in doubled-yet-gauged spacetime
Watch video | Download video: 201701231429-Park.mp4 (185M)
Igor Bandos:
Underlying 11D EFT: A conjecture
Watch video | Download video: 201701231603-Bandos.mp4 (178M)
Daniel Waldram:
Generalised geometry and marginal deformations
Watch video | Download video: 201701241304-Waldram.mp4 (231M)
Charles Strickland-Constable:
Supersymmetric backgrounds and generalised special holonomy
Watch video | Download video: 201701241409-Strickland-Constable.mp4 (142M)
Marco Gualtieri:
The mathematical meaning of the generalized Kahler potential
Watch video | Download video: 201701241531-Gualtieri.mp4 (281M)
Emanuel Malek:
Half-maximal consistent truncations using EFT and the M-theory / heterotic duality
Watch video | Download video: 201701251902-Malek.mp4 (182M)
Felix Rudolph:
A Connection for Born geometry and its application to DFT
Watch video | Download video: 201701261907-Rudolph.mp4 (153M)
Martin Cederwall:
E_9 geometry
Watch video | Download video: 201701270903-Cederwall.mp4 (363M)
Chris Hull:
The geometry and non-geometry of double field theory
Watch video | Download video: 201701271039-Hull.mp4 (274M).
Is the Dark Sector of the Universe the Quantum Foam due to Quantum Fluctuations of Spacetime? Is it possible that the dark sector (dark energy in the form of an effective dynamical cosmological constant, and dark matter) has its origin in quantum gravity? This talk sketches a positive response. Here specifically quantum gravity refers to the combined effect of quantum foam (or spacetime foam due to quantum fluctuations of spacetime) and gravitational thermodynamics. We use two simple independent gedanken experiments to show that the holographic principle can be understood intuitively as having its origin in the quantum fluctuations of spacetime. Applied to cosmology, this consideration leads to a dynamical cosmological constant of the observed magnitude, a result that can also be obtained for the present and recent cosmic eras by using unimodular gravity and causal set theory. Next we generalize the concept of gravitational thermodynamics to a spacetime with positive cosmological constant (like ours) to reveal the natural emergence, in galactic dynamics, of a critical acceleration parameter related to the cosmological constant. We are then led to construct a phenomenological model of dark matter which we call “modified dark matter” (MDM) in which the dark matter density profile depends on both the cosmological constant and ordinary matter. We provide observational tests of MDM by fitting the rotation curves to a sample of 30 local spiral galaxies with a single free parameter and by showing that the dynamical and observed masses agree in a sample of 93 galactic clusters. We also give a brief discussion of the possibility that quanta of both dark energy and dark matter are non-local, obeying quantum Boltzmann statistics (also called infinite statistics) as described by a curious average of the bosonic and fermionic algebras. If such a scenario is correct, we can expect some novel particle phenomenology involving dark matter interactions. This may explain why so far no dark matter detection experiments have been able to claim convincingly to have detected dark matter.
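As a rough illustration of the scale involved (back-of-the-envelope numbers, not the talk's own definitions): with the observed cosmological constant \(\Lambda \approx 1.1\times 10^{-52}\,\mathrm{m^{-2}}\), the acceleration scale naturally associated with it is
\[ a_c \sim c^2\sqrt{\Lambda/3} \approx c H_0 \approx 6\times 10^{-10}\ \mathrm{m/s^2}, \]
the same order of magnitude as the critical acceleration of about \(10^{-10}\,\mathrm{m/s^2}\) below which galactic rotation curves depart from Newtonian expectations; the exact order-one factors entering MDM may differ.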
Metastring Theory and Modular Spacetime: Reformulating String-Theory Independently of an a-Priori Spacetime Interpretation or a Pre-Assumption of Locality Abstract: String theory is canonically accompanied by a space-time interpretation
which determines S-matrix-like observables, and connects to the standard physics at low
energies in the guise of local effective field theory. Recently, we have introduced a reformulation
of string theory which does not rely on an a priori space-time interpretation or a
pre-assumption of locality. This metastring theory is formulated in such a way that stringy
symmetries (such as T-duality) are realized linearly. In this paper, we study metastring
theory on a flat background and develop a variety of technical and interpretational ideas.
These include a formulation of the moduli space of Lorentzian worldsheets, a careful study
of the symplectic structure and consequently consistent closed and open boundary conditions,
and the string spectrum and operator algebra. What emerges from these studies is
a new quantum notion of space-time that we refer to as a quantum Lagrangian or equivalently
a modular space-time. This concept embodies the standard tenets of quantum theory
and implements in a precise way a notion of relative locality. The usual string backgrounds
(non-compact space-time along with some toroidally compactified spatial directions) are
obtained from modular space-time by a limiting procedure that can be thought of as a
correspondence limit.
The Madelung Equations as the Foundational Interpretation of Quantum Theory Despite its age, quantum theory still suffers from serious conceptual difficulties. To create clarity, mathematical physicists have been attempting to
formulate quantum theory geometrically and to find a rigorous method of
quantization, but this has not resolved the problem.
In this article we argue that a quantum theory that relies on quantization algorithms
is necessarily incomplete. To provide an alternative approach, we show
that the Schrödinger equation is a consequence of three partial differential equations governing the time evolution of a given probability density (a standard form is sketched below). These equations, discovered by E. Madelung, naturally ground the Schrödinger theory
in Newtonian mechanics and Kolmogorovian probability theory. A variety
of far-reaching consequences for the projection postulate, the correspondence
principle, the measurement problem, the uncertainty principle, and
the modeling of particle creation and annihilation are immediate. We also
give a speculative interpretation of the equations following Bohm, Vigier and
Tsekov, by claiming that quantum mechanical behavior is possibly caused
by gravitational background noise.
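For orientation, a standard way of writing the Madelung system (a textbook form; the paper's precise trio of equations may be organized differently): substituting the polar decomposition \(\psi=\sqrt{\rho}\,e^{iS/\hbar}\) into the Schrödinger equation \(i\hbar\,\partial_t\psi=\bigl(-\tfrac{\hbar^2}{2m}\nabla^2+V\bigr)\psi\) yields, for the density \(\rho\) and velocity field \(\vec v=\nabla S/m\),
\[ \partial_t\rho+\nabla\!\cdot(\rho\,\vec v)=0, \]
\[ m\bigl(\partial_t+\vec v\cdot\nabla\bigr)\vec v=-\nabla\bigl(V+Q\bigr),\qquad Q=-\frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}, \]
\[ \nabla\times\vec v=0, \]
i.e. a continuity equation, an Euler-type equation with an additional 'quantum potential' Q, and an irrotationality condition on the flow. In this form the quantum dynamics reads as Newtonian fluid dynamics for a probability density, which is the sense in which the Madelung equations ground the Schrödinger theory in classical mechanics and probability theory.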
The idea of quantization was first put forward by Dirac [52] in 1925 in an attempt
to extend Heisenberg’s theory of matrix mechanics [60]. He based the concept on
a formal analogy between the Hamilton and the Heisenberg equation and on the
principle of correspondence, namely that a quantum theoretical model should yield
a “classical” one in some limit. This analogy motivated Dirac to develop a scheme
that constructs one or more quantum analogues of a given “classical system” formulated
in the language of Hamiltonian mechanics. When it was discovered that
Dirac’s scheme, nowadays known as canonical quantization, was ill-defined (see
[59, 64] for the original works by Groenewold and van Hove, also [1, §5.4], in particular
[1, Thm. 5.4.9]), physicists and mathematicians attempted to develop a
more sophisticated machinery rather than questioning the ansatz. The result has
been a variety of quantization algorithms, one of which is particularly noteworthy:
Geometric quantization (cf. [34, 56] for an introduction).
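To make the analogy that motivated Dirac concrete (a textbook summary, not a claim about the exact presentation in [52]): the classical Hamilton equation of motion and the quantum Heisenberg equation have the same structure, with the Poisson bracket replaced by the commutator,
\[ \frac{df}{dt}=\{f,H\} \quad\longleftrightarrow\quad \frac{d\hat F}{dt}=\frac{1}{i\hbar}\,[\hat F,\hat H], \]
and canonical quantization is the attempt to promote the substitution \(\{f,g\}\mapsto \tfrac{1}{i\hbar}[\hat F,\hat G]\) to a systematic map from classical observables to operators; the Groenewold-van Hove results cited above show that no such map exists with all the properties one would naively demand.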
In his seminal paper, Segal [88] expressed the need to employ the language of differential geometry in quantum theory. He understood that determining the relevant
differential-geometric structures, spaces and their relation to the fundamental
equations of quantum theory creates the mathematical coherence necessary to adequately
address foundational issues in the subject. By merging this ansatz with
Kirillov’s work in representation theory [69], Segal, Kostant [70] and Souriau [24]
were able to construct the algorithm of geometric quantization. However, rather
than elaborating on the relation between quantum and classical mechanics, geometric
quantization unearthed a large number of geometric structures [11, §23.2],
introduced in an ad hoc manner.
It is tempting to blame this state of affairs on the inadequacy of the geometric
ansatz or the theory, but instead we invite the reader to take a step back. What
is the reason for the construction of a quantization algorithm? Why do we quantize?
Certainly, quantum mechanics should agree with Newtonian mechanics in
some approximation, where the latter is known to accord with experiment, but is it
reasonable to assume the existence of an algorithm that constructs the new theory
out of the old one?
These questions are of a philosophical nature, and it is useful to address them within
the historical context. Clearly, the step from Newtonian mechanics to quantum mechanics
was a scientific revolution, which is why we find the work of the philosopher
and physicist Thomas Kuhn [17] of relevance to our discussion. Kuhn is known for
his book “The Structure of Scientific Revolutions” [17], in which he analyzed the
steps of scientific progress in the natural sciences. For a summary see [80].
Kuhn argues that, as a field of science develops, a paradigm is eventually formed
through which all empirical data is interpreted. As, however, the empirical evidence
becomes increasingly incompatible with the paradigm, it is modified in an ad hoc
manner in order to allow for progress in the field. Ultimately, this creates a crisis
in the field as attempts to account for the evidence become increasingly ad hoc,
unmanageably elaborate and ultimately contradictory. Unless a new paradigm is
presented and withstands experimental and theoretical scrutiny, the crisis persists
and deepens, because of the internal and external inconsistencies of the current
paradigm.
This process can be directly observed in the history of quantum theory. When
Newtonian mechanics was faced with the problem of describing the atomic spectra
and the stability of the atom at the beginning of the twentieth century [44], it was modified ad hoc by adding the Bohr-Sommerfeld quantization condition [44, 89] despite
its known inconsistency with then accepted principles of physics [45,61]. This
ad hoc modification of Newtonian mechanics continued with Werner Heisenberg's [60] and Erwin Schrödinger's [85] postulation of their fundamental equations of quantum mechanics, two descriptions later shown to be formally equivalent by von Neumann in his foundational work [19]. Schrödinger's and Heisenberg's descriptions can be viewed as an ad hoc modification, because their equations are formulated
on a Newtonian spacetime and intended to replace Newton’s second law without
being based on postulated principles of nature. With his quantization algorithm
[52], Dirac supplied a convenient way to pass from the mathematical description of a physical system in Newtonian mechanics to the then incomplete, new theory.
In accordance with Kuhn’s description, it was a pragmatic, ad hoc step, not one
rooted in deep philosophical reflection. Nonetheless, the concept of quantization is
ingrained in quantum theory to this day [29], while the so far futile search for unity in physics has become increasingly ad hoc and elaborate [33, §19].
We are thus reminded of our historical position and the original intention behind
quantization: We would like to be able to mathematically describe microscopic
phenomena, having at hand neither the fundamental equations describing those
phenomena nor a proper understanding of the physical principles involved allowing
us to derive such equations. That is, what we lack with respect to our knowledge
of microscopic phenomena is, in Kuhn’s words, a paradigm. Rather than having a
set of principles of nature, which we use to intuitively understand and derive the
fundamental laws of quantum theory, we physicists assume the validity of the old
theory, namely Newtonian mechanics or special relativity in its Hamiltonian formulation,
only to apply an ad hoc algorithm to obtain laws we have inadequately
understood. This is why the concept of quantization itself is objectionable.
Indeed, even if a mathematically well defined quantization scheme existed, it would
remain an ad hoc procedure, and one would still need additional knowledge of which quantized systems are physical (cf. [28, §5.1.2] for a discussion of this in German).
From a theory builder’s perspective, it would then be more favorable to simply
use the quantized, physically correct models as a theoretical basis and deduce the classical models from these, rather than formulating the theory the other way around.
Hence quantization can be viewed as a procedure invented to systematically guess
quantum-theoretical models. This is done with the implicit expectation of shedding
some light on the conceptual and mathematical problems of quantum theory,
so that one day a theory can be deduced from first principles. Thus a quantum
theory, which is constructed from a quantization scheme, must necessarily be incomplete.
More precisely, it has not been formulated as a closed entity, since for its
formulation it requires the theory it attempts to replace and which it potentially
contradicts.
As a result of this development, quantum mechanics and thus quantum theory as
a whole has not been able to pass beyond its status as an ad hoc modification of
Newtonian mechanics and relativity to date. For a recapitulation of the history of
quantum theory illustrating this point, see e.g. the article by Heisenberg [61].
Fortunately, our criticism does not apply to the theory of relativity, which to our
knowledge provides an accurate description of phenomena [99], at least in the
macroscopic realm. As the principles of relativity theory are known (cf. [16, p.
XVII]), the ridiculousness of “relativizing” Newtonian mechanics is obvious. Indeed,
in the theory of relativity physics still finds a working paradigm.
Rejecting quantization neither leads to a rejection of quantum theory itself, nor
does it imply that previous attempts to put quantum theory into a geometric language
were futile. If we reject quantization, we are forced to view quantum theory
as incomplete and phenomenological, which raises the question of what the underlying
physical principles and observables are. Considering that the theory of relativity is mainly a theory of spacetime geometry, asking, as Segal did, for the
primary geometric and physical quantities in quantum theory offers a promising
and natural approach to this question.
Therefore we reason that we theorists should look at the equations of quantum
theory with strong empirical support and use these to construct a mathematically
consistent, probabilistic, geometric theory, tied to fundamental physical principles
as closely as possible. But how is this to be approached?
Experimental evidence for retro-causation in quantum mechanics using weak values Physicists and philosophers have long argued about the real possibility that future events can influence what happens in the present. A time symmetric theory for nonrelativistic quantum mechanics first proposed in 1964 by Aharonov, Bergman, and Lebowitz and elaborated more recently by Aharonov et al. in terms of weak measurements and weak values has provided a formalism that encompasses this using not only standard forward in time evolving quantum states, but also requiring in its formulation quantum states that are evolving backward in time. Although the retro-causal interpretation of weak values remains controversial, experiments have verified many of the theory’s counterintuitive predictions. Here, we report an experimentally observed change in a weak value induced by controlled phase shifts occurring in an optically dark path of a twin Mach–Zehnder interferometer. These results can be explained in terms of the effect of a retro-propagating quantum state upon a weak value. This explanation provides empirical support for Aharonov’s controversial retro-causal interpretation of quantum mechanics.
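For readers unfamiliar with the formalism, the quantity at stake is the standard weak value of the two-state-vector approach (the textbook definition, not the specific optical implementation of the experiment): for a system pre-selected in \(|\psi\rangle\) and post-selected in \(|\phi\rangle\), the weak value of an observable \(\hat A\) is
\[ A_w=\frac{\langle\phi|\hat A|\psi\rangle}{\langle\phi|\psi\rangle}, \]
where the post-selected state plays the role of a quantum state 'evolving backward in time'; \(A_w\) is in general complex and can lie far outside the eigenvalue range of \(\hat A\).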
Deriving Einstein's TGR from 5-D topological Chern-Simons theory We propose a gravitation theory in 4 dimensional space-time obtained by compactifying to 4 dimensions the five dimensional topological Chern-Simons theory with the gauge group SO(1,5) or SO(2,4) – the de Sitter or anti-de Sitter group of 5-dimensional space-time. In the resulting theory, torsion, which is a solution of the field equations as in any gravitation theory in the first order formalism, is not necessarily zero. However, a cosmological solution with zero torsion exists, which reproduces the Lambda-CDM cosmological solution of General Relativity. A realistic solution with spherical symmetry is also obtained.
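Schematically (my notation, and assuming the standard five-dimensional Chern-Simons form rather than the authors' exact conventions), the starting point is the Chern-Simons functional for a connection A valued in so(1,5) or so(2,4),
\[ S_{\rm CS}[A]=k\int_{\mathcal M_5}\Bigl\langle A\,(dA)^2+\tfrac{3}{2}A^3\,dA+\tfrac{3}{5}A^5\Bigr\rangle, \qquad d\,\mathcal L_{\rm CS}=k\,\langle F\wedge F\wedge F\rangle, \]
with the fünfbein and spin connection packaged into A as in the usual gauge-theoretic formulations of gravity; reducing one dimension then produces a four-dimensional, first-order gravity theory in which torsion is an independent field, as described in the abstract.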
A New Approach to Quantum Cosmology and the Resolution of the Big Bang Singularity Problem The initial Big-Bang singularity is the most troubling feature of the standard cosmology, which quantum effects are hoped to resolve. In this paper, we study quantum cosmology with conformal (Weyl)
invariant matter. We show that it is natural to extend the scale factor to negative values, allowing a
large, collapsing Universe to evolve across a quantum “bounce” into an expanding Universe like
ours. We compute the Feynman propagator for Friedmann-Robertson-Walker backgrounds exactly,
identifying curious pathologies in the case of curved (open or closed) universes. We then include
anisotropies, fixing the operator ordering of the quantum Hamiltonian by imposing covariance under
field redefinitions and again finding exact solutions. We show how complex classical solutions allow one to circumvent the singularity while maintaining the validity of the semiclassical approximation. The simplest isotropic universes sit on a critical boundary, beyond which there is qualitatively different behavior, with potential for instability. Additional scalars improve the theory’s stability. Finally, we study the semiclassical propagation of inhomogeneous perturbations about the flat, isotropic case, at linear and nonlinear order, showing that, at least at this level, there is no particle production across the bounce. These results form the basis for a promising new approach to quantum cosmology and the resolution of the big bang singularity.
String Theory and the Space-Time Heisenberg Uncertainty Principle The notion of a space-time uncertainty principle in string theory is clarified and further developed.
The motivation and the derivation of the principle are first reviewed in a reasonably self-contained way. It is then shown that the nonperturbative (Borel summed) high-energy and high-momentum transfer behaviors of string scattering are consistent with the spacetime uncertainty principle. It is also shown that, in consequence of the principle, string theories in 10 dimensions generically exhibit a characteristic length scale which is equal to the well-known 11 dimensional Planck length of M-theory as the scale at which
stringy effects take over from those of classical supergravity, even without involving D-branes
directly. The meanings of the space-time uncertainty relation in connection with D-branes and black holes are discussed and reinterpreted. Finally, we present a novel interpretation of the Schild-gauge action for strings from a viewpoint of noncommutative geometry, which conforms to the space-time uncertainty relation by manifestly exhibiting a noncommutativity of quantized string coordinates dominantly between space and time. We also discuss the consistency of the space-time uncertainty relation with S and T dualities.
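The relation being discussed is usually written (in Yoneya's conventions, up to order-one factors) as a reciprocal bound on the temporal and spatial scales that string probes can resolve,
\[ \Delta T\,\Delta X \gtrsim \alpha' = \ell_s^2, \]
and the characteristic scale referred to in the abstract is of order \(g_s^{1/3}\ell_s\), which is parametrically the eleven-dimensional Planck length \(\ell_{11}\) of M-theory.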
Do Alien Particles Exist, and Can they be Detected? We may call “alien particles” those particles belonging to the matter/field content of a d-dimensional brane other than the 3-brane (or stack of branes) sweeping the spacetime in which we live. They can appear in our space-time at the regions of intersection between our and their brane. They can be identified (or not) as alien matter depending on their properties, on the physical laws governing their evolution in the “homeland” brane, and on the details of our detection techniques. Modern physical theories – in particular, higher-dimensional unified models of gravity and of the other fundamental interactions – have accustomed us to the idea that the space-time in which we live can be appropriately represented as the four-dimensional hypersurface spanned by the evolution of a 3-brane, embedded in an external “bulk” manifold. We may recall, in this respect, the so-called brane-world scenario, and its many physical/cosmological applications (see e.g. Ref. 1 for an updated review). The bulk, however, might also contain many other branes similar to ours. The space-times spanned by the various branes might have reciprocal (and possibly multiple) intersections. The intersection of (wrapping, Dirichlet) branes is indeed at the basis of possible string-theory explanations of basic Standard Model properties, such as the chirality of the fermion spectrum and the observed number of quark/lepton generations.2 But there are also cosmological and inflationary consequences of brane intersections.3, 4 Quite independently of the possible motivations/applications mentioned above, another important aspect of brane intersections, in my opinion, is the following. Even if the matter/field content of a given brane is rigidly localized (excluding gravity) on the associated world-volume, a direct interaction among the matter components of different branes (or stacks of branes) turns out to be possible, in principle, at the intersection regions, where the world-lines of particles belonging to different branes can mix and intersect. Those regions may thus behave as open windows on the extra dimensions.
On 'Naturalness' in Physics and Possible Explanations for Fine-Tuning of the Universe The Froggatt-Nielsen mechanism and the multi-local field theory are interesting and promising candidates for solving the naturalness problem in the universe. These theories are based on different physical principles: the former assumes the microcanonical partition function ∫Dφ ∏_i δ(S_i − I_i), and the latter assumes the partition function ∫Dφ exp(iS_M), where S_M is the multi-local action Σ_i c_i S_i + Σ_{i,j} c_{i,j} S_i S_j + ··· (both are written out again in the sketch below).
Our main purpose is to show that they are equivalent in the sense that they predict the same fine-tuning mechanism. In order to clarify our argument, we first study (review) the similarity between the Froggatt-Nielsen mechanism and statistical mechanics in detail, and show that the dynamical fine-tuning in the former picture can be understood completely in the same way as the determination of the temperature in the latter picture. Afterward, we discuss the multi-local theory and the equivalence between it and the Froggatt-Nielsen mechanism. Because the multi-local field theory can be
obtained from physics at the Planck/String scale, this equivalence indicates that the micro-canonical picture can also originate in such physics. As a concrete example, we also review the IIB matrix model as an origin of the multi-local theory. Although the Standard Model (SM) was completed by the discovery of the Higgs boson, there are many open questions in it such as the Higgs quadratic divergence, the Strong CP problem, the cosmological constant problem, and so on. These problems are difficult to answer in ordinary quantum field theory (QFT), and are collectively called the naturalness problem. Therefore, it is quite important to seek a new theory or mechanism that naturally answers these questions. One of the possibilities is to try to explain the observed couplings by dynamical fine-tuning. For example, in the Strong CP problem, θ becomes dynamical by considering the Peccei-Quinn symmetry and its breaking. However, even if such a field theoretical approach with a new symmetry can solve one of the fine-tunings, it is difficult to solve several problems simultaneously. So, it is meaningful to study a new mechanism that can realize several fine-tunings simultaneously. Among various proposals, the Froggatt-Nielsen mechanism (FNM) [1] has recently attracted much attention because the predicted value of the Higgs mass (∼ 130 GeV) was close to the observed value ≃ 125 GeV. It was originally proposed to explain the nontrivial behavior of the SM Higgs potential at high energy scales: The potential has another minimum around the
Planck scale, and it can be degenerate with the electroweak vacuum v_h = 246 GeV depending on the values of the SM couplings. Such a degeneracy is called the Multiple Point Criticality Principle (MPP), and there have been many studies of it so far [2]. The fundamental assumption in the FNM is to use the micro-canonical partition function, as in statistical mechanics, and its origin still remains obscure. In this picture, the couplings in QFT become dynamical, and their dynamical fine-tuning can take place. See [1, 2] and the following discussion for the details.
On the other hand, in [3], it was also argued that a few naturalness problems, including the
MPP, can be solved by the multi-local field theory. It assumes that the effective action below the Planck/String scale is given by the multi-local one: although it seems difficult to study the theory as a quantum theory, we will see that we can reduce it to QFT with the couplings being dynamical by a simple mathematical transformation. See the following discussion for the details. As with the FNM, we do not need to consider its fundamental origin as long as we apply it to the naturalness problem; however, such an origin can actually be found
in physics at the Planck/String scale. Therefore, the multi-local theory seems to be more promising than the FNM in that everything can be explained from more fundamental physics. The purpose of this paper is to show that these two approaches are in fact equivalent in the sense that they predict the same fine-tuning mechanism in QFT: The coupling in
QFT is fixed at the point that dominates their partition functions most strongly, and the
fine-tuned value generally depends on the details of the theories. This fact indicates that the micro-canonical picture may also originate in the Planck/String scale physics such as the wormhole theory [4] or matrix model. As a concrete example, we also review the derivation of the multi-local theory from the IIB matrix model [5]. Although the study in [5] is mathematically rigorous, most of the discussion can be done without relying on the details of mathematics. So, in this paper, we aim to give an instructive and intuitive explanation of their work.
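The 'simple mathematical transformation' alluded to above can be sketched as follows (schematically; the precise weight \(w(\lambda)\) depends on the coefficients \(c_i, c_{ij},\dots\)): introducing auxiliary coupling variables \(\lambda_i\), the multi-local partition function can be rewritten as
\[ Z=\int\mathcal D\phi\;e^{\,i\bigl(\sum_i c_iS_i+\sum_{i,j}c_{ij}S_iS_j+\cdots\bigr)} =\int\prod_i d\lambda_i\;w(\lambda)\int\mathcal D\phi\;e^{\,i\sum_i\lambda_iS_i}, \]
i.e. as an ordinary QFT in which the couplings \(\lambda_i\) are not fixed numbers but are integrated over with some weight. The couplings thereby become dynamical, which is exactly the situation assumed in the micro-canonical Froggatt-Nielsen picture \(\int\mathcal D\phi\,\prod_i\delta(S_i-I_i)\).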
Could Everything Emerge From One Dimension in Physics: the Time Dimension?! Unification in One Dimension A physical theory of the world is presented under the unifying principle that all of nature is laid out before us and experienced through the passage of time. The one-dimensional progression in time is opened out into a multi-dimensional mathematically consistent flow, with the simplicity of the former giving rise to symmetries of the latter. The act of perception identifies an extended spacetime arena of intermediate dimension, incorporating the symmetry of geometric spatial rotations, against which physical objects are formed and observed. The spacetime symmetry is contained as a subgroup of, and provides a natural breaking mechanism for, the higher general symmetry of time. It will be described how the world of gravitation and cosmology, as well as quantum theory and particle physics, arises from these considerations.
Popper and Wittgenstein on the Metaphysics of Experience In the Tractatus Wittgenstein argued that there are metaphysical truths. But these are ineffable, for metaphysical sentences try to say what can only be shown. Accordingly, they are pseudo-propositions because they are ill-formed. In the Investigations he no longer thought that metaphysical propositions are pseudo-propositions, but argued that they are either nonsense or norms of descriptions. Popper criticized Wittgenstein’s ideas and argued that metaphysical truths are effable. Yet it is by now clear that he misunderstood Wittgenstein’s arguments (namely that metaphysical propositions are ill-formed because they employ unbound variables) and misguidedly thought that Wittgenstein used the principle of verification for distinguishing empirical propositions from metaphysical propositions. Because Popper developed his philosophy in part as a critique of Wittgenstein’s philosophy, this invites the question of whether these misunderstandings have consequences for his own philosophy. I discuss this question and argue that Popper’s attempt to distinguish metaphysics and science with the aid of a criterion of testability is from Wittgenstein’s perspective misguided. The main problem facing Popper’s philosophy is that alleged metaphysical propositions are not theoretical propositions but rules for descriptions (in the misleading guise of empirical propositions). If Wittgenstein’s ideas are correct, then metaphysical problems are not scientific but grammatical problems which can only be resolved through conceptual investigations.
To Explain or to Predict? Statistics and Philosophy of Science Conclusion: "The bottom line is nicely summarized by Hagerty and Srinivasan (1991): 'We note that the practice in applied research of concluding that a model with a higher predictive validity is "truer," is not a valid inference. This paper shows that a parsimonious but less true model can have a higher predictive validity than a truer but less parsimonious model.'"
Abstract.
Statistical modeling is a powerful tool for developing and testing theories by way of causal explanation, prediction, and description.
In many disciplines there is near-exclusive use of statistical modeling for causal explanation and the assumption that models with high explanatory power are inherently of high predictive power. Conflation
between explanation and prediction is common, yet the distinction must be understood for progressing scientific knowledge. While this distinction has been recognized in the philosophy of science, the statistical literature lacks a thorough discussion of the many differences that arise in the process of modeling for an explanatory versus a predictive
goal. The purpose of this article is to clarify the distinction between explanatory and predictive modeling, to discuss its sources, and to reveal
the practical implications of the distinction to each step in the modeling process.
Why the E_11 Conjecture Stating that the Kac-Moody Algebra E_11 is the Symmetry Algebra of M-theory is True [3 Papers Hyperlinked] I begin with some memories of Abdus Salam who was my PhD supervisor. After reviewing the theory of non-linear realisations and Kac-Moody algebras, I explain how to construct the non-linear realisation based on the Kac-Moody algebra E11 and its vector representation. I explain how this field theory leads to dynamical equations which contain an infinite number of fields defined on a spacetime with an infinite number of coordinates. I then show that these unique dynamical equations, when truncated to low level fields and the usual coordinates of spacetime, lead to precisely the equations of motion of eleven dimensional supergravity theory. By taking different group decompositions of E11 we find all the maximal supergravity theories, including the gauged maximal supergravities, and as a result the non-linear realisation should be thought of as a unified theory that is the low energy effective action for type II strings and branes. These results essentially confirm the E11 conjecture given many years ago.
E_11: Sign of the times
By: Arjan Keurentjes
We discuss the signature of space-time in the context of the E_11 -conjecture. In this setting, the space-time signature depends on the choice of basis for the ``gravitational sub-algebra'' A_10, and Weyl transformations connect interpretations with different signatures of space-time. Also the sign of the 4-form gauge field term in the Lagrangian enters as an adjustable sign in a generalized signature. Within E_11, the combination of space-time signature (1,10) with conventional sign for the 4-form term, appropriate to M-theory, can be transformed to the signatures (2,9) and (5,6) of Hull's M*- and M'-theories (as well as (6,5), (9,2) and (10,1)). Theories with other signatures organize in orbits disconnected from these theories. We argue that when taking E_11 seriously as a symmetry algebra, one cannot discard theories with multiple time-directions as unphysical. We also briefly explore links with the SL(32,R) conjecture.
By: Paul P. Cook We investigate the motivations and consequences of the conjecture that the Kac-Moody algebra E11 is the symmetry algebra of M-theory, and we develop methods to aid the further investigation
of this idea. The definitions required to work with abstract root systems of Lie algebras are reviewed, leading up to the definition of a Kac-Moody algebra. The motivations for the E11 conjecture are then reviewed, and the nonlinear realisation of gravity relevant to the conjecture is explicitly described.
The algebras of E11, relevant to M-theory, and K27, relevant to the bosonic string theory, along
with their l1 representations are constructed. Tables of low level roots are produced for both the
adjoint and l1 representations of these algebras.
A solution generating element of the Kac-Moody algebra is given, and it is shown by construction
that it encodes all the known half-BPS brane solutions of the maximally oxidised supergravity
theories. It is then used to look for higher level branes associated to the roots of the Kac-Moody
algebra associated to the oxidised theory.
The nature of how spacetime signature is encoded within the E11 formulation of M-theory is
analysed. The effect of the multiple signatures that arise from the Weyl reflections of E11 on the solution generating group element is found, and precise conditions for when an electric brane solution exists, and in which signatures, are given. As a corollary to these investigations, the spacelike branes of M-theory are found to be associated to the solution generating element. The U-duality multiplets of exotic brane charges are shown to have a natural E11 origin. General formulae for finding the content of arbitrary brane charge multiplets are given, and the exact content of the particle and string multiplets in dimensions 4, 5, 6, 7, 8 is related to the l1 representation of E11.
Now, and the Flow of Time The progression of time can be understood by assuming that the Hubble expansion takes place in 4 dimensions rather than in 3. The flow of time consists of the continuous creation of new moments, new nows, that accompany the creation of new space. This model suggests a modification to the metric tensor of the vacuum that leads to testable consequences. Two cosmological tests are proposed, but they present both experimental and theoretical problems. A more practical and immediate test is based on a predicted lag in the emergence of gravitational radiation when two black holes merge. In such mergers (as recently observed by the LIGO team), a macroscopic volume (millions of cubic kilometers) of space is created in the region in which the gravitational wave is generated; this one-time creation of new space should be accompanied by the creation of detectable level of new time, resulting in a time delay that could be observed as a growing lag in the emission of the wave as the merger takes place. Time is unified with space through the Minkowski concept of space-time, yet time and space have qualitatively different behavior in a way that goes beyond a minus sign in the metric. Given any coordinate system, we can stand still in space but not in time; time inexorably flows. The rate of flow depends on the velocity of the local Lorentz frame and on the gravitational potential. Yet this description of the relative changes in the rate of flow does not address the key disparity that time flows yet space doesn’t. In many ways a simpler problem is the arrow of time, the intriguing question of why time flows forward rather than backward, given that most of the fundamental equations of physics show a forward/backward symmetry. Eddington [1] attributed the arrow of time to the second law of thermodynamics, the statement that the entropy of the universe always increases, and that this is the only “law” of physics that contradicts time-reversal symmetry (or, at least, it was at the time Eddington conceived the theory). The recent discovery of time-symmetry violation in B decay [2] suggests that the direction of time might be set by something more fundamental. The Eddington proposal, that the arrow of time is related to entropy increase, has many shortcomings. At its heart, the second law is basically tautological; it consists of the statement that the future will probably be composed of states that, because they have high multiplicity, are more likely. Arguably, the physics of the second Law is found in ergodic hypothesis, the assumption that all accessible states are equally likely so ones with high multiplicity are more probable. But that principle bears little relationship to the flow of time. When Eddington proposed the concept, he was unaware of the fact that there is substantially more entropy in the cosmic microwave radiation than in all of the visible matter of the universe, by a factor of about 10 million. Moreover, because that radiation expands adiabatically with the Hubble expansion, its entropy is not changing. In addition, it is widely thought that there is even a vaster store of entropy on the surfaces of massive black holes, and perhaps even more on the event horizon of the universe. The entropy of these regions is thought to be increasing, but they are so remote from the earth (signals from these surfaces cannot reach us in finite time), that it is hard to understand why they should have an effect on our local time. 
In the Eddington theory, the arrow of time is set remotely and universally, with no correlation expected between local variations in the rates of entropy and of time. Contrast this to the general theory of relativity, which correctly predicted that local gravitational potential has an immediate and (these days, easily) observed effect on local time. Moreover, the entropy of the Earth is decreasing as it sheds entropy to infinity; it is likely the entropy of the Sun (not including the radiation which it has discarded) is decreasing. This leads to the result that the entropy of all known matter in the universe, with the exclusion of photons lost to space, is decreasing. And finally, unlike the general theory of relativity, the arrow of time / entropy speculation leads to no testable predictions that could falsify it. By the standards of Karl Popper, it does not rank as a valid physics theory. Not only can it not be falsified, but it can not even be verified–unlike string theory which, although not falsifiable with current predictions, at least does predict possibly observable particles, extra compact dimensions that could be detected (but haven’t been so far), and subtle correlations in the cosmic microwave background [3].
SpaceTime from Hilbert Space: Decompositions of Hilbert Space as Instances of Time Abstract: There has been recent interest in identifying entanglement as the fundamental
concept from which space may emerge. We note that the particular way that a Hilbert space is decomposed into tensor factors is important in what the resulting geometry looks like. We then propose that time may be regarded as a variable that parameterizes a family of such decompositions, thus giving rise to a family of spatial geometries. As a proof of concept, this idea is demonstrated in two toy models based on Kitaev’s toric code, which
feature a dynamical change of dimension and topology. The idea that spacetime is an emergent notion has long been discussed in the physics literature, and there have been attempts to obtain Einstein's equations from such considerations [1, 2]. Recently, there has been increasing motivation and overwhelming evidence that quantum entanglement may be the fundamental concept from which space emerges as a secondary construct [3, 4]. One such explicit construction was recently proposed [5], in which the mutual information between various factors of an underlying Hilbert space is used to define a metric (distance) between these factors. Although this and other such procedures can define space as an emergent concept from a purely quantum mechanical construction, they still have to deal with time as a fundamental notion to begin with. This is done by introducing an extra ingredient, namely a Hamiltonian, to generate the time evolution. In this paper we propose that time, too, can be considered as an emergent concept arising from the same underlying Hilbert space without introducing a Hamiltonian.
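As a toy illustration of how mutual information between Hilbert-space factors can be turned into a notion of distance (an illustrative sketch only; the specific distance rule below is an ad hoc choice, not the construction of [5]), the following computes pairwise mutual informations for the qubit factors of a small pure state:

import numpy as np
from itertools import combinations

def reduced_density_matrix(psi, keep, n):
    # Reduced density matrix of the qubits listed in `keep` for an n-qubit pure state psi.
    rest = [q for q in range(n) if q not in keep]
    m = psi.reshape([2] * n).transpose(keep + rest).reshape(2 ** len(keep), -1)
    return m @ m.conj().T

def von_neumann_entropy(rho):
    # Entropy in bits, ignoring numerically zero eigenvalues.
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(psi, a, b, n):
    # I(a:b) = S(a) + S(b) - S(ab) between single-qubit factors a and b.
    s_a = von_neumann_entropy(reduced_density_matrix(psi, [a], n))
    s_b = von_neumann_entropy(reduced_density_matrix(psi, [b], n))
    s_ab = von_neumann_entropy(reduced_density_matrix(psi, [a, b], n))
    return s_a + s_b - s_ab

# Toy state on 3 qubits: a Bell pair on qubits (0, 1), with qubit 2 uncorrelated.
n = 3
psi = np.zeros(2 ** n, dtype=complex)
psi[0b000] = psi[0b110] = 1 / np.sqrt(2)   # (|00> + |11>) tensor |0>

for a, b in combinations(range(n), 2):
    i_ab = mutual_information(psi, a, b, n)
    # Ad hoc illustrative rule: strongly correlated factors are assigned short distances.
    dist = float('inf') if i_ab < 1e-9 else 1.0 / i_ab
    print(f"I({a}:{b}) = {i_ab:.3f} bits, distance ~ {dist:.3f}")

Running this on the Bell-pair example gives I(0:1) = 2 bits (a short distance) and zero mutual information (infinite distance) for the other pairs, so the resulting 'geometry' reflects which factors are entangled; a family of such decompositions parameterized by a label t is then the sense in which time could label a family of spatial geometries.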
The Quantum State of the Universe Does Not Contain Evidence of the Wavefunction Collapse Two thought experiments are analyzed, revealing that the quantum state of the universe does not contain evidence of the wavefunction collapse. The first thought experiment shows that unitary quantum evolution alone can account for the outcomes of any combination of quantum experiments. This is in contradiction with the standard view on quantum measurement, which appeals to the wavefunction collapse. The second thought experiment consists in successive measurements, and reveals that the standard quantum measurement scheme predicts violations of the conservation laws. It is shown that the standard view on quantum measurements makes some unnecessary assumptions, which lead to the apparent necessity to invoke wavefunction collapse. Once these assumptions are removed, a new measurement scheme emerges, which is compatible with both the unitary evolution and the conservation laws.
Philosophically Deep: Quantum Superpositions of “Common-Cause” and “Direct-Cause” Causal Structures The deeply rooted intuition that the basic building blocks of the world are cause-effect relations goes back over a thousand years [1–3] and yet still puzzles philosophers and scientists alike. In physics, general relativity provides a theoretic account of the causal relations that describe which events in spacetime can influence which other events. For two (infinitesimally close) events separated by a time-like or light-like interval, one event is in the future light cone of the other, such that there could be a direct cause-effect relationship between them. When a space-like interval separates two events, no event can influence the other. The causal relations in general relativity are dynamical, since they are imposed by the dynamical light cone structure [4]. A conceptual difficulty in combining quantum physics with general relativity arises when one attempts to incorporate the notion of causal structure in the quantum framework. It is expected that such a notion will be both dynamical, as in general relativity, and indefinite, due to quantum theory [5]. One might then expect indefiniteness with respect to the question of whether an interval between two events is time-like or space-like, or even whether event A is prior to or after event B for time-like separated events. Yet, finding a unified framework for the two theories is notoriously difficult and the candidate models still need to overcome technical and conceptual problems. One possibility to separate conceptual from technical issues is to consider more general, theory-independent notions of causality. The causal model formalism [6, 7] is such an approach, which has found applications in areas as diverse as medicine, social sciences and machine learning [8]. The study of its extension with quantum features [9–15] might provide insights that are currently missing from the theory-laden attempts at combining quantum mechanics with general relativity. The constraints arising for a general set of causal relations, both classically and quantumly, are still poorly understood. As a step in exploring this question, we consider a coherently controlled superposition of “direct-cause” and “common-cause” relationships between two events. We propose an implementation involving the spatial superposition of a mass and general relativistic time dilation. Finally, we develop a computationally efficient method to distinguish such genuinely quantum causal structures from classical (incoherent) mixtures of causal structures and show how to design experimental verifications of the “quantumness” of a causal structure.
Quantum Mechanics is Incompatible with a Broad Class of NonLocal Causal Models including Bell-Types Explaining observations in terms of causes and effects is central to empirical science. However, correlations between entangled quantum particles seem to defy such an explanation. This implies that some of the fundamental assumptions of causal explanations have to give way. We consider a relaxation of one of these assumptions, Bell’s local causality, by allowing outcome dependence: a direct causal influence between the outcomes of measurements of remote parties. We use interventional data from a photonic experiment to bound the strength of this causal influence in a two-party Bell scenario, and observational data from a Bell-type inequality test for the considered models. Our results demonstrate the incompatibility of quantum mechanics with a broad class of nonlocal causal models, which includes Bell-local models as a special case. Recovering a classical causal picture of quantum correlations thus requires an even more radical modification of our classical notion of cause and effect.
Quantum Physics Without a Predefined Notion of Time or Causal Structure The standard formulation of quantum theory assumes a predefined notion of time. This is a major obstacle in the search for a quantum theory of gravity, where the causal structure of space-time is expected to be dynamical and fundamentally probabilistic in character. Here, we propose a generalized formulation of quantum theory without predefined time or causal structure, building upon a recently introduced operationally time-symmetric approach to quantum theory. The key idea is a novel isomorphism between transformations and states which depends on the symmetry transformation of time reversal. This allows us to express the time-symmetric formulation in a time-neutral form with a clear physical interpretation, and ultimately drop the assumption of time. In the resultant generalized formulation, operations are associated with regions that can be connected in networks with no directionality assumed for the connections, generalizing the standard circuit framework and the process matrix framework for operations without global causal order. The possible events in a given region are described by positive semidefinite operators on a Hilbert space at the boundary, while the connections between regions are described by entangled states that encode a nontrivial symmetry and could be tested in principle. We discuss how the causal structure of space-time could be understood as emergent from properties of the operators on the boundaries of compact space-time regions. The framework is compatible with indefinite causal order, timelike loops, and other acausal structures
Just When You Thought Quantum 'Reality' Can't Get 'Weirder', It Does: A Quantum Eraser that Works Counterfactually on Non-Existent Events We combine the two eyebrow-raising phenomena of counterfactuality and erasure, proposing a quantum eraser that works counterfactually. Quantum erasure was first proposed by Scully and Druhl more than three decades ago [1], sending shockwaves through the physics community. While early debates on double-slit interference, going back to Bohr and Einstein [2], focussed on Heisenberg’s uncertainty principle as preventing one from learning which slit a particle went through as well as observing interference, quantum erasure put the focus on entanglement instead, a concept brought to light by Einstein and colleagues in their EPR paper [3]. Scully and Druhl showed that it is possible to place a which-path tag on individual particles passing through a double-slit interferometer without disturbing it, thus throwing the uncertainty principle out of the discussion. Interference, however, is still lost because entanglement provides which-path information. The mere possibility of obtaining such information is enough to destroy interference. Erasing which-path information, even after the particles have long been detected, dramatically restores interference, seemingly allowing one to edit the past [4–6]. Practically, quantum erasure has recently been used to entangle, for the first time, two different-colour photons [7], and more recently, to design a protocol for quantum key distribution (QKD) promising inherent security against detector attacks [8]. Counterfactuality, on the other hand, gleans information from events that could have happened but did not in fact take place. But information is physical—it is always manifested in physical form. The basic idea behind our present scheme is that information counterfactually communicated from Bob to Alice—that is without any particles travelling between them—can be made to manifest itself as a flip in the polarisation of Alice’s photon. This allows us to combine the two phenomena of erasure and counterfactuality, proposing a simple yet unusual quantum eraser.
The Wave-Particle-Duality = Entropic-Uncertainty-Principle An interferometer - no matter how clever the design - cannot reveal both the wave and particle behavior of a quantum system. This fundamental idea has been captured by inequalities, so-called wave-particle duality relations (WPDRs), that upper bound the sum of the fringe visibility (wave behavior) and path distinguishability (particle behavior). Another fundamental idea is Heisenberg’s uncertainty principle, stating that some pairs of observables cannot be known simultaneously. Recent work has unified these two principles for two-path interferometers. Here we extend this unification to n-path interferometers, showing that WPDRs correspond to a modern formulation of the uncertainty principle stated in terms of entropies. Furthermore, our unification provides a framework for solving an outstanding problem of how to formulate universally valid WPDRs for interferometers with more than two paths, and we employ this framework to derive some novel WPDRs.
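For context, the two standard ingredients being unified are (in their best-known two-outcome forms, which are background facts rather than the paper's n-path results): the well-known duality relation bounding path distinguishability D and fringe visibility V,
\[ D^2+V^2\le 1, \]
and the Maassen-Uffink entropic uncertainty relation for a pair of observables X and Z,
\[ H(X)+H(Z)\ge\log_2\frac{1}{c},\qquad c=\max_{x,z}\bigl|\langle x|z\rangle\bigr|^2. \]
Roughly speaking, the paper's claim is that duality relations of the first kind become, once the which-path detector is treated as a quantum memory, instances of entropic relations of the second kind, and that this viewpoint extends to n-path interferometers.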
String/M-theories About Our World Are Testable in the traditional Physics Way Some physicists hope to use string/M-theory to construct a comprehensive underlying theory of our physical world a "final theory". Can such a theory be tested? A quantum theory of gravity must be formulated in 10 dimensions, so obviously testing it experimentally requires projecting it onto our 4D world (called "compactification"). Most string theorists study theories, including aspects such as AdS/CFT, not phenomena, and are not much interested in testing theories beyond the Standard Model about our world. Compactified theories generically have many realistic features whose necessary presence provides some tests, such as gravity, Yang-Mills forces like the Standard Model ones, chiral fermions that lead to parity violation, softly broken supersymmetry, Higgs physics, families, hierarchical fermion masses and more. All tests of theories in physics have always depended on assumptions and approximate calculations, and tests of compactified string/M-theories do too. String phenomenologists have also formulated some explicit tests for compactified theories. In particular, I give examples of tests from compactified M-theory (involving Higgs physics, predictions for superpartners at LHC, electric dipole moments, and more). It is clear that compactified theories exist that can describe worlds like ours, and it is clear that even if a multiverse were real it does not prevent us from finding comprehensive compactified theories like one that might describe our world. I also discuss what we might mean by a final theory, what we might want it to explain, and comment briefly on multiverse issues from the point of view of finding a theory that describes our world.
Best Theoretical Evidence String-Theory is True: Relating Gauge Gravity and String Theory We consider topological constraints that must be satisfied by formulations of gravitation as a gauge theory. To facilitate the analysis we review and further justify the composite bundle formalism of Tresguerres as a consistent underlying structure capable of incorporating both the local Lorentz and translational degrees of freedom. Identifying an important global structure required by the composite construction, we translate this into conditions on the underlying manifold. We find that in addition to admitting the expected orientability, causality and spin structures, the underlying manifold must also admit a string structure. We take this to imply that even before considerations of quantum consistency, topological considerations of gauge gravity provide a classical motivation for extended degrees of freedom.
M-theory Solves the Cosmological-Constant-Problem and Yields the Right Result: Quantum Gravity is the Time-Evolution of a Gauge-Theory One of the crucial problems in modern cosmology is the cosmological constant (CC) problem [1]. Recently, a solution to the CC problem has been discussed with the help of chameleon fields in [2]. Chameleon fields [3, 4] are quantum fields, typically scalar, with a mass that is an increasing function of the matter density. Therefore, locally, these fields are heavy while, globally (on cosmological distances) or whenever the matter density is small, the chameleon is light. This peculiar density-dependent mass justifies the name “chameleon” (a schematic effective potential of this kind is sketched after this abstract). For reviews about chameleon fields the reader is referred to [5–8]. In reference [2], gravity is treated basically at a semiclassical level in the framework of the Modified Fujii's Model (MFM), but it is common knowledge that the CC problem becomes really acute only in the quantum gravity regime. Is it possible to provide a quantum description of gravitation in the MFM? One step forward was taken in [9] at the level of the effective action with the formulation of a Chameleonic Equivalence Principle (CEP): with the help of chameleon fields it is possible to show that quantum gravitation is equivalent to a conformal anomaly in the MFM [9]. One of the purposes of this article is to discuss the quantum description of gravity in the UV completion of the MFM. In particular, we will show that the MFM is obtained from heterotic M-theory. This connection with the string provides stronger theoretical grounds for our solution to the CC problem in the MFM. Another piece of good news is that the theory is UV finite, because we identify the string mass as our UV cut-off. Formally we have two conformal frames in the MFM but, as we will see, there is actually a unique conformal frame: the frame where particles' masses are field-dependent. We will call the conformal frame where a non-minimal coupling term (dilaton-dilaton-curvature) is present the string frame (S-frame). One essential element of the MFM is a conformal transformation from the S-frame to the Einstein frame (E-frame). Interestingly, a shift of the E-frame dilaton σ is related to a shift in the amount of scale invariance in the Einstein frame. This observation is the crucial property exploited in [2] to solve the CC problem. The role of conformal transformations in the MFM has been discussed in connection with other problems, for example (A) quantum gravity and the collapse of the wave function in quantum mechanics [9], and (B) solar physics [10]. In the course of this analysis we will revisit the non-equivalence of different conformal frames at the quantum level proposed in [2]. Our analysis of the MFM will be developed by exploiting a peculiar compactification of time: time will be compactified on an S^1/Z_2 orbifold. There are a number of consequences of this compactification. For example, as we will see, gravity is the time evolution of a gauge theory. How can we identify gravity with a quantum gauge interaction? In order to identify the strong interaction with gravity, we exploit holography and AdS/CFT [11] (for a recent book about AdS/CFT see [12]). Indeed, in AdS/CFT, gravity in N dimensions is dual to a gauge theory in N − 1 dimensions. This shift (N − 1) → N in the number of dimensions is the signature of the holographic nature of AdS/CFT. Maldacena's conjecture is useful in the MFM because, as we will discuss in this article, we will exchange space with time and, in this way, the holographic shift (N − 1) → N is simply due to the time dimension. In other words, AdS/CFT guarantees that gravity is dual to a gauge interaction but, in our paper, the holographic shift (N − 1) → N is due to time and, hence, we really identify gravity with the strong interaction, in the sense that, as already mentioned above, quantum gravity is the time evolution of a gauge interaction. As far as the organization of this paper is concerned, in Section 2 we summarize some useful results already discussed in the literature. The remaining sections contain the original contributions of this article. In Section 3 we analyze the non-equivalence of different conformal frames at the quantum level. Section 4 provides the link between the MFM and heterotic M-theory. The last section contains some concluding remarks.
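For orientation, the density-dependent mass invoked in this abstract is, in the generic chameleon construction of Khoury and Weltman, arranged through an effective potential of the following form (shown here only as a standard sketch; the specific MFM potential may differ):

    V_{\rm eff}(\phi) = V(\phi) + \rho\, A(\phi), \qquad \text{e.g.}\quad V(\phi) = \frac{M^{4+n}}{\phi^{n}}, \quad A(\phi) = e^{\beta\phi/M_{\rm Pl}},

    m_{\rm eff}^2(\rho) = V_{\rm eff}''\big(\phi_{\rm min}(\rho)\big).

Because the matter term ρ A(φ) steepens the effective potential, both the location of the minimum and the curvature there depend on the ambient density: m_eff grows with ρ, so the field mediates only a short-range force in dense environments while remaining light on cosmological scales.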
String-Theory is still 'the only game-in-town' and will never be rivaled: the TASI lectures, 2016, on Cosmological Observables and String-theory Inflation [1] provides a mechanism for generating the structure in the observed universe in a very simple way: it is seeded by quantum fluctuations during a primordial epoch of accelerated expansion [2]. This in itself is one of the most elegant results in physics, perhaps the most basic application of quantum field theory. Observational data sets provide unprecedented results on the spectrum and statistics of primordial perturbations. We will be particularly concerned with the tensor-to-scalar ratio r, the tilt n_s and other features of the power spectrum, and non-Gaussian corrections f_NL. Other measurements such as direct gravitational wave searches can also provide constraints and discovery potential, e.g. for cosmic strings. This confluence of theory and observation is a good situation, even though both sides have significant limitations. The subject is far reaching in other ways, including nontrivial connections to string theory, which will be our focus in these lectures. Before getting to that let me dispel a possible misconception. You will sometimes hear inflation described as fine tuned. However, models of inflation – including the simplest kind – can easily be radiatively stable with dynamically generated couplings, i.e. fully 'natural' from a Wilsonian low energy effective field theory point of view. This is the case for some of the simplest classes such as chaotic inflation [12] or more general large-field inflation scenarios such as Natural Inflation [13], as well as some examples of small-field inflation such as [14]. Moreover, low energy field theory is all that is required to describe most of the phenomenology; this goes back to pioneering works such as [2] for particular models, and one can capture the effect of enhanced symmetries on the physics of the perturbations in a very elegant, less model-dependent way using the recently developed effective theory [15] or more general calculations such as [16]. However, without imposing particular symmetries, this theory leaves open an enormous space of potential possibilities for the observables. The EFT Lagrangian even at the purely single-field level contains arbitrary functions of time. Moreover, as we will see, the contributions of very heavy fields can make a non-negligible imprint on the observables if they couple significantly to the inflaton, requiring a multi-field description. It is also true, as we will discuss in detail, that the process would be strongly affected by the presence of Planck-suppressed higher-dimension operators in the Lagrangian (there is sensitivity to an infinite sequence of such terms in the large-field cases just mentioned!). This raises the question of their existence and robustness in a complete theory of quantum gravity. In string theory we find interesting answers to these questions at the level of specific mechanisms for inflation. Some of these in part realize the classic ideas, but they all bring substantial new twists to the story. Examples include [17][19][20][21][22][23] along with a number of other interesting proposals. Regardless of their fate as literal models of primordial inflation, novel string-theoretic mechanisms continue to contribute to a more systematic understanding of the process of inflation and its perturbations, leading in turn to a more complete analysis of cosmological data.
There is a rich synergy between “top down” and “bottom up” approaches. In these lectures, we will start with some basics of inflationary cosmology, early universe observables, and their sensitivity to high energy physics. The energy scales involved in inflation are at most several orders of magnitude below the Planck scale, and the process can be described in the language of effective quantum field theory. Nonetheless, even Planck-suppressed quantum gravity corrections are dangerously irrelevant in the renormalization group sense, and cannot safely be neglected. An important focus will be the most extreme version of this arising in large-field inflation, where we will cover the relation between primordial gravitational waves and quantum gravity and its symmetry structure. The same is true of certain contributions of heavy fields with masses well above the Hubble scale of inflation, as has become clear recently [33]. As we will see in detail, this follows from the timescale involved in inflation, and also from the large (and growing) amount of relevant data. To set the stage, we will provide a brief, pedagogical introduction to the structure of string compactifications. Upon reduction to four dimensions, these are massively dominated by cosmological backgrounds with positive scalar potential energy. Although there are (in)famously many solutions in the landscape, the structure is enormously constrained. We will introduce several interesting mechanisms for inflation that arise consistently within these constraints. Despite the richness of the landscape, several of these dynamical mechanisms are sufficiently well-defined to motivate concrete empirical tests, which are underway. Finally, we will describe how the structure of string compactifications – particularly the absence of a hard cosmological constant, leading to their metastability – fits in a highly nontrivial way with a direct method for uplifting the AdS/CFT holographic duality to de Sitter spacetime. For lack of space and time, I will not be able to be comprehensive in these notes. Some other interesting aspects of the subject are covered in Senatore's lectures at this school. The literature contains numerous reviews such as [26][27] and [28].
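The sensitivity to Planck-suppressed operators emphasized in these lectures, and the special status of large-field models, can be made quantitative with the standard single-field slow-roll expressions (quoted here as textbook formulae, not as anything specific to the string constructions discussed above):

    \epsilon = \frac{M_{\rm Pl}^2}{2}\left(\frac{V'}{V}\right)^2, \qquad \eta = M_{\rm Pl}^2\,\frac{V''}{V},

    n_s - 1 \simeq 2\eta - 6\epsilon, \qquad r \simeq 16\,\epsilon, \qquad \Delta\phi \gtrsim M_{\rm Pl}\,\sqrt{r/8}\,\Delta N \quad \text{(Lyth bound)}.

With ΔN of order 50-60 e-folds, an observable tensor-to-scalar ratio r ≳ 0.01 forces a super-Planckian field excursion, which is why an infinite tower of Planck-suppressed operators, and hence the structure of the ultraviolet completion, matters for large-field inflation.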
Bio-Evolution: Cooperation is Favored by Natural Selection There has been much interest in studying evolutionary games in structured populations, often modelled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has usually been tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering.
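As a concrete illustration of the kind of dynamics analysed in this work, the sketch below simulates a Moran death-Birth process on a cycle in which every individual plays a three-player public goods game with its two neighbours, and compares the fixation probability of a single cooperator with the neutral baseline 1/N. The payoff scheme, multiplication factor and selection intensity are assumptions made for this toy example; it does not reproduce the paper's structure-coefficient condition or its pair/diffusion approximations.

import random

# Toy Moran death-Birth process on a cycle with a 3-player public goods game.
# All parameters below are illustrative assumptions, not taken from the paper.
N     = 30       # population size (cycle length)
R     = 1.8      # public-goods multiplication factor
COST  = 1.0      # cost paid by a cooperator in each game
DELTA = 0.025    # selection intensity (weak selection)
RUNS  = 5000     # independent fixation trials

def group_payoff(strats, center):
    """Payoffs of the three members of the group centred on `center`."""
    members = [(center - 1) % N, center, (center + 1) % N]
    n_coop = sum(strats[m] for m in members)
    share = R * COST * n_coop / len(members)
    return {m: share - (COST if strats[m] else 0.0) for m in members}

def payoff(strats, i):
    """Average payoff of individual i over the three groups it belongs to."""
    return sum(group_payoff(strats, c)[i] for c in ((i - 1) % N, i, (i + 1) % N)) / 3.0

def single_cooperator_fixates():
    strats = [0] * N
    strats[random.randrange(N)] = 1              # one cooperator in a sea of defectors
    while 0 < sum(strats) < N:
        dead = random.randrange(N)               # death: a uniformly random individual dies
        nbrs = [(dead - 1) % N, (dead + 1) % N]  # Birth: its neighbours compete for the slot
        fit = [1.0 + DELTA * payoff(strats, j) for j in nbrs]
        winner = random.choices(nbrs, weights=fit, k=1)[0]
        strats[dead] = strats[winner]            # offspring inherits the winner's strategy
    return sum(strats) == N

if __name__ == "__main__":
    fixed = sum(single_cooperator_fixates() for _ in range(RUNS))
    print(f"cooperator fixation probability ~ {fixed / RUNS:.4f} "
          f"(neutral baseline 1/N = {1 / N:.4f})")

Whether the measured fixation probability exceeds 1/N is one concrete operational reading of "cooperation is favored by natural selection" on this graph.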
Great News for Superstring Theory: Supergravity from D0-brane Quantum Mechanics The gauge/gravity duality conjecture claims the equivalence between gauge theory and superstring/M-theory. In particular, the one-dimensional gauge theory of D0-branes and type IIA string theory should agree on properties of hot black holes. Type IIA superstring theory predicts the leading N^2 behavior of the black hole internal energy to be E/N^2 = a_0 T^{14/5} + a_1 T^{23/5} + a_2 T^{29/5} + ..., with the supergravity prediction a_0 = 7.41 and unknown coefficients a_1, a_2, ... associated with stringy corrections. In order to test this duality we perform a lattice study of the gauge theory and extract a continuum, large-N value of a_0 = 7.4 ± 0.5 – the first direct confirmation of the supergravity prediction at finite temperature – and constrain the stringy corrections (a_1 = 9.7 ± 2.2 and a_2 = 5.6 ± 1.8). We also study the sub-leading 1/N^2 corrections to the internal energy.
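As a quick numerical illustration of what this fit implies, the snippet below evaluates the quoted curve at a few dimensionless temperatures using the central values from the abstract (uncertainties ignored), separating the leading supergravity term from the stringy corrections. The sample temperatures are arbitrary and chosen only for illustration.

# Internal energy curve quoted above: E/N^2 = a0*T^(14/5) + a1*T^(23/5) + a2*T^(29/5) + ...
a0, a1, a2 = 7.41, 9.7, 5.6   # a0: supergravity value; a1, a2: central values of the lattice fit

def energy_over_N2(T):
    """Truncated large-N internal energy per N^2 at dimensionless temperature T."""
    return a0 * T**(14 / 5) + a1 * T**(23 / 5) + a2 * T**(29 / 5)

for T in (0.2, 0.3, 0.5):
    leading = a0 * T**(14 / 5)
    total = energy_over_N2(T)
    print(f"T = {T:.1f}:  E/N^2 = {total:.4f}  (leading term {leading:.4f}, stringy part {total - leading:+.4f})")

At low temperatures the a_0 term dominates the truncated series, which is why lattice data in that regime can pin down the supergravity coefficient before the stringy corrections become important.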
More Good News for String-Theory: Holographic Construction of Quantum Field Theory using Wavelets Wavelets encode data at multiple resolutions, which, in a wavelet description of a quantum field theory, allows fields to carry, in addition to space-time coordinates, an extra dimension: scale. A recently introduced Exact Holographic Mapping [C.H. Lee and X.-L. Qi, Phys. Rev. B 93, 035112 (2016)] uses the Haar wavelet basis to represent the free Dirac fermionic quantum field theory (QFT) at multiple renormalization scales, thereby inducing an emergent bulk geometry in one higher dimension. This construction is, in fact, generic, and we show how higher families of Daubechies wavelet transforms of 1+1 dimensional scalar bosonic QFT generate a bulk description with a variable rate of renormalization flow. In the massless case, where the boundary is described by conformal field theory, the bulk correlations decay with distance in a manner consistent with an anti-de Sitter space (AdS_3) metric whose radius of curvature depends on the wavelet family used. We propose an experimental demonstration of the bulk/boundary correspondence via a digital quantum simulation using Gaussian operations on a set of quantum harmonic oscillator modes.
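The basic idea, that a wavelet transform organises a field configuration by scale and thereby supplies the extra bulk coordinate, can already be seen in the simplest (Haar) case. The sketch below performs a toy multiresolution decomposition of a classical one-dimensional configuration; it is not the Exact Holographic Mapping of the paper, and the test data are invented purely for illustration.

import numpy as np

# Toy Haar multiresolution decomposition: each pass splits the data into
# coarse-grained averages and detail coefficients, so the output is naturally
# labelled by an extra "scale" index (the would-be bulk direction).

def haar_step(signal):
    """One Haar level: return (coarse averages, detail coefficients)."""
    pairs = signal.reshape(-1, 2)
    coarse = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return coarse, detail

def haar_pyramid(signal, levels):
    """Repeated coarse-graining, keeping the detail coefficients at every scale."""
    layers, current = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        current, detail = haar_step(current)
        layers.append(detail)        # one layer of the emergent scale direction
    layers.append(current)           # coarsest remaining averages
    return layers

if __name__ == "__main__":
    field = np.cos(2 * np.pi * np.arange(16) / 16)   # hypothetical 16-site configuration
    for depth, layer in enumerate(haar_pyramid(field, levels=3)):
        print(f"scale layer {depth}: {np.round(layer, 3)}")

Swapping the Haar filters for higher Daubechies families changes how sharply each layer separates scales, which is, loosely speaking, the knob the abstract describes as a variable rate of renormalization flow.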