Paradoxes and Primitive Ontology in Collapse Theories of Quantum Mechanics

Collapse theories are versions of quantum mechanics according to which the
collapse of the wave function is a real physical process. They propose precise
mathematical laws to govern this process and to replace the vague conventional
prescription that a collapse occurs whenever an “observer” makes a “measurement.”
The “primitive ontology” of a theory (more or less what Bell called the
“local beables”) are the variables in the theory that represent matter in spacetime.
There is no consensus about whether collapse theories need to introduce a
primitive ontology as part of their definition. I make some remarks on this question
and point out that certain paradoxes about collapse theories are absent if a
primitive ontology is introduced.

Although collapse theories (Ghirardi, 2007) were invented to overcome the paradoxes
of orthodox quantum mechanics, several authors have set up similar paradoxes in
collapse theories. I argue here, following Monton (2004), that these paradoxes evaporate
as soon as a clear choice of the primitive ontology is introduced, such as the flash
ontology or the matter density ontology. In addition, I give a broader discussion of the
concept of primitive ontology, what it means and what it is good for.
According to collapse theories of quantum mechanics, such as the Ghirardi–Rimini–
Weber (GRW) theory (Ghirardi et al., 1986; Bell, 1987a) or similar ones (Pearle, 1989;
Diósi, 1989; Bassi and Ghirardi, 2003), the time evolution of the wave function ψ in our
world is not unitary but instead stochastic and non-linear; and the Schrödinger equation is merely an approximation, valid for systems of few particles but not for macroscopic
systems, i.e., systems with (say) 10^23 or more particles. The time evolution law for ψ
provided by the GRW theory is formulated mathematically as a stochastic process, see,
e.g., (Bell, 1987a; Bassi and Ghirardi, 2003; Allori et al., 2008), and can be summarized
by saying that the wave function ψ of all the N particles in the universe evolves as
if somebody outside the universe made, at random times with rate Nλ, an unsharp
quantum measurement of the position observable of a randomly chosen particle. “Rate
Nλ” means that the probability of an event in time dt is equal to Nλ dt; λ is a constant
of order 10^−15 sec^−1. It turns out that the empirical predictions of the GRW theory
agree with the rules of standard quantum mechanics up to deviations that are so small
that they cannot be detected with current technology (Bassi and Ghirardi, 2003; Adler,
2007; Feldmann and Tumulka, 2012; Bassi and Ulbricht, 2014; Carlesso et al., 2016).
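The rate argument above can be made concrete with a small numerical sketch. This is an order-of-magnitude illustration only: the value of λ and the particle numbers are the rough figures quoted in the text, not precise parameters of any specific GRW model.

```python
import numpy as np

# Order-of-magnitude sketch of the GRW collapse rate (lambda as quoted in the
# text; exact parameter values differ between formulations of the theory).
LAMBDA = 1e-15  # spontaneous collapse rate per particle, in sec^-1

def expected_collapses(n_particles, seconds):
    """Expected number of collapse events: total rate N*lambda times duration."""
    return n_particles * LAMBDA * seconds

def sample_first_collapse_time(n_particles, seed=0):
    """Waiting time to the first collapse: exponential with rate N*lambda."""
    rng = np.random.default_rng(seed)
    return rng.exponential(1.0 / (n_particles * LAMBDA))

# A single particle: ~1e-15 expected collapses per second, i.e. one collapse
# every tens of millions of years on average -- quantum behaviour survives.
print(expected_collapses(1, 1.0))
# A macroscopic system with ~1e23 particles: ~1e8 collapses per second,
# which is why macroscopic superpositions decay almost instantly.
print(expected_collapses(1e23, 1.0))
```

This is the sense in which GRW interpolates between the Schrödinger evolution for few-particle systems and rapid collapse for macroscopic ones.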
The merit of collapse theories, also known as dynamical state reduction theories, is
that they are “quantum theories without observers” (Goldstein, 1998), as they can be
formulated in a precise way without reference to “observers” or “measurements,” although
any such theory had been declared impossible by Bohr, Heisenberg, and others.
Collapse theories are not afflicted with the vagueness, imprecision, and lack of clarity of
ordinary, orthodox quantum mechanics (OQM). Apart from the seminal contributions by
Ghirardi et al. (1986); Bell (1987a); Pearle (1989); Diósi (1989, 1990), and a precursor by
Gisin (1984), collapse theories have also been considered by Gisin and Percival (1993);
Leggett (2002); Penrose (2000); Adler (2007); Weinberg (2012), among others. A feature
that makes collapse models particularly interesting is that they possess extensions to
relativistic space-time that (unlike Bohmian mechanics) do not require a preferred foliation
of space-time into spacelike hypersurfaces (Tumulka, 2006a,b; Bedingham et al.,
2014); see Maudlin (2011) for a discussion of this aspect.
Collapse theories have been understood in two very different ways: some authors
[e.g., Bell (1987a); Ghirardi et al. (1995); Goldstein (1998); Maudlin (2007); Allori et al.
(2008); Esfeld (2014)] think that a complete specification of a collapse theory requires,
besides the evolution law for ψ, a specification of variables describing the distribution of
matter in space and time (called the primitive ontology or PO), while other authors [e.g.,
Albert and Loewer (1990); Shimony (1990); Lewis (1995); Penrose (2000); Adler (2007);
Pearle (2009); Albert (2015)] think that a further postulate about the PO is unnecessary
for collapse theories. The goals of this paper are to discuss some aspects of these two
views, to illustrate the concept of PO, and to convey something about its meaning and
relevance. I begin by explaining in more detail what is meant by ontology (Section 2) and
primitive ontology (Section 3). Then (Section 4), I discuss three paradoxes about GRW
from the point of view of PO. In Section 5, I turn to a broader discussion of PO. Finally
in Section 6, I describe specifically its relation to the mind-body problem.
The Impact of the Higgs on Einstein’s Gravity and the Geometry of Spacetime

The experimental observation of the Higgs particle at
the LHC has confirmed that the Higgs mechanism is
a natural phenomenon, through which the particles of
the standard model of interactions (smi) acquire their
masses from the spectrum of eigenvalues of the Casimir
mass operator of the Poincaré group. The fact that the
masses and orbital spins defined by the Poincaré group
appear in particles of that model, consistent with the
internal (gauge) symmetries, naturally suggests the existence
of some kind of combination between all symmetries
of the total Lagrangian. However, such “symmetry
mixing” sits at the core of an acute mathematical
problem which emerged in the 1960s, after some “no-go”
theorems showed the impossibility of arbitrary combinations
of the Poincaré group with the internal
symmetry groups. More specifically, it was shown that
the particles belonging to the same internal spin multiplet
would necessarily have the same mass, in complete
disagreement with the observations [1, 2].
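For orientation, the mass referred to here is the eigenvalue of the first Casimir invariant of the Poincaré group; the standard textbook expressions (not specific to this paper) are:

```latex
C_1 = P_\mu P^\mu, \qquad C_1\,|m,s\rangle = m^2\,|m,s\rangle,
\qquad
C_2 = W_\mu W^\mu, \qquad C_2\,|m,s\rangle = -m^2\,s(s+1)\,|m,s\rangle,
\qquad
W^\mu = \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma} P_\nu M_{\rho\sigma},
```

where W^μ is the Pauli-Lubanski vector; the no-go theorems concerned precisely the impossibility of mixing these invariants nontrivially with internal symmetry generators.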
It took a considerable time to understand that the
problem was located in the somewhat destructive “nilpotent
action” of the translational subgroup of the Poincaré
group over the spin operators of the electroweak symmetry
U(1) × SU(2) [3, 4]. Among the proposed solutions,
one line of thought suggested a simple replacement of the
Poincaré group by some other Lie symmetry, for example
the 10-parameter homogeneous de Sitter groups.
Another, more radical proposal suggested the replacement
of the whole Lie algebra structure by a graded Lie
algebra, in the framework of the super-string program.
Such proposals have shaped the subsequent development
of high energy physics and cosmology over
the following four or five decades, up to today.
Here, following a comment by A. Salam [5], we present a
new view of the symmetry mixing problem, based on the
Higgs vacuum symmetry. In order to assign masses to
all particles of the smi, in accordance with the eigenvalues
of the Casimir mass operator of the Poincaré group,
the vacuum symmetry must remain an exact symmetry
mixed with the Poincaré group. Admittedly, this is not
too obvious because the Higgs mechanism requires the
breaking of the vacuum symmetry and consequently also
of the mixing.

We start with the analysis of the Higgs vacuum symmetry and its relevance to the solution of the
symmetry mixing problem. In the sequence, we explore
the fact that the mixing with the Poincaré group also
implies the emergence of particles with higher spins,
including the relevant case of the Fierz-Pauli theory of
spin-2 fields in the smi. We end with the proposition of a
new, massive spin-2 particle of geometric nature, acting
as a short range carrier of the gravitational field, complementing
the long range Einstein’s gravitational interaction.
We begin by drawing an analogy between the “Mexican
hat” shape of the Higgs potential and a casino roulette.
The roulette works by the combined action of gravitation
with the spin produced by the action of the croupier over
the playing ball. The ball’s energy eventually runs out as
it “naturally falls” into one of the numbered slots at the
bottom of the roulette, producing a winning number. In
our analogy, the playing ball represents a particle of the
standard model and the numbered slots at the bottom of
the roulette correspond to the Higgs vacuum, represented by
a circumference at the bottom of the hat, whose symmetry
group is SO(2). A difference is that while the slots in
the roulette are labeled by the integers, the bottom circle
of the Mexican hat is a continuous manifold parametrized
by an angle, taking specific real values in the interval
[0, 2π). When a particle falls into the vacuum, it "wins a
mass" so to speak; not any mass, but only one of a set of discrete, positive,
isolated real mass values, which correspond to
eigenvalues of the Casimir mass operator of the
Poincaré group [27]. In other words, the measurement of
one particle mass in its vacuum state is an “observational
condition” of the Higgs theory, which in our analogy corresponds
to stopping the roulette so that every player
can read and confirm who the winner is; this does not end
the game. The roulette will spin again, so that all other
particles also may have the chance of winning a mass.
The spontaneous breaking of the vacuum symmetry
does not eliminate that symmetry. Consequently, the
Higgs mechanism requires that the vacuum symmetry is
exact, breaking only at the moment of assigning the mass
to any given particle.
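The “Mexican hat” of the analogy is the standard Higgs potential; for a complex field φ the textbook form (not specific to this paper) is:

```latex
V(\phi) = \mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2,
\qquad \mu^2 < 0,\ \lambda > 0,
```

whose minima form the degenerate vacuum manifold |φ|² = −μ²/(2λ) ≡ v²/2, the “bottom circle” of the hat on which the SO(2) vacuum symmetry acts.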
Uniting Grand Unified Theory with Brane World Scenario

We present a field theoretical model unifying grand unified theory (GUT) and brane world scenario.
As a concrete example, we consider SU(5) GUT in 4+1 dimensions where our 3 + 1 dimensional
spacetime spontaneously arises on five domain walls. A field-dependent gauge kinetic
term is used to localize massless non-Abelian gauge fields on the domain walls and to assure
the charge universality of matter fields. We find the domain walls with the symmetry breaking
SU(5) → SU(3) × SU(2) × U(1) as a global minimum and all the undesirable moduli are stabilized
with the mass scale of MGUT. Profiles of massless Standard Model particles are determined as a
consequence of wall dynamics. The proton decay can be exponentially suppressed.

VI. CONCLUDING REMARKS

We propose a 4 + 1 dimensional model which unifies
SU(5) GUT and the brane world scenario. Our 3 + 1 dimensional
spacetime dynamically emerges with the symmetry
breaking SU(5) → GSM together with one generation
of the SM matter fields. We solve the gradient
flow equation and confirm the 3-2 splitting configuration
is the global minimum in a large parameter region. By
applying the idea of the field-dependent gauge kinetic
function [24–26] to our model, we solve the long-standing
difficulties of the localization of massless gauge fields and
charge universality. All the undesirable moduli are stabilized.
Furthermore, the proton decay can be exponentially
suppressed.
We have not yet included the SM Higgs field and the
second and higher generations, but our framework can
easily incorporate the former similarly to Ref. [14] and the
latter with the mass hierarchy in the spirit of Refs. [30, 31].
Furthermore, our model can be extended to other GUT
gauge groups such as SO(10). Supersymmetry and/or warped
spacetime with gravity can also be included without serious
difficulties. Since our model has strong resemblance
to D-branes in superstring theory, we hope that our field theoretical model can give some hints for simple constructions
of SM by D-branes.
Supersymmetric Quantum Mechanics and Topology

Supersymmetric quantum mechanical models are computed by the path integral approach. In the localization limit, the integrals reduce to the zero modes. This allows us to perform the index computations exactly via supersymmetric localization, and we will show how the geometry of the target space enters the physics of sigma models, resulting in a relationship between the supersymmetric model and the geometry of the target space in the form of topological invariants. Explicit computation details are given for the Euler characteristic of the target manifold and the index of the Dirac operator for the model on a spin manifold.
1. Introduction
Supersymmetry is a quantum mechanical space-time symmetry which induces transformations between bosons and fermions. The generators of this symmetry are spinors, which are anticommuting (fermionic) variables rather than the ordinary commuting (bosonic) variables; hence their algebra involves anticommutators instead of commutators. A unified framework consisting of bosons and fermions thus became possible, both combined in the same supersymmetric multiplet [1]. It is overwhelmingly accepted that supersymmetry is an essential feature of any unified theory, as it not only provides a unified ground for bosons and fermions but is also helpful in reducing ultraviolet divergences. It was discovered by Gel’fand and Likhtman [2], Ramond [3], Neveu and Schwarz [4], and later by a few other physicists [1, 5]. Whether supersymmetry (SUSY) is actually realized in nature or not is still not clear; however, it has provided powerful mathematical tools, and an enormous amount of insight has been obtained [6]. For example, SUSY can be used to unify the space-time and internal symmetries of the S-matrix, avoiding the no-go theorem of Coleman and Mandula [7], and imposing local gauge invariance on SUSY gives rise to supergravity [8, 9]. In such theories, locally gauged SUSY gives rise to Einstein’s general theory of relativity, which highlights that local SUSY theories give a natural framework for the unification of gravity and the other fundamental forces.
Supersymmetric quantum mechanics was originally developed by Witten [10] as a toy model to test the breaking of supersymmetry. In answering the same question, SUSY was also studied in the simplest case of SUSY QM by Cooper and Freedman [11]. In a later paper, the so-called “Witten index” was proposed by Witten [12]; it is a topological invariant and essentially provides a tool to study SUSY breaking nonperturbatively. A year later, Bender et al. [13] proposed a new critical index to study SUSY breaking in a lattice-regulated system nonperturbatively. In its early days, then, SUSY QM was studied chiefly as a testbed for nonperturbative SUSY breaking.
Later, when people started to explore further aspects of SUSY QM, it was realized that this was a field of research worthy of further exploration in its own right. The introduction of the topological index by Witten [12] attracted a lot of attention from the physics community and people started to study different topological aspects of SUSY QM.
The Witten index was extensively explored, and it was shown that the index exhibits anomalies in certain theories with discrete and continuous spectra [14–18]. Using SUSY QM, proofs of the Atiyah-Singer index theorem were given [19–21]. A link between SUSY QM and stochastic differential equations was investigated in [22], and was used to prove results about stochastic quantization; Salomonson and van Holten were the first to give a path integral formulation of SUSY QM [23]. The ideas from SUSY QM were extended to higher dimensional systems and systems with many particles, in order to apply them to problems in different branches of physics, for example, condensed matter physics, atomic physics, and statistical physics [24–29]. Another interesting application is [30], in which the low energy dynamics of -monopoles in supersymmetric Yang-Mills theory are determined by supersymmetric quantum mechanics based on the moduli space of static monopole solutions.
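For reference, the Witten index discussed above has the standard definition (as in [12]):

```latex
\Delta = \mathrm{Tr}\,(-1)^F e^{-\beta H} = n_B^{E=0} - n_F^{E=0},
```

counting bosonic minus fermionic zero-energy states; it is independent of β because states of nonzero energy pair up under supersymmetry and cancel, which is what makes Δ a topological invariant and a nonperturbative probe of SUSY breaking.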
There are also situations where SUSY QM arises naturally, for example, in the semiclassical quantization of instanton solitons in field theory. In the classical limit, the dynamics can often be described in terms of motion on the moduli space of the instanton solitons. Semiclassical effects are then described by quantum mechanics on the moduli space. In a supersymmetric theory, soliton solutions generally preserve half the supersymmetries of the parent theory, and these are inherited by the quantum mechanical system. In line with this, Hollowood and Kingaby in [31] show that a simple modification of SUSY QM involving a mass term for half the fermions naturally leads to a derivation of the integral formula for the genus, a quantity that interpolates between the Euler characteristic and the arithmetic genus.
The research work in the direction of using supersymmetry to exploit topology occurred in phases: the first started in the early 80s with the work of Witten [10, 32], Álvarez-Gaumé [33], and Friedan and Windey [34], and the later phase, starting in the late 80s and early 90s, is still going on. A couple of major breakthroughs in the second phase were due to Witten: in [35], the Jones polynomials for knot invariants were understood quantum field theoretically, and, in [36], Donaldson’s invariants for four-manifolds. Supersymmetric localization is a powerful technique to achieve exact results in quantum field theories. A recent development using the supersymmetric localization technique is the exact computation of the entropy of black holes by a topologically twisted index of ABJM theory [37]. SUSY QM also has important applications in mathematical physics, as in providing simple proofs of index theorems, which establish a connection between topological properties of differentiable manifolds and local properties.
This review gives a basic introduction to supersymmetric quantum mechanics and later establishes SUSY QM’s relevance to the index theorem. We will consider a couple of problems in dimensions, that is, supersymmetric quantum mechanics, using supersymmetric path integrals, to illustrate the relationship between the physics of the supersymmetric model and the geometry of the background space, which is some manifold, in the form of the Euler characteristic of this manifold. Furthermore, for a manifold admitting a spin structure, we study a more refined model which yields the index of the Dirac operator. Both the Euler characteristic of a manifold and the index of the Dirac operator are the Witten indices of appropriate supersymmetric quantum mechanical systems. Put differently, we will reveal the connection between supersymmetry and the index theorem by path integrals.
The organization of this paper is as follows: Section 2 is an introduction to the calculus of Grassmann variables and their properties. Section 3 is an introduction to the Gaussian integrals, for both commuting (bosonic) and anticommuting (fermionic) variables including some basic examples. Section 4 involves the study of supersymmetric sigma models on both flat and curved space. Section 5 is the summary and conclusion.
Three conflicts between quantum theory and general relativity, which make it implausible that a quantum theory of gravity can be arrived at by quantising Einsteinian gravity

We highlight three conflicts between quantum theory and classical general relativity, which make it implausible that a quantum theory of gravity can be
arrived at by quantising classical gravity. These conflicts are: quantum nonlocality
and space-time structure; the problem of time in quantum theory; and the quantum
measurement problem. We explain how these three aspects bear on each other, and
how they point towards an underlying noncommutative geometry of space-time.
Why we Need to Quantise Everything, Including Gravity

There is a long-standing debate about whether gravity should be quantised. A powerful line of argument in
favour of quantum gravity considers models of hybrid systems consisting of coupled quantum-classical sectors. The conclusion is that such models are inconsistent: either the quantum sector’s defining properties necessarily spread to the classical sector, or they are violated. These arguments have a long history, starting with the debates about the quantum nature of the electromagnetic fields in the early days of quantum theory. Yet, they have limited scope because they rely on particular dynamical models obeying restrictive conditions, such as unitarity. In this paper we propose a radically new, more general argument, relying on less restrictive assumptions. The key feature is an information-theoretic characterisation of both sectors, including their interaction, via constraints on copying operations. These operations are necessary for the existence of observables in any physical theory, because they constitute the most general representation of measurement interactions. Remarkably, our argument is formulated without resorting to particular dynamical models, thus being applicable to any hybrid system, even those ruled by “post-quantum” theories. Its conclusion is also compatible with partially quantum systems, such as those that exhibit features like complementarity, but may lack others, such as entanglement. As an example, we consider a hybrid system of qubits and rebits. Surprisingly, despite the rebit’s lack of complex amplitudes, the signature quantum protocols such as teleportation are still possible.
The Salecker-Wigner-Peres Quantum Clock, Feynman Paths, and a Tunnelling Time that Should NOT Exist

The Salecker-Wigner-Peres (SWP) clock is often used to determine the duration a quantum particle
is supposed to spend in a specified region of space Ω. By construction, the result is a real positive
number, and the method seems to avoid the difficulty of introducing complex time parameters, which
arises in the Feynman paths approach. However, it tells very little about what is being learnt about
the particle’s motion. We investigate this matter further, and show that the SWP clock, like any
other Larmor clock, correlates the rotation of its angular momentum with the durations τ Feynman
paths spend in Ω, therefore destroying interference between different durations. An inaccurate
weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting “which
way?” problem is the main difficulty at the centre of the “tunnelling time” controversy. In the
absence of a probability distribution for the values of τ, the SWP results are expressed in terms
of moduli of the “complex times”, given by the weighted sums of the corresponding probability
amplitudes. It is shown that over-interpretation of these results, by treating the SWP times as
physical time intervals, leads to paradoxes and should be avoided. We analyse various settings of
the SWP clock, different calibration procedures, and the relation between the SWP results and the
quantum dwell time. Our general analysis is applied to the cases of stationary tunnelling and tunnel
ionisation.
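The quantum dwell time mentioned at the end has a standard definition (the general textbook expression, not the SWP-specific analysis):

```latex
\tau_D(\Omega) = \int_{-\infty}^{\infty} dt \int_{\Omega} |\Psi(x,t)|^2 \, dx ,
```

i.e. the time-integrated probability of finding the particle inside Ω, which is real and positive by construction but, like the SWP reading, carries no information about interference between different durations τ.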
Towards a complete ∆(27) × SO(10) SUSY Grand Unified Theory

I discuss a renormalisable model based on ∆(27) family symmetry with
an SO(10) grand unified theory (GUT) with spontaneous geometrical CP
violation. The symmetries are broken close to the GUT breaking scale,
yielding the minimal supersymmetric standard model. Low-scale Yukawa
structure is dictated by the coupling of matter to ∆(27) antitriplets φ
whose vacuum expectation values are aligned in the CSD3 directions by
the superpotential. Light physical Majorana neutrino masses emerge
from the seesaw mechanism within SO(10). The model predicts a normal
neutrino mass hierarchy with the best-fit lightest neutrino mass m1 ∼ 0.3
meV, CP-violating oscillation phase δ^l ≈ 280°, and the remaining neutrino
parameters all within 1σ of their best-fit experimental values.

Introduction
It is well established that the Standard Model (SM) remains incomplete as long as it fails
to explain why neutrinos have mass. Small Dirac masses may be added by hand, but
this gives no insight into the Yukawa couplings of fermions to Higgs (where a majority
of free parameters in the SM originate), or the extreme hierarchies in the fermion mass
spectrum, ranging from neutrino masses of O(meV) to a top mass of O(100) GeV.
Understanding this, and flavour mixing among quarks and leptons, constitutes the
flavour puzzle. Other open problems unanswered by the SM include the sources of
CP violation (CPV), as well as the origin of three distinct gauge forces, and why
they appear to be equal at very high energy scales.
An approach to solving these puzzles is to combine a Grand Unified Theory (GUT)
with a family symmetry which controls the structure of the Yukawa couplings. In the
highly attractive class of models based on SO(10) [1], three right-handed neutrinos
are predicted and neutrino mass is therefore inevitable via the seesaw mechanism.
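The seesaw mechanism invoked here has the standard type-I form (textbook expression, not specific to this model):

```latex
m_\nu \simeq - m_D \, M_R^{-1} \, m_D^T ,
```

so the light neutrino masses are suppressed as m_D²/M_R: with Dirac masses m_D at the electroweak scale and right-handed Majorana masses M_R near the GUT scale, masses of O(meV), as in the prediction above, arise naturally.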
In this paper I summarise a recently proposed model [2], renormalisable at the
GUT scale, capable of addressing all the above problems, based on ∆(27) × SO(10).
Affine Kac-Moody Algebras and the Wess-Zumino-Witten Model

In 1984, Belavin, Polyakov and Zamolodchikov [1] showed how an infinite-dimensional field theory problem could effectively be reduced to a finite problem, by the presence of
an infinite-dimensional symmetry. The symmetry algebra was the Virasoro algebra, or
two-dimensional conformal algebra, and the field theories studied were examples of two-dimensional
conformal field theories. The authors showed how to solve the minimal models
of conformal field theory, so-called because they realise just the Virasoro algebra, and they
do it in a minimal fashion. All fields in these models could be grouped into a discrete, finite
set of conformal families, each associated with a representation of the Virasoro algebra.
This strategy has since been extended to a large class of conformal field theories with
similar structure, the rational conformal field theories (RCFT’s) [2]. The new feature is
that the theories realise infinite-dimensional algebras that contain the Virasoro algebra as
a subalgebra. The larger algebras are known as W-algebras [3] in the physics literature. Thus the study of conformal field theory (in two dimensions) is intimately tied to infinitedimensional algebras. The rigorous framework for such algebras is the subject of vertex (operator) algebras [4] [5]. A related, more physical approach is called meromorphic conformal
field theory [6].

Special among these infinite-dimensional algebras are the affine Kac-Moody algebras (or
their enveloping algebras), realised in the Wess-Zumino-Witten (WZW) models [7]. They are the simplest infinite-dimensional extensions of ordinary semi-simple Lie algebras. Much
is known about them, and so also about the WZW models. The affine Kac-Moody algebras
are the subject of these lecture notes, as are their applications in conformal field theory.
For brevity we restrict consideration to the WZW models; the goal will be to indicate how
the affine Kac-Moody algebras allow the solution of WZW models, in the same way that
the Virasoro algebra allows the solution of minimal models, and W-algebras the solution
of other RCFT’s. We will also give a couple of examples of remarkable mathematical
properties that find an “explanation” in the WZW context.
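For orientation, the defining commutation relations of an (untwisted) affine Kac-Moody algebra, realised by the current modes of a WZW model, take the standard form:

```latex
[J^a_m, J^b_n] = i f^{ab}{}_{c}\, J^c_{m+n} + k\, m\, \delta^{ab}\, \delta_{m+n,0},
```

where f^{ab}{}_c are the structure constants of the underlying finite-dimensional simple Lie algebra, and the central term k (the level) is the extension that distinguishes the affine algebra from the ordinary loop algebra.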
One might think that focusing on the special examples of affine Kac-Moody algebras is
too restrictive a strategy. There are good counter-arguments to this criticism. Affine Kac-Moody
algebras can tell us about many other RCFT’s: the coset construction [8] builds a
large class of new theories as differences of WZW models, roughly speaking. Hamiltonian
reduction [9] constructs W-algebras from the affine Kac-Moody algebras. In addition,
many more conformal field theories can be constructed from WZW and coset models by
the orbifold procedure [10] [11]. Incidentally, all three constructions can be understood in
the context of gauged WZW models.
Along the same lines, the question “Why study two-dimensional conformal field theory?”
arises. First, these field theories are solvable non-perturbatively, and so are toy models
that hopefully prepare us to treat the non-perturbative regimes of physical field theories.
Being conformal, they also describe statistical systems at criticality [12]. Conformal field
theories have found application in condensed matter physics [13]. Furthermore, they are
vital components of string theory [14], a candidate theory of quantum gravity, that also
provides a consistent framework for unification of all the forces.
The basic subject of these lecture notes is close to that of [15]. It is hoped, however,
that this contribution will complement that of Gawedzki, since our emphases are quite
different.
The layout is as follows. Section 2 is a brief introduction to the WZW model, including
its current algebra. Affine Kac-Moody algebras are reviewed in Section 3, where some
background on simple Lie algebras is also provided. Both Sections 2 and 3 lay the foundation
for Section 4: it discusses applications, especially 3-point functions and fusion rules.
We indicate how a priori surprising mathematical properties of the algebras find a natural
framework in WZW models, and their duality as rational conformal field theories.
Towards optimal experimental tests on the reality of the quantum state

The Barrett–Cavalcanti–Lal–Maroney (BCLM) argument stands as the most effective means of demonstrating the reality of the quantum state. Its advantages include being derived from very few assumptions, and a robustness to experimental error. Finding the best way to implement the argument experimentally is an open problem, however, and involves cleverly choosing sets of states and measurements. I show that techniques from convex optimisation theory can be leveraged to numerically search for these sets, which then form a recipe for experiments that allow for the strongest statements about the ontology of the wavefunction to be made. The optimisation approach presented is versatile, efficient and can take account of the finite errors present in any real experiment. I find significantly improved low-cardinality sets which are guaranteed partially optimal for a BCLM test in low Hilbert space dimension. I further show that mixed states can be more optimal than pure states.
Bell Violation in Primordial Cosmology

According to the inflationary model, the primordial fluctuations which we see in the CMB (Cosmic Microwave Background) were produced by quantum mechanical effects in the early universe. These fluctuations are the origin of the formation of large-scale structure, but the fluctuations we observe at present are actually classical in nature. The highly-entangled quantum mechanical wave function of the universe plays a very important role in the quantum mechanical interpretation of the required fluctuations. Hence, one can use the Hartle–Hawking wave function in de Sitter space. Due to this fact, quantum mechanical fluctuations can be theoretically demonstrated, and can also be implemented in the context of primordial cosmology, if and only if we perform a cosmological experiment using the highly-entangled quantum mechanical wave function of the universe which is defined in the inflationary period and eventually violates Bell’s inequality [1]. To describe the background methodology, it is important to mention that in the context of quantum mechanics, a Bell test experiment can be described by the measurement of two non-commuting physical operators. These operators are associated with two distinct locations in space-time. Thus, using the same analogy in the context of primordial cosmology, we can perform cosmological observations at two spatially separated and causally disconnected places up to the epoch of reheating (after inflation). During these observations we can measure the numerical values of various cosmological observables (along with cosmic variance), which can also be computed from the scalar curvature fluctuation. However, it is important to note that for all such observations, we cannot measure the value of the associated canonically conjugate momentum. Hence, for these cosmological observables, we cannot measure the imprints of two non-commuting operators in primordial cosmology.
There is a subtle point, however: if these observables satisfy the minimum requirements of the decoherence effect, then we can possibly perform measurements of two exactly commuting cosmological observables, and therefore we will be able to design a Bell-inequality-violating cosmological experimental setup. We know that in quantum theory, to design such an experimental setup one has to perform a number of repeated measurements on the same object (which in this context is the same quantum state of the universe), and in such a physical situation we can justify the appearance of each and every measurement using a single quantum state. In the case of primordial cosmology, we can do the same thing; that is, consider two spatially separated portions in the sky which play the same role of performing a repeated cosmological Bell-inequality-violating experiment using the same quantum mechanical state. Therefore, we have the advantage of choosing the required properties of two spatially separated portions in the sky in order to set up the Bell-inequality-violating experiment. It is completely possible to set up such a cosmological experiment if we can find a link which connects these non-commuting cosmological observables and the classical probability distribution function originating from a model of inflation (see References [2,3,4] for more details). In this article, we explore this possibility in detail. See also References [5,6,7,8], where the authors have also studied various consequences of the violation of Bell's inequalities in other areas.
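The quantum-mechanical benchmark that any such cosmological Bell test would target can be illustrated with the standard laboratory CHSH setup. This is a textbook calculation, not a cosmological one; the singlet state and the measurement angles are the usual illustrative choices.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(angle):
    """Spin observable along an axis in the x-z plane."""
    return np.cos(angle) * sz + np.sin(angle) * sx

def correlation(a, b):
    """E(a, b) = <singlet| A(a) tensor B(b) |singlet> = -cos(a - b)."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard CHSH angles: a = 0, a' = pi/2, b = pi/4, b' = -pi/4.
a, ap, b, bp = 0, np.pi / 2, np.pi / 4, -np.pi / 4
S = (correlation(a, b) + correlation(a, bp)
     + correlation(ap, b) - correlation(ap, bp))
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```

Quantum mechanics attains |S| = 2√2 (the Tsirelson bound), whereas any local hidden-variable model satisfies |S| ≤ 2; the cosmological proposal above seeks an analogue of this gap in the sky.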
A Mathematics-to-Physics Dictionary: from Classical and Quantum, to Hamiltonian, and Lagrangian Particle Mechanics The aim of this work is to show that particle mechanics, both classical and quantum, Hamiltonian and Lagrangian, can be derived from a few simple physical assumptions. Assuming deterministic and reversible time evolution will give us a dynamical system whose set of states forms a topological space and whose law of evolution is a self-homeomorphism. Assuming the system is infinitesimally reducible—specifying the state and the dynamics of the whole system is equivalent to giving the state and the dynamics of its infinitesimal parts—will give us a classical Hamiltonian system. Assuming the system is irreducible—specifying the state and the dynamics of the whole system tells us nothing about the state and the dynamics of its substructure—will give us a quantum Hamiltonian system. Assuming kinematic equivalence, that studying trajectories is equivalent to studying state evolution, will give us Lagrangian mechanics and limit the form of the Hamiltonian/Lagrangian to the one with scalar and vector potential forces.
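The first assumption above, deterministic and reversible time evolution, amounts to requiring that the law of evolution be an invertible map on the space of states. A minimal sketch (the specific map, Arnold's cat map on a discrete torus, is an illustrative choice, not part of the paper):

```python
# Deterministic, reversible evolution as an invertible map:
# the Arnold cat map on an N x N discrete torus is a bijection,
# so composing it with its inverse returns every state.
N = 16

def forward(x, y):
    # Matrix [[2, 1], [1, 1]] acting mod N (determinant 1).
    return (2 * x + y) % N, (x + y) % N

def backward(x, y):
    # Inverse matrix [[1, -1], [-1, 2]] acting mod N.
    return (x - y) % N, (-x + 2 * y) % N

points = [(x, y) for x in range(N) for y in range(N)]
# Reversibility: backward undoes forward on every state.
assert all(backward(*forward(x, y)) == (x, y) for x, y in points)
# Determinism + reversibility: forward permutes the state space.
assert sorted(forward(x, y) for x, y in points) == sorted(points)
print("reversible:", True)
```

The continuous-topology version of this requirement is exactly the paper's self-homeomorphism condition.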
Deep: Modified-Gravity Theories and the Standard-Model-of-Physics-'Particle-Dark-Matter' Theories are Equivalent An obvious criterion to classify theories of modified gravity is to identify their gravitational degrees of freedom and their coupling to the metric and the matter sector. Using this simple idea, we show that any theory which depends on the curvature invariants is equivalent to general relativity in the presence of new fields that are gravitationally coupled to the energy-momentum tensor. We show that they can be shifted into a new energy-momentum tensor. There is no a priori reason to identify these new fields as gravitational degrees of freedom or matter fields. This leads to an equivalence between dark matter particles gravitationally coupled to the standard model fields and modified gravity theories designed to account for the dark matter phenomenon. Due to this ambiguity, it is impossible to differentiate experimentally between these theories, and any attempt at doing so should be classified as a mere interpretation of the same phenomenon.
New Dimensions from Gauge-Higgs Unification The Higgs boson is unified with gauge fields in the gauge-Higgs unification. The SO(5)×U(1) gauge-Higgs electroweak unification in the Randall-Sundrum warped space yields almost the same phenomenology at low energies as the standard model, and gives many predictions for the Higgs couplings and new W′, Z′ bosons around 6 ∼ 8 TeV, which can be tested at the 14 TeV LHC. The gauge-Higgs grand unification is achieved in SO(11) gauge theory. It suggests the existence of a sixth dimension (GUT dimension) in addition to the fifth dimension (electroweak dimension). Proton decay is naturally suppressed in the gauge-Higgs grand unification.
Very Surprising Stats: How Physicists Interpret the Foundations of Quantum Mechanics The first question is intended to investigate the specific opinions regarding the randomness found in quantum mechanics. The various answers correspond to how one would answer the question from the viewpoint of different interpretations. Thus, the first option "The randomness is only apparent" corresponds to the answer one would give from the viewpoint of the many worlds interpretation, since the universal wave function evolves in a deterministic (non-random) way through the wave equation, but every observer is embedded in the universe moving along different branches, giving rise to an apparent randomness from the observer's point of view. The second option corresponds to the answer one would give from the viewpoint of Bohmian mechanics, where the observed randomness of quantum systems is only due to a lack of knowledge of the exact initial conditions. The majority said either: randomness cannot be removed from any physical theory, or randomness is a fundamental concept of nature.
The second question pertains to the role of measurement in defining physical properties.
This question is somewhat ambiguous, because it is not well defined what is meant by the phrase "physical property". The intention of the question was to ascertain the participants' view of wave function collapse: is it a description of nature or of our knowledge of a system? A more formal version of "Is the moon there when you are not looking?": Do you believe that physical objects have their properties well defined prior to and independent of measurement? The majority said no, or yes only in some cases. 11 %: Yes in all cases. 27 %: Yes in some cases. 47 %: No. Others: undecided.
On the superposition of macroscopically distinct states: 55 %: in principle possible. 27 %: eventually experimentally realizable. Rest: no, due to non-collapse of the wave function.
The observer is: 37 % a complex quantum system. 10 % should play no role. 31 %: plays a fundamental role in the application of the formalism, but plays no distinguished physical role. 22 %: plays a distinguished physical role.
How do you understand the measurement problem? 17 %: it is a pseudoproblem. 29 %: it is solved by decoherence. 16 %: it is/will be solved in a different manner. 6 %: it is a severe threat to quantum mechanics. 32 %: don't understand the problem well enough to form an opinion.
What is the message of the observed violations of Bell's inequality? 37 %: hidden variables are impossible. 24 %: some notion of non-locality. 7 %: Unperformed measurements have no results. 3 %: Action-at-a-distance in the physical world. I don't know the inequality well enough to have formed an opinion.
If two physical theories give the same predictions, what properties would make you support one over the other? (you can check more than one box). 87 %: Simplicity - simple over complex. 14 %: determinism holds. 86 %: consistency - paradox free. 23 %: ontic, not simply epistemic. 3 %: Chronology - The theory that was established first.
Do physicists need an interpretation of quantum mechanics? 65 %: yes, to understand nature. 8 %: yes, but only for pedagogical reasons. 23 %: No, it is irrelevant as long as quantum mechanics provides us with correct predictions/results. 4 %: No, it is entirely based on personal beliefs.
What characterizes the Copenhagen interpretation of quantum mechanics? (you can check multiple boxes). 77 %: Collapse of the wavefunction upon measurement. 46 %: Indeterminism - Results are not completely specified by initial conditions. 17 %: Nonlocality, i.e. action-at-a-distance. 10 %: Quantum mechanics works well, but does not describe nature as it really is. 43 %: The correspondence principle - quantum mechanics reproduces classical physics in the limit of high quantum numbers. 71 %: The principle of complementarity - objects have complementary properties which cannot be observed or measured at t... . 9 %: I don't know the interpretation well enough to have formed an opinion.
What characterizes the many worlds interpretation of quantum mechanics? (you can check multiple boxes) 65 %: The existence of multiple parallel worlds. 3 %: The existence of multiple minds belonging to one person. 12 %: Locality, i.e. no action-at-a-distance. 13 %: The observer is treated as a physical system. 45 %: No wave function collapse. 30 %: Determinism - Evolution of universal wavefunction is completely governed by the wave equation. 30 %: I don't know the interpretation well enough to have formed an opinion.
What characterizes the De Broglie - Bohm pilot wave interpretation of quantum mechanics? (you can check multiple boxes) 31 %: Hidden variables in the form of the particles' exact positions and momenta. 14 %: Nonlocality. 19 %: Determinism - Events are completely specified by initial conditions. 11 %: Possibility of deriving Born's rule. 3 %: Wave function collapse. 30 %: Quantum potential - each particle has an associated potential that guides the particle. 61 %: don't know enough.
What is your favourite interpretation of quantum mechanics? 1 %: Consistent Histories. 39 %: Copenhagen. 2 %: De Broglie - Bohm. 6 %: Everett (many worlds and/or many minds). 6 %: Information-based / information-theoretical. 1 %: Modal interpretation. 2 %: Objective collapse (e.g., GRW, Penrose). 1 %: Quantum Bayesianism. 3 %: Statistical (ensemble) interpretation. 0 %: Transactional interpretation. 3 %: Other. 36 %: I have no preferred interpretation of quantum mechanics.
What are your reasons for NOT favoring the Copenhagen interpretation? (you can check multiple boxes) 44 %: The role the observer plays in determining the physical state is too important. 23 %: The paradoxes that arise on the macroscopic scale, e.g. Schrödinger's cat and Wigner's friend. 15 %: Nonlocality. 14 %: Quantum mechanics describes nature as it really is. 32 %: Other.
What are your reasons for NOT favoring the many worlds interpretation? (you can check multiple boxes) 50 %: The notion of multiple worlds seems too far-fetched. 20 %: The notion of multiple minds seems too far-fetched. 33 %: The interpretation is too complex compared to others - i.e. Ockham's razor. 7 %: The interpretation is unable to explain the Born rule. 57 %: It can never be corroborated experimentally. 20 %: Other.
What are your reasons for NOT favoring De Broglie - Bohm theory? (you can check multiple boxes) 41 %: It is too complex compared to other interpretations - i.e. Ockham's razor. 21 %: It has hidden variables, which makes the theory untenable according to Bell's inequality. 16 %: Nonlocality. 38 %: The notion of all particles possessing a quantum potential that guides them seems too far-fetched. 29 %: Other.
How often have you switched to a different interpretation? 38 %: Never. 11 %: Once. 12 %: Several times. 40 %: I have no preferred interpretation of quantum mechanics.
You Will Not Read Anything Deeper in 'Science': On The Philosophy, History and Mathematics Behind the Physics of John Bell's Universe: "No one has messed with our heads more than John Bell" ~ Andrew Friedman. Reinhold A. Bertlmann: My collaboration and friendship with John Bell is recollected. I will explain his outstanding contributions in particle physics, in accelerator physics, and his joint work with Mary Bell. Mary's work in accelerator physics is also summarized. I recall our quantum debates, mention some personal reminiscences, and give my personal view on Bell's fundamental work on quantum theory, in particular, on the concept of contextuality and nonlocality of quantum physics. Finally, I describe the huge influence Bell had on my own work, in particular on entanglement and Bell inequalities in particle physics and their experimental verification, and on mathematical physics, where some geometric aspects of the quantum states are illustrated.
Deep Metaphysical Question: Does String-Theory Posit Extended Simples It is sometimes claimed that string theory posits a fundamental ontology including extended mereological simples, in the form either of minimum-sized regions of space or of the strings themselves. But there is very little in the actual theory to support this claim, and much that suggests it is false. Extant string theories treat space as a continuum, and strings do not behave like simples. Although existing models are not there yet, the string theory program offers the ambition, and at least the potential, for a theory of everything. Should the program eventually succeed, we will have a theory whose interpretation provides the best possible guess at our world's fundamental ontology. Relatively little philosophical work has addressed string theory thus far, and not much of the extant work has focused on its ontology – which is unsurprising, since the theory has not yet reached a mature stage in either its mathematical form or its empirical confirmation. But a few remarks have appeared in the philosophical literature addressing string theory's relevance to one debate in metaphysics: the possibility or existence of extended simples. Sometimes it is suggested that (quantum) string theory posits an ontology of extended simples – either the strings themselves, or minimum-sized regions of "quantized" space. Both the popular literature on string theory and some technical presentations of the theory make these claims appear quite natural. Perhaps there is room for metaphysicians to wrangle over the metaphysical or conceptual possibility of dividing a string, but the fundamental entities of string theory are extended and physically indivisible; so the story goes. String theory is a work in progress, and upon its completion we may indeed find that the final version posits extended simples. But string theory in its present form provides little to no evidence for the existence of extended simples. In the form that has been put forward as a possible theory of everything, string theory is a quantum theory, and as with other quantum theories, its most obvious or literal physical interpretation is almost certainly unsatisfactory. The likeliest guesses at satisfactory interpretations do not involve extended simples. And even assuming the obvious interpretation, on which quantum string theory has the same ontology as the classical version of string theory, this ontology does not include extended simples. To support these claims, I will of course need to present some details of the theory. In Section 2 I will do so, hopefully avoiding onerous technicalities in favor of conceptual clarity while doing justice to the facts. I will then consider in turn the claim that strings are extended simples (Section 3) and the claim that string theory posits minimum-sized regions of space with no proper parts (Section 4). The former rests on an untenably classical understanding of string theory, while the latter is supported only by the speculative or analogical remarks of some physicists. Neither the theory itself nor the likely direction of its future development bears out these remarks, which in some instances arise from a simple conflation of the detectable and the real. In present-day string theories, spacetime (if it exists at all) is continuous.
Are the Laws of Physics Governing Quantum Measurements Time-Asymmetric? Not so Fast: on the Arrow of Time and Quantum Measurement We investigate the statistical arrow of time for a quantum system being monitored by a sequence of measurements. For a continuous qubit measurement example, we demonstrate that time-reversed evolution is always physically possible, provided that the measurement record is also negated. Despite this restoration of dynamical reversibility, a statistical arrow of time emerges, and may be quantified by the log-likelihood difference between forward and backward propagation hypotheses. We then show that such reversibility is a universal feature of non-projective measurements, with forward or backward Janus measurement sequences that are time-reversed inverses of each other.
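The log-likelihood quantification of the statistical arrow of time can be illustrated with a deliberately simplified classical analogue: a drifted Gaussian record stands in for the paper's qubit measurement model, and the parameters are arbitrary. As in the abstract, the backward hypothesis propagates the negated, time-reversed record; it is never impossible, yet it is statistically disfavoured on average.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 1.0, 1.0, 5000

# Forward process: a measurement record of Gaussian increments with drift mu.
r = rng.normal(mu, sigma, n)

def loglik(record, drift):
    """Log-likelihood (up to a constant) of a record under Gaussian
    increments with the given drift."""
    return -0.5 * np.sum((record - drift) ** 2) / sigma**2

# Backward hypothesis: the time-reversed, negated record under the same law.
r_back = -r[::-1]
arrow = loglik(r, mu) - loglik(r_back, mu)
print(arrow / n)  # ~ 2*mu**2/sigma**2 on average: a statistical arrow of time
```

Every reversed record has nonzero likelihood (reversibility survives), but the per-step log-likelihood difference concentrates on a positive value, which is the sense in which an arrow of time emerges statistically.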
Could Adding a Fifth Dimension Solve the Quantum Gravity Puzzle and Unify Physics?!: Thermal Dimension of Quantum Spacetime Recent results suggest that a crucial crossroad for quantum gravity is the characterization of the effective dimension of spacetime at short distances, where quantum properties of spacetime become significant. This is relevant in particular for various scenarios of “dynamical dimensional reduction” which have been discussed in the literature. We are here concerned with the fact that the related research effort has been based exclusively on analyses of the “spectral dimension”, which involves an unphysical Euclideanization of spacetime and is highly sensitive to the off-shell properties of a theory. As here shown, different formulations of the same physical theory can have wildly different spectral dimension. We propose that dynamical dimensional reduction should be described in terms of the “thermal dimension” which we here introduce, a notion that only depends on the physical content of the theory. We analyze a few models with dynamical reduction both of the spectral dimension and of our thermal dimension, finding in particular some cases where thermal and spectral dimension agree, but also some cases where the spectral dimension has puzzling properties while the thermal dimension gives a different and meaningful picture.
Support for Edward Witten's Claim that Non-Commutative Geometry is Essential for Planck-Scale Quantum-Gravity Two Arguments: Meaning of Noncommutative Geometry and the Planck-Scale Quantum Group This is an introduction for nonspecialists to the noncommutative geometric approach to Planck scale physics coming out of quantum groups. The canonical role of the 'Planck scale quantum group' C[x]▷◁C[p] and its observable-state T-duality-like properties are explained. The general meaning of noncommutativity of position space as potentially a new force in Nature is explained as equivalent under quantum group Fourier transform to curvature in momentum space. More general quantum groups C(G⋆)▷◁U(g) and Uq(g) are also discussed. Finally, the generalisation from quantum groups to general quantum Riemannian geometry is outlined. The semiclassical limit of the latter is a theory with generalised non-symmetric metric gµν obeying ∇µgνρ − ∇νgµρ = 0. Intersecting Quantum Gravity with Noncommutative Geometry We review applications of noncommutative geometry in canonical quantum gravity. First, we show that the framework of loop quantum gravity includes natural noncommutative structures which have, hitherto, not been explored. Next, we present the construction of a spectral triple over an algebra of holonomy loops. The spectral triple, which encodes the kinematics of quantum gravity, gives rise to a natural class of semiclassical states which entail emerging fermionic degrees of freedom. In the particular semiclassical approximation where all gravitational degrees of freedom are turned off, a free fermionic quantum field theory emerges. We end the paper with an extended outlook section.
Quantum Gravity Necessitates the Quantization of Space-Time: it has a Non-Commutative Algebraic Structure In the search for a complete theory of quantum gravity there have been many proposals over the last years. One approach that has gained quite some popularity comes from mathematics: non-commutative geometry. The main motivation to study non-commutative geometry in the realm of physics comes from the "geometrical" measurement problem, [DFR95]. Basically, the problem goes as follows: by combining principles of quantum mechanics and general relativity, it turns out that the measurement of a space-time point with arbitrary precision is not possible, and thus space-time, around the Planck length, does not have a continuous structure. Hence, the geometry of space-time has to be replaced by a non-commutative version thereof. One of the most studied and best understood examples of such a non-commutative geometric structure is the Moyal-Weyl plane. In essence, one has a constant non-commutativity between the space-time coordinates (which are replaced by operators). This is analogous to the non-commutativity introduced between the observables, i.e. momentum and position, in quantum mechanics.
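The analogy with quantum mechanics can be made concrete numerically. On a truncated harmonic-oscillator basis (a standard illustration, not taken from the text), the canonical commutator [x, p] = i (with ħ = 1) holds up to an unavoidable boundary artifact of the truncation:

```python
import numpy as np

N = 8  # truncated Hilbert-space dimension
# Annihilation operator a|n> = sqrt(n)|n-1> in the number basis.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.conj().T) / np.sqrt(2)        # position operator (hbar = 1)
p = 1j * (a.conj().T - a) / np.sqrt(2)   # momentum operator

comm = x @ p - p @ x
# [x, p] = i*Identity everywhere except the last diagonal entry,
# where truncating the infinite basis leaves a boundary artifact.
expected = 1j * np.eye(N)
assert np.allclose(comm[:-1, :-1], expected[:-1, :-1])
print(comm[0, 0], comm[-1, -1])  # i and -i*(N-1)
```

The Moyal-Weyl plane imposes a commutator of exactly this constant form between the space-time coordinate operators themselves.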
Another essential reason to study non-commutative geometry is the search for quantum gravity. Let us elaborate on this point a bit further. Essentially, the Einstein field equations of general relativity tell us that gravity is experienced as the curvature of space-time. Hence, ultimately the quantization of gravity corresponds to the quantization of space-time. What is quantization? This question has many possible answers, but all of them have something in common: we take a classical theory and, by introducing, for example, a new product, we extend the classical framework. An interesting example is to take classical mechanics and perform the so-called deformation quantization in order to obtain quantum mechanics in terms of functions with a non-commuting product. In that example we went from classical mechanics to quantum mechanics; therefore, a deformation of classical space-time should lead us to a quantized, i.e. quantum, space-time. Another, equally valid approach is to start with a non-commutative algebra and to obtain, by techniques developed in non-commutative geometry, the equivalent of a metric, i.e. a distance function that corresponds to a non-commutative space-time and therefore a quantum space-time.
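The deformation-quantization example mentioned above can be sketched symbolically. Truncating the Moyal star product at first order in ħ (a standard formula; the use of SymPy here is an illustrative choice) already reproduces the canonical commutator for the coordinates, since the higher-order terms vanish for linear functions:

```python
import sympy as sp

x, p, hbar = sp.symbols("x p hbar")

def star(f, g):
    """Moyal star product truncated at first order in hbar:
    f*g + (i*hbar/2) {f, g}_Poisson.  Exact for linear f and g."""
    poisson = sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
    return sp.expand(f * g + sp.I * hbar / 2 * poisson)

# The star commutator of the coordinates reproduces [x, p] = i*hbar.
comm = sp.simplify(star(x, p) - star(p, x))
print(comm)  # I*hbar
```

Replacing the phase-space pair (x, p) by space-time coordinates with a constant noncommutativity parameter is precisely the deformation step that leads from classical to quantum space-time in the text.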
Hence, what we strive for, in the context of non-commutative geometry, is a mathematically rigorous deformation quantization of space-time that in addition displays at least the first steps towards the quantization of gravity. Intuitively, there are two main ideas that motivate this work. The first takes into account that in quantum theory all observables are given by self-adjoint (possibly unbounded) operators on a Hilbert space. Hence, in a theory of quantum gravity, a quantity identified with the metric has to be given in terms of self-adjoint operators defined on a dense subset of the Hilbert space as well. The second intuitive idea is the fact that many proposed quantum gravity theories face the issue of recovering flat space-time. Therefore, we should start with the Minkowski metric, given by self-adjoint operators, and by a deformation quantization induce a curved space-time metric. This is the reason why we refer to such an outcome as emergent gravity. In particular, gravity, i.e. curved space-time, is not presupposed but rather induced by a well-defined mathematical framework in the context of operator theory.
Hence, the concrete question that we pose is the following: can we obtain a curved space-time from a purely flat space-time by deformation quantization of linear operators that are defined on a Hilbert space? That is, is there a possibility to understand the emergence of a gravitational field from a strict deformation of flat space-time? Moreover, in this context, does such a curved space-time supply us with strong arguments in favor of a non-commutative space-time?
First answers to these questions were given in [And13] and [Muc14], where the dynamics of a free quantum mechanical particle were deformed and a minimal substitution was induced. The minimal substitution was understood as a gravitomagnetic field. The advantage therein was an understanding of gravitational effects obtained by analogy with electromagnetic phenomena. However, curved metrics were neither obtained nor investigated in this context.
Hence, a scheme for obtaining a curved metric from a flat one by a strict deformation quantization procedure is still missing. This paper intends to resolve this issue. We present a concrete and, moreover, mathematically strict scheme in which it is possible to apply a deformation quantization to the flat metric and obtain a curved space-time. The emergence of curvature by a deformation quantization is achieved by combining two major mathematical developments in non-commutative geometry. The first development that we use is the universal differential structure of Connes [Con95, Chapter 3, Section 1], which associates a differential structure to any associative unital algebra. The second framework used is the Rieffel deformation [Rie93] (and extensions thereof, see [GL07], [GL08], [BS], [BLS11]), which deals with strict deformation quantizations of C*-algebras.
Empirical/Data-Analytic Confirmation of Supersymmetry, Strings and D-branes: the Superpartner as Dark Matter with Consideration to Inflation Due to Experimentation Abstract: Superparticles, including gravitons, appearing in the laboratory due to a novel technique (Tahan, 2011, 2012) meant that the gravitino exists and was concluded to have a low mass, consequently requiring this affirmative presentation, which bears on what the Harvard University Center for Astrophysics has described as the greatest mystery in cosmology. A low mass gravitino would not have been problematic for Big Bang Nucleosynthesis (BBN) and baryogenesis if one understands a correlation between inflation and gravitino abundance (Ellis, Linde, & Nanopoulos, 1982), particularly when considering the particle to be dark matter and the lightest supersymmetric particle or superparticle (LSP). Gravitinos--proposed by this paper to be in the fifth dimension as dark matter--subsist because of inflation. This manuscript is a first discussion of the gravitino based on experimentation. A novel technique that led to observations of superpartner effects, including graviton effects, in the lab forced the need to present declaratively the existence of the gravitino, only speculative presentations being in the literature, and to propose anew how the low mass gravitino can exist as dark matter, particularly in consideration of inflation, which has been appreciated as part of the history of the Cosmos though a specific particle related to it has not been observed. Since only inflation would have permitted an acceptable gravitino abundance in keeping with visualizations of the Universe (Khlopov & Linde, 1984), the superpartner dark matter that accompanies the graviton can be acknowledged as a representative particle for it. Experienced readers could understand this work to be partly a review; yet what should be remembered is that this manuscript is not a compendium regarding the gravitino but is focused on and exists due to the innovative Figure 1 method. Considering a varied readership due to interest in the topics of this paper, ideas well recognized by physicists are provided occasionally to improve understanding of information gathered experimentally. References are presented if related to what has been learned from experimentation; accordingly, certain readers may notice the omission of well-known citations that have little connection to the experiments or do not add significantly to the references already presented for understanding how the innovation can advance particular disciplines, e.g. dark matter studies. Experiments were conducted to test the effects of low frequency quanta on Hydrogen in a magnetic field.
From the set-ups, energy that imparted mass (mass-energy) emerged that could be directed over distances while adhering to the inverse square law, hinting at the possibility that the mass-energy was due to gravitons. Support for the emergence of the carrier graviton was observed when laser light was incorporated in the set-up for experiments, as shown in Figure 1. The laser light should be appreciated not to have been part of the symmetry breaking technique but simply an addition of light to the set-up to understand if gravitons were emerging. The thought was that an increased bending of spacetime due to released gravitons from the Hydrogen area in the tube could change the path of the light traveling in spacetime--as with the bending of light by celestial bodies--because spacetime is ubiquitous, not only outside of the atmosphere of the Earth. The light was recorded to curve around the tube holding the Hydrogen while a D-brane with an open string for the laser light appeared, which demonstrated that strings, including the graviton, exist, thereby showing string theory to be a unifying theory (Tahan, 2011). The curving of the laser light resulted from gravitons having coupled to the tube, which consequently sufficiently bent spacetime due to additional mass from the mass-energy carrier gravitons. Based on the theoretical work for supergravity, the gravitino should be understood to exist due to the recorded effects of gravitons in the laboratory--the superpartner being well accepted by scholars to accompany the boson graviton. Various control trials were performed to understand if the appearance of the D-brane, which nearly mirrored images of D-branes with open strings from the literature, could be explained differently. No other conclusion seemed reasonable in view of the set-up, testable alternative hypotheses having proved unacceptable.
1.1 Introduction Consideration of the gravitino as dark matter (Pagels & Primack, 1982) including in relation to inflation is unoriginal, as well as presenting the gravitino to be the LSP (Khlopov & Linde, 1984; Bolz, Buchmüller, & Plümacher, 1998; Moroi, Murayama, & Yamaguchi, 1993); numerous other references could have been included since the gravitino has been well-studied theoretically, particularly as a dark matter particle. But the gravitino has never been known to exist. Various collider experiments have presented no evidence for supersymmetry, which would exclude the gravitino as dark matter particularly if superparticles are never detected when believing colliders to be the only means to study supersymmetry. Still, before the lack of supersymmetry evidence the gravitino had lost favor as a dark matter candidate, initially due to the gravitino problem (Weinberg, 1982). The Figure 1 technique allowing for examinations of supergravity permits conclusions no longer simply to be conceptual; this manuscript is unique since it discusses the gravitino as a reality. The facile, inexpensive method that allowed for symmetry breaking on the lab bench could permit direct studies of the gravitino, including other superparticles and strings. Through experiments the nucleon was understood to be a brane that will be discussed in greater detail in upcoming manuscripts. The brane structure allowed for the picturing of a separation between the Standard Model visible sector and the underlying region of superparticles including the graviton. Accordingly, the symmetry breaking due to the Figure 1 method supported gauge mediated supersymmetry breaking (GMSB). In other words, by exposing Hydrogen in a specific magnetic field strength to particular low frequency quanta at a set amplitude, i.e. 
the Figure 1 technique, the underlying sector of superpartners was accessible through the quantized gauge field or SU(3) Yang-Mills theory: the Figure 1 method creating a symmetry breaking (Tahan, 2012) involving QCD. Experiments resulting in exposure of the Standard Model visible sector to superparticles including gravitons through GMSB meant that the gravitino should have a low mass. After learning that a calculated mass for the Higgs boson using events involving the Figure 1 technique (Tahan, 2012) was not significantly different from synchrotron detections if the Standard Model Higgs boson has been found (Incandela, 2012; Gianotti, 2012), the gravitino mass was appreciated should not be over 1keV, in consideration of theoretical work indicating that gravitinos would have overclosed the Universe otherwise (Cho & Uehara, 2004) -- accepting the superpartner to be the LSP thereby preventing problems for BBN. Gravitinos being light supersymmetric particles would suggest the bodies to be dark matter (Takayama & Yamaguchi, 2000; Giudice & Rattazzi, 1999). Accordingly, superparticles can have sub-TeV masses while the gravitino can have a sub-keV mass (Albaid & Babu, 2012), considerations that impelled this manuscript primarily since a sub-keV gravitino is not the consensus among scholars for the main dark matter candidate. Yet, other mass considerations will be mentioned in this manuscript for historical context and possibly for use by groups in future studies. This paper does not suggest that scholars had stopped considering the gravitino in relation to the Cosmos. Though the gravitino grew less popular as a dark matter candidate, the influence of the superpartner also as the LSP has continued to be examined including for the particle make-up of the Universe, e.g. having impacted cosmic Lithium abundances (Bailly, Jedamzik, & Moultaka, 2009). Emergence of the gravitino has been connected to baryogenesis (Trodden, 2004). 
Association of the superparticle with baryogenesis has been studied in detail, with presentations that are nearly complete supersymmetric explanations of baryogenesis and leptogenesis (Buchmüller, 1998). This manuscript can be included in discussions to bring that work nearer to completion, since a problem in the field has been the inclusion of the gravitino in view of its mass, i.e. an uncertainty leading to theoretical studies with varied mass assumptions, either <1 keV or at the TeV level (Buchmüller, 1998). Accordingly, this manuscript, as a confirmation of the existence of the gravitino, should be understood as necessary, events related to a low-mass gravitino potentially answering multiple questions about the Cosmos. This work thus continues with an explanation of how the low-mass gravitino can exist in the Universe, which involves inflation, the existence of gravitino dark matter having been a consequence of it. The discussion will include the potential influence entropy has had on the superparticle, particularly in relation to black holes, owing to observations of black hole evaporations in the laboratory (Tahan, 2011). The possible existence of primordial black holes as dark matter will be mentioned before describing the Figure 1 technique as a viable option for supersymmetry and string studies, and concluding by reiterating certain observations from experiments that are significant as contributions to explanations of the landscape of the Universe, presented by this manuscript to be a 2-brane.
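The 1 keV bound invoked above can be made explicit with the standard thermal-relic estimate; the display below is an illustrative back-of-the-envelope form (the precise coefficient depends on the assumed thermal history and the effective degrees of freedom at gravitino decoupling), not a result of this manuscript:

\[
\Omega_{3/2} h^{2} \;\propto\; m_{3/2}, \qquad \Omega_{3/2} h^{2} \lesssim 1 \;\;\Longrightarrow\;\; m_{3/2} \lesssim 1\ \mathrm{keV},
\]

since a gravitino that was once in thermal equilibrium has its number density fixed at freeze-out, so the relic energy density scales linearly with the gravitino mass (Pagels & Primack, 1982; Cho & Uehara, 2004).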
A Strengthening of the Lovelock Theorem: the Einstein Field Equation is Valid in ALL Dimensions, not Just 4 The standard argument for the uniqueness of the Einstein field equation is based on
Lovelock’s Theorem, the relevant statement of which is restricted to four dimensions. I
prove a theorem similar to Lovelock’s, with a physically modified assumption: that the
geometric object representing curvature in the Einstein field equation ought to have the
physical dimension of stress-energy. The theorem is stronger than Lovelock’s in two ways:
it holds in all dimensions, and so supports a generalized argument for uniqueness; it does
not assume that the desired tensor depends on the metric only up to second-order partial derivatives,
that condition being a consequence of the proof. This has consequences for
understanding the nature of the cosmological constant and theories of higher-dimensional
gravity. Another consequence of the theorem is that it makes precise the sense in which
there can be no gravitational stress-energy tensor in general relativity. Along the way, I
prove a result of some interest about the second jet-bundle of the bundle of metrics over
a manifold.
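The modified assumption can be illustrated with the field equation itself; the display below is the standard form with constants restored (shown only to fix conventions, not taken from the proof), and the requirement is that the geometric left-hand side, once divided by the coupling \(8\pi G/c^{4}\), carry the physical dimension of stress-energy:

\[
G_{ab} + \Lambda g_{ab} = \frac{8\pi G}{c^{4}}\, T_{ab},
\]

so that each admissible curvature tensor is constrained by dimensional consistency with \(T_{ab}\) rather than by an a priori bound on the order of derivatives of the metric.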
In What Could Be a Decisive Moment in Physics, a Major Breakthrough for SuperString Theory: Precision Lattice Test of the Gauge/Gravity Duality at Large-N - the First Confirmation of the Supergravity Prediction We pioneer a systematic, large-scale lattice simulation of D0-brane quantum mechanics. The large-N and continuum limits of the gauge theory are taken for the first time at various temperatures 0.4 ≤ T ≤ 1.0. As a way to directly test the gauge/gravity duality conjecture we compute the internal energy of the black hole directly from the gauge theory and reproduce the coefficient of the supergravity result E/N² = 7.41 T^{14/5}. This is the first confirmation of the supergravity prediction for the internal energy of a black hole at finite temperature coming directly from the dual gauge theory. We also constrain stringy corrections to the internal energy.
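The quoted prediction is easy to tabulate; the sketch below is a minimal numerical illustration (the coefficient 7.41 and the exponent 14/5 are taken from the abstract, and the temperature grid merely mirrors the stated simulation range) rather than any part of the lattice computation:

```python
# Supergravity prediction for the internal energy of the dual black hole,
# E/N^2 = 7.41 * T^(14/5), as quoted in the abstract.

def energy_per_n2(T: float) -> float:
    """Internal energy per unit N^2 at dimensionless temperature T."""
    return 7.41 * T ** (14 / 5)

if __name__ == "__main__":
    # Temperatures spanning the simulated range 0.4 <= T <= 1.0.
    for T in (0.4, 0.6, 0.8, 1.0):
        print(f"T = {T:.1f}: E/N^2 = {energy_per_n2(T):.4f}")
```

At T = 1.0 the expression reduces to the bare coefficient 7.41, which is the value the lattice data must reproduce in the combined large-N and continuum limits.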
A Deep Debate in the Philosophy of Physics: Is Gauge Symmetry Fundamental or a Useless Redundancy? A Bold Argument for the Former - Locality and Unitarity as Consequences of Gauge Invariance We conjecture that the leading two-derivative tree-level amplitudes for gluons and gravitons can
be derived from gauge invariance together with mild assumptions on their singularity structure.
Assuming locality (that the singularities are associated with the poles of cubic graphs), we prove
that gauge-invariance in just (n − 1) particles together with minimal power-counting uniquely fixes
the amplitude. Unitarity in the form of factorization then follows from locality and gauge invariance. We also give evidence for a stronger conjecture: assuming only that singularities occur when the sum of a subset of external momenta goes on-shell, we show in non-trivial examples that gauge-invariance and power-counting demand a graph structure for singularities. Thus both locality and unitarity emerge from singularities and gauge invariance. Similar statements hold for theories of Goldstone bosons like the non-linear sigma model and Dirac-Born-Infeld, by replacing the condition of gauge invariance with an appropriate degree of vanishing in soft limits.
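The gauge-invariance condition at work here is the familiar on-shell Ward identity; the display below is the textbook statement (schematic, with \(\varepsilon_i\) the polarizations and \(p_i\) the momenta), not a new result of the paper:

\[
A_n(\varepsilon_1, \dots, \varepsilon_n)\,\big|_{\varepsilon_i \to p_i} = 0 \quad \text{for each } i,
\]

and the claim is that imposing this vanishing for only (n − 1) of the particles, together with minimal power-counting, already fixes the amplitude uniquely.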
How General is the Holographic Principle in Theoretical Physics? This is the question I will explore in this thesis. As this question is very fundamental, one is well advised to try and tackle the problem in an environment which is as simple as possible but still interesting and complex enough to allow for a general interpretation of the results. Since gravity in 2+1 dimensions satisfies those requirements I will focus on holography involving 2+1 dimensional spacetimes and 1+1 dimensional quantum field
theories. The two most important reasons for this are: (i) gravity in 2+1 dimensions can be described very efficiently on a technical level. (ii) The dual quantum field theories have infinitely many symmetries and thus allow for a very high degree of control. This allows one to explicitly and exactly check new holographic correspondences. Of very special interest regarding the generality of the holographic principle are so-called higher-spin gravity theories, which extend the usual local invariance under coordinate changes by a more general set of symmetries. In this thesis I will first focus on higher-spin holography which is based on spacetimes that do not asymptote to Anti-de Sitter spacetimes. Starting from a given higher-spin theory, I will determine the asymptotic symmetries of the corresponding dual
quantum field theories and their unitary representations. Furthermore, using "non-Anti-de
Sitter holography" I will describe a dual quantum field theory which allows for an arbitrarily
(albeit not infinitely) large number of quantum microstates. The second part of this thesis is concerned with holography for asymptotically flat spacetimes. First I will show how to obtain various results, like an analogue of a (higher-spin) Cardy
formula which counts the number of microstates of a conformal field theory at a given temperature, or the asymptotic symmetries of asymptotically flat spacetimes, as a limit of vanishing cosmological constant from the known Anti-de Sitter results.
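For orientation, the Cardy formula referred to above takes, in an ordinary two-dimensional conformal field theory, the standard form (the flat-space and higher-spin analogues derived in the thesis modify this expression):

\[
S = 2\pi \sqrt{\frac{c\,\Delta}{6}} + 2\pi \sqrt{\frac{\bar{c}\,\bar{\Delta}}{6}},
\]

counting the microstates at conformal weights \((\Delta, \bar{\Delta})\) for central charges \((c, \bar{c})\).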
Furthermore, I will explore unitary representations of the asymptotic symmetry algebras of asymptotically flat spacetimes which, under certain assumptions, result in a NO-GO theorem that forbids having flat space, higher spins and unitarity at the same time. In addition, I will elaborate on a specific example that allows one to circumvent this NO-GO theorem. I will also show how to consistently describe asymptotically flat spacetimes with additional (higher-spin) chemical potentials in a holographic setup and how to determine the corresponding thermal entropy of certain cosmological asymptotically flat spacetimes.
The finale of this thesis will be an explicit check of the holographic principle for asymptotically
flat spacetimes. I will present a method, using a special version of a Wilson line, which allows one to determine the entanglement entropy of field theories assumed to be dual to asymptotically flat spacetimes in a holographic manner. I will also extend this method to include higher-spin symmetries as well and to determine the thermal entropy of the corresponding dual field theories.
How String-Theory Subsumes Loop Quantum Gravity Theory In this work we study canonical gravity in finite regions, for which we introduce a generalisation of the Gibbons-Hawking boundary term including the Immirzi parameter. We study the canonical formulation on a spacelike hypersurface with a boundary sphere and show how the presence of this term leads to an unprecedented type of degrees of freedom coming from the restoration of the gauge and diffeomorphism symmetry at the boundary. In the presence of a loop quantum gravity state, these boundary degrees of freedom localize along a set of punctures on the boundary sphere. We demonstrate that these degrees of freedom are effectively described by auxiliary strings with a 3-dimensional internal target space attached to each puncture. We show that the string currents represent the local frame field, that the string angular momenta represent the area flux, and that the string stress tensor represents the two-dimensional metric on the boundary of the region of interest. Finally, we show that the commutators of these broken diffeomorphism charges of quantum geometry satisfy at each puncture a Virasoro algebra with central charge c = 3. This leads to a description of the boundary degrees of freedom in terms of a CFT structure with central charge proportional to the number of loop punctures. The boundary SU(2) gauge symmetry is recovered via the action of the U(1)³ Kac-Moody generators (associated with the string current) in a way that is the exact analog of an infinite-dimensional generalization of the Schwinger spin-representation. We finally show that this symmetry is broken by the presence of background curvature.
Two Papers: 'Objections to Loop Quantum Gravity', and Why It and Loop Quantum Cosmology Are False Loop quantum gravity makes too many assumptions about the behavior of geometry at very short distances. It assumes that the metric tensor is a good variable at all distance scales, and that it is the only relevant variable. It even assumes that Einstein's equations are more or less exact in the Planckian regime. The spacetime dimensionality (four) is another assumption that cannot be questioned, much like the field content. Each of these assumptions is challenged in a general enough theory of quantum gravity, for example all the models that emerge from string theory. These assumptions have neither theoretical nor experimental justification. Particular examples will be listed in a separate entry. The most basic, underlying assumption is that the existence of a meaningful classical theory, general relativity, implies that there must exist a "quantization" of this theory. This is commonly challenged. Many reasons are known why some classical theories do not have a quantum counterpart. Gauge anomalies are a prominent example. General relativity is usually taken to be another example, because its quantum version is not renormalizable. It is known, therefore, that a classical theory is not always a good starting point for a quantum theory. Theorists of loop quantum gravity work with the assumption that "quantization" can be done, and continue to study it even if their picture seems inconsistent. According to the logic of the renormalization group, the Einstein-Hilbert action is just an effective description at long distances, and it is guaranteed that it receives corrections at shorter distances. String theory even allows us to calculate these corrections in many cases. There can be additional spatial dimensions; they have emerged in string theory and they are also naturally used in many other modern models of particle physics such as the Randall-Sundrum models.
An infinite number of new fields and variables associated with various objects (strings and branes) can appear, and indeed do appear according to string theory. Geometry underlying physics may become noncommutative, fuzzy, non-local, and so on. Loop quantum gravity ignores all these 20th and 21st century possibilities, and it insists on a 19th century image of the world which has become naive after the 20th century breakthroughs. On one hand, loop quantum gravity prohibits many types of new physics that have become natural and likely in the recent era. On the other hand, it brings no order to the structure and values of the higher-derivative terms. Indeed, the infinitely many parameters that govern the effective action and make quantized general relativity non-renormalizable and unpredictive are simply replaced by an infinite number of parameters of the Hamiltonian constraint and/or the spin foam Feynman rules. Loop quantum gravity is therefore not a predictive theory. Moreover, it does not offer any possibility to predict new particles, forces and phenomena at shorter distances: all these objects must be added to the theory by hand. Loop quantum gravity therefore also makes it impossible to explain any relations between the known physical objects and laws. Loop quantum gravity is not a unifying theory. This is not just an aesthetic imperfection: it is impossible to find a regime in real physics of this Universe in which non-gravitational forces can be completely neglected, except for the classical physics of neutral stars and galaxies, which also ignores quantum mechanics. For example, the electromagnetic and strong forces are rather strong even at the Planck scale, and the character of black hole evaporation would change dramatically had Nature omitted the other forces and particles.
Also, the loop quantum gravity advocates often claim that the framework of loop quantum gravity regularizes all possible UV divergences of gravity as well as of the other fields coupled to it. That would be a real catastrophe, because any quantum field theory - including all non-renormalizable theories with any fields and any interactions - could be coupled to loop quantum gravity and the results of the calculations could be equal to anything in the world. The predictive power would be exactly equal to zero, much like in the case of a generic non-renormalizable theory. There is absolutely no uniqueness found in the realistic models based on loop quantum gravity. The only universal predictions - such as the Lorentz symmetry breaking discussed below - seem to be more or less ruled out on experimental grounds. Unlike string theory, loop quantum gravity has not offered any non-trivial self-consistency checks of its statements and it has had no impact on the world of mathematics. While string theory smells of God, loop quantum gravity smells of Man. It seems that people are constructing it, instead of discovering it. There are no nice surprises in loop quantum gravity - the amount of consistency in the results never exceeds the amount of assumptions and input. For example, no answer has ever been calculated in two different ways so that the results would match. Whenever a really interesting question is asked - even if it is apparently a universal question, for example: "Can the topology of space change?" - one can propose two versions of loop quantum gravity which lead to different answers. There are many reasons to think that loop quantum gravity is internally inconsistent, or at least that it is inconsistent with the desired long-distance limit (which should be smooth space). Too many physical wisdoms seem to be violated. Unfortunately the loop quantum gravity advocates usually choose to ignore the problems.
For example, the spin foam (path-integral) version of loop quantum gravity is believed to break unitarity. The usual reaction of the loop quantum gravity practitioners is the statement that unitarity follows from time-translation symmetry, and because this symmetry is broken (by a generic background) in GR, we do not have to require unitarity anymore. But this is a serious misunderstanding of the meaning and origin of unitarity. Unitarity is the requirement that the total probability of all alternatives (the squared length of a vector in the Hilbert space) must be conserved (it must always be 100%), and this requirement - or an equally strong generalization of it - must hold under any circumstances, in any physically meaningful theory, including the case of a curved, time-dependent spacetime. Incidentally, time-translation symmetry is related, via Noether's theorem, to a time-independent, conserved Hamiltonian, which is a completely different thing from unitarity. A similar "anything goes" approach seems to be applied to other no-go theorems in physics. Loop quantum gravity is isolated from particle physics. While extra fields must be added by hand, even this ad hoc procedure seems to be impossible in some cases. Scalar fields cannot really work well within loop quantum gravity, and therefore this theory potentially contradicts the observed electroweak symmetry breaking, the violation of the CP symmetry, and other well-known and tested properties of particle physics. Loop quantum gravity also may deny the importance of many methods and tools of particle physics - e.g. the perturbative techniques, the S-matrix, and so on. Loop quantum gravity therefore potentially disagrees with 99% of physics as we know it.
Unfortunately, the isolation from particle physics follows from the basic opinions of loop quantum gravity practitioners and it seems very hard to imagine that a deeper theory can be created if the successful older theories, insights, and methods (and exciting newer ones) in the same or closely related fields are ignored. Moreover, it has been recently argued that in every consistent theory of quantum gravity, gravitational force must be the weakest one (much like in the real world). More precisely, there must always exist objects whose repulsive force of any kind exceeds the attractive force of gravity. Loop quantum gravity, much like all other attempts to start with "pure quantum gravity", violates this insight maximally because it is an expansion around the point in which the other interactions are turned off. This point in the parameter space is inconsistent. Loop quantum gravity does not guarantee that smooth space as we know it will emerge as the correct approximation of the theory at long distances; there are in fact many reasons to be almost certain that the smooth space cannot emerge, and these problems of loop quantum gravity are analogous to other attempts to discretize gravity (e.g. putting gravity on lattice). While string theory confirms general relativity or its extensions at long distances - where GR is tested - and modifies it at the shorter ones, loop quantum gravity does just the opposite. It claims that GR is formally exact at the Planck scale, but implies nothing about the correct behavior at long distances. It is reasonable to assume that the usual ultraviolet problems in quantum gravity are simply transmuted into infrared problems, except that the UV problems seem to be present in loop quantum gravity, too. Loop quantum gravity violates the rules of special relativity that must be valid for all local physical observations. 
Spin networks represent a new reincarnation of the 19th century idea of the luminiferous aether - an environment whose entropy density is probably Planckian and that picks a privileged reference frame. In other words, the very concept of a minimal distance (or area) is not compatible with Lorentz contractions. Lorentz invariance was the only real reason why Einstein had to find a new theory of gravity - Newton's gravitational laws were not compatible with his special relativity. Despite claims about background independence, loop quantum gravity does not respect even the special 1905 rules of Einstein; it is a non-relativistic theory. It conceptually belongs to the pre-1905 era, and even if we imagine that loop quantum gravity has a realistic long-distance limit, it has even fewer symmetries and nice properties than Newton's gravitational laws (which have an extra Galilean symmetry, can also be written in a "background independent" way, and moreover allow us to calculate most of the observed gravitational effects well, unlike loop quantum gravity). It is a well-known fact that general relativity is called "general" because it has the same form for all observers, including those undergoing a general accelerated motion - it is symmetric under all coordinate transformations - while "special" relativity is only symmetric under the subset of special (Lorentz and Poincare) transformations that interchange inertial observers. The symmetry under arbitrary coordinate transformations is only broken spontaneously in general relativity, by the vacuum expectation value of the metric tensor, not explicitly (by the physical laws), and the local physics of all backgrounds is invariant under Lorentz transformations.
Loop quantum gravity proponents often and explicitly state that they think that general relativity does not have to respect the Lorentz symmetry in any way - which displays a misunderstanding of the symmetry structure of special and general relativity (the symmetries in general relativity extend those in special relativity), as well as of the overwhelming experimental support for the postulates of special relativity. Loop quantum gravity also depends on the background in a lot of other ways - for example, the Hamiltonian version of loop quantum gravity requires us to choose a pre-determined spacetime topology which cannot change. One can imagine that the Lorentz invariance is restored by fine-tuning of an infinite number of parameters, but nothing is known about the question whether it is possible, how such a fine-tuning should be done, and what it would mean. Also, it has been speculated that special relativity in loop quantum gravity may be superseded by the so-called doubly special relativity, but doubly special relativity is even more problematic than loop quantum gravity itself. For example, its new Lorentz transformations are non-local (two observers will not agree whether the lion is caught inside the cage) and their action on an object depends on whether the object is described as elementary or composite. The discrete area spectrum is not a consequence, but a questionable assumption of loop quantum gravity. The redefinition of the variables - the formulae to express the metric in terms of the Ashtekar variables (a gauge field) - is legitimate locally on the configuration space, but it is not justified globally because it imposes new periodicities and quantization laws that do not follow from the metric itself. The area quantization does not represent physics of quantum gravity but rather specific properties of this not-quite-legitimate field redefinition. 
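For reference, the discrete area spectrum under discussion is, in its standard loop-quantized form (with \(\gamma\) the Immirzi parameter, \(\ell_P\) the Planck length, and spins \(j_i\) labelling the edges puncturing the surface):

\[
A = 8\pi \gamma\, \ell_P^{2} \sum_i \sqrt{j_i (j_i + 1)},
\]

the objection above being aimed at the global legitimacy of the field redefinition from which this spectrum is obtained, not at the formula's internal algebra.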
One can construct infinitely many similar field redefinitions (siblings of loop quantum gravity) that would lead to other quantization rules for other quantities. It is probably not consistent to require any of these new quantization rules - for instance, one can see that these choices inevitably break the Lorentz invariance, which is clearly a bad thing. The discrete area spectrum is not testable, not even in principle. Loop quantum gravity does not provide us with any "sticks" that could measure distances and areas with sub-Planckian precision, and therefore a prediction about the exact sub-Planckian pattern of the spectrum is not verifiable. One would have to convert this spectrum into a statement about scattering amplitudes. Loop quantum gravity provides us with no tools to calculate the S-matrix, scattering cross sections, or any other truly physical observable. This is not surprising; if loop quantum gravity cannot predict the existence of space itself, it is even more difficult to decide whether it predicts the existence of gravitons and their interactions. The S-matrix is believed to be essentially the only gauge-invariant observable in quantum gravity, and any meaningful theory of quantum gravity should allow us to calculate it, at least in principle. Loop quantum gravity does not really solve any UV problems. Quantized eigenvalues of geometry are not enough, and one can see UV-singular and ambiguous terms in the volume operators and most other operators, especially the Hamiltonian constraint. Because the Hamiltonian defines all of the dynamics, which contains most of the information about a physical theory, this is a serious matter. The whole dynamics of loop quantum gravity is therefore at least as singular as it is in the usual perturbative treatment based on semiclassical physics.
We simply do have enough evidence that a pure theory of gravity, without any new degrees of freedom or new physics at the Planck scale, cannot be consistent at the quantum level. Loop quantum gravity advocates need to believe that the mathematical calculations leading to the infinite and inconsistent results (for example, the two-loop non-renormalizable terms in the effective action) must be incorrect, but they cannot say what is technically incorrect about them and how exactly loop quantum gravity is supposed to fix them. Moreover, the loop quantum gravity proponents seem to believe that the naive notion of "atoms of space" is the only way to fix the UV problems. String theory, which allows us to make real quantitative computations, proves that this is not the case and that there are more natural ways to "smear out" the UV problems. In fact, a legitimate viewpoint implies that the discrete, sharp character of the metric tensor and other fields at very short distances makes the UV behavior worse, not better. Moreover, as explained above, the "universal solution of the UV problems by discreteness of space" implies at least as serious a loss of predictive power as in a generic non-renormalizable theory. Even if loop quantum gravity solved all the UV problems, it would mean that infinitely many coupling constants remain undetermined - a situation analogous to a non-renormalizable theory. Despite various claims, loop quantum gravity is not able to calculate the black hole entropy, unlike string theory. The fact that the entropy is proportional to the area does not follow from loop quantum gravity; it is rather an assumption of the calculation. The calculation assumes that the black hole interior can be neglected and that the entropy comes from a new kind of dynamics attached to the surface area - there is no justification for this assumption. Not surprisingly, one is led to an area/entropy proportionality law.
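The area/entropy proportionality and its contested coefficient can be stated compactly; in the standard loop counting the horizon entropy takes the schematic form (with \(\gamma_0\) the constant produced by the state counting and \(\gamma\) the Immirzi parameter):

\[
S = \frac{\gamma_0}{\gamma}\, \frac{A}{4 \ell_P^{2}},
\]

so that matching the Bekenstein-Hawking value \(S = A/4\ell_P^{2}\) requires setting \(\gamma = \gamma_0\) by hand rather than deriving the coefficient independently.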
The only non-trivial check could be the coefficient, but it comes out incorrectly (see the Immirzi discrepancy). The Immirzi discrepancy was believed to be proportional to the logarithm of two or three, and a speculative explanation in terms of quasinormal modes was proposed. However, it only worked for one type of black hole - a clear example of a numerical coincidence - and moreover it was realized in July 2004 that the original calculation of the Immirzi parameter was incorrect, and the correct value (described by Meissner) is not proportional to the logarithm of an integer. The value of the Immirzi parameter - even according to the optimists - remains unexplained. Another description of the situation goes as follows: because the Immirzi parameter represents the renormalization of Newton's constant, and there is no renormalization in a finite theory - and loop quantum gravity claims to be one - the Immirzi parameter should be equal to one, which leads to a wrong value of the black hole entropy. While all useful quantum theories in physics are based on a separable Hilbert space, i.e. a Hilbert space with a countable basis, loop quantum gravity naturally leads to a non-separable Hilbert space, even after the states related by diffeomorphisms are identified. This space can be interpreted as a very large, uncountable set of superselection sectors that do not talk to each other and prevent physical observables from being changed continuously. All known procedures to derive a different, separable Hilbert space are physically unjustified. Equivalently, loop quantum gravity does not allow one to derive the conventional notion of continuity of space.
Loop quantum gravity has no tools and no solid foundations to answer other important questions of quantum gravity - the details of Hawking radiation; the information loss paradox; the existence of naked singularities in the full theory; the origin of holography and the AdS/CFT correspondence; mechanisms of appearance and disappearance of spacetime dimensions; the topology-changing transitions (which are most likely forbidden in loop quantum gravity); the behavior of scattering at the Planck energy; physics of spacetime singularities; quantum corrections to geometry and Einstein's equations; the effect of the fluctuating metric tensor on locality, causality, CPT-symmetry, and the arrow of time; the interpretation of quantum mechanics in non-geometric contexts, including questions from quantum cosmology; the replacement for the S-matrix in de Sitter space and other causally subtle backgrounds; the interplay of gravity and other forces; the issues about T-duality and mirror symmetry. Loop quantum gravity is criticised as a philosophical framework that wants us to believe that these questions should not be asked - as if general relativity were virtually a complete theory of everything (even though it apparently cannot be) and all ideas in physics after 1915 could be ignored. The criticisms that loop quantum gravity advocates direct at other fields of physics are misguided. They often dislike perturbative expansions. While it is a great advantage to look for a framework that allows us to calculate more than the perturbative expansions, such a framework should never be less powerful. In other words, any meaningful theory should allow us to perform (at least) approximate, perturbative calculations (e.g. around a well-defined classical solution, such as flat space). Loop quantum gravity cannot do this, which is definitely a huge disadvantage, not an advantage as some have claimed. A good quantum theory of gravity should also allow us to calculate the S-matrix.
Loop quantum gravity's calls for "background independence" are misguided. A first constraint on a correct physical theory is that it allows the (nearly) smooth space(time) - or background - which we know to be necessary for all known physical phenomena in this Universe. If a theory does not admit such a smooth space, it can be called "background independent" or "background free", but it may be a useless and physically incorrect theory. It is a very different question whether a theory treats all possible shapes of spacetime on a completely equal footing or whether all these solutions follow from a more fundamental starting point. However, it is not a priori clear on physical grounds whether it must be so (it can be just an aesthetic feature of a particular formulation of a theory, not of the theory itself), and moreover, for a theory that does not predict many well-behaved backgrounds the question is meaningless altogether. The physics of string theory certainly does respect the basic rules of general relativity exactly - general covariance is seen as the decoupling of unphysical (pure gauge) modes of the graviton. This exact decoupling can be proved in string theory quite easily. It can also be seen in perturbative string theory that a condensation of gravitons is equivalent to a change of the background; therefore physics is independent of the background we start with, even if this is hard for the loop quantum gravity advocates to see. Loop quantum gravity is not science, because every time a new calculation shows that some quantitative conjecture was incorrect, the loop quantum gravity advocates invent a non-quantitative, ad hoc explanation of why it does not matter. Some borrow concepts from unrelated fields, including noiseless information theory and philosophy, and some of the explanations of why previous incorrect results should be kept are not easily credible.
Second paper: Loop quantum cosmology is a tentative approach to model the universe down to the Planck era, where a quantum gravity setting is needed. The quantization of the universe as a dynamical space-time is inspired by Loop Quantum Gravity ideas. In addition, loop quantum cosmology could make contact with astronomical observations, and thus potentially allow quantum cosmological models to be investigated in the light of observations. To do so, however, both the background evolution and its perturbations must be modelled. The latter describe the cosmic inhomogeneities that are the main cosmological observables. In this context, we present the so-called deformed algebra approach, implementing the quantum corrections to the perturbed universe at an effective level while taking great care with gauge issues. We particularly highlight that in this framework the algebra of hypersurface deformations receives quantum corrections, and we discuss their meaning. The primordial power spectra of scalar and tensor inhomogeneities are then presented, assuming initial conditions are set in the contracting phase preceding the quantum bounce and the well-known expanding phase of the cosmic history. These spectra are subsequently propagated to the angular power spectra of the anisotropies of the cosmic microwave background. It is then shown that regardless of the choice of initial conditions within the effective approach for the background evolution (provided they are set in the contracting phase), the predicted angular power spectra of the polarized B-modes exceed the upper bound currently set by observations. The exclusion of this specific version of loop quantum cosmology establishes the falsifiability of the approach, though one should not conclude here that either loop quantum cosmology or loop quantum gravity is excluded.
What is Time and how can General Relativity and Quantum Physics Unify on it
The conflict between quantum theory and the theory of relativity is exemplified in their treatment
of time. We examine the ways in which their conceptions differ, and describe a semiclassical clock model combining elements of both theories. The results obtained with this clock model in flat
spacetime are reviewed, and the problem of generalizing the model to curved spacetime is discussed, before briefly describing an experimental setup which could be used to test the model. Taking an operationalist view, where time is that which is measured by a clock, we discuss the conclusions that can be drawn from these results, and what clues they contain for a full quantum relativistic theory of time. When an experiment is carried out, the experimenter hopes to gain some information about nature through her controlled interaction with the system under study. In classical physics, systems possess a set of measurable properties with definite values, which can in principle be interrogated simultaneously, to arbitrary accuracy, and without affecting
the values of those properties. Any uncertainty in the measurements arises from some lack of knowledge on the part of the experimenter (for example due to some imperfect calibration of the apparatus) which could, in principle, be corrected. In quantum theory on the other hand, uncertainty relations between conjugate variables, and the necessary backreaction of the measurement on the system, combine to pose strict limits on the information which can be obtained from nature. Although there are nontrivial complications in defining time as a quantum observable (see the introductory discussion in [1], and Section 12.8 of [2], for example), it is nonetheless apparent that quantum restrictions must also be applied to its measurement [3–5]. The general theory of relativity (GR) lies within the classical paradigm with respect to the measurements that can be performed, though the outcomes of such measurements are affected by the observer’s state of motion, and the distribution of energy around them. The theory is built upon the notion of “ideal” clocks and rods, through which the observer gathers information. In special relativity, an ideal clock is a pointlike object whose rate with respect to
some observer depends only on its instantaneous speed, and not directly on acceleration [6]. The latter property is sometimes referred to as the “clock postulate”, and can be justified by the fact that an observer can “feel” their own acceleration, in contrast to velocity. Therefore, given a clock whose rate depends on acceleration in a well-defined
manner, one can simply attach an accelerometer to it, and use the resulting measurements to add/subtract time such that the acceleration effect is removed, recovering an ideal clock. Combining this clock postulate with the constancy
of the speed of light, one finds that an ideal clock measures the proper time along its trajectory according to the usual formulas of special relativity. The concept of an ideal clock (and therefore proper time) is imported into GR via
Einstein’s equivalence principle [6]. This principle states that local experiments conducted by a freely-falling observer cannot detect the presence or absence of a gravitational field. Here “local” means within a small enough volume that the gravitational field can be considered uniform.
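The "usual formulas of special relativity" invoked above can be made explicit. For an ideal clock moving with speed v(t) relative to an inertial observer with coordinate time t, the elapsed proper time is (a standard textbook relation, stated here for concreteness):

```latex
\tau \;=\; \int_{t_1}^{t_2} \sqrt{1 - \frac{v(t)^2}{c^2}} \, \mathrm{d}t ,
```

so that \(\tau \le t_2 - t_1\), with equality only for a clock at rest; by the clock postulate, this holds regardless of the clock's acceleration, which enters only through v(t).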
We note four conceptual issues which arise when combining GR and quantum theory. The first is understanding how quantum theory imposes constraints on the clocks and rods of GR, and how this in turn affects the information gathered by an observer. Here, we concern ourselves with clocks, and we refer the reader to [7] for a review of possible limitations to spatial measurements. Some progress has been made with this issue, for example [8], wherein the mass and mass uncertainty of a clock system are related to its accuracy and precision (neglecting spacetime curvature). In [9], using a gedankenexperiment, one such mass-time relation is rederived and combined with the “hoop conjecture” (a supposed minimum size before gravitational collapse [10]), to argue that the product of a clock’s
spatial and temporal uncertainty is bounded below by the product of the Planck length and the Planck time. A second, perhaps more difficult problem is that of reconciling the definition of time via a pointlike trajectory in GR with the impossibility of such trajectories according to quantum mechanics (a result of the uncertainty principle between position and momentum). A third issue is the prediction that acceleration affects quantum states via the Unruh [11, 12] and dynamical Casimir [13] effects (DCE), which in turn will affect clock rates [14]. One must therefore reconsider whether it is always possible to measure and remove acceleration effects and recover an ideal clock. Finally, the fourth issue is that, given the locality of the equivalence principle (i.e. it only holds exactly when we consider a pointlike observer), it is unclear to what extent it applies to quantum objects, which do not follow pointlike trajectories. We investigate the interplay of these four issues, seeking to answer the following questions: what time does a quantum clock measure as it travels through spacetime, and what factors affect its precision? What are the fundamental limitations imposed by quantum theory on the measurement of time, and are these affected by the motion of the clock? To answer these questions, we cannot in general rely on the Schrödinger equation, as we must use a particular
time parameter therein, which in turn requires the use of a particular classical trajectory.1 The relativistic clock model detailed in Section III gives a compromise; its boundaries follow classical trajectories, but the quantum field contained therein, and hence the particles of that field, do not. In Section IV we examine the extent to which this clock has allowed the four issues discussed above to be addressed, and discuss possible future progress.
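The bound attributed to [9] above can be summarized as follows (our schematic paraphrase, using the standard definitions of the Planck length and Planck time):

```latex
\Delta x \, \Delta t \;\gtrsim\; \ell_P \, t_P
  \;=\; \sqrt{\frac{\hbar G}{c^{3}}}\,\sqrt{\frac{\hbar G}{c^{5}}}
  \;=\; \frac{\hbar G}{c^{4}} ,
```

where \(\Delta x\) and \(\Delta t\) are the clock's spatial and temporal uncertainties; the product is fixed purely by the fundamental constants \(\hbar\), \(G\) and \(c\).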
Given the difference in the scales at which quantum theory and GR are usually applied, one may ask what we expect to gain by examining their overlap. Our response to such a question is threefold. Firstly, we note that optical clocks have reached a precision where gravitational time dilation as predicted by GR has been measured over scales accessible within a single laboratory [15]. Indeed, modern clocks are precise enough that they are sensitive to a
height change of 2 cm at the Earth’s surface [16]. Given the rate of improvement of this technology (see Figure 1 of [17], for example), one can anticipate an even greater sensitivity in the near future. The detection of a nuclear transition in thorium-229 [18], proposed as a new frequency standard [19], means that we may soon enter an era of “nuclear clocks”, surpassing that which is achievable with clocks based on electronic transitions. Considering this ever-increasing precision together with proposals to exploit quantum effects for superior timekeeping (e.g. [20, 21]),
we argue that a consideration of GR alongside quantum theory will become not simply possible, but in fact necessary in order to accurately describe the outcomes of experiments. Our second response is to point out the possibility of new technologies and experiments. There are already suggestions
exploiting the clock sensitivity mentioned above, such as the proposal to use changes in time dilation for earthquake prediction and volcanology [22]. On the other hand, there are proposals to use effects which are both quantum and relativistic in order to measure the Schwarzschild radius of the Earth [23], or to make an accelerometer [24], for example. See [25] for a review of experiments carried out or proposed which employ both quantum and general
relativistic features. Beyond specific proposals, there are practical questions which we cannot answer with quantum mechanics and GR separately; for example, what happens if we distribute entanglement across regions with differing spacetime curvatures, or how do we correlate a collection of satellite-based quantum clocks? The answers to these questions are relevant for proposals to use correlated networks of orbiting atomic clocks for entanglement-assisted GPS [20], or to search for dark matter [26]. Finally, there is a strong motivation from the perspective of fundamental science to investigate the nature of time at the overlap of GR and quantum theory. Beyond the intrinsic interest of finding a coherent combination of the two most fundamental theories in physics, a quantum relativistic conception of time may be of relevance when using quantum clocks to test the equivalence principle [27, 28] and to single out GR from the family of gravitational theories obeying this principle [29], for example. In addition, since we expect a viable theory of quantum gravity to also be a quantum theory of space and time, it must either reproduce a relativistic quantum theory of time in the semiclassical limit, or contradict it, giving a potential test of the quantum gravity theory compared to the semiclassical one that
we use here.
Time in quantum cosmology
We have pointed out that time reparameterization invariance of effective equations is not
guaranteed after quantization even in systems with a single constraint, and illustrated this often overlooked property in a specific cosmological model. Our detailed analysis of the underlying quantum gauge system has led us to a new procedure in which one can implement proper-time evolution at the effective level. This new definition includes all analogs of different classical choices of coordinate time and is time reparameterization
invariant in this sense. Moreover, our procedure unifies models with coordinate times and
internal times because they are all obtained from the same first-class constrained system
by imposing different gauge conditions, up to factor orderings. This last caveat is important and ultimately leads to violations of time reparameterization invariance or covariance in internal-time formulations. The effective constrained system provides gauge transformations that map moment corrections in an evolution generator for one time choice to the moment corrections obtained with a different time choice,
including proper time. However, in our model, the time choices we studied explicitly, given
by scalar time, cosmological time and proper time, all require different factor orderings of
the constraint operator for real evolution generators. Since effective constraints are computed
for a given factor ordering of the constraint operator, they do not allow gauge transformations
that would change factor ordering corrections. Factor ordering terms therefore generically imply that different time choices lead to different predictions, and time reparameterization
invariance of internal-time formulations is broken. The only solution to this important problem is to insist on one specific time choice for all derivations. The only distinguished time choice, in our opinion, is proper time: it refers directly to the time experienced by observers and gives evolution equations that can be used directly in an effective Friedmann equation of cosmological models. Moreover, it is time-reparameterization invariant
when compared with other choices of coordinate time, while there are no complete transformations for different choices of internal time. We have worked entirely at an effective level up to second order in moment corrections, corresponding to a semiclassical approximation to first order in ℏ. This order suffices to demonstrate our claims because differences in quantum corrections between the models are visible at this order. In principle, one can extend the effective expansion to higher orders,
but it becomes more involved and is then best done using computational help. We have not considered such an extension in the present paper because the orders we did include already show quite dramatic differences between the models if improper gauge conditions are used, for instance by trying to rewrite a deparameterized model in proper time by using the 1-parameter chain rule. Our deparameterized models could certainly be formulated with operators acting on a physical Hilbert space without using an effective theory. However, no general method is known that would allow one to compare physical Hilbert spaces based on different deparameterizations, or to introduce proper time at this level. By using an effective formulation,
we have gained the advantage of being able to embed all such models within the same constrained
system, and to transform their moment corrections by simple changes of gauge conditions. These properties were crucial in our strict definition of proper-time evolution at the quantum level, for which we used effective observables such as invariant moments
instead of operators on a physical Hilbert space. Internal-time formulations based on a single physical Hilbert space, as used for instance in loop quantum cosmology, cannot be assumed to give correct moment terms in effective equations, strengthening the results of [6]. Investigations of internal-time formulations of quantum cosmological models with
significant quantum fluctuations are therefore likely to be spurious.