Testing different approaches to quantum gravity with cosmology: An overview String theory, loop quantum gravity, noncommutative geometry, group field theory, causal sets, asymptotic safety, causal dynamical triangulations, and emergent gravity are among the best motivated quantum gravity proposals. As an introductory summary to this special issue of Comptes Rendus Physique, I explain how these different theories can be tested or constrained by cosmological observations. A century after Einstein's first claim that general relativity needs to be quantized, a consensus solution is still missing. There are, however, several convincing approaches available. Obviously, small black holes and the early Universe are the best "places" to confront these models with observations. As light black holes have never been observed, primordial cosmology is probably the most promising setting to consider. I try to review here the merits and weaknesses of each approach when compared with cosmological observations. This article does not pretend to be exhaustive or unbiased, and it draws heavily on the invited articles written for the special Comptes Rendus Physique issue "Testing quantum gravity with cosmology", which I coordinate. For some quite well known approaches, I focus only on the cosmological consequences. For other, less known ones, I also recall the basics of the construction. Relevant references for most of the models described in the following are given in the associated articles published in this special volume. The reference list given at the end of this article is therefore in no way sufficient.
Direct counterfactual communication via quantum Zeno effect
In the non-intuitive quantum domain, the phenomenon of counterfactuality is defined as the transfer of a quantum state from one site to another without any quantum or classical particle being transmitted between them. Counterfactuality requires a quantum channel between the sites, which means there exists a tiny probability that a quantum particle will cross the channel; in that event, the run is discarded and a new one begins. The scheme works because of the wave-particle duality that is fundamental to quantum physics: particles can be fully described by their wave functions alone.
The quantum Zeno effect occurs when an unstable quantum system is subjected to a series of frequent measurements. An unstable particle effectively never decays while it is being observed, and the system is frozen in its initial state with very high probability. This is one of the implications of the well-known but highly non-intuitive principle that, in the quantum realm, looking at something changes it.
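As a quantitative illustration of this freezing (a toy calculation, not part of the experiment described here): if a two-level system is driven through a total angle of π/2 but is projected back onto its initial state N times along the way, the survival probability is cos^(2N)(π/2N), which tends to 1 as N grows. A minimal sketch:

```python
import numpy as np

def survival_probability(n_measurements):
    """Probability a two-level system stays in its initial state when a
    rotation by pi/2 is split into n steps, each followed by a projective
    measurement onto the initial state (quantum Zeno effect)."""
    step = np.pi / (2 * n_measurements)          # rotation angle per step
    return np.cos(step) ** (2 * n_measurements)  # survival after n projections

for n in (1, 10, 100, 1000):
    print(n, survival_probability(n))            # approaches 1 as n grows
```

With a single measurement the state has fully evolved away (probability 0); with a thousand intermediate measurements it is frozen in place with better than 99% probability.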
Using this effect, the authors of the new study achieved direct communication between sites without carrier particle transmission. In the setup they designed, two single-photon detectors were placed in the output ports of the last of an array of beam splitters. According to the quantum Zeno effect, it's possible to predict which single-photon detector will "click" when photons are allowed to pass. A series of nested interferometers served to measure the system's state, thereby preventing it from changing.
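The beam-splitter chain can be sketched numerically. The toy model below illustrates the chained Zeno mechanism rather than the authors' actual apparatus: each of N identical splitters rotates the photon amplitude between Alice's mode and the channel by π/2N; if the channel is blocked after each splitter (a measurement), the photon is frozen in Alice's mode with high probability, while an open channel lets the amplitude transfer completely, so which detector finally clicks reveals whether the channel was blocked.

```python
import numpy as np

def detector_amplitudes(n_splitters, channel_blocked):
    """Photon amplitudes after an array of identical beam splitters coupling
    Alice's mode (index 0) to the transmission channel (index 1)."""
    theta = np.pi / (2 * n_splitters)
    bs = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # one beam splitter
    state = np.array([1.0, 0.0])                        # photon starts with Alice
    survival = 1.0   # probability the photon is never absorbed in the channel
    for _ in range(n_splitters):
        state = bs @ state
        if channel_blocked:
            survival *= state[0] ** 2        # channel amplitude is absorbed
            state = np.array([1.0, 0.0])     # conditional state: Zeno freezing
    return state, survival

for blocked in (False, True):
    state, survival = detector_amplitudes(25, blocked)
    print("blocked" if blocked else "open", np.round(state, 3), round(survival, 3))
```

With the channel open, the photon ends up entirely in the channel mode; with the channel blocked, it remains with Alice, and the probability that it was never absorbed exceeds 90% already for 25 splitters.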
Alice sends a single photon into the nested interferometer; it can be registered by one of three single-photon detectors, D0, D1 and Df. If D0 or D1 clicks, Alice records a logic result of one or zero. If Df clicks, the result is considered inconclusive and is discarded in post-processing. After the communication of all bits, the researchers were able to reassemble the image, a monochrome bitmap of a Chinese knot. Black pixels were defined as logic 0, while white pixels were defined as logic 1.
The idea came from holography technology. The authors write, "In the 1940s, a new imaging technique—holography—was developed to record not only light intensity but also the phase of light. One may then pose the question: Can the phase of light itself be used for imaging? The answer is yes." In the experiment, the phase of light itself became the carrier of information, and the intensity of the light was irrelevant to the experiment.
The authors note that besides applications in quantum communication, the technique could be used for such activities as imaging ancient artifacts that would be damaged by directly shining light.
Abstract
Intuition from our everyday lives gives rise to the belief that information exchanged between remote parties must be carried by physical particles. Surprisingly, in a recent theoretical study [Salih H, Li ZH, Al-Amri M, Zubairy MS (2013) Phys Rev Lett 110:170502], quantum mechanics was found to allow for communication even without the actual transmission of physical particles. From the viewpoint of communication, this mystery stems from a nonintuitive fundamental concept in quantum mechanics: wave-particle duality. All particles can be described fully by wave functions. To determine whether light appears in a channel, one refers to the amplitude of its wave function. However, in counterfactual communication, information is carried by the phase part of the wave function. Using a single-photon source, we experimentally demonstrate counterfactual communication and successfully transfer a monochrome bitmap from one location to another by using a nested version of the quantum Zeno effect.

Emergent Spacetime and Quantum Entanglement in M-Theory In the context of the Banks-Fischler-Shenker-Susskind Matrix theory, we analyze a spherical membrane in light-cone M theory along with two asymptotically distant probes. In the appropriate energy regime, we find that the membrane behaves like a smeared Matrix black hole; and the spacetime geometry seen by the probes can become non-commutative even far away from regions of Planckian curvature. This arises from non-linear Matrix interactions where fast matrix modes lift a flat direction in the potential – akin to the Paul trap phenomenon in atomic physics. In the regime where we do have a notion of emergent spacetime, we show that there is non-zero entanglement entropy between supergravity modes on the membrane and the probes.
The computation can easily be generalized to other settings, and this can help develop a dictionary between entanglement entropy and local geometry – similar to Ryu-Takayanagi but instead for asymptotically flat backgrounds. Classical gravity is a phenomenon of geometrical origin, encoded in the curvature of spacetime. Quantum considerations, however, whether in the setting of string theory or otherwise, suggest that the geometrical picture of gravity may be an effective long-distance approximation scheme. It appears that at Planckian distances, a fundamental rethinking of the nature of gravity sets in. There have also been recent suggestions that gravity is entropic, arising from quantum entanglement [1]-[8]. One then talks about the concept of 'emergent geometry': the idea that gravitational geometry is a collective phenomenon associated with underlying microscopic degrees of freedom. In attempting to understand these ideas in a concrete computational setting, the Banks-Fischler-Shenker-Susskind (BFSS) Matrix model [9] – and its related cousin, the Berenstein-Maldacena-Nastase (BMN) system [10] – provides a rich playground. These models purport to describe quantum gravity in the full non-perturbative framework of light-cone M theory. The degrees of freedom are packaged into matrices that, in principle, encode geometrical gravity data at low enough energies. Spacetime curvature is then expected to arise from the collective dynamics of these matrix degrees of freedom. Unfortunately, the map between emergent geometry and matrix dynamics has proven difficult to unravel (but see recent progress in this direction [11]-[18]). A crude cartoon of Matrix theory dynamics goes as follows. The degrees of freedom, arranged in matrices, represent an interlinked, complex web of membranes and fivebranes. At low energies, one can find settings where a hierarchy separates the different matrix degrees of freedom.
Sub-blocks of the matrices, modes that remain light and slow, describe localized and widely separated lumps of energy, while other 'off-diagonal' modes become heavy and frozen in their ground states – heuristically corresponding to membranes/fivebranes stretched between the lumps. The effective dynamics of the lumps leads to the expected low-energy supergravity dynamics, and hence a notion of emergent geometry. From this perspective, it is not surprising that a mechanism of entanglement across the degrees of freedom in sub-blocks of the matrices is key to the notion of emergent spacetime geometry. However, to our knowledge the role of quantum entanglement has not yet been explored in this context. In this work, our goal is to take the first steps in understanding how geometry may be encoded in Matrix theory degrees of freedom through a regime of heavy off-diagonal modes and through quantum entanglement of diagonal ones. We consider a particularly simple setup in an attempt to make the otherwise challenging computation feasible. We will arrange a spherical membrane in light-cone M theory, stabilized externally so as to source a smooth static curved spacetime, and then we will add two probe supergravity particles a large distance away from the source. Using two probes instead of just one allows us to potentially capture a local, invariant notion of gravity – avoiding the possibility of missing out on emergent gravity due to the equivalence principle and background dependence. The configuration we consider is, as expected, unstable in BFSS Matrix theory, and the sphere needs to be externally stabilized. Realizing the setup in matrices, we are immediately led to explore fluctuations of matrix modes that describe membranes stretched between sphere and probe. We first determine the relevant energy scale at which all off-diagonal matrix modes become heavy – a criterion we expect to be a prerequisite for identifying emergent commutative geometry [9, 12, 13, 15, 16].
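The statement that off-diagonal modes become heavy at large separation can be illustrated in a two-block toy model (a sketch of the generic commutator-squared potential, not the full BFSS computation): with X = diag(x1, x2) describing two separated lumps and Y an off-diagonal fluctuation, the potential V = -1/2 Tr[X, Y]^2 gives Y a mass squared proportional to the separation squared, which is the "stretched string" mass mentioned above.

```python
import numpy as np

def offdiag_mass_squared(x1, x2):
    """Mass^2 of an off-diagonal fluctuation y in the commutator potential
    V(y) = -1/2 Tr([X, Y]^2), with X = diag(x1, x2) and Y purely off-diagonal."""
    X = np.diag([x1, x2]).astype(float)
    def V(y):
        Y = np.array([[0.0, y], [y, 0.0]])
        C = X @ Y - Y @ X                 # the commutator [X, Y]
        return -0.5 * np.trace(C @ C)
    eps = 1e-4
    return (V(eps) - 2 * V(0.0) + V(-eps)) / eps**2   # numerical d^2V/dy^2

print(offdiag_mass_squared(0.0, 3.0))   # grows as the separation squared
print(offdiag_mass_squared(0.0, 6.0))   # doubling the distance: 4x the mass^2
```

In this convention the curvature of the potential is 2(x1 - x2)^2: quadrupling when the separation doubles, so widely separated lumps indeed freeze their connecting modes.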
We find that the off-diagonal modes corresponding to the strings stretched between the two probes can become light far away from the center of the massive shell, i.e. far away from where spacetime curvature is Planck scale. Using the configuration as background scaffolding and integrating out the heavy off-diagonal modes, we then focus on the effective dynamics of the diagonal fermionic modes. Zero modes of the fermionic degrees of freedom describe a system of qubits with a dense network of interactions. The qubit states map onto the eleven-dimensional supergravity multiplet; hence, one is describing the interactions of supergravity modes in the given background. This setup, for example, has recently been used to demonstrate fast scrambling of supergravity modes in Matrix theory [19, 20]. We find that the effective Hamiltonian for the qubits includes direct couplings between qubits on the membrane and qubits on the probes – when we focus on matrix dynamics with low enough energies to render the off-diagonal modes heavy and frozen. We then proceed to find the entanglement entropy between the sphere and probe qubits in the vacuum. Effectively, through the BFSS conjecture, we are computing the entanglement between supergravity modes on the sphere and on the probes. The computation can be carried out using expansions in several small parameters, such as the ratio of the radius of the sphere to the sphere-probe distance. That is, we compute the entanglement entropy in the regime where the probes are far away from the sphere. The presentation is organized as follows. Section 2 gives an overview of the Matrix theory of interest, the M-theory perspective, and a sketch of the Matrix theory analysis. Section 3 presents the detailed computation in Matrix theory, the derivation of the effective Hamiltonian, and the computation of entanglement entropy. Finally, Section 4 collects some concluding thoughts and directions for the future.
Four appendices summarize technical details that arise in the main text.
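The kind of vacuum entanglement entropy computed above can be illustrated on a minimal two-qubit toy model. The Hamiltonian below is illustrative only (it is not the effective qubit Hamiltonian derived in the paper): two spins with an sx-sx coupling of strength g have an entangled ground state, with a von Neumann entropy that vanishes at g = 0 and grows with the coupling.

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def ground_state_entropy(g):
    """Von Neumann entanglement entropy of one qubit in the ground state of
    H = sz x I + I x sz + g * sx x sx  (a toy two-site coupling)."""
    H = np.kron(sz, I2) + np.kron(I2, sz) + g * np.kron(sx, sx)
    vals, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0].reshape(2, 2)            # ground state as a 2x2 amplitude matrix
    s = np.linalg.svd(psi, compute_uv=False)  # Schmidt coefficients
    p = s**2
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

for g in (0.0, 0.1, 1.0):
    print(g, ground_state_entropy(g))
```

The Schmidt decomposition of the ground-state wave function does the work of the partial trace: the squared Schmidt coefficients are the eigenvalues of the reduced density matrix of either qubit.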
No Smooth Beginning for Spacetime We identify a fundamental obstruction to any theory of the beginning of the universe as a semiclassical quantum process, describable using complex, smooth solutions of the classical Einstein equations. The no boundary and tunneling proposals are examples of such theories. We argue that the Lorentzian path integral for quantum cosmology is meaningful and, with the use of Picard-Lefschetz theory, provides a consistent definition of the semiclassical expansion. Framed in this way, the no boundary and tunneling proposals become identified, and the resulting framework is unique. Unfortunately, the Picard-Lefschetz approach shows that the primordial fluctuations are out of control: larger fluctuations receive a higher quantum mechanical weighting. We prove a general theorem to this effect in a wide class of theories. A semiclassical description of the beginning of the universe as a regular tunneling event thus appears to be untenable. Quantum gravity is supposed to provide a quantum description of spacetime. If so, it should address basic questions like: how did classical spacetime emerge? or even, how did the universe begin? Vilenkin's tunneling from nothing proposal [1] and Hartle and Hawking's no boundary proposal [2–4] represent two important attempts in this direction. Both are most naturally formulated in terms of sums over spacetime geometries, taken to have a certain form. However, the lack of a consistent definition of the Feynman path integral for quantum gravity has significantly hampered attempts to extract clear predictions from either proposal. Hartle and Hawking take the Euclidean path integral for quantum gravity to be fundamental and attempt to describe the beginning of the universe using a complex, smooth, saddle point solution of the Einstein equations. In their picture, the real, Lorentzian universe is obtained as an analytic continuation from a compact Euclidean geometry.
The big bang singularity is avoided because the Euclidean region closes off smoothly in the past (left panel, Fig. 1). Vilenkin, in contrast, adopted an entirely Lorentzian picture, in which the universe begins on a three-geometry of size zero, i.e., a point. He assumed, without further justification, that there would be no freedom to input boundary data there. In a previous paper [5] we showed that, in simple cosmological models, the Lorentzian path integral for quantum gravity is a meaningful, convergent quantity whereas the Euclidean version is not. Instead of performing a Wick rotation of a timelike coordinate, the convergence of the path integral is improved by analytically continuing in the fields. Picard-Lefschetz theory allows one to unambiguously identify the contributing saddle points and integration contours, thereby defining a consistent semiclassical expansion for the full quantum propagator. We believe Picard-Lefschetz theory provides the most conservative and minimal approach to semiclassical quantum gravity, possessing considerable advantages over other approaches. This being the case, any proposal for the initial conditions of the universe, such as the tunneling or no boundary proposals, should be possible to formulate within the Picard-Lefschetz approach. Like Vilenkin, our starting point is the Lorentzian theory. However, like Hartle and Hawking, we demand that any contributing saddle point must be a smooth (albeit complex) solution of the Einstein equations. This condition is equally necessary, we believe, in Vilenkin's picture in order to avoid additional boundary data entering on the initial three-geometry. Remarkably, then, in the Picard-Lefschetz approach the tunneling and no boundary proposals become equivalent. In [5], we showed that implementing regularity à la Hartle and Hawking results in a negative real part of the semiclassical exponent, disagreeing with them but in agreement with Vilenkin.
In this Letter, we extend the discussion to linear fluctuations around such a background, and to slow-roll inflationary models. Contrary to the small perturbations hoped for in both proposals, we find that large fluctuations are preferred. The middle and right panels in Fig. 1 are increasingly accurate descriptions of the no boundary or tunneling proposal. Perturbation theory and the normalizability of the perturbations break down, making a smooth, semiclassical beginning of spacetime impossible.
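The convergence improvement obtained by complexifying can be seen in a one-dimensional toy example (a standard Fresnel integral, not the gravitational path integral itself): the oscillatory integral of exp(ix^2) over the positive real axis is only conditionally convergent, but along its steepest-descent (Lefschetz) contour x = exp(i pi/4) s the integrand becomes a damped Gaussian and ordinary quadrature converges rapidly to the known value sqrt(pi)/2 * exp(i pi/4).

```python
import numpy as np

def fresnel_by_contour_rotation(n=200_000, smax=10.0):
    """Evaluate I = integral_0^inf exp(i x^2) dx on its steepest-descent
    contour x = exp(i pi/4) s, where the integrand becomes exp(-s^2)."""
    s = np.linspace(0.0, smax, n)
    integrand = np.exp(1j * np.pi / 4) * np.exp(-s**2)  # includes dx/ds phase
    ds = s[1] - s[0]
    return np.sum((integrand[1:] + integrand[:-1]) / 2) * ds  # trapezoid rule

exact = np.sqrt(np.pi) / 2 * np.exp(1j * np.pi / 4)   # = (1 + i) sqrt(pi/8)
print(fresnel_by_contour_rotation(), exact)           # both ~ 0.6267 + 0.6267i
```

This is the one-dimensional analogue of "analytically continuing in the fields" rather than Wick-rotating time: the contour deformation picks out the unique relevant saddle and renders the integral absolutely convergent.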
On the difference between Poincaré and Lorentz gravity It is known from Utiyama's paper [1] that GR can be viewed as a gauge theory of the Lorentz group. Later, the diffeomorphism invariance was identified with translational invariance, and GR was then found to be a gauge theory of the Poincaré group [2]. This interpretation of Poincaré invariance as a sum of Lorentz invariance and diffeomorphism invariance has become the standard one in the Poincaré gauge theory of gravity [3, 4]. However, diffeomorphism invariance is not always equal to translational invariance. For example, they are different in the de Sitter (dS) gauge theory of gravity. In fact, the diffeomorphism symmetry does not correspond to any conservation law in dS gravity [5]. One may then wonder whether the identification of diffeomorphism with translation in Poincaré gravity is also problematic. In this paper, we show that a theory with Lorentz invariance and diffeomorphism invariance is not necessarily Poincaré invariant. We call such a theory the Lorentz gauge theory of gravity. Just as in the case of dS gravity, the diffeomorphism symmetry does not correspond to any conservation law in Lorentz gravity. There exists only the angular momentum (AM) conservation with respect to the Lorentz symmetry; in other words, the energy-momentum (EM) conservation is absent. How, then, should the Poincaré invariance be interpreted? The answer lies in the construction of the Lorentz gravity: one introduces a vector field whose components may be called the local inertial coordinates (LIC). The prototype of the LIC is Cartan's radius-vector field [4, 6], which is GL(n, R) covariant. The dS/AdS/Poincaré-covariant LIC were first given by Guo [7], West [8] and Pilch [9], respectively. See also Refs. [10, 11] for a proof of their existence on an arbitrary spacetime.
With the help of the LIC, the Poincaré invariance can be interpreted as an internal symmetry, just like the Lorentz symmetry. In this formalism, the fundamental variables are the Poincaré connection and the LIC. In the Lorentz gauge, the LIC are fixed, and the Poincaré connection turns out to be a combination of the Lorentz connection and the tetrad field, which are the traditional variables of Poincaré gravity. For completeness, we also compute the conservation law with respect to the Poincaré symmetry in this formalism, which reduces to the ordinary one [2] in the Lorentz gauge. Although Kawai [12] has attempted to do this, his result does not completely respect the spirit of this formalism. The key point is that the Poincaré-covariant derivative does not enter Kawai's conservation law, and thus the AM conservation has not been formulated in a neat form, which will be given here. The paper is organized as follows. In section 2, we construct the Lorentz gravity and derive the conservation law. In section 3, the same is done with the Lorentz group replaced by the Poincaré group. In section 4, some remarks on the diffeomorphism symmetry and the gauge group of gravity are presented.
Foliation, Jet Bundle and Quantization of Einstein Gravity In Park [1] we proposed a way of quantizing gravity with the Hamiltonian and Lagrangian analyses in the ADM setup. One of the key observations was that the physical configuration space of the 4D Einstein-Hilbert action admits a three-dimensional description, thereby making gravity renormalization possible through a metric field redefinition. Subsequently, a more mathematical and complementary picture of the reduction, based on foliation theory, was presented in Park [2]. With the setup of foliation, the physical degrees of freedom have been identified with a certain leaf. Here we expand the work of Park [2] by adding another mathematical ingredient, an element of jet bundle theory. With the introduction of the jet bundle, the procedure of identifying the true degrees of freedom outlined therein is made precise, and the whole picture of the reduction is put on firm mathematical ground. There have been two main approaches to tackling the quantization of Einstein gravity, namely canonical and covariant (see, e.g., [3–5] for reviews). At an early stage, the canonical approach was pursued within the Hamiltonian formulation and led to the Wheeler-DeWitt equation. Later, its main theme became the so-called configuration space reduction [7, 8], which employs the machinery of differential geometry, in particular symplectic geometry and jet bundles. Meanwhile, the covariant approach followed a path within a more conventional physics framework. The main endeavors along this line were the enumeration of the counterterms in the effective action [9–13] and the progress made in asymptotically safe gravity [14–18]. It was established for the 4D Einstein action that the divergences do not cancel except at one loop; one faces a proliferation of counterterms as the loop order increases, which turns out to be typical of other gravity theories as well, and the theory loses its predictability.
A question has recently been raised [1] regarding the conventional framework of Feynman diagram computations, in which, under the umbrella of the covariant approach, the non-dynamical fields contribute to loop diagrams and appear as external states as well. The number of degrees of freedom (i.e., the number of metric components) of a 4D metric is ten to start with. One gauge-fixes the 4D gauge symmetry, thereby effectively removing four degrees of freedom. This means that six metric components run around the loop diagrams. In quantization and diagrammatic analysis, one first examines the physical states of the theory, usually through the canonical Hamiltonian analysis. In the canonical Hamiltonian analysis carried out in the ADM setup (which had been introduced with the goal of separating out the dynamical degrees of freedom), it was revealed that some of the metric components, namely the lapse function and shift vector, are non-dynamical, a result that is well known by now. Therefore, only two out of ten components are dynamical. An approach in which all of the unphysical degrees of freedom are removed has been proposed with the observation [1, 2, 19] that the non-dynamism of the shift vector and lapse function leads to an effective reduction of 4D gravity when expanded around relatively simple vacua (a more precise characterization of these vacua will be discussed below). The approach of Park [1, 2, 19] has features of both the canonical and covariant approaches: it employs the 3+1 splitting as does the canonical approach, and a fixed background is considered as in the covariant approach. The analysis in Park [1] was carried out in the ADM formalism. After starting with the ADM Lagrangian, the (more or less standard) Dirac Hamiltonian formulation was employed, and the Lagrangian setup was revisited afterwards.
Based on the fact that the lapse function and shift vector are non-dynamical, it was proposed in Park [1, 19] that the shift vector be gauge-fixed by using the 3D residual symmetry that remains after the 4D bulk gauge-fixing through the de Donder gauge. The shift vector gauge-fixing introduces a constraint, analogous to the momentum constraint in the Hamiltonian quantization, in the ADM Lagrangian setup (the method of Park [1] is applicable to a class of relatively simple backgrounds; a more precise characterization will be given in Section 4 below). The relevance of foliation theory (reviews of foliation theory can be found, e.g., in Molino [20], Moerdijk and Mrčun [21], Gromoll and Walschap [22], Rovenskii [23], Candel and Conlon [24], and Montano [25]) was recognized while examining possible implications of the shift vector constraint; it was realized that the shift vector constraint implies that the foliation of the spacetime should be of a special type known as Riemannian in foliation theory. Interestingly, a Riemannian foliation admits another special foliation, the dual totally geodesic foliation, a relatively recent result in mathematics [26]. One of the facts that makes these special foliations interesting is the presence of the so-called parallelism [20]; in the context of the totally geodesic foliation under consideration, the parallelism is "tangential" [26] and has an associated abelian Lie algebra. The proposal in Park [1] has a complementary, more mathematical version [2] that relies crucially on this duality between the Riemannian foliation and the totally geodesic foliation of a manifold (see Figure 1). In particular, a totally geodesic foliation has the so-called tangential parallelism and the corresponding Lie algebra (the duals of the transverse parallelism and its Lie algebra of the Riemannian foliation [20]).
In the case under consideration the Lie algebra is abelian, and it was proposed in Park [2] that the abelian symmetry be associated with the gauge symmetry that allows the gauge-fixing of the lapse and shift. In other words, the lapse and shift gauge symmetry should somehow be related to the action of the group fibration that generates the "time" direction (i.e., the tangential parallelism; see below). The gauge-fixing then corresponds to taking the quotient of the bundle by the group, bringing us to the holographic reduction. The potential significance of this abelian algebra for the physics context becomes much clearer once the whole mathematical setup is reconstructed in the framework of jet bundle theory (see, e.g., [27–29] for reviews of jet bundle theory). For example, the jet bundle setup brings to light the fact that the abelian symmetry is a gauge symmetry. Here, we expand the work of Park [2] and elaborate on the mathematical picture by adding another ingredient, a jet bundle. The key result is that the proposal in Park [2] that the abelian symmetry be associated with the gauge symmetry is now on firmer ground, and we have a refined confirmation of the mathematical picture of the reduction presented in Park [2]. In particular, the modding-out procedure, which is central to the mathematical reduction picture but only qualitatively outlined in Park [2], is made quantitative and precise.
Entanglement, space-time and the Mayer-Vietoris theorem Entanglement appears to be a fundamental building block of quantum gravity, leading to new principles underlying the nature of quantum space-time. One such principle is the ER-EPR duality. While supported by our present intuition, a proof is far from obvious. In this article I present a first step towards such a proof, originating in what is known to algebraic topologists as the Mayer-Vietoris theorem. The main result of this work is the re-interpretation, in terms of quantum information theory, of the various morphisms arising when the Mayer-Vietoris theorem is used to assemble a torus-like topology from more basic subspaces of the torus, resulting in a quantum entangler gate (Hadamard and c-NOT).
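The entangler-gate combination named in the abstract is the standard quantum-circuit construction: a Hadamard on the first qubit followed by a controlled-NOT maps the product state |00> to the maximally entangled Bell state (|00> + |11>)/sqrt(2). A quick numerical check:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)      # control on the first qubit
ket00 = np.array([1.0, 0.0, 0.0, 0.0])            # the product state |00>

bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)   # ≈ [0.7071, 0, 0, 0.7071]  i.e. (|00> + |11>)/sqrt(2)
```

Neither gate alone creates entanglement from |00>; it is the composition that does, which is why this pair serves as the elementary "entangler" in the quantum-information reading of the Mayer-Vietoris morphisms.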
Special Grand Unification We discuss new-type grand unified theories based on grand unified groups broken to their special subgroups as well as their regular subgroups. In this framework, when we construct four-dimensional (4D) chiral gauge theories such as the Standard Model (SM), 4D gauge anomaly cancellation restricts the minimal number of generations of the 4D SM Weyl fermions. We show that in a six-dimensional (6D) SU(16) gauge theory on M4 × T2/Z2, one generation of the SM fermions can be embedded into a 6D bulk Weyl fermion. For the model including three chiral generations of the SM fermions, the 6D and 4D gauge anomalies on the bulk and fixed points cancel out without exotic 4D chiral fermions.
Planck scale Corrections to the Harmonic Oscillator, Coherent and Squeezed States One of the most active research areas in theoretical physics is the formulation of a quantum theory of gravity that would reproduce the well-tested theories of Quantum Mechanics (QM) and General Relativity (GR) at low energies. The large energy scales necessary to test proposed theories of Quantum Gravity (QG) make this investigation extremely challenging. Nonetheless, it is important to propose phenomenological models and tests of such theories at low energies, and the Generalized Uncertainty Principle (GUP) offers precisely this opportunity. Many theories of QG suggest that there exists a momentum-dependent modification of the Heisenberg Uncertainty Principle (HUP), and the consequent existence of a minimal measurable length [1–9]. This modification is universal and affects every Hamiltonian, since it affects the kinetic term. In the last couple of decades, several investigations have been conducted on many aspects and systems of QM, such as Landau levels [10, 11], the Lamb shift, the case of a potential step and of a potential barrier [10, 11], the case of a particle in a box [12], and the theory of angular momentum [13]. Furthermore, potential experimental tests have been proposed considering microscopic [14] or macroscopic Harmonic Oscillators (HO) [15], or using Quantum Optomechanics [16, 17]. The most general modification of the HUP was proposed in [11] and includes linear and quadratic terms in momentum, of the following form
[q, p] = iℏ (1 − 2δγp + (δ² + ε)γ²p²)   (1)

in one dimension, where δ and ε are two dimensionless parameters defining the particular model, and

γ = γ₀ / (MP c),
where MP and c are the Planck mass and the speed of light, respectively, and γ₀ is a dimensionless parameter. The presence of the quadratic term is dictated by string theory [1, 2] and by gedanken experiments in black hole physics [3, 7], whereas the linear term is motivated by doubly special relativity [9], by the Jacobi identity of the corresponding [q_i, p_j] commutators, and as a generalization of the quadratic model. To incorporate both possibilities, we will leave the two parameters δ and ε undetermined, unless otherwise specified. While it is possible to absorb γ into redefinitions of δ and ε, it is in practice useful to keep γ distinct so that it can serve as an expansion parameter in various circumstances. In the present work, we revisit the problem of quantizing the HO [5] by incorporating the GUP (1), and rigorously investigate the implications for coherent and squeezed states. Unlike previous investigations [18–20], our focus is on a rigorous algebraic approach, which not only allows for a more efficient derivation of the HO energy spectrum but also defines coherent and squeezed states of the HO with the GUP (1) in a consistent way. We are thus able to derive potentially observable deviations from the standard HO due to GUP or Planck scale effects. In particular, we find a model-dependent modification of the HO energy spectrum, as well as modified position and momentum uncertainties for coherent and squeezed states. This paper is organized as follows. In Sec. 2, we consider a GUP-induced perturbation of the HO, computing perturbed energy eigenstates and eigenvalues. Moving beyond the previous treatment of the HO with the GUP [5], the algebraic method adopted here turns out to be more concise, allowing for definitions that will be useful in the rest of the paper. A new set of ladder operators for the perturbed states is defined in Sec. 3, and in Sec.
4, we consider coherent states, focusing on their position and momentum uncertainties and their time evolution. A similar analysis for squeezed states of the HO is performed in Sec. 5, with special reference to the specific GUP models of [5] and [11]. In Sec. 6 we summarize our work and comment on potential applications.
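As a sketch of the algebraic machinery involved (illustrative only, in units ℏ = m = ω = 1, and for a purely quadratic GUP whose leading correction to the HO Hamiltonian is proportional to p⁴): the first-order energy shift is controlled by the matrix elements ⟨n|p⁴|n⟩ = (6n² + 6n + 3)/4, which can be checked with truncated ladder-operator matrices.

```python
import numpy as np

N = 60                                   # truncation of the Fock space
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
p = 1j * (a.T - a) / np.sqrt(2)          # momentum, units hbar = m = omega = 1
p4 = np.linalg.matrix_power(p, 4).real   # p^4 (diagonal part is real)

for level in range(5):
    exact = (6 * level**2 + 6 * level + 3) / 4
    print(level, p4[level, level], exact)
```

The truncation only corrupts the top few levels of the matrix, so the low-lying diagonal elements agree with the closed-form expression to machine precision; the same matrix representation underlies the perturbative treatment of the GUP-modified spectrum.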
String Theory, Supersymmetry, Unification, and All That John H. Schwarz and Nathan Seiberg: Abstract - String theory and supersymmetry are theoretical ideas that go beyond the standard model of particle physics and show promise for unifying all forces. After a brief introduction to supersymmetry, we discuss the prospects for its experimental discovery in the near future. We then show how the magic of supersymmetry allows us to solve certain quantum field theories exactly, thus leading to new insights about field theory dynamics related to electric-magnetic duality. The discussion of superstring theory starts with its perturbation expansion, which exhibits new features including “stringy geometry.” We then turn to more recent non-perturbative developments. Using new dualities, all known superstring theories are unified, and their strong coupling behavior is clarified. A central ingredient is the existence of extended objects called branes.
Quantum theory is a quasi-stochastic process theory There is a long history of representing a quantum state using a quasi-probability distribution: a distribution allowing negative values. In this paper we extend such representations to deal with quantum channels. The result is a convex, strongly monoidal, functorial embedding of the category of trace preserving completely positive maps into the category of quasi-stochastic matrices. This establishes quantum theory as a subcategory of quasi-stochastic processes. Such an embedding is induced by a choice of minimal informationally complete POVMs. We show that any two such embeddings are naturally isomorphic. The embedding preserves the dagger structure of the categories if and only if the POVMs are symmetric, giving a new use of SIC-POVMs. We also study general convex embeddings of quantum theory and prove a dichotomy that such an embedding is either trivial or faithful. The results of this paper allow a clear explanation of the characteristic features of quantum mechanics coming from being epistemically restricted (no-cloning, teleportation) and having negative probabilities (Bell inequalities, computational speed-up). In Feynman’s famous 1981 paper on quantum computation [8] he writes “The only difference between a probabilistic classical world and the equations of the quantum world is that somehow or other it appears as if the probabilities would have to go negative”. In this paper we wish to make this statement exact. Of course, much work has already been done in this regard, going all the way back to the Wigner quasi-probability distribution in 1932 [20]. The Wigner function allows one to associate a probability distribution over phase space with a quantum particle, with the only caveat that the probability sometimes has to be negative.
The negativity appearing in probabilistic representations of quantum systems lies at the heart of quantum theory: Spekkens has shown [16] that the necessity of negativity in probabilistic representations is equivalent to the contextuality of quantum theory. It is also a necessity for a quantum speedup, as states represented positively by the Wigner function can be efficiently simulated [12, 13, 18, 19]. The main contribution of this paper is to represent all of (finite-dimensional) quantum theory as a set of quasi-stochastic processes, not just the states. In particular we will use the language of category theory to establish that the category of quantum processes is a subcategory of the category of quasi-stochastic processes. We will also study what is necessary to preserve the tensor product and adjoint in these representations, giving an application of minimal and of symmetric informationally complete POVMs, respectively. Representing quantum processes by quasi-stochastic matrices is not a new idea. In particular, it is used in [3] to argue the similarity of a unitary process and the Born rule, although the authors stop short of extending the rule to all quantum channels and of composing them. The authors of [13] likewise do not go into detail about the compositional nature of these representations. As far as the author is aware, this paper is the first to consider the compositional structure of quasi-stochastic representations of quantum theory.
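As a concrete illustration of the kind of representation discussed above, the sketch below builds the standard tetrahedral qubit SIC-POVM, represents the Hadamard unitary channel as a 4×4 matrix acting on the induced probability vectors, and checks that the matrix is quasi-stochastic: its columns sum to one while some entries are genuinely negative. The normalizations and variable names are my own choices, not the paper's.

```python
import numpy as np

# Hedged illustration: a quantum channel as a quasi-stochastic matrix
# induced by a qubit SIC-POVM (a minimal informationally complete POVM).

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

# Tetrahedral Bloch vectors -> SIC-POVM elements M_k = (I + r.sigma)/4
rs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
M = [(I2 + r[0] * sx + r[1] * sy + r[2] * sz) / 4 for r in rs]

# Dual frame for the qubit SIC: D_k = 6*M_k - I, so rho = sum_k p_k D_k
# with p_k = Tr(rho M_k).
D = [6 * Mk - I2 for Mk in M]

# Represent the Hadamard unitary channel: S_jk = Tr(M_j U D_k U^dagger).
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[np.trace(Mj @ U @ Dk @ U.conj().T).real
               for Dk in D] for Mj in M])

print(np.allclose(S.sum(axis=0), 1))  # columns sum to 1: quasi-stochastic
print(S.min() < 0)                    # negative entries appear
```

The identity channel comes out as the identity matrix in this representation; it is nontrivial unitaries, such as the Hadamard above, that force negative entries, matching the Feynman quote.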
Spaces of Quantum Field Theories The concept of a “space of quantum field theories” or “theory space” was set out in the 1970s in work of Wilson, Friedan and others. This structure should play an important role in organizing and classifying QFTs, and in the study of the string landscape, allowing us to say when two theories are connected by finite variations of the couplings or by RG flows, when a sequence of QFTs converges to another QFT, and bounding the amount of information needed to uniquely specify a QFT, enabling us to estimate their number. As yet we do not have any definition of theory space which can be used to make such arguments. In this talk, we will describe various concepts and tools which should be developed for this purpose, inspired by the analogous mathematical problem of studying the space of Riemannian manifolds. We state two general conjectures about the space of two-dimensional conformal field theories, and we define a distance function on this space, which gives a distance between any pair of theories, whether or not they are connected by varying moduli. Quantum field theory (QFT) is a remarkably successful physical framework, describing particle physics, critical phenomena, and certain many-body systems. Superstring theory is also closely based on QFT. A central problem of theoretical physics is to classify the QFTs and understand the relations between them. After more than fifty years of effort, we have no complete classification, nor is such a thing even on the horizon, except for a few special subclasses of theories. In the broadest terms, we can classify QFTs by the dimension of space-time, and the presence of continuous symmetry and/or supersymmetry. Within such a class, to get a well-posed problem, we should place some upper bound on the total number of degrees of freedom, under some precise definition.
The most basic question of classification would then be: is there any sense in which the total set of QFTs in this class is finite, or countable, or compact, or something we can in any way put definite limits on? To put this a different way, can we make a list, say of Lagrangians for definiteness, in which every QFT is guaranteed a place on the list? In itself this need not be the final answer – an entry on the list might need to satisfy further consistency conditions to be a QFT, or several entries might describe the same QFT. Still, given such a starting point, we could continue by addressing these difficulties to systematically map out the space of QFTs. But, except for a few special classes, we have no such list, and no answer to this question. Now, given sufficiently restrictive assumptions, it can happen that some finite subset of choices of field content and coupling constants fixes the entire theory, making the classification problem tractable. For example, if we are interested in four-dimensional theories with 16 supercharges, there are good arguments that the only examples are the supersymmetric Yang-Mills theories. These are determined by a choice of gauge group G (a compact Lie group) and a complexified gauge coupling for each group factor, with certain known equivalences between theories (S-dualities). Another class of QFTs which has been classified is the “minimal CFTs,” the two-dimensional conformal field theories with central charge c < 1. Their classification is a bit more complicated, with an “algebraic” part (the representation theory of the Virasoro algebra) determining the spectrum of primary fields, and a “global” part which determines the list of closed operator algebras satisfying modular invariance (see [16, 7] for an overview of this classification).
While there are a few more results of this type, the general classification problem, even for cases of central interest such as d = 4, N = 1 supersymmetric theories, seems so distant that it is not much discussed. This is because we have no answer to the question we posed above. More specifically, because we do not sufficiently well understand the relation between the description of a QFT (such as a bare Lagrangian) and the actual observables, it is not known how to reduce the problem to finite terms. To illustrate this point, let us consider what might appear to be a simple analog to the minimal CFTs, namely the 2d CFTs with c < 2. As is well known, the representation theory of the Virasoro algebra does not get us very far here; the number of primary fields grows very quickly with dimension (superpolynomially) and we do not have any way to reduce this large amount of data to a manageable subset. While one can consider larger algebras and rational CFTs, this amounts to looking under a rather well-inspected lamppost; we know that there are nonrational CFTs, and that crucial aspects of the problem (such as the fact that CFTs can have moduli) are not respected by this simplification. There is no reason why we must take an algebraic approach, and one might try to describe the space of c < 2 CFTs in other ways. For example, one could start by considering the Lagrangians with two scalar fields and a nontrivial potential. By the c-theorem, we know all such theories will flow to CFTs (often trivial) with c < 2. While these could be listed, there is no reason to think this includes all the c < 2 theories. An example of a c < 2 theory which probably cannot be obtained this way is the direct sum of an Ising model (c = 1/2) with two tricritical Ising models (c = 7/10).
This particular theory can be obtained using three scalar fields, and it is a reasonable hypothesis that any c < 2 theory could be obtained in a similar way, as a flow from a starting point with more scalar fields. If this were the case, we could imagine solving the classification problem by simply listing Lagrangians, but we would need one more ingredient for this list to have an end: an “inverse c-theorem” that tells us that all theories with c < 2 can be obtained as RG flows from a set of theories with at most N scalar fields. At present we do not know whether this is so, nor is there any conjecture for the N required for such a claim to hold. This is an example of a fairly concrete approach to classification, which might or might not work. But one of the points I want to make in this lecture is that we (physicists) have too simple an idea of what a classification should be. I believe we need to broaden this idea to make progress, in ways analogous to those which were developed to address similar problems in mathematics. Intuitively, one classifies a set of mathematical objects by listing them, and giving an algorithm which, given an object X, determines which one it is. The compact Lie groups are a paradigmatic example in which the objects are classified by discrete structures. As with the minimal CFTs, this classification has an algebraic part, the classification of Lie algebras (say in terms of Dynkin diagrams), and a global part, the classification of discrete subgroups which can appear as the fundamental group. In other cases, one has families of objects depending on parameters. One then needs to understand the moduli spaces in which these parameters take values; familiar examples are the moduli spaces of Riemann surfaces. These are cases in which the classification program was successful. 
On the other hand, many other superficially analogous mathematical problems have not been solved in such concrete terms, and we will argue that this includes problems quite analogous to the classification of QFT. This did not stop the mathematicians, but rather inspired the development of other ways to think about classification problems, more abstract than listing objects or defining moduli spaces. Some are familiar in physics, such as the idea of a topological invariant. Others are not, including most of the ideas of metric geometry [18]. Before proceeding, let us recall the best existing “classification” of QFTs, which is based on perturbation theory. A large set of QFTs (perhaps all of them) can be defined by quantizing a classical action, say by using a regularized functional integral, computing in perturbation theory, and performing renormalization. For these theories, the classification problem has three parts. First, we need to enumerate actions and the couplings they depend on. We also need to identify equivalences between theories and redundant couplings. We then need to study the renormalization group and decide which operators are relevant and marginal, so that we can take the continuum limit. In perturbation theory, these steps are by now fairly well understood. Finally, we need to understand the relation between the perturbation theory and the actual observables of the theory. In some cases, this is fairly clear – quantities computed in perturbation theory are asymptotic to the true correlation functions or S-matrix elements as the couplings go to zero. If the physics at finite coupling is qualitatively similar to the limiting free theory, one can essentially regard the coefficients of the perturbative expansion as physical observables. If these agree between two theories, they are the same; if they are “close” in some sense the theories are close, and so on.
Although this map from the action to the observables is somewhat complicated, as physicists we learn how to work with it, and this is our working definition of a “space of QFTs.” Thus the stable particles correspond to fields, certain leading coefficients in scattering amplitudes or correlation functions correspond to couplings, and so on. The resulting moduli space of QFTs is a space with one component for each choice of field content, and with each component parameterized by a finite set of couplings in the Lagrangian. One problem with this working definition is that it is based on perturbation theory, and there are many QFTs for which this is not a good description. Formally, we might distrust perturbation theory because the bare coupling is large, or because a coupling is relevant and becomes large at low energy. Now in itself this does not prove that the physics is qualitatively different from the picture given by perturbation theory; an example where it is not is the 2d Landau-Ginzburg models (i.e., scalar fields with a potential with isolated critical points). But in many other cases we know that it is. The perturbative excitations might be confined, so that only composite objects are observable. Even when the perturbative excitations are not confined, there might be solitons which are lighter, and in a physical sense are more fundamental. These thoughts lead to the concept of duality equivalences, and the idea that a large part of theory space might be described by patching together perturbative descriptions which are valid in different regimes. Of course, there might be other regions where no perturbative description is valid. Indeed, we do not know whether we yet know about all the types of QFT, or whether we have all of the concepts we will need to describe them. One test is that a local description of theory space must be complete, in the sense that any relevant or marginal perturbation keeps us within the space.
One must also ask whether there are disconnected components of theory space, not obtained by perturbing the theories we know about. To address this question, one’s basic definitions probably must place the QFTs within some larger set with a simpler description. These are attractive questions and ideas, which are part of the general study of the space of QFTs. This concept seems to have come out of statistical field theory; it was discussed in Wilson and Kogut’s famous review [24] and many works since. At least as an intuition, it provides a framework for a good deal of work on QFT. Nevertheless, we do not know very much about it, as illustrated by the question of our introduction, or more precise questions such as these:
• Is theory space connected (by paths)? Given two QFTs X and Y, are they in the same component? If so, is the path between them of “finite length” or “infinite length”?
• How “big” is theory space? Can we show that the number of QFTs satisfying some conditions is finite, and estimate the number?
• How many parameters (or measurements) do we need to characterize a QFT? At weak coupling, we can answer this, but more generally?
In trying to make these ideas and questions precise, one runs up against unsolved problems. Certain necessary concepts and tools have not yet been developed, and another goal of the talk is to explain what these are. In previous talks on this subject [12] I expressed this point with the phrase that “to study the space of Xs” (here meaning QFT), one must have a definition of an X. While I would still maintain this, having the definition is not enough; there are other equally important aspects of the problem. Let me list a few, which I will explain below:
• Topology and metric on theory space
• Embedding theorems
• Weak QFT
In explaining these ideas, we will follow the basic analogy between the classification of QFT and the classification of Riemannian manifolds, i.e.
manifolds with metrics, which first emerged in Friedan’s work on renormalization of 2d sigma models [14].
Supersymmetry as a probe of the topology of manifolds These lectures are a brief introduction to topological field theories. Many technical details have been omitted with the hope of providing the reader a flavour of the results rather than the details. In the first lecture we discuss how supersymmetric quantum mechanics provides connections with the Atiyah-Singer Index theorems as well as topological invariants such as Euler classes of vector bundles. In the second lecture, we show how one can construct supersymmetric field theories whose observables are topological invariants on some moduli space. These topological field theories can be constructed using a standard procedure due to Witten called twisting of N = 2 supersymmetric field theories. I have organised the two lectures to follow the historical sequence. The application of supersymmetry to probe topology has occurred in two distinct phases. The first phase occurred in the early 80’s starting from the work of Witten on supersymmetry breaking and Morse theory [1, 2]. Witten’s work was extended by Alvarez-Gaumé [3] and independently by Friedan and Windey [4] to provide “proofs” of the Atiyah-Singer Index theorem for various elliptic complexes. The second phase occurred in the late 80’s and early 90’s and is still not over. Unlike the first phase, where supersymmetry provided a different way of understanding well known mathematical results, the second phase has led to new results which were not known earlier to mathematicians. It has led to this brand of physics being labelled “experimental mathematics” by some mathematicians. It is experimental because the methodology employed is not standard (as yet!) in Mathematics and needs to be substantiated by “proofs” in the conventional sense. Of course, it is possible that the use of quantum field theory in mathematics might become legitimate in the future.
Some of the major advances in the second phase have been the quantum field theoretic understanding of Jones’ polynomial invariants for knots [5] and Donaldson’s invariants for four-manifolds [6]. The quantum field theory approach provided rich generalisations of Jones’ invariants right away. However, this was not the case with Donaldson invariants until recently. This changed towards the end of 1994 when Seiberg and Witten [7] succeeded in describing N=2 supersymmetric SU(2) gauge theory in its strong coupling limit. Using a version of electric-magnetic duality, they were able to construct a theory whose weak coupling limit was the strong coupling limit of SU(2) gauge theories. Putting it differently, supersymmetric QCD could be solved! As we shall see later, given an N=2 supersymmetric theory, we can construct a topological field theory from it. Donaldson theory is the topological field theory obtained from the weak coupling limit of N=2 supersymmetric Yang-Mills. Seiberg and Witten’s result implied that we could also study the theory in its strong coupling limit [8]. It turns out that this theory is a lot simpler, because one ends up dealing with abelian gauge fields rather than something non-abelian like SU(2). From the mathematical viewpoint, this implies that many of Donaldson’s proofs, which are very long and difficult to follow (even for an accomplished mathematician), become simple enough for graduate students to understand. In lecture one, we will discuss how to use supersymmetric quantum mechanics to obtain topological invariants, using the Euler characteristic as an example. In addition, we will briefly discuss how one can prove the Atiyah-Singer Index theorem using supersymmetric quantum mechanics. In lecture two, we will discuss cohomological topological field theories (TFTs), of which Donaldson theory is the prototype. (Lectures presented at the First Kumari L.A. Meera Memorial Meeting on “Frontiers in Physics”, Mysore, Feb. 8-14, 1996.)
We will not be able to discuss topological field theories of the Schwarz type, of which Chern-Simons theory is the prototype [5]. It is interesting to note that cohomological field theories naturally occur in even dimensions, while TFTs of the Schwarz type occur in odd dimensions. (This is not rigorously true, since one can always use dimensional reduction to obtain cohomological TFTs in odd dimensions.) A TFT is defined to be a quantum field theory which is independent of the metric and whose observables are also independent of the metric. In Chern-Simons theory, the observables are Wilson loops, which are gauge invariant and independent of the metric (since they are obtained from one-forms integrated over one-cycles).
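The Euler-characteristic computation advertised for lecture one reduces, at the semiclassical level, to a Morse-theory count over critical points: χ = Σ (−1)^(Morse index). The following toy sketch (my own example, not taken from the lectures) carries out that count for a standard Morse function on the torus.

```python
import numpy as np
from itertools import product

# Hedged toy example of the Morse-theory count underlying the SUSY-QM
# computation of the Euler characteristic:
#   chi = sum over critical points of (-1)^(Morse index).
# Morse function on the torus T^2: f(theta, phi) = cos(theta) + cos(phi).

def hessian_index(theta, phi):
    """Morse index = number of negative Hessian eigenvalues of f."""
    H = np.diag([-np.cos(theta), -np.cos(phi)])   # Hessian is diagonal here
    return int(np.sum(np.linalg.eigvalsh(H) < 0))

# grad f = (-sin theta, -sin phi) vanishes at theta, phi in {0, pi}:
# one maximum, two saddles, one minimum.
critical_points = list(product([0.0, np.pi], repeat=2))
chi = sum((-1) ** hessian_index(t, p) for t, p in critical_points)
print(chi)   # 0 = Euler characteristic of the torus
```

The same count for a height function on the sphere (one minimum, one maximum) gives χ = 2, matching the familiar topology.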
Can Global Internal and Spacetime Symmetries be Connected without Supersymmetry Can global internal and spacetime symmetries be connected without supersymmetry? To answer this question, we investigate Minkowski spacetimes with d space-like extra dimensions and point out under which general conditions external symmetries induce internal symmetries in the effective 4-dimensional theories. We further discuss in this context how internal degrees of freedom and spacetime symmetries can mix without supersymmetry in agreement with the Coleman-Mandula theorem. We present some specific examples which rely on a direct product structure of spacetime such that orthogonal extra dimensions can have symmetries which mix with global internal symmetries. This mechanism opens up new opportunities to understand global symmetries in particle physics. The nature of spacetime is still a great mystery in fundamental physics: it might be a truly fundamental quantity, or it could be an emergent concept. An appealing, minimalistic possibility is that spacetime and propagating degrees of freedom have a common origin, on an equal footing. In such a scenario, spacetime is thus an emergent quantity and there seems to be no reason for it to be restricted to a 4-dimensional Poincaré symmetry apart from low energy phenomenology. The only exceptions are additional time-like dimensions, which typically lead to inconsistencies when requiring causality [1, 2], while there is no consistency problem with additional space-like dimensions. Additional space-like dimensions have therefore been widely studied. If spacetime and particles consist of the same building blocks, then a fundamental connection of these low energy quantities should exist at high energies. Early attempts in this direction have led to the Coleman-Mandula no-go theorem [3].
The no-go theorem shows under general assumptions that a symmetry group accounting for 4-dimensional Minkowski spacetime and internal symmetries has to factor into the direct product of spacetime and internal symmetries. This implies that spacetime and particle symmetries cannot mix in relativistic interacting theories. One way to circumvent the no-go theorem is to study graded symmetry algebras which introduce fermionic symmetry generators and are known as supersymmetries [4]. The possibility to mix spacetime and internal symmetries in a relativistic theory is a strong theoretical argument for supersymmetry, and supersymmetric extensions of the Standard Model of particle physics are therefore widely studied. However, there is no experimental evidence for supersymmetry, see e.g. [5–7], and it is natural to ask: Are there alternative ways to circumvent the Coleman-Mandula theorem? The answer to this question is: Yes. We therefore relax the assumption that spacetime is described by the 4-dimensional Poincaré symmetry. We then investigate new alternative scenarios to mix global spacetime and internal symmetries. Next, we review the Coleman-Mandula theorem to understand how to circumvent the theorem with extra space dimensions. In section III, we discuss translational invariant extra dimensions and show how momentum conservation can be interpreted as a new internal symmetry. We then go further in section IV and consider extra dimensions described by rotational invariant spacetimes which lead to “hidden” spins. Finally, we investigate how rotational and internal symmetries can mix if the rotational symmetry group is compact in section V. Such scenarios can for example lead to an explanation of the three Standard Model families. We conclude and give an outlook for further investigations in section VI.
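The statement that momentum conservation along an extra dimension can act as an internal symmetry has a very concrete textbook realization: Fourier (Kaluza-Klein) modes on a compact extra dimension carry an integer charge under translations of that dimension. The toy numerical check below is my own illustration of this standard fact, not the paper's construction; R and a are arbitrary assumed values.

```python
import numpy as np

# Hedged toy illustration: momentum along a compact extra dimension acts
# as an internal U(1) charge. Translating the extra coordinate y -> y + a
# multiplies the Kaluza-Klein Fourier mode n by the phase exp(i*n*a/R).

R = 1.0                                 # compactification radius (assumed)
y = np.linspace(0, 2 * np.pi * R, 256, endpoint=False)
modes = {n: np.exp(1j * n * y / R) for n in (-1, 0, 2)}

a = 0.7                                 # translation along the extra dimension
for n, f in modes.items():
    shifted = np.exp(1j * n * (y + a) / R)    # f(y + a)
    phase = np.exp(1j * n * a / R)            # expected U(1) phase
    assert np.allclose(shifted, phase * f)    # mode n carries charge n
print("each KK mode n picks up the phase exp(i*n*a/R)")
```

From the 4-dimensional point of view the translation is invisible except through these phases, which is exactly the sense in which an external symmetry induces an internal one.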
Dark Energy Density in SUGRA models and degenerate vacua In N = 1 supergravity the tree-level scalar potential of the hidden sector may have a minimum with broken local supersymmetry (SUSY) as well as a supersymmetric Minkowski vacuum. These vacua can be degenerate, allowing for a consistent implementation of the multiple point principle. The first minimum where SUSY is broken can be identified with the physical phase in which we live. In the second supersymmetric phase, in flat Minkowski space, SUSY may be broken dynamically either in the observable or in the hidden sectors inducing a tiny vacuum energy density. We argue that the exact degeneracy of these phases may shed light on the smallness of the cosmological constant. Other possible phenomenological implications are also discussed. In particular, we point out that the presence of such degenerate vacua may lead to small values of the quartic Higgs coupling and its beta function at the Planck scale in the physical phase. It is expected that at ultra-high energies the Standard Model (SM) is embedded in an underlying theory that provides a framework for the unification of all interactions such as Grand Unified Theories (GUTs), supergravity (SUGRA), String Theory, etc. At low energies this underlying theory could lead to new physics phenomena beyond the SM. Moreover, the energy scale associated with the physics beyond the SM is supposed to be somewhat close to the mass of the Higgs boson to avoid a fine-tuning problem related to the need to stabilize the scale where electroweak (EW) symmetry is broken. Despite the successful discovery of the 125 GeV Higgs boson in 2012, no indication of any physics beyond the SM has been detected at the LHC so far. On the other hand, there are compelling reasons to believe that the SM is extremely fine-tuned.
Indeed, astrophysical and cosmological observations indicate that there is a tiny energy density spread all over the Universe (the cosmological constant), i.e. ρ_Λ ∼ 10^−123 M_Pl^4 ∼ 10^−55 M_Z^4 [1,2], which is responsible for its acceleration. At the same time much bigger contributions must come from the EW symmetry breaking (∼ 10^−67 M_Pl^4) and QCD condensates (∼ 10^−79 M_Pl^4). Because of the enormous cancellation between the contributions of different condensates to ρ_Λ, which is required to keep ρ_Λ around its measured value, the smallness of the cosmological constant can be considered as a fine-tuning problem. Here, instead of trying to alleviate fine-tuning of the SM we impose the exact degeneracy of at least two (or even more) vacua. Their presence was predicted by the so-called Multiple Point Principle (MPP) [3]. According to the MPP, Nature chooses values of coupling constants such that many phases of the underlying theory should coexist. This corresponds to a special (multiple) point on the phase diagram where these phases meet. At the multiple point these different phases have the same vacuum energy density. The MPP applied to the SM implies that the Higgs effective potential, which can be written as
V_eff(H) = m^2(φ) H†H + λ(φ) (H†H)^2, (1)
where H is a Higgs doublet and φ is the norm of the Higgs field, i.e. φ^2 = H†H, has two degenerate minima. These minima are taken to be at the EW and Planck scales [4]. The corresponding vacua can have the same energy density only if
λ(M_Pl) ≃ 0, β_λ(M_Pl) ≃ 0, (2)
where β_λ = dλ(φ)/d log φ is the beta-function of λ(φ). It was shown that the MPP conditions (2) can be satisfied when M_t = 173 ± 5 GeV and M_H = 135 ± 9 GeV [4]. The application of the MPP to the two Higgs doublet extension of the SM was also considered [5–7].
The measurement of the Higgs boson mass allows us to determine quite precisely the parameters of the Higgs potential (1). Furthermore, using the extrapolation of the SM parameters up to M_Pl with full 3-loop precision, it was found [8] that
λ(M_Pl) = −0.0143 − 0.0066 (M_t/GeV − 173.34) + 0.0018 (α_3(M_Z) − 0.1184)/0.0007 + 0.0029 (M_H/GeV − 125.15). (3)
The computed value of β_λ(M_Pl) is also rather small, so that the MPP conditions (2) are basically fulfilled. The successful MPP predictions for the Higgs and top quark masses [4] suggest that we may use this idea to explain the tiny value of the cosmological constant as well. In principle, the smallness of the cosmological constant could be related to an almost exact symmetry. Nevertheless, none of the generalizations of the SM provides any satisfactory explanation for the smallness of this dark energy density. An exact global supersymmetry (SUSY) guarantees that the vacuum energy density vanishes in all global minima of the scalar potential. However the non-observation of superpartners of quarks and leptons implies that SUSY is broken. The breakdown of SUSY induces a huge and positive contribution to the dark energy density which is many orders of magnitude larger than M_Z^4. Here the MPP assumption is adapted to (N = 1) SUGRA models, in order to provide an explanation for the tiny deviation of the measured dark energy density from zero.
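For concreteness, the fitted expression (3) is easy to evaluate numerically. The sketch below simply codes the quoted fit (coefficients as given in the text; the function name is my own) and confirms that the central values reproduce λ(M_Pl) ≈ −0.0143.

```python
# Numerical evaluation of the fitted expression (3) for lambda(M_Pl),
# using the coefficients quoted in the text. Central values of the inputs
# reproduce lambda(M_Pl) = -0.0143.

def lambda_planck(mt=173.34, alpha3=0.1184, mh=125.15):
    """lambda at the Planck scale as a function of M_t, alpha_3(M_Z), M_H (GeV)."""
    return (-0.0143
            - 0.0066 * (mt - 173.34)
            + 0.0018 * (alpha3 - 0.1184) / 0.0007
            + 0.0029 * (mh - 125.15))

print(round(lambda_planck(), 4))      # -0.0143 at central values
print(lambda_planck(mt=171.0) > 0)    # a lighter top quark pushes lambda up
```

The strong sensitivity to M_t visible here is why the MPP conditions (2) translate into sharp predictions for the top and Higgs masses.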
A Poincaré Covariant Noncommutative Spacetime? We interpret, in the realm of relativistic quantum field theory, the tangential operator given by Coleman, Mandula [CM67] (see also [MPS16]) as an appropriate coordinate operator. The investigation shows that the operator generates a Snyder-like noncommutative spacetime with a minimal length that is given by the mass. By using this operator to define a noncommutative spacetime, we obtain a Poincaré invariant noncommutative spacetime and in addition solve the soccer-ball problem. Moreover, from recent progress in deformation theory we extract the idea how to obtain, in a physically and mathematically well-defined manner, an emerging noncommutative spacetime. This is done by a strict deformation quantization known as Rieffel deformation (or warped convolutions). The result is a noncommutative spacetime combining a Snyder and a Moyal-Weyl type of noncommutativity that in addition transforms covariantly under the whole Poincaré group.
Thermodynamics and the structure of quantum theory Despite its enormous empirical success, the formalism of quantum theory still raises fundamental questions: why is nature described in terms of complex Hilbert spaces, and what modifications of it could we reasonably expect to find in some regimes of physics? Here we address these questions by studying how compatibility with thermodynamics constrains the structure of quantum theory. We employ two postulates that any probabilistic theory with reasonable thermodynamic behavior should arguably satisfy. In the framework of generalized probabilistic theories, we show that these postulates already imply important aspects of quantum theory, like self-duality and analogues of projective measurements, subspaces and eigenvalues. However, they may still admit a class of theories beyond quantum mechanics. Using a thought experiment by von Neumann, we show that these theories admit a consistent thermodynamic notion of entropy, and prove that the second law holds for projective measurements and mixing procedures. Furthermore, we study additional entropy-like quantities based on measurement probabilities and convex decomposition probabilities, and uncover a relation between one of these quantities and Sorkin’s notion of higher-order interference.
Spacetime geometric structures and the Search for a Quantum Theory of Gravity One of the biggest challenges to theoretical physics of our time is to find a background-independent quantum theory of gravity. Today one encounters a profusion of different attempts at quantization, but no fully accepted - or acceptable - theory of quantum gravity. Any such approach requires a response to a question that lies at the heart of this problem: “How shall we resolve the tension between the background dependence of all hitherto-successful quantum theories, both non-relativistic quantum mechanics and special-relativistic quantum field theory, and the background independence of classical general relativity?” (see [28]) The need for a background-independent quantization procedure forms the starting point of my approach. In this paper I shall present a gauge-natural formulation of general relativity, and provide some insights into the structure of the space of geometries, which plays an important role in the construction of a non-perturbative quantum gravity using a path integral approach, as well as in string theory (see e.g., [2, 18, 31]).
A Superstring Field Theory for Supergravity A covariant closed superstring field theory, equivalent to classical ten-dimensional Type II supergravity, is presented. The defining conformal field theory is the ambitwistor string worldsheet theory of Mason and Skinner. This theory is known to reproduce the scattering amplitudes of Cachazo, He and Yuan, in which the scattering equations play an important role, and the string field theory naturally incorporates these results. We investigate the operator formalism description of the ambitwistor string and propose an action for the string field theory of the bosonic and supersymmetric theories. The correct linearised gauge symmetries and spacetime actions are explicitly reproduced and evidence is given that the action is correct to all orders. The focus is on the Neveu-Schwarz sector and the explicit description of tree-level perturbation theory about flat spacetime. Application of the string field theory to general supergravity backgrounds and the inclusion of the Ramond sector are briefly discussed.
Modeling Time’s Arrow Quantum gravity, the initial low entropy state of the Universe, and the problem of time are interlocking puzzles. In this article, we address the origin of the arrow of time from a cosmological perspective motivated by a novel approach to quantum gravitation. Our proposal is based on a quantum counterpart of the equivalence principle, a general covariance of the dynamical phase space. We discuss how the nonlinear dynamics of such a system provides a natural description for cosmological evolution in the early Universe. We also underscore connections between the proposed non-perturbative quantum gravity model and fundamental questions in non-equilibrium statistical physics. The second law of thermodynamics, perhaps the deepest truth in all of science, tells us that the entropy is a non-decreasing function of time. The arrow of time implies that the Universe initially inhabited a low entropy state and the subsequent cosmological evolution took the Universe away from this state. A theory of quantum gravity must explain the singular nature of the initial conditions for the Universe. Such a theory should then, in turn, shed light on time, its microscopic and directional nature, its arrow. The quest for a theory of quantum gravity is fundamentally an attempt to reconcile two disparate notions of time. On the one hand, Einstein’s theory of general relativity teaches us that time is ultimately an illusion. On the other hand, quantum theory tells us that time evolution is an essential part of Nature. The failure to resolve the conflict between these competing notions is at the heart of our inability to properly describe the earliest moment of our Universe, the Big Bang. The puzzle of the origin of the Universe intimately connects to a second fundamental issue: should the initial conditions be treated separately from or in conjunction with the basic framework of the description of the dynamics? 
The discord between the general relativistic and quantum theoretic points of view has very deep roots. In this essay we will trace these roots to their origins. Then, drawing inspiration from the profound lessons learned from relativity and quantum theory, we propose a radical yet conservative solution to the problem of time. In quantum mechanics, time manifests as the fundamental evolution parameter of the underlying unitary group. We have a state |ψ⟩, and we evolve it as e^{-iHt/ℏ}|ψ⟩. The Hamiltonian operator of a given system generates translations of the initial state in time. Unlike other conjugate quantities in the theory such as momentum and position, the relation between time and energy, which is the observable associated with the Hamiltonian, is distinguished. Time is not an observable in quantum theory in the sense that generally there is no associated “clock” operator. In the Schrödinger equation time simply enters as a parameter. This conception of time as a Newtonian construct that is global or absolute in a post-Newtonian theory persists even when we promote quantum mechanics to relativistic quantum field theory. In contrast, time in general relativity is local as well as dynamical. Suppose we promote general relativity to a quantum theory of gravity in a naïve fashion. In the path integral, the metric of spacetime is one more dynamical variable. It fluctuates quantum mechanically. So notions such as whether two events are spacelike separated become increasingly fuzzy as the fluctuations amplify. Indeed, for almost all pairs of points on a spacetime manifold there exist Lorentzian metrics such that their separation is not spacelike. Clearly the notion of time, even locally, becomes problematic in quantum gravitational regimes. Microcausality requires the commutation relation [O(x), O(y)] = 0 when x and y are spacelike separated, but this condition becomes ambiguous once the metric is allowed to fluctuate.
The failure of microcausality means that the intuitions and techniques of quantum field theory must be dramatically revised in any putative theory of quantum gravity. Crafting a theory of quantum gravity that resolves the problem of time is a monumental undertaking. From the previous discussion, we see that the standard conceptions of time in quantum theory and in classical general relativity are in extreme tension: time in the quantum theory is an absolute evolution parameter along the real line whereas in general relativity there can be no such one parameter evolution. A global timelike Killing direction may not even exist. If the vacuum energy density is the cosmological constant Λ, we may inhabit such a spacetime, de Sitter space. In any attempt to reconcile the identity of time in gravitation and canonical quantum theory, one is also immediately struck by the remarkable difference in the most commonly used formulations of the two theories. Whereas general relativity is articulated in a geometric language, quantum mechanics is most commonly thought of in algebraic terms within an operatorial, complex Hilbert space formalism. A principal obstacle to overcome rests with the rôle of time being intrinsically tied to the underlying structure of quantum theory, a foundation which—as recapitulated below—is rigidly fixed. Yet when quantum theory is examined in its less familiar geometric form, it mimics general relativity in essential aspects. In fact, these parallels provide a natural way to graft gravity into the theory at the root quantum level. Particularly, the geometric formulation illuminates the intrinsically statistical nature and rigidity of time in quantum theory and points to a very specific way to make time more elastic as is the case in general relativity. Thus, by loosening the standard quantum framework minutely, we can surprisingly deduce profound implications for quantum gravity, such as a resolution of the problems of time and of its arrow. 
In subsequent sections, we will lay out a framework for general quantum relativity in which the geometry of the quantum is identified with quantum gravity and time is given a dynamical statistical interpretation. Some bonuses stem from this point of view. We achieve a new conceptual understanding of the origin of the Universe, the unification of initial conditions, and a new dynamical framework in which to explore these and other issues. Here, to reach a broader audience, conceptual rendition takes precedence over mathematical formalism, the details of which are available in the literature cited below. In Section 2, we examine the problem of time’s arrow. In Section 3, we briefly recall the geometric formulation of standard quantum mechanics, which naturally leads to a generalized background independent quantum theory of gravity and matter, the latter of which is embodied by M(atrix) theory. In Section 4, we apply this prescription to a theory of quantum gravity in its cosmological setting: the initial low entropy state of the Universe and cosmological evolution away from this state. Specifically, we will discuss key properties of the new space of quantum states - a nonlinear Grassmannian - features which notably embody an initial cosmological state with zero entropy and provide a description of cosmological evolution when viewed as a far-from-equilibrium dissipative system. We will also elaborate on a more general connection between quantum gravity, the concept of holography and some fundamental results in non-equilibrium statistical physics. Finally, Section 5 offers some concluding remarks.
All Roads Lead to String Theory: Two Roads from N=8 Supergravity
See also: SUSY must exist because the number 3/2 can't be missing from nature
Back in the early 1980s, people became excited about the N=8 supergravity theory (or "SUGRA" for short), a four-dimensional field theory that combines Einstein's general relativity with as much supersymmetry as you can get. Its existence looked somewhat non-trivial.
It was and it still is a beautiful structure. The high degree of supersymmetry made it more likely for divergences to cancel, a reason why some people use the term "the simplest quantum field theory" for this most supersymmetric field theory in four dimensions.
Soon it became clear that there were two major remaining problems with this supergravity, namely:
- many of the divergences cancel, but not all of them
- the structure was too constrained; N=8 supersymmetry surely doesn't allow you to have chiral fermions and similar things known from the Standard Model
See the article "Finiteness of supergravity theories" for more details about the quantum loops at which various gravitational divergences occur. The oldest guess - that the divergences first appear at the three-loop level - was recently proven too pessimistic. As the hardcore technical papers by Bern, Dixon, Roiban, and others show, the three-loop divergences cancel, too.
Now it is likely that the first divergences occur at the nine-loop level - see Green, Russo, Vanhove (2006) - or, and this is what I find most likely, the theory is finite to all orders of perturbation theory. See also Lance Dixon's SUGRA puzzle.
I find the "perturbatively finite" answer to be the most likely one. The most convincing general reason of the finiteness is that the N=8 gravity comes from a closed superstring much like the N=4 gauge theory arises from an open (or heterotic) superstring. The closed superstring includes left-moving and right-moving oscillations. They and their amplitudes are therefore related to second powers of the open (or heterotic) string amplitudes. Because the N=4 gauge theory is perturbatively finite, the N=8 supergravity may be perturbatively finite, too.
This relationship may be clarified in terms of the Kawai-Lewellen-Tye (KLT) relations. However, you should realize that the argument only works perturbatively, because it is only in the perturbative expansion that you can actually see the strings and argue that closed strings are open strings squared. These KLT relations tell you nothing about the non-perturbative behavior. As we will see later in the text, N=8 SUGRA is non-perturbatively inconsistent. Besides the KLT relations, there are many other stringy methods (and corresponding papers) that are very helpful for determining the character of divergences in the N=8 supergravity, including a careful counting of the powers of the stringy parameter alpha' in various counterterms. See e.g. Bohr Jr & Vanhove (2008). One of the direct conclusions of the (surprising but observed) three-loop finiteness is that the cancellations can't be fully explained by supersymmetry. The Bohr-Vanhove paper and others claim that the "higher" cancellations are partly due to general covariance and stringy constraints, although the complete picture is not yet known. At any rate, it would be bizarre to use the new cancellations (surprising from a pure SUSY-QFT viewpoint) as an argument against string theory (which is probably the real reason why they take place). Nevertheless, problems 1 and 2 listed above haven't disappeared. The theory (pure supergravity) still breaks down at the Planck scale - the perturbative series diverge, as explained below, and the theory cannot describe things like black hole production properly - and its phenomenology is not viable. I would like to explain that any intelligent enough person who accepts that the N=8 supergravity is on the right track inevitably ends up with string theory as her candidate unifying theory.
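For orientation, here is the simplest instance of the KLT relations mentioned above - the four-point relation in the field-theory limit. This is my schematic addition; signs and normalizations vary between conventions in the literature:

```latex
% Four-point KLT relation (field-theory limit; conventions vary):
%   "gravity amplitude = kinematic factor x (gauge amplitude)^2"
M_4^{\text{grav}}(1,2,3,4) \;=\; -\,i\, s_{12}\, A_4(1,2,3,4)\, \tilde{A}_4(1,2,4,3),
% where s_{12} = (k_1 + k_2)^2 and A_4, \tilde{A}_4 are color-ordered
% gauge-theory partial amplitudes.
```

The structure makes the "closed string = (open string) squared" slogan concrete at the level of amplitudes.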
All roads lead to string theory
Such a person may decide to look for a fully consistent theory inspired by the N=8 supergravity, or she may search for a phenomenologically acceptable generalization of the N=8 supergravity. In both cases, she is led to string theory. In order to see why, it is necessary to look at the two bugs of supergravity somewhat more accurately.
Non-perturbative completion
At the very beginning of this section, I (and a helpful anonymous commenter) feel it is important to emphasize that if a field theory is perturbatively finite - i.e. finite at every fixed order of perturbation theory - it doesn't mean that you can actually resum the series and obtain a fully finite result. Quite on the contrary: in quantum field theory, and N=8 SUGRA is hardly an exception, the Taylor sums are almost always divergent (and in perturbative string theory, they are divergent, too). They are "asymptotic series". This divergence means that the perturbative result can't be the whole story, but it doesn't mean that there can't exist a complete, finite answer. On the contrary: a complete finite result does exist and the perturbative expansion is its Taylor approximation. The minimum error you encounter by summing an "optimal" number of the perturbative terms - the most accurate result the divergent series can give you for a very small coupling constant - is of the same order as the leading non-perturbative correction(s). That's why we say that the perturbative and non-perturbative parts cannot really be separated in field theory (and string theory), except for cases when the whole (or whole except for a few terms) perturbative contribution vanishes. Unlike field theory, string theory tells us what the full result is. See also a text about non-perturbative well-definedness. Now that we have spent some time clarifying what the adjective "perturbatively finite" means, let us ask: What are the "finite amplitudes" of the supergravity theory that we often talk about?
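As an aside, the "optimal truncation" behavior of asymptotic series described above is easy to check numerically. Below is a minimal sketch - my own illustration, not part of the original argument - using the textbook Euler series Σ (−1)^n n! g^n, which is the asymptotic expansion of a perfectly finite integral:

```python
import math

def euler_integral(g, t_max=50.0, steps=500_000):
    """I(g) = integral_0^inf e^(-t) / (1 + g*t) dt, via the trapezoid rule.
    The divergent Euler series sum_n (-1)^n n! g^n is its asymptotic expansion."""
    h = t_max / steps
    total = 0.5 * (1.0 + math.exp(-t_max) / (1.0 + g * t_max))  # endpoint terms
    for i in range(1, steps):
        t = i * h
        total += math.exp(-t) / (1.0 + g * t)
    return total * h

def partial_sum(g, n_terms):
    """Sum of the first n_terms of the divergent asymptotic series."""
    s, term = 0.0, 1.0
    for n in range(n_terms):
        s += term
        term *= -(n + 1) * g  # next term: (-1)^(n+1) (n+1)! g^(n+1)
    return s

g = 0.1
exact = euler_integral(g)
errors = [abs(partial_sum(g, n) - exact) for n in range(1, 31)]
best_n = 1 + min(range(len(errors)), key=errors.__getitem__)
# The error shrinks until roughly n ~ 1/g = 10 terms, then the series blows up;
# the best achievable accuracy is set by the non-perturbative scale exp(-1/g).
print("optimal number of terms:", best_n)
print("minimum error:", errors[best_n - 1])
print("error at 30 terms:", errors[-1])
```

With g = 0.1, the optimal truncation keeps about ten terms and the minimum error is within a modest prefactor of exp(−1/g), after which adding terms only makes things worse - exactly why the perturbative and non-perturbative parts cannot be cleanly separated.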
Well, they are the scattering amplitudes of the massless gravitons, graviphotons, scalars, gravitinos, and other fermions in the multiplet with 128 bosonic and 128 fermionic states. At some level, you might say that these amplitudes encode all of physics. But it is not hard to see that physicists should look at other things, too. A theory with gravitons should include massive objects - because gravity's most favorite role is to couple to massive objects. In a similar fashion, supergravity contains 28 U(1) fields, i.e., 28 copies of a "photon". Whenever you have photons, you should also have charged objects, because photons love to interact with them. In fact, love is not the only justification of this statement. ;-) A theory that has gravitational and electromagnetic fields - such as N=8 SUGRA - also admits classical solutions that look like massive and charged black holes. It seems almost guaranteed, both theoretically and phenomenologically, that some of these massive and charged solutions must correspond to actual objects in the theory, at least if the dimensionless charges are much greater than one and the classical limit is legitimate. Let me assume that you believe me. If there are 28 copies of the electromagnetic field, there should exist all kinds of "electrons" that carry charges under those 28 U(1) groups. Because of the exceptional symmetry of the supergravity, electric and magnetic charges should be treated on equal footing. If electric charges exist, magnetic monopoles associated with the U(1) groups should exist, too. To summarize: the space of allowed charges should be R^{56}. Actually, this is just the classical result, describing charges of allowed black hole solutions and assuming that the charges are large enough for quantum mechanics to be irrelevant. Quantum mechanically, you can easily see that charges cannot be continuous.
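A schematic way to see this discreteness - the standard Dirac-string argument, elaborated in prose in the next paragraph - in units with ℏ = c = 1 (my notation):

```latex
% Single-valuedness of a charge-e wave function transported around the
% Dirac string of a monopole with magnetic charge g requires
e\, g = 2\pi n, \qquad n \in \mathbb{Z},
% so the elementary magnetic charge is inversely proportional to the
% elementary electric charge. With 28 U(1)'s, the 28 electric plus
% 28 magnetic charges are then forced onto a lattice inside R^{56}.
```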
A magnetic monopole requires a Dirac string for the gauge field(s) to be well-defined, and the wave functions of electrically charged objects must be single-valued when these particles rotate around the Dirac string. This implies that the charges must be quantized and the unit of the magnetic charge is inversely proportional to the unit of the electric charge: that's known as the Dirac quantization rule. At any rate, the charges cannot live in the continuous 56-dimensional space. They must live in a 56-dimensional lattice instead. This conclusion follows by "pure thought" from gedanken experiments involving the Dirac strings and other objects. There is no way to avoid the conclusion. Quantum mechanics often forces quantities to take discrete values: that's why it's called quantum mechanics. Get used to it. ;-) Now, when you study what the 56-dimensional lattice (inside the 56-dimensional continuum we mentioned above) can look like, you find out that there are many possible lattices that do the job. Some elementary electric charges can be higher (and their associated magnetic charges must be lower) and the charges can mix with each other (the lattice can be tilted). If you study the space of possible solutions (56-dimensional lattices where everything works, including some Chern-Simons terms), you find out that the space is
E_{7(7)}(Z) \ E_{7(7)} / SU(8), the same 133 - 63 = 70-dimensional moduli space we know from string theory. Different lattices - different points on the moduli space - give non-equivalent theories because they give you different spectra of the allowed charges. Only if you map the lattice onto itself do you obtain the same background: that's why we have to take the quotient by the discrete group on the left side. The other groups already occur in classical supergravity. Note that this non-equivalence of the vacua on the moduli space follows from quantum mechanics, because quantum mechanics forced the charges to be quantized. This has also reduced the original E_{7(7)} continuous symmetry of the classical supergravity theory to E_{7(7)}(Z), its important discrete subgroup (that we know as the U-duality group). You should see that any consistent quantum theory - where wave functions are single-valued - only allows the discrete, E_{7(7)}(Z) symmetry. The choice of the lattice is then physical. In the classical theory, charges are continuous, the symmetry group is also continuous, and all choices of the "continuous lattice", R^{56}, are equivalent. The absolute values of scalars are unphysical; only the relative ones matter. With quantum mechanics, this can't be the whole story. Various things have to become quantized, new states have to appear, the scalars have to become physical parameters of the "environment". Supergravity without the new stringy effects is in the swampland, as argued by Green, Ooguri, Schwarz (2007) and others. See also Non-decoupling of N=8 d=8 SUGRA. For the scattering of gravitons, graviphotons, and their supersymmetric friends, this quantization doesn't seem to matter perturbatively. The perturbative expansion effectively assumes that the charged particles cannot run in the loops at all. All the contributions with charged stuff in the loops are erased.
However, if you want the exact results, even for the scattering amplitudes of massless gravitons, it is important to realize that charged objects can run in the loops, too. Because these charged objects are massive, their pair production is exponentially suppressed (in field theory, such a pair production of charged black holes would correspond to an instanton). Because of similar subtle technical reasons, the fact that the perturbative SUGRA completely misinterpreted the spectrum of allowed electromagnetic charges didn't spoil the perturbative result for the scattering amplitudes. But non-perturbatively, you need to know what objects can run in the loops. Pair production of charged objects in the loops must be allowed. It adds exponentially suppressed terms and the quantization rules (the lattice) start to matter. Non-perturbatively, the full stringy answer is the only consistent one and there is a 70-dimensional space of possible backgrounds (each of them giving you a superselection sector of the stringy/M-theoretical Hilbert space). Whether the first inconsistencies in ordinary N=8 supergravity occur at the three-loop level, nine-loop level, or non-perturbative level is a technicality. What's qualitatively important for the "big picture" is that the N=8 supergravity cannot be the whole story. And it is, in fact, possible to deduce the aspects of stringy physics that have to be a part of the picture and that the old understanding of the N=8 supergravity neglected.
Extra dimensions
You find one more related surprise - one that has been known since the birth of supergravity theories but whose importance was not fully appreciated. The N=8 supergravity is the dimensional reduction of the 11-dimensional supergravity. If you take the "most beautiful" gravitational effective field theory in four dimensions, you are pretty much guaranteed to discover its 11-dimensional origin - another hint of string/M-theory.
The 11-dimensional interpretation is clearly natural and more symmetric, but the relationship is not just a matter of naturalness: it is much more inevitable. In classical field theory, dimensional reduction means that you assume that fields don't depend on certain dimensions at all and some corresponding components of vectorial and tensorial fields are set to zero. Consequently, the dimensions cease to exist. You kill them by hand. That's how Kaluza originally reduced the 5-dimensional Einstein's gravity to a four-dimensional electrogravitational theory. In the full quantum theory, dimensional reduction is more subtle. As Klein realized, you must actually compactify the unwanted dimensions on circles. If the circles are very short, fields are not allowed to depend on the circular coordinate much, because such variable modes would have a huge momentum (and energy). However, in the full theory, these variable (Kaluza-Klein) modes must still exist. In the case of N=8 supergravity, it is the dimensional reduction of 11-dimensional supergravity or of type IIA or type IIB supergravity in 10 dimensions. At the quantum level, when you require all quantization rules to hold, the theory arises from the compactification of M-theory on a seven-torus or of type IIA or type IIB string theory on a six-torus. The extra dimensions are not an additional or arbitrary assumption. I can always use Fourier series to reconstruct the additional dimensions from the wave functions of the charged fields that already existed before we learned about the extra dimensions (charged black holes etc.). The only new thing is that such an approach turns out to be useful and it leads to new insights and a more concise theory. Note that there are only six "elementary" supersymmetric string backgrounds in the highest (10 or 11) dimensions - M-theory; type IIA; type IIB; type I; heterotic E; heterotic O string theories.
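The Kaluza-Klein logic just described can be summarized in one schematic formula (my notation):

```latex
% A field on a circle of radius R decomposes into Fourier modes,
\phi(x^\mu, y) \;=\; \sum_{n=-\infty}^{\infty} \phi_n(x^\mu)\, e^{i n y / R},
\qquad m_n \sim \frac{|n|}{R},
% so for small R the n \neq 0 modes are very heavy. Classical reduction
% drops them by hand; the full quantum theory must keep them, and their
% quantized momenta n/R are exactly the discrete charges discussed above.
```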
Because the N=8 supergravity is directly related to three of them (all the backgrounds with 32 supercharges, the maximum number), it is damn important in the structure of string/M-theory. The maximally supersymmetric vacua of string theory - those related to the N=8 supergravity - are the best understood ones, and a statement that you should avoid string theory exactly in this "inherently stringy" and special (and fully mapped!) segment of the landscape would be entirely ridiculous. The "mini-landscape" of the maximally supersymmetric backgrounds has been fully mapped, its (not only) qualitative physics has been entirely understood (since 1995, it's the string theorist's alphabet that he can recite in both directions), and we know for sure that all the stringy conclusions about any similar theory with 32 supercharges are inevitable. Incidentally, the only other allowed set of vacua with 32 supercharges - besides M-theory on tori - involves type IIB string theory on AdS5 x S5 (or its orbifold), the best known example of the AdS/CFT correspondence. (In the N=4 gauge-theoretical "boundary CFT" description, 1/2 of the supercharges emerge as superconformal generators.) The more supersymmetry we have, the more we know about the internal structure of the theory. I have explained that the consistent completion of supergravity has the same discrete charges and discrete symmetries as string/M-theory. But you can actually derive other stringy features of physics in the same way. For example, you can see that there are strings in the game and they become damn important in a certain limit. You don't have to add the strings by hand: you can derive their existence and their importance by general considerations involving any theory similar to the N=8 SUGRA.
For example, the 70-dimensional space (U-duality \ E_{7(7)} / SU(8)) of possible backgrounds (lattices) that we described above - a space that already existed in classical SUGRA as the configuration space of the scalar fields - has various limits. In one limit, 7 charges become nearly continuous. This is the "decompactification" limit where 7 dimensions become very large and therefore useful: the charges are interpreted as momenta along these 7 directions. The local physics of the theory can be seen to be 11-dimensional supergravity. You may even find a lot of non-stringy evidence that these 11 dimensions must locally respect the Lorentz symmetry - they are parts of the same geometry; this proposition is clearly a fact once you adopt string theory. This theory has classical M2-brane and M5-brane solutions. Quantization arguments similar to those above imply that the charges of these M2-branes and M5-branes are discrete, too. Of course, the quanta are completely fixed and known in string/M-theory. Take the minimum-charge M2-brane and wrap it on a circle: you produce a type II string. If this (wrapped) circle shrinks to a small size - and such a point is guaranteed to exist on the moduli space of (any) "quantum-completed supergravity" - this string becomes the lightest degree of freedom. It can be shown to have the same interactions and importance as it is known to have in string theory: it has to be the same string, after all. You can prove that it has certain internal degrees of freedom and they can be quantized to obtain the usual stringy tower of states (spectrum). Adding interacting strings that vibrate in the graviton "dance" can be shown equivalent to a modification of the background geometry, proving that the geometry is really made out of these strings in this limit, and so on.
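As a consistency check of the wrapped-membrane picture above, one can track the tensions; these are the standard type IIA/M-theory relations, though conventions may differ by factors of 2π:

```latex
% An M2-brane of tension T_{M2} wrapped on the circle of radius R_{11}
% looks like a string of tension
T_{F1} \;=\; 2\pi R_{11}\, T_{M2},
% and with T_{M2} = \frac{1}{(2\pi)^2 \ell_p^3} and T_{F1} = \frac{1}{2\pi \alpha'}
% this gives the familiar type IIA identification
\alpha' \;=\; \frac{\ell_p^3}{R_{11}},
% so the wrapped M2-brane indeed becomes the light fundamental string
% precisely when the circle R_{11} shrinks.
```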
If you think about physics carefully and if you have the required background in physics up to quantum field theory (that's enough!), you should be able to think in the "stringy way" in a few minutes. There are no "really new" arguments that wouldn't have been successfully tested in older physics disciplines (they are only put together in new ways to learn new fascinating things) - which is why Barton Zwiebach could write an undergrad textbook of string theory and present it as a set of exercises about classical and quantum mechanics and electromagnetism. Afterwards, you will be using pretty much the same arguments as string theorists to deduce and calculate other detailed features of physics. In some sense, you are still studying the N=8 supergravity. But you know much more about it. You need string theory to understand its non-perturbative physics properly. Again, perturbative string theory is only sufficient to crack a larger but still limited realm of questions. But we know a lot about non-perturbative string theory, too: most of the string research since the mid-1990s was dedicated to non-perturbative string theory. Dualities, M-theory, detailed black hole thermodynamics, and partly also the AdS/CFT holography are among the most important concepts that have emerged in this research. It may be fair to say that this non-perturbative knowledge about string theory extends the previous knowledge about perturbative string theory in an analogous way as perturbative string theory itself extended the insights that can be obtained from ordinary supergravity (an effective quantum field theory). In this process, new things inevitably emerge: new physical objects such as fields, particles, and branes, new quantization rules, new interactions, new symmetries and equivalences, new phase transitions, new limits, and new effective degrees of freedom that govern these limits.
We shouldn't really be using the term "string theory" for the full non-perturbative structure we have in mind today (strings are no longer "the fundamental objects") - except that the term "string theory" is convenient and it stuck. ;-) We are still looking at the "same" theory as the N=8 SUGRA, but many mysteries have evaporated and our understanding is much more detailed. It is simply not true that anything goes. 32 supercharges and physical consistency hugely constrain the laws of physics, and string/M-theory on tori plus type IIB on AdS5 times a sphere (or orbifold) give the only solutions to these constraints.
Summary: consistency completes SUGRA to string/M-theory
To summarize, if you accept that the N=8 supergravity is on the right track, gedanken experiments and consistency inevitably lead you to the full string/M-theory as the only completion of the supergravity theory that is non-perturbatively consistent. All people who studied supergravity in the early 1980s realize this fact today - they have understood that string theory is the only broader justification for their approximate calculations - and the supergravity community has become a subset of the string theory community (or stringy-supergravity federal community, if you want me to be excessively polite), especially once the importance of M-theory was appreciated, even though some people are still better at computing low-energy results and other people are better at other things. Everyone who wants to "unlearn" these things is completely deluded. String theory has become an unavoidable component of a proper analysis of N=8 supergravity and many similar theories. It is known to be fully consistent, while supergravity with continuous non-compact symmetries has been shown (above and elsewhere) to be non-perturbatively inconsistent.
Finally, I want to address a particular related myth that is being propagated in some not-really-serious but mildly influential corners - namely the statement that string theory is only known to be consistent perturbatively, much like supergravity. This is clearly false. When we talked about the lattices of charges and the (missing) loops with charged (and thus massive) stuff in supergravity, that was an explanation of why supergravity per se is non-perturbatively inconsistent. In string theory, these particular inconsistencies are absent, even non-perturbatively, and one can actually show that other inconsistencies are also absent. String theory properly describes even the most characteristic features of quantum gravity, such as the black hole microstates and their evaporation: that's why string theorists have been able to compute the entropy of many black holes so much more accurately than general relativity. This fine, quantum analysis of black holes in string theory surely requires one to use non-perturbative tools. On the other hand, D-branes are an example of non-perturbative (very heavy, solitonic) objects that are nevertheless studied by (almost) the very same methods as perturbative string theory itself (with different boundary conditions etc.). A description of physics involving D-branes is "trans-perturbative", but it is effectively enough to get to any coupling, including infinite coupling. For example, black holes in strongly coupled stringy backgrounds can be mapped to fully understood configurations of D-branes and strings. Also, perturbative string theory has a different expansion parameter than supergravity. String theory has a dimensionless coupling constant related to the dilaton (a scalar field), while supergravity has a dimensionful Newton's constant that must be multiplied by a power of energy to become dimensionless.
In the case of supergravity, the regime where the parameter goes to infinity is a high-energy regime (black hole production) that is not understood and looks inconsistent in (perturbative) SUGRA: the divergences of the Taylor expansion spoil the party and the perturbative definition gives you no rule how to resum the divergent series. On the other hand, in string theory, when the coupling is sent to infinity, dualities allow us to use another, equivalent, and well-behaved description. In string theory, it's still impossible to resum the (still divergent) perturbative expansion but you can discover other objects that make the theory work beyond the perturbative series and they actually allow you to calculate the infinitely coupled limit exactly. So in string theory, physics is clearly well-behaved for a small coupling, including the whole perturbative expansion and the non-perturbative consistency checks that failed in SUGRA. It also satisfies the very same consistency checks when the coupling goes to the other extreme, namely to infinity: SUGRA fails completely when you push it to the opposite limit. If you still think that this is insufficient to believe that string theory is also well-behaved everywhere in the middle, let me finally say that the maximally supersymmetric backgrounds - M-theory and its compactifications (from 0-torus to 5-torus) - can be fully and non-perturbatively defined in terms of Matrix theory (valid for any size of the torus and any energy and involving no expansions needed for the definition) which eliminates the remaining doubts.

Phenomenological road

In the first portion of the text, I have described the first, "consistency" road that inevitably leads everyone interested in supergravity to string theory. As we hinted at the beginning, the other road is based on phenomenology - the requirement that the theory agrees with the real world in detail and incorporates the Standard Model or its generalization.
The same N=8 supersymmetry that guaranteed those nice cancellations in the supergravity theory is also a bad thing. It is far too constraining. For every boson in your theory, it predicts a fermion - in fact, a lot of other fermions (and bosons!), too. All of their masses and interactions are forced to be "the same". On the other hand, the Standard Model (reality) describes fields and particles that are far more generic and unrelated. From this viewpoint, the extended supersymmetry looks like green shackles. You might propose to break the N=8 supersymmetry spontaneously. However, N=8 supersymmetry cannot be spontaneously broken to N=1 or N=0 supersymmetries - the latter being two realistic choices. In fact, even N=2 supersymmetry is too constraining and cannot be spontaneously broken to N=1 or N=0, at least not by field theoretical methods in four dimensions. If you can't break the N=8 supersymmetry spontaneously, you may want to break it explicitly. However, in that case, you lose the cancellations completely. Even with a reduced degree of supersymmetry in four dimensions (e.g. in other supergravity theories, including gauged supergravities), the divergences start to reappear, even in the perturbative expansion. No other gravitational field theories that are perturbatively finite, besides the N=8 supergravity, are known. (It is likely that even if such hypothetical additional theories existed, you would probably need string theory to find them and/or to prove the perturbative finiteness.) So how can you preserve the nice features of the N=8 supersymmetry and get rid of the bad features? There is a solution. You may spontaneously (read this word carefully, Daniel from Sao Paolo!) break the maximally extended supersymmetry but you must do it in a non-field-theoretical fashion. In fact, you need to return to the full description of the N=8 supergravity which is M-theory on a seven-torus. 
The correct spontaneous breaking of the supersymmetry instructs you to compactify M-theory or string theory on a different, more complicated manifold such as a G2 holonomy manifold (or a Calabi-Yau manifold times a line interval, or related choices). Without a loss of generality, a conceptual discussion may focus on the G2 holonomy manifolds. Locally, G2 holonomy manifolds are made out of the same 7-dimensional space as 7-tori. In the full configuration space of string/M-theory, you may "connect" G2 manifolds and 7-tori. For this particular pair of manifolds, it can only be done off-shell - you need to consider configurations with non-zero energy as intermediate points in order to achieve this brutal topology change - but in some moral sense, G2 manifolds can be obtained by adding a vev to 7-tori. The ultraviolet behavior is unaffected because at very short distances, 7-tori and G2 manifolds look the same: they are 7-dimensional manifolds, after all. In fact, "locally", there are still all 32 supercharges. So if M-theory on 7-tori is finite, it is not shocking that M-theory on G2 manifolds is finite, too. But let me warn you: for general backgrounds, the finiteness depends on string theory. If you truncate the stringy description into the massless fields, the perturbative finiteness of the field theory will never work as well as it did in the N=8 SUGRA, as far as I can say. M-theory on G2 manifolds leads to N=1 supersymmetry in four dimensions which is a realistic degree of supersymmetry, one that can be spontaneously broken to N=0 supersymmetry. It describes a lot of phenomenologically acceptable models, including models that contain pretty much pure MSSM at low energies. So if you try to preserve as many "nice" aspects of the N=8 supergravity as you can get, it is actually possible to preserve the "full, maximally supersymmetric local physics" of the N=8 SUGRA, but in 11 dimensions rather than 4, and use a different space than a 7-torus for compactification. 
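The supercharge counting invoked here is the standard holonomy argument (not special to this text): the 8-dimensional spinor of Spin(7) decomposes under G2 ⊂ Spin(7) as 8 → 7 + 1, so a G2 holonomy manifold admits exactly one covariantly constant spinor and preserves 1/8 of the supersymmetry:

```latex
\[
32 \;\times\; \tfrac{1}{8} \;=\; 4 \ \text{supercharges}
\quad\Longleftrightarrow\quad
\mathcal{N} = 1 \ \text{supersymmetry in } d = 4 .
\]
```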
At any rate, you end up with the conventional superstring model building (and you may find other realistic classes of vacua that are a priori as OK as the G2 manifolds once you fully adopt the stringy rules of the game). In the G2 context, at high energies (at distances shorter than the size of the G2 manifold), you still "morally" have 32 supercharges because the theory is M-theory, after all. At lower energies (at distances longer than the size of the G2 manifold), these 32 supercharges are broken down to 4 supercharges by the G2 geometry. Breaking of symmetry at long distances (or low energies) is exactly what you are used to from field theory except that string theory and its extra dimensions gave you completely new tools to achieve this goal. The set of a priori possible vevs giving us semi-realistic compactifications is rather large - it is the landscape - but if you realize how unconstrained and "arbitrary" the real world (and its particle spectrum) is in comparison with the N=8 supergravity (where the mini-landscape is very simple), you should understand that the large size of the N=1 or N=0 landscape is another "required" feature that qualitatively agrees with observations. Whether there exists a "nearly unique" way to find the correct set of vevs - or whether the anthropic people are correct in saying that we live in a "random", trash Universe - remains to be seen. But again, the knowledge that the landscape exists is another step that deepens our understanding of physics. Once you realize most (not necessarily all) of these basic physical arguments, there is no way for you to "unrealize" them. String/M-theory describes the only non-perturbatively consistent completion as well as the only phenomenologically acceptable generalization of "nice" theories such as the N=8 supergravity and I would like to claim that a physically literate, intelligent reader can find the proof in this very essay. 
If you think that the N=8 supergravity is on the right track and you try to make the picture more complete, more self-consistent, or more compatible with observations, you are led in the same direction in all three cases: you are led to string/M-theory. Everyone who pretends that it is not the case is deluded. There is no way for theoretical physics to "forget" string/M-theory because it has answered many questions about older theories such as the N=8 supergravity that used to be puzzling but that cannot be "unanswered" today. Entertainingly enough, one of Peter Woit's passionate readers, enthusiastic creationist Roger Schlafly, was more self-consistent than Peter Woit when he criticized Woit's attempts to promote the N=8 supergravity as a separate entity (and a full-fledged candidate for a unifying theory):
Peter, you are going soft on us. This is just another wacky theory with no connection to reality. It does not because [LM: should be: "become"] valid or useful just because some of the infinities cancel.

Of course, when your brain works properly, there is no way to "cut the strings" away from the rest of physics. Such an attempt is fully analogous to the creationist attempts to divide evolution into the "naturally tolerable one" and "one that surely requires God". There exists no point where you could draw such a boundary (read about the unity of strings). In fact, when you ask different creationists where this hypothetical boundary is supposed to be located, they will pinpoint all possible points. ;-) The very same observation applies to the anti-string-theoretical crackpots: read more about the evolution-string-theory analogies. When you ask them when physics exactly began to generate wrong results, they will give you all possible years and all possible boundaries between papers. Peter Woit puts the boundary somewhere in the early 1980s because this is when he realized that his brain was not capable of doing physics. Lee Smolin places the boundary somewhere in the 1970s (or earlier) because this is when this particular critic of science realized the same thing - that's also why Smolin tells you that all of particle physics sucks, in a sense - while Roger Schlafly puts the boundary somewhere in the 19th century because this is when physics began to support things like the old Earth and Darwin's theory that he simply cannot stand. ;-) The only semi-consistent solution for anti-scientific activists such as Peter Woit would be to deny all of physics completely. Science hasn't stopped (and it will never end) and even after 1838 or 1975 or 1982, there exists a significant portion (thousands) of completely solid physics papers.
Any attempt to draw a boundary somewhere in the middle of physics can be proven ludicrous because physics is a very tightly connected conglomerate of insights and evidence. An attempt to cut physics into two pieces along an argument can be shown illegitimate by repeating the proof that the argument is actually correct. For example, an attempt to say that the N=8 supergravity with the non-compact continuous E_{7(7)} symmetry was still on the right track while its stringy completion with the discrete U-duality symmetry is not contradicts the single-valuedness of the wave functions, as used in the Dirac quantization argument (single-valuedness forces the electric and magnetic charges onto a discrete lattice, schematically e·g ∈ 2πZ, and a generic continuous E_{7(7)} rotation would fail to preserve that lattice). It contradicts hundreds of other facts about physics, too, but I wanted to be very specific. String theory is indisputably at the very heart of physics which is at the very heart of natural science which also means that the critics of string theory find themselves at the very center of a gigantic excrement. And that's the memo.
F-theory and Grand Unified Theories

String theory remains the main contender for achieving a consistent quantum theory of gravitation, one which will reconcile gravitation with the Standard Model. In order to do so, string theory must be able to reproduce our low-energy description of particle physics when compactified to 4 non-compact dimensions. In this review we introduce the 12-dimensional F-theory as a framework for a non-perturbative description of compactifications of Type IIB string theory with [p, q] 7-branes and a varying axion-dilaton. We review the main motivations for introducing F-theory, which include a geometric interpretation of the Type IIB SL(2,Z) symmetry as the modular group of a torus (elliptic curve), and the monodromy and backreaction of 7-branes. With a focus on compactifications of F-theory on K3 surfaces, the geometry of elliptic fibrations is introduced along with the emergence of gauge groups, and a duality to Type IIB on orientifolds. General methods for generating matter and Yukawa couplings with 7-branes wrapping a del Pezzo surface in the internal space are then presented. These features are shown to appear at loci of singularities of increasing co-dimension. Finally, the developed tools of model building are used to provide an overview of the phenomenological properties of a local F-theory model with an SU(5) supersymmetric Grand Unified Theory. The description includes discussions of the local brane construction, GUT breaking by hyperflux, SUSY breaking, proton decay, flavour physics and neutrino physics. In light of this, F-theory is a promising framework for embedding the Standard Model in string theory.
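The geometric dictionary sketched in this abstract can be condensed into the standard Weierstrass data (textbook F-theory conventions, not necessarily the normalizations of this review):

```latex
\[
y^2 = x^3 + f\,x + g ,
\qquad
\tau \longmapsto \frac{a\tau + b}{c\tau + d},
\quad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{Z}),
\]
\[
\Delta = 4 f^3 + 27 g^2 ,
\qquad
\{\Delta = 0\} \;=\; \text{locus of the } [p,q]\ 7\text{-branes} .
\]
```

The axion-dilaton τ is identified with the modular parameter of the elliptic fiber, and the Kodaira type of the fiber degeneration over the discriminant locus determines the gauge group.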
Information Theoretic Description of Time

Physics studies a system based on the information available about it. There are two approaches to physics: the deterministic and the nondeterministic. The deterministic approaches assume complete availability of information. Since full information about a system is never available, nondeterministic approaches such as statistical physics and quantum physics are of high importance. This article is concerned with the informational foundations of physics and a description of time in terms of information. It addresses the problem of time and a cluster of problems around measurement in quantum mechanics, and gives an interpretation of time in terms of information.
The thermal time is shown to be emergent from the noncommutativity of quantum theory.
U-Duality and M-Theory

This work is intended as a pedagogical introduction to M-theory and to its maximally supersymmetric toroidal compactifications, in the frameworks of 11D supergravity, type II string theory and M(atrix) theory. U-duality is used as the main tool and guideline in uncovering the spectrum of BPS states. We review the 11D supergravity algebra and elementary 1/2-BPS solutions, discuss T-duality in the perturbative and non-perturbative sectors from an algebraic point of view, and apply the same tools to the analysis of U-duality at the level of the effective action and of the BPS spectrum, with a particular emphasis on Weyl and Borel generators. We derive the U-duality multiplets of BPS particles and strings, U-duality invariant mass formulae for 1/2- and 1/4-BPS states for general toroidal compactifications on skew tori with gauge backgrounds, and U-duality multiplets of constraints for states to preserve a given fraction of supersymmetry. A number of mysterious states are encountered in D ≤ 3, whose existence is implied by T-duality and 11D Lorentz invariance. We then move to the M(atrix) theory point of view, give an introduction to Discrete Light-Cone Quantization (DLCQ) in general and DLCQ of M-theory in particular. We discuss the realization of U-duality as electric-magnetic dualities of the Matrix gauge theory, display the Matrix gauge theory BPS spectrum in detail, and discuss the conjectured extended U-duality group in this scheme.
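For orientation, the U-duality groups referred to in this abstract are the standard ones (a textbook summary, not a result of the review itself): for M-theory compactified on a d-torus,

```latex
\[
\text{M-theory on } T^d :\qquad G_U = E_{d(d)}(\mathbb{Z}),
\qquad
E_{4(4)} \simeq SL(5),\quad E_{5(5)} \simeq SO(5,5),\quad E_{6(6)},\quad E_{7(7)},
\]
```

with the perturbative T-duality group O(d−1, d−1; Z) of the corresponding type II string on T^{d−1} sitting inside E_{d(d)}(Z) as a subgroup.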
One Loop Tadpole in Heterotic String Field Theory

We compute the off-shell 1-loop tadpole amplitude in heterotic string field theory. With a special choice of cubic vertex, we show that this amplitude can be computed exactly. We obtain explicit and elementary expressions for the Feynman graph decomposition of the moduli space, the local coordinate map at the puncture as a function of the modulus, and the b-ghost insertions needed for the integration measure. Recently developed homotopy algebra methods provide a consistent configuration of picture changing operators. We discuss the consequences of spurious poles for the choice of picture changing operators.
Contents

1 Introduction
2 Quantum Closed String Field Theories
  2.1 Closed Bosonic String Field Theory
  2.2 Heterotic String Field Theory
3 One Loop Tadpole in Closed Bosonic String Field Theory
  3.1 Elementary Cubic Vertex
  3.2 Propagator Diagram
  3.3 Elementary One Loop Tadpole Vertex
4 One Loop Tadpole in Heterotic String Field Theory
  4.1 General Construction of Cubic and Tadpole Vertex
  4.2 Case I: Local PCO insertions
  4.3 Spurious Poles
  4.4 Case II: PCO contours around punctures
5 Concluding Remarks

1 Introduction

In this paper we compute the off-shell, one-loop tadpole amplitude in heterotic string field theory. The purpose is twofold: (1) First, we would like to show that the amplitude can be computed exactly. Our success in this regard is largely due to a nonstandard choice of cubic vertex defined by SL(2, C) local coordinate maps (footnote 4). We will take special care to provide explicit results concerning the Feynman-graph decomposition of the moduli space, the local coordinate map as a function of the modulus, and the b-ghost insertions needed for the integration measure. Actually, these data are primarily associated with closed bosonic string field theory, but they also represent the most significant obstacle to explicit results for the heterotic string.
Importantly, our string vertices differ from the canonical ones defined by minimal area metrics [1]. The minimal area vertices are cumbersome for elementary calculations, though some analytic results are available at tree level up to 4 points [2] and numerical calculations have been performed up to 5 points [3, 4] (footnote 5). (2) Second, we would like to see how recent homotopy-algebraic constructions of classical superstring field theories [6, 7, 8, 9, 10, 11, 12] may be extended to the quantum level. A significant issue is the appearance of spurious poles in βγ correlation functions at higher genus [13]. We find that the general methodology behind tree-level constructions still functions in loops. Spurious poles are mainly important for the choice of picture changing operators (PCOs), which must ensure that loop vertices are finite and unambiguously defined. We consider two approaches to inserting PCOs. In the first approach, PCOs appear at specific points in the surfaces defining the vertices, in a manner similar to [14]. In the second approach, PCOs appear as contour integrals around the punctures, paralleling the construction of classical superstring field theories. The second approach has a somewhat different nature in loops, however, due to the necessity of specifying the PCO contours precisely in relation to spurious poles. A naive treatment can lead to inconsistencies. In this paper we are not concerned with computing the 1-loop tadpole in a specific background or for any particular on- or off-shell states. So when we claim to “compute” this amplitude, really what we mean is that we specify all background- and state-independent string field theory data that goes into the definition of this amplitude. It will remain to choose a vertex operator of interest, compute the correlation functions, and integrate over the moduli space to obtain a final expression.
Ultimately, however, one would like to use string field theory to compute physical amplitudes in situations where the conventional formulation of superstring perturbation theory breaks down. The computations of this paper can be regarded as a modest step in this direction.

Footnote 4: The possibility of using SL(2, C) maps in defining vertices was pointed out to us by A. Sen. We thank him for this suggestion.

Footnote 5: A new approach to the computation of off-shell amplitudes has recently been proposed based on hyperbolic geometry [5]. It would be interesting to approach the computation of the heterotic tadpole from this perspective.

The paper is organized as follows. In section 2 we briefly review the definition of bosonic and heterotic closed string field theories. To obtain the data associated to integration over the bosonic moduli space, in section 3 we compute the 1-loop tadpole in closed bosonic string field theory. This requires, in particular, choosing a suitable cubic vertex, computing the contribution to the tadpole from gluing two legs of the cubic vertex with a propagator, and defining an appropriate fundamental tadpole vertex to fill in the remaining region of the moduli space. In section 4 we compute the 1-loop tadpole in heterotic string field theory. This requires dressing the amplitude of the bosonic string with a configuration of PCOs. Homotopy algebraic methods constrain the choice of PCOs to be consistent with quantum gauge invariance, but some freedom remains. We discuss two approaches: one which inserts PCOs at specific points in the surfaces defining the vertices, and another which inserts PCOs in the form of contour integrals around the punctures. We discuss the consequences of spurious poles for both approaches. We also present the amplitudes in a form which is manifestly well-defined in the small Hilbert space. We end with concluding remarks.
Note Added: While this work was in preparation, we were informed of the review article [15], which contains a schematic discussion of spurious poles in the 1-loop tadpole amplitude in the limit of large stub parameter. Our work contains a more complete calculation of the tadpole amplitude, but our conclusions on this issue are in agreement.
Renormalization group in super-renormalizable quantum gravity

The calculation of quantum corrections has always had a very special role in quantum theories of gravity. The first relevant calculation was done by 't Hooft and Veltman [1], who derived the one-loop divergences in the quantum version of general relativity, including the coupling to a minimal scalar field. Soon after, similar calculations were performed for gravity-vector and gravity-fermion systems [2]. These first calculations have great merit, regardless of the fact that the output was shown to be gauge-fixing dependent [3]. Later on, one could learn a lot from the two-loop calculations in general relativity [4, 5]. Technically more complicated are calculations in four-derivative gravity, which were first performed in [6] and with some corrections in [7], [8] and finally in [9], where some extra control of the calculations was introduced and the hypothesis of a non-zero effect of the topological Gauss-Bonnet term [10] explored. Let us also mention similar calculations in the conformal version of the four-derivative theory [7, 11, 12]. The importance of four-derivative quantum gravity is due to its renormalizability [13], which is related to the presence of massive unphysical ghosts, typical of higher derivative field theories. Naturally, there were numerous and interesting works trying to solve the unitarity problem in this theory [14–16]. The mainstream approach is based on the expectation that the loop corrections may transform the real massive unphysical pole in the tensor sector of the theory into a pair of complex conjugate poles, which do not spoil unitarity within the Lee-Wick quantization scheme [17]. However, it was shown that definite knowledge of whether this scheme works or not requires an exact non-perturbative beta-function for the coefficient of the Weyl-squared term and for the Newton constant [18].
The existing methods to obtain such a non-perturbative result give some hope [19], but unfortunately they are not completely reliable (footnote 1) and, therefore, the situation with unitarity in the four-derivative quantum gravity remains uncertain. Another interesting aspect of quantum corrections in models of gravity is related to the running of the cosmological constant Λcc and especially the Newton constant G. These quantum effects may be relevant in cosmology and astrophysics (see, e.g. [21, 22]) and can be explored in different theoretical frameworks, such as semiclassical gravity [23], higher derivative quantum gravity [7, 15], low-energy effective quantum gravity [24], induced gravity [25] and the functional renormalization group [26]. Indeed, the status of the corresponding types of quantum corrections is different, but there are also some common points. In particular, in many cases one can formulate general restrictions on the running of the Newton constant G, which are based on covariance and dimensional arguments [27]. The beta-function for the inverse Newton constant which follows from these conditions has the form

$$\mu \frac{d}{d\mu}\,\frac{1}{G} \;=\; \sum_{ij} A_{ij}\, m_i m_j \,, \qquad (1)$$

where the m_i are masses of the fields, or more general parameters in the action with the dimensions of mass, and the A_{ij} are given by series in the coupling constants of the theory. In the perturbative quantum gravity case there may be one more complication. In the model based on Einstein-Hilbert gravity there is no beta-function for G, and in the four-derivative model this beta-function depends on the choice of gauge-fixing condition [7, 28, 29].

Footnote 1: One of the reasons is a strong gauge-fixing dependence of the on-shell average effective action, which was discussed in Yang-Mills theory [20] and is expected to take place also in quantum gravity.

(Electronic addresses: lmodesto@sustc.edu.cn, lmodesto1905@icloud.com; grzerach@gmail.com; shapiro@fisica.ufjf.br)
Only a dimensionless combination of G and Λcc has a well-defined running, but this is not sufficient for the mentioned applications to cosmology and astrophysics. Recently there was significant progress in the development of perturbative quantum gravity models which have very different properties. If the action of the theory includes local covariant terms with six or more derivatives, the theory may be i) super-renormalizable [30]; ii) unitary, in the case of only complex conjugate massive poles in the tree-level propagator [31, 32]; and iii) endowed with gauge-fixing and field-reparametrization independent beta-functions for both G and Λcc. The theory is unitary at any order in the perturbative loop expansion when the CLOP [33] prescription is implemented or, non-equivalently, if the theory is defined through a non-analytic Wick rotation from Euclidean to Minkowskian signature [34–36]. This means that such a theory satisfies the minimal set of consistency conditions and deserves a detailed investigation at both the classical and quantum levels. The classical aspects of the theory started to be explored recently in [37], where it was shown that the version with real simple poles has no singularity in the modified Newtonian potential. Quite recently this result was generalized to more general cases including multiple and complex poles [38, 39]. Furthermore, in [38, 40] a detailed analysis of light bending in six-derivative models was given. Another generalization of the simple higher derivative model is nonlocal gravity, where we allow for nonlocal functions of differential operators [41]. Moreover, there exists a class of nonlocal theories in which the UV behaviour is exactly the same as in polynomial higher derivative theories. Therefore, they also satisfy the above three points, namely they are super-renormalizable quantum models, and the analysis of divergences and RG running presented here applies to these theories as well.
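Since the right-hand side of the beta-function (1) for the inverse Newton constant is µ-independent at one loop, the flow integrates to a pure logarithm. A minimal numerical sketch; the coefficients A_ij and masses m_i below are placeholder values for illustration, not results of this paper:

```python
import math

def inv_G(mu, mu0, inv_G0, A, m):
    """One-loop running of 1/G for the flow mu d/dmu (1/G) = sum_ij A_ij m_i m_j.

    The right-hand side is constant in mu, so the solution is
    1/G(mu) = 1/G(mu0) + B * ln(mu/mu0) with B = sum_ij A_ij m_i m_j.
    """
    n = len(m)
    B = sum(A[i][j] * m[i] * m[j] for i in range(n) for j in range(n))
    return inv_G0 + B * math.log(mu / mu0)

# Placeholder coefficients and masses, purely illustrative.
A = [[0.1, 0.0],
     [0.0, 0.2]]
m = [1.0, 2.0]
print(inv_G(10.0, 1.0, 1.0, A, m))  # 1/G grows logarithmically with mu
```
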
Until now, the unique example of quantum calculations in super-renormalizable quantum gravity was the derivation of the beta-function for the cosmological constant in [30]. Here we start to explore the models further and derive the phenomenologically most relevant one-loop beta-function, that of the Newton constant G. The work is organized as follows. In Sect. II one can find a brief general review of the super-renormalizable models [30], including power counting and the gauge-fixing independence of the beta-functions. In Sect. III, we describe the one-loop calculations. Some of the relevant bulky formulas are separated into Appendix A, to provide a smooth reading of the main text. In Sect. IV, two important classes of nonlocal models of quantum gravity are considered. It turns out that the derivation of one-loop divergences by taking a limit in the results for polynomial models meets serious difficulties, which can be solved only for a special class of nonlocal theories, those which are asymptotically polynomial in the ultraviolet regime. But still these theories are super-renormalizable or even finite. In Sect. V, the renormalization group for the Newton and cosmological constants is discussed, within the minimal subtraction scheme of renormalization. Finally, in Sect. VI we draw our conclusions and outline general possibilities for further work.
Conformal quantum mechanics and holography in noncommutative space-time

We analyze the effects of noncommutativity in conformal quantum mechanics (CQM) using the κ-deformed space-time as a prototype. Up to first order in the deformation parameter, the symmetry structure of the CQM algebra is preserved, but the coupling in a canonical model of the CQM gets deformed. We show that the boundary conditions that ensure a unitary time evolution in the noncommutative CQM can break the scale invariance, leading to a quantum mechanical scaling anomaly. We calculate the scaling dimensions of the two- and three-point functions in the noncommutative CQM, which are shown to be deformed. The AdS2/CFT1 duality for the CQM suggests that the corresponding correlation functions in the holographic duals are modified. In addition, the Breitenlohner-Freedman bound also picks up a noncommutative correction. The strongly attractive regime of a canonical model of the CQM exhibits quantum instability. We show that noncommutativity softens this singular behaviour and discuss its implications for the corresponding holographic duals.
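The "canonical model" of CQM mentioned here is presumably the de Alfaro-Fubini-Furlan inverse-square Hamiltonian; in standard commutative conventions (an assumption about the authors' normalization):

```latex
\[
H = \frac{1}{2} \left( p^2 + \frac{g}{x^2} \right) .
\]
```

For g < −1/4 the attractive potential is too strong and the spectrum is unbounded from below under naive boundary conditions ("fall to the center"); this is the quantum instability referred to above. On the AdS2 side, the same threshold mirrors the Breitenlohner-Freedman bound m²L² ≥ −1/4.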
From Supersymmetric Quantum Mechanics to Scalar Field Theories

In this work we address the reconstruction problem, investigating the construction of field theories from supersymmetric quantum mechanics. The procedure is reviewed, starting from reflectionless potentials that admit one and two bound states. We show that, although the field theory reconstructed from the potential that supports a single bound state is unique, uniqueness may be lost in the case of two bound states. We illustrate this with an example, which leaves us with two distinct field theories.
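The reflectionless potentials in question are presumably of the standard Pöschl-Teller family (an assumption; the paper may use shifted or rescaled versions):

```latex
\[
V_\ell(x) = -\,\ell(\ell+1)\,\mathrm{sech}^2 x , \qquad \ell = 1, 2, \dots
\]
```

V_ℓ supports exactly ℓ bound states; ℓ = 1 matches the fluctuation spectrum of the sine-Gordon kink and ℓ = 2 that of the φ⁴ kink, which is the usual route from the quantum-mechanical potential back to a scalar field theory.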
Quantum Logic and Geometric Quantization

We assume that M is a phase space and H a Hilbert space yielded by a quantization scheme. In this paper we consider the set of all “experimental propositions” of M and we look for a model of quantum logic in relation to the quantization of the base manifold M. In particular we give a new interpretation of previous results of the author in order to build an “asymptotic quantum probability space” for the Hilbert lattice L(H).

1 Introduction

Geometric quantization is a scheme involving the construction of Hilbert spaces from a phase space, usually a symplectic or Poisson manifold. In this paper we will see how this complex machinery works and what kind of objects are involved in this procedure. This mathematical approach is very classic and basic results are in [1]. About the quantization of Kähler manifolds and the Berezin-Toeplitz quantization we suggest the following literature: [2], [3], [4], [5] and [6]. From another point of view we have quantum logic. This is a list of rules to use for correct reasoning about propositions of the quantum world. Fundamental works in this field are [7], [8] and [9]. In order to emphasize the importance of these studies we shall notice that they are used in quantum physics to describe the probability aspects of a quantum system. A quantum state is generally described by a density operator, and the result used to introduce a notion of probability in the Hilbert space is a celebrated theorem due to Gleason [10]. We will see how recent developments in POVM theory (positive operator-valued measure) suggest seeing the classical methods of quantization as special cases of the POVM formalism. Regarding these developments on POVMs see [11], [12] and [13].

AMS Subject Classification (2010): Primary 03G12; Secondary 03G10, 81P10, 53D50. The first part of these notes was inspired by a series of conversations when the author was in Vienna at the E.S.I. on the occasion of GEOQUANT 2013.
The principal idea that inspires this work is to consider the special case of geometric quantization as a “machine” producing Hilbert lattices, and to try to find a possible measurable probability space.
Describing Non-Equilibrium Phase Transitions with "a New Branch of Quantum Mechanics" Though the overwhelming majority of natural processes occur far from the equilibrium, general theoretical approaches to non-equilibrium phase transitions remain scarce. Recent breakthroughs introduced a description of open dissipative systems in terms of non-Hermitian quantum mechanics enabling the identification of a class of non-equilibrium phase transitions associated with the loss of combined parity (reflection) and time-reversal symmetries. Here we report that the time evolution of a single classical spin (e.g. monodomain ferromagnet) governed by the Landau-Lifshitz-Gilbert-Slonczewski equation in the absence of magnetic anisotropy terms is described by a Möbius transformation in complex stereographic coordinates. We identify the parity-time symmetry-breaking phase transition occurring in spin-transfer torque-driven linear spin systems as a transition between hyperbolic and loxodromic classes of Möbius transformations, with the critical point of the transition corresponding to the parabolic transformation. This establishes the understanding of non-equilibrium phase transitions as topological transitions in configuration space.
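The transition between hyperbolic and loxodromic Möbius transformations mentioned in the abstract rests on the standard trace classification of Möbius maps. As an illustrative sketch (not taken from the paper, and with a function name of my own choosing), the classification of a map z ↦ (az + b)/(cz + d) by the squared trace of its determinant-normalized matrix can be coded as follows:

```python
import cmath

def classify_mobius(a, b, c, d, tol=1e-9):
    """Classify the Mobius map z -> (a*z + b)/(c*z + d) using the
    squared trace of its matrix normalized to determinant 1:
    tr^2 = 4 -> parabolic; tr^2 real in [0, 4) -> elliptic;
    tr^2 real and > 4 -> hyperbolic; tr^2 non-real -> loxodromic."""
    det = a * d - b * c
    if abs(det) < tol:
        raise ValueError("degenerate map: determinant is zero")
    s = cmath.sqrt(det)          # normalize so the matrix has det = 1
    t2 = ((a + d) / s) ** 2      # tr^2 is independent of the sign of s
    if abs(t2.imag) > tol:
        return "loxodromic"
    x = t2.real
    if abs(x - 4) < tol:
        return "parabolic"
    if 0 <= x < 4:
        return "elliptic"
    return "hyperbolic"
```

For instance, the translation z ↦ z + 1 (matrix [[1, 1], [0, 1]]) is parabolic, the pure dilation z ↦ 4z (matrix [[2, 0], [0, 1/2]]) is hyperbolic, and a dilation combined with a rotation is loxodromic, matching the three classes the abstract associates with the critical point and the two phases.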
Two physicists at the U.S. Department of Energy's Argonne National Laboratory offered a way to mathematically describe a particular physics phenomenon called a phase transition in a system out of equilibrium. Such phenomena are central in physics, and understanding how they occur has been a long-held and vexing goal; their behavior and related effects are key to unlocking possibilities for new electronics and other next-generation technologies. In physics, "equilibrium" refers to a state in which an object is not in motion and has no energy flowing through it. As you might expect, most of our lives take place outside this state: we are constantly moving and causing other things to move. "A rainstorm, this rotating fan, these systems are all out of equilibrium," said study co-author Valerii Vinokur, an Argonne Distinguished Fellow and member of the joint Argonne-University of Chicago Computation Institute. "When a system is in equilibrium, we know that it is always at its lowest possible energy configuration, but for non-equilibrium this fundamental principle does not work; and our ability to describe the physics of such systems is very limited." He and co-author Alexey Galda, a scientist with Argonne and the University of Chicago's James Franck Institute, had been working on ways to describe these systems, particularly those undergoing a phase transition, such as the moment during a thunderstorm when the charge difference between cloud and ground tips too high and a lightning strike occurs. They found their new approach to non-equilibrium physics in a new branch of quantum mechanics. In the language of quantum mechanics, the energy of a system is represented by what is called a Hamiltonian operator. Traditionally, quantum mechanics has held that the operator representing the system cannot contain imaginary numbers if that would mean the energy does not come out as a "real" and positive value, because the system actually does exist in reality.
This condition is called Hermiticity. But physicists have been taking a harder look at operators that violate Hermiticity by using imaginary components, Vinokur said; several such operators discovered a few years ago are now widely used in quantum optics. "We noticed that such operators are a beautiful mathematical tool to describe out-of-equilibrium processes," he said. To describe the phase transition, Galda and Vinokur wrote out the Hamiltonian operator, introduced an applied force to take it out of equilibrium, and then they made the force imaginary. "This is a trick which is illegal from any common-sense point of view; but we saw that this combination, energy plus imaginary force, perfectly mathematically describes the dynamics of the system with friction," Vinokur said. They applied the trick to describe other out-of-equilibrium phase transitions, such as a dynamic Mott transition and a spin system, and saw the results agreed with either observed experiments or simulations. In their latest work, they connected their description with an operation called a Möbius transformation, which appears in a branch of mathematics called topology. "We can understand non-equilibrium transitions now as topological transitions in the space of energy," Galda said. This bit of quantum mischief needs to be understood more deeply, they said, but is valuable all the same; the theory describes basic areas of physics that are of great interest for next-generation electronics technology. "For the moment the connection with topology looks like mathematical candy, a beautiful thing we can't yet use, but we know from history that if the math is elegant enough, very soon its practical implications follow," Vinokur said.
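The "energy plus imaginary force" trick can be made concrete with the textbook two-level PT-symmetric model (a sketch of the general idea, not the specific spin Hamiltonian used by Galda and Vinokur): a non-Hermitian Hamiltonian H = [[iγ, J], [J, -iγ]], with real coupling J and imaginary on-site terms ±iγ, has eigenvalues ±sqrt(J² - γ²). These are real when J > γ (the PT-symmetric phase) and purely imaginary when J < γ (the broken phase), with the transition at the exceptional point J = γ:

```python
import numpy as np

def pt_eigenvalues(J, gamma):
    """Eigenvalues of the non-Hermitian two-level Hamiltonian
    H = [[i*gamma, J], [J, -i*gamma]]: coupling J, imaginary
    gain/loss terms +/- i*gamma. Analytically these are
    +/- sqrt(J**2 - gamma**2)."""
    H = np.array([[1j * gamma, J], [J, -1j * gamma]])
    return np.linalg.eigvals(H)

def is_pt_symmetric_phase(J, gamma, tol=1e-9):
    """True when the whole spectrum is real, i.e. the system sits in
    the unbroken PT-symmetric phase (J > gamma for this model)."""
    return bool(np.all(np.abs(pt_eigenvalues(J, gamma).imag) < tol))
```

Sweeping γ past J and watching the eigenvalues collide on the real axis and split into a complex-conjugate pair is the spectral signature of the symmetry-breaking transition described in the article.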