Do We Have Free Will?!: Quantum Physics and an Experimental Test of the Free Will Theorem The free will theorem states that if experimenters have free will in the sense that their choices are not a function of the past, so must some elementary particles. The theorem goes beyond Bell’s theorem as it connects the two fundamental resources behind quantum technologies: single-particle contextuality, which supplies the power for quantum computation, and two-particle non-locality, which allows for quantum secure communication. The theorem relies on three axioms: (i) there is a maximum speed for propagation of information, (ii) single particles can exhibit contextuality, (iii) two separated particles can exhibit Einstein-Podolsky-Rosen (EPR) correlations. Here we report the first experimental test of the free will theorem. We used pairs of hyper-entangled photons entangled in path and polarization and enforced the conditions for invoking axiom (i) by measuring each photon in a different laboratory. We certified axiom (ii) by testing the violation of the Peres-Mermin non-contextuality inequality and axiom (iii) by showing EPR correlations between the two laboratories. The three axioms imply an upper bound on the sum of the correlations among the outcomes of sequential measurements in the same laboratory and the correlations between the outcomes in both laboratories. We observed a violation of this bound by more than 66 standard deviations. This reveals that quantum non-locality can be produced when single-particle contextuality is combined with correlations which are not non-local by themselves. Our results demonstrate the resources needed for quantum computation and quantum secure communication simultaneously in the same experiment and open the door to quantum machines that efficiently achieve both tasks.
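For readers who want to see what the contextuality test behind axiom (ii) looks like concretely, here is the standard Peres-Mermin square of two-qubit Pauli observables together with the usual non-contextuality inequality (a textbook presentation; the paper's actual measurement settings on the path and polarization degrees of freedom may be arranged differently):

```latex
% Peres-Mermin square: nine two-qubit observables arranged so that each row and each
% column is a set of three mutually commuting, dichotomic (+1/-1) measurements.
\[
\begin{array}{ccc}
\sigma_z\otimes\mathbb{1} & \mathbb{1}\otimes\sigma_z & \sigma_z\otimes\sigma_z\\[2pt]
\mathbb{1}\otimes\sigma_x & \sigma_x\otimes\mathbb{1} & \sigma_x\otimes\sigma_x\\[2pt]
\sigma_z\otimes\sigma_x  & \sigma_x\otimes\sigma_z  & \sigma_y\otimes\sigma_y
\end{array}
\]
% Non-contextual hidden-variable models obey
\[
\langle R_1\rangle+\langle R_2\rangle+\langle R_3\rangle
+\langle C_1\rangle+\langle C_2\rangle-\langle C_3\rangle \;\le\; 4 ,
\]
% where R_k (C_k) denotes the product of the three outcomes in row (column) k.
% Quantum mechanics gives the value 6 for every two-qubit state, because as operators
% each row product equals +1 while the column products equal +1, +1 and -1.
```

Each row and column consists of compatible observables, so they can be measured sequentially on the same particle; this is the single-laboratory ingredient referred to in the abstract.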
Which Way? Waves All the Way?! It May be Time to Give-Up Wave-Particle Duality in Quantum Physics "I report the result of a which-way experiment based on Young’s double-slit experiment. It reveals which slit photons go through while retaining the (self) interference of all the photons collected. The idea is to image the slits using a lens with a narrow aperture and scan across the area where the interference fringes would be. The aperture is wide enough to separate the slits in the images, i.e., telling which way. The illumination pattern over the pupil is reconstructed from the series of slit intensities. The result matches the double-slit interference pattern well. As such, the photon’s wave-like and particle-like behaviors are observed simultaneously in a straightforward and thus unambiguous way. The implication is far reaching. For one, it presses hard, at least philosophically, for a consolidated wave-and-particle description of quantum objects, because we can no longer dismiss such a challenge on the basis that the two behaviors do not manifest at the same time. A bold proposal is to forgo the concept of particles. Then, Heisenberg’s uncertainty principle would be purely a consequence of waves without being ordained upon particles."
What is the Nature of SpaceTime? How Is the Universe Built? Grain by Grain Slightly smaller than what Americans quaintly insist on calling half an inch, a centimeter (one-hundredth of a meter) is easy enough to see. Divide this small length into 10 equal slices and you are looking, or probably squinting, at a millimeter (one-thousandth, or 10 to the minus 3 meters). By the time you divide one of these tiny units into a thousand minuscule micrometers, you have far exceeded the limits of the finest bifocals. But in the mind's eye, let the cutting continue, chopping the micrometer into a thousand nanometers and the nanometers into a thousand picometers, and those in steps of a thousandfold into femtometers, attometers, zeptometers, and yoctometers. At this point, 10 to the minus 24 meters, about one-billionth the radius of a proton, the roster of convenient Greek names runs out. But go ahead and keep dividing, again and again until you reach a length only a hundred-billionth as large as that tiny amount: 10 to the minus 35 meters, or a decimal point followed by 34 zeroes and then a one. You have finally hit rock bottom: a span called the Planck length, the shortest anything can get. According to recent developments in the quest to devise a so-called "theory of everything," space is not an infinitely divisible continuum. It is not smooth but granular, and the Planck length gives the size of its smallest possible grains. The time it takes for a light beam to zip across this ridiculously tiny distance (about 10 to the minus 43 seconds) is called the Planck time, the shortest possible tick of an imaginary clock. Combine these two ideas and the implication is that space and time have a structure. What is commonly thought of as the featureless void is built from tiny units, or quanta. "We've long suspected that space-time had to be quantized," said Dr. Steven B. Giddings, a theorist at the University of California at Santa Barbara. "Recent developments have led to some exciting new proposals about how to make these ideas more concrete." The hints of graininess come from attempts to unify general relativity, Einstein's theory of gravity, with quantum mechanics, which describes the workings of the three other forces: electromagnetism and the strong and weak nuclear interactions. The result would be a single framework -- sometimes called quantum gravity -- that explains all the universe's particles and forces. The most prominent of these unification efforts, superstring theory, and a lesser-known approach called loop quantum gravity, both strongly suggest that space-time has a minute architecture. But just what the void might look like has physicists straining their imaginations. As Dr. John Baez, a theorist at the University of California at Riverside put it: "There's a lot we don't know about nothing." Since the days of ancient Greece, some philosophers have insisted that reality must be perfectly smooth like the continuum of real numbers: pick any two points, no matter how close together, and there is an infinity of gradations in between. Others have argued that, on the smallest scale, everything is surely divided into irreducible units like the so-called natural or counting numbers, with nothing between, say, 3 and 4. The development of modern atomic theory, in the 19th century, pushed science toward viewing the universe as lumpy instead of smooth. At the beginning of this century, sentiments swung further in that direction when Max Planck found that even light was emitted in packets. 
From that unexpected discovery emerged quantum field theory, in which all the forces are carried by tiny particles, or quanta -- all, that is, except gravity. This force continues to be explained, in entirely different terms, by general relativity: as the warping of a perfectly smooth continuum called space-time. A planet bends the surrounding space-time fabric causing other objects to move toward it like marbles rolling down a hill. Scientists have long assumed that unification would reveal that gravity, like the other forces, is also quantum in nature, carried by messenger particles called gravitons. But while the other forces can be thought of as acting within an arena of space and time, gravity is space-time. Quantizing one is tantamount to quantizing the other. It is hardly surprising that space-time graininess has gone unnoticed here in the macroscopic realm. Even the tiny quarks that make up protons, neutrons and other particles are too big to feel the bumps that may exist on the Planck scale. More recently, though, physicists have suggested that quarks and everything else are made of far tinier objects: superstrings vibrating in 10 dimensions. At the Planck level, the weave of space-time would be as apparent as when the finest Egyptian cotton is viewed under a magnifying glass, exposing the warp and woof. It was Planck himself who first had an inkling of a smallest possible size. He noticed that he could start with three fundamental parameters of the universe -- the gravitational constant (which measures the strength of gravity), the speed of light, and his own Planck's constant (a gauge of quantum graininess) -- and combine them in such a way that the units canceled one another to yield a length. He was not sure about the meaning of this Planck length, as it came to be called, but he felt that it must be something very basic. In the 1950's, the physicist John Wheeler suggested that the Planck length marked the boundary where the random roil of quantum mechanics scrambled space and time so violently that ordinary notions of measurement stopped making sense. He called the result "quantum foam." "So great would be the fluctuations that there would literally be no left and right, no before and no after," Dr. Wheeler recently wrote in his memoir, "Geons, Black Holes and Quantum Foam" (Norton, 1998). "Ordinary ideas of length would disappear. Ordinary ideas of time would evaporate." Half a century later, physicists are still trying to work out the bizarre implications of a minimum length. In superstring theory, a mathematical relationship called T duality suggests that one can shrink a circle only so far. As the radius contracts, the circle gets smaller and smaller and then bottoms out, suddenly acting as though it is getting bigger and bigger. "This behavior implies that there is a minimum 'true size' to the circle," Dr. Giddings said. Many believe this will turn out to be roughly comparable to the Planck scale. There are other indications of graininess. According to the Heisenberg uncertainty principle, certain pairs of quantities are "noncommutative": you cannot simultaneously measure a particle's position and momentum, for example, or its energy and life span. The more precisely you know one, the fuzzier your knowledge of the other becomes. In string theory, the very geometry of space may turn out to be noncommutative, making it impossible to measure simultaneously the horizontal and vertical position of a particle to perfect precision. 
The graininess of space itself would get in the way. Not everyone in the unification business is a string theorist. Coming from an entirely different direction, researchers in a discipline called loop quantum gravity have devised a theory in which space is constructed from abstract mathematical objects called spin nets. Imagine a tiny particle spinning like a top on its axis. Now send it on a roundtrip journey, a loop through space. Depending on the Einsteinian shape of the space the particle traverses, it will return home with its axis tilted in a different direction. This change then provides a clue about how the space is curved. Using particles with various spins, theorists can probe space in more detail. The different trajectories can then be combined into a web, called a spin network, that captures everything you need to know about how the space is curved -- what physicists call its geometry. "Our space in which we live is just this enormously complicated spin network," said Dr. Carlo Rovelli of the University of Pittsburgh. He and Dr. Lee Smolin of the Center for Gravitational Physics and Geometry at Pennsylvania State University have figured out how to use spin nets to calculate area and volume -- all this information is encoded within the weblike structure. Suppose you are sitting at a table. To calculate its area you would add up the spins of all the links of the spin net that are passing through it, and multiply by the square of the Planck length. A table with an area of about one square meter would be impinged by some 10 to the 65th of these trajectories. The implication is that the very idea of a surface is an illusion generated by the spin network. The picture gets even weirder. In quantum mechanics, an electron orbiting an atomic nucleus is thought of as a cloud of probability: a "superposition" in which all the electron's possible locations hover together. In the view of Dr. Rovelli, Dr. Smolin and their colleagues, the universe itself is a superposition of every conceivable spin net -- all the possible ways that it can be curved. Where does time fit into the picture? A spin net provides a snapshot of the geometry of three-dimensional space at a particular instant. To describe space-time, Dr. Baez and other theorists have stretched spin nets into the fourth dimension, devising what they call spin foam. Slice it and each infinitely thin cross section is a spin net. Most perplexing of all, spin nets and spin foam cannot be thought of as existing in space and time. They reside on a more fundamental level, as a deep structure that underlies and gives rise to space-time. "That is the core of the matter," Dr. Rovelli said. "They don't live somewhere. They are the quantum space-time." The universe, in this view, is conjured up from pure mathematics. And the old idea of space and time as the stage on which everything happens no longer seems to apply. "If we believe what we really have discovered about the world with quantum mechanics and general relativity, then the stage fiction has to be abandoned," Dr. Rovelli said, "and we have to learn to do physics and to think about the world in a profoundly new way. Our notions of what are space and time are completely altered. In fact, in a sense, we have to learn to think without them."
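As a quick check on the dimensional argument attributed to Planck above, here is a minimal Python sketch that combines G, c and ħ into the Planck length and Planck time quoted in the article (Planck's own construction used h rather than ħ, which changes the numbers only by a factor of order one):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s

# The only length and time that can be built from these three constants alone.
l_P = math.sqrt(hbar * G / c**3)   # Planck length
t_P = math.sqrt(hbar * G / c**5)   # Planck time = l_P / c

print(f"Planck length ~ {l_P:.2e} m")   # ~1.6e-35 m, the '10 to the minus 35 meters' above
print(f"Planck time   ~ {t_P:.2e} s")   # ~5.4e-44 s, i.e. of order 10^-43 s
```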
Prime Numbers aren't 'Really' Random: the Last Digits of Nearby Primes Have an 'Anti-Sameness' Bias A pair of mathematicians at Stanford University has found that the distribution of the last digits of prime numbers is not as random as has been thought, which suggests primes themselves are not. In their paper, Robert Lemke Oliver and Kannan Soundararajan describe their study of the last digit of prime numbers, how they found it to be less than random, and what they believe is a possible explanation for their findings. Though the idea behind prime numbers is very simple, they still are not fully understood—they cannot be predicted, for example, and finding each new one grows increasingly difficult. Also, they have, at least until now, been believed to be completely random. In this new effort, the researchers have found that the last digit of a prime number does not repeat randomly. Primes can only end in the digits 1, 3, 7 or 9 (apart from 2 and 5, of course), thus if a given prime number ends in a 1, there should be a 25 percent chance that the next one ends in a 1 as well—but that is not the case, the researchers found. In looking at all the prime numbers up to several trillion, they made some odd discoveries. For the first several million, for example, prime numbers ending in 1 were followed by another prime ending in 1 just 18.5 percent of the time. Primes ending in a 3 or a 7 were followed by a 1 about 30 percent of the time, and primes ending in 9 were followed by a 1 about 22 percent of the time. These numbers show that the distribution of the final digits of consecutive prime numbers is clearly not random, which suggests that prime numbers are not as random as assumed. On the other hand, they also found that the farther apart the prime numbers were, the more random the distribution of their last digits became. The researchers cannot say for sure why the last digits of nearby primes are biased, but they suspect it has to do with how often pairs of primes, triples and even larger groupings of primes appear—as predicted by the k-tuple conjecture, which, frustratingly, has yet to be proven.
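The bias described above is easy to see for yourself. Here is a minimal Python sketch: a plain sieve up to ten million (far below the trillions scanned in the paper, but the anti-sameness bias already shows up clearly at this scale):

```python
from collections import Counter

N = 10_000_000

# Sieve of Eratosthenes over a bytearray.
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p*p::p] = bytes(len(range(p*p, N + 1, p)))

# Keep primes above 5, whose last digits are restricted to 1, 3, 7, 9.
primes = [i for i in range(7, N + 1) if sieve[i]]

# Tally how often a prime ending in digit a is followed by a prime ending in digit b.
pairs = Counter((p % 10, q % 10) for p, q in zip(primes, primes[1:]))
for a in (1, 3, 7, 9):
    total = sum(pairs[(a, b)] for b in (1, 3, 7, 9))
    freqs = {b: round(pairs[(a, b)] / total, 3) for b in (1, 3, 7, 9)}
    print(a, "->", freqs)
# If the last digits were independent, every entry would be ~0.25; instead the
# "same digit again" entries (1->1, 3->3, 7->7, 9->9) come out noticeably lower.
```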
Observing The Observer - a Deep Quantum Physics 'Paradox'?! Schrödinger's cat meets Wigner’s friend The idea that quantum mechanics applies to everything in the universe, even to us humans, can lead to some interesting conclusions - here's why. Consider Deutsch’s variant of the iconic Schrödinger's cat thought experiment that builds on Wigner’s ideas. I will try to use language that is as interpretation-free as possible and let you draw your own conclusions. I’ve personally witnessed all sorts of reactions from physicists upon hearing this. I’ve seen a conversion from Copenhagen to Many Worlds (which is the quantum equivalent of a religious conversion from, say, Christian Orthodoxy to Buddhism). At the other extreme, I’ve heard colleagues say: “so what’s the big deal with this stuff?” (The latter reply, in my experience, typically comes either from those who truly understand quantum physics or those who completely missed the point.) Suppose that a very able experimental physicist, Alice, puts her friend Bob inside a room with a cat, a radioactive atom and cat poison that gets released if the atom decays. The point of having a human there is that we can communicate with him. (Getting answers from cats is not that easy - believe me, I’ve tried.) As far as Alice is concerned, the atom enters into a state of being both decayed and not decayed, so that the cat is both dead and alive (that’s where Schrödinger stops). Bob, however, can directly observe the cat and sees it as one or the other. This is something we know from everyday experience: we never see dead-and-alive cats. To confirm this, Alice slips a piece of paper under the door asking Bob whether the cat is in a definite state. He answers, “Yes, I see a definite state of the cat”. At this point, mathematically speaking, the state of the system has changed from the initial state

|Ψ_i⟩ = |no-decay⟩ |poison in the bottle⟩ |cat alive⟩ |Bob sees alive cat⟩ |blank piece of paper⟩   (1)

to the state (from Alice’s global perspective)

|Ψ_1/2⟩ = ( |decay⟩ |poison released⟩ |cat dead⟩ |Bob sees dead cat⟩ + |no-decay⟩ |poison still in the bottle⟩ |cat alive⟩ |Bob sees alive cat⟩ ) ⊗ |paper says: yes, I see a definite state of the cat⟩   (2)

I am assuming that, because Alice’s laboratory is isolated, every transformation leading up to this state is unitary. This includes the decay, the poison release, the killing of the cat and Bob’s observation - Alice has perfect quantum coherent control of the experiment. Note that Alice does not ask whether the cat is dead or alive, because for her that would force the outcome or, as some physicists might say, collapse the state (this is exactly what happens in Wigner’s version, where he communicates the state to a friend, who communicates to another friend and so on...). She is content observing that Bob sees the cat either alive or dead and does not ask which it is. Because Alice avoided collapsing the state, quantum theory holds that slipping the paper under the door was a reversible act. She can undo all the steps she took, since each of them is just a unitary transformation. In other words, the paper does not get entangled with the rest of the laboratory. When Alice reverses the evolution, if the cat was dead, it would now be alive, the poison would be in the bottle, the particle would not have decayed and Bob would have no memory of ever seeing a dead cat.
If the cat was alive, it would also come back to the same state (everything, in other words, comes back to the starting state where the atom has not decayed, the poison is in the bottle, the cat is alive, and Bob sees the cat alive and has no memory of the experiment he was subjected to). And yet one trace remains: the piece of paper saying “yes, I see a definite state of the cat”. Alice can undo Bob’s observation in a way that does not also undo the writing on the paper. The paper remains as proof that Bob had observed the cat as definitely alive or dead. (Note that I am still remaining interpretation neutral. A Many Worlds supporter would say that there are two copies of Bob, one that observes a dead cat and one that sees a live cat; a Copenhagen or Quantum Bayesian supporter could say that relative to one state of Bob the cat is dead, while, relative to the other, it is alive - either way, supporters of any interpretation ought to make the same predictions in this experiment.) That leads to a startling conclusion for someone who believes that measurements have definite irreversible outcomes. Alice was able to reverse the observation because, as far as she was concerned, she avoided collapsing the state; to her, Bob was in just as indeterminate a state as the cat. But the friend inside the room thought the state did collapse. That person did see a definite outcome; the paper is proof of it. In this way, the experiment demonstrates two seemingly contradictory principles. Alice thinks that quantum mechanics applies to macroscopic objects: not just cats but also Bobs can be in quantum limbo. Bob, halfway through the experiment and before Alice’s reversal, thinks that cats are only either dead or alive. I deliberately avoided interpretational jargon because this experiment cannot distinguish between different interpretations (nothing can, that’s why they are called interpretations). What it does is tell the difference between quantum physics being valid at macroscopic scales and there being a genuine collapse due to observation. If Bob genuinely collapsed the quantum state inside by seeing one outcome or the other (definitely “dead” or definitely “alive”), then the reversal would not be possible with unit probability. Namely, both outcomes would then be possible at the end of the experiment: the atom could also be found decayed and the cat dead (with probability one half). Therefore repeating this experiment a few times would tell us whether the observation leads to a definitive collapse or not. Doing such an experiment with an entire human being would be daunting, but physicists can accomplish much the same with simpler systems. We can take a photon and bounce it off a mirror. If the photon is reflected, the mirror recoils, but if the photon is transmitted, the mirror stays still. The photon plays the role of the decaying atom; it can exist simultaneously in more than one state. The mirror, made up of many atoms, acts as the cat and as Bob. Whether it recoils or not is analogous to whether the cat lives or dies and is seen to live or die by Bob. The process can be reversed by reflecting the photon back at the mirror. If the photon always comes out the way it came in, we have confirmed that it was in a superposition after the mirror and before the reversal. Otherwise, there was a collapse somewhere along the way (needless to say, there is no collapse if things are done properly in actual photonic experiments of this kind).
We can do similar experiments with (collections of) atoms and molecules, where we entangle them and subsequently disentangle them. Again, no collapse recorded. In developing this devious thought experiment, Wigner and Deutsch followed in the footsteps of Schrödinger, Einstein, Bell and other theorists who argued that physicists have yet to grasp quantum mechanics in a deep way. For decades most physicists scarcely cared, because the foundational issues had no effect on practical applications of the theory. But now that we can perform these experiments for real, the task of exploring the full extent of quantum mechanics has become all the more urgent. Disclosure of potential CoI: to me personally, the validity of quantum physics at the macroscopic level naturally suggests the Many Worlds picture. The Many Worlds interpretation is not without problems, such as the (lack of a convincing) derivation of the Born rule and (in my opinion much less of an issue, but still ... ) the basis problem. However, other interpretations have much the same concerns (which they avoid by calling the Born rule a postulate and saying that the coupling to the environment selects the basis, thereby eliminating the need for an explanation). The simple fact demonstrated here, however, namely that from one perspective (Bob’s above) we can have definite outcomes, while, at the same time, from a higher perspective (Alice’s above) everything remains in a quantum coherent state, seems to me to lean spiritually towards Many Worlds.
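The logic of the reversal argument can be checked in a toy model. The Python/NumPy sketch below treats the atom, the cat and Bob as three qubits, which is purely my own illustrative simplification (it is not the photon-mirror experiment, let alone a real cat): Alice's unitary U entangles them, applying U† undoes everything with certainty, whereas a genuine collapse onto either branch lets the reversal succeed only half the time, exactly the one-half probability mentioned above.

```python
import numpy as np

# Qubit convention: |0> = no-decay / cat alive / Bob sees alive, |1> = decay / dead / sees dead.
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # puts the atom into a superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# U: the atom evolves into a superposition, the cat follows the atom, Bob observes the cat.
U = kron(I2, CNOT) @ kron(CNOT, I2) @ kron(H, I2, I2)

start = np.zeros(8); start[0] = 1.0                     # |no-decay, alive, sees alive>
psi = U @ start                                         # (|000> + |111>)/sqrt(2)

# Alice's question "is the cat definite?" corresponds to the projector onto
# span{|000>, |111>}; psi lies entirely inside it, so the answer is "yes" with
# probability 1 and the state is left untouched. Reversing U then restores the start:
print(abs(start @ (U.conj().T @ psi))**2)               # 1.0: perfect reversal, no collapse

# If Bob's observation truly collapsed the state onto one branch, the reversal would
# return to the starting state only half the time, as argued above.
for branch in (0, 7):                                   # collapse to |000> or to |111>
    collapsed = np.zeros(8); collapsed[branch] = 1.0
    print(abs(start @ (U.conj().T @ collapsed))**2)     # 0.5 and 0.5
```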
Deep: Spacetime with zero point length is two-dimensional at the Planck scale It is generally believed that any quantum theory of gravity should have a generic feature — a quantum of length. We provide a physical ansatz to obtain an effective non-local metric tensor starting from the standard metric tensor, such that the spacetime acquires a zero-point length ℓ0 of the order of the Planck length L_P. This prescription leads to several remarkable consequences. In particular, the Euclidean volume V_D(ℓ, ℓ0) of a region of size ℓ in a D-dimensional spacetime scales as V_D(ℓ, ℓ0) ∝ ℓ0^(D−2) ℓ² when ℓ ∼ ℓ0, while it reduces to the standard result V_D(ℓ, ℓ0) ∝ ℓ^D at large scales (ℓ ≫ ℓ0). The appropriately defined effective dimension, D_eff, decreases continuously from D_eff = D (at ℓ ≫ ℓ0) to D_eff = 2 (at ℓ ∼ ℓ0). This suggests that the physical spacetime becomes essentially two-dimensional near the Planck scale.
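To make the dimensional-reduction statement concrete, here is a toy Python sketch. The interpolating volume V_D(ℓ) = ℓ² (ℓ² + ℓ0²)^((D−2)/2) is an assumption of mine, chosen only because it reproduces the two limits quoted in the abstract; the paper derives its own expression. Defining the effective dimension as the local log-log slope of the volume then gives a number that falls continuously from D towards 2 as ℓ shrinks:

```python
import numpy as np

D, l0 = 4, 1.0                      # spacetime dimension and zero-point length (arbitrary units)
l = np.logspace(-2, 3, 500) * l0    # probe scales from far below to far above l0

# Toy interpolation: ~ l0^(D-2) * l^2 for l << l0, ~ l^D for l >> l0.
V = l**2 * (l**2 + l0**2) ** ((D - 2) / 2)

# Effective dimension as the local slope d ln V / d ln l.
D_eff = np.gradient(np.log(V), np.log(l))

print(D_eff[0], D_eff[-1])          # ~2 at the smallest scales, ~D at the largest
```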
Quantum teleportation achieved via entanglement with zero loss of information, transmitted fully and instantly, without any loss of time "Beam me up, Scotty" - even if Captain Kirk supposedly never said this exact phrase, it remains a popular catch-phrase to this day. Whenever the chief commander of the television series starship USS Enterprise (NCC-1701) wanted to go back to his control centre, this command was enough to take him back instantly - travelling through the infinity of outer space without any loss of time. But is all of this science fiction that was thought up in the 1960s? Not quite: physicists are actually capable of beaming - or "teleporting", as it is called in technical language - if not actual solid particles then at least their properties. "Many of the ideas from Star Trek that back then appeared to be revolutionary have become reality," explains Prof. Dr Alexander Szameit from the University of Jena (Germany). "Doors that open automatically, video telephony or flip phones - all things we have first seen on the starship USS Enterprise," exemplifies the Juniorprofessor of Diamond-/Carbon-Based Optical Systems. So why not teleporting as well? "Elementary particles such as electrons and light particles exist per se in a spatially delocalized state," says Szameit. For these particles, it is thus possible, with a certain probability, to be in different places at the same time. "Within such a system spread across multiple locations, it is possible to transmit information from one location to another without any loss of time." This process is called quantum teleportation and has been known for several years. The team of scientists led by science fiction fan Szameit has now for the first time demonstrated in an experiment that the concept of teleportation persists not only in the world of quantum particles, but also in our classical world. Szameit and his colleagues report these achievements in the scientific journal "Laser & Photonics Reviews". They used a special form of laser beams in the experiment. "As can be done with the physical states of elementary particles, the properties of light beams can also be entangled," explains Dr Marco Ornigotti, a member of Prof. Szameit's team. For physicists, "entanglement" means a sort of codification. "You link the information you would like to transmit to a particular property of the light," clarifies Ornigotti, who led the experiments for the study that has now been presented. In their particular case, the physicists encoded some information in a particular polarisation direction of the laser light and transmitted this information to the shape of the laser beam using teleportation. "With this form of teleportation, we can, however, not bridge any given distance," admits Szameit. "On the contrary, classic teleportation only works locally." But just as aboard the starship USS Enterprise or in quantum teleportation, the information is transmitted fully and instantly, without any loss of time. And this makes this kind of information transmission a highly interesting option in telecommunication, for instance, underlines Szameit. Teleportation describes the transmission of information without transport of either matter or energy. For many years, however, it has been implicitly assumed that this scheme is of inherently nonlocal nature, and therefore exclusive to quantum systems. Here, we experimentally demonstrate that the concept of teleportation can be readily generalized beyond the quantum realm.
We present an optical implementation of the teleportation protocol solely based on classical entanglement between spatial and modal degrees of freedom, entirely independent of nonlocality. Our findings could enable novel methods for distributing information between different transmission channels and may provide the means to leverage the advantages of both quantum and classical systems to create a robust hybrid communication infrastructure.
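A sketch of what 'classical entanglement' means here, in generic notation rather than the paper's own: a paraxial beam whose polarization and spatial mode do not factorize has the same mathematical structure as an entangled two-qubit state, but both degrees of freedom travel in one and the same beam, so no nonlocality is involved:

```latex
% Non-separable classical field: polarization (H/V) and spatial mode (psi_1/psi_2)
% cannot be written as a single product, mirroring the form of an entangled pair.
\[
\mathbf{E}(\mathbf{r}) \;\propto\; \psi_1(\mathbf{r})\,\hat{\mathbf{e}}_H
 \;+\; \psi_2(\mathbf{r})\,\hat{\mathbf{e}}_V
\qquad\longleftrightarrow\qquad
|\Psi\rangle=\tfrac{1}{\sqrt{2}}\bigl(|H\rangle|\psi_1\rangle+|V\rangle|\psi_2\rangle\bigr).
\]
```

It is this formal analogy between local degrees of freedom of a single light beam that the authors exploit to run a teleportation-like protocol without any spatially separated parties.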
Who Was Srinivasa Ramanujan: The Man Who Knew Infinity, and possibly the greatest genius in mathematical history?! A MUST-READ "Sir, an equation has no meaning for me unless it expresses a thought of GOD" ~ Ramanujan ... "These equations had to be true because if they weren't, nobody could have imagined them" ~ G. H. Hardy! Read it all to be convinced of both quotes! A Remarkable Letter. They used to come by physical mail. Now it’s usually email. From around the world, I have for many years received a steady trickle of messages that make bold claims—about prime numbers, relativity theory, AI, consciousness or a host of other things—but give little or no backup for what they say. I’m always so busy with my own ideas and projects that I invariably put off looking at these messages. But in the end I try to at least skim them—in large part because I remember the story of Ramanujan. On about January 31, 1913, a mathematician named G. H. Hardy in Cambridge, England, received a package of papers with a cover letter that began: “Dear Sir, I beg to introduce myself to you as a clerk in the Accounts Department of the Port Trust Office at Madras on a salary of only £20 per annum. I am now about 23 years of age….” and went on to say that its author had made “startling” progress on a theory of divergent series in mathematics, and had all but solved the longstanding problem of the distribution of prime numbers. The cover letter ended: “Being poor, if you are convinced that there is anything of value I would like to have my theorems published…. Being inexperienced I would very highly value any advice you give me. Requesting to be excused for the trouble I give you. I remain, Dear Sir, Yours truly, S. Ramanujan”. Please, do yourselves a favor and read all of Stephen Wolfram's article.
Stochastic Einstein equations with fluctuating FLRWC volume avoid the big bang singularity The idea of taking into account stochastic effects in Einstein gravity in order to investigate quantum gravitational effects is not new. In fact, several approaches have been proposed. For instance, a stochastic theory of gravity was proposed in which the gravitational and matter fields are treated as probabilistic quantities. A different approach is the so-called semiclassical stochastic gravity. In a full theory of quantum gravity, one would expect that a metric operator ĝ_µν(x) is related to the energy-momentum operator T̂_µν(x) in a (still unknown) consistent manner. The semiclassical theory consists in considering the classical metric g_µν(x), instead of the metric operator, as the dynamical variable, together with the expectation value of the energy-momentum operator ⟨T̂_µν(x)⟩, instead of the operator itself. This semiclassical theory of gravity has been used successfully in black hole physics and in cosmological scenarios. As long as the gravitational field is assumed to be described by a classical metric, this semiclassical theory provides the only dynamical equation for this metric. Indeed, the metric couples to the matter fields through the energy-momentum tensor, and for a given quantum state the expectation value is the only physical observable that one can construct. However, since no full theory of quantum gravity is known, it is not clear how to determine the limits of applicability of the semiclassical theory. It is expected that this theory should break down at Planck scales, at which the quantum effects of gravity should not be ignored. Also, the semiclassical theory is expected to break down when the quantum fluctuations of the energy-momentum tensor become large and non-negligible. Stochastic semiclassical theory is an attempt to take into account in a consistent manner the quantum fluctuations of the energy-momentum tensor. Formally, to consider quantum fluctuations at the level of the field equations one adds a stochastic function ξ(x) to the expectation value of the energy-momentum tensor, so that the classical metric g_µν(x) now couples to the sum ⟨T̂_µν(x)⟩ + ξ(x). The resulting semiclassical Einstein-Langevin equations determine the dynamical evolution of the classical metric, which now acquires a stochastic character. It is not easy to analyze physical situations in semiclassical stochastic gravity, because it is necessary to handle not only the conceptual and technical problems of describing quantum fields in curved spacetimes, but also the stochastic source that accounts for the quantum fluctuations. This is probably one of the reasons why research in this direction is less intensive than in other formal quantization methods of gravity. Nevertheless, a concrete set of stochastic Einstein equations was formulated recently in [8]. In this approach, the dynamical variables correspond to stochastic quantities that fluctuate according to some probability density. To obtain the probability density and the field equations, it is assumed that the Ricci flow is a statistical system and that every metric in the Ricci flow corresponds to a microscopic state. In this work, we propose an alternative model to take into account quantum fluctuations. All the approaches used to quantize gravity try to show, or assume, that at some scale spacetime becomes quantized, i.e., it can take only discrete values.
This expectation has been corroborated in loop quantum gravity, where area and volume quantum operators have been obtained with discrete spectra. Although many technical and conceptual issues still remain to be solved, the quantization of the spacetime volume seems to be well established. This implies that quantum fluctuations should appear that would affect the geometric structure of spacetime. We propose a model in which the stochastic character of spacetime is taken into account in a very simple manner at the level of the field equations. In fact, we will see that it is possible to introduce a stochastic factor into the field equations so that the structure of spacetime becomes affected by the presence of fluctuations. Thus, our model consists of a classical theory on the background of a quantum fluctuating volume. We will apply our model to the Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology in order to explore the possibility of avoiding the initial big bang singularity. We will find the conditions under which this avoidance is possible. This work is organized as follows. In Sec. II, we derive the stochastic classical field equations in the case of a perfect-fluid source, and study their properties. We show that the field equations can be interpreted as containing an effective cosmological constant with dynamical properties that arise as a consequence of quantum fluctuations. Interestingly, stiff matter turns out to remain unaffected by the presence of quantum fluctuations. In Sec. III, we investigate the stochastic classical equations in the case of a FLRW cosmological model, and show that the big bang singularity can be avoided in a large class of cosmological models. Finally, in Sec. IV, we discuss our results.
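Written out, the semiclassical Einstein-Langevin prescription sketched in the background discussion above takes the following generic form (this is the standard form from the stochastic-gravity literature, not the specific fluctuating-volume equations the authors go on to derive):

```latex
% Classical metric sourced by the expectation value of the stress tensor plus a
% zero-mean Gaussian stochastic term xi that models its quantum fluctuations.
\[
G_{\mu\nu}[g] \;=\; 8\pi G\,\Bigl(\langle \hat T_{\mu\nu}[g]\rangle \;+\; \xi_{\mu\nu}[g]\Bigr),
\qquad \langle \xi_{\mu\nu}\rangle = 0 ,
\]
% with the two-point correlator of xi fixed by the fluctuations of the stress tensor
% itself, i.e. by the symmetrized two-point function of
% \hat T_{\mu\nu} - \langle\hat T_{\mu\nu}\rangle  (the so-called noise kernel).
```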
Evidence for the Multiverse in the Standard Model of Physics and Beyond In any theory it is unnatural if the observed values of parameters lie very close to special values that determine the existence of complex structures necessary for observers. A naturalness probability, P, is introduced to numerically evaluate the degree of unnaturalness. If P is very small in all known theories, corresponding to a high degree of fine-tuning, then there is an observer naturalness problem. In addition to the well-known case of the cosmological constant, we argue that nuclear stability and electroweak symmetry breaking represent significant observer naturalness problems. The naturalness probability associated with nuclear stability depends on the theory of flavor, but for all known theories is conservatively estimated as P_nuc ≲ 10⁻³–10⁻², and for simple theories of electroweak symmetry breaking P_EWSB ≲ 10⁻²–10⁻¹. This pattern of unnaturalness in three different arenas, cosmology, nuclear physics, and electroweak symmetry breaking, provides evidence for the multiverse, since each problem may be easily solved by environmental selection. In the nuclear case the problem is largely solved even if the multiverse distribution for the relevant parameters is relatively flat. With somewhat strongly varying distributions, it is possible to understand both the close proximity to neutron stability and the values of m_e and m_d − m_u in terms of the electromagnetic mass difference between the proton and neutron, δ_EM ≃ 1 ± 0.5 MeV. It is reasonable that multiverse distributions are strong functions of Lagrangian parameters, since they depend not only on the landscape of vacua, but also on the population mechanism, on “integrating out” other parameters, and on a density-of-observers factor. In any theory with a mass scale M that is the origin of electroweak symmetry breaking, strongly varying multiverse distributions typically lead either to a little hierarchy, v/M ≈ 10⁻²–10⁻¹, or to a large hierarchy, v ≪ M. In certain multiverses, where electroweak symmetry breaking occurs only if M is below some critical value, we find that a little hierarchy develops, with the value of v²/M² suppressed by an extra loop factor as well as by the strength of the distribution. Since the correct theory of electroweak symmetry breaking is unknown, our estimate for P_EWSB is theoretical. The LHC will lead to a much more robust determination of P_EWSB, and, depending on which theory is indicated by the data, the observer naturalness problem of electroweak symmetry breaking may be removed or strengthened. For each of the three arenas, the discovery of a natural theory would eliminate the evidence for the multiverse; but in the absence of such a theory, the multiverse provides a provisional understanding of the data.
Why the Concept of 'Field' and SpaceTime in Quantum Physics Might be Non-Sensical in Quantum-Gravity Theories: Random Tensors Theory In physics, a field is a physical quantity that has a value for each point in space and time. This current Wikipedia definition of a field reminds us how much a preexistent classical space-time, with its associated notion of locality, is deeply ingrained in our common representation of the physical world. However, there is an “emerging” consensus that an ab initio theory of quantum gravity requires at least modifying, and probably abandoning altogether, the concepts of absolute locality and absolute classical space-time [1–3]. It is a task for many generations of theoretical physicists to come, requiring even more work than when classical physics had to be abandoned in favor of quantum mechanics. The tensor track [4–6] can be broadly described as a step in this direction, and more precisely as a program to explore (Euclidean) quantum gravity as a (Euclidean) quantum field theory of space-time rather than on space-time, relying on a specific mathematical formalism, namely the modern theory of random tensors [7–10] and their associated 1/N expansions [11–15], which generalizes in a natural way the theory of random vectors and random matrices [16]. In this formalism observables, interactions, and Feynman graphs are all represented by regular edge-colored graphs. It is both simple and natural, but also general enough to perform quantum sums over space-times in any dimension, weighting them with a discrete analog of the Einstein-Hilbert action [17]. In three dimensions the formalism sums over all topological manifolds, and in four dimensions over all triangulated manifolds with all their different smooth structures [18–21]. More precisely, the tensor track proposes to explore renormalization group flows [22–26] in the associated tensor theory space [27]. In doing this, it can retain several of the most characteristic aspects of the successful quantum field theories of the standard model of particle physics, such as perturbative renormalizability [28–33] and asymptotic freedom [34–38]. Also, at least in the simplest cases, random tensor models and field theories can be constructed nonperturbatively [39–44]. The tensor track can be considered both as a generalization of random matrix models, successfully used to quantize two-dimensional gravity [16], and as an improvement of group field theory [45–51] motivated by the desire to make it renormalizable [52–55]. It proposes another angle for the study of dynamical triangulations [56–58]. It also relies on non-commutative field theory, and in particular on the Grosse-Wulkenhaar model [59–66], as a strong source of inspiration. This brief paper is intended as an introduction in non-technical terms to this approach. We warn the reader that it overlaps strongly with previous reviews and papers of the author, such as [4–6, 27, 38]. After a summer course with J. Schwinger and a long road trip through the United States with R. Feynman, F. Dyson understood how to relate Feynman’s functional integration (sum over histories) to Schwinger’s equations [67]. The iterative solution of these “Schwinger-Dyson” equations is then indexed by Feynman’s graphs. Quantum field theory was born. From the start it had to struggle with a famous problem: the amplitudes associated with Feynman graphs contain ultraviolet divergences.
Any quick fix of this problem through imposing an ultraviolet cutoff violates some of the most desired properties of the theory, such as Euclidean invariance or Osterwalder-Schrader positivity, the technical property which allows continuation to real time and unitarity, and hence ultimately ensures that quantum probabilities add up to 1, as they should in any actual experiment. The full solution of the problem, namely renormalization, took some time to polish. The textbook example is that of an interacting theory obtained by perturbing a non-local free theory by a local interaction with a small coupling constant. The free theory is represented by a Gaussian measure based on a non-local propagator, such as 1/(p² + m²), which however becomes of shorter and shorter range, hence is asymptotically local in the ultraviolet limit. In this case the key ingredients allowing renormalization to work are the notion of scale and the locality principle: high scale (ultraviolet) connected observables seem more and more local when observed with lower scale (infrared) propagators. It also requires a power counting tool which allows one to classify observables into relevant, marginal and irrelevant ones. Under all these conditions, the ultraviolet part of divergent Feynman graphs can be absorbed and understood as modifying the effective value of the coupling constant according to the observation scale. This is the standard example of a renormalization group flow. The renormalization group of K. Wilson and followers is a powerful generalization of the previous example. The action takes place in a certain theory space which is structured by the locality principle: the local potential approximation corresponds to the purely local powers of the field at the same point. Such local operators can be dressed with an arbitrary number of derivatives, creating a hierarchy of “quasi-local” operators. The operator product expansion expands the average value of any non-local operator, when observed through infrared probes, into a dominant local part plus correction terms with more and more derivatives; it is thus an expansion into quasi-local operators which generalizes the multipole expansion of close configurations of charges in classical electrostatics. The (irreversible) renormalization group flows from bare (ultraviolet) to effective (infrared) actions by integrating out (quasi-local) fluctuation modes. Fixed points of the flow need no longer be Gaussian, but interesting ones should still have only a small number of associated relevant and marginal directions. Indeed, it is under this condition that the corresponding physics can be described in terms of only a few physical parameters. Scales and locality play the fundamental role in this standard picture of a renormalization group analysis. Locality is also at the core of the mathematically rigorous formulation of quantum field theory. It is a key Wightman axiom [68], and in algebraic quantum field theory [69] the fundamental structures are the algebras of local observables. It is therefore not surprising that generalizing quantum field theory and the locality principle to an abstract framework independent of any preexisting space-time takes some time. Such a development seems however required for a full-fledged ab initio theory of quantum gravity. Near the Planck scale, space-time should fluctuate so violently that the ordinary notion of locality may no longer be the relevant concept.
Among the many arguments one can list pointing in this direction is the Doplicher-Fredenhagen-Roberts remark that to distinguish two objects closer than the Planck scale would require concentrating so much energy in such a small volume that it would create a black hole, preventing the observation [70]. String theory, which (in the case of closed strings) contains a gravitational sector, is another powerful reason to abandon strict locality. Indeed, strings are one-dimensional extended objects whose interactions cannot be localized at any space-time point. Moreover, closed strings moving in a compactified background may not distinguish between small and large such backgrounds, because of dualities that exchange their translational and “wrapping around” degrees of freedom. Another important remark is that two- and three-dimensional pure quantum gravity are topological theories. In such theories observables, being functions of the topology only, cannot be localized in a particular region of space-time. Remark however that four-dimensional gravity is certainly not a purely topological theory, as confirmed by the recent observation of gravitational waves. It is nevertheless an educated guess that pure (Euclidean) four-dimensional quantum gravity should not completely lose its topological character. In other words, modifications of the topology of space-time should not be strictly forbidden. Indeed, space-times of different (complicated) topologies can interpolate between fixed boundaries, even when the latter have simple topology. Suppressing some of them because they do not have a simple topology would be similar to arbitrarily suppressing instantons from Feynman’s sum over histories, and this is known to lead to quantum probabilities that no longer add up to 1. Space-time dimension four is the first in which gravity has local degrees of freedom, but it is also the first dimension in which exotic smooth structures can appear on a fixed topological background manifold. It would be desirable to establish a strong link between these two properties of dimension four, in particular because smooth structures in four dimensions are intimately related to gauge theories, which describe the other physical interactions, electroweak and strong, in the standard model. Staying within the scope of this small review, let us simply remark again that the basic building blocks proposed by the tensor track, namely tensor invariants, are dual to piecewise linear manifolds with boundaries. Therefore they precisely distinguish, in four dimensions, all smooth structures associated with a given topological structure. Let us try to forget for a minute the enormous amount of technical work spent on various theories of quantum gravity, and consider with a fresh eye a very general, even naive question: what are the most basic mathematical tools and physical principles at work in general relativity and quantum mechanics? How could we most naturally join them together? The very name of general relativity suggests how Einstein derived it: as a consequence of the independence of the laws of physics under any particular choice of space-time coordinates. In a sense our coordinate systems are man-made: the Earth does not come equipped with meridians or parallels drawn on the ground. So it is natural to expect the general laws of physics not to depend on such man-made coordinates. The corresponding symmetry group is the group of diffeomorphisms of space-time.
But since this group depends on the underlying space-time manifold, in particular on its topology, the classical general relativity principle is not fully background invariant. In quantum mechanics, instead of space-time trajectories, the basic objects are states, elements of a Hilbert space, and the observables are operators acting on them. An important observation is that although there are many different manifolds in a given dimension, each with its own different diffeomorphism group, there is only a single Hilbert space of any given dimension N, denoted H_N (up to isomorphism, which here means up to a change of orthonormal basis). There is also a single separable infinite-dimensional Hilbert space, namely the Hilbert space H. Separable means here that H admits a countable orthonormal basis. H can be identified with the space of square-summable sequences ℓ²(ℕ), or with ℓ²(ℤ) = L²(U(1)), or with the L² space of square-integrable functions on any Riemannian manifold of any finite dimension. It is therefore a truly background-independent mathematical structure. Our universe contains a huge number of degrees of freedom. Hence H = lim_{N→∞} H_N seems the right mathematical starting point for a background-independent quantum theory of gravity. That this was not emphasized more in the early days of quantum mechanics probably only means that absolute spacetime was still a deeply ingrained notion in the minds of the theoretical physicists of that time. It is then tempting to extend the general relativity principle into a quantum relativity principle by postulating that the laws of physics should be invariant under the symmetry group of that unique Hilbert space H, namely independent of any preferred choice of an orthonormal basis in H. Just as for coordinate systems, an orthonormal basis seems to be observer-made. In a very large universe with N degrees of freedom this would amount to postulating the U(N) invariance of physics. However, such a postulate quickly appears a bit too extreme. The only (polynomial) U(N) invariants in a finite-dimensional Hilbert space H_N are polynomials in the scalar product. We know that quantum states, being rays in Hilbert space, can be normalized to 1, hence there seem to be no interesting U(N)-invariant physical observables for such states. Beauty has sometimes been defined as a “slightly broken symmetry”, and this may also be a good definition for physics. Just as ordinary quantum field theory is not exactly local but only asymptotically local in the ultraviolet regime, the giant U(N) invariance of H_N at very large N could be asymptotic in a certain regime, which by analogy we should still call the ultraviolet regime. In practice it has to be broken in any actual experiment. If for no other reason, it should be at least because we are finite-size observers in an effective geometry of space-time. Hence we are led to consider an attenuated version of the invariance of physics under change of basis, namely the Quantum Relativity Principle: in the extreme “ultraviolet” regime N → ∞, the laws of physics should become asymptotically independent of any preferred choice of basis in the quantum Hilbert space describing the universe. Of course we have to explain in more detail the meaning of the words “asymptotically independent in the extreme ultraviolet regime N → ∞”. This has to be understood in a renormalization group sense.
To define an abstract (space-time independent) renormalization group and its corresponding asymptotic ultraviolet regime is the main goal of the tensor track, and it requires several ingredients. We first need an initial device to break the U(N) symmetry group and allow us to label and regroup the degrees of freedom in a way suited for a renormalization group analysis. In particular, it should allow us to group the degrees of freedom into renormalization group slices. The ones with many degrees of freedom will be called the ultraviolet slices, and the ones with fewer degrees of freedom the infrared slices. In tensor field theories this device is a U(N)-breaking propagator for the free theory, whose spectrum allows us to label and regroup the degrees of freedom of H_N, exactly as Laplacian- or Dirac-based propagators do in ordinary quantum field theory. Once such a device is in place, the renormalization group then always means a decimation: finding the effective infrared theory after integrating out ultraviolet slices of the theory. Asymptotic invariance in the ultraviolet regime means that the bare action should be closer and closer to an exactly U(N)-invariant action. If we agree on the “quantum relativity principle” above and on this general strategy, the next step is to search for the most natural symmetry breaking pattern that could occur on the path from the extreme ultraviolet regime to effective symmetry-broken infrared actions... which brings us to random tensors: do read.
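As a concrete example of the U(N)-invariant building blocks referred to above, here are the simplest invariants one can write for a rank-3 complex tensor T_{abc} transforming under U(N) x U(N) x U(N), one unitary factor per index slot (a standard example from the random-tensor literature, quoted here only for illustration):

```latex
% Quadratic invariant and the quartic "melonic" invariant of colour 1: every index of T
% is contracted with the index of a conjugate tensor sitting in the same slot, which is
% exactly what invariance under independent unitaries on each slot requires.
\[
I_2=\sum_{a,b,c} T_{abc}\,\bar T_{abc},
\qquad
I_4^{(1)}=\sum_{a,a',b,b',c,c'} T_{abc}\,\bar T_{a'bc}\;T_{a'b'c'}\,\bar T_{ab'c'} .
\]
% Such invariants are encoded by edge-coloured graphs, which are in turn dual to
% piecewise-linear manifolds -- the discretized space-times being summed over.
```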
Can Chaos be Observed in Quantum Gravity? Full general relativity is almost certainly non-integrable and likely chaotic and therefore almost certainly possesses neither differentiable Dirac observables nor a reduced phase space. It follows that the standard notion of observable has to be extended to include non-differentiable or even discontinuous generalized observables. These cannot carry Poisson-algebraic structures and do not admit a standard quantization; one thus faces a quantum representation problem of gravitational observables. This has deep consequences for a quantum theory of gravity, which we investigate in a simple model for a system with Hamiltonian constraint that fails to be completely integrable. We show that basing the quantization on standard topology precludes a semiclassical limit and can even prohibit any solutions to the quantum constraints. Our proposed solution to this problem is to refine topology such that a complete set of Dirac observables becomes continuous. In the toy model, it turns out that a refinement to a polymer-type topology, as e.g. used in loop gravity, is sufficient. Basing quantization of the toy model on this finer topology, we find a complete set of quantum Dirac observables and a suitable semiclassical limit. This strategy is applicable to realistic candidate theories of quantum gravity and thereby suggests a solution to a long-standing problem which implies ramifications for the very concept of quantization. Our work reveals a qualitatively novel facet of chaos in physics and opens up a new avenue of research on chaos in gravity which hints at deep insights into the structure of quantum gravity.
This is deep, as it implies quantum realism: upper limit found for quantum world - an objective border discovered distinguishing quantum laws and the neuro-cognitive laws governing human perception The quantum world and our world of perception obey different natural laws. Leiden physicists search for the border between both worlds. Now they suggest an upper limit in a study reported in Physical Review Letters. The laws of the quantum domain do not apply to our everyday lives. We are used to assigning an exact location and time to objects. But fundamental particles can only be described by probability distributions—imagine receiving a traffic ticket for speeding 30 to 250 km/h somewhere between Paris and Berlin, with a probability peak for 140 km/h in Frankfurt. Because the laws are completely different in both worlds, a clear boundary might exist between them. Size and mass could then be used to determine whether an object obeys quantum or macroscopic laws, but the edge of this boundary is elusive. Leiden physicist Tjerk Oosterkamp and his research group have now established an upper limit for quantum phenomena, closing in on the answer. 'We keep excluding values, so that we slowly close in on the boundary's location,' says Oosterkamp. 'If we only have a small area left, we can better design our experiments to see what is happening at the edge of the quantum world.' According to a certain quantum mechanical model, you can describe a particle's position with a probability distribution that sometimes spontaneously collapses. In that case, its position is, indeed, determined precisely, within a certain margin. This margin and how often the spontaneous collapse occurs are the two parameters that physicists seek. If they determine those, they have a complete formula to define a strict border between quantum and macro. Oosterkamp has now determined an upper limit on these parameters, ranging from 31 collapses per year per atomic mass unit for a margin of around 10 nanometers, to 1 collapse per 100 years for a margin of 1 micron. For their next measurement, the researchers expect fewer collapses, so they can define an even stricter upper limit.
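For a sense of scale, the snippet below is nothing more than a unit conversion of the two corner values quoted above into per-second rates (the continuous exclusion curve between them is in the Physical Review Letters paper and is not reproduced here):

```python
YEAR = 3.156e7  # seconds in a year

# Quoted upper limits on the spontaneous-collapse rate per atomic mass unit.
rate_margin_10nm = 31 / YEAR            # margin ~10 nanometers
rate_margin_1um  = 1 / (100 * YEAR)     # margin ~1 micron

print(f"~{rate_margin_10nm:.1e} collapses per second per amu at a 10 nm margin")   # ~1e-6
print(f"~{rate_margin_1um:.1e} collapses per second per amu at a 1 micron margin") # ~3e-10
```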
Researchers demonstrate 'quantum surrealism': a must-read if you thought quantum 'reality' was already weird and spooky enough New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." In a new version of an old experiment, CIFAR Senior Fellow Aephraim Steinberg (University of Toronto) and colleagues tracked the trajectories of photons as the particles traced a path through one of two slits and onto a screen. But the researchers went further, and observed the "nonlocal" influence of another photon that the first photon had been entangled with. The results counter a long-standing criticism of an interpretation of quantum mechanics called the De Broglie-Bohm theory. Detractors of this interpretation had faulted it for failing to explain the behaviour of entangled photons realistically. For Steinberg, the results are important because they give us a way of visualizing quantum mechanics that's just as valid as the standard interpretation, and perhaps more intuitive. "I'm less interested in focusing on the philosophical question of what's 'really' out there. I think the fruitful question is more down to earth. Rather than thinking about different metaphysical interpretations, I would phrase it in terms of having different pictures. Different pictures can be useful. They can help shape better intuitions." At stake is what is "really" happening at the quantum level. The uncertainty principle tells us that we can never know both a particle's position and momentum with complete certainty. And when we do interact with a quantum system, for instance by measuring it, we disturb the system. So if we fire a photon at a screen and want to know where it will hit, we'll never know for sure exactly where it will hit or what path it will take to get there. The standard interpretation of quantum mechanics holds that this uncertainty means that there is no "real" trajectory between the light source and the screen. The best we can do is to calculate a "wave function" that shows the odds of the photon being in any one place at any time, but won't tell us where it is until we make a measurement. Yet another interpretation, called the De Broglie-Bohm theory, says that the photons do have real trajectories that are guided by a "pilot wave" that accompanies the particle. The wave is still probabilistic, but the particle takes a real trajectory from source to target. It doesn't simply "collapse" into a particular location once it's measured. In 2011 Steinberg and his colleagues showed that they could follow trajectories for photons by subjecting many identical particles to measurements so weak that the particles were barely disturbed, and then averaging out the information. This method showed trajectories that looked similar to classical ones - say, those of balls flying through the air. But critics had pointed out a problem with this viewpoint. Quantum mechanics also tells us that two particles can be entangled, so that a measurement of one particle affects the other. The critics complained that in some cases, a measurement of one particle would lead to an incorrect prediction of the trajectory of the entangled particle. 
They coined the term "surreal trajectories" to describe them. In the most recent experiment, Steinberg and colleagues showed that the surrealism was a consequence of non-locality - the fact that the particles were able to influence one another instantaneously at a distance. In fact, the "incorrect" predictions of trajectories by the entangled photon were actually a consequence of where in their course the entangled particles were measured. Considering both particles together, the measurements made sense and were consistent with real trajectories. Steinberg points out that both the standard interpretation of quantum mechanics and the De Broglie-Bohm interpretation are consistent with experimental evidence, and are mathematically equivalent. But it is helpful in some circumstances to visualize real trajectories, rather than wave function collapses, he says.
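For readers who want a feel for what a pilot-wave trajectory looks like, here is a toy one-dimensional sketch (not a reconstruction of Steinberg's two-slit, weak-measurement experiment): two free Gaussian wave packets converge and interfere, and a handful of trajectories are integrated from the de Broglie-Bohm guidance equation. Units, packet parameters and starting positions are arbitrary illustrative choices.

```python
# Toy 1D illustration of de Broglie-Bohm "pilot wave" trajectories (hbar = m = 1):
# two free Gaussian packets converge and interfere, and a few sample trajectories
# are integrated from the guidance equation  v(x, t) = Im( psi'(x, t) / psi(x, t) ).
# All parameters are arbitrary illustrative choices.
import numpy as np

N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def packet(x0, p0, sigma=2.0):
    """A Gaussian wave packet centred at x0 with mean momentum p0."""
    return np.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * p0 * x)

psi = packet(-15.0, +0.5) + packet(+15.0, -0.5)      # two converging "slits"
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

dt, steps = 0.05, 800
kinetic = np.exp(-1j * 0.5 * k ** 2 * dt)            # free evolution per time step

positions = np.array([-20.0, -15.0, -10.0, 10.0, 15.0, 20.0])
paths = [positions.copy()]

for _ in range(steps):
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))     # advance the wave function
    v = np.imag(np.gradient(psi, x) / (psi + 1e-30)) # guidance velocity field
    positions = positions + dt * np.interp(positions, x, v)
    paths.append(positions.copy())

paths = np.array(paths)                              # one row per time step
print("initial positions:", np.round(paths[0], 2))
print("final positions:  ", np.round(paths[-1], 2))
```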
Single-world interpretations of quantum theory cannot be self-consistent According to quantum theory, a measurement may have multiple possible outcomes. Single-world interpretations assert that, nevertheless, only one of them “really” occurs. Here we propose a gedankenexperiment where quantum theory is applied to model an experimenter who herself uses quantum theory. We find that, in such a scenario, no single-world interpretation can be logically consistent. This conclusion extends to deterministic hidden-variable theories, such as Bohmian mechanics, for they impose a single-world interpretation.
What Are Quantum Gravity's Alternatives To String Theory by Ethan Siegel. A great read on String Theory and its alternatives: 1) Loop Quantum Gravity, 2) Asymptotically Safe Gravity, 3) Causal Dynamical Triangulations, 4) Emergent Gravity! Our current picture of physics, with General Relativity as our theory of gravity and quantum field theories of the other three forces, has a problem that we don’t often talk about: it’s incomplete, and we know it. Einstein’s theory on its own is just fine, describing how matter-and-energy relate to the curvature of space-and-time. Quantum field theories on their own are fine as well, describing how particles interact and experience forces. Normally, the quantum field theory calculations are done in flat space, where spacetime isn’t curved. We can do them in the curved space described by Einstein’s theory of gravity as well (although they’re harder — but not impossible — to do), which is known as semi-classical gravity. This is how we calculate things like Hawking radiation and black hole decay. But even that semi-classical treatment is only valid near and outside the black hole’s event horizon, not at the location where gravity is truly at its strongest: at the singularities (or the mathematically nonsensical predictions) theorized to be at the center. There are multiple physical instances where we need a quantum theory of gravity, all having to do with strong gravitational physics on the smallest of scales: at tiny, quantum distances. Important questions arise, such as: What happens to the gravitational field of an electron when it passes through a double slit? What happens to the information of the particles that form a black hole, if the black hole’s eventual state is thermal radiation?
And what is the behavior of a gravitational field/force at and around a singularity? In order to explain what happens at short distances in the presence of gravitational sources — or masses — we need a quantum, discrete, and hence particle-based theory of gravity. The known quantum forces are mediated by particles known as bosons, or particles with integer spin. The photon mediates the electromagnetic force, the W-and-Z bosons mediate the weak force, while the gluons mediate the strong force. All these types of particles have a spin of 1, which for massive (W-and-Z) particles means they can take on spin values of -1, 0, or +1, while for massless ones (like gluons and photons), they can take on values of -1 or +1 only. The Higgs boson is also a boson, although it doesn’t mediate any forces, and has a spin of 0. Because of what we know about gravitation — General Relativity is a tensor theory of gravity — it must be mediated by a massless particle with a spin of 2, meaning it can take on a spin value of -2 or +2 only. This is fantastic! It means that we already know a few things about a quantum theory of gravity before we even try to formulate one! We know this because whatever the true quantum theory of gravity turns out to be, it must be consistent with General Relativity when we’re not at very small distances from a massive particle or object, just as — 100 years ago — we knew that General Relativity needed to reduce to Newtonian gravity in the weak-field regime. The big question, of course, is how? How do you quantize gravity in a way that’s correct (at describing reality), consistent (with both GR and QFT), and hopefully leads to calculable predictions for new phenomena that might be observed, measured or somehow tested? The leading contender, of course, is something you’ve long heard of: String Theory. String Theory is an interesting framework — it can include all of the standard model fields and particles, both the fermions and the bosons. It also includes a 10-dimensional tensor-scalar theory of gravity, with 9 space and 1 time dimensions and a scalar field parameter. If we erase six of those spatial dimensions (through an incompletely defined process that people just call compactification) and let the parameter (ω) that defines the scalar interaction go to infinity, we can recover General Relativity. But there are a whole host of phenomenological problems with String Theory. One is that it predicts a large number of new particles, including all the supersymmetric ones, none of which have been found. It claims not to need “free parameters” like the standard model has (for the masses of the particles), but it replaces that problem with an even worse one. String theory refers to “10^500 possible solutions,” where these solutions refer to the vacuum expectation values of the string fields, and there’s no mechanism to recover them; if you want String Theory to work, you need to give up on dynamics, and simply say, “well, it must’ve been anthropically selected.” There are frustrations, drawbacks, and problems with the very idea of String Theory. But the biggest problem with it may not be these mathematical ones. Instead, it may be that there are four other alternatives that could lead us to quantum gravity; approaches that are completely independent of String Theory.
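The "let ω go to infinity" step mentioned above can be made concrete with the textbook scalar-tensor (Brans-Dicke) action below; this is the generic scalar-tensor form rather than the full string-frame effective action, and it is quoted here only to show how General Relativity is recovered in that limit.

```latex
S=\frac{1}{16\pi}\int d^{4}x\,\sqrt{-g}\left[\phi R-\frac{\omega}{\phi}\,
  g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi\right]
  +S_{\text{matter}}[g_{\mu\nu},\psi]
% As \omega \to \infty the scalar \phi is driven to a constant value 1/G,
% and the action reduces to the Einstein-Hilbert action of General Relativity:
S\;\xrightarrow[\;\omega\to\infty\;]{}\;
  \frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\,R+S_{\text{matter}}
```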
The Quantum WaveFunction is a Real Physical Object: The 'Pusey'-Theorem. 3 links in this deep-read: A) Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state truly represents. One possibility is that a pure quantum state corresponds directly to reality. However, there is a long history of suggestions that a quantum state (even a pure state) represents only knowledge or information about some aspect of reality. Here we show that any model in which a quantum state represents mere information about an underlying physical state of the system, and in which systems that are prepared independently have independent physical states, must make predictions which contradict those of quantum theory. B) Quantum mechanics is an outstandingly successful description of nature, underpinning fields from biology through chemistry to physics. At its heart is the quantum wavefunction, the central tool for describing quantum systems. Yet it is still unclear what the wavefunction actually is: does it merely represent our limited knowledge of a system, or is it in direct correspondence to reality? Recent no-go theorems argued that if there was any objective reality, then the wavefunction must be real. However, that conclusion relied on debatable assumptions. Here we follow a different approach without these assumptions and experimentally bound the degree to which knowledge interpretations can explain quantum phenomena. Using single photons, we find that no knowledge interpretation can fully explain the limited distinguishability of non-orthogonal quantum states in three and four dimensions. Assuming that a notion of objective reality exists, our results thus strengthen the view that the wavefunction should directly correspond to this reality. C) At the heart of the weirdness for which the field of quantum mechanics is famous is the wavefunction, a powerful but mysterious entity that is used to determine the probabilities that quantum particles will have certain properties. Now, a preprint posted online on 14 November reopens the question of what the wavefunction represents — with an answer that could rock quantum theory to its core. Whereas many physicists have generally interpreted the wavefunction as a statistical tool that reflects our ignorance of the particles being measured, the authors of the latest paper argue that, instead, it is physically real. “I don't like to sound hyperbolic, but I think the word 'seismic' is likely to apply to this paper,” says Antony Valentini, a theoretical physicist specializing in quantum foundations at Clemson University in South Carolina. Valentini believes that this result may be the most important general theorem relating to the foundations of quantum mechanics since Bell’s theorem, the 1964 result in which Northern Irish physicist John Stewart Bell proved that if quantum mechanics describes real entities, it has to include mysterious “action at a distance”. Action at a distance occurs when pairs of quantum particles interact in such a way that they become entangled. But the new paper, by a trio of physicists led by Matthew Pusey at Imperial College London, presents a theorem showing that if a quantum wavefunction were purely a statistical tool, then even quantum states that are unconnected across space and time would be able to communicate with each other.
As that seems very unlikely to be true, the researchers conclude that the wavefunction must be physically real after all. David Wallace, a philosopher of physics at the University of Oxford, UK, says that the theorem is the most important result in the foundations of quantum mechanics that he has seen in his 15-year professional career. “This strips away obscurity and shows you can’t have an interpretation of a quantum state as probabilistic,” he says.
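The "limited distinguishability of non-orthogonal quantum states" that the second item refers to has a very simple textbook form for a single shot, the Helstrom bound; the sketch below is that standard illustration, not the analysis used in the cited experiment, and the example states are arbitrary.

```python
# Limited distinguishability in its simplest textbook form: for two equally
# likely pure states, the best single-shot guess succeeds with probability
#   1/2 * (1 + sqrt(1 - |<psi|phi>|^2))     (the Helstrom bound).
import numpy as np

def helstrom_success(psi, phi):
    """Optimal probability of correctly identifying which of two equally
    likely pure states was prepared."""
    overlap = abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1.0 + np.sqrt(1.0 - overlap))

psi = np.array([1.0, 0.0])                           # |0>
theta = np.pi / 8
phi = np.array([np.cos(theta), np.sin(theta)])       # non-orthogonal to |0>

print(f"|<psi|phi>|^2 = {abs(np.vdot(psi, phi))**2:.3f}")
print(f"best guessing probability = {helstrom_success(psi, phi):.3f}")
```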
Gravitational waves detected 100 years after Einstein's prediction: a new window on the universe opens (February 11, 2016). For the first time, scientists have observed ripples in the fabric of spacetime called gravitational waves, arriving at Earth from a cataclysmic event in the distant universe. This confirms a major prediction of Albert Einstein's 1915 general theory of relativity and opens an unprecedented new window to the cosmos. Gravitational waves carry information about their dramatic origins and about the nature of gravity that cannot be obtained from elsewhere. Physicists have concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes to produce a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed. The gravitational waves were detected on Sept. 14, 2015 at 5:51 a.m. EDT (09:51 UTC) by both of the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington. The LIGO observatories are funded by the National Science Foundation (NSF), and were conceived, built and are operated by the California Institute of Technology (Caltech) and the Massachusetts Institute of Technology (MIT). The discovery, accepted for publication in the journal Physical Review Letters, was made by the LIGO Scientific Collaboration (which includes the GEO Collaboration and the Australian Consortium for Interferometric Gravitational Astronomy) and the Virgo Collaboration using data from the two LIGO detectors. Based on the observed signals, LIGO scientists estimate that the black holes for this event were about 29 and 36 times the mass of the sun, and the event took place 1.3 billion years ago. About three times the mass of the sun was converted into gravitational waves in a fraction of a second -- with a peak power output about 50 times that of the whole visible universe. By looking at the time of arrival of the signals -- the detector in Livingston recorded the event 7 milliseconds before the detector in Hanford -- scientists can say that the source was located in the Southern Hemisphere. According to general relativity, a pair of black holes orbiting around each other lose energy through the emission of gravitational waves, causing them to gradually approach each other over billions of years, and then much more quickly in the final minutes. During the final fraction of a second, the two black holes collide at nearly half the speed of light and form a single more massive black hole, converting a portion of the combined black holes' mass to energy, according to Einstein's formula E=mc^2. This energy is emitted as a final strong burst of gravitational waves. These are the gravitational waves that LIGO observed. The existence of gravitational waves was first demonstrated in the 1970s and 1980s by Joseph Taylor, Jr., and colleagues. In 1974, Taylor and Russell Hulse discovered a binary system composed of a pulsar in orbit around a neutron star. Taylor and Joel M. Weisberg in 1982 found that the orbit of the pulsar was slowly shrinking over time because of the release of energy in the form of gravitational waves. For discovering the pulsar and showing that it would make possible this particular gravitational wave measurement, Hulse and Taylor were awarded the 1993 Nobel Prize in Physics.
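As a rough sanity check on the quoted energetics, here is a back-of-the-envelope Python sketch; the 20-millisecond burst duration is an assumption made for this estimate, not a figure from the announcement.

```python
# Order-of-magnitude check of the energy and peak power quoted above.
c = 2.998e8                  # speed of light, m/s
M_sun = 1.989e30             # solar mass, kg

E_gw = 3.0 * M_sun * c ** 2  # ~3 solar masses radiated as gravitational waves
print(f"Energy radiated: {E_gw:.2e} J")

# The burst lasted a fraction of a second; take ~20 ms purely for illustration
# (an assumption for this sketch, not a number from the announcement).
peak_power_w = E_gw / 0.02
L_sun = 3.8e26               # solar luminosity, W, for scale
print(f"Rough peak power: {peak_power_w:.2e} W (~{peak_power_w / L_sun:.1e} Suns)")
```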
The new LIGO discovery is the first observation of gravitational waves themselves, made by measuring the tiny disturbances the waves make to space and time as they pass through the earth. "Our observation of gravitational waves accomplishes an ambitious goal set out over five decades ago to directly detect this elusive phenomenon and better understand the universe, and, fittingly, fulfills Einstein's legacy on the 100th anniversary of his general theory of relativity," says Caltech's David H. Reitze, executive director of the LIGO Laboratory. The discovery was made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first generation LIGO detectors, enabling a large increase in the volume of the universe probed -- and the discovery of gravitational waves during its first observation run. NSF is the lead financial supporter of Advanced LIGO. Funding organizations in Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council, STFC) and Australia (Australian Research Council) also have made significant commitments to the project. Several of the key technologies that made Advanced LIGO so much more sensitive were developed and tested by the German UK GEO collaboration. Significant computer resources were contributed by the AEI Hannover Atlas Cluster, the LIGO Laboratory, Syracuse University and the University of Wisconsin-Milwaukee. Several universities designed, built and tested key components for Advanced LIGO: The Australian National University, the University of Adelaide, the University of Florida, Stanford University, Columbia University of the City of New York and Louisiana State University. "In 1992, when LIGO's initial funding was approved, it represented the biggest investment NSF had ever made," says France Córdova, NSF director. "It was a big risk. But NSF is the agency that takes these kinds of risks. We support fundamental science and engineering at a point in the road to discovery where that path is anything but clear. We fund trailblazers. It's why the U.S. continues to be a global leader in advancing knowledge." LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of more than 1,000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the LSC develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LSC detector network includes the LIGO interferometers and the GEO600 detector. The GEO team includes scientists at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), Leibniz Universität Hannover, along with partners at the University of Glasgow, Cardiff University, the University of Birmingham, other universities in the United Kingdom and the University of the Balearic Islands in Spain. "This detection is the beginning of a new era: The field of gravitational wave astronomy is now a reality," says Gabriela González, LSC spokesperson and professor of physics and astronomy at Louisiana State University. LIGO was originally proposed as a means of detecting gravitational waves in the 1980s by Rainer Weiss, professor of physics, emeritus, from MIT; Kip Thorne, Caltech's Richard P. Feynman Professor of Theoretical Physics, emeritus; and Ronald Drever, professor of physics, emeritus, also from Caltech. 
"The description of this observation is beautifully described in the Einstein theory of general relativity formulated 100 years ago and comprises the first test of the theory in strong gravitation. It would have been wonderful to watch Einstein's face had we been able to tell him," says Weiss. "With this discovery, we humans are embarking on a marvelous new quest: the quest to explore the warped side of the universe -- objects and phenomena that are made from warped spacetime. Colliding black holes and gravitational waves are our first beautiful examples," says Thorne. Virgo research is carried out by the Virgo Collaboration, consisting of more than 250 physicists and engineers belonging to 19 different European research groups: six from Centre National de la Recherche Scientifique (CNRS) in France; eight from the Istituto Nazionale di Fisica Nucleare (INFN) in Italy; two in the Netherlands with Nikhef; the Wigner RCP in Hungary; the POLGRAW group in Poland; and the European Gravitational Observatory (EGO), the laboratory hosting the Virgo detector near Pisa in Italy. Fulvio Ricci, Virgo spokesperson, notes that: "This is a significant milestone for physics, but more importantly merely the start of many new and exciting astrophysical discoveries to come with LIGO and Virgo." Bruce Allen, managing director of the Max Planck Institute for Gravitational Physics adds: "Einstein thought gravitational waves were too weak to detect, and didn't believe in black holes. But I don't think he'd have minded being wrong!" "The Advanced LIGO detectors are a tour de force of science and technology, made possible by a truly exceptional international team of technicians, engineers, and scientists," says David Shoemaker of MIT, the project leader for Advanced LIGO. "We are very proud that we finished this NSF-funded project on time and on budget." At each observatory, the 2 1/2-mile (4-km) long, L-shaped LIGO interferometer uses laser light split into two beams that travel back and forth down the arms (four-foot diameter tubes kept under a near-perfect vacuum). The beams are used to monitor the distance between mirrors precisely positioned at the ends of the arms. According to Einstein's theory, the distance between the mirrors will change by an infinitesimal amount when a gravitational wave passes by the detector. A change in the lengths of the arms smaller than one-ten-thousandth the diameter of a proton (10-19 meter) can be detected. "To make this fantastic milestone possible took a global collaboration of scientists -- laser and suspension technology developed for our GEO600 detector was used to help make Advanced LIGO the most sophisticated gravitational wave detector ever created," says Sheila Rowan, professor of physics and astronomy at the University of Glasgow. Independent and widely separated observatories are necessary to determine the direction of the event causing the gravitational waves, and also to verify that the signals come from space and are not from some other local phenomenon. Toward this end, the LIGO Laboratory is working closely with scientists in India at the Inter-University Centre for Astronomy and Astrophysics, the Raja Ramanna Centre for Advanced Technology, and the Institute for Plasma to establish a third Advanced LIGO detector on the Indian subcontinent. Awaiting approval by the government of India, it could be operational early in the next decade. 
The additional detector will greatly improve the ability of the global detector network to localize gravitational-wave sources. "Hopefully this first observation will accelerate the construction of a global network of detectors to enable accurate source location in the era of multi-messenger astronomy," says David McClelland, professor of physics and director of the Centre for Gravitational Physics at the Australian National University.
Anti-Nominalistic Scientific Realism: a Defence - Stathis Psillos "Philosophy of science proper has been a battleground in which a key battle in the philosophy of mathematics is fought. On the one hand, indispensability arguments capitalise on the strengths of scientific realism, and in particular of the no-miracles argument (NMA), in order to suggest that a) the reality of mathematical entities (in their full abstractness) follows from the truth of (literally understood) scientific theories; and b) there are good reasons to take certain theories to be true. On the other hand, arguments from the causal inertness of abstract entities capitalise on the strengths of scientific realism, and in particular of NMA, in order to suggest that a) if mathematical entities are admitted, the force of NMA as an argument for the truth of scientific theories is undercut; and b) the best bet for scientific realism is to become Nominalistic Scientific Realism (NSR) and to retreat to the nominalistic adequacy of theories. In what follows, I will try to show that anti-nominalistic scientific realism is still defensible and that the best arguments for NSR fail on many counts. In Section 2, I will argue that there are good reasons not to read NMA as being at odds with the reality of abstract entities. In Section 3, I will discuss what is required for NSR to get off the ground. In Section 4, I will question the idea of the nominalistic content of theories as well as the idea of causal activity as a necessary condition for commitment to the reality of an entity. In Section 5, I will challenge the notion of nominalistic adequacy of theories. In Section 6, I will try to motivate the thought that there are mixed physico-mathematical truth-makers, some of which are bottom-level. Finally, in Section 7, I will offer a diagnosis as to what the root problem is in the debate between Platonistic Scientific Realism and NSR and a conjecture as to how it might be (re)solved."
Quantum Cosmology and Quantized Einsteinian General Relativity lead to a Parmenidean Conclusion that there is no Time "After a brief introduction to issues that plague the realization of a theory of quantum gravity, I suggest that the main one concerns defining superpositions of causal structures. This leads me to a distinction between time and space, to a further degree than that present in the canonical approach to general relativity. With this distinction, one can make sense of superpositions as interference between alternative paths in the relational configuration space of the entire Universe. But the full use of relationalism brings us to a timeless picture of Nature, as it does in the canonical approach (which culminates in the Wheeler-DeWitt equation). After a discussion of Parmenides and the Eleatics’ rejection of time, I show that there is middle ground between their view of absolute timelessness and a view of physics taking place in timeless configuration space. In this middle ground, even though change does not fundamentally exist, the illusion of change can be recovered in a way not permitted by Parmenides. It is recovered through a particular density distribution over configuration space which gives rise to ‘records’. Incidentally, this distribution seems to have the potential to dissolve further aspects of the measurement problem that can still be argued to haunt the application of decoherence to Many-Worlds quantum mechanics. I end with a discussion indicating that the conflict between the conclusions of this paper and our view of the continuity of the self may still intuitively bother us. Nonetheless, those conclusions should be no more challenging to our intuition than Derek Parfit’s thought experiments on the subject."
Theorists propose a new method to probe the beginning of the universe How did the universe begin? And what came before the Big Bang? Cosmologists have asked these questions ever since discovering that our universe is expanding. The answers aren't easy to determine. The beginning of the cosmos is cloaked and hidden from the view of our most powerful telescopes. Yet observations we make today can give clues to the universe's origin. New research suggests a novel way of probing the beginning of space and time to determine which of the competing theories is correct. The most widely accepted theoretical scenario for the beginning of the universe is inflation, which predicts that the universe expanded at an exponential rate in the first fleeting fraction of a second. However a number of alternative scenarios have been suggested, some predicting a Big Crunch preceding the Big Bang. The trick is to find measurements that can distinguish between these scenarios. One promising source of information about the universe's beginning is the cosmic microwave background (CMB) - the remnant glow of the Big Bang that pervades all of space. This glow appears smooth and uniform at first, but upon closer inspection varies by small amounts. Those variations come from quantum fluctuations present at the birth of the universe that have been stretched as the universe expanded.
The conventional approach to distinguish different scenarios searches for possible traces of gravitational waves, generated during the primordial universe, in the CMB. "Here we are proposing a new approach that could allow us to directly reveal the evolutionary history of the primordial universe from astrophysical signals. This history is unique to each scenario," says coauthor Xingang Chen of the Harvard-Smithsonian Center for Astrophysics (CfA) and the University of Texas at Dallas. While previous experimental and theoretical studies give clues to spatial variations in the primordial universe, they lack the key element of time. Without a ticking clock to measure the passage of time, the evolutionary history of the primordial universe can't be determined unambiguously. "Imagine you took the frames of a movie and stacked them all randomly on top of each other. If those frames aren't labeled with a time, you can't put them in order. Did the primordial universe crunch or bang? If you don't know whether the movie is running forward or in reverse, you can't tell the difference," explains Chen. This new research suggests that such "clocks" exist, and can be used to measure the passage of time at the universe's birth. These clocks take the form of heavy particles, which are an expected product of the "theory of everything" that will unite quantum mechanics and general relativity. They are named the "primordial standard clocks." Subatomic heavy particles will behave like a pendulum, oscillating back and forth in a universal and standard way. They can even do so quantum-mechanically without being pushed initially. Those oscillations or quantum wiggles would act as clock ticks, and add time labels to the stack of movie frames in our analogy. "Ticks of these primordial standard clocks would create corresponding wiggles in measurements of the cosmic microwave background, whose pattern is unique for each scenario," says coauthor Yi Wang of The Hong Kong University of Science and Technology. However, current data isn't accurate enough to spot such small variations. Ongoing experiments should greatly improve the situation. Projects like CfA's BICEP3 and Keck Array, and many other related experiments worldwide, will gather exquisitely precise CMB data at the same time as they are searching for gravitational waves. If the wiggles from the primordial standard clocks are strong enough, experiments should find them in the next decade. Supporting evidence could come from other lines of investigation, like maps of the large-scale structure of the universe including galaxies and cosmic hydrogen. And since the primordial standard clocks would be a component of the "theory of everything," finding them would also provide evidence for physics beyond the Standard Model at an energy scale inaccessible to colliders on the ground.
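A toy numerical sketch of the "clock" idea, not the authors' calculation: a heavy scalar field oscillating in an expanding background ticks at an essentially fixed rate set by its mass, and those ticks time-stamp the expansion history. The field mass, the power-law expansion and the integration parameters below are all illustrative choices.

```python
# Toy "primordial standard clock": a heavy scalar field phi in an expanding
# background obeys  phi'' + 3 H phi' + m^2 phi = 0.  Its oscillation phase
# advances at a nearly fixed rate (zero crossings every ~pi/m), so the
# oscillations act as clock ticks on the expansion history.
import numpy as np

m = 20.0                       # field mass, arbitrary units (assumed)
p = 2.0 / 3.0                  # a(t) ~ t^p, a matter-like expansion law (assumed)

def hubble(t):
    return p / t               # H = a'/a for a power-law scale factor

t, dt = 1.0, 1e-4
phi, dphi = 1.0, 0.0           # start the field displaced from its minimum
ticks = []                     # times at which phi crosses zero ("clock ticks")

while t < 10.0:
    # semi-implicit Euler step for phi'' = -3 H phi' - m^2 phi
    dphi += (-3.0 * hubble(t) * dphi - m ** 2 * phi) * dt
    new_phi = phi + dphi * dt
    if phi > 0 >= new_phi or phi < 0 <= new_phi:
        ticks.append(t)
    phi, t = new_phi, t + dt

print(f"{len(ticks)} zero crossings; expected spacing ~ pi/m = {np.pi / m:.3f}")
print("first few tick intervals:", np.round(np.diff(ticks[:6]), 3))
```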
Belief revision generalized: A joint characterization of Bayes' and Jeffrey's rules The authors present a general framework for representing belief-revision rules and use it to characterize Bayes' rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes' rule, but a new assignment of probabilities to some events. Despite their differences, Bayes' and Jeffrey's rules can be characterized in terms of the same axioms: responsiveness, which requires that revised beliefs incorporate what has been learnt, and conservativeness, which requires that beliefs on which the learnt input is ‘silent’ do not change. To illustrate the use of non-Bayesian belief revision in economic theory, the authors sketch a simple decision-theoretic application.
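A minimal sketch of the two rules on a toy discrete probability space (the example worlds and numbers are invented for illustration): Bayes' rule conditions on the certainty that an event occurred, Jeffrey's rule revises on a new probability assignment over a partition, and Bayes' rule is recovered as the special case in which the learnt cell receives probability one.

```python
# Bayes' rule vs Jeffrey's rule on a discrete probability space.

def bayes_update(prior, event):
    """prior: dict world -> prob; event: set of worlds learnt with certainty."""
    z = sum(p for w, p in prior.items() if w in event)
    return {w: (p / z if w in event else 0.0) for w, p in prior.items()}

def jeffrey_update(prior, partition_probs):
    """partition_probs maps each cell (frozenset of worlds) to its new probability.
    Within each cell the relative probabilities are preserved, which is the
    'conservativeness' requirement described above."""
    posterior = {}
    for cell, q in partition_probs.items():
        z = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = q * prior[w] / z
    return posterior

# Example: three worlds; we learn that rain is now 80% likely (not certain).
prior = {"rain_heavy": 0.1, "rain_light": 0.2, "dry": 0.7}
rain = frozenset({"rain_heavy", "rain_light"})
dry = frozenset({"dry"})

print(bayes_update(prior, rain))                      # certain evidence: Bayes
print(jeffrey_update(prior, {rain: 0.8, dry: 0.2}))   # uncertain evidence: Jeffrey
```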
The Universe Is an Inside-Out Star by Jonah Miller: I will skip to the philosophically crucial point by Jonah: The Big Bang Wasn’t a Point. One thing I like about this analogy is that it takes the center of the sun, which is a single point, and smears it out so that it becomes the surface of a very large sphere, one with the same radius as the observable universe. I like this because it reverses a common misconception. People usually imagine that the Big Bang, the beginning of the universe, was a single point from which everything emerged. This is completely wrong. The beginning of the universe happened about fourteen billion years ago at every point in space. So, in our inside-out sun analogy, the smeared stellar center is the Big Bang. (Of course, there may not have been a Big Bang if, for example, cosmic inflation is correct. But that’s a story for another time.)
Time, chance and quantum theory When quantum mechanics emerged in the 1920s as the basic format in which all subsequent theories of physics would be couched, it exhibited two distinct features which represented its radical conceptual departure from classical physics. These two features have been given [21] the confusingly similar labels of indeterminacy and indeterminism. Indeterminacy is the more startling and harder to grasp of these two new concepts, yet it has the clearer mathematical formulation in the theory. Physically, it is the idea that physical quantities need not have definite values. Mathematically, this is expressed by the representation of such quantities by hermitian operators instead of real numbers, and the mathematical fact that not every vector is an eigenvector of such an operator. This yields the physical notion of superposition, well known as a stumbling block in understanding the theory. Indeterminism, on the other hand, is simply the opposite of determinism, as canonically expressed by Laplace [12]. It is the denial of the idea that the future of the universe is rigidly determined by its present state. It can be seen as not so much a rejection, more a reluctant abandonment: if Laplace’s determinism is seen, not as a statement of faith, but as a declaration of intent — “We will look for an exact cause in the past of everything that happens in the physical world” — then quantum theory is an admission of failure: “We will not try to find the reason why a radioactive nucleus decays at one time rather than another; we do not believe that there is any physical reason for its decay at a particular time.” Indeterminism, unlike indeterminacy, does not present any conceptual difficulty. In the history of human thought, the determinism of classical physics is a fairly recent and, to many, a rather troubling idea. To the pre-scientific mind, whether in historical times or in our own childhood, the apparent fact of experience is surely that there are things which might happen but they might not: you can never be sure of the future. As it is put in the Bible, “time and chance happeneth to them all”. On simply being told that the new theory of physics is to be indeterministic, one might think one could see the general form that theory would take: given a mathematical description of the state of a physical system at time t0, there would be a mathematical procedure which, instead of yielding a single state at each subsequent time t > t0, would yield a set of possible states and a probability distribution over this set. This would give a precise expression of the intuition that some things are more likely to happen than others; the probabilities would be objective facts about the world. But this is not how indeterminism is manifested in quantum mechanics. As it was conceived by Bohr and formalised by Dirac [6] and von Neumann [22] and most textbooks ever since (the “Copenhagen interpretation”), the indeterminism does not enter into the way the world develops by itself, but only in the action of human beings doing experiments. In the basic laws governing physical evolution, probabilities occur only in the projection postulate, which refers to the results of experiments. The idea that experiments should play a basic part in the theory has been trenchantly criticised by John Bell [4]. This unsatisfactory, poorly defined and, frankly, ugly aspect of the theory constitutes the measurement problem of quantum mechanics.
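Indeterminacy in the operator sense described above can be shown in a few lines: a Hermitian observable, a state that is not one of its eigenvectors, and the Born-rule probabilities for its possible values. This is a generic textbook example, not an illustration taken from the paper itself.

```python
# Indeterminacy in miniature: a quantity represented by a Hermitian operator,
# and a state that is not one of its eigenvectors, so the quantity has no
# definite value -- only Born-rule probabilities for its possible outcomes.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # a Hermitian observable (Pauli X)
state = np.array([1, 0], dtype=complex)         # |0>, not an eigenvector of X

eigvals, eigvecs = np.linalg.eigh(X)
print("possible values:", eigvals)              # -1 and +1

# Born rule: probability of each outcome is |<eigenvector|state>|^2
probs = np.abs(eigvecs.conj().T @ state) ** 2
for value, prob in zip(eigvals, probs):
    print(f"outcome {value:+.0f} with probability {prob:.2f}")   # 0.5 each
```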
It could be argued that the indeterminism of quantum theory is played down in the Copenhagen interpretation, in the sense that the role of time in physical probability is minimised. This is not a formal feature of the theory, but an aspect of its presentation. Experiments are presented as if they took no time to be completed; probabilities are properties of the instantaneous state. Heisenberg referred to the contents of the state vector as representing potentiality, and this aspect of the state vector has been emphasised by de Ronde [5], but it seems to me that the idea of something being potential looks to the future for its actualisation. The theory remained in this state (and some say it still does) until the papers of Everett and Wheeler in 1957. Everett, as endorsed by Wheeler, argued that there was no need for the projection postulate and no need for experiments to play a special role in the theory. But this did not give a satisfactory place for indeterminism in the theory: on the contrary, quantum mechanics according to Everett is essentially deterministic and probability is generally regarded as a problem for the Everett-Wheeler interpretation. There are at least four different concepts that go under the name of “probability” (or, if you think that it is a single concept, four theories of probability) [10]. Probability can be a strength of logical entailment, a frequency, a degree of belief, or a propensity. Physicists tend to favour what they see as a no-nonsense definition of probability in terms of frequencies, but in the context of physics this is ultimately incoherent. Classical deterministic physics has a place for probability in situations of incomplete knowledge; this is how Laplace considers probability. The concept being used here is degree of belief. Some quantum physicists [8] would like to retain this concept in the indeterministic realm. Many, however, would reject this subjectivist approach as abandoning the ideal of an objective theory. The most appropriate concept for an indeterministic theory, as outlined above, would seem to be that of propensity; if there is a set of possible future states for a system, the system has various propensities to fall into these states. This gives an objective meaning of “probability”. This concept is also appropriate to the Copenhagen interpretation of quantum mechanics: an experimental setup has propensities to give the different possible results of the experiment. The Everett-Wheeler picture, however, seems to have no place for propensities. Everett liked to describe the universal state vector, which occupies centre stage in his theory, as giving a picture of many worlds; Wheeler, I think rightly, rejected this language. Both views, however, attribute simultaneous existence to the different components of the universal state vector which are the seat of probabilities in the Copenhagen interpretation. It is hard to see how propensities can attach to them. Everett, consequently, pursued the idea of probabilities as frequencies, and even claimed to derive the quantum-mechanical formula for probability (the Born rule) rather than simply postulating it. It is generally agreed, however, that his argument was circular ([23], p. 127). In this article I will explore the Everett-Wheeler formulation of quantum mechanics from the point of view of a sentient physical system inside the world, such as each of us.
I argue that it is necessary to recognise the validity of two contexts for physical propositions: an external context, in which the universal state vector provides the truth about the world and its development is deterministic; and an internal context in which the world is seen from a component of the universal state vector. In the latter context the development in time is indeterministic; probabilities apply, in a particular component at a particular time, to statements referring to the future. This leads to a fifth concept of probability: the probability of a statement in the future tense is the degree of truth of that statement. I conclude with a formulation of the predictions of quantum mechanics from the internal perspective which fits the template for an indeterministic theory described earlier in this section: for each state at time t0, the universal state vector provides a set of possible states at subsequent times t > t0 and a probability distribution over this set. I start by arguing for the relative-state interpretation of Everett and Wheeler as the best approach to the problem of understanding quantum theory.
Varieties of Capitalism: Some Philosophical and Historical Considerations The literature on varieties of capitalism has stimulated some authors to challenge notions of ‘essentialism’ and even the concept of capitalism itself. This essay argues that the existence of varieties of capitalism does not rule out the need for, or possibility of, specification or definition of that type. Accordingly, ‘capitalism’ is still a viable term. The critique of ‘essentialism’ is also countered, after clarifying its meaning. In particular, a suitably defined ‘essentialism’ does not imply some kind of ontological or explanatory reductionism—‘economic’ or otherwise. But while adopting what are basically Aristotelian arguments about essences, we need to reject Aristotle’s auxiliary notion that variety generally results from temporary deviations from a representative type or trend. Furthermore, capitalism is a historically specific and relatively recent system: we need to develop a classificatory definition of that system that demarcates it from other past or possible social formations.
Stepping beyond our 3-D world: Great Read on a 'Magical' Relation between String-Theory (Physics), Biology and Mathematics Since the dawn of time, humans have endeavoured to unravel the laws governing the physical world around us. Over centuries we have tried to discover a Theory of Everything. Possible candidates for this cachet, such as String Theory and Grand Unified Theory, require higher dimensions or higher-dimensional symmetries, for instance ten dimensions, despite their radical difference from the world we actually experience. One such symmetry – known as E8 – exists in eight dimensions and is the largest symmetry without counterparts in every dimension and is therefore called exceptional. It features prominently in String Theory and Grand Unified Theory. Now a University of York scientist has constructed E8 for the first time, along with other exceptional 4D symmetries, in the 3D space we inhabit. Dr Pierre-Philippe Dechant, of the Departments of Mathematics and Biology at York, has created these exceptional symmetries essentially as 3D phenomena in disguise. This new view of the exceptional geometries has the potential to open large areas of mathematics and physics up for reinterpretation. The research is published in Proceedings of the Royal Society A. Dr Dechant, who is also a member of the York Centre for Complex Systems Analysis, developed a unique combination of working with the Platonic root systems for applications in mathematical virology and an unusual Clifford algebraic approach, to lay the foundation for this fundamental new insight. The construction of E8, which is fundamental to String Theory and Grand Unified Theory, using the 3D geometry of the icosahedron – a polyhedron with 20 faces – is ground-breaking and completely against the prevailing eight-dimensional view. It was made possible by the fact that 3D geometric quantities (points, lines, planes, volumes) in the Clifford algebra approach actually themselves form an eight-dimensional space. He said: "Usually when one argues for higher-dimensional theories one considers them as fundamental, and we might only experience a part of this whole structure in our 3D world. The results of this paper completely subvert this by showing that these 'obscure' higher-dimensional symmetries actually have 'space' to fit into the 3D geometry of our natural biological world. "This was made possible by my unusual position of working on the symmetries of viruses whilst having a mathematical physics background and is thus a unique inspiration of mathematical biology back into mathematical physics."
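The eight-dimensional space hiding inside ordinary 3D space that the article alludes to is simply the basis of the three-dimensional Clifford (geometric) algebra: one scalar, three vectors, three bivectors (planes) and one trivector (volume), 2^3 = 8 elements in total. A minimal enumeration, for illustration only:

```python
# Count the basis of the 3D Clifford (geometric) algebra: every subset of the
# three generators {e1, e2, e3} labels one basis element, giving 2^3 = 8.
from itertools import combinations

generators = ["e1", "e2", "e3"]
basis = [
    "".join(subset) if subset else "1"        # the empty product is the scalar 1
    for r in range(len(generators) + 1)
    for subset in combinations(generators, r)
]
print(len(basis), "basis elements:", basis)
# 8 basis elements: ['1', 'e1', 'e2', 'e3', 'e1e2', 'e1e3', 'e2e3', 'e1e2e3']
```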
On the Brittleness of Bayesian Inference With the advent of high-performance computing, Bayesian methods are becoming increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods can impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is a pressing question to which there currently exist positive and negative answers. We report new results suggesting that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems (and their discretizations) with finite information on the data-generating distribution. If closeness is defined in terms of the total variation (TV) metric or the matching of a finite system of generalized moments, then (1) two practitioners who use arbitrarily close models and observe the same (possibly arbitrarily large amount of) data may reach opposite conclusions; and (2) any given prior and model can be slightly perturbed to achieve any desired posterior conclusion. The mechanism causing brittleness/robustness suggests that learning and robustness are antagonistic requirements, which raises the possibility of a missing stability condition when using Bayesian inference in a continuous world under finite information.
Quantum leap: Physicists can better study the quantum behaviour of objects on the atomic and macro-scale Erwin Schrödinger was an interesting man. Not only did he conceive a most imaginative way to (theoretically) kill a cat, he was in a constant state of superposition between monogamy and not. He shared a household with one wife and one mistress. (Although he got into trouble at Oxford for this unconventional lifestyle, it didn’t pose a problem in largely Catholic Dublin.) Just like the chemist Albert Hofmann, who tried LSD (lysergic acid diethylamide) on himself first, Schrödinger might have pondered how it would feel for a person to be in a genuine state of quantum superposition. Or even how a cat might feel. In principle, quantum mechanics would certainly allow for Schrödinger, or any of us, to enter a state of quantum superposition. That is, according to quantum theory, a large object could be in two quantum states at the same time. It is not just for subatomic particles. Everyday experience, of course, indicates that big objects behave classically. In special labs and with a lot of effort, we can observe the quantum properties of photons or electrons. But even the best labs and greatest efforts are yet to find them in anything approaching the size of a cat. Could they be found? The question is more than head-in-the-clouds philosophy. One of the most important experimental questions in quantum physics is whether or not there is a point or boundary at which the quantum world ends and the classical world begins. A straightforward approach to clarifying this question is to experimentally verify the quantum properties of ever-larger macroscopic objects. Scientists find these properties in subatomic particles when they confirm that the particles sometimes behave as a wave, with characteristic peaks and dips. Likewise, lab set-ups based on the principle of quantum interference, using many mirrors, lasers and lenses, have successfully found wave behaviour in macromolecules that are more than 800 atoms in size. Other techniques could go larger. Called atom interferometers, they probe atomic matter waves in the way that conventional interferometers measure light waves. Specifically, they divide the atomic matter wave into two separate wave packets, and recombine them at the end. The sensitivity of these devices is related to how far apart they can perform this spatial separation. Until now, the best atomic interferometers could put the wave packets about 1 centimetre apart. In this issue, physicists demonstrate an astonishing advance in this regard. They show quantum interference of atomic wave packets that are separated by 54 centimetres. Although this does not mean that we have an actual cat in a state of quantum superposition, at least a cat could now comfortably take a nap between the two branches of a superposed quantum state. (No cats were harmed in the course of these experiments.) Making huge molecules parade their wave nature and constructing atom interferometers that can separate wave packets by half a metre are extraordinary experimental achievements. And the technology coming from these experiments has many practical implications: atom interferometers splendidly measure acceleration, which means that they could find uses in navigation. And they would make excellent detectors for gravitational waves, because they are not sensitive to seismic noise. 
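The reason scaling up is so hard can be seen from the de Broglie relation λ = h/(mv): the wavelength shrinks as the mass grows. The masses and speeds in the sketch below are illustrative guesses, not the parameters of the experiments described.

```python
# de Broglie wavelength lambda = h / (m v) for objects of increasing mass.
# The specific masses and speeds are illustrative, not taken from the article.
h = 6.626e-34          # Planck's constant, J s
amu = 1.66e-27         # atomic mass unit, kg

def de_broglie_m(mass_kg, speed_m_s):
    """Matter-wave wavelength in metres."""
    return h / (mass_kg * speed_m_s)

print(f"single Rb atom at 3 m/s:       {de_broglie_m(87 * amu, 3.0):.2e} m")
print(f"800-atom molecule at 150 m/s:  {de_broglie_m(800 * 12 * amu, 150.0):.2e} m")
```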
Schrödinger was more of a philosopher than an engineer, so it is plausible that he would not have taken that much interest in the practical ramifications of his theory. However, he would surely have clapped his hands at the prospect that experimenters could one day induce large objects to have quantum properties. And there are plenty of proposals for how to ramp up the size of objects with proven quantum behaviour: a microscopic mirror in a quantum superposition, created through interaction with a photon, would involve about 10^14 atoms. And, letting their imaginations run wild, researchers have proposed a method to do the same with small biological structures such as viruses. To be clear, science is not close to putting a person or a cat into quantum superposition. Many say that, because of the way large objects interact with the environment, we will never be able to measure a person’s quantum behaviour. But it’s Christmas, so indulge us. If we could, and if we could be aware of such a superposition state, then how would we feel? Because ‘feeling’ would amount to measuring the wave function of the object, and because measuring causes the wave function to collapse, it should really feel like, well, nothing — or perhaps just a grin.
Quantum Mechanics and Kant’s Phenomenal World Quantum indeterminism seems incompatible with Kant’s defense of causality in his Second Analogy. The Copenhagen interpretation also takes quantum theory as evidence for anti-realism. This article argues that the law of causality, as transcendental, applies only to the world as observable, not to hypothetical (unobservable) objects such as quarks, detectable only by high energy accelerators. Taking Planck’s constant and the speed of light as the lower and upper bounds of observability provides a way of interpreting the observables of quantum mechanics as empirically real even though they are transcendentally (i.e., preobservationally) ideal.
Do we have free will? Researchers test mechanisms involved in decision-making + New brain research refutes results of earlier studies that cast doubts on free will. Our choices seem to be freer than previously thought. Using computer-based brain experiments, researchers from Charité - Universitätsmedizin Berlin studied the decision-making processes involved in voluntary movements. The question was: Is it possible for people to cancel a movement once the brain has started preparing it? The conclusion the researchers reached was: Yes, up to a certain point—the 'point of no return'. The results of this study have been published in the journal PNAS. The background to this new set of experiments lies in the debate regarding conscious will and determinism in human decision-making, which has attracted researchers, psychologists, philosophers and the general public, and which has been ongoing since at least the 1980s. Back then, the American researcher Benjamin Libet studied the nature of cerebral processes of study participants during conscious decision-making. He demonstrated that conscious decisions were initiated by unconscious brain processes, and that a wave of brain activity referred to as a 'readiness potential' could be recorded even before the subject had made a conscious decision. How can the unconscious brain processes possibly know in advance what decision a person is going to make at a time when they are not yet sure themselves? Until now, the existence of such preparatory brain processes has been regarded as evidence of 'determinism', according to which free will is nothing but an illusion, meaning our decisions are initiated by unconscious brain processes, and not by our 'conscious self'. In conjunction with Prof. Dr. Benjamin Blankertz and Matthias Schultze-Kraft from Technische Universität Berlin, a team of researchers from Charité's Bernstein Center for Computational Neuroscience, led by Prof. Dr. John-Dylan Haynes, has now taken a fresh look at this issue. Using state-of-the-art measurement techniques, the researchers tested whether people are able to stop planned movements once the readiness potential for a movement has been triggered. "The aim of our research was to find out whether the presence of early brain waves means that further decision-making is automatic and not under conscious control, or whether the person can still cancel the decision, i.e. use a 'veto'," explains Prof. Haynes. As part of this study, researchers asked study participants to enter into a 'duel' with a computer, and then monitored their brain waves throughout the duration of the game using electroencephalography (EEG). A specially-trained computer was then tasked with using these EEG data to predict when a subject would move, the aim being to out-maneuver the player. This was achieved by manipulating the game in favor of the computer as soon as brain wave measurements indicated that the player was about to move. If subjects are able to evade being predicted based on their own brain processes, this would be evidence that control over their actions can be retained for much longer than previously thought, which is exactly what the researchers were able to demonstrate. "A person's decisions are not at the mercy of unconscious and early brain waves. They are able to actively intervene in the decision-making process and interrupt a movement," says Prof. Haynes. "Previously people have used the preparatory brain signals to argue against free will.
Our study now shows that the freedom is much less limited than previously thought. However, there is a 'point of no return' in the decision-making process, after which cancellation of movement is no longer possible." Further studies are planned in which the researchers will investigate more complex decision-making processes. Part 2, second link: When people find themselves having to make a decision, the assumption is that the thoughts, or voice that is the conscious mind at work, deliberate, come to a decision, and then act. This is because for most people, that's how the whole process feels. But back in the early 1980s, an experiment conducted by Benjamin Libet, a neuroscientist with the University of California, cast doubt on this idea. He and his colleagues found in watching EEG readings of volunteers who had been asked to make a spontaneous movement (it didn't matter what kind) that brain activity prior to the movement indicated that the subconscious mind came to a decision about what movement to make before the person experienced the feeling of making the decision themselves. This, Libet argued, showed that people don't have nearly the degree of free will regarding decision making as has been thought. Since then, no one has really refuted the theory. Now new research by a European team has found evidence that the brain activity recorded by Libet and others is due to something else, and thus, as they write in their paper published in the Proceedings of the National Academy of Sciences, that people really do make decisions in their conscious mind. To come to this conclusion, the team looked at how the brain responds to other decision forcing stimuli, such as what to make of visual input. In such instances, earlier research has shown that the brain amasses neural activity in preparation for a response, giving us something to choose from. Thus the response unfolds as the data is turned into imagery our brains can understand and we then interpret what we see based on what we've learned in the past. The researchers suggest that choosing to move an arm or leg or finger works the same way. Our brain gets a hint that we are contemplating making a movement, so it gets ready. And it's only when a critical mass occurs that decision making actually takes place. To test this theory, the team built a computer model of what they called a neural accumulator, then watched as it behaved in a way that looked like it was building up to a potential action. Next, they repeated the original experiment conducted by Libet.
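For readers who want a feel for the "neural accumulator" picture invoked above, here is a toy simulation (a generic leaky stochastic accumulator with arbitrary parameters, not the authors' model): noisy evidence builds toward a threshold, and crossing the threshold plays the role of the point of no return after which the movement is triggered.

```python
# Toy leaky stochastic accumulator: noisy evidence drifts toward a threshold;
# crossing the threshold triggers a "movement" (the point of no return).
# All parameter values are arbitrary choices for illustration.
import random

def accumulator_trial(drift=0.02, noise=0.1, threshold=1.0, leak=0.01, steps=2000):
    """Return the step at which the accumulator crosses threshold, or None."""
    activity = 0.0
    for step in range(steps):
        activity += drift - leak * activity + random.gauss(0.0, noise)
        if activity >= threshold:
            return step          # threshold crossed: movement is triggered
    return None                  # no movement on this trial

random.seed(0)
crossings = [accumulator_trial() for _ in range(200)]
triggered = [c for c in crossings if c is not None]
print(f"{len(triggered)}/200 trials triggered a movement")
print(f"mean time to threshold: {sum(triggered) / len(triggered):.0f} steps")
```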