More Theoretical 'Evidence' for SuperString Theory as a Low-Energy Limit of Supergroup Gauge Theories. We consider Yang-Mills theory with the N = 2 super translation group in d = 10 auxiliary dimensions as the structure group. The gauge theory is defined on a direct product manifold Σ₂ × H², where Σ₂ is a two-dimensional Lorentzian manifold and H² is the open disc in R² with the boundary S¹ = ∂H². We show that in the adiabatic limit, when the metric on H² is scaled down, the Yang-Mills action supplemented by the d = 5 Chern-Simons term becomes the Green-Schwarz superstring action. More concretely, the Yang-Mills action in the infrared limit flows to the kinetic part of the superstring action, and the d = 5 Chern-Simons action, defined on a 5-manifold with the boundary Σ₂ × H², flows to the Wess-Zumino part of the superstring action. The same kind of duality between gauge fields and strings is established for the type IIB superstring on the AdS₅ × S⁵ background and a supergroup gauge theory with PSU(2,2|4) as the structure group. Summing up: we have introduced a Yang-Mills-Chern-Simons model whose action functional in the low-energy limit reduces to the Green-Schwarz superstring action. It was shown that the B-field and Wess-Zumino-type terms in string theory appear from the Yang-Mills topological terms (4.9) and (4.6), respectively. Combining these results with the results for the bosonic string [22], one can show that heterotic string theory can also be embedded into Yang-Mills theory as a subsector of low-energy states. In fact, the described correspondence is a new kind of gauge/string duality. Thus, all five superstring theories can be described in a unified manner via the infrared limit of Yang-Mills-Chern-Simons theory. Such supergroup gauge theories have almost never been studied in the literature (see the discussion in [23]).
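A schematic way to keep track of that claim (my own shorthand, not the authors' exact formulas): write the Green-Schwarz action as S_GS = S_kin + S_WZ, where S_kin ∝ ∫_{Σ₂} d²σ √(−h) h^{αβ} Π_α · Π_β is built from the supersymmetric pullbacks Π_α^a = ∂_α X^a − i θ̄ Γ^a ∂_α θ, and S_WZ is the integral of a closed three-form over a 3-manifold whose boundary is the worldsheet Σ₂. The correspondence described above then says that, as the metric on H² is scaled down, S_YM on Σ₂ × H² flows to S_kin, while the d = 5 Chern-Simons term on a 5-manifold bounded by Σ₂ × H² flows to S_WZ.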
Deep: Evidence of Spontaneous Dimensional Reduction to 2D in Quantum Gravity. Open the link, this is a must-read: Hints from a number of different approaches to quantum gravity point to a phenomenon of “spontaneous dimensional reduction” to two spacetime dimensions near the Planck scale. I examine the physical meaning of the term “dimension” in this context, summarize the evidence for dimensional reduction, and discuss possible physical explanations.
Very Deep: Spacetime Equals Quantum Entanglement How does the semiclassical picture arise from the fundamental theory of quantum gravity? Recently it has become increasingly clear that quantum entanglement in holographic [1, 2] descriptions plays an important role in the emergence of the classical spacetime of general relativity [3–8]. This raises the possibility that entanglement is indeed the defining property that controls the physics of dynamical spacetimes. In this letter we take the view that entanglement in holographic theories determines gravitational spacetimes at the semiclassical level. Rather than proving this statement, we adopt it as a guiding principle and explore its consequences. This principle has profound implications for the structure of the Hilbert space of quantum gravity. In particular, it allows us to obtain a classical spacetime as a superposition of (an exponentially large number of) different classical spacetimes. We show that despite its unconventional nature, this picture admits the standard interpretation of superpositions of well-defined semiclassical spacetimes in the limit that the number of holographic degrees of freedom becomes large. To illustrate these concepts, we use a putative holographic theory for cosmological spacetimes, in which the effects appear cleanly. Our basic points, however, persist more generally; in particular, we expect that they apply to a region of the bulk in the AdS/CFT correspondence [9]. In the context of Friedmann-Robertson-Walker (FRW) universes, we find an interesting “Russian doll” structure: states representing a universe filled with a fluid having an equation of state parameter w are obtained as exponentially many (exponentially rare) superpositions of those having an equation of state with w′ > w (respectively w′ < w). While completing this work, we received Ref. [10] by Almheiri, Dong and Swingle which studies how holographic entanglement entropies are related to linear operators in the AdS/CFT correspondence. Their analysis of the thermodynamic limit of the area operators overlaps with ours. See also Ref. [11] for related discussion. We begin by describing the holographic framework we work in. The AdS/CFT case appears as a special situation of this more general (albeit more conjectural) framework. The covariant entropy bound [12] implies that the entropy on a null hypersurface generated by a congruence of light rays terminated by a caustic or singularity is bounded by its largest cross sectional area A divided by 2 in Planck units. (The entropy on each side of the largest cross sectional surface is bounded by A/4.) This suggests that for a fixed gravitational spacetime, the holographic theory lives on a hypersurface—called the holographic screen—on which null hypersurfaces foliating the spacetime have the largest cross sectional areas [13]. The procedure of erecting a holographic screen has a large ambiguity. A particularly useful choice [14, 15] is to adopt an “observer centric reference frame.” Let the origin of the reference frame follow a timelike curve p(τ) which passes through a fixed spacetime point p₀ at τ = 0, and consider the congruence of past-directed light rays emanating from p₀. Assuming the null energy condition, the light rays focus toward the past, and we may identify the apparent horizon, i.e. the codimension-2 surface on which the expansion of the light rays vanishes, to be an equal-time hypersurface—called a leaf—of a holographic screen.
Repeating the procedure for all τ, we obtain a specific holographic screen, with the leaves parameterized by τ, corresponding to foliating the spacetime region accessible to the observer at p(τ). Such a foliation is consonant with complementarity [16] which asserts that a complete description of a system refers only to the spacetime region that can be accessed by a single observer. With this construction, we can view a quantum state of the holographic theory as living on a leaf of the holographic screen obtained as above. We can then consider the collection of all possible quantum states on all possible leaves, obtained by considering all timelike curves in all spacetimes.
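For reference, the two ingredients named above in their standard forms (textbook statements, not quoted from the paper): the covariant entropy bound says the entropy passing through a light sheet with largest cross-sectional area A is bounded by A/4 in Planck units per side (A/2 if both sides of the maximal-area surface are counted), and a leaf of the holographic screen is the codimension-2 surface where the expansion θ = d(ln 𝒜)/dλ of the past-directed light-ray congruence emanating from p(τ) vanishes, θ = 0, which is exactly the apparent-horizon condition used in the construction.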
Quantum Physics keeps Getting 'Weirder': Quantum-Coherent Mixtures of Incompatible Causality Mechanisms - the Cause-Effect and the Common-Cause Ones Understanding the causal influences that hold among the parts of a system is critical both to explaining that system’s natural behaviour and to controlling it through targeted interventions. In a quantum world, understanding causal relations is equally important, but the set of possibilities is far richer. The two basic ways in which a pair of time-ordered quantum systems may be causally related are by a cause-effect mechanism or by a common cause acting on both. Here, we show that it is possible to have a coherent mixture of these two possibilities. We realize such a nonclassical causal relation in a quantum optics experiment and derive a set of criteria for witnessing the coherence based on a quantum version of Berkson’s paradox. The interplay of causality and quantum theory lies at the heart of challenging foundational puzzles, such as Bell’s theorem and the search for quantum gravity, but could also provide a resource for novel quantum technologies.
Spontaneous Wave-Function Collapse: a Solution to the Measurement Problem and a Source of the Decay in Mesonic Systems Dynamical reduction models propose a solution to the measurement problem in quantum mechanics: the collapse of the wave function becomes a physical process. We consider the two most promising collapse models, the QMUPL (Quantum Mechanics with Universal Position Localization) model and the mass-proportional CSL (Continuous Spontaneous Localization) model, and derive their effect on flavour oscillations of neutral mesons. We find that the dynamics of neutral mesons depends on the very assumptions of the noise field underlying any collapse model; thus the physics of the noise field becomes accessible to investigation for these particular systems. Secondly, we find that the decay property of the mass eigenstates can be dynamically generated by the spontaneous collapse in space. Taking collapse models seriously, we conclude that accelerator facilities have measured the absolute masses of the eigenstates of the Hamiltonian giving rise to decay; this in turn is on the same footing as the mass difference giving rise to the flavour oscillations (predicted also by standard quantum mechanics). Thus dynamical reduction models can cover the full dynamics, oscillation and decay, of neutral mesons.
Solution to the Quantum Measurement Problem via Entanglement-STT-Theory with Ontological Objectivity Regarding the Wave-Function and the Collapse It seems that entanglement should be the key to solving the measurement problem because one of the essential differences between the quantum world and the classical world is the presence or absence of entanglement. If we wish to establish a theory such that collapse occurs when some physical threshold is reached, it is reasonable to expect that such a threshold should be an entanglement-related quantity. Entanglement itself is not an appropriate choice for the threshold for the following reason. If collapse occurs when entanglement reaches a high threshold, even the classical world should exhibit entanglement-related phenomena because highly entangled systems that have not yet reached the threshold would exist in the classical world. This supposition contradicts everyday observations. As a suitable threshold that overcomes this difficulty, we choose the entangling speed, i.e., the time derivative of the von Neumann entropy of a system. The entangling speed can be large even when entanglement itself is small, and, in particular, it can be enormous when a system is simultaneously interacting with a large number of environmental particles. Simultaneous interaction with a myriad of particles is a common feature of macroscopic classical objects. Hence, if we postulate that collapse occurs when the entangling speed reaches a certain threshold, then macroscopic objects should be able to reach that threshold easily. After collapse, the entangling speed of the object can increase again very rapidly, resulting in multiple consecutive collapses within a short period of time. We will see that this nearly continuous collapse causes macroscopic objects to behave classically. In this respect, we expect the entangling-speed-threshold theory to suitably explain the quantum-to-classical transition. According to the orthodox interpretation of quantum mechanics, there are two different types of processes in the universe: deterministic unitary processes and indeterministic collapse processes that occur at measurement. This dualism has made many physicists uncomfortable. Because, even at measurement, one can define a closed system that contains both the measured system and the measuring apparatus, whether the collapse actually occurs has remained controversial. Moreover, even if one accepts the collapse postulate, the question of what conditions are required to achieve measurement has remained problematic. In addition, the question of why classical objects are observed in a certain preferred basis among infinitely many legitimate bases has also been intensively discussed. These problems are collectively known as the quantum measurement problem and have been a subject of debate since the birth of quantum mechanics. Many theories and interpretations have been proposed to address the quantum measurement problem. Some of them explicitly or at least tacitly accept the collapse postulate, whereas others reject the notion of collapse. On the collapse side, examples include the Copenhagen interpretation, the von Neumann-Wigner interpretation [1], and objective collapse theories such as the Ghirardi-Rimini-Weber theory [2] and the Penrose theory [3]. On the no-collapse side, examples include the de Broglie-Bohm theory [4], the many-worlds interpretation [5], the many-minds interpretation [6], the consistent histories interpretation [7], and many others.
Despite the variety of these endeavors, there has been no broad consensus that the problem has been clearly solved. The theory proposed in this paper is an objective collapse theory in which both the wavefunction and the process of collapse are regarded as ontologically objective. We accept the dualism that states that there are fundamentally two different types of processes in the universe: unitary processes and collapse processes. Because current quantum theory is satisfactory for unitary processes, we intend to establish a theory about collapse processes. The key postulate of the theory is that the state of a system collapses when the entangling speed of that system reaches a threshold. We call this theory the entangling-speed-threshold theory. Using this theory, we provide plausible answers to the questions of where and when collapse occurs, what determines the collapse basis, how subsystems should be defined given a large system, and what determines the observables (or, more generally, the measurement operators). We also explain how deterministic classical dynamics emerges from indeterministic quantum collapse, where nearly continuous collapse plays a crucial role in explaining the quantum-to-classical transition. In addition, we show that before and after collapse, energy is accurately conserved when the environment consists of many degrees of freedom. To convince ourselves that the theory is consistent with everyday observations of classical phenomena, we apply the theory to a macroscopic flying body such as a bullet in the air, and show that the collapse basis of the bullet derived by the theory has both a highly localized position and a well-defined momentum. The success achieved in deriving the classical states of a macroscopic body can be considered as evidence that the theory is well suited for explaining the quantum-to-classical transition. Finally, we suggest an experiment that can verify the theory. To resolve the quantum measurement problem, we propose an objective collapse theory in which both the wavefunction and the process of collapse are regarded as ontologically objective. The theory, which we call the entangling-speed-threshold theory, postulates that collapse occurs when the entangling speed of a system reaches a threshold, and the collapse basis is determined so as to eliminate the entangling speed and to minimize its increasing rate. Using this theory, we provide answers to the questions of where and when collapse occurs, how the collapse basis is determined, what systems are (in other words, what the actual tensor product structure is), and what determines the observables. We also explain how deterministic classical dynamics emerges from indeterministic quantum collapse, explaining the quantum-to-classical transition. In addition, we show that the theory guarantees energy conservation to a high accuracy. We apply the theory to a macroscopic flying body such as a bullet in the air, and derive a satisfactory collapse basis that is highly localized in both position and momentum, consistent with our everyday observation. Finally, we suggest an experiment that can verify the theory.
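To make the central quantity concrete, here is a minimal toy sketch in Python (my own illustration, not the paper's formalism): two qubits coupled by an Ising-type interaction, with the "entangling speed" computed as the numerical time derivative of one qubit's von Neumann entropy and compared against a purely hypothetical threshold.

import numpy as np
from scipy.linalg import expm

# Toy model: two qubits coupled by H = g * (sigma_z x sigma_z).
# "Entangling speed" = numerical time derivative of qubit A's von Neumann entropy.
# The threshold value below is purely hypothetical, chosen for illustration only.

sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = 1.0                                     # coupling strength, arbitrary units
H = g * np.kron(sz, sz)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi0 = np.kron(plus, plus)                  # start in the product state |+>|+>

def entropy_A(psi):
    # von Neumann entropy (in nats) of qubit A's reduced density matrix
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_A = np.trace(rho, axis1=1, axis2=3)  # partial trace over qubit B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

dt = 1e-3
threshold = 0.5                             # hypothetical entangling-speed threshold
for step in range(1, 2001):
    t = step * dt
    S_now = entropy_A(expm(-1j * H * t) @ psi0)
    S_prev = entropy_A(expm(-1j * H * (t - dt)) @ psi0)
    speed = (S_now - S_prev) / dt
    if speed >= threshold:
        print(f"entangling speed {speed:.3f} nats/time crossed the threshold at t = {t:.3f}")
        break

In the actual proposal the relevant system is coupled to a macroscopic number of environmental particles at once, which is what makes the entangling speed of a bullet-sized object enormous; the two-qubit toy only shows how the quantity is defined and monitored.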
Mathematics Solves the Puzzle of why Humanity Evolved to Cooperate: Altruism is Favored by Random Fluctuations in Nature Why do we feel good about giving to charity when there is no direct benefit to ourselves, and feel bad about cheating the system? Mathematicians may have found an answer to the longstanding puzzle as to why we have evolved to cooperate. An international team of researchers, publishing in the Proceedings of the National Academy of Sciences, has found that altruism is favoured by random fluctuations in nature, offering an explanation to the mystery as to why this seemingly disadvantageous trait has evolved. The researchers, from the Universities of Bath, Manchester and Princeton, developed a mathematical model to predict the path of evolution when altruistic "cooperators" live alongside "cheats" who use up resources but do not themselves contribute. Humans are not the only organisms to cooperate with one another. The scientists used the example of Brewer's yeast, which can produce an enzyme called invertase that breaks down complex sugars in the environment, creating more food for all. However, those that make this enzyme use energy that could instead have been used for reproduction, meaning that a mutant "cheating" strain that waits for others to do the hard work would be able to breed faster as a result. Darwinian evolution suggests that their ability to breed faster will allow the cheats (and their cheating offspring) to proliferate and eventually take over the whole population. This problem is common to all altruistic populations, raising the difficult question of how cooperation evolved. Dr Tim Rogers, Royal Society University Research Fellow at the University of Bath, said: "Scientists have been puzzled by this for a long time. One dominant theory was that we act more favourably towards genetic relatives than strangers, summed up by J. B. S. Haldane's famous claim that he would jump into a river to save two brothers or eight cousins. "What we are lacking is an explanation of how these behaviours could have evolved in organisms as basic as yeast. Our research proposes a simple answer - it turns out that cooperation is favoured by chance." The key insight is that the total size of population that can be supported depends on the proportion of cooperators: more cooperation means more food for all and a larger population. If, due to chance, there is a random increase in the number of cheats then there is not enough food to go around and total population size will decrease. Conversely, a random decrease in the number of cheats will allow the population to grow to a larger size, disproportionately benefiting the cooperators. In this way, the cooperators are favoured by chance, and are more likely to win in the long term. Dr George Constable, soon to join the University of Bath from Princeton, uses the analogy of flipping a coin, where heads wins £20 but tails loses £10: "Although the odds of winning or losing are the same, winning is more good than losing is bad. Random fluctuations in cheat numbers are exploited by the cooperators, who benefit more than they lose out."
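To make the coin analogy concrete: each flip is worth on average 0.5 × (+£20) + 0.5 × (−£10) = +£5, so even though upward and downward fluctuations are equally likely, their payoffs are not symmetric; in the model it is the cooperators who capture the upside of that asymmetry.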
Complex Networks Theory Could Unify Einstein's GR [Gravity] and Quantum Physics In quantum gravity, several approaches have been proposed until now for the quantum description of discrete geometries. These theoretical frameworks include loop quantum gravity, causal dynamical triangulations, causal sets, quantum graphity, and energetic spin networks. Most of these approaches describe discrete spaces as homogeneous network manifolds. Here we define Complex Quantum Network Manifolds (CQNM) describing the evolution of quantum network states, constructed from growing simplicial complexes of dimension d. We show that in d = 2 CQNM are homogeneous networks, while for d > 2 they are scale-free, i.e., they are characterized by large inhomogeneities of degrees, like most complex networks. From the self-organized evolution of CQNM, quantum statistics emerge spontaneously. Here we define the generalized degrees associated with the δ-faces of the d-dimensional CQNM, and we show that the statistics of these generalized degrees can follow either Fermi-Dirac, Boltzmann or Bose-Einstein distributions, depending on the dimension of the δ-faces.
The Quantum Minimum Time Interval Implies a Heisenberg Uncertainty Relation Between Space and Time: ΔrΔt > Gℏ/c⁴, and that is a Foundational Problem for Einstein-GR We critically discuss the measurement of very short time intervals. By means of a Gedankenexperiment, we describe an ideal clock based on the occurrence of completely random events. Many previous thought experiments have suggested fundamental Planck-scale limits on measurements of distance and time. Here we present a new type of thought experiment, based on a different type of clock, that provides further support for the existence of such limits. We show that the minimum time interval Δt that this clock can measure scales as the inverse of its size Δr. This implies an uncertainty relation between space and time: ΔrΔt > Gℏ/c⁴, where G, ℏ, and c are the gravitational constant, the reduced Planck constant, and the speed of light, respectively. We outline and briefly discuss the implications of this uncertainty conjecture.
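For scale: Gℏ/c⁴ is just the product of the Planck length and the Planck time, Gℏ/c⁴ = √(ℏG/c³) · √(ℏG/c⁵) = l_P t_P ≈ (1.6 × 10⁻³⁵ m)(5.4 × 10⁻⁴⁴ s) ≈ 8.7 × 10⁻⁷⁹ m·s, so the proposed bound only becomes relevant when spatial and temporal resolutions are pushed toward the Planck scale together.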
Deep! Quantum Mechanics = Bayesian Theory in the Complex Numbers We consider the problem of gambling on a quantum experiment and enforce rational behaviour by a few rules. These rules yield, in the classical case, the Bayesian theory of probability via duality theorems. In our quantum setting, they yield the Bayesian theory generalised to the space of Hermitian matrices. This very theory is quantum mechanics: in fact, we derive all four of its postulates from the generalised Bayesian theory. This implies that quantum mechanics is self-consistent. It also leads us to reinterpret the main operations in quantum mechanics as probability rules: Bayes’ rule (measurement), marginalisation (partial tracing), independence (tensor product). To say it with a slogan, we obtain that quantum mechanics is the Bayesian theory in the complex numbers.
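The dictionary in that closing slogan can be checked directly in the special case of diagonal ("classical") density matrices, where the quantum operations must reduce to ordinary probability rules. A minimal sanity check in Python (my own toy example, not the authors' derivation):

import numpy as np

# For a diagonal two-qubit density matrix, the quantum operations the abstract
# reinterprets as probability rules reduce to the familiar classical ones.

p = np.array([[0.1, 0.2],
              [0.3, 0.4]])                      # joint distribution p(a, b)
rho = np.diag(p.flatten())                      # embed as a diagonal density matrix

# marginalisation  <->  partial trace over subsystem B
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(np.diag(rho_A), p.sum(axis=1))

# Bayes' rule  <->  projective measurement update (condition on B = 0)
P = np.kron(np.eye(2), np.diag([1.0, 0.0]))     # projector onto outcome B = 0
rho_post = P @ rho @ P / np.trace(P @ rho)
p_post = p[:, 0] / p[:, 0].sum()                # classical conditional p(a | b = 0)
assert np.allclose(rho_post.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3).diagonal(), p_post)

# independence  <->  tensor product of states
rho_indep = np.kron(np.diag([0.6, 0.4]), np.diag([0.7, 0.3]))
assert np.allclose(np.diag(rho_indep), np.outer([0.6, 0.4], [0.7, 0.3]).flatten())
print("classical probability rules recovered from the quantum operations")

The substance of the paper is of course the converse direction, namely that the rational-gambling rules force the full Hermitian-matrix generalisation; the diagonal case merely confirms that the proposed dictionary is the right one to generalise.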
Bell’s Inequality Meets a Hidden Variable Theory for Quantum Mechanics Based on ’t Hooft-Ontology with Local Realism and Determinism By assuming a deterministic evolution of quantum systems and taking realism into account, we carefully build a hidden variable theory for Quantum Mechanics based on the notion of ontological states proposed by ’t Hooft. We view these ontological states as the ones embedded with realism and compare them to the (usual) quantum states that represent superpositions, viewing the latter as mere information about the system they describe. On this basis we reach a local understanding of Quantum Mechanics, whose predictions are in accordance with Bell’s inequality and with the experiments in the laboratory. In this way we show that Quantum Mechanics can indeed have a local interpretation, and thus meet with the Theory of Relativity in a satisfying way! Since Bell came up with his impossibility proof of local hidden variables [1][2], the quest for realism has taken a wide variety of paths. New interpretations have been proposed for what the violation of causality, embedded in quantum mechanics, might physically mean. We need a better understanding of the most basic phenomena of quantum mechanics, and several no-go theorems have shut the door on realism and locality. But in which way? With what assumptions? Is the door really locked? We are going to examine these questions by proposing a realist hidden variable interpretation of quantum mechanics, factuality, and, with this in mind, analysing the first and most important of the no-go theorems: Bell’s inequality.
Very Deep: A Second-Quantization of Loop Quantum Gravity Theory Describes Physics Beyond Black-Hole-Event-Horizons and Supports the Holographic Hypothesis Implied by SuperString/M-Theory: In principle, nothing that enters a black hole can leave the black hole. This has considerably complicated the study of these mysterious bodies, which generations of physicists have debated since 1916, when their existence was hypothesized as a direct consequence of Einstein's Theory of Relativity. There is, however, some consensus in the scientific community regarding black hole entropy—a measure of the inner disorder of a physical system—because its absence would violate the second law of thermodynamics. In particular, Jacob Bekenstein and Stephen Hawking have suggested that the entropy of a black hole is proportional to its area, rather than its volume, as would be more intuitive. This assumption also gives rise to the "holography" hypothesis of black holes, which (very roughly) suggests that what appears to be three-dimensional might, in fact, be an image projected onto a distant two-dimensional cosmic horizon, just like a hologram, which, despite being a two-dimensional image, appears to be three-dimensional. As we cannot see beyond the event horizon (the outer boundary of the black hole), the internal microstates that define its entropy are inaccessible. So how is it possible to calculate this measure? The theoretical approach adopted by Hawking and Bekenstein is semiclassical (a sort of hybrid between classical physics and quantum mechanics) and introduces the possibility (or necessity) of adopting a quantum gravity approach in these studies in order to obtain a more fundamental comprehension of the physics of black holes. The Planck length is the (tiny) scale at which space-time stops being continuous as we see it, and takes on a discrete graininess made up of quanta, the "atoms" of space-time. The universe at this scale is described by quantum mechanics. Quantum gravity is the field of enquiry that investigates gravity in the framework of quantum mechanics. Gravity has been very well described within classical physics, but it is unclear how it behaves at the Planck scale. Daniele Pranzetti and colleagues, in a new study published in Physical Review Letters, present an important result obtained by applying a second quantization formulation of the loop quantum gravity (LQG) formalism. LQG is a theoretical approach within the problem of quantum gravity, and group field theory is the "language" through which the theory is applied in this work. "The idea at the basis of our study is that homogenous classical geometries emerge from a condensate of quanta of space introduced in LQG in order to describe quantum geometries," explains Pranzetti. "Thus, we obtained a description of black hole quantum states, suitable also to describe 'continuum' physics—that is, the physics of space-time as we know it." Condensates, quantum fluids and the universe as a hologram: a "condensate" in this case is a collection of space quanta, all of which share the same properties so that even though there are huge numbers of them, we can nonetheless study their collective behavior by referring to the microscopic properties of the individual particle.
So now, the analogy with classical thermodynamics seems clearer—just as fluids at our scale appear as continuous materials despite consisting of a huge number of atoms, similarly, in quantum gravity, the fundamental constituent atoms of space form a sort of fluid—that is, continuous space-time. A continuous and homogenous geometry (like that of a spherically symmetric black hole) can, as Pranzetti and colleagues suggest, be described as a condensate, which facilitates the underlying mathematical calculations while taking into account an a priori infinite number of degrees of freedom. "We were therefore able to use a more complete and richer model compared with those done in the past in LQG, and obtain a far more realistic and robust result," says Pranzetti. "This allowed us to resolve several ambiguities afflicting previous calculations due to the comparison of these simplified LQG models with the results of semiclassical analysis as carried out by Hawking and Bekenstein". Another important aspect of Pranzetti and colleagues' study is that it proposes a concrete mechanism in support of the holographic hypothesis, whereby the three-dimensionality of black holes could be merely apparent: all their information could be contained on a two-dimensional surface, without having to investigate the structure of the inside (hence the link between entropy and surface area rather than volume).
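For reference, the semiclassical result this is compared against is the Bekenstein-Hawking formula S_BH = k_B c³ A / (4Gℏ) = k_B A / (4 l_P²): the entropy scales with the horizon area A counted in units of the Planck length squared, which is precisely the area-rather-than-volume scaling described above.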
Here's Why Bell’s Inequality Does NOT Imply Non-Locality or Anti-Realism in Quantum Physics It finally happened! In 2015 three of the world’s leading experimental groups working on foundational aspects of quantum mechanics announced (practically simultaneously) that they had performed loophole-free tests of the violation of the Bell inequality: the groups of Ronald Hanson (Delft University of Technology), Anton Zeilinger (University of Vienna) and Lynden Shalm (NIST, Boulder). This is definitely a great event in quantum foundations; the more so because it took so long to evolve from the pioneering experiment of Alain Aspect to these final Bell tests. (Besides, some experts in quantum foundations had presented arguments, in the spirit of the Heisenberg uncertainty principle, that the locality and detection loopholes could not be closed in one experiment). I was surprised to see that this event did not generate a new wave of quantum foundational enthusiasm, neither in the quantum community nor in the general mass media (even that oriented to the popularization of science). One of the reasons for this rather mild reaction is that, as was already mentioned, the result was commonly expected. A similar reason is that the Bell inequality has already been widely “sold”. In the literature and in talks the Bell argument is typically presented as if everything had already been experimentally proven; scientists working on closing loopholes were regarded as merely polishing Aspect’s famous experiment. Those who are closer to quantum foundations could additionally point out that G. Weihs contributed to completing Aspect’s experiment by closing the locality loophole. Therefore, it would be great if the recent publications ignited a serious discussion on the possible impact of the realization of a totally loophole-free Bell test. Such a discussion is especially important because the conclusions presented in [3]-[5], see also comments on these tests in [9], [10], present only a part of the wide spectrum of views on Bell’s argument. Although the presented “conventional viewpoint” dominates in the quantum community, it would be natural to represent other, so to say, singular parts of this spectrum of viewpoints, see, e.g., the recent comment of M. Kupczynski [11]. We briefly recall the conventional viewpoint presented in [1], [2] and in hundreds of articles and monographs, e.g., [12]-[14], [3]-[10]. It was finally confirmed experimentally that 1. CV1: Einstein was wrong and Bohr was right; 2. CV2: there is spooky action at a distance; 3. CV3: quantum realism is incompatible with locality. The views of those who disagree with the presented “conventional position” are characterized by a high degree of diversity [15]-[26]. Therefore I shall not try to elaborate on some common “non-conventional viewpoints”, but present only my own position [25]: 1. NCV1: both Einstein and Bohr were right; 2. NCV2: there is no need for spooky action at a distance; 3. NCV3: quantum realism is compatible with locality. In section 2 I shall question the CV1-CV3 viewpoint and try to justify the NCV1-NCV3 viewpoint; in particular, I confront the “action at a distance interpretation” with the Copenhagen interpretation of QM. (It is surprising that one may combine these two interpretations without cognitive dissonance.) Then, to discuss the issue of realism in QM, I appeal to its ontic-epistemic analysis in the spirit of Atmanspacher and Primas [27].
From this viewpoint, Bell’s argument can be treated as the conjecture that ontic states can be identified with epistemic ones. We also discuss this conjecture by appealing to the old Bild conception (Hertz, Boltzmann, Schrödinger) of the two descriptive levels of nature, theoretical and observational, see, e.g., [28] and chapter 1 of the monograph [29]. Our conclusion is that the rejection of Bell’s conjecture as the result of the recent experiments cannot be treated as implying the impossibility of keeping the realist viewpoint. Nor is there any need for action at a distance. Section 3 starts with the presentation of Kolmogorov’s interpretation [30] of classical probability (CP) as the observational theory (describing the epistemic states of nature). For Kolmogorov, CP is a contextual theory assigning probability spaces to experimental contexts, i.e., complexes of experimental physical conditions. This position leads to the contextual representation of the probabilistic structure of Bell’s experimental test [23], [25]. In section 4 I present my personal picture of the future development of quantum foundations in the “after-Bell epoch”: from the total rejection of Bell’s conjecture to novel studies on the two-descriptive-levels approach to QM. In contrast to the rather common opinion (see, e.g., Aspect’s paper [9] entitled “Closing the door on Einstein and Bohr’s quantum debate” and Wiseman’s paper [10] entitled “Quantum physics: Death by experiment for local realism”), for me the final Bell test did not imply the total impossibility of “going beyond quantum” [29]. The main message of this test is that the way to a proper subquantum model is trickier than Bell hypothesized.
Is Spacetime an Illusion? Merely an Emergent 'Property'? Spacetime-Free Approach to Quantum Theory and Effective Spacetime Structure Motivated by hints of the effective emergent nature of spacetime structure, we develop a spacetime-free framework for quantum theory. We find that quantum states on an extended observable algebra, the free algebra generated by the observables, may give rise to effective spacetime structures. Accordingly, perturbations of the quantum state (e.g., excitations of the vacuum) lead to perturbations of the induced effective spacetime geometry. We initiate the study of these perturbations, and their relation to gravitational phenomena.
Deep: One of Science and Philosophy's Deepest Questions, Why is SPACE 3-Dimensional, Has Been Answered(?!) The question of why space is three-dimensional (3D) and not some other number of dimensions has puzzled philosophers and scientists since ancient Greece. Space-time overall is four-dimensional, or (3 + 1)-dimensional, where time is the fourth dimension. It's well-known that the time dimension is related to the second law of thermodynamics: time has one direction (forward) because entropy (a measure of disorder) never decreases in a closed system such as the universe. In a new paper published in EPL, researchers have proposed that the second law of thermodynamics may also explain why space is 3D. "A number of researchers in the fields of science and philosophy have addressed the problem of the (3+1)-dimensional nature of space-time by justifying the suitable choice of its dimensionality in order to maintain life, stability and complexity," coauthor Julian Gonzalez-Ayala, at the National Polytechnic Institute in Mexico and the University of Salamanca in Spain, told Phys.org. "The greatest significance of our work is that we present a deduction based on a physical model of the universe dimensionality with a suitable and reasonable scenario of space-time. This is the first time that the number 'three' of the space dimensions arises as the optimization of a physical quantity." The scientists propose that space is 3D because of a thermodynamic quantity called the Helmholtz free energy density. In a universe filled with radiation, this density can be thought of as a kind of pressure on all of space, which depends on the universe's temperature and its number of spatial dimensions. Here the researchers showed that, as the universe began cooling from the moment after the big bang, the Helmholtz density reached its first maximum value at a very high temperature corresponding to when the universe was just a fraction of a second old, and when the number of spatial dimensions was approximately three. The key idea is that 3D space was "frozen in" at this point when the Helmholtz density reached its first maximum value, prohibiting 3D space from transitioning to other dimensions. This is because the second law allows transitions to higher dimensions only when the temperature is above this critical value, not below it. Since the universe is continuously cooling down, the current temperature is far below the critical temperature needed to transition from 3D space to a higher-dimensional space. In this way, the researchers explain, spatial dimensions are loosely analogous to phases of matter, where transitioning to a different dimension resembles a phase transition such as melting ice—something that is possible only at high enough temperatures. "In the cooling process of the early universe and after the first critical temperature, the entropy increment principle for closed systems could have forbidden certain changes of dimensionality," the researchers explained. The proposal still leaves room for higher dimensions to have occurred in the first fraction of a second after the big bang when the universe was even hotter than it was at the critical temperature. Extra dimensions are present in many cosmological models, most notably string theory. The new study could help explain why, in some of these models, the extra dimensions seem to have collapsed (or stayed the same size, which is very tiny), while the 3D space continued to grow into the entire observable universe.
In the future, the researchers plan to improve their model to include additional quantum effects that may have occurred during the first fraction of a second after the big bang, the so-called "Planck epoch." In addition, the results from a more complete model may also provide guidance for researchers working on other cosmological models, such as quantum gravity.
Here's Why the 'Newton-Einstein Paradigm' Fails in Quantum Cosmology Quantum cosmology could be said to have begun with Max Planck's proposal, in the conclusion of his legendary presentation to the Academy of Sciences in Berlin on May 18, 1899, to introduce "natural units" of measurement based on his new quantum constant. Planck's idea, however, got no support from his contemporaries, and it was buried in oblivion for more than half a century until, in the 1950s, John Wheeler rediscovered Planck's fundamental length in his "geometrodynamics". In 1958 Nikolai Kozyrev achieved an important heuristic result by introducing the first global cosmological quantum parameter, the "course of time" constant c₂ ≈ e²/h [16], but like Planck he had not many followers. Despite occasional criticism, cosmology continued to use Newton-Einstein gravitation theory, abandoning for a long time the idea of searching for specific relativistic and quantum laws of the mega-world. This was by no means because of a failure to realize the limited prospects of a mega-world theory based on the Newton-Einstein gravitational equations and thermodynamics. The quest for specific quantum mega-world laws was inhibited, until the last quarter of the 20th century, by the inferior amount (compared to quantum physics) of reliable quantitative data from observations of distant cosmic structures. An important stimulus for progress in quantum cosmology was the discovery of the fractal geometry of the universe's large-scale structures. It appeared that the fractal dimension of typical universe large-scale structures, D = 2, is the same as the fractal dimension of a micro-particle trajectory as described by quantum mechanics.
The Ambiguity of Simplicity and how the Classical versus Quantum Physics Boundary Entails Ockham’s Razor is fundamentally subjective and Possibly a Flawed Principle A system’s apparent simplicity depends on whether it is represented classically or quantally. This is not so surprising, as classical and quantum physics are descriptive frameworks built on different assumptions that capture, emphasize, and express different properties and mechanisms. What is surprising is that, as we demonstrate, simplicity is ambiguous: the relative simplicity between two systems can change sign when moving between classical and quantum descriptions. Thus, notions of absolute physical simplicity—minimal structure or memory—at best form a partial, not a total, order. This suggests that appeals to principles of physical simplicity, via Ockham’s Razor or to the “elegance” of competing theories, may be fundamentally subjective, perhaps even beyond the purview of physics itself. It also raises challenging questions in model selection between classical and quantum descriptions. Fortunately, experiments are now beginning to probe measures of simplicity, creating the potential to directly test for ambiguity.
Is Time Parmenidean: an Illusion?! Quantum GeometroDynamics with Intrinsic Time All of us experience the passage of time. But is time an illusion of our perception, an emergent semi-classical entity, or is it present at the fundamental level even in quantum gravity? General relativity (GR), Einstein’s theory of classical space-time, ties space and time to (pseudo-)Riemannian geometry. But in quantum gravity, space-time is a concept of ‘limited applicability’. That semi-classical space-time is emergent raises the question: what, if anything at all, plays the role of ‘time’ in quantum gravity? Wheeler went as far as to claim we have to forgo time-ordering, and to declare ‘there is no spacetime, there is no time, there is no before, there is no after’. But without ‘time-ordering’, how is ‘causality’, which is requisite in any ‘sensible physical theory’, enforced in quantum gravity? Furthermore, a resolution of the ‘problem of time’ in quantum gravity cannot be deemed complete if it fails to account for the intuitive physical reality of time and does not provide a satisfactory correlation between time development in quantum dynamics and the passage of time in classical space-times. Wheeler also emphasized that it is 3-geometry, rather than 4-geometry, which is fundamental in quantum geometrodynamics. The call to abandon 4-covariance is not new. Simplifications in the Hamiltonian analysis of GR, and the fact that the physical degrees of freedom involve only the spatial metric, led Dirac to conclude that ‘four-dimensional symmetry is not a fundamental property of the physical world'. A key obstacle to the viability of GR as a perturbative quantum field theory lies in the conflict between unitarity and space-time general covariance: renormalizability can be attained with higher-order curvature terms, but space-time covariance requires time as well as spatial derivatives of the same (higher) order, thus compromising unitarity. Relinquishing 4-covariance to achieve power-counting renormalizability through modifications of GR with higher-order spatial, rather than space-time, curvature terms was Horava’s bold proposal. Geometrodynamics bequeathed with a positive-definite spatial metric is the simplest consistent framework in which to implement fundamental commutation relations predicated on the existence of spacelike hypersurfaces.
On the Limits of Time in Quantum Cosmology and why Time Might not Make Sense We provide a discussion of some main ideas in our project about the physical foundation of the time concept in cosmology. It is standard to point to the Planck scale (located at ∼ 10^−43 seconds after a fictitious “Big Bang” point) as a limit for how far back we may extrapolate the standard cosmological model. In our work we have suggested that there are several other (physically motivated) interesting limits – located at least thirty orders of magnitude before the Planck time – where the physical basis of the cosmological model and its time concept is progressively weakened. Some of these limits are connected to phase transitions in the early universe which gradually undermine the notion of ’standard clocks’ widely employed in cosmology. Such considerations lead to a scale problem for time which becomes particularly acute above the electroweak phase transition (before ∼ 10^−11 seconds). Other limits are due to problems of building up a cosmological reference frame, or even contemplating a sensible notion of proper time, if the early universe constituents become too quantum. This quantum problem for time arises e.g. if a pure quantum phase is contemplated at the beginning of inflation at, say, ∼ 10^−34 seconds.
Chaos, a Macro-property, and Entanglement, a Micro-property, Linked by Scientists: and it Blurs the Line Between Classical and Quantum Physics Using a small quantum system consisting of three superconducting qubits, researchers at UC Santa Barbara and Google have uncovered a link between aspects of classical and quantum physics thought to be unrelated: classical chaos and quantum entanglement. Their findings suggest that it would be possible to use controllable quantum systems to investigate certain fundamental aspects of nature. "It's kind of surprising because chaos is this totally classical concept—there's no idea of chaos in a quantum system," said Charles Neill, a researcher in the UCSB Department of Physics and lead author of a paper that appears in Nature Physics. "Similarly, there's no concept of entanglement within classical systems. And yet it turns out that chaos and entanglement are really very strongly and clearly related." Initiated in the 15th century, classical physics generally examines and describes systems larger than atoms and molecules. It consists of hundreds of years' worth of study including Newton's laws of motion, electrodynamics, relativity, thermodynamics as well as chaos theory—the field that studies the behavior of highly sensitive and unpredictable systems. One classic example of chaos theory is the weather, in which a relatively small change in one part of the system is enough to foil predictions—and vacation plans—anywhere on the globe. At smaller size and length scales in nature, however, such as those involving atoms and photons and their behaviors, classical physics falls short. In the early 20th century quantum physics emerged, with its seemingly counterintuitive and sometimes controversial science, including the notions of superposition (the theory that a particle can be located in several places at once) and entanglement (particles that are deeply linked behave as such despite physical distance from one another). And so began the continuing search for connections between the two fields. All systems are fundamentally quantum systems, according to Neill, but the means of describing in a quantum sense the chaotic behavior of, say, air molecules in an evacuated room, remains limited. Imagine taking a balloon full of air molecules, somehow tagging them so you could see them and then releasing them into a room with no air molecules, noted co-author and UCSB/Google researcher Pedram Roushan. One possible outcome is that the air molecules remain clumped together in a little cloud following the same trajectory around the room. And yet, he continued, as we can probably intuit, the molecules will more likely take off in a variety of velocities and directions, bouncing off walls and interacting with each other, resting after the room is sufficiently saturated with them. "The underlying physics is chaos, essentially," he said. The molecules coming to rest—at least on the macroscopic level—is the result of thermalization, or of reaching equilibrium after they have achieved uniform saturation within the system. But in the infinitesimal world of quantum physics, there is still little to describe that behavior. The mathematics of quantum mechanics, Roushan said, do not allow for the chaos described by Newtonian laws of motion. To investigate, the researchers devised an experiment using three quantum bits, the basic computational units of the quantum computer.
Unlike classical computer bits, which utilize a binary system of two possible states (e.g., zero/one), a qubit can also use a superposition of both states (zero and one) as a single state. Additionally, multiple qubits can entangle, or link so closely that their measurements will automatically correlate. By manipulating these qubits with electronic pulses, Neill caused them to interact, rotate and evolve in the quantum analog of a highly sensitive classical system. The result is a map of entanglement entropy of a qubit that, over time, comes to strongly resemble that of classical dynamics—the regions of entanglement in the quantum map resemble the regions of chaos on the classical map. The islands of low entanglement in the quantum map are located in the places of low chaos on the classical map. "There's a very clear connection between entanglement and chaos in these two pictures," said Neill. "And, it turns out that thermalization is the thing that connects chaos and entanglement. It turns out that they are actually the driving forces behind thermalization. "What we realize is that in almost any quantum system, including on quantum computers, if you just let it evolve and you start to study what happens as a function of time, it's going to thermalize," added Neill, referring to the quantum-level equilibration. "And this really ties together the intuition between classical thermalization and chaos and how it occurs in quantum systems that entangle." The study's findings have fundamental implications for quantum computing. At the level of three qubits, the computation is relatively simple, said Roushan, but as researchers push to build increasingly sophisticated and powerful quantum computers that incorporate more qubits to study highly complex problems that are beyond the ability of classical computing—such as those in the realms of machine learning, artificial intelligence, fluid dynamics or chemistry—a quantum processor optimized for such calculations will be a very powerful tool. "It means we can study things that are completely impossible to study right now, once we get to bigger systems," said Neill.
Physics: there is an Infinite Number of Quantum Speed Limits The attempt to gain a theoretical understanding of the concept of time in quantum mechanics has triggered significant progress towards the search for faster and more efficient quantum technologies. One such advance consists in the interpretation of the time-energy uncertainty relations as lower bounds for the minimal evolution time between two distinguishable states of a quantum system, also known as quantum speed limits. We investigate how the non-uniqueness of a bona fide measure of distinguishability defined on the quantum state space affects the quantum speed limits and can be exploited in order to derive improved bounds. Specifically, we establish an infinite family of quantum speed limits valid for unitary and nonunitary evolutions, based on an elegant information geometric formalism. Our work unifies and generalizes existing results on quantum speed limits, and provides instances of novel bounds which are tighter than any established one based on the conventional quantum Fisher information. We illustrate our findings with relevant examples, demonstrating the importance of choosing different information metrics for open system dynamics, as well as clarifying the roles of classical populations versus quantum coherences in the determination and saturation of the speed limits. Our results can find applications in the optimization and control of quantum technologies such as quantum computation and metrology, and might provide new insights in fundamental investigations of quantum thermodynamics. Read this as well.
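For orientation (standard textbook bounds, not taken from this paper): the two best-known quantum speed limits for reaching an orthogonal state are the Mandelstam-Tamm bound t ≥ πℏ/(2ΔE), set by the energy spread ΔE, and the Margolus-Levitin bound t ≥ πℏ/(2⟨E⟩), set by the mean energy above the ground state. The "infinite family" announced here generalises bounds of this type by letting the underlying measure of distinguishability between quantum states vary over an information-geometric family of metrics.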
Statistical/Classical Physics is Characterized by Symmetries of Quantum Entanglement Statistical physics was developed in the XIX century. The fundamental physical theory was then Newtonian mechanics. The key task of statistical physics was to bridge the chasm between microstates (points in phase space) and thermodynamic macrostates (given by temperature, entropy, pressure, etc). Strictly speaking, that chasm is—within the context of Newtonian mechanics—unbridgeable, as classical microstates have vanishing entropy. Thus, a 'half-way house' populated with fictitious but useful concepts such as ensembles was erected—half way between micro and macro—and served as a pillar supporting the bridge. Even before ensembles were officially introduced by Gibbs [1], the concept was de facto used by, e.g., Maxwell [2] and Boltzmann [3]. Doubts about this 'half-way house' strategy nevertheless remained, as controversies surrounding the H-theorem demonstrate. Our point is that, while in XIX century physics ensembles were necessary because thermodynamics anticipated the role played by information, in quantum physics the state of a single quantum system can be mixed. Therefore, the contradiction between the pure classical microstate and an (impure) macrostate does not arise in a Universe that is quantum to the core. Yet, the development of quantum statistical physics consisted to a large extent of re-deploying strategies developed to deal with the fundamental contradiction between Newtonian physics and thermodynamics. We claim that such 'crutches' that were devised (and helpful) in the XIX century became unnecessary with the advent of quantum physics in the XX century. Thus, in the XXI century we can simply dispose of the ensembles invented to justify the use of probabilities representing the ignorance of the observers and to compute the entropy of the macrostate. In the present paper we propose an alternative approach to the foundations of statistical mechanics that is free of the conceptual caveats of classical theory, and relies purely on quantum mechanical notions such as entanglement. Our approach is based on envariance (or entanglement-assisted invariance)—a symmetry-based view of probabilities that has been recently developed to derive Born's rule from the non-controversial quantum postulates [4–8]—and that is, therefore, exceptionally well-suited to analyze probabilistic notions in quantum theories.
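The core envariance argument is short enough to paraphrase: for a maximally entangled state |ψ⟩ = (|0⟩_S|ε₀⟩_E + |1⟩_S|ε₁⟩_E)/√2, a swap |0⟩↔|1⟩ applied only to the system S can be undone by the counterswap |ε₀⟩↔|ε₁⟩ applied only to the environment E. Since an operation on E alone cannot change anything measurable on S, the swap on S cannot have altered the outcome probabilities, so the two outcomes must be equally likely; states with unequal amplitudes are then handled by fine-graining the environment into equal-amplitude branches.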
Markets without Limits: Moral Virtues and Commercial Interests Philosophers like Michael Sandel and Gerald Cohen have argued that as markets expand into new areas of life, important character virtues wither and valuable social relationships decay.[1] Other philosophers like Debra Satz and Elizabeth Anderson direct their criticism at specific markets ranging from sex and surrogacy to kidneys and votes (though they don't always think moral objections justify legal prohibitions).[2] Giving away morally significant goods and services is fine, even noble, but selling them is wrong. Jason Brennan and Peter M. Jaworski's book is a welcome challenge to this view. The authors define a market as "the voluntary exchange of goods and services for valuable consideration" (4). Giving a morally neutral and maximally expansive definition like this is useful because it allows us to include barter and cash markets, markets for goods, services, and ideas, as well as markets that are legally permitted, regulated, or prohibited. The central thesis of the book is that, morally speaking, if you can do something for free, you can do it for money (10). Brennan and Jaworski qualify this claim a bit, but they maintain that in many cases our worries about markets are about the background conditions faced by the parties involved in an exchange (such as unequal bargaining power, or asymmetric information), or about the goods being exchanged (such as sex slaves or hostages, which nobody should be allowed to own). For example, if it's wrong to dump mercury into the municipal water supply or stab a nosy neighbor, it's also wrong for you to get paid to pollute the drinking water or pay a hit man to murder your neighbor. Exchange itself, the authors argue, does not usually raise new moral problems, though it is often mistakenly credited as being the corrupting force behind whatever bad states of affairs we associate with markets (91). Even if we can show that the consequences of a particular exchange are good, and that the exchange produces no rights violations or uncompensated harms, some critics still worry that introducing money into certain relationships expresses the wrong attitude toward the thing being exchanged, or alters the nature of the relationship. Obvious examples include leaving a tip on the pillow of a romantic partner for a job well done, or paying your grandchildren to visit you on your deathbed. Money seems to introduce crass motivations and substitute self-interest for love as the foundation of relationships. Brennan and Jaworski call objections like these "semiotic" because they signify how an action can stand for, or symbolize, something good or bad. Wearing a pink bikini or singing a Satanic verse from a Swedish metal band at a funeral might be a bad idea if it signifies insouciance about the recently departed. Even if our grandmother can't rise from the grave to feel disappointment or reprimand us for our aesthetic decisions, we may have wronged her by disrespecting funeral conventions. Similarly, it might be wrong to sell pink bikinis at our grandmother's funeral not because it violates anyone's rights, or because some people at the service will feel compelled by their emotional vulnerability to buy a bikini. It is wrong in most cases because it signifies disrespect for a solemn occasion. But if Brennan and Jaworski are right, the market for pink bikinis is not intrinsically wrong, and offering them for cash rather than giving them away for free introduces no new wrongness into the situation.
What is wrong in this example is to flout culturally-specific norms that signify respect. Part of the reason Brennan and Jaworski think market exchange doesn't necessarily add anything new to the universe of moral problems is that the symbolic meaning of a particular kind of exchange is a socially constructed fact that is subject to change. It is here that the authors make an original move. On Brennan and Jaworski's view, while it is polite to defer to the socially constructed meaning our culture attaches to a particular action, reasons to be polite are not especially strong when other important values are at stake (82). We should ultimately decide to follow or flout symbolically significant norms only if the expected consequences of doing so are good: "our view is that when there is a clash between semiotics and consequences, consequences win" (62). There can be real costs -- individual and social -- to conforming to the semiotics that emerge in a culture. If a legal market in human organs offers us tremendous gains, it is not sufficient to conclude, just because many people in our culture regard this as a way of disrespecting the living, that we should respect the norm, let alone codify it as law. Instead, Brennan and Jaworski think we should rebel against a legal system that bans organ sales as a way of symbolically protecting the dignity of life when a predictable consequence of the prohibition is the premature death of many people (70). Of course, many worry about organ sellers being disproportionately vulnerable people who either don't understand the consequences of selling their organs, or who consider this the best among a bad set of options determined by unjust background conditions. Others worry that if we opened up a market for organs, poor people would more often sell than buy organs, and that the distributive consequences of the market would be unjust. Replies to these objections have existed for a long time in the literature[3], and they include measures like the state letting people sell their organs only after they receive medical and psychiatric counsel, and subsidizing people who qualify for organs but can't afford to buy them. Brennan and Jaworski argue early on that "The question of whether it is morally permissible to have a market in some good or service is not the same as the question of whether it's permissible to have a free, completely unregulated market in that good or service" (25). They consider it reasonable to regulate a market if doing so can be shown to protect third parties, prevent unjust exploitation, or lead to better distributive consequences. But as we've seen, some critics of markets suggest that even if these worries can be assuaged through regulation, some markets are wrong because of the signals they send. Brennan and Jaworski agree that actions have symbolic power, but they want us to ask the economists' question: what are the opportunity costs, or benefits foregone, of maintaining our semiotic codes? And when should we attempt to modify these codes? When the semiotics that pervade your culture are socially harmful, Brennan and Jaworski assert that "you may conscientiously choose to reject the code of meaning" (72). It is easy to agree that we may be morally justified in rejecting a culture's semiotics. But Brennan and Jaworski arguably don't go far enough by offering us moral permissions rather than requirements.
It is plausible that we have an obligation to exert significant effort to change socially harmful semiotic conventions, even at great personal cost, when we have the power to do so. Influential academics like Satz and Sandel, or Brennan and Jaworski, can be thought of as norm entrepreneurs with special obligations to defend or criticize the prevailing semiotics with novel arguments.[4] But academics are not alone in having strong obligations to evaluate and potentially change the norms that determine the symbolic meaning of particular actions or exchanges. Anyone who is well-placed to assess the expected consequences of rival norms, and who can influence people to adopt superior norms, seems to have reasons to try to overturn existing norms when the personal risks are small. If this is right, perhaps we have obligations to contribute to norm change in proportion to some combination of our ability to do so, our evidence that a better norm is on offer, and the strength of our reasons to believe that a particular norm among the socially feasible set will produce better consequences than the available alternatives. In the final part of the book, Brennan and Jaworski consider the possibility that some opposition to markets is rooted in visceral but morally unjustifiable feelings like disgust. Indeed, critics sometimes use words like "disgusting," "repugnant," or "repulsive" rather than "unfair" or "socially harmful" to describe markets they worry about (197). Some use these terms to describe the consequences of black markets, but often this language reflects gut reactions to legally sanctioned markets like the sale of cadavers or body parts from aborted fetuses. Sometimes our gut reactions are reliable indicators that there is something morally wrong going on, but psychologists have illustrated the many ways in which people tend to develop post-hoc rationalizations for their feelings of disgust. In one study, Jonathan Haidt recorded the reactions of subjects to a hypothetical case of an adult brother and sister who decide to have consensual sex during a vacation in France. They use birth control, and keep their affair secret in order to avoid offending their parents. Researchers reassured subjects that the decision was not made under duress and that the liaison would not produce children. Yet many subjects continued to maintain that the relationship was wrong for reasons they couldn't specify.[5] Social psychologists call this phenomenon "moral dumbfounding," and Brennan and Jaworski use this concept to explain at least some of the intractable intuitions people have about the wrongness of markets that strike us as objectionable for reasons we can't quite articulate (207). Our moral intuitions evolved, in part, to solve collective action problems in small-scale societies.[6] So they are often unreliable guides to how we should organize large-scale political institutions, or react to how people raised in very different communities choose to live their lives. This suggests that we should be careful to avoid elevating a moral intuition or a value-laden gut reaction to the status of an enforceable law unless we can show that doing so prevents harm to others, improves social welfare, protects autonomy, or promotes other widely shared values that can survive scrutiny. Aversion to incest and homosexuality almost certainly helped our ancestors maximize inclusive fitness, even if these aversions fail to promote human welfare, especially in the modern world. 
Evolution has never "aimed at" making creatures happy, but at least some of the moral intuitions that gave our ancestors reproductive advantages in the past cause tremendous misery in the present, especially when we unreflectively use these intuitions to ground social norms or legal sanctions.[7] Some critics of markets that elicit repugnance would argue that we just haven't yet found a justification for these attitudes,[8] while Brennan and Jaworski would likely say this is because a justification is not forthcoming. Both sides are making an inference to the best explanation, but I suspect Brennan and Jaworski are right. It's worth exploring some parallels between inferences we might make in ethics with those in the sciences. Long before the advent of genetics, Charles Darwin knew that for evolution by natural selection to work, there had to be some mechanism (what we now know to be DNA) for faithfully transmitting information to create body parts and repair tissues (from what we now know to be proteins). The best explanation was that something was there doing the work, although Darwin didn't quite know what. Similarly, cosmologists have noticed that the universe is expanding at a rate that exceeds what gravity alone can account for. Many cosmologists assume the best explanation for the universe's accelerating rate of expansion is something they call "dark energy" (dark energy differs from dark matter, but has a similar placeholder status as an unknown feature of the universe that is supposed to help explain well-understood phenomena). Are the gut reactions we feel when we encounter a market that makes us queasy an indicator that our queasiness is justified, for reasons we don't yet understand? Maybe so, but I doubt it. In high stakes cases like markets for organs or genetically engineered babies, we should take our cue from Brennan and Jaworski and look to the expected consequences of markets -- including regulated markets -- on human welfare as a way of gauging whether our gut reactions are a reliable indicator of whether a market is morally justified.
One of the deepest issues in science and philosophy is demarcating causality from mere correlation: meet Topological Causality Analysis. Identification of causal links is fundamental for the analysis of complex systems. In dynamical systems, however, nonlinear interactions may hamper separability of subsystems, which poses a challenge for attempts to determine the directions and strengths of their mutual influences. We found that asymmetric causal influences between parts of a dynamical system lead to characteristic distortions in the mappings between the attractor manifolds reconstructed from the respective local observables. These distortions can be measured in a model-free, data-driven manner. This approach extends basic intuitions about cause-effect relations to deterministic dynamical systems and suggests a mathematically well-defined explanation of results obtained from previous methods based on state-space reconstruction.
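The paper's specific "topological" distortion measure is not reproduced here, but the state-space-reconstruction idea it builds on can be sketched in a few lines. The toy code below is my own illustration, not the authors' algorithm: every function name, parameter value, and the coupled logistic-map system are assumptions chosen for demonstration. It delay-embeds two observables and asks how well neighbourhoods on one reconstructed manifold map onto the other; the asymmetry of the resulting cross-mapping error hints at the direction of coupling.

# Illustrative sketch only: directional coupling between observables x(t), y(t)
# via Takens delay embedding and nearest-neighbour cross-mapping.
import numpy as np

def delay_embed(x, dim=3, tau=1):
    # Build delay-coordinate vectors [x(t), x(t+tau), ..., x(t+(dim-1)tau)].
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def cross_map_error(source, target, dim=3, tau=1, k=4):
    # How well neighbourhoods on `source`'s reconstructed manifold predict
    # `target`; a lower error suggests `target` influences `source`.
    S, T = delay_embed(source, dim, tau), delay_embed(target, dim, tau)
    n = min(len(S), len(T))
    S, T = S[:n], T[:n]
    err = 0.0
    for i in range(n):
        d = np.linalg.norm(S - S[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        nbrs = np.argsort(d)[:k]           # nearest neighbours on source manifold
        err += np.linalg.norm(T[nbrs].mean(axis=0) - T[i])
    return err / n

# Toy system: unidirectionally coupled logistic maps, x drives y.
rng = np.random.default_rng(0)
x, y = np.empty(2000), np.empty(2000)
x[0], y[0] = rng.random(2)
for t in range(1999):
    x[t + 1] = 3.8 * x[t] * (1 - x[t])
    y[t + 1] = 3.5 * y[t] * (1 - y[t]) * (1 - 0.1 * x[t])
print(cross_map_error(y, x), cross_map_error(x, y))

With these toy parameters, mapping from the driven variable y back to the driver x should typically be more accurate than the reverse, and it is exactly this kind of asymmetry, read off from the reconstructed manifolds rather than from a model, that such methods exploit.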
On Reverse Engineering Life by Building a Minimal Replicating Genome A goal in biology is to understand the molecular and biological function of every gene in a cell. One way to approach this is to build a minimal genome that includes only the genes essential for life. In 2010, a 1079-kb genome based on the genome of Mycoplasma mycoides (JCVI-syn1.0) was chemically synthesized and supported cell growth when transplanted into the cytoplasm of a recipient cell. Hutchison III et al. used a design, build, and test cycle to reduce this genome to 531 kb (473 genes). The resulting JCVI-syn3.0 retains genes involved in key processes such as transcription and translation, but also contains 149 genes of unknown function. In 1984, the simplest cells capable of autonomous growth, the mycoplasmas, were proposed as models for understanding the basic principles of life. In 1995, we reported the first complete cellular genome sequences (Haemophilus influenzae, 1815 genes, and Mycoplasma genitalium, 525 genes). Comparison of these sequences revealed a conserved core of about 250 essential genes, much smaller than either genome. In 1999, we introduced the method of global transposon mutagenesis and experimentally demonstrated that M. genitalium contains many genes that are nonessential for growth in the laboratory, even though it has the smallest genome known for an autonomously replicating cell found in nature. This implied that it should be possible to produce a minimal cell that is simpler than any natural one. Whole genomes can now be built from chemically synthesized oligonucleotides and brought to life by installation into a receptive cellular environment. We have applied whole-genome design and synthesis to the problem of minimizing a cellular genome. Since the first genome sequences, there has been much work in many bacterial models to identify nonessential genes and define core sets of conserved genetic functions, using the methods of comparative genomics. Often, more than one gene product can perform a particular essential function. In such cases, neither gene will be essential, and neither will necessarily be conserved. Consequently, these approaches cannot, by themselves, identify a set of genes that is sufficient to constitute a viable genome. We set out to define a minimal cellular genome experimentally by designing and building one, then testing it for viability. Our goal is a cell so simple that we can determine the molecular and biological function of every gene. Whole-genome design and synthesis were used to minimize the 1079-kilobase-pair (kbp) synthetic genome of M. mycoides JCVI-syn1.0. An initial design, based on collective knowledge of molecular biology in combination with limited transposon mutagenesis data, failed to produce a viable cell. Improved transposon mutagenesis methods revealed a class of quasi-essential genes that are needed for robust growth, explaining the failure of our initial design. Three more cycles of design, synthesis, and testing, with retention of quasi-essential genes, produced JCVI-syn3.0 (531 kbp, 473 genes). Its genome is smaller than that of any autonomously replicating cell found in nature. JCVI-syn3.0 has a doubling time of ~180 min, produces colonies that are morphologically similar to those of JCVI-syn1.0, and appears to be polymorphic when examined microscopically. The minimal cell concept appears simple at first glance but becomes more complex upon close inspection.
In addition to essential and nonessential genes, there are many quasi-essential genes, which are not absolutely critical for viability but are nevertheless required for robust growth. Consequently, during the process of genome minimization, there is a trade-off between genome size and growth rate. JCVI-syn3.0 is a working approximation of a minimal cellular genome, a compromise between small genome size and a workable growth rate for an experimental organism. It retains almost all the genes that are involved in the synthesis and processing of macromolecules. Unexpectedly, it also contains 149 genes with unknown biological functions, suggesting the presence of undiscovered functions that are essential for life. JCVI-syn3.0 is a versatile platform for investigating the core functions of life and for exploring whole-genome design.
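As a rough illustration of the essential / quasi-essential / nonessential distinction (a toy sketch of my own, with made-up gene names and insertion counts, not the JCVI pipeline): the classification can be thought of as a rule applied to transposon-insertion data collected right after mutagenesis and again after the mutant population has been grown for many generations.

# Toy illustration only: classify genes from hypothetical transposon-insertion
# counts observed immediately after mutagenesis (p0) and after serial passaging.
def classify_gene(insertions_p0, insertions_after_growth):
    if insertions_p0 == 0:
        return "essential"        # no viable insertion mutants at all
    if insertions_after_growth < 0.1 * insertions_p0:
        return "quasi-essential"  # tolerated at first, but mutants are outgrown
    return "nonessential"

genes = {"gene_A": (0, 0), "gene_B": (40, 1), "gene_C": (55, 60)}  # made-up counts
for name, (p0, later) in genes.items():
    print(name, classify_gene(p0, later))

Genes that never tolerate insertions read as essential; genes whose insertion mutants appear at first but are outcompeted during growth read as quasi-essential; the rest read as nonessential. This is why a design that keeps only the strictly essential set can still fail to grow robustly, and why the trade-off between genome size and growth rate arises.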
Do we really understand quantum mechanics? This article presents a general discussion of several aspects of our present understanding of quantum mechanics. The emphasis is put on the very special correlations that this theory makes possible: they are forbidden by very general arguments based on realism and local causality. In fact, these correlations are completely impossible in any circumstance, except the very special situations designed by physicists especially to observe these purely quantum effects. Another general point that is emphasized is the necessity for the theory to predict the emergence of a single result in a single realization of an experiment. For this purpose, orthodox quantum mechanics introduces a special postulate: the reduction of the state vector, which comes in addition to the Schrödinger evolution postulate. Nevertheless, the presence in parallel of two evolution processes for the same object (the state vector) may be a potential source of conflicts; various attitudes that make it possible to avoid this problem are discussed in this text. After a brief historical introduction, recalling how the very special status of the state vector emerged in quantum mechanics, various conceptual difficulties are introduced and discussed. The Einstein-Podolsky-Rosen (EPR) theorem is presented with the help of a botanical parable, in a way that emphasizes how deeply the EPR reasoning is rooted in what is often called the "scientific method". In another section the GHZ argument, the Hardy impossibilities, as well as the BKS theorem are introduced in simple terms. The final two sections attempt to give a summary of the present situation: one discusses non-locality and entanglement as we see them presently, with brief mention of recent experiments; the last contains a (non-exhaustive) list of various attitudes found among physicists that help alleviate the conceptual difficulties of quantum mechanics.
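The "very special correlations" at stake can be stated compactly with the standard CHSH inequality (a textbook result, not specific to this review). For measurement settings a, a' and b, b' on two separated systems, any local-realistic model obeys
\[
|S| = \left| E(a,b) - E(a,b') + E(a',b) + E(a',b') \right| \le 2 ,
\]
whereas quantum mechanics applied to an entangled spin singlet predicts values of |S| up to $2\sqrt{2}$ (the Tsirelson bound), a violation that experiments have repeatedly confirmed. This is the precise sense in which the correlations are "forbidden" by realism plus local causality yet realized in the special situations physicists engineer.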
Deep: The Physics which Preceded Inflation is NOT Likely to be Imprinted on the Observable Cosmic Microwave Background We revisit the physics of transitions from a general equation-of-state parameter to the final stage of slow-roll inflation. We show that it is unlikely for the modes comprising the cosmic microwave background to contain imprints from a preinflationary equation-of-state transition and still be consistent with observations. We accomplish this by considering observational consistency bounds on the amplitude of excitations resulting from such a transition. As a result, the physics which initially led to inflation likely cannot be probed with observations of the cosmic microwave background. Furthermore, we show that it is unlikely that equation-of-state transitions can explain the observed low-multipole power suppression anomaly.
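For orientation (standard cosmology background rather than the paper's own derivation): the equation-of-state parameter and the condition for accelerated expansion are
\[
w \equiv \frac{p}{\rho}, \qquad \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) > 0 \;\Longleftrightarrow\; w < -\tfrac{1}{3},
\]
so slow-roll inflation corresponds to $w \simeq -1$, while a preinflationary fluid with larger $w$ decelerates the expansion. A transition in $w$ generically mixes the perturbation mode functions, schematically $u_k \to \alpha_k u_k + \beta_k u_k^{*}$, and it is the allowed size of the excitation $|\beta_k|^2$ that the observed CMB power spectrum constrains; the paper's claim is that those constraints leave essentially no observable imprint of the pre-inflationary epoch.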
Atoms-of-Spacetime?! Spacetime Quanta?! Is Spacetime Discrete in Loop Quantum Cosmology?! On 'Spacetime-atoms': this study considers the operator T̂ corresponding to the classical spacetime four-volume T of a finite patch of spacetime, in the context of Unimodular Loop Quantum Cosmology for the homogeneous and isotropic model with flat spatial sections and without matter sources. Since T is canonically conjugate to the cosmological "constant" Λ, the operator T̂ is constructed by solving its canonical commutation relation with Λ̂, the operator corresponding to Λ. This conjugacy, together with the fact that the action of T̂ on definite-volume states reduces to T, allows us to interpret T̂ as a genuine quantum spacetime four-volume operator. The eigenstates Φτ are calculated and, considering τ ∈ R, we find that the Φτ's are normalizable, suggesting that the entire real line R lies in the discrete (point) spectrum of T̂. The real spacetime four-volume τ is then discrete, or quantized.
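Schematically, and up to constant factors that depend on conventions (so treat this as orientation rather than the paper's precise equations): in unimodular gravity the cosmological constant enters as a momentum conjugate to the four-volume of a spacetime region,
\[
\{T, \Lambda\} \propto 1 \quad\Longrightarrow\quad [\hat T, \hat\Lambda] = i\hbar\, c ,
\]
with c a constant fixed by the chosen units. In an ordinary Schrödinger-type quantization the eigenstates $\Phi_\tau(\Lambda) \propto e^{-i\tau\Lambda/(\hbar c)}$ would only be delta-normalizable and the spectrum continuous; the surprise advertised above is that in the loop (polymer) quantization the $\Phi_\tau$ come out normalizable, which is what signals a discrete four-volume spectrum, i.e. "atoms of spacetime".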
Deep: Population size fails to explain evolution of complex culture There is a growing consensus among archaeologists and anthropologists that the size of a population determines its ability to develop, as well as to maintain, complex culture. This view is, however, severely compromised by a paper published this week in the Proceedings of the National Academy of Sciences (PNAS) by a research team including technology philosopher Krist Vaesen of the Eindhoven University of Technology. Archaeologists observe a fairly sudden appearance of behavioural modernity, such as complex technologies, abstract and realistic art, and musical instruments, some 40,000 years ago, in the Later Stone Age. For decades archaeologists and anthropologists have been looking for an explanation of these and other 'cultural revolutions', and thereby for the origin of human culture. For the past decade or so the predominant theory has held that the driving factor is growing population size. The logic seems inescapable indeed: the bigger the population, the higher the probability it contains an Einstein; hence, bigger populations are more likely to develop complex culture. The PNAS paper, co-authored by Vaesen (who works in the Philosophy & Ethics group of the faculty of Industrial Engineering and Innovation Sciences) together with archaeologists from Simon Fraser University, La Trobe University and Leiden University, refutes this demography hypothesis with a growing body of ethnographic evidence. The authors reveal critical flaws both in the theoretical models and in the empirical evidence behind such demographic interpretations of cultural innovation. The models support a relationship between population size and cultural complexity only for a restricted set of extremely implausible conditions. A critical analysis of the available archaeological evidence suggests that there are simply no data to infer that behavioural modernity emerged in a period of population growth, or that the size of a population directly influences the rate of innovation in a society's technological repertoire. Hence, archaeologists may need to go back to the drawing board. The idea behind the demography hypothesis is attractive in its simplicity, but complex questions by definition demand complex answers. For the evolution of complex culture, no satisfying answer is available yet; the question of the emergence of complex culture remains as elusive as ever.
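The intuition being refuted is easy to state (my own illustrative formula, not from the paper): if each individual independently has a small probability p of producing a given innovation, a population of size N produces it with probability
\[
P(N) = 1 - (1 - p)^{N},
\]
which grows monotonically with N. The PNAS authors' point is not that this arithmetic is wrong, but that the models and archaeological data connecting it to the actual emergence of behavioural modernity do not hold up.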
Leonard Susskind: Copenhagen vs Everett and Quantum physics's Einstein-Podolsky-Rosen Entanglement = Einstein-Rosen Wormhole Bridge Keep in mind that Einstein-GR and Quantum Physics are incompatible: Quantum mechanics requires a kind of non-locality called Einstein-Podolsky-Rosen entanglement (EPRE). EPR entanglement does not violate causality, but nevertheless it is a form of non-locality. General Relativity also has its non-local features. In particular, there are solutions to Einstein's equations in which a pair of arbitrarily distant black holes are connected by a wormhole, or Einstein-Rosen bridge (ERB). What if ERB = EPRE?! DEEP: This is a remarkable claim whose impact has yet to be appreciated. The identity can be viewed as a conception of quantum geometry in which two entangled spins, a Bell pair, are connected by a Planckian wormhole. Excellent read by a brilliant mind
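To make the slogan concrete (standard background, not new to this post): the simplest EPR-entangled state is a Bell pair,
\[
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\left( |\!\uparrow\rangle_L |\!\downarrow\rangle_R - |\!\downarrow\rangle_L |\!\uparrow\rangle_R \right),
\]
while the canonical ERB example is the eternal black hole, whose two exteriors are described holographically by the thermofield-double state
\[
|\mathrm{TFD}\rangle = \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n/2}\, |n\rangle_L\, |n\rangle_R .
\]
The ER=EPR proposal of Maldacena and Susskind is that this pattern of entanglement is what the Einstein-Rosen bridge geometry is, and that even a single Bell pair should be pictured as joined by a highly quantum, Planck-scale wormhole.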