Sign up with your email address to be the first to know about new products, VIP offers, blog features & more.
Is Spacetime Inextendible in Physics: is it 'as large as it can be'?! A spacetime (M, g_ab) is inextendible if there does not exist a spacetime (M', g'_ab) ) such that there is a proper isometric embedding ϕ : M → M'. It has been argued by most that inextendibility is a “reasonable physical condition to be imposed on models of the universe” and that a spacetime must be inextendible if it is “to be a serious candidate for describing actuality.” Here, in a variety of ways, we register some skepticism with respect to such positions.
Virtue Epistemology, Agency and a Kantian 'Epistemic Categorical Imperative' Virtue epistemologists hold that knowledge results from the display of epistemic virtues – openmindedness, rigor, sensitivity to evidence, and the like. But epistemology cannot rest satisfied with a list of the virtues. What is wanted is a criterion for being an epistemic virtue. An extension of a formulation of Kant’s categorical imperative yields such a criterion. Epistemic agents should think of themselves as, and act as, legislating members of a realm of epistemic ends: they make the rules, devise the methods, and set the standards that bind them. The epistemic virtues are the traits of intellectual character that equip them to do so. Students then not only need to learn the standards, methods, and rules of the various disciplines, they also need to learn to think of themselves as, and how to behave as, legislating members of epistemic realms who are responsible for what they and their fellows believe. This requires teaching them to respect reasons, and to take themselves to be responsible for formulating reasons their peers can respect
The Ockham Efficiency Theorem for Stochastic Empirical Methods: Does Ockham-Razor Beg the Question Against Truth? Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is the Ockham efficiency theorem (Kelly 2002, Minds Mach 14:485–505, 2004, Philos Sci 74:561–573, 2007a, b, Theor Comp Sci 383:270–289, c, d; Kelly and Glymour 2004), which states that scientists who heed Ockham’s razor retract their opinions less often and sooner than do their non-Ockham competitors. The theorem neglects, however, to consider competitors following random (“mixed”) strategies and in many applications random strategies are known to achieve better worst-case loss than deterministic strategies. In this paper, we describe two ways to extend the result to a very general class of random, empirical strategies. The first extension concerns expected retractions, retraction times, and errors and the second extension concerns retractions in chance, times of retractions in chance, and chances of errors.
Randomness: quantum versus classical Recent tremendous development of quantum information theory led to a number of quantum technological projects, e.g., quantum random generators. This development stimulates a new wave of interest in quantum foundations. One of the most intriguing problems of quantum foundations is elaboration of a consistent and commonly accepted interpretation of quantum state. Closely related problem is clarification of the notion of quantum randomness and its interrelation with classical randomness. In this short review we shall discuss basics of classical theory of randomness (which by itself is very complex and characterized by diversity of approaches) and compare it with irreducible quantum randomness. The second part of this review is devoted to the information interpretation of quantum mechanics (QM) in the spirit of Zeilinger and Brukner (and QBism of Fuchs et al.) and physics in general (e.g., Wheeler’s “it from bit”) as well as digital philosophy of Chaitin (with historical coupling to ideas of Leibnitz). Finally, we continue discussion on interrelation of quantum and classical randomness and information interpretation of QM. Recently the interest to quantum foundations was rekindled by the rapid and successful development of quantum information theory. One of the promising quantum information projects which can lead to real technological applications is the project on quantum random generators. Successful realization of this project attracted attention of the quantum community to the old and complicated problem of interrelation of quantum and classical randomness. In this short review we shall discuss this interrelation: classical randomness versus irreducible quantum randomness. This review can be useful for researchers working in quantum information theory, both as a review on classical randomness and on interpretational problems of QM related to the notion of randomness. We emphasize the coupling between information and randomness, both in the classical and quantum frameworks. This approach is very natural in the light of modern information revolution in QM and physics in general. Moreover, “digital philosophy” (in the spirit of Chaitin) spreads widely in modern science, i.e., not only in physics, but in, e.g., computer science, artifical intelligence, biology. Therefore it is natural to discuss jointly randomness and information including novel possibilities to operate with quantum information and randomness outside of physics, e.g., in biology (molecular biology, genetics) and cognitive science and general theory of decision making.
New Record Set for Quantum Superposition at Macroscopic Level Abstract: 'The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale1, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales4 and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence1. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov–Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity'. A team of researchers working at Stanford University has extended the record for quantum superposition at the macroscopic level, from 1 to 54 centimeters. In their paper published in the journal Nature, the team describes the experiment they conducted, their results and also discuss what their findings might mean for researchers looking to find the cutoff point between superposition as it applies to macroscopic objects versus those that only exist at the quantum level. Nature has also published an editorial on the work done by the team, describing their experiment and summarizing their results. Scientists entangling quantum particles and even whole atoms has been in the news a lot over the past couple of years as experiments have been conducted with the goal of attempting to better understand the strange phenomenon—and much has been learned. But, as scientists figure out how to entangle two particles at ever greater distances apart there has come questions about the size of objects that can be entangled. Schrödinger's cat has come up in several such discussions as theorists and those in the applied fields seek to figure out if it might be truly possible to cause a whole cat to actually be in two places at once. In this new work, the team at Stanford has perhaps muddied the water even more as they have extended the record for supposition from a mere one centimeter to just over half a meter. They did it by creating a Bose-Einstein condensate cloud made up of 10,000 rubidium atoms (inside of a super-chilled chamber) all initially in the same state. Next, the used lasers to push the cloud up into the 10 meter high chamber, which also caused the atoms to enter one or the other of a given state. 
As the cloud reached the top of the chamber, the researchers noted that the wave function was a half-and-half mixture of the given states and represented positions that were 54 centimeters apart. When the cloud was allowed to fall back to the bottom of the chamber, the researchers confirmed that atoms appeared to have fallen from two different heights, proving that the cloud was held in a superposition state. The team acknowledges that while their experiment has led to a new record for superposition at the macroscopic scale, it still was done with individual atoms, thus, it is still not clear if superposition will work with macroscopic sized objects.
Excellent read on the Philosophy of Science and methodology: 'Induction and Deduction in Bayesian Data Analysis' - Andrew Gelman The classical or frequentist approach to statistics (in which inference is centered on significance testing), is associated with a philosophy in which science is deductive and follows Popper’s doctrine of falsification. In contrast, Bayesian inference is commonly associated with inductive reasoning and the idea that a model can be dethroned by a competing model but can never be directly falsified by a significance test. The purpose of this article is to break these associations, which I think are incorrect and have been detrimental to statistical practice, in that they have steered falsificationists away from the very useful tools of Bayesian inference and have discouraged Bayesians from checking the fit of their models. From my experience using and developing Bayesian methods in social and environmental science, I have found model checking and falsification to be central in the modeling process.
Thomas Piketty’s Puzzle: Globalization, Inequality and Democracy In the absence of more aggressive state intervention to redistribute wealth from rich to poor, Piketty’s economic law means that we will see an acceleration of inequality since the rich will always own a disproportionate share of capital. George Robinson examines what this increase in inequality means for democracy. One of the most striking trends of modern times, the concentration of global wealth in hands of the very few, has been popularized by Thomas Piketty in his hugely influential Capital in the Twenty-First Century. Piketty argues that the rate of return on capital consistently exceeds the rate of economic growth. In the absence of more aggressive state intervention to redistribute wealth from rich to poor, Piketty’s economic law means that we will see an acceleration of inequality since the rich will always own a disproportionate share of capital. What does this increase in inequality mean for democracy? I sympathize with Piketty’s view that a significant increase in existing inequalities of wealth may be harmful to the quality of existing democracies. The legitimacy of democracy depends, at least in part, on producing outcomes that citizens think are fair. A rapid growth in inequality risks undermining this implicit contract between citizens. However, we also should consider the impact of inequality on the process of democratization. Our globalized world, with its high levels of financial integration and mobility of capital, may have the curious side effect of making transitions from autocracy to democracy more likely. So while it is right to consider how inequality will change established democracies, we should also think about what it means for the possibility of transitions to democratic government in autocratic states. Political scientists often consider the transition from dictatorship to democracy in game theoretical terms. The elites undertake a kind of back of the envelope calculation about the costs and benefits controlling the apparatus of state. Maintaining an autocracy comes with certain costs, the costs, for instance, of controlling the population. These costs are weighed up against the risks of allowing the general population to set the rate of taxation. This might make one assume that increasing levels of equality will bolster the chances of democracy. Here’s an intuitive theory: as the distribution of income in a society becomes more equal the pressures to pursue redistributive policies from the disenfranchised in society will diminish which, in turn, reduces the costs of tolerating democracy for the elites. That’s a complicated way of saying that higher levels of equality make democratization a cheaper deal for elites. So far this all seems to support the commonsense thesis that a growth in inequality is bad for democratization. Large levels of inequality increase the risks associated with democratization for the ruling elite, so the elite are less likely to relinquish control of the state. But this picture is incomplete. We need to think about how easy it is for elites to move their money around in the modern world. Technological change and financial innovation have made assets more mobile, this has some pretty profound consequences for the risks and rewards associated with allowing a transition to democracy. So how might this increased financial integration change the behavior of autocratic elites? 
A recent paper by John Freeman and Dennis Quinn presents some interesting conclusions about the implications of financial globalization for democratization. Unsurprisingly they suggest that a greater level financial integration makes it easier for elites to move their assets out of the country and that this is going to reduce the threat of democratization to a ruling elite, as any progressive change to the system of taxation will have less impact on their income and assets. More controversially they argue that this is likely to happen even if the ruling elite is not feeling pressure to democratize. In a financially integrated world an investor naturally seeks an international portfolio of investments, one which diversifies into foreign equities. This kind of international portfolio will reduce risk and increase return because it is less dependent on the performance of the native economy. Innovation in financial products has changed the picture too. Many assets which would have previously be described as fixed – such as land – are now chopped up and traded on global markets. When it was not possible to sell these kind of fixed assets abroad, the wealth of a ruling group was yoked to a particular country but we now live in a world where these assets can be traded easily. In short, we are living in a world where elites are capable of spreading there wealth across the globe and are strongly incentivized to do so. A corollary of this is an increase in income inequality within the country in question; the domestic elites accrue large benefits from the asset sales and can accrue a high rate of return on their capital if they invest it abroad. All of this might point to some interesting and surprising conclusions about the relationship between the level of inequality in autocratic states and the probability of democratization. In many cases, an autocratic state with high levels of financial integration will produce a domestic elite with an international portfolio of capital investments and this is likely to result in increasing inequality within a society. This international portfolio of investments also increases probability of a transition to democracy, as the domestic elite have less incentive to maintain the repressive apparatus required to maintain autocratic rule and less to fear from a system of taxation under the ownership of democratic governments. It may be the case then, that the accelerating level of inequality Piketty has identified–facilitated in part by a globalized economy which allows a high rate of return on capital–may have the perverse effect of making the world a more democratic place by reducing the incentives of elites to maintain control of the apparatus of state in autocratic countries.
Quantum Deep Learning - Dispelling the Myth That The Brain is Bayesian and that Bayesian Methods Underlie Artificial Intelligence and Machine Learning 'We present quantum algorithms to perform deep learning that outperform conventional, state-of-the-art classical algorithms in terms of both training efficiency and model quality. Deep learning is a recent technique used in machine learning that has substantially impacted the way in which classification, inference, and artificial intelligence (AI) tasks are modeled [1–4]. It is based on the premise that to perform sophisticated AI tasks, such as speech and visual recognition, it may be necessary to allow a machine to learn a model that contains several layers of abstractions of the raw input data. For example, a model trained to detect a car might first accept a raw image, in pixels, as input. In a subsequent layer, it may abstract the data into simple shapes. In the next layer, the elementary shapes may be abstracted further into aggregate forms, such as bumpers or wheels. At even higher layers, the shapes may be tagged with words like “tire” or “hood”. Deep networks therefore automatically learn a complex, nested representation of raw data similar to layers of neuron processing in our brain, where ideally the learned hierarchy of concepts is (humanly) understandable. In general, deep networks may contain many levels of abstraction encoded into a highly connected, complex graphical network; training such graphical networks falls under the umbrella of deep learning. Boltzmann machines (BMs) are one such class of deep networks, which formally are a class recurrent neural nets with undirected edges and thus provide a generative model for the data. From a physical perspective, Boltzmann machines model the training data with an Ising model that is in thermal equilibrium. These spins are called units in the machine learning literature and encode features and concepts while the edges in the Ising model’s interaction graph represent the statistical dependencies of the features. The set of nodes that encode the observed data and the output are called the visible units (v), whereas the nodes used to model the latent concept and feature space are called the hidden units (h). Two important classes of BMs are the restricted Boltzmann machine (RBM) which takes the underlying graph to be a complete bipartite graph, and the deep restricted Boltzmann machine which is composed of many layers of RBMs (see Figure 1). For the purposes of discussion, we assume that the visible and hidden units are binary'.
Seeing Quantum (Physics) Motion Consider the pendulum of a grandfather clock. If you forget to wind it, you will eventually find the pendulum at rest, unmoving. However, this simple observation is only valid at the level of classical physics—the laws and principles that appear to explain the physics of relatively large objects at human scale. However, quantum mechanics, the underlying physical rules that govern the fundamental behavior of matter and light at the atomic scale, state that nothing can quite be completely at rest. For the first time, a team of Caltech researchers and collaborators has found a way to observe—and control—this quantum motion of an object that is large enough to see. Their results are published in the August 27 online issue of the journal Science. Researchers have known for years that in classical physics, physical objects indeed can be motionless. Drop a ball into a bowl, and it will roll back and forth a few times. Eventually, however, this motion will be overcome by other forces (such as gravity and friction), and the ball will come to a stop at the bottom of the bowl. "In the past couple of years, my group and a couple of other groups around the world have learned how to cool the motion of a small micrometer-scale object to produce this state at the bottom, or the quantum ground state," says Keith Schwab, a Caltech professor of applied physics, who led the study. "But we know that even at the quantum ground state, at zero-temperature, very small amplitude fluctuations—or noise—remain." Because this quantum motion, or noise, is theoretically an intrinsic part of the motion of all objects, Schwab and his colleagues designed a device that would allow them to observe this noise and then manipulate it. The micrometer-scale device consists of a flexible aluminum plate that sits atop a silicon substrate. The plate is coupled to a superconducting electrical circuit as the plate vibrates at a rate of 3.5 million times per second. According to the laws of classical mechanics, the vibrating structures eventually will come to a complete rest if cooled to the ground state. But that is not what Schwab and his colleagues observed when they actually cooled the spring to the ground state in their experiments. Instead, the residual energy—quantum noise—remained. "This energy is part of the quantum description of nature—you just can't get it out," says Schwab. "We all know quantum mechanics explains precisely why electrons behave weirdly. Here, we're applying quantum physics to something that is relatively big, a device that you can see under an optical microscope, and we're seeing the quantum effects in a trillion atoms instead of just one." Because this noisy quantum motion is always present and cannot be removed, it places a fundamental limit on how precisely one can measure the position of an object. But that limit, Schwab and his colleagues discovered, is not insurmountable. The researchers and collaborators developed a technique to manipulate the inherent quantum noise and found that it is possible to reduce it periodically. Coauthors Aashish Clerk from McGill University and Florian Marquardt from the Max Planck Institute for the Science of Light proposed a novel method to control the quantum noise, which was expected to reduce it periodically. This technique was then implemented on a micron-scale mechanical device in Schwab's low-temperature laboratory at Caltech. "There are two main variables that describe the noise or movement," Schwab explains. 
"We showed that we can actually make the fluctuations of one of the variables smaller—at the expense of making the quantum fluctuations of the other variable larger. That is what's called a quantum squeezed state; we squeezed the noise down in one place, but because of the squeezing, the noise has to squirt out in other places. But as long as those more noisy places aren't where you're obtaining a measurement, it doesn't matter." The ability to control quantum noise could one day be used to improve the precision of very sensitive measurements, such as those obtained by LIGO, the Laser Interferometry Gravitational-wave Observatory, a Caltech-and-MIT-led project searching for signs of gravitational waves, ripples in the fabric of space-time. "We've been thinking a lot about using these methods to detect gravitational waves from pulsars—incredibly dense stars that are the mass of our sun compressed into a 10 km radius and spin at 10 to 100 times a second," Schwab says. "In the 1970s, Kip Thorne [Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus] and others wrote papers saying that these pulsars should be emitting gravity waves that are nearly perfectly periodic, so we're thinking hard about how to use these techniques on a gram-scale object to reduce quantum noise in detectors, thus increasing the sensitivity to pick up on those gravity waves," Schwab says. In order to do that, the current device would have to be scaled up. "Our work aims to detect quantum mechanics at bigger and bigger scales, and one day, our hope is that this will eventually start touching on something as big as gravitational waves," he says. These results were published in an article titled, "Quantum squeezing of motion in a mechanical resonator." In addition to Schwab, Clerk, and Marquardt, other coauthors include former graduate student Emma E. Wollman (PhD '15); graduate students Chan U. Lei and Ari J. Weinstein; former postdoctoral scholar Junho Suh; and Andreas Kronwald of Friedrich-Alexander-Universität in Erlangen, Germany. The work was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center that also has support from the Gordon and Betty Moore Foundation.
Keynes and Sraffa on 'Own-Rates': A Present-Day Misunderstanding in Economics Scholars, who in recent years have studied the Sraffa papers held in the Wren Library of Trinity College, Cambridge, have concluded from Sraffa's critical—though unpublished—observations on Chapter 17 of Keynes's General Theory that he rejected Keynes's central proposition that the rate of interest on money may come to ‘rule the roost’, thus dragging the economy into recession. While Sraffa does express dissatisfaction with Chapter 17, the commentators have, we believe, misunderstood his concern: we suggest he was unhappy with Keynes's use of ‘own-rates’ rather than with the substance of the theory developed in Chapter 17. Since the papers of the late Piero Sraffa have become available to scholars, that most intriguing and important component of Keynes's General Theory, Chapter 17 on ‘The Essential Properties of Interest and Money’, has again come under critical fire, specifically from Ranchetti (2000) and Kurz (2010, 2012, 2013).1 Oka (2010) tries to fend off the criticism. From their study of the Sraffa papers, the critics contend that Keynes, having borrowed Sraffa's concept of ‘commodity rates of interest’ (Keynes preferred the designation ‘own-rates of interest’), failed to make proper use of it, and—as long ago perceived by Sraffa himself, but recorded only in unpublished notes—fell into serious error, calling in question a key proposition of his theory. Keynes famously argued that in conditions of developing recession, a relatively sticky own-rate on money would come to ‘rule the roost’, knocking out investment in other assets, the falling returns on which could not compete with the return on money. Sraffa, it has been discovered, objected privately that, with deflation, the own-rate on money would be lower, not higher, than own-rates on competing assets. On the basis of Sraffa's observations, the critics take it that Keynes was guilty of confusion and error—the implication being that Sraffa had effectively blown the argument of Chapter 17 out of the water. That interpretation needs looking into; the purpose of this paper is to do so.
On the Relation Between Mathematics and Physics: How Not to 'Factor' a Miracle: Mathematics is a bit like Zen, in that its greatest masters are likely to deny there being any succinct expression of what it is. It may seem ironic that the one subject which demands absolute precision in its definitions would itself defy definition, but the truth is, we are still figuring out what mathematics is. And the only real way to figure this out is to do mathematics. Mastering any subject takes years of dedication, but mathematics takes this a step further: it takes years before one even begins to see what it is that one has spent so long mastering. I say “begins to see” because so far I have no reason to suspect this process terminates. Neither do wiser and more experienced mathematicians I have talked to. In this spirit, for example, The Princeton Companion to Mathematics [PCM], expressly renounces any tidy answer to the question “What is mathematics?” Instead, the book replies to this question with 1000 pages of expositions of topics within mathematics, all written by top experts in their own subfields. This is a wise approach: a shorter answer would be not just incomplete, but necessarily misleading. Unfortunately, while mathematicians are often reluctant to define mathematics, others are not. In 1960, despite having made his own mathematically significant contributions, physicist Eugene Wigner defined mathematics as “the science of skillful operations with concepts and rules invented just for this purpose” [W]. This rather negative characterization of mathematics may have been partly tongue-in-cheek, but he took it seriously enough to build upon it an argument that mathematics is “unreasonably effective” in the natural sciences—an argument which has been unreasonably influential among scientists ever since. What weight we attach to Wigner’s claim, and the view of mathematics it promotes, has both metaphysical and practical implications for the progress of mathematics and physics. If the effectiveness of mathematics in physics is a ‘miracle,’ then this miracle may well run out. In this case, we are justified in keeping the two subjects ‘separate’ and hoping our luck continues. If, on the other hand, they are deeply and rationally related, then this surely has consequences for how we should do research at the interface. In fact, I shall argue that what has so far been unreasonably effective is not mathematics but reductionism—the practice of inferring behavior of a complex problem by isolating and solving manageable ‘subproblems’—and that physics may be reaching the limits of effectiveness of the reductionist approach. In this case, mathematics will remain our best hope for progress in physics, by finding precise ways to go beyond reductionist tactics.
Philosophy and the practice of Bayesian statistics - Andrew Gelman and Cosma Rohilla Shaliz A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
Major Step Toward Confirming the Existence of the Majorana Particle: A NIMS MANA group theoretically demonstrated that the results of the experiments on the peculiar superconducting state reported by a Chinese research group in January 2015 prove the existence of the Majorana-type particles. A research group led by NIMS Special Researcher Takuto Kawakami and MANA Principal Investigator Xiao Hu of the International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS) theoretically demonstrated that the results of the experiments on the peculiar superconducting state reported by a Chinese research group in January 2015 can be taken as a proof of the existence of Majorana-type particle. The existence of Majorana particle was predicted in 1937 by the Italian theoretical physicist Ettore Majorana. Though it is fermion, it is equivalent to its own antiparticle. While its existence as an elementary particle still has not been confirmed today—nearly 80 years after the prediction, it was pointed out theoretically in recent years that quasiparticle excitations in special materials called topological superconductors behave in a similar way as Majorana particles. However, it is difficult to capture these Majorana particles in materials due to their unique properties of being charge neutral and carrying zero energy. There have been intense international competitions to confirm their existence. The research group carefully examined the physical conditions of the experiments mentioned above, conducted extensive and highly precise theoretical analysis on superconducting quasiparticle excitations, and demonstrated that Majorana particles are captured inside quantum vortex cores of a topological superconductor by comparing the theoretical analysis with the results of the experiments. In addition, the group suggested a specific method to improve the precision of the experiments by taking advantage of the unique quantum mechanical properties of Majorana particles. The collective behavior of Majorana particles—fermions that are equivalent to their own antiparticles—is different from that of electrons and photons, and it is expected to be useful in the development of powerful quantum computers. Furthermore, their very unique property due to zero-energy could be exploited for the creation of various new quantum functionalities. As such, confirming the existence of Majorana particles at high precision will leave a major ripple impact toward new developments in materials science and technology.
A Deep Connection is Drawn Between 'Evidential Probability' Theory and 'Objective Bayesian Epistemology': Evidential probability (EP), developed by Henry Kyburg, offers an account of the impact of statistical evidence on single-case probability. According to this theory, observed frequencies of repeatable outcomes determine a probability interval that can be associated with a proposition. After giving a comprehensive introduction to EP in §2, in §3 we describe a recent variant of this approach, second-order evidential probability (2oEP). This variant, introduced in Haenni et al. (2008), interprets a probability interval of EP as bounds on the sharp probability of the corresponding proposition. In turn, this sharp probability can itself be interpreted as the degree to which one ought to believe the proposition in question. At this stage we introduce objective Bayesian epistemology (OBE), a theory of how evidence helps determine appropriate degrees of belief (§4). OBE might be thought of as a rival to the evidential probability approaches. However, we show in §5 that they can be viewed as complimentary: one can use the rules of EP to narrow down the degree to which one should believe a proposition to an interval, and then use the rules of OBE to help determine an appropriate degree of belief from within this interval. Hence bridges can be built between evidential probability and objective Bayesian epistemology.
New Research Shows that Evolution Learns from Previous Experience, Providing an Explanation of How Evolution by Natural Selection Produces Intelligent Designs Without Mystification: Evolution may be more intelligent than we thought, according to a University of Southampton professor. Professor Richard Watson says new research shows that evolution is able to learn from previous experience, which could provide a better explanation of how evolution by natural selection produces such apparently intelligent designs. By unifying the theory of evolution (which shows how random variation and selection is sufficient to provide incremental adaptation) with learning theories (which show how incremental adaptation is sufficient for a system to exhibit intelligent behaviour), this research shows that it is possible for evolution to exhibit some of the same intelligent behaviours as learning systems (including neural networks). In an opinion paper, published in Trends in Ecology and Evolution, Professors Watson and Eörs Szathmáry, from the Parmenides Foundation in Munich, explain how formal analogies can be used to transfer specific models and results between the two theories to solve several important evolutionary puzzles. Professor Watson says: "Darwin's theory of evolution describes the driving process, but learning theory is not just a different way of describing what Darwin already told us. It expands what we think evolution is capable of. It shows that natural selection is sufficient to produce significant features of intelligent problem-solving." For example, a key feature of intelligence is an ability to anticipate behaviours that that will lead to future benefits. Conventionally, evolution, being dependent on random variation, has been considered 'blind' or at least 'myopic' - unable to exhibit such anticipation. But showing that evolving systems can learn from past experience means that evolution has the potential to anticipate what is needed to adapt to future environments in the same way that learning systems do. "When we look at the amazing, apparently intelligent designs that evolution produces, it takes some imagination to understand how random variation and selection produced them. Sure, given suitable variation and suitable selection (and we also need suitable inheritance) then we're fine. But can natural selection explain the suitability of its own processes? That self-referential notion is troubling to conventional evolutionary theory - but easy in learning theory. "Learning theory enables us to formalise how evolution changes its own processes over evolutionary time. For example, by evolving the organisation of development that controls variation, the organisation of ecological interactions that control selection or the structure of reproductive relationships that control inheritance - natural selection can change its own ability to evolve. "If evolution can learn from experience, and thus improve its own ability to evolve over time, this can demystify the awesomeness of the designs that evolution produces. Natural selection can accumulate knowledge that enables it to evolve smarter. That's exciting because it explains why biological design appears to be so intelligent."
Physics and Biology: Quantum Criticality at the Origin of Life: Why life persists at the edge of chaos is a question at the very heart of evolution. Here it is shown that molecules taking part in biochemical processes from small molecules to proteins are critical quantum mechanically. Electronic Hamiltonians of biomolecules are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase. Using tools from Random Matrix Theory, it is confirmed that the energy level statistics of these biomolecules show the universal transitional distribution of the metal-insulator critical point and the wave functions are multifractals in accordance with the theory of Anderson transitions. The findings point to the existence of a universal mechanism of charge transport in living matter. The revealed bio-conductor material is neither a metal nor an insulator but a new quantum critical material which can exist only in highly evolved systems and has unique material properties
Physicists Confirm Thermodynamic Irreversibility in a Quantum System: Deep Result in Light Of T-Invariance of Quantum Laws/Equations: For the first time, physicists have performed an experiment confirming that thermodynamic processes are irreversible in a quantum system—meaning that, even on the quantum level, you can't put a broken egg back into its shell. The results have implications for understanding thermodynamics in quantum systems and, in turn, designing quantum computers and other quantum information technologies. The physicists, Tiago Batalhão at the Federal University of ABC, Brazil, and coauthors, have published their paper on the experimental demonstration of quantum thermodynamic irreversibility in a recent issue of Physical Review Letters. Irreversibility at the quantum level may seem obvious to most people because it matches our observations of the everyday, macroscopic world. However, it is not as straightforward to physicists because the microscopic laws of physics, such as the Schrödinger equation, are "time-symmetric," or reversible. In theory, forward and backward microscopic processes are indistinguishable. In reality, however, we only observe forward processes, not reversible ones like broken egg shells being put back together. It's clear that, at the macroscopic level, the laws run counter to what we observe. Now the new study shows that the laws don't match what happens at the quantum level, either. Observing thermodynamic processes in a quantum system is very difficult and has not been done until now. In their experiment, the scientists measured the entropy change that occurs when applying an oscillating magnetic field to carbon-13 atoms in liquid chloroform. They first applied a magnetic field pulse that causes the atoms' nuclear spins to flip, and then applied the pulse in reverse to make the spins undergo the reversed dynamics. If the procedure were reversible, the spins would have returned to their starting points—but they didn't. Basically, the forward and reverse magnetic pulses were applied so rapidly that the spins' flipping couldn't always keep up, so the spins were driven out of equilibrium. The measurements of the spins indicated that entropy was increasing in the isolated system, showing that the quantum thermodynamic process was irreversible. By demonstrating that thermodynamic irreversibility occurs even at the quantum level, the results reveal that thermodynamic irreversibility emerges at a genuine microscopic scale. This finding makes the question of why the microscopic laws of physics don't match our observations even more pressing. If the laws really are reversible, then what are the physical origins of the time-asymmetric entropy production that we observe? The physicists explain that the answer to this question lies in the choice of the initial conditions. The microscopic laws allow reversible processes only because they begin with "a genuine equilibrium process for which the entropy production vanishes at all times," the scientists write in their paper. Preparing such an ideal initial state in a physical system is extremely complex, and the initial states of all observed processes aren't at "genuine equilibrium," which is why they lead to irreversible processes. "Our experiment shows the irreversible nature of quantum dynamics, but does not pinpoint, experimentally, what causes it at the microscopic level, what determines the onset of the arrow of time," coauthor Mauro Paternostro at Queen's University in Belfast, UK, told Phys.org. 
"Addressing it would clarify the ultimate reason for its emergence." The researchers hope to apply the new understanding of thermodynamics at the quantum level to high-performance quantum technologies in the future. "Any progress towards the management of finite-time thermodynamic processes at the quantum level is a step forward towards the realization of a fully fledged thermo-machine that can exploit the laws of quantum mechanics to overcome the performance limitations of classical devices," Paternostro said. "This work shows the implications for reversibility (or lack thereof) of non-equilibrium quantum dynamics. Once we characterize it, we can harness it at the technological level."
Isaac Levi: Pragmatism, Philosophy of Science, Truth and Inquiry Isaac Levi is a central figure in contemporary pragmatism, who, drawing extensively on the philosophy of classical pragmatists like Charles S. Peirce and John Dewey, has been able to successfully develop, correct, and implement their views, thus presenting an innovative and significant approach to various issues in contemporary philosophy, including problems in logic, epistemology, decision theory, etc. His books (just to mention a few of them) Gambling with Truth (Knopf 1967), The Enterprise of Knowledge (MIT Press 1980), Hard Choices (Cambridge University Press 1986), and The Fixation of Belief and Its Undoing (Cambridge University Press 1991) propose a solid and elaborate framework to address various issues in epistemology from an original pragmatist perspective. The essays contained in Pragmatism and Inquiry investigate ideas that constitute the core of Levi’s philosophy (like corrigibilism, his account of inquiry, his distinction between commitment and performance, his account of statistical reasoning, his understanding of credal probabilities, etc.), but they do so by putting these views in dialogue with other important philosophical figures of the last and present century (like Edward Craig, Donald Davidson, Jaakko Hintikka, Frank Ramsey, Richard Rorty, Michael Williams, Timothy Williamson, Crispin Wright, etc.), thus providing a renewed entry-point in his thought. The collection contains 11 essays which have been all published already. It is good to have these articles collected together, because they are very interrelated and they present a systematic view. However, it would have been useful to have a longer introduction (the one in the book is just 3 pages long), which guided the reader among the conceptual relationships between the chapters, and which identified their relevance in the current philosophical context. Insofar as there is much interrelation and overlap among the articles, I will avoid commenting them singularly one after another. I will rather identify some of the central themes that run through the book and point out where relevant ideas about these themes are presented in the collection. The first topic I wish to focus on is Levi’s original account of the tasks and purposes of epistemology. According to Levi, epistemology should not be understood as a discipline that identifies the principles according to which we can decide whether our beliefs are justified or not. He takes from the classical pragmatist (in particular from Peirce and Dewey) what he calls the principle of doxastic inertia, or doxastic infallibilism (cf. 32, 231), according to which we have no reason to justify the beliefs we are actually certain of. The task of epistemology is thus not that of justifying our current beliefs, but rather that of justifying changes in beliefs (cf. 165-71). In this context, Levi develops an interesting and original perspective which associates infallibilism and corrigibilism. We should be infallibilist about the beliefs we currently hold as true (according to Levi, it would be incoherent to hold them to be true and stress that they could be false as fallibilists do). Nonetheless we can be corrigibilist about our beliefs, because we can held them to be vulnerable to modification in the course of inquiry.1 This means that we cannot but regard our current beliefs as true, even though we can consider them as open to correction in the course of future inquiry (cf. 120). 
At this point, it is interesting to refer to Levi’s discussion of the claim, advanced for example by Rorty, that we should aim at warranted assertibility and not at truth. According to Levi, this claim could be understood in different ways. On the one hand, it could be read as implying that we should increase the number of beliefs that are acquired through well-conducted inquiry (133). This would be contrary to the principle of doxastic inertia proposed by the classical pragmatism, because we would require a justification for beliefs we already had. Consequently, if we read Rorty’s claim in this way, his pragmatism would abandon one of the central views of this very tradition. On the other hand, if warranted assertibility is understood as a specific aim of inquiries we actually pursue (where we thus need a justification because we are trying to introduce changes in our beliefs), the simple contention that we should aim at warranted assertibility remains empty if we do not specify which are the proximate aims of the inquiry in question (and if we do so, it seems that a central aim of at least some inquiries should be the attainment of new error free information: a goal that Rorty would probably reject as an aim of inquiry) (cf. 130, 133). From this account of the role and purposes of epistemology, it is clear how the analysis of the structure and procedures of inquiry plays an essential role in Levi’s theory of knowledge. This is the second topic I wish to address. Levi often recognizes his debt to Peirce and Dewey in his account of inquiry (cf. 1), but he also insists that we should develop their views further in order to attain a consistent position. He agrees with Peirce that inquiry is the process which allows us to pass from a state of doubt to a state of belief (cf. 83), but, following Dewey, he criticizes Peirce’s psychological description of these states (1-2).2 However, he does not endorse Dewey’s strategy to avoid psychologism, that is, his description of inquiry as a process starting with an indeterminate situation and ending with a determinate situation (2, 84-5). Rather, he understands changes in states of belief as changes in doxastic commitments, where states of belief understood as commitments are to be distinguished from states of belief understood as performances. Accordingly, a doxastic commitment identifies the set of beliefs we commit ourselves in a state of full belief. It could be totally different from the views we consciously endorse, which identify our state of doxastic performance (cf. 106). A state of belief understood as commitment has then a normative component, because it describes what we should believe and not what we actually believe. Besides identifying the beliefs we are committed to endorse, our state of full belief also decides which are the possible views and theories, on which we might rationally have doubts about (48). In other words, a state of full belief (understood as commitment) decides the space of serious possibilities we can rationally inquire about (169). Accordingly, inquiry should not be understood as a process that generates changes in doxastic performances (which would concern our psychological dispositions and states), but rather as a process which results in changes in doxastic commitments (108). Changes in doxastic commitments can concern either the extension or contraction of our state of full belief. Levi offers us a detailed analysis of the ways in which these changes can be justified. 
Extension can be justified by either routine expansion or deliberate expansion, where routine expansion identifies a “program for utilizing inputs to form new full beliefs to be added to X’s state of full belief K” (235). Levi refers here to a “program” because he wants to distinguish this kind of expansion from a conclusion obtained through inference, where, for example, the data would figure as premises of an induction (236). The difference here is that the “program” tells us how to use the data before the data are collected, whereas in inductive inferences there is no such identification in advance. He reads Peirce’s late account of induction as developing some elements along these lines (72-3) and he finds some affinities with Hintikka’s account of induction as a process “allowing nature to answer a question put to it by the inquirer” (204). Our state of full belief can also expand by means of deliberate expansion. In the latter “the answer chosen is justified by showing it is the best option among those available given the cognitive goals of the agent” (236). “The justified change is the one that is best among the available options (relevant alternatives) according to the goal of seeking new error-free and valuable information” (237). However, when we expand our state of full belief we can inadvertently generate inconsistencies among our beliefs. When we are in this inconsistent state of belief, we cannot but give up some of our beliefs in order to avoid contradictions. In contracting our state of full belief, we have basically three options. We can give up the new belief that generated the inconsistency or we can give up the old belief with which it is in contradiction. Alternatively, we can also suspend judgment between the two. In all these cases we have a contraction of the state of full belief. Levi describes the criterion which should be followed in deciding between these three options as follows: “In contracting a state of belief by giving up information X would prefer, everything else being equal, to minimize the value of loss of the information X is going to incur” (230). In deciding weather to give up either the new or the old belief, X should then take into consideration which retreat would cause the smaller loss of information. If the loss of information would be equal in the two cases, then X should suspend judgment about the two (181, 229-30). This account of inquiry and of the way in which it justifies changes in doxastic commitments is part of an elaborate and original approach to epistemology. It draws its basic insights from Peirce’s and Dewey’s account of inquiry, but it develops their views in an extremely original and detailed view, which constitutes the core of Levi’s philosophy. Levi’s book contains also interesting reflections on the concept of truth. He argues that, from a pragmatist point of view, we should not be interested in giving a definition of this concept, which clarifies what we do when we use the predicate “is true” in sentences and propositions. Rather, we should be interested in how the concept of truth is relevant for understanding the way in which we change beliefs through inquiry (124-5). Levi criticizes those accounts of inquiry which claim that inquiry should not aim at truth but at warranted assertibility (e.g. Rorty, Davidson, sometimes Dewey) (ch. 7). Against these views, he maintains that a concern with truth is essential to understand at least some of our inquiries, that is, those inquiries which aim to justify changes in full beliefs. 
It seems essential that these inquiries should try to avoid error (an aim that should be associated with the purpose of attaining new information) and this seems to have an indirect connection with the aim of finding out the truth (135- 6). On the other hand, Levi rejects Peirce’s account of truth as the final opinion that we will reach at the end of inquiry. According to Levi, proposing this understanding of truth as the aim of inquiry would result in insoluble inconsistencies with the kind of corrigibilism that Levi endorses and that he also attributes to Peirce (138-40). Levi’s view seems to be the following: if in my current state of belief I believe h is absolutely true, then I should regard it as an essential part of the final opinion I aim to reach “in the long run.” Thus, I should not be prepared to give up h (which would contradict Peirce’s corrigibilism), insofar as at further steps in inquiry I could end up believing the contrary view (which I now believe is false). Levi concludes that at any determinate time in inquiry we should not be concerned with making the best move in order to contribute to the attainment of the truth intended as the final and definitive description of the world. On the contrary, we should just try to obtain new errorfree information in the next proximate step of inquiry. I do not think that this way of presenting Peirce’s views is fair to his actual position, for two main reasons: (1) Peirce’s account of truth as the final opinion can be read as identifying not substantial theses about reality or the ultimate aims of inquiry, but the commitments we make with respect to a proposition when we asserts that it is true: that is, we commit ourselves to the view that it will hold in the long run;3 (2) even if we identify the attainment of truth as the ultimate aim of inquiry, it seems possible, within Peirce’s model, to maintain that we can be corrigibilist about the views we currently consider true. Of course it would be irrational to doubt or give up these views as long as we still believe in them (this is basically what Levi calls Peirce’s principle of doxastic inertia). This does not imply that we cannot consider those views as corrigible, given that we could incur in circumstances (like new evidence gained through experience, or the identification of inconsistencies in our set of beliefs, etc.) that justify the emergence of a doubt on those views. If we were in these circumstances, it would not be problematic to give up those views, insofar as we would not be any more completely certain that they are true. If our aim were thus the attainment of truth in the long run, we would be justified to give up those views insofar as we would not be any more certain that they contribute to the attainment of the final opinion. Levi’s book also contains important scholarly contributions on Peirce and Dewey. It is undeniable that his approach to the writings of both Peirce and Dewey is strongly influenced by his own views and interests, but Levi is surely distinctive among the central figures in contemporary pragmatism for reading these classics with the attention they deserve. Chapter 4 “Beware of Syllogism: Statistical Reasoning and Conjecturing According to Peirce” presents a reconstruction of the evolution of Peirce’s account of induction and hypothesis. Levi shows how Peirce later abandons his early attempts to define these kinds of inferences by means of a permutation of the structure of a categorical syllogism. 
In his later writings Peirce first begins to regard these inferences as permutations of statistical deductions (75), and he then abandons this strategy in favor of a description of deduction, induction and abduction reflecting their roles in inquiry (77-8). Chapters 5 “Dewey’s Logic of Inquiry” and 6 “Wayward Naturalism: Saving Dewey from Himself” contain interesting considerations on Dewey’s theory of inquiry and the kind of naturalism we should associate with it. Since the two articles overlap in many respects (unfortunately, sometimes the overlap is not only topical but textual, which makes one wonder whether it would not have been better to include only one of the two in the collection), I will discuss them together. Compared with chapter 4 on Peirce, these articles are less scholarly and more concerned with a correction of Dewey’s views along the lines Levi suggests. In these chapters, Levi discusses a multiplicity of issues, but I will limit myself to the consideration of his criticism of Dewey’s naturalism (cf. 85-8, 111-16). On this point, Levi claims that “activities like believing, evaluating, inquiring, deliberating, and deciding are resistant to naturalization” (105), if the latter is understood as an explanation of these activities by means of psychological or behavioral dispositions. In his attempt to show continuities between the way in which humans rationally conduct inquiries and the way in which animals respond to the challenges posed by their environment, Dewey commits exactly this naturalistic fallacy (cf. 85, 111). However, states of full belief, understood as doxastic commitments, involve a normative element that cannot be reduced to dispositions (106). Endorsing an approach to inquiry based on commitments amounts to endorsing a better naturalism, which Levi calls wayward naturalism (cf. 103-4), and which does not replace old supernatural entities with new ones (according to Levi, the appeal to dispositions as a universal means of explanation in epistemology introduces a new kind of supernaturalism). According to Levi, if we read Dewey properly, it becomes evident that we cannot but develop his account of inquiry in this way (108-9). To conclude, it is surely good to have these essays collected together, insofar as they offer a new perspective on some of the central insights of Levi’s philosophy thanks to a fruitful engagement with recent developments in epistemology. Even though the overlap between the articles is sometimes so significant (as in the case of chapters 5 and 6) that it would have been advisable to avoid the redundancies, the texts presented here are surely of interest to any scholar who believes that the classical pragmatists’ account of inquiry still has a lot to offer to the current philosophical debate.
Interference Effects of Choice on Confidence: Quantum Characteristics of Evidence Accumulation Decision-making relies on a process of evidence accumulation which generates support for possible hypotheses. Models of this process derived from classical stochastic theories assume that information accumulates by moving across definite levels of evidence, carving out a single trajectory across these levels over time. In contrast, quantum decision models assume that evidence develops over time in a superposition state analogous to a wavelike pattern and that judgments and decisions are constructed by a measurement process by which a definite state of evidence is created from this indefinite state. This constructive process implies that interference effects should arise when multiple responses (measurements) are elicited over time. We report such an interference effect during a motion direction discrimination task. Decisions during the task interfered with subsequent confidence judgments, resulting in less extreme and more accurate judgments than when no decision was elicited. These results provide qualitative and quantitative support for a quantum random walk model of evidence accumulation over the popular Markov random walk model. We discuss the cognitive and neural implications of modeling evidence accumulation as a quantum dynamic system. Significance: Most cognitive and neural decision-making models—owing to their roots in classical probability theory—assume that decisions are read out of a definite state of accumulated evidence. This assumption contradicts the view held by many behavioral scientists that decisions construct rather than reveal beliefs and preferences. We present a quantum random walk model of decision-making that treats judgments and decisions as a constructive measurement process, and we report the results of an experiment showing that making a decision changes subsequent distributions of confidence relative to when no decision is made. This finding provides strong empirical support for a parameter-free prediction of the quantum model.
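To make the contrast concrete, the following is a minimal numerical sketch (not the authors' fitted model; the number of evidence levels, the Hamiltonian, the coupling and the time scale are all illustrative) of why an intermediate measurement matters for a quantum walk but not for a Markov walk: collapsing the quantum state part-way through evolution erases the coherences that would otherwise interfere later, whereas reading out a Markov chain mid-way leaves its subsequent distribution unchanged.

```python
import numpy as np
from scipy.linalg import expm

N = 21                                  # illustrative number of discrete evidence levels
H = np.zeros((N, N))                    # nearest-neighbour coupling (toy Hamiltonian)
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = 1.0

def quantum_dist(t, dephase_at=None):
    """Distribution over evidence levels after time t, starting from the middle level.
    If dephase_at is given, coherences are destroyed there (an intermediate measurement)."""
    psi = np.zeros(N, dtype=complex)
    psi[N // 2] = 1.0
    rho = np.outer(psi, psi.conj())
    def evolve(rho, dt):
        U = expm(-1j * H * dt)
        return U @ rho @ U.conj().T
    if dephase_at is None:
        rho = evolve(rho, t)
    else:
        rho = evolve(rho, dephase_at)
        rho = np.diag(np.diag(rho))     # measurement: keep populations, erase interference terms
        rho = evolve(rho, t - dephase_at)
    return np.real(np.diag(rho))

# A classical Markov walk, by contrast, is insensitive to an intermediate read-out:
P = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i + 1):
        if 0 <= j < N:
            P[j, i] = 1.0
P = P / P.sum(axis=0)                   # column-stochastic transition matrix
p0 = np.zeros(N); p0[N // 2] = 1.0
markov_final = np.linalg.matrix_power(P, 10) @ p0   # identical with or without a mid-way read-out

diff = quantum_dist(2.0) - quantum_dist(2.0, dephase_at=1.0)
print(np.round(diff, 3))                # nonzero entries: the interference effect of measuring
```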
String-Theory Calculations Describe 'Birth of the Universe' Researchers in Japan have developed what may be the first string-theory model with a natural mechanism for explaining why our universe would seem to exist in three spatial dimensions if it actually has six more. According to their model, only three of the nine dimensions started to grow at the beginning of the universe, accounting both for the universe's continuing expansion and for its apparently three-dimensional nature. String theory is a potential "theory of everything", uniting all matter and forces in a single theoretical framework, which describes the fundamental level of the universe in terms of vibrating strings rather than particles. Although the framework can naturally incorporate gravity even on the subatomic level, it implies that the universe has some strange properties, such as nine or ten spatial dimensions. String theorists have approached this problem by finding ways to "compactify" six or seven of these dimensions, or shrink them down so that we wouldn't notice them. Unfortunately, Jun Nishimura of the High Energy Accelerator Research Organization (KEK) in Tsukuba says, "There are many ways to get four-dimensional space–time, and the different ways lead to different physics." The solution is not unique enough to produce useful predictions. These compactification schemes are studied through perturbation theory, in which all the possible ways that strings could interact are added up to describe the interaction. However, this only works if the interaction is relatively weak, with a distinct hierarchy in the likelihood of each possible interaction. If the interactions between the strings are stronger, with multiple outcomes equally likely, perturbation theory no longer works. Matrix models allow stronger interactions. Weakly interacting strings cannot describe the early universe with its high energies, densities and temperatures, so researchers have sought a way to study strings that strongly affect one another. To this end, some string theorists have tried to reformulate the theory using matrices. "The string picture emerges from matrices in the limit of infinite matrix size," says Nishimura. Five forms of string theory can be described with perturbation theory, but only one has a complete matrix form – Type IIB. Some even speculate that the matrix Type IIB actually describes M-theory, thought to be the fundamental version of string theory that unites all five known types. The model developed by Sang-Woo Kim of Osaka University, Nishimura, and Asato Tsuchiya of Shizuoka University describes the behaviour of strongly interacting strings in nine spatial dimensions plus time, or 10 dimensions. Unlike perturbation theory, matrix models can be numerically simulated on computers, getting around some of the notorious difficulty of string-theory calculations. Although the matrices would have to be infinitely large for a perfect model, they were restricted to sizes from 8 × 8 to 32 × 32 in the simulation. The calculations using the largest matrices took more than two months on a supercomputer, says Kim. Physical properties of the universe appear in averages taken over hundreds or thousands of matrices. The trends that emerged from increasing the matrix size allowed the team to extrapolate how the model universe would behave if the matrices were infinite. "In our work, we focus on the size of the space as a function of time," says Nishimura.
'Birth of the universe' The limited sizes of the matrices mean that the team cannot see much beyond the beginning of the universe in their model. From what they can tell, it starts out as a symmetric, nine-dimensional space, with each dimension measuring about 10⁻³³ cm. This is a fundamental unit of length known as the Planck length. After some passage of time, the string interactions cause the symmetry of the universe to spontaneously break, causing three of the nine dimensions to expand. The other six are left stunted at the Planck length. "The time when the symmetry is broken is the birth of the universe," says Nishimura. "The paper is remarkable because it suggests that there really is a mechanism for dynamically obtaining four dimensions out of a 10-dimensional matrix model," says Harold Steinacker of the University of Vienna in Austria. Hikaru Kawai of Kyoto University, Japan, who worked with Tsuchiya and others to propose the IIB matrix model in 1997, is also very interested in the "clear signal of four dimensional space–time". "It would be a big step towards understanding the origin of our universe," he says. Although he finds that the evolution of the model universe in time is too simple and different from the general theory of relativity, he says the new direction opened by the work is "worth investigating intensively". Will the Standard Model emerge? The team has yet to prove that the Standard Model of particle physics will show up in its model, at much lower energies than this initial study of the very early universe. If it leaps that hurdle, the team can use it to explore cosmology. Compared with perturbative models, Steinacker says, "this model should be much more predictive". Nishimura hopes that by improving both the model and the simulation software, the team may soon be able to investigate the inflation of the early universe or the density distribution of matter, results which could be evaluated against the density distribution of the real universe. The research will be described in an upcoming paper in Physical Review Letters and a preprint is available at arXiv:1108.1540.
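For reference, the type IIB (IKKT) matrix model on which this line of work builds is conventionally written in the literature (this expression is quoted from the standard matrix-model papers, not from the article above) with an action of the form

\[
S = -\frac{1}{g^{2}}\,\mathrm{Tr}\!\left(\tfrac{1}{4}\,[A_{\mu},A_{\nu}][A^{\mu},A^{\nu}] + \tfrac{1}{2}\,\bar{\psi}\,\Gamma^{\mu}[A_{\mu},\psi]\right),
\]

where the ten bosonic matrices \(A_{\mu}\) (\(\mu = 0,\dots,9\)) and the fermionic matrices \(\psi\) are \(N \times N\) Hermitian matrices, and space–time is expected to emerge from their eigenvalue distribution in the large-\(N\) limit, which is why the simulations described above work at finite \(N\) (8 × 8 up to 32 × 32) and extrapolate.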
Time-symmetric formulation of quantum theory provides new understanding of causality and free choice: The laws of classical mechanics are independent of the direction of time, but whether the same is true in quantum mechanics has been a subject of debate. While it is agreed that the laws that govern isolated quantum systems are time-symmetric, measurement changes the state of a system according to rules that only appear to hold forward in time, and there is a difference of opinion about the interpretation of this effect. Now theoretical physicists at the Université libre de Bruxelles have developed a fully time-symmetric formulation of quantum theory which establishes an exact link between this asymmetry and the fact that we can remember the past but not the future – a phenomenon that physicist Stephen Hawking has named the "psychological" arrow of time. The study offers new insights into the concepts of free choice and causality, and suggests that causality need not be considered a fundamental principle of physics. It also extends a cornerstone theorem in quantum mechanics due to Eugene Paul Wigner, pointing to new directions in the search for physics beyond the known models. The findings by Ognyan Oreshkov and Nicolas Cerf have been published this week in the journal Nature Physics. The idea that our choices at present can influence events in the future but not in the past is reflected in the rules of standard quantum theory as a principle that quantum theorists call "causality". In order to understand this principle, the authors of the new study analyze what the concept of choice in the context of quantum theory actually means. For example, we think that an experimenter can choose what measurement to perform on a given system, but not the outcome of the measurement. Correspondingly, according to the principle of causality, the choice of measurement can be correlated with outcomes of measurements in the future only, whereas the outcome of a measurement can be correlated with outcomes of both past and future measurements. The researchers argue that the defining property according to which we interpret the variable describing the measurement as being up to the experimenter's choice, while the outcome is not, is that it can be known before the actual measurement takes place. From this perspective, the principle of causality can be understood as a constraint on the information available about different variables at different times. This constraint is not time-symmetric since both the choice of measurement and the outcome of a measurement can be known a posteriori. This, according to the study, is the essence of the asymmetry implicit in the standard formulation of quantum theory. "Quantum theory has been formulated based on asymmetric concepts that reflect the fact that we can know the past and are interested in predicting the future. But the concept of probability is independent of time, and from a physics perspective it makes sense to try to formulate the theory in fundamentally symmetric terms", says Ognyan Oreshkov, the lead author of the study. To this end, the authors propose to adopt a new notion of measurement that is not defined only in terms of variables in the past, but can depend on variables in the future too.
"In the approach we propose, measurements are not interpreted as up to the 'free choices' of agents, but simply describe information about the possible events in different regions of space-time", says Nicolas Cerf, a co-author of the study and director of the Centre for Quantum Information and Communication at ULB. In the time-symmetric formulation of quantum theory that follows from this approach, the principle of causality and the psychological arrow of time are both shown to arise from what physicists call boundary conditions – parameters based on which the theory makes predictions, but whose values could be arbitrary in principle. Thus, for instance, according to the new formulation, it is conceivable that in some parts of the universe causality may be violated. Another consequence of the time-symmetric formulation is an extension of a fundamental theorem by Wigner, which characterizes the mathematical representation of physical symmetries and is central to the understanding of many phenomena, such as what elementary particles can exist. The study shows that in the new formulation symmetries can be represented in ways not permitted by the standard formulation, which could have far-reaching physical implications. One speculative possibility is that such symmetries may be relevant in a theory of quantum gravity, since they have the form of transformations that have been conjectured to occur in the presence of black holes. "Our work shows that if we believe that time symmetry must be a property of the fundamental laws of physics, we have to consider the possibility for phenomena beyond those conceivable in standard quantum theory. Whether such phenomena exist and where we could search for them is a big open question", explains Oreshkov.
Power decreases trust in social exchange: How does lacking vs. possessing power in a social exchange affect people’s trust in their exchange partner? An answer to this question has broad implications for a number of exchange settings in which dependence plays an important role. Here, we report on a series of experiments in which we manipulated participants’ power position in terms of structural dependence and observed their trust perceptions and behaviors. Over a variety of different experimental paradigms and measures, we find that more powerful actors place less trust in others than less powerful actors do. Our results contradict predictions by rational actor models, which assume that low-power individuals are able to anticipate that a more powerful exchange partner will place little value on the relationship with them, will thus tend to behave opportunistically, and consequently cannot be trusted. Conversely, our results support predictions by motivated cognition theory, which posits that low-power individuals want their exchange partner to be trustworthy and then act according to that desire. Mediation analyses show that, consistent with the motivated cognition account, having low power increases individuals’ hope and, in turn, their perceptions of their exchange partners’ benevolence, which ultimately leads them to trust. Significance: Trust is pivotal to the functioning of society. This work tests competing predictions about how having low vs. high power may impact people’s tendency to place trust in others. Using different experimental paradigms and measures and confirming predictions based on motivated cognition theory, we show that people low in power are significantly more trusting than more powerful people and that this effect can be explained by the constructs of hope and perceived benevolence. Our findings make important contributions to the literatures on trust, power, and motivated cognition.
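As a rough illustration of the mediation logic reported here (a generic sketch on simulated data with invented variable names and effect sizes, not the authors' actual analysis), the basic check is that the effect of the power manipulation on trust shrinks once the hypothesized mediators are controlled for:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated data following the hypothesized chain: low power -> hope -> benevolence -> trust.
low_power = rng.integers(0, 2, n)                   # 1 = low-power condition (invented coding)
hope = 0.8 * low_power + rng.normal(size=n)
benevolence = 0.7 * hope + rng.normal(size=n)
trust = 0.6 * benevolence + rng.normal(size=n)
df = pd.DataFrame(dict(low_power=low_power, hope=hope,
                       benevolence=benevolence, trust=trust))

# Total effect of the power manipulation on trust ...
total = smf.ols("trust ~ low_power", df).fit().params["low_power"]
# ... and the direct effect once the two mediators are controlled for.
direct = smf.ols("trust ~ low_power + hope + benevolence", df).fit().params["low_power"]

print(f"total effect:  {total:.3f}")
print(f"direct effect: {direct:.3f}  (shrinks toward zero if the effect runs through the mediators)")
```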
Physicists put the arrow of time under a quantum microscope: the arrow of time can arise via quantum fluctuations (by Jon Cartwright). Disorder, or entropy, in a microscopic quantum system has been measured by an international group of physicists. The team hopes that the feat will shed light on the "arrow of time": the observation that time always marches towards the future. The experiment involved continually flipping the spin of carbon atoms with an oscillating magnetic field and links the emergence of the arrow of time to quantum fluctuations between one atomic spin state and another. "That is why we remember yesterday and not tomorrow," explains group member Roberto Serra, a physicist specializing in quantum information at the Federal University of ABC in Santo André, Brazil. At the fundamental level, he says, quantum fluctuations are involved in the asymmetry of time. The arrow of time is often taken for granted in the everyday world. We see an egg breaking, for example, yet we never see the yolk, white and shell fragments come back together again to recreate the egg. It seems obvious that the laws of nature should not be reversible, yet there is nothing in the underlying physics to say so. The dynamical equations of an egg breaking run just as well forwards as they do backwards. Entropy, however, provides a window onto the arrow of time. Most eggs look alike, but a broken egg can take on any number of forms: it could be neatly cracked open, scrambled, splattered all over a pavement, and so on. A broken egg is a disordered state – that is, a state of greater entropy – and because there are many more disordered than ordered states, it is more likely for a system to progress towards disorder than order. This probabilistic reasoning is encapsulated in the second law of thermodynamics, which states that the entropy of a closed system always increases over time. According to the second law, time cannot suddenly go backwards because this would require entropy to decrease. It is a convincing argument for a complex system made up of a great many interacting particles, like an egg, but what about a system composed of just one particle? Serra and colleagues have delved into this murky territory with measurements of entropy in an ensemble of carbon-13 atoms contained in a sample of liquid chloroform. Although the sample contained roughly a trillion chloroform molecules, the non-interacting quantum nature of the molecules meant that the experiment was equivalent to performing the same measurement on a single carbon atom, one trillion times. Serra and colleagues applied an oscillating external magnetic field to the sample, which continually flipped the spin state of a carbon atom between up and down. They ramped up the intensity of the field oscillations to increase the frequency of the spin-flipping, and then brought the intensity back down again. Had the system been reversible, the overall distribution of carbon spin states would have been the same at the end as at the start of the process. Using nuclear magnetic resonance and quantum-state tomography, however, Serra and colleagues measured an increase in disorder among the final spins. Because of the quantum nature of the system, this was equivalent to an increase in entropy in a single carbon atom. According to the researchers, entropy rises for a single atom because of the speed with which it is forced to flip its spin.
Unable to keep up with the field-oscillation intensity, the atom begins to fluctuate randomly, like an inexperienced dancer failing to keep pace with up-tempo music. "It's easier to dance to a slow rhythm than a fast one," says Serra. The group has managed to observe the existence of the arrow of time in a quantum system, says experimentalist Mark Raizen of the University of Texas at Austin in the US, who has also studied irreversibility in quantum systems. But Raizen stresses that the group has not observed the "onset" of the arrow of time. "This [study] does not close the book on our understanding of the arrow of time, and many questions remain," he adds. One of those questions is whether the arrow of time is linked to quantum entanglement – the phenomenon whereby two particles exhibit instantaneous correlations with each other, even when separated by vast distances. This idea is nearly 30 years old and has enjoyed a recent resurgence in popularity. However, this link is less to do with growing entropy and more to do with an unstoppable dispersion of quantum information. Indeed, Serra believes that by harnessing quantum entanglement, it may even be possible to reverse the arrow of time in a microscopic system. "We're working on it," he says. "In the next generation of our experiments on quantum thermodynamics we will explore such aspects." The research is described in Physical Review Letters.
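A minimal way to see what "an increase in entropy in a single atom" can mean formally (a generic, textbook-style illustration, not the group's data analysis) is to compute the von Neumann entropy S = −Tr(ρ ln ρ) of a single spin's density matrix before and after its quantum coherences are scrambled:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]              # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

# A spin in a definite superposition state: a pure state, zero entropy.
pure = np.array([[0.5, 0.5],
                 [0.5, 0.5]])

# The same populations after the drive has effectively randomized the phase.
dephased = np.array([[0.5, 0.0],
                     [0.0, 0.5]])

print(von_neumann_entropy(pure))       # ~0.0
print(von_neumann_entropy(dephased))   # ~0.693 = ln 2, the maximum for a single spin
```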
New Derivation of pi Links Quantum Physics and Pure Mathematics: In 1655 the English mathematician John Wallis published a book in which he derived a formula for pi as the product of an infinite series of ratios. Now researchers from the University of Rochester, in a surprise discovery, have found the same formula in quantum mechanical calculations of the energy levels of a hydrogen atom. "We weren't looking for the Wallis formula for pi. It just fell into our laps," said Carl Hagen, a particle physicist at the University of Rochester. Having noticed an intriguing trend in the solutions to a problem set he had developed for students in a class on quantum mechanics, Hagen recruited mathematician Tamar Friedmann and they realized this trend was in fact a manifestation of the Wallis formula for pi. "It was a complete surprise - I jumped up and down when we got the Wallis formula out of equations for the hydrogen atom," said Friedmann. "The special thing is that it brings out a beautiful connection between physics and math. I find it fascinating that a purely mathematical formula from the 17th century characterizes a physical system that was discovered 300 years later."
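For reference, the 1655 Wallis product referred to here is

\[
\frac{\pi}{2} \;=\; \prod_{n=1}^{\infty}\frac{2n}{2n-1}\cdot\frac{2n}{2n+1} \;=\; \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdots,
\]

which Friedmann and Hagen reportedly recover from the large-angular-momentum limit of a variational calculation of hydrogen energy levels; the derivation itself is in their paper and is not reproduced here.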
Could All Particles in Physics Be Mini Black Holes?: The idea that all particles are mini black holes has major implications for both particle physics and astrophysics, say scientists. Could it really be possible that all particles are mini-black holes? That’s the tantalising suggestion from Donald Coyne from UC Santa Cruz (now deceased) and D C Cheng from the Almaden Research Center near San Jose. Black holes are regions of space in which gravity is so strong that nothing, not even light, can escape. The trouble with gravity is that on anything other than an astrophysical scale, it is so weak that it can safely be ignored. However, many physicists have assumed that on the tiniest scale, the Planck scale, gravity regains its strength. In recent years some evidence to support this contention has emerged from string theory, where gravity plays a stronger role in higher dimensional space. It’s only in our four dimensional space that gravity appears so weak. Since these dimensions become important only on the Planck scale, it’s at that level that gravity re-asserts itself. And if that’s the case, then mini-black holes become a possibility. Coyne and Cheng ask what properties black holes might have on that scale and it turns out that they may be far more varied than anyone imagined. The quantisation of space on this level means that mini-black holes could turn up at all kinds of energy levels. They predict the existence of huge numbers of black hole particles at different energy levels. So common are these black holes that the authors suggest that “All particles may be varying forms of stabilized black holes”. That’s an ambitious claim that’ll need plenty of experimental backing. The authors say this may come from the LHC, which could begin to probe the energies at which these kinds of black holes will be produced. The authors end with the caution that it would be wrong to think of the LHC as a “black hole factory”; not because it won’t produce black holes (it almost certainly will), but because, if they are right, every other particle accelerator in history would have been producing black holes as well. In fact, if this thinking is correct, there’s a very real sense in which we are made from black holes. Curious!
Every Thing Must Go: Metaphysics Naturalized: Every Thing Must Go argues that the only kind of metaphysics that can contribute to objective knowledge is one based specifically on contemporary science as it really is, and not on philosophers' a priori intuitions, common sense, or simplifications of science. In addition to showing how recent metaphysics has drifted away from connection with all other serious scholarly inquiry as a result of not heeding this restriction, Ladyman and Ross demonstrate how to build a metaphysics compatible with current fundamental physics ('ontic structural realism'), which, when combined with their metaphysics of the special sciences ('rainforest realism'), can be used to unify physics with the other sciences without reducing these sciences to physics itself. Taking science metaphysically seriously, Ladyman and Ross argue, means that metaphysicians must abandon the picture of the world as composed of self-subsistent individual objects, and the paradigm of causation as the collision of such objects. Every Thing Must Go also assesses the role of information theory and complex systems theory in attempts to explain the relationship between the special sciences and physics, treading a middle road between the grand synthesis of thermodynamics and information, and eliminativism about information. The consequences of the authors' metaphysical theory for central issues in the philosophy of science are explored, including the implications for the realism vs. empiricism debate, the role of causation in scientific explanations, the nature of causation and laws, the status of abstract and virtual objects, and the objective reality of natural kinds.
Zeno effect' verified—atoms won't move while you watch: The work opens the door to a fundamentally new method to control and manipulate the quantum states of atoms and could lead to new kinds of sensors. The experiments were performed in the Ultracold Lab of Mukund Vengalattore, assistant professor of physics, who has established Cornell's first program to study the physics of materials cooled to temperatures as low as 0.000000001 degree above absolute zero. The work is described in the Oct. 2 issue of the journal Physical Review Letters. Graduate students Yogesh Patil and Srivatsan K. Chakram created and cooled a gas of about a billion rubidium atoms inside a vacuum chamber and suspended the mass between laser beams. In that state the atoms arrange in an orderly lattice just as they would in a crystalline solid. But at such low temperatures, the atoms can "tunnel" from place to place in the lattice. The famous Heisenberg uncertainty principle says that the position and velocity of a particle cannot both be precisely known. Temperature is a measure of a particle's motion. Under extreme cold velocity is almost zero, so there is a lot of flexibility in position; when you observe them, atoms are as likely to be in one place in the lattice as another. The researchers demonstrated that they were able to suppress quantum tunneling merely by observing the atoms. This so-called "Quantum Zeno effect", named for a Greek philosopher, derives from a proposal in 1977 by E. C. George Sudarshan and Baidyanath Misra at the University of Texas at Austin, who pointed out that the weird nature of quantum measurements allows, in principle, for a quantum system to be "frozen" by repeated measurements. Previous experiments have demonstrated the Zeno effect with the "spins" of subatomic particles. "This is the first observation of the Quantum Zeno effect by real space measurement of atomic motion," Vengalattore said. "Also, due to the high degree of control we've been able to demonstrate in our experiments, we can gradually 'tune' the manner in which we observe these atoms. Using this tuning, we've also been able to demonstrate an effect called 'emergent classicality' in this quantum system." Quantum effects fade, and atoms begin to behave as expected under classical physics. The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser. A light microscope can't see individual atoms, but the imaging laser causes them to fluoresce, and the microscope captured the flashes of light. When the imaging laser was off, or turned on only dimly, the atoms tunneled freely. But as the imaging beam was made brighter and measurements made more frequently, the tunneling was reduced dramatically. "This gives us an unprecedented tool to control a quantum system, perhaps even atom by atom," said Patil, lead author of the paper. Atoms in this state are extremely sensitive to outside forces, he noted, so this work could lead to the development of new kinds of sensors. The experiments were made possible by the group's invention of a novel imaging technique that made it possible to observe ultracold atoms while leaving them in the same quantum state. "It took a lot of dedication from these students and it has been amazing to see these experiments be so successful," Vengalattore said. "We now have the unique ability to control quantum dynamics purely by observation." The popular press has drawn a parallel of this work with the "weeping angels" depicted in the Dr.
Who television series – alien creatures who look like statues and can't move as long as you're looking at them. There may be some sense to that. In the quantum world, the folk wisdom really is true: "A watched pot never boils."
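A schematic way to see why more frequent observation suppresses tunnelling (a textbook-style two-level sketch, not the Cornell group's lattice model; the tunnelling rate J and the total time are illustrative) is that the probability of leaving the initial state grows only quadratically at short times, so dividing a fixed total time into N measured intervals leaves a survival probability that approaches one as N grows:

```python
import numpy as np
from scipy.linalg import expm

# Two-level toy system: |0> is the initial site, |1> the neighbouring site it can tunnel to.
J = 1.0                                   # illustrative tunnelling rate
H = J * np.array([[0.0, 1.0],
                  [1.0, 0.0]])

def survival_probability(T, n_measurements):
    """Probability that every one of n_measurements projective measurements,
    spaced evenly over total time T, finds the system still in |0>."""
    dt = T / n_measurements
    U = expm(-1j * H * dt)
    p_stay_per_interval = abs(U[0, 0]) ** 2   # = cos^2(J * dt) for this Hamiltonian
    return p_stay_per_interval ** n_measurements

for n in (1, 2, 10, 100):
    print(n, round(survival_probability(1.0, n), 4))
# The printed probability climbs toward 1 as measurements become more frequent: the Zeno effect.
```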
Is Philosophy a Grand Waste of Time?: What is philosophy? Is it largely a grand waste of time, as some scientists (like Peter Atkins and Stephen Hawking) suppose? Here's an extract from a forthcoming publication of mine ... On my view, philosophical questions are for the most part conceptual rather than scientific or empirical and the methods of philosophy are, broadly speaking, conceptual rather than scientific or empirical. Here's a simple conceptual puzzle. At a family get-together the following relations held directly between those present: Son, Daughter, Mother, Father, Aunt, Uncle, Niece, Nephew, and Cousin. Could there have been only four people present at that gathering? At first glance, there might seem to be a conceptual obstacle to there being just four people present - surely, more people are required for all those familial relations to hold between them? But in fact the appearance is deceptive. There could just be four people there (one concrete four-person configuration is checked mechanically in the short sketch at the end of this piece). To see that there being just four people present is not conceptually ruled out, we have to unpack, and explore the connections between, the various concepts involved. That is something that can be done from the comfort of your armchair. Many philosophical puzzles have a similar character. Consider for example this puzzle associated with Heraclitus. If you jump into a river and then jump in again, the river will have changed in the interim: the water will have moved, the mud changed position, and so on. So it won't be the same. But if it's not the same river, then the number of rivers that you jump into is two, not one. It seems we're forced to accept the paradoxical - indeed, absurd - conclusion that you can't jump into one and the same river twice. Being forced into such a paradox by a seemingly cogent argument is a common philosophical predicament. This particular puzzle is fairly easily solved: the paradoxical conclusion that the number of rivers jumped into is two not one is generated by a faulty inference. Philosophers distinguish at least two kinds of identity or sameness. Numerical identity holds where the number of objects is one, not two (as when we discover that Hesperus, the evening star, is identical with Phosphorus, the morning star). Qualitative identity holds where two objects share the same qualities (e.g. two billiard balls that are molecule-for-molecule duplicates of each other). We use the expression 'the same' to refer to both sorts of identity. Having made this conceptual clarification, we can now see that the argument that generates our paradox trades on an ambiguity. It involves a slide from the true premise that the river jumped in the second time isn't qualitatively 'the same' to the conclusion that it is not numerically 'the same'. We fail to spot the flaw in the reasoning because the words 'the same' are used in each case. But now the paradox is resolved: we don't have to accept that absurd conclusion. Here's an example of how, by unpacking and clarifying concepts, it is possible to solve a classical philosophical puzzle. Perhaps not all philosophical puzzles can be solved by such means, but at least one can. So some philosophical puzzles are essentially conceptual in nature, and some (well, one at least) can be solved by armchair, conceptual methods. Still, I have begun with a simple, some might say trivial, philosophical example. What of the so-called 'hard problems' of philosophy, such as the mind-body problem?
The mind-body problem, or at least certain versions of it, also appears to be essentially conceptual in character. On the one hand, there appear to be reasons to think that if the mental is to have causal effects on the physical, then it will have to be identical with the physical. On the other hand, there appear to be conceptual obstacles to identifying the mental with the physical. Of course, scientists might establish various correlations between the mental and the physical. Suppose, for the sake of argument, that science establishes that whenever someone is in pain, their C-fibres are firing, and vice versa. Would scientists have then established that these properties are one and the same property - that pain just is C-fibre firing - in the way they have established that, say, heat just is molecular motion or water just is H2O? Not necessarily. Correlation is not identity. And it strikes many of us as intuitively obvious that pain just couldn't be a physical property like C-fibre firing - that these properties just couldn't be identical in that way. Of course, the intuition that certain things are conceptually ruled out can be deceptive. Earlier, we saw that the appearance that the concepts son, daughter, etc. are such that there just had to be more than four people at that family gathering was mistaken: when we unpack the concepts and explore the connections between them it turns out there's no such conceptual obstacle. Philosophers have attempted to sharpen up the common intuition that there's a conceptual obstacle to identifying pain with C-fibre firing or some other physical property into a philosophical argument. Consider Kripke's anti-physicalist argument, for example, which turns on the thought that the conceptual impossibility of fool's pain (of something that feels like pain but isn't because the underlying physical essence is absent), combined with the conceptual possibility of pain without C-fibre firing (I can conceive of a situation in which I think I am in pain though my C-fibres are not firing), conceptually rules out pain having C-fibre firing as an underlying physical essence (which it would have if the identity theory were true). [1] Has Kripke here identified a genuine conceptual obstacle? Perhaps. Or perhaps not: perhaps it will turn out, on closer examination, that there is no such obstacle here. The only way to show that, however, will be through logical and conceptual work. Just as in the case of our puzzle about whether only four people might be at the family gathering and the puzzle about jumping into one and the same river twice, a solution will require that we engage, not in an empirical investigation, but in reflective armchair inquiry. Establishing more facts about and a greater understanding of what happens in people's brains when they are in various mental states, etc. will no doubt be scientifically worthwhile, but it won't, by itself, allow us to answer the question of whether there is such a conceptual obstacle. So, many philosophical problems - from some of the most trivial to some of the hardest - appear to be essentially conceptual in nature, requiring armchair, conceptual work to solve. Some are solvable, and indeed have even been solved (the puzzle about the river). Others aren't solved, though perhaps they might be.
On the other hand, it might turn out that at least some philosophical problems are necessarily insoluble, perhaps because we have certain fundamental conceptual commitments that are either directly irreconcilable or else generate unavoidable paradoxes when combined with certain empirically discovered facts. So there are perfectly good questions that demand answers, and that can in at least some cases be answered, though not by empirical means, let alone by the very specific forms and institutions of that mode of investigation referred to as 'the scientific method'. In order to solve many classic philosophical problems, we'll need to retire not to the lab, but to our armchairs. But is that all there is to philosophy? What of the grander metaphysical vision traditionally associated with academic philosophy? What of plumbing the deep, metaphysical structure of reality? That project is often thought to involve discerning, again by armchair methods, not what is the case (that's the business of empirical enquiry) but what, metaphysically, must be so. But how are philosophers equipped to reveal such hidden metaphysical depths by sitting in their armchairs with their eyes closed and having a good think? I suspect this is the main reason why there's considerable suspicion of philosophy in certain scientific circles. If we want to find out about reality - about how things stand outside our own minds - surely we will need to rely on empirical methods. There is no other sort of window on to reality - no other knowledge-delivery mechanism by which knowledge of the fundamental nature of that reality might be revealed. This is, of course, a traditional empiricist worry. Empiricists insist it's by means of our senses (or our senses enhanced by scientific tools and techniques) that the world is ultimately revealed. There is no mysterious extra sense, faculty, or form of intuition we might employ, while sat in our armchairs, to reveal further, deep, metaphysical facts about external reality. If the above thought is correct, and armchair methods are incapable of revealing anything about the nature of reality outside our own minds, then philosophy, conceived as a grand metaphysical exploration upon which we can embark while never leaving the comfort of our armchairs, is in truth a grand waste of time. I'm broadly sympathetic to this skeptical view about the value of armchair methods in revealing reality. Indeed, I suspect it's correct. So I have a fairly modest conception of the capabilities of philosophy. Yes, I believe we can potentially solve philosophical puzzles by armchair methods, and I believe this can be a valuable exercise. However, I'm suspicious of the suggestion that we should construe what we then achieve as our having made progress in revealing the fundamental nature of reality, a task to which I suspect such reflective, armchair methods are hopelessly inadequate.
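As a footnote to the family-gathering puzzle above, here is the mechanical check promised earlier: one concrete four-person configuration (the names are invented) in which two siblings each attend with one of their children, so that all nine relations hold among those present.

```python
# One concrete four-person configuration for the family-gathering puzzle (invented names):
# Ann and Bob are siblings; Carla is Ann's daughter; Dave is Bob's son.
female = {"Ann", "Carla"}
parent_of = {("Ann", "Carla"), ("Bob", "Dave")}        # (parent, child) pairs among those present
siblings = {frozenset({"Ann", "Bob"})}

found = set()
for p, c in parent_of:
    found.add("Mother" if p in female else "Father")
    found.add("Daughter" if c in female else "Son")
for p, c in parent_of:
    for q, d in parent_of:
        if frozenset({p, q}) in siblings:              # q is c's aunt/uncle; c is q's niece/nephew
            found.add("Aunt" if q in female else "Uncle")
            found.add("Niece" if c in female else "Nephew")
            found.add("Cousin")                        # c and d are cousins: their parents are siblings

print(sorted(found))
# ['Aunt', 'Cousin', 'Daughter', 'Father', 'Mother', 'Nephew', 'Niece', 'Son', 'Uncle']
```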
Toward a unifying framework for evolutionary processes: the theories of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been obtained independently in both fields, and many others are unique to their respective fields. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields.
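As a rough sketch of what casting evolutionary algorithms and population-genetics models in one framework can look like in code (a generic toy, not the authors' formalism; the operators and parameters are invented for illustration), the evolutionary loop is written once and the selection and variation operators are plugged in as interchangeable components:

```python
import random

# A generic evolutionary loop with pluggable components (toy illustration).
def evolve(population, fitness, select, mutate, generations):
    for _ in range(generations):
        scored = [(fitness(x), x) for x in population]
        population = [mutate(select(scored)) for _ in population]
    return population

# Example components for a bit-string toy problem (maximize the number of 1s).
def fitness(x):
    return sum(x)

def tournament_select(scored, k=2):                      # GA-style tournament selection
    return max(random.sample(scored, k))[1]

def proportional_select(scored):                         # Wright-Fisher-like fitness-proportional sampling
    weights = [f + 1e-9 for f, _ in scored]
    return random.choices([x for _, x in scored], weights=weights)[0]

def point_mutate(x, rate=0.05):                          # per-locus point mutation
    return [bit ^ 1 if random.random() < rate else bit for bit in x]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
final = evolve(pop, fitness, tournament_select, point_mutate, generations=30)
print(max(sum(x) for x in final))                        # swap in proportional_select to change regimes
```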