Piketty and the limits of marginal productivity theory - Lars Pålsson Syll: "The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist today (John Maynard Keynes, General Theory, 1936)." Thomas Piketty’s book Capital in the Twenty-First Century is in many ways an impressive magnum opus. It’s a wide-ranging and weighty book, almost 700 pages thick, containing an enormous amount of empirical material on the distribution of income and wealth for almost all developed countries in the world for the last one and a half centuries. But it does not stop at this massive amount of data. Piketty also theorizes and tries to interpret the trends in the presented historical time series data. One of the more striking – and debated – trends that emerges from the data is a kind of generalized U-shaped Kuznets curve for the shares of the top 10% and top 1% of wealth and income, showing extremely high values for the period up to the First World War, then dropping until the 1970s/80s, when they – especially for the top 1% – start to rise sharply. Contrary to Kuznets’s (1955) original hypothesis, there does not seem to be any evidence for the idea that income differences should diminish pari passu with economic development. The gains from increased productivity have been far from evenly distributed in society. The optimistic view that there are automatic income and wealth equalizers, commonly held among growth and development economists until a few years ago, has been proven unwarranted. So, then, why have income differences more or less exploded since the 1980s? In an ongoing trend towards increasing inequality in both developing and emerging countries all over the world, wage shares have fallen substantially – and the growth in real wages has lagged far behind the growth in productivity – over the past three decades. As Karl Marx argued 150 years ago, the division between profits and wages is ultimately determined by the struggle between classes – something fundamentally different from the hypothesized “marginal products” of Cobb-Douglas or CES varieties of neoclassical production functions. Compared to Marx’s Capital, the one written by Piketty has a much more fragile theoretical foundation. Where Piketty concentrates on classifying different income and wealth categories, Marx focused on the confrontation between different classes, each struggling to appropriate as large a portion of the societal net product as possible. Piketty’s painstaking empirical research is doubtless very impressive, but his theorizing – although occasionally critical of orthodox economics and giving a rather dismal view of present-day and future capitalism as a rich-get-richer inequality society – is to a large extent shackled by neoclassical economic theory, which unfortunately makes some of his more central theoretical analyses rather unfruitful from the perspective of realism and relevance. A society where we allow the inequality of incomes and wealth to increase without bounds sooner or later implodes.
A society that promotes unfettered selfishness as the one and only virtue erodes the cement that keeps us together, and in the end we are left only with people dipped in the ice-cold water of egoism and greed. If reading Piketty’s magnum opus gets people thinking about these dangerous trends in modern capitalism, it may – in spite of its theoretical limitations – have a huge positive political impact. And that is not so bad. For, as the author of the original Capital once famously wrote: “The philosophers have only interpreted the world, in various ways; the point, however, is to change it.”
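For reference, the marginal productivity benchmark criticized above can be written out explicitly. The Cobb-Douglas form below is the standard textbook illustration, offered here as a sketch rather than anything taken from Piketty's or Syll's own text:

$$ Y = A K^{\alpha} L^{1-\alpha}, \qquad w = \frac{\partial Y}{\partial L} = (1-\alpha)\,\frac{Y}{L}, \qquad r = \frac{\partial Y}{\partial K} = \alpha\,\frac{Y}{K}. $$

Under this theory the wage share of national income is pinned at $wL/Y = 1-\alpha$, a technological constant with no room for bargaining power or class conflict; the sustained fall in wage shares noted above is precisely the kind of movement this benchmark struggles to explain.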
Quantum Physics, Interpretations, and Bell’s Theorem: Two Neglected Solutions - Bell’s theorem admits several interpretations or ‘solutions’, the standard interpretation being ‘indeterminism’ and another being ‘nonlocality’. In this article two further solutions are investigated, termed here ‘superdeterminism’ and ‘supercorrelation’. The former is especially interesting for philosophical reasons, if only because it is always rejected on the basis of extra-physical arguments. The latter, supercorrelation, will be studied here by investigating model systems that can mimic it, namely spin lattices. It is shown that in these systems the Bell inequality can be violated, even if they are local according to the usual definitions. Violation of the Bell inequality is traced back to violation of ‘measurement independence’. These results emphasize the importance of studying the premises of the Bell inequality in realistic systems.
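For readers who want the inequality in explicit form, the CHSH version is the one usually meant; the notation below is the conventional textbook statement, not a quotation from the paper:

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{(local hidden variables)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum mechanics)}. $$

Deriving the classical bound requires, among other premises, 'measurement independence': the hidden state $\lambda$ of the pair must be statistically independent of the setting choices, $\rho(\lambda|a,b) = \rho(\lambda)$. Relaxing exactly that premise is what the 'superdeterminism' and 'supercorrelation' solutions discussed in the abstract amount to.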
Softly Fine-Tuned Standard Model and the Scale of Inflation: The direct coupling between the Higgs field and the spacetime curvature, if finely tuned, is known to stabilize the Higgs boson mass. The fine-tuning is soft because the Standard Model (SM) parameters are subject to no fine-tuning thanks to their independence from the Higgs-curvature coupling. This soft fine-tuning leaves behind a large vacuum energy ∝ Λ_UV^4 which inflates the Universe with a Hubble rate ∝ Λ_UV, Λ_UV being the SM ultraviolet boundary. This means that the tensor-to-scalar ratio inferred from cosmic microwave background polarization measurements by BICEP2, Planck and others leads to the determination of Λ_UV. The exit from the inflationary phase, as usual, is accomplished via decays of the vacuum energy. Here we show that identification of Λ_UV with the inflaton, as a sliding UV scale upon the SM, respects the soft fine-tuning constraint and does not disrupt the stability of the SM Higgs boson.
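To make the link between the tensor-to-scalar ratio and the inflationary scale concrete, the standard single-field slow-roll relations can be used; these are generic textbook formulas assumed as background here, not results quoted from the paper:

$$ r = \frac{\mathcal{P}_T}{\mathcal{P}_\zeta}, \qquad \mathcal{P}_T = \frac{2H^2}{\pi^2 M_{\rm Pl}^2}, \qquad V^{1/4} \simeq 1.06\times 10^{16}\ \mathrm{GeV}\,\left(\frac{r}{0.01}\right)^{1/4}, $$

so a measurement of $r$ fixes the inflationary Hubble rate $H$ and the vacuum energy scale, which in the scenario above is identified with the vacuum energy $\propto \Lambda_{\rm UV}^4$ left over by the soft fine-tuning.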
Pinpointing punishment: Study identifies how a key brain region orchestrates punitive decisions: It’s a question most attorneys wish they could answer: How and why do judges and juries arrive at their decisions? The answer, according to Joshua Buckholtz, may lie in the way our brains are wired. A new study co-authored by Buckholtz, an assistant professor of psychology at Harvard, René Marois, professor and chair of psychology at Vanderbilt University, and colleagues explains how a brain region called the dorsolateral prefrontal cortex (DLPFC) coordinates third-party punishment decisions of the type made by judges and juries. The study is described in a paper published recently in the journal Neuron. “Third-party punishment is the cornerstone of all modern systems of justice, and this study suggests that our ability to make these types of decisions originates in a very basic form of information processing that is not specific to social decision-making at all,” Buckholtz said. “We think that this low-level, domain-general process of information integration forms a foundation for bootstrapping higher-order cognitive and social processes.” For Buckholtz and Marois, the new paper represents the culmination of more than seven years of work. “We were able to significantly change the chain of decision-making and reduce punishment for crimes without affecting blameworthiness,” said Marois, co-senior author of the study. “This strengthens evidence that the dorsolateral prefrontal cortex integrates information from other parts of the brain to determine punishment and shows a clear neural dissociation between punishment decisions and moral-responsibility judgments.” While Buckholtz was still a graduate student at Vanderbilt, he and Marois published the first study of the neural mechanisms that underlie such third-party punishment decisions, and they continued to explore those mechanisms in later studies. But while those earlier papers showed that dorsolateral prefrontal cortex activity was correlated with punishment behavior, they weren’t able to pin down a causal role, or explain exactly what that brain region did to support these decisions. “It wasn’t entirely clear. Was this region corresponding to an evaluation of the mental state, or blameworthiness, of the perpetrator, or was it performing some other function? Was it assessing causal responsibility in a more general sense?” Buckholtz asked. “In this paper, we tried to develop a way to selectively map the role of this region to a more specific process and exclude alternative hypotheses.” To do that, Buckholtz, Marois, and colleagues turned to transcranial magnetic stimulation (TMS), a non-invasive technique that uses powerful electromagnets to reversibly interrupt brain information processing. As part of the study, Buckholtz and colleagues asked volunteers to read a series of scenarios that described a protagonist committing crimes ranging from simple theft to rape and murder. Each scenario varied by how morally responsible the perpetrator was for his or her actions and the degree of harm caused. In separate sessions, participants estimated perpetrators’ level of blameworthiness for each crime and decided how much punishment they should face while researchers stimulated the DLPFC using TMS. “What we show is that when you disrupt DLPFC activity, it doesn’t change the way they evaluate blameworthiness, but it does reduce the punishments they assign to morally responsible agents,” Buckholtz said.
The team was able to confirm those findings using functional MRI, and additionally was able to show that the dorsolateral prefrontal cortex was only sensitive to moral responsibility when making punishment (but not blameworthiness) decisions. This supported the idea that the brain region was not simply registering the causal responsibility of an action. Still, it didn’t answer what the region was actually doing during punishment decisions. “There had been some suggestion by others that DLPFC was important for inhibiting self-interested responses during punishment. That idea wasn’t consistent with our prior data, which led us to propose a different model,” Buckholtz said. “What this region is really good at — and it’s good at it regardless of the type of decision being made — is integrating information. In particular, punishment decisions require an integration of the culpability of a perpetrator for a wrongful act with the amount of harm they actually caused by the act.” In a previous study led by co-author Michael Treadway, now at Emory University, the authors showed that other brain regions are principally responsible for representing culpability and harm, and these areas pass this information to the prefrontal cortex when it comes time to make a decision. Using statistical models, the team showed that, under normal conditions, the impact of a perpetrator’s culpability on punishment decisions is negatively correlated with the impact of information about the amount of harm caused. “You can think of it as a zero-sum game,” Marois said. “The more you’re focused on the harm someone causes, the less you’re going to focus on how culpable they are, and the more you’re focused on their culpability, the less you focus on the harm.” Disrupting dorsolateral prefrontal cortex function, however, upends that balance. “It makes people rely more heavily on harm information and less heavily on culpability information,” Buckholtz explained. “Given the fact that, overall, TMS reduces punishment, that seemed counterintuitive to us at first. When we looked at the type of crimes this was affecting, we found it was mostly mid-range harms, like property crime and assaults. In such cases, the harm is relatively mild, but the person committing the crime had the intent to do much worse." As an example, Buckholtz cited the case of an assault that results in a broken arm. If one focuses on the perpetrator’s culpability, it’s easy to imagine that the assailant intended to do much more damage. In such an instance, focusing on the intent will lead to higher punishment than if one gives more weight to the actual amount of harm. The finding that a short dose of magnetic stimulation changes punishment decisions is sure to be of interest to those in the legal field. But not so fast, said Buckholtz. “Any suggestion that there are real-world applications for this work is wildly overblown. The magnitude of the TMS effect is quite modest, and our experiment does not replicate the conditions under which people make decisions in trial courts. The value of this study is in revealing basic mechanisms that the brain uses to render these decisions. TMS has no place in the legal system.” This study was made possible through support from the Research Network on Law and Neuroscience, supported by the John D. and Catherine T. 
MacArthur Foundation, which fosters research collaboration between neuroscientists and legal scholars; the National Institute of Mental Health; the National Institute on Drug Abuse; the Sloan Foundation; the Brain & Behavior Research Foundation; and the Massachusetts General Hospital Center for Law, Brain, and Behavior.
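As a purely illustrative sketch of the kind of integration the authors describe, the toy code below combines culpability and harm into a punishment score and shows how shifting weight from culpability toward harm can lower punishment for a mid-range crime. The weights, ratings, and scale are hypothetical and are not the study's actual statistical model:

```python
# Toy illustration (not the study's model): punishment as a weighted
# integration of culpability and harm, each rated on a 0-10 scale.

def punishment(culpability: float, harm: float, w_culp: float) -> float:
    """Weighted combination; the culpability and harm weights sum to 1."""
    w_harm = 1.0 - w_culp
    return w_culp * culpability + w_harm * harm

# A mid-range crime: high intent (culpability) but moderate realized harm,
# e.g. an assault meant to do far worse than the broken arm it caused.
culpability, harm = 9.0, 4.0

baseline = punishment(culpability, harm, w_culp=0.7)   # intact DLPFC: culpability weighted heavily
disrupted = punishment(culpability, harm, w_culp=0.3)  # TMS shifts weight toward harm information

print(f"baseline punishment score:  {baseline:.1f}")   # 7.5
print(f"disrupted punishment score: {disrupted:.1f}")  # 5.5 -> lower, as the study reports
```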
Physicists experimentally realize a quantum Hilbert hotel: (Phys.org)—In 1924, the mathematician David Hilbert described a hotel with an infinite number of rooms that are all occupied. Demonstrating the counterintuitive nature of infinity, he showed that the hotel could still accommodate additional guests. Although clearly no such brick-and-mortar hotel exists, in a new paper published in Physical Review Letters, physicists Václav Potoček, et al., have physically realized a quantum Hilbert hotel by using a beam of light. In Hilbert's thought experiment, he explained that additional rooms could be created in a hotel that already has an infinite number of rooms because the hotel manager could simply "shift" all of the current guests to a new room according to some rule, such as moving everyone up one room (to leave the first room empty) or moving everyone up to twice their current room number (to create an infinite number of empty rooms by leaving the odd-numbered rooms empty). In their paper, the physicists proposed two ways to model this phenomenon—one theoretical and one experimental—both of which use the infinite number of quantum states of a quantum system to represent the infinite number of rooms in a hotel. The theoretical proposal uses the infinite number of energy levels of a particle in a potential well, and the experimental demonstration uses the infinite number of orbital angular momentum states of light. The scientists showed that, even though there is initially an infinite number of these occupied states (rooms), their quantum numbers (room numbers) can be remapped to twice their original values, freeing up an infinite number of additional states. On one hand, the phenomenon is counterintuitive: by doubling an infinite number of things, you get infinitely many more of them. And yet, as the physicists explain, it still makes sense because the total sum of the values of an infinite number of things can actually be finite. "As far as there being an infinite amount of 'something,' it can make physical sense if the things we can measure are still finite," coauthor Filippo Miatto, at the University of Waterloo and the University of Ottawa, told Phys.org. "For example, a coherent state of a laser mode is made with an infinite set of number states, but as the number of photons in each of the number states increases, the amplitudes decrease, so at the end of the day when you sum everything up the total energy is finite. The same can hold for all of the other quantum properties, so no, it is not surprising to the trained eye." The physicists also showed that the remapping can be done not only by doubling, but also by tripling, quadrupling, etc., the states' values. In the laser experiment, these procedures produce visible "petals" of light that correspond to the number that the states were multiplied by. The ability to remap energy states in this way could also have applications in quantum and classical information processing, where, for example, it could be used to increase the number of states produced or to increase the information capacity of a channel.
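A compact way to write the 'hotel' operation and Miatto's finiteness point, in standard quantum notation (a generic sketch, not the paper's own formulas):

$$ \sum_{n=0}^{\infty} c_n\,|n\rangle \ \longmapsto\ \sum_{n=0}^{\infty} c_n\,|2n\rangle, $$

which leaves every odd-numbered state unoccupied, exactly as in Hilbert's doubling rule; and for a coherent state $|\alpha\rangle = e^{-|\alpha|^2/2}\sum_n \frac{\alpha^n}{\sqrt{n!}}\,|n\rangle$ infinitely many number states are occupied while the mean photon number $\langle \hat{n} \rangle = |\alpha|^2$ — and hence the energy — remains finite.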
Quantum physics interpretations feel the heat: they are no longer just a matter of metaphysics: Rolf Landauer never thought his principle would solve the mysteries of quantum mechanics. He did expect, though, that information would play a part in making sense of quantum weirdness. And sure, nobody thinks that all the mysteries surrounding quantum mechanics are solved now — and many wonder whether they ever will be, for that matter. But a new approach to one deep quantum mystery suggests that viewing the world in terms of information, and applying Landauer's principle to it, does answer one question that many people believed to be unanswerable. That question, posed in many forms, boils down to whether quantum math describes something inherent and real about the physical world. Some experts say yes; others believe quantum math is just about what people can find out about the world. Another way of posing the question is to ask whether the quantum description of nature is “ontic” or “epistemic” — about reality, or about knowledge of reality. Most attempts to articulate an interpretation of what quantum math really means (and there are lots of such interpretations) tend to favor either an ontic or epistemic point of view. But even some epistemic interpretations maintain that outcomes of a measurement are determined by some intrinsic property of the system being measured. Those are sometimes lumped with the ontic group as “Type I” interpretations. Other interpretations (classified as Type II) hold that quantum measurements deal with an observer’s knowledge or belief about an underlying reality, not some inherently fixed property. Arguments about this issue have raged for decades. And you’d think they would continue to rage, as there would seem to be no possible way to determine which view is right. As long as all experiments come out the same way no matter which interpretation you prefer, it seems like the question is meaningless, or at least moot. But now an internationally diverse group of physicists alleges that there is in fact a way to ascertain which view is correct. If you’re a friend of reality — or otherwise in the Type I camp — you’re not going to like it. There’s no way to decide the debate within the confines of quantum mechanics itself, Adán Cabello and collaborators write in a new paper, online at arXiv.org. But if you throw in thermodynamics — the physics of heat — then a bit of logical deduction and a simple thought experiment can clinch the case for Type II. That experiment involves the manipulation of a quantum state, which is described by a mathematical expression called a wave function. A wave function can be used to compute the outcome of measurements on a particle, say a photon or electron. At the root of many quantum mysteries is the slight hitch that the wave function can only tell you the odds of getting different measurement results, not what the result of any specific measurement will be. To dispense with some unnecessary technicalities, let’s just say you can prepare a particle in a quantum state corresponding to its spin pointing up. You can then measure the spin using a detector that can be oriented in either the up-down direction or the left-right direction. Any measurement resets a quantum state; sometimes to a new state, but sometimes resetting it to the same state it was originally. So the net effect of each measurement is either to change the quantum state or leave it the same.
If you set this all up properly, the quantum state will change half the time — on average — if you repeat your measurement many times (randomly choosing which orientation to measure). It would be like flipping a coin and getting a random list of heads and tails. So if you kept a record of that chain of quantum measurements, you would write down a long list of 1s and 0s in random order, corresponding to whether the state changes or not. If the quantum state is Type I — corresponding to an intrinsic reality that you’re trying to find out about — it must already contain the information that you record before you make your measurement. But suppose you keep on making measurements, ad infinitum. Unless this quantum system has an infinitely large memory, it can’t know from the outset the ultimate order of all those 0s and 1s. “The system cannot have stored the values of the intrinsic properties for all possible sequences of measurements that the observer can perform,” write Cabello, of the University of Seville in Spain, and colleagues from China, Germany, Sweden and England. “This implies that the system has to generate new values and store them in its memory. For that reason, the system needs to erase part of the previously existing information.” And erasing is where Landauer’s principle enters the picture. Landauer, during a long career at IBM, was a pioneer in exploring the physics of computing. He was particularly interested in understanding the ultimate physical limits of computational efficiency, much in the way that 19th century physicists had investigated the principles regulating the efficiency of steam engines. Any computational process, Landauer showed, could be conducted without using up energy if performed carefully and slowly enough. (Or at least there was no lower limit to how much energy you needed.) But erasing a bit of information, Landauer demonstrated in a 1961 paper, always required some minimum amount of energy, thereby dissipating waste heat into the environment. A Type I quantum state, Cabello and colleagues argue, needs to erase old information to make room for the new, and therefore a long run of measurements should generate a lot of heat. The longer the list, the more heat is generated, leading to an infinite release of heat for an infinitely long list, the researchers calculated. It’s pretty hard to imagine how a finite quantum system could generate an infinite amount of heat. On the other hand, if your measurements are creating the list on the fly, then the quantum state is merely about your knowledge — and there’s no heat problem. If the quantum state is Type II, it “does not correspond to any intrinsic property of the observed system,” Cabello and coauthors note. “Here, the quantum state corresponds to the knowledge or expectations an external observer has. Therefore, the measurement does not cause heat emission from the observed system.” Fans of Type I interpretations could argue that somehow the quantum system knows in advance what measurement you will perform — in other words, you really can’t orient your detector randomly. That would imply that your behavior and the quantum system are both governed by some larger system observing superdeterministic laws that nobody knows anything about. Bizarre as that sounds, it would still probably be a better defense than attacking Landauer’s principle. “Landauer’s principle has been verified in actual experiments and is considered valid in the quantum domain,” Cabello and coauthors point out. 
“Therefore, whenever the temperature is not zero … the system should dissipate, at least, an amount of heat proportional to the information erased.” If you would rather not take their word for it, you should check out the September issue of Physics Today, in which Eric Lutz and Sergio Ciliberto explain the intimate links between Landauer’s principle, information and the second law of thermodynamics. “Having only recently become an experimental science,” Lutz and Ciliberto write, “the thermodynamics of information has potential to deliver new insights in physics, chemistry and biology.” The new paper by Cabello and colleagues appears to be an example of just such an insight. Nobody should expect this paper to end the quantum interpretation debate, of course. But it surely provides a new point of view for discussing it. “Ultimately, our work indicates that the long-standing question, Do the outcomes of experiments on quantum systems correspond to intrinsic properties? is not purely metaphysical,” Cabello and colleagues write. “Its answer in the affirmative has considerable physical consequences, testable through experimental observation. Its falsification will be equally exciting as it will force us to embrace radically new lines of thought.”
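The quantitative content of Landauer's principle invoked here is the standard bound below; the numbers are generic textbook values, not taken from Cabello and colleagues' paper:

$$ Q_{\text{erase}} \ \ge\ k_B T \ln 2 \ \text{per bit}, \qquad Q_{\text{total}} \ \ge\ N\, k_B T \ln 2 \ \xrightarrow{\ N\to\infty\ }\ \infty. $$

At room temperature the per-bit minimum is roughly $3\times 10^{-21}$ J, so a finite Type I system forced to keep overwriting its memory to track an unbounded random measurement record would have to dissipate unbounded heat at any nonzero temperature; that is the contradiction the authors exploit.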
Hawking radiation via tunneling from the spacetime of spinning cosmic string black holes: In this paper, we study Hawking radiation as a massless particle tunneling process across the event horizon from Schwarzschild and Reissner-Nordström black holes pierced by an infinitely long spinning cosmic string and a global monopole. Applying the WKB approximation and using a generalized Painlevé line element for stationary axisymmetric spacetimes, and taking into account that the ADM mass of the black hole decreases due to the presence of topological defects, it is shown that the Hawking temperature remains unchanged for these black holes. The tunneling of charged massive particles from Reissner-Nordström black holes is also studied; in both cases the tunneling rate is related to the change of the Bekenstein-Hawking entropy. The results extend the work of Parikh and Wilczek and are consistent with an underlying unitary theory.
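The Parikh-Wilczek relation the abstract refers to is usually quoted in the following standard form (units with $G = c = \hbar = k_B = 1$; a generic statement, not copied from this paper):

$$ \Gamma \ \sim\ e^{-2\,\mathrm{Im}\,S} \ =\ e^{\Delta S_{\rm BH}}, \qquad S_{\rm BH} = \frac{A}{4}, $$

so the emission probability is controlled by the change in Bekenstein-Hawking entropy as the black hole loses the energy carried away by the tunneling particle, which is why such results are read as consistent with an underlying unitary theory.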
- Cosmology from quantum potential in a brane-anti-brane system (Alireza Sepehri): Recently, some mathematical as well as theoretical physicists, including myself, have removed the big-bang singularity and predicted an infinite age of our universe while deriving quantum cosmology 'theory'. In this paper, the author shows that the same result can be obtained in string theory and M-theory - a result I have derived as well, and independently; the shape of the universe changes in different epochs. In this mechanism, first, N fundamental strings decay into N D0-anti-D0-brane pairs. Then the D0-branes join to each other, grow, and form a six-dimensional brane-antibrane system. This system is unstable and breaks down, and the present form of four-dimensional universes is produced: one universe and one anti-universe, in addition to one wormhole. Thus, there isn't any big bang in this cosmology, and the universe is a fundamental metaplectic string at the beginning. Also, the total age of the universe contains two parts, one related to the initial age and a second corresponding to the 'present' age of the universe (t_tot = t_initial + t_present). The 'initial' age of the universe in turn includes two parts, the age of that fundamental string and the time of transition (t_initial = t_transition + t_f-string). It is observed that only in the case t_f-string → ∞ does the scale factor of the universe vanish at the beginning and, as a result, the total age of the universe become infinite, as I have demonstrated.
• Hypertime -- why we need 2 dimensions of time: A Two-Time Universe? Physicist Explores How Second Dimension of Time Could Unify Physics Laws: For a long time, Itzhak Bars has been studying time. More than a decade ago, the physicist began pondering the role time plays in the basic laws of physics — the equations describing matter, gravity and the other forces of nature. Those laws are exquisitely accurate. Einstein mastered gravity with his theory of general relativity, and the equations of quantum theory capture every nuance of matter and other forces, from the attractive power of magnets to the subatomic glue that holds an atom’s nucleus together. But the laws can’t be complete. Einstein’s theory of gravity and quantum theory don’t fit together. Some piece is missing in the picture puzzle of physical reality. Bars thinks one of the missing pieces is a hidden dimension of time. Bizarre is not a powerful enough word to describe this idea, but it is a powerful idea nevertheless. With two times, Bars believes, many of the mysteries of today’s laws of physics may disappear. Of course, it’s not as simple as that. An extra dimension of time is not enough. You also need an additional dimension of space. It sounds like a new episode of “The Twilight Zone,” but it’s a familiar idea to most physicists. In fact, extra dimensions of space have become a popular way of making gravity and quantum theory more compatible. Extra space dimensions aren’t easy to imagine — in everyday life, nobody ever notices more than three. Any move you make can be described as the sum of movements in three directions — up-down, back and forth, or sideways. Similarly, any location can be described by three numbers (on Earth, latitude, longitude and altitude), corresponding to space’s three dimensions. Other dimensions could exist, however, if they were curled up in little balls, too tiny to notice. If you moved through one of those dimensions, you’d get back to where you started so fast you’d never realize that you had moved. “An extra dimension of space could really be there, it’s just so small that we don’t see it,” said Bars, a professor of physics and astronomy. Something as tiny as a subatomic particle, though, might detect the presence of extra dimensions. In fact, Bars said, certain properties of matter’s basic particles, such as electric charge, may have something to do with how those particles interact with tiny invisible dimensions of space. In this view, the Big Bang that started the baby universe growing 14 billion years ago blew up only three of space’s dimensions, leaving the rest tiny. Many theorists today believe that 6 or 7 such unseen dimensions await discovery. Only a few, though, believe that more than one dimension of time exists. Bars pioneered efforts to discern how a second dimension of time could help physicists better explain nature. “Itzhak Bars has a long history of finding new mathematical symmetries that might be useful in physics,” said Joe Polchinski, a physicist at the Kavli Institute for Theoretical Physics at UC Santa Barbara. “This two-time idea seems to have some interesting mathematical properties.” If Bars is on the right track, some of the most basic processes in physics will need re-examination. Something as simple as how particles move, for example, could be viewed in a new way. In classical physics (before the days of quantum theory), a moving particle was completely described by its momentum (its mass times its velocity) and its position.
But quantum physics says you can never know those two properties precisely at the same time. Bars alters the laws describing motion even more, postulating that position and momentum are not distinguishable at a given instant of time. Technically, they can be related by a mathematical symmetry, meaning that swapping position for momentum leaves the underlying physics unchanged (just as a mirror switching left and right doesn’t change the appearance of a symmetrical face). In ordinary physics, position and momentum differ because the equation for momentum involves velocity. Since velocity is distance divided by time, it requires the notion of a time dimension. If swapping the equations for position and momentum really doesn’t change anything, then position needs a time dimension too. “If I make position and momentum indistinguishable from one another, then something is changing about the notion of time,” said Bars. “If I demand a symmetry like that, I must have an extra time dimension.” Simply adding an extra dimension of time doesn’t solve everything, however. To produce equations that describe the world accurately, an additional dimension of space is needed as well, giving a total of four space dimensions. Then, the math with four space and two time dimensions reproduces the standard equations describing the basic particles and forces, a finding Bars described partially last year in the journal Physical Review D and has expanded upon in his more recent work. Bars’ math suggests that the familiar world of four dimensions — three of space, one of time — is merely a shadow of a richer six-dimensional reality. In this view the ordinary world is like a two-dimensional wall displaying shadows of the objects in a three-dimensional room. In a similar way, the observable universe of ordinary space and time may reflect the physics of a bigger space with an extra dimension of time. In ordinary life nobody notices the second time dimension, just as nobody sees the third dimension of an object’s two-dimensional shadow on a wall. This viewpoint has implications for understanding many problems in physics. For one thing, current theory suggests the existence of a lightweight particle called the axion, needed to explain an anomaly in the equations of the standard model of particles and forces. If it exists, the axion could make up the mysterious “dark matter” that astronomers say affects the motions of galaxies. But two decades of searching has failed to find proof that axions exist. Two-time physics removes the original anomaly without the need for an axion, Bars has shown, possibly explaining why it has not been found. On a grander level, two-time physics may assist in the quest to merge quantum theory with Einstein’s relativity in a single unified theory. The most popular approach to that problem today, superstring theory, also invokes extra dimensions of space, but only a single dimension of time. Many believe that a variant on string theory, known as M theory, will be the ultimate winner in the quantum-relativity unification game, and M theory requires 10 dimensions of space and one of time. Efforts to formulate a clear and complete version of M theory have so far failed. “Nobody has yet told us what the fundamental form of M theory is,” Bars said. “We just have clues — we don’t know what it is.” Adopting the more symmetric two-time approach may help. 
Describing the 11 dimensions of M theory in the language of two-time physics would require adding one time dimension plus one space dimension, giving nature 11 space and two time dimensions. “The two-time version of M theory would have a total of 13 dimensions,” Bars said. For some people, that might be considered unlucky. But for Bars, it’s a reason for optimism. “My hope,” he says, “is that this path that I am following will actually bring me to the right place.”
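One way to see why treating position and momentum symmetrically pushes toward extra dimensions can be sketched in standard phase-space notation; this is only an illustrative toy (Bars' actual construction gauges an Sp(2,R) symmetry acting on phase space), not his derivation:

$$ \begin{pmatrix} x \\ p \end{pmatrix} \longrightarrow \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ p \end{pmatrix}, \qquad [x, p] = i\hbar \ \text{is left unchanged}. $$

Promoting this position-momentum rotation from a symmetry of the commutator to a local gauge symmetry of the full dynamics is, in Bars' framework, only consistent with nontrivial physics when spacetime carries one extra spacelike and one extra timelike direction, which is the 4+2-dimensional picture described in the article.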
• You're not irrational, you're just quantum probabilistic: Researchers explain human decision-making with physics theory: The next time someone accuses you of making an irrational decision, just explain that you're obeying the laws of quantum physics. A new trend taking shape in psychological science not only uses quantum physics to explain humans' (sometimes) paradoxical thinking, but may also help researchers resolve certain contradictions among the results of previous psychological studies. According to Zheng Joyce Wang and others who try to model our decision-making processes mathematically, the equations and axioms that most closely match human behavior may be ones that are rooted in quantum physics. "We have accumulated so many paradoxical findings in the field of cognition, and especially in decision-making," said Wang, who is an associate professor of communication and director of the Communication and Psychophysiology Lab at The Ohio State University. "Whenever something comes up that isn't consistent with classical theories, we often label it as 'irrational.' But from the perspective of quantum cognition, some findings aren't irrational anymore. They're consistent with quantum theory—and with how people really behave." In two new review papers in academic journals, Wang and her colleagues spell out their new theoretical approach to psychology. One paper appears in Current Directions in Psychological Science, and the other in Trends in Cognitive Sciences. Their work suggests that thinking in a quantum-like way—essentially not following a conventional approach based on classical probability theory—enables humans to make important decisions in the face of uncertainty, and lets us confront complex questions despite our limited mental resources. When researchers try to study human behavior using only classical mathematical models of rationality, some aspects of human behavior do not compute. From the classical point of view, those behaviors seem irrational, Wang explained. For instance, scientists have long known that the order in which questions are asked on a survey can change how people respond—an effect previously attributed to vaguely labeled effects, such as "carry-over effects" and "anchoring and adjustment," or noise in the data. Survey organizations normally change the order of questions between respondents, hoping to cancel out this effect. But in the Proceedings of the National Academy of Sciences last year, Wang and collaborators demonstrated that the effect can be precisely predicted and explained by a quantum-like aspect of people's behavior. We usually think of quantum physics as describing the behavior of sub-atomic particles, not the behavior of people. But the idea is not so far-fetched, Wang said. She also emphasized that her research program neither assumes nor proposes that our brains are literally quantum computers. Other research groups are working on that idea; Wang and her collaborators are not focusing on the physical aspects of the brain, but rather on how abstract mathematical principles of quantum theory can shed light on human cognition and behaviors. "In the social and behavioral sciences as a whole, we use probability models a lot," she said. "For example, we ask, what is the probability that a person will act a certain way or make a certain decision? Traditionally, those models are all based on classical probability theory—which arose from the classical physics of Newtonian systems.
So it's really not so exotic for social scientists to think about quantum systems and their mathematical principles, too." Quantum physics deals with ambiguity in the physical world. The state of a particular particle, the energy it contains, its location—all are uncertain and have to be calculated in terms of probabilities. Quantum cognition is what happens when humans have to deal with ambiguity mentally. Sometimes we aren't certain about how we feel, or we feel ambiguous about which option to choose, or we have to make decisions based on limited information. "Our brain can't store everything. We don't always have clear attitudes about things. But when you ask me a question, like 'What do you want for dinner?' I have to think about it and come up with or construct a clear answer right there," Wang said. "That's quantum cognition." "I think the mathematical formalism provided by quantum theory is consistent with what we feel intuitively as psychologists. Quantum theory may not be intuitive at all when it is used to describe the behaviors of a particle, but actually is quite intuitive when it is used to describe our typically uncertain and ambiguous minds." She used the example of Schrödinger's cat—the thought experiment in which a cat inside a box has some probability of being alive or dead. Both possibilities have potential in our minds. In that sense, the cat has a potential to become dead or alive at the same time. The effect is called quantum superposition. When we open the box, both possibilities are no longer superposed, and the cat must be either alive or dead. With quantum cognition, it's as if each decision we make is our own unique Schrödinger's cat. As we mull over our options, we envision them in our mind's eye. For a time, all the options co-exist with different degrees of potential that we will choose them: That's superposition. Then, when we zero in on our preferred option, the other options cease to exist for us. The task of modeling this process mathematically is difficult in part because each possible outcome adds dimensions to the equation. For instance, a Republican who is trying to decide among the candidates for U.S. president in 2016 is currently confronting a high-dimensional problem with almost 20 candidates. Open-ended questions, such as "How do you feel?" have even more possible outcomes and more dimensions. With the classical approach to psychology, the answers might not make sense, and researchers have to construct new mathematical axioms to explain behavior in that particular instance. The result: There are many classical psychological models, some of which are in conflict, and none of which apply to every situation. With the quantum approach, Wang and her colleagues argued, many different and complex aspects of behavior can be explained with the same limited set of axioms. The same quantum model that explains how question order changes people's survey answers also explains violations of rationality in the prisoner's dilemma paradigm, an effect in which people cooperate even when it's in their best interest not to do so. "The prisoner's dilemma and question order are two completely different effects in classical psychology, but they both can be explained by the same quantum model," Wang said. "The same quantum model has been used to explain many other seemingly unrelated, puzzling findings in psychology. That's elegant."
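A minimal numerical sketch of the question-order idea, using nothing beyond the standard quantum formalism: two questions are modeled as projections onto non-orthogonal directions, and asking them in different orders gives different 'yes, then yes' probabilities. The state and angles below are invented for illustration and are not Wang's fitted model of the PNAS survey data:

```python
import numpy as np

def projector(theta: float) -> np.ndarray:
    """Rank-1 projector onto the direction at angle theta in a 2-D real state space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Hypothetical 'belief state' and 'yes' subspaces for two survey questions A and B
psi = np.array([1.0, 0.0])           # initial state (illustrative)
P_A = projector(np.deg2rad(30.0))    # question A
P_B = projector(np.deg2rad(75.0))    # question B

def p_yes_then_yes(first: np.ndarray, second: np.ndarray, state: np.ndarray) -> float:
    """Probability of answering 'yes' to both questions in the given order."""
    after_first = first @ state                       # unnormalized state after the first 'yes'
    return float(np.linalg.norm(second @ after_first) ** 2)

print("P(yes to A, then yes to B):", round(p_yes_then_yes(P_A, P_B, psi), 3))  # ~0.375
print("P(yes to B, then yes to A):", round(p_yes_then_yes(P_B, P_A, psi), 3))  # ~0.033
# The two probabilities differ because P_A and P_B do not commute --
# the quantum-cognition account of question-order effects.
```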
• Is Nature Unnatural? Decades of confounding experiments have physicists considering a startling possibility: The universe might not make sense. On an overcast afternoon in late April, physics professors and students crowded into a wood-paneled lecture hall at Columbia University for a talk by Nima Arkani-Hamed, a high-profile theorist visiting from the Institute for Advanced Study in nearby Princeton, N.J. With his dark, shoulder-length hair shoved behind his ears, Arkani-Hamed laid out the dual, seemingly contradictory implications of recent experimental results at the Large Hadron Collider in Europe. “The universe is inevitable,” he declared. “The universe is impossible.” The spectacular discovery of the Higgs boson in July 2012 confirmed a nearly 50-year-old theory of how elementary particles acquire mass, which enables them to form big structures such as galaxies and humans. “The fact that it was seen more or less where we expected to find it is a triumph for experiment, it’s a triumph for theory, and it’s an indication that physics works,” Arkani-Hamed told the crowd. However, in order for the Higgs boson to make sense with the mass (or equivalent energy) it was determined to have, the LHC needed to find a swarm of other particles, too. None turned up. With the discovery of only one particle, the LHC experiments deepened a profound problem in physics that had been brewing for decades. Modern equations seem to capture reality with breathtaking accuracy, correctly predicting the values of many constants of nature and the existence of particles like the Higgs. Yet a few constants — including the mass of the Higgs boson — are exponentially different from what these trusted laws indicate they should be, in ways that would rule out any chance of life, unless the universe is shaped by inexplicable fine-tunings and cancellations. In peril is the notion of “naturalness,” Albert Einstein’s dream that the laws of nature are sublimely beautiful, inevitable and self-contained. Without it, physicists face the harsh prospect that those laws are just an arbitrary, messy outcome of random fluctuations in the fabric of space and time. The LHC will resume smashing protons in 2015 in a last-ditch search for answers. But in papers, talks and interviews, Arkani-Hamed and many other top physicists are already confronting the possibility that the universe might be unnatural. (There is wide disagreement, however, about what it would take to prove it.) “Ten or 20 years ago, I was a firm believer in naturalness,” said Nathan Seiberg, a theoretical physicist at the Institute, where Einstein taught from 1933 until his death in 1955. “Now I’m not so sure. My hope is there’s still something we haven’t thought about, some other mechanism that would explain all these things. But I don’t see what it could be.” Physicists reason that if the universe is unnatural, with extremely unlikely fundamental constants that make life possible, then an enormous number of universes must exist for our improbable case to have been realized. Otherwise, why should we be so lucky? Unnaturalness would give a huge lift to the multiverse hypothesis, which holds that our universe is one bubble in an infinite and inaccessible foam. According to a popular but polarizing framework called string theory, the number of possible types of universes that can bubble up in a multiverse is around 10^500. In a few of them, chance cancellations would produce the strange constants we observe.
In such a picture, not everything about this universe is inevitable, rendering it unpredictable. Edward Witten, a string theorist at the Institute, said by email, “I would be happy personally if the multiverse interpretation is not correct, in part because it potentially limits our ability to understand the laws of physics. But none of us were consulted when the universe was created.” “Some people hate it,” said Raphael Bousso, a physicist at the University of California at Berkeley who helped develop the multiverse scenario. “But I just don’t think we can analyze it on an emotional basis. It’s a logical possibility that is increasingly favored in the absence of naturalness at the LHC.” What the LHC does or doesn’t discover in its next run is likely to lend support to one of two possibilities: Either we live in an overcomplicated but stand-alone universe, or we inhabit an atypical bubble in a multiverse. “We will be a lot smarter five or 10 years from today because of the LHC,” Seiberg said. “So that’s exciting. This is within reach.” Cosmic Coincidence: Einstein once wrote that for a scientist, “religious feeling takes the form of a rapturous amazement at the harmony of natural law” and that “this feeling is the guiding principle of his life and work.” Indeed, throughout the 20th century, the deep-seated belief that the laws of nature are harmonious — a belief in “naturalness” — has proven a reliable guide for discovering truth. “Naturalness has a track record,” Arkani-Hamed said in an interview. In practice, it is the requirement that the physical constants (particle masses and other fixed properties of the universe) emerge directly from the laws of physics, rather than resulting from improbable cancellations. Time and again, whenever a constant appeared fine-tuned, as if its initial value had been magically dialed to offset other effects, physicists suspected they were missing something. They would seek and inevitably find some particle or feature that materially dialed the constant, obviating a fine-tuned cancellation. This time, the self-healing powers of the universe seem to be failing. The Higgs boson has a mass of 126 giga-electron-volts, but interactions with the other known particles should add about 10,000,000,000,000,000,000 giga-electron-volts to its mass. This implies that the Higgs’ “bare mass,” or starting value before other particles affect it, just so happens to be the negative of that astronomical number, resulting in a near-perfect cancellation that leaves just a hint of Higgs behind: 126 giga-electron-volts. Physicists have gone through three generations of particle accelerators searching for new particles, posited by a theory called supersymmetry, that would drive the Higgs mass down exactly as much as the known particles drive it up. But so far they’ve come up empty-handed. The upgraded LHC will explore ever-higher energy scales in its next run, but even if new particles are found, they will almost definitely be too heavy to influence the Higgs mass in quite the right way. The Higgs will still seem at least 10 or 100 times too light. Physicists disagree about whether this is acceptable in a natural, stand-alone universe. “Fine-tuned a little — maybe it just happens,” said Lisa Randall, a professor at Harvard University. But in Arkani-Hamed’s opinion, being “a little bit tuned is like being a little bit pregnant. 
It just doesn’t exist.” If no new particles appear and the Higgs remains astronomically fine-tuned, then the multiverse hypothesis will stride into the limelight. “It doesn’t mean it’s right,” said Bousso, a longtime supporter of the multiverse picture, “but it does mean it’s the only game in town.” A few physicists — notably Joe Lykken of Fermi National Accelerator Laboratory in Batavia, Ill., and Alessandro Strumia of the University of Pisa in Italy — see a third option. They say that physicists might be misgauging the effects of other particles on the Higgs mass and that when calculated differently, its mass appears natural. This “modified naturalness” falters when additional particles, such as the unknown constituents of dark matter, are included in calculations — but the same unorthodox path could yield other ideas. “I don’t want to advocate, but just to discuss the consequences,” Strumia said during a talk earlier this month at Brookhaven National Laboratory. However, modified naturalness cannot fix an even bigger naturalness problem that exists in physics: The fact that the cosmos wasn’t instantly annihilated by its own energy the moment after the Big Bang. Dark Dilemma: The energy built into the vacuum of space (known as vacuum energy, dark energy or the cosmological constant) is a baffling trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times smaller than what is calculated to be its natural, albeit self-destructive, value. No theory exists about what could naturally fix this gargantuan disparity. But it’s clear that the cosmological constant has to be enormously fine-tuned to prevent the universe from rapidly exploding or collapsing to a point. It has to be fine-tuned in order for life to have a chance. To explain this absurd bit of luck, the multiverse idea has been growing mainstream in cosmology circles over the past few decades. It got a credibility boost in 1987 when the Nobel Prize-winning physicist Steven Weinberg, now a professor at the University of Texas at Austin, calculated that the cosmological constant of our universe is expected in the multiverse scenario. Of the possible universes capable of supporting life — the only ones that can be observed and contemplated in the first place — ours is among the least fine-tuned. “If the cosmological constant were much larger than the observed value, say by a factor of 10, then we would have no galaxies,” explained Alexander Vilenkin, a cosmologist and multiverse theorist at Tufts University. “It’s hard to imagine how life might exist in such a universe.” Most particle physicists hoped that a more testable explanation for the cosmological constant problem would be found. None has. Now, physicists say, the unnaturalness of the Higgs makes the unnaturalness of the cosmological constant more significant. Arkani-Hamed thinks the issues may even be related. “We don’t have an understanding of a basic extraordinary fact about our universe,” he said. “It is big and has big things in it.” The multiverse turned into slightly more than just a hand-waving argument in 2000, when Bousso and Joe Polchinski, a professor of theoretical physics at the University of California at Santa Barbara, found a mechanism that could give rise to a panorama of parallel universes. String theory, a hypothetical “theory of everything” that regards particles as invisibly small vibrating lines, posits that space-time is 10-dimensional.
At the human scale, we experience just three dimensions of space and one of time, but string theorists argue that six extra dimensions are tightly knotted at every point in the fabric of our 4-D reality. Bousso and Polchinski calculated that there are around 10^500 different ways for those six dimensions to be knotted (all tying up varying amounts of energy), making an inconceivably vast and diverse array of universes possible. In other words, naturalness is not required. There isn’t a single, inevitable, perfect universe. “It was definitely an aha-moment for me,” Bousso said. But the paper sparked outrage. “Particle physicists, especially string theorists, had this dream of predicting uniquely all the constants of nature,” Bousso explained. “Everything would just come out of math and pi and twos. And we came in and said, ‘Look, it’s not going to happen, and there’s a reason it’s not going to happen. We’re thinking about this in totally the wrong way.’ ” Life in a Multiverse: The Big Bang, in the Bousso-Polchinski multiverse scenario, is a fluctuation. A compact, six-dimensional knot that makes up one stitch in the fabric of reality suddenly shape-shifts, releasing energy that forms a bubble of space and time. The properties of this new universe are determined by chance: the amount of energy unleashed during the fluctuation. The vast majority of universes that burst into being in this way are thick with vacuum energy; they either expand or collapse so quickly that life cannot arise in them. But some atypical universes, in which an improbable cancellation yields a tiny value for the cosmological constant, are much like ours. In a paper posted last month to the physics preprint website arXiv.org, Bousso and a Berkeley colleague, Lawrence Hall, argue that the Higgs mass makes sense in the multiverse scenario, too. They found that bubble universes that contain enough visible matter (compared to dark matter) to support life most often have supersymmetric particles beyond the energy range of the LHC, and a fine-tuned Higgs boson. Similarly, other physicists showed in 1997 that if the Higgs boson were five times heavier than it is, this would suppress the formation of atoms other than hydrogen, resulting, by yet another means, in a lifeless universe. Despite these seemingly successful explanations, many physicists worry that there is little to be gained by adopting the multiverse worldview. Parallel universes cannot be tested for; worse, an unnatural universe resists understanding. “Without naturalness, we will lose the motivation to look for new physics,” said Kfir Blum, a physicist at the Institute for Advanced Study. “We know it’s there, but there is no robust argument for why we should find it.” That sentiment is echoed again and again: “I would prefer the universe to be natural,” Randall said. But theories can grow on physicists. After spending more than a decade acclimating himself to the multiverse, Arkani-Hamed now finds it plausible — and a viable route to understanding the ways of our world. “The wonderful point, as far as I’m concerned, is basically any result at the LHC will steer us with different degrees of force down one of these divergent paths,” he said. “This kind of choice is a very, very big deal.” Naturalness could pull through. Or it could be a false hope in a strange but comfortable pocket of the multiverse. As Arkani-Hamed told the audience at Columbia, “stay tuned.” Via Quanta Magazine. This article was reprinted on ScientificAmerican.com.
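The 'near-perfect cancellation' described above can be summarized with the standard schematic hierarchy-problem relation (a generic textbook sketch, not a calculation from the article):

$$ m_H^2 \ =\ m_{\rm bare}^2 \ +\ \delta m^2, \qquad \delta m^2 \ \sim\ \frac{\mathcal{O}(1)\ \text{couplings}}{16\pi^2}\,\Lambda^2, $$

so if the cutoff Λ sits near the Planck scale, the two terms on the right are each of order $(10^{18\text{-}19}\ \mathrm{GeV})^2$ yet must cancel down to $(126\ \mathrm{GeV})^2$, a tuning of roughly one part in $10^{32}$-$10^{34}$. Supersymmetric partners were supposed to cut the effective Λ down to the TeV scale and remove the need for that cancellation, which is why their absence at the LHC bears directly on the naturalness debate.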
• New Principle May Help Explain Why Nature is Quantum: Like small children, scientists are always asking the question 'why?'. One question they've yet to answer is why nature picked quantum physics, in all its weird glory, as a sensible way to behave. Researchers Corsin Pfister and Stephanie Wehner at the Centre for Quantum Technologies at the National University of Singapore tackle this perennial question in a paper published today in Nature Communications. We know that things that follow quantum rules, such as atoms, electrons or the photons that make up light, are full of surprises. They can exist in more than one place at once, for instance, or exist in a shared state where the properties of two particles show what Einstein called "spooky action at a distance", no matter what their physical separation. Because such things have been confirmed in experiments, researchers are confident the theory is right. But it would still be easier to swallow if it could be shown that quantum physics itself sprang from intuitive underlying principles. One way to approach this problem is to imagine all the theories one could possibly come up with to describe nature, and then work out what principles help to single out quantum physics. A good start is to assume that information follows Einstein's special relativity and cannot travel faster than light. However, this alone isn't enough to define quantum physics as the only way nature might behave. Corsin and Stephanie think they have come across a new useful principle. "We have found a principle that is very good at ruling out other theories," says Corsin. In short, the principle to be assumed is that if a measurement yields no information, then the system being measured has not been disturbed. Quantum physicists accept that gaining information from quantum systems causes disturbance. Corsin and Stephanie suggest that in a sensible world the reverse should be true, too. If you learn nothing from measuring a system, then you can't have disturbed it. Consider the famous Schrödinger's cat paradox, a thought experiment in which a cat in a box simultaneously exists in two states (this is known as a 'quantum superposition'). According to quantum theory it is possible that the cat is both dead and alive – until, that is, the cat's state of health is 'measured' by opening the box. When the box is opened, allowing the health of the cat to be measured, the superposition collapses and the cat ends up definitively dead or alive. The measurement has disturbed the cat. This is a property of quantum systems in general. Perform a measurement for which you can't know the outcome in advance, and the system changes to match the outcome you get. What happens if you look a second time? The researchers assume the system is not evolving in time or affected by any outside influence, which means the quantum state stays collapsed. You would then expect the second measurement to yield the same result as the first. After all, "If you look into the box and find a dead cat, you don't expect to look again later and find the cat has been resurrected," says Stephanie. "You could say we've formalised the principle of accepting the facts," says Stephanie. Corsin and Stephanie show that this principle rules out various theories of nature. They note particularly that a class of theories they call 'discrete' are incompatible with the principle.
These theories hold that quantum particles can take up only a finite number of states, rather than choose from an infinite, continuous range of possibilities. The possibility of such a discrete 'state space' has been linked to quantum gravitational theories proposing similar discreteness in spacetime, where the fabric of the universe is made up of tiny brick-like elements rather than being a smooth, continuous sheet. As is often the case in research, Corsin and Stephanie reached this point having set out to solve an entirely different problem. Corsin was trying to find a general way to describe the effects of measurements on states, a problem that he found impossible to solve. In an attempt to make progress, he wrote down features that a 'sensible' answer should have. This property of information gain versus disturbance was on the list. He then noticed that if he imposed the property as a principle, some theories would fail. Corsin and Stephanie are keen to point out that it's still not the whole answer to the big 'why' question: theories other than quantum physics, including classical physics, are compatible with the principle. But as researchers compile lists of principles that each rule out some theories to reach a set that singles out quantum physics, the principle of information gain versus disturbance seems like a good one to include.
- What if the universe is an illusion? The world around us does a good job of convincing us that it is three-dimensional. The problem is that some pretty useful physics says it's a hologram. Again, this is another result I have derived: the universe is a hologram. My proofs, however, are not based on 'utilitarian physics' but on necessary and sufficient conditions that any quantum theory unifying Einstein's Theory of General Relativity and Quantum Field Theory must meet.
- New model describes cognitive decision making as the collapse of a quantum superstate
: Quantum physics and the 'mind': is the brain a quantum computer? Decision making in an enormous range of tasks involves the accumulation of evidence in support of different hypotheses. One of the enduring models of evidence accumulation is the Markov random walk (MRW) theory, which assigns a probability to each hypothesis. In an MRW model of decision making, when deciding between two hypotheses, the cumulative evidence for and against each hypothesis reaches different levels at different times, moving particle-like from state to state and only occupying a single definite evidence level at any given point. By contrast with MRW, the new theory assumes that evidence develops over time in a superposition state analogous to the wave-like state of a photon, and judgements and decisions are made when this indefinite superposition state "collapses" into a definite state of evidence. In the experiment, nine study participants completed 112 blocks of 24 trials each over five sessions, in which they viewed a random dot motion stimulus on a screen. A percentage of the dots moved coherently in a single direction. The researchers manipulated the difficulty of the test between trials. In the choice condition, participants were asked to decide whether the coherently moving dots were traveling to the left or the right. In the no-choice condition, participants were prompted by an audio tone simply to make a motor response. Then participants were asked to rate their confidence that the coherently moving dots were traveling to the right on a scale ranging from 0 (certain left) to 100 percent (certain right). The researchers report that, on average, confidence ratings were much higher when the trajectories of the dots were highly coherent. Confidence ratings were lower in the no-choice condition than in the choice condition, providing evidence against the read-out assumption of MRW theory, which holds that confidence in the choice condition should be higher. The new quantum random walk (QRW) theory posits that evidence evolves over time, as in MRW, but that judgments and decisions create a new definite state from an indefinite, superposition-like state. "This quantum perspective reconceptualizes how we model uncertainty and formalizes a long-held hypothesis that judgments and decisions create rather than reveal preferences and beliefs," the authors write. They conclude, "... quantum random walk theory provides a previously unexamined perspective on the nature of the evidence accumulation process that underlies both cognitive and neural theories of decision making."
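For comparison with the quantum account described above, here is a minimal, hypothetical Python sketch of a classical MRW-style accumulator: evidence occupies a single definite level at every moment and steps up or down until a decision bound is reached, with confidence read out from the final level. The drift rate, threshold and confidence mapping are illustrative assumptions, not parameters from the study, and this is not the authors' quantum random walk model.

```python
import numpy as np

def mrw_trial(drift, threshold=10, max_steps=2000, rng=None):
    """One trial of a Markov-random-walk style accumulator: evidence moves
    particle-like between definite levels until a decision bound is hit."""
    rng = np.random.default_rng(rng)
    evidence = 0
    for _ in range(max_steps):
        evidence += 1 if rng.random() < 0.5 + drift else -1
        if abs(evidence) >= threshold:
            break
    choice = "right" if evidence > 0 else "left"
    confidence = 50 + 50 * evidence / threshold  # read-out from the definite level
    return choice, confidence

# With a positive drift (evidence favouring "right"), most trials choose "right".
results = [mrw_trial(drift=0.1, rng=seed) for seed in range(1000)]
print(sum(choice == "right" for choice, _ in results) / 1000)
```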
- The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World - Pedro Domingos: Machine learning is the automation of discovery - the scientific method on steroids - that enables intelligent robots and computers to program themselves ... in fact, machine learning will replace the scientific method very soon ... all knowledge can be derived from data by a single ‘master algorithm’: If you wonder how AI will change your life, science, history, and 'everything', read this book.
- There Is No Progress in Philosophy Eric Dietrich: Except for a patina of twenty-first century modernity, in the form of logic and language, philosophy is exactly the same now as it ever was; it has made no progress whatsoever. We philosophers wrestle with the exact same problems the Pre-Socratics wrestled with. Even more outrageous than this claim, though, is the blatant denial of its obvious truth by many practicing philosophers. The No-Progress view is explored and argued for here. Its denial is diagnosed as a form of anosognosia, a mental condition where the affected person denies there is any problem. The theories of two eminent philosophers supporting the No-Progress view are also examined. The final section offers an explanation for philosophy’s inability to solve any philosophical problem, ever. The paper closes with some reflections on philosophy’s future.
  • Quantum physics just got less complicated: Here's a nice surprise - quantum physics is less complicated than we thought. An international team of researchers has proved that two peculiar features of the quantum world previously considered distinct are different manifestations of the same thing. The result is published 19 December in Nature Communications. Patrick Coles, Jedrzej Kaniewski, and Stephanie Wehner made the breakthrough while at the Centre for Quantum Technologies at the National University of Singapore. They found that 'wave-particle duality' is simply the quantum 'uncertainty principle' in disguise, reducing two mysteries to one. "The connection between uncertainty and wave-particle duality comes out very naturally when you consider them as questions about what information you can gain about a system. Our result highlights the power of thinking about physics from the perspective of information," says Wehner, who is now an Associate Professor at QuTech at the Delft University of Technology in the Netherlands.
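For context on the uncertainty side of this equivalence (standard background, not a restatement of the paper's result), the Maassen-Uffink entropic uncertainty relation for two measurements X and Z on the same system reads

$$ H(X) + H(Z) \;\geq\; \log_2 \frac{1}{c}, \qquad c = \max_{x,z} \, \lvert \langle x \vert z \rangle \rvert^{2}, $$

where H is the Shannon entropy of the measurement outcomes and c measures how compatible the two measurement bases are. Coles, Kaniewski and Wehner show that familiar wave-particle duality relations are uncertainty relations of this entropic kind in disguise (technically phrased in terms of min- and max-entropies rather than the Shannon entropy shown here).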
- How spacetime is built by quantum entanglement: A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo's Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors' Suggestion "for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields." Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales. The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive. Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This is analogous to diagnosing conditions inside of your body by looking at X-ray images on two-dimensional sheets. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in the emergence of spacetime was not clear until the new paper by Ooguri and collaborators. Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called "spooky action at a distance." The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory. "It was known that quantum entanglement is related to deep issues in the unification of general relativity and quantum mechanics, such as the black hole information paradox and the firewall paradox," says Hirosi Ooguri. "Our paper sheds new light on the relation between quantum entanglement and the microscopic structure of spacetime by explicit calculations. The interface between quantum gravity and information science is becoming increasingly important for both fields. I myself am collaborating with information scientists to pursue this line of research further."
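For orientation (background on this line of research, not a formula taken from the new paper), the best-known quantitative link between boundary entanglement and bulk geometry is the Ryu-Takayanagi relation, which equates the entanglement entropy of a boundary region A with the area of a minimal surface γ_A hanging into the bulk:

$$ S_A \;=\; \frac{\mathrm{Area}(\gamma_A)}{4 G_N} . $$

Results of the kind described above constrain how bulk quantities such as the energy density can be reconstructed from this sort of boundary entanglement data.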
- Is Time’s Arrow Perspectival? Carlo Rovelli: We observe entropy decrease towards the past. Does this imply that in the past the world was in a non-generic microstate? The author points out an alternative. The subsystem to which we belong interacts with the universe via a relatively small number of quantities, which define a coarse graining. Entropy happens to depend on coarse-graining. Therefore the entropy we ascribe to the universe depends on the peculiar coupling between us and the rest of the universe. Low past entropy may be due to the fact that this coupling (rather than the microstate of the universe) is non-generic. The author then argues that for any generic microstate of a sufficiently rich system there are always special subsystems defining a coarse graining for which the entropy of the rest is low in one time direction (the “past”). These are the subsystems allowing creatures that “live in time” —such as those in the biosphere— to exist. He then replies to some objections raised to an earlier presentation of this idea, in particular by Bob Wald, David Albert and Jim Hartle.
- Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction Jeffrey M Schwartz, Henry P Stapp, Mario Beauregard: Neuropsychological research on the neural basis of behaviour generally posits that brain mechanisms will ultimately suffice to explain all psychologically described phenomena. This assumption stems from the idea that the brain is made up entirely of material particles and fields, and that all causal mechanisms relevant to neuroscience can therefore be formulated solely in terms of properties of these elements. Thus, terms having intrinsic mentalistic and/or experiential content (e.g. ‘feeling’, ‘knowing’ and ‘effort’) are not included as primary causal factors. This theoretical restriction is motivated primarily by ideas about the natural world that have been known to be fundamentally incorrect for more than three-quarters of a century. Contemporary basic physical theory differs profoundly from classic physics on the important matter of how the consciousness of human agents enters into the structure of empirical phenomena. The new principles contradict the older idea that local mechanical processes alone can account for the structure of all observed empirical data. Contemporary physical theory brings directly and irreducibly into the overall causal structure certain psychologically described choices made by human agents about how they will act. This key development in basic physical theory is applicable to neuroscience, and it provides neuroscientists and psychologists with an alternative conceptual framework for describing neural processes. Indeed, owing to certain structural features of ion channels critical to synaptic function, contemporary physical theory must in principle be used when analysing human brain dynamics. The new framework, unlike its classic-physics-based predecessor, is erected directly upon, and is compatible with, the prevailing principles of physics. It is able to represent more adequately than classic concepts the neuroplastic mechanisms relevant to the growing number of empirical studies of the capacity of directed attention and mental effort to systematically alter brain function.
- When causation does not imply correlation: robust violations of the Faithfulness axiom Richard Kennaway: it is demonstrated here that the Faithfulness property that is assumed in much causal analysis is robustly violated for a large class of systems of a type that occurs throughout the life and social sciences: control systems. These systems exhibit correlations indistinguishable from zero between variables that are strongly causally connected, and can show very high correlations between variables that have no direct causal connection, only a connection via causal links between uncorrelated variables. Their patterns of correlation are robust, in that they remain unchanged when their parameters are varied. The violation of Faithfulness is fundamental to what a control system does: hold some variable constant despite the disturbing influences on it. No method of causal analysis that requires Faithfulness is applicable to such systems.
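A minimal, hypothetical Python simulation reproduces the pattern Kennaway describes: an integral controller holds q at its reference, so q ends up almost uncorrelated with the disturbance d that directly causes it, while the controller output o, which has no direct causal link to d, tracks it almost perfectly with opposite sign. The gain, disturbance process and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20_000
k = 0.5                                   # integral controller gain
d = np.cumsum(rng.normal(0, 0.1, T))      # slowly drifting disturbance
o = np.zeros(T)                           # controller output
q = np.zeros(T)                           # controlled variable (reference = 0)

for t in range(T - 1):
    q[t] = o[t] + d[t]          # the disturbance acts directly on q
    o[t + 1] = o[t] - k * q[t]  # integral action opposes any deviation of q
q[-1] = o[-1] + d[-1]

print(np.corrcoef(d, q)[0, 1])  # near zero, despite the direct causal link d -> q
print(np.corrcoef(d, o)[0, 1])  # near -1, despite no direct causal link d -> o
```

Varying the gain or the disturbance process leaves the qualitative pattern intact, which is the robustness the paper emphasizes.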
- Renormalized spacetime is two-dimensional at the Planck scale T. Padmanabhan, Sumanta Chakraborty: Quantum field theory distinguishes between the bare variables – which we introduce in the Lagrangian – and the renormalized variables which incorporate the effects of interactions. This suggests that the renormalized, physical, metric tensor of spacetime (and all the geometrical quantities derived from it) will also be different from the bare, classical, metric tensor in terms of which the bare gravitational Lagrangian is expressed. The authors provide a physical ansatz to relate the renormalized metric tensor to the bare metric tensor such that the spacetime acquires a zero-point length ℓ_0 of the order of the Planck length L_P. This prescription leads to several remarkable consequences. In particular, the Euclidean volume V_D(ℓ, ℓ_0) of a region of size ℓ in a D-dimensional spacetime scales as V_D(ℓ, ℓ_0) ∝ ℓ_0^(D−2) ℓ^2 when ℓ ∼ ℓ_0, while it reduces to the standard result V_D(ℓ, ℓ_0) ∝ ℓ^D at large scales (ℓ ≫ ℓ_0). The appropriately defined effective dimension, D_eff, decreases continuously from D_eff = D (at ℓ ≫ ℓ_0) to D_eff = 2 (at ℓ ∼ ℓ_0). This suggests that the physical spacetime becomes essentially 2-dimensional near the Planck scale.
- CERN's LHCb experiment reports observation of exotic pentaquark particles: "The pentaquark is not just any new particle," said LHCb spokesperson Guy Wilkinson. "It represents a way to aggregate quarks, namely the fundamental constituents of ordinary protons and neutrons, in a pattern that has never been observed before in over fifty years of experimental searches. Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we're all made, is constituted." Our understanding of the structure of matter was revolutionized in 1964 when American physicist Murray Gell-Mann proposed that a category of particles known as baryons, which includes protons and neutrons, are composed of three fractionally charged objects called quarks, and that another category, mesons, are formed of quark-antiquark pairs. Gell-Mann was awarded the Nobel Prize in physics for this work in 1969. This quark model also allows the existence of other quark composite states, such as pentaquarks composed of four quarks and an antiquark. Until now, however, no conclusive evidence for pentaquarks had been seen. LHCb researchers looked for pentaquark states by examining the decay of a baryon known as Λb (Lambda b) into three other particles, a J/ψ (J-psi), a proton and a charged kaon. Studying the spectrum of masses of the J/ψ and the proton revealed that intermediate states were sometimes involved in their production. These have been named Pc(4450)+ and Pc(4380)+, the former being clearly visible as a peak in the data, with the latter being required to describe the data fully. Earlier experiments that have searched for pentaquarks have proved inconclusive. Where the LHCb experiment differs is that it has been able to look for pentaquarks from many perspectives, with all pointing to the same conclusion. It's as if the previous searches were looking for silhouettes in the dark, whereas LHCb conducted the search with the lights on, and from all angles. The next step in the analysis will be to study how the quarks are bound together within the pentaquarks.
- Causes and Consequences of Income Inequality: A Global Perspective - INTERNATIONAL MONETARY FUND: Widening income inequality is the defining challenge of our time. In advanced economies, the gap between the rich and poor is at its highest level in decades. Inequality trends have been more mixed in emerging markets and developing countries (EMDCs), with some countries experiencing declining inequality, but pervasive inequities in access to education, health care, and finance remain. Not surprisingly then, the extent of inequality, its drivers, and what to do about it have become some of the most hotly debated issues by policymakers and researchers alike. Against this background, the objective of this paper is two-fold. First, the authors show why policymakers need to focus on the poor and the middle class. Earlier IMF work has shown that income inequality matters for growth and its sustainability. Their analysis suggests that the income distribution itself matters for growth as well. Specifically, if the income share of the top 20 percent (the rich) increases, then GDP growth actually declines over the medium term, suggesting that the benefits do not trickle down. In contrast, an increase in the income share of the bottom 20 percent (the poor) is associated with higher GDP growth. The poor and the middle class matter the most for growth via a number of interrelated economic, social, and political channels. Second, the authors investigate what explains the divergent trends in inequality developments across advanced economies and EMDCs, with a particular focus on the poor and the middle class. While most existing studies have focused on advanced countries and looked at the drivers of the Gini coefficient and the income of the rich, this study explores a more diverse group of countries and pays particular attention to the income shares of the poor and the middle class—the main engines of growth. This analysis suggests that technological progress and the resulting rise in the skill premium (positives for growth and productivity) and the decline of some labor market institutions have contributed to inequality in both advanced economies and EMDCs. Globalization has played a smaller but reinforcing role. Interestingly, we find that rising skill premium is associated with widening income disparities in advanced countries, while financial deepening is associated with rising inequality in EMDCs, suggesting scope for policies that promote financial inclusion. Policies that focus on the poor and the middle class can mitigate inequality. Irrespective of the level of economic development, better access to education and health care and well-targeted social policies, while ensuring that labor market institutions do not excessively penalize the poor, can help raise the income share for the poor and the middle class. There is no one-size-fits-all approach to tackling inequality. The nature of appropriate policies depends on the underlying drivers and country-specific policy and institutional settings. In advanced economies, policies should focus on reforms to increase human capital and skills, coupled with making tax systems more progressive. In EMDCs, ensuring financial deepening is accompanied with greater financial inclusion and creating incentives for lowering informality would be important. 
More generally, complementarities between growth and income equality objectives suggest that policies aimed at raising average living standards can also influence the distribution of income and ensure a more inclusive prosperity.
- Does time dilation destroy quantum superposition? Why do we not see everyday objects in quantum superpositions? The answer to that long-standing question may partly lie with gravity. So says a group of physicists in Austria, which has shown theoretically that a feature of Einstein's general relativity, known as time dilation, can render quantum states classical. The researchers say that even the Earth's puny gravitational field may be strong enough for the effect to be measurable in a laboratory within a few years. Our daily experience suggests that there exists a fundamental boundary between the quantum and classical worlds. One way that physicists explain the transition between the two is to say that quantum superposition states simply break down when a system exceeds a certain size or level of complexity – its wavefunction is said to "collapse" and the system becomes "decoherent". An alternative explanation, in which quantum mechanics holds sway at all scales, posits that interactions with the environment bring different elements of an object's wavefunction out of phase, such that they no longer interfere with one another. Larger objects are subject to this decoherence more quickly than smaller ones because they have more constituent particles and, therefore, more complex wavefunctions. There are already several explanations for decoherence, including a particle emitting or absorbing electromagnetic radiation or being buffeted by surrounding air molecules. In the latest work, Časlav Brukner at the University of Vienna and colleagues have put forward a new model that involves time dilation – where the flow of time is affected by mass (gravity). This relativistic effect allows a clock in outer space to tick at a faster rate than one near the surface of the Earth. In their work, Brukner and colleagues consider a macroscopic body – whose constituent particles can vibrate at different frequencies – to be in a superposition of two states at very slightly different distances from the surface of a massive object. Time dilation would then dictate that the state closer to the object will vibrate at a lower frequency than the other. They then calculate how much time dilation is needed to differentiate the frequencies so that the two states get so far out of step with one another that they can no longer interfere. With this premise, the team worked out that even the Earth's gravitational field is strong enough to cause decoherence in quite small objects across measurable timescales. The researchers calculated that an object that weighs a gram and exists in two quantum states, separated vertically by a thousandth of a millimetre, should decohere in around a millisecond. Beyond any potential quantum-computing applications that would benefit from the removal of unwanted decoherence, the work challenges physicists' assumption that only gravitational fields generated by neutron stars and other massive astrophysical objects can exert a noticeable influence on quantum phenomena. "The interesting thing about this phenomenon is that both quantum mechanics and general relativity would be needed to explain it," says Brukner. Quantum clocks: One way to test the effect experimentally would involve sending a "clock" (such as a beam of caesium atoms) through the two arms of an interferometer. The interferometer would initially be positioned horizontally and the interference pattern recorded.
It would then be rotated to the vertical, such that one arm experiences a higher gravitational potential than the other, and its output again observed. In the latter case, the two states vibrate at different frequencies due to time dilation. This different rate of ticking would reveal which state is travelling down each arm, and once this information is revealed, the interference pattern disappears. "People have already measured time dilation due to Earth's gravity," says Brukner, "but they usually use two clocks in two different positions. We are saying, why not use one clock in a superposition?" Carrying out such a test, however, will not be easy. The fact that the effect is far smaller than other potential sources of decoherence would mean cooling the interferometer down to just a few kelvin while enclosing it in a vacuum, says Brukner. The measurements would still be extremely tricky, according to Markus Arndt, at the University of Vienna, who was not involved in the current work. He says they could require superpositions around a million times bigger and 1000 times longer lasting than is possible with the best equipment today. Nevertheless, Arndt praises the proposal for "directing attention" towards the interface between quantum mechanics and gravity. He also points out that any improvements to interferometers needed for this work could also have practical benefits, such as allowing improved tests of relativity or enhancing tools for geodesy.
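A back-of-the-envelope sketch of the mechanism (a simplification for illustration, not the group's full calculation): an internal mode of frequency ω, held in superposition at two heights a distance Δx apart near the Earth's surface, accumulates a relative phase from gravitational time dilation of roughly

$$ \Delta\phi(t) \;\approx\; \omega\,\frac{g\,\Delta x}{c^{2}}\,t , $$

so a single mode dephases on a timescale of order c²/(ω g Δx), and with N constituent particles vibrating independently the interference visibility is lost roughly √N times faster. Plugging in thermal vibration frequencies and the particle number of a gram-scale object separated by a micrometre brings this down to the millisecond regime quoted above.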
- Judgment Aggregation in Science Liam Kofi Bright, Haixin Dang, and Remco Heesen: This paper raises the problem of judgment aggregation in science. The problem has two sides. First, how do scientists decide which propositions to assert in a collaborative document? And second, how should they make such decisions? The literature on judgment aggregation is relevant to the second question. Although little evidence is available regarding the first question, it suggests that current scientific practice is not in line with the most plausible recommendations from the judgment aggregation literature. The authors explore the evidence that is presently available before suggesting a number of avenues for future research on this problem.
- A Stronger Bell Argument for Quantum Non-Locality Paul M. Nager: It is widely accepted that the violation of Bell inequalities excludes local theories of the quantum realm. This paper presents a stronger Bell argument which even forbids certain non-local theories. Among these excluded non-local theories are those whose only non-local connection is a probabilistic (or functional) dependence between the space-like separated measurement outcomes of EPR/B experiments (a subset of outcome dependent theories). In this way, the new argument shows that the result of the received Bell argument, which requires just any kind of nonlocality, is inappropriately weak. Positively, the remaining non-local theories, which can violate Bell inequalities (among them quantum theory), are characterized by the fact that at least one of the measurement outcomes in some sense probabilistically depends both on its local as well as on its distant measurement setting (probabilistic Bell contextuality). Whether an additional dependence between the outcomes holds is irrelevant for the question whether a certain theory can violate Bell inequalities. This new concept of quantum non-locality is considerably tighter and more informative than the one following from the usual Bell argument. It is proven that (given usual background assumptions) the result of the stronger Bell argument presented here is the strongest possible consequence from the violation of Bell inequalities on a qualitative probabilistic level.
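For reference (standard background, not part of Nager's argument), the Bell inequality most often tested in EPR/B experiments is the CHSH inequality: with settings a, a′ on one wing, b, b′ on the other, and E denoting the correlation of the ±1-valued outcomes, every local theory satisfies

$$ \lvert S \rvert \;=\; \bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \;\leq\; 2 , $$

whereas quantum mechanics allows values up to 2√2 (the Tsirelson bound). The paper's question is what, beyond bare non-locality, a theory must look like in order to produce such violations.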
- General relativity as a two-dimensional CFT Tim Adamo: The tree-level scattering amplitudes of general relativity encode the full non-linearity of the Einstein field equations. Yet remarkably compact expressions for these amplitudes have been found which seem unrelated to a perturbative expansion of the Einstein-Hilbert action. This suggests an entirely different description of GR which makes this on-shell simplicity manifest. Taking a cue from the tree-level amplitudes, the author discusses how such a description can be found. The result is a formulation of GR in terms of a solvable two-dimensional CFT, with the Einstein equations emerging as quantum consistency conditions.
- Strange behavior of quantum particles may indicate the existence of other parallel universes John Davis: It started about five years ago with a practical chemistry question. Little did Bill Poirier realize as he delved into the quantum mechanics of complex molecules that he would fall down the rabbit hole to discover evidence of other parallel worlds that might well be poking through into our own, showing up at the quantum level. The Texas Tech University professor of chemistry and biochemistry said that quantum mechanics is a strange realm of reality. Particles at this atomic and subatomic level can appear to be in two places at once. Because the activity of these particles is so iffy, scientists can only describe what's happening mathematically by "drawing" the tiny landscape as a wave of probability. Chemists like Poirier draw these landscapes to better understand chemical reactions. Despite the "uncertainty" of particle location, quantum wave mechanics allows scientists to make precise predictions. The rules for doing so are well established. At least, they were until Poirier's recent "eureka" moment when he found a completely new way to draw quantum landscapes. Instead of waves, his medium became parallel universes. Though his theory, called "Many Interacting Worlds," sounds like science fiction, it holds up mathematically. Originally published in 2010, it has led to a number of invited presentations, peer-reviewed journal articles and a recent invited commentary in the premier physics journal Physical Review. "This has gotten a lot of attention in the foundational mechanics community as well as the popular press," Poirier said. "At a symposium in Vienna in 2013, standing five feet away from a famous Nobel Laureate in physics, I gave my presentation on this work fully expecting criticism. I was surprised when I received none. Also, I was happy to see that I didn't have anything obviously wrong with my mathematics." In his theory, Poirier postulates that small particles from many worlds seep through to interact with our own, and their interaction accounts for the strange phenomena of quantum mechanics. Such phenomena include particles that seem to be in more than one place at a time, or to communicate with each other over great distances without explanations.
- Why Not Capitalism? Jason Brennan: 'Most economists believe capitalism is a compromise with selfish human nature. As Adam Smith put it, "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest." Capitalism works better than socialism, according to this thinking, only because we are not kind and generous enough to make socialism work. If we were saints, we would be socialists. In Why Not Capitalism?, Jason Brennan attacks this widely held belief, arguing that capitalism would remain the best system even if we were morally perfect. Even in an ideal world, private property and free markets would be the best way to promote mutual cooperation, social justice, harmony, and prosperity. Socialists seek to capture the moral high ground by showing that ideal socialism is morally superior to realistic capitalism. But, Brennan responds, ideal capitalism is superior to ideal socialism, and so capitalism beats socialism at every level. Clearly, engagingly, and at times provocatively written, Why Not Capitalism? will cause readers of all political persuasions to re-evaluate where they stand vis-à-vis economic priorities and systems—as they exist now and as they might be improved in the future.'
- An argument for ψ-ontology in terms of protective measurements Shan Gao: The ontological model framework provides a rigorous approach to address the question of whether the quantum state is ontic or epistemic. When considering only conventional projective measurements, auxiliary assumptions are always needed to prove the reality of the quantum state in the framework. For example, the Pusey-Barrett-Rudolph theorem is based on an additional preparation independence assumption. In this paper, the author gives a new proof of ψ-ontology in terms of protective measurements in the ontological model framework. It is argued that the proof need not rely on auxiliary assumptions, and also applies to deterministic theories such as the de Broglie-Bohm theory. In addition, the author gives a simpler argument for ψ-ontology beyond the framework, which is only based on protective measurements and a weaker criterion of reality. The argument may also be appealing for those people who favor an anti-realist view of quantum mechanics.
- Depth and Explanation in Mathematics Marc Lange: This paper argues that in at least some cases, one proof of a given theorem is deeper than another by virtue of supplying a deeper explanation of the theorem — that is, a deeper account of why the theorem holds. There are cases of scientific depth that also involve a common abstract structure explaining a similarity between two otherwise unrelated phenomena, making their similarity no coincidence and purchasing depth by answering why questions that separate, dissimilar explanations of the two phenomena cannot correctly answer. The connections between explanation, depth, unification, power, and coincidence in mathematics and science are compared.
- Does Inflation Solve the Hot Big Bang Model’s Fine Tuning Problems? C.D. McCoy: Cosmological inflation is widely considered an integral and empirically successful component of contemporary cosmology. It was originally motivated (and usually still is) by its solution of certain so-called fine-tuning problems of the hot big bang model, particularly what are known as the horizon problem and the flatness problem. Although the physics behind these problems is clear enough, the nature of the problems depends on the sense in which the hot big bang model is fine-tuned and how the alleged fine-tuning is problematic. Without clear explications of these, it remains unclear precisely what problems inflationary theory is meant to be solving and whether it does in fact solve them. The author analyzes here the structure of these problems and considers various interpretations that may substantiate the alleged fine-tuning. On the basis of this analysis he argues that at present there is no unproblematic interpretation available for which it can be said that inflation solves the big bang model’s alleged fine-tuning problems.
- Towards the geometry of the universe from data H.L. Bester, J. Larena, and N.T. Bishop: the authors present an algorithm that can reconstruct the full distributions of metric components within the class of spherically symmetric dust universes that may include a cosmological constant. The algorithm is capable of confronting this class of solutions with arbitrary data. In this work they use luminosity and age data to constrain the geometry of the universe up to a redshift of z = 1.75. They go on to show that the current data are perfectly compatible with homogeneous models of the universe and that these models seem to be favoured at low redshift.
- Truthful Linear Regression Rachel Cummings, Stratis Ioannidis, Katrina Ligett: the authors consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy. Incentivizing most players to truthfully report their data to the analyst constrains their design to mechanisms that provide a privacy guarantee to the participants; the authors use differential privacy to model individuals’ privacy losses. This immediately poses a problem, as differentially private computation of a linear model necessarily produces a biased estimation, and existing approaches to design mechanisms to elicit data from privacy-sensitive individuals do not generalize well to biased estimators. They overcome this challenge through an appropriate design of the computation and payment scheme.
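As a concrete, much simplified illustration of the primitive the paper builds on, privately fitting a linear model, here is a hypothetical Python sketch using sufficient-statistics perturbation under the assumption that every feature and label lies in [-1, 1]. It is not the authors' mechanism (it says nothing about payments or truthful elicitation), and the bounds, ridge term and function name are illustrative assumptions.

```python
import numpy as np

def dp_linear_regression(X, y, epsilon, ridge=1.0, seed=None):
    """Differentially private least squares via sufficient-statistics perturbation.

    Assumes every feature and label lies in [-1, 1], so that adding or removing
    one person changes the L1 norm of (X^T X, X^T y) by at most d**2 + d.
    Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = X.T @ X
    b = X.T @ y
    scale = (d ** 2 + d) / epsilon            # Laplace scale for epsilon-DP
    noise_A = rng.laplace(0.0, scale, size=(d, d))
    noise_A = (noise_A + noise_A.T) / 2       # keep the perturbed matrix symmetric
    noise_b = rng.laplace(0.0, scale, size=d)
    A_priv = A + noise_A + ridge * np.eye(d)  # ridge keeps the solve well-posed
    b_priv = b + noise_b
    return np.linalg.solve(A_priv, b_priv)

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(5000, 3))
theta_true = np.array([0.5, -0.3, 0.2])
y = np.clip(X @ theta_true + rng.normal(0, 0.05, size=5000), -1, 1)
print(dp_linear_regression(X, y, epsilon=1.0, seed=2))
```

The noise added for privacy is what biases the estimate, which is the tension between privacy and accuracy that the paper's computation and payment scheme is designed to manage.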