Theory of the effectivity of human problem solving Frantisek Duris: The ability to solve problems effectively is one of the hallmarks of human cognition, yet in the author's opinion it receives far less research focus than it rightly deserves. In this paper the author outlines a framework in which this effectivity can be studied; he identifies its possible roots and scope and the cognitive processes directly involved. More particularly, it is observed that people can use cognitive mechanisms to drive problem solving in the manner on which the optimal problem solving strategy suggested by Solomonoff (1986) is based. Furthermore, evidence is provided for the cognitive substrate hypothesis (Cassimatis, 2006), which states that human-level AI in all domains can be achieved by a relatively small set of cognitive mechanisms. The results presented in this paper can serve both cognitive psychology, in better understanding human problem solving processes, and artificial intelligence, in designing more human-like intelligent agents.
- Limit Theorems for Empirical Renyi Entropy and Divergence with Applications to Molecular Diversity Analysis Maciej Pietrzak, Grzegorz A. Rempala, Michał Seweryn, Jacek Wesołowski: Quantitative methods for studying biodiversity have traditionally been rooted in the classical theory of finite frequency tables analysis. However, with the help of modern experimental tools, like high-throughput sequencing, researchers now begin to unlock the outstanding diversity of genomic data in plants and animals reflective of the long evolutionary history of our planet. This molecular data often defies the classical frequency/contingency table assumptions and seems to require sparse tables with a very large number of categories and highly unbalanced cell counts, e.g., following heavy-tailed distributions (for instance, power laws). Motivated by molecular diversity studies, the authors propose a frequency-based framework for biodiversity analysis in the asymptotic regime where the number of categories grows with sample size (an infinite contingency table). The approach is rooted in information theory and based on Gaussian limit results for the effective number of species (the Hill numbers) and the empirical Renyi entropy and divergence. The authors argue that, when applied to molecular biodiversity analysis, their methods can properly account for the complicated data frequency patterns on one hand and the practical sample size limitations on the other. They illustrate this principle with two specific RNA-sequencing examples: a comparative study of T-cell receptor populations and a validation of some preselected molecular hepatocellular carcinoma (HCC) markers.
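As a concrete reference for the quantities involved, here is a minimal Python sketch (the category counts below are made up for illustration) of the empirical Rényi entropy of order q and the corresponding Hill number, i.e., the effective number of species:

```python
import numpy as np

def renyi_entropy(counts, q):
    """Empirical Renyi entropy of order q from raw category counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):               # Shannon limit as q -> 1
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

def hill_number(counts, q):
    """Effective number of species of order q (exponential of the Renyi entropy)."""
    return np.exp(renyi_entropy(counts, q))

# toy example: clone-size counts from a hypothetical T-cell receptor sample
counts = [500, 120, 60, 30, 10, 5, 1, 1, 1]
for q in (0, 1, 2):
    print(q, hill_number(counts, q))
```

Here q = 0 returns the observed richness, q = 1 the exponential of the Shannon entropy, and q = 2 the inverse Simpson index; the paper's contribution concerns the Gaussian limit theory for such estimators, not their pointwise computation.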
- Brane induced gravity: Ghosts and naturalness Ludwig Eglseer, Florian Niedermann, and Robert Schneider: Linear stability of brane induced gravity in two codimensions on a static pure tension background is investigated. By explicitly calculating the vacuum persistence amplitude of the corresponding quantum theory, the authors show that the parameter space is divided into two regions—one corresponding to a stable Minkowski vacuum on the brane and one being plagued by ghost instabilities. This analytical result affirms a recent nonlinear, but mainly numerical analysis. The main result is that the ghost is absent for a sufficiently large brane tension, in perfect agreement with a value expected from a natural effective field theory point of view. Unfortunately, the linearly stable parameter regime is either ruled out phenomenologically or destabilized due to nonlinearities. The authors argue that inflating brane backgrounds constitute the remaining window of opportunity. In the special case of a tensionless brane, they find that the ghost exists for any nonzero value of the induced gravity scale. Regarding this case, there are contradicting results in the literature, and they are able to fully resolve this controversy by explicitly uncovering the errors made in the “no-ghost” analysis. Finally, a Hamiltonian analysis generalizes the ghost result to more than two codimensions.
- How mathematics reveals the nature of the cosmos Joshua Carroll: 'If it were not for mathematics, we would still think we were on one of a few planets orbiting a star amidst the backdrop of seemingly motionless lights. This is a rather bleak outlook today compared to what we now know about the awesomely large universe we reside in. This idea of the universe motivating us to understand more about mathematics can be seen in how Johannes Kepler used what he observed the planets doing, and then applied mathematics to it to develop a fairly accurate model (and method for predicting planetary motion) of the solar system. This is one of many demonstrations that illustrate the importance of mathematics within our history, especially within astronomy and physics. The story of mathematics becomes even more amazing as we push forward to one of the most advanced thinkers humanity has ever known: Sir Isaac Newton, when pondering the motions of Halley's Comet, came to the realization that the math that had been used thus far to describe physical motion of massive bodies simply would not suffice if we were to ever understand anything beyond that of our seemingly limited celestial nook. In a show of pure brilliance that lends validity to my earlier statement about how we can take what we naturally have and then construct a more complex system upon it, Newton developed the Calculus; with this way of approaching moving bodies, he was able to accurately model the motion of not only Halley's Comet, but also any other heavenly body that moved across the sky. In one instant, our entire universe opened up before us, unlocking almost unlimited abilities for us to converse with the cosmos as never before. Newton also expanded upon what Kepler started. Newton recognized that Kepler's mathematical equation for planetary motion, Kepler's 3rd Law, was purely based on empirical observation and was only meant to measure what we observed within our solar system. Newton's mathematical brilliance was in realizing that this basic equation could be made universal by applying a gravitational constant to the equation, which gave birth to perhaps one of the most important equations ever to be derived by mankind: Newton's Version of Kepler's Third Law. What Newton realized was that when things move in non-linear ways, using basic Algebra would not produce the correct answer. Herein lies one of the main differences between Algebra and Calculus. Algebra allows one to find the slope (rate of change) of straight lines (constant rate of change), whereas Calculus allows one to find the slope of curved lines (variable rate of change). There are obviously many more applications of Calculus than just this, but I am merely illustrating a fundamental difference between the two in order to show you just how revolutionary this new concept was. All at once, the motions of planets and other objects that orbit the sun became more accurately measurable, and thus we gained the ability to understand the universe a little deeper. Referring back to Newton's Version of Kepler's Third Law, we were now able (and still are) to apply this incredible physics equation to almost anything that is orbiting something else. From this equation, we can determine the mass of either of the objects, the distance apart they are from each other, the force of gravity that is exerted between the two, and other physical qualities built from these simple calculations.
This is the beauty of mathematics writ large; an ongoing conversation with the universe in which more than we may expect is revealed. It fell to the French mathematician Urbain Le Verrier, who sat down and painstakingly worked through the mathematical equations of the orbit of Uranus. What he was doing was using Newton's mathematical equations backwards, realizing that there must be an object out there beyond the orbit of Uranus that was also orbiting the sun, and then working out the mass and distance this unseen object required to perturb the orbit of Uranus in the way we were observing. This was phenomenal, as we were using parchment and ink to find a planet that nobody had ever actually observed. What he found was that an object, soon to be named Neptune, had to be orbiting at a specific distance from the sun, with the specific mass that would cause the irregularities in the orbital path of Uranus. Confident of his mathematical calculations, he took his numbers to the New Berlin Observatory, where the astronomer Johann Gottfried Galle looked exactly where Le Verrier's calculations told him to look, and there lay the 8th and final planet of our solar system, less than 1 degree off from the predicted position. What had just happened was an incredible confirmation of Newton's gravitational theory, and it proved that his mathematics was correct.'
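As a small worked illustration of the equation praised above, the following Python sketch applies Newton's version of Kepler's third law, P^2 = 4*pi^2*a^3 / (G*(M1 + M2)), to Earth's orbit to recover the mass of the Sun (the constants are standard textbook values):

```python
import math

# Newton's version of Kepler's third law: P^2 = 4*pi^2 * a^3 / (G * (M1 + M2)).
# Solving for the total mass from Earth's orbit; since M_earth << M_sun,
# the result is essentially the mass of the Sun.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
a = 1.496e11           # semi-major axis of Earth's orbit, m
P = 365.25 * 86400.0   # orbital period, s

M_total = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"Mass of the Sun-Earth system: {M_total:.3e} kg")  # ~1.99e30 kg
```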
- A Bayesian Approach for Detecting Mass-Extinction Events When Rates of Lineage Diversification Vary Michael R. May, Sebastian Höhna, Brian R. Moore: The paleontological record chronicles numerous episodes of mass extinction that severely culled the Tree of Life. Biologists have long sought to assess the extent to which these events may have impacted particular groups. The authors present a novel method for detecting mass-extinction events from phylogenies estimated from molecular sequence data. They develop the approach in a Bayesian statistical framework, which enables them to harness prior information on the frequency and magnitude of mass-extinction events. The approach is based on an episodic stochastic-branching process model in which rates of speciation and extinction are constant between rate-shift events. They model three types of events: (1) instantaneous tree-wide shifts in speciation rate; (2) instantaneous tree-wide shifts in extinction rate; and (3) instantaneous tree-wide mass-extinction events. Each type of event is described by a separate compound Poisson process (CPP) model, where the waiting times between events are exponentially distributed with event-specific rate parameters. The magnitude of each event is drawn from an event-type-specific prior distribution. Parameters of the model are estimated using a reversible-jump Markov chain Monte Carlo (rjMCMC) algorithm. They demonstrate via simulation that the method has substantial power to detect the number of mass-extinction events and provides unbiased estimates of their timing, while exhibiting an appropriate (i.e., below 5%) false discovery rate even in the presence of background diversification-rate variation. Finally, they provide an empirical application of the approach to conifers, which reveals that this group has experienced two major episodes of mass extinction. This new approach - the CPP on Mass Extinction Times (CoMET) model - provides an effective tool for identifying mass-extinction events from molecular phylogenies, even when the history of those groups includes more prosaic temporal variation in diversification rate.
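To make the event-process part of the model concrete, here is a minimal Python sketch of the kind of compound Poisson process described above: waiting times between tree-wide events are exponential with an event-specific rate, and each event carries a magnitude drawn from a prior. The rate and the Beta prior below are made-up illustrative values, not the priors used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_event_times(rate, t_max, rng):
    """Event times of a Poisson process: exponential waiting times with the given rate."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > t_max:
            return np.array(times)
        times.append(t)

# hypothetical settings: a 100 Myr tree, an expected 0.02 mass extinctions per Myr,
# and per-event lineage survival probabilities drawn from a Beta prior
tree_age = 100.0
event_times = simulate_event_times(rate=0.02, t_max=tree_age, rng=rng)
survival_probs = rng.beta(2.0, 18.0, size=event_times.size)
print(event_times, survival_probs)
```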
- Scientists discover a protein that silences the biological clock A new study led by UC Santa Cruz researchers has found that a protein associated with cancer cells is a powerful suppressor of the biological clock that drives the daily ("circadian") rhythms of cells throughout the body. The discovery, published in the June 4 issue of Molecular Cell, adds to a growing body of evidence suggesting a link between cancer and disruption of circadian rhythms, while offering new insights into the molecular mechanisms of the biological clock. The ticking of the biological clock drives fluctuations in gene activity and protein levels that give rise to daily cycles in virtually every aspect of physiology in humans and other animals. A master clock in the brain, tuned to the daily cycle of light and dark, sends out signals that synchronize the molecular clocks ticking away in almost every cell and tissue of the body. Disruption of the clock has been associated with a variety of health problems, including diabetes, heart disease, and cancer. According to Carrie Partch, a professor of chemistry and biochemistry at UC Santa Cruz and corresponding author of the paper, the connection between clock disruption and cancer is still unclear. "The clock is not always disrupted in cancer cells, but studies have shown that disrupting circadian rhythms in mice causes tumors to grow faster, and one of the things the clock does is set restrictions on when cells can divide," she said. The new study focused on a protein called PASD1 that Partch's collaborators at the University of Oxford had found was expressed in a broad range of cancer cells, including melanoma, lung cancer, and breast cancer. It belongs to a group of proteins known as "cancer/testis antigens," which are normally expressed in the germ line cells that give rise to sperm and eggs, but are also found in some cancer cells. Cancer researchers have been interested in these proteins as markers for cancer and as potential targets for therapeutic cancer vaccines.
- AdS/CFT without holography: A hidden dimension on the CFT side and implications for black-hole entropy Hrvoje Nikolic: The author proposes a new non-holographic formulation of AdS/CFT correspondence, according to which quantum gravity on AdS and its dual non-gravitational field theory both live in the same number D of dimensions. The field theory, however, appears (D − 1)-dimensional because the interactions do not propagate in one of the dimensions. The D-dimensional action for the field theory can be identified with the sum over (D−1)-dimensional actions with all possible values Λ of the UV cutoff, so that the extra hidden dimension can be identified with Λ. Since there are no interactions in the extra dimension, most of the practical results of standard holographic AdS/CFT correspondence transcribe to non-holographic AdS/CFT without any changes. However, the implications on black-hole entropy change significantly. The maximal black-hole entropy now scales with volume, while the Bekenstein-Hawking entropy is interpreted as the minimal possible black-hole entropy. In this way, the non-holographic AdS/CFT correspondence offers a simple resolution of the black-hole information paradox, consistent with a recently proposed gravitational crystal.
- Sharp minimax tests for large Toeplitz covariance matrices with repeated observations Cristina Butucea, Rania Zgheib: The authors observe a sample of n independent p-dimensional Gaussian vectors with Toeplitz covariance matrix $\Sigma = [\sigma_{|i-j|}]_{1 \le i,j \le p}$, $\sigma_0 = 1$. They consider the problem of testing the hypothesis that $\Sigma$ is the identity matrix asymptotically when $n \to \infty$ and $p \to \infty$. They also suppose that the covariances $\sigma_k$ decrease either polynomially ($\sum_{k \ge 1} k^{2\alpha} \sigma_k^2 \le L$ for $\alpha > 1/4$ and $L > 0$) or exponentially ($\sum_{k \ge 1} e^{2Ak} \sigma_k^2 \le L$ for $A, L > 0$). The authors consider a test procedure based on a weighted U-statistic of order 2, with optimal weights chosen as the solution of an extremal problem. They give the asymptotic normality of the test statistic under the null hypothesis for fixed n and $p \to +\infty$, and the asymptotic behavior of the type I error probability of the test procedure. They also show that the maximal type II error probability either tends to 0 or is bounded from above. In the latter case, the upper bound is given using the asymptotic normality of the test statistic under alternatives close to the separation boundary. Their assumptions imply mild conditions: $n = o(p^{2\alpha - 1/2})$ (in the polynomial case), $n = o(e^p)$ (in the exponential case). The authors prove both rate optimality and sharp optimality of their results, for $\alpha > 1$ in the polynomial case and for any $A > 0$ in the exponential case. A simulation study illustrates the good behavior of the procedure, in particular for small n, large p.
- Pignistic Probability Transforms for Mixes of Low-and-High-Probability Events John J. Sudano: In some real world information fusion situations, time critical decisions must be made with an incomplete information set. Belief function theories (e.g., Dempster-Shafer theory of evidence, Transferable Belief Model) have been shown to provide a reasonable methodology for processing or fusing the quantitative clues or information measurements that form the incomplete information set. For decision making, the pignistic (from the Latin pignus, a bet) probability transform has been shown to be a good method of using Beliefs or basic belief assignments (BBAs) to make decisions. For many systems, one need only address the most-probable elements in the set. For some critical systems, one must evaluate the risk of wrong decisions and establish safe probability thresholds for decision making. This adds a greater complexity to decision making, since one must address all elements in the set that are above the risk decision threshold. The problem is greatly simplified if most of the probabilities fall below this threshold. Finding a probability transform that properly represents mixes of low-and-high-probability events is essential. This article introduces four new pignistic probability transforms with an implementation that uses the latest values of Beliefs, Plausibilities, or BBAs to improve the pignistic probability estimates. Some of them assign smaller probabilities to smaller values of Beliefs or BBAs than the Smets pignistic transform does, and higher probabilities to larger values of Beliefs or BBAs. These probability transforms will assign a value of probability that converges faster to the values below the risk threshold. A probability information content (PIC) variable is also introduced that assigns an information content value to any set of probabilities. Four operators are defined to help simplify the derivations. This article outlines a systematic methodology of making better decisions using Belief function theories. This methodology can be used to automate critical decisions in complex systems.
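For reference, the classical Smets pignistic transform against which the new transforms are compared can be written down in a few lines of Python; the toy BBA below is made up for illustration:

```python
from itertools import chain

def pignistic_transform(bba):
    """Smets pignistic transform of a basic belief assignment (BBA).

    `bba` maps frozensets of hypotheses to masses summing to 1.
    BetP(x) = sum over focal sets A containing x of m(A) / (|A| * (1 - m(emptyset))).
    """
    m_empty = bba.get(frozenset(), 0.0)
    frame = set(chain.from_iterable(bba))
    betp = {x: 0.0 for x in frame}
    for focal, mass in bba.items():
        if not focal:
            continue
        share = mass / (len(focal) * (1.0 - m_empty))
        for x in focal:
            betp[x] += share
    return betp

# toy BBA over a frame {a, b, c}
bba = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.3, frozenset({'a', 'b', 'c'}): 0.2}
print(pignistic_transform(bba))
```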
- On the Computational Complexity of High-Dimensional Bayesian Variable Selection Yun Yang, Martin J. Wainwright, Michael I. Jordan: The authors study the computational complexity of Markov chain Monte Carlo (MCMC) methods for high-dimensional Bayesian linear regression under sparsity constraints. They first show that a Bayesian approach can achieve variable-selection consistency under relatively mild conditions on the design matrix. Furthermore, they demonstrate that the statistical criterion of posterior concentration need not imply the computational desideratum of rapid mixing of the MCMC algorithm. By introducing a truncated sparsity prior for variable selection, they provide a set of conditions that guarantee both variable-selection consistency and rapid mixing of a particular Metropolis-Hastings algorithm. The mixing time is linear in the number of covariates up to a logarithmic factor. Their proof controls the spectral gap of the Markov chain by constructing a canonical path ensemble that is inspired by the steps taken by greedy algorithms for variable selection.
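The flavor of such a Metropolis-Hastings search over sparse models can be conveyed by a minimal sketch: single-coordinate add/delete proposals, a Zellner g-prior marginal likelihood as a stand-in scoring rule, and a hard truncation on model size. This is an illustrative toy under those assumptions, not the authors' algorithm or theoretical setting:

```python
import numpy as np

def log_marginal(X, y, gamma, g=100.0):
    """Log marginal likelihood (up to constants) of model `gamma` under a Zellner g-prior."""
    n = len(y)
    Xg = X[:, gamma]
    if Xg.shape[1] == 0:
        rss = y @ y
    else:
        beta_hat, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        fitted = Xg @ beta_hat
        resid = y - fitted
        rss = resid @ resid + (fitted @ fitted) / (1.0 + g)
    k = gamma.sum()
    return -0.5 * k * np.log(1.0 + g) - 0.5 * n * np.log(rss)

def mh_variable_selection(X, y, n_iter=2000, s_max=10, seed=0):
    """Single-flip Metropolis-Hastings over sparse models with a truncated size prior."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    gamma = np.zeros(p, dtype=bool)
    current = log_marginal(X, y, gamma)
    for _ in range(n_iter):
        j = rng.integers(p)
        proposal = gamma.copy()
        proposal[j] = ~proposal[j]
        if proposal.sum() > s_max:        # truncated sparsity prior: reject oversized models
            continue
        cand = log_marginal(X, y, proposal)
        if np.log(rng.random()) < cand - current:
            gamma, current = proposal, cand
    return gamma

# toy usage: 3 true signals among 50 covariates
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
beta = np.zeros(50); beta[:3] = 2.0
y = X @ beta + rng.normal(size=100)
print(np.flatnonzero(mh_variable_selection(X, y)))
```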
- Schrödinger Equation of a Particle in a Uniformly Accelerated Frame and the Possibility of a New Kind of Quanta Sanchari De and Somenath Chakrabarty: In this article the authors develop a formalism to obtain the Schrödinger equation for a particle in a frame undergoing uniform acceleration in an otherwise flat Minkowski space-time geometry. They present an exact solution of the equation and obtain the eigenfunctions and the corresponding eigenvalues. It is observed that the Schrödinger equation can be reduced to a one-dimensional hydrogen atom problem, whereas the quantized energy levels are exactly identical to those of a one-dimensional quantum harmonic oscillator. Hence, considering transitions, the authors predict the existence of a new kind of quanta, which will be either absorbed or emitted as the particles get excited or de-excited, respectively.
- Classical Verification of Quantum Proofs Zhengfeng Ji: The author presents a classical interactive protocol that verifies the validity of a quantum witness state for the local Hamiltonian problem. It follows from this protocol that approximating the non-local value of a multi-player one-round game to inverse polynomial precision is QMA-hard. His work makes an interesting connection between the theory of QMA-completeness and Hamiltonian complexity on one hand and the study of non-local games and Bell inequalities on the other.
- A Defense of Scientific Platonism without Metaphysical Presuppositions Peter Punin: From the Platonistic standpoint, mathematical edifices form an immaterial, unchanging, and eternal world that exists independently of human thought. By extension, “scientific Platonism” says that directly mathematizable physical phenomena – in other terms, the research field of physics – are governed by entities belonging to this objectively existing mathematical world. Platonism is a metaphysical theory. But since metaphysical theories, by definition, are neither provable nor refutable, anti-Platonistic approaches cannot be less metaphysical than Platonism itself. In other words, anti-Platonism is not “more scientific” than Platonism. All we can do is compare Platonism and its negations under epistemological criteria such as simplicity, economy of hypotheses, or consistency with regard to their respective consequences. In this paper the author intends to show that anti-Platonism – claiming, in a first approximation, (i) that mathematical edifices consist of meaningless signs assembled according to arbitrary rules, and (ii) that the adequacy of mathematical entities and the phenomena covered by physics results from idealization of these phenomena – rests on metaphysical presuppositions just as much as Platonism does. Thereafter, without directly taking a position, he tries to launch a debate focusing on the following questions: (i) To maintain its coherence, is anti-Platonism not constrained to adopt extremely complex assumptions, difficult to defend and not always consistent with current realities or practices of scientific knowledge? (ii) Instead of supporting anti-Platonism whatever the cost, in particular by the formulation of implausible hypotheses, would it not be more adequate to accept the idea of a mathematical world existing objectively and governing certain aspects of the material world, just as we note the existence of the material world, which could also not exist?
- Prototypical Reasoning about Species and the Species Problem Yuichi Amitani: The species problem is often described as the abundance of conflicting definitions of species, such as the biological species concept and phylogenetic species concepts. But biologists understand the notion of species in a non-definitional as well as a definitional way. In this article the author argues that when they understand species without a definition in their mind, their understanding is often mediated by the notion of good species, or prototypical species, as the idea of “prototype” is explicated in cognitive psychology. This distinction helps us make sense of several puzzling phenomena regarding biologists’ dealing with species, such as the fact that in everyday research biologists often behave as if the species problem is solved, while they should be fully aware that it is not. The author then briefly discusses implications of this finding, including that some extant attempts to answer what the nature of species is have an inadequate assumption about how the notion of species is represented in biologists’ minds.
- Quantum chemistry may be a shortcut to life-changing compounds Rachel Ehrenberg: Technique could ID materials for better solar cells and batteries or more effective medicines - When Alán Aspuru-Guzik was in college, he really got into SETI, the project that uses home computers to speed the search for extraterrestrial intelligence. He was less interested in finding aliens in outer space, however, than in using fleets of computers to search molecular space. He wanted to find chemical compounds that could do intelligent things here on Earth. SETI is a well-known distributed computing project that allows regular people to volunteer their idle computers to sift through reams of data — in this case, radio signals. Aspuru-Guzik, now a theoretical chemist at Harvard University, hopes to harness thousands of home computers to comb through almost every possible combination of atoms.
- Optimal Bayesian estimation in stochastic block models Debdeep Pati, Anirban Bhattacharya: With the advent of structured data in the form of social networks, genetic circuits and protein interaction networks, statistical analysis of networks has gained popularity over recent years. The stochastic block model constitutes a classical cluster-exhibiting random graph model for networks. There is a substantial amount of literature devoted to proposing strategies for estimating and inferring parameters of the model, from both classical and Bayesian viewpoints. Unlike in the classical setting, however, there is a dearth of theoretical results on the accuracy of estimation in the Bayesian setting. In this article, the authors undertake a theoretical investigation of the posterior distribution of the parameters in a stochastic block model. In particular, they show that one obtains optimal rates of posterior convergence with routinely used multinomial-Dirichlet priors on cluster indicators and uniform priors on the probabilities of the random edge indicators. En route, they also develop geometric embedding techniques to exploit the lower-dimensional structure of the parameter space, which may be of independent interest.
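For concreteness, the generative model in question (Dirichlet cluster proportions, multinomial cluster labels, block-wise Bernoulli edges) can be simulated in a few lines of Python; the block-probability matrix below is an arbitrary illustrative choice:

```python
import numpy as np

def simulate_sbm(n, alpha, B, rng):
    """Draw a stochastic block model: Dirichlet cluster proportions, multinomial
    cluster labels, and Bernoulli edges with block-wise probabilities B."""
    props = rng.dirichlet(alpha)                   # cluster proportions
    z = rng.choice(len(alpha), size=n, p=props)    # cluster indicators
    P = B[np.ix_(z, z)]                            # per-pair edge probabilities
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)
    return z, A + A.T                              # symmetric adjacency, no self-loops

rng = np.random.default_rng(1)
B = np.array([[0.5, 0.1],
              [0.1, 0.4]])                         # within/between block probabilities
z, A = simulate_sbm(n=100, alpha=np.ones(2), B=B, rng=rng)
print(A.sum() // 2, "edges")
```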
- Quantum Life Spreads Entanglement Across Generations: The way creatures evolve in a quantum environment throws new light on the nature of life - Computer scientists have long known that evolution is an algorithmic process that has little to do with the nature of the beasts it creates. Instead, evolution is a set of simple steps that, when repeated many times, can solve problems of immense complexity; the problem of creating the human brain, for example, or of building an eye. And, of course, the problem of creating life. Put an evolutionary algorithm to work in a virtual environment and it doesn’t take long to create life-like organisms in silico that live and reproduce entirely within a virtual computer-based environment. This kind of life is not carbon-based or even silicon-based. It is a phenomenon of pure information. But if the nature of information allows the process of evolution to be simulated on an ordinary computer, then why not also on a quantum computer? The resulting life would exist in a virtual quantum environment governed by the bizarre laws of quantum mechanics. As such, it would be utterly unlike anything that biologists have ever encountered or imagined. But what form might quantum life take? In a recent result, we get an insight into this question thanks to the work of Unai Alvarez-Rodriguez and a few pals at the University of the Basque Country in Spain. They have simulated the way life evolves in a quantum environment and use this to propose how it could be done in a real quantum environment for the first time. “We have developed a quantum information model for mimicking the behavior of biological systems inspired by the laws of natural selection,” they say.
- Cosmology from quantum potential: It was shown recently that replacing classical geodesics with quantal (Bohmian) trajectories gives rise to a quantum corrected Raychaudhuri equation (QRE). In this article, a derivation of the second order Friedmann equations from the QRE is carried out, and it is shown that these also contain a couple of quantum correction terms, the first of which can be interpreted as a cosmological constant (and gives a correct estimate of its observed value), while the second can be interpreted as a radiation term in the early universe, which gets rid of the big-bang singularity and predicts an infinite age of our universe.
- Improved minimax estimation of a multivariate normal mean under heteroscedasticity Zhiqiang Tan: The author considers the problem of estimating a multivariate normal mean with a known variance matrix, which is not necessarily proportional to the identity matrix. The coordinates are shrunk directly in proportion to their variances in Efron and Morris’ (J. Amer. Statist. Assoc. 68 (1973) 117–130) empirical Bayes approach, whereas inversely in proportion to their variances in Berger’s (Ann. Statist. 4 (1976) 223–226) minimax estimators. The author proposes a new minimax estimator, by approximately minimizing the Bayes risk with a normal prior among a class of minimax estimators where the shrinkage direction is open to specification and the shrinkage magnitude is determined to achieve minimaxity. The proposed estimator has an interesting simple form such that one group of coordinates is shrunk in the direction of Berger’s estimator and the remaining coordinates are shrunk in the direction of the Bayes rule. Moreover, the proposed estimator is scale adaptive: it can achieve close to the minimum Bayes risk simultaneously over a scale class of normal priors (including the specified prior) and achieve close to the minimax linear risk over a corresponding scale class of hyper-rectangles. For various scenarios in the author's numerical study, the proposed estimators with extreme priors yield more substantial risk reduction than existing minimax estimators.
- Large-scale Machine Learning for Metagenomics Sequence Classification Kévin Vervier, Pierre Mahé, Maud Tournoud, Jean-Baptiste Veyrieras and Jean-Philippe Vert: Metagenomics characterizes the taxonomic diversity of microbial communities by sequencing DNA directly from an environmental sample. One of the main challenges in metagenomics data analysis is the binning step, where each sequenced read is assigned to a taxonomic clade. Due to the large volume of metagenomics datasets, binning methods need fast and accurate algorithms that can operate with reasonable computing requirements. While standard alignment-based methods provide state-of-the-art performance, compositional approaches that assign a taxonomic class to a DNA read based on the k-mers it contains have the potential to provide faster solutions. In this work, the authors investigate the potential of modern, large-scale machine learning implementations for taxonomic assignment of next-generation sequencing reads based on their k-mer profiles. They show that machine learning-based compositional approaches benefit from increasing the number of fragments sampled from reference genomes to tune their parameters, up to a coverage of about 10, and from increasing the k-mer size to about 12. Tuning these models involves training a machine learning model on about 10^8 samples in 10^7 dimensions, which is out of reach of standard software but can be done efficiently with modern implementations for large-scale machine learning. The resulting models are competitive in terms of accuracy with well-established alignment tools for problems involving a small to moderate number of candidate species and reasonable amounts of sequencing errors. The authors also show, however, that compositional approaches are still limited in their ability to deal with problems involving a greater number of species, and are more sensitive to sequencing errors. They finally confirm that compositional approaches achieve faster prediction times, with a gain of 3 to 15 times with respect to the BWA-MEM short read mapper, depending on the number of candidate species and the level of sequencing noise.
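A toy version of the compositional pipeline, mapping a read to its k-mer profile and feeding that vector to a linear classifier, might look as follows in Python. The k, reads, and labels are deliberately small and made up; the paper's large-scale setting relies on dedicated large-scale learning implementations rather than scikit-learn:

```python
import numpy as np
from itertools import product
from sklearn.linear_model import SGDClassifier

K = 4  # small k for the sketch; the paper explores k-mer sizes up to about 12
KMERS = {''.join(p): i for i, p in enumerate(product('ACGT', repeat=K))}

def kmer_profile(read):
    """Normalized k-mer count vector of a DNA read."""
    v = np.zeros(len(KMERS))
    for i in range(len(read) - K + 1):
        idx = KMERS.get(read[i:i + K])
        if idx is not None:
            v[idx] += 1
    return v / max(v.sum(), 1)

# toy training set: fragments labelled by a hypothetical taxon of origin
reads = ["ACGTACGTACGTGCA", "TTTTGGGGCCCCAAAA", "ACGTGCATGCATGCAT", "TTGGCCAATTGGCCAA"]
labels = ["taxonA", "taxonB", "taxonA", "taxonB"]
X = np.vstack([kmer_profile(r) for r in reads])

clf = SGDClassifier(max_iter=1000).fit(X, labels)       # linear model on k-mer profiles
print(clf.predict([kmer_profile("ACGTACGTGCATGCAT")]))
```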
- Minimal length effects in black hole thermodynamics from tunneling formalism Sunandan Gangopadhyay: The tunneling formalism in the Hamilton-Jacobi approach is adopted to study Hawking radiation of massless Dirac particles from spherically symmetric black hole spacetimes, incorporating the effects of the generalized uncertainty principle (GUP). The Hawking temperature is found to contain corrections from the generalized uncertainty principle. Further, the author shows from this result that the ratio of the GUP-corrected energy of the particle to the GUP-corrected Hawking temperature is equal to the ratio of the corresponding uncorrected quantities. This result is then exploited to compute the Hawking temperature for more general forms of the uncertainty principle having an infinite number of terms. Choosing the coefficients of the terms in the series in a specific way enables one to sum the infinite series exactly. This leads to a Hawking temperature for the Schwarzschild black hole that agrees with the result which accounts for the one-loop back reaction effect. The entropy is finally computed and yields the area theorem up to logarithmic corrections.
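For orientation, the uncorrected starting points are the standard Schwarzschild Hawking temperature and a commonly used one-parameter form of the generalized uncertainty principle; the paper's result modifies the first of these with GUP corrections, whose exact form is not reproduced here:

```latex
% Standard (uncorrected) Hawking temperature of a Schwarzschild black hole of mass M
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
% A commonly used one-parameter form of the generalized uncertainty principle,
% with deformation parameter \beta
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right)
```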
- Bayesian inference for higher order ordinary differential equation models Prithwish Bhaumik and Subhashis Ghosal: Often the regression function appearing in fields like economics, engineering, and the biomedical sciences obeys a system of higher order ordinary differential equations (ODEs). The equations are usually not analytically solvable. The authors are interested in inferring the unknown parameters appearing in the equations. A significant amount of work has been done on parameter estimation in first order ODE models. Bhaumik and Ghosal (2014a) considered a two-step Bayesian approach by putting a finite random series prior on the regression function using a B-spline basis. The posterior distribution of the parameter vector is induced from that of the regression function. Although this approach is computationally fast, the Bayes estimator is not asymptotically efficient. Bhaumik and Ghosal (2014b) remedied this by directly considering the distance between the function in the nonparametric model and a Runge-Kutta (RK4) approximate solution of the ODE while inducing the posterior distribution on the parameter. They also studied the direct Bayesian method obtained from the approximate likelihood given by the RK4 method. In this paper the authors extend these ideas to the higher order ODE model and establish Bernstein-von Mises theorems for the posterior distribution of the parameter vector for each method, with an $n^{-1/2}$ contraction rate.
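The RK4 approximation entering the second approach is just the classical fourth-order Runge-Kutta scheme. A minimal Python sketch, applied to a toy second-order ODE y'' = -θy rewritten as a first-order system, is:

```python
import numpy as np

def rk4(f, y0, t):
    """Classical fourth-order Runge-Kutta solver for y' = f(t, y) on the grid t."""
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, y[i] + h / 2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# toy second-order ODE y'' = -theta * y written as a first-order system (y, y')
theta = 2.0
f = lambda t, y: np.array([y[1], -theta * y[0]])
t = np.linspace(0, 10, 201)
sol = rk4(f, np.array([1.0, 0.0]), t)   # approximate ODE solution of the kind used in the likelihood
```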
- How spacetime is built by quantum entanglement: A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo’s Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors’ Suggestion “for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields.” Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales. The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive. Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This is analogous to diagnosing conditions inside your body by looking at X-ray images on two-dimensional sheets. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in the emergence of spacetime was not clear until the new paper by Ooguri and collaborators. Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called “spooky action at a distance.” The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory.
- A model for gene deregulation detection using expression data Thomas Picchetti, Julien Chiquet, Mohamed Elati, Pierre Neuvial, Rémy Nicolle and Etienne Birmele: In tumoral cells, gene regulation mechanisms are severely altered, and these modifications in the regulations may be characteristic of different subtypes of cancer. However, these alterations do not necessarily induce differential expression between the subtypes. To address this issue, the authors propose a statistical methodology to identify the misregulated genes given a reference network and gene expression data. Their model is based on a regulatory process in which all genes are allowed to be deregulated. They derive an EM algorithm where the hidden variables correspond to the status (under/over/normally expressed) of the genes and where the E-step is solved thanks to a message passing algorithm. Their procedure provides posterior probabilities of deregulation in a given sample for each gene. They then assess the performance of their method by numerical experiments on simulations and on a bladder cancer data set.
- Probabilistic Knowledge as Objective Knowledge in Quantum Mechanics: Potential Powers Instead of Actual Properties Christian de Ronde: In classical physics, probabilistic or statistical knowledge has always been related to ignorance or inaccurate subjective knowledge about an actual state of affairs. This idea has been extended to quantum mechanics through a completely incoherent interpretation of the Fermi-Dirac and Bose-Einstein statistics in terms of “strange” quantum particles. This interpretation, naturalized through a widespread “way of speaking” in the physics community, contradicts Born’s physical account of Ψ as a “probability wave” which provides statistical information about outcomes that, in fact, cannot be interpreted in terms of ‘ignorance about an actual state of affairs’. In the present paper the author discusses how the metaphysics of actuality has played an essential role in limiting the possibilities of understanding things differently. Instead, a metaphysical scheme in terms of powers with definite potentia is proposed, which allows us to consider quantum probability in a new light, namely, as providing objective knowledge about a potential state of affairs.
- Targeted Diversity Generation by Intraterrestrial Archaea and Archaeal Viruses In the evolutionary arms race between microbes, their parasites, and their neighbours, the capacity for rapid protein diversification is a potent weapon. Diversity-generating retroelements (DGRs) use mutagenic reverse transcription and retrohoming to generate myriad variants of a target gene. Originally discovered in pathogens, these retroelements have been identified in bacteria and their viruses, but never in archaea. Here the authors report the discovery of intact DGRs in two distinct intraterrestrial archaeal systems: a novel virus that appears to infect archaea in the marine subsurface, and, separately, two uncultivated nanoarchaea from the terrestrial subsurface. The viral DGR system targets putative tail fibre ligand-binding domains, potentially generating >10^18 protein variants. The two single-cell nanoarchaeal genomes each possess ≥4 distinct DGRs. Against an expected background of low genome-wide mutation rates, these results demonstrate a previously unsuspected potential for rapid, targeted sequence diversification in intraterrestrial archaea and their viruses.
- Einstein, Bohm, and Leggett–Garg Guido Bacciagaluppi: In a recent paper (Bacciagaluppi 2015), the author analysed and criticised Leggett and Garg’s argument to the effect that macroscopic realism contradicts quantum mechanics, by contrasting their assumptions with the example of Bell’s stochastic pilot-wave theories, and applied Dzhafarov and Kujala’s analysis of contextuality in the presence of signalling to the case of the Leggett–Garg inequalities. In this chapter, he discusses more generally the motivations for macroscopic realism, taking a cue from Einstein’s criticism of the Bohm theory, then goes on to summarise his previous results, with a few additional comments on other recent work on Leggett and Garg. [To appear in: E. Dzhafarov (ed.), Contextuality from Quantum Physics to Psychology (Singapore: World Scientific).]
- Should We Really Use Post-Hoc Tests Based on Mean-Ranks? Alessio Benavoli, Giorgio Corani, Francesca Mangili: The statistical comparison of multiple algorithms over multiple data sets is fundamental in machine learning. This is typically carried out by the Friedman test. When the Friedman test rejects the null hypothesis, multiple comparisons are carried out to establish which are the significant differences among algorithms. The multiple comparisons are usually performed using the mean-ranks test. The aim of this technical note is to discuss the inconsistencies of the mean-ranks post-hoc test, with the goal of discouraging its use in machine learning as well as in medicine, psychology, etc. The authors show that the outcome of the mean-ranks test depends on the pool of algorithms originally included in the experiment. In other words, the outcome of the comparison between algorithms A and B depends also on the performance of the other algorithms included in the original experiment. This can lead to paradoxical situations. For instance, the difference between A and B could be declared significant if the pool comprises algorithms C, D, E and not significant if the pool comprises algorithms F, G, H. To overcome these issues, the authors suggest instead performing the multiple comparisons using a test whose outcome only depends on the two algorithms being compared, such as the sign test or the Wilcoxon signed-rank test.
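The recommended alternatives are readily available off the shelf. A small Python sketch, using made-up accuracies of two algorithms on the same ten data sets, shows the Wilcoxon signed-rank test and the sign test, both of which depend only on the pair being compared:

```python
import numpy as np
from scipy.stats import wilcoxon, binomtest

# hypothetical accuracies of algorithms A and B on the same 10 data sets
acc_A = np.array([0.81, 0.78, 0.90, 0.66, 0.73, 0.85, 0.77, 0.92, 0.69, 0.80])
acc_B = np.array([0.79, 0.74, 0.88, 0.67, 0.70, 0.83, 0.75, 0.90, 0.68, 0.78])

# Wilcoxon signed-rank test: depends only on the paired differences A - B
print(wilcoxon(acc_A, acc_B))

# sign test: binomial test on the number of data sets where A beats B (ties dropped)
wins = int(np.sum(acc_A > acc_B))
ties = int(np.sum(acc_A == acc_B))
print(binomtest(wins, n=len(acc_A) - ties, p=0.5))
```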
- The study of Lorenz and Rössler strange attractors by means of quantum theory Yu I Bogdanov, N A Bogdanova: The authors have developed a method for complementing an arbitrary classical dynamical system to a quantum system, using the Lorenz and Rössler systems as examples. The Schrödinger equation for the corresponding quantum statistical ensemble is described in terms of the Hamilton-Jacobi formalism. They consider both the original dynamical system in the position space and the conjugate dynamical system corresponding to the momentum space. Such simultaneous consideration of mutually complementary position and momentum frameworks provides a deeper understanding of the nature of chaotic behavior in dynamical systems. The authors have shown that the new formalism provides a significant simplification of Lyapunov exponent calculations. From the point of view of quantum optics, the Lorenz and Rössler systems correspond to three modes of a quantized electromagnetic field in a medium with cubic nonlinearity. From the computational point of view, the new formalism provides a basis for the analysis of complex dynamical systems using quantum computers.
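For reference, the classical Lorenz system in position space, with the standard chaotic parameter values, can be integrated in a few lines of Python (the quantum-ensemble construction in the paper is a separate layer on top of such trajectories):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classical Lorenz system in position space."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], max_step=0.01)
print(sol.y.shape)   # trajectory on the strange attractor
```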
- Achieving Optimal Misclassification Proportion in Stochastic Block Model Chao Gao, Zongming Ma, Anderson Y. Zhang and Harrison H. Zhou: Community detection is a fundamental statistical problem in network data analysis. Many algorithms have been proposed to tackle this problem. Most of these algorithms are not guaranteed to achieve the statistical optimality of the problem, while procedures that achieve information-theoretic limits for general parameter spaces are not computationally tractable. In this paper, the authors present a computationally feasible two-stage method that achieves optimal statistical performance in misclassification proportion for the stochastic block model under weak regularity conditions. Their two-stage procedure consists of a refinement stage motivated by penalized local maximum likelihood estimation. This stage can take a wide range of weakly consistent community detection procedures as an initializer and outputs a community assignment that achieves optimal misclassification proportion with high probability. The practical effectiveness of the new algorithm is demonstrated by competitive numerical results.
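A heavily simplified sketch of the two-stage idea in Python could look like the following: a spectral initializer followed by greedy local reassignment sweeps. The refinement rule below is a crude connectivity surrogate for assortative networks, not the authors' penalized local maximum likelihood step:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_init(A, k):
    """Weakly consistent initializer: k-means on the leading eigenvectors of A."""
    vals, vecs = np.linalg.eigh(A.astype(float))
    U = vecs[:, np.argsort(np.abs(vals))[-k:]]
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

def refine(A, z, k, n_sweeps=5):
    """Greedy refinement: move each node to the community it is most connected to."""
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            best, best_score = z[i], -np.inf
            for c in range(k):
                members = np.flatnonzero(z == c)
                members = members[members != i]
                if members.size == 0:
                    continue
                score = A[i, members].mean()      # average connectivity toward community c
                if score > best_score:
                    best, best_score = c, score
            z[i] = best
    return z

# usage on a toy assortative graph with two planted communities
rng = np.random.default_rng(0)
z_true = np.repeat([0, 1], 30)
P = np.where(z_true[:, None] == z_true[None, :], 0.5, 0.05)
A = np.triu((rng.random((60, 60)) < P).astype(float), 1); A += A.T
z_hat = refine(A, spectral_init(A, 2), 2)
```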