Predicting the future state of a system has always been a natural motivation for science and practical applications. Such a topic, beyond its obvious technical and societal relevance, is also interesting from a conceptual point of view. This owes to the fact that forecasting lends itself to two equally radical, yet opposite, methodologies: a reductionist one, based on first principles, and a naively inductivist one, based only on data. The latter view has recently gained attention in response to the availability of unprecedented amounts of data and increasingly sophisticated algorithmic techniques of analysis. The purpose of this note is to assess critically the role of big data in reshaping the key aspects of forecasting, and in particular the claim that bigger data leads to better predictions. Drawing on the representative example of weather forecasts, we argue that this is not generally the case. We conclude by suggesting that a clever and context-dependent compromise between modelling and quantitative analysis stands out as the best forecasting strategy, as anticipated nearly a century ago by Richardson and von Neumann.
KEYWORDS: Forecasting, Big Data, Epistemology
H. Hosni and A. Vulpiani, “Forecasting in Light of Big Data”, Philosophy and Technology (2017). doi:10.1007/s13347-017-0265-3
Preprint available here https://arxiv.org/abs/1705.11186
I just took over the editorship of THE REASONER from Jon Williamson, who started it ten years ago. I’m very excited about this, and this is my first editorial. Read the whole issue at http://thereasoner.org
Reasoning is naturally multi-disciplinary, inter-disciplinary and inter-sectoral. While those tend to appear as buzzwords in the narratives of funding agencies in Europe and elsewhere, the reality is bitterly different. Reasoners struggle when the workings of academia demand comparison with more narrowly focussed areas, in both the Natural and the Social sciences. At that moment, our strength is likely to turn into our weakness. Community building and its consolidation are therefore no less than vital to us.
This paper introduces a logical analysis of convex combinations within the framework of Lukasiewicz real-valued logic. This provides a natural, albeit as yet unexplored, link between the fields of many-valued logics and decision theory where the notion of convexity plays a central role.
We set out to explore such a link by defining convex operators on MV-algebras, which are the equivalent algebraic semantics of Lukasiewicz logic. This gives us a formal language to reason about the expected value of bounded random variables. As an illustration of the applicability of our framework we present a logical version of the Anscombe-Aumann representation result.
KEYWORDS: MV-algebra, convexity, uncertainty measures, Anscombe-Aumann
Flaminio, T., H. Hosni, and S. Lapenta. 2017. “Convex MV-Algebras: Many-Valued Logics Meet Decision Theory.” Studia Logica (Online First). DOI: 10.1007/s11225-016-9705-9.
Originally published in The Reasoner Volume 10, Number 12 – December 2016
The International Journal of Approximate Reasoning is celebrating 40 years of Dempster–Shafer theory with a very interesting special issue.
The opening sentence of the Editorial, by Thierry Denoeux, sets the tone:
Among the many books published each year, some are good and a few are very good, but only exceptionally does a book propose a radically different way of approaching a scientific question, and start a new research field. “A Mathematical Theory of Evidence” by Glenn Shafer, which appeared in 1976, is one of those.
Originally published in The Reasoner Volume 10, Number 9 – September 2016
The Rio 2016 Olympic Games have been, as usual, a great illustration of the Laplacian dictum according to which probability is partly due to our ignorance and partly to our knowledge. For the certainty that the best athlete(s) will win or, at the other end of the spectrum, the impossibility of figuring out who is more likely to win, would turn the greatest sporting event on the planet into a very dull couple of weeks.
Laplacian romanticism notwithstanding, sport generates data, lots of it. Not only does this allow for high-tech betting, it also feeds uncertain reasoning research. In particular SportVU, a body-tracking system which builds on Israeli military technology, played an unexpected role in addressing a long-standing question: Is the Hot Hand phenomenon real, or is it just one of the many ways in which we tend to see patterns where there aren’t any?
Originally published in The Reasoner Volume 10, Number 8 – August 2016
Statistician Stephen Stigler put forward in the 1980s the amusing Law of Eponymy which bears his name(!). According to Stigler’s Law, the vast majority (some say all) of scientific discoveries are not named after those who actually made the discovery. Wikipedia lists a rather impressive number of instances of Stigler’s Law, featuring the Higgs boson, Halley’s comet, Euler’s formula, the Cantor–Bernstein–Schroeder theorem, and of course Newton’s first two laws of mechanics. Of particular interest is the case of Gauss who, according to this list, has his name mistakenly attached to three items.
Rather coherently, his recent book (S. Stigler, 2016: The Seven Pillars of Statistical Wisdom, Harvard University Press) presents the fascinating edifice of statistics by giving more emphasis to the key ideas on which its foundations rest than to the figures who came up with them. The seven pillars are:
Here is the abstract for my invited talk at Logic, Algebra and Truth Degrees 2016, which will be held from 28 to 30 June 2016 in South Africa.
Logic, Probability and the Foundations of Uncertain Reasoning
The relation between logic and probability is a fascinating one. Leibniz saw them as two sides of the same coin, whereas Boole and De Morgan thought of probability as the natural generalisation of logic to reasoning under uncertainty. This cross-contamination rapidly declined with the coming of age of “mathematical logic” which reached its climax at the end of the 1920s. Around that time, Kolmogorov was providing the definitive answer to Hilbert’s sixth problem: the axiomatisation of probability. By then, the research agendas of mathematical logic and probability were showing virtually no overlap: a narrow focus on the foundations of mathematics for the former, and the assimilation to measure theory and the emerging theory of stochastic processes for the latter. At that point, neither logic nor probability were particularly concerned with uncertain reasoning and its foundations.
Things changed again during the 1980s, when the sodality of logic and probability was revived as a consequence of the pressing needs of artificial intelligence and formal epistemology. In this talk I will provide a (naturally biased) account of how this is currently affecting the foundations of reasoning and decision making under uncertainty. After providing some standard background, I will illustrate how the investigation of increasingly expressive measures of uncertainty based on non-classical logics, and in particular many-valued ones, has led to exciting research questions. I will conclude by briefly discussing how some of the ensuing answers provide interesting and to some extent surprising feedback on the classical notion of probability.
Here’s the abstract of my forthcoming talk at the Prague seminar The Future of Mathematical Fuzzy Logic, Prague, 16–18 June 2016.
Mathematical Fuzzy Logic in Reasoning and Decision Making under Uncertainty
“Fuzzy methods” are currently ubiquitous in many economic applications, especially in business and marketing. In philosophy, “fuzzy logic” has long been discussed in connection with vagueness and its linguistic consequences. Both areas have played an important role and ultimately brought about a number of the contributions envisaged by the early works on “fuzzy sets”. Over the past two decades, much work has been done to connect those early contributions to the wider logical landscape, resulting in what has eventually been termed Mathematical Fuzzy Logic (MFL). As a branch of mathematical logic, MFL developed its own tools and methods, which partly advanced our understanding of many-valued logics and partly led to new logical systems. I will begin by pointing out two examples of how MFL helps us frame the discussion of central concepts in philosophy (graded truth) and decision theory (convexity). Building on those, I will then suggest that a number of foundational problems in the social sciences lend themselves to being tackled from the MFL perspective (rather than from the “fuzzy methods” perspective). The question then becomes: how does MFL help us rethink the foundations of reasoning and decision-making under uncertainty?
Hykel Hosni (2016). Review of ‘Analogies and Theories: Formal Models of Reasoning’ Economics and Philosophy, 32, pp 373-381. doi:10.1017/S0266267116000122.
Induction and analogy have long been considered indispensable items in the uncertain reasoner’s toolbox, and yet their formal relation to probability has never been less than puzzling. One of the first mathematically well-informed attempts at gripping the problem can be found in the penultimate chapter of Laplace’s Essai philosophique sur les probabilités. There Laplace, a key contributor to the construction of the theories of mathematical probability and statistics, argues that analogy and induction, along with a “happy tact”, provide the principal means for “approaching certainty” when the probabilities involved are “impossible to submit to calculus”. He then hastens to warn the reader against the subtleties of reasoning by induction and the difficulties of pinning down the right “similarity” between causes and effects which is required for the sound application of analogical reasoning. Two centuries on, reasoning about the kind of uncertainty which resists clear-cut probabilistic representations remains, theoretically, pretty much uncharted territory.
Analogies and Theories: Formal Models of Reasoning is the attempt by I. Gilboa, L. Samuelson and D. Schmeidler to put those vexed epistemological questions on a firm decision-theoretic footing.
Originally published in The Reasoner Volume 10, Number 6 – June 2016
In Mathematics and Plausible Reasoning: Patterns of plausible inference G. Polya introduces random mass phenomena along the following lines. Consider raindrops falling on an ideally squared pavement, and focus on just two otherwise identical stones called Left and Right. It starts raining (conveniently one drop at a time) and we start recording the sequence of Left and Right according to which stone is hit by each raindrop. In this situation we are (reasonably) unable to predict where the next raindrop will fall, but we can easily predict that in the long run, both stones will be wet. This, Polya suggests, is typical of random mass phenomena: “unpredictable in certain details, predictable in certain numerical proportions to the whole”.
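Polya’s point that random mass phenomena are “unpredictable in certain details, predictable in certain numerical proportions to the whole” lends itself to a quick simulation. The following is a minimal sketch, not Polya’s own setup: we simply assume each drop independently hits Left or Right with equal chance.

```python
import random

random.seed(0)  # fix the seed so the sketch is reproducible

# Toy model of the two-stone pavement: each raindrop independently
# hits the Left or the Right stone with equal probability.
n_drops = 100_000
hits = [random.choice(["Left", "Right"]) for _ in range(n_drops)]

# Any individual drop is unpredictable: the first few hits show no
# usable pattern for guessing the next one.
print(hits[:10])

# Yet the numerical proportion over the whole is predictable: the
# share of drops landing on Left settles close to one half, and
# both stones end up wet.
left_share = hits.count("Left") / n_drops
print(left_share)
```

Inspecting any short prefix of `hits` gives no help in predicting the next entry, while `left_share` reliably lands near 0.5 for large `n_drops`, which is exactly the aggregate-level predictability Polya has in mind.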
The fact that we can often make reliable predictions about some aggregate, but fail to draw from this any obvious conclusion about the individuals, has profound implications not only for the foundations of probability, but also for its practical applications. In medicine, for instance, this is quite the norm. In the absence of further information, what does the fact that a certain side effect of, say, statins is known to affect 1 in 100 patients say about your chances of suffering from it? Problems like this raise the more general question: to what extent can forecasts about some aggregate reliably inform us about its individuals? This question, and its philosophical underpinnings, are tackled by P. Dawid (2016: On Individual Risk, Synthese, First Online, Open Access).
Here’s a motivating example from the paper: