Mathematical Fuzzy Logic in Reasoning and Decision Making under Uncertainty

Here’s the abstract of my forthcoming talk at the Prague seminar “The Future of Mathematical Fuzzy Logic”, Prague, 16–18 June 2016.

Mathematical Fuzzy Logic in Reasoning and Decision Making under Uncertainty

“Fuzzy methods” are currently ubiquitous in many economic applications, especially to business and marketing. In philosophy, “fuzzy logic” has long been discussed in connection with vagueness and its linguistic consequences. Both areas have played an important role and ultimately brought about a number of the contributions envisaged by the early works on “fuzzy sets”. Over the past two decades, much work has been done to connect those early contributions to the wider logical landscape, resulting in what has eventually been termed Mathematical Fuzzy Logic (MFL). As a branch of mathematical logic, MFL developed its own tools and methods, which partly advanced our understanding of many-valued logics and partly led to new logical systems. I will begin by pointing out two examples of how MFL helps us frame the discussion of central concepts in philosophy (graded truth) and in decision theory (convexity). Building on those, I will then suggest that a number of foundational problems in the social sciences lend themselves to being tackled from the MFL perspective (rather than from the “fuzzy methods” perspective). The question then becomes: how does MFL help us rethink the foundations of reasoning and decision-making under uncertainty?

 

Analogies and Theories: Formal Models of Reasoning, Itzhak Gilboa, Larry Samuelson and David Schmeidler.


Hykel Hosni (2016). Review of ‘Analogies and Theories: Formal Models of Reasoning’. Economics and Philosophy, 32, pp. 373–381. doi:10.1017/S0266267116000122.

Induction and analogy have long been considered indispensable items in the uncertain reasoner’s toolbox, and yet their formal relation to probability has never been less than puzzling. One of the first mathematically well-informed attempts at gripping the problem can be found in the penultimate chapter of Laplace’s Essai philosophique sur les probabilités. There Laplace, a key contributor to the construction of the theories of mathematical probability and statistics, argues that analogy and induction, along with a “happy tact”, provide the principal means for “approaching certainty” when the probabilities involved are “impossible to submit to calculus”. Laplace then hastens to warn the reader against the subtleties of reasoning by induction and the difficulties of pinning down the right “similarity” between causes and effects which is required for the sound application of analogical reasoning. Two centuries on, reasoning about the kind of uncertainty which resists clear-cut probabilistic representations remains, theoretically, pretty much uncharted territory.

Analogies and Theories: Formal Models of Reasoning is the attempt by I. Gilboa, L. Samuelson and D. Schmeidler to put those vexed epistemological questions on a firm decision-theoretic footing.

Read more: [preprint] – [published version]

P. Dawid, On Individual Risk

Originally published in The Reasoner, Volume 10, Number 6, June 2016

Mousetraps usually kill. But is this one going to kill the cautious rodent?

In Mathematics and Plausible Reasoning: Patterns of plausible inference G. Polya introduces random mass phenomena along the following lines. Consider raindrops falling on an ideally squared pavement, and focus on just two otherwise identical stones called Left and Right. It starts raining (conveniently one drop at a time) and we start recording the sequence of Left and Right according to which stone is hit by each raindrop. In this situation we are (reasonably) unable to predict where the next raindrop will fall, but we can easily predict that in the long run, both stones will be wet. This, Polya suggests, is typical of random mass phenomena: “unpredictable in certain details, predictable in certain numerical proportions to the whole”.
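Polya’s illustration is easy to reproduce in a few lines of code. The sketch below is mine, not Polya’s: the assumption that each drop lands on either stone with equal probability, and the name `simulate_raindrops`, are choices made purely for illustration. It exhibits exactly the contrast Polya describes: no detail of the sequence is predictable, yet the overall proportion is.

```python
import random

def simulate_raindrops(n_drops, seed=0):
    """Record which of two stones, Left or Right, each raindrop hits."""
    rng = random.Random(seed)
    sequence = [rng.choice(["Left", "Right"]) for _ in range(n_drops)]
    left_share = sequence.count("Left") / n_drops
    return sequence, left_share

seq, share = simulate_raindrops(10_000)
print(seq[:5])  # the details: no pattern to exploit here
print(share)    # the numerical proportion: reliably close to 1/2
```

Running the simulation again with a different seed changes every detail of the sequence, but the proportion of Lefts stays close to one half.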

The fact that we can often make reliable predictions about some aggregate, yet fail to draw from this obvious conclusions about the individuals, has profound implications not only for the foundations of probability, but also for its practical applications. In medicine, for instance, this is quite the norm. In the absence of further information, what does the fact that a certain side effect of, say, statins is known to affect 1 in 100 patients say about your chances of suffering from it? Problems like this raise the more general question: to what extent can forecasts about some aggregate reliably inform us about its individuals? This question, together with its philosophical underpinnings, is tackled by P. Dawid (2016: On Individual Risk, Synthese, First Online, Open Access).
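A toy calculation makes the gap between aggregate and individual risk concrete. In the sketch below, the subgroup sizes and risks are hypothetical numbers chosen for illustration (they are not figures from Dawid’s paper): two populations exhibit the same 1-in-100 aggregate rate, yet the risks faced by individuals differ radically depending on a hidden feature.

```python
def aggregate_rate(groups):
    """Expected fraction of cases across subgroups given as (size, individual_risk)."""
    total = sum(size for size, _ in groups)
    expected_cases = sum(size * risk for size, risk in groups)
    return expected_cases / total

# Everyone faces the same 1% risk...
homogeneous = [(100_000, 0.01)]
# ...or a tenth of the population faces a 10% risk and the rest face none.
heterogeneous = [(10_000, 0.10), (90_000, 0.0)]

print(round(aggregate_rate(homogeneous), 6))    # 0.01
print(round(aggregate_rate(heterogeneous), 6))  # 0.01
```

Both populations report the same aggregate rate, so the 1-in-100 figure alone cannot tell an individual which of the two situations they are in; that is precisely the gap the paper addresses.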

Here’s a motivating example from the paper:

Continue reading →

Not Quite True: The logic, philosophy and mathematics of vagueness


13 May 2016, 2:00 pm, Aula Enzo Paci, Department of Philosophy, University of Milan

The purpose of this workshop is to provide a multidisciplinary perspective on the fascinating yet elusive notion of vagueness. Logical, philosophical and mathematical concepts and techniques will be brought to bear on the topic.

Attendance is free, all welcome!

Speakers

 

Titles, Abstracts and practical information

Hilbert’s interpretations of probability

Originally published in The Reasoner, Volume 10, Number 5, May 2016

 

The concept of probability is interesting, among other reasons, for the variety of ways in which we may be talking about distinct things and yet, in the end, still be talking about probability. From the philosophy-of-mathematics point of view, this is vividly illustrated by the fact that, except possibly for one’s views on ‘finite vs. countable additivity’, one axiomatisation serves a great number of largely incompatible interpretations of the concept being axiomatised. Chapters 1–3 of J. Williamson (2010, In Defence of Objective Bayesianism, Oxford University Press) offer a wide-angle picture which I recommend to those who are unfamiliar with the landscape of probability interpretations.

Viewed at a relatively coarse grain, the axiomatisation of probability followed a path similar to that of other mathematical concepts, until at the turn of the twentieth century the key motivation became that of securing its applications against the threat of paradoxical consequences. Needless to say, David Hilbert played an important role in this. The question appears explicitly as number six in the list of problems Hilbert posed to the audience of the Second International Congress of Mathematicians, in Paris on 8 August 1900:

Six. Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics. […] As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases.

Continue reading →

Probabilistic mistakes kill (possibly many innocents)

Originally published in The Reasoner, Volume 10, Number 4, March 2016


The Oscar-winning documentary Citizenfour brought the concept of metadata to the attention of general audiences. As one scene of the film explains, we leave, mostly unwillingly, many digital traces of our daily activities. Most Londoners, for instance, use an Oyster card to travel across the city. When they top up their Oyster online or opt in to the convenient auto top-up, they effectively allow whoever has access to the data to track their routine. (And the recent introduction of contactless payment on the London transport system has clearly made this even simpler.) This can then be linked to what people buy, what they read on the internet, what they post on social networks, and indeed to what other people do. That’s metadata.

It goes without saying that metadata is syntax with no semantics. There are many reasons why people do what they do, and there are many people travelling independently on the same journey. Quite obviously, then, the dots representing their digital traces can be joined in a number of distinct ways, possibly yielding specific but wrong pictures. That’s why the Orwellian idea that someone possesses a wealth of metadata about us is indeed frightening. But knowing that governments may kill based on it is rather hard to accept.

The opening of this recent piece by C. Grothoff and J.M. Porup on Ars Technica UK is chilling:

In 2014, the former director of both the CIA and NSA proclaimed that “we kill people based on metadata.” Now, a new examination of previously published Snowden documents suggests that many of those people may have been innocent.

Continue reading →