Originally published in The Reasoner, Volume 10, Number 6, June 2016
In Mathematics and Plausible Reasoning: Patterns of Plausible Inference, G. Polya introduces random mass phenomena along the following lines. Consider raindrops falling on an ideally squared pavement, and focus on just two otherwise identical stones, called Left and Right. It starts raining (conveniently one drop at a time) and we start recording a sequence of Lefts and Rights according to which stone is hit by each raindrop. In this situation we are (reasonably) unable to predict where the next raindrop will fall, but we can easily predict that in the long run both stones will be wet. This, Polya suggests, is typical of random mass phenomena: “unpredictable in certain details, predictable in certain numerical proportions to the whole”.
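Polya's point can be made concrete with a minimal simulation, which is not in the original text: if each drop is equally likely to hit either stone, the next hit is a coin flip, yet the long-run share of Left-hits settles near one half. The function name and the 50/50 assumption are illustrative choices, not Polya's.

```python
import random

def left_share(n_drops, seed=0):
    """Simulate n_drops raindrops, each equally likely to hit the
    Left or the Right stone, and return the fraction hitting Left."""
    rng = random.Random(seed)
    hits = [rng.choice(["Left", "Right"]) for _ in range(n_drops)]
    return hits.count("Left") / n_drops

# Individual drops are unpredictable, but the proportion is not:
# a short run is noisy, a long run hugs 1/2.
print(left_share(10))
print(left_share(100_000))
```

The contrast between the two printed values is exactly Polya's contrast: no rule predicts the tenth drop, but the hundred-thousandth proportion is predictable to within a fraction of a percent.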
The fact that we can often make reliable predictions about some aggregate, yet fail to draw from this any obvious conclusions about the individuals, has profound implications not only for the foundations of probability, but also for its practical applications. In medicine, for instance, this is quite the norm. In the absence of further information, what does the fact that a certain side effect of, say, statins is known to affect 1 in 100 patients say about your chances of suffering from it? Problems like this raise the more general question: to what extent can forecasts about some aggregate reliably inform us about its individuals? This question, and its philosophical underpinnings, are tackled by P. Dawid (2016: On Individual Risk, Synthese, First Online, Open Access).
Here’s a motivating example from the paper:
Aharoni et al. (2013) tested a group of released adult offenders on a go/no-go task using fMRI, and examined the relation between task-related activity in the anterior cingulate cortex (ACC) and subsequent rearrest (over 4 years), allowing for a variety of other risk factors. They found a significant relationship: whereas subjects with a high ACC activity had an estimated 31% chance of rearrest, subjects with low ACC activity had a 52% chance [and conclude] “These results suggest a potential neurocognitive biomarker for persistent antisocial behavior”. A newly released offender has low ACC activity: how should we judge his chance of rearrest?
To set the stage, statistician Phil Dawid compares a number of alternative interpretations of probability, which are distinguished according to whether they see probability as an attribute of individuals or of groups. The resulting categories are called individualist and groupist respectively. This contrast is clearly related to the traditional “subjective” vs “objective” divide, though it is not equivalent to it. Whilst “individualist” theories promptly suggest a subjective (personal) interpretation, the term “groupist” suggests an intersubjective rather than fully objective view of probability.
The central part of the paper illustrates Dawid’s ideas about how groupist and individualist theories relate, and does so by considering “Group to individual” (G2i) and “individual to Group” (i2G) bridges. An example of the former is provided by a suitably reinterpreted version of de Finetti’s representation theorem, which entitles the individualist not to worry about taking on “expert valuations” arising from groupist frequencies, so long as they arise from exchangeable observations. A rather detailed explanation of this and its implications is provided by the author.

If the (G2i) direction can be seen as a reinterpretation of known results, the (i2G) direction is where the author’s contributions lie. Suppose that individual risks are known (or taken as primitive): how do they lead to “group” frequencies? The answer is inspired by the property known as calibration, which boils down to the idea that individual estimates of risk should be in line with the way uncertainty resolves. The precise details of which kind of calibration best suits the purpose are discussed by the author in the final part of the paper. The desideratum is a calibration principle capable of providing sufficiently refined forecasts. To achieve this it is suggested, naturally enough, that all available information be used for calibration purposes. This eventually leads to a rather remarkable level of intersubjectivity, which can be seen to bridge the “individual to Group” gap. I refer interested readers to Dawid’s paper for more details.
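The basic idea behind calibration can be sketched numerically; the following is a simplified illustration, not Dawid’s formal definition, and the bin boundaries and function names are my own. Forecasts are calibrated (in this crude sense) if, among the cases assigned a risk near p, the event in fact occurs with frequency close to p.

```python
import random

def calibration_table(forecasts, outcomes, bins=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Group (forecast, outcome) pairs by forecast bin and compare, in each
    bin, the average forecast with the observed frequency of the event."""
    table = {}
    for lo, hi in zip(bins, bins[1:]):
        group = [(f, y) for f, y in zip(forecasts, outcomes) if lo <= f < hi]
        if group:
            mean_forecast = sum(f for f, _ in group) / len(group)
            observed_freq = sum(y for _, y in group) / len(group)
            table[(lo, hi)] = (mean_forecast, observed_freq)
    return table

# Illustrative data: individual risks drawn at random, outcomes generated
# from those very risks, so the forecasts are calibrated by construction.
rng = random.Random(1)
risks = [rng.random() for _ in range(50_000)]
events = [1 if rng.random() < p else 0 for p in risks]
for bin_, (mean_f, freq) in sorted(calibration_table(risks, events).items()):
    print(bin_, round(mean_f, 3), round(freq, 3))
```

In each bin the observed frequency tracks the average stated risk, which is the sense in which individual risk assessments, aggregated, recover group frequencies.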