Expert Hubris/Overconfidence...and Wishful Thinking
So, let me first grant that Nate Silver at least tried to warn folk that an outcome pattern like the one we are seeing might be a real possibility. But even his models did not come close to what has happened in lots of states. As he candidly admits, the polls were “terrible.” Still, within his peer group he was relatively cautious (and got flak for that); a lot of very smart people were telling me that Princeton's Wang was assuring an HRC WIN.+
Last night, after my graduate seminar, a few of my students, who had lived through the Brexit fiasco and who were rather skeptical of the polls, asked me why I thought there was still a real possibility that Trump could win (recall). I said that I had noticed that [i] there seemed to be surprisingly few state polls in lots of important places. (Again, in fairness, I was alerted to this issue sometime during the last six weeks by a comment by Silver.) I wondered whether [ii] this apparent paucity wasn't driven by the success of past polling aggregators, which had reduced the impact and newsworthiness of state polls (which are expensive). I also said that [iii] I was surprised that so much late campaign attention was being lavished on Pennsylvania, New Hampshire, and even Michigan. This suggested to me that the campaign teams were, perhaps, less confident than some of the public forecasters. I also said to my students [while insisting that I was not an expert on political demographics] that [iv] the polls might be missing a shift of elderly (white) Democrats toward Trump. (Meanwhile -- and I did not say this -- everybody was focusing on the Latino/Hispanic surge, some even suggesting that it had already decided the election before yesterday. The surge turns out to have been real, but its political significance was being hyped by (a) groups that will lobby on behalf of Latino/Hispanic voters and (b) folk who wish to believe that the US is entering a relatively liberal, post-racial age as the public becomes more educated and whites stop being an outright majority.)
Most of the forecasters ultimately rely on Bayesian models. Bayesian models do best in data-rich environments. (For when there is a lot of data, the impact of the priors on the outcome diminishes.) Even then they should not be treated as holy, but rather as one instrument among many. It is possible that all that went wrong with the aggregation models is something like [i], that is, not enough high-quality data. The problem is that the modelers did a very poor job of warning people that we were in an environment in which we should be cautious about their models.
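[The "prior washout" point can be made concrete with a toy sketch of my own (not from any forecaster's actual model): in a conjugate Beta-Binomial setup, two sharply opposed priors about a candidate's support share yield nearly identical posteriors once the data are plentiful, and wildly different ones when polls are scarce.]

```python
# Toy illustration of prior washout in a Beta-Binomial model.
# The priors and sample sizes below are invented for illustration.
from fractions import Fraction

def posterior_mean(alpha, beta, successes, failures):
    # Beta(alpha, beta) prior plus Binomial data gives a Beta posterior;
    # its mean is (alpha + successes) / (alpha + beta + n).
    return Fraction(alpha + successes, alpha + beta + successes + failures)

optimist = (30, 10)   # prior mean 0.75: confident the candidate wins
pessimist = (10, 30)  # prior mean 0.25: confident she loses

for n in (10, 100, 10_000):   # data-poor through data-rich
    s = n // 2                # suppose the data actually show a 50/50 split
    m1 = posterior_mean(*optimist, s, n - s)
    m2 = posterior_mean(*pessimist, s, n - s)
    print(n, float(m1), float(m2), float(abs(m1 - m2)))
# With n=10 the posteriors still disagree by 0.4; with n=10,000 the gap
# is about 0.002 -- the data have washed the priors out.
```

In a data-poor environment (few state polls, point [i] above), the opposite happens: the priors dominate, which is one way the aggregators' confidence can outrun their evidence.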
As a non-trivial aside, when Silver warned that a Trump win was at least a real possibility (at various points last week it was about a 1/3 chance), others criticized him for not being sufficiently data-driven. They could do so because he keeps bits of his model secret. (They accused him of punditry and/or of being interested in driving traffic to his website. So implicitly people recognize that bias may also be introduced by modeler incentives.) His relative lack of transparency made his relative caution less persuasive to other experts (and the public).
But even in data-rich environments we should not place all our bets (yes, I am Dutch!) on Bayesian models. That's because they give you no good signal that you may be in an environment in which the underlying distributions don't match the models, which are calibrated and developed on past data. The problem I am talking about here is NOT primarily a problem of fat tails, but rather of not knowing which possible world you are in. (I was pretty prescient about this two days ago.)
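[A minimal sketch of my own construction, not the author's: feed an i.i.d. Bernoulli model data from a world whose underlying rate shifts partway through. The conjugate posterior narrows serenely around a rate that held in neither regime, and nothing inside the model flags that its assumptions no longer fit the world.]

```python
# Invented data: the "world" changes mid-stream, but the model assumes
# one fixed rate throughout and so cannot register the change.
import random

random.seed(0)
data = [1 if random.random() < 0.8 else 0 for _ in range(500)]   # old regime
data += [1 if random.random() < 0.2 else 0 for _ in range(500)]  # new regime

s, f = sum(data), len(data) - sum(data)
alpha, beta = 1 + s, 1 + f   # Beta(1,1) prior, standard conjugate update
mean = alpha / (alpha + beta)
var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))

print(f"posterior mean ~ {mean:.3f}, posterior sd ~ {var ** 0.5:.4f}")
# The posterior ends up sharp (sd under 0.02) around roughly 0.5 -- a rate
# that was true in neither half of the data. More data would only make the
# model more confident, not more aware that it is in the wrong world.
```

Detecting the shift requires stepping outside the model (change-point checks, held-out validation, plain suspicion); the posterior itself only answers the question it was asked.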
Bayesian models are data-driven. Now that computing power is relatively cheap and we seemingly live in a data-abundant world (but, again, recall [i]-[ii]), Bayesian models get deployed with ever greater frequency in lots of places. In lots of areas of science they get deployed when theory-driven models perform poorly, either because the world is so complex that even very good theories have trouble tracking it, or because we really have no idea what a plausible theory might look like. In policy environments they get deployed because they are taken to give normative guidance for action. Few other theoretical devices are capable of delivering predictive out-performance and normative significance while maintaining clarity and transparency about one's modeling assumptions.++
In late July, I worried in a Facebook exchange (of July 20) with Branden Fitelson (a leading formal philosopher and historian of formal philosophy) that the assumptions underwriting the aggregation models might be dated and that we might be in an environment in which they were not working so well. At the time, Fitelson shared and amplified my concern. He wrote, "there is a deeper kind of unprojectability happening here, which i find really frightening."* The Michigan polling miss during the primary and Brexit had spooked us. Yes, the polls had picked up a late shift toward Brexit, but the prediction markets and financial markets (as well as lots of pundits, including myself!) had failed to acknowledge its significance. Too many people we knew assumed that the future would be like the past. But they ignored that Trump was actively promoting a rejection of the ordinary political rules of the last two generations (recall).
In my thinking I have treated the events of the financial crisis as a decisive refutation of the straightforward applicability to policy environments of the ordinary tools of formal epistemology (including Bayes), as well as of the highly specific practice (specific to mathematical approaches to wide areas of finance) of treating genuine uncertainty as a species of randomness (which has led to the adoption of various statistical devices that are proxies for true randomness). I have long been puzzled that my concerns over true uncertainty have not been more widely shared by epistemologists** and that, instead, we have seen a doubling down on the development of ever more sophisticated versions of these tools in epistemology and in finance (and elsewhere).
As regular readers know, I am no friend of Bayesianism. It's not just because I am bad at math. I think exclusive reliance on Bayesian models breeds expert overconfidence. Some of that was on display during the last few weeks among my philosophical and political science friends who embraced the aggregator models without much critical distance. If I am right about the significance of [ii], then past success generated some of the conditions of present failure. Soon enough, I suspect, we will be distracted from reflecting more fully on this by more pressing political developments.
+There may be some stereotyping based on pedigree and ethnicity involved in that phenomenon.
++When background theory is good, the risks of relying on Bayesian models are greatly reduced.
*Through bad luck I don't have access to my side of the exchange (or the exchange itself), because it was removed (along with lots of others) by the person who had initiated it, for reasons having nothing to do with the exchange; but because I was interested in developing the issues into a possible paper, I kept Fitelson's side of the discussion.
**There has been lots of interest in responding to L.A. Paul's work on transformative experience, but most of the very interesting responses don't share my framing of it as an instance of Knightian uncertainty (although some have treated it as on a par with Allais/Ellsberg-type challenges, which do trace back to concerns with Knightian uncertainty).
Laurie and I are working on integrating her theory of transformative experience with the theory of bounded awareness, which encompasses and goes beyond Knightian uncertainty, at least as it is generally understood. The election result is certainly something we should think about in this context.
Posted by: John Quiggin | 11/11/2016 at 11:29 AM
I would be very interested to see how your joint work develops, John (and L.A.). My sense is that Paul's initial responses to discussion of her work seemed to reject ideas one may also find in the theory of bounded awareness; so perhaps her views have evolved, or I misunderstood her original responses.
Posted by: Eric Schliesser | 11/11/2016 at 11:44 AM