Data science predicts election winner!

Statistical predictions are fragile flowers. They can inspire confidence, but often only under specific, ephemeral circumstances.

What’s good about data science is that it gives us objective metrics for assessing the level of confidence we should place in any given prediction.

From a statistical analysis perspective, predictions inspire confidence if they can survive every data-driven effort to invalidate them. If all models confirm a specific forecast -- even when they differ widely in data sources, sampling techniques, and feature-engineering approaches -- that forecast has probably hit the predictive bull’s-eye dead-on. But if the latest models start to call the prediction into doubt, confidence in the forecast may wilt quickly and irretrievably.

Yet the main reasons why confidence in a prediction wanes aren't always statistical in nature. Confidence is a psychological and even sociological phenomenon that shapes how predictive modeling results are interpreted by stakeholders. Merely trying to poke holes in a prediction can undermine confidence in its accuracy.

When the stakes of the outcome in question are huge, such as in the U.S. presidential election, a serious case of predictive jitters can ensue. “Doonesbury” cartoonist Garry Trudeau said it best when he recently told the Washington Post: "With all due respect to Nate Silver, I’d rather not be checking FiveThirtyEight every half hour for the rest of my life."

We all feel the predictive jitters, and my personal response has been shockingly similar to Trudeau’s. For the past month or so, I’ve been a frequent visitor to the New York Times’ meta-predictive interactive poll-aggregation site Who Will Be President?

The newspaper’s online predictive resource isn’t designed to help voters weigh the merits of either major-party candidate. Instead, it’s a civic resource whose primary function is to instill confidence that we pretty much know who’ll win in November (regardless of whether we have any confidence in how they would behave once in office).

Many people might find it a bit on the overkill side, preferring instead to absorb the confidence provided by the “gut feel” of trusted political pundits. Nevertheless, data science wonks like me (obviously, a minority segment of the population) might find it an exceptionally useful resource for allaying the predictive jitters no one can avoid over the next two months. In that regard, the site has several useful features that -- if you take the time to explore them -- deliver robust predictions of voter behavior on Nov. 8.

As a data scientific approach, this site’s predictive aggregation model is analogous to ensemble learning, which essentially brokers convergence among the predictive results of multiple independent data-driven analytic algorithms. Elsewhere, I’ve referred to ensemble learning as “crowdsourcing for machines.” You could think of the New York Times resource as a sort of “crowdsourcing” in which the “crowd” consists of independent providers of polling data and predictive analyses.
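To make that analogy concrete, here’s a minimal sketch of what ensemble-style poll aggregation looks like in code. The forecast figures and reliability weights below are entirely hypothetical, and this is a generic weighted average -- an illustration of the general technique, not the Times’ actual aggregation model:

```python
# Minimal sketch of ensemble-style poll aggregation (illustrative only).
# Each "member" is an independent forecaster's win probability for one
# candidate; weights might reflect each source's historical accuracy.
# All numbers below are hypothetical, not real polling data.

forecasts = {
    "model_a": 0.72,  # e.g., a poll-aggregation model
    "model_b": 0.68,  # e.g., an economic-fundamentals model
    "model_c": 0.81,  # e.g., a prediction-market-derived model
}

# Hypothetical reliability weights for each source.
weights = {"model_a": 0.5, "model_b": 0.3, "model_c": 0.2}

def aggregate(forecasts, weights):
    """Weighted average of member forecasts -- 'crowdsourcing for machines'."""
    total_weight = sum(weights[name] for name in forecasts)
    return sum(forecasts[name] * weights[name] for name in forecasts) / total_weight

consensus = aggregate(forecasts, weights)
print(f"Consensus win probability: {consensus:.2f}")
# A wide spread among the members would signal low confidence in the consensus.
```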

Should we have confidence that the Times’ particular “crowd” of electoral predictive intelligence, when sourced into this or any other metamodel, has any particular “wisdom” on who’ll win in November? It’s best not to jump to conclusions. There are as many ways of metamodeling these particular predictive inputs, and as many possible outcomes, as there are stars in the sky.

That last point occurred to me as I was reviewing Preetam VV’s recent Medium article that models the emergent predictive behavior of a so-called committee of intelligent machines as something resembling a swarm of angry wasps. It’s a fairly technical article, but it’s informed by a fascinating multilevel framework for modeling the dynamic collaborative behavior of algorithms of different types. The objective of this modeling approach is to distill a sort of crowd wisdom from the collaborative dynamic of distinct algorithms (such as convolutional neural networks, self-organizing maps, and graph-based models) trained on a common data set and targeting a common predictive fitness function.
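For a rough, hands-on feel of the committee idea, here is a minimal sketch, assuming scikit-learn and a synthetic data set, with simpler model families (logistic regression, a random forest, a small neural network) standing in for the ones the article discusses. It shows a plain majority-vote committee trained on common data toward a common target -- not Preetam VV’s swarm-dynamics framework:

```python
# Sketch of a "committee of machines": distinct algorithm types trained on a
# common data set, combined by majority vote. Illustrative only -- this is
# plain hard voting, not the swarm-dynamics framework described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a common training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Committee members: different model families, same data, same target.
committee = VotingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
    ],
    voting="hard",  # each member gets one vote; the majority wins
)
committee.fit(X_train, y_train)
print(f"Committee accuracy: {committee.score(X_test, y_test):.3f}")
```

Even in this toy form, the design choice is visible: the committee’s output depends as much on how the members’ votes are brokered as on any individual member’s skill, which is exactly why different metamodeling choices can yield different “winners.”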

All of this is quite fascinating, but it’s not clear that this predictive metamodeling approach -- one part ecological, one part sociological in its underlying analogy -- actually mirrors the way real crowds, such as voters, converge on a collective behavior. It feels as arbitrary a framework for munging predictive assets as the one the New York Times uses in its electoral prediction site.

These and equivalent data scientific initiatives inspire the confidence that comes from knowing that our society has many smart people who can construct predictive assets of mind-boggling complexity. But that’s not the same as saying we should, by default, place full confidence in the predictions, electoral or otherwise, that their algorithmic contraptions generate.
