I don’t really follow 538 for my mental health, but I just saw that Nate Silver wrote some pieces as Nathan Redd and Nathan Bleu to provide two different perspectives on how things might go. I think that’s a great way to help people understand the probabilistic nature of modeling.

Consumers of popular media still struggle with 538, or with the NYTimes and its needles on election night. It’s not that this form of journalism isn’t valuable or doesn’t capture important information. It’s that thinking in models and probabilities is still deeply unnatural for most folks.

In the practice of data visualization, there have been all manner of attempts to visualize uncertainty. We add error bars. We add shaded regions. We play with jittering, simulations, alpha channels, and drawing curves. Folks have built simulators to demonstrate drawing from distributions or the structure of joint probabilities and such.
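The simulator idea is simple to sketch. A minimal Python example, with entirely hypothetical numbers rather than any real poll, shows how the same set of simulated draws yields a point estimate, an interval, and a win probability all at once, and how quoting only the first hides the other two:

```python
import numpy as np

# Hypothetical setup: a "true" candidate margin of +2 points,
# observed through polling noise with a ~3-point standard deviation.
rng = np.random.default_rng(538)
simulated_margins = rng.normal(loc=2.0, scale=3.0, size=10_000)

# The point estimate is one number; the interval and the win
# probability come from the very same simulated draws.
point_estimate = simulated_margins.mean()
low, high = np.percentile(simulated_margins, [5, 95])
p_win = (simulated_margins > 0).mean()

print(f"point estimate:  {point_estimate:+.1f}")
print(f"90% interval:    {low:+.1f} to {high:+.1f}")
print(f"win probability: {p_win:.0%}")
```

Under these assumed numbers the interval spans both a loss and a comfortable win, which is exactly the nuance a lone point estimate throws away.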

But as in the world of policy research, even the most sophisticated folks are drawn strongly to point estimates.

Unfortunately, the idea of “getting it right” based on the point estimate remains a strong measure of success in the eyes of many. And deviations are not met with academic curiosity, examination of the data, or consideration of our own failing heuristics. People are angry because they rely on their perceptions, even when those perceptions are filled with unearned certainty built on the back of common failed heuristics.

When the point estimates have disappointed, too many folks have run back to journalism and commentary built entirely on vibes, or have declared polling and data journalism a failure. The post-mortem analysis of election modeling, which has been fascinating, thorough, and revealing over the last few years, was recast (unnecessarily) as an apology tour, and still we cling to the certainty of point estimates.

So Redd versus Bleu is a great idea. All the visualizations, numbers, and technical explanations in the world have not given the average media consumer strong enough skills to interpret data with uncertainty. Instead of going back to the vibes op-ed to understand what’s going on, experts with the right skills for statistical inference can model how to understand the data from differing perspectives. Sure, it might trigger feelings of Lies, Damned Lies, and Statistics, but uncertainty is the reality we live in and the muck we have to understand.

I don’t always love Nate Silver, and I don’t always love 538, and I don’t always love how data journalism has shaped coverage of elections. But I think modeling statistical inference and interpretation of uncertain data from multiple perspectives in narrative form is an important tool. I hope to see this expand into more policy discussion space.

I think we’ve taken visualization and simulation as far as they can go in helping people understand data. The next frontier is making the narrative steel-man argument, from data, for different possible interpretations.