Nate Silver, whose claim to fame is accurately forecasting the 2012 Presidential general election outcome (50/50 states) and the 2008 outcome (49/50 states), had already made two major errors in the last year – the 2015 U.K. election, and the durability of Trump (who has now been in first place in national polls for over 6 months). You can now add a third – the 538 forecast of the Iowa caucuses, where they predicted Trump would win.

Sorry, but Silver only said that, according to his models, Trump had around a 46–48% chance of winning. He never said that Trump would win the caucuses.

@DB,

That’s not how these things work, my dear man!

I’m sorry, but if you actually know a bit about statistics and read their explanation of their model (http://fivethirtyeight.com/features/how-we-are-forecasting-the-2016-presidential-primary-election/), you would understand that they aren’t actually predicting who will win each primary or caucus, but are merely giving a probability, based on the available evidence, of a certain candidate winning the primary or caucus. Therefore, Nate Silver did not incorrectly predict that Trump would win the Iowa caucus.

@Dominic,

I know exactly what they’re doing, and no one would care about their probabilities unless they were taken as predictions.

Ok. Just wanted to clarify that the failure of their model was probably due to polls not noticing Rubio’s surge and failing to take into account Cruz’s ground game.

@DB, My guess is you’re right – an unprecedented ground game by Cruz made it tricky.

I agree about the probabilities. Misinterpreting probabilities as a prediction doesn’t justify complaints against Nate Silver.

Either you are right or you are not, simple as that my friends.

I guess the best way to settle whether he predicted Iowa or not is to see how he made his predictions in 2008. I don’t remember exactly how he did it, but if back then he stated, “yes, so-and-so is going to win,” while this time he just gives a probability range, then I’d say that in this election he chose not to predict but rather to show the possibilities. There’s a clear difference between saying, “I think I’m going to get a hamburger for dinner” and saying, “there’s a 40–48% chance I’ll get a hamburger and a 38–45% chance I’ll get chicken McNuggets”. If you look, his probability ranges for Trump, Cruz, and Rubio allowed for any of the three to win. In my mind that isn’t a prediction. In fact, the outcome was within his range of probability, but I wouldn’t give him credit for predicting the result either. He seems not to have picked anyone.

@Jonathan,

Carson was at 1%. So if Carson had won, that would be ‘compatible’ with his probabilities. If that’s the game Silver et al. are playing, then they’re just blowing smoke.

I read an article by him about 3 weeks ago that suggested that Cruz would win.

I don’t think he’s blowing smoke if he’s not attempting to make a prediction. He’s simply portraying a range, as I see it. I don’t even think it’s trying to impress, but just taking into account polls, endorsements, etc.

I’m actually disappointed he didn’t make an outright prediction. Looking back: http://270soft.com/2016/02/02/silver-makes-another-wrong-call/#comment-167293

Silver seems to state something close to a prediction in 2008. I can’t find his primary predictions, but he did predict the general election. For 2016, he seems to be avoiding an overt prediction in the primaries, possibly to avoid missing one. Who knows. But I wouldn’t call his range of victory probabilities a prediction, because the range could well have been accurate and Cruz simply surpassed it. The range was logical and a matter of facts, not a gut prediction.

Nate Silver’s model gave Trump a 46% chance of winning. So, a 54% chance of someone else winning. As he has stated many times, even someone with a 1% chance of winning COULD win. The model simply illustrates how unlikely that outcome is.

@Jeff,

If that’s how things work, then it gave a 61% chance of Cruz *not* winning!

If Silver is saying his model is compatible with basically any outcome, then it’s not making any predictions. If it’s not making any predictions, then he’s just blowing smoke (note it’s titled ‘forecast’ – forecast of what?).

Right, but it gave Cruz a 39% chance of winning. If I wake up each morning knowing that I have a .000001% chance of being struck by lightning, I feel pretty confident that I won’t be struck by lightning. Unfortunately, the chance is not 0%. So, some people are in that tiny portion of the population who end up getting a jolt on any given day. That doesn’t mean the prediction or statistics were “wrong”.
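The lightning arithmetic checks out, by the way. A quick sketch (the population figure is a rough assumption, and the ".000001%" is taken at face value from the comment above):

```python
p = 0.000001 / 100        # ".000001%" daily chance, read literally as a probability
n = 300_000_000           # assumed population, roughly the U.S.

expected = n * p                      # expected number of strikes per day
p_at_least_one = 1 - (1 - p) ** n     # chance that someone, somewhere, gets a jolt

print(f"expected strikes per day: {expected:.1f}")       # about 3
print(f"chance of at least one:   {p_at_least_one:.2f}") # about 0.95
```

So a per-person probability small enough to ignore still all but guarantees a few strikes across a large population, which is exactly why individual rare events don't make the underlying statistics "wrong".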

@Jeff,

It sounds like on your reading, what Silver was forecasting was that either Trump, Cruz, Rubio, or Carson would have won. If *that’s* the game they’re playing, then who cares? Anyone can attach %s to a forecast and give a sufficiently vague range of candidates to be ‘correct’ – the question is, how do you test his model? If you can’t test it, again, they’re just blowing smoke.

http://fivethirtyeight.com/features/how-we-are-forecasting-the-2016-presidential-primary-election/

I think this article lays it out pretty well (“How we’re forecasting the primaries – and why we might be totally wrong”). Yes, in reality, anyone could have won. If your only question is “Who will win?”, and you can bear no uncertainty, then you shouldn’t care about this model or any poll… because there are no sure things.

Silver and 538 did not claim that they “knew” who would win based on their models. As the field narrows, and especially in the general election, polls offer a much more reliable basis for predicting outcomes. Even then, polls can be faulty and external factors can throw even more uncertainty into the mix. These models are more meant to give a feel for how the polling is trending, and the results show how well each campaign executed based on those factors.

@Jeff,

See discussion with Dominic at the top of this thread, links to same article.

If they’re giving %s for who will win, what other question could there be than “Who won?”

What I am arguing is that Silver et al. are in a dilemma. Either they made an incorrect forecast, or their Iowa ‘forecast’ had no predictive capability. I *think* you’re taking the latter horn of the dilemma. I’ll stick with the former.

You’re welcome to the last word on this if you’d like.

They’re only in a dilemma if they say that they’re able to definitively predict a winner based upon polls. They do not.