Saturday, February 29, 2020

Did Nate Silver and FiveThirtyEight Lower 2016’s Voter Turnout? A New Paper Says Yes.


Former presidential candidate Hillary Rodham Clinton, seen on Tuesday. Clinton suggested in 2017 that forecasting had something to do with her loss to Donald Trump.

David Gannon/Getty Images

The months after the 2016 presidential election were a brutal time for election forecasters. Many Democratic voters felt misled, even betrayed, by the comforting voices in the media that had reassured them earlier that fall that a philandering and crass television star had essentially no shot at beating an experienced politician. In the aftermath of Election Day, there were reckonings. Many began to question the value of such horse-race coverage to begin with. The New York Times, Nate Silver's FiveThirtyEight, and the Princeton Election Consortium had all put Clinton's chances somewhere between 70 and 99 percent, damaging public trust in several institutions.

But it's possible that these predictions did something else harmful. Three researchers dove into the numbers to see whether such forecasting had ultimately inflated potential Democratic voters' confidence to the point that it depressed voter turnout.

The research led to some fascinating and very clear results. The experiments concluded that these forecasts had infused the entire public discussion around the 2016 election, making them difficult to miss, and that people who hear about a candidate's high probability of winning do indeed stay home. And so, the authors conclude, because Hillary Clinton was the one polling well ahead, and because largely liberal audiences consumed the articles these forecasters produced, the audience the forecasts reached most widely consisted of exactly the people most likely to be deterred by the numbers in the first place.

“We cannot say with certainty,” Sean Westwood, one of the authors, said in an email. “But given how close the election was in some states, it is entirely possible that forecasts could have flipped the election in favor of Trump.”

The alleged effect comes down to how FiveThirtyEight and a small number of other sites present their analysis, and to how the public understands a probability of winning versus the more conventional presentation of each candidate's expected vote share with a margin of error. The former is easier to grasp on its face, but that's the problem. A 90 percent chance of winning makes it seem as though a candidate has an overwhelming lead and should finish with a far larger share of the vote. But a high probability of winning is not the same as a high probability of a significant victory; it doesn't mean the election will be a blowout when the votes are counted.
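To see why the two framings feel so different, consider a back-of-the-envelope sketch; the numbers below are invented for illustration and are not drawn from any actual 2016 forecast. A modest expected margin, once combined with a small amount of uncertainty, already implies a win probability near 90 percent:

    # Hypothetical numbers, for illustration only: a forecast expecting a
    # 52 percent vote share with a 1.5-point standard error already implies
    # roughly a 90 percent chance of winning, even though the expected
    # finish is a narrow 52-48 race.
    from statistics import NormalDist

    expected_share = 0.52  # forecasted vote share (invented)
    std_error = 0.015      # uncertainty around that share (invented)

    win_probability = 1 - NormalDist(expected_share, std_error).cdf(0.50)
    print(f"Expected finish: {expected_share:.0%} to {1 - expected_share:.0%}")
    print(f"Implied win probability: {win_probability:.0%}")  # about 91%

In other words, "91 percent likely to win" and "ahead by about four points" can describe exactly the same forecast; the first just sounds far more lopsided.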


“Humans just cannot process probabilities accurately,” Westwood said. “Even though they make for dramatic headlines, the research shows it is nearly impossible to convey probabilities in a way that does not generate confusion.”

Hillary Clinton herself had suggested that forecasting had something to do with her loss. “I don’t know how we’ll ever calculate how many people thought that it was in the bag, because the percentages kept being thrown at people—‘Oh, she has an 88 percent chance to win,’ ” she said in a 2017 interview with New York magazine.

Clinton was right that it's very difficult to know how much overconfidence played a role, but the authors set out to test two separate questions: First, did the forecasters' work actually make a significant number of people overconfident in a Clinton victory? And second, did that overconfidence make a real-world difference in the election's result?

Previous studies have found that people are far less likely to vote when a race is not close. One explanation is that campaigns simply stop working as hard to get out the vote, but others hold that election predictions themselves cause voters to stay home. According to Solomon Messing, another of the study's authors, when a network calls a race early on Election Day, voter turnout on the West Coast drops.

Messing and his colleagues ran several experiments comparing the behavior of voters who received vote-share predictions with the behavior of those who received probability forecasts. In one experiment, they put real money on the line ("because in a survey everyone says they'll vote," Messing said) and told participants they would win money if their candidate won. But crucially, participants also had to pay some of that money in order to vote at all. Those who were given a sample prediction from FiveThirtyEight tended to hold off on "voting" if their candidate had a high probability of winning. Those who received vote-share numbers tended to vote no matter what.
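To make the incentive at work concrete, here is a minimal toy model of that setup; the prize, the cost of casting a vote, and the "perceived boost" a single vote adds are all invented figures, not numbers from the study. It simply shows why, once a forecast already reads as a near-certain win, paying to vote stops looking worthwhile:

    # Hypothetical sketch of the incentive structure described above; all
    # dollar amounts and the perceived effect of one extra vote are invented.
    PRIZE = 1.00      # paid out if your candidate wins (invented)
    VOTE_COST = 0.25  # what you give up in order to "vote" (invented)

    def expected_gain_from_voting(win_prob: float, perceived_boost: float) -> float:
        """Expected payoff of voting minus the expected payoff of abstaining."""
        return (min(win_prob + perceived_boost, 1.0) - win_prob) * PRIZE - VOTE_COST

    def decides_to_vote(win_prob: float, perceived_boost: float = 0.30) -> bool:
        # When the forecast already shows a near-certain win, one extra vote
        # cannot raise the probability much, so the cost no longer seems worth it.
        return expected_gain_from_voting(win_prob, perceived_boost) > 0

    for p in (0.55, 0.70, 0.90):
        print(f"forecasted win probability {p:.0%}: votes = {decides_to_vote(p)}")

Under these made-up payoffs, the model participant still votes when the forecast reads 55 or 70 percent but abstains once it hits 90 percent, which is the pattern the experiment reported.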

Messing cautioned that this was a laboratory experiment and wasn’t meant to account for all factors, but he said the results were still compelling. “If you extrapolate this experiment to the real world,” Messing said, “you might expect to see a 6 percent decrease in voting, based on where the New York Times had Clinton.”

And, he said, because voters identify so emotionally with their party these days, they might be even more likely to have an inflated sense of their candidate's chances of success, meaning the experiment's estimate could even understate the impact of probability forecasts.

They also found that FiveThirtyEight in particular really did appear to hold strong sway over people's behavior. During the 2018 midterm elections, a brief error in FiveThirtyEight's algorithm caused a spike in its forecast, suggesting the Republicans had a good chance to win the House. When that happened, the U.S. bond market also spiked. "We're not talking about a bunch of political wonks," Messing said.

To be clear, the authors do not intend their analysis as an attack on Silver, who was arguably the most cautious forecaster during the 2016 election. (FiveThirtyEight did also present vote-share data; it was just tucked behind a separate tab on the site.) At the time, Silver made it clear that his work was complex and that it did not by any means say that Clinton was a shoo-in for the presidency. But Silver's work was amplified by other news outlets, and during that amplification some of that nuance was lost.

“Forecasters such as FiveThirtyEight offer a great deal of context on their predictions, but the problem is that this is lost when the prediction is covered by another media outlet,” Westwood said. “It just becomes ‘Nate Silver says Hillary Clinton will win by 80 percent.’ ”

The worst perpetrator of this, according to the paper, was MSNBC, which referenced FiveThirtyEight and other forecasts 16 times a day during one part of the campaign. “It’s broadcast across society, and cable news is the way that happens,” Messing said. “I think there’s a number of folks who may not be couching these results with the appropriate caveats when they’re talking about them. Because, you know, broadcast journalism in particular is not made for carefully caveating things.”

Even so, Messing said, those best positioned to understand how the number-crunching works can still get confused when "the whole ecosystem" becomes inundated with these forecasts. "I have a master's in statistics," he said. "And based on the probabilistic forecasts out there in 2016—and I'm not talking about FiveThirtyEight, but all of them—I was sure that Clinton was going to win."



from Slate Magazine https://ift.tt/32DNpIP
via IFTTT
