Thursday, June 25, 2020

What’s a Social Media Gatekeeper to Do?

This illustration photo shows a woman in Los Angeles looking at the official Twitter account of President Donald Trump on June 23, 2020.

Getty Images

Even experts can be a little overwhelmed by the complicated questions surrounding online platforms and speech. In recent weeks, we’ve seen a multitude of examples of the challenges platforms face in balancing free speech with upholding their standards: a Trump campaign ad removed from Facebook for using Nazi symbolism, Trump tussling with Twitter over labels on tweets making false claims or encouraging violence, a judge ruling that Rep. Devin Nunes cannot sue Twitter over a parody account, and much more. “There is so much in the news right now. It’s hard to know exactly where to begin,” Jennifer Daskal—the director of the Tech, Law, & Security Program at American University Washington College of Law—said during Tuesday’s Future Tense web event, “What’s a Gatekeeper to Do?”

The conversation, which was part of the Free Speech Project, analyzed how platforms practice gatekeeping and what steps they should be taking to actively promote human rights in the digital world.

According to David Kaye—a professor of law and director of the International Justice Clinic at the University of California, Irvine, and U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression—the fundamental problem with platforms’ current gatekeeping standards isn’t so much “about specific outcomes,” such as Facebook’s decision to remove the Trump campaign ad that included Nazi symbolism. The broader issue is the lack of guiding principles governing these kinds of decisions, and of transparency about how they are made.

Kate Klonick, an assistant professor of law at St. John’s University Law School and an affiliate fellow at the Information Society Project at Yale Law School, agreed, saying that Facebook has a long history of inconsistent decision-making, with “different types of figures getting treated differently depending on the speech that they’re saying and depending on the speech that is being recapitulated online.” And with no solid rules or transparency about which political speech passes the platform’s secret test, it’s difficult to create any sort of accountability within a platform.

Another challenge, Kaye said, is that platforms too often see only two options when a post violates their guidelines: keep it up or remove it completely. But increasingly, there are other options, as evidenced by Twitter’s recent flagging of President Trump’s tweets making false claims. Twitter decided to keep the tweets up and tag them with links to correct information about mail-in voting. Daskal thought that Twitter made the right decision, and the public might agree. According to Klonick, a study of about 6,000 Twitter users found that users did not want falsified or harmful content taken down—they wanted the platform to provide context.

In some cases, there might be good reasons for complete removal, Kaye said, like a world leader inciting violence, because a public figure has a much greater chance of amplification on a platform than an average user. The threat to free speech must be weighed against the potential harm such speech could cause. Klonick pointed out that politicians and many public figures don’t really need the help of platforms to have their voices amplified anyway—their words and actions will be on the news regardless of whether they send a tweet. In working out these rules for platforms, Klonick said, it’s important to keep in mind that “you have a right to free speech, you don’t have a right to be amplified.”

Too often, according to Kaye, these discussions overlook the fact that almost 90 percent of Facebook users are outside the U.S. and Canada, yet platforms adhere almost exclusively to the U.S. conception of free speech. “One of the things that the platforms do pretty poorly is integrating public and community perceptions of the rules and their implementation around the world,” Kaye said. Daskal noted, however, that there is a real risk of countries with more oppressive speech standards influencing the platforms. Kaye pointed to recent decisions by France’s constitutional court and the European Court of Human Rights that show “there are places around the world that do value freedom of expression.” Other countries just implement those values in different ways, he said, adding that “it’s not that it’s a better or worse way than ours” and that American companies can learn from these developments in Europe.

Proper accountability among platforms will require more participation by users, Klonick argued. But it’s hard to know what that might look like. Direct democracy is not going to work: Facebook attempted a system of direct user voting on its community guidelines beginning in 2009, Klonick said, but only around 0.3 percent of users cast a vote, and the experiment was seen as a “colossal failure.” Moving forward, she said, the answer may look more like the Facebook Oversight Board, which announced its first members earlier this year. The board will act much like a court of appeals for the platform, Klonick explained, taking on cases about how to handle controversial content on Facebook and Instagram. It will also act in an advisory capacity, giving policy recommendations to the company. But Daskal asked, “Is that going to really move the needle?” Klonick acknowledged that “the skepticism is completely warranted,” pointing to the board’s relatively small size compared with Facebook’s enormous user base. The board is set to start taking on cases in September.

While the board’s impact will not be clear anytime soon, Klonick and Kaye are looking forward to seeing how it influences the way platforms regulate speech. “This is more my hope than anything else: Over time, the board will actually push the company to change the standards that it has,” Kaye said. And perhaps that will inspire other social networks to follow suit.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.



from Slate Magazine https://ift.tt/2ZfDCHM
