The narrative of content moderation, especially over the past few months, goes something like this: Extremists and conspiracy theorists peddle misinformation and dangerous content, Twitter (or Facebook, or Reddit) cracks down on said content by removing the offending posts and accounts, onlookers largely commend the platform, and it’s on to the next group of baddies.
This week, that target became QAnon, a group of pro-Trump conspiracy theorists who push fabrications about Satanist “deep state” elites who run a child sex trafficking ring while also plotting to overthrow the current administration. On Tuesday, Twitter announced that it would take action against QAnon activity, a move that may seem to be a long-overdue step to address a movement that the FBI considers a potential domestic terrorism threat. Twitter will permanently suspend accounts tweeting about QAnon that violate the policy against having multiple accounts, that “swarm” (or harass) individuals, or that evade bans. Meanwhile, on a macro level, Twitter will prevent the amplification of QAnon theories by blocking URLs associated with QAnon, removing content related to QAnon from trends and recommendations, and limiting that content in searches. As NBC News reported, Twitter has already removed 7,000 QAnon accounts and expects the new policy to affect 150,000 more. The reason the company has decided to act now, according to the announcement, is that QAnon “has the potential to lead to offline harm.”
Yet the problem here is that Twitter’s plans—at least the ones available to the public—are rather vague, leaving the door open for confusion, inconsistent enforcement, and future content moderation debacles. “I get concerned when there’s sort of unquestioning praise for Twitter’s actions here, and it earns itself a good news cycle,” said Evelyn Douek, a doctoral student at Harvard Law School and affiliate at Harvard’s Berkman Klein Center for Internet and Society. She worries that, in the long run, the move is detrimental to the project of pushing Twitter to become “more accountable and consistent in the way that they exercise their power.”
There are two main places Twitter’s plans fall short. The first is that the platform, as a Twitter spokesperson told NBC News, has decided to classify QAnon behavior with a new, undefined designation: “coordinated harmful activity.” Twitter has yet to provide any information on what this term means, or to explain how it differs from its preexisting standards on harassment, abusive behavior, and violent groups. “We’re going to see a lot of things, I think, on Twitter that look coordinated and harmful, and we’re going to ask: Is this an example of this new designation?” said Douek. “And we don’t know—Twitter can just decide in the moment whether it is, and we can’t hold on to anything because we have absolutely no details.”
“Coordinated harmful activity” sounds awfully similar to “coordinated inauthentic behavior,” or CIB—Facebook’s term for discussing unacceptable behavior on the platform, most often applied to foreign influence operations. Facebook has never been terribly clear on what CIB means, despite attempting to define it on multiple occasions, and, as Douek recently wrote, Facebook determines what is and isn’t CIB without much explanation. While the term may seem “technical and objective,” she wrote, it obscures how Facebook draws the line between acceptable and unacceptable behavior. The same could perhaps be said about Twitter’s coordinated harmful activity—but we know even less about it, which renders the designation, for the time being, at best useless, and at worst harmful.
The second issue is that the plans contain a major loophole. As Oliver Darcy, a CNN reporter, tweeted, a Twitter spokesperson told him that “currently candidates and elected officials will not be automatically included in many of these actions broadly”—something that was missing from the early reports of the crackdown. That’s significant, especially since pro-QAnon individuals are on the rise among Republican congressional candidates. Not to mention that the president is known to retweet posts from QAnon accounts (though he hasn’t outright endorsed QAnon conspiracy theories). This exemption further entrenches Twitter’s different standards of speech for politicians and the public—something that’s been called a “dual-class system for free speech.”
Part of the problem may come down to what, exactly, Twitter hopes to accomplish with this new policy. Is it trying to stop harassment? (QAnon users often bombard individuals baselessly associated with the “deep state” with abuse: Chrissy Teigen, for example, reportedly blocked 1 million users last week after sustained harassment.) Or is Twitter trying to crack down on the actual content of the QAnon belief system? If the answer is that Twitter is trying to stop harassment, Douek said, then retweets from candidates may not be so detrimental to that goal—sure, they’d still push the theories further into the mainstream, but they wouldn’t directly encourage or take part in harassment, one hopes. But if the policy is meant to suppress the theory itself (and its harmful, real-world consequences), then exempting politicians can embolden candidates with large followings who effectively function as “super-spreaders” of QAnon beliefs.
Regardless of these potential flaws, the policy will undeniably limit the spread of QAnon on the platform. “This is an important marker that Twitter is recognizing how it is being manipulated,” Joan Donovan, the research director at Harvard’s Shorenstein Center on Media, Politics and Public Policy, told Wired. It’s notable that Twitter is targeting not only QAnon accounts but also the ways their message can be amplified. As Donovan said, Twitter is making it harder for QAnon accounts to grow and find one another on the platform. And if Twitter is, in fact, addressing swarming and targeted harassment, that’s a considerable step forward, Douek pointed out, since the platform has long turned a blind eye to that kind of behavior.
“I completely understand the rush to praise taking action along those lines,” said Douek. “I would just like to see it be paired with calls for greater transparency around what exactly is the line that Twitter’s drawing so that we can then call on them to be held to it in the future.” By accepting Twitter’s plans as they are, we’re essentially leaving room for Twitter to continue to make content decisions and policy updates in secrecy, often without repercussions. So much of the story of content moderation over the past few years has been about pushing platforms to be more transparent, upfront, and principled in the way they moderate content, Douek said. “And this is like the antithesis of that.”
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.