Although the First Amendment doesn’t apply to private companies, Twitter was once known as the “free-speech wing of the free-speech party.” Apart from clearly illegal content like child pornography, Twitter was originally loath to moderate tweets. That has changed gradually, as the platform has tightened its policies on violent extremism, abuse, and other harms.
This year, in response to misinformation related to the 2020 election and the coronavirus pandemic, the company has started to flag and remove harmful content at unprecedented rates. A turning point seemed to come Tuesday, when Twitter finally slapped warning labels on two of President Trump’s tweets, which falsely claimed that mail-in voting would lead to widespread voter fraud. That move came after several days of yet another Trump Twitterstorm. On Thursday, Trump retaliated by signing an executive order of questionable legality that could punish social media companies for regulating content.
It’s a winding—and frankly exhausting—saga, but indulgent Trump spectacles aside, one of the main tensions here is the question of what responsibilities Twitter has as a medium of discourse, and whether Twitter might be changing its core principles to adjust to the role it’s assumed in the public sphere.
In order to understand how a private company largely built on the idea of freedom of expression has found itself embroiled in a national free speech controversy, I spoke with Blaine Cook, Twitter’s former lead developer, who worked at the company from its founding in 2006 to 2008. During the course of our conversation, which has been edited and condensed for clarity, we discussed Twitter’s founding principles, the importance of moderating online communities, and Cook’s take on the company’s latest move.
When Twitter was just getting off the ground, how did you guys think about speech on the platform? Was the idea of free speech central to Twitter’s founding?
Yeah, I mean, there were different communities within the company, even though it was obviously quite small. Evan [Williams], and to a lesser extent Jack [Dorsey], came from the tech blogger world and had that sort of background. I had done a bunch of activist work and had worked on tools like TXTmob in the years before. We really looked at Twitter and the tools that we were building as new media platforms that enabled voices of people that wouldn’t have had representation up until that point. So the idea was that there was the established corporate media, and that the internet presented the opportunity to have different venues that weren’t controlled by the establishment, as it were.
Were you always so optimistic about it? I mean, could you have foreseen the ways in which an open platform might eventually be weaponized?
I think it’s complicated because in many ways we were hoping that it would be weaponized—not by Trump, but by progressive forces. And I think we do see quite a lot of that. You know, it’s interesting watching the Minneapolis protests and the conversations that are happening around that in parallel with Trump having a hissy fit. The Minneapolis conversations wouldn’t have been possible, either, without Twitter. So I think it has played out the way that we kind of expected.
What about the idea of the platform being “the free-speech wing of the free-speech party,” as Twitter executives called it in the early 2010s? Was that a part of Twitter’s identity in the mid-aughts?
There’s a lot of nuance there that’s important. I think that’s true to a point, but the framing of “the free-speech wing of the free-speech party” is something that came later—frankly, after I left, during some of the early interactions around harassment and the content moderation questions that came up in 2008. Ariel Waldman was one of the very first people to experience harassment on Twitter [in May 2008], and Twitter declined to get involved; that was after I left. And I think that’s maybe where a lot of these libertarian-leaning free-speech party ideas came from.
From my perspective, that framing isn’t far off from what we were trying to do in terms of opening up communication spaces and whatnot. But I personally have always felt like moderation, community management, and having responsibility over culture is actually really important, and I think that’s one of the reasons that I ended up leaving Twitter so early—it was just a fundamental difference about the approach to those things. I think so much emphasis was placed on scaling and going after the celebrity crew and all of that kind of stuff in the early days that they kind of lost sight of what it meant to run and have a community. In the really early days, there weren’t too many social networks around, and we definitely looked to communities like Flickr, which had a strong position on moderation and on community guidelines. For me, that was always really important.
Do you think that Twitter’s policies have changed drastically since you left over a decade ago? Especially this year—not even just with Trump’s latest feud, but also with coronavirus misinformation and the 2020 presidential campaign?
I think it’s stayed the same more than it should have. They should have been a lot more proactive, especially with the scale and the resources that they have. I would have loved to see a lot more effort placed into figuring out community questions. They’ve been doing experiments—some hopeful—but they launched the recent reply feature (where a user can limit replies) by trolling people, and that was just really disappointing. [Editor’s note: Cook is referring to these tweets from the @twitter and @twittercomms accounts.] It feels like a lot of that stuff just isn’t as fleshed out as it should be.
I think it comes down to that they basically don’t have any competition. We need different communities with different editorial and community standards. So if Trump wants to go and have a conspiracy-theory Twitter, like a separate megaland, that would be fine. The rest of us could largely ignore it, and it’d basically wither and die. But because Trump’s tweets are mixed in with all of this other important conversation, it’s hard to reconcile those things.
What was your immediate reaction to Twitter’s decision to slap those warning labels on Trump’s tweets? Did you see that as a landmark move for the company, or did you think that perhaps it’s not as important as onlookers are making it out to be?
It’s a good step, but it doesn’t go far enough. Twitter is a private company and has every right to kill his account. That doesn’t limit his free speech at all. He’s the president of the United States—he’s got whatever platform he wants. I would like to actually see quite a lot more strong action. With the Joe Scarborough conspiracy, if it were anyone other than Trump making threats and creating a dangerous situation for a private person, their account would be disabled right away. So I guess I’d like to see more of that, and more fact-checking in general, and moderation.
I really strongly believe that the culture of a community is set by the parameters of what it accepts. So if you have a community where abusive behavior is acceptable, then people will go there to abuse other people. And if you moderate and you actually have some community standards, then they won’t. I think that’s true in all parts of life, and because Twitter is such an important public space, it would be nice if we had stronger community standards that reflect the sort of society that we actually want to live in—not just some space that protects free but harmful speech.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.