Friday, July 31, 2020

Confederate Groups Are Thriving on Facebook. What Does That Mean for the Platform?

Local residents show support for a Confederate soldier statue on the grounds of the City of Virginia Beach Municipal Center in Virginia during a rally calling for the statue’s removal on Aug. 24, 2017. Alex Wong/Getty Images

Earlier this month, a meme was shared in the Facebook group Save Southern Heritage that featured the portraits of two men: the Prophet Mohammed on the left and Robert E. Lee on the right, their chins tilting toward each other. “[Mohammed] owned many slaves. Robert E. Lee was against slavery,” the caption reads. “So why are we tearing down statues instead of mosques?” That post, which received 248 likes, is still up, despite the suggestion of real-world violence (and its use of Mohammed’s image). But a comment, rambling about Arabs and Jews “running this mess” as a “little joke,” was removed within hours. Whether it was Facebook’s algorithms, or content moderators, or one of the group’s eight admins, a decision was made that one had to go while the other could stay. One slipped through the porous “free speech” filter; the other did not.

In the wake of Black Lives Matter protests, demands for Facebook to address hate speech have escalated, coinciding with a nationwide movement to remove Confederate statues and flags from cities, states, and institutions long imbued with Confederate symbolism. More than 1,100 companies and organizations have pulled ads from Facebook for at least the month of July as part of the #StopHateforProfit advertiser boycott. At the same time, Gov. Ralph Northam of Virginia has ordered the removal of the statue of Lee that famously towers over Monument Avenue in Richmond, Mississippi decided to drop the cross of the Confederate battle flag from its state flag, and NASCAR banned the flag from its races.

These movements, intertwined and mutually reinforcing, pose a particular threat to those who consider themselves present-day Confederates. From their perspective, Facebook has become more essential than ever to amplifying their message at a critical moment in history—just as Facebook has shown a new willingness to police their speech.

Facebook has recently deplatformed hundreds of groups that express overtly violent, white supremacist beliefs, such as those associated with the Boogaloo movement. But the platform has yet to settle on a consistent approach to a more difficult—and more common—question: how far to go in policing groups that the platform doesn’t consider “hate groups,” but that nonetheless often attract hateful content. This gray area contains hundreds, perhaps thousands, of neo-Confederate groups that are thriving on the platform. Individual posts containing hate speech are sometimes flagged and removed, but as a whole, these groups have so far remained relatively unscathed amid Facebook’s heightened moderation, continuing to churn out thousands of posts a day in support of the Lost Cause. By insisting they promote “heritage not hate,” they’re able to skirt the boundaries of content moderation, even as their ideology rests on a reverence for the Confederacy and the antebellum South. Their complicated position on Facebook gets to the heart of the problems inherent to content moderation itself. It is a slow, often arbitrary process, driven not by clear understandings of what “hate speech” and “hate groups” are, but by haphazard flagging, a reliance on self-policing, and confusion over the kind of space Facebook or its critics want to create.

Since Facebook users exist in echo chambers, it’s easy to miss how widespread Confederate heritage communities are if your Facebook friends aren’t sympathetic to their cause. Many such groups, both public and private, have existed since the mid-2010s, but a spate of new groups appeared this summer. Some local varieties have just hundreds of members, while other national groups, such as Confederate Citizens, have nearly 100,000 members. Not only are these groups extensive, but they also serve as content factories. Groups such as In Defense of the Confederacy, Dixie Cotton Confederates, and Save Southern Heritage see hundreds of posts each day, which circulate rapidly around other groups, pages, and news feeds. At heart, these groups share some common features: the casting of Lee as a benevolent, misunderstood figure despite his documented defense of slavery in the U.S.; the efforts to preserve and build Confederate iconography; the indignation at the toppling of statues; and the—rhetorical?—call to arms.

Many of these groups spend a lot of time thinking about hateful speech. Just take a look at their self-policing and content policies: It’s not uncommon for a group to explicitly forbid hate speech, racist content, and bullying. Nor is it rare for moderators to post and repost these rules in a group’s main discussion. Megan Squire, a computer science professor at Elon University known for her work on extremist communities on Facebook, told me that this dynamic is particular to Confederate groups. A public-facing Facebook presence is important to the Confederate agenda of, for instance, getting the Lost Cause narrative in children’s textbooks. “At the same time, they also attract this sort of hateful element, and so they know they need to clamp down on that or it will look bad,” Squire said. “I guess my question is always: If people didn’t talk like that on your page, you probably wouldn’t have to write that rule, right?”

Moderators and group members are vigilant in part because they’re aware some of the content they attract (and many would like to espouse) won’t fall within Facebook’s policies. “I fully respect the First Amendment. But the Wizard of Facebook doesn’t. I don’t want to get kicked off Facebook or have my growing page taken down because of racist words,” posted a moderator of Confederate Defenders, a public group, a few years ago. That same moderator wrote earlier this month, with greater urgency, “With all the censorship going around, I don’t want to lose my page. PLEASE BE CAREFUL WITH YOUR LANGUAGE.”

For many Confederates, that censorship is a worthwhile trade-off. “If I’m willing to self-censor myself and my organization, I can reach a reasonable number of people with my message and I can do it every day,” Kirk Lyons, an admin of Save Southern Heritage, told me. He also runs the Facebook page for the Southern Legal Resource Center, an organization he co-founded that has been called the “legal arm of the neo-Confederate movement.” Lyons identifies as an unreconstructed Southerner, but the Southern Poverty Law Center considers him a white supremacist lawyer. (Lyons denies this and maintains that the SPLC’s article on him contains many inaccuracies.) Lyons sees Facebook as a sort of necessary evil to getting his message out. “It’s worth putting up with all of Mark [Zuckerberg]’s nonsense … because it’s so much easier than it was in the email age or the letter and postage stamp age,” he said. If he’s careful, he explained, his individual posts can reach hundreds of thousands of people, such as a recent image of a Confederate flag—his Confederate flag—flown over NASCAR’s race at Talladega.

How sincerely the language opposing hate speech comes across varies from group to group and from user to user, which is fitting for a movement known for its broad ideological spectrum. Some say their beliefs are compatible with an outright rejection of racism, or even of merely disrespectful content; they may believe they can revere Dixie on their own terms, irrespective of the racial violence it’s rooted in. Along these lines, the least incendiary—and the most moderated—groups tend to focus on Confederate soldiers and their descendants, as well as historical documents.

On the more extreme end of the spectrum, groups affiliated with the League of the South are known for openly discussing white supremacist beliefs. (For this reason, Facebook actually deplatforms them: A few weeks ago, for instance, Facebook took down one such group based in North Carolina, though a new group replaced it within a day.) Group discussions often bear out the disparities among Confederates’ approaches to hate speech. In screenshots Squire sent me from a private Confederate “monument protection” group in her county, a number of members expressed anger at seeing a fellow Confederate hold up a sign at a rally on July 11 that read, “NO FREE COLORED TVS TODAY”—presumably a racist dog whistle. “I don’t care how you look at this, but to me this is racist period,” said one user. “People state over and over we are for history and heritage yet make signs like this.” Some reiterated that this isn’t what they stand for, some didn’t understand what the big fuss was about, and others were more focused on the sign’s potential to give fuel to detractors and the “liberal media.”

But that sort of pushback is dwarfed at times by the amount of hateful speech that persists. Group members often post about landing themselves in “Facebook jail,” and for good reason. Even in the public groups, it’s not unusual to see racial slurs, some of which are never removed. Last month, for example, a member of Save Southern Heritage used the N-word to refer to people “destroying and looting.” “I’m impressed you weren’t banned for that word on FB,” another member replied. “I agree with every word though.” More common than racial slurs, however, are calls to violence—sometimes specific, sometimes vague. In a group called “Save the Confederacy and restore our Confederate heritage flags up,” a post on a Black Lives Matter demonstration prompted a few users to say that drivers should run protesters over and “take out as many as possible.” In Dale’s Confederate Group, which is now private, a user commented this month that the best thing to do with Democratic cities is to bomb them.

All the examples mentioned here, aside from Squire’s, come from public groups. Private groups are strict about admittance: Virtually all require prospective members, upon requesting to join, to answer questions about their commitment to the Confederacy, their opinion on the real cause of the Civil War, and what the Confederate flag means to them. Given the content that’s visible in public groups, it’s safe to assume that even more borderline-to-outright-hateful speech thrives in these self-contained spaces. “One of the eternal problems with Facebook is that if this stuff goes on in a private group, the only way to report the content is to join the group, find the content, and report it. Each report takes 10 clicks. It’s putting a lot of work on a user,” said Squire. And in private Confederacy groups, those users may not be inclined to do any of that work.

A recent civil rights audit of Facebook, carried out by independent civil rights experts and lawyers over the course of two years, criticized the platform for prioritizing free speech over nondiscrimination. The auditors concluded, among other things, that Facebook needs to be more proactive about identifying and removing extremist and white nationalist content. “I don’t know if Mark appreciates that hateful speech has harmful results, and that Facebook groups have real-world consequences,” Jonathan Greenblatt, chief executive of the Anti-Defamation League, told the New York Times after the civil rights report was released.

Those real-world consequences are worth considering. Before Facebook restricted public access to its application programming interface, or API, in 2018, Squire used Facebook’s data to systematically study about 700,000 users across 2,000 hate groups and 10 different ideologies. Of those users, the Confederates were the least likely to cross over into other ideologies: About 85 percent of them belonged only to Confederate groups. There are two stories here. The first is that Confederate groups are relatively contained and self-sustaining, and that their members don’t dabble much in other, more violent ideologies. From that perspective, their threat consists mostly of the speech within their groups. The second story is about the other 15 percent of Confederates, who cross over into militia, white nationalist, alt-right, and anti-immigrant groups. The prime example of the dangers of that crossover is the Unite the Right rally in 2017. Although the rally was ostensibly held to protect the Lee monument in Charlottesville, Virginia, it became a gathering for hate groups across the far right, including neo-Nazis and Klansmen, that left at least 33 people injured and one counterprotester dead.
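To make the shape of that analysis concrete, here is a minimal sketch in Python of how a crossover rate like that 85 percent figure could be computed from group-membership data. The membership map and names are hypothetical, for illustration only; this is not Squire’s actual code or dataset.

```python
# Hypothetical sketch: estimating ideological "crossover" from group
# memberships. The toy data below is invented; it is not Squire's dataset.

# Map each user to the set of ideologies whose groups they belong to.
memberships = {
    "user_1": {"confederate"},
    "user_2": {"confederate"},
    "user_3": {"confederate", "militia"},
    "user_4": {"white_nationalist", "alt_right"},
}

def crossover_rate(memberships, ideology):
    """Fraction of an ideology's members who also belong to groups
    associated with at least one other ideology."""
    members = [tags for tags in memberships.values() if ideology in tags]
    if not members:
        return 0.0
    crossed_over = sum(1 for tags in members if len(tags) > 1)
    return crossed_over / len(members)

# With the toy data above, 1 of 3 Confederate members crosses over (33%).
print(f"{crossover_rate(memberships, 'confederate'):.0%} crossover")
```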

It’s not controversial to say that neo-Nazi or Boogaloo groups should go, but it’s less clear what a mainstream platform should do with “heritage not hate” groups—groups that, as the SPLC puts it, “in their effort to gloss over the legacy of slavery in the South … strengthen the appeal of Lost Cause mythology, opening the door for violent incidents.” Even the SPLC, which refers to neo-Confederacy as a whole as a revisionist branch of American white nationalism, doesn’t consider a number of Confederate heritage groups, such as the Sons of Confederate Veterans, to be hate groups.

When I asked Squire—someone who’s outspoken about her activism and who provides data on far-right extremists to the SPLC and antifa activists—whether she believes Facebook should allow these groups to operate on its platform, she pointed to the fact that their speech isn’t illegal. And more than that, she said, their beliefs are “not fringe down here” in the South. She mentioned that state representatives in her state of North Carolina have ties to the Sons of Confederate Veterans, and that the University of North Carolina at Chapel Hill gave $2.5 million last year to that organization after protesters toppled a statue of a Confederate soldier on campus in 2018. “We’re fighting it, obviously, but it’s a very long and uphill battle,” Squire continued. “And I think Facebook has to bridge both of those realities.”

As people continue to call for more robust definitions of hate speech online, it may be helpful to remember that sometimes what we want from Facebook is misaligned with how the platform operates. Facebook can be dangerous not just for its content, but for its lack of public data; for how its (private) algorithms work; for the ways it amplifies certain voices and can lead to deeper polarization and, in some cases, radicalization. There’s a reason researchers are always going on about the dire need for transparency. Outside of calling for Facebook to police its most extreme content, it’s worth asking what we can reasonably expect from a private company that operates in its own interest.

After Facebook released the findings of the civil rights audit, the Verge’s Casey Newton succinctly summed up the problem in his newsletter: “The company could implement all of the auditors’ suggestions and nearly every dilemma would still come down to the decision of one person overseeing the communications of 1.73 billion people each day.” The same could be said of the majority of #StopHateforProfit’s 10 recommendations for Facebook, which demand changes such as further audits, a C-suite civil rights executive, and heightened content and group moderation. “This campaign is not calling for Facebook to adopt a new business model, spin off its acquisitions, or end all algorithmic promotion of groups,” wrote Newton. Nor is it calling for an overhaul of Facebook’s approach to transparency. Yet these sorts of changes may in fact be necessary to address the root of Facebook’s speech and radicalization problems.

The complexities of Confederate discourse on the platform ultimately show that singling out hate speech as the primary target of public outrage at Facebook is, in part, a distraction—a Sisyphean endeavor that tends to obscure more serious issues. Such a focus leaves us with the classic censorship vs. free speech dichotomy, which inevitably leads to some people invoking the First Amendment, and others retorting that the Constitution doesn’t apply to private sites, ad infinitum. What borderline speech can force us to do is to move beyond the terms of that debate, to update the conversation (and call to action) to reflect the platform as it operates today.

But what a better conversation—let alone moderation framework—would actually look like is unclear. Newton writes that the best hope for addressing Facebook’s role in accelerating and promoting hate speech, misinformation, and extremist views comes not from the campaign or the audit, but from Congress, which has the potential to question the company’s “underlying dynamics” and “staggering size.” And that’s certainly one avenue for change, especially with Zuckerberg testifying before the House Judiciary antitrust subcommittee on Wednesday. But informed government regulation often relies on citizen engagement, and in the case of Facebook’s speech problems, users must grapple not only with the flashiest and most extreme bits of Facebook’s content, but also with the shades of speech that exist just below that, and the mechanisms that allow that speech to flourish.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.



from Slate Magazine https://ift.tt/2DkgL6y
