Thursday, December 17, 2020

The Context Problem Social Networks Don’t Like to Talk About


When Facebook, Twitter, and YouTube discuss the fraught work of content moderation, they often come back to one word: context. “For our reviewers, there is another hurdle: understanding context,” Monika Bickert, Facebook’s head of global policy management, once wrote in an op-ed. That context can include, say, whether a user has uploaded an alarming video to glorify violence or to criticize it. Lack of context can be the difference between whether a post follows or violates the rules—and it can frustrate content moderators and company employees already under pressure to make the “right” enforcement decision.

Missing context, however, is not only an issue when it comes to evaluating users’ posts. The companies themselves often fail to provide necessary context to the broader public about how they approach content moderation. Users who feel their tweets, posts, or videos were erroneously removed don’t get adequate explanations from the platforms about how, exactly, they violated the rules. It can be even more consequential when the documentation of human rights abuses is at stake.

Context is, indeed, everything. I should know: I spent nearly three years at YouTube in the unruly world of content moderation, working on policy and enforcement issues covering violent extremism and graphic violence. I even helped write a YouTube policy article called “The Importance of Context” to help users understand it better. The work was often contentious and divisive, to say the least, with countless emergencies and escalations that forced me and my colleagues—to say nothing of the front-line moderators—to make split-second decisions in situations where we had little context but knew there could be plenty of ramifications.

For instance, in 2017, YouTube found itself handling a companywide emergency after facing mounting criticisms about the presence of extremist content on its platform. Part of the response included significant interventions from algorithms and human moderators, which resulted in the deletion of tens of thousands of videos documenting human rights abuses in Syria. In thinking about this issue—how to help good actors trying to harness social media’s potential as a megaphone to the global community—I also found myself struggling with that old problem: context.

Let’s say, for example, a bystander films and quickly uploads a video showing a person being killed. But this witness—who is rushing to publish the video—doesn’t add context to the video or its title and description. While viewers can watch this violent scene, they don’t have additional information about what may have led to the event or where and when this act took place. Now, whether flagged by users or by algorithms, this video is sent for review. A human moderator must quickly review it to determine whether it violates Community Guidelines. Is this graphic violence? Incitement? Terrorism? Shock content? Or a brave act to document a grave abuse? It depends—what’s the context? As civil society groups from Human Rights Watch to the Syrian Archive have highlighted, these moderation decisions, which often come down to issues of context, can mean the deletion of potential evidence of human rights abuses.

But it doesn’t have to be this way. Companies, for example, can and should take seriously the growing calls to preserve and archive content that may have evidentiary value, particularly when platforms cannot justify leaving that content up for all users to see. It is also important that social media platforms continue to improve their communications with users and make their content policies and enforcement operations fully transparent. Most critically, platforms must engage meaningfully with human rights organizations.

We’ve started seeing the beginnings of engagement efforts, though their efficacy remains to be seen. Twitter has its Trust and Safety Council, which brings civil society groups to the table to consult with the company on its rules. Facebook, over the past year, has been trumpeting the launch of its Oversight Board, a new appeals option staffed by an independent body that takes issues referred by users or Facebook itself. (YouTube has no similar initiative, at least not one on the same scale as those of its peer companies.)

Recently, Facebook’s Oversight Board announced its inaugural batch of six cases. Unfortunately, the board’s approach lacks that important ingredient: context.

Consider Case 2020-002-FB-UA. According to the board’s description:

A user posted two well-known photos of a deceased child lying fully clothed on a beach at the water’s edge. The accompanying text (in Burmese) asks why there is no retaliation against China for its treatment of Uyghur Muslims, in contrast to the recent killings in France relating to cartoons. The post also refers to the Syrian refugee crisis. Facebook removed the content for violating its Hate Speech policy. The user indicated in their appeal to the Oversight Board that the post was meant to disagree with people who think the killer is right and to emphasize that human lives matter more than religious ideologies.

Part of the Oversight Board’s process is inviting members of the public to submit comments arguing for or against a particular removal or adding critical perspectives. Looking at how the public comment process worked, I grew frustrated at the lack of context provided. We have only a summary written by Oversight Board staff members to go on. We don’t have any idea how this user “refers” to the Syrian refugee crisis, a point that, if clarified, could dramatically change the meaning of the post.

It may be the case that the board seeks to limit the types of comments public parties can file, a signal that the board feels some topics are better suited for input than others. But when it comes to content moderation, what good is a comment without the post in question (stripped of specific information that could identify the user, of course), information about the engagement the post received from other users, and the enforcement and operational life cycle (such as quality assurance scores from moderators in the various review stages before this user even had the option to appeal to the Oversight Board)? Far from being superfluous, these pieces of information can provide critical context as to whether a post was incendiary in nature (as demonstrated by the post itself), whether it offered a poignant critique that resonated with other users (based on comments and other forms of engagement with the removed post), or whether significant operational flaws drove the removal and must be addressed (looking at the health of the moderation operation, the performance of the moderators, and the review and appeal history).

Yes, the board members themselves will have access to at least some of this information when they’re reviewing a case (for more information, see Article 2, Section 2, of the board’s own bylaws). But if they are willing to collect, read, and publish the concerns of interested parties by soliciting public comments, then let’s make these comments more than lip service. Hearing from a well-informed public—including representatives from human rights groups—will only help the decision-making process.

Context doesn’t have to be just a one-sided rationale that companies employ to justify removal decisions. Users and concerned third parties should be able to demand that context flow the other way as well, collectively asking for the information that can truly help shape moderation that works for all, not just some.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.



from Slate Magazine https://ift.tt/3akC4mF
