Friday, December 18, 2020

Pornhub Is Just the Latest Example of the Move Toward a Verified Internet

Photo illustration by Slate. Photo by Twitter.

The online adult video platform Pornhub announced this week that it has removed all unverified videos, limiting uploads to verified users only. The move followed an opinion piece by the New York Times’ Nicholas Kristof that traced the lives of child sexual abuse victims whose videos were uploaded to the site. The article alleged that rape videos, including videos of child rape, were allowed to proliferate on the platform unchecked. In response, both Mastercard and Visa began their own investigations into the site, eventually announcing they would stop processing payments for Pornhub. Pornhub’s move to “verified users only” means that uploads can come only from official content partners and members of its “Model Program.”

It is often said that pornography drives innovation in technology, so perhaps that’s why many outlets have framed Pornhub’s verification move as “unprecedented.” However, what is happening on Pornhub is part of a broader shift online: Many, even most, platforms are using “verification” as a way to distinguish between sources, often framing these efforts within concerns about safety or trustworthiness.

For instance, Airbnb announced in 2019 that it would verify all of its listings, including the accuracy of photographs, addresses, and other information posted by hosts about themselves and their properties. Tinder has also rolled out a blue checkmark verification system to deter catfishing, asking users to take selfies in real time and match poses in sample images. Social media platforms like Twitter and Instagram have long included blue verification checkmarks. Perhaps in recognition of the role verification will play in the future of the internet, Twitter has opened a draft of its new verification system to public comment.

Observers have long suspected that other platforms where legacy media and amateur content creators converge, such as YouTube, have different content moderation rules and processes for different user groups. Following concerns about COVID-19 misinformation, YouTube embraced this system more explicitly, prioritizing the monetization of videos from news partners and from users who had accurately self-certified their content.

Verification has been viewed as one solution to the broader problem of trustworthiness or credibility online (often framed through the lens of mis- and disinformation). In some cases, it is being used as a way to mediate and highlight credible and authoritative information (as Twitter did during the early stages of COVID-19) or content from platform-approved sources (as is the case with Pornhub). In other cases, it serves primarily as an external badge, oriented more toward users as they navigate the internet—part of Silicon Valley’s long-term tendency to emphasize users’ individual responsibility for evaluating content (a phenomenon that has been well documented by the media scholar Mike Ananny in his work on fact-checking). In both senses, verification signals a broader shift in content moderation away from content and toward sources.

To some degree, the embrace of “verification” is a response to long-standing concerns about who and what is spreading false information online. When it comes to social media, verification targets bots, multiple or fake accounts (or misrepresentative photos of people and things), and parodies; it is intended to resolve the online “identity” concerns that emerged as we all moved behind the screen. The intention of verification for many of these companies, it seems, is to ensure that the online world represents the offline. (To revise the old New Yorker cartoon caption: “On the internet, everyone knows you’re a dog because you have a blue checkmark.”) For other companies—particularly Twitter, Facebook, and Instagram—verification is still very much bound up with notability (though Twitter CEO Jack Dorsey has made claims in the past about the company’s plans to “open verification up to everyone”).

Verification as social currency gives platforms some leverage to enforce their rules. Twitter, for instance, has removed verification from users who have engaged in hate speech and harassment, and from users who have misrepresented themselves in their names and bios. This type of enforcement could be fairly useful, particularly because study after study shows that false information more often flows from the top down, spread by those more likely to be blue check–marked. However, platforms don’t often use this approach, perhaps because they are unwilling to provoke many members of the blue-checked class.

In this sense, and in many others, verification frequently confers benefits, both material and social. In the case of Pornhub, unverified users are quite literally shut out of the site. For YouTube, whether a creator is an established partner or has self-certified their content can also factor into monetization. One’s status within the YouTube ecosystem can additionally affect how content rules are enforced. Research I have done with Tarleton Gillespie has shown that relationships with platform companies can often factor into what kind of review content receives: user-generated content is more likely to be moderated through algorithms, which are notoriously faulty. Verification (along with other ways of differentiating between users, like YouTube’s tiered partner programs) is one means platforms use to allocate the tools and resources required to moderate large amounts of content online.

It is not clear whether the move toward the “verified internet” is bad or good. Performers on Pornhub have advocated for this move to verification as a way to curb piracy and prevent the spread of nonconsensual porn. Journalism groups, such as the Trust Project and NewsGuard, have been working for years to convince platforms to address differences in how information is produced by news media (specifically, organizations that operate according to a standard set of media ethics) versus other information sources. (Twitter bases its criteria on standards established by organizations such as the Society of Professional Journalists.) And housing activists have been calling out platforms like Airbnb since at least 2015 for serving as a front for professional management companies posing as individual homeowners. Verification—if implemented responsibly—can help stop that practice.

And yet, it’s not clear platforms are listening to these groups when forming their verification processes and procedures. For instance, platforms continue to feature, often unfiltered (or with minimal commentary), information coming from “official” sources such as press releases—a sidestepping of media gatekeeping that is at odds with journalism ethics. It’s illustrative of the platform tendency to see direct information as less biased and akin to “raw data.” Platforms also have a history of changing their prioritization policies frequently and without notice, upending the industries and individuals who have dedicated resources to carefully learning how to operate within a platform’s rules. Most recently, Facebook reversed an algorithm change used during the election that prioritized authoritative news sources over hyperpartisan ones. Platforms may see themselves as verifiers and mediators between information sources, rather than embracing a clear gatekeeping role.

Verification will also have important consequences for the participatory internet, particularly for the large swaths of users and creators who don’t get a checkmark. Researchers Nikki Usher, Jesse Holcomb, and Justin Littman found that verification is not conferred evenly, with male journalists more likely to be verified than female journalists. Currently, there is a dearth of publicly available information about the demographics of verification in general—for instance, whether BIPOC users are verified at the same rates as white users. But criteria based on notability have been found to reproduce existing inequalities. Similarly, news organizations and other established entities are able to apply for verification on behalf of individuals, which can mean platforms easily take up the biases of existing institutions.

As the conversation about who gets verified becomes more visible, we need to consider how to address issues like diversity, equity, and representation; localism; and public versus private interests. We need to consider how new modes of online gatekeeping should be used to highlight marginalized voices that were cut out by media gatekeeping in the past. (For instance, within the United States, there is a long history of anti-Black violence by the media industry that has still not been addressed.) This is important because, as research by Deen Freelon, Charlton D. McIlwain, and Meredith D. Clark has shown, social media has been critical for movements like Black Lives Matter to circulate their own narratives without relying on mainstream media. There are also significant privacy concerns as verification becomes the norm. We need to consider which groups are more likely to be stigmatized or put at risk as more platforms require verification.

Studies in the past have shown that verification does not necessarily play a large role in how users assess the credibility of online actors. But Pornhub’s recent change demonstrates that the embrace of verification by platforms can significantly alter what content can be published online at all. As we move toward the verified internet, we need transparency in how platforms are making these decisions—and opportunities for the public to provide feedback—more than ever.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.



from Slate Magazine https://ift.tt/38cax4i
