Friday, September 25, 2020

TikTok’s Surprising Transparency Efforts

Sean Gallup/Getty Images

On Friday, Sept. 25, at noon Eastern, the Free Speech Project will host an hourlong online event examining TikTok, WeChat, and the threatened bans’ implications for the U.S.-China relationship. For more information and to RSVP, visit the New America website.

From #ThePushupChallenge to the slew of Renegade dance tutorials, TikTok has been a central form of entertainment and social media during the COVID-19 pandemic. But the platform has also drawn increased scrutiny from U.S. lawmakers, who have expressed concerns that because the social media app is owned by Chinese company ByteDance, it poses a national security threat to the United States and a risk to American users. Much to the chagrin of avid TikTok users, President Trump signed two executive orders in August effectively banning TikTok in the United States by blocking any transactions between U.S. entities and ByteDance, and by requiring the parent company to sell or spin off its TikTok U.S. business within 90 days.
To avoid this, TikTok is working to finalize a deal with Oracle and Walmart, although the process has been marred by confusion over who would retain majority control of the company.

There are genuine reasons to worry about TikTok's security, and many other social media platforms pose a variety of security threats of their own. But something often overlooked is that before the Trump order came down, TikTok had made strides toward greater transparency around its content moderation practices, privacy policies, and use of algorithmic decision-making, some of which went further than current industry standards for accountability.

Activists and researchers have continuously pressed internet platforms to provide greater transparency around their content, data privacy, and algorithmic decision-making policies and practices. This is because internet platforms act as gatekeepers of online speech and engage in vast data collection, with little accountability to the public. For example, in 2016, Facebook suspended the account of a Norwegian writer who shared the Pulitzer Prize-winning Vietnam War "napalm girl" image of a crying and terrified child as part of a post on photographs that "changed the history of warfare." His account was suspended for violating the company's policy on nudity and child pornography. The incident sparked controversy and demonstrated how easily an internet platform can suppress or censor information, even a widely known, award-winning documentation of history. Similarly, YouTube's content moderation algorithms came under fire for flagging, and subsequently removing, documentation of human rights atrocities in Syria as terrorist propaganda. Greater transparency around these content moderation and algorithmic curation policies and practices would help researchers and the public understand how these processes and tools are used, where they fall short, and what effects they have on user speech.

Facebook, YouTube, Twitter, and other social media platforms have responded to these calls by publishing detailed outlines of their content and data privacy policies and by sharing limited information about how they use algorithmic decision-making to curate and moderate content. However, these companies have provided little transparency around how their human moderators are trained and how they operate, and the companies have asserted that, due to competition concerns, they cannot reveal their source code to external audiences.

TikTok, however, was poised to go beyond other platforms. In July, then-TikTok CEO Kevin Mayer stated that the entire industry "should be held to an exceptionally high standard" and that companies should proactively disclose information about their algorithms, content moderation practices, and data privacy practices to regulators. (Mayer resigned in late August.) In keeping with that stance, TikTok announced in March that it would open two Transparency and Accountability Centers, in Los Angeles and Washington, D.C. According to the company, experts at these centers will be able to review TikTok's human content moderation processes, data security practices, algorithmic systems, and the app's source code, which will be available for testing and evaluation. The opening of the Los Angeles center was delayed by the onset of the COVID-19 pandemic, but the company says it still plans to launch both.

Of course, given that the Transparency and Accountability Centers have not yet fully opened, whether they will be an effective means of providing transparency and accountability remains to be seen. It's also not clear exactly how valuable the company's transparency mechanisms truly are. In particular, some researchers have argued that sharing access to a company's source code is not a valuable method of providing transparency: Most consumers wouldn't know what they were looking at, and even technical experts may not be able to understand and evaluate the code. It's understandable that someone might consider this a bit of transparency theater, intended to show skeptical policymakers and eager regulators that the company has nothing to hide. Still, the efforts go further than the current industry standard for transparency and accountability around sensitive subjects such as content moderation and data privacy. These moves should put the onus on other platforms to step up their transparency game as well.

TikTok's moves build on its first transparency report, which it released in December 2019. That report provided data on legal requests for user information, government requests for content takedowns, and copyright-related content removals. Since then, the company has followed in the footsteps of companies such as Facebook and Google by expanding its transparency report to include data on the scope and scale of its Community Guidelines enforcement efforts.
This is noteworthy given that TikTok is a relatively new company. By comparison, Snap Inc., which has been releasing transparency reports since 2015, reports only on data points related to legal requests for user data and content removal, and does not report on how it enforces its content policies. Other, more prominent companies, including Twitter, have only recently expanded their transparency reports to include granular data on their content policy enforcement efforts.

TikTok's transparency report, however, still has plenty of room for improvement. It includes only a limited set of metrics related to its content moderation operations and lacks granular data on points such as how the platform detects and removes content and how many appeals the company has received over its content moderation decisions. TikTok also doesn't publish an ad transparency database, as Facebook, Google, and Reddit all do. More broadly, transparency reports don't always tell the full story, as companies can pick and choose which data points to report.

We don’t know how TikTok’s operations may transform in the coming months. But I hope that the company continues to follow industry best practices and norms around transparency and accountability, while also setting some of its own—something that is uncommon among nascent and smaller internet platforms. Even if TikTok has no future in the United States, its moves could encourage other platforms to do more to provide transparency and accountability around their operations.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.



