Saturday, October 26, 2019

The Artists Who Are Turning the Tables on Oppressive Technologies


Illustration: Natalie Matthews-Ramo

An artist-educator who collaborates with A.I. responds to Cory Doctorow’s “Affordances.”

When a new course begins, it takes me a while to learn students’ names, unless they represent something other than what I normally see. That was the case several years ago, when I encountered the first black male undergrad I’d ever taught at this particular college. He said his name was Jeffrey.

When I entered the classroom for the next meeting of the computational class, the same student said, “Hello, Miss. Do you remember my name?” My response was immediate, to my surprise: “You are Jeffrey.” There was a brief pause, and then he said, “Cool. I just don’t want you to see me as a number.”

Jeffrey frequently comes to mind: when I read about 92 (short for BILL 2892), one of the protagonists in Cory Doctorow’s “Affordances,” who was known as Muhammad before he became a number. When I read the news that Google was targeting people with dark skin to improve facial recognition. When I read that black users of apps like Lyft and Uber have to wait longer than their white counterparts for rides.

In “Affordances,” we see various forms of artificial intelligence erase people’s names and identities, particularly those of people who are held back by, and are fighting, societal barriers. These people are reduced to numbers, to maps of their faces, to their risk scores. Facial recognition software identifies protesters and otherwise serves as a technological gatekeeper. Online filters flag or block video footage of migrants, and racially biased algorithms determine whether alleged perpetrators are taken into custody or released. In our world, the power of Facebook, Google, and other technology companies is so immense that it can feel futile to push back against them, especially for marginalized groups. But that sense of helplessness can also enable a dangerous complacency. It is exactly because these companies are so powerful that we need people to interrogate their work and challenge it.

And that is where “Affordances” shines, as it explores how we can use those same tech tools to reclaim our agency. There are many possibilities for countering repressive and oppressive aspects of technology through activism, art, and education. Though their work may not be as familiar as Mark Zuckerberg’s, the people doing these things are heroes.


For example, in “Affordances,” activists turn the tables by using wearable technology to foil facial recognition. In Hong Kong, protesters use laser beams to disorient police and block facial recognition. After authorities banned face masks (an edict so widely flouted as to be unenforceable), a student shared a prototype for a “wearable face projector”: The device is “about staying anonymous in a fictional, futuristic world where face-recognizing is a big thing,” Jing-cai Liu wrote on her website. NeuroSpeculative AfroFeminism, an award-winning digital project from Hyphen Labs, helped to develop HyperFace, a camouflage product that works by providing false faces based on ideal algorithmic representations of a human face. The project combines fashion with an “Afrocentric counter-surveillance aesthetic.” Like the wearable face projector, HyperFace reduces the confidence score of facial detection and recognition by presenting images of false faces that distract biased algorithms.
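To make that mechanism concrete, here is a minimal, purely hypothetical Python sketch (my own illustration, not HyperFace’s actual code): a mock detector ranks candidate regions by a “face-likeness” score, and decoy patterns engineered to score higher than a real face crowd the real face out of the detector’s top results.

    # Purely illustrative simulation of the HyperFace idea: decoy
    # "false face" patterns dilute a detector's confidence in the real
    # face. The detector and all scores here are hypothetical.

    from dataclasses import dataclass
    import random

    @dataclass
    class Candidate:
        label: str
        face_likeness: float  # mock confidence score in [0, 1]

    def mock_detector(candidates, threshold=0.5):
        """Return candidates above the threshold, ranked by confidence."""
        hits = [c for c in candidates if c.face_likeness >= threshold]
        return sorted(hits, key=lambda c: c.face_likeness, reverse=True)

    random.seed(0)

    # Scene 1: a lone real face is the detector's clear top hit.
    real_face = Candidate("real face", 0.92)
    print([c.label for c in mock_detector([real_face])])
    # -> ['real face']

    # Scene 2: the same face surrounded by printed decoys designed to
    # look "more face-like than a face" to the algorithm. The top hits
    # are now decoys, and the real face no longer stands out.
    decoys = [Candidate(f"decoy {i}", random.uniform(0.93, 0.99))
              for i in range(8)]
    print([c.label for c in mock_detector([real_face] + decoys)[:3]])
    # -> decoys occupy the top of the ranking

Real detectors are vastly more complex, but the principle is the same: flood the algorithm with plausible false positives, and its confidence in any single face drops.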

HyperFace isn’t (yet) something that you can purchase—it’s more of an art project, an entry point for illuminating issues of privacy, transparency, identity, and perception. These sorts of prototypes enable designers to speculate about how to solve problems in facial recognition and other artificial intelligence systems. Along the same lines, artists and designers Ayodamola Okunseinde and Salome Asega have created an archive of speculative design artifacts, including A.I. prototypes that address social issues. Take, for example, Artifact: 012, a sensory bodysuit worn to help people deal with the cultural trauma of the Middle Passage. The bodysuit provides data on tidal waves in the Atlantic Ocean.

We’re also seeing efforts to bring new voices into the discussion and examination of A.I. For instance, transmedia artist Stephanie Dinkins’ AI.Assembly makes space for “lateral-minded practice and thought around intelligent systems where we can think about what A.I. needs from us and what we want from it.” Participants brainstorm ideas for prototypes that address those two questions. One answer is that A.I. needs diverse designers, coders, and engineers, along with better data sets, to counter algorithmic bias in facial recognition software. To support this type of work, the Algorithmic Justice League highlights algorithmic bias and develops practices for accountability during the design, development, and deployment phases of coded systems. Its goal is to unmask bias and give those who have been excluded a space to voice their concerns.

The heroes of “Affordances” are the people who rediscover their humanity and take control of it. Ninety-two recalls his former name, Muhammad. Yolanda looks for ways around online video filters to inform others about social issues. Dion, a black cop, tries to help the “high-risk” youth Caleb. Caleb gets between Dion and Yolanda, who is trying to enter the U.S. on a flotilla with South American migrants. Fiction like “Affordances” opens up a dialogue about the impact of A.I. on vulnerable and marginalized groups.

Doctorow’s “Affordances” contains both warnings against the seemingly inevitable proliferation of A.I. and possibilities for countering repressive and oppressive aspects of technology through activism, art, and education. The story ends at Re-Cog, a facial recognition software company whose CEO is beginning to doubt the long-term viability of its product. Randolph sits alone, staring at faces that his software didn’t recognize as faces, only as numbers. Jeffrey, my former student, told me that he never wants to become just a number. Now more than ever, there are options for not becoming merely a cog in the A.I. system. We just have to recognize and deploy them.

This story and essay, and the accompanying art, are presented by AI Policy Futures, which investigates science-fiction narratives for policy insights about artificial intelligence. AI Policy Futures is a joint project of the Center for Science and the Imagination at Arizona State University and the Open Technology Institute at New America, and is supported by the William and Flora Hewlett Foundation and Google.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.



from Slate Magazine https://ift.tt/31LypX5
via IFTTT
