Friday, October 25, 2019

A Former Facebook Insider on Why It’s So Hard for the Tech Giant to Get Elections Right


[Photo: Mark Zuckerberg. Photo illustration by Slate; photo by Mandel Ngan/AFP via Getty Images.]

In 2018, former CIA analyst Yael Eisenstat went to work for Facebook as head of Global Elections Integrity Operations. She had worked in government for many years, including as a national security adviser to Vice President Joe Biden, before joining the tech giant’s effort to combat election meddling, hoping to help confront what she sees as our democracy’s “biggest existential threat”: the breakdown of civil discourse. The job did not turn out as planned, and the challenges, in her telling, remain vast.

I spoke to Eisenstat for the first episode of Slate’s new Friday-morning tech podcast, What Next: TBD. Below is a transcript of our interview, edited and condensed for clarity.

Lizzie O’Leary: How did you come to work at Facebook?

Yael Eisenstat: So, I spent most of my life in the national security world. I’d left government in 2013, but around 2015 it started occurring to me that the bigger threat, in my mind, to all the things I had cared about—whether it was democracy, civil discourse, our national security—was no longer coming from this thing abroad that I was working on. I started thinking it was the breakdown of civil discourse here at home. So I started digging in, and I wrote a piece exploring the breakdown in civil discourse as our biggest existential threat, and started looking at why that was happening. I’m not saying social media is the only reason it was happening, but I did start exploring that and speaking about it at tech conferences, and I was asked once on a podcast: Do I think Mark Zuckerberg is to blame? I said something along the lines of, I don’t think Mark Zuckerberg set out to destroy democracy—I gave this long answer of, I don’t think this is anything intentional, but I do question who he has at his decision-making table, and I suspect it’s not people with my background.

Your background having been a CIA officer.

Yes. Having been a CIA officer, having been a diplomat overseas, having worked on the ground with real people affected by policies, decisions, conflicts. Not just sitting in an office somewhere having never actually seen how the world worked.

What did you want to accomplish when you went to work for Facebook?

It was not an easy decision. And just to be clear on the timing of it, the irony is that as you and I sit here today, Mark Zuckerberg is testifying right now. The last time he testified [in April 2018] was the day they made the job offer to me. In fact, they actually called me with the final offer one minute after his testimony ended that day.

Really?

Yeah. It was April, right? So what did I hope to accomplish? I watched the entire hearing. I watched how many times he talked about elections, in particular around the world, as a top priority. Then they called me and offered me this shiny title of head of Global Elections Integrity Operations—the exact title that spoke to the core of my priorities and who I am. And to me that meant: I don’t know if this is salvageable, but as someone who cares so much, not just about our democracy, but about global politics, global civil discourse, all of these things—I cannot turn down this opportunity.

I didn’t have rose-colored glasses on, thinking I was going to go change the company. I didn’t. I’m old enough and have worked in the world enough to know better. That was not what I thought. But I was not an easy interviewee. I was very clear: Don’t hire me if you don’t mean it. I was very excited to help this company hopefully think this through—and [the job] was on the business integrity side, so it was really about the political advertising side of the business—to really help think through these very challenging questions of what role we are playing in global politics and global democracy. So I went in thinking, if what they offered me was true—which was to build and head this new team, to hire a team, and to really help think through the best way for us to ensure that we are not harming democracy in elections around the world—then how could I say no?

Do you remember your first day? First week?

Oh yeah. First day is orientation, so first day is like any place: drink the company Kool-Aid, very cheerleading, very exciting. So first day, I’m pretty sure there’s nothing I could’ve done wrong yet, because I just participated in orientation. My second day, my first meeting was a Zoom meeting, because I was in Menlo Park and my boss was not. In my very first meeting with my new boss, she let me know: “I have to change your title—your title is now going to be Manager.” And for all the things that happened over the next five to six months, nobody can say that on Day Two I had already made so many mistakes that they decided they had to downgrade me.

So from the very first day, everything that was promised to me by the recruiters did not happen. Without going too far into the details, or it’s going to take up our whole conversation, I’ll just say: One of the things that we hear them tout a lot, including right now, is how many people they’ve hired with backgrounds like mine, for example, to help them fix these problems. And they have. They’ve hired some amazing people. I had some amazing colleagues there. But hiring us and empowering us are two different things. And from Day One—well, actually Day Two, sorry—it was made crystal clear to me I would never be empowered to do anything.

Can you give me a sense of what you wanted to do, what tools you wanted to build, and then what happened?

I actually didn’t want to come in on Day One with, here’s exactly what we need to do—and I talked a lot about that during all my interviews. I wanted to take a few months to really dig in, observe, see how we got here, see what the issues are, before actually making any recommendations. And that’s, I think, the way one should, even in a move-fast-break-things culture. This is a really huge issue, and I did want to take my time, and they were all very on board with that during the interview process.

So I don’t want to say I came in on Day One and said, this is exactly what I want to do. One of the things I did want to really understand is: Why is the business integrity side, the political advertising side, completely siloed from all of the efforts the rest of the company is doing? Because some people will say, well, she wasn’t the head of Elections Integrity—so-and-so is, or so-and-so is. Right: There’s someone on the policy side, and there’s someone on the news feed side, and I was on the business integrity side. What I cared about was ensuring that … you know what? Actually, I’ll back up a step.

One of the questions they asked me during my interview, which I actually heard Mark Zuckerberg talk about during his speech the other day, was: Do you think we should ban political advertising altogether? Now, I didn’t know I was being interviewed for an elections integrity job yet, because I was actually being interviewed for something else. So they asked me that question, and I hadn’t actually thought about it in advance. And I did say: “You know what? I think it would probably be the easiest thing to do, because I assume you’re not actually making as much money from political advertising as you are from other industries and other stuff.” However, no, I don’t think you guys should ban political advertising, because I look at this globally, and if you ban political advertising, you’re tilting the scales towards the incumbents, who already have access to media, access to information—especially in countries that have more dictatorial regimes—and you would be squashing the voice of the smaller parties and the littler person.

So I do actually fundamentally believe that, and I heard Mark Zuckerberg say it the other day, and I agree with him. But we cannot deny that the platform has been abused and that they let it happen. So I wanted to come in and see. First of all, I hadn’t clearly understood from my recruiter that the job was just political advertising. To me, if you want to solve this problem, you cannot solve it in silos. So I wanted to look at political discourse altogether across the platform, from the organic side to the advertising side, and understand what the underlying drivers actually are. To me, the bigger question wasn’t necessarily about Facebook’s policies, but: Why were the Russians so easily able to exploit and persuade Americans using that platform? A lot of that is a problem of our society—that’s not Facebook’s fault. But the more I’ve dug in, the more I realize a lot of it is because of the way social media has divided our society.

Do you mean in terms of what social media rewards?

Yes. So the core issue now is: What should they do? It’s been broken down into this freedom of speech versus censorship conversation. And this is going to sound like a ridiculously strong statement, because I get accused of being anti–free speech when I talk about what I think should be done. When I was in the CIA, I swore an oath to protect the Constitution. I spent 13 years in government defending our Constitution, which includes freedom of speech. So to be accused of not actually caring about that, by many people who defend shareholder profit, is a very hard pill for me to swallow. So, just to start with that. But yes, all of the issues we’re talking about—whether or not you should let a politician run a fake ad, whether or not you should have Facebook deciding what is truth or not—those are all really important questions that society will have to decide. Who’s going to govern the internet? Those broader questions.

But none of the real core issues will be solved before 2020. The core issue, in my opinion, is a business model that exploits human behavioral data in order to sell advertisers the idea that they can custom-target individuals so precisely, and show us each a different version of truth based on what they have figured out about us. I know I’m going on a whole thread here, but it’s a business model whose entire metric is user engagement—keeping your eyes on their screen so that they can hoover up all this data and sell it to advertisers. That is what is rewarding the most salacious content. That is what is rewarding the biggest clickbait stories. That is why—and I assume this is even happening in political advertising—the most salacious content is what’s going to grab the most people’s attention, and their algorithms are all about figuring out how to keep you engaged. So that, to me, is actually the bigger issue that all of the whack-a-mole responses do not address.

You had this tweet.

Yes.

You know what I’m going to say.

I do.

“Facebook hired me to head Elections Integrity ops for political ads,” you wrote. “I asked if we could scan ads for misinfo.” I assume you mean misinformation. “Engineers had great ideas. Higher ups were silent.” You go on to talk about the sort of free speech answer. I’m curious: When you say you wanted to scan ads for misinformation, are you saying that it’s doable?

Just to clarify that a bit: My tweet underneath that said something along the lines of—I don’t remember the exact words, and I wish I had the tweet; you might have it in front of you—even if you don’t scan for, or enforce, or check for misinformation, there are other things. Again, I don’t know that I actually want Facebook to be the arbiter of truth. Just to be really, really clear.

I did pose that question in what’s called an internal tribe at Facebook. I posed it because I wanted to understand the history of this policy of ours. We do use fact-checkers now for some of the organic content in the news feed—and I say “we” because I’m thinking about the time when I worked there. The problem is, most people who use Facebook don’t necessarily know how to differentiate between an ad and organic content, especially outside the U.S., but unfortunately they are different, so we have to talk about them differently. If we’re able to use fact-checkers to start to actually address this misinformation problem on organic content, I asked, why can’t we do the same thing in ads? I didn’t actually say we should or we shouldn’t. I asked: Why can’t we? What is the history? And a lot of the different project managers and engineers started putting out all these different ideas, which means there was a hunger to have this conversation, at least. And again, it’s not about Facebook telling politicians what they can and can’t say.

In this particular case, it was so egregious because it was such a debunked video. I have to be honest: I’m not 100 percent sure where I fall on whether or not that ad should have been allowed to run. But I do know where I fall on the rest of it: That ad, I assume, was also custom-targeted towards certain audiences. Their algorithms, I’m sure, boosted it to certain people. Those are the issues that I think are more dangerous. But I do know that there was zero appetite, beyond the engineers and PMs, to even consider whether we should be looking at ads for misinformation.

But again, I want to be really clear: I think it’s a really important issue for us to talk about as a society, but I also think it’s a distraction from some of the real issues that make this business model dangerous. I don’t often talk about the individual—I talk about the company—but this is why, when Mark Zuckerberg gave a speech about the freedom of speech, I just think it is pandering to us to stand on a stage and give an entirely passionate speech about how everything you do is about defending the freedom of speech, while not talking about the fact that you actually have a business model that in no way is about the freedom of speech.

Your platform curates content. Your algorithms decide what I do and don’t see. I don’t see everything my friends post. I don’t decide what kind of ads I want to see—no one at Facebook has ever asked me what kind of ads I want to see. They’re deciding what content to amplify, what content to highlight, and so I’m not quite sure I understand how that can be viewed as freedom of speech. If you were 100 percent an open platform that anybody could post on, and you did not use our human behavioral data to custom-target us, both with ads and with how your news feed algorithm works, then I could buy into the argument that you care about freedom of speech.

This reminds me of Facebook exec Nick Clegg saying in a speech that they wanted to make sure there was a level playing field, but you’re essentially saying the playing field was never level to begin with.

It is absolutely not level, at least not the way it currently operates. And the funny thing is, I still use Facebook. I want Facebook to be a good product. I want to stay connected to my friends around the world. And I know when I talk about these things, people assume I’m just this anti-Facebook person. I very rarely speak out or make public comments about this, but I cannot sit by and allow this completely false narrative—if you care about the freedom of speech, if you care about us beating China, if you care about … all of these really patriotic sentiments—which is still masking the real issue: Your business model is creating externalities in the world, including on our mental health, including on our civil discourse. And that, to me, is the issue that you are making sure we don’t talk about, because you’re having these grand patriotic conversations about freedom of speech.

The playing field is not level. And it’s not just Facebook, by the way—let’s just be really clear about that. But when you decide and curate my content … I mean, on Twitter, let’s be honest—I can’t prove this, so this is me making a bit of an assumption, but I think there are some people working on this—the most salacious content seems to somehow go much more viral than wonky, non-salacious, non-clickbaity things. And that, to me, is actually the dangerous thing. That and custom targeting. Being able to show 2 billion people 2 billion different versions of truth is really terrifying to me. And so—I know you haven’t asked this—one of the things I’m most concerned about is all of these binary conversations: freedom of speech versus censorship, publisher versus platform, as opposed to: We all want a healthier society. I believe most of us believe in freedom of speech, but this industry is the only industry that has full immunity to act however it wants when it comes to how speech looks online. And to me, that’s a really ridiculous binary conversation.

One thing I’m curious about is: If you still had your old job, what would you do with it now? What would be the things you wanted to worry about in the 2020 election cycle?

If I had my old job, I wouldn’t actually be able to work on the things I care about. Because if I had my old job—listen, all of the different reactive things they’re doing are important. The fact that they are looking for inauthentic pages, that they’re looking for inauthentic behavior—all of those things are important, and I give them credit for that work. And sure, if I was there, I would still be trying to figure out how to work on all of that. But what I would want to do, which I wouldn’t be able to do if I was still inside the company, is talk about the business model to begin with. And it’s not about whether or not Facebook should take money for political advertising.

It’s a very small amount of their revenue.

Right. And it’s about whether or not they should continue to manipulate me, based on the human behavioral data they have hoovered up, to custom-target me with certain kinds of content and ads. That’s the thing that I think is dangerous. That is the thing that is allowing one person in one reality to see a completely different version of truth than what I’m seeing. And it is rewarding—and again, it’s not just Facebook; we’re talking Twitter, we’re talking YouTube, and some of them have made changes—we’re talking about rewarding outrage, rewarding the most outrageous content, the most clickbaity things, to the point where we don’t even have one version of truth anymore.

So while there are all sorts of little things I would love to do inside Facebook, at the end of the day I actually think it’s government regulation that is going to [do it], and I don’t mean regulating what is true and what is false, but basic guardrails, even if it’s purely about transparency.

Well, what does that look like? Because a lot of what you’re describing sounds like just saying, this business model fundamentally doesn’t work, or shouldn’t exist. So what kind of guardrails would you put in place?

So, unfortunately, I don’t think you can outlaw the business model, as much as I would love to. I do think we could start to figure out how to quantify some of the externalities of this business model. And then—I know that gets to the dirty word of taxing externalities—somebody should bear responsibility for that.

There’s a whole bunch of different things that I think government needs to do, and the solutions communities are all arguing with each other now. Well, antitrust won’t fix it all. Well, a data privacy law won’t fix it all. Well, CDA 230 reform won’t fix it all. They’re right. None of it will fix it all. They’re all important pieces. The one that I care about, the one that really matters to me at the core of what I care about, is defining what these companies’ responsibilities are. And what I mean by that is, we all know the government’s responsibility is to protect the citizens of this country. Whether it does that well or not is totally debatable, but that is its responsibility. Because of this one 26-word—or something—small piece of legislation written in 1996, which—

This is the Communications Decency Act.

Section 230.

Section 230 of it, which every internet nerd is really interested in, and I think a lot of regular people have no idea it exists.

Yeah. Essentially, it’s an immunity—a protection—which made sense at the time. Again, in 1996, none of these platforms existed yet. It was meant to protect the internet so it could flourish and innovate, and not hold internet providers responsible for the content hosted on their sites. And yes, I think that makes perfect sense. I give this example often: If I build a website on Squarespace and I host illegal stuff on my website, I should be responsible, not Squarespace. I fully, 100 percent agree with that. But a lot of people hear this nonstop debate about whether or not these platforms should be considered publishers, and the reason is that if they’re considered publishers, they wouldn’t get CDA 230 immunity. I don’t think that that is the correct debate.

And this is where I get frustrated: These guys are curating our content. Their algorithms are deciding what they will amplify. Their algorithms are deciding what will or will not go in front of your eyes, which communities will be targeted with which ads. That is not a neutral platform. And I don’t mean neutral in terms of conservative versus liberal—that is where all of these false narratives are happening. I mean neutral in terms of just hosting content with no say in what I’m actually seeing. And so I think in 2019 we do not need to get rid of CDA 230, but we need to add a new category for what they are—digital curators, or whatever term you want to use—and then figure out the guardrails around that. And one of the guardrails could be: Your recommendation engine has to be transparent. Whatever it is—whether it’s your recommendation algorithm for YouTube, or the algorithm that’s figuring out how ads are getting targeted—if that was transparent, I suspect these companies would actually make a bigger effort not to let it show how dirty some of that has turned out to be.

When you say “transparent,” can you give me an example of what someone—not you, but a less savvy internet consumer—might then see?

People who don’t understand how it works are probably not going to get much out of it if these algorithms become transparent—they’re not necessarily going to see it themselves. But the researchers, the journalists, everybody else who does understand it will be able to actually make sense of it and say, oh, that’s so interesting.

We’ve been going for about 20 minutes, so I’m going to ask you: You were hesitant about talking to us.

I was.

Why?

I think there’s so much noise out there, and I am not one of those people who wants to contribute to the noise. And I also—I mean, I hope this will make it on—I’m skeptical of the media. Not necessarily because I think anybody in the media’s goal is to contribute to all of this craziness that’s going on, but at the end of the day, in order for you to actually get play on Facebook and on YouTube and get out there, you also have to play the game and have a salacious clickbait title. And so it is very hard for me to allow anyone else to own my narrative, because when I do—I mean, I gave one interview to a journalist not that long ago, and they ended up running it with this super clickbaity, salacious title that said Facebook Knows More About You Than the CIA, which was not actually the full context of what I said.

So it is very hard for me to trust someone else with my narrative; otherwise I also get reduced down to sounding like a disgruntled employee who is complaining, when what I really want to talk about is: What are some of the solutions to help make sure that we have a healthy internet, that we can preserve all of our values—which includes freedom of speech—but reduce the way this is harming our civil discourse and our democracy? By talking to the press, my message often gets distilled down to a sound bite, and then I just become one of those people contributing to that noise. That’s why I was very difficult and pushed back a lot when you asked me to do this interview.

Listen to the show, which also includes a conversation with New York Times opinion writer Charlie Warzel, below, or wherever you get your podcasts.



from Slate Magazine https://ift.tt/2Wn6Na1
via IFTTT
