
Building a Better Internet

Featuring Danielle Citron & Eli Pariser

Not too long ago, the Internet was seen as humanity’s great hope. Today it feels more like our undoing. We see social media amplifying negative voices and harassment and producing political partisanship and interpersonal dysfunction, and it seems like no one knows how to fix it—except maybe these two. Today we’re joined by Danielle Citron, a leading expert on information privacy, free speech, and civil rights, and Eli Pariser, co-founder of Upworthy and the author of “The Filter Bubble,” who now leads the New_ Public project. Together they share their views on the Internet’s current trajectory and how we might course-correct.


Zachary Karabell (ZK): What could go right? I’m Zachary Karabell, the founder of The Progress Network. And I am here, as always, with Emma Varvaloucas, the executive director of The Progress Network. And we are having a series of stimulating, or at least we hope stimulating, conversations with stimulating, or at least we hope stimulating, people who are members of The Progress Network, which we created as a way to establish a platform for like-minded voices who are focused in one way or another on constructing a better future and not simply focused relentlessly on all of the problems that beset our world today. Not that they’re not focused on all of the problems that beset our world today, simply that the arc of their work and the motive of their sensibility is to create the future that we want to live in, and not the future we fear we might be producing.

So one of the things that is on everybody’s mind, either top of mind or back of mind, but on our minds, is this question of what has the world of social media and social media platforms wrought? And if you ask most people that question, most of us will feel these days that it is producing a cacophony of negative voices, that it’s leading to political partisanship and dysfunction, that it is also leading to interpersonal dysfunction and even harassment, and the ability of people to vent their spleen and worse than their spleen with nary a consequence. That’s in real contrast, I think, to 10 years ago, or 15 years ago, when the general arc probably was much more one of wow, look at what these tools are making possible, look at what they’re unleashing in the best sense of the word. And maybe we are only now in the antithesis to that thesis, the awareness of the downside of the upside. But in that light, we’re going to talk to two individuals today who both have been focused for a long period of time on some of the thornier issues of what we do about social media, what we do about online communication, what we do about it personally, what we do about it politically, what we do about it corporately, what we do about it collectively.

Emma Varvaloucas (EV): So today we’re with Danielle Citron. She’s a leading expert on information privacy, free speech, civil rights, and administrative law, and she’s a professor at the University of Virginia School of Law. Her book Hate Crimes in Cyberspace explores the phenomenon of cyberstalking and was named one of the 20 best moments for women in 2014 by Cosmopolitan magazine. She also works closely with civil liberties and privacy organizations and counsels tech companies on online safety, privacy, and free speech. And our second guest today is Eli Pariser. You may know him as the co-founder of Upworthy, the former executive director of MoveOn.org, and the author of The Filter Bubble. Today, he is co-director of the Civic Signals project at the National Conference on Citizenship. So welcome Eli, and welcome Danielle.

ZK: I want to thank you both for being with us today and being part of the nascent, burgeoning, and hopefully expanding ever outward—and with the funnel pointing outward—Progress Network. I thought, given that both of you are deeply immersed in many of the dysfunctions of technology and internet and social media land, maybe we’d just start with Eli, in that it’s about 10 years now since you talked about the filter bubble as an emergent, or I guess at that point, an already emerging issue. I probably know the answer to this, but then again, maybe I don’t. What do you feel 10 years on? I mean, are our filter bubbles… has the moat become deeper and the bridges become higher, and we’re now in our bubbles to the degree that we’re not even as aware that we are? Or is there a lot of seepage, and our attempt to create them is constantly cascading against our ability or inability to filter out information that we don’t want to hear?

Eli Pariser (EP): I mean, I would say the power of algorithmic, personalized feeds certainly is way bigger than it was 10 years ago, and way more all encompassing. But I also feel like I’ve learned a lot in the last 10 years about how this actually plays out in the real world. And in several dimensions. So one is just the complexity of how algorithms show up in different people’s lives can’t be overstated, that if I’m someone who has 200 Facebook friends and tends to follow news pages, my experience of Facebook is going to be so different from someone who has thousands of Facebook friends… there’s just a whole variety of different experiences that people are having even on one platform, let alone across platforms.

And so, it’s really hard to make general statements about what’s happening for folks. I think the second piece is, I feel less convinced, or I feel unconvinced, that social media is the main thing driving people’s experience of being in bubbles. The fact that we live in neighborhoods that tend to be with people who agree with us politically, the fact that other media are also structured increasingly in that way, the kind of educational and economic segregation in the country—all of that together creates environments where it’s really hard to relate across differences. And I think social media is one piece of that, but I think it’s not the only piece. And then the last thing I’d say is, I’ve gotten less focused in the last 10 years on: if only people came into contact with the right content, then everything would be good.

Like, if only I as a liberal saw a little more Fox News, then I would really understand what it was like to be a Republican. I think, empirically, that’s not true. And in fact, often the content creates a worse impression than actually a relationship with a real-life person would do. And so my focus has shifted from, how do we get people in contact with more diverse sets of content, to how do you actually build the kinds of relationships that then build trust across these differences so that you actually know people. And to me, that’s a pretty different lens.

EV: I definitely want to come back to how you do that. But I was going to ask Danielle for a report card as well. It’s getting toward 15 years since you started writing about issues of invasions of sexual privacy on the internet and criminalizing revenge porn. And it’s funny that, when I first started paying attention to this not too long ago, it was just obvious to me: of course revenge porn should be criminalized. I mean, this is a discussion? But it was, and it is. So I was wondering if you could give us a report card, both on the attitudes around your work and also legally: where are we, law-wise, with the development of the kind of reforms that you’ve been pushing for?

ZK: And just before you begin, we’re actually covertly just trying to make each of you feel older than you actually feel.

Danielle Citron (DC): Well, we’ve definitely come a long way since 2007, when I started thinking and writing about cyberstalking. I wrote a piece called “Cyber Civil Rights” in 2008. And at the time I called for us to understand cyberstalking as a crime, as a tort, and as a civil rights violation, and to change Section 230. And at the time I had colleagues who were like, “you’re going to absolutely break the internet. You are the enemy of the First Amendment.” And they said it lovingly, like, I know for sure; these are my privacy and free speech colleagues. And so there was this idea that we would criminalize ones and zeros, or online content. In some ways this resonates a lot with what Eli was saying about the sort of pathologies that we see throughout history. You know, humankind’s pathologies, they’re exacerbated online, but they’re the same pathologies, right?

And we see them. They’re as acute when we’re face-to-face as they are online, and it’s like a whole-society problem. That was absolutely true; the way in which we dismissed gendered harms is a long story, right? That is nothing new in many respects. And we’ve definitely come a long way. Certainly, as we thought about domestic violence and sexual harassment in the workplace: “It was a triviality. It was no big deal. It was a perk of the workplace,” so to speak. And we hate to say it, but that’s what folks said about it in the late sixties and seventies. And, you know, we wrapped our minds around it. Law changed. We changed. It’s still not perfect. It’s a much better workplace than the one that I entered into in my twenties. It’s a much better workplace than what my mom entered into.

So we are making some progress. But what was interesting when I started writing about online abuse, which was very gendered and sexualized, often targeting women and sexual minorities (and gender and race, when combined, were combustive), it was like, “get over yourself, Danielle. This is no big deal. It’s boys will be boys. The Internet’s special. It has its own rules.” And I think I plugged away enough. I was irritating enough. And it started happening to too many people, right? You couldn’t just say, “oh, Danielle, it’s [unclear] be quiet.” It was so pervasive. And then we saw in 2016, it was journalists covering the election. It became so pervasive, we just couldn’t deny it anymore. And we had already made a whole bunch of progress. In 2014, Mary Anne Franks and I wrote an article called “Criminalizing Revenge Porn.” And it was shocking that we were proposing that we might criminalize invasions of sexual privacy and intimate privacy, though we’d been doing that in all sorts of ways, like video voyeurism laws. It’s not as if we haven’t thought about this before. And we had two laws on the books in the states in 2014, when we wrote that article. Now there are 48. But let’s be clear, the laws aren’t great. You know, Mary Anne and I have such heartache there. They’re mostly not felonies; they’re misdemeanors. They’re woefully under-enforced. They’re not well-designed. The laws aren’t always written in the way that we counseled lawmakers; there are a few states that have listened to our advice. So they just are woefully under-enforced in that way.

And law enforcement still reflects the social attitudes that we experienced, and have experienced, for a long time, which is: just turn your computer off, ignore it, it’ll go away. Which is not true, right? Of course. Google is your CV. Those attitudes are still pervasive. So, my new book is about intimate privacy, why it matters, and how we’ve got to protect it. And our social and cultural attitudes are still not where I wish they were. And the problem is global. So we think about American exceptionalism, but the one thing that isn’t exceptional about us is sexism, racism, gendered abuse. There is nothing special about us in that way. And so it’s a global problem. We still have a long way to go in terms of social and cultural attitudes, and laws have got a long way to go, because it’s all globally interconnected.

Where are the scofflaws in the United States? These scofflaw sites that are focused on hidden cams and revenge porn, where are they located? They’re not located in Denmark. They’re not located in France. They are hosted in Las Vegas. So the report card is: doing better, taking this stuff seriously. I was once crazy. I’m no longer crazy. That’s so fun. But we still have a ways to go. It’s not all a bad story. And it’s fun to talk to Eli. When The Filter Bubble came out, it was, I think, a really important piece for a general audience, because it helped us see the way in which algorithmic decision-making has so shaped our lives in ways that are just so bad for democracy. Right? And as you were saying, Eli, it’s still not great for democracy. But at least we’re working on all of these problems.

ZK: For those who don’t know, Section 230 is the part of law that creates a shield for Facebook, et al., to make them non-liable for content, essentially. I mean, that’s the simplest way to talk about it for those who are not familiar with the jargon part of it. And the pushback has always been, look, when most of our communication was by phone or through mail, you couldn’t sue the post office or AT&T for obscene letters or harassing content there. So the pushback’s been, we do want a world where information and ideas and communication flow relatively freely. So the challenge has always been, how do you address things that are essentially illegal, separate from the platform, right? I mean, you can’t harass people. There’s a whole series of laws on the books. And I think both of you would agree that that’s been a legitimate question, right? You don’t want to go to the other side of the equation where everybody’s listening and filtering and censoring. We don’t want to create, in the name of good, a pseudo-China surveillance state. Eli, you’ve thought a lot about culture around this. There’s also the unfortunate aspect of, no matter how well you regulate certain things, there’s also just the cultural reality, right?

EP: Yeah. I do think, and I’m sure Danielle has a bunch of brilliant thoughts about this, that one of the challenges with this whole thing is a notion about free speech that draws mainly on the way speech works in person and print, versus thinking about amplification. And I feel like people can’t help but turn a debate that’s actually about amplification, a lot of the time, into a debate about, should anyone get to say anything to anyone in any particular context? So it doesn’t address the revenge porn stuff necessarily. And I think probably Danielle would argue, even if I’m just sending it to one person, that’s a problem. But it’s definitely much more of a problem when a hundred thousand, a million, 10 million people can see it, or people can see it for 10 years or whatever.

And that’s not a speech issue in the same way. That’s not like, “I don’t get to say it.” It’s what are these companies that are in the middle that are amplifiers doing with it? And it’s funny because really when you think about it, nobody thinks you can walk into the town square, set up an enormous speaker system and blast your point of view for 24 hours a day, and that’s just free speech, what’s the problem? We accept that there are limits on amplification that are necessary in order to have a cogent public conversation. But when it comes to platforms, this all gets really blurry. And obviously it’s worth saying, I think we all know this, platforms aren’t First Amendment spaces to begin with. They’re private entities. They can take down whatever. But the way that people imagine what their rights are in digital space I think is kind of warped a bit, both by the fact that we have to do public discourse in what are actually private places. That’s a challenge. And then also the fact that, as Renee DiResta would say, free speech does not equal free reach, and people keep mistaking the one for the other.

DC: I’m with you. I think the idea… it’s a harm question. And networked tools produce a whole lot of harm that you’re not going to produce if we’re face-to-face with someone. And we’re not going to produce it if we send it via snail mail, or even publish it in a newspaper, where memories fade; think of the paper I got in my mailbox circa 50 years ago. It’s just the harm calculus is different. Right? And so the amplification, the pervasiveness, the persistence, it just creates harm that is really different. And Eli is absolutely right. The First Amendment doesn’t mean that every one and every zero is protected. There’s even speech that we regulate that we don’t even think is in the boundaries of the First Amendment, as my colleague Fred Schauer would say. And there are 22 crimes that are made up of words. We can’t act like speech is… “Oh, wrap it up. We’re good. No law here.” It’s a harm question. And in the 21st century, harm is different.

ZK: How do we deal with the countervailing harms? Here’s a good one, which I know both of you are really familiar with, but it bears repeating: there is probably, barring an extremely lunatic fringe, not a lot of people who would defend the recording, use, and publication of child pornography, just as a thing. There’s not a huge constituency in favor of this. But the laws that were designed at an earlier time, legitimately, to crack down on child pornography began to be applied as teenagers started sharing nude pictures of each other, consensual nude pictures, not revenge porn stuff. There was a spate of prosecutions of teenagers for sharing private pictures with each other, because it was either intercepted by the school or somebody saw it on a phone and did a screenshot. I think there’s been some degree of adjustment to that in the sense of, “these laws were not actually designed to make my 15-year-old a sex offender for the rest of their life.” Even though, statutorily, that’s an entirely legitimate interpretation of the statute. So what do you do about that? And then kind of leaving it for prosecutors to go, “Oh, right? Yeah. We don’t really want to go there. But we could go there.”

DC: That’s like when law goes off the rails. We invest a lot of discretion in prosecutors, and you’re right that child exploitation laws are strict liability. So we’re not looking at your mental intent: if you have a nude picture of a teen, someone who’s under 18, it’s going to be considered child porn if you distribute it, you sell it, you possess it. And you’re right. There have been absurd prosecutions in Virginia and North Carolina; those were about seven years ago. And we’re seeing less of that now. And it’s in large part because, like you asked, what’s the purpose of the law? The purpose of the law is to prevent child predation and child rape. It’s not to deal with the teenagers. Right? And so those were so absurd and painful, and often who really was most hounded were women, girls, and people of color. It was every bad thing about police discretion you can think of wrapped up into one. But we do that in the law all the time, because we invest prosecutors with a whole lot of power. And the police, they can disappoint us in lots of different ways a lot of the time. I don’t know, Eli, if this interfaces with what you’ve been thinking about, too, the sort of child-exploitation-material conversation.

EP: Yeah. I think part of your experience, and the trajectory of this whole topic area over time, is about who’s designing these technologies in the first place. Which, I can just say, I’ve been a relatively opinionated dude on Twitter with political opinions and all this stuff. I’ve never gotten, like, “slit your throat and die, bitch.” And that’s just table stakes for all of my female friends who are saying anything at all. So it’s easy: if I was designing a product, yeah, I don’t have that experience. And I might, with all the best intentions, think, “oh, this is just someone who’s having this particular little… it’s a corner case, that’s a thing that’s happening over here.” I just think it’s a structural problem with how we’re designing these technologies in these companies, that you really don’t have those people at the table who are going to be able to say, “no, every woman has this experience at some point, if they have a public life online.”

EV: It happens even if you barely have a public life online, because that’s happened to me, and I have 150 Twitter followers. So I take your point very seriously, Eli. But this ties in very nicely to the work you’re doing about how you do design a healthy space, and how you do design a space with maybe other people in mind who weren’t in mind when the current spaces we have were designed. And it’s a basic but really important point that it doesn’t have to be like this. The spaces that we have and that we use, that doesn’t necessarily mean that that’s what we have to continue using. So I wanted to ask you, first of all: what would a healthy online space look like? We know what it doesn’t look like. But it’s hard to imagine something that’s not right in front of us.

EP: Well, so I do think the bottom line is, there’s been this mythology that the internet is this beneficent force of its own that is magically going to shore up democracy, decentralize power, make everyone rich. And there was a period, I think, where I personally bought into some pieces of that, in the sense that when I was running MoveOn in the 2000s, I really felt like this is technology that is going to make ordinary citizens more powerful. I think as we fast-forward through the last couple of decades, it’s become clearer to lots of us that that isn’t an inherent trait of technology or the internet. And it may not even be the main trait of the internet, the way that it’s structured.

But I think the other piece is, when you look through the history of communications technologies, all the time there have been these kinds of resets where countries and regimes have decided to shape their communication mediums to suit their national needs. And some of those ways have been problematic, but others have been the creation of public media, or the decision to build out a journalistic sector that is not just the yellow press. There are all these moments in history where there was an intention to structure things in a certain way. I think, right now, the online spaces that we have fail to pass the lab test, in terms of what we would actually think would work as a global connected medium, in two ways, or maybe three.

One is the idea that you can make an algorithm that works for 3 billion people in 190 countries. Why would we think that was even slightly possible? That sounds implausible because it is implausible. And what we know is that it doesn’t work that way. And there are all sorts of countries, especially in the global south, that have to deal with a version of Facebook or a version of Twitter that’s actively not working for the way that that society is structured. We live, in the United States, in the best version of Facebook. This is as good as it gets. Because that’s where all the engineers are, and that’s where a lot of the political capital is. So everywhere else is living in a less attended-to, more wild west, less moderated, less adjudicated version.

So one piece is just scale. I don’t actually believe that you can do it at scale. Another piece is structure. A lot of our work in New_ Public is trying to think through, what can we learn from offline spaces that can inform better online spaces? And in terms of structure, one of the things that you look at in an offline community is, yeah, we have private businesses, they play an important role, but we also have all of these social institutions that do a lot of the really critical work of inviting people in, binding them together, helping make sure that everybody has their basic needs met. And we just don’t have any kind of commensurate social sector online. We’re trying to solve every problem through the lens of a venture-backed for-profit company.

And I think there’s a role for that, but it’s not the only way to solve problems, nor does it scan that we’d want to solve a bunch of thorny public problems inside of that structure. It’s just not the right structure. And then I think the third piece is about power and governance. The other thing we know about what makes sort of functional societies and communities and democracies work is that you do have these kinds of federated layers where people actually are able to have a say in who gets to say what and how things work. And that’s a really critical part, not only of building spaces that work for people, but also building faith in the whole enterprise of public space. And we’re living in a very technocratic autocracy online. You cannot as a user say I want Facebook to change the way it’s doing X and have any say in it.

So governance is wrong. Structure’s wrong. Scale is wrong. I think. And I think the good news is—we’ve talked about child porn, we’ve talked about harassment, we’ve talked about the whole structure of the internet being wrong—but here’s the good news. The good news is there’s a whole bunch of people who I think are starting to pivot toward what folks are calling Web 3.0, or the decentralized web: basically, other ways of thinking about how you could bring digital space to human-scale, governable structures. And I think that’s where the pendulum is swinging. There are a lot of big questions about how you deal with things like child porn or other things that you don’t want in these structures. But I do think that’s where we’re going to see the next wave of innovation and better spaces, because that’s always what’s worked in human society and community: finding these local, situated solutions to these problems generally beats out the one-size-fits-all, top-down version.

ZK: I want to go back to one thing you said before, and also ask Danielle about this, because you said you used to believe at one point, in a more utopian moment in your life cycle, when you were doing MoveOn, that these tools would empower the individual. It would seem, in a lot of ways, that they certainly continue to massively empower individuals—a 15-year-old has a million followers—though that’s often the negative: I’m going to harass someone in a way that is amplified and multiplied. I mean, some of it maybe is confronting the fact that if every individual’s voice is amplified, a lot of those voices are voices that, in other moments in society, we were probably legitimately interested in not hearing. That may be just somewhat of a confrontation with human nature. I guess, Danielle, it kind of goes back to this question of, all right, set aside the things that are already illegal that are amplified in a hugely destructive way, the whole realm of harassment, particularly sexual, particularly reputational: what do you do about the political side of it?

Because, you know, pamphlets from time immemorial were libelous in ways that were not necessarily prosecutable. People have been saying vile, rumor-mongering things about political opponents, either to gain advantage or gain followers. And we’ve kind of accepted some of that as messy, maybe unfortunate, but just the reality of human democracy, wherever it exists. But there’s a huge call now to say, no, you can’t do a picture of Nancy Pelosi looking like she’s slurring, because it implies some sort of mental impairment, which implies dot, dot, dot, dot, dot. I’m eternally wondering how those lines actually get drawn in a way that facilitates, and also just allows for, some of human ugliness, right? Particularly in the political realm. I’m moving away from the realms where I think there should be no argument.

DC: Right? So the Supreme Court case New York Times v. Sullivan talked about how we need to have breathing space, especially when it comes to political speech. There’s even some suggestion that we should ratchet back New York Times v. Sullivan, right? In many ways the First Amendment is imperfect, but it’s got some great, important lessons. And one of them is, when it comes to the political sphere, that we should treat that as speech that gets the most protection, and we ought not to bubble wrap it. We don’t want to bubble wrap really anything. But if there was bubble wrap, it’s political speech. And, you know, the question of what is political speech, what’s newsworthy, legitimately newsworthy, can be… There are some difficult questions, right? Of course, we can come up with a whole bunch of scenarios where we would say, but hold on a minute, that isn’t terribly legitimately newsworthy. But for the most part, we do have tools in the First Amendment that we can deploy. I hope we don’t change that in many ways. There are some ways in which we need to bring greater regulation into online life. And I think there are ways that we can fix Section 230 without burning the house down at all—keeping it. But… I think the cheap fake of Nancy Pelosi slurring her words is the kind of speech I think we’re going to protect, if we stick to the First Amendment. Do we understand that as actually maliciously defamatory? Probably not. It’s probably parody.

It was harmful, and I wanted Facebook to take it down; I criticized them when they didn’t. Because they’re a private actor. They make all sorts of choices. We’re not… we’re users. We are products. It’s not as if they are creating spaces of public discourse; they’re in it, at least in principle, for themselves and their bottom line. They’re making money off of our data. I think we do have some tools in the First Amendment toolbox. We can say that the First Amendment gives us that as the doctrine, and free speech values give us play in the joints, so that we can have robust, obnoxious, offensive, unlovely speech that is about, and by, matters of public and legitimate public interest, whether it’s about politicians or political issues. We can have a lot of breadth; we can have wide breathing room. But I think where we hit the wall, and should hit the wall, is where we’re seeing public health become endangered.

So the COVID disinformation, I realize we can’t really punish that. The First Amendment is probably not going to allow us to proscribe that kind of speech. We only punish lies when they’re harmful in concrete ways that law would regulate. But don’t we want that taken down? You know what I mean? We ended up with people dying because of COVID disinformation, not wearing masks, becoming convinced that as a matter of ideology they shouldn’t get vaccinated, which is frankly absurd. And there’s a whole lot of death that followed. So that’s my negative side of the First Amendment. It would let that speech play online, and it’s really disastrous. And so in some respects, I’m happy it’s private companies that are making these choices about amplification.

It’s also true of The New York Times. We’re talking hard copy; they make editorial choices all the time. And so there are some falsehoods that are “political”—and I’m saying this with air quotes—that are really destructive and dangerous given scale and persistence, to go back to Eli’s original point about amplification. I am grateful that we have gatekeepers, hopefully ones that are being responsible. They’re not often responsible, and I want to nudge them to be more responsible. And so we’ve got to change the law, I think, a little bit there. So there are some lies that in the past maybe weren’t so damaging, Zachary, like the pamphlet. And we said scurrilous things; Hamilton said scurrilous things about people. So did Madison. They all did. But with the reach, we can kill people with lies that have to do with public health. So I think, again, the harm shouldn’t drop out of our conversation. ’Cause it’s not an ideal world; it’s the world we live in.

EP: If I could just add one piece to that, I also think we underestimate the free speech consequences of having hostile spaces. Like, we always talk about free speech in the context of, why don’t I just get to say whatever the hell I want? But we don’t really think about how many people self-censor because they aren’t sure if they’re actually going to be able to speak safely or be heard without a huge bunch of negative consequences. And there’s some great research that is counterintuitive on this point, which Nathan Matias at Cornell, among other people, has put together, about the way that having some rules and enforcement structures—and this isn’t like government rules—but this is just having some structure to a speech forum actually makes it more inclusive for everybody.

Because when you walk in, and you’re someone who maybe tends to be lower status or in a typically less powerful group, and you don’t know what the rules are, often you’ll assume, correctly, that there’s some kind of negotiated agreement about what’s okay here that you don’t totally understand. And you silence yourself until you can figure that out and figure out what’s okay. If you say, like, here’s what we do here. Here’s what’s okay here. And here’s what’s not okay. The research shows that a lot more people, especially women and folks of color and other folks who tend to self-censor in other environments, are game to participate. And so all of that’s to say, in practice, not as a matter of the Constitution, but just in practice, if what you’re interested in is people getting to speak, then you have to care about everyone: about what’s stopping some people from speaking, as well as about reining in people who are inclined to speak a lot.

ZK: I do think this goes both ways, though. I, like most people of my class and place, had to debate whether or not fudging my credentials to get a vaccine was the right thing to do, as opposed to, should I get a vaccine? You know, kids are getting a vaccine. And I’m not a big fan of the conspiracy theories that, like, a vaccinated person can somehow transmit negative DNA to a pregnant woman and create osmotic birth defects… I’m not making that one up, by the way. That’s actually out there. There was a school in Florida that told vaccinated teachers not to show up for fear that it would have these negative effects that we would not quite be able to know. And then there’s the microchip theory, that the vaccines are carrying little embedded pieces of information, kind of like the tinfoil with UFOs.

But also, I’m not sure I want to be in a society where people can’t say, look, I don’t believe I should do this. It’s untested. I’m 22. You know, what if it’s going to create negative effects? I’m not saying that I think that should be a guiding principle, but it was a guiding principle about other vaccines at other times. Again, I’m mindful of not inadvertently, by virtue of moral principle, creating the very filter bubbles that you talked about as having negative impacts, where arguments that we don’t want to hear, but that are somewhat good faith, are literally filtered out.

EP: I don’t think we have to aspire… Like, those aren’t the only choices, right? It’s not just: anyone can say any crazy thing, or there’s top-down management of what is true and what is not true. Another, better way in between those things is to say, here are the qualities that a statement has to meet in order to be considered amplifiable. Right now, the way that this works online, those qualities are: it gets a lot of engagement. It makes people mad. It makes people happy. It makes people whatever. But it’s getting a lot of engagement; that’s the only criterion. We all know the old gatekeepers had their set of criteria, and they were problematic for a whole bunch of reasons as well. But I don’t see why we should frame this as, either everyone gets a right to amplify whatever crazy thing, or we’re going to fact-check every single tweet before it gets to be tweeted.

There’s a middle ground, which is to say there’s a higher bar for stuff that gets seen by hundreds of thousands or millions of people. I think we can over-imagine that digital platforms are an even playing field where every citizen gets to be heard. And in fact, most people super don’t, and it’s still very heavily tilted in favor of people with enormous followings and the people that they rebroadcast. So if you look at what tweets most people see, most people see tweets from people who are celebrities, essentially. And there are some viral, side-moving tweets, but the structure of that medium is still pretty strongly more broadcast-like than truly social in the way that we imagine. So the second piece is, I think it would be reasonable to say you have some additional responsibilities if you are going to be seen by millions of people. We’ve said that in a bunch of different places, again, not necessarily as a law, but as part of the contract of being on these platforms: you’re talking to a million people, so maybe you need to show your work when it comes to sharing something that is important medical information, or maybe you need to be able to defend your work if someone challenges it. There are all these interesting ideas coming out of the blockchain world about courts where you can stake your claim against… Adjudicate. Like, I think you said something false. Prove that you didn’t. So there’s a bunch of different ways that you could imagine this working if you accepted that there’s some amount of responsibility that comes with amplification.

EV: I’m imagining the high court of internet truthiness.

EP: No, but actually, one of the things… One interesting question is, how do you bring it down to the ground? How do you bring it down to where everyone has some experience being part of making decisions about what can be said, or at least a lot of people do? Like, what would digital speech jury duty be like? If there was sort of ingrained this idea that, like, everyone’s got to sit on both sides of this table and figure out what we want to think is okay here, I think there are multiple reasons to do that.

DC: And there are some models for that, right? World of Warcraft: ways in which we have, like, jury systems for content moderation. Though of course you would need to know what the values are that we’re going to be adjudicating, and what would work. And maybe we need, like, a small claims court too. Forget the high court. I like the jury duty metaphor too, though.

ZK: You know what gets left out of this? I’m sure it hasn’t been left out of it by either of you, but I feel like it is left out of some of the discussions: how Wikipedia has created its sort of moderated world very quietly. I think about this a lot because I have two teens, and Wikipedia really is like the Encyclopedia Britannica on steroids, but it is something that we all use. First of all, it shows up very highly in most searches. So the temptation, I certainly am guilty as charged, right? If I’m just looking for something quick and dirty and informational, I will often just click on the Wikipedia link because it’s present and it’s there and it’s concise. But if you think about something that would be ripe for manipulation of information, and they were aware of this very early on…

DC: It’s law-like. That’s the thing about Wikipedia: it’s got a court system. It is really rigid. So, you know, Joe Rigal and Dave Hoffman [indistinct] have written about it; there’s a lot written about Wikipedia, and it’s incredibly law-like. So it’s intensive, it’s all volunteer, but it’s intensive. When you talk to the folks who serve as arbitrators, I’ve given talks at the Wikimedia Foundation, they are as geeky and as excited to talk to me as a law professor, because they’re acting as arbitrators.

ZK: Why do you think it’s been so limited to that one, I don’t know what you call it, vertical world reality?

DC: It’s incentives, I suppose. Are Facebook users going to figure out and learn the rules of the world? Does anyone really read the terms of service agreements? We should, all of us; we should learn about them. We should be taught about them and why, and Facebook should teach us why it bans hate speech and how it defines it. But they’re not doing that. And even if they tried, would we, each and every one of us? I hope we would, but that’s an incentive problem. I think the folks at the Wikimedia Foundation, the folks who are engaged as arbitrators, they’re, like, really into it. I mean, it’s not a small number of people. They have these meetings of, like, 400 people. I spoke to a room of 450 people. And they were hardcore, all the editors and arbitrators. It’s no accident, Zachary, that you go to Wikipedia, because, you know, as it turns out, they [indistinct] problems pretty quickly.

ZK: But that’s why I’m using it. It’s intriguing to me that here’s a really good positive model, right? Not a lot of people are claiming they’ve been censored on Wikipedia. It is a vast and ever-increasing trove of information that links, through its footnotes, to vast and ever-increasing troves of information. It is a self-governing, unbelievably potent global model of an information conduit that is guarded, right? Whose parameters, of, like, something in the realm of reality, are guarded. You would think that that would be like, ah, here’s a model we could actually build on.

EP: You know, I think there are two things. One is, like, we have a whole venture capital industry that produces Facebooks, and we just don’t have the analogous thing that produces Wikipedias. So it’s a model that works because it is a values-driven enterprise. If it was a company, I think people might feel very differently about spending, you know, 2,000 hours a year contributing to it for free. And I do think that gets back to this point of, like, this is a way that we organize a bunch of functions in society all over the place. There are community centers and high schools where people contribute their labor because it’s serving some public purpose, and also because it’s satisfying on an individual level. I think we need more institutions like that digitally, but I don’t think that if Facebook was just like, “Hey, who wants to volunteer to spend thousands of hours doing our job?” that people would have the same kind of feeling about it. So I think that’s one piece of it.

Second, I want to be careful not to over-glamorize Wikipedia, because I think, famously, for example, you know, the gender split on Wikipedia articles has been pretty problematic. Katherine Maher, who ran it most recently, has done a lot of work to fix that. But it turns out that, for the kinds of people who want to argue about the details of a fact online for free, there’s both a gender skew to, maybe, who’s naturally more interested in that, but then also, like, a lot of barriers to entry for women who want to get involved in that community.

DC: And harassment, unfortunately, for arbitrators. Yes.

EP: What that’s led to is that, actually, Wikipedia’s write-ups on women in history, and more female-leaning topics, tend to be way less well developed than, you know, every character in a big video game having its own, like, 60-page write-up.

EV: And so, this is making me wonder, something that Danielle said too about not reading agreements online. I live in Europe. I know that GDPR is supposed to be good for me as a citizen. It’s the bane of my existence because I never—it’s always just accept. Or do you want to look at your cookie settings? I never want to look at the settings. I never have time to go through and read what I’m actually agreeing to. So it just seems completely useless, ’cause I’m just accepting, accepting, accepting because I want to get onto the site. So I’ve been wondering, Eli mentioned this internet 3.0 that’s coming, and I’m wondering if people listening feel like that’s like waiting for Godot, right? Like, what’s required of us in the meantime? How can we usher in this internet 3.0? Is it, I don’t know, just being on Instagram less, so it’s eating our brains less? Or are we waiting for the internet overlords to throw us a bone? What should we be doing?

EP: Well, I would say, like, there’s a bunch of experiments that are happening at all sorts of different scales. And I think a lot of the most exciting work is happening at these really little scales, where people are just playing with different ways of being together. We accept a little too much that a newsfeed at the center is a good way for a group of people to communicate with each other, and it turns out there’s a whole bunch of other… we could have… Look at the explosion in just audio as an example: there’s this whole other mode that’s available to us that creates really different dynamics, in many cases, for how people are relating. So I think one piece is just being part of those experiments. It’s going to be a combination of community structures and human social intelligence and technical intelligence that I think is going to usher in better digital communities.

It’s not like someone’s going to figure out a protocol and then it’s all going to be good. I think it’s really gonna take a bunch of non-technologists saying, this is what works in terms of how to get people together. We kind of know this; now how do we make that happen in a digital space? So I think there’s a lot of opportunity for everyone to participate in that, I guess is part of what I’m saying. And then I also think, in my opinion, we need to create the space for those things to emerge. And that probably means reining in some of the most anti-competitive practices of the existing platforms. So for me, you know, the idea that data isn’t very portable, that I can’t move around or get access from a new social network to what’s happening in Facebook, is a big impediment to even starting to really do those experiments at scale.

And so I would look at some of that work as well. And then I think, longer term, there are some really exciting ideas coming up, to your consent point about terms of service, around things like data trusts. So a data trust would be: I’m going to trust a third-party fiduciary to negotiate with platforms on behalf of me and my data. And I’m going to pool that with other people so that we’ve got some ability to have more power together. There are some regulatory things that would need to be in place for that. But I think that might be a much better regime than trying to imagine that each of us is going to make these really difficult calculations after reading through 50 pages of legalese.

ZK: And Danielle, as we wrap up, what are your thoughts about this brave new world?

DC: So I think, and this is going to bounce a bit off of Eli’s insights: he gave us a lot of examples of ways in which we can make individual choices perhaps better. And I think in some respects, the conception of liberalism and the “choosing self” is not going to solve the problems; it’s much more structural. And I don’t mean to mischaracterize what you were saying; just, some of them were very individualized. Like, how do we enhance individual choice? Where choice is just not the right question. We need much more structural rules around, for me, data collection. We need to cut off data collection. We need better rules that are structural, that get at structural problems of inequities, so that we are much more deliberate about those choices online, in a world in which our data is being constantly collected, used, mined, shared, and sold.

ZK: I do want to thank you both for the work you’re doing and for the ideas and the intensity and the passion with which you apply yourselves to some of the most crucial issues that are going to be facing us for a long time to come. And there are all these things we didn’t talk about, in terms of states that are more controlled, and what do you do about that? But I guess we will leave that for another conversation. Maybe we’ll reconvene a little bit down the line and continue to yap, as we should and as we will. So thanks to both of you.

DC: Thank you.

EV: Like you said, Zachary, about this brave new world being ushered in: the question is whether we’re in the low part of the V, right? So, like you said in the intro, we started out with such high hopes. The internet, we thought it was just going to be the coolest thing since sliced bread. We were all playing Neopets and, I don’t know, thinking that we were all going to be equally as powerful as the other. Now we’re in the bottom of the V, maybe, and we’ll be climbing our way back out of it. Do you think that’s right? Does that seem correct to you?

ZK: Look, I think generically, that tends to be the case. I do think attitudes towards the shiny new object inevitably get more critical over time, as we become more familiar with it, and as we examine it a little more and recognize, maybe this is a little more flawed or problematic. And look, for something that was supposed to amplify millions, if not billions, of human voices, the idea that there wouldn’t be a downside to that, given just human nature, was probably naive to begin with. But I do feel, and felt a little bit with the conversation—and I mean this constructively—that we are in a moment of focusing on what this has all wrought in a particularly negative way. And I think that’s a necessary moment, to think about, as Eli talked about, what’s our web 3.0 or internet 3.0 or whatever this is called, our social media 3.0. Because we are recognizing that what we have unleashed was an immense amount of genuine positive connectivity, but we also unleashed a lot of the dark id of human nature that we had kept either tamped down or hidden in the basement or locked away in the attic, in a way that, you know, might’ve been repressive in earlier times, but also had some social utility. And we’re trying to grapple with, what do you do when you’ve—I’m gonna use another cliche—let the genie out of the bottle, and there’s no going back? But I think it’s really important that people are looking at these things. And someone like Danielle, who has been so focused on these egregious uses of these tools to do really ugly, awful things, even she will defend to the death the right of people to speak. The counter-reaction is not to shut it all down. And I think that’s quite positive.

EV: Definitely. You know, you said putting the genie back in the bottle, which you can’t do. I was thinking we need to shepherd the trolls back under the bridge. You know, maybe that’s the metaphor that suits us here. Rather than them lurking in your mentions and running amok, we just need to get them back under the bridge where they belong. And they can ask us for tolls every once in a while, rather than terrorizing us on a daily basis.

ZK: Oh, I love that we’re using Scando cliches instead of other ones, but that’s a really good one. Shepherding the trolls back under the bridge. I think that’s a good one. This is clearly a conversation we all need to be having. We need to be having it with each other. We need to be having it politically. And I think one thing we didn’t talk about, but I’ll leave us with, is: the people who have designed this ecosystem and the tech elites who have profited so mightily from it, I think, are distressingly not engaged with the difficulty of these conversations and the difficulty of these decisions. They are when they’re, like, hauled in front of Congress and have to be, but they are not involved in these discussions in an ongoing way, as they should be. And sure, maybe they’re being told by lawyers, don’t say anything, because if you do, someone will use it against you. And I get that that may be the case, but that may be partly an excuse; I don’t think it’s a sufficient excuse. We all need to be engaged in these questions, and I wish the people who are kind of at the epicenter of it were more engaged in them.

EV: That’s a really good point. And I think that is exactly the reason why they’ve drawn so much public ire, right? It’s like, they’ve just dropped this thing on us and then walked away and wakeboarded with the American flag, if we’re talking about Mark Zuckerberg in particular. I don’t know if everyone saw that. I’m sure you did, because it went viral. But I don’t feel very optimistic that they’re gonna, like, suddenly be part of the public conversation without, you know, more of the stick part of the carrot and the stick. But who knows.

ZK: And regardless, it’s a conversation that we are going to have and to some degree that they are going to be required to have by nature of all of this. So we will keep having them. Thank you all for listening.

EV: Thanks everybody.

To find out more information about The Progress Network and What Could Go Right?, visit theprogressnetwork.org. You can also sign up for our weekly newsletter to stay up-to-date with everything happening at The Progress Network. If you like the show, please tell a friend, share an episode, or leave a rating and review on Apple Podcasts, Stitcher, Spotify, or wherever you’re listening to this podcast. What Could Go Right? is hosted by Zachary Karabell and me, Emma Varvaloucas. We are produced by Andrew Steven. Jordan Aaron is our production coordinator. Executive produced by Jeff Umbro and the Podglomerate. Thanks so much for listening.
