The Dish on Disinformation
Featuring Jiore Craig
How do we know which information on social media is real? How can we overcome campaigns of disinformation and misinformation? Zachary and Emma speak with Jiore Craig, global and political media strategist. They discuss the formats that disinformation campaigns use, how they target age groups differently, and various ways for users to combat these campaigns.
Jiore Craig: We really are online for as long as possible and they are wanting us to be in that sort of passive brain. So when you’re scrolling online, you’re sort of pushed into this passive state and it doesn’t incentivize you to explore or question or wonder about who’s behind the content you’re seeing. And so I encourage people to really make the content in their feed earn their attention.
Zachary Karabell: What could go right? I'm Zachary Karabell, the founder of The Progress Network, joined as always by my co-host Emma Varvaloucas, the executive director of The Progress Network. And What Could Go Right? is our weekly podcast where we try to penetrate the web of gloom and doom, where we try to add a different note to the dystopic chorus of everything is going to hell in a handbasket, which it might be. Absolutely none of us know the future.
The world might be heading down fast, like a vertiginous ski slope. We don’t know what the future holds. We know what we fear the future holds. We know what we hope the future holds, but none of us know what the future holds. If we did, life would be a whole lot simpler and a lot less interesting.
So the question is, what kind of decisions can we make individually and collectively that will more likely create the future that we want rather than the future that we fear? And we started The Progress Network with the simple proposition that we don't spend nearly enough time looking at the answers to our problems, or listening to the voices that are more sanguine and sensible about the potential of the future being better and our capacity to make it so, as we do listening to the cacophony of fear and the noise of doom and gloom and dystopia, and that we don't serve ourselves particularly well by paying outsized attention to all that's wrong and less attention to what might be going right.
Hence the podcast, where we look at a lot of things that are going wrong; we just do so with a sensibility that there's something that can be done about it. And there are lots of people spending a lot of time trying to do something about it. And one of the great issues of our day, which is really more nascent, even though it's pretty present already, is the ease with which disinformation, misinformation, and now deepfakes, AI, and fake information are disseminated through the technologies of social media and the internet, all the things that have permeated our lives to an extraordinary degree over the past 15 years.
And we've talked about a number of these already on the podcast. We've had a series of conversations about these very issues of information, disinformation, civic responsibility. What do we do? How much is that endangering democracy? How much is that endangering privacy and rights? And there's particularly the proliferation online of lots of dark stuff, whether it's child pornography or, you know, deepfakes that specifically undermine the rights and abilities of women in many societies to function.
So we're going to talk about all of this today with someone who is an expert in this, and this is an area where expertise actually matters. People who understand algorithms, follow the laws, and understand the trade-offs really are dealing with these issues in the weeds every day. So, Ms. Varvaloucas, who are we going to talk to today?
Emma Varvaloucas: So today we're going to talk to Jiore Craig. She's a resident senior fellow at the Institute for Strategic Dialogue, where she focuses on digital integrity. As Zachary just mentioned, her work is all about safeguarding democracy globally. She researches and measures the impact of online harms on society, and she advises global leaders about all of that.
So we’re going to talk to her about tech policy reform, tech accountability, what you can do to have some kind of control over your own algorithms in your life, and of course, we’re going to talk about cats and dogs in Springfield, Ohio.
Zachary Karabell: Cats and dogs, man. Bring it on.
Jiore Craig, it's a pleasure to have you with us today. I mean, I assume you're with us, right? That's actually you and not some artificial simulacrum, an AI-generated Jiore.
Jiore Craig: It is me. It is me. I can’t say I have any way to verify that for you, but you can just trust me, I guess.
Zachary Karabell: Alright, so for the time being, we are all gonna stipulate that each of us is in fact each of us and not an AI creation of any of us.
But as Jiore just said, how would you really know? At least at this juncture in time. So with that, let us plunge into the topics du jour. I feel like we've been hearing about the potential democratic threat of disinformation and the increasing AI capacity to create fake individuals. Although I don't think we're quite there yet visually, we're definitely there in terms of audio. I mean, you can do a pretty good job creating someone's voice and a message without anyone easily being able to tell whether or not it's them.
But I feel like we've been hearing about the potential threat to particular democracies for a while, and it doesn't feel yet like those threats have translated into palpable, explicit, clear threats to our system. We have a lot of other threats, you know, people convinced about election interference and fraud and all that. But this particular issue still feels like it's in the realm of "the problem is coming"; it doesn't feel to me like the problem has yet manifested in a way that's, you know, commensurate with the concern.
Jiore Craig: If we're simply talking about deepfake audio and deepfake video content, then I mostly agree, with an asterisk: looking globally, there have been several instances where manipulated audio and manipulated video have had pretty dire consequences. And it would be wrong of me not to mention just how bad the non-consensual sexual imagery problem is when it comes to deepfake porn and deepfake images of women and children around the world.
It's extreme, and it's having a massive chilling effect on people's way of life and on, you know, women's and children's ability to participate in society, let alone democracy. That is ongoing, and it's worth stating because it should be at the top of the list of the way we're looking at that deepfake concern.
What we're hearing the most about, though, which is what you're saying, is a very specific election threat. I'd say that the greater threat of disinformation and just overall deception is the bigger and wider concern, and the AI-generated forms of that are just sort of in the same bucket as the wider deception.
One problem we do face on the AI-generated front is that we don't have the capacity to, as I established at the start of this, verify that AI-generated content is in fact AI-generated, or detect it in full. Each company kind of has one piece of the puzzle there, and there's really no comprehensive way to verify or detect, which is problematic for the institutions that would have to communicate with voters and citizens, should there be deceptive content.
So, we’re a little bit in a bind there. We already have that problem somewhat with information generally, but that’s a bit of a bind for at least 2024.
Emma Varvaloucas: As it happens, I just did an article on the non-consensual, like, pornographic deepfake stuff, especially targeting women. I was kind of surprised at how quickly states moved into legislative action after January 2024, when the Taylor Swift deepfakes went viral.
And I was also surprised to learn that it's an oddly bipartisan issue, both in the States and abroad, and that, particularly in Europe, they're kind of getting their act together on it. But even where it is illegal, right, and in a lot of places it's not illegal yet, how do you stop that from happening and spreading?
And how do you hold people accountable? Because it seems nigh impossible at the moment.
Jiore Craig: Well, as you're suggesting, the bipartisan nature of especially child online safety, in the US and in other parts of the world, is one of the more hopeful, albeit not fast enough and not sufficient, legislative conversations happening around technology.
So while it seems like there was quick action following what happened with Taylor Swift, I would say that that's actually more the result of individual stories coming forward, of young people, teenagers, and women who are experiencing this. I'd say that I'm not aware of any legislation that is, like, over the line and perfect and really getting at this in a comprehensive way. I know there's some in development.
The challenge you have across the board when it comes to holding technology accountable in this sense is that you have the problem of privacy, which is a really important right to protect, and you have the problem of freedom of speech, which the companies will use in many cases to try to avoid responsibility for what happens on their platforms.
And of course, when you get into things like CSAM, child sexual abuse material, and you get into things like non-consensual pornographic imagery, then the next layer would be terrorist activity. Those are challenges, and that type of content becomes something that we have to weigh really carefully against those privacy concerns and those speech concerns.
And so I'd say that that is sort of the lowest-hanging fruit, the thing people can agree is a problem and should be explored. But even then, you still get hung up in the legislative discussion around policy. And so I'm not sure that we'll see a comprehensive solution. But I'm more hopeful about a comprehensive solution to things related to child online safety than to really any other type of online harm.
Emma Varvaloucas: You’re saying you’re not sure if there’s going to be a comprehensive solution to the AI pornographic deepfakes, but perhaps there would be in terms of child safety. So I’m just kind of curious what that would look like.
Jiore Craig: I think that we are closer, and people are taking things like CSAM, so anything related to pornographic material targeting children, more seriously than they are taking wider online harm. If you look at the dozens of bills that have been put forward in Congress, the ones focused on child online safety are moving faster than the ones that are more broadly about any manipulated image, or a manipulated image of a public persona.
You sort of have these two ends of the spectrum. On one end, you have child online safety, which tends to be bipartisan, and you have parents activating around that. On the other, you have a high-profile person experiencing something, like Taylor Swift did. And it's silly that our policymaking would be responsive to one of those and not the other, meaning it's silly if it's responsive to the high-profile person experiencing harm and not to the child online safety concern. Both are seemingly able to get things at least onto an agenda for policymakers.
News Clip: One very clear regulation should be that AIs are welcome to join human conversations as long as they identify as AIs.
It's the same basic principle we have for humans: it's against fraud. You cannot pretend to be a doctor if you are not. You cannot pretend to be a lawyer if you are not. And you also should not pretend to be a human if you are not.
Zachary Karabell: A couple of years ago, we had a conversation with Danielle Citron about some of these issues too, about, you know, some of the legal and First Amendment issues.
And there's a huge amount of this which almost anybody with any sentience and conscience would agree is, you know, abusive, harmful, wrong, right? In most societies, there's a weird kind of cross-cultural human consensus; some things just are beyond whatever pale we've decided is a pale. But there is also concern, particularly in the United States, much more ambiguously, I think, in the European Union, and obviously nonexistent in a place like China, that the state has immense power: it has the power to imprison, it has the power to use force coercively to enforce laws which may or may not coincide with morality.
And we always have to decide, you know, what's the right balance: the classic, would you rather one guilty person go free or one innocent person go to prison? There's always that calculus when you invest the state with power, right? Whereas companies don't have the power of coercion, right?
Amazon can, Facebook can abuse your information, but they're not going to show up at the door and, you know, haul you away. So where do you come out on that line of the balance? And this is one of the things, I think, that creates that legislative hangup: people answer that question at different points on the spectrum of what they feel is appropriate.
Jiore Craig: So my background is international, and I'm very sensitive to the reality that many governments that pretend to be democratic, or are partially democratic, really overstep and overreach into people's lives and abuse their powers to restrict freedoms.
And in conversations with my colleagues around tech accountability and tech policy who are from some of those countries, it's clear that there have to be protections built in when your democratic system doesn't have the functioning checks and balances that would keep that from happening.
I think that when you are looking at the question of who should be in charge of online speech, the government or the platforms, the answer is that that's sort of the wrong question to be asking, in my opinion. In my opinion, we're not talking about an environment where anyone but the handful of gentlemen who control the social media companies are in control of what's in our newsfeeds.
That's just what it is. Our newsfeeds are not something we are actively choosing and making for ourselves. Our newsfeeds are curated by those platforms. Now, what's ironic to me is that the people who get riled up about the government being at their doorstep or in their homes seemingly have no problem with the companies being in bed with them, on their phones, you know, listening to all of their conversations, knowing everything about who they have relationships with in their lives, where they work, how long they've worked there, how long they've been on their phone that day.
I mean, it's wild, the free rein that the companies have in both selecting data and then targeting us with that data, and optimizing for different things, for example, keeping us on for longer. And if you could find a way to actually quantify the amount of time that those companies have taken from us and used to make some kind of profit, and then you take that time back and think of what it could have gone toward otherwise, whether it's human connection, which in our democracy and others we are in need of, there's a loneliness epidemic, you know, happening in many democracies, or anything else, whether it has to do with pouring into society in general, civic participation.
It can be your own self-improvement goals, whatever it is. That's the conversation we're not really having: what are they allowed to optimize for? What are they allowed to use people's personal data for? And what transparency requirements do they have?
Right now in the US, they have very few transparency requirements. There were attempts to kind of get them to come to the table on their own; those didn't really work. Finally, in the EU, they are having to prepare very basic safety assessments and have those risk assessments feed into what will hopefully be the establishment of safety standards that they have to abide by, like every other industry. And it's already resulted in billions in fines. And that's really a rounding error for them.
So I think that that's the conversation that needs to be explored, and in the meantime, we get stuck in this content trap. It's not to say that it's not important to understand who's in control of people's speech, who's able to plus up or plus down what someone's saying.
That is important, but if we stay there, we won't move forward on any of the pillars I just mentioned. The platforms will kind of keep enjoying this extremely free rein. And I am happy for people to have free rein. I am not happy for the companies to have free rein over how they're governing what we consume and, you know, how long we're scrolling.
Zachary Karabell: I think the pushback there, because I've had these conversations with people, has been, let me give you two examples, and this is different from private companies abusing their unfettered use of all of our data, which absolutely happens.
Laws against dissemination of child pornography online, which seem to be an unequivocal moral good, were then used by some zealous local prosecutors, particularly in, you know, southern states more than northern, and some central ones, it just happens to be a fact, to go after 15-year-olds who were sharing nudes with each other and then, you know, put them on a sex offender list for the rest of their lives, in a way that clearly was not the purpose of those statutes, right?
It wasn't to criminalize teenage sexual behavior. Although clearly for some people, in a moral-police sense, it was in fact to criminalize teenage sexual behavior. We're not ruled by the Taliban, but we have our own kind of, you know, zealous moral code that some people would like to impose.
And in a related fashion, one of the kinds of data that has been most protected has often been healthcare data, and HIPAA has been very good at preventing the easy dissemination of that. You know, the downside is it's made it harder for healthcare providers and companies to treat the whole patient.
But on the flip side, when some of those same zealous state actors in a post-Roe world have gotten their hands on some healthcare data or healthcare apps, they've tried to use those to go after women in kind of a precog, let's-go-after-someone-who-might-have-an-abortion way. So these are the things that, I guess, concern people in a libertarian sense, because however abusive Meta and Amazon are with our data, they're not potentially throwing 16-year-old girls in jail for maybe having an abortion.
Jiore Craig: No, but 16-year-old girls are harming themselves, and 16-year-old boys are pursuing choking challenges that are ending up costing them their lives.
And, you know, Snapchat is suggesting recommended content, recommended contacts, that end up selling a kid fentanyl. So these things are happening in these online spaces. And what's challenging is that, of course, there are often third-party content creators involved, and there are often these sort of third-party entities that make you feel like you can kind of legislate for that.
But we're seeing that that's not working. And we have examples in other countries. I mean, Europe has GDPR. They have digital privacy at least established, and while it's not perfect, it exists. We have no digital privacy. We have no digital rights in this country, and so there should at least be a basic framework.
Now, to your point, though, because I actually think there's so much validity in thinking about the consequences of bills in general, what I would say to that is, and I think too many people think this is a foregone conclusion, we have to re-engage people in civics. We have to re-engage people in paying attention to what's going on, because that prosecutor with the political agenda who is putting, you know, 15-year-olds in prison or putting them on the sex offender list, that person shouldn't be in office.
And if you don't want people like that governing your lives, you should get involved and get active. I mean, if you look at what's going on in this country, we have so many issues, child online safety, guns, abortion, where the vast majority of people are on one side, and we do not have results that reflect that in the way we're represented.
And so, I think that people don't like this. Civics isn't terribly sexy to talk about. It's a lot easier to talk about, like, you know, the fake image of whichever boogeyman of the day is coming for us in November. But actually, I wish we could make the conversation about how we get people really fired up, or just regular fired up, about voting and participating.
Because other than that, we're going to always have, perhaps in the South, people who are pushing their ideology on others. We might have that in other places too; it's not limited to the South, and I'd include central states as well. I feel like, you know, it can come from either side, either direction, but it doesn't change unless we collectively act for it to change.
Emma Varvaloucas: So I want to dive back into the disinformation discussion that we touched on a little bit in the beginning, about AI. And, you know, Zachary pointed out that AI seems to have had limited impact until now. You did mention there have been stories of dire consequences, but it's not, like, a whole lot, right?
But I did want to ask you what you see as the most pernicious form of disinformation, because, like, to me the answer is a lot more pedestrian. When I see a lot of disinformation on TikTok, a lot of it is laughable. The kind that I see really duping, let's say, friends and family is, to me, bad-faith clipping, like just clipping videos so that it appears like someone is saying something that they didn't actually say. But you are the expert, so I'm really curious what you find to be the most pernicious form.
Jiore Craig: I try not to put it to a particular form; I try to put it to what I think is the most dangerous tactic, and the most dangerous tactic is anything that blends in really well.
I mean, that is what they are trying to do. I think that so much of disinformation takes advantage of our existing biases. So if we think that we are not vulnerable, if we think we’re smarter than that, we are already kind of inherently going to have our guard down. And the reality is most people do think they’re smarter than that.
They don’t think they’re going to fall for something as blatant as fake news. Fake news, you know, sounds so false. But the reality is, especially foreign state actors, but really all people pushing disinformation strategically, they get it to blend in really well. They get it to look like it’s coming from your friends or influencers you might follow. And they make it feel very in group. They use slang language, they use the cultural references that you might be getting from pop culture or other, you know, meme accounts that you follow, and that’s pretty pernicious.
And they tailor that for the demographics they're targeting. So if it's older people, they might do chain emails and print disinformation. If it's people in their 40s and 50s, it might appear in the form of Facebook groups that look local but are not actually run by anybody in the country. If it's the younger generation, with autoplay and short-form video, there's a whole host of ways they can create the illusion of a real news event, or just, you know, push false facts with very little oversight of where those facts are coming from.
And I'm a big believer that the best thing you can do is work really hard to get the information you need from your content online. It hasn't always been like this. We didn't always log on to our social media and enter our passive brain. Like, if you'll remember when you used to log on and see what your friends were up to, you'd be at a desktop and you'd be clicking through photos. And you might post on someone's wall, and it was your friend, and then your other friend might comment, but it was a really social, relational interaction that you were having.
And at that stage, they were really focused on our data. That was the obvious thing to be collecting on us. But now that they've focused on capturing our attention, so our eyes can be looking at ads as much as possible, we really are online for as long as possible. And they want us to be in that sort of passive brain.
They don’t want us to be actively consuming facts and critically thinking for too long. That would get tiring and we’d log out. So when you’re scrolling online, you’re sort of pushed into this passive state and it doesn’t incentivize you to explore or question or wonder about who’s behind the content you’re seeing.
A video might make you think. But who's behind that video? And so I encourage people to really make the content in their feed earn their attention, earn their focus. And that is what I repeat as often as I can, because the tactics keep changing, and the disinformation keeps evolving as tech keeps evolving. But it's always going to be the stuff that can slip into a WhatsApp group in a diaspora community, where there is nothing but people from that diaspora community and really no one would give it a second guess, that is going to continue being the most dangerous, in my opinion.
Zachary Karabell: I mean, it's clear, for instance, with X, you know, formerly known as Twitter, that whatever algorithm is governing the content of that is confounding, confusing. Clearly content is, you know, popping up that none of us selected or wanted, and the content that we did follow isn't showing up.
There may be, you know, an Elon Musk algorithmic logic to it, although it's not entirely clear that even that's the case. Facebook, Meta, is clearly moving away from news as a driver of eyeballs in the way that it was for about 10 years, so whatever content is showing up is usually paid and sponsored.
And again, it's confusing in a weird way. In my own experience, at least, Reels and TikTok seem to follow some sort of, you know, logic of likes or logic of eyeballs that may be pernicious in its own way, but at least makes some degree of sense. How do you deal with that reality? You alluded to it in your answer just now, the changing landscape of these algorithms, and given that we have no transparency as to what they're even doing, it's doubly hard to catch up.
Like, what do you do about that?
Jiore Craig: A couple things are going on that I think are useful. One, there’s a new organization called the Integrity Institute, which is all people who used to work at tech companies. And they have the background. They have software engineering backgrounds. They have trust and safety team backgrounds.
And they are organizing to basically provide guidance to policymakers that will actually match what's going on at the companies. Because for too long, the policymakers were just too far away from what was going on. All the companies had to do was make a bunch of changes, and then, like, two years of legislating was kind of irrelevant.
So we have to move faster, and I think organizations like the Integrity Institute help. Young people in the conversation also really help. Centering young people in the conversation is useful because they are really naturally attuned to the very subtle shifts that happen. They catch trends faster. They're usually the first demographic that's going to pick up on the rollout of a new technology or a new social media app.
And so keeping them in discussions, and giving them the training and the mentoring that can help them work in those settings, in those higher-stakes conversations, that's really important. On the actual policymaking side, I think the first thing we need to do is require transparency.
There are a whole bunch of ways to provide anonymized transparency without it crossing any privacy lines, and it's just ridiculous that the companies have actually rolled back what they used to provide. Meta's own tool, CrowdTangle, was a tool that pretty much everybody in the research industry was using.
They sunsetted it, they got rid of it. And their ad transparency libraries, both Meta's and Google's, are really crappy and not maintained. Even requiring them to maintain their ad transparency libraries would be a very, very bare-minimum great start. And it's sort of, you know, a universal thing that's going to be relevant whether they make changes or not.
Then you have the risk assessments. And I happen to think risk assessments are a good way to legislate, because even though, yes, of course, the companies can sort of, you know, grease those to fit their needs, it still requires them to provide some overview of their current platform, their current technology, and the current practices they're using, which legislators can then work back and forth with them on.
How I keep up with it, as a researcher and as someone who cares about this, is by incorporating all those things I just mentioned into my daily life. I mean, I'm a public opinion researcher. I try to be in conversation with people from a range of demographics as often as possible. That keeps me clued into how people are experiencing their online spaces. I think that with AI, we're doing a better job than we did with social media of getting into the room to start legislating sooner and faster. The EU AI Act, I think, has already passed, but there will be several other steps before it actually gets into enforcement.
So that's a good start. I mean, we were so far from getting to anything similar on social media, and we've learned our lesson now with AI. But we're going to have a lot of the same challenges with AI, which is that it's a little bit of a black box for a lot of legislators, and they have to rapidly find the experts they need to keep up.
Emma Varvaloucas: In your opinion, how do misinformation and disinformation work? What I mean is, you know, we're coming off the heels right now of Trump at the debate saying they're eating the dogs and the cats in Springfield, and there are a couple of ways that you could look at that, right? You could look at it as, that really got intense on the extreme edges, and there were bomb threats in Springfield.
But the other way you could look at it is, I think the vast majority of people listening to that knew that it was untrue. It kind of got cleared up, because there were so many news organizations that went back to find the original Facebook poster and the original claims and this, that, and the other thing.
So coming back to the original question of how misinformation and disinformation work: I think the common narrative is that it fools people into thinking that things are true that aren't true. Is it that, or is it that it aligns with people's already existing preferences, their already existing political opinions, and makes them more extreme?
Or is it something else?
Jiore Craig: First of all, I'm not sure the majority of people knew that was untrue, as ridiculous as it was. But that aside, I really don't think disinformation is about making people think something is true. I think that it is a component of a tactic that has much more to do with controlling as much of the conversation as you can and getting your opponent to stand on your stage.
So, with the cat and dog situation, you have, first of all, you know, real-world consequences in Ohio and a real chilling effect that's really unfortunate and disproportionately targets specific migrant communities. That's going to make their lives very difficult for probably the following months, you know, if they weren't already difficult, given the rhetoric that has been coming from Trump.
Aside from that, which should be the first point and focus point, we're talking about it. And that is what a politician wants. They want us to be talking about whatever they feel in control of.
They don't want us to be talking about whatever they feel they can't answer to, or whatever they don't want people to know more about. And when you came out of that debate, if you looked at share of voice, you know, what were the two things that happened on the internet following that debate? You have the cats and dogs thing, and you have the Taylor Swift endorsement, which involved a cat.
And I think that, you know, those were the things people were coming away with. What wasn't coming away was either person's plans for the country, or frankly any of the, like, quips or clapbacks that either candidate had. There was no focus on the cost of living, which we know is what most undecided voters are thinking about and feeling.
There was no real focus, aside from what was in Taylor Swift's caption, on his extremely confusing statements around abortions happening after babies are born; there was sort of limited coverage of this. So, like, I think that that's the tactic.
The tactic is: okay, there are only so many mouthpieces that are going to be speaking about this thing. I want to control as many of them as possible. How do I do that? And with Trump, those undecided voters who are watching the debate, no one expects anything but ridiculous things from Trump. So their baseline isn't, oh wow, he said something false and crazy. Their baseline is sort of, yet again, Trump being Trump; let's think about the issues that, you know, are behind each candidate.
And so, I would say that it's sort of like, how many mouthpieces can I control? How much shelf space can I control over what's being talked about on the internet that day, and what's not on those shelves if this is on those shelves? That's the way I think about it. And disinformation is one component. If it's like a guy standing on a corner with a sign that says something false, it can only do so much.
You know, if his sign is really compelling, maybe he gets people to look at it and their cars go off the road, but broadly speaking, he doesn't have that amplification power. So disinformation relies on the amplification it gets from the tech companies, and on the way it can successfully get a trusted messenger to talk about it, whether that's talking about it to debunk it or talking about it in another way. Of course, when something like that comes out, you do need to do the fact checking and you need to get the news out there. But I would argue a good portion of people probably didn't see those fact checks. A lot of the people who maybe are susceptible to this narrative, or for whom it confirms a bias they have, I'm not sure I have much confidence at all that they're seeing those fact checks in a way that's compelling to them or actually breaking through.
So that's what I would say is the case study there. And again, just to restate, because it's worth restating, it's really unfortunate when you have these real-world consequences. And that's where I think the authoritarian playbook of using disinformation and threat to basically create a chilling effect and get people to back out of participating in democracy is really dangerous.
News Clip: Partly because of Trump, we live in the post-factual era, in which facts no longer matter, Fred. They don't matter. It's whatever you can say, whatever people want to take in, even if you present them with volumes of evidence that it isn't true or it was made up. I'm thinking of the big lie about voter fraud.
It’s okay because it serves a larger partisan purpose. It supports the candidate they want to win and the end justifies the means.
Emma Varvaloucas: Right, there's a finite amount of space on the shelf, and the question is what's taking up that shelf space. I think the only thing I would say to that is, let's say I were someone who was really concerned about immigration. I'd be like, yeah, I'm really happy that we're talking about immigration now.
Jiore Craig: Yeah. And that would be great for you. And you would feel that way, even though the person who brought it up was Trump. I mean, yeah, that's why I'm saying, you know, people have come to me like, the cats and dogs thing is ridiculous. It's not ridiculous. It's a strategy to talk about his issues. Because right after they say, oh, that's Trump being Trump, they say, oh, well, at least he's, you know, talking about immigration.
And I would say that, you know, there were mentions from both candidates during the debate about immigration. However, because the rules were so all over the place and because Trump doesn't really adhere to the rules, again, share of voice, even in the debate, you do not get the impression that both candidates were talking about it equally, because he brought it up in almost every answer.
And we're not really in any format where we're getting straight answers about the issue of immigration. And I think that absolutely, it is the case that he wants that to be what we're talking about. So yes, if that's your top issue, and that's no accident, because if you look at the wider disinformation apparatus that exists, especially coming from conservatives, they have been making immigration an issue, keeping it top of mind, forever and ever. And, you know, my personal belief is that our immigration system is broken and needs to be reformed. Absolutely. I think that's the majority of the country's opinion.
But I would say that, yeah, that's exactly illustrating the point of how it's successful and actually has an impact, you know, in a sort of strategic way for him.
Zachary Karabell: One thing that makes democracies particularly resilient, even when they're dysfunctional and broken, resilient systems that aren't easily replaced, is the sheer mass of noise and ideas, and basically freedom of speech, you know, the ability of so many different voices to opine.
Some of that information is going to be wrong. Some of it's going to be right. Some of it's going to be smart, wise. Some of it's going to be crazy, foolish. But the ability for all ideas to have purchase is, at the end of the day, an inoculation against centralized control. And it is certainly true that one of the real divides between authoritarian and despotic societies in the world today and more open and democratic ones is the approach to the control of information.
The flip side is that there are those who also say that the chaos of information, you know, the way in which, let's say, the Russians have tried to disrupt Western democracies, turns that potential strength into a real weakness and a liability. Because as we know from things like the paradox of choice, human nature in the face of an infinite amount of options and ideas is to essentially shut down, because it's just too much and you can't parse it.
And so that becomes a liability, because you can sow chaotic information in a way that makes people then look for simplicity, because otherwise it just feels completely untenable to navigate. I still tend to feel I'd rather deal with the weaknesses or potential liabilities of the chaos of information than the evident liabilities of the control of information.
But I recognize, you know, there's a problem in both. And I'm just wondering, are you increasingly concerned about the liabilities of the chaos of information as opposed to the dangers posed by the control of it?
Jiore Craig: I love that question. And I have really clear answers in my mind.
What I worry about is the fact that information overwhelms. All of that, especially the state-sponsored kind, especially of the Russian variety, which is sort of what I have a background in, is meant to ultimately make us feel insecure, like we don't know what to trust. And when we're insecure for a long time, we get emotional, we get impulsive, we get reactive.
And when we're emotional, impulsive, and reactive for a long time, we get tired after a while. And when we're tired, we are easy to control. And so my answer to what I'm worried about is us being under the control of a state actor, a corporate actor, anybody, which is actually directly to your point, because I think it is so important for that marketplace of ideas to exist, and for freedom of speech to exist and be protected.
And this goes to what I would say if your question had been, you know, what is flawed about that marketplace of ideas, because clearly something's broken in our country: I would say that we do not have proportionate participation.
You have corporations and special interests vastly overrepresented, their speech is vastly overrepresented, and you have people who are not engaged at all. And so we have to restore civics. We have to actually make it the case that the population is participating and contributing opinions, and that those opinions are informed and can actually, you know, impact and interact with our actual system.
I would say that the other thing we've lost, and have to restore, is sense-making institutions, bases in our communities and our societies where we can come to collective decisions. You know, they still exist in some places. Churches are a form of that. There are a few places where that's happening in certain urban areas, following the pandemic and sort of mass isolation and people reacting to that.
But sense-making institutions used to be far more common, places where you would get to some collective, you know, understanding of what your needs were and how those related to your neighbor's needs. And so that is the puzzle piece for me. And unfortunately, you know, connection and civics are not the sexiest things in the world.
And so Paris Hilton recently, you know, got involved in the Surgeon General's campaign against loneliness, and sort of made loneliness sexy. And I had been saying that we needed to do that for so long, because it is the answer to this sort of sexy problem I get asked about, the dark arts of the online world. It is the answer to the way in which those dark arts are eroding our democracy.
So that's for sure my big-picture theory of the case, and anything we can do to uplift, you know, cool and promising versions of civic engagement and connection is going to have to be the antidote, along with some accountability for those big power players.
Zachary Karabell: I really like that as a way to wrap this up. And in part, it's in sync with a lot of what we've tried to do over the past couple of years with The Progress Network, which is more to engender a sensibility than to provide a specific answer. And I would say it's not only that the issue is not sexy, it's that a sensibility is sort of hard to sell, right?
Because it's a way of approaching something rather than a very specific, binary, easily digestible answer. It's how you approach these issues and these problems, which is a much more expansive set of issues that's not easily reducible to here's what you do, here's the bill you pass, the law you enact. Like, those are easy: here's your solution.
A sensibility, a civics, an approach, right, is a way of approaching problems.
Jiore Craig: Totally.
Zachary Karabell: While it's absolutely vital, as you say, it's a much harder, I don't know if "sell" is the right word. It requires more of all of us than "Here's the answer."
Jiore Craig: Yes, I agree with you. I agree with you. And I will just add, I know we're wrapping up, but, you know, I think Hollywood and content creation can help us, because at least they can role model a little bit of what we're looking for and disincentivize some of what we're not looking for.
There's a real power in what's cool. Plenty of people have written about that and how it's kind of a hard thing to grasp. But one thing America does very well is trendset. And so I'd love for us to tap into that cultural powerhouse that we really do own globally and turn it toward some of that role modeling, to the best we can.
Emma Varvaloucas: Make civics cool again. There's a slogan for that.
Zachary Karabell: So, thank you so much for the conversation. Keep at it. Keep doing what you're doing. Keep spreading the faith, the digital and otherwise. This is clearly going to be an issue for the next decade, so you've picked a good field for your endeavors. There's going to be no sunsetting of this particular set of issues, which, you know, is kind of just the way it is.
So maybe we will circle back in months or years and see where this particular needle is.
Jiore Craig: Let’s do it. Thanks for having me.
Emma Varvaloucas: Thank you so much.
Zachary Karabell: Well, Emma, I thought of you a lot during that conversation. You know, it's very much in sync with your work on, like, how to read the news, how to deal with information overload.
I mean, they're all on that spectrum of what do you do in a world of chaos and noise. And as Jiore talked about at the end, and I found this an interesting way of joining the challenge of autocracy on the one hand and too much information on the other, or too much control of information versus too little, there is, as you've been talking about for years, the tendency to feel kind of bewildered and at sea with just the excess of information.
And then you add in, not only is there too much information, there’s too much misinformation and too much disinformation. And then just people either throw up their hands, they get disengaged or they get despairing. And then, I thought that was an interesting analysis, that then opens a kind of space where control and simplicity and answers becomes more appealing viscerally.
I mean, I agree with that observationally. I think that might sell too many people a little bit short. I think, absent force, control is increasingly hard. And at least in the United States, I mean, we can talk about this ad infinitum around the whole Trump factor, I think the fear of force as an enforcement mechanism against ideas, communication, and freedoms is overblown, at least right now.
It's certainly true in Western Europe, you know, the European Union, other than Hungary, that the rise of the right has not come with the rise of autocracy, and it got pushed back in Poland. It obviously has not gotten pushback in Hungary, but you can't use one exception to invalidate the generalization, right?
What's true of Hungary right now is not true for other countries. It's not true for Meloni; who knows what it'd be for Le Pen. So again, I think it may be true that it opens the space to more kind of autocratic control. You just haven't seen that actually happen yet in societies where freedom of expression and ideas really is deeply ingrained.
Emma Varvaloucas: Yeah, I guess I'll be Miss Cynicism for once. You know, we need people like that on the podcast every once in a while. The one thing I would say about your examples is, you're absolutely right that it got pushed back in Poland, but the question is, how bad does it get and how much shit goes down, you know, for lack of a better way to put it, before it gets pushed back?
And in Poland, it went really far. The abortion example comes to mind there, for instance: you basically couldn't have an abortion, period, with extremely limited exceptions. And I think a lot of people would point to that right now with Roe v. Wade in the States. So it's like, do I think that we're going to be in a Russia-style takeover if Trump comes into office?
No, I don't. But how big the risk of harm and chaos is, is, I think, a completely different question. Like, how much gets messed up along the way is a completely different question. So that's my cynical pushback.
Zachary Karabell: While there's a lot of overlap between those who are most vehemently anti-abortion and instincts of control and state use of power to control a whole series of things, they're also, to me, still rather different, in that there are plenty of people who have been anti-abortion, or pro-life or whatever you want to call it, who are doing so from what they believe to be religious and moral grounds.
Now, what's weird about the present versus 50 years ago, I mean, those who are now enforcing anti-abortion laws in states in the United States are doing so in a far more draconian fashion than was true in the 1950s and 60s. Like, no one prosecuted doctors, or thought about prosecuting women, in those years. So, I agree that there's more authoritarian overlap, but I do think those are different issues than the ones we're talking about.
Emma Varvaloucas: They're distinct, right? There's particular overlap when you're talking about authoritarianism that's coming from the right, right?
Like, it's a different picture if you're talking about authoritarianism that's coming from the left, which can also happen, as we know. To bring us on to something else entirely, the one thing that I really appreciated about Jiore in this conversation is that she gave some practical advice for people. Like her comment about making sure that you're not in a passive mindset while you're on social media, I think that's really important. You have to, like, turn that active part of your brain on, which is not what people tend to go to social media for.
Zachary Karabell: I liked that aspect of her conversation as well. You know, there was a lot about individually taking control of your own feed. I mean, that's something I hadn't actually heard as an expression, but it makes a lot of sense given how much that's part of our lives daily. And the idea that to some degree we have to take responsibility for what we read and how we read it is a really important one, I think, in line with her point about civics. The point about civics is that we each take responsibility for what actions we take collectively or civically; we're not the passive recipients of some set of people who are controlling things, but actively engaged. It would be fascinating to have her in a conversation with Jonathan Haidt. I think there'd be a lot of agreement, but on that score, her emphasis is that it's kind of up to us. Jonathan might say, look, it is up to us, but you can't expect 12-year-olds to make that choice, so it's up to us to make it for younger kids.
But I think that idea of, you know, we have to curate, we have to choose individually, rather than expecting someone else to, you know, create those parameters for us.
Emma Varvaloucas: There's a lot of complaining from people that I know about what appears on their Instagram or their TikTok or whatever. And it's like, you can do something about it. Yes, the algorithms are a black box.
Yes, we don't know exactly how they work, and yes, they change. However, we do know some things. Like, I know, for instance, that if I follow a really massive account on Instagram, it's going to be the first thing that comes up in my feed every time, because it's gotten the most likes, comments, whatever.
There are a lot of things you can do, as you say, to individually take control of your algorithm. You can tell TikTok, you have to do it over and over and over again, but you can tell TikTok, "I don't want to see this, I don't want to see this, I don't want to see this."
That's why I have friends whose TikTok feeds are entirely historical videos and, like, anime videos, and that's all they get, you know?
So we do have to take more of an active role in those things that take up so much space in our lives.
Zachary Karabell: Absolutely. Our individual behavior alone may not matter much, but our collective individual behavior matters hugely. And so we have to keep focusing on what we can individually do, as well as what we should collectively do.
So I think that was a good, really important message in an ongoing, complicated conversation, which we will keep having, I am sure. But you know, those of you in Progress Network land, tell us if you agree, disagree, throw up your arms, throw a shoe at the computer screen, hopefully you didn’t do that, but something equivalent.
Sign up for our newsletter, What Could Go Right?, same title as the podcast, conveniently enough. All can be found at theprogressnetwork.org, and we will be back with you next week. So thank you for listening. Thank you, Emma, for hosting. Thank you to the Podglomerate for producing, and we'll talk to you soon.
What Could Go Right is produced by the Podglomerate, executive produced by Jeff Umbro, marketing by the Podglomerate. To find out more about What Could Go Right, the Progress Network, or to subscribe to the What Could Go Right newsletter, visit theprogressnetwork.org. Thanks for listening.