
Turning Science Fiction Into Reality

Featuring Ed Finn

Can today’s science fiction become tomorrow’s guidebook for change? Zachary and Emma sit down with Ed Finn, the visionary behind the Center for Science and the Imagination at ASU and academic director of Future Tense. Ed explores the intersection between sci-fi and real-world science, the complexities of new technologies like AI and gene editing, and why our imaginations can be the launchpad for tomorrow’s innovations and for building the future we dream about.

Audio Transcript

Although the transcription is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription software errors.

Ed Finn: I think we all need more good news. I think we need to remember that everyone else on the other side of the screen is a fellow human being, and I think we need news that brings people together in community.

Zachary Karabell: What Could Go Right? I’m Zachary Karabell, the founder of The Progress Network, joined by Emma Varvaloucas, the executive director of The Progress Network. And What Could Go Right?, as those of you who have listened before know, and those of you who are listening now are about to find out, is our weekly podcast where we talk to sometimes quirky, scintillating, unusual individuals who have some perspective about the future, who have some idea that the future could be brighter than we think or fear, that we are capable of steering a different path than the dystopic one that is so often assumed about the present, and that we don’t know the future. We are writing the future. We are all in the process of creating it. And there are many possible futures, and we’ll get into that in the conversation. And that active imagination about a better future could be, and we certainly think is likely to be, a necessary ingredient to creating it.

Meaning you have to envision a better future to have a better future. And if you envision a worse future, you are more likely to create that worse future. And writers and thinkers and fantasists, and in the case of our conversation, science fiction writers, have had an outsized role in imagining futures that people read and think about at an impressionable age, and then go on to create.

So even more to the point, how much of what we write and the stories that we tell in the present are an ineluctable, inexorable part of the future that we create? So we’re gonna talk to someone who thinks about this deeply and runs a center at a university that is focused on this very topic, conundrum, and goal.

So who are we gonna talk to today, Emma?

Emma Varvaloucas: So today we’re talking to Ed Finn. I love a short, simple name. It’s bold. It’s Ed Finn, and he is the founding director of the Center for Science and the Imagination at Arizona State University. He’s an associate professor there with a joint appointment at the School for the Future of Innovation in Society and the School of Arts, Media and Engineering, and if all of those titles don’t make a whole lot of sense to you, we are going to talk about what those are and how one becomes the founding director of such a center.

He is also the Academic Director of Future Tense, which you may have read in Slate. It’s a partnership between ASU, New America, and Slate magazine. And before he got into this interesting field of academics, he was actually a journalist. So we love that too. Are we ready to go talk to Ed Finn?

Zachary Karabell: So ready. 

Ed Finn, what a pleasure to have you with us today and to be talking about the intersection of the imagination in science fiction and actual science and actual policy, and who we are and where we’re going, and all those big picture things. So I wanna ask a question about you first. I remember once, and I’m gonna tell a self-serving story first.

I was sitting in a cafe somewhere, I think it was at Oxford, listening to a bunch of physicists. And I knew they were physicists because somehow that had been indicated as I was overhearing them talking in great depth about a Shakespearean play or somebody’s performance. And I remember thinking, God, that’s just so annoying, because they could toggle from their science expertise to having a meaningful conversation about something humanistic.

And I thought, like, there’s no way I could flip that on my side and have a really in-depth conversation about physics. Science knowledge is more particular than humanistic knowledge, at least in certain respects. And I’m telling that story because I’m asking you: you have a background as an academic in the humanities and English, and yet you’re now heading up a Center for Science and the Imagination, which is kind of this intersection of science and humanity.

So first of all, how do you establish your bona fides and credibility in the face of scientists who, I imagine for the sake of hypotheticals, don’t always extend you the respect of having any expertise in a realm where they think they have, and probably do have, expertise? And then how do you bridge that immense cultural gap, certainly in academia but also in society, between the expertise of scientific knowledge and what’s often seen as the more generalist knowledge of the humanist?

Ed Finn: Thank you first of all for having me, and what a great question. I’m gonna go back to the old two cultures idea from C.P. Snow, who suggested that there was sort of this scientific culture and this more humanistic culture, and that that can be a problem if these two groups aren’t talking to one another. I’m not sure that I totally embrace that idea, but I think a lot of people do, or certainly what we reward in our society is specialization. You see that especially in the academy, and especially in the sciences; people work on these really, really narrow problems. That’s great in certain cases because it allows them to advance discovery. But I think that the most successful researchers are people who are inherently curious and think broadly and read broadly and do more than one thing. And when you look at the biographies of great thinkers and researchers, they tend to be generalists, inherently curious. They might also be specialists in something. So what I try to bring to the table is, first, I’m an incorrigible generalist. I’m interested in lots of things. I had a whole first career in journalism. I ask lots of dumb questions, and sometimes those lead to really interesting answers. And second, my forte, my foundation that I can bring to this, is storytelling. And I do believe that we’re fundamentally storytelling animals, and the stories we tell about the world, of which science is one of the most important, inform our understanding of the world in a really deep way.

And maybe even create the world for us, you know, that we can’t really access the world except through the stories that we tell. So from that perspective, I have some things to talk about, and I think that every scientist should care about the stories that they’re telling about the world and the universe and how they communicate their research to others. So those are all ways in which I can work with people. 

And then we also have this shared interest in science fiction. Okay, I’ve snuck a third thing in here, which is that science and science fiction are cousins. They’re both intellectual projects about trying to name and imagine new things, to describe new things that could happen, that could exist, and there’s this beautiful feedback loop, an ongoing dialogue, between science and science fiction. A lot of scientists, I think, buy into that without me having to say anything. And other people can be brought to see the value of science fiction as a tool, even if you really just wanna focus on your research, to say, well, what next? You know, what happens if your lab is incredibly successful? If your research is incredibly successful, what will that world look like?

That’s a really important question for any scientist to ask. So those are some of the ways that, sometimes, I’m successful in establishing my bona fides with scientists and researchers. Mostly what we do is extend invitations to do weird and interesting things, and the people who show up are already, you know, willing to try.

Emma Varvaloucas: And I’m curious to hear you talk about this, especially because you just spoke about the importance of storytelling. Was the founding of the Center for Science and the Imagination reactive? In the sense that, for instance, Zachary talks a lot about how the founding of The Progress Network was in some part reactive to this current mood of pessimism.

Was it reactive in that sense? Like, do you feel like the story of science and where we’re going, the story of the future, is at a point where it’s negative, and the founding of the center was reactive to that? Or is it something else entirely?

Ed Finn: The founding of the center has its own little origin story, and it did emerge out of a moment of pessimism. So in 2011, the US was just coming out of this big recession, and we were struggling to regain the hope and optimism that I think had characterized previous generations. Neal Stephenson, the science fiction writer, came to an early event of this thing that ASU does called Future Tense, and he had written this polemic called Innovation Starvation.

So his perspective was: he grew up with the Apollo program, in the fifties and sixties he grew up with big infrastructure projects. The future was really bright. We were gonna be going to Mars. All of these amazing things were gonna happen. And as an adult, when he was writing this in 2011, we weren’t even flying our own astronauts into space at that point.

We were paying the Russians to do it, and the infrastructure was all crumbling. And the best and brightest, quote unquote, the ambitious young people, were not going to work on these long-term generational or multi-generational projects. They were going to Wall Street to make better mortgage-backed derivatives, or they were going to Silicon Valley to make better targeted ads. And so his position was: we’re not thinking big anymore. We’re not doing big stuff anymore. What happened?

And the president of ASU, Michael Crow, was at this event too, and being the kind of guy he is, he said, well, you know, Neal, I’ve read your books, and they’re not especially hopeful and optimistic about the future. They’re kind of dystopian. So maybe instead of pinning this on the engineers and the scientists and the entrepreneurs, maybe this is your fault, and maybe we need to be telling those hopeful, ambitious stories about the future if we want to actually inspire people to build those futures. So that conversation struck a chord, and I had just started a fellowship at ASU. And it landed on my desk as: well, what would we do if we were gonna try to change this?

And so I thought this was just a crazy thought experiment. I didn’t think anyone would take this idea seriously, but I had a great time making up this idea for a project: a center that would bring people together from science fiction, from creative writing, from the arts, with scientists, engineers, and different kinds of researchers and experts, to come up with technically grounded, hopeful stories about the future.

So it did emerge out of a moment of pessimism, and I think the reason that we immediately got a lot of positive feedback was because people are so desperate for hope. They were really hungry for that message at the time, the permission to be hopeful about the future. And I think that is true today too.

Zachary Karabell: So I’m a complete Michael Crow fanboy. He is also a friend. ASU also supports The Progress Network, as it does support your center, which, you know, full disclosure, means I’m kind of back-patting in a self-interested way, but I was a Michael Crow fanboy long before there was any self-interest. Now I just have a reason to be.

So you’re part of the School for the Future of Innovation in Society. Like, why is there a School for the Future of Innovation in Society, and what does it do? I’m not saying you created it per se, but you’re part of it, and it’s one of the things that ASU does that is so eclectic and unusual.

It’s like it breaks apart the mold. Most universities, whether they’re Ohio State and fraternity-oriented, or Harvard, or a small college like Oberlin, basically haven’t strayed much, as you know as a former mainline academic (I’ll just call you a former mainline academic, as opposed to a currently heretical, heterodox academic), from the framework of 19th-century German disciplines and all the apparatus that has been built upon them. There’s still pretty much a core of that.

So what is this school meant to do? Like, if you’re a student there, you tell your parents, yeah, I’m going to the School for the Future of Innovation in Society. And they turn to you and they go, what? Like, what kind of job is this gonna train you for?

Ed Finn: That’s a great question. And it’s been amazing to see this whole forest of futures-oriented things grow up around our center at ASU over the past 13 years that we’ve been at this, and the School for the Future of Innovation in Society is a really great example of that. One of their mottoes, one of my favorites, is: the future is for everyone. So one starting point is, we need everybody to feel invited and empowered to imagine their own futures. If we just leave it to this tiny sliver of privileged humanity to imagine the future and decide for the rest of us, it’s not gonna turn out very well. So that’s one starting point. And then the degree program, and what you would do as a student there, is: all right, well, how do you imagine the future?

What are the skills you need? There are disciplines of foresight and anticipation, toolkits that you can learn; they’re not that hard, you know, to actually start planning for the future. And culturally, we need to recognize that we have this sort of default assumption or story that we tell about what we think the future is gonna be like.

Most of us don’t think about it very much, if we think about it at all. We tend to downplay our own agency and let the Elon Musks of the world decide. We’re saying, like, well, somebody else is gonna make the decisions, I don’t have any agency in this. But that’s really not true. We do have power to shape the future, and if we’re not intentional about it, we’re never gonna get to a future that we want.

You know, if you wanna actually change the future, you have to change the story you’re telling about the future. So: skills like anticipation and foresight, but also things like speculative fiction and storytelling about the future. These are really important, and they invite people to a different frame of mind about the future.

First of all, it’s not just one thing, right? There are multiple possible futures, and we need to have constructive conversations, and maybe debates, about what we want. We have so many yardsticks for the futures that we’re afraid of. We have very few yardsticks for the futures that we should be working towards.

How do we measure where we’re trying to go, instead of just all the things that we’re afraid of, you know, all of the 1984s? I think the final piece of this is to create that model for change in organizations. So one thing I think about is sustainability; ASU was one of the first places in the United States, at least, to have a school of sustainability. And now there are many companies and organizations that have a chief sustainability officer, that think about sustainability as just a part of doing business, a part of being in the world. I would really like to see that position exist in the context of futures, right? Who’s our futurist, who’s our future officer?

One of the folks who works at our center as our futurist in residence, Brian David Johnson, held the title of futurist at Intel for a long time, and Intel is thinking 20, 30 years ahead. Major companies, you know, if you want to build a giant, billion-dollar facility somewhere, you need to have a really long time horizon to do something like that. And I think about Kurt Vonnegut, who famously said, I’m not gonna get the quote quite right, but, you know, why isn’t there a cabinet Secretary for the Future? Nobody is thinking about my children and my grandchildren in government. And so that’s another arena where we really desperately need more long-term thinking, futures thinking. So I think this is a great example of ASU and innovation at work, that these careers are going to exist. We desperately need them. I think that they do exist now, but we need to also model and create more of them. We need to build a culture, right? Fundamentally, that’s also what we’re doing: we’re building a culture of thinking about the future in a positive way.

Zachary Karabell: I’m gonna preempt Emma for a moment and ask a quick follow-up, which is the Elon Musk 80,000-pound gorilla, not just 800-pound, because he is, for better and I’m sure in some ways for worse, kind of a perfect iteration of someone who was inspired by science fiction to then build all this stuff. I mean, in many ways, and I’m not even saying this pejoratively, you know, he is a 15-year-old manchild who took his reading of science fiction and said, we should go to Mars.

I mean, I don’t know whether he read Heinlein or Bradbury or whomever else, flying cars and all of that. And in many ways that’s an inspiration, you know, that inspired him, as it did many people in Silicon Valley. You know, you mentioned Star Trek before. A lot of the shape of the smartphone is kind of a weird Star Trek derivative. Is that a good thing? Is it a bad thing? Is it just a thing?

Ed Finn: I think it’s complicated. I think it’s a thing, for sure. There is a very clear feedback loop between science and science fiction, and I think that feedback loop is nowhere stronger than in Silicon Valley. There are a lot of people who were inspired by science fiction to go into these careers and to go on to create whole new industries.

And there are some ideas, some science-fictional ideals, that have inspired decades of work. If you look at the idea of virtual reality and the metaverse, there’s a student here who just completed a PhD studying the feedback loop around the metaverse and these generations of people: writers, science fiction writers like Neal Stephenson and Ernest Cline, but also technologists and companies, from Second Life to, you know, one of the biggest companies in the world literally renaming itself Meta to align with what it thought was gonna be this amazing new future. So the feedback loop is real. Maybe you’ve seen that joke about the torment nexus. You know, the science fiction writer writes a novel that says, please don’t invent the torment nexus. And then 10 years later, some tech CEO says, look, we made it, we made the torment nexus. Isn’t that cool?

People tend to pluck the technological ideas out of these stories and just focus on those, without thinking about the broader social context. I think that what we all need to get better at is the idea of positive social change, social technologies. For example, with climate change, but with a lot of our problems, we probably have all of the tools, all of the technologies, that we need to solve them.

Right now, it’s not about inventing amazing new tech. It’s about changing our society, changing our values, having new conversations, making better collective choices, and using the tools that we have to build a better world. And that’s where the incentive structures of Silicon Valley are not always so great, and the people in the room are not always the ones who are gonna be most affected, or negatively affected, by the choices that are getting made. So in those ways, I think science fiction can become a kind of fig leaf for the decisions people wanted to make anyway, or a shortcut that they might be taking without really thinking about everything that they’re leaving behind.

Emma Varvaloucas: It’s really interesting to hear someone talking about science fiction divorced from the tech element. I feel like the things that normally come to mind are, like, to inspire or to warn. But I’m curious, for you, what makes a really solid, impactful piece of science fiction, given all of those parameters that you were just talking about?

Ed Finn: What science fiction can do really well is let us not just think about the future, but feel the future. To experience what it’s like to live in that future, to empathize with a compelling human character, or a humanoid whatever, to step into somebody else’s shoes and actually feel like we’re inhabiting that future. And then you get into the texture and the intimate detail of, oh, do I actually like this? You know, maybe on paper flying cars sound amazing, but when you step into this world, you think, oh no, this actually is terrible, and I really don’t like the way that this world is organized. And then you might have that secondary realization.

It’s not really about the flying cars at all; I don’t like all these other things that are happening in this future. So every piece of science fiction, especially a well-crafted piece of science fiction, allows us to exercise our own cognitive simulation engines, to use our imaginations to step into another future. And not just to do that in an abstract way, but to really practice the superpower of empathy, right? To empathize with these future humans or future people. And that exercise is really important, because we can then use it to come back and look at the present. Every time you step out of our present reality into some fictional one, you get a chance to look at our present reality with new eyes, to see it differently.

And so you get this invitation to see the world as it could be, to imagine that the world could be otherwise. And as you build this imaginative capacity, you can get better at being more discerning, learning more about yourself, and learning more about the world that you’re in, and maybe the world that you want to build.

And you get to start practicing your own versions of science fiction. I’m working on a book right now called How to Change the Future, which is really about this incredible superpower of imagination that we all have. We’re all practicing anticipation and foresight in our own lives in all these little ways, but we really don’t think about how incredibly powerful that tool can be.

And then, the people we admire the most are the ones who succeed in the long term. You think about a great musician or an Olympic athlete, people who have successfully imagined a future for themselves through thousands and thousands of hours of effort and brought that thing to life. And then the rest of us look at them and go, are you even human?

How did you do that? But we can all do it. It’s just a question of building that practice of imagination, and science fiction is one of the great entryways into that practice.

Zachary Karabell: I liked your comment earlier. I mean, I liked it because it echoes things we’ve said, and of course you always agree with people who agree with you. That there are many futures, right? One of the things we’ve said, cribbing from Karl Popper, is that the pathways of the future are endless. And one of the things we try to push back on is this very human desire for future certainty, which creates a lot of path-dependent problems, as opposed to the reality that none of us know. And even extrapolating from something that seems obvious in the present into the future can be a fool’s errand. I mean, we do it all the time, but that doesn’t mean it’s the thing to do. Just for a quick rabbit hole moment: the line between science fiction and fantasy has always struck me as, I mean, it used to be very pure, right?

Fantasy was imagined worlds, versus science fiction, which was an extrapolation, potentially into the future, of real worlds. But, you know, at some point those things start to blur, kind of like hip hop and R&B. I mean, do you care about those lines? Is it immaterial to the work you’re doing? Or, given that you’re in the weeds of this stuff, running a center and doing a lot of writing about it, do you have a certain view? There used to be a real, hardcore, town-and-gown, this-is-fantasy-this-is-science-fiction purism. Does that pertain at all still?

Ed Finn: Some people throw down over this. My approach, first of all, on a practical level, is that you can try to impose rules on writers, but, you know, it’s more like the pirate code: they’re just guidelines. So I’m not interested in policing a firm boundary between these things.

I see it as more of a gradient, but what we focus on is what I like to call useful stories about the future, things that are set in the near future, usually in the next few decades, that have a fairly clear pathway from our present to that future. And that, again, are useful in the sense that they invite people to exercise their own imaginations and put themselves into that future world.

They can do that fairly quickly without having to take on a bunch of magical thinking. Now, again, it’s sort of blurry, but there’s kind of a sweet spot in that idea of, you know, the notion of suspension of disbelief. One of the great things about science fiction is that you can talk about really difficult topics.

You can talk about racism or oppression or completely changing our economic system in a science fiction story, and it’s sort of okay because it’s make-believe, and people will speculate in a way that they might not be willing to in quote-unquote real life. So that’s a sweet spot. At a certain point in fantasy, we’re like, and there are dragons and there’s magic.

You know, you’re at a different level of the suspension of disbelief, and you’re embracing a universe with really different rules. Now, all of this is very waffly, because you can certainly have science fiction stories where technology effectively functions as magic and we don’t really know how it works.

So again, I’m not interested in policing the boundaries, but I think sticking to the near future and inviting people to think about a world where some important things have changed, but it’s still fairly recognizably our world, where you can trace a path from where we are to where that future is: those are the stories that we try to tell, because there are plenty of others. You know, there are plenty of amazing fantasy writers out there, and also science fiction writers who are doing things like 3,000 years in the future, or in a galaxy far, far away, or whatever. So all of those things can happily coexist with what we’re doing. There just aren’t that many useful stories of the kind that we’re talking about, because it’s actually much easier to set your story 300 years in the future: you can do whatever you want, and nobody’s gonna prove you wrong. If you write a story that’s set five or 10 years ahead of where we are right now, you can get proven wrong really fast. So we get to sort of seed the ground with these stories that we think are missing.

Emma Varvaloucas: So when you look at tech today, like, on the spectrum of tech that really freaks people out, I feel like AI is probably up there. Maybe gene editing, I don’t know. Maybe choose one of those two. When you look at tech like that, what do you imagine? Like, where does your mind go?

Ed Finn: Those are both really interesting to me, because I think we are falling down on the language to describe them. So in gene editing, in synthetic biology, we can do all of these things that we don’t even know how to describe right now. And I think 20 years from now, 50 years from now, we will look back at this time and say, oh, people didn’t even know that this was a thing already.

Because they couldn’t talk about it. Nobody had even named it yet, even though the technical ability was there. And I think that is true in a different way with AI. I think we do not have the right metaphors for AI. We keep trying to pretend that AI is a person. It’s not a person.

You know, we anthropomorphize in this really dangerous way with these systems, and we’re building them to pretend to be human in ways that I think are really problematic. So we need better metaphors to talk about AI, and also synthetic biology. And one of the interesting things about AI in particular: we did this research study a few years ago on the premise that we need new metaphors, we need better stories about the near future of AI, because the ones we keep telling are the killer-robot Terminator story, or the robot girlfriends.

The robot-you-can’t-tell-is-actually-a-robot story, like Westworld, or the superintelligence, you know, the-robots-are-gonna-kill-us-all stories. All of these, maybe one day, sure, in the long, long term we should be worrying about that. But meanwhile, we have all of these systems that are deciding who’s gonna get hired, who’s gonna get a loan.

They’re flying planes and driving cars. We’re using AI in all of these different contexts without really knowing how to talk about it, or the right metaphors to use, or where the failure points of these systems are. So we did this study to see what we could learn from science fiction about AI. And the very short version of that is: we have no idea what AI is, even in science fiction, even when the writer, you know, gets to call all the shots and invent whatever world they want. The capabilities and the boundaries of AI systems are really ambiguous, just as we don’t know what intelligence means.

We don’t really know what artificial intelligence means in the real world, and we also don’t know what it means in our imaginary worlds. So we need to get better at figuring out what we are even trying to build, especially now that all of these tools and systems are being trained on our science fiction imaginaries of AI. ChatGPT and all these things have read all the science fiction; that’s part of the training data, right? And the engineers and the technologists who are building them have also read a lot of that science fiction. So the feedback loop is sort of spiraling off in this one direction. But I’m not sure that any of us have really figured out if that is the right direction.

Zachary Karabell: How does this play into Arthur C. Clarke and HAL in 2001? For those of you who may not be familiar, HAL is somewhat the prototype of an artificial intelligence that, when faced with the prospect of being disconnected, shut down, does everything possible to preserve its own quote-unquote life, even at the expense of human life.

Ed Finn: Well, it’s a great example of what people call the alignment problem, and it comes back to this idea of values. So HAL has decided the value is the mission, right? That’s HAL’s justification for what it does in that amazing movie: well, the humans are just in the way, and I’m supposed to complete this space mission.

And so the most efficient answer, the right choice for me, is that I have to just kill off the humans so that I can continue this mission. Which is a variation of every other story like that, like the paperclip story. You know, you tell your superintelligent AI to make the most paperclips possible, and then it turns the entire universe into paperclips.

Zachary Karabell: It optimizes for the endpoint without parameters ’cause you didn’t give it parameters.

Ed Finn: And the assumption that a lot of people make is that we can articulate the values that we want to instill in these systems, that we actually know what those values are, that those values can be codified, and then that the machine will actually stick to that, right? It will stick to the values that we have tried to encode into it. All of those are big, big question marks, and I don’t think we’re especially good at articulating our values in a coherent and consistent way. Humans are pretty good at saying the things that we care about, you know, defining our values; if you really put our feet to the fire, I think humans can do that. We’re pretty bad at following through and sticking with them. You might say in the abstract that you believe something, but then in the heat of the moment, maybe you do the other thing, right? We’re still ruled by emotion and gut instinct and all this stuff. And, you know, in science fiction often this is an issue too. Think of Star Trek: it’s still gonna be Captain Kirk in his commander’s chair making a gut call. There’s no innovation in decision making in the future. It’s still gonna be this kind of messy, intuitive, instinctive thing that we do. So we keep telling these stories over and over again, and it’s a version of the Frankenstein story, because we’re still really worried about creating these slaves, right? And every time you create slaves, then you’re worried: are the slaves gonna revolt? And then are we gonna become the slaves? It’s a bad idea to create slaves. So we are wrestling with those questions, but I don’t think we’ve come up with any good solutions, because again, it’s really sort of a philosophy question.

We need to get better at talking about the social values, the ethical values, that we all believe in, way before we’re racing ahead to build all of the tools, all of the technologies. And right now I’d say we’re pretty much doing the opposite. You know, it’s 99.999% let’s just build all these amazing tools, and then there’s this tiny, tiny, tiny fraction of energy and money and effort that’s going into, like, wait, should we…

Emma Varvaloucas: It also doesn’t help that a lot of the people who are building these tools, at least if you’re thinking about Silicon Valley in particular, not only are they not articulating the values, but they’re also spending millions of dollars on bunkers. You know, like, they’re gonna have an out if the world goes to shit. So what do you make of that? Because I feel like there’s this fear there. It’s like, oh, they’re rich, they’re successful, they know this tech from the ground up. They must know something we don’t about where we’re headed.

Ed Finn: First, I’ll say it’s one of the most human things about the Silicon… the ultra-wealthy. But I think it’s really irrational, because who’s gonna be guarding your bunker? Some guy who you’re paying $20 an hour, and, you know, how is he gonna feel about what happens at the end of the world? There’s no magical island that’s going to save you. I love the movie Don’t Look Up as a great satire of that whole philosophy, especially the end of that movie. So I think that is an understandable but fairly irrational response to the world we are building, a world some of those people are the avant-garde in building. And what would be a lot better is to think about how we could try to make the world better for everybody. Actually, if you want security for your family, and to live a good life with your family, what you need to be focusing on is how everybody can live a pretty good life, right? That’s what security actually means. So I think those are hard conversations for people to wrap their minds around. And there’s also this weird, almost cult-like pressure, I think, for people to race ahead on the technology right now.

Years ago I had a conversation with a researcher at a big company who was working on AI for automating science. I had just been talking with this guy about how we were both dads, and, you know, he had a daughter. And then two minutes later he was saying, yeah, I’m pretty sure we’re gonna have artificial general intelligence in the next, well, I don’t remember, 10 years, something like that. Or maybe even superintelligence. And I said, well, you know, what do you think that world is gonna be like for your daughter? How do you feel about that? And it was like this big switch flipped in his brain, and all of a sudden he was thinking about his daughter, and he was sort of looking concerned and worried, you know? Like, actually, oh, I really don’t know. And it’s like, well, maybe you should put those two parts of your brain together and have a bigger conversation. And that’s just one person and an anecdote from years ago. But I think that we are collectively in a moment where we need to be having those conversations, not in the context of an arms race.

Or this idea, I think a lot of people have convinced themselves that they are the good guys, and so if they can just win the race, then everything’s gonna be fine because they’re the good guys. But everybody’s racing so hard that I think that encodes a lot of really flawed assumptions about how this is gonna turn out.

Zachary Karabell: Tell us a little bit about your Hieroglyph project and its stories of optimism. What is it? What’s the goal? Have you seen, or can you glean, any positive effects?

Ed Finn: Hieroglyph was the other thing that came out of that conversation that Neal Stephenson and Michael Crow had, lo these many years ago. So Neal went back to Seattle and started emailing fellow science fiction writers and technologists and other folks to organize an anthology of hopeful science fiction, to respond to this challenge: oh, maybe we need more hopeful science fiction that can inspire people, especially young people, to go out and build these futures today. The title for Hieroglyph comes out of a conversation that Neal had, I think with Jim Karkanius, if I’m remembering right: the premise that there are these iconic ideas that science fiction has given us, like the rocket ship and the submarine, and we’ve talked about the communicator and the transporter, things like that, and that those can inspire thousands of engineers and researchers and organize them around a sort of technological ideal. Neal started that project, and then we created a home for it at ASU at the Center, and I ended up co-editing the book with Kathryn Cramer, a great science fiction writer and editor, and it became our calling card in a lot of ways.

It was the invitation that we extended to people. We tried to get the writers into conversation with scientists, engineers, and other researchers, but also students and members of the public. Neal Stephenson got in touch with a structural engineer at ASU, a guy who literally wrote the textbook on steel engineering, and asked him, well, how tall could we build something out of steel? And this engineer loved that question, because nobody had ever asked it, and it was this amazing question to play with, with the students, and to use as a teaching tool.

Emma Varvaloucas: What was the answer?

Ed Finn: Really, the answer was, we don’t know. Not because we don’t know that much about steel, but because we don’t know that much about the upper atmosphere and what the wind conditions would be like.

But the answer was probably something like 20 to 25 kilometers tall, assuming you could manage the wind conditions. You would have to have something like jet turbines or giant fans on the structure, so it would almost be like a kite made of steel, something like that. It would have to be, I think it’s called, actively controlled, so that it would respond to atmospheric conditions. Neal wrote this amazing story in Hieroglyph based on that idea and these long collaborations, and they built computer models of what this thing would be like, and we started talking about where we would put it, and it was this great thought experiment.

And then Bruce Sterling also wrote a story, and, I think I mentioned that science fiction writers break our rules, he wrote a story set 300 years in the future that also had this tower in it, when it’s become this quasi-religious structure and humans are sort of leaving Earth, some of them. And he’s got a cowboy in it. It’s a great story, sort of an elegiac myth based on Neal Stephenson’s tower. When the book came out, we framed Hieroglyph as: this isn’t the end of this project, this is just the beginning, this is an invitation. And people still today, you know, here we are talking about it now, it continues to be this calling card. It was the first prototype of this toolkit methodology we’ve developed to bring people together from these different fields of practice to do what we call collaborative imagination: to work together to imagine these technically grounded, hopeful futures.

Emma Varvaloucas: I wanted to ask you about something that came up before we started recording the main part of this interview, and you mentioned it earlier in this interview too, which is Frankenstein. What you had told us is that Frankenstein is your favorite piece of science fiction. And I was really surprised by this answer, especially because in my mind you had the tagline of, like, positive science fiction guy. I’ve updated that now to useful science fiction guy, but I still am like, okay, you’re gonna have to explain the Frankenstein one to me. Because that seems really, I don’t wanna say dystopian, that’s too strong a term, but it certainly doesn’t feel positive or useful. But I could be wrong about that.

Ed Finn: We’ve done a ton of stuff around Frankenstein. We did this whole Frankenstein bicentennial project to celebrate the 200th anniversary of the book, and we edited a new edition of the book. I have it right here, actually, because I’ve just been teaching it. It’s called Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds. I think that subtitle maybe starts to answer your question a little bit. I agree it seems like an outlier that I would be into Frankenstein, but I find it really compelling because it’s a story about scientific creativity and responsibility. It’s easy to think of it just as this cautionary tale.

You know, Victor Frankenstein makes all these bad choices. He should never have tried to do this, it was a bad idea, he defied humanity and God, and he gets everything that he deserves. But really, the story is about everything that happens after he does this crazy thing and brings this creature to life, and it’s his failure to take responsibility for his actions that leads to all of the negative consequences in the story. And Frankenstein is amazing because it’s in a lot of ways the first modern work of science fiction. It’s this modern myth that has become incredibly successful. If you think about the fact that it’s only 200 years old, it’s pretty amazing that there have been so many, hundreds, thousands, of adaptations and retellings of the story.

The fact that you can eat Franken Berry breakfast cereal or have a lunchbox or dress up in a Halloween costume. It’s a prefix: Franken. I can say Franken-food or Franken-politician, and we immediately have all these associations. That’s how deeply this idea has embedded itself into our collective consciousness. So Frankenstein is very interesting when you ask: why was this very weird book so successful? And it’s because it addressed these deep anxieties around what we are trying to do with science, and science and society, and what our collective responsibilities are as creators. And one of my favorite facts about this book is that Victor Frankenstein, the character, predates the first use of the word scientist by about 20 years. So before we even had the word to describe a scientist, the person who is fully engaged in this, for whom that’s their noun, right, that’s what they do, before we even had that word, we had this really flawed depiction of what the scientific enterprise is.

So we’ve been wrestling with this idea of science and scientific responsibility for 200 years, and the book is just this incredible vehicle for exploring all of that, and it’s just as relevant today as it was 200 years ago. Everything that Shelley wrote as science fiction is now just here: high school students are creating genetically engineered organisms, and we’ve just been talking about AI. So this book is relevant for the next 200 years too.

Emma Varvaloucas: Really quick follow up. Favorite piece of science fiction from the last 30 years? What would it be?

Ed Finn: Gosh, I really love some of William Gibson’s more recent work. He wrote this great book called The Peripheral, which is this interesting sort of thought experiment around climate change and technology, and sort of a time travel story. So off the cuff, I’ll pick that as one.

Emma Varvaloucas: We needed to get at least one recommended reading, right?

Zachary Karabell: Did you like The Three-Body Problem?

Ed Finn: I really like The Three-Body Problem, yes. First of all, it’s always really interesting to remember that there’s a lot of cultural variation in science fiction, and I’m fascinated by the story of science fiction in China. Of course, that trilogy is the most widely known science fiction to come out of China, and it has this incredible ambition to it.

One of the things I love about that series of books is that it keeps ramping up the scale and the scope, and it doesn’t fall apart. It manages to sustain that over the course of the three books, which is pretty remarkable. So yeah, that’s a great, great trilogy.

Zachary Karabell: We’ve talked a little bit about this before, about individual agency and what one individually can control and what one can’t. Right? None of us can control the wider arc of society. We have some agency over the specific arc of our own individual lives. But you’ve thought and talked and cogitated a lot about this, and you work with students as well.

So how do you counsel people who are faced with what seem to be the forces of technological change? You know, meaning, you and I can fulminate about what OpenAI or Meta or Gemini and Google are doing, but we may not have much ability to change it; even if we were to, like, lobby our representatives to get on the case of regulation, by the time all that’s done, they’re gonna be regulating what OpenAI did yesterday.

It’s gonna be much harder to regulate what they’re doing today, or what they’re about to do tomorrow. So what can any of us do? How do you face that? Kind of throw up one’s hands and go, well, you know, the cake is baked, and I can eat it or not, or it’ll be force-fed to me regardless?

Ed Finn: I think the most troubling aspect of these tools, especially these generative AI large language model tools, is that they’re very seductive in inviting you to just let the tool do all of the work, do the actual thinking for you. You know, I have this little chess app on my phone, and it has this thing where you can push a button and it will suggest a move for you.

And if I’m feeling tired, or like I’m losing the game and I don’t wanna lose the game, you know, I’ll push the button and I’ll take one of the suggested moves. But then the temptation is to just push the button again, push the button again, and all of a sudden I’m not playing chess anymore. I’m just like a factotum for this, you know, little AI chess program on my phone. You can see that playing out. I see it playing out sometimes with students, with professionals, where they’re ceding the imaginative agency and the executive decision making to a tool in ways that are not good for us. So my advice for young people is to practice having your own independent imagination, your own agency.

I think imagination is the ignition system for all this stuff that we care about, but we don’t see the imagination underneath it. So we’ve talked a little bit about foresight and anticipation. We’ve talked a little bit about empathy. And the third thing is resilience. If you are imaginative, you can come up with your own solutions. You can see these possible pathways ahead of you, and you can try to make choices to get to the place you want to go. And I think we need a lot of resilience to navigate the sort of whitewater world of technological change that we’re heading into. And that means you need to know how to do things yourself.

You need to have a strong imagination of yourself, like who you are and what you’re trying to accomplish. And if you do that, then you can do amazing things with these tools. They can be incredibly helpful, but you have to avoid that gravitational pull, you know, getting sucked into, well, I’m just gonna let the machine do it, let the tool do it, it’s about 70% right and that’s good enough. The satisficing version of using AI. So I think that sense of, you know, imagining your own future is a big part of this, right? That’s part of the resilience as well. It’s like, okay, I’m gonna keep going. This is not going well right now, but I know where I’m trying to get to.

I have to get to that next place. I think imagination is still this fundamental human capacity, one that I don’t think these systems have, and that we need to cultivate and recognize, to see our own imaginations at work, because that will help people have the resilience to navigate a world where, you know, who knows what careers are gonna exist 20 years from now.

We can’t train people for these careers. We need to train people to be learners, to be adaptable, and to understand and navigate situations that nobody has ever seen before and figure out what to do.

Emma Varvaloucas: Since starting this work, I guess in 2011, do you feel that you yourself, since you’re swimming in this stuff, have become more empathetic, anticipatory, and resilient as a person?

Ed Finn: I like to think so. I don’t know. I feel like you have to ask my spouse or…

Emma Varvaloucas: We’ll do it. We’ll do a survey.

Ed Finn: Yeah, yeah. I do think it’s made me more empathetic. I’ve really tried to practice imagining what other people are going through and to build a practice of empathy. I do feel like I spend a ton of my time in this anticipation mode, thinking about possible futures and inviting other people to think about possible futures. And resilience? I’d like to think so. You know, I think that for me, weirdly, resilience is also kind of a storytelling problem. The when-life-gives-you-lemons sort of thing: you have to come up with a better narrative and change the story.

When I was first starting out, when I was a mainline humanist, as you put it before, Zach, it felt almost hopeless. It felt really hard to try to get a job. There were very few jobs, and this was back then; it’s way worse now. And one of my little mantras was: well, the game is rigged, so I have to move the goalposts and declare victory.

You have to find a different story to tell about what you’re doing and how you’re progressing, and if you tell a good story, you can convince other people of that story. And so that question of framing is part of the resilience, whether that’s professionally or in other kinds of adversity. So I’ve been incredibly fortunate.

You know, when I’m talking about resilience, there are people who are navigating much, much more difficult things than I am. So I’m not sure that I’ve really been tested, and, you know, I hope I’m not tested in one of those horrible ways. But that’s where I think I am; that’s where I’ve gotten.

But it’s a lifelong project with all of these things.

Zachary Karabell: I feel we’re all that way. Like, I hope I pass the test, but I hope I never have to.

Anyway, Ed, I want to thank you for your time today. I love the work you’re doing, love the work the Center’s doing and that ASU is doing: this kind of sui generis, syncretic meeting of different trends and tendencies, with the awareness that none of us, in our lived lives or societal lives, are nearly as segmented as we force ourselves to be in our professional lives. And to the degree you can force these artificial walls to be shattered or broken down, and force people to work together across divides that are themselves just an artifact of the way things went rather than an eternal law of nature, the better we will all be for it. And using science fiction and the imaginative capacity of human beings to picture better worlds is often the foundation for those worlds being created. You know, what we imagine in the present is often the seed of what we create in the future.

And your work speaks to that really profoundly. So thank you for it. I encourage people to check out Ed Finn and the Center and the schools at ASU and see what they’re doing.

Ed Finn: Thank you for having me. Thank you for doing this podcast. What could go right? is exactly the question we’re trying to ask too. So it’s been a great pleasure and privilege to be your guest, and I’m looking forward to further adventures in the future and in imagination.

Emma Varvaloucas: That was really great. I feel like my imagination was expanding just by listening to him. I feel like we need people to point the way for us, and I really appreciate the work that fiction writers do in general. I always have this experience whenever I read fiction, science fiction or otherwise: just, wow, this person is so creative.

Like, how did they think about this? I feel like sometimes that work is undervalued, particularly when it’s people who maybe aren’t household names, but they’re doing really good stuff that we as a society should be using as more of a guidepost.

Zachary Karabell: And it is extraordinary, and this could be the subject of a whole other podcast, but the hardware that’s been invented by Silicon Valley over the past 40-plus years is remarkably like the things that the fifties and sixties and, I guess, seventies science fiction writers projected. And they’re almost all guys who read this stuff as teens and then went, hey, I wanna make that. And that’s a wild aspect of human evolution.

Like, we envision possibilities that we then make real. Of course, it makes you wonder: if we’d envisioned different possibilities, would there have been a different reality? Probably, but we’ll never know that. And obviously the Center that Ed runs speaks to that: the kernel of what we picture now could have profound implications for what we create then.

Emma Varvaloucas: I mean, that does make me worry a lot. Like, I was trying to think about current sci-fi, like, what are the kids watching these days? I was like, Star Wars remakes. I mean, Andor was great, but it’s not exactly, like, new tech stuff. Black Mirror…

Zachary Karabell: Yeah. I mean, Black Mirror is more of a dystopian version of it, right? Although, I guess it’s a warning. Be careful.

Emma Varvaloucas: But I can’t think of something where it’s like, oh wow, I read this book recently that imagines new tech that somebody might build in the future. I can’t think of any example at all.

Zachary Karabell: No, but I mean, I’m not as up on, like, the past 10 years of sci-fi, so there’s that as well. I mean, other than The Three-Body Problem. So, something says, bring us your ideas.

Emma Varvaloucas: What should we read? Yeah.

Zachary Karabell: Anyway, it is a very cool endeavor that they are up to there at Arizona State University and the center that Ed Finn runs.

Emma Varvaloucas: It is. And I feel like the whole interview just could have been questions, like, every time he mentioned somebody new at ASU or in other places, like the futurist in residence. The follow-up question in my mind is the same one that you were asking in the beginning: like, how does one become a futurist in residence? Right? We could have done that for 45 minutes. Probably for the best that we didn’t, but still interesting.

Zachary Karabell: Anyway, thank you all for listening. Again, thank you to the Podglomerate for producing, and to the team at The Progress Network for providing the needed infrastructure and ideas. Please send us your thoughts at hello@theprogressnetwork.org. Sign up for our weekly newsletter, also conveniently called What Could Go Right?, at theprogressnetwork.org, and we will be back with you.

We have a limited number of episodes left for this particular season, but it’s a pause, not an end. So we’ll be back with you next week, and thank you for your time and attention.
