Volcanoes are erupting in the Philippines, but on-fire Australia received some welcome rain. The Iran war cries have been called off and The Donald’s military powers are about to be hamstrung by the Senate. Meanwhile, his impeachment trial is starting, and we’re all on Twitter for a front-row seat.
What if we could predict and avert catastrophes before they happened? It’s possible. In this interview, founder of The Progress Network Zachary Karabell speaks with R. P. Eddy, CEO of the strategy and geopolitical intelligence firm Ergo, about Cassandra theory, a methodology that uses data to analyze and assess future threats—everything from the COVID-19 pandemic to the rise of ISIS to the Madoff fraud.
R. P., author of the 2017 book Warnings, which predicted a coming pandemic, also led the build-out of the White House’s first-ever pandemic response plan. In this interview, he does a brief post-mortem on the US’s failures in 2020, discusses how people can better integrate lessons on “what went wrong” so more can go right, explains why he’s long on humanity, and lays out frameworks for quantifying future harms and benefits.
Watch the full video below or read an extract, which has been edited for clarity and condensed.
We can measure harms, especially, in real time, but we’re not as good at measuring future harms or goods. Considering the work you do on warnings and projections, do you feel there is a way we could add in those future considerations, and to do so with immediacy? For instance, could you imagine something like a dashboard that could do a three-dimensional comparison of current harms and goods versus future harms and goods?

It’s hard to prove a negative. It’s hard to prove the catastrophe you stopped. It’s hard to prove the life you saved. That’s correct, but it’s also not quite that simple. What you’re describing is actually susceptible to analysis.
I’ll just kick out for a second. If you look at the pandemic, I know who probably would have had critical roles in a Hillary Clinton administration. I know what they’ve done historically on pandemic disease. I know the plans that were in place during the Obama administration that were teed up to be launched in the event of a pandemic if one were to hit America. So I have a lot of data to start with vis-à-vis what would have happened if those people had run those plans. We can make some pretty serious predictions of lives that would have been saved, although I’m not going to get into it, because it becomes so partisan and political, and a lot of people will get turned off. But I’ve done the math—it’s a real thing.
There is also a broader way to do it, which is just the general quality of life, the misery indices across different places—how those are changing over time, and looking at the component parts that went into those changes. Climate change, for example. Has that led to more migration? Is the migration clearly climate change related? And what has that migration caused? How much infant mortality can you attribute to that? How much starvation or food insecurity? It starts to get a little more tenuous as you get two or three steps out, but you can begin to create the logic line as far as the negative results. On the positive side—for instance, the agricultural revolutions of the seventies and eighties, and now new agricultural technology—how many lives will that save?
These large things seem very hard and complex, but we have a whole series of tools that we simply don’t use enough to predict not just catastrophes, but things in general.
Are you hopeful, then, that people can meaningfully integrate lessons learned about past things going wrong, even in the absence of evidence year-by-year that the same future things would go wrong? What I mean by that is that one of the things we learned from the pandemic was that the management of healthcare systems for efficiency does not work well for crises that require excess capacity. Will people learn from this, for instance, that having capacity within the healthcare system is essential?

There’s a series of constraints that are close to immovable, and one is going to be the profit motive. As long as we have a profit-motivated healthcare system, then the private actors—which is the vast majority of the actors—therein are not going to have excess capacity. Why would they keep it, right? So I don’t know that we’ll learn that in the private sector. That is a constraint inside the financial function in which we operate. Another huge constraint is the general idea that “people will learn.” Do people want to learn? Are we in an environment where we’re allowed to learn, where we’re helping each other learn?
Alongside the constraints are increasing challenges. There’s this acceleration of history we’re in now, where a once-in-a-century pandemic becomes a once-in-a-generation one, a once-in-a-generation pandemic becomes a once-in-a-decade one, and so on. We have a massive acceleration of risk.
There’s the great Edward O. Wilson quote about the challenge of man with his Paleolithic mind, medieval institutions, and godlike technology. You could add the increasing amount of godlike threat. The Paleolithic minds—meaning, can humans do this? Can we cooperate together? Can we get past our biases and challenges, our medieval institutions? What does it mean when an 80-year-old senator is talking to Mark Zuckerberg about the interwebs and pipes, just totally incapable of understanding these dynamics? Amidst all that, can we do better?
Of course we can. It feels daunting, but we can, and most critically, we must.