

S2. EPISODE 10

Wisdom for Smart Tech

Featuring Ayesha Khanna

Web3 is seen by many as the future of the internet. Others understand the rise of artificial intelligence (AI) as the first step to a robot takeover. Where’s the balance between these two reactions? This week, Ayesha Khanna, AI expert, CEO of ADDO AI, and a member of the World Economic Forum’s Global Future Council on Media, Entertainment, and Culture, joins us to wade through the hype around cryptocurrency, decentralized finance, robots, and more, and to talk through global leadership on new tech rollouts.


[Audio Clip]

Zachary Karabell (ZK): What could go right? I’m Zachary Karabell, the founder of The Progress Network. And I am joined as always by Emma Varvaloucas, the executive director of The Progress Network. And we are having a series of ongoing conversations with compelling people, talking about the major issues of our day with a slant toward, yes, what could go right? Given that so many of our questions these days are framed by the “what could go wrong?” and the relentless focus on all the things that are in fact going wrong. We live in a complicated world. That’s both a cliche and a truism. But we tend in that complication these days to go toward the negative, to not accentuate the positive, but to emphasize the negative, to privilege our fears and delist our hopes. And in that context, one of the things that people are increasingly agitated by is both the rise of and proliferation of artificial intelligence and all the challenges that it poses to identity and privacy and choice, including the nascent fears that we are about to be taken over by the robots—you know, the rise of the machines—and the concerns and fears and anxieties arising over the proliferation of cryptocurrencies, and the rise of what is called Web 3.0—the decentralized, non-government-regulated financial world that these crypto technologies are enabling.

These are all treated by some as the answer to all of our prayers and by others as the proof that we are headed downhill fast. So today we’re gonna have a conversation with someone who’s at the epicenter of these issues. So Emma, please tell us a bit about Ayesha Khanna.

Emma Varvaloucas (EV): Ayesha Khanna is an artificial intelligence expert. She’s the cofounder and CEO of ADDO AI, which is an AI solutions firm and incubator. She’s been a strategic advisor on AI, smart cities, and FinTech to leading corporations and governments and also serves on the board of Infocomm Media Development Authority, which is the Singapore government agency that develops its world-class technology sector. She’s a member of the World Economic Forum’s Global Future Council on Media, Entertainment, and Culture, which is a community of international experts who provide thought leadership on the impact and governance of emerging technologies. She’s also the founder of 21C GIRLS, which is a charity that gives free coding and artificial intelligence classes to girls in Singapore.

ZK: So let’s talk to Ayesha.

EV: All right.

ZK: Ayesha, it’s great to have this conversation with you today. We’re having yet another of our intergalactic, transcontinental, global conversations between me in New York City, Emma in Athens, and you in Singapore. I have to say, I still get a sort of weird little [inaudible] of, it’s kind of cool that we can have these conversations simultaneously across multiple zones and multiple geographies. I don’t know if that’s a great thing for the world or a bad thing for the world, but right now I’m just gonna treat it as a fun thing for the world. And you’ve spent a lot of time over the past years thinking about technology and its onward march. And I guess as just a broad conversation we can begin with, you know, is it an onward march, right? You do a lot on artificial intelligence. You’ve thought a lot about web3. And there are a lot of people who view both of these things—decentralized finance not necessarily [inaudible] by government, and of course artificial intelligence—with a great deal of not hope and excitement, but fear and trepidation. I have a feeling I know where you come out on that, but maybe ruminate a bit on, you know, on the one hand, on the other hand, the pros, the cons.

Ayesha Khanna (AK): Yes. Well, first of all, thank you for having me here. I’m really pleased to have an opportunity to speak about this with both of you. I think the mistake that we make is we sometimes have very emotional, knee-jerk reactions to technology. We don’t have a system of thinking or an approach, because we were never taught it in school. But now we’re entering an age where more and more, it’s not like we use machines but almost like we live amongst technology. We live with technology, not only in our iPhones, but also in our digital lives, too, with smart city sensors covering the majority of buildings and roads and infrastructure down the road. We’re entering an age where we really need to have a way of approaching technology and its pros and cons, exactly as you said. But we’ve actually never been taught to do that.

So traditionally, you had two kinds of people: one that has been naively optimistic and the other that has been depressingly pessimistic. But actually, the right approach, whether it’s artificial intelligence or web3 or nanotech or biotech, is, right at the beginning, when you are trying to build something—for example, we build a lot of AI solutions for our clients globally—to sit down and have a risk framework of everything that can possibly go wrong with something that we’re building. Unintended consequences, potential consequences and risks. And that approach, those scenario-based approaches, an exercise of doing that, and red teaming yourself by having other people question your assumptions, is a very sound and logical way of going about it. And certainly I find the European Commission has exactly this approach. And I’m a big admirer of some of the regulations that they have put forward in the world, including for artificial intelligence. And the way they think about it, the level of auditing and governance and regulation that will be imposed on a company is directly proportional to the risk potentially associated with putting that AI into the world. So I think that what we need more than the pros and cons is a systematic way of approaching it, and that should be embedded in the education for our engineers, and certainly for all entrepreneurs and business and high school kids.

EV: So Ayesha, I wonder if you could give us some examples of this ideal behavior from European governments around artificial intelligence. I think a lot of people are familiar with Europe sort of leading the way with GDPR and privacy protections. But actually, around AI in Europe, I have no idea what they’re doing. So it would be great to hear how they’re striking that balance well.

AK: So first of all, the European Union, exactly as you mentioned, [the General Data Protection Regulation] GDPR, has been able to set a standard.

[Audio Clip]

It was kind of a big wild, wild west, you know, when it came to data protection and privacy. And they really put a stake in the ground, and they said that you have the right to know if somebody is using your information and how they’re using it. You have the right to ask to be forgotten if you were young and wanted your information to be deleted. And what we discovered after that—they put out six or seven such tenets of data dignity, data privacy, and data security—was that several governments around the world began to adopt this, including elements that appeared in India and China, and certainly in California, with the CCPA [California Consumer Privacy Act] being even more stringent on privacy than the European Union.

And now, they have come out with a set of guidelines by the European Commission. They haven’t been adopted yet, but these are regulations related to artificial intelligence. And they have said that any artificial intelligence can be divided into four layers: high risk, medium risk, low risk, and no risk. And you will be audited and governed or even allowed to use AI depending on how much you threaten the happiness, the livelihood, and the existence of human beings.

So for example, they said facial recognition. We know that facial recognition can, because of bias in the data, inaccurately flag certain people as criminals. And that leads to a terribly demoralizing, humiliating, and sometimes literally unjust incarceration of people. And they said, this is high risk, and therefore only governments should be allowed to use it. Private corporations should not. If you are being denied health insurance, for example, because the AI is thinking that you are not capable of paying your premium or you’re too high risk, well, that’s terrible for a person looking for health insurance for themselves or their children, or looking for college financing, or even a mortgage. So that’s considered medium risk. And for that, you will be audited more. You will have more accountability. You will have to give more evidence that you are governing your AI in an ethical manner and in a proper manner where there’s less bias and decisions are made in a more or less explainable manner. And so on and so forth. If somebody’s suggesting a bag that you can buy on an e-commerce site, then that’s low risk. And so it will be audited less.

So first of all, it’s a risk-based framework, kind of the way we have in financial services. Secondly, there is accountability, both processes and fines up to 6% of your annual revenue associated with it. But because it’s European Commission guidelines, it’s vague right now. People will criticize it for being vague. But I think, hey, it’s at least a step in the right direction. Somebody’s thinking about it and putting people first.
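To make the risk-based idea concrete, here is a toy sketch in Python of the tiered framework Khanna describes. The tier names, example use cases, and obligations are paraphrased from the conversation, not taken from the EU’s legal text, so treat every string below as illustrative rather than authoritative.

```python
# A toy mapping of AI use cases to the risk tiers described in the
# conversation. Examples and obligations are paraphrased from the
# discussion, not the actual EU regulation.
RISK_TIERS = {
    "high":   {"examples": {"facial recognition"},
               "obligation": "restricted use (e.g., governments only), heaviest audits"},
    "medium": {"examples": {"health insurance eligibility", "mortgage approval"},
               "obligation": "more audits, accountability, bias and explainability evidence"},
    "low":    {"examples": {"e-commerce product recommendation"},
               "obligation": "lighter auditing"},
    "none":   {"examples": {"spam filtering"},  # hypothetical example for the no-risk tier
               "obligation": "no additional requirements"},
}

def tier_for(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return tier
    raise ValueError("unclassified use case: assess its risk first")

print(tier_for("facial recognition"))  # high
```

The point of the sketch is the shape of the regulation: the obligation attaches to the tier, and the tier is decided by the potential harm of the use case, the same way risk-weighted rules work in financial services.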

ZK: So this is the question about, what is the proper way to create guardrails around something that, unfettered, probably will have a lot of negative unintended consequences. Although a lot of techno-utopians, of which there is still a considerable number, would say that unfettered it will also lead to a lot of unintended positive consequences, and that the effect of this multiplicity of rules will be to stifle multiple ways in which these technologies could otherwise evolve.

So I guess this leads to two questions that are totally in your wheelhouse. One is, you live in Singapore, and Singapore is certainly an example of a state system that has taken it upon itself in a very technocratic way to attempt to govern a lot of aspects of life for the good of society. Does it work? And is it really a model for the rest of the world, or does it only work in a city-state of just over 5 million people, essentially a lot of people who have some degree of consensus about the role of these things?

And two, how does this compare to the whole web3, the decentralized blockchain finance—and maybe you can give us a little primer on that—whose entire mantra is, if not totally anti-government, then making government irrelevant to the protocols. It avers that it’s creating a self-governing system with guardrails built in, far better than any government or any laws, creating trust in fact, not trust in intent, and that that’s the way the world should go entirely, which is, we should have technology embed the rules and not government. I know those are two different questions, but they’re kind of related. And I’m also interested in the Singapore perspective.

AK: Absolutely. And two great questions at that. So let’s start with Singapore. What attracted me to Singapore—and I’ve been in tech since I started working in algorithmic trading systems, implementing quant models right after college, through my PhD in smart cities infrastructure. I’ve been following and working in tech for a very long time. And what I find interesting about the way that Singapore rolls out technology is that it’s actually very human-centric, which means that if you go to some of the new townships, for example, what you see is that a number of the apartments—and 80% of Singaporeans live in government-subsidized housing—actually 30% of them will be prefabricated using advanced manufacturing or 3D printing. The HDB homes, because we are a graying population, will be smart homes. They will be outfitted with sensors and video cameras so that if somebody old falls down, their children or their doctor will immediately be notified.

But it’s not only about that. They realized a long time ago that elderly people don’t just die from falls in their homes. They die because they’re lonely. They die from accidents. Actually, those are the top two reasons people die in old age in Singapore and other countries. So they decided to make all of the government housing multi-generational, not to take the elderly and put them far away in nursing homes. So the point of the matter is that intergenerational housing is a question of urban design. And it’s about studying anthropology and about studying sociology. And that’s just one example. If you look at transportation and the way that they are building transportation and making it completely car-free and introducing autonomous vehicles and other kinds of more pedestrian-friendly routes, what you see is a very technical state where everything is geared towards reducing the time spent in bureaucracy and increasing the quality of life.

And then your second question was how much consensus do you need from people in order to do this? So first of all, a number of these plans are regularly published way in advance. There are four new townships coming up, and the Urban [Redevelopment] Authority of Singapore has all of those plans. They are reviewed many, many times in the newspaper and discussed with the MPs. If there’s any issue, people can talk about it, but there’s a lot of communication. And you’re absolutely right. If you don’t have communication, you have a problem. And I’ll give you the example of Spot, the Boston Dynamics robot dog. The same dog was rolled out to monitor social distancing in parks in Singapore. And the same dog was introduced in New York City to help defuse bombs, and to go ahead and look for the bombs.

In Singapore, it was in the newspaper, we ran into the dog, people took pictures, and then, that was it. In New York, there was a huge uprising, and it was shut down. The police department shut it down because they never communicated properly what the dog was supposed to do. Instead, the dog was suspected of being armed and of accompanying policemen and policewomen on raids, which would make me uncomfortable as well. In the same manner, if you talk to the Singaporean government about autonomous vehicles: they’re all over San Francisco, everywhere. Yes, they’re safe, but there are actually not many good global standards yet. So the transport minister and department many times have very, very select spaces, and they test them again and again and again. And coming to that point, they prefer to be more conservative than risk-taking when it comes to human lives, because it’s not really worth it for them to have one cyclist or anybody accidentally injured just for the sake of innovation.

So what I like about them is that not only do they place human beings above technology in a smart city environment, but that they also continue to communicate whatever they’re putting out. And number three, that they’re quite risk averse in putting anything at risk, whether it is the livelihood of people, whether it is their own safety. And this means that there’s a systematic approach. And the last thing, guys, that I really like about them is that they review their plans all the time. They’re not wedded to their plans. If things are not working out, they will change. So in that sense, they’re very agile. And that comes to the final point, that a government that can be agile, really, maybe it’s the city-state that can afford to do that. And it is very true that for a country of just over 5 million people, densely populated, but a city-state, it is much easier to do these things with technology here than in a large country like India or Brazil, where there are many, many, many more people and many layers you have to go through in a government to execute such a plan, and many more points of failure and potential risks that you have to look out for.

EV: Ayesha, before you go into Zachary’s second question about web3—so just really quickly, for instance, if Spot the social distancing awareness dog [laugh] was received poorly by citizens, then the government would have done something differently. They would’ve recalled the dog. They would’ve done X, Y, and Z. Is that true? And in that case, can we then apply that onto the United States, for instance, at the state level or the local level? Is it possible to do that?

AK: Well, you know, it really depends. So what I find interesting is, what does it mean that it was not taken well by people? Does it mean there was one person who tweeted about it and was very loudly and vociferously complaining? Does it mean that many people expressed concern about it? Did they go to their local MP? My community center is two streets down, and I can visit him every week when he comes here. There’s a method of doing things. And yes, I believe that if there had been concerns that were not addressed, the MPs would have come and communicated better and then found another way to do things. But the Singapore government does have an ear to the ground and does pay attention to what people are saying. It is not easily swayed by a lot of Twitter-mania or the voice of a few loud people.

They just have a systematic way. And it’s rather calm. It’s not that reactive. And I think that in itself reassures people quite a bit. But when there are enough people who raise a concern, it’s certainly discussed, it’s debated in parliament, everything is recorded, you can see it live on Facebook, it’s everywhere. Now, of course, young people in every country will rebel and be upset and complain, and that’s good. They should. But I think that there is a system of doing things where attention is paid when enough people raise a red flag against it. So they would’ve changed something or they would’ve communicated more, had enough people complained about it. And so you were talking about the US…

EV: Yeah. I was just wondering as you were talking, I was like, okay, maybe this would be difficult to do, like you said, in large democracies like India or the US at a federal level. But maybe at a state level or local level, regional level. Why not?

AK: Absolutely. You know, I’m a big believer in questioning things. I really believe it’s important. And that’s why that critical thinking that all of us, all three of us, were taught in school, and that our kids or our nieces or nephews are learning, we kind of just throw it out the window. And then we kind of have either emotional or political reactions to everything. Critical thinking is a framework for approaching things. Citizens should come together and discuss new technologies as they’re being rolled out. There should be a forum for them. And they should be able to discuss it at the municipality level and eventually at the state level. And that is why local governments should be quite strong. And I’m not an expert in geopolitics or politics as much, but I know that in cities, this hyperlocal trend is also important, so that people are able to voice their concerns in hyperlocal, you know, cities within cities. At least at the city level, you should have a very engaged citizenry that has a forum without being overwhelmed by millions of people in it. And then that should ripple up, as it’s supposed to, but hasn’t done as much. And people have felt disengaged in the past in many countries.

[Audio Clip]

EV: Okay. So I’ve led you down a particular track. Let’s go back to Zachary’s second question about web3.

ZK: The wonderful, anarchic, you know… You listen to web3 proponents, and they’re all about, this is the brave new world where all the traditional financial institutions, many of which are embedded with central banks and governments, will fade away like the horse and buggy and colonialism.

AK: Well look, first of all, we know that adoption of digital currencies is going up. We know there is a frustration against the fact that some people have made a great deal of money off what is called web2, which is tech companies like Facebook or Google or YouTube or others. And some people feel they’ve been disenfranchised and left behind.

So, something usually sticks around, has stickiness, if it’s solving a problem. I can tell you for sure that the level of bureaucracy and injustice in the emerging markets, when it comes to access to capital, loans, the ability to get a loan for your business, the ability to get a loan for your child’s education, if you don’t have employment, if you don’t have a transaction history—the underbanked and unbanked number in the hundreds of millions in Africa and Asia and Latin America. So this desire to disintermediate banks, or these financial institutions that create friction in the easy flow of payments, that take commissions when people are sending back money from the US to the Philippines, I think that it’s a real problem that has to be solved. And the ability of people to do digital work in different economies. And there are also, you know, capital controls in countries. In Pakistan, you can’t take out more than $10,000 a day or $100,000 a year USD as an individual.

ZK: Well, India is not much better. I mean, India is a million, I think, right?

AK: A lot better than a hundred thousand [laugh]. But not much better, depending on where you are in the economic scale. But the point is that it is because of these things that people want to disintermediate and disrupt financial institutions that they feel have not been customer-centric and have not really paid attention to the customer, to the citizens. Already, a wave of FinTech disruptors using AI and data has knocked them off their complacency pedestal, and they’re realizing that a digital-first bank like Nubank from Brazil is the biggest digital bank in the world, and there are 30, 40 million young adults in Mexico, and many more in Brazil, that it’s servicing with its credit cards.

And so web3 is providing yet another solution to that. It’s saying that anything that you do that you have now been doing for free on the internet, you should be paid something for it. It could be as simple as checking your email. It could be searching for something, being on Facebook. Because these companies are making money off you. Anything that you create, you should be able to sell directly to someone without having an intermediary that takes a big chunk out of it. So that, I feel, makes a lot of sense. It gives people the opportunity, democratizes the opportunity to access marketplaces without having to bear commissions or intermediaries. So it is a problem that has to be solved.

For financial services, it makes sense, but if you go to digital currencies that are not backed by gold or by sovereign assets, then it becomes a macroeconomic problem for people. And that’s the information asymmetry between citizens and the government. The government understands the macroeconomic implications. A lot of these people who want there to be a complete free-for-all don’t understand the ramifications well. For example, I was listening to a podcast by a lawyer, and she talked about this notion of a DAO, kind of an organization that everybody owns. It’s so fabulous, and everybody has equity in it. I saw somebody set up a golf club and send out a lot of subscriptions, and $20 million or something was raised. And it’s like the three of us and everybody owns the same golf club. And we can all benefit from anybody who pays membership fees. What people miss out on is, if you’re a hundred percent an owner together, you’re also a hundred percent liable as well. And this is a fact that people just don’t realize. We just miss that. Yes, you have a lot of cryptocurrencies, but you have to pay taxes on them, and it’s a pain. And people are just realizing that. And now the Indian government has actually announced it. So this information that people don’t have is very unfair, because they only think of one part of this get-rich-quick kind of scheme. And the idea of having no government sounds exciting, but really, where would you go if you had a problem? There are no pathways for justice.

I just wrote a paper for the World Economic Forum on digital justice. What happens if you are in the metaverse and your avatar is harassed by another avatar? What is your recourse? Who do you go to? Where is that information? This is not just bullying and going to your school. This is literally, how do you go up the chain right to that judge, and who does he hold accountable? And these are processes and institutions that have been set up over time. Can they one day be replicated, simulated in web3 by blockchain? Maybe. I am certainly not philosophically opposed to anything that has good governance embedded in it. I am opposed to jumping to a system that has no institutional legs and cannot protect the poorest people, because we have no history of such institutions. So I think that I’m just trying to kind of separate some of the threads here, to separate this strong philosophical belief that people have of self-ownership, self-empowerment, self-agency, all of which are good, but whose implications they don’t understand. When you give them real examples, then they are a bit jittery, because we haven’t set up those institutions yet. And you can’t rely on a trust network, right? Everybody in the network is neither equipped nor educated nor specialized enough to reach that discussion. And certainly the network itself cannot provide that security to poor people or anybody in trouble, in my opinion.

EV: So Ayesha, another way to describe what you’re saying now—and I think you actually have put it this way yourself; I think I’m taking this from you [laugh]. I’m reciting it back to you. But, I found it very persuasive, which is why I’m repeating it right now. When social media was created, there was this sense of like, well, I don’t know, it’s not gonna go that far, blah, blah, blah, blah, blah. We didn’t accurately assess the risk. Social media exploded, and then we had to work backward to, you know, establish some of the rules to get us into a better place than we are now. And what you’re doing with web3 and some of this other stuff is like, let’s talk about the problems now, before they arise. But I wonder, are you a lone wolf in all of this? Because I certainly see in this balance between the techno-optimists and the risks of new technology, like with web3, I see almost no discussion around the risk and what might be the biggest problems that arise. What you just spoke about just now was the first time I had heard anyone say anything about that. So, yeah, how do you see yourself in that field?

AK: I think people don’t know. They don’t know what they don’t know. This is insane. Like, people keep talking about NFTs and then another lawyer and I talked about copyrights. There’s a copyright law. And I know that the Singapore government is looking at digital assets. Digital assets are so much more than Bored Ape.

[Audio Clip]

So the issue is there are a few bros who are very loud and excited because they made money in crypto, and more power to them; they spotted a good trend. And then there’s all the rest of us who don’t know anything and are kind of cruising and pretending we know, or just staying on the fringes. But actually, I think this is the next iteration of the digital economy, or the economy, really. And I think that we should, first of all, admit what we don’t know and then start to learn together. And that’s why I’m launching Squad, which is this global collective for women in which we literally talk about precisely these things. We say, if you’re an entertainer or a lawyer or you run a manufacturing factory, what do these things mean for you, in small tidbits that are easy to understand.

Instead of saying things like, “L1 is really not gonna get us to where we want, so we’ll do L2.” And everybody just nods their head, but WTF. Like, what’s L1? What’s L2? And somebody could really just explain that the bandwidth is not enough in Ethereum, and that L2 is just a way to increase the bandwidth in different ways. And I am happy to explain that in easy-to-understand terms. And what that does is, people feel comfortable. Women especially are shy in asking for help. If they feel comfortable, then they can participate in this new iteration. And that is really what we need to do, is just talk about the fact that, like we’re talking right now—I don’t know the answer. You don’t know the answer. Let’s talk about it. That’s what smart and interesting people do. So I think that there’s just not enough discourse. People are shy because they feel they’ve missed out on something. And there’s a lot of loud noise. But in my collective, we’re gonna ask all these questions, hopefully.
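The L1/L2 point Khanna is making can be shown with back-of-the-envelope arithmetic: a layer-2 rollup batches many user transactions into a single layer-1 transaction, so the same base-layer capacity carries far more activity. The numbers in this Python sketch are made up purely for illustration, not real Ethereum figures.

```python
# Illustrative only: how batching raises effective throughput.
# Both constants below are invented round numbers, not measured values.
L1_TX_PER_SECOND = 15        # assumed base-layer (L1) transactions per second
TXS_PER_ROLLUP_BATCH = 1000  # assumed user transactions packed into one batch

def effective_throughput(l1_tps: int, batch_size: int) -> int:
    """User transactions per second if every L1 slot carries one full batch."""
    return l1_tps * batch_size

print(effective_throughput(L1_TX_PER_SECOND, TXS_PER_ROLLUP_BATCH))  # 15000
```

The design point is simply that the L2 does not raise the L1’s own limit; it changes what each L1 transaction represents.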

ZK: So you talked about not just this new endeavor, but you’ve done a lot of work on bringing women into technology, and young girls in particular. I’m always struck—and I’ve had this argument with a lot of people, as a parent of two kids who are now less kids and more young adults—by the amount of reactive fear that people in tech land have when they start raising their own children. I mean, I’ve heard people who’ve made oodles of money on social media or various forms of it say, “Oh no, no. I don’t let my kids have a smartphone. I don’t let my kids go on social media.” I mean, it’s almost as if they feel like they’ve been drug dealers who won’t let their children buy their own product, which is probably a good thing for a drug dealer. But it raises the question of, if you spent your entire life building out these technologies, presumably at one point, having drunk your own Kool-Aid, you believed that these were inherently for the betterment of mankind, and, you know, the Valley mantra of get rich and make the world a better place.

So do you still share some of the latent optimism of, look, these tools are potent, they’re neutral, a hammer can kill someone and a hammer can build a house; the tool is neutral, and the use of it is all that matters? Do you wrestle with this in raising your own kids? Do you wrestle with this in teaching technology? I mean, I know from your sentiments that you are fundamentally about learning and balance and understanding. But it is an odd thing. It’s an unusual thing in this particular world, the number of people who are almost recanting their own mantras when it comes to the next generation.

AK: Yeah. Honestly, I don’t get it. It’s like they know what they put in the soup, the goulash [laugh], and they don’t want their own kids to have it. For example, we do know that a lot of these algorithms are meant to be addictive. And I can understand their hesitance in having their own kids in there. But they do a disservice to the rest of us, right? So their kids are gonna inherit hundreds of millions of dollars, and so it’s okay for them not to learn something that is gonna be fundamental to them building an important and interesting and meaningful career globally. But for the rest of us, our kids should know, right? They should know the basics of technology for three reasons.

Number one, because to build anything interesting that solves problems, you need these little assistants that I call AI, or they can be chemicals, they can be nanomaterials. We should consider them our little assistants, and they will help us be more competitive and move faster. Secondly, as citizens who are increasingly in a world that is pervaded by technology, we need our children to be aware of what these technologies are doing, kind of have an intuitive understanding, and make decisions on when to switch on and switch off, or what to demand of their governments, so that they live a life of agency and are able to make decisions without being manipulated by these forces, or by the companies that own them. And then finally, I think it’s about teaching our children a proactive, philosophical relationship with technology. There is this fear that if I give them an iPad, they will just start watching YouTube videos or TikTok, and that’s how they’ll be passively trapped by the iPad. Whereas we know perfectly well the iPad has Adobe on it. There are lots of positive, proactive things that they can do on it. They should look at any piece of technology, whether it’s a 3D printer or a laptop or an iPhone or iPad, just like you would kind of look at a piece of paper and a pencil. You don’t expect a pencil to start drawing things out for you. You pick it up and you start doodling. And that comes only through exposure and by nudging through education so that they can feel that. And that’s kind of what we try to do in our courses for girls and kids in Singapore: nudge them towards thinking that you are empowered by this. And you have to have critical decision-making on whether this is good or bad for society.

And even the collective I was talking about for women, it’s all about you, the individual. How can you use this for your career, your kids, your society? Whereas a lot of the discussion is not about the individual. It’s really about trends or analytics or the business or the stock market. And that’s where I think it’s a big disservice. If you don’t expose people to this new relationship with technology, then you’re really condemning them to a lifetime of passively just living listlessly as technologies take over more and more of our tasks and our life around us.

ZK: I also love your phrase of calling them little assistants. Because it’s also important… You know, a lot of people view these things as both the opposite of little and the opposite of assistants. They kind of view them as massive monsters and that it’s gonna get out of control and we’re gonna be living in some dystopian world where the robots are ruling our lives, whether it’s the Terminator or Asimov’s I, Robot, or H.G. Wells. I mean, ever since people started inventing machines, there’s been some latent fear that the machines are eventually gonna take over and dominate us and rule us. And by calling them little assistants, at least for the moment, provisionally, you’re also making it clear that the power dynamic remains unfavorable for AI and technology. Like we’re still writing the rules. And until it’s clear… Yes, there’s a lot of AI software that writes its own rules, right, but it writes them within the parameters that have already been defined. It doesn’t just go off on its own [laugh] and start writing rules.

AK: I totally agree with you. And, you know, I learned this from the Japanese. When I went to Japan, I met this guy who creates these robots. He calls them love-bots, because he said, when you talk about machines, you only talk about big hulking robots in factories or those that Superman has to fight against. But why don’t we just talk about little cute robots that bring you joy? And he said all the robots in Japan are little and cute. And they’re not threatening. And I thought that was so smart. And it was the cutest little thing. And I think that that is really the key.

And you know that in the oldest Buddhist monastery, near Tokyo, there is a robot Buddhist monk. And then everybody’s like, oh my God, this is so weird. Look what’s happening. And then they met the old man, the old monk who ran the monastery. And I loved how he spoke. He said, first of all, “This is not a replacement for me. This is a good way. We are aging. We are dying. There are fewer of us monks. This is a good way for us to communicate to the young generation in a way they find interesting.” But because he was so calm and so clear that it was their sermons, that this was a medium for them to communicate, he immediately just took that drama down three notches, and made it something that was practical and useful and interesting. And I thought this 90-something-year-old monk understood better what to use AI and robots for—for spiritual awakening or support of these people—than a lot of what we see in this dystopian–utopian world, where we’re very reactionary, very dramatic, and then kind of always at each other’s throats with knives.

EV: Yeah. And actually when you see some of these robots in action—for instance, we have an article on The Progress Network about Robin the Robot, which goes to children who are patients in hospitals. And it talks to them. It remembers who they are. It remembers their conversations. And it’s incredible when you watch the videos. The kids love it. The kids are hugging it. It’s like a friend. And to be honest, when I first read the pitch for it, I was like, that sounds kind of creepy, like sending this robot into these poor children’s lives who are sick at the hospital. But actually, it seems to have had a wonderful effect on them.

ZK: Just like Clifford the Big Red Dog. I mean…

AK: [Laugh.] The key question is, it can do good, and now we have to govern the owner of this Robin, right? Now, the owner of the company that creates it has to adhere to and comply with a set of governance rules and regulations, and be audited or at risk of being audited, to make sure that they’re not manipulating the children to buy, I don’t know, something online, and that it is not recording them so that the information can later be sold to an insurance company once they’ve gotten better. So when you have governance, then you can enjoy the benefits of these machines. But when you don’t have governance, don’t have controls, that’s when it becomes a problem. And we really see that in China. They’re trying to do that now.

ZK: I think one of the challenges of governance and government regulation is that—and you see this a lot in the EU, you see it increasingly in the United States, I think you actually see it less in a place like Singapore—is a high level of antagonism from the regulators toward that which they’re regulating. Kind of an a priori position of, these things are essentially bad. We can’t completely squelch them, but we’re gonna do our best to box them in. And that’s certainly true about web3. And it’s certainly true about crypto, in terms of the attitude of regulators. And it’s interesting, when you look at a lot of initial regulatory efforts in the 20th century, most regulators recognize—or regulatory frameworks recognize—that whatever they were regulating, safe food, clean water, they weren’t antagonistic toward food and water, right? They were trying to make sure it was constructive. And I think this is important. I think a lot of the regulatory culture loses that when it comes to technology. Like they really are inherently antagonistic. They are as antagonistic as like tech bro culture is ridiculously, naively utopian. And it’s like the two of them ping-pong off of each other in a really unhealthy way when it comes to, what are we gonna construct that is most favorable toward a balanced operating environment for all of us.

AK: Yeah. I think that’s very unfortunate and true. And really, what it speaks to is that you have to have people who are knowledgeable in the boardroom or at the strategic level involved in all these policy and regulatory authorities. Usually there are a lot of lawyers and economists and finance people. And then you get some scientist that you think is kind of geeky and weird, and you call him or her in for a few minutes, but you don’t really let them sit in. You don’t really grapple with the situation. Because on the one hand, there is ignorance. And then there’s a fear and shame of ignorance. And that doesn’t help, right? And of course, Silicon Valley and all haven’t helped, because there’s this tech elitism where you don’t just explain things in simple, logical ways. And that is also very unfair, because people do deserve to know, and you should explain it to them.

When I go into boards or anything like that to advise our clients, I make sure that we put it in a way that’s understandable to them. Certainly, they do it for me when they explain the business to me. So this kind of mutual respect and openness and willingness to explain to each other will put people’s guards down and allow them to be vulnerable enough to say, “You know what? I don’t understand that. Can you explain it to me and have more of a conversation?” And I think then it can be more collaborative.

Now, the other thing is, traditionally, people from the social sciences and engineering have had a communication problem… And so, we need to have this communication training in schools and executive programs, and also in colleges. The number one reason why AI projects fail is not because of the AI. It’s always because of human communication—99.9%. Every techie knows this in the world. So if we improve that we will have much better governance and regulations.

EV: I love how much you’re emphasizing, too, the humility and vulnerability and okayness with saying like, I don’t get it. Actually, for me, until recently with a lot of these things, I was like, I don’t get it. And what really did it for me was this thing that you put out actually about BFF, which was this initiative to welcome women into the crypto, web3 world. They had an hour- or two-hour-long seminar that really broke things down simply. And the fact of the matter is, when you have good communication, people get it. You don’t need to understand every single little bit about how the tech works to be part of the conversation. So I hope that we see more and more of that, of pulling people into the conversation. Because they can understand.

ZK: Emma, are you now a card-carrying crypto member?

EV: Well, going back to this conversation about emerging markets—not that Greece is exactly an emerging market—but I’ve been blocked for months now from buying crypto in Greece because of the verification process, and also because of what’s available in Greece. So I’m actually waiting to go back to the United States to buy crypto, which is weird. I feel like that should not be the case.

ZK: I think you should do a crypto Drachma as your summer project.

EV: [Laugh.] Yeah, that would not go well.

[Audio Clip]

ZK: So Ayesha, as we wrap this up—I guess this is a really open-ended one. I know both you and others have thought about the contrast of sort of how Asian states—Singapore, of course, I guess for a little while Hong Kong, but not so much anymore, Taiwan, Japan, South Korea—have been more holistically integrating policy into technology, right? And we talked about that in Singapore. The pushback remains, though, from an American perspective—and it’s a pushback a little bit against the EU as well, if you think about the EU as maybe occupying a middle ground in these two poles—is the fear of ceding too much control to technocrats, to people who are not… And we’ve talked about all the challenges and all the issues particularly for American technocrats, who can be remarkably un-empathetic, don’t get the real warp and woof of life. I’m not for one moment suggesting that I think Silicon Valley and technology companies have in any way stepped up and acted responsibly in the sense of engaging with the social consequences of what they’ve wrought. In fact, I think they’ve been abjectly missing in action when it comes to that. But there is a legitimate discomfort of, should the state, this kind of paternalistic—at least that’s the view, right, that there’s the paternalistic Asian state saying, “We know best what’s good for us.” Is that your experience actually? You’ve been in multiple countries. You’ve been in Europe, you’ve been in the United States, you’ve been in Asia. Is that your experience of it? Is there a legitimacy to that fear?

AK: I mean, that’s certainly said about it. I haven’t felt it that much, even in Singapore, because I feel like citizens who are self-aware—and not every citizen is, right, even in Asia—do take an interest in such things. There’s a lot of discussion and debate and policy papers that are always published and think tanks that talk about it. And then certainly when you look at other countries—you know, Asia is very big, and a lot of the countries you spoke about are East Asia. But then you look at Southeast Asia—Indonesia, Vietnam, the Philippines—and then of course you look at South Asia—India, Pakistan, Bangladesh—and you see young people, right? So Singapore, Korea, Japan, we’re all aging. But then you see the other people—in Pakistan, the average age is 22 years old. You can imagine the optimism they carry. And they have their mobile phones. Some of them are carrying two mobile phones. This is their connectivity to the world. They are much more interested when the government says they’re going to do things like have a national ID system so that they can do KYC, exactly the problem that you were talking about in Greece, or that they want to encourage digital banks. They’re much more interested in this. So instead of being paternalistic, the government is… There’s a huge, positive response to such initiatives of greater use of technology, almost to the point that I think, because they’re young, they don’t think about the downside. So the government needs to actually hold them back. So, you know, Asia is very varied. It really depends on how mature the economy is. The more mature it is, the more people are educated and hesitate about technologies, think about what the government is bringing in.

But I found in Singapore, over the years, they have pivoted the education system to bring more and more people into Industry 4.0 so that they don’t feel left behind. And that may seem patronizing—I’ll give you an example, you tell me if it’s patronizing or not. I met this man. He said he was an engineer. He used to work on planes at one of the Singapore airline aviation companies. And it turned out he was like 63 or 64. And he said that he was released from his job because of AI and automation. And the day he was released, he knew it was his last day, the Singapore government career office came. And all these people who were in their fifties and sixties, they said, “You know what? You could have a paid internship at this 3D manufacturing fabrication German company. We will pay it, but this company will teach you the basics so that if you want it, and you did a good job, they could employ you. You’ll probably make less money than before, because it would be a junior-level job, but at least it won’t be an abrupt cessation of your job.” And this gentleman, anyway, he was great. He’s gonna be featured in my book also. I was just charmed by his enthusiasm and about everything that he was learning. But he was very balanced. He said, “I’m gonna earn less, but I’m glad they came. I’m gonna keep busy. I don’t think I’m gonna retire for a long time.”

So this is the government coming in, doing a couple of things, right? It’s encouraging AI automation for the company that needs it, so that company does well. But it’s taking care of the citizens so that they don’t fall through. It’s literally subsidizing other companies that are German companies to hire its own aging citizens so that they can learn something, and if they’re useful, can be useful to the company. So there’s a lot happening here that you would consider, in some ways, patronizing, right? They’re saying, “These are important technologies. We know there’s gonna be this frictional, structural unemployment. We’re gonna try to solve this problem.” But I thought that this was a good way of dealing with it, basically, of saying, “These are the technologies we believe are gonna be necessary. We’re gonna include these companies, let them come, but we’re gonna take care of our citizens to mitigate the risks that come.” Whereas, if I look at Pakistan—and I’m from Pakistan originally, so I have a lot of familiarity—there, it’s just this big enthusiasm. Digital banks are gonna come. And sure, some people will lose their jobs. But the government has too many problems to solve in a poor country, so it’s not thinking about that right now. And they think the economy will do well enough for these people to find other jobs. So there are many ways of looking at it. One is more free market. One is a bit more structured and organized.

ZK: Well, Ayesha, I love your balance and your perspective, both global and personal, and your constructive, “if there’s an issue, how do we solve it? If there’s an absence, how do we create a presence?” And you’re doing wonderful work, and it’ll be exciting to see where all that goes. And thanks for being part of The Progress Network. Thank you.

AK: Thank you so much to both of you. It was so much fun to talk to you.

EV: So one thing that—okay, we said a fair number of things that we love about what Ayesha said. But another thing that I loved in particular, that she talked a little bit explicitly about, is also this balance between ages, right? Because you could go into some of these things about the problems of the government in the United States, where you have very old politicians making decisions about technology, and they’re clearly not up on what is happening. And the counter to that would say, let’s flood the government with 22-year-olds. But of course that’s not gonna end up in a great place either. And Ayesha points out like, no, what we need here is a balance. We need the fresh-faced optimism of the 22-year-old in Pakistan. And we need the wisdom of maybe a paternalistic figure in his seventies. And because this diversity-of-age discussion doesn’t occur a lot, I just wanted to highlight that. What do you think?

ZK: Yeah. And I mean, that was so striking, I think, the hearing in Congress where some of the tech leaders including Zuckerberg were being grilled by senators, many of whom clearly didn’t basically understand email, let alone chat and technology and privacy. Not that there aren’t a lot of questions that those people should be asked, but it wasn’t just that they were talking past each other. It’s that the knowledge base about these things was so disconnected. And what’s great is, so much of Ayesha’s work, which is not really evident in a US context, is educating people, particularly women and girls, who have typically not been part of the tech entrepreneurial culture of the past 40 years, right? That’s been a huge, huge failing slash imbalance in technology land. The absence of women. I mean, it’s been a failing in lots of society, but it’s been really notable in a left-leaning, liberal-ish world that you would think would be more balanced in that respect but hasn’t been. And she’s been doing amazing work in integrating more gender parity, or at least gender presence, into this world. And I was also struck by a kind of lack of cynicism that you and I, Emma, are constantly coming up against. I come up against little, you know, demons-in-the-night, cynicism goblins haunting me, which I find really bizarre, just like any middle-of-the-night anxiety, but it’s very hard to live in an American and/or Greek, European context and not just give into the, “yeah, yeah, right.” And it’s great that even within a Progress Network that is trying to support all these voices, there are people like Ayesha, who is so there regardless.

EV: Yeah. I mean, I really appreciated the way that she framed, actually, like people are shy to talk about web3 and technology. Because the sort of gremlin way to phrase that, which I think is also valid, is to say like, one reason why gender parity and gender presence hasn’t been in the web3 world is because, like, when you go onto those discord servers, people are mean, man. They’re not there to be like, “I’m here to educate you, and welcome, and we’re gonna do this.” And you could look at that and say in a gremlin-type fashion, “screw this. This is just another place where things are unequal.” But Ayesha, flip side, she’s like, we’re gonna solve this. We’re gonna create education. We’re gonna create initiatives. Like let’s do it, without focusing on the gremlins.

ZK: So on that note, don’t focus on the gremlins. Let’s do it. All hail Ayesha. Let’s solve our problems. And not assume that the younger generation, however young that young is, is like incapable or the object of terrible things that have been created. They are subjects. And, you know, it’s their world, it’s your world, it’s our world. Let’s figure out how to live in it. Thank you again, Emma, for having these conversations, and we will keep having them.

EV: Thank you, Zachary.

If you wanna find out more information about The Progress Network and What Could Go Right?, please visit our website at theprogressnetwork.org. And if you want something other than gloom and doom when you open your email in the morning, you can also sign up for our weekly newsletter. It’s a roundup of progress news from around the world, and that’s at theprogressnetwork.org/newsletter. And please, if you like the show, if you could tell a friend, share an episode, leave a rating or review on Apple Podcasts or wherever you listen to podcasts, that would help us out a ton. What Could Go Right? is hosted by Zachary Karabell and Emma Varvaloucas. The show is produced by Andrew Steven and edited by Jordan Aaron. Executive produced by Jeff Umbro and the Podglomerate. Thank you so much for listening.


Meet the Hosts

Zachary Karabell

Emma Varvaloucas

YOU MIGHT ALSO LIKE THESE

S2. EPISODE 13

The New Space Race

Featuring Ché Bolden

Will space travel and exploration be left to the 'billionaire boys club'? Executive Director of the Inter Astra group and 26-year Marine Corps veteran Ché Bolden shares with us his views on the future of space.

S2. EPISODE 12

Facing America’s Biggest Challenges

Featuring Judge Victoria Pratt & Lauren Leader

After a string of heartbreaking news in the United States, are we doomed to fear, anger, and a descent into gridlocked politics? Today, Judge Victoria Pratt, an advocate for reforming the criminal justice system, and Lauren Leader, the cofounder and CEO of All In Together, discuss America's biggest challenges and how each has enacted change in large, complex systems.

S2. EPISODE 11

The Interfaith Imperative

Featuring Eboo Patel

How can we live with people who are different from us? Eboo Patel, founder and president of Interfaith America and former faith adviser to President Barack Obama, believes that interfaith living is essential to our collective well-being in an ethnically, racially, and ideologically diverse democracy. And in the United States, we actually do it quite well already.