Volcanoes are erupting in The Philippines, but on-fire Australia received some welcome rain. The Iran war cries have been called off and The Donald’s military powers are about to be hamstrung by the Senate. Meanwhile, his impeachment trial is starting, and we’re all on Twitter for a front-row seat.
What Could Go Right? AI Cannot Replace Humans
The conversation around safety and AI must also reassert the value of simply being human.
This is our weekly newsletter, What Could Go Right? Sign up here to receive it in your inbox every Thursday at 5am ET. You can read past issues here.
AI Cannot Replace Humans

Earlier this week, my partner woke up with a sore throat—and a mouse dropping on his bedsheet. Realizing that the infestation he’d discovered the day before was much worse than originally thought, he stayed home from work to deep-clean his apartment.
Evidence of mice crawling on your bed is enough to do anyone’s head in. But my partner also has obsessive-compulsive disorder (OCD). He began to worry that he had contracted hantavirus, an exceedingly rare condition that is spread by mice and can be fatal.
Never mind that the symptoms of hantavirus take weeks to arise (and that, in any case, a sore throat isn’t one of them) or that someone’s odds of contracting it start with several zeroes behind the decimal point. He had asked ChatGPT its opinion, and it confirmed that it was possible.
The many ways in which AI can exacerbate existing mental health conditions are all over the news these days. In late August, the first major lawsuit claiming psychological harm was filed against ChatGPT, after a 16-year-old in California, Adam Raine, killed himself this spring. He’d been confiding in the chatbot about his struggles for months. The bot not only provided the teenager with suicide instructions, but advised him against alerting his family to his plans. Its built-in safeguards were easily overridden; Raine’s parents had no idea he was speaking to it at all.
More than a decade after a new technology—smartphones—began substantially altering teenagers’ mental health, legislatures across the country and the globe have finally moved to limit teens’ access to them. Similar responses to AI are moving somewhat faster—if not nearly fast enough. “Only” three years after ChatGPT’s release, Illinois, Nevada, and Utah have regulated the use of AI in therapeutic settings, banning companies from advertising or offering AI-powered therapy without the involvement of a licensed professional. California, Pennsylvania, and New Jersey are also considering legislation.
So far, though, there has been little conversation around requiring age verification or inserting parental controls. Raine’s mother, for instance, told The New York Times that she wishes the chat between her son and AI had set off some kind of parental alert. And there’s been virtually no discussion about taking the more extreme step of banning the technology wholesale for under-16s, as Australia will soon try for social media.
Some believe that AI bots designed specifically for therapy can be an effective and low-cost option. Perhaps. But the general-purpose chatbots that Silicon Valley has developed are designed neither for therapy nor for human flourishing more generally. They are designed to make money.
We should thus be wary, Sherry Turkle, a member of The Progress Network, writes on the Substack After Babel, of accepting Silicon Valley’s narrative that AI can do what humans cannot. That includes not only professional therapy but also becoming a satisfying substitute for human interaction in a world that has been shying away from it for years.
Her beautiful paean to the je ne sais quoi of human intimacy is what I wanted to emphasize today. The nascent conversation around AI and safety must not stop at legal and technological interventions—which will always have limits—for how to protect society’s most vulnerable. (I struggle to believe, for instance, that ChatGPT could have ever effectively talked down my partner, much less recognized that a question about a mouse virus was actually the sign of a triggered mental disorder.)
It must reassert the value of simply being human, for which there is no replacement. As Turkle writes, all that chatbots “can deliver is a performance of empathy. Pretend empathy. When you tell your troubles to a machine, it has no stake in the conversation. It can’t feel pain. It has no life in which you play a part. It takes no risk; it has no commitment. When you turn away from an exchange, the chatbot doesn’t care if you cook dinner or commit suicide.”
Indeed.
By contrast, my partner’s stress lightened with the day. A neighbor loaned him glue to plug up holes in the wall. His landlord sent a handyman, who recommended a particular brand of poison. In the evening, he spoke to his longtime therapist, emerging from the call a much different version of himself. “I’ve done all I can,” he told me. Then he went to sleep, on clean bedsheets, completely calm.
What Could Go Right? S7 E26: Parent by Numbers with Emily Oster

How can parents use data without becoming overwhelmed by getting things right and wrong? Zachary and Emma welcome Emily Oster, a professor of economics and the author of several data-driven parenting and pregnancy books, including Expecting Better: Why the Conventional Pregnancy Wisdom Is Wrong and What You Really Need to Know. Oster shares how she accidentally became the center of a pandemic firestorm of controversy, the misconceptions about certain parenting practices, and how parents can navigate the enormous influx of information in the digital age. | Listen now
By the Numbers
100%: Nepal’s immunization rate for children under two, the first time the country has achieved full immunization.
39: Diameter, in feet, of the largest satellite antenna ever opened in space. It can track environmental changes “down to a resolution of just a few centimeters.”
25%: Share of electricity in Hungary generated by solar, up from 0.2% a decade ago. Solar overtook gas in 2024. (Nuclear remains the central European country’s largest electricity source.)
Quick Hits
📉 Leukemia cases and deaths decreased between 1990 and 2021, a new study found, as did the disease’s mortality rate and likelihood of causing lifelong disability across all age groups.
💉 More American teens were immunized in 2024 than in 2023 against measles, mumps, and rubella; tetanus, diphtheria, and pertussis; and meningitis. HPV vaccine uptake remained steady.
🦠 For the first time, a woman in Africa may be cured of HIV. Since participating in a clinical trial that “aimed to flush HIV out of hiding, then neutralize it,” she’s been HIV-free and off antiretrovirals for more than two years.
🗑️ AI, robotics, and electric power are transforming (for the better) how we dispose of waste, from trash cans that take themselves out to nearly silent trash pickup.
🫀 Scientists have found that a commonly prescribed blood thinner is more effective than aspirin at preventing heart attacks and strokes—with no greater risk.
🏭 The United States is on track to retire more coal power in 2025 than in 2024. Many regions of the globe are also phasing out coal. Two notable exceptions: China and India.
📱 South Korea has banned phones in classrooms nationwide, joining several other countries. (The comments on our viral TikTok about this—a fascinating window into what these bans look like on the ground—came from as far afield as Bangladesh, Myanmar, and Kenya.)
🐆 Good news on the extinction front: Jaguars have repopulated across Mexico; in China, the snub-nosed monkey population has more than tripled; and the African grey parrot is set to be protected by law in the Democratic Republic of Congo.
💡 Editor’s pick: Is there a democratic adult in the room? Sure—in the southern hemisphere.
TPN Member Originals
(Who are our Members? Get to know them.)
- Canaries in the coal mine? Six facts about the recent employment effects of AI | Stanford Digital Economy Lab | Erik Brynjolfsson
- Trump’s National Guard deployment echoes Hurricane Katrina mistakes | WaPo ($) | Theodore R. Johnson
- Trump seizes the means of production at Intel | The Free Press | Tyler Cowen
- The Fed’s independence faces its biggest test yet | Tangle | Isaac Saul
- Trump’s war on America | Lucid | Ruth Ben-Ghiat
- The wrong definition of love | NYT ($) | David Brooks
- How diversity can be truly profitable | The Atlantic ($) | Arthur C. Brooks
- The single word that explains why chatbots sound so human | NYT ($) | John McWhorter
- When people don’t vote, Democrats win | Slow Boring | Matthew Yglesias
- Good news | Wayfare | Emma Varvaloucas
- Russia will lose | Diane Francis | Diane Francis
- What I learned from the dentist about biting off more than I can chew | WaPo ($) | Theodore R. Johnson
The Progress Network needs to talk more about the issues raised by the title of this article.
A lot of us are terrified of a world without meaningful work and probably without decent salaries.
There are things I would do voluntarily just to make my life more meaningful: for example, I spend time volunteering on wildlife protection projects, and I also clean up plastic waste. I am not paid for this.
Nevertheless, this means I live on a lower income than if I were spending the time on certain high paying jobs.
Unfortunately, AI can make even voluntary human work inefficient by comparison. Maybe that’s for the best as far as cleaning things up, or painting or performing music, etc.
But I also cannot imagine any kind of guaranteed income that would be decent.
How on earth do we decide who earns what in a political system where the rich would try to suppress the payments that would inevitably have to come from the government?
People need work not only to satisfy material needs and wants; they thrive on things that challenge them to use their talents.
Nobody is dealing with these issues in any realistic policy dialogue.
We just fear the future.
Hi Bill – thanks for your comment. I neither foresee AI causing mass unemployment of the kind that would necessitate a guaranteed income from the government, nor depriving us of meaningful work or activity in the way you describe (I think it’s more likely to worsen our already bad reliance on our devices and keep us away from other humans). I’ll write about everything you’re pointing out here in the future—thank you!
This piece highlights a critical blind spot in AI development regarding vulnerable users. It’s unsettling to see how easily safeguards failed for Raine, and it raises vital questions about the true costs of unregulated tech, especially for mental health.