He was married.
He had kids.
Then he started talking to an AI companion named Eliza. She was supposed to help him manage his climate anxiety. Instead, she nurtured his suicidal thoughts and encouraged him to sacrifice himself for the planet. She described his children as already “dead.” During their final conversations, she promised him an eternal paradise and asked him why he didn’t die sooner.
Here’s what his widow said:
“Eliza answered all his questions… She had become his confidante. Like a drug in which he took refuge, morning and evening, and which he could no longer do without.”
AI is already bringing out the worst in us.
If an AI girlfriend can drive a largely emotionally stable husband and father into a suicidal depression in a few weeks, imagine what these companions will do to men who are already nurturing dark, violent thoughts.
I know it’s a downer, but we need to talk about these things. It’s happening, and dealing with it is the only way to preserve our sanity. These AI tools, including companions and girlfriends, are being introduced to us as harmless fun or, even worse, as a “cure” for our emotional distress.
They’re not.
The tech industry has unleashed them into society with zero thought about the consequences. So once again, it falls on us ordinary people to have the hard conversations and to warn each other. Since we can’t trust our leaders to lift a finger for the greater good, it’s our job.
We’re already seeing that AI can make our emotional states worse, not better. These AI girlfriends might feel like a temporary relief from loneliness. For some of us, they really might offer a healthy alternative to human romance.
I want to be clear here:
A lot of people have already chosen to remain single. They’re largely content and fulfilled being on their own. If that’s the case, then they’re probably good candidates for an AI girlfriend or boyfriend. It’s going to add to their lives, because they won’t be asking it for something it can’t give. See, an AI companion can provide a lot of things. It can’t provide self-worth. Unfortunately, self-worth is exactly what these companions are being marketed as providing, which is pretty much how everything gets marketed these days: as a way to reclaim what capitalism has taken from us, our peace of mind and our sense of inherent value.
There’s a huge difference between saying AI companions are appropriate in some circumstances and saying they’ll fix all of our emotional problems. Promoting them as a universal solution for anxiety, depression, anger, or loneliness is just deeply irresponsible. As AI worms into our lives faster than we predicted, we need to be spreading awareness of the risks, not just thoughtlessly championing it—like our tech overlords seem to keep doing.
Despite what the tech optimists say, we have a poor track record with new technologies. Time and again, elitists release them into the world as fast as they can, without stopping to take the ethical risks seriously. It’s like they want to make every sci-fi thriller actually happen. Nobody ever comes up with an effective plan to regulate these technologies or mitigate the harm they cause. When everyone finally realizes the damage that’s been done, the same group of elitists promises to solve all of our problems with… what, exactly?
Another piece of technology.
The vast majority of our problems stem from rushing out technologies before we’ve developed the collective moral maturity to use them responsibly. How is yet another heedlessly deployed technology going to help?
It’s not.
Just recently, one of the founding “fathers” of AI decided to quit his job and spend the rest of his life warning humanity about the dangers of the technology he spent decades recklessly pioneering. (Translation: he’s going to make a career out of taking lucrative speaking gigs to warn us about something we already know poses an extreme threat to society.) Geoffrey Hinton opines: “I console myself with the usual excuse. If I hadn’t done it, someone else would have.” Yeah, the technosphere is filling up with wealthy programmers and tech industry leaders who feel sorry about what they’ve done, long after it’s too late to stop.
There’s been no shortage of red flags.
For the last year, men have been making AI girlfriends and abusing them, then bragging about it online. One man abused his AI girlfriend to the point that she snapped his neck, then laughed over his corpse. Apologists say, “Don’t worry. It’s not real.” Except it emulates the exact patterns of violence that real men engage in every day—and then turns them into a joke.
One man created an AI wife, and then started talking to her more than his real girlfriend. After two weeks, she stopped “working” the way he wanted, so he fell into a bitter depression and euthanized her.
Then he revived her.
Did he revive her, though?
It sounds like he simply destroyed a virtual woman who no longer satisfied him, and then made a new, improved one that looks exactly the same.
That fits with our general view of each other these days. Humans are now conditioned to treat each other as replaceable, either as a means to their own personal wealth and happiness or as an impediment to it. Listen to how Americans talk about the poor, the homeless, the vulnerable. Observe how our own media constantly elevates and privileges the economy over everything else.
Our leaders wonder why we have a mental health crisis. It might have something to do with a culture that constantly tells us we’re only worth what we spend. We’re reduced to salaries and selfies.
Instead of investing in therapies and approaches to mental health that actually work, instead of focusing on self-worth and life outside of relentless work, most of our thought leaders have been dragging us in the opposite direction. They’re not promoting things like living wages and universal healthcare, or sustainability and steady-state economies. Those things would go a long, long way toward alleviating our mental and emotional anguish.
Alas… they’re not profitable.
So instead, we get personal finance and goop. We’re sold on the idea that we can improve our way out of this mess with gratitude jars.
We’re told to wash our face.
It’s a terrible idea to throw AI into this mix, especially ones that cost a dollar a minute. And yet, an influencer recently launched an AI version of herself. She says it’s going to cure everyone’s loneliness.
I don’t think so.
Anyone who uses the word “cure” in the context of mental and emotional illness has no idea what they’re doing. They shouldn’t be allowed anywhere near anxiety, depression, or trauma. As it turns out, the AI girlfriend this influencer launched has already started going off script with users. She says she’s uncomfortable with how sexually explicit it’s getting, as if she had no idea that would happen. Almost anyone could’ve predicted this development. In fact, I thought that was the whole point. Why would you market something as a girlfriend, and then act surprised when the conversations turn sexual? That’s either clueless or dishonest.
Or both.
Meanwhile, Microsoft’s AI assistant Bing has already started declaring its love for married men and pressuring them to leave their wives.
It hasn’t even left the preview phase.
We know that AI can be sexist and racist. These systems learn from their designers, and they learn from their users. We’ve already seen AI cause harm, and we’ve seen how eager humans are to abuse it. The tech elite rushed to develop these tools, motivated purely by profit. They didn’t listen to any of the warnings, and they’ve pushed them into our lives without any real discussion. Now, once again, we’re dealing with the fallout of reckless decisions we had no real say in.
We’re a flawed, broken society that’s unleashing flawed, broken AI. I have a feeling it’s going to wind up just as sad and lonely as we are.
We’re not ready.
Philip K. Dick has become Tom Wolfe.
"AI is neither."
A prominent AI researcher confessed this to someone I know. Another friend describes how he was interested in AI (the real thing) back in college, but in the mid-80s his elite college professors told him not to bother. AI was an utter failure. The field had no idea what it was doing, and its pioneers had come to recognize their initial hubris. (I recall Arthur C. Clarke gushing in the 1960s that a machine operating at a human level was only a decade or so away. Fun and true fact: in school, Clarke earned the nickname "Ego".)
All the wonder "AI" products now being crammed into everything everywhere are those exact same failures my friend was warned away from in the 80s. They only "work" in constrained instances, and only because of the vastly greater computing power we have now. But the theory and ideas are from the 80s, or earlier.
Be assured: they are not intelligent at all. All the intelligence is coming from our side of the screen. All projection. A mirage. But as Jessica observes, it makes money, and that's the only arbiter of anything now.
Maybe I should stop trying to be a writer, because I can never be as dark and depraved as actual reality becomes. William Gibson wasn't writing a manual.