Millions of elderly Americans live alone or lack companionship. They should go die while experts figure out if AI buddies designed to keep them company have the right “guardrails.”
That’s my distillation of a presentation at a recent conference focusing on technology governance, including our current hot-button issue, AI. I came away from the presentation more than miffed by the implicit indifference that people in tech policy and regulatory compliance show toward vulnerable communities.

Derided in a morning leadoff talk was a product called ElliQ, an AI buddy offered to provide older people with interaction, companionship, and assistance such as medication reminders. (“Buddy” is my term; ElliQ calls its device a “sidekick.”) The speaker showed a video of a woman expressing appreciation and even love for this device given its role in her life. “She checks up on me. She finds out how I’m doing. She remembers little things like I wasn’t happy yesterday,” the ElliQ user says. The woman has even put a feminine face on her device. It’s not entirely comfortable to watch.
The speaker, a prominent tech policy expert, glibly invited the audience to join him in focusing on what is “creepy” about such a product. Hilarity and guffaws in the audience betrayed little sympathy for a woman who has found an enhancement to life at an advanced age.
While conference-goers mused, millions of elderly people elsewhere sat alone, many in front of blaring televisions, passing whatever time they have left. Family members, if they have them, can’t always be there to remind them to take their medications for blood pressure, mental health, and so on. Such gaps in support allow elderly lives to degrade faster and, more or less directly, hasten death.
That’s where I start my cost-benefit analysis: with the rather obvious benefits of giving people in need an AI buddy. Self-care reminders, the sense of having an interested companion, and someone with whom (something with which) to share ideas and even learn can improve the quality and length of life for millions. The sooner such a product is in wide use, the greater the benefits, not least because experience will generate improvements, driving its value even higher.
Of course, experience will turn up faults and concerns with AI buddies. There are plenty.
Privacy is one. What happens to information directly shared or gleaned from interactions with an AI buddy? Who has access to that information? What might they do with it? I’d answer most of those questions through contracts (treating personal information as property). The push and pull between companies and customers will hone the products’ terms. Customers collectively adopt or decline technologies and products based on privacy interests and concerns.
What if the buddy engages in some kind of manipulation, whether by design or by accident? If an AI buddy seeks to upsell its own services or the company’s other products, is that ethical? People can adopt wretched political and social views through natural friendships, and in policy terms that is universally accepted. Are the political “we” empowered to intervene so that AI buddies keep their companions’ thinking within acceptable bounds?
How can AI buddies be managed so that some human agent (a family member, attorney, or other traditionally responsible actor) can provide oversight? When does an AI buddy have an obligation to report the potential for self-harm or harm to others?
These problems generally mirror those in human relationships. Market processes will require providers to create solutions and protective tools so their products continue to sell. Where incentives fail, liability rules often kick in. And more good news for hand-wringers: AI systems can help create and undergird their own protective functions.
For me, these problems all arise in the context of opportunity. The risks are correctly compared to the status quo, in which people suffer, not to some ideal where everyone has a human companion or resources better than current reality offers. There is a budding, prosocial entertainment market here, far better than passive content consumption. AI buddies seem certain to make things better on net, maybe vastly better.
The presentation on ElliQ was structured around the idea of connoisseurship: We should assess new technologies the way connoisseurs appreciate wine. Looking at a product like ElliQ, we should recognize our own gut reactions, that sense of creepiness the speaker incited in the audience, and then interrogate more deeply the problems we’d have to solve if the product is to be acceptable.
That approach was right for the room: people in tech policy and regulatory compliance who don’t bear the costs of saying no to progress. But the wine-tasting metaphor was vinegar for me. The more appropriate metaphor, and the message the presentation sent to the lonely, elderly, and infirm, was, “Let them eat cake.”