AI labs want more of your time. That’s a serious problem.
Healthy disengagement — not stickiness — should become a core safety feature in AI chatbots.
COMMENTARY by Michal Luria and Amy Winecoff
OpenAI executives have claimed for years that they don’t design products to maximize user engagement. It’s what CEO Sam Altman told the U.S. Senate during a hearing in 2023, and what the company has posted on its public blogs. But reporting from The New York Times tells a different story: Over the years, as the company faced increasing competitive pressure, it made deliberate choices to prioritize engagement over safety. And it did so in the face of known issues related to the mental health of its most vulnerable users.
The decision to focus on engagement instead of building safeguards into ChatGPT, the company’s wildly popular chatbot, is central to at least seven lawsuits against the company. While OpenAI is getting the most attention for its doublespeak on user engagement, it’s hardly alone: Other AI companies such as Anthropic and Cohere also publicly reject engagement as a goal while appearing to pursue efforts that extend time on platform.
AI companies have attempted to set themselves apart from ad-based social media platforms such as Facebook, TikTok or YouTube, which are clearly incentivized to promote attention-grabbing content for revenue, and heavily criticized for doing so. These companies have argued that they lack this incentive: Their businesses currently rely on subscriptions, API access and enterprise sales, so they don’t need to optimize for metrics such as time on platform or clicks.
That distinction may have held water when top AI companies were structured as research labs, but the immense financial pressures of developing AI have necessitated a shift toward commercialization. Companies are increasingly turning their technologies into products, and hiring the kinds of people who can make them compelling. With that comes a focus on metrics: how long and how frequently people use the product. This creates an uneasy tension between companies’ original goals of building AI that benefits humanity and the new reality of building technology that keeps users paying for subscriptions.
When companies refuse to admit their focus on engagement, they obscure the potential risks of their technologies, and delay conversations about accountability when those risks turn into real-world harms.
Nearly 10% of adult users and 20% of K-12 students say they use chatbots for social interaction and companionship, and a substantial proportion of both groups report interacting with these systems multiple times a week. While users may benefit from using AI to learn and create, interfaces designed for engagement can encourage people to interact far beyond their original intent, fostering deskilling and emotional dependence. Heavy use of AI can also worsen loneliness and has been linked to cases of self-harm, suicide and what has been termed “AI psychosis.” Increased engagement is especially concerning given that AI safety guardrails tend to break down during longer user sessions. In other words, the longer and more frequently vulnerable people interact with current AI systems, the more at risk they may be.
These risks are inherent to how today’s AI systems are built. Training processes such as reinforcement learning from human feedback can produce friendly, empathetic and socially coherent behavior in models. But the same methods increase sycophancy: models’ tendency to endorse users’ beliefs, moral positions and emotional frames, regardless of whether those views are accurate or constructive. Interacting with sycophantic systems reduces users’ willingness to work through interpersonal conflicts, while increasing their desire to keep using sycophantic AI. Thus, the very techniques that make AI feel socially human can undermine human social relationships.
It’s not just what chatbots know that matters. How they are designed to interact with users is just as important. Some chatbots, for example, end every response with a follow-up question, encouraging the conversation to continue. Even the format of a “chat” prolongs engagement: Unlike typing a search query or scrolling a feed, conversing in natural language triggers our instinct to see the system as a person, leading us to attribute empathy and personality to it, and to keep talking.
Engagement itself isn’t inherently a problem. There’s nothing wrong with designing AI systems that people genuinely enjoy. The problem arises when companies insist that engagement is not their goal, even as they build products optimized to encourage it. If AI companies are serious about not supporting unhealthy forms of engagement, the first step is to acknowledge how stickiness is built into their systems. Claiming otherwise delays serious conversations about designing for genuinely responsible engagement.
Acting responsibly means measuring and reporting engagement honestly. Even if the goal is not to maximize time on platform, companies should measure it anyway, and strive to give users the information they seek without unnecessary interaction. When a platform notices excessive use, that could trigger a safety flag, and even prompt the platform to suggest the user take a break.
In fact, AI developers should design for healthy disengagement. Currently, many chatbots end interactions with suggestions for next actions, perhaps in an attempt to be more helpful. Instead, chatbots need to recognize when a user has gotten what they need, and help them end the conversation. Companies should also be transparent about how they are working to prevent overuse and dependence. Simply claiming that they aim to foster “healthy engagement” is not enough. As AI becomes unavoidable and entangled in many aspects of society, everyone deserves a better understanding of how these systems are built, and what the multibillion-dollar corporations behind them are doing to safeguard the public.
Perhaps now, from within the eye of the storm, AI companies can stop claiming they don’t optimize for engagement. Whether the goal is called retention, user satisfaction or helpfulness, the outcome is the same — systems that are designed to keep people interacting, sometimes beyond what is necessary, healthy or desired. Denying that problem only dodges accountability. It doesn’t make it go away.
Michal Luria is a research fellow at the Center for Democracy & Technology, leading the organization's research on AI chatbots and child safety online.
Amy Winecoff is a senior technologist in the AI Governance Lab at the Center for Democracy & Technology. Her work focuses on governance issues in AI, such as how documentation and evaluation practices can support responsible design and deployment of AI systems.