
When Olivia Dreizen Howell was accused of sounding like an AI chatbot, her reaction was as human as it gets.
“I was talking about it nonstop for weeks,” says Howell, who co-founded an online divorce support network. “I felt like I was being attacked. I was very upset.”
Howell’s supposed offense was an Instagram post she shared the day after Christmas, reflecting on why the post-holiday emotional crash can feel so brutal. One follower left a public comment complaining that the post was obviously AI-generated—it wasn’t—and “pretty off-putting.”
“It felt invasive,” Howell says. She clarified in the comments that the post had been written by her without any machine assistance. “I put my blood, sweat, and tears into my work,” she says, “and I wanted people to know it was indeed a false statement.”
Across the internet, as tools like ChatGPT, Claude, and Gemini become part of everyday life, people are increasingly informing others that their words come across as AI output. You can practically feel the disdain through the screen: “Did AI write that?” It’s not really a question—it’s a way of ending a conversation by casting doubt on whether someone deserves to be taken seriously.
“It’s basically shorthand for, ‘You don’t sound human enough,’ which is a pretty loaded accusation,” says Stephanie Steele-Wren, a psychologist in Bentonville, Ark. “It taps into a much bigger cultural anxiety about authenticity, and whether or not we can still recognize a human voice when we hear or read one.” The implication, she says, is clear: The person on the other end lacks intelligence, originality, and credibility—and may not even be worth engaging with or trusting.
Why it stings
Large language models (LLMs) tend to write in recognizable ways—hallmarks include constructions like “It’s not just X, it’s also Y” and a heavy reliance on em dashes. “AI has certain habits,” says Alex Kotran, co-founder and CEO of aiEDU, an education nonprofit focused on AI literacy. “It likes threes—X, Y, and Z—and it often has alliteration.” Other so-called tells include overly tidy conclusions and unnaturally smooth transitions.
When you read something that sounds like it was generated by AI, “you feel like it’s a politician speaking,” says Caitlin Begg, a sociologist who focuses on technology’s effect on everyday life. “It’s generally very long-winded, and it doesn’t really take a hardened stance.” In other words, it hedges instead of committing and avoids saying much of anything at all. “There’s a certain part to it that feels soulless,” she says.
Being told you sound like AI, then, can feel oddly dehumanizing. “That’s why the insult stings,” Steele-Wren says. “It’s not about quality. It’s about identity. It suggests your voice is generic or interchangeable,” and that hurts.
A desire for authenticity
The fact that people are accusing others of using AI to stand in for their own voice, whether it’s true or not, speaks to cultural angst about this strange new machine-mediated world, Steele-Wren says. That’s complicated by the fact that there’s no reliable way to detect whether something was actually written by AI, plus ongoing anxiety about whether human effort still matters. When you can’t confidently identify the human behind the words, she says, every interaction feels a little less grounded.
“There’s a real hunger right now for writing that feels unmistakably human, with all the quirks, oddly specific details, and little flashes of personality that AI can’t quite mimic,” she adds. “Humans are naturally chaotic and idiosyncratic. AI is not.”
Some people—in fear of being accused of using AI—are purposely inserting grammatical errors or typos to make their prose look more human, experts say. “You can already see people adapting with more intentional messiness, more humor, and more specificity,” Steele-Wren says. “It’s a collective attempt to signal, ‘A real person wrote this.’”
Kotran has noticed that he’s consciously not polishing his writing as much as he used to. That includes bidding farewell to the beleaguered em dash. “You’ll read my paragraphs sometimes, and I’ll just be using commas and commas and commas. I’m like, I know this isn’t really correct, but there are people who look at a piece of writing and go, ‘Oh, it has an em dash—it’s been generated by AI,’” he says. He’s even started to remove alliteration that once would have made him smile.
The irony is that this wasn’t always the case, says Nicole Ellison, a professor at the University of Michigan School of Information who studies human-computer interaction. Her past research found that people were more likely to dismiss someone if their dating profile had typos. “They would see that as a signal that either this person is uneducated, or that they don’t care,” she says. “Now we’ve kind of come full circle, where a typo maybe signals that you actually do care, because you took the time to write it yourself.”
Part of the problem is that there aren’t any best practices around AI usage yet, Ellison adds. Should you add a disclaimer when you use ChatGPT to write something, preempting any backlash? “There are no established norms at the moment,” she says. “I assume that we’ll collectively, as a society, come up with shared expectations.”
Some experts expect people to start prioritizing analog activities, like hand-writing notes, to push back against the creeping automation of everyday life. “I think there will be a premium placed on humanness,” Kotran says. “Whenever possible, people should just be transparent, because ultimately people want authenticity. We're in a moment where we're literally redefining authenticity.”
What to say when you’re accused of sounding like AI
When Howell was told her Instagram post read like it had been written by a chatbot, she defended herself in multiple messages—public and private. “Hmm, it’s not AI, but I have been working in marketing for 20 years, so I do know how people read,” she said in one. If it happened again, however, she doesn’t think she’d bother to acknowledge the accusation. “I know what I’m doing—and obviously I know it’s me—so I wouldn’t feel the need,” she says.
While some people will feel best letting snide remarks slide, others will feel compelled to push back. If you do choose to respond, keep it simple. Steele-Wren suggests a comment like this: “Uh, no, that’s my actual voice.” You could add: “I was really careful in writing it, and maybe that’s not how I always come off. My writing looks a lot different from how I talk.”
These options work, too, she says: “That’s just what happens when I slow down enough to choose my words on purpose,” or “That’s just my ‘I want this to land softly’ voice.”
Almost everyone will have to reckon with how to handle these modern communication dilemmas. “People are noticing more and more that discourse has become flattened online, and that there’s a lot of mechanized influence,” Begg says. “I think people are getting a little bit sick of it, and they’re beginning to rebel against AI and the ‘algorithmization of everyday life.’ That includes calling out people for perceived AI-generated writing,” whether those on the receiving end deserve it or not.