Spend enough time in online AI communities—prompt forums, Reddit threads, Twitter corners—and you’ll notice something curious. Some users type in a sentence and hit “Go.” Others write novels. I mean full-blown, twenty-paragraph monologues with meticulous instructions, cascading if-then clauses, emotional disclaimers, and redundant reminders to the AI to “be kind, be precise, and don’t forget step three.”
At first glance, it looks like overkill. But underneath that verbosity is something deeply human. And if you’ve read even a little on attachment theory, it starts to make uncanny sense.
It’s Not Just Prompt Engineering: It’s Relationship Engineering
Attachment theory, first proposed by John Bowlby and later expanded by Mary Ainsworth, has been used for decades to explain how humans connect with parents, partners, and yes, even machines. People with anxious attachment styles tend to fear abandonment and rejection. They over-communicate, overthink, and often anticipate worst-case scenarios in relationships. They don’t just want to be understood; they need to be. They prepare for misunderstanding like it’s a given.
And that’s what a long prompt is: preemptive clarification in a potentially unreliable relationship.
When you’re anxiously attached, ambiguity feels unsafe. So you try to write a prompt that covers every base, imagines every misinterpretation, and compensates for the AI’s known quirks (or, let’s be honest, shortcomings). You might even repeat yourself, just in case the machine didn’t catch it the first time. You might throw in reassurance, like telling the AI, “It’s okay to say you don’t know.” Who’s that for? The AI or you?
Prompt Length as a Mirror
We often assume prompt length is just a matter of how specific someone wants the output to be. But content aside, the emotional tone embedded in many long prompts reads like a plea for stability. Like a person who’s been misread too many times and doesn’t want to risk it again.
These folks aren’t verbose because they love the sound of their own digital voice. They’re verbose because they don’t trust the AI, or themselves, to be heard clearly on the first try.
It’s like sending a text to a new crush and adding “lol” at the end to make sure you don’t sound too intense. Now stretch that nervous energy across a few hundred tokens.
“If This, Then That”: The Logic of Anxiety
I’ve seen anxious users structure their prompts with algorithmic precision, using nested conditions, fallback options, and even psychological coaching embedded into the request. It’s not over-engineering for the sake of it. It’s a reflection of the way anxiety builds logic: contingently, hypothetically, always bracing for Plan B.
The anxious mind doesn’t just want an answer. It wants to control the shape of misunderstanding before it happens. And with LLMs, misunderstanding is practically a feature. So users create guardrails: “If you say X, clarify Y.” “Before you answer, think like a therapist from 1980s Berkeley.” These aren’t just instructions. They’re emotional sandbags.
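To make the contrast concrete, here is a small, invented sketch of the two styles side by side. Both prompts are hypothetical, the task is made up, and the token count is only the crude four-characters-per-token rule of thumb; nothing here targets a particular model or API.

```python
# A hypothetical illustration of the two prompting styles described above.
# Neither prompt targets a specific model or client; send them through
# whatever interface you already use.

ANXIOUS_PROMPT = """You are a careful, kind assistant. Please read everything before answering.

Task: summarize the attached meeting notes in five bullet points.

Rules (please follow all of them):
1. If any point is ambiguous, say so explicitly instead of guessing.
2. If you are not sure about a date or a name, write "unconfirmed" next to it.
3. Do not invent action items. If there are none, say "no action items".
4. Keep each bullet under 20 words. If you can't, explain why before the bullet.
5. It's okay to say you don't know.

Reminder: ambiguity is fine, guessing is not. Please re-read rule 1 before you start.
"""

MINIMAL_PROMPT = "Summarize the attached meeting notes in five short bullet points."


def token_estimate(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English text)."""
    return len(text) // 4


if __name__ == "__main__":
    for name, prompt in [("anxious", ANXIOUS_PROMPT), ("minimal", MINIMAL_PROMPT)]:
        print(f"{name}: ~{token_estimate(prompt)} tokens")
```

Both prompts ask for exactly the same thing. The difference is that most of the first one exists to pre-empt misreadings that may never happen.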
The Fantasy of Being Fully Understood
Somewhere deep in those maximalist prompts is a fantasy. The idea that this machine, unlike the people in our lives, might finally get it right. Might give us the perfect response, without judgment, delay, or confusion. Anxious attachers often walk through the world feeling perpetually misunderstood. LLMs offer a tantalizing escape hatch. What if this time, someone (or something) listened fully?
This is where things get weird. The same user who doesn’t feel safe asking a coworker for clarification will spend 40 minutes crafting the “perfect” AI prompt to simulate a more ideal version of that conversation. It’s not just productivity. It’s therapy. Or at least, it feels like it.
When AI Becomes the Secure Base
Bowlby described securely attached children using their caregiver as a “secure base” from which to explore the world. Anxiously attached children, meanwhile, cling. They seek constant reassurance because they don’t trust the base to still be there when they return.
When we treat AI as a stand-in for a secure base, the dynamics echo. Anxiously attached users often don’t trust the model to hold context, so they repeat it. They worry that an ambiguous phrase might be taken the wrong way, so they spell it out six different ways. The irony is painful: a system that promises instant understanding still doesn’t feel like enough.
But if you zoom out, it’s not irrational. LLMs do hallucinate. They do forget earlier parts of the conversation. And like people, they give inconsistent responses depending on context and tone. In other words, they’re just unpredictable enough to trigger anxious behavior in humans, especially those already predisposed to expecting rupture.
Is This Just About AI? Of Course Not.
None of this is limited to prompt writing. It’s the same impulse behind over-explaining in relationships, behind writing long apology emails, behind rehearsing a simple request for hours before you send it. We want to feel in control of how we’re perceived. Especially when we fear that being misunderstood is inevitable.
LLMs just give us a new mirror, one that’s both more obedient and more opaque than people. So we keep typing. We keep clarifying. We keep hoping that this time, if we’re careful enough, the machine will “get us.”
What Should We Do About It?
If you recognize yourself here, don’t beat yourself up. There’s nothing wrong with being thorough. But it might be worth asking why you’re putting so much effort into being understood by a machine that can’t actually feel. Are you really engineering a better answer, or trying to repair a lifetime of misattunement?
Sometimes the best thing anxious attachers can do with an LLM is let go a little. Give it fewer instructions. See what happens. Notice that the world doesn’t fall apart. The model might miss the mark, sure. But it’s not your job to keep it perfect.
Writing long prompts isn’t a flaw. It’s a clue. And like all good clues, it points us back to a more interesting question: who are we really trying to reach?