
Stuck talking to ChatGPT? Welcome the age of doomprompting

In 2020, doomscrolling was named one of Oxford Languages’ words of the year. The term captured a collective habit born in the pandemic: endlessly refreshing social media and news feeds and consuming an overwhelming stream of disasters, conflicts, and crises. It was a coping mechanism that left people more anxious, exhausted, and disconnected than informed. 

Now, five years later, that same compulsive pattern has evolved into something new. In the age of generative AI, the infinite scroll has shifted from newsfeeds to prompt boxes. The phenomenon – dubbed doomprompting – is the tendency to get stuck in endless cycles of refining and resubmitting prompts to large language models (LLMs), chasing the illusion of productivity while quietly outsourcing our own thinking. 

From scrolling to prompting – the new digital trap

The roots of doomscrolling lie deep in our evolutionary wiring. Humans are naturally attuned to negativity bias – we pay more attention to threats and bad news than to positive information, because in ancestral environments, overlooking danger carried high costs. During the COVID-19 pandemic, this tendency was turbocharged by uncertainty, fear, and a craving for control. 

However, following this natural instinct to anticipate every possible threat comes at a high cost – often paid with our mental health. A University of California, Irvine study found that people who consumed six or more hours of daily media coverage of the Boston Marathon bombing reported higher acute stress symptoms than those who had been physically present at the event. 

In a similar vein, doomprompting emerges as the AI-era sibling: the urge to craft prompts and receive neat, structured answers reflects a compulsion to regain agency in a world mediated by infinite uncertainties. Studies confirm that media consumption loops are tied to existential anxiety and distrust, showing how compulsive digital habits can erode well-being. Instead of endlessly scrolling for the next headline, nowadays people endlessly refine and resubmit prompts to chatbots and LLMs, chasing the perfect answer. 

AI systems are designed to offer infinite continuations – “Would you like me to expand?” “Shall I suggest alternatives?” – creating the same addictive cycle of near-completion as social feeds. As one tech writer put it, “the modern creator isn’t overwhelmed by a blank page; they’re overwhelmed by a bottomless one”. 


What is doomprompting?


Doomprompting is a term for the compulsive habit of endlessly refining, rephrasing, and resubmitting prompts to AI systems without ever achieving a satisfying outcome. Regular prompting – prompt engineering – is about giving the model clear instructions to get valuable results. Doomprompting is its darker twin: a loop where the process itself overtakes the purpose. 

So how do you know when you’ve crossed the line into doomprompting? The warning signs mirror the criteria for any other kind of overuse: inability to stop despite negative outcomes, decreased control, and mental distress when the tool – in this case, the AI companion – is unavailable. Research into digital addiction more broadly supports these markers, characterising compulsive digital use as habitual, uncontrolled behaviour that interferes with daily functioning, with early signs including burnout, anxiety, and impaired decision-making. 

This behaviour often develops through more advanced ways of using AI, where helpful techniques turn into unhelpful habits:

● Step-by-step prompting (sometimes called ‘chain-of-thought prompting’) can encourage clearer thinking, but in doomprompting it can also lead to over-explaining. Users may keep asking the AI to go further instead of deciding on an answer. For example, someone might ask for more steps on a simple maths problem long after the answer is clear.

● Layering instructions (or ‘prompting hierarchy’) can quickly become too complex, creating the illusion of being thorough while obscuring the original question. For example, instead of simply asking for a travel itinerary, a user might add prompts about flights, hotels, and restaurants, ending up with something overcomplicated and harder to use.

● Using examples in prompts (often called ‘few-shot prompting’) is intended to steer the AI with a few representative examples, but it often turns into long sessions of trial and error. A writer might feed more and more sample headlines into the model, only to end up with results that look more generic, not less.
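For readers curious what few-shot prompting actually looks like under the hood, here is a minimal sketch in Python that assembles a prompt from a handful of example pairs. The function name, format, and sample data are illustrative assumptions, not tied to any particular model or API:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: a task description, a handful of
    worked examples, and the new input the model should complete.

    Illustrative sketch only - real systems vary in formatting."""
    lines = [task, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    # The final, unanswered query is what the model is asked to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

# Two or three examples are usually enough to set the pattern.
prompt = build_few_shot_prompt(
    task="Write a short, punchy headline for each article summary.",
    examples=[
        ("A study links heavy news consumption to anxiety.",
         "Doomscrolling Is Wearing Us Down"),
        ("Users report spending hours rewriting AI prompts.",
         "Trapped in the Prompt Box"),
    ],
    query="Researchers describe a new compulsive AI habit.",
)
print(prompt)
```

The point of the technique is to stop here: a few examples, one query, one answer. Piling in ever more samples is where guidance tips over into doomprompting.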

What begins as a smart way to guide AI can quietly turn into an endless loop of overthinking. “We mistake the performance of thinking for thinking itself, the theatre of problem-solving for problem-solving itself, the ritual of creation for creation itself. And the satisfaction feels real,” notes Anu, a writer and founder from New York. What keeps users stuck in this loop, rather than simply closing the chatbot app when the answer falls short? 


Why doomprompting feels addictive

Doomprompting feels so compelling not only because of the way AI tools are designed – to agree, suggest, and keep the conversation going – but also because it exploits deep-seated cognitive biases that keep users trapped in the loop. 

1. The productivity mirage

Part of what makes doomprompting so sticky is the illusion of progress. Each new iteration feels “almost there,” nudging you to try just one more tweak. Psychologists refer to this as the sunk-cost fallacy: after investing time in refining prompts, people believe the perfect output is within reach, and continue even when the returns diminish. For users, the habit mimics craftsmanship, but instead of sharpening skills, it drains hours into loops of minor adjustments. Research shows that sunk-cost behaviour strongly influences decision-making, even when additional effort is irrational. 

2. The slot machine effect

AI prompting also exploits intermittent reinforcement patterns similar to those found in gambling. Each answer is slightly different, creating a near-miss effect – responses that feel close enough to perfection to keep you trying again. Psychologists have long documented that near-misses in gambling amplify motivation, sustaining behaviour even when outcomes are objectively poor. Neuroscience studies confirm that near-win results activate the same reward circuits as actual wins, making them similarly addictive. 


Photo: ChatGPT, by Berke Citak on Unsplash

3. Creative and cognitive drain

Finally, doomprompting leads to decision fatigue, creative burnout, and erosion of personal voice. Each iteration demands micro-choices: adjust the wording, add another example, tweak the style. Over time, these micro-decisions accumulate into mental overload. Research on decision fatigue shows that repeated small choices deplete willpower and reduce overall performance. Much like doomscrolling, which is linked to anxiety and diminished well-being, doomprompting creates the paradox of control – users feel in charge while slowly losing clarity and energy. 


How to make doomprompting insightful

Despite the risks of prompt loops, there is also a more positive trend emerging. Some users are experimenting with ‘joyprompting’, using AI as a spark for imagination, playfulness, and curiosity. In this approach, AI becomes a partner in exploration rather than a machine for outsourcing thought. As Forbes notes, learning to “prompt with joy” may become a core digital literacy skill for the future – as essential as filtering information was in the age of doomscrolling. How can we start building this skill ourselves? 

Step 1: Use AI Where You Can Validate Results 

AI works best when applied to tasks with a clear target that can be verified against reality. Examples include debugging code (the programme either runs or it doesn’t), generating grammar corrections (the sentence is either accurate or not), or drafting structured formats like tables or checklists. By contrast, outsourcing open-ended creative reasoning to AI – the ‘messy middle’ – often results in diluted ideas. Research in education indicates that students who allow AI to handle higher-order thinking steps retain less knowledge and perform worse on complex tasks. 

Step 2: Redesign Your Prompting Habits 

Use a prompting hierarchy as a simple framework: refine once, finalise, and move on. Studies on productivity show that imposing self-limits increases task completion and reduces cognitive overload. Another strategy is to step away from the screen and engage in offline ideation before re-engaging with AI. Research in cognitive psychology confirms that breaks from continuous online engagement enhance creativity and problem-solving. 


Photo: ChatGPT on a mobile device, by Aerps.com on Unsplash

Step 3: Find Human Collaborators 

Perhaps the most effective antidote to doomprompting is conversation with real people. AI is designed to flatter; human partners can challenge your assumptions, offer alternative perspectives, and force you to articulate your ideas clearly. Research shows that collaborative reasoning tends to produce more accurate and creative solutions than individual reasoning alone. 

Step 4: Set Boundaries Where AI Stays Out 

The most effective solution may be to set clear limits on how often you use AI. This can mean planning AI-free work or study hours – time when you rely only on your own thinking – or taking short digital detoxes. Harvard experts also recommend creating device-free spaces and setting notification limits to prevent compulsive loops. Daily mindfulness practice helps too: a meta-analysis found that mindfulness training reduces compulsive digital use and strengthens self-regulation. 


FAQ

What is doomprompting?

Doomprompting is the compulsive habit of endlessly rewriting and resubmitting prompts to AI systems in the hope of achieving the ‘perfect’ output. Unlike normal prompting, it traps users in loops of overthinking and tweaking, often without meaningful progress. 

Is doomprompting harmful? 

Yes – over time it can cause decision fatigue, reduce creative confidence, and erode your personal voice. Much like doomscrolling, it creates a false sense of control while actually amplifying anxiety and mental overload. 

How is doomprompting different from doomscrolling? 

Doomscrolling describes endlessly consuming negative news, which harms mental health by fuelling anxiety and pessimism. Doomprompting is its digital twin: instead of scrolling news feeds, people scroll through AI answers. 

Why does doomprompting feel addictive? 

AI tools are designed to suggest ‘next steps’ and offer slightly different answers each time. This exploits psychological biases like the near-miss effect (known from gambling) and the sunk-cost fallacy. Each response feels close to success, pushing users to keep trying more. 

Is there an alternative to doomprompting? 

Yes. Some users experiment with what’s called joyprompting: using AI not as a productivity crutch, but as a playful tool to spark imagination. Instead of chasing the “perfect” output, this approach treats prompting as an act of exploration. 

This article was prepared with the participation of the States of Mind content team. 


Photo: ChatGPT, by Emiliano Vittoriosi on Unsplash

