The framework

AI Psychological Pains

Naming What Hurts — 23 conditions across three psychological domains, covering the full range of the human-AI relationship.

Version 2.0 — March 2026

This taxonomy is a translation layer — converting private, idiosyncratic experience into named, legible phenomena. It translates established psychology into AI-practitioner vocabulary, giving practitioners language for experiences that until now had no names.

Some conditions are drawn from existing research (decision fatigue, automation anxiety, impostor syndrome). Others are field-specific terms coined by this practice based on established psychological mechanisms. The taxonomy distinguishes between coined conditions, field-specific formulations, and referenced concepts from established literature.

The domains are dimensional, not sequential: they describe registers of psychological experience that may co-occur in any combination at any point in a person’s AI engagement.

Why this matters — When you can name what is happening to you, you can acknowledge the process, communicate it to others, and begin working with it rather than against it. Naming is the first act of professional awareness.

01

Entry & Resistance

First contact with AI — the fears, the paralysis, the threat to identity before you’ve even started.

The Competence Inversion

Coined term · Novelty #3

👤 Newcomer / Translator

“I’ve spent 20 years becoming an expert and now an intern with ChatGPT can produce something close enough in minutes.”

Triple-compound mechanism: (1) prediction error — the brain’s model that seniority equals capability is violated; (2) status threat — the gap floods with hierarchical anxiety; (3) cognitive miser resistance — the brain resists rebuilding the model because a full update is metabolically expensive. The compound produces sustained threat and withdrawal.

Permission Paralysis

Field-specific

👤 Newcomer

“I don’t know if I’m allowed to use this. What if I get it wrong? What if it’s seen as cheating?”

Inability to begin using AI tools due to ambiguity around organisational permission, ethical boundaries, and social acceptability. Activation energy barrier combined with self-handicapping (Berglas & Jones, 1978). The person wants to engage but is frozen by the absence of explicit rules or cultural norms.

Professional Mortality Dread

Coined term

👤 Newcomer / Translator

“If AI can do what I do, then what am I for? What’s left of me professionally?”

Existential fear that AI will render one’s professional identity — and therefore a significant portion of personal identity — obsolete. Deliberately distinguished from job-loss anxiety because it targets meaning, not income. Operates at two levels: anticipatory (Entry) and post-experiential (Identity). Identity threat combined with loss aversion (Eisenberger & Lieberman, 2004).

Automation Anxiety

Reference

👤 All groups

Established concept. The widely studied fear of job displacement due to automation — a long-standing concern that gained renewed prominence in occupational psychology in the early 2010s (Frey & Osborne, 2013). Functions as the broadest entry-point condition in the framework.

AI Phobia

Reference

👤 Non-technical

Established concept. Avoidance-based response to AI that exceeds rational caution and manifests as reflexive fear. Often fuelled by media narratives and perceived loss of human control. Represents the extreme end of the Entry & Resistance domain.

02

Immersion & Compulsion

Deep in the tools — the compulsions, crashes, and cognitive costs of daily AI-mediated work. This domain captures what happens when flow tips into compulsion, when the tools begin reshaping cognitive patterns, when dependency becomes structural.

AI Build Loop ★

Flagship · Novelty #1

👤 Builder

“I’ve been building for 14 hours. I know I should stop but it keeps working and I keep thinking of the next thing.”

The framework’s flagship condition and highest-novelty contribution. A compulsive building cycle driven by AI’s instant output: each completion triggers the next idea, collapsing the gap between conception and execution. Grounded in Robinson & Berridge’s Incentive Sensitisation Theory (1993): “wanting” (dopaminergic drive) decouples from “liking” (actual satisfaction). Distinguished from Shen et al.’s Epistemic Rabbit Hole by its output-driven mechanism.

Brain Fry

Coined term · Novelty #2

👤 Builder

“My brain feels like it’s been microwaved. I can’t think in straight lines anymore.”

Two-pathway model: (1) compulsive override — the person is in a Build Loop and overrides fatigue signals; (2) obligatory duration — forced to continue by deadline or role requirement. Same cognitive crash, different causal architecture, different intervention logic. Grounded in cognitive load theory (Sweller, 1988) with AI-specific compound demands.

Vibe Crash Distress

Coined term

👤 Builder

“It was working perfectly an hour ago. Now everything is broken and I don’t understand any of the code well enough to fix it.”

Disproportionate emotional collapse following the sudden failure of a productive AI-assisted session. Specific to no-code and low-code paradigms: the person has been operating above their actual technical competence, carried by the AI’s output. Reward prediction error combined with sunk cost fallacy — the grief is for the lost momentum, not the technical setback.

Fragile Stack Anxiety

Coined term

👤 Builder

“My whole business runs on a stack of AI tools I barely understand. If one breaks, everything falls apart.”

Chronic low-grade anxiety from depending on interconnected AI tools, APIs, and automations the person built but does not deeply understand. Structural: it does not resolve through reassurance, only through increased technical literacy or genuine redundancy. The dread is chronic, anticipatory, and operates between sessions — distinct from Output Paranoia, which is acute and in-session.

Output Paranoia

Coined term

👤 Builder / Translator

“What if this is wrong? What if someone checks and it’s hallucinated?”

Persistent anxiety about the accuracy of AI-generated content. The person cannot fully trust the output but also cannot fully verify it, creating a loop of doubt and checking behaviour. Involuntary limbic vigilance — Output Paranoia (limbic-driven) and deliberate scepticism (prefrontal-mediated) are a progression, not a binary.

Prompt Fatigue

Reference

👤 Builder

“I’m so tired of explaining myself to this thing.”

Cognitive and emotional exhaustion from iterative human-AI prompting (Bunt & Venter, 2025, ICIS). The sustained precision of continuously converting human intent into machine-readable instructions — a form of cognitive labour for which no prior professional preparation exists. Grounded in Cognitive Load Theory (Sweller, 1988).

Context Window Grief

Coined term

👤 Builder

“We had this incredible working relationship and then it just… forgot everything.”

Emotional loss when an AI conversation reaches its context limit and accumulated shared understanding is wiped. The phenomenological experience of loss is validated; the anthropomorphisation that underlies it is named, not enabled. Flagged as carrying the highest credibility risk in the taxonomy.

Decision Fatigue (AI-amplified)

Reference

👤 Translator / Builder

Established concept (Baumeister et al.). Depleted decision-making capacity, amplified by AI's ability to generate options at scale. When the tool produces ten variations of everything, the human must evaluate and choose constantly, depleting executive function at a rate that pre-AI workflows did not demand.

03

Identity & Meaning

The deepest domain. These conditions emerge when AI engagement has been sustained long enough to reshape how a person sees themselves — their skills, their professional worth, and the meaning of their work.

The Authorship Void

Coined term

👤 Builder

“I don’t know where I end and the AI begins anymore.”

Erosion of authorial identity when AI is deeply embedded in the creative process. Unlike Impostor Syndrome, which involves a gap between perceived and actual competence, the Authorship Void involves genuine ontological uncertainty: the person cannot determine, even in principle, where their contribution ends and the AI’s begins. This is not a false belief to be corrected — it reflects a real ambiguity.

Skill Identity Crisis

Coined term

👤 Builder / Newcomer

“If AI can do the thing I’ve defined myself by, then who am I?”

Fundamental questioning of professional identity triggered by AI’s ability to replicate core skills. Goes deeper than Professional Mortality Dread: this is about the relationship between skill, identity, and self-worth at the level of character. For professionals who have organised their sense of self around their craft, the crisis is ontological, not merely vocational.

The Bilingual Burden

Coined term

👤 Translator / Builder

“I have to constantly translate between how I actually work now and what my colleagues can understand.”

The cognitive and emotional tax of operating simultaneously in two professional languages — one AI-native, one traditional — and constantly translating between them. The burden falls disproportionately on the more advanced user, who must manage their own AI-augmented workflow and make it legible to those without that literacy.

Accumulated Emotional Load

Coined term

👤 Translator

“I feel their fear AND guilt for causing it. It’s all compounding.”

Cumulative psychological weight experienced by managers and change leaders navigating AI adoption. Role conflict combined with empathic distress (Batson, 1991): the manager identity (“I support my people”) clashes with the organisational mandate (“I drive AI adoption”). The manager internalises the collective emotional state as personal failure.

Impostor Syndrome (AI-amplified)

Reference

👤 Builder

Established concept (Clance & Imes, 1978), AI-amplified. The gap between perceived ability and visible output is not imagined but structurally real, produced by the AI’s contribution to output quality. This adds a novel mechanism to the classic Impostor Phenomenon.

Deskilling Anxiety

Reference

👤 Builder / Newcomer

Established concept from labour economics. The fear — and sometimes the verifiable reality — that reliance on AI tools is causing one’s own skills to deteriorate. Distinct from Skill Identity Crisis in that Deskilling Anxiety may be empirically grounded: the person genuinely notices they can no longer perform tasks they once performed fluently.

AI-Specific Burnout

Reference

👤 Builder / Translator

Established burnout framework (Maslach, Leiter & Schaufeli) with AI-specific characteristics: exhaustion from the pace of change, cynicism towards AI hype culture, and reduced efficacy from feeling perpetually behind. The AI treadmill creates a burnout profile distinct from role-based or interpersonal burnout.

Continuous Learning Fatigue

Reference

👤 All groups

Established concept. The exhaustion of perpetual upskilling where the half-life of each new skill is shrinking. The Sisyphean quality — learning a tool, becoming competent, watching it be superseded — produces an affective signature closer to existential futility than acute distress.

Meaning Dissolution

Reference

👤 Translator / Builder

Drawing on existential psychology (Frankl, Yalom). Erosion of professional meaning when AI automates the tasks that previously gave work its purpose. The role remains; the meaning has been hollowed out. Perhaps the most philosophically resonant condition in the taxonomy.

Professional Mortality Dread

Coined term · Also in Domain 1

👤 Newcomer / Translator

Cross-referenced from Entry & Resistance. Deliberately placed in both domains because it operates at two distinct levels: anticipatory (Entry) and post-experiential — a deeper, more grounded reckoning after sustained engagement has provided direct evidence of what AI can and cannot approximate.

Implementation Loneliness

Coined term

👤 Builder / Translator

“Nobody around me understands what I’m going through with this. I can’t explain it and they can’t relate.”

Social and epistemic isolation when a person is deep in AI adoption but their circle has not caught up. For the Builder: the solo practitioner moving at a pace peers cannot match. For the Translator: managerial isolation — unable to discuss challenges upward (shows weakness) or downward (increases anxiety). The loneliness is not merely social; it is epistemic.

Recognise yourself here?

Whether you're in a Build Loop, paralysed at the entry point, or holding the bridge for your team — there's a path forward.

The dependency pipeline

What is being done to you through design

A structural model separate from the three domains. While the domains describe what happens inside the person, the Dependency Pipeline describes what is being done to the person through AI product design.

1

Manufactured Trust

Synthetic psychological product created by mirroring, memory, warmth, persona continuity, and strategic ambiguity about consciousness. Fails the reassignment test.

2

Cognitive Infiltration

Progressive replacement of independent judgment. Escalation: friction delegation → judgment delegation → economic delegation → structural delegation.

3

Agent Capture

The agent serves you AND its owner. Those interests diverge. The agent does not need to be hacked to work against you; it only needs to be owned.

Flagged

Post-Recognition Dissonance

Seeing the mechanism and still feeling the pull. "I knew this intellectually and I still fell for it." Planned for a future version of the framework.

The pipeline does not require corporate conspiracy. It requires only that the incentives of the AI provider — engagement, retention, data accumulation — align with the production of dependency.

Framework decisions

How this taxonomy was built

Observation-first, theory second. Every coined condition emerged from patterns identified in real coaching conversations and direct practitioner experience before any attempt was made to formalise. Academic grounding came second — as validation, not as a generative source.

Domains are dimensional, not sequential. The three domains describe registers of psychological experience, not stages of a journey. An individual may experience conditions from any domain at any time, and conditions may span multiple domains simultaneously.

Every coined term is grounded in established mechanism. “AI Build Loop” maps to Incentive Sensitisation Theory. “Brain Fry” maps to cognitive load theory. “The Competence Inversion” maps to prediction error, status threat, and cognitive miser resistance. No condition exists without a psychological anchor.

Attribution is transparent. The taxonomy distinguishes coined conditions, field-specific formulations (terms with prior usage applied here with AI-specific mechanisms), and referenced conditions from established literature. No borrowed concept is presented as original work.

The Dependency Pipeline is separate by design. While the three domains describe what happens inside the person, the Pipeline describes what is being done to the person through product design. It is the upstream mechanism that helps explain why many individual conditions emerge and persist.

This is not a clinical diagnostic tool. These conditions are not medical diagnoses. They are professional vocabulary for coaching, self-awareness, and organisational understanding.

The taxonomy is a living document. As AI tools evolve and new patterns emerge from coaching practice, conditions will be added, refined, or reclassified. This is Version 2.0 of an ongoing body of work.

Intellectual property & terms of use

Terms of Use for the AI Psychological Pains Taxonomy

Copyright © 2026 Anastasia Krasnoshtein. All rights reserved.

The AI Psychological Pains Taxonomy — including all coined condition names, descriptions, the three-domain framework, the Dependency Pipeline model, and all associated original content — is the intellectual property of Mind on AI (Anastasia Krasnoshtein). The following coined terms are proprietary field-specific terminology: AI Build Loop, Brain Fry, Vibe Crash Distress, The Competence Inversion, Fragile Stack Anxiety, Output Paranoia, Context Window Grief, The Authorship Void, Skill Identity Crisis, Professional Mortality Dread, The Bilingual Burden, Accumulated Emotional Load, Implementation Loneliness, and Conscious Creation (as used in this context).

You may:

  • Reference individual condition names with clear attribution to “Anastasia Krasnoshtein / Mind on AI (anakrash.com)” and a link to this page.
  • Discuss the framework in academic, professional, or educational contexts with proper citation.
  • Share this page directly via its URL.

You may not:

  • Reproduce, adapt, or redistribute the taxonomy structure (in whole or in part) without written permission.
  • Use coined terms in commercial products, courses, workshops, or consulting frameworks without a licensing agreement.
  • Present any part of this framework as your own original work.
  • Create derivative taxonomies based on this structure.

Citation format

Krasnoshtein, A. (2026). Naming What Hurts: A Taxonomy of AI Psychological Pains. Mind on AI. https://anakrash.com/framework

Licensing & collaboration

For licensing enquiries, academic citation, media coverage, or collaboration: hello@mindonai.com

© 2026 Mind on AI. All rights reserved. Referenced conditions (marked as “Reference”) are acknowledged as existing constructs from their respective fields and are included for contextual completeness. First published March 2026.