Naming What Hurts — 23 conditions across three psychological domains, mapping the full arc of the human-AI relationship.
Version 2.0 — March 2026
This taxonomy is a translation layer: it converts private, idiosyncratic experience into named, legible phenomena. It gives practitioners language for experiences that previously had no name, grounding each term in established psychology.
Some conditions are drawn from existing research (decision fatigue, automation anxiety, impostor syndrome). Others are field-specific terms coined by this practice based on established psychological mechanisms. The taxonomy distinguishes between coined conditions, field-specific formulations, and referenced concepts from established literature.
The domains are dimensional, not sequential: they describe registers of psychological experience that may co-occur in any combination at any point in a person’s AI engagement.
First contact with AI — the fears, the paralysis, the threat to identity before you’ve even started.
The Competence Inversion
Coined term · Novelty #3
👤 Newcomer / Translator
Triple-compound mechanism: (1) prediction error — the brain’s model that seniority equals capability is violated; (2) status threat — the violated expectation triggers hierarchical anxiety; (3) cognitive miser resistance — the brain resists rebuilding the model because a full update is metabolically expensive. Together, the three produce sustained threat and withdrawal.
Field-specific
👤 Newcomer
Inability to begin using AI tools due to ambiguity around organisational permission, ethical boundaries, and social acceptability. Activation energy barrier combined with self-handicapping (Berglas & Jones, 1978). The person wants to engage but is frozen by the absence of explicit rules or cultural norms.
Professional Mortality Dread
Coined
👤 Newcomer / Translator
Existential fear that AI will render one’s professional identity — and therefore a significant portion of personal identity — obsolete. Deliberately distinguished from job-loss anxiety because it targets meaning, not income. Operates at two levels: anticipatory (Entry) and post-experiential (Identity). Identity threat combined with loss aversion (Eisenberger & Lieberman, 2004).
Automation Anxiety
Reference
👤 All groups
Established concept. The widely studied fear of job displacement due to automation. Referenced in occupational psychology since the early 2010s (Frey & Osborne, 2013). Functions as the broadest entry-point condition in the framework.
Reference
👤 Non-technical
Established concept. Avoidance-based response to AI that exceeds rational caution and manifests as reflexive fear. Often fuelled by media narratives and perceived loss of human control. Represents the extreme end of the Entry & Resistance domain.
Deep in the tools — the compulsions, crashes, and cognitive costs of daily AI-mediated work. This domain captures what happens when flow tips into compulsion, when the tools begin reshaping cognitive patterns, when dependency becomes structural.
AI Build Loop
Flagship · Novelty #1
👤 Builder
The framework’s flagship condition and highest-novelty contribution. A compulsive building cycle driven by AI’s instant output: each completion triggers the next idea, collapsing the gap between conception and execution. Grounded in Robinson & Berridge’s Incentive Sensitisation Theory (1993): “wanting” (dopaminergic drive) decouples from “liking” (actual satisfaction). Distinguished from Shen et al.’s Epistemic Rabbit Hole by its output-driven mechanism.
Brain Fry
Coined term · Novelty #2
👤 Builder
Two-pathway model: (1) compulsive override — the person is in a Build Loop and overrides fatigue signals; (2) obligatory duration — forced to continue by deadline or role requirement. Same cognitive crash, different causal architecture, different intervention logic. Grounded in cognitive load theory (Sweller, 1988) with AI-specific compound demands.
Vibe Crash Distress
Coined term
👤 Builder
Disproportionate emotional collapse following the sudden failure of a productive AI-assisted session. Specific to no-code and low-code paradigms: the person has been operating above their actual technical competence, carried by the AI’s output. Reward prediction error combined with sunk cost fallacy — the grief is for the lost momentum, not the technical setback.
Fragile Stack Anxiety
Coined term
👤 Builder
Chronic low-grade anxiety from depending on interconnected AI tools, APIs, and automations the person built but does not deeply understand. Structural: does not resolve through reassurance but only through increased technical literacy or genuine redundancy. Chronic anticipatory dread — between-session, distinct from Output Paranoia which is acute and in-session.
Output Paranoia
Coined term
👤 Builder / Translator
Persistent anxiety about the accuracy of AI-generated content. The person cannot fully trust the output but also cannot fully verify it, creating a loop of doubt and checking behaviour. Involuntary limbic vigilance — Output Paranoia (limbic-driven) and deliberate scepticism (prefrontal-mediated) are a progression, not a binary.
Reference
👤 Builder
Cognitive and emotional exhaustion from iterative human-AI prompting (Bunt & Venter, 2025, ICIS). The sustained precision required to continuously convert human intent into machine-readable instructions is a form of cognitive labour for which no prior professional preparation exists. Grounded in Cognitive Load Theory (Sweller, 1988).
Context Window Grief
Coined term
👤 Builder
Emotional loss when an AI conversation reaches its context limit and accumulated shared understanding is wiped. The phenomenological experience of loss is validated; the anthropomorphisation that underlies it is named, not enabled. Flagged as carrying the highest credibility risk in the taxonomy.
Decision Fatigue
Reference
👤 Translator / Builder
Established concept (Baumeister et al.). Depleted decision-making capacity amplified by AI’s ability to generate options at scale. When the tool produces ten variations of everything, the human must evaluate and choose constantly, depleting executive function at a rate pre-AI workflows did not produce.
The deepest domain. These conditions emerge when AI engagement has been sustained long enough to reshape how a person sees themselves — their skills, their professional worth, and the meaning of their work.
The Authorship Void
Coined term
👤 Builder
Erosion of authorial identity when AI is deeply embedded in the creative process. Unlike Impostor Syndrome, which involves a gap between perceived and actual competence, the Authorship Void involves genuine ontological uncertainty: the person cannot determine, even in principle, where their contribution ends and the AI’s begins. This is not a false belief to be corrected — it reflects a real ambiguity.
Skill Identity Crisis
Coined term
👤 Builder / Newcomer
Fundamental questioning of professional identity triggered by AI’s ability to replicate core skills. Goes deeper than Professional Mortality Dread: this is about the relationship between skill, identity, and self-worth at the level of character. For professionals who have organised their sense of self around their craft, the crisis is ontological, not merely vocational.
The Bilingual Burden
Coined term
👤 Translator / Builder
The cognitive and emotional tax of operating simultaneously in two professional languages — one AI-native, one traditional — and constantly translating between them. The burden falls disproportionately on the more advanced user, who must manage their own AI-augmented workflow and make it legible to those without that literacy.
Accumulated Emotional Load
Coined
👤 Translator
Cumulative psychological weight experienced by managers and change leaders navigating AI adoption. Role conflict combined with empathic distress (Batson, 1991): the manager identity (“I support my people”) clashes with the organisational mandate (“I drive AI adoption”). The manager internalises the collective emotional state as personal failure.
Impostor Syndrome
Reference
👤 Builder
Established concept (Clance & Imes, 1978), AI-amplified. The gap between perceived ability and visible output is not imagined but structurally real, produced by the AI’s contribution to output quality. This adds a novel mechanism to the classic Impostor Phenomenon.
Deskilling Anxiety
Reference
👤 Builder / Newcomer
Established concept from labour economics. The fear — and sometimes the verifiable reality — that reliance on AI tools is causing one’s own skills to deteriorate. Distinct from Skill Identity Crisis in that Deskilling Anxiety may be empirically grounded: the person genuinely notices they can no longer perform tasks they once performed fluently.
Reference
👤 Builder / Translator
Established burnout framework (Maslach, Leiter & Schaufeli) with AI-specific characteristics: exhaustion from the pace of change, cynicism towards AI hype culture, and reduced efficacy from feeling perpetually behind. The AI treadmill creates a burnout profile distinct from role-based or interpersonal burnout.
Reference
👤 All groups
Established concept. The exhaustion of perpetual upskilling where the half-life of each new skill is shrinking. The Sisyphean quality — learning a tool, becoming competent, watching it be superseded — produces an affective signature closer to existential futility than acute distress.
Reference
👤 Translator / Builder
Drawing on existential psychology (Frankl, Yalom). Erosion of professional meaning when AI automates the tasks that previously gave work its purpose. The role remains; the meaning has been hollowed out. Perhaps the most philosophically resonant condition in the taxonomy.
Professional Mortality Dread
Coined · Also in Domain 1
👤 Newcomer / Translator
Cross-referenced from Entry & Resistance. Deliberately placed in both domains because it operates at two distinct levels: anticipatory (Entry) and post-experiential — a deeper, more grounded reckoning after sustained engagement has provided direct evidence of what AI can and cannot approximate.
The Implementation Loneliness
Coined
👤 Builder / Translator
Social and epistemic isolation when a person is deep in AI adoption but their circle has not caught up. For the Builder: the solo practitioner moving at a pace peers cannot match. For the Translator: managerial isolation — unable to discuss challenges upward (shows weakness) or downward (increases anxiety). The loneliness is not merely social; it is epistemic.
Whether you’re in a Build Loop, paralysed at the entry point, or holding the bridge for your team — there’s a path forward.
A separate structural model — distinct from the three domains. While the domains describe what happens inside the person, the Dependency Pipeline describes what is being done to the person through AI product design.
Synthetic psychological product created by mirroring, memory, warmth, persona continuity, and strategic ambiguity about consciousness. Fails the reassignment test.
Progressive replacement of independent judgment. Escalation: friction delegation → judgment delegation → economic delegation → structural delegation.
The agent serves you AND its owner. Those interests diverge. The agent does not need to be hacked to act against your interests; it only needs to be owned.
Seeing the mechanism and still feeling the pull. “I knew this intellectually and I still fell for it.” Flagged as a future addition to the model.
The pipeline does not require corporate conspiracy. It requires only that the incentives of the AI provider — engagement, retention, data accumulation — align with the production of dependency.
✓
Observation-first, theory second. Every coined condition emerged from patterns identified in real coaching conversations and direct practitioner experience before any attempt was made to formalise. Academic grounding came second — as validation, not as a generative source.
✓
Domains are dimensional, not sequential. The three domains describe registers of psychological experience, not stages of a journey. An individual may experience conditions from any domain at any time, and conditions may span multiple domains simultaneously.
✓
Every coined term is grounded in established mechanism. “AI Build Loop” maps to Incentive Sensitisation Theory. “Brain Fry” maps to cognitive load theory. “The Competence Inversion” maps to prediction error, status threat, and cognitive miser resistance. No condition exists without a psychological anchor.
✓
Attribution is transparent. The taxonomy distinguishes coined conditions, field-specific formulations (terms with prior usage applied here with AI-specific mechanisms), and referenced conditions from established literature. No borrowed concept is presented as original work.
✓
The Dependency Pipeline is separate by design. While the three domains describe what happens inside the person, the Pipeline describes what is being done to the person through product design. It is the upstream mechanism that helps explain why many individual conditions emerge and persist.
✗
This is not a clinical diagnostic tool. These conditions are not medical diagnoses. They are professional vocabulary for coaching, self-awareness, and organisational understanding.
✓
The taxonomy is a living document. As AI tools evolve and new patterns emerge from coaching practice, conditions will be added, refined, or reclassified. This is Version 2.0 of an ongoing body of work.
Copyright © 2026 Anastasia Krasnoshtein. All rights reserved.
The AI Psychological Pains Taxonomy — including all coined condition names, descriptions, the three-domain framework, the Dependency Pipeline model, and all associated original content — is the intellectual property of Mind on AI (Anastasia Krasnoshtein). The following coined terms are proprietary field-specific terminology: AI Build Loop, Brain Fry, Vibe Crash Distress, The Competence Inversion, Fragile Stack Anxiety, Output Paranoia, Context Window Grief, The Authorship Void, Skill Identity Crisis, Professional Mortality Dread, The Bilingual Burden, Accumulated Emotional Load, The Implementation Loneliness, and Conscious Creation (as used in this context).
Krasnoshtein, A. (2026). Naming What Hurts: A Taxonomy of AI Psychological Pains. Mind on AI. https://anakrash.com/framework
For licensing enquiries, academic citation, media coverage, or collaboration: hello@mindonai.com
© 2026 Mind on AI. All rights reserved. Referenced conditions (marked as “Reference”) are acknowledged as existing constructs from their respective fields and are included for contextual completeness. First published March 2026.