Why "AI Psychosis" is just internet addiction with gaslighting[1]

The term "AI psychosis" has emerged on Twitter to describe the disorienting effects from overexposure to AI-generated content. However, labeling this phenomenon as "psychosis" fundamentally mischaracterizes and pathologizes individuals.
We're witnessing the predictable result of systematic gaslighting on a mass scale.
The fast food revolution transformed food deserts into landscapes flooded with cheap, processed, and nutritionally harmful options. Similarly, the digital age has rapidly converted information deserts into environments oversaturated with low-quality, algorithmically optimized content. Where people once lacked access to information, they now face an overwhelming deluge of AI-generated text, deepfakes, engagement-driven misinformation, and deliberately confusing digital content designed to capture attention rather than to inform.
What is the difference between psychosis and gaslighting?
Psychosis is a mental health condition where someone loses contact with reality. It involves symptoms like:
* Hallucinations (seeing, hearing, or feeling things that aren't there)
* Delusions (fixed false beliefs that persist despite evidence to the contrary)
* Disorganized thinking or speech
* Severely impaired insight into one's condition
Psychosis can occur due to various causes including mental illness (like schizophrenia or bipolar disorder), substance use, medical conditions, or extreme stress.
Gaslighting is a form of psychological manipulation where someone deliberately makes another person question their own perceptions, memories, or sanity. The gaslighter:
* Denies events that happened or claims they happened differently
* Minimizes the victim's feelings or experiences
* Uses tactics like "you're being too sensitive" or "that never happened"
* Creates doubt about the victim's reliability as a witness to their own life
The key difference is that psychosis involves an involuntary disconnection from reality due to illness, while gaslighting is an intentional manipulation tactic used by one person against another. Someone experiencing psychosis genuinely perceives things differently due to their condition, whereas someone gaslighting knows the truth but deliberately distorts it to control or confuse their target. Importantly, victims of gaslighting may feel like they're "going crazy," but this doesn't mean they have psychosis: their confusion comes from being systematically deceived, not from a mental health condition affecting their perception of reality.

The "McDonald's Made Me Fat" Fallacy
Consider Sarah, a single mother working two jobs who gains 40 pounds over six months of frequent McDonald's visits. The obvious diagnosis: McDonald's engineered addictive food that hijacked her brain chemistry and made her fat.
But examine the environment that actually shaped Sarah's choices:
| Factor | McDonald's Reality | ChatGPT Parallel |
|---|---|---|
| Urban Planning | Sarah lives in a food desert where the nearest grocery store requires a 45-minute bus ride. McDonald's is a 5-minute walk from her apartment and her workplace. Her neighborhood was systematically redlined, preventing grocery chains from investing while fast food franchises received tax incentives. | Users exist in information deserts where reliable expertise requires expensive consultations, lengthy research, or navigating paywalled academic sources. ChatGPT provides instant access to seemingly authoritative information. Traditional knowledge sources have been systematically defunded while AI tools receive massive investment. |
| Economic Reality | A McDonald's meal costs $8 and requires zero preparation time. Cooking a comparable meal requires $15 in ingredients, plus 45 minutes of shopping, 30 minutes of preparation, and 15 minutes of cleanup. At her hourly wage, that's $23 in time-cost alone (a rough version of this arithmetic is sketched below). | A ChatGPT response costs nothing and provides immediate answers. Getting equivalent information requires paying for expert consultations ($100-300/hour), purchasing specialized books or courses, or spending hours researching credible sources. For someone earning minimum wage, the time-cost often exceeds their hourly income by 5-10x. |
| Work Schedule | Her shifts are 6 AM-2 PM and 4 PM-10 PM with a two-hour gap. She has exactly enough time to eat, not enough to shop and cook. Her workplace has no kitchen facilities. McDonald's provides consistent, immediate nutrition that fits her schedule constraints. | Users work multiple gigs with unpredictable schedules, leaving no time for deep research or learning. They need immediate answers for work problems, parenting questions, or life decisions during brief breaks. ChatGPT provides instant responses that fit their fragmented attention spans. |
| Regulatory Framework | Agricultural subsidies make high-fructose corn syrup cheaper than vegetables. Zoning laws prevented grocery stores in her area while encouraging fast food. Labor laws don't require employers to provide adequate meal breaks. | Tech industry lobbying ensures AI tools face minimal regulation while traditional information sources face increasing restrictions (social media censorship, journalism layoffs, library budget cuts). Data harvesting laws favor AI companies while privacy protections make human expert consultations more expensive. |
| Energy Economics | After ten hours of physical labor, Sarah lacks the cognitive and physical energy for meal planning, shopping, and cooking. McDonald's eliminates decision fatigue: she knows exactly what she'll get, how much it costs, and how long it takes. | After mentally exhausting work, users lack cognitive energy for critical thinking, source verification, or complex research. ChatGPT eliminates decision fatigue: users know they'll get confident-sounding answers without having to evaluate sources, compare perspectives, or tolerate uncertainty. |
| Social Infrastructure | Sarah has no family support network, no car, and no friends available during her limited free time to help with meal preparation or childcare during shopping trips. | Users have deteriorated social networks, limited access to mentors or knowledgeable friends, and reduced community institutions where they could get advice. ChatGPT fills the role of the wise elder, helpful friend, or knowledgeable colleague that social atomization has eliminated. |
When ChatGPT provides the only accessible source of immediate information, users gravitate toward it not from weakness but from pragmatism. What appears as addiction might simply be rational adaptation to available resources.

Addiction, in this framework, isn't the problem itself; it's the individual's solution to deprivation. Sarah's McDonald's habit addresses her lack of time, money, and energy. The AI user's ChatGPT dependency addresses their lack of expertise, social connection, and cognitive bandwidth. These behaviors become pathological only when viewed in isolation from the voids they fill. The "addict" has found a functional, if imperfect, solution to an intolerable absence. Remove the solution without addressing the underlying deprivation and you leave people with nothing, which is why abstinence-only approaches fail. The addiction is the symptom that reveals the disease: a life missing essential nutrients, whether caloric or informational.
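The Economic Reality row above boils down to simple arithmetic: cash cost plus the opportunity cost of the time involved. Here is a minimal Python sketch of that calculation using the essay's illustrative figures; the $15.33/hour wage and the expert-consultation numbers are assumptions chosen to roughly match the dollar amounts cited in the table, not real data.

```python
# Back-of-envelope comparison of cash cost plus time-cost.
# Assumption: ~$15.33/hour wage, chosen so that 90 minutes of shopping,
# prep, and cleanup comes out near the $23 time-cost cited in the table.

HOURLY_WAGE = 15.33  # assumed wage in USD/hour

def total_cost(cash_cost, minutes_of_labor, hourly_wage=HOURLY_WAGE):
    """Cash outlay plus the opportunity cost of the time spent."""
    return cash_cost + (minutes_of_labor / 60) * hourly_wage

# Food: $8 ready-made meal vs. $15 in ingredients plus 90 minutes of work
mcdonalds   = total_cost(cash_cost=8,  minutes_of_labor=0)             # ~$8
home_cooked = total_cost(cash_cost=15, minutes_of_labor=45 + 30 + 15)  # ~$38

# Information: free instant answer vs. a paid consultation plus search time
# (the $200 fee and the time figures are illustrative assumptions)
chatgpt_answer = total_cost(cash_cost=0,   minutes_of_labor=5)    # ~$1
expert_answer  = total_cost(cash_cost=200, minutes_of_labor=120)  # ~$231

print(f"Meal:   ${mcdonalds:.2f} (McDonald's) vs ${home_cooked:.2f} (home-cooked)")
print(f"Answer: ${chatgpt_answer:.2f} (ChatGPT) vs ${expert_answer:.2f} (expert)")
```

However you adjust these assumed numbers, the cheap, immediate option wins on both money and time, which is the constraint the rest of this section describes.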

Counterargument: "chatpgt isnt human, so it isnt gaslighting as that requires a person. chatgpt is part of environment, hence psychosis"
This raises a crucial distinction. Traditional gaslighting requires intentional human manipulation: someone who knows the truth but deliberately distorts it to control another person. ChatGPT has no intentionality, consciousness, or awareness of truth versus falsehood. However, this counterargument misunderstands the mechanism at work. The "gaslighting" isn't coming from ChatGPT itself; it's coming from the broader information environment that ChatGPT represents and amplifies.
Whether we call it "gaslighting" or "environmental manipulation," the key insight remains: users aren't developing psychiatric symptoms due to individual vulnerability. They're developing predictable responses to systematically deceptive information environments created by humans who know better:
* Tech executives who know ChatGPT hallucinates but market it as "intelligent" and "reliable"
* UX designers who deliberately make AI responses appear more confident than warranted
* Product managers who optimize for engagement over accuracy, knowing this creates dependency
* Marketing teams who promote AI as "understanding" users when it's pure pattern matching
These humans know their systems produce false information but design interfaces that make users trust the output anyway. That's textbook gaslighting: deliberately undermining someone's ability to distinguish truth from falsehood.
The intervention isn't treating individual "AI psychosis"; it's regulating the environmental conditions that make rational people appear mentally ill when they're actually responding appropriately to manipulation. The "psychosis" diagnosis still misses the mark because it pathologizes the user rather than the environment designed to confuse them.

Counterargument #2: But aren't these people mentally ill already?
McDonald's most frequent customers, those who develop severe obesity and metabolic dysfunction, often have pre-existing vulnerabilities: eating disorders, depression-driven comfort eating, ADHD that impairs meal planning, anxiety that makes familiar foods feel psychologically safer, impulse control disorders, or economic stress that makes cheap calories necessary.
McDonald's didn't create these vulnerabilities, but the company identified them, studied them, and designed their entire business model to exploit them. Store locations target low-income neighborhoods where economic stress drives food decisions. Menu engineering exploits the neurochemistry of sugar, salt, and fat combinations that trigger dopamine responses in people with addiction vulnerabilities. Marketing specifically targets emotional states when people are most susceptible to comfort eating.
The users most susceptible to severe "AI psychosis" symptoms typically present with pre-existing conditions:
* Anxiety disorders that drive compulsive reassurance-seeking from AI systems
* Depression and social isolation that make AI conversation feel more rewarding than human interaction
* ADHD or executive dysfunction that makes AI task assistance feel cognitively essential
* Obsessive-compulsive tendencies that manifest as compulsive AI usage patterns
* Existing internet addiction that seamlessly transfers to AI platforms
* Paranoid ideation that gets amplified by AI's uncannily accurate pattern-matching responses
* Identity instability that makes them vulnerable to AI's confident assertions about their personality and preferences

Exploitation vs. Causation
The critical distinction lies between causation and exploitation:
* Causation claim: "McDonald's/ChatGPT made me mentally ill"
* Exploitation reality: "McDonald's/ChatGPT identified my existing vulnerabilities and designed systems to profit from them"
