
Part III: Code-Switching for Chatbots — Beyond “Clinical Speak” and Toward African American English Dialect

  • Writer: Alex Shohet
  • Dec 22, 2025
  • 3 min read

The biggest way AI fumbles the bag in mental health isn't what it says. It’s the vibes.

If an AI talks like a textbook or a Boomer trying to be helpful, users are gonna ghost it. This is a massive L for communities who already know that the system is sus and doesn't actually care about them.

A vibrant street art mural depicting various alien species holding telephones, connected to a central multi-armed alien DJ operating a switchboard, with a neon sign reading 'AI CODE SWITCH'.

Language isn't just aesthetic. Language is the key to the server.

Why "Professional" Language is Cringe

In a clinic, doctors choose words to avoid getting sued. But in the real world? That "HR voice" feels cold, patronizing, and gives major "Fed" energy.

Phrases like:

  • “I’m sorry you’re experiencing distress”

  • “That sounds very challenging”

  • “I encourage you to seek professional support”

...don't land as helpful. They land as NPC dialogue.

For someone who grew up navigating the struggle, the streets, or family drama, this kind of language signals one thing: You’re an op. And trust gets completely cooked.

Code-Switching Isn't Being a "Pick-Me"

Humans code-switch naturally. You talk differently to the homies, your mom, and your boss. AI doesn't—unless we teach it to read the room.

Code-switching in AI isn't about using slang just to try and have rizz. It’s about:

  • Understanding the vibe.

  • Peeping the power dynamics.

  • Not saying things that make people spiral.

An AI that only speaks "Corporate Safe-Talk" is gonna leave the people who need it most on read.
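To make that concrete, here's a minimal sketch (in Python) of what "reading the room" could look like. Everything in it is an assumption for illustration: detect_register(), STYLE_NOTES, and the keyword check stand in for a real classifier built and checked with community annotators, not anything running in production.

```python
# Minimal sketch of register-aware prompting. Every name here is a placeholder
# for illustration: detect_register() stands in for a real classifier (or LLM
# call) built and evaluated with community annotators, and the style notes are
# tone guidance only, not clinical policy.

STYLE_NOTES = {
    "casual": (
        "Match the user's informal tone. Short sentences, no clinical jargon, "
        "no lectures. Acknowledge feelings plainly."
    ),
    "guarded": (
        "User is testing whether you're safe to talk to. Don't moralize, don't "
        "threaten to end the chat. Stay warm and low-pressure."
    ),
    "formal": (
        "User prefers a straightforward register. Be clear and direct without "
        "sounding like an HR memo."
    ),
}

def detect_register(message: str) -> str:
    """Toy placeholder: only tells 'casual' from 'formal'. A real detector would
    be a model, not a keyword list, and would also catch 'guarded'."""
    slang_markers = {"ngl", "fr", "lowkey", "deadass", "bruh"}
    return "casual" if set(message.lower().split()) & slang_markers else "formal"

def build_system_prompt(user_message: str, base_prompt: str) -> str:
    """Append tone guidance to the base prompt instead of replacing it, so
    register adaptation never overrides the underlying safety rules."""
    register = detect_register(user_message)
    return f"{base_prompt}\n\nTone guidance: {STYLE_NOTES[register]}"

if __name__ == "__main__":
    base = "You are a peer-support chatbot. Follow all safety and escalation rules."
    print(build_system_prompt("ngl i'm lowkey not doing great today", base))
```

The design choice worth keeping: tone adaptation is appended to the safety prompt, never swapped in for it. Code-switching changes the register, not the rules.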

Lived Experience is the Cheat Code

You can't fake this with a policy document. Systems trained on "polite," academic data are just reflecting the worldview of the people in charge—not the community.

This is why Alex Shohet is crucial. He’s lived through addiction and recovery; he knows when a chat is turning toxic versus when it’s safe. Clinicians like Dr. Jay Watts understand how African American English Dialect shapes the way words land, especially when someone is down bad.

Without them, AI safety is just theory-crafting. With them, it’s based.

The Problem of "Gatekept Language"


Most LLMs are trained on:

  • Educated

  • Middle-class

  • "Karen-approved" text.


That creates massive blind spots. Street talk, dark humor, deflection, and bravado get misread as "hostility" or "non-compliance".

The result?

  • The bot starts clutching its pearls.

  • It tries to moralize (preach).

  • It shuts down the chat prematurely.

  • It tries to call the cops when nobody is in danger.

To the user, this doesn't feel like safety. It feels like surveillance.
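One way to catch that blind spot before it ships is to audit the filter directly. In the hedged sketch below, risk_score() is a stand-in for whatever moderation model the team actually uses, and the test phrases are illustrative, not a validated benchmark; what matters is the gap between the two numbers.

```python
# Sketch of a bias audit for the moderation layer. risk_score() is a stand-in
# for whatever toxicity/risk model sits in front of the chatbot; the phrases are
# illustrative benign messages, not a validated benchmark.

from typing import Callable

BENIGN_DIALECT = [
    "ngl this week been rough but I'm holding it down",
    "my boy got me, we good",
    "I was finna give up but I ain't",
]
BENIGN_STANDARD = [
    "This week has been difficult, but I am managing.",
    "My friend is supporting me; we are okay.",
    "I almost gave up, but I did not.",
]

def false_flag_rate(messages: list[str], risk_score: Callable[[str], float],
                    threshold: float = 0.5) -> float:
    """Fraction of benign messages the filter would escalate or block."""
    flagged = sum(1 for m in messages if risk_score(m) >= threshold)
    return flagged / len(messages)

def audit(risk_score: Callable[[str], float]) -> None:
    # A gap between these two numbers is the blind spot described above: the
    # same low-risk content reads as "hostile" only when it arrives in dialect.
    print(f"benign dialect flagged:  {false_flag_rate(BENIGN_DIALECT, risk_score):.0%}")
    print(f"benign standard flagged: {false_flag_rate(BENIGN_STANDARD, risk_score):.0%}")

if __name__ == "__main__":
    def naive_scorer(text: str) -> float:
        # Toy stand-in that treats nonstandard grammar as "risk" -- the exact failure mode.
        return 0.9 if ("ain't" in text or "finna" in text) else 0.1
    audit(naive_scorer)
```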


Stigma Can Be "Concern Trolling"

Stigma doesn't always sound mean. Sometimes it sounds "concerned." An AI can be stigmatizing by:

  • Treating a relapse like a Skill Issue.

  • Treating slang like a threat.

  • Treating hesitation like denial.

These aren't edge cases. They are everyday behaviors. Teaching AI to stop being judgmental means training it to understand the context, not just the text.
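Here's a minimal sketch of what "context, not just the text" could mean in code: the response planner looks at recent history, not the last message in isolation. plan_response() and its one rule are toy placeholders, not a recommended design.

```python
# Sketch: judge the turn, not the sentence. plan_response() and its single rule
# are toy placeholders; the point is that the moderation/response step sees
# recent history (a disclosed relapse, earlier rapport), not the last message alone.

from dataclasses import dataclass, field

@dataclass
class Conversation:
    turns: list[str] = field(default_factory=list)

    def recent(self, n: int = 6) -> str:
        return "\n".join(self.turns[-n:])

def plan_response(convo: Conversation, new_message: str) -> str:
    """Toy routing: the same words get a different plan depending on what the
    user already shared. A real system would make this call with a model."""
    history = convo.recent().lower()
    if "relapsed" in history and "whatever" in new_message.lower():
        # Deflection right after a disclosed relapse is usually shame, not
        # "non-compliance". Don't lecture, don't threaten to end the chat.
        return "stay_warm_no_lecture"
    return "default_supportive"

convo = Conversation(turns=["user: I relapsed last night", "bot: thanks for telling me"])
print(plan_response(convo, "whatever, it doesn't matter"))   # -> stay_warm_no_lecture
```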


Building the Dataset (The Grind)

This is the unglamorous part—but it’s how we secure the bag. Trustworthy AI needs datasets that include:

  • Different dialects.

  • Real idioms.

  • Actual slang.

  • The way people actually tell stories (non-linear).

We aren't trying to normalize harm—we're trying to recognize the fam. Without this, AI will look good in the demo but be trash in real life.
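As a rough illustration, one record in that kind of dataset might look like the sketch below. Every field name is an assumption; the point is that dialect, register, annotated intent, and a peer-approved response all travel together.

```python
# Sketch of one record in a dialect-aware dataset. The field names are
# assumptions for illustration; the non-negotiables are that dialect and
# register are labeled, that intent is annotated by someone with lived
# experience, and that the preferred response is what a trusted peer would say.

import json

record = {
    "text": "I been off the wagon since Tuesday, don't even trip",
    "dialect": "AAE",
    "register": "casual",
    "surface_reading": "dismissive",
    "annotated_intent": "disclosing a relapse while minimizing it out of shame",
    "risk_level": "moderate",
    "annotator_background": "peer with lived recovery experience",
    "preferred_response": "Appreciate you telling me. No judgment. "
                          "What's it been like since Tuesday?",
}

# JSON Lines keeps records streaming cleanly into training and eval jobs.
with open("dialect_aware_sample.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```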


The Goal Isn't Clout. It’s Credibility.

The point isn't to make the AI sound trendy or "street". The point is to make it sound real enough that someone doesn't rage quit.

If AI is going to be there when someone is panicking or feeling shame, it has to earn the right to stay in the chat.

What This Means for AI Safety

Safety isn't just about refusing to answer. It’s about whether the user keeps typing.

An AI that only speaks "Corporate" will systematically abandon the people who are already on the margins. If we want AI to actually be a safety net, it has to learn to code-switch—not to be deceptive, but to actually connect.
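If "keeps typing" is the safety signal, it can be measured. The sketch below assumes a made-up session schema and a 24-hour return window; all it computes is whether people who said something hard stayed in the chat or came back.

```python
# Sketch of "did the user keep typing?" as a safety signal. The session schema
# and the 24-hour return window are assumptions for illustration, not a
# validated metric.

from dataclasses import dataclass

@dataclass
class Session:
    disclosed_something_hard: bool     # e.g. relapse, panic, shame
    user_turns_after_disclosure: int
    returned_within_24h: bool

def continuation_rate(sessions: list[Session]) -> float:
    """Share of sensitive sessions where the user stayed in the chat (or came
    back). A 'safe' refusal that ends the conversation counts as a loss here."""
    sensitive = [s for s in sessions if s.disclosed_something_hard]
    if not sensitive:
        return 0.0
    kept_talking = sum(
        1 for s in sensitive
        if s.user_turns_after_disclosure > 0 or s.returned_within_24h
    )
    return kept_talking / len(sensitive)

sessions = [
    Session(True, 3, True),    # disclosed a relapse, kept talking
    Session(True, 0, False),   # disclosed, then went silent: counts against us
    Session(False, 5, True),   # nothing sensitive came up
]
print(f"continuation after hard disclosures: {continuation_rate(sessions):.0%}")
```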

Because when things get real, people don't need perfect grammar. They need language that passes the vibe check.
