At the 2026 GatherVerse SHE Summit, I joined a roundtable titled “The Conversations AI Isn’t Allowed to Have.” Sitting in that (virtual) room, surrounded by women who think deeply about technology, equity, and community, I was reminded that the most important questions we have about AI are rarely technical. They’re about who gets heard, who gets protected, and who gets left out when we decide what machines are “allowed” to say.
In my day‑to‑day work with teams and clients, I see this tension up close. On one hand, we absolutely need strong guardrails around AI to prevent harm. On the other hand, those same guardrails can unintentionally silence the very stories, identities, and realities we most need to center.
This blog is my reflection on that panel and on what it means to build and use AI in a way that can hold harder conversations responsibly, not just avoid them.
What it means for AI to be “not allowed” to talk
When we say there are conversations AI isn’t allowed to have, we’re talking about a mix of:
- Platform policies: The content and safety rules baked into AI models and the tools around them.
- Product decisions: Where we choose to deploy AI (or not), and how much nuance we allow in those contexts.
- Cultural comfort zones: The topics organizations instinctively shy away from because they’re “too political,” “too emotional,” or “too risky.”
The result is familiar if you’ve used AI tools in any meaningful context. Try to explore subjects like systemic bias, reproductive rights, mental health, or lived experiences of discrimination, and you’ll often hit a wall of vague, de‑risked language. The system doesn’t say nothing; it just says as little as possible.
From a compliance standpoint, that’s understandable. From a human standpoint, it’s deeply incomplete.
The three “off‑limits” areas I worry about most
1. Lived experience and identity
No AI system can own a lived experience. It doesn’t grow up in a body, move through the world with a particular race or gender identity, or hold generational memories. That’s a non‑negotiable truth.
But policy choices layered on top of that often mean AI is extra cautious around identity at the exact moment someone is looking for recognition or language for what they’re going through. We end up with generic, flattened answers where specificity is actually a form of care.
The risk: people who are already marginalized encounter yet another system that struggles to name their reality.
2. Power, labor, and who benefits
Another set of “forbidden” conversations lives around power:
- Who did the labeling, moderating, and invisible labor to make this model “safe”?
- Whose data is in the training set, and did they consent?
- Who profits when AI replaces or reshapes certain kinds of work?
These are uncomfortable questions for any industry built on scale and efficiency. But if we don’t surface them, we create a narrative in which AI is neutral and inevitable, instead of something built by humans with values and tradeoffs.
The risk: we optimize for performance and productivity while ignoring who pays the human cost.
3. Emotionally complex topics
Finally, there are the conversations AI is technically capable of engaging with but culturally discouraged from: grief, anger, trauma, burnout.
Most systems default to a slightly over‑polished empathy: safe, supportive, but emotionally thin. For certain contexts (especially public‑facing ones), that’s the right call. But if AI is going to sit anywhere near mental health, wellness, or high‑stakes decision‑making, we need to be honest about where its emotional limits should be and how humans stay in the loop when things get complicated.
The risk: we mistake pleasant, risk‑averse language for real support.
Designing AI that can hold harder conversations, safely
The goal is not to unleash AI with no limits. It’s to design limits thoughtfully, with the communities most impacted in the room when we draw the line.
Here are a few principles I shared and have been reflecting on since the panel:
1. Human‑in‑the‑loop means human‑in‑the‑lead
We say “human‑in‑the‑loop” a lot in AI ethics, but in practice that can mean a human rubber‑stamping whatever the system outputs.
What we actually need:
- Humans setting the questions, not just reviewing answers.
- Clear paths for escalation to humans when conversations go beyond what AI should handle.
- Feedback loops where communities can say, “This response is harmful or incomplete,” and see that reflected in how the system evolves.
AI should be a tool that extends human care and judgment, not a filter that replaces them.
2. Co‑design with the people most affected
If you’re building or deploying AI that will touch sensitive conversations about identity, health, safety, work, or rights, the people most affected can’t be “user personas” on a slide. They have to be in the room:
- Co‑creating guidelines around what should and shouldn’t be automated.
- Defining what “safe enough” actually means in their context.
- Testing early and often, with the power to say “this isn’t working for us” and be taken seriously.
Without that, you’re not just limiting AI’s conversations; you’re limiting whose voices shape those limits.
3. Transparency over perfection
We will not get this right on the first try. Guardrails will over‑block some topics and under‑protect others. That’s inevitable.
What’s not inevitable is silence.
We owe people:
- Clear explanations of what AI systems can and can’t talk about.
- Honest disclaimers when a topic is too complex or sensitive for AI and people need a human instead.
- Visible ways to challenge, correct, or opt out of AI‑mediated interactions.
Transparency builds trust far more effectively than the illusion of a flawless, all‑knowing system.
What this means for marketers and brands
From a marketing and storytelling standpoint, the conversations AI “isn’t allowed” to have are often the same conversations brands hesitate to engage in publicly.
That’s not a coincidence.
As we integrate AI into creative workflows, customer experiences, and media, we have to decide:
- Are we using AI to sanitize our language and avoid discomfort?
- Or are we using it to amplify access, remove friction, and surface insights, while still showing up, as humans, for the hard parts?
My hope is that we choose the latter. That we let AI handle the repetitive and the routine, so humans have more time and space for nuance, conflict, repair, and genuine dialogue.
A closing thought from the SHE Summit
The panel title keeps echoing in my mind: “The Conversations AI Isn’t Allowed to Have.” Underneath it is a bigger question:
What conversations are we not allowing ourselves to have about technology, about power, about each other?
AI will always have limits. It should. The work is making sure those limits protect people rather than silence them, and that we don’t outsource our most important, human conversations to a system that was never built to carry them.
If there’s one thing I took away from that GatherVerse room, it’s this:
The hardest conversations are still our responsibility. AI can support, surface, and sometimes even challenge, but it can’t replace the courage it takes to actually have them.
If your team is wrestling with how to bring AI into your work without losing the hard, human conversations that matter most, connect with us at Avenue Z to workshop your AI guardrails, use cases, and storytelling so your technology choices actually reflect your values.
Watch the full panel here.