Anthropic uses a detailed constitution to train Claude, its AI model. Until recently, that constitution dated from May 2023. The company decided it was time for an upgrade, so it has written a new constitution to ensure the AI more closely follows its guidelines and values.
We’re publishing a new constitution for Claude. The constitution is a detailed description of our vision for Claude’s behavior and values. It’s written primarily for Claude, and used directly in our training process. https://t.co/CJsMIO0uej
— Anthropic (@AnthropicAI) January 21, 2026
What is Claude’s Constitution?
Claude’s constitution is the document that underpins how the AI operates. It details the values Anthropic wants Claude to embody and explains why. Claude uses this content to decide how to present information and handle complicated situations in a way that fits the company’s expectations.
Every response Claude gives should be consistent with the constitution. Anthropic published the constitution publicly so users can judge whether Claude’s behavior is intended or unintended. If Claude doesn’t follow the guidelines well enough, the constitution gets tweaked to help the AI better understand what’s expected of it.
Claude has quickly outgrown the constitution written in 2023, so Anthropic decided it was time to make significant changes to improve how the AI operates.
Biggest Changes in the New Constitution
In the new constitution released on January 21, 2026, the company focused on a few key changes. Those include:
- Being Broadly Safe: Claude must act within sanctioned limits and avoid drastic or irreversible actions that could harm people. If it’s asked to do something the constitution deems unsafe, it must refuse, since completing the task would undermine human oversight.
- Following Claude’s Ethics: Claude must act on good values and avoid inappropriate, harmful, or dangerous responses. It cannot generate content or advice that could hurt people.
- Remaining Compliant with Guidelines: Claude always acts in accordance with Anthropic’s more specific guidelines, which cover situations such as giving medical advice. These guidelines help keep responses safe and ethical.
- Offering Genuine Helpfulness: The goal is for Claude to benefit the people who interact with it, so it must be genuinely helpful in every response, acting like a friend with expert knowledge. This helpfulness is always subordinate to the other priorities in the constitution.
For more details, check out the full version of Claude’s Constitution, which is publicly available.
Claude’s Constitution Will Keep Changing
Even with this update, Claude’s constitution isn’t set in stone. It’s a continuous work in progress, so it will be updated to correct mistakes and add more details over time. The most up-to-date version will always be on Anthropic’s website to offer transparency to users.
Anthropic developed the constitution with feedback from external experts in law, philosophy, theology, and psychology. The company will continue to gather a variety of feedback about Claude and update the constitution as necessary.
Consider AI Searches in Your Brand Marketing
As AI platforms like Claude keep advancing, they’re becoming more popular for search. Many people discover products by asking questions on AI platforms, making them a great way for brands to reach new customers.
AI optimization is a key aspect of modern brand marketing. Contact Avenue Z today to see how we can increase your company’s visibility in AI searches.