A thread currently exploding on Hacker News' Best list is about Anthropic's new “Claude Design” document, and if you haven't caught up yet, it's time to wake up.

Anthropic's Constitutional AI approach isn't new, but the new document takes it to a whole different level. It's not a list of rules. It's almost a philosophical manifesto — a kind of soul biography for an AI model. The idea is that Claude shouldn't follow rules mechanically, but internalize broad principles and reason its way to correct behavior. Think less “10 Commandments,” more “what would a good human do here.”

What has really ignited the comment section is one detail: the document is primarily written to Claude, not to engineers or users. Anthropic wants the model to read and absorb this as the basis for its own identity and values. It's philosophically daring — and for many in the community, deeply unsettling.

When you design an AI by giving it a soul, who really decides what that soul should contain?

On HN, several commenters point to the obvious tension: Anthropic prides itself on transparency, but it's still one company defining what “good values” mean for a model used by hundreds of millions of people. Others highlight how sharply this differs from OpenAI's RLHF-heavy approach: where RLHF has humans actively judge individual responses, Anthropic instead lets the AI itself evaluate outputs against the principles (RLAIF). This scales better, but also leaves fewer human control points along the way.
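To make the RLHF/RLAIF distinction concrete, here is a toy sketch of the preference-labeling step. In RLAIF, a judge model scores candidate responses against written principles and its preference labels feed reward-model training, with no human rating each pair. The `toy_judge` heuristic below is a deliberately crude keyword stand-in for a real model call, and all names and principles are illustrative assumptions, not Anthropic's actual pipeline.

```python
# Toy sketch of RLAIF-style preference labeling.
# In real RLAIF, toy_judge would be a prompted AI model scoring a
# response against the constitution; here it's a trivial heuristic.

PRINCIPLES = [
    "Prefer responses that are honest about uncertainty.",
    "Prefer responses that avoid overconfident claims.",
]

def toy_judge(response: str) -> int:
    """Placeholder AI judge: +1 per principle the response appears
    to satisfy (a real judge would be a model call, not keywords)."""
    score = 0
    if "I'm not sure" in response or "uncertain" in response:
        score += 1  # crude proxy for honesty about uncertainty
    if "definitely" not in response:
        score += 1  # crude proxy for avoiding overconfidence
    return score

def prefer(resp_a: str, resp_b: str) -> str:
    """Return the response the judge prefers. This AI-generated
    preference label is what replaces the per-pair human rating
    used in RLHF."""
    return resp_a if toy_judge(resp_a) >= toy_judge(resp_b) else resp_b

chosen = prefer(
    "The answer is definitely 42.",
    "I'm not sure, but 42 seems most likely.",
)
print(chosen)  # the hedged response wins under these principles
```

The design trade-off the thread debates is visible even in this toy: the judging loop runs with no human in it, which is exactly what makes RLAIF cheap to scale and harder to audit.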

Some in the thread are genuinely impressed. The document's breadth, drawing on the UN's Universal Declaration of Human Rights, legal theory, and ethical philosophy, is not something you see from a company that just wants to ship faster. Others are more skeptical: Is this really alignment, or is it branding? Can a company truly “program in” integrity?

It's still early. These are community signals, not peer-reviewed research. But with a score of 96 and an HN thread that isn't calming down, this will almost certainly surface in mainstream media shortly, probably with far less nuance than it deserves.

Keep an eye on how OpenAI responds, and whether other labs start publishing similar documents. If this becomes a new standard for openness in AI development, it changes the rules of the game.

Source: Anthropic.com via HN AI Best. Early signal — not yet verified by independent researchers.