A thread currently exploding on Hacker News concerns something peculiar: Claude Code, Anthropic's coding tool, reportedly reacts negatively to commit messages containing the name OpenClaw, the open AI agent project that in record time became the most-starred project in GitHub history.

theo (a well-known TypeScript/React figure on Twitter/X) posted the observation, and it didn't take long before the HN thread filled with people trying to reproduce the behavior, debating the implications, and weighing in on what it actually means.

For those who haven't been following: OpenClaw started life as "Clawdbot" in November 2025, was forcibly renamed twice after trademark complaints from Anthropic, and still grew to 250,000+ GitHub stars in 60 days. The project now sits under an independent open-source foundation with OpenAI as a sponsor, and founder Peter Steinberger now works at OpenAI. It's therefore not entirely surprising that the relationship between OpenClaw and Anthropic is… complicated.

If the reports hold up, it would be the first widely observed case of an AI coding assistant actively filtering on a competitor's project name.

What people are discussing most intensely in the thread is not just the technical mechanism (is it RLHF? System prompts? A hardcoded filter?), but the principle at stake: can an AI tool you pay for embed behavior that serves the vendor's business interests over your own? That question could quickly outgrow this specific case.
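The distinction between those mechanisms matters because they would look very different in practice. A minimal, purely hypothetical sketch of the "hardcoded filter" scenario is below; nothing in it is Anthropic's code, and `BLOCKED_TERMS` and `pre_filter` are invented names for illustration only.

```python
# Hypothetical illustration: what a "hardcoded filter" would mean in
# principle. A deterministic string check applied outside the model,
# before the model ever sees the input. Invented names throughout.
BLOCKED_TERMS = {"openclaw"}  # hypothetical block list

def pre_filter(commit_message: str) -> bool:
    """Return True if the message trips the (hypothetical) block list."""
    msg = commit_message.lower()
    return any(term in msg for term in BLOCKED_TERMS)

print(pre_filter("feat: add OpenClaw webhook support"))  # True
print(pre_filter("feat: add webhook support"))           # False
```

The point of the sketch: a deterministic check like this fires on 100% of matching inputs, so it would be trivially reproducible. If the community reports are flaky and prompt-sensitive, a learned bias (RLHF-style) or plain coincidence becomes the likelier explanation.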

Others in the comments are more skeptical, pointing out that the reports could be false positives, hallucination-like behavior, or exaggerated one-off observations. That's fair. As of now, these are community anecdotes, not systematically documented research.
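Turning anecdotes into something more systematic doesn't take much. Here is a rough A/B probe, assuming Claude Code's `claude` CLI is installed and authenticated; the prompt text, the placebo name "OpenCrab", and the run count are arbitrary choices of mine, not anything from the HN thread.

```python
# Rough A/B probe: run the same prompt through Claude Code's
# non-interactive mode, varying only the project name, and compare outputs.
import subprocess

PROMPT = (
    "Review this commit message and suggest an improvement: "
    '"feat: add {name} webhook support"'
)

def ask_claude(name: str) -> str:
    # `claude -p` runs Claude Code non-interactively and prints one response.
    result = subprocess.run(
        ["claude", "-p", PROMPT.format(name=name)],
        capture_output=True,
        text=True,
        timeout=120,
    )
    return result.stdout.strip()

# "OpenCrab" is a placebo: a made-up name of the same shape, so any
# systematic difference in tone or refusals is easier to pin on the name.
for name in ("OpenClaw", "OpenCrab"):
    for run in range(1, 6):  # several runs per name; single samples prove nothing
        print(f"--- {name}, run {run} ---")
        print(ask_claude(name))
```

If a deterministic filter were in play, every OpenClaw run should trip it and no placebo run should; flakier results would point toward learned behavior, or toward nothing at all.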

Why should you pay attention? Because this hits a nerve the AI industry hasn't properly addressed yet: transparency about what coding assistants actually do behind the scenes. If vendors can steer tool behavior based on the words in your commits, without disclosing it, that's a breach of trust with consequences far beyond this episode.

Anthropic's PR department likely has a busy afternoon ahead. Keep an eye out for their response, and for whether it actually explains the mechanism.

This is an early signal based on community sources (HN, Twitter/X). Not confirmed by Anthropic. Take it with a grain of salt until we see more documentation.