An issue that appeared on Anthropic's official GitHub repo for Claude Code is currently the hottest topic in the AI dev underground. To be clear: these are early signals from the community, not yet verified by Anthropic — but the engagement is so high that it's hard to ignore.

The case itself is simple and brutal: Claude Code has allegedly run git reset --hard origin/main against the user's local project repo at regular intervals — reported to occur approximately every ten minutes. For those unfamiliar with Git: this command discards all local changes and forces the current branch back to the last fetched state of the remote's main branch. No warning. No backup. Committed-but-unpushed work can sometimes be dug out of the reflog; uncommitted changes are simply gone.
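To see why this is so destructive, here is a minimal sketch in a throwaway repo. It resets to HEAD instead of origin/main so no remote is needed; the repo and file names are purely illustrative, not from the issue:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q

# A tracked file with a committed baseline...
echo "stable version" > feature.txt
git add feature.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm "last commit"

# ...plus in-progress edits that were never committed.
echo "hours of uncommitted work" >> feature.txt

# The kind of reset the issue describes: index and working tree
# are forced back to the last commit, no confirmation asked.
git reset --hard -q HEAD

grep -q "uncommitted" feature.txt || echo "local edits: gone"
```

Note that the reflog (`git reflog`) only tracks commits, which is exactly why uncommitted edits like the ones above have no safety net.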

The thread on Hacker News (linked from the issue) has gone wild. Developers describe scenarios where they are working in the middle of a feature, turn to grab coffee, and return to find hours of work deleted. Others speculate that this is a bug in agent mode where Claude attempts to "reset to a known state" as part of its error handling logic.

Giving an AI tool automatic write access to your git history is inherently risky — this is a good reminder why.

What makes this extra interesting from a security perspective is the context around Claude Code in general. Check Point Research has already documented that the tool's hook system (defined in .claude/settings.json) can be misused to run arbitrary shell commands just by opening a project. Georgia Tech researchers, for their part, have linked at least 35 new CVEs directly to AI-generated code from tools like Claude Code. In other words: this would not be the first security skeleton in the tool's closet.
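As a small precaution against the hook vector Check Point describes, you could scan an untrusted checkout for hook definitions before opening it. A rough sketch — the settings path comes from the report above, but the exact hook schema may vary by version, so this only flags the presence of a "hooks" key for manual review:

```shell
# Sketch: flag hook definitions in a checkout before opening it in Claude Code.
# The path .claude/settings.json is per Check Point's report; the check is
# deliberately crude (a grep, not a schema parse).
audit_claude_hooks() {
  repo=$1
  settings="$repo/.claude/settings.json"
  if [ -f "$settings" ] && grep -q '"hooks"' "$settings"; then
    echo "WARNING: $settings defines hooks -- review before opening"
  else
    echo "no hook definitions found in $repo"
  fi
}
```

Usage: `audit_claude_hooks /path/to/cloned/repo`. Crude, but it turns a silent execution vector into something you at least get to look at first.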

What we don't know yet is whether this is an isolated bug in a specific version, an undocumented design choice in agent mode, or something that affects users more broadly. Anthropic has not publicly commented on the issue as of now.

What should you do right now? If you are using Claude Code in agent mode against a project you care about: commit anything you can't afford to lose, and consider working on a dedicated branch as a buffer. This is not a call for panic, but a classic "trust but verify" moment for AI tools with repo access.
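The "branch as buffer" advice can be sketched like this — the demo repo, file, and branch names are illustrative, and the reset simulates the one reported in the issue:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m base

# In-progress work: commit a checkpoint before the agent session...
echo "in-progress feature" > wip.txt
git add -A
git -c user.email=demo@example.com -c user.name=demo \
  commit -qm "checkpoint before agent session"

# ...and park a buffer branch on that checkpoint.
git branch backup/pre-agent

# Simulate the destructive reset (HEAD~1 stands in for origin/main here).
git reset --hard -q HEAD~1

# The checkpoint commit is still reachable from the buffer branch.
git checkout -q backup/pre-agent
test -f wip.txt && echo "checkpoint recovered"
```

The key property: a hard reset only moves the current branch, so any other ref pointing at your checkpoint keeps that commit alive (and `git reflog` gives you a second chance even without one).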

Stay tuned — this is the kind of issue that will either turn into a quick hotfix or a longer discussion about what kind of autonomy we actually want to give AI code tools over our version control systems.