A discussion thread on Lobsters that started quietly a few days ago has garnered an unusual amount of traffic for a forum that usually remains quite calm. The topic: what are you doing specifically to protect yourself against a scenario where LLMs start finding zero-days on an industrial scale?
And no, this is no longer speculative sci-fi.
The figures circulating in the thread are worth taking seriously. CrowdStrike recently reported an 89 percent year-over-year increase in AI-enabled attacks, and the average breakout time (the gap between initial compromise and the attacker's first lateral movement within the network) is now down to 29 minutes. The fastest observed breakout: 27 seconds. That leaves barely any time to react.
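To make the metric concrete: breakout time is just the delta between two timestamps in an intrusion timeline. A minimal sketch, with entirely hypothetical timestamps (not from any real incident):

```python
from datetime import datetime, timedelta

def breakout_time(initial_compromise: datetime, first_lateral_move: datetime) -> timedelta:
    # Breakout time: gap between initial compromise and the first
    # lateral movement observed within the same intrusion.
    return first_lateral_move - initial_compromise

# Hypothetical example matching the reported 29-minute average:
compromise = datetime(2025, 6, 1, 14, 0, 0)
lateral = datetime(2025, 6, 1, 14, 29, 0)
print(breakout_time(compromise, lateral))  # 0:29:00
```

The point of the metric is that every defensive step (detection, triage, containment) has to fit inside that window, which is why a 27-second floor is so alarming.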
What makes the Lobsters thread particularly interesting is that it's not full of doomer rhetoric. It's technical, pragmatic, and a little frightening precisely because the people there know what they're talking about. Some highlight air-gapping as the only real defense — the classic "machine that has never been online" strategy. Others point out that this doesn't help for infrastructure we actually depend on.
Context is important here: AI CVEs (documented vulnerabilities related to AI systems) surged past 2,100 in 2025 alone, an increase of nearly 35 percent year over year. Of these, more than 1,500 carry a high or critical severity rating. In other words, the attack surface is growing faster than defensive capacity.
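The round figures cited in the thread imply two things worth spelling out: the severity mix and the implied prior-year baseline. A quick back-of-the-envelope check, treating the thread's numbers as given rather than verified:

```python
# Figures as cited in the thread (round numbers, not verified data):
total_ai_cves_2025 = 2100
high_or_critical = 1500
yoy_increase = 0.35  # "nearly 35 percent"

# Share of 2025 AI CVEs rated high or critical severity:
severity_share = high_or_critical / total_ai_cves_2025

# Baseline implied for the previous year by a 35 percent increase:
implied_prior_year = total_ai_cves_2025 / (1 + yoy_increase)

print(f"{severity_share:.0%}")       # 71%
print(f"{implied_prior_year:.0f}")   # 1556
```

So roughly seven in ten of these vulnerabilities are high or critical, on top of a baseline that was already above 1,500 the year before.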
The truly worrying scenario discussed in the thread is not that AI directly attacks systems — it's that a small group of people with access to powerful LLMs can find and exploit vulnerabilities that take years to patch, or that cannot be patched at all because they are deeply embedded in hardware or protocols.
Google Project Zero has already shown that AI can increase vulnerability detection by up to 20 times on standard benchmarks. It's a double-edged sword of immense proportions.
Why should you pay attention to this now? Because mainstream security media still treats this as a future scenario, and it no longer is one. The Lobsters thread is an early signal that people working hands-on with security are taking this seriously on a completely different level than they were six months ago.
NOTE: This is based on community discussions and aggregated industry figures — not verified through independent primary sources. Early signal, not definitive.
