Scoopfeeds — Intelligent news, curated.
agentic-ai

AI is Breaking Two Vulnerability Cultures

LessWrong · May 8, 2026, 3:50 PM · Also reported by 1 other source

A week ago the Copy Fail vulnerability came out, and Hyunwoo Kim immediately realized that the fixes were insufficient, sharing a patch the same day. In doing this he followed standard procedure for Linux, especially within networking: share the security impact with a closed list of Linux security engineers, while fixing the bug quietly and efficiently in the open. His goal was that with only the raw fix public, the knowledge that a serious vulnerability existed could be "embargoed": the people in a position to address it know, but they've agreed not to say anything for a few days. Someone else noticed the change, however, realized the security implications, and shared it publicly. Since it was now out, the embargo was deemed over, and we can now see the full details.

It's interesting to see the tension here between two different approaches to vulnerabilities, and to think about how this is likely to change with AI acceleration.

On one side you have "coordinated disclosure" culture. This is probably the most common approach in computer security. When you discover a security bug you tell the maintainers privately and give them some amount of time (often 90 days) to fix it. The goal is that a fix is out before anyone learns about the hole.

On the other side you have "bugs are bugs" culture. This is especially common in Linux, where the argument is that if the kernel is doing something it shouldn't, then someone somewhere may be able to turn it into an attack. Just fix things as quickly as possible, without drawing attention to them. Often people won't notice, with so many changes going past, and there's still time to get machines patched.

This approach never worked perfectly, but with AI getting good at finding vulnerabilities it's a much bigger problem. So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher. Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective. [1]
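To make the "evaluate each commit as it passes" point concrete, here is a minimal sketch of what such a triage pipeline could look like. In practice the scorer would be an AI model reading the full diff; here a simple keyword heuristic stands in so the pipeline shape is clear. All names (`security_score`, `triage`, the example commits) are hypothetical, not from any real tool.

```python
# Hypothetical commit-stream triage sketch. A real attacker or defender would
# replace security_score() with an AI model judging the full diff; the keyword
# heuristic below is only a stand-in to illustrate the pipeline.

SECURITY_HINTS = (
    "overflow",
    "use-after-free",
    "out-of-bounds",
    "double free",
    "bounds check",
    "sanitize",
)

def security_score(message: str, diff: str) -> float:
    """Return a 0..1 score estimating how likely a commit is a security fix."""
    text = (message + "\n" + diff).lower()
    hits = sum(1 for hint in SECURITY_HINTS if hint in text)
    return min(1.0, hits / 3)

def triage(commits, threshold=0.3):
    """Yield the SHAs of commits worth a closer (human or AI) look."""
    for sha, message, diff in commits:
        if security_score(message, diff) >= threshold:
            yield sha

# Two made-up commits: one quiet security fix, one documentation change.
commits = [
    ("a1b2c3", "net: fix out-of-bounds read in parser",
     "+ if (len > buf_len) return -EINVAL;"),
    ("d4e5f6", "docs: update maintainer list",
     "+ New maintainer entry"),
]
print(list(triage(commits)))  # flags only the bounds-check fix: ['a1b2c3']
```

The point of the sketch is economic rather than technical: once scoring every commit is this cheap to automate, quietly landing a fix no longer hides it, which is exactly the pressure on "bugs are bugs" culture described above.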

Article preview — originally published by LessWrong. Full story at the source.
Read full story on LessWrong →

Aggregated and edited by the Scoop newsroom. We surface news from LessWrong alongside other reporting so you can compare coverage in one place. Editorial policy · Corrections · About Scoop