AI And Cybersecurity: A Glass Half-Empty/Half-Full Proposition, Where The Glass Is Holding Nitroglycerin
from the yikes dept
First, some of the good news: certain AI models—currently Anthropic’s Mythos, but surely others are well on their way if they haven’t already arrived—turn out to be really good at finding cybersecurity vulnerabilities. As Anthropic itself reported:
During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.
That’s quite the tool, if it can help find vulnerabilities so that they can be patched.
But it’s also quite the tool for finding vulnerabilities so that they can be exploited. Like so many tools, including technological ones, whether they are good or bad depends entirely on how they are used. A hammer is a really helpful tool for building things, but it also smashes windows. And with this news, AI now has the capability for some really destructive uses.
To try to prevent them, Anthropic is working with some of the largest tech companies in the world to let them use a preview of its model on their own software to help QA it and proactively patch vulnerabilities. As Casey Newton reports:
Anthropic announced Mythos alongside Project Glasswing, an initiative with more than 40 of the world’s biggest tech companies that will see Anthropic grant early access to the model to find and patch vulnerabilities across many of the world’s most important systems. Launch partners in the coalition include Apple, Google, Microsoft, Cisco and Broadcom.
They’ll be tasked with scanning and patching their own systems along with the critical open-source systems that modern digital infrastructure depends on. Anthropic is giving participants $100 million in usage credits for Mythos, and donating another $4 million to open-source security efforts.
This sounds like a great program. It also should be noted that the Mythos model is not consumer-grade AI; it takes expensive, dedicated infrastructure to run, which means that, at least for the moment, there’s not an imminent danger of it being misused. But trouble is nevertheless brewing, and someday it will be here, which raises certain questions, like:
(A) What about other AI models, which will inevitably be similarly powerful? What if they are produced by less ethical companies, with no compunction about letting rogue actors use their systems in destructive ways that Project Glasswing won’t have intercepted?
(B) And what about every legacy technology system still in use, which Project Glasswing is unlikely to be able to retroactively fix? Large, well-resourced companies may be able to weather the oncoming storm, but what about your local dentist’s office? Or a hospital? Municipal IT systems? Networked technology is everywhere, and these smaller businesses and institutions are likely to have older, unpatched technology and fewer resources to update and secure it, or to deal with the consequences of a hack, which can be devastating for the business or the people they serve.
On the other hand, there does seem to be one other bit of good news with this revelation: governments, including that of the United States, have often engaged in the dubious practice of hoarding zero-days, collecting information about vulnerabilities that they then keep secret so that they can exploit them themselves against an adversary. For those unfamiliar, a “zero-day” is a vulnerability that has yet to be disclosed, which is why it’s on “day zero”: before the first day of its being a known vulnerability that could be fixed.
Mythos’s capabilities would seem to obviate this strategy, because suddenly the stash of unknown vulnerabilities isn’t really going to be such a secret, since anyone using the model will be able to find them. Mythos’s existence changes the balance of interests: the stronger national security play for the government would be to disclose any discovered vulnerability to the vendor as soon as possible so that it can be patched and our nation’s systems made more secure. Arguably that was always the better national security play, but now there’s no upside at all to trying to keep vulnerabilities secret, because it now needs to be presumed that adversaries will be able to find and exploit them. They’ll have the tools.
With these AI models we’re going to need to presume that everyone will have the tools to know about every vulnerability. Up to now there has been at least the illusion of some security, because vulnerabilities couldn’t be exploited if no one knew about them, and finding vulnerabilities is hard. But now that it will be easy, the risk to the nation’s cybersecurity is greater than any we have ever contended with before.
It is also not really a great harbinger that we know about Mythos because… a copy of the software got leaked. It’s just the software that was leaked, and not the models it uses to tune its “reasoning,” which means that anyone now trying to build their own Mythos is still missing an important piece if they want to mimic its full capabilities. But they would still have a lot. Which is probably why Anthropic has been sending DMCA takedown notices to have the leaked software removed from the Internet.
But doing so raises a related issue: the role of copyright law when it comes to “vibe coding,” or having an AI system write the software rather than a programmer, just by instructing it on what to do. It’s especially important in light of the cybersecurity concerns always raised by software (including vibe-coded software, since we have to trust that what’s produced does not have vulnerabilities). Copyright requires a human author, which raises the question: can software written by an AI be copyrightable? The answer would appear to be no, unless there was a great deal of creative effort on the part of a human being to instruct the AI or modify the output. But as Ed Lee chronicled, per Anthropic itself, even its own software (“pretty much 100%”) is being written by AI. And if that’s the case, then Anthropic has no business sending takedown notices for its software, because DMCA takedown notices are only for demanding the removal of copyrighted works, which, it would appear, Anthropic’s own code does not qualify as.
But maybe it’s better if software stops being subject to copyright. “Vibe coding” is becoming increasingly efficient, to the point that there is likely no need for copyright to incentivize its authorship. Instead, what public policy really needs to emphasize is that whatever software is produced is secure software. But in many ways copyright obstructs that goal. Its lengthy terms mean that while a copyright holder might no longer be maintaining its older software, no one else can maintain and patch it either without potentially infringing the software’s copyright. There is also its privileged secrecy (unusually for copyright, when it comes to software you don’t actually have to disclose all the actual code to register a copyright in it!) and its other powers to lock out security research efforts, like through Section 1201 of the DMCA, when such efforts aren’t specifically supported by the developer — assuming the developer supports any security testing at all, as right now there aren’t necessarily the incentives to make them care about it. Instead, public policy has given developers the ability, like with copyright, to escape oversight of the security of their software products, even as those products end up embedded in more and more of our lives.
It’s time to change that focus and get copyright out of the way of making software security our top policy priority.
And fast.
Filed Under: ai, claude, claude code, copyright, cybersecurity, mythos, project glasswing, vulnerabilities, zero days
Companies: anthropic