A piece of news circulated through security feeds last week. Researchers had documented a class of command-injection vulnerabilities in the Model Context Protocol — the now-ubiquitous standard that connects AI agents to tools, data, and IDEs. The disclosure covered the official SDK across every major language. The downstream blast radius reached into the AI-assisted IDE plugins most working developers have running on their laptops right now.
You can read the advisory itself elsewhere; this isn’t a writeup of the bug. It’s a writeup of the feeling I had reading it, which was: I’ve seen this movie before. So have you. So has anyone who was building software in the early 2000s, or the early 2010s, or really any of the major eras when a new layer of the stack became load-bearing faster than the security culture around it could form.
The original
For most of the 2000s, every web application was vulnerable to SQL injection. Not “had a few bugs.” Vulnerable, in the broad ambient sense — the way every car in 1965 was vulnerable to its driver’s chest hitting the steering column. It was the default state of a web app written by a working developer.
The vulnerability wasn’t novel or clever. SQL injection was understood and named in the late 1990s. The unsafe pattern — concatenating user input into a query string — was just the most obvious way to get the job done in the languages and frameworks of the era. Every PHP tutorial showed you string concatenation. Every Classic ASP example built queries by gluing strings together. New developers, of which there were a great many, learned the unsafe pattern as the normal pattern. By the time they learned a safer one, they had usually shipped.
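The tutorial-era pattern is easy to reproduce. A minimal sketch in Python rather than the tutorials' PHP, with an invented `users` schema, showing how a single quote in the input rewrites the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_unsafe(name):
    # The tutorial-era pattern: glue the input straight into the query.
    # Input like "x' OR '1'='1" changes the query's meaning entirely.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

print(find_user_unsafe("alice"))         # [('admin',)]
print(find_user_unsafe("x' OR '1'='1"))  # [('admin',), ('user',)]: every row
```

Nothing about this code looks broken. That was the whole problem: it works on every input the author thinks to test.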
What eventually fixed it wasn’t a single advisory or a single patch. It was a slow shift at the framework layer: ORMs that escaped by default, query builders that didn’t expose raw concatenation, parameterized queries promoted from “advanced topic” to “the only path the framework gives you.” The fix didn’t happen because individual developers collectively got better. It happened because the population of developers shifted onto tools that made the safe path the default and the unsafe path harder. That took, roughly, a decade and a half.
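The safe path the frameworks eventually made the default is the parameterized query: the driver binds the value, so input stays data and can never change the query's structure. A sketch against the same invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_safe(name):
    # The placeholder keeps data as data: the driver binds the value,
    # so quotes in the input cannot change the query's structure.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_safe("alice"))         # [('admin',)]
print(find_user_safe("x' OR '1'='1"))  # []: treated as a literal name
```

The fix is one line shorter than the bug. The decade and a half went into making it the line the tutorials show first.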
The pattern
The interesting part of that history is the shape of it, not the outcome. Three things happened simultaneously.
A foundational layer — relational databases plus the dynamic web — became load-bearing faster than the security practices around it had time to form. The number of people building on top of that layer grew much faster than the conventions did. And the early defaults, set by a small group of designers under deadline pressure, propagated unchallenged through tutorials, copy-paste examples, and starter projects until they were just the way things were done.
Each of those three was, individually, manageable. Together, over a long enough timescale, they produced an environment in which the default state of a web application was vulnerable. The eventual fix required tools and conventions to catch up to the population — which is to say, it required time.
I keep coming back to the third factor, the population one, because it’s the hardest to talk about politely. The reason SQL injection persisted wasn’t that experienced developers didn’t know better. It was that the population of people writing web code grew far faster than the population of people who had been doing it long enough to know better. Practices spread by mentorship, by code review, by reading the work of careful people. Mentorship doesn’t scale at the rate that tutorials do.
The current round
The MCP advisory rhymes with this so closely that it’s almost embarrassing. A foundational layer — protocols connecting AI agents to tools — has become load-bearing in a fraction of the time SQL took to do the same. The number of people building servers, connectors, and tooling on top of it is growing at a pace that is not really comparable to anything in the last twenty years. And the early defaults, set quickly by a small group of designers under genuine deadline pressure, are propagating through SDKs, examples, and starter repos faster than anyone can challenge them.
The specific technical issue — passing user-controlled values into a process invocation without a layer in between — is, in the abstract, the same class of bug that defined the SQL injection era. The mechanics differ. The shape is identical. Untrusted input meets a powerful primitive without a meaningful boundary.
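That shape fits in a few lines. A hedged sketch (the checksum tool and the filename parameter are invented for illustration; `sha256sum` is assumed to be on the PATH): the unsafe version splices input into a shell string, the safe one passes an argument vector.

```python
import subprocess

def checksum_unsafe(filename):
    # Splicing the value into a shell string hands the shell's grammar
    # to whoever controls `filename`: "; rm -rf ~" is a valid value.
    return subprocess.run("sha256sum " + filename,
                          shell=True, capture_output=True, text=True)

def checksum_safe(filename):
    # An argument vector gives the input no boundary to cross; the
    # value reaches the program as a single argv entry, data not code.
    return subprocess.run(["sha256sum", "--", filename],
                          capture_output=True, text=True)

# A harmless probe makes the difference visible.
probe = "nonexistent; echo INJECTED"
print("INJECTED" in checksum_unsafe(probe).stdout)  # True: the shell ran it
print("INJECTED" in checksum_safe(probe).stdout)    # False: just a bad filename
```

The analogy to parameterized queries is exact: the argv list plays the role of the placeholder.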
What’s new isn’t the bug. The bug is, frankly, boring. What’s new is the speed. SQL injection had a long fuse before it became an ecosystem-wide condition, and an even longer one before frameworks closed it. The current AI-tooling layer hasn’t been around for two years, and the conventions that would normally form during a quieter period — the “wait, you should never do that” reflex among practitioners, the framework-level escape hatches, the linters, the secure-by-default starter templates — are being asked to form during the deployment, not before it.
Why supply chain is the right frame
A lot of the coverage of this disclosure has reached for the language of supply chain risk, and I think that framing is more useful than it first appears. Not because there are bad actors poisoning packages — there might be, but that isn’t the structural part. It’s because the most consequential decisions in this kind of ecosystem are no longer made at the level of the individual application. They’re made at the protocol and SDK layer, once, and then they propagate.
When you write an MCP server today, you don’t write it from scratch. You install the SDK, follow the example, and ship. The decisions baked into that SDK — about what counts as trusted, what gets sanitized, what defaults are safe — are the decisions you inherit. You can override them, in the same way you could write your own SQL escape function in 2004, but most people, most of the time, will not. The protocol’s defaults are the application’s behavior, by transitive default.
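To make the inheritance concrete, here is a deliberately generic sketch. None of this is the real MCP SDK API; the `tool` decorator, the registry, and the `grep_logs` handler are all invented for illustration. The point is where the boundary lives: in the body the author writes, not in anything the registration machinery enforces.

```python
import shlex
import subprocess

# Hypothetical stand-in for an SDK's tool registry; invented for
# illustration, not the real MCP SDK surface.
TOOLS = {}

def tool(fn):
    """Register a function as an agent-callable tool (toy version)."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def grep_logs(pattern: str) -> str:
    # What "follow the example and ship" tends to produce: a tool
    # argument flowing straight toward a process invocation. The only
    # boundary is the one the author adds; here, a character filter
    # plus an argv list instead of a shell string.
    if not pattern or any(c in pattern for c in ";|&$`\n"):
        raise ValueError(f"pattern rejected: {shlex.quote(pattern)}")
    result = subprocess.run(["grep", "-r", "--", pattern, "/var/log"],
                            capture_output=True, text=True)
    return result.stdout
```

Most shipped handlers will look like this body minus the guard, because the guard is the part the starter example doesn't include.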
This is the actual supply chain. Not a malicious package; a reasonable architectural decision, made once, propagating unexamined through every downstream library and project that trusted the protocol to be what it appeared to be. When the decision turns out to need rethinking, the rethinking has to happen at the same layer where it was made. There isn’t a way to fix it project-by-project — or rather, there is, but the math doesn’t work, because the projects are being created faster than they can be reviewed.
What’s actually different
If this were just “SQL injection again,” it would be a footnote. The reason to write about it is the part that is different, which is the timescale, and what the timescale does to the population.
The web’s unsafe-by-default era ran on a clock measured in years and was repaired on a clock measured in the same units. We had time to write the books, run the conferences, train the next cohort, build the frameworks, deprecate the old patterns. The repair was painful, but it fit inside an industry’s normal adaptation cycle.
The current cycle does not fit inside that adaptation cycle. New tooling layers go from announcement to universal in months. The cohort writing on top of them is dominated, at any given moment, by people who arrived in the last quarter or two — not because they’re careless, but because the field is growing faster than its older members are being promoted into mentoring roles, faster than the patterns are being written down, faster than the linters are being shipped. The conventions that would normally form during a slower onboarding aren’t forming, because the onboarding is happening at a rate the conventions can’t keep up with.
This isn’t a complaint about the people who are new to the space. They’re doing exactly what new entrants have always done, which is build things using the defaults the ecosystem hands them. It’s an observation about what happens to defaults when the ecosystem is moving this fast. They harden into permanent behavior before anyone has time to ask whether they’re the right defaults.
What I take from it
I don’t think there’s an actionable closing argument here, and I’m wary of the kind of post that pretends there is. Security has always lagged shipping; that ratio isn’t new. What’s new is that the lag is now measured against a denominator — the rate of new tooling, new libraries, new users — that has no historical analog. The conventions that would normally close the gap are forming during deployment, on production systems that already exist.
So the cautionary read isn’t that AI security is uniquely bad. It’s that the industry’s normal mechanism for repairing unsafe defaults — the slow accumulation of practitioner knowledge, the framework-level escape hatches, the starter-template hygiene — runs on a clock that may no longer match the one the rest of the ecosystem is on. We’ve paid this tax before, on a long fuse. We’re going to pay it again, on a much shorter one. The interesting work, for the next few years, is figuring out what kind of mechanism replaces the slow one.
It isn’t a doom story. It’s just a familiar one, in fast-forward.