The attack that wasn’t technical
In April 2026, attackers compromised Axios — the HTTP client inside hundreds of millions of JavaScript projects — without finding a single vulnerability in the codebase. They built a fake company, scheduled a Teams call, and waited for the maintainer to click through what looked like a routine software update. It was a Remote Access Trojan. From there: stolen credentials, a malicious package published to npm, and an exploit sitting inside the dependency tree of projects most developers have never audited.
A week later, a similar story. Two malicious versions of LiteLLM — an AI services routing library with 97 million monthly downloads — were pushed to PyPI via a compromised CI/CD pipeline. One of the downstream victims was Mercor, a $10 billion AI training data contractor serving OpenAI, Anthropic, and Meta. Potentially four terabytes of proprietary data, gone through a library that most teams using it could not have named if you asked them.
Both attacks exploited the same structural property: a widely-used package with a single point of failure controlling something that many downstream projects trusted implicitly. That structural property is not getting better. There is a strong argument that AI-assisted development is actively making it worse.
What AI recommends and why
When you ask an AI coding assistant to make an HTTP request, fetch a URL, or parse a date, it reaches for what it knows. That means packages with broad training data coverage: axios, moment.js, lodash, requests, numpy sub-utilities. These are the tools that appear in millions of Stack Overflow answers, blog posts, and tutorials. They are well-represented in training data precisely because they are popular, and they are popular partly because they have been recommended so consistently.
What the AI does not naturally reach for is the native alternative. Node’s built-in fetch. Python’s urllib. The Date object. The language standard library function that has existed for years and requires zero additional dependencies. An experienced developer often gravitates toward these by default. Not because they are ideologically opposed to dependencies but because they have learned that every dependency is a maintenance obligation, a potential vulnerability, and a point of trust in a chain they do not fully control.
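That instinct is concrete. As a sketch, here is what the two most common axios calls look like using the fetch built into Node 18+ (and every modern browser). The helper names are illustrative, not from any particular project:

```javascript
// The axios.get(url).then(r => r.data) pattern with built-in fetch.
// Note: unlike axios, fetch does not reject on HTTP error statuses,
// so the res.ok check is doing the work axios does implicitly.
async function getJson(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // the equivalent of axios's res.data
}

// The axios.post(url, body) pattern.
async function postJson(url, body) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

A dozen lines, zero dependencies, and nothing in your supply chain that was not already in your runtime.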
That instinct is not being transmitted to a new generation of builders. When you are vibe-coding your way through a project and the AI suggests import axios from 'axios', the path of least resistance is to accept it. The suggestion is confident, the code works, and there is no immediate signal that you have just added a transitive dependency graph that you will never inspect.
The new builder problem
The people most exposed to this pattern are not naive. They are capable, motivated builders who are using AI tools to ship things that would previously have required a larger or more experienced team. That is the genuine value proposition. But the experience gap that AI closes on the implementation side, it does not close on the architectural judgment side.
A senior developer who has been burned by a compromised transitive dependency, or who spent a weekend dealing with a critical npm package going unmaintained, carries that scar tissue into every project. They ask: do I actually need this? What does the native API look like? What is the maintenance status of this package? Who maintains it and what is their security posture?
A builder who learned to code with AI assistance has not accumulated that scar tissue. The AI never pushes back on a dependency choice. It does not say “you could use the native fetch here” unless you specifically ask. It treats the recommendation of a well-known package as obviously correct, because in terms of training data signal, it is.
The result is projects with dependency footprints that look like they were built by a team that never asked whether a dependency was necessary. Dozens of packages, each with their own transitive dependencies, each maintained by individuals or small teams with varying security practices, all sitting inside an application that the builder considers their own.
What the attack surface actually looks like
The Axios compromise is instructive because it illustrates how the attack surface has changed. Ten years ago, the primary concern was vulnerabilities in your own code: SQL injection, XSS, improper authentication. You could audit that. The attack surface was proportional to what you wrote.
The modern attack surface is proportional to what you import. A typical JavaScript project has hundreds of packages in node_modules. A Python ML project can have thousands of transitive dependencies once you pull in anything from the standard data science stack. Most of those packages are never directly called by your application code — they exist because something you depend on depends on them.
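The scale is easy to check in any repository. As a sketch, the count can be read straight out of a lockfile: every resolved package in a package-lock.json (v2/v3 format) appears under its "packages" key. The toy lock object below stands in for what you would normally load from disk, and the axios sub-dependencies shown are illustrative:

```javascript
// A hand-written stand-in for:
//   JSON.parse(fs.readFileSync("package-lock.json", "utf8"))
const lock = {
  packages: {
    "": {},                              // the root project itself
    "node_modules/axios": {},            // the package you asked for
    "node_modules/follow-redirects": {}, // what it brought along
    "node_modules/proxy-from-env": {},
  },
};

// Everything except the root entry is a package you are trusting.
const trusted = Object.keys(lock.packages).filter((k) => k !== "");
console.log(trusted.length); // → 3
```

Run the same count against a real project and the number is rarely in single digits; one direct dependency routinely expands into dozens of entries you never chose.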
This is the supply chain: a sequence of trust relationships, each invisible to the one above it. When Axios was compromised, every project that imported it became a potential vector. The developer who added axios to their package.json made a trust decision about the Axios maintainers without knowing they were making it. The Axios maintainers made a trust decision about the package registry without knowing they were the target.
The LiteLLM attack went one step further. It targeted the CI/CD pipeline of the package itself, meaning the attacker did not need to compromise a maintainer at all. They needed to compromise the automation that published the package. For any package that auto-publishes from a CI system with insufficient access controls, this is a viable vector.
The specific risk for AI-native projects
Projects built around AI APIs and tooling have a distinct vulnerability profile. The ecosystem is young, moving fast, and dominated by a small number of widely-adopted libraries. LangChain, LiteLLM, the Anthropic and OpenAI SDKs, HuggingFace’s transformers — these are the packages that appear in the vast majority of AI-native projects, and they are the packages that AI assistants recommend most confidently.
Each of these libraries handles sensitive material: API keys, model outputs, conversation history, user data passed to external services. A compromised version of any of them has direct access to credentials and data that would be valuable to an attacker. LiteLLM literally sits in the path of every API call your application makes to an AI service. Compromising it means you can harvest keys for every service your application talks to.
These libraries are also under intense development pressure. New features ship weekly. Maintainers are stretched. The speed of the ecosystem makes thorough security review of every release unlikely, and the broad adoption makes each release an attractive target.
The wrong dependency is also a problem
Reducing dependency count is only half of the answer. The other half is understanding what you are choosing when you do decide a dependency is warranted.
AI tools have a strong bias toward whatever is most represented in their training data. That means older, more established options — not because they are better for your use case, but because they appear in more tutorials, Stack Overflow answers, and GitHub repositories. The newer, leaner alternative that solves the same problem more cleanly is underrepresented, so the AI does not reach for it.
Take the Next.js versus Vite+React decision. Next.js is the default AI recommendation for almost any React project. It has enormous training data coverage, a large community, and clear documentation. For a project that genuinely needs server-side rendering, edge functions, or the full deployment pipeline it offers, it is a reasonable choice. But for a large proportion of projects — dashboards, internal tools, single-page applications, prototypes — it brings substantial complexity you will never use. A framework with its own routing conventions, its own build system, its own deployment model, and a release cadence that occasionally introduces breaking changes. Vite+React is smaller, faster to build, easier to understand fully, and gives you explicit control over every architectural decision. The AI will not suggest it unless you ask, because Next.js dominates the training data.
ESLint versus Biome is the same shape. ESLint is the established choice with years of plugin ecosystem behind it. It also requires a non-trivial configuration investment, has slow cold-start performance on large codebases, and involves coordinating multiple packages (the linter, the parser, the config, the plugins) that can conflict. Biome is a single binary that handles linting and formatting together, runs significantly faster, and requires almost no configuration for standard setups. For most projects, it is the better choice. AI assistants default to ESLint because that is what the training data shows — and then they generate the plugin configuration that makes it feel like the complexity was necessary.
The pattern extends broadly. Moment.js versus the Temporal API or date-fns. Webpack versus esbuild or Vite. Express versus Hono or the native Node HTTP server. Axios versus native fetch. In each case, the established option is heavier, older, and more prominent in training data. The alternative is leaner and better suited to most modern use cases. The AI will not flag the trade-off unless you push it.
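Dates are the clearest case. A sketch of the formatting most projects install moment.js for, using only the built-in Date and Intl APIs; the locale and dates are arbitrary examples:

```javascript
// Long-form date formatting, a classic moment.js use case.
// Months in the Date constructor are 0-indexed: 3 is April.
const d = new Date(Date.UTC(2026, 3, 15));

const long = new Intl.DateTimeFormat("en-GB", {
  dateStyle: "long",
  timeZone: "UTC",
}).format(d);
console.log(long); // → 15 April 2026

// Relative times ("2 days ago"), another common reason moment.js
// gets installed, handled by the built-in RelativeTimeFormat.
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
console.log(rtf.format(-2, "day")); // → 2 days ago
```

Both APIs ship with every modern runtime, are locale-aware, and add nothing to your dependency tree.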
This matters for security as well as complexity. Heavier, older packages typically have larger transitive dependency trees. More dependencies means more attack surface. The choice between Next.js and Vite+React is not just an architectural preference — it is also a choice about how many packages you are implicitly trusting. One project reduced its total dependency count by over 30% by auditing what was genuinely needed versus what had been imported by convention. The trigger was a supply chain compromise in an unrelated package that prompted the question: what else in here do we not actually need?
What you can actually do
Removing dependencies entirely is not always practical, but the reflex should be to ask the question. Native fetch handles most HTTP use cases. Standard library date handling is fine for most applications. String manipulation, file I/O, basic data transformations — a surprising number of the tasks developers install a package for can be handled by the language itself.
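The same applies to the utility belt. A few common lodash calls next to their modern-JavaScript equivalents, as a sketch (the data is made up for illustration):

```javascript
// _.uniq(xs) — deduplicate via Set
const uniq = [...new Set([1, 1, 2, 3])]; // [1, 2, 3]

// _.flattenDeep(xs) — Array.prototype.flat with Infinity depth
const flat = [[1, [2]], [3]].flat(Infinity); // [1, 2, 3]

// _.cloneDeep(obj) — structuredClone (Node 17+, all modern browsers)
const clone = structuredClone({ a: { b: 1 } });

// _.pick(obj, keys) — a one-line helper from Object.fromEntries
const pick = (obj, keys) =>
  Object.fromEntries(keys.map((k) => [k, obj[k]]));
```

Each native form is a line or two, carries no transitive dependencies, and cannot be swapped out from under you by a compromised registry publish.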
Two tools are worth building into your workflow. Socket.dev analyses your dependency tree and surfaces supply chain risk signals: whether a package has unusual install scripts, new maintainers, or behavioural patterns that match known compromise techniques. It goes beyond known CVEs to flag things that look suspicious before they have been formally identified as malicious. Corridor takes a different angle — it baselines your codebase and flags risky patterns introduced over time, including dependency additions that expand your attack surface. Neither is a guarantee, but both change the default from “trust everything unless proven otherwise” to something closer to continuous scrutiny.
For dependencies you do keep, pin exact versions in production and treat upgrades as a deliberate decision rather than routine maintenance. This is the counter-intuitive one: the instinct — especially among developers who have absorbed the “always stay current” advice — is to update to the latest version as soon as it drops. But if you are stable, your dependencies are audited, and there is no concrete bug fix or feature you need, upgrading is adding risk with no return. A malicious version of a package is most dangerous in the window between publication and detection. Staying one or two versions behind a release that has no benefit for you is a reasonable position.
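Pinning is also mechanically checkable. Any version string carrying a ^ or ~ range will silently float to new releases, which is exactly the publication-to-detection window described above. A sketch of the audit, over an invented manifest:

```javascript
// A toy package.json "dependencies" object for illustration.
const manifest = {
  dependencies: {
    axios: "^1.6.0",     // caret range: floats to new minor releases
    litellm: "~1.2.3",   // tilde range: floats to new patch releases
    "left-pad": "1.3.0", // pinned: installs exactly this version
  },
};

// Flag anything that is not an exact x.y.z version.
const unpinned = Object.entries(manifest.dependencies)
  .filter(([, version]) => !/^\d+\.\d+\.\d+$/.test(version))
  .map(([name]) => name);

console.log(unpinned); // → ['axios', 'litellm']
```

In practice, npm install --save-exact (or save-exact=true in .npmrc) makes exact versions the default for every new install, so the pinning discipline does not depend on anyone remembering it.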
This does not mean ignoring security patches. It means distinguishing between a patch release that fixes a known vulnerability affecting your usage (update immediately) and a minor version bump with new features you do not need (let it sit, let others find the problems first). The “update to latest” reflex is another AI-era default: AI-generated code often pulls the latest version by default, and Dependabot opens PRs the moment a new version is available. Both nudge toward constant churn without asking whether the update is warranted.
Access controls on publish credentials are worth reviewing. Many compromises happen because the CI system that publishes packages has overly broad permissions. If your deployment pipeline has npm publish rights, a compromised CI token is a compromised package. Scoping publish credentials to a separate system with additional confirmation steps reduces the blast radius of a CI compromise.
For AI-specific libraries, the same rules apply with higher urgency. Review what you are importing. Understand what data each library touches. Check the maintenance status and security track record. The ecosystem is new enough that “this is the standard library everyone uses” is not sufficient due diligence.
The longer view
The Axios and Mercor incidents are not isolated. They are examples of a pattern that has been building for years and is accelerating. The software ecosystem has become deeply interdependent, and AI coding tools are adding a new layer to that dependency problem: an automated recommender that defaults to popular packages without considering whether the dependency is necessary.
The developers best positioned to resist this are the ones who have internalised that every import is a decision, not a convenience. That instinct takes time and experience to develop. It comes from encountering the consequences of the alternative.
For builders who are developing that instinct now, the most useful thing to ask when the AI recommends a package is: does the language already do this? Often, the answer is yes.