The Pragmatic Engineer surveyed 900+ engineers and leaders on how AI tools are actually affecting their work in 2026. The headline number (30% of respondents regularly hit token or usage limits) is less interesting than what’s driving it. The report identifies three distinct archetypes: Builders (quality-focused, architecture-conscious), Shippers (outcome-focused, feature-driven), and Coasters (less experienced, heavily reliant on AI). Builders and Coasters are the two groups most likely to hit limits, but for completely different reasons. Builders are running large-codebase refactors, migrations, and extended test-coverage improvements: high-value, high-token work. Coasters are generating volume without the judgment to produce usable output on the first pass, burning tokens on iterations an experienced engineer would avoid.

This distinction matters because most teams measure AI adoption by usage volume. High usage looks like success on a dashboard, but Coaster-pattern usage generates what the report calls “AI slop”: plausible-looking code that passes a quick review and fails later. The frustration for Builder-profile engineers is that reviewing this output has become its own hidden cost. “Typing is no longer a bottleneck” is how some Builders describe their current state, though several note that their review burden has grown in proportion. The productivity win for Builders is real: AI assistance with refactoring, migrations, and writing test coverage for large codebases is where the gains are clearest and most durable. The downside is that these engineers now spend more time cleaning up after colleagues who aren’t operating at the same level of judgment.

The cost curve is becoming visible. Companies are spending $100–200 per engineer per month on premium plans, with some on lower-budget plans at around $20/month. Around 15% of respondents raised cost concerns explicitly. The US–Europe split is sharp: US companies are willing to invest first and measure later, while European companies demand clearer ROI before expanding spend. That’s less a cultural preference than a different tolerance for ambiguous productivity metrics. The teams hitting limits are disproportionately using expensive models (Opus- or Sonnet-class) for tasks that don’t require them, which is partly an education problem and partly a missing routing layer: most teams have built no criteria for which model to use for which task.
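One way to make that missing routing layer concrete is a small rule-based router that sends each task to the cheapest model tier that plausibly handles it. This is purely an illustrative sketch: the task fields, tier names, and thresholds below are assumptions for the example, not anything specified in the survey.

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str                         # e.g. "boilerplate", "refactor", "migration", "test_gen"
    files_touched: int                # rough scope of the change
    needs_cross_file_reasoning: bool  # does the task span module boundaries?

# Hypothetical cost tiers, cheapest first; names are placeholders.
CHEAP, MID, PREMIUM = "small-model", "sonnet-class", "opus-class"

def route_model(task: Task) -> str:
    """Pick the cheapest tier that fits the task (illustrative heuristics only)."""
    # Large multi-file refactors and migrations are the high-value, high-token
    # work that can justify a premium model.
    if task.kind in {"refactor", "migration"} and task.files_touched > 20:
        return PREMIUM
    # Anything that reasons across files gets a mid-tier model.
    if task.needs_cross_file_reasoning:
        return MID
    # Boilerplate, single-file edits, and test scaffolding stay on the cheap tier.
    return CHEAP

# Example routing decisions:
route_model(Task("migration", 120, True))   # returns "opus-class"
route_model(Task("test_gen", 1, False))     # returns "small-model"
```

Even a crude first pass like this forces a team to write down which work actually needs an expensive model, which is the education problem and the tooling problem in one place.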

The Shipper archetype is the most straightforwardly positive story in the survey. Outcome-focused engineers using AI for rapid feature delivery are shipping faster, and they know it. The risk the report identifies is technical debt accumulation: Shippers who don’t have the structural concern of a Builder will reach for AI generation to clear a sprint without thinking about what they’re creating for the next quarter. This isn’t new (fast shippers have always created technical debt), but AI multiplies the rate.

The question for teams is which archetype their processes are designed for. Onboarding and training that treat AI assistants as general-purpose productivity tools will optimise for Shippers. Teams that want Builder-pattern outcomes need to invest in prompt engineering standards, model selection criteria, and review processes that account for the reality that a significant fraction of AI-assisted output requires more careful evaluation than human-written code of equivalent apparent complexity. The 30% hitting limits is a lagging indicator; the quality of what those engineers produce is the metric actually worth tracking.