Harper Reed, founder of 2389 Research, has a name for something a lot of teams are quietly experiencing: conviction collapse. The idea is that traditional product development relied on iteration cycles long enough to build genuine conviction through customer feedback. You shipped a version, lived with it for months, watched what happened, developed strong opinions, then shipped again. That process had friction, but the friction was doing work. When AI compresses a nine-month iteration to a weekend, that feedback loop breaks. You can rebuild everything before you have learned what the previous version was teaching you.

Tim O’Reilly’s piece at O’Reilly Radar profiles Reed’s response to this, which goes further than “move faster.” Reed argues that software is now more like clay or language than like engineering: a substrate that reflects the skill and context of whoever is shaping it, rather than a fixed deliverable with a spec. His team extracts “skills” from their own workflows — a Quaker business practice agent, a Review Squad that runs five personas across a sophistication spectrum from novice to expert — and treats those as the real product, not the interface sitting on top. One cofounder reverse-engineered a Chinese sauna app using Claude Code and improved it by applying a “yes, and” methodology borrowed from improv. These are not metaphors; they are the actual approach.
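For readers who want a concrete picture of what a persona-spectrum review skill might look like, here is a minimal sketch using the Anthropic Python SDK. The persona labels, prompts, and the `review_squad` helper are illustrative assumptions on my part; the article does not describe 2389 Research's actual implementation, only the idea of running several reviewer personas from novice to expert over the same artifact.

```python
# Minimal sketch of a "Review Squad"-style skill: several reviewer personas,
# spanning a sophistication spectrum, each critique the same artifact.
# Persona names, prompts, and this structure are assumptions for illustration,
# not 2389 Research's actual implementation.
from anthropic import Anthropic

MODEL = "claude-sonnet-4-5"  # placeholder; substitute whatever model you use

PERSONAS = {
    "first-time user": "You have never seen a tool like this. React honestly to anything confusing.",
    "casual user": "You use tools like this occasionally. Focus on friction in common tasks.",
    "power user": "You live in this tool daily. Flag anything that slows a practiced workflow.",
    "domain expert": "You know the problem domain deeply. Judge whether the product models it correctly.",
    "skeptical buyer": "You decide whether to pay for this. Say what would make you walk away.",
}

def review_squad(artifact: str) -> dict[str, str]:
    """Run each persona against the same artifact and collect their critiques."""
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    reviews = {}
    for name, framing in PERSONAS.items():
        response = client.messages.create(
            model=MODEL,
            max_tokens=1024,
            system=f"You are reviewing a product artifact as this persona: {name}. {framing}",
            messages=[{"role": "user", "content": artifact}],
        )
        reviews[name] = response.content[0].text
    return reviews

if __name__ == "__main__":
    spec = "Spec: a weekly digest email summarizing team activity."
    for persona, review in review_squad(spec).items():
        print(f"--- {persona} ---\n{review}\n")
```

The point of the pattern is less the code than the packaging: the personas and the loop are extracted from a workflow the team already runs by hand, then kept as a reusable skill rather than shipped as a user-facing feature.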

The deeper problem Reed is identifying is structural. VC investment logic, sprint planning, OKRs, product management frameworks — all of these assume that iteration takes long enough that you can build conviction through feedback before making the next big bet. When that assumption fails, the entire apparatus becomes noise. You end up making more decisions faster with less information per decision, which is not obviously better than fewer decisions with more information. Speed is only an advantage if you have good mechanisms for learning quickly.

What is underreported in most AI-and-product coverage is the asymmetry this creates. Teams and founders who already have deep domain expertise — who know exactly what problem they are solving and for whom — get enormous benefit from AI-speed iteration. They can test their hypotheses quickly and build conviction from evidence. Teams without that existing clarity get faster noise. Conviction collapse is mostly a problem for teams who were relying on slow iteration to figure out what they were actually building.

The concrete takeaway is not that you should slow down. It is that conviction-building needs to become a deliberate practice rather than a side effect of taking a long time. Tighter customer contact cadences, faster hypothesis articulation, more explicit frameworks for deciding when to stay with a version versus rebuilding — these are the muscles that matter now. Reed’s answer is to treat your own workflow patterns as a source of reusable skills rather than building features for hypothetical users. That is a reasonable start.