Introduction

Over the past year, I’ve written three posts that—at the time—felt consistent.

First, I described four categories of AI solutions, arguing that complexity determines where AI works. Then I introduced the trade-off between speed and precision: fast systems are imprecise, and precise systems are slow.

Both were true at the time.

Lastly, I introduced the Wiggum Loop, which argues that institutional memory matters far less than we assume.

The original model

The underlying assumption in the first two posts was simple: AI is most effective when problems are well-bounded, precision requirements are low, and iteration costs are small. It struggles when precision is critical, domain knowledge is deep, and errors are expensive. In other words, AI accelerates simple work, while humans remain essential for complex work.

The crack in the model

The Wiggum Loop challenges that assumption. If solutions can be reached through repeated iteration rather than upfront understanding, then precision is no longer a prerequisite—it becomes something you converge on. This changes the equation. Complexity no longer blocks AI in the same way; it simply increases the number of iterations required.

From capability to convergence

The original model was about capability—what AI can do well. The emerging model is about convergence—how quickly a system can explore the solution space and arrive at something that works. Once iteration is cheap and automated, the constraint shifts. It is no longer about whether we can solve a problem, but whether we can recognize when it has been solved.

Reinterpreting the three posts

Seen together, the three posts describe a transition: from a model organized around capability to one organized around convergence. The model does not disappear; it shifts.

The new boundary

The real boundary is no longer complexity or precision. It is whether a problem can be expressed in a way that supports iteration. That requires a clearly defined outcome, explicit constraints, and a way to evaluate results. If those exist, iteration can often replace deep understanding; if they do not, it cannot.

This does not remove expertise—it relocates it. The hard part is no longer solving the problem directly, but defining what success looks like, encoding the right constraints, and deciding how results are evaluated.
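That boundary can be made concrete. A problem supports iteration when you can write down three things: a generator of candidates, explicit machine-checkable constraints, and an evaluation function. A minimal sketch of that framing in Python (all names here are hypothetical, chosen for illustration, not taken from any particular library):

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Problem:
    """A problem expressed so that iteration can replace upfront understanding."""
    propose: Callable[[Optional[str]], str]    # generate the next candidate, given feedback
    constraints: list[Callable[[str], bool]]   # explicit, checkable constraints
    score: Callable[[str], float]              # evaluate a candidate; higher is better


def converge(problem: Problem, threshold: float, max_iters: int) -> Optional[str]:
    """Iterate until a candidate passes every constraint and scores above threshold."""
    feedback = None
    for _ in range(max_iters):
        candidate = problem.propose(feedback)
        if not all(check(candidate) for check in problem.constraints):
            feedback = "constraint violated"
            continue
        if problem.score(candidate) >= threshold:
            return candidate  # success is recognized, not derived
        feedback = "score too low"
    return None  # problem was not expressed well enough to converge in budget
```

The point of the sketch is where the human work sits: not inside the loop, but in writing `propose`, `constraints`, and `score`, and in choosing `threshold`. If any of those cannot be written down, iteration cannot substitute for understanding.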

What this means for organizations

This is not just a technical shift—it changes how organizations create value. Historically, value came from expertise, experience, and accumulated knowledge. Increasingly, it comes from defining problems clearly, encoding constraints explicitly, and running and governing iterative systems. The center of gravity moves.

The uncomfortable alignment

Taken together, the three posts lead to a slightly uncomfortable conclusion. Much of what we treat as essential organizational knowledge is actually a set of context-bound constraints: decisions made under conditions that no longer apply.

If iteration can rediscover solutions faster than we can recall them, then memory becomes less valuable than exploration. That has consequences. Expertise shifts from knowing answers to defining problems and constraints. Institutional memory becomes less of an authority and more of a hypothesis archive—useful, but not decisive. Roles built around recall and experience start to erode, while roles focused on framing, validation, and governance become more central.

This does not remove humans, but it changes what humans are for—from remembering why things failed to defining what success looks like.

Where this leaves us

The original model still holds, but it is no longer the full picture. AI is not just a tool for solving known problems faster—it is becoming a system for exploring unknown solutions through iteration.

There is a subtle tension here. This trilogy itself depends on cumulative understanding, where each post builds on the last—a small act of institutional memory arguing against institutional memory. Exploration does not replace memory entirely; it changes what kind of memory matters. Constraint-memory becomes less valuable, while model-building and interpretation become more important.

Final thought

We started by asking where AI works. We then asked how precise it needs to be. The emerging question is different: how fast can we iterate—and how well can we recognize success?

That is the thread connecting all three posts, and it is where the model begins to break.