Introduction

In the early 20th century, factories did not gain much by simply replacing steam engines with electric motors. The real gains came later, when they reorganized how work was done—redesigning layouts, workflows, and roles to take advantage of distributed power [9]. AI is following the same pattern. The technology itself is not the differentiator. How it is distributed inside the organization is.

Across domains, the pattern is already visible.

In scientific research, systems like AlphaFold and other AI models in biology and chemistry are shifting the frontier of what good looks like. Researchers who integrate these tools into their workflows move faster, explore more hypotheses, and expand output. Others are not just slower—they are operating below a moving baseline.

In software engineering, the dynamic is different but related. AI compresses the time required to produce code, but also compresses the time required to produce failure. Teams that combine strong engineering practices with AI accelerate safely. Teams that rely on generated output without discipline introduce risk at speed.

In both cases, the effect is not uniform improvement. It is divergence.

The hidden failure mode: uneven distribution

What is emerging is not a lack of AI capability, but an uneven distribution of it.

Some individuals and teams gain early access, experiment, and build fluency through use. Others wait for guidance, are constrained by governance, or never fully integrate AI into how they work. Over time, this creates a gap in capability that compounds.

This is where the A and B teams begin to appear—not as a deliberate strategy, but as a consequence of how access, learning, and incentives are structured.

AI literacy beats AI elites

Organizations that scale AI successfully distribute capability rather than concentrate it [1][2].

When AI is centralized, teams depend on specialists. Demand exceeds capacity, and most of the organization remains passive. When capability is distributed, teams solve problems locally, and learning happens through application rather than instruction.

McKinsey consistently finds that only a minority of companies capture meaningful value from AI, and those that do embed it across functions rather than isolating it [1]. Experimental evidence reinforces that productivity gains depend on how individuals integrate AI into their work, not just whether they have access to it [11][12].

The constraint is not the model. It is whether people know how to use it effectively in context.

The Center of Excellence trap

The default enterprise response is to centralize AI into a Center of Excellence. This improves oversight and consistency, but it also creates a structural bottleneck. Every team now depends on a central unit for access, prioritization, and delivery, which does not scale with demand.

More importantly, it concentrates knowledge. Patterns, practices, and hard-won lessons accumulate inside the CoE rather than flowing through the organization. Capability becomes something you request, not something you build.

This is why many organizations are exploring federated and embedded operating models [3][4], though the transition is often incomplete and uneven. The goal is not just to distribute execution—it is to distribute capability.

This is where platform engineering provides a better mental model. Instead of acting as a delivery function, the central team builds golden paths: paved, opinionated ways of working that make the right thing the easy thing. Tooling, templates, guardrails, and reusable components are exposed directly to teams, enabling them to move independently while staying within defined boundaries.
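As an illustrative sketch only (the module, model names, and redaction rule are hypothetical, not drawn from any specific platform), a central team might publish an approved client wrapper so that every team gets guardrails by default rather than by request:

```python
# Hypothetical internal "golden path" module a platform team might publish.
# Teams import this instead of calling a model provider directly, so policy
# decisions (model choice, PII redaction) are made once, centrally.
import re
from dataclasses import dataclass

APPROVED_MODELS = {"general": "model-a", "code": "model-b"}  # placeholder names
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

@dataclass
class Completion:
    model: str
    prompt: str

def redact(text: str) -> str:
    """Guardrail baked into the paved path: strip obvious PII before any call."""
    return EMAIL_RE.sub("[redacted-email]", text)

def complete(prompt: str, task: str = "general") -> Completion:
    """The easy, approved way to request a completion: redaction and model
    selection are included, so the right thing is also the default thing."""
    if task not in APPROVED_MODELS:
        raise ValueError(f"unknown task type: {task}")
    return Completion(model=APPROVED_MODELS[task], prompt=redact(prompt))
```

The point of the sketch is the shape, not the details: teams call `complete()` and move independently, while the platform team evolves the guardrails in one place.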

The difference is fundamental. A CoE pulls work toward itself. A platform pushes capability outward. One creates queues. The other creates flow.

If AI is treated as a centralized service, it will scale linearly at best. If it is treated as a platform, it can scale with the organization.

AI creates uneven gains, not uniform uplift

Research consistently shows average productivity gains in the range of 10–20%, combined with substantial variation across users and tasks [5][10][12]. The variation is the important part.

In some contexts, less experienced workers benefit significantly because AI transfers best practices and reduces barriers to entry. In others, highly skilled workers gain more when operating within the effective frontier of the technology. Outcomes depend on skill, task, and how well AI is integrated into the workflow.

The result is not a level playing field, but a changing gradient. People and teams that adapt effectively accelerate. Those that do not adapt fall behind, even when they have access to the same tools.

Governance is becoming the bottleneck

Organizations respond to AI risk by adding control: approvals, restrictions, and policy layers. Some of this is necessary, but applied uniformly it introduces systemic friction.

Industry and institutional research consistently identify organizational barriers—not technical limitations—as the primary constraint on AI value creation [1][3]. The issue is less about building capability and more about enabling its use.

A more effective approach is proportional governance. Low-risk, individual use cases require minimal control. Team-level workflows benefit from lightweight oversight. High-impact, enterprise-critical systems require full governance. This aligns with risk-based approaches such as those from the OECD [8].

Without this proportionality, governance becomes a bottleneck rather than a safeguard.
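The proportional model above can be sketched as a simple policy table. The tier names and control lists here are hypothetical, chosen only to show controls scaling with impact rather than the heaviest set applying everywhere:

```python
# Illustrative sketch of proportional governance: controls scale with risk tier.
# Tier names and required controls are hypothetical, for illustration only.
from enum import Enum

class Risk(Enum):
    INDIVIDUAL = "low-risk individual use"
    TEAM = "team-level workflow"
    ENTERPRISE = "enterprise-critical system"

# Each tier adds controls on top of the previous one.
CONTROLS = {
    Risk.INDIVIDUAL: ["acceptable-use policy"],
    Risk.TEAM: ["acceptable-use policy", "peer review", "usage logging"],
    Risk.ENTERPRISE: ["acceptable-use policy", "peer review", "usage logging",
                      "formal risk assessment", "ongoing monitoring"],
}

def required_controls(risk: Risk) -> list[str]:
    """Return the controls for a tier: oversight grows with potential impact."""
    return CONTROLS[risk]
```

Encoding the tiers explicitly makes the proportionality auditable: low-risk use clears one lightweight check, while enterprise-critical systems pick up the full set.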

How the divide compounds

The gap between A and B teams develops through small, compounding differences in access, learning environments, and culture.

Some teams have direct access to tools and are encouraged to experiment. Others operate through restricted interfaces and formal processes. Some learn through iteration; others wait for approval.

Over time, these differences accumulate. One part of the organization develops new capabilities and ways of working, while another continues with established practices. Eventually, they are no longer operating at the same level.

Distribution requires giving up some control

Avoiding this outcome requires accepting a degree of decentralization. Teams need the ability to experiment locally, and organizations need to tolerate variation in tools and approaches.

This introduces a temporary phase where things feel less controlled and less consistent. That phase is where learning happens. Eliminating it too early suppresses adoption and reinforces the divide.

AI as infrastructure

If AI remains confined to specialists, organizations create internal inequality and limit their ability to adapt. If it becomes embedded in everyday work—more like electricity than expertise—it enables continuous, distributed improvement.

The objective is not to build a stronger AI team, but to remove the distinction altogether. Because the organizations that benefit most from AI will not be those with the most advanced models, but those where its use is widespread, routine, and integrated into how work gets done.

References

[1] McKinsey & Company – The State of AI
https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[2] Boston Consulting Group – Artificial Intelligence Capabilities
https://www.bcg.com/capabilities/artificial-intelligence

[3] Deloitte – State of AI in the Enterprise
https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/state-of-ai-and-intelligent-automation-in-business-survey.html

[4] Gartner – How to Scale AI in the Enterprise
https://www.gartner.com/en/articles/how-to-scale-ai-in-the-enterprise

[5] National Bureau of Economic Research – Generative AI at Work
https://www.nber.org/papers/w31161

[8] OECD – AI Principles
https://oecd.ai/en/ai-principles

[9] Paul A. David – The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox
https://doi.org/10.3386/w5099

[10] Quarterly Journal of Economics – Generative AI at Work
https://academic.oup.com/qje/article/140/2/889/7990658

[11] MIT Sloan – How Generative AI Can Boost Highly Skilled Workers’ Productivity
https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-can-boost-highly-skilled-workers-productivity

[12] MIT Economics – Experimental Evidence on Generative AI
https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1.pdf