Peter Birkholm-Buch

Stuff about Software Engineering

The Future of Consulting: How Value Delivery Models Drive Better Client Outcomes

Introduction

The consulting industry, particularly within software engineering, is shifting away from traditional hourly billing towards value-driven and outcome-focused engagements. This evolution aligns with broader trends identified by industry analysts like Gartner and McKinsey, emphasizing outcomes and measurable impacts over time-based compensation (Gartner, Hype Cycle for Consulting Services, 2023; McKinsey, The State of Organizations 2023).

Shift from Hourly Rates to Value Delivery

Traditional hourly billing often incentivizes cost reduction at the expense of quality, creating tension between minimizing expenses and achieving meaningful results. The emerging approach—value-based consulting—aligns compensation directly with specified outcomes or deliverables. For instance, many consulting firms now employ fixed-price projects or performance-based contracts that clearly link payment to the achievement of specific business results, improving alignment and encouraging deeper collaboration between clients and consultants.

According to McKinsey’s report ‘The State of Organizations 2023’, approximately 40% of consulting engagements are shifting towards value-based models, highlighting the industry’s evolution (https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-state-of-organizations-2023).

Leveraging Scrum and Agile Methodologies

Agile methodologies, especially Scrum, facilitate this shift by naturally aligning consulting work with measurable outputs and iterative improvements. In Scrum, value is measured through clearly defined user stories, regular sprint reviews, and tangible deliverables evaluated continuously by stakeholders. These iterative deliveries provide clear visibility into incremental progress, effectively replacing hourly tracking with meaningful metrics of success.

Challenges in Adopting Value-Based Models

Transitioning to value-based consulting is not without its challenges. Firms may encounter difficulties in accurately defining and measuring value upfront, aligning expectations, and managing the inherent risks of outcome-based agreements. Overcoming these challenges typically requires transparent communication, clear contract terms, and robust stakeholder engagement from project inception.

AI and Human Oversight

While there is significant enthusiasm and concern surrounding AI, its role remains primarily augmentative rather than fully autonomous, particularly in high-stakes decision-making. Human oversight ensures AI-driven solutions remain precise and contextually appropriate, directly supporting high-quality, outcome-focused consulting. This perspective aligns with insights discussed in Responsible AI: Enhance Human Judgment, Don’t Replace It.

Balancing Speed and Precision

AI offers substantial gains in speed but often involves trade-offs in precision. Certain fields, such as financial services or critical infrastructure, require exactness, making human judgment essential in balancing these considerations. This topic, explored in detail in Speed vs. Precision in AI Development, highlights how value-driven consulting must thoughtfully integrate AI to enhance outcomes without sacrificing accuracy.

Conclusion

The shift to outcome-focused consulting models, supported by agile frameworks and thoughtful AI integration, represents a significant evolution in the industry. By prioritizing measurable value and clearly defined outcomes over hourly rates, consulting engagements become more impactful and sustainable.

AI for Data (Not Data and AI)

Cold Open

Most companies get it backwards.

They say “Data and AI,” as if AI is dessert—something you get to enjoy only after you’ve finished your vegetables. And by vegetables, they mean years of data modeling, integration work, and master‑data management. AI ends up bolted onto the side of a data office that’s already overwhelmed.

That mindset isn’t just outdated—it’s actively getting in the way.

It’s time to flip the script. It’s not Data and AI. It’s AI for Data.

AI as a Data Appendage: The Legacy View

In most org charts, AI still reports to the head of data. That tells you everything: AI is perceived as a tool to be used on top of clean data. The assumption is that AI becomes useful only after you’ve reached some mythical level of data maturity.

So what happens? You wait. You delay. You burn millions building taxonomies and canonical models that never quite deliver. When AI finally shows up, it generates dashboards or slide‑deck summaries. Waste of potential.

What If AI Is Your Integration Layer?

Here’s the mental flip: AI isn’t just a consumer of data—it’s a synthesizer. A translator. An integrator – an Enabler!

Instead of cleaning, mapping, and modeling everything up front, what if you simply exposed your data—as is—and let the AI figure it out?

That’s not fantasy. Today, you can feed an AI messy order tables, half‑finished invoice exports, inconsistent SKU lists—and it still works out the joins. Sales and finance data follow patterns the model has seen a million times.

The magic isn’t that AI understands perfect data. The magic is that it doesn’t need to.
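
As a minimal sketch of what this can look like in practice, the following asks a model to propose join keys across two inconsistently named tables. The `ask_llm` helper and the sample data are invented for illustration; wire the helper to whatever model you actually use.

```python
import pandas as pd

def ask_llm(prompt: str) -> str:
    # Hypothetical helper: stands in for a real model call.
    # Canned response so the sketch runs stand-alone.
    return "Join on orders.cust == invoices.CustomerCode."

# Messy, inconsistently named tables, exposed as-is with no upfront modeling.
orders = pd.DataFrame(
    {"ord_no": [1001, 1002], "cust": ["ACME-01", "BETA-7"], "amt": [250.0, 99.5]}
)
invoices = pd.DataFrame(
    {"InvoiceId": ["INV-1001", "INV-1002"], "CustomerCode": ["ACME-01", "BETA-7"]}
)

prompt = (
    "Two tables with inconsistent naming follow. Propose the join keys.\n\n"
    f"orders:\n{orders.to_csv(index=False)}\n"
    f"invoices:\n{invoices.to_csv(index=False)}"
)
print(ask_llm(prompt))  # a capable model infers cust == CustomerCode
```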

MCP: OData for Agents

Remember OData? It promised introspectable, queryable APIs—you could ask the endpoint what it supported. Now meet MCP (Model Context Protocol). Think OData, but for AI agents.

With MCP, an agent can introspect a tool, learn what actions exist, what inputs it needs, what outputs to expect. No glue code. No brittle integrations. You expose a capability, and the AI takes it from there.

OData made APIs discoverable. MCP makes tools discoverable to AIs.

Expose your data with just enough structure, and let the agent reason. No mapping tables. No MDM. Just AI doing what it’s good at: figuring things out.
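
From the agent’s side, discovery and invocation collapse into a few lines. Here is a sketch, assuming the reference MCP Python SDK and a hypothetical server (`orders_server.py`, with an invented tool name `query_orders`) exposing an orders dataset:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Hypothetical MCP server exposing order data; command and args are invented.
    server = StdioServerParameters(command="python", args=["orders_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Introspection: ask the server what it can do. No glue code.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, tool.description, tool.inputSchema)
            # Invoke a discovered capability by name.
            result = await session.call_tool("query_orders", {"customer": "ACME-01"})
            print(result)

asyncio.run(main())
```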

Why It Works in Science—And Why It’ll Work in Business

Need proof? Look at biology.

Scientific data is built on shared, Latin‑based taxonomies. Tools like Claude or ChatGPT navigate these datasets without manual schema work. At Carlsberg we’ve shown an AI connecting yeast strains ➜ genes ➜ flavor profiles in minutes.

Business data is easier. You don’t need to teach AI what an invoice is. Or a GL account. These concepts are textbook. Give the AI access and it infers relationships. If it can handle yeast genomics, it can handle your finance tables.

Stop treating AI like glass. It’s ready.

The Dream: MCP‑Compliant OData Servers

Imagine every system—ERP, CRM, LIMS, SharePoint—exposing itself via an AI‑readable surface. No ETLs, no integration middleware, no months of project time.

Combine OData’s self‑describing endpoints with MCP’s agent capabilities. You don’t write connectors. You don’t centralize everything first. The AI layer becomes the system‑of‑systems—a perpetual integrator, analyst, translator.

Integration disappears. Master data becomes a footnote.

When Do You Still Need Clean Data?

Let’s address the elephant in the room: there are still scenarios where data quality matters deeply.

Regulatory reporting. Financial reconciliation. Mission-critical operations where a mistake could be costly. In these domains, AI is a complement to—not a replacement for—rigorous data governance.

But here’s the key insight: you can pursue both paths simultaneously. Critical systems maintain their rigor, while the vast majority of your data landscape becomes accessible through AI-powered approaches.

AI for Data: The Flip That Changes Everything

You don’t need perfect data to start using AI. That’s Data and AI thinking.

AI for Data starts with intelligence and lets structure emerge. Let your AI discover, join, and reason across your real‑world mess—not just your sanitized warehouse.

It’s a shift from enforcing models to exposing capabilities. From building integrations to unleashing agents. From waiting to acting while you learn.

If your organization is still waiting to “get the data right,” here’s your wake‑up call: you’re waiting for something AI no longer needs.

AI is ready. Your data is ready enough.

The only question left: Are you ready to flip the model?

Responsible AI: Enhance Human Judgment, Don’t Replace It

Artificial Intelligence (AI) is transforming businesses, streamlining processes, and providing insights previously unattainable at scale. However, it is crucial to keep in mind a fundamental principle best summarized by computing pioneer Grace Hopper:

“A computer can never be held accountable; therefore, a computer must never make a management decision.”

This simple yet profound guideline underscores the importance of human oversight in business-critical decision-making, a topic I’ve discussed previously in my posts about the Four Categories of AI Solutions and the necessity of balancing Speed vs Precision in AI Development.

Enhancing Human Decision-Making

AI should serve as an enabler rather than a disruptor. Its role is to provide support, suggestions, and insights, but never to autonomously make significant business decisions, especially those affecting financial outcomes, security protocols, or customer trust. Human oversight ensures accountability and ethical responsibility remain clear.

Security and Resilience

AI-powered systems, such as chatbots and customer interfaces, must be built with resilience against adversarial manipulation. Effective safeguards—including strict input validation and clearly defined output limitations—are critical. Human oversight must always be available as a fallback mechanism when the system encounters unforeseen scenarios.
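
A minimal sketch of such safeguards, with invented patterns and action names (a production system needs far more than this):

```python
import re

MAX_INPUT_CHARS = 2000
BLOCKED_PATTERNS = [re.compile(r"ignore (all |previous )?instructions", re.I)]
ALLOWED_ACTIONS = {"check_order_status", "open_support_ticket"}

def validate_input(message: str) -> str:
    # Strict input validation: bound size and screen obvious injection attempts.
    if len(message) > MAX_INPUT_CHARS:
        raise ValueError("input too long")
    if any(p.search(message) for p in BLOCKED_PATTERNS):
        raise ValueError("suspicious input; route to a human")
    return message

def constrain_output(requested_action: str) -> str:
    # Clearly defined output limitations: the model may only trigger
    # pre-approved actions; everything else falls back to a person.
    return requested_action if requested_action in ALLOWED_ACTIONS else "escalate_to_human"

print(constrain_output("delete_all_customers"))  # -> escalate_to_human
```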

Balancing Core and Strategic AI Solutions

In earlier posts, I’ve outlined four categories of AI solutions ranging from simple integrations to complex, custom-built innovations. Core AI solutions typically leverage standard platforms with inherent governance frameworks, such as Microsoft’s suite of tools, making them relatively low-risk. Conversely, strategic AI solutions involve custom-built systems that provide significant business value but inherently carry higher risks, requiring stringent oversight and comprehensive governance.

Lean and Practical Governance

AI governance frameworks must scale appropriately with the level of risk involved. It’s important to avoid creating bureaucratic overhead for low-risk applications, while ensuring that more sensitive, strategic AI applications undergo thorough evaluations and incorporate stringent human oversight.

Humans-in-the-Loop

A “human-in-the-loop” approach is essential for managing AI-driven decisions that significantly impact financial transactions, security measures, or customer trust. While AI may suggest or recommend actions, final approval and accountability should always rest with human operators. Additionally, any autonomous action should be easily reversible by a human.
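
A minimal sketch of that pattern (all names invented): the AI proposes, a human approves, and anything a human could not reverse is escalated rather than automated:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    reversible: bool

def human_in_the_loop(action: ProposedAction) -> None:
    # The AI only proposes; a named person approves and owns the outcome.
    if not action.reversible:
        print(f"ESCALATE: '{action.description}' cannot be undone by a human.")
        return
    if input(f"Approve '{action.description}'? [y/N] ").strip().lower() == "y":
        print(f"Executing: {action.description}")
    else:
        print("Rejected; no action taken.")

human_in_the_loop(ProposedAction("Refund order 1001 (DKK 250)", reversible=True))
```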

Final Thoughts

AI offers tremendous potential for innovation and operational excellence. However, embracing AI responsibly means recognizing its limitations. AI should support and empower human decision-making—not replace it. By maintaining human oversight and clear accountability, we can leverage AI effectively while safeguarding against risks.

Ultimately, AI assists, but humans decide.

Accelerating Research at Carlsberg Research Laboratory using Scientific Computing

Introduction

Scientific discovery is no longer just about what happens in the lab—it’s about how we enable research through computing, automation, and AI. At Carlsberg Research Laboratory (CRL), our Accelerate Research initiative is designed to remove bottlenecks and drive breakthroughs by embedding cutting-edge technology into every step of the scientific process.

The Five Core Principles of Acceleration

To ensure our researchers can spend more time on discovery, we are focusing on:

  • Digitizing the Laboratory – Moving beyond manual processes to automated, IoT-enabled research environments.
  • Data Platform – Creating scalable, accessible, and AI-ready data infrastructure that eliminates data silos.
  • Reusable Workflows – Standardizing and automating research pipelines to improve efficiency and reproducibility.
  • High-Performance Computing (HPC) – Powering complex simulations and large-scale data analysis. We are also preparing for the future of quantum computing, which promises to transform how we model molecular behavior and simulate complex biochemical systems at unprecedented speed and scale.
  • Artificial Intelligence – Enhancing data analysis, predictions, and research automation beyond just generative AI.

The Expected Impact

By modernizing our approach, we aim to:

  • Reduce research setup time by up to 70%
  • Accelerate experiment iteration by 3x
  • Improve cross-team collaboration efficiency by 5x
  • Unlock deeper insights through AI-driven analysis and automation

We’re not just improving research at CRL; we’re redefining how scientific computing fuels innovation. The future of research is fast, automated, and AI-driven.

The Half-Life of Skills: Why 100% Utilization Can Destroy Your Future

At a recent SXSW session, Ian Beacraft, CEO of Signal and Cipher, presented a compelling vision of the future workplace—one that demands continuous learning and adaptability. Central to his message was the idea of the rapidly shrinking half-life of skills. Today, technical skills are estimated to last only 2.5 years before becoming outdated, a stark decrease compared to the past.

The diagram visualizes the shrinking half-life of skills over time, highlighting how rapidly technical competencies become outdated. It contrasts the decreasing lifespan of relevant skills (currently around 2.5 years) with the growing need for continuous, agile learning methods. The visual emphasizes the risk of traditional, slow-paced training methods becoming obsolete and illustrates the necessity for companies to adopt flexible, micro-learning approaches to remain competitive and innovative in the modern workplace.

This concept aligns closely with my previous insights into organizational efficiency and innovation, particularly around the dangers of running teams at 100% utilization. Classical queue theory demonstrates that when utilization approaches full capacity, wait times and bottlenecks increase dramatically. For knowledge work, this manifests as a loss of innovation, adaptability, and essential skills development.

In an environment of near-constant technological evolution, companies that fill every available hour with immediate productivity leave no room for the critical learning and upskilling necessary to stay competitive. The future belongs to organizations that deliberately balance productivity with learning, recognizing that skill development isn’t an extracurricular activity—it’s foundational to future success.

As skills continue to expire faster than ever, running at full utilization isn’t just inefficient; it’s a direct threat to your company’s relevance. To thrive in this new reality, the approach to learning and up-skilling within companies must fundamentally change. Traditional courses with formal diplomas and structured online training from established vendors will increasingly struggle to keep pace with the rapid evolution of skills. Instead, bite-sized, just-in-time learning content available through the web, YouTube, and other micro-learning platforms will become essential.

Ian Beacraft highlighted a striking prediction: the cost of training and upskilling employees will soon eclipse the cost of the technology itself. If you have SMEs in your company, this is something you have to think hard about solving, so that you can manage these costs effectively and maintain a competitive edge in an era where skill requirements evolve rapidly.

AI Without Borders: Why Accessibility Will Determine Your Organization’s Future

Introduction

Gartner recently announced that AI has moved past the peak of inflated expectations in its hype cycle, signaling a crucial transition from speculation to real-world impact. AI now stands at this inflection point—its potential undeniable, but its future hinges on whether it becomes an accessible, enabling force or remains restricted by excessive governance.

The Risk of Creating A and B Teams in AI

Recent developments, such as OpenAI’s foundational grants to universities, signal an emerging divide between those with AI access and those without. These grants are not just an academic initiative—they will accelerate disparities in AI capabilities, favoring institutions that can freely explore and integrate AI into research and innovation. The same divide is already forming in the corporate world.

Software engineers today are increasingly evaluating companies based on their AI adoption. When candidates ask in interviews whether an organization provides tools like GitHub Copilot, they are not just inquiring about productivity enhancements—they are assessing whether the company is on the cutting edge of AI adoption. Organizations that restrict AI access risk falling behind, unintentionally categorizing themselves into the “B Team,” making it harder to attract top talent and compete effectively.

Lessons from Past Industrial Revolutions

History provides clear lessons about the importance of accessibility in technological revolutions. Electricity, for example, was initially limited to specific industrial applications before it became a utility that fueled industries, powered homes, and transformed daily life. Similarly, computing evolved from expensive mainframes reserved for large enterprises to personal computers and now cloud computing, making advanced technology available to anyone with an internet connection.

AI should follow the same path.

However, excessive corporate governance could hinder its progress, while governmental governance remains essential to ensure AI is developed and used safely. Just as electricity transformed from an industrial novelty to the foundation of modern society, AI must follow a similar democratization path. Imagine if we had limited electricity to only certified engineers or specific departments—we would have stifled the innovation that brought us everything from household appliances to modern healthcare. Similarly, restricting AI access today could prevent us from discovering its most transformative applications tomorrow.

Governance Should Enable, Not Block

The key is not to abandon governance but to ensure it enables rather than blocks innovation. AI governance should focus on how AI is used, not who gets access to it. Restricting AI tools today is akin to limiting electricity to specialists a century ago—an approach that would have crippled progress.

The most successful AI implementations are those that integrate seamlessly into existing workflows. Tools like GitHub Copilot and Microsoft Copilot demonstrate how AI can enhance productivity when it is embedded within platforms that employees already use. The key is to govern AI responsibly without creating unnecessary friction that prevents widespread adoption.

The Competitive Divide is Already Here

The AI accessibility gap is no longer theoretical—it is already shaping the competitive landscape. Universities that receive OpenAI’s foundational grants will advance more rapidly than those without access. Companies that fully integrate AI into their daily operations will not only boost innovation but also become magnets for top talent. The question organizations must ask themselves is clear: Do we embrace AI as an enabler, or do we risk falling behind?

As history has shown, technology is most transformative when it is available to all. AI should be no different. The organizations that will thrive in the coming decade will be those that balance responsible governance with widespread AI accessibility—empowering their people to innovate rather than restricting them with excessive controls. The question isn’t whether you’ll adopt AI, but whether you’ll do it in a way that creates competitive advantage or competitive disadvantage.

Speed vs. Precision in AI Development

In my post “Four Categories of AI Solutions”, I categorized AI solutions by their level of innovation and complexity. Building on that framework, I’ve been thinking about how AI-assisted programming tools fit into this model—particularly the trade-off between speed and precision in software development.

Mapping Speed and Precision

The diagram below illustrates how tasks can be plotted along axes of speed and precision. As you move towards higher precision (e.g., creating backend services or system programming), speed naturally decreases, as does the immediate value AI tools provide. Conversely, low-precision tasks—like generating boilerplate code for frontend applications—enable high speed and provide quick wins.

Dave Farley, in a recent video interview, seems to align with this observation. The speakers noted that AI tools excel at accelerating well-defined, repetitive tasks, like building a simple mobile app or prototyping a frontend interface. These tasks fall into Category 1 of my model: low-complexity, high-speed solutions.

However, the further you move into Category 3+4—solutions requiring precision and contextual understanding—the less impactful these tools become. For instance, when my team uses GitHub Copilot for backend development, only about 20% of its suggested code is accepted. The rest lacks the precision or nuanced understanding needed for high-stakes backend systems.

The Speed-Precision Trade-Off

The interview also highlighted a critical concern: AI’s emphasis on generating code quickly can erode the incremental approach central to traditional programming. In precision-driven tasks, small, deliberate steps are essential to ensure reliability and minimize risk. By generating large amounts of code at once, AI tools risk losing this careful craftsmanship.

Yet this trade-off isn’t a flaw—it’s a characteristic of how these tools are designed. AI’s value lies in accelerating the routine and freeing up developers to focus on higher-order problems. For precision tasks, AI becomes an assistant rather than a solution, helping analyze systems, identify bugs, or suggest improvements.

The Four Categories Revisited

This balancing act between speed and precision ties directly into the “Four Categories of AI Solutions”:

  1. Category 1: High-speed, low-precision tasks like prototyping and boilerplate generation. AI tools thrive here.
  2. Category 2: Moderately complex applications, where AI can augment human effort but requires careful validation.
  3. Category 3: High-precision, low-speed systems programming or backend development. AI contributes less here, serving more as an analysis tool.
  4. Category 4: Novel, cutting-edge AI applications requiring custom-built solutions.

As we develop software with AI, understanding where these tools provide the most value—and where their limitations begin—is critical. For now, AI tools may help us write code faster, but when it comes to precision, the human touch remains irreplaceable.

Gaia: How We Built a Platform to Transform Infrastructure Creation

The Evolution of Infrastructure Management

In Software Engineering in Growth Products at Carlsberg, our journey towards modern infrastructure management began with a familiar challenge: as the number of development teams grew, the traditional approach of manually provisioning and managing infrastructure became a significant bottleneck. The DevOps team, tasked with building and maintaining infrastructure for multiple development teams, found themselves overwhelmed by the increasing demand for infrastructure resources.

Each new project required careful setup of networking, permissions, and cloud resources, all of which had to be manually configured by a small DevOps team. As development velocity increased and more teams came onboard, this model proved unsustainable. New projects faced delays waiting for infrastructure provisioning, while the DevOps team struggled to keep pace with mounting requests.

Reimagining Infrastructure Creation

The solution emerged when the DevOps team envisioned a different approach: what if developers could create their own infrastructure while adhering to organizational standards? The challenge was to enable self-service infrastructure without requiring developers to understand the complexities of building secure, scalable, and compliant cloud resources.

This vision led to the creation of Gaia, a platform that automates infrastructure creation while maintaining strict security and compliance standards. Built by the DevOps team, Gaia represents a fundamental shift in how infrastructure is provisioned and managed at Carlsberg.

The Platform Engineering Approach

Infrastructure as Code Evolution

Gaia elevates infrastructure creation beyond basic scripting by providing a comprehensive platform engineering solution. The platform utilizes Terraform for infrastructure provisioning but abstracts its complexity through a higher-level interface. This approach allows developers to focus on their applications while ensuring infrastructure deployments follow organizational best practices.

Standardized Module Library

The platform provides an extensive library of pre-built, production-ready modules covering the complete spectrum of AWS infrastructure components:

  • Compute Services: EC2, ECS, EKS, Lambda
  • Data Stores: Aurora, RDS, DynamoDB, DocumentDB, Redis, Elasticsearch
  • Networking: VPC, Load Balancers, API Gateway, Route53
  • Security: IAM, ACM, Secrets Manager
  • Messaging: SQS, Kafka
  • Monitoring: CloudWatch, Managed Grafana

Each module encapsulates best practices, security controls, and compliance requirements, ensuring consistent infrastructure deployment across the organization.

Developer Experience

Simplified Workflow

Gaia integrates seamlessly with existing development workflows through GitHub. Developers request infrastructure by:

  1. Creating a configuration file with simple key-value pairs
  2. Submitting a pull request
  3. Awaiting automated validation and deployment

A typical request might describe a serverless function with a storage layer and an API Gateway in just a handful of key-value pairs; the API Gateway configuration in particular is very simple.

Once the developer is ready to create the infrastructure, a Pull Request is created, a “code owner” (in this case a Platform Engineer team member) approves the request, and the infrastructure is deployed automatically.

Automated Compliance

The platform automatically enforces organizational standards and security policies. Developers don’t need to worry about:

  • Network configuration
  • Security group settings
  • Access control policies
  • Compliance requirements

All these aspects are handled automatically by Gaia’s pre-configured modules.
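
As a sketch of what automated enforcement can look like during pull-request validation (the policy names and configuration shape are invented, not Gaia’s actual rules):

```python
# Invented example of policy checks run during pull-request validation.
POLICIES = {
    "s3_public_access": lambda cfg: not cfg.get("public", False),
    "encryption_at_rest": lambda cfg: cfg.get("encrypted", True),
}

def validate(config: dict) -> list[str]:
    # Returns the names of violated policies; an empty list means compliant.
    return [name for name, check in POLICIES.items() if not check(config)]

print(validate({"module": "s3-bucket", "public": True}))  # -> ['s3_public_access']
```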

Technical Architecture

Terragrunt Integration

Gaia leverages Terragrunt as a wrapper around Terraform to provide enhanced functionality:

  • Automatic variable injection based on environment context
  • Template-based module generation (a toy sketch follows this list)
  • Configuration reuse across environments
  • Simplified state management
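
A toy illustration of that template idea: turn a small key-value request into a Terragrunt-style configuration. All keys, names, and module sources below are invented, not Gaia’s actual schema:

```python
# All names below are hypothetical; Gaia's real schema and modules are internal.
request = {
    "module": "lambda-function",
    "name": "orders-api",
    "environment": "dev",
    "memory_mb": 512,
}

TEMPLATE = """terraform {{
  source = "git::https://example.com/modules/{module}"
}}

inputs = {{
  name        = "{name}"
  environment = "{environment}"  # injected from environment context
  memory_mb   = {memory_mb}
}}
"""

print(TEMPLATE.format(**request))
```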

Monitoring and Observability

The platform includes native integration with monitoring tools:

  • Automated Datadog dashboard creation
  • Standardized monitoring configurations
  • Built-in health checks and alerts
  • Custom metric collection

Organizational Impact

DevOps Transformation

  • Reduced manual infrastructure work by approximately 80%
  • Shifted focus from repetitive tasks to platform improvements
  • Enabled scaling of development operations without proportional increase in DevOps resources

Development Velocity

  • Eliminated infrastructure provisioning bottlenecks
  • Reduced time-to-deployment for new projects
  • Enabled consistent implementation across teams

Governance and Security

  • Centralized policy enforcement
  • Automated compliance checking
  • Infrastructure drift detection and remediation
  • Standardized security controls

Future Directions

Multi-Cloud Strategy

While currently focused on AWS, Gaia is being extended to support Azure. This expansion presents unique challenges due to fundamental differences in how cloud platforms implement similar services. The team is working to maintain the same simple developer experience while adapting to Azure’s distinct architecture.

Platform Evolution

Planned enhancements include:

  • Enhanced monitoring capabilities
  • Expanded multi-cloud support
  • Deeper integration with development tools
  • Advanced automation features

Conclusion

Gaia represents a successful transformation from traditional DevOps to platform engineering. By providing developers with self-service infrastructure capabilities while maintaining security and compliance, the platform has eliminated a major organizational bottleneck. The success of this approach demonstrates how well-designed abstractions and automation can make infrastructure management accessible to development teams while maintaining enterprise-grade standards.

The platform has fundamentally transformed how Carlsberg manages cloud infrastructure. As cloud infrastructure continues to evolve, Gaia’s modular architecture and focus on developer experience position it well for future adaptations and enhancements. The platform serves as a testament to how modern platform engineering can effectively bridge the gap between development velocity and operational excellence.

Gaia was conceived by https://www.linkedin.com/in/josesganjos/ and built with help from the DevOps team.

The DevOps Conference in Copenhagen November 2024

I presented Gaia at The DevOps Conference in Copenhagen in November 2024.

Evolve Commerce Club Expert Session #031

I was honoured to be invited to speak at the 31st Expert Session of the Evolve Commerce Club: https://www.evolve-community.com.

We touched on a lot of subjects around Software Engineering, mostly AI.

Thank you Carlos Monteiro and Gustavo Valle for inviting me.

Here are some links to posts which cover some of the topics we spoke about in more detail.

Why 100% Utilization Kills Innovation: The Mathematical Reality

Imagine a highway at 100% capacity. Traffic doesn’t just slow down—it stops completely. A single broken-down car causes massive ripple effects because there’s no buffer space to absorb the variation. This isn’t just an analogy; it’s mathematics. And the same principle explains why running teams at full capacity mathematically guarantees the death of innovation.

The Queue Theory Reality

In 1961, mathematician J.F.C. Kingman proved something remarkable: as utilization approaches 100%, delays grow without bound, roughly in proportion to 1/(1 − utilization). This finding, known as Kingman’s Formula, demonstrates that systems operating at full capacity don’t just slow down linearly—they break down dramatically. Hopp and Spearman’s seminal work “Factory Physics” (2000) further established that optimal system performance occurs at around 80% utilization, giving rise to the “80% Rule” in operations management.
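
In symbols, Kingman’s approximation for the mean wait at a single server with utilization \(\rho\) is:

$$
\mathbb{E}[W_q] \;\approx\; \left(\frac{\rho}{1-\rho}\right)\left(\frac{c_a^2 + c_s^2}{2}\right)\tau
$$

where \(c_a\) and \(c_s\) are the coefficients of variation of inter-arrival and service times and \(\tau\) is the mean service time. The \(\rho/(1-\rho)\) factor is what explodes: it equals 4 at 80% utilization, 19 at 95%, and 99 at 99%, so the last few points of utilization multiply expected waits many times over.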

This isn’t opinion or management theory—it’s mathematics; the short calculation after the list below makes it concrete. When utilization exceeds 80-85%, systems experience:

  • Steeply increasing delays
  • Inability to handle normal variation
  • Cascading disruptions from small problems
  • Deteriorating performance across all metrics
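
A quick check of that claim for the simplest case, an M/M/1 queue (both variability coefficients equal 1, waits measured in multiples of the mean service time):

```python
# Mean queueing delay for an M/M/1 queue, in units of the mean service time:
# Wq = rho / (1 - rho), i.e. Kingman's formula with c_a = c_s = 1.
for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: average wait = {rho / (1 - rho):5.1f}x service time")
# 50% -> 1.0x, 80% -> 4.0x, 90% -> 9.0x, 95% -> 19.0x, 99% -> 99.0x
```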

The Human System Connection

Just as a machine’s productivity is limited by its operational capacity, humans are constrained by cognitive load. People and teams are systems too. When cognitive load research pioneers Sweller and Chandler demonstrated how mental capacity follows similar patterns, they revealed something crucial: minds at 100% capacity lose the ability to process new information effectively. Just as a fully utilized highway can’t absorb a single additional car, a fully utilized mind can’t absorb new ideas or opportunities.

The implications are profound: innovation requires spare capacity. This isn’t about working less—it’s about maintaining the mental and temporal space required for creative thinking and problem-solving. Studies of innovation consistently show that breakthrough ideas emerge when people have the bandwidth to:

  • Notice unexpected patterns
  • Explore new connections
  • Experiment with different approaches
  • Learn from failures

The Three Horizons Impact

McKinsey’s Three Horizons Framework provides a useful lens for understanding innovation timeframes:

  • Horizon 1: Improving current business
  • Horizon 2: Extending into new areas
  • Horizon 3: Creating transformative opportunities

Here’s where queue theory delivers its killing blow to innovation: At 100% utilization, everything becomes Horizon 1 by mathematical necessity. When a system (human or organizational) operates at full capacity, it can only handle what’s already in the queue. New opportunities, no matter how promising, must wait. Over time, Horizons 2 and 3 don’t just suffer—they become mathematically impossible.

To keep Horizons 2 and 3 viable, companies need to intentionally limit Horizon 1 resource utilization and leave room for creative and exploratory projects.

The Innovation Impossibility

Queue theory proves that running at 100% utilization:

  • Makes delays inevitable
  • Eliminates flexibility
  • Prevents absorption of variation
  • Blocks capacity for new initiatives

Therefore, organizations face a mathematical certainty: maintain 100% utilization or maintain innovation capability. You cannot have both. This isn’t a management choice or cultural issue—it’s as fundamental as gravity.

The solution isn’t working less—it’s working smarter. Just as highways need buffer capacity to function effectively, organizations need spare capacity to innovate. The 80% rule isn’t about reduced output; it’s about maintaining the space required for sustainable performance and growth.

The choice is clear: accept the mathematical reality that innovation requires spare capacity, or continue pushing for 100% utilization while wondering why transformative innovation never seems to happen.

References:

  • Kingman, J.F.C. (1961). “The Single Server Queue in Heavy Traffic”
  • Hopp, W.J., & Spearman, M.L. (2000). “Factory Physics”
  • McKinsey & Company. “Three Horizons of Growth”
  • Sweller, J., & Chandler, P. “Cognitive Load Theory and the Format of Instruction”
