Peter Birkholm-Buch

Stuff about Software Engineering

Different Roles and Responsibilities for an IT Architect

Introduction

In IT, being an “Architect” means something different to almost everyone, and the role and responsibilities vary between industries, countries, and continents. So here are my €0.02 on this.

I like to divide architects into the following groups/layers:

  • Enterprise Architecture (EA)
  • Solution Architecture (SA)
  • Infrastructure Architecture (IA)

They can, of course, be divided even further, but in my experience, this works at a high level. I firmly believe that EA, SA, and IA should remain as distinct functions within an organization, each with its own reporting structure. This separation ensures that Enterprise Architecture (EA) focuses on strategic governance, Solution Architecture (SA) remains embedded in product teams, and Infrastructure Architecture (IA) continues to provide the necessary operational foundation.

This approach aligns with Svyatoslav Kotusev’s research on enterprise architecture governance, which suggests that keeping these disciplines distinct leads to better strategic focus, executional efficiency, and organizational alignment. Additionally, insights from “Enterprise Architecture as Strategy” (Ross, Weill, Robertson) emphasize that EA should focus on high-level strategic direction rather than detailed execution. “Fundamentals of Software Architecture” (Richards, Ford) further supports the distinction between EA and SA, reinforcing that Solution Architects must remain closely aligned with engineering teams for execution. “Team Topologies” (Skelton, Pais) highlights the importance of structuring architecture teams effectively to support flow and autonomy, while “The Art of Scalability” (Abbott, Fisher) underscores how separating governance from execution helps organizations scale more efficiently.

By structuring these functions independently, organizations can maintain a balance between governance and execution while ensuring that architecture decisions remain both strategic and practical. This separation fosters alignment between business strategy, technology execution, and infrastructure stability, ensuring that architecture is an enabler rather than a bottleneck.

Enterprise Architecture

From Wikipedia:

Enterprise architecture (EA) is an analytical discipline that provides methods to comprehensively define, organize, standardize, and document an organization’s structure and interrelationships in terms of certain critical business domains (physical, organizational, technical, etc.) characterizing the entity under analysis.

The goal of EA is to create an effective representation of the business enterprise that may be used at all levels of stewardship to guide, optimize, and transform the business as it responds to real-world conditions.

EA serves to capture the relationships and interactions between domain elements as described by their processes, functions, applications, events, data, and employed technologies.

This means that EA exists in the grey area between business and IT. It’s neither one nor the other, but it takes insight into both to understand how the business is affected by IT and vice versa.

Because of its close proximity to the business, it’s usually EA that writes strategies on issues that are cross-organisational, multi-year efforts. This ensures proper anchoring of strategies where IT and the business (finance) must agree on business direction.

I’ve seen EA divided into something like the following:

  • Overall solution, application, integration, API, etc. architectures
  • Data: Master Data Management & Analytics
  • Hosting: Cloud, hybrid, edge, HCI, managed, and on-prem
  • Security: physical, intellectual, IT, etc.
  • Processes: IAM, SIEM, ITIL, etc.
  • Special areas from the business depending on industry, like logistics, brewing, manufacturing, R&D, IoT, etc.

Solution Architecture

From Wikipedia:

Software development is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components.

Software development involves writing and maintaining the source code, but in a broader sense, it includes all processes from the conception of the desired software through to the final manifestation of the software, typically in a planned and structured process.

Software development also includes research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.

I like to add an additional role, the Software Architect, to the Solution Architecture layer, and I differentiate between the two as follows:

  • A Solution Architect is in charge of the overall solution architecture of a solution that may span multiple IT and business domains using different technologies and software architecture patterns.
  • A Software Architect is in charge of a part of the overall solution usually within a single business domain and technology stack.

Although both roles are highly technical, the Solution Architect is a bit more of a generalist, while the Software Architect is a specialist within a certain technology stack.

Depending on the size of a solution, you may need anything from a single person handling everything to multiple people in both roles. Usually there’s a single Solution Architect in charge.

I’ve seen SA divided into the following:

  • Building things from scratch
  • Customizing existing platforms
  • Non-cloud and cloud architecture focus
  • Microsoft 365 (Workplace) Architecture
  • Mega corporation stuff like SAP, Salesforce, etc.

Successful organizations ensure that EA remains a strategic function rather than absorbing all architects into a single unit. Solution and Infrastructure Architects must be embedded in product teams and technology groups, ensuring a continuous feedback loop between strategy and execution. Without this distinction, architecture becomes detached from real business needs, leading to governance-heavy, execution-poor outcomes.

Svyatoslav Kotusev’s [1] research on enterprise architecture governance supports this view, emphasizing that EA should function as a decision-support structure rather than an operational execution layer. His empirical studies highlight that centralizing all architects within EA leads to inefficiencies, as solution and infrastructure architects require proximity to delivery teams to ensure architectural decisions remain practical and aligned with business realities.

Infrastructure Architecture

From Wikipedia:

Information technology operations, or IT operations, are the set of all processes and services that are both provisioned by an IT staff to their internal or external clients and used by themselves, to run themselves as a business.

With some additional skills like:

  • Data center-infrastructure management (DCIM) is the integration of IT and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems. Achieved through the implementation of specialized software, hardware and sensors, DCIM enables common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.
  • Data center asset management is the set of business practices that join financial, contractual and inventory functions to support life cycle management and strategic decision making for the IT environment. Assets include all elements of software and hardware that are found in the business environment.

The IA is responsible for the foundation upon which all IT solutions depend. Without IT infrastructure, nothing in IT works.

Designing, implementing, and maintaining IT infrastructure spans everything from your internet router at home to the unbelievable physical size and complexity of cloud data centres with undersea network connections.

The IA takes their requirements from EA and SA and implements accordingly.

I’ve seen IA divided into the following:

  • Infrastructure Experts
    • Cloud (AWS, Azure, GCP, etc.)
    • On-premises
  • IaC and Monitoring
  • Hosting: Cloud, hybrid, edge, HCI, managed, and on-prem
  • Management and support teams for managed services

Cross Organizational Collaboration

To make sure that everyone knows what is going on, all architecture is governed through the Architecture Forum, where the leads from each discipline meet and have the final say over the use of technology and the implementation of policies.

Example of how an Architecture Forum could be organized – the specific areas are examples

References

  1. “The Practice of Enterprise Architecture: A Modern Approach to Business and IT Alignment” – Svyatoslav Kotusev
  2. “Enterprise Architecture as Strategy” – Jeanne W. Ross, Peter Weill, David Robertson
  3. “Fundamentals of Software Architecture” – Mark Richards, Neal Ford
  4. “Team Topologies” – Matthew Skelton, Manuel Pais
  5. “The Art of Scalability” – Martin L. Abbott, Michael T. Fisher

For more references see https://birkholm-buch.dk/2021/02/12/useful-resources-on-software-systems-architecture/

Software is Eating Science – and That’s a Good Thing

Introduction

I love the internet’s famous memos, like the one from Jeff Bezos about APIs, and Marc Andreessen’s 2011 prediction that “software is eating the world.” Over a decade later, software has devoured more than commerce and media—it’s now eating science, and quite frankly, it’s about time.

Scientific research, especially in domains like biology, chemistry, and medicine, has historically been a software backwater. Experiments were designed in paper notebooks, data handled via Excel, and results shared through PowerPoint screenshots. It’s only recently that leading institutions began embedding software engineering at the core of how science gets done. And the results speak for themselves. The Nobel Prize in Chemistry 2024, awarded for the use of AlphaFold in solving protein structures, is a striking example of how software—developed and scaled by engineers—has become as fundamental to scientific breakthroughs as any wet-lab technique.

The Glue That Holds Modern Science Together

Software engineers aren’t just building tools. At institutions like the Broad Institute, Allen Institute, and EMBL-EBI, they’re building scientific platforms. Terra, Code Ocean, Benchling—these aren’t developer toys, they’re scientific instruments. They standardize experimentation, automate reproducibility, and unlock collaboration at scale.

The Broad Institute’s Data Sciences Platform employs over 200 engineers supporting a staff of 3,000. Recursion Pharmaceuticals operates with an almost 1:1 engineer-to-scientist ratio. These are not exceptions—they’re exemplars.

The Real Payoff: Research Acceleration

When you embed software engineers into scientific teams, magic happens:

  • Setup time drops by up to 70%
  • Research iteration speeds triple
  • Institutional knowledge gets preserved, not lost in SharePoint folders
  • AI becomes usable beyond ChatGPT prompts—supporting actual data analysis, modeling, and automation

These are not hypothetical. They’re documented results from public case studies and internal programs at peer institutions.

From Hype to Hypothesis

While many institutions obsess over full lab digitization (think IoT pipettes), the smarter move is prioritizing where digital already exists: in workflows, data, and knowledge. Tools like Microsoft Copilot and OpenAI Enterprise, alongside scientific models such as Evo2 for genomics, AlphaFold for protein structure prediction, and DeepVariant for variant calling, only become truly impactful when integrated, orchestrated, and maintained by skilled engineers who understand both the research goals and the computational landscape. With that engineering in place, researchers are now unlocking years of buried insights and accelerating modeling at scale.

Scientific software engineers are the missing link. Their work turns ad hoc experiments into reproducible pipelines. Their platforms turn pet projects into institutional capability. And their mindset—rooted in abstraction, testing, and scalability—brings scientific rigor to the scientific process itself.
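
As a toy illustration, a reproducible pipeline step pins its parameters and records the provenance of its inputs, so anyone can re-run it and get identical results. This is only a sketch; the file name and the “analysis” below are hypothetical stand-ins:

import hashlib, random

def analyze(sample_path: str, seed: int = 42) -> dict:
    """One pipeline step: deterministic, parameterized, self-documenting."""
    random.seed(seed)  # fixed seed -> the same inputs reproduce the same run
    data = open(sample_path, "rb").read()
    return {
        "input_sha256": hashlib.sha256(data).hexdigest(),  # data provenance
        "seed": seed,
        "score": random.random(),  # stand-in for a real analysis step
    }

# Hypothetical usage -- anyone can re-run this and get identical output:
# analyze("samples/yeast_batch_07.fastq")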

What many underestimate is that building software—like conducting experiments—requires skill, discipline, and experience. Until AI is truly capable of writing production-grade code end-to-end (and it’s not—see Speed vs. Precision in AI Development), we need real software engineering best practices. Otherwise, biology labs will unknowingly recreate decades of software evolution from scratch—complete with Y2K-level tech debt, spaghetti code, and glaring security gaps.

What Now?

If you’re in research leadership and haven’t staffed up engineering talent, you’re already behind. A 1:3–1:5 engineer-to-scientist ratio is emerging as the new standard—at least in data-intensive fields like genomics, imaging, and molecular modeling—where golden-path workflows, scalable AI tools, and reproducible science demand deep software expertise.

That said, one size does not fit all. Theoretical physics or field ecology may have very different needs. What’s critical is not the exact ratio, but the recognition that modern science needs engineering—not just tools.

There are challenges. Many scientists weren’t trained to work with software engineers, and collaboration across disciplines takes time and mutual learning. There’s also a cultural risk of over-engineering—replacing rapid experimentation with too much process. But when done right, the gains are exponential.

Science isn’t just done in the lab anymore—it’s done in GitHub. And the sooner we treat software engineers as core members of scientific teams, not as service providers, the faster we’ll unlock the discoveries that matter.

Let’s stop treating software like overhead. It’s the infrastructure of modern science.

Old Wine, New Bottles: Why MCP Might Succeed Where UDDI and OData Failed

Introduction

Anthropic’s Model Context Protocol (MCP) is gaining traction as a way to standardize how AI applications connect to external tools and data sources. Looking at the planned MCP Registry and the protocol itself, I noticed some familiar patterns from earlier integration standards.

The MCP Registry bears striking resemblances to UDDI (Universal Description, Discovery and Integration) from the early 2000s. Both aim to be centralized registries for service discovery, storing metadata to help developers find available capabilities. MCP’s approach to standardized data access also echoes Microsoft’s OData protocol.

We see this pattern frequently in technology—old concepts repackaged with new implementation approaches. Sometimes the timing or context makes all the difference between success and failure.

UDDI and MCP Registry: What’s Different This Time

UDDI tried to create a universal business service registry during the early web services era. Companies could theoretically discover and connect to each other’s services automatically. The concept was sound, but adoption remained limited.

MCP Registry targets a narrower scope—AI tool integrations—in an ecosystem that already has working implementations. More importantly, AI provides the missing piece that UDDI lacked: the “magic sauce” in the middle.

With UDDI, human developers still needed to understand service interfaces and write integration code manually. With MCP, AI agents can potentially discover services, understand their capabilities, and use them automatically. The AI layer handles the complexity that made UDDI impractical for widespread adoption.

OData: The Complexity Problem

OData never achieved broad adoption despite solving a real problem—standardized access to heterogeneous data sources. The specification became complex with advanced querying, batch operations, and intricate metadata schemas.

MCP deliberately keeps things simpler: tools, resources, and prompts. That’s the entire model. OpenAPI Specification closed some of OData’s gap, but you still can’t easily connect to an API programmatically and start using it automatically. MCP’s lower barrier to entry might be what was missing the first time around.
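
To make that concrete, here is a minimal sketch of an MCP server built around exactly those three concepts, assuming the official Python SDK’s FastMCP interface (package and decorator names may vary between SDK versions); the inventory tool, resource, and prompt are hypothetical examples:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # hypothetical server name

@mcp.tool()
def check_stock(sku: str) -> int:
    """A tool: an action the AI can invoke (stubbed for illustration)."""
    return {"SKU-001": 42}.get(sku, 0)

@mcp.resource("inventory://warehouses")
def warehouses() -> str:
    """A resource: read-only data the client can pull into context."""
    return "Copenhagen, Fredericia"

@mcp.prompt()
def reorder_review(sku: str) -> str:
    """A prompt: a reusable template the client can offer to users."""
    return f"Review stock levels for {sku} and draft a reorder proposal."

if __name__ == "__main__":
    mcp.run()  # serves the protocol over stdio by default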

Timing and Context Matter

Several factors suggest MCP might succeed where its predecessors struggled:

AI as the Integration Layer: AI agents can handle the complexity that overwhelmed human developers with previous standards. They can discover services, understand capabilities, and generate appropriate calls automatically. As I discussed in “AI for Data (Not Data and AI)”, AI works as a synthesizer and translator – it doesn’t need perfect, pre-cleaned interfaces. This makes automatic integration practical in ways that weren’t possible with human developers manually writing integration code.

Proven Implementation First: Companies like Cloudflare and Atlassian already have production MCP servers running. This contrasts with UDDI’s “build the registry and they will come” approach.

Focused Problem Domain: Instead of trying to solve all integration problems, MCP focuses specifically on AI-tool integration. This narrower scope increases the chances of getting the abstraction right.

Simple Core Protocol: Three concepts versus OData’s extensive specification or SOAP’s complexity. The most successful protocols (HTTP, REST) stayed simple.

The Historical Pattern

We’ve seen this “universal integration layer” vision repeatedly:

  • CORBA (1990s): Universal object access
  • SOAP/WSDL (early 2000s): Universal web services
  • UDDI (early 2000s): Universal service discovery
  • OData (2010s): Universal data access
  • GraphQL (2010s): Universal query language
  • MCP (2020s): Universal AI integration

Each generation promised to solve integration complexity. Each had technical merit. Each faced adoption friction and complexity creep.

Why This Matters

The recurring nature of this pattern suggests the underlying problem is real and persistent. Integration complexity remains a significant challenge in software development.

MCP has advantages its predecessors lacked—AI as an intermediary layer, working implementations before standardization, and deliberate simplicity. Whether it can maintain focus and resist the complexity trap that caught earlier standards remains to be seen.

The next few years will be telling. If MCP stays simple and solves real problems, it might finally deliver on the promise of standardized integration. If it expands scope and adds complexity, it will likely follow the same path as its predecessors.

The Future of Consulting: How Value Delivery Models Drive Better Client Outcomes

Introduction

The consulting industry, particularly within software engineering, is shifting away from traditional hourly billing towards value-driven and outcome-focused engagements. This evolution aligns with broader trends identified by industry analysts like Gartner and McKinsey, emphasizing outcomes and measurable impacts over time-based compensation (Gartner Hype Cycle for Consulting Services, 2023 and McKinsey: The State of Organizations 2023).

Shift from Hourly Rates to Value Delivery

Traditional hourly billing often incentivizes cost reduction at the expense of quality, creating tension between minimizing expenses and achieving meaningful results. The emerging approach—value-based consulting—aligns compensation directly with specified outcomes or deliverables. For instance, many consulting firms now employ fixed-price projects or performance-based contracts that clearly link payment to the achievement of specific business results, improving alignment and encouraging deeper collaboration between clients and consultants.

According to McKinsey’s “The State of Organizations 2023” report, approximately 40% of consulting engagements are shifting towards value-based models, highlighting the industry’s evolution (https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-state-of-organizations-2023).

Leveraging Scrum and Agile Methodologies

Agile methodologies, especially Scrum, facilitate this shift by naturally aligning consulting work with measurable outputs and iterative improvements. In Scrum, value is measured through clearly defined user stories, regular sprint reviews, and tangible deliverables evaluated continuously by stakeholders. These iterative deliveries provide clear visibility into incremental progress, effectively replacing hourly tracking with meaningful metrics of success.

Challenges in Adopting Value-Based Models

Transitioning to value-based consulting is not without its challenges. Firms may encounter difficulties in accurately defining and measuring value upfront, aligning expectations, and managing the inherent risks of outcome-based agreements. Overcoming these challenges typically requires transparent communication, clear contract terms, and robust stakeholder engagement from project inception.

AI and Human Oversight

While there is significant enthusiasm and concern surrounding AI, its role remains primarily augmentative rather than fully autonomous, particularly in high-stakes decision-making. Human oversight ensures AI-driven solutions remain precise and contextually appropriate, directly supporting high-quality, outcome-focused consulting. This perspective aligns with insights discussed in Responsible AI: Enhance Human Judgment, Don’t Replace It.

Balancing Speed and Precision

AI offers substantial gains in speed but often involves trade-offs in precision. Certain fields, such as financial services or critical infrastructure, require exactness, making human judgment essential in balancing these considerations. This topic, explored in detail in Speed vs. Precision in AI Development, highlights how value-driven consulting must thoughtfully integrate AI to enhance outcomes without sacrificing accuracy.

Conclusion

The shift to outcome-focused consulting models, supported by agile frameworks and thoughtful AI integration, represents a significant evolution in the industry. By prioritizing measurable value and clearly defined outcomes over hourly rates, consulting engagements become more impactful and sustainable.

AI for Data (Not Data and AI)

Cold Open

Most companies get it backwards.

They say “Data and AI,” as if AI is dessert—something you get to enjoy only after you’ve finished your vegetables. And by vegetables, they mean years of data modeling, integration work, and master‑data management. AI ends up bolted onto the side of a data office that’s already overwhelmed.

That mindset isn’t just outdated—it’s actively getting in the way.

It’s time to flip the script. It’s not Data and AI. It’s AI for Data.

AI as a Data Appendage: The Legacy View

In most org charts, AI still reports to the head of data. That tells you everything: AI is perceived as a tool to be used on top of clean data. The assumption is that AI becomes useful only after you’ve reached some mythical level of data maturity.

So what happens? You wait. You delay. You burn millions building taxonomies and canonical models that never quite deliver. When AI finally shows up, it generates dashboards or slide‑deck summaries. Waste of potential.

What If AI Is Your Integration Layer?

Here’s the mental flip: AI isn’t just a consumer of data—it’s a synthesizer. A translator. An integrator – an Enabler!

Instead of cleaning, mapping, and modeling everything up front, what if you simply exposed your data—as is—and let the AI figure it out?

That’s not fantasy. Today, you can feed an AI messy order tables, half‑finished invoice exports, inconsistent SKU lists—and it still works out the joins. Sales and finance data follow patterns the model has seen a million times.

The magic isn’t that AI understands perfect data. The magic is that it doesn’t need to.
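
Here is a sketch of what that looks like in practice: two deliberately inconsistent extracts handed straight to a model, no cleanup first. This assumes the OpenAI Python SDK; the model name and the data are illustrative:

from openai import OpenAI

orders = """OrdNo;CustRef;Amt
A-1001;ACME GmbH;499,00
A-1002;Acme;1.250,50"""

invoices = """invoice_id,customer_name,total
77,ACME GmbH,499.00
78,acme gmbh,1250.50"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works
    messages=[{
        "role": "user",
        "content": "Here are two raw extracts:\n\n"
                   f"orders:\n{orders}\n\ninvoices:\n{invoices}\n\n"
                   "Propose how to join them, noting the delimiter, decimal, "
                   "and customer-name inconsistencies.",
    }],
)
print(reply.choices[0].message.content)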

MCP: OData for Agents

Remember OData? It promised introspectable, queryable APIs—you could ask the endpoint what it supported. Now meet MCP (Model Context Protocol). Think OData, but for AI agents.

With MCP, an agent can introspect a tool, learn what actions exist, what inputs it needs, what outputs to expect. No glue code. No brittle integrations. You expose a capability, and the AI takes it from there.

OData made APIs discoverable. MCP makes tools discoverable to AIs.

Expose your data with just enough structure, and let the agent reason. No mapping tables. No MDM. Just AI doing what it’s good at: figuring things out.
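
Concretely, discovery is a single protocol exchange. Here is a sketch of the JSON-RPC messages involved, shown as Python dictionaries; the query_orders tool is a hypothetical example:

# What an agent sends to discover capabilities (MCP is built on JSON-RPC 2.0):
list_tools_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The shape of the answer: every tool self-describes its inputs, so the agent
# can call it without hand-written glue code. "query_orders" is hypothetical.
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "query_orders",
            "description": "Run a read-only query against the order tables.",
            "inputSchema": {
                "type": "object",
                "properties": {"filter": {"type": "string"}},
                "required": ["filter"],
            },
        }]
    },
}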

Why It Works in Science—And Why It’ll Work in Business

Need proof? Look at biology.

Scientific data is built on shared, Latin‑based taxonomies. Tools like Claude or ChatGPT navigate these datasets without manual schema work. At Carlsberg we’ve shown an AI connecting yeast strains ➜ genes ➜ flavor profiles in minutes.

Business data is easier. You don’t need to teach AI what an invoice is. Or a GL account. These concepts are textbook. Give the AI access and it infers relationships. If it can handle yeast genomics, it can handle your finance tables.

Stop treating AI like glass. It’s ready.

The Dream: MCP‑Compliant OData Servers

Imagine every system—ERP, CRM, LIMS, SharePoint—exposing itself via an AI‑readable surface. No ETLs, no integration middleware, no months of project time.

Combine OData’s self‑describing endpoints with MCP’s agent capabilities. You don’t write connectors. You don’t centralize everything first. The AI layer becomes the system‑of‑systems—a perpetual integrator, analyst, translator.

Integration disappears. Master data becomes a footnote.
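
To sketch the dream: wrap an OData endpoint’s self-description in a couple of MCP tools, and an agent can learn the schema and query the data without a single hand-written connector. The endpoint URL and tool names below are hypothetical, and FastMCP is assumed from the official Python SDK:

import requests
from mcp.server.fastmcp import FastMCP

ODATA_ROOT = "https://erp.example.com/odata/v4/"  # hypothetical endpoint
mcp = FastMCP("odata-bridge")

@mcp.tool()
def describe_schema() -> str:
    """Return the OData $metadata document so the agent can learn the model."""
    return requests.get(ODATA_ROOT + "$metadata", timeout=30).text

@mcp.tool()
def query(entity_set: str, odata_filter: str = "") -> dict:
    """Run a read-only OData query, e.g. query("Orders", "Status eq 'Open'")."""
    params = {"$filter": odata_filter} if odata_filter else {}
    response = requests.get(ODATA_ROOT + entity_set, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    mcp.run()

The point is not these twenty lines; it is that, under these assumptions, they are the entire integration.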

When Do You Still Need Clean Data?

Let’s address the elephant in the room: there are still scenarios where data quality matters deeply.

Regulatory reporting. Financial reconciliation. Mission-critical operations where a mistake could be costly. In these domains, AI is a complement to—not a replacement for—rigorous data governance.

But here’s the key insight: you can pursue both paths simultaneously. Critical systems maintain their rigor, while the vast majority of your data landscape becomes accessible through AI-powered approaches.

AI for Data: The Flip That Changes Everything

You don’t need perfect data to start using AI. That’s Data and AI thinking.

AI for Data starts with intelligence and lets structure emerge. Let your AI discover, join, and reason across your real‑world mess—not just your sanitized warehouse.

It’s a shift from enforcing models to exposing capabilities. From building integrations to unleashing agents. From waiting to acting while you learn.

If your organization is still waiting to “get the data right,” here’s your wake‑up call: you’re waiting for something AI no longer needs.

AI is ready. Your data is ready enough.

The only question left: Are you ready to flip the model?

Responsible AI: Enhance Human Judgment, Don’t Replace It

Artificial Intelligence (AI) is transforming businesses, streamlining processes, and providing insights previously unattainable at scale. However, it is crucial to keep in mind a fundamental principle, often attributed to computing pioneer Grace Hopper:

“A computer can never be held accountable; therefore, a computer must never make a management decision.”

This simple yet profound guideline underscores the importance of human oversight in business-critical decision-making, a topic I’ve discussed previously in my posts about the Four Categories of AI Solutions and the necessity of balancing Speed vs Precision in AI Development.

Enhancing Human Decision-Making

AI should serve as an enabler rather than a disruptor. Its role is to provide support, suggestions, and insights, but never to autonomously make significant business decisions, especially those affecting financial outcomes, security protocols, or customer trust. Human oversight ensures accountability and ethical responsibility remain clear.

Security and Resilience

AI-powered systems, such as chatbots and customer interfaces, must be built with resilience against adversarial manipulation. Effective safeguards—including strict input validation and clearly defined output limitations—are critical. Human oversight must always be available as a fallback mechanism when the system encounters unforeseen scenarios.

Balancing Core and Strategic AI Solutions

In earlier posts, I’ve outlined four categories of AI solutions ranging from simple integrations to complex, custom-built innovations. Core AI solutions typically leverage standard platforms with inherent governance frameworks, such as Microsoft’s suite of tools, making them relatively low-risk. Conversely, strategic AI solutions involve custom-built systems that provide significant business value but inherently carry higher risks, requiring stringent oversight and comprehensive governance.

Lean and Practical Governance

AI governance frameworks must scale appropriately with the level of risk involved. It’s important to avoid creating bureaucratic overhead for low-risk applications, while ensuring that more sensitive, strategic AI applications undergo thorough evaluations and incorporate stringent human oversight.

Humans-in-the-Loop

A “human-in-the-loop” approach is essential for managing AI-driven decisions that significantly impact financial transactions, security measures, or customer trust. While AI may suggest or recommend actions, final approval and accountability should always rest with human operators. Additionally, any autonomous action should be easily reversible by a human.
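
Here is a minimal sketch of what such a gate can look like in code: the AI proposes, a named human approves, and every action carries an explicit undo path. The refund scenario and function names are hypothetical:

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    """An AI-suggested action: nothing here executes without a human."""
    description: str
    execute: Callable[[], None]
    undo: Callable[[], None]  # every autonomous action must be reversible

def human_in_the_loop(action: ProposedAction, approver: str) -> None:
    """Gate execution on explicit human approval; log who is accountable."""
    print(f"AI proposes: {action.description}")
    if input(f"{approver}, approve? [y/N] ").strip().lower() != "y":
        print("Rejected - nothing executed.")
        return
    action.execute()
    print(f"Executed; approved by {approver}. Reversible via action.undo().")

# Hypothetical usage: the AI recommends a refund, a human decides.
refund = ProposedAction(
    description="Refund order 1042 (DKK 499) flagged as duplicate",
    execute=lambda: print("Refund issued."),
    undo=lambda: print("Refund reversed."),
)
human_in_the_loop(refund, approver="duty manager")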

Final Thoughts

AI offers tremendous potential for innovation and operational excellence. However, embracing AI responsibly means recognizing its limitations. AI should support and empower human decision-making—not replace it. By maintaining human oversight and clear accountability, we can leverage AI effectively while safeguarding against risks.

Ultimately, AI assists, but humans decide.

Accelerating Research at Carlsberg Research Laboratory using Scientific Computing

Introduction

Scientific discovery is no longer just about what happens in the lab—it’s about how we enable research through computing, automation, and AI. At Carlsberg Research Laboratory (CRL), our Accelerate Research initiative is designed to remove bottlenecks and drive breakthroughs by embedding cutting-edge technology into every step of the scientific process.

The Five Core Principles of Acceleration

To ensure our researchers can spend more time on discovery, we are focusing on:

  • Digitizing the Laboratory – Moving beyond manual processes to automated, IoT-enabled research environments.
  • Data Platform – Creating scalable, accessible, and AI-ready data infrastructure that eliminates data silos.
  • Reusable Workflows – Standardizing and automating research pipelines to improve efficiency and reproducibility.
  • High-Performance Computing (HPC) – Powering complex simulations and large-scale data analysis. We are also preparing for the future of quantum computing, which promises to transform how we model molecular behavior and simulate complex biochemical systems at unprecedented speed and scale.
  • Artificial Intelligence – Enhancing data analysis, predictions, and research automation beyond just generative AI.

The Expected Impact

By modernizing our approach, we aim to:

  • Reduce research setup time by up to 70%
  • Accelerate experiment iteration by 3x
  • Improve cross-team collaboration efficiency by 5x
  • Unlock deeper insights through AI-driven analysis and automation

We’re not just improving research at CRL; we’re redefining how scientific computing fuels innovation. The future of research is fast, automated, and AI-driven.

The Half-Life of Skills: Why 100% Utilization Can Destroy Your Future

At a recent SXSW session, Ian Beacraft, CEO of Signal and Cipher, presented a compelling vision of the future workplace—one that demands continuous learning and adaptability. Central to his message was the idea of the rapidly shrinking half-life of skills. Today, technical skills are estimated to last only 2.5 years before becoming outdated, a stark decrease compared to the past.

The diagram visualizes the shrinking half-life of skills over time, highlighting how rapidly technical competencies become outdated. It contrasts the decreasing lifespan of relevant skills (currently around 2.5 years) with the growing need for continuous, agile learning methods. The visual emphasizes the risk of traditional, slow-paced training methods becoming obsolete and illustrates the necessity for companies to adopt flexible, micro-learning approaches to remain competitive and innovative in the modern workplace.

This concept aligns closely with my previous insights into organizational efficiency and innovation, particularly around the dangers of running teams at 100% utilization. Classical queue theory demonstrates that when utilization approaches full capacity, wait times and bottlenecks increase dramatically. For knowledge work, this manifests as a loss of innovation, adaptability, and essential skills development.
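
The math behind that claim is stark. In the simplest queueing model (M/M/1, a single server with random arrivals), expected waiting time is proportional to ρ/(1−ρ), where ρ is utilization; a quick sketch in Python:

# M/M/1 queueing: expected waiting time scales with rho / (1 - rho),
# so the delay factor explodes as utilization approaches 100%.
for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: waiting-time factor {rho / (1 - rho):5.1f}x")
# 50% -> 1.0x, 80% -> 4.0x, 90% -> 9.0x, 95% -> 19.0x, 99% -> 99.0x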

In an environment of near-constant technological evolution, companies that fill every available hour with immediate productivity leave no room for the critical learning and upskilling necessary to stay competitive. The future belongs to organizations that deliberately balance productivity with learning, recognizing that skill development isn’t an extracurricular activity—it’s foundational to future success.

As skills continue to expire faster than ever, running at full utilization isn’t just inefficient; it’s a direct threat to your company’s relevance. To thrive in this new reality, the approach to learning and up-skilling within companies must fundamentally change. Traditional courses with formal diplomas and structured online training from established vendors will increasingly struggle to keep pace with the rapid evolution of skills. Instead, bite-sized, just-in-time learning content available through the web, YouTube, and other micro-learning platforms will become essential.

Ian Beacraft highlighted a striking prediction: the cost of training and upskilling employees will soon eclipse the cost of technology itself. If you have SMEs in your company, this is something you have to think hard about solving, so that you can manage these costs effectively and maintain a competitive edge in an era where skill requirements evolve rapidly.

AI Without Borders: Why Accessibility Will Determine Your Organization’s Future

Introduction

Gartner recently announced that AI has moved past the peak of inflated expectations in its hype cycle, signaling a crucial transition from speculation to real-world impact. AI now stands at this inflection point—its potential undeniable, but its future hinges on whether it becomes an accessible, enabling force or remains restricted by excessive governance.

The Risk of Creating A and B Teams in AI

Recent developments, such as OpenAI’s foundational grants to universities, signal an emerging divide between those with AI access and those without. These grants are not just an academic initiative—they will accelerate disparities in AI capabilities, favoring institutions that can freely explore and integrate AI into research and innovation. The same divide is already forming in the corporate world.

Software engineers today are increasingly evaluating companies based on their AI adoption. When candidates ask in interviews whether an organization provides tools like GitHub Copilot, they are not just inquiring about productivity enhancements—they are assessing whether the company is on the cutting edge of AI adoption. Organizations that restrict AI access risk falling behind, unintentionally categorizing themselves into the “B Team,” making it harder to attract top talent and compete effectively.

Lessons from Past Industrial Revolutions

History provides clear lessons about the importance of accessibility in technological revolutions. Electricity, for example, was initially limited to specific industrial applications before it became a utility that fueled industries, powered homes, and transformed daily life. Similarly, computing evolved from expensive mainframes reserved for large enterprises to personal computers and now cloud computing, making advanced technology available to anyone with an internet connection.

AI should follow the same path.

However, excessive corporate governance could hinder its progress, while governmental governance remains essential to ensure AI is developed and used safely. Just as electricity transformed from an industrial novelty to the foundation of modern society, AI must follow a similar democratization path. Imagine if we had limited electricity to only certified engineers or specific departments—we would have stifled the innovation that brought us everything from household appliances to modern healthcare. Similarly, restricting AI access today could prevent us from discovering its most transformative applications tomorrow.

Governance Should Enable, Not Block

The key is not to abandon governance but to ensure it enables rather than blocks innovation. AI governance should focus on how AI is used, not who gets access to it. Restricting AI tools today is akin to limiting electricity to specialists a century ago—an approach that would have crippled progress.

The most successful AI implementations are those that integrate seamlessly into existing workflows. Tools like GitHub Copilot and Microsoft Copilot demonstrate how AI can enhance productivity when it is embedded within platforms that employees already use. The key is to govern AI responsibly without creating unnecessary friction that prevents widespread adoption.

The Competitive Divide is Already Here

The AI accessibility gap is no longer theoretical—it is already shaping the competitive landscape. Universities that receive OpenAI’s foundational grants will advance more rapidly than those without access. Companies that fully integrate AI into their daily operations will not only boost innovation but also become magnets for top talent. The question organizations must ask themselves is clear: Do we embrace AI as an enabler, or do we risk falling behind?

As history has shown, technology is most transformative when it is available to all. AI should be no different. The organizations that will thrive in the coming decade will be those that balance responsible governance with widespread AI accessibility—empowering their people to innovate rather than restricting them with excessive controls. The question isn’t whether you’ll adopt AI, but whether you’ll do it in a way that creates competitive advantage or competitive disadvantage.

Speed vs. Precision in AI Development

In my post “Four Categories of AI Solutions”, I categorized AI solutions by their level of innovation and complexity. Building on that framework, I’ve been thinking about how AI-assisted programming tools fit into this model—particularly the trade-off between speed and precision in software development.

Mapping Speed and Precision

The diagram below illustrates how tasks can be plotted along axes of speed and precision. As you move towards higher precision (e.g., creating backend services or system programming), speed naturally decreases, as does the immediate value AI tools provide. Conversely, low-precision tasks—like generating boilerplate code for frontend applications—enable high speed and provide quick wins.

Dave Farley seems to align with this observation in a recent video, noting that AI tools excel at accelerating well-defined and repetitive tasks, like building a simple mobile app or prototyping a frontend interface. These tasks fall into Category 1 of my model: low complexity, high-speed solutions.

However, the further you move into Categories 3 and 4—solutions requiring precision and contextual understanding—the less impactful these tools become. For instance, when my team uses GitHub Copilot for backend development, only about 20% of its suggested code is accepted. The rest lacks the precision or nuanced understanding needed for high-stakes backend systems.

The Speed-Precision Trade-Off

The video also highlighted a critical concern: AI’s emphasis on generating code quickly can erode the incremental approach central to traditional programming. In precision-driven tasks, small, deliberate steps are essential to ensure reliability and minimize risk. By generating large amounts of code at once, AI tools risk losing this careful craftsmanship.

Yet this trade-off isn’t a flaw—it’s a characteristic of how these tools are designed. AI’s value lies in accelerating the routine and freeing up developers to focus on higher-order problems. For precision tasks, AI becomes an assistant rather than a solution, helping analyze systems, identify bugs, or suggest improvements.

The Four Categories Revisited

This balancing act between speed and precision ties directly into the “Four Categories of AI Solutions”:

  1. Category 1: High-speed, low-precision tasks like prototyping and boilerplate generation. AI tools thrive here.
  2. Category 2: Moderately complex applications, where AI can augment human effort but requires careful validation.
  3. Category 3: High-precision, low-speed systems programming or backend development. AI contributes less here, serving more as an analysis tool.
  4. Category 4: Novel, cutting-edge AI applications requiring custom-built solutions.

As we develop software with AI, understanding where these tools provide the most value—and where their limitations begin—is critical. For now, AI tools may help us write code faster, but when it comes to precision, the human touch remains irreplaceable.
