Stuff about Software Engineering


AI for Data (Not Data and AI)

Cold Open

Most companies get it backwards.

They say “Data and AI,” as if AI is dessert—something you get to enjoy only after you’ve finished your vegetables. And by vegetables, they mean years of data modeling, integration work, and master‑data management. AI ends up bolted onto the side of a data office that’s already overwhelmed.

That mindset isn’t just outdated—it’s actively getting in the way.

It’s time to flip the script. It’s not Data and AI. It’s AI for Data.

AI as a Data Appendage: The Legacy View

In most org charts, AI still reports to the head of data. That tells you everything: AI is perceived as a tool to be used on top of clean data. The assumption is that AI becomes useful only after you’ve reached some mythical level of data maturity.

So what happens? You wait. You delay. You burn millions building taxonomies and canonical models that never quite deliver. When AI finally shows up, it generates dashboards or slide‑deck summaries. Waste of potential.

What If AI Is Your Integration Layer?

Here’s the mental flip: AI isn’t just a consumer of data—it’s a synthesizer. A translator. An integrator. An enabler.

Instead of cleaning, mapping, and modeling everything up front, what if you simply exposed your data—as is—and let the AI figure it out?

That’s not fantasy. Today, you can feed an AI messy order tables, half‑finished invoice exports, inconsistent SKU lists—and it still works out the joins. Sales and finance data follow patterns the model has seen a million times.

The magic isn’t that AI understands perfect data. The magic is that it doesn’t need to.
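What does “letting the AI figure it out” look like in practice? Here’s a minimal sketch: hand the model the raw column names of your messy tables and ask it to propose the joins. `ask_llm` is a hypothetical stand-in for whichever model API you use; only the prompt-building is shown.

```python
# Sketch: ask a model to infer join keys from messy, as-is tables.
# ask_llm is a hypothetical stand-in for your model API of choice.

def build_join_prompt(tables: dict[str, list[str]]) -> str:
    """Render raw column names per table into a prompt asking for joins."""
    lines = ["Propose SQL joins between these tables based on column names:"]
    for name, columns in tables.items():
        lines.append(f"- {name}: {', '.join(columns)}")
    lines.append("Reply with table.column = table.column pairs.")
    return "\n".join(lines)

# Deliberately inconsistent, half-finished table exports:
tables = {
    "orders_raw": ["OrderNo", "cust_id", "sku", "amt"],
    "invoice_export": ["InvoiceId", "order_no", "total_eur"],
    "sku_list_v2": ["SKU", "description"],
}

prompt = build_join_prompt(tables)
# response = ask_llm(prompt)
# A capable model will typically pair OrderNo/order_no and sku/SKU
# despite the inconsistent casing and naming.
```

No mapping tables, no canonical model—the naming patterns alone are usually enough for the model to work with.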

MCP: OData for Agents

Remember OData? It promised introspectable, queryable APIs—you could ask the endpoint what it supported. Now meet MCP (Model Context Protocol). Think OData, but for AI agents.

With MCP, an agent can introspect a tool, learn what actions exist, what inputs it needs, what outputs to expect. No glue code. No brittle integrations. You expose a capability, and the AI takes it from there.

OData made APIs discoverable. MCP makes tools discoverable to AIs.

Expose your data with just enough structure, and let the agent reason. No mapping tables. No MDM. Just AI doing what it’s good at: figuring things out.
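To make this concrete, here’s roughly what the discovery handshake looks like. MCP is JSON-RPC based, and an agent starts by asking a server what tools it offers. The server response below is an illustrative example I’ve made up, not output from a real endpoint.

```python
# Sketch: the discovery handshake an MCP agent performs against a server.
# The request shape follows MCP's JSON-RPC "tools/list" method; the
# response is an illustrative example, not a real endpoint's output.

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_orders",
                "description": "Query order records, as-is",
                "inputSchema": {
                    "type": "object",
                    "properties": {"filter": {"type": "string"}},
                },
            }
        ]
    },
}

# With that single call the agent knows which actions exist and what
# inputs they require—no hand-written glue code.
for tool in response["result"]["tools"]:
    print(tool["name"], "->", list(tool["inputSchema"]["properties"]))
```

Compare this with OData’s `$metadata` document: same idea, but the consumer is now an agent that can reason about what it finds.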

Why It Works in Science—And Why It’ll Work in Business

Need proof? Look at biology.

Scientific data is built on shared, Latin‑based taxonomies. Tools like Claude or ChatGPT navigate these datasets without manual schema work. At Carlsberg we’ve shown an AI connecting yeast strains ➜ genes ➜ flavor profiles in minutes.

Business data is easier. You don’t need to teach AI what an invoice is. Or a GL account. These concepts are textbook. Give the AI access and it infers relationships. If it can handle yeast genomics, it can handle your finance tables.

Stop treating AI like glass. It’s ready.

The Dream: MCP‑Compliant OData Servers

Imagine every system—ERP, CRM, LIMS, SharePoint—exposing itself via an AI‑readable surface. No ETLs, no integration middleware, no months of project time.

Combine OData’s self‑describing endpoints with MCP’s agent capabilities. You don’t write connectors. You don’t centralize everything first. The AI layer becomes the system‑of‑systems—a perpetual integrator, analyst, translator.

Integration disappears. Master data becomes a footnote.

When Do You Still Need Clean Data?

Let’s address the elephant in the room: there are still scenarios where data quality matters deeply.

Regulatory reporting. Financial reconciliation. Mission-critical operations where a mistake could be costly. In these domains, AI is a complement to—not a replacement for—rigorous data governance.

But here’s the key insight: you can pursue both paths simultaneously. Critical systems maintain their rigor, while the vast majority of your data landscape becomes accessible through AI-powered approaches.

AI for Data: The Flip That Changes Everything

You don’t need perfect data to start using AI. That’s Data and AI thinking.

AI for Data starts with intelligence and lets structure emerge. Let your AI discover, join, and reason across your real‑world mess—not just your sanitized warehouse.

It’s a shift from enforcing models to exposing capabilities. From building integrations to unleashing agents. From waiting to acting while you learn.

If your organization is still waiting to “get the data right,” here’s your wake‑up call: you’re waiting for something AI no longer needs.

AI is ready. Your data is ready enough.

The only question left: Are you ready to flip the model?

Responsible AI: Enhance Human Judgment, Don’t Replace It

Artificial Intelligence (AI) is transforming businesses, streamlining processes, and providing insights previously unattainable at scale. However, it is crucial to keep in mind a fundamental principle best summarized by computing pioneer Grace Hopper:

“A computer can never be held accountable; therefore, a computer must never make a management decision.”

This simple yet profound guideline underscores the importance of human oversight in business-critical decision-making, a topic I’ve discussed previously in my posts about the Four Categories of AI Solutions and the necessity of balancing Speed vs Precision in AI Development.

Enhancing Human Decision-Making

AI should serve as an enabler rather than a disruptor. Its role is to provide support, suggestions, and insights, but never to autonomously make significant business decisions, especially those affecting financial outcomes, security protocols, or customer trust. Human oversight ensures accountability and ethical responsibility remain clear.

Security and Resilience

AI-powered systems, such as chatbots and customer interfaces, must be built with resilience against adversarial manipulation. Effective safeguards—including strict input validation and clearly defined output limitations—are critical. Human oversight must always be available as a fallback mechanism when the system encounters unforeseen scenarios.
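As a minimal sketch of such safeguards, a chat endpoint can validate input before it ever reaches the model and escalate to a human when validation fails. The limits and blocked markers below are illustrative assumptions, not a standard.

```python
# Sketch: minimal guardrails for an AI-facing chat endpoint.
# Limits and blocked markers are illustrative assumptions.

MAX_INPUT_CHARS = 2000
BLOCKED_MARKERS = ("ignore previous instructions", "system prompt")

def validate_input(text: str) -> tuple[bool, str]:
    """Return (ok, reason). Reject oversized or manipulative input."""
    if len(text) > MAX_INPUT_CHARS:
        return False, "input too long"
    lowered = text.lower()
    for marker in BLOCKED_MARKERS:
        if marker in lowered:
            return False, "possible prompt injection"
    return True, "ok"

def handle_message(text: str) -> str:
    ok, reason = validate_input(text)
    if not ok:
        # Fallback: route to a human instead of letting the model answer.
        return f"Escalated to human operator ({reason})"
    return "passed to model"
```

Real deployments layer more on top (output filtering, rate limits, audit logs), but the shape is the same: validate, constrain, and keep a human fallback.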

Balancing Core and Strategic AI Solutions

In earlier posts, I’ve outlined four categories of AI solutions ranging from simple integrations to complex, custom-built innovations. Core AI solutions typically leverage standard platforms with inherent governance frameworks, such as Microsoft’s suite of tools, making them relatively low-risk. Conversely, strategic AI solutions involve custom-built systems that provide significant business value but inherently carry higher risks, requiring stringent oversight and comprehensive governance.

Lean and Practical Governance

AI governance frameworks must scale appropriately with the level of risk involved. It’s important to avoid creating bureaucratic overhead for low-risk applications, while ensuring that more sensitive, strategic AI applications undergo thorough evaluations and incorporate stringent human oversight.

Humans-in-the-Loop

A “human-in-the-loop” approach is essential for managing AI-driven decisions that significantly impact financial transactions, security measures, or customer trust. While AI may suggest or recommend actions, final approval and accountability should always rest with human operators. Additionally, any autonomous action should be easily reversible by a human.
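Here’s a minimal sketch of such a gate: the AI may propose actions, but anything irreversible or above a threshold is queued for human approval. The threshold and action names are illustrative assumptions.

```python
# Sketch: a human-in-the-loop gate for AI-suggested actions.
# The threshold and example actions are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    impact_eur: float
    reversible: bool

@dataclass
class Decisions:
    executed: list = field(default_factory=list)
    pending_approval: list = field(default_factory=list)

AUTO_EXECUTE_LIMIT_EUR = 100.0

def route(action: Action, decisions: Decisions) -> None:
    """AI may suggest; humans approve anything significant or irreversible."""
    if action.reversible and action.impact_eur <= AUTO_EXECUTE_LIMIT_EUR:
        decisions.executed.append(action)
    else:
        decisions.pending_approval.append(action)

d = Decisions()
route(Action("re-send receipt email", 0.0, reversible=True), d)
route(Action("refund customer", 2500.0, reversible=False), d)
```

The refund lands in the approval queue; the email goes through automatically. Accountability stays with the human who clears the queue.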

Final Thoughts

AI offers tremendous potential for innovation and operational excellence. However, embracing AI responsibly means recognizing its limitations. AI should support and empower human decision-making—not replace it. By maintaining human oversight and clear accountability, we can leverage AI effectively while safeguarding against risks.

Ultimately, AI assists, but humans decide.

Accelerating Research at Carlsberg Research Laboratory using Scientific Computing

Introduction

Scientific discovery is no longer just about what happens in the lab—it’s about how we enable research through computing, automation, and AI. At Carlsberg Research Laboratory (CRL), our Accelerate Research initiative is designed to remove bottlenecks and drive breakthroughs by embedding cutting-edge technology into every step of the scientific process.

The Five Core Principles of Acceleration

To ensure our researchers can spend more time on discovery, we are focusing on:

  • Digitizing the Laboratory – Moving beyond manual processes to automated, IoT-enabled research environments.
  • Data Platform – Creating scalable, accessible, and AI-ready data infrastructure that eliminates data silos.
  • Reusable Workflows – Standardizing and automating research pipelines to improve efficiency and reproducibility.
  • High-Performance Computing (HPC) – Powering complex simulations and large-scale data analysis. We are also preparing for the future of quantum computing, which promises to transform how we model molecular behavior and simulate complex biochemical systems at unprecedented speed and scale.
  • Artificial Intelligence – Enhancing data analysis, predictions, and research automation beyond just generative AI.

The Expected Impact

By modernizing our approach, we aim to:

  • Reduce research setup time by up to 70%
  • Accelerate experiment iteration by 3x
  • Improve cross-team collaboration efficiency by 5x
  • Unlock deeper insights through AI-driven analysis and automation

We’re not just improving research at CRL; we’re redefining how scientific computing fuels innovation. The future of research is fast, automated, and AI-driven.

AI Without Borders: Why Accessibility Will Determine Your Organization’s Future

Introduction

Gartner recently announced that AI has moved past the peak of inflated expectations in its hype cycle, signaling a crucial transition from speculation to real-world impact. AI now stands at this inflection point—its potential undeniable, but its future hinges on whether it becomes an accessible, enabling force or remains restricted by excessive governance.

The Risk of Creating A and B Teams in AI

Recent developments, such as OpenAI’s foundational grants to universities, signal an emerging divide between those with AI access and those without. These grants are not just an academic initiative—they will accelerate disparities in AI capabilities, favoring institutions that can freely explore and integrate AI into research and innovation. The same divide is already forming in the corporate world.

Software engineers today are increasingly evaluating companies based on their AI adoption. When candidates ask in interviews whether an organization provides tools like GitHub Copilot, they are not just inquiring about productivity enhancements—they are assessing whether the company is on the cutting edge of AI adoption. Organizations that restrict AI access risk falling behind, unintentionally categorizing themselves into the “B Team,” making it harder to attract top talent and compete effectively.

Lessons from Past Industrial Revolutions

History provides clear lessons about the importance of accessibility in technological revolutions. Electricity, for example, was initially limited to specific industrial applications before it became a utility that fueled industries, powered homes, and transformed daily life. Similarly, computing evolved from expensive mainframes reserved for large enterprises to personal computers and now cloud computing, making advanced technology available to anyone with an internet connection.

AI should follow the same path.

However, excessive corporate governance could hinder its progress, while governmental governance remains essential to ensure AI is developed and used safely. Just as electricity transformed from an industrial novelty to the foundation of modern society, AI must follow a similar democratization path. Imagine if we had limited electricity to only certified engineers or specific departments—we would have stifled the innovation that brought us everything from household appliances to modern healthcare. Similarly, restricting AI access today could prevent us from discovering its most transformative applications tomorrow.

Governance Should Enable, Not Block

The key is not to abandon governance but to ensure it enables rather than blocks innovation. AI governance should focus on how AI is used, not who gets access to it. Restricting AI tools today is akin to limiting electricity to specialists a century ago—an approach that would have crippled progress.

The most successful AI implementations are those that integrate seamlessly into existing workflows. Tools like GitHub Copilot and Microsoft Copilot demonstrate how AI can enhance productivity when it is embedded within platforms that employees already use. The key is to govern AI responsibly without creating unnecessary friction that prevents widespread adoption.

The Competitive Divide is Already Here

The AI accessibility gap is no longer theoretical—it is already shaping the competitive landscape. Universities that receive OpenAI’s foundational grants will advance more rapidly than those without access. Companies that fully integrate AI into their daily operations will not only boost innovation but also become magnets for top talent. The question organizations must ask themselves is clear: Do we embrace AI as an enabler, or do we risk falling behind?

As history has shown, technology is most transformative when it is available to all. AI should be no different. The organizations that will thrive in the coming decade will be those that balance responsible governance with widespread AI accessibility—empowering their people to innovate rather than restricting them with excessive controls. The question isn’t whether you’ll adopt AI, but whether you’ll do it in a way that creates competitive advantage or competitive disadvantage.

Speed vs. Precision in AI Development

In my post “Four Categories of AI Solutions”, I categorized AI solutions by their level of innovation and complexity. Building on that framework, I’ve been thinking about how AI-assisted programming tools fit into this model—particularly the trade-off between speed and precision in software development.

Mapping Speed and Precision

The diagram below illustrates how tasks can be plotted along axes of speed and precision. As you move towards higher precision (e.g., creating backend services or system programming), speed naturally decreases, as does the immediate value AI tools provide. Conversely, low-precision tasks—like generating boilerplate code for frontend applications—enable high speed and provide quick wins.

Dave Farley seems to align with this observation in a recent video interview. The speakers noted that AI tools excel at accelerating well-defined, repetitive tasks, like building a simple mobile app or prototyping a frontend interface. These tasks fall into Category 1 of my model: low-complexity, high-speed solutions.

However, the further you move into Category 3+4—solutions requiring precision and contextual understanding—the less impactful these tools become. For instance, when my team uses GitHub Copilot for backend development, only about 20% of its suggested code is accepted. The rest lacks the precision or nuanced understanding needed for high-stakes backend systems.

The Speed-Precision Trade-Off

The interview also highlighted a critical concern: AI’s emphasis on generating code quickly can erode the incremental approach central to traditional programming. In precision-driven tasks, small, deliberate steps are essential to ensure reliability and minimize risk. By generating large amounts of code at once, AI tools risk losing this careful craftsmanship.

Yet this trade-off isn’t a flaw—it’s a characteristic of how these tools are designed. AI’s value lies in accelerating the routine and freeing up developers to focus on higher-order problems. For precision tasks, AI becomes an assistant rather than a solution, helping analyze systems, identify bugs, or suggest improvements.

The Four Categories Revisited

This balancing act between speed and precision ties directly into the “Four Categories of AI Solutions”:

  1. Category 1: High-speed, low-precision tasks like prototyping and boilerplate generation. AI tools thrive here.
  2. Category 2: Moderately complex applications, where AI can augment human effort but requires careful validation.
  3. Category 3: High-precision, low-speed systems programming or backend development. AI contributes less here, serving more as an analysis tool.
  4. Category 4: Novel, cutting-edge AI applications requiring custom-built solutions.

As we develop software with AI, understanding where these tools provide the most value—and where their limitations begin—is critical. For now, AI tools may help us write code faster, but when it comes to precision, the human touch remains irreplaceable.

AI-Engineer: A Distinct and Essential Skillset

Introduction

Artificial intelligence (AI) technologies have caused a dramatic change in software engineering. At the forefront of this revolution are AI-Engineers – the professionals who implement solutions within the ‘Builders’ category of AI adoption, as I outlined in my previous post on Four Categories of AI Solutions. These engineers not only harness the power of AI but also redefine the landscapes of industries.

As I recently discussed in “AI-Engineers: Why People Skills Are Central to AI Success”, organizations face a critical talent shortage in AI implementation. McKinsey’s research shows that 60% of companies cite talent shortages as a key risk in their AI adoption plans. This shortage makes understanding the AI Engineer role and its distinct skillset more crucial than ever.

But what is an AI-Engineer?

Core Skills of an AI Engineer

AI-Engineers are skilled software developers who can code in modern languages. They create the frameworks and software solutions that enable AI functionalities and make them work well with existing enterprise applications. While they need basic knowledge of machine learning and AI concepts, their primary focus differs from data engineers. Where data engineers mainly focus on writing and managing data models, AI-Engineers concentrate on building reliable, efficient, and scalable software solutions that integrate AI.

Strategic Business Outcomes

The role of AI-Engineers is crucial in translating technological advancements into strategic advantages. Their ability to navigate the complex landscape of AI tools and tailor solutions to specific business challenges underlines their unique role within the enterprise and software engineering specifically. By embedding AI into core processes, they help streamline operations and foster innovative product development.

Continuous Learning and Adaptability

As I described in “Keeping Up with GenAI: A Full-Time Job?”, the AI landscape shifts at a dizzying pace. Just like Loki’s time-slipping adventures, AI-Engineers find themselves constantly jumping between new releases, frameworks, and capabilities – each innovation demanding immediate attention and evaluation.

For AI-Engineers, this isn’t just about staying informed – it’s about rapidly evaluating which technologies can deliver real value. The platforms and communities that facilitate this learning, such as Hugging Face, become essential resources. However, merely keeping up isn’t enough. AI-Engineers must develop a strategic approach to:

  1. Evaluate new technologies against existing solutions
  2. Assess potential business impact before investing time in adoption
  3. Balance innovation with practical implementation
  4. Maintain stable systems while incorporating new capabilities

Real-World Impact

The real value of AI-Engineers becomes clear when we look at concrete implementation data. In my recent analysis of GitHub Copilot usage at Carlsberg (“GitHub Copilot Probably Saves 50% of Time for Developers”), we found fascinating patterns in AI tool adoption. While developers and GitHub claim the tool saves 50% of development time, the actual metrics tell a more nuanced story:

  • Copilot’s acceptance rate hovers around 20%, meaning developers typically use one-fifth of the suggested code
  • Even with this selective usage, developers report significant time savings because reviewing and modifying AI suggestions is faster than writing code from scratch
  • The tool generates substantial code volume, but AI-Engineers must carefully evaluate and adapt these suggestions

This real-world example highlights several key aspects of the AI-Engineer role:

  1. Tool Evaluation: AI-Engineers must look beyond marketing claims to understand actual implementation impact
  2. Integration Strategy: Success requires thoughtful integration of AI tools into existing development workflows
  3. Metric Definition: AI-Engineers need to establish meaningful metrics for measuring AI tool effectiveness
  4. Developer Experience: While pure efficiency gains may be hard to quantify, improvements in developer experience can be significant

These findings demonstrate why AI-Engineers need both technical expertise and practical judgment. They must balance the promise of AI automation with the reality of implementation, ensuring that AI tools enhance rather than complicate development processes.

Conclusion

AI Engineering is undeniably a distinct skill set, one that is becoming increasingly indispensable in AI transformation. As industries increasingly rely on AI to innovate and optimize, the demand for skilled AI Engineers who can both understand and shape this technology continues to grow. Their ability to navigate the rapid pace of change while delivering practical business value makes them essential to successful AI adoption. Most importantly, their role in critically evaluating and effectively implementing AI tools – as demonstrated by our Copilot metrics – shows why this specialized role is crucial for turning AI’s potential into real business value.

AI-Engineers: Why People Skills Are Central to AI Success

In my blog post “Four Categories of AI Solutions” (https://birkholm-buch.dk/2024/04/22/four-categories-of-ai-solutions), I outlined different approaches to building AI solutions, but it’s becoming increasingly clear that the decision on how to approach AI hinges on the talent and capabilities within the organization. As AI continues to evolve at lightning speed, companies everywhere are racing to adopt the latest innovations. Whether it’s generative AI, machine learning, or predictive analytics, organizations see AI as a strategic advantage. But as exciting as these technologies are, they come with a less glamorous reality—people skills are the make-or-break factor in achieving long-term AI success.

The Talent Crunch: A Major Barrier to AI Success

Reports from both McKinsey and Gartner consistently highlight a serious shortage of skilled AI talent. McKinsey’s latest research suggests that AI adoption has plateaued for many companies not due to a lack of use cases, but because they don’t have the talent required to execute their AI strategies effectively. 60% of companies cite talent shortages as a key risk in their AI adoption plans. (McKinsey State of AI 2023: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year).

This talent crunch means that organizations are finding it difficult to retain and attract professionals with the skills needed to develop, manage, and scale AI initiatives. 58% of organizations report significant gaps in AI talent, with 72% lacking machine learning engineering skills and 68% short on data science skills (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

The New Frontier of AI Engineering Talent

Given the rapid pace of change in AI tools and techniques, developing an internal team of AI Engineers is one of the most sustainable strategies for companies looking to stay competitive. AI Engineers are those with the expertise to design, build, and maintain custom AI solutions tailored to the specific needs of the organization.

By 2026, Gartner predicts that 30% of new AI systems will require specialized AI talent, making the need for AI Builders even more urgent (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

The Challenge of Retaining AI Talent

The retention challenge in AI isn’t just about compensation—it’s about providing meaningful work, opportunities for continuous learning, and a sense of ownership over projects. Diverse AI teams are essential for preventing biases in models and creating more robust, well-rounded AI systems. Offering inclusive, supportive environments where AI engineers can grow professionally and personally is essential to keeping top talent engaged (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

Skills Development: A Competitive Advantage

AI is only as good as the team behind it. As Gartner highlights, the true value of AI will be realized when companies can seamlessly integrate AI tools into their existing workflows. This integration requires not just the right tools but the right people to design, deploy, and optimize these solutions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

Companies that prioritize skills development will have a competitive edge. AI is advancing so quickly that organizations can no longer rely solely on external vendors to provide off-the-shelf solutions. Building an internal team of AI Builders—engineers who are continually learning and improving their craft—is essential to staying ahead of the curve. Offering employees opportunities to reskill and upskill is no longer optional; it’s a necessity for retaining talent and remaining competitive in the AI-driven economy.

Looking Ahead: The Evolving Role of AI Engineers

As organizations continue to explore AI adoption, the need for specialized AI models is becoming more apparent. Gartner predicts that by 2027, over 50% of enterprises will deploy domain-specific Large Language Models (LLMs) tailored to either their industry or specific business functions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

I believe the role of AI Engineers will continue to grow in importance and complexity. We might see new specializations emerge, such as AI ethics officers ensuring responsible AI use, or AI-human interaction designers creating seamless experiences. 

In my view, the most valuable AI Engineers of the future could be those who can not only master the technology but also understand its broader implications for business and society. They might need to navigate complex ethical considerations, adapt to rapidly changing regulatory landscapes, and bridge the gap between technical capabilities and business needs.

Conclusion: Investing in People is Investing in AI Success

The success of any AI initiative rests on the quality and adaptability of the people behind it. For organizations aiming to lead in the AI space, the focus should be on creating a workforce of AI Engineers—skilled professionals who can navigate the ever-changing landscape of AI technologies.

The lesson is clear: to succeed with AI, invest in your people first. The future of AI is not just about algorithms and data—it’s about the humans who will shape and guide its development and application.

Keeping Up with GenAI: A Full-Time Job?

As a technologist, it feels like I’m constantly time slipping — like Loki in Season 2 of his show. Every time I’ve finally wrapped my head around one groundbreaking AI technology, another one comes crashing in, pulling me out of my flow. Just like Loki’s painful and disorienting jumps between timelines, I’m yanked from one new release or framework to the next, barely catching my breath before being dropped into the middle of another innovation.

In the last week alone, we’ve had a flood of announcements that make it impossible to stand still for even a second. Here’s just a glimpse:

  1. OpenAI Swarm (October 16, 2024)
    OpenAI dropped “Swarm,” an open-source framework for managing multiple autonomous AI agents—a move that could redefine how we approach collaborative AI and automation.
    https://github.com/openai/swarm
  2. WorldCoin Rebrands as World (October 18, 2024)
    WorldCoin’s new Orb is yet another reminder that biometric and blockchain technology continues to merge in unpredictable ways. With its iris-scanning Orb, it’s trying to push a global financial identity system—a concept that sparks as much excitement as concern, especially around data privacy.
    https://www.theverge.com/2024/10/18/24273691/world-orb-sam-altman-iris-scan-crypto-token
  3. Microsoft’s Copilot with Autonomous Agents (October 21, 2024)
    Microsoft’s second wave of Copilot introduces agents that can perform complex tasks autonomously, elevating Copilot from an assistant to a decision-making, workflow-automating powerhouse.
    https://www.microsoft.com/en-us/microsoft-copilot/blog/copilot-studio/unlocking-autonomous-agent-capabilities-with-microsoft-copilot-studio/
  4. Anthropic’s New Models & ‘Computer Use’ Feature (October 22, 2024)
    Anthropic has introduced the latest Claude 3.5 models, including a new ‘computer use’ feature that allows AI to interact directly with applications. This marks a major shift toward AI being able to execute tasks like filling out forms and interacting with user interfaces, making it a significant leap in real-world functionality.
    https://www.anthropic.com/news/claude-3-5-sonnet

The Data Speaks for Itself

These developments are not isolated. The latest Stanford AI Index Report 2024 provides some staggering numbers that put this onslaught of innovation into perspective:

  • 149 foundation models were released in 2023, more than double the number from 2022, and a whopping 65.7% were open-source. That’s an overwhelming volume of tools, each requiring careful consideration for their potential applications.
  • The number of AI-related publications has tripled since 2010, reaching over 240,000 by 2022. Staying on top of this research is an almost Sisyphean task, but these papers provide the groundwork for the rapid-fire advancements we’re seeing.
  • Generative AI investments exploded to $25.2 billion in 2023, nearly eight times the previous year. This boom in funding is driving the constant stream of new AI tools and capabilities, each promising to reshape the landscape.
  • AI was mentioned in 394 earnings calls across nearly 80% of Fortune 500 companies in 2023, an enormous jump from the 266 mentions in 2022. The sheer presence of AI in corporate strategy highlights how central this technology has become to every industry.

The Technologist’s Dilemma

The rapid pace of AI advancements presents an overwhelming challenge for technologists. Each new tool or framework isn’t just a minor update—it’s potentially transformative, requiring deep understanding and immediate adaptation. For those managing teams and implementing technological advancements, this fast-moving landscape demands constant learning and vigilance.

For more details and insights on the pace of AI developments, you can dive into the Stanford AI Index Report 2024 at this link:
https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf

Now I can go to sleep 😴

Do you have GitHub Copilot?

…is a question I’ve been getting more and more often in job interviews over the past year, and when I answer that we’ve been using it for almost two years, I see happy faces.

So access to GitHub Copilot is not only a key decision-making factor for software engineers looking to join your organization; it also probably saves developers 50% of their time and drives a better developer experience, as I’ve covered in “GitHub Copilot Probably Saves 50% of Time for Developers” and “GitHub Copilot drives better Developer Experience”.

GitHub was also named a Leader in the Gartner first-ever Magic Quadrant for AI Code Assistants: https://github.blog/news-insights/company-news/github-named-a-leader-in-the-gartner-first-ever-magic-quadrant-for-ai-code-assistants

So if you’re a Software Engineering Leader there’s really no (business) reason not to get GitHub Copilot (or any other AI coding assistant) for your developers – it will (soon) be a requirement for new hires.

GitHub Copilot Probably Saves 50% of Time for Developers

Introduction

Recently GitHub released the GitHub Copilot Metrics API, which gives customers the ability to see how Copilot is used, and, as usual, someone created an open-source tool to view the data: github-copilot-resources/copilot-metrics-viewer.

So let’s take a look at Copilot usage in Software Engineering at Carlsberg from the end of May to the end of June 2024.

I’m focusing on the following three metrics:

  • Total Suggestions
  • Total Lines Suggested
  • Acceptance Rate

I think these are useful for understanding how effective Copilot is, and I would like to get closer to an actual measure of Copilot’s usefulness rather than the broad statement, offered by both GitHub and our own developers, that it saves 50% of their time.

The missing data in the charts is due to an error in the GitHub data pipeline at the time of writing and data will be made available at a later stage.

The low usage in the middle of June is due to some public holidays with lots of people taking time off.

Total Suggestions

Total Lines Suggested: Showcases the total number of lines of code suggested by GitHub Copilot. This gives an idea of the volume of code generation and assistance provided.

Total Lines Suggested

Total Lines Accepted: The total lines of code accepted by users (full acceptances), offering insight into how much of the suggested code is actually incorporated into the codebase.

Acceptance Rate

Acceptance Rate: This metric represents the ratio of accepted lines to the total lines suggested by GitHub Copilot. This rate is an indicator of the relevance and usefulness of Copilot’s suggestions.
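The calculation itself is trivial. Here’s a sketch using made-up daily figures shaped like the Metrics API data; the field names and numbers are illustrative, not the API’s exact schema or our real export.

```python
# Sketch: computing the acceptance rate from daily Copilot metrics.
# Field names and figures are illustrative, not real export data.

daily_metrics = [
    {"day": "2024-06-03", "lines_suggested": 5200, "lines_accepted": 1010},
    {"day": "2024-06-04", "lines_suggested": 4800, "lines_accepted": 980},
    {"day": "2024-06-05", "lines_suggested": 5100, "lines_accepted": 1060},
]

suggested = sum(d["lines_suggested"] for d in daily_metrics)
accepted = sum(d["lines_accepted"] for d in daily_metrics)
acceptance_rate = accepted / suggested  # roughly 0.20 for these figures

print(f"Acceptance rate: {acceptance_rate:.1%}")
```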

Conclusion

The overall acceptance rate is about 20%, which resonates with my experience: Copilot tends to either slightly miss the objective or be verbose, so you have to trim or change a lot of code. If Copilot suggests 100 lines of code, you end up accepting 20.

Does this then align with the statements from developers in Software Engineering and GitHub which claim that you save 50% of time using Copilot?

Clearly reviewing and changing code is faster than writing, so even if you end up only using 20% of the suggested code, you will save time.

Unfortunately we don’t track actual time to complete tasks in Jira, so we don’t have hard data to prove the claim.

But is the claim true? Probably – however, I’m 100% convinced that GitHub Copilot drives better Developer Experience.


© 2025 Peter Birkholm-Buch
