Stuff about Software Engineering


The Culture Prism: What We Tolerate Defines Us

Gartner’s “culture prism” hit me like a hammer:

[Image: Gartner’s culture prism, via LinkedIn]

I’ve always believed that leadership starts with example — that if I live the right values, others will follow.

But the prism made me realize something uncomfortable: I’ve spent years explaining what good looks like, and far too little time explaining what bad looks like — or having the hard conversations when it happens.

In society, we start with what’s not acceptable: you can’t kill, you can’t steal, you can’t harm others — and everything else is up to you.

In organizations, we flip it. We talk about performance, excellence, and continuous improvement, but we rarely say what we won’t accept.

The result? The wrong behaviors quietly take root because nobody said stop.

Silence is not neutrality. Silence is permission.

When leaders ignore people, withhold feedback, or use offence as defence, they’re signalling that learning is dangerous and honesty is punished. That’s the opposite of continuous improvement — it breaks both the First Way (never pass a defect downstream) and the Third Way (create a culture of continual learning and experimentation) from The Three Ways of DevOps (IT Revolution – Gene Kim).

Culture isn’t built by posters or handbooks; it’s built in the small moments where someone chooses to speak up — or not.

So maybe the next evolution of our leadership handbooks shouldn’t just describe the desired behaviors. It should also draw the hard lines:

  • We don’t ignore people.
  • We don’t punish those who raise problems.
  • We don’t weaponize authority.
  • We don’t stay silent when others do.

The prism reminds us that shaping culture isn’t just about promoting excellence — it’s about refusing mediocrity in character.

Conway’s Law and the Rise of Platform Engineering: Are We Just Fixing the Silos We Created?

I recently came across a Danish article from Globeteam about how Platform Engineering can drive growth and efficiency. The experts they interviewed weren’t wrong—PE absolutely can deliver those benefits. But reading it made me think about why Platform Engineering has become such a hot topic in the first place.

Melvin Conway observed back in 1968 that “any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” Over the decades, this became Conway’s Law, dutifully cited in architecture presentations everywhere. But I think we’re living through its most ironic chapter yet: the rise of Developer Platforms and Platform Engineering as desperate attempts to fix the very silos we designed into our organizations.

When Engineers Are Organized in Silos

When you organize engineers into business-aligned tribes or product domains, they inevitably build siloed systems. Not because they’re being difficult, but because that’s what the structure incentivizes. Each team starts building its own tools, pipelines, and cloud configurations. Cross-team collaboration becomes an act of heroism instead of the default way of working.

The Spotify-inspired model accelerated this problem. It optimized for autonomy but not alignment. When everyone owns their piece of the world, no one owns the whole. I’ve written before in Balancing Autonomy and Alignment in Engineering Teams about why I organize engineers into a single reporting line rather than under product ownership—it’s specifically to avoid this fragmentation.

The Platform That’s Also a Silo

Eventually, the fragmentation becomes impossible to ignore. Someone draws a diagram showing duplicate CI/CD pipelines, dozens of competing Terraform modules, three different secrets managers, and five ways of provisioning a Kubernetes cluster. So naturally, someone says we need a Developer Platform to unify all of this.

But here’s the problem: the team building that platform usually sits inside its own silo. Another specialized function, another reporting line, another backlog disconnected from product delivery. The result is that we now have siloed platforms, each optimized for its own part of the business but still lacking a shared engineering identity.

Platform Engineering’s Promise and Paradox

This is where Platform Engineering enters the picture—building infrastructure and tooling that cuts across silos to standardize, simplify, and accelerate development. The Globeteam article emphasizes exactly these benefits: reduced manual work, faster time-to-market, better developer experience.

And those benefits are real. When we built Gaia at Carlsberg, we absolutely achieved them. We went from infrastructure provisioning taking weeks to taking minutes, and we cut manual DevOps work by 80%. Developers got self-service capabilities embedded directly into their GitHub workflow.

But even here, Conway’s Law lurks. Most organizations create a Platform Engineering department that is itself a silo. They end up maintaining a shared platform for the organization rather than with it. We’ve just added another layer, another interface, another team managing integration—effectively encoding organizational fragmentation into the technology stack.

What Actually Worked for Us

The reason Gaia succeeded wasn’t just the technology. It was because we didn’t treat the platform team as a separate silo. The platform engineering team is part of the broader engineering organization, working with the same standards, participating in the same guilds, aligned on the same methods. When we built Gaia’s golden path, it wasn’t a platform team dictating to developers—it was engineers building tools for other engineers based on shared understanding.

Conway’s Law wasn’t meant to be a trap. The point is that we can design our structures deliberately to achieve the systems we want. If our goal is coherent systems, then the organization itself must be coherent. That means engineers need to be organized as engineers, not divided by business lines or pseudo-tribes where technical collaboration is optional.

Platform Engineering as Symptom, Not Just Solution

Platform Engineering didn’t rise because we suddenly discovered a better way to do DevOps. It rose because many organizations lost sight of what engineering fundamentally is: a collaborative discipline. We created silos, then tried to fix them with technology. But every time we add another layer without fixing the underlying organizational structure, we risk making the problem worse.

The best developer experience doesn’t come from layers of abstraction or governance. It comes from removing the barriers that make collaboration difficult in the first place. When engineers work together as peers across products, domains, and technologies, you don’t need to build elaborate platforms to unify them. Their shared way of working becomes the platform.

This aligns with what I wrote in Balancing Autonomy and Alignment in Engineering Teams—alignment isn’t the opposite of autonomy, it’s what makes autonomy sustainable. You can give teams independence precisely because they’re working from a shared foundation of methods, tools, and standards. Our DevEx analysis against Gartner benchmarks showed this approach scoring 4.3/5, with particular strength in the areas of autonomy and cultural alignment.

So yes, Platform Engineering can absolutely drive growth and efficiency, as the Globeteam experts argue. But only if we recognize it as both a solution and a symptom. A symptom of organizational structures that work against collaboration rather than enabling it. The next evolution might not be another platform at all—it might just be building organizations where engineers can work together by default, not by exception.

The Future of Consulting: How Value Delivery Models Drive Better Client Outcomes

Introduction

The consulting industry, particularly within software engineering, is shifting away from traditional hourly billing towards value-driven and outcome-focused engagements. This evolution aligns with broader trends identified by industry analysts like Gartner and McKinsey, emphasizing outcomes and measurable impacts over time-based compensation (Gartner Hype Cycle for Consulting Services, 2023; McKinsey, The State of Organizations 2023).

Shift from Hourly Rates to Value Delivery

Traditional hourly billing often incentivizes cost reduction at the expense of quality, creating tension between minimizing expenses and achieving meaningful results. The emerging approach—value-based consulting—aligns compensation directly with specified outcomes or deliverables. For instance, many consulting firms now employ fixed-price projects or performance-based contracts that clearly link payment to the achievement of specific business results, improving alignment and encouraging deeper collaboration between clients and consultants.

According to McKinsey’s report “The State of Organizations 2023”, approximately 40% of consulting engagements are shifting towards value-based models, highlighting the industry’s evolution (https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-state-of-organizations-2023).

Leveraging Scrum and Agile Methodologies

Agile methodologies, especially Scrum, facilitate this shift by naturally aligning consulting work with measurable outputs and iterative improvements. In Scrum, value is measured through clearly defined user stories, regular sprint reviews, and tangible deliverables evaluated continuously by stakeholders. These iterative deliveries provide clear visibility into incremental progress, effectively replacing hourly tracking with meaningful metrics of success.

Challenges in Adopting Value-Based Models

Transitioning to value-based consulting is not without its challenges. Firms may encounter difficulties in accurately defining and measuring value upfront, aligning expectations, and managing the inherent risks of outcome-based agreements. Overcoming these challenges typically requires transparent communication, clear contract terms, and robust stakeholder engagement from project inception.

AI and Human Oversight

While there is significant enthusiasm and concern surrounding AI, its role remains primarily augmentative rather than fully autonomous, particularly in high-stakes decision-making. Human oversight ensures AI-driven solutions remain precise and contextually appropriate, directly supporting high-quality, outcome-focused consulting. This perspective aligns with insights discussed in Responsible AI: Enhance Human Judgment, Don’t Replace It.

Balancing Speed and Precision

AI offers substantial gains in speed but often involves trade-offs in precision. Certain fields, such as financial services or critical infrastructure, require exactness, making human judgment essential in balancing these considerations. This topic, explored in detail in Speed vs. Precision in AI Development, highlights how value-driven consulting must thoughtfully integrate AI to enhance outcomes without sacrificing accuracy.

Conclusion

The shift to outcome-focused consulting models, supported by agile frameworks and thoughtful AI integration, represents a significant evolution in the industry. By prioritizing measurable value and clearly defined outcomes over hourly rates, consulting engagements become more impactful and sustainable.

Responsible AI: Enhance Human Judgment, Don’t Replace It

Artificial Intelligence (AI) is transforming businesses, streamlining processes, and providing insights previously unattainable at scale. However, it is crucial to keep in mind a fundamental principle, often attributed to computing pioneer Grace Hopper (it appears to have originated in a 1979 IBM training manual):

“A computer can never be held accountable; therefore, a computer must never make a management decision.”

This simple yet profound guideline underscores the importance of human oversight in business-critical decision-making, a topic I’ve discussed previously in my posts about the Four Categories of AI Solutions and the necessity of balancing Speed vs Precision in AI Development.

Enhancing Human Decision-Making

AI should serve as an enabler rather than a disruptor. Its role is to provide support, suggestions, and insights, but never to autonomously make significant business decisions, especially those affecting financial outcomes, security protocols, or customer trust. Human oversight ensures accountability and ethical responsibility remain clear.

Security and Resilience

AI-powered systems, such as chatbots and customer interfaces, must be built with resilience against adversarial manipulation. Effective safeguards—including strict input validation and clearly defined output limitations—are critical. Human oversight must always be available as a fallback mechanism when the system encounters unforeseen scenarios.
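
To make this concrete, here’s a minimal sketch of what those safeguards might look like around a chatbot. Every name, threshold, and pattern below is an illustrative assumption, not a real framework API:

    from typing import Callable

    MAX_INPUT_CHARS = 2000  # hypothetical input limit

    def guarded_reply(user_text: str,
                      model_reply: Callable[[str], str],
                      escalate_to_human: Callable[[str], str]) -> str:
        # Strict input validation: reject oversized input before it reaches the model.
        if len(user_text) > MAX_INPUT_CHARS:
            return "Your message is too long. Please shorten it and try again."
        # Basic adversarial screen: obvious prompt-injection phrasing goes to a human.
        if "ignore previous instructions" in user_text.lower():
            return escalate_to_human(user_text)
        reply = model_reply(user_text)
        # Clearly defined output limitations: the bot never commits to actions
        # with financial consequences; those are handed to human oversight.
        if any(word in reply.lower() for word in ("refund", "discount", "credit")):
            return escalate_to_human(user_text)
        return reply

The specific checks matter less than the shape: validation happens before the model is called, output is constrained after, and a human path always exists.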

Balancing Core and Strategic AI Solutions

In earlier posts, I’ve outlined four categories of AI solutions ranging from simple integrations to complex, custom-built innovations. Core AI solutions typically leverage standard platforms with inherent governance frameworks, such as Microsoft’s suite of tools, making them relatively low-risk. Conversely, strategic AI solutions involve custom-built systems that provide significant business value but inherently carry higher risks, requiring stringent oversight and comprehensive governance.

Lean and Practical Governance

AI governance frameworks must scale appropriately with the level of risk involved. It’s important to avoid creating bureaucratic overhead for low-risk applications, while ensuring that more sensitive, strategic AI applications undergo thorough evaluations and incorporate stringent human oversight.

Humans-in-the-Loop

A “human-in-the-loop” approach is essential for managing AI-driven decisions that significantly impact financial transactions, security measures, or customer trust. While AI may suggest or recommend actions, final approval and accountability should always rest with human operators. Additionally, any autonomous action should be easily reversible by a human.
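
As a minimal sketch of that pattern (all names here are hypothetical, not any real framework), the gate below lets the AI propose an action, keeps final approval with a human, and requires an undo to exist before anything runs:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ProposedAction:
        description: str                # what the AI recommends and why
        execute: Callable[[], None]     # the action itself
        undo: Callable[[], None]        # every autonomous action must be reversible

    def apply_with_oversight(action: ProposedAction,
                             human_approves: Callable[[ProposedAction], bool]) -> bool:
        # Final approval and accountability rest with a human operator.
        if not human_approves(action):
            return False                # rejected: nothing executes
        action.execute()
        return True

The two properties that matter are that execute never runs before a person says yes, and that undo is defined up front rather than improvised after the fact.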

Final Thoughts

AI offers tremendous potential for innovation and operational excellence. However, embracing AI responsibly means recognizing its limitations. AI should support and empower human decision-making—not replace it. By maintaining human oversight and clear accountability, we can leverage AI effectively while safeguarding against risks.

Ultimately, AI assists, but humans decide.

AI Without Borders: Why Accessibility Will Determine Your Organization’s Future

Introduction

Gartner recently announced that AI has moved past the peak of inflated expectations in its hype cycle, signaling a crucial transition from speculation to real-world impact. AI now stands at this inflection point—its potential undeniable, but its future hinges on whether it becomes an accessible, enabling force or remains restricted by excessive governance.

The Risk of Creating A and B Teams in AI

Recent developments, such as OpenAI’s foundational grants to universities, signal an emerging divide between those with AI access and those without. These grants are not just an academic initiative—they will accelerate disparities in AI capabilities, favoring institutions that can freely explore and integrate AI into research and innovation. The same divide is already forming in the corporate world.

Software engineers today are increasingly evaluating companies based on their AI adoption. When candidates ask in interviews whether an organization provides tools like GitHub Copilot, they are not just inquiring about productivity enhancements—they are assessing whether the company is on the cutting edge of AI adoption. Organizations that restrict AI access risk falling behind, unintentionally categorizing themselves into the “B Team,” making it harder to attract top talent and compete effectively.

Lessons from Past Industrial Revolutions

History provides clear lessons about the importance of accessibility in technological revolutions. Electricity, for example, was initially limited to specific industrial applications before it became a utility that fueled industries, powered homes, and transformed daily life. Similarly, computing evolved from expensive mainframes reserved for large enterprises to personal computers and now cloud computing, making advanced technology available to anyone with an internet connection.

AI should follow the same path.

However, excessive corporate governance could hinder its progress, while governmental governance remains essential to ensure AI is developed and used safely. Just as electricity transformed from an industrial novelty to the foundation of modern society, AI must follow a similar democratization path. Imagine if we had limited electricity to only certified engineers or specific departments—we would have stifled the innovation that brought us everything from household appliances to modern healthcare. Similarly, restricting AI access today could prevent us from discovering its most transformative applications tomorrow.

Governance Should Enable, Not Block

The key is not to abandon governance but to ensure it enables rather than blocks innovation. AI governance should focus on how AI is used, not who gets access to it. Restricting AI tools today is akin to limiting electricity to specialists a century ago—an approach that would have crippled progress.

The most successful AI implementations are those that integrate seamlessly into existing workflows. Tools like GitHub Copilot and Microsoft Copilot demonstrate how AI can enhance productivity when it is embedded within platforms that employees already use. The key is to govern AI responsibly without creating unnecessary friction that prevents widespread adoption.

The Competitive Divide is Already Here

The AI accessibility gap is no longer theoretical—it is already shaping the competitive landscape. Universities that receive OpenAI’s foundational grants will advance more rapidly than those without access. Companies that fully integrate AI into their daily operations will not only boost innovation but also become magnets for top talent. The question organizations must ask themselves is clear: Do we embrace AI as an enabler, or do we risk falling behind?

As history has shown, technology is most transformative when it is available to all. AI should be no different. The organizations that will thrive in the coming decade will be those that balance responsible governance with widespread AI accessibility—empowering their people to innovate rather than restricting them with excessive controls. The question isn’t whether you’ll adopt AI, but whether you’ll do it in a way that creates competitive advantage or competitive disadvantage.

Why 100% Utilization Kills Innovation: The Mathematical Reality

Imagine a highway at 100% capacity. Traffic doesn’t just slow down—it stops completely. A single broken-down car causes massive ripple effects because there’s no buffer space to absorb the variation. This isn’t just an analogy; it’s mathematics. And the same principle explains why running teams at full capacity mathematically guarantees the death of innovation.

The Queue Theory Reality

In 1961, mathematician J.F.C. Kingman proved something remarkable: as utilization approaches 100%, average delays grow without bound, scaling roughly with ρ/(1−ρ). This finding, known as Kingman’s Formula, demonstrates that systems operating at full capacity don’t just slow down linearly; they break down dramatically. Hopp and Spearman’s seminal work “Factory Physics” (2000) further established that optimal system performance occurs at around 80% utilization, giving rise to the “80% Rule” in operations management.

This isn’t opinion or management theory; it’s mathematics (the sketch after this list puts numbers on it). When utilization exceeds 80-85%, systems experience:

  • Steeply increasing delays
  • Inability to handle normal variation
  • Cascading disruptions from small problems
  • Deteriorating performance across all metrics
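
To see how steep that curve is, here’s a small numerical sketch of Kingman’s approximation for the mean wait in a single-server queue, E[W] ≈ ρ/(1−ρ) × (c_a² + c_s²)/2 × τ, where ρ is utilization, c_a and c_s are the coefficients of variation of arrivals and service, and τ is the mean service time. The parameter values are illustrative assumptions:

    # Kingman's approximation for mean queueing delay in a G/G/1 queue.
    # ca = cs = 1.0 models moderate variability; service_time is one task-day.
    def kingman_wait(utilization: float, ca: float = 1.0, cs: float = 1.0,
                     service_time: float = 1.0) -> float:
        rho = utilization
        return (rho / (1 - rho)) * ((ca**2 + cs**2) / 2) * service_time

    for u in (0.50, 0.80, 0.90, 0.95, 0.99):
        print(f"{u:.0%} utilization -> average wait of {kingman_wait(u):.1f} task-days")
    # 50% -> 1.0, 80% -> 4.0, 90% -> 9.0, 95% -> 19.0, 99% -> 99.0

From 80% to 99% utilization the average wait grows by a factor of roughly 25 with no change to the workload itself; that is the breakdown the bullet points above describe.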

The Human System Connection

People and teams are systems too: just as a machine’s productivity is limited by its operational capacity, human minds are constrained by cognitive load. Cognitive load research pioneers Sweller and Chandler demonstrated that mental capacity follows similar patterns, revealing something crucial: minds at 100% capacity lose the ability to process new information effectively. Just as a fully utilized highway can’t absorb a single additional car, a fully utilized mind can’t absorb new ideas or opportunities.

The implications are profound: innovation requires spare capacity. This isn’t about working less—it’s about maintaining the mental and temporal space required for creative thinking and problem-solving. Studies of innovation consistently show that breakthrough ideas emerge when people have the bandwidth to:

  • Notice unexpected patterns
  • Explore new connections
  • Experiment with different approaches
  • Learn from failures

The Three Horizons Impact

McKinsey’s Three Horizons Framework provides a useful lens for understanding innovation timeframes:

  • Horizon 1: Improving current business
  • Horizon 2: Extending into new areas
  • Horizon 3: Creating transformative opportunities

Here’s where queue theory delivers its killing blow to innovation: At 100% utilization, everything becomes Horizon 1 by mathematical necessity. When a system (human or organizational) operates at full capacity, it can only handle what’s already in the queue. New opportunities, no matter how promising, must wait. Over time, Horizons 2 and 3 don’t just suffer—they become mathematically impossible.

To keep Horizons 2 and 3 viable, companies need to intentionally limit Horizon 1 resource utilization and leave room for creative and exploratory projects.

The Innovation Impossibility

Queue theory proves that running at 100% utilization:

  • Makes delays inevitable
  • Eliminates flexibility
  • Prevents absorption of variation
  • Blocks capacity for new initiatives

Therefore, organizations face a mathematical certainty: maintain 100% utilization or maintain innovation capability. You cannot have both. This isn’t a management choice or cultural issue—it’s as fundamental as gravity.

The solution isn’t working less—it’s working smarter. Just as highways need buffer capacity to function effectively, organizations need spare capacity to innovate. The 80% rule isn’t about reduced output; it’s about maintaining the space required for sustainable performance and growth.

The choice is clear: accept the mathematical reality that innovation requires spare capacity, or continue pushing for 100% utilization while wondering why transformative innovation never seems to happen.

References:

  • Kingman, J.F.C. (1961). “The Single Server Queue in Heavy Traffic.” Proceedings of the Cambridge Philosophical Society, 57(4).
  • Hopp, W.J., & Spearman, M.L. (2000). Factory Physics: Foundations of Manufacturing Management.
  • McKinsey & Company (2009). “Enduring Ideas: The Three Horizons of Growth.” McKinsey Quarterly.
  • Chandler, P., & Sweller, J. (1991). “Cognitive Load Theory and the Format of Instruction.” Cognition and Instruction, 8(4).

AI-Engineer: A Distinct and Essential Skillset

Introduction

Artificial intelligence (AI) technologies have caused a dramatic change in software engineering. At the forefront of this revolution are AI-Engineers – the professionals who implement solutions within the ‘Builders’ category of AI adoption, as I outlined in my previous post on Four Categories of AI Solutions. These engineers not only harness the power of AI but also redefine the landscapes of industries.

As I recently discussed in “AI-Engineers: Why People Skills Are Central to AI Success”, organizations face a critical talent shortage in AI implementation. McKinsey’s research shows that 60% of companies cite talent shortages as a key risk in their AI adoption plans. This shortage makes understanding the AI-Engineer role and its distinct skillset more crucial than ever.

But what is an AI-Engineer?

Core Skills of an AI Engineer

AI-Engineers are skilled software developers who can code in modern languages. They create the frameworks and software solutions that enable AI functionalities and make them work well with existing enterprise applications. While they need basic knowledge of machine learning and AI concepts, their primary focus differs from data engineers. Where data engineers mainly focus on writing and managing data models, AI-Engineers concentrate on building reliable, efficient, and scalable software solutions that integrate AI.

Strategic Business Outcomes

The role of AI-Engineers is crucial in translating technological advancements into strategic advantages. Their ability to navigate the complex landscape of AI tools and tailor solutions to specific business challenges underlines their unique role within the enterprise and software engineering specifically. By embedding AI into core processes, they help streamline operations and foster innovative product development.

Continuous Learning and Adaptability

As I described in “Keeping Up with GenAI: A Full-Time Job?”, the AI landscape shifts at a dizzying pace. Just like Loki’s time-slipping adventures, AI-Engineers find themselves constantly jumping between new releases, frameworks, and capabilities – each innovation demanding immediate attention and evaluation.

For AI-Engineers, this isn’t just about staying informed – it’s about rapidly evaluating which technologies can deliver real value. The platforms and communities that facilitate this learning, such as Hugging Face, become essential resources. However, merely keeping up isn’t enough. AI-Engineers must develop a strategic approach to:

  1. Evaluate new technologies against existing solutions
  2. Assess potential business impact before investing time in adoption
  3. Balance innovation with practical implementation
  4. Maintain stable systems while incorporating new capabilities

Real-World Impact

The real value of AI-Engineers becomes clear when we look at concrete implementation data. In my recent analysis of GitHub Copilot usage at Carlsberg (“GitHub Copilot Probably Saves 50% of Time for Developers”), we found fascinating patterns in AI tool adoption. While developers and GitHub claim the tool saves 50% of development time, the actual metrics tell a more nuanced story:

  • Copilot’s acceptance rate hovers around 20%, meaning developers typically use one-fifth of the suggested code
  • Even with this selective usage, developers report significant time savings because reviewing and modifying AI suggestions is faster than writing code from scratch
  • The tool generates substantial code volume, but AI-Engineers must carefully evaluate and adapt these suggestions

This real-world example highlights several key aspects of the AI-Engineer role:

  1. Tool Evaluation: AI-Engineers must look beyond marketing claims to understand actual implementation impact
  2. Integration Strategy: Success requires thoughtful integration of AI tools into existing development workflows
  3. Metric Definition: AI-Engineers need to establish meaningful metrics for measuring AI tool effectiveness
  4. Developer Experience: While pure efficiency gains may be hard to quantify, improvements in developer experience can be significant

These findings demonstrate why AI-Engineers need both technical expertise and practical judgment. They must balance the promise of AI automation with the reality of implementation, ensuring that AI tools enhance rather than complicate development processes.
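
As a concrete illustration of the metric-definition point, the sketch below shows how an acceptance rate might be computed from exported usage data. The record shape is a made-up example, not GitHub’s actual metrics API:

    # Acceptance rate from hypothetical Copilot usage exports.
    records = [
        {"dev": "alice", "suggested_lines": 1200, "accepted_lines": 250},
        {"dev": "bob",   "suggested_lines": 800,  "accepted_lines": 160},
        {"dev": "carol", "suggested_lines": 1000, "accepted_lines": 190},
    ]

    suggested = sum(r["suggested_lines"] for r in records)
    accepted = sum(r["accepted_lines"] for r in records)
    print(f"Acceptance rate: {accepted / suggested:.0%}")  # -> 20%

An acceptance rate on its own says nothing about time saved, which is why we pair it with developer-reported experience rather than treating either number as the whole story.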

Conclusion

AI Engineering is undeniably a distinct skill set, one that is becoming increasingly indispensable in AI transformation. As industries increasingly rely on AI to innovate and optimize, the demand for skilled AI Engineers who can both understand and shape this technology continues to grow. Their ability to navigate the rapid pace of change while delivering practical business value makes them essential to successful AI adoption. Most importantly, their role in critically evaluating and effectively implementing AI tools – as demonstrated by our Copilot metrics – shows why this specialized role is crucial for turning AI’s potential into real business value.

AI-Engineers: Why People Skills Are Central to AI Success

In my blog post “Four Categories of AI Solutions” (https://birkholm-buch.dk/2024/04/22/four-categories-of-ai-solutions), I outlined different approaches to building AI solutions, but it’s becoming increasingly clear that the decision on how to approach AI hinges on the talent and capabilities within the organization. As AI continues to evolve at lightning speed, companies everywhere are racing to adopt the latest innovations. Whether it’s generative AI, machine learning, or predictive analytics, organizations see AI as a strategic advantage. But as exciting as these technologies are, they come with a less glamorous reality—people skills are the make-or-break factor in achieving long-term AI success.

The Talent Crunch: A Major Barrier to AI Success

Reports from both McKinsey and Gartner consistently highlight a serious shortage of skilled AI talent. McKinsey’s latest research suggests that AI adoption has plateaued for many companies not due to a lack of use cases, but because they don’t have the talent required to execute their AI strategies effectively. 60% of companies cite talent shortages as a key risk in their AI adoption plans. (McKinsey State of AI 2023: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year).

This talent crunch means that organizations are finding it difficult to retain and attract professionals with the skills needed to develop, manage, and scale AI initiatives. 58% of organizations report significant gaps in AI talent, with 72% lacking machine learning engineering skills and 68% short on data science skills (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

The New Frontier of AI Engineering Talent

Given the rapid pace of change in AI tools and techniques, developing an internal team of AI Engineers is one of the most sustainable strategies for companies looking to stay competitive. AI Engineers are those with the expertise to design, build, and maintain custom AI solutions tailored to the specific needs of the organization.

By 2026, Gartner predicts that 30% of new AI systems will require specialized AI talent, making the need for AI Builders even more urgent (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

The Challenge of Retaining AI Talent

The retention challenge in AI isn’t just about compensation—it’s about providing meaningful work, opportunities for continuous learning, and a sense of ownership over projects. Diverse AI teams are essential for preventing biases in models and creating more robust, well-rounded AI systems. Offering inclusive, supportive environments where AI engineers can grow professionally and personally is essential to keeping top talent engaged (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

Skills Development: A Competitive Advantage

AI is only as good as the team behind it. As Gartner highlights, the true value of AI will be realized when companies can seamlessly integrate AI tools into their existing workflows. This integration requires not just the right tools but the right people to design, deploy, and optimize these solutions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

Companies that prioritize skills development will have a competitive edge. AI is advancing so quickly that organizations can no longer rely solely on external vendors to provide off-the-shelf solutions. Building an internal team of AI Builders—engineers who are continually learning and improving their craft—is essential to staying ahead of the curve. Offering employees opportunities to reskill and upskill is no longer optional; it’s a necessity for retaining talent and remaining competitive in the AI-driven economy.

Looking Ahead: The Evolving Role of AI Engineers

As organizations continue to explore AI adoption, the need for specialized AI models is becoming more apparent. Gartner predicts that by 2027, over 50% of enterprises will deploy domain-specific Large Language Models (LLMs) tailored to either their industry or specific business functions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

I believe the role of AI Engineers will continue to grow in importance and complexity. We might see new specializations emerge, such as AI ethics officers ensuring responsible AI use, or AI-human interaction designers creating seamless experiences. 

In my view, the most valuable AI Engineers of the future could be those who can not only master the technology but also understand its broader implications for business and society. They might need to navigate complex ethical considerations, adapt to rapidly changing regulatory landscapes, and bridge the gap between technical capabilities and business needs.

Conclusion: Investing in People is Investing in AI Success

The success of any AI initiative rests on the quality and adaptability of the people behind it. For organizations aiming to lead in the AI space, the focus should be on creating a workforce of AI Engineers—skilled professionals who can navigate the ever-changing landscape of AI technologies.

The lesson is clear: to succeed with AI, invest in your people first. The future of AI is not just about algorithms and data—it’s about the humans who will shape and guide its development and application.

Seniority, Organizational Influence and Participation

The breakdown below is meant to better explain how seniority, organizational influence and behaviors are connected. Anyone wanting to move up the career ladder must master our expected behaviors and participate accordingly.

Entry
  • Scope: Entry-level professional with limited or no prior experience; learns to use professional concepts to resolve problems of limited scope and complexity; works on assignments that require limited judgment and decision making.
  • Analogy: Learning about rope and knots.
  • Influence & Impact: Self.
  • Coding: 100%
  • Participation: Actively participate in discussions and team activities. Seek guidance from more experienced team members. Be open to receiving feedback and learning from mistakes.

Intermediate
  • Scope: Developing position where an employee is able to apply job skills, policies and procedures to complete tasks of moderate scope and complexity and determine the appropriate action.
  • Analogy: Can tie basic knots, learning complex knots.
  • Influence & Impact: Peers.
  • Coding: 100%
  • Participation: Contribute to team discussions and share knowledge gained from learning basic concepts. Offer assistance to junior team members. Seek feedback on performance and actively work on improving skills.

Experienced
  • Scope: Journey-level, experienced professional who knows how to apply theory and put it into practice with full understanding of the professional field; has broad job knowledge and works on problems of diverse scope.
  • Analogy: Calculates rope strength, knows a lot about knots.
  • Influence & Impact: Team.
  • Coding: 80%-100%
  • Participation: Actively contribute expertise to team projects and discussions. Mentor junior team members and facilitate knowledge-sharing sessions. Regularly seek out new learning opportunities and share insights with the team.

Advanced
  • Scope: Professional with a high degree of knowledge in the overall field and recognized expertise in specific areas.
  • Analogy: Understands rope making, can tie any knot.
  • Influence & Impact: Department.
  • Coding: <25%
  • Participation: Lead collaborative learning initiatives within the team. Actively contribute to the development of best practices and processes. Mentor and coach less experienced team members, fostering a culture of continuous learning.

Expert
  • Scope: Leader in the field who regularly leads projects of critical importance to the company and beyond, with high consequences of success or failure. This employee has impact and influence on company policy and program development. Barriers to entry exist at this level.
  • Analogy: Knows more about rope than you ever will; invented a new knot.
  • Influence & Impact: Company.
  • Coding: <10%
  • Participation: Serve as a subject matter expert, providing guidance and direction on complex projects. Spearhead innovative learning initiatives and contribute to industry knowledge sharing. Act as a mentor and coach for both technical and professional development.

A Manifesto on Expected Behaviors

Introduction

In Software Engineering at Carlsberg our collective success is built on the foundation of individual behaviors that foster a positive, innovative, and collaborative environment. This document outlines the expected behaviors that every team member, irrespective of their role or seniority, is encouraged to embody and develop. These behaviors are not just guidelines but the essence of our culture, aiming to inspire continuous growth, effective communication, and a proactive approach to challenges. As we navigate the complexities of software engineering, these behaviors will guide us in making decisions, interacting with one another, and achieving our departmental and organizational goals. Together, let’s build a culture that celebrates learning, teamwork, and excellence in everything we do.

Learn together, grow together

“We embrace collaboration, share knowledge openly, and celebrate both individual and team success.”

  • Contribute and support: Actively participate in discussions, offer help, and celebrate each other’s successes.
  • Give and receive feedback: Regularly seek and provide constructive feedback for improvement.
  • Share expertise openly: Willingly share knowledge and expertise to benefit the team.

Communicate clearly, connect openly

“We foster understanding through respectful, transparent, and active communication using the right tools for the job.”

  • Listen actively, engage thoughtfully: Pay close attention, ask questions, and respond thoughtfully to diverse perspectives.
  • Clarity over jargon, respect in tone: Communicate with clarity, avoid technical jargon, and use respectful language.
  • Prompt and appropriate: Respond efficiently and tailor your communication to fit the situation and audience.
  • Choose the right channel: Utilize appropriate communication methods based on the message and context.

Continuous Learning and Improvement is the Way!

“We value continuous learning, actively seek opportunities to improve, and celebrate progress together.”

  • Quality First, Every Step of the Way: Never pass a known defect downstream. If you see something that will cause problems for others, stop the work.
  • Challenge yourself and learn: Regularly seek new experiences and reflect on your experiences to improve.
  • Experiment and share: Be open to trying new things and share your learnings with the team.
  • Track your progress: Regularly measure your progress towards goals and adjust your approach as needed.

Own your work, drive results

“We take responsibility, proactively solve problems, and seize opportunities to excel.”

  • Embrace challenges, deliver excellence: Aim for impactful work and go the extra mile for outstanding results.
  • Be proactive problem-solvers: Actively seek, address, and prevent the escalation of challenges by ensuring solutions not only fit within established boundaries but also uphold the highest quality standards.
  • Learn and bounce back: Embrace mistakes as learning opportunities and quickly recover from setbacks.