Peter Birkholm-Buch

Stuff about Software Engineering

Beyond DevOps: The Rise of Full-Stack Platform Engineering

The Evolution of Infrastructure Management

DevOps promised to bridge the gap between development and operations, aiming to deliver infrastructure faster and more efficiently. However, in many organizations, the reality often fell short of this ideal. DevOps frequently became a practice where operations teams learned to script infrastructure without fully embracing key software engineering principles. It became more about scripting than true engineering.

The Need for a Higher Abstraction

As infrastructure needs grew more complex, it became clear that traditional DevOps approaches were not scaling effectively. Tools like Terraform, while powerful, often proved to be terse and not particularly developer-friendly. They got the job done, but they weren’t providing the streamlined experience that developers needed. A new approach was necessary – one that would raise the level of abstraction and make infrastructure more accessible.

The Golden Path as a Product

Enter the concept of the “golden path” – a set of pre-built, standardized infrastructure solutions that developers can easily use and customize. This approach treats infrastructure as a product, designed with the end-user – the developer – in mind. 

The golden path isn’t just a set of scripts or configurations; it’s a carefully crafted product that encapsulates best practices, security considerations, and organizational policies. It automates infrastructure creation while maintaining alignment with company standards, allowing developers to provision cloud resources without needing to worry about governance, security, or configuration inconsistencies.

Raising the Abstraction Level

To understand the significance of this shift, consider this analogy: Terraform, while powerful, is often like the assembly language of infrastructure. Platform engineering, and the golden path approach, is about raising that abstraction, creating reusable and maintainable infrastructure solutions that developers can work with seamlessly. 

Just as high-level programming languages made software development more accessible and efficient compared to assembly language, the golden path aims to do the same for infrastructure management. By creating higher-level abstractions, we’re making infrastructure more understandable, manageable, and aligned with modern software development practices.
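To make that abstraction jump concrete, here is a minimal, hypothetical sketch in Python (the names and fields are illustrative, not Gaia's or any specific tool's interface): a developer describes a service in a few high-level fields, and the golden path expands that into the many low-level, policy-compliant settings the platform team has standardized.

```python
from dataclasses import dataclass

# Hypothetical developer-facing request: the "high-level language" of the golden path.
@dataclass
class ServiceRequest:
    name: str
    environment: str            # e.g. "dev", "test", "prod"
    needs_database: bool = False

# Organizational policy decided once by the platform team, not re-decided per project.
POLICY = {
    "region": "eu-west-1",
    "tags": {"owner": "platform-engineering", "cost-center": "growth"},
    "encryption": "kms",
}

def render_resources(request: ServiceRequest) -> list[dict]:
    """Expand a small request into low-level resource definitions (the "assembly language")."""
    prefix = f"{request.name}-{request.environment}"
    resources = [{
        "type": "container_service",
        "name": prefix,
        "region": POLICY["region"],
        "tags": POLICY["tags"],
        "logging_enabled": True,       # governance defaults developers never have to remember
    }]
    if request.needs_database:
        resources.append({
            "type": "database",
            "name": f"{prefix}-db",
            "encryption": POLICY["encryption"],
            "backups_enabled": True,
            "publicly_accessible": False,  # security baked in, not bolted on
        })
    return resources

if __name__ == "__main__":
    for resource in render_resources(ServiceRequest("orders-api", "prod", needs_database=True)):
        print(resource)
```

The developer works at the level of the dataclass; the platform team owns everything below it, which is exactly the division of labor the golden path is meant to create.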

The Role of Full-Stack Platform Engineers

This new approach requires a new kind of professional: the full-stack platform engineer. These engineers think like developers while solving infrastructure challenges. They build scalable, reliable, and developer-friendly infrastructure that empowers teams.

Full-stack platform engineers focus on creating robust, scalable infrastructure solutions that directly support business needs, rather than getting bogged down in low-level configuration details. They apply the same rigor expected in software development to infrastructure design, treating infrastructure truly as code.

Enhancing Developer Experience and Security

The golden path approach significantly enhances the developer experience. By integrating infrastructure provisioning directly into familiar development workflows (like those in GitHub), it allows developers to request and manage infrastructure as part of their normal process, without delays or context switching.

This approach also allows for the seamless integration of security practices. By baking security considerations into the golden path from the start, organizations can shift security left in the development process, addressing vulnerabilities at their source without compromising developer productivity.

A New Era of Infrastructure Management

The rise of full-stack platform engineering and the golden path approach represents a significant evolution in how we think about and manage infrastructure. It’s not just DevOps 2.0; it’s a fundamental shift in mindset that treats infrastructure as a product designed for developer success.

By raising the abstraction level, applying software engineering principles to infrastructure, and focusing on creating reusable, maintainable solutions, this approach promises to make infrastructure more accessible, secure, and aligned with modern development practices. As organizations continue to grapple with increasing complexity, the golden path offers a way forward – empowering developers, enhancing security, and ultimately accelerating innovation.

At Carlsberg, this approach has been embodied in Gaia, our golden path platform built by full-stack platform engineers. Gaia exemplifies how treating infrastructure as a product can transform development processes, making them more efficient and developer-friendly. It stands as a testament to the power of full-stack platform engineering in creating solutions that truly serve the needs of modern development teams.

As more organizations embrace this shift, we can expect to see a new landscape of infrastructure management emerge – one where the golden path, crafted by skilled full-stack platform engineers, leads the way to more innovative, secure, and efficient software development practices.

AI-Engineer: A Distinct and Essential Skillset

Introduction

Artificial intelligence (AI) technologies have caused a dramatic change in software engineering. At the forefront of this revolution are AI-Engineers – the professionals who implement solutions within the ‘Builders’ category of AI adoption, as I outlined in my previous post on Four Categories of AI Solutions. These engineers not only harness the power of AI but also redefine the landscapes of industries.

As I recently discussed in “AI-Engineers: Why People Skills Are Central to AI Success”, organizations face a critical talent shortage in AI implementation. McKinsey’s research shows that 60% of companies cite talent shortages as a key risk in their AI adoption plans. This shortage makes understanding the AI-Engineer role and its distinct skillset more crucial than ever.

But what is an AI-Engineer?

Core Skills of an AI Engineer

AI-Engineers are skilled software developers who can code in modern languages. They create the frameworks and software solutions that enable AI functionalities and make them work well with existing enterprise applications. While they need basic knowledge of machine learning and AI concepts, their primary focus differs from that of data engineers. Where data engineers mainly focus on writing and managing data models, AI-Engineers concentrate on building reliable, efficient, and scalable software solutions that integrate AI.
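As a rough illustration of where that focus lies, the sketch below wraps a generative model call in the kind of engineering scaffolding (input validation, retries, timeouts) an AI-Engineer adds around the model itself. It assumes the OpenAI Python SDK purely as an example backend; the model name and function are placeholders, not a description of any particular enterprise setup.

```python
import time
from openai import OpenAI  # example backend only; any hosted or self-hosted model could sit here

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str, retries: int = 3) -> str:
    """Summarize a support ticket, with the guardrails an enterprise integration needs."""
    if not ticket_text.strip():
        raise ValueError("ticket_text must not be empty")
    prompt = f"Summarize this support ticket in two sentences:\n\n{ticket_text[:4000]}"
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
                timeout=30,
            )
            return (response.choices[0].message.content or "").strip()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff before retrying
    raise RuntimeError("unreachable")
```

The model call itself is one line; the surrounding reliability and integration concerns are where the AI-Engineer's software engineering background does the work.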

Strategic Business Outcomes

The role of AI-Engineers is crucial in translating technological advancements into strategic advantages. Their ability to navigate the complex landscape of AI tools and tailor solutions to specific business challenges underlines their unique role within the enterprise and software engineering specifically. By embedding AI into core processes, they help streamline operations and foster innovative product development.

Continuous Learning and Adaptability

As I described in “Keeping Up with GenAI: A Full-Time Job?”, the AI landscape shifts at a dizzying pace. Just like Loki’s time-slipping adventures, AI-Engineers find themselves constantly jumping between new releases, frameworks, and capabilities – each innovation demanding immediate attention and evaluation.

For AI-Engineers, this isn’t just about staying informed – it’s about rapidly evaluating which technologies can deliver real value. The platforms and communities that facilitate this learning, such as Hugging Face, become essential resources. However, merely keeping up isn’t enough. AI-Engineers must develop a strategic approach to:

  1. Evaluate new technologies against existing solutions
  2. Assess potential business impact before investing time in adoption
  3. Balance innovation with practical implementation
  4. Maintain stable systems while incorporating new capabilities

Real-World Impact

The real value of AI-Engineers becomes clear when we look at concrete implementation data. In my recent analysis of GitHub Copilot usage at Carlsberg (“GitHub Copilot Probably Saves 50% of Time for Developers“), we found fascinating patterns in AI tool adoption. While developers and GitHub claim the tool saves 50% of development time, the actual metrics tell a more nuanced story:

  • Copilot’s acceptance rate hovers around 20%, meaning developers typically accept roughly one-fifth of the suggested code
  • Even with this selective usage, developers report significant time savings because reviewing and modifying AI suggestions is faster than writing code from scratch
  • The tool generates substantial code volume, but AI-Engineers must carefully evaluate and adapt these suggestions
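As a back-of-the-envelope illustration of the kind of evaluation this involves, the sketch below computes an acceptance rate from hypothetical per-day suggestion counts; the numbers and field names are invented for the example and are not Carlsberg's actual telemetry.

```python
# Hypothetical daily Copilot telemetry: suggestions shown vs. suggestions accepted.
daily_stats = [
    {"day": "2024-10-01", "suggested": 1200, "accepted": 260},
    {"day": "2024-10-02", "suggested": 950,  "accepted": 180},
    {"day": "2024-10-03", "suggested": 1100, "accepted": 230},
]

total_suggested = sum(day["suggested"] for day in daily_stats)
total_accepted = sum(day["accepted"] for day in daily_stats)
acceptance_rate = total_accepted / total_suggested

# A rate near 0.20 matches the "roughly one-fifth" pattern above, even though reported
# time savings can still be far larger than the acceptance rate alone suggests.
print(f"Acceptance rate: {acceptance_rate:.1%}")
```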

This real-world example highlights several key aspects of the AI-Engineer role:

  1. Tool Evaluation: AI-Engineers must look beyond marketing claims to understand actual implementation impact
  2. Integration Strategy: Success requires thoughtful integration of AI tools into existing development workflows
  3. Metric Definition: AI-Engineers need to establish meaningful metrics for measuring AI tool effectiveness
  4. Developer Experience: While pure efficiency gains may be hard to quantify, improvements in developer experience can be significant

These findings demonstrate why AI-Engineers need both technical expertise and practical judgment. They must balance the promise of AI automation with the reality of implementation, ensuring that AI tools enhance rather than complicate development processes.

Conclusion

AI Engineering is undeniably a distinct skill set, one that is becoming increasingly indispensable in AI transformation. As industries increasingly rely on AI to innovate and optimize, the demand for skilled AI Engineers who can both understand and shape this technology continues to grow. Their ability to navigate the rapid pace of change while delivering practical business value makes them essential to successful AI adoption. Most importantly, their role in critically evaluating and effectively implementing AI tools – as demonstrated by our Copilot metrics – shows why this specialized role is crucial for turning AI’s potential into real business value.

AI-Engineers: Why People Skills Are Central to AI Success

In my blog post “Four Categories of AI Solutions” (https://birkholm-buch.dk/2024/04/22/four-categories-of-ai-solutions), I outlined different approaches to building AI solutions, but it’s becoming increasingly clear that the decision on how to approach AI hinges on the talent and capabilities within the organization. As AI continues to evolve at lightning speed, companies everywhere are racing to adopt the latest innovations. Whether it’s generative AI, machine learning, or predictive analytics, organizations see AI as a strategic advantage. But as exciting as these technologies are, they come with a less glamorous reality—people skills are the make-or-break factor in achieving long-term AI success.

The Talent Crunch: A Major Barrier to AI Success

Reports from both McKinsey and Gartner consistently highlight a serious shortage of skilled AI talent. McKinsey’s latest research suggests that AI adoption has plateaued for many companies not due to a lack of use cases, but because they don’t have the talent required to execute their AI strategies effectively. 60% of companies cite talent shortages as a key risk in their AI adoption plans. (McKinsey State of AI 2023: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year).

This talent crunch means that organizations are finding it difficult to retain and attract professionals with the skills needed to develop, manage, and scale AI initiatives. 58% of organizations report significant gaps in AI talent, with 72% lacking machine learning engineering skills and 68% short on data science skills (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

The New Frontier of AI Engineering Talent

Given the rapid pace of change in AI tools and techniques, developing an internal team of AI Engineers is one of the most sustainable strategies for companies looking to stay competitive. AI Engineers are those with the expertise to design, build, and maintain custom AI solutions tailored to the specific needs of the organization.

By 2026, Gartner predicts that 30% of new AI systems will require specialized AI talent, making the need for AI Builders even more urgent (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

The Challenge of Retaining AI Talent

The retention challenge in AI isn’t just about compensation—it’s about providing meaningful work, opportunities for continuous learning, and a sense of ownership over projects. Diverse AI teams are essential for preventing biases in models and creating more robust, well-rounded AI systems. Offering inclusive, supportive environments where AI engineers can grow professionally and personally is essential to keeping top talent engaged (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

Skills Development: A Competitive Advantage

AI is only as good as the team behind it. As Gartner highlights, the true value of AI will be realized when companies can seamlessly integrate AI tools into their existing workflows. This integration requires not just the right tools but the right people to design, deploy, and optimize these solutions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

Companies that prioritize skills development will have a competitive edge. AI is advancing so quickly that organizations can no longer rely solely on external vendors to provide off-the-shelf solutions. Building an internal team of AI Builders—engineers who are continually learning and improving their craft—is essential to staying ahead of the curve. Offering employees opportunities to reskill and upskill is no longer optional; it’s a necessity for retaining talent and remaining competitive in the AI-driven economy.

Looking Ahead: The Evolving Role of AI Engineers

As organizations continue to explore AI adoption, the need for specialized AI models is becoming more apparent. Gartner predicts that by 2027, over 50% of enterprises will deploy domain-specific Large Language Models (LLMs) tailored to either their industry or specific business functions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

I believe the role of AI Engineers will continue to grow in importance and complexity. We might see new specializations emerge, such as AI ethics officers ensuring responsible AI use, or AI-human interaction designers creating seamless experiences. 

In my view, the most valuable AI Engineers of the future could be those who can not only master the technology but also understand its broader implications for business and society. They might need to navigate complex ethical considerations, adapt to rapidly changing regulatory landscapes, and bridge the gap between technical capabilities and business needs.

Conclusion: Investing in People is Investing in AI Success

The success of any AI initiative rests on the quality and adaptability of the people behind it. For organizations aiming to lead in the AI space, the focus should be on creating a workforce of AI Engineers—skilled professionals who can navigate the ever-changing landscape of AI technologies.

The lesson is clear: to succeed with AI, invest in your people first. The future of AI is not just about algorithms and data—it’s about the humans who will shape and guide its development and application.

Keeping Up with GenAI: A Full-Time Job?

As a technologist, it feels like I’m constantly time slipping — like Loki in Season 2 of his show. Every time I’ve finally wrapped my head around one groundbreaking AI technology, another one comes crashing in, pulling me out of my flow. Just like Loki’s painful and disorienting jumps between timelines, I’m yanked from one new release or framework to the next, barely catching my breath before being dropped into the middle of another innovation.

In the last week alone, we’ve had a flood of announcements that make it impossible to stand still for even a second. Here’s just a glimpse:

  1. OpenAI Swarm (October 16, 2024)
    OpenAI dropped “Swarm,” an open-source framework for managing multiple autonomous AI agents—a move that could redefine how we approach collaborative AI and automation.
    https://github.com/openai/swarm
  2. WorldCoin Rebrands as World (October 18, 2024)
    WorldCoin’s new Orb is yet another reminder that biometric and blockchain technology continues to merge in unpredictable ways. With its iris-scanning Orb, it’s trying to push a global financial identity system—a concept that sparks as much excitement as concern, especially around data privacy.
    https://www.theverge.com/2024/10/18/24273691/world-orb-sam-altman-iris-scan-crypto-token
  3. Microsoft’s Copilot with Autonomous Agents (October 21, 2024)
    Microsoft’s second wave of Copilot introduces agents that can perform complex tasks autonomously, elevating Copilot from an assistant to a decision-making, workflow-automating powerhouse.
    https://www.microsoft.com/en-us/microsoft-copilot/blog/copilot-studio/unlocking-autonomous-agent-capabilities-with-microsoft-copilot-studio/
  4. Anthropic’s New Models & ‘Computer Use’ Feature (October 22, 2024)
    Anthropic has introduced the latest Claude 3.5 models, including a new ‘computer use’ feature that allows AI to interact directly with applications. This marks a major shift toward AI being able to execute tasks like filling out forms and interacting with user interfaces, making it a significant leap in real-world functionality.
    https://www.anthropic.com/news/claude-3-5-sonnet

The Data Speaks for Itself

These developments are not isolated. The latest Stanford AI Index Report 2024 provides some staggering numbers that put this onslaught of innovation into perspective:

  • 149 foundation models were released in 2023, more than double the number from 2022, and a whopping 65.7% were open-source. That’s an overwhelming volume of tools, each requiring careful consideration for their potential applications.
  • The number of AI-related publications has tripled since 2010, reaching over 240,000 by 2022. Staying on top of this research is an almost Sisyphean task, but these papers provide the groundwork for the rapid-fire advancements we’re seeing.
  • Generative AI investments exploded to $25.2 billion in 2023, nearly eight times the previous year. This boom in funding is driving the constant stream of new AI tools and capabilities, each promising to reshape the landscape.
  • AI was mentioned in 394 earnings calls across nearly 80% of Fortune 500 companies in 2023, an enormous jump from the 266 mentions in 2022. The sheer presence of AI in corporate strategy highlights how central this technology has become to every industry.

The Technologist’s Dilemma

The rapid pace of AI advancements presents an overwhelming challenge for technologists. Each new tool or framework isn’t just a minor update—it’s potentially transformative, requiring deep understanding and immediate adaptation. For those managing teams and implementing technological advancements, this fast-moving landscape demands constant learning and vigilance.

For more details and insights on the pace of AI developments, you can dive into the Stanford AI Index Report 2024 at this link:
https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf

Now I can go to sleep 😴

DevEx Analysis: Software Engineering in Growth Products, Carlsberg vs Gartner Benchmarks

Introduction

This report was created by having ChatGPT (Auto) and Claude individually compare and evaluate the initiatives on DevEx that I’ve blogged about in the past against the official Gartner DevEx Best Practices. I then had ChatGPT combine the two results into the report below.

Report

This report evaluates the Developer Experience (DevEx) initiatives within the Software Engineering department in Growth Products at Carlsberg, comparing them against Gartner’s The State of Developer Experience Initiatives report. The initiatives reviewed apply specifically to the Software Engineering team within Growth Products and do not reflect Carlsberg’s global operations.

1. Focus on Tooling and Integration

The Software Engineering team in Growth Products demonstrates strong alignment with Gartner’s recommendations in the area of tooling and integration:

Key Highlights:

  • Emphasis on integrated platforms and tooling, such as the Gaia platform
  • Focus on automation to streamline developer workflows
  • Adoption of AI-driven tools like GitHub Copilot to enhance developer efficiency

The initiatives outlined in “From DevOps to Platform Engineering: How Gaia Transformed Our Approach to Infrastructure, Alignment, and Developer Experience” highlight how integrated tooling supports smoother workflows, a priority emphasized in Gartner’s best practices.

Final Rating: 4.5/5. The focus on improving developer workflows through integrated platforms and automation closely aligns with Gartner’s recommendations.

2. Embedding Security into the Developer Workflow

The approach taken by the Software Engineering team aligns well with Gartner’s guidance on embedding security into daily developer processes:

Key Highlights:

  • Security integration as part of the daily development workflow
  • Use of tools such as GitHub Advanced Security
  • Inclusion of security roles within the broader DevEx initiative

The best practices detailed in “GitHub Advanced Security Enables Shifting Security Left” demonstrate how security has been embedded into the development pipeline, a practice highly recommended by Gartner for mature DevEx programs.

Final Rating: 4.5/5. The integration of security into the workflow, combined with tooling that supports shifting security left, positions the Software Engineering team’s DevEx initiatives at an advanced level in this area.

3. Emphasis on Professional Growth and Autonomy

The Software Engineering team’s emphasis on professional growth, particularly in relation to senior roles, aligns strongly with Gartner’s recommendations:

Key Highlights:

  • Empowering senior developers to act as facilitators and influencers
  • Promoting autonomy while ensuring organizational alignment
  • Balancing team independence with a controlled framework

The initiatives described in “Seniority, Organizational Influence, and Participation” provide a clear view of how senior developers are encouraged to take on leadership roles that foster cross-team collaboration and influence, aligning with Gartner’s focus on supporting professional growth and autonomy.

Final Rating: 5/5. The focus on autonomy and professional growth, especially for senior developers, represents an exemplary alignment with Gartner’s DevEx principles.

4. Cultural and Organizational Alignment

The initiatives to foster a supportive and collaborative developer culture align closely with Gartner’s recommendations for cultural and organizational alignment:

Key Highlights:

  • Clear definitions of expected behaviors across the Software Engineering department
  • Promotion of psychological safety within teams
  • Fostering a collaborative and inclusive work environment

The best practices outlined in “A Manifesto on Expected Behaviors” establish clear behavioral expectations that create a psychologically safe environment for developers. Gartner stresses the importance of a supportive culture in any comprehensive DevEx program, which these initiatives directly address.

Final Rating: 4.5/5. The Software Engineering department has established a strong cultural foundation that promotes collaboration and safety. Additional emphasis on psychological safety could further strengthen this area.

5. Measuring DevEx

While the Software Engineering team demonstrates strength in many areas, there is room for improvement in how DevEx outcomes are measured:

Key Highlights:

  • Focus on productivity and automation metrics
  • Some discussion of tooling-related metrics, such as in “GitHub Copilot Metrics”
  • Limited use of broader DevEx metrics, such as DORA or job satisfaction surveys

The current focus on productivity and efficiency aligns with Gartner’s recommendations. However, Gartner also recommends a broader range of metrics, including velocity, developer satisfaction, and retention. Expanding the measurement approach, as suggested in these practices, would provide a more comprehensive understanding of DevEx success.

Final Rating: 3/5. Increasing the use of specific DevEx outcome metrics, such as DORA and job satisfaction, would enhance the ability to measure and track the effectiveness of DevEx initiatives within the Software Engineering department.
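To illustrate how such metrics could be operationalized, here is a minimal, hypothetical sketch computing two DORA measures (deployment frequency and lead time for changes) from a handful of invented deployment records; in practice the data would come from the delivery pipeline rather than hard-coded values.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: when the change was committed and when it reached production.
deployments = [
    {"committed": "2024-09-02T09:15", "deployed": "2024-09-02T14:40"},
    {"committed": "2024-09-04T11:00", "deployed": "2024-09-05T10:20"},
    {"committed": "2024-09-09T08:30", "deployed": "2024-09-09T16:05"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

days_in_period = 7
lead_times = [hours_between(d["committed"], d["deployed"]) for d in deployments]

print(f"Deployment frequency: {len(deployments) / days_in_period:.2f} deploys/day")
print(f"Median lead time for changes: {median(lead_times):.1f} hours")
```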

Overall Rating: 4.3/5 (Strong Alignment)

The Software Engineering department’s DevEx efforts demonstrate strong alignment with Gartner’s best practices, particularly in the areas of tooling, security integration, autonomy, and culture. The main area for improvement is in the explicit measurement and tracking of DevEx outcomes through broader metrics.

Recommendations

Incorporating Gartner’s benchmarks, the following recommendations are suggested for the Software Engineering department in Growth Products:

  1. Implement More Specific DevEx Metrics: Expanding the use of DORA metrics, job satisfaction surveys, and other developer feedback mechanisms would provide deeper insights into the effectiveness of the DevEx initiatives.
  2. Continue Refining Integrated Tooling: Maintaining the momentum around tools like Gaia and GitHub Copilot will ensure the DevEx program continues to drive productivity and reduce friction in the development workflow.
  3. Sustain Focus on Security Integration and Culture: The strengths in security integration and developer culture should remain a focus, ensuring that these foundational pillars of the DevEx program continue to evolve.
  4. Consider Formalizing the DevEx Program: If not already formalized, creating a structured DevEx program with clear goals, metrics, and accountability would allow for better tracking of progress and outcomes.
  5. Regularly Assess and Iterate on DevEx Initiatives: Using feedback loops and metrics to continuously iterate on the DevEx approach will ensure that it evolves in line with both developer needs and organizational goals.

Conclusion

The DevEx initiatives within the Software Engineering department in Growth Products at Carlsberg are well-aligned with Gartner’s best practices, particularly in terms of integrated tooling, security, professional growth, and cultural alignment. By expanding the metrics framework to include broader DevEx outcome measurements, the program could reach even higher levels of maturity and effectiveness.

Seniority, Organizational Influence and Participation

The overview below explains how seniority, organizational influence, and behaviors are connected. Anyone wanting to move up the career ladder must master our expected behaviors and participate accordingly.

Entry

  • Scope: Entry level professional with limited or no prior experience; learns to use professional concepts to resolve problems of limited scope and complexity; works on assignments that require limited judgment and decision making.
  • Analogy: Learning about rope and knots
  • Influence & Impact: Self
  • Coding: 100%
  • Participation: Actively participate in discussions and team activities. Seek guidance from more experienced team members. Be open to receiving feedback and learning from mistakes.

Intermediate

  • Scope: Developing position where an employee is able to apply job skills, policies and procedures to complete tasks of moderate scope and complexity to determine appropriate action.
  • Analogy: Can tie basic knots, learning complex knots
  • Influence & Impact: Peers
  • Coding: 100%
  • Participation: Contribute to team discussions and share knowledge gained from learning basic concepts. Offer assistance to junior team members. Seek feedback on performance and actively work on improving skills.

Experienced

  • Scope: Journey-level, experienced professional who knows how to apply theory and put it into practice with full understanding of the professional field; has broad job knowledge and works on problems of diverse scope.
  • Analogy: Calculates rope strength, knows a lot about knots
  • Influence & Impact: Team
  • Coding: 80%-100%
  • Participation: Actively contribute expertise to team projects and discussions. Mentor junior team members and facilitate knowledge sharing sessions. Regularly seek out new learning opportunities and share insights with the team.

Advanced

  • Scope: Professional with a high degree of knowledge in the overall field and recognized expertise in specific areas.
  • Analogy: Understands rope making, can tie any knot
  • Influence & Impact: Department
  • Coding: <25%
  • Participation: Lead collaborative learning initiatives within the team. Actively contribute to the development of best practices and processes. Mentor and coach less experienced team members, fostering a culture of continuous learning.

Expert

  • Scope: Leader in the field who regularly leads projects of criticality to company and beyond, with high consequences of success or failure. This employee has impact and influence on company policy and program development. Barriers to entry exist at this level.
  • Analogy: Knows more about rope than you ever will, invented new knot
  • Influence & Impact: Company
  • Coding: <10%
  • Participation: Serve as a subject matter expert, providing guidance and direction on complex projects. Spearhead innovative learning initiatives and contribute to industry knowledge sharing. Act as a mentor and coach for both technical and professional development.

A Manifesto on Expected Behaviors

Introduction

In Software Engineering at Carlsberg our collective success is built on the foundation of individual behaviors that foster a positive, innovative, and collaborative environment. This document outlines the expected behaviors that every team member, irrespective of their role or seniority, is encouraged to embody and develop. These behaviors are not just guidelines but the essence of our culture, aiming to inspire continuous growth, effective communication, and a proactive approach to challenges. As we navigate the complexities of software engineering, these behaviors will guide us in making decisions, interacting with one another, and achieving our departmental and organizational goals. Together, let’s build a culture that celebrates learning, teamwork, and excellence in everything we do.

Learn together, grow together

“We embrace collaboration, share knowledge openly, and celebrate both individual and team success.”

  • Contribute and support: Actively participate in discussions, offer help, and celebrate each other’s successes.
  • Give and receive feedback: Regularly seek and provide constructive feedback for improvement.
  • Share expertise openly: Willingly share knowledge and expertise to benefit the team.

Communicate clearly, connect openly

“We foster understanding through respectful, transparent, and active communication using the right tools for the job.”

  • Listen actively, engage thoughtfully: Pay close attention, ask questions, and respond thoughtfully to diverse perspectives.
  • Clarity over jargon, respect in tone: Communicate with clarity, avoid technical jargon, and use respectful language.
  • Prompt and appropriate: Respond efficiently and tailor your communication to fit the situation and audience.
  • Choose the right channel: Utilize appropriate communication methods based on the message and context.

Continuous Learning and Improvement is the Way!

“We value continuous learning, actively seek opportunities to improve, and celebrate progress together.”

  • Quality First, Every Step of the Way: Never pass a known defect downstream. If you see something that will cause problems for others, stop the work.
  • Challenge yourself and learn: Regularly seek new experiences and reflect on your experiences to improve.
  • Experiment and share: Be open to trying new things and share your learnings with the team.
  • Track your progress: Regularly measure your progress towards goals and adjust your approach as needed.

Own your work, drive results

“We take responsibility, proactively solve problems, and seize opportunities to excel.”

  • Embrace challenges, deliver excellence: Aim for impactful work and go the extra mile for outstanding results.
  • Be proactive problem-solvers: Actively seek, address, and prevent the escalation of challenges by ensuring solutions not only fit within established boundaries but also uphold the highest quality standards.
  • Learn and bounce back: Embrace mistakes as learning opportunities and quickly recover from setbacks.

The Intersection of DevEx and DevSecOps: We Need a New Way Forward

Developer Experience (DevEx) is critical for productivity, impact and retaining talent. In a world where software engineers are constantly asked to deliver more, faster, and more securely, companies can’t afford to treat DevEx and DevSecOps as separate priorities.

When these areas are siloed, we end up with fragmented workflows, frustrated developers, and disjointed experiences—counteracting the benefits of initiatives like unified development platforms. To move forward, we need an integrated approach to DevEx and DevSecOps, making security a seamless part of the development process while avoiding the fragmentation that current approaches have caused.

The current fragmented approach to DevSecOps is undermining Developer Experience. DevEx and DevSecOps serve different purposes, but poorly implemented DevSecOps practices can harm DevEx, reducing efficiency and developer satisfaction. It’s about ensuring security practices support developer productivity rather than interfere with it.

The Fragmentation Problem: A Warning for Growing Complexity

As organizations scale, it’s easy to fall into the trap of adding more tools to address new challenges—especially in security. Each new vulnerability or compliance requirement often results in adopting yet another tool. On the surface, this might seem like progress, but in reality, it adds complexity.

Each new platform comes with its own requirements, logins, and signals. Developers must toggle between different tools, piecing together information from multiple sources. This disrupts their workflow and increases the risk of errors. The very tools intended to improve security end up creating friction.

This fragmented approach seems common in many organizations. As more platforms are introduced, workflows become disjointed, and maintaining a unified process becomes harder. The result? Security becomes reactive, and developers spend less time building and improving software.

We need to rethink how we integrate security into the development process. A consolidated approach can help avoid these pitfalls while enhancing both security and productivity.

Our Success with Platform Consolidation: Improving Security and Developer Experience

At Carlsberg, we took a deliberate approach to consolidating our software development tools onto a single platform—GitHub—and used GitHub Advanced Security (GHAS) to shift security left into the developer workflow. This allowed us to address security vulnerabilities at their source, directly within the tools developers are already familiar with.

By integrating security into the developer workflow, our developers could use AI-powered tools like GitHub Copilot to write more secure code as they worked. This approach streamlined the process, reducing the need for developers to toggle between multiple platforms and ensuring that the code we deployed was free from known security vulnerabilities at the time of writing. The impact on Developer Experience (DevEx) has been significant—security is now a natural part of the development process, not an afterthought.
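As a concrete, deliberately simplified example of what addressing vulnerabilities at the source looks like, the snippet below shows a SQL injection pattern of the kind code scanning tools typically flag in a pull request, alongside the parameterized fix a developer can apply without leaving their editor. It is generic Python written for this post, not code from our codebase.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated into the query, the classic injection
    # pattern that code scanning flags directly on the pull request.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fixed at the source: a parameterized query keeps data and SQL separate.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()
```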

This consolidation not only raised our security posture but also improved developer productivity. By reducing context-switching and embedding security into the natural flow of work, we created a more cohesive, efficient development environment where developers felt empowered to take ownership of both the code and its security.

The Opposite Trend in DevSecOps: Tool Fragmentation and Complexity

While we’ve seen success in consolidating our platform and improving both security and Developer Experience, many organizations face the opposite challenge. When implementing DevSecOps, the introduction of more security tools often leads to a fragmented workflow. Developers are required to interact with multiple platforms, each with its own set of logins, signals, and processes, which disrupts their focus and lowers productivity.

Research has shown that this tool-centric approach to DevSecOps can lead to operational gaps, inefficiencies, and a disjointed developer experience. The very tools designed to improve security end up creating friction, making it harder for developers to get their work done. In addition, the immaturity of some automated DevSecOps tools further complicates integration into continuous delivery pipelines, undermining both security and efficiency.

This fragmentation isn’t specific to any one organization; it’s a widespread challenge as security teams strive to keep up with growing threats and compliance demands. The proliferation of tools, however, often leads to more silos and increased complexity—exactly the opposite of what we’ve achieved through platform consolidation.

A Call for Streamlining DevSecOps: Learning from Consolidation

The lesson here is clear: adding more tools to the mix isn’t the answer. To fully realize the potential of DevSecOps, we need to move away from tool fragmentation and focus on embedding security into the developer workflow, as we did with our consolidated platform on GitHub. By simplifying the development process and integrating security from the start, we can achieve better outcomes for both security and Developer Experience.

Security needs to be central, not an afterthought. Rather than bolting on security measures after the fact or adding layers of complexity with new tools, security should be a seamless part of how developers work. By making security a core aspect of the development process, we ensure that it is baked in from the very beginning. This approach not only improves security itself but also enhances the overall Developer Experience by reducing the friction and overhead often associated with traditional security processes.

References

1. “Identifying the Primary Dimensions of DevSecOps: A Multi-vocal Literature Review” discusses the fragmentation of DevSecOps and the challenge of integrating multiple tools into a seamless workflow. https://www.sciencedirect.com/science/article/pii/S0164121224001080

2. “AI for DevSecOps: A Landscape and Future Opportunities” outlines the potential of AI in automating and enhancing security tasks within DevSecOps pipelines, but also highlights challenges around tool complexity and immaturity. https://arxiv.org/abs/2404.04839

From DevOps to Platform Engineering: How Gaia Transformed Our Approach to Infrastructure, Alignment, and Developer Experience

Introduction

In the world of cloud development, managing infrastructure effectively while maintaining alignment across teams is a constant challenge. Historically, our DevOps team played a pivotal role in provisioning and managing cloud resources, ensuring developers had what they needed to build and deploy solutions. However, this model wasn’t sustainable as the number of projects grew and cloud environments became more complex. We needed a way to streamline infrastructure management without losing sight of alignment across teams and solutions, while also improving the overall Developer Experience (DevEx).

This realization led us to shift our DevOps team from a traditional support role into a platform engineering team, focused on building and maintaining tools that provide a golden path for developers. The result? Gaia—a platform that has radically transformed how we manage cloud infrastructure, maintain alignment throughout the organization, and drastically improve Developer Experience by embedding infrastructure creation into developers’ existing workflows.

The Evolution from DevOps to Platform Engineering

When we started, our DevOps team handled infrastructure provisioning manually and on a request basis. While this ensured quality control, it created bottlenecks as the number of requests grew, leading to slower project deliveries. Developers were often left waiting for infrastructure to be set up, while the DevOps team struggled to keep up with the workload.

This wasn’t a scalable model, so we pivoted. Rather than manually provisioning infrastructure, we built Gaia—a platform that automates infrastructure creation while maintaining alignment with company policies. Gaia represents our “golden path”—a set of pre-built modules that allow developers to provision cloud resources without needing to worry about governance, security, or configuration inconsistencies.

Not only did Gaia eliminate bottlenecks, but it also integrated directly into the GitHub workflow developers were already using, significantly improving Developer Experience. Developers now interact with the same tools they use for coding, making infrastructure requests feel like a natural extension of their development work.

The Remarkable Impact of Gaia on Developer Experience

Gaia’s impact has been nothing short of remarkable. By automating the infrastructure creation process, we’ve effectively removed the need for the DevOps team to manually create infrastructure for developers. Developers now have a self-service capability to quickly and easily provision what they need on their own, directly from within their existing GitHub workflows, without waiting for approval or intervention from the DevOps team.

This seamless integration has significantly improved Developer Experience in several key ways:

  • Familiarity: Developers don’t have to learn new tools or processes to request infrastructure. They use GitHub, the platform they are already familiar with, ensuring minimal friction when interacting with infrastructure.
  • Speed and Efficiency: With Gaia, infrastructure requests are submitted via GitHub pull requests (PRs), allowing developers to spin up resources quickly. This eliminates the lag time that often occurs when requests are handled through manual ticketing systems.
  • Embedded Governance: Developers no longer have to worry about compliance or governance rules. Every infrastructure resource created via Gaia is automatically aligned with company policies, freeing developers to focus entirely on building solutions without getting bogged down in regulatory details.

By embedding infrastructure creation into the developer workflow through GitHub, Gaia significantly boosts DevEx. Developers are empowered to take control of infrastructure setup, while still benefiting from built-in quality and governance checks that ensure alignment with the company’s standards.
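To give a feel for how embedded governance can work mechanically, here is a small hypothetical policy check of the kind a platform team might run automatically against a requested configuration in a pull request; the rules and field names are illustrative and are not Gaia's actual checks.

```python
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}
REQUIRED_TAGS = {"owner", "cost-center"}

def validate_request(spec: dict) -> list[str]:
    """Return policy violations for a requested infrastructure spec (an empty list means compliant)."""
    violations = []
    if spec.get("region") not in ALLOWED_REGIONS:
        violations.append(f"region {spec.get('region')!r} is not on the approved list")
    missing_tags = REQUIRED_TAGS - set(spec.get("tags", {}))
    if missing_tags:
        violations.append(f"missing required tags: {sorted(missing_tags)}")
    if spec.get("publicly_accessible", False):
        violations.append("resources must not be publicly accessible by default")
    return violations

# Example: a PR requests a database in an unapproved region with no tags.
print(validate_request({"type": "database", "region": "us-east-1", "tags": {}}))
```

A check like this running on every pull request is what lets developers stay in their normal workflow while the platform enforces the standards described above.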

The New Focus of Our Platform Engineering Team

With manual infrastructure creation largely eliminated, the role of the DevOps team has shifted to that of a platform engineering team. Their primary focus is now on maintaining Gaia and the shared modules that are used to provision infrastructure. Whenever new infrastructure resources or cloud services are introduced, the team ensures they are incorporated into Gaia in a way that adheres to company policies, ensuring alignment as our cloud architecture evolves.

This centralized approach allows the platform engineering team to ensure that the development process is as smooth as possible, enhancing the overall Developer Experience by constantly improving the tools developers rely on. Developers no longer need to spend time learning about the intricacies of cloud infrastructure or worry about whether their configurations meet governance requirements.

Integrating Infrastructure Creation into the Developer Workflow

One of the most significant achievements of Gaia is how seamlessly it integrates into the developer workflow. As mentioned, we built Gaia to work within a central repository in GitHub, where developers create pull requests to request infrastructure. These PRs are then reviewed and approved by the platform engineering team, ensuring that every infrastructure change aligns with company policies and best practices.

By embedding infrastructure creation into the PR process, we’ve achieved several goals:

  • Speed: Developers can request infrastructure as part of their normal workflow, without delays or waiting for separate approvals.
  • Quality Control: The PR process provides a natural checkpoint for the platform engineering team to ensure consistency and alignment across all teams and solutions.
  • Alignment: Centralizing infrastructure requests in a single repository ensures that all teams are working from the same set of standards, preventing silos and ensuring that every team follows best practices.
  • Enhanced Developer Experience: Since developers no longer need to switch between tools or wait for external teams, the process feels fluid and integrated. This reduces the cognitive load on developers and enables them to focus more on writing code and building features rather than managing infrastructure logistics.

Gaia’s GitHub-based process has streamlined how developers interact with infrastructure, further aligning infrastructure creation with developer workflows and enhancing their experience by reducing friction and improving productivity.

Conclusion

The transition from a traditional DevOps model to a platform engineering team centered around Gaia has been a game changer for us. By providing developers with a golden path for creating infrastructure, we’ve freed up their time to focus on what they do best: building innovative software solutions. At the same time, we’ve ensured that every infrastructure deployment is aligned with our policies and governance frameworks, without the need for constant oversight.

Gaia has made our infrastructure provisioning faster, more reliable, and more scalable, while allowing our platform engineering team to focus on higher-level work—maintaining the tools that enable this. By embedding infrastructure creation into GitHub workflows, we’ve also enhanced Developer Experience, making infrastructure provisioning a natural extension of the development process.

The future of DevOps, for us, lies in platform engineering, where teams enable developers rather than managing infrastructure requests. Alignment and Developer Experience are no longer afterthoughts—they’re built into the process, ensuring that as we scale, we do so efficiently, consistently, and with a developer-centric approach.


Balancing Autonomy and Alignment in Engineering Teams

The Spotify model has often been referenced as a way to structure engineering teams for agility and independence. It promotes business-owned product teams, where engineers report to product owners, and uses guilds to ensure that teams stay aligned on best practices. However, guilds often become more like “book clubs,” where participation is optional and relies on personal time. This happens because business line managers prioritize deliverables over cross-organizational collaboration, making it difficult to maintain alignment at scale.

Meanwhile, Team Topologies offers a different focus, looking at how different types of teams interact and organize. It doesn’t rely on guilds but instead emphasizes reducing dependencies and clarifying team responsibilities.

One of the main reasons I organize engineers into a single reporting line, rather than under product ownership, is to avoid these pitfalls. By centralizing the reporting structure, I can prioritize time for engineers to focus on cross-organizational standards and collaboration, ensuring alignment across teams without relying on optional participation.

The Importance of Alignment and Shared Processes

While models like Spotify emphasize team independence, they sometimes miss the mark on alignment. It’s critical that teams don’t end up siloed, each solving the same problems in different ways, or worse, working against established company practices. This is where alignment on best practices, methods, and tools becomes crucial.

Take the US Navy SEAL teams as an example. They are known for their ability to operate independently, much like Scrum teams. However, what people tend to overlook is that all SEAL teams undergo the same training, use the same equipment, and follow standardized methods and processes. This shared foundation is what allows them to operate seamlessly when they come together, even though they work independently most of the time.

In the same way, my approach ensures that engineering teams can solve problems on their own, but they’re aligned on best practices, tools, and processes across the company. This alignment prevents the issues often seen in the Spotify model, where teams risk becoming too focused on their own product work, losing sight of the bigger organizational picture.

Scrum Teams Need Independence from Authority

In Scrum teams, the issue of authority goes beyond estimation; it affects the entire collaboration model. Scrum is designed to foster equal collaboration, where team members work together, estimate tasks, and solve problems without a hierarchy influencing decisions. When someone on the team, such as a Product Owner, is also the boss, this balance is broken. The premise of Scrum, which relies on collective responsibility and open communication, collapses.

If the Product Owner or any other leader on the team has direct authority over the others, it can lead to a situation where estimates are overridden, team members feel pressured to work longer hours, and decisions are driven by power dynamics rather than collaboration. This undermines the core principles of Scrum, where the goal is for teams to self-organize and be empowered to make their own decisions.

By keeping authority structures out of the Scrum team, we ensure that collaboration is truly equal, and that decisions are made based on the team’s expertise and collective input—not on the directives of a boss.

How We Balance Autonomy and Alignment

Instead of organizing engineers strictly around product owners and budgets—like in the Spotify model—we’ve created a framework where engineers report through a central engineering line. This keeps everyone on the same page when it comes to methods and processes. Engineers still work closely with product teams, but they don’t lose sight of the bigger picture: adhering to company-wide standards.

This approach solves a problem common in both the Spotify and Team Topologies models. In Spotify, squads may go off and build things their way, leading to inconsistencies across the organization. In Team Topologies, stream-aligned teams can become too focused on optimizing their flow, which sometimes means diverging from company-wide practices. By maintaining a central engineering line, we keep our teams aligned while still giving them the autonomy they need to innovate and move quickly.

The Result

Our approach strikes a balance. Teams are free to innovate and adapt to the challenges of their product work, but they aren’t reinventing the wheel or deviating from best practices. We’ve managed to avoid the pitfalls of silos and fragmented processes by ensuring that every team operates within a shared framework—just like how SEAL teams can work independently, but they all share the same training, tools, and methods.

At the end of the day, it’s not about limiting autonomy; it’s about creating the right kind of autonomy. Teams should be able to act independently, but they should do so in a way that keeps the organization moving in the same direction. That’s the key to scaling effectively without losing sight of what makes us successful in the first place.
