Stuff about Software Engineering


The Future of Consulting: How Value Delivery Models Drive Better Client Outcomes

Introduction

The consulting industry, particularly within software engineering, is shifting away from traditional hourly billing towards value-driven and outcome-focused engagements. This evolution aligns with broader trends identified by industry analysts like Gartner and McKinsey, emphasizing outcomes and measurable impacts over time-based compensation (Gartner, Hype Cycle for Consulting Services, 2023; McKinsey, The State of Organizations 2023).

Shift from Hourly Rates to Value Delivery

Traditional hourly billing often incentivizes cost reduction at the expense of quality, creating tension between minimizing expenses and achieving meaningful results. The emerging approach—value-based consulting—aligns compensation directly with specified outcomes or deliverables. For instance, many consulting firms now employ fixed-price projects or performance-based contracts that clearly link payment to the achievement of specific business results, improving alignment and encouraging deeper collaboration between clients and consultants.

According to McKinsey’s report The State of Organizations 2023, approximately 40% of consulting engagements are shifting towards value-based models, highlighting the industry’s evolution (https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-state-of-organizations-2023).

Leveraging Scrum and Agile Methodologies

Agile methodologies, especially Scrum, facilitate this shift by naturally aligning consulting work with measurable outputs and iterative improvements. In Scrum, value is measured through clearly defined user stories, regular sprint reviews, and tangible deliverables evaluated continuously by stakeholders. These iterative deliveries provide clear visibility into incremental progress, effectively replacing hourly tracking with meaningful metrics of success.

Challenges in Adopting Value-Based Models

Transitioning to value-based consulting is not without its challenges. Firms may encounter difficulties in accurately defining and measuring value upfront, aligning expectations, and managing the inherent risks of outcome-based agreements. Overcoming these challenges typically requires transparent communication, clear contract terms, and robust stakeholder engagement from project inception.

AI and Human Oversight

While there is significant enthusiasm and concern surrounding AI, its role remains primarily augmentative rather than fully autonomous, particularly in high-stakes decision-making. Human oversight ensures AI-driven solutions remain precise and contextually appropriate, directly supporting high-quality, outcome-focused consulting. This perspective aligns with insights discussed in Responsible AI: Enhance Human Judgment, Don’t Replace It.

Balancing Speed and Precision

AI offers substantial gains in speed but often involves trade-offs in precision. Certain fields, such as financial services or critical infrastructure, require exactness, making human judgment essential in balancing these considerations. This topic, explored in detail in Speed vs. Precision in AI Development, highlights how value-driven consulting must thoughtfully integrate AI to enhance outcomes without sacrificing accuracy.

Conclusion

The shift to outcome-focused consulting models, supported by agile frameworks and thoughtful AI integration, represents a significant evolution in the industry. By prioritizing measurable value and clearly defined outcomes over hourly rates, consulting engagements become more impactful and sustainable.

Responsible AI: Enhance Human Judgment, Don’t Replace It

Artificial Intelligence (AI) is transforming businesses, streamlining processes, and providing insights previously unattainable at scale. However, it is crucial to keep in mind a fundamental principle from a 1979 IBM training presentation, often attributed to computing pioneer Grace Hopper:

“A computer can never be held accountable; therefore, a computer must never make a management decision.”

This simple yet profound guideline underscores the importance of human oversight in business-critical decision-making, a topic I’ve discussed previously in my posts about the Four Categories of AI Solutions and the necessity of balancing Speed vs. Precision in AI Development.

Enhancing Human Decision-Making

AI should serve as an enabler rather than a disruptor. Its role is to provide support, suggestions, and insights, but never to autonomously make significant business decisions, especially those affecting financial outcomes, security protocols, or customer trust. Human oversight ensures accountability and ethical responsibility remain clear.

Security and Resilience

AI-powered systems, such as chatbots and customer interfaces, must be built with resilience against adversarial manipulation. Effective safeguards—including strict input validation and clearly defined output limitations—are critical. Human oversight must always be available as a fallback mechanism when the system encounters unforeseen scenarios.
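
As a minimal sketch of these safeguards, assuming a hypothetical chatbot wrapper (the length limit, action names, and routing function are illustrative assumptions, not a real API):

    # Sketch: validate input, constrain what the bot may do, and fall back to
    # a human when the system is out of its depth. All names are hypothetical.

    MAX_INPUT_CHARS = 2000
    ALLOWED_ACTIONS = {"answer_faq", "check_order_status"}  # defined output limits

    def escalate_to_human(text: str, reason: str) -> str:
        # Human oversight is the fallback mechanism for unforeseen scenarios.
        return f"routed to a human agent ({reason})"

    def handle_message(text: str, proposed_action: str) -> str:
        if len(text) > MAX_INPUT_CHARS:             # strict input validation
            return escalate_to_human(text, reason="failed input validation")
        if proposed_action not in ALLOWED_ACTIONS:  # clearly defined output limits
            return escalate_to_human(text, reason="action outside defined limits")
        return f"bot performs '{proposed_action}'"

    print(handle_message("Where is my order?", "check_order_status"))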

Balancing Core and Strategic AI Solutions

In earlier posts, I’ve outlined four categories of AI solutions ranging from simple integrations to complex, custom-built innovations. Core AI solutions typically leverage standard platforms with inherent governance frameworks, such as Microsoft’s suite of tools, making them relatively low-risk. Conversely, strategic AI solutions involve custom-built systems that provide significant business value but inherently carry higher risks, requiring stringent oversight and comprehensive governance.

Lean and Practical Governance

AI governance frameworks must scale appropriately with the level of risk involved. It’s important to avoid creating bureaucratic overhead for low-risk applications, while ensuring that more sensitive, strategic AI applications undergo thorough evaluations and incorporate stringent human oversight.

Humans-in-the-Loop

A “human-in-the-loop” approach is essential for managing AI-driven decisions that significantly impact financial transactions, security measures, or customer trust. While AI may suggest or recommend actions, final approval and accountability should always rest with human operators. Additionally, any autonomous action should be easily reversible by a human.
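
A minimal sketch of what such a gate can look like, assuming a hypothetical propose/approve workflow; none of these names come from a real library:

    # Sketch: the AI proposes, a named human approves, and irreversible
    # actions are refused outright. Purely illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProposedAction:
        description: str
        reversible: bool

    def execute(action: ProposedAction, approved_by: Optional[str]) -> None:
        # Final approval and accountability rest with a human operator.
        if approved_by is None:
            raise PermissionError("no human approval recorded; refusing to act")
        if not action.reversible:
            raise ValueError("irreversible actions need a separate, stricter process")
        print(f"executing '{action.description}', approved by {approved_by}")

    execute(ProposedAction("refund order #1234", reversible=True), approved_by="alice")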

Final Thoughts

AI offers tremendous potential for innovation and operational excellence. However, embracing AI responsibly means recognizing its limitations. AI should support and empower human decision-making—not replace it. By maintaining human oversight and clear accountability, we can leverage AI effectively while safeguarding against risks.

Ultimately, AI assists, but humans decide.

AI Without Borders: Why Accessibility Will Determine Your Organization’s Future

Introduction

Gartner recently announced that AI has moved past the peak of inflated expectations in its hype cycle, signaling a crucial transition from speculation to real-world impact. AI now stands at this inflection point—its potential undeniable, but its future hinges on whether it becomes an accessible, enabling force or remains restricted by excessive governance.

The Risk of Creating A and B Teams in AI

Recent developments, such as OpenAI’s foundational grants to universities, signal an emerging divide between those with AI access and those without. These grants are not just an academic initiative—they will accelerate disparities in AI capabilities, favoring institutions that can freely explore and integrate AI into research and innovation. The same divide is already forming in the corporate world.

Software engineers today are increasingly evaluating companies based on their AI adoption. When candidates ask in interviews whether an organization provides tools like GitHub Copilot, they are not just inquiring about productivity enhancements—they are assessing whether the company is on the cutting edge of AI adoption. Organizations that restrict AI access risk falling behind, unintentionally categorizing themselves into the “B Team,” making it harder to attract top talent and compete effectively.

Lessons from Past Industrial Revolutions

History provides clear lessons about the importance of accessibility in technological revolutions. Electricity, for example, was initially limited to specific industrial applications before it became a utility that fueled industries, powered homes, and transformed daily life. Similarly, computing evolved from expensive mainframes reserved for large enterprises to personal computers and now cloud computing, making advanced technology available to anyone with an internet connection.

AI should follow the same path.

However, excessive corporate governance could hinder its progress, while governmental governance remains essential to ensure AI is developed and used safely. Just as electricity transformed from an industrial novelty to the foundation of modern society, AI must follow a similar democratization path. Imagine if we had limited electricity to only certified engineers or specific departments—we would have stifled the innovation that brought us everything from household appliances to modern healthcare. Similarly, restricting AI access today could prevent us from discovering its most transformative applications tomorrow.

Governance Should Enable, Not Block

The key is not to abandon governance but to ensure it enables rather than blocks innovation. AI governance should focus on how AI is used, not who gets access to it. Restricting AI tools today is akin to limiting electricity to specialists a century ago—an approach that would have crippled progress.

The most successful AI implementations are those that integrate seamlessly into existing workflows. Tools like GitHub Copilot and Microsoft Copilot demonstrate how AI can enhance productivity when it is embedded within platforms that employees already use. The key is to govern AI responsibly without creating unnecessary friction that prevents widespread adoption.

The Competitive Divide is Already Here

The AI accessibility gap is no longer theoretical—it is already shaping the competitive landscape. Universities that receive OpenAI’s foundational grants will advance more rapidly than those without access. Companies that fully integrate AI into their daily operations will not only boost innovation but also become magnets for top talent. The question organizations must ask themselves is clear: Do we embrace AI as an enabler, or do we risk falling behind?

As history has shown, technology is most transformative when it is available to all. AI should be no different. The organizations that will thrive in the coming decade will be those that balance responsible governance with widespread AI accessibility—empowering their people to innovate rather than restricting them with excessive controls. The question isn’t whether you’ll adopt AI, but whether you’ll do it in a way that creates competitive advantage or competitive disadvantage.

Why 100% Utilization Kills Innovation: The Mathematical Reality

Imagine a highway at 100% capacity. Traffic doesn’t just slow down—it stops completely. A single broken-down car causes massive ripple effects because there’s no buffer space to absorb the variation. This isn’t just an analogy; it’s mathematics. And the same principle explains why running teams at full capacity mathematically guarantees the death of innovation.

The Queue Theory Reality

In 1961, mathematician J.F.C. Kingman proved something remarkable: as utilization approaches 100%, expected delays grow without bound. This finding, known as Kingman’s Formula, demonstrates that systems operating at full capacity don’t just slow down linearly; they break down dramatically. Hopp and Spearman’s seminal work “Factory Physics” (2000) further established that optimal system performance occurs at around 80% utilization, giving rise to the “80% Rule” in operations management.
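
To make the nonlinearity concrete, here is a minimal sketch of Kingman’s approximation for a single-server (G/G/1) queue; the default coefficients of variation and unit service time are illustrative assumptions, not data from any real system:

    # Kingman's approximation: E[W] ≈ (rho / (1 - rho)) * ((ca^2 + cs^2) / 2) * tau,
    # where rho is utilization, ca and cs are the coefficients of variation of
    # arrivals and service, and tau is the mean service time.

    def kingman_wait(utilization: float, ca: float = 1.0, cs: float = 1.0,
                     service_time: float = 1.0) -> float:
        """Approximate mean wait time in a single-server queue."""
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        variability = (ca ** 2 + cs ** 2) / 2
        return (utilization / (1 - utilization)) * variability * service_time

    for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
        print(f"utilization {rho:.0%}: wait ≈ {kingman_wait(rho):.1f} x service time")

Going from 80% to 99% utilization multiplies the expected wait roughly 25-fold, from about 4x to 99x the mean service time; that cliff is exactly what the 80% Rule guards against.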

This isn’t opinion or management theory—it’s mathematics. When utilization exceeds 80-85%, systems experience:

  • Steeply, nonlinearly increasing delays
  • Inability to handle normal variation
  • Cascading disruptions from small problems
  • Deteriorating performance across all metrics

The Human System Connection

People and teams are systems too. When cognitive load researchers John Sweller and Paul Chandler demonstrated how mental capacity follows similar patterns, they revealed something crucial: minds at 100% capacity lose the ability to process new information effectively. Just as a fully utilized highway can’t absorb a single additional car, a fully utilized mind can’t absorb new ideas or opportunities.

The implications are profound: innovation requires spare capacity. This isn’t about working less—it’s about maintaining the mental and temporal space required for creative thinking and problem-solving. Studies of innovation consistently show that breakthrough ideas emerge when people have the bandwidth to:

  • Notice unexpected patterns
  • Explore new connections
  • Experiment with different approaches
  • Learn from failures

The Three Horizons Impact

McKinsey’s Three Horizons Framework provides a useful lens for understanding innovation timeframes:

  • Horizon 1: Improving current business
  • Horizon 2: Extending into new areas
  • Horizon 3: Creating transformative opportunities

Here’s where queue theory delivers its killing blow to innovation: At 100% utilization, everything becomes Horizon 1 by mathematical necessity. When a system (human or organizational) operates at full capacity, it can only handle what’s already in the queue. New opportunities, no matter how promising, must wait. Over time, Horizons 2 and 3 don’t just suffer—they become mathematically impossible.

To keep Horizons 2 and 3 viable, companies need to intentionally limit Horizon 1 resource utilization and leave room for creative and exploratory projects.

The Innovation Impossibility

Queue theory proves that running at 100% utilization:

  • Makes delays inevitable
  • Eliminates flexibility
  • Prevents absorption of variation
  • Blocks capacity for new initiatives

Therefore, organizations face a mathematical certainty: maintain 100% utilization or maintain innovation capability. You cannot have both. This isn’t a management choice or cultural issue—it’s as fundamental as gravity.

The solution isn’t working less—it’s working smarter. Just as highways need buffer capacity to function effectively, organizations need spare capacity to innovate. The 80% rule isn’t about reduced output; it’s about maintaining the space required for sustainable performance and growth.

The choice is clear: accept the mathematical reality that innovation requires spare capacity, or continue pushing for 100% utilization while wondering why transformative innovation never seems to happen.

References:

  • Kingman, J.F.C. (1961). “The Single Server Queue in Heavy Traffic.” Proceedings of the Cambridge Philosophical Society.
  • Hopp, W.J., & Spearman, M.L. (2000). Factory Physics.
  • McKinsey & Company. “Three Horizons of Growth.”
  • Chandler, P., & Sweller, J. (1991). “Cognitive Load Theory and the Format of Instruction.” Cognition and Instruction.

AI-Engineer: A Distinct and Essential Skillset

Introduction

Artificial intelligence (AI) technologies have caused a dramatic change in software engineering. At the forefront of this revolution are AI-Engineers – the professionals who implement solutions within the ‘Builders’ category of AI adoption, as I outlined in my previous post on Four Categories of AI Solutions. These engineers not only harness the power of AI but also redefine the landscapes of industries.

As I recently discussed in “AI-Engineers: Why People Skills Are Central to AI Success”, organizations face a critical talent shortage in AI implementation. McKinsey’s research shows that 60% of companies cite talent shortages as a key risk in their AI adoption plans. This shortage makes understanding the AI-Engineer role and its distinct skillset more crucial than ever.

But what is an AI-Engineer?

Core Skills of an AI Engineer

AI-Engineers are skilled software developers who can code in modern languages. They create the frameworks and software solutions that enable AI functionalities and make them work well with existing enterprise applications. While they need basic knowledge of machine learning and AI concepts, their primary focus differs from that of data engineers: where data engineers mainly focus on designing and managing data models, AI-Engineers concentrate on building reliable, efficient, and scalable software solutions that integrate AI.

Strategic Business Outcomes

The role of AI-Engineers is crucial in translating technological advancements into strategic advantages. Their ability to navigate the complex landscape of AI tools and tailor solutions to specific business challenges underlines their unique role within the enterprise and software engineering specifically. By embedding AI into core processes, they help streamline operations and foster innovative product development.

Continuous Learning and Adaptability

As I described in “Keeping Up with GenAI: A Full-Time Job?” the AI landscape shifts at a dizzying pace. Just like Loki’s time-slipping adventures, AI-Engineers find themselves constantly jumping between new releases, frameworks, and capabilities – each innovation demanding immediate attention and evaluation.

For AI-Engineers, this isn’t just about staying informed – it’s about rapidly evaluating which technologies can deliver real value. The platforms and communities that facilitate this learning, such as Hugging Face, become essential resources. However, merely keeping up isn’t enough. AI-Engineers must develop a strategic approach to:

  1. Evaluate new technologies against existing solutions
  2. Assess potential business impact before investing time in adoption
  3. Balance innovation with practical implementation
  4. Maintain stable systems while incorporating new capabilities

Real-World Impact

The real value of AI-Engineers becomes clear when we look at concrete implementation data. In my recent analysis of GitHub Copilot usage at Carlsberg (“GitHub Copilot Probably Saves 50% of Time for Developers”), we found fascinating patterns in AI tool adoption. While developers and GitHub claim the tool saves 50% of development time, the actual metrics tell a more nuanced story:

  • Copilot’s acceptance rate hovers around 20%, meaning developers typically use one-fifth of the suggested code
  • Even with this selective usage, developers report significant time savings because reviewing and modifying AI suggestions is faster than writing code from scratch
  • The tool generates substantial code volume, but AI-Engineers must carefully evaluate and adapt these suggestions (a short sketch of the acceptance-rate metric follows this list)
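
As an illustration, here is a hypothetical sketch of how an acceptance-rate metric like the one above could be computed from suggestion telemetry; the field names and figures are invented for the example, not Carlsberg data:

    # Sketch: aggregate raw suggestion counts into an acceptance rate.
    weekly_telemetry = [
        {"suggestions_shown": 1200, "suggestions_accepted": 252},
        {"suggestions_shown": 980,  "suggestions_accepted": 188},
    ]

    shown = sum(week["suggestions_shown"] for week in weekly_telemetry)
    accepted = sum(week["suggestions_accepted"] for week in weekly_telemetry)
    print(f"acceptance rate: {accepted / shown:.0%}")  # ≈ 20% for this made-up sample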

This real-world example highlights several key aspects of the AI-Engineer role:

  1. Tool Evaluation: AI-Engineers must look beyond marketing claims to understand actual implementation impact
  2. Integration Strategy: Success requires thoughtful integration of AI tools into existing development workflows
  3. Metric Definition: AI-Engineers need to establish meaningful metrics for measuring AI tool effectiveness
  4. Developer Experience: While pure efficiency gains may be hard to quantify, improvements in developer experience can be significant

These findings demonstrate why AI-Engineers need both technical expertise and practical judgment. They must balance the promise of AI automation with the reality of implementation, ensuring that AI tools enhance rather than complicate development processes.

Conclusion

AI Engineering is undeniably a distinct skill set, one that is becoming increasingly indispensable in AI transformation. As industries increasingly rely on AI to innovate and optimize, the demand for skilled AI Engineers who can both understand and shape this technology continues to grow. Their ability to navigate the rapid pace of change while delivering practical business value makes them essential to successful AI adoption. Most importantly, their role in critically evaluating and effectively implementing AI tools – as demonstrated by our Copilot metrics – shows why this specialized role is crucial for turning AI’s potential into real business value.

AI-Engineers: Why People Skills Are Central to AI Success

In my blog post “Four Categories of AI Solutions” (https://birkholm-buch.dk/2024/04/22/four-categories-of-ai-solutions), I outlined different approaches to building AI solutions, but it’s becoming increasingly clear that the decision on how to approach AI hinges on the talent and capabilities within the organization. As AI continues to evolve at lightning speed, companies everywhere are racing to adopt the latest innovations. Whether it’s generative AI, machine learning, or predictive analytics, organizations see AI as a strategic advantage. But as exciting as these technologies are, they come with a less glamorous reality—people skills are the make-or-break factor in achieving long-term AI success.

The Talent Crunch: A Major Barrier to AI Success

Reports from both McKinsey and Gartner consistently highlight a serious shortage of skilled AI talent. McKinsey’s latest research suggests that AI adoption has plateaued for many companies not due to a lack of use cases, but because they don’t have the talent required to execute their AI strategies effectively: 60% of companies cite talent shortages as a key risk in their AI adoption plans (McKinsey, The State of AI in 2023: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year).

This talent crunch means that organizations are finding it difficult to retain and attract professionals with the skills needed to develop, manage, and scale AI initiatives. 58% of organizations report significant gaps in AI talent, with 72% lacking machine learning engineering skills and 68% short on data science skills (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

The New Frontier of AI Engineering Talent

Given the rapid pace of change in AI tools and techniques, developing an internal team of AI Engineers is one of the most sustainable strategies for companies looking to stay competitive. AI Engineers are those with the expertise to design, build, and maintain custom AI solutions tailored to the specific needs of the organization.

By 2026, Gartner predicts that 30% of new AI systems will require specialized AI talent, making the need for AI Builders even more urgent (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

The Challenge of Retaining AI Talent

The retention challenge in AI isn’t just about compensation—it’s about providing meaningful work, opportunities for continuous learning, and a sense of ownership over projects. Diverse AI teams are essential for preventing biases in models and creating more robust, well-rounded AI systems. Offering inclusive, supportive environments where AI engineers can grow professionally and personally is essential to keeping top talent engaged (McKinsey AI Tech Talent Survey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/new-mckinsey-survey-reveals-the-ai-tech-talent-landscape).

Skills Development: A Competitive Advantage

AI is only as good as the team behind it. As Gartner highlights, the true value of AI will be realized when companies can seamlessly integrate AI tools into their existing workflows. This integration requires not just the right tools but the right people to design, deploy, and optimize these solutions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

Companies that prioritize skills development will have a competitive edge. AI is advancing so quickly that organizations can no longer rely solely on external vendors to provide off-the-shelf solutions. Building an internal team of AI Builders—engineers who are continually learning and improving their craft—is essential to staying ahead of the curve. Offering employees opportunities to reskill and upskill is no longer optional; it’s a necessity for retaining talent and remaining competitive in the AI-driven economy.

Looking Ahead: The Evolving Role of AI Engineers

As organizations continue to explore AI adoption, the need for specialized AI models is becoming more apparent. Gartner predicts that by 2027, over 50% of enterprises will deploy domain-specific Large Language Models (LLMs) tailored to either their industry or specific business functions (Gartner 2023 Hype Cycle for AI: https://www.gartner.com/en/newsroom/press-releases/2023-08-17-gartner-identifies-key-technology-trends-in-2023-hype-cycle-for-artificial-intelligence).

I believe the role of AI Engineers will continue to grow in importance and complexity. We might see new specializations emerge, such as AI ethics officers ensuring responsible AI use, or AI-human interaction designers creating seamless experiences. 

In my view, the most valuable AI Engineers of the future could be those who can not only master the technology but also understand its broader implications for business and society. They might need to navigate complex ethical considerations, adapt to rapidly changing regulatory landscapes, and bridge the gap between technical capabilities and business needs.

Conclusion: Investing in People is Investing in AI Success

The success of any AI initiative rests on the quality and adaptability of the people behind it. For organizations aiming to lead in the AI space, the focus should be on creating a workforce of AI Engineers—skilled professionals who can navigate the ever-changing landscape of AI technologies.

The lesson is clear: to succeed with AI, invest in your people first. The future of AI is not just about algorithms and data—it’s about the humans who will shape and guide its development and application.

Seniority, Organizational Influence and Participation

The breakdown below is meant to better explain how seniority, organizational influence, and behaviors are connected. Anyone wanting to move up the career ladder must master our expected behaviors and participate accordingly.

Entry

  • Scope: Entry-level professional with limited or no prior experience; learns to use professional concepts to resolve problems of limited scope and complexity; works on assignments that require limited judgment and decision making.
  • Analogy: Learning about rope and knots.
  • Influence & Impact: Self.
  • Coding: 100%.
  • Participation: Actively participate in discussions and team activities. Seek guidance from more experienced team members. Be open to receiving feedback and learning from mistakes.

Intermediate

  • Scope: Developing position in which an employee applies job skills, policies, and procedures to complete tasks of moderate scope and complexity and to determine appropriate action.
  • Analogy: Can tie basic knots; learning complex knots.
  • Influence & Impact: Peers.
  • Coding: 100%.
  • Participation: Contribute to team discussions and share knowledge gained from learning basic concepts. Offer assistance to junior team members. Seek feedback on performance and actively work on improving skills.

Experienced

  • Scope: Journey-level, experienced professional who knows how to apply theory and put it into practice with full understanding of the professional field; has broad job knowledge and works on problems of diverse scope.
  • Analogy: Calculates rope strength; knows a lot about knots.
  • Influence & Impact: Team.
  • Coding: 80%-100%.
  • Participation: Actively contribute expertise to team projects and discussions. Mentor junior team members and facilitate knowledge-sharing sessions. Regularly seek out new learning opportunities and share insights with the team.

Advanced

  • Scope: Professional with a high degree of knowledge in the overall field and recognized expertise in specific areas.
  • Analogy: Understands rope making; can tie any knot.
  • Influence & Impact: Department.
  • Coding: <25%.
  • Participation: Lead collaborative learning initiatives within the team. Actively contribute to the development of best practices and processes. Mentor and coach less experienced team members, fostering a culture of continuous learning.

Expert

  • Scope: Leader in the field who regularly leads projects of criticality to the company and beyond, with high consequences of success or failure. This employee has impact and influence on company policy and program development. Barriers to entry exist at this level.
  • Analogy: Knows more about rope than you ever will; invented a new knot.
  • Influence & Impact: Company.
  • Coding: <10%.
  • Participation: Serve as a subject matter expert, providing guidance and direction on complex projects. Spearhead innovative learning initiatives and contribute to industry knowledge sharing. Act as a mentor and coach for both technical and professional development.

A Manifesto on Expected Behaviors

Introduction

In Software Engineering at Carlsberg our collective success is built on the foundation of individual behaviors that foster a positive, innovative, and collaborative environment. This document outlines the expected behaviors that every team member, irrespective of their role or seniority, is encouraged to embody and develop. These behaviors are not just guidelines but the essence of our culture, aiming to inspire continuous growth, effective communication, and a proactive approach to challenges. As we navigate the complexities of software engineering, these behaviors will guide us in making decisions, interacting with one another, and achieving our departmental and organizational goals. Together, let’s build a culture that celebrates learning, teamwork, and excellence in everything we do.

Learn together, grow together

“We embrace collaboration, share knowledge openly, and celebrate both individual and team success.”

  • Contribute and support: Actively participate in discussions, offer help, and celebrate each other’s successes.
  • Give and receive feedback: Regularly seek and provide constructive feedback for improvement.
  • Share expertise openly: Willingly share knowledge and expertise to benefit the team.

Communicate clearly, connect openly

“We foster understanding through respectful, transparent, and active communication using the right tools for the job.”

  • Listen actively, engage thoughtfully: Pay close attention, ask questions, and respond thoughtfully to diverse perspectives.
  • Clarity over jargon, respect in tone: Communicate with clarity, avoid technical jargon, and use respectful language.
  • Prompt and appropriate: Respond efficiently and tailor your communication to fit the situation and audience.
  • Choose the right channel: Utilize appropriate communication methods based on the message and context.

Continuous Learning and Improvement is the Way!

“We value continuous learning, actively seek opportunities to improve, and celebrate progress together.”

  • Quality First, Every Step of the Way: Never pass a known defect downstream. If you see something that will cause problems for others, stop the work.
  • Challenge yourself and learn: Regularly seek new experiences and reflect on your experiences to improve.
  • Experiment and share: Be open to trying new things and share your learnings with the team.
  • Track your progress: Regularly measure your progress towards goals and adjust your approach as needed.

Own your work, drive results

“We take responsibility, proactively solve problems, and seize opportunities to excel.”

  • Embrace challenges, deliver excellence: Aim for impactful work and go the extra mile for outstanding results.
  • Be proactive problem-solvers: Actively seek, address, and prevent the escalation of challenges by ensuring solutions not only fit within established boundaries but also uphold the highest quality standards.
  • Learn and bounce back: Embrace mistakes as learning opportunities and quickly recover from setbacks.

Balancing Autonomy and Alignment in Engineering Teams

The Spotify model has often been referenced as a way to structure engineering teams for agility and independence. It promotes business-owned product teams, where engineers report to product owners, and uses guilds to keep teams aligned on best practices. However, guilds often become more like “book clubs,” where participation is optional and relies on personal time. This happens because business line managers prioritize deliverables over cross-organizational collaboration, making it difficult to maintain alignment at scale.

Meanwhile, Team Topologies offers a different focus, looking at how different types of teams interact and organize. It doesn’t rely on guilds but instead emphasizes reducing dependencies and clarifying team responsibilities.

One of the main reasons I organize engineers into a single reporting line, rather than under product ownership, is to avoid these pitfalls. By centralizing the reporting structure, I can prioritize time for engineers to focus on cross-organizational standards and collaboration, ensuring alignment across teams without relying on optional participation.

The Importance of Alignment and Shared Processes

While models like Spotify emphasize team independence, they sometimes miss the mark on alignment. It’s critical that teams don’t end up siloed, each solving the same problems in different ways, or worse, working against established company practices. This is where alignment on best practices, methods, and tools becomes crucial.

Take the US Navy SEAL teams as an example. They are known for their ability to operate independently, much like Scrum teams. However, what people tend to overlook is that all SEAL teams undergo the same training, use the same equipment, and follow standardized methods and processes. This shared foundation is what allows them to operate seamlessly when they come together, even though they work independently most of the time.

In the same way, my approach ensures that engineering teams can solve problems on their own, but they’re aligned on best practices, tools, and processes across the company. This alignment prevents the issues often seen in the Spotify model, where teams risk becoming too focused on their own product work, losing sight of the bigger organizational picture.

Scrum Teams Need Independence from Authority

In Scrum teams, the issue goes beyond just estimation—it’s about the entire collaboration model. Scrum is designed to foster equal collaboration, where team members work together, estimate tasks, and solve problems without a hierarchy influencing decisions. When someone on the team, such as a Product Owner, is also the boss, this balance is broken. The premise of Scrum, which relies on collective responsibility and open communication, collapses.

If the Product Owner or any other leader on the team has direct authority over the others, it can lead to a situation where estimates are overridden, team members feel pressured to work longer hours, and decisions are driven by power dynamics rather than collaboration. This undermines the core principles of Scrum, where the goal is for teams to self-organize and be empowered to make their own decisions.

By keeping authority structures out of the Scrum team, we ensure that collaboration is truly equal, and that decisions are made based on the team’s expertise and collective input—not on the directives of a boss.

How We Balance Autonomy and Alignment

Instead of organizing engineers strictly around product owners and budgets—like in the Spotify model—we’ve created a framework where engineers report through a central engineering line. This keeps everyone on the same page when it comes to methods and processes. Engineers still work closely with product teams, but they don’t lose sight of the bigger picture: adhering to company-wide standards.

This approach solves a problem common in both the Spotify and Team Topologies models. In Spotify, squads may go off and build things their way, leading to inconsistencies across the organization. In Team Topologies, stream-aligned teams can become too focused on optimizing their flow, which sometimes means diverging from company-wide practices. By maintaining a central engineering line, we keep our teams aligned while still giving them the autonomy they need to innovate and move quickly.

The Result

Our approach strikes a balance. Teams are free to innovate and adapt to the challenges of their product work, but they aren’t reinventing the wheel or deviating from best practices. We’ve managed to avoid the pitfalls of silos and fragmented processes by ensuring that every team operates within a shared framework—just like how SEAL teams can work independently, but they all share the same training, tools, and methods.

At the end of the day, it’s not about limiting autonomy; it’s about creating the right kind of autonomy. Teams should be able to act independently, but they should do so in a way that keeps the organization moving in the same direction. That’s the key to scaling effectively without losing sight of what makes us successful in the first place.

Building a Better Software Practice: A Guide to Policies, Rules, Standards, Processes, Guidelines and Governance

Introduction

When organizations add more people (scale up), it quickly becomes impossible to have everyone sitting in the same room to discuss and agree on matters and share the same approach to doing business.

This is my version of a framework for a Software Engineering Quality Handbook: a well-structured hierarchy describing how work is done, which is crucial for ensuring clarity, consistency, and compliance within a team. The handbook is built from six layers:

  • Policies: Broad statements that define the organization’s principles and compliance expectations.
  • Rules: Strict directives that must be followed to ensure specific outcomes in certain situations.
  • Standards: Mandatory technical and operational requirements that ensure consistency and quality.
  • Processes: Detailed, step-by-step instructions that specify how to perform specific tasks.
  • Guidelines: Recommended best practices that guide decision-making but are not mandatory.
  • Governance and Compliance Controls: A collection of key controls and processes designed to ensure adherence to governance standards and compliance with both internal policies and external regulations.

Each layer of this structure supports the one above it, providing more detail and specificity. Policies and rules set the foundation and create alignment with the goals of Software Engineering, while standards, processes, and guidelines provide the specifics on how to achieve those goals effectively.

This structure also facilitates easier updates and management. Policies and standards often require more thorough review and approval processes due to their impact and scope, whereas guidelines and processes can be more dynamic, allowing for quicker adaptations to new technologies or methodologies.
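
As an illustration only, here is a small sketch that encodes the six layers as data, using the compliance and lifecycle attributes described in the sections below; the representation itself is an assumption, not part of the handbook:

    # Sketch: the handbook hierarchy as data. Compliance levels and review
    # cadences mirror the layer descriptions that follow in this post.
    HANDBOOK_LAYERS = [
        {"layer": "Policies",   "mandatory": True,  "review": "annually"},
        {"layer": "Rules",      "mandatory": True,  "review": "frequently (e.g., annually)"},
        {"layer": "Standards",  "mandatory": True,  "review": "biennially or as needed"},
        {"layer": "Processes",  "mandatory": True,  "review": "after major milestones or annually"},
        {"layer": "Guidelines", "mandatory": False, "review": "every two to three years"},
        {"layer": "Governance and Compliance Controls", "mandatory": True, "review": "quarterly"},
    ]

    for item in HANDBOOK_LAYERS:
        status = "mandatory" if item["mandatory"] else "recommended"
        print(f"{item['layer']}: {status}, reviewed {item['review']}")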

Policies :scroll:

  • Definition: Broad, high-level statements of principles, goals, and overall expectations of the organization.
  • Purpose: To establish core values, company vision, and overarching compliance standards.
  • Lifecycle: Reviewed annually to ensure alignment with evolving legal, technological, and business conditions.
  • Compliance: Mandatory; non-compliance can result in significant legal and business risks.
  • Icon: :scroll: (scroll). It symbolizes official documents, which aligns well with the formal, foundational nature of policies. It’s often used to represent ancient laws and decrees, making it fitting for the foundational rules and standards within an organization.
  • Example: A policy might state that all software developed must comply with GDPR and other relevant data protection regulations.

Rules :scales:

  • Definition: Explicit, often granular directives that are compulsory and usually narrow in scope.
  • Purpose: To ensure specific outcomes or behaviors in particular scenarios.
  • Lifecycle: Reviewed frequently (e.g., annually) to refine and ensure they address current challenges effectively and are adhered to.
  • Compliance: Strictly mandatory; non-negotiable and must be followed exactly as prescribed.
  • Icon: :scales: (scales). It represents justice, balance, and fairness, aligning with the concept of rules ensuring specific outcomes and behaviors are maintained in a structured and equitable manner within an organization.
  • Example: A rule might state that commit messages must include a ticket number from the issue tracker (a minimal enforcement sketch follows below).
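
As a sketch of how such a rule could be enforced automatically, assuming a JIRA-style ticket key such as PROJ-42 (the pattern is an assumption about the issue tracker):

    # Sketch: a Git commit-msg hook that rejects messages without a ticket key.
    import re
    import sys

    TICKET_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g., PROJ-42

    def main(message_file: str) -> int:
        with open(message_file, encoding="utf-8") as fh:
            message = fh.read()
        if TICKET_KEY.search(message):
            return 0
        print("commit rejected: message must reference a ticket number (e.g., PROJ-42)")
        return 1

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))

Saved as .git/hooks/commit-msg and made executable, the script receives the path to the commit message file as its first argument and blocks non-conforming commits.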

Standards :straight_ruler:

  • Definition: Specific mandatory requirements for how certain policies are to be implemented.
  • Purpose: To ensure consistency and quality across all projects by defining technical and operational criteria that must be met.
  • Lifecycle: Updated biennially or as needed to reflect new industry practices and technological advancements.
  • Compliance: Mandatory; essential for maintaining quality and uniformity in outputs.
  • Icon: :straight_ruler: (ruler). It symbolizes measurement, precision, and consistency, which align closely with the idea of standards setting specific requirements and guidelines to ensure quality and uniformity across projects.
  • Example: A standard might specify that all code must undergo peer review, or that all projects adhere to a recognized security standard such as ISO/IEC 27001.

Processes :tools:

  • Definition: Detailed, step-by-step instructions that must be followed in specific situations.
  • Purpose: To ensure activities are performed consistently and effectively, especially for complex or critical tasks.
  • Lifecycle: Regularly tested and updated, ideally after major project milestones or annually, to adapt to process improvements and feedback.
  • Compliance: Mandatory where specified; critical for ensuring consistency and reliability of specific operations.
  • Icon: :tools: (hammer and wrench). It symbolizes tools and construction, fitting for the concept of processes as they provide detailed, step-by-step instructions necessary to construct or execute specific tasks systematically and efficiently.
  • Example: A process might outline the steps for a release process, including code freezes, testing protocols, and deployment checks.

Guidelines :compass:

  • Definition: Recommended approaches that are not mandatory but are suggested as best practices.
  • Purpose: To guide developers in their decision-making processes by providing options that align with best practices.
  • Lifecycle: Evaluated and possibly revised every two to three years, or more frequently to incorporate innovative techniques and tools.
  • Compliance: Optional; best practice recommendations that are advisable but not required.
  • Icon: :compass: (compass). It symbolizes guidance, direction, and navigation, which aligns well with the purpose of guidelines to provide recommended approaches and best practices that help steer decisions in software development.
  • Example: Guidelines might suggest using certain frameworks or libraries that enhance productivity and maintainability but are not strictly required.

Governance and Compliance Controls :shield:

  • Definition: A collection of key controls and processes designed to ensure adherence to governance standards and compliance with both internal policies and external regulations.
  • Purpose: To track compliance with the mandatory sections of the handbook, including policies, rules, and standards. This section ensures that all critical governance measures are documented and enforced, providing oversight on adherence to the core practices that uphold the quality and security of our software development process.
  • Lifecycle: Controls are reviewed quarterly or following significant changes in regulations, technology, or business needs to ensure they remain effective and relevant.
  • Compliance: Mandatory for all teams. Any deviation or non-compliance may result in audits, corrective actions, or further review, ensuring alignment with organizational standards and legal requirements.
  • Icon: :shield: (shield). It symbolizes protection and security, representing the safeguarding of our software engineering practices through strong governance and compliance.
  • Example: A control might track instances where code merges bypass branch protection, ensuring that changes still follow the correct peer review process to maintain code integrity (a minimal sketch follows).
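
To illustrate, here is a hypothetical sketch of such a control; the event fields and numbers are invented for the example and do not reflect any real repository or tooling:

    # Sketch: scan merge events exported from the source-control system and
    # flag merges that bypassed branch protection or skipped review.
    merge_events = [
        {"pr": 101, "target": "main", "bypassed_protection": False, "reviews": 2},
        {"pr": 102, "target": "main", "bypassed_protection": True,  "reviews": 0},
    ]

    violations = [event for event in merge_events
                  if event["bypassed_protection"] or event["reviews"] < 1]

    for event in violations:
        print(f"PR #{event['pr']} merged to {event['target']} without required review")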