Peter Birkholm-Buch

Stuff about Software Engineering

Software Architecture Patterns

Introduction

The following is a subset of software architecture patterns that tend to be referenced when academic discussions around patterns arise, along with my comments.

Patterns

Application Architecture

Microservice

The microservice pattern has its roots in domain-driven design, where in particular the concept of a bounded context came to define how services are decoupled. The post Microservices by Martin Fowler also played a large part in naming this pattern.

Drivers:

  • Loosely coupled with other services – enables a team to work independently the majority of time on their service(s) without being impacted by changes to other services and without affecting other services
  • Independently deployable – enables a team to deploy their service without having to coordinate with other teams
  • Capable of being developed by a small team – essential for high productivity by avoiding the high communication overhead of large teams
  • The application must be easy to understand and modify
  • You must run multiple instances of the application on multiple machines in order to satisfy scalability and availability requirements
  • You want to take advantage of emerging technologies (frameworks, programming languages, etc)

Problems:

  • Managing dependencies
  • Deployments of the entire system may become complex
  • Developers must implement the inter-service communication mechanism and deal with partial failure
  • Implementing requests that span multiple services is more difficult
  • Testing the interactions between services is more difficult
  • Implementing requests that span multiple services requires careful coordination between the teams
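
Two of the problems above – implementing inter-service communication and dealing with partial failure – usually surface as timeout-and-retry logic around every remote call. A minimal sketch in TypeScript (the helper name and retry policy are illustrative, not a prescribed implementation):

```typescript
// Hypothetical sketch: a call to another microservice must assume partial
// failure, so it gets a bounded number of retries before the error surfaces.
async function callWithRetry<T>(
  fn: () => Promise<T>,
  retries = 2,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(); // succeed on any attempt
    } catch (err) {
      lastError = err; // remote service failed or timed out; try again
    }
  }
  throw lastError; // all attempts failed: surface the partial failure
}
```

A production version would add timeouts, exponential backoff, a circuit breaker, and idempotency guarantees before retrying non-safe operations.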

Database

Database per Service

This pattern is the natural follow-on to choosing the microservice application architecture pattern: if a service is to be independent, it must have its own independent data layer.

Drivers:

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Databases must sometimes be replicated in order to scale
  • Different services have different data storage requirements, like relational database or NoSQL

Problems:

  • How to manage consistency across services
  • Who owns (masters) data?

Messaging

Messaging is a communications pattern that uses asynchronous messaging to replace the synchronous request/response style used in most REST-style APIs. The most common styles of asynchronous messaging are:

  • Notifications – a sender sends a message to a recipient but does not expect a reply, nor is one sent
  • Request/asynchronous response – a service sends a request message to a recipient and expects to receive a reply message eventually
  • Publish/subscribe – a service publishes a message to zero or more recipients
  • Publish/asynchronous response – a service publishes a request to one or more recipients, some of whom send back a reply
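
The publish/subscribe style above can be sketched with a toy in-memory broker (a real system would use Kafka, RabbitMQ, or similar; all names here are illustrative):

```typescript
// Toy in-memory publish/subscribe broker, for illustration only.
type Handler = (message: string) => void;

class InMemoryBroker {
  private subscribers = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.subscribers.get(topic) ?? [];
    list.push(handler);
    this.subscribers.set(topic, list);
  }

  // Publish/subscribe: zero or more recipients, and the sender
  // neither knows who they are nor expects a reply.
  publish(topic: string, message: string): void {
    for (const handler of this.subscribers.get(topic) ?? []) {
      handler(message);
    }
  }
}
```

The sender only knows the topic, not the recipients – which is exactly the decoupling the pattern is after.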

In the following there’s no differentiation between “event”-driven and “data”-driven, as a message will always contain the full message body.

Event Driven with full Messages

Drivers:

  • The complete body of the message must be sent in each event
  • The broker may or may not allow for queries against the messages
  • Extreme loose coupling as the sender is completely decoupled from the receiver

Problems:

  • Subscriber/consumer overload, which requires buffering so that events are not lost
  • The broker/mediator must always be available
  • Almost always creates mixed messaging styles, with events for downstream and request/response across and upstream, which requires careful planning

Broker

Event driven messaging with a broker implies that all messages are delivered through a central broker but that there’s no processing control flow and messages are delivered using a publish/subscribe pattern.

Drivers:

  • Scalability

Problems:

  • How to handle orchestration? With “normal” synchronous request/response messaging, services orchestrate business processes in the order they are called – with broker-driven messaging everybody could potentially get the same message at the same time, so who orchestrates the overall process?

Mediator

A mediator expands on the broker with support for business process workflows, usually via BPEL.

Drivers:

  • Business processes change often
  • Traceability and governance is important

Problems:

  • Management and governance of platform takes time
  • Requires turnkey solutions from major vendors
  • Not really available as a cloud solution

Event Driven with Notifications

Drivers:

  • You don’t know how many subscribers/consumers of your events there will be, so you must preserve bandwidth
  • The size of the payload in the event is limited
  • A subset of the events doesn’t require the complete body of the message to be sent
  • Subscribers explicitly want to choose which message bodies they want to pull

Problems:

  • Requires a two-step dance where the consumer of an event must issue a request to the central broker and ask for the full body of the message
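
The two-step dance can be sketched as follows – the published notification carries only an id, and the consumer pulls the full body on demand (all names are illustrative):

```typescript
// Sketch of the notification + pull pattern: the event is small,
// and the full body is fetched from the broker/store in a second step.
interface Notification {
  messageId: string;
}

class MessageStore {
  private bodies = new Map<string, string>();

  // Step one: store the body and return the small notification to publish.
  save(id: string, body: string): Notification {
    this.bodies.set(id, body);
    return { messageId: id };
  }

  // Step two: the consumer explicitly pulls the full body it wants.
  fetchBody(n: Notification): string | undefined {
    return this.bodies.get(n.messageId);
  }
}
```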

Amazon the shit out of it…

In today’s fast-paced business environment, one strategy stands out: leveraging IT as the core of business operations. This concept, which I call “Amazon the Shit Out of It,” is inspired by Amazon’s renowned “working backwards” approach and their remarkable success story.

Amazon’s methodology includes meticulous planning with their six-page memos and a commitment to long-term investment. Unlike companies focused on immediate gains, Amazon invests with a long-term perspective, which allows them to innovate and maintain a competitive edge. Notably, these six-page memos have also led to the banning of PowerPoints for strategic business planning, encouraging more thoughtful and comprehensive documentation.

A key factor in Amazon’s success is their ability to apply IT to virtually any problem. They did not start as a tech company; they began as a bookstore. By fully embracing IT, Amazon transformed into a tech powerhouse with Amazon Web Services (AWS), revolutionizing the tech industry.

This transformation highlights a crucial point: businesses must evolve into IT-centric entities to thrive in the modern world. It is no longer sufficient to excel in your original industry; you must become an IT company that leverages technology as a primary business accelerator. Amazon’s journey from a bookstore to a tech giant exemplifies this evolution.

Consider any traditional business, whether it is selling beer, manufacturing products, or providing services. Often, competition boils down to price and product quality, which can be challenging to differentiate in the eyes of consumers. For many, beer is beer, toilet paper is toilet paper, and detergent is detergent.

This is where IT becomes a game-changer. By integrating advanced IT solutions, businesses can significantly reduce production costs, streamline logistics, resolve supply chain issues, and optimize operations in unprecedented ways. Amazon’s success demonstrates how IT can be a powerful lever, enabling businesses to move beyond price competition and deliver exceptional value.

A crucial part of this strategy is shifting from a mindset of Relentless Rationalization—merely doing the same thing better—to one of Relentless Value Adding. Inspired by Amazon’s “working backwards” methodology, this approach focuses on continually enhancing the customer experience and adding value through innovative uses of technology. It is not just about saving 10% and delivering 10% more; it is about leveraging IT to create new opportunities, solve complex problems, and fundamentally accelerate business growth.

In conclusion, “Amazon the Shit Out of It” is more than a catchy phrase—it is a call to action for businesses to rethink their strategies and embrace IT as the cornerstone of their growth and innovation. By adopting a Relentless Value Adding approach, they can position themselves not only to survive but to thrive in an increasingly competitive and technologically driven world.

I would encourage everyone to read “Working Backwards”.

Requirements Engineering

Imagine that the sum of the business requirements for a system can be represented by the surface area of a square where the length of the side is 1 meter. Then the surface area is 1 square meter (m2).

Now imagine also that the shape of the system that can solve those business requirements is a circle, and that we have to construct a circle with a surface area of 1 m2. Knowing that the area of a circle is Pi * r^2, the radius is the square root of 1/Pi, which is approximately 0.564, making the diameter approximately 1.128. So we can draw the square and circle as follows:

Notice that when the figures are overlapped one doesn’t cover the other even if they have the same surface area.
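
For reference, the arithmetic behind the figure can be checked in a few lines:

```typescript
// A circle with the same area as a 1 m × 1 m square:
// r = sqrt(area / π) ≈ 0.564 m, so the diameter is ≈ 1.128 m.
const area = 1;
const radius = Math.sqrt(area / Math.PI); // ≈ 0.564
const diameter = 2 * radius;              // ≈ 1.128
```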

So the circle represents an implementation that can solve the business requirements – but in a slightly different shape. To turn the circle into a square, custom implementations are required, and they are a lot more costly than a standard system.

I don’t have any statistical evidence but in my experience you can usually get 80% of the way with a standard system, but getting to 100% will cost you a lot more.

GitHub Copilot drives better Developer Experience

Introduction

This is part 3 of:

Explaining how Carlsberg unifies development on GitHub and accelerates innovation with Copilot in more detail.

By integrating GitHub Copilot into our development workflow, Carlsberg has significantly enhanced the developer experience. Copilot acts as an intelligent coding assistant, offering real-time suggestions and code completions. This seamless integration enables our developers to write more efficient and error-free code. From a business perspective, this translates to accelerated development cycles and a boost in productivity, allowing us to bring innovations to market faster and maintain a competitive edge.

Understand Code Faster

GitHub Copilot transcends simple code suggestions by providing developers with the ability to quickly understand existing codebases and even entire projects. This feature is invaluable for onboarding new team members and tackling complex legacy systems. By asking Copilot to explain intricate code, developers can rapidly grasp functionality without deep-diving into documentation or consulting peers. For Carlsberg, this means reduced ramp-up times for new projects and more efficient utilization of developer time, leading to cost savings and faster project deliveries.

Spend Less Time on Scaffolding

Scaffolding, while necessary, often consumes valuable time that could be better spent on developing business-critical features. GitHub Copilot streamlines this process by generating the foundational code structures automatically. This allows our developers at Carlsberg to concentrate on crafting the unique aspects of our solutions that drive real business value. The direct result is a more agile development process, with resources optimally allocated towards innovation and creating competitive advantages.

Lower the Learning Curve

Adopting new frameworks and technologies is a constant challenge in the fast-paced tech environment. GitHub Copilot lowers the learning curve for our developers by suggesting how to effectively use new frameworks. This guidance reduces the time spent on trial and error, enabling our team to leverage the latest technologies confidently. For Carlsberg, this capability ensures that we are always at the forefront of technology adoption, enhancing our agility and ability to respond to market changes swiftly.

Reduce Monotonous Work

Monotonous Work, like writing unit tests, though critical for ensuring code quality, can be tedious and time-consuming. GitHub Copilot addresses this by generating unit tests, which developers can then review and refine. This automation not only speeds up the development process but also ensures a high standard of code quality. At Carlsberg, leveraging Copilot for unit testing means our developers can focus more on developing features that add value to the business, while still maintaining a robust and reliable codebase.

Improve Documentation

Well-crafted documentation is crucial for maintainability and scalability but is often overlooked due to the time it requires. GitHub Copilot aids in this aspect by automatically generating meaningful comments and documentation during code commits or pull request reviews. This not only saves time but also enhances the quality of our documentation, making it easier for developers to understand and work with our code. At Carlsberg, improved documentation directly translates to reduced maintenance costs and smoother collaboration among teams, further driving operational efficiency.

Developer Experience = Productivity + Impact + Satisfaction

At Carlsberg, our integration of GitHub Copilot into our development workflow has not just been about improving individual elements of the coding process—it’s about a holistic enhancement of the overall developer experience. 

GitHub frames Developer Experience as the sum of Productivity, Impact, and Satisfaction. Here’s how Copilot aligns with these components:

  • Productivity: By automating and accelerating parts of the development cycle, Copilot directly boosts productivity. In the “Spend Less Time on Scaffolding” and “Reduce Monotonous Work” sections, we explored how Copilot streamlines tasks that traditionally consume significant time and resources. This allows our developers to focus on higher-value work, speeding up our overall project timelines and making our workflow more efficient.
  • Impact: The true measure of any tool or process change is the impact it has on the business and its goals. As discussed in “Understand Code Faster” and “Improve Documentation,” Copilot helps our team tackle complex systems more effectively and maintain better documentation. This not only enhances our current projects but also secures our long-term ability to adapt and grow, significantly impacting our operational success and market competitiveness.
  • Satisfaction: A satisfied developer is a productive and innovative developer. Through features like lowering the learning curve for new technologies and reducing the drudgery of repetitive tasks, as highlighted in “Lower the Learning Curve” and “Reduce Monotonous Work,” Copilot increases job satisfaction. This leads to a more engaged team, ready to innovate and push boundaries in pursuit of new solutions.

By investing in tools that elevate these aspects of the developer experience, Carlsberg is not just improving our software; we are fostering a culture of efficiency, innovation, and satisfaction. This commitment not only enhances our current team’s morale and output but also positions us as a forward-thinking leader in leveraging technology to drive business success.

Conclusion

GitHub Copilot has revolutionized the way we approach software development at Carlsberg, significantly enhancing the overall developer experience. By automating repetitive tasks, simplifying complex codebases, and expediting the learning process for new technologies, Copilot has allowed our developers to focus on what they do best: creating innovative solutions that drive real business value. This not only leads to a more satisfied and engaged development team but also accelerates our time-to-market and improves our competitive stance. The integration of GitHub Copilot into our workflow is a testament to Carlsberg’s commitment to leveraging cutting-edge technology to foster a culture of efficiency, innovation, and continuous improvement. It’s clear that by investing in tools that enhance the developer experience, we’re not just improving our software; we’re building a stronger foundation for our business’s future success.

Four Categories of AI Solutions

Introduction

When driving value from generative AI (GenAI) it’s important to choose the right approach in order to get a return on investment. This page attempts to explain the possible approaches and the resources they require.

Takers, Shapers and Makers

There seem to be three major categories of GenAI adopters according to McKinsey and Gartner:

  • Takers (Gartner: Quick Wins) – Focus on utilizing existing GenAI tools and models for productivity improvements with minimal customization. These initiatives typically have short time to value and are task-specific, aiming for immediate efficiency gains in routine tasks.
  • Shapers (Gartner: Differentiating Use Cases) – Engage in integrating GenAI tools with proprietary data or adapting them for specific applications. These initiatives aim to achieve competitive advantages, involving medium time to value with higher costs and risks than quick wins. They leverage GenAI to extend current processes and create unique value propositions.
  • Makers (Gartner: Transformative Initiatives) – Concentrate on developing new GenAI models or tools for specialized applications, with the potential to transform business models and markets. These are the most ambitious initiatives, characterized by high cost, complexity, and risk, and a long time to value. They aim for strategic benefits that may be difficult to quantify initially.

TCO/ROI

The Total Cost of Ownership (TCO) and Return on Investment (ROI) for GenAI adoption across takers, shapers, and makers categories involve several considerations, including hidden costs, strategic implications, and potential benefits.

Gartner offers insights on measuring GenAI ROI, advocating for a business case approach that simulates potential cost and value realization across GenAI activities. This approach categorizes investments into quick wins, differentiating use cases, and transformational initiatives. Quick wins focus on immediate productivity improvements with short time to value, differentiating use cases aim at competitive advantage with medium time to value, and transformative initiatives have the potential to upend business models with longer time to value but higher costs and complexity. The guide emphasizes the importance of balancing financial returns with strategic benefits, which might be difficult to quantify initially.

Source: https://www.gartner.com/en/articles/take-this-view-to-assess-roi-for-generative-ai.
Red box is added by me, see conclusion below.

Builders

I’m introducing an extra “Builders” category into the GenAI adoption landscape. Beyond merely adopting or adapting, Builders take a step further by crafting bespoke extensions and plugins for GenAI platforms. This initiative is driven by the ambition to tackle intricate, multi-step workflows that typically demand considerable human intervention. The essence of being a Builder lies in the ability not just to work with GenAI but to enhance its core capabilities, enabling solutions that seamlessly bridge various systems and processes. This approach demands a blend of creativity, technical prowess, and a deep understanding of both the technology and the problem domain.

Takers – Utilize existing GenAI tools for productivity improvements with minimal customization. Aimed at immediate efficiency gains in routine tasks with short time to value.

  • Required people/skills: basic understanding of AI/ML concepts; skills in integrating and configuring APIs; ability to adapt third-party GenAI tools to existing workflows
  • Tools: Microsoft Copilot; Microsoft Copilot Plugins; Enterprise “Chat”-GPTs

Shapers – Integrate GenAI tools with proprietary data or adapt them for specific applications to achieve competitive advantages, involving medium time to value with higher costs and risks.

  • Required people/skills: low/no-code developers; domain experts for data interpretation; project managers with a technical background
  • Tools: Retrieval Augmented Generation (RAG); Microsoft Copilot Studio; Microsoft Azure AI Studio

Builders – Develop custom solutions or extensions to GenAI platforms to solve complex, multi-step processes that usually require significant human effort.

  • Required people/skills: advanced programming skills in relevant languages; data scientists for model tuning; experience with GenAI frameworks; systems integration expertise; creative problem-solving abilities
  • Tools: Microsoft Copilot Extensions; Microsoft PromptFlow; LangChain; LangGraph; LlamaIndex; AutoGen; CrewAI; (OpenAI Swarm); LLM Function Calling; LLM Routing; LLM Threat Modelling; LLM Security

Makers – Develop new GenAI models or tools for specialized applications with the potential to transform business models and markets. Characterized by high cost, complexity, and risk, with a long time to value.

  • Required people/skills: expertise in deep learning and neural networks; experience in building and training large-scale AI models; strong research and development background; ability to work with high-performance computing resources
  • Tools: LLM models; LLM frameworks; LLM fine-tuning; (LLM creation and training)

The “Builders” category fills the gap between “Shapers,” who mainly adapt existing models for their unique needs, and “Makers,” who create new GenAI models from scratch. Builders leverage powerful frameworks and platforms to create bespoke solutions that automate complex workflows, potentially revolutionizing how businesses approach process automation and efficiency. This distinction underscores the evolving landscape of GenAI adoption, highlighting the increasing sophistication and customization capabilities available to organizations.

Conclusion

The red box on the image above indicates that solutions made in the Takers and lower Shapers category are likely to be overtaken by standard solutions from vendors and the plethora of SaaS AI offerings appearing on a daily basis. Caution should be used when choosing to invest in solutions in this area unless quick wins are important.

Clearly it’s important to have a strategic, well-planned approach to integrating GenAI with emphasis on organizational readiness, skill development, and a focus on applications that offer a competitive advantage – otherwise GenAI just becomes a technology looking for a problem like Blockchain.

GitHub Advanced Security enables Shifting Security Left

Introduction

This is part 2 of:

Explaining how Carlsberg unifies development on GitHub and accelerates innovation with Copilot in more detail.

In the digital transformation journey of Carlsberg, the implementation of GitHub Advanced Security (GHAS) marked a significant shift towards embedding security directly into the developer workflow. This strategic move, part of our “Security First” initiative in Software Engineering, has not only elevated the security posture of our software development lifecycle but has also manifested in substantial time and cost savings by identifying and rectifying vulnerabilities prior to deployment.

Using GHAS to Scan for Vulnerabilities

Centralizing scanning in the GHAS platform eliminates concerns about the security configurations of developer workstations and the code contributions from external parties. By conducting security scanning centrally, we ensure consistent and thorough examination of all code, regardless of its origin. This approach not only streamlines our security processes but also enhances the security posture of our software, providing peace of mind and allowing our developers to focus on innovation and productivity, raising the Developer Experience.

We use GHAS to scan for the following:

  • Secrets: The inclusion of secret scanning within our workflow has been pivotal in detecting exposed secrets such as keys and tokens, preventing potential security breaches.
  • Static Application Security Testing (SAST) with CodeQL: CodeQL’s integration allows us to perform comprehensive static code analysis, identifying security vulnerabilities and coding errors at their inception.
  • Dependencies with Dependabot: Dependabot plays a critical role in our ecosystem by monitoring dependencies for known vulnerabilities and automatically suggesting updates or patches, thus maintaining the integrity of our software supply chain.
  • Binaries and Containers: By incorporating tool plugins for CodeQL we enhance our ability to scan binaries and containers for vulnerabilities, ensuring a robust security framework across all components of our software.
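
As one concrete example of the dependency scanning above, a minimal Dependabot configuration looks roughly like this (the ecosystem and schedule are illustrative, not our actual setup):

```yaml
# Minimal sketch of .github/dependabot.yml; values are illustrative.
version: 2
updates:
  - package-ecosystem: "npm"   # which package manager to monitor
    directory: "/"             # location of the manifest file
    schedule:
      interval: "weekly"       # how often to check for updates
```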

Integration into Developer Workflow

The transformation brought about by GHAS in our developer workflow cannot be overstated. By embedding security checks directly into pull requests, GHAS ensures that every code change is automatically scanned for vulnerabilities before being merged. This integration not only streamlines the security assessment process but also empowers developers to address security issues in real-time. The proactive security posture facilitated by GHAS equips developers with the tools and insights needed to identify and rectify potential security flaws from the outset, fostering a culture of security awareness and responsibility. This approach significantly enhances the overall security of our software projects, contributing to a more secure and efficient development environment.

Centralized Security Scanning

Adopting GHAS as a centralized platform for security scanning has provided us with a command and control center for managing vulnerabilities emanating from both source code and dependencies. The Security Center dashboard offers a comprehensive overview of vulnerabilities, CVEs, and the most affected repositories, allowing us to prioritize and focus our remediation efforts effectively.

Since the inception of GHAS in our development practices, we’ve observed a notable reduction in security vulnerabilities, with over 30,000 issues addressed. This achievement underscores the effectiveness of GHAS in enhancing our security posture, demonstrating its value not only in safeguarding our applications but also in supporting our broader business objectives of innovation and growth.

Conclusion

With GHAS and shifting security left into the developer workflow, we’ve achieved remarkable success: we’ve eliminated 600+ secrets from source code (now down to zero, and no new secrets can enter our code), removed more than 30,000 security vulnerabilities, and use AutoFix to continuously remediate new ones.

Using GitHub as a Software Development Platform improves Developer Experience

Introduction

This is part 1 of:

Explaining how Carlsberg unifies development on GitHub and accelerates innovation with Copilot in more detail.

Navigating Complexity: The Challenge of Multiple Development Tools

In the fast-paced world of software development, managing multiple tools can become a bottleneck that impedes efficiency and innovation. At Carlsberg, our developers and engineers were navigating a complex toolchain landscape that included GitHub, GitLab, BitBucket, Azure DevOps, Jenkins, Nexus, SonarQube, and both Azure and AWS Container Registries. This multiplicity not only slowed down our processes but also fragmented our development environment, leading to increased context-switching and security vulnerabilities.

Unified Platform: Adopting GitHub for Streamlined Operations

The decision to streamline our development tools into a single, integrated platform came as a strategic move to enhance our operational efficiency and bolster security measures. We chose GitHub as our all-encompassing platform for several reasons:

  • GitHub Source Control provides a robust system for tracking changes in computer files and coordinating work on those files among multiple people.
  • GitHub Actions makes it easier to automate all your software workflows, now with world-class CI/CD. Build, test, and deploy your code right from GitHub.
  • GitHub Packages serves as a software package hosting service that allows you to host your software packages privately or publicly and use packages as dependencies in your projects.

By migrating from Jenkins to GitHub Actions, from SonarQube to GitHub’s integrated code scanning features, and from Nexus and various container registries to GitHub Packages, we were able to retire outdated systems and reduce our toolchain complexity significantly.
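
For illustration, a Jenkins build typically collapses into a short GitHub Actions workflow along these lines (the job name and steps are illustrative, not our actual pipeline):

```yaml
# Minimal sketch of a GitHub Actions workflow replacing a Jenkins build.
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the repository
      - run: npm ci && npm test     # install dependencies and run tests
```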

Enhanced Productivity and Security: The Benefits of Consolidation

The consolidation has profoundly impacted our software development operations by centralizing source control and unifying the development environment. Our engineers now enjoy a streamlined workflow with reduced context-switching, thanks to a centralized pipeline and improved build and deployment processes managed through GitHub Actions. The centralization of package management through GitHub Packages has also enhanced the efficiency of managing and sharing package dependencies.

By reducing our tools from nine to one, we’ve not only simplified our technology stack but also enhanced our capability to manage projects more effectively, ensuring that our software development practices continue to support Carlsberg’s legacy of innovation.

Future Focus: Continuing Innovation at Carlsberg

As we continue to refine our approach and leverage the full potential of GitHub, our focus remains on innovation and efficiency. The journey of consolidating our development tools has been a pivotal step in our digital transformation strategy, positioning Carlsberg at the forefront of technological advancement in the beverage industry.

Carlsberg unifies development on GitHub and accelerates innovation with Copilot

I’m honored that GitHub has chosen to do a customer story on how we’re transforming software development at Carlsberg: https://github.com/customer-stories/carlsberg-group and a friggin awesome movie.

The movie was also used by Satya Nadella in the Build 2024 Keynote:

To provide a bit more background information I’ve written some posts:

The Four Tenets of SOA

The SOA tenets originally appeared back in 2004 when Don Box published an article on MSDN called “A Guide to Developing and Running Connected Systems with Indigo” (Indigo is what’s known today as Windows Communication Foundation, or WCF for short). Don Box wrote that WCF is based on SOA principles and that, unlike other approaches, specifically object orientation, SOA requires a different set of assumptions:

In Indigo, a service is simply a program that one interacts with via message exchanges. A set of deployed services is a system. Individual services are built to last—the availability and stability of a given service is critical. The aggregate system of services is built to allow for change—the system must adapt to the presence of new services that appear a long time after the original services and clients have been deployed, and these must not break functionality.

Although Microsoft has pledged to keep MSDN Magazine online, at the time of writing the article linked above is not available.

Service-oriented development is based on the four fundamental tenets that follow:

  • Boundaries are explicit 
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is determined based on policy 

Let’s go over what that means in terms of modern REST services.

Four Tenets

Boundaries are explicit

Services interact by sending messages across boundaries. These boundaries are formal and explicit. No assumptions are made about what is behind boundaries, and this preserves flexibility in how services are implemented and deployed.

This means that:

  • You must treat all services as external to you
  • Internal (private) implementation details should not be leaked outside of a service boundary
  • Avoid RPC interfaces because this can lead to an overuse of calls – accessing a service is not the same as accessing a local object

Services are autonomous

Services are not subservient to other code: a service reacts to a message – how that message was created and what will happen to any response the service creates is immaterial to the action that this service will take.

This means that:

  • Deploy and version services independently from the clients
  • Design contracts with the assumption that once published, they can’t be modified

Services share schema and contract, not class

Only messages pass from service to service, code does not.

This means that:

  • Contracts should be designed to be as explicit as possible to minimize misinterpretation
  • A service must be able to convert its native data types to and from some technology-neutral representation
  • The contract must be versioned using semantic versioning
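
The tenet can be sketched in a few lines: only a technology-neutral representation (here JSON) crosses the service boundary, never the class itself. The Order shape is illustrative:

```typescript
// The shared schema: a plain data shape, not a class with behavior.
interface Order {
  id: string;
  quantity: number;
}

// Convert a native object to a technology-neutral representation (JSON)...
function toWire(order: Order): string {
  return JSON.stringify(order);
}

// ...and back again on the other side of the boundary.
function fromWire(wire: string): Order {
  return JSON.parse(wire) as Order;
}
```

Any client that understands the schema can consume the message, regardless of the language or framework the service itself is written in.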

Service compatibility is determined based on policy

A service must be able to express in a standard representation of policy what it does and how clients should communicate with it.

This means that:

  • The policy must be exposed using an OpenAPI Specification
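
As a sketch, such a policy expressed as a minimal OpenAPI document might look like this (the service name and path are illustrative):

```yaml
# Minimal sketch of an OpenAPI document expressing a service's "policy":
# what it does and how clients should communicate with it.
openapi: "3.0.3"
info:
  title: Order Service
  version: "1.0.0"
paths:
  /orders/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
```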

ChatGPT Vision could be used to destroy incumbent image recognition solutions in CPG

I finally got access to ChatGPT Vision and I uploaded a picture of some beers I had at home:

This is the response from ChatGPT:

Nice selection of beers you got there! I see Perlenbacher, which is often associated with Lidl supermarkets, and of course, the iconic Carlsberg. Carlsberg, founded in 1847, is one of the most well-known beer brands globally and is based out of Copenhagen, Denmark. “Birra Messina” seems to be a Sicilian beer. Each of these beers brings a different taste profile to the table. Enjoy responsibly! Would you like to know more about any specific beer?

I then prompted this using TypeChat:

Please attempt to recognize the cans in the picture and fill out the following response:
interface Response {
  data: Array<{ name: string, brand: string, count: number }>;
}

And I got this from ChatGPT:

Sure, here’s the response based on the image:

interface Response {
  data: Array<{ name: string, brand: string, count: number }>;
}

const beerData: Response = {
  data: [
    { name: "Premium Pilsner", brand: "Perlenbacher", count: 2 },
    { name: "Pilsner", brand: "Carlsberg", count: 2 },
    { name: "Classic", brand: "Birra Messina", count: 1 }
  ]
};

This is based on the visible labels in the image.

No training – no nothing. I know the internet is full of examples of people having ChatGPT turn Figma drawings into websites and apps – but this is totally outside ChatGPT’s comfort zone, and I have previous experience trying to teach image recognition of packaged goods to models: it’s not that easy.

Solutions based on this could destroy the incumbent vendors in image recognition in the CPG market.


© 2024 Peter Birkholm-Buch
