Blog | SAGE Group

7 Steps towards making AI a differentiator in your business

Written by Francois Botha | March 16, 2026

There has been an incredible amount of media attention (or what many call ‘hype’) on how AI is going to take over the world, but it seems, at least to me, that there is very little in the way of concrete explanations of HOW businesses can leverage AI. 

The journey of AI adoption follows a path from augmentation to automation and ultimately differentiation. In this article, I aim to provide (in simple terms) a playbook towards making AI a differentiator for your organisation. 

Step 1: Understanding

The first step in adopting any new technology is having a sufficient base understanding of it, enough at least to be able to formulate an initial strategy.

LLMs, the large language models underpinning the likes of ChatGPT, are, in essence, just next-word predictors: they take a partial sentence and provide the next word. They do this over and over, until you have an answer to your query – powerful, but not magic.
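To make the "next-word predictor" idea concrete, here is a toy sketch of autoregressive generation. It uses a trivial bigram table rather than a real neural network, and the tiny corpus is purely illustrative, but the loop structure is the point: each predicted word is appended and fed back in to predict the next one.

```python
import random

# A toy bigram "language model": record which words follow each word in a
# tiny corpus, then generate text by repeatedly predicting the next word.
corpus = "the model predicts the next word and the next word after that".split()

bigrams = {}
for current, following in zip(corpus, corpus[1:]):
    bigrams.setdefault(current, []).append(following)

def next_word(word, rng):
    # A real LLM samples from a probability distribution over tokens;
    # here we just pick among observed successors.
    return rng.choice(bigrams.get(word, ["<end>"]))

def generate(prompt_word, length, seed=0):
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        nxt = next_word(words[-1], rng)
        if nxt == "<end>":
            break
        words.append(nxt)  # each prediction is fed back in: autoregression
    return " ".join(words)

print(generate("the", 5))
```

Scale this idea up to billions of parameters and a web-sized corpus and you have, conceptually, what sits behind ChatGPT.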

They have no memory. The illusion of memory is merely providing the platform with the history of your previous conversations every time you interact with it.
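The illusion of memory can be shown in a few lines. The `fake_llm` function below is a stand-in for a real model call (which would accept a similar message list); the key detail is that the client re-sends the entire conversation history on every request, and the "model" stores nothing between calls.

```python
# A sketch of the "memory illusion": the model call is stateless, and only
# appears to remember because the full history is re-sent each time.
# `fake_llm` is a hypothetical stand-in for a real chat API.

def fake_llm(messages):
    # Answers based only on what is in `messages`; nothing persists between calls.
    last_user = [m["content"] for m in messages if m["role"] == "user"][-1]
    if "name" in last_user:
        for m in messages:
            if m["role"] == "user" and "I am" in m["content"]:
                name = m["content"].split("I am ")[1].rstrip(".")
                return "You said you are " + name + "."
        return "You never told me your name."
    return "OK."

history = []

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)          # the full history is sent every time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Hello, I am Alice.")
print(chat("What is my name?"))  # works only because history was re-sent
```

If the first call's message were dropped from `history`, the second answer would change: the "memory" lives entirely in the client.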

The same is true for what is called retrieval-augmented generation, or RAG: the ability to provide the LLM with information about your organisation so that it can answer questions based on relevant facts. Information is looked up and provided with your query to the LLM for it to provide a coherent answer.
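A minimal sketch of that lookup-then-ask pattern follows. Production RAG systems rank documents with vector embeddings; plain keyword overlap is used here only to keep the example self-contained, and the documents are invented.

```python
import string

# Toy retrieval-augmented generation: score documents against the query,
# then prepend the best match to the prompt. Real systems use embeddings;
# keyword overlap keeps this runnable without any dependencies.

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Head office is located in Adelaide, South Australia.",
    "Support hours are 9am to 5pm on weekdays.",
]

def tokens(text):
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(query, docs):
    # Pick the document sharing the most words with the query.
    return max(docs, key=lambda d: len(tokens(query) & tokens(d)))

def build_prompt(query):
    context = retrieve(query, documents)
    # The LLM never "knows" your data; it only sees what you paste in.
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using the context."

print(build_prompt("What is the refund policy?"))
```

The final prompt, context included, is all the model ever receives; the retrieval step is ordinary software running before the LLM call.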

LLMs can make use of tools, like the ability to query a database or initiate a business process. Model Context Protocol (MCP) is just a standardised way of letting the LLM know which tools are available. In the end, it is the software layer that is responsible for actually running the tool and providing the context back to the LLM. Not exactly magic, but extremely powerful. 
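The division of labour between model and software layer can be sketched as follows. The tool names, arguments, and JSON shapes below are illustrative, not the actual MCP wire format; the point is that the model only requests a tool, while your application executes it and returns the result.

```python
import json

# A sketch of tool use: the model (stubbed here) replies with a tool request,
# and the application layer -- not the model -- actually runs the tool and
# feeds the result back. Names and message shapes are illustrative only.

def query_orders(customer_id):
    # Stand-in for a real database query.
    return {"customer_id": customer_id, "open_orders": 2}

TOOLS = {"query_orders": query_orders}

def fake_llm(prompt, tool_result=None):
    if tool_result is None:
        # First pass: the model asks for a tool call.
        return json.dumps({"tool": "query_orders", "args": {"customer_id": "C42"}})
    # Second pass: the model answers using the tool output it was given.
    return f"Customer has {tool_result['open_orders']} open orders."

def run(prompt):
    request = json.loads(fake_llm(prompt))
    fn = TOOLS[request["tool"]]        # the software layer runs the tool...
    result = fn(**request["args"])
    return fake_llm(prompt, result)    # ...and hands the result back to the model

print(run("How many open orders does customer C42 have?"))
```

MCP's contribution is standardising how the `TOOLS` catalogue is advertised to the model, so any MCP-aware client can discover and call your tools the same way.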

Step 2: Security

Next, you have to consider security.

You probably want your employees to leverage the immense power of LLMs, but as a business leader, you might be conflicted by the prospect of losing control over confidential information.

Anthropic, OpenAI's major competitor and the maker of Claude, has recently released a series of very funny ads mocking OpenAI's intention to start serving advertisements alongside LLM answers.

Funny, but also a bit scary. 

 

So, what are your options:

  • You can use public services like ChatGPT. This is the least secure option, but a good one when you need access to state-of-the-art (SOTA) models. If you choose this option, it is very important to configure the service not to use your data for training purposes.
  • You can run open-source models (e.g. GPT OSS or Llama) on your own hardware hosted on premises. This is the most secure option as no confidential information leaves the business, and it’s a viable solution for extremely sensitive workloads. This option does come with its drawbacks in terms of complexity and capital expenditure.
  • A third option is to make use of cloud services to provide LLM services, for example, AWS Bedrock, which is a fully managed cloud service providing access to foundational models and capabilities for building generative AI applications. AWS is recognised as providing highly secure options, in this case, meeting the standards of the Australian Defence Force, which has agreements in place to support national security and defence priorities.

Of the three options, running models on premises is, unless your security requirements demand it, probably the least desirable. This is primarily due to global supply chain constraints in securing the required hardware, together with its inevitable obsolescence given the fast-moving pace of AI advancements.

The next section describes how you can maximise your outcomes by implementing a combination of the first and last options. 

Step 3: Augmentation

Humans are experts at both using tools and adapting, and many of your employees are already making use of LLMs.

How is your business losing out by not formalising their use?

As described above, when employees do not have access to an internal LLM service, there is no control over the exposure of confidential information. Implementing a corporate LLM service allows you to put an LLM gateway in place that routes prompts based on sensitivity: sensitive queries go to a privately hosted open-source LLM, while less sensitive prompts go to public SOTA models. AWS Bedrock can also automatically apply guardrails to all queries, such as identifying personally identifiable information (PII).

Another opportunity lost by not formalising your LLM service is in the area of organisational learning. You could be capturing how employees are interacting with LLMs and which problems they are aiming to solve, such as emerging pain points and inefficiencies. This will highlight actual business problems that can be solved in a more structured way with automation. Having user prompts logged for analysis is trivial using AWS Bedrock. 
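The routing and logging ideas above can be sketched together. The PII check below is a crude regex (a service such as Bedrock Guardrails does this properly), and the endpoint names are invented; the sketch shows how a gateway can both route by sensitivity and capture every prompt for later analysis.

```python
import re
import logging

# A minimal LLM-gateway sketch: classify each prompt for sensitivity, route
# it to a private or public model accordingly, and log every prompt so usage
# patterns can be analysed later. Endpoint names are illustrative only.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-style number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email address
]

def route(prompt):
    sensitive = any(p.search(prompt) for p in PII_PATTERNS)
    target = "private-onprem-model" if sensitive else "public-sota-model"
    log.info("routing prompt to %s", target)   # captured for organisational learning
    return target

print(route("Summarise this quarterly report"))
print(route("Email jane.doe@example.com about her account"))
```

Even this toy version yields two benefits at once: sensitive prompts never leave the business, and the log becomes a dataset of real employee problems worth automating.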

Step 4: Automation

Providing a secure way for employees to access LLMs, and learning from how they use them, is a valuable stepping stone to the next phase: automation.

When attempting to automate business processes, the first step is to document them in their current state. A widely accepted way of doing this is Business Process Model and Notation (BPMN). Nukon has extensive experience in leveraging BPMN to automate complex workflows for our clients.

Having your business processes well defined will show you where you can automate decisions using LLM agents, and where to include a human to verify the results produced by AI agents where required (the 'human-in-the-loop' concept).
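A human-in-the-loop checkpoint can be as simple as a confidence gate in the workflow. The scoring stub, field names, and the 0.8 threshold below are all illustrative; in practice the gate would sit at a defined step in your BPMN process.

```python
# A sketch of a human-in-the-loop checkpoint: an agent decision below a
# confidence threshold is queued for human review instead of being applied
# automatically. The threshold and scoring logic are illustrative only.

REVIEW_THRESHOLD = 0.8
review_queue = []

def agent_decide(invoice):
    # Stand-in for an LLM agent scoring an approval decision.
    confidence = 0.95 if invoice["amount"] < 1000 else 0.6
    return {"approve": True, "confidence": confidence}

def process(invoice):
    decision = agent_decide(invoice)
    if decision["confidence"] < REVIEW_THRESHOLD:
        review_queue.append(invoice)       # escalate to a human task in the workflow
        return "pending-human-review"
    return "auto-approved"

print(process({"id": 1, "amount": 250}))
print(process({"id": 2, "amount": 50000}))
```

Low-value invoices flow straight through, while anything the agent is unsure about lands in a review queue for a person to clear.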

 

Clearly laid out business processes, together with logged information on how employees are currently using AI tools, will shine a spotlight on which IT services should be exposed as MCP tools for your AI agents to use.

Step 5: Curation

This step involves the identification and curation of data sources to be used in AI workflows to add context, either through RAG or by fine-tuning models.

Data can be sourced either from your own proprietary data, synthetic data, or data obtained from external sources such as Kaggle. This is one of the more difficult areas in the AI journey, but at the same time, it provides the greatest opportunity for differentiation. The saying ‘Data is gold’ is particularly true when it comes to AI workflows.

Step 6: Validation

A key driver of successful AI adoption is the ability to validate AI outputs; in other words, "how do we trust the output?"

My background is in software development, a profession facing major disruption at the moment. The reason for the overwhelming success of AI in the software development space is that AI outputs can be validated, either with automated unit tests or by checking that they satisfy the requirements documented in a specification.

This lesson of validating AI output can be applied to AI workflows in general. A simple use case for AI adoption in an organisation is providing the LLM with added context in a particular area, and having it answer a question based on it (RAG). The problem is that RAG is not foolproof in its ability to provide useful context. Fortunately, the effectiveness of providing the LLM with contextual information can be formally measured. One way of doing this is by calculating the MRR (Mean Reciprocal Rank). This is a topic that could get very technical very quickly, but the important takeaway is that AI workflows need to be validated.
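For the curious, MRR is straightforward to compute: for each test query, find the rank at which the first relevant document appears in the retriever's results, take its reciprocal, and average across queries. The evaluation data below is invented for illustration.

```python
# Mean Reciprocal Rank: for each test query, take 1/rank of the first
# relevant document the retriever returned, then average over all queries.
# An MRR near 1.0 means the right context is usually retrieved first.

def mean_reciprocal_rank(results):
    # `results` is a list of (retrieved_ids, relevant_ids) pairs, one per query.
    total = 0.0
    for retrieved, relevant in results:
        rr = 0.0
        for rank, doc_id in enumerate(retrieved, start=1):
            if doc_id in relevant:
                rr = 1.0 / rank   # reciprocal rank of the first relevant hit
                break
        total += rr
    return total / len(results)

eval_set = [
    (["d3", "d1", "d7"], {"d1"}),   # first relevant at rank 2 -> 1/2
    (["d2", "d4", "d9"], {"d2"}),   # rank 1 -> 1
    (["d5", "d6", "d8"], {"d0"}),   # never retrieved -> 0
]
print(mean_reciprocal_rank(eval_set))  # (0.5 + 1.0 + 0.0) / 3 = 0.5
```

Tracking a metric like this over a fixed evaluation set tells you whether changes to your retrieval pipeline are actually improving the context your LLM receives.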

LLM output is by its nature non-deterministic (that is, it can produce a different result each time it is given the same input), unlike traditional software, which gives a guaranteed output for a given input. This is a serious shortcoming in business, where certainty of outcomes for some use cases is non-negotiable.

By making use of BPMN to formalise workflows with human-in-the-loop checks and validation steps, the risks of non-deterministic output can be mitigated.
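One way this mitigation looks in code is a validate-and-retry wrapper around the model call: the output must pass a deterministic check before the workflow proceeds, and repeated failures escalate to a human. The extraction stub below is hypothetical and deliberately fails on its first attempt to show the retry path.

```python
# A sketch of mitigating non-determinism with an explicit validation step:
# LLM output must pass a deterministic check before the workflow continues;
# otherwise it is retried, then escalated to a human. Stubs are illustrative.

def fake_llm_extract(text, attempt):
    # Stand-in for a model call; first attempt returns malformed output.
    return "about forty-two" if attempt == 0 else "42"

def is_valid(output):
    return output.isdigit()            # deterministic, testable acceptance check

def extract_with_validation(text, max_attempts=3):
    for attempt in range(max_attempts):
        output = fake_llm_extract(text, attempt)
        if is_valid(output):
            return output
    return None                        # escalate: a human-in-the-loop step in BPMN

print(extract_with_validation("The invoice total is 42 dollars"))
```

The validation function is ordinary deterministic software, so the overall workflow regains the testability that raw LLM output lacks.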

Step 7: Organisation

The final piece of the puzzle is organisational.

Decide how AI adoption fits into current strategic priorities. Decide how you want to address security issues. Evaluate your current organisational structure for alignment (e.g. since 2024, every U.S. federal agency has been required to appoint a Chief AI Officer). Empower employees to use internal AI tools and learn from the process. Provide training. Document business processes. Inventory and curate information sources and tools. Formalise everything in tactical plans and company policy. Start executing. 

 

Getting your business on the AI differentiation path

 

Augmentation and automation are about survival, but differentiation will determine the winners of this era. Following the steps outlined above can put you on a path towards realising cost savings and enhancing customer satisfaction in your operations, moving your business closer towards achieving its objectives.

Make sure you stay tuned for further articles, where we’ll look at how to align AI with your strategic priorities.

About the author:  Francois Botha

A seasoned full-stack developer and team lead with a strong foundation in accounting, Francois has a proven track record of driving digital transformation through innovative software solutions. His unique blend of technical expertise and business acumen, honed through formal training as an accountant and experience with a 'Big 4' accounting firm, enables Francois to create software solutions that address complex business challenges. 

As a Technical Lead at Nukon, Francois has been instrumental in implementing digital supply chain solutions for large-scale operations, utilising his skills in integration, inventory, demand and production planning. 

Francois' technical proficiency spans across a range of technologies, including Amazon Web Services (AWS), Java (Spring Boot), Apache Camel, JavaScript, Vue.js and Cucumber BDD testing framework. He is dedicated to applying advanced technologies to enhance organisational efficiency, optimise processes, and identify new strategic growth opportunities. With a commitment to continuous technical development, he plays an integral role in establishing strong AI foundations that position organisations for long-term innovation and success.

 

From AI Differentiation to Strategic Alignment

AI can be a powerful differentiator, but its true value emerges when it is embedded within an organisation’s broader strategy. Moving beyond experimentation, leading organisations are now focusing on how AI initiatives directly support their business priorities, operational goals, and long‑term growth.

In an upcoming article, we’ll explore how to align any AI investments with strategic objectives to ensure they deliver measurable and sustainable impact for you and your teams. To be sure you don't miss it, subscribe to our newsletter.