What we do
We help organisations adopt artificial intelligence as a capability of the business — deliberately, with the controls and governance to make the adoption safe, sustainable, and defensible.
This is not a separate practice from the rest of what we do. It is the same information security practice, applied to a category of technology that is changing faster than most organisations can govern unaided. The starting question is not “how do we deploy AI” — it is “what does AI do to the data, the processes, and the obligations we already have to manage?”
Why this matters now
AI is an enabler and a multiplier. Used well, it changes how a business operates — the speed of decisions, the leverage on knowledge work, the things that become economically viable to do at all. The consequences of getting it wrong are also real, and many of them are unrecoverable.
Once the security and privacy horse has bolted from the stable, you cannot put it back. Data sent to an external model has been seen by that model. Information embedded into a third-party AI tool has been processed by infrastructure outside your control. Decisions made by an AI system are decisions your organisation has made, regardless of where the model ran. These are not abstract risks — they are operational realities that need to be managed before AI is deployed, not after.
The right response is not to abstain. AI is becoming part of how business is done, and organisations that opt out of the question entirely will find themselves at a competitive disadvantage. The right response is to adopt deliberately — with governance, with controls, and with a clear understanding of what is being changed. That is the work this practice supports.
When organisations need this
Most clients come to AI work for one of a few reasons:
They have decided that AI adoption is now a strategic question for the business and they need to think it through properly before deploying anything. The board is asking. The leadership team is hearing competitors talk about it. The decision is on the table.
They are already using AI — Copilot, ChatGPT, embedded features in existing tools — without an explicit governance framework, and they have realised that ad-hoc use is producing risks that are not being managed. They want to bring the existing use under control before the next decision is made.
They have a specific use case in mind — automating a workflow, augmenting a team’s output, deploying an AI-assisted tool for clients — and they want the deployment shaped by security thinking rather than by enthusiasm alone.
They have an obligation that AI use intersects with — POPIA, FSCA, sector regulation, customer contracts, supplier audits — and they need the AI strategy to be defensible to whoever is asking the questions.
They are deploying AI infrastructure for sensitive workloads and need it operated privately, where the threat model rules out sending data to public model providers.
If your situation does not look like one of these, AI work may not be the right starting point. Our Advisory practice is often a better entry point — we can help you work out whether and how AI fits, before any deployment is committed to.
How we approach the work
AI work is shaped by the same disciplines that shape the rest of the practice. The technology is new; the underlying questions are not.
Security thinking comes first. Most AI advisory in the market starts from “what can we do with AI?” and works toward “how do we secure it?” We do the work the other way around. The first question is what the AI use changes about the data flows, the processes, the obligations, and the decision authority. Once that is understood, the deployment options follow. This ordering matters because the decisions made early — what data is exposed to what models, where inference runs, who is allowed to do what — are difficult to walk back.
The risks are not just the conventional cyber ones. AI introduces new categories of risk that conventional security thinking does not fully cover. Data ends up in places that are hard to track and harder to retract. Outputs that look authoritative can be wrong in ways that are hard to detect. Behaviours that the system was not designed for can emerge from how it is used. We name these categories explicitly in advisory work and in governance documents — not as a comprehensive catalogue, but as the specific risks that your particular use case creates.
Governance before tools. Before deploying AI capability, the governance framework needs to exist. Acceptable use policy. Data classification rules that name what can and cannot be exposed to AI systems. Decision-making authority around AI-assisted outputs. Logging and review disciplines. For many organisations these exist on paper but not in operating practice. We help close that gap.
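To make the gap between paper and operating practice concrete, here is a minimal sketch of a data classification gate in Python. Everything in it is an assumption for illustration: the classification levels, the exposure ceiling, and the routing function are invented names, not a description of any client's controls or of our own.

```python
from enum import IntEnum

class Classification(IntEnum):
    # Illustrative levels; a real scheme comes from the organisation's policy.
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical ceiling: nothing above INTERNAL may leave the environment.
EXTERNAL_CEILING = Classification.INTERNAL

def route_ai_request(classification: Classification) -> str:
    """Decide where an AI request may run, based on the classification
    of the data it carries."""
    if classification <= EXTERNAL_CEILING:
        return "external"  # a public model provider is permitted
    return "private"       # must stay on infrastructure the organisation controls

# A RESTRICTED document never reaches a public provider.
assert route_ai_request(Classification.RESTRICTED) == "private"
```

The point is not these few lines of code; it is that the rule lives somewhere enforceable rather than only in a policy document.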
Adopt incrementally, learn deliberately. The right adoption pattern is rarely big-bang. Specific use cases, deployed with controls, observed in operation, learned from. We help structure this — choosing where to start, what to measure, what to do differently next time.
Use what we have used ourselves. WR360 operates AI infrastructure internally — for our own work, in our own environment, under our own governance. The recommendations we make are shaped by the experience of operating AI in a practice that takes information security seriously. We do not recommend approaches we have not lived inside.
What we cover
We work across the AI adoption stack rather than specialising in any single layer. The shape of the work varies by client, but the capabilities are consistent.
AI strategy and adoption planning. Working with leadership to define what AI is being adopted, why, in what sequence, with what controls. The output is a roadmap that makes the adoption deliberate rather than reactive.
AI governance and policy. Acceptable use policies, data classification frameworks for AI exposure, decision-making frameworks for AI-assisted outputs, and the supporting controls — written, embedded, and operating. This is where information security thinking does its most direct work.
Private AI infrastructure. For clients whose threat model warrants it — sensitive data, regulatory obligations, contractual constraints, or simply the strategic preference for sovereignty over inference — we deploy AI infrastructure privately, where data remains inside the controlled environment. This is not the right answer for every client; for the clients it is right for, the difference is foundational.
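As one illustration of what keeping data inside the controlled environment can mean in practice: many self-hosted inference servers (vLLM, Ollama, and llama.cpp's server, among others) expose an OpenAI-compatible API, so pointing an application at private infrastructure is often a configuration change rather than a rewrite. The sketch below assumes such a server; the endpoint URL and model name are placeholders, not a description of our architecture or any client's.

```python
from openai import OpenAI

# Placeholder endpoint: a self-hosted, OpenAI-compatible inference
# server running inside the organisation's own environment.
client = OpenAI(
    base_url="https://inference.internal.example/v1",
    api_key="handled-by-your-own-access-controls",  # many private servers ignore this value
)

response = client.chat.completions.create(
    model="local-model",  # whichever model the private server is serving
    messages=[{"role": "user", "content": "Summarise this internal document."}],
)
print(response.choices[0].message.content)
```

The application code barely changes; what changes is where inference runs and who can see the data.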
AI-assisted workflow and automation. We design and operate workflow automation that incorporates AI capability — document processing, integration logic, decision support, content generation. Built with the same disciplines as the rest of the practice: documented, governed, monitored, observable. The automation is meant to be useful and accountable, not opaque.
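A sketch of what accountable rather than opaque can look like at the code level: every AI-assisted step leaves an audit record and flags its output for human review before anything downstream acts on it. The function and field names here are invented for the example, not a fixed schema we impose.

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai.audit")

def ai_assisted_step(user: str, input_ref: str, call_model) -> dict:
    """Run one AI-assisted step in a workflow. `call_model` is whatever
    inference call the workflow uses; this wrapper exists for the audit
    trail and the review gate, not the model call itself."""
    output = call_model(input_ref)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "input_ref": input_ref,   # a reference to the input, not the content itself
        "output_length": len(output),
        "human_reviewed": False,  # flipped only when a person signs off
    }
    audit.info(json.dumps(record))
    return {"output": output, "audit": record}
```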
Agentic system evaluation. Where clients are considering or deploying agentic systems — AI that takes actions rather than just answering questions — we help evaluate the design, the controls, and the failure modes. Agentic systems introduce a different risk profile from generative-only AI; the work reflects that.
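One control pattern we look for when evaluating agentic designs is explicit gating of actions: the agent proposes, policy decides, and anything not explicitly permitted is refused. A minimal sketch, with hypothetical action names and a stubbed dispatcher:

```python
# Actions the agent may take unattended, and actions that need a person.
ALLOWED_UNATTENDED = {"read_document", "draft_reply"}
REQUIRES_APPROVAL = {"send_email", "update_record"}

def run_tool(action: str, args: dict):
    ...  # dispatch to the real tool implementations

def execute_agent_action(action: str, args: dict, approved: bool = False):
    """Gate every action an agent proposes. Deny by default; consequential
    actions wait for explicit human approval."""
    if action in ALLOWED_UNATTENDED:
        return run_tool(action, args)
    if action in REQUIRES_APPROVAL and approved:
        return run_tool(action, args)
    raise PermissionError(f"Agent action blocked by policy: {action!r}")
```

Deny-by-default is the heart of it; the evaluation work is deciding which actions belong on which list, and what approval actually means in the client's context.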
Adoption support. Once a strategy is in place and tools are deployed, the practical work of helping the team use AI effectively — usage patterns, prompt design at the workflow level, the disciplines that make AI productive in operation rather than just available.
A note on what we do not do
We are not an AI research practice. We do not train custom foundation models. We do not produce novel machine learning research. Where these are needed, we work alongside specialist providers or recommend them.
We also do not pretend that AI is suitable for every problem. Some workflows benefit from AI; some do not; some are actively harmed by AI introduction. The advisory conversation is partly about deciding which is which. We will tell you when we think AI is the wrong tool for what you are trying to do.
Who this is for
It works for organisations that have decided AI adoption is now a strategic question worth answering deliberately, and that want it shaped by people who treat information security as the parent discipline.
It works for organisations that have lived through the alternative — uncontrolled AI use producing data exposure, governance gaps, or compliance findings — and have decided the next deployment has to be different.
It works for organisations with regulatory obligations around data, decisions, or assurance — where AI use needs to be defensible to whoever is asking, not just functionally working.
It works for organisations that want AI capability deployed in their own infrastructure rather than in third-party model providers, for sovereignty, regulatory, or strategic reasons.
It works less well for organisations that want AI as a procurement decision rather than as a discipline. AI is not a product to be selected once and then forgotten. The governance has to operate continuously. Clients who want a one-off deployment without ongoing oversight will find this practice expensive relative to what they actually want.
It does not work for organisations that have already decided on a deployment without thinking through the governance. We can help you fix the governance afterwards, but the cost is higher than getting it right the first time.
How we work with you
AI engagements typically run as advisory or consulting work, scoped around a specific question or set of decisions. The shape varies — see Engagement Models for detail.
A typical pattern starts with an advisory engagement to define the strategy, the governance, and the initial scope. From there, the work flows into more specific deliverables — policy work, infrastructure deployment, automation design, ongoing adoption support — depending on what the client needs. Some clients move quickly through advisory into deployment; others sit in advisory longer because the strategic questions are still being answered.
We work alongside internal teams where they exist. AI is one of the few areas where internal expertise is genuinely scarce in the SME and mid-market; our role is often to bring framework depth and operational discipline alongside the client's domain knowledge of their own business.
What it looks like in practice
We use AI inside our own practice. The disciplines we recommend to clients — acceptable use, data classification, output review, audit trails, infrastructure choices — are the disciplines we apply to ourselves first.
This shapes the recommendations we make. We have lived with the operational realities of running AI in a practice that takes information security seriously. We know where the friction points are, which controls add value and which add only friction, and what the realistic adoption arc looks like for an organisation taking AI seriously for the first time.
We are also careful with what we say publicly about how we operate AI internally. The architecture, the specific tools, the integration patterns, and the operational disciplines are part of the practice — not a public marketing artefact. Clients who engage seriously can see the work; the public page describes the shape of it.
A few honest things to know
AI adoption is genuinely fast-moving. The vendor landscape is shifting; the model capabilities are shifting; the regulatory landscape is shifting. We are explicit with clients that recommendations made today may need to be revisited in six months. The disciplines we put in place — governance, classification, controls, review — are designed to survive that change. The specific tools may not.
AI adoption is also genuinely useful. We do not believe the right response to AI risk is to abstain. Organisations that opt out entirely will find themselves at a competitive disadvantage as their peers, suppliers, and customers embed AI into how they operate. The work is to adopt deliberately, not to avoid the question.
We will sometimes recommend slowing down. When a client wants to deploy AI capability before the governance framework is in place, we will say so. When a use case is more trouble than it is worth, we will say so. The advisory voice does not change just because the topic is exciting.
We will not pretend AI is something it is not. AI is not magic. It is not a substitute for thinking. It is a category of tooling with specific capabilities and specific limitations, deployed inside organisations with specific obligations. Treating it as a strategic miracle produces the kind of decisions that show up later as audit findings or as data leakage events. We treat it as what it is — and we recommend clients do the same.
What this connects to
AI work does not stand alone. It is one application of the practice.
The Information Security Management System practice is what AI deployments are governed by. AI use is a control concern, a data flow concern, and an audit concern — all of which the ISMS frames.
Advisory and Architecture is where most AI conversations begin. Strategy, governance, and deployment decisions are advisory questions before they are operational ones.
Technology Operations is where AI infrastructure is operated when deployed privately. The same disciplines that operate the rest of the environment apply to AI workloads.
Monitoring and Response is what produces visibility into AI use — what is being done, by whom, with what data, with what outcomes. AI without monitoring is AI without accountability.
Procurement is downstream of AI decisions. Once the strategy is settled and the tools are chosen, procurement is how the licensing and supporting arrangements come into the practice.
Want to talk?
The fastest way to start is to tell us where you are with AI. Whether you have a specific use case in mind, are responding to leadership pressure, are concerned about uncontrolled use already happening, or are evaluating private deployment for sensitive workloads — we will read it carefully and reply with what we think a useful conversation would look like.