Your DLP Strategy Probably Does Not Cover AI - That Is a Problem

The data loss vector nobody planned for

Data Loss Prevention has been a staple of enterprise security for years. Block sensitive files from being emailed externally. Prevent USB drives from being used on corporate devices. Monitor for credit card numbers and personally identifiable information leaving the network. These controls still matter, but they were designed for a world that no longer exists.

Today, the most common way sensitive data leaves an organisation is through AI tools. Employees paste financial models into ChatGPT to get a summary. They upload customer data to AI analytics platforms. They use copilot features that send prompts containing proprietary information to cloud-hosted models. And in almost every case, the organisation's DLP controls do not see it happening.

Why traditional DLP misses AI data flows

Traditional DLP works by inspecting content at defined boundaries - email gateways, endpoint USB ports, cloud access security brokers. AI data flows do not follow these patterns. A prompt typed into a browser-based AI tool looks like normal web traffic. A copilot integration pulling data from SharePoint to generate a summary operates within sanctioned applications. An employee using an AI coding assistant may inadvertently expose proprietary source code through the prompts they write.
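
To make that concrete, here is a rough sketch of what a prompt to a browser-based AI tool looks like on the wire. The endpoint, field names and figures are invented for illustration, but the shape is the point: it is just a JSON POST to a SaaS domain, and a control watching email gateways or USB ports never sees it.

```python
# Hypothetical illustration: a prompt carrying sensitive data is an ordinary
# HTTPS POST. The endpoint, field names and pasted text are made up; they
# stand in for any browser-based AI tool.
import json
import urllib.request

prompt = (
    "Summarise this draft forecast: Q3 revenue 14.2m, margin 31%, "
    "planned headcount reduction in the Berlin office..."  # sensitive content
)

request = urllib.request.Request(
    url="https://api.example-ai-tool.com/v1/chat",   # looks like any SaaS call
    data=json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# A boundary control watching email or removable media never sees this request.
# Unless a forward proxy terminates TLS and inspects the body, the sensitive
# text leaves the network as unremarkable web traffic.
# urllib.request.urlopen(request)  # not executed here; the shape is the point
```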

The challenge is not just detection. It is classification. Most organisations cannot answer a basic question: what data is sensitive enough that it should never be entered into an AI tool? Without that classification, no DLP policy can be effective, because the system does not know what to protect.

What a modern DLP approach looks like

Effective data protection in 2026 requires three things that most organisations do not have in place.

First, clear data classification that is actually enforced. Not a policy document that says "all data must be classified", but a practical system where sensitive data is labelled, tracked, and subject to controls that prevent it from reaching AI tools. This does not need to be perfect across every file in the organisation - start with the data that would cause real damage if it leaked.
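
As a sketch of what "actually enforced" can mean in practice, the check below assumes documents already carry a classification label from an upstream tagging process. The label names and allow-list are illustrative policy choices, not a standard.

```python
# Minimal sketch of label-aware enforcement. Assumes a label (a simple string)
# has been applied by a tagging or scanning job; names are illustrative.
from dataclasses import dataclass

AI_ALLOWED_LABELS = {"public", "internal"}        # assumed policy decision
BLOCKED_LABELS = {"confidential", "restricted"}   # e.g. customer or deal data

@dataclass
class Document:
    name: str
    label: str      # set at creation time or by a classification job
    content: str

def may_send_to_ai(doc: Document) -> bool:
    """Return True only if the document's label permits AI processing."""
    if doc.label in BLOCKED_LABELS:
        return False
    return doc.label in AI_ALLOWED_LABELS

board_pack = Document("q3-board-pack.xlsx", "restricted", "...")
print(may_send_to_ai(board_pack))   # False - blocked before it reaches any tool
```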

Second, visibility into AI usage. You need to know which AI tools your employees are using, what data they are sending to them, and whether those tools retain or train on that data. Shadow AI - tools adopted without IT approval - is the DLP equivalent of shadow IT five years ago, except the data exposure is often immediate and irreversible.
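
A starting point for that visibility is simply mining egress logs. The sketch below assumes you can export proxy or DNS records as (user, domain) pairs; the domain list is something you would curate and keep current yourself, not an authoritative catalogue.

```python
# Rough sketch of shadow-AI discovery from egress logs.
from collections import defaultdict

KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def ai_usage_inventory(log_records):
    """Map each AI domain seen in the logs to the set of users reaching it."""
    usage = defaultdict(set)
    for user, domain in log_records:
        if domain in KNOWN_AI_DOMAINS:
            usage[domain].add(user)
    return usage

sample_logs = [
    ("a.finance", "chatgpt.com"),
    ("b.engineering", "claude.ai"),
    ("a.finance", "intranet.local"),
]
print(dict(ai_usage_inventory(sample_logs)))
# {'chatgpt.com': {'a.finance'}, 'claude.ai': {'b.engineering'}}
```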

Third, policy controls that work at the AI layer. This means integrating DLP with AI governance - blocking sensitive data from being pasted into unauthorised tools, monitoring copilot interactions for data that should not be leaving its source system, and ensuring that sanctioned AI platforms are configured not to retain your data or use it for training.
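
In miniature, a control at the AI layer might look like the sketch below. It assumes you can intercept prompt text before it leaves - via a browser extension, secure web gateway, or an internal AI proxy - and the patterns are illustrative; a real detector would be far broader than a handful of regexes.

```python
# Minimal sketch of a prompt-level check before text reaches an AI tool.
import re

SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "internal_marking": re.compile(r"\b(CONFIDENTIAL|RESTRICTED)\b"),
}

def violations(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

prompt = "Summarise this RESTRICTED customer list and card 4111 1111 1111 1111"
found = violations(prompt)
if found:
    print(f"Blocked: prompt matches {found}")  # block, or route to a sanctioned tool
```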

The conversation that needs to happen

Most security leaders we speak to know this is a problem. Few have a plan for it. The reason is usually that AI adoption outpaced governance, and the organisation is now in a position where blocking AI outright would meet real resistance from the business.

The answer is not to ban AI tools. It is to treat AI data flows with the same rigour that email and removable media received ten years ago. That means classification, visibility, and enforceable policy. The organisations that get ahead of this will avoid the inevitable breach headlines. The ones that do not will learn the hard way that their DLP programme had a gap the size of a language model.