In August 2025, the acting director of the United States Cybersecurity and Infrastructure Security Agency — the person literally responsible for protecting America's critical infrastructure — uploaded sensitive government documents into the public version of ChatGPT.

Not a junior employee. Not someone who didn't know better. The most senior cybersecurity official in the country.

He'd been given special permission to use the tool. He'd been briefed on the safeguards. And he still put documents marked "For Official Use Only" into a system that sends data to a third party.

If the head of CISA can get this wrong, what do you think is happening inside your company?

The scale of the problem

According to recent research, 68% of employees now use AI tools that haven't been approved by their employer. That's not a small minority doing something they shouldn't. That's the majority of your workforce.

And it's not just that they're using the tools. It's what they're putting into them.

77% of employees have pasted company information into AI and large language model services. Customer records. Financial reports. HR data. Meeting notes. Legal documents. Source code. The kind of information that, if it appeared in a competitor's inbox, would keep you up at night.

82% of those employees used personal accounts — not enterprise tools with data protections, but free-tier ChatGPT on their personal login, on their own devices, completely outside your visibility.

Why it's happening

Your employees aren't doing this to cause harm. They're doing it because the tools genuinely help them work faster. They're summarising documents, drafting emails, analysing data, writing reports. They're trying to be more productive.

The problem is that nobody told them what they can and can't put into these tools. Or if someone did tell them, there's nothing actually stopping them.

This is the uncomfortable reality: 43% of UK businesses have an AI usage policy. But only 14% enforce it at the enterprise level. And research from UpGuard found that even when companies block AI tools, 45% of employees find workarounds — personal devices, mobile hotspots, alternative tools their IT team hasn't heard of.

Policy without enforcement is just a document in a folder.

What's actually at risk

When an employee pastes customer data into a public AI tool, that data is now with a third party. Depending on the tool and the plan they're on, it may be used to train the model — meaning fragments of your customer information could surface in someone else's conversation.

That's not a theoretical risk. It's a data protection issue. Under UK data protection law, your organisation is responsible for how personal data is handled, even if you didn't know it was happening. Ignorance isn't a defence. If customer data ends up somewhere it shouldn't because an employee put it into ChatGPT, that's your problem.

The financial side is just as stark. Research from IBM found that organisations with high levels of shadow AI usage face an additional $670,000 in breach costs compared with organisations that have little or none. And 98% of UK respondents in a recent survey reported financial losses from unmanaged AI risks, averaging $3.9 million per organisation.

Then there's the stuff that doesn't show up in a report. Your best salesperson is feeding client briefs into a free AI tool. Your finance team is pasting cashflow projections into a chatbot. Your HR manager is using AI to draft redundancy letters. None of them think they're doing anything wrong. And technically, without a clear policy and proper tools, they're right — nobody told them not to.

Why blocking doesn't work

Some companies respond by banning AI tools entirely. On paper, that feels like the safe option. In practice, it's almost impossible to enforce and it makes the problem worse.

When you ban AI, you don't stop people using it. You stop people telling you they're using it. Usage goes underground. You lose all visibility. And instead of managing the risk, you're now blind to it.

The smarter approach is to give people approved tools with proper guardrails — enterprise AI with data protections, clear policies about what can and can't go in, and technical controls that actually work. It costs a fraction of what a breach would.

Enterprise ChatGPT costs about $60 per user per month. The average data breach costs $4.45 million. At those prices, the cost of one average breach would cover roughly 74,000 licence-months, or more than 6,000 users for a full year. One prevented incident pays for thousands of licences.

The governance gap

Only 7% of UK businesses have what you'd call a fully embedded AI governance framework. Over half have minimal governance or none at all. This isn't a technology problem. It's a leadership problem.

The companies getting this right aren't the ones with the biggest IT budgets. They're the ones where someone senior owns the issue — someone who understands both the opportunity and the risk, and can build a framework that lets people use AI productively without putting the business in danger.

That's often where a fractional AI leadership role helps. At Bramforth AI, this is a significant part of what we do — helping mid-market companies build AI governance that actually works, not just a policy document that nobody reads.

If you're not sure where you stand

Ask yourself three questions:

  1. Do you know which AI tools your employees are using right now?
  2. Do you have a policy that tells them what they can and can't put into those tools?
  3. Is that policy actually enforced — technically, not just on paper?

If the answer to any of those is no, you're in the same position as the majority of UK businesses. The difference is whether you choose to do something about it now, or wait until something goes wrong.

The data suggests that waiting is expensive.

Find out where your business stands.

Take our 2-minute AI Readiness Assessment for a clear picture of your strengths, gaps, and next steps. Or book a discovery call to talk it through.


Want more like this? Subscribe to the newsletter — no hype, no jargon.