AI Adoption Isn’t a Tooling Problem. It’s an Ownership Problem.

Amanda Falshaw, AI Enablement Lead 14.04.26

AI adoption isn’t being held back by technology. So what’s really slowing teams down?

Most of us and our teams have access to generative AI in one form or another. It’s in our web browsers and workflow tools, in our office packages, and it’s helping us spell.

But if, like me, you feel AI adoption is slow or ad hoc, it’s tempting to blame the tool itself: “the model isn’t good enough”, “we need more training”, “can you tell me what a good prompt looks like?”, or “the tool such-and-such are using is much better than ours”.

In reality, the tool itself is rarely the constraint.

Organisations that are making meaningful progress aren’t doing anything magical with the technology. They’ve simply been clearer than most about ownership: who is accountable for outputs, where decisions get made, and what “good” looks like when the answer is probabilistic.

Why is AI sometimes called a “copilot”?

The term copilot is intentional. In aviation, a copilot supports the pilot but does not take full control. They monitor instruments, cross-check decisions, run checklists, and help manage workload, but there is always a pilot in command, and everyone knows who that is.

It’s a useful way to think about AI tools as well. They assist the work we do: they help generate drafts, explore options, provide ideas, and find information ludicrously quickly. But we need to remember that they never own decisions.

Risks appear when we start treating the assistant as the decision-maker. A copilot never takes command of the flight, so don’t let AI publish, commit, or decide without a named human owning it either.

When I talk about “ownership”, I’m really describing the organisational equivalent of pilot in command.

What “ownership” means (and what it doesn’t)

Ownership doesn’t mean “the AI team owns it” or “IT owns the tool”. It means something much more ordinary. The user of AI should remain accountable for the outcome in the same way they would be if no AI were involved.

Ownership answers questions like:

  • Who is allowed to use AI for this task?

  • Who reviews the output, and against what standard?

  • Who signs it off when it becomes real (sent, published, shipped)?

  • What happens when it’s wrong, and who fixes it?

  • What data is in play, and who is responsible for handling it safely?

If you can’t answer these questions, adoption will slow down or, worse, run out of control.
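One lightweight way to make those answers tangible is to write them down per task. Here’s a minimal sketch in Python, purely for illustration; every field name is my own assumption, not a standard your organisation will already have:

```python
from dataclasses import dataclass

# Illustrative only: these field names are assumptions, not a standard.
@dataclass
class OwnershipRecord:
    task: str          # what AI is being used for
    owner: str         # accountable for the outcome
    reviewer: str      # checks the output, with subject knowledge
    standard: str      # what "good" looks like for this output
    sign_off: str      # the point where it becomes real
    data_in_play: str  # "public", "internal", or "sensitive"
    fix_path: str      # who fixes it when it's wrong

newsletter = OwnershipRecord(
    task="Draft the client newsletter",
    owner="Marketing lead",
    reviewer="Subject specialist",
    standard="Claims checked against source material",
    sign_off="Before scheduling in the CMS",
    data_in_play="internal",
    fix_path="Marketing lead issues a correction",
)
```

None of this needs special tooling; a shared document with the same fields does the job just as well. The point is that the answers exist somewhere, with names attached.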

Some people avoid AI entirely because it feels risky. Others use it anyway, quietly, because it feels useful.

Neither outcome is particularly healthy for a team.

Why AI creates an “ownership gap”

Generative AI is a brilliant diffuser of responsibility. Outputs arrive quickly, written in a convincing tone that makes it feel as though the thinking has already happened.

With traditional software, the boundaries are clearer. A spreadsheet calculates what you tell it to calculate. A CMS publishes what you paste into it.

With generative AI, the answer often arrives as structured prose. It feels polished and finished.

And as I mentioned in my previous post, confidence can easily outpace certainty.

This is when familiar symptoms start to appear:

  • People ask “are we allowed to use AI for this?” but nobody knows who can answer.

  • Work gets faster, but quality becomes more variable.

  • Risk conversations happen after something has already been sent or published.

  • Everyone assumes someone else is checking.

None of this is malicious. It’s simply what happens when responsibility becomes blurred.

Making ownership explicit

Fixing this doesn’t usually require a complex governance model. In most teams, it simply means being clearer about a few responsibilities that already exist.

Someone still owns the outcome of the work, even if AI helped produce it.

Someone reviews the output with enough subject knowledge to challenge it.

And someone understands the boundaries around data and tools, what can be used, and where.

In most organisations, these responsibilities already exist in different parts of the team: subject specialists, delivery teams, technology leads, and platform owners.

The challenge isn’t creating entirely new structures but making these responsibilities explicit when AI becomes part of the workflow. That clarity is often what turns cautious experimentation into confident adoption.

The point where someone has to say: “I own this.”

Ownership becomes most apparent in three situations:

1. When AI touches real data

When you paste in internal content, customer information, or sensitive data, you are moving from experimentation to real operational use.

2. When a draft becomes a real thing

Drafting is low risk. Publishing, sending, committing AI output to a codebase, or putting something in a client deck is not. Somewhere in that journey, someone needs to say: “I own this.” (One way to make that moment explicit in a developer workflow is sketched after this list.)

3. When AI influences a decision

If AI is used to shape hiring, prioritisation, or eligibility, for communications during sensitive situations, or, let’s face it, anything with real-world consequences, you need challenge, traceability, and clear accountability.
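For teams shipping code, that “I own this” moment can even be enforced mechanically. Below is a hypothetical git commit-msg hook in Python; the “AI-assisted:” and “Owner:” trailer names are invented for illustration, not a convention your tooling will recognise out of the box:

```python
#!/usr/bin/env python3
# Hypothetical commit-msg hook: git passes the path of the commit
# message file as the first argument. The trailer names below are
# invented for illustration, not an established convention.
import sys

with open(sys.argv[1], encoding="utf-8") as f:
    message = f.read()

if "AI-assisted: yes" in message and "Owner:" not in message:
    # A non-zero exit aborts the commit.
    sys.exit("AI-assisted commit needs a named 'Owner:' trailer.")
```

The point isn’t the tooling. It’s that the workflow refuses to let a draft become real without a named human attached to it.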

A simple “can we use AI for this?” check

A few questions can help determine whether AI is safe to use:

  • What’s the harm if this is wrong?

  • What data is going in? Is it public, internal, or sensitive?

  • Who owns the outcome?

  • What review happens before it’s sent or published?

Deliberately, the name of the tool is missing. That’s because guardrails aren’t about banning tools; they’re about making responsibility explicit so people can move faster, safely.
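If it helps to see the check as a procedure, here’s a minimal sketch, again with made-up labels and rules rather than real policy. Notice that the tool’s name never appears:

```python
from typing import Optional

# A minimal sketch of the "can we use AI for this?" check.
# The labels and rules are illustrative assumptions, not policy.
def can_use_ai(harm_if_wrong: str,            # "low", "medium", or "high"
               data: str,                     # "public", "internal", or "sensitive"
               owner: Optional[str],          # who owns the outcome
               review: Optional[str]) -> str: # review before it's sent or published
    if data == "sensitive":
        return "escalate: sensitive data needs an explicit decision first"
    if owner is None:
        return "stop: name an accountable owner before going further"
    if harm_if_wrong == "high" and review is None:
        return "stop: high-stakes output needs a defined review step"
    return "ok: proceed, with the named owner accountable for the outcome"

print(can_use_ai("high", "internal", "a named person", "peer review before publishing"))
```

Whatever form the check takes, the output should always be a decision a named person has made, not a property of the software.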

Looking ahead

If you’re trying to move from “some people using AI” to “AI used well across a team”, starting with ownership is often the most practical step.

Name the accountable person. Define the review point. Be clear about what data is in scope.

The rest tends to follow more easily.

If you’d like to explore this with your team, Reading Room is always happy to talk through how organisations are introducing AI in practical, responsible ways. Get in touch today.

Next month, I’ll look at how teams build the review muscle: what changes when prompting stops being the headline skill and critical judgement becomes the real differentiator.

Want to use AI more effectively across your team?

Speak to us about making adoption practical, safe, and scalable.

Talk to our team!