Insight
AI in Practice: What Digital Teams Are Really Learning
AI is already part of day-to-day work in many digital teams, even if it isn’t always talked about openly.
I joined Reading Room recently as AI Enablement Lead, so what follows reflects early observations rather than settled conclusions. Even so, a few patterns are already emerging. Most notably: much of the confusion around AI adoption doesn’t come from the technology itself, but from how poorly we explain what we mean by “AI” in the first place and how that affects decisions, ownership, and risk.
This post looks at the foundations: the different types of AI, how machine learning and generative AI fit together, and what this means for organisations trying to use AI responsibly and effectively.
What are the different types of AI?
In practice, most organisations encounter three broad types of AI, each working in a very different way.
1. Rule-based systems
Rule-based systems rely on explicitly programmed logic (if/then rules) and are often called symbolic AI or expert systems. They do not generalise beyond the rules supplied to them.
If X happens, the system does Y. Such systems don’t learn from data or improve over time, but they are highly predictable.
Example:
Eligibility rules in a form, workflow automation that routes content for approval, or validation checks that ensure required fields are completed before submission.
These systems are often invisible, but they underpin many digital services.
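To make this concrete, here is a minimal sketch of the kind of if/then eligibility logic described above. The field names (fields_completed, age, residency) and the rules themselves are hypothetical, invented purely for illustration:

```python
# A minimal sketch of a rule-based eligibility check.
# Every rule is written explicitly by a person; nothing is learned
# from data, which is why the behaviour is fully predictable.

def is_eligible(applicant: dict) -> bool:
    """Apply explicit if/then rules to an applicant record."""
    if not applicant.get("fields_completed", False):
        return False  # validation rule: required fields must be completed
    if applicant.get("age", 0) < 18:
        return False  # eligibility rule: adults only
    if applicant.get("residency") != "UK":
        return False  # eligibility rule: UK residents only
    return True

print(is_eligible({"fields_completed": True, "age": 34, "residency": "UK"}))
```

The same if/then shape underpins workflow routing and form validation: the logic is transparent, but it only ever does what someone has written down.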
2. Machine learning systems
Machine learning systems identify patterns and learn statistical relationships in data, using these to make predictions or recommendations.
Instead of being explicitly programmed, they improve as they are exposed to more data or are retrained.
Example:
Content recommendations, search ranking, fraud detection, or forecasting user demand based on historical behaviour.
Machine learning is also the form of AI most people interact with every day, often without realising it.
Personalised recommendations, predictive text and autocomplete, and the way search results are ranked and refined all rely on machine-learning models that learn from patterns in data rather than fixed rules.
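As an illustration of learning from patterns rather than fixed rules, here is a toy version of the co-occurrence idea behind “people who read this also read…” recommendations. The session data and page names are made up for the example:

```python
from collections import Counter
from itertools import combinations

# A toy content recommender. Instead of hand-written rules, it
# "trains" by counting which pages are read together in past sessions.

history = [
    {"pricing", "case-studies", "contact"},
    {"pricing", "case-studies"},
    {"blog", "case-studies", "contact"},
    {"pricing", "contact"},
]

# Training step: count how often each pair of pages co-occurs.
co_occurrence = Counter()
for session in history:
    for a, b in combinations(sorted(session), 2):
        co_occurrence[(a, b)] += 1

def recommend(page):
    """Suggest the page most often read alongside `page`."""
    scores = Counter()
    for (a, b), n in co_occurrence.items():
        if a == page:
            scores[b] += n
        elif b == page:
            scores[a] += n
    return scores.most_common(1)[0][0] if scores else None

print(recommend("pricing"))
```

Note what changes compared with the rule-based sketch: the behaviour comes from the data, so feeding in different browsing history changes the recommendations without anyone rewriting the logic.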
3. Generative AI
Generative AI produces new outputs such as text, images, summaries, or code rather than simply classifying or scoring existing data.
It does this by interacting with a large language model (LLM) or similar generative model that has learned patterns from vast amounts of training data. When prompted, the model combines these patterns to generate content that can appear fluent and original, even though it is not based on true understanding or intent.
Example:
Drafting content, summarising documents, generating code snippets, or supporting ideation and exploration during early stages of work.
Because generative AI tools act as the highly visible front door to an LLM, people interact with them directly. This makes questions of review, accuracy, appropriateness, and ownership especially important when outputs are used in professional or client-facing contexts.
A note on how these relate:
Generative AI tools (like chat interfaces and copilots) are powered by large language models (LLMs).
An LLM is a very specialised type of machine learning system trained on massive datasets to predict and generate language.
In other words:
Machine Learning → LLMs → Generative AI tools (what people interact with)
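To make the “trained to predict and generate language” idea concrete, here is a deliberately tiny sketch: a word-pair (“bigram”) model trained on a few made-up sentences. Real LLMs operate at a vastly larger scale with far richer models, but the core loop of learning patterns from text and then sampling likely continuations is the same:

```python
import random
from collections import defaultdict

# A toy next-word generator: learn which word follows which in some
# training text, then generate by repeatedly sampling a likely next word.

corpus = (
    "the team reviews the draft "
    "the team publishes the draft "
    "the client approves the draft"
).split()

# Training step: record every observed word -> next-word transition.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by chaining sampled next words from `start`."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking but driven entirely by likelihood: the model has no idea what a “draft” is, which is exactly the gap between sounding right and being right discussed below.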
On the surface, these systems are often grouped together under the banner of “AI”. However, the way they work and the expectations we place on them are fundamentally different.
When organisations don’t distinguish between rule-based logic, machine learning, and generative tools, it becomes much harder to make good decisions about risk, ownership, and appropriate use. That’s why the language we use around AI matters as much as the technology itself.
What are AI and machine learning, and what is the difference?
Artificial intelligence is the broad concept: machines performing tasks that typically require human intelligence, such as recognising patterns or making decisions.
Machine learning is a subset of AI. Instead of being explicitly programmed, machine learning systems learn from data and improve over time. Most of what organisations currently label as “AI” is, in reality, machine learning under the hood.
This distinction is important because machine learning systems are:
• probabilistic rather than certain
• dependent on the quality of their data
• limited to the patterns they’ve seen before
They can be powerful, but they are not neutral, infallible, or context-aware on their own.
These definitions aren’t just academic. They shape how teams interpret what AI can and can’t do when it’s introduced into everyday work.
When we talk about “AI” as a single, capable entity, it’s easy to overestimate its abilities, underestimate uncertainty, and assume a level of understanding that simply isn’t there.
What does this mean in practice?
In practice, the difference shows up in expectations.
AI is often spoken about as if it “knows” things. Machine learning systems don’t. They recognise patterns and generate outputs based on likelihood, not understanding.
This is why strong human oversight remains essential. Without it, there’s a real risk that outputs are accepted because they sound confident — not because they’re correct.
What is generative AI vs AI?
Generative AI takes these expectation gaps and amplifies them.
Unlike earlier forms of machine learning that quietly scored or ranked things in the background, generative tools produce visible, fluent outputs. That visibility changes how people engage with them — and how much trust they place in what they produce.
The shift isn’t just technical. It’s behavioural. Generative AI increases the perceived confidence of outputs, which can subtly change how people review, trust, and reuse them.
That makes questions of accountability and validation more important, not less.
How can AI realistically help my business?
This mix of usefulness and risk often leads to a familiar question: if AI isn’t an authority, where does it genuinely add value?
In practice, the most successful uses aren’t about replacing human work, but reshaping how teams get started, explore options, and make decisions.
In my early weeks at Reading Room, the most effective uses of AI haven’t been about automation or replacement. They’ve been about support.
AI helps when it:
• reduces blank-page syndrome
• accelerates first drafts and exploration
• supports analysis and sense-making
• frees up time for human judgement
It struggles when it’s treated as an authority rather than an assistant.
The organisations getting the most value from AI are those that pair it with:
• clear ownership
• defined boundaries
• confident human review
• an understanding of where responsibility sits
AI doesn’t remove accountability. It makes it more visible.
A delivery lens on AI adoption
Once AI moves from experimentation into delivery, the challenge shifts again.
The question is no longer “can this tool do useful things?” — it’s “how do we introduce it in a way that’s visible, responsible, and sustainable?”
That’s where delivery, governance, and capability building become critical.
From a delivery perspective, successful AI adoption looks less like a tool rollout and more like a capability being introduced deliberately.
That means:
• clarity over where AI is used
• shared understanding of its limits
• governance that enables rather than blocks
• and space for teams to build confidence safely
When those elements are missing, AI adoption tends to stall — or worse, drift into use without visibility or control.
Looking ahead
These are still early observations rather than settled conclusions. I’m curious… how are teams in your organisation talking about AI at the moment? Where do you see clarity helping, and where is confidence getting ahead of certainty?
If you’d like to explore this with your team, Reading Room is happy to come in for a practical session on “AI in practice”. Drop us a note at [email protected] or via our website.
Next month, I’ll explore what changes when confidence outpaces certainty: the new risks it introduces, and why confidence, not capability, is often the deciding factor.