You've decided to train your team on Claude. Good. Now the harder question: how do you actually do it well?
Because here's what happens when most organisations try to run AI training internally. Someone from IT gives a 45-minute presentation about what AI is. There's a slide about neural networks. Someone asks "will this replace my job?" and the presenter says something carefully non-committal. Everyone goes back to their desks. Nothing changes.
That is not training. That is a briefing. And briefings don't build skills.
This guide covers how to design and run Claude AI training that actually changes behaviour — drawing on experience training hundreds of non-technical teams at companies like Estée Lauder, London Business School, Amdocs, Springer Nature, and Colgate-Palmolive.
The Difference Between Teaching a Tool and Teaching a Skill
This distinction is everything.
Teaching a tool means showing someone where the buttons are. "Here's where you type. Here's where you upload a file. Here's the settings menu." That takes ten minutes.
Teaching a skill means helping someone learn how to think with AI. How to break down a task into something Claude can help with. How to evaluate whether the output is good. How to iterate instead of accepting the first draft. How to integrate AI into their actual workflow, not as a novelty but as a daily habit.
The tool part is easy. The skill part is what your training needs to focus on.
The Ideal Session Structure
Over hundreds of sessions, we've found that a 90-minute hands-on workshop is the sweet spot for an introductory Claude training. Here's the structure that works:
Opening: The Why (15 minutes)
Don't start with Claude. Start with the audience.
Address the room. Acknowledge that some people are excited, some are sceptical, and some are quietly terrified. All of those reactions are normal. Name them.
Then explain why the organisation is investing in this. Not "because AI is the future." Because there's a specific problem or opportunity. "Our client services team spends 12 hours a week writing reports. We believe Claude can cut that to 3 hours." Be concrete.
Cover three things in this opening:
- What Claude is — in one paragraph, not one lecture. It's an AI assistant made by Anthropic. You type. It responds. It can write, analyse, summarise, brainstorm, and process documents.
- What it can't do — its knowledge has a cutoff date, it can sometimes get facts wrong, and it doesn't know your company. Always check its work.
- The data rules — what you can and can't put into Claude. Keep this simple and specific to your organisation's policy.
Core: Hands-On Practice (60 minutes)
This is the training. Everything else is setup.
Participants should have Claude open in front of them from minute 16 onwards. They should be using it — on their actual work, not hypothetical scenarios.
Structure the hands-on time into three blocks:
Block 1: First Contact (15 minutes)
Everyone does the same exercise. Take something from your real work this week — an email you need to write, notes you need to summarise, a document you need to review — and ask Claude for help.
Walk around the room. See what people are typing. The most common mistake at this stage is prompts that are too vague: "Help me with this email." Coach people to add context: who is the email for, what's the situation, what tone do you want, how long should it be.
This first exercise does something crucial: it proves that Claude works. It produces something useful. The emotional barrier drops.
Block 2: The Art of the Brief (20 minutes)
Now teach the skill that separates good AI users from everyone else: how to communicate clearly with Claude.
Introduce what we call "the five elements of a good brief":
- Role — Tell Claude who it is. "You are a senior HR business partner."
- Context — What's the situation? "I need to write feedback for a team member who's underperforming on deadlines but is strong on quality."
- Task — What specifically do you want? "Write a draft of the mid-year review conversation opening."
- Constraints — What are the boundaries? "Keep it under 200 words. Professional but empathetic tone. Don't use the word 'disappointing.'"
- Format — How should the output look? "Three paragraphs. Start with what's going well."
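Put together, the five elements produce a brief like this (a hypothetical example, continuing the HR scenario above):

```
You are a senior HR business partner.

I need to write feedback for a team member who's underperforming
on deadlines but is strong on quality. Draft the opening of our
mid-year review conversation.

Keep it under 200 words, in a professional but empathetic tone,
and don't use the word "disappointing". Structure it as three
paragraphs, starting with what's going well.
```

Notice that nothing here is technical. It's the same brief you'd give a capable colleague.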
Have everyone practise this with a real task from their week. Then have two or three people share their results with the room. Compare a weak prompt with a strong one. The difference is always dramatic.
Block 3: Department-Specific Use Cases (25 minutes)
This is where you make it relevant. Break into small groups by department or function. Give each group three scenarios specific to their work:
For HR teams:
- Drafting a job description from bullet-point notes
- Summarising 360-degree feedback into themes
- Creating an onboarding checklist for a new role
For marketing teams:
- Turning a product brief into three social media posts
- Analysing a competitor's website messaging
- Drafting email campaign copy from a creative brief
For finance teams:
- Explaining budget variances in plain English for a non-finance audience
- Summarising a 20-page financial report into key takeaways
- Drafting commentary for a quarterly business review
For operations teams:
- Writing standard operating procedures from process descriptions
- Creating FAQ documents from a list of common questions
- Summarising project status updates across multiple workstreams
Each group works through their scenarios, discusses the results, and identifies one "wow moment" to share back to the room.
Closing: Making It Stick (15 minutes)
The final 15 minutes are about what happens after the session ends. This is where most training programmes fail — they stop at the workshop. Address three things:
The homework. Give everyone a specific assignment: "This week, use Claude on at least three real work tasks. Write down what you asked, what it produced, and what you changed." This is not optional. It's the single most important thing that happens after the session.
The support system. Point people to wherever ongoing support lives — a Slack channel, a Teams group, a weekly office hours session. Make sure they know where to go when they get stuck.
The invitation. End on this: "You are not going to be perfect at this straight away. Nobody is. The goal this week is not mastery. It's familiarity. Use it. Play with it. Break it. That's how you learn."
Running It Remotely vs In-Person
Both work. The dynamics are different.
In-person advantages: Energy in the room. Easy to look over shoulders and coach in real time. People feed off each other's reactions. The "wow moments" land harder when you can hear someone across the room say "oh, that's actually really good."
Remote advantages: Easier to run for distributed teams. Screen sharing means everyone can see demonstrations clearly. Chat functions let people ask questions they might not ask out loud.
Remote-specific tips:
- Use breakout rooms for the department-specific exercises
- Have people paste their prompts and outputs into a shared document so the group can learn from each other
- Keep cameras on — engagement drops significantly without them
- Build in a 5-minute break at the halfway mark
The Follow-Up Plan (This Is Where Most Organisations Fail)
A workshop without follow-up is a workshop that gets forgotten. Here's the follow-up structure that makes training stick:
Week 1 after training: Send a "quick wins" email with five prompts people can copy and try immediately. These should be specific and useful, not generic.
Week 2: Host a 30-minute "show and tell" where people share how they've used Claude since the workshop. Peer learning is more powerful than any training material.
Week 4: Run a 45-minute "next level" session covering more advanced techniques — multi-step workflows, using Claude for analysis, creating templates for recurring tasks.
Month 2: Check adoption metrics. Who's using it? Who stopped? Reach out to the people who dropped off — the barrier is usually something specific and solvable.
Month 3: Run a full refresher session. Not a repeat of the original training. A deeper session based on what people have actually been using Claude for. By now, they have real questions, real frustrations, and real wins to build on.
Measuring Whether Your Training Worked
Don't just count attendance. Measure these:
| Timeframe | What to measure | How |
|---|---|---|
| Immediately after | Confidence level | Pre/post survey (1–10 scale) |
| Week 1 | First use after training | Self-reported or licence data |
| Week 4 | Regular usage | Active weekly users |
| Month 3 | Time saved | Self-reported, validated by managers |
| Month 3 | Quality improvement | Manager assessment of outputs |
| Ongoing | Organic demand | Requests for training from other teams |
The metric that matters most is week-4 regular usage. If people are still using Claude a month after training, the behaviour change has taken hold. If they've stopped, something went wrong — usually lack of follow-up or a mismatch between the training content and their actual work.
The Seven Mistakes That Ruin Claude Training
1. Starting with the technology, not the person
Don't open with "Claude is a large language model built by Anthropic that uses a transformer architecture." Nobody cares. Open with: "How many of you spent more than an hour this week writing something that felt repetitive?" Start with their problem, then introduce the solution.
2. Using hypothetical exercises
"Pretend you're planning a trip to Paris and ask Claude for recommendations." Your team is not planning trips. They're writing performance reviews and summarising board packs. Use their actual work.
3. Skipping the emotional stuff
If you don't address the fear — "will this replace me?" and "is this cheating?" — it sits in the room like an uninvited guest. Name it. Address it. Move on.
4. Making it a lecture
If participants aren't actively using Claude, hands on, for more than half the session, it's a presentation, not training. Aim for 70% hands-on, 30% instruction.
5. One-size-fits-all content
HR teams don't care about marketing prompts. Finance teams don't care about content creation examples. Tailor the use cases to the audience. If that means running separate sessions for different departments, do that.
6. No follow-up
A one-off session produces a one-off spike in interest. Without follow-up, usage decays within two weeks. Build the follow-up plan before you run the first session.
7. The wrong trainer
The person running the session needs two things: genuine hands-on experience with Claude (not just theoretical knowledge) and the ability to make non-technical people feel comfortable. Technical depth without empathy doesn't work. Neither does enthusiasm without practical knowledge.
Frequently Asked Questions
How long should a Claude training session be?
90 minutes is the sweet spot for an introductory session. Shorter than that and you can't include enough hands-on practice. Longer and attention drops. For follow-up sessions, 45–60 minutes works well.
Should we train everyone at once or in small groups?
Small groups. 15–25 people per session is ideal. Large enough to generate energy and peer learning. Small enough for the trainer to provide individual coaching. If you have 200 people to train, run 8–10 cohort sessions rather than one massive webinar.
Do we need to buy Claude licences before training?
Ideally, yes. Participants should use Claude during the session. The free tier works for training purposes, but it has usage limits that can be frustrating during a workshop. Claude Team licences are the best option — they offer higher limits and data privacy protections suitable for business use.
Can our internal L&D team run this, or do we need external trainers?
Your L&D team can absolutely run this — if they have genuine hands-on experience with Claude themselves. The trap is asking someone to teach a tool they've only used casually. The trainer needs to have used Claude extensively on real work tasks, know its strengths and limitations inside out, and be comfortable answering unexpected questions.
How do we handle people who are resistant to AI?
Don't force them. Make the first session voluntary if possible. The resistant people often come around when they see their peers getting value from it. In the session itself, acknowledge their concerns directly, show them use cases that solve a problem they actually have, and let the tool demonstrate its value rather than arguing for it.