How to Hire an AI Adoption SME (And Actually Get the Right Person)

Most companies hire an AI Adoption SME with the wrong spec. Here's how to define the role correctly, assess candidates well, and onboard them for impact.

Skip to the AI Adoption SME job description template.

TL;DR: An AI Adoption SME is part trainer, part process analyst, part prototype builder. They need to be comfortable across people, code, and content. They’re rare, but they exist—and they need the right organisational support to succeed.

Most companies buying AI tools aren’t getting their money’s worth. 88% of organisations now use AI in at least one business function, yet only 6% qualify as high performers seeing significant enterprise-wide value. 

The gap between those two numbers is less of a technology problem and more of a people problem—specifically a training, workflow, and enablement problem.

That’s the job an AI Adoption SME exists to close. But most organisations are hiring this role with a broken spec, which means they’re either selecting engineers who don’t want to run workshops, or generalists who can’t prototype anything. 

This guide covers what the role truly requires, how to assess candidates well, and what to do with the person once you’ve hired them.

What an AI adoption SME actually does

The title is misleading. “SME” implies deep technical expertise in a narrow domain. In practice, this role is broader than that and requires a different kind of depth.

An AI Adoption SME’s primary output is behaviour change at team level. 

They’re not responsible for deploying production AI systems. That belongs to engineering. They’re responsible for reducing the distance between what AI tools can do and what teams are currently doing with them, which is a combination of enablement, discovery, and lightweight build work.

In practice, the role operates in three overlapping modes:

Enablement covers training sessions, prompt engineering workshops, guidance documentation, and hands-on sessions that help both technical and non-technical users understand what AI tools can and can’t do. This is the most visible part of the role.

Discovery means spending time with business teams to understand their existing processes, identify where work is slow, repetitive, or manual, and map realistic opportunities for AI assistance. The SME needs to ask good questions before they prescribe anything.

Lightweight build involves prototyping small, demonstrable workflows using Python, internal AI libraries, or low-code platforms like Copilot Studio or Power Automate. The goal is a working proof of concept, not a production system. When ideas grow beyond that scope, the SME hands off to engineering.

The three modes interact constantly:

  • A discovery session identifies a manual data-extraction process
  • The SME builds a rough prototype to prove the concept
  • That prototype becomes the demo in a training session
  • The training session generates feedback that improves the next discovery conversation
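To make the build mode concrete, here’s a minimal sketch of the kind of throwaway prototype an SME might demo after a discovery session like the one above. The invoice format, field names, and regex are all hypothetical; the point is a small baseline that proves the concept—sometimes without an LLM at all—before anyone commits to tooling:

```python
import re

# Hypothetical scenario: a team copies invoice details from free-text
# emails into a spreadsheet by hand. This prototype extracts the fields
# automatically and flags anything it can't parse for human review.
INVOICE_PATTERN = re.compile(
    r"Invoice\s+(?P<number>INV-\d+).*?"
    r"Amount:\s+(?P<currency>[A-Z]{3})\s+(?P<amount>[\d,]+\.\d{2}).*?"
    r"Due:\s+(?P<due>\d{4}-\d{2}-\d{2})",
    re.DOTALL,
)

def extract_invoice(text: str):
    """Return structured invoice fields, or None if the text doesn't match."""
    match = INVOICE_PATTERN.search(text)
    if match is None:
        return None  # don't guess: route unparseable emails to a person
    fields = match.groupdict()
    fields["amount"] = float(fields["amount"].replace(",", ""))
    return fields

sample = """Hi team,
Invoice INV-1042 attached for the Q1 work.
Amount: GBP 12,450.00
Due: 2026-03-31
Thanks"""

print(extract_invoice(sample))
# → {'number': 'INV-1042', 'currency': 'GBP', 'amount': 12450.0, 'due': '2026-03-31'}
```

A prototype like this turns a vague “AI could help with our invoices” conversation into specific questions: which fields, which formats, and what happens when extraction fails.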

Gallup data from early 2026 found that 65% of employees in AI-adopting organisations say AI has improved their productivity, but the benefits remain concentrated at individual task level rather than broader workplace systems. 

Scaling those individual gains into repeatable, team-level patterns is the job of an AI Adoption SME.

Why this role is so hard to hire for

Organisations are posting AI Adoption SME roles without a settled internal definition of the job. The result is a spec that pulls in three directions at once, and a shortlist of candidates who satisfy none of them fully.

The most common failure mode is writing the spec through an engineering lens.

When the job description leads with LLM architecture, RAG pipelines, and model evaluation, you’ll attract engineers who find training sessions tedious and won’t stay in the role. The adoption half of the title gets lost.

The problem runs deeper than job spec design, though. Writer’s 2026 enterprise AI survey found that 79% of organisations face challenges adopting AI, a double-digit increase from 2025, and 54% of C-suite executives say adopting AI is actively tearing their company apart. 

Despite that, only 13% of employees have received any AI training. 

The enablement function is absent in most organisations, which means there’s no internal benchmark for what good looks like when you’re trying to hire someone to run it.

The talent pool is thin for additional structural reasons. The role requires a blend of applied AI knowledge, adult learning experience, and business analysis instinct that doesn’t correspond to any established credential or career path. 

A strong candidate might come from a technical training background, a solutions engineering role, an applied AI consultancy, or a content and operations background at a technology company.

The profile is wide, and the screening process needs to reflect that and make room for a diverse range of candidates.

The skills matrix: what to assess and how to weight it

Split your assessment criteria into three tiers. Depth in the essential tier matters more than breadth across all three.

Skill area | Tier | What good looks like | Common mistake
Prompt engineering and LLM literacy | Essential | Can explain failure modes and iterate systematically | Candidates who only demo polished outputs
Training delivery for mixed audiences | Essential | Has facilitated it, not just designed it | Confusing “created documentation” with facilitation experience
Python at a practical level | Essential | Scripts, APIs, data handling without production-grade engineering | Treating this as a binary: either they’re an engineer or they can’t code
Business process mapping | Essential | Can interview stakeholders and document workflows clearly | Overlooked in favour of technical skills; often not assessed at all
RAG and data grounding | Strong preference | Understands why free-text prompting alone produces unreliable outputs | Treating RAG knowledge as an engineering skill that doesn’t belong in this role
Low-code and no-code AI tooling | Strong preference | Has used Copilot Studio, Power Automate, or equivalent in a real workflow | Dismissing this as beneath technical candidates
Responsible AI and output validation | Strong preference | Can coach teams on when not to trust a model | Skipped almost entirely in most interview processes
Agent-style workflows | Nice to have | Exposure to tool use, structured outputs, task decomposition | Making this essential when most organisations don’t need it yet

The essential tier is non-negotiable. A candidate who can build but can’t train is an engineer. A candidate who can train but can’t build is a change manager. You need both.

The responsible AI row deserves more attention than most hiring processes give it. Only 29% of organisations currently see significant ROI from generative AI, and only 5% of AI pilot programmes achieved the kind of rapid revenue impact that leadership expects. 

A significant portion of those failures trace back to insufficient output validation and no structured process for human review. The SME needs to be the person who prevents those failures at the team level, not just the person who trains people to use tools faster.

Four common hiring mistakes

Writing the spec for an AI engineer. This produces a shortlist of people who find the enablement work beneath them. The role needs someone who can prototype enough to demonstrate a concept, then hand it off. The build component is a means to an end, not the job.

Ignoring training and facilitation experience. Hands-on AI knowledge without adult learning experience produces someone who can’t change team behaviour. McKinsey’s 2025 data found that AI high performers are three times more likely to have senior leaders actively engaged in driving adoption, including active role modelling. The SME needs to bring that same active engagement at team level. That’s a teachable skill, but it requires a candidate who has done it before.

Over-indexing on domain expertise. In regulated sectors especially, hiring teams often want a candidate who already knows the industry deeply. This narrows the pool significantly and matters less than you’d think. Communication clarity, process instinct, and applied AI depth transfer across domains faster than specialist knowledge transfers to someone who lacks the other three.

No internal sponsor for the role. The SME can’t drive adoption without organisational access and a clear mandate. Hiring the right person into a poorly scoped, resource-squeezed, sponsor-less role produces the same outcome as hiring the wrong person. This is a pre-hire decision, not a post-hire problem.

How to structure the interview process for an AI Adoption SME

Stage 1: Screening

Assess communication clarity and conceptual AI fluency. Ask the candidate to explain a RAG pipeline to a non-technical stakeholder. 

Listen for how they manage jargon, whether they check for comprehension, and whether they adjust their framing in real time. This is a training skill test disguised as a knowledge question.
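As a reference point for assessors, the candidate’s explanation should land somewhere near this toy sketch of the RAG idea: retrieve the most relevant snippet, then constrain the model’s answer to it. The word-overlap scoring and the document set below are hypothetical stand-ins; real pipelines retrieve with vector embeddings over a proper document store:

```python
# Toy illustration of the RAG idea: retrieve the most relevant snippet,
# then ground the prompt in it so the model can't answer from thin air.
# Word-overlap scoring is a naive stand-in for embedding-based search.

def retrieve(query: str, documents: list) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_grounded_prompt(query: str, documents: list) -> str:
    """Wrap the retrieved context and the question into one grounded prompt."""
    context = retrieve(query, documents)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context: {context}\n\nQuestion: {query}"
    )

docs = [
    "Expense claims over 500 GBP require line-manager approval.",
    "The office closes at 18:00 on Fridays.",
]
print(build_grounded_prompt("Who approves large expense claims?", docs))
```

A strong candidate can narrate each step of something like this—what retrieval buys you, why the instruction to admit ignorance matters—without ever showing code to a non-technical stakeholder.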

Stage 2: Technical task

Give a realistic brief with enough ambiguity to be authentic. For example: “Map a repetitive internal process of your choice and propose one AI-assisted improvement, including a rough prototype, a workflow diagram, or a structured written recommendation. You have five days.” 

Assess the output for practical rigour, and pay close attention to how the candidate frames tradeoffs and limitations. A candidate who only highlights the upside hasn’t grappled with responsible AI at all.

Stage 3: Stakeholder roleplay

Put them in a structured conversation with a sceptical, non-technical team lead. The assessor plays someone who’s heard the AI pitch before and is unconvinced. 

  • Can the candidate listen before they prescribe? 
  • Do they oversell? 
  • Do they handle pushback without becoming defensive or dismissive? 

This is the closest proxy to one of the job’s hardest day-to-day moments.

Stage 4: Panel

Front-office, technology, and people teams should all contribute. This is a cross-functional hire. A single-function panel will miss important signals, specifically the ones that matter most for organisational fit.

Red flags to watch for in candidates

  • They position AI as a universal solution before asking about the problem
  • Their portfolio examples all involve tools they built and used independently, not tools they helped others adopt
  • They can’t describe a concrete example of a failed AI implementation and what they’d have done differently
  • Their prompt engineering knowledge is limited to ChatGPT’s consumer interface with no systematic methodology
  • They’re dismissive of governance, validation, or responsible AI concerns
  • They haven’t delivered training before
  • They give vague answers about how they’d measure adoption; this is a core output of the role, not an administrative add-on

BCG research found that employees who receive at least five hours of AI training show significantly higher regular usage and confidence. A candidate who can’t describe a structured training programme isn’t equipped to deliver that.

How to set this person up once you’ve hired them

Poor onboarding is the most common reason good AI Adoption SMEs leave or underperform within twelve months. 

The role is cross-functional by design, which means it depends entirely on internal access and trust that has to be built deliberately.

Give them thirty days of discovery before any delivery expectations. Rushing them into workshop delivery before they understand the organisation produces generic training sessions that don’t address real workflows. 

The thirty-day discovery phase should produce a stakeholder map, a rough process inventory, and a shortlist of quick-win opportunities. That’s their first deliverable.

Assign a named sponsor with enough organisational authority to open doors.

Without that sponsor, the SME will spend their first three months getting declined for meeting access and working around informal blockers. 

The sponsor doesn’t need to be a technical leader; they need to be someone whose endorsement carries weight across departments.

Define adoption metrics together before the first training session runs. 

Don’t leave metric definition to the SME alone. Writer’s 2026 survey found that 75% of executives admit their company’s AI strategy is more for show than actual internal guidance. 

If the organisation hasn’t defined what successful adoption looks like, the SME can’t measure it, and their contribution becomes invisible, which creates retention risk.

Connect them to the engineering or AI platform team early so they know what they can prototype independently and when to escalate. Clear boundaries here save weeks of confusion later.

Phase | Focus | Key outputs
Days 1 to 30 | Discovery and relationship-building | Stakeholder map, process inventory, short-listed quick wins
Days 31 to 60 | First enablement sprint | One workshop delivered, one prototype or proof of concept
Days 61 to 90 | Metrics baseline and iteration | Adoption baseline report, refined training materials, documented workflow patterns
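The adoption baseline in the 61-to-90-day phase can start very simply. Here’s a minimal sketch, assuming a hypothetical usage log of (user, tool, date) tuples; in practice the data would come from SSO logs or a tool’s admin reporting:

```python
from collections import Counter
from datetime import date

# Hypothetical usage log as (user, tool, date) tuples. In practice this
# would be exported from SSO logs or a tool's admin reporting.
usage_log = [
    ("alice", "copilot", date(2026, 3, 2)),
    ("alice", "copilot", date(2026, 3, 3)),
    ("bob",   "copilot", date(2026, 3, 3)),
    ("carol", "chat",    date(2026, 3, 4)),
]

def adoption_baseline(log, team_size: int) -> dict:
    """Summarise active users, adoption rate, and usage frequency."""
    active = {user for user, _, _ in log}
    uses = Counter(user for user, _, _ in log)
    return {
        "active_users": len(active),
        "adoption_rate": len(active) / team_size,
        "avg_uses_per_active_user": sum(uses.values()) / len(active),
    }

print(adoption_baseline(usage_log, team_size=10))
# → {'active_users': 3, 'adoption_rate': 0.3, 'avg_uses_per_active_user': 1.3333333333333333}
```

Even this level of granularity answers the retention-critical question: is adoption actually moving, and in which teams? The qualitative feedback layered on top explains why.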

Avoid routing every AI-related question through this person once the role becomes visible internally. That burns them out quickly and doesn’t scale. 

Part of the role’s design is to build team-level capability so the SME isn’t a permanent dependency. If your onboarding process creates that dependency, you’ve misunderstood the job.

AI super-users, the employees who engage with AI tools most deeply and frequently, are currently 5x more productive than their colleagues who haven’t adopted. 

The AI Adoption SME’s job is to move more of your organisation into that category, systematically and safely. That outcome requires a strong hire, a well-scoped role, and an organisation ready to support the work.

AI Adoption SME Job Description Template

Use this as a starting point. Adapt the reporting line, tooling stack, and sector context to your organisation. Remove the “desirable” criteria if you want a wider shortlist.

Job title: AI Adoption SME

Department: [Technology / Operations / Front Office Technology]

Reporting to: [Head of AI / CTO / Head of Operations]

Location: [City] | [X] days per week/month in office (Remote / Hybrid)

Type: Permanent / Contract

About the role

We’re building out our AI capability and need someone who can close the gap between what our AI tools can do and how our teams are actually using them. This isn’t a pure engineering role. It’s an enablement role with technical depth: part trainer, part process analyst, part prototype builder.

You’ll work directly with business and technology teams to understand how they operate, identify where AI can help, and deliver training and lightweight workflows that change how work gets done day to day. You’ll also track adoption across the organisation and feed those insights back to the teams responsible for tooling and infrastructure.

What you’ll do

Enablement and training

  • Run hands-on workshops, demos, and training sessions for both technical and non-technical audiences across the organisation
  • Deliver practical training on prompt engineering, LLM limitations, and good usage patterns
  • Build and maintain reusable materials including prompt libraries, workflow walkthroughs, and short guidance notes
  • Refine training content continuously based on user feedback and observed gaps

Discovery and process mapping

  • Spend structured time with teams to understand their current workflows, manual processes, and pain points
  • Map end-to-end workflows and identify realistic opportunities for AI assistance
  • Help teams articulate problems clearly enough that targeted AI tooling can be applied

Lightweight prototyping

  • Build small prototypes and proof-of-concept workflows using Python, internal AI libraries, and low-code platforms such as Copilot Studio or Power Automate
  • Work alongside engineering or platform teams when ideas move beyond the prototype stage
  • Focus on small, demonstrable improvements rather than large speculative builds

Adoption metrics and reporting

  • Define and track adoption metrics for approved AI tools in collaboration with technology and operations leads
  • Maintain simple dashboards showing active usage, frequency of use, and common use cases
  • Combine quantitative usage data with qualitative feedback to build a clear view of how AI tools are being used in practice
  • Share regular adoption insights with relevant stakeholders to inform training priorities and tooling decisions

Governance and responsible use

  • Help teams understand the limitations of AI outputs and the importance of human review and validation
  • Support the organisation’s AI governance and control frameworks at the point of team-level adoption
  • Flag recurring misuse patterns or governance gaps to the appropriate teams

What we’re looking for

Essential

  • Demonstrated experience delivering training or enablement sessions for both technical and non-technical audiences
  • Hands-on experience with modern AI tools beyond basic end-user interaction, including practical prompt engineering and an understanding of common LLM failure modes
  • Familiarity with grounding AI outputs in data through document retrieval, APIs, MCP, or structured context rather than free-text prompting alone
  • Comfortable working with Python at a practical level for scripts, API calls, and data handling
  • Experience with enterprise or low-code AI tooling such as Microsoft Copilot Studio, Power Automate, Power Apps, or similar platforms
  • Strong communication skills with the ability to explain technical concepts clearly to non-technical users
  • Comfortable operating in environments where requirements evolve and able to iterate pragmatically

Desirable

  • Familiarity with retrieval-augmented generation (RAG) concepts at a practical level
  • Experience designing or supporting AI-assisted workflows for document handling, summarisation, data extraction, or knowledge access
  • Awareness of responsible AI considerations in regulated environments, including data handling, output validation, and human oversight
  • Exposure to lightweight agent-style patterns including tool use, structured outputs, and task decomposition
  • Previous research, writing, or thought leadership on AI adoption, enablement, or implementation

About you

You’re technically capable but measure your impact through the behaviour change you create in others, not the code you ship. You’re comfortable in a workshop room, at a whiteboard with a sceptical team lead, and in a Python script. You ask good questions before you propose anything. You understand that AI tools require sensible constraints, not just enthusiasm, and you can make that case clearly to users at every level.

What we offer

  • [Salary range]
  • [Hybrid or remote policy]
  • [Benefits summary]
  • [Learning and development budget]
  • [Any sector-specific context, e.g. front-office exposure, regulated environment experience]

Get a free audit

Book a 30-minute call to see where AI could help your business.
