Everyone in the room knows what an LLM is. Half of them have built something with one.
The junior devs have deployed AI agents, the senior architects have read the MCP spec, and the consultants have all the acronyms memorised: RAG, A2A, LangChain, Foundry, fine-tuning, agentic loops.
And yet, organisations are still making expensive, avoidable decisions about AI because nobody in the building can explain what’s actually happening—to the people who need to understand it most.
And therein lies the problem: translation.
What the job descriptions get wrong
Browse any AI specialist or AI consultant job listing right now and you’ll find a familiar structure.
The core requirements fill half the page: platform familiarity, architectural awareness, hands-on delivery, governance knowledge. All good stuff.
Then, almost as an afterthought, the “what will make you stand out” section mentions communication skills: the ability to convey technical concepts to non-technical audiences, thought leadership, bridging business and technical teams.
Consultancies have this backwards. Communication isn’t a bonus. It’s the job.
Five audiences, five conversations
In most AI engagements, you’re not talking to one type of stakeholder. You’re managing five distinct conversations simultaneously, each with its own vocabulary, risk tolerance, and definition of success.
| Stakeholder | What they actually need to hear |
| --- | --- |
| Board | What this decision costs, what it risks, and what happens if competitors move first |
| Management | Where the project is, what the blockers are, and what you need from them |
| Clients | What to expect, when to expect it, and what they control |
| Peers | What you’re building, how it connects to their work, and where the handoffs are |
| The AI itself | Exactly what you want, in the right order, with the right constraints |
That last row isn’t a joke. Prompt engineering is communication.
Writing a system prompt that gets consistent, accurate, useful output from a model is the same discipline as writing a good brief, a good spec, or a good client email.
You’re structuring information for an audience that will behave differently depending on how clearly you express yourself.
The people who are best at prompting are usually the best communicators, not the best engineers.
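To make that concrete, here is a minimal sketch of the "prompt as brief" idea: the same structure you would use for a written brief (role, task, constraints, output format) assembled into a system prompt. The function name and section labels are illustrative, not a standard; the point is that clarity of structure, not engineering cleverness, is what the model responds to.

```python
def build_system_prompt(role: str, task: str, constraints: list[str],
                        output_format: str) -> str:
    """Assemble a system prompt the way you'd structure a good brief:
    who the model is, what it must do, what it must not do, and what
    the output should look like."""
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        "Constraints:",
        # Each constraint becomes its own bullet, just like a brief.
        *[f"- {c}" for c in constraints],
        f"Output format: {output_format}",
    ]
    return "\n".join(lines)

prompt = build_system_prompt(
    role="an analyst writing for a non-technical board",
    task="summarise the attached project report",
    constraints=[
        "no jargon or acronyms without a one-line explanation",
        "lead with cost and risk, not implementation detail",
    ],
    output_format="two short paragraphs, under 150 words",
)
print(prompt)
```

Notice that every design decision in that sketch is an audience decision: who the reader is, what they fear, what they should walk away with. Swap the board for a peer engineer and the constraints change, not the structure.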
Why communication is the rarest skill
Technical skills are teachable and increasingly commoditised.
The tools get better, the documentation gets clearer, and the tutorials get shorter.
What doesn’t get easier is learning to read a room, to understand what a CFO fears versus what a CTO wants, to know when to use a diagram and when to use a number, and to simplify without condescending.
Most people who move into AI consulting arrive from technical backgrounds; they know the stack. But they don’t always know how to stand in front of a board and say, in two sentences, why the proof of concept failed and what that means for the roadmap.
They don’t know how to write the client update that prevents the panicked call. They don’t know how to frame the same information differently for six people in the same meeting.
That skill took years to build before AI came along, and AI didn’t shorten the timeline.
If anything, the pace of change made it harder—because now you’re communicating about things that didn’t exist twelve months ago, to people who are simultaneously anxious, excited, sceptical, and under-informed.
What this means for anyone hiring in AI
If you’re building an AI team or evaluating an AI consultancy, don’t just assess the technical depth.
- Ask them to explain what they built to someone who wasn’t in the room.
- Ask them to write the exec summary before you look at the deliverable.
- Ask them how they’d handle a client who doesn’t understand why the model got it wrong.
And pick the one who can do all of that clearly and quickly.