Guide · 11 March 2026 · 5 min read

How to Run Your First AI Leadership Conversation

Most leadership teams need to have an honest conversation about AI before they can make any useful decisions about it. Here is how to run one that actually works.


Most leadership teams fall into one of two patterns when the subject of AI comes up.

The first: someone raises it, there is a brief discussion, someone else says "we should probably do something about this," and nothing happens. The conversation had no structure and produced no output, so it dies.

The second: someone raises it, one person in the room gets excited, the conversation jumps straight to tools and vendors, and the organisation spends three months evaluating platforms before anyone has agreed on what problem they are trying to solve.

Neither pattern produces anything useful. What both are missing is a structured leadership conversation that does one specific job: establish a shared, realistic understanding of what AI does before the team starts talking about strategy, investment, or implementation.

That conversation is harder to run than it sounds. Here is how to approach it.

What the conversation needs to produce

Be clear on the output before you start. The goal is not to educate the leadership team about AI. It is not to generate a list of AI use cases. It is not to make a decision about tooling or budget.

The goal is to end the conversation with a leadership team that:

  • Has a shared vocabulary for what AI does and does not do in operational terms
  • Has agreed a framework for categorising AI capability (what works reliably, what is inconsistent, what AI genuinely cannot do)
  • Has an honest picture of where each person in the room actually sits on the understanding spectrum
  • Has identified two or three operational areas where AI has proven effective in comparable businesses
  • Has a stated position on the workforce question
  • Has agreed to move into structured experimentation

That last point is the commitment the session closes on. Everything else builds towards it.

The structure that works

The session needs to run in a specific sequence. The early conversations build shared vocabulary and evidence; the later ones apply that vocabulary to the business. Jumping ahead to operational application before the vocabulary exists is the single most common reason these sessions produce nothing useful.

Start from what people know, not what you want to teach. Ask each person in the room: in one sentence, what does AI actually do? Do not correct anyone. Write the answers down. This gives you an instant picture of where the room sits and surfaces the assumptions you need to work with during the rest of the conversation.

Build the reliability framework together. The most practically useful thing a leadership team can do with an hour and a whiteboard is agree on three categories: what AI does reliably today, what it does inconsistently, and what it cannot do at all. Most AI failures in business come from applying AI to the second or third category and expecting first-category reliability. Building this framework as a group means everyone is working from the same mental model when they start evaluating opportunities.

Name the ChatGPT problem. Most senior leaders' experience of AI is personal and consumer-facing. Asking ChatGPT to draft an email creates two misleading impressions simultaneously: it underestimates what operational AI does (because drafting emails feels trivial), and it overestimates how reliably AI works (because consumer interfaces hide the failure modes). The conversation needs to close that gap explicitly. What does it mean to configure AI with your brand guidelines and product data and use it to generate 200 product descriptions an hour? That is a different thing from rewriting an email.

Map where the team actually sits. Ask each person to place themselves in one of four categories: hands-on experience of operational AI, informed but limited practical experience, generally aware but haven't used it, or sceptical. Do this anonymously if the seniority dynamic in the room makes honest self-assessment uncomfortable. The map tells you how much education and evidence-building is needed before the team can make strategic decisions with confidence.

Get to pain points before use cases. The most productive route to AI applications is not "where could we use AI?" It is "where do we spend too much time, too much money, or get too inconsistent a result?" Start from operations, not from technology. The AI relevance of each pain point becomes obvious once the reliability framework is in place.

Name the workforce question. This will be on everyone's mind whether it is voiced or not. If the facilitator does not raise it, it will distort every other conversation because people are thinking about it while discussing something else. Name it directly. Distinguish between tasks and jobs. Be honest about where roles will genuinely change and where they will not. Establish the leadership team's initial position. It does not need to be a final position, but it needs to exist.

Close with a commitment, not a plan. The session does not need to produce a roadmap. It needs to produce one agreed next step: the organisation will move from awareness into structured experimentation, someone sponsors it, there is a date for the next session. That is the output.

What goes wrong

Starting with a vendor demo. If the first AI conversation your leadership team has is a vendor pitching their platform, the conversation will be structured around what that vendor sells. Everything else becomes a deviation from their reference frame. Run the orientation conversation before any vendor is in the room.

Letting it become an education session. The purpose of the session is shared understanding, not knowledge transfer. If the facilitator is presenting slides about how LLMs work, the session has drifted. This is a leadership conversation, not a lecture.

Skipping the workforce question. Every leadership team does this. The conversation goes well, people agree on use cases, there is genuine energy, and nobody mentions jobs. Then the programme moves into experimentation and the workforce concern surfaces as resistance at the worst possible moment. Address it in the orientation session, where it belongs.

Finishing with enthusiasm but no commitment. "We should definitely do something about this" is not a commitment. A commitment has a sponsor, a next step, and a date. If the session ends without those three things, the energy in the room evaporates within a week.

Where to go from here

If you are planning to run this conversation with your leadership team, the Orientation Facilitation Guide (01c) gives you the full session structure: seven conversations in sequence, with facilitation approaches, probing questions, patterns to watch for, and a workshop synthesis template. It is designed for an external facilitator or an internal sponsor running the session themselves.

The AI Capability Reference Card (01b) is a printed take-away for participants: the reliable / inconsistent / impossible framework with consumer business examples across eight common use cases. Distribute it at the end of the second conversation, not the first.

Both tools are free to download. The orientation session is the foundation for everything in the AI Transformation Playbook. Get this right and the strategic work that follows is significantly easier.

AI Transformation Playbook

Ready to put this into practice?

The playbook gives you 95+ practical tools, checklists, templates, and facilitation guides for every stage of an AI transformation programme.