Guide · 24 March 2026 · 5 min read

How to Set Up and Run an AI Hackathon

A practical guide to planning, running, and following through on an AI hackathon that produces real business outcomes, not just excitement.

Why run an AI hackathon?

A hackathon compresses weeks of exploration into two days. You take a cross-section of your business, give them structured time with AI tools, and ask them to solve real problems. The best ones produce working prototypes, surface unexpected use cases, and shift how the organisation thinks about AI. No training programme does that.

The risk is it becomes an expensive away day. Lots of energy, nothing afterwards. The difference comes down to preparation and follow-through, not the event itself.

Scope the right challenges

The single biggest mistake is running an open-ended hackathon. "Use AI to solve any business problem" sounds liberating. It produces shallow work. Anchor the event around 4-6 specific business challenges drawn from your opportunity identification work.

Each challenge needs a real problem with a real owner. "Reduce the time to onboard a new franchisee from 12 weeks to 4" beats "improve onboarding." That problem owner should be in the room, answering questions, validating assumptions, and committed to taking a viable solution forward.

If a challenge needs weeks of data engineering before AI can add value, it's not a hackathon challenge. Pick problems where the data and systems are already accessible.

Build the right teams

Don't let people self-select into comfortable groups. Each team needs someone who understands the business problem, someone with enough technical confidence to build a prototype, and someone who can present the outcome commercially. Four to five people per team works best.

Mix seniority deliberately. Junior staff who use AI tools daily bring a completely different perspective to senior leaders who understand the strategic context but haven't touched the tools themselves. Pair them. Both learn something.

Include sceptics. If every participant already uses AI daily, you learn nothing about adoption barriers. The objections from operational staff are the most valuable input for scaling and adoption.

Set up the environment before the event

Nothing kills momentum like spending the first two hours on login credentials and software installations. Sort three things before the event:

  • Enterprise AI tool access for every participant (ChatGPT, Claude, Copilot, whatever your organisation has approved). Start the IT security review six weeks out. Leave it until the week before and you'll get a compromised experience.
  • Data packs for each challenge: anonymised extracts, links to relevant internal systems, and a few example prompts to get teams moving.
  • A starter kit per challenge: one-page brief, success criteria, constraints, and the problem owner's contact details.

Within 15 minutes of the start, every team should be working on the problem. Not setting up laptops.

Structure the two days

One day is too short. Teams spend all their time in discovery and never build anything. Three days and energy drops off. Two days hits the balance.

  • Day 1, AM: Problem briefings from challenge owners (30 mins each). Teams assigned. Initial exploration.
  • Day 1, PM: Build. Roaming coaches available for technical support and prompt engineering.
  • Day 1, close: Lightning check-ins (5 mins per team). Forces teams to articulate what they're building and why.
  • Day 2, AM: Refine and build. Polish the solution, prepare the pitch.
  • Day 2, PM: Final presentations (10 mins per team, 5 mins Q&A). Judging panel scores against criteria.

Have roaming coaches available throughout: people comfortable with prompt engineering and the tooling landscape who can unblock teams quickly. Without them, you'll lose hours to avoidable dead ends.

Judge on business impact, not technical cleverness

Weight the scoring heavily toward value:

  • Problem-solution fit: 30%
  • Feasibility (could this ship in 90 days?): 25%
  • Business impact (time, cost, or quality improvement): 25%
  • Scalability to other areas: 10%
  • Presentation quality: 10%
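
If it helps to make the weighting concrete, here is a minimal sketch of how the panel's scores roll up into a single weighted total. The criterion names, the 1-10 scale, and the example scores are illustrative assumptions, not part of any specific scoring tool:

```python
# Minimal sketch: weighted hackathon scoring (hypothetical 1-10 scores per criterion).
WEIGHTS = {
    "problem_solution_fit": 0.30,
    "feasibility": 0.25,
    "business_impact": 0.25,
    "scalability": 0.10,
    "presentation": 0.10,
}

def weighted_total(scores: dict) -> float:
    """Combine per-criterion scores (1-10) into one weighted total out of 10."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# Example: a team that nails the problem but has a rough prototype.
team_scores = {
    "problem_solution_fit": 9,
    "feasibility": 6,
    "business_impact": 8,
    "scalability": 5,
    "presentation": 7,
}
print(f"Weighted total: {weighted_total(team_scores):.1f} / 10")  # 7.4
```

The point of the weighting is visible in the example: a strong problem-solution fit and business impact carry a team even when the prototype is rough, which is exactly the behaviour you want from the panel.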

Put at least one senior sponsor on the panel: someone with the authority to greenlight next steps. If the judges can't action anything, the event loses credibility instantly.

What good looks like

A UK high-street retailer (800 stores) ran a two-day hackathon with 8 teams. The winning team built a prototype that used an LLM to draft store manager weekly reports from EPOS and footfall data, a task that took each manager 45 minutes every Monday morning. Rolled out to 200 stores within 8 weeks. Estimated saving: 3,000 hours per year.

A QSR franchise (2,000 locations) focused their hackathon on compliance. One team built a proof of concept that analysed franchisee compliance visit reports and flagged recurring issues by region. The prototype was rough, but it revealed that 60% of compliance failures clustered around three operational areas. That insight had been invisible in the manual reporting process. The compliance team adopted and refined the tool within six weeks.

The most impactful outputs are not always the winners. At a 50-person DTC brand, a customer service agent shared a prompt chain she'd been using informally for weeks to draft personalised returns responses. It cut handling time from 8 minutes to 3. The hackathon gave her permission to share it. It became the team standard within a fortnight.

The follow-through

This is where most hackathons fail. Within 48 hours of the event, every challenge owner needs to make a clear call on each output: kill, experiment, or scale. Connect viable outputs to your experimentation framework and assign an owner, a timeline, and a first milestone.

If there's no post-hackathon governance, the best ideas die in a shared drive. Schedule the follow-up review before you run the event, not after.

Common traps

  • Too many spectators. Sponsors who only show up for the final presentations signal that this is entertainment, not work. They need to be present throughout, asking questions and removing blockers.
  • Ignoring data access. Teams discover on day one that they can't reach the data they need. Security restrictions, format issues, or nobody thought to extract it. Solve this entirely beforehand.
  • Open-ended briefs. "Use AI to improve something" produces nothing actionable. Constrain the challenges tightly.
  • No decision framework. Without a clear kill/experiment/scale process, the outputs go nowhere. You've spent the budget and captured none of the value.
  • Only inviting enthusiasts. You need the people who will resist adoption in the room. Their objections are exactly what you need to solve for at scale.

Next steps

If you're planning your first hackathon, start with the AI Readiness Assessment to understand your organisation's starting point, then use Opportunity Identification (Tool 04a) to shortlist the right challenges. The Experimentation Framework gives you the structure to take winning ideas forward.

Build the event around real problems, real people, and a real plan for what happens next. Everything else is secondary.

AI Transformation Playbook

Ready to put this into practice?

The playbook gives you 95+ practical tools, checklists, templates, and facilitation guides for every stage of an AI transformation programme.