
The 30-Minute AI Check Every Agency Founder Should Do
Most agencies say they want to “use AI better”.
In practice, that usually means:
Trying a few new tools
Writing better prompts
Encouraging the team to experiment
It feels productive.
But it rarely changes how the agency actually runs.
If AI is going to reduce workload and protect margin, it has to live inside real processes — not alongside them.
So instead of another strategy post, here’s something you can actually use.
This is the 30-minute AI check I run with agency founders before we decide whether AI is worth touching at all.
Step 1: List Your Weekly “Thinking Drains” (10 minutes)
Open a doc and answer this question:
Where does thinking — not judgement — eat time every week?
Not creative work.
Not relationship work.
Not commercial calls.
The thinking prep work.
Common answers look like:
Reviewing account performance before meetings
Summarising delivery status across projects
Pulling insights together for reports
Sense-checking outputs before they go to clients
Answering the same internal questions repeatedly
Write down 5–10 tasks that:
Happen weekly
Follow a similar pattern each time
Still rely heavily on humans
If a task requires final judgement, that’s fine.
If it requires humans to rethink from scratch every time, that’s the issue.
Step 2: Apply the “Should AI Touch This?” Test (10 minutes)
For each task, answer yes or no to these three questions:
Does this follow a repeatable pattern?
Are there existing standards (even informal ones)?
Is the final decision still human-led?
If the answer is yes to all three, AI should already be involved.
If the answer is no, AI will likely create mess — not leverage.
This is where most agencies go wrong.
They try to use AI where judgement is highest
…and ignore the work that simply repeats.
Step 3: Decide What AI’s Role Actually Is (5 minutes)
For the tasks that passed the test, write one sentence per task:
“AI’s role in this process is to ___ so that humans can ___.”
Examples:
Analyse performance trends so humans can decide what matters
Draft structured reports so humans can refine insights
Surface SOP guidance so humans don’t interrupt delivery
If you can’t clearly describe AI’s role, don’t deploy it yet.
Step 4: Identify the Real Block (5 minutes)
At this point, most founders realise the problem isn’t AI.
It’s one of these:
The process isn’t clear enough
Standards aren’t documented
Ownership is missing
No one knows how to embed AI without risking quality
Which is why AI keeps hovering around the business instead of sitting inside it.
What This Exercise Usually Reveals
Agencies that get value from AI aren’t more advanced.
They’re more precise.
They:
Use AI in fewer places
Tie it to specific processes
Treat it as operational infrastructure — not a tool
That’s the difference between experimentation and leverage.
Where the AI in Agencies Sprint Fits
This is exactly what the AI in Agencies Sprint is designed to solve — end to end.
Not “here’s how to use ChatGPT.”
Instead:
Mapping where AI should sit in your operations
Redesigning key workflows with AI built in
Creating SOPs and prompts your team can actually use
Putting guardrails around quality and delivery
The Sprint is:
~1 hour a day for one week
A month of follow-up support to make sure it sticks
A small, contained commitment — but it gives you the operating system to keep implementing AI properly across the year.
You can find full details here:
https://aiinagencies.com/ai-sprint
The Real Question to Ask
Not:
“What else could AI do for us?”
But:
“What thinking should humans stop repeating?”
That’s where AI actually earns its place.
