AI tools like ChatGPT and Claude are changing the way we work. They’re brilliant! Until they’re not. Have you ever caught AI confidently making stuff up? Exactly!
The AI Collaboration Matrix helps you avoid costly AI mistakes by matching your expertise and task importance. I call it that because it sounds super cool. But really, it’s just a 2x2 grid that captures my experience of what frame of mind to be in when using AI.
It’s based on how much you know, and how much the task matters.
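For the programmatically minded, the grid can be sketched as a tiny lookup. This is purely illustrative (the `mode` function and the low/high labels are mine, not part of the matrix itself), mapping the two axes onto the four quadrants described below.

```python
# A minimal sketch of the 2x2 grid: (your expertise, task stakes) -> mindset.
# Labels and function name are illustrative, not an official API.

MATRIX = {
    ("low", "low"): "Ideation",        # explore freely; ignore the duds
    ("high", "low"): "Augmentation",   # AI as a coffee-fuelled intern
    ("high", "high"): "Verification",  # AI as a new hire; check everything
    ("low", "high"): "Caution",        # seek external validation
}

def mode(expertise: str, stakes: str) -> str:
    """Return the collaboration mindset for a given expertise level and task stakes."""
    return MATRIX[(expertise, stakes)]

print(mode("high", "low"))  # Augmentation
```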
Before we start
Accurately assessing your own expertise is hard. Humans (me included) often confuse confidence with competence (see Dunning-Kruger). I’m guilty of this when I’m yelling at the football manager to play a different formation when in reality I haven’t got a clue.
Similarly, the curse of knowledge can make you assume that information is so obvious it doesn’t need sharing, resulting in poor collaboration (between you and other humans, but also between you and AI).
How do you counteract these biases? Awareness helps. Regularly calibrate your expertise by comparing it with trusted colleagues or external sources.
Ideation
AI is brilliant here. If I’m exploring a new idea and the consequences are low, all is good!
Do I want to generate ten different titles for this blog post? Sure! I can just ignore them if they are crap (“AI Partnership Quadrants: Matching Strategy to Situation” indicates to me a strong training partnership with McKinsey!).
Do I want to summarize the absolutely massive email chain about where we’re going to celebrate Kerry’s birthday? Sure. Worst case, I miss out on a trip to Nando’s.
More seriously, if I want to reframe something for a different persona, explore some new product feature ideas, or even create an agenda for a meeting, AI is awesome. It gets over that “blank-page” syndrome and gives me a starting point.
Augmentation
You’re an expert and you’ve got some work to do where the cost of failure isn’t so high. Use AI to amplify your expertise. AI can generate content faster than you can, and you can confidently review it.
For me, this is the sweet spot of AI right now. Treat the AI as a coffee-fuelled intern and work together to solve the problem. Sure, the intern might make mistakes but since you know the domain you can correct those mistakes and move forward. The costs of failure aren’t so high.
Another important skill is recognizing when you’re being led astray. If you’ve used AI much, you’ll know the situation where it leads you down an endless rabbit hole. Being an expert in the domain means you can stop this before you waste too much time.
Verification
Unlike augmentation, verification covers tasks where you can’t afford mistakes. Use AI cautiously, verifying every step.
You can still use AI here, but you have to be super sceptical of the work it produces. There is some value, but everything the AI generates has to be checked and double-checked.
Your mindset here should be to treat the AI like a new hire. They don’t have your context; they are enthusiastic and make mistakes. They’ll do the donkey work for you, but you’ve still got to cast a critical eye over everything.
There are productivity gains to be had here, but not as much as you’d think.
Caution
One of the best quotes I’ve seen on AI is:
AI is an expert in everything you’re not.
This is where AI is most dangerous.
To get a feel for it, find a topic you are deeply knowledgeable about. Could be anything. Start up a dialogue with AI and start asking questions. Ask the hard questions. Ask a subtly impossible question and you’ll receive a confident (but wrong) answer. You’ll almost certainly find subtle mistakes. Now imagine doing this on something you know little about. Those little mistakes start to compound.
Imagine using AI for legal advice without consulting a lawyer; small errors become costly quickly.
When you’re not an expert, you must seek external validation for any of the ideas that AI is coming up with. Find a colleague with expertise or ask for an external view. AI’s greatest weakness is that it tells you things with conviction, sounding confident even when it’s wrong. This triggers all sorts of cognitive biases that increase the believability of what it says. It’s a trap!
You’re going to struggle to validate it yourself.
Remember, a small AI mistake in a domain you don’t understand well can quickly snowball into something much bigger. Don’t just trust; always verify externally.
Conclusion
As those scaling laws kick in, the boundaries of the matrix shift: you can increasingly use AI in domains you’re less familiar with. But we must always remember the inherent limitations of these AI systems:
They are trained on the internet.
They don’t have your context.
They can make stuff up.
AI can boost productivity when used smartly. Know your limits. Use the matrix. Verify relentlessly. AI complements (but doesn’t replace) you!