Minds x Machines
Your AI Did the Work. Did You?
Intelligence follows patterns. Judgement decides what matters.

The Polished Illusion
Derek’s report looked finished. The thinking wasn't.
AI did the intelligence. Nobody added the judgement.
A year ago, AI-generated work was easy to spot. It had a slightly plastic texture: too smooth, too generic, too fond of words like "delve" and "landscape." You could smell the prompt from across the room.
That has changed. AI output no longer announces itself with jazz hands.
One of my clients received a report that hit every mark visually. The sentences behaved. The formatting was clean. The structure looked sensible.
The report arrived looking like it owned a navy blazer.
But the thinking was thin. There was no substance, and it took my client a while to identify what felt off. The report was AI slop: lots of pages saying nothing much.
And that is precisely the problem. When something looks polished, our brains downgrade scrutiny.
We assume the thinking has happened because the document looks like thinking happened.
PowerPoint has been enjoying this loophole for decades, to be fair.

That is where AI gets interesting. And slightly dangerous.
Intelligence vs. Judgement
Sequoia Capital partner Julien Bek published an essay earlier this year that gave me the language for something I'd been bumping into with clients for months. He draws a line through all professional work: intelligence on one side, judgement on the other.

Intelligence follows rules, even when the rules are complex. Organising. Building decks. Writing code. Crunching large volumes of information into structured outputs.
Judgement adds what matters: deciding whether the output is valuable and useful, and what action to take.
Bek puts it simply: writing code is mostly intelligence. Knowing what to build next is judgement.
His point was that AI has crossed the threshold where it can handle intelligence work autonomously. It has not crossed it on judgement. Not yet. Maybe not for a long time.
I split my work and workflows into intelligence (AI value add) and judgement (my value add).

My value add is curating the AI learning journey for my clients. My ability to consume, collect, and connect the dots is why I get hired. My years of experience teaching complex concepts are my value proposition. That is the judgement I bring.
Once you understand where your work sits on the intelligence/judgement continuum, you can also use AI better.
When MORE Becomes a Problem
AI gives us abundance.

What will move the needle?
Most companies will soon have more possibilities than they can act on. This sounds exciting, until you remember that many organisations already struggle to choose between the seventeen priorities they confidently called "top priority" in January.
Therefore I define judgement as:
Judgement is the ability to make a defensible choice when the rules run out and the machine gives you more options than you can act on.
This reframes the value question entirely. Before AI, producing the output was the hard part, the bottleneck. Research took time. Drafts took effort. Formatting took patience. Now production is nearly free.
The hard part has moved. Judgement is knowing what to remove, ignore, simplify, or stop doing when AI generates infinite options. In a world of scarcity, the bottleneck was production. In a world of AI-driven abundance, the bottleneck is curation, and that is a judgement job.
Judgement is the x-factor that turns AI output into value and bottom line. It includes:

The Accountability Spectrum
If you would not defend the recommendation in a meeting, it should not be in the document.
If you are not proud of the work and it isn't your view, you should not share it.
Because once you share it, it becomes yours. AI may have helped create it (win), but your name is on the email, your face is in the meeting, and your reputation absorbs the consequences.
The gap between "this looks done" and "I own this" is where professional risk lives. The spectrum is a daily gut-check. Know where you are on it before you hit send.

The winners live at levels 4 and 5.
There is no footnote that says "generated by ChatGPT, so any errors are its fault." The professional world does not work that way.
Cognitive Offloading vs Cognitive Surrender
It’s the critical line between using AI and being used by it.
Researchers at Wharton call it cognitive surrender: adopting AI outputs with minimal scrutiny, overriding both intuition and deliberate reasoning.
Cognitive offloading is different. That is when AI handles the how while you still own the what. You are the pilot who offloads to the copilot, not the passenger watching Friends at the back.

If all you do is upload a file and press "go" in ChatGPT, you have created slop. And any machine can do your work.
Are you applying judgement throughout the process or only showing up at the end?
So where do you start?
Here are three tests I developed for our AI Programmes.
1. The Agreement Test
Take a task you do regularly.
If you gave the same inputs and the same brief to two competent people in your field, would they produce roughly the same output?
If yes, the rules are doing most of the work. That is intelligence.
If their answers differ, but both could be defended, judgement is involved.

Here is an example:

2. The Locator Test
Most work is a mix. Identify which part of a workflow is intelligence and which is judgement.
A report may be mostly intelligence: gather data, summarise findings, structure the document. But the judgement may sit in deciding what matters, what to leave out, what the numbers mean, and what action to recommend.

Find that judgement layer. That is where human value sits. Outsource the rest.
3. The Error-Cost Test
The first two tests tell you whether AI can do the work. This one tells you whether it should.
Some intelligence work has very low error costs. A wrong subject line costs you a tenth of a percent of open rate.
Other intelligence work has catastrophic error costs. A miscoded medical diagnosis can affect treatment. A KYC failure can cost a bank a regulatory fine that wipes out a year of profit.

If the cost is low, AI can probably do more of it.
If the cost is high, a human needs to review it properly. Not glance at it. Not admire the formatting. Review it.
But the most dangerous zone is the top-left: high cost, low visibility. This is where something is wrong and nobody notices. An inaccurate data point buried in a report. A flawed assumption in a financial model. A recommendation that sounds reasonable but is based on hallucinated logic.
The Finish Line
For now, start with the line. Before you share anything that AI helped create, run through these four questions in order.

Do not confuse pretty with completion. A polished report is not a finished report. It is a report that has learned to dress well.
Your job is to check whether it has anything to say.
Want to learn how to outsource Intelligence to AI, and apply real Judgement?
Contact us to learn more about our AI Literacy Programmes.
👉 Schedule a free consultation and let’s get started.
If you forget everything else, remember this…
"The work is not done when the output looks polished. The work is done when someone has added judgement and is willing to stand behind it."
