A Fool With a Tool Is Still a Fool
Why the AI Pilot Purgatory Is a Leadership Problem, Not a Technology One
David Mantica
May 16, 2026
5 min read
I've been in a lot of rooms lately where leaders are frustrated about AI.
The pattern is almost always the same. They bought the licenses. They sent a memo. They held a kickoff. Maybe they even ran a few pilots. And six months later, the same dashboards are showing the same disappointing numbers — adoption is flat, the productivity gains are theoretical, and nobody can quite explain why the thing isn't working.
There's a phrase that gets stuck in my head every time I hear one of these stories. It comes from Grady Booch, one of the architects of modern software engineering:
"A fool with a tool is still a fool."
It's the most uncomfortable truth in the whole AI conversation, and it's the one almost nobody wants to put on a slide.
The hype is doing real damage
Let me say what I think is actually happening.
We are living through a period where leaders have been told, repeatedly and with enormous conviction, that AI is going to transform their business. Gartner says 80% of project management tasks will be AI-driven by 2030. KPMG says 44% of leaders expect AI agents to take lead roles in managing projects. Layoffs.fyi has tracked roughly 780,000 tech-sector layoffs since 2022 — many of them framed by CEOs as "efficiency" tied to AI-driven restructuring.
That pressure is real. It's also distorting decisions.
When leaders feel behind, they buy. They buy tools, they buy licenses, they buy consulting hours. Buying is the easiest thing a leader can do, because it looks like action. It generates a press release. It moves a line item. It makes the board feel like the company is "doing something about AI."
What buying does not do is build the capability inside your organization to actually use the tool. And without that capability, the tool just sits there — or worse, it gets used poorly by people who don't have the judgment to catch the mistakes the AI is going to make.
What "pilot purgatory" actually looks like
I keep running into the same four failure patterns. Every leader I talk to recognizes at least two of them, usually three. Sometimes all four:
Tool-first thinking. Buying licenses without redesigning the work the tools are supposed to do. The classic version of this is a company that signs an enterprise Copilot deal and then is shocked that knowledge workers don't magically reinvent their workflows. Why would they? You didn't ask them to.
No use-case discipline. Experiments are happening everywhere — marketing has a thing, finance has a thing, customer service has a thing — but no one can point to a measurable productivity outcome. There's energy without direction, and energy without direction burns out fast.
A capability gap. The workforce hasn't been trained in prompting, critical thinking about AI output, or human-in-the-loop oversight. People are either underusing the tools (because they don't know how) or overusing them (because they don't know they shouldn't trust the output).
Leadership avoidance. Leaders delegate AI to IT and then act surprised when nothing strategic comes out. AI is not an IT problem. It is a how-we-work problem.
Recognize any of these? Most leaders do. And here's the thing: none of them are technology failures. They're leadership failures dressed up in technology costumes.
The Booch principle, restated
If you hand a powerful tool to a person who hasn't built the judgment to use it, three things happen — none of them good.
First, the tool amplifies whatever dysfunction was already there. If your meetings are unfocused, AI-generated meeting notes will make them more unfocused at higher volume. If your decision-making is sloppy, AI-generated decision memos will be sloppy faster. AI is a multiplier. It multiplies good practice. It also multiplies bad practice. There's no neutral setting.
Second, the user mistakes fluency for correctness. This is the most dangerous failure mode, and it's well-documented. The Harvard Business School and BCG study published in Organization Science — the one that introduced the concept of the "jagged frontier" — found that knowledge workers using GPT-4 outperformed their peers significantly on tasks inside the AI's capability frontier. But on tasks just outside it, the AI users performed worse than the non-AI control group, because they accepted confident-sounding wrong answers without checking. The researchers called it "falling asleep at the wheel."
Third, the organization loses the ability to learn. If AI is doing the first draft, the analysis, and the synthesis, junior people aren't building those muscles. Five years from now, you have a workforce that can prompt a model but can't reason without one. That is a strategic vulnerability hiding inside a productivity gain.
None of this is the AI's fault. The AI is doing exactly what it's designed to do. The failure is that we handed a chainsaw to someone we never taught to use one.
What capable looks like
Let me flip the picture. What does it look like when an organization is actually building the capability the tool requires?
They start with the work, not the tool. Before they pick a vendor or sign a license, they look at the actual flow of knowledge work in the organization — where decisions get made, where bottlenecks form, where rework lives, where institutional knowledge is being lost. Then they ask which of those problems AI can credibly help with. Tool selection follows. It doesn't lead.
They invest in critical thinking, not just prompting. Prompting is a skill, but it's the smaller skill. The bigger skill is being able to look at an AI-generated output and ask: Is this correct? What did it miss? What's the source? Whose perspective is missing? What would I want to verify before I act on this? That is critical thinking, and it has to be deliberately developed.
They redesign the work. This is the part most organizations skip. If AI is now drafting the report, what does the human do? If AI is now synthesizing the meeting, what does the analyst do? If AI is now answering tier-one tickets, what does the support team focus on instead? These are not rhetorical questions. They are operational questions, and if you don't answer them deliberately, the answer becomes "nothing different" — which is the worst answer.
They build psychological safety for experimentation. Amy Edmondson's foundational research at Harvard Business School (published in Administrative Science Quarterly, 1999, and reinforced in dozens of follow-on studies) is unambiguous: teams that don't feel safe to take interpersonal risks don't learn. They don't speak up about what's not working. They don't share mistakes. They don't admit they're stuck. If your AI rollout is happening in a culture where people are afraid to say "I tried it and it didn't work," your AI rollout is going to fail. The research on this is settled.
They lead the change. They don't delegate it. When a CEO says "AI is a priority," and then doesn't change their own calendar to engage with it, the organization reads the second signal, not the first. Adaptive leadership work cannot be outsourced.
The hard part
Here's what I tell leaders when this comes up in a coaching conversation:
The hard part isn't the technology. The technology is going to keep getting better whether you do anything or not. The hard part is building an organization full of people who can think clearly, question outputs, hold judgment in tension with speed, and adapt their habits faster than the tools change underneath them.
That is leadership work. It is the work of building capability — not just deploying tools.
The companies that are winning right now are not the ones with the most expensive AI stack. They are the ones whose leaders understood, very early, that this was an adaptive challenge dressed in technical clothing — and they invested accordingly.
A fool with a tool is still a fool. A capable person with a tool is a force multiplier. The leader's job is to build the capable people.
That's the whole game.
Sources & further reading:
- Dell'Acqua, F., McFowland, E., Mollick, E., et al. (2023). Navigating the Jagged Technological Frontier. Harvard Business School Working Paper 24-013. Published in Organization Science (2026).
- Edmondson, A. (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 44(2), 350–383.
- Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio.
- Layoffs.fyi (live tracker, accessed May 2026).