
Mollick's Four Laws Aren't Optional Anymore

A Practical Framework Every Knowledge Worker Should Operate By


If I could put one framework into every organizational training program in the country tomorrow, it would be the four laws of co-intelligence from Ethan Mollick's book Co-Intelligence: Living and Working with AI (Portfolio, 2024).

Not because Mollick is the only thinker working on this. But because his four laws are the rare piece of AI guidance that is simultaneously simple enough to remember, practical enough to operationalize, and rigorous enough to actually work.

Three years into the GenAI era, most of the workforce still doesn't have a working framework for engaging with these tools. They either use AI too little (out of fear or inertia) or too much (accepting outputs without judgment). Mollick's four laws give you the operating principles to do neither.


 
Law 1: Always invite AI to the table

This one sounds simple. It's actually the hardest of the four, because it cuts against how most professionals are wired.

Mollick's argument: the only way to develop a useful intuition for what AI is good at — and where it fails — is to use it on everything. Not just the obvious tasks. Every task. Disciplined experimentation across the full range of your work is the only way to map the territory.

The reason this is hard: most knowledge workers have a strong, often unconscious sense of which tasks "deserve" AI assistance and which ones don't. They'll use AI for an email but not for an important strategic memo. They'll use it for a first draft but not for the analysis underneath it, because the analysis is "their job." They'll use it for low-stakes work but not for high-stakes work — which is backwards from what the research suggests.

The Harvard / BCG study published in Organization Science found that consultants using GPT-4 outperformed their non-AI peers across 18 different business tasks — 12.2% more tasks completed, 25.1% faster, 40% higher quality output. Across the board.

That lift only happens if you actually try AI on the task in the first place.

The leadership implication: You cannot ask your team to "use AI more" as an abstract directive. You have to remove the friction preventing them from trying it on real work. Make it the default, not the exception.


 
Law 2: Be the human in the loop

This is the law that protects you from the failure mode of Law 1.

If "always invite AI to the table" is the gas pedal, "be the human in the loop" is the brake — and you need both. Mollick's point is unambiguous: AI is a partner, not a replacement. Your judgment is the final call. Every time.

The same Harvard / BCG / MIT / Wharton research surfaced the dark side of widespread AI use: when researchers gave consultants a task that fell outside the AI's capability frontier — a task the AI was likely to get wrong — the consultants using AI performed worse than the control group. They accepted plausible-sounding wrong answers. The researchers called this "falling asleep at the wheel."

You cannot stop AI from sometimes being confidently wrong. You can only stop yourself from being the person who passes the confidently-wrong answer along.

Being the human in the loop means:

  • Reading every output, not skimming it.
  • Verifying claims that matter before you act on them.
  • Scrutinizing how the question was framed, not just whether the answer sounds right.
  • Owning the final product as your work, not as the AI's.

The professional accountability for the output stays with the human. Always. The AI is a tool. Tools don't sign off. People do.

The leadership implication: Build the human-in-the-loop check into your team's workflows explicitly. Don't assume it's happening. Make it part of the operating standard.
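One way to make that explicit is to encode it, not just state it. Below is a minimal sketch in Python of a review gate that refuses to ship AI-assisted work until a named human has read it and verified the claims that matter. The fields, rules, and names are hypothetical illustrations, not a prescribed implementation.

    # A minimal human-in-the-loop gate, as a sketch. The fields and rules
    # are illustrative; adapt them to your own workflow and tooling.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AIDraft:
        content: str
        claims_verified: bool = False      # key facts checked against sources
        reviewed_by: Optional[str] = None  # the human who read the full output

    def sign_off(draft: AIDraft, reviewer: str) -> AIDraft:
        """Record that a named human has read and now owns the draft."""
        draft.reviewed_by = reviewer
        return draft

    def publish(draft: AIDraft) -> str:
        """Refuse to ship anything no human has reviewed and verified."""
        if draft.reviewed_by is None:
            raise ValueError("No human in the loop: draft was never reviewed.")
        if not draft.claims_verified:
            raise ValueError("Claims that matter were never verified.")
        return draft.content

    # Usage: review happens, verification happens, then publishing is allowed.
    draft = AIDraft(content="Q3 summary...", claims_verified=True)
    print(publish(sign_off(draft, "j.rivera")))

The point of the sketch is that the check is a blocking step with a name attached, not a suggestion in a policy document.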


 
Law 3: Treat AI like a person — but tell it what kind of person it is

This is the most technically useful of the four laws, and it's the one most knowledge workers ignore.

Mollick's argument: the quality of AI output depends heavily on the context, role, and persona you give the model. A blank prompt produces generic output. A prompt that establishes who the AI is supposed to be, what audience it's writing for, what tone it should use, and what success looks like produces dramatically better output.

This is not a hack. It's how the models actually work. They're trained on enormous corpora of human-produced content, and they respond to contextual cues the way humans do. A model told "you are a senior financial analyst writing for a Fortune 500 CFO" will generate different content than the same model told "explain finance."
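To make the contrast concrete, here is a minimal sketch using the OpenAI Python SDK: the same task sent twice, once as a blank prompt and once with a persona. The model name, persona, and task are illustrative placeholders; the point is the structure of the second request.

    # A minimal sketch of Law 3: the same task, with and without a persona.
    # Model name, persona, and task are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    task = "Summarize our Q3 results for the leadership team."

    # Blank prompt: no role, no audience, no definition of success.
    generic = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": task}],
    )

    # Persona prompt: who the AI is, who it's writing for, what "good" means.
    persona = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a senior financial analyst writing for a "
                    "Fortune 500 CFO. Be concise, lead with the three "
                    "numbers that matter most, and flag any risks you see."
                ),
            },
            {"role": "user", "content": task},
        ],
    )

    print(generic.choices[0].message.content)
    print(persona.choices[0].message.content)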

The skill here isn't technical. It's communication. Specifically, the ability to give clear, contextual direction. Which means the AI power users in any organization aren't the engineers — they're the people with strong language and communication skills. The communicators. The writers. The people who can articulate a problem precisely.

That should reframe how you think about who in your organization is best positioned to lead AI adoption. It's not necessarily the most technical people. It's the people who can think clearly and communicate precisely.

The leadership implication: Invest in prompting capability across your knowledge workforce. Not as a "technical skill," but as a communication skill. Build a shared library of prompts that work for your specific business. Make prompt quality part of how you assess AI work.
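That library can start very small. Here is a sketch of what a first version might look like: named, parameterized templates that encode role, audience, and success criteria once, so the whole team can reuse what works. The template names and wording are hypothetical examples, not a standard.

    # A minimal shared prompt library: named, parameterized templates.
    # Template names and wording are hypothetical examples.
    PROMPT_LIBRARY = {
        "client_memo": (
            "You are a senior consultant writing for {client}'s executive "
            "team. Summarize the analysis below in under 300 words, lead "
            "with the recommendation, and name the two biggest risks."
            "\n\n{analysis}"
        ),
        "meeting_recap": (
            "You are a chief of staff writing for colleagues who missed "
            "the meeting. List the decisions made, owners, and deadlines."
            "\n\n{notes}"
        ),
    }

    def build_prompt(name: str, **fields: str) -> str:
        """Fill a named template; raises KeyError if a field is missing."""
        return PROMPT_LIBRARY[name].format(**fields)

    prompt = build_prompt("client_memo", client="Acme Corp", analysis="...")

A versioned file like this, reviewed the way you'd review any shared asset, is enough to start; tooling can come later.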


 
Law 4: Assume this is the worst AI you'll ever use

This is the strategic framing law. It changes how you make decisions about investment, training, and patience.

Mollick's argument: the AI you have access to today is the worst AI you will ever use. The capability curve is going straight up. Whatever frustrating limitations you're working around now will be largely solved in the next twelve months. Whatever workflow you're building today will need to be redesigned eighteen months from now to take advantage of capabilities that don't exist yet.

This has two implications.

First, build the habits now. Even if today's AI is imperfect, you need the reps. Mollick has argued that roughly 10 hours of focused experience with a current frontier model is the inflection point where useful intuition develops. You cannot wait until the AI is "good enough" — by the time it is, you'll be three cycles behind people who started with the imperfect version.

Second, don't over-invest in solving today's limitations. I see this constantly: organizations sinking enormous effort into engineering workarounds for problems the next model release will solve natively. Build flexible workflows. Build capability in your people. Build the habit of using AI. The specific tool will change — the capability is what compounds.

The leadership implication: Treat AI capability as a moving target. Plan accordingly. Don't lock in three-year contracts based on today's tool capabilities. Build a culture that expects to keep adapting, indefinitely.


 
What these four laws give you, together

Taken individually, each of Mollick's laws is useful. Taken together, they form a complete operating framework for how a knowledge worker — and an organization full of them — should engage with AI:

  • Law 1 (Invite AI to the table) gives you the experimentation discipline.
  • Law 2 (Be the human in the loop) gives you the quality control discipline.
  • Law 3 (Treat AI like a person) gives you the technical fluency discipline.
  • Law 4 (Assume this is the worst AI) gives you the strategic framing.

Most organizations have parts of one or two of these. Almost none have all four operating as a coherent system. That gap is the opportunity.

If I were a leader right now, here's what I'd do practically:

  1. Put the four laws in your AI policy. Not as suggestions. As operating principles, with examples for each.
  2. Train every knowledge worker on them. A 90-minute session, well-designed, gets you most of the way.
  3. Build them into your performance conversations. When you review someone's AI-assisted work, ask: How did you bring AI into this? How did you stay in the loop? How did you frame the prompt? What didn't work?
  4. Repeat. Repeat. Repeat. New frameworks don't become habits after one rollout.

There are dozens of AI frameworks circulating right now. Most are too complicated, too technical, or too tool-specific to survive contact with a real workforce. Mollick's four laws are the rare exception. Simple enough to remember on a Tuesday afternoon, rigorous enough to actually shape decisions.

That combination is what makes a framework worth adopting. These four have earned it. 


 
Sources & further reading:
  • Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio/Penguin.
  • Dell'Acqua, F., McFowland, E., Mollick, E., et al. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper 24-013, published in Organization Science (2026).
  • Mollick, E. One Useful Thing (Substack, ongoing essays).