SoftEd Blog

Your Brain Will Fight You on AI, and Here's the Neuroscience of Why

Written by David Mantica | May 16, 2026

Every leader I work with has had some version of this experience.

They roll out a new AI tool. They explain the benefits clearly. They give the team training. They make it free, accessible, and well-supported. And then they watch, in real time, as a non-trivial percentage of the workforce simply… doesn't use it.

The instinct is to blame the people. "They don't get it." "They're resistant to change." "They're stuck in their ways."

I want to offer a different diagnosis, and it's one that's much harder to argue with:

Their brains are doing exactly what brains are designed to do.

Once you understand the neuroscience and behavioral economics of why adaptive change is physically hard, you stop being frustrated with your people and start designing for the reality of how humans actually function. That distinction — between blaming people for resisting and designing for how brains work — is the leadership move that separates rollouts that succeed from the ones that quietly die.

 
Three forces working against you

There are three well-documented mechanisms that work against AI adoption. None of them are about your specific employees. They are about being a human being with a working nervous system.

1. The brain is built for efficiency, not change

Here is a number that should sit in the back of every leader's mind during transformation work:

Your brain is roughly 2% of your body weight, but it consumes about 20% of your energy.

That's a settled finding from decades of peer-reviewed neuroscience research. And it has a behavioral consequence: your brain is constantly looking for shortcuts to conserve fuel. It would rather use an existing pathway than build a new one. It would rather rely on a habit than think things through. It would rather assume than verify.

Adaptive change demands the opposite. It demands deliberate thinking, new pathways, conscious effort. Which is exactly what the brain has evolved to avoid wherever possible.

When you ask someone to switch from doing a task the way they've done it for fifteen years to doing it with AI assistance, you are not asking them to make a small adjustment. You are asking their brain to abandon a high-efficiency pathway and burn extra fuel building a new one. Your enthusiasm about the upside doesn't change the metabolic cost of the transition.

2. System 1 vs. System 2

Daniel Kahneman's Thinking, Fast and Slow gives us the cleanest model for what happens next.

System 1 is fast, automatic, intuitive, pattern-based. It runs on the established habits and heuristics the brain has already built. It's what gets you to work in the morning without conscious thought. It's incredibly powerful — and it's where most of our biases live.

System 2 is slow, deliberate, effortful, energy-expensive. It's what you use when you encounter genuinely new information. It questions, verifies, reasons step by step.

Adaptive AI work demands System 2 engagement. You have to consciously decide to use the tool. You have to evaluate the output. You have to apply judgment. You have to think about whether the answer is actually right.

Your brain wants System 1. Your brain always wants System 1, because System 1 is cheaper.

This means that even people who genuinely want to use AI well will default, over time and under pressure, to skipping the System 2 work. They'll accept the first AI output. They won't verify the source. They'll let fluency stand in for correctness. The Harvard Business School / BCG study on the "jagged technological frontier" documented exactly this — researchers called it "falling asleep at the wheel" — where knowledge workers using AI on tasks outside its capability frontier accepted confident-sounding wrong answers because the System 2 evaluation never happened.

You cannot eliminate this. You can only design for it.

3. Loss aversion is doing more work than you think

This one comes from Kahneman and Tversky's prospect theory, the work that earned Kahneman the Nobel Prize in Economics and is now foundational to behavioral economics:

People weight potential losses roughly twice as heavily as equivalent potential gains.

That asymmetry, more than anything else, is why change initiatives fail.

When you announce an AI initiative and list the benefits — faster output, time savings, automation of grunt work — your people are not weighing those benefits the way you're weighing them. They're weighing each potential gain against any potential loss they perceive: loss of expertise, loss of status, loss of certainty about their role, loss of the way they used to be excellent. And the math the brain is doing puts each of those losses at roughly 2× the gain you're offering.

People change willingly only when they believe the upside is essentially total, and that standard is almost never met. Resistance to loss is the most common adaptive failure I've seen across thirty years of consulting work. It isn't intelligence. It isn't attitude. It's biology.

This is why so many AI rollouts get verbal compliance and lose the actual behavior change. The person in the meeting says "yes, I see the benefits, I'm on board." Their brain says "I am going to wait and see whether this costs me something I value, and until I have evidence that it doesn't, I'm not investing the System 2 energy."  
 
What this means for how you lead

If you accept this framing, your leadership job changes. It is no longer to "get adoption up." It is to design the change in a way that works with how brains actually function. A few moves that flow directly from the research:

Name the loss. Don't just sell the gain. People can't process the upside until you acknowledge what they might be giving up. The senior analyst who built her career on financial modeling needs you to acknowledge that her expertise still matters, even as AI does the first draft. If you skip this and go straight to the productivity slide, you are guaranteed to lose her.

Lower the activation energy. System 2 is expensive. Make it as cheap as possible to start using AI well. Embed prompts directly into existing tools. Give people pre-built workflows. Reduce the cognitive load of the first ten attempts. The lower the activation energy, the more likely the new pathway gets built.

Build psychological safety for experimentation. Amy Edmondson's research at Harvard Business School is unambiguous on this: teams that don't feel safe to take interpersonal risks don't learn. They don't try things. They don't share what didn't work. They go quiet. In an AI rollout, that silence looks like adoption stalling for no apparent reason. The reason is almost always that people are afraid to try and look stupid. Fix that, and you unlock more adoption than any training program will.

Pace the change deliberately. You cannot run adaptive change as a sprint. The brain needs reps to build new pathways, and the system needs time to surface unexpected losses. Move too fast and you trigger more defensive behavior than you can manage. Move too slow and you lose momentum. Heifetz calls this "regulating the heat." It's a real skill.

Repeat. And then repeat again. The new pathway only becomes the default after dozens of successful repetitions. One training session does not rewrite a habit. One memo does not change a culture. Adaptive leaders accept that the same conversations need to happen four, five, six times before they take hold — and they don't get frustrated about it, because they understand why. 

 
The leadership reframe

The leaders who get this right stop thinking about their workforce as "resistant" and start thinking about their workforce as "running on hardware that's optimized for a world that doesn't exist anymore." The hardware is doing exactly what it should. The job of leadership is to redesign the operating environment so the hardware can succeed.

That reframe — from blaming people to designing for biology — is the most important shift I can offer you on this whole topic.

Your brain will fight you on AI. So will your team's. That isn't a defect. That's an input.

Lead accordingly.

 

Sources & further reading:

  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  • Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47(2), 263–291.
  • Dell'Acqua, F., et al. (2023). Navigating the Jagged Technological Frontier. Harvard Business School Working Paper.
  • Edmondson, A. (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 44(2), 350–383.
  • Heifetz, R. A., Grashow, A., & Linsky, M. (2009). The Practice of Adaptive Leadership. Harvard Business Press.