If you are setting AI strategy for your organization in 2026, there is a piece of research you should read before you sign off on your next initiative. The MIT Project NANDA study, published in July 2025 and titled "The GenAI Divide: State of AI in Business 2025," analyzed 300 publicly disclosed AI initiatives, conducted 52 structured interviews, and gathered 153 survey responses from senior leaders. Its headline finding set off a round of executive meetings across the industry.
Ninety-five percent of enterprise generative AI pilots produced no measurable P&L impact. Enterprises have poured an estimated thirty to forty billion dollars into generative AI, yet only about five percent of pilots achieved rapid revenue acceleration. The rest stalled, delivering little to no measurable business impact.
That figure has been debated. Some analysts have pushed back on the methodology, arguing the 95 percent number reflects reported barriers to scale rather than a precise failure rate. The critique has merit. But even if the number is off by a factor of two, the finding is striking: enterprises are spending real money on AI initiatives that mostly do not produce real returns. The question worth asking is why, and what to do differently.
The surprise in the MIT research is not that AI projects fail. Lots of technology projects fail. The surprise is where the failures are coming from. The report's authors found that the core barrier to scaling AI is not infrastructure, regulation, or talent. It is learning.
Most generative AI systems, as deployed in enterprises, do not retain feedback, adapt to context, or improve over time. They are static. A user provides the same input tomorrow that they provided today and gets back the same generic response. The tool does not know that this customer has specific preferences, that this project has particular constraints, or that the user has already corrected a similar output three times this week.
One executive quoted in the report captured it directly: the tool "doesn't retain knowledge of client preferences or learn from previous edits. It repeats the same mistakes and requires extensive context input for each session. For high-stakes work, I need a system that accumulates knowledge and improves over time."
This is the learning gap, and it is the single biggest predictor of whether an AI project will scale or stall.
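To make the gap concrete, here is a minimal sketch in Python of the difference between a static deployment and one that retains feedback. Nothing below comes from the report; the FeedbackStore class, the JSON file, and the prompt-building function are invented for illustration, but they show what "retains feedback, adapts to context, and improves over time" means at the most basic level: yesterday's correction changes today's output.

```python
# Illustrative sketch only: a toy contrast between a static assistant and one
# that retains user corrections across sessions. All names and the storage
# format are hypothetical, not taken from the MIT report or any vendor API.
import json
from pathlib import Path


class FeedbackStore:
    """Persists client preferences and past corrections between sessions."""

    def __init__(self, path: str = "feedback.json") -> None:
        self.path = Path(path)
        self.entries = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def record(self, client: str, correction: str) -> None:
        # Each saved correction becomes standing context for future requests.
        self.entries.append({"client": client, "correction": correction})
        self.path.write_text(json.dumps(self.entries, indent=2))

    def context_for(self, client: str) -> list[str]:
        return [e["correction"] for e in self.entries if e["client"] == client]


def build_prompt(task: str, client: str, store: FeedbackStore | None = None) -> str:
    """A static tool sends only the task; a learning tool prepends what it knows."""
    lines = [f"Task: {task}"]
    if store is not None:
        for note in store.context_for(client):
            lines.append(f"Known preference: {note}")
    return "\n".join(lines)


# Static deployment: the same input tomorrow yields the same generic prompt.
print(build_prompt("Draft the quarterly client summary", client="Acme"))

# Learning deployment: a correction made today shapes every later session.
store = FeedbackStore()
store.record("Acme", "Use calendar quarters, not fiscal quarters")
print(build_prompt("Draft the quarterly client summary", client="Acme", store=store))
```

The first call is the static behavior the report describes; the second is what the executive quoted above is asking for.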
The second major finding in the MIT research has direct implications for how enterprises should approach AI strategy. Companies that purchase AI tools from specialized vendors succeed roughly sixty-seven percent of the time. Companies that build their own internal AI tools succeed only about a third of the time.
That is a two-to-one advantage for buying over building, and it runs directly counter to the instinct many enterprises have when they start their AI journey. Aditya Challapally, the report's lead author, put it plainly in an interview: "Almost everywhere we went, enterprises were trying to build their own tool, but the data showed purchased solutions delivered more reliable results."
This finding is particularly relevant in financial services, healthcare, and other regulated sectors where firms are building proprietary generative AI systems in 2025. The intent is usually sound — protect sensitive data, maintain control, avoid vendor lock-in. But the track record on internal builds is worse than the instinct suggests, and the gap comes back to the learning problem. External vendors focused on specific workflows tend to do the unglamorous work of memory, adaptation, and integration more rigorously than internal teams do.
The third finding worth studying is where AI budgets are actually being spent versus where returns are actually being generated. More than fifty percent of AI budgets are going to visible front-office functions like sales and marketing. The higher ROI is consistently showing up in back-office automation — finance operations, procurement, IT service management, internal research, compliance support.
The mismatch is understandable. Sales and marketing AI is what executives see in demos. Back-office AI is boring. But the economics favor boring. A finance operations agent that handles vendor invoice reconciliation produces measurable savings. A marketing agent that generates social media copy produces content that is hard to distinguish from what a human could have produced in the same time.
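As a toy illustration of why that back-office work is so measurable, invoice reconciliation boils down to a rules-based match with a countable outcome. The vendors, amounts, and tolerance below are made up, and a real deployment would read from an ERP system rather than hard-coded records.

```python
# Illustrative toy example: the kind of rules-based check a finance-operations
# agent automates. Data and tolerance values are invented for this sketch.
invoices = [
    {"id": "INV-1001", "vendor": "Acme Supply", "amount": 4200.00, "po": "PO-88"},
    {"id": "INV-1002", "vendor": "Acme Supply", "amount": 4350.00, "po": "PO-89"},
]
purchase_orders = {"PO-88": 4200.00, "PO-89": 4100.00}


def reconcile(invoice: dict, pos: dict, tolerance: float = 0.01) -> str:
    """Match an invoice to its purchase order and flag amount mismatches."""
    expected = pos.get(invoice["po"])
    if expected is None:
        return "exception: no matching purchase order"
    if abs(invoice["amount"] - expected) <= tolerance:
        return "matched"
    return f"exception: billed {invoice['amount']:.2f}, expected {expected:.2f}"


for inv in invoices:
    print(inv["id"], "->", reconcile(inv, purchase_orders))
# INV-1001 -> matched
# INV-1002 -> exception: billed 4350.00, expected 4100.00
```

Every match is a manual touch avoided and every exception is a caught discrepancy, both of which can be counted, which is why this kind of automation shows up on the P&L in a way generated marketing copy rarely does.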
The MIT findings, combined with the operational reality of agentic AI deployment, point to a clear set of strategic implications.
Buy before you build. Unless you have a workflow so specific and so central to your competitive differentiation that no vendor can serve it, start with purchased solutions. The math on build-vs-buy favors buying by a factor of two, and purchased solutions typically include the memory and adaptation capabilities that internal builds often skip.
Invest in the learning gap. The vendors and solutions that will produce real returns are the ones whose systems retain feedback, adapt to context, and improve over time. When evaluating tools, ask specifically how the system learns. If the answer is vague or if memory is a premium add-on, you are looking at a tool that will join the ninety-five percent.
Target back-office first. The ROI is higher, the risk is lower, and the operational learning you generate will be more valuable than the visibility of a front-office deployment. The finance, procurement, IT, and internal research functions in your organization are full of high-value, repetitive, rules-based work that agentic AI handles well.
Empower line managers, not just central AI teams. The MIT research found that organizations where line managers drive adoption succeed more often than organizations where a central AI lab dictates implementation. The people who know what actually needs to change are closer to the work than the central team usually is.
Shift from buyer to partner. The report described successful buyers as acting like BPO clients, not SaaS shoppers. Demand customization. Drive adoption from the bottom up. Focus on operational metrics. The vendors who cross the divide are the ones who embed in your workflows; treat them as partners, not as tools.
The MIT NANDA report is not a reason to pause AI investment. It is a reason to be more deliberate about how you invest. The five percent of organizations that are translating AI into real business impact are doing so by addressing the learning gap, buying from specialists, targeting back-office workflows, and empowering the managers closest to the work.
The next eighteen months will sort out which organizations emerge as AI winners and which find themselves stuck explaining why their pilots never made it to production. The decisions that will determine which side of that divide you land on are being made right now, in budget meetings and vendor evaluations and governance discussions. The research is clear about what separates the two sides. The only question is whether you use it.