Enterprise AI training programs fail at high rates despite significant investment. Research and practitioner experience consistently identify five structural causes: L&D teams lack organizational credibility to lead AI transformation, companies default to generalized training rather than role-specific activation, AI adoption requires infrastructure changes before individual behavior changes, organizations rely on enthusiast champions rather than pragmatic adopters, and L&D professionals are skilled at teaching methodology but lack domain-specific AI knowledge. The common root cause across all five failures is an architectural mismatch: organizations apply enablement thinking (preparation-based, one-size-fits-all, delivered before the moment of work) to a problem that requires activation thinking (contextual, role-specific, delivered in the moment of work). The distinction between enablement and revenue activation is the structural key to understanding why some organizations succeed at AI adoption while most stall.
The Diagnosis That Stopped One Layer Too Early
Ben Erez recently published a conversation with Gagan Biyani, CEO of Maven, about three months Biyani spent exploring whether Maven could sell into enterprise L&D budgets for AI training. The findings were striking, not because AI training is failing (most practitioners already know that), but because Biyani articulated the structural reasons with unusual clarity.
Five structural failures. Every one of them is real. And every one of them is a symptom of the same underlying condition.
Priya Mathew Badger, a Principal PM at Yelp who leads their AI upskilling initiative, added something in the comments that sharpened the picture further. She described a model that actually works: C-level sponsorship creating organizational infrastructure, partnering with L&D through existing channels rather than creating new destinations, and focusing on specific use cases rather than generalized competency.
Read the two together and a pattern emerges. The failures are all preparation failures - trying to get people ready for AI before they use it. The success is an activation pattern - building infrastructure that supports AI use in the moment of work.
I’ve Seen This Pattern Before
In 2013, I co-founded Gainsight and we named something that didn’t have a name yet: Customer Success.
Before that, it was called customer support. Support had a credibility crisis. The training was generic. The infrastructure didn’t exist. Support teams were blamed for churn but not empowered to prevent it. When boards asked “Why are customers leaving?” the answer was always some version of “We need better support.”
The answer was never better support. It was a different architecture entirely. Customer support was reactive: wait for problems, then respond. Customer success was proactive: detect signals, intervene before the problem. That shift, from reaction to anticipation and from preparation to activation, created a category, produced over 100,000 new job titles, and generated billions in enterprise value.
AI adoption in the enterprise is hitting the same wall. And the same architectural shift is available.
Five Symptoms, One Architecture Problem
Symptom 1: L&D’s credibility crisis.
L&D teams built their operating model around preparation: design curriculum, deliver training, measure completion. That model works when the knowledge is stable and the application is predictable. AI is neither. The credibility crisis isn’t reputational, it’s architectural. L&D is structured for enablement (preparing people before the work) in an era that requires activation (supporting people during the work). The gap isn’t trust. It’s the operating model.
Symptom 2: Generalized “learn ChatGPT” training.
Biyani’s comparison to “internet training in 2005” is precise. The generalization instinct comes from storage architecture thinking applied to learning: create one asset, distribute broadly, hope it fits. But AI use is contextual by nature. Every PM, every sales rep, every designer applies AI differently based on their specific tool chain, team, and workflow. Generic training is content management masquerading as capability building. The fix isn’t better training design. It’s contextual activation in the flow of work that meets each person in their specific workflow.
Symptom 3: AI adoption requires system-level change first.
This is the most important of the five, and Biyani provides the clearest example: Maven’s head of design realized the entire design system needed to be rebuilt to be legible to AI. That took four to six months and required senior judgment to diagnose. That’s an infrastructure investment, not a training investment. You cannot activate people on broken architecture. The system has to change before the individuals can. Every organization that has successfully adopted AI at scale has made this infrastructure move first - whether they called it that or not.
Symptom 4: The wrong champions are driving adoption.
Tinkerers versus pragmatists is a knowing-doing gap. Tinkerers know AI is powerful. They get excited by demos. They experiment constantly. But they can’t translate their experience into repeatable workflows for the broader organization. Pragmatists don’t care about the technology, they care about whether it makes their work better. The bridge between those two populations isn’t a training program. It’s infrastructure that embeds AI into the pragmatist’s existing workflow so seamlessly that adoption becomes invisible. They don’t “learn AI.” They just work, and AI is there.
Symptom 5: L&D knows how to teach but not what to teach.
Because “what to teach” changes weekly. New tools, new capabilities, new use cases. L&D professionals are experts in instructional design, adult learning theory, and curriculum development. They are not expected to be ahead of the curve on every AI application across every function. The expectation is structurally impossible. The solution isn’t to make L&D teams into AI experts. It’s to shift the model from teaching (static, pre-work) to activation (dynamic, in-work). When the system surfaces the right capability at the right moment in the right workflow, the “what to teach” problem dissolves. The system activates. The person learns by doing.
The Pattern That Keeps Repeating
Customer support had a credibility crisis. We renamed it Customer Success and rebuilt the architecture. Today there are over 100,000 people with that title.
Sales enablement has a credibility crisis. Enablement leaders can’t prove causation between their programs and revenue outcomes. Content goes unused. Training doesn’t transfer. Reps ignore portals. The diagnosis is the same: an enablement architecture (prepare people before the work) applied to a problem that requires an activation architecture (support people during the work).
L&D and AI adoption are hitting the same wall. The same architectural mismatch. The same gap between preparation and activation.
Biyani’s conclusion that successful AI adoption requires a C-level sponsor who takes personal ownership is correct. But it’s correct for a specific reason: the C-level sponsor provides the organizational authority to make infrastructure changes. Training doesn’t require C-level ownership. Infrastructure does.
Priya’s model at Yelp works for the same reason. She has CPO-level sponsorship (infrastructure authority), L&D partnership through existing channels (in-flow, not new destinations), and use-case specificity (contextual, not generic). That’s activation architecture operating inside a learning function, whether the organization calls it that or not.
From Enablement to Activation
The shift is the same one playing out across multiple functions simultaneously.

The pattern is structural, not coincidental. Every function built on a preparation model is hitting the same ceiling as the pace of change accelerates. When the environment changes faster than the training cycle, preparation collapses. Activation doesn’t. Because activation operates at the speed of work, not the speed of curriculum. That’s why the learning layer must connect directly to execution.
“The companies winning at AI adoption aren’t doing better training. They’re building infrastructure that activates capability in the moment of work. That’s not an L&D problem. That’s an architecture problem. And architecture problems have architecture solutions.” - Sreedhar Peddineni