
Sarah had done everything by the book. As VP of Operations at a 75-person manufacturing software company, she'd gotten executive buy-in, allocated budget, selected the right tools, and sent a company-wide email announcing their AI transformation initiative. She'd even organized mandatory training sessions. Three months later, adoption sat at 11%. The AI tools she'd carefully selected gathered digital dust while her team continued their old workflows, finding creative ways to avoid the very technology meant to help them.
What Sarah didn't realize was that she'd fallen into the most common trap in organizational change: believing that authority plus announcement equals adoption. It doesn't. And she's not alone. Salesforce—with its vast resources, market dominance, and dedicated implementation teams—launched Agentforce with massive fanfare and even removed traditional search functionality from its help pages, effectively mandating AI agent use. The result? Only about 8% of its customer base has adopted Agentforce after a year, despite the forced rollout. CEO Marc Benioff acknowledged a "bifurcation" between rapid consumer AI adoption and slower enterprise adoption, admitting "this technology innovation is outstripping customer adoption".
If a billion-dollar software giant can't command AI adoption into existence, what makes mid-market leaders think they can?
Particularly with AI, where fear, skepticism, and genuine uncertainty create powerful resistance, the traditional top-down mandate approach doesn't just fail to drive adoption—it actively undermines it. The mid-market companies that are succeeding with AI adoption aren't doing it through executive decree. They're doing it through a fundamentally different approach: identifying natural champions, empowering grassroots experimentation within guardrails, and letting organizational change emerge from demonstrated value rather than mandated compliance. This isn't about abandoning leadership or structure. It's about understanding that AI adoption is as much a cultural transformation as a technological one, and culture cannot be commanded into existence.
The Fundamental Problem with Top-Down AI Mandates
When executives mandate AI adoption, they typically do so from a position of conviction about AI's value. They've read the research, seen the demonstrations, perhaps experimented with the tools themselves. From their vantage point, the path forward seems obvious. But this clarity creates a dangerous blind spot: they forget that adoption requires more than intellectual understanding of value. It requires emotional willingness to change, practical confidence in new skills, and social proof that the change is safe and worthwhile.
The Salesforce Agentforce situation provides a masterclass in how mandate-driven adoption fails at scale. When Salesforce removed the traditional search bar from its help pages and replaced it with an AI-powered assistant, complaints quickly emerged from users reporting that Agentforce was often slower to deliver what index-based search used to serve up instantly. Critics argued that the decision appeared to prioritize AI adoption metrics over user experience, with some believing it was "designed to inflate usage stats for Agentforce to impress shareholders, rather than address real customer needs". The mandate triggered exactly the resistance patterns that undermine genuine adoption.
Top-down mandates trigger several predictable resistance patterns in organizations. The first is what organizational psychologists call "reactance"—the psychological phenomenon where people resist directives that threaten their autonomy. When told they must use AI tools, many employees experience this as a threat to their professional autonomy and judgment. Their resistance isn't necessarily about AI itself; it's about the loss of control over how they do their work. Salesforce customers experienced this acutely when their familiar search functionality was removed without choice or transition period.
The second resistance pattern emerges from legitimate fear and uncertainty. Unlike previous technology adoptions, AI feels different to many workers. It's not just a new tool in their existing workflow; it's a technology that can potentially replicate aspects of their cognitive work. Many employees worry that AI agents could diminish their role or even replace them, seeing the tool as a threat rather than a productivity enhancer. When adoption is mandated rather than invited, these fears have no safe outlet for expression and resolution. Instead, they manifest as passive resistance, surface-level compliance without genuine adoption, and quiet skepticism shared in hallway conversations but never voiced in official meetings.
The third pattern is perhaps most damaging: mandated adoption eliminates the discovery process that builds genuine capability. When people explore AI tools voluntarily, they stumble upon use cases that matter to their specific work. They develop personal techniques, discover limitations and workarounds, and build intuition about when and how to use these tools effectively. When adoption is mandated, this exploration is replaced with compliance—people do the minimum required rather than discovering the maximum possible.
Mid-market companies face a particular challenge here. Unlike large enterprises with dedicated change management teams and extensive training resources, mid-market organizations typically have leaner structures where everyone wears multiple hats. When AI adoption is mandated from above, there's often insufficient support infrastructure to help people succeed, leading to frustration and reinforcing skepticism about the entire initiative.
Understanding the Three Types of Employees in AI Transformation
Successful AI adoption requires understanding that your organization contains three distinct groups when it comes to new technology, and each requires a fundamentally different approach. Trying to treat everyone the same—whether through universal mandates or uniform training—ignores the psychological and practical realities of how people engage with transformational change.
The first group is the Champions. These are typically 10-15% of your organization, and they're characterized by natural curiosity about AI combined with relatively low fear of change. They're not necessarily your tech-savvy employees, though some may be. What defines them is psychological openness to experimentation and enough security in their core competencies that learning new tools feels like expansion rather than replacement. Champions have often already started experimenting with AI tools on their own, sometimes using personal accounts before any organizational initiative began.
The mistake many leaders make is assuming Champions will naturally drive adoption throughout the organization. They won't—at least not without intentional cultivation. Left to their own devices, Champions often become isolated enthusiasts, developing impressive personal capabilities that never transfer to their colleagues. Or worse, their enthusiasm can become evangelical, creating resistance among fence-sitters and skeptics who feel judged or pressured.
The second group is the Fence-Sitters, representing roughly 60-70% of most organizations. These employees are neither enthusiastic early adopters nor active resisters. They're waiting for proof that AI adoption is worth the investment of time and mental energy required to develop new capabilities. They need evidence that comes from people they trust, doing work they recognize, achieving outcomes they value. Mandates don't move Fence-Sitters toward adoption; they move them toward compliance theater—the appearance of adoption without genuine integration into their workflows.
What Fence-Sitters need most is social proof from peers, not directives from executives. They need to see someone like them—with similar roles, similar workloads, similar constraints—using AI successfully. They need permission to experiment without fear of failure, and they need time to build capability at their own pace. Most importantly, they need to maintain their professional identity while adopting new tools. The framing cannot be "replace your old methods with AI" but rather "enhance what you already do well with additional capabilities."
The third group is the Resisters, typically 15-20% of the organization. It's tempting to view them as obstacles to progress, but that perspective misses their value. Resisters often have legitimate concerns that Champions and Fence-Sitters share but don't articulate. They worry about data privacy, accuracy, over-reliance on technology, or the impact on human skills. They may have had negative experiences with previous technology mandates that promised transformation but delivered disruption without clear benefits.
The common mistake is trying to convert Resisters into Champions through persuasion or pressure. This rarely works and often backfires, strengthening resistance and making Fence-Sitters nervous about their own hesitations. Instead, successful AI adoption strategies engage Resisters in a different way: as critical evaluators whose skepticism helps identify real problems and prevent rushed implementation mistakes.
When a Resister raises concerns about AI accuracy, the response shouldn't be defensive reassurance but rather engagement: "You're right that accuracy matters. Help us develop validation protocols." When they worry about over-reliance on AI, the response isn't dismissal but incorporation: "That's a valid concern. What guardrails would make you comfortable?" This approach doesn't aim to convert Resisters into enthusiasts. It aims to convert their resistance into contribution, making them stakeholders in solving the problems they identify.
The Champion Cultivation Approach: Building Grassroots Momentum
The alternative to top-down mandates is what we call Champion Cultivation—a structured but bottom-up approach that identifies natural advocates, empowers their exploration within clear boundaries, and creates mechanisms for their learnings to spread organically throughout the organization. This isn't about abandoning leadership or structure. Leadership remains essential, but its role shifts from mandate-giver to environment-creator.
The first step is identifying your natural Champions. These aren't necessarily the people you'd expect. They're not always your youngest employees or your most technically proficient. They're the people who exhibit three characteristics: curiosity about AI regardless of whether they've formally explored it, willingness to experiment with new approaches, and enough social capital within the organization that others watch what they do. This last characteristic is crucial. A Champion without social capital may develop impressive capabilities, but those capabilities won't spread beyond their own workflow.
To identify Champions, leadership should issue an invitation rather than a mandate. Something like: "We're exploring how AI tools might enhance our capabilities. If you're curious and want to experiment with these tools in your work, let us know." This invitation-based approach immediately distinguishes Champions (who will raise their hands) from Fence-Sitters (who will wait and watch) and Resisters (who will voice concerns or remain silent).
Once Champions are identified, the next step is structured empowerment. This means providing them with resources, time, and clear boundaries for experimentation. Resources might include access to premium AI tools, time allocated for exploration and learning, and connection to external learning resources. But equally important are the boundaries: clear guidance about data that should not be shared with AI systems, use cases that require human oversight, and expectations about documenting their learnings.
This is where many Champion-based approaches fail. Without structure, Champions experiment in isolation, developing idiosyncratic approaches that work for them but don't transfer to others. They may inadvertently create data security risks by sharing sensitive information with AI tools, or develop over-reliance on AI outputs without appropriate validation. The goal isn't unleashing Champions to do whatever they want; it's creating structured exploration where Champions develop capabilities that can scale beyond their individual practice.
The critical next step is creating knowledge transfer mechanisms. Champions need regular forums to share what they're learning with Fence-Sitters. These shouldn't be formal training sessions, which feel like the mandated approach most organizations already resist. Instead, they should be informal sharing sessions where Champions demonstrate specific use cases, discuss what worked and what didn't, and answer questions in a peer-to-peer context. The framing is crucial: "Here's what I've been experimenting with" rather than "Here's what you should do."
These sharing sessions serve multiple purposes simultaneously. For Champions, they create accountability and reflection, forcing them to articulate what they've learned in ways others can understand. For Fence-Sitters, they provide the social proof and peer modeling they need to begin their own exploration. For Resisters, they make risks and limitations visible—Champions who honestly discuss failures and constraints actually reduce resistance by demonstrating thoughtfulness rather than blind enthusiasm.
Perhaps most importantly, these sessions create what organizational change experts call "positive deviance"—examples of people achieving better outcomes through different approaches. When Fence-Sitters see a peer saving three hours per week on research synthesis, or producing higher quality first drafts, or managing their email more effectively, it triggers a different psychological response than hearing about AI's potential in the abstract. It moves from "This might be valuable" to "This is valuable for someone like me."
Building Psychological Safety for Experimentation
One of the most overlooked aspects of successful AI adoption is the role of psychological safety—the organizational climate where people feel safe to experiment, fail, ask questions, and admit uncertainty without fear of judgment or negative consequences. Without psychological safety, AI adoption initiatives stall regardless of how good the tools are or how enthusiastic the Champions might be.
Psychological safety in AI adoption means creating explicit permission for several behaviors that traditional organizational culture often discourages. First, it means permission to admit ignorance. Many employees resist AI adoption not because they're opposed to the technology but because they don't understand it and feel embarrassed by their lack of knowledge. In organizations without psychological safety, this ignorance remains hidden, preventing the questions and exploration that build understanding.
Creating permission to admit ignorance requires leadership modeling. When executives and managers openly discuss their own learning process with AI—including confusions, mistakes, and uncertainties—it normalizes the learning curve for everyone else. When a VP shares in a company meeting that they initially struggled to write effective prompts, or that they got nonsensical outputs until they learned to provide better context, it signals that learning AI is a legitimate process rather than something people should already know.
Second, psychological safety means permission to fail experimentally. AI tools don't always work as expected. Prompts that seemed reasonable produce unhelpful outputs. AI-generated first drafts sometimes need complete rewrites rather than light editing. Experimental approaches to using AI in specific workflows sometimes make work slower rather than faster. In psychologically unsafe environments, these failures become ammunition against AI adoption: "See, I tried it and it didn't work." In psychologically safe environments, they become learning opportunities: "I tried this approach and it didn't work, but here's what I learned about why."
Creating permission to fail requires explicit framing from leadership. AI experimentation should be positioned as just that—experimentation, where the goal is learning rather than immediate productivity gains. Champions need explicit protection from criticism when their experiments don't pan out. When someone tries to use AI for a task and it takes longer than their traditional approach, the response shouldn't be "Why did you waste time on that?" but rather "What did you learn about where AI works and where it doesn't?"
Third, psychological safety means permission to maintain boundaries with AI adoption. Not every task is improved by AI. Not every person will use AI tools in the same way or to the same extent. Psychological safety means people feel comfortable saying "I experimented with using AI for this task and decided my traditional approach works better" without being viewed as resistant or behind the curve. It means acknowledging that adoption doesn't mean universal, uniform use of AI for everything.
This boundary-setting is particularly important for preventing the "AI evangelism" problem, where enthusiastic Champions inadvertently create pressure that undermines psychological safety. When Champions suggest AI solutions for every problem, Fence-Sitters and Resisters may feel judged for their slower or more selective adoption. Leadership needs to actively counterbalance this by celebrating thoughtful non-adoption as much as enthusiastic adoption: "Great judgment in recognizing that AI wasn't the right tool for this task."
The 90-Day Champion Cultivation Process
Translating these principles into practice requires a structured but flexible implementation timeline. The 90-day framework provides enough time for genuine capability development and organic spread while maintaining momentum and preventing the initiative from becoming another forgotten priority.
Days 1-15 focus on foundation and recruitment. Leadership articulates the organization's AI exploration initiative, emphasizing invitation over mandate. The communication should acknowledge uncertainty, normalize learning curves, and establish clear boundaries around data security and appropriate use. Simultaneously, leadership identifies and recruits Champions through self-selection, looking for the combination of curiosity, willingness to experiment, and social capital within the organization.
During this period, leadership also establishes the support infrastructure Champions will need. This includes securing access to AI tools, creating documentation about data security guidelines, establishing regular Champion meeting times, and allocating time in Champions' schedules for exploration. Critically, leadership should also communicate with managers to ensure Champions aren't penalized for spending time on AI experimentation rather than immediate productivity.
Days 16-45 focus on structured exploration. Champions begin experimenting with AI tools in their actual work, documenting both successes and failures. They meet regularly—perhaps weekly—to share experiences, problem-solve challenges, and develop collective understanding. These meetings should be facilitated to ensure they remain productive, focusing on specific use cases, concrete examples, and honest discussion of what works and what doesn't.
During this exploration phase, Champions should be encouraged to start small rather than attempting major workflow overhauls. The goal is to identify quick wins—tasks where AI creates clear value without requiring extensive process redesign. These might include research synthesis, first draft generation, data analysis, meeting summaries, or email composition. Quick wins serve two purposes: they build Champions' confidence and skill, and they create compelling demonstrations for Fence-Sitters.
Leadership's role during this phase is active support and protection. When Champions encounter obstacles—whether technical challenges, time constraints, or skepticism from colleagues—leadership needs to help problem-solve and provide air cover. When Champions make mistakes or have failed experiments, leadership needs to reinforce that this is expected and valuable learning. The goal is to prevent Champions from burning out or becoming discouraged during the natural difficulties of the learning curve.
Days 46-75 focus on demonstration and organic spread. Champions begin sharing their learnings with the broader organization through informal demonstrations, documentation of use cases, or lunch-and-learn sessions. The framing of these sharing sessions is crucial: they should feel like peer sharing rather than training, with emphasis on "here's what I've been trying" rather than "here's what you should do."
During this phase, some Fence-Sitters will naturally begin their own exploration, often approaching Champions informally for guidance. This peer-to-peer teaching is exactly the organic spread the approach aims to create. Leadership should support it by ensuring Champions have time for these informal mentoring conversations and by creating spaces where people can ask questions without feeling they should already know the answers.
This is also when Resisters' concerns become most valuable. As Champions demonstrate AI use cases, Resisters will likely identify legitimate problems—use cases where AI quality is insufficient, tasks where human judgment is being inappropriately replaced, or workflows where AI integration creates more complexity than value. Rather than dismissing these concerns, leadership should engage Resisters in solving them, potentially appointing them to develop validation protocols, quality standards, or use case guidelines.
Days 76-90 focus on consolidation and planning. Champions reflect on what they've learned and begin developing best practices that can guide others. Leadership assesses which use cases are showing clear value, which ones need more exploration, and which ones should be abandoned. The organization begins transitioning from experimentation to more systematic adoption in areas where value is demonstrated, while maintaining space for continued exploration in areas of uncertainty.
During this phase, leadership should also identify second-wave Champions—Fence-Sitters who have begun their own successful experimentation and are ready to take on more active roles in supporting others. This expansion of the Champion network is crucial for scaling adoption beyond the initial enthusiasts. The 90-day mark isn't an endpoint but a transition point, where lessons from the initial phase inform more structured ongoing adoption.
From Experimentation to Sustainable Practice
The transition from experimental Champion cultivation to sustainable organizational practice is where many AI adoption initiatives stumble. The experimental phase generates enthusiasm and initial wins, but without thoughtful transition planning, that momentum can dissipate. Converting grassroots experimentation into embedded practice requires several key elements.
First, it requires codifying the knowledge Champions have developed. This doesn't mean creating extensive training manuals that no one will read, but rather developing practical resources that help people get started quickly. This might include prompt templates for common use cases, decision frameworks for when to use AI versus traditional approaches, or quick-start guides for the specific AI tools the organization has found most valuable. These resources should reflect Champions' actual experiences rather than idealized best practices, including honest discussion of limitations and common mistakes.
Second, it requires integrating AI capabilities into formal processes where appropriate. When Champions have demonstrated clear value in specific workflows—perhaps using AI for meeting summaries, research synthesis, or first draft generation—leadership should consider how to systematize these approaches so they become standard practice rather than individual innovations. This doesn't mean mandating use, but rather making it the path of least resistance for those who are ready to adopt.
Third, it requires ongoing support infrastructure. Even after initial adoption, people need resources when they encounter new use cases or challenges. This might mean maintaining regular office hours where Champions are available for questions, creating internal communication channels where people can share tips and solve problems together, or developing a rotating schedule where different Champions demonstrate new techniques or use cases periodically.
Fourth, it requires measurement and celebration that reinforces desired behaviors. Organizations should track not just adoption metrics—how many people are using AI tools—but also capability metrics: how effectively people are using them, what value they're creating, what new use cases they're discovering. Equally important is celebrating thoughtful non-adoption, when people experiment and decide an AI approach isn't appropriate for their specific context. This celebration prevents the initiative from devolving back into mandate-thinking.
Finally, sustainable practice requires continuous refresh. AI capabilities are evolving rapidly, and organizational needs change over time. The Champion network should remain active not just to support ongoing adoption but to continue exploring new capabilities, testing new tools, and identifying emerging use cases. What worked six months ago may be superseded by new approaches, and the organization needs mechanisms to incorporate these innovations without requiring top-down reorganization of processes.
When Resistance Becomes Contribution
Perhaps the most counterintuitive aspect of successful AI adoption is learning to value resistance rather than overcome it. In most organizational change initiatives, resisters are viewed as obstacles—people to be convinced, worked around, or in extreme cases, pressured into compliance. In AI adoption, this approach misses the genuine value that thoughtful resistance provides.
Resisters often identify real problems that Champions, in their enthusiasm, overlook or minimize. They notice when AI outputs contain subtle errors that could be dangerous if undetected. They recognize when AI integration makes workflows more complex rather than simpler. They identify tasks where human judgment, experience, or relationship-building can't be effectively replaced by AI capabilities. These aren't obstructions to progress; they're essential feedback for developing sustainable, appropriate AI adoption.
The key is creating channels where resistance can be expressed constructively rather than festering as underground skepticism. This might mean appointing thoughtful Resisters to quality assurance roles, where their skepticism becomes systematic validation. It might mean engaging them in developing use case guidelines that specify when AI should and shouldn't be used. It might mean involving them in prompt engineering, where their critical thinking helps identify ways AI outputs might be misleading or insufficient.
When resistance is engaged rather than dismissed, several valuable outcomes emerge. First, it improves the quality of AI adoption by preventing over-reliance, inappropriate use cases, and uncritical acceptance of AI outputs. Second, it increases the legitimacy of the adoption initiative by demonstrating that leadership values thoughtful critique rather than just compliance. Third, it often converts Resisters from obstacles into stakeholders, as they find meaningful roles in shaping how AI is used rather than just opposing whether it's used.
Most importantly, engaging resistance reinforces psychological safety for the broader organization. When Fence-Sitters see that concerns are taken seriously rather than dismissed, they feel safer expressing their own uncertainties and questions. This creates the learning environment where genuine capability development happens, as opposed to the compliance environment where people pretend to adopt while harboring private doubts.
The Leadership Role in Champion-Driven Change
While Champion cultivation is fundamentally a bottom-up approach, leadership's role is essential—it's just different from the traditional mandate-and-enforce model. Leadership in Champion-driven change is about environment creation, strategic support, and cultural reinforcement rather than about directing specific behaviors.
Leadership must create and maintain psychological safety, which requires consistent messaging and modeling. When mistakes happen—and they will—leadership's response sets the tone for the entire organization. If mistakes trigger blame or criticism, experimentation will shut down. If mistakes trigger curiosity about learning, experimentation will continue. Leadership must also protect Champions from skepticism or criticism, ensuring they have space and support to experiment even when immediate results aren't apparent.
Leadership must also provide strategic direction without micromanagement. This means articulating where AI exploration is most valuable to organizational priorities, establishing clear boundaries around data security and appropriate use, and making decisions about resource allocation. But it doesn't mean prescribing specific use cases or mandating particular approaches. The Champion network needs enough autonomy to discover unexpected value while having enough guidance to ensure their exploration aligns with organizational needs.
Perhaps most importantly, leadership must actively counterbalance the natural tendency toward mandate-thinking that emerges as AI adoption begins showing value. When Champions demonstrate compelling use cases, there's often organizational pressure to mandate those approaches across relevant functions. Leadership must resist this pressure, recognizing that mandated adoption of even proven use cases can trigger resistance and undermine the psychological safety that enabled the discovery in the first place. The path forward is continued invitation and demonstration rather than transition to mandate.
Conclusion: Change as Emergence Rather Than Imposition
The fundamental insight of Champion cultivation is that organizational change—particularly change as significant as AI adoption—cannot be imposed through authority. It must emerge through demonstrated value, peer influence, and voluntary adoption. This doesn't mean leadership is passive or that structure is absent. It means leadership's role is creating conditions where beneficial change can emerge and spread organically rather than attempting to command change into existence.
For mid-market companies, this approach is particularly valuable because it aligns with their structural realities. These organizations typically lack the extensive change management infrastructure of large enterprises, but they have closer peer relationships and more visible demonstrations of value. When one person in a 75-person company successfully adopts AI tools, many others notice. This visibility can work against adoption when it's mandated and fails, but it works powerfully for adoption when it's voluntary and succeeds.
The Champion cultivation approach requires patience that feels counterintuitive in a moment of rapid AI advancement. When new capabilities emerge monthly and competitive pressure feels intense, the deliberate, organic pace of Champion-driven change can feel too slow. But rushed mandates that create surface compliance without genuine capability development are ultimately slower, as organizations must return to rebuild foundations that were never properly established.
The Salesforce Agentforce experience serves as a cautionary tale: even with unlimited resources, technical excellence, and captive customers, forced adoption creates compliance theater rather than genuine transformation. Salesforce realized they needed to create "forward-deployed engineers" to work directly with customers, essentially shifting from mandate to hand-holding—acknowledging that "enterprises will need Salesforce to hold their hand more than was true with its traditional SaaS products". The lesson is clear: AI adoption requires support, invitation, and demonstrated value, not authority and announcement.
Three months into her AI initiative, Sarah made a crucial shift. Instead of doubling down on mandates, she identified five natural Champions, gave them time and resources to explore, and created monthly sharing sessions where they demonstrated their learnings. Six months later, adoption had grown to 68%—not through compliance but through voluntary uptake by Fence-Sitters who saw peers achieving real value. The Resisters hadn't become enthusiasts, but they had become contributors, helping develop quality standards that made AI use more rigorous and appropriate. The change hadn't been imposed. It had emerged. And that emergence proved far more sustainable than any mandate could have been.