There was a certain irony to a recurring theme of Charter’s recent Leading with AI summits: The firms leading the way on AI adoption have been rapidly moving away from, well, measuring adoption.
Microsoft’s Katy George captured the shift: “We used to pay attention to adoption. Now we just pay attention to performance.” More widespread organizational use of AI isn’t the same as business impact, and impact comes in many forms—productivity, yes, but growth and innovation, too.
Yet if “97% adoption rates mean almost nothing,” as Zapier chief people officer Brandon Sammut said at our San Francisco summit, what should we be measuring instead?
Last week, I brought together practitioners from four tech companies in a Charter Forum session to try to answer exactly that question—and others. Our conversation, conducted under the Chatham House Rule but excerpted below with permission, covered frameworks, failures, and a number of challenges. Here’s what stood out.
Adoption is a means, not the end
Every company in the room had gone through the same early experience: high AI usage numbers, but shallow results.
“The early AI ROI market was full of ‘saved 30 minutes’ stats,” said Ben Ostrowski, Atlassian’s head of AI go-to-market. “Most of the time, that was just reinvested back into admin tasks or correcting AI output.” Recent research from software firm ActivTrak found something similar: Time on email, messaging, and admin tools more than doubled among AI users, while time for focused, uninterrupted work fell 9%.
The mental shift the group converged on is that adoption is a proxy, not a destination. “We never said, ‘once everyone uses a computer, we’ve solved technology,’” Ostrowski said.
Colin Monaghan from Zapier was more specific: “Getting to 97% AI adoption took real effort, and it matters. But it’s only a starting point. What matters more is what people do with it, whether it’s actually making the business better.”
And Udemy’s Madhavi Bhasin described the inflection point she’s reaching: “We are at the end of the ‘walk’ [part]. The ‘run’ is where it gets integrated in how we work.”
Leaders have to go first, visibly
All four companies said visible executive role-modeling was the most reliable way to get momentum on adoption. Training programs and mandates only do so much. Leaders actually using the tools—in front of their teams—are the real spark.
Karen Fascenda, Udemy’s chief people officer, described taking her leadership team offsite with one requirement: Everyone had to build an AI assistant. “I didn’t even know how to use AI at the time—my general counsel was showing me,” she said. “Now I can’t imagine not using it throughout the day.”
At Atlassian, a senior product leader built a Rovo agent (Atlassian’s AI agent product) live in a full-company meeting. That single demo nearly doubled AI usage across that leader’s team afterward, according to Ostrowski.
When senior leaders model that they’re still learning, they remove the social risk for everyone else to do the same. That permission structure is worth more than any training rollout.
AI ‘fluency’ is becoming a hiring and performance expectation
The clearest dividing line between the companies on the call and most of their peers? AI “fluency”—knowing where AI creates leverage, how to prompt it well, and what quality output looks like—has moved from a training initiative to a hiring and performance requirement.
Zapier assesses AI fluency at four points in the hiring process (application, recruiter screen, skills assessment, and executive interview), using a consistent rubric and making coaching available to borderline candidates. They’re not looking for people to pass a test, but to show aptitude for learning.
“We want to see the curve, not the snapshot,” said Zapier global head of talent Tracy St. Dic. “We want to understand how people learn, grow, and get curious even in the four to six weeks of our hiring process.”
They’ve also folded AI experimentation into an existing performance behavior rather than separating it out, which maintains accountability without killing the experimentation culture they spent two years building.
Upwork is taking a structural approach, mandating one AI-focused goal for all employees in the first half of 2026.
Cassie Veres, a senior director of talent development and learning at Upwork, was candid about a prior misstep. “In 2024 we tried to integrate AI fluency into our performance expectations,” she said. The company pulled back for fear that people would stop experimenting, but it has since embedded AI expectations into its promotion criteria.
A quality risk worth watching
The companies making progress have named the tradeoff explicitly rather than leaving it implicit. “There is a key balance that needs to be struck,” said Atlassian’s Ostrowski. “Freedom and encouragement to experiment, and a clear definition of quality. Lean too hard on the former, you get slop. Too hard on the latter, and no one experiments.”
Several members of the group said those quality concerns deserve more attention than they’re currently getting. The productivity gains from AI can mask a degradation in output quality that’s hard to measure until it’s already a problem.
Ostrowski was blunt: “Faster creation of slop is my personal hell. I think it’s going to show which organizations have considered this and which haven’t.”
Where to start
Stop focusing on adoption rates as the signal of progress. Instead, ask, “Is AI actually changing how decisions get made or how work flows through teams?” Then figure out what your own scorecard looks like; Zapier’s, for example, tracks efficiency, quality, and employee engagement.
Get leaders to model AI use in public. Have a senior leader build or use an AI tool live, in front of a large audience. A messy, real demonstration that they’re still learning lands better than a polished presentation.
Audit your AI expectations. If AI proficiency lives only in your training catalog, it’s aspirational. Identify how AI fluency is evaluated in hiring and performance management, and expect those standards to evolve rapidly.