As AI moves from the experimentation phase to broader deployment, leaders face a new set of challenges. They include redesigning work for entire teams, preserving expertise and mentorship, and managing AI agents and humans simultaneously. On the sidelines of the recent World Economic Forum annual meeting in Davos, Switzerland, a panel of business leaders and experts explored what we’ve learned about AI’s implications for leadership over the past year, and what comes next. The discussion, which included former IBM executive Diane Gherson, Valence founder and CEO Parker Mitchell, and Microsoft’s Katy George—who is also speaking at our Leading with AI Summit today—was hosted by Charter in partnership with Valence and The Washington Post’s WP Intelligence unit.
Here are takeaways from the discussion, which you can watch in its entirety here:
1. 2026 is the year you may manage AI and humans at the same time.
The speakers agreed that AI’s role in organizations has evolved rapidly from individual tool adoption to something more fundamental. Mitchell said that two years ago, the question was whether anyone would even talk to an AI coach. A year ago, his provocation was that AI would join the org chart. “This year, there’s generally consensus that there will be a large number of people who will be managing both humans and AI agents,” said Mitchell. “There’s a world in which 2026 is the year that your AI coach will know you better than your manager knows you—and you might actually want to choose an AI manager over a human manager.”
2. Leaders’ focus must shift from individual adoption to team-level transformation.
George, who is corporate vice president of workforce transformation at Microsoft, described how the tech company is no longer focused just on getting individuals to use AI tools, but on redesigning how whole teams work together.
“This will be the year [for] moving from a focus on individual adoption, individual upskilling, [and] individual productivity to really shifting teams and systems of work,” she said.
George pointed to Microsoft’s “Camp Air” bootcamp, where teams redesign their workflows with AI, then permanently change how they operate. Colleagues she spoke with in Microsoft’s sales organization stopped monitoring individual Copilot usage because the link between AI usage and revenue per sales rep is now well established. They told her they “no longer do that because we all know revenue per sales rep goes way up with AI usage. So now we just look at people’s performance,” she said. Measuring outcomes rather than adoption is a signal of this more mature phase.
3. Beware of sleepwalking into a new Taylorism.
Gherson—who was IBM’s chief human resources officer, serves on the board of Kraft Heinz, and is a senior advisor at BCG—drew a parallel between the current moment and the early factory era. Frederick Winslow Taylor’s “time-and-motion” studies aimed to maximize productivity by breaking skilled craftsmanship into micro-tasks—stripping workers of pride, agency, and autonomy in the process.
“The thing that bothers me is that we’re kind of sleepwalking our way into a Frederick Winslow [Taylor] moment,” Gherson said.
She warned that the emerging pattern of AI running entire processes while humans merely oversee the output risks repeating that history. She pointed to CEO earnings calls announcing layoffs because of AI as a particularly corrosive message. The alternative, she argued, is to adopt explicit design principles that treat meaningful work as a litmus test: “test every opportunity and say, ‘No, that’s not meaningful work. We’re not going to do that,’” Gherson said.
4. Expertise is becoming more valuable, not less.
Early research on AI in workplaces—particularly studies of call centers—suggested that less experienced workers benefited most from AI, while top performers saw little improvement or even a drop in productivity. But the speakers argued that those findings don’t tell the full story as AI is deployed in more complex settings. “Right now I see the value of human judgment going up, not down,” said George.
She described Microsoft teams where experts in specific areas have used AI to dramatically accelerate their work—a prototyping team now completes in a week what used to take three to four months, for example. She described an emerging “t-shaped” human role: People still need deep disciplinary expertise, but AI enables them to connect horizontally across functions, and teams become more fluid as a result. The prototyping team members have started calling themselves “firefly teams,” perhaps a nod to the agile way they now work. Gherson added an example from Zapier, where she said the company discovered that complex customer questions were best handled not by more customer service reps, but by routing engineers to the front lines, with an expert showing up when needed.
5. Watch out for “brain rot” and design for cognitive engagement.
The speakers acknowledged that AI output can look so polished that people stop thinking critically about it. Gherson cited research showing that even high performers, when working with AI, pulled back on their critical thinking because the output appeared so good.
“There’s what people call brain rot, where people only used 35% of the brain[power] that they would have normally used in the work they were doing,” she said.
George responded that organizations need to design for cognitive engagement—not just efficiency. “Purposely designing cognitive engagement is going to be so important,” she said, rather than just designing for productivity. Gherson was emphatic that this should function as a hard constraint: “That has to be a design principle,” she said. “We’re not going to do that if we don’t pass that test.”
6. The people who do the work must redesign the work.
Many AI transformation efforts are designed from the top down, without involving the employees who actually do the work. “The people who do the work have to redesign the work,” said George. “So much of knowledge work is tacit anyway. Nobody can actually redesign it for you.”
The history of bringing technology into workplaces includes many failures from not involving workers. George pointed to how Taylor’s top-down factory approach ultimately failed, while Toyota’s system of involving every employee in continuous improvement succeeded.
7. We’re on the cusp of a new science of knowledge work.
George said organizations need to invest in developing a “new science of knowledge work”—understanding for the first time how this type of work actually happens, in the same way lean manufacturing once made factory work visible and measurable. Mitchell cited a CEO who told him 30% of knowledge work is “basically wasted work.” “My goal at the end of the year is that, however you measure it, that 30% goes down to 20% and then we’ll bring it down to 10%,” Mitchell said.
He noted that the power users of AI are where leaders should be focused. “A small number of people are getting an enormous amount of benefit, and there’s a long tail,” Mitchell said. “We shouldn’t be paying any attention to averages in this early adoption phase. We should find the outliers and understand the outliers and then expand the outliers and…scale their practices.”