The top thing that workers and managers we previously surveyed wanted from employers around AI was clear communication about AI plans as they relate to their roles.
New research now suggests that some ways employers communicate about AI are significantly more effective than others.
We spoke recently about this with Gabriella Rosen Kellerman, co-author of Tomorrowmind and chief innovation officer and chief product officer at BetterUp, which provides coaching and related services. Here are excerpts from our conversation, edited for space and clarity:
What do we know about the most effective ways for leaders to communicate with workers about the use of artificial intelligence in their organizations?
I’ll tackle it in two ways. One is how you approach communication about AI in order to drive adoption. The other is how you communicate around the uncertainty of AI in a way that yields positive outcomes.
With the first one, at the most basic level, we know that if people have a mindset of optimism on the one hand and agency on the other, it drives significantly more adoption of AI. With those two things together, people end up using AI many times more, in both their personal and professional lives. We see this trickle down from leaders, so if a manager has that combination of optimism and agency around AI, their team is more likely to embody it. So showing that spirit of optimism and agency, and communicating in ways that embody it, is going to be helpful.
We also know that when people feel warmly toward their managers, when they feel that their managers are warm human beings, they’re more likely to adopt AI. So encouraging warmth and connection around these communications seems to be part of what’s effective as well. We don’t know a lot about that finding, by the way. But we know that of all these manager factors we looked at, warmth was the one that stood out.
Then the most practical piece on communicating about AI is that we find a big difference in how you frame AI to others when you’re trying to encourage their use of it. In general, in our experimental research, we see that people get a lot more benefit from the same AI tool when it’s positioned for its machine-like strengths rather than its human-like qualities. When we emphasize what the AI is really good at that’s different from what a human is really good at, it encourages people to want to engage with it, rather than positioning it as a person or a substitute for a person. So emphasizing things like ‘it’s nonjudgmental, it has so much knowledge, it’s highly accessible’ yields better outcomes in terms of adoption.
The second piece is that AI has introduced a lot of uncertainty, and in that sense it falls into the category of how we communicate with our people about scenarios where we don’t really know what’s going to play out, but there’s a lot at stake for them, including the possibility of job loss for them or their colleagues.
One of the approaches that we’ve seen has not been successful is to be transparent and say, ‘We don’t know.’ There are a lot of values-based reasons why people might want to take that approach, but it’s been falling flat with a workforce that feels it wants and deserves more clarity than not knowing. In a climate of declining trust overall, it can reinforce the sense that something’s being kept from them, that there’s a plan they’re not in the loop on. The more you emphasize that there’s no plan, that we just really don’t know, the more it reinforces that suspicion.
What we are working with organizations to help them do is increase prospection, their capability to imagine and plan for the future, which is really hard, especially in environments of uncertainty. But there are ways to do it, and to spend time doing it in a structured way, so that you end up with a set of scenarios for how this could play out. You end up with indicators that you can share and be open about, such as, ‘Here are the three things that we’re thinking are possibilities. Here’s what we’re watching in the market, here’s what we’re watching in the tech. We welcome your input, we welcome your insights. You are seeing things that could be helpful, but here’s how we are putting our energy into planning around these possibilities.’ That prospection has been a huge factor in our research in increasing confidence in and trust in leadership.
If I want to get good at prospection, are there any specific frameworks or resources that you recommend?
We have a bunch in our book and we go through exercises. The framework that I think is most helpful is called ‘pragmatic prospection.’ It was developed by Roy Baumeister and his co-authors, and it basically says that when we prospect, we do it in two phases. The first one is really fast and optimistic; he calls it ‘dream big.’ The second is slower and more deliberative, more evaluative: it’s ‘get real.’ The first thing a person can do is ask, ‘Do I do justice to each of those phases? Am I someone who rushes to get real? Do I do enough on the ‘dream big’ side?’ The ‘get real’ side is more the world of scenario planning. At the corporate level, those are all the rage again. That is a really effective way of mapping out these possibilities and then being able to communicate them to the employee workforce.
Read a full transcript of our conversation with Kellerman, including how the hierarchical distance between employees and a manager could alter the key ingredients of trust.
Read our 2024 interview with Kellerman about how to improve learning agility.
Download our research playbook on using AI in ways that enhance worker dignity and inclusion.