Sundar Pichai

CEO, Google and Alphabet


It’s hard to imagine what modern life would look like without Google. Its search business prints hundreds of billions of dollars in yearly revenue, and more than two decades ago the company began channeling some of that money toward AI research. Its industry-leading scientists were responsible for many of the breakthroughs that drove the field to its current inflection point. And yet the product that kick-started today’s AI boom in late 2022, ChatGPT, came from a startup backed by Google’s major competitor, Microsoft. Suddenly Google was no longer the symbolic leader of the AI race, but playing catch-up.

Google’s CEO Sundar Pichai, who joined the company in 2004 and was appointed to the top job in 2015, took that hurdle in stride. Google wasn’t the first to build a search engine, he points out, but was the first to build one good enough to attract the lion’s share of the market. The same for browsers. Email. Maps. His point: it matters less whether Google is first, and more that its version is the best. The U.S. Department of Justice takes an alternative view: that Google’s search is a monopoly upheld by illegal anti-competitive actions. On Aug. 5, a judge ruled in favor of that argument; Pichai says Google plans to appeal.

Facing that giant risk to its business, Google has begun to introduce generative AI tools into products with billions of users, the most visible being Google Search, where new “AI Overviews” are now appearing above the familiar 10 blue links. Pichai spoke with TIME about how the tech giant is approaching the AI future.

This interview has been condensed and edited for clarity.

Google is now rolling out AI Overviews in Search, which is the front door to the Internet for most people. How are you thinking about the ripple effects of such a fundamental change?

By our metrics, it is one of the biggest improvements we have made in 20 years. Given that we can do this in a way that touches billions of people, it’s an extraordinary opportunity, but at the same time we have to be very responsible in how we approach it. That means taking the time to test things rigorously and make sure they’re working well. But I don’t want to understate the opportunity: in the longer run, you’re enabling access to knowledge and intelligence, which is core to our mission, at an unprecedented scale. We have work left to do to make all of this work well. But that’s the vision.

In 2016 you pivoted Google to an “AI first” company. But the thing that kick-started the current AI revolution was ChatGPT, which came from a competitor. Do you see that as a missed opportunity?

When I look at it, it’s a long journey, and through that journey there were always going to be new products. Any time there’s an exciting product, when you run a company, do you wish you were the first to do it? Yeah, absolutely. Who wouldn’t? But we take a long-term view. We weren’t the first to build search, or a browser, or maps, or email, right? So I think what is important, in the context of the new products we build, is: are we going to stay at the forefront? And I couldn’t be more pleased with the trajectory there, and more importantly with what’s ahead. These are very, very long-term trends. We are in the early stages of what is going to be an extraordinarily big opportunity. So I’m focused on that.

Generative AI still has its flaws. There's the hallucination problem, which is getting better, but is by no means solved. How are you thinking about the appropriateness of putting generative AI into search, given those flaws? Do you think users should trust the results that they get from AI Overviews without feeling the need to check other sources?

In AI Overviews, we don’t generate the answer through an LLM alone. We’re using the LLM to give an overview, but we’re grounding it in our ranking. And so it’s a very different approach from what you would get in [Google’s generative AI chatbot] Gemini. The reason we have chosen this approach in Search is to avoid the hallucination problem. That doesn’t mean it doesn’t happen, but it’s happening at the rate of, let’s say, one in 10 million. And we’re constantly making that better.
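
[What Pichai describes is a form of retrieval-grounded generation: the model summarizes documents that search ranking has already selected, rather than answering from its own memory. A minimal sketch of the pattern in Python, with stub functions standing in for Google’s internal ranking and model systems, which are not public:

from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    snippet: str

def ai_overview(query: str, rank_documents, llm) -> tuple[str, list[str]]:
    # rank_documents is a stand-in for the search ranking system.
    docs = rank_documents(query)
    sources = "\n".join(f"[{i}] {d.snippet}" for i, d in enumerate(docs))
    # The grounding constraint: the model is told to use only the retrieved
    # text, not whatever it memorized during training.
    prompt = ("Using ONLY the sources below, write a short overview, "
              "citing sources by index.\n"
              f"Query: {query}\nSources:\n{sources}")
    return llm(prompt), [d.url for d in docs]

# Example wiring, with stubs in place of the real systems:
overview, cited = ai_overview(
    "why is the sky blue",
    rank_documents=lambda q: [Doc("https://example.org", "Rayleigh scattering ...")],
    llm=lambda p: "The sky looks blue because of Rayleigh scattering [0].",
)

Hallucination risk then shifts from the model’s memory to the quality of the retrieved sources, which is the distinction Pichai draws between AI Overviews and Gemini.]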

The business of journalism is built on people reaching news sites. But with AI, Google seems to be working to replace the open web. What happens to society when the business model for so many publishers is threatened?

It's an important question. Through all the work we are doing, I think more than any other company, we are prioritizing approaches that value journalistic content and will send traffic to important sources. And actually, I think in a world where there's increasing AI-related content, people are looking for those voices. We see that in how people use our products. We are trying to help users synthesize, formulate queries, ask it more naturally, give them context, so they can find the information they're looking for. But ultimately, the way we at least envision it is, the information that you're looking for is out there in the world.

Google recently pulled an ad for Gemini after a backlash. [The ad featured a father asking Gemini to help his daughter write a letter to her role model.] Why do you think people didn't like the ad?

Our goal with AI is to be able to help you with tasks so that, if anything, you have more time for the human moments to shine through. Talking to your child is one of those human moments. I think people aren't looking for help in those dimensions. I think that's where that [ad] got it wrong. But it's good that people are giving us feedback in terms of how they want to use the technology. I think we have to listen to it carefully and empower them in the way they want to be empowered.

We're seeing a growing backlash to AI from creators—people who are angry about their art, writing, or music being scraped from the internet, often without permission or compensation, to train AI. That AI is then flooding the web with spam, putting creators out of work in some cases, while enriching the tech companies. How do you plan to win those people back? 

I would argue that, if you take music as an example, we’ve actually taken a very different approach. We are partnering with musicians on our initial generative AI tools, so they can explore their creative space in more unique ways. Everything we are approaching in YouTube through this generative AI moment has been [about] providing tools for creators to create content. And we are not trying to put any AI-created content on YouTube. So I think you have a choice to make in these things. Ultimately, where I see the technology having the most application, no different from what the internet did, is as a deep collaborator that helps people reach their full potential, whatever line of work they do. And it’s especially obvious in the case of creators and artists. I think society will always value those human voices. And I view it as an opportunity for us to invest in that.

When it comes to training AI tools, the companies scraping the internet for training data justify the practice under the “fair use” doctrine of copyright law. But given that many of the people that content belongs to don’t believe it is fair use, I’m curious how you’re empathizing with those people, and whether there’s an understanding, on Google’s side, of that argument.

Look, this is an important argument, and I think this is why we have given people a clear way to opt out of our AI training. Any time there’s a new technology, I think, as a society, you have to find the right balance between what’s fair use and how you protect the copyright and intellectual property of the people who are producing the content. I think those questions will end up being very, very important. In the early stages of the technology, we’ve given an opt-out, but over time we will invest in approaches that figure out a better balance. I think it has to be something that evolves over time, with everyone having a stake in it.
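
[The opt-out Pichai refers to is, in practice, the “Google-Extended” robots.txt token, which publishers can disallow to keep their content from being used to train Google’s generative AI models. A site opts out by adding two lines to its robots.txt file:

User-agent: Google-Extended
Disallow: /

Per Google’s documentation, this does not affect a site’s inclusion or ranking in Search.]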

Google recently released a demo of Project Astra, an AI assistant that can take in voice and video, and respond in real time. That sounds like it’s really expensive to run, with the amount of data going in and out. Is it? And how might you make a tool like that profitable and sustainable to run as a business?

We’ve said that over time we want Gemini to have these capabilities, but we will start small and figure out a way to get it to a subset of users. Throughout [the history of] technology, we’ve always had these moments where something looks computationally expensive, and we work super hard to make it more accessible. Eighteen months ago, people were like, ‘LLMs are very expensive.’ But today we have AI Overviews rolling out to hundreds of millions of people. The reason we built Gemini to be natively multimodal is so that we can one day do things like Astra at scale and provide it to everyone in the world.

Investors like Goldman Sachs and Sequoia are raising questions about whether the huge investments being made in AI are going to result in profitability anytime soon. Do you think we’re in a bubble here?

I’ve never seen a technology that is so cross-cutting, so it’s a very leveraged investment. I think it’ll end up making sense long-term. Of course there’ll be cycles, and we’re all adaptable. If you feel the technology is going at a slower pace, you can course-correct your investment. These products have long useful life cycles, so you’re investing in something that can be used for a long time. I think we are choosing the right approach in investing to push the frontier.

A judge ruled on Aug. 5 that Google has illegally maintained a monopoly in its search business. The DOJ is weighing some remedies, including a potential breakup of Google's business. What could that mean for the future of AI and for Google's role in it?

We respectfully disagree with the court’s decision here. Even the ruling acknowledges that we have relentlessly innovated, that we have the best product in terms of quality, and that users prefer it. Many partners independently validate that and say it’s the best product out there. And so we will appeal, but we’ll respectfully work through the process. I expect it to take some time, but we’ll continue staying focused on innovation.

There was allegedly an argument between Larry Page and Elon Musk over whether it would be a good thing for digital life to outcompete human life. Do you agree with Larry that it would be a good thing if digital life outcompeted human life?

I've had many conversations with the founders, and we haven't talked about it in this way. I think more often than not, we end up talking about frustrations like, why can't we make progress so that we can detect cancer better and save lives? I can speak to how we are approaching it as Google: I think we are working super hard to build a technology in a way that empowers people, and I think that's an important framework by which we will approach everything we do.

Write to Billy Perrigo at billy.perrigo@time.com