To run an AI model, computers must constantly shuttle vast amounts of data between separate memory and logic chips, a process that chokes performance. To solve this, Cerebras Systems in 2019 engineered a dinner plate–sized chip—the largest ever—that embeds both memory and logic. “People thought we were mad hatters,” says Andrew Feldman, Cerebras’s CEO and co-founder, given the huge technical hurdles. In March, the company released the third generation of the chip, the record-fast Wafer-Scale Engine 3 (WSE-3), which can train models 10 times bigger than OpenAI’s GPT-4 and will power the Condor Galaxy 3, a supercomputer under construction in Texas.