Nearly a quarter of organizations polled in a recent McKinsey report said they had experienced negative consequences from generative AI’s inaccuracy. Guardrails, released last fall by the Israel-based startup Aporia, places a collection of small language models between a chatbot and its users that work together to intercept inaccurate, inappropriate, or off-topic responses while giving companies stronger privacy controls. It also blocks users’ attempts to manipulate the AI, stopping, for example, users who pressure a chatbot into giving them a discount. Liran Hason, Aporia’s co-founder and CEO, says the company’s goal is ensuring humanity “can really trust AI.” Guardrails’ early clients include insurance giant Munich Re and rental-car company Sixt.