Nearly a quarter of organizations polled in a recent McKinsey report said they had experienced negative consequences from generative AI's inaccuracy. Guardrails, released last fall by Israel-based startup Aporia, places a collection of small language models between a chatbot and its users; working together, they intercept inaccurate, inappropriate, or off-topic responses while giving companies better privacy controls. The system also blocks attempts to manipulate the AI, stopping, for example, users who pressure a chatbot into giving them a discount. Liran Hason, Aporia's co-founder and CEO, says the company's goal is ensuring humanity "can really trust AI." Guardrails' early clients include insurance giant Munich Re and rental car company Sixt.