This AI Agent Is Designed to Not Go Rogue

AI agents like OpenClaw have recently exploded in popularity precisely because they can take the reins of your digital life. Whether you want a personalized morning news digest, a proxy that can fight with your cable company's customer service, or a to-do list auditor that will do some tasks for you and prod you to resolve the rest, agentic assistants are built to access your digital accounts and carry out your commands. This is helpful, but it has also caused a lot of chaos. The bots are out there mass-deleting emails they've been instructed to preserve, writing hit pieces over perceived snubs, and launching phishing attacks against their owners.

Watching the pandemonium unfold in recent weeks, longtime security engineer and researcher Niels Provos decided to try something new. Today he is launching an open source, secure AI assistant called IronCurtain designed to add a critical layer of control. Instead of the agent directly interacting with the user's systems and accounts, it runs in an isolated virtual machine. And its ability to take any action is mediated by a policy (you could even think of it as a constitution) that the owner writes to govern the system. Crucially, IronCurtain is also designed to receive these overarching policies in plain English and then run them through a multistep process that uses a large language model (LLM) to convert the natural language into an enforceable security policy.
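
The architecture described above (plain-English policy, compiled into enforceable rules that mediate every agent action) can be sketched roughly as follows. This is a minimal illustration, not IronCurtain's actual design: all names are hypothetical, and the LLM compilation step is stubbed out with a hardcoded rule set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str   # e.g. "email.delete"
    effect: str   # "allow" or "deny"

def compile_policy(natural_language: str) -> list[Rule]:
    # In a real system, an LLM would translate the owner's plain-English
    # policy into structured, enforceable rules. Here we return a fixed
    # stand-in for illustration.
    return [Rule("email.read", "allow"), Rule("email.delete", "deny")]

def is_allowed(rules: list[Rule], action: str) -> bool:
    # Default-deny: an action is permitted only if a rule explicitly allows it.
    for rule in rules:
        if rule.action == action:
            return rule.effect == "allow"
    return False

rules = compile_policy("Never delete my email; reading it is fine.")
print(is_allowed(rules, "email.read"))    # True
print(is_allowed(rules, "email.delete"))  # False
print(is_allowed(rules, "calendar.write"))  # False (not explicitly allowed)
```

The default-deny check is the key design choice: any action the compiled policy does not explicitly permit is refused, which is what keeps an agent from wandering onto "uncharted, destructive paths."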

"Services like OpenClaw are at peak hype right now, but my hope is that there's an opportunity to say, 'Well, this is probably not how we want to do it,'" Provos said. "Instead, let's develop something that still gives you very high utility, but is not going to go into these completely uncharted, sometimes destructive, paths."

Please select this link to read the complete article from WIRED.