On April 22, the LebNet Bay Area community gathered for a compelling and timely conversation on the future of artificial intelligence, featuring Matt Boulos, Head of Policy & Safety at Imbue, an AI lab pioneering next-generation AI agents. Expertly moderated by Luna Maroun, the event brought together a dynamic mix of technologists, professionals, and students.
The discussion delved into the transformative role of AI agents, systems that go beyond generating code to reasoning, acting, and reshaping how we engage with technology. Unlike traditional assistants, these agents are designed to bridge the gap between human intent and machine execution. But with greater capabilities come greater responsibilities and risks.
Highlights from the conversation:
From code to software: Generating a line of code is easy. Building modular, maintainable systems is not. Matt challenged the crowd to think beyond syntax and consider the design foresight embedded in human engineering.
The illusion of understanding: Even researchers are still unraveling how models work. Mechanistic interpretability is a promising path, but we remain largely in the dark.
When AI harms: From deepfake abuse to a tragic case where a chatbot was linked to a user’s suicide, Matt reminded us that safety failures aren’t theoretical. They’re happening now—and the question of accountability remains painfully unresolved.
Policy that grows with tech: We discussed the urgent need for regulations that are principle-driven, adaptable, and designed to evolve. Without this, we risk locking society into frameworks that can’t keep pace with innovation.
Equity and access: Matt posed a powerful question: Who gets to shape the future of AI? If development remains concentrated in a handful of well-resourced players, the public loses its voice in how this transformative technology unfolds.
As Kim Nasrala, an attendee and junior at the University of California, Berkeley, aptly summarized, the future of AI must be built with integrity, empathy, and accountability.
“Let’s build systems that don’t just perform tasks, but do so with foresight and care.” – Matt Boulos

A big thank you to Matt Boulos, Luna Maroun, and everyone who joined the conversation. If this event made one thing clear, it's that designing safe AI isn't just a technical challenge; it's a societal one.
Following the discussion, attendees enjoyed an hour of networking over light refreshments before gathering for dinner in town, where the conversations sparked during the session continued.

