Description
A conversation on how emotionally intimate AI systems are built, monitored, and held together under real-world constraints.

🎧 Opening
This episode explores how trust is built, measured, and sometimes strained in AI systems designed for emotionally intimate conversations. It's a technical and ethical discussion for people working on conversational AI, product infrastructure, and safety in systems that users form real attachments to. The focus stays on operational reality: what engineers actually face when AI moves from tool to companion.

🔍 Episode overview
Eva Simone Lihotzky speaks with Lior Oren about what it means to run AI companions at scale, where user trust is not an abstract principle but a daily KPI. Drawing on his experience as CTO of Replika and his prior work on integrity teams at Meta, Lior explains how unpredictability, observability, and emotional reliance shape engineering decisions. The conversation examines the tensions between flexibility and stability, innovation and guardrails, and regulation and lived product reality. Rather than speculating about the future, it stays grounded in how teams design memory, user control, and safety systems when conversations themselves are the product.

🧩 Key themes discussed
- Trust treated as a measurable success metric, not a philosophical goal
- Why observability is essential in statistical, non-deterministic AI systems
- Guardrails as part of core infrastructure, similar to security or reliability
- Emotional attachment influencing uptime, priorities, and team culture
- User agency through transparency, memory control, and conversational steering
- The risk of breaking "tone" and continuity when models change
- Limits of regulation and the trade-offs inherent in statistical safety systems