Building AI agents is no longer the hard part—making them reliable is. This talk dives into the under-discussed but critical practice of context engineering: the techniques used to control what your agents “see,” “remember,” and respond to.
The session will examine how to design effective context flows across planning, execution, and memory stages using frameworks such as LangGraph and Google’s Agent Development Kit (ADK). Through real-world examples, we’ll explore how to handle context drift, token overflow, and prompt injection. I’ll also share a before-and-after demo of a flaky agent that becomes robust simply by improving its context strategy.
Whether they’re building single-agent copilots or complex multi-agent systems, attendees will leave with practical tools to debug, refine, and productionize agent behavior through clean, modular context pipelines.
I'm a Tech Lead at Red Buffer, where I design and deploy generative AI systems for real-world use—ranging from healthcare automation to multi-agent copilots for SaaS platforms. I'm also a Google Developer Expert (GDE) in AI, with a focus on making advanced LLM systems accessible, robust, and production-ready.
When I'm not leading AI projects or running workshops, I mentor learners at Turing College as a Senior Team Lead, helping them bridge the gap between theory and real-world engineering. I also write for Towards Data Science, where I break down complex AI topics with a focus on practical use cases.
Outside of tech, I have a deep love for classic literature (yes, Tolstoy counts—even if he's Russian), strong coffee, and the quiet company of cats. I'm driven by curiosity, clarity, and the joy of building things that make life a little smarter and simpler.