This talk lifts the lid on the processes we have built around our new features while onboarding language experts to become prompt engineers.
You’ll see how we:
- Co-design prompts with subject-matter experts: turning teaching strategies into reproducible prompt patterns for pronunciation and conversation feedback.
- Personalise feedback with learner-aware prompts: adapting to errors, goals, and CEFR level while guarding against over-correction and jargon.
- Translate and adapt prompts across languages: building a multilingual prompt layer that preserves intent, pedagogy, and tone.
- Make feedback measurable: using rubric prompts and scoring agents to assess feature outputs for accuracy, level-fit, clarity, and encouragement, driving a continuous evaluation loop and consistent judgments across languages.
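As an illustration of what a reproducible, learner-aware prompt pattern can look like, here is a minimal Python sketch. It is not Busuu's actual template; the field names and the CEFR guidance strings are assumptions for illustration.

```python
from string import Template

# Illustrative sketch of a reproducible, learner-aware feedback prompt.
# The teaching strategy lives in the fixed template; per-learner details
# are substituted in, and per-level guidance caps corrections so the
# model does not over-correct or slip into jargon.
FEEDBACK_PATTERN = Template(
    "You are a supportive $language conversation coach.\n"
    "Learner level: $cefr_level. Goal: $goal.\n"
    "Recent recurring errors: $recent_errors.\n"
    "$level_guidance\n"
    "Avoid linguistic jargon and end with one short encouragement."
)

# Assumed per-level guardrails against over-correction.
CEFR_GUIDANCE = {
    "A1": "Use very simple words and correct only the single most important error.",
    "B1": "Use everyday vocabulary and correct at most two errors, briefly.",
    "C1": "You may discuss nuance, but correct at most three errors.",
}

def build_feedback_prompt(language, cefr_level, goal, recent_errors):
    return FEEDBACK_PATTERN.substitute(
        language=language,
        cefr_level=cefr_level,
        goal=goal,
        recent_errors=", ".join(recent_errors) or "none recorded",
        level_guidance=CEFR_GUIDANCE.get(cefr_level, CEFR_GUIDANCE["B1"]),
    )

print(build_feedback_prompt("Spanish", "A1", "ordering food", ["ser vs. estar"]))
```

Keeping the strategy in one fixed template and the learner details in substitution fields is what makes the pattern reproducible: linguists can review and version the template without touching code.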
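One plausible shape for a multilingual prompt layer is a single pedagogical source of truth with reviewed per-locale renderings of the tone-sensitive lines. The sketch below, including the `TONE_CLOSING` table and its Spanish and French strings, is an assumption for illustration, not the production design.

```python
# Sketch: tone-sensitive closing lines are reviewed translations keyed by
# locale, so pedagogy and tone survive translation; unknown locales fall
# back to English rather than to unreviewed machine translation.
TONE_CLOSING = {
    "en": "End with one short, warm encouragement.",
    "es": "Termina con una frase de ánimo breve y cálida.",
    "fr": "Termine par un court encouragement chaleureux.",
}

def localize_prompt(base_instructions: str, locale: str) -> str:
    closing = TONE_CLOSING.get(locale, TONE_CLOSING["en"])
    return f"{base_instructions}\n{closing}"

print(localize_prompt("Give feedback on the learner's pronunciation.", "es"))
```

The design choice worth noting: only the lines where tone matters are translated by hand, while the shared pedagogical instructions stay in one place, so intent cannot drift per language.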
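The rubric-scoring idea can be sketched as an LLM-as-judge loop. In the sketch below, `call_judge` is a hypothetical stand-in for a real model client, and the four criteria names simply mirror the rubric named above.

```python
import json

def rubric_prompt(feedback: str) -> str:
    # Ask the judge model for machine-readable scores on the four criteria.
    return (
        "Score the following learner feedback on four criteria, each 1-5: "
        "accuracy, level_fit, clarity, encouragement.\n"
        'Reply with JSON only, e.g. {"accuracy": 4, "level_fit": 5, '
        '"clarity": 4, "encouragement": 5}.\n\n'
        "Feedback to score:\n" + feedback
    )

def score_feedback(feedback: str, call_judge) -> dict:
    # call_judge is a hypothetical LLM client: prompt string in, text out.
    raw = call_judge(rubric_prompt(feedback))
    scores = json.loads(raw)
    return {k: int(scores[k]) for k in ("accuracy", "level_fit", "clarity", "encouragement")}

# Usage with a stubbed judge, e.g. for unit-testing the loop itself:
stub = lambda _prompt: '{"accuracy": 4, "level_fit": 5, "clarity": 4, "encouragement": 5}'
print(score_feedback("Nice try! Watch the rolled r in 'perro'.", stub))
```

Because the judge returns structured JSON against a fixed rubric, the same scoring loop can run over outputs in every language, which is what makes judgments comparable across locales.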
Expect concrete templates, failure modes we hit (and fixed), and the human-in-the-loop practices that kept educational integrity front and center.
If our students are turning to AI to build speaking confidence, this is how teachers and engineers must co-design both the prompts and the evaluators that truly serve them.
Ilya Kiselev, Lead AI Product Manager, Busuu

Ilya Kiselev is a Lead AI Product Manager with over seven years of experience building AI-powered products that address real-world challenges. He currently leads AI product strategy at Busuu (a Chegg company), where he has spearheaded initiatives that power and enhance the learner experience, ranging from LLM-based speaking practice to learner assessment using more conventional ML models, fostering user growth and engagement. His career spans computer vision, semantic search, and user research at companies including Satis.ai, Attraqt, and Elastic. A former startup founder and published neuroscience researcher, Ilya blends technical expertise with a passion for user-centered AI innovation.