What if you have a few hundred developers building AI-based features all across a huge user-facing product? Sounds exciting, but also a bit troubling: how can they move fast without compromising an enterprise's security and quality standards?
During the past year, we built a whole infrastructure for LLM-oriented development at Wix. As the industry exploded with new models and techniques, this infrastructure grew to include a wide range of tools that make the developer experience with AI really smooth. But then we noticed something else: the easier it became to work with LLMs, the harder it became to ensure that the quality of LLM-based features stayed high. That was the moment we had to make the leap from AI democratisation to AI standardisation.
Building an infrastructure of prompt QA tools and establishing prompt engineering workflows gave me new insights into bringing AI to the enterprise while maintaining a high standard of LLM feature implementation.
In this talk, I'll take you through our AI standardisation journey and share practical ideas for improving prompt engineering standards across a big organisation. We'll touch on both the technical solutions and the educational approaches that can help raise enterprise prompt engineering to a higher level.