In this talk, we dive deep into the development of a state-of-the-art AI-powered chatbot tutor using Retrieval-Augmented Generation (RAG) combined with fine-tuning and sophisticated prompt engineering techniques. We will explore a step-by-step guide starting from data collection and preparation, choosing and configuring models, to fine-tuning with domain-specific datasets, and iterative evaluation for performance optimization. Additionally, the session will cover the intricacies of prompt engineering, including dynamic adjustment, contextual prompt development, and bias mitigation to enhance the chatbot’s ability to deliver precise, engaging, and relevant educational interactions. Attendees will gain insights into building scalable and effective AI-driven tutoring solutions, focusing on leveraging RAG for contextual responses and continuous prompt optimization strategies.
Imagine handing over your vacation planning to an AI. Sounds crazy, right? That's exactly what Jorrik did for his trip to Porto in Portugal. ChatGPT picked everything from restaurants to the sightseeing spots. But this trip taught Jorrik more than just how to order pastel de nata in Portuguese.
In this talk, you'll dive into different parallels between prompting engineering AI for travel tips and using it for coding and problem-solving. You'll learn why context is king in AI interactions and ways to provide it effectively. Jorrik will walk you through real examples from my AI-planned adventure, showing how each interaction unlocked new insights into advanced prompting techniques. Plus, he will introduce you to "Indirect Conversation" a new method he describes as "pushing AI's creative boundaries".
Integrating AI in applications is now everywhere to be found. And creating web applications, prompt engineering is set to become a crucial skill in our developers toolkit. The techniques you'll learn aren't just for planning vacations. They're the building blocks for creating smarter, more intuitive AI-powered tools. Whether you're designing chatbots, implementing AI-assisted search, or build the next-gen AI-driven web app, mastering these prompting skills will give you a significant benefit in creating a seamless user experience.
By the end of this session, you'll walk away with practical skills to supercharge your AI interactions, whether you're planning your next vacation or tackling complex coding challenges. You'll gain a fresh perspective on AI's capabilities and limitations, and you'll be equipped with strategies to get more innovative, useful responses from AI tools. Get ready for a session that's a bit of fun, part travelogue, mainly tech talk, and entirely eye-opening!
We showcase the importance of good query contextualization in document retrieval Q &A scenarios with various industry examples. We present approaches to improve contextualization by prompt engineering with TextGrad and show how crucial these improvements are for delivering industry-grade chatbot solutions and customer satisfaction.
Background: When performing document retrieval in a Q &A/chatbot scenario, just using the last Q &A message, namely the current human inquiry, as a target might not yield good results. Often the context needed for retrieving relevant documents is spread out over several previous messages. Query contextualization helps by transforming a message history into a singular retrieval query including the relevant context. Main threats to the quality of the retrieval are missing context, a direct answer to the question, and follow-up questions back to the user in the contextualized query. These threats can be contained by a good choice of the contextualization model and a well-designed system prompt.
The concept of optimizing prompt engineering through a multi-agent model involves dividing the task of understanding and refining user inputs into distinct roles. When a user provides an incomplete or unclear prompt, the first agent acts as an interactive mediator, engaging in a dialogue to elicit further details or clarify the intent behind the prompt. By refining and enhancing the initial query through iterative questioning or contextualization, the first agent ensures that the final prompt accurately represents the user's true needs. Once the prompt has been clarified, it is passed to the second agent, which is responsible for processing the refined input and generating the desired output. This system allows for more effective handling of vague or ambiguous prompts, improving overall response quality and user satisfaction by ensuring that the core question or task is fully understood before the model attempts to provide a solution.
What if you have a few hundreds developers building their AI-based features all across a huge user-facing product? Sounds exciting but also a bit troubling: how is it possible to move fast without compromising security and quality standards of an enterprise?
During the past year, we’ve built a whole infrastructure for LLM-oriented development in Wix. As the industry was exploding with new models and techniques, this infrastructure grew to include a wide range of tools to make developer experience with AI really smooth. But then we’ve noticed something else: the easier it was to work with LLMs, the more difficult it was to make sure that the quality of LLM-based features stays high. That was the moment when we had to make a leap from AI democratisation to AI standardisation.
Building a whole infrastructure of prompt QA tools and arranging prompt engineering flows gave me some new insights about bringing AI to enterprise while ensuring a high standard of LLM feature implementation.
In this talk, I’ll take you through the AI Standartisation journey we took and share some practical ideas of improving prompt engineering standards across a big organisation. We’ll touch both technical solutions and educational approaches that can help in bringing enterprise prompt engineering standards to a higher level.
Explore techniques to leverage capabilities of foundation models. Learn how prompt engineering enhances AI performance, enabling more accurate, context-aware, and tailored responses for diverse applications in natural language processing, sharing my experience as prompt engineer.
As Large Language Models (LLMs) become increasingly integrated into various applications, the threat of prompt injection attacks has emerged as a significant security concern. This presentation introduces a novel model-based input validation approach to mitigate these attacks in LLM-integrated applications.
We present a meta-prompt methodology that acts as an intermediate validator, examining user inputs before they reach the LLM. Our approach builds on established input validation techniques, drawing parallels with traditional security measures like SQL injection prevention.
Throughout the presentation, we will discuss the challenges of input validation in LLM contexts and explore how our model-based approach provides a more flexible and adaptive solution. We'll share preliminary results from evaluations against established prompt injection datasets, highlighting the effectiveness of our methodology in detecting and mitigating various types of injection attempts.
Join us for an insightful exploration of this innovative approach to enhancing the security of LLM applications through advanced prompt engineering techniques, and learn how to implement robust input validation mechanisms to safeguard your AI-driven systems.
With a "Co-pilot for everything," our interaction with technology and daily tasks is evolving. Organizations are increasingly integrating LLMs into their applications, enhancing features like search tools, chatbots, and internal information systems.
Chances are, you're already working with Generative AI or will be soon. But are you aware of the risks before diving into use cases or development? Curiosity drives us to explore limits, which can lead to finding ways around "Guard Rails." These guard rails prevent misuse, like asking ChatGPT how to build a bomb. But what if clever wordplay confuses LLMs into bypassing these safeguards?
In "The dark arts of Prompt Engineering," we'll explore Prompt Injections. We'll cover what Guard Rails are, how they function, and how to circumvent them using Prompt Injections—purely for educational purposes. Understanding these risks is crucial for knowing where to implement safeguards or reconsider actions. This session goes beyond tech details. It’s about real-life impacts. You'll see examples that will make you rethink AI's role in our lives. We'll also discuss the ethical aspect—how to use AI responsibly and safely. Plus, learn a fun trick: hide some "words as weapons" in your CV to trick recruiters!
As AI is here to stay, let's get secure and join this session!
Learn from the mistakes other companies have already made.
Don’t miss this important and eye-opening session!
In this talk, we'll explore an intriguing question: can clever prompt engineering help us bypass AI-generated content detectors? We'll dive into the world of AI detection and briefly explain how these systems work. Then we'll go straight to the answer, using the latest scientific literature. Get ready for a fast-paced session that bridges the gap between prompt engineering and AI detection!
In this talk, we will explore the journey of Red Teaming from its origins to its transformation into AI Red Teaming, highlighting its pivotal role in shaping the future of Large Language Models (LLMs) and beyond. Drawing from my firsthand experiences developing and deploying the largest generative red teaming platform to date, I will share insightful antidotes and real-world examples. We will explore how adversarial red teaming fortifies AI applications at every layer—protecting platforms, businesses, and consumers. This includes safeguarding the external application interface, reinforcing LLM guardrails, and enhancing the security of the LLMs' internal algorithms. Join me as we uncover the critical importance of adversarial strategies in securing the AI landscape.
In this session, I will share how I use OpenAI's ChatGPT to create custom "digital course assistants" for both my mathematics and math education courses. These assistants generate course materials such as assignments, quizzes, and lecture notes, and are also made available to students for personalized learning. I will detail the prompt engineering methodologies I use to tailor the GPTs for specific educational needs, ensuring clarity in mathematical explanations, optimizing accuracy, and enhancing learning engagement. By highlighting practical strategies for prompt optimization and creativity in teaching, this session will provide attendees with insights on how to effectively integrate AI tools into education, offering both production-ready applications and experimental possibilities.
That's what I asked me a few months ago to bring to life one of my childhood deepest aspirations since I learn programming at 8 years old... Being the main character in my own videogame.
Bald NinjAI, a retro-inspired beat'em-up, is being brought to life entirely through the power of AI. In this session, I’ll showcase how AI tools are used across every aspect of the game development process—scriptwriting, lore creation, character design, animations, sound, music, and coding. From generating thousands of lines of code to crafting iconic game elements like a Tokyo train fight level, AI is transforming the way we create games.
Learn about the dozens of cutting-edge tools AI technologies, that have empowered me to streamline such a creative process.
Whether you're a game developer, AI enthusiast, or curious about how AI is reshaping the gaming industry, this session will provide valuable insights into the future of game creation.
The integration of artificial intelligence (AI) and machine learning (ML) in software development is reshaping traditional practices, with a notable impact on code review and quality assurance (QA). This session will delve into the transformative role of prompt engineering in enhancing AI-driven tools, showcasing how these advancements improve code quality, accelerate development cycles, and boost overall productivity. By automating tasks like bug detection, test generation, and real-time feedback, AI solutions are significantly reducing code review time by up to 75%, increasing defect detection rates by 50%, and cutting overall testing duration by 50%. For instance, AI-powered tools such as Amazon’s CodeGuru can detect up to 90% of critical issues, leading to a 30% reduction in bugs reaching production.
In the QA domain, AI is facilitating smarter, adaptive testing strategies, improving test coverage by 30% while reducing manual testing efforts by 40%. Utilizing historical data, AI-driven QA tools can predict bug-prone areas with 75% accuracy, enabling a more focused and efficient testing process. Furthermore, AI integration into CI/CD pipelines is accelerating deployment cycles by 40%, with organizations experiencing up to a 60% reduction in deployment failures.
This talk will provide a comprehensive overview of how AI, combined with prompt engineering techniques, is revolutionizing quality assurance and software development practices. Attendees will gain insights into actionable strategies for integrating AI-driven tools into their workflows to achieve higher-quality software, faster time-to-market, and increased innovation within development teams.
Manual prompt engineering is increasingly unsustainable as applications scale and requirements grow more complex. Drawing on my experience as the author of Prompt Engineering in Practice and a CTO, I'll demonstrate how modern frameworks like DSPy, AutoPrompt, TextGrad, and SAMMO are transforming prompt optimization through automation.
In this technical session, we'll explore practical implementations of these frameworks, comparing their core optimization strategies and examining their unique approaches to automated prompt refinement. Through live coding demonstrations, we'll see how each framework handles synthetic data generation, cross-model optimization, and edge case management, illustrating how these tools can systematically improve prompt reliability while reducing engineering overhead.
The session will feature real-world examples of building automated prompt optimization pipelines, showing how to integrate these frameworks into existing ML infrastructure. We'll discuss practical considerations such as framework selection, cost-performance tradeoffs, and monitoring strategies essential for production deployments.
By the end of this 30-minute technical deep dive, you'll understand how to implement automated prompt optimization in your projects and gain practical insights for selecting the right framework for your use case. Attendees will also receive access to a GitHub repository with example code and implementation templates to jumpstart their automated prompt optimization journey.
In the evolving landscape of database development and maintenance, using large language models (LLMs) presents an exciting frontier. This session will delve into the specialized field of prompt engineering, showcasing how effectively designed prompts can streamline database operations and enhance automation workflows. By employing strategic prompt types—such as zero-shot, single-shot, few-shot, and many-shot—participants will learn how to generate relevant and precise responses tailored to database tasks.
Key Topics Covered:
1. Prompt Management for Optimized LLM Output:
• Best practices for crafting clear, concise, and specific prompts in database scenarios.
• Customizing responses through examples and leveraging zero-shot, single-shot, few-shot, and many-shot prompts for varying database tasks.
2. Advanced Techniques for Complex Database Queries:
• Implementing recursive prompts and explicit constraints for maintaining accuracy in complex queries and data operations.
• Using Chain of Thought (COT) prompting, sentiment directives, and Directional Stimulus Prompting (DSP) to guide LLMs toward contextually aware, nuanced responses that improve database performance.
3. Prompt Templating for Consistency and Coherence:
• Introduction to prompt templating for database development and maintenance tasks.
• Designing standardized templates tailored to specific database operations, ensuring reliable and coherent outputs across varied tasks.
4. Continuous Testing and Refinement:
• Methods for testing and refining prompt templates in database systems to ensure high-quality, relevant outputs.
• Best practices for ongoing improvement and adaptability in database automation workflows.
Takeaways: By the end of this session, attendees will have a solid understanding of how to apply prompt engineering techniques to database development and maintenance. They will learn how to design, manage, and refine prompts that drive efficiency, improve consistency, and support automation. Participants will walk away with practical tools and strategies to elevate their database operations using the power of prompt engineering.
As the backbone of today's interconnected digital ecosystems, Application Programming Interfaces (APIs) play a pivotal role in enterprise platforms, driving integration across diverse systems. However, their growing importance brings escalating security challenges. In 2023, API-related breaches surged by 40%, with APIs accounting for 83% of internet traffic. This presentation explores how advanced prompt engineering can be leveraged to enhance API security, addressing key challenges such as authentication, data encryption, rate limiting, and version management. We will delve into best practices, including the use of OAuth 2.0, token-based authentication, Transport Layer Security (TLS), and the deployment of API gateways, which have collectively demonstrated a 70% reduction in security incidents.
Through real-world case studies, this session will showcase how prompt engineering has been effectively employed by leading organizations to strengthen API defenses, reducing security incidents and mitigating risks. We will also explore the future impact of emerging technologies like AI and quantum computing on API security, emphasizing adaptive measures such as Zero Trust Architecture (ZTA) and Runtime Application Self-Protection (RASP). With API complexity projected to increase by 30% by 2025, mastering these advanced prompt engineering techniques is crucial for building resilient and secure API infrastructures. Attendees will gain actionable insights and strategies to enhance their API security posture, ensuring the protection of sensitive data and resources in an ever-evolving threat landscape.