Event Overview

In this event, you will gain the knowledge and tools to identify, mitigate, and prevent security risks, strengthening the reliability and security of your AI systems. WWT's Prompt Injection and Training Data Poisoning labs help users understand and defend against two major LLM security threats. The Prompt Injection Lab demonstrates how attackers manipulate LLMs with deceptive inputs to extract sensitive data or execute unintended actions, while the Training Data Poisoning Lab explores how corrupting training or retrieval data can introduce biases , vulnerabilities, or backdoors.

What to expect

This hands-on interactive session will explore using the labs is to introduce users to the risks of prompt injection and training data poisoning to Large Language Model (LLM) and Retrieval Augmented Generation (RAG) systems. Users will explore both direct and indirect prompt injection, as well as training data poisoning, through real-time queries and examples. The lab walks the user through accomplishing the following:
  • Lab Architecture, key concepts, terms and technologies.
  • Performing direct prompt injection to extract private information from an LLM.
  • Performing indirect prompt injection by uploading a "malicious" resume to a RAG system.
  • Examining an online forum and seeing the repercussions of poisoning a RAG chatbot.
  • Exploring LLMGuard methods to protect against prompt injection and training data poisoning.

Goals and Objectives

WWT's Prompt Injection and Training Data Poisoning Labs help users understand and defend against two major LLM security threats: Prompt Injection Lab: Demonstrates how attackers manipulate LLMs with deceptive inputs to extract sensitive data or execute unintended actions. Training Data Poisoning Lab: Explores how corrupting training or retrieval data can introduce biases, vulnerabilities or backdoors. By attending these labs, users will gain practical knowledge and skills to identify, mitigate and prevent these security threats, ensuring the safe and ethical use of LLMs.

Who should attend?

This event is ideal for IT professionals, data scientists, and AI and security engineers.