Overview
Explore
Events
Select a tab
7 results found
Retrieval Augmented Generation (RAG) Walk Through Lab
This lab will go into the basics of Retrieval Augmented Generation (RAG) through hands on access to a dedicated environment.
Foundations Lab
• 771 launches
AI Prompt Injection Lab
Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
• 284 launches
Deploying and Securing Multi-Cloud and Edge Generative AI Workloads with F5 Distributed Cloud
In the current AI market, the demand for scalable and secure deployments is increasing. Public cloud providers (AWS, Google, and Microsoft) are competing to provide GenAI infrastructure, driving the need for multi-cloud and hybrid cloud deployments.
However, distributed deployments come with challenges, including:
Complexity in managing multi-cloud environments.
Lack of unified visibility across clouds.
Inconsistent security and policy enforcement.
F5 Distributed Cloud provides a solution by offering a seamless, secure, and portable environment for GenAI workloads across clouds. This lab will guide you through setting up and securing GenAI applications with F5 Distributed Cloud on AWS EKS and GCP GKE.
Advanced Configuration Lab
• 9 launches
Training Data Poisoning Lab
Training data poisoning poses significant risks to Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) systems. This lab explores these dangers through a case study of an online forum, demonstrating how corrupted data can compromise AI effectiveness and security, and examines methods to mitigate such threats.
Foundations Lab
• 138 launches
AIPG: The AI Security Enclave
The AI Security Enclave in the AI Proving Ground (AIPG) adds an environment dedicated to supporting AI security efforts and demonstrating WWT expertise and capabilities for testing innovative hardware and software security solutions.
Advanced Configuration Lab
Protect AI Guardian Sandbox
Protect AI Guardian is an ML model scanner and policy enforcer that ensures ML models meet an organization's security standards. It scans model code for malicious operators and vulnerabilities, while also checking against predefined policies. Guardian covers both first-party (developed within the organization) and third-party models (from external repositories). This comprehensive approach helps organizations manage ML model risks effectively.
In this Lab, you will walk through the Protect AI Interface, explore the different feature sets there, and submit example models for scanning.
Sandbox Lab
• 170 launches
Deep Instinct Data Security X (DSX) for NAS
Deep Instinct provides several solutions powered by deep learning to quickly identify potential attacks. This lab will demonstrate the capabilities of their DSX for NAS - NetApp solution, which can scan files in milliseconds anytime they enter the network or are edited. Files are scanned within the network environment, ensuring full data privacy, confidentiality, and compliance. Files that are found to be malicious can be either deleted or quarantined. Deep Instinct works with both network attached storages and cloud storages.
Foundations Lab
• 42 launches