How to Secure Against Generative AI and Protect AI Systems | Briefing

Event Overview

Two of the most common questions we get about generative AI are relatively simple: How can I secure my organization against bad actors using generative AI? And how can I protect my LLM-powered architectures, AI systems, and data? Following his key session at NVIDIA GTC, WWT's Global Head of AI Security Kent Noyes examines the critical security challenges posed by generative AI and how you can overcome them.

Kent Noyes

World Wide Technology

Global Head of AI & Cyber Innovation

Kent began his career at WWT over 20 years ago as a consultant. He was the first individual to receive the designation Distinguishe...

What to expect

With over 20 years of cybersecurity experience, Kent Noyes is one of the foremost experts in AI security. In this briefing, Kent shares insights into the risks and threats brought about by AI and the benefits security teams can realize by embracing it. Watch this episode and learn more about:
  • What makes up a comprehensive AI security approach.
  • Current and emerging AI-enabled threats, such as common jailbreaks and deepfakes.
  • LLM-powered architectures and how to secure them.
  • The rapidly evolving generative AI security ecosystem.

Goals and Objectives

Better understand risk in the context of AI, how security applies to LLMs and API extensions, how security teams can lean on copilots, and how they can position themselves for the uncertain (yet exciting) future of AI advancement.

Who should attend?

  • C-level leaders looking to securely drive AI transformation.
  • Business leaders looking to understand how the executive suite should be thinking about AI security.
  • Business and IT leaders looking to gain insight into today's complex AI environment.
  • Security teams and personnel wanting to understand the important role security plays in driving AI success.