Overview of NVIDIA NIM Microservices
Welcome to part 2 about our RAG lab infrastructure built in collaboration with NetApp, NVIDIA, and WWT. NVIDIA NIM is a suite of user-friendly microservices that simplifies the deployment of generative AI models, such as large language models (LLMs), embedding models, and reranking models, across various platforms. NIM microservices make it easier for IT and DevOps teams to manage LLMs in their environments, providing standard APIs that developers can use to build AI-driven applications like copilots, chatbots, and assistants. They leverage NVIDIA's GPU technology for fast, scalable deployment, delivering efficient inference and high performance.
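Because NIM microservices expose an OpenAI-compatible REST API, calling a deployed model is a matter of sending a standard chat-completions JSON payload. The sketch below builds such a payload; the endpoint URL and model name are placeholders you would replace with your own deployment's values.

```python
import json

# Hypothetical endpoint; substitute the address of your deployed NIM microservice.
NIM_URL = "http://localhost:8000/v1/chat/completions"

# Example model name; use the model your NIM container actually serves.
payload = {
    "model": "meta/llama3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize what a RAG pipeline does."}
    ],
    "max_tokens": 128,
}

# Serialize the request body; in practice you would POST this to NIM_URL,
# e.g. with requests.post(NIM_URL, json=payload).
body = json.dumps(payload)
print(body)
```

Because the API follows the OpenAI schema, existing client libraries and tooling built against that schema can generally point at a NIM endpoint with only a base-URL change.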
Video
• 3:29
• Aug 28, 2024
AI Proving Ground: NetApp AI Pod
WWT's Derek Elbert gives a behind-the-scenes look at the NetApp AI Pod in the AI Proving Ground. This environment is one of the few that is attached to a single NVIDIA DGX. Get an inside look at how we are using this environment to add value for our clients.
Video
• 3:05
• Aug 12, 2024
The NetApp and NVIDIA Infrastructure Stack
Welcome to part 1 of the video series about the RAG lab infrastructure built in collaboration with NetApp, NVIDIA, and World Wide Technology. This series takes you behind the scenes of this state-of-the-art lab environment inside WWT's AI Proving Ground, powered by the Advanced Technology Center.
Video
• 4:45
• Aug 23, 2024
Partner POV | Turbocharge Your AI with Intelligent Data Infrastructure
AI demands intelligent data infrastructure to deliver simplified data management while maximizing performance. With NetApp, you can turn AI opportunities into AI reality.
Video
• 1:17
• Jul 9, 2024