Overview of NVIDIA NIM Microservices
Welcome to part 2 of our video series about the RAG lab infrastructure built in collaboration with NetApp, NVIDIA, and WWT. NVIDIA NIM is a suite of user-friendly microservices that simplifies the deployment of generative AI models, such as large language models (LLMs), embedding models, and re-ranking models, across various platforms. NIM microservices make it easier for IT and DevOps teams to manage LLMs in their environments, providing standard APIs that developers can use to create AI-driven applications like copilots, chatbots, and assistants. NIM leverages NVIDIA's GPU technology for fast, scalable deployment, ensuring efficient inference and high performance.
Video
• 3:29
• Aug 28, 2024
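The standard APIs mentioned above follow the OpenAI chat-completions convention. The sketch below shows what a client call might look like; it assumes a NIM container already serving at `localhost:8000`, and the model name `meta/llama3-8b-instruct` is illustrative, not taken from the video.

```python
import json
import urllib.request

# Assumed local NIM endpoint; adjust host/port for your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model, prompt, max_tokens=128):
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_nim(prompt, model="meta/llama3-8b-instruct"):
    """Send a prompt to a NIM service and return the model's reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, existing client libraries and RAG frameworks can typically point at a NIM endpoint with only a base-URL change.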
The NetApp and NVIDIA Infrastructure Stack
Welcome to part 1 of the video series about the RAG lab infrastructure built in collaboration with NetApp, NVIDIA, and World Wide Technology. This video series will take you behind the scenes of this state-of-the-art lab environment inside the AI Proving Ground from WWT, powered by the Advanced Technology Center.
Video
• 4:45
• Aug 23, 2024