Overview
NimLLMService provides access to NVIDIA's NIM language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management, with special handling for NVIDIA's incremental token reporting and enterprise deployment.
- NIM LLM API Reference: Pipecat's API methods for NVIDIA NIM integration
- Example Implementation: Complete example with function calling
- NVIDIA NIM Documentation: Official NVIDIA NIM documentation and setup
- NVIDIA Developer Portal: Access NIM services and manage API keys
Installation
To use NVIDIA NIM services, install the required dependencies:
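The NIM integration is distributed as an optional extra of the pipecat-ai package. The extra name below is an assumption; check your Pipecat version's installation docs if it differs.

```bash
pip install "pipecat-ai[nim]"
```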
Prerequisites
NVIDIA NIM Setup
Before using NVIDIA NIM LLM services, you need:
- NVIDIA Developer Account: Sign up at the NVIDIA Developer Portal
- API Key: Generate an NVIDIA API key for NIM services
- Model Selection: Choose from available NIM-hosted models
- Enterprise Setup: Configure NIM for on-premises deployment if needed
Required Environment Variables
- NVIDIA_API_KEY: Your NVIDIA API key for authentication
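As a quick orientation, here is a minimal sketch of constructing the service once the API key is set. The import path, model name, and constructor parameters are assumptions based on the OpenAILLMService parent class; see the API reference linked above for the exact signature.

```python
import os

# Assumed import paths; confirm against the Pipecat API reference.
from pipecat.services.nim.llm import NimLLMService
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

# Reads NVIDIA_API_KEY from the environment (export it in your shell first).
llm = NimLLMService(
    api_key=os.getenv("NVIDIA_API_KEY"),
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # example NIM-hosted model
)

# Context management: build an OpenAI-style context and let the service
# create the aggregator pair used in a Pipecat pipeline.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
context_aggregator = llm.create_context_aggregator(context)
```

In a full pipeline, the resulting aggregators typically wrap the service just as they do for the OpenAI service; the linked example shows this together with function calling.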