Overview

NimLLMService provides access to NVIDIA’s NIM language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management, with special handling for NVIDIA’s incremental token reporting and enterprise deployment.
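Because NimLLMService is a subclass of OpenAILLMService, constructing it and wiring up an OpenAI-style context follows the familiar pattern. The sketch below assumes the import paths pipecat.services.nim.llm and pipecat.processors.aggregators.openai_llm_context; the model name shown is only an illustrative NIM-hosted model:

```python
import os

from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.nim.llm import NimLLMService

# Create the service; the model name is an example of a NIM-hosted model.
llm = NimLLMService(
    api_key=os.getenv("NVIDIA_API_KEY"),
    model="nvidia/llama-3.1-nemotron-70b-instruct",
)

# Context management works the same way as with OpenAILLMService:
# the aggregator pair feeds user and assistant turns into the shared context.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
context_aggregator = llm.create_context_aggregator(context)
```

In a typical pipeline, the llm processor sits between context_aggregator.user() and context_aggregator.assistant(), with streamed response frames flowing downstream to TTS or transport processors.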

Installation

To use NVIDIA NIM services, install the required dependencies:
```bash
pip install "pipecat-ai[nim]"
```

Prerequisites

NVIDIA NIM Setup

Before using NVIDIA NIM LLM services, you need:
  1. NVIDIA Developer Account: Sign up at NVIDIA Developer Portal
  2. API Key: Generate an NVIDIA API key for NIM services
  3. Model Selection: Choose from available NIM-hosted models
  4. Enterprise Setup: Configure NIM for on-premises deployment if needed (a configuration sketch follows this list)
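For item 4, a self-hosted NIM exposes the same OpenAI-compatible API, so the service can be pointed at your own deployment by overriding the endpoint. This is a minimal sketch that assumes NimLLMService accepts a base_url parameter (inherited from OpenAILLMService) and that a NIM container is serving at the URL shown, which is hypothetical:

```python
import os

from pipecat.services.nim.llm import NimLLMService

# Hypothetical on-premises deployment: a NIM container exposing an
# OpenAI-compatible endpoint on the local network.
llm = NimLLMService(
    api_key=os.getenv("NVIDIA_API_KEY"),  # or a locally issued token, if your deployment requires one
    model="meta/llama-3.1-8b-instruct",   # whichever model the NIM container serves
    base_url="http://localhost:8000/v1",  # replace with your deployment's URL
)
```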

Required Environment Variables

  • NVIDIA_API_KEY: Your NVIDIA API key for authentication
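A common pattern is to read the key from the environment at startup and fail fast if it is missing; the check below is a suggestion, not something the service itself enforces:

```python
import os

# Fail early with a clear message instead of letting the first request fail with an auth error.
api_key = os.getenv("NVIDIA_API_KEY")
if not api_key:
    raise RuntimeError("NVIDIA_API_KEY is not set; export it before starting the app.")
```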