NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and provides LLM-specific capabilities such as:

- Disaggregated prefill & decode inference – Maximizes GPU throughput and lets you trade off throughput against latency.
- Dynamic GPU scheduling – Optimizes performance in response to fluctuating demand.
- LLM-aware request routing – Eliminates unnecessary KV cache re-computation.
- Accelerated data transfer – Reduces inference response time using NIXL.
- KV cache offloading – Leverages multiple memory hierarchies for higher system throughput.
Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first (open-source software) development approach.
The following examples require a few system-level packages. Ubuntu 24.04 with an x86_64 CPU is recommended; see support_matrix.md.

```bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0

python3 -m venv venv
source venv/bin/activate

pip install "ai-dynamo[all]"
```
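As a quick sanity check that the install succeeded, you can print the CLI's help text (this assumes the `dynamo` CLI supports the conventional `--help` flag):

```bash
# Inside the activated venv, the ai-dynamo package should place the
# `dynamo` CLI on PATH; printing its help verifies the install.
dynamo --help
```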
> **Note:** TensorRT-LLM support is currently available on a branch.
To run a model and interact with it locally, you can call `dynamo run` with a Hugging Face model. `dynamo run` supports several backends, including `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`.

```bash
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```
```
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
```
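The other listed backends are selected the same way. For example, assuming the `sglang` backend is installed in your environment, the same `out=` selector should apply (a sketch extrapolated from the syntax above, not verified output):

```bash
# Hypothetical: select the sglang backend via the same out= selector
dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```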
Dynamo provides a simple way to spin up a local set of inference components, including:

- OpenAI-compatible frontend – High-performance, OpenAI-compatible HTTP API server written in Rust.
- Basic and KV-aware router – Routes and load-balances traffic to a set of workers.
- Workers – Set of pre-configured LLM serving engines.

To run a minimal configuration, you can use a pre-configured example.
First start the Dynamo Distributed Runtime services:

```bash
docker compose -f deploy/docker-compose.yml up -d
```

Next, serve a minimal configuration.
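The exact command depends on which example you use. As a sketch, assuming the repository's aggregated LLM example (the `examples/llm` directory, the `graphs.agg:Frontend` graph, and `configs/agg.yaml` are assumptions about the repo layout, not confirmed here):

```bash
# Hypothetical paths and names from the repo's example layout:
# serve an aggregated graph (frontend + router + single worker)
cd examples/llm
dynamo serve graphs.agg:Frontend -f ./configs/agg.yaml
```

Once the frontend is up, it should accept standard OpenAI-style requests. A minimal test against the chat completions endpoint (the port 8000 and model name below are assumptions and may differ in your setup):

```bash
# Hypothetical: default port and served model name may differ
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "max_tokens": 64
  }'
```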