From 02a61c26c044a338a13ca42f0e4fe4165619d47b Mon Sep 17 00:00:00 2001 From: Pranav Prashant Thombre Date: Sat, 21 Mar 2026 17:49:48 -0700 Subject: [PATCH 01/16] [docs] Add NeMo Automodel training guide Signed-off-by: Pranav Prashant Thombre --- docs/source/en/_toctree.yml | 2 + docs/source/en/training/nemo_automodel.md | 349 ++++++++++++++++++++++ 2 files changed, 351 insertions(+) create mode 100644 docs/source/en/training/nemo_automodel.md diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml index 6b1a7288d60f..af881ff12292 100644 --- a/docs/source/en/_toctree.yml +++ b/docs/source/en/_toctree.yml @@ -161,6 +161,8 @@ - local: training/ddpo title: Reinforcement learning training with DDPO title: Methods + - local: training/nemo_automodel + title: NeMo Automodel title: Training - isExpanded: false sections: diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md new file mode 100644 index 000000000000..30d2fa35d189 --- /dev/null +++ b/docs/source/en/training/nemo_automodel.md @@ -0,0 +1,349 @@ + + +# NeMo Automodel + +[NeMo Automodel](https://github.com/NVIDIA-NeMo/Automodel) is a PyTorch DTensor-native training library from NVIDIA for fine-tuning and pretraining diffusion models at scale. It uses [flow matching](https://arxiv.org/abs/2210.02747) for training and [FSDP2](https://pytorch.org/docs/stable/fsdp.html) for distributed parallelism, supporting single-node and multi-node training with YAML-driven configuration. + +NeMo Automodel integrates directly with Diffusers — it loads pretrained models from the Hugging Face Hub using Diffusers model classes and generates outputs via Diffusers pipelines. No checkpoint conversion is needed. + +### Why NeMo Automodel? + +- **Hugging Face native**: Train any Diffusers-format model from the Hub with no checkpoint conversion — day-0 support for new model releases. +- **Any scale**: The same YAML recipe and training script runs on 1 GPU or across hundreds of nodes. Parallelism is configuration, not code. +- **High performance**: FSDP2 distributed training with multiresolution bucketed dataloading and pre-encoded latent space training for maximum GPU utilization. +- **Hackable**: Linear training scripts with YAML configuration files. No hidden trainer abstractions — you can read and modify the entire training loop. +- **Open source**: Apache 2.0 licensed, NVIDIA-supported, and actively maintained. + +### Workflow overview + +```text +┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ +│ 1. Install │───>│ 2. Prepare │───>│ 3. Configure │───>│ 4. Train │───>│ 5. 
Generate │ +│ │ │ Data │ │ │ │ │ │ │ +│ pip install │ │ Encode to │ │ YAML recipe │ │ torchrun │ │ Run inference│ +│ or Docker │ │ .meta files │ │ │ │ │ │ with ckpt │ +└──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ +``` + +## Supported models + +| Model | Hugging Face ID | Task | Parameters | +|-------|----------------|------|------------| +| Wan 2.1 T2V 1.3B | [`Wan-AI/Wan2.1-T2V-1.3B-Diffusers`](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers) | Text-to-Video | 1.3B | +| FLUX.1-dev | [`black-forest-labs/FLUX.1-dev`](https://huggingface.co/black-forest-labs/FLUX.1-dev) | Text-to-Image | 12B | +| HunyuanVideo 1.5 | [`hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v) | Text-to-Video | 13B | + +Use the table below to pick the right model for your use case: + +| Use Case | Model | Why Choose It | +|----------|-------|---------------| +| **Video generation on limited hardware** | Wan 2.1 T2V 1.3B | Smallest model (1.3B params) — fast iteration, fits on a single A100 40GB | +| **High-quality image generation** | FLUX.1-dev | State-of-the-art text-to-image with 12B params and guidance-based control | +| **High-quality video generation** | HunyuanVideo 1.5 | Larger video model with condition-latent support for richer motion and detail | + +## Installation + +Install NeMo Automodel with pip. + +```bash +pip3 install nemo-automodel +``` + +Alternatively, use the pre-built Docker container which includes all dependencies. + +```bash +docker pull nvcr.io/nvidia/nemo-automodel:26.02.00 +docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/nemo-automodel:26.02.00 +``` + +> [!WARNING] +> **Docker users:** Checkpoints are lost when the container exits unless you bind-mount the checkpoint directory to the host. For example, add `-v /host/path/checkpoints:/workspace/checkpoints` to the `docker run` command. + +> [!TIP] +> For the full set of installation methods (including from source), see the [NeMo Automodel installation guide](https://docs.nvidia.com/nemo/automodel/latest/guides/installation.html). + +## Data preparation + +Diffusion training in NeMo Automodel operates in latent space. Raw images or videos must be preprocessed into `.meta` files containing VAE latents and text embeddings before training. This avoids re-encoding on every training step. + +Use the built-in preprocessing tool to encode your data. The tool automatically distributes work across all available GPUs. + +**Video preprocessing (Wan 2.1):** + +```bash +python -m tools.diffusion.preprocessing_multiprocess video \ + --video_dir /data/videos \ + --output_dir /cache \ + --processor wan \ + --resolution_preset 512p \ + --caption_format sidecar +``` + +**Image preprocessing (FLUX):** + +```bash +python -m tools.diffusion.preprocessing_multiprocess image \ + --image_dir /data/images \ + --output_dir /cache \ + --processor flux \ + --resolution_preset 512p +``` + +**Video preprocessing (HunyuanVideo):** + +```bash +python -m tools.diffusion.preprocessing_multiprocess video \ + --video_dir /data/videos \ + --output_dir /cache \ + --processor hunyuan \ + --target_frames 121 \ + --caption_format meta_json +``` + +### Output format + +Preprocessing produces a cache directory organized by resolution bucket. NeMo Automodel supports multiresolution training through bucketed sampling — samples are grouped by spatial resolution so each batch contains same-size samples, avoiding padding waste. 
+ +``` +/cache/ +├── 512x512/ # Resolution bucket +│ ├── .meta # VAE latents + text embeddings +│ ├── .meta +│ └── ... +├── 832x480/ # Another resolution bucket +│ └── ... +├── metadata.json # Global config (processor, model, total items) +└── metadata_shard_0000.json # Per-sample metadata (paths, resolutions, captions) +``` + +> [!TIP] +> For caption formats, input data requirements, and all available preprocessing arguments, see the [Diffusion Dataset Preparation](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/dataset.html) guide. + +## Training configuration + +Fine-tuning is driven by two components: + +1. **A recipe script** (e.g., [`finetune.py`](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/finetune/finetune.py)) — the Python entry point that orchestrates the training loop: loading the model, building the dataloader, running forward/backward passes, computing the flow matching loss, checkpointing, and logging. +2. **A YAML configuration file** — specifies all settings the recipe uses: which model to fine-tune, where the data lives, optimizer hyperparameters, parallelism strategy, and more. You customize training by editing this file rather than modifying code, allowing you to scale from 1 to hundreds of GPUs seamlessly. + +Any YAML field can also be overridden from the CLI: + +```bash +torchrun --nproc-per-node=8 examples/diffusion/finetune/finetune.py \ + -c examples/diffusion/finetune/wan2_1_t2v_flow.yaml \ + --optim.learning_rate 1e-5 \ + --step_scheduler.num_epochs 50 +``` + +Below is the annotated config for fine-tuning Wan 2.1 T2V 1.3B, with each section explained. + +```yaml +seed: 42 + +# ── Experiment tracking (optional) ────────────────────────────────────────── +# Weights & Biases integration for logging metrics, losses, and learning rates. +# Set mode: "disabled" to turn off. +wandb: + project: wan-t2v-flow-matching + mode: online + name: wan2_1_t2v_fm + +# ── Model ─────────────────────────────────────────────────────────────────── +# pretrained_model_name_or_path: any Hugging Face model ID or local path. +# mode: "finetune" loads pretrained weights; "pretrain" trains from scratch. +model: + pretrained_model_name_or_path: Wan-AI/Wan2.1-T2V-1.3B-Diffusers + mode: finetune + +# ── Training schedule ─────────────────────────────────────────────────────── +# global_batch_size: effective batch across all GPUs. +# Gradient accumulation is computed automatically: global / (local × num_gpus). +step_scheduler: + global_batch_size: 8 + local_batch_size: 1 + ckpt_every_steps: 1000 # Save a checkpoint every N steps + num_epochs: 100 + log_every: 2 # Log metrics every N steps + +# ── Data ──────────────────────────────────────────────────────────────────── +# _target_: the dataloader factory function. +# Use build_video_multiresolution_dataloader for video models (Wan, HunyuanVideo). +# Use build_text_to_image_multiresolution_dataloader for image models (FLUX). +# model_type: "wan" or "hunyuan" (selects the correct latent format). +# base_resolution: target resolution for multiresolution bucketing. 
+data: + dataloader: + _target_: nemo_automodel.components.datasets.diffusion.build_video_multiresolution_dataloader + cache_dir: PATH_TO_YOUR_DATA + model_type: wan + base_resolution: [512, 512] + dynamic_batch_size: false # When true, adjusts batch per bucket to maintain constant memory + shuffle: true + drop_last: false + num_workers: 0 + +# ── Optimizer ─────────────────────────────────────────────────────────────── +# learning_rate: 5e-6 is a good starting point for fine-tuning. +# Adjust weight_decay and betas for your dataset. +optim: + learning_rate: 5e-6 + optimizer: + weight_decay: 0.01 + betas: [0.9, 0.999] + +# ── Learning rate scheduler ───────────────────────────────────────────────── +# Supports cosine, linear, and constant schedules. +lr_scheduler: + lr_decay_style: cosine + lr_warmup_steps: 0 + min_lr: 1e-6 + +# ── Flow matching ─────────────────────────────────────────────────────────── +# adapter_type: model-specific adapter — must match the model: +# "simple" for Wan 2.1, "flux" for FLUX.1-dev, "hunyuan" for HunyuanVideo. +# timestep_sampling: "uniform" for Wan, "logit_normal" for FLUX and HunyuanVideo. +# flow_shift: shifts the flow schedule (model-dependent). +# i2v_prob: probability of image-to-video conditioning during training (video models). +flow_matching: + adapter_type: "simple" + adapter_kwargs: {} + timestep_sampling: "uniform" + logit_mean: 0.0 + logit_std: 1.0 + flow_shift: 3.0 + num_train_timesteps: 1000 + i2v_prob: 0.3 + use_loss_weighting: true + +# ── FSDP2 distributed training ────────────────────────────────────────────── +# dp_size: number of GPUs for data parallelism (typically = total GPUs on node). +# tp_size, cp_size, pp_size: tensor, context, and pipeline parallelism. +# For most fine-tuning, dp_size is all you need; leave others at 1. +fsdp: + tp_size: 1 + cp_size: 1 + pp_size: 1 + dp_replicate_size: 1 + dp_size: 8 + +# ── Checkpointing ────────────────────────────────────────────────────────── +# checkpoint_dir: where to save checkpoints (use a persistent path with Docker). +# restore_from: path to resume training from a previous checkpoint. +checkpoint: + enabled: true + checkpoint_dir: PATH_TO_YOUR_CKPT_DIR + model_save_format: torch_save + save_consolidated: false + restore_from: null +``` + +### Config field reference + +| Section | Required? | What to Change | +|---------|-----------|----------------| +| `model` | Yes | Set `pretrained_model_name_or_path` to the Hugging Face model ID. Set `mode: finetune` or `mode: pretrain`. | +| `step_scheduler` | Yes | `global_batch_size` is the effective batch size across all GPUs. `ckpt_every_steps` controls checkpoint frequency. Gradient accumulation is computed automatically. | +| `data` | Yes | Set `cache_dir` to the path containing your preprocessed `.meta` files. Change `_target_` and `model_type` for different models. | +| `optim` | Yes | `learning_rate: 5e-6` is a good default for fine-tuning. Adjust for your dataset and model. | +| `lr_scheduler` | Yes | Choose `cosine`, `linear`, or `constant` for `lr_decay_style`. Set `lr_warmup_steps` for gradual warmup. | +| `flow_matching` | Yes | `adapter_type` must match the model (`simple` for Wan, `flux` for FLUX, `hunyuan` for HunyuanVideo). See model-specific configs for `adapter_kwargs`. | +| `fsdp` | Yes | Set `dp_size` to the number of GPUs. For multi-node, set to total GPUs across all nodes. | +| `checkpoint` | Recommended | Set `checkpoint_dir` to a persistent path, especially in Docker. Use `restore_from` to resume from a previous checkpoint. 
| +| `wandb` | Optional | Configure to enable Weights & Biases experiment tracking. Set `mode: disabled` to turn off. | + +> [!NOTE] +> NeMo Automodel also supports **pretraining** diffusion models from randomly initialized weights. Set `mode: pretrain` in the model config. Pretraining example configs are available in the [NeMo Automodel examples](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/diffusion/pretrain). + +> [!TIP] +> Full example configs for all models are available in the [NeMo Automodel examples](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/diffusion/finetune). + +## Launch training + +**Single-node training:** + +```bash +torchrun --nproc-per-node=8 \ + examples/diffusion/finetune/finetune.py \ + -c examples/diffusion/finetune/wan2_1_t2v_flow.yaml +``` + +**Multi-node training** (run on each node, setting `NODE_RANK` accordingly): + +```bash +export MASTER_ADDR=node0.hostname +export MASTER_PORT=29500 +export NODE_RANK=0 # 0 on master, 1 on second node, etc. + +torchrun \ + --nnodes=2 \ + --nproc-per-node=8 \ + --node_rank=${NODE_RANK} \ + --rdzv_backend=c10d \ + --rdzv_endpoint=${MASTER_ADDR}:${MASTER_PORT} \ + examples/diffusion/finetune/finetune.py \ + -c examples/diffusion/finetune/wan2_1_t2v_flow_multinode.yaml +``` + +> [!NOTE] +> For multi-node training, set `fsdp.dp_size` in the YAML to the **total** number of GPUs across all nodes (e.g., 16 for 2 nodes with 8 GPUs each). + +## Generation + +After training, generate videos or images from text prompts using the fine-tuned checkpoint. + +**Wan 2.1 (single-GPU):** + +```bash +python examples/diffusion/generate/generate.py \ + -c examples/diffusion/generate/configs/generate_wan.yaml +``` + +**With a fine-tuned checkpoint:** + +```bash +python examples/diffusion/generate/generate.py \ + -c examples/diffusion/generate/configs/generate_wan.yaml \ + --model.checkpoint ./checkpoints/step_1000 \ + --inference.prompts '["A dog running on a beach"]' +``` + +Generation configs are also available for [FLUX](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/generate/configs/generate_flux.yaml) (images) and [HunyuanVideo](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/generate/configs/generate_hunyuan.yaml) (videos). + +## Diffusers integration + +NeMo Automodel is built on top of Diffusers and uses it as the backbone for model loading and inference. It loads models directly from the Hugging Face Hub using Diffusers model classes such as [`WanTransformer3DModel`], [`FluxTransformer2DModel`], and [`HunyuanVideoTransformer3DModel`], and generates outputs via Diffusers pipelines like [`WanPipeline`] and [`FluxPipeline`]. + +This integration provides several benefits for Diffusers users: + +- **No checkpoint conversion**: pretrained weights from the Hub work out of the box. Point `pretrained_model_name_or_path` at any Diffusers-format model ID and start training immediately. +- **Day-0 model support**: when a new diffusion model is added to Diffusers and uploaded to the Hub, it can be fine-tuned with NeMo Automodel without waiting for a dedicated training script. +- **Pipeline-compatible outputs**: fine-tuned checkpoints are saved in a format that can be loaded directly back into Diffusers pipelines for inference, sharing on the Hub, or further optimization with tools like quantization and compilation. 
+- **Scalable training for Diffusers models**: NeMo Automodel adds distributed training capabilities (FSDP2, multi-node, multiresolution bucketing) that go beyond what the built-in Diffusers training scripts provide, while keeping the same model and pipeline interfaces. +- **Shared ecosystem**: any model, LoRA adapter, or pipeline component from the Diffusers ecosystem remains compatible throughout the training and inference workflow. + +## Hardware requirements + +| Component | Minimum | Recommended | +|-----------|---------|-------------| +| GPU | A100 40GB | A100 80GB / H100 | +| GPUs | 4 | 8+ | +| RAM | 128 GB | 256 GB+ | +| Storage | 500 GB SSD | 2 TB NVMe | + +## Resources + +- [NeMo Automodel GitHub](https://github.com/NVIDIA-NeMo/Automodel) +- [Diffusion Fine-Tuning Guide](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/finetune.html) +- [Diffusion Dataset Preparation](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/dataset.html) +- [Diffusion Model Coverage](https://docs.nvidia.com/nemo/automodel/latest/model-coverage/diffusion.html) +- [NeMo Automodel for Transformers (LLM/VLM fine-tuning)](https://huggingface.co/docs/transformers/en/community_integrations/nemo_automodel_finetuning) From 7f47bbc6c833ca835db13111f16589289f51cbfd Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Mon, 23 Mar 2026 18:24:45 -0700 Subject: [PATCH 02/16] Update docs/source/en/training/nemo_automodel.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 30d2fa35d189..4749d2607f22 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License. # NeMo Automodel -[NeMo Automodel](https://github.com/NVIDIA-NeMo/Automodel) is a PyTorch DTensor-native training library from NVIDIA for fine-tuning and pretraining diffusion models at scale. It uses [flow matching](https://arxiv.org/abs/2210.02747) for training and [FSDP2](https://pytorch.org/docs/stable/fsdp.html) for distributed parallelism, supporting single-node and multi-node training with YAML-driven configuration. +[NeMo Automodel](https://github.com/NVIDIA-NeMo/Automodel) is a PyTorch DTensor-native training library from NVIDIA for fine-tuning and pretraining diffusion models at scale. It uses [flow matching](https://huggingface.co/papers/2210.02747) for training and [FSDP2](https://pytorch.org/docs/stable/fsdp.html) for distributed parallelism on single-node and multi-node setups. NeMo Automodel integrates directly with Diffusers — it loads pretrained models from the Hugging Face Hub using Diffusers model classes and generates outputs via Diffusers pipelines. No checkpoint conversion is needed. 
From ead7ff9ebb84e226611c6a31e65100eb0eac5ad2 Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Mon, 23 Mar 2026 18:25:10 -0700 Subject: [PATCH 03/16] Update docs/source/en/training/nemo_automodel.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 4749d2607f22..8b9540c0866e 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -14,7 +14,7 @@ specific language governing permissions and limitations under the License. [NeMo Automodel](https://github.com/NVIDIA-NeMo/Automodel) is a PyTorch DTensor-native training library from NVIDIA for fine-tuning and pretraining diffusion models at scale. It uses [flow matching](https://huggingface.co/papers/2210.02747) for training and [FSDP2](https://pytorch.org/docs/stable/fsdp.html) for distributed parallelism on single-node and multi-node setups. -NeMo Automodel integrates directly with Diffusers — it loads pretrained models from the Hugging Face Hub using Diffusers model classes and generates outputs via Diffusers pipelines. No checkpoint conversion is needed. +NeMo Automodel integrates directly with Diffusers, and doesn't require checkpoint conversion. It loads pretrained models from the Hugging Face Hub using Diffusers model classes and generates outputs with the [`DiffusionPipeline`]. ### Why NeMo Automodel? From 92007e3cf3b3c0a7ae641ea61db32a69b75fb0f1 Mon Sep 17 00:00:00 2001 From: linnan wang Date: Tue, 24 Mar 2026 16:58:54 +0800 Subject: [PATCH 04/16] adding contacts into the readme --- docs/source/en/training/nemo_automodel.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 8b9540c0866e..47db76a8bd66 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -340,6 +340,12 @@ This integration provides several benefits for Diffusers users: | RAM | 128 GB | 256 GB+ | | Storage | 500 GB SSD | 2 TB NVMe | +## NVIDIA Team + +- Pranav Prashant Thombre, pthombre@nvidia.com +- Linnan Wang, linnanw@nvidia.com +- Alexandros Koumparoulis, akoumparouli@nvidia.com + ## Resources - [NeMo Automodel GitHub](https://github.com/NVIDIA-NeMo/Automodel) From a9c1b71fe5db497341ef66d5d037eae2732cec76 Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:45:54 -0700 Subject: [PATCH 05/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 47db76a8bd66..55f06a220c2f 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -53,7 +53,7 @@ Use the table below to pick the right model for your use case: ## Installation -Install NeMo Automodel with pip. +Install NeMo Automodel with pip. For the full set of installation methods (including from source), see the [NeMo Automodel installation guide](https://docs.nvidia.com/nemo/automodel/latest/guides/installation.html). 
```bash pip3 install nemo-automodel From 22a96c8a16e6a2035849e3e26b02a13a2fbde58e Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:46:15 -0700 Subject: [PATCH 06/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 55f06a220c2f..b3172fcc8eaa 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -67,7 +67,7 @@ docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/nemo-automodel:26.02 ``` > [!WARNING] -> **Docker users:** Checkpoints are lost when the container exits unless you bind-mount the checkpoint directory to the host. For example, add `-v /host/path/checkpoints:/workspace/checkpoints` to the `docker run` command. +> Checkpoints are lost when the container exits unless you bind-mount the checkpoint directory to the host. For example, add `-v /host/path/checkpoints:/workspace/checkpoints` to the `docker run` command. > [!TIP] > For the full set of installation methods (including from source), see the [NeMo Automodel installation guide](https://docs.nvidia.com/nemo/automodel/latest/guides/installation.html). From e9a80f5ff2e83ceb3ff3853de540a44aa26900cb Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:46:28 -0700 Subject: [PATCH 07/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index b3172fcc8eaa..d76a668fd531 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -69,8 +69,6 @@ docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/nemo-automodel:26.02 > [!WARNING] > Checkpoints are lost when the container exits unless you bind-mount the checkpoint directory to the host. For example, add `-v /host/path/checkpoints:/workspace/checkpoints` to the `docker run` command. -> [!TIP] -> For the full set of installation methods (including from source), see the [NeMo Automodel installation guide](https://docs.nvidia.com/nemo/automodel/latest/guides/installation.html). ## Data preparation From 18d324143c3c9aca7073ee8c3e9d049902f5a686 Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:46:43 -0700 Subject: [PATCH 08/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index d76a668fd531..3a8198baf512 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -72,7 +72,7 @@ docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/nemo-automodel:26.02 ## Data preparation -Diffusion training in NeMo Automodel operates in latent space. Raw images or videos must be preprocessed into `.meta` files containing VAE latents and text embeddings before training. This avoids re-encoding on every training step. +NeMo Automodel trains diffusion models in latent space. 
Raw images or videos must be preprocessed into `.meta` files containing VAE latents and text embeddings before training. This avoids re-encoding on every training step. Use the built-in preprocessing tool to encode your data. The tool automatically distributes work across all available GPUs. From 43c98204c13f20b1d1a394154ce2db80c16808a7 Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:46:57 -0700 Subject: [PATCH 09/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 3a8198baf512..c178c1a9252f 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -110,7 +110,7 @@ python -m tools.diffusion.preprocessing_multiprocess video \ ### Output format -Preprocessing produces a cache directory organized by resolution bucket. NeMo Automodel supports multiresolution training through bucketed sampling — samples are grouped by spatial resolution so each batch contains same-size samples, avoiding padding waste. +Preprocessing produces a cache directory organized by resolution bucket. NeMo Automodel supports multi-resolution training through bucketed sampling, Samples are grouped by spatial resolution so each batch contains same-size samples, avoiding padding waste. ``` /cache/ From 93e5c8536f39f8f833e53e71b5628afb7c1f60e3 Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:47:11 -0700 Subject: [PATCH 10/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index c178c1a9252f..cefa678e2474 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -125,7 +125,7 @@ Preprocessing produces a cache directory organized by resolution bucket. NeMo Au ``` > [!TIP] -> For caption formats, input data requirements, and all available preprocessing arguments, see the [Diffusion Dataset Preparation](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/dataset.html) guide. +> See the [Diffusion Dataset Preparation](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/dataset.html) guide for caption formats, input data requirements, and all available preprocessing arguments. ## Training configuration From e498fdfff358dc3fb971e76e4baa12b16a164d15 Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:47:23 -0700 Subject: [PATCH 11/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index cefa678e2474..2dff7a2d708f 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -131,7 +131,7 @@ Preprocessing produces a cache directory organized by resolution bucket. NeMo Au Fine-tuning is driven by two components: -1. 
**A recipe script** (e.g., [`finetune.py`](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/finetune/finetune.py)) — the Python entry point that orchestrates the training loop: loading the model, building the dataloader, running forward/backward passes, computing the flow matching loss, checkpointing, and logging. +1. A recipe script ([finetune.py](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/finetune/finetune.py)) is a Python entry point that contains the training loop: loading the model, building the dataloader, running forward/backward passes, computing the flow matching loss, checkpointing, and logging. 2. **A YAML configuration file** — specifies all settings the recipe uses: which model to fine-tune, where the data lives, optimizer hyperparameters, parallelism strategy, and more. You customize training by editing this file rather than modifying code, allowing you to scale from 1 to hundreds of GPUs seamlessly. Any YAML field can also be overridden from the CLI: From 27cc64b3a9f173939aa42e29cea064888bf46300 Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:47:34 -0700 Subject: [PATCH 12/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 2dff7a2d708f..e5882f8c8ee6 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -132,7 +132,7 @@ Preprocessing produces a cache directory organized by resolution bucket. NeMo Au Fine-tuning is driven by two components: 1. A recipe script ([finetune.py](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/finetune/finetune.py)) is a Python entry point that contains the training loop: loading the model, building the dataloader, running forward/backward passes, computing the flow matching loss, checkpointing, and logging. -2. **A YAML configuration file** — specifies all settings the recipe uses: which model to fine-tune, where the data lives, optimizer hyperparameters, parallelism strategy, and more. You customize training by editing this file rather than modifying code, allowing you to scale from 1 to hundreds of GPUs seamlessly. +2. A YAML configuration file specifies all settings the recipe uses: which model to fine-tune, where the data lives, optimizer hyperparameters, parallelism strategy, and more. You customize training by editing this file rather than modifying code, allowing you to scale from 1 to hundreds of GPUs. Any YAML field can also be overridden from the CLI: From 842de1a26c4baf728f63684806544200c09a24ce Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:47:50 -0700 Subject: [PATCH 13/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index e5882f8c8ee6..4a442949243d 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -247,6 +247,8 @@ checkpoint: ### Config field reference +The table below lists the minimal required configs. 
See the [NeMo Automodel examples](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/diffusion/finetune) have full example configs for all models. + | Section | Required? | What to Change | |---------|-----------|----------------| | `model` | Yes | Set `pretrained_model_name_or_path` to the Hugging Face model ID. Set `mode: finetune` or `mode: pretrain`. | From 5249cd07e81a7886918cf63fc470ac87014e29af Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:48:01 -0700 Subject: [PATCH 14/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 4a442949243d..57731341d690 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -264,8 +264,6 @@ The table below lists the minimal required configs. See the [NeMo Automodel exam > [!NOTE] > NeMo Automodel also supports **pretraining** diffusion models from randomly initialized weights. Set `mode: pretrain` in the model config. Pretraining example configs are available in the [NeMo Automodel examples](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/diffusion/pretrain). -> [!TIP] -> Full example configs for all models are available in the [NeMo Automodel examples](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/diffusion/finetune). ## Launch training From a434c5988f4a9dab5d4b79f81356435b5e8fca5b Mon Sep 17 00:00:00 2001 From: Pranav Thombre Date: Fri, 27 Mar 2026 16:48:19 -0700 Subject: [PATCH 15/16] Apply suggestion from @stevhliu Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/training/nemo_automodel.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 57731341d690..2f4dbf797e2b 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -261,8 +261,6 @@ The table below lists the minimal required configs. See the [NeMo Automodel exam | `checkpoint` | Recommended | Set `checkpoint_dir` to a persistent path, especially in Docker. Use `restore_from` to resume from a previous checkpoint. | | `wandb` | Optional | Configure to enable Weights & Biases experiment tracking. Set `mode: disabled` to turn off. | -> [!NOTE] -> NeMo Automodel also supports **pretraining** diffusion models from randomly initialized weights. Set `mode: pretrain` in the model config. Pretraining example configs are available in the [NeMo Automodel examples](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/diffusion/pretrain). ## Launch training From 3dd1b44a4ac61e2f1edb567549760948bb1e5aaa Mon Sep 17 00:00:00 2001 From: Pranav Prashant Thombre Date: Fri, 27 Mar 2026 17:06:02 -0700 Subject: [PATCH 16/16] Address CR comments Signed-off-by: Pranav Prashant Thombre --- docs/source/en/training/nemo_automodel.md | 143 +++++++++++++--------- 1 file changed, 85 insertions(+), 58 deletions(-) diff --git a/docs/source/en/training/nemo_automodel.md b/docs/source/en/training/nemo_automodel.md index 2f4dbf797e2b..55c4cdfcacf1 100644 --- a/docs/source/en/training/nemo_automodel.md +++ b/docs/source/en/training/nemo_automodel.md @@ -12,46 +12,30 @@ specific language governing permissions and limitations under the License. 
# NeMo Automodel -[NeMo Automodel](https://github.com/NVIDIA-NeMo/Automodel) is a PyTorch DTensor-native training library from NVIDIA for fine-tuning and pretraining diffusion models at scale. It uses [flow matching](https://huggingface.co/papers/2210.02747) for training and [FSDP2](https://pytorch.org/docs/stable/fsdp.html) for distributed parallelism on single-node and multi-node setups. +[NeMo Automodel](https://github.com/NVIDIA-NeMo/Automodel) is a PyTorch DTensor-native training library from NVIDIA for fine-tuning and pretraining diffusion models at scale. It is **Hugging Face native** — train any Diffusers-format model from the Hub with no checkpoint conversion. The same YAML recipe and hackable training script runs on **any scale**, from 1 GPU to hundreds of nodes, with [FSDP2](https://pytorch.org/docs/stable/fsdp.html) distributed training, multiresolution bucketed dataloading, and pre-encoded latent space training for **maximum GPU utilization**. It uses [flow matching](https://huggingface.co/papers/2210.02747) for training and is fully open source (Apache 2.0), NVIDIA-supported, and actively maintained. -NeMo Automodel integrates directly with Diffusers, and doesn't require checkpoint conversion. It loads pretrained models from the Hugging Face Hub using Diffusers model classes and generates outputs with the [`DiffusionPipeline`]. +NeMo Automodel integrates directly with Diffusers. It loads pretrained models from the Hugging Face Hub using Diffusers model classes and generates outputs with the [`DiffusionPipeline`]. -### Why NeMo Automodel? - -- **Hugging Face native**: Train any Diffusers-format model from the Hub with no checkpoint conversion — day-0 support for new model releases. -- **Any scale**: The same YAML recipe and training script runs on 1 GPU or across hundreds of nodes. Parallelism is configuration, not code. -- **High performance**: FSDP2 distributed training with multiresolution bucketed dataloading and pre-encoded latent space training for maximum GPU utilization. -- **Hackable**: Linear training scripts with YAML configuration files. No hidden trainer abstractions — you can read and modify the entire training loop. -- **Open source**: Apache 2.0 licensed, NVIDIA-supported, and actively maintained. - -### Workflow overview - -```text -┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ -│ 1. Install │───>│ 2. Prepare │───>│ 3. Configure │───>│ 4. Train │───>│ 5. Generate │ -│ │ │ Data │ │ │ │ │ │ │ -│ pip install │ │ Encode to │ │ YAML recipe │ │ torchrun │ │ Run inference│ -│ or Docker │ │ .meta files │ │ │ │ │ │ with ckpt │ -└──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ -``` +The typical workflow is to install NeMo Automodel (pip or Docker), prepare your data by encoding it into `.meta` files, configure a YAML recipe, launch training with `torchrun`, and run inference with the resulting checkpoint. 
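Because the checkpoint stays in Diffusers format, that last inference step can also be run with a regular Diffusers pipeline instead of the bundled generation script. The snippet below is a minimal sketch for a fine-tuned Wan 2.1 checkpoint; the checkpoint path is a placeholder, and the exact layout of the saved weights depends on your `checkpoint` settings (for example, whether you save a consolidated checkpoint with `save_consolidated: true`).

```python
import torch
from diffusers import WanPipeline, WanTransformer3DModel
from diffusers.utils import export_to_video

# Hypothetical path to a consolidated, Diffusers-format transformer checkpoint
# written by NeMo Automodel -- adjust to match your `checkpoint` settings.
finetuned_transformer = WanTransformer3DModel.from_pretrained(
    "./checkpoints/step_1000/consolidated", torch_dtype=torch.bfloat16
)

# Reuse the rest of the original pipeline (VAE, text encoder, scheduler) from the Hub.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
    transformer=finetuned_transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

video = pipe(prompt="A dog running on a beach", num_frames=81, num_inference_steps=30).frames[0]
export_to_video(video, "finetuned_sample.mp4", fps=16)
```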
## Supported models -| Model | Hugging Face ID | Task | Parameters | -|-------|----------------|------|------------| -| Wan 2.1 T2V 1.3B | [`Wan-AI/Wan2.1-T2V-1.3B-Diffusers`](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers) | Text-to-Video | 1.3B | -| FLUX.1-dev | [`black-forest-labs/FLUX.1-dev`](https://huggingface.co/black-forest-labs/FLUX.1-dev) | Text-to-Image | 12B | -| HunyuanVideo 1.5 | [`hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v) | Text-to-Video | 13B | +| Model | Hugging Face ID | Task | Parameters | Use case | +|-------|----------------|------|------------|----------| +| Wan 2.1 T2V 1.3B | [Wan-AI/Wan2.1-T2V-1.3B-Diffusers](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers) | Text-to-Video | 1.3B | video generation on limited hardware (fits on single 40GB A100) | +| FLUX.1-dev | [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) | Text-to-Image | 12B | high-quality image generation | +| HunyuanVideo 1.5 | [hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v) | Text-to-Video | 13B | high-quality video generation | -Use the table below to pick the right model for your use case: +## Installation -| Use Case | Model | Why Choose It | -|----------|-------|---------------| -| **Video generation on limited hardware** | Wan 2.1 T2V 1.3B | Smallest model (1.3B params) — fast iteration, fits on a single A100 40GB | -| **High-quality image generation** | FLUX.1-dev | State-of-the-art text-to-image with 12B params and guidance-based control | -| **High-quality video generation** | HunyuanVideo 1.5 | Larger video model with condition-latent support for richer motion and detail | +### Hardware requirements -## Installation +| Component | Minimum | Recommended | +|-----------|---------|-------------| +| GPU | A100 40GB | A100 80GB / H100 | +| GPUs | 4 | 8+ | +| RAM | 128 GB | 256 GB+ | +| Storage | 500 GB SSD | 2 TB NVMe | Install NeMo Automodel with pip. For the full set of installation methods (including from source), see the [NeMo Automodel installation guide](https://docs.nvidia.com/nemo/automodel/latest/guides/installation.html). @@ -76,7 +60,12 @@ NeMo Automodel trains diffusion models in latent space. Raw images or videos mus Use the built-in preprocessing tool to encode your data. The tool automatically distributes work across all available GPUs. -**Video preprocessing (Wan 2.1):** + + + +The video preprocessing command is the same for both Wan 2.1 and HunyuanVideo, but the flags differ. Wan 2.1 uses `--processor wan` with `--resolution_preset` and `--caption_format sidecar`, while HunyuanVideo uses `--processor hunyuan` with `--target_frames` to set the frame count and `--caption_format meta_json`. 
+ +**Wan 2.1:** ```bash python -m tools.diffusion.preprocessing_multiprocess video \ @@ -87,17 +76,7 @@ python -m tools.diffusion.preprocessing_multiprocess video \ --caption_format sidecar ``` -**Image preprocessing (FLUX):** - -```bash -python -m tools.diffusion.preprocessing_multiprocess image \ - --image_dir /data/images \ - --output_dir /cache \ - --processor flux \ - --resolution_preset 512p -``` - -**Video preprocessing (HunyuanVideo):** +**HunyuanVideo:** ```bash python -m tools.diffusion.preprocessing_multiprocess video \ @@ -108,6 +87,20 @@ python -m tools.diffusion.preprocessing_multiprocess video \ --caption_format meta_json ``` + + + +```bash +python -m tools.diffusion.preprocessing_multiprocess image \ + --image_dir /data/images \ + --output_dir /cache \ + --processor flux \ + --resolution_preset 512p +``` + + + + ### Output format Preprocessing produces a cache directory organized by resolution bucket. NeMo Automodel supports multi-resolution training through bucketed sampling, Samples are grouped by spatial resolution so each batch contains same-size samples, avoiding padding waste. @@ -265,7 +258,8 @@ The table below lists the minimal required configs. See the [NeMo Automodel exam ## Launch training -**Single-node training:** + + ```bash torchrun --nproc-per-node=8 \ @@ -273,7 +267,10 @@ torchrun --nproc-per-node=8 \ -c examples/diffusion/finetune/wan2_1_t2v_flow.yaml ``` -**Multi-node training** (run on each node, setting `NODE_RANK` accordingly): + + + +Run the following on each node, setting `NODE_RANK` accordingly: ```bash export MASTER_ADDR=node0.hostname @@ -293,18 +290,22 @@ torchrun \ > [!NOTE] > For multi-node training, set `fsdp.dp_size` in the YAML to the **total** number of GPUs across all nodes (e.g., 16 for 2 nodes with 8 GPUs each). + + + ## Generation After training, generate videos or images from text prompts using the fine-tuned checkpoint. -**Wan 2.1 (single-GPU):** + + ```bash python examples/diffusion/generate/generate.py \ -c examples/diffusion/generate/configs/generate_wan.yaml ``` -**With a fine-tuned checkpoint:** +With a fine-tuned checkpoint: ```bash python examples/diffusion/generate/generate.py \ @@ -313,7 +314,42 @@ python examples/diffusion/generate/generate.py \ --inference.prompts '["A dog running on a beach"]' ``` -Generation configs are also available for [FLUX](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/generate/configs/generate_flux.yaml) (images) and [HunyuanVideo](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/generate/configs/generate_hunyuan.yaml) (videos). 
+ + + +```bash +python examples/diffusion/generate/generate.py \ + -c examples/diffusion/generate/configs/generate_flux.yaml +``` + +With a fine-tuned checkpoint: + +```bash +python examples/diffusion/generate/generate.py \ + -c examples/diffusion/generate/configs/generate_flux.yaml \ + --model.checkpoint ./checkpoints/step_1000 \ + --inference.prompts '["A dog running on a beach"]' +``` + + + + +```bash +python examples/diffusion/generate/generate.py \ + -c examples/diffusion/generate/configs/generate_hunyuan.yaml +``` + +With a fine-tuned checkpoint: + +```bash +python examples/diffusion/generate/generate.py \ + -c examples/diffusion/generate/configs/generate_hunyuan.yaml \ + --model.checkpoint ./checkpoints/step_1000 \ + --inference.prompts '["A dog running on a beach"]' +``` + + + ## Diffusers integration @@ -327,15 +363,6 @@ This integration provides several benefits for Diffusers users: - **Scalable training for Diffusers models**: NeMo Automodel adds distributed training capabilities (FSDP2, multi-node, multiresolution bucketing) that go beyond what the built-in Diffusers training scripts provide, while keeping the same model and pipeline interfaces. - **Shared ecosystem**: any model, LoRA adapter, or pipeline component from the Diffusers ecosystem remains compatible throughout the training and inference workflow. -## Hardware requirements - -| Component | Minimum | Recommended | -|-----------|---------|-------------| -| GPU | A100 40GB | A100 80GB / H100 | -| GPUs | 4 | 8+ | -| RAM | 128 GB | 256 GB+ | -| Storage | 500 GB SSD | 2 TB NVMe | - ## NVIDIA Team - Pranav Prashant Thombre, pthombre@nvidia.com