
Fix explicit sharding for moe models #3595

Open
Shuwen-Fang wants to merge 2 commits into main from explicitpp

Conversation

Collaborator

Shuwen-Fang commented on Apr 7, 2026

Description

This PR fixes explicit sharding for deepseek by specifying the correct sharding for expert weights, and adds tests with ds3-test covering both the sparse and dense matmul paths.
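
For readers unfamiliar with the underlying technique, the sketch below shows the general pattern of explicitly sharding MoE expert weights over an expert-parallel mesh axis with JAX. The mesh shape, axis names, and tensor sizes are illustrative assumptions and are not taken from this PR's actual change.

```python
# Minimal illustrative sketch (not this PR's code): explicitly sharding an
# MoE expert weight tensor over an "expert" mesh axis with JAX NamedSharding.
# Assumes 8 visible devices, e.g. XLA_FLAGS=--xla_force_host_platform_device_count=8.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = mesh_utils.create_device_mesh((2, 4))   # 2-way fsdp x 4-way expert (assumed)
mesh = Mesh(devices, axis_names=("fsdp", "expert"))

num_experts, d_model, d_ff = 8, 1024, 4096        # illustrative sizes
w_up = jnp.zeros((num_experts, d_model, d_ff), dtype=jnp.bfloat16)

# Shard the expert dimension over the "expert" axis and the model dimension
# over "fsdp"; replicate the feed-forward dimension.
expert_sharding = NamedSharding(mesh, P("expert", "fsdp", None))
w_up = jax.device_put(w_up, expert_sharding)
print(w_up.sharding)
```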

Tests

  • Verified pytest tests/integration/smoke/train_smoke_test.py::Train -v passed

Tested the following configs to validate correctness with deepseek3-test:

ici_fsdp_parallelism=2 \
ici_expert_parallelism=8 \
use_ring_of_experts=false \

ici_fsdp_parallelism=2 \
ici_expert_parallelism=8 \
use_ring_of_experts=false \

ici_fsdp_parallelism=4 \
ici_tensor_parallelism=4 \

ici_fsdp_parallelism=16 \

ici_fsdp_parallelism=8 \
ici_tensor_transpose_parallelism=2 \
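
As a rough illustration of how parallelism flags like these map onto a device mesh, here is a sketch for the first configuration above (fsdp=2, expert=8, i.e. 16 devices). The axis names and the mesh-construction call are assumptions for illustration; the real mesh setup lives inside the training framework's config handling.

```python
# Illustrative sketch only: builds the 2x8 mesh implied by
# ici_fsdp_parallelism=2 and ici_expert_parallelism=8. Requires 16 visible
# devices; axis names ("fsdp", "expert") are assumed, not the framework's.
import jax
from jax.experimental import mesh_utils
from jax.sharding import Mesh

ici_fsdp_parallelism = 2
ici_expert_parallelism = 8
assert jax.device_count() == ici_fsdp_parallelism * ici_expert_parallelism

devices = mesh_utils.create_device_mesh(
    (ici_fsdp_parallelism, ici_expert_parallelism))
mesh = Mesh(devices, axis_names=("fsdp", "expert"))
print(dict(mesh.shape))  # {'fsdp': 2, 'expert': 8}
```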

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.


codecov bot commented Apr 7, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


Shuwen-Fang changed the title from "temp" to "Fix explicit sharding for moe models" on Apr 7, 2026
Shuwen-Fang self-assigned this on Apr 8, 2026