
support bf16 moe tp permute, group_gemm, unpermute#7194

Open
ckl117 wants to merge 1 commit into PaddlePaddle:release/2.5 from ckl117:25_bf16_moe_deepgemm

Conversation

Collaborator

@ckl117 ckl117 commented Apr 3, 2026

Motivation

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests; if no unit tests are added, explain the reason in this PR.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Apr 3, 2026

Thanks for your contribution!

@codecov-commenter

codecov-commenter commented Apr 3, 2026

Codecov Report

❌ Patch coverage is 40.00000% with 18 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (release/2.5@5666993).

Files with missing lines | Patch % | Lines
...l_executor/layers/moe/fused_moe_cutlass_backend.py | 40.00% | 13 Missing and 5 partials ⚠️
Additional details and impacted files
@@              Coverage Diff               @@
##             release/2.5    #7194   +/-   ##
==============================================
  Coverage               ?   69.47%           
==============================================
  Files                  ?      390           
  Lines                  ?    54384           
  Branches               ?     8575           
==============================================
  Hits                   ?    37786           
  Misses                 ?    13869           
  Partials               ?     2729           
Flag | Coverage | Δ
GPU | 69.47% <40.00%> | (?)

Flags with carried forward coverage won't be shown.



@fastdeploy-bot fastdeploy-bot left a comment


🤖 AI Code Review | 2026-04-07 14:42 CST

📋 Review Summary

PR overview: adds bf16 support for moe_permute, group_gemm, and moe_unpermute to the Cutlass MoE backend; the new code path is gated by the FD_USE_PHI_MOE_PERMUTE environment variable.
Scope of change: model_executor/layers/moe/
Impact tags: OP, Models
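As a minimal sketch of what an environment-variable gate like this typically looks like (the helper name below is hypothetical; only the FD_USE_PHI_MOE_PERMUTE variable name comes from the PR):

```python
import os

def use_phi_moe_permute() -> bool:
    # "1" enables the new bf16 permute/group_gemm/unpermute path;
    # anything else (or an unset variable) keeps the default path.
    return os.environ.get("FD_USE_PHI_MOE_PERMUTE", "0") == "1"
```

Gating behind an env var lets the new kernels be rolled out without changing call sites, at the cost of an extra configuration knob to document.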

📝 PR Convention Check

The PR title is missing a valid feature tag.

Suggested title (copy-ready):

  • [OP] support bf16 moe tp permute, group_gemm, unpermute

Description suggestions: add the following to the Motivation and Modifications sections:

  • Motivation: explain why the Cutlass backend needs a bf16 MoE permute path
  • Modifications: describe the new deep_batch_gemm function and the design of the FD_USE_PHI_MOE_PERMUTE branch

Issues

Level | File | Summary
🟡 Suggestion | fused_moe_cutlass_backend.py:59 | paddlefleet_ops may be None; no defensive check
🟡 Suggestion | fused_moe_cutlass_backend.py:352 | permute_scale variable is unused

Overall Assessment

The code logic is correct and mirrors the implementation pattern in fused_moe_deepgemm_backend.py. Adding a defensive check on paddlefleet_ops would improve robustness. Test coverage exists in tests/layers/test_deepgemm_fused_moe.py.
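For readers unfamiliar with the pattern under review, here is an illustrative NumPy sketch of the top-1 permute → grouped GEMM → unpermute flow (the PR itself uses Paddle kernels such as m_grouped_bf16_gemm_nn_contiguous; the function and variable names below are hypothetical):

```python
import numpy as np

def moe_forward_sketch(x, expert_weights, expert_idx):
    """x: [tokens, hidden]; expert_weights: [experts, hidden, out];
    expert_idx: [tokens], the expert chosen for each token (top-1 routing)."""
    # Permute: sort tokens so that tokens routed to the same expert are contiguous.
    order = np.argsort(expert_idx, kind="stable")
    permuted = x[order]
    sorted_experts = expert_idx[order]
    # Grouped GEMM: one matmul per expert over its contiguous slice of tokens.
    out = np.empty((x.shape[0], expert_weights.shape[-1]), dtype=x.dtype)
    for e in range(expert_weights.shape[0]):
        mask = sorted_experts == e
        if mask.any():
            out[mask] = permuted[mask] @ expert_weights[e]
    # Unpermute: scatter the results back into the original token order.
    result = np.empty_like(out)
    result[order] = out
    return result
```

The real kernels fuse these steps and operate on bf16 tensors; the sketch only shows why the permutation makes the per-expert GEMMs contiguous.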


def deep_batch_gemm(x, y, expert_idx_per_token):
    out = paddle.empty([x.shape[0], y.shape[-1]], dtype=x.dtype)
    paddlefleet_ops.deep_gemm.m_grouped_bf16_gemm_nn_contiguous(x, y, out, expert_idx_per_token)
    return out


🟡 Suggestion: paddlefleet_ops may be None; no defensive check

paddlefleet_ops is imported via try_import(["paddlefleet.ops"]), which returns None when the paddlefleet module is unavailable. If a user enables FD_USE_PHI_MOE_PERMUTE in an environment without paddlefleet installed, this line raises an AttributeError.

Suggested defensive check:

def deep_batch_gemm(x, y, expert_idx_per_token):
    if paddlefleet_ops is None:
        raise RuntimeError(
            "paddlefleet.ops is required for FD_USE_PHI_MOE_PERMUTE=1. "
            "Please install paddlefleet or disable this feature."
        )
    out = paddle.empty([x.shape[0], y.shape[-1]], dtype=x.dtype)
    paddlefleet_ops.deep_gemm.m_grouped_bf16_gemm_nn_contiguous(x, y, out, expert_idx_per_token)
    return out

permute_input,
permute_indices_per_token, # == zipped_expertwise_rowmap
topk_weights,
permute_scale,


🟡 Suggestion: permute_scale variable is unused

The permute_scale returned by moe_permute is never used in the subsequent code. If it is genuinely not needed, replace it with _ to make the intentional discard explicit:

(
    permute_input,
    permute_indices_per_token,
    topk_weights,
    _,  # permute_scale not used in bf16 path
    expert_idx_per_token,
) = paddle.nn.functional.moe_permute(...)

Note: the same pattern also occurs at line 423.
