Description
🐛 Describe the bug
During model quantization for Ethos-U, Softmax ops are decomposed into atomic ops, and a max-subtraction is inserted before the Softmax to improve the activation quantization range. This substitution uses aten::amax, which is very slow on the NPU.
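For context, the decomposition described above can be sketched roughly as follows. This is a minimal illustration, not the actual Ethos-U pass: the max-subtraction step is where aten::amax enters the graph, and the tensor shape used below is hypothetical.

```python
import torch

def decomposed_softmax(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Max-subtraction inserted before Softmax: keeps the inputs to exp()
    # non-positive, bounding the activation range for quantization.
    # This is the step that lowers to aten::amax (the slow op on the NPU).
    x = x - torch.amax(x, dim=dim, keepdim=True)
    # Softmax decomposed into atomic ops: exp, sum, reciprocal, mul.
    e = torch.exp(x)
    s = torch.sum(e, dim=dim, keepdim=True)
    return e * torch.reciprocal(s)

# Hypothetical shape for illustration only; the repro script uses
# shapes taken from the whisper-tiny encoder.
x = torch.randn(2, 4, 64)
out = decomposed_softmax(x)
```

Numerically, the result matches `torch.softmax(x, dim=-1)`; the problem is purely the cost of the amax reduction on the NPU.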
Quantizing a toy model:
python softmax_script.py
Results show a very large inference time:
Batch Inference time 1057.09 ms
with aten::amax consuming 85% of the NPU cycles at a very low MAC utilization (Util%).
Note: the tensor sizes in the script are taken from the whisper-tiny encoder model.
Versions
PyTorch version: 2.9.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 26.3 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.6.4.2)
CMake version: version 4.2.1
Libc version: N/A
Python version: 3.12.10 (v3.12.10:0cc81280367, Apr 8 2025, 08:46:59) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-26.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A
CPU:
Apple M4 Max
Versions of relevant libraries:
[pip3] executorch==1.0.0
[pip3] numpy==2.3.4
[pip3] onnx==1.20.0
[pip3] onnx-ir==0.1.14
[pip3] onnxruntime==1.23.2
[pip3] onnxscript==0.5.7
[pip3] torch==2.9.0
[pip3] torchao==0.14.0
[pip3] torchaudio==2.9.0
[pip3] torchcodec==0.9.1
[pip3] torchmetrics==1.8.2
[pip3] torchvision==0.24.0
[conda] Could not collect
cc @digantdesai @freddan80 @per @zingo @oscarandersson8218 @mansnils @Sebastian-Larsson @robell