
Adaptive Tokenization for Vision Transformer PDE Simulation

This repository contains the partial code needed to understand the iterative patchification concept introduced in the manuscript "Adaptive Tokenization for Vision Transformer PDE Simulation", submitted to the ICLR 2026 Workshop on AI&PDE.

heads.py provides helper functions that evaluate a gradient map of the input, adaptively partition the input signal on an N-D grid, and project the resulting variable-size patches into a uniform embedding space. See demo.ipynb for usage examples.
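As a rough illustration of the pipeline described above (not the actual heads.py API — all function names and the quadtree splitting criterion here are assumptions), one can evaluate a gradient map, recursively refine a 2D grid where gradients are large, and resize each variable-size patch to a fixed token size:

```python
# Hypothetical sketch of iterative patchification for a 2D field:
# split into quadtree patches where the local gradient magnitude is
# large, then resize every patch to a common size so all tokens can
# share one embedding projection.
import numpy as np

def gradient_map(u):
    """L2 norm of the spatial gradient of a 2D field."""
    gy, gx = np.gradient(u)
    return np.sqrt(gx**2 + gy**2)

def adaptive_patches(u, thresh, min_size=4):
    """Recursively quadtree-split `u`; return (row, col, size) patches."""
    g = gradient_map(u)
    out = []

    def split(r, c, s):
        block = g[r:r + s, c:c + s]
        if s <= min_size or block.mean() <= thresh:
            out.append((r, c, s))       # smooth region: keep as one token
        else:
            h = s // 2                  # sharp region: recurse into quadrants
            for dr in (0, h):
                for dc in (0, h):
                    split(r + dr, c + dc, h)

    split(0, 0, u.shape[0])
    return out

def embed_patches(u, patches, token_size=4):
    """Nearest-neighbour resize of each variable-size patch to a fixed
    token_size x token_size grid, then flatten into uniform token vectors."""
    tokens = []
    for r, c, s in patches:
        idx = np.linspace(0, s - 1, token_size).round().astype(int)
        patch = u[r:r + s, c:c + s]
        tokens.append(patch[np.ix_(idx, idx)].ravel())
    return np.stack(tokens)

# Toy field: smooth background with a sharp ring; the quadtree refines
# near the ring and keeps coarse patches elsewhere.
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
u = np.tanh(8 * (x**2 + y**2 - 0.3))
patches = adaptive_patches(u, thresh=0.05)
tokens = embed_patches(u, patches)
print(len(patches), tokens.shape)
```

The resulting `tokens` array has one fixed-length row per patch, so it can be fed to a standard linear patch-embedding layer regardless of how unevenly the grid was partitioned.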

You can cite this work as follows:

@inproceedings{wang2026adaptive,
  title={Adaptive Tokenization for Vision Transformer PDE Simulation},
  author={Wang, Hanwen and Perdikaris, Paris},
  booktitle={AI{\&}PDE: ICLR 2026 Workshop on AI and Partial Differential Equations},
  year={2026}
}