
refactor: extract shared tabulate math, remove pt_expt→pt dependency #5339

Open

wanghan-iapcm wants to merge 7 commits into deepmodeling:master from wanghan-iapcm:refact-tabulate

Conversation

@wanghan-iapcm
Collaborator

@wanghan-iapcm wanghan-iapcm commented Mar 25, 2026

Summary

  • Extract tabulation math (activation derivatives, chain-rule propagation, embedding-net forward pass) from deepmd/pt/utils/tabulate.py into a shared numpy module deepmd/utils/tabulate_math.py
  • Both pt and pt_expt backends now inherit from the shared DPTabulate class
  • pt_expt no longer imports from deepmd.pt for tabulation or compression
  • Remove is_pt flag from BaseTabulate — normalize lower/upper bounds to per-type scalars, move _convert_numpy_float_to_int into subclass _convert_numpy_to_tensor
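The wrapper split described above can be sketched as follows. This is an illustrative stand-in only: the class names follow the PR description (`DPTabulateBase` in `deepmd/utils/tabulate_math.py`, a thin PT subclass), but the method bodies are placeholders, not the actual deepmd implementation.

```python
import numpy as np


class DPTabulateBase:
    """Backend-agnostic tabulation math; everything stays in numpy."""

    def __init__(self, neuron: list, activation_fn_name: str = "tanh") -> None:
        self.neuron = neuron
        self.activation_fn_name = activation_fn_name

    def build_table(self, x: np.ndarray) -> dict:
        # Stand-in for the shared forward + derivative math.
        y = np.tanh(x)
        dy = 1.0 - y * y  # d tanh / dx
        return {"y": y, "dy": dy}


class DPTabulatePT(DPTabulateBase):
    """Backend subclass: only descriptor type checks and tensor conversion."""

    def _convert_numpy_to_tensor(self, table: dict) -> dict:
        # A real backend would call torch.as_tensor(v, device=DEVICE) here;
        # tagging the values keeps this sketch torch-free.
        return {k: ("tensor", v) for k, v in table.items()}
```

All math lives in the base class; the subclass touches only what is genuinely backend-specific.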

Changes

File | Lines | What
deepmd/utils/tabulate_math.py | +582 | New: numpy DPTabulate with all math
deepmd/pt/utils/tabulate.py | -560 | Thin wrapper: _get_descrpt_type + _convert_numpy_to_tensor
deepmd/pt_expt/utils/tabulate.py | rewrite | Inherit from tabulate_math, no pt imports
deepmd/pt_expt/descriptor/*.py | -6 files | Remove ActivationFn import, pass string
deepmd/utils/tabulate.py | -12 | Remove is_pt flag and branching
deepmd/tf/utils/tabulate.py | -1 | Remove is_pt=False arg

Test plan

  • PT compression tests pass (16 tests: se_a, se_atten, se_r, se_t, dpa2)
  • PT_EXPT model compression tests pass (6 tests)
  • PT_EXPT descriptor compression tests pass (16 tests)
  • No deepmd.pt imports in pt_expt tabulate or descriptors
  • No is_pt references remain

Summary by CodeRabbit

  • Refactor

    • Consolidated tabulation and derivative logic into a backend-agnostic implementation, removing many backend-specific code paths and simplifying activation handling and numeric-to-tensor conversion.
  • New Features

    • Added a reusable, backend-neutral tabulation/math component providing activation derivatives and chain-rule propagation for embedding computations.
  • Tests

    • Updated tests to be backend-agnostic by using NumPy-compatible inputs and outputs.

Han Wang added 4 commits March 25, 2026 15:06
Backend-agnostic DPTabulate with numpy implementations of:
- Activation derivatives (grad, grad_grad)
- Chain-rule propagation (unaggregated_dy_dx_s, etc.)
- Embedding net forward pass (_make_data, _layer_0, _layer_1)
- Network variable extraction (_get_network_variable)
- Descriptor type detection from serialized data
Remove ~500 lines of duplicated math. PT DPTabulate now only overrides
_get_descrpt_type (isinstance checks) and _convert_numpy_to_tensor
(torch tensor conversion).
…dency

DPTabulate now inherits from deepmd.utils.tabulate_math.DPTabulate
instead of deepmd.pt.utils.tabulate.DPTabulate. Removed all imports
of ActivationFn and DPTabulatePT from deepmd.pt. Descriptors now pass
activation_fn_name (string) instead of ActivationFn (torch module).
Normalize lower/upper to per-type scalars after _get_env_mat_range
so build() no longer needs is_pt branching. Move
_convert_numpy_float_to_int into subclass _convert_numpy_to_tensor.
Remove is_pt parameter from BaseTabulate and all callers.
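The per-type bound normalization mentioned in the commit message could look like the sketch below. The dict-of-arrays shapes are an assumption for illustration; the real code in deepmd/utils/tabulate.py normalizes via np.min/np.max after _get_env_mat_range.

```python
import numpy as np


def normalize_bounds(lower: dict, upper: dict) -> tuple:
    """Collapse multi-dimensional bounds to one conservative scalar per type.

    Taking min over lower bounds and max over upper bounds yields the widest
    range, so build() never needs backend-specific branching.
    """
    lo = {k: float(np.min(v)) for k, v in lower.items()}
    hi = {k: float(np.max(v)) for k, v in upper.items()}
    return lo, hi
```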
@wanghan-iapcm wanghan-iapcm requested a review from njzjz March 25, 2026 07:59
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 56bdcf77a9


)

log = logging.getLogger(__name__)
from deepmd.utils.tabulate_math import DPTabulate as DPTabulateBase


P1 Badge Preserve PT tabulate helper exports

This refactor removes the module-level unaggregated_dy_dx_s, unaggregated_dy2_dx_s, unaggregated_dy_dx, and unaggregated_dy2_dx symbols from deepmd.pt.utils.tabulate, but existing callers still import them from this module (for example source/tests/pt/test_tabulate.py); importing that module now raises ImportError instead of providing the helper ops. Please re-export compatible wrappers (or forward imports from deepmd.utils.tabulate_math) to avoid breaking current PT tabulation consumers.

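The forwarding shim the reviewer asks for is just a re-export. The sketch below mimics the pattern with throwaway modules so it is self-contained (`mathlib` standing in for deepmd.utils.tabulate_math, `wrapper` for deepmd.pt.utils.tabulate; the lambda body is a placeholder, not the real helper math):

```python
import types

# Stand-in for deepmd.utils.tabulate_math after the refactor.
mathlib = types.ModuleType("tabulate_math")
mathlib.unaggregated_dy_dx_s = lambda dy, w: dy * w  # placeholder math

# Stand-in for deepmd.pt.utils.tabulate: re-export the moved helper so
# `from pt_tabulate import unaggregated_dy_dx_s` keeps resolving for
# existing callers such as source/tests/pt/test_tabulate.py.
wrapper = types.ModuleType("pt_tabulate")
wrapper.unaggregated_dy_dx_s = mathlib.unaggregated_dy_dx_s
```

In the real module the same effect is a plain `from deepmd.utils.tabulate_math import unaggregated_dy_dx_s` at the top of deepmd/pt/utils/tabulate.py.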

type_one_side,
exclude_types,
)
self._activation_fn = get_activation_fn(activation_fn_name)


P2 Badge Use numerically stable softplus for tabulation

The new shared path delegates activations to get_activation_fn, and for softplus that implementation uses log(1 + exp(x)); during compression we explicitly tabulate extrapolated inputs, so large positive pre-activations can overflow exp(x) and propagate inf/nan into table coefficients. The previous PT path used torch.nn.functional.softplus, which is stable in this regime, so this change can corrupt compressed models that use softplus activations.

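A numerically stable softplus for the shared numpy path could be written as below. This is a sketch: `np.logaddexp` is the standard stable formulation and stays finite in the large-input regime where a naive `log(1 + exp(x))` overflows.

```python
import numpy as np


def softplus(x: np.ndarray) -> np.ndarray:
    # log(1 + exp(x)) overflows exp(x) for large positive x;
    # logaddexp(0, x) = log(exp(0) + exp(x)) avoids forming exp(x).
    return np.logaddexp(0.0, x)
```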

@coderabbitai
Contributor

coderabbitai bot commented Mar 25, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 77975b8c-3615-4fe4-9544-5e9ca2e570a3

📥 Commits

Reviewing files that changed from the base of the PR and between cb4a122 and 1067ff0.

📒 Files selected for processing (1)
  • source/tests/pt/test_tabulate.py

📝 Walkthrough

Refactors tabulation math into a new backend-agnostic module and replaces PT-specific math with a thin PT wrapper that delegates activation/derivative computations to the new module; callers now pass activation names (activation_fn_name) and tensor conversion is centralized in PT wrappers.

Changes

Cohort / File(s) Summary
Backend-agnostic tabulation math
deepmd/utils/tabulate_math.py
Added new numpy-based tabulation math module: activation derivatives, chain-rule helpers, and DPTabulate(BaseTabulate) implementing forward + first/second derivative propagation and activation-name-based init.
Base tabulation updates
deepmd/utils/tabulate.py
Removed is_pt parameter and PT branches; normalized multi-dim lower/upper via np.min/np.max; removed conditional float->int conversion.
PyTorch backend wrapper
deepmd/pt/utils/tabulate.py
Replaced in-file math with thin wrapper DPTabulate(DPTabulateBase) delegating to deepmd.utils.tabulate_math.DPTabulate; now forwards activation_fn_name and limits PT-specific logic to type checks and tensor conversion.
PT-expt wrapper & tensor conversion
deepmd/pt_expt/utils/tabulate.py
Switched base class to deepmd.utils.tabulate_math.DPTabulate; constructor uses activation_fn_name: str; added _convert_numpy_to_tensor() to convert numpy outputs to torch tensors on configured DEVICE.
PT-expt descriptor call sites
deepmd/pt_expt/descriptor/dpa1.py, .../dpa2.py, .../se_e2_a.py, .../se_r.py, .../se_t.py, .../se_t_tebd.py
Removed local ActivationFn imports and changed calls to pass raw data["activation_function"]/repinit_data["activation_function"] (or activation_fn_name=) to DPTabulate instead of constructing ActivationFn(...).
TensorFlow minor adjustment
deepmd/tf/utils/tabulate.py
Removed trailing False argument from super().__init__(...) to match updated base signature.
Tests
source/tests/pt/test_tabulate.py
Switched test usage from torch tensors to numpy for derivative helpers and removed torch/env imports and device moves.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller as enable_compression (descriptor)
    participant PTWrapper as deepmd.pt.utils.DPTabulate
    participant TabMath as deepmd.utils.tabulate_math.DPTabulate
    participant NumPy as NumPy math
    participant Torch as torch/device conversion

    Caller->>PTWrapper: instantiate with activation_fn_name (raw string)
    PTWrapper->>TabMath: super().__init__(..., activation_fn_name=...)
    TabMath->>NumPy: load params, run forward and derivative math
    TabMath-->>PTWrapper: return numpy-based data structures
    PTWrapper->>PTWrapper: _convert_numpy_float_to_int()
    PTWrapper->>Torch: convert numpy arrays -> torch.tensor(device=DEVICE)
    PTWrapper-->>Caller: tensor-based tabulation object ready
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

Core, breaking change

Suggested reviewers

  • njzjz
  • iProzd
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 65.63%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title check | ✅ Passed | The PR title accurately describes the main refactoring: extracting tabulation math into a shared backend-agnostic module and removing the pt_expt→pt dependency, which aligns with the substantial changes across multiple tabulate files and descriptor implementations.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: fa001961-2e88-4010-890f-763ebb944a9c

📥 Commits

Reviewing files that changed from the base of the PR and between e97967b and 56bdcf7.

📒 Files selected for processing (11)
  • deepmd/pt/utils/tabulate.py
  • deepmd/pt_expt/descriptor/dpa1.py
  • deepmd/pt_expt/descriptor/dpa2.py
  • deepmd/pt_expt/descriptor/se_e2_a.py
  • deepmd/pt_expt/descriptor/se_r.py
  • deepmd/pt_expt/descriptor/se_t.py
  • deepmd/pt_expt/descriptor/se_t_tebd.py
  • deepmd/pt_expt/utils/tabulate.py
  • deepmd/tf/utils/tabulate.py
  • deepmd/utils/tabulate.py
  • deepmd/utils/tabulate_math.py
💤 Files with no reviewable changes (1)
  • deepmd/tf/utils/tabulate.py

Comment on lines +33 to +42
ACTIVATION_TO_FUNCTYPE: dict[str, int] = {
"tanh": 1,
"gelu": 2,
"gelu_tf": 2,
"relu": 3,
"relu6": 4,
"softplus": 5,
"sigmoid": 6,
"silu": 7,
}
Contributor

⚠️ Potential issue | 🟠 Major

The new activation lookup drops valid project activations.

ACTIVATION_TO_FUNCTYPE omits linear and none declared in deepmd/common.py, Lines 48-60, and the exact lookup at Lines 261-263 also rejects the silut* / custom_silu* names that deepmd/pt/utils/utils.py, Lines 174-189, can emit. Those models will now fail compression with RuntimeError("Unknown activation function ..."). Please normalize aliases/prefix forms before lookup and add matching derivative cases in grad and grad_grad.

Also applies to: 260-264
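One way to normalize names before the exact dict lookup is sketched below. The prefix forms ("silut", "custom_silu") come from the review comment; the functype id assigned to "linear"/"none" is an assumption for illustration only.

```python
ACTIVATION_TO_FUNCTYPE = {
    "tanh": 1,
    "gelu": 2,
    "gelu_tf": 2,
    "relu": 3,
    "relu6": 4,
    "softplus": 5,
    "sigmoid": 6,
    "silu": 7,
    "linear": 8,  # assumed id for the identity activation
    "none": 8,
}


def lookup_functype(name: str) -> int:
    key = name.lower().strip()
    # Fold prefixed variants (e.g. "silut..." or "custom_silu...") back
    # onto the base "silu" entry before the exact lookup.
    if key.startswith(("custom_silu", "silut")):
        key = "silu"
    try:
        return ACTIVATION_TO_FUNCTYPE[key]
    except KeyError:
        raise RuntimeError(f"Unknown activation function {name!r}") from None
```

Whatever aliases are accepted here must also get matching cases in grad and grad_grad so the three mappings stay consistent.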

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/utils/tabulate_math.py` around lines 33 - 42, ACTIVATION_TO_FUNCTYPE
currently omits aliases like "linear" and "none" and fails to normalize prefixed
names such as "silut*" and "custom_silu*", causing lookups (where the code maps
an activation name to ACTIVATION_TO_FUNCTYPE) to raise RuntimeError; update
ACTIVATION_TO_FUNCTYPE to include "linear" and "none" entries and extend it with
any derivative names you need, and modify the lookup code to normalize inputs
(e.g., lowercasing and stripping/normalizing known prefixes like "silut",
"custom_silu", "silu", etc.) before dict lookup; also add matching derivative
handling cases in the grad and grad_grad implementations (ensure they accept the
same normalized aliases/prefixes) so the activation, grad, and grad_grad
mappings are consistent.

Comment on lines +246 to +259
def __init__(
self,
descrpt: Any,
neuron: list[int],
type_one_side: bool = False,
exclude_types: list[list[int]] = [],
activation_fn_name: str = "tanh",
) -> None:
super().__init__(
descrpt,
neuron,
type_one_side,
exclude_types,
)
Contributor

⚠️ Potential issue | 🟠 Major



Replace the shared exclude_types=[] default.

DPTabulate triggers Ruff B006 with the mutable list default at line 251. This leaves one list instance shared across all DPTabulate objects. Change the signature to exclude_types: list[list[int]] | None = None and initialize the list inside the constructor before calling super().__init__().

♻️ Proposed fix
     def __init__(
         self,
         descrpt: Any,
         neuron: list[int],
         type_one_side: bool = False,
-        exclude_types: list[list[int]] = [],
+        exclude_types: list[list[int]] | None = None,
         activation_fn_name: str = "tanh",
     ) -> None:
+        exclude_types = [] if exclude_types is None else exclude_types
         super().__init__(
             descrpt,
             neuron,
             type_one_side,
             exclude_types,
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 251-251: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/utils/tabulate_math.py` around lines 246 - 259, The __init__ signature
currently uses a mutable default exclude_types: list[list[int]] = [], which
causes a shared list across instances; change the parameter to exclude_types:
list[list[int]] | None = None and inside DPTabulate.__init__ (before calling
super().__init__) set exclude_types = [] if exclude_types is None so each
instance gets its own list, then pass that local variable to super().__init__.

Comment on lines +438 to +440
vv = zz.astype(self.data_type)
dd = dy.astype(self.data_type)
d2 = dy2.astype(self.data_type)
Contributor

⚠️ Potential issue | 🔴 Critical

Return the current layer output here, not zz.

zz is only assigned in the layer > 0 branch. If an embedding net has a single layer, Line 438 raises UnboundLocalError even though yy, dy, and dy2 are already valid. Using yy here preserves both the one-layer and multi-layer cases.

🐛 Proposed fix
-        vv = zz.astype(self.data_type)
+        vv = yy.astype(self.data_type)
         dd = dy.astype(self.data_type)
         d2 = dy2.astype(self.data_type)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/utils/tabulate_math.py` around lines 438 - 440, The code currently
assigns vv = zz.astype(self.data_type) but zz is only defined for layer > 0,
causing UnboundLocalError for single-layer nets; change the assignment to use
the current-layer output variable yy (i.e., set vv from
yy.astype(self.data_type)) and ensure corresponding dd and d2 still come from
dy.astype(self.data_type) and dy2.astype(self.data_type) so both one-layer and
multi-layer cases work (look for vv, zz, yy, dy, dy2, and data_type to locate
the lines to update).

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
deepmd/pt/utils/tabulate.py (1)

57-65: ⚠️ Potential issue | 🟡 Minor

Fix shared defaults and normalize activation input type in __init__.

Lines 57–58 use shared mutable default objects (a code smell), and the type signature at line 58 is misleading since the parameter can reasonably accept strings at call sites. Line 65's direct access to .activation assumes a specific type, which is fragile if the type contract isn't enforced.

The proposed fix correctly:

  • Replaces mutable defaults with None and creates fresh objects
  • Widens the type annotation to accept both str and ActivationFn
  • Normalizes the input before passing to the parent class

Apply the suggested diff to improve type safety and prevent accidental state sharing across instances.
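The normalized constructor could look like the sketch below. `ActivationFn` here is a minimal stand-in so the snippet is self-contained; the real class lives in deepmd.pt.utils.utils, and the surrounding super().__init__ arguments are omitted.

```python
from typing import Optional, Union


class ActivationFn:
    """Minimal stand-in for deepmd.pt.utils.utils.ActivationFn."""

    def __init__(self, activation: str) -> None:
        self.activation = activation


class DPTabulate:
    def __init__(
        self,
        exclude_types: Optional[list] = None,
        activation_fn: Union[str, ActivationFn] = "tanh",
    ) -> None:
        # Fresh list per instance instead of a shared mutable default.
        self.exclude_types = [] if exclude_types is None else exclude_types
        # Accept either a raw name or an ActivationFn module.
        self.activation_fn_name = (
            activation_fn.activation
            if isinstance(activation_fn, ActivationFn)
            else str(activation_fn)
        )
```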

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt/utils/tabulate.py` around lines 57 - 65, The __init__ currently
uses a shared mutable default for exclude_types and assumes activation_fn is an
ActivationFn instance; change the signature to accept exclude_types:
Optional[list[list[int]]] = None and activation_fn: Union[str, ActivationFn] =
"tanh", then inside __init__ set exclude_types = [] if exclude_types is None and
normalize activation_value = activation_fn.activation if
isinstance(activation_fn, ActivationFn) else str(activation_fn) before calling
super().__init__(..., exclude_types, activation_fn_name=activation_value) to
avoid shared state and handle string inputs safely.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 61f3ebed-df39-41bc-bd39-aeaaf52ceaad

📥 Commits

Reviewing files that changed from the base of the PR and between 56bdcf7 and cb4a122.

📒 Files selected for processing (1)
  • deepmd/pt/utils/tabulate.py

Comment on lines +2 to +7
"""PyTorch-specific DPTabulate wrapper.

Inherits the numpy math from ``deepmd.utils.tabulate_math.DPTabulate``
and adds torch-specific ``_convert_numpy_to_tensor`` and
``_get_descrpt_type`` (isinstance checks against PT descriptor classes).
"""
Contributor

⚠️ Potential issue | 🟡 Minor



Fix mutable default arguments and function call in constructor defaults.

Lines 57-58 have two linting violations that will cause CI failure:

  • Line 57: Replace mutable default exclude_types: list[list[int]] = [] with None and initialize within the function
  • Line 58: Replace function call activation_fn: ActivationFn = ActivationFn("tanh") with a module-level constant or initialize within the function

These violations prevent the code from passing ruff check as required by the project guidelines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt/utils/tabulate.py` around lines 2 - 7, The constructor currently
uses mutable and call-in-defaults which fail linters: change the parameter
defaults in the DPTabulate (or relevant __init__) signature from exclude_types:
list[list[int]] = [] to exclude_types: Optional[list[list[int]]] = None and from
activation_fn: ActivationFn = ActivationFn("tanh") to activation_fn:
Optional[ActivationFn] = None; then inside the constructor, if exclude_types is
None set exclude_types = [] (or a fresh list) and if activation_fn is None set
activation_fn = ActivationFn("tanh"). Update any type hints/imports to include
Optional and ensure the instance attribute assignments use these local variables
instead of the original defaults.

… tensors

The derivative functions now return numpy arrays from tabulate_math.
Update test inputs/outputs accordingly. Remove unused torch/env imports.