
Feat/add selfmemory memory provider #1768

Open
shrijayan wants to merge 4 commits into NVIDIA:develop from shrijayan:feat/add-selfmemory-memory-provider

Conversation


@shrijayan shrijayan commented Mar 8, 2026

Description

Closes #1767

Add nvidia-nat-selfmemory package that implements MemoryEditor using SelfMemory as the backend, enabling 29+ vector stores and 15+ embedding providers for NeMo Agent Toolkit memory operations.

What's included

  • SelfMemoryProviderConfig(MemoryBaseConfig, name="selfmemory") — config class with vector store, embedding, LLM, and encryption settings
  • SelfMemoryEditor(MemoryEditor) — async editor bridging SelfMemory's sync API via asyncio.to_thread()
  • translator.py — bidirectional conversion between MemoryItem and SelfMemory formats
  • Plugin registration via @register_memory and nat.components entry point
  • Unit tests for config, editor, and translator
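The sync-to-async bridging described above can be sketched as follows. This is a minimal illustration of the `asyncio.to_thread()` pattern; the backend class and method names are stand-ins, not SelfMemory's actual API:

```python
import asyncio


class SyncBackendStub:
    """Stand-in for SelfMemory's synchronous client (illustrative only)."""

    def add(self, content: str, user_id: str) -> str:
        # A real backend would embed and persist the content here.
        return f"stored:{user_id}:{content}"


class AsyncEditorSketch:
    """Wraps a blocking backend behind an async interface via asyncio.to_thread."""

    def __init__(self, backend: SyncBackendStub):
        self._backend = backend

    async def add_item(self, content: str, user_id: str) -> str:
        # Run the blocking call in a worker thread so the event loop stays responsive.
        return await asyncio.to_thread(self._backend.add, content, user_id)


print(asyncio.run(AsyncEditorSketch(SyncBackendStub()).add_item("note", "user123")))
# stored:user123:note
```

The same pattern applies to search and delete: each sync backend call is dispatched with `asyncio.to_thread()` from the corresponding async editor method.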

Usage

memory:
  my_store:
    _type: selfmemory
    vector_store_provider: qdrant
    embedding_provider: openai
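For illustration, a config section like the one above might be consumed roughly like this when assembling backend constructor arguments. This is a hypothetical sketch with assumed key names, not the actual provider code:

```python
def build_backend_kwargs(config: dict) -> dict:
    """Assemble constructor kwargs for a SelfMemory-style backend from a config dict.

    The input keys mirror the YAML above; the output shape is an assumption.
    """
    return {
        "vector_store": {
            "provider": config.get("vector_store_provider", "qdrant"),
            "config": config.get("vector_store_config", {}),
        },
        "embedder": {
            "provider": config.get("embedding_provider", "openai"),
            "config": config.get("embedding_config", {}),
        },
    }


kwargs = build_backend_kwargs({
    "vector_store_provider": "qdrant",
    "embedding_provider": "openai",
})
print(kwargs["vector_store"]["provider"])  # qdrant
```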

By submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

Summary by CodeRabbit

  • New Features

    • Added SelfMemory as a configurable memory backend with pluggable vector stores, embedding providers, optional LLM support, multi-tenant isolation, encryption support, and flexible metadata handling.
  • Documentation

    • Added package PyPI documentation with usage examples and configuration guidance.
  • Tests

    • Added comprehensive tests for config, editor operations (add/search/remove), and translation utilities.
  • Chores

    • Added package configuration and entry point for the new selfmemory integration.
  • License

    • Included Apache-2.0 license for the new package.

Add nvidia-nat-selfmemory package that implements MemoryEditor
using SelfMemory as the backend, enabling 29+ vector stores and
15+ embedding providers for NeMo Agent Toolkit memory operations.

Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
…ling and async item addition

Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
@shrijayan shrijayan requested a review from a team as a code owner March 8, 2026 16:13

copy-pr-bot bot commented Mar 8, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai bot commented Mar 8, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 994924fc-2e85-461b-8493-17a930b682c6

📥 Commits

Reviewing files that changed from the base of the PR and between ccc2c5c and 69f7ea0.

📒 Files selected for processing (1)
  • packages/nvidia_nat_selfmemory/tests/test_translator.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/nvidia_nat_selfmemory/tests/test_translator.py

Walkthrough

A new SelfMemory memory provider plugin package is introduced for the NVIDIA NeMo Agent Toolkit. The implementation integrates SelfMemory as the memory backend, supporting configurable vector stores, embedding providers, optional LLM-based extraction, and encryption. The package includes a configuration class, an async provider function, an editor wrapper, data translators, comprehensive test coverage, and entry-point registration.

Changes

  • Package Metadata
    Files: packages/nvidia_nat_selfmemory/LICENSE.md, packages/nvidia_nat_selfmemory/pyproject.toml, packages/nvidia_nat_selfmemory/src/nat/meta/pypi.md
    Summary: Add Apache-2.0 license, new package pyproject with build system, dynamic dependencies (including selfmemory), entry-point registration, and PyPI usage doc.
  • Plugin Implementation
    Files: packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/__init__.py, packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py, packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/register.py
    Summary: Introduce SelfMemoryProviderConfig, an async selfmemory_provider registered via @register_memory, encryption key handling (MASTER_ENCRYPTION_KEY), and module import registration.
  • Editor & Translator
    Files: packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py, packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/translator.py
    Summary: Add SelfMemoryEditor implementing async MemoryEditor by delegating sync SelfMemory calls via asyncio.to_thread; add translators for mapping between MemoryItem and SelfMemory add/search payloads.
  • Test Suite
    Files: packages/nvidia_nat_selfmemory/tests/test_config.py, packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py, packages/nvidia_nat_selfmemory/tests/test_translator.py
    Summary: Add unit tests for config defaults/customization, editor add/search/remove behavior with mocked backend, and translator conversions including edge cases.
  • Root Configuration
    Files: pyproject.toml
    Summary: Add optional dynamic dependency entry selfmemory = ["nvidia-nat-selfmemory == {version}"].
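The editor-and-translator mapping summarized above can be sketched roughly as follows. A stand-in dataclass is used because the real MemoryItem schema lives in nat; all field and key names here are assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class MemoryItemStub:
    """Illustrative stand-in for nat's MemoryItem; field names are assumptions."""
    memory: str
    user_id: str
    tags: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)


def to_add_kwargs(item: MemoryItemStub) -> dict:
    """MemoryItem -> keyword args for a SelfMemory-style add() call."""
    return {"content": item.memory, "user_id": item.user_id,
            "tags": item.tags, "metadata": item.metadata}


def from_search_result(result: dict) -> MemoryItemStub:
    """One SelfMemory-style search hit -> MemoryItem, defaulting missing fields."""
    return MemoryItemStub(
        memory=result.get("content", ""),
        user_id=result.get("user_id", ""),
        tags=result.get("tags", []),
        metadata=result.get("metadata", {}),
    )


item = from_search_result({"content": "likes jazz", "user_id": "u1"})
print(to_add_kwargs(item)["content"])  # likes jazz
```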

Sequence Diagram(s)

sequenceDiagram
    participant Client as Application
    participant Provider as selfmemory_provider
    participant Memory as SelfMemory Backend
    participant Editor as SelfMemoryEditor
    participant VectorStore as Vector Store<br/>(Qdrant/Chroma)
    participant Embeddings as Embedding<br/>Provider

    Client->>Provider: Initialize with config
    Provider->>Memory: Create SelfMemory instance<br/>(vector_store, embeddings, llm, encryption)
    Memory->>Embeddings: Register embeddings provider
    Memory->>VectorStore: Connect to vector store
    Provider->>Editor: Yield SelfMemoryEditor(memory)
    
    rect rgba(100,150,200,0.5)
    Note over Client,Editor: Add Items
    Client->>Editor: add_items(items)
    Editor->>Editor: Translate MemoryItem -> add_kwargs
    Editor->>Memory: Run add() in thread pool<br/>(content, user_id, tags, metadata)
    Memory->>VectorStore: Store embeddings
    Memory-->>Editor: Confirm added
    Editor-->>Client: Complete
    end
    
    rect rgba(150,200,100,0.5)
    Note over Client,Editor: Search
    Client->>Editor: search(query, user_id, top_k)
    Editor->>Memory: Run search() in thread pool<br/>(query, user_id, limit)
    Memory->>Embeddings: Embed query
    Memory->>VectorStore: Vector similarity search
    VectorStore-->>Memory: Return top_k results
    Editor->>Editor: Translate results -> MemoryItem[]
    Editor-->>Client: Return MemoryItem list
    end
    
    rect rgba(200,150,100,0.5)
    Note over Client,Editor: Remove Items
    Client->>Editor: remove_items(memory_id or user_id)
    Editor->>Memory: Run delete() or delete_all()<br/>in thread pool
    Memory->>VectorStore: Remove stored embeddings
    Memory-->>Editor: Complete
    Editor-->>Client: Complete
    end
    
    Client->>Provider: Exit context
    Provider->>Memory: Call memory.close()

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped - CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'Feat/add selfmemory memory provider' clearly describes the main change: adding a SelfMemory memory provider to the toolkit.
  • Linked Issues Check: ✅ Passed. All coding requirements from issue #1767 are met: SelfMemoryProviderConfig class, async MemoryEditor implementation, translator utilities, @register_memory registration, entry point discovery, and comprehensive unit tests.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly related to adding the SelfMemory memory provider: new package structure, config, editor implementation, translator, tests, documentation, and root-level dependency management.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 96.97%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@shrijayan shrijayan force-pushed the feat/add-selfmemory-memory-provider branch from 06f0efd to ccc2c5c on March 8, 2026 16:14

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (6)
pyproject.toml (1)

70-70: Alphabetical ordering: selfmemory should be placed after s3 and before security.

The comment on line 50 indicates dependencies should be kept sorted. The entry selfmemory is currently placed between profiler and rag, but alphabetically it should come after s3 (line 76) and before security (line 77).

♻️ Suggested placement

Move the selfmemory entry to line 77 (after s3, before security):

 s3 = ["nvidia-nat-s3 == {version}"]
+selfmemory = ["nvidia-nat-selfmemory == {version}"]
 security = ["nvidia-nat-security == {version}"]

And remove it from line 70.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` at line 70, The dependency entry for selfmemory (selfmemory =
["nvidia-nat-selfmemory == {version}"]) is out of alphabetical order; move the
entire selfmemory entry so it appears after the s3 entry and before the security
entry in pyproject.toml, and remove the original occurrence (currently between
profiler and rag) so there is only one correctly ordered selfmemory line.
packages/nvidia_nat_selfmemory/tests/test_config.py (1)

52-56: Docstring/test mismatch: test doesn't verify the registered name "selfmemory".

The docstring says "Test the config registers with name 'selfmemory'" but the assertion only checks the Python class name (__class__.__name__), not the registration name passed to MemoryBaseConfig. Consider either updating the docstring to reflect what's actually tested, or adding an assertion that verifies the registration mechanism.

♻️ Suggested docstring fix
     def test_name_attribute(self):
-        """Test the config registers with name 'selfmemory'."""
+        """Test the config class name is SelfMemoryProviderConfig."""
         config = SelfMemoryProviderConfig()

         assert config.__class__.__name__ == "SelfMemoryProviderConfig"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_selfmemory/tests/test_config.py` around lines 52 - 56,
The test docstring claims the config registers with name "selfmemory" but the
test only checks the class name; update the test to actually verify the
registration name by asserting the registration value provided to
MemoryBaseConfig (e.g., check SelfMemoryProviderConfig.name or the registry
lookup used by MemoryBaseConfig) equals "selfmemory" — locate the test function
test_name_attribute and add an assertion that inspects the registered name (or
update the docstring to match the current class-name-only check if you prefer
not to test registration).
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py (2)

21-28: Use Field(default_factory=dict) for mutable defaults and consider OptionalSecretStr for the encryption key.

  1. The mutable default dict = {} triggers RUF012. While Pydantic handles this correctly, using Field(default_factory=dict) is more explicit and silences the linter.

  2. The encryption_key stores a sensitive secret. Consider using OptionalSecretStr to prevent accidental logging of the key value. As per coding guidelines, use default=None for optional secret fields.

♻️ Proposed fix
+from pydantic import Field
+
 from nat.builder.builder import Builder
 from nat.cli.register_workflow import register_memory
 from nat.data_models.memory import MemoryBaseConfig
+from nat.utils.type_utils import OptionalSecretStr


 class SelfMemoryProviderConfig(MemoryBaseConfig, name="selfmemory"):
     vector_store_provider: str = "qdrant"
-    vector_store_config: dict = {}
+    vector_store_config: dict = Field(default_factory=dict)
     embedding_provider: str = "openai"
-    embedding_config: dict = {}
+    embedding_config: dict = Field(default_factory=dict)
     llm_provider: str | None = None
-    llm_config: dict = {}
-    encryption_key: str | None = None
+    llm_config: dict = Field(default_factory=dict)
+    encryption_key: OptionalSecretStr = None

If OptionalSecretStr is used, update the provider function to extract the secret value:

encryption_key = (
    config.encryption_key.get_secret_value() if config.encryption_key else None
) or os.environ.get("MASTER_ENCRYPTION_KEY")
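As an aside, the `default_factory` idiom the bot recommends can be illustrated with stdlib dataclasses (this is not the actual Pydantic config class, just the same idea):

```python
from dataclasses import dataclass, field


@dataclass
class ConfigSketch:
    # dataclasses reject a bare `config: dict = {}` outright; default_factory
    # gives every instance its own dict, which is the same idiom Pydantic's
    # Field(default_factory=dict) makes explicit.
    config: dict = field(default_factory=dict)


a, b = ConfigSketch(), ConfigSketch()
a.config["k"] = 1
print(b.config)  # {} -- each instance gets a fresh dict
```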
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py` around
lines 21 - 28, Modify the SelfMemoryProviderConfig class: replace mutable
default dicts (vector_store_config, embedding_config, llm_config) with
Field(default_factory=dict) and change encryption_key to OptionalSecretStr with
default=None; then update the consumer/provider logic that reads
config.encryption_key (the code that builds encryption_key for the provider) to
call config.encryption_key.get_secret_value() when present (falling back to
os.environ.get("MASTER_ENCRYPTION_KEY") or None) so the secret is not stored or
logged as plain str.

56-59: Document the environment variable mutation.

Setting os.environ["MASTER_ENCRYPTION_KEY"] mutates global state, which is necessary for SelfMemory but could affect other components. Consider adding a comment explaining why this is needed.

📝 Suggested documentation
     encryption_key = config.encryption_key or os.environ.get("MASTER_ENCRYPTION_KEY")

     if encryption_key:
+        # SelfMemory reads encryption key from environment; ensure it's set if provided via config
         os.environ["MASTER_ENCRYPTION_KEY"] = encryption_key
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py` around
lines 56 - 59, Add an inline comment next to the encryption_key
assignment/assignment-to-os.environ explaining that setting
os.environ["MASTER_ENCRYPTION_KEY"] is an intentional global mutation required
for SelfMemory's encryption plumbing (so downstream code can access the master
key), and note the potential side-effects for other components and why it's
safe/necessary here; update the block around the encryption_key variable and the
if encryption_key: os.environ["MASTER_ENCRYPTION_KEY"] line (in memory.py) to
include that explanatory comment.
packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py (1)

24-55: Add return type hints to fixtures.

Per coding guidelines, public functions (including fixtures) should have return type hints. The editor_fixture and sample_memory_item_fixture are missing return type annotations.

♻️ Proposed fix
 @pytest.fixture(name="editor")
-def editor_fixture(mock_backend: MagicMock):
+def editor_fixture(mock_backend: MagicMock) -> SelfMemoryEditor:
     """Fixture to provide a SelfMemoryEditor with a mocked backend."""
     return SelfMemoryEditor(backend=mock_backend)


 @pytest.fixture(name="sample_memory_item")
-def sample_memory_item_fixture():
+def sample_memory_item_fixture() -> MemoryItem:
     """Fixture to provide a sample MemoryItem."""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py` around lines
24 - 55, The fixtures editor_fixture and sample_memory_item_fixture lack return
type hints; update their signatures to include explicit return types (e.g.,
change def editor_fixture(mock_backend: MagicMock) to def
editor_fixture(mock_backend: MagicMock) -> SelfMemoryEditor and def
sample_memory_item_fixture() to def sample_memory_item_fixture() -> MemoryItem)
so the fixtures are properly typed (ensure SelfMemoryEditor and MemoryItem are
imported/available in the test module).
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py (1)

45-53: Consider a more descriptive error for missing user_id.

The kwargs.pop("user_id") on line 47 raises a bare KeyError when user_id is not provided. While the test confirms this behavior, consider wrapping it with a more descriptive error message for better developer experience.

💡 Optional improvement for clearer error messaging
     async def search(self, query: str, top_k: int = 5, **kwargs) -> list[MemoryItem]:
         """Retrieve items relevant to the given query."""
-        user_id = kwargs.pop("user_id")
+        try:
+            user_id = kwargs.pop("user_id")
+        except KeyError:
+            raise KeyError("user_id is required for search operations") from None
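A self-contained sketch of that guarded pop, as a generic helper (the helper name is made up for illustration):

```python
def pop_required(kwargs: dict, key: str):
    """Pop a required keyword argument, raising a descriptive KeyError if absent."""
    try:
        return kwargs.pop(key)
    except KeyError:
        # `from None` suppresses the uninformative original traceback.
        raise KeyError(f"{key} is required for search operations") from None


kw = {"user_id": "u1", "top_k": 3}
print(pop_required(kw, "user_id"))  # u1
print(kw)                           # {'top_k': 3}
```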
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py`
around lines 45 - 53, The search method currently does kwargs.pop("user_id")
which raises a bare KeyError if user_id is missing; update the search function
to explicitly check for the presence of user_id (e.g., if "user_id" not in
kwargs or user_id is None) and raise a descriptive error (ValueError or
TypeError) with a clear message like "user_id is required for search" before
calling self._backend.search and converting results via
search_results_to_memory_items; reference the search method, the kwargs
handling, and the subsequent call to
self._backend.search/search_results_to_memory_items when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0b999223-74d8-4252-aeca-c98d67b4a05b

📥 Commits

Reviewing files that changed from the base of the PR and between f33952b and ccc2c5c.

📒 Files selected for processing (12)
  • packages/nvidia_nat_selfmemory/LICENSE.md
  • packages/nvidia_nat_selfmemory/pyproject.toml
  • packages/nvidia_nat_selfmemory/src/nat/meta/pypi.md
  • packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/__init__.py
  • packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py
  • packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/register.py
  • packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py
  • packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/translator.py
  • packages/nvidia_nat_selfmemory/tests/test_config.py
  • packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py
  • packages/nvidia_nat_selfmemory/tests/test_translator.py
  • pyproject.toml

Comment on lines +76 to +77
[project.entry-points.'nat.components']
nat_selfmemory = "nat.plugins.selfmemory.register"

⚠️ Potential issue | 🟡 Minor

Add trailing newline at end of file.

The file should end with a single newline per coding guidelines.

🔧 Proposed fix
 [project.entry-points.'nat.components']
 nat_selfmemory = "nat.plugins.selfmemory.register"
+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_selfmemory/pyproject.toml` around lines 76 - 77, The file
is missing a trailing newline at EOF; open
packages/nvidia_nat_selfmemory/pyproject.toml, locate the
[project.entry-points.'nat.components'] block and the nat_selfmemory =
"nat.plugins.selfmemory.register" entry, then add a single newline character at
the end of the file and save so the file ends with exactly one trailing newline.

Comment on lines +63 to +66
try:
yield SelfMemoryEditor(memory)
finally:
memory.close()

⚠️ Potential issue | 🟡 Minor

Add trailing newline at end of file.

The file should end with a single newline per coding guidelines.

🔧 Proposed fix
     try:
         yield SelfMemoryEditor(memory)
     finally:
         memory.close()
+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py` around
lines 63 - 66, The file currently ends without a trailing newline; ensure the
source ends with a single newline character by adding one at EOF so the last
lines (the try/finally yielding SelfMemoryEditor and memory.close()) are
followed by a newline; this change is purely formatting of
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py and does not
require code changes to SelfMemoryEditor or memory.close().

Comment on lines +55 to +63
async def remove_items(self, **kwargs) -> None:
"""Remove items by memory_id or user_id."""
if "memory_id" in kwargs:
memory_id = kwargs.pop("memory_id")
await asyncio.to_thread(self._backend.delete, memory_id)

elif "user_id" in kwargs:
user_id = kwargs.pop("user_id")
await asyncio.to_thread(self._backend.delete_all, user_id=user_id)

⚠️ Potential issue | 🟡 Minor

Add trailing newline at end of file.

The file should end with a single newline per coding guidelines.

🔧 Proposed fix
         elif "user_id" in kwargs:
             user_id = kwargs.pop("user_id")
             await asyncio.to_thread(self._backend.delete_all, user_id=user_id)
+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py`
around lines 55 - 63, The file is missing a trailing newline; open the module
containing the async method remove_items (the selfmemory_editor.py
implementation of remove_items) and add a single newline character at the end of
the file so the file ends with one trailing newline per coding guidelines.

Comment on lines +132 to +150
async def test_remove_items_by_memory_id(editor: SelfMemoryEditor, mock_backend: MagicMock):
"""Test removing items by memory ID."""
await editor.remove_items(memory_id="mem_123")

mock_backend.delete.assert_called_once_with("mem_123")


async def test_remove_items_by_user_id(editor: SelfMemoryEditor, mock_backend: MagicMock):
"""Test removing all items for a specific user ID."""
await editor.remove_items(user_id="user123")

mock_backend.delete_all.assert_called_once_with(user_id="user123")


async def test_remove_items_missing_arguments(editor: SelfMemoryEditor):
"""Test removing items with missing required arguments."""
result = await editor.remove_items()

assert result is None

⚠️ Potential issue | 🟡 Minor

Add trailing newline at end of file.

The file should end with a single newline per coding guidelines.

🔧 Proposed fix
     result = await editor.remove_items()

     assert result is None
+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py` around lines
132 - 150, The file is missing a trailing newline at EOF; update the test file
by adding a single newline character at the end (ensure the file ends with
exactly one newline) — e.g., open
packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py and add the
newline after the last test function (test_remove_items_missing_arguments) so
the file ends with a single trailing newline.

MemoryItem.user_id is str, not str | None. Use empty string to test
the default fallback in the translator.

Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
@willkill07 willkill07 added feature request New feature or request non-breaking Non-breaking change labels Mar 18, 2026

Labels

feature request New feature or request non-breaking Non-breaking change

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Add SelfMemory memory provider plugin

2 participants