Feat/add selfmemory memory provider #1768
Add nvidia-nat-selfmemory package that implements MemoryEditor using SelfMemory as the backend, enabling 29+ vector stores and 15+ embedding providers for NeMo Agent Toolkit memory operations. Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
…ling and async item addition Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Walkthrough

A new SelfMemory memory provider plugin package is introduced for the NVIDIA NeMo Agent Toolkit. The implementation enables integration with SelfMemory as the memory backend, supporting configurable vector stores, embedding providers, optional LLM-based extraction, and encryption. It includes a configuration class, an async provider function, an editor wrapper, data translators, comprehensive test coverage, and entry point registration.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Application
    participant Provider as selfmemory_provider
    participant Memory as SelfMemory Backend
    participant Editor as SelfMemoryEditor
    participant VectorStore as Vector Store<br/>(Qdrant/Chroma)
    participant Embeddings as Embedding<br/>Provider

    Client->>Provider: Initialize with config
    Provider->>Memory: Create SelfMemory instance<br/>(vector_store, embeddings, llm, encryption)
    Memory->>Embeddings: Register embeddings provider
    Memory->>VectorStore: Connect to vector store
    Provider->>Editor: Yield SelfMemoryEditor(memory)

    rect rgba(100,150,200,0.5)
        Note over Client,Editor: Add Items
        Client->>Editor: add_items(items)
        Editor->>Editor: Translate MemoryItem -> add_kwargs
        Editor->>Memory: Run add() in thread pool<br/>(content, user_id, tags, metadata)
        Memory->>VectorStore: Store embeddings
        Memory-->>Editor: Confirm added
        Editor-->>Client: Complete
    end

    rect rgba(150,200,100,0.5)
        Note over Client,Editor: Search
        Client->>Editor: search(query, user_id, top_k)
        Editor->>Memory: Run search() in thread pool<br/>(query, user_id, limit)
        Memory->>Embeddings: Embed query
        Memory->>VectorStore: Vector similarity search
        VectorStore-->>Memory: Return top_k results
        Editor->>Editor: Translate results -> MemoryItem[]
        Editor-->>Client: Return MemoryItem list
    end

    rect rgba(200,150,100,0.5)
        Note over Client,Editor: Remove Items
        Client->>Editor: remove_items(memory_id or user_id)
        Editor->>Memory: Run delete() or delete_all()<br/>in thread pool
        Memory->>VectorStore: Remove stored embeddings
        Memory-->>Editor: Complete
        Editor-->>Client: Complete
    end

    Client->>Provider: Exit context
    Provider->>Memory: Call memory.close()
```
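The add/search flow in the diagram hinges on one technique: running SelfMemory's blocking calls through `asyncio.to_thread` so the event loop stays responsive. A minimal stdlib-only sketch of that pattern; `FakeBackend` and `Editor` are illustrative stand-ins, not the package's actual classes:

```python
import asyncio


class FakeBackend:
    """Stand-in for the synchronous SelfMemory client (hypothetical)."""

    def __init__(self):
        self.store: dict[str, list[str]] = {}

    def add(self, content: str, user_id: str) -> None:
        self.store.setdefault(user_id, []).append(content)

    def search(self, query: str, user_id: str, limit: int = 5) -> list[str]:
        return [c for c in self.store.get(user_id, []) if query in c][:limit]


class Editor:
    """Async wrapper mirroring the diagram: sync backend calls run off-loop."""

    def __init__(self, backend: FakeBackend):
        self._backend = backend

    async def add_items(self, items: list[str], user_id: str) -> None:
        for content in items:
            # Blocking call runs in a worker thread, not on the event loop
            await asyncio.to_thread(self._backend.add, content, user_id)

    async def search(self, query: str, user_id: str, top_k: int = 5) -> list[str]:
        return await asyncio.to_thread(self._backend.search, query, user_id, limit=top_k)


async def main() -> list[str]:
    editor = Editor(FakeBackend())
    await editor.add_items(["likes hiking", "prefers tea"], user_id="u1")
    return await editor.search("tea", user_id="u1")


print(asyncio.run(main()))  # ['prefers tea']
```

The same thread-pool hop covers `delete`/`delete_all` in the Remove Items leg; only the wrapped callable changes.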
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
06f0efd to ccc2c5c
Actionable comments posted: 4
🧹 Nitpick comments (6)
pyproject.toml (1)
70-70: Alphabetical ordering: selfmemory should be placed after s3 and before security.

The comment on line 50 indicates dependencies should be kept sorted. The entry selfmemory is currently placed between profiler and rag, but alphabetically it should come after s3 (line 76) and before security (line 77).

♻️ Suggested placement

Move the selfmemory entry to line 77 (after s3, before security):

```diff
 s3 = ["nvidia-nat-s3 == {version}"]
+selfmemory = ["nvidia-nat-selfmemory == {version}"]
 security = ["nvidia-nat-security == {version}"]
```

And remove it from line 70.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `pyproject.toml` at line 70: the dependency entry for selfmemory (`selfmemory = ["nvidia-nat-selfmemory == {version}"]`) is out of alphabetical order; move the entire selfmemory entry so it appears after the s3 entry and before the security entry in pyproject.toml, and remove the original occurrence (currently between profiler and rag) so there is only one correctly ordered selfmemory line.

packages/nvidia_nat_selfmemory/tests/test_config.py (1)
52-56: Docstring/test mismatch: the test doesn't verify the registered name "selfmemory".

The docstring says "Test the config registers with name 'selfmemory'" but the assertion only checks the Python class name (`__class__.__name__`), not the registration name passed to `MemoryBaseConfig`. Consider either updating the docstring to reflect what's actually tested, or adding an assertion that verifies the registration mechanism.

♻️ Suggested docstring fix

```diff
 def test_name_attribute(self):
-    """Test the config registers with name 'selfmemory'."""
+    """Test the config class name is SelfMemoryProviderConfig."""
     config = SelfMemoryProviderConfig()
     assert config.__class__.__name__ == "SelfMemoryProviderConfig"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/nvidia_nat_selfmemory/tests/test_config.py` around lines 52-56: the test docstring claims the config registers with name "selfmemory" but the test only checks the class name; update the test to actually verify the registration name by asserting the registration value provided to MemoryBaseConfig (e.g., check SelfMemoryProviderConfig.name or the registry lookup used by MemoryBaseConfig) equals "selfmemory" — locate the test function test_name_attribute and add an assertion that inspects the registered name (or update the docstring to match the current class-name-only check if you prefer not to test registration).

packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py (2)
21-28: Use `Field(default_factory=dict)` for mutable defaults and consider `OptionalSecretStr` for the encryption key.

The mutable default `dict = {}` triggers RUF012. While Pydantic handles this correctly, using `Field(default_factory=dict)` is more explicit and silences the linter.

The `encryption_key` field stores a sensitive secret. Consider using `OptionalSecretStr` to prevent accidental logging of the key value. As per coding guidelines, use `default=None` for optional secret fields.

♻️ Proposed fix

```diff
+from pydantic import Field
+
 from nat.builder.builder import Builder
 from nat.cli.register_workflow import register_memory
 from nat.data_models.memory import MemoryBaseConfig
+from nat.utils.type_utils import OptionalSecretStr


 class SelfMemoryProviderConfig(MemoryBaseConfig, name="selfmemory"):
     vector_store_provider: str = "qdrant"
-    vector_store_config: dict = {}
+    vector_store_config: dict = Field(default_factory=dict)
     embedding_provider: str = "openai"
-    embedding_config: dict = {}
+    embedding_config: dict = Field(default_factory=dict)
     llm_provider: str | None = None
-    llm_config: dict = {}
-    encryption_key: str | None = None
+    llm_config: dict = Field(default_factory=dict)
+    encryption_key: OptionalSecretStr = None
```

If `OptionalSecretStr` is used, update the provider function to extract the secret value:

```python
encryption_key = (
    config.encryption_key.get_secret_value()
    if config.encryption_key else None
) or os.environ.get("MASTER_ENCRYPTION_KEY")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py` around lines 21 - 28, Modify the SelfMemoryProviderConfig class: replace mutable default dicts (vector_store_config, embedding_config, llm_config) with Field(default_factory=dict) and change encryption_key to OptionalSecretStr with default=None; then update the consumer/provider logic that reads config.encryption_key (the code that builds encryption_key for the provider) to call config.encryption_key.get_secret_value() when present (falling back to os.environ.get("MASTER_ENCRYPTION_KEY") or None) so the secret is not stored or logged as plain str.
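The mutable-default pitfall that RUF012 guards against is easy to demonstrate with plain classes; a short sketch under the assumption that no framework copies the default (Pydantic does copy per instance, which is why the current code still behaves correctly, but the explicit `default_factory` form makes the intent clear):

```python
from dataclasses import dataclass, field


class SharedDefault:
    # Class-level dict: every instance resolves `config` to this single object
    config: dict = {}


a, b = SharedDefault(), SharedDefault()
a.config["key"] = "leaks"
print(b.config)  # {'key': 'leaks'}: the mutation is visible from the other instance


@dataclass
class SafeDefault:
    # default_factory builds a fresh dict for each instance
    config: dict = field(default_factory=dict)


c, d = SafeDefault(), SafeDefault()
c.config["key"] = "isolated"
print(d.config)  # {}
```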
56-59: Document the environment variable mutation.

Setting `os.environ["MASTER_ENCRYPTION_KEY"]` mutates global state, which is necessary for SelfMemory but could affect other components. Consider adding a comment explaining why this is needed.

📝 Suggested documentation

```diff
 encryption_key = config.encryption_key or os.environ.get("MASTER_ENCRYPTION_KEY")
 if encryption_key:
+    # SelfMemory reads the encryption key from the environment; ensure it's set if provided via config
     os.environ["MASTER_ENCRYPTION_KEY"] = encryption_key
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py` around lines 56-59: add an inline comment next to the encryption_key assignment-to-os.environ explaining that setting os.environ["MASTER_ENCRYPTION_KEY"] is an intentional global mutation required for SelfMemory's encryption plumbing (so downstream code can access the master key), and note the potential side-effects for other components and why it's safe/necessary here; update the block around the encryption_key variable and the if encryption_key: os.environ["MASTER_ENCRYPTION_KEY"] line (in memory.py) to include that explanatory comment.

packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py (1)
24-55: Add return type hints to fixtures.

Per coding guidelines, public functions (including fixtures) should have return type hints. The `editor_fixture` and `sample_memory_item_fixture` are missing return type annotations.

♻️ Proposed fix

```diff
 @pytest.fixture(name="editor")
-def editor_fixture(mock_backend: MagicMock):
+def editor_fixture(mock_backend: MagicMock) -> SelfMemoryEditor:
     """Fixture to provide a SelfMemoryEditor with a mocked backend."""
     return SelfMemoryEditor(backend=mock_backend)


 @pytest.fixture(name="sample_memory_item")
-def sample_memory_item_fixture():
+def sample_memory_item_fixture() -> MemoryItem:
     """Fixture to provide a sample MemoryItem."""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py` around lines 24-55: the fixtures editor_fixture and sample_memory_item_fixture lack return type hints; update their signatures to include explicit return types (e.g., change def editor_fixture(mock_backend: MagicMock) to def editor_fixture(mock_backend: MagicMock) -> SelfMemoryEditor and def sample_memory_item_fixture() to def sample_memory_item_fixture() -> MemoryItem) so the fixtures are properly typed (ensure SelfMemoryEditor and MemoryItem are imported/available in the test module).

packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py (1)
45-53: Consider a more descriptive error for a missing `user_id`.

The `kwargs.pop("user_id")` on line 47 raises a bare KeyError when `user_id` is not provided. While the test confirms this behavior, consider wrapping it with a more descriptive error message for better developer experience.

💡 Optional improvement for clearer error messaging

```diff
 async def search(self, query: str, top_k: int = 5, **kwargs) -> list[MemoryItem]:
     """Retrieve items relevant to the given query."""
-    user_id = kwargs.pop("user_id")
+    try:
+        user_id = kwargs.pop("user_id")
+    except KeyError:
+        raise KeyError("user_id is required for search operations") from None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py` around lines 45 - 53, The search method currently does kwargs.pop("user_id") which raises a bare KeyError if user_id is missing; update the search function to explicitly check for the presence of user_id (e.g., if "user_id" not in kwargs or user_id is None) and raise a descriptive error (ValueError or TypeError) with a clear message like "user_id is required for search" before calling self._backend.search and converting results via search_results_to_memory_items; reference the search method, the kwargs handling, and the subsequent call to self._backend.search/search_results_to_memory_items when making the change.
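The behavioral difference this comment describes can be seen with minimal stand-in functions (names hypothetical, not the editor's real methods):

```python
def search_strict(**kwargs):
    # Bare pop: raises KeyError('user_id') with no guidance for the caller
    return kwargs.pop("user_id")


def search_friendly(**kwargs):
    # Explicit handling gives callers an actionable message
    try:
        return kwargs.pop("user_id")
    except KeyError:
        raise KeyError("user_id is required for search operations") from None


try:
    search_friendly(query="tea")
except KeyError as exc:
    print(exc)  # 'user_id is required for search operations'
```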
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/nvidia_nat_selfmemory/pyproject.toml`:
- Around line 76-77: The file is missing a trailing newline at EOF; open
packages/nvidia_nat_selfmemory/pyproject.toml, locate the
[project.entry-points.'nat.components'] block and the nat_selfmemory =
"nat.plugins.selfmemory.register" entry, then add a single newline character at
the end of the file and save so the file ends with exactly one trailing newline.
In `@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py`:
- Around line 63-66: The file currently ends without a trailing newline; ensure
the source ends with a single newline character by adding one at EOF so the last
lines (the try/finally yielding SelfMemoryEditor and memory.close()) are
followed by a newline; this change is purely formatting of
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py and does not
require code changes to SelfMemoryEditor or memory.close().
In
`@packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py`:
- Around line 55-63: The file is missing a trailing newline; open the module
containing the async method remove_items (the selfmemory_editor.py
implementation of remove_items) and add a single newline character at the end of
the file so the file ends with one trailing newline per coding guidelines.
In `@packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py`:
- Around line 132-150: The file is missing a trailing newline at EOF; update the
test file by adding a single newline character at the end (ensure the file ends
with exactly one newline) — e.g., open
packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py and add the
newline after the last test function (test_remove_items_missing_arguments) so
the file ends with a single trailing newline.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0b999223-74d8-4252-aeca-c98d67b4a05b
📒 Files selected for processing (12)
packages/nvidia_nat_selfmemory/LICENSE.md
packages/nvidia_nat_selfmemory/pyproject.toml
packages/nvidia_nat_selfmemory/src/nat/meta/pypi.md
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/__init__.py
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/memory.py
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/register.py
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/selfmemory_editor.py
packages/nvidia_nat_selfmemory/src/nat/plugins/selfmemory/translator.py
packages/nvidia_nat_selfmemory/tests/test_config.py
packages/nvidia_nat_selfmemory/tests/test_selfmemory_editor.py
packages/nvidia_nat_selfmemory/tests/test_translator.py
pyproject.toml
```toml
[project.entry-points.'nat.components']
nat_selfmemory = "nat.plugins.selfmemory.register"
```
Add trailing newline at end of file.
The file should end with a single newline per coding guidelines.
🔧 Proposed fix

```diff
 [project.entry-points.'nat.components']
 nat_selfmemory = "nat.plugins.selfmemory.register"
+
```
```python
    try:
        yield SelfMemoryEditor(memory)
    finally:
        memory.close()
```
Add trailing newline at end of file.
The file should end with a single newline per coding guidelines.
🔧 Proposed fix

```diff
     try:
         yield SelfMemoryEditor(memory)
     finally:
         memory.close()
+
```
```python
    async def remove_items(self, **kwargs) -> None:
        """Remove items by memory_id or user_id."""
        if "memory_id" in kwargs:
            memory_id = kwargs.pop("memory_id")
            await asyncio.to_thread(self._backend.delete, memory_id)
        elif "user_id" in kwargs:
            user_id = kwargs.pop("user_id")
            await asyncio.to_thread(self._backend.delete_all, user_id=user_id)
```
Add trailing newline at end of file.
The file should end with a single newline per coding guidelines.
🔧 Proposed fix

```diff
         elif "user_id" in kwargs:
             user_id = kwargs.pop("user_id")
             await asyncio.to_thread(self._backend.delete_all, user_id=user_id)
+
```
```python
async def test_remove_items_by_memory_id(editor: SelfMemoryEditor, mock_backend: MagicMock):
    """Test removing items by memory ID."""
    await editor.remove_items(memory_id="mem_123")

    mock_backend.delete.assert_called_once_with("mem_123")


async def test_remove_items_by_user_id(editor: SelfMemoryEditor, mock_backend: MagicMock):
    """Test removing all items for a specific user ID."""
    await editor.remove_items(user_id="user123")

    mock_backend.delete_all.assert_called_once_with(user_id="user123")


async def test_remove_items_missing_arguments(editor: SelfMemoryEditor):
    """Test removing items with missing required arguments."""
    result = await editor.remove_items()

    assert result is None
```
Add trailing newline at end of file.
The file should end with a single newline per coding guidelines.
🔧 Proposed fix

```diff
     result = await editor.remove_items()

     assert result is None
+
```
MemoryItem.user_id is str, not str | None. Use empty string to test the default fallback in the translator. Signed-off-by: shrijayan <81805145+shrijayan@users.noreply.github.com>
Description
Closes #1767
Add `nvidia-nat-selfmemory` package that implements `MemoryEditor` using SelfMemory as the backend, enabling 29+ vector stores and 15+ embedding providers for NeMo Agent Toolkit memory operations.

What's included

- `SelfMemoryProviderConfig(MemoryBaseConfig, name="selfmemory")` — config class with vector store, embedding, LLM, and encryption settings
- `SelfMemoryEditor(MemoryEditor)` — async editor bridging SelfMemory's sync API via `asyncio.to_thread()`
- `translator.py` — bidirectional conversion between `MemoryItem` and SelfMemory formats
- `@register_memory` and `nat.components` entry point

Usage
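The Usage section is collapsed in this view; a hedged sketch of what a workflow config entry might look like, with top-level field names taken from `SelfMemoryProviderConfig`. The `_type` key, the nested provider options (host, port, model), and the overall layout follow common toolkit conventions and are assumptions here, not copied from the PR:

```yaml
memory:
  user_memory:
    _type: selfmemory
    vector_store_provider: qdrant
    vector_store_config:
      host: localhost
      port: 6333
    embedding_provider: openai
    embedding_config:
      model: text-embedding-3-small
    # llm_provider / llm_config are optional; omit them to skip LLM-based extraction
    # encryption_key may also be supplied via the MASTER_ENCRYPTION_KEY environment variable
```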
By Submitting this PR I confirm: