feat: pluggable index cache via CacheBackend trait #6222
wjones127 wants to merge 12 commits into lance-format:main
Conversation
The Session's index cache was hardcoded to use Moka. This adds a CacheBackend trait so users can provide their own cache implementation (e.g. Redis-backed, disk-backed, shared across processes).

Two-layer design:

- CacheBackend: object-safe async trait with opaque byte keys. This is what plugin authors implement (get, insert, invalidate_prefix, clear, num_entries, size_bytes).
- LanceCache: typed wrapper handling key construction (prefix + type tag), type-safe get/insert, DeepSizeOf size computation, hit/miss stats, and concurrent load deduplication.

MokaCacheBackend is the default, preserving existing behavior. Custom backends are wired through Session::with_index_cache_backend() or DatasetBuilder::with_index_cache_backend().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
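The two-layer split above can be sketched in miniature. The trait and type names come from the PR description, but the exact signatures, the `Entry` alias, the `MemBackend` stand-in, and the toy `block_on` executor are all illustrative assumptions, not the shipped API (size_bytes is omitted for brevity):

```rust
use std::any::Any;
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Boxed-future alias keeps the trait object-safe without async-trait sugar.
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;
// Entries are type-erased; the typed LanceCache layer downcasts on the way out.
type Entry = Arc<dyn Any + Send + Sync>;

// Sketch of the object-safe backend trait (method set from the PR
// description; signatures assumed for illustration).
trait CacheBackend: Send + Sync {
    fn get<'a>(&'a self, key: &'a [u8]) -> BoxFuture<'a, Option<Entry>>;
    fn insert<'a>(&'a self, key: &'a [u8], value: Entry) -> BoxFuture<'a, ()>;
    fn invalidate_prefix<'a>(&'a self, prefix: &'a [u8]) -> BoxFuture<'a, ()>;
    fn clear<'a>(&'a self) -> BoxFuture<'a, ()>;
    fn num_entries(&self) -> usize;
}

// Trivial in-memory backend standing in for MokaCacheBackend.
struct MemBackend(Mutex<HashMap<Vec<u8>, Entry>>);

impl CacheBackend for MemBackend {
    fn get<'a>(&'a self, key: &'a [u8]) -> BoxFuture<'a, Option<Entry>> {
        Box::pin(async move { self.0.lock().unwrap().get(key).cloned() })
    }
    fn insert<'a>(&'a self, key: &'a [u8], value: Entry) -> BoxFuture<'a, ()> {
        Box::pin(async move {
            self.0.lock().unwrap().insert(key.to_vec(), value);
        })
    }
    fn invalidate_prefix<'a>(&'a self, prefix: &'a [u8]) -> BoxFuture<'a, ()> {
        Box::pin(async move {
            self.0.lock().unwrap().retain(|k, _| !k.starts_with(prefix));
        })
    }
    fn clear<'a>(&'a self) -> BoxFuture<'a, ()> {
        Box::pin(async move { self.0.lock().unwrap().clear() })
    }
    fn num_entries(&self) -> usize {
        self.0.lock().unwrap().len()
    }
}

// Minimal executor: these futures never yield, so one poll loop suffices.
fn block_on<T>(mut fut: Pin<Box<dyn Future<Output = T> + Send + '_>>) -> T {
    fn raw() -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn clone_raw(_: *const ()) -> RawWaker { raw() }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let backend: Arc<dyn CacheBackend> = Arc::new(MemBackend(Mutex::new(HashMap::new())));
    block_on(backend.insert(b"idx/part0", Arc::new(42u32)));
    let hit = block_on(backend.get(b"idx/part0")).unwrap();
    assert_eq!(*hit.downcast_ref::<u32>().unwrap(), 42);
    block_on(backend.invalidate_prefix(b"idx/"));
    assert_eq!(backend.num_entries(), 0);
    println!("ok");
}
```

The point of the split: a backend never sees concrete types or key structure, so a Redis- or disk-backed implementation only has to store and retrieve opaque bytes.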
PR Review: feat: pluggable index cache via CacheBackend trait
P0: TOCTOU race in
Add type_name()/type_id() to CacheKey and UnsizedCacheKey traits so backends can identify the type of cached entries. Add parse_cache_key() utility for backends to extract (user_key, type_id) from opaque key bytes. CacheKey-based methods now pipe the key's type_id through to the backend. Non-CacheKey methods use type_id_of::<T>() as a sentinel. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. Remove #[cfg(test)] convenience methods; tests now use CacheKey via a TestKey helper, eliminating the parallel method hierarchy.
2. Fix dedup race condition: re-check the cache while holding the in-flight lock so no two tasks can both become leader for the same key.
3. Use Arc::try_unwrap on the leader error path to preserve the original error type when possible.
4. Make invalidate_prefix async instead of fire-and-forget spawn.
5. Replace type_name().as_ptr() with a hash of std::any::TypeId for stable type discrimination. Defined once in type_id_of() and used by CacheKey::type_id() default.
6. Add dedup to WeakLanceCache::get_or_insert, sharing the in-flight map from the parent LanceCache.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
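The race fix in point 2 is the classic double-check-under-lock pattern. A minimal synchronous sketch of the leader election (the real code is async and parks followers on a channel; all names here are hypothetical):

```rust
use std::collections::{HashMap, HashSet};
use std::sync::Mutex;

// Hypothetical dedup state: the cache proper plus the set of keys with a
// load currently in flight.
struct Dedup {
    cache: Mutex<HashMap<Vec<u8>, u64>>,
    in_flight: Mutex<HashSet<Vec<u8>>>,
}

enum Role {
    Hit(u64),  // value already cached
    Leader,    // this task must run the loader
    Follower,  // another task is already loading this key
}

impl Dedup {
    fn claim(&self, key: &[u8]) -> Role {
        // Take the in-flight lock FIRST, then re-check the cache while
        // still holding it. Checking the cache before locking is exactly
        // the race flagged in review: two tasks could both miss and both
        // elect themselves leader, running the loader twice.
        let mut in_flight = self.in_flight.lock().unwrap();
        if let Some(v) = self.cache.lock().unwrap().get(key) {
            return Role::Hit(*v);
        }
        if in_flight.insert(key.to_vec()) {
            Role::Leader
        } else {
            Role::Follower
        }
    }
}

fn main() {
    let d = Dedup {
        cache: Mutex::new(HashMap::new()),
        in_flight: Mutex::new(HashSet::new()),
    };
    assert!(matches!(d.claim(b"part0"), Role::Leader));
    assert!(matches!(d.claim(b"part0"), Role::Follower));
    // Once the leader publishes the value (and clears in_flight),
    // later callers see a plain hit.
    d.cache.lock().unwrap().insert(b"part0".to_vec(), 42);
    assert!(matches!(d.claim(b"part0"), Role::Hit(42)));
    println!("ok");
}
```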
Address feedback:

1. Move get_or_insert() onto CacheBackend. The method takes a pinned future (not a closure), so LanceCache can type-erase the user's non-'static loader before passing it to the backend. Default impl does simple get-then-insert; MokaCacheBackend uses moka's built-in optionally_get_with for dedup. This eliminates duplicated dedup logic and the manual watch-channel machinery.
2. Restore type_name().as_ptr() for type_id derivation on CacheKey. Remove standalone type_id_of() function. The derivation lives in one place: CacheKey::type_id()/UnsizedCacheKey::type_id().
3. Remove approx_size_bytes from CacheBackend trait and Session debug output. Only approx_num_entries remains.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
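Point 1 hinges on the loader being an already-pinned, type-erased future rather than a generic closure: that keeps the trait object-safe and lets the typed caller erase a non-'static loader's type before crossing the trait boundary. A sketch of what such a default get-then-insert might look like (all signatures and names are assumptions, repeated here so the sketch is self-contained):

```rust
use std::any::Any;
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;
type Entry = Arc<dyn Any + Send + Sync>;

trait CacheBackend: Send + Sync {
    fn get<'a>(&'a self, key: &'a [u8]) -> BoxFuture<'a, Option<Entry>>;
    fn insert<'a>(&'a self, key: &'a [u8], value: Entry) -> BoxFuture<'a, ()>;

    // Taking `loader` as a boxed, pinned future (not `impl FnOnce`) keeps
    // the trait object-safe. Default: naive get-then-insert with no dedup;
    // a Moka backend would override this with single-flight loading.
    fn get_or_insert<'a>(
        &'a self,
        key: &'a [u8],
        loader: BoxFuture<'a, Entry>,
    ) -> BoxFuture<'a, Entry> {
        Box::pin(async move {
            if let Some(hit) = self.get(key).await {
                return hit;
            }
            let value = loader.await;
            self.insert(key, value.clone()).await;
            value
        })
    }
}

// Toy backend + executor so the default method can be exercised.
struct MemBackend(Mutex<HashMap<Vec<u8>, Entry>>);

impl CacheBackend for MemBackend {
    fn get<'a>(&'a self, key: &'a [u8]) -> BoxFuture<'a, Option<Entry>> {
        Box::pin(async move { self.0.lock().unwrap().get(key).cloned() })
    }
    fn insert<'a>(&'a self, key: &'a [u8], value: Entry) -> BoxFuture<'a, ()> {
        Box::pin(async move {
            self.0.lock().unwrap().insert(key.to_vec(), value);
        })
    }
}

fn block_on<T>(mut fut: Pin<Box<dyn Future<Output = T> + Send + '_>>) -> T {
    fn raw() -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn clone_raw(_: *const ()) -> RawWaker { raw() }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let backend = MemBackend(Mutex::new(HashMap::new()));
    // First call misses and runs the loader; second call hits the cache,
    // so its loader never runs.
    let v = block_on(backend.get_or_insert(b"k", Box::pin(async { Arc::new(1u32) as Entry })));
    assert_eq!(*v.downcast_ref::<u32>().unwrap(), 1);
    let v = block_on(backend.get_or_insert(b"k", Box::pin(async { Arc::new(2u32) as Entry })));
    assert_eq!(*v.downcast_ref::<u32>().unwrap(), 1); // cached value wins
    println!("ok");
}
```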
Force-pushed from 56a3273 to 00867ad.
Remove all methods that bypass CacheKey from WeakLanceCache (get, insert, get_or_insert, get_unsized, insert_unsized). Remove insert_unsized/get_unsized from LanceCache. Remove type_tag helper. All cache access now goes through CacheKey/UnsizedCacheKey. Make parse_cache_key return (empty, 0) instead of panicking on short keys. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from 369afe2 to 1ba4ac3.
The type_name().as_ptr() approach for type discrimination was unstable across crate boundaries due to monomorphization. Replace with an explicit fn type_id() -> &'static str that each CacheKey impl provides as a short human-readable literal (e.g. 'Vec<IndexMetadata>', 'Manifest'). Key format changes from user_key\0<8 LE bytes> to user_key\0<type_id str>. parse_cache_key() now returns (&[u8], &str).
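Under the new scheme the opaque key is the user key, a NUL separator, then the type-id string. A hedged sketch of the split; the real parse_cache_key may differ in edge-case handling (this version splits at the last NUL, since the type-id literal contains none, and falls back to empty values on malformed keys rather than panicking):

```rust
// Split an opaque cache key of the form `user_key\0<type_id str>` into
// its two halves. Splitting at the LAST NUL tolerates user keys that
// themselves contain NUL bytes.
fn parse_cache_key(key: &[u8]) -> (&[u8], &str) {
    match key.iter().rposition(|&b| b == 0) {
        Some(i) => {
            let user_key = &key[..i];
            let type_id = std::str::from_utf8(&key[i + 1..]).unwrap_or("");
            (user_key, type_id)
        }
        // No separator: return empty parts instead of panicking.
        None => (&[], ""),
    }
}

fn main() {
    assert_eq!(
        parse_cache_key(b"idx/part3\0Manifest"),
        (&b"idx/part3"[..], "Manifest")
    );
    assert_eq!(parse_cache_key(b"no-separator"), (&b""[..], ""));
    println!("ok");
}
```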
Add IvfIndexState struct and serialization to lance-index, enabling IVFIndex to export its reconstructable state (IVF model, quantizer metadata) without non-serializable handles. Add reconstruct_vector_index() which rebuilds an IVFIndex from cached state by re-opening FileReaders (cheap with warm metadata cache) instead of re-fetching global buffers from object storage. Also adds IvfQuantizationStorage::from_cached() to skip global buffer reads during reconstruction, and Session::file_metadata_cache() to expose the metadata cache for the reconstruction context. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reconstructed VectorIndex instances need the original cache key prefix to share partition entries with the two-tier cache backend. Also adds LanceCache::with_backend_and_prefix() and WeakLanceCache::prefix(). Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Previously the Session's index cache was hardcoded to Moka. This adds a `CacheBackend` trait so users can provide their own cache implementation. There is a default `MokaCacheBackend` that works the same as the existing cache.

```mermaid
flowchart LR
    LanceCache --> backend["dyn CacheBackend"]
    backend --> MokaCacheBackend
    backend --> CustomCacheBackend
```

The cache key construction is handled at the `LanceCache` layer, so `CacheBackend` implementations just receive opaque bytes for keys. They can optionally use `parse_cache_key` to get a unique type id. This might be used by caches to figure out how they can downcast and serialize / deserialize the entry.

```mermaid
flowchart LR
    CacheKey --> key["&[u8]"]
    key --"parse_cache_key"--> typeid
```
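As a concrete illustration of that last point, a serializing backend (say, Redis-backed) could branch on the parsed type id to pick a codec. Everything below is hypothetical except the type-id strings, which are the examples named in the commits; the codec names and `codec_for` helper are invented for the sketch:

```rust
// Split an opaque key of the form `user_key\0<type_id str>` at the last
// NUL; malformed keys yield empty parts.
fn parse_cache_key(key: &[u8]) -> (&[u8], &str) {
    match key.iter().rposition(|&b| b == 0) {
        Some(i) => (&key[..i], std::str::from_utf8(&key[i + 1..]).unwrap_or("")),
        None => (&[], ""),
    }
}

// Hypothetical routing: choose a (de)serializer from the type id so the
// backend knows how to persist the otherwise-opaque entry.
fn codec_for(key: &[u8]) -> &'static str {
    match parse_cache_key(key).1 {
        "Manifest" => "manifest-codec",
        "Vec<IndexMetadata>" => "index-metadata-codec",
        _ => "opaque-passthrough", // unknown types: keep in memory only
    }
}

fn main() {
    assert_eq!(codec_for(b"ds1/manifest\0Manifest"), "manifest-codec");
    assert_eq!(codec_for(b"ds1/other\0Unknown"), "opaque-passthrough");
    println!("ok");
}
```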