Changes from all commits (41 commits)
745f25a
Mega Cleanup
markovejnovic Feb 18, 2026
c80c09c
refactor: remove redundant FUSE error types, use io_to_errno helper
markovejnovic Feb 20, 2026
7a63d49
feat: add FuseReply trait and FuseResultExt for centralized FUSE erro…
markovejnovic Feb 20, 2026
98e906f
refactor: use fuse_reply in getattr
markovejnovic Feb 20, 2026
3d26de2
refactor: use fuse_reply in lookup
markovejnovic Feb 20, 2026
44ffc1f
refactor: use fuse_reply in open
markovejnovic Feb 20, 2026
74904cd
refactor: use fuse_reply in read
markovejnovic Feb 20, 2026
cc820ca
refactor: use fuse_reply in readdir
markovejnovic Feb 20, 2026
e7d5909
DCache with per-parent info
markovejnovic Feb 20, 2026
44d5f07
refactor: use DCache population tracking, remove readdir_populated fr…
markovejnovic Feb 20, 2026
f07db8b
refactor: use DCache population tracking in CompositeFs
markovejnovic Feb 20, 2026
7558e86
fix: update stale readdir_populated comment in async_fs.rs
markovejnovic Feb 20, 2026
bcf2f1e
feat: add ConcurrentBridge for lock-free inode address translation
markovejnovic Feb 20, 2026
a19e91d
fix: eliminate TOCTOU race in ConcurrentBridge::backward_or_insert
markovejnovic Feb 20, 2026
d9fdc04
feat: add CompositeRoot trait, ChildInner, and CompositeReader
markovejnovic Feb 20, 2026
e2f8215
test: extract async_backed inline tests to tests/async_backed_correct…
markovejnovic Feb 20, 2026
6fe9dd5
fix: add #[must_use] to CompositeReader::new
markovejnovic Feb 20, 2026
781d7bb
test: extract dcache inline tests to tests/dcache_correctness.rs
markovejnovic Feb 20, 2026
903392f
feat: add CompositeFs struct with FsDataProvider impl
markovejnovic Feb 20, 2026
4c55565
refactor: extract slot creation helper in register_child
markovejnovic Feb 20, 2026
5e31225
test: add integration tests for generic CompositeFs
markovejnovic Feb 20, 2026
aa989f7
feat: add domain roots (MesaRoot, StandardOrgRoot, GithubOrgRoot) and…
markovejnovic Feb 20, 2026
9885de0
refactor: wire CompositeFs into daemon, delete old composite.rs and C…
markovejnovic Feb 20, 2026
36d9fea
bug fixes
markovejnovic Feb 21, 2026
b735ac8
more fixes
markovejnovic Feb 21, 2026
7106d60
tests
markovejnovic Feb 21, 2026
61f5f30
more docs
markovejnovic Feb 21, 2026
e890c3d
thread-safety on forget
markovejnovic Feb 21, 2026
08d51c6
TOCTOU fixes
markovejnovic Feb 21, 2026
9b30b55
fix: atomicize forget slot GC and clean up name_to_slot
markovejnovic Feb 21, 2026
2dccf32
perf: use BTreeMap in DCache to eliminate readdir re-sort
markovejnovic Feb 21, 2026
2dd4d39
refactor: rename TrackedINode to ResolvedINode
markovejnovic Feb 21, 2026
754eeca
perf: use Arc<OsStr> for lookup cache key to reduce allocations
markovejnovic Feb 21, 2026
3b93bc4
fix: hide LoadedAddr::new_unchecked from public API docs
markovejnovic Feb 21, 2026
d9169af
docs: document get_or_try_init dedup limitation and cache invalidatio…
markovejnovic Feb 21, 2026
4d19859
slightly more cleanup
markovejnovic Feb 22, 2026
f274a5a
refactor: replace ouroboros with Arc in AsyncFs, InodeLifecycle, Fuse…
markovejnovic Feb 22, 2026
12fac6d
feat: add DCache::child_dir_addrs for prefetch discovery
markovejnovic Feb 22, 2026
e16c0a2
feat: prefetch child directories after readdir populates parent
markovejnovic Feb 22, 2026
eb39492
concurrency edge cases
markovejnovic Feb 22, 2026
defd963
fix: race in forget slot removal and unbounded prefetch spawning
markovejnovic Feb 22, 2026
5 changes: 5 additions & 0 deletions CLAUDE.md
@@ -43,6 +43,11 @@ cargo fmt --all && cargo clippy --all-targets --all-features -- -D warnings && c
- Channels: `tokio::sync::mpsc` for multi-producer, `tokio::sync::oneshot` for request-response
- Never block the async runtime — offload blocking work with `tokio::task::spawn_blocking`

## Testing

- Avoid writing tests inline in the same file as production code; put tests in a separate
  `tests/` directory.

## Dependencies

- Check for existing deps with `cargo tree` before adding new crates
394 changes: 394 additions & 0 deletions lib/cache/async_backed.rs

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions lib/cache/mod.rs
@@ -1,3 +1,5 @@
/// Async-backed cache implementation.
pub mod async_backed;
/// Cache eviction policies.
pub mod eviction;
/// File-backed cache implementation.
136 changes: 136 additions & 0 deletions lib/drop_ward.rs
@@ -0,0 +1,136 @@
//! Automatic, type-directed cleanup driven by reference counting.
//!
//! [`DropWard`] tracks how many live references exist for a given key and invokes a cleanup
//! callback when a key's count reaches zero. The cleanup logic is selected at the type level
//! through a zero-sized "tag" type that implements [`StatelessDrop`], keeping the ward itself
//! generic over *what* it manages without storing per-key values.
//!
//! This is designed for resources whose lifecycle is bound to an external context (e.g. GPU device
//! handles, connection pools, graphics pipelines) where Rust's built-in `Drop` cannot be used
//! because cleanup requires access to that context.
//!
//! # Design rationale
//!
//! The tag type `T` is constrained to be zero-sized. It exists only to carry the [`StatelessDrop`]
//! implementation at the type level — no `T` value is ever constructed or stored. This means a
//! single `DropWard` instance adds no per-key overhead beyond the key and its `usize` count.
//!
//! # Example
//!
//! ```ignore
//! struct GpuTextureDrop;
//!
//! impl StatelessDrop<wgpu::Device, TextureId> for GpuTextureDrop {
//! fn delete(device: &wgpu::Device, _key: &TextureId) {
//! // e.g. flush a deferred-destruction queue
//! device.poll(wgpu::Maintain::Wait);
//! }
//! }
//!
//! let mut ward: DropWard<wgpu::Device, TextureId, GpuTextureDrop> = DropWard::new(device);
//!
//! ward.inc(texture_id); // → 1
//! ward.inc(texture_id); // → 2
//! ward.dec(&texture_id); // → Some(1)
//! ward.dec(&texture_id); // → Some(0), calls GpuTextureDrop::delete(&device, &texture_id)
//! ```

use std::marker::PhantomData;

use rustc_hash::FxHashMap;

/// Type-level hook for cleanup that requires an external context.
///
/// Implement this on a zero-sized tag type. The tag is never instantiated — it only selects which
/// `delete` implementation a [`DropWard`] will call.
pub trait StatelessDrop<Ctx, K> {
/// Called exactly once when a key's reference count reaches zero.
///
/// `ctx` is the shared context owned by the [`DropWard`]. `key` is the key whose count just
/// reached zero. This callback fires synchronously inside [`DropWard::dec`]; avoid blocking or
/// panicking if the ward is used on a hot path.
fn delete(ctx: &Ctx, key: &K);
}

/// A reference-counted key set that triggers [`StatelessDrop::delete`] on the associated context
/// when any key's count drops to zero.
///
/// # Type parameters
///
/// - `Ctx` — shared context passed to `T::delete` (e.g. a device handle).
/// - `K` — the key type being reference-counted.
/// - `T` — a **zero-sized** tag type carrying the cleanup logic.
/// Will fail to compile if `size_of::<T>() != 0`.
///
/// # Concurrency
///
/// Not thread-safe. All access requires `&mut self`. Wrap in a `Mutex` or similar if shared across
/// threads.
///
#[derive(Debug, Clone)]
pub struct DropWard<Ctx, K, T> {
map: FxHashMap<K, usize>,
ctx: Ctx,
_marker: PhantomData<T>,
}

impl<Ctx, K, T> DropWard<Ctx, K, T>
where
K: Eq + std::hash::Hash,
T: StatelessDrop<Ctx, K>,
{
    /// Compile-time guard: `T` must be zero-sized.
    const _ASSERT_ZST: () = assert!(size_of::<T>() == 0, "T must be zero-sized");

    /// Create a new ward that will pass `ctx` to `T::delete` on cleanup.
    pub fn new(ctx: Ctx) -> Self {
        // Force evaluation of the guard: an associated const is only checked
        // when referenced, so without this the assertion would never fire.
        const { Self::_ASSERT_ZST };
        Self {
            map: FxHashMap::default(),
            ctx,
            _marker: PhantomData,
        }
    }

/// Increment the reference count for `key`, inserting it with a count
/// of 1 if it does not exist.
///
/// Returns the count **after** incrementing.
pub fn inc(&mut self, key: K) -> usize {
*self
.map
.entry(key)
.and_modify(|count| *count += 1)
.or_insert(1)
}

fn dec_by(&mut self, key: &K, by: usize) -> Option<usize> {
let curr = *self.map.get(key)?;
let new_count = curr.saturating_sub(by);
if new_count == 0 {
// Delete before removing from the map: if `delete` panics the
// entry remains and a subsequent `dec` can retry cleanup. The
// reverse order would silently lose the entry.
T::delete(&self.ctx, key);
self.map.remove(key);
} else if let Some(slot) = self.map.get_mut(key) {
*slot = new_count;
}
Some(new_count)
}

/// Decrement the reference count for `key`.
///
/// If the count reaches zero, the key is removed and `T::delete` is
/// called synchronously with the ward's context. Returns `Some(0)` in
/// this case — the key will no longer be tracked.
///
/// Returns `None` if `key` was not present (no-op).
pub fn dec(&mut self, key: &K) -> Option<usize> {
self.dec_by(key, 1)
}

    /// Decrement the reference count for `key` by `count`, saturating at zero.
    ///
    /// Semantics otherwise match [`DropWard::dec`]: reaching zero removes the key and calls
    /// `T::delete`; `None` means `key` was not tracked.
    pub fn dec_count(&mut self, key: &K, count: usize) -> Option<usize> {
        self.dec_by(key, count)
    }
}