Cargo compiles each tests/*.rs as a separate binary. The native .NET core has global state — initializing from a second binary causes 'FoundryLocalCore has already been initialized'. Consolidating all test modules into tests/integration.rs ensures the OnceLock singleton initializes once and all 30 tests share it. Also adds --test-threads=1 to CI since tests share model load/unload state and cannot safely run in parallel. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Audio tests without explicit temperature produced non-deterministic
transcription output. Set language('en') and temperature(0.0) on all
audio test variants for consistent results across platforms.
- Fix catalog unknown alias assertion to match actual error message
('Unknown model alias' not 'not found').
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The free_native_buffer function uses CoTaskMemFree on Windows, which lives in ole32.lib. Without this link directive, the integration test binary fails to link with LNK2019 unresolved external symbol. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
All chat and audio tests now properly unload models after use, matching the try/finally pattern in the JS SDK tests. Setup helpers return the Model alongside the client so tests can call model.unload() at the end. Also fixes model_load_manager should_load_model test to unload after loading. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Covers previously untested Model public API: alias(), id(), variants(), selected_variant(), is_cached(), path(), select_variant() (success and error paths). Total integration tests: 38. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Tests the full web server lifecycle: start service → load model → REST call to v1/chat/completions (non-streaming and SSE streaming) → verify response → stop service → unload model. Also tests that urls() returns the correct addresses after start. Total integration tests: 41. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Packages the crate with cargo package and uploads the .crate file, matching the pattern used by CS (.nupkg) and JS (.tgz) builds. Also uploads flcore logs on all runs (including failures). Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
All tests that produce response content now println! the actual output (chat completions, transcriptions, REST responses, model metadata). Output is visible with --nocapture (added to CI integration test step). Helps diagnose failures by showing actual vs expected values. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
prathikr left a comment:
awesome first iteration! some things to look over but great start
Pull request overview
Adds a new Rust SDK v2 under sdk_v2/rust that wraps the Foundry Local Core native library (FFI + async APIs), alongside refreshed Rust samples and GitHub Actions CI support for building/testing the Rust SDK.
Changes:
- Introduces the foundry-local-sdk Rust crate (manager/catalog/model lifecycle + OpenAI-compatible chat/audio clients).
- Adds Rust integration tests and shared test utilities.
- Updates Rust samples workspace and CI workflows for Rust build/test packaging (including WinML option on Windows).
Reviewed changes
Copilot reviewed 45 out of 45 changed files in this pull request and generated 15 comments.
| File | Description |
|---|---|
| sdk_v2/rust/tests/integration.rs | Single integration test binary covering manager/catalog/model/chat/audio/web service flows |
| sdk_v2/rust/tests/common/mod.rs | Shared test config/utilities (test-data-shared pathing, tool definition helper) |
| sdk_v2/rust/src/types.rs | Shared SDK types (ModelInfo, response format, tool choice, etc.) |
| sdk_v2/rust/src/openai/mod.rs | Exposes OpenAI-compatible client modules |
| sdk_v2/rust/src/openai/chat_client.rs | Chat client + streaming wrapper and settings serialization |
| sdk_v2/rust/src/openai/audio_client.rs | Audio transcription client + streaming wrapper and settings serialization |
| sdk_v2/rust/src/model_variant.rs | Model variant operations (download/load/unload/path/cache) + client factories |
| sdk_v2/rust/src/model.rs | Alias-level model wrapper with variant selection logic |
| sdk_v2/rust/src/lib.rs | Crate root exports and re-exports of OpenAI request/response types |
| sdk_v2/rust/src/foundry_local_manager.rs | Singleton manager initialization, catalog access, web service lifecycle |
| sdk_v2/rust/src/error.rs | FoundryLocalError + crate-wide Result<T> alias |
| sdk_v2/rust/src/detail/model_load_manager.rs | Load/unload/list logic via HTTP when external service is configured, else via core |
| sdk_v2/rust/src/detail/mod.rs | Internal module wiring / re-exports |
| sdk_v2/rust/src/detail/core_interop.rs | FFI bridge: dynamic library loading, command execution, streaming callback trampoline |
| sdk_v2/rust/src/configuration.rs | User config -> native parameter map conversion + unit tests |
| sdk_v2/rust/src/catalog.rs | Model discovery + TTL cache + alias/id lookup helpers |
| sdk_v2/rust/examples/tool_calling.rs | Tool-calling example using streaming tool_calls assembly |
| sdk_v2/rust/examples/interactive_chat.rs | Interactive streaming chat example |
| sdk_v2/rust/examples/chat_completion.rs | Basic non-streaming + streaming chat completion example |
| sdk_v2/rust/build.rs | Build-time NuGet download/extract of native libs + RID selection |
| sdk_v2/rust/README.md | Rust SDK README (installation, features, platform support, quick start) |
| sdk_v2/rust/GENERATE-DOCS.md | Rust docs generation instructions (cargo doc) |
| sdk_v2/rust/Cargo.toml | New Rust crate manifest (deps/features/examples) |
| sdk_v2/rust/.rustfmt.toml | Rustfmt settings |
| sdk_v2/rust/.clippy.toml | Clippy config (MSRV) |
| samples/rust/tool-calling-foundry-local/src/main.rs | Updated tool-calling sample using SDK v2 |
| samples/rust/tool-calling-foundry-local/README.md | Tool-calling sample instructions |
| samples/rust/tool-calling-foundry-local/Cargo.toml | Tool-calling sample manifest |
| samples/rust/native-chat-completions/src/main.rs | Native (non-HTTP) chat completions sample |
| samples/rust/native-chat-completions/README.md | Native chat sample instructions |
| samples/rust/native-chat-completions/Cargo.toml | Native chat sample manifest |
| samples/rust/foundry-local-webserver/src/main.rs | Web service start + REST streaming sample |
| samples/rust/foundry-local-webserver/README.md | Web service sample instructions |
| samples/rust/foundry-local-webserver/Cargo.toml | Web service sample manifest |
| samples/rust/audio-transcription-example/src/main.rs | Audio transcription (sync + streaming) sample |
| samples/rust/audio-transcription-example/README.md | Audio transcription sample instructions |
| samples/rust/audio-transcription-example/Cargo.toml | Audio transcription sample manifest |
| samples/rust/README.md | Samples index updated to new set of examples |
| samples/rust/Cargo.toml | Workspace members updated (replaces old hello sample) |
| samples/rust/hello-foundry-local/src/main.rs | Removed legacy hello sample |
| samples/rust/hello-foundry-local/README.md | Removed legacy hello sample README |
| samples/rust/hello-foundry-local/Cargo.toml | Removed legacy hello sample manifest |
| .github/workflows/rustfmt.yml | Removed legacy rustfmt workflow targeting old paths |
| .github/workflows/foundry-local-sdk-build.yml | Adds Rust build jobs (win/winml/ubuntu/macos) |
| .github/workflows/build-rust-steps.yml | New reusable workflow for Rust fmt/clippy/build/test/package/artifacts |
```yaml
- name: Run integration tests
  if: ${{ inputs.run-integration-tests }}
  run: cargo test --test '*' ${{ env.CARGO_FEATURES }} -- --test-threads=1 --nocapture
```
cargo test --test '*' is not a valid way to run all integration tests; --test expects a concrete test target name, so this will typically fail with "no test target named '*'" (or run nothing). To execute all integration tests, use cargo test --tests (or list the specific integration test target like --test integration).
Suggested change:
```yaml
run: cargo test ${{ env.CARGO_FEATURES }} --tests -- --test-threads=1 --nocapture
```
Fixed. Changed cargo test --test '*' to cargo test --tests. Also added --include-ignored to run ignored integration tests in CI.
sdk_v2/rust/tests/integration.rs (Outdated)
```rust
#[tokio::test]
async fn should_throw_when_completing_streaming_chat_with_invalid_callback() {
    let (client, model) = setup_chat_client().await;
    let messages: Vec<ChatCompletionRequestMessage> = vec![];

    let result = client.complete_streaming_chat(&messages, None).await;
    assert!(result.is_err(), "Expected error even with empty messages");
```
This test name mentions an "invalid callback", but the body only passes an empty messages array (same as the preceding test) and doesn't exercise any callback validation. This makes the intent unclear and duplicates coverage. Consider removing this test or rewriting it to actually validate the intended error condition.
Fixed. Removed the duplicate should_throw_when_completing_streaming_chat_with_invalid_callback test.
```rust
println!("Downloading model...");
model
    .download(Some(|progress: &str| {
        print!("\r {progress:.1}%");
```
progress is a &str; using {progress:.1}% applies string precision and will truncate the progress string to 1 character. If you want to show the reported progress value, print {progress} directly (or parse to a number and then format with one decimal).
Suggested change:
```rust
print!("\r {progress}%");
```
Fixed. Changed {progress:.1}% to {progress}% in all 4 sample apps.
```rust
let mut request = self.settings.serialize(&self.model_id, path_str);
if let Some(map) = request.as_object_mut() {
    map.insert("stream".into(), json!(true));
}
```
For audio transcription streaming, this adds stream: true into the OpenAI request payload. The JS and C# SDKs implement streaming via the native callback without adding a stream field to the audio_transcribe request (see sdk_v2/js/src/openai/audioClient.ts and sdk_v2/cs/src/OpenAI/AudioClient.cs), so this is an API-shape inconsistency that may cause request validation differences across SDKs. Consider removing the stream field (or aligning all SDKs / documenting why Rust needs it).
Fixed. Removed stream: true from the audio transcription streaming request. Streaming is now handled purely via the native callback, matching JS/C# SDKs.
```rust
/// C-ABI trampoline that forwards chunks from the native library into a Rust
/// closure stored behind `user_data`.
unsafe extern "C" fn streaming_trampoline(
    data: *const u8,
    length: i32,
    user_data: *mut std::ffi::c_void,
) {
    if data.is_null() || length <= 0 {
        return;
    }
    let closure = &mut *(user_data as *mut Box<dyn FnMut(&str)>);
    let slice = std::slice::from_raw_parts(data, length as usize);
    if let Ok(chunk) = std::str::from_utf8(slice) {
        closure(chunk);
    }
}
```
streaming_trampoline invokes a Rust closure directly from an extern "C" callback. If the user callback panics, the unwind would cross the FFI boundary, which is undefined behavior (and may abort or corrupt state). Please wrap the callback invocation in std::panic::catch_unwind and convert any panic into a captured error (or abort the stream safely) so no panic can unwind past the C ABI boundary.
Fixed. Wrapped streaming_trampoline body in std::panic::catch_unwind(AssertUnwindSafe(...)) to prevent UB from panics crossing the FFI boundary.
sdk_v2/rust/tests/integration.rs (Outdated)
```rust
#[tokio::test]
async fn should_transcribe_audio_without_streaming_with_temperature() {
    let (mut client, model) = setup_audio_client().await;
    client.language("en").temperature(0.0);

    let response = client
        .transcribe(&audio_file())
        .await
        .expect("transcribe with temperature failed");
```
These "with_temperature" transcription tests set temperature(0.0), which is the same value used in the non-"with_temperature" variants. As written, they don't actually verify that changing temperature is accepted/applied. Consider using a non-default temperature (or asserting the request/behavior difference) so the test meaningfully covers the setting.
Fixed. Changed temperature from 0.0 to 0.5 in the with_temperature transcription test.
sdk_v2/rust/tests/integration.rs (Outdated)
```rust
#[tokio::test]
async fn should_transcribe_audio_with_streaming_with_temperature() {
    let (mut client, model) = setup_audio_client().await;
    client.language("en").temperature(0.0);

    let mut full_text = String::new();

    let mut stream = client
        .transcribe_streaming(&audio_file())
        .await
        .expect("transcribe_streaming with temperature setup failed");
```
This streaming "with_temperature" transcription test sets temperature(0.0), matching the other streaming test. That means it doesn't actually validate that temperature changes are supported. Consider using a different temperature value (or asserting on request metadata) to make this test cover the intended behavior.
Fixed. Changed temperature from 0.0 to 0.5 in the streaming with_temperature transcription test.
```rust
pub fn create(config: FoundryLocalConfig) -> Result<&'static Self> {
    // If already initialised, return the existing instance.
    if let Some(mgr) = INSTANCE.get() {
        return Ok(mgr);
    }

    let internal_config = Configuration::new(config)?;
    let core = Arc::new(CoreInterop::new(&internal_config)?);

    // Send the configuration map to the native core.
    let init_params = json!({ "Params": internal_config.params });
    core.execute_command("initialize", Some(&init_params))?;

    let service_endpoint = internal_config.params.get("WebServiceExternalUrl").cloned();

    let model_load_manager =
        Arc::new(ModelLoadManager::new(Arc::clone(&core), service_endpoint));

    let catalog = Catalog::new(Arc::clone(&core), Arc::clone(&model_load_manager))?;

    let manager = Self {
        _config: internal_config,
        core,
        _model_load_manager: model_load_manager,
        catalog,
        urls: std::sync::Mutex::new(Vec::new()),
    };

    // Attempt to store; if another thread raced us, return whichever won.
    match INSTANCE.set(manager) {
        Ok(()) => Ok(INSTANCE.get().unwrap()),
        Err(_) => Ok(INSTANCE.get().unwrap()),
    }
```
FoundryLocalManager::create is intended to be a singleton initializer, but the current pattern (INSTANCE.get() check followed by initialization and then INSTANCE.set) is not atomic. If two threads call create concurrently, both can run initialize and build a catalog before one loses the set race, which can reintroduce the "already initialized" failure this singleton is meant to prevent. Consider using OnceLock::get_or_try_init (or equivalent) so the initialization closure runs at most once and only one thread executes the native initialize call.
Fixed. Replaced non-atomic get()+set() pattern with Once + OnceLock to ensure initialization runs exactly once (stable Rust compatible — get_or_try_init is nightly-only).
```rust
    model
        .download(Some(|progress: &str| {
            print!("\r {progress:.1}%");
            io::stdout().flush().ok();
        }))
        .await?;
    println!();
}
```
In this progress callback, progress is a &str, and the format specifier {progress:.1} applies string precision (truncation), so it will only print the first character of the progress string. If the intent is to display the full progress (or a numeric percentage), print {progress} as-is or parse to a number before formatting.
Fixed. Changed {progress:.1} to {progress}%.
```rust
print!("Downloading model {model_alias}...");
model
    .download(Some(move |progress: &str| {
        print!("\rDownloading model... {progress:.1}%");
```
Here progress is a &str, and {progress:.1}% will truncate the string to a single character (string precision) rather than formatting a numeric percentage. Consider printing {progress} as-is, or parsing it to a numeric type before applying float formatting.
Suggested change:
```rust
print!("\rDownloading model... {progress}%");
```
Fixed. Changed {progress:.1}% to {progress}%.
Add comprehensive documentation matching the quality and structure of the JS and C# SDK READMEs. The expanded README now includes:
- Feature overview listing all SDK capabilities
- Installation instructions with tokio dependency guidance
- Quick Start example with numbered steps
- Usage sections for catalog browsing, model lifecycle, chat completions, streaming responses, tool calling, response format options, audio transcription, and embedded web service
- Chat client settings reference table
- Error handling guide with FoundryLocalError enum variants
- Configuration reference table with all FoundryLocalConfig fields
- Links to sample applications in samples/rust/
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Split the monolithic integration.rs into separate test modules while keeping a single test binary (tests/integration/main.rs) to avoid .NET native runtime re-initialization errors:
- manager_test.rs: FoundryLocalManager tests
- catalog_test.rs: Catalog operation tests
- model_test.rs: Model operations (load, unload, cache, introspection)
- chat_client_test.rs: ChatClient (completions, streaming, tool calling)
- audio_client_test.rs: AudioClient (transcription, streaming)
- web_service_test.rs: REST API tests

Additional fixes:
- Remove duplicate 'invalid callback' test (was identical to empty messages streaming test)
- Use temperature(0.5) instead of 0.0 in with_temperature audio tests so they actually validate non-default temperature behavior
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…rapping
Resolve all 37 review comments from nenad1002, prathikr, and copilot-bot:
Safety & correctness:
- Fix Windows buffer deallocation: CoTaskMemFree -> LocalFree (matches FreeHGlobal)
- Wrap FFI streaming callback in catch_unwind to prevent UB from panics
- Fix singleton TOCTOU race with Once + OnceLock (stable Rust compatible)
- Replace all .unwrap() on mutex locks with FoundryLocalError::Internal
API design:
- Wrap Model/ModelVariant in Arc for efficient sharing (no full clones)
- Change ResponseBuffer length to u32, move by value (consume-once)
- Return PathBuf for file system paths, remove unused struct fields
- Public APIs return Result instead of swallowing errors
- Define only platform extension, build native lib filename dynamically
- Reduce resolve_library_path to 3 strategies (config, build.rs, exe-sibling)
CI/workflow:
- Weave Rust jobs into platform-specific blocks (match cs/js format)
- Remove ubuntu (known issues), enable tests on all platforms
- Fix cargo test command (--tests, --include-ignored), fix boolean default
- Add comments for clippy and --allow-dirty
Tests:
- Split monolithic integration.rs into 6 per-feature test modules
- Remove duplicate invalid-callback test
- Use non-default temperature (0.5) in with_temperature tests
Samples:
- Fix {progress:.1}% -> {progress}% in all 4 samples (string truncation bug)
Consistency & performance:
- Remove stream:true from audio transcription (match JS/C# SDKs)
- Isolate unsafe code with SAFETY comments throughout core_interop
- Reuse reqwest::Client in ModelLoadManager (connection pooling)
- Fix libs_already_present to check specific core library file
Bug fix (not in review):
- Auto-detect WinAppSDK Bootstrap DLL and set Bootstrap=true in config
params before native initialize (matches JS SDK behavior, required for
WinML/OpenVINO execution providers)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Additional fix: WinAppSDK Bootstrap auto-detection

During testing, discovered that the Rust SDK was failing to load WinML/OpenVINO execution providers because the native core's WinAppSDK bootstrapper was never being triggered.

Root cause: The Rust SDK pre-loaded

Fix:
d0f90cd to 93ae47a
```rust
.await
.map_err(|e| FoundryLocalError::CommandExecution {
    reason: format!("task join error: {e}"),
})?
```
Ah, this is swallowed by tokio. If you use Claude, ask it to return the error out, then change the method execute_command_async to return Result<String, FoundryLocalError>.
This will force you to return Result<(), FoundryLocalError> or similar from the functions above.
load() should return an error if there is an error.
I've restructured this so the error can no longer be swallowed.
Rather than just fixing the consumer side, I moved the error propagation into execute_command_streaming_channel itself. The channel now carries Result<String> items instead of plain String, and after the FFI call completes, any error from the native core response buffer is sent as the final channel item:
```rust
tokio::task::spawn_blocking(move || {
    let tx_chunk = tx.clone();
    let result = this.execute_command_streaming(
        &command,
        params.as_ref(),
        move |chunk: &str| {
            let _ = tx_chunk.send(Ok(chunk.to_owned()));
        },
    );
    // Surface any error through the channel so it
    // cannot be silently swallowed.
    if let Err(e) = result {
        let _ = tx.send(Err(e));
    }
});
```

This eliminates the JoinHandle<Result<String>> from the return type entirely — it's now impossible for consumers to accidentally drop the error. ChatCompletionStream and AudioTranscriptionStream are much simpler as a result: no handle field, no close() method, and poll_next just forwards Ok/Err items from the channel. Net ~100 lines deleted.
…wing them

When the mpsc channel closes in poll_next, the Stream implementations now check the JoinHandle for errors from the native core response buffer. Previously, these errors were silently dropped by tokio if the consumer only iterated the stream without calling close(). The fix polls the JoinHandle when Ready(None) is received from the channel:
- Ok(Ok(_)): stream ends normally
- Ok(Err(e)): surfaces the FFI error as the final stream item
- Err(e): surfaces the tokio join error as the final stream item
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Move error propagation into execute_command_streaming_channel itself: the channel now carries Result<String> items, and the blocking task sends any error from the native core response buffer as the final channel item. This eliminates the JoinHandle<Result<String>> from the return type, making it impossible for consumers to silently swallow errors.

Simplifies ChatCompletionStream and AudioTranscriptionStream:
- Remove handle field and close() method
- poll_next now just forwards Ok/Err items from the channel
- Errors surface naturally as stream items
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Follow-up to the channel restructure — samples also referenced the now-removed close() method on ChatCompletionStream and AudioTranscriptionStream. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Add error_for_status() to http_get in model_load_manager (HTTP 4xx/5xx was silently treated as success)
- Fix singleton init caching failures permanently (use double-checked locking with OnceLock, only cache success, preserve error types)
- URL-encode model IDs in external service paths (add urlencoding crate)
- Change builder setters to consuming mut self -> Self pattern (per reviewer guidance for idiomatic Rust fluent builders)
- Fix thundering herd on catalog refresh (add tokio::sync::Mutex gate, consolidate 3 data mutexes into single CatalogState to prevent split-brain reads)
- Fix empty chunk busy-wait in streams (use loop instead of wake_by_ref + Poll::Pending)
- Update all call sites: examples, tests, samples, README, doc comments
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Tier 1 — Quick & Safe Fixes:
- build.rs: Fix DLL name cache check (foundry_local_core -> Microsoft.AI.Foundry.Local.Core)
- build.rs: Fix wrong system lib linkage (ole32 -> kernel32)
- foundry_local_manager.rs: Replace expect() panic with safe match on OnceLock race
- README.md: Fix model lifecycle example (Arc<Model> + select_variant) and response format example (separate let bindings for consuming builder)
- lib.rs: Re-export Result type alias
- chat_client.rs, audio_client.rs: Remove redundant AI voice comments

Tier 2 — Medium Refactors:
- examples/tool_calling.rs: Fix multi-tool tracking with HashMap<u32, StreamedToolCall>
- examples/chat_completion.rs: Prefer known chat model aliases over random HashMap first()
- catalog.rs: Add CacheInvalidator (AtomicBool) shared with ModelVariant; download() and remove_from_cache() now invalidate catalog cache
- openai/json_stream.rs: New generic JsonStream<T: DeserializeOwned> replaces duplicate ChatCompletionStream and AudioTranscriptionStream implementations

Tier 3 — Complex Fixes:
- core_interop.rs: Add StreamingCallbackState with UTF-8 accumulation buffer; partial multi-byte sequences split across FFI callbacks are now accumulated and flushed correctly instead of being silently dropped
- core_interop.rs: Forward final Ok(result) from execute_command_streaming through the channel instead of silently dropping it
- model_variant.rs: Document is_cached() IPC cost; recommend Catalog::get_cached_models()
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The native core's response buffer contains a status/summary string, not a JSON stream chunk. Forwarding it through the channel caused JsonStream to fail deserialization with 'expected value' error. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
| Method | Signature | Description |
|--------|-----------|-------------|
| `create` | `fn create(config: FoundryLocalConfig) -> Result<&'static Self>` | Initialise the SDK. First call creates the singleton; subsequent calls return the existing instance (config is ignored after first call). |
So as already suggested, let's go with builder pattern, it's idiomatic in Rust, and we already break the contract on this method in C++ anyway.
Also btw, where is logger? Did you port logger?
Cause in C++, we allow custom logger:
```cpp
explicit FoundryLocalManager(Configuration configuration, ILogger* logger = nullptr);
```
| Method | Signature | Description |
|--------|-----------|-------------|
| `create` | `fn create(config: FoundryLocalConfig) -> Result<&'static Self>` | Initialise the SDK. First call creates the singleton; subsequent calls return the existing instance (config is ignored after first call). |
| `catalog` | `fn catalog(&self) -> &Catalog` | Access the model catalog. |
| `urls` | `fn urls(&self) -> Result<Vec<String>>` | URLs the local web service is listening on. Empty until `start_web_service` is called. |
| `start_web_service` | `async fn start_web_service(&self) -> Result<Vec<String>>` | Start the local web service and return listening URLs. |
Afaik, this shouldn't return anything; why do you return listening URLs?
GetUrls() returns the list of urls (unless the API changed recently), and you pass urls through configuration.
Either way, the returned urls can probably be [&str], because you set them beforehand and the FL Manager owns them. The current code does an inefficient clone to return them, since the FL Manager has to own the urls to move them in the first place.
Write to Claude:
urls can almost certainly be made owned by the manager; can you make a GetUrls method that returns [&str] and make StartServer not return anything except the Result that might be needed?
```rust
let config = FoundryLocalConfig {
    log_level: Some(LogLevel::Debug),
    ..FoundryLocalConfig::new("my_app")
};
```
This will use the builder pattern in future, so examples will change slightly to be idiomatic.
| Method | Signature | Description |
|--------|-----------|-------------|
| `name` | `fn name(&self) -> &str` | Catalog name as reported by the native core. |
| `update_models` | `async fn update_models(&self) -> Result<()>` | Refresh catalog if cache expired or invalidated. |
| `get_models` | `async fn get_models(&self) -> Result<Vec<Arc<Model>>>` | Return all known models. |
ListModels. This is the method whose name we don't want to change. Vec<Arc<Model>> is probably correct; I don't think we can do better.
| Method | Signature | Description |
|--------|-----------|-------------|
| `get_model` | `async fn get_model(&self, alias: &str) -> Result<Arc<Model>>` | Look up a model by alias. |
| `get_model_variant` | `async fn get_model_variant(&self, id: &str) -> Result<Arc<ModelVariant>>` | Look up a variant by unique id. |
| `get_cached_models` | `async fn get_cached_models(&self) -> Result<Vec<Arc<ModelVariant>>>` | Return only variants cached on disk. |
| `get_loaded_models` | `async fn get_loaded_models(&self) -> Result<Vec<String>>` | Return ids of models currently loaded in memory. |
Should return ModelVariant, not String, unless, again, my API knowledge is out of date.
}

/// Convenience alias used throughout the SDK.
pub type Result<T> = std::result::Result<T, FoundryLocalError>;
The internal alias in error.rs is fine for our own code, but remove `pub use self::error::Result` from lib.rs. Users should use `Result<T, FoundryLocalError>` explicitly; otherwise we shadow the standard `Result`.
Let's be very explicit in the public API: not `Result<()>`, but `Result<(), FoundryLocalError>`.
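A minimal sketch of that "be explicit" point, with stand-in `FoundryLocalError` and `load_model` definitions (illustrative, not the SDK's actual items):

```rust
// Stand-in error type for illustration.
#[derive(Debug)]
enum FoundryLocalError {
    Validation(String),
}

// Public signature spells out the error type directly:
// `Result<(), FoundryLocalError>`, not a crate-alias `Result<()>`.
fn load_model(alias: &str) -> Result<(), FoundryLocalError> {
    if alias.is_empty() {
        return Err(FoundryLocalError::Validation(
            "alias must not be empty".into(),
        ));
    }
    Ok(())
}
```

With no `pub use` of the alias, rustdoc and IDE hovers show callers the concrete error type at every call site.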
| `download` | `async fn download<F>(&self, progress: Option<F>) -> Result<()>` | Download the selected variant. `F: FnMut(&str) + Send + 'static` |
| `path` | `async fn path(&self) -> Result<PathBuf>` | Local file-system path of the selected variant. |
| `load` | `async fn load(&self) -> Result<()>` | Load the selected variant into memory. |
| `unload` | `async fn unload(&self) -> Result<String>` | Unload the selected variant from memory. |
| `remove_from_cache` | `async fn remove_from_cache(&self) -> Result<String>` | Remove the selected variant from the local cache. |
| `id` | `fn id(&self) -> &str` | Unique identifier of the selected variant. |
| `variants` | `fn variants(&self) -> &[ModelVariant]` | All variants in this model. |
| `selected_variant` | `fn selected_variant(&self) -> &ModelVariant` | Currently selected variant. |
| `select_variant` | `fn select_variant(&mut self, id: &str) -> Result<()>` | Select a variant by id. |
Let's make this `select_variant(&self, id: &str)`, and write to the AI to use interior mutability to achieve it. The selected variant is not really state we need to expose mutability for. This goes for both Model and ModelVariant.
/// Represents one specific variant of a model (a particular id within an
/// alias group).
#[derive(Debug, Clone)]
You are leaking internal implementation details with Debug here; Debug gives the ability to print the contents of the object.
Consider implementing a custom `impl fmt::Debug for ModelVariant` that only returns identifying info. Same for Model.
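A sketch of such a hand-written `Debug` impl (field names are illustrative): it prints identifying info and signals hidden internals with `..` via `finish_non_exhaustive`.

```rust
use std::fmt;

struct ModelVariant {
    id: String,
    alias: String,
    native_handle: usize, // internal detail we don't want in Debug output
}

impl fmt::Debug for ModelVariant {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("ModelVariant")
            .field("id", &self.id)
            .field("alias", &self.alias)
            // `..` in the output marks that fields are deliberately omitted.
            .finish_non_exhaustive()
    }
}
```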
pub text: String,                             // The transcribed text
pub language: Option<String>,                 // Language of input audio (if detected)
pub duration: Option<f64>,                    // Duration in seconds (if available)
pub segments: Option<Vec<serde_json::Value>>, // Transcription segments (if available)
Don't return serde_json in the public API; use a string or dedicated types instead:
`pub segments: Option<Vec<TranscriptionSegment>>`
`pub words: Option<Vec<TranscriptionWord>>`
pub cached: bool,
pub task: Option<String>,
pub runtime: Option<Runtime>,
pub file_size_mb: Option<f64>,
Should be u64 (measuring in bytes), or if you want to keep f64, have a check in place so it cannot go negative.
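A tiny sketch of the u64 option, with a hypothetical `VariantInfo` type: store whole bytes (which cannot go negative) and derive MB only for display.

```rust
// Illustrative stand-in for the variant metadata struct.
struct VariantInfo {
    file_size_bytes: u64, // whole bytes; unsigned, so never negative
}

impl VariantInfo {
    /// Megabytes, computed on demand for display purposes.
    fn file_size_mb(&self) -> f64 {
        self.file_size_bytes as f64 / (1024.0 * 1024.0)
    }
}
```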
pub struct ModelInfo {
    pub id: String,
    pub name: String,
    pub version: i64,
    pub supports_tool_calling: Option<bool>,
    pub max_output_tokens: Option<i64>,
    pub min_fl_version: Option<String>,
    pub created_at_unix: i64,
| Method | Signature | Description |
|--------|-----------|-------------|
| `name` | `fn name(&self) -> &str` | Catalog name as reported by the native core. |
| `update_models` | `async fn update_models(&self) -> Result<()>` | Refresh catalog if cache expired or invalidated. |
Again, @prathikr can give feedback here, but I don't think this method is public
Add Rust SDK v2

Introduces the Foundry Local Rust SDK (`sdk_v2/rust`), updated Rust samples (`samples/rust`), and CI pipeline support.

**SDK (`sdk_v2/rust`)**

A safe, async Rust wrapper over the Foundry Local Core native library, providing:

- `FoundryLocalManager` — singleton entry point that initializes the native engine, exposes the model catalog, and manages the local web service lifecycle.
- `Catalog` — async model discovery with a 6-hour TTL cache. Lookup by alias or variant ID, with helpful error messages listing available alternatives on miss.
- `Model` / `ModelVariant` — full model lifecycle: download (with streaming progress), load, unload, cache inspection, and removal.
- `ChatClient` — OpenAI-compatible chat completions (non-streaming and streaming via `futures_core::Stream`). Supports temperature, top-p/top-k, response format (text, JSON, JSON schema, Lark grammar), and tool calling.
- `AudioClient` — OpenAI-compatible audio transcription (non-streaming and streaming). Accepts `impl AsRef<Path>` for file paths.
- `thiserror`-based `FoundryLocalError` enum with discriminated variants (`LibraryLoad`, `CommandExecution`, `ModelOperation`, `Validation`, etc.) and a crate-wide `Result<T>` alias.
- Native interop (`detail/core_interop.rs`) — dynamically loads the native `.dll`/`.so`/`.dylib` via `libloading`, with platform-aware memory deallocation and a trampoline pattern for streaming callbacks. Async methods use `tokio::task::spawn_blocking` to keep the runtime unblocked.
- Build script (`build.rs`) — automatically downloads NuGet packages (Core, ORT, GenAI) at compile time, extracts platform-specific native libraries, and caches them across builds. Supports `winml` and `nightly` feature flags.
- Re-exports `async-openai` request/response types for convenience so callers don't need a direct dependency.

**Samples (`samples/rust`)**

Replaces the old `hello-foundry-local` sample with four focused examples:

- `native-chat-completions` — basic sync + streaming chat completion
- `tool-calling-foundry-local` — multi-turn tool calling with streaming chunk assembly
- `audio-transcription-example` — audio file transcription (sync and streaming)
- `foundry-local-webserver` — starts the local web service and makes requests via the OpenAI-compatible HTTP endpoint

Each sample has its own `Cargo.toml` and `README.md`, and is included in the workspace.

**CI (`.github/workflows`)**

- `build-rust-steps.yml` — reusable composite action for Rust SDK build + test steps.
- `foundry-local-sdk-build.yml` — workflow that triggers on `sdk_v2/rust/**` and `samples/rust/**` changes.

**Testing**

- Integration test modules: `manager_test`, `catalog_test`, `model_test`, `model_load_manager_test`, `chat_client_test`, `audio_client_test`.
- Integration tests are `#[ignore]`d by default (they require the native library at runtime) and can be enabled in CI with `--include-ignored`.
- Shared test fixtures (`tests/common/mod.rs`) provide config, model aliases, and helper functions.
- `cargo clippy --all-targets --all-features` passes with zero code warnings.