Add live audio transcription streaming support to Foundry Local C# SDK #485
Open
added 2 commits on March 10, 2026 18:09
Here's the updated PR description based on the latest changes (renamed types, CoreInterop routing fix, mermaid updates):
Title: Add live audio transcription streaming support to Foundry Local C# SDK
Description:
Adds real-time audio streaming support to the Foundry Local C# SDK, enabling live microphone-to-text transcription via ONNX Runtime GenAI's StreamingProcessor API (Nemotron ASR).
The existing `OpenAIAudioClient` only supports file-based transcription. This PR introduces `LiveAudioTranscriptionSession`, which accepts continuous PCM audio chunks (e.g., from a microphone) and returns partial/final transcription results as an async stream.

## What's included
### New files

- `src/OpenAI/LiveAudioTranscriptionClient.cs` — streaming session with `StartAsync()`, `AppendAsync()`, `GetTranscriptionStream()`, `StopAsync()`
- `src/OpenAI/LiveAudioTranscriptionTypes.cs` — `LiveAudioTranscriptionResult` and `CoreErrorResponse` types

### Modified files

- `src/OpenAI/AudioClient.cs` — added `CreateLiveTranscriptionSession()` factory method
- `src/Detail/ICoreInterop.cs` — added `StreamingRequestBuffer` struct and `StartAudioStream`, `PushAudioData`, `StopAudioStream` interface methods
- `src/Detail/CoreInterop.cs` — routes audio commands through the existing `execute_command` / `execute_command_with_binary` native entry points (no separate audio exports needed)
- `src/Detail/JsonSerializationContext.cs` — registered `LiveAudioTranscriptionResult` for AOT compatibility
- `test/FoundryLocal.Tests/Utils.cs` — updated to use `CreateLiveTranscriptionSession()`

### Documentation
## API surface
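From the consumer's side, the session API named in this PR might be used roughly as follows. This is a hedged sketch: the `audioClient` and `microphone` objects, the exact method signatures, and the result properties `Text`/`IsFinal` are illustrative assumptions, not taken from the PR.

```csharp
// Hypothetical usage sketch. CreateLiveTranscriptionSession(), StartAsync(),
// AppendAsync(), GetTranscriptionStream(), and StopAsync() are named in the PR;
// their signatures and the result shape (Text, IsFinal) are assumed here.
await using var session = audioClient.CreateLiveTranscriptionSession();
await session.StartAsync(cancellationToken);

// Producer: push PCM chunks as they arrive from the microphone.
var pump = Task.Run(async () =>
{
    await foreach (var pcmChunk in microphone.ReadChunksAsync(cancellationToken))
        await session.AppendAsync(pcmChunk, cancellationToken);
    await session.StopAsync();  // flush remaining audio and end the stream
});

// Consumer: read partial/final transcription results as an async stream.
await foreach (var result in session.GetTranscriptionStream(cancellationToken))
    Console.WriteLine($"{(result.IsFinal ? "final" : "partial")}: {result.Text}");

await pump;
```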
## Design highlights

- `Channel<T>` serializes audio pushes from any thread (safe for mic callbacks) with backpressure
- Session configuration is fixed at `StartAsync()` and immutable during the session
- `StopAsync` always calls native stop even if cancelled, preventing native session leaks
- The session owns an internal `CancellationTokenSource`, decoupled from the caller's token
- `StartAudioStream` and `StopAudioStream` route through `execute_command`; `PushAudioData` routes through `execute_command_with_binary` — no new native entry points required

## Core integration (neutron-server)
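The `Channel<T>` point above might look roughly like this inside the session. A minimal sketch, assuming a bounded channel with `Wait` full mode; the class and member names, and the capacity of 32, are illustrative, not from the PR.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

// Sketch of the channel-based push path (names and capacity hypothetical).
sealed class AudioPushBuffer
{
    // Mic callbacks (any thread) write; a single pump loop reads and forwards
    // to the native PushAudioData call. FullMode.Wait gives backpressure:
    // writers await instead of buffering unboundedly.
    private readonly Channel<ReadOnlyMemory<byte>> _audio =
        Channel.CreateBounded<ReadOnlyMemory<byte>>(new BoundedChannelOptions(32)
        {
            SingleReader = true,   // one drain loop
            SingleWriter = false,  // callbacks may come from any thread
            FullMode = BoundedChannelFullMode.Wait
        });

    public ValueTask AppendAsync(ReadOnlyMemory<byte> pcm, CancellationToken ct = default)
        => _audio.Writer.WriteAsync(pcm, ct);

    public IAsyncEnumerable<ReadOnlyMemory<byte>> DrainAsync(CancellationToken ct = default)
        => _audio.Reader.ReadAllAsync(ct);
}
```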
The Core side (`AudioStreamingSession.cs`) uses `StreamingProcessor` + `Generator` + `Tokenizer` + `TokenizerStream` from onnxruntime-genai to perform real-time RNNT decoding. The native commands (`audio_stream_start`/`push`/`stop`) are handled as cases in `NativeInterop.ExecuteCommandManaged` / `ExecuteCommandWithBinaryManaged`.

## Verified working

- `StreamingProcessor` pipeline verified with a WAV file (correct transcript)
- `TranscribeChunk` byte[] PCM path matches the reference float[] path exactly
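For reference, the byte[]-vs-float[] equivalence checked above corresponds to the standard little-endian 16-bit PCM normalization, sketched here. The helper name is hypothetical; it only illustrates the conversion the byte[] path must match.

```csharp
using System;
using System.Buffers.Binary;

// Hypothetical helper: little-endian 16-bit PCM bytes -> floats in [-1, 1),
// i.e. the conversion the byte[] path is verified against the float[] reference.
static float[] Pcm16ToFloat(ReadOnlySpan<byte> pcm16)
{
    var samples = new float[pcm16.Length / 2];
    for (int i = 0; i < samples.Length; i++)
        samples[i] = BinaryPrimitives.ReadInt16LittleEndian(pcm16.Slice(i * 2)) / 32768f;
    return samples;
}
```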