Both transform hooks (system.transform and messages.transform) lacked try-catch wrapping. Any SQLite error (corruption, busy timeout, schema mismatch) propagated through OpenCode's Plugin.trigger mechanism and surfaced as a 500 "Internal server error", halting the user's session.

Changes:
- system.transform: wrap the knowledge injection block in try-catch. On error, log via log.error(), reset LTM tokens to 0, and push a fixed fallback note directing the LLM to use the recall tool. Track degraded sessions to avoid busting the provider's read-token cache on recovery — if the conversation is longer than the LTM content, keep the fallback note rather than switching mid-session.
- messages.transform: wrap the entire transform path in try-catch. On error, log via log.error() and leave output.messages unmodified (equivalent to layer 0 passthrough).
- gradient.ts: export getLastTransformEstimate() for the cache trade-off calculation.
- Tests: 4 new tests covering DB error survival for both hooks, plus cache-aware LTM recovery (skip on long sessions, proceed on short).
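The degrade-gracefully shape of the system.transform change might look like the sketch below. The helper names (`loadKnowledge`, `logError`, `FALLBACK_NOTE`) and the hook signature are illustrative, not the plugin's actual API; `setLtmTokens(0)` mirrors the token reset described in the commit message.

```typescript
// Illustrative sketch: never let a SQLite error escape the hook.
const FALLBACK_NOTE =
  "[memory] Long-term memory is temporarily unavailable; use the recall tool to fetch past knowledge on demand.";

interface TransformOutput {
  system: string[];
}

function transformSystem(
  output: TransformOutput,
  loadKnowledge: () => string, // may throw (corruption, busy timeout, schema mismatch)
  setLtmTokens: (n: number) => void,
  logError: (err: unknown) => void,
): void {
  try {
    output.system.push(loadKnowledge());
  } catch (err) {
    // Log the failure, zero the LTM token budget, and inject a fixed
    // fallback note instead of the knowledge block, so the session
    // continues instead of surfacing a 500.
    logError(err);
    setLtmTokens(0);
    output.system.push(FALLBACK_NOTE);
  }
}
```

Because the fallback note is a fixed string, a degraded session keeps a stable prompt prefix, which is what makes the cache-aware recovery decision possible.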
…ng BLOBs

After adding the embedding BLOB column (schema v8), all SELECT * queries in ltm.ts were loading 4KB of Float32Array data per knowledge entry that was immediately discarded (the KnowledgeEntry type doesn't include embedding). This wasted ~200KB per forSession() call (2 queries × ~25 entries × 4KB) and affected all other knowledge queries (search, searchLike, all, get, forProject, searchScored).

Define KNOWLEDGE_COLS and KNOWLEDGE_COLS_K constants that list exactly the columns needed for KnowledgeEntry, excluding the embedding BLOB. The embedding column is only needed by vectorSearch() in embedding.ts, which already selects it explicitly.
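A minimal sketch of the explicit-column approach follows. The actual column list lives in ltm.ts; the names below are illustrative placeholders for the 11 KnowledgeEntry columns, except `embedding`, which is the BLOB being excluded.

```typescript
// Illustrative column list — stand-ins for the real KnowledgeEntry columns.
// The one guarantee that matters: "embedding" is NOT in the list.
const KNOWLEDGE_COLS =
  "id, project, session, kind, content, tokens, score, uses, created, updated, expires";

// Prefixed variant for joined queries where the knowledge table is aliased as k.
const KNOWLEDGE_COLS_K = KNOWLEDGE_COLS.split(", ")
  .map((c) => `k.${c}`)
  .join(", ");

// Instead of `SELECT * FROM knowledge ...`, which drags the ~4KB embedding
// BLOB along with every row only to discard it:
const forSessionSql = `SELECT ${KNOWLEDGE_COLS} FROM knowledge WHERE session = ?`;
```

Queries that genuinely need the vector (vectorSearch() in embedding.ts) keep selecting the embedding column explicitly, so the BLOB is only deserialized where it is used.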
Problem
Both transform hooks in `src/index.ts` — `experimental.chat.system.transform` and `experimental.chat.messages.transform` — had no try-catch wrapping. Any SQLite error (corruption, busy timeout, schema mismatch) propagated through OpenCode's Plugin.trigger mechanism and surfaced as a 500 "Internal server error", halting the user's session.

Additionally, after adding the `embedding` BLOB column (schema v8), all `SELECT *` queries in `ltm.ts` were unnecessarily loading 4KB of Float32Array data per knowledge entry (~200KB per `forSession()` call) that was immediately discarded.

Investigation: Embedding/vector search link

The embedding/vector code is not in the transform hook call path — `forSession()` uses only FTS5 BM25, not embeddings. The 500 errors were a latent bug (unprotected hooks) that predated the embedding feature. The temporal correlation with the Voyage AI rollout was coincidental — it coincided with the search overhaul (PRs #46-#50).

Changes
Error handling (`src/index.ts`, `src/gradient.ts`)

- `system.transform`: on error, log via `log.error()`, reset via `setLtmTokens(0)`, and push a fallback note directing the LLM to use the recall tool. Degraded sessions are tracked to avoid busting the provider's read-token cache on recovery — if the conversation is longer than the LTM content, keep the fallback note.
- `messages.transform`: on error, leave `output.messages` unmodified (layer 0 passthrough).
- Export `getLastTransformEstimate()` from `gradient.ts` for the cache trade-off calculation.

Performance (`src/ltm.ts`)

- Define `KNOWLEDGE_COLS`/`KNOWLEDGE_COLS_K` constants listing exactly the 11 columns in `KnowledgeEntry`, excluding `embedding`.
- Replace the `SELECT *`/`SELECT k.*` queries across 8 functions.

Tests (`test/index.test.ts`)

4 new tests: DB error survival for both hooks (the `system.transform` case asserts `getLtmTokens() === 0` after degradation), plus cache-aware LTM recovery (skip on long sessions, proceed on short).
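The cache trade-off behind the skip-on-long-sessions tests can be sketched as a single comparison. The function name and parameters below are illustrative; per this PR, `getLastTransformEstimate()` from `gradient.ts` is assumed to supply the conversation-size side of the comparison.

```typescript
// Illustrative sketch of the cache-aware recovery decision.
// conversationTokens: estimated size of the conversation so far
// ltmTokens: size of the knowledge block we would re-inject on recovery
function shouldRestoreLtm(conversationTokens: number, ltmTokens: number): boolean {
  // Swapping the fixed fallback note back to the full LTM block changes the
  // prompt prefix and busts the provider's read-token cache for everything
  // after it. Only pay that cost while the conversation is still no longer
  // than the LTM content we would regain; on long sessions, keep the note.
  return conversationTokens <= ltmTokens;
}
```

This matches the test pairing in the PR: recovery is skipped on long sessions (cache re-read would cost more than the restored knowledge is worth) and proceeds on short ones.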