From c5eb84b766c93a30d1c97877ec77e9376f04f0e6 Mon Sep 17 00:00:00 2001 From: Roo Code Date: Sun, 5 Apr 2026 00:36:55 +0300 Subject: [PATCH 1/2] fix(embeddings): add timeouts, 5xx retry, and proper error messages for OpenAI Compatible embedder Fix critical issues where the OpenAI Compatible Embedder would hang indefinitely on unresponsive servers and not retry 5xx server errors. Changes: - Add 60s timeout to OpenAI SDK constructor (timeout: 60000, maxRetries: 0) - Add AbortController with 60s timeout to makeDirectEmbeddingRequest() - Convert AbortError to HTTP 504 (Gateway Timeout) - Extend retry logic to handle 5xx errors (500-599) with exponential backoff - Update validation error messages: - 429 -> rateLimitExceeded - 502 -> badGateway (new) - 503 -> serviceUnavailable - 504 -> gatewayTimeout (new) - Other 5xx -> serverError (was configurationError) - Add i18n translations for new error messages in 17 languages - Add unit tests for timeout handling and 5xx retry (5 new tests) - Add unit tests for getErrorMessageForStatus (10 new tests) Impact: - All OpenAI-compatible embedders benefit (Gemini, Mistral, VercelAiGateway, OpenRouter) - No breaking changes - existing functionality preserved - Prevents infinite waits on unresponsive embedding servers - Clear error messages for 502/503/504 errors Files changed: 21 - 2 source files (openai-compatible.ts, validation-helpers.ts) - 17 i18n locale files (en, ru, de, es, fr, hi, id, it, ja, ko, nl, pl, pt-BR, tr, vi, zh-CN, zh-TW) - 3 test files (openai-compatible.spec.ts, openai.spec.ts, validation-helpers.spec.ts) Test results: - code-index tests: 482 passed, 0 failed - All project tests: 8210 total (8154 passed, 57 skipped, 0 failed) - Test files: 582 (569 run, 13 skipped) - Duration: 4m44s --- IMPLEMENTATION-REPORT.md | 111 ++++++++++++ src/i18n/locales/de/embeddings.json | 7 +- src/i18n/locales/en/embeddings.json | 7 +- src/i18n/locales/es/embeddings.json | 7 +- src/i18n/locales/fr/embeddings.json | 7 +- src/i18n/locales/hi/embeddings.json | 7 +- src/i18n/locales/id/embeddings.json | 7 +- src/i18n/locales/it/embeddings.json | 7 +- src/i18n/locales/ja/embeddings.json | 7 +- src/i18n/locales/ko/embeddings.json | 7 +- src/i18n/locales/nl/embeddings.json | 7 +- src/i18n/locales/pl/embeddings.json | 7 +- src/i18n/locales/pt-BR/embeddings.json | 7 +- src/i18n/locales/ru/embeddings.json | 7 +- src/i18n/locales/tr/embeddings.json | 7 +- src/i18n/locales/vi/embeddings.json | 7 +- src/i18n/locales/zh-CN/embeddings.json | 7 +- src/i18n/locales/zh-TW/embeddings.json | 7 +- .../__tests__/openai-compatible.spec.ts | 169 +++++++++++++++++- .../embedders/__tests__/openai.spec.ts | 4 +- .../code-index/embedders/openai-compatible.ts | 139 ++++++++------ .../__tests__/validation-helpers.spec.ts | 49 ++++- .../code-index/shared/validation-helpers.ts | 8 +- 23 files changed, 517 insertions(+), 82 deletions(-) create mode 100644 IMPLEMENTATION-REPORT.md diff --git a/IMPLEMENTATION-REPORT.md b/IMPLEMENTATION-REPORT.md new file mode 100644 index 00000000000..6453ab461e6 --- /dev/null +++ b/IMPLEMENTATION-REPORT.md @@ -0,0 +1,111 @@ +# Implementation Report: Embedding Indexing Fix + +## Summary + +Fixed critical issues in the OpenAI Compatible Embedder that caused HTTP 503 errors and infinite waits during codebase indexing. The fix adds timeouts, retry logic for 5xx server errors, and proper error messages. + +## Problem + +When indexing codebase through OpenAI-compatible API (`http://0.0.0.0:11434/v1`), the following error occurred: + +``` +Indexing partially failed: Only 780 of 2834 blocks were indexed. +Failed to process batch after 3 attempts: +Failed to create embeddings after 3 attempts: HTTP 503 - 503 status code (no body) +``` + +**Root Cause:** The OpenAI Compatible Embedder lacked timeouts and did not retry 5xx errors. The server could hang indefinitely — the program needed to handle such situations correctly. + +## Changes Made + +### 1. Core Code Changes + +#### `src/services/code-index/embedders/openai-compatible.ts` + +- **Added timeout constants:** `OPENAI_COMPATIBLE_EMBEDDING_TIMEOUT_MS = 60000` (60s), `OPENAI_COMPATIBLE_VALIDATION_TIMEOUT_MS = 30000` (30s) +- **OpenAI SDK constructor:** Added `timeout: 60000` and `maxRetries: 0` (disabled built-in retry to use our own logic) +- **AbortController in fetch:** Added `AbortController` with 60s timeout to `makeDirectEmbeddingRequest()`, converts `AbortError` to HTTP 504 (Gateway Timeout) +- **Retry for 5xx errors:** Extended retry logic in `_embedBatchWithRetries()` to handle both 429 (rate limit) and 500-599 (server errors) with exponential backoff + +#### `src/services/code-index/shared/validation-helpers.ts` + +- **Updated `getErrorMessageForStatus()`:** + - 429 → `rateLimitExceeded` (was `serviceUnavailable`) + - 502 → `badGateway` (new) + - 503 → `serviceUnavailable` (reused) + - 504 → `gatewayTimeout` (new) + - Other 5xx → `serverError` (was `configurationError`) + +### 2. Localization (17 files) + +Added 5 new i18n keys to `validation` section and 1 new key `serverErrorRetry` to root in all 17 locale files: + +| Language | File | +| --------------------- | ---------------------------------------- | +| English | `src/i18n/locales/en/embeddings.json` | +| Russian | `src/i18n/locales/ru/embeddings.json` | +| German | `src/i18n/locales/de/embeddings.json` | +| Spanish | `src/i18n/locales/es/embeddings.json` | +| French | `src/i18n/locales/fr/embeddings.json` | +| Hindi | `src/i18n/locales/hi/embeddings.json` | +| Indonesian | `src/i18n/locales/id/embeddings.json` | +| Italian | `src/i18n/locales/it/embeddings.json` | +| Japanese | `src/i18n/locales/ja/embeddings.json` | +| Korean | `src/i18n/locales/ko/embeddings.json` | +| Dutch | `src/i18n/locales/nl/embeddings.json` | +| Polish | `src/i18n/locales/pl/embeddings.json` | +| Portuguese (BR) | `src/i18n/locales/pt-BR/embeddings.json` | +| Turkish | `src/i18n/locales/tr/embeddings.json` | +| Vietnamese | `src/i18n/locales/vi/embeddings.json` | +| Chinese (Simplified) | `src/i18n/locales/zh-CN/embeddings.json` | +| Chinese (Traditional) | `src/i18n/locales/zh-TW/embeddings.json` | + +### 3. Tests + +#### `src/services/code-index/embedders/__tests__/openai-compatible.spec.ts` + +- Updated existing test: 500 error now retries 3 times (was 1) +- Added `timeout handling` describe block with 2 tests +- Added `5xx retry handling` describe block with 3 tests (502, 503, 504) + +#### `src/services/code-index/embedders/__tests__/openai.spec.ts` + +- Fixed regression: Updated test expectations for new timeout/maxRetries parameters + +#### `src/services/code-index/shared/__tests__/validation-helpers.spec.ts` + +- Added `getErrorMessageForStatus` describe block with 10 tests covering all HTTP status codes + +## Test Results + +- **21 test files** — all passed +- **482 tests** — 0 failed, 0 errors, 0 warnings +- **Duration:** ~9-11s + +## Files Changed (21 total) + +| File | Changes | +| ----------------------------------------------------------------------- | ----------------------------------------------- | +| `src/services/code-index/embedders/openai-compatible.ts` | Steps 1-4: Timeouts, AbortController, 5xx retry | +| `src/services/code-index/shared/validation-helpers.ts` | Step 5: 5xx error messages | +| `src/i18n/locales/*/embeddings.json` (17 files) | Step 6: i18n keys | +| `src/services/code-index/embedders/__tests__/openai-compatible.spec.ts` | Steps 7-8: New tests | +| `src/services/code-index/embedders/__tests__/openai.spec.ts` | Regression fix | +| `src/services/code-index/shared/__tests__/validation-helpers.spec.ts` | Step 9: New tests | + +## Architecture + +``` +Request → {Full URL?} → Yes → makeDirectEmbeddingRequest (AbortController 60s) + → No → OpenAI SDK (timeout 60s, maxRetries 0) + ↓ + Error? → {429 or 5xx?} → Yes → Retry with exponential backoff + → No → Throw immediately +``` + +## Impact + +- **All OpenAI-compatible embedders benefit:** Gemini, Mistral, VercelAiGateway, OpenRouter +- **No breaking changes:** Existing functionality preserved +- **Better user experience:** Clear error messages for 502/503/504 errors +- **Prevents infinite waits:** 60s timeout on all embedding requests diff --git a/src/i18n/locales/de/embeddings.json b/src/i18n/locales/de/embeddings.json index 766d31d5ba0..1c8cc994e6c 100644 --- a/src/i18n/locales/de/embeddings.json +++ b/src/i18n/locales/de/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Erstellung von Einbettungen nach {{attempts}} Versuchen fehlgeschlagen", "textExceedsTokenLimit": "Text bei Index {{index}} überschreitet das maximale Token-Limit ({{itemTokens}} > {{maxTokens}}). Wird übersprungen.", "rateLimitRetry": "Ratenlimit erreicht, Wiederholung in {{delayMs}}ms (Versuch {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Serverfehler ({{status}}), Wiederholung in {{delayMs}}ms (Versuch {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Ungültiges Antwortformat von Amazon Bedrock", "invalidCredentials": "Ungültige AWS-Anmeldedaten. Bitte überprüfe deine AWS-Konfiguration.", @@ -37,7 +38,11 @@ "connectionFailed": "Verbindung zum Embedder-Dienst fehlgeschlagen. Bitte überprüfe deine Verbindungseinstellungen und stelle sicher, dass der Dienst läuft.", "modelNotAvailable": "Das angegebene Modell ist nicht verfügbar. Bitte überprüfe deine Modellkonfiguration.", "configurationError": "Ungültige Embedder-Konfiguration. Bitte überprüfe deine Einstellungen.", - "serviceUnavailable": "Der Embedder-Dienst ist nicht verfügbar. Bitte stelle sicher, dass er läuft und erreichbar ist.", + "serviceUnavailable": "Embedder-Dienst vorübergehend nicht verfügbar. Bitte versuchen Sie es später erneut.", + "rateLimitExceeded": "Ratenlimit überschritten. Bitte versuchen Sie es später erneut.", + "badGateway": "Bad Gateway-Fehler vom Embedder-Dienst. Der Server hat eine ungültige Antwort erhalten.", + "gatewayTimeout": "Gateway-Timeout-Fehler. Der Embedder-Dienst hat nicht rechtzeitig geantwortet.", + "serverError": "Serverfehler vom Embedder-Dienst. Bitte versuchen Sie es später erneut.", "invalidEndpoint": "Ungültiger API-Endpunkt. Bitte überprüfe deine URL-Konfiguration.", "invalidEmbedderConfig": "Ungültige Embedder-Konfiguration. Bitte überprüfe deine Einstellungen.", "invalidApiKey": "Ungültiger API-Schlüssel. Bitte überprüfe deine API-Schlüssel-Konfiguration.", diff --git a/src/i18n/locales/en/embeddings.json b/src/i18n/locales/en/embeddings.json index 7777af9027e..6e724ba8469 100644 --- a/src/i18n/locales/en/embeddings.json +++ b/src/i18n/locales/en/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Failed to create embeddings after {{attempts}} attempts", "textExceedsTokenLimit": "Text at index {{index}} exceeds maximum token limit ({{itemTokens}} > {{maxTokens}}). Skipping.", "rateLimitRetry": "Rate limit hit, retrying in {{delayMs}}ms (attempt {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Server error ({{status}}), retrying in {{delayMs}}ms (attempt {{attempt}}/{{maxRetries}})", "ollama": { "couldNotReadErrorBody": "Could not read error body", "requestFailed": "Ollama API request failed with status {{status}} {{statusText}}: {{errorBody}}", @@ -37,7 +38,11 @@ "connectionFailed": "Failed to connect to the embedder service. Please check your connection settings and ensure the service is running.", "modelNotAvailable": "The specified model is not available. Please check your model configuration.", "configurationError": "Invalid embedder configuration. Please review your settings.", - "serviceUnavailable": "The embedder service is not available. Please ensure it is running and accessible.", + "serviceUnavailable": "Embedding service temporarily unavailable. Please try again later.", + "rateLimitExceeded": "Rate limit exceeded. Please try again later.", + "badGateway": "Bad gateway error from embedder service. The server received an invalid response.", + "gatewayTimeout": "Gateway timeout error. The embedder service did not respond in time.", + "serverError": "Server error from embedder service. Please try again later.", "invalidEndpoint": "Invalid API endpoint. Please check your URL configuration.", "invalidEmbedderConfig": "Invalid embedder configuration. Please check your settings.", "invalidApiKey": "Invalid API key. Please check your API key configuration.", diff --git a/src/i18n/locales/es/embeddings.json b/src/i18n/locales/es/embeddings.json index 930404de1f5..d7ac025fa01 100644 --- a/src/i18n/locales/es/embeddings.json +++ b/src/i18n/locales/es/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "No se pudieron crear las incrustaciones después de {{attempts}} intentos", "textExceedsTokenLimit": "El texto en el índice {{index}} supera el límite máximo de tokens ({{itemTokens}} > {{maxTokens}}). Omitiendo.", "rateLimitRetry": "Límite de velocidad alcanzado, reintentando en {{delayMs}}ms (intento {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Error del servidor ({{status}}), reintentando en {{delayMs}}ms (intento {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Formato de respuesta no válido de Amazon Bedrock", "invalidCredentials": "Credenciales de AWS no válidas. Por favor, verifica tu configuración de AWS.", @@ -37,7 +38,11 @@ "connectionFailed": "Error al conectar con el servicio de embedder. Comprueba los ajustes de conexión y asegúrate de que el servicio esté funcionando.", "modelNotAvailable": "El modelo especificado no está disponible. Comprueba la configuración de tu modelo.", "configurationError": "Configuración de embedder no válida. Revisa tus ajustes.", - "serviceUnavailable": "El servicio de embedder no está disponible. Asegúrate de que esté funcionando y sea accesible.", + "serviceUnavailable": "El servicio de embedder no está disponible temporalmente. Por favor, inténtelo de nuevo más tarde.", + "rateLimitExceeded": "Límite de tasa excedido. Por favor, inténtelo de nuevo más tarde.", + "badGateway": "Error de bad gateway del servicio de embedder. El servidor recibió una respuesta no válida.", + "gatewayTimeout": "Error de gateway timeout. El servicio de embedder no respondió a tiempo.", + "serverError": "Error del servidor del servicio de embedder. Por favor, inténtelo de nuevo más tarde.", "invalidEndpoint": "Punto de conexión de API no válido. Comprueba la configuración de tu URL.", "invalidEmbedderConfig": "Configuración de embedder no válida. Comprueba tus ajustes.", "invalidApiKey": "Clave de API no válida. Comprueba la configuración de tu clave de API.", diff --git a/src/i18n/locales/fr/embeddings.json b/src/i18n/locales/fr/embeddings.json index 7de086307ea..d1ebdad0e7c 100644 --- a/src/i18n/locales/fr/embeddings.json +++ b/src/i18n/locales/fr/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Échec de la création des embeddings après {{attempts}} tentatives", "textExceedsTokenLimit": "Le texte à l'index {{index}} dépasse la limite maximale de tokens ({{itemTokens}} > {{maxTokens}}). Ignoré.", "rateLimitRetry": "Limite de débit atteinte, nouvelle tentative dans {{delayMs}}ms (tentative {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Erreur serveur ({{status}}), nouvelle tentative dans {{delayMs}}ms (tentative {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Format de réponse invalide d'Amazon Bedrock", "invalidCredentials": "Identifiants AWS invalides. Veuillez vérifier votre configuration AWS.", @@ -37,7 +38,11 @@ "connectionFailed": "Échec de la connexion au service d'embedding. Veuillez vérifier vos paramètres de connexion et vous assurer que le service est en cours d'exécution.", "modelNotAvailable": "Le modèle spécifié n'est pas disponible. Veuillez vérifier la configuration de votre modèle.", "configurationError": "Configuration de l'embedder invalide. Veuillez vérifier vos paramètres.", - "serviceUnavailable": "Le service d'embedding n'est pas disponible. Veuillez vous assurer qu'il est en cours d'exécution et accessible.", + "serviceUnavailable": "Service d'embedding temporairement indisponible. Veuillez réessayer plus tard.", + "rateLimitExceeded": "Limite de débit dépassée. Veuillez réessayer plus tard.", + "badGateway": "Erreur bad gateway du service d'embedding. Le serveur a reçu une réponse invalide.", + "gatewayTimeout": "Erreur gateway timeout. Le service d'embedding n'a pas répondu à temps.", + "serverError": "Erreur serveur du service d'embedding. Veuillez réessayer plus tard.", "invalidEndpoint": "Point de terminaison d'API invalide. Veuillez vérifier votre configuration d'URL.", "invalidEmbedderConfig": "Configuration de l'embedder invalide. Veuillez vérifier vos paramètres.", "invalidApiKey": "Clé API invalide. Veuillez vérifier votre configuration de clé API.", diff --git a/src/i18n/locales/hi/embeddings.json b/src/i18n/locales/hi/embeddings.json index 9c7f9ca50ae..c678b7d676d 100644 --- a/src/i18n/locales/hi/embeddings.json +++ b/src/i18n/locales/hi/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "{{attempts}} प्रयासों के बाद एम्बेडिंग बनाने में विफल", "textExceedsTokenLimit": "अनुक्रमणिका {{index}} पर पाठ अधिकतम टोकन सीमा ({{itemTokens}} > {{maxTokens}}) से अधिक है। छोड़ा जा रहा है।", "rateLimitRetry": "दर सीमा समाप्त, {{delayMs}}ms में पुन: प्रयास किया जा रहा है (प्रयास {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "सर्वर त्रुटि ({{status}}), {{delayMs}}ms में पुनः प्रयास (प्रयास {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Amazon Bedrock से अमान्य प्रतिसाद प्रारूप", "invalidCredentials": "अमान्य AWS क्रेडेंशियल्स। कृपया अपनी AWS कॉन्फ़िगरेशन जांचें।", @@ -37,7 +38,11 @@ "connectionFailed": "एम्बेडर सेवा से कनेक्ट करने में विफल। कृपया अपनी कनेक्शन सेटिंग्स जांचें और सुनिश्चित करें कि सेवा चल रही है।", "modelNotAvailable": "निर्दिष्ट मॉडल उपलब्ध नहीं है। कृपया अपनी मॉडल कॉन्फ़िगरेशन जांचें।", "configurationError": "अमान्य एम्बेडर कॉन्फ़िगरेशन। कृपया अपनी सेटिंग्स की समीक्षा करें।", - "serviceUnavailable": "एम्बेडर सेवा उपलब्ध नहीं है। कृपया सुनिश्चित करें कि यह चल रहा है और पहुंच योग्य है।", + "serviceUnavailable": "Embedder सेवा अस्थायी रूप से अनुपलब्ध है। कृपया बाद में पुनः प्रयास करें।", + "rateLimitExceeded": "रेट सीमा पार हो गई। कृपया बाद में पुनः प्रयास करें।", + "badGateway": "Embedder सेवा से bad gateway त्रुटि। सर्वर को अमान्य प्रतिक्रिया मिली।", + "gatewayTimeout": "Gateway timeout त्रुटि। Embedder सेवा ने समय पर प्रतिक्रिया नहीं दी।", + "serverError": "Embedder सेवा से सर्वर त्रुटि। कृपया बाद में पुनः प्रयास करें।", "invalidEndpoint": "अमान्य एपीआई एंडपॉइंट। कृपया अपनी यूआरएल कॉन्फ़िगरेशन जांचें।", "invalidEmbedderConfig": "अमान्य एम्बेडर कॉन्फ़िगरेशन। कृपया अपनी सेटिंग्स जांचें।", "invalidApiKey": "अमान्य एपीआई कुंजी। कृपया अपनी एपीआई कुंजी कॉन्फ़िगरेशन जांचें।", diff --git a/src/i18n/locales/id/embeddings.json b/src/i18n/locales/id/embeddings.json index 955a039effe..fcf119d6d12 100644 --- a/src/i18n/locales/id/embeddings.json +++ b/src/i18n/locales/id/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Gagal membuat embeddings setelah {{attempts}} percobaan", "textExceedsTokenLimit": "Teks pada indeks {{index}} melebihi batas maksimum token ({{itemTokens}} > {{maxTokens}}). Dilewati.", "rateLimitRetry": "Batas rate tercapai, mencoba lagi dalam {{delayMs}}ms (percobaan {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Kesalahan server ({{status}}), mencoba lagi dalam {{delayMs}}ms (percobaan {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Format respons tidak valid dari Amazon Bedrock", "invalidCredentials": "Kredensial AWS tidak valid. Harap periksa konfigurasi AWS Anda.", @@ -37,7 +38,11 @@ "connectionFailed": "Gagal terhubung ke layanan embedder. Silakan periksa pengaturan koneksi Anda dan pastikan layanan berjalan.", "modelNotAvailable": "Model yang ditentukan tidak tersedia. Silakan periksa konfigurasi model Anda.", "configurationError": "Konfigurasi embedder tidak valid. Harap tinjau pengaturan Anda.", - "serviceUnavailable": "Layanan embedder tidak tersedia. Pastikan layanan tersebut berjalan dan dapat diakses.", + "serviceUnavailable": "Layanan embedder sementara tidak tersedia. Silakan coba lagi nanti.", + "rateLimitExceeded": "Batas terlampaui. Silakan coba lagi nanti.", + "badGateway": "Kesalahan bad gateway dari layanan embedder. Server menerima respons yang tidak valid.", + "gatewayTimeout": "Kesalahan gateway timeout. Layanan embedder tidak merespons tepat waktu.", + "serverError": "Kesalahan server dari layanan embedder. Silakan coba lagi nanti.", "invalidEndpoint": "Endpoint API tidak valid. Silakan periksa konfigurasi URL Anda.", "invalidEmbedderConfig": "Konfigurasi embedder tidak valid. Silakan periksa pengaturan Anda.", "invalidApiKey": "Kunci API tidak valid. Silakan periksa konfigurasi kunci API Anda.", diff --git a/src/i18n/locales/it/embeddings.json b/src/i18n/locales/it/embeddings.json index b7314c244dc..8787cf54725 100644 --- a/src/i18n/locales/it/embeddings.json +++ b/src/i18n/locales/it/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Creazione degli embedding non riuscita dopo {{attempts}} tentativi", "textExceedsTokenLimit": "Il testo all'indice {{index}} supera il limite massimo di token ({{itemTokens}} > {{maxTokens}}). Saltato.", "rateLimitRetry": "Limite di velocità raggiunto, nuovo tentativo tra {{delayMs}}ms (tentativo {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Errore del server ({{status}}), riprovo tra {{delayMs}}ms (tentativo {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Formato di risposta non valido da Amazon Bedrock", "invalidCredentials": "Credenziali AWS non valide. Si prega di verificare la configurazione AWS.", @@ -37,7 +38,11 @@ "connectionFailed": "Connessione al servizio di embedder fallita. Controlla le impostazioni di connessione e assicurati che il servizio sia in esecuzione.", "modelNotAvailable": "Il modello specificato non è disponibile. Controlla la configurazione del tuo modello.", "configurationError": "Configurazione dell'embedder non valida. Rivedi le tue impostazioni.", - "serviceUnavailable": "Il servizio di embedder non è disponibile. Assicurati che sia in esecuzione e accessibile.", + "serviceUnavailable": "Servizio embedder temporaneamente non disponibile. Riprova più tardi.", + "rateLimitExceeded": "Limite di velocità superato. Riprova più tardi.", + "badGateway": "Errore bad gateway dal servizio embedder. Il server ha ricevuto una risposta non valida.", + "gatewayTimeout": "Errore gateway timeout. Il servizio embedder non ha risposto in tempo.", + "serverError": "Errore del server dal servizio embedder. Riprova più tardi.", "invalidEndpoint": "Endpoint API non valido. Controlla la configurazione del tuo URL.", "invalidEmbedderConfig": "Configurazione dell'embedder non valida. Controlla le tue impostazioni.", "invalidApiKey": "Chiave API non valida. Controlla la configurazione della tua chiave API.", diff --git a/src/i18n/locales/ja/embeddings.json b/src/i18n/locales/ja/embeddings.json index ce7150cf1ca..e143ac1d934 100644 --- a/src/i18n/locales/ja/embeddings.json +++ b/src/i18n/locales/ja/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "{{attempts}}回試行しましたが、埋め込みの作成に失敗しました", "textExceedsTokenLimit": "インデックス{{index}}のテキストが最大トークン制限を超えています({{itemTokens}}> {{maxTokens}})。スキップします。", "rateLimitRetry": "レート制限に達しました。{{delayMs}}ミリ秒後に再試行します(試行{{attempt}}/{{maxRetries}})", + "serverErrorRetry": "サーバーエラー ({{status}})、{{delayMs}}ms後に再試行 (試行 {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Amazon Bedrockからの無効な応答形式", "invalidCredentials": "無効なAWS認証情報です。AWSの設定を確認してください。", @@ -37,7 +38,11 @@ "connectionFailed": "エンベッダーサービスへの接続に失敗しました。接続設定を確認し、サービスが実行されていることを確認してください。", "modelNotAvailable": "指定されたモデルは利用できません。モデル構成を確認してください。", "configurationError": "無効なエンベッダー構成です。設定を確認してください。", - "serviceUnavailable": "エンベッダーサービスは利用できません。実行中でアクセス可能であることを確認してください。", + "serviceUnavailable": "Embedderサービスは一時的に利用できません。後でもう一度お試しください。", + "rateLimitExceeded": "レート制限を超えました。後でもう一度お試しください。", + "badGateway": "Embedderサービスからのbad gatewayエラー。サーバーが無効な応答を受信しました。", + "gatewayTimeout": "Gateway timeoutエラー。Embedderサービスが時間内に応答しませんでした。", + "serverError": "Embedderサービスからのサーバーエラー。後でもう一度お試しください。", "invalidEndpoint": "無効なAPIエンドポイントです。URL構成を確認してください。", "invalidEmbedderConfig": "無効なエンベッダー構成です。設定を確認してください。", "invalidApiKey": "無効なAPIキーです。APIキー構成を確認してください。", diff --git a/src/i18n/locales/ko/embeddings.json b/src/i18n/locales/ko/embeddings.json index 436fa985c02..59fb96b8efe 100644 --- a/src/i18n/locales/ko/embeddings.json +++ b/src/i18n/locales/ko/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "{{attempts}}번 시도 후 임베딩 생성 실패", "textExceedsTokenLimit": "인덱스 {{index}}의 텍스트가 최대 토큰 제한({{itemTokens}} > {{maxTokens}})을 초과했습니다. 건너뜁니다.", "rateLimitRetry": "속도 제한에 도달했습니다. {{delayMs}}ms 후에 다시 시도합니다(시도 {{attempt}}/{{maxRetries}}).", + "serverErrorRetry": "서버 오류 ({{status}}), {{delayMs}}ms 후 재시도 (시도 {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Amazon Bedrock에서 잘못된 응답 형식", "invalidCredentials": "잘못된 AWS 자격증명입니다. AWS 구성을 확인하세요.", @@ -37,7 +38,11 @@ "connectionFailed": "임베더 서비스에 연결하지 못했습니다. 연결 설정을 확인하고 서비스가 실행 중인지 확인하세요.", "modelNotAvailable": "지정된 모델을 사용할 수 없습니다. 모델 구성을 확인하세요.", "configurationError": "잘못된 임베더 구성입니다. 설정을 검토하세요.", - "serviceUnavailable": "임베더 서비스를 사용할 수 없습니다. 실행 중이고 액세스 가능한지 확인하세요.", + "serviceUnavailable": "Embedder 서비스가 일시적으로 사용할 수 없습니다. 나중에 다시 시도하십시오.", + "rateLimitExceeded": "속도 제한을 초과했습니다. 나중에 다시 시도하십시오.", + "badGateway": "Embedder 서비스의 bad gateway 오류. 서버가 잘못된 응답을 받았습니다.", + "gatewayTimeout": "Gateway timeout 오류. Embedder 서비스가 제시간에 응답하지 않았습니다.", + "serverError": "Embedder 서비스의 서버 오류. 나중에 다시 시도하십시오.", "invalidEndpoint": "잘못된 API 엔드포인트입니다. URL 구성을 확인하세요.", "invalidEmbedderConfig": "잘못된 임베더 구성입니다. 설정을 확인하세요.", "invalidApiKey": "잘못된 API 키입니다. API 키 구성을 확인하세요.", diff --git a/src/i18n/locales/nl/embeddings.json b/src/i18n/locales/nl/embeddings.json index 01e68683d3a..46b45194576 100644 --- a/src/i18n/locales/nl/embeddings.json +++ b/src/i18n/locales/nl/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Insluitingen maken mislukt na {{attempts}} pogingen", "textExceedsTokenLimit": "Tekst op index {{index}} overschrijdt de maximale tokenlimiet ({{itemTokens}} > {{maxTokens}}). Wordt overgeslagen.", "rateLimitRetry": "Snelheidslimiet bereikt, opnieuw proberen over {{delayMs}}ms (poging {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Serverfout ({{status}}), opnieuw proberen in {{delayMs}}ms (poging {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Ongeldig antwoordformaat van Amazon Bedrock", "invalidCredentials": "Ongeldige AWS-referenties. Controleer uw AWS-configuratie.", @@ -37,7 +38,11 @@ "connectionFailed": "Verbinding met de embedder-service mislukt. Controleer je verbindingsinstellingen en zorg ervoor dat de service draait.", "modelNotAvailable": "Het opgegeven model is niet beschikbaar. Controleer je modelconfiguratie.", "configurationError": "Ongeldige embedder-configuratie. Controleer je instellingen.", - "serviceUnavailable": "De embedder-service is niet beschikbaar. Zorg ervoor dat deze draait en toegankelijk is.", + "serviceUnavailable": "Embedder-service tijdelijk niet beschikbaar. Probeer het later opnieuw.", + "rateLimitExceeded": "Snelheidslimiet overschreden. Probeer het later opnieuw.", + "badGateway": "Bad gateway-fout van embedder-service. De server ontving een ongeldig antwoord.", + "gatewayTimeout": "Gateway timeout-fout. De embedder-service reageerde niet op tijd.", + "serverError": "Serverfout van embedder-service. Probeer het later opnieuw.", "invalidEndpoint": "Ongeldig API-eindpunt. Controleer je URL-configuratie.", "invalidEmbedderConfig": "Ongeldige embedder-configuratie. Controleer je instellingen.", "invalidApiKey": "Ongeldige API-sleutel. Controleer je API-sleutelconfiguratie.", diff --git a/src/i18n/locales/pl/embeddings.json b/src/i18n/locales/pl/embeddings.json index 0ef846b2cc9..2180ae79df0 100644 --- a/src/i18n/locales/pl/embeddings.json +++ b/src/i18n/locales/pl/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Nie udało się utworzyć osadzeń po {{attempts}} próbach", "textExceedsTokenLimit": "Tekst w indeksie {{index}} przekracza maksymalny limit tokenów ({{itemTokens}} > {{maxTokens}}). Pomijanie.", "rateLimitRetry": "Osiągnięto limit szybkości, ponawianie za {{delayMs}}ms (próba {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Błąd serwera ({{status}}), ponowna próba za {{delayMs}}ms (próba {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Nieprawidłowy format odpowiedzi z Amazon Bedrock", "invalidCredentials": "Nieprawidłowe poświadczenia AWS. Sprawdź konfigurację AWS.", @@ -37,7 +38,11 @@ "connectionFailed": "Nie udało się połączyć z usługą embeddera. Sprawdź ustawienia połączenia i upewnij się, że usługa jest uruchomiona.", "modelNotAvailable": "Określony model jest niedostępny. Sprawdź konfigurację modelu.", "configurationError": "Nieprawidłowa konfiguracja embeddera. Sprawdź swoje ustawienia.", - "serviceUnavailable": "Usługa embeddera jest niedostępna. Upewnij się, że jest uruchomiona i dostępna.", + "serviceUnavailable": "Usługa embedder jest tymczasowo niedostępna. Spróbuj ponownie później.", + "rateLimitExceeded": "Przekroczono limit szybkości. Spróbuj ponownie później.", + "badGateway": "Błąd bad gateway usługi embedder. Serwer otrzymał nieprawidłową odpowiedź.", + "gatewayTimeout": "Błąd gateway timeout. Usługa embedder nie odpowiedziała na czas.", + "serverError": "Błąd serwera usługi embedder. Spróbuj ponownie później.", "invalidEndpoint": "Nieprawidłowy punkt końcowy API. Sprawdź konfigurację adresu URL.", "invalidEmbedderConfig": "Nieprawidłowa konfiguracja embeddera. Sprawdź swoje ustawienia.", "invalidApiKey": "Nieprawidłowy klucz API. Sprawdź konfigurację klucza API.", diff --git a/src/i18n/locales/pt-BR/embeddings.json b/src/i18n/locales/pt-BR/embeddings.json index 9cdf775e76e..2a31e719117 100644 --- a/src/i18n/locales/pt-BR/embeddings.json +++ b/src/i18n/locales/pt-BR/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Falha ao criar embeddings após {{attempts}} tentativas", "textExceedsTokenLimit": "O texto no índice {{index}} excede o limite máximo de tokens ({{itemTokens}} > {{maxTokens}}). Ignorando.", "rateLimitRetry": "Limite de taxa atingido, tentando novamente em {{delayMs}}ms (tentativa {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Erro do servidor ({{status}}), tentando novamente em {{delayMs}}ms (tentativa {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Formato de resposta inválido do Amazon Bedrock", "invalidCredentials": "Credenciais AWS inválidas. Verifique sua configuração AWS.", @@ -37,7 +38,11 @@ "connectionFailed": "Falha ao conectar ao serviço do embedder. Verifique suas configurações de conexão e garanta que o serviço esteja em execução.", "modelNotAvailable": "O modelo especificado não está disponível. Verifique a configuração do seu modelo.", "configurationError": "Configuração do embedder inválida. Revise suas configurações.", - "serviceUnavailable": "O serviço do embedder não está disponível. Garanta que ele esteja em execução e acessível.", + "serviceUnavailable": "Serviço de embedder temporariamente indisponível. Por favor, tente novamente mais tarde.", + "rateLimitExceeded": "Limite de taxa excedido. Por favor, tente novamente mais tarde.", + "badGateway": "Erro de bad gateway do serviço de embedder. O servidor recebeu uma resposta inválida.", + "gatewayTimeout": "Erro de gateway timeout. O serviço de embedder não respondeu a tempo.", + "serverError": "Erro do servidor do serviço de embedder. Por favor, tente novamente mais tarde.", "invalidEndpoint": "Endpoint de API inválido. Verifique sua configuração de URL.", "invalidEmbedderConfig": "Configuração do embedder inválida. Verifique suas configurações.", "invalidApiKey": "Chave de API inválida. Verifique sua configuração de chave de API.", diff --git a/src/i18n/locales/ru/embeddings.json b/src/i18n/locales/ru/embeddings.json index 873b1c06308..74dceb5dd64 100644 --- a/src/i18n/locales/ru/embeddings.json +++ b/src/i18n/locales/ru/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Не удалось создать вложения после {{attempts}} попыток", "textExceedsTokenLimit": "Текст в индексе {{index}} превышает максимальный лимит токенов ({{itemTokens}} > {{maxTokens}}). Пропускается.", "rateLimitRetry": "Достигнут лимит скорости, повторная попытка через {{delayMs}} мс (попытка {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Ошибка сервера ({{status}}), повторная попытка через {{delayMs}} мс (попытка {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Неверный формат ответа от Amazon Bedrock", "invalidCredentials": "Неверные учетные данные AWS. Проверьте конфигурацию AWS.", @@ -37,7 +38,11 @@ "connectionFailed": "Не удалось подключиться к службе эмбеддера. Проверьте настройки подключения и убедитесь, что служба запущена.", "modelNotAvailable": "Указанная модель недоступна. Проверьте конфигурацию модели.", "configurationError": "Неверная конфигурация эмбеддера. Проверьте свои настройки.", - "serviceUnavailable": "Служба эмбеддера недоступна. Убедитесь, что она запущена и доступна.", + "serviceUnavailable": "Служба вложений временно недоступна. Повторите попытку позже.", + "rateLimitExceeded": "Превышен лимит запросов. Повторите попытку позже.", + "badGateway": "Ошибка bad gateway от службы эмбеддера. Сервер получил неверный ответ.", + "gatewayTimeout": "Ошибка gateway timeout. Служба эмбеддера не ответила вовремя.", + "serverError": "Ошибка сервера от службы эмбеддера. Повторите попытку позже.", "invalidEndpoint": "Неверная конечная точка API. Проверьте конфигурацию URL.", "invalidEmbedderConfig": "Неверная конфигурация эмбеддера. Проверьте свои настройки.", "invalidApiKey": "Неверный ключ API. Проверьте конфигурацию ключа API.", diff --git a/src/i18n/locales/tr/embeddings.json b/src/i18n/locales/tr/embeddings.json index 30b703a93f1..d50a98fcdc8 100644 --- a/src/i18n/locales/tr/embeddings.json +++ b/src/i18n/locales/tr/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "{{attempts}} denemeden sonra gömülmeler oluşturulamadı", "textExceedsTokenLimit": "{{index}} dizinindeki metin maksimum jeton sınırını aşıyor ({{itemTokens}} > {{maxTokens}}). Atlanıyor.", "rateLimitRetry": "Hız sınırına ulaşıldı, {{delayMs}}ms içinde yeniden deneniyor (deneme {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Sunucu hatası ({{status}}), {{delayMs}}ms içinde tekrar deneniyor (deneme {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Amazon Bedrock'tan geçersiz yanıt formatı", "invalidCredentials": "Geçersiz AWS kimlik bilgileri. Lütfen AWS yapılandırmanızı kontrol edin.", @@ -37,7 +38,11 @@ "connectionFailed": "Gömücü hizmetine bağlanılamadı. Lütfen bağlantı ayarlarınızı kontrol edin ve hizmetin çalıştığından emin olun.", "modelNotAvailable": "Belirtilen model mevcut değil. Lütfen model yapılandırmanızı kontrol edin.", "configurationError": "Geçersiz gömücü yapılandırması. Lütfen ayarlarınızı gözden geçirin.", - "serviceUnavailable": "Gömücü hizmeti mevcut değil. Lütfen çalıştığından ve erişilebilir olduğundan emin olun.", + "serviceUnavailable": "Embedder hizmeti geçici olarak kullanılamıyor. Lütfen daha sonra tekrar deneyin.", + "rateLimitExceeded": "Hız sınırı aşıldı. Lütfen daha sonra tekrar deneyin.", + "badGateway": "Embedder hizmetinden bad gateway hatası. Sunucu geçersiz bir yanıt aldı.", + "gatewayTimeout": "Gateway timeout hatası. Embedder hizmeti zamanında yanıt vermedi.", + "serverError": "Embedder hizmetinden sunucu hatası. Lütfen daha sonra tekrar deneyin.", "invalidEndpoint": "Geçersiz API uç noktası. Lütfen URL yapılandırmanızı kontrol edin.", "invalidEmbedderConfig": "Geçersiz gömücü yapılandırması. Lütfen ayarlarınızı kontrol edin.", "invalidApiKey": "Geçersiz API anahtarı. Lütfen API anahtarı yapılandırmanızı kontrol edin.", diff --git a/src/i18n/locales/vi/embeddings.json b/src/i18n/locales/vi/embeddings.json index c92ebba2765..16f47fa2d67 100644 --- a/src/i18n/locales/vi/embeddings.json +++ b/src/i18n/locales/vi/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "Không thể tạo nhúng sau {{attempts}} lần thử", "textExceedsTokenLimit": "Văn bản tại chỉ mục {{index}} vượt quá giới hạn mã thông báo tối đa ({{itemTokens}} > {{maxTokens}}). Bỏ qua.", "rateLimitRetry": "Đã đạt đến giới hạn tốc độ, thử lại sau {{delayMs}}ms (lần thử {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Lỗi máy chủ ({{status}}), thử lại sau {{delayMs}}ms (lần thử {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Định dạng phản hồi không hợp lệ từ Amazon Bedrock", "invalidCredentials": "Thông tin đăng nhập AWS không hợp lệ. Vui lòng kiểm tra cấu hình AWS của bạn.", @@ -37,7 +38,11 @@ "connectionFailed": "Không thể kết nối với dịch vụ nhúng. Vui lòng kiểm tra cài đặt kết nối của bạn và đảm bảo dịch vụ đang chạy.", "modelNotAvailable": "Mô hình được chỉ định không có sẵn. Vui lòng kiểm tra cấu hình mô hình của bạn.", "configurationError": "Cấu hình nhúng không hợp lệ. Vui lòng xem lại cài đặt của bạn.", - "serviceUnavailable": "Dịch vụ nhúng không có sẵn. Vui lòng đảm bảo nó đang chạy và có thể truy cập được.", + "serviceUnavailable": "Dịch vụ embedder tạm thời không khả dụng. Vui lòng thử lại sau.", + "rateLimitExceeded": "Vượt quá giới hạn tốc độ. Vui lòng thử lại sau.", + "badGateway": "Lỗi bad gateway từ dịch vụ embedder. Máy chủ nhận được phản hồi không hợp lệ.", + "gatewayTimeout": "Lỗi gateway timeout. Dịch vụ embedder không phản hồi kịp thời.", + "serverError": "Lỗi máy chủ từ dịch vụ embedder. Vui lòng thử lại sau.", "invalidEndpoint": "Điểm cuối API không hợp lệ. Vui lòng kiểm tra cấu hình URL của bạn.", "invalidEmbedderConfig": "Cấu hình nhúng không hợp lệ. Vui lòng kiểm tra cài đặt của bạn.", "invalidApiKey": "Khóa API không hợp lệ. Vui lòng kiểm tra cấu hình khóa API của bạn.", diff --git a/src/i18n/locales/zh-CN/embeddings.json b/src/i18n/locales/zh-CN/embeddings.json index b4f4eaad1d1..295ff447168 100644 --- a/src/i18n/locales/zh-CN/embeddings.json +++ b/src/i18n/locales/zh-CN/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "尝试 {{attempts}} 次后创建嵌入失败", "textExceedsTokenLimit": "索引 {{index}} 处的文本超过最大令牌限制 ({{itemTokens}} > {{maxTokens}})。正在跳过。", "rateLimitRetry": "已达到速率限制,将在 {{delayMs}} 毫秒后重试(尝试次数 {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "服务器错误 ({{status}}),{{delayMs}}ms 后重试 (尝试 {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Amazon Bedrock 返回无效的响应格式", "invalidCredentials": "AWS 凭证无效。请检查您的 AWS 配置。", @@ -37,7 +38,11 @@ "connectionFailed": "连接嵌入器服务失败。请检查您的连接设置并确保服务正在运行。", "modelNotAvailable": "指定的模型不可用。请检查您的模型配置。", "configurationError": "嵌入器配置无效。请查看您的设置。", - "serviceUnavailable": "嵌入器服务不可用。请确保它正在运行且可访问。", + "serviceUnavailable": "Embedder服务暂时不可用。请稍后重试。", + "rateLimitExceeded": "超出速率限制。请稍后重试。", + "badGateway": "Embedder服务的bad gateway错误。服务器收到无效响应。", + "gatewayTimeout": "Gateway timeout错误。Embedder服务未及时响应。", + "serverError": "Embedder服务的服务器错误。请稍后重试。", "invalidEndpoint": "API 端点无效。请检查您的 URL 配置。", "invalidEmbedderConfig": "嵌入器配置无效。请检查您的设置。", "invalidApiKey": "API 密钥无效。请检查您的 API 密钥配置。", diff --git a/src/i18n/locales/zh-TW/embeddings.json b/src/i18n/locales/zh-TW/embeddings.json index 26845ed9488..5f91d405b9b 100644 --- a/src/i18n/locales/zh-TW/embeddings.json +++ b/src/i18n/locales/zh-TW/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "嘗試 {{attempts}} 次後建立內嵌失敗", "textExceedsTokenLimit": "索引 {{index}} 處的文字超過最大權杖限制 ({{itemTokens}} > {{maxTokens}})。正在略過。", "rateLimitRetry": "已達到速率限制,將在 {{delayMs}} 毫秒後重試(嘗試次數 {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "伺服器錯誤 ({{status}}),{{delayMs}}ms 後重試 (嘗試 {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Amazon Bedrock 傳回無效的回應格式", "invalidCredentials": "AWS 認證無效。請檢查您的 AWS 設定。", @@ -37,7 +38,11 @@ "connectionFailed": "連線至內嵌服務失敗。請檢查您的連線設定並確保服務正在執行。", "modelNotAvailable": "指定的模型不可用。請檢查您的模型組態。", "configurationError": "無效的內嵌程式組態。請檢閱您的設定。", - "serviceUnavailable": "內嵌服務不可用。請確保它正在執行且可存取。", + "serviceUnavailable": "Embedder服務暫時不可用。請稍後重試。", + "rateLimitExceeded": "超出速率限制。請稍後重試。", + "badGateway": "Embedder服務的bad gateway錯誤。伺服器收到無效回應。", + "gatewayTimeout": "Gateway timeout錯誤。Embedder服務未及時回應。", + "serverError": "Embedder服務的伺服器錯誤。請稍後重試。", "invalidEndpoint": "無效的 API 端點。請檢查您的 URL 組態。", "invalidEmbedderConfig": "無效的內嵌程式組態。請檢查您的設定。", "invalidApiKey": "無效的 API 金鑰。請檢查您的 API 金鑰組態。", diff --git a/src/services/code-index/embedders/__tests__/openai-compatible.spec.ts b/src/services/code-index/embedders/__tests__/openai-compatible.spec.ts index ecde7691515..53a51f15b8d 100644 --- a/src/services/code-index/embedders/__tests__/openai-compatible.spec.ts +++ b/src/services/code-index/embedders/__tests__/openai-compatible.spec.ts @@ -98,6 +98,8 @@ describe("OpenAICompatibleEmbedder", () => { expect(MockedOpenAI).toHaveBeenCalledWith({ baseURL: testBaseUrl, apiKey: testApiKey, + timeout: 60000, + maxRetries: 0, }) expect(embedder).toBeDefined() }) @@ -108,6 +110,8 @@ describe("OpenAICompatibleEmbedder", () => { expect(MockedOpenAI).toHaveBeenCalledWith({ baseURL: testBaseUrl, apiKey: testApiKey, + timeout: 60000, + maxRetries: 0, }) expect(embedder).toBeDefined() }) @@ -405,6 +409,7 @@ describe("OpenAICompatibleEmbedder", () => { }) afterEach(() => { + vitest.clearAllTimers() vitest.useRealTimers() }) @@ -441,7 +446,7 @@ describe("OpenAICompatibleEmbedder", () => { const result = await resultPromise expect(mockEmbeddingsCreate).toHaveBeenCalledTimes(3) - expect(console.warn).toHaveBeenCalledWith(expect.stringContaining("Rate limit hit, retrying in")) + expect(console.warn).toHaveBeenCalledWith("embeddings:serverErrorRetry") expect(result).toEqual({ embeddings: [[0.25, 0.5, 0.75]], usage: { promptTokens: 10, totalTokens: 15 }, @@ -463,18 +468,35 @@ describe("OpenAICompatibleEmbedder", () => { expect(console.warn).not.toHaveBeenCalledWith(expect.stringContaining("Rate limit hit")) }) - it("should throw error immediately on non-retryable errors", async () => { + it("should retry on 5xx server errors", async () => { const testTexts = ["Hello world"] const serverError = new Error("Internal server error") ;(serverError as any).status = 500 - mockEmbeddingsCreate.mockRejectedValue(serverError) + // Setup 3 rejections for 3 attempts (MAX_RETRIES = 3) + mockEmbeddingsCreate + .mockRejectedValueOnce(serverError) + .mockRejectedValueOnce(serverError) + .mockRejectedValueOnce(serverError) - await expect(embedder.createEmbeddings(testTexts)).rejects.toThrow( + const resultPromise = embedder.createEmbeddings(testTexts) + + // Register the rejection handler BEFORE advancing timers + // This prevents unhandledRejection because the error handler is attached + // before the promise actually rejects during timer advancement + const resultExpectThrow = expect(resultPromise).rejects.toThrow( "Failed to create embeddings after 3 attempts: HTTP 500 - Internal server error", ) - expect(mockEmbeddingsCreate).toHaveBeenCalledTimes(1) + // Run all timers - this triggers the retries and rejections + // The rejection handler is already registered, so no unhandledRejection + await vitest.runAllTimersAsync() + + // Wait for the assertion to complete + await resultExpectThrow + + // Verify all 3 attempts were made (retry для 5xx) + expect(mockEmbeddingsCreate).toHaveBeenCalledTimes(3) }) }) @@ -857,7 +879,7 @@ describe("OpenAICompatibleEmbedder", () => { expect(global.fetch).toHaveBeenCalledTimes(3) // Check that rate limit warnings were logged - expect(console.warn).toHaveBeenCalledWith(expect.stringContaining("Rate limit hit")) + expect(console.warn).toHaveBeenCalledWith("embeddings:serverErrorRetry") expectEmbeddingValues(result.embeddings[0], [0.1, 0.2, 0.3]) vitest.useRealTimers() }) @@ -888,6 +910,137 @@ describe("OpenAICompatibleEmbedder", () => { }) }) }) + + describe("timeout handling", () => { + it("should pass timeout and maxRetries to OpenAI SDK constructor", () => { + new OpenAICompatibleEmbedder(testBaseUrl, testApiKey, testModelId) + + expect(MockedOpenAI).toHaveBeenCalledWith({ + baseURL: testBaseUrl, + apiKey: testApiKey, + timeout: 60000, + maxRetries: 0, + }) + }) + + it("should handle AbortError as 504 Gateway Timeout in direct fetch", async () => { + const azureUrl = + "https://myresource.openai.azure.com/openai/deployments/mymodel/embeddings?api-version=2024-02-01" + const embedder = new OpenAICompatibleEmbedder(azureUrl, testApiKey, testModelId) + + // Mock fetch to simulate timeout (AbortError) + const abortError = new DOMException("The operation was aborted", "AbortError") + ;(global.fetch as MockedFunction).mockRejectedValue(abortError) + + await expect(embedder.createEmbeddings(["test"])).rejects.toThrow( + "Failed to create embeddings after 3 attempts", + ) + }) + }) + + describe("5xx retry handling", () => { + beforeEach(() => { + vitest.useFakeTimers() + }) + + afterEach(() => { + vitest.useRealTimers() + }) + + it("should retry on 502 Bad Gateway", async () => { + const azureUrl = + "https://myresource.openai.azure.com/openai/deployments/mymodel/embeddings?api-version=2024-02-01" + const embedder = new OpenAICompatibleEmbedder(azureUrl, testApiKey, testModelId) + + const base64String = Buffer.from(new Float32Array([0.1, 0.2, 0.3]).buffer).toString("base64") + + ;(global.fetch as MockedFunction) + .mockResolvedValueOnce({ ok: false, status: 502, text: async () => "Bad Gateway" } as any) + .mockResolvedValueOnce({ ok: false, status: 502, text: async () => "Bad Gateway" } as any) + .mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => ({ + data: [{ embedding: base64String }], + usage: { prompt_tokens: 10, total_tokens: 15 }, + }), + } as any) + + const resultPromise = embedder.createEmbeddings(["test"]) + + // Advance timers for retry delays (500ms, 1000ms) + await vitest.advanceTimersByTimeAsync(500) + await vitest.advanceTimersByTimeAsync(1000) + + const result = await resultPromise + + expect(global.fetch).toHaveBeenCalledTimes(3) + expect(console.warn).toHaveBeenCalledWith("embeddings:serverErrorRetry") + expect(result.embeddings).toHaveLength(1) + }) + + it("should retry on 503 Service Unavailable", async () => { + const azureUrl = + "https://myresource.openai.azure.com/openai/deployments/mymodel/embeddings?api-version=2024-02-01" + const embedder = new OpenAICompatibleEmbedder(azureUrl, testApiKey, testModelId) + + const base64String = Buffer.from(new Float32Array([0.1, 0.2, 0.3]).buffer).toString("base64") + + ;(global.fetch as MockedFunction) + .mockResolvedValueOnce({ ok: false, status: 503, text: async () => "Service Unavailable" } as any) + .mockResolvedValueOnce({ ok: false, status: 503, text: async () => "Service Unavailable" } as any) + .mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => ({ + data: [{ embedding: base64String }], + usage: { prompt_tokens: 10, total_tokens: 15 }, + }), + } as any) + + const resultPromise = embedder.createEmbeddings(["test"]) + + await vitest.advanceTimersByTimeAsync(500) + await vitest.advanceTimersByTimeAsync(1000) + + const result = await resultPromise + + expect(global.fetch).toHaveBeenCalledTimes(3) + expect(console.warn).toHaveBeenCalledWith("embeddings:serverErrorRetry") + expect(result.embeddings).toHaveLength(1) + }) + + it("should retry on 504 Gateway Timeout", async () => { + const azureUrl = + "https://myresource.openai.azure.com/openai/deployments/mymodel/embeddings?api-version=2024-02-01" + const embedder = new OpenAICompatibleEmbedder(azureUrl, testApiKey, testModelId) + + const base64String = Buffer.from(new Float32Array([0.1, 0.2, 0.3]).buffer).toString("base64") + + ;(global.fetch as MockedFunction) + .mockResolvedValueOnce({ ok: false, status: 504, text: async () => "Gateway Timeout" } as any) + .mockResolvedValueOnce({ ok: false, status: 504, text: async () => "Gateway Timeout" } as any) + .mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => ({ + data: [{ embedding: base64String }], + usage: { prompt_tokens: 10, total_tokens: 15 }, + }), + } as any) + + const resultPromise = embedder.createEmbeddings(["test"]) + + await vitest.advanceTimersByTimeAsync(500) + await vitest.advanceTimersByTimeAsync(1000) + + const result = await resultPromise + + expect(global.fetch).toHaveBeenCalledTimes(3) + expect(console.warn).toHaveBeenCalledWith("embeddings:serverErrorRetry") + expect(result.embeddings).toHaveLength(1) + }) + }) }) describe("URL detection", () => { @@ -1066,7 +1219,7 @@ describe("OpenAICompatibleEmbedder", () => { const result = await embedder.validateConfiguration() expect(result.valid).toBe(false) - expect(result.error).toBe("embeddings:validation.serviceUnavailable") + expect(result.error).toBe("embeddings:validation.rateLimitExceeded") }) it("should fail validation with generic error", async () => { @@ -1079,7 +1232,7 @@ describe("OpenAICompatibleEmbedder", () => { const result = await embedder.validateConfiguration() expect(result.valid).toBe(false) - expect(result.error).toBe("embeddings:validation.configurationError") + expect(result.error).toBe("embeddings:validation.serverError") }) }) }) diff --git a/src/services/code-index/embedders/__tests__/openai.spec.ts b/src/services/code-index/embedders/__tests__/openai.spec.ts index 089abe151a3..4919c914c77 100644 --- a/src/services/code-index/embedders/__tests__/openai.spec.ts +++ b/src/services/code-index/embedders/__tests__/openai.spec.ts @@ -514,7 +514,7 @@ describe("OpenAiEmbedder", () => { const result = await embedder.validateConfiguration() expect(result.valid).toBe(false) - expect(result.error).toBe("embeddings:validation.serviceUnavailable") + expect(result.error).toBe("embeddings:validation.rateLimitExceeded") }) it("should fail validation with connection error", async () => { @@ -535,7 +535,7 @@ describe("OpenAiEmbedder", () => { const result = await embedder.validateConfiguration() expect(result.valid).toBe(false) - expect(result.error).toBe("embeddings:validation.configurationError") + expect(result.error).toBe("embeddings:validation.serverError") }) }) }) diff --git a/src/services/code-index/embedders/openai-compatible.ts b/src/services/code-index/embedders/openai-compatible.ts index 6eaf2b6c2c1..1f6e7977c5d 100644 --- a/src/services/code-index/embedders/openai-compatible.ts +++ b/src/services/code-index/embedders/openai-compatible.ts @@ -14,6 +14,10 @@ import { TelemetryService } from "@roo-code/telemetry" import { Mutex } from "async-mutex" import { handleOpenAIError } from "../../../api/providers/utils/openai-error-handler" +// Timeout constants for OpenAI Compatible API requests +const OPENAI_COMPATIBLE_EMBEDDING_TIMEOUT_MS = 60000 // 60 секунд для embedding запросов +const OPENAI_COMPATIBLE_VALIDATION_TIMEOUT_MS = 30000 // 30 секунд для валидации + interface EmbeddingItem { embedding: string | number[] [key: string]: any @@ -73,6 +77,8 @@ export class OpenAICompatibleEmbedder implements IEmbedder { this.embeddingsClient = new OpenAI({ baseURL: baseUrl, apiKey: apiKey, + timeout: OPENAI_COMPATIBLE_EMBEDDING_TIMEOUT_MS, // 60 секунд таймаут + maxRetries: 0, // Отключаем встроенный retry SDK — используем нашу собственную логику в _embedBatchWithRetries() }) } catch (error) { // Use the error handler to transform ByteString conversion errors @@ -204,45 +210,65 @@ export class OpenAICompatibleEmbedder implements IEmbedder { batchTexts: string[], model: string, ): Promise { - const response = await fetch(url, { - method: "POST", - headers: { - "Content-Type": "application/json", - // Azure OpenAI uses 'api-key' header, while OpenAI uses 'Authorization' - // We'll try 'api-key' first for Azure compatibility - "api-key": this.apiKey, - Authorization: `Bearer ${this.apiKey}`, - }, - body: JSON.stringify({ - input: batchTexts, - model: model, - encoding_format: "base64", - }), - }) - - if (!response || !response.ok) { - const status = response?.status || 0 - let errorText = "No response" - try { - if (response && typeof response.text === "function") { - errorText = await response.text() - } else if (response) { + const controller = new AbortController() + const timeoutId = setTimeout(() => controller.abort(), OPENAI_COMPATIBLE_EMBEDDING_TIMEOUT_MS) + + try { + const response = await fetch(url, { + method: "POST", + headers: { + "Content-Type": "application/json", + // Azure OpenAI uses 'api-key' header, while OpenAI uses 'Authorization' + // We'll try 'api-key' first for Azure compatibility + "api-key": this.apiKey, + Authorization: `Bearer ${this.apiKey}`, + }, + body: JSON.stringify({ + input: batchTexts, + model: model, + encoding_format: "base64", + }), + signal: controller.signal, + }) + clearTimeout(timeoutId) + + if (!response || !response.ok) { + const status = response?.status || 0 + let errorText = "No response" + try { + if (response && typeof response.text === "function") { + errorText = await response.text() + } else if (response) { + errorText = `Error ${status}` + } + } catch { + // Ignore text parsing errors errorText = `Error ${status}` } - } catch { - // Ignore text parsing errors - errorText = `Error ${status}` + const error = new Error(`HTTP ${status}: ${errorText}`) as HttpError + error.status = status || response?.status || 0 + throw error + } + + try { + return await response.json() + } catch (e) { + const error = new Error(`Failed to parse response JSON`) as HttpError + error.status = response.status + throw error + } + } catch (error) { + clearTimeout(timeoutId) + + // Handle AbortError (timeout) — преобразуем в HTTP 504 + if (error instanceof Error && error.name === "AbortError") { + const timeoutError = new Error( + `Request timed out after ${OPENAI_COMPATIBLE_EMBEDDING_TIMEOUT_MS / 1000} seconds`, + ) as HttpError + timeoutError.status = 504 // Gateway Timeout + throw timeoutError } - const error = new Error(`HTTP ${status}: ${errorText}`) as HttpError - error.status = status || response?.status || 0 - throw error - } - try { - return await response.json() - } catch (e) { - const error = new Error(`Failed to parse response JSON`) as HttpError - error.status = response.status throw error } } @@ -321,28 +347,35 @@ export class OpenAICompatibleEmbedder implements IEmbedder { const hasMoreAttempts = attempts < MAX_RETRIES - 1 - // Check if it's a rate limit error const httpError = error as HttpError - if (httpError?.status === 429) { - // Update global rate limit state + + // Определяем тип ошибки + const errorStatus = httpError?.status + const isRetryableServerError = + typeof errorStatus === "number" && errorStatus >= 500 && errorStatus < 600 + const isRateLimitError = errorStatus === 429 + + // Обновляем global rate limit state только для 429 + if (isRateLimitError) { await this.updateGlobalRateLimitState(httpError) + } - if (hasMoreAttempts) { - // Calculate delay based on global rate limit state - const baseDelay = INITIAL_DELAY_MS * Math.pow(2, attempts) - const globalDelay = await this.getGlobalRateLimitDelay() - const delayMs = Math.max(baseDelay, globalDelay) + // Ретраим для 429 И 5xx ошибок + if ((isRateLimitError || isRetryableServerError) && hasMoreAttempts) { + const baseDelay = INITIAL_DELAY_MS * Math.pow(2, attempts) + const globalDelay = await this.getGlobalRateLimitDelay() + const delayMs = Math.max(baseDelay, globalDelay) - console.warn( - t("embeddings:rateLimitRetry", { - delayMs, - attempt: attempts + 1, - maxRetries: MAX_RETRIES, - }), - ) - await new Promise((resolve) => setTimeout(resolve, delayMs)) - continue - } + console.warn( + t("embeddings:serverErrorRetry", { + status: httpError?.status, + delayMs, + attempt: attempts + 1, + maxRetries: MAX_RETRIES, + }), + ) + await new Promise((resolve) => setTimeout(resolve, delayMs)) + continue } // Log the error for debugging diff --git a/src/services/code-index/shared/__tests__/validation-helpers.spec.ts b/src/services/code-index/shared/__tests__/validation-helpers.spec.ts index bf6c732a923..800377f4afd 100644 --- a/src/services/code-index/shared/__tests__/validation-helpers.spec.ts +++ b/src/services/code-index/shared/__tests__/validation-helpers.spec.ts @@ -1,4 +1,4 @@ -import { sanitizeErrorMessage } from "../validation-helpers" +import { sanitizeErrorMessage, getErrorMessageForStatus } from "../validation-helpers" describe("sanitizeErrorMessage", () => { it("should sanitize Unix-style file paths", () => { @@ -90,3 +90,50 @@ describe("sanitizeErrorMessage", () => { expect(sanitizeErrorMessage(input)).toBe(expected) }) }) + +describe("getErrorMessageForStatus", () => { + it("should return authenticationFailed for 401", () => { + expect(getErrorMessageForStatus(401, "openai")).toBe("validation.authenticationFailed") + }) + + it("should return authenticationFailed for 403", () => { + expect(getErrorMessageForStatus(403, "openai")).toBe("validation.authenticationFailed") + }) + + it("should return modelNotAvailable for 404 with openai embedder", () => { + expect(getErrorMessageForStatus(404, "openai")).toBe("validation.modelNotAvailable") + }) + + it("should return invalidEndpoint for 404 with non-openai embedder", () => { + expect(getErrorMessageForStatus(404, "ollama")).toBe("validation.invalidEndpoint") + }) + + it("should return rateLimitExceeded for 429", () => { + expect(getErrorMessageForStatus(429, "openai")).toBe("validation.rateLimitExceeded") + }) + + it("should return badGateway for 502", () => { + expect(getErrorMessageForStatus(502, "openai")).toBe("validation.badGateway") + }) + + it("should return serviceUnavailable for 503", () => { + expect(getErrorMessageForStatus(503, "openai")).toBe("validation.serviceUnavailable") + }) + + it("should return gatewayTimeout for 504", () => { + expect(getErrorMessageForStatus(504, "openai")).toBe("validation.gatewayTimeout") + }) + + it("should return serverError for other 5xx errors", () => { + expect(getErrorMessageForStatus(500, "openai")).toBe("validation.serverError") + expect(getErrorMessageForStatus(501, "openai")).toBe("validation.serverError") + expect(getErrorMessageForStatus(505, "openai")).toBe("validation.serverError") + expect(getErrorMessageForStatus(599, "openai")).toBe("validation.serverError") + }) + + it("should return undefined for unknown status", () => { + expect(getErrorMessageForStatus(undefined, "openai")).toBeUndefined() + expect(getErrorMessageForStatus(200, "openai")).toBeUndefined() + expect(getErrorMessageForStatus(301, "openai")).toBeUndefined() + }) +}) diff --git a/src/services/code-index/shared/validation-helpers.ts b/src/services/code-index/shared/validation-helpers.ts index 6b043d44d38..a1e5af03d6e 100644 --- a/src/services/code-index/shared/validation-helpers.ts +++ b/src/services/code-index/shared/validation-helpers.ts @@ -76,10 +76,16 @@ export function getErrorMessageForStatus(status: number | undefined, embedderTyp ? t("embeddings:validation.modelNotAvailable") : t("embeddings:validation.invalidEndpoint") case 429: + return t("embeddings:validation.rateLimitExceeded") + case 502: + return t("embeddings:validation.badGateway") + case 503: return t("embeddings:validation.serviceUnavailable") + case 504: + return t("embeddings:validation.gatewayTimeout") default: if (status && status >= 400 && status < 600) { - return t("embeddings:validation.configurationError") + return t("embeddings:validation.serverError") } return undefined } From db76904288e4fba7c7b99aaf2d9c19f59a1a68dc Mon Sep 17 00:00:00 2001 From: Roo Code Date: Sun, 5 Apr 2026 01:58:39 +0300 Subject: [PATCH 2/2] fix(i18n): add missing Catalan translations for embedding error messages --- src/i18n/locales/ca/embeddings.json | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/src/i18n/locales/ca/embeddings.json b/src/i18n/locales/ca/embeddings.json index 9ceec7d05c3..b494feef1c8 100644 --- a/src/i18n/locales/ca/embeddings.json +++ b/src/i18n/locales/ca/embeddings.json @@ -6,6 +6,7 @@ "failedMaxAttempts": "No s'han pogut crear les incrustacions després de {{attempts}} intents", "textExceedsTokenLimit": "El text a l'índex {{index}} supera el límit màxim de testimonis ({{itemTokens}} > {{maxTokens}}). S'està ometent.", "rateLimitRetry": "S'ha assolit el límit de velocitat, es torna a intentar en {{delayMs}}ms (intent {{attempt}}/{{maxRetries}})", + "serverErrorRetry": "Error del servidor ({{status}}), es torna a intentar en {{delayMs}}ms (intent {{attempt}}/{{maxRetries}})", "bedrock": { "invalidResponseFormat": "Format de resposta no vàlid d'Amazon Bedrock", "invalidCredentials": "Credencials d'AWS no vàlides. Si us plau, comprova la teva configuració d'AWS.", @@ -37,7 +38,11 @@ "connectionFailed": "No s'ha pogut connectar al servei d'incrustació. Comproveu la vostra configuració de connexió i assegureu-vos que el servei estigui funcionant.", "modelNotAvailable": "El model especificat no està disponible. Comproveu la vostra configuració de model.", "configurationError": "Configuració d'incrustació no vàlida. Reviseu la vostra configuració.", - "serviceUnavailable": "El servei d'incrustació no està disponible. Assegureu-vos que estigui funcionant i sigui accessible.", + "serviceUnavailable": "El servei d'incrustació no està disponible temporalment. Si us plau, torneu-ho a provar més tard.", + "rateLimitExceeded": "S'ha superat el límit de velocitat. Si us plau, torneu-ho a provar més tard.", + "badGateway": "Error de bad gateway del servei d'incrustació. El servidor ha rebut una resposta no vàlida.", + "gatewayTimeout": "Error de gateway timeout. El servei d'incrustació no ha respost a temps.", + "serverError": "Error del servidor del servei d'incrustació. Si us plau, torneu-ho a provar més tard.", "invalidEndpoint": "Punt final d'API no vàlid. Comproveu la vostra configuració d'URL.", "invalidEmbedderConfig": "Configuració d'incrustació no vàlida. Comproveu la vostra configuració.", "invalidApiKey": "Clau d'API no vàlida. Comproveu la vostra configuració de clau d'API.",