Releases: Stackbilt-dev/llm-providers
## v1.3.0 — Cloudflare Workers AI vision support
### Added

- **Cloudflare Workers AI vision support** — `CloudflareProvider` now accepts `request.images` and routes them to vision-capable models. Previously, image data was silently dropped on the CF path.
- **Three new CF vision models:**
  - `@cf/google/gemma-4-26b-a4b-it` — 256K context, vision + function calling + reasoning
  - `@cf/meta/llama-4-scout-17b-16e-instruct` — natively multimodal, tool calling
  - `@cf/meta/llama-3.2-11b-vision-instruct` — image understanding
- **`CloudflareProvider.supportsVision = true`** — the factory's `analyzeImage` now dispatches to CF when configured.
- **Factory default vision fallback** — `getDefaultVisionModel()` falls back to `@cf/google/gemma-4-26b-a4b-it` when neither Anthropic nor OpenAI is configured, enabling CF-only deployments to use `analyzeImage()`.
### Changed

- Images are passed to CF using the OpenAI-compatible `image_url` content-part shape (base64 data URIs). HTTP image URLs throw a helpful `ConfigurationError` — fetch the image and pass the bytes in `image.data`.
- Attempting `request.images` on a non-vision CF model throws a `ConfigurationError` naming the vision-capable alternatives.
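Since HTTP URLs are rejected on this path, the caller fetches the image and base64-encodes the bytes before passing them in `image.data`. A minimal sketch of that step (the `toBase64` and `imageToRequestData` helpers are illustrative, not part of the package):

```typescript
// Encode raw image bytes as base64 for image.data. Node's Buffer does the
// encoding; inside a Worker, btoa over a binary string works the same way.
function toBase64(bytes: ArrayBuffer): string {
  return Buffer.from(bytes).toString('base64');
}

// Fetch an HTTP image yourself, then hand the bytes over inline.
async function imageToRequestData(
  url: string,
): Promise<{ data: string; mimeType: string }> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Image fetch failed: ${res.status}`);
  const mimeType = res.headers.get('content-type') ?? 'image/jpeg';
  return { data: toBase64(await res.arrayBuffer()), mimeType };
}
```

The resulting `{ data, mimeType }` object can then be passed as the `image` field of `analyzeImage()`.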
### Usage

```ts
factory.analyzeImage({
  image: { data: base64, mimeType: 'image/jpeg' },
  prompt: 'Extract recipe data',
  model: '@cf/google/gemma-4-26b-a4b-it',
});
```

See #43 for details.
## v1.1.0 — Multi-Modal: Image Generation
### Image Generation Provider

`@stackbilt/llm-providers` is now multi-modal — text + image inference under one package.
New: ImageProvider
import { ImageProvider } from '@stackbilt/llm-providers';
const img = new ImageProvider({
cloudflareAi: env.AI,
geminiApiKey: env.GEMINI_API_KEY,
});
const result = await img.generateImage({
prompt: 'a mountain landscape at sunset',
model: 'flux-dev',
});
// result.image: ArrayBuffer, result.responseTime, result.providerBuilt-in Models
| Model | Provider | Use Case |
|---|---|---|
| `sdxl-lightning` | Cloudflare | Fast drafts, free tier |
| `flux-klein` | Cloudflare | Balanced quality/speed |
| `flux-dev` | Cloudflare | Highest CF quality |
| `gemini-flash-image` | Gemini | Text rendering capable |
| `gemini-flash-image-preview` | Gemini | Latest preview model |
Extracted from img-forge production codebase. Battle-tested response normalization handles all Workers AI return formats.
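That normalization can be pictured roughly as follows. This is a sketch, assuming the two return shapes noted above (raw binary vs. JSON carrying a base64 `image` field); the function and type names are illustrative, not the package's actual API:

```typescript
// Workers AI image endpoints return different shapes depending on the model:
// some return raw bytes (ArrayBuffer / Uint8Array), others a JSON body with
// a base64-encoded `image` field. Normalize all of them to an ArrayBuffer.
type WorkersAiImageResult = ArrayBuffer | Uint8Array | { image: string };

function normalizeImageResult(raw: WorkersAiImageResult): ArrayBuffer {
  if (raw instanceof ArrayBuffer) return raw;
  if (raw instanceof Uint8Array) {
    // Copy into a standalone ArrayBuffer so callers own the memory.
    return raw.slice().buffer as ArrayBuffer;
  }
  if (typeof raw === 'object' && typeof raw.image === 'string') {
    return Uint8Array.from(Buffer.from(raw.image, 'base64')).buffer as ArrayBuffer;
  }
  throw new TypeError('Unrecognized Workers AI image response shape');
}
```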
Full changelog: CHANGELOG.md
## v1.0.0 — Production Release
First stable release. Production-tested in AEGIS cognitive kernel since v1.72.0.
### Highlights

- Zero runtime dependencies — supply chain security by design
- 5 providers: OpenAI, Anthropic, Cloudflare Workers AI, Cerebras, Groq
- `LLMProviders.fromEnv()` — one-line multi-provider setup
- Graduated circuit breakers — automatic failover with half-open probe recovery
- `CreditLedger` — per-provider budget tracking with threshold alerts + burn rate projection
- npm provenance — every version cryptographically linked to its source commit
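The graduated failover above follows the classic closed / open / half-open pattern. As an illustration of that pattern only (not the package's actual implementation), the state machine looks roughly like this:

```typescript
// Minimal circuit breaker: closed -> open after N failures, open -> half-open
// after a cooldown, where a single probe either re-closes or re-opens it.
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private state: BreakerState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private maxFailures = 3,
    private cooldownMs = 30_000,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  canRequest(): boolean {
    if (this.state === 'open' && this.now() - this.openedAt >= this.cooldownMs) {
      this.state = 'half-open'; // allow one probe request through
    }
    return this.state !== 'open';
  }

  onSuccess(): void {
    this.state = 'closed';
    this.failures = 0;
  }

  onFailure(): void {
    this.failures += 1;
    if (this.state === 'half-open' || this.failures >= this.maxFailures) {
      this.state = 'open';
      this.openedAt = this.now();
      this.failures = 0;
    }
  }
}
```

After the cooldown, exactly one probe is let through; its outcome decides whether the breaker re-closes or trips again.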
### Install

```sh
npm install @stackbilt/llm-providers
```

### Quick Start

```ts
import { LLMProviders } from '@stackbilt/llm-providers';

const llm = LLMProviders.fromEnv(process.env);

const response = await llm.generateResponse({
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

See README for full documentation.
See SECURITY.md for supply chain security policy.