
Releases: Stackbilt-dev/llm-providers

v1.3.0 — Cloudflare Workers AI vision support

17 Apr 00:05
bf713cd


Added

  • Cloudflare Workers AI vision support: CloudflareProvider now accepts request.images and routes them to vision-capable models. Previously, image data was silently dropped on the CF path.
  • Three new CF vision models:
    • @cf/google/gemma-4-26b-a4b-it — 256K context, vision + function calling + reasoning
    • @cf/meta/llama-4-scout-17b-16e-instruct — natively multimodal, tool calling
    • @cf/meta/llama-3.2-11b-vision-instruct — image understanding
  • CloudflareProvider.supportsVision = true — the factory's analyzeImage now dispatches to CF when configured.
  • Factory default vision fallback: getDefaultVisionModel() now falls back to @cf/google/gemma-4-26b-a4b-it when neither Anthropic nor OpenAI is configured, enabling CF-only deployments to use analyzeImage().

Changed

  • Images are passed to CF using the OpenAI-compatible image_url content-part shape (base64 data URIs). HTTP image URLs throw a helpful ConfigurationError — fetch the image and pass bytes in image.data.
  • Attempting request.images on a non-vision CF model throws a ConfigurationError naming the vision-capable alternatives.

Usage

factory.analyzeImage({
  image: { data: base64, mimeType: 'image/jpeg' },
  prompt: 'Extract recipe data',
  model: '@cf/google/gemma-4-26b-a4b-it',
});
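As noted under Changed, the CF path rejects HTTP image URLs, so a remote image must be fetched and passed as base64 bytes via image.data. A minimal sketch of that workflow — the bytesToBase64 and analyzeRemoteImage helpers are illustrative, not part of the library:

```typescript
// Illustrative helper (not part of @stackbilt/llm-providers): encode
// raw image bytes as base64 for image.data.
function bytesToBase64(bytes: Uint8Array): string {
  let binary = '';
  const chunk = 0x8000; // build the binary string chunk-wise to avoid
  for (let i = 0; i < bytes.length; i += chunk) { // call-stack limits
    binary += String.fromCharCode(...Array.from(bytes.subarray(i, i + chunk)));
  }
  return btoa(binary);
}

// Usage sketch: fetch the remote image yourself, then hand the bytes
// to the factory. `factory` is assumed to be the provider factory from
// this release; `analyzeRemoteImage` is a hypothetical wrapper.
async function analyzeRemoteImage(factory: any, url: string) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Failed to fetch image: ${res.status}`);
  const data = bytesToBase64(new Uint8Array(await res.arrayBuffer()));
  return factory.analyzeImage({
    image: { data, mimeType: res.headers.get('content-type') ?? 'image/jpeg' },
    prompt: 'Extract recipe data',
    model: '@cf/google/gemma-4-26b-a4b-it',
  });
}
```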

See #43 for details.

v1.1.0 — Multi-Modal: Image Generation

01 Apr 15:54


Image Generation Provider

@stackbilt/llm-providers is now multi-modal — text + image inference under one package.

New: ImageProvider

import { ImageProvider } from '@stackbilt/llm-providers';

const img = new ImageProvider({
  cloudflareAi: env.AI,
  geminiApiKey: env.GEMINI_API_KEY,
});

const result = await img.generateImage({
  prompt: 'a mountain landscape at sunset',
  model: 'flux-dev',
});
// result.image: ArrayBuffer, result.responseTime, result.provider

Built-in Models

| Model | Provider | Use Case |
| --- | --- | --- |
| sdxl-lightning | Cloudflare | Fast drafts, free tier |
| flux-klein | Cloudflare | Balanced quality/speed |
| flux-dev | Cloudflare | Highest CF quality |
| gemini-flash-image | Google | Text rendering capable |
| gemini-flash-image-preview | Google | Latest preview model |

Extracted from the img-forge production codebase. Battle-tested response normalization handles all Workers AI return formats.

Full changelog: CHANGELOG.md

v1.0.0 — Production Release

01 Apr 14:12


First stable release. Production-tested in AEGIS cognitive kernel since v1.72.0.

Highlights

  • Zero runtime dependencies — supply chain security by design
  • 5 providers: OpenAI, Anthropic, Cloudflare Workers AI, Cerebras, Groq
  • LLMProviders.fromEnv() — one-line multi-provider setup
  • Graduated circuit breakers — automatic failover with half-open probe recovery
  • CreditLedger — per-provider budget tracking with threshold alerts + burn rate projection
  • npm provenance — every version cryptographically linked to its source commit
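The half-open probe recovery mentioned above follows the standard circuit-breaker pattern: after repeated failures a provider is taken out of rotation, and after a cooldown a single trial request is allowed through before traffic fully resumes. A minimal sketch of that pattern — the class name, thresholds, and method names here are illustrative, not the library's API:

```typescript
// Sketch of a circuit breaker with half-open probe recovery.
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private state: BreakerState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 3,   // failures before opening
    private cooldownMs = 30_000,    // wait before allowing a probe
    private now: () => number = Date.now, // injectable clock for tests
  ) {}

  // Check before each request; transitions open -> half-open after cooldown.
  canRequest(): boolean {
    if (this.state === 'open' && this.now() - this.openedAt >= this.cooldownMs) {
      this.state = 'half-open'; // allow a single probe request through
    }
    return this.state !== 'open';
  }

  recordSuccess(): void {
    this.state = 'closed'; // probe succeeded (or normal traffic): reset
    this.failures = 0;
  }

  recordFailure(): void {
    this.failures += 1;
    // A failed probe re-opens immediately; otherwise open at threshold.
    if (this.state === 'half-open' || this.failures >= this.failureThreshold) {
      this.state = 'open';
      this.openedAt = this.now();
    }
  }

  get currentState(): BreakerState { return this.state; }
}
```

In a multi-provider setup, one breaker per provider lets the factory skip providers whose breaker is open and fail over to the next configured one.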

Install

npm install @stackbilt/llm-providers

Quick Start

import { LLMProviders } from '@stackbilt/llm-providers';

const llm = LLMProviders.fromEnv(process.env);
const response = await llm.generateResponse({
  messages: [{ role: 'user', content: 'Hello!' }],
});

See README for full documentation.
See SECURITY.md for supply chain security policy.