Merged
3 changes: 2 additions & 1 deletion .gitignore
@@ -5,4 +5,5 @@ tests/chunk.php
.idea/
.env
example.php
example.md
/blueprints/
190 changes: 178 additions & 12 deletions README.md
@@ -23,9 +23,11 @@ Utopia Framework requires PHP 8.0 or later. We recommend using the latest PHP version

- **Multiple AI Providers** - Support for OpenAI, Anthropic, Deepseek, Perplexity, XAI, Gemini, and OpenRouter APIs
- **Flexible Message Types** - Support for text and structured content in messages
- **Message Attachments** - Attach files (for example images) directly to conversation turns
- **Conversation Management** - Easy-to-use conversation handling between agents and users
- **Model Selection** - Choose from various AI models (GPT-4, Claude 3, Deepseek Chat, Sonar, Grok, etc.)
- **Parameter Control** - Fine-tune model behavior with temperature and token controls
- **Streaming Output** - Consume incremental model output through callback-driven Server-Sent Events (SSE) streams

## Usage

@@ -35,8 +37,8 @@ Utopia Framework requires PHP 8.0 or later. We recommend using the latest PHP version
<?php

use Utopia\Agents\Agent;
use Utopia\Agents\Message;
use Utopia\Agents\Roles\User;
use Utopia\Agents\Conversation;
use Utopia\Agents\Adapters\OpenAI;

@@ -50,7 +52,7 @@ $user = new User('user-1', 'John');
// Start a conversation
$conversation = new Conversation($agent);
$conversation
    ->message($user, new Message('What is artificial intelligence?'))
    ->send();
```

@@ -182,7 +184,7 @@ $openrouter = new OpenRouter(
```php
use Utopia\Agents\Roles\User;
use Utopia\Agents\Roles\Assistant;
use Utopia\Agents\Message;

// Create a conversation with system instructions
$agent = new Agent($adapter);
@@ -197,26 +199,190 @@ $assistant = new Assistant('assistant-1');

$conversation = new Conversation($agent);
$conversation
    ->message($user, new Message('Hello!'))
    ->message($assistant, new Message('Hi! How can I help you today?'))
    ->message($user, new Message('What is the capital of France?'));

// Add a user message with attachments
$conversation->message(
    $user,
    new Message('Please summarize this screenshot'),
    [new Message($imageBinaryContent)]
);

// Send and get response
$response = $conversation->send();
```

### Streaming Responses (SSE)

The conversation layer supports incremental output streaming through `Conversation::listen(callable $listener)`.
The callback receives each text delta as it arrives from the provider's SSE stream, while `send()` still returns the final aggregated `Message`.
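
For context, each SSE event on the wire is one or more `data:` lines terminated by a blank line. The sketch below is a minimal, framework-agnostic parser for that format — illustrative only, not the library's internal implementation, and `parseSseData` is a hypothetical helper name:

```php
<?php

/**
 * Split a raw SSE buffer into its `data:` payloads.
 * Illustrative sketch of the wire format; the adapters
 * handle this internally.
 *
 * @return list<string>
 */
function parseSseData(string $buffer): array
{
    $payloads = [];

    // Events are separated by a blank line
    foreach (explode("\n\n", $buffer) as $event) {
        foreach (explode("\n", $event) as $line) {
            if (str_starts_with($line, 'data: ')) {
                $payloads[] = substr($line, 6);
            }
        }
    }

    return $payloads;
}

// Two frames as a provider stream might send them
$raw = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\n";
// parseSseData($raw) yields ['{"delta":"Hel"}', '{"delta":"lo"}']
```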

#### Streaming in CLI / Worker Contexts

```php
use Utopia\Agents\Agent;
use Utopia\Agents\Conversation;
use Utopia\Agents\Adapters\OpenAI;
use Utopia\Agents\Message;
use Utopia\Agents\Roles\User;

$agent = new Agent(new OpenAI('your-api-key', OpenAI::MODEL_GPT_4O));
$conversation = new Conversation($agent);
$user = new User('user-1', 'John');

$conversation
    ->listen(function (string $chunk): void {
        echo $chunk; // render partial output as soon as it is received
    })
    ->message($user, new Message('Explain vector databases in one paragraph.'));

$final = $conversation->send(); // final, complete assistant message
```

#### Exposing Model Output as HTTP SSE

```php
use Utopia\Agents\Agent;
use Utopia\Agents\Conversation;
use Utopia\Agents\Adapters\OpenAI;
use Utopia\Agents\Message;
use Utopia\Agents\Roles\User;

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('Connection: keep-alive');

$agent = new Agent(new OpenAI('your-api-key', OpenAI::MODEL_GPT_4O));
$conversation = new Conversation($agent);
$user = new User('user-1', 'John');

$conversation
    ->listen(function (string $chunk): void {
        // Send each token delta as an SSE frame
        echo 'data: '.json_encode(['delta' => $chunk], JSON_UNESCAPED_UNICODE)."\n\n";

        if (ob_get_level() > 0) {
            ob_flush();
        }
        flush();
    })
    ->message($user, new Message('Write a short release note for today\'s deployment.'));

$final = $conversation->send();

// Optional terminal event with complete text
echo 'event: done'."\n";
echo 'data: '.json_encode(['message' => $final->getContent()], JSON_UNESCAPED_UNICODE)."\n\n";
echo 'data: [DONE]'."\n\n";
flush();
```

#### Operational Notes

- Streaming is adapter-dependent and available for chat-capable providers that expose incremental output.
- The listener is optional; if omitted, responses are still collected and returned as a single final message.
- Keep callbacks non-blocking and lightweight to avoid slowing downstream token delivery.
- When serving SSE over HTTP, send `Content-Type: text/event-stream`, flush frequently, and disable intermediary buffering where applicable.
- Usage metrics (input/output tokens and cache counters, where supported) remain available after `send()` completes.
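
When PHP runs behind nginx, one common way to disable intermediary buffering is the `X-Accel-Buffering` response header, combined with draining PHP's own output buffers. This is infrastructure setup, not part of this library, and other proxies need their own configuration:

```php
<?php

// Ask nginx not to buffer this response (ignored by other proxies)
header('X-Accel-Buffering: no');

// Disable PHP-level compression for the stream
ini_set('zlib.output_compression', '0');

// Drain any active output buffers so flush() reaches the client
while (ob_get_level() > 0) {
    ob_end_flush();
}
```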

### Working with Messages

```php
use Utopia\Agents\Message;

// Message content is always text
$textMessage = new Message('Hello, how are you?');

// Attachments are binary payloads (for example images)
$imageMessage = new Message($imageBinaryContent);
$mimeType = $imageMessage->getMimeType(); // Get the MIME type of the image

// Attach image to a text prompt
$message = (new Message('Describe this image'))->addAttachment($imageMessage);
```

### Attachment Examples

```php
use Utopia\Agents\Conversation;
use Utopia\Agents\Message;
use Utopia\Agents\Roles\User;

$conversation = new Conversation($agent);
$user = new User('user-1', 'John');

// 1) Attach a single image in the same turn
$conversation->message(
    $user,
    new Message('What is shown here?'),
    [new Message(file_get_contents(__DIR__.'/images/screenshot.png'))]
);

// 2) Attach multiple images in one turn
$conversation->message(
    $user,
    new Message('Compare these two images and list differences.'),
    [
        new Message(file_get_contents(__DIR__.'/images/before.png')),
        new Message(file_get_contents(__DIR__.'/images/after.png')),
    ]
);

// 3) Build and reuse a message object with attachments
$prompt = (new Message('Extract visible text from this receipt'))
    ->addAttachment(new Message(file_get_contents(__DIR__.'/images/receipt.jpg')));

$conversation->message($user, $prompt);
```

### Attachment Limits and Validation

Attachment validation is enforced by default in `Conversation::message(...)`.
Guardrail values come from the selected adapter (not from conversation-level user configuration).

Default adapter guardrails:

- Max attachments per message: `10`
- Max binary size per attachment: `5_000_000` bytes (~5 MB)
- Max total attachment payload per turn: `20_000_000` bytes (~20 MB)
- MIME allowlist: `image/png`, `image/jpeg`, `image/webp`, `image/gif`
- Reject empty or unreadable payloads
- Adapter compatibility checks (attachment type must be supported by the selected adapter)
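
Since validation runs when the message is added, a rejected attachment surfaces before `send()` is ever called. The exact exception class depends on the library version, so this sketch catches broadly (an assumption, not the documented API):

```php
<?php

use Utopia\Agents\Message;

try {
    $conversation->message(
        $user,
        new Message('Describe this file'),
        [new Message($binaryPayload)]
    );
} catch (\Throwable $e) {
    // e.g. too many attachments, oversized payload, or disallowed MIME type
    error_log('Attachment rejected: '.$e->getMessage());
}
```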

To customize limits, create an adapter subclass and override limit methods:

```php
<?php

use Utopia\Agents\Adapters\OpenAI;

class StrictOpenAI extends OpenAI
{
    public function getMaxAttachmentsPerMessage(): ?int
    {
        return 3;
    }

    public function getMaxAttachmentBytes(): ?int
    {
        return 2_000_000;
    }

    public function getMaxTotalAttachmentBytes(): ?int
    {
        return 6_000_000;
    }

    /**
     * @return list<string>|null
     */
    public function getAllowedAttachmentMimeTypes(): ?array
    {
        return ['image/png', 'image/jpeg'];
    }
}
```

## Schema and Schema Objects