Merged
4 changes: 2 additions & 2 deletions client/react/components.mdx
@@ -7,11 +7,11 @@ The Pipecat React SDK provides several components for handling audio, video, and

## PipecatClientProvider

-The root component for providing Pipecat client context to your application.
+The root component for providing Pipecat client context to your application. It also includes built-in conversation state management, so any descendant component can use the [`usePipecatConversation`](/client/react/hooks#usepipecatconversation) hook to access messages without adding a separate provider.

```jsx
<PipecatClientProvider client={pcClient}>
-  {/* Child components */}
+  {/* Child components can use usePipecatConversation, usePipecatClient, etc. */}
</PipecatClientProvider>
```

97 changes: 97 additions & 0 deletions client/react/hooks.mdx
@@ -157,3 +157,100 @@ function MicToggle() {
  );
}
```

## usePipecatConversation

The primary hook for accessing the conversation message stream. Returns the current list of messages (ordered for display) and a function to inject messages programmatically.

Each assistant message's text parts are split into `spoken` and `unspoken` segments based on real-time speech progress, so you can style them differently (e.g. dim unspoken text).

```tsx
import { usePipecatConversation } from "@pipecat-ai/client-react";
import type { ConversationMessage } from "@pipecat-ai/client-react";

function Messages() {
  const { messages } = usePipecatConversation({
    onMessageCreated(message: ConversationMessage) {
      console.log("New message:", message);
    },
    onMessageUpdated(message: ConversationMessage) {
      if (message.final) {
        console.log("Message finalized:", message);
      }
    },
  });

  return (
    <ul>
      {messages.map((msg, i) => (
        <li key={`${msg.createdAt}-${i}`}>
          <strong>{msg.role}:</strong>{" "}
          {msg.parts?.map((part, j) => {
            if (typeof part.text === "string") {
              return <span key={j}>{part.text}</span>;
            }
            // BotOutputText: { spoken, unspoken }
            return (
              <span key={j}>
                <span>{part.text.spoken}</span>
                <span style={{ opacity: 0.5 }}>{part.text.unspoken}</span>
              </span>
            );
          })}
        </li>
      ))}
    </ul>
  );
}
```
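Conceptually, the spoken/unspoken split is position-based: the SDK tracks how far real-time speech has progressed through a text part and divides the string at that offset. A simplified sketch of the idea in plain TypeScript (`splitAtSpeechOffset` is illustrative, not the SDK's actual implementation, which also accounts for aggregation metadata):

```typescript
// Simplified, position-based split of a text part into spoken/unspoken
// segments. Illustrative only; not the SDK's internal logic.
function splitAtSpeechOffset(
  text: string,
  spokenChars: number
): { spoken: string; unspoken: string } {
  // Clamp the offset so out-of-range progress values stay safe.
  const at = Math.max(0, Math.min(spokenChars, text.length));
  return { spoken: text.slice(0, at), unspoken: text.slice(at) };
}

const { spoken, unspoken } = splitAtSpeechOffset("Hello there, world!", 11);
// spoken === "Hello there", unspoken === ", world!"
```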

**Options**

<ParamField path="onMessageCreated" type="(message: ConversationMessage) => void">
Called once when a new message first enters the conversation. The message may or may not be complete at this point — check `message.final`.
</ParamField>
<ParamField path="onMessageUpdated" type="(message: ConversationMessage) => void">
Called whenever an existing message's content changes (e.g. streaming text appended, function call status changed, message finalized). Check `message.final` to detect finalization.
</ParamField>
<ParamField path="aggregationMetadata" type="Record<string, AggregationMetadata>">
Metadata for aggregation types to control rendering and speech progress behavior. Used to determine which aggregations should be excluded from position-based speech splitting.
</ParamField>

**Returns**

<ParamField path="messages" type="ConversationMessage[]">
The current list of conversation messages, ordered for display. Assistant messages have their text parts split into `{ spoken, unspoken }` based on real-time speech progress.
</ParamField>
<ParamField path="injectMessage" type="(message: { role: string; parts: ConversationMessagePart[] }) => void">
Programmatically inject a message into the conversation (e.g. a system prompt or user-typed input).
</ParamField>

## useConversationContext

Lower-level hook that provides direct access to the conversation context. Use this when you only need `injectMessage` without subscribing to the message stream, or to check whether the connected bot supports BotOutput events.

```tsx
import { useConversationContext } from "@pipecat-ai/client-react";

function TextInput() {
  const { injectMessage, botOutputSupported } = useConversationContext();

  const send = (text: string) => {
    injectMessage({
      role: "user",
      parts: [{ type: "text", text }],
    });
  };

  return (
    <input onKeyDown={(e) => e.key === "Enter" && send(e.currentTarget.value)} />
  );
}
```

**Returns**

<ParamField path="injectMessage" type="(message: { role: string; parts: ConversationMessagePart[] }) => void">
Programmatically inject a message into the conversation.
</ParamField>
<ParamField path="botOutputSupported" type="boolean | null">
Whether the connected bot supports BotOutput events (RTVI 1.1.0+). `null` means detection hasn't completed yet.
</ParamField>
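Because the flag is tri-state, UI that depends on it should treat `null` as "still detecting" rather than "unsupported". A small sketch of that mapping (the helper and its labels are illustrative, not SDK exports):

```typescript
// Illustrative: map the tri-state botOutputSupported flag to a status label.
function botOutputStatus(supported: boolean | null): string {
  if (supported === null) {
    return "Detecting bot capabilities...";
  }
  return supported
    ? "Live transcript available"
    : "Bot does not emit BotOutput events (pre-RTVI 1.1.0)";
}
```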
23 changes: 21 additions & 2 deletions client/react/introduction.mdx
@@ -6,6 +6,7 @@ description: "Build React applications with Pipecat's React client library"
The Pipecat React SDK provides React-specific components and hooks for building voice and multimodal AI applications. It wraps the core JavaScript SDK functionality in an idiomatic React interface that handles:

- React context for client state management
+- Built-in conversation state with real-time speech progress
- Components for audio and video rendering
- Hooks for accessing client functionality
- Media device management
@@ -31,6 +32,7 @@ import {
  PipecatClientProvider,
  PipecatClientAudio,
  usePipecatClient,
+  usePipecatConversation,
} from "@pipecat-ai/client-react";
import { DailyTransport } from "@pipecat-ai/daily-transport";

@@ -50,9 +52,10 @@ function App() {
);
}

-// Component using the client
+// Component using the client and conversation hooks
function VoiceBot() {
  const client = usePipecatClient();
+  const { messages } = usePipecatConversation();

  const handleClick = async () => {
    await client.startBotAndConnect({
@@ -61,7 +64,23 @@ function VoiceBot() {
  };

  return (
-    <button onClick={handleClick}>Start Conversation</button>;
+    <div>
+      <button onClick={handleClick}>Start Conversation</button>
+      <ul>
+        {messages.map((msg, i) => (
+          <li key={`${msg.createdAt}-${i}`}>
+            <strong>{msg.role}:</strong>{" "}
+            {msg.parts?.map((part, j) => (
+              <span key={j}>
+                {typeof part.text === "string"
+                  ? part.text
+                  : `${part.text.spoken}${part.text.unspoken}`}
+              </span>
+            ))}
+          </li>
+        ))}
+      </ul>
+    </div>
  );
}
```