The Pipecat React SDK provides hooks for accessing client functionality, managing media devices, and handling events.
usePipecatClient
Provides access to the PipecatClient instance originally passed to PipecatClientProvider.
import { usePipecatClient } from "@pipecat-ai/client-react";
function MyComponent() {
  const pcClient = usePipecatClient();

  const handleConnect = async () => {
    await pcClient?.startBotAndConnect({
      endpoint: '/api/start',
      requestData: {
        // Any custom data your /start endpoint requires
      }
    });
  };

  return <button onClick={handleConnect}>Connect</button>;
}
useRTVIClientEvent
Subscribes to an RTVI client event. Wrap the handler in useCallback so it keeps a stable reference and the hook does not re-subscribe on every render.
import { useCallback } from "react";
import { RTVIEvent, TransportState } from "@pipecat-ai/client-js";
import { useRTVIClientEvent } from "@pipecat-ai/client-react";
function EventListener() {
  useRTVIClientEvent(
    RTVIEvent.TransportStateChanged,
    useCallback((transportState: TransportState) => {
      console.log("Transport state changed to", transportState);
    }, [])
  );
}
Arguments
event
RTVIEvent
required
The event to subscribe to.
handler
(...eventArgs) => void
required
Callback invoked when the event fires. Memoize it (e.g. with useCallback) to avoid re-subscribing on every render.
usePipecatClientMediaDevices
Manage and list available media devices.
import { usePipecatClientMediaDevices } from "@pipecat-ai/client-react";
function DeviceSelector() {
  const {
    availableCams,
    availableMics,
    selectedCam,
    selectedMic,
    updateCam,
    updateMic,
  } = usePipecatClientMediaDevices();

  return (
    <>
      <select
        name="cam"
        onChange={(ev) => updateCam(ev.target.value)}
        value={selectedCam?.deviceId}
      >
        {availableCams.map((cam) => (
          <option key={cam.deviceId} value={cam.deviceId}>
            {cam.label}
          </option>
        ))}
      </select>
      <select
        name="mic"
        onChange={(ev) => updateMic(ev.target.value)}
        value={selectedMic?.deviceId}
      >
        {availableMics.map((mic) => (
          <option key={mic.deviceId} value={mic.deviceId}>
            {mic.label}
          </option>
        ))}
      </select>
    </>
  );
}
usePipecatClientMediaTrack
Access audio and video tracks for the local participant or the bot.
import { usePipecatClientMediaTrack } from "@pipecat-ai/client-react";
function MyTracks() {
  const localAudioTrack = usePipecatClientMediaTrack("audio", "local");
  const botAudioTrack = usePipecatClientMediaTrack("audio", "bot");
}
Arguments
trackType
'audio' | 'video'
required
The type of media track to access.
participantType
'local' | 'bot'
required
Whose track to access: the local participant's or the bot's.
usePipecatClientTransportState
Returns the current transport state.
import { usePipecatClientTransportState } from "@pipecat-ai/client-react";
function ConnectionStatus() {
  const transportState = usePipecatClientTransportState();
}
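Transport state is a plain string, so a small helper can map it to user-facing status text. This is a minimal sketch; the state names below are assumptions based on common Pipecat transport states ("disconnected", "connecting", "connected", "ready", "error") — check the TransportState type in @pipecat-ai/client-js for the authoritative list.

```typescript
// Sketch: map assumed transport-state strings to display labels.
// Unknown states fall through to "Disconnected".
type KnownState = "disconnected" | "connecting" | "connected" | "ready" | "error";

function statusLabel(state: string): string {
  switch (state as KnownState) {
    case "connecting":
      return "Connecting…";
    case "connected":
      return "Connected, waiting for bot…";
    case "ready":
      return "Ready";
    case "error":
      return "Connection error";
    default:
      return "Disconnected";
  }
}

console.log(statusLabel("ready")); // "Ready"
```

You could render `statusLabel(transportState)` inside ConnectionStatus to give users a readable connection indicator.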
usePipecatClientCamControl
Controls the local participant’s camera state.
import { usePipecatClientCamControl } from "@pipecat-ai/client-react";
function CamToggle() {
  const { enableCam, isCamEnabled } = usePipecatClientCamControl();

  return (
    <button onClick={() => enableCam(!isCamEnabled)}>
      {isCamEnabled ? "Disable Camera" : "Enable Camera"}
    </button>
  );
}
usePipecatClientMicControl
Controls the local participant’s microphone state.
import { usePipecatClientMicControl } from "@pipecat-ai/client-react";
function MicToggle() {
  const { enableMic, isMicEnabled } = usePipecatClientMicControl();

  return (
    <button onClick={() => enableMic(!isMicEnabled)}>
      {isMicEnabled ? "Disable Microphone" : "Enable Microphone"}
    </button>
  );
}
usePipecatConversation
The primary hook for accessing the conversation message stream. Returns the current list of messages (ordered for display) and a function to inject messages programmatically.
Each assistant message’s text parts are split into spoken and unspoken segments based on real-time speech progress, so you can style them differently (e.g. dim unspoken text).
import { usePipecatConversation } from "@pipecat-ai/client-react";
import type { ConversationMessage } from "@pipecat-ai/client-react";
function Messages() {
  const { messages } = usePipecatConversation({
    onMessageCreated(message: ConversationMessage) {
      console.log("New message:", message);
    },
    onMessageUpdated(message: ConversationMessage) {
      if (message.final) {
        console.log("Message finalized:", message);
      }
    },
  });

  return (
    <ul>
      {messages.map((msg, i) => (
        <li key={`${msg.createdAt}-${i}`}>
          <strong>{msg.role}:</strong>{" "}
          {msg.parts?.map((part, j) => {
            if (typeof part.text === "string") {
              return <span key={j}>{part.text}</span>;
            }
            // BotOutputText: { spoken, unspoken }
            return (
              <span key={j}>
                <span>{part.text.spoken}</span>
                <span style={{ opacity: 0.5 }}>{part.text.unspoken}</span>
              </span>
            );
          })}
        </li>
      ))}
    </ul>
  );
}
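The spoken/unspoken split can be illustrated with a self-contained sketch. In the real hook the split point is driven by real-time speech progress events from the bot; the character offset here is purely hypothetical, to show the { spoken, unspoken } shape your renderer receives.

```typescript
// Sketch: split a text part into spoken/unspoken halves at a character
// offset, mimicking the { spoken, unspoken } shape the hook produces.
// The real split point comes from the bot's live speech progress.
function splitAtProgress(text: string, spokenChars: number) {
  return {
    spoken: text.slice(0, spokenChars),
    unspoken: text.slice(spokenChars),
  };
}

const part = splitAtProgress("Hello there, how can I help?", 12);
console.log(part.spoken);   // "Hello there,"
console.log(part.unspoken); // " how can I help?"
```

As the bot speaks, the split point advances, so the dimmed unspoken span shrinks while the spoken span grows.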
Options
onMessageCreated
(message: ConversationMessage) => void
Called once when a new message first enters the conversation. The message may or may not be complete at this point — check message.final.
onMessageUpdated
(message: ConversationMessage) => void
Called whenever an existing message’s content changes (e.g. streaming text appended, function call status changed, message finalized). Check message.final to detect finalization.
aggregationMetadata
Record<string, AggregationMetadata>
Metadata for aggregation types to control rendering and speech progress behavior. Used to determine which aggregations should be excluded from position-based speech splitting.
Returns
messages
ConversationMessage[]
The current list of conversation messages, ordered for display. Assistant messages have their text parts split into { spoken, unspoken } based on real-time speech progress.
injectMessage
(message: { role: string; parts: ConversationMessagePart[] }) => void
Programmatically inject a message into the conversation (e.g. a system prompt or user-typed input).
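A message passed to injectMessage is a plain object with a role and an array of parts. The sketch below builds one; the { type: "text", text } part shape follows the useConversationContext example in these docs, and the makeUserMessage helper is hypothetical.

```typescript
// Sketch: build a message object matching injectMessage's signature.
// The { type: "text", text } part shape is taken from the docs' examples.
type TextPart = { type: "text"; text: string };

function makeUserMessage(text: string): { role: string; parts: TextPart[] } {
  return { role: "user", parts: [{ type: "text", text }] };
}

const msg = makeUserMessage("What's the weather like?");
console.log(msg.parts[0].text); // "What's the weather like?"
```

You would pass the result to the hook's injectMessage function, e.g. `injectMessage(makeUserMessage(input))`.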
useConversationContext
Lower-level hook that provides direct access to the conversation context. Use this when you only need injectMessage without subscribing to the message stream, or to check whether the connected bot supports BotOutput events.
import { useConversationContext } from "@pipecat-ai/client-react";
function TextInput() {
  const { injectMessage, botOutputSupported } = useConversationContext();

  const send = (text: string) => {
    injectMessage({
      role: "user",
      parts: [{ type: "text", text }],
    });
  };

  return (
    <input onKeyDown={(e) => e.key === "Enter" && send(e.currentTarget.value)} />
  );
}
Returns
injectMessage
(message: { role: string; parts: ConversationMessagePart[] }) => void
Programmatically inject a message into the conversation.
botOutputSupported
boolean | null
Whether the connected bot supports BotOutput events (RTVI 1.1.0+). null means detection hasn’t completed yet.
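Because botOutputSupported is tri-state, UI code should treat null as "still detecting" rather than "unsupported". A minimal sketch (the label strings are illustrative):

```typescript
// Sketch: tri-state handling for botOutputSupported. null means capability
// detection is still pending, so avoid showing a definitive "unsupported"
// state until it resolves to true or false.
function botOutputStatus(supported: boolean | null): string {
  if (supported === null) return "Checking bot capabilities…";
  return supported ? "Live word-level output available" : "Plain transcripts only";
}

console.log(botOutputStatus(null)); // "Checking bot capabilities…"
```

A component could use this to gate features that depend on word-level BotOutput events, falling back gracefully for older bots.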