Vercel’s AI SDK v5 is a popular framework designed to help teams build with AI. It is an abstraction that allows users to switch between different LLM providers and offers a pluggable way to define tools, agents, and transports.
This guide is only for those using AI SDK’s useChat abstraction. If you are not using useChat, you do not need to follow this guide.

How Streamstraight integrates with AI SDK

Streamstraight extends AI SDK’s native resume stream functionality by providing a custom transport. By using Streamstraight, you will be able to:
  • Automatically resume an in-flight stream created through useChat if the client reconnects to your server after a disconnect
  • Process your LLM generation stream on the server in addition to sending it to the client
With Streamstraight, you will be able to retrieve the entire contents of the stream even after the stream has ended. AI SDK’s resumable-stream implementation returns no data if the client reconnects after the stream has ended.

Implementation

1. Enable stream resumption on the client

We reuse AI SDK’s resume option in useChat to enable stream resumption. If useChat mounts while a stream is active for the chat, Streamstraight replays that stream from the beginning and tails it. If it mounts after the stream has ended, or no stream exists, nothing happens.
React
import { useChat } from "@ai-sdk/react";
import { StreamstraightChatTransport } from "@streamstraight/client";
import type { UIMessage } from "ai";
import { useState } from "react";

export function ChatComponent({
  chatData,
  fetchToken,
}: {
  chatData: { id: string; messages: UIMessage[]; currentlyStreaming: boolean };
  fetchToken: () => Promise<string>;
}) {
  const [textInput, setTextInput] = useState<string>("");
  const { messages, status, error, sendMessage, resumeStream } = useChat({
    id: chatData.id,
    messages: chatData.messages,
    transport: new StreamstraightChatTransport({
      fetchToken,
      api: "/api/chat", // Or wherever your server route handler is located
    }),
    // Set this to true. When useChat is mounted, it will try to resume
    // the stream if one is in-progress for this chatId.
    resume: true,
  });

  return (
    <div>
      {/* Your chat UI */}
      <input value={textInput} onChange={(e) => setTextInput(e.target.value)} />
      <button onClick={() => sendMessage({ text: textInput })}>Send</button>
    </div>
  );
}
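
The fetchToken prop is expected to return a short-lived client token minted by your backend with your Streamstraight API key. Here is a minimal sketch, assuming a hypothetical /api/streamstraight-token route that you implement on your own server:

async function fetchToken(): Promise<string> {
  // /api/streamstraight-token is a hypothetical route you implement yourself;
  // it should mint a short-lived client token server-side using your
  // Streamstraight API key.
  const res = await fetch("/api/streamstraight-token", { method: "POST" });
  if (!res.ok) {
    throw new Error(`Failed to fetch Streamstraight token: ${res.status}`);
  }
  const { token } = (await res.json()) as { token: string };
  return token;
}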

2. Pass the server stream to Streamstraight and track when a stream is active

useChat calls a POST server route handler, which calls the streamText function. By default, this handler is located at /api/chat and returns a message stream response.
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
// The package path for streamstraightServer is assumed here; import it from
// the Streamstraight server SDK you have installed.
import { streamstraightServer } from "@streamstraight/server";
import { readChat, saveChat } from "@util/chat-store";
import { convertToModelMessages, generateId, streamText, type UIMessage } from "ai";

// Defined in https://github.com/vercel/ai/blob/61a16427ad1838d274fe2718ace3e3afc5f83d58/packages/ai/src/ui/http-chat-transport.ts#L183
interface ApiChatData {
  /** Unique identifier for the chat session */
  id: string;
  /** Array of UI messages representing the conversation history */
  messages: Array<UIMessage>;
  /** ID of the message to regenerate, or undefined for new messages */
  messageId?: string;
  /** The type of message submission - either new message or regeneration */
  trigger: "submit-message" | "regenerate-message";
  // Only present when you update prepareSendMessagesRequest
  message?: UIMessage;
}

export async function POST(request: Request) {
  const requestData = (await request.json()) as ApiChatData;
  if (!requestData.messages) {
    return new Response(JSON.stringify({ error: "Invalid request payload" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }

  const chat = await readChat(requestData.id);
  let messages = chat.messages;

  // requestData.message is only present if you configure prepareSendMessagesRequest
  if (requestData.message) {
    messages = [...messages, requestData.message];
  }

  // Mark a stream as active for this chat and save the user message
  saveChat({ id: requestData.id, messages, currentlyStreaming: true });

  const result = streamText({
    model: openai("gpt-4.1-mini"),
    messages: convertToModelMessages(requestData.messages),
  });

  return result.toUIMessageStreamResponse({
    generateMessageId: generateId,
    onFinish: ({ isAborted, isContinuation, messages, responseMessage }) => {
      // Persist the final messages and mark the stream as no longer active
      saveChat({ id: requestData.id, messages, currentlyStreaming: false });
    },

    // This method allows you to consume the SSE stream however you want, in addition
    // to returning it from this POST endpoint to the client. Here we consume the
    // SSE stream by sending it to Streamstraight.
    async consumeSseStream({ stream }) {
      const server = await streamstraightServer(
        // Environment variable names below are illustrative; supply your own
        // Streamstraight API key and base URL.
        {
          apiKey: process.env.STREAMSTRAIGHT_API_KEY!,
          baseUrl: process.env.STREAMSTRAIGHT_BASE_URL,
        },
        {
          // Use chatId as the unique streamId. This means that Streamstraight will keep
          // at most a single stream for each chat. Subsequent streams will overwrite
          // previous ones in the same chat.
          // We must do it this way because AI SDK does not generate a unique messageId
          // until after the message has been fully generated.
          streamId: requestData.id,
          // This must be true for AI SDK, because we are unique on chatId
          overwriteExistingStream: true,
        },
      );

      await server.stream(stream);
    },
  });
}
Note that you can transform the LLM generation into any custom data type, and stream it with Streamstraight. Reach out to support@streamstraight.com if you need help!
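As a rough illustration, assuming server.stream accepts any ReadableStream<string> (which may differ in your setup), you could pipe the SSE stream through a TransformStream inside consumeSseStream before handing it to Streamstraight:

function withChunkMetadata(stream: ReadableStream<string>): ReadableStream<string> {
  // Illustrative only: wrap each SSE chunk in a JSON envelope with your own
  // metadata before it is forwarded to Streamstraight.
  return stream.pipeThrough(
    new TransformStream<string, string>({
      transform(chunk, controller) {
        controller.enqueue(JSON.stringify({ receivedAt: Date.now(), chunk }));
      },
    }),
  );
}

// Inside consumeSseStream, pass the transformed stream instead:
//   await server.stream(withChunkMetadata(stream));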

How it works

AI SDK’s stream resumption feature splits the server LLM stream in two: one copy is returned to the client through Server-Sent Events, and the other is passed to Streamstraight. Normally, useChat relies on Server-Sent Events to receive the stream on the client. When the client reconnects after an interruption and resume: true is set, useChat automatically tries to reconnect to an existing stream. If a message stream is currently in progress, Streamstraight provides that stream to the client from the beginning. Note that AI SDK’s implementation does not allow fetching a stream that has already completed: your server must detect when the LLM generation has finished and let the client know on mount that there is no active stream.
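One way to do this, reusing the readChat helper and the currentlyStreaming flag written by the POST handler above (loadChatData is an illustrative helper, and it assumes readChat returns the currentlyStreaming flag that saveChat persisted):

import { readChat } from "@util/chat-store";

// Illustrative helper: load persisted chat state when the page mounts so the
// client knows whether a stream is still active. currentlyStreaming is set to
// true by the POST handler and back to false in onFinish.
export async function loadChatData(chatId: string) {
  const chat = await readChat(chatId);
  return {
    id: chatId,
    messages: chat.messages,
    currentlyStreaming: chat.currentlyStreaming,
  };
}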

Limitations with AI SDK

While AI SDK is flexible, there are a few limitations you should be aware of if you decide to rely on AI SDK as your LLM abstraction:

Without Streamstraight, streams resume twice in development mode

Without Streamstraight, useChat({ resume: true }) will resume streams twice in development mode. This is because useChat uses a useEffect to reconnect to the stream on mount when resume: true, but does not properly clean up the side effect. As a result, when run in React Strict mode (on by default in development mode), the stream is resumed twice.
Unlike most frameworks, Next.js only turns on React Strict Mode by default starting in 13.5.1. If you’re not seeing this issue without Streamstraight’s transport, try setting reactStrictMode to true in next.config.ts.
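For reference, enabling it looks like this:

// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  reactStrictMode: true,
};

export default nextConfig;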
React’s useEffect requires that all side effects be properly cleaned up on unmount. To help spot missing cleanups, React Strict Mode mounts each component twice in development. Because useChat does not clean up the side effect, the stream is resumed twice, but only in development. Streamstraight fixes this by tracking the active stream per chat ID: when Strict Mode replays the effect, our transport closes the first socket before opening a second one, so only a single stream remains in-flight.
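For context, the general pattern looks something like the sketch below (illustrative only, not the transport’s actual internals): an effect that opens a connection and closes it in its cleanup, so Strict Mode’s replay leaves only one connection open.

import { useEffect } from "react";

// Illustrative: an effect with proper cleanup. When Strict Mode replays the
// effect in development, the cleanup closes the first connection before the
// second one opens, so only one stream stays in flight.
function useStreamConnection(
  chatId: string,
  connect: (id: string) => { close: () => void },
) {
  useEffect(() => {
    const connection = connect(chatId);
    return () => connection.close();
  }, [chatId, connect]);
}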

Unique streams are not uniquely identified

You may have noticed that this implementation uses the chat ID as the unique stream ID. This is because AI SDK only generates a unique ID for each LLM generation after the generation has fully finished, while Streamstraight requires a unique stream ID before the stream begins. There is little downside to this approach when used with useChat, but it does mean that subsequent streams overwrite earlier ones for the same chat, and a stream cannot be replayed through useChat after it has ended.
This issue is not caused by Streamstraight, but by AI SDK’s current implementation (v5).
We do have a few customers using our AI SDK integration with unique stream IDs. If you’re interested in joining the beta, please let us know!

Streams can’t originate from async jobs

Currently, StreamstraightChatTransport still relies on useChat’s default Server-Sent Events setup to receive the initial response stream. This means that streams cannot originate from async jobs or background processes, because the initial response requires a direct client-server connection. We have a few customers using a newer version of our AI SDK integration that adds support for async job streaming. If you’re interested in joining the beta, please let us know!