1. Add Streamstraight to your server
Instead of returning the LLM response stream to your client via an HTTP Server-Sent Events response, we'll forward it to Streamstraight. First, install our server SDK for your language. If your SDK isn't listed here, drop us a line.

Then add a .env entry for the API key you generate in the Streamstraight dashboard. In production, make sure to add this API key to your server secrets.
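As a rough sketch, server-side setup might look like the following in TypeScript. The package name @streamstraight/server, the StreamstraightServer constructor, and the STREAMSTRAIGHT_API_KEY variable are assumptions for illustration; use the exact names from the SDK you installed and the key from your dashboard.

```typescript
// Hypothetical setup sketch -- the package and constructor names are
// assumptions, not the documented Streamstraight API.
import { StreamstraightServer } from "@streamstraight/server";

// Read the API key you generated in the Streamstraight dashboard from .env
// (e.g. STREAMSTRAIGHT_API_KEY=ss_live_...). In production, load it from
// your server's secret store rather than a checked-in file.
const streamstraight = new StreamstraightServer({
  apiKey: process.env.STREAMSTRAIGHT_API_KEY!,
});
```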
Pass your stream to Streamstraight
Next, ensure you have a unique streamId for the AI response. We recommend using a combination of the AI chat's chatId and messageId to create a unique identifier.
We’ll then pass the stream to Streamstraight’s SDK.
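Continuing from the setup sketch above, here is a minimal sketch of what this could look like, assuming an OpenAI-style streaming completion. The sendStream method name is a placeholder for whatever publish call the Streamstraight server SDK actually exposes.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Example identifiers -- in your app these come from the chat session.
const chatId = "chat_123";
const messageId = "msg_456";

// Derive a unique stream ID from the chat and the message being generated.
const streamId = `${chatId}:${messageId}`;

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

// Forward each chunk to Streamstraight instead of writing an SSE response.
// `sendStream` is a placeholder name; use the method your SDK provides.
await streamstraight.sendStream(
  streamId,
  (async function* () {
    for await (const chunk of completion) {
      yield chunk.choices[0]?.delta?.content ?? "";
    }
  })()
);
```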
Depending on your architecture, the LLM stream may block your HTTP response. If you would like to continue streaming without blocking your HTTP response, follow the instructions in the advanced usage guide.
Add a route for fetching your JWT token
Streamstraight requires your frontend client to connect securely via a short-lived JWT token. Add a route to your server that fetches this token using our SDK.
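For example, with Express the route might look roughly like this. The createClientToken call and its options are assumptions for illustration; use whatever token-minting method the server SDK documents.

```typescript
import express from "express";

const app = express();

// Returns a short-lived JWT the browser can use to connect to Streamstraight.
// `createClientToken` and its options are placeholders, not the documented API.
app.get("/api/streamstraight-token", async (req, res) => {
  const token = await streamstraight.createClientToken({
    // Scope the token to the stream the client may read, if the SDK supports it.
    streamId: req.query.streamId as string,
  });
  res.json({ token });
});

app.listen(3000);
```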
2. Add Streamstraight to your client
First, install the Streamstraight client SDK in your app. Then connect using the streamId you generated on the server and a function that fetches a fresh JWT from your backend.
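A rough sketch of the client side, assuming a hypothetical @streamstraight/client package with a connect function; take the actual names and options from the client SDK docs.

```typescript
// Hypothetical client usage -- the package name, `connect`, and its options
// are assumptions for illustration, not the documented Streamstraight API.
import { connect } from "@streamstraight/client";

// Must match the streamId your server passed to Streamstraight.
const chatId = "chat_123";
const messageId = "msg_456";
const streamId = `${chatId}:${messageId}`;

const stream = await connect({
  streamId,
  // Called whenever the client needs a fresh short-lived JWT from your backend.
  getToken: async () => {
    const res = await fetch(`/api/streamstraight-token?streamId=${streamId}`);
    const { token } = await res.json();
    return token;
  },
});

// Append each chunk to the UI as it arrives; replace with your own rendering.
for await (const chunk of stream) {
  console.log(chunk);
}
```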
If you are using the useChat abstraction from Vercel's AI SDK, we've built a way to integrate directly into that abstraction! Please refer to these instructions instead.