How to build a streaming agent using Server-Sent Events (SSE)
Server-Sent Events (SSE) have become a popular protocol among language model providers to enable streaming tokens as they're generated.
Midio offers first-class support for this protocol, making it extremely easy to integrate and use language models this way. Additionally, you can effortlessly make your Midio endpoints respond to requests using SSE. In this tutorial, I'll demonstrate how quickly you can set up a user-friendly interface in Lovable that streams tokens from a Midio service.
Unlike the non-streamed API, we need to handle each incoming token individually. To do this, we'll use the Next Event function, which offers multiple triggers for different event types:
got token: Triggered for each received token.
tool call started: Triggered when the model initiates a tool call, before parameters arrive.
tools called: Triggered after the tool calls are executed.
done: Triggered once all tokens have been received.
got error: Triggered if an error occurs.
For our purposes, we'll focus primarily on handling tokens (got token) and the completion event (done).
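The trigger-per-event-type pattern maps naturally onto a dispatch over a tagged union. Here's a rough TypeScript sketch of the same logic; the event shape and names here are illustrative assumptions, not Midio's actual API:

```typescript
// Hypothetical event shape mirroring Next Event's triggers.
type StreamEvent =
  | { kind: "got_token"; token: string }
  | { kind: "done" }
  | { kind: "got_error"; message: string };

// Accumulate tokens until the stream completes or errors.
function handleEvents(events: StreamEvent[]): string {
  let text = "";
  for (const ev of events) {
    switch (ev.kind) {
      case "got_token":
        text += ev.token; // got token: append each received token
        break;
      case "done":
        return text; // done: all tokens have arrived
      case "got_error":
        throw new Error(ev.message); // got error: surface the failure
    }
  }
  return text;
}
```

In Midio the same branching happens visually through the Next Event node's triggers rather than in code.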
Let's quickly verify our setup by clicking the play button on the Chat Complete Streamed function. You should see tokens appearing sequentially in the log panel.
With our assistant operational, the next step is to expose it via an API accessible by our frontend.
Add an Endpoint node with a path template like chat?prompt. This allows the frontend to send a user's prompt through a URL query parameter named prompt.
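Since the prompt travels as a query parameter, the frontend only needs to URL-encode it when building the request URL. A minimal sketch (the base URL is a placeholder for your deployed Midio service):

```typescript
// Build the streaming-chat URL; the prompt rides in the `prompt` query parameter.
function chatUrl(baseUrl: string, prompt: string): string {
  return `${baseUrl}/chat?prompt=${encodeURIComponent(prompt)}`;
}
```

Encoding matters here: spaces, question marks, and other reserved characters in the user's prompt would otherwise corrupt the query string.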
To set up the SSE connection, we'll use the Start SSE Response node. To keep the example simple, we'll allow all origins by including permissive CORS headers. In a production scenario, ensure you only allow specific origins.
This step enables us to send events directly to the frontend.
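For reference, an SSE response with permissive CORS typically carries headers along these lines. This is an illustrative sketch of what goes over the wire; the Start SSE Response node handles this for you:

```typescript
// Headers an SSE endpoint typically sends. The wildcard origin is the
// permissive CORS setting mentioned above — restrict it in production.
const sseHeaders: Record<string, string> = {
  "Content-Type": "text/event-stream", // required for SSE
  "Cache-Control": "no-cache",         // prevent buffering/caching of the stream
  "Access-Control-Allow-Origin": "*",  // allow all origins (demo only)
};
```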
The frontend needs a clear way to differentiate between the following types of messages:
New token arrivals
Errors
Stream completion
We'll define this through structured JSON messages:
New token: { "token": "<the token>" }
Error: { "error": "<the error>" }
Completion: { "lifecycle": "done" }
After sending error or completion messages, it's advisable to explicitly close the SSE connection.
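To make this contract concrete, here's a sketch of the three message shapes and how a client might act on one parsed payload. The function name and return convention are illustrative, not part of Midio:

```typescript
// The three structured messages the endpoint emits.
type ChatMessage =
  | { token: string }
  | { error: string }
  | { lifecycle: "done" };

// Decide what to do with one parsed SSE data payload.
// Returns the token to append, or null when the stream is complete
// and the connection should be closed.
function dispatch(raw: string): string | null {
  const msg = JSON.parse(raw) as ChatMessage;
  if ("token" in msg) return msg.token;           // new token arrival
  if ("error" in msg) throw new Error(msg.error); // error: surface it, then close
  return null;                                    // lifecycle "done": close
}
```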
We'll quickly test our newly created endpoint directly from Midio by using Fetch Streamed together with the Parse SSE Stream function, which makes consuming SSE APIs easy.
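Under the hood, SSE is a line-oriented text protocol: each event consists of one or more `data:` lines followed by a blank line. A minimal parser for the common case, sketching the kind of work a function like Parse SSE Stream does (this is not Midio's implementation):

```typescript
// Extract the data payloads from a chunk of SSE wire text.
// Events are separated by a blank line; each data line starts with "data:".
function parseSse(chunk: string): string[] {
  const payloads: string[] = [];
  for (const block of chunk.split("\n\n")) {
    const dataLines = block
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice("data:".length).trimStart());
    if (dataLines.length > 0) payloads.push(dataLines.join("\n"));
  }
  return payloads;
}
```

A real implementation also has to handle events split across network chunks, `event:` and `id:` fields, and comment lines, which is exactly why a ready-made parser is convenient.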
Now, let's generate a frontend using Lovable. Paste the provided template, which reliably generates a functional solution. Remember to replace the placeholder BASE_URL with your actual URL.
Once Lovable is done generating your frontend, go ahead and try it out. You should see something like the demo at the beginning of this article.
That was straightforward! You've now set up a basic streaming chat agent using Midio and Lovable, streaming tokens directly to the user.
However, this demo is just the beginning. Here are several ideas for enhancements:
We'll begin by creating a simple chatbot using the open-ai package (see the documentation on using the package manager), which supports the streaming version of the OpenAI chat API (Chat Complete Streamed). We then add a straightforward system message and a test user message, along with our API key.
To continuously handle events, we'll loop back to the Next Event function after each token is processed. We'll illustrate this by adding a log node that outputs tokens to the log panel.
We'll achieve this using the Send SSE Event node connected directly to the various triggers of the Next Event node, constructing the appropriate JSON object for each event type.
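On the wire, each Send SSE Event call boils down to serializing one of our JSON objects into an SSE data frame. A sketch of that serialization (the function name is illustrative):

```typescript
// Serialize one structured message as a single SSE data frame:
// a "data:" line carrying the JSON, terminated by a blank line.
function sseFrame(payload: object): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}
```

The trailing blank line is what marks the end of an event, so the client knows the frame is complete.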
State Management: Currently, no data is stored between requests. Explore Midio's modules for memory-based state management.
Tool Integration: Learn how to integrate tools with the streaming API.
Defined Agent Roles: Experiment with more sophisticated agent roles, drawing inspiration from existing prompt libraries.