Build a streaming agent using Server-Sent Events (SSE)
Server-Sent Events (SSE) have become a popular protocol among language model providers to enable streaming tokens as they're generated.
Midio offers first-class support for this protocol, making it extremely easy to integrate and use language models this way. Additionally, you can effortlessly make your Midio endpoints respond to requests using SSE. In this tutorial, I'll demonstrate how quickly you can set up a user-friendly interface in Lovable that streams tokens from a Midio service.
Initial Setup
We'll begin by creating a simple chatbot using the open-ai package (read about using the package manager here), which supports the streaming version of the OpenAI chat API (Chat Complete Streamed). We then add a straightforward system message and a test user message, along with our API key.

Unlike the non-streamed API, we need to handle each incoming token individually. To do this, we'll use the Next Event function, which offers multiple triggers for different event types:
got token: Triggered for each received token.
tool call started: Triggered when the model initiates a tool call, before parameters arrive.
tools called: Triggered after the tool calls are executed.
done: Triggered once all tokens have been received.
got error: Triggered if an error occurs.
For our purposes, we'll focus primarily on handling tokens (got token) and the completion event (done).
To continuously handle events, we'll loop back to the Next Event function after each token is processed. We'll illustrate this by adding a log node that outputs tokens to the log panel.
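Midio wires this loop up visually, but the control flow can be sketched in ordinary Python. Everything here is illustrative: the event names mirror the Next Event triggers, and handle_stream is a made-up helper, not part of any Midio or OpenAI API.

```python
def handle_stream(events):
    """Consume a stream of (event_type, payload) tuples, mimicking the
    loop back to Next Event after each token is processed."""
    tokens = []
    for event_type, payload in events:
        if event_type == "got token":    # one token arrived
            tokens.append(payload)
            print(payload, end="")       # analogous to the log node
        elif event_type == "got error":  # surface errors
            raise RuntimeError(payload)
        elif event_type == "done":       # stream finished
            break
    return "".join(tokens)

# Simulated stream standing in for Chat Complete Streamed output:
demo = [("got token", "Hello"), ("got token", ", world"), ("done", None)]
assert handle_stream(demo) == "Hello, world"
```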

Let's quickly verify our setup by clicking the play button on the Chat Complete Streamed function. You should see tokens appearing sequentially in the log panel.
Creating an SSE Endpoint for Frontend Integration
With our assistant operational, the next step is to expose it via an API accessible by our frontend.
Add an Endpoint node with a path template like chat?prompt. This allows the frontend to send a user's prompt through a URL query parameter named prompt.
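The Endpoint node hands you the query parameter directly, but for reference, extracting it yourself looks like this in plain Python (the URL is a hypothetical example):

```python
from urllib.parse import urlparse, parse_qs

def extract_prompt(url: str) -> str:
    """Pull the 'prompt' query parameter out of a request URL,
    as the chat?prompt path template exposes it in Midio."""
    query = parse_qs(urlparse(url).query)
    values = query.get("prompt", [])
    return values[0] if values else ""

assert extract_prompt("http://localhost:3000/chat?prompt=Hi%20there") == "Hi there"
```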
Establishing an SSE Response
To set up the SSE connection, we'll use the Start SSE Response node. To simplify the example, we'll allow all origins by including permissive CORS headers. In a production scenario, ensure you only allow specific origins.
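For context, an SSE response with permissive CORS typically carries headers like the following (shown here as a plain Python dict; the exact set Start SSE Response emits may differ):

```python
# Headers for a permissive SSE response. The wildcard origin is for
# development only -- lock Access-Control-Allow-Origin down to your
# frontend's origin in production.
SSE_HEADERS = {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
    "Access-Control-Allow-Origin": "*",
}

assert SSE_HEADERS["Content-Type"] == "text/event-stream"
```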

This step enables us to send events directly to the frontend.
Defining a Simple Communication Protocol
The frontend needs a clear way to differentiate between the following types of messages:
New token arrivals
Errors
Stream completion
We'll define this through structured JSON messages:
New token: { "token": "<the token>" }
Error: { "error": "<the error>" }
Completion: { "lifecycle": "done" }
We'll achieve this using the Send SSE Event node connected directly to the various triggers of the Next Event node, leveraging expressions to construct these JSON objects. After sending an error or completion message, it's advisable to explicitly close the SSE connection.
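On the wire, each event Send SSE Event emits is one or more "data:" lines terminated by a blank line. A minimal sketch of that framing (the sse_frame helper is hypothetical):

```python
import json

def sse_frame(payload: dict) -> str:
    """Serialize a payload as one SSE event frame: a 'data:' line
    followed by the blank line that terminates the event."""
    return f"data: {json.dumps(payload)}\n\n"

assert sse_frame({"token": "Hi"}) == 'data: {"token": "Hi"}\n\n'
```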

Testing the SSE Endpoint with Fetch Streamed
We'll quickly test our newly created endpoint directly from Midio by using Fetch Streamed alongside the Parse SSE Stream function, which makes consuming SSE APIs easy.
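Parse SSE Stream does this work for you, but a stripped-down parser shows what the format involves. This sketch handles only single-line "data:" fields and ignores event names, ids, and comments:

```python
def parse_sse(stream_text: str):
    """Minimal SSE body parser: yield the data of each event.
    Events are separated by blank lines; data lines start with 'data:'."""
    for block in stream_text.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                yield line[len("data:"):].strip()

body = 'data: {"token": "Hi"}\n\ndata: {"lifecycle": "done"}\n\n'
assert list(parse_sse(body)) == ['{"token": "Hi"}', '{"lifecycle": "done"}']
```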

Generating a Frontend with Lovable
Now, let's generate a frontend using Lovable. Paste the provided template, which reliably generates a functional solution. Remember to replace the placeholder BASE_URL with your actual URL.
Once Lovable is done generating your frontend, go ahead and try it out. You should see something like the demo at the beginning of this article.
Next Steps and Enhancements
That was straightforward! You've now set up a basic streaming chat agent using Midio and Lovable, streaming tokens directly to the user.
However, this demo is just the beginning. Here are several ideas for enhancements:
State Management: Currently, no data is stored between requests. Explore the Ephemeral Store module for memory-based state management.
Tool Integration: Learn how to integrate tools with the streaming API here.
Defined Agent Roles: Experiment with more sophisticated agent roles using inspiration from existing prompt libraries, such as Anthropic’s library or this comprehensive collection.