# Build a streaming agent using Server-Sent Events (SSE)

Server-Sent Events (SSE) have become a popular protocol among language model providers to enable streaming tokens as they're generated.

Midio offers first-class support for this protocol, making it extremely easy to integrate and use language models this way. Additionally, you can effortlessly make your Midio endpoints respond to requests using SSE. In this tutorial, I'll demonstrate how quickly you can set up a user-friendly interface in Lovable that streams tokens from a Midio service.

{% embed url="https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FRdFpuRAnTVYgmlCXLLou%2Fuploads%2FvIbsW8n0lba6QAnmTxmv%2Fchat-sse-lovable-side-by-side.mp4?alt=media&token=640a7754-0614-4a83-abb6-b3c3059d90a1" %}

### Initial Setup

We'll begin by creating a simple chatbot using the `open-ai` package (read about using the package manager [here](https://docs.midio.com/midio-docs/package-manager)), which supports the streaming version of the OpenAI chat API (`Chat Complete Streamed`). We then add a straightforward system message and a test user message, along with our API key.

<figure><img src="/files/iYo76kZGYnroXXrnF7px" alt="" width="375"><figcaption></figcaption></figure>

Unlike the non-streamed API, we need to handle each incoming token individually. To do this, we'll use the `Next Event` function, which offers multiple triggers for different event types:

* `got token`: Triggered for each received token.
* `tool call started`: Triggered when the model initiates a tool call, before parameters arrive.
* `tools called`: Triggered after the tool calls are executed.
* `done`: Triggered once all tokens have been received.
* `got error`: Triggered if an error occurs.

For our purposes, we'll focus primarily on handling tokens (`got token`) and the completion event (`done`).

To continuously handle events, we'll loop back to the `Next Event` function after each token is processed. We'll illustrate this by adding a log node that outputs tokens to the [log panel](https://docs.midio.com/midio-docs/reference/overview/log).

<figure><img src="/files/DfK7SGuPIK9HJgn3RWO2" alt=""><figcaption></figcaption></figure>
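
For readers more comfortable with code than with the visual graph, the loop above is roughly equivalent to the following JavaScript sketch. The event shapes here are hypothetical stand-ins for illustration, not the Midio API; a real stream would come from `Chat Complete Streamed` rather than a simulated generator.

```javascript
// Conceptual sketch of the Next Event loop (hypothetical event shapes):
// keep pulling events until `done` arrives, handling each token as it comes.
async function* tokenStream() {
  // Simulated provider events; in Midio these come from Chat Complete Streamed.
  yield { type: "got token", token: "Hel" };
  yield { type: "got token", token: "lo!" };
  yield { type: "done" };
}

async function handleEvents(stream, onToken) {
  for await (const event of stream) {
    if (event.type === "got token") onToken(event.token); // handle each token
    else if (event.type === "done") return;               // all tokens received
    else if (event.type === "got error") throw new Error(event.error);
  }
}

(async () => {
  const parts = [];
  await handleEvents(tokenStream(), (t) => parts.push(t));
  console.log(parts.join("")); // prints "Hello!"
})();
```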

Let's quickly verify our setup by clicking the play button on the `Chat Complete Streamed` function. You should see tokens appearing sequentially in the log panel.

### Creating an SSE Endpoint for Frontend Integration

With our assistant operational, the next step is to expose it through an API that our frontend can call.

Add an `Endpoint` node with a path template like `chat?prompt`. This allows the frontend to send a user's prompt through a URL query parameter named `prompt`.
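
From the frontend's side, calling this endpoint just means building a URL with the prompt encoded as a query parameter. A minimal sketch (the host below is a placeholder, matching the one used in the Lovable prompt later):

```javascript
// Hypothetical frontend call: the user's prompt travels as the `prompt` query
// parameter, so it must be URL-encoded first.
const prompt = "What is SSE?";
const url = `https://your-project.midio.dev:3000/chat?prompt=${encodeURIComponent(prompt)}`;
console.log(url);
// In a browser you would then open the stream with:
//   const source = new EventSource(url);
```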

#### Establishing an SSE Response

To set up the SSE connection, we'll use the `Start SSE Response` node. To simplify the example, we'll allow all origins by including permissive CORS headers. In a production scenario, ensure you only allow specific origins.

<figure><img src="/files/K0vzjExyZlwhLGSrJ6PH" alt=""><figcaption></figcaption></figure>

This step enables us to send events directly to the frontend.
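
Under the hood, an SSE response is just a long-lived HTTP response with the `text/event-stream` content type; each event is a `data:` line followed by a blank line. With permissive CORS headers, the response `Start SSE Response` produces looks roughly like this (illustrative, not a byte-exact capture):

```
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
Access-Control-Allow-Origin: *

data: {"token":"Hel"}

data: {"token":"lo!"}

data: {"lifecycle":"done"}
```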

#### Defining a Simple Communication Protocol

The frontend needs a clear way to differentiate between the following types of messages:

* New token arrivals
* Errors
* Stream completion

We'll define this through structured JSON messages:

* New token: `{ "token": "<the token>" }`
* Error: `{ "error": "<the error>" }`
* Completion: `{ "lifecycle": "done" }`

We'll achieve this using the `Send SSE Event` node connected directly to various triggers of the `Next Event` node, leveraging [expressions](https://docs.midio.com/midio-docs/guides/expressions) to construct these JSON objects.

After sending error or completion messages, it's advisable to explicitly close the SSE connection.
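
On the receiving end, the frontend only needs a small dispatcher to tell these three message types apart. A hypothetical sketch (the function name and return shapes are illustrative, not part of any API):

```javascript
// Hypothetical client-side dispatcher: given one SSE `data:` payload,
// decide what the UI should do with it.
function classifyMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.token !== undefined) return { kind: "token", text: msg.token };
  if (msg.error !== undefined) return { kind: "error", text: msg.error };
  if (msg.lifecycle === "done") return { kind: "done" };
  return { kind: "unknown" };
}

console.log(classifyMessage('{ "token": "Hi" }').kind);      // prints "token"
console.log(classifyMessage('{ "lifecycle": "done" }').kind); // prints "done"
```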

<figure><img src="/files/lpNPJPikzuS3Qbr4ZsJZ" alt=""><figcaption></figcaption></figure>

### Testing the SSE Endpoint with Fetch Streamed

We'll quickly test our newly created endpoint directly from Midio by using `Fetch Streamed` alongside the `Parse SSE Stream` function, which makes it easy to consume SSE APIs.

<figure><img src="/files/Wa4qoDeYtAd9NANv8LCb" alt=""><figcaption></figcaption></figure>
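
If you want to do the same from code instead, the core of what an SSE parser does can be sketched in a few lines: buffer incoming text, split on blank lines, and strip the `data:` prefix. This is an illustrative simplification, not the `Parse SSE Stream` implementation:

```javascript
// Minimal SSE-parsing sketch: split buffered text into events on blank lines
// and extract each event's `data:` payload.
function parseSSEChunk(buffer) {
  const events = [];
  const blocks = buffer.split("\n\n");
  const rest = blocks.pop(); // the last block may be incomplete; keep it buffered
  for (const block of blocks) {
    const data = block
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trimStart())
      .join("\n");
    if (data) events.push(data);
  }
  return { events, rest };
}

const { events } = parseSSEChunk('data: {"token":"Hi"}\n\ndata: {"lifecycle":"done"}\n\n');
console.log(events); // prints [ '{"token":"Hi"}', '{"lifecycle":"done"}' ]
```

In a browser you would feed this function successive chunks from `response.body.getReader()` decoded with a `TextDecoder`, carrying `rest` over between chunks.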

### Generating a Frontend with Lovable

Now, let's generate a frontend using Lovable. Paste the prompt below, which reliably generates a functional solution. Remember to replace the placeholder URL (`your-project.midio.dev`) with your project's actual URL.

<details>

<summary>Lovable Prompt (remember to swap out the project url for your own)</summary>

````
# Task: Build a beautiful Modern Streaming Chat Web App

## 1 — Overview
Create a single‑page chat application that lets the user type a message, sends that message to an AI backend via **Server‑Sent Events (SSE)**, and streams the AI’s reply in real time. The design/colors MUST be in dark mode.

---

## 2 — Functional Requirements

1. **Layout**
   * A scrollable chat log fills most of the page.
   * A single‑line text input is fixed at the bottom.
   * User bubbles are right‑aligned; AI bubbles are left‑aligned.
   * Error bubbles are left‑aligned with a red tint.

2. **Networking**
   * When the user submits, open an SSE connection to
     ```
     https://your-project.midio.dev:3000/chat?prompt=<URL-encoded-text>
     ```
   * The stream sends **JSON** lines:
     * Normal tokens: `{ "token": "some text" }`
       → Append `token` to the current AI bubble as soon as it arrives.
     * Errors: `{ "error": "description" }`
       → Close the stream, render a separate error bubble, and politely ask the user to retry.

3. **Streaming Lifecycle**
   * Re‑enable input and close the `EventSource` when the server ends the stream or an error occurs.
   * Optionally show a small “thinking…” placeholder until the first token arrives.
   * A message of the form `{ "lifecycle": "done" }` is sent last, once the server has finished handling the request. Make sure to close the connection when you receive this message; otherwise you will get an error.

4. **Error Handling**
   * Handle both JSON‑error messages **and** network failures (`onerror`): in both cases show the red error bubble and invite the user to try again.

---

## 3 — Technical Constraints

* **Vanilla HTML/CSS/JavaScript only** (no build tools or frameworks).
  *Use the native `EventSource` API or `fetch` with `ReadableStream`—your choice.*
* Self‑contained: opening **index.html** in any modern browser should work.
* Responsive design: mobile‑first, max‑width ≈ 600 px on larger screens.
* Should also work well on desktop at widths down to 300 px.
* Accessible markup (semantic elements, ARIA roles where appropriate).
* Keep external dependencies to an absolute minimum.

## 4 - Design goals
- Design/colors MUST be DARK MODE.
- Beautiful and modern look.
- Add some descriptive text
- Add some subtle modern colors.
````

</details>

Once Lovable is done generating your frontend, go ahead and try it out. You should see something like the video at the beginning of this article.

### Next Steps and Enhancements

That was straightforward! You've now set up a basic streaming chat agent using Midio and Lovable, streaming tokens directly to the user.

However, this demo is just the beginning. Here are several ideas for enhancements:

* **State Management**: Currently, no data is stored between requests. Explore the [Ephemeral Store](https://docs.midio.com/midio-docs/built-in-nodes/core-std#ephemeral-store) module for memory-based state management.
* **Tool Integration**: Learn how to integrate tools with the streaming API [here](https://docs.midio.com/midio-docs/guides/building-agents/streaming-agent-api-experimental).
* **Defined Agent Roles**: Experiment with more sophisticated agent roles using inspiration from existing prompt libraries, such as [Anthropic’s library](https://docs.anthropic.com/en/prompt-library/library) or [this comprehensive collection](https://prompts.chat/).

