
Concurrent Research Agent

In this guide we will create a Research Agent that takes a user question and automatically does the following:

  • Breaks the question into discrete research tasks

  • Launches multiple sub‑agents that search the web in parallel

  • Collects and deduplicates their findings

  • Produces a concise, well‑structured final answer

We leverage Midio's built‑in concurrency primitives, such as Std.Spawn Processes from List and Std.MergeAll.
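Since Midio is a visual language, there is no textual source to show for these nodes, but the pattern they implement is ordinary fan‑out/fan‑in concurrency. A rough Python sketch of the same shape (the research function here is a stand‑in for a sub‑agent, not Midio code):

```python
from concurrent.futures import ThreadPoolExecutor

def research(task: str) -> str:
    # Stand-in for one sub-agent: search the web, then summarize.
    return f"summary of {task!r}"

def run_agent(tasks: list[str]) -> list[str]:
    # "Spawn Processes from List": one worker per task, run in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(research, tasks))
    # "MergeAll": wait for every child and collect their outputs.
    return results
```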


High‑Level Architecture

Our solution is split into three main phases:

  1) Task Planning: turn the user prompt into a list of concrete research tasks.

  2) Concurrent Research: spawn one child process per task, each of which transforms its task into a search‑engine query, queries the Tavily Search API, and summarizes the results.

  3) Aggregation & Synthesis: merge all child outputs, filter out failures, and craft the final answer.

Phase 1: Task Planning

The master agent needs a machine‑readable list of subtasks. Midio's Chat Complete node (available in the open-ai package) supports structured output, which lets an LLM return valid JSON that is guaranteed to match a user‑defined schema.

  1. We can force the LLM to output a custom schema by providing an object like the following to the response format input. We also need to mention in the system prompt that we expect JSON output.

  2. To the system message input, we pass the following:

  3. The user's query is passed directly to the user message input.
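The exact schema depends on your use case; as a sketch, a response format object in the style of the OpenAI structured‑output API, plus a matching system message, might look like this (the field names "tasks", "title", and "description", and the prompt wording, are illustrative assumptions, not part of the guide):

```python
import json

# Hypothetical response_format object for structured output.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "research_tasks",
        "schema": {
            "type": "object",
            "properties": {
                "tasks": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {"type": "string"},
                            "description": {"type": "string"},
                        },
                        "required": ["title", "description"],
                    },
                }
            },
            "required": ["tasks"],
        },
    },
}

# A system message along these lines reminds the model to emit JSON:
system_message = (
    "You are a research planner. Break the user's question into 3-5 "
    "discrete research tasks and respond only with JSON matching the "
    "provided schema."
)
```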

Phase 2: Concurrent Research

1. Spawning Child Processes

We can spawn a process for each generated task using the Std.Spawn Processes from List node, connecting the agents field from the response of the Generate Research Tasks function.

We can then connect the child spawned trigger (which is triggered once per item) to the Search Web and Summarize function.

Inside the Research Sub‑Agent

The sub‑agent receives a single task object. It first searches the web, then summarizes its findings. Here is an example of a task object generated by the previous step.
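The guide's original example object is not reproduced here, but a planned task could plausibly look like the following (the field names and values are purely illustrative):

```python
import json

# Hypothetical task object emitted by the planning step; the field
# names ("title", "description") are assumptions, not prescribed.
task = {
    "title": "Battery chemistry comparison",
    "description": "Compare LFP and NMC lithium-ion chemistries on "
                   "energy density, cycle life, and cost.",
}

print(json.dumps(task, indent=2))
```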

2. Craft a Search Query

A Chat Complete call turns the (sometimes verbose) task into a concise search‑engine query using the following prompt:
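The exact prompt is up to you; one plausible wording (an assumption, not the guide's verbatim prompt) is:

```python
# Illustrative prompt for compressing a verbose task into a search query.
QUERY_PROMPT = (
    "Rewrite the following research task as a short web search query. "
    "Respond with the query only, no explanation.\n\nTask: {task}"
)

def craft_query_message(task_description: str) -> str:
    # Fill the template with the sub-agent's task description.
    return QUERY_PROMPT.format(task=task_description)
```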

The response is then piped directly into the query input of a Tavily.Search node. You can add the Tavily package using the package manager in the editor.

3. Convert Pages to Markdown

For each search result we get back from Tavily, we convert it to Markdown using the Html to Markdown function in the web-tools package. We use Spawn Processes from List here as well to do this task concurrently.
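The real conversion is done by the Html to Markdown node; as a toy stand‑in, the concurrent per‑result conversion has roughly this shape (the converter below is a deliberately minimal sketch, far simpler than the web-tools package):

```python
from concurrent.futures import ThreadPoolExecutor
import re

def html_to_markdown(html: str) -> str:
    # Toy stand-in for the web-tools Html to Markdown node: handles
    # only <h1> and <b>, then strips remaining tags.
    html = re.sub(r"<h1>(.*?)</h1>", r"# \1", html)
    html = re.sub(r"<b>(.*?)</b>", r"**\1**", html)
    return re.sub(r"<[^>]+>", "", html).strip()

def convert_all(pages: list[str]) -> list[str]:
    # One conversion per search result, run concurrently
    # (mirrors Spawn Processes from List).
    with ThreadPoolExecutor() as pool:
        return list(pool.map(html_to_markdown, pages))
```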


Tavily supports returning a summary of the page results automatically, but we do it manually here for demo purposes.

4. Summarize Findings

Another Chat Complete node receives both the task and raw search results, and is prompted to combine them into a task summary.
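A summarization prompt along these lines would do (the wording and message shape are assumptions, not the guide's exact node configuration):

```python
# Illustrative summarization prompt for the sub-agent's final step.
SUMMARY_PROMPT = (
    "You are a research assistant. Given a research task and raw web "
    "search results, write a concise summary that answers the task. "
    "Cite source URLs inline where relevant."
)

def build_summary_messages(task: str, results: str) -> list[dict]:
    # Pack the task and the raw results into a chat-style message list.
    return [
        {"role": "system", "content": SUMMARY_PROMPT},
        {"role": "user", "content": f"Task: {task}\n\nResults:\n{results}"},
    ]
```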

Phase 3: Aggregation & Synthesis

The final step is to combine all the research data and produce a single response. We do this with another LLM call, with the following prompt:

For its user message we pass the search results and the original user query using the following expression string template:
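Midio expresses the user message as a string‑template expression; the Python equivalent of one plausible prompt and template looks like this (wording is an assumption):

```python
# Illustrative synthesis prompt for the final LLM call.
SYNTHESIS_PROMPT = (
    "Combine the research summaries below into one well-structured "
    "answer to the user's original question. Deduplicate overlapping "
    "findings and keep the result concise."
)

def build_user_message(query: str, summaries: list[str]) -> str:
    # Interpolate the original query and all child summaries,
    # mirroring a Midio string-template expression.
    joined = "\n\n".join(summaries)
    return f"Original question: {query}\n\nResearch summaries:\n{joined}"
```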


We also make sure to remove any null values from the inputs list, which might appear if a research agent fails for some reason.
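In Python terms, this cleanup step is a one‑line filter:

```python
def drop_failures(results: list) -> list:
    # A failed research sub-agent contributes null; drop those
    # entries before passing the list to the synthesis call.
    return [r for r in results if r is not None]
```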

Exposing the Agent as a REST API

To make the agent callable from any frontend (e.g. Lovable, curl, or your own app), we can hook it up to an Endpoint event node like this:

And return a response like this:

Your research microservice is now live at https://<your-project-name>.midio.dev:3001/research and expects a single query parameter named query. You can call it with curl or any HTTP client.
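For example, a small Python sketch that builds the request URL (the equivalent curl call is shown in a comment; keep the placeholder hostname until you substitute your project name):

```python
from urllib.parse import urlencode

# Equivalent curl call:
#   curl "https://<your-project-name>.midio.dev:3001/research?query=..."
BASE_URL = "https://<your-project-name>.midio.dev:3001/research"

def build_request_url(query: str) -> str:
    # URL-encode the single expected "query" parameter.
    return f"{BASE_URL}?{urlencode({'query': query})}"

# To call the live endpoint (network required):
# from urllib.request import urlopen
# print(urlopen(build_request_url("How do heat pumps work?")).read())
```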
