Building Agents


Midio has its own native wrapper over the OpenAI completions API in the native llm package. This wrapper is quite low-level, so we recommend instead using one of the higher-level packages for the different LLM providers, such as open-ai, groq, and anthropic, which can be installed from the package manager.

All of these packages have a similar API

The various inputs are used as follows:

  • context - Used when you want to call Chat Complete in a loop, where each iteration continues from the resulting context of the previous iteration. This can generally be left blank.

  • api key - A valid API key from the LLM provider you're using.

  • model - A valid model can be chosen from the drop-down list next to the input.

  • temperature - This parameter tweaks how "random" the output is. Higher values cause more randomness.

  • tools - The tools parameter can be supplied with a list of functions that the model can call. Any Midio function can be used as a tool. See the example below.

  • system message - This input describes how the agent should behave. Use it to give the agent a role description and instructions about its general behavior. This is also a good place to provide examples.

  • user message - This is where you put your user's input to the agent. In a chatbot setting, this input would receive the user's message.
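Under the hood, these inputs map onto an OpenAI-style chat-completions request, which the Midio node assembles for you. As a rough sketch (field names follow the OpenAI wire format; the helper function and example values here are illustrative, not part of any Midio package):

```python
# Sketch of how the Chat Complete inputs map to an OpenAI-style
# chat-completions request body. Field names follow the OpenAI wire
# format; build_request is a hypothetical helper for illustration.

def build_request(model, system_message, user_message,
                  temperature=0.7, tools=None):
    messages = [
        {"role": "system", "content": system_message},  # role / behavior
        {"role": "user", "content": user_message},      # the user's input
    ]
    body = {
        "model": model,
        "temperature": temperature,  # higher = more random output
        "messages": messages,
    }
    if tools:
        body["tools"] = tools
        body["tool_choice"] = "auto"   # let the model decide when to call
    else:
        body["tool_choice"] = "none"   # no tools provided
    return body

request = build_request(
    model="gpt-4o-mini",
    system_message="You are a helpful assistant for a docs site.",
    user_message="Summarize the Building Agents guide.",
)
```

The context input is not shown here; it corresponds to carrying the accumulated messages list forward between calls.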

Using Tools

Any Midio function can be used as a tool. Simply connect its top-right (the square) connector to the tools input. Make sure to also select auto for the tool choice input.
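On the wire, OpenAI-compatible APIs receive each tool as a JSON schema describing its name and parameters; the package handles this translation for any connected Midio function. A hypothetical sketch of what such a definition looks like (the get_weather name and its parameters are invented for illustration):

```python
# Hypothetical example of the OpenAI-style tool definition a connected
# function is translated into. The name, description, and parameters
# here are invented for illustration.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

tools = [weather_tool]  # passed to the tools input; tool choice set to auto
```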

tool choice - This can be used to force the model to call a certain tool. It can generally be set to auto when you provide tools, but must be set to none if you provide no tools. See the OpenAI docs for more information about this parameter.

assistant prefill - This is an optional field, which can be used to guide how the agent's response should start. Whatever you write here is essentially treated as the beginning of the response. You can read more about prefilling in your provider's documentation.
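On providers that support it, prefilling amounts to appending a partial assistant message that the model then continues from. A minimal sketch (the prefill text is just an example):

```python
# Sketch: assistant prefill appends a partial assistant message that the
# model continues from. The prefill text here is only an example.
prefill = "Here is the summary as a bullet list:\n- "

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize the Building Agents guide."},
    {"role": "assistant", "content": prefill},  # model continues from here
]
```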
