Building Agents

Midio ships a native wrapper over the OpenAI completions API in the llm package. This wrapper is very low level, and we recommend instead using one of the higher-level packages for the various LLM providers, such as open-ai, groq, and anthropic, which can be installed from the package manager.

All of these packages expose a similar API.

The various inputs are used as follows:

  • context - Used when you call Chat Complete in a loop, where each iteration continues with the result context of the previous iteration. This can generally be left blank.

  • api key - A valid API key from the LLM provider you're using.

  • model - A valid model can be chosen from the drop-down list next to the input.

  • temperature - This parameter tweaks how "random" the output is. Higher values produce more random output.

  • tools - The tools parameter can be supplied with a list of functions that the model can call. Any Midio function can be used as a tool. See the example below.

  • tool choice - This can be used to force the model to call a certain tool. It can generally be set to auto when you provide tools, but must be set to none if you provide no tools. See the OpenAI docs for more information about this parameter.

  • response format - Used to force the LLM to output structured JSON data. See the section on structured output for more information.

  • system message - This input is used to describe how the agent should behave. Use this field to provide it with a role description and inform its general behavior. This is also a good place to provide examples.

  • user message - This is where you put your user's input to the agent. In a chatbot setting, this input should receive the user's message.

  • assistant prefill - This is an optional field, which can be used to guide how the agent's response should start. Whatever you write here is essentially treated as the beginning of the response. You can read about prefilling here.
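
As an illustration, a simple agent with no tools might use values like the following (the model name and messages are just examples):

system message: "You are a helpful assistant that answers geography questions in one sentence."
user message: "What is the capital of Norway?"
model: "gpt-4o-mini"
temperature: 0.7
tool choice: "none"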

Using Tools

Any Midio function can be used as a tool. Simply connect its top right (the square) connector to the tools input. Make sure to also set the tool choice input to auto.

Structured Output

LLMs can be made to output structured JSON data by providing a value to the response format input. You can either ask for arbitrary JSON by passing the string "json_object", or pass a schema object like the following:


{
    name: "RandomStreets",
    description: "A list of plausible sounding street names and the cities they are in.",
    schema: {
        streets: [
            {
                name: "The name of the street",
                city: "The city the street is in"
            }
        ]
    }
}

Schema names must only include alphanumeric characters, underscores, and hyphens. Provide a description detailing the structure's contents. The schema field takes an example object, from which the data type of each field is inferred automatically. For string values, the text you provide serves as the field's description. The example object above defines the following data type:

{
    "properties": {
        "streets": {
            "type": "array",
            "items": {
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city the street is in"
                    },
                    "name": {
                        "type": "string",
                        "description": "The name of the street"
                    }
                },
                "type": "object"
            }
        }
    },
    "type": "object"
}
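
Given this response format, the model's structured output might look something like the following (the street and city names are of course made up):

{
    "streets": [
        {
            "name": "Willow Bend Lane",
            "city": "Bristol"
        },
        {
            "name": "Harbor View Road",
            "city": "Oslo"
        }
    ]
}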
