# Building Agents

Midio includes a native wrapper over the OpenAI completions API in the `llm` package. This wrapper is quite low level, and we recommend instead using one of the high-level packages for the various LLM providers, such as **open-ai, groq, anthropic**, and so on, which can be installed from the package manager.

<figure><img src="/files/tcVs8cgNdtmlvtkwitiB" alt="" width="375"><figcaption></figcaption></figure>

<figure><img src="/files/H06roNrQ7g7B1y7jAzfk" alt="" width="563"><figcaption></figcaption></figure>

## All of these packages have a similar API

<figure><img src="/files/wJqwMixl9g8BboxkoY6J" alt=""><figcaption></figcaption></figure>

The various inputs are used as follows:

* `context` - Used when you want to call `Chat Complete` in a loop, where each iteration continues from the result context of the previous iteration. This can generally be left blank.
* `api key` - A valid API key from the LLM provider you're using.
* `model` - A valid model can be chosen from the drop-down list next to the input.
* `temperature` - Controls how "random" the output is. Higher values cause more randomness.
* `tools` - The tools parameter can be supplied with a list of functions that the model can call. Any Midio function can be used as a tool. See the example below.
* `tool choice` - This can be used to force the model to call a certain tool. It can generally be set to `auto` when you do provide tools, but must be set to `none` if you provide no tools. See [the OpenAI docs](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice) for more information about this parameter.
* `response format` - Used to force the LLM to output structured JSON data. See the section on [structured output](#structured-output) for more information.
* `system message` - This input is used to describe how the agent should behave. Use this field to provide it with a role description and inform its general behavior. This is also a good place to provide examples.
* `user message` - This is where you put your user's input to the agent. In a chatbot setting, this input should receive the user's message.
* `assistant prefill` - An optional field that can be used to guide how the agent's response should start. Whatever you write here is essentially treated as the beginning of the response. You can read more about prefilling [here](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prefill-claudes-response).
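For reference, these inputs map roughly onto the fields of an OpenAI-style chat completions request. The sketch below is illustrative, not the exact payload Midio produces; the trailing `assistant` message shows how an assistant prefill is conceptually passed, and field names may vary by provider:

```json
{
    "model": "gpt-4o",
    "temperature": 0.7,
    "messages": [
        { "role": "system", "content": "You are a helpful assistant." },
        { "role": "user", "content": "List three Norwegian cities." },
        { "role": "assistant", "content": "1." }
    ],
    "tool_choice": "none"
}
```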

## Using Tools

Any Midio function can be used as a tool. Simply connect its top-right (the square) connector to the `tools` input. Make sure to also set `tool choice` to `auto`.

<figure><img src="/files/HUYodialGs9ZJ52V9qaF" alt="" width="563"><figcaption></figcaption></figure>
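When a function is supplied as a tool, the model receives a description of it in the provider's tool format. As a hedged sketch, an OpenAI-style tool definition looks like the following (the function name and parameters here are purely illustrative, not a real Midio function):

```json
{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Returns the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": { "type": "string", "description": "City name" }
            },
            "required": ["city"]
        }
    }
}
```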

## Structured output

LLMs can be made to output structured JSON data by providing a value to the `response format` input. You can either ask for generic JSON by passing the string "json\_object", or pass an example or schema object like the following:

```json
{
    "name": "RandomStreets",
    "description": "A list of plausible sounding street names and the cities they are in.",
    "example": {
        "streets": [
            {
                "name": "The name of the street",
                "city": "The city the street is in"
            }
        ]
    }
}
```

or, with an explicit JSON schema:

```json
{
    "name": "RandomStreets",
    "description": "A list of plausible sounding street names and the cities they are in.",
    "schema": {
        "type": "object",
        "properties": {
            "streets": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {
                            "type": "string",
                            "description": "The name of the street"
                        },
                        "city": {
                            "type": "string",
                            "description": "The city the street is in"
                        }
                    },
                    "required": ["name", "city"],
                    "additionalProperties": false
                }
            }
        },
        "required": ["streets"],
        "additionalProperties": false
    }
}
```

<figure><img src="/files/UgZvaGgJ7XnRIVEiMeu3" alt=""><figcaption></figcaption></figure>

Schema names must only include alphanumeric characters, underscores, and hyphens. Provide a description detailing the structure's contents. When you use the `example` form, the data type of each field's value in the example object is inferred automatically; for string values, the text you provide serves as the field's description. The example object above defines the following data type:

```json
{
    "properties": {
        "streets": {
            "type": "array",
            "items": {
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city the street is in"
                    },
                    "name": {
                        "type": "string",
                        "description": "The name of the street"
                    }
                },
                "type": "object"
            }
        }
    },
    "type": "object"
}
```
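Given this response format, the model's reply should be a JSON document conforming to the type above. For example, a response might look like this (the street and city values are purely illustrative):

```json
{
    "streets": [
        { "name": "Kirkegata", "city": "Trondheim" },
        { "name": "Storgata", "city": "Oslo" }
    ]
}
```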

{% hint style="warning" %}
Most LLM providers require that you include the word JSON in your prompt (either the system message or the user message) when asking for structured output. Example: "Create a list of random Norwegian street names and respond with JSON."
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.midio.com/midio-docs/guides/building-agents.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
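As a sketch, such a query can be issued with `curl`. The question below is hypothetical, and only spaces are percent-encoded for brevity; a real client should URL-encode the full question:

```shell
# Hypothetical question for this page's ask endpoint.
question="How do I force a specific tool with tool choice"
# Minimal percent-encoding: replace spaces with %20 (illustrative only).
encoded=$(printf '%s' "$question" | sed 's/ /%20/g')
url="https://docs.midio.com/midio-docs/guides/building-agents.md?ask=${encoded}"
echo "$url"
# curl -s "$url"   # uncomment to perform the actual GET request
```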
