Function Calling

When making an API call, you can describe functions to the model and have it generate a JSON object containing the arguments for a function call. The Chat Completions API does not execute the function; it only returns the JSON arguments so that you can call the function in your own code.

Function calling is available for providers and models that themselves support it, and in most cases this ensures JSON-parsable output.

Some examples of providers that support function calling are:

  1. Anthropic
  2. OpenAI
  3. OpenRouter
  4. Mistral

For example:
curl -X POST \
  https://proxy.tune.app/chat/completions \
  -H "Authorization: <Access Key>" \
  -H "X-Org-Id: <organization id>" \
  -d '{
    "temperature": 0.9,
    "messages": [
        {
            "role": "system",
            "content": "You are TuneStudio"
        },
        {
            "role": "user",
            "content": "this weeks weather in delhi"
        }
    ],
    "model": "<Your model here>",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA"
                        },
                        "format": {
                            "type": "string",
                            "enum": [
                                "celsius",
                                "fahrenheit"
                            ],
                            "description": "The temperature unit to use. Infer this from the users location."
                        }
                    },
                    "required": [
                        "location",
                        "format"
                    ]
                }
            }
        }
    ],
    "stream": false,
    "penalty": 0.2,
    "max_tokens": 1000
}'
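
The model only returns the tool call; executing get_current_weather is up to your code. Below is a minimal Python sketch of sending the same request with the requests library and dispatching the returned call. It assumes the proxy returns an OpenAI-style choices[0].message.tool_calls structure, and the local get_current_weather implementation is a hypothetical stand-in.

import json
import requests

# Hypothetical local implementation of the tool declared in the request.
def get_current_weather(location: str, format: str) -> dict:
    # A real application would query a weather service here.
    return {"location": location, "temperature": 22, "unit": format}

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "format": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location", "format"],
        },
    },
}]

response = requests.post(
    "https://proxy.tune.app/chat/completions",
    headers={
        "Authorization": "<Access Key>",
        "X-Org-Id": "<organization id>",
        "Content-Type": "application/json",
    },
    json={
        "model": "<Your model here>",
        "messages": [
            {"role": "system", "content": "You are TuneStudio"},
            {"role": "user", "content": "this week's weather in Delhi"},
        ],
        "tools": tools,
        "max_tokens": 1000,
    },
)

message = response.json()["choices"][0]["message"]

# If the model chose to call the tool, parse its JSON-encoded arguments and run it.
for call in message.get("tool_calls") or []:
    if call["function"]["name"] == "get_current_weather":
        args = json.loads(call["function"]["arguments"])
        print(get_current_weather(**args))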

[BETA] JSON Mode (guided decoding)

JSON mode does not guarantee JSON-parsable output.

JSON mode is a form of guided decoding: you supply the model with a JSON schema, and instead of generating a free-form message, the model works through the schema and fills in values for its key-value pairs. The request below shows this; a Python sketch for building the schema and parsing the result follows it.

curl -X POST "https://proxy.tune.app/chat/completions" \
  -H "Authorization: <access key>" \
  -H "X-Org-Id: <organization id>" \
  -d '{
    "temperature": 0.9,
    "messages": [
      {
        "role": "system",
        "content": "You only output json"
      },
      {
        "role": "user",
        "content": "Generate a person information based on the following json schema:"
      }
    ],
    "model": "MODEL_ID",
    "stream": false,
    "penalty": 0.2,
    "response-format": {
      "type": "json_object"
    },
    "guided_json": "{\n    \"type\": \"object\",\n    \"properties\": {\n        \"name\": {\"type\": \"string\"},\n        \"age\": {\"type\": \"number\"},\n        \"is_student\": {\"type\": \"boolean\"},\n        \"courses\": {\n            \"type\": \"array\",\n            \"items\": {\"type\": \"string\"}\n        }\n    }\n}",
    "max_tokens": 100
}'
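
Because the output is not guaranteed to be parsable, it is safer to build guided_json programmatically and parse the result defensively. Below is a minimal Python sketch under the same assumptions as before (OpenAI-style response shape, requests for the HTTP call); the field names mirror the request above.

import json
import requests

# JSON Schema the model should fill in; guided_json expects it serialized as a string.
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "number"},
        "is_student": {"type": "boolean"},
        "courses": {"type": "array", "items": {"type": "string"}},
    },
}

response = requests.post(
    "https://proxy.tune.app/chat/completions",
    headers={
        "Authorization": "<access key>",
        "X-Org-Id": "<organization id>",
        "Content-Type": "application/json",
    },
    json={
        "model": "MODEL_ID",
        "messages": [
            {"role": "system", "content": "You only output json"},
            {"role": "user", "content": "Generate a person's information based on the following JSON schema:"},
        ],
        "response_format": {"type": "json_object"},
        "guided_json": json.dumps(person_schema),
        "max_tokens": 100,
    },
)

content = response.json()["choices"][0]["message"]["content"]
try:
    person = json.loads(content)
    print(person)
except json.JSONDecodeError:
    # Guided decoding reduces, but does not eliminate, malformed output.
    print("Model did not return valid JSON:", content)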