The Assistants API offers a robust framework for developers to create AI assistants tailored to their applications. With the API, you can seamlessly integrate conversational agents, define custom actions, and leverage pre-built tools to enhance user experiences.

Comparison with Traditional Models

AI assistants offer a significant advantage over traditional models, particularly in speed and flexibility. With a roughly 10x speed improvement, assistants generate responses, execute actions, and retrieve information fast enough to streamline workflows and save valuable time.

Assistants excel in use cases where:

  • Real-time interaction is required, such as customer support or live chat.
  • Complex, multi-step tasks need to be executed, like scheduling, task automation, or data retrieval.
  • Custom workflows are needed, leveraging dynamic tool integration to fit specific needs.

By combining speed with customizability, assistants outshine traditional models in dynamic, user-interactive environments.

What You Can Do with a Custom AI Assistant

  • Get Up-to-Date Information: Ask questions like, “What is the current weather in my city?”
  • Do a Web Search: Request current SEO updates.
  • Generate Images: Create images in threads via Flux.
  • Automate Your Workflow: Book a meeting with your team for next Tuesday at 10 a.m.
  • And More

Getting Started

To begin working with AI assistants, you have two options:

  1. Explore the Tune Assistants Playground: Experiment with assistant capabilities in the interactive playground.
  2. Use the Assistants API: Integrate Assistant capabilities directly into your applications.
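
If you go the API route, requests follow a familiar chat-completions shape. The snippet below is a minimal sketch, assuming an OpenAI-compatible endpoint; the URL, model ID, and header values are placeholders, so substitute the values from your Tune dashboard.

```python
import requests

# Placeholder values: swap in the endpoint, model ID, and API key from
# your Tune dashboard. The URL and header shown here are assumptions.
API_URL = "https://proxy.tune.app/chat/completions"
API_KEY = "your-tune-api-key"

payload = {
    "model": "your-assistant-model-id",  # the ID from step 1 below
    "messages": [
        {"role": "user", "content": "What is the current weather in my city?"}
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```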

How to Create an AI Assistant in Minutes Using the Tune Assistants Playground

  1. Build an Assistant by Providing a Model ID: This ID will be used in all subsequent API calls.
  2. Set Specific Instructions: Define the instructions that the assistant needs to follow.
  3. Choose the Response Model: Select a model that combines responses from multiple tools to generate the final output.
  4. Select the Function Calling Model: Pick a model that enables the assistant to decide which function to call.
  5. Pre-Set Tools: Equip your assistant with tools such as:
    • Search: Enable the assistant to perform web searches (note: it cannot visit the search results).
    • Image Generation: Allow the assistant to generate creative images.
  6. Specify Actions: Define the actions your assistant can perform (a sketch follows this list). Actions serve as messengers between your LLMs and external applications, helping you:
    • Interact with External Apps: Connect with outside applications using RESTful APIs by providing an OpenAPI spec.
    • Retrieve Data: Fetch information from external applications, such as recent orders.
    • Take Actions: Perform tasks in other applications, like sending an email.
  7. Function Calling: Describe your function to the model.
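
To make step 6 concrete, here is a minimal sketch of the kind of OpenAPI spec an action consumes, written as a Python dict. Everything in it, including the server URL, path, and operationId, is hypothetical; supply the spec for your own REST API when configuring an action.

```python
# A minimal, hypothetical OpenAPI 3.0 spec for a "retrieve recent orders"
# action. The host, path, and operationId are placeholders for your own API.
recent_orders_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/orders/recent": {
            "get": {
                "operationId": "getRecentOrders",
                "summary": "Fetch the most recent orders for a customer",
                "parameters": [
                    {
                        "name": "customer_id",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {"200": {"description": "A list of recent orders"}},
            }
        }
    },
}
```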

Function calling allows you to access real-time data and execute custom functions within a specific context, enabling the assistant to perform tasks that go beyond simple text-based conversations.
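
As an illustration, a function description in the widely used OpenAI tool format looks like the sketch below. It uses the find_movies function referenced in the workflow section that follows; the parameter names are hypothetical.

```python
# OpenAI-format tool definition for the find_movies function used in the
# workflow walkthrough below. The parameters here are hypothetical.
find_movies_tool = {
    "type": "function",
    "function": {
        "name": "find_movies",
        "description": "Find movies currently playing in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and state, e.g. San Francisco, CA",
                },
                "genre": {
                    "type": "string",
                    "description": "Optional genre filter, e.g. comedy",
                },
            },
            "required": ["location"],
        },
    },
}
```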

Explanation of the Assistant’s Processing Flow

The Tune Assistant workflow is powered by a function-calling model that semantically checks each request for potential function calls before falling back to standard LLM processing. If Tune Assistant Tools are defined, they are executed with priority over tools supplied in the OpenAI format.

1. Normal Query (Tools = OFF)

  • Request: A simple query, e.g., “What is 2 + 2?”
  • Process:
    • The request is sent to the function model.
    • The function model processes the request using only the language model (LLM) without any additional tools.
    • The response is generated and sent directly back to the user.

2. Search Internet (Tools = ON)

  • Request: A more complex query that requires external information, e.g., “What movies are playing…”
  • Process:
    • The request is sent to the function model.
    • The function model identifies that this query requires a specific tool (“find_movies”) to fetch real-time data.
    • It checks if a Tune Tool is available for this operation.
      • If “Yes”, it proceeds with running the Tune Tool.
      • The tool performs its operation (e.g., searching for current movie listings).
      • Results from the tool are processed by the response model.
      • Finally, the response is delivered back to the user.
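
For orientation, the full round trip typically looks like the sketch below, assuming an OpenAI-compatible client; the base URL, model IDs, and the stubbed tool result are placeholders, and find_movies_tool is the definition shown earlier.

```python
import json
from openai import OpenAI

# Placeholder client setup for an OpenAI-compatible endpoint.
client = OpenAI(base_url="https://proxy.tune.app", api_key="your-api-key")

messages = [{"role": "user", "content": "What movies are playing near me?"}]

# 1. The function model decides whether a tool call is needed.
first = client.chat.completions.create(
    model="your-function-model-id",   # placeholder model ID
    messages=messages,
    tools=[find_movies_tool],         # definition shown earlier
)
call = first.choices[0].message.tool_calls[0]

# 2. Your code (or a Tune Tool) executes the function; stubbed here.
args = json.loads(call.function.arguments)
listings = {
    "location": args.get("location"),
    "movies": ["Example Movie A", "Example Movie B"],
}

# 3. The tool result is handed to the response model for the final answer.
messages.append(first.choices[0].message)
messages.append(
    {"role": "tool", "tool_call_id": call.id, "content": json.dumps(listings)}
)
final = client.chat.completions.create(
    model="your-response-model-id",   # placeholder model ID
    messages=messages,
)
print(final.choices[0].message.content)
```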

3. Custom Functions

  • Request: Similar complex query requiring specific operations, e.g., “What movies are playing…”
  • Process:
    • The request is sent to the function model.
    • The function model identifies that this query requires a specific tool (“find_movies”).
    • It checks if a Tune Tool is available for this operation.
      • If “No”, the request is handled like a normal query, using only the LLM or default methods without any additional tools.
      • A response is generated and sent directly back to the user.

Summary

  1. Normal Query: Simple queries handled entirely by LLM without any external tools.
  2. Search Internet (Tools ON): Complex queries requiring real-time data fetched using available Tune Tools before generating a response.
  3. Custom Functions: Similar complex queries but prioritize custom functions; if no custom tool exists, handled like normal queries using LLM or default methods.
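
The three paths can be condensed into a short dispatch sketch. Every helper below is a hypothetical stand-in for the models and tools described above, not part of the real Tune API.

```python
# A hypothetical dispatch sketch of the three paths above. Every helper
# here is an illustrative stand-in, not part of the real Tune API.

def function_model(query):
    """Stand-in: semantically detect a potential function call in the query."""
    if "movies" in query.lower():
        return {"function": "find_movies", "arguments": {"location": "my city"}}
    return {"function": None, "arguments": {}}

def run_tune_tool(name, arguments):
    """Stand-in for executing a Tune Tool such as Search."""
    return f"results of {name} for {arguments}"

def llm_respond(query):
    """Stand-in for a plain LLM completion."""
    return f"LLM answer to: {query}"

def response_model(query, tool_result):
    """Stand-in for the response model that merges tool output."""
    return f"Answer to '{query}' using {tool_result}"

def handle_request(query, tools_enabled, tune_tools):
    decision = function_model(query)
    # Path 1: tools off or no function call detected -> plain LLM answer.
    if not tools_enabled or decision["function"] is None:
        return llm_respond(query)
    # Path 2: a matching Tune Tool exists -> it runs with priority.
    if decision["function"] in tune_tools:
        result = run_tune_tool(decision["function"], decision["arguments"])
        return response_model(query, result)
    # Path 3: no Tune Tool -> handled like a normal query / custom function.
    return llm_respond(query)

print(handle_request("What is 2 + 2?", tools_enabled=False, tune_tools={"find_movies"}))
print(handle_request("What movies are playing?", tools_enabled=True, tune_tools={"find_movies"}))
print(handle_request("What movies are playing?", tools_enabled=True, tune_tools=set()))
```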

Large Language Models (LLMs) that Support Function Calling

  • OpenAI: GPT-3.5 Turbo, GPT-4
  • Google: Gemini
  • Anthropic: Claude
  • Cohere: Command R, Command R+
  • Mistral: Small, Large, Mixtral 8x22B
  • NexusRaven
  • Gorilla OpenFunctions
  • Nous Hermes 2 Pro