Tune Studio
Get started with Tune Studio.
Tune Studio is a playground and sandbox for developers integrating large language models (LLMs) into their workflows. Providing a unified dashboard and API, Tune Studio gives you access to over 100 pre-integrated LLMs, including popular models like Llama 3, Qwen, Open Hermes, and Gemma, and it’s easily integrated with additional services like Weights & Biases and Hugging Face.
With Tune Studio, you can:
- Manage and train custom models in the dynamic sandbox environment.
- Test multiple models in the playground.
- Upload custom datasets for fine-tuning.
- Download usage and performance logs.
When you’re satisfied with its performance, you can deploy a model and integrate it with your services using the Tune Studio API.
Security is a priority at Tune Studio, and the platform is SOC 2 Type 2, HIPAA, and ISO 27001 compliant.
For enterprises looking for tailored solutions, Tune Studio Enterprise can be privately hosted to align seamlessly with your hybrid cloud strategy.
For further inquiries or information, please reach out to our sales team.
Signing Up
You can use Tune Studio model APIs and integrations with a free account. You will need a Pro account to fine-tune and deploy private models.
Sign up to use Tune Studio here.
To set up a team, navigate to the members page. With a free account, you can invite up to three members to a team. Upgrade to a Pro account if you need more members in your team. Take a look at the Tune Studio pricing for more information.
Getting Your Organization ID
You will need your Tune Studio Organization ID to use the API: certain API methods require it in an HTTP header.
Find your Organization ID on the settings page, which you can navigate to by clicking your organization icon at the top right and selecting Organization settings from the dropdown.
Getting an API Key
You’ll need an API key to use the Tune Studio API to integrate prompts, deploy tuned models, and configure settings.
Tune Studio generates a default API key for your account on sign-up. Find your API key by clicking your organization icon at the top right and selecting View API keys from the dropdown or by clicking Access Keys from the sidebar in the Organization view.
Create a new API key by clicking + Create new key, entering a name for the new API key, and clicking Create.
Copy an API key by clicking the Copy icon next to the key in the API key table. Remember to securely save the API key and never commit it to a public repository.
Getting a Model ID
You will need the Model ID to specify a model in API requests. The Model ID uniquely identifies each of the many models Tune Studio provides.
When you use the Tune Studio UI, the Model ID is set automatically based on the chosen model. To find a Model ID, open the Models tab in Tune Studio and select a model, whether one of the public models or your own custom GenAI assistant.
On the model page, copy the Model ID using the Copy icon next to it.
Using the Tune Studio API
The Tune Studio API can be used to:
- Produce chat and text completions.
- Retrieve and list models (public models and custom deployed models).
- Create, update, terminate, and delete custom models.
- Start and terminate models.
- Log usage and performance metrics for custom models.
In the following examples, we’ll call the HTTP API directly using the Python `requests` package rather than the `tuneapi` Python package.
Take a look at the API reference page for a comprehensive description of the methods available to use with the Tune Studio API.
Creating a List Models Request
You can view the public LLMs available to use with Tune Studio in the Models tab of the Tune Studio dashboard.
You can also list the public models available to use with Tune Studio using the API.
In the list models request, only one HTTP header is needed: `Content-Type` should be set to `application/json` to indicate that the body of the request contains JSON data.
The body of the list models request has the following required fields:
- A `page` object containing pagination details:
  - `limit` specifying the number of rows to return.
  - `prevPageToken` containing the previous page token (empty allowed).
  - `nextPageToken` containing the next page token (empty allowed).
  - `totalPage` specifying the total number of pages to return.
We can use the `curl` CLI to test the request to the API:
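Here’s a sketch of the request; the endpoint URL is an assumption, so confirm the exact URL on the API reference page:

```shell
curl -X POST "https://studio.tune.app/v1/models" \
  -H "Content-Type: application/json" \
  -d '{
    "page": {
      "limit": 10,
      "prevPageToken": "",
      "nextPageToken": "",
      "totalPage": 0
    }
  }'
```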
The API returns a list of the models that are publicly available to use. Here’s an example of one model:
We can identify the model’s URI (the ID used to specify the model in requests) from this JSON object; in this case, it’s `rohan/Meta-Llama-3-8B-Instruct`. We will use this URI when we build the request for a chat completion in the next section.
List Models Using Python
Here’s how you can list available models using Python.
- In your working directory, create a new file named `list_models.py`.
- Paste the following code into the new file:
- Run the Python script with the following command: `python list_models.py`
You should get a successful response with the list of models:
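For reference, the `list_models.py` script from the steps above might look like the following sketch. The endpoint URL is an assumption (confirm it on the API reference page), and the network call is guarded behind a `TUNE_LIVE` environment variable so nothing is sent unless you opt in:

```python
"""list_models.py: list the public models available on Tune Studio."""
import json
import os

# Assumed endpoint; confirm the exact URL on the API reference page.
URL = "https://studio.tune.app/v1/models"

# Only one header is needed for the list models request.
HEADERS = {"Content-Type": "application/json"}

# Request body: a `page` object with the pagination fields described above.
PAYLOAD = {
    "page": {
        "limit": 10,
        "prevPageToken": "",
        "nextPageToken": "",
        "totalPage": 0,
    }
}


def list_models() -> dict:
    """POST the list models request and return the parsed JSON response."""
    import requests  # third-party: pip install requests

    response = requests.post(URL, headers=HEADERS, json=PAYLOAD, timeout=30)
    response.raise_for_status()
    return response.json()


if os.environ.get("TUNE_LIVE"):
    # Set TUNE_LIVE=1 to send the request against the live API.
    print(json.dumps(list_models(), indent=2))
```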
You can list your privately deployed models in a similar way. See the list models API reference for more info.
Creating a Chat Completion Request
A chat completion request must include the following HTTP headers:
- `Authorization` containing your Tune Studio API key.
- `Content-Type` set to `application/json` to indicate JSON data in the body of the request.
The JSON body of the request must have the following structure:
- `messages` - An object array containing a list of messages that require completion. Each message item requires `role` and `content` fields.
- `model` - The model to be used for the chat completion, specified as a string.
- `stream` - A boolean indicating how the API should return responses. When set to `true`, messages are streamed (returned one at a time). When set to `false`, a complete list of messages is returned.
- `penalty` - A float value that penalizes new tokens that have already appeared in the generated text.
- `max_tokens` - The maximum number of tokens to generate per output sequence.
Here’s an example chat completion API request to send to the Tune Studio API using the `curl` CLI (replace `<YOUR_API_KEY>` with your actual API key):
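A sketch of the full command follows. The endpoint URL and the message contents are illustrative assumptions; confirm the exact URL on the API reference page:

```shell
curl -X POST "https://proxy.tune.app/chat/completions" \
  -H "Authorization: <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "rohan/Meta-Llama-3-8B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "Weather forecast for Barcelona: sunny, high of 24 degrees Celsius."
      },
      {
        "role": "user",
        "content": "Should I pack an umbrella for my trip to Barcelona?"
      }
    ],
    "stream": false,
    "temperature": 0.7,
    "max_tokens": 200
  }'
```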
Here we specify one of the public models available in Tune Studio with the model ID `rohan/Meta-Llama-3-8B-Instruct`, and supply the weather forecast for Barcelona as context. Note that the `temperature` field controls the level of randomness in the LLM response (in Tune Studio, the available range for `temperature` is 0-2).
We get the following response from the model:
We know the API request is working successfully, as we get a valid response from the model.
Creating a Chat Completion Request Using Python
Here’s how you can implement a chat completion request using Python.
- Create a file named `tunestudio_api.py` and open it in your IDE or editor.
- Paste the following code into the new file (replace `YOUR-API-KEY` with your actual API key):
- Run the script with the following command: `python tunestudio_api.py`
LLM responses will differ each time they are generated. Here is the output we received when we ran the script:
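For reference, the `tunestudio_api.py` script described in the steps above might look like the following sketch. The endpoint URL and the message contents are assumptions (confirm the URL on the API reference page), and the network call only fires when the `TUNE_LIVE` environment variable is set:

```python
"""tunestudio_api.py: request a chat completion from Tune Studio."""
import json
import os

# Replace YOUR-API-KEY with your actual API key, or export TUNE_API_KEY.
API_KEY = os.environ.get("TUNE_API_KEY", "YOUR-API-KEY")

# Assumed endpoint; confirm the exact URL on the API reference page.
URL = "https://proxy.tune.app/chat/completions"

HEADERS = {
    "Authorization": API_KEY,
    "Content-Type": "application/json",
}

# Request body with the fields described above; the weather context
# supplied in the system message is illustrative.
PAYLOAD = {
    "model": "rohan/Meta-Llama-3-8B-Instruct",
    "messages": [
        {
            "role": "system",
            "content": "Weather forecast for Barcelona: sunny, 24 degrees Celsius.",
        },
        {
            "role": "user",
            "content": "Should I pack an umbrella for my trip to Barcelona?",
        },
    ],
    "stream": False,
    "temperature": 0.7,
    "max_tokens": 200,
}


def chat_completion() -> dict:
    """POST the chat completion request and return the parsed JSON response."""
    import requests  # third-party: pip install requests

    response = requests.post(URL, headers=HEADERS, json=PAYLOAD, timeout=60)
    response.raise_for_status()
    return response.json()


if os.environ.get("TUNE_LIVE"):
    # Set TUNE_LIVE=1 to send the request against the live API.
    print(json.dumps(chat_completion(), indent=2))
```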
Tune API Python Package
The Tune API Python package makes integrating Tune Studio into your Python projects easy.
Install the Tune API Python package with the following command:
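Assuming the package is published on PyPI under the name `tuneapi` (the name referenced in this section):

```shell
pip install tuneapi
```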
At the time of writing, the `tuneapi` Python package is still in beta. Feel free to test it out, but we recommend calling the HTTP API directly with `requests` for production purposes.
You can request new features and report bugs on the Tune Studio GitHub issues repository.
Tune Studio Advanced Use
Let’s take a closer look at some of the more advanced functionality of Tune Studio.
Chat Completion vs. Text Completion
In the Tune Studio playground, you can switch between Chat and Text Complete interaction modes with LLMs. Selecting Chat sets the interaction mode to chat completion, and Text Complete to text completion.
Chat completion is optimized for back-and-forth interactions, similar to a conversation. This interaction mode maintains context over a thread of messages and can generate responses that depend on the conversation history, so it’s ideal for applications needing more complex dialog, like interactive chatbots or virtual assistants.
Text completion generates single responses based on a prompt, and is useful for tasks that need a single, concise output, for example, completing a story, generating a list, or answering specific questions.
What Are Custom Or Private Models?
Your Tune Studio account gives you access to the available public models that are predeployed on the platform, or you can deploy models from OpenAI, Anthropic, Mistral, Open Router, Groq, Bucket, or Hugging Face if you have an API key for the provider.
Alternatively, you can deploy pre-trained custom models that are tailored to meet your unique needs. These pre-trained language models (LMs) are private, and you can use Tune Studio to manage, train, and fine-tune your LM to enhance performance for specific tasks or domains.
Explore the model options available or deploy a custom model on the models page.
Fine-tuning Custom Models
Tune Studio provides streamlined fine-tuning tools that help you leverage your own data to train your custom model so that it’s optimized for its specific purpose. Fine-tuning improves custom models’ performance and optimizes latency to reduce costs.
When your fine-tuning job is complete, you can download the fine-tuned weights for further use or deploy your fine-tuned model to use in the Tune Studio playground and integrate into your applications and workflows.
Hugging Face Integration
Integrate Tune Studio with your Hugging Face account to access your custom model data and automatically push fine-tuned weights to your repository.
Take a look at the integrations documentation for more information.
Accessing And Utilizing Logs
Logs are useful resources for assessing your models’ performance, identifying problems, and helping to fine-tune results. Use the logs in Tune Studio to monitor performance, debug issues, and optimize responses.
- Enable logs for your model in the model “Settings” section.
- Generated logs for custom private models that you have deployed can be accessed using the API or in the dashboard.
- Logs for integrated models (like ChatGPT and Groq) are currently only accessible in the dashboard.
- See the get logs documentation in the Tune Studio API reference for more information.
How to get support
If you have any issues while using Tune Studio, need further information, or need help with your account or billing, click the Support link at the top-right of the docs to send us an email.
We’d also love to hear your feedback about Tune Studio. Get in touch using the Support link, or find us on the following channels: