FAQs
Frequently Asked Questions
Organization
- `x-org-id`: can be found in the Organization Settings.
- `x-tune-key`: can be found in Access Keys.

See Authentication and Organizations for a more detailed explanation.
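As a minimal sketch of how these two headers are attached to an API request, assuming an OpenAI-style endpoint under `https://proxy.tune.app` (the base URL used later on this page) — the exact path, payload, and placeholder values below are assumptions, not confirmed API details:

```python
# Sketch: attaching the x-org-id and x-tune-key headers to a request.
# The endpoint path and payload are assumptions; check the API
# reference for the exact URL and body your call needs.
import json
import urllib.request

ORG_ID = "my-org-id"        # from Organization Settings
TUNE_KEY = "my-access-key"  # from Access Keys

headers = {
    "x-org-id": ORG_ID,     # identifies your organization
    "x-tune-key": TUNE_KEY, # authenticates the request
    "Content-Type": "application/json",
}

req = urllib.request.Request(
    "https://proxy.tune.app/chat/completions",  # assumed endpoint
    data=json.dumps({"model": "MODEL_ID"}).encode(),
    headers=headers,
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```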
- Click on the “Members” tab within the “Organization” section.
- Click the “Invite Members” button and enter the email addresses of the people you want to invite.
- Choose their role within the organization (e.g., Owner, Reader) and click “Send Invite.”
Two-factor authentication (2FA) adds an extra layer of security to your account by requiring a second factor, such as a code from your phone, in addition to your password when logging in. To require 2FA for all team members, go to the “Settings” tab and toggle the “Require two-factor authentication for everyone in the organization” switch to “on.”
Datasets
You can use Playground and select threads from the first dropdown to add new data to your existing dataset.
Using a Model’s Dataset for Fine-Tuning:
- Select the Model: Choose the model that you want to use the data from.
- Download API Logs: Download the API logs associated with the selected model.
- Use for Fine-Tuning: You can then upload these logs as a dataset for your fine-tuning jobs.
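The downloaded logs can be reshaped into a training file in a few lines. A minimal sketch, assuming the logs use OpenAI-style `messages` records — the actual field names in your downloaded logs may differ:

```python
# Sketch: turning downloaded API logs into a fine-tuning dataset file.
# The log structure below ("messages" with role/content pairs) is an
# assumption; adapt the field names to the logs you actually download.
import json

logs = [
    {"messages": [
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
    ]},
]

# Write one JSON record per line (JSONL), a common fine-tuning format.
with open("dataset.jsonl", "w") as f:
    for record in logs:
        f.write(json.dumps(record) + "\n")

# Read it back to confirm the file is well-formed.
with open("dataset.jsonl") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))  # number of training examples
```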
To Update a Fine-Tuning Job with New Data:
Simply create a new fine-tuning job with the new data. This ensures that your model is fine-tuned on the latest information.
Models
Check out the public model list for the latest models available on Tune Studio.
If you’re looking for a particular model that’s not on our list, don’t hesitate to reach out to support.
We provide Nvidia L4 GPUs.
If you have any specific hardware requests, feel free to reach out to support.
If a model doesn’t receive any requests for a certain period, it will automatically shut down to conserve resources and reduce costs. This feature helps optimize resource utilization and minimize expenses.
Customizing Auto-Shutdown and Inactive Shutdown:
By default, your deployed model is set to auto-shutdown after 4 hours of inactivity and inactive shutdown after 2 hours. However, you can easily customize these settings to fit your needs.
When deploying your model, you can adjust the auto-shutdown and inactive shutdown timings to your preference. Alternatively, if your model is already deployed, simply navigate to the “Settings” section and choose the “Auto-Shutdown” or “Inactive Shutdown” options to customize the timings.
Model Deletion Timeline:
When a model is deleted, it undergoes a hard deletion process, which means that it is permanently removed and cannot be retrieved. Please note that this is a permanent action and should be taken with caution.
If you’re looking to reduce costs, we recommend terminating the model instead of deleting it. This way, you can easily restart the model when needed.
If the model was deployed through a job, you can access the weight files for up to 7 days after deletion. To retrieve the weight files, please reach out to support.
Deploying a Fine-Tuned Model on Tune Studio:
Once you’ve successfully fine-tuned your model and the job status is “Done”, you can easily deploy it on Tune Studio. To do so, simply click on the job and navigate to the “Overview” section. There, you’ll find the option to deploy your model. Click on it, and you’re all set!
If a model from Hugging Face or the model list failed to start, and the server logs say something along the lines of “cannot access gated repo” or “access to model is restricted”, you can try the following steps:
- Hugging Face integration: Make sure your Hugging Face account is integrated with Tune Studio.
- Model Access: Access to many models is gated, so you need to request it on the respective Hugging Face model page.
If you encounter any issues with deploying a model, please reach out to our support team.
Finetune Jobs
We offer a wide range of base models for fine-tuning, including:
- LLaMA 2 7B Base
- LLaMA 2 7B Chat
- CodeLLaMA 7B Base
- CodeLLaMA 7B Instruct
- EleutherAI/llemma_7b
- Finance-LLM
- Law-LLM
- Biomedicine-LLM
- LLaMA 2 13B Base
- LLaMA 2 13B Chat
- CodeLLaMA 13B Base
- LLaMA 3 8B Instruct
- Mistral-7B-Instruct-v0.2
- OpenChat 3.5 m7B 16k
- OpenHermes 2.5 - Mistral 7B
- Mixtral-8x7B-Instruct
- Phi 2
If you’re looking for a particular model that’s not on our list, reach out to support.
You can upload your own custom datasets, use datasets from Hugging Face, or choose from publicly available ones (e.g., vicgalle/alpaca-gpt4, OpenAssistant/oasst1, and more).
This depends on factors like model size, dataset size, hardware, and chosen hyperparameters (number of epochs, learning rate).
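As a rough illustration of how these factors interact: the total number of optimizer steps grows with epochs and dataset size and shrinks with batch size, and training time scales roughly with that step count. The numbers below are illustrative, not Tune Studio defaults:

```python
# Back-of-the-envelope step count for a fine-tuning run.
# All values here are illustrative assumptions, not platform defaults.
dataset_size = 10_000  # training examples
epochs = 3             # passes over the dataset
batch_size = 8         # examples per optimizer step

steps_per_epoch = dataset_size // batch_size
total_steps = steps_per_epoch * epochs
print(total_steps)  # 3750
```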
You can use logs to stay updated. To view logs, simply:
- Click on the job you want to monitor
- Click on the “Logs” tab
There, you’ll find all the information you need to track the progress of your fine-tuning job.
Rerunning a Job:
You can rerun a job in two scenarios:
- Latest Dataset: If you’ve updated your dataset and want to rerun the job with the new data.
- Unexpected Error: If an error occurred during the initial run, and you want to retry the job.

To rerun a job, simply terminate the original job and click the “Retry Job” button. This lets you restart the job with the updated dataset or retry it to resolve any errors that occurred.
Reasons could include:
- Insufficient resources: Not enough GPU or compute power.
- Dataset issues: Errors or formatting problems in your dataset.
- Hyperparameter choices: A learning rate that is too high, etc.
- Platform errors: Bugs or temporary issues with the platform. For any platform related issues, reach out to support.
To access the fine-tuned weights, follow these steps:
- Click on any successful job that has a status of “Done”.
- Navigate to the “Files” section of that job.
- There, you’ll find the fine-tuned weights.
In the Files tab, you might notice that there are two separate config.json files. Here’s what they represent:
- Adaptor Config: One of the config.json files belongs to the Adaptor, which is responsible for merging the weights.
- Fully Merged Weights Config: The other config.json file accompanies the fully merged weights.
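To make the distinction concrete, here is a toy illustration of what merging an adapter means: a low-rank update (as in LoRA-style adapters) is folded into the base weight matrix, producing the fully merged weights. This is plain Python on tiny matrices for illustration only, not the platform's actual merge code:

```python
# Toy LoRA-style merge: W_merged = W + scaling * (B @ A).
# All matrices and the scaling factor are illustrative values.
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

W = [[1.0, 0.0], [0.0, 1.0]]   # base weight (identity, for illustration)
A = [[0.5, 0.5]]               # adapter "down" projection, rank 1
B = [[1.0], [2.0]]             # adapter "up" projection
scaling = 2.0                  # alpha / rank

delta = matmul(B, A)           # rank-1 update, same shape as W
W_merged = [
    [w + scaling * d for w, d in zip(w_row, d_row)]
    for w_row, d_row in zip(W, delta)
]
print(W_merged)  # [[2.0, 1.0], [2.0, 3.0]]
```

After merging, the update is baked into the weights, which is why the merged checkpoint ships with its own config.json and no longer needs the adapter at inference time.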
Integrations
To use Studio through LangChain, you can follow these steps:

- Install langchain-openai

```shell
pip install langchain-openai
```

- Export your Tune Studio key as the OpenAI key

```shell
export OPENAI_API_KEY="<Tune Studio API Key>"
```

- Initialize ChatOpenAI()

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://proxy.tune.app/",
    model="MODEL_ID",
)
llm.invoke("how can langsmith help with testing")
```
- Profit 💯 Try building a translator app using LangChain + Tune + Streamlit. Follow the LangChain tutorial here.
To use Studio through LlamaIndex, you can follow these steps:

- Install llama-index

```shell
pip install llama-index
pip install llama-index-llms-openrouter
```

- Export your Tune Studio key as the OpenRouter key

```shell
export OPENROUTER_API_KEY="<Tune Studio API Key>"
```

- Initialize the llm using OpenRouter

```python
from llama_index.llms.openrouter import OpenRouter

llm = OpenRouter(
    api_base="https://proxy.tune.app",
    max_tokens=256,
    model="MODEL_ID",
)
response = llm.complete("Paul Graham is ")
print(response)
```
- Profit 💯 Follow along with the LlamaIndex tutorial here
Additional
No. At Tune AI, we take data privacy and security very seriously. We have strict policies in place to ensure that your models, data, and code remain confidential and are never shared with anyone. We are SOC 2, ISO 27001, and HIPAA compliant, ensuring that our infrastructure, data storage, and handling practices meet the highest standards of security and privacy.
Share Your Feedback about Tune AI!
We’re all ears! 🗣️ We’d love to hear your thoughts and feedback about Tune AI. To share your input, simply head over to our Support section and let us know what’s on your mind. Our talented product team regularly reviews feedback and takes action to improve the product, making it better for you 😊
We support all issues related to the Tune AI product, account, and billing 😊
We aim to respond to your inquiries within 3 days. Our support team is dedicated to providing timely assistance, and we’ll do our best to get back to you as quickly as possible.