Text Completion
Generate completions for a prompt in a text context.
Authorizations
Tune API keys are the preferred way to authenticate with the API. You can create an API key from your Tune Studio profile under 'Access Keys' in the sidebar.
Headers
Organization ID. If not provided, the user's default organization will be used.
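As a hedged illustration, the API key and organization ID described above would typically be sent as HTTP headers on every request. The header names used below (Authorization, X-Org-Id) are assumptions for this sketch and are not confirmed by this page.

```python
# Minimal sketch of the request headers, assuming the API key goes in an
# Authorization header and the organization in an X-Org-Id header.
headers = {
    "Authorization": "YOUR_TUNE_API_KEY",  # assumed header name for the API key
    "X-Org-Id": "YOUR_ORG_ID",             # optional; omit to use your default organization
    "Content-Type": "application/json",
}
```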
Body
The name of the model to use for completion, in the form username/model-name.
Maximum number of tokens to generate per output sequence.
Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
Number of output sequences to return for the given prompt.
If true, the API will return a stream of responses, one at a time. If false, the API will return all responses at once.
List of tokens at which the API stops generating further output.
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
Dictionary of logit bias values for specific tokens. For example, you can use this to force the model to generate a specific token by setting the bias value to a large positive number.
If true, the API will echo the prompt in the response.
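The sketch below exercises the body parameters described above in a single request. It is an illustration only: the endpoint URL, header names, and JSON field names are assumptions based on common OpenAI-style completion APIs and may differ from the actual Tune Studio API.

```python
import requests

API_URL = "https://proxy.tune.app/v1/completions"  # hypothetical endpoint path
headers = {
    "Authorization": "YOUR_TUNE_API_KEY",  # assumed header name for the API key
    "X-Org-Id": "YOUR_ORG_ID",             # assumed header name; optional
    "Content-Type": "application/json",
}

# Field names below are assumptions; descriptions match the parameters above.
payload = {
    "model": "username/model-name",   # model to use for completion
    "prompt": "Write a haiku about the sea.",
    "max_tokens": 64,                 # max tokens per output sequence
    "temperature": 0.7,               # 0 means greedy sampling
    "top_p": 0.95,                    # must be in (0, 1]; 1 considers all tokens
    "n": 1,                           # number of output sequences to return
    "stream": False,                  # set True to stream responses
    "stop": ["\n\n"],                 # stop generating at these tokens
    "presence_penalty": 0.0,          # > 0 encourages new tokens
    "frequency_penalty": 0.0,         # > 0 penalizes frequent tokens
    "logit_bias": {},                 # e.g. {"1234": 100} to force a specific token
    "echo": False,                    # echo the prompt in the response
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```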
Response
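When stream is true, the response arrives incrementally instead of as a single body. Below is a hedged sketch of consuming such a stream, assuming the API emits Server-Sent-Events-style "data: {...}" lines terminated by "data: [DONE]"; the URL, header names, field names, and stream format are all assumptions, not confirmed by this page.

```python
import json
import requests

API_URL = "https://proxy.tune.app/v1/completions"  # hypothetical endpoint path
headers = {
    "Authorization": "YOUR_TUNE_API_KEY",  # assumed header name for the API key
    "Content-Type": "application/json",
}
payload = {
    "model": "username/model-name",
    "prompt": "Write a haiku about the sea.",
    "max_tokens": 64,
    "stream": True,  # ask the API to return partial results as they are generated
}

with requests.post(API_URL, headers=headers, json=payload, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = line.decode("utf-8")
        if chunk.startswith("data: "):
            chunk = chunk[len("data: "):]
        if chunk == "[DONE]":
            break
        # The exact shape of each streamed JSON chunk is an assumption here;
        # print it as-is for inspection.
        print(json.loads(chunk), flush=True)
```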