POST /v0/custom_endpoint

Creates a custom endpoint. The endpoint must either be a fine-tuned model from one of the supported providers (/v0/providers), in which case the “provider” argument must be set accordingly, or it must support the OpenAI /chat/completions format. To query your custom endpoint, use <endpoint_name>@custom as the endpoint string when making any query. You can list all custom endpoints by querying /v0/endpoints and passing custom as the provider argument.
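As a sketch, registering a custom endpoint reduces to a single authenticated POST request with the query parameters documented below. The base URL, endpoint name, provider URL, and key name here are placeholder assumptions, not values from this page:

```python
import urllib.parse
import urllib.request

# Assumed Unify API base URL — confirm against your account settings.
BASE = "https://api.unify.ai/v0/custom_endpoint"

# All parameter values below are hypothetical placeholders.
params = {
    "name": "llama-3-baseten@custom",        # Unify-side endpoint name
    "url": "https://example.baseten.co/v1",  # provider base URL (OpenAI format)
    "key_name": "my_baseten_key",            # name of a stored API key
    "model_arg": "llama-3.2-90b-chat",       # model name the provider expects
}

query = urllib.parse.urlencode(params)
request = urllib.request.Request(
    f"{BASE}?{query}",
    method="POST",
    headers={"Authorization": "Bearer <token>"},  # substitute your auth token
)

# urllib.request.urlopen(request) would send it; omitted here so the
# sketch runs without network access or a real token.
print(request.full_url)
```

After registration, the endpoint is queried as `llama-3-baseten@custom` like any other endpoint.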

Authorizations

Authorization
string
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Query Parameters

name
string
required

The endpoint name for your custom endpoint, in model@provider format. If it’s a custom endpoint following the OpenAI format, the provider must be @custom; if it’s a fine-tuned model from one of the existing providers, the provider can be specified by prepending custom-, e.g. @custom-anthropic.

url
string
required

Base URL of the endpoint being called. Must support the OpenAI format.

key_name
string
required

Name of the API key that will be passed as part of the query.

model_arg
string

The value passed to the model argument of the underlying API which is being wrapped by Unify. For example, you might name your endpoint llama-3-baseten@custom to distinguish the custom endpoint within Unify, but under the hood need to pass llama-3.2-90b-chat as the model to the Baseten endpoint.
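Conceptually, Unify substitutes the registered model_arg before forwarding your request to the provider. A toy illustration of that mapping, reusing the hypothetical names from the parameter description above:

```python
# Registered custom endpoints: Unify-side name -> provider-side fields.
# The URL is a hypothetical placeholder; the names mirror the example
# in the model_arg description.
registry = {
    "llama-3-baseten@custom": {
        "url": "https://example.baseten.co/v1",
        "model_arg": "llama-3.2-90b-chat",
    }
}

def resolve(endpoint: str) -> dict:
    """Return the provider-side request fields for a Unify endpoint name."""
    entry = registry[endpoint]
    return {"url": entry["url"], "model": entry["model_arg"]}

# You query with the Unify-side name; the provider receives model_arg.
print(resolve("llama-3-baseten@custom")["model"])  # llama-3.2-90b-chat
```

This is only an illustration of the name-to-model substitution, not Unify's internal implementation.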