You can interact with the chat completions API via:

HTTP Requests

Run the following command in a terminal (replacing $UNIFY_KEY with your own).

curl -X 'POST' \
    'https://api.unify.ai/v0/chat/completions' \
    -H "Authorization: Bearer $UNIFY_KEY" \
    -H 'Content-Type: application/json' \
    -d '{
    "model": "llama-3-8b-chat@fireworks-ai",
        "messages": [{"role": "user", "content": "Say hello."}]
    }'

The model field is used to specify both the model and the provider, in the format model@provider. The full specification for this REST API endpoint is available here.

See the previous section for an explanation on how to list all available models, providers and endpoints (model + provider pairs).

Requests can easily be made from any language, for example using Python:

import requests

url = "https://api.unify.ai/v0/chat/completions"
headers = {
    "Authorization": "Bearer UNIFY_KEY",
}
payload = {
    "model": "llama-3-8b-chat@together-ai",
    "messages": [{"role": "user", "content": "Say hello."}]
}
response = requests.post(url, json=payload, headers=headers)
print(response.text)

The output from the LLM is accessed via choices[0].message.content. The usage field contains the number of prompt/completion tokens, as well as the total cost of the request, in US$. In the request you can specify other common parameters, like temperature, stream and max_completion_tokens.
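
For instance, continuing the Python example above, the reply and the usage statistics can be pulled out of the parsed JSON (a minimal sketch, assuming the request succeeded):

data = response.json()

# The generated reply
print(data["choices"][0]["message"]["content"])

# Prompt/completion token counts and the total cost of the request in US$
print(data["usage"])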

You can specify all of the parameters that OpenAI supports, but they may not be compatible with every model or provider. As mentioned above, you can find all arguments in the chat completions API reference.
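
For example, a request payload with a couple of these parameters set might look as follows (the values here are purely illustrative):

payload = {
    "model": "llama-3-8b-chat@together-ai",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.7,           # higher values give more varied output
    "max_completion_tokens": 100  # upper bound on generated tokens
}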

Unify Python Package

First, install our Python package with pip install unifyai. You can then quickly get started like so:

import unify
client = unify.Unify("llama-3-8b-chat@fireworks-ai", api_key="UNIFY_KEY")
response = client.generate("hello world!")
print(response)

If you save your API key to the environment variable UNIFY_KEY, then you don't need to specify the api_key argument in the example above.
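
For example, a minimal sketch assuming UNIFY_KEY is already set in your environment:

import unify

# The client reads the key from the UNIFY_KEY environment variable
client = unify.Unify("llama-3-8b-chat@fireworks-ai")
print(client.generate("hello world!"))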

As explained here, you can list the models, providers and endpoints using the functions unify.utils.list_models(), unify.utils.list_providers() and unify.utils.list_endpoints().
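
For instance, to print everything that is currently available:

import unify

print(unify.utils.list_models())     # all supported models
print(unify.utils.list_providers())  # all supported providers
print(unify.utils.list_endpoints())  # all model@provider pairs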

OpenAI Python Package

The Unify API is designed to be compatible with the OpenAI standard, so if you have existing code that uses the OpenAI Python package, it’s straightforward to try out our API.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.unify.ai/v0/",
    api_key="UNIFY_KEY"
)

stream = client.chat.completions.create(
    model="llama-3.1-405b-chat@fireworks-ai",
    messages=[{"role": "user", "content": "Say hi."}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

Remember that the model field needs to contain a string of the form model@provider.
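
A non-streaming call works in the same way; as a sketch, reusing the client from above:

response = client.chat.completions.create(
    model="llama-3.1-405b-chat@fireworks-ai",
    messages=[{"role": "user", "content": "Say hi."}],
)
print(response.choices[0].message.content)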

OpenAI NodeJS Package

Likewise, if you have existing code that uses the OpenAI NodeJS package, trying out our API is just as straightforward.

import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://api.unify.ai/v0/",
  apiKey: "UNIFY_KEY"
});

const stream = await openai.chat.completions.create({
  model: "llama-3.1-405b-chat@fireworks-ai",
  messages: [{"role": "user", "content": "Say hi."}],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

Again, remember that the model field needs to contain a string of the form model@provider.