With so many LLMs and providers constantly coming onto the scene, each is increasingly striving to provide unique value to end users, which means the features offered behind each API often diverge.

For example, some models support RAG directly in the API, while others support function calling, tool use, image processing, audio, structured output (such as JSON mode), and many other increasingly complex modes of operation.

Unified Arguments

Our API builds on top of and extends LiteLLM under the hood, always using the latest PyPI version. As a starting point, we recommend you go through the Input Params in their chat completions docs: our unified arguments are an alias of these. In general, all models and providers are unified to adopt the OpenAI Chat Completions API standard, as can be seen in our own chat completion API reference. We also extend support to several other providers not handled by LiteLLM, such as Lepton AI and OctoAI.
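For instance, standard OpenAI arguments such as temperature and max_tokens can be passed unchanged regardless of the underlying provider. Below is a minimal sketch (the endpoints are illustrative):

import unify

# The same OpenAI-style arguments work across different providers.
for endpoint in ("gpt-4o@openai", "claude-3.5-sonnet@anthropic"):
    client = unify.Unify(endpoint)
    print(client.generate("Hello!", temperature=0.7, max_tokens=100))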

Tool Use Example

OpenAI and Anthropic have different interfaces for tool use. Since our API adheres to the OpenAI standard, we accept tools as specified by this standard.

This is the default function calling example from OpenAI, working with an Anthropic model:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
            },
        },
    }
]

import unify

client = unify.Unify("claude-3.5-sonnet@aws-bedrock")
response = client.generate(
    "What is the current temperature of New York, San Francisco and Chicago?",
    tools=tools,
    tool_choice="auto",
    message_content_only=False,
)
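Since message_content_only=False is passed, the full response object should be returned rather than just the message text, so the requested tool calls can be inspected directly. A minimal sketch, assuming the response follows the OpenAI ChatCompletion schema:

# Each tool call carries the function name and its JSON-encoded arguments.
for tool_call in response.choices[0].message.tool_calls:
    print(tool_call.function.name, tool_call.function.arguments)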

Vision Example

Unify also supports multi-modal inputs. Below are a couple of examples analyzing the content of images.

Firstly, let’s use gpt-4o to work out what’s in this picture:

import unify
client = unify.Unify("gpt-4o@openai")
response = client.generate(
  messages=[
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "What’s in this image?"},
        {
          "type": "image_url",
          "image_url": {
            "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
          },
        },
      ],
    }
  ],
  max_completion_tokens=300,
)

print(response)

We get something like the following:

The image depicts a serene landscape featuring a wooden pathway that extends into the distance, cutting through a lush green field.
The field is filled with tall grasses and a variety of low-lying shrubs and bushes.
In the background, there are scattered trees, and the sky above is a clear blue with some wispy clouds.
The scene exudes a calm, peaceful, and natural atmosphere.

Let’s do the same with claude-3-sonnet, with a different image this time:

import unify
client = unify.Unify("claude-3-sonnet@anthropic")
response = client.generate(
  messages=[
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "What’s in this image?"},
        {
          "type": "image_url",
          "image_url": {
            "url": "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg",
          },
        },
      ],
    }
  ],
  max_completion_tokens=300,
)

print(response)

We get something like the following:

The image shows a close-up view of an ant. The ant appears to be a large black carpenter ant species.
The ant is standing on a light-colored surface, with its mandibles open and antennae extended.
The image captures intricate details of the ant's body segments and legs.
The background has a reddish-brown hue, creating a striking contrast against the dark coloration of the ant.
This macro photography shot highlights the remarkable structure and form of this small insect in great detail.

Platform Arguments

In addition to the unified arguments, we also accept other arguments specific to our platform, referred to as Platform Arguments (see the sketch after this list):

  • signature: specifies how the API was called (Unify Python client, NodeJS client, console, etc.).
  • use_custom_keys: specifies whether to use your own custom keys or the unified keys with the provider.
  • tags: marks a prompt with string metadata which can be used for filtering later on.
  • drop_params: drops any arguments that aren't supported by the selected provider, rather than raising an error.
  • region: the region where the endpoint is accessed; only relevant for certain providers such as vertex-ai and aws-bedrock.
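All of these can be passed to the generate function alongside the unified arguments. Below is a minimal sketch (the values shown are illustrative):

import unify

client = unify.Unify("gpt-4o@openai")
client.generate(
    "Hello!",
    tags=["experiment-1"],  # string metadata for later filtering
    use_custom_keys=False,  # use the unified keys rather than your own
    drop_params=True,       # drop arguments the provider doesn't support
)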

Passthrough Arguments

The passthrough arguments are not handled by Unify at all; they are passed through directly to the backend provider, without any modification. We build upon LiteLLM’s Provider-specific Params for handling these arguments, so “passthrough arguments” is effectively an alias for LiteLLM’s “Provider-specific Params”.

There are three types of passthrough arguments:

  1. extra headers, which are passed to the curl request like so:

    curl --request POST \
    --url https://api.unify.ai/v0/chat/completions \
    --header "<header_name>: <value>"
    ...
    
  2. extra query parameters, which are passed to the curl request like so:

    curl --request POST \
    --url https://api.unify.ai/v0/chat/completions?<parameter>=<value> \
    ...
    
  3. extra json properties, which are passed to the curl request like so:

    curl --request POST \
    --url https://api.unify.ai/v0/chat/completions \
    --data '{
    "<key>": "<value>"
    ...
    

In the Python client, these extra arguments are handled by the extra_headers argument, extra_query argument, and direct **kwargs of the generate function, respectively.
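For instance, all three kinds of passthrough arguments can be combined in a single call. Below is a minimal sketch; the query parameter is a placeholder, while the header and top_k values are the Anthropic-specific examples discussed later in this section:

import unify

client = unify.Unify("claude-3-haiku@anthropic")
client.generate(
    "hello world!",
    extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"},  # 1. extra headers
    extra_query={"<parameter>": "<value>"},  # 2. extra query parameters
    top_k=5,  # 3. extra json properties, via **kwargs
)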

For arguments which are part of both the OpenAI standard and another provider’s standard, only the OpenAI behaviour is supported.

For example, messages is an argument used in the APIs of both OpenAI and Anthropic, as can be seen here and here respectively. Any provider-specific aspects of this argument (which differ from the OpenAI behaviour) are therefore not supported; only the OpenAI behaviour is.

As an example, the following works with the Anthropic SDK directly:

import anthropic
import base64
import httpx

image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_media_type = "image/jpeg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

client = anthropic.Anthropic()
response = client.messages.create(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What’s in this image?"},
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image_media_type,
                        "data": image_data,
                    },
                },
            ],
        }
    ],
    max_tokens=300,
    model="claude-3-opus-20240229"
)
print(response)

But an example using the same formatting would fail with our API:

import unify
import base64
import httpx

image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_media_type = "image/jpeg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

client = unify.Unify("claude-3-sonnet@anthropic")
response = client.generate(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What’s in this image?"},
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image_media_type,
                        "data": image_data,
                    },
                },
            ],
        }
    ],
    max_completion_tokens=300,
)
print(response)

To make use of vision features, the OpenAI format must be adopted in the messages argument, as per the working claude-3-sonnet vision example earlier in this section.
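For instance, the failing example above can be rewritten in the OpenAI format by embedding the base64 payload in a data URL, which the OpenAI image_url content type accepts. A sketch:

import unify
import base64
import httpx

image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

client = unify.Unify("claude-3-sonnet@anthropic")
response = client.generate(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What’s in this image?"},
                {
                    # OpenAI format: base64 images are sent as data URLs
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
                },
            ],
        }
    ],
    max_completion_tokens=300,
)
print(response)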

Anthropic Example

Anthropic exposes the top_k argument, which isn’t provided by OpenAI. If you include this argument, it will be passed straight through to the provider. If you send it to a provider that does not support top_k, you will get an error.

curl --request POST \
  --url 'https://api.unify.ai/v0/chat/completions' \
  --header "Authorization: Bearer $UNIFY_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "claude-3.5-sonnet@anthropic",
    "messages": [
        {
            "content": "Tell me a joke",
            "role": "user"
        }
    ],
    "top_k": 5,
    "max_tokens": 1024
}'

This can also be done in the Unify Python SDK, as follows:

client = unify.Unify("claude-3-haiku@anthropic")
client.generate("hello world!", top_k=5)

The same is true for headers. Features supported by providers outside of the OpenAI standard are sometimes released as beta features, which can be accessed via specific headers, as explained in this tweet from Anthropic.

These headers can be passed directly to the Unify API like so:

curl --request POST \
  --url 'https://api.unify.ai/v0/chat/completions' \
  --header "Authorization: Bearer $UNIFY_KEY" \
  --header 'Content-Type: application/json' \
  --header 'anthropic-beta: max-tokens-3-5-sonnet-2024-07-15' \
  --data '{
    "model": "claude-3.5-sonnet@anthropic",
    "messages": [
        {
            "content": "Tell me a joke",
            "role": "user"
        }
    ]
}'

Again, this can also be done in the Unify Python SDK as follows:

client = unify.Unify("claude-3-haiku@anthropic")
client.generate("hello world!", extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"})

Python SDK

All of these arguments (unified, platform and passthrough arguments) are supported in the Python SDK. The unified and platform arguments are explicitly mirrored in the generate function of the various derivative clients, such as Unify and AsyncUnify.
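As a minimal sketch of the async variant, assuming AsyncUnify exposes the same generate signature as an awaitable:

import asyncio
import unify

async def main():
    # AsyncUnify mirrors the synchronous generate signature.
    client = unify.AsyncUnify("gpt-4o@openai")
    print(await client.generate("Hello!", temperature=0.5))

asyncio.run(main())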

Default Arguments

When querying LLMs, you often want to keep many aspects of your prompt fixed, and only change a small subset of the prompt on each subsequent call.

For example, you might want to fix the temperature, the system message, and the tools available, whilst passing different user messages coming from a downstream application. All of the clients in unify make this very simple via default arguments, which can be specified in the constructor and can also be set at any time using setter methods.

For example, the following code will pass temperature=0.5 to all subsequent requests, without needing to be repeatedly passed into the .generate() method.

client = unify.Unify("claude-3-haiku@anthropic", temperature=0.5)
client.generate("Hello world!")
client.generate("What a nice day.")

All parameters can also be retrieved by getters, and set via setters:

client = unify.Unify("claude-3-haiku@anthropic", temperature=0.5)
print(client.temperature)  # 0.5
client.set_temperature(1.0)
print(client.temperature)  # 1.0

Passing a value to the .generate() method will overwrite the default value specified for the client.

client = unify.Unify("claude-3-haiku@anthropic", temperature=0.5)
client.generate("Hello world!") # temperature of 0.5
client.generate("What a nice day.", temperature=1.0) # temperature of 1.0

Prompt instances are explained in the next section, but it’s worth mentioning here that default prompts can also be both extracted and set for clients. When retrieved, the default prompt is constructed dynamically from all of the default values mentioned above:

import unify
client = unify.Unify("gpt-4o@openai", temperature=0.5, n=2)
assert client.default_prompt.temperature == 0.5
assert client.default_prompt.n == 2

In the other direction, setting the default prompt will update the various default parameters explained above:

import unify
client = unify.Unify("gpt-4o@openai", temperature=0.5, n=2)
prompt = unify.Prompt(temperature=0.4)
client.set_default_prompt(prompt)
assert client.temperature == 0.4
assert client.n is None

Calls to set default arguments can also be chained together like so, as each set_<some_parameter> call returns self:

import unify
client = unify.Unify("gpt-4o@openai")
client.set_temperature(0.5).set_n(2)
assert client.temperature == 0.5
assert client.n == 2

Feedback

If you believe any of these arguments could be supported by a certain model or provider but currently isn’t, then feel free to let us know on discord and we’ll get it supported as soon as possible! ⚡

Similarly, if you would like to see new arguments and features supported in the platform, then let us know! We always love to hear how we can improve features and functionality.