Arguments
With so many LLMs and providers constantly coming onto the scene, each is increasingly striving to provide unique value to end users, and this means that there are often diverging features offered behind the API. For example, some models support RAG directly in the API, while others support function calling, tool use, image processing, audio, structured output (such as JSON mode), and many other increasingly complex modes of operation.
Unified Arguments
Our API builds on top of and extends LiteLLM under the hood. As a starting point, we recommend you go through the Input Params in their chat completions docs to find the unified arguments we support ("unified arguments" is effectively an alias for their "Input Params"). We always use the latest PyPI version. In general, all models and providers are unified to adopt the OpenAI Chat Completions API standard, as can be seen in our own chat completion API reference. We also extend support to several providers not handled by LiteLLM, such as Lepton AI and OctoAI.
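Since everything adopts the OpenAI Chat Completions standard, any OpenAI-compatible client can be pointed at the unified endpoint. Below is a minimal sketch; the base URL and model id are assumptions, and any supported model@provider pair would work the same way:

```python
# A minimal sketch of a unified chat completions call.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.unify.ai/v0",  # assumed Unify base URL
    api_key="UNIFY_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o@openai",  # any supported model@provider pair
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,  # a unified argument, handled identically for all providers
)
print(response.choices[0].message.content)
```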
Tool Use Example
OpenAI and Anthropic have different interfaces for tool use. Since our API adheres to the OpenAI standard, we accept tools as specified by this standard.
This is the default function calling example from OpenAI, working with an Anthropic model:
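A sketch of what this looks like, assuming the OpenAI client is pointed at Unify's base URL (the get_current_weather schema is OpenAI's standard function calling example):

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.unify.ai/v0", api_key="UNIFY_API_KEY")

# OpenAI's default function calling example, in the OpenAI tools format
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

# The same OpenAI-format tools, sent to an Anthropic model
response = client.chat.completions.create(
    model="claude-3-opus@anthropic",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```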
Vision Example
Unify also supports multi-modal inputs. Below are a couple of examples analyzing the content of images.
Firstly, let’s use gpt-4o to work out what’s in this picture:
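A sketch of the request; the image URL is a placeholder for whichever image you want analyzed:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.unify.ai/v0", api_key="UNIFY_API_KEY")

response = client.chat.completions.create(
    model="gpt-4o@openai",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                # placeholder; any publicly accessible image URL works
                "image_url": {"url": "https://example.com/image.jpg"},
            },
        ],
    }],
)
print(response.choices[0].message.content)
```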
We get something like the following:
Let’s do the same with claude-3-sonnet, with a different image this time:
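The request is identical apart from the model id and the (again placeholder) image URL, since the OpenAI vision format is unified across providers:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.unify.ai/v0", api_key="UNIFY_API_KEY")

response = client.chat.completions.create(
    model="claude-3-sonnet@anthropic",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                # placeholder; swap in your own image URL
                "image_url": {"url": "https://example.com/another-image.jpg"},
            },
        ],
    }],
)
print(response.choices[0].message.content)
```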
We get something like the following:
Platform Arguments
In addition to the unified arguments, we also accept other arguments specific to our platform, referred to as Platform Arguments:

- signature: specifies how the API was called (Unify Python client, NodeJS client, Console, etc.).
- use_custom_keys: specifies whether to use custom keys or the unified keys with the provider.
- tags: marks a prompt with string metadata which can be used for filtering later on.
- drop_params: drops any passed arguments which aren't supported by the chosen provider, rather than erroring.
- region: the region where the endpoint is accessed, only relevant for certain providers like vertex-ai and aws-bedrock.
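A sketch of how these might be passed via the Python client (the user_message argument name is an assumption; platform arguments being mirrored in the generate function is described in the Python SDK section below):

```python
import unify

client = unify.Unify("gpt-4o@openai")

response = client.generate(
    user_message="Hello!",
    tags=["demo", "greeting"],  # string metadata, for filtering later on
    use_custom_keys=True,       # use your own provider key rather than the unified keys
    drop_params=True,           # drop any arguments the provider doesn't support
)
```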
Passthrough Arguments
The passthrough arguments are not handled by Unify at all; they are passed through directly to the backend provider, without any modification. We build upon LiteLLM's Provider-specific Params for handling these arguments ("passthrough arguments" is effectively an alias for LiteLLM's "Provider-specific Params").
There are three types of passthrough arguments, each of which is shown in the Python sketch below:

- extra headers, which are added to the headers of the underlying request
- extra query parameters, which are appended to the URL of the underlying request
- extra json properties, which are added to the JSON body of the underlying request
In the Python client, these extra arguments are handled by the extra_headers argument, the extra_query argument, and the direct **kwargs of the generate function, respectively.
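A sketch combining all three (the header and query values are hypothetical; top_k is a real Anthropic parameter):

```python
import unify

client = unify.Unify("claude-3-opus@anthropic")

response = client.generate(
    user_message="Hello!",
    extra_headers={"x-custom-header": "value"},  # hypothetical extra header
    extra_query={"beta": "true"},                # hypothetical extra query parameter
    top_k=40,  # extra json property, passed via **kwargs straight to the provider
)
```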
For arguments which are part of both the OpenAI standard and another provider's standard, only the OpenAI behaviour is supported. For example, messages is an argument used in the APIs of both OpenAI and Anthropic, as can be seen here and here respectively. Any provider-specific aspects of this argument (which differ from the OpenAI behaviour) are therefore not supported.
As an example, the following works when calling Anthropic directly:
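A sketch using Anthropic's own Python SDK, with its native "source"-style image content blocks:

```python
import anthropic

client = anthropic.Anthropic()

image_data = "..."  # placeholder for a base64-encoded image string

# Anthropic's native format nests image data under a "source" block
message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": image_data,
                },
            },
            {"type": "text", "text": "What is in this image?"},
        ],
    }],
)
```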
But the same formatting would fail with our API:
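A sketch of the failing call; the messages keyword on generate is an assumption:

```python
import unify

client = unify.Unify("claude-3-sonnet@anthropic")

image_data = "..."  # placeholder for a base64-encoded image string

# This raises an error: the "source"-style image block is Anthropic-specific,
# and only the OpenAI "image_url" format is accepted in messages
client.generate(messages=[{
    "role": "user",
    "content": [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/jpeg",
                "data": image_data,
            },
        },
        {"type": "text", "text": "What is in this image?"},
    ],
}])
```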
To make use of vision features, the OpenAI format must be adopted in the messages argument, as per the working Anthropic vision example above.
Anthropic Example
Anthropic exposes the top_k argument, which isn't provided by OpenAI. If you include this argument, it will be sent straight through to the provider. If you send it to a provider that does not support top_k, you will get an error.
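A sketch over the raw REST API (the endpoint URL is an assumption; the payload otherwise follows the OpenAI chat completions shape):

```python
import requests

response = requests.post(
    "https://api.unify.ai/v0/chat/completions",  # assumed endpoint
    headers={"Authorization": "Bearer UNIFY_API_KEY"},
    json={
        "model": "claude-3-opus@anthropic",
        "messages": [{"role": "user", "content": "Hello!"}],
        "top_k": 40,  # not in the OpenAI standard; forwarded untouched to Anthropic
    },
)
print(response.json())
```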
This can also be done in the Unify Python SDK, as follows:
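A sketch (the user_message argument name is an assumption):

```python
import unify

client = unify.Unify("claude-3-opus@anthropic")

# top_k is not a unified argument, so it is passed through via **kwargs
response = client.generate(user_message="Hello!", top_k=40)
```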
The same is true for headers. Features supported by providers outside of the OpenAI standard are sometimes released as beta features, which can be accessed via specific headers, as explained in this tweet from Anthropic.
These headers can be queried directly from the Unify API like so:
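A sketch over the raw REST API; the header value is hypothetical, standing in for whichever beta feature the provider announces:

```python
import requests

response = requests.post(
    "https://api.unify.ai/v0/chat/completions",  # assumed endpoint
    headers={
        "Authorization": "Bearer UNIFY_API_KEY",
        "anthropic-beta": "beta-feature-name",  # hypothetical beta header value
    },
    json={
        "model": "claude-3-5-sonnet@anthropic",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json())
```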
Again, this can also be done in the Unify Python SDK as follows:
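A sketch using the extra_headers argument mentioned above (header value again hypothetical):

```python
import unify

client = unify.Unify("claude-3-5-sonnet@anthropic")

response = client.generate(
    user_message="Hello!",
    extra_headers={"anthropic-beta": "beta-feature-name"},  # hypothetical value
)
```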
Python SDK
All of these arguments (unified, platform and passthrough) are supported in the Python SDK. The unified and platform arguments are explicitly mirrored in the generate function of the various derivative clients, such as Unify and AsyncUnify.
Default Arguments
When querying LLMs, you often want to keep many aspects of your prompt fixed, and only change a small subset on each subsequent call.
For example, you might want to fix the temperature, the system message, and the tools available, whilst passing different user messages coming from a downstream application. All of the clients in Unify make this very simple via default arguments, which can be specified in the constructor, and can also be set at any time using setter methods.
For example, the following code will pass temperature=0.5 to all subsequent requests, without it needing to be repeatedly passed into the .generate() method.
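A sketch (the model id is arbitrary):

```python
import unify

# temperature=0.5 becomes the default for every subsequent request
client = unify.Unify("gpt-4o@openai", temperature=0.5)

client.generate(user_message="Hello!")        # uses temperature=0.5
client.generate(user_message="Hello again!")  # still uses temperature=0.5
```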
All parameters can also be retrieved by getters, and set via setters:
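A sketch; set_temperature follows the set_<some_parameter> naming described below, while the property-style getter is an assumption:

```python
import unify

client = unify.Unify("gpt-4o@openai")

client.set_temperature(0.5)   # setter
print(client.temperature)     # getter, prints 0.5
```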
Passing a value to the .generate() method will overwrite the default value specified for the client.
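For example (again a sketch, with the user_message argument name assumed):

```python
import unify

client = unify.Unify("gpt-4o@openai", temperature=0.5)

# the per-call value takes precedence over the client-level default
client.generate(user_message="Hello!", temperature=0.9)  # uses 0.9, not 0.5
```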
Prompt instances are explained in the next section, but it’s worth mentioning here that default prompts can also be both extracted and set for clients.
The default prompt is determined dynamically during retrieval, based on all of the default values mentioned above:
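A sketch; the default_prompt getter name is an assumption:

```python
import unify

client = unify.Unify("gpt-4o@openai", temperature=0.5)

prompt = client.default_prompt   # assembled from the current client defaults
print(prompt.temperature)        # 0.5
```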
In the other direction, setting the default prompt will update the various default parameters that were explained above:
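A sketch; the Prompt constructor signature is an assumption:

```python
import unify

client = unify.Unify("gpt-4o@openai")

client.set_default_prompt(unify.Prompt("Hello!", temperature=0.9))
print(client.temperature)  # 0.9, updated from the prompt
```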
Calls to set default arguments can also be chained together, as each set_<some_parameter> call returns self:
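A sketch; set_temperature follows the documented naming, while the other setter names are assumptions:

```python
import unify

client = unify.Unify("gpt-4o@openai")

# each setter returns self, so the calls chain
client.set_temperature(0.5).set_max_tokens(100).set_system_message(
    "You are a helpful assistant."
)
```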
Feedback
If you believe any of these arguments could be supported by a certain model or provider, but isn't currently supported, then feel free to let us know on Discord and we'll get it supported as soon as possible! ⚡
Similarly, if you would like to see new arguments and features supported in the platform, then let us know! We always love to hear how we can improve features and functionality.