Terms like prompt, message, and query are used almost interchangeably, to the point where they mean everything and nothing.

For example, OpenAI's prompt engineering guide talks about tool use, structured output, and task decomposition. In their mental model, the "prompt" is the entire set of arguments passed to the chat completions endpoint. OpenAI are therefore very careful to refer to the system message and user message, and avoid calling these messages "prompts".

In contrast, Anthropic does use the term system prompt to refer to the system message.

There is therefore no consensus on whether the "prompt" refers only to the text content of a message, to the entire JSON request body, or to something in between.

Our Definition

We generally adopt OpenAI's vocabulary, but more specifically we define the prompt as the components of the OpenAI Request Body which affect the output. As such, stream, stream_options etc. are omitted from our Prompt.

Our motivation for this is simple. Our platform is primarily built for evaluations: iterating on the input, assessing how those iterations affect the output, and more specifically how they affect evaluations of that output.

In this light, we're only concerned with storing and tracking the components of the chat completions request which have an impact on the output, and so we have adopted this definition of "prompt".

A prompt is a JSON object containing all arguments in the OpenAI chat completions request body which impact the output.
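
For illustration, a prompt under this definition might look something like the following sketch (the specific values here are hypothetical):

# Only output-affecting arguments are included; delivery-related arguments
# such as stream and stream_options are deliberately excluded.
prompt_body = {
    "messages": [{"role": "user", "content": "This is a user message."}],
    "temperature": 0.5,
    "seed": 0,
}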

Creating Prompts

Creating prompts is easy. If a single string is passed to the unify.Prompt constructor, then it is interpreted as the user message:

import unify
prompt = unify.Prompt("This is a user message.")
print(prompt)
Prompt(
    messages=[{'content': 'This is a user message.', 'role': 'user'}],
    frequency_penalty=None,
    logit_bias=None,
    logprobs=None,
    top_logprobs=None,
    max_completion_tokens=None,
    n=None,
    presence_penalty=None,
    response_format=None,
    seed=None,
    stop=None,
    temperature=None,
    top_p=None,
    tools=None,
    tool_choice=None,
    parallel_tool_calls=None,
    extra_headers=None,
    extra_query=None,
    extra_body=None
)

By default the representation mode is "verbose", which means the entire pydantic object is printed. We can prune all None values from the representation by calling unify.set_repr_mode("concise"):
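unify.set_repr_mode("concise")
print(prompt)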

Prompt(
    "messages": [
        {
            "content": "This is a user message",
            "role": "user"
        }
    ]
)

Even when "concise" mode is set, the full representation can be viewed at any time like so:

print(prompt.full_repr())
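
This prints the full pydantic representation shown above, regardless of the current representation mode.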

For all subsequent printed prompts in the docs, we will assume "concise" mode for brevity.

Returning to the topic of creating prompts, the messages can also be passed explicitly to the Prompt constructor:

import unify
prompt = unify.Prompt(
    messages=[{"role": "user", "content": "This is a user message"}]
)
print(prompt)
Prompt(
    "messages": [
        {
            "content": "This is a user message",
            "role": "user"
        }
    ]
)

Other parameters can also be passed:

import unify
another_prompt = unify.Prompt(
    messages=[{"role": "user", "content": "This is another user message"}],
    temperature=0.5
)
print(another_prompt)

Only the parameters explicitly provided will be printed:

Prompt(
    "messages": [
        {
            "content": "This is another user message",
            "role": "user"
        }
    ],
    "temperature": 0.5
)

Passing Prompts

Prompts can be passed to our various clients by unpacking the return of their .dict() method as keyword arguments, either into the constructor (to set default arguments) or into the .generate() method (to apply only to the current query). First, into the constructor:

import unify
prompt = unify.Prompt(
    temperature=0.5
)
client = unify.Unify(**prompt.dict())
assert client.temperature == 0.5
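
The same unpacking works for the .generate() method, where the prompt applies only to the current request:
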
import unify
prompt = unify.Prompt(
    messages=[{"role": "user", "content": "This is a user message"}]
)
client = unify.Unify("gpt-4o@openai")
client.generate(**prompt.dict())

As explained in the previous Arguments section, default prompts can be both extracted from and set on clients directly. During retrieval, the default prompt is determined dynamically based on all of the default values mentioned above:

import unify
client = unify.Unify("gpt-4o@openai", temperature=0.5, n=2)
assert client.default_prompt.temperature == 0.5
assert client.default_prompt.n == 2

In the other direction, setting the default prompt will update the various default parameters that were explained above:

import unify
client = unify.Unify("gpt-4o@openai", temperature=0.5, n=2)
prompt = unify.Prompt(temperature=0.4)
client.set_default_prompt(prompt)
assert client.temperature == 0.4
assert client.n is None
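
Note that setting the default prompt replaces all previous defaults; since the new prompt does not specify n, it reverts to None.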

In the next Logging section, we'll learn how prompts are logged to your account for future retrieval, and in the Datasets section we'll learn how prompts can be grouped into datasets and uploaded to your account, for running evals, monitoring performance, and so on.