Prompts
Terms like prompt, message, and query are used so interchangeably that they have come to mean everything and nothing.
For example, in OpenAI’s
prompt engineering guide,
they talk about tool use, structured output, and task decomposition.
In their mental model, the “prompt” is therefore the entire set of arguments passed to
the chat completions endpoint.
OpenAI are therefore very careful to refer to the system message and the user message, and avoid calling these messages “prompts”.
In contrast, Anthropic does use the term system prompt to refer to the system message.
There is therefore no consensus on whether the “prompt” refers only to the text content of a message, to the entire JSON body, or to something in between.
Our Definition
We generally adopt OpenAI’s vocabulary, but more specifically we refer to the prompt as the components of the OpenAI Request Body which affect the output. As such, stream, stream_options etc. are omitted from our Prompt.
Our motivation for this is simple: our platform is primarily built for evaluations, that is, iterating on the input and assessing how these iterations impact the output, and more specifically how these changes affect evaluations run on that output.
In this light, we’re only concerned with storing and tracking the components of the chat completions request which have an impact on the output, and thus we have adopted this “prompt” definition.
A prompt is a JSON object containing all arguments in the OpenAI chat completions request body which impact the output.
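To make the definition concrete, here is an illustrative helper (not part of the SDK) that strips the non-output-affecting fields from a request body; the key set shown is an assumption for illustration, not an exhaustive list:

```python
# Illustrative only: these transport/streaming keys never change the
# model's output, so they fall outside the "prompt" definition above.
NON_PROMPT_KEYS = {"stream", "stream_options"}

def extract_prompt(request_body: dict) -> dict:
    """Keep only the chat completions arguments which impact the output."""
    return {k: v for k, v in request_body.items() if k not in NON_PROMPT_KEYS}

request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "stream": True,
    "stream_options": {"include_usage": True},
}

prompt = extract_prompt(request)
# prompt keeps model, messages and temperature, but not the stream fields
```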
Creating Prompts
Creating prompts is easy. If a single string is passed, then this is interpreted as the user message:
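As a rough sketch of what this normalization does (a toy function, not the SDK’s actual implementation):

```python
def as_messages(prompt):
    """Toy normalizer: a bare string becomes a single user message."""
    if isinstance(prompt, str):
        return [{"role": "user", "content": prompt}]
    return prompt

as_messages("Hello!")  # [{"role": "user", "content": "Hello!"}]
```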
By default the representation mode is "verbose", which means the entire pydantic object is printed. We can prune all None values from the representation by setting unify.set_repr_mode("concise"), printing a more concise representation:
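The pruning itself can be sketched with a small recursive helper (illustrative only; the real "concise" mode lives inside the SDK):

```python
def prune_none(obj):
    """Recursively drop None values, mimicking a concise representation."""
    if isinstance(obj, dict):
        return {k: prune_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [prune_none(v) for v in obj]
    return obj

verbose = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": None,
    "tools": None,
}
prune_none(verbose)  # only the messages key survives
```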
Even when "concise" mode is set, the full representation can be viewed at any time like so:
For all subsequent printed prompts in the docs,
we will assume "concise"
mode for brevity.
Returning to the topic of creating prompts,
the messages can also be passed explicitly in the Prompt
constructor:
Other parameters can also be passed:
Only the parameters explicitly provided will be printed:
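The construction styles above can be mimicked with a toy constructor (the signature is an assumption for illustration, not the real Prompt class); note how only explicitly provided parameters end up in the result:

```python
def make_prompt(user_message=None, *, messages=None, **params):
    """Toy Prompt constructor: accepts a bare string or explicit messages."""
    if messages is None:
        messages = [{"role": "user", "content": user_message}]
    return {"messages": messages, **params}

p1 = make_prompt("Hello!")
p2 = make_prompt(messages=[{"role": "user", "content": "Hello!"}])
assert p1 == p2  # the two construction styles are equivalent

p3 = make_prompt("Hello!", temperature=0.5)
# p3 carries temperature; parameters never passed simply don't appear
```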
Passing Prompts
Prompts can be passed to our various clients by simply unpacking the return of their .dict() method as keyword arguments, either into the constructor (to set default arguments) or into the .generate() method (to limit the scope to the current query).
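The unpacking pattern looks roughly like this (the function below is a stand-in for a client method, not the real SDK):

```python
def generate(model, messages, temperature=1.0, **kwargs):
    """Stand-in for a client's .generate() method."""
    return f"{model} called with {len(messages)} message(s) at T={temperature}"

prompt_dict = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.5,
}

# Equivalent in spirit to client.generate(**prompt.dict()):
result = generate(**prompt_dict)
```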
As explained in the previous Arguments section, default prompts can be both extracted and set for clients directly. The default prompt is determined dynamically based on all of the default values mentioned above during retrieval:
In the other direction, setting the default prompt will update the various default parameters that were explained above:
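A toy client (hypothetical names throughout) illustrates this two-way relationship: the default prompt is derived from the default parameters on retrieval, and setting it updates those same parameters:

```python
class ToyClient:
    """Hypothetical client: its default prompt mirrors its default parameters."""

    def __init__(self, **defaults):
        self._defaults = defaults

    @property
    def default_prompt(self) -> dict:
        # Retrieval: built dynamically from the current default values.
        return dict(self._defaults)

    @default_prompt.setter
    def default_prompt(self, prompt: dict):
        # Setting: updates the underlying default parameters.
        self._defaults = dict(prompt)

client = ToyClient(model="gpt-4o", temperature=0.2)
client.default_prompt = {"model": "gpt-4o", "temperature": 0.9}
# the client's default temperature parameter is now 0.9
```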
In the next Logging section, we learn how prompts are logged into your account for future retrieval, and in the Datasets section we’ll learn how prompts can be grouped into datasets and uploaded to your account, for running evals and monitoring performance etc.