We’re on a mission to unify and simplify the LLM landscape. Unify lets you:

  • 🔑 Use any LLM from any Provider: With a single interface, you can access all LLMs from all providers by simply changing one string. There’s no need to manage several API keys or handle different input-output formats; Unify handles all of that for you!

  • 📊 Improve LLM Performance: Add your own custom tests and evals, and benchmark your prompts across all models and providers. Compare quality, cost and speed, and iterate on your system prompt until all test cases pass and you can deploy your app!

  • 🔀 Route to the Best LLM: Improve quality, cost and speed by routing to the perfect model and provider for each individual prompt.

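To make the “changing one string” idea from the first bullet concrete, here is a minimal, hypothetical sketch. It assumes an endpoint string of the form "model@provider" that selects both the model and the provider; the `parse_endpoint` helper and the specific endpoint strings are invented for illustration and are not part of any documented SDK.

```python
# Hypothetical sketch: a single "model@provider" string identifies both
# the model and the provider, so swapping the entire LLM stack is a
# one-string change (the format here is an assumption for illustration).

def parse_endpoint(endpoint: str) -> tuple[str, str]:
    """Split an endpoint string into its (model, provider) parts."""
    model, _, provider = endpoint.partition("@")
    return model, provider

# Changing models or providers means changing only this one string:
for endpoint in ["llama-3-8b-chat@together-ai", "gpt-4o@openai"]:
    model, provider = parse_endpoint(endpoint)
    print(f"model={model}, provider={provider}")
```

Everything else in your application stays the same; only the endpoint string changes when you want to try a different model or provider.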
Guiding Example

Throughout the docs, we adopt a “show, don’t tell” approach, using a consistent example to illustrate the various features. Our intention is that this ties everything together in a more concrete manner.

Specifically, let’s imagine we are building an educational AI application to assist secondary school students in the UK who are studying for their GCSEs with the exam board OCR. The app should help students answer questions and explain concepts.

The app needs to be able to answer questions based on the specific syllabus, across the following subjects: Maths, Computer Science, Physics, Chemistry, Biology, English Literature and English Language.

Each of these subjects requires unique knowledge and abilities, spanning more objective and more subjective criteria, which makes this a complex problem to reason about. Some subjects might be better served by a fine-tuned model, others by a foundation model. Some models might be more creative and/or more analytical, making them more appropriate for different subject areas.

Let’s see how our universal API, benchmarking and routing might be able to help us out!