Overview

At Maxim, we have enabled all the supporting components you need on your journey to ship high-quality AI reliably and with confidence. While Pre-release Tests and Post-release Observe are our hero flows for a smooth testing experience, we have added several crucial pieces to aid you with testing under LIBRARY in Maxim.

Evaluators

You can access all evaluators under the Evaluators tab in the left side menu. You also have access to the evaluator store, where you can browse evaluators and add them to your workspace. Read more on evaluators here.

Datasets

You can create robust multimodal datasets on Maxim, which you can then use in your testing workflows. Read more on datasets here.
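To make the idea concrete, the sketch below builds a small multimodal dataset file where each row pairs an input with an image reference and an expected output. The column names here are illustrative assumptions, not Maxim's actual dataset schema; the datasets guide describes the supported formats.

```python
import csv

# Hypothetical multimodal dataset rows: the column names below are
# illustrative assumptions, not Maxim's actual dataset schema.
rows = [
    {
        "input": "What is shown in this receipt?",
        "image_url": "https://example.com/receipts/001.png",
        "expected_output": "A grocery receipt totaling $42.17",
    },
    {
        "input": "Summarize the attached chart.",
        "image_url": "https://example.com/charts/q3-revenue.png",
        "expected_output": "Q3 revenue grew 12% quarter over quarter",
    },
]

# Write the rows to a CSV file that could be uploaded as a dataset.
with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "image_url", "expected_output"])
    writer.writeheader()
    writer.writerows(rows)
```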

Context sources

To test your RAG pipeline, it is important to evaluate the retrieved context alongside the final generated output. We allow you to bring in your retrieved context using an HTTP workflow. Read more about context sources here.
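As a rough sketch of what such an HTTP workflow could look like, the endpoint below accepts a query and returns the retrieved chunks as JSON. The route, request/response shape, and `retrieve_chunks` helper are all hypothetical, for illustration only; the exact contract Maxim expects is covered in the context sources guide.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def retrieve_chunks(query: str) -> list[str]:
    # Hypothetical stand-in for your retriever (vector store, search index, etc.).
    return [f"chunk relevant to: {query}"]

@app.route("/context", methods=["POST"])
def context():
    # Assumed request shape: {"query": "..."}; the actual contract Maxim
    # expects is described in the context sources documentation.
    query = request.get_json().get("query", "")
    return jsonify({"context": retrieve_chunks(query)})

if __name__ == "__main__":
    app.run(port=8000)
```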

Prompt tools

Attaching function calling to prompts lets you test your actual application flow, mimicking the agentic workflows of your real application. Read more about prompt tools here.
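For illustration, a prompt tool is typically described by a function-calling schema like the one below, shown here in the widely used OpenAI-style JSON-schema format. The tool name and parameters are hypothetical, and the exact format Maxim accepts is covered in the prompt tools guide.

```python
# Hypothetical tool definition in the OpenAI-style function-calling
# format; Maxim's exact format may differ.
get_order_status_tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the current status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "The unique identifier of the order.",
                }
            },
            "required": ["order_id"],
        },
    },
}
```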

Custom models

On Maxim, you can create and update datasets, and these datasets keep evolving as you move through your application's lifecycle. We allow you to use these datasets for your fine-tuning needs by partnering with fine-tuning providers. If you have such a need, please feel free to drop us a line at [email protected].
