Evaluate Datasets

Learn how to evaluate your AI outputs against expected results using Maxim's Dataset evaluation tools

Get started with Dataset evaluation

Have a dataset ready for evaluation? Maxim lets you evaluate your AI's performance directly, without setting up workflows.

Prepare Your Dataset

Include these columns in your dataset:

  • Input queries or prompts
  • Your AI's actual outputs
  • Expected outputs (ground truth)
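
As a minimal sketch, here is one way to assemble such a file with Python's standard library, assuming a simple CSV upload. The column names `input`, `output`, and `expected_output` are illustrative; match them to the column types your Maxim Dataset actually defines.

```python
import csv

# Illustrative column names -- align these with your Dataset's
# column types in Maxim (input, output, expected output).
rows = [
    {
        "input": "What is your refund policy?",
        "output": "Refunds are issued within 14 days of purchase.",
        "expected_output": "We offer full refunds within 14 days of purchase.",
    },
    {
        "input": "How do I reset my password?",
        "output": "Click 'Forgot password' on the login page.",
        "expected_output": "Use the 'Forgot password' link on the login screen.",
    },
]

with open("eval_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "output", "expected_output"])
    writer.writeheader()
    writer.writerows(rows)
```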


Configure the Test Run

  • On the Dataset page, click the "Test" button in the top-right corner
  • Your Dataset should already be pre-selected in the drop-down. Attach the Evaluators and Context Sources (if any) you want to use.
  • Click "Trigger Test Run"
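
If you prefer scripting, the same kind of run could in principle be triggered programmatically. The sketch below is purely illustrative: the endpoint URL, payload fields, and header name are hypothetical placeholders, not Maxim's documented API; consult Maxim's API reference or SDK for the real interface.

```python
import requests

# Hypothetical sketch only: the URL, payload fields, and header below
# are placeholders, not Maxim's documented API.
MAXIM_API_KEY = "your-api-key"

payload = {
    "datasetId": "ds_123",              # the Dataset to evaluate
    "evaluators": ["bias", "clarity"],  # Evaluators to attach
    "contextSources": [],               # optional Context Sources
}

response = requests.post(
    "https://api.example.com/v1/test-runs",  # placeholder endpoint
    json=payload,
    headers={"x-api-key": MAXIM_API_KEY},
)
response.raise_for_status()
print(response.json())
```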



If you want to test only certain entries from your Dataset, create a Dataset split and run the test on the split in the same way as above.
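
Conceptually, a split is just a named subset of your Dataset's entries. As an illustration only (splits are created in the Maxim UI; the filenames and filter condition below are hypothetical), here is how such a subset might look if you prepared it yourself:

```python
import csv

# Read the full dataset prepared earlier (hypothetical filename).
with open("eval_dataset.csv") as f:
    rows = list(csv.DictReader(f))

# A "split" containing only refund-related queries, as an example filter.
refund_split = [r for r in rows if "refund" in r["input"].lower()]

with open("refund_split.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "output", "expected_output"])
    writer.writeheader()
    writer.writerows(refund_split)
```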
