Create custom AI Evaluators
Learn how to create custom AI Evaluators when built-in Evaluators don't meet your specific evaluation needs.
While Maxim offers a comprehensive set of Evaluators in the Store, you might need custom Evaluators for specific use cases. Create your own AI Evaluator by selecting an LLM as the judge and configuring custom evaluation instructions.
Configure model and parameters
Select the LLM you want to use as the judge and configure model-specific parameters based on your requirements.

Define evaluation logic
Configure how your evaluator should judge the outputs:
- **Requirements**: Define evaluation criteria in plain English
- **Evaluation scale**: Choose your scoring type
  - Scale: Score from 1 to 5
  - Binary: Yes/No response
- **Grading logic**: Define what each score means

You can use variables in the Requirements and Grading logic fields.
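Conceptually, an LLM-as-judge evaluator fills the variables into your Requirements and Grading logic and sends the result to the judge model. The sketch below illustrates that idea only; the `{{variable}}` syntax, field names, and prompt wording are assumptions for illustration, not Maxim's actual template format.

```python
def build_judge_prompt(requirements: str, grading_logic: str, variables: dict) -> str:
    """Fill {{variable}} placeholders and compose a judge prompt (illustrative only)."""
    for name, value in variables.items():
        requirements = requirements.replace("{{" + name + "}}", value)
        grading_logic = grading_logic.replace("{{" + name + "}}", value)
    return (
        "You are an evaluator. Judge the output against these requirements:\n"
        f"{requirements}\n\n"
        "Use this grading logic (score 1-5):\n"
        f"{grading_logic}\n"
        "Respond with a single score."
    )

prompt = build_judge_prompt(
    requirements="Check that {{output}} answers {{input}} accurately.",
    grading_logic="5 = fully accurate, 1 = completely wrong.",
    variables={"input": "What is 2+2?", "output": "4"},
)
print(prompt)
```

The key point is that anything you wrap in variable syntax is substituted per test entry before the judge model sees the prompt.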

Normalize score (Optional)
Convert your custom evaluator scores from a 1-5 scale to match Maxim's standard 0-1 scale. This helps align your custom evaluator with pre-built evaluators in the Store.
For example, a score of 4 becomes 0.8 after normalization.
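One simple mapping consistent with the example above (4 becomes 0.8) is dividing the raw score by the scale maximum. This is a plausible scheme for illustration, not necessarily the exact formula Maxim applies internally:

```python
def normalize_score(score: int, max_score: int = 5) -> float:
    """Map a 1-5 judge score onto a 0-1 scale by dividing by the maximum.

    Assumed mapping: matches the 4 -> 0.8 example, but Maxim's internal
    normalization formula may differ.
    """
    if not 1 <= score <= max_score:
        raise ValueError(f"score must be between 1 and {max_score}")
    return score / max_score

print(normalize_score(4))  # 0.8
print(normalize_score(5))  # 1.0
```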

Set pass criteria
Configure two types of pass criteria:

- **Pass query**: Define criteria for individual evaluation metrics.
  Example: Pass if evaluation score > 0.8
- **Pass evaluator (%)**: Set a threshold for the overall evaluation across multiple entries.
  Example: Pass if 80% of entries meet the evaluation criteria
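The relationship between the two criteria can be sketched as follows. The function names and thresholds are illustrative, using the example values above:

```python
def passes_query(score: float, threshold: float = 0.8) -> bool:
    """Pass query: does a single normalized score clear the threshold?"""
    return score > threshold

def passes_evaluator(scores: list[float], score_threshold: float = 0.8,
                     pass_rate: float = 0.8) -> bool:
    """Pass evaluator (%): do enough entries pass the per-entry criterion?"""
    passed = sum(1 for s in scores if passes_query(s, score_threshold))
    return passed / len(scores) >= pass_rate

scores = [0.9, 0.85, 0.95, 0.6, 0.9]
# 4 of 5 entries (80%) score above 0.8, so the evaluator passes overall.
print(passes_evaluator(scores))  # True
```

Pass query decides each entry individually; Pass evaluator (%) aggregates those per-entry decisions into a single pass/fail for the whole run.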

Test your Evaluator
Test your Evaluator in the playground before using it in your workflows. The right panel shows input fields for all variables used in your Evaluator.
- Fill in sample values for each variable
- Click Run to see how your Evaluator performs
- Iterate on and improve your Evaluator based on the results
