Compare Prompts in the playground

Iterating on Prompts as you evolve your AI application means experimenting across models, prompt structures, and more. The comparison playground gives you a side-by-side view of results so you can compare versions and make informed decisions about changes.

You can compare up to five different Prompts side by side in a single comparison.
Why use Prompt comparison?
Prompt comparison brings multiple individual Prompts into a single view, streamlining several workflows:
- Model comparison: Evaluate the performance of different models on the same Prompt.
- Prompt optimization: Compare different versions of a Prompt to identify the most effective formulation.
- Cross-model consistency: Ensure consistent outputs across various models for the same Prompt.
- Performance benchmarking: Analyze metrics like latency, cost, and token count across different models and Prompts (see the sketch after this list).
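
As a rough illustration of what the comparison view automates, the hedged sketch below runs the same prompt against two models and records latency, token count, and output. It assumes the OpenAI Python SDK and placeholder model names; this is not the playground's API, just the kind of side-by-side measurement it saves you from scripting yourself.

```python
import time
from openai import OpenAI  # assumes the OpenAI Python SDK (>=1.0) and an API key in the environment

client = OpenAI()

prompt = "Summarize the benefits of prompt comparison in two sentences."
models = ["gpt-4o-mini", "gpt-4o"]  # placeholder model names, for illustration only

results = []
for model in models:
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    results.append({
        "model": model,
        "latency_s": round(time.perf_counter() - start, 2),
        "total_tokens": response.usage.total_tokens,
        "output": response.choices[0].message.content,
    })

# Print a simple side-by-side summary of the metrics the playground surfaces automatically.
for row in results:
    print(f'{row["model"]}: {row["latency_s"]}s, {row["total_tokens"]} tokens')
```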
Create a new comparison
1. Navigate to the Prompt tab on the left side panel. Click on Prompt and choose Prompt comparison.
2. Click the + icon in the left sidebar. Choose to create a new Prompt comparison directly, or create a folder for better organization.
3. Name your new Prompt comparison and optionally assign it to an existing folder.
Configure Prompts
Choose from your existing Prompts, or select a model directly from the dropdown menu.


Add more Prompts to compare using the + icon.
Customize each Prompt independently. Changes made here don't affect the base Prompt, so experiment freely.
Run your comparison
You can enable or disable the Multi input option (see the sketch after this list for how inputs map in each mode):
- If enabled, you provide input to each entry in the comparison individually.
- If disabled, the same input is used for all the Prompts in the comparison.
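
Conceptually, the two modes differ only in how inputs map to the entries in the comparison. The sketch below is a hypothetical illustration of that mapping; the entry names and input fields are invented for the example and are not part of the product.

```python
# Multi input disabled: one shared input is broadcast to every Prompt in the comparison.
shared_input = {"question": "What is prompt comparison?"}
entries = ["prompt_a", "prompt_b", "prompt_c"]  # hypothetical entry names
runs_shared = {entry: shared_input for entry in entries}

# Multi input enabled: each entry in the comparison receives its own input.
runs_individual = {
    "prompt_a": {"question": "What is prompt comparison?"},
    "prompt_b": {"question": "Why compare models side by side?"},
    "prompt_c": {"question": "How do I benchmark latency across models?"},
}
```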

