Prompt comparison

Learn how to use Maxim's Prompt comparison feature to experiment with multiple prompts and models simultaneously, enabling efficient evaluation and optimization of your AI-driven applications.

Prompt comparison is a powerful feature in Maxim that allows you to experiment with multiple prompts and models side by side. This tool is essential for efficient prompt engineering, model evaluation, and optimizing your AI applications.

Why use Prompt comparison?

Prompt comparison combines multiple single prompts into one view, enabling a streamlined approach for various workflows:

  1. Model comparison: Evaluate the performance of different models on the same prompt.
  2. Prompt optimization: Compare different versions of a prompt to identify the most effective formulation.
  3. Cross-model consistency: Ensure consistent outputs across various models for the same prompt.
  4. Performance benchmarking: Analyze metrics like latency, cost, and token count across different models and prompts.
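To make the benchmarking idea concrete, here is a minimal sketch of how cost could be derived from token counts when comparing two models on the same prompt. The model names and per-token prices below are hypothetical placeholders, not real pricing and not Maxim's API:

```python
# Illustrative per-1K-token prices (hypothetical numbers, not real provider pricing).
PRICE_PER_1K = {
    "model-a": {"input": 0.005, "output": 0.015},
    "model-b": {"input": 0.0005, "output": 0.0015},
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one run: tokens consumed, scaled by the model's per-1K rates."""
    p = PRICE_PER_1K[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Same prompt on two models: 800 input tokens, 200 output tokens each.
for model in PRICE_PER_1K:
    print(f"{model}: ${round(run_cost(model, 800, 200), 4)}")
```

Running the same input through each model and comparing this number alongside latency and output quality is exactly the kind of side-by-side view the comparison table gives you.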

Getting started with Prompt comparison

To begin using Prompt comparison, follow these steps to create and configure a new comparison:

Create a new Comparison

Navigate to the "Prompt" tab on the left side panel.

Click on "Prompt" and choose "Prompt comparison".

Prompt comparison tab in navigation

Click the "+" icon in the left sidebar.

Choose to create a new prompt comparison directly or create a folder for better organization.

Dropdown menu for creating a new prompt comparison

Name your new prompt comparison and optionally assign it to an existing folder.

Creating a new prompt comparison in Maxim

Configuring the Comparison

Choose prompts from your existing Single prompts, or select a model directly from the dropdown menu.

Add more prompts to compare using the "+" icon.

Make changes to each individual prompt as needed. If you selected an existing Single prompt, edits made here do not affect the base prompt, so you can experiment freely.

Run your Comparison

The Multi input option can be enabled or disabled:

  • Enabled: provide input to each entry in the comparison individually.
  • Disabled: provide a single input that is sent to every entry in the comparison at once.

Exploring Prompt comparison use cases

Let's examine two common use cases for Prompt comparison to demonstrate its versatility and power.

Case Study 1: Multiple Models on a single prompt

In this scenario, we'll compare the performance of different models on a single entity recognition prompt.
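Conceptually, this case study amounts to running one fixed prompt through several models and collecting each model's output and latency side by side. The sketch below uses a stub in place of real model calls (the model names are examples and `call_model` is a hypothetical stand-in, not Maxim's API):

```python
import time

PROMPT = (
    "Extract all person and organization names from: "
    "'Tim Cook announced Apple's quarterly results.'"
)

# Hypothetical stand-in for a real model call; a real run would hit each
# provider's API with the same prompt.
def call_model(model: str, prompt: str) -> str:
    return f"[{model}] entities: Tim Cook (person), Apple (organization)"

models = ["gpt-4o", "claude-3-5-sonnet", "llama-3-70b"]

results = {}
for model in models:
    start = time.perf_counter()
    output = call_model(model, PROMPT)
    latency_s = time.perf_counter() - start
    results[model] = {"output": output, "latency_s": round(latency_s, 4)}

for model, r in results.items():
    print(f"{model}: {r['output']} ({r['latency_s']}s)")
```

In the Prompt comparison view, this loop happens visually: each column is one model, and the shared input is sent to all of them in a single run.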

Case Study 2: Multiple prompts with a single model

In this example, we'll compare different versions of an entity recognition prompt using a single model.
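Here the roles are reversed: the model stays fixed and the prompt wording varies. A minimal sketch of the idea, with hypothetical prompt variants and a stub in place of the single model (again, not Maxim's API):

```python
# Two phrasings of the same entity-recognition task, run against one model.
prompt_variants = {
    "v1-plain": "List the named entities in the text below.\n\nText: {text}",
    "v2-structured": (
        "Extract named entities from the text below. "
        "Return one entity per line as <name>: <type>.\n\nText: {text}"
    ),
}

sample_text = "Satya Nadella spoke at Microsoft Build in Seattle."

# Hypothetical stand-in for one fixed model; a real run would call the
# provider's API with each formatted prompt.
def run_model(prompt: str) -> str:
    return f"response to prompt of {len(prompt)} characters"

outputs = {
    name: run_model(template.format(text=sample_text))
    for name, template in prompt_variants.items()
}

for name, out in outputs.items():
    print(f"{name}: {out}")
```

Comparing the columns then shows which formulation yields the more complete or better-structured extraction for the same model.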

Conclusion

Prompt comparison is a powerful feature in Maxim that enables efficient experimentation, evaluation, and optimization of your AI prompts and models. By leveraging this tool, you can:

  • Compare model performances
  • Optimize prompt formulations
  • Ensure consistency across models
  • Make informed decisions for your AI applications based on experimentation results

Start using Prompt comparison today to enhance your prompt engineering workflow and improve the effectiveness of your AI-driven solutions.

You can compare up to five different prompts side by side in a single comparison.
