1. Set up your environment
First, configure your AI model providers:

1. Go to `Settings` → `Models`, then click the tab of the provider you want to add an API key for.
2. Configure the model provider: click Add New and fill in the required details.

Maxim requires at least one provider with access to GPT-3.5 and GPT-4 models. We use industry-standard encryption to securely store your API keys.
2. Create your first prompt or HTTP endpoint
Create prompts to experiment with and evaluate a model call with attached context or tools. Use endpoints to test your complex AI agents over your application's HTTP endpoint without any integration work.

Prompt

1. Create prompt: Navigate to the Prompts tab under the Evaluate section and click Single prompts. Click Create prompt or Try sample to get started.
2. Write your first prompt: Write your system prompt and user prompt in the respective fields.
3. Configure model and parameters: Configure additional settings like model, temperature, and max tokens.
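The system prompt, user prompt, and parameters configured above correspond to a standard chat-completion request. A minimal sketch of that structure (field names follow the common OpenAI-style schema and are illustrative, not necessarily Maxim's internal format):

```python
# Illustrative chat-completion payload; field names follow the common
# OpenAI-style schema, not necessarily Maxim's internal representation.
def build_payload(system_prompt: str, user_prompt: str,
                  model: str = "gpt-4", temperature: float = 0.7,
                  max_tokens: int = 256) -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,  # sampling randomness; lower = more deterministic
        "max_tokens": max_tokens,    # upper bound on response length
    }

payload = build_payload(
    "You are a helpful support assistant.",
    "How do I reset my password?",
)
```

Iterating on a prompt usually means adjusting the two message contents and these parameters while holding the rest of the structure fixed.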
4. Iterate: Click Run to test your prompt and see the AI's response. Iterate on your prompt based on the results.
5. Save prompt and publish a version: When satisfied, click Save to create a new version of your prompt.

HTTP Endpoint
1. Create endpoint: Navigate to the HTTP Endpoints option under the Agents tab in the Evaluate section. Click Create Endpoint or Try sample.
2. Configure agent endpoint: Enter your API endpoint URL in the URL field and configure any necessary headers or parameters. You can use dynamic variables like `{input}` in any part of your endpoint by wrapping the variable name in `{}`.
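To make the variable syntax concrete, here is a sketch of how a templated request body might be filled in per dataset row, and how a nested field such as `data.response` could later be picked out of the JSON reply. The template, helper names, and response shape are assumptions for illustration, not Maxim's implementation:

```python
# Hypothetical request-body template using the {input} variable syntax.
# Doubled braces are Python format-string escapes for literal JSON braces.
BODY_TEMPLATE = '{{"query": "{input}"}}'

def render_body(template: str, variables: dict) -> str:
    # Substitute each {name} placeholder with the row's value.
    return template.format(**variables)

def select_field(response: dict, path: str):
    # Walk a dotted path like "data.response" into a parsed JSON reply.
    node = response
    for key in path.split("."):
        node = node[key]
    return node

body = render_body(BODY_TEMPLATE, {"input": "What is your refund policy?"})
reply = {"data": {"response": "Refunds are issued within 14 days."}}
answer = select_field(reply, "data.response")
```

The dotted-path idea here is the same one you will use below when mapping the endpoint's output for evaluation.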
3. Test your agent: Click Run to test your endpoint in the playground.
4. Configure endpoint for testing: In the Output Mapping section, select the part of the response you want to evaluate (e.g., `data.response`). Click Save to create your endpoint.

3. Prepare your dataset
Organize and manage the data you'll use for testing and evaluation:

1. Create dataset: Navigate to the Datasets tab under the Library section. Click Create New or Upload CSV. We also provide a sample dataset; click View our sample dataset to get started.
2. Edit dataset: If creating a new dataset, enter a name and description. Add columns to your dataset (e.g., 'input' and 'expected_output').
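For example, a CSV suitable for upload with the two columns above could be produced like this (the file name and rows are illustrative):

```python
import csv

# Write a small dataset with 'input' and 'expected_output' columns,
# matching the column names suggested above. Rows are illustrative.
rows = [
    {"input": "What is 2 + 2?", "expected_output": "4"},
    {"input": "What is the capital of France?", "expected_output": "Paris"},
]

with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "expected_output"])
    writer.writeheader()
    writer.writerows(rows)
```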
3. Save: Add entries to your dataset, filling in the values for each column. Click Save to create your dataset.

4. Add evaluators
Set up evaluators to assess your prompt or endpoint's performance:

1. Add evaluators from the store: Navigate to the Evaluators tab under the Library section. Click Add Evaluator to browse available evaluators.
2. Configure added evaluators: Choose an evaluator type (e.g., AI, Programmatic, API, or Human). Configure the evaluator settings as needed. Click Save to add the evaluator to your workspace.

5. Run your first test
Execute a test run to evaluate your prompt or endpoint:

1. Select the endpoint/prompt to test: Navigate to your saved prompt or endpoint. Click Test in the top-right corner.
2. Configure the test run: Select the dataset you created earlier and choose the evaluators you want to use for this test run.
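As a rough illustration of what a Programmatic evaluator computes during a run, here is a sketch that scores each output against the dataset's expected value by exact match. The function signature is an assumption for illustration, not Maxim's evaluator interface:

```python
# Hypothetical programmatic evaluator: exact-match score per dataset row.
# The signature is illustrative, not Maxim's actual evaluator interface.
def exact_match(output: str, expected_output: str) -> float:
    # Normalize whitespace and case before comparing.
    return 1.0 if output.strip().lower() == expected_output.strip().lower() else 0.0

score = exact_match("  Paris ", "paris")
```

AI evaluators work the same way conceptually, except the score comes from a model's judgment rather than a deterministic comparison.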
3. Trigger: Click Trigger Test Run to start the evaluation process. If you've added human evaluators, you'll be prompted to set up human annotation on the report or via email.
6. Analyze test results
Review and analyze the results of your test run:

1. View report: Navigate to the Runs tab in the left navigation menu. Find your recent test run and click it to view details.
2. Review performance: Review the overall performance metrics and scores for each evaluator. Drill down into individual queries to see specific scores and reasoning.
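The relationship between per-query scores and report-level metrics can be sketched directly: individual scores roll up into an aggregate. A minimal sketch, assuming scores in [0, 1] and an illustrative 0.5 pass threshold:

```python
# Aggregate per-query evaluator scores into report-level metrics.
# The 0.5 pass threshold is an assumption for illustration.
def summarize(scores: list[float], threshold: float = 0.5) -> dict:
    return {
        "mean_score": sum(scores) / len(scores),
        "pass_rate": sum(s >= threshold for s in scores) / len(scores),
    }

report = summarize([1.0, 0.0, 0.8, 0.6])
```

A high mean with a low pass rate, or vice versa, points you at different kinds of failures when you drill into individual queries.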
3. Iterate: Use these insights to identify areas for improvement in your prompts or endpoints.
Next steps
Now that you’ve completed your first cycle on the Maxim platform, consider exploring these additional capabilities:

- Prompt comparisons: Evaluate different prompts side by side to determine which produces the best results for a given task.
- Agents via no-code builder: Create complex, multi-step AI workflows. Learn how to connect prompts, code, and APIs to build powerful, real-world AI systems using our intuitive, no-code editor.
- Context sources: Integrate Retrieval-Augmented Generation (RAG) into your agent endpoints.
- Prompt tools: Enhance your prompts with custom functions and agentic behaviors.
- Observability: Use our stateless SDK to monitor real-time production logs and run periodic quality checks.