Test your agentic workflows using Prompt Chains
Test Prompt Chains using datasets to evaluate performance across examples
After testing in the playground, evaluate your Prompt Chains across multiple test cases to verify consistent performance. You can do this by starting a test run.
Create a Dataset
Add test cases by creating a Dataset. For this example, we'll use a Dataset of product images to generate descriptions.
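The sketch below shows what such a Dataset could look like as a CSV you upload to Maxim. The column names (`image_url`, `expected_language`) are illustrative assumptions for this walkthrough, not a required Maxim schema; each row is one test case the chain will run against.

```python
import csv

# Hypothetical test cases: a product image to describe and the
# language its description should be translated into.
rows = [
    {"image_url": "https://example.com/images/sneaker.jpg", "expected_language": "es"},
    {"image_url": "https://example.com/images/backpack.jpg", "expected_language": "fr"},
]

# Write the rows to a CSV that can be uploaded when creating the Dataset.
with open("product_images_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image_url", "expected_language"])
    writer.writeheader()
    writer.writerows(rows)
```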

Build your Prompt Chain
Create a Prompt Chain that processes your test examples. In this case, the chain generates product descriptions, translates them to multiple languages, and formats them to match specific requirements.
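Conceptually, the chain wires three steps in sequence, with each step's output feeding the next. The sketch below mirrors that wiring in plain Python; the `call_llm` helper is a hypothetical stand-in for your model provider, not a Maxim API. In Maxim, you compose these steps as nodes in the Prompt Chain editor.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for your model provider call (hypothetical helper).
    raise NotImplementedError

def generate_description(image_url: str) -> str:
    # Step 1: generate a product description from the image.
    return call_llm(f"Write a product description for the image at {image_url}.")

def translate(description: str, language: str) -> str:
    # Step 2: translate the description into the target language.
    return call_llm(f"Translate the following into {language}:\n{description}")

def format_output(description: str) -> str:
    # Step 3: reformat to match specific requirements.
    return call_llm(f"Rewrite as three bullet points, under 20 words each:\n{description}")

def run_chain(image_url: str, language: str) -> str:
    # Each step's output feeds the next, mirroring the chain's node wiring.
    return format_output(translate(generate_description(image_url), language))
```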

Start a test run
Open the test configuration by clicking the button in the top right corner.
Review results
Monitor the test run to analyze the performance of your Prompt Chain across all inputs.

Deploy Prompt Chains
Quick iterations on Prompt Chains should not require a code deployment every time. With more and more stakeholders working on prompt engineering, it's critical to keep Prompt Chain deployments as low-overhead as possible. Prompt Chain deployments on Maxim support conditional deployment of Prompt Chain changes, which can then be consumed via the SDK.
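As a rough sketch of what consuming a conditionally deployed chain could look like, the snippet below follows the query-builder pattern Maxim's Python SDK documents. Treat the exact names (`Maxim`, `Config`, `QueryBuilder`, `get_prompt_chain`, `deployment_var`) as assumptions and confirm them against the current SDK reference; the API key and chain ID are placeholders.

```python
from maxim import Maxim, Config
from maxim.models import QueryBuilder

# Initialize the SDK client (placeholder API key).
maxim = Maxim(Config(api_key="YOUR_API_KEY"))

# Fetch the chain version deployed for the matching deployment variables,
# so prompt changes roll out from the Maxim UI without a code deployment.
chain = maxim.get_prompt_chain(
    "YOUR_PROMPT_CHAIN_ID",  # placeholder chain ID
    QueryBuilder().and_().deployment_var("Environment", "prod").build(),
)
```

Because the deployment variables are resolved at fetch time, switching which chain version "prod" points to is a configuration change in Maxim, not a release of your application.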
Related guide: Generate and translate product descriptions with AI — build an AI workflow to generate product descriptions from images using Prompt Chains.