✨ Maxim AI March 2025 Update

Feature spotlight

🔗 Prompt chains: Now more powerful

Maxim’s revamped Prompt Chains let you prototype every step of your complex agentic workflows with greater clarity and control—right from our redesigned, more intuitive UI. Key highlights:

  • 🔀 Create parallel chains to execute concurrent or conditional tasks.
  • 🔁 Flexibly transfer data between ports—choose whether outputs act as variables, inputs, or context in other blocks.
  • 🧪 Experiment with prompts directly within the prompt block and publish new versions instantly.

Our new UI is designed for cross-functional teams to dive in and start building with ease.

Watch this video walkthrough to learn more

📡 Public APIs

While our SDK support has made it seamless to work with Maxim, we’ve now introduced a comprehensive REST API—giving you even more flexibility to integrate Maxim into your existing workflows. Use Maxim APIs to:

  • Experiment with prompts, organize them, and programmatically create and deploy prompt versions.
  • Configure alerts and integrate with tools like Slack and PagerDuty to stay on top of production issues.
  • Manage datasets at a granular level—perform CRUD operations on entries, columns, and datasets, or create data splits directly via dedicated endpoints.
  • Create and manage log repos, and fetch logs without needing to access the UI.
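As a sketch of what a programmatic workflow might look like, here is a minimal Python example that builds an authenticated REST request using only the standard library. Note that the base URL, endpoint path, and payload fields below are hypothetical placeholders—consult Maxim's API reference for the actual routes and schemas.

```python
import json
import urllib.request

# Hypothetical base URL and endpoint — check Maxim's API docs for the real routes.
BASE_URL = "https://api.example-maxim.test/v1"
API_KEY = "YOUR_API_KEY"

# Example payload for creating a prompt (field names are illustrative).
payload = json.dumps({
    "name": "support-bot",
    "description": "v2 of the customer-support prompt",
}).encode("utf-8")

req = urllib.request.Request(
    f"{BASE_URL}/prompts",
    data=payload,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

# urllib.request.urlopen(req) would send the request; it is left unsent here
# so the sketch stays offline.
print(req.get_method(), req.full_url)
```

The same pattern—JSON body, bearer-token header, resource-oriented path—applies to the dataset, alerting, and log-repository endpoints.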

✅ Prompt Playground now has 100% Jinja2 support

Whether you're building dynamic prompts, injecting structured context, or running personalized prompt tests at scale—Jinja2 syntax makes it easier than ever to craft powerful, composable prompts right inside our Prompt playground.

With Jinja2 support, you can structure your prompts like actual code—loop through lists, add conditional statements, and dynamically inject variables—all while keeping your prompt logic clean and reusable.
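To illustrate the kind of template logic this enables, here is a small standalone example using the Jinja2 library itself—a conditional branch plus a loop over a list, rendered into a final prompt. The variable names (`user_tier`, `orders`) are illustrative, not part of Maxim's playground.

```python
from jinja2 import Template

# A prompt template with a conditional and a loop — the same Jinja2
# syntax works inside the Prompt Playground.
template = Template(
    "You are a helpful assistant.\n"
    "{% if user_tier == 'pro' %}Offer advanced tips.{% else %}Keep answers simple.{% endif %}\n"
    "Recent orders:\n"
    "{% for order in orders %}- {{ order }}\n{% endfor %}"
)

prompt = template.render(user_tier="pro", orders=["#1001", "#1002"])
print(prompt)
```

Swapping `user_tier` or the `orders` list re-renders the prompt instantly, which is what makes looped and conditional templates easy to reuse across test cases.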

Follow this demo to learn more

🔐 Manage your secrets with Vault!

Tired of managing your secrets across multiple places? Vault is a centralized, encrypted home for all your sensitive variables—from API keys to tokens. Use your secrets in Maxim workflows or API evals by simply referencing them from Vault.

All secrets are encrypted at rest and in transit using RSA 2048-bit encryption, ensuring that even if someone gains access to the system, your data remains protected without the private key. Vault is built for robust security and peace of mind.

🚀 xAI models are supported on Maxim!

Obsessed with Grok? As a step towards becoming your one-stop solution for all experimentation and evaluation needs, we’ve added support for xAI models—giving you a greater selection pool and flexibility to test, compare, and refine your AI workflows.

No complex setup—just plug in your API key and start building in minutes.

🆚 Diff view: Compare two versions of a prompt

Our new Prompt Diff feature lets you compare two prompt versions side-by-side with a Git diff-style view revealing every tweak. See how your prompt is evolving over time and correlate changes in performance with prompt changes.

Follow this video to get started

🧠 Gemini 2.5 Pro is live on Maxim!

Google's latest Gemini 2.5 Pro model is now available on Maxim. Leverage its advanced reasoning and multimodal capabilities to design custom evaluators and prompt experiments.

Start using this model via the Google provider:
✅ Go to Settings > Models > Google and add Gemini 2.5 Pro Experimental

Customer story

🚀 How Mindtickle cut AI time-to-production by 76%

Mindtickle, a market-leading revenue enablement platform (ranked #1 by G2), empowers sellers with AI-driven tools like interactive role-plays, conversation intelligence, and automated content creation.

As their GenAI capabilities grew, testing these systems and ensuring AI quality became increasingly hard to manage and scale. Manual dataset management on spreadsheets, slow human review cycles, and the lack of a single source of truth for their AI experiments created major bottlenecks.

Mindtickle partnered with Maxim to streamline their GenAI workflows—from curating multi-modal datasets to running rigorous evaluations and generating detailed quality reports. The result? Their time-to-production reduced from 21 days to just 5, while decision-making and cross-team collaboration on features became more efficient.

Learn how they achieved this—and more—with Maxim.

Upcoming release

🗣️ Voice agent evals

Evaluate the quality of your voice agents with a suite of voice-specific metrics. Interact with your agent via API or phone number—and, best of all, simulate interactions with your agent using our AI-powered simulation for granular analysis.

📊 Live dashboard

Monitor how your application's quality scores change across experiments. Build dashboards with custom charts tailored to your needs, and gain full control over the analysis of your production data and performance metrics.

Knowledge nuggets

💡 Model Context Protocol (MCP): A deep dive

The Model Context Protocol (MCP) is an open standard designed to streamline how applications provide structured context to LLMs. Think of MCP like the USB-C of AI—a universal connector between LLMs and the tools or data they need. Just as USB-C simplifies connectivity across devices, MCP enables seamless integration between AI models, local data sources, web APIs, and various tools. It empowers developers to build complex, agentic workflows where models can securely access the right data, trigger tools, and maintain continuity across multi-step tasks—all without hardcoding every interaction.
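Under the hood, MCP messages are JSON-RPC 2.0. As a concrete illustration, here is a standard-library-only sketch of the kind of message a client sends when it asks a server to invoke a tool; the tool name and arguments are hypothetical, and real clients use an MCP SDK rather than hand-built dictionaries.

```python
import json

# An MCP tool-invocation request is a JSON-RPC 2.0 message.
# "search_docs" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "quarterly revenue"},
    },
}

# Serialize for the wire, then decode as the server would.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])
```

The server replies with a JSON-RPC response carrying the same `id`, which is how the model-facing host correlates tool results across multi-step tasks.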

We've created a detailed blog and a video walkthrough to help you get hands-on with MCP. Whether you're building local-first AI tools, integrating enterprise systems, or exploring multi-agent architectures, MCP is a foundational protocol worth exploring — check out our resources to get started!