Evaluation-Level Cost Tracking

For each evaluation run, Maxim displays detailed cost metrics:
  • Cost breakdown: View the total evaluation cost with separate input and completion token counts
  • Side-by-side comparison: Use the Test Runs Comparison Dashboard to compare costs across different prompt versions, models, and configurations
  • Cost per prompt: Identify which variations are most cost-effective for your use case
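The per-run breakdown above boils down to multiplying each token count by its per-token rate and summing. A minimal sketch of that arithmetic is shown below; the rates are hypothetical placeholders, not Maxim's or any provider's actual pricing:

```python
# Hypothetical per-token rates for illustration only.
INPUT_RATE = 0.03 / 1000       # $ per input token (assumed)
COMPLETION_RATE = 0.06 / 1000  # $ per completion token (assumed)

def run_cost(input_tokens: int, completion_tokens: int) -> dict:
    """Return a cost breakdown for one evaluation run:
    separate input and completion costs plus their total."""
    input_cost = input_tokens * INPUT_RATE
    completion_cost = completion_tokens * COMPLETION_RATE
    return {
        "input_cost": round(input_cost, 6),
        "completion_cost": round(completion_cost, 6),
        "total_cost": round(input_cost + completion_cost, 6),
    }
```

With these assumed rates, a run that consumed 1,000 input tokens and 500 completion tokens would cost $0.03 + $0.03 = $0.06, which is the kind of figure the dashboard surfaces per prompt variation.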

Repository-Level Monitoring

At the log repository level, Maxim’s dashboard visualizes cost and token metrics over time through interactive charts:
  • Multiple aggregation options: View data using average, p50, p90, p95, or p99 aggregations
  • Interactive analysis: Hover over chart elements for detailed breakdowns, or select a time period to drill down into the specific logs behind it
  • Saved views: Create custom views for monitoring cost patterns that matter to your team
  • Custom metrics: Track application-specific cost indicators alongside standard metrics
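To make the aggregation options concrete: an average hides cost spikes, while tail percentiles (p90, p95, p99) reveal your most expensive requests. The sketch below shows how such aggregations can be computed over per-request costs using only the standard library (nearest-rank percentiles; this is an illustration of the statistics, not Maxim's implementation):

```python
import math
import statistics

def aggregate_costs(costs: list[float]) -> dict:
    """Summarize per-request costs the way a monitoring dashboard might:
    average plus p50/p90/p95/p99 tail percentiles (nearest-rank method)."""
    ordered = sorted(costs)

    def pct(p: float) -> float:
        # Nearest-rank percentile: 1-indexed rank ceil(p/100 * n).
        k = max(1, math.ceil(p / 100 * len(ordered)))
        return ordered[k - 1]

    return {
        "avg": statistics.mean(ordered),
        "p50": pct(50),
        "p90": pct(90),
        "p95": pct(95),
        "p99": pct(99),
    }
```

A large gap between `avg` and `p99` is the signal to look for: it means a small fraction of requests (long prompts, retries, expensive models) dominates spend, which is exactly what drilling into specific logs helps diagnose.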
This granular visibility helps teams optimize their AI applications by identifying expensive operations, comparing model costs, and making data-driven decisions about resource allocation.