Track token usage and costs

Learn how to efficiently track token usage and associated costs in your LLM application using Maxim's logging capabilities. Learn more about tracking generation results.
Log token usage by including the usage object in your generation result:
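The Python sketch below shows the pattern end to end. The import paths, config classes, and method names (Maxim, TraceConfig, GenerationConfig, trace, generation, result) are assumptions for illustration rather than the definitive SDK surface, and the usage fields follow the common prompt_tokens / completion_tokens / total_tokens shape; check the SDK reference for your language for the exact names.

```python
# Minimal sketch of logging token usage with a Maxim-style logger.
# NOTE: import paths, class names, and method signatures below are assumptions
# for illustration; verify them against the Maxim SDK reference for your language.

from uuid import uuid4

from maxim import Maxim                                  # assumed import path
from maxim.logger import GenerationConfig, TraceConfig   # assumed import path

# Initialize the logger (API key / repository wiring is assumed).
logger = Maxim({"api_key": "YOUR_MAXIM_API_KEY"}).logger()

# Start a trace and register the LLM call as a generation within it.
trace = logger.trace(TraceConfig(id=str(uuid4()), name="summarize-request"))
generation = trace.generation(
    GenerationConfig(
        id=str(uuid4()),
        provider="openai",        # provider + model are what pricing is based on
        model="gpt-4o",
        messages=[{"role": "user", "content": "Summarize this document."}],
    )
)

# ... call your LLM provider here and collect its response ...

# Attach the usage object to the generation result so token counts
# (and therefore costs) are tracked for this call.
generation.result(
    {
        "id": str(uuid4()),
        "object": "chat.completion",
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": "..."},
                "finish_reason": "stop",
            }
        ],
        "usage": {
            "prompt_tokens": 1024,      # input tokens
            "completion_tokens": 256,   # output tokens
            "total_tokens": 1280,
        },
    }
)

trace.end()
```

The usage object attached to the generation result is what drives the token counts and cost shown for the call.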
Custom pricing
Need different pricing for your models? Read more on custom pricing.
Track errors in traces
Learn how to effectively track and log errors from LLM results and Tool calls in your AI application traces to improve performance and reliability.
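As a rough illustration only (continuing the assumed Python surface from the sketch above, plus an assumed generation.error(...) reporting call whose real name and payload may differ; see the error-tracking guide for the supported API), a failed LLM call can be attached to the same generation:

```python
# Hypothetical sketch: report a failed LLM call on the generation created earlier.
# `call_llm_provider` and `messages` are placeholders for your own provider call,
# and generation.error(...) plus its payload shape are assumptions; consult the
# error-tracking guide for the supported API.

try:
    response = call_llm_provider(messages)    # placeholder for your LLM call
except Exception as exc:
    # Attach the failure to the generation so it surfaces on the trace.
    generation.error(
        {
            "message": str(exc),
            "type": exc.__class__.__name__,
        }
    )
    raise
else:
    generation.result(response)               # success path: log result + usage
finally:
    trace.end()
```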
Configure filters and saved views
Learn how to efficiently filter and organize your logs with custom criteria and saved views for streamlined debugging and quick access to frequently used search patterns.