# Maxim Docs

## Docs

- [Create Alert](https://www.getmaxim.ai/docs/alerts/alert/create-alert.md): Create a new alert
- [Delete Alert](https://www.getmaxim.ai/docs/alerts/alert/delete-alert.md): Delete an alert
- [Get Alerts](https://www.getmaxim.ai/docs/alerts/alert/get-alerts.md): Get alerts for a workspace
- [Update Alert](https://www.getmaxim.ai/docs/alerts/alert/update-alert.md): Update an alert
- [Generate and share comparison reports](https://www.getmaxim.ai/docs/analyze/how-to/comparison-reports.md): Learn how to create and analyze comparison reports to track improvements, identify trends, and make data-driven decisions across different test runs.
- [Overview](https://www.getmaxim.ai/docs/analyze/overview.md): Explore powerful analysis tools in Maxim for generating comparison reports, creating live dashboards, and gaining actionable insights from your AI application data.
- [Create Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/create-dataset-columns.md): Create dataset columns
- [Delete Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/delete-dataset-columns.md): Delete dataset columns
- [Get Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/get-dataset-columns.md): Get dataset columns
- [Update Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/update-dataset-columns.md): Update dataset columns
- [Create Dataset entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/create-dataset-entries.md): Create dataset entries
- [Delete Dataset Entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/delete-dataset-entries.md): Delete dataset entries
- [Get Dataset Entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/get-dataset-entries.md): Get dataset entries
- [Update Dataset Entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/update-dataset-entries.md): Update dataset entries
- [Create Dataset Split](https://www.getmaxim.ai/docs/datasets/dataset-split/create-dataset-split.md): Create dataset split
- [Delete Dataset Split](https://www.getmaxim.ai/docs/datasets/dataset-split/delete-dataset-split.md): Delete dataset split
- [Get Dataset Splits](https://www.getmaxim.ai/docs/datasets/dataset-split/get-dataset-splits.md): Get dataset splits
- [Update Dataset Split](https://www.getmaxim.ai/docs/datasets/dataset-split/update-dataset-split.md): Update dataset split
- [Create Dataset](https://www.getmaxim.ai/docs/datasets/dataset/create-dataset.md): Create a new dataset
- [Delete Dataset](https://www.getmaxim.ai/docs/datasets/dataset/delete-dataset.md): Delete a dataset
- [Get Datasets](https://www.getmaxim.ai/docs/datasets/dataset/get-datasets.md): Get datasets or a specific dataset
- [Update Dataset](https://www.getmaxim.ai/docs/datasets/dataset/update-dataset.md): Update a dataset
- [Concepts](https://www.getmaxim.ai/docs/evaluate/concepts.md): Learn about the key concepts in Maxim
- [Build an AI-powered customer support email agent](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/create-customer-support-agent.md): Create a workflow that automatically categorizes support emails, creates help desk tickets, and sends responses
- [Generate and translate product descriptions with AI](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/create-product-description-generator.md): Build an AI workflow to generate product descriptions from images using Prompt Chains
- [Debug AI agent errors step by step](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/debug-errors-at-every-node.md): Identify and fix errors at each step of your AI workflow with detailed diagnostics
- [Deploy Prompt Chains](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/deploy-prompt-chains.md): Quick iterations on prompt chains should not require code deployments every time. With more and more stakeholders working on prompt engineering, it's critical to keep deployments of prompt chains as easy as possible without much overhead. Prompt chain deployments on Maxim allow conditional deployment of prompt chain changes that can be used via the SDK.
- [Build complex AI workflows with Prompt Chains](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/experiment-with-prompt-chains.md): Connect prompts, code, and APIs to create sophisticated AI systems using our visual editor - no coding required
- [Query Prompt Chains via SDK](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/querying-prompt-chains.md): Learn how to efficiently query and retrieve prompt chains using the Maxim SDK, enabling advanced AI workflow management and customization
- [Test your agentic workflows using chains](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/test-prompt-chains.md): Test Prompt Chains using datasets to evaluate performance across examples
- [Use API nodes within chains](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-chains/use-api-nodes-within-chains.md): Make external API calls at any point in your Prompt Chain to integrate with third-party services. The API node lets you validate data, log events, fetch information, or perform any HTTP request without leaving your chain. Simply configure the endpoint, method, and payload to connect your AI workflow with external systems.
- [Evaluate Datasets](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-datasets.md): Learn how to evaluate your AI outputs against expected results using Maxim's Dataset evaluation tools
- [Automate Prompt evaluation via CI/CD](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/automate-via-ci-cd.md): Trigger test runs in CI/CD pipelines to evaluate prompts automatically.
- [Run bulk comparisons across test cases](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/bulk-comparisons-across-test-cases.md): Experimenting across prompt versions at scale helps you compare results for performance and quality scores. By running experiments across datasets of test cases, you can make more informed decisions, prevent regressions, and push to production with confidence and speed.
- [Compare Prompt versions](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/compare-prompt-versions.md): Track changes between different Prompt versions to understand what led to improvements or drops in quality.
- [Compare Prompts in the playground](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/compare-prompts-playground.md): Iterating on Prompts as your AI application evolves requires experiments across models, prompt structures, and more. To compare versions and make informed decisions about changes, the comparison playground offers a side-by-side view of results.
- [Create and manage Prompt versions](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/create-prompt-versions.md): As teams build their AI applications, a big part of experimentation is iterating on the prompt structure. To collaborate effectively and organize your changes clearly, Maxim allows prompt versioning and comparison runs across versions.
- [Deploy Prompts](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/deploy-prompts.md): Quick iterations on Prompts should not require code deployments every time. With more and more stakeholders working on prompt engineering, it's critical to keep deployments of Prompts as easy as possible without much overhead. Prompt deployments on Maxim allow conditional deployment of prompt changes that can be used via the SDK.
- [Experiment in the Prompt playground](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/experiment-in-prompt-playground.md): Create, refine, experiment with, and deploy your prompts via the playground. Organize your prompts using folders and versions, experiment with real-world cases by linking tools and context, and deploy based on custom logic.
- [Set up a human annotation pipeline](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/human-annotation-pipeline.md): Human annotation is critical to improving your AI quality. Getting human raters to provide feedback on various dimensions can help measure the present status and be used to improve the system over time. Maxim's human-in-the-loop pipeline allows team members as well as external raters like subject matter experts to annotate AI outputs.
- [Organize Prompts](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/organize-prompts.md): Building AI applications collaboratively requires Prompts to be organized well for easy reference and access. Adding Prompts to folders, tagging them, and versioning on Maxim helps you maintain a holistic Prompt CMS.
- [Query Prompts via SDK](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/querying-prompts.md): Learn how to efficiently query and retrieve prompts using Maxim AI's SDK, including deployment-specific and tag-based queries for streamlined prompt management (an illustrative sketch appears at the end of this page).
- [Measure the quality of your RAG pipeline](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/rag-quality.md): Retrieval quality directly impacts the quality of output from your AI application. While testing prompts, Maxim allows you to connect your RAG pipeline via a simple API endpoint and evaluates the retrieved context for every run. Context-specific evaluators for precision, recall, and relevance make it easy to see where retrieval quality is low.
- [Run a Prompt with tool calls](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/run-prompt-tool-calls.md): Ensuring your prompt selects the accurate tool call (function) is crucial for building reliable and efficient AI workflows. Maxim's playground allows you to attach your tools (API, code, or schema) and measure tool call accuracy for agentic systems.
- [Save and track Prompt experiments with sessions](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/save-prompt-session.md): Sessions act as a history by saving your prompt's complete state as you work. This allows you to experiment freely without fear of losing your progress.
- [Use Prompt partials in your Prompts](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-prompts/use-prompt-partials.md): Learn how to use Prompt partials within your Prompts
- [Automate workflow evaluation via CI/CD](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-workflows-via-api-endpoint/automate-via-ci-cd.md): Trigger test runs in CI/CD pipelines to evaluate workflows automatically.
- [Evaluate simulated sessions for agents](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-workflows-via-api-endpoint/evaluate-simulated-sessions-for-agents.md): Learn how to evaluate your AI agent's performance using automated simulated conversations. Get insights into how well your agent handles different scenarios and user interactions.
- [Transform API data with Workflow scripts](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-workflows-via-api-endpoint/scripting-to-configure-response-structures.md): Customize your API requests and responses using Workflow scripts
- [Simulate multi-turn conversations](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-workflows-via-api-endpoint/simulate-multi-turn-conversations.md): Test your AI's conversational abilities with realistic, scenario-based simulations
- [Test multi-turn conversations manually](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-workflows-via-api-endpoint/test-multi-turn-conversations-manually.md): Learn how to test and simulate multi-turn conversations with your AI endpoint using Maxim's interactive Workflows
- [Test your AI application using an API endpoint](https://www.getmaxim.ai/docs/evaluate/how-to/evaluate-workflows-via-api-endpoint/test-your-ai-outputs-using-application-endpoint.md): Expose your AI application to Maxim using your existing API endpoint
- [Customize and share reports](https://www.getmaxim.ai/docs/evaluate/how-to/optimize-evaluation-processes/customize-share-reports.md): The run report is a single source of truth for understanding exactly how your AI system is performing during your experiments or pre-release testing. You can customize reports to gain insights and make decisions.
- [Re-use your test configurations using presets](https://www.getmaxim.ai/docs/evaluate/how-to/optimize-evaluation-processes/re-use-configuration-via-presets.md): As your team starts running tests regularly on your entities, configuring tests and seeing results should be simple and quick. Test presets help you reuse your configurations with a single click, reducing the time it takes to start a run. You can create labeled presets combining a dataset and evaluators and use them with any entity you want to test.
- [Receive notifications for Test Run status](https://www.getmaxim.ai/docs/evaluate/how-to/optimize-evaluation-processes/receive-notifications-test-runs.md): Test runs are a core part of continuous testing workflows and can be triggered via the UI or in the CI/CD pipeline. Teams need visibility into triggered runs, status updates, and result summaries without having to constantly check the dashboard. Integrations with Slack and PagerDuty allow notifications to be configured for some of these events.
- [Scheduled test runs](https://www.getmaxim.ai/docs/evaluate/how-to/scheduled-test-runs.md): Learn how to schedule test runs for your prompts, prompt chains, and workflows at a regular interval.
- [Trigger Test Runs using SDK](https://www.getmaxim.ai/docs/evaluate/how-to/trigger-test-runs-using-sdk.md): Learn how to programmatically trigger test runs using Maxim's SDK with custom datasets, flexible output functions, and evaluations for your AI applications.
- [Overview](https://www.getmaxim.ai/docs/evaluate/overview.md): Learn how to evaluate AI application performance through prompt testing, workflow automation, and continuous log monitoring. Streamline your AI testing pipeline with comprehensive evaluation tools.
- [Test your AI application via an API endpoint](https://www.getmaxim.ai/docs/evaluate/quickstart/run-your-first-test-on-api-workflow.md): Run your first test on an AI application via HTTP endpoint with ease, no code changes needed.
- [Run your first Prompt test](https://www.getmaxim.ai/docs/evaluate/quickstart/run-your-first-test-on-prompt.md): Test your Prompts with Datasets and Evaluators in minutes. View results across your test cases to find areas that work well or need improvement.
- [Test your first Prompt Chain](https://www.getmaxim.ai/docs/evaluate/quickstart/run-your-first-test-on-prompt-chains.md): Test your agentic workflows using Prompt Chains with Datasets and Evaluators in minutes. View results across your test cases to find areas that work well or need improvement.
- [Test multi-turn AI conversations](https://www.getmaxim.ai/docs/evaluate/quickstart/simulate-and-evaluate-multi-turn-conversations.md): Evaluate AI chat interactions automatically using conversation simulation, without code changes
- [Execute an evaluator](https://www.getmaxim.ai/docs/evaluators/evaluator/execute-an-evaluator.md): Execute an evaluator to assess content based on predefined criteria and return grading results, reasoning, and execution logs
- [Get evaluators](https://www.getmaxim.ai/docs/evaluators/evaluator/get-evaluators.md): Get an evaluator by ID or name, or fetch all evaluators for a workspace
- [Get Folder Contents](https://www.getmaxim.ai/docs/folders/folder-contents/get-folder-contents.md): Get the contents (entities) of a specific folder, identified by folderId or name+parentFolderId.
- [Create Folder](https://www.getmaxim.ai/docs/folders/folder/create-folder.md): Create a new folder for organizing entities
- [Get Folders](https://www.getmaxim.ai/docs/folders/folder/get-folders.md): Get folder details. If id or name is provided, returns a single folder object. Otherwise, lists sub-folders under the parentFolderId (or root).
- [Create Integration](https://www.getmaxim.ai/docs/integrations/integration/create-integration.md): Create a new integration for notification channels
- [Delete Integration](https://www.getmaxim.ai/docs/integrations/integration/delete-integration.md): Delete an integration
- [Get Integrations](https://www.getmaxim.ai/docs/integrations/integration/get-integrations.md): Get integrations for a workspace
- [Update Integration](https://www.getmaxim.ai/docs/integrations/integration/update-integration.md): Update an integration
- [Overview](https://www.getmaxim.ai/docs/introduction/overview.md): Maxim streamlines AI application development and deployment by applying traditional software best practices to non-deterministic AI workflows.
- [Running your first test](https://www.getmaxim.ai/docs/introduction/quickstart/running-first-test.md): Learn how to get started with your first test run in Maxim
- [Setting up your workspace](https://www.getmaxim.ai/docs/introduction/quickstart/setting-up-workspace.md): Learn how to set up workspaces, invite team members, and manage role-based access control (RBAC) in Maxim. Streamline your AI project organization and control user permissions within your enterprise.
- [Concepts](https://www.getmaxim.ai/docs/library/concepts.md): Explore key concepts in AI evaluation, including evaluators, datasets, and custom tools for assessing model performance and output quality.
- [Bring your RAG via an API endpoint](https://www.getmaxim.ai/docs/library/how-to/context-sources/bring-your-rag-via-an-api-endpoint.md): To integrate RAG context into Maxim, you need to create a context source and add your RAG context API endpoint. This context source can then be used in prompts and workflows for inferencing, enabling the model to access and utilize the relevant context during processing.
- [Evaluate your context](https://www.getmaxim.ai/docs/library/how-to/context-sources/evaluate-your-context.md): Learn how to evaluate the quality and effectiveness of your RAG context sources in Maxim for improved AI performance and accuracy.
- [Ingest files as a context source](https://www.getmaxim.ai/docs/library/how-to/context-sources/ingest-files-as-a-context-source.md): Ingest files as a context source in Maxim to enable RAG context for your GenAI application.
- [Add Dataset entries using SDK](https://www.getmaxim.ai/docs/library/how-to/datasets/add-new-entries-using-sdk.md): Learn how to add new entries to a Dataset using the Maxim SDK
- [Create a Dataset with images](https://www.getmaxim.ai/docs/library/how-to/datasets/create-dataset-with-files-and-images.md): Learn how to create a Dataset with images
- [Curate data from production](https://www.getmaxim.ai/docs/library/how-to/datasets/curate-data-from-production.md): Learn how to extract and transform production logs into structured Datasets for model training and evaluation
- [Curate a golden Dataset from Human Annotation](https://www.getmaxim.ai/docs/library/how-to/datasets/curate-golden-dataset-for-human-annotation.md): Learn how to curate a golden Dataset for human annotation
- [Create a Dataset using templates](https://www.getmaxim.ai/docs/library/how-to/datasets/use-dataset-templates.md): Datasets are collections of data used for training, testing, and evaluating AI models within workflows and evaluations. Test your prompts, workflows or chains across test cases in this dataset and view results at scale. Begin with a template and customize column structure. Evolve your datasets over time from production logs or human annotation.
- [Use splits within a Dataset](https://www.getmaxim.ai/docs/library/how-to/datasets/use-splits-within-a-dataset.md): Learn how to use splits within a Dataset
- [Use variable columns in Datasets](https://www.getmaxim.ai/docs/library/how-to/datasets/use-variable-columns-in-datasets.md): Learn how to use variable columns in datasets
- [Bring your existing Evaluators via API](https://www.getmaxim.ai/docs/library/how-to/evaluators/create-api-evaluators.md): Connect your evaluation system to Maxim using simple API endpoints.
- [Create custom AI Evaluators](https://www.getmaxim.ai/docs/library/how-to/evaluators/create-custom-ai-evaluator.md): Learn how to create custom AI Evaluators when built-in Evaluators don't meet your specific evaluation needs.
- [Set up human evaluation](https://www.getmaxim.ai/docs/library/how-to/evaluators/create-human-evaluators.md): Set up human raters to review and assess AI outputs for quality control
- [Create Programmatic Evaluators](https://www.getmaxim.ai/docs/library/how-to/evaluators/create-programmatic-evaluator.md): Build custom code-based evaluators using JavaScript or Python
- [Use pre-built Evaluators](https://www.getmaxim.ai/docs/library/how-to/evaluators/use-pre-built-evaluators.md): Get started quickly with ready-made evaluators for common AI evaluation scenarios
- [Create Prompt Partials](https://www.getmaxim.ai/docs/library/how-to/prompt-partials/create-prompt-partial.md): Store common prompt elements as reusable snippets that you can include across different prompts, helping you maintain consistency and reduce repetition.
- [Create a code-based Prompt Tool](https://www.getmaxim.ai/docs/library/how-to/prompt-tools/create-a-code-tool.md): Code-based Prompt Tools allow you to create custom functions directly within the editor. This guide will show you how to create and test these tools.
- [Create a Schema-based Prompt Tool](https://www.getmaxim.ai/docs/library/how-to/prompt-tools/create-a-tool-schema.md): Schema-based prompt tools provide a structured way to define tools that ensure accurate and schema-compliant outputs. This approach is particularly useful when you need to guarantee that the LLM's responses follow a specific format.
- [Create an API-based Prompt Tool](https://www.getmaxim.ai/docs/library/how-to/prompt-tools/create-an-api-tool.md): Maxim allows you to expose external API endpoints as prompt tools. The platform automatically generates function schemas based on the API's query parameters and payload structure.
- [Evaluate Tool Call Accuracy](https://www.getmaxim.ai/docs/library/how-to/prompt-tools/evaluate-tool-call-accuracy.md): Learn how to evaluate the accuracy of tool calls
- [Overview](https://www.getmaxim.ai/docs/library/overview.md)
- [Create a new log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/create-a-new-log-repository.md): Create a new log repository
- [Delete a log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/delete-a-log-repository.md): Delete a log repository
- [Get log repositories](https://www.getmaxim.ai/docs/log repositories/log-repository/get-log-repositories.md): Get log repositories
- [Get trace by ID](https://www.getmaxim.ai/docs/log repositories/log-repository/get-trace-by-id.md): Get a specific trace by ID
- [Search logs in a log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/search-logs-in-a-log-repository.md): Search logs in a log repository
- [Update log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/update-log-repository.md): Update log repository
- [Concepts](https://www.getmaxim.ai/docs/observe/concepts.md): Learn about the key concepts of Maxim's AI Observability.
- [Set up auto evaluation on logs](https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/auto-evaluation.md): Evaluate captured logs automatically from the UI based on filters and sampling
- [Set up human evaluation on logs](https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/human-evaluation.md): Use human evaluation or ratings to assess the quality of your logs.
- [Node level evaluation](https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/node-level-evaluation.md): Evaluate any component of your trace or log to gain insights into your agent's behavior.
- [Use spans to group units of work](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/add-spans-to-traces.md): Spans help you organize and track requests across microservices within traces. A trace represents the entire journey of a request through your system, while spans are smaller units of work within that trace.
- [Log LLM generations in your AI application traces](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/adding-llm-call.md): Use generations to log individual calls to Large Language Models (LLMs)
- [Export logs and evaluation results as CSV](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/export-logs.md): Learn how to export your logs and evaluation results as a CSV file, enabling easy analysis and reporting of your AI application's performance data.
- [Configure filters and saved views](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/filters-and-saved-views.md): Learn how to efficiently filter and organize your logs with custom criteria and saved views for streamlined debugging and quick access to frequently used search patterns.
- [Log multi-turn interactions as a session](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/log-multiturn-interactions-as-session.md): Learn how to group related traces into sessions to track complete user interactions with your GenAI system.
- [Capture your RAG pipeline](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/logging-rag-pipeline.md): Retrieval-Augmented Generation (RAG) is a technique that enhances large language models by retrieving relevant information from external sources before generating responses.
- [Send feedback for AI application traces](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/send-user-feedback.md): Track and collect user feedback in application traces using Maxim's Feedback entity. Enhance your AI applications with structured user ratings and comments
- [Setting up your first trace](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/setting-up-trace.md): Learn how to set up tracing using the Maxim platform (an illustrative sketch appears at the end of this page)
- [Set up automated email summaries to monitor your logs](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/summary-emails.md): Learn how to set up and manage weekly summary emails for your log repository
- [Track errors in traces](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/track-llm-errors.md): Learn how to effectively track and log errors from LLM results and Tool calls in your AI application traces to improve performance and reliability.
- [Track token usage and costs](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/track-token-usage-and-cost.md): Learn how to efficiently track token usage and associated costs in your LLM application using Maxim's logging capabilities.
- [Track tool calls](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/track-tool-calls.md): Track external system calls triggered by LLM responses in your agentic workflows. Tool calls represent interactions with external services, allowing you to monitor execution time and responses.
- [Set up custom token pricing](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/use-custom-pricing.md): Learn how to set up custom token pricing in Maxim for accurate cost reporting in AI evaluations and logs, ensuring displayed costs match your actual expenses.
- [Connect your logs to external platforms](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/use-data-connectors.md): Learn how to integrate Maxim with external observability platforms using data connectors for enhanced log analysis and monitoring.
- [Use Events to send point-in-time information](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/use-events.md): Track application milestones and state changes using event logging
- [Use tags on nodes](https://www.getmaxim.ai/docs/observe/how-to/log-your-application/use-tags-on-nodes.md): Tag your traces to group and filter workflow data effectively. Add tags to any node type - spans, generations, retrievals, events, and more.
- [Create a PagerDuty integration](https://www.getmaxim.ai/docs/observe/how-to/set-up-alerts/create-a-pagerduty-integration.md): Send alert notifications to your PagerDuty service by creating a PagerDuty integration in Maxim.
- [Create a Slack integration](https://www.getmaxim.ai/docs/observe/how-to/set-up-alerts/create-a-slack-integration.md): Send alert notifications directly to your Slack channels by creating a Slack integration in Maxim.
- [Setting up alerts for performance metrics](https://www.getmaxim.ai/docs/observe/how-to/set-up-alerts/set-up-alerts-for-performance-metrics.md): Learn how to set up alerts to monitor your application's performance metrics in Maxim.
- [Set up alerts for quality metrics](https://www.getmaxim.ai/docs/observe/how-to/set-up-alerts/set-up-alerts-for-quality-metrics.md): Learn how to set up alerts to monitor evaluation scores and quality checks in Maxim.
- [OpenAI Agents SDK](https://www.getmaxim.ai/docs/observe/integrations/openai-agents-sdk.md): How to integrate Maxim's observability and real-time evaluation capabilities with the OpenAI Agents SDK.
- [Overview](https://www.getmaxim.ai/docs/observe/overview.md): Monitor AI applications in real-time with Maxim's enterprise-grade LLM observability platform.
- [Quickstart](https://www.getmaxim.ai/docs/observe/quickstart.md): Set up distributed tracing for your GenAI applications to monitor performance and debug issues across services.
- [Get Prompt Config](https://www.getmaxim.ai/docs/prompts/prompt-config/get-prompt-config.md): Get prompt configuration
- [Update Prompt Config](https://www.getmaxim.ai/docs/prompts/prompt-config/update-prompt-config.md): Update prompt configuration
- [Deploy Prompt Version](https://www.getmaxim.ai/docs/prompts/prompt-deployment/deploy-prompt-version.md): Deploy a prompt version
- [Create a prompt version](https://www.getmaxim.ai/docs/prompts/prompt-version/create-a-prompt-version.md): Create a prompt version
- [Get Prompt Versions](https://www.getmaxim.ai/docs/prompts/prompt-version/get-prompt-versions.md): Get versions of a prompt
- [Run Prompt Version](https://www.getmaxim.ai/docs/prompts/prompt-version/run-prompt-version.md): Run a specific version of a prompt
- [Create Prompt](https://www.getmaxim.ai/docs/prompts/prompt/create-prompt.md): Create a new prompt
- [Delete Prompt](https://www.getmaxim.ai/docs/prompts/prompt/delete-prompt.md): Delete a prompt
- [Get Prompts](https://www.getmaxim.ai/docs/prompts/prompt/get-prompts.md): Get prompts for a workspace
- [Update Prompt](https://www.getmaxim.ai/docs/prompts/prompt/update-prompt.md): Update an existing prompt
- [Overview](https://www.getmaxim.ai/docs/public-apis/overview.md): Welcome to the Maxim API documentation. This guide provides comprehensive information about our available APIs, their endpoints, and how to use them.
- [Introduction](https://www.getmaxim.ai/docs/sdk/overview.md): Dive into the Maxim SDK to supercharge your AI application development
- [Langchain with & without streaming](https://www.getmaxim.ai/docs/sdk/python/integrations/langchain/langchain.md): Learn how to integrate Maxim observability with LangChain OpenAI calls.
- [Tavily Search & LangGraph Agent with Maxim Observability](https://www.getmaxim.ai/docs/sdk/python/integrations/langgraph/langgraph.md): Tutorial showing how to integrate the Tavily Search API with LangChain and LangGraph, plus instrumentation using Maxim for full observability in just 5 lines.
- [LiteLLM Proxy one-line integration](https://www.getmaxim.ai/docs/sdk/python/integrations/litellm/litellm-proxy.md): Learn how to integrate Maxim with the LiteLLM Proxy
- [LiteLLM SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/litellm/litellm-sdk.md): Learn how to integrate Maxim with LiteLLM for tracing and monitoring
- [Agents SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/openai/agents-sdk.md): Learn how to integrate Maxim with the OpenAI Agents SDK
- [OpenAI SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/openai/one-line-integration.md): Learn how to integrate Maxim observability with the OpenAI SDK in just one line of code.
- [Overview](https://www.getmaxim.ai/docs/sdk/python/overview.md): Introduction to the Maxim Python SDK.
- [Upgrading to v3](https://www.getmaxim.ai/docs/sdk/python/upgrading-to-v3.md): Changes in the Maxim SDK
- [Data plane deployment](https://www.getmaxim.ai/docs/self-hosting/dataplane.md): This guide details Maxim's data plane deployment process, outlining how to establish data processing infrastructure within your cloud environment. It emphasizes enhanced security, control, and data tenancy, ensuring compliance with data residency requirements while leveraging cloud-based services.
- [Overview](https://www.getmaxim.ai/docs/self-hosting/overview.md): Maxim offers self-hosting and flexible enterprise deployment options with either full VPC isolation (Zero Touch) or a hybrid setup with secure VPC peering (Data Plane), tailored to your security needs.
- [Zero Touch Deployment](https://www.getmaxim.ai/docs/self-hosting/zerotouch.md): This guide outlines Maxim's zero-touch deployment process, covering infrastructure components, security protocols, and supported cloud providers.
- [Get test run entries](https://www.getmaxim.ai/docs/test run entries/test-run-entries/get-test-run-entries.md): Get test run entries
- [Share test run report](https://www.getmaxim.ai/docs/test run reports/test-run-report/share-test-run-report.md): Share a test run report
- [Delete test runs](https://www.getmaxim.ai/docs/test runs/test-run/delete-test-runs.md): Delete test runs from a workspace
- [Get test runs](https://www.getmaxim.ai/docs/test runs/test-run/get-test-runs.md): Get test runs for a workspace

## Optional

- [Blog](https://www.getmaxim.ai/blog)
- [Cookbooks](https://github.com/maximhq/maxim-cookbooks)
- [Tutorials](https://www.youtube.com/playlist?list=PLJh32rQ0yHHIC_nNZ6i2taEzAYiH8s6rP)
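
## Illustrative SDK sketches

The observability pages above ("Setting up your first trace", "Use spans to group units of work", "Track token usage and costs") all revolve around one flow: create a logger bound to a log repository, open a trace per request, attach work to it, and flush. The sketch below illustrates that flow in Python. It is a minimal sketch, not a verbatim copy of the Maxim API: the imports and class names follow the v3 Python SDK referenced in "Upgrading to v3", but exact signatures are assumptions to verify against the linked docs, and the API key, repository ID, and trace name are placeholders.

```python
# Minimal tracing sketch. Identifiers follow the Maxim Python SDK (v3) as
# described in the docs linked above, but treat exact signatures as
# assumptions; the API key and repository ID are placeholders.
from uuid import uuid4

from maxim import Config, Maxim
from maxim.logger import LoggerConfig, TraceConfig

# Authenticate and bind a logger to a log repository.
maxim = Maxim(Config(api_key="YOUR_MAXIM_API_KEY"))
logger = maxim.logger(LoggerConfig(id="YOUR_LOG_REPOSITORY_ID"))

# A trace is one request's journey through your system; spans, generations,
# and events hang off it (see "Use spans to group units of work").
trace = logger.trace(TraceConfig(id=str(uuid4()), name="support-email-triage"))
trace.set_input("My order arrived damaged, what now?")
trace.set_output("Ticket created; a replacement is on its way.")
trace.end()

# Deliver any buffered logs before the process exits.
logger.flush()
```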
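Likewise, the "Deploy Prompts" and "Query Prompts via SDK" entries describe fetching whichever prompt version is deployed for a given set of deployment variables, so prompt changes ship without code deployments. A hedged sketch of that query follows; the `QueryBuilder` pattern and `get_prompt` call are modeled on the "Query Prompts via SDK" page, but the prompt ID and the deployment variable name and value are examples only, and names should be confirmed against the linked docs.

```python
# Illustrative prompt-query sketch; verify exact names against the
# "Query Prompts via SDK" page. IDs and variable values are placeholders.
from maxim import Config, Maxim
from maxim.models import QueryBuilder

# prompt_management enables prompt fetching in the SDK (per the docs above).
maxim = Maxim(Config(api_key="YOUR_MAXIM_API_KEY", prompt_management=True))

# Fetch the version of this prompt deployed for the "prod" environment.
prompt = maxim.get_prompt(
    "YOUR_PROMPT_ID",
    QueryBuilder().and_().deployment_var("Environment", "prod").build(),
)
```

The design intent, per the deployment pages above, is that the query resolves against whatever is currently deployed, so rolling out a new prompt version on Maxim changes what this call returns without any application redeploy.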