
Quickstart

Set up distributed tracing for your GenAI applications to monitor performance and debug issues across services.

This guide demonstrates distributed tracing setup using an enterprise search chatbot (similar to Glean) example that:

  • Connects to company data sources (Google Drive, Dropbox)
  • Enables natural language search across data via Slack or web interface

System architecture

The application uses 5 microservices:

System architecture showing microservice components

  1. API Gateway: Authenticates users and routes API requests

  2. Planner: Creates execution plans for queries

  3. Intent detector: Analyzes query intent

  4. Answer generator: Creates prompts based on planner instructions and RAG context

  5. RAG pipeline: Retrieves relevant information from vector database
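
To make the flow concrete, here is a hedged sketch of how a query might move through these services once the gateway has authenticated it. The client interfaces are illustrative stand-ins for calls between the microservices, not part of the Maxim SDK:

interface Services {
    intentDetector: { analyze(query: string): Promise<string> };
    planner: { createPlan(query: string, intent: string): Promise<string> };
    ragPipeline: { retrieve(query: string): Promise<string[]> };
    answerGenerator: { generate(plan: string, context: string[]): Promise<string> };
}
 
// After authentication, the query flows through the remaining services in order
async function handleUserQuery(svc: Services, query: string): Promise<string> {
    const intent = await svc.intentDetector.analyze(query);
    const plan = await svc.planner.createPlan(query, intent);
    const context = await svc.ragPipeline.retrieve(query);
    return svc.answerGenerator.generate(plan, context);
}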

Setting up the Maxim dashboard

1. Create Maxim repository

Create a new repository called "Chatbot production".

2. Generate API key

Navigate to Settings → API Keys, then generate and save a new API key.

3. Install SDK

JS/TS
npm install @maximai/maxim-js

Python
pip install maxim-py

Go
go get github.com/maximhq/maxim-go

Java
compileOnly("ai.getmaxim:sdk:0.1.3")

4. Initialize logger

Add this code to initialize the logger in each service:

import { Maxim } from "@maximai/maxim-js";
 
const maxim = new Maxim({ apiKey: "api-key" });
const logger = await maxim.logger({ id: "log-repository-id" });
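
If you prefer not to hard-code credentials, the same initialization can read from the environment. The variable names below are illustrative, not required by the SDK:

import { Maxim } from "@maximai/maxim-js";
 
// MAXIM_API_KEY and MAXIM_LOG_REPO_ID are assumed names; the SDK only
// needs the raw strings
const maxim = new Maxim({ apiKey: process.env.MAXIM_API_KEY! });
const logger = await maxim.logger({ id: process.env.MAXIM_LOG_REPO_ID! });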

5. Create trace in API gateway

Use the cf-request-id request header as the trace identifier:

const trace = logger.trace({
    id: req.headers["cf-request-id"],
    name: "user-query",
    tags: {
        userId: req.body.userId,
        accountId: req.body.accountId
    },
});
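
Downstream services reuse this same identifier to attach their work to the trace. A minimal sketch of propagating it, assuming the gateway calls the planner over HTTP (the internal URL is hypothetical):

// Forward the request ID so the planner can look up the same trace
await fetch("http://planner.internal/plan", {
    method: "POST",
    headers: {
        "content-type": "application/json",
        "cf-request-id": String(req.headers["cf-request-id"]),
    },
    body: JSON.stringify({ query: req.body.query }),
});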

You can get hold of a trace in two ways:

// Method 1: Using logger and trace ID
logger.traceTag("trace-id", "newTag", "newValue");
logger.traceEnd("trace-id");
 
// Method 2: Using trace object
const trace = logger.trace({ id: "trace-id" });
trace.addTag("newTag", "newValue");
trace.end();

You can manipulate every entity of the Maxim observability framework (Span, Generation, Retrieval, Event) in the same way.
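
For example, assuming spans follow the same pattern as traces, the by-ID method looks like this (these calls mirror the trace methods above):

// By analogy with logger.traceTag / logger.traceEnd
logger.spanTag("span-id", "newTag", "newValue");
logger.spanEnd("span-id");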

6. Add spans in services

Create spans to track operations in each service:

import { v4 as uuid } from "uuid";
 
// Getting hold of the trace using the request ID / trace ID
const trace = logger.trace({ id: req.headers["cf-request-id"] });
// Creating a new span
const span = trace.span({
    id: uuid(),
    name: "plan-query",
    tags: {
        userId: req.body.userId,
        accountId: req.body.accountId
    },
});

When creating spans, consider adding relevant tags that provide context about the operation being performed. These tags help in filtering and analyzing traces later. Remember to end each span once its operation completes to ensure accurate timing measurements.
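
For example, once the planner has produced its plan, you might close the span like this (per the note above, span objects support the same methods as traces; the tag name is illustrative):

// Tag the outcome and close the span so its duration is recorded
span.addTag("status", "success");
span.end();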

7. Log LLM calls

Track LLM interactions using generations:

// Creating a new generation
const generation = span.generation({
    id: uuid(),
    name: "plan-query",
    provider: "openai",
    model: "gpt-3.5-turbo-16k",
    modelParameters: { temperature: 0.7 },
    tags: {
        userId: req.body.userId,
        accountId: req.body.accountId
    },
});

Log LLM responses:

generation.result({
    id: uuid(),
    object: "chat.completion",
    created: Date.now(),
    model: "gpt-3.5-turbo-16k",
    choices: [{
        index: 0,
        message: {
            role: "assistant",
            content: "response"
        },
        finish_reason: "stop"
    }],
    usage: {
        prompt_tokens: 100,
        completion_tokens: 50,
        total_tokens: 150
    }
});
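
In practice, this object can come straight from your provider. A minimal sketch, assuming the official OpenAI Node SDK and that generation.result accepts the raw chat.completion response (its shape matches the example above):

import OpenAI from "openai";
 
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
 
const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-16k",
    temperature: 0.7,
    messages: [{ role: "user", content: "plan this query" }],
});
 
// The response carries id, object, created, model, choices and usage,
// matching the shape logged above
generation.result(completion);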

Maxim currently supports the OpenAI message format. Use the SDK to convert other message formats to the OpenAI format.

View traces

Access your traces in the Maxim dashboard within seconds of logging. The dashboard shows:

  • Complete request lifecycle
  • Durations and relationships of entities (spans and traces)
  • LLM generation details
  • Performance metrics
