Log your application

Log LLM generations in your AI application traces

Use generations to log individual calls to Large Language Models (LLMs).

Each trace or span can contain multiple generations.
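The trace-to-generation relationship can be sketched with a stand-in logger. This is not the Maxim SDK, only a minimal illustration of the hierarchy the snippets on this page rely on; `trace-id`, `gen-1`, and `gen-2` are placeholder values:

```typescript
// Stand-in types illustrating the trace -> generation hierarchy.
// This is NOT the Maxim SDK; it only mirrors the shape of the
// calls used in the snippets on this page.
type GenerationConfig = { id: string; name?: string };

class Trace {
  readonly generations: GenerationConfig[] = [];
  constructor(readonly id: string) {}
  // Each call attaches one more generation to the same trace
  generation(config: GenerationConfig): GenerationConfig {
    this.generations.push(config);
    return config;
  }
}

// One trace holding two generations, e.g. a gather-information
// call followed by a propose-resolution call
const trace = new Trace("trace-id");
trace.generation({ id: "gen-1", name: "gather-information" });
trace.generation({ id: "gen-2", name: "propose-resolution" });

console.log(trace.generations.length); // 2
```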

Send and record an LLM request

// Initialize a trace with a unique ID
const trace = logger.trace({id: "trace-id"});
 
// Adding a generation
const generation = trace.generation({
    id: "generation-id",
    name: "customer-support--gather-information",
    provider: "openai",
    model: "gpt-4o",
    modelParameters: { temperature: 0.7 },
    messages: [
        { "role": "system", "content": "you are a helpful assistant who helps gather customer information" },
        { "role": "user", "content": "My internet is not working" },
    ],
});
// Note: Replace 'trace.generation' with 'span.generation' when creating generations within an existing span
 
// Execute the LLM call
// const aiCompletion = await openai.chat.completions.create({ ... })

Record the LLM response

generation.result({
    id: "chatcmpl-123",
    object: "chat.completion",
    created: Math.floor(Date.now() / 1000), // Unix timestamp in seconds, matching OpenAI's chat.completion shape
    model: "gpt-4o",
    choices: [{
        index: 0,
        message: {
            role: "assistant",
            content: "Apologies for the inconvenience. Can you please share your customer id?"
        },
        finish_reason: "stop"
    }],
    usage: {
        prompt_tokens: 100,
        completion_tokens: 50,
        total_tokens: 150
    }
});
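If your provider's response is missing derived fields, you can normalize it before logging. A minimal sketch; `normalizeResult` is an illustrative helper, not part of the SDK, and assumes OpenAI-style field names (`created` in Unix seconds, token counts under `usage`):

```typescript
// Illustrative helper (not part of the Maxim SDK): fill in a missing
// timestamp and derive total_tokens before passing the payload to
// generation.result(...)
function normalizeResult(result: {
  created?: number;
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens?: number };
}) {
  return {
    ...result,
    // OpenAI-style `created` is a Unix timestamp in seconds
    created: result.created ?? Math.floor(Date.now() / 1000),
    usage: {
      ...result.usage,
      // Derive the total when the provider omits it
      total_tokens:
        result.usage.total_tokens ??
        result.usage.prompt_tokens + result.usage.completion_tokens,
    },
  };
}

const payload = normalizeResult({
  usage: { prompt_tokens: 100, completion_tokens: 50 },
});
console.log(payload.usage.total_tokens); // 150
```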
