Observability

Logging

Before you start logging, you will have to create a log repository on the Maxim dashboard. To create one, click on Logs and then click the + icon on the sidebar. You can use the ID of that repository to push logs.

Basics of Maxim logging and tracing

Unlike most logging frameworks, the Maxim logging SDK is completely stateless. This means you can create a logger object and use it across multiple services, nodes, or functions. You don't need to pass it around or maintain thread pools to sequence logs.

Let's take an example with the following two services:

service1.ts
const logger = await maxim.logger({ id: "log-repository-id" });
const trace = logger.trace({ id: "trace-id" });
trace.event({ id: "event-id", name: "event-name" });
service2.ts
const logger = await maxim.logger({ id: "log-repository-id" });
// Here you are adding a generation to the same trace
const generation = logger.traceGeneration("trace-id", ...);

Initialize logger

You will have to initialize a logger for each log repository you want to push logs to.

const logger = await maxim.logger({ id: "log-repository-id" });

If the corresponding log repository is not found, this call will throw an error.
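
Since this call can throw, you may want to guard initialization. A minimal sketch, assuming the core SDK package is @maximai/maxim-js and plain try/catch (no Maxim-specific error types):

import { Maxim } from "@maximai/maxim-js";

const maxim = new Maxim({ apiKey: "maxim-api-key" });

let logger;
try {
	// Throws if "log-repository-id" does not exist in your workspace
	logger = await maxim.logger({ id: "log-repository-id" });
} catch (err) {
	console.error("Failed to initialize Maxim logger:", err);
	throw err;
}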

Logging one-shot logs with traces

const trace = logger.trace({
	id: "trace-id", // required
	name: "trace-name", // optional
	tags: { key: "value" }, // optional, you can filter logs based on metadata
});
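
Once the work a trace covers is done, you can end it by its id. A minimal sketch; logger.traceEnd is the same call used in the Langchain example below, and the event name here is just a placeholder:

const trace = logger.trace({ id: "trace-id" });
trace.event({ id: "event-id", name: "request-received" });
// ... add generations, retrievals, etc. ...
logger.traceEnd("trace-id");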

Logging multi-turn logs with sessions

const session = logger.session({
	id: "session-id", // required
	name: "session-name", // optional
	tags: { key: "value" }, // optional, you can filter logs based on metadata
});

Once the session object is created, you can add multiple traces across the lifecycle of the conversation.

const session = logger.session({ id: "session-id" });
 
// If a trace with the same id already exists, this returns that trace object; otherwise a new one is created.
const trace = session.trace({ id: "trace-id" });
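
For example, a chat application might create one trace per user turn within the same session. A minimal sketch; the session and turn ids are placeholders:

const session = logger.session({ id: "conversation-id" });

// Turn 1
const turn1 = session.trace({ id: "turn-1" });
turn1.event({ id: "turn-1-received", name: "user-message" });

// Turn 2, later in the same conversation
const turn2 = session.trace({ id: "turn-2" });
turn2.event({ id: "turn-2-received", name: "user-message" });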

Using custom ids for traces/sessions

Using custom ids allows you to fetch a trace in any function using the same id. This can be useful for updating a trace across your workflow.

Trace

// ... in function 1
const trace = logger.trace({ id: "trace-id" });
 
// ... in function 2
const trace = logger.trace({ id: "trace-id" });
// it returns the same trace object
trace.addTag({ key: "value" });

Session

// ... in function 1
const session = logger.session({ id: "session-id" });
 
// ... in function 2
const session = logger.session({ id: "session-id" });
// it returns the same session object
session.addTag({ key: "value" });

Elements of traces

Once you have a trace object, you can add the following elements:

Span

A span (short for timespan) groups a set of items (events, retrievals, generations, feedback).

const span = trace.span({
	id: "span-id", // optional
	name: "name", // optional
	tags: { key: "value" }, // optional
});
span.event({ id: "event-id", ... });
span.end();
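
As a sketch of how a span can group the elements of one pipeline step, assuming the span object exposes the same element methods as the trace (e.g. span.retrieval, which is an assumption, not confirmed API):

const span = trace.span({ id: "rag-step", name: "rag-pipeline" });
// Assumption: span.retrieval mirrors trace.retrieval
const retrieval = span.retrieval({ id: "retrieval-id" });
retrieval.input("How many PTO days do I have?");
retrieval.output(["doc1", "doc2"]);
retrieval.end();
span.event({ id: "docs-ranked", name: "documents-ranked" });
span.end();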

Event

An event is a point in time when something happened in the system.

await trace.event({
	id: "event-id", // optional
	name: "name", // optional
	tags: { key: "value" }, // optional
});

Retrieval

A retrieval is a point in time when something was retrieved using your RAG pipeline or database.

const retrieval = trace.retrieval({
	id: "retrieval-id", // optional
	name: "name", // optional
	metadata: { key: "value" }, // optional
});
retrieval.input("How many PTO days do I have?");
retrieval.output(["doc1", "doc2"]);
retrieval.end();

Generation

A generation is a point in time when something was generated in the system.

generation.result expects the result to be in the OpenAI response format. See OpenAI's API reference for the exact shape.

const generation = await trace.generation({
	id: "generation-id", // optional
	name: "name", // optional
	maximPromptId: "prompt-id", // optional
	provider: "openai",
	model: "gpt-3.5-turbo-16k",
	messages: [{ role: "system", content: "This is the main prompt" }], // optional
	modelParameters: { key: "value" },
	tags: { key: "value" }, // optional
});
generation.addMessage({
	role: "user",
	content: "This is the main prompt",
	tool_calls: [{}],
});
// The generation is marked as ended when you call the result method
generation.result({
	id: "chatcmpl-id",
	choices: [
		{
			index: 0,
			message: {
				role: "assistant",
				content: "This is the response",
				tool_calls: [
					{
						id: "call-id",
						type: "function",
						function: {
							name: "function-name",
							arguments: '{"key": "value"}',
						},
					},
				],
			},
			finish_reason: "stop",
		},
	],
	usage: {
		prompt_tokens: 100,
		completion_tokens: 50,
		total_tokens: 150,
	},
});
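
Since the payload mirrors the OpenAI response format, a chat completion from the official openai npm package can typically be passed through as-is. A minimal sketch; passing the raw completion object is an assumption about what generation.result accepts:

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const completion = await openai.chat.completions.create({
	model: "gpt-3.5-turbo-16k",
	messages: [{ role: "user", content: "This is the main prompt" }],
});
// The completion object is already in the OpenAI response format
generation.result(completion);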

Feedback

Feedback is a point in time when feedback was given in the system.

trace.feedback({
	score: 0.5, // optional, on a 0-1 scale
	feedback: "string feedback", // optional
});
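
For example, a thumbs-up/thumbs-down widget can be mapped onto the 0-1 score; the mapping below is just one convention:

// Map a binary rating onto the 0-1 score
const thumbsUp = true;
trace.feedback({
	score: thumbsUp ? 1 : 0, // 1 for thumbs-up, 0 for thumbs-down
	feedback: thumbsUp ? "thumbs-up" : "thumbs-down",
});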

Langchain Callbacks

Install

npm install @maximai/maxim-js-langchain

You can use Maxim's Langchain tracer, which can be passed as a callback to your Langchain LLM calls. It emits data in the format the log repository accepts, so it can be passed directly to the respective logging functions.

Import

import MaximLangchainTracer from "@maximai/maxim-js-langchain";

Usage

import { ChatOpenAI } from "@langchain/openai";

const maxim = new Maxim({ apiKey: "maxim-api-key" });
const logger = await maxim.logger({ id: "log-repository-id" });
const trace = logger.trace({ id: "trace-id" });
const maximTracer = new MaximLangchainTracer({
	onGenerationStart: (generationInfo) => {
		logger.traceGeneration("trace-id", generationInfo);
	},
	onGenerationEnd: (generationId, result) => {
		logger.generationResult(generationId, result);
	},
	onGenerationError: (generationId, error) => {
		logger.generationError(generationId, error);
	},
});
const llm = new ChatOpenAI({
	openAIApiKey: openAIKey,
	modelName: "gpt-4o",
	temperature: 0,
	callbacks: [maximTracer],
});
const query = "What's the sum of 3 and 2?";
// Optional step to set the input of the trace
trace.input = query;
const result = await llm.invoke(query);
// Optional step to set the output of the trace
trace.output = result;
// Ending the trace
logger.traceEnd("trace-id");

For now, we only support generations. We're adding support for more functions soon.
