Before you start logging, you will have to create a log repository on the Maxim dashboard. To create a log repository, click on Logs and then click the + icon on the sidebar. You can use the ID of that repository to push logs.
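With the repository ID in hand, initialize the SDK once and create a logger from it. A minimal setup sketch, assuming the core package is @maximai/maxim-js (the API key and repository ID are placeholders):
JS/TS
// Assumed core package name; the LangChain helper ships as @maximai/maxim-js-langchain
import { Maxim } from "@maximai/maxim-js";

const maxim = new Maxim({ apiKey: "maxim-api-key" });
const logger = await maxim.logger({ id: "log-repository-id" });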
Unlike most logging frameworks, the Maxim logging SDK is completely stateless. This means you can create a logger object and use it across multiple services, nodes, or functions. You don't need to pass it around or maintain any thread pools to sequence logs.
Let's take an example with the following two services:
// Service 1
const logger = await maxim.logger({ id: "log-repository-id" });
const trace = logger.trace({ id: "trace-id" });
trace.event({ id: "event-id", name: "event-name" });
// Service 2
const logger = await maxim.logger({ id: "log-repository-id" });
// Here you are adding a generation to the same trace
const generation = logger.traceGeneration("trace-id", ...);
You will have to initialize a logger for each log repository you want to log into.
JS/TS Python Go
const logger = await maxim.logger({ id: "log-repository-id" });
If the corresponding log repository is not found, this call will throw an error.
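If you want to fail gracefully instead, you can wrap the call. A minimal sketch:
JS/TS
try {
  const logger = await maxim.logger({ id: "log-repository-id" });
} catch (error) {
  // Thrown when the log repository cannot be found
  console.error("Failed to initialize Maxim logger", error);
}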
JS/TS Python Go
const trace = logger.trace({
  id: "trace-id", // required
  name: "trace-name", // optional
  tags: { key: "value" }, // optional, you can filter logs based on metadata
});
JS/TS Python Go
const session = logger.session({
  id: "session-id", // required
  name: "session-name", // optional
  tags: { key: "value" }, // optional, you can filter logs based on metadata
});
Once the session object is created, you can add multiple traces to it across the lifecycle of the conversation.
JS/TS Python Go
const session = logger.session({ id: "session-id" });
// If a trace with the same id already exists, this returns that trace object; otherwise it creates a new one.
const trace = session.trace({ id: "trace-id" });
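For example, a multi-turn conversation can log each turn as its own trace under one session. A sketch with illustrative ids:
JS/TS
const session = logger.session({ id: "conversation-123" });

// Turn 1 of the conversation
const turn1 = session.trace({ id: "turn-1" });
turn1.event({ id: "evt-1", name: "user-message-received" });

// Turn 2, later in the same conversation
const turn2 = session.trace({ id: "turn-2" });
turn2.event({ id: "evt-2", name: "user-message-received" });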
Using custom ids allows you to fetch a trace in any function using the same id. This can be useful for updating a trace across your workflow.
JS/TS Python Go
// ... in function 1
const trace = logger.trace({ id: "trace-id" });

// ... in function 2
const trace = logger.trace({ id: "trace-id" });
// it returns the same trace object
trace.addTag({ key: "value" });
JS/TS Python Go
// ... in function 1
const session = logger.session({ id: "session-id" });

// ... in function 2
const session = logger.session({ id: "session-id" });
// it returns the same session object
session.addTag({ key: "value" });
Once you have a trace object, you can add spans, events, retrievals, generations, and feedback to it.
A span (short for timespan) groups a set of related items (events, retrievals, generations, feedback).
JS/TS Python Go
const span = trace.span({
  id: "span-id", // optional
  name: "name", // optional
  tags: { key: "value" }, // optional
});
span.event({ id: "event-id", ... });
span.end();
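Because a span groups retrievals and generations as well as events, you can attach those items to the span instead of directly to the trace. A sketch, assuming span.retrieval mirrors the trace.retrieval method shown later in this section:
JS/TS
const span = trace.span({ id: "rag-step", name: "answer-question" });

// Assumption: span exposes the same retrieval method as trace
const retrieval = span.retrieval({ id: "retrieval-1" });
retrieval.input("How many PTO days do I have?");
retrieval.output(["doc1", "doc2"]);
retrieval.end();

span.end();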
An event is a point in time when something happened in the system.
JS/TS Python Go
await trace.event({
  id: "event-id", // optional
  name: "name", // optional
  tags: { key: "value" }, // optional
});
A retrieval is a point in time when something was retrieved using your RAG pipeline or database.
JS/TS Python Go
const retrieval = trace.retrieval({
  id: "retrieval-id", // optional
  name: "name", // optional
  metadata: { key: "value" }, // optional
});
retrieval.input("How many PTO days do I have?");
retrieval.output(["doc1", "doc2"]);
retrieval.end();
A generation is a point in time when something was generated in the system.
generation.result expects the result to be in the OpenAI response format; here is the reference to the OpenAI response format.
JS/TS Python Go
const generation = await trace.generation({
  id: "generation-id", // optional
  name: "name", // optional
  maximPromptId: "prompt-id", // optional
  provider: "openai",
  model: "gpt-3.5-turbo-16k",
  messages: [{ role: "system", content: "This is the main prompt" }], // optional
  modelParameters: { key: "value" },
  tags: { key: "value" }, // optional
});
generation.addMessage({
  role: "user",
  content: "This is the main prompt",
  tool_calls: [{}],
});
// The generation is marked as ended when you call the result method
generation.result({
  id: "",
  choices: [
    {
      role: "",
      content: "",
      tool_call: [
        {
          type: "function",
          name: "function-name",
          parameters: [
            {
              key: "value",
            },
          ],
        },
      ],
    },
  ],
  usage: {
    prompt_tokens: 100,
    completion_tokens: 50,
    total_tokens: 150,
  },
});
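Since generation.result expects the OpenAI response format, a chat completion from the OpenAI SDK can often be passed through with little transformation. A sketch, assuming you call the official openai package directly; depending on the exact shape the SDK accepts, you may need to flatten choices[].message into the choice objects shown above:
JS/TS
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo-16k",
  messages: [{ role: "user", content: "How many PTO days do I have?" }],
});

// The completion carries id, choices, and usage in OpenAI's format
generation.result(completion);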
A feedback is a point in time when feedback was given in the system.
JS/TS Python Go
trace.feedback({
  score: 0.5, // optional, score on a 0-1 scale
  feedback: "string feedback", // optional
});
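For example, a thumbs-up/thumbs-down control can be mapped onto the 0-1 score. A sketch; the handler name is hypothetical:
JS/TS
// Hypothetical handler wired to a thumbs-up/down UI control
function onUserFeedback(trace, thumbsUp, comment) {
  trace.feedback({
    score: thumbsUp ? 1 : 0, // map binary feedback onto the 0-1 scale
    feedback: comment, // optional free-form text
  });
}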
npm pnpm yarn bun
npm install @maximai/maxim-js-langchain
You can use Maxim's LangChain tracer, which can be passed as a callback to your LangChain LLM calls. It returns responses in the format the log repository accepts, so they can be passed directly to the respective logging functions.
JS/TS
import MaximLangchainTracer from "@maximai/maxim-js-langchain";
JS/TS
import { ChatOpenAI } from "@langchain/openai";
import { Maxim } from "@maximai/maxim-js";

const maxim = new Maxim({ apiKey: "maxim-api-key" });
const logger = await maxim.logger({ id: "log-repository-id" });

const maximTracer = new MaximLangchainTracer({
  onGenerationStart: (generationInfo) => {
    logger.traceGeneration("trace-id", generationInfo);
  },
  onGenerationEnd: (generationId, result) => {
    logger.generationResult(generationId, result);
  },
  onGenerationError: (generationId, error) => {
    logger.generationError(generationId, error);
  },
});

const llm = new ChatOpenAI({
  openAIApiKey: openAIKey,
  modelName: "gpt-4o",
  temperature: 0,
  callbacks: [maximTracer],
});

const query = "What's the sum of 3 and 2?";

// Create the trace these generations belong to
const trace = logger.trace({ id: "trace-id" });

// Optional step to set input of the trace
trace.input = query;

const result = await llm.invoke(query);

// Optional step to set output of the trace
trace.output = result;

// Ending the trace
logger.traceEnd("trace-id");
For now, the tracer only supports generations. We're adding support for more functions soon.