In the previous section, we explored the quick integration of Maxim observability using decorators. However, there are scenarios where you need finer control over how your code is traced. This section walks you through manually integrating Maxim observability into your codebase, giving you more flexibility and granular control over your observability implementation.
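All of the snippets in this section assume you already have a `logger` instance. If you don't, a minimal setup sketch looks like the following; the `api_key` and log repository id values are placeholders you would replace with your own:

```python
from maxim import Maxim, Config
from maxim.logger import LoggerConfig

# initialize the SDK and get a logger bound to a log repository
maxim = Maxim(Config(api_key="your-api-key"))
logger = maxim.logger(LoggerConfig(id="your-log-repository-id"))
```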
```python
from maxim.logger import TraceConfig

# ... in function 1
trace = logger.trace(TraceConfig(id="trace-id"))

# ... in function 2
trace = logger.trace(TraceConfig(id="trace-id"))
# it returns the same trace object
trace.add_tag({"key": "value"})
```
```python
from maxim.logger import SessionConfig

# ... in function 1
session = logger.session(SessionConfig(id="session-id"))

# ... in function 2
session = logger.session(SessionConfig(id="session-id"))
# it returns the same session object
session.add_tag({"key": "value"})
```
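Traces can also be grouped under a session. Assuming the session object exposes a `trace` method analogous to `logger.trace`, attaching a trace to a session looks like this sketch:

```python
from maxim.logger import TraceConfig

# a trace created via the session is linked to it automatically
trace = session.trace(TraceConfig(id="trace-id"))
```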
A Retrieval is a special type of Span in Maxim that represents a query made to a knowledge base or vector database.
Adding a Retrieval to a Span
```python
from maxim.logger import RetrievalConfig

retrieval = span.retrieval(RetrievalConfig(id="retrieval-id", name="Test Retrieval"))
retrieval.input("How many PTO days do I have?")
retrieval.output(["doc1", "doc2"])
retrieval.end()
```
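The snippet above assumes an existing `span` object. If you don't have one yet, a minimal sketch of creating a span on a trace, assuming `SpanConfig` follows the same pattern as the other config classes, is:

```python
from maxim.logger import SpanConfig

# group related work (e.g. a RAG pipeline) under one span of the trace
span = trace.span(SpanConfig(id="span-id", name="rag-pipeline"))
```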
A Tool Call is a special type of Span in Maxim that represents a call to an external system or service, made based on an LLM response.
Adding a Tool Call to a Trace
```python
import json

from maxim.logger import ToolCallConfig

tool_call = completion.choices[0].message.tool_calls[0]
# OpenAI-style responses return function arguments as a JSON string,
# so parse them before accessing individual fields
arguments = json.loads(tool_call.function.arguments)

tool_call_config = ToolCallConfig(
    id=tool_call.id,
    name=tool_call.function.name,
    description="Get current temperature for a given location.",
    args=tool_call.function.arguments,
    tags={"location": arguments["location"]},
)
trace_tool_call = trace.tool_call(tool_call_config)

result = call_external_service(tool_call.function.name, arguments)
trace_tool_call.result(result)
```
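Note that OpenAI-style completions return function arguments as a JSON string, which is why the snippet parses them with `json.loads` before indexing into them. `call_external_service` stands in for whatever dispatch logic your application uses to actually execute the tool; its result is then recorded on the tool call via `result()`.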
Maxim supports node-level evaluation, which lets you evaluate each individual node in your trace. Use it to measure the quality of a node's output and to guide improvements to that node's performance.
To evaluate a node, call the node's `evaluate` method.
Evaluating a node
```python
# for this example we are evaluating a particular generation,
# but you can evaluate any node in your trace similarly

# ...receive user input and process it
generation.evaluate().with_evaluators("clarity", "toxicity").with_variables(
    {"input": user_input}
)

# ...generate llm response
generation.evaluate().with_variables(
    {"output": llm_response.choices[0].message.content},
    ["clarity", "toxicity"],
)

# ...code continues
```
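The snippet above assumes a `generation` has already been created on the trace. A minimal sketch of creating one, with illustrative model, provider, and message values:

```python
from maxim.logger import GenerationConfig

# record the LLM call as a generation node on the trace
generation = trace.generation(GenerationConfig(
    id="generation-id",
    model="gpt-4o",
    provider="openai",
    messages=[{"role": "user", "content": user_input}],
))
```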
More information about Agentic Evaluation can be found in the Agentic Evaluation section under Observability -> Evaluating Logs.