Observability: Python

Getting started

Before you start logging, you will have to create a log repository on the Maxim dashboard. To create one, click Logs in the sidebar and then click the + icon. You can use the ID of that repository to push logs.

In this guide, we'll walk you through the process of getting started with the Maxim Python SDK.

Let's consider the following example system:


Flask web server + LangChain + OpenAI + Weaviate: Movie Search Engine
import json
import os
 
import weaviate
from flask import Flask, jsonify, request
from langchain.chat_models.openai import ChatOpenAI
from langchain_community.vectorstores import Weaviate
 
openai_key = os.environ.get("OPENAI_API_KEY", "")
weaviate_url = os.environ.get("WEAVIATE_URL", "")
weaviate_key = os.environ.get("WEAVIATE_API_KEY", "")
 
llm = ChatOpenAI(api_key=openai_key, model="gpt-4o-mini")
client = weaviate.connect_to_weaviate_cloud(
    cluster_url=weaviate_url,
    auth_credentials=weaviate.auth.AuthApiKey(api_key=weaviate_key),
    headers={
        "X-OpenAI-Api-Key": openai_key,
    },
)
 
app = Flask(__name__)
 
 
def retrieve_docs(query: str):
    collection = client.collections.get("Awesome_moviate_movies")
    response = collection.query.near_text(query=query, limit=3)
    docs = [
        {
            "title": item.properties["title"],
            "year": item.properties["year"],
            "description": item.properties["description"],
            "genres": item.properties["genres"],
            "actors": item.properties["actors"],
            "director": item.properties["director"],
        }
        for item in response.objects
    ]
    return docs
 
 
def execute(query: str):
    context = retrieve_docs(query)
    messages = [
        (
            "system",
            f"You answer questions about movies. Use provided list of movies to refine the response.\n\n List of movies: {json.dumps(context)}\n Respond in proper markdown format",
        ),
        ("human", query),
    ]
    result = llm.invoke(messages)
    return result.content
 
 
@app.post("/api/v1/chat")
def handler():
    query = request.json["query"]
    result = execute(query)
    return jsonify({"result": result})
 
 
if __name__ == "__main__":
    try:
        app.run(port=8000)
    finally:
        # release the Weaviate connection once the server stops
        client.close()
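The `execute` function above packs the retrieved documents into the system prompt as JSON. Here is a minimal sketch of that prompt construction, separated from the LLM call so it can be inspected on its own (the helper name `build_messages` is ours, not part of the app):

```python
import json

def build_messages(query, context):
    # system message embeds the retrieved movies as a JSON list;
    # the human message carries the raw user query
    system = (
        "You answer questions about movies. Use provided list of movies "
        f"to refine the response.\n\n List of movies: {json.dumps(context)}\n "
        "Respond in proper markdown format"
    )
    return [("system", system), ("human", query)]

messages = build_messages(
    "classic sci-fi horror",
    [{"title": "Alien", "year": 1979}],
)
```

Keeping prompt assembly in a pure function like this also makes it easy to unit-test the prompt without touching the model.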

Integrating Maxim SDK

Initialize Maxim logger.

You can generate a Maxim API key from the settings page of the Maxim dashboard.

Initializing Maxim logger
from maxim import Maxim, Config
from maxim.logger import LoggerConfig
from maxim.decorators import trace, retrieval, current_retrieval
from maxim.decorators.langchain import langchain_llm_call, langchain_callback
 
maxim = Maxim(Config(api_key="maxim-api-key"))
logger = maxim.logger(LoggerConfig(id="log-repository-id"))

Decorate handler to trigger a new Trace

Decorate handler to trigger a new Trace
@app.post("/api/v1/chat")
@trace(logger=logger, name="movie-chat-v1")
def handler():
    ...
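Note that `@trace` sits below `@app.post`: decorators apply bottom-up, so the handler is wrapped by the trace first and Flask then registers the traced version. To make the mechanics concrete, here is a simplified stand-in for a trace decorator, written in plain Python. This is our illustration of the pattern, not the Maxim SDK's implementation:

```python
import functools
import time
import uuid

def trace(logger=None, name=None):
    # simplified stand-in: wraps the handler, assigns a trace id,
    # and times the call around it
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace_id = uuid.uuid4().hex
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                # a real logger would receive the trace here instead of print
                print(f"{name} [{trace_id[:8]}] {elapsed_ms:.1f} ms")
        return wrapper
    return decorator

@trace(logger=None, name="movie-chat-v1")
def handler():
    return {"result": "ok"}
```

`functools.wraps` preserves the handler's name and docstring, which matters when a framework such as Flask uses the function name for route registration.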

Decorate LLM call

Decorate LLM call
@langchain_llm_call(name="llm-call")
def execute(query: str):
    callback = langchain_callback()        
    ...
    result = llm.invoke(messages, config={"callbacks":[callback]})
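`langchain_callback()` returns a LangChain callback handler that reports the LLM call back to the active trace. Boiled down, a callback handler is an object whose hooks the LLM client invokes around generation. The class and function names below are illustrative stand-ins, not the SDK's:

```python
class RecordingCallback:
    # illustrative stand-in for a LangChain callback handler:
    # the LLM client calls these hooks before and after generation
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompts):
        self.events.append(("start", prompts))

    def on_llm_end(self, output):
        self.events.append(("end", output))

def fake_invoke(messages, callbacks):
    # stand-in for llm.invoke(messages, config={"callbacks": [...]})
    for cb in callbacks:
        cb.on_llm_start(messages)
    output = "some model answer"
    for cb in callbacks:
        cb.on_llm_end(output)
    return output

cb = RecordingCallback()
result = fake_invoke([("human", "hi")], [cb])
```

Because the callback observes the call from the outside, the LLM invocation itself needs no other changes: passing the handler in `config={"callbacks": [...]}` is the only hook required.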

Decorate Retrieval call

Decorate Retrieval call
@retrieval(name="weaviate-call")
def retrieve_docs(query: str):
    current_retrieval().input(query)
    ...
    return docs
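`current_retrieval()` returns the retrieval span opened by the enclosing `@retrieval` decorator. A plausible way such a pairing can work is a context variable that the decorator sets for the duration of the call. The sketch below is our simplification under that assumption, not the SDK source:

```python
import contextvars
import functools

_current = contextvars.ContextVar("current_retrieval", default=None)

class RetrievalSpan:
    # minimal span: records the query and the returned documents
    def __init__(self, name):
        self.name = name
        self.query = None
        self.output = None

    def input(self, query):
        self.query = query

def retrieval(name=None):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = RetrievalSpan(name)
            token = _current.set(span)       # make span visible inside fn
            try:
                span.output = fn(*args, **kwargs)
                return span.output
            finally:
                _current.reset(token)        # restore previous span, if any
        return wrapper
    return decorator

def current_retrieval():
    return _current.get()

@retrieval(name="weaviate-call")
def retrieve_docs(query):
    current_retrieval().input(query)
    return [{"title": "Alien"}]
```

Using `contextvars` rather than a module-level global keeps spans isolated per request even when the server handles requests concurrently.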

You can see the entire code here.

Log repository on Maxim dashboard
