Context sources

Overview

One of the most popular use cases for applications built on top of Large Language Models is a RAG architecture. RAG gives the LLM access to your data so that the answers it generates are grounded in that data.

The challenge, however, is that LLMs have a habit of hallucinating, which can only be controlled by having a good retriever that feeds relevant, meaningful context to the LLM from your RAG pipeline. So while testing the final generated output remains very important, evaluating the retrieved context is equally important, if not more so.

Context sources in Maxim let you expose your RAG pipeline via a simple endpoint, regardless of how complex the steps inside it are. This context source can then be linked as a variable in your prompt or workflow and selected for evaluation.
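As a rough illustration, such an endpoint is just an HTTP handler that accepts a query and returns the retrieved context. The sketch below uses only the Python standard library; the request/response field names (`query`, `context`) and the toy keyword-overlap retriever are assumptions standing in for Maxim's actual schema and for a real retrieval pipeline (vector search, reranking, etc.).

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy document store standing in for your real knowledge base (assumption).
DOCS = [
    "Maxim lets you link a context source as a variable in your prompt.",
    "Refund requests must be filed within 30 days of purchase.",
    "The API rate limit is 100 requests per minute per key.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

class ContextHandler(BaseHTTPRequestHandler):
    """POST {"query": "..."} -> {"context": "..."} (field names are assumed)."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        context = "\n".join(retrieve(body["query"]))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"context": context}).encode())

# To serve locally (port is arbitrary):
# HTTPServer(("", 8000), ContextHandler).serve_forever()
```

However your pipeline retrieves context internally, the point is that Maxim only needs this one endpoint: whatever string it returns is what gets injected as the context variable and evaluated.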
