Using context in prompts/workflows
RAG (Retrieval-Augmented Generation) context can be used in Maxim in both prompts and workflows. When a user input is sent to the model via a prompt or workflow, the input is first processed by the context source API, which retrieves relevant context for your application. For prompts, this retrieved context is then passed to the model along with the user input, so the model's response takes the context into account.
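The flow above can be sketched as a small script. Here the context source is stubbed with an in-memory lookup (in practice it would be the HTTP endpoint you configure as a context source), and `build_model_input` shows how the retrieved chunks and the user input are combined before the model call; all names are illustrative, not Maxim's API:

```python
def retrieve_context(query: str) -> list[str]:
    # Stand-in for a context source API call: in a real pipeline this
    # would query your retrieval endpoint (vector store, search, etc.).
    knowledge_base = {
        "refund": ["Refunds are processed within 5 business days."],
        "shipping": ["Standard shipping takes 3-7 business days."],
    }
    return [chunk
            for key, chunks in knowledge_base.items()
            if key in query.lower()
            for chunk in chunks]

def build_model_input(query: str) -> str:
    # Combine retrieved context with the user input before calling the model.
    context = "\n".join(retrieve_context(query))
    return f"Context:\n{context}\n\nUser input: {query}"
```

The string returned by `build_model_input` is what would be sent to the model, so its answer is grounded in the retrieved context rather than the question alone.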
Using context in prompts
To use context in prompts, you first need a context source in that workspace that provides retrieved context via an API endpoint. If you haven't already configured one, follow the steps mentioned here.
Once you have your context source, let's connect it to your prompt.
First, reference your context as a variable in your prompt using {{variable-name}}.
E.g., you could say: Please reference the {{context}} about this customer before sending the email.
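Conceptually, the {{variable-name}} placeholder is substituted with the value resolved for that variable at run time. A minimal sketch of that substitution, assuming a simple regex-based renderer (not Maxim's actual templating engine):

```python
import re

def render(template: str, variables: dict[str, str]) -> str:
    # Replace each {{name}} placeholder with its resolved value;
    # unknown placeholders are left untouched.
    return re.sub(
        r"\{\{\s*([\w-]+)\s*\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

prompt = "Please reference the {{context}} about this customer before sending the email"
rendered = render(prompt, {"context": "Customer prefers email contact after 5pm."})
```

When the variable is linked to a context source, the value substituted for `{{context}}` is the context retrieved for the current input rather than a static string.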
Next, in the variables section on the right side of the playground, link the context source as the value of this variable by clicking on the icon.
- On clicking the icon, a dropdown appears with all context sources in your workspace to choose from.
Now, when you run this prompt in the playground or via a test run, you will see the retrieved context that was fetched.
You can select evaluators to evaluate the retrieved context. Many such evaluators, like context recall and context precision, are already available in our evaluator store.
- We also have evaluators that evaluate the output taking this context into consideration. E.g., Faithfulness checks that the LLM output stays consistent with the context provided by retrieval when giving an answer.
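To make the idea behind such metrics concrete, here is a deliberately crude string-match version of context recall: the fraction of expected facts that appear in the retrieved context. Production evaluators (including Maxim's) typically use LLM or embedding judges rather than substring checks; this is only a sketch of what the metric measures:

```python
def context_recall(retrieved: list[str], expected_facts: list[str]) -> float:
    # Fraction of expected facts found in the retrieved context.
    # Substring matching is a rough proxy for semantic matching.
    text = " ".join(retrieved).lower()
    if not expected_facts:
        return 0.0
    hits = sum(1 for fact in expected_facts if fact.lower() in text)
    return hits / len(expected_facts)
```

A low score here signals that retrieval missed facts the answer needed, which is exactly the situation the retrieved-context column in test run reports helps you debug.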
Using context in workflows
Similar to prompts, your API endpoints can return retrieved context along with the response. During test runs, that context is used as the retrieved context for evaluation.
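As a sketch, a workflow endpoint might return both fields in its JSON body, and the test run would separate the model output from the context before scoring. The field names `response` and `retrieved_context` here are illustrative assumptions, not a documented schema:

```python
import json

def parse_workflow_response(body: str) -> tuple[str, list[str]]:
    # Split a workflow API response into the model output and the
    # retrieved context; field names are illustrative only.
    payload = json.loads(body)
    return payload["response"], payload.get("retrieved_context", [])
```

Returning the context explicitly lets context-quality evaluators run on workflow outputs just as they do on prompts.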
View retrieved context in test runs
Within the test run report for a prompt or workflow, the retrieved context appears as a column in the table. This helps you validate whether the right context was retrieved for each entry. If context-relevance metric scores are low, you can inspect the exact retrieved context to debug and adjust your RAG pipeline as needed.