Evaluating Workflow

HTTP Workflow

Workflow allows you to expose your AI application to Maxim using your existing API endpoint.

Create a workflow for a public endpoint

An HTTP workflow lets you bring your application's API endpoint into the Maxim framework. You will need to enter the URL of your AI application's API endpoint and, if necessary, add any headers and parameters the API request needs.

You can then configure the payload for your API. The payload can include whatever your backend needs to process the request. When triggering a test run, you have to attach a dataset to your workflow, and you can use any of the column values from the dataset you attach.

You can refer to column values by wrapping the column name in {{<column_name>}} (we refer to these as variables). Variables are resolved at test run time from the dataset you attach. You can also use variables in headers and parameters, allowing for greater flexibility in configuring your workflow.
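For instance, a payload template might look like the following. The column names `query` and `user_id` here are hypothetical; substitute whatever columns your dataset actually contains:

```json
{
  "query": "{{query}}",
  "user_id": "{{user_id}}",
  "stream": false
}
```

At test run time, each `{{...}}` variable is replaced with the corresponding column value from the row being evaluated.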

All of this is shown in the video below:

Playing with the workflow

Once you have your workflow configured, you can start sending messages to your API from the Messages panel and test your endpoint in a multi-turn, conversation-like experience. This is shown towards the end of the video above.

Mapping the output for evaluation

After playing around with your workflow to make sure it behaves as expected, you can run tests on this endpoint. Before you can do that, though, you need to tell us which part of the response you want to evaluate; in other words, map an output from the response payload.

To do this, click the Test button in the top right corner, which opens the Test run configuration panel. Here you will see a selector (a dropdown) for Output that lists all the mappable fields from the response. If you need to inspect the response payload, click the Show response button to see it in full. Optionally, you can also map the Context to Evaluate field using the Context field selector.
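As an illustration, suppose your endpoint returns a payload like the one below (the field names are hypothetical). You could then map Output to `response` and, optionally, Context to Evaluate to `retrieved_chunks`:

```json
{
  "response": "Harry was delivered to the Dursleys by Hagrid.",
  "retrieved_chunks": ["CHAPTER ONE excerpt about Privet Drive"],
  "latency_ms": 812
}
```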

Here's a video showing how to map the output for evaluation:

Note:

  • The Test button is disabled until you have sent at least one message to your endpoint. This is because we need to know what your response payload looks like before we can map the output for evaluation.

  • If you map the output without triggering a test run (that is, without clicking the Trigger test run button in the configuration sheet), you will need to save the workflow explicitly for the mapping to take effect. Set the mapping in the sheet, then click outside it to close it so that the Save workflow button behind it becomes visible; pressing it saves the workflow with the mapping.

Create a workflow for Local API

api.py
from flask import Flask, request
 
# Flask constructor takes the name of the current module
app = Flask(__name__)
 
@app.route('/rag', methods=['POST'])
def rag_output():
    body = request.get_json()
    print(body)
    # runPrompt runs the RAG prompt against the model (defined elsewhere)
    output = runPrompt(body['query'])
    response = {'response': output}
    return response
 
# main driver function
if __name__ == '__main__':
 
    # run() method of Flask class runs the application
    # on the local development server.
    app.run()

In this example, we have built a demo RAG application using Google's PaLM model based on "Harry Potter and the Sorcerer's Stone - CHAPTER ONE." The application is served locally through a Flask endpoint at http://127.0.0.1:5000/rag.
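Before exposing the endpoint, it is worth sanity-checking the request/response contract locally. The sketch below simulates the same contract with a standard-library stub server instead of the real Flask app and model (the `RagStub` handler and its canned answer are illustrative only), then POSTs a query the way Maxim would:

```python
# Minimal sketch of exercising a /rag-style endpoint locally.
# RagStub stands in for the real Flask app: it accepts the same
# {'query': ...} body and returns the same {'response': ...} shape.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RagStub(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        # Echo a canned answer in the {'response': ...} shape used by api.py
        payload = json.dumps({"response": "stub answer for: " + body["query"]})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload.encode())

    def log_message(self, *args):
        # Silence per-request logging to keep the output clean
        pass

# Bind to port 0 so the OS picks a free port
server = HTTPServer(("127.0.0.1", 0), RagStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/rag",
    data=json.dumps({"query": "Who delivers Harry to the Dursleys?"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())
server.shutdown()
print(answer["response"])
```

Once the real Flask app responds with the same shape, the same POST against http://127.0.0.1:5000/rag should work unchanged.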

To test your endpoint outputs on Maxim AI, you need a public API. You can achieve this by using Ngrok, which makes the endpoint public through tunnelling.

Setup Ngrok

You can follow these steps or refer to the Ngrok documentation

To install Ngrok on a Mac, use the following command:

Shell command to install ngrok on MacOS
brew install ngrok/ngrok/ngrok

Next, connect your Ngrok agent to your Ngrok account. If you haven't already, sign up for an Ngrok account and copy your authtoken from your Ngrok dashboard.

Run the following command in your terminal to install the authtoken and connect the Ngrok agent to your account:

Authenticating ngrok
ngrok config add-authtoken <TOKEN>

Start ngrok by running the following command.

Starting ngrok on :5000
ngrok http http://localhost:5000

We assume you have a working web application listening at http://localhost:5000. If your app is listening on a different URL, adjust the command accordingly.

You will see something similar to the following console UI in your terminal.

Bring endpoint to Maxim

You can now take this forwarding URL and use it on our platform by following the steps mentioned above, since your localhost endpoint is now public.
